
How Stuff Works: Spring Component Scanning

Spring has an interesting feature: it can scan for the components defined in an application and load them. The configuration is thus tied to the application, i.e. the code, using annotations. Spring JavaConfig also provides the capability to do convention over configuration. There are a lot of documents and references explaining how to do Spring configuration; I was looking into the under-the-hood flow of how the stuff works...

A minimal entry in the application context XML

<context:component-scan base-package="packageName"/>

will scan all the component classes in the package. The component classes on the classpath are detected and bean definitions are auto-registered for them.

As per the Schema URI and Schema XSD, the context namespace will be like this - Reference


There are stereotype annotations, which are markers for any class that fulfills a role within an application. This is well showcased in Spring MVC. More about the annotations

For cleaner configuration, we can have multiple context XMLs for maintaining resources: the application can have one for DAOs, one for services and so on. The layers can then be scanned separately by the context loader. So MVC applications will have separate XMLs for @Repository (data access tier), @Service (service tier) and @Controller (web tier) components.
For the example here, a simple Java app, I used them in a single XML, since this is not an MVC app.

A UserDAO Interface

package com.sample.data;

import java.util.List;

public interface UserDAO {

    List<String> getUsers();
}


Its Implementation

package com.sample.data;

import java.util.ArrayList;
import java.util.List;

import org.springframework.stereotype.Repository;

@Repository("userDAO")
public class UserDAOImpl implements UserDAO {

    @Override
    public List<String> getUsers() {
        List<String> l = new ArrayList<String>();
        l.add("Roger Moore");
        l.add("Pierce Brosnan");
        return l;
    }
}


Service Layer

package com.sample.service;

import java.util.List;

public interface UserService {

    List<String> getUsers();
}

And its implementation

package com.sample.service;

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.sample.data.UserDAO;


@Service("userService")
public class UserServiceImpl implements UserService {

    @Autowired
    private UserDAO userDAO;

    @Override
    public List<String> getUsers() {
        return userDAO.getUsers();
    }
}


The client

package com.sample.client;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.sample.service.UserService;

public class SpringClient {

    private final UserService service;
    private final ClassPathXmlApplicationContext appContext;

    public SpringClient() {
        appContext = new ClassPathXmlApplicationContext("resource/applicationContext.xml");
        service = (UserService) appContext.getBean("userService");
    }

    public void showUsers() {
        for (String s : service.getUsers()) {
            System.out.println(s);
        }
    }

    public void close() {
        // close the context only after we are done with the beans it manages
        appContext.close();
    }

    public static void main(String[] args) {
        SpringClient spc = new SpringClient();
        spc.showUsers();
        spc.close();
    }
}




applicationContext.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:tx="http://www.springframework.org/schema/tx"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-2.5.xsd
http://www.springframework.org/schema/aop
http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
http://www.springframework.org/schema/tx
http://www.springframework.org/schema/tx/spring-tx-2.5.xsd"
default-autowire="byName">

<!-- Enable autowiring via @Autowired annotations -->
<context:annotation-config/>


<context:component-scan base-package="com.sample.data">
<context:include-filter type="annotation"
expression="org.springframework.stereotype.Repository"/>
</context:component-scan>

<context:component-scan base-package="com.sample.service">
<context:include-filter type="annotation"
expression="org.springframework.stereotype.Service"/>
</context:component-scan>

</beans>



After bootstrapping, the set of application components and services that need to be created is identified. AbstractBeanDefinitionReader reads the resource definitions, and DefaultListableBeanFactory is used as the default bean factory for the resulting bean definition objects. XmlBeanDefinitionReader.loadBeanDefinitions() loads bean definitions from the specified XML file, in which the BeanDefinitionParser identifies the context namespaces and parses the applicationContext XML. The resources are located by an implementation of ResourcePatternResolver, i.e. PathMatchingResourcePatternResolver, in which location patterns are matched Ant-style. Internally it uses the ClassLoader.getResources(String name) method, which returns an Enumeration of URLs representing classpath resources. Then the ComponentScanBeanDefinitionParser parses the context definition nodes. If annotation configuration is enabled, autowiring of components takes place, as these "candidate patterns" can be set as autowired. A default AutowiredAnnotationBeanPostProcessor is registered by the "context:annotation-config" and "context:component-scan" XML tags. If filters are added, it parses the type filters too. In the example I have used "annotation" as the filter type, so an AnnotationTypeFilter is used to match the Repository annotation, which is the DAO's stereotype.
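To make the first part of that flow concrete, here is a rough manual equivalent (a sketch only; the container's actual call sequence is more involved):

import org.springframework.beans.factory.support.DefaultListableBeanFactory;
import org.springframework.beans.factory.xml.XmlBeanDefinitionReader;
import org.springframework.core.io.ClassPathResource;

public class ManualBootstrap {
    public static void main(String[] args) {
        // the registry that will hold the parsed bean definitions
        DefaultListableBeanFactory factory = new DefaultListableBeanFactory();
        // the reader that turns the XML resource into BeanDefinition objects
        XmlBeanDefinitionReader reader = new XmlBeanDefinitionReader(factory);
        reader.loadBeanDefinitions(new ClassPathResource("resource/applicationContext.xml"));
        System.out.println("Definitions loaded: " + factory.getBeanDefinitionCount());
    }
}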

And we know that Spring classes are designed to be extended. I was going through the API docs and found that we can add and exclude filters programmatically too.

So I added a showComponents method to the client code, using a selected base package:

// needs: org.springframework.beans.factory.config.BeanDefinition,
// org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider,
// org.springframework.core.type.filter.AnnotationTypeFilter,
// org.springframework.stereotype.Repository, java.util.Set
public void showComponents() {

    ClassPathScanningCandidateComponentProvider provider =
            new ClassPathScanningCandidateComponentProvider(true);
    String basePackage = "com/sample/data";

    provider.addExcludeFilter(new AnnotationTypeFilter(Repository.class, true));
    Set<BeanDefinition> filteredComponents = provider.findCandidateComponents(basePackage);
    System.out.println("No of components :" + filteredComponents.size());
    for (BeanDefinition component : filteredComponents) {
        System.out.println("Component:" + component.getBeanClassName());
    }

    provider.resetFilters(true);
    provider.addIncludeFilter(new AnnotationTypeFilter(Repository.class, true));
    filteredComponents = provider.findCandidateComponents(basePackage);
    System.out.println("No of components :" + filteredComponents.size());
    for (BeanDefinition component : filteredComponents) {
        System.out.println("Component:" + component.getBeanClassName());
    }
}

So the output will be

No of components :0
No of components :1
Component: com.sample.data.UserDAOImpl



If code and configuration are static, like plain beans, the scanner annotations are useful. For accessing resources, JNDI, JDBC or anything similarly dynamic, it is better to go for XML, as it is easy to modify without a code change. Scanning is widely used for request mappings and controllers in Spring MVC. The XML configuration overrides the annotation config. When there are many classes, scanning becomes expensive, so we have to filter them based on the type required.


More Reading

Classpath scanning and managed components

Model Driven Development in Adobe RIA and Eclipse RCP new thoughts

I had a chance to attend Adobe Dev Summit 09 Hyderabad. Among the plethora of Adobe "hoo-hah" products for the new age developers, a very good demo caught my attention: model driven development in Adobe. Adobe is investing a lot in rooting themselves in the enterprise arena with their Adobe LiveCycle stack.

The product has a LiveCycle Workbench ES to integrate with Flex studio, which is an Eclipse based development environment. RIA has found its way into the enterprise as Web 2.0 moved from hype to a standard development use case. But for me, the attention grabber is the powerful Eclipse development environment. More than an IDE, it is a wonderful platform for developing powerful applications, with its handy plug-in architecture and widgets. From simple RSS readers to the Rational platform, Eclipse has its magic hand.

RIA for the desktop is AIR. If one develops an app in AIR, we won't be able to use native widgets, as the runtime provides its own, but we can access native windows. Also, from the AIR 2 API release onwards it will be possible to run native applications from AIR.

What about RCP? Its SWT runtime component uses native widgets. We may think most RCP apps will look like the dry Eclipse IDE look and feel... Nope!



See the look of Lotus Domino with the power of custom skinning. Lotus Software was purchased by IBM, which has put a lot of resources into continuing the development of different Lotus components (such as Domino and Notes). The UX feel of applications based on Eclipse 4 would be amazing. The upcoming e4 architecture of Eclipse is a promising space for next generation Eclipse based applications. User experience guidelines for IBM Lotus rich client applications and plug-ins are here


There has always been a rivalry between desktop and web applications. Both genres of application fight to conquer the business space across the computing world. Web apps want to be on the desktop and desktop apps want to be on the web. It's a fame race. RCP has got wide enterprise usage. The choice of RIA or RCP depends on the use case. GWT and Flex on the web are good at data visualization, but 2D and 3D will require the power of native processors; RCP can make use of Java 2D. There are a lot of solutions in the market. More choice, more pain! So I may write about them later.

In the demo (Adobe Max)




a rich data model can be modelled and exposed as a CRUD application with a rich UI.
A similar presentation was done by Sujith Reddy, who shared his slides.

In Eclipse, model driven development is accomplished using EMF - the Eclipse Modeling Framework. It provides an API to access the models; accessing models by reflection and dynamic creation are both possible. As it goes with RCP, it is like a UI for domain objects. The MVC architecture is done through EMF, and it can make use of Hibernate, JPA or EclipseLink.

As seen in the demo, Flex became the UI of the model.

But there are developments going on in the Eclipse RCP framework too. As the Flex SDK is open source, the Eclipse guys wrote programs to compile Java for Flash!

E4 ?

e4 is the incubator for Eclipse 4.0, to be released in 2010, built on current Eclipse and OSGi technology as a solid foundation.

Use of web styling technology (CSS) allows the presentation of user interface elements to be infinitely tweaked and reconfigured without any modification of application code.

This is bringing Eclipse runtime technology into the JavaScript world, and enabling software written in JavaScript to be executed in the Eclipse runtime.

E4 comes with a framework for defining the design and structure of Standard Widget Toolkit (SWT) applications declaratively. This eliminates repetitive boilerplate SWT code, thus reducing development cost, improving UI consistency, and enabling customized application rendering. Plug-ins are coded in Java. A typical plug-in consists of Java code in a JAR library, some read-only files, and other resources such as images, web templates, message catalogs, native code libraries, etc.

E4/Eclipse Application Services
- Eclipse APIs are refactored into services offered as separate, independent APIs, so that clients can make use of them without having to buy into all of them. Structuring them as individual services also makes it easier to use them from other languages/environments such as JavaScript. In service programming models the consumers receive dependencies via dependency injection. This theoretically allows application code to completely eliminate its dependency on a particular container technology, thus enabling greater reuse.

Modeled UI - The E4 user interface is model based; everything that appears in the presentation is backed by a representation in the UI Model.

To that end, e4 is investigating bringing both the benefits of Eclipse to the JavaScript world (modularity, extensibility, and tooling), and JavaScript components into the Eclipse desktop environment. OSGi modularity makes integration of JavaScript bundles easier.

Use of CSS and declarative styling - a pluggable styling engine is used to customize the fonts, colors, and other aspects of widget presentation.

SWT Browser Edition - zero install widgets, like RAP, in which a JavaScript library (qooxdoo) runs on the client, rendering widgets that are manipulated from Java running on the server; or Java is cross-compiled to Flex (ActionScript), Dojo (JavaScript) and Silverlight (.NET), which is a "GWT-like" (Google Web Toolkit) approach.

This shows E4 SWT Java for Flash




More demos

Workbench - perspectives, toolbars, menus, parts. The e4 workbench greatly increases flexibility for application designers by providing a formal model of the elements that comprise an instance of the workbench.

XML UI for SWT (XWT), is a framework for writing SWT widgets declaratively in XML. In XWT, the complete structure of an application or widget hierarchy is expressed declaratively, along with bindings of the widgets to some underlying application model or to Java-based call-backs implementing the widget behavior.

More RCP apps

Some IdentityHashMap gotchas

Most Java guys are familiar with IdentityHashMap, which is mainly used when there is a need to maintain keys based on reference equality. Lots of people have written about collections, hashing and so on... But I thought, what's so interesting about this map? Even the algorithm behind it is about 45 years old!

Every object has an "identity", i.e. an internal "address" that is unique for the lifetime of that object. So the map will contain unique object references as keys. Consider HashMap, HashSet, TreeMap, TreeSet, PriorityQueue etc.: they are "equality-dependent". One can override the equals() method of the stored object to control how keys compare in these collections. But IdentityHashMap is "equality-independent": even if the objects residing in it change their comparability or equality, the map will stride along fine. But it violates the symmetry property of the equals contract. Symmetry means that for any two references a and b, a.equals(b) must return true if and only if b.equals(a) returns true. That sets the contract for IdentityHashMap at odds with the contract for Map, the interface it implements, which specifies that equality should be used for key comparison.

The spec says so, in bold: "This class is not a general-purpose Map implementation! While this class implements the Map interface, it intentionally violates Map's general contract, which mandates the use of the equals() method when comparing objects. This class is designed for use only in the rare cases wherein reference-equality semantics are required."
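A small demonstration of the difference (plain JDK; nothing else assumed):

import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityDemo {
    public static void main(String[] args) {
        // two distinct String objects with equal contents
        String k1 = new String("bond");
        String k2 = new String("bond");

        Map<String, Integer> byEquality = new HashMap<String, Integer>();
        byEquality.put(k1, 1);
        byEquality.put(k2, 2); // equals()-based: overwrites the first entry
        System.out.println(byEquality.size()); // 1

        Map<String, Integer> byIdentity = new IdentityHashMap<String, Integer>();
        byIdentity.put(k1, 1);
        byIdentity.put(k2, 2); // ==-based: k1 and k2 are different references
        System.out.println(byIdentity.size()); // 2
    }
}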

The IdentityHashMap data structure is based on open addressing, a method in which, when a collision occurs, alternative locations are tried until an empty cell is found. All the items are stored in an array. This map uses linear probing, wherein the next position is probed sequentially:

i.e. newLocation = (startingValue + stepSize) % arraySize (here the stepSize is 1)

This is basically storing all keys in a single array. The occupied addresses form clusters (primary clusters). These clusters are distributed, and the keys in them are localized to some area. A collision occurs when the hash function returns an index whose slot is already occupied, consuming time for insertion. Basically this is a form of file/memory management. We can say the map behaves like plain array addressing: it puts the keys and values alternating in a single array (supposedly good for large data sets) and doesn't need to create or reuse entry objects. Even as the map grows large, insertions are not much affected, while other map implementations (which use chaining) get slower. However, lookup matters even more than insertion, as we look items up much more often than we insert them. Probing is somewhat faster than following a linked list when the reference to the value can be placed directly in the array. For other hash-based collections this is not the case, because they also store the hash code; that is useful there because a get operation must check whether it has found the right key, and full equality is an expensive operation. So this map takes less memory and is faster.
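A tiny sketch of that layout (not the actual IdentityHashMap source; it assumes the table always has a free slot, since the real implementation resizes before filling up):

// assumes the alternating key/value layout described above
static void put(Object[] tab, Object key, Object value) {
    int len = tab.length; // always even: key at i, value at i + 1
    int i = (System.identityHashCode(key) & 0x7fffffff) % (len / 2) * 2;
    while (tab[i] != null && tab[i] != key) { // reference comparison, never equals()
        i = (i + 2) % len; // linear probe: try the next key slot
    }
    tab[i] = key;
    tab[i + 1] = value;
}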

I read that linear probing was first analyzed by Knuth in a 1963 memorandum (at the age of 24!), now considered to be the birth of the area of analysis of algorithms. Knuth's analysis, as well as most of the work that has since gone into understanding the properties of linear probing, is based on the assumption that the hash function is a truly random function, mapping all keys independently.

The keys are stored in an array tab. The hash function h maps keys to indices. When a key x is inserted, index = h(x). If tab[index] is occupied, we scan sequentially until x is found; if an empty slot is found first, x is inserted there. The next insertion location will be the nearest empty slot from h(x). In IdentityHashMap, h(x) is basically System.identityHashCode (as per the source, some more calculations are done). As this algorithm uses modulo arithmetic, it wraps around until the table fills up; so in the implementation the table size is doubled as needed, up to a fixed maximum capacity.

According to Knuth, it is possible to estimate the average number of probes for a successful search, where l is the load factor, as (1/2)(1 + 1/(1 - l)).

The put(k,v) method looks for the position for insertion. As the index is found via calculations involving System.identityHashCode (which returns a 32-bit integer value), identity hash codes are well distributed, almost like random numbers. Using System.identityHashCode means that for objects that have overridden hashCode and equals, even two objects that are equal() will be put in different hash buckets, i.e. the map treats them as two distinct objects.

Some popular anomalies:


IdentityHashMap<Long, Order> orderMap1 = new IdentityHashMap<Long, Order>();
orderMap1.put(1L, null);
orderMap1.put(1L, null);

System.out.println(orderMap1.keySet().size());

Prints 1, as 1L is autoboxed in Java 1.5, and boxed values in the range -128 to 127 point to the same cached object, so there is a single reference. When 1000L is used for the keys, the size will be 2, as autoboxing creates distinct objects. Long has a private class for caching these values, for performance, since they are commonly used. The same applies to Integer, Character etc.


private static class LongCache {
    private LongCache() {}

    static final Long cache[] = new Long[-(-128) + 127 + 1];

    static {
        for (int i = 0; i < cache.length; i++)
            cache[i] = new Long(i - 128);
    }
}
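To see the cache boundary in action, a quick sketch (assumes the default JDK boxing cache; needs import java.util.IdentityHashMap):

IdentityHashMap<Long, String> m = new IdentityHashMap<Long, String>();
m.put(1L, "a");
m.put(1L, "b"); // both 1L box to the same cached Long: one key
System.out.println(m.size()); // 1

m.clear();
m.put(1000L, "a");
m.put(1000L, "b"); // each 1000L boxes to a fresh Long: two keys
System.out.println(m.size()); // 2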

Take another example

IdentityHashMap<String, OrderI> orderMap6 = new IdentityHashMap<String, OrderI>();
OrderI oI2 = new OrderI(1);
OrderI oI3 = new OrderI(2);

orderMap6.put("order1", oI3);
orderMap6.put("order1", oI2);



Here the key set size will be one. As string literals are interned, both "order1" references point to the same object in the string pool, so the second insertion overrides the previous value.

Where can this map be used? Some say for caches, as they perform very well (you only need to store references, though). For cyclic graphs, we must use identity rather than equality to check whether nodes are the same: calculating equality between two graph node objects requires calculating the equality of their fields, which in turn means computing all their successors, and we are back to the original problem. An IdentityHashMap, by contrast, will report a node as present only if that same node has previously been put into the map, thus saving the size computations and node traversals.

I just wanted to know about this map, that's it...

References

http://www.ece.uwaterloo.ca/~ece250/Lectures/Slides/6.07.LinearProbing.ppt
http://www.owasp.org/index.php/Java_gotchas#Immutable_Objects_.2F_Wrapper_Class_Caching
http://www.siam.org/proceedings/soda/2009/SODA09_072_thorupm.pdf
http://www.cs.unm.edu/~moret/graz01.pdf
http://www.it-c.dk/people/pagh/papers/linear-jour.pdf
http://reference.kfupm.edu.sa/content/l/i/linear_probing_and_graphs_122432.pdf

ETags - Roles in Web Application to Cloud Computing

A web server returns a value in the response header known as an ETag (entity tag), which helps the client know if there is any change in the content at the URL it requested. When a page is loaded, the browser caches it and remembers the ETag of that page. On the next request the browser sends the ETag value in the "If-None-Match" request header. The server reads this header value and compares it with the ETag of the page. If the values are the same, i.e. the content has not changed, a status code 304 is returned, i.e. 304: Not Modified. This HTTP metadata can be used very well for avoiding page re-downloads, thereby optimizing the bandwidth used. A combination of a checksum (MD5) of the data as the ETag value and a correct modification timestamp could give a quality result in predicting the re-download. An analysis of the effectiveness of choosing the value of the ETag is described in this paper.
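A quick way to see this handshake from the client side (a plain-JDK sketch; the URL is hypothetical):

import java.net.HttpURLConnection;
import java.net.URL;

public class ConditionalGet {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/page"); // hypothetical URL

        // first request: remember the ETag the server hands back
        HttpURLConnection first = (HttpURLConnection) url.openConnection();
        String etag = first.getHeaderField("ETag");
        first.disconnect();

        // second request: echo it back as If-None-Match
        HttpURLConnection second = (HttpURLConnection) url.openConnection();
        if (etag != null) {
            second.setRequestProperty("If-None-Match", etag);
        }
        System.out.println(second.getResponseCode()); // 304 if the content is unchanged
        second.disconnect();
    }
}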

According to http://www.mnot.net/cache_docs/

A resource is eligible for caching if:

  • There is caching info in HTTP response headers
  • Non-secure response (HTTPS won't be cached)
  • ETag or LastModified header is present
  • Fresh cache representation

Entity tags can be strong or weak validators. A strong validator guarantees uniqueness of the representation: if we use MD5 or SHA1, the entity value changes when even one bit of the data changes, while a weak value changes only when the meaning of an entity (which can be a set of semantically related representations) changes.

More info on conditional requests explaining strong and weak ETags in here

In Spring MVC, support for ETags is provided by the servlet filter ShallowEtagHeaderFilter. If you look at the source here:

String responseETag = generateETagHeaderValue(body);
// ...

protected String generateETagHeaderValue(byte[] bytes) {
    StringBuilder builder = new StringBuilder("\"0");
    Md5HashUtils.appendHashString(bytes, builder);
    builder.append('"');
    return builder.toString();
}


The default implementation generates an MD5 hash for the JSP body it generated. So whenever the same page is requested, the filter checks If-None-Match and sends a 304 back:


String requestETag = request.getHeader(HEADER_IF_NONE_MATCH);
if (responseETag.equals(requestETag)) {
    if (logger.isTraceEnabled()) {
        logger.trace("ETag [" + responseETag + "] equal to If-None-Match, sending 304");
    }
    response.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
}



This reduces processing and bandwidth usage. Since it is a plain servlet filter, it can be used in combination with any web framework. An MD5 hash ensures that the actual ETag is only 32 characters long, while being highly unlikely to collide. A deeper ETag implementation, penetrating down to the model layer for uniqueness, is also possible: it could be related to the revisions of row data, and matching those for higher predictability of fewer data downloads would be an effective solution.

As per the JSR 286 portlet specification, a portlet should set the ETag property (validation token) and an expiration time when rendering. New render/resource requests will only be made after the expiration time is reached, and the new request will be sent the ETag. The portlet should examine it and determine if the cache is still good; if so, it sets a new expiration time and does not render. This part of the specification is implemented in Spring MVC (see JIRA).

A hypothetical model for REST responses using deeper ETags could be effective when an API is exposed or two applications are integrated. I have seen such an implementation using Python here

Coming to cloud computing: when Amazon S3 receives a PUT request with the Content-MD5 header, S3 computes the MD5 of the object received and returns a 400 error if it doesn't match the MD5 sent in the header. Amazon and Azure both use the Content-MD5 header this way.

According to the article here, if in S3 for some reason the entity was updated with the exact same bits it previously had, the ETag will not have changed; but then, that's probably OK anyway.

According to S3 REST API,

Amazon S3 returns the first ten megabytes of the file, the Etag of the file, and the total size of the file (20232760 bytes) in the Content-Length field.

To ensure the file did not change since the previous portion was downloaded, specify the if-match request header. Although the if-match request header is not required, it is recommended for content that is likely to change.


The ETag directive in the HTTP specification gives developers a way to implement caching, which could be very effective at the transport level for REST services as well as web applications. The trade-off is that there may be security implications to having data reside at the transport level.

But in the case of static files with a far-future "Expires" value, served from a cluster, ETags will not be effective, because the checksum is unique per server for the distributed files and gets transported to the client on every GET request. By removing the ETag header you disable the ability of caches and browsers to validate files, so they are forced to rely on your Cache-Control and Expires headers; it also reduces the header size by dropping the checksum value.

A peek into metaprogramming

Metaprogramming is about programs acting on other programs: modifying programs on the fly. Consider a compiler: a program written to act on, parse and execute written code, analyzed on the grammatical domain of its own lexical structure. It interacts with and composes programs from small code components, like importing required classes, #define macros, message passing etc. When we discuss compilers, they are translating the source to its machine representation; metaprogramming is about source-to-source translation, machine independent. In C++ there are templates, a data object standing for a set of programs, which generate the code that is executed.

Compilers use the program analysis method of a type system, a check for type correctness: an oracle, that is, some method for checking whether the system under test has behaved correctly on a particular execution, like the assertions embedded in code. Many programs read an input sequence and produce an output sequence, maintaining a logical correspondence between the input and output structures. When types come into the picture in a dynamic way, the system needs to stay intact; this is handled by the compiler itself as it asserts the type.

Usually a program has a structure: a syntax tree, like an AST. Metaprograms can manipulate these representations. The abstract syntax can then be used as an intermediate language, such that multiple languages can be expressed in it and metaprograms can be reused for several source languages. Functional languages can have polymorphic higher-order functions that take other functions as arguments and return functions, treating functions as values. Java uses AOP, which introduces pointcuts into a program that can then be modified at runtime (using bytecode manipulation etc.).

Take JavaScript: we can attach additional properties to an object ("expando").


var testObject = {};
testObject.variable1 = 'string';
testObject.variable2 = 3;


The type system is dynamic.

It has functions as first class objects.


testObject.functionA = function(){};

Functions can return other functions, as well as be passed in as parameters.


testObject.functionC = function(functionA){ return function(){}; };


A closure occurs when a function is defined within another function, and the inner function refers to local variables of the outer function. With currying, a function is modified with each call so that the arguments passed into it become part of the function, which is helpful for partially evaluating functions. It is implemented using closures.

For the simple stupid code,


function show(){
    alert(testAB('A','B')); // AB
    var testB = testAB('A'); // partially applied: 'A' is remembered
    alert(testB); // shows the function source
    alert(testB('B')); // AB
}

function testAB(a,b){
    if (arguments.length < 1) {
        return testAB; // nothing to bind yet
    } else if (arguments.length < 2) {
        return function(c) { return a + c; }; // closure captures 'a'
    } else {
        return a + b;
    }
}

In Groovy,


def testAB = { a, b -> a + b }
assert testAB('A', 'B') == 'AB' // or testAB.call('A','B')

def testB = testAB.curry('A')
assert testB('B') == 'AB'


In Groovy, each object has its own metaclass, which defines the behavior of any given Groovy or Java class. Alter the metaclass (getMetaClass()) and you change the object's behavior at runtime. A special MetaClass called ExpandoMetaClass allows dynamically adding methods or properties.


String.metaClass.echo = { ->
    return "This is an echo"
}

println "Test".echo() // will print: This is an echo



String is final, yet methods can be added!

The behavior of the program changes with concise code. Instead of test driven development, the methodology of behavior-driven development follows the same test-first code-writing; in this case tests are considered specs, or "expectations" about how code will behave. It is checking what code should do, rather than what it has done. To understand the concepts and practical uses, I need to explore more.

Groovy and some Griffon experiments

Groovy is dynamic, with a superset of Java syntax to have fun with and be productive. It can be compiled to bytecode in .class files or interpreted at runtime as a script. It adds closures, dynamic (duck) typing, auto imports, no semicolons, no return... like a Ruby-fied Java. Then came railing Grails. Griffon, for the desktop, is very similar to developing web applications with Grails: basically MVC + "convention over configuration". In Griffon, the model is a POGO, the view is a SwingBuilder script, and the controller (injected into both) manages the model and view together. We know that applet coding and management is cumbersome, e.g. an applet form populated by an OrderDO etc.; the coupling of model logic with the applet controllers is mostly seen in web applications. So I decided to have some fun with this cute creature, Griffon.

A quick workaround to display a bindable XML in a text area. I tried with tables, but hit some issues with events, so I will try later. Developing is easy, with less code. Scripting makes application layers like views and controllers flexible (agile). Groovy can act as a super glue for different layers: it can combine XML parsing, widgets, networking... Each widget node written in the view classes is syntactically a method call. All of these method calls are dispatched dynamically; most don't actually exist in bytecode. Child closures create hierarchical relations (closures are first class objects). This hierarchy of components would normally be created through a series of repetitive instantiations, setters, and finally attaching each child to its respective parent. Child widgets are added to parent containers, like a scroll pane having a text area. An event listener method is equivalent to adding a listener with that method: actionPerformed : { ... } in Groovy.

For the same thing, Java does it as:

button.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent evt) { /* ... */ }
});

Samples



source
Download source
Follow the quickstart guide and run the app from griffon-test directory

Groovy rocks!! says http://onestepback.org/articles/groovy/index.html

More here
http://groovy.codehaus.org/Griffon+Quick+Start
http://groovy.codehaus.org/Swing+Builder
http://griffon.codehaus.org/

JiBX - Part 1

When we consider XML as a data model while programming, focus is mostly given to strings/tags/elements when parsing data. ETL operations are done on this ubiquitous document interchange model when communication happens between applications. To make things simple on the internet, which is fond of documents, XML became the foundation of services on the web. Objects, which are accessed from object graphs in memory, have features such as inheritance/polymorphism and attributes/object-based relationships, while XML has none of these features: it is merely a grammatical (hierarchical) representation of data with all its branches attached to itself. But both are similar in the sense of representing real world data. Therefore each can exist as a representation of the other, which eases programming, and they are effective in defining business use cases. But there is an impedance mismatch between objects and XML: an application written in Java will have its data types defined within its scope, while XML Schema, which defines the data, is richer than Java, and complex objects are difficult to serialize. Have a look at this paper, which explains the X/O impedance mismatch.

We can generate Java classes from XML or vice versa. In the first case, it is a "schema-centric" approach: we define an XML schema, then one or more XML documents, then generate Java classes based on them. In this case you need a stable schema which can be used to validate data - essential for a "reliable" web service. But application code is forced to use an interface that reflects the XML structure, which makes it tightly coupled to the defined contracts. Any change in the schema requires regenerating the object model and changing the application code to match.

If we instead map the classes using bindings, it is a "Java technology-centric" approach. This can be adopted when you don't want the object model tied to the schema, when there is a need to support different versions of the schema with the same object model, or when a common data exchange format for existing objects is needed. Binding definitions are themselves XML documents, with a schema defined for them.

JiBX is fundamentally a Java technology-centric (mapped binding) approach that uses binding definitions: you define how XML relates to Java objects, starting from schema, code, or both. Binding code is compiled (or wired?) into the class files (using BCEL bytecode enhancement). This can be done at build time, or on the fly at runtime, which makes JiBX compact and fast: JiBX achieves its performance by using post-compilation bytecode manipulation rather than reflection. The advantage is that there is no need for getters, setters and no-arg constructors; one can write the class without considering mapping issues and then map it without modifications. XML schemas describe domain objects, and JiBX really maps well to XSD. JiBX is based on a pull parser architecture (XPP3). Rather than generating code from a DTD or schema, JiBX works with a binding definition that associates user-supplied classes with XML structure. The binding compiler is executed after the code is compiled, and the marshalling/unmarshalling code is added to the class files. There are tools along with it, like BindGen, which can generate a schema from existing classes.
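At runtime the bound classes are used through the JiBX runtime API. A minimal sketch (the Order class and its binding are hypothetical; the runtime calls are the standard ones):

import java.io.FileInputStream;
import java.io.FileOutputStream;

import org.jibx.runtime.BindingDirectory;
import org.jibx.runtime.IBindingFactory;
import org.jibx.runtime.IMarshallingContext;
import org.jibx.runtime.IUnmarshallingContext;

public class JibxDemo {
    public static void main(String[] args) throws Exception {
        // looks up the binding the JiBX compiler wired into the Order class files
        IBindingFactory bfact = BindingDirectory.getFactory(Order.class);

        // unmarshal: XML document -> Order object
        IUnmarshallingContext uctx = bfact.createUnmarshallingContext();
        Order order = (Order) uctx.unmarshalDocument(new FileInputStream("order.xml"), null);

        // marshal: Order object -> XML document
        IMarshallingContext mctx = bfact.createMarshallingContext();
        mctx.setIndent(2);
        mctx.marshalDocument(order, "UTF-8", null, new FileOutputStream("order-out.xml"));
    }
}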


Tutorials

http://jibx.sourceforge.net/binding/tutorial/binding-start.html
Pdfs - JiBX -Part1 JiBX-Part2 Intro

Previous articles
Creating a Java WebService using Axis 2 (a lazy approach)
A simple RSS parser

A slothy ride through "Work Flows"... a primer

IT has become an integral part of every organization. Business people need to be empowered with tools that keep the upper hand in a competitive market. The execution of business processes in an organization can be done through workflow management systems: basically, the flow of data associated with a unit of work. Distribution of work is needed for efficient management, so non-technical people find it very easy if the applications they use to control a process are graphically manageable. At the deep level, the process models are based on graph theory and algorithms. As a programmer you are familiar with activity diagrams/data flow diagrams, so we can observe the workflow patterns they describe. The distribution of work and its monitoring, from inception to end, can be effectively managed by workflow systems. Workflow systems help to model services for Business Process Management (BPM), a management discipline that focuses on designing these processes. Anyway, the software used to model them did become role players in the industry.

I recently read some articles on the evolution of workflow tools. In the 70's the workflow domain focused on office information systems. They were not successful in the early days due to the limitations of computing, graphical user interfaces etc. But lately we have seen a massive change in the concept of computing... the semantic web, Web 2.0, advanced networking etc., which has helped workflow systems spread from managing a single organization to social networks (in the sense of social marketing tools, collaborative work etc.). There is a mindful of software like IBM MQSeries Workflow, Tibco Staffware, Oracle BPEL PM, jBPM and OpenWFE (Ruby) used for workflow development. There is a consortium formed to define standards for the interoperability of workflow management systems, known as the Workflow Management Coalition (WfMC). According to their reference model:


- the workflow engine handles the instantiation and execution of process instances, based on a predefined process model
- this model is specified using a process definition tool and is subsequently passed to the engine for execution
- the workflow client handles interaction and manages work items on the worklist
- tools exist for administration and monitoring of processes and their instances
- external workflow engines can be communicated with while processing is done

Take the case of jBPM:



  • The JBoss jBPM core component is the workflow engine (also referred to as the core process engine), which takes care of the execution of process instances.
  • The JBoss jBPM Graphical Process Designer (GPD), a plugin for Eclipse, is the process definition tool; it provides support for defining processes in jPDL, both in a graphical format and in XML format. (Processes can also be defined as Java objects or as records in the jBPM database.)
  • jPDL (jBPM Process Definition Language) is the process language.
  • The JBoss jBPM console web application is the web based workflow client as well as an administration and monitoring tool.
more here

On the programming side it uses Graph Oriented Programming, a very simple technique that describes how graphs can be defined and executed in a plain OO programming language. Domain specific languages have a pivotal role in the development of workflows; Graph Oriented Programming is the foundation for all domain specific languages that are based on executing a graph. If you compare the system, it is similar to any process model involving state changes, transitions etc.

Some cool stuff I read

Making Web 2.0 Meaningful and Achievable by Jackbe
Take advantage of Web 2.0 for next-generation BPM 2.0
Evolution of the workflow management systems

More to know about the meta languages used in these systems.....

Java modularity and metadata

As a Java developer, one will encounter the spaghetti world of classpath hell. Loads of libraries, jars, dependencies etc. are used in large projects. Manageability is a major part of any application that evolves through a continued engineering process. When we access a class in a library we have to add the library (which can be a jar file) to the classpath. This lets the compiler use the class at compile time and the JVM load the class when used at runtime - just the basic stuff. java.lang.ClassLoader plays the main role in the life of class objects, and we have jar/plugin mechanisms to do all the required functionality of the so-called modular world. A jar can have dependencies on other jar files; we use lots of jar files, the Apache Commons libraries and all, and sometimes we won't be needing all the classes for the required functionality. Intelligent packaging is one of the main concerns of any project at deployment: fewer dependencies could overcome performance bottlenecks, and better versioning could support continued engineering and maintenance.

The default class loading in the Java runtime is immutable: any class loaded by a class loader is added to a namespace which cannot be changed, i.e. you can add any class to this namespace but you are unable to unload it. That's what happens when an object is instantiated with the new operator. When we create a new ClassLoader instance, it creates a new namespace.

One of the solutions proposed by the tech community is the modularization of the JDK itself. This would allow applications to be installed with just those components of the JDK that they actually require. A specification, JSR 277, was targeted to be delivered as a component of Java SE 7.0. It defines an architecture with first-class modularity, packaging and deployment support in the Java platform, including a distribution format, a versioning scheme, a repository infrastructure, and runtime support. It introduced the Java module format called JAM, but later they decided to halt its development because of the difficulty of integrating it with the JVM, and instead started the Jigsaw project under OpenJDK. A modular system can be independent, or be implemented through language or compiler changes. Anyway, the concept of modules is interesting. According to the JSR 277 spec:
A Java module is a unit of encapsulation. It is generally written as a development module in a programming language in the Java platform, and is ultimately compiled into metadata and packaged together with classes and other resources as a deployment module.
These modules provide metadata about themselves: name, version, imported classes, dependencies etc.

But there is already an evolving system known as OSGi that implements the modular concept. Here bundles are the modular jar files. You can install, uninstall, start and stop these bundles (without restarting the VM). Also, it offers services that can be dynamically discovered in JAR files at runtime. You can package OSGi with your application, or list it as a prerequisite, and then run on a wide range of Java platforms (a design point of OSGi has always been to support a broad set of Java ME). The concept of bundles is powered by custom class loaders, which provide encapsulation as well as runtime dynamics.
A bundle is a group of Java classes and additional resources equipped with a detailed manifest on all its contents, as well as additional services needed to give the included group of Java classes more sophisticated behaviors, to the extent of deeming the entire aggregate a component.
We know that a jar has a MANIFEST.MF file inside its META-INF directory that contains information about its contents. By adding OSGi headers to the MANIFEST.MF we can make the jar a bundle. These headers can be:

Bundle-Name: a name for this bundle
Bundle-SymbolicName: a unique identifier for a bundle (naming like java packages).
Bundle-Description: description of functionality.
Bundle-ManifestVersion: which OSGi specification to use for reading this bundle.
Bundle-Version: version number of the bundle.
Bundle-Activator: the class name to be invoked once a bundle is activated.
Export-Package: packages contained in a bundle available to the outside world.
Import-Package: packages required from the outside world
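As a minimal illustration (a sketch; the package and class name are made up), the class named by Bundle-Activator receives lifecycle callbacks from the framework:

package com.sample.bundle;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// referenced from the manifest as: Bundle-Activator: com.sample.bundle.Activator
public class Activator implements BundleActivator {

    public void start(BundleContext context) {
        // called when the bundle is started; register services here
        System.out.println("Started: " + context.getBundle().getSymbolicName());
    }

    public void stop(BundleContext context) {
        // called when the bundle is stopped; release anything acquired in start()
        System.out.println("Stopped");
    }
}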

This standardized metadata is used by the framework to provide robust modularity while integrating with most existing applications. The metadata can act as a semantic definition for integrating pluggable modules into large applications. I think modern enterprise applications need this type of modularity to provide easy integration with a low cost of maintenance and development. ISVs can provide efficient solutions as lightweight applications instead of heavy application suites with complex integration structures; it can be an asset to low budget IT solutions. Also, the standardization helps different solutions, whether from the OpenJDK community, the Spring community or any other vendor, to coexist and interoperate.

More header information in this OSGi core specification
Some osgi implementations equinox,apache felix
About jigsaw
OSGi elearning

A snippet on Apache Sling

We know everything is content. Content (knowledge) management systems are widely used in various fields and are really an old concept. Usually data is stored in databases or a file system, as those are very easy to manage, unless the data is huge. A filesystem is unstructured and hierarchical; a database is structured with a well defined schema but restricted by constraints and transactional behavior. Web 2.0 caused such an upsurge of highly dynamic data on the web that it became inevitable for collaboration and integration. If we have to access information of different formats, then we can use content repositories, for their inherent combination of file system and database (with some extra features). It's the best of both worlds. Here is the definition of a content repository from the JSR-170 spec:

A content repository is a hierarchical(tree like) content store with support for structured and unstructured content, full text search, versioning, transactions, observation, and more(they can be considered as content services).

It is like a high level information management system. It can have text files as well as binary files like PDFs, docs, images etc.

Apache Sling (with a new release, version 5) is an open source web framework for the Java platform, designed to create content-centric applications on top of a JSR-170-compliant (aka JCR) content repository such as Apache Jackrabbit. If you use Sling, it will be easy to develop large websites with thousands of pages; it works on top of JCR. If we are familiar with configuration management systems like ChangeMan, VSS or Subversion, we know how easy they make managing content; it avoids spreading files around the webserver. Sling processes HTTP requests in a RESTful way and uses scripts or servlets to render and process content. Modularization comes from OSGi (using Apache Felix), which loads modules into memory dynamically. Sling supports JSP, ESP (server side ECMAScript), Groovy etc., and we can directly map a URL to content. I think I have to know more about OSGi to dig into what's happening inside Sling. I believe Sling could be used in large websites like social networks; DevDay says how OpenSocial and Apache Sling fit together.

more on Sling later...

Apache Lucene - Indexing Part2

I was going through some interesting sections of Apache Lucene these days. I found it really interesting because the project is a very popular one, and it has let many web applications integrate complex search modules. Some might know that the JIRA tracker by Atlassian uses Lucene, traversing huge bug lists, comments, code, documents etc.
For a vanilla search tool, comparing the search key with strings in the file is very slow, so an index, like an inverted index, comes in handy. When indexing is done by Lucene, it creates document ids for each document. It collects all the words and associates them with each docId in which the word occurs; each docId will then have a list of positions of the words in the document. The index data structure, the store of documents with their associated fields, is constructed to provide random access data retrieval. A Lucene inverted index can be opened either to add more documents or to delete existing documents at a time; to update a document you must delete it first, close the index and add it again.

The Analyzer, specified in the IndexWriter, extracts the tokens to be indexed. There is a default analyzer for English text (for multilingual content, custom analyzers are needed). Before analysis is done, documents like PDF, DOC etc. have to be parsed. A Term is the basic unit of searching: similar to the Field object, it consists of a pair of string elements, the name of the field and the value of that field, i.e. a term is a pair <fieldname, text>. A term vector is a collection of terms. The inverted index maps terms to documents: for each term T, it stores the set of all documents containing that term. So the duty of the analyzer is to look for the terms in documents and create a token stream so that they can be mapped. Terms are stored in segments, and they are sorted. The term frequency tells how well a term describes the document contents, but terms which appear in many documents are not very useful for filtering. The Kth most frequent term has frequency approximately 1/K, i.e. for 100 tokens, the index will contain 50% of the text. The indexing strategies can be chosen from (a minimal IndexWriter sketch follows these lists):
  • Batch based - like simple file parsing and sorting
  • BTree based - similar to the indexing done by file systems and databases; as it is a tree, updates can be done in place
  • Segment based, which is common - created from lots of small indexes
The algorithm used for Lucene indexing can be:
  • indexing a single document and merging a set of indexes
  • an incremental algorithm in which there is a stack of segments and new indexes are pushed onto the stack (segment based)
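As promised above, a minimal indexing sketch (against the Lucene 2.x API of the time; the index path and field name are my own):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class IndexDemo {
    public static void main(String[] args) throws Exception {
        // the analyzer tokenizes field text into the terms that get indexed
        IndexWriter writer = new IndexWriter("/tmp/index", new StandardAnalyzer(), true);

        Document doc = new Document();
        // stored and analyzed: the raw value is kept, and also tokenized into terms
        doc.add(new Field("content", "Lucene builds an inverted index",
                Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc); // lands in an in-memory segment first

        writer.optimize(); // merge the small segments
        writer.close();
    }
}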

Apache Lucene - Indexing - Part 1

"Information retrieval (IR) is the science of searching for documents, for information within documents and for metadata about documents, as well as that of searching relational databases and the World Wide Web."

Most applications use search features. If you are looking to add a powerful text search feature to your application, then use Lucene, which can add advanced search engine capabilities to an application. This is a really powerful Java API which gave birth to powerful tools such as Nutch, Hadoop, Hibernate Search and so on. Lucene was started in 1997 and adopted by Apache in 2001. The main functionality Lucene provides is powerful full text indexing of data.
Indexing with Lucene breaks down into three main operations: converting data to text, analyzing it, and saving it to the index. Lucene looks at strings only, so documents have to be parsed before they are indexed.
To search large amounts of text quickly, you must first index that text: convert it into a format that lets you search rapidly, eliminating the slow sequential scanning process. This conversion process is called indexing, and its output is called an index. Searching is then done on this index to find the data, at the cost of the space for storing it.
These index files can be stored in a directory. A Lucene index is divided into segments made up of several index files (of Lucene Documents). An index can relate to multiple documents, so when new documents are indexed, they are added as new segments rather than by modifying the existing index files. Lucene uses a feature called incremental indexing: there is a global index, and incremental documents are indexed so that they become searchable. Regarding structure, a Lucene index is an inverted index. While searching, Lucene loads the index into memory. It uses a high performance index format whose size is roughly 20-30% of the size of the text indexed, and which uses little memory. The documents in an index are collections of fields, i.e. named collections of terms like <field, term>. These fields are independent search spaces defined at run-time. The segments (sub-indexes) are independently searchable, and the results from the segments are merged. Suppose a wiki article is indexed: we can set the field properties so that the Field objects contain the actual indexed article data, or only the stored value.
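To complete the picture, a minimal search against such an index (again the Lucene 2.x API; the path and field name are assumptions):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

public class SearchDemo {
    public static void main(String[] args) throws Exception {
        IndexSearcher searcher = new IndexSearcher("/tmp/index");
        // parse the user query with the same analyzer that was used for indexing
        Query query = new QueryParser("content", new StandardAnalyzer()).parse("inverted index");

        TopDocs hits = searcher.search(query, null, 10); // top 10 matches
        for (ScoreDoc hit : hits.scoreDocs) {
            System.out.println(searcher.doc(hit.doc).get("content"));
        }
        searcher.close();
    }
}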



More about lucene index file formats - here

A few videos I liked from Mix 09

I have worked on some .NET RIA projects before. I saw some videos from Mix09 and am sharing some I liked.

Building a Rich Social Network Application


Learn how to build a social networking site using Microsoft Silverlight. It also explains how to mash up existing services, how to tag and store data in back-end services, and how to build a rich and engaging experience.



















Visit this website for different UX patterns



http://quince.infragistics.com




There are different patterns for designing an application. See this video on how to enable rich communication patterns between your AJAX web pages and the web server using existing and new features in WCF, Windows Communication Foundation.











I have written about RESTful services before. In this video, look at how ADO.NET Data Services matches REST.








For Three Tier RIA Pattern

markItUp! a jQuery plugin

I saw a very cool HTML editor plugin called markItUp!, which is a JavaScript plugin built on the jQuery library. It allows us to turn any textarea into a markup editor; even our own markup system can be easily implemented. It quickly turns any standard TEXTAREA on the page into a powerful markup editor, and instantiating it is as easy as:

$('#html').markItUp(Settings);

Settings is a JSON object defining the settings for the markup editor - shortcut key mappings, markup sets etc.


For more features
http://markitup.jaysalvat.com

Develop an Open social application in 60 seconds

Open social application development has made a giant leap: an Eclipse plugin that eases the development of OpenSocial apps. I have written about OpenSocial applications before, and I had a chance to work on them. Apache Shindig is the initiative to develop an SNS container for application development and testing. I would say the OSDE plugin developed by Yoichiro Tanaka rocks!! It uses Apache Shindig and Hibernate for dynamic development, so the developer can create a single application for different data models. As Apache Shindig provides Java REST support, application development becomes more extensible. The database packaged with the plugin is H2 (a Java SQL database). Before this plugin came, we had to develop and run the applications inside a sandbox, which was really tiring. The plugin has wizard-like development features for both JavaScript widgets and Java REST client applications. We can have our own custom social data, which is easily persisted thanks to the excellent plugin architecture. So I tried to develop a simple application...

After the plugin is installed create a new OSDE project


Specify the gadget.xml and the API specs etc


For the development we need to run the apache shindig in the background.



To have a custom social data create people and add relationships between them.





Write a simple gadget ... (templates can be generated by the plugin if needed)
----src-------


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Module>
  <ModulePrefs author_email="harisa@pramati.com" description="A friendly os app" title="Friends">
    <Require feature="opensocial-0.8"/>
    <Require feature="dynamic-height"/>
  </ModulePrefs>
  <Content view="canvas" type="html">
  <![CDATA[

<!-- Fetching People and Friends -->
<div>
<button onclick='fetchPeople();'>Fetch</button>
<div style="margin-left:20px;">
I am ... <span id='viewer' style="background-"></span><br/>My friends are ...
<ul id='friends' style="margin-top:5px;list-style:none;margin-left:75px;"></ul>
</div>
</div>
<script type='text/javascript'>
function fetchPeople() {
var req = opensocial.newDataRequest();
req.add(req.newFetchPersonRequest(opensocial.IdSpec.PersonId.VIEWER), 'viewer');
var params = {};
params[opensocial.IdSpec.Field.USER_ID] = opensocial.IdSpec.PersonId.VIEWER;
params[opensocial.IdSpec.Field.GROUP_ID] = 'FRIENDS';
var idSpec = opensocial.newIdSpec(params);
req.add(req.newFetchPeopleRequest(idSpec), 'friends');
req.send(function(data) {
var viewer = data.get('viewer').getData();
document.getElementById('viewer').innerHTML = viewer.getId();
var friends = data.get('friends').getData();
document.getElementById('friends').innerHTML = '';
friends.each(function(friend) {
document.getElementById('friends').innerHTML += '<li>&#187;' + friend.getId() + '</li>';
});
gadgets.window.adjustHeight();
});
}
</script>
]]>
  </Content>
</Module>

----src ends---------

Run the application

Gadget --->


Cool... it's simple. But this plugin will be really useful when we develop complex applications aiming for multiple containers that support the OpenSocial API.

More ....
opensocial-development-environment
screencasts
youtube

An interesting talk by the creator of CouchDB

I watched this presentation, which is really inspiring, by Damien Katz, who developed a database using Erlang (a functional language). He talks about the circumstances and hardships he faced while developing a new DB, even though there are others in the industry which have become integral parts of infrastructure. Cool people make cool stuff!!

http://www.infoq.com/presentations/katz-couchdb-and-me

What so interesting about CouchDB ?
In CouchDB, the data is a collection of JSON documents.It is more of object database than a relational db.I shows how powerful is javascript in server side -->Views are created by javascript like a map reduce.It is a very good choice to use for scalabale RESTful applications.Currently this project is in alpha stage.

More about CouchDB http://couchdb.apache.org/docs/intro.html

Applications using CouchDB http://wiki.apache.org/couchdb/CouchDB_in_the_wild

A midnight exception... is never caught

2 am... It's dark and cold. Sleep deprived. Enjoying the 7th symphony. I don't remember how many times... The wizardry of a deaf man! Like a fall... sweeping egos out of proud minds into the depth of folly.

grrr...

All these things are a fallacy.A bigotry.An apartheid from the very existence of nature around us.. Wait, Is there anything left ? I forgot...

I closed my eyes...

Technology sucks !! It will plunge u into a dazed state from which you will never recover.Human intelligence and creativity to please his unending desires.To satisfy the eternal consumer needs.There we go ; the "samsara" of material flow.Ha !

When do a loose puck like me feel elevated ? I donno. I feel high most of the times.
I hated computers once.But the unabated pleasure it gave me... fulfilling the undemanding desires made me a slave.

No, am the master.I program to control machines.

Wait.... mmm Waited....Eventually I learned meself, that I program to survive.Such a pathetic demise to the pride self.Who is the slave? Whos who a slave ?

Its too late...

When the ambitions become unbounded, will eventually consume the sorrowful seed that will grow upon you like a banyan tree;underneath beckons the despondent shade that will comfort your inferior ego.The pursuit of happiness is a myth.

Maybe Adam Smith was right in his Theory of Moral Sentiments

...my incompetent leftist thoughts.

Think of this marvelous machinery succumbs most of the human needs.Programmers as artists.They are like master craftsmen whose never moved their ass from the shrine , and whose eyes gaze through the entire Google's index...

I donno.Who cares.I feel sleepy.Go and sleep ....

RESTful Java , some links

I would like to share some interesting links for developing RESTful applications in Java.

JSR 311: JAX-RS: The JavaTM API for RESTful Web Services:
This JSR will develop an API for providing support for RESTful(Representational State Transfer) Web Services in the Java Platform.

Jersey - JSR 311:JAXRS implementation

REST for Java developers, Part 1: It's about the information, stupid - JavaWorld

REST for Java developers: Restlet for the weary - JavaWorld

REST for Java developers, Part 3: NetKernel - JavaWorld

JavaFX RESTful Pet Catalog Client

Some pdfs


If you are a Netbeans fan you should view these screencasts 
RESTful Web Services Pet Catalog
RESTful Web Services in NetBeans IDE 6.0
Building an End-to-End Restful Web Application
RESTful Web Services Pet Catalog - NetBeans IDE 6.5
YouTube: NetBeans REST Web Services, Building and Deploying (Part 1)
YouTube: NetBeans REST Testing and Invoking REST Resources (Part 2)


ExtJS, GWT RPC... an experiment on widgets

I was doing samples on GWT during my free time. I tried Ext GWT (not GWT-Ext). There are a lot of articles/screencasts around the web on how to work out RPC in GWT, and people mess it up sometimes, so I decided to document the method that worked for me. This is for GWT version 1.5.3, which supports Java 1.5 (even though GWT uses Java, we cannot harness every feature of Java 1.5). Anyway, GWT is the Swing for the web, and adding the elegant ExtJS to GWT is indeed a good choice for RIA development. Ext GWT is a library written in Java. If you need a kick start, go here - http://extjs.com/helpcenter/index.jsp - which has screenshots for setting up a project. If you are using Eclipse for GWT development there are a lot of plugins around, like gooplise. I knew I would mess that up, so I decided to develop from scratch rather than use plugin-generated RPC code. Before creating the package structure, we have to be aware of the directory convention used in GWT RPC.

More detailed RPC development here

The server code should go into "server" and client code inside "client". We can override the paths in app.gwt.xml: if we add a <source path="path"> entry for added packages/modules, we then have to add the "client" and "intl" paths explicitly, which are otherwise the defaults. It is similar to the Java default constructor: when we add an arg constructor, the default constructor is no longer provided... sort of.

So I developed a poll in Ext GWT... oof. Compiling seems slow, and I found it very difficult to debug. Common issues and resolutions:

a. com.google.gwt.user.server.rpc.RemoteServiceServlet - ClassNotFoundException
Resolution: put the Service interface and ServiceAsync definitions inside "client". On the server side, point the servlet path to the Service implementation. Try running the server classes to invoke RPC after deploying in Jetty/Tomcat.
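For reference, the conventional interface pair looks like this (a sketch with my own package and method names, matching the VoteService servlet configured below; each interface goes in its own file under the client package):

// VoteService.java - the synchronous interface the servlet implements
package demo.app.client.rpc;

import java.util.Map;
import com.google.gwt.user.client.rpc.RemoteService;

public interface VoteService extends RemoteService {
    /**
     * @gwt.typeArgs <java.lang.String,java.lang.Integer>
     */
    Map getVotes();
}

// VoteServiceAsync.java - the asynchronous twin GWT requires
package demo.app.client.rpc;

import com.google.gwt.user.client.rpc.AsyncCallback;

public interface VoteServiceAsync {
    void getVotes(AsyncCallback callback);
}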

ie. in gwt.xml it will be

<servlet path="/VoteService" class="demo.app.server.rpc.VoteServiceImp" />

in web.xml

<servlet>
    <servlet-name>vote</servlet-name>
    <servlet-class>demo.app.server.rpc.VoteServiceImp</servlet-class>
</servlet>

<servlet-mapping>
    <servlet-name>vote</servlet-name>
    <url-pattern>/VoteService</url-pattern>
</servlet-mapping>
</servlet-mapping>

b. Do not forget to call layout() when using the Ext implementations.

c. There are tab panel rendering issues... I have to resolve them.

d. Mmm, got problems with serialization. I was transferring a map of String keys and Integer values, so I used annotations:

/**
 * @gwt.typeArgs <java.lang.String,java.lang.Integer>
 */

The interface looks cool, and using GWT the code becomes maintainable. But these are new products, so we have to think hard before choosing them for production quality applications...

I have to go deeper into these. Anyway, I have made some samples and uploaded one.

download code

Use Eclipse. The setup and configuration can be seen in the previously linked screencasts/articles. Compile using the GWT dev tool, transfer the generated classes to the WEB-INF/classes folder, add gwt-servlet.jar to the libs, and access App.html to see the widget.

In my opinion, working with Ext GWT can be overkill: we have to resolve the issues of both!!!

I will document a better one later... For now, see the screenshots.