
An account of Open Source Summit 2008 Hyderabad

This weekend I attended the Open Source Summit held at IIIT Hyderabad on the 13th and 14th (I was unable to attend on the second day :( ). On Saturday morning I took the MMTS from the station near home to Hafeezpet, and from there reached IIIT by auto around 11 am.


The first session I attended was on BeleniX, the OpenSolaris LiveCD project, by Moinak Ghosh. I arrived in the conference room while the presentation was halfway through. The presenter was upgrading OpenSolaris, and while the upgrade was going on, other applications kept running!! He explained how OpenSolaris and ZFS are useful in a production environment and demonstrated creating separate snapshots. He introduced DTrace, which can dynamically inject debug probes while an application is running (and can be used for debugging the kernel). He also explained the difference between zones in OpenSolaris and full virtualization, the concept of a RAM disk, etc. The session was good, as practical samples were demonstrated.


The next session, which was even more interesting, was by Mahesh Patil from the National Ubiquitous Computing Research Centre, CDAC, on Embedded Linux and real-time operating systems. I really enjoyed it and understood the technology. When I was in college (MCET Trivandrum) we used to conduct a lot of seminars; sensor networks and nanotechnology were the most presented topics those days. But this session was a great experience, as he had something cool to show: a board with an ARM processor, onto which he demonstrated loading Linux. He explained toolchains and how they are used, packaging kernel images, etc. He described how an embedded OS differs from an RTOS and the preemptive nature of an RTOS. An RTOS can use a dual-kernel approach, in which interrupt-handling latency is reduced by having one kernel handle interrupts while the other handles everything else; the core kernel operations get lower priority than the higher-priority tasks waiting in the queue. I came to know that most embedded Linux work follows POSIX compliance, but in Japan it is MicroITRON. He talked about eCos, a configurable OS that can be configured for embedded or real-time use, and then about the Smart Dust project: cool, futuristic technology; tiny devices floating around that communicate within a small range and sleep most of the time. I was wondering how huge the data produced by these devices will be. Think about real-time heat maps of the different boxes holding vaccines that are distributed around the world! (Pharmaceutical companies now keep a device inside the package to record the temperature when it was packed and check the change in temperature when opened.) I also came to know about the 3KB TinyOS! Cool and simple… even though I am not from an electronics background ...


Next on stage was a geek, Arun Raghavan from Nvidia. He is a developer in the Gentoo Linux community. I hadn't tried this Linux variant before. It's a Linux for developers!! Any application can be customized for performance and maintainability by creating ebuilds, which makes it so flexible for developers. I think it has a steep learning curve, since most installation and customization is done by the user himself. He demonstrated creating an ebuild for gTwitter, a Twitter client, and showed the ease of using Portage, the package management system used by Gentoo Linux. Visit Gentoo.org to know more about this Linux. I really liked the article written by Daniel Robbins (the architect of Gentoo Linux) about its birth; read here.


I attended another session, on Hadoop, by Venkatesh from the Yahoo! research team. Hadoop is an open-source project for large data centers. I was looking forward to this presentation as it is about Web 2.0 (cloud computing) and large-scale computing (blogged before). It is a framework written in Java that supports data-intensive distributed applications. To know more about large-scale data processing using Hadoop you can read this paper. It has a file system called HDFS (a pure-Java file system!!) that stores replicated data as chunks across unRAIDed SATA disks. There is a name node and a cluster of data nodes, like a master-slave system; the name node stores the metadata (the namespace and block locations) for the data stored across the file system. The concept is similar to the Google File System and its cluster features. More about the concept here (Nutch), and here. This framework can be used for processing high-volume data; integrated with Lucene, it will help create a quality search engine of our own. The framework is used by Facebook; one of the Engineering @ Facebook Notes explains why Hadoop was integrated. Read here. It is also used by IBM (IBM MapReduce Tools for Eclipse), Amazon, Powerset (which was acquired by Microsoft recently), Last.fm ... Read more about Hadoop in data-intensive scalable computing. Related projects: Mahout (machine-learning libraries) and Tashi (a cluster-management system).
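The map/shuffle/reduce flow that Hadoop automates across a cluster can be sketched in plain Java. This is a stdlib-only toy for illustration; real Hadoop jobs use its Mapper/Reducer API and run distributed over HDFS.

```java
import java.util.*;

public class WordCountSketch {
    // Map + shuffle + reduce over in-memory lines: what Hadoop does across a cluster.
    static Map<String, Integer> count(List<String> lines) {
        // Map phase: emit a (word, 1) pair for every word; shuffle: group pairs by word.
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String line : lines)
            for (String word : line.split("\\s+"))
                grouped.computeIfAbsent(word, k -> new ArrayList<>()).add(1);

        // Reduce phase: sum the values in each group.
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            counts.put(e.getKey(), sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count(Arrays.asList("hadoop stores data", "hadoop processes data")));
        // prints {data=2, hadoop=2, processes=1, stores=1}
    }
}
```

The point of the framework is that the map and reduce steps are pure per-record functions, so Hadoop can run them on whichever data node holds the chunk.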


So it was worth it, as I was able to attend these sessions .... Thanks to twincling.org

A sample iWidget



Finally I got an iWidget published :) A simple one that fetches my FriendFeed. Click the "View" button to fetch the data. The interesting thing is that this "patent-pending" Write Once Run Anywhere widget platform lets users publish their widgets to Facebook, MySpace, iGoogle, Netvibes, Widgetbox, and Clearspring as of now; others like hi5, Orkut, and Gigya are on the way. I think this platform could become a good tool for OpenSocial gadget developers.

A try on iWidget Creation Technology

I tried to build and publish a widget on the iWidgets platform: a data-view widget in which my FriendFeed is displayed. The UI is similar to other mashup tools like Yahoo! Pipes and Presto Wires. It has a customization wizard for the widget, which is cool, and it supports native widgets. Their WidgetWORA™ technology is patent-pending and is going to have a wide audience. Peter Yared, Founder & CEO, was CTO of Sun Microsystems' Application Server Division. They have a Social Media Accelerator Program with pay-per-performance pricing, as well as a free self-service plan with ads; a good model for monetization. I tried to publish my widget, but I couldn't find any link for showing it in my blog :( These social gadgets can also be published to Facebook, iGoogle, and MySpace. My iWidgets web workspace looked like a desktop tool.


A gmap opensocial gadget - Voila


Recently I started working on an OpenSocial application platform based on Java and PHP. My previous posts on OpenSocial and Apache Shindig were written while I was involved in that development. OpenSocial applications allow data to be shared between different sites and social networks. I developed a sample gadget to mash up data from our social application platform with Google Maps; I named it "Voila". Using it, you can save the last location you have been to and share it with friends, and you can also view the locations your friends have shared. Even those who don't carry a location-based device can play around with it. It uses OpenSocial's persistence feature.



I made another gadget which uses a job site's service. The gadget, named "Empleo", is intelligent: it automatically gathers the owner's job interests and updates them each time. The owner can also use the search tool embedded within it. Another thing I did was to integrate a Tell-a-Friend WOMM widget with an OpenSocial gadget, so the user can tell a friend about a job using any service.

Empleo screenshots viewed inside the preview window.



Web 3.0: A case study, part 1

While I was with TeamTa, we made a case study on Web 3.0. It was a really wonderful experience for me, as I was able to learn a lot about emerging technologies. At least I came to know how vast the emerging web is: its potential spans from entertainment to defense purposes. It is like another universe with its own economy, culture, society, technology, etc. I tried to read as much as I could... yes, it was really vast, with infinite potential. Many research papers, many articles, many magazines... I know I am good at research, and I was able to summarize it concisely. I am not an expert in all these buzzword technologies... I think one need not be working in high-end technologies, but can learn about them and share them, or use the ideas in some other field. The Internet did open human minds around the globe. Anyway, I was able to understand it and express it the way I got it... I decided to open it up and share it with the world, as it's an interesting one... so someone somewhere will read it and comment on it. As time passes by, I will be able to review the past work I have done; anyway, this blog was made for that purpose. View my case study.

web 3.0 part1

What I think about event processing...

An amateur thought.

I think the most interesting area of information processing is event processing. Most large-scale enterprise applications are based on an event-driven architecture. Event-based information processing is an advanced area I haven't gone through yet, but reading about it I found it really interesting. State models, lexical analysis, reactor patterns, callback event models, etc. are used behind it. Event-driven design is an approach to program design that focuses on events to which a program reacts; according to these events, the registered event handlers respond. This is the fundamental of any GUI-based application: an event listener is attached to a button, and a handler responds to its events. I think it is the basic underlying architecture of any responsive application. If you have worked on a 3D application, events on the 3D positions of polygons have to be registered; every movement in space triggers an event... good gaming. If I have to think big, consider an online stock ticker. Stock movements are reflected in real time; most people know about the Ajax-based technology behind the dynamic graphs. But what about the complex business logic? A rule engine defines a set of rules to act according to changes in input. I can compare such a system to the stimulus response of an organism: in the human brain, predefined genetic rules are there to adapt to an ever-changing environment. There can be sudden stimuli or gradual ones, depending on the inputs. What about pattern recognition? The human brain is highly sophisticated... mmm, I am getting boring now. If it is about real-time processing, then I like to refer to Complex Event Processing (CEP), a technology for low-latency filtering, correlating, aggregating, and computing on real-world event data. What if this complex event processing were enabled across a network...? Toward a collective intelligence? I read that context-based switches are now implemented in CDNs. Whatever....

It's really complex and interesting. No wonder the huge amount of data on the web can be used for social "business" intelligence... CEP actually builds on what business intelligence (BI), service-oriented architecture (SOA), cloud computing, and business process modeling (BPM) provide. Mashup technologies along with the semantic web can provide the more granular data that most technology products are moving toward. Some people say SOA, some WOA and SaaS, cloud, and so on. Consider NASA satellite data: huge amounts of data from satellites gushing through the channels are processed using various image-processing and signal-processing algorithms. What about all that RFID-based data affecting supply-chain tracking? What if we were to track every consumption of fuel in the world in real time using GPS trackers and sensors? What about the streams of data processed by supercomputers for weather forecasts based on certain models? They are crucial and brain-forging. That's how information technology becomes the backbone and most sophisticated part of human civilization.
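The listener-and-handler idea above can be sketched as a tiny hand-rolled event bus in Java (an illustrative toy, not any particular framework's API): handlers register for an event type, and the dispatcher calls each of them back when a matching event is emitted.

```java
import java.util.*;
import java.util.function.Consumer;

public class EventBusSketch {
    // Registered handlers, grouped by event type (e.g. "priceChange").
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();

    // Register a callback for an event type.
    public void on(String eventType, Consumer<String> handler) {
        handlers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    // Fire an event: every handler registered for this type reacts.
    public void emit(String eventType, String payload) {
        for (Consumer<String> h : handlers.getOrDefault(eventType, Collections.emptyList()))
            h.accept(payload);
    }

    public static void main(String[] args) {
        EventBusSketch bus = new EventBusSketch();
        // Two independent reactions to the same stimulus, as in a stock viewer.
        bus.on("priceChange", p -> System.out.println("chart updated: " + p));
        bus.on("priceChange", p -> System.out.println("alert checked: " + p));
        bus.emit("priceChange", "ACME 42.10");
    }
}
```

A CEP engine layers far more on top (windows, correlation, low-latency matching), but the register-then-react shape is the same.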

It's all about data, and the network is the computer!!

Maybe we are trying to make a system as efficient and fast as our brain. At least the model for all these logical applications is the expert system. Why should I write about stuff that is very complex to me... I am not an expert in all this... just blogged out of curiosity. There are basics to learn...

There is a good article on Wikipedia about CEP

http://en.wikipedia.org/wiki/Complex_Event_Processing

Another article on InfoQ

others... Link Link .

An article on a NASA-funded CEP project: Link


Mozilla ubiquity in unified communications

Mozilla Ubiquity is a cool graphical keyboard command-line UI in which the user can enter commands and have them executed. The browser as the platform and the network as the computer are evolving into a new phase. The unified communications paradigm that converges media did revolutionize the telecommunications industry, and the Web 2.0 revolution is reflected in all areas of human communication. Recently I saw a video where the Druid team used the Mozilla Ubiquity tool integrated with their unified communications platform, Druid. Druid brings together voicemail, VoIP, mobile phone, faxes, and instant messaging into a common platform so that data from multiple sources get merged; that's what UC is meant to be ....

video



Instead of writing a plugin/addon from scratch, this excellent tool enabled them to provide a new way to execute commands. In the video, Druid with SugarCRM turns Firefox into a powerful UC application that allows users to dial numbers, send faxes, set presence, and much more. Druid has a SOAP API to integrate with other applications like Zimbra, SugarCRM, etc. Druid is open source.

really cool !!

Druid site

For developers and IP telephony guys ..

http://in.youtube.com/voiceroute

UI redress vulnerability

The hot trend that caught my interest in web application security is clickjacking, a.k.a. the UI redress vulnerability. It is a vulnerability rooted in the DOM model of web browsers. According to a bug reported on Mozilla in 2002 - http://bugzilla.mozilla.org/show_bug.cgi?id=154957 - browsers allow transparent iframes to be rendered, and most browsers still do. So any crooked head can use this idea to show a transparent iframe "over" his site and make visitors click the buttons in the hidden page. When the poor user clicks, he might be clicking on advertisements (click fraud). The innocent user ends up pressing buttons in the malicious page even though the site in front of him is urging him to do a harmless action! The web page might contain several such iframes... Nowadays Facebook apps and OpenSocial apps are common around the web, so we might be clicking hidden buttons in a hidden iframe!!
Some can even spy on you. How? It's simple: if we have a webcam or microphone, it can be accessed by Adobe Flash if we allow it to. So if a site has a hidden iframe, the user may unknowingly click the "Allow" button and leak his personal world to the web. Anyway, Adobe has resolved the issue in Flash Player 10.

http://www.adobe.com/support/security/advisories/apsa08-08.html

More details
http://ha.ckers.org/blog/20081007/clickjacking-details/

Solutions ?

1. Check window.top != window to inhibit rendering, or override window.top.location; for example, a frame-busting script:
if (top != self) { top.location.href = location.href; } // iframe-breaker

2. Re-authentication on all UI actions (not practical!!)

More on Google's solution (by the famous hacker Michal Zalewski):

http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2008-September/016284.html

If you use NoScript or disable JavaScript, clickjacking can largely be prevented. Even then, most browsers won't disable iframes.

More
http://www.gnucitizen.org/blog/more-advanced-clickjacking-ui-redress-attacks/
http://hackademix.net/2008/09/29/clickjacking-and-other-browsers-ie-safari-chrome-opera/
http://www.cgisecurity.org/2008/10/interview-jerem.html

Develop Java applications for the iPhone

Using a project called XMLVM (it's open source!!), it is possible to develop Java applications for the iPhone by cross-compiling bytecode.

XMLVM serves as an umbrella for several projects. For all projects, a Java class file or a .NET executable is first translated to an XML-document. Based on the XML-document generated by the front-end, various transformations are possible. The first transformation cross-compiles from .NET to JVM byte code. Another transformation enables Java or .NET applications to be cross-compiled to JavaScript so that they can run as AJAX applications in any browser. Yet another transformation allows to cross-compile a Java program to Objective-C to create a native iPhone application.


The technology is at an early stage... Take a look at the recent Google talk video.


Ambient Intelligence and location aware web

Ambient intelligence (AmI) is explained by Wikipedia as:
In computing, ambient intelligence (AmI) refers to electronic environments that are sensitive and responsive to the presence of people. In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way using information and intelligence that is hidden in the network connecting these devices.
It is also called pervasive computing or ubiquitous computing. We can build up Neuromancer-like future visions of ultra-sci-fi devices roaming invisibly around us, mumbling 0s and 1s, wrapping us inside their matrix... On Wikipedia you can read an interesting scenario about the use of ambient intelligence. This idea is used by most gadgets now on the market: all those smart home appliances, WiFi devices, mobiles, RFID, GPS, etc. There is a lot to know about the concepts and implementations (complex middleware architectures) of AmI.

In the web world, some recent works that have drawn everyone's attention are location-aware technology standards and APIs. Geographical information systems have been here for a long time, and tools like ArcGIS are prominent in the field. After the Gmap revolution hit the web, there was a surge of applications based on geospatial information. A virtual representation of the real world has always been the cyberpunk reader's dream, and the recent technological innovations do make it viable.

Yes, the web browser is the ubiquitous application that will connect real space to the virtual one. Now most mobile devices have browsers, so the web is getting ready for the opera of next-generation location-aware applications. The W3C has recently released a draft of the Geolocation API specification.

Mozilla has developed Geode, an experimental add-on to explore geolocation in Firefox 3. It includes a single experimental geolocation service provider, so that any computer with WiFi can get accurate positioning data.






Yahoo! Fire Eagle is a service that acts as a broker for your location, creating a single place where any web service, from any device, can get access to your last updated location. The service enables you to share your geographic position across many different applications, websites, and services. You can update your location with your GPS-enabled phone (e.g. iPhone 3G) or any other software that integrates with Fire Eagle, and you can allow websites and services to use this information.

Google released its Geolocation API for Google Gears. The Geolocation API provides the best estimate of the user's position using a number of sources (called location providers). These providers may be onboard (GPS, for example) or server-based (a network location provider).


So it is possible to use these APIs to develop environmentally intelligent applications on devices, especially mobiles. It would be possible to read my feeds and news based on locality... like reading The Times of India epaper's Bangalore edition or Hyderabad edition depending on my location while I am traveling.

What about a web-based advertising platform based on location..?

No wonder Google urges allocating ambient spaces for WiFi... free the airwaves!! These technologies will open up numerous possibilities for Android development.

Cartography gave light to the Renaissance. It made revolutions, wars, civilizations... Now the neo-geographers are leading the world into cyberspace navigation. The world is again shrinking... an invisible revolution?

Ambient intelligence is realized by human-centric products rather than technology-centric products. Information technology is now spreading from big corporates dealing with mammoth analytical data and transactions to the common man, where the data is unimaginably galactic... geospatial data... the paradigm of consumerism enabled by the new generation of web ideologies. These ubiquitous location-aware devices could form the new era of invisible computing,
a term coined by Donald Norman in his book The Invisible Computer, published a decade ago, to describe the coming age of ubiquitous task-specific computing devices: devices so highly optimized to particular tasks that they blend into the world and require little technical knowledge on the part of their users.

image from here

Experimenting with Space4j, an in-memory Java database system

It is always challenging to develop scalable, high-performance applications in Java. Different forms of data persistence are available on all programming platforms. Considering back-end data persistence: when there is another layer of mapping, an object-relational mapping (ORM) layer between Java objects and the data store, maintenance and multi-vendor support become easy, at the cost of performance. I am no expert in the vast number of ORM tools (I had written an article before); in Java it is a hack, using the Reflection and Collections APIs effectively. This article is about a new form of database system for Java, which I think is better suited to real-time, standalone applications. We know the key-value-pair data structure is used in most applications, and this database system is based on the maps in the Java Collections API. It is an in-memory database: it reads and persists data using serialization. For an always-on application, the data is always in memory; if there is a crash, the data is regenerated by replaying the commands previously executed.
I have read an article on InfoQ regarding RAM as the primary store for applications. Hardware-level changes could rapidly force a redesign of framework paradigms, tools, design patterns, etc.

So I decided to play around with Space4j. The data lives in the "space": we create a space in memory and assign data to it, and the map stores serializable objects. Clients use a Space4J object to execute commands on the underlying space. Space4J runs in the same JVM as the client, accessing the data directly from memory.

i.e. space4jObj.exec(command)

Commands can be:
- CreateMapCmd, which initializes the map (like creating a table)
- PutCmd, to insert data
- RemoveCmd, to remove data

We create a map and store it in the space. There are different implementations of the space; I tried SimpleSpace4J. I created a CRUD sample for Space4j: create a movie list and
store it. Add the jar to your classpath.

Movie.java

package com.space4j.sample;
import java.io.Serializable;
public class Movie implements Serializable {
private static final long serialVersionUID = -1L;
private String name ;
private String id;
public Movie(String name,String id){
this.name = name;
this.id = id;
}
public String getName(){
return name;
}
public String getId(){
return id;
}
@Override
public String toString(){
return id+":"+name;
}
@Override
public boolean equals(Object obj){
boolean b = false;
if(obj instanceof Movie){
Movie mov = (Movie) obj;
// compare String contents with equals(), not reference identity (==)
if (this.id.equals(mov.getId())){b = true;}
}
return b;
}

}
SpaceDAO.java

package com.space4j.sample;
import java.io.IOException;
import java.net.UnknownHostException;
import java.util.Iterator;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.space4j.CommandException;
import org.space4j.LoggerException;
import org.space4j.Space4J;
import org.space4j.Space;
import org.space4j.command.CreateMapCmd;
import org.space4j.command.PutCmd;
import org.space4j.command.RemoveCmd;
import org.space4j.implementation.SimpleSpace4J;


public class SpaceDAO {


private static String MAP_NAME = "movies";
private Space4J space4j = null;
private Space space = null;
public SpaceDAO(){
try {
space4j = new SimpleSpace4J("SpaceDAO");
space4j.start();
space = space4j.getSpace();
if(!space.check(MAP_NAME)){
space4j.exec(new CreateMapCmd(MAP_NAME));
}


} catch (LoggerException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
} catch (CommandException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
} catch (UnknownHostException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
} catch (IOException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
} catch (ClassNotFoundException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
}
}

public void create(Movie mov){


try {
space4j.exec(new PutCmd(MAP_NAME,mov.getId(), mov));

} catch (CommandException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
} catch (LoggerException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
}

}

public void update(Movie mov){

try {
space4j.exec(new PutCmd(MAP_NAME, mov.getId(), mov));

} catch (CommandException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
} catch (LoggerException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
}


}

public Iterator read(){
return space.getIterator(MAP_NAME);
}
public void delete(Movie mov){
try {
space4j.exec(new RemoveCmd(MAP_NAME, mov.getId()));
} catch (CommandException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
} catch (LoggerException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
}

}

public void createSnapshot(){
try {
space4j.executeSnapshot();
} catch (LoggerException ex) {
Logger.getLogger(SpaceDAO.class.getName()).log(Level.SEVERE, null, ex);
}
}

}

Test.java

package com.space4j.sample;

import java.util.Iterator;

/**
*
* @author Haris
*/
public class Test {

public static void main(String ars[]){
SpaceDAO sp = new SpaceDAO();
Movie mov1 = new Movie("Shawshank Redemption","M1");
sp.create(mov1);
Iterator iter = sp.read();
Movie mov = null;
while(iter.hasNext()){
mov = (Movie) iter.next(); // raw Iterator returns Object, so a cast is needed
System.out.println(mov.getName());
}
mov1 = new Movie ("No Country for Old Men","M1");
sp.update(mov1);
sp.createSnapshot();

}
}


A space4j_db folder will automatically be created in your working directory, storing all the log files named after the persisted classes. When we take a snapshot, a file with the extension ".snap" is created. Things I don't like: no package structure for the files; files are named by numbers, like 000000000001.log; and too many files are created.
It supports indexing, sequence numbers, clustering, DB replication, etc.
I did this for study purposes only. Actually, I read an article on InfoQ... so I tried out how it works... Article
Project site
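The crash-recovery mechanism described above, rebuilding the in-memory data by replaying previously executed commands, can be sketched in plain Java. This is a toy illustration of the technique, not Space4j's actual implementation; the Cmd record and its operation names here are made up for the example.

```java
import java.util.*;

public class CommandLogSketch {
    // A logged command: operation name, key, and value (value is unused for removes).
    record Cmd(String op, String key, String value) {}

    // Rebuild the in-memory map by replaying the command log in order.
    // After a crash, the log (persisted on disk) is all we need to recover state.
    static Map<String, String> replay(List<Cmd> log) {
        Map<String, String> space = new HashMap<>();
        for (Cmd c : log) {
            if (c.op().equals("put")) space.put(c.key(), c.value());
            else if (c.op().equals("remove")) space.remove(c.key());
        }
        return space;
    }

    public static void main(String[] args) {
        List<Cmd> log = List.of(
            new Cmd("put", "M1", "Shawshank Redemption"),
            new Cmd("put", "M2", "No Country for Old Men"),
            new Cmd("remove", "M1", null));
        System.out.println(replay(log)); // prints {M2=No Country for Old Men}
    }
}
```

A snapshot is just an optimization of the same idea: serialize the current map so that replay can start from the snapshot instead of the very first command.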

Edge Computing

In our final-year BTech project, we built a Linux cluster to demonstrate how an on-demand SaaS-philosophy application (actually we didn't know the term at that time!) could be implemented. It was not a 100% ideological demonstration: we could run the cluster of 4 nodes in the lab, and the application... but not a grid-level implementation. We as a team worked well to make it happen. It was cutting edge, at a smaller level...

Edge computing ... ?

I did write an article on CDNs (Content Delivery Networks) before, but I didn't know about the concept of edge computing even when I watched the Beijing Olympics through NBC online, and I never guessed it when I watched YouTube videos. A real transformation is happening in this Web 2.0 cyberspace. The edge uses additional metadata to cache and transport data compared with traditional web caching; it is content-aware caching (CAC). This form of computing is made available to on-demand applications too: all those Web 2.0 mashups, widget networks, etc. I think it is too complex at the architectural and deep theoretical level to understand fully, because it seems so "cloudy": network latencies, jitter, content replication, databases spread throughout... and so on. A request for an HTML or JavaScript file is delivered from a nearby CDN node in networks like Akamai or Limelight (used by YouTube) to our PC or gadget.

What about personalized pages? They need dynamically generated content, and they get their data from databases spread across the web; database cache mechanisms deliver the data. What about business applications? Their database results have to be cached too. Suppose two queries share the same subset of data: they should be served from cache, reducing frequent database access. Using query templates and containment checks, this can be done. I saw a presentation from here explaining web application replication strategies. Technologies like sharding with MySQL, database clustering, etc. are used these days. A lot of papers have been published on this technology over the years. It is practical and is becoming the backbone of the internet for technologies like video streaming, VoIP, mashup services, even illegal botnets!! It provides high QoS, availability, and virtualized computing, with security through firewalls at all the nodes (centralized as well as network level). So the internet at the network level is becoming stronger and needs more complex administration....
Cool for network engineers. It is complex, and it sounds interesting and promising.
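The query-result caching idea above can be sketched as a small in-process cache (an illustrative toy, not how any particular CDN or database product implements it): results are keyed by the query template plus its parameters, so a repeated query is answered without touching the database again. The runQuery function here is a hypothetical stand-in for a real database call.

```java
import java.util.*;
import java.util.function.Function;

public class QueryCacheSketch {
    private final Map<String, List<String>> cache = new HashMap<>();
    private int misses = 0; // how often we had to go to the backing "database"

    // Answer a query, hitting the backing store only on a cache miss.
    public List<String> query(String template, String param,
                              Function<String, List<String>> runQuery) {
        // The template plus its parameters identifies the result set.
        String key = template + "|" + param;
        return cache.computeIfAbsent(key, k -> { misses++; return runQuery.apply(param); });
    }

    public int getMisses() { return misses; }

    public static void main(String[] args) {
        QueryCacheSketch cache = new QueryCacheSketch();
        // Hypothetical backing "database" lookup.
        Function<String, List<String>> db = city -> Arrays.asList(city + "-user1", city + "-user2");

        cache.query("SELECT * FROM users WHERE city = ?", "Hyderabad", db);
        cache.query("SELECT * FROM users WHERE city = ?", "Hyderabad", db); // served from cache
        System.out.println(cache.getMisses()); // prints 1
    }
}
```

Containment checking goes further than this exact-key match: it serves a query from a cached result that is a superset of what was asked, but the cost saving comes from the same place, skipping the database round trip.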

It's about data... More curiosity: what about the computing and implementation level?

I grabbed an article from Akamai regarding the deployment of Java enterprise applications in their CDN, from here

Another article I found is about ObjectWeb's JOnAS application server working on self-* autonomic computing, based on IBM research.

From the manifesto, I read about the comparison between today's autonomic computing on the internet and our autonomic nervous system, which frees the conscious mind from all those involuntary activities .....

Many intelligent students, professors, and engineers are working on it. Yes, the collective intelligence is in its embryo stage....

The breadcrumb folk tale and intelligent user interfaces

In the old German folk tale of Hansel and Gretel, the two young children attempt to mark their trail by leaving breadcrumbs on their path as they walk through a forest. In information architecture, especially interface design and GUIs, a breadcrumb refers to some sort of visual path that allows users to see where they are in the interface and to retrace their steps if needed. The enabling technology is the hyperlink, connecting Uniform Resource Identifiers. It is human psychology to virtually traverse one's thoughts when sorting out the information one needs; the words in the mind-space are connected through virtual links tracing the inner thoughts in the process of the data quest. The cliché of information at the tips of your fingers... The links in the web page you see in the browser are a virtual representation of information deep down in cyberspace. Are we entering a metaphysical psyche warp of our mind through these portals? :P

A psychology paper from the University of Wichita shows that when breadcrumbs are present in the model, users tend to use these shortcuts more. This is definitely a good approach for defining related entities, and users will be able to navigate through the huge data on the web. Is it possible for an intelligence on the web, which some call collective intelligence, to converse with the user through this navigation tool?


As semantic technologies become more prominent in the internet information architecture of the near future, the user interfaces that communicate this collective intelligence to the user are expected to be simple and elegant. Maybe touch screens and 3D interfaces will provide an immersive user interaction. What will be the fastest way of traversal?

An old Buddhist monk revealed to his disciple that the mind is the fastest traveler in this universe... Will future interfaces catch up with the lightning speed of the mind? Can we expect warp holes in our web pages? Will there be information teleporters on our desktops (that sucks!)? Even hyperlinks are based on these ideologies... maybe that will suffice... The problem is that we choose the path, for now...

Ha, there I saw an old paper describing AI-based interfaces. It says:

Most researchers would agree that a system that can maintain a human dialogue would be considered intelligent (remember the Turing test?). The problem is that there are a lot of interfaces that we would consider intelligent, that do not look "human" in any sense at all. An example is the PUSH interface , which presents hypertext in a manner that is adapted to the user's current task. The system is controlled mainly through direct manipulation, but the output consists of a text where certain pieces of the text are "hidden" from view, to give a comprehensive overview of the pieces of text that are most relevant to the user in his or her current task. This very passive form of user adaptation does not in any way mimic human behaviour, but is constructed to be a natural extension of the hypertext view of information.
I like playing computer games. In games, the interface or the protagonist's goals can be changed by the game's AI according to user interaction. I would say that is what is happening on the web: the information we seek will be based on the understandings made by a collective intelligence in the web. Suppose you want to travel somewhere during your vacation... what if the application itself chooses the best destination, information about it, the room, the flight, the traveler's tools and so on, and displays it all in the browser within a second? Sounds cool, even if you are on your way back home with your personal intelligent assistant (possibly a future iPhone!). See this film short made by Apple in the 1980s, which imagined a future of computing via intelligent agents long before the internet blipped into existence. Link


In the future, the choice won't be yours..

Google Guice - sample code on multiple bindings

Google Guice is a dependency injection framework for Java. It is lightweight because it is not a full-blown container like PicoContainer or Spring; all of these are based on the Inversion of Control (IoC) principle. I don't have much experience with Spring or PicoContainer. Guice is used in Apache Shindig. It is a dependency injector built on the language features of generics and annotations. I wanted to know more about aspect oriented programming, IoC etc., so I tried a simple one: Guice. If you want to know more about IoC and Dependency Injection, go through Martin Fowler's article. I wrote a small example that shows how multiple dependency bindings can be done in Guice.


IM.java

package com.im.sample;

public interface IM {
    void sendMessage(String message);
}

An interface IM is created, to be implemented by different classes. We could bind this interface to a class with the @ImplementedBy annotation, but that supports only a single implementation; we need multiple implementations.

GTalkIMImpl.java

package com.im.sample;

public class GTalkIMImpl implements IM {
    @Override
    public void sendMessage(String message) {
        System.out.println("Gtalk:" + message);
    }
}

YahooIMImpl.java

package com.im.sample;

public class YahooIMImpl implements IM {
    @Override
    public void sendMessage(String message) {
        System.out.println("Yahoo:" + message);
    }
}


We use a Module to bind classes to interfaces. It acts as the configuration mechanism.

IMModule.java

package com.im.sample;

import com.google.inject.Binder;
import com.google.inject.Module;
import com.google.inject.name.Names;

public class IMModule implements Module {

    @Override
    public void configure(Binder binder) {
        binder.bind(IM.class).annotatedWith(Names.named("GTalk")).to(GTalkIMImpl.class);
        binder.bind(IM.class).annotatedWith(Names.named("Yahoo")).to(YahooIMImpl.class);
    }
}


Here Names.named(...) binds each implementation to a string name, which the built-in @Named annotation refers to at the injection point. Alternatively, we can create custom binding annotations like the two below and use them with annotatedWith(GTalk.class) for type-safe binding.

GTalk.java

package com.im.sample;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import com.google.inject.BindingAnnotation;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@BindingAnnotation
public @interface GTalk{}


Yahoo.java

package com.im.sample;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import com.google.inject.BindingAnnotation;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@BindingAnnotation
public @interface Yahoo{}


These annotations provide the named binding. @Retention(RetentionPolicy.RUNTIME) makes the annotation available at runtime. @Target defines which program elements the annotation can be applied to. @BindingAnnotation marks annotations that Guice should use for binding. More here.

Messenger.java

package com.im.sample;

import com.google.inject.Inject;
import com.google.inject.name.Named;

public class Messenger {

    @Inject @Named("GTalk") private IM gtalk;
    @Inject @Named("Yahoo") private IM yahoo;

    public Messenger() {
    }

    public IM getGtalk() {
        return gtalk;
    }

    public IM getYahoo() {
        return yahoo;
    }
}

Test.java

package com.im.sample;

import com.google.inject.Guice;
import com.google.inject.Injector;

public class Test {

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new IMModule());
        Messenger msg = new Messenger();
        injector.injectMembers(msg);
        msg.getGtalk().sendMessage("Hello Gtalk");
        msg.getYahoo().sendMessage("Hello Yahoo");
    }
}


Guice automatically finds the bound IM implementations and injects them into the Messenger instance. Guice seems simple and cool. Using Guice we can mostly avoid configuration files (I don't know whether this is an advantage in all situations!) and we can get better maintainability. The coding is smooth, and with annotations it becomes more flexible.
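The sample above needs the Guice jar on the classpath. As a rough mental model only (this is not how Guice is implemented internally), the named bindings behave like a lookup table from names to implementations, which can be sketched in plain Java:

```java
import java.util.HashMap;
import java.util.Map;

public class NamedBindingSketch {

    interface IM { void sendMessage(String message); }

    // Illustrative stand-ins for the GTalkIMImpl / YahooIMImpl bindings above
    static final Map<String, IM> BINDINGS = new HashMap<>();
    static {
        BINDINGS.put("GTalk", m -> System.out.println("Gtalk:" + m));
        BINDINGS.put("Yahoo", m -> System.out.println("Yahoo:" + m));
    }

    public static void main(String[] args) {
        // Roughly what resolving @Inject @Named("GTalk") amounts to
        BINDINGS.get("GTalk").sendMessage("Hello Gtalk");
        BINDINGS.get("Yahoo").sendMessage("Hello Yahoo");
    }
}
```

The real injector does this resolution via reflection at injection time, so client code never touches such a table directly.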

Go through the user's guide here.

If there is anything to be changed or improved... try it out anyway.

Debug OpenSocial gadgets: error codes

I was playing around with Apache Shindig to develop a simple OpenSocial application, and I found it handy to use the error codes available in the OpenSocial specification. Use this sample gadget for testing purposes. The specification (http://code.google.com/apis/opensocial/docs/0.8/reference/#opensocial.ResponseItem.getErrorCode)

says to get the error code from the getErrorCode function, which returns an enumeration, so error codes can be matched against the Error class. I tried response.getErrorCode()... hopeless, it is undefined, while response.hadError() was working!! That is because getErrorCode lives on the individual ResponseItem, not on the DataResponse. I think the spec document should be clearer, with examples... mmm, try the code..





<script type="text/javascript">
function getData() {
  var req = opensocial.newDataRequest();
  req.add(req.newFetchPersonRequest('VIEWER'), 'viewer');
  req.send(callback);
}

function callback(response) {

  if (!response.hadError()) {
    alert("Success");
    var html = "Test OK";
    document.getElementById('message').innerHTML = html;

  } else {
    alert(response.getErrorMessage());
    var viewer_resp_item = response.get('viewer');
    switch (viewer_resp_item.getErrorCode()) {
      case opensocial.ResponseItem.Error.INTERNAL_ERROR:
        /* The request encountered an unexpected condition that prevented it from fulfilling the request */
        alert('There was an error on the server.');
        break;
      case opensocial.ResponseItem.Error.UNAUTHORIZED:
        /* The gadget does not have access to the requested data. */
        alert('There was a permissions issue: the app is not allowed to show the data.');
        break;
      case opensocial.ResponseItem.Error.BAD_REQUEST:
        /* The request was invalid; wrong parameters etc. */
        alert('The request was invalid.');
        break;
      case opensocial.ResponseItem.Error.FORBIDDEN:
        /* The gadget can never have access to the requested data. */
        alert('The gadget is forbidden from accessing this data.');
        break;
      case opensocial.ResponseItem.Error.NOT_IMPLEMENTED:
        /* This container does not support the request that was made
           (different versions of implementations). */
        alert('The container did not implement this particular OpenSocial interface.');
        break;
      case opensocial.ResponseItem.Error.LIMIT_EXCEEDED:
        /* The gadget exceeded a quota on the request. */
        alert('Limit exceeded.');
        break;
    }
  }
}

gadgets.util.registerOnLoadHandler(getData);
</script>





Cool open source projects associated with Apache Shindig

There are a lot of open source initiatives happening around OpenSocial application development. Apache Shindig is an incubator project that helps in developing OpenSocial containers.

Guice

The dependency injection framework from Google. It helps in developing custom handlers in Shindig. It takes instance creation (in the form of services) out of the application client code, and the dependency between clients and their services is automatically injected through a simple configuration mechanism.
http://code.google.com/p/google-guice/

Apache commons

The popular reusable components.
http://commons.apache.org/
-Lang
Extends java.lang functionality
http://commons.apache.org/lang/
-Betwixt
Maps java beans to XML
http://commons.apache.org/betwixt/
-Codec
Utilities for encoding/decoding algorithms (Base64 etc.).
http://commons.apache.org/codec/

Oauth

The security protocol. I have written about OAuth here.
For Java:
http://oauth.googlecode.com/svn/code/java/core/

JSON

The JavaScript data-interchange format.
http://json.org/

Joda Time

A replacement for Java's date and time classes.
http://joda-time.sourceforge.net/

Abdera

Implements the Atom protocol and syndication. It's all about data at Google..

http://incubator.apache.org/abdera/

Caja

Caja (pronounced ka-ha) makes it possible to run third-party JavaScript alongside existing code. Mechanism for javascript security.
http://code.google.com/p/google-caja/

Maven

The popular build tool, commonly used as the project management tool for open source projects: build, manage dependencies, update and so on. Very flexible; it appears in many guises..
http://maven.apache.org/

Jetty

Used to run the sample applications provided with Shindig.

ICU4j

For internationalization.
http://www.icu-project.org/



What is Computational REST ?

I have written about REST before. CREST is Computational REST. I think the concept will be a milestone in developing applications on Resource Oriented Architecture (ROA), which is a set of guidelines for implementing the REST architectural style. This Wikipedia article gives a simple idea of the world of representations.

The image, taken from here, illustrates the kind of message passing used in Representational State Transfer.

REST is an architectural style for distributed systems. RESTful web services are gaining importance in developing scalable applications. The web 2.0 technologies (ajax, mashups, comet, pub-sub) envision the power of computing over the internet, where computational resources can be accessed by URIs. So what is the buzz about CREST?

A client needs to execute a program; the origin server executes the code and returns the result. This forms the basis of CREST, or Computational REST. Code snippets containing conditional logic can be sent, and the response will reflect the behavior of the application, so the application can become more client-centric: the service behavior changes with different client conditions. Computational REST (CREST) is proposed as an architectural style to guide the construction of computational web elements. The idea is similar to mobile code, and it originated from "Rethinking Web Services from First Principles", which gives the underlying principles of REST and CREST. The technology is under research, so internet technologies are heating up.
The technology for using REST for process-intensive integration is only in a budding stage: distributed processing, the REST way. The programmability of the web is becoming complex and disruptive!

A pretty pass time on competency management and ontology

I happened to read about competency management systems (CMS) and some research going on around them. I was just curious about how they are made. Why do we need a CMS? What is competence? Read on. In large organizations, human beings as skilled resources are managed using HRMS tools in an effective manner so that maximum productivity can be achieved....

I have read that in armies, skilled men are selectively recruited to complete mission-critical tasks. In a super-industrial society, human workers are picked as components, assembled into a team, and the task is executed. After that they are dispersed and combined with others, and the process goes on. This happens commonly in big software companies. So software like a CMS is used to manage people in a company or team, for manageability and efficiency (mostly quantifiable). This is mainly because the world is now a knowledge economy, so knowledge management is an essential factor in any sector. Employees have to be trained well on the desired competencies, skill gaps have to be filled, and information about people, processes, products and all resources has to be managed.
Companies involve business processes and related information work. Software companies have people like domain experts, subject matter experts, developers, managers etc. Knowledge management (KM) combines tools and technologies to support the capture, access, reuse and dissemination of knowledge, generating benefits for the organization and its members. The skills of employees need to be updated as technology advances day by day due to the fast and wide distribution of information; new-generation internet technologies and concepts like web 2.0 and social tools have made things advance beyond our imagination. I found that NASA uses a CMS for managing the skills of their workforce. I tried searching the net again and learned that ontology-based relationships (or rule-based systems) can define the model for competency. One can build frameworks for shareable ontologies and knowledge bases represented with Semantic Web languages, and develop competency-based web services dedicated to human resource management. A simple ontology looks like this -
Pretty n00b... I think... :D
I think it is basically a research subject, and for more complex scenarios more complex systems will evolve. That's what I understood. The world is too competitive!!

Links for the interested :
http://www.ontology-advisory.org/
http://www.successfactors.com/
http://professional-learning.eu/overview
Papers I found here and here

Good pass time..

Simal - Integrating social semantic nature of web to open source projects


Open source projects are becoming more prominent and powerful due to the web 2.0 / 3.0 revolution. A mammoth number of projects are huddled on the web. Sites like SourceForge, Freshmeat etc. have thousands of open source projects cataloged in them. There are projects undertaken by universities and academics, and many developers around the world work on various technologies. What if socialization happened to these project endeavors? How do we integrate and share information about multiple projects around the web?

Simal is a tool for building and tracking projects and the people involved in those projects. It is a framework for building a project registry. Metadata models like RDF and document languages like XML provide a solid foundation for cataloging mechanisms. The catalogs can be generated from RDF documents maintained and hosted by the projects themselves. An entry contains a description of what the project is doing, team members, the community's structure, deliverables, etc. By exposing such data, interaction between related projects becomes possible. The project catalog is built by importing Description of a Project (DOAP) files from projects; DOAP has an RDF vocabulary.

The DOAP for the project looks like this
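For illustration, a minimal DOAP file is just RDF/XML using the DOAP vocabulary; the project name, description and URLs below are placeholders:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns="http://usefulinc.com/ns/doap#">
  <Project>
    <!-- illustrative values only -->
    <name>Example Project</name>
    <shortdesc xml:lang="en">A one-line description of the project</shortdesc>
    <homepage rdf:resource="http://example.org/" />
    <maintainer>
      <foaf:Person>
        <foaf:name>Jane Doe</foaf:name>
      </foaf:Person>
    </maintainer>
  </Project>
</rdf:RDF>
```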

The DOAP project design is explained by Edd Dumbill here, here and here.

The RDF semantics are used to build a community-like feature. Simal can aggregate the RSS feeds from a project (trac/wiki changes, version control feeds, mailing list feeds etc.), a sort of FriendFeed. Bugs, changes, their solutions etc. can be updated among various projects.

A very interesting thing I noted is the addition of FOAF into the vocabulary. FOAF allows groups of people to describe social networks without the need for a centralized database. The Person Browser in their sample demo shows the implementation of this feature. Simal also has a REST API providing a high-level means of accessing the data stored in the registry.

The integration of the OpenSocial API and the availability of gadgets can make the application more widespread. The Simal Web module provides an OpenSocial container; Simal uses Shindig, which is the reference implementation of the OpenSocial standard.

Simal is utilizing the social semantic nature of the web (termed web 3.0?) to make the open source revolution more powerful.

What about Simal in large software companies? Project repository cataloging could give information about projects, people, teams, their social networks, and projects done with other organizations. Simal could become a powerful utility in developing next-generation project management and enterprise applications.

See demo

Visual Analytics -- Programming meets Art



Images provide an effective form of communicating information. From cave-age men to the gadget chimps of the new era, people draw, use and share images as a medium for messages and emotions. As time progressed we gathered the means to process most of the information around us. So the data is large, immense and unimaginable... People use these bits to analyze and predict the factors affecting their context. Visual Analytics, I believe, is one of the most challenging fields for information engineers. Visual analytics has been defined as "the science of analytical reasoning supported by the interactive visual interface". More ..


I have seen most blogs and web 2.0 sites carrying a visual element known as a "tag cloud". Information is tagged, and the visitor is informed of the relevance of the site through the level of occurrence of each word. It creates a semantic space... mmm, like a cloud. It is merely a statistical visualization of the frequency of use of tagged words: a relevance-by-size list. It thus provides a form of information filtering. I don't want to discuss the lexical analysis or polysemous distribution, but rather the power of visualization and the interactivity model. I came to know that a lot of interesting things happen in the information graphics field. The one I came across was a project released by IBM called Many Eyes. Essentially Many Eyes is a mashup machine for visualizing data! Here. They have a gallery of visualizations: software engineering meets art. It uses Prefuse, a set of software tools for creating rich interactive data visualizations. Written by Jeff Heer, this toolkit supports a rich set of features for data modeling, visualization, and interaction. Google also provides an excellent API for visualization.
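The relevance-by-size idea is just word frequency mapped to a display size. A minimal sketch in Java (the font-size formula is an arbitrary illustration, not any particular site's algorithm):

```java
import java.util.Map;
import java.util.TreeMap;

public class TagCloudSketch {

    // Count tag occurrences; a tag cloud then maps each count to a font size
    static Map<String, Integer> frequencies(String tags) {
        Map<String, Integer> freq = new TreeMap<>();
        for (String t : tags.split("\\s+")) {
            freq.merge(t, 1, Integer::sum);
        }
        return freq;
    }

    public static void main(String[] args) {
        frequencies("rest json rest guice json rest").forEach((tag, n) ->
            // e.g. a base of 10px plus 4px per occurrence
            System.out.println(tag + " -> font-size:" + (10 + 4 * n) + "px"));
    }
}
```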

Robert Tappan Morris, the guy who developed the first worm, was trying to map the total number of computers in the world. Interestingly, there is currently a project running to visualize the nodes around the globe!
Opte is a project that lets you graphically map the internet. The data represented and collected there serves a multitude of purposes: modeling the internet, analyzing wasted IP space, IP space distribution, detecting the results of natural disasters, weather, war, and aesthetics/art.


This graph is by far our most complex. It is using over 5 million edges and has an estimated 50 million hop count.

Asia Pacific - Red
Europe/Middle East/Central Asia/Africa - Green
North America - Blue
Latin American and Caribbean - Yellow
RFC1918 IP Addresses - Cyan
Unknown - White

Cool..

Death of ECMA 4 and future of Native JSON parsing



JSON is a subset of JavaScript. It represents data as tokens of name-value pairs, and provides efficient data portability for mashable applications. The JSON syntax is like JavaScript's object literal syntax, except that the objects cannot be assigned to a variable; JSON just represents the data itself. The data is a string, and this literal has to be converted to an object. We can use eval() in JavaScript to evaluate the script, but using eval() is harmful, so a JSON parser will recognize only JSON text, rejecting all scripts.




If you need a JavaScript parser, use:

JSON.parse(strJSON) - converts a JSON string into a JavaScript object.
JSON.stringify(objJSON) - converts a JavaScript object into a JSON string.
Other parsers are

Jackson JSON Processor, based on a streaming API in the style of StAX (the STreaming API for XML).

JSON-lib is a Java library for transforming beans, maps, collections, Java arrays and XML to JSON and back again to beans and DynaBeans.

If you are using Mozilla Firefox and your browser is based on Gecko 1.9 or above, there is a regexp test() function available.

According to the specification proposed by D. Crockford:

A JSON text can be safely passed into JavaScript's eval() function (which compiles and executes a string) if all the characters not enclosed in strings are in the set of characters that form JSON tokens. This can be quickly determined in JavaScript with two regular expressions and calls to the test and replace methods.

var my_JSON_object = !(/[^,:{}\[\]0-9.\-+Eaeflnr-u \n\r\t]/.test(
text.replace(/"(\\.|[^"\\])*"/g, ''))) &&
eval('(' + text + ')');
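The same check can be ported to Java with java.util.regex: strip the string literals first, then reject the text if any character outside the JSON token set remains. This is only the RFC 4627-style safety test, not a parser:

```java
import java.util.regex.Pattern;

public class JsonSafetyCheck {

    // Remove quoted strings (with escapes), mirroring /"(\\.|[^"\\])*"/g
    private static final Pattern STRINGS =
            Pattern.compile("\"(\\\\.|[^\"\\\\])*\"");

    // Any character outside the JSON token set makes the text unsafe,
    // mirroring /[^,:{}\[\]0-9.\-+Eaeflnr-u \n\r\t]/
    private static final Pattern NON_TOKEN =
            Pattern.compile("[^,:{}\\[\\]0-9.\\-+Eaeflnr-u \\n\\r\\t]");

    public static boolean looksLikeJson(String text) {
        String stripped = STRINGS.matcher(text).replaceAll("");
        return !NON_TOKEN.matcher(stripped).find();
    }

    public static void main(String[] args) {
        System.out.println(looksLikeJson("{\"a\": [1, 2.5e3, true]}"));  // true
        System.out.println(looksLikeJson("alert('pwned')"));             // false
    }
}
```

In JavaScript you would then eval() the vetted text; in Java you would hand it to a real parser such as JSON-lib or Jackson instead.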


This technology is an alternative to XML for porting data. Web service implementations return XML data, and applications use parsers to generate native objects from that XML. XML is the data format used in AJAX, which became the buzz of the new web age. The data format is vital to the efficiency of any application, and JSON provides efficient data transfer.

Read JSON: The Fat-Free Alternative to XML

If you want to add JSON to application , read here how Yahoo webservices implemented this.

According to John Resig, browsers should support native JSON.

He summarises that:

The current, recommended, implementation of JSON parsing and serialization is harmful and slow. Additionally, upcoming standards imply that a native JSON (de-)serializer already exists. Therefore, browsers should be seriously looking at defining a standard for native JSON support, and upon completion implement it quickly and broadly.
Read here for the ECMA 4 proposal

but now ?

ECMAScript 4.0 Is Dead


JavaScript standards wrangle swings Microsoft's way

The industry is again having war on browsers and internet standards.

So different methodologies for handling JSON will exist for some years to come. As systems become more complex, vulnerabilities will arise, and we will have to find resolutions for those issues every time.

Need privacy ? Go for oAuth



On Facebook we can import the contact list from our mail accounts... mmm, do you trust it enough to give away your secret password?

Here comes the saviour ( hope !!)



An open protocol to allow secure API authentication in a simple and standard method from desktop and web applications. It is very useful for the authentication mechanisms needed by mashups/widgets, and it is a data portability standard. This method can help port data between different web apps. Most mashup widgets ask users for their credentials in order to fetch contact lists, post links to bookmarks, blogs etc. Using oAuth, users no longer need to give up their confidential account usernames and passwords to 3rd party services in order for those services to access their data, for example on Google services. I think it is more than OpenID. oAuth is now available for all Google Data APIs, everything from Gmail contacts to Google Calendar to Docs to YouTube.

Thus oAuth allows you to grant access to your private resources on one site to another site, without sharing your passwords..

cool!


ReadWriteWeb says:

Apps that don't use the approved Google user authentication method in short order will be acting like a mail carrier who says they have to have a key to the inside of your house to pick up your mail because they aren't familiar with the mailbox on the front porch.
Read Mashups: Google's Adoption Makes oAuth a Must Have for All Apps


oAuth.net says:

Everyday new website offer services which tie together functionality from other sites. A photo lab printing your online photos, a social network using your address book to look for friends, and APIs to build your own desktop application version of a popular site. These are all great services – what is not so great about some of the implementations available today is their request for your username and password to the other site. When you agree to share your secret credentials, not only you expose your password to someone else (yes, that same password you also use for online banking), you also give them full access to do as they wish. They can do anything they wanted –even change your password and lock you out.

This is what OAuth does, it allows the you the User to grant access to your private resources on one site (which is called the Service Provider), to another site (called Consumer, not to be confused with you, the User). While OpenID is all about using a single identity to sign into many sites, OAuth is about giving access to your stuff without sharing your identity at all (or its secret parts).

How does it work ?


If there is a service provider X (a mashup/widget/social app/bookmark app....), we have to get resource data from it. The consumer C has an app Y; it can be a 3rd party widget, a desktop app or any other tool which uses data from service X. The consumer app Y has to be registered with service X.



When the user decides to get the data, there will be a dialog between the service and the consumer: a signed getRequestToken call happens at

http://X/oauth/get_request_token

Then the user is directed to the service with the token, and he authorizes it at

http://X/oauth/authorize.

Then the user is sent back to the consumer site, and there is a dialog to exchange the request token for an access token at

http://X/oauth/get_access_token

Further communication will be based on this signed access key.

So the user need not be concerned about privacy, as this form of "contract" exists between the consumer app and the service provider.
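The "signed" part above usually means an HMAC-SHA1 signature computed over a normalized base string (HTTP method, percent-encoded URL, percent-encoded sorted parameters), keyed by the consumer secret and token secret. A minimal sketch in Java; the base string and secrets below are made-up placeholders, not a real exchange:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

public class OauthSignSketch {

    // In OAuth 1.0 the signing key is "<consumerSecret>&<tokenSecret>"
    // (both percent-encoded); here it is just a placeholder string.
    static String hmacSha1(String baseString, String key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key.getBytes("UTF-8"), "HmacSHA1"));
        return Base64.getEncoder()
                     .encodeToString(mac.doFinal(baseString.getBytes("UTF-8")));
    }

    public static void main(String[] args) throws Exception {
        // Base string: METHOD & encoded-URL & encoded-sorted-parameters
        String base = "GET&http%3A%2F%2FX%2Foauth%2Fget_request_token"
                    + "&oauth_consumer_key%3Dkey%26oauth_nonce%3Dabc";
        System.out.println(hmacSha1(base, "consumerSecret&tokenSecret"));
    }
}
```

Real libraries (like the Java implementation linked earlier) also take care of nonce and timestamp generation and the exact percent-encoding rules.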

oAuth UX Flow :


A number of companies and individuals are working on solutions to this problem including Google, Yahoo and Microsoft, as well as the OAuth project. Initiated by Blaine Cook, Chris Messina, Larry Halff and David Recordon, OAuth aims to provide an open standard for API access delegation. The OAuth discussion group was founded in April 2007 to provide a mechanism for this small group of implementers to write the draft proposal for the protocol.

OAuth is already gaining considerable momentum, with implementations for many popular languages including Java, C#, Objective-C, Perl, PHP and Ruby. The majority of these implementations are hosted by the OAuth project via a Google Code repository. Sites supporting OAuth include Twitter, Ma.gnolia, Photobucket and Google.

First law for an unruly software programmer

All I want to do in life is code (!! or ?) mm... eat, sleep, have relationships (perpetual me-self), play a lot of computer games, read books... read, read... research... These thoughts became the fundamental principle for a "programmer" who didn't know about the laws of programming! Ha, are there any laws to be followed? The thermodynamics of a heated discussion on solving a bug within a team would have made revelations to some... what if we had found this bug early, or why did we care less about that code block... etc. I have a little experience of the spaghetti world of programming languages (being humble :P).

First law of programming says:

Lowering quality lengthens development time.

Yes. It's a good read.

We do a job, but we have to make the work the best we can. To err is human; we are in a rush to finish the task and impress our customer. IMHO, if we are able to solve bugs step by step along with development, most of the issues will be solved. Yes: test-driven development, agile, extreme programming...
I think with experience in coding and code reviews, we will be able to find bugs during development itself. Bug repositories, tracking etc. can provide a checklist for the desired results.
Some people say that requirements are never fully achieved by the developed product, while others say we needn't give much importance to requirements: give more importance to the code and the developed software, and make it smart. But we should know what we are making and what the customer expects from the product. Are these requirements bullshit? Read here

A good programmer will be a good hacker. But if that guy does hack-oriented programming, will the estimated schedule be met? Will his job be at stake...? It depends upon a good manager :)

Sometimes the induced bugs can even be lucrative (the devilish side): bug-fixing prices!! I believe that's a bad way of doing service, delivering the successfully "on-time" product that sucks. Jolly customers!!

It all depends on what we are developing and where we are. Happy programming.