Tuesday, December 9, 2008

Going Undercover....

I recently completed the video game 'Need for Speed Undercover', the latest installment in EA's popular NFS series. The game was so exciting and thrilling that I had to blog about it.

I have been a big fan of the NFS series since NFS 2. I really liked NFS Underground, NFS Underground 2 and NFS Most Wanted. NFS Underground was the first game in the NFS series to have an underlying storyline. Underground 2 and Most Wanted took the series to a whole new level by making the storyline even more exciting and adding more features to the game. However, I was a bit disappointed when NFS Carbon came out. It certainly did not live up to the hype, and I found the game rather short and less challenging compared to its predecessors. Things got even worse when they released NFS Pro Street. To me Pro Street is the most unsuccessful game in the whole series. The whole point of playing a racing game on your PC or PS3 is to drive extremely fast and crash into things (something you cannot do in the real world without hurting someone or getting brain damage). But Pro Street does not allow the player to do that, since going fast and crashing into objects gets your car totaled and immobilized. Fixing damaged cars also consumes large sums of the money the player has earned in the game's career mode. And the biggest problem I had with NFS Pro Street was that it took the game back to the world of professional racing on closed circuits. For somebody who liked the virtual street racing experience offered by NFS Underground and NFS Most Wanted, getting used to racing on closed tracks is almost like being tortured.

However, it seems the folks at EA have learned their lesson from the feedback they got from fans about NFS Pro Street. With NFS Undercover they have taken the action back to the open streets. The player gets to drive around a virtual city in some of the world's finest automobiles. A couple of cars worth mentioning are the Bugatti Veyron and the McLaren F1, which was last available in NFS 6. Cars are categorized into three groups, namely Japanese, European and American (see the complete car list). NFS Carbon fans will realize that these three categories effectively map to the three car categories in NFS Carbon (tuners, exotics and muscles). The city has plenty of wide open roads where the player can really push the cars to their limits. Most of the tracks let the player accelerate the car to its top speed and hold that velocity.

Another aspect of the game I liked the most was the soundtrack. The predecessors of NFS Undercover had mostly hard rock and technical death metal oriented soundtracks. But Undercover brings you a little bit of pop, a little bit of Latin and even a little bit of Spanish in addition to the rock stuff.

The underlying story of the game is perhaps the most interesting feature. In previous versions of NFS the storyline was simple and very predictable. In those games there is some kind of badass champion racer who bosses the other racers in town (for example, Razor in Most Wanted and Darius in Carbon). The player enters the world of racing as an amateur and works his way up to become the new racing king of the town. In the process the player has to confront many other racers, companions of the existing champion and sometimes even the police. But NFS Undercover brings you a totally different storyline, full of action, suspense and even a bit of romance. It clearly has traces of 'Gone in 60 Seconds', 'The Fast and the Furious' and 'CSI: Miami'. The game also lets the player play the roles of street racer, gang member, car thief and federal agent.

The game is fast paced and action packed, and the graphics and related effects are natural and realistic. It's everything racing gamers ever dreamed of. If you are like me and you haven't played Undercover yet, go for it. It's totally worth it. Well done EA. You guys rock!

Also don't forget to check out NFS Planet for some cool Undercover wallpapers and movies.

You are not good
and you are not bad.
You blend and you listen
and you don't trust anyone.
Once you go undercover
you are on your own.

Thursday, October 23, 2008

MOINC - The Future of Web Services!!!

As Web services continue to become the dominant paradigm in distributed computing, more and more people around the world are getting interested in improving the performance and scalability of Web services deployments. An enterprise grade Web services deployment must be highly available and scalable in order to be useful to clients and provide a competitive advantage to the service provider. For this reason it has become a must for Web services middleware providers to support high availability and scalability via general techniques such as clustering.

Some of my colleagues and I recently started a venture to explore the potential of grid computing as a means of improving the availability and scalability of Web services deployments. Our plan is to design and develop a complete open source Java Web services deployment platform which uses grid computing as the basis for clustering. This exciting concept is the brainchild of one of our lecturers at the Department of Computer Science and Engineering, Dr. Sanjiva Weerawarana, who also happens to be the chairman and CEO of WSO2. The research project focuses not only on improving the scalability of Web services deployments via grid computing but also on putting much-wasted electrical energy to good use by getting idling computers to do useful work.

We have named the project 'MOINC', which stands for Mora Open Infrastructure for Network Computing. (If you are wondering what 'Mora' is all about, it's a little nickname for the University of Moratuwa. 'Mora' also means shark in Sinhala.) So far we have managed to successfully complete the requirements gathering and design phases of this massive R&D effort. We have identified four components in our target platform, and four groups of undergrads are working on implementing them. The four components are:
  • MOINC Server
  • MOINC Client Agent
  • MOINC Server Manager
  • Thisara Communication Framework
You can learn more about our project and its four main components from our official website, where you will also find links to our design documents, proposals, specifications and SVN repositories. We recently released the 0.1-alpha version of the Thisara messaging framework, one of the four components of the MOINC platform. You can download the binaries from our website too.

Friday, October 10, 2008

Google Chrome, Looking Good

It seems almost everybody is taking a strong liking to Google Chrome these days, and almost everybody who has downloaded it seems to appreciate it a lot. So I thought I'd give Google Chrome a trial run myself.

However, I have to say that my first impression of Google Chrome was not a very good one, mainly because it’s only available for the Windows platform at the moment. I’m an open source fan, which makes me naturally dislike anything that won’t run on Linux out of the box. However, I think I can forgive Google for that. I believe they just wanted to keep things simple by focusing on one target platform, since this is only a beta release. They also probably wanted to get the maximum possible test coverage out of this release. In that case Windows is the only logical choice since, whether we like it or not, Windows still dominates the client side system software market. (And it will probably remain that way for a long time :-( )

Other than the above mentioned glitch, my experience with Google Chrome has been fairly good. It’s easy to install and configure. I really like the wording and labeling conventions used in Chrome. It’s simple, casual English that anybody can understand. Not a single technical word is used, so even someone who is browsing the Internet for the first time can easily get used to Google Chrome. Even the buttons that are usually rendered as ‘browse’ buttons in other browsers are rendered as ‘choose file’ buttons.

The user interface is pretty cool too. Chrome’s UI designers have clearly dumped the approaches taken by the IE and Firefox developers and come up with their own style. There are no menu bars and no toolbars; all it has is the address bar, with a few additional controls embedded in it. The address bar can be used to type in both URLs and search queries. One advantage of this organization is that it makes the browser’s main window larger, so the user can see more content without having to scroll much. But the problem is that for someone who is used to IE or Firefox, figuring out how to control the browser is going to be a bit of a pain.

In my opinion it performs slightly better than Firefox 3 and IE 7 (well, at least I feel that way). I have experienced frequent browser crashes with both IE 7 and Firefox 3, but I’m yet to experience that in Chrome.

Perhaps the most enticing feature of Chrome is that every time a new tab is opened, a new process is forked off by the browser. This way each tab gets its own set of resources, which should rectify most of the memory related issues experienced by other browsers. If you are a regular Firefox user, you know that as you continue to open new tabs in Firefox, the overall performance of the browser degrades significantly. This is because all the tabs have to share the same set of resources (memory in particular). By dedicating a separate process to each tab, Google Chrome effectively deals with this issue.

I will continue to test Google Chrome in the days to come. I have just started with Chrome, so I will be in a better position to give comprehensive feedback after another couple of weeks. Until then, ‘good job Google’!

Wednesday, October 8, 2008

Lex It! Yacc It!

Lately I have been working a lot with the two legendary parser generator tools, ‘Lex’ and ‘Yacc’ (well, it’s actually ‘Flex’ and ‘Bison’ to be precise). To be honest, I didn’t have any experience with Lex and Yacc two weeks back. But the two applications are so easy to learn and master that anyone could become an expert compiler developer in no time. Practically speaking, one could start developing parsers and compilers without any prior knowledge of compiler theory or language parsing. All you need to get started is some experience in C programming. However, I have to admit that some knowledge of regular expressions, context free grammars and pushdown automata can make your life a lot easier, since it helps you better understand the syntax of the two configuration languages used by Lex and Yacc.

Lex (or Flex) generates a C function called yylex which is capable of scanning and tokenizing a given input. The input to Lex primarily consists of regular expressions and C code segments to be executed upon detection of strings that match the regular expression definitions. Yacc (or Bison) generates a C function called yyparse which accepts a sequence of tokens from yylex and parses them. The Yacc input includes the productions of the language grammar and C code segments to be executed upon application of the productions.
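To make this concrete, here is a minimal sketch of a Yacc grammar for arithmetic expressions (the file names and build commands are illustrative, and this is not the grammar from my math package; a companion Lex file supplies the NUMBER tokens and returns the operator characters and newlines as single-character tokens):

```yacc
/* calc.y -- a minimal expression grammar.
 * Build (roughly): yacc -d calc.y && lex calc.l && cc y.tab.c lex.yy.c -o calc
 * The companion calc.l sets yylval and returns NUMBER for digit runs,
 * and returns +, -, *, /, parentheses and '\n' as themselves.
 */
%{
#include <stdio.h>
int yylex(void);
void yyerror(const char *s) { fprintf(stderr, "error: %s\n", s); }
%}
%token NUMBER
%left '+' '-'
%left '*' '/'
%%
input : /* empty */
      | input expr '\n'   { printf("= %d\n", $2); }
      ;
expr  : NUMBER
      | expr '+' expr     { $$ = $1 + $3; }
      | expr '-' expr     { $$ = $1 - $3; }
      | expr '*' expr     { $$ = $1 * $3; }
      | expr '/' expr     { $$ = $2 > 0 || $3 != 0 ? $1 / $3 : 0; }
      | '(' expr ')'      { $$ = $2; }
      ;
%%
int main(void) { return yyparse(); }
```

The %left declarations resolve the ambiguity in the binary operator productions, which is exactly the kind of detail that a little grammar theory helps you appreciate.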

It’s truly amazing how easy it is to develop parsers for complex languages with Lex and Yacc. The more I use them, the more I like them. The syntax of the configuration files is well structured and easy to understand. I was so inspired by these two applications that I started developing my own command line mathematics package for Unix/Linux systems. My plan is to develop something similar to the command interpreter that comes with MATLAB. I have already implemented support for evaluating simple mathematical expressions, trigonometric functions, logarithms and variables. I’m currently trying to implement support for user defined functions. Once that is done I intend to add support for matrix algebra and simple statistics.

The above sounds like a lot of work, but thanks to Lex and Yacc I have to write only a very small amount of code. The two magic tools do the job by generating hundreds of lines of C code for me. Needless to say, developing something similar from scratch would surely take months.

I’ve always wondered why there are so many programming languages, and now I think I know the answer. It’s so freaking easy to develop parsers and compilers for a language with tools like Lex and Yacc. Once you have a language expressed as a set of productions, it takes only a few minutes to develop a fully functional compiler. So anyone can come up with an idea for a new language and turn it into reality in a flash. In fact I’m thinking about doing the same in the near future :-)

If you are like me and want to learn and master Lex and Yacc, here is a cool reference guide. It helped me out a lot and I’m sure it will do the same for you.

Monday, September 1, 2008

GSoC 2008 ....... Done

You may have noticed that my blog has gone rather silent over the last couple of months. The reason was my Google Summer of Code (GSoC) project. The final deadline was on the 18th of August, so I worked really hard to deliver the goods on time, and I can now proudly state that I managed to complete my project successfully and on schedule.

The objective of my GSoC project was to implement XML Schema type alternatives support for Apache Xerces2/J, the legendary open source XML parser for Java applications. Type alternatives are the W3C XML Schema Working Group's answer to the conditional type assignment problem. The feature was first introduced in the XML Schema 1.1 Structures specification. Type alternatives allow a type to be assigned to an element dynamically at validation time, based on one or more conditions expressed as XPath 2.0 expressions. Here is an example element declaration which uses XML Schema type alternatives.
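The declaration looked roughly like this (a sketch reconstructed from the description below; the second alternative and the messageTypeInt type are hypothetical additions to show that alternatives are tried in order):

```xml
<xs:element name="message" type="messageType">
  <xs:alternative test="@kind = 'string' and @code > 1000"
                  type="messageTypeString"/>
  <!-- hypothetical second alternative -->
  <xs:alternative test="@kind = 'int'" type="messageTypeInt"/>
</xs:element>
```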

As stated in the above code snippet, the declared type of the message element is the complex type called messageType. However, the element declaration also contains a few type alternatives, which allow the type of message elements to be determined at validation time. Based on the conditions stated in the test attribute values, the schema validator will assign a type to the message element prior to validating its content. The expressions '@kind' and '@code' refer to two attributes of the message element. The first type alternative will assign the type called messageTypeString if the kind attribute has the value 'string' and the code attribute has a value greater than 1000.

When the schema validator encounters an element declaration with one or more type alternatives it will evaluate the test expressions one by one until an expression which evaluates to true is found. When such a matching type alternative is found the corresponding type will be assigned to the element. If none of the type alternatives match then a default type will be assigned.
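That first-match-wins selection can be modeled in a few lines of plain Java. This is an illustrative sketch only; the class and method names are made up and have nothing to do with the actual Xerces2/J internals, and real test expressions are XPath 2.0, not Java predicates:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical model of type-alternative selection (not the Xerces2/J API).
public class TypeAlternativeResolver {
    // Insertion-ordered: alternatives are evaluated in document order.
    private final Map<Predicate<Map<String, String>>, String> alternatives =
            new LinkedHashMap<>();
    private final String defaultType;

    public TypeAlternativeResolver(String defaultType) {
        this.defaultType = defaultType;
    }

    public TypeAlternativeResolver add(Predicate<Map<String, String>> test,
                                       String typeName) {
        alternatives.put(test, typeName);
        return this;
    }

    // Evaluate the tests one by one; the first match wins, else the default.
    public String resolve(Map<String, String> attributes) {
        for (Map.Entry<Predicate<Map<String, String>>, String> e
                : alternatives.entrySet()) {
            if (e.getKey().test(attributes)) {
                return e.getValue();
            }
        }
        return defaultType;
    }

    public static void main(String[] args) {
        TypeAlternativeResolver r = new TypeAlternativeResolver("messageType")
            .add(a -> "string".equals(a.get("kind"))
                      && Integer.parseInt(a.getOrDefault("code", "0")) > 1000,
                 "messageTypeString")
            .add(a -> "int".equals(a.get("kind")), "messageTypeInt");

        System.out.println(r.resolve(Map.of("kind", "string", "code", "2000")));
        System.out.println(r.resolve(Map.of("kind", "float")));
    }
}
```

Running this prints messageTypeString for the first message and falls back to the default messageType for the second, mirroring the validator's behavior.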

My project consisted of two main sections. The first was to implement type alternative traversal support, so that Xerces2/J can properly traverse an XML schema document containing type alternatives and add the corresponding information to the schema grammar. Implementing this was fairly easy and I managed to complete it prior to the GSoC midterm evaluations. The second part of the project was to implement type alternative validation. This was fairly difficult, since I had to develop a minimal XPath 2.0 implementation for Xerces2/J. Developing the XPath processor actually covered a significant portion of the entire project.

My work will be fully available in the Xerces2/J code base very soon (even now the code related to the traversal part is in one of the SVN branches). All in all it was a great learning experience, and a great opportunity for me to learn a whole bunch of cool technologies like XML, XML Schema and XPath 2.0. I also got the opportunity to put some of my knowledge of the theory of computing into action and sharpen my programming skills. I would like to express my heartfelt gratitude to Google, the Apache community and especially my mentor Khaled Noaman for being a very supportive guide right from the start of my project.

Tuesday, August 19, 2008

Financial Applications with WSO2 ESB

It's getting better and better folks. FIX protocol support in WSO2 ESB is evolving at a blistering rate. This article by Asanka Abeysinghe shows how an entire broker trading platform can be re-engineered using WSO2 ESB. It combines a set of well known use cases of the FIX transport of WSO2 ESB and demonstrates how the ESB can be used to build complete financial applications.

However, looking back at the way things have gone, this is nothing to be surprised about. A few months back Paul Fremantle, CTO of WSO2, in fact predicted that FIX protocol support in WSO2 ESB would soon evolve to this stage.


Sunday, August 10, 2008

FIX Support in WSO2 ESB

WSO2 ESB is an ultra fast, lightweight, open source ESB based on Apache Synapse. The FIX transport implementation of Apache Synapse is fully operational in WSO2 ESB 1.7. Asanka Abeysinghe from WSO2 has recently published a comprehensive article on using the FIX transport in WSO2 ESB. He starts by giving a short introduction to the FIX protocol and goes on to explain a number of interesting use cases for the FIX transport in WSO2 ESB.

If you are looking for a powerful SOA based solution to deal with your FIX transactions, this is a must read for you. While you are at it, don't forget to take a peek at the article titled 'Apache Synapse FIX'ed', an article I wrote a few months back introducing the FIX transport implementation of Apache Synapse.

Saturday, August 9, 2008

Fixing the FIX Transport

Version 1.2 of Apache Synapse, the open source lightweight ESB, was released in June 2008. One of the most striking features of this release was the FIX transport implementation, mainly because most ESBs in the world still do not support the FIX protocol. The Synapse FIX transport, which is only two months old at the moment, has already generated a lot of hype in the worlds of software engineering and SOA. It seems we already have a few interested people (potential clients?) who are currently testing and playing with the transport.

Synapse 1.2 ships with two samples that demonstrate the FIX transport module. One of them demonstrates how two FIX endpoints can be bridged using Apache Synapse; the other shows how to bridge an HTTP client with a FIX endpoint. Over the last couple of months the Synapse community has worked really hard to further improve the FIX transport implementation and to identify more exciting use cases for the transport module. As a result we managed to add four new samples demonstrating the FIX transport to the Synapse documentation. These samples are currently available in the Apache Synapse snapshot and will most likely be included in the next release.

The first of the four newly added samples (sample 259) shows how to bridge a FIX client with an HTTP endpoint. The sample demonstrates how Banzai, the sample FIX blotter that ships with QuickFIX/J, can be used to send order requests via Synapse to a service listening on HTTP. The sample uses the XSLT mediator to convert the FIX messages into SOAP messages.

The second of the new samples (sample 260) shows how Synapse can be used to bridge a FIX endpoint with an AMQP endpoint. Here, once the FIX message is converted into XML, it is bound to a JMS payload and sent to an AMQP consumer. Since AMQP is widely used in business applications, I believe this sample will open the doorway to a ton of really cool use cases.

The third sample (sample 261) demonstrates how Synapse can be used to switch between FIX sessions with different versions (BeginString values). The sample bridges a FIX 4.0 session with a FIX 4.1 session, but the underlying concepts can be used to bridge virtually any two FIX sessions. One thing worth mentioning here is that the FIX transport implementation of Synapse initially did not support bridging FIX sessions of different versions; considering the potential use cases, we implemented that feature very recently.

The fourth newly added sample (sample 262) demonstrates how content based routing (CBR) can be done with FIX messages using Apache Synapse. The sample configuration causes Synapse to accept FIX messages over a session, read a certain symbol in the messages, and route the messages to different endpoints based on the symbol value.
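A Synapse configuration for such symbol-based routing could look roughly like this. This is a sketch, not the actual sample 262: the proxy name and endpoint URIs are invented, and the XPath over the FIX-to-XML mapping (a message element with field elements keyed by FIX tag, where tag 55 is the Symbol) is written from memory:

```xml
<proxy name="FIXRoutingProxy" transports="fix">
  <target>
    <inSequence>
      <!-- FIX tag 55 carries the instrument symbol -->
      <switch source="//message/body/field[@id='55']">
        <case regex="GOOG">
          <send>
            <endpoint><address uri="fix://exec-one.example.com:19876"/></endpoint>
          </send>
        </case>
        <default>
          <send>
            <endpoint><address uri="fix://exec-two.example.com:19877"/></endpoint>
          </send>
        </default>
      </switch>
    </inSequence>
  </target>
</proxy>
```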

We also added namespace support to the FIX transport module so that it can properly parse and validate XML based FIX messages with namespaces. Another recent addition improved the way FIX sessions are initialized in Synapse. The initial implementation lazily initializes all the FIX sessions for outgoing FIX messages; an outgoing session is not created until a message arrives for that particular session. Since this leads to fairly large delays, we improved the transport module so that outgoing sessions are also initialized at startup, along with the incoming sessions (the old way of initializing sessions is still supported).

Currently we are working on adding support for FIX repeating groups (we already have a feature request for this on the JIRA from one of our users). All in all, the FIX transport module for Synapse is maturing into a very powerful piece of software at a blistering rate.

Thursday, June 26, 2008

Be Careful Using Ubuntu Update Manager

Ubuntu Linux comes with a pretty cool Update Manager tool which searches for software updates and patches on the Internet and provides options to install them. I've been using this tool for months to install various updates on my system. The best thing about the Ubuntu Update Manager is that it not only updates the OS but also updates the drivers and other utilities installed on the system. Everything seemed to work fine until I installed some kernel updates using the Update Manager.

After applying the updates, the Update Manager required a system restart to complete the update process. This had happened before, so I patiently complied. But when the system rebooted I noticed that my system had gone dumb: sound was not working at all. Then I tried to connect to the Internet via Wi-Fi, only to find that my wireless network device was also screwed up. Luckily for me, I have some friends who are professional Linux experts. I hooked my computer up to a wired network and contacted them over the Internet.

Fixing the wireless connectivity issue was not a big deal. The very Update Manager which got me into trouble helped me out. I installed a bunch of driver updates listed by the Update Manager. Then after a system reboot my Wi-Fi device was back up and running.

The difficult part was fixing the audio related issue. I had to fire up the synaptic package manager and install a couple of Linux backport modules and restart the system to get sounds working again.

However all in all it was yet another learning experience for me. If you are a regular Ubuntu Linux user like me, my advice to you is be careful when installing kernel patches using the Update Manager. Know what you are doing. Be ready to deal with some undesirable results.

Wednesday, June 11, 2008

WSO2 WSAS 2.3 Released

The WSO2 WSAS team is pleased to announce the release of WSO2 WSAS 2.3. WSO2 WSAS is an enterprise ready Web services engine powered by Apache Axis2, released under the Apache Software License 2.0.

This release can be downloaded from:
http://wso2.org/projects/wsas/java


From the WSO2 WSAS 2.3 - Release Note - 10th June 2008

WSO2 WSAS is an enterprise ready Web services engine powered by Apache Axis2 which offers a complete middleware solution. It is a lightweight, high performing platform for Service Oriented Architectures, enabling business logic and applications.

Bringing together a number of Apache Web services projects, WSO2 WSAS provides a secure, transactional and reliable runtime for deploying and managing Web services.


Key Features

* Data services support - Expose your enterprise data as services in a jiffy
* WSAS IDE - Eclipse IDE integration
* Clustering support for High Availability & High Scalability
* Full support for WS-Security, WS-Trust, WS-Policy and WS-Secure Conversation and XKMS
* EJB service provider support - Expose your EJBs as services
* Axis1 backward compatibility - Deploy Axis1 services on WSAS & Engage advanced WS-* protocols in front of legacy services
* JMX & Web interface based monitoring and management
* WS-* & REST support
* GUI, command line & IDE based tools for Web service development


New Features In This Release

* Improved interoperability
* Improved Data Services support
* Various bug fixes to Apache Axis2, Apache Rampart & WSAS
* WSO2 Mercury Integration - A new WS-RM Implementation


Data Services - Bringing Enterprise Data to Web

* Service enable data locked in relational databases, CSV & Excel files in no time
* Zero code. Simple descriptor file describes the data to service mapping
* Controlled access to your data
* Customizable XML output
* Benefit from REST & WS-* support
* Built-in Connection pooling support
* Supports exposing Stored procedures & functions
* Built-in caching
* Throttling - to ensure your database is never overloaded.
* Easy configuration via graphical console
* Test your services via Try-it tool


Training

WSO2 Inc. offers a variety of professional Training Programs, including training on general Web services as well as WSO2 WSAS, Apache Axis2, Data Services and a number of other products.

For more information on training please refer to
http://wso2.com/training/course-catalog/


Support

WSO2 Inc. offers a variety of development and production support programs, ranging from Web-based support up through normal business hours, to premium 24x7 phone support.

For additional support information please refer to http://wso2.com/support/

For more information on WSO2 WSAS, visit the WSO2 Oxygen Tank (http://wso2.org)

Tuesday, June 10, 2008

WSO2 ESB 1.7 Released

The WSO2 Enterprise Service Bus (ESB) team is pleased to announce the release of version 1.7 of its Open Source ESB.

The WSO2 ESB is an ultra fast, light-weight and versatile Enterprise Service Bus based on the Apache Synapse ESB. It allows you to Connect, Manage and Transform service interactions between Web services, REST/POX services and Legacy systems. You can easily switch transports between http/s, JMS, File Systems, Mail, FIX etc, or read/write from Databases, split, aggregate or clone messages and support declarative enforcement of QoS aspects such as WS-Security, WS-Reliable Messaging etc, and also switch between message formats such as SOAP 1.1/1.2, PoX/REST, Hessian, Text, Binary, MTOM and SwA.

The WSO2 ESB is released under the Apache Software License v2.0, and ships with a graphical management and administration console and enhanced JMX management/monitoring support, and integrates seamlessly with the WSO2 Registry.

Webinar series introducing the WSO2 ESB v1.7:
In this Webinar series Paul Fremantle, CTO of WSO2, will introduce the new features and capabilities of the WSO2 ESB. The first session will recap on the overall approach and benefits of the WSO2 ESB solution and the underlying Apache Synapse project, and then go into the added functionality and benefits of the 1.7 release. The series will include details of the newly released support for Hessian, FIX, AMQP and also discuss the improvements in performance and stability.

* For more details on the Webinar series, and to register,
visit http://wso2.com/about/news/esb-webinar-june-17/


Core features of the WSO2 ESB include:
* Proxy services / Service mediation and Message mediation
* Support for Non-blocking http/s, JMS, FIX, Apache VFS (s/ftp, file,
zip/tar/gz, webdav, cifs..), POP3/IMAP/SMTP, AMQP transports
* Support for SOAP 1.1/1.2, PoX/REST, Hessian, Text and Binary payloads
* Support for scheduled task execution and management
* Support for custom extensions in Java through custom mediators, POJO
Classes and Java Command classes
* Support for Apache BSF Scripting languages such as (Javascript, Ruby,
Groovy..etc)
* Support for clustered deployment with pinned services and tasks
* Throttling, Caching, Load balancing and Failover support
* Support for declarative WS-Reliable Messaging, WS-Security and
WS-Policy attachment
* Integrated WSO2 Registry with support for external Registries
* Ability to stop, re-start and gracefully shutdown the ESB through JMX
* Cluster aware sticky load balancing support

New features of the v1.7 release include:
* Support for Hessian binary messages
* FIX (Financial Information eXchange) protocol transport
* WS-Reliable Messaging support with WSO2 Mercury
* Ability to stop, re-start and gracefully shutdown the ESB through JMX
* Integrated WSO2 Registry shipped, with ability to connect to a remote
WSO2 Registry
* Support for re-usable database connection pools for DB report/lookup
mediators
* Support for GZip encoding and HTTP 100 continue
* Natural support for dual channel messaging with WS-Addressing
* Cluster aware sticky load balancing support
* Non-blocking streaming of large messages at high concurrency with
constant memory usage
* Support for an ELSE clause for the Filter mediator
* Ability to specify XPath expressions relative to the envelope or body
* Support for separate policies for incoming/outgoing messages
* Support for a mandatory sequence before mediation
* New Router mediator
* Ability to re-deploy proxy services

Useful Links
Download WSO2 ESB - http://wso2.org/downloads/esb/
Quickstart Guide
Installation Guide
Administration Guide
Samples Guide
Documentation Index

Contribute to WSO2 ESB
SVN: http://wso2.org/repos/wso2/trunk/esb/java/
JIRA: http://wso2.org/jira/browse/ESBJAVA
User list: esb-java-user@wso2.org
Developer list: esb-java-dev@wso2.org
Web Forum: http://wso2.org/forum/187

Training
WSO2 Inc. offers a variety of professional Training Programs, including training on general Web services as well as WSO2 ESB, Apache Synapse and Axis2, Data Services and a number of other products. For additional support information please refer to http://wso2.com/training/course-catalog/

Support
WSO2 Inc. offers a variety of development and production support programs, ranging from Web-based support up through normal business hours, to premium 24x7 phone support. For additional support information please refer to http://wso2.com/support/

For more information on WSO2 ESB, visit the WSO2 Oxygen Tank (http://wso2.org)

Monday, June 9, 2008

Apache Synapse 1.2 Released

The Apache Synapse team is pleased to announce the release of version 1.2 of the Open Source Enterprise Service Bus (ESB).

Apache Synapse is a lightweight and easy-to-use Open Source Enterprise Service Bus (ESB) available under the Apache Software License v2.0. Apache Synapse allows administrators to simply and easily configure message routing, intermediation, transformation, logging, task scheduling and more. The runtime has been designed to be completely asynchronous, non-blocking and streaming.

The Apache Synapse project and the 1.2 release can be found here:
http://synapse.apache.org

Apache Synapse offers connectivity and integration with a range of legacy systems, XML-based services and SOAP Web Services. It supports non-blocking HTTP and HTTPS using the Apache HTTPCore (http://hc.apache.org) components, as well as supporting JMS (v1.0 and higher) and a range of file systems and FTP sources including SFTP, FTP, File, ZIP/JAR/TAR/GZ via the Apache VFS project (http://commons.apache.org/vfs/filesystems.html).

The Synapse 1.2 release also adds support for the Financial Information eXchange (FIX) protocol, an industry-driven messaging standard, through QuickFIX/J, and for the Hessian binary web service protocol, along with other functional, stability and performance improvements. Synapse supports transformation and routing between protocols without any coding via configurable virtual services.
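As an illustration of such a codeless virtual service, here is a minimal proxy service in the Synapse configuration language, in the style of the shipped samples (the service name and endpoint URL are placeholders):

```xml
<definitions xmlns="http://ws.apache.org/ns/synapse">
  <!-- Forwards any message received on StockQuoteProxy to the backend service -->
  <proxy name="StockQuoteProxy">
    <target>
      <endpoint>
        <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
      </endpoint>
      <outSequence>
        <!-- route the response back to the original client -->
        <send/>
      </outSequence>
    </target>
  </proxy>
</definitions>
```

Dropped into the Synapse configuration, a proxy like this exposes a new service endpoint and mediates traffic to the backend without a line of application code.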

Synapse provides first-class support for standards such as WS-Addressing, Web Services Security (WSS) and Web Services Reliable Messaging (WSRM), plus throttling and caching, all configurable via WS-Policy down to the message level, as well as efficient binary attachments (MTOM/XOP).

The 1.2 release contains a set of enhancements based on feedback from the user community, including:

* Support for Hessian binary web service protocol
* FIX (Financial Information eXchange) protocol for messaging
* WS-Reliable Messaging support with WSO2 Mercury
* Support for re-usable database connection pools for DB report/lookup mediators
* Support for GZip encoding and HTTP 100 continue
* Natural support for dual channel messaging with WS-Addressing
* Cluster aware sticky load balancing support
* Non-blocking streaming of large messages at high concurrency with constant memory usage
* Support for an ELSE clause for the Filter mediator
* Ability to specify XPath expressions relative to the envelope or body
* Support for separate policies for incoming/outgoing messages
* Support for a mandatory sequence before mediation

The combination of XML streaming and asynchronous support for HTTP and HTTPS using Java NIO ensures that Synapse has very high scalability under load. Performance tests show that Synapse can scale to support thousands of concurrent connections with constant memory on standard server hardware.

Apache Synapse ships with over 50 samples (http://synapse.apache.org/Synapse_Samples.html) designed to demonstrate common integration patterns "out-of-the-box", along with supporting sample services, and service clients that demonstrate these scenarios. Apache Synapse is configured using a straightforward XML configuration syntax
(http://synapse.apache.org/Synapse_Configuration_Language.html).

Apache Synapse is openly developed by a community that welcomes all forms of input, ranging from suggestions and bug reports to patches and code contributions. Your comments and feedback on the project and release are welcome.

The Apache Synapse code and binaries are available from the website at http://synapse.apache.org

Tuesday, June 3, 2008

Having Issues with Ubuntu Visual Effects?

I have been experiencing issues activating Ubuntu visual effects on my laptop. I installed Ubuntu Gutsy on my laptop a few months back (yes, I'm yet to upgrade to Ubuntu Hardy) and it works fine except that it never allowed me to turn the visual effects on. I have seen these visual effects in action on other computers and they are really exciting. I haven't seen anything quite like them even in the Windows world. So I badly wanted to use visual effects on my own laptop, but strangely it never allowed me to. Whenever I tried to activate visual effects, Ubuntu gave me an error simply saying something like 'unable to turn visual effects on'. (Wow, what a descriptive error message, isn't it?)

So after a bit of googling and reading some forum entries I managed to diagnose the problem. The issue was with my VGA card, which Compiz treats as 'blacklisted'. This particular type of VGA (Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller) is being used in a variety of laptops these days, hence I thought of blogging a bit about this and letting the world know. If you don't know how to find the type of VGA in your computer, simply enter the following command in a shell.
$ lspci | grep VGA
In order to find whether your VGA is blacklisted or not run the following command.
$ compiz --replace ccp &
If your VGA is blacklisted you will see something like this on the terminal.
[1] 6002
Checking for Xgl: not present
Blacklisted PCIID '8086:2a02' found
aborting and using fallback: /usr/bin/metacity
If this is the case you can do a simple override so that Compiz won't care whether your VGA is blacklisted or not. Then you can turn the visual effects on and off at will. To perform the override run the following command. (This workaround has been tested on the above mentioned VGA only, so beware and attempt at your own risk!)
$ mkdir -p ~/.config/compiz; echo SKIP_CHECKS=yes >> ~/.config/compiz/compiz-manager
This will create a small configuration file named compiz-manager in the ~/.config/compiz directory. This file instructs Compiz to skip the checks that were preventing the visual effects from being turned on.

And that's basically it. If you need more information please take a look at this forum post.

Sunday, May 18, 2008

University of Moratuwa Climbs to the Top with GSoC 2008

For years the University of Moratuwa in Sri Lanka has been doing a kick-ass job of converting talented youngsters in Sri Lanka into world-class computer engineers and IT professionals. It is not a secret that out of all the IT education centers in the country, the University of Moratuwa is the best in the business. Graduates of the University of Moratuwa can be found in virtually any software firm in Sri Lanka, and every year a significant number of them end up joining some of the finest organizations and universities in the world.

Talking about kicking butt, this time around the University of Moratuwa has managed to kick some really serious butt big time. Securing first place on the Google Summer of Code 2008 (GSoC 2008) top participants list is no ordinary thing. According to the latest statistics from Google, it is from the University of Moratuwa, Sri Lanka that the largest number of GSoC 2008 applications was received: a blistering 93! Second place on the list goes to the University of Campinas, Brazil, with 29 applications. So clearly the University of Moratuwa is way ahead of the rest of the world in this regard.

Google has also declared that the University of Moratuwa has produced the largest number of winning GSoC 2008 applications. There are 23 winning applications from the University of Moratuwa this year, of which 18 are from the Department of Computer Science and Engineering. I personally feel very proud of this achievement and really happy to be one of those 18 winners.

Sri Lanka is among the top 10 GSoC 2008 participant countries this year, holding this position alongside countries like the USA, Canada and India, who are at the forefront of the global IT industry. However, out of all the IT education institutes in Sri Lanka, only the University of Moratuwa has earned a position on the top 10 GSoC 2008 participant institutes list. This is a clear indication of the level of excellence the University of Moratuwa has reached over the past couple of decades in terms of quality of education. In addition, this achievement indicates that a new generation of computer engineers and IT professionals who appreciate and support open source software is being created in Sri Lanka.

Sunday, May 11, 2008

Open Source is Cool!

Recently, among the GSoC 2008 participants, there was a very interesting discussion about what makes open source software cool. The discussion lasted for several days and it really brought up some interesting ideas and comments about the open source software movement. It was really fascinating to see youngsters from all over the world describing their opinions about open source software in their own words and own style of writing. Some of the postings even touched on areas like economics, philosophy and politics.

According to some of the GSoCers, open source software is cool mainly due to the community interaction associated with them. With open source software one can make a few contributions to a project and in return get thousands of contributions from the community. All these contributions help to improve the quality of the software and help the community achieve their goals. Also this opens up some opportunities for individuals to make some new friends with common interests.

Some GSoCers believe that it's the transparency aspect of open source software that makes it cooler than proprietary software. With open source software it is really possible to see how something works, down to the last bit of byte code. And if it doesn't work, then individual contributors can fix it themselves. In addition, open source software enables individual users and developers to change the software at will to suit their requirements, which is not an option with proprietary software; with proprietary software, most of the time you have to change your requirements to suit the capabilities of the software. According to some GSoCers, software resembles knowledge. Open source software thus becomes a powerful medium for sharing knowledge in a free, transparent and collaborative environment. They believe that by making software, and thus knowledge, freely available and accessible, the entire human race can benefit from everyone's work.

In the opinion of some GSoCers, open source software brings results faster. With the source code open to everybody, bugs can be detected faster and corrected quickly. For the same reason, open source projects mature faster in terms of quality and performance.

Apart from the two or three ideas described above there were also some exciting comments on software freedom, and on the humanitarian and economic aspects of open source software. All in all I believe that this discussion makes some fine reference material for everybody who is interested in computer science and software engineering. It's really heartening to see that a whole new generation of software developers that trusts, respects and appreciates open source software is being born.

Tuesday, April 29, 2008

GSoC 2008 is On

The all-important initial phase of the Google Summer of Code 2008 (GSoC) came to an end last week. The selected project proposals were published on the GSoC website on 21st April. I too applied for this year's contest, and my project proposal to implement type alternatives for Apache Xerces2-J was among the qualified proposals. The Apache Software Foundation has received around 30 slots this year and one of them is allocated to my proposal (yipeee!!!). I consider this a great opportunity to learn and master XML and XML schema while contributing to a world-renowned open source project.

Apache Xerces2-J is a high-performance, standards-compliant XML parser. It currently supports a number of XML-related standards like XML 1.0, XML 1.1, DOM, SAX, JAXP and XML schema 1.0. A variety of open source and proprietary software projects use Apache Xerces2-J as their core XML parsing and processing mechanism. The reason for this immense popularity is probably the high number of standards it supports and the way it supports them. Nowadays Apache Xerces2-J is even distributed along with Sun's popular JDK.

The Xerces development team is currently working on getting Xerces2-J to support the XML schema 1.1 standard, which is the latest XML schema specification. The XML schema 1.1 specification, like its predecessor, is comprised of three main parts, namely the primer, structures and datatypes. Type alternatives is a feature that falls under the XML schema 1.1 structures spec. It is one of the most significant additions to the XML schema standard, and it provides a well-organized mechanism for conditional type assignment, which has been on XML schema feature wish lists for years.

With type alternatives, XML elements can be assigned types based on one or more conditions (thus the name conditional type assignment). The conditions are specified as XPath 2.0 expressions, and the relationship between a condition and the corresponding type is expressed using the 'alternative' element.

The XPath expression that specifies the condition is given as the value of the 'test' attribute, and the corresponding type as the value of the 'type' attribute. Alternatively, one could use a 'simpleType' or 'complexType' child element to specify the type instead of the 'type' attribute. A complete example illustrating XML schema type alternatives would be as follows.
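A sketch of such a declaration, assuming a 'kind' attribute whose values select among three numeric types (the attribute values here are illustrative), could look like this:

```xml
<xs:element name="value" type="valueType">
  <!-- The governing type is chosen by the first matching test expression -->
  <xs:alternative test="@kind = 'integer'" type="xs:integer"/>
  <xs:alternative test="@kind = 'short'"   type="xs:short"/>
  <xs:alternative test="@kind = 'byte'"    type="xs:byte"/>
</xs:element>
```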
Here we have defined an element named 'value' which is of declared type 'valueType'. But based on the actual value of the 'kind' attribute the 'value' elements can have a different governing type. When XML schema validations are performed the elements will be validated against their governing types.

Type alternatives can add a lot more flexibility to the way XML schema documents are used, and they give more freedom and power to the XML schema author. With type alternatives, XML elements having the same name can be of different governing types. In the above example, different 'value' elements can take one of three types (integer, short or byte). Also, the same element can have a governing type that is different from its declared type. This wouldn't have been possible if not for type alternatives.

All in all, type alternatives is a very interesting and useful feature for XML schema authors. That makes it very important for Xerces2-J to support it. I have worked a lot with XML and APIs like DOM and SAX in the past, and I have used Apache Xerces2-J on a number of occasions too. But to be honest I haven't really worked with XML schema much. So this really is a big learning opportunity for me. I have been studying the XML schema specs for the last few weeks and I have already added a whole bunch of XML schema material to my knowledge base.

My heart is itching to start on the coding part, but I know that there are a lot of things to be studied, analyzed and clarified before I get to that point. Wish me luck!!

Monday, April 28, 2008

Say Hello to WSO2 Mashup Server

You have heard about it! You have read about it! You may even have dreamt about it! The WSO2 Mashup Server is a platform for creating, deploying, and consuming Web services Mashups in the simplest fashion possible. And when I say simple I mean like a piece of cake. In fact you don't have to know a damn thing about Web services to get started with the WSO2 Mashup Server. Some basic knowledge on JavaScript is all you need. Now how cool is that?

So today I'm going to provide some guidelines to those out there who are willing to get their hands dirty with this whole Web services Mashups thing. After reading this entry you will realize how easy (and not to mention the fun) it is to develop and deploy Web Services Mashups with the WSO2 Mashup Server. Provided that you have Java 5 or a higher version installed on your computer the first step is to download the Mashup server. Extract the downloaded archive and go to the bin directory. Simply execute the server startup script relevant to your operating system. Windows users should execute the startup.bat and Linux users should execute the startup.sh (you might need to set the execution permissions for this file). The server will start throwing out a bunch of text on your console and when the server has successfully started you will see something similar to the following.

INFO [2008-02-15 10:57:14,690] HTTP port  : 7762
INFO [2008-02-15 10:57:14,691] HTTPS port : 7443
INFO [2008-02-15 10:57:14,691]
INFO [2008-02-15 10:57:14,691] WSO2 Mashup Server started in 51597 ms

Now open up a Web browser instance and browse to the URL http://localhost:7762. You will be provided with a form to create a user account for yourself since this is the first time you are signing in to the Mashup server. Simply fill in the form and submit to create your account leaving the sign in check box checked. At this point you should be seeing your Mashup server home page.

And now begins the fun part. Click on the 'Create a new service' link under the management tasks. Give a name to the service and click the 'Create' button. For this simple demonstration I'm going to create a service named 'helloworld'.

Then you will be provided with an editor to write your mashup. You may notice that a considerable amount of coding is already done for you but that is really not necessary at this point. Remove all that code and copy and paste the following code segment into the editor.

this.serviceName = "helloworld";
this.documentation = "My first mashup";

sayHello.documentation = "Greet";
sayHello.inputTypes = { "name" : "String" };
sayHello.outputType = "String";
function sayHello(name)
{
    return "Hi, and welcome " + name;
}

What we have done here is define a simple service with one operation. The name of the operation is 'sayHello'. It takes in a string parameter called 'name' and returns a string value. More specifically, when you invoke the sayHello operation with a name like 'Joe' it will return the string 'Hi, and welcome Joe'. Now click on the 'Save changes' button to save your work. It may take some time for your service to get properly deployed and listed under the 'My Mashups' list on the home page. If it seems to be taking too much time, simply refresh the page.


Click on 'helloworld' link under 'My Mashups' to see the details of the helloworld mashup. Now you can go about and start playing with your mashup.
Do not forget to test your mashup with the 'Try It' feature.

Also you can view and edit the source of your mashup, give it a rating, view the WSDL for the service and do a lot more. See how easy it is to get up and running with the WSO2 Mashup Server? Did you have to use any serious Web services stuff? No! Did you have to do any complex Java programming? No! All we did was type a few lines of simple JavaScript code and make a few clicks here and there on a nice-looking web interface. Now if you have signed up on http://mooshup.com you can directly upload your mashup to the mooshup community site by using the 'Share Mashup' feature so other mashup developers can have a look at it. Sounds fun, doesn't it?

Friday, April 18, 2008

Let's Get Mashed Up!!!

Day by day, Web services are making the Web more dynamic, interactive and fun. Thanks to Web services, the World Wide Web, which used to be a pile of content for reading and browsing, has turned into the most dynamic and active source of information in the world. At the same time it has become a playing field for those who love to play, a huge worldwide dating service for those who wanna flirt, a medium for expressing viewpoints for those who want their opinions to be heard (I personally know a guy who used his blog to announce that he has started an affair with a girl) and a primary means of interactive communication for everybody in general. With Web services the entire World Wide Web is restructuring itself into a pull-based model where the user pulls exactly the content he wishes to see from the network. The traditional push-based model, where Web administrators simply push content into their websites for users to browse and see, is becoming more and more outdated. Businesses are now rapidly adopting SOA and Web services and organizing their business activities around them. Web services allow organizations to stay up-to-date on current socio-economic trends, on competitors and on the organization itself, effectively reducing a lot of the overhead associated with management and control.

The best thing about Web services is the flexibility associated with them. One can use a Web service as it is, or mix it up with other Web services to build his own customized Web services. For example, take a Web service which brings you the latest up-to-date weather information for the Asian continent in plain text format. If you are lazy, dull and a bit tardy, you can consume the service as provided by the service provider. Or else, if you are active, energetic and fun-loving, you can mix it up with other services to make it even more useful. In the example I have picked, one might mash the weather data service up with an on-line mapping service like Google Maps to create a dynamic weather map of Asia. Now how cool is that? As another example, take a service which brings you the latest prices of cars. If we mash it up with a service which brings you the architectural details of cars, we can create a live feed which gives you all the details one would possibly want to know about cars.

Well, it really sounds pretty exciting, doesn't it? But Web services are serious business. They are inherently complicated and difficult to handle. So how on earth can one just 'mash up' multiple services to form one service out of them, when even handling one service requires a lot of effort? Well, believe it or not, it's a lot easier than you think, 'if' you are using the right tool. And what is this 'right tool'? It's called the WSO2 Mashup Server. All you need to know is a bit of JavaScript and you can get started with mashing up services in no time.

WSO2 Mashup Server 1.0 was released recently and it's already creating a lot of hype in the SOA world. WSO2 also put mooshup.com (beta), the community site for mashup developers, on-line a couple of days back, and there we already have a number of users sharing the mashups they have created. WSO2 Mashup Server really makes mixing and mashing up services embarrassingly easy. I myself didn't know a damn thing about mashing services up a few weeks back, and now, thanks to the WSO2 Mashup Server, I have already put a couple of mashups on-line at mooshup.com. 'YahooTunes', one of the mashups I developed recently, takes the RSS feed from Yahoo Music (http://music.yahoo.com), which has a top chart of songs, and mashes it up with the YouTube data API (http://gdata.youtube.com) to bring you the videos of the top ten songs. All I had to do was write a few lines of JavaScript code; most of the advanced tasks are handled for you by the host objects provided by the Mashup Server. The 'YahooTunes' mashup is now on-line at https://mooshup.com/services/hiranya/yahootunes and is a good demonstration of the power of the WSO2 Mashup Server and what it has to offer. You will also notice that the WSO2 Mashup Server allows generating and displaying the WSDL and source code of mashups. Deploying, editing and managing mashups are also made easier with the cool-looking Web frontend. User management is provided using WSO2 User Manager, and InfoCard authentication is facilitated using the WSO2 Identity Solution. Sharing the mashups you create is also easy: simply create and test the mashup on your local mashup server and hit share to upload it to the community site or some other mashup server instance.

The WSO2 Mashup Server is pretty stable and feature-rich for a 1.0 release. I haven't really found any major drawbacks or pitfalls so far. Full credit goes to all the developers who contributed to putting this wonderful piece of software engineering together. However, beware people: the WSO2 Mashup Server is known to be addictive!!!

Thursday, April 17, 2008

FIXing Synapse!!!

Recently I got the opportunity to get involved in developing a new transport module for Apache Synapse to support the FIX protocol. Apache Synapse, a lightweight mediation framework for Web services, has had support for a number of application layer protocols like HTTP/S, SMTP, JMS and VFS. The development of the FIX transport module took nearly one and a half months, and this module is now available in the Apache Synapse SVN trunk along with a couple of samples and some documentation.

The FIX protocol, or Financial Information eXchange, is a messaging standard developed specifically to facilitate securities transactions. Strangely enough, this protocol, which is used by hundreds of banks, stock exchanges and broker-dealers all around the world, is still not very popular in the Web services world (see here for a list of FIX users). The protocol has been in existence since 1992 and there are six major versions of the specification at the time of writing (4.0, 4.1, 4.2, 4.3, 4.4, 5.0). The specs are owned by FIX Protocol Limited (FPL) but it is essentially a free and open standard.

FIX specifications focus on two layers of the OSI reference model, namely the application layer and the session layer. Any application that wishes to communicate with another application using the FIX protocol must first establish a FIX session. A FIX session can exist between only two parties, where one party is the acceptor and the other is the initiator. The initiator is the one who starts the conversation by sending out the initial login request.

FIX messages are essentially a series of key-value pairs where each key-value pair is known as a field. Fields are separated using the ASCII Start of Header (SOH, 0x01) character as the delimiter. The key of a field is simply a positive integer, but these integers have meanings which are defined in the specifications. A typical FIX message might appear as follows (with '|' standing in for the unprintable SOH delimiter):
8=FIX.4.0|9=102|35=D|34=16|49=BANZAI|52=20080314-05:01:47|56=SYNAPSE|11=1205470907396|21=1|38=5|40=1|54=1|55=IBM|59=0|10=078

A FIX message can be logically separated into a header, a body and a trailer. The fields that should appear in each of these portions are clearly specified in the FIX specs. For example, the BeginString (8) field is a header field and the Checksum (10) field is a trailer field. The content of a FIX message can vary greatly depending on the type of the message.
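To make the field structure concrete, here is a small, self-contained sketch (not part of the transport itself) that splits a raw FIX string into its tag/value fields, using '|' in place of the unprintable SOH delimiter:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class FixFieldParser {
    // Splits a raw FIX message into its tag=value fields, preserving order.
    // On the wire the delimiter is ASCII SOH (0x01); '|' is the usual
    // human-readable stand-in seen in logs and examples.
    public static Map<String, String> parse(String raw, char delimiter) {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String field : raw.split(Pattern.quote(String.valueOf(delimiter)))) {
            int eq = field.indexOf('=');
            if (eq > 0) {
                fields.put(field.substring(0, eq), field.substring(eq + 1));
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        String raw = "8=FIX.4.0|9=102|35=D|49=BANZAI|56=SYNAPSE|55=IBM|10=078";
        Map<String, String> fields = parse(raw, '|');
        // Tag 8 (BeginString) is a header field; tag 10 (Checksum) is a trailer field
        System.out.println("BeginString = " + fields.get("8"));
        System.out.println("MsgType     = " + fields.get("35"));
        System.out.println("Checksum    = " + fields.get("10"));
    }
}
```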

We used an open source FIX engine known as Quickfix/J to develop the FIX transport module for Apache Synapse. It currently supports five out of the six major versions of the FIX specification. Quickfix/J provides a very simple API to develop FIX based applications and applications developed on Quickfix/J are highly configurable. In addition to that Quickfix/J offers powerful message parsing, validation and logging.
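To give a flavour of that configurability, a Quickfix/J session is described by a simple settings file; an initiator configuration might look roughly like this (the host, port, CompIDs and store path are illustrative):

```ini
[DEFAULT]
ConnectionType=initiator
HeartBtInt=30
FileStorePath=repository/fixstore

[SESSION]
BeginString=FIX.4.0
SenderCompID=BANZAI
TargetCompID=SYNAPSE
SocketConnectHost=localhost
SocketConnectPort=9876
StartTime=00:00:00
EndTime=00:00:00
```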

The Quickfix/J project is driven by a very active development team and a very enthusiastic user base. Quickfix/J uses Apache MINA and hence is built on Java NIO-based asynchronous network communications.

All the transport modules of Apache Synapse are developed using the Apache Axis2 transport framework. Any transport module developed on the Axis2 transport framework must have two main elements, namely the transport listener and the transport sender. The Axis2 transport framework provides the necessary interfaces and base classes to implement these elements. The transport listener implementation is basically responsible for accepting inbound messages. For each accepted incoming message the transport listener should create an Axis2 message context, populate it accordingly and hand it over to the Axis2 kernel for further processing. The transport sender implementation is used by the Axis2 kernel to send out messages. This implementation should be capable of processing Axis2 message contexts and converting the SOAP messages embedded in them into messages that can be sent over the wire. Also, depending on the nature of the protocol, the transport sender may have to handle incoming response messages as well.

The implementations of the transport listener and the transport sender for the FIX transport module are named FIXTransportListener and FIXTransportSender respectively. The FIXTransportListener makes use of a FIXSessionFactory class which takes care of creating, storing and managing FIX sessions. The class FIXIncomingMessageHandler is where Apache Synapse binds with Quickfix/J. This class implements the quickfix.Application interface. For each accepted FIX message the transport module forks off a new thread from the thread pool associated with the transport listener implementation. This thread then converts the FIX message into XML using the Apache AXIOM API.

The FIX message converted into XML is then placed in a SOAP envelope. When converting FIX messages into XML, CDATA tags are used, basically as a precaution, because theoretically FIX messages can have any kind of data in the fields; fields can even contain XML or binary data. The transport does not change any of the field values while converting the FIX messages into XML. However, if binary data is found in the message, the necessary action is taken to put the binary data into the Axis2 message context as a binary attachment. Finally, the SOAP envelopes holding the FIX messages are handed over to the Axis2 kernel for further processing.
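The resulting SOAP payload ends up looking roughly like this (the element and attribute names are illustrative of the shape, with each FIX field kept in a CDATA section):

```xml
<message>
  <header>
    <field id="8"><![CDATA[FIX.4.0]]></field>
    <field id="35"><![CDATA[D]]></field>
  </header>
  <body>
    <field id="55"><![CDATA[IBM]]></field>
    <field id="38"><![CDATA[5]]></field>
  </body>
  <trailer>
    <field id="10"><![CDATA[078]]></field>
  </trailer>
</message>
```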

Another possible approach we could have taken here was to embed the FIX messages in SOAP envelopes without converting them into XML. But converting the FIX messages into XML has many advantages. Once converted into XML, the Synapse user has more control over the FIX message content: technologies like XPath and XQuery can be used to manipulate the FIX message content within the Synapse core.

One major problem we faced during the development of the new transport module was implementing in-order message delivery. In-order message delivery and processing is a characteristic feature of the FIX protocol. But since Synapse uses a separate thread to handle each incoming message, we noticed that messages were not sent out in the order they were received. This is due to the thread switching that takes place while Synapse is performing the mediation. As a solution to this issue we introduced an application-level sequence numbering mechanism to the transport module. Each and every incoming FIX message is given a sequence number by the transport listener. A sequence number is unique for a given FIX session, and a string that uniquely identifies the session is also associated with the messages. This information is specified in the SOAP envelope itself as attributes of the message element. The FIX transport sender implementation looks at these values and sends the messages in the exact order they were received. However, having these attributes in the SOAP envelope is optional; they are required only if the user wants the FIX transport sender to send out messages in the order they were received.

While developing the FIX transport module we tried our best to provide all the options and choices Quickfix/J normally provides to its users. Apart from the logging features provided by Apache Synapse, users can enable logging at the transport level (before messages are converted into XML) using Quickfix/J. Quickfix/J offers a number of options when it comes to message logging: you can log messages to the console, into a file or into a database, and the same can be done with Synapse as well. Also, all the message store implementations that come with Quickfix/J are available with Synapse too. By default Synapse will try to use the memory-based message store implementation with acceptors and initiators, but users can pick other implementations (file, JDBC, etc.) if they want.

All in all, developing the FIX transport module for Apache Synapse turned out to be a huge success, and it indeed was a great learning opportunity for me. Not many ESBs currently support the FIX protocol, so it really improves the value and marketability of Apache Synapse. This module is still somewhat in its infancy, so we look forward to contributions from the developers and FIX experts out there to improve it.

Monday, April 14, 2008

Hello World with Web Services

“Time for minor skirmishes is over!!! Now we do battle!!!”

Don’t get alarmed by reading the above. I’m still blogging about Web services and related technologies (not about ancient warcraft). I just wanted to say that having discussed a lot of theoretical stuff in the previous blog entries (or rather beaten around the bush), now I’m fully armed and ready to start developing and deploying Web services of my own. There are loads of tested and proven methods, tools and IDEs to help Web services developers. So to start with, I’m going to stick to a very simple and straightforward method which involves Apache Axis2 (well, actually I will be using WSO2 WSAS, which is powered by Axis2), the Eclipse IDE and the Axis2 Eclipse plugins. It is pretty much a painless, known-to-work kind of method and is ideal for budding Web services developers (and even for pros of the industry). Let this blog entry be a tutorial to those out there who have just stepped into the world of Web services and are willing to experiment with the technology (and may the best developer win :-)).

Here’s what you need…

Set them up...

Installing these software packages is fairly easy and straightforward. Remember to set up the JAVA_HOME environment variable after installing the Java Development Kit.

(A public WSO2 WSAS instance is available at http://wso2.org/tools, which you can use to test your Web services and get a feel for the platform.)

Once you have all the software tools you need properly installed and tested, we can get on with developing our first Web service.

And here’s the recipe…

Run Eclipse IDE and create a new Java project. (File --> New --> Java Project)

Give the project any name you like.

Create the following class in the project. (Names of the class and the package do not matter. Just give any name of your choice.)

To add a new class to your project simply right-click on the node corresponding to your project on the Package explorer pane and click on ‘New’ and then select ‘Class’ from the submenu.

package testws;

public class MyFirstWS
{
    public String greet(String name)
    {
        return "Hi, " + name + ", welcome to Web services World!!!";
    }
}

As you can see it is a very simple Java class with just one method, which accepts a String parameter called ‘name’ and returns a String value.
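Before packaging the class you can give it a quick local sanity check with a plain main method (shown here without the package declaration, purely for illustration):

```java
// Local smoke test for the service class: call greet() directly and
// print the result, exactly as the deployed service would compute it.
public class MyFirstWS {
    public String greet(String name) {
        return "Hi, " + name + ", welcome to Web services World!!!";
    }

    public static void main(String[] args) {
        System.out.println(new MyFirstWS().greet("Hiranya"));
    }
}
```

If the printed greeting looks right, this is exactly the logic the service archive will expose.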

Now save all the files and build the project. (Project --> Build Project)

Then run the Axis2 Service Archiver Wizard. (File --> New --> Other --> Axis2 Wizards --> Axis2 Service Archiver)

Browse to and specify the class file location of your project. Generally this is the ‘bin’ folder inside your top-level project folder. Also check the ‘Include .class files only’ check box (“this is not really necessary, but then again why take any risks, right?”) and click ‘Next’.

The next window allows you to specify a WSDL for the Web service. Since we don’t have a WSDL for our service, simply check the ‘Skip WSDL’ check box and click ‘Next’.

You can add any additional Java libraries required to run the service in the next window. Since we don’t need any for our service, simply click ‘Next’ to proceed. (If you do want to add a library, just enter its full path and click ‘Add’. You may add any number of libraries in this fashion.)

The next window deals with the services.xml file, which is important when it comes to deploying the Web service. In this case we will have our services.xml file auto-generated, so check ‘Generate the service xml automatically’ and click ‘Next’ to continue.
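For the curious, the auto-generated file is a small XML descriptor roughly along these lines (a hand-written sketch rather than the wizard’s exact output; RPCMessageReceiver is the stock Axis2 receiver for exposing plain Java methods):

```xml
<!-- Sketch of a services.xml for the example class; the generated
     file may differ in its details. -->
<service name="MyFirstWS">
    <description>My first Web service</description>
    <parameter name="ServiceClass">testws.MyFirstWS</parameter>
    <messageReceivers>
        <messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-out"
                         class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
    </messageReceivers>
</service>
```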

Now you can specify the Java class that you want to expose as a Web service. Type the fully qualified name of the class (in my case testws.MyFirstWS) and click ‘Load’. The methods of the class should appear in the area below; select the ones you want in your service. In the topmost text box you can also specify a name for your Web service. This name will be displayed to external parties, so it’s best to pick a nice and cool name.

Finally, specify a name for the service archive file that will be generated, and a location to save it. Click ‘Finish’ to exit the wizard. If everything went right, Eclipse will display a message box saying ‘Service archive generated successfully!’

Now browse to the location you specified in the last step of the wizard. A file with the name you specified and an .aar extension will be there. This is the service archive file of your Web service, and you are going to need it to deploy the service. Service archive files are similar to compressed .zip files: they are collections of other files, mainly the class files and the services.xml file.
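To see the “an .aar is really a zip” point concretely, the little program below builds a mock archive with the two kinds of entries mentioned above and lists them back (the entry names are illustrative, not the wizard’s exact output):

```java
// Demonstrates the layout of a service archive: build a tiny zip with
// a class file entry and a META-INF/services.xml entry, then list it.
import java.io.File;
import java.io.FileOutputStream;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class AarLayoutDemo {
    public static void main(String[] args) throws Exception {
        File aar = File.createTempFile("MyFirstWS", ".aar");
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(aar))) {
            // Empty placeholder entries; a real archive holds actual bytes.
            for (String entry : new String[] {
                    "testws/MyFirstWS.class", "META-INF/services.xml"}) {
                zos.putNextEntry(new ZipEntry(entry));
                zos.closeEntry();
            }
        }
        try (ZipFile zf = new ZipFile(aar)) {
            for (Enumeration<? extends ZipEntry> e = zf.entries(); e.hasMoreElements();) {
                System.out.println(e.nextElement().getName());
            }
        }
        aar.delete();
    }
}
```

Any ordinary zip tool can likewise open a real .aar file to inspect its contents.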

Now it’s time to put the service on-line. If you are using WSO2 WSAS, simply start the server, log on to https://localhost:9443 using your web browser and sign in to the WSAS management console. The default username and password are both ‘admin’. Click on ‘Services’ and then on ‘Upload service artifact’, which is the topmost link on the page. Browse to the location of your .aar file and click ‘Upload’. WSAS will display a message saying the service archive has been uploaded. (WSO2 WSAS supports hot deployment, so there is no need to restart the server.)

If instead you are using Tomcat or the Axis2 standalone server to deploy Web services, copy the .aar file manually to the services directory of your installation (repository/services in a standalone Axis2 distribution, or webapps/axis2/WEB-INF/services under Tomcat). Then restart the server.

So that’s basically it. You have created and deployed a Web service. You can access the service by logging into the WSAS management console and clicking on ‘Services’. Your service will be listed there along with the other Web services. Now you can do a lot of things with it. WSO2 WSAS gives you a whole bunch of features to play around with deployed Web services: you can view the WSDL of your service, generate clients for it and even try it out on-line using the WSAS 'Try It' feature.


You can directly invoke the service and test it if you want. Open a browser window and go to http://localhost:9443/services/<service name>/greet?name=<your name>, substituting the service name you picked in the wizard and any name of your choice.

If you get a result similar to the following, you have done everything right. It means your Web service is up and running and working without any faults. So give yourself a pat on the back and be proud.

Wow!!! Wasn't that easy? Well that's mainly because we used a very powerful IDE to develop the service and a feature rich application server to deploy it. If we had to do everything manually this would have been a pain in the butt. I think we really should appreciate these software tools and the people who have put in a lot of effort to create them.

Please note that I have done the above example on Linux, but the exact same procedure can be safely carried out in other environments like Windows and Mac OS.

The Bee and the Eye

A new paradigm…

Information technology is probably not the only science that has gone through drastic improvements during the last couple of decades. Many other sciences have lifted themselves from the ground level to a much higher plane, to the point where humans now find it difficult to live without them. Some new sciences have popped up from nowhere and have managed to entirely change the way humans looked at nature and the universe at the beginning of the last century. Some sciences have combined to create powerful technologies, giving rise to a variety of new application domains.

Business management, technology management and human resource management are a few such sciences that have become key aspects of any organization, society or nation today. Even though these sciences are not at all strange or peculiar to present day human beings, I seriously doubt that people who lived about 100 years ago had even heard the terms. The above sciences, sometimes collectively referred to as ‘management’, have become absolute necessities for organizations and people rather than just areas of study. Many new theories and principles have emerged in these sciences, which in turn have affected the way people now carry out various tasks within an organizational environment.

In the meantime information technology has always been in its usual ever-developing mode, discovering more and more new application domains. It is now impossible to find an area of study or a science that hasn’t been touched by information technology. The management sciences have also been vastly reshaped and reworked with the introduction of information technology. The computer has become the most widely used business tool in the world (Paul Fremantle, one of the co-founders of WSO2, believes that a laptop computer is one of the basic things one needs in order to start a business), and most of the software systems developed today are targeted at organizations to help them with their management activities. Information technology is now so tightly entwined with the management sciences that it has given birth to a new area of study formally known as ‘Business Intelligence’, or BI for short.

Explaining the Bee(B) and the Eye(I) …

BI is mainly concerned with the applications and technologies used to gather, provide access to and analyze data related to organizational operations. The business intelligence system of an organization is one of the key factors that directly affect the organization’s decision making process. Business intelligence systems generally have powerful data processing capabilities and effective presentation methods. A business intelligence system enables the administration of an organization to make more informed, timely and accurate decisions while contributing to the organization’s competitive advantage. With a BI system the administration no longer has to depend solely on common sense and experience, and risky, ad-hoc methods of problem solving and decision making can be avoided.

Talking about BI systems, there are hundreds of software tools (for different types of platforms), each optimized for one or more subtasks associated with BI. Some of these subtasks are data gathering and storage, data security, data analysis, reporting and communication. BI systems are ideally suited for handling huge amounts of data and can generally work well with multiple databases or data warehouses. BI systems are slightly smarter than usual software systems in that they can consider a large number of parameters while analyzing data and still make accurate decisions. In the industry we can see BI systems being used in three modes.

  • As a means of analyzing performance, projects and operations internal to organizations

  • As a means of data storing and analyzing

  • As a tool for managing the human side of businesses (marketing, HRM etc.)

BI has its own terminology with unique technical terms and buzzwords. Each of the above mentioned usages of BI are backed by a variety of BI systems and tools, both proprietary and open source.

OLAP Overlapped!!!

Since I have strongly emphasized the usefulness and power of BI systems, it is worth mentioning something about OLAP (On-Line Analytical Processing) as well, a concept closely associated with BI. The term ‘OLAP’ is probably the most popular buzzword in the study of BI. Some common applications of OLAP are business reporting, management reporting, budgeting and forecasting. With OLAP the primary concern is to optimize databases and other related applications to support fast retrieval, analysis and reporting of data; in OLAP, databases are optimized for fast retrieval rather than efficient use of space. There are special APIs (e.g. ODBO) and query languages (e.g. MDX) to be used with OLAP. (In fact MDX is the de-facto standard query language for OLAP.)

Here Comes Web Services…

Web services, being the next ‘big thing’ in the world of IT, have already begun to make an impact on BI. Web services can take BI to the next level by adding the notion of interoperability to BI systems. With Web services there is no limit to the number of data sources a BI system might use, since Web services can enable the BI system to talk to remote APIs, databases and other data sources, even ones implemented using incompatible technologies. One can also use Web services to expand the usability of a BI system by distributing the services offered by the system over the web. It is almost impossible to imagine what a BI system can become when used along with Web services.

However, there are some pitfalls that we should look out for. When the services of a BI system are exposed to the external world as Web services, some critical measures should be taken to enforce the security and reliability of the system. Many BI system providers use various tools, protocols and technologies for this purpose (ranging from tools like LDAP to protocols like HTTPS). Using Web services with BI systems also has a significant impact on how metadata is handled by the BI system. Metadata management is one of the most influential features of a BI system; when used with Web services, almost all the metadata management tasks, including metadata modification and synchronization across applications, should be exposed as Web services.

By combining the interoperability of Web services with the mega-scale data analysis power of BI systems, we can create dynamic real-time BI systems, so that organizations can monitor events in real time and make very accurate decisions based on the most up-to-date information. All the users of such a system can be dynamically notified and kept in sync with organizational activities through constant data feeds or periodic updates. Data gathering and reporting processes can be fully automated in this approach. Traditional BI systems are more or less batch processing systems that take a collection of data and perform some operations on it as and when users make requests, but with Web services BI systems can constantly monitor data and related events in real time and take action as and when a situation arises.

Sunday, April 13, 2008

Huawei ETS 1000 on Linux

My CDMA dial-up Internet connection at home has been driving me insane for months. I have both Windows XP and Ubuntu Linux installed on my PC. From the two operating systems I prefer to use Linux for all my academic work, development work and documentation stuff. The purpose of having Windows is simply to play computer games. But unfortunately up to now I have never been able to connect to the Internet from Linux using the dial-up connection.

My CDMA dial-up connection is made using a Huawei ETS 1000 wireless terminal, and the installation disk I got from my ISP had only Windows drivers. So ironically I had to use Windows to surf the Internet from home, since Linux didn't seem to treat the wireless terminal nicely. Not to mention that this was rather silly and painful, because every time I had to search the Internet for something during my day to day work I had to shut down Linux and start Windows on my PC.

Recently I realized that I had never really looked for an answer to this ridiculous problem. After searching for a solution on Google for a few minutes I understood that the answer had been lying right under my nose for about ten months. Apparently one of my friends had already discovered a workaround for the problem, and he had published the solution on his blog last June.

Moral of the story – 'Discuss your technical issues openly with your friends. They may have already solved them.'

The solution my friend Mohanjith had found worked for me, and some of the comments on his blog suggest that it has worked for many others as well. So I thought of sharing it, since I'm pretty sure there could be a lot of others like me who have trouble getting connected to the Internet using CDMA dial-up connections on Linux. In Sri Lanka, of course, almost all the ISPs that offer CDMA based Internet connections provide their clients with Huawei ETS 1000 wireless terminals. Let this be a walkthrough for anyone in that situation.

Let's Roll
Start by checking the Linux kernel version installed on your computer. It has to be 2.6 or later. You can check the kernel version by entering the command uname -a in your console.
$ uname -a
Linux hiranya-laptop 2.6.22-14-generic #1 SMP Sun Oct 14 23:05:12 GMT 2007 i686 GNU/Linux

Step 1:
If that condition is met, simply plug the USB cable coming from your Huawei terminal into your PC. Then type dmesg -c in the console. You might have to become the super user for this command to execute; on Ubuntu, of course, you can make use of sudo. Executing this command will throw a bunch of text onto the console. See if you can find the following couple of lines among the text.
ti_usb_3410_5052 1-1:2.0 : TI USB 3410 1 port adapter converter detected
usb 1-1: TI USB 3410 1 port adapter converter now attached to /dev/ttyUSB0
If you can find them, you are lucky: your wireless terminal is installed and ready to use. Simply proceed to step 3. But if you find the following couple of lines instead, you've got some work to do.
ti_usb_3410_5052 1-1:1.0: TI USB 3410 1 port adapter converter detected
ti_usb_3410_5052: probe of 1-1:1.0 failed with error -5
The problem here is with the connector cable that hooks the Huawei terminal up to your PC. No need to panic; let's take care of it.

Step 2:
Create a new rules file called 026_ti_usb_3410.rules in your /etc/udev/rules.d directory. You will have to become the super user to create a file in this directory. Using your favorite text editor, add the following entries to the newly created file.
#TI USB 3410
SUBSYSTEM=="usb_device", ACTION=="add", SYSFS{idVendor}=="0451", SYSFS{idProduct}=="3410", \
SYSFS{bNumConfigurations}=="2", \
SYSFS{bConfigurationValue}=="1", \
RUN+="/bin/sh -c 'echo 2 > /sys%p/device/bConfigurationValue'"
Once done, save and close the file. Now disconnect the USB cable from the Huawei terminal and plug it in again. Enter the dmesg -c command in your console and inspect the output. If you have done everything right so far, you should be able to locate the following lines in the output.
ti_usb_3410_5052 1-1:2.0: TI USB 3410 1 port adapter converter detected
usb 1-1: TI USB 3410 1 port adapter converter now attached to /dev/ttyUSB0

That's it!! Your computer is now ready to go on-line with your CDMA dial-up connection.

Step 3:
Become the super user and edit the /etc/wvdial.conf file of your PC accordingly. My configuration is as follows.
[Dialer Defaults]
Modem = /dev/ttyUSB0
Baud = 230400
Phone = your-isp-phone-number
Init1 = ATZ
Stupid Mode = 1
Dial Command = ATDT
Username = your-username
Password = your-password
New PPPD = yes
PPPD Options = crtcts multilink usepeerdns lock defaultroute
Now try to get connected by entering the command wvdial (with the configuration above everything lives under [Dialer Defaults], so no dialer section name is needed). You might have to execute this command as the super user.
There may be issues with name servers. In such a situation you will have to configure the name servers in the /etc/resolv.conf file. The name server IPs can be found from the console output when wvdial is trying to connect to your ISP.
WvDial<*1>: local IP address 122.255.2.26
WvDial<*1>: remote IP address 2.2.2.2
WvDial<*1>: primary DNS address 202.124.160.2
WvDial<*1>: secondary DNS address 122.255.1.2