Showing posts with label web services. Show all posts

Saturday, June 20, 2015

Expose Any Shell Command or Script as a Web API

I implemented a tool that can expose any shell command or script as a simple web API. All you have to specify is the binary (command/script) that needs to be exposed and, optionally, a port number for the HTTP server. The source code of the tool, in its entirety, is shown below. In addition to exposing simple web APIs, this code also shows how to use Go's built-in logging package, slice-to-varargs conversion and a couple of other neat tricks.
// This tool exposes any binary (shell command/script) as an HTTP service.
// A remote client can trigger the execution of the command by sending
// a simple HTTP request. The output of the command execution is sent
// back to the client in plain text format.
package main

import (
 "flag"
 "fmt"
 "io/ioutil"
 "log"
 "net/http"
 "os"
 "os/exec"
 "strings"
)

func main() {
 binary := flag.String("b", "", "Path to the executable binary")
 port := flag.Int("p", 8080, "HTTP port to listen on")
 flag.Parse()

 if *binary == "" {
  fmt.Println("Path to binary not specified.")
  return
 }

 l := log.New(os.Stdout, "", log.Ldate|log.Ltime)
 http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
  var argString string
  if r.Body != nil {
   data, err := ioutil.ReadAll(r.Body)
   if err != nil {
    l.Print(err)
    http.Error(w, err.Error(), http.StatusInternalServerError)
    return
   }
   argString = string(data)
  }

  fields := strings.Fields(*binary)
  args := append(fields[1:], strings.Fields(argString)...)
  l.Printf("Command: [%s %s]", fields[0], strings.Join(args, " "))

  output, err := exec.Command(fields[0], args...).Output()
  if err != nil {
   http.Error(w, err.Error(), http.StatusInternalServerError)
   return
  }
  w.Header().Set("Content-Type", "text/plain")
  w.Write(output)
 })

 l.Printf("Listening on port %d...", *port)
 l.Printf("Exposed binary: %s", *binary)
 l.Fatal(http.ListenAndServe(fmt.Sprintf("127.0.0.1:%d", *port), nil))
}
Clients invoke the web API by sending HTTP GET and POST requests. They can also send additional flags and arguments to be passed to the command/script wrapped by the web API. The result of the command/script execution is sent back to the client as a plain-text payload.
As an example, assume you need to expose the "date" command as a web API. You can simply run the tool as follows:
./bash2http -b date
Now clients can invoke the API by sending an HTTP request to http://host:8080. The tool will run the "date" command on the server and send the resulting text back to the client. Similarly, to expose the "ls" command with the "-l" flag (i.e. long output format), we can execute the tool as follows:
./bash2http -b "ls -l"
Users sending an HTTP request to http://host:8080 will now get a file listing (in the long output format, of course) of the server's current directory. Alternatively, users can POST additional flags and a file path to the web API to get more specific output. For instance:
curl -v -X POST -d "-h /usr/local" http://host:8080
This will return a file listing of the /usr/local directory of the server, with human-readable file size information.
You can also use this tool to expose custom shell scripts and other command-line programs. For example, if you have a Python script foo.py which you wish to expose as a web API, all you have to do is:
./bash2http -b "python foo.py"

Friday, January 2, 2015

Developing Web Services with Go

Golang facilitates implementing powerful web applications and services using a very small amount of code. It can be used to implement both HTML rendering webapps as well as XML/JSON rendering web APIs. In this post, I'm going to demonstrate how easy it is to implement a simple JSON-based web service using Go. We are going to implement a simple addition service, that takes two integers as the input, and returns their sum as the output.
package main

import (
        "encoding/json"
        "net/http"
)

type addReq struct {
        Arg1, Arg2 int
}

type addResp struct {
        Sum int
}

func addHandler(w http.ResponseWriter, r *http.Request) {
        decoder := json.NewDecoder(r.Body)
        var req addReq
        if err := decoder.Decode(&req); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
        }
 
        jsonString, err := json.Marshal(addResp{Sum: req.Arg1 + req.Arg2})
        if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
        }
        w.Header().Set("Content-Type", "application/json")
        w.Write(jsonString)
}

func main() {
        http.HandleFunc("/add", addHandler)
        http.ListenAndServe(":8080", nil)
}
Let's review the code from top to bottom. First, we need to import the JSON and HTTP packages into our code. The JSON package provides the functions for parsing and marshaling JSON messages. The HTTP package enables processing HTTP requests. Then we define two data types (addReq and addResp) to represent the incoming JSON request and the outgoing JSON response. Note how addReq contains two integers (Arg1, Arg2) for the two input values, and addResp contains only one integer (Sum) for holding the total.
Next we define what is called an HTTP handler function, which implements the logic of our web service. This function simply parses the incoming request and populates an instance of the addReq struct. Then it creates an instance of the addResp struct and serializes it into JSON. The resulting JSON string is then written out using the http.ResponseWriter object.
Finally, we have a main function that ties everything together and starts the web service. This main function simply registers our HTTP handler with the "/add" URL context and starts an HTTP server on port 8080. This means any request sent to the "/add" URL will be dispatched to the addHandler function for processing.
That's all there is to it. You may compile and run the program to try it out. Use curl as follows to send a test request:
curl -v -X POST -d '{"Arg1":5, "Arg2":4}' http://localhost:8080/add
You will get back a JSON response with the total: {"Sum":9}.

Friday, June 21, 2013

White House API Standards, DX and UX

The White House recently published some standards for developing web APIs. While going through the documentation, I came across a new term - DX. DX stands for developer experience. As anybody would understand, providing a good developer experience is the key to the success of a web API. Developers love to program with clean, intuitive APIs. On the other hand clunky, non-intuitive APIs are difficult to program with and usually are full of nasty surprises that make the developer's life hard. Therefore DX is perhaps the single most important factor when it comes to differentiating a good API from a not-so-good API.
The term DX reminds me of another similar term - UX. As you would guess, UX stands for user experience. A few years ago UX was one of the most exciting topics in the IT industry. For a moment there everybody was talking and writing about UX and how websites and applications should be developed with UX best practices in mind. It seems that with the rise of web APIs, cloud and mobile apps, DX is starting to generate a similar buzz. In fact I think for a wide range of application development, PaaS, web and middleware products, DX will be way more important than UX. Stephen O'Grady was so right: developers are the new kingmakers.

Friday, April 5, 2013

MDCC - Strong Consistency with Performance

A few weeks back, a couple of my colleagues and I finished developing a complete implementation of the MDCC (Multi-Data Center Consistency) protocol. MDCC is a fast commit protocol proposed by UC Berkeley for large-scale geo-replicated databases. The main advantage of MDCC is that it supports strong consistency for data while providing transaction performance similar to eventually consistent systems.
With traditional distributed commit protocols, supporting strong consistency usually requires executing complex distributed consensus algorithms (e.g. Paxos). Such algorithms generally require multiple rounds of communication. Therefore when deployed in a multi-data center setting where the inter-data center latency is close to 100ms, the performance of the transactions being executed degrades to almost unacceptable levels. For this reason most replicated database systems and cloud data stores have opted to support a weaker notion of consistency. This greatly speeds up transactions, but you always run the risk of data becoming inconsistent or even lost.
MDCC employs a special variant of Paxos called Fast Paxos. Fast Paxos takes a rather optimistic approach, by which it is able to commit most transactions within a single network round trip. This way a data object update can be replicated to any number of data centers within a single request-response window. The protocol is also effectively masterless, which means that if the application is executing in a data center in Europe, it does not have to contact a special master server which could potentially reside in a data center in the USA. The only time this protocol doesn't finish within a single request-response window is when two or more transactions attempt to update the same data object (a transaction conflict). In that case a per-object master is elected and the Classic Paxos protocol is invoked to resolve the conflict. If the possibility of a conflict is small, MDCC will commit most transactions within a single network round trip, thus greatly improving transaction throughput and latency.
Unlike most replicated database systems, MDCC doesn't require explicit sharding of data into multiple segments, although sharding can be supported if needed. Also, unlike most cloud data stores, MDCC has excellent support for atomic multi-row (multi-object) transactions. That is, multiple data objects can be updated atomically within a single read-write transaction. All these properties make MDCC an excellent choice for implementing powerful database engines for modern distributed and cloud computing environments.
Our implementation of MDCC is based on Java. We use Apache Thrift as the communication framework between different components. ZooKeeper is used for leader election purposes (we need to elect a per-object leader whenever there is a conflict). HBase server is used as the storage engine. All the application data and metadata are stored in HBase. In order to reduce the number of storage accesses we also have a layer of in-memory caching. All the critical information and updates are written through to the underlying HBase server to maintain strong consistency. The cache still helps to avoid a large fraction of storage references. Our experiments show that most read operations are able to complete without ever going to the HBase layer.
We provide a simple and intuitive API in our MDCC implementation so that users can write their own applications using our MDCC engine. A simple transaction implemented using this API would look like this:
        TransactionFactory factory = new TransactionFactory();
        Transaction txn = factory.create();
        try {
            txn.begin();
            byte[] foo = txn.read("foo");        // read the current value of "foo"
            txn.write("bar", "bar".getBytes());  // update "bar" in the same transaction
            txn.commit();                        // atomically commit both operations
        } catch (TransactionException e) {
            reportError(e);
        } finally {
            factory.close();
        }
We also did some basic performance tests on our MDCC implementation using the YCSB benchmark. We used 5 EC2 micro instances distributed across 3 data centers (regions) and deployed a simple 2-shard MDCC cluster. Each shard consisted of 5 MDCC storage nodes (amounting to a total of 10 MDCC storage nodes). We ran several different types of workloads on this cluster and in general succeeded in achieving < 1ms latency for read operations and < 100ms latency for write operations. Our implementation performs best with mostly-read workloads, but even with a fairly large number of conflicts, the system delivers reasonable performance. 
Our system ensures correct and consistent transaction semantics. We have excellent support for atomic multi-row transactions and concurrent transactions, and even some rudimentary support for crash recovery. If you are interested in giving this implementation a try, grab the source code from https://github.com/hiranya911/mdcc. Use Maven 3 to build the distribution, then extract and run.

Wednesday, November 3, 2010

Rediscover SOA with WSO2 Carbon & WS-Discovery

One of the hottest features of new WSO2 Carbon 3.0 based releases is the out-of-the-box WS-Discovery support. WS-Discovery is a standard protocol for discovering services and service endpoints. This enables service clients to search for services based on a given criteria and bind with the discovered services. The WS-Discovery specification defines two modes of operation:
1. Adhoc mode
In the adhoc mode, servers advertise the services they have using a UDP multicast protocol. Similarly client applications can search for available services by sending out probe requests over UDP multicast. Servers listening for such probe requests can then send the service information to the client over a unicast channel.
2. Managed mode
In the managed mode, servers and clients use an intermediary known as the discovery proxy for all service discovery purposes. Servers will register the available services with the discovery proxy by sending notifications over HTTP. Then clients can directly probe the discovery proxy to discover the registered services. This mode of operation does not use UDP multicast for any sort of communication. All interactions take place over regular HTTP channels and if needed, HTTPS can be used to provide transport level security.
Starting from version 3.0, the WSO2 Carbon framework has full support for the WS-Discovery managed mode. The following products ship the necessary WS-Discovery libraries by default:
WSO2 G-Reg has the ability to act as a discovery proxy. This mode is enabled by default, and the user doesn't have to configure anything on the G-Reg side. In products like WSAS and DSS, WS-Discovery support must be manually enabled by pointing them to an existing discovery proxy. Once configured, these server applications will automatically register their services with the discovery proxy. Service information is synchronized with the discovery proxy at server startup, at shutdown and on service-level updates. This ensures that the information in the discovery proxy is always up-to-date. WSO2 ESB can query one or more discovery proxy instances to find the necessary services and service endpoints. The ESB UI also enables creating proxy services and mediation endpoints using the discovered endpoints.
There are a lot of other cool things you can do in the Carbon platform with WS-Discovery. There are several plug-ins and extensions available to download and try out as well. I'm planning to post a series of blogs addressing various aspects of WS-Discovery in the next few months, so stay tuned! In the meantime, read this article to learn how Jos Dirksen integrated Mule with WSO2 G-Reg using the WS-Discovery capabilities of the Carbon framework.

Sunday, June 27, 2010

Axis2 TCP Transport Revamped

I recently made some major improvements to the Axis2 TCP transport. The TCP transport enables Axis2 services and clients to send and receive SOAP messages over TCP, using Java TCP sockets. The old TCP transport was very simple and had to be configured globally in the axis2.xml file. Due to this limitation, a service could not open its own port to listen for incoming TCP messages. All TCP requests were captured by a single, globally configured port, and WS-Addressing headers were used to dispatch the requests to the appropriate services.
The new transport implementation is fairly advanced with a wide range of configuration options. It can be configured globally in the axis2.xml file, or it can be configured at the service level in the corresponding services.xml files. Only the port number was configurable in the previous TCP transport implementation. The new implementation supports all the parameters described below:
  • transport.tcp.port - Port number (mandatory parameter)
  • transport.tcp.hostname - The host name to which the server socket should be bound
  • transport.tcp.backlog - The length of the message back log for the server socket (defaults to 50)
  • transport.tcp.contentType - Content type of requests (defaults to text/xml)
In the new transport, if a request is received by a port configured at the service level, it is pre-dispatched to the corresponding service. If the global port receives a TCP message, WS-Addressing headers will be looked up while dispatching.
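As a sketch of the service-level configuration (element placement follows the usual Axis2 services.xml conventions; the service name, service class and parameter values here are hypothetical, while the parameter names are the ones listed above):

```xml
<service name="EchoService">
  <!-- Expose this service over the TCP transport only -->
  <transports>
    <transport>tcp</transport>
  </transports>

  <!-- Service-level TCP transport parameters -->
  <parameter name="transport.tcp.port">6060</parameter>
  <parameter name="transport.tcp.hostname">localhost</parameter>
  <parameter name="transport.tcp.backlog">100</parameter>
  <parameter name="transport.tcp.contentType">text/xml</parameter>

  <parameter name="ServiceClass">org.example.EchoService</parameter>
</service>
```

With a service-level port like this, requests arriving on port 6060 would be pre-dispatched straight to EchoService, with no WS-Addressing lookup required.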
These improvements are now available in the Axis2 trunk, so feel free to take it for a ride and give us your feedback. I also have plans to make some improvements to the Axis2 UDP transport. I intend to add multicast support to the existing UDP transport, with which we will be able to support the multicast request/unicast response message exchange pattern in Axis2.

Thursday, November 19, 2009

New Kids in Town

The team at WSO2 just released the latest version of WSO2 Carbon (v2.0.2) along with a whole bunch of Carbon based products. Log on to the WSO2 Oxygen Tank to lay your hands on the following smoking hot releases, right from the WSO2 oven ;)
  • WSO2 Enterprise Service Bus v2.1.2
  • WSO2 Web Services Application Server v3.1.2
  • WSO2 Business Process Server v1.1.0
  • WSO2 Mashup Server v2.0.1
  • WSO2 Identity Server v2.0.2
You will be particularly interested in our Business Process Server and Mashup Server releases, since we haven't released either of them for a fairly long time. Needless to say, all the new releases have many new features, bug fixes and enhancements over the previous versions. So download them today and experience the power of SOA.

Wednesday, July 15, 2009

Head First Security in SOA

No, O'Reilly Media hasn't published a book on 'Security in SOA' in their world-famous Head First series (at least not that I know of). This post is about a wonderful presentation on the above-mentioned subject, conducted by Prabath Siriwardena a couple of weeks back at the WSO2 Summer School. Prabath is one of my colleagues at WSO2 and one of the most experienced folks we have. His expertise is in computer security, and at WSO2 he leads all security related projects including WSO2 Identity Server. In this summer school presentation Prabath starts simple, by explaining fundamental computer security concepts like confidentiality, integrity and availability, and goes on to more complex topics such as public key cryptography, transport-level and message-level security, the WS-Security specs and UsernameToken authentication. He gives a glimpse of the various options available to SOA architects and web service authors to secure their applications at different levels, while emphasizing the importance of interoperability.
Throughout the presentation he keeps things simple yet extremely interesting. You will find that the entire presentation follows the story-driven, fun-filled teaching style that is used to great effect in the books of the Head First series (and hence the title).
If you want to learn the fundamentals of Security in SOA and how it is used in the enterprise (or how it should be used in the enterprise), this presentation would be a great starting point. So start flipping through the slides now and see for yourself.


Sunday, July 12, 2009

Amplify Your SOA with WSO2 ESB 2.1

If you have been following my blog, then you already know that WSO2 Carbon 2.0 and a host of other Carbon based WSO2 SOA products were released last week. WSO2 Enterprise Service Bus 2.1, one of the released products, must be highlighted as a high-quality, feature-rich and extremely user-friendly piece of SOA middleware, for a number of reasons. Like all its predecessors, this version of WSO2 ESB is based on Apache Synapse, the lightweight, ultra-fast ESB. WSO2 ESB is popular among SOA enthusiasts because of the following set of key features:
  • Proxy services - facilitating synchronous/asynchronous transport, interface (WSDL/Schema/Policy), message format (SOAP 1.1/1.2, POX/REST, Text, Binary), QoS (WS-Addressing/WS-Security/WS-RM) and optimization switching (MTOM/SwA).
  • Non-blocking HTTP/S transports based on Apache HttpCore for ultra-fast execution and support for thousands of connections at high concurrency with constant memory usage.
  • Built in Registry/Repository, facilitating dynamic updating and reloading of the configuration and associated resources (e.g. XSLTs, XSD, WSDL, Policies, JS, Configurations ..)
  • Easily extendable via custom Java class (mediator and command)/Spring mediators, or BSF Scripting languages (Javascript, Ruby, Groovy, etc.)
  • Built in support for scheduling tasks using the Quartz scheduler.
  • Load-balancing (with or without sticky sessions)/Fail-over, and clustered Throttling and Caching support
  • WS-Security, WS-Reliable Messaging, Caching & Throttling configurable via (message/operation/service level) WS-Policies
  • Lightweight, XML and Web services centric messaging model
  • Support for industrial standards (Hessian binary web service protocol/ Financial Information eXchange protocol and optional Health Level-7 protocol)
  • Enhanced support for the VFS(File/FTP/SFTP)/JMS/Mail transports with optional TCP/UDP transports and transport switching for any of the above transports
  • Support for message splitting & aggregation using the EIP and service callouts
  • Database lookup & store support with DBMediators with reusable database connection pools
  • WS-Eventing support with event sources and event brokering
  • Rule based mediation of the messages using the Drools rule engine
  • Transactions support via the JMS transport and Transaction mediator for database mediators
  • Internationalized GUI management console with user/permission management for configuration development and monitoring support with statistics, configurable logging and tracing
  • JMX monitoring support and JMX management capabilities like graceful/forceful shutdown/restart
Wow! That's a lot of features for a software product developed openly and distributed free of charge under the Apache Software License 2.0. As you would imagine, it is mainly the performance and the lightweight operation model of WSO2 ESB that make it stand out from the rest. In addition to the above-mentioned key features, the latest ESB 2.1 release brings you the following set of new features.
  • Rule based mediation via Drools
  • Fine grained authorization for services via the Entitlement mediator
  • Reliable-Messaging specification 1.1 support
  • Enhanced WS-Eventing support and Event Sources, making it an event broker
  • Enhanced AJAX based sequence, endpoint and proxy service editors
  • Enhanced transport configuration management through the graphical console
  • Enhanced integrated registry and search functionalities with versioning, notifications, rating of resources, and commenting
  • Enhanced remote registry support
  • Default persistence to the registry for the configuration elements
  • Enhanced permission model with the user management
  • Enhanced REST/GET and other HTTP method support
  • P2 based OSGi feature support, for optional features like service management, runtime governance
The coolest thing about WSO2 ESB 2.1 is that it is 100% OSGi based (thanks to the Carbon platform, of course!). All the features are packed into OSGi bundles, so adding new features and removing unnecessary ones cannot get any easier. The newly introduced provisioning support based on Equinox P2 makes it particularly easy to deploy new features and third party libraries into the ESB. With WSO2 ESB 2.1, you can deploy only the features you want, and only them. You are not forced to load any features or libraries that you never use. Why waste memory and other resources on features never used, right?
With ESB 2.1, registry integration support has improved vastly. WSO2 ESB 2.1 comes with an embedded WSO2 G-Reg instance, but you can easily point the ESB to a remotely hosted registry instance in a matter of seconds. ESB 2.1 also has the ability to export its entire configuration to the registry and reload the configuration from the registry at server startup.
The user interfaces and the context-sensitive help system have gone through a lot of rework. You will find that most of the web interfaces are now fully AJAX based, making it easy and fun to work with WSO2 ESB. All UIs are fully internationalized and can even be separated from the backend system and hosted as a different web application.
WSO2 ESB 2.1 is a giant step forward by the WSO2 folks to make their ESB even more elegant and enterprise ready. It gives you a combination of high performance, modularity and user friendliness. It is completely free and open source, with a very active and supportive community of developers to back up all the development work. WSO2 also offers user training, development support and production support to any party which requires such facilities.
If you are looking for a robust mediation solution to power your enterprise SOA or if you are tired of trying out expensive proprietary ESB solutions, it’s high time you give WSO2 ESB 2.1 a spin. It will be totally worth it!!!

Friday, July 10, 2009

WSO2 Carbon Platform Takes Another Giant Step Forward

Weeks have passed since I did my last blog post. The reason for my prolonged silence in the blogosphere is that I was working hard on a very important point release: WSO2 Carbon 2.0 and a host of WSO2 Java products based on the Carbon platform. Ideally these releases should have come out a bit earlier, but realizing the importance of implementing some new features, we decided to push the releases back by a few weeks. Anyway, WSO2 Carbon 2.0 and four of the Carbon based WSO2 products (WSAS 3.1, ESB 2.1, G-Reg 3.0 and IS 2.0) were released on 8th July 2009, and you can now download them from the download page of the WSO2 Oxygen Tank.

WSO2 Carbon is an OSGi based application framework for SOA and Web Services. All WSO2 Java products including WSO2 ESB, WSO2 WSAS, WSO2 Governance Registry and WSO2 Identity Server are based on this revolutionary SOA platform. We have made some very important architectural and functional enhancements to WSO2 Carbon recently. Therefore the users of Carbon 2.0 based products will be able to reap the benefits of those improvements in the forms of improved performance, reliability, extensibility, user friendliness and stability.

One of the most important architectural changes we made to the Carbon platform this time around is getting rid of OSGi bundle activators and adopting OSGi declarative services to handle bundle initialization and cleanup instead. OSGi declarative services are considered the best solution when it comes to managing systems with a large number of OSGi bundles. Declarative services allow bundles to start up and initialize properly while sharing required data among various bundles, without following a preconfigured bundle startup order. Declarative services also helped us improve the dynamic nature of the overall Carbon platform, so starting and stopping bundles at runtime is now easier and safer.

Another key enhancement in this release of Carbon is the introduction of the P2 based provisioning system. This work is still at an early stage, but it already enables users of Carbon and Carbon based products to easily deploy Carbon components, user-developed OSGi bundles, mediators and third party dependencies into the system, without having to write a single line of code to handle OSGi initialization or cleanup work.

In general, Carbon 2.0 and the Carbon 2.0 based products have better registry integration, better transport management, better user management, better security features and better user interfaces. We gave most of the Carbon component UIs a fresh look through the extended use of AJAX. Persistence of configuration data is another area where we have done a lot of work lately. WSO2 ESB 2.1 in particular has the ability to store the entire ESB configuration in a registry and then load the configuration back from the registry during server startup.

Documentation is yet another area that has been largely improved with this release. We paid special attention to improving our context-sensitive help documents (accessed via the product user interfaces) and other downloadable guides and tutorials. In the previous releases of Carbon and Carbon based products we were mostly focused on system architecture and functionality, but now that the Carbon framework has largely stabilized, we were able to spare some cycles for improving our documentation too. We have added lots of new guides and catalogs, including a mediator catalog, a transports catalog, a server administration guide and an endpoints guide. Our plan is to make our products easy to learn and use by keeping our user interfaces simple, integrating help with the user interfaces and making ample documentation freely available to all.

In addition to the key improvements mentioned above, Carbon 2.0 and the related products come with tons of new features, minor bug fixes, performance enhancements and usability improvements which add a lot of value to the WSO2 Carbon SOA platform. If you are interested in SOA and Web Services middleware, you must give some of the new Carbon 2.0 based WSO2 products a try. After all, it's totally free and open source, so it doesn't cost you a penny to try our software. I guarantee that it will be a great experience, and after that you will love WSO2 products.

On a finishing note, I would like to quote the WSO2 Carbon 2.0 release note here:

WSO2 Carbon 2.0.0 - Middleware a' la carte

Welcome to the WSO2 Carbon 2.0.0 release. This release is available for download at http://wso2.org/projects/carbon

WSO2 Carbon is the base platform for all WSO2 Java products. Built on OSGi, Carbon encapsulates major SOA functionality such as data services, business process management, ESB routing/transformation, rules, security, throttling, caching, logging and monitoring. These product capabilities are no longer tied to individual products, but are available as components.

A Carbon feature consists of one or more Carbon components. These are nothing but Eclipse Equinox P2 features. In order to extend the functionality of your Carbon server, simply install new features onto your server as outlined in https://wso2.org/wiki/display/carbon/p2-based-provisioning-support

Key Features
  1. Extensible OSGi component based architecture
  2. P2 provisioning based feature enhancement of the server
  3. Unified server management framework
  4. Unified Management Console
  5. Web service, JMX & Equinox OSGi console based administration
  6. Integrated security & permissions management
New Features In This Release
  • Experimental Equinox P2 based provisioning support - extend your Carbon instance by installing new P2 features (See https://wso2.org/wiki/display/carbon/p2-based-provisioning-support)
  • Fixed start levels eliminated
  • Performance improvements to the Registry
  • Ability to turn normal jar files into OSGi bundles (just copy the jar files into CARBON_HOME/repository/components/lib)
  • Various bug fixes and enhancements, including architectural improvements and security fixes, to Apache Axis2, Apache Rampart, Apache Sandesha2, WSO2 Carbon and other projects
How to Run
  1. Extract the downloaded zip
  2. Go to the bin directory in the extracted folder
  3. Run the wso2server.sh or wso2server.bat as appropriate
  4. Point your browser to the URL https://localhost:9443/carbon/
  5. Use "admin", "admin" as the username and password.
  6. If you need to start the OSGi console with the server use the property -DosgiConsole when starting the server
For more details, run wso2server.sh --help (or wso2server.bat --help)

How to Install Additional Features

You can build your own server by selecting only the features that you require.
For further details refer to
Known Issues

All known issues have been recorded at
Training

WSO2 Inc. offers a variety of professional Training Programs, including training on general web services as well as WSO2 Carbon, Apache Axis2, Data Services and a number of other products.

For additional support information please refer to
Support

WSO2 Inc. offers a variety of development and production support programs, ranging from web-based support during normal business hours to premium 24x7 phone support.

For additional support information please refer to
For more information on WSO2 Carbon, visit the WSO2 Oxygen Tank (http://wso2.org)

Saturday, May 2, 2009

A 'Good' ESB Should....

As technologies like SOA and Web Services continue to become more and more dominant mechanisms for implementing complex distributed systems, the demand for efficient and reliable enterprise service bus middleware is growing larger and larger. There are dozens of potential ESB solutions out there, some open source and some proprietary, but the cold-hearted truth is that most of these products have a number of shortcomings which make them practically useless in production environments. Production environments don't merely require fast ESBs. They need ultra-fast ESBs that can handle thousands of concurrent user requests. Such deployments require middleware which can satisfy scalability and availability requirements through features such as load balancing, clustering and failover support. A good ESB should also be capable of dealing with many communication protocols, transport mechanisms and messaging standards.

Fortunately we are not totally out of hope. There are some really good products out there which give you all the above mentioned features and deliver 100% in production deployments. WSO2 ESB is certainly one of those middleware solutions that are fast, reliable and feature rich. It is based on Apache's tried and tested Synapse lightweight ESB, and starting from version 2.0 it is also based on WSO2's revolutionary Carbon framework.

WSO2 folks recently published a cool flash presentation which walks us through all the basic features of WSO2 ESB within a few minutes. Have a look and see whether your ESB delivers at least half of the things the WSO2 ESB provides.

Thursday, March 19, 2009

Web Services for Mission Critical Apps

While reading “Beautiful Code” from O'Reilly Media I came across a chapter written by Ronald Mak. He was a senior scientist at the Research Institute for Advanced Computer Science and has been contracted twice by NASA to develop enterprise applications to manage space missions. In chapter 20 of “Beautiful Code” Ronald describes the internals of the Collaborative Information Portal (CIP) which he and his team developed for NASA's Mars Exploration Rover (MER) mission, which commenced in 2003. The development of the project took nearly two years, and needless to say, being a highly mission-critical ERP solution, the requirements of the system were very demanding. Among the most fundamental features of the system were:
  • Time management (Managing all time zones on Earth and two time zones on Mars)
  • Personnel management (Managing all mission personnel) and,
  • Data management (Managing all the data sent from the two automated Rovers on Mars)
So how did Ronald and his team tackle this mammoth task? They used a 3-tiered service oriented architecture (SOA). Most of the code was written in Java, following J2EE standards. The developers of the CIP also used commercial off-the-shelf (COTS) software as much as possible to avoid re-inventing the wheel, thus saving loads of time and effort. The client tier of the system consists mainly of standalone GUI applications developed mostly with Swing. These client applications communicate with a set of Web Services hosted on a J2EE compliant application server; this forms the middleware tier of the CIP. Finally, there is a data tier which consists of mission file servers, databases and other metadata repositories. Ronald explains that using an SOA based on J2EE was one of the main driving forces behind their success with the CIP.

The Web Services in the middleware tier are perhaps the most important components of the system. These services have been implemented using Enterprise Java Beans (EJB). The team developed two types of beans: stateless session beans and stateful session beans. Stateless session beans act as the service providers, receiving all the client requests and routing them to the relevant stateful session beans, which implement the business logic of the system. Ronald further elaborates that the use of Web Services made the CIP modular, language independent and loosely coupled.
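The dispatch pattern described above, a stateless front end routing each request to a stateful worker that holds the session's state, can be sketched in plain Java. The class and method names below are my own illustration, not CIP's actual API; in the real system both roles were implemented as EJB session beans.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Plays the role of a stateful session bean: holds per-session state.
class SessionWorker {
    private int requestCount = 0; // conversational state kept between calls

    String handle(String request) {
        requestCount++;
        return "response #" + requestCount + " for [" + request + "]";
    }
}

// Plays the role of the stateless session bean: it owns no conversational
// state itself and only routes each request to the caller's worker.
class ServiceFacade {
    private final Map<String, SessionWorker> workers = new ConcurrentHashMap<>();

    String serve(String sessionId, String request) {
        SessionWorker w = workers.computeIfAbsent(sessionId, id -> new SessionWorker());
        return w.handle(request);
    }
}

public class CipRoutingSketch {
    public static void main(String[] args) {
        ServiceFacade facade = new ServiceFacade();
        System.out.println(facade.serve("alice", "get-rover-data"));   // response #1
        System.out.println(facade.serve("alice", "get-rover-data"));   // response #2
        System.out.println(facade.serve("bob", "get-mission-clock"));  // response #1
    }
}
```

The point of the split is that the stateless facade can be replicated freely for scalability, while session state lives only in the workers.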

All in all, Ronald and his team managed to deliver the solution on schedule. The end system recorded 99.9% uptime, indicating how reliable the system was. In addition to using Web Services and SOA to make the system highly maintainable and scalable, they employed a number of other techniques to shape the CIP into a highly reliable ERP solution. According to Ronald, logging was one such technique. They used a system-wide logging mechanism based on Apache Log4j to log all the important (and even not so important) activities that take place within the system. They also developed services specifically to support the monitoring function of the system, which means they considered monitoring a fundamental requirement. In addition, Ronald and the team made the CIP dynamically configurable and hot swappable to minimize downtime and maximize robustness.

I believe that NASA's CIP is one of many examples which effectively demonstrate the power of Web Services and SOA. These technologies can help even huge government agencies like NASA develop large, complex enterprise software systems on time while meeting all the strict reliability, availability and scalability requirements. As time goes on, these state-of-the-art technologies will prove to be even more powerful and useful. I'm just glad that I get to keep exercising SOA as a Software Engineer.

Tuesday, March 17, 2009

Pushing to the Limits

Just a couple of days back, the rest of the MOINC team and I conducted a large-scale performance testing round for MOINC at the advanced computing lab of the Department of CSE, University of Moratuwa. We used over 20 PCs for our tests, and the resulting numbers were pretty amazing. My friend and colleague Aravinda is already analyzing the statistics collected, and we hope to compile a research paper based on the results. We will be publishing the details of the experiments as well as the collected data soon on our website in a more aesthetically pleasing manner. In the meantime, I would like to say that we as the developers are very happy with the level of scalability the prototype MOINC platform displayed during the tests.

Following are some of the pictures that were taken during the performance testing round. The sight of the MOINC screen saver coming up in about 20 PCs at the same time was truly a treat to the eye :)

I thank our department head, Mrs. Vishaka Nanayakkara and officer in charge of the advanced computing lab, Mr. Ananda Fransiscu for all the support in carrying out this round of testing.

Monday, March 16, 2009

Get Ready to Get MOINCed

A few weeks back my team and I managed to finalize the first prototype of our dream project, MOINC. If you are clueless as to what project MOINC is all about, it is an attempt to combine the Web Services paradigm with grid computing. The goal of the project is to be able to deploy enterprise Web Services for high availability and high scalability using commodity hardware. So far we have done three formal presentations describing each of the major components of the MOINC platform, and we have written three research papers which we hope to publish very soon. In addition to the presentations we also conducted a formal demonstration of MOINC at the University of Moratuwa, using 6 computers, which effectively gave a preview of MOINC in action to a panel headed by our project supervisor Dr. Sanjiva Weerawarana and our senior lecturer Dr. Chathura De Silva. The presentations were attended by Dr. Sanath Jayasena, Dr. Chandana Gamage, Dr. Chathura De Silva and Mr. Shantha Fernando of the Department of CSE, University of Moratuwa.

We got pretty good feedback from all those who attended the presentations and the demonstration, clearly indicating that we are on the right track to make MOINC into a useful software solution for business organizations worldwide. Some of the core features of the MOINC platform that we demonstrated are listed below.

  1. Deploy, manage and undeploy service artifacts (Service artifacts are uploaded as Axis archives - *.aar files)
  2. Track down idling computers in the local network and add them to the MOINC grid dynamically
  3. Download service artifacts into idling computers from a centralized registry/repository
  4. Action script based screen saver for client PCs
  5. Load balance the incoming service requests among all the active nodes connected to the MOINC grid (Powered by Apache Synapse)
  6. Monitor the grid via an AJAX based Web interface (Powered by WSO2 WSF/AJAX)
  7. Collect statistics related to computers connected to the MOINC grid and use them in the intelligent load balance mode
  8. MOINC community portal and forums (backed by WSO2 Registry and JForum)

That’s certainly a lot of features for a mere prototype. No wonder we got pretty good feedback. However, there is certainly a lot more work to be done. We need to improve the overall security of the platform. Currently there are a couple of security loopholes in MOINC SMM that we need to close off. Dr. Sanjiva also suggested writing a Java security manager for the MOINC client agent to ensure the security of PCs connected to the MOINC server. We need to start working on that soon as well. We also have to finish our Maven2 integration and get the release artifacts for MOINC 0.1-alpha out soon. We still haven’t cut the release artifacts for all the components of MOINC, but the full source code of the prototype can be checked out from our SVN. Also check out the developer resources section on our website for the slides we used for the three presentations mentioned above.

I will also make arrangements for all of you humble readers to have a sneak peek at our research papers before we actually publish them. I promise. Meanwhile, enjoy the presentations and other design documents :)

Thursday, October 23, 2008

MOINC - The Future Web Services!!!

As Web services continue to become the dominant paradigm in the arena of distributed computing, more and more people around the world are starting to get interested in improving the performance and scalability achievable from Web services deployments. An enterprise grade Web services deployment must be highly available and scalable in order to be useful to clients and provide a competitive advantage for the service provider. For this reason it has become a must for Web services middleware providers to support high availability and scalability via general techniques such as clustering.

Some of my colleagues and I recently started a venture to explore the potential of grid computing as a means of improving the availability and scalability of Web services deployments. Our plan is to design and develop a complete open source Java Web services deployment platform which uses grid computing as the basis for clustering. This exciting concept is the brainchild of one of our lecturers at the Department of Computer Science and Engineering, Dr. Sanjiva Weerawarana, who also happens to be the chairman and CEO of WSO2. This research project focuses not only on improving the scalability of Web services deployments via grid computing, but also on putting much-wasted electrical energy to good use by getting idling computers to do some useful work.

We have named the project 'MOINC', which stands for Mora Open Infrastructure for Network Computing. (If you are confused about what 'Mora' is, it's a little nickname for the University of Moratuwa. 'Mora' also means shark in Sinhala.) So far we have managed to successfully complete the requirements gathering and design phases of this massive R&D effort. We have identified four components in our target platform, and four groups of undergrads are working on implementing them. The four components are:
  • MOINC Server
  • MOINC Client Agent
  • MOINC Server Manager
  • Thisara Communication Framework
You can learn more about our project and its four main components from our official website. You will also find links to our design documents, proposals, specifications and SVN repositories there. We recently released the 0.1-alpha version of the Thisara messaging framework which is one of the four components of the MOINC platform. You can download the binaries from our website too.

Friday, April 18, 2008

Let's Get Mashed Up!!!

Day by day, Web Services are making the Web more dynamic, interactive and fun. Thanks to Web Services, the World Wide Web, which used to be a pile of content for reading and browsing, has turned into the most dynamic and active source of information in the world. At the same time it has become a playing field for those who love to play, a huge worldwide dating service for those who wanna flirt, a medium for expressing viewpoints for those who want their opinions to be heard (I personally know a guy who used his blog to announce that he has started an affair with a girl) and a primary means of interactive communication for everybody in general. With Web Services the entire World Wide Web is restructuring itself into a pull-based model where the user pulls exactly the content he wishes to see from the network. The traditional push-based model, where Web administrators simply push content into their websites for users to browse, is becoming more and more outdated. Businesses are now rapidly adopting SOA and Web Services and organizing their business activities around them. Web services allow organizations to stay up-to-date on current socio-economic trends, competitors and the organization itself, effectively reducing a lot of the overhead associated with management and control.

The best thing about Web services is the flexibility associated with them. One can use a Web Service as it is, or mix it up with other Web services to build one's own customized Web Services. For example, take a Web service which brings you the latest weather information for Asia in plain text format. If you are lazy, dull and a bit tardy, you can consume the service exactly as provided. Or, if you are active, energetic and fun loving, you can mix it up with other services to make it even more useful. In this example one might mash the weather data service up with an on-line mapping service like Google Maps to create a dynamic weather map of Asia. Now how cool is that? As another example, take a service which brings you the latest prices of cars. If we mash it up with a service which brings you the architectural details of cars, we can create a live feed which gives you all the details one could possibly want to know about cars.
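At its core, a mashup like this is just a join between the outputs of two services. Ignoring the transport details, the idea can be sketched in plain Java; the data and names here are made up for illustration, and a real mashup would of course fetch both sides over HTTP:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// A toy "mashup": joins the output of one service (a song chart) with
// another (a video lookup) into a single combined result.
public class MashupSketch {
    static Map<String, String> mash(List<String> chart, Map<String, String> videos) {
        Map<String, String> combined = new LinkedHashMap<>();
        for (String song : chart) {
            // Fall back to a placeholder when the second service has no match.
            combined.put(song, videos.getOrDefault(song, "no video found"));
        }
        return combined;
    }

    public static void main(String[] args) {
        List<String> chart = List.of("Song A", "Song B");
        Map<String, String> videos = Map.of("Song A", "http://video.example/a");
        System.out.println(mash(chart, videos));
    }
}
```

Everything interesting about a real mashup platform lies in hiding the fetching, parsing and security plumbing so that only this join logic is left for you to write.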

Well, it sounds pretty exciting, doesn't it? But Web services are serious business. They are inherently complicated and difficult to handle. So how on earth can one just 'mash up' multiple services to form one service out of them, when even handling one service requires a lot of effort? Well, believe it or not, it's a lot easier than you think, 'if' you are using the right tool. And what is this 'right tool'? It's called the WSO2 Mashup Server. All you need to know is a bit of JavaScript, and you can get started with mashing up services in no time.

WSO2 Mashup Server 1.0 was released recently, and it's already generating a lot of hype in the SOA world. WSO2 also put mooshup.com (beta), the community site for mashup developers, on-line a couple of days back, and a number of users are already sharing the mashups they have created there. WSO2 Mashup Server really makes mixing and mashing up services embarrassingly easy. I myself didn't know a thing about mashing services up a few weeks back, and now, thanks to the WSO2 Mashup Server, I have already put a couple of mashups on-line at mooshup.com. 'YahooTunes', one of the mashups I developed recently, takes the RSS feed from Yahoo Music (http://music.yahoo.com), which has a top chart of songs, and mashes it up with the YouTube data API (http://gdata.youtube.com) to bring you the videos of the top ten songs. All I had to do was write a few lines of JavaScript code; most of the advanced tasks are handled for you by the host objects provided by the Mashup Server. The 'YahooTunes' mashup is now on-line at https://mooshup.com/services/hiranya/yahootunes and is a good demonstration of the power of the WSO2 Mashup Server and what it has to offer. The server also supports generating and displaying the WSDL and source code of mashups. Deploying, editing and managing mashups are made easier with the cool looking Web frontend. User management is provided using WSO2 User Manager, and InfoCard authentication is facilitated using the WSO2 Identity Solution. Sharing the mashups you create is easy too: simply create and test the mashup on your local Mashup Server and hit share to upload it to the community site or some other Mashup Server instance.

WSO2 Mashup Server is pretty stable and feature-rich for a 1.0 release. I haven't really found any major drawbacks or pitfalls so far. Full credit goes to all the developers who contributed to putting this wonderful piece of software engineering together. However, beware: WSO2 Mashup Server is known to be addictive!!!

Monday, April 14, 2008

Hello World with Web Services

“Time for minor skirmishes is over!!! Now we do battle!!!”

Don’t get alarmed by reading the above. I’m still blogging about Web services and related technologies (not about ancient warcraft). I just wanted to say that having discussed a lot of theoretical stuff in the previous blog entries (or rather beaten around the bush), I’m now fully armed and ready to start developing and deploying Web services of my own. There are loads of tested and proven methods, tools and IDEs to help Web services developers. To start with, I’m going to stick to a very simple and straightforward method which involves Apache Axis2 (well, actually I will be using WSO2 WSAS, which is powered by Axis2), the Eclipse IDE and the Axis2 Eclipse plugins. It is a pretty painless, known-to-work method, ideal for budding Web services developers (and even for pros of the industry). Let this blog entry be a tutorial for those out there who have just stepped into the world of Web services and are willing to experiment with the technology (and may the best developer win :-)).

Here’s what you need…

Set them up...

Installing this software is fairly easy and straightforward. Remember to set the JAVA_HOME environment variable after installing the Java Development Kit.

(A public WSO2 WSAS instance is available at http://wso2.org/tools on which you can test your Web services and get a feel for them.)

Once you have all the software tools properly installed and tested, we can get on with developing our first Web service.

And here’s the recipe…

Run Eclipse IDE and create a new Java project. (File --> New --> Java Project)

Give the project any name you like.

Create the following class in the project. (Names of the class and the package do not matter. Just give any name of your choice.)

To add a new class to your project simply right-click on the node corresponding to your project on the Package explorer pane and click on ‘New’ and then select ‘Class’ from the submenu.

package testws;

public class MyFirstWS
{
    public String greet(String name)
    {
        return "Hi, " + name + ", welcome to Web services World!!!";
    }
}

As you can see, it is a very simple Java class with just one method, which accepts a String parameter called ‘name’ and returns a String value.

Now save all the files and build the project. (Project --> Build Project)

Then run the Axis2 Service Archiver Wizard. (File --> New --> Other --> Axis2 Wizards --> Axis2 Service Archiver)

Browse to and specify the class file location of your project. Generally this is the ‘bin’ folder inside your top-level project folder. Also check the ‘Include .class files only’ check box (this is not strictly necessary, but then again, why take any risks?) and click ‘Next’.

The next window allows you to specify a WSDL for the Web service. Since we don’t have a WSDL for our service, simply check the ‘Skip WSDL’ check box and click ‘Next’.

You can add any additional Java libraries required to run the service in the next window. Since we don’t need any additional libraries for our service, simply click ‘Next’ to proceed. (If you do want to add additional libraries, just enter the full path of each library and click ‘Add’. You may add any number of libraries in this fashion.)

The next window deals with the services.xml file, which is important when it comes to deploying the Web service. In this case we will have our services.xml file auto-generated, so check ‘Generate the service xml automatically’ and click ‘Next’ to continue.

Now you can specify the Java class that you want to expose as a Web service. Type the fully qualified name of the class (in my case it is testws.MyFirstWS) and click ‘Load’. The methods of the class should appear in the area below. Select the methods you want in your service. Also, in the topmost text box you can specify a name for your Web service. This name will be displayed to external parties, so it’s best to pick a nice, cool name.

Finally, specify a name for the service archive file that will be generated, as well as a location to save it. Click ‘Finish’ to exit the wizard. If everything went right, Eclipse will display a message box saying ‘Service archive generated successfully!’

Now browse to the location you specified in the last step of the wizard. A file with the name you specified and an .aar extension will be there. This is the service archive file of your Web service, and you will need it to deploy the service. Service archive files are in a way similar to compressed .zip files: they are collections of other files, mainly the class files and the services.xml.
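For the curious, the inside of a typical service archive looks roughly like this (using the class and package names from this tutorial; exact contents may vary by Axis2 version):

```
MyFirstWS.aar
├── META-INF/
│   └── services.xml      (service configuration, auto-generated by the wizard)
└── testws/
    └── MyFirstWS.class   (the compiled service class)
```

If you rename the file to end in .zip, any archive tool can open it and show you this layout.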

Now it’s time to put the service on-line. If you are using WSO2 WSAS, simply start the server, log on to https://localhost:9443 in your web browser and sign in to the WSAS management console. The default username and password are ‘admin’ and ‘admin’. Click on ‘Services’, then click on ‘Upload service artifact’, the topmost link on the page. Browse to the location of your .aar file and click ‘Upload’. WSAS will display a message saying the service archive has been uploaded. (WSO2 WSAS supports hot deployment, so there is no need to restart the server.)

If instead you are using Tomcat or the Axis2 standalone server to deploy Web services, copy the .aar file manually to the ‘webapps’ directory of your Tomcat or Axis2 installation, then restart the server.

So that’s basically it. You have created and deployed a Web service. You can access the service by logging into the WSAS management console and clicking on ‘Services’. Your service will be listed there along with the other Web services. Now you can do a lot of things with it. WSO2 WSAS gives you a whole bunch of features to play around with deployed Web services. You can view the WSDL of your service, generate clients for it and even try it out on-line using the WSAS 'Try It' feature.


You can directly invoke the service and test it if you want. Open a browser window and go to http://localhost:9443/services/[service name]/greet?name=[your name]

If you get a result similar to the following, you have done everything right. It means your Web service is up and running, and working without any faults. So pat yourself on the back and be proud of yourself.

Wow!!! Wasn't that easy? Well, that's mainly because we used a very powerful IDE to develop the service and a feature-rich application server to deploy it. If we had to do everything manually, this would have been a pain in the butt. I think we really should appreciate these software tools and the people who have put in a lot of effort to create them.

Please note that I did the above example on Linux, but the exact same procedure can safely be carried out in other environments like Windows and Mac OS.
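Since the browser invocation above is just an HTTP GET, you can equally script it instead of using a browser. Here is a minimal sketch in plain Java, assuming the service was named MyFirstWS and using the host and port from this tutorial; the helper method is my own and not part of any Axis2 or WSAS API:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class GreetClient {
    // Builds the REST-style invocation URL for a one-parameter operation.
    static String buildInvokeUrl(String base, String service, String operation,
                                 String param, String value) {
        return base + "/services/" + service + "/" + operation
                + "?" + param + "=" + URLEncoder.encode(value, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        String url = buildInvokeUrl("http://localhost:9443", "MyFirstWS",
                                    "greet", "name", "Hiranya");
        // Perform the GET and print the XML response (requires the server to be running).
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

This is exactly what generated clients do under the hood for such simple GET invocations, just with SOAP envelope handling layered on top.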

The Bee and the Eye

A new paradigm…

Information technology is probably not the only science that has gone through some drastic improvements during the last couple of decades. Many other sciences have lifted themselves from ground level to a much higher plane, to the point where humans now find it difficult to live without them. Some new sciences have popped up from nowhere and managed to entirely change the way humans looked at nature and the universe at the beginning of the last century. Some sciences have combined to create such powerful technologies, giving rise to a variety of new application domains.

Business management, technology management and human resource management are a few such sciences that have really become key aspects of any organization, society or nation today. Even though these sciences are not at all strange or peculiar to present-day human beings, I seriously doubt that people who lived about 100 years back had even heard the terms. The above sciences, sometimes collectively referred to as ‘management’, have really become absolute necessities for organizations and people rather than just areas of study. So many new theories and principles have emerged in these sciences, which in turn have affected the way people now carry out various tasks within an organizational environment.

In the meantime, information technology has always been in its usual ever-developing mode, discovering more and more new application domains. It is now impossible to find an area of study or a science that hasn’t been touched by information technology. Management sciences have also been vastly reshaped and reworked by the introduction of information technology. The computer has become the most widely used business tool in the world (Paul Fremantle, one of the co-founders of WSO2, believes that a laptop computer is one of the basic things one needs in order to start a business), and most of the software systems developed today are targeted at organizations, to help them with their management activities. Information technology is now so tightly entwined with management sciences that it has given birth to a new area of study formally known as ‘Business Intelligence’, or BI for short.

Explaining the Bee(B) and the Eye(I) …

BI is mainly concerned with applications and technologies used to gather, provide access to and analyze data related to organizational operations. A business intelligence system is one of the key factors which directly affect an organization’s decision-making process. Business intelligence systems generally have powerful data processing capabilities and effective presentation methods. A business intelligence system enables the administration of an organization to make more informed, timely and accurate decisions while contributing to the organization’s competitive advantage. With a BI system the administration no longer has to depend solely on common sense and experience, and risky, ad-hoc methods of problem solving and decision making can be avoided.

Talking about BI systems, there are hundreds of software tools (for different types of platforms), each optimized for one or more of the subtasks associated with BI. Some of these subtasks are data gathering and storage, data security, data analysis, reporting and communication. BI systems are ideally suited for handling huge amounts of data and generally work well with multiple databases or data warehouses. BI systems are slightly smarter than usual software systems in that they can consider a large number of parameters while analyzing data and still make accurate decisions. In the industry we can see BI systems being used in three modes:

  • As a means of analyzing performance, projects and operations internal to organizations

  • As a means of data storing and analyzing

  • As a tool for managing the human side of businesses (marketing, HRM etc.)

BI has its own terminology, with unique technical terms and buzzwords. Each of the above-mentioned usages of BI is backed by a variety of BI systems and tools, both proprietary and open source.

OLAP Overlapped!!!

Since I have strongly emphasized the usefulness and power of BI systems, it is worth mentioning something about OLAP (On-Line Analytical Processing) as well, a concept closely associated with BI. The term ‘OLAP’ is probably the most popular buzzword in the study of BI. Some common applications of OLAP are business reporting, management reporting, budgeting and forecasting. With OLAP the primary concern is to optimize databases and other related applications to support fast retrieval, analysis and reporting of data; databases are optimized for fast retrieval rather than efficient use of space. There are special APIs (e.g. ODBO) and query languages (e.g. MDX) to be used with OLAP. (In fact, MDX is the de-facto standard query language for OLAP.)

Here Comes Web Services…

Web services, being the next ‘big thing’ in the world of IT, have already begun to make an impact on BI. Web services can take BI to the next level by adding interoperability to BI systems. With Web services there is no limit to the number of data sources a BI system might use, as Web services can enable the BI system to talk to remote APIs, databases and other data sources, even ones implemented using incompatible technologies. One can also use Web services to expand the usability of a BI system by distributing the services offered by the system over the web. It is almost impossible to imagine what a BI system can become when used along with Web services.

However, there are some pitfalls that we should look out for. When the services of a BI system are exposed to the external world as Web services, some critical measures should be taken to enforce the security and reliability of the system. Many BI system providers use various tools, protocols and technologies for this purpose (ranging from tools like LDAP to protocols like HTTPS). Using Web services with BI systems also has a significant impact on how metadata is handled by the BI system. Metadata management is one of the most influential features of a BI system; when used with Web services, almost all the metadata management tasks, including metadata modification and synchronization across applications, should be exposed as Web services.

By combining the interoperability of Web services with the mega-scale data analysis power of BI systems, we can create dynamic, real-time BI systems, so that organizations can monitor events in real time and make very accurate decisions based on the most up-to-date information. All the users of such a system can be dynamically notified and kept in sync with organizational activities through constant data feeds or periodic updates. Data gathering and reporting processes can be fully automated in this approach. Traditional BI systems are more or less batch processing systems that take a collection of data and perform some operations on it as and when users make requests, but with Web services BI systems can constantly monitor data and related events in real time and take action as and when a situation arises.

Saturday, April 12, 2008

First Step into The World of Web Services

Down the Memory Lane...

Information technology has grown in leaps and bounds over the last two or three decades. It has gradually developed into a state that was never anticipated at the inception of computer technology. Today, computers serve as a powerful business tool, a medium of communication connecting people all around the globe and an unlimited source of information and entertainment. Computers play a major role in almost all kinds of businesses and have become a key ingredient in areas like education, communication and resource management.

Probably the two most important driving forces behind this rapid and significant growth of information technology are the developments in hardware and software technologies themselves and the introduction and rapid growth of the World Wide Web (WWW).

Hardware and software technologies have gone through major improvements in the last few decades. A few years back, having a computer running a 400MHz Intel Pentium II was considered great; I used to admire friends who had such machines. But nowadays, owning a computer of that vintage will make others think either you are extremely poor or you have gone nuts. This is a simple indication of how far hardware has come. Today, finding a computer with a processor clocked in excess of 3.0GHz and physical memory in excess of 1.0GB is sometimes easier than finding a loaf of bread. Software technologies are not far behind hardware in terms of performance and features. The days of small software applications with boring UIs and meager capabilities are long gone. Enterprise-scale software applications with loads of features and dashing interfaces have come into play. The industry no longer talks about small applications; if an application does not offer many features, it is considered inferior.

The WWW has given an entirely new set of capabilities to existing software. Even though the WWW was initially used merely as a source of information and a medium to publish and retrieve it, today it is being used for things that were never thought possible a few years ago. With the WWW, software residing on one physical machine can communicate and collaborate with software located on a distant remote machine to get a job done. This has become possible mainly because of the inception of the concept of Web services, which the W3C defines as follows.

"A software system designed to support interoperable machine-to-machine interaction over a network"

The keyword to note here is 'interoperability'. Web services allow machines and software systems to interoperate by exchanging data and information without relying on their underlying implementations. This means a software system written using the Microsoft .NET framework and running on the Microsoft Windows operating system may collaborate with a software system written in Java running on the Linux operating system. As long as the two systems can understand each other's messages, they can work in collaboration like a newly married couple.
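The key to this interoperability is that the data travels as plain XML text, not as a platform-specific binary format. As a minimal sketch of the idea in Go (the `Order` message type here is a hypothetical example, not part of any real service), a struct can be serialized into XML that any .NET or Java consumer could parse:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Order is a hypothetical business message. Once marshaled to XML,
// it can be read by any platform, regardless of the sender's language.
type Order struct {
	XMLName  xml.Name `xml:"Order"`
	ID       string   `xml:"Id"`
	Quantity int      `xml:"Quantity"`
}

// orderToXML serializes an Order into indented XML text.
func orderToXML(o Order) string {
	data, err := xml.MarshalIndent(o, "", "  ")
	if err != nil {
		panic(err)
	}
	return string(data)
}

func main() {
	fmt.Println(orderToXML(Order{ID: "A-100", Quantity: 3}))
}
```

The receiving system only needs to agree on the element names, not on Go, structs or anything else about the sender.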

Web Services is the Way....

Web services can be very helpful (maybe more than just 'helpful') in today's complex and growing network of businesses. Organizations around the globe often need to collaborate with many other organizations to get jobs done. Concepts like outsourcing, globalization and internationalization have become necessary ingredients of all types of business models. In this kind of dynamic, collaboration-oriented business environment, even standalone enterprise software systems do not stand much of a chance. Different organizations use different technologies, hardware platforms and software systems, and without Web services, enabling these diverse systems to collaborate could be more difficult than finding kryptonite. As mentioned before, Web services allow systems to leave their technological and environmental differences behind and interoperate to get a particular job done.

Web services also allow valuable resources (e.g. databases, software APIs) to be accessed and used remotely by a large number of people. Thanks to Web services, people no longer have to work on one dedicated PC; they can work at home, at a friend's place or at the other end of the world. Web services are very flexible in that we can develop different types of client applications, with different capabilities, to consume the same Web service. Web services are also generally secure, reliable and easy to manage and maintain.

Many organizations around the world are quickly switching from conventional software systems to Web services because they are more reliable and cheaper. They give a whole lot of confidence, flexibility and freedom to the end user and improve the profitability of the organization as a whole. Organizations that use Web services are more visible and reachable from the point of view of the external world. For these reasons Web services are getting attention all around the world, and most software companies now openly support the development and use of Web services. Organizations like WSO2 and the Apache Software Foundation have clearly undertaken the mission of popularizing the use of Web services at the industry level and in day-to-day applications.

A Glimpse at Web Services Platform...

Web services are fairly complex software systems to understand. They are made up of many different components, so a layered model (a stacked model) called the Web services stack is used to analyze and understand the principles behind them. Two key ingredients of the Web services platform are HTTP and XML. HTTP is an application-layer protocol in the TCP/IP stack, commonly used on the WWW. XML is a simple but very effective language for message passing and data representation. In Web services, XML is the most commonly used medium of message passing, and almost all the key protocols and standards are built around it.

Adhering to standards and specifications is very important in Web services. To enable two different systems to communicate and collaborate, it is vital to have an agreed-upon set of standards; only then can the messages sent by one party be fully understood by the others. SOAP is the standard protocol used in Web services for message passing. Based on XML, it provides the basic messaging framework for Web services and forms the foundation layer of the Web services stack. On this foundation many other abstract layers can be built to complete the stack; the higher layers provide more advanced features like Web services security, Web services policy and Web services manageability.

WSDL, another XML-based language, is used to describe Web services to external parties so that they can communicate with the services properly and build client applications to consume them.

The Dark Side of Web Services...? - Well it's Not That Dark...

One major point against the popularization of Web services is that they are complex to understand and difficult to develop. Some people question the performance levels achievable with Web services due to the use of XML, SOAP and HTTP. Others argue that Web service principles tend to violate the normal model on which the Internet operates: in the TCP/IP stack, HTTP resides in the application layer, the topmost layer, yet with Web services we mount another set of layers on top of the application layer, or to be more specific, on top of HTTP.

However, in my personal opinion, Web services exist as a result of inevitable improvements in technology and changing user requirements. Therefore, sooner or later they are going to dominate the world of information technology, and this transformation is gradually taking place as you read this page. New development techniques, tools and frameworks have been created for the easy development of Web services and client applications. As a simple example, the WSO2 Web Services Application Server, a state-of-the-art server for Web services (with lots of other features), comes with a plugin for the popular IDE Eclipse. With that, the end user can easily develop Web services using Eclipse and directly deploy them on the WSO2 Web Services Application Server (you don't have to know a damn thing about the Web services stack and such). Many popular IDEs today, including Eclipse and Microsoft Visual Studio, support the development and testing of Web services, and a lot of support and guidance is available for Web services developers.

Web services are indeed the next big thing! (Perhaps they already are.) Because of that, maybe in a few years people will start making fun of enterprise software systems like they now do of those old console-based applications. (“Console applications...ha ha ha...what a retard!!!”)