Monday, December 31, 2012

Creating and Uploading Eucalyptus Images

By far the best reference I've found on the subject:

Simple, elegant, and most importantly, no BS! I've tried these steps out several times on Eucalyptus 2 and Eucalyptus 3 and they work like a charm. Nice work Igor.

Saturday, December 29, 2012

Deploying Applications in the Cloud Using AppScale

In my last two posts I briefly explained how to set up AppScale and get it up and running. Once you have a running AppScale PaaS, you can start deploying webapps in the cloud. You develop webapps for AppScale using the Google App Engine (GAE) SDK. AppScale is fully API compatible with GAE, and therefore any GAE application can be deployed on AppScale with no code changes. If you do not have any GAE applications to try on AppScale, you may follow one of the official GAE tutorials and develop a sample GAE application using Python, Java or Go. Alternatively you can check out the AppScale sample-apps repository and try deploying one of the pre-packaged sample apps. To check out the sample-apps repository, execute the following command in a shell:
git clone
This will check out a directory named sample-apps onto your local disk. This directory contains three subdirectories named python, java and go. As the names suggest, each subdirectory contains a number of sample GAE applications written in the corresponding language. One of the simplest sample applications available for you to try out is the "guestbook" application. The Python version of the application can be found in the sample-apps/python/guestbook directory and the Java version in the sample-apps/java/guestbook directory. This application provides a simple web front-end for users to enter a comment and browse comments entered by other users. It uses the GAE datastore API under the hood to store and retrieve the comments entered by users. You can use AppScale-Tools to upload the guestbook application to your AppScale cloud.
appscale-upload-app --file sample-apps/python/guestbook --keyname appscale_test
The keyname flag should indicate the keyname you provided when starting the AppScale instance with the appscale-run-instances command. Once you execute the above command, you will be prompted to enter the admin e-mail address for your application. Here you can enter the admin e-mail address you used when starting AppScale. Application deployment can take a few minutes. If everything goes smoothly, the tools will print the URL through which your webapp can be accessed.
Your app can be reached at the following URL:
Try uploading a few applications using the above command and see how it goes. You can also try developing your own custom apps and deploying them in the cloud. 
AppScale-Tools also allow you to start an AppScale cloud with an application preloaded. To invoke this feature, you simply need to pass the --file option to the appscale-run-instances command.
appscale-run-instances --min 10 --max 10 --infrastructure euca --machine emi-12345678 --keyname appscale_test --group appscale_test --file sample-apps/python/guestbook
To undeploy an application you can use the appscale-remove-app command.
appscale-remove-app --appname guestbook --keyname appscale_test
Finally you can terminate and tear down an AppScale PaaS using the appscale-terminate-instances command.
appscale-terminate-instances --keyname appscale_test
If your AppScale PaaS was running over an IaaS layer such as EC2, the above command will also take care of terminating the VMs in the cloud.
In my next few posts, I'll explain a little bit about GAE APIs and how to implement cool apps for AppScale using those APIs.

Wednesday, December 26, 2012

Starting AppScale

This post is written assuming you already have a machine or a VM image with AppScale and AppScale-Tools installed. If you don't, please refer to my previous post, "Setting Up AppScale".
We use AppScale-Tools to start, manage and terminate AppScale instances in various environments. The inputs we pass to the tools differ slightly based on the environment in which we want to start AppScale. If you're going to deploy AppScale without the help of an IaaS layer like EC2 or Eucalyptus, then you're responsible for manually starting up the required machines or VMs. For example, if you're going to run AppScale on the Xen hypervisor, you should first start your AppScale Xen images manually. Similarly, if you're going to run AppScale over VMware Fusion, you should start your AppScale Fusion images manually. Once the instances are up and running, note down their IP addresses (assuming that the instances obtain IP addresses from a service such as DHCP). As an example, let's assume you have three VM instances up and running and you have noted down their IP addresses. With this information we should compile a simple YAML configuration listing those addresses.
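A minimal sketch of such a three-node layout follows; the structure is the standard AppScale ips.yaml format, and the IP addresses are placeholders I've made up (substitute the addresses you noted down):

```yaml
---
:controller: 192.168.1.2
:servers:
  - 192.168.1.3
  - 192.168.1.4
```

Here :controller designates the head node, and :servers lists the remaining nodes.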
Let's call this file ips.yaml. This configuration instructs AppScale to use one machine as the controller and the rest as ordinary nodes. The controller (aka the head node) operates as the load balancer and the ZooKeeper leader. The other nodes assume the application server and DB server roles; in this case both server nodes take on both roles, so we end up with an application server cluster (of 2) and a replicated DB cluster (of 2) fronted by a single load balancer. You can further fine-tune how AppScale assigns roles to machines by changing your ips.yaml file. For more information regarding role placement configuration, please refer to the "Placement Support" article on the AppScale wiki.
Once you have your ips.yaml file you can start deploying AppScale on the VMs. First, create an SSH key pair so that AppScale-Tools can log in to the relevant machines remotely. This is done by executing the appscale-add-keypair command.
appscale-add-keypair --ips /path/to/ips.yaml
This will prompt you to enter the root password for each of the machines specified in your ips.yaml. Once this step has completed, you can fire off AppScale using the appscale-run-instances command.
appscale-run-instances --ips /path/to/ips.yaml
This will start the AppScale daemons on each of the machines and initialize the AppScale PaaS. At some point you will be prompted to enter an e-mail address and a password for the AppScale admin account. You can enter any e-mail address and password here; you can later use these credentials to log in to the AppScale management console. (For more details on deploying AppScale over a virtualized cluster setup, refer to the AppScale wiki.)
If you are deploying AppScale over an IaaS layer such as EC2 or Eucalyptus, then you don't need to start the VMs manually, nor do you need an ips.yaml file. All you need is the unique ID of the AMI or EMI provided by your IaaS provider and the security credentials to interact with the IaaS. In the case of EC2 you will need your AWS access key, secret key, AWS private key and X.509 certificate. You can get these from the security credentials page of your AWS management console. If you're on Eucalyptus, you can download a zip file from the Euca admin console which contains all the required credentials.
Using these credentials you should first set up some environment variables.
export EC2_CERT=~/mycert.pem 
export EC2_PRIVATE_KEY=~/mypk.pem 
export EC2_ACCESS_KEY=my-access-key 
export EC2_SECRET_KEY=my-secret-key
In the case of Eucalyptus you can simply source the eucarc file found in the downloaded credentials archive.
source eucarc
With these environment variables in place you're ready to go. Simply execute the appscale-run-instances command as follows.
appscale-run-instances --min 1 --max 1 --infrastructure ec2 --machine ami-52912a3b --keyname appscale_test --group appscale_test
This will start a simple 1-node AppScale cloud in EC2. If you want more instances in your PaaS, simply adjust the values of the min and max flags. For deployment on Eucalyptus, change the value of the infrastructure flag to 'euca' and provide a valid EMI ID for the machine flag. Here's the command for a 10-node AppScale deployment over Eucalyptus.
appscale-run-instances --min 10 --max 10 --infrastructure euca --machine emi-12345678 --keyname appscale_test --group appscale_test
The values of the keyname and group flags will be used to create a key pair and a security group in the respective IaaS environment, so make sure they are unique within your EC2/Eucalyptus account. Note that when running in a billed environment such as EC2, the machines spawned by AppScale are billed against the EC2 credentials provided (the credentials that we set as environment variables). AppScale spawns m1.large instances, which cost about 26 cents an hour.
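As a quick back-of-the-envelope check on that billing note, the following sketch estimates the cost of a deployment. The helper function is my own illustration; the 26-cents-an-hour m1.large rate is the figure quoted above and will vary with current AWS pricing.

```python
# Back-of-the-envelope cost estimate for an AppScale deployment on EC2.
# The 0.26 $/hour m1.large rate is the figure quoted in the post above;
# check current AWS pricing before relying on it.
HOURLY_RATE_M1_LARGE = 0.26

def estimate_cost(node_count, hours, hourly_rate=HOURLY_RATE_M1_LARGE):
    """Return the total cost in dollars of running node_count instances."""
    return node_count * hours * hourly_rate

# A 10-node deployment (--min 10 --max 10) left running for a day:
print(round(estimate_cost(10, 24), 2))  # 62.4
```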
An AppScale deployment can take several minutes to complete. In environments like EC2 and Eucalyptus, deployment times over 10 minutes are not uncommon. This delay is primarily due to the VM bootup and initialization overhead of IaaS environments, which cannot be avoided. Once the deployment is complete, AppScale-Tools will print a URL that you can use to check the status of the AppScale deployment:
The status of your AppScale instance is at the following URL:
Simply copy and paste the URL into your web browser to view the AppScale status page.
In my next post I'll explain how to deploy applications in an already running AppScale PaaS.

Tuesday, December 25, 2012

Setting Up AppScale

The AppScale installation procedure is already well documented on the AppScale wiki, so this blog post serves only as a summary and a refresher of the most important steps. Installing AppScale usually boils down to creating a virtual machine (VM) image for the environment in which you intend to deploy it. For example, if you wish to deploy AppScale on Amazon EC2, you need to create an EC2 AMI with AppScale installed on it. If you wish to run AppScale on Eucalyptus, you will be creating a Eucalyptus EMI with AppScale on it. Similarly, you can create AppScale VM images for other virtualized environments (e.g. Xen, VirtualBox, VMware Fusion). The only exception is when you want to deploy AppScale without a virtualization layer, in which case you can install AppScale directly on the host operating system.
As of the time of writing, AppScale is only supported on Ubuntu Lucid 10.04 Server Edition. This is common to all the target environments, so regardless of the virtualization layer you intend to use, your AppScale VM image must be running Ubuntu Lucid 10.04 Server Edition. However, support for newer versions of Ubuntu is on the way, so I'd recommend checking the AppScale wiki to find out the latest OS supported by the system.
Once you have procured your Ubuntu Lucid image, boot it up and start installing AppScale on it. First, log in to the VM as root. We will use the command-line Git client to pull the latest AppScale source code onto the VM, but first we need to install it.
apt-get install git-core
Now clone the AppScale and AppScale-Tools repositories to the home directory of root.
git clone
git clone
If you want to pull any recent bug fixes or improvements, simply pull the AppScale testing branches by adding the --branch testing flag to the above two commands. Once you have the two repositories checked out, change into the appscale directory and run the following command to kick off the build process.
bash debian/
The AppScale build can take 20-30 minutes depending on how fast your Internet connection is. Once the build is complete, check for a directory named /etc/appscale. It should contain a subdirectory named after the version of AppScale you just installed, and that subdirectory should contain a collection of text files named after different database engines (e.g. cassandra, hbase). If these files are in place, that's a pretty reasonable indication that everything has gone according to plan. Now change into the appscale-tools directory you checked out earlier, and run the same build command as above to build the AppScale command-line tools. This shouldn't take more than a few seconds.
You are now done installing AppScale on the VM image and can go ahead and start bundling/packaging it for your target environment. For example, if you're going to run AppScale on EC2, you can start the AMI bundling process now. The actual procedure for bundling and uploading VM images will depend on your target environment.
It is particularly easy to set up AppScale in the EC2 environment: Amazon's EBS technology makes it very simple to create AMIs with AppScale. Here's all you have to do:
1. Start a fresh Ubuntu Lucid 10.04 EBS image (e.g. ami-1634de7f). If you start this VM as an m1.large instance, the whole process will finish much faster.
2. Run the above described procedure to check out and build AppScale on the EC2 instance.
3. Login to your AWS management console. Right click on your EC2 instance and select the "Create Image (EBS AMI)" option from the menu that appears. This will start bundling your AppScale AMI and it will be available for deployment within several minutes.
It is sometimes necessary to create VM images with only the AppScale-Tools. In this case you can check out and build just the AppScale-Tools source code. However, you will have to install a few additional libraries manually.
apt-get install openjdk-6-jre
apt-get install ruby
apt-get install rubygems
apt-get install ruby1.8-dev
gem install json
This should enable you to run AppScale-Tools on your VM image.
In my next post I'll describe how to start AppScale and deploy applications on the AppScale PaaS.

Saturday, November 10, 2012

Flexible Cloud Computing with AppScale

I recently started contributing to AppScale, an open source project aimed at developing a scalable Platform-as-a-Service (PaaS) solution. The AppScale project was initiated at UC Santa Barbara with the intention of implementing an open cloud PaaS that would enable more research and studies in the area of cloud computing. Over the years AppScale has evolved rapidly, gathering a wide range of features, and now many enterprise users are finding it useful as a platform that facilitates private, public and hybrid cloud deployments.
One of the most attractive features of AppScale is the flexibility it provides to cloud administrators and cloud application developers. Cloud administrators can deploy AppScale on a variety of infrastructure setups. It can be deployed on virtualized computing clusters based on solutions such as Xen and KVM. AppScale also runs on Infrastructure-as-a-Service (IaaS) solutions such as Amazon EC2 and Eucalyptus (it is worth mentioning that Eucalyptus also started out as a research project at UC Santa Barbara). If needed, AppScale can also be deployed directly on physical hardware without the support of any virtualization service. AppScale also comes with a Ruby API and a set of command line tools that can be used to deploy AppScale clouds on any of the above infrastructure setups with minimal human intervention. A single shell command is all it takes to deploy even a 100-node AppScale cloud. To make this process even easier, I recently implemented a new web UI component which allows users to deploy AppScale without worrying about the infrastructure complexities at all (more on this in a future blog post).
As a PaaS offering, AppScale exports a wide range of services for the cloud application developers to use in their applications:
  • Datastore - Persistent storage for application data. Generally operates as a replicated key-value store with support for range queries and transactions within entity groups.
  • Namespace - Facilitates segmenting data into multiple partitions. Can be used in scenarios where certain data items need to be separated from each other (e.g. production data vs. test data)
  • Memcache - Distributed cache. Useful in developing stateful applications and improving application performance.
  • Blobstore - Persistent storage for large data objects and files.
  • XMPP - Provides instant messaging capabilities to AppScale applications.
  • Channel - Allows pushing data into clients' JavaScript code.
  • Users - User account creation and profile management.
  • Mail - Facilitates sending e-mails from applications.
  • Images - Supports programmatic manipulation of images.
  • URL Fetch - Facilitates consuming local and remote REST APIs.
  • Task Queue - Facilitates asynchronous execution of long running jobs.
All these fundamental services are fully API compatible with Google App Engine (GAE). Therefore any GAE application can be deployed on AppScale with zero modifications. This has two very interesting outcomes for users. First, it makes it absolutely simple to migrate from GAE to a private or hybrid cloud offering based on AppScale. Second, it makes developing applications for AppScale quite straightforward, as all GAE APIs are very well documented and come with a powerful SDK. As a result, AppScale has managed to gather a large number of sample applications and a very active developer community writing apps for it in a very short time. Just like in GAE, applications for AppScale can be developed in Java, Python or Go. Another interesting aspect is that AppScale allows using a wide range of database systems underneath its Datastore API. Currently supported database systems include Cassandra, HBase, Hypertable, MongoDB, MemcacheDB, MySQL Cluster, Voldemort and Redis. This is one area where the flexibility of the AppScale architecture can be observed clearly, as it enables cloud administrators to set up AppScale with any one of these database solutions depending on their application requirements and organizational standards.
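Since the GAE SDK modules themselves aren't importable outside an App Engine/AppScale runtime, the following sketch fakes the basic call shape of the memcache service with an in-process dict, purely to illustrate the kind of API surface an AppScale/GAE app codes against. The class and key names are my own; a real app would import google.appengine.api.memcache instead.

```python
# Illustrative only: a dict-backed stand-in that mimics the basic call
# shape of the GAE memcache service (set/get/delete). This fake exists
# purely to show the API surface applications code against.
class FakeMemcache:
    def __init__(self):
        self._store = {}

    def set(self, key, value, time=0):
        # 'time' (an expiry in seconds) is accepted but ignored here.
        self._store[key] = value
        return True

    def get(self, key):
        return self._store.get(key)

    def delete(self, key):
        return self._store.pop(key, None) is not None

memcache = FakeMemcache()
memcache.set("greeting", "hello from AppScale")
print(memcache.get("greeting"))  # hello from AppScale
```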
One thing that should be stressed is that AppScale is not just about running GAE applications; it facilitates deploying a wide range of other applications in the cloud too. This is mainly enabled by Neptune, a software overlay that runs on top of AppScale. It comprises a domain specific language that allows developers to execute arbitrary programs on the AppScale cloud PaaS. These programs may include standalone programs written in any language, MapReduce jobs and high performance computing applications developed using technologies such as MPI, UPC, X10 and StochKit. The ability of AppScale and Neptune to execute high performance computing applications in the cloud has attracted a lot of attention from the scientific research community, as it enables executing long running, resource intensive tasks on the cloud using as many nodes as required, greatly reducing task completion time and eliminating the need to procure expensive server grade hardware.
On top of all this flexibility, AppScale also provides excellent fault-tolerance and autoscaling capabilities. All the critical services, such as the database and the application server, can be easily replicated for high availability. ZooKeeper is used to keep track of all the active services and nodes, and automatic failover is performed upon detecting failures. The AppScale autoscaler component keeps track of resource utilization and system performance metrics, and spins up new nodes dynamically as demand changes over time. The autoscaler is another very flexible component of the AppScale architecture, in that it allows cloud administrators to engage custom autoscaling policies depending on their application performance and scalability requirements. Some of the built-in autoscaling policies include HA-aware, QoS-aware and cost-aware autoscaling. If needed, more than one autoscaling policy can be engaged at once with an administrator-defined priority arrangement.
One of my personal favorite features of AppScale is its placement support: the ability of the PaaS to smartly place cloud services on a given set of nodes. For example, if we start an AppScale instance using three nodes (that is, 3 physical or virtual machines), it will place an application server component and a database component on each of them. One of the database components acts as the master and the others act as slaves, with automatic data replication enabled among all database components. One of the three nodes is designated as the head node, and the load balancer and ZooKeeper are deployed on that node. Note that AppScale attempts to use the nodes in the most optimal manner possible by replicating all the critical services. The actual placement strategy is also configurable, but if the administrator does not explicitly state one, we can rely on AppScale to figure out a suitable placement strategy on its own.
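The default three-node placement described above can be sketched as a tiny function. This is my own illustration of the logic, not AppScale's actual code, and the IP addresses used in the usage lines are placeholders.

```python
# Illustrative sketch of the default placement described above (not
# AppScale's actual implementation). The first node becomes the head
# node (load balancer + ZooKeeper); every other node runs an app
# server and a DB server, with the first of those acting as DB master.
def default_placement(node_ips):
    if not node_ips:
        raise ValueError("at least one node is required")
    placement = {node_ips[0]: ["load_balancer", "zookeeper"]}
    for i, ip in enumerate(node_ips[1:]):
        placement[ip] = ["appserver", "db_master" if i == 0 else "db_slave"]
    return placement

# Three hypothetical nodes, as in the example in the post:
layout = default_placement(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(layout["10.0.0.1"])  # ['load_balancer', 'zookeeper']
print(layout["10.0.0.3"])  # ['appserver', 'db_slave']
```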
If my introduction to AppScale has intrigued you to try it out, feel free to grab the latest stable source from our GitHub repo. Detailed instructions on building the source and creating your own AppScale machine images can be found on the AppScale wiki.
If you want a more ready-to-roll distribution of AppScale for a quick look, check out our public EC2 image ami-52912a3b, which comes preloaded with AppScale 1.6.3. If you already have an EC2 account, you can simply set up the AppScale command-line tools on your computer and start an AppScale instance in EC2 using the tools and the above AMI.
I will roll out a couple of detailed blog posts on setting up AppScale in the near future, so stay tuned.

Tuesday, September 25, 2012

Busting Synapse and WSO2 ESB Myths

Paul Fremantle, PMC chair of the Apache Synapse project and CTO of WSO2, has written a very interesting blog post addressing some of the myths concerning Apache Synapse and WSO2 ESB. Both projects are by now quite popular and mature, and have a very large user base including some of the largest organizations in the world. Surprisingly, there are still some people who believe that these projects do not fall under the category of ESB (Enterprise Service Bus) implementations. In his latest post, Paul gives a clear and complete answer to these misconceptions, and backs it up with a wide range of facts.
ESB is one of those things in the IT world that lack a proper standard definition. The best definitions I've come across attempt to align the term along the following lines:
  • An architectural construct that provides fundamental services to complex architectures
  • An entity that acts as a hub connecting many diverse systems
  • A central driver that facilitates Enterprise Application Integration (EAI)
Apache Synapse and WSO2 ESB pass with flying colors on all the above criteria. They provide an array of fundamental services to the systems and architectures that rely on them. Some of these basic services are:
  • Message passing, routing and filtering
  • Message transformation
  • Protocol conversion
  • QoS enforcement (security, reliable delivery etc)
  • Logging, auditing and monitoring
Because Synapse and WSO2 ESB do such a good job providing these fundamental services, they can be used to integrate a large number of heterogeneous systems in an enterprise setting. As Paul has also pointed out in his post, Synapse and WSO2 ESB are currently used in hundreds of production deployments all around the world to connect various applications, implemented using various technologies (both open source and proprietary) and running on various platforms (Windows, Linux, .NET, J2EE, LAMP, you name it). In other words, Synapse and WSO2 ESB are widely used as centralized drivers that facilitate EAI. The configuration model of Synapse and WSO2 ESB is so agile and powerful that practically any EAI pattern can be implemented on top of them. In fact, there are tons of samples, articles and tutorials that explain how various well-known EAI patterns can be implemented using these 'ESB implementations'.
One thing that I've learnt from writing code for Synapse is that it has a very flexible enterprise messaging model. Support for any wire-level protocol or any message format can be easily implemented on top of this model and deployed as a separate pluggable module. During the last few years, I have contributed to the implementation of the following adapters/connectors on various occasions:
  • FIX transport
  • SAP transport (IDoc and BAPI support)
  • MLLP transport and HL7 message formats
  • CSV and various other office document formats
  • Thrift connector
  • Numerous other custom binary protocols based on TCP/IP
This is just a bunch of stuff that I've had the privilege of implementing for Synapse/WSO2 ESB. I know for a fact that other committers of Synapse and WSO2 ESB have been working on supporting dozens of other protocols, message formats and mediators. Thanks to all this hard work Synapse and WSO2 ESB are currently two of the most powerful and feature-complete ESB implementations anyone will ever come across. Also the existence of connectors for so many protocols and applications is a testament to the agility and flexibility that Synapse and WSO2 ESB can bring in as ESB products.
Another aspect of Synapse/WSO2 ESB that has been questioned many times is their ability to support RESTful integrations (Paul also addresses this issue in his post). This confusion stems from the fact that Synapse uses SOAP as its intermediary message format. Without going into too many technical details, I'd just like to point out that one of the largest online marketplace and auction providers in the world uses Synapse/WSO2 ESB to process several hundred million REST calls on a daily basis. The new API support we have implemented in Synapse makes it absolutely simple to design, implement and expose RESTful APIs on Synapse/WSO2 ESB. In fact, I recently published an article which demonstrates through practical examples how powerful RESTful applications can be implemented using Synapse/WSO2 ESB while supporting advanced REST semantics such as HATEOAS. The recently released WSO2 API Manager product, which supports exposing rich web APIs with API key management, is also based on Synapse/WSO2 ESB.
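For a flavor of the API support mentioned above, here is a minimal Synapse API definition. The element names follow the Synapse configuration language; the API name, context, URI template and backend address are made-up values for illustration only.

```xml
<api xmlns="http://ws.apache.org/ns/synapse" name="OrderAPI" context="/order">
  <!-- Handles GET /order/{orderId} -->
  <resource methods="GET" uri-template="/{orderId}">
    <inSequence>
      <!-- Route the lookup to a hypothetical backend order service -->
      <send>
        <endpoint>
          <address uri="http://localhost:9000/services/OrderService"/>
        </endpoint>
      </send>
    </inSequence>
    <outSequence>
      <!-- Return the backend response to the client -->
      <send/>
    </outSequence>
  </resource>
</api>
```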
I think I have made my point. Both Synapse and WSO2 ESB are excellent ESB choices if you're looking to adopt SOA or enterprise integration within your organization. Their wide range of features is second only to the very high level of performance they offer in terms of throughput and resource utilization. Please also go through the post made by Paul, where he explains some of the above issues with low-level technical details. I particularly like his analogy concerning Heisenberg's principle of uncertainty :)

Tuesday, September 11, 2012

How to GET a Cup of Coffee the WSO2 Way: An Article

Since we implemented REST API support for WSO2 ESB (and Apache Synapse), we have received many requests for new samples, articles and tutorials explaining this powerful feature. I started working on an end-to-end sample for API support somewhere around March but didn't have enough cycles to write it up. Finally, after a delay of several months, I was able to submit the finalized sample and article to the WSO2 Oxygen Tank last month. It has now been published and is available for reading online.
This article starts with a quick overview of the REST support provided by the WSO2 platform. It describes the API mediation capability of the ESB in detail, providing several examples. Then the article explains how a complete order management system can be constructed using the REST support available in WSO2 ESB. This sample application was inspired by the popular article on RESTful application development titled "How to GET a Cup of Coffee" by Jim Webber. Webber's article describes a hypothetical application used in a Starbucks coffee shop which supports placing orders, making payments, and processing orders. The sample described in my article implements all the major interfaces and message flows outlined by Webber. I have also made all the source code and configurations available so that anybody can try the same on their own machines using WSO2 middleware. To make it even more interesting, I implemented a couple of user interfaces for Webber's Starbucks application. These UI tools are also described in the article; they can help you better understand how the applications communicate with each other using RESTful API calls and how their application state changes according to typical HATEOAS principles.
Towards the latter part of the article I discuss some of the advanced features of RESTful application development and how such features can be implemented on top of WSO2 middleware. This includes some very important topics such as security, caching and content negotiation.
I hope you will find this article interesting and useful. As always, feel free to send any feedback directly to me.

Saturday, September 8, 2012

Turning Over a New Leaf

Last month I said goodbye to a fantastic career at WSO2. I joined WSO2 in April 2009 as a Software Engineer and worked my way up the ladder to become a Senior Technical Lead. Most of my time at WSO2 was spent on WSO2 ESB, the company's flagship product. I also contributed to Carbon, the core platform on which all WSO2 products are built, and implemented a number of cross-cutting features which are currently used in multiple WSO2 product offerings. Towards the latter part of my WSO2 career I was mostly working on improving the REST support in the WSO2 platform and implementing the WSO2 API Manager product. The WSO2 API Manager 1.0 release that went out in early August marked the end of my career at WSO2 (at least for the next few years).
My time at WSO2 was a great learning experience. I learned and mastered a variety of technologies like Web Services, SOA, cloud computing and API management. I got to travel to many countries and had the privilege of meeting and working with some of the largest IT organizations in the world. Above all, I had a lot of fun working with the WSO2 team. It's truly a great organization with some of the smartest and coolest people I've ever had the privilege of working with.
This month I'm starting my Computer Science graduate studies at the University of California, Santa Barbara. I'll be doing distributed systems research with Prof. Chandra Krintz in UCSB's Mayhem and RACE labs. Popular open source cloud platforms like Eucalyptus and AppScale were originally designed and developed in these labs, so I'm currently spending some time getting up to speed on them. You will see me blogging about these technologies from time to time. Wish me luck :)

Wednesday, August 22, 2012

WSO2 API Manager: Designed for Scalability

Scalability is a tough nut to crack. When developing enterprise software and deploying it in mission critical environments, you need to think about scalability from day one. If you don’t, you can rest assured that a whole bunch of unpleasant surprises are heading your way. Some of the problems you may encounter are systems crashing inexplicably under heavy load, customers constantly complaining about the poor performance of the system, and system administrators having to play watchdog to the deployed applications day in and day out. In addition to these possible mishaps, experience tells us that attempting to make a live production system scalable is a hell of a lot more difficult and expensive. So it’s always wise to think about scalability before your solutions go live.
The crew at WSO2 have a firm grip on this reality. Therefore, when designing and developing the WSO2 API Manager, we made scalability of the end product a top priority. We thought about how the overall solution is going to scale and how its individual components are going to scale. In general, we thought about how the API Manager can scale under the following circumstances.
  • Growing number of API subscribers (growth of the user base)
  • Growing number of APIs (growth of metadata and configurations)
  • Growing number of API calls (growth of traffic)
Now let’s take a look at the architecture of WSO2 API Manager and how it can scale against the factors listed above. The following schematic provides a high level view of the major components of the product and their interactions.
When you download the WSO2 API Manager binary distribution, you get all the above components packaged as a single artifact. You can also run the entire thing in a single JVM. We call this the standalone or out-of-the-box setup. If you only have a few hundred users and a handful of APIs, then the standalone setup is probably sufficient for you. But if you have thousands and thousands of users and hundreds of APIs then you should start thinking about deploying the API Manager components in a distributed and scalable manner. Let’s go through each of the components in the above diagram and try to understand how we can make them scalable.
WSO2 API Manager uses 2 main databases - the registry database and the API management database. The registry database is used by the underlying registry components and governance components to store system and API related metadata. API management database is primarily used to store API subscriptions. In the standalone setup, these 2 databases are created in the embedded H2 server.
In a scalable setup, it will be necessary to create these databases elsewhere, ideally in a clustered and highly available database engine. One may use a MySQL cluster, SQL Server cluster or an Oracle cluster for this purpose. As you will see in the next few sections of this post, in a scalable deployment we might cluster some of the internal components of the WSO2 API Manager, so there will be more than one JVM involved. All these JVMs can share the same databases created in the same clustered database engine.
Settings for the registry database are configured in a file named registry.xml which resides in the repository/conf directory of the API Manager. API management database settings are configured in a file named api-manager.xml which also resides in the same directory. Additionally there’s also a master-datasources.xml file where all the different data sources can be defined and you have the option of reusing these data sources in registry.xml and api-manager.xml.
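To make this concrete, here is a sketch of what a MySQL-backed datasource entry in master-datasources.xml might look like. The datasource name, host and credentials are hypothetical, and the exact element names can vary between Carbon versions, so treat the entries in the shipped file as the authoritative template:

```xml
<!-- Hypothetical MySQL datasource for master-datasources.xml.
     Names, hosts and credentials are examples only; the element
     layout may differ between Carbon versions. -->
<datasource>
    <name>WSO2AM_DB</name>
    <description>Shared API management database</description>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://db.example.com:3306/apimgtdb</url>
            <username>apiuser</username>
            <password>secret</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
        </configuration>
    </definition>
</datasource>
```

Both registry.xml and api-manager.xml can then refer to such a datasource by name instead of duplicating the connection details in each file.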
API Publisher and API Store
These 2 components are implemented as 2 web applications using Jaggery.js. However they require some of the underlying Carbon components to function – most notably the API management components, governance components and registry components. If your deployment has a large user base, then chances are both API Publisher and API Store will receive a large volume of web traffic. Therefore it’s advisable to scale these two web applications up.
One of the simplest ways to scale them up is by clustering the WSO2 API Manager. You can run multiple instances of the API Manager pointed at the same database. An external load balancer (a hardware load balancer, WSO2 Load Balancer or any HTTP load balancer) can distribute the incoming web traffic among the different API Manager nodes. Tomcat session replication can be enabled among the API Manager nodes so that the HTTP sessions established by the users are replicated across the entire cluster.
The default distribution of WSO2 API Manager has both API Publisher and API Store loaded into the same container. Therefore an out-of-the-box API Manager node plays a dual role. But you have the option of removing one of these components and making a node play a single role. That is, a single node can act either as an API Publisher instance or as an API Store instance. Using this capability you can add a bit of traffic shaping into your clustered API Manager deployment. In a typical scenario there will be only a handful of people (less than 50) who create APIs but a large number of subscribers (thousands) who consume the published APIs. Therefore you can have a large cluster with many API Store nodes and a small cluster of API Publisher nodes (or even a single API Publisher node would do). The two clusters can be set up separately with their own load balancers.
Key Management
The key management component is responsible for generating and keeping track of API keys. It’s also in charge of validating API keys when APIs are invoked by subscribers. All the core functions of this component are exposed as web services. The other components such as the API Store and API Gateway communicate with the key manager via web service calls. Therefore if your system has many consumers and if it receives a large number of API calls, then it’s definitely advisable to scale this component up.
Again the easiest way to scale this component is by clustering the API Manager deployment. That way we will get multiple key management service endpoints which can be put behind a load balancer. It’s also not a bad idea to have a separate dedicated cluster of Carbon servers that run as key management servers. An API Manager node can be stripped of its API Publisher, API Store and other unnecessary components to turn it into a dedicated key management server. 
User Management
This is the component against which all user authentication and permission checks are carried out. API Publisher and API Store frequently communicate with this component over a web service interface. In the standalone setup, a database in the embedded H2 server is used to store user profiles and roles. But in a real world deployment, this can be hooked up with a corporate LDAP or an Active Directory instance. To scale this component, we can again make use of simple clustering techniques. All the endpoints of the exposed user management services can be put behind a load balancer and exposed to the API Publisher and API Store.
API Gateway
This is the powerhouse where all the validating, throttling and routing of API calls take place. It mainly consists of WSO2 ESB components and hence can be easily clustered, just as you would set up an ESB cluster. One of the gateway nodes will function as the primary node through which all API configuration changes are applied. API Publisher will communicate with the primary node via web service calls to deploy, update and undeploy APIs. Carbon’s deployment synchronizer can take care of propagating all the configuration changes from the primary node to the rest of the nodes in the gateway cluster.
API Gateway also caches a lot of information related to API key validation in order to prevent having to query the key manager frequently. This information is stored in the built-in distributed cache of Carbon (based on Infinispan). Therefore in a clustered setup, information cached by a single gateway node becomes visible to other gateway nodes in the cluster. This further helps to reduce the load on the key manager and improves the response time of API invocations.
Usage Tracking
We use WSO2 BAM components to publish, analyze and display API statistics. BAM has its own scalability model. Thrift is used to publish statistics from API Gateway to a remote Cassandra cluster. Use of Thrift ensures that statistics can be published from API Gateway to the Cassandra store at a rapid rate. The BAM data publisher also employs its own queuing mechanism and thread pool so that data can be published asynchronously without having any impact on the messages routed through the API Gateway. Use of Cassandra enables fast read-write operations on enormous data sets. 
Once the data has been written to the Cassandra cluster, Hadoop and Hive are used to process the collected information. Analyzed data are then stored in a separate database from which API Manager (or any other monitoring application) can pull out the numbers and display in various forms of tables and charts.
Putting It All Together
As you can see WSO2 API Manager provides many options to scale up its individual components. However it doesn’t mean you should scale up each and every piece of it for the overall solution to be scalable. You should decide which components to scale up by looking at your requirements and the expected usage patterns of the solution. For instance, if you only have a handful of subscribers you don’t have to worry about scaling up API Store and API Publisher, regardless of how much traffic they are going to send. If you have thousands of subscribers, but only a handful of them are actually sending any traffic, then the scalability of API Store will be more important than scaling up the Gateway and statistics collection components.

Friday, August 17, 2012

Introducing WSO2 Carbon 4.0

Samisa Abeysinghe speaking about the latest release of WSO2 Carbon.

Monday, August 6, 2012

WSO2 API Manager 1.0.0 Goes GA

Last Friday we released WSO2 API Manager 1.0. It is the result of months of hard work. We started brainstorming about a WSO2 branded API management solution back in mid 2011. A few months later, in October, I implemented API support for Apache Synapse, which was a huge step in improving the REST support in our integration platform (especially in WSO2 ESB). This addition also brought us several steps closer to implementing a fully fledged API management solution based on WSO2 Carbon and related components. Then somewhere around February 2012, a team of WSO2 engineers officially started working on the WSO2 API Manager product. The idea was simple - combine our existing components to offer a smooth and end-to-end API management experience while addressing a number of challenges such as API provisioning, API governance, API security and API monitoring. The idea of combining our mediation, governance, identity and activity monitoring components to build the ultimate API management solution was a fascinating one to think about even for us.
I officially joined the WSO2 API Manager team in late April. It's been 15 hectic weeks since then but at the same time it's been 15 enjoyable weeks. Nothing is more fulfilling than seeing a project evolving from a set of isolated components into a feature complete product with its own UIs, samples and documentation. The development team was also one of the best a guy could ask for, with each member delivering his/her part to the fullest, consistently going beyond expectations.
This release of WSO2 API Manager supports creating APIs, versioning them and then publishing them into an 'API Store' after a review process. API documentation, technical metadata and ownership information can also be collected and tracked through the solution. The built-in API Store allows API consumers to browse the published APIs, provide feedback on them, and ultimately obtain the API keys required to access them. API security is based on the OAuth bearer token profile, and OAuth resource owner grant types are supported to allow end-user authentication for the APIs. The API gateway (runtime) publishes events and statistics to a remote BAM server which then runs a series of analyzers to extract useful usage information and display it on a dashboard.
The objective is to go into 'release early - release often' mode and do a series of patch releases, thereby driving the product into maturity quickly. We are currently working with a group of customers and independent analysts to evaluate the product and further improve it. You can also join the effort by downloading the product, trying out a few scenarios and giving us some feedback on our mailing lists. You can report any issues or feature requests on our JIRA. Please refer to the online documentation if you need any clarifications on any features. Have fun!

Sunday, July 22, 2012

Handling I/O in Java

Handling input/output or I/O is one of the most common situations that programmers have to deal with. If you are writing a real world application, then you have to write a considerable amount of code to handle I/O regardless of the programming language or platform you’re going to use. Interestingly I/O plays a major role even in some of the simplest programs we can write. Even to write a standard ‘Hello World’ program in a language like Java or C, you need to know how to output characters to a console.
However, most developers often tend to ignore the importance and significance of I/O when writing code. Developers generally have an API level understanding of how to perform I/O operations using their mainstream programming language. But they do not possess an in-depth understanding of how I/O works in the underlying system or how it can affect the performance and stability of the programs they write. Java developers in particular believe that knowing how to use the standard I/O API of Java is sufficient to write applications of good quality. Their lack of understanding of the limitations and performance bottlenecks of the standard I/O API is astonishing. Most Java developers don’t have a firm grip on I/O coding best practices, the third party I/O libraries available, or relatively new concepts like NIO.
Last week I gave a talk at JAVA Colombo, the Sri Lankan JUG, trying to address some of the above issues. I started by giving a brief overview of I/O and I/O APIs in Java. I also introduced the NIO API and gave a short live demonstration of it, comparing its performance to the standard I/O API of Java. Finally I discussed some of the best practices, patterns and anti-patterns related to writing I/O related code in Java. The full slide deck is now available online. Feel free to go through it and send in your feedback.
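To give a flavour of the comparison made in the talk, here is a small self-contained example that copies a file twice: once with the classic buffered stream API and once with an NIO FileChannel, whose transferTo method lets the JVM exploit the operating system's zero-copy facilities where available:

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.channels.FileChannel;

public class CopyComparison {

    // Classic java.io copy: read into a buffer, write the buffer out.
    static void streamCopy(File src, File dst) throws IOException {
        InputStream in = new BufferedInputStream(new FileInputStream(src));
        OutputStream out = new BufferedOutputStream(new FileOutputStream(dst));
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } finally {
            in.close();
            out.close();
        }
    }

    // NIO copy: let the channel transfer bytes directly, potentially
    // without copying them through user space at all.
    static void channelCopy(File src, File dst) throws IOException {
        FileChannel in = new FileInputStream(src).getChannel();
        FileChannel out = new FileOutputStream(dst).getChannel();
        try {
            long pos = 0, size = in.size();
            while (pos < size) {
                pos += in.transferTo(pos, size - pos, out);
            }
        } finally {
            in.close();
            out.close();
        }
    }

    public static void main(String[] args) throws IOException {
        File src = File.createTempFile("iodemo", ".src");
        FileWriter w = new FileWriter(src);
        w.write("some bytes worth copying");
        w.close();

        File dst1 = File.createTempFile("iodemo", ".dst");
        File dst2 = File.createTempFile("iodemo", ".dst");
        streamCopy(src, dst1);
        channelCopy(src, dst2);

        boolean ok = dst1.length() == src.length() && dst2.length() == src.length();
        System.out.println(ok ? "copy ok" : "copy failed");
    }
}
```

For large files the channel version typically wins, but as the talk stressed, the right choice for a given workload should come from measurement rather than folklore.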

Friday, July 13, 2012

WSO2 API Manager Community Features: The Social Side of Things

Ability to build, nurture and sustain a healthy community of subscribers (API consumers) is one of the most prominent features expected from an API management solution. However the ability of the solution to support a rich and growing user base never stands on its own. In fact it's always contingent upon many functional, usability and social aspects of the underlying software. Techies such as myself usually do a good job identifying and implementing the functional side of things, but we suck at identifying other non-technical nitty-gritties. Therefore when designing the social aspects of WSO2 API Manager, we went through a ton of API management related literature. We wanted to make sure that the solution we build fits the customer needs and industry expectations well. We read many articles, blogs and case studies that highlighted the community aspects expected in API management solutions and how various software vendors have adopted those principles. We also talked to a number of customers who were either looking to enter the business of API management or were already struggling with some API management solution. As a result of this exercise we were able to identify a long list of functional and non-functional requirements that have a direct impact on the social aspects of API management solutions. I'm listing some of the most important ones here:
  1. Ability to associate documentation and samples with APIs
  2. Overall user friendliness of the API store
  3. Ability to tag APIs and powerful search capabilities
  4. Ability to rate APIs
  5. Ability to provide feedback on APIs
  6. Ability to track API usage by individual subscribers
If you go through the WSO2 API Manager Beta release you will notice how we have incorporated some of the above requirements into the product design. If you log in to API Publisher as a user who has the “Create” permission, then you are given a set of options to create documents for each API.
Again we have taken into consideration the fact that some API providers might already have tons of documentation, managed by an external CMS. For such users, it is not required to import all the available documents from the CMS into the WSO2 API Manager. They can continue to use the CMS for managing files and simply manage file references (URLs) through the API Manager.
Once the APIs are published to the API Store, subscribers and potential subscribers can browse through the available documents.
Subscribers are also given options to rate and comment on APIs.

API providers can associate one or more tags with each API.
Subscribers can use these tags to quickly jump into the APIs they are interested in.
In the monitoring front, WSO2 API Manager allows API providers to track how often individual subscribers have invoked the APIs.
In general, all the above features combine to give a pretty sleek and smooth API management experience as well as a strong notion of a user community. Feel free to browse through the other related features offered by WSO2 API Manager and see how the end-to-end story fits together. Personally I don't think we are 100% there yet in terms of the social aspects of the product, but I think we are off to a great start (anybody who tells you that their solution is 100% complete in social aspects is full of crap).
One very crucial feature that we are currently lacking in this area is alerting and notifications. Ideally the API Manager should notify the API subscribers about any changes that may occur in an API (for example, an API becoming deprecated). On the other hand it should alert API providers when an API is not generating enough hype or subscriptions. We are already brainstorming about ways to add these missing pieces into the picture. The idea is to take them up as soon as the API Manager 1.0.0 goes GA, so hopefully we can have an even more compelling community features story by the 1.1 release.

Wednesday, July 11, 2012

API Life Cycles with WSO2 API Manager

One of the key API management challenges that we're trying to address in WSO2 API Manager is governing the API life cycle. Once you have developed, tested and published an API, you need to maintain it. From time to time you will roll out newer versions of the API into production with bug fixes, functional improvements and performance enhancements. When a new version is rolled out, you need to deprecate previously published versions, thereby encouraging the subscribers to use the latest stable version of the API. However you will have to keep supporting the old versions for a period of time, thus giving existing subscribers some concession time to migrate to the latest version. But at some point you will completely take the old API versions off production and retire them from service. At a glance all this looks very simple and straightforward. But when you add the usual DevOps overhead into the picture, managing the API life cycle becomes one of the most complicated, long running tasks that you need to deal with. As if this long term procedure weren't tedious enough, there are many short term maintenance activities you have to do on APIs which are already in production. More often than not, you would want to temporarily block the active APIs so the system administrators can perform the required maintenance work on them. A well thought out API life cycle design should take these long term as well as short term requirements into consideration.
We at WSO2 gave this problem a long, hard thought and came up with the following schematic which illustrates the major life cycle states of a typical API.
If you download the beta release of WSO2 API Manager, you will see how we have incorporated the above model into the product design. Not only does the product support different life cycle states, it also makes it absolutely simple to switch between different states. It's quite fascinating to see how API Publisher, API Store and API Gateway adjust to the changes in perfect synchronism as you take an API through its complete life cycle. In general these are the semantics that we have enforced for each life cycle state.
  • Created – A new API has been created and awaiting moderation. It hasn't been published to the API Store yet and hence cannot be consumed.
  • Published – API has been moderated and published to the API Store and is available to subscribers. Subscribers can use them in applications, generate keys against them and consume them.
  • Deprecated – API has been marked for retirement in near future. New subscriptions are not allowed on the API, but the existing subscriptions and keys will continue to work. However the existing subscribers are highly advised to upgrade to the latest version of the API.
  • Blocked – The API has been temporarily disabled for maintenance.
  • Retired – The API has been permanently taken off service.
Let's try some of this stuff out and see for ourselves. Extract the downloaded WSO2 API Manager Beta pack, go into the “bin” directory and start the server up. Launch your web browser and enter the URL https://localhost:9443/publisher. Login to the portal using the default admin credentials (user: admin, pass: admin). Go ahead and create a new API. You can create the “FindTweets” API described in one of my previous posts.
Note that the API is initially in the “CREATED” state. While in this state, the API is not visible in the API Store. Point your web browser to http://localhost:9763/store to confirm this. Now go back to the API Publisher portal and click on the “FindTweets” API. Select the “Life Cycle” tab which contains the necessary controls to change the API life cycle state. 
To start with put the API into the “PUBLISHED” state and click “Update”. Go to the API Store and refresh the page. The “FindTweets” API should now be visible there. 
You can login to the API Store, click on it and subscribe to the API using the “DefaultApplication”. When done your “My Subscriptions” page will look something like the following.
Go ahead and generate a production key too. Using this you can invoke the “FindTweets” API. Refer to my previous “Hello World” post if you're not sure how to invoke an API.
Now go back to API Publisher and change the API state to “BLOCKED”. If you immediately come back to the “My Subscriptions” page and refresh it, you will see that the “FindTweets” API is displayed as a “Blocked” API.
Any attempts to invoke this API will result in an HTTP 503 response.
Go back to the API Publisher and change the API state back to “PUBLISHED”. Try to invoke the API again, and you will notice that things are back to normal. Now in the API Publisher, go to the “Overview” tab of the “FindTweets” API and click on the “Copy” button.
Enter a new version number (eg: 2.0.0) to create a new version of the “FindTweets” API. The newly created version will be initially in the “CREATED” state and hence will not show up in API Store. However the API Publisher will list both versions of the API.
Now go to the “Life Cycle” tab of “FindTweets-2.0.0” and select the “PUBLISHED” state from the drop down. This time, since your API has some older versions, you will get some additional options to check.
Make sure all the provided options are checked. Checking the “Deprecate Old Versions” option ensures that all the older versions of the “FindTweets” API will be put into the “DEPRECATED” state as you roll out the new version into API Store. Checking the “Require Re-Subscription” option makes sure that existing subscribers will have to re-subscribe to the new version of the API in order to consume it. That is, the keys generated against the old version of the API will not be forward compatible with the new version of the API. Having checked all the options, click the “Update” button to apply the changes. If you go back and check your “My Subscriptions” page you will notice that the API has been marked as “Deprecated”.
However, if you go to the API Store home page you will notice that the old version of “FindTweets” is no longer listed there. Only the latest published version is listed on this page.
Even if you somehow manage to access the “FindTweets-1.0.0” API details page on API Store (may be via the link available in “My Subscriptions” page), you will notice that you are no longer given any options for subscribing to the API. In other words, no new subscriptions are allowed on the deprecated APIs.
However if you try to invoke the old version of the API, using the previously obtained key, you will notice that it continues to function as usual.
And now finally, go back to the API Publisher and put the “FindTweets 1.0.0” API to the “RETIRED” state. Check the “My Subscriptions” page after the change.
Also try invoking the API with the old key. You will get an HTTP 404 back as the API has been undeployed from the system.
I hope this post gave you a brief introduction to how API governance and API life cycle management works in WSO2 API Manager. You can try various other options available in the product, such as leaving the “Require Re-Subscription” option unchecked when publishing a new version of the API. See how that will automatically subscribe old subscribers to the new version of the API. See the extent to which API metadata is copied when creating a new version of the API using the “Copy” option. Send us your feedback on our mailing lists.

Tuesday, July 10, 2012

WSO2 API Manager Permission Model: Controlling What Users Can Do

Most API management solutions segregate their user base into two main categories – API providers (creators) and API consumers (subscribers). Depending on the group to which a user belongs, the system determines what that person can and cannot do once he logs in. At WSO2, we were also thinking about supporting a similar notion of user groups in WSO2 API Manager from day one. However one thing we weren't very sure of was whether a model with two user groups was sufficient to address the real world business and technical needs in the field of API management. After a number of brainstorming sessions, heated discussions and customer meetings, we identified three main roles associated with API management.
  1. API Creator – Designs, develops and creates APIs. Typically a techie, such as a developer.
  2. API Publisher – Reviews the work done by the API creators and approves them to be published in the API Store. Doesn't necessarily have to be a techie.
  3. API Subscriber – Browses the API Store to discover APIs and subscribe to them. Typically a technical person who's looking to develop an application using one or more APIs.
Once we had identified the user groups, the next challenge was incorporating them into our design in a flexible manner. We didn't want to tie our implementation to a specific set of user groups (roles) as that would make the end product quite rigid. Most users would want to simply plug their existing user store (a corporate LDAP or an Active Directory instance) into WSO2 API Manager, and they wouldn't want to introduce WSO2 API Manager specific user groups into their user store. Ideally they would like to designate some of their existing user groups as API creators, publishers and subscribers. And then we needed our solution to handle various edge cases without much of an effort. For example, some companies would not want API creators and publishers to be different user groups. Some companies would prefer API creators or publishers to also act as API subscribers so they can test the production APIs by directly consuming them. As you can see, these edge cases can easily blur the distinction we have between different user groups.
So how do we introduce the notion of 3 user groups, while having the flexibility to reuse the roles defined elsewhere and combine multiple roles together when required? Thankfully, our WSO2 Carbon architecture has the perfect answer. Carbon gives us a powerful hierarchical permission model. So we introduced 3 Carbon permissions instead of 3 predefined user groups. 
  1. Manage > API > Create
  2. Manage > API > Publish
  3. Manage > API > Subscribe
These permissions can be assigned to any user group as you see fit. If required, multiple permissions can be assigned to the same group too. For example, take an LDAP server which defines 3 different user groups – A, B and C. By properly assigning the above permissions to the respective groups, we can designate users in group-A as API creators, users in group-B as API publishers and users in group-C as subscribers. If we grant both Create and Publish permissions to group-A, users in that group will be able to both create and publish APIs.
Let's take an API Manager 1.0.0-Beta1 pack and try some of this stuff out. Download the binary distribution and extract it to install the product. Go to the "bin" directory of the installation and launch the startup script (or wso2server.bat if you are on Windows) to start the server. Once the server has fully started up, start your web browser and go to the URL https://localhost:9443/carbon. You should get to the login page of the WSO2 API Manager management console.
Login to the console using default admin credentials (user: admin, pass: admin). Select the 'Configure' tab and click on 'Users and Roles' from the menu to open the "User Management" page.
Let's start by adding a couple of new roles. Click on “Roles” which will list all the currently existing roles. WSO2 API Manager uses an embedded H2 database engine as the default user store. So if you were wondering where the user 'admin' and the default roles are stored, there's your answer. To use a different user store such as an external LDAP server, you need to configure the repository/conf/user-mgt.xml file accordingly.
You will notice 3 roles in the listing – admin, everyone and subscriber. The default "subscriber" role has the "Manage > API > Subscribe" permission. But don't get the wrong idea that the system is somehow tied into this role. The default "subscriber" role is created by the self sign up component of API Manager and it is fully configurable. I will get back to that later. For now click on the “Add New Role” option and start the role creation wizard. 
Enter the name “creator” as the role name and click “Next”.
From the permission tree select the “Configure > Login” and “Manage > API > Create” permissions and click “Next”. 
You will be asked to add some users to the newly created role. But for now simply click on “Finish” to save the changes and exit the wizard. You will now see the new “creator” role also listed on the “Roles” page. Start the role creation wizard once more to add another role named “publisher”. Assign the “Configure > Login” and “Manage > API > Publish” permissions to this role.
Now let's go ahead and create a couple of user accounts and add them to the newly created roles. Select the “Users and Roles” option from the breadcrumbs to go back to the “User Management” page. Click on “Users”.
This page should list all the user accounts that exist in the embedded user store. By default the system has only one user account (admin) and so only that item will be listed. Click on “Add New User” to initiate the user creation wizard.
Specify “alice” as the username and “alice123” as the password and click “Next”. 
Check the “creator” role from the list of available roles. The “everyone” role will be selected by default and you are not allowed by the system to uncheck that. Click on “Finish” to complete the wizard. Both “admin” and “alice” should now be listed on the “Users” page. Launch the user creation wizard once more to create another user named “bob”. Specify “bob123” as the password and make sure to add him to the “publisher” role. When finished, sign out from the management console.
So far we have created a couple of new roles and added some users to those roles. It's time to see how the API Manager utilizes the Carbon permission system. Point your web browser to https://localhost:9443/publisher to get to the API Publisher portal. Sign in using the username “alice” and password “alice123”. Since you are logging in with a user who has the “Manage > API > Create” permission, API creation options will be displayed to you.
Click on the “New API” button and create a new API (see my previous post on creating APIs). When you're finished click on the newly created API on the API Publisher home page. 
You will be shown options to edit the API and add documentation, but you will not see any options related to publishing the API. To be able to publish an API, we must login as a user who has the “Manage > API > Publish” permission. So sign out from the portal and login again as the user “bob”.
The first thing to note is that you don't get the “Create API” option on the main menu. Go ahead and select the API previously created by “alice” from the home page.
While on this page, note that there are no options available for “bob” to edit the API. However, you will see a “Life Cycle” tab which wasn't visible earlier when you were logged in as “alice”. Select this tab, and you should see the options to publish the API. Select the entry “PUBLISHED” from the “State” drop-down and click “Update” to publish the API. The “Life Cycle History” section at the bottom of the page will get updated, indicating that “bob” has changed the API status.
We are done with the API Publisher portal for the moment. Let's head over to the API Store. Sign out from the API Publisher and point your browser to http://localhost:9763/store. First, try signing in as “alice” or “bob”. You will end up with an error message similar to the following.
Only users with the “Subscribe” permission are allowed to log in to the API Store. So we need to create a new user account with the “Manage > API > Subscribe” permission. We can use the self sign-up feature of the API Store to do that. Click on the “Sign-up” option at the top of the API Store portal.
Create an account with the username “chris” and password “chris123”. When completed you should see a dialog box similar to the following.
Now go ahead and log in as “chris”. This time the login attempt will be successful. The self sign-up component not only creates the user account but also adds the new user to the “subscriber” role that we saw earlier in the management console. Since this role already has the “Manage > API > Subscribe” permission, self signed up users can always log in to the API Store without any issues. Also try signing in to the API Publisher portal as the user “chris”. You will get an error message similar to the following.
A word about the default “subscriber” role: this is not a role managed by the system, but rather one created by the self sign-up component of the API Manager. In other words, this is the role to which all self signed up users are assigned. You can change the name of this role by modifying the “SubscriberRoleName” parameter in the repository/conf/api-manager.xml file. If you are using an external user store, you can specify an existing role for this purpose too. If you don't want the API Manager to attempt to create this role in the system, set the “CreateSubscriberRole” parameter in api-manager.xml to “false”. If you're going to use an existing role as the subscriber role, make sure it has the “Configure > Login” and “Manage > API > Subscribe” permissions so that self signed up users can log in to the API Store and subscribe to APIs.
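As a sketch, the relevant fragment of repository/conf/api-manager.xml would look something like the following. The two parameter names are as described above; the enclosing element and exact placement within the file may differ slightly between API Manager releases, so check your own copy of the file:

```xml
<!-- repository/conf/api-manager.xml (fragment) -->
<SelfSignUp>
    <!-- Role to which all self signed up users are assigned -->
    <SubscriberRoleName>subscriber</SubscriberRoleName>
    <!-- Set to false if the role already exists,
         e.g. in an external user store -->
    <CreateSubscriberRole>true</CreateSubscriberRole>
</SelfSignUp>
```

Restart the server after editing this file for the changes to take effect.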
In order to see the effect of granting multiple API management permissions to the same role, you can use the default “admin” account. This user belongs to a role named “admin” which has all the permissions in the Carbon permission tree. Therefore this user can log in to both the API Store and API Publisher portals. On the API Publisher, he can see all the options related to API creation, editing and publishing.
I hope you found this look at the API Manager permission model interesting. Feel free to try out different combinations of permissions and different types of user stores, and send us your feedback.