
Thursday, January 2, 2014

Calling WSO2 Admin Services in Python

I’m using some WSO2 middleware for my ongoing research, and recently I needed to call some admin services from Python 2.7. All WSO2 products expose a number of special administrative web services (admin services), through which WSO2 server instances can be controlled, configured and monitored. In fact, all the web-based UI components that ship with WSO2 middleware use these admin services under the hood to manage the server runtime.
WSO2 admin services are SOAP services (based on Apache Axis2), secured using HTTP basic authentication. Every admin service exposes a WSDL document, from which client applications can be written or generated. In this post I’m going to summarize how to implement a simple Python client that consumes these services.
We will be writing our Python client using the Suds SOAP library for Python. Suds is simple, lightweight and extremely easy to use. As the first step, we should install Suds. Depending on the Python package manager you wish to use, one of the following commands should do the trick (tested on OS X and Ubuntu):
sudo easy_install suds
sudo pip install suds
Next we need to instruct the target WSO2 server product to expose the admin service WSDLs. By default these WSDLs are hidden. To unhide them, open up the repository/conf/carbon.xml file of the WSO2 product, and set the value of HideAdminServiceWSDLs parameter to false:
<HideAdminServiceWSDLs>false</HideAdminServiceWSDLs>
Now restart the WSO2 server, and you should be able to access the admin service WSDLs in a web browser. For example, to access the WSDL of the UserAdmin service, point your browser to https://localhost:9443/services/UserAdmin?wsdl
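If you prefer the command line, the same WSDL can be fetched with curl (the -k flag tells curl to accept the server's default self-signed certificate):
curl -k https://localhost:9443/services/UserAdmin?wsdl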
Now we can go ahead and write the Python code to consume any of the available admin services. Here’s a working sample that consumes the UserAdmin service. This simply prints out a list of roles defined in the WSO2 User Management component:
from suds.client import Client
from suds.transport.http import HttpAuthenticated
import logging

if __name__ == '__main__':
    #logging.basicConfig(level=logging.INFO)
    #logging.getLogger('suds.client').setLevel(logging.DEBUG)

    # Authenticate with HTTP basic authentication
    t = HttpAuthenticated(username='admin', password='admin')
    # Fetch the WSDL and point the client at the service endpoint
    client = Client('https://localhost:9443/services/UserAdmin?wsdl', location='https://localhost:9443/services/UserAdmin', transport=t)
    print client.service.getAllRolesNames()
That’s pretty much it. I have tested this approach with several WSO2 admin services, and they all seem to work without any issues. If you need to debug something, uncomment the two commented out lines in the above example. That will print all the SOAP messages and the HTTP headers that are being exchanged.
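If you plan to call several different admin services, the client creation logic is worth factoring out into a small helper. Here's a quick sketch based on the example above (the helper name and defaults are mine, not part of any WSO2 or Suds API); printing the returned client makes Suds dump the list of operations the WSDL exposes:
from suds.client import Client
from suds.transport.http import HttpAuthenticated

def get_admin_client(service, username='admin', password='admin',
                     host='localhost', port=9443):
    # Derive the WSDL URL and service endpoint from the admin service name
    endpoint = 'https://%s:%d/services/%s' % (host, port, service)
    transport = HttpAuthenticated(username=username, password=password)
    return Client(endpoint + '?wsdl', location=endpoint, transport=transport)

client = get_admin_client('UserAdmin')
print client   # dumps the operations exposed by this admin service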
I also tried to write a client using the popular SOAPy library, but unfortunately couldn’t get it to work due to several bugs in SOAPy. SOAPy was incapable of retrieving the admin service WSDLs over HTTPS. This can be worked around by using the HTTP URL for the WSDL, but in that case SOAPy failed to generate the correct request messages to call the admin services. Basically, the namespaces of the generated SOAP messages were messed up. But with Suds I didn’t run into any issues.

Tuesday, February 5, 2013

How the World's Fastest ESB was Made

A couple of years ago, at WSO2 we implemented a new HTTP transport for WSO2 ESB. Requirements for this new transport can be summarized as follows:
  1. Ultra-fast, low latency mediation of HTTP requests.
  2. Supporting a very large number of inbound (client-ESB) and outbound (ESB-server) connections concurrently (we were looking at several thousand concurrent connections).
  3. Automatic throttling and graceful performance degradation in the presence of slow or faulty clients and servers.
The default non-blocking HTTP (NHTTP) transport from Apache Synapse, which we were also using in WSO2 ESB, supported the above requirements to a certain extent, but we wanted to do better. The default transport was very generic, designed to offer reasonable performance in all the integration scenarios the ESB could potentially participate in. However, HTTP load balancing, HTTP URL routing (URL rewriting) and HTTP header-based routing are some of the most widely used integration patterns in the industry, and to support these use cases well we needed a specialized transport.
The old NHTTP transport was based on a dual buffer model. Incoming message content was placed in a SharedInputBuffer and the outgoing message content was placed in a SharedOutputBuffer. Apache Axiom, Apache Axis2 and the Synapse mediation engine sit between the two buffers, reading from the input buffer and writing to the output buffer. This architecture is illustrated in the following diagram.
The key advantage of this architecture is that it enables the ESB (mediators) to intercept all the messages and manipulate them in any way necessary. The main downside is that every message goes through the Axiom layer, which is not really necessary in cases like HTTP load balancing and HTTP header-based routing. The overhead of moving data from one buffer to another was also not always justifiable in this model. So when we started working on the new HTTP transport we wanted to get rid of these limitations. We knew that this might result in a not-so-generic HTTP transport, but we were willing to pay that price at the time.
So after some very interesting brainstorming sessions, an exciting week-long hackathon, and several months of testing, bug-fixing and refactoring, we came up with what’s today known as the HTTP pass-through transport. This transport was based on a single buffer model and completely bypassed the Axiom layer. The resulting architecture is illustrated below.
The HTTP pass-through transport was first released in June 2011 along with WSO2 ESB 4.0. Back then it was disabled by default, and the user had to enable it by uncommenting a few entries in the axis2.xml file. The performance numbers we were seeing with the new transport were simply remarkable. WSO2 also published some of these benchmarking results in a March 2012 article. However, at this point the two main limitations of the new transport were starting to give us headaches.
  1. Configuration overhead (users had to explicitly enable the transport depending on their target use cases)
  2. Inability to support integration scenarios that require HTTP content manipulation (because Axiom was bypassed, any mediator attempting to access the message payload would not get anything useful to work with)
In addition to these technical issues there were other process-related issues that we had to deal with. For instance, maintaining two separate HTTP transports was twice as much work for the developers and testers. We found that because the pass-through transport was not used as the default, it often lagged behind the default NHTTP transport in terms of features and stability. So after a few brainstorming sessions we decided to try to make the pass-through transport the default HTTP transport in Apache Synapse/WSO2 ESB. But this required making the content manipulation use cases (content-aware use cases) work with the new transport. This implied bringing Axiom back into the picture, the very thing we wanted to avoid in our initial implementation. So in order to balance out our performance and heterogeneous integration requirements, we came up with the idea of “on-demand message parsing in the mediation engine”.
In this new model, each mediator instance belongs to one of two classes.
  1. Content-unaware mediators – Mediators that never access the message content in any way (e.g. the drop mediator)
  2. Content-aware mediators – Mediators that always access the message content (e.g. the XSLT mediator)
We also identified a third class known as conditionally content-aware mediators. These mediators could be either content-aware or content-unaware depending on their exact instance configuration. For example, a simple log mediator instance configured as <log/> is content-unaware, but a log mediator configured as <log level="full"/> would be content-aware since it’s expected to log the message payload. Similarly, a simple property mediator instance such as <property name="foo" value="bar"/> is content-unaware, but <property name="foo" expression="/some/xpath"/> could be content-aware depending on what the XPath expression does. In order to capture this content-awareness characteristic of mediator instances at runtime, we introduced a new method (isContentAware) to the top-level Mediator interface of Synapse. The default implementation in the AbstractMediator class returns true, so as to maintain backward compatibility.
With this change in place we modified the mediation engine to check the content-awareness property of each mediator at runtime before submitting a message to it. List mediators such as the SequenceMediator run the check recursively on their child mediators to obtain the final value. Assuming that messages are always received through the pass-through HTTP transport, the mediation engine invokes a special message parsing routine whenever a mediator is detected to be content-aware. It is in this special routine that we bring Axiom into the picture. Therefore if none of the mediators in a given flow or service is content-aware, the pass-through transport works as it usually does without ever engaging Axiom. But whenever a content-aware mediator is involved, we bring Axiom in. This way we can reap the performance benefits of the pass-through transport while supporting all integration scenarios of the ESB. Since we engage Axiom on demand, we get the best possible outcome for every scenario. For instance, a simple pass-through proxy always works without any Axiom interactions. An XSLT proxy that transforms requests engages Axiom only in the request flow; the response flow operates without parsing the messages.
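The real logic lives in the Java code of the Synapse mediation engine, but the idea is easy to sketch in a few lines of Python (class and method names here are illustrative, not the actual Synapse API):
class LogMediator(object):
    def __init__(self, level='simple'):
        self.level = level

    def is_content_aware(self):
        # Only a full log needs to parse the message payload
        return self.level == 'full'

class SequenceMediator(object):
    def __init__(self, children):
        self.children = children

    def is_content_aware(self):
        # A sequence is content-aware if any of its children are
        return any(child.is_content_aware() for child in self.children)

# A sequence holding a <log/> and a <log level="full"/> mediator
sequence = SequenceMediator([LogMediator(), LogMediator(level='full')])
if sequence.is_content_aware():
    # This is where the engine would invoke the special parsing
    # routine and build the Axiom representation of the payload
    pass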
Another tricky problem we encountered was dealing with message parsing itself. For instance, how do we parse a message and then send it out when there is only one buffer provided by the underlying pass-through transport? Ideally we need two buffers: one to read the incoming message from, and another to write the outgoing message to. The fact that the Axis2 message builder framework can only handle streams posed a further problem: the buffer we maintained in the pass-through transport was a Java NIO ByteBuffer instance, so we needed to adapt the buffer into a stream implementation whenever the mediation engine engages Axiom. We solved the first problem by implementing our message builder routine to create a second output buffer whenever Axiom is dragged into the picture. Outgoing messages are serialized into this second buffer, and the pass-through transport was modified to pick the outgoing content from the second buffer when it’s available. Writing an InputStream implementation that can wrap a ByteBuffer instance solved the second problem.
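The actual adapter is a Java InputStream wrapping a ByteBuffer, but the pattern translates to any language. Here's a minimal Python sketch of the same idea: an in-memory byte buffer exposed through a stream-style read() interface, so that a parser that only understands streams can consume it (the class is illustrative, not the real transport code):
class BufferInputStream(object):
    """Adapts an in-memory byte buffer to a file-like read() interface."""

    def __init__(self, buf):
        self.buf = buf
        self.pos = 0

    def read(self, size=-1):
        # Serve the next chunk of the buffer; an empty string signals EOF
        if size < 0:
            size = len(self.buf) - self.pos
        chunk = self.buf[self.pos:self.pos + size]
        self.pos += len(chunk)
        return chunk

# Any stream-only parser can now consume the buffered message
from xml.etree import ElementTree
stream = BufferInputStream('<order><drink>latte</drink></order>')
print ElementTree.parse(stream).getroot().tag   # prints 'order'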
One last problem that needed to be solved was handling security. In Synapse/WSO2 ESB, security is handled by Apache Rampart, which runs as an Axis2 module that intercepts the messages before they hit the mediation engine. So on-demand parsing at the mediation engine doesn’t work in this scenario; we need to parse the messages before Rampart intercepts them. We solved this issue by introducing a new smart handler to the Axis2 handler chain, which intercepts every message and performs an early parse if security is engaged on the flow. The same solution can be extended to support other modules in the Axis2 handler chain that require parsing the message payload.
The reason I decided to write this post is that the WSO2 folks just released WSO2 ESB 4.6, and this release is based on the new model I’ve described here. The pass-through transport is what users now get by default. The WSO2 team has also published some performance figures that clearly indicate what the new design is capable of. It turns out the latest release of WSO2 ESB outperforms all the major open source ESB vendors by a significant margin. This release also comes with a new XSLT mediator (Fast XSLT) that operates on top of the pass-through model of the underlying transport, and a new streaming XPath implementation based on Antlr.
The next step of this effort would be to get these improvements integrated into the Apache Synapse code base. This work is already underway and you can monitor its progress through SYNAPSE-913 and SYNAPSE-920.

Sunday, January 20, 2013

On Premise API Management for Services in the Cloud

In some of my recent posts I explained how to install and start AppScale. I showed how to use AppScale command-line tools to manage an AppScale PaaS on virtualized environments such as Xen and IaaS environments such as EC2 and Eucalyptus. Then we also looked at how to deploy Google App Engine (GAE) apps over AppScale. In this post we are going to try something different.
Here I’m going to describe a possible hybrid architecture for deploying RESTful services in the cloud and exposing them through an on-premise API management platform. This type of architecture is most suitable for B2B integration scenarios where one organization provides a range of services and several other organizations consume them with their own custom use cases and SLAs. Both service providers and service consumers can greatly benefit from the proposed hybrid architecture. It enables the API providers to reap the benefits of the cloud with reduced deployment cost, reduced long-term maintenance overhead and reduced time-to-market. API consumers can use their own on-premise API management platform as a local proxy, which provides powerful access control, rate control, analytics and community features on top of the services already deployed in the cloud.
To try this out, first spin up an AppScale PaaS in a desired cloud environment. You can refer to my previous posts or go through the AppScale wiki to learn how to do this. Then we can deploy a simple RESTful web service in our AppScale cloud. Here I’m posting the source code for a simple web service called “starbucks” written in Python using the GAE APIs. The “starbucks” service can be used to submit and manage simple drink orders. It uses the GAE datastore API to store all the application data and exposes all the fundamental CRUD operations as REST calls (Create = POST, Update = PUT, Read = GET, Delete = DELETE).
try:
  import json
except ImportError:
  import simplejson as json

import random
import uuid
from google.appengine.ext import db, webapp
from google.appengine.ext.webapp.util import run_wsgi_app

PRICE_CHART = {}

class Order(db.Model):
  order_id = db.StringProperty(required=True)
  drink = db.StringProperty(required=True)
  additions = db.StringListProperty()
  cost = db.FloatProperty()

def get_price(order):
  if PRICE_CHART.has_key(order.drink):
    price = PRICE_CHART[order.drink]
  else:
    price = random.randint(2, 6) - 0.01
    PRICE_CHART[order.drink] = price
  if order.additions is not None:
    price += 0.50 * len(order.additions)
  return price

def send_json_response(response, payload, status=200):
  response.headers['Content-Type'] = 'application/json'
  response.set_status(status)
  if isinstance(payload, Order):
    payload = {
      'id' : payload.order_id,
      'drink' : payload.drink,
      'cost' : payload.cost,
      'additions' : payload.additions
    }
  response.out.write(json.dumps(payload))

class OrderSubmissionHandler(webapp.RequestHandler):
  def post(self):
    order_info = json.loads(self.request.body)
    order_id = str(uuid.uuid1())
    drink = order_info['drink']
    order = Order(order_id=order_id, drink=drink, key_name=order_id)
    if order_info.has_key('additions'):
      additions = order_info['additions']
      if isinstance(additions, list):
        order.additions = additions
      else:
        order.additions = [ additions ]
    else:
      order.additions = []
    order.cost = get_price(order)
    order.put()
    self.response.headers['Location'] = self.request.url + '/' + order_id
    send_json_response(self.response, order, 201)

class OrderManagementHandler(webapp.RequestHandler):
  def get(self, order_id):
    order = Order.get_by_key_name(order_id)
    if order is not None:
      send_json_response(self.response, order)
    else:
      self.send_order_not_found(order_id)

  def put(self, order_id):
    order = Order.get_by_key_name(order_id)
    if order is not None:
      order_info = json.loads(self.request.body)
      drink = order_info['drink']
      order.drink = drink
      if order_info.has_key('additions'):
        additions = order_info['additions']
        if isinstance(additions, list):
          order.additions = additions
        else:
          order.additions = [ additions ]
      else:
        order.additions = []
      order.cost = get_price(order)
      order.put()
      send_json_response(self.response, order)
    else:
      self.send_order_not_found(order_id)

  def delete(self, order_id):
    order = Order.get_by_key_name(order_id)
    if order is not None:
      order.delete()
      send_json_response(self.response, order)
    else:
      self.send_order_not_found(order_id)

  def send_order_not_found(self, order_id):
    info = {
      'error' : 'Not Found',
      'message' : 'No order exists by the ID: %s' % order_id,
    }
    send_json_response(self.response, info, 404)

app = webapp.WSGIApplication([
    ('/order', OrderSubmissionHandler),
    ('/order/(.*)', OrderManagementHandler)
], debug=True)

if __name__ == '__main__':
  run_wsgi_app(app)
Before we go any further, let’s take a few seconds and appreciate how simple and concise this piece of code is. With just about 100 lines of Python code we have developed a complete webapp that uses JSON as the data exchange format, performs database access and provides decent error handling. Imagine doing the same thing in a language like Java in a traditional servlet container environment. We would have to write a lot more code and also bundle a ridiculous number of additional dependencies to parse and construct JSON and perform database queries. But as seen here, GAE APIs make it absolutely trivial to develop powerful web APIs for the cloud with a minimum amount of code.
You can download the complete “starbucks” application from here. Simply extract the downloaded tarball and you’re good to go. The webapp consists of just two files: main.py contains all the source code of the app, and app.yaml is the GAE webapp descriptor. No additional libraries or files are needed to make this work. Use AppScale-Tools to deploy the app in your AppScale cloud.
appscale-upload-app --file /path/to/starbucks --keyname my_key_name
To try out the app, put the following JSON string into a file named order.json:
{
  "drink" : "Caramel Frapaccino",
  "additions" : [ "Whip Cream" ]
}
Now execute the following Curl request on your App:
curl -v -d @order.json -H "Content-type: application/json" http://host:port/order
Replace 'host' and 'port' with the appropriate values for your AppScale PaaS. This request should return an HTTP 201 Created response with a Location header.
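Judging by the send_json_response function in main.py, the body of the 201 response will carry a JSON representation of the newly created order. The ID and cost shown below are made up (the ID is a generated UUID and the price is picked at random the first time a drink is ordered), but the shape will be as follows:
{
  "id" : "b6ef5a72-54f2-11e2-8c42-3c07545d2ef1",
  "drink" : "Caramel Frapaccino",
  "cost" : 4.49,
  "additions" : [ "Whip Cream" ]
}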
And now for the API management part. For this I’m going to use the open source API management solution from WSO2, a project that I was a part of a while ago. Download the latest WSO2 API Manager and install it on your local computer by extracting the zip archive. Go into the bin directory and execute wso2server.sh (or wso2server.bat for Windows) to start the API Manager. You need to have JDK 1.6 or higher installed to be able to do this.
Once the server is up and running, navigate to http://localhost:9763/publisher and sign in to the console using “admin” as both the username and the password. Go ahead and create an API for our “starbucks” service in the cloud. You can use http://host:port as the service URL, where 'host' and 'port' should point to the AppScale PaaS. The API creation process should be pretty straightforward. If you need any help, you can refer to my past blog posts on WSO2 API Manager or go through the WSO2 documentation. Once the API is created and published, head over to the API Store at http://localhost:9763/store.
Now you can sign up at the API Store as an API consumer, generate an API key for the Starbucks API and start using it.
Submit Order:
curl -v -d @order.json -H "Content-type: application/json" -H "Authorization: Bearer api_key" http://localhost:8280/starbucks/1.0.0/order
Review Order:
curl -v -H "Authorization: Bearer api_key" http://localhost:8280/starbucks/1.0.0/order/order_id
Delete Order:
curl -v -X DELETE -H "Authorization: Bearer api_key" http://localhost:8280/starbucks/1.0.0/order/order_id
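Although not shown above, the service also supports updating an existing order via PUT (see the put handler in main.py). Assuming the API definition routes PUT requests through as well, the call would follow the same pattern:
Update Order:
curl -v -X PUT -d @order.json -H "Content-type: application/json" -H "Authorization: Bearer api_key" http://localhost:8280/starbucks/1.0.0/order/order_id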
Replace 'api_key' with the API key generated by the API Store. Replace the 'order_id' with the unique identifier sent in the response for the submit order request.
There you have it. On-premise API management for services in the cloud. This looks pretty simple at first glance, but it is actually quite a powerful architecture. Note that all the critical components (service runtime, registry and consumer) are very well separated from each other, which allows maximum flexibility. The portions in the cloud can benefit from cloud-specific features such as autoscaling to deliver maximum throughput with optimal resource utilization. Since the API management platform is controlled by the individual consumer organizations, they can easily enforce their own custom policies and SLAs and optimize for their common access patterns.

Tuesday, September 25, 2012

Busting Synapse and WSO2 ESB Myths

Paul Fremantle, PMC chair of the Apache Synapse project and CTO of WSO2, has written a very interesting blog post addressing some of the myths concerning Apache Synapse and WSO2 ESB. As of now both projects are quite popular and mature, and have a very large user base that includes some of the largest organizations in the world. Surprisingly there are still some people who believe that these projects do not fall under the category of ESB (Enterprise Service Bus) implementations. In his latest post, Paul gives a clear and complete answer to all these misconceptions, and backs it up with a wide range of facts.
ESB is one of those terms in the IT world that doesn't have a proper standard definition. The best definitions I've come across attempt to align the term along the following cues:
  • An architectural construct that provides fundamental services to complex architectures
  • An entity that acts as a hub connecting many diverse systems
  • A central driver that facilitates Enterprise Application Integration (EAI)
Apache Synapse and WSO2 ESB pass with flying colors on all the above criteria. They provide an array of fundamental services to the systems and architectures that rely on them.  Some of these basic services are:
  • Message passing, routing and filtering
  • Message transformation
  • Protocol conversion
  • QoS enforcement (security, reliable delivery etc)
  • Logging, auditing and monitoring
Because Synapse and WSO2 ESB do such a good job providing these fundamental services, they can be used to integrate a large number of heterogeneous systems in an enterprise setting. As Paul has also pointed out in his post, Synapse and WSO2 ESB are currently used in hundreds of production deployments all around the world to connect various applications, implemented using various technologies (both open source and proprietary) running on various platforms (Windows, Linux, .NET, J2EE, LAMP, cloud... you name it). In other words, Synapse and WSO2 ESB are widely used as centralized drivers that facilitate EAI. The configuration model of Synapse and WSO2 ESB is so agile and powerful that practically any EAI pattern can be implemented on top of them. In fact there are tons of samples, articles and tutorials that explain how various well-known EAI patterns can be implemented using these 'ESB implementations'.
One thing that I've learnt from writing code for Synapse is that it has a very flexible enterprise messaging model. Support for any wire-level protocol or any message format can be easily implemented on top of this model and deployed as a separate pluggable module. During the last few years, I myself have contributed to the implementation of the following adapters/connectors on various occasions:
  • FIX transport
  • SAP transport (IDoc and BAPI support)
  • MLLP transport and HL7 message formats
  • CSV and various other office document formats
  • Thrift connector
  • Numerous other custom binary protocols based on TCP/IP
This is just a bunch of stuff that I've had the privilege of implementing for Synapse/WSO2 ESB. I know for a fact that other committers of Synapse and WSO2 ESB have been working on supporting dozens of other protocols, message formats and mediators. Thanks to all this hard work Synapse and WSO2 ESB are currently two of the most powerful and feature-complete ESB implementations anyone will ever come across. Also the existence of connectors for so many protocols and applications is a testament to the agility and flexibility that Synapse and WSO2 ESB can bring in as ESB products.
Another aspect of Synapse/WSO2 ESB that has been questioned many times is their ability to support RESTful integrations (Paul also addresses this issue in his post). This confusion stems from the fact that Synapse uses SOAP as its intermediary message format. Without going into too many technical details, I'd just like to point out that one of the largest online marketplace and auctioning providers in the world uses Synapse/WSO2 ESB to process several hundred million REST calls on a daily basis. The new API support we have implemented in Synapse makes it absolutely simple to design, implement and expose RESTful APIs on Synapse/WSO2 ESB. In fact I recently published an article which demonstrates through practical examples how powerful RESTful applications can be implemented using Synapse/WSO2 ESB while supporting advanced REST semantics such as HATEOAS. The recently released WSO2 API Manager product, which supports exposing rich web APIs with API key management, is also based on Synapse/WSO2 ESB.
I think I have made my point. Both Synapse and WSO2 ESB are excellent ESB choices if you're looking to adopt SOA or enterprise integration within your organization. Their wide range of features is second only to the very high level of performance they offer in terms of high throughput and low resource utilization. Please also go through the post made by Paul, where he has explained some of the above issues in low-level technical detail. I particularly like his analogy concerning Heisenberg's principle of uncertainty :)

Tuesday, September 11, 2012

How to GET a Cup of Coffee the WSO2 Way: An Article

Since we implemented REST API support for WSO2 ESB (and Apache Synapse), we have received many requests for new samples, articles and tutorials explaining this powerful feature. I started working on an end-to-end sample for API support somewhere around March but didn't have enough cycles to write it up. Finally, after a delay of several months, I was able to submit the finalized sample and the article to WSO2 Oxygen Tank last month. Now it has been published and is available for reading online at http://wso2.org/library/articles/2012/09/get-cup-coffee-wso2-way
This article starts with a quick overview of the REST support provided by the WSO2 platform. It describes the API mediation capability of the ESB in detail, providing several examples. Then the article explains how a complete order management system can be constructed using the REST support available in WSO2 ESB. This sample application has been inspired by the popular article on RESTful application development titled "How to GET a Cup of Coffee" by Jim Webber. Webber's article describes a hypothetical application used in a Starbucks coffee shop which supports placing orders, making payments, and processing orders. The sample described in my article implements all the major interfaces and message flows outlined by Webber. I have also made all the source code and configurations available so that anybody can try the same on their own machines using WSO2 middleware. To make it even more interesting, I also implemented a couple of user interfaces for Webber's Starbucks application. These UI tools are also described in my article, and they can help you better understand how the applications communicate with each other using RESTful API calls and how their application states change according to typical HATEOAS principles.
Towards the latter part of the article I discuss some of the advanced features of RESTful application development and how such features can be implemented on top of WSO2 middleware. This includes some very important topics such as security, caching and content negotiation.
I hope you will find this article interesting and useful. As always feel free to send any feedback either directly to me or to dev@wso2.org.

Saturday, September 8, 2012

Turning Over a New Leaf

Last month I said goodbye to a fantastic career at WSO2. I joined WSO2 in April 2009 as a Software Engineer and worked my way up the ladder to become a Senior Technical Lead. Most of my time at WSO2 was spent on WSO2 ESB, WSO2's flagship product. I also contributed to Carbon, the core platform on which all WSO2 products are built, and implemented a number of cross-cutting features which are currently used in multiple WSO2 product offerings. Towards the latter part of my WSO2 career I was mostly working on improving the REST support in the WSO2 platform and implementing the WSO2 API Manager product. The WSO2 API Manager 1.0 release that went out in early August marked the end of my career at WSO2 (at least for the next few years).
My time at WSO2 was a great learning experience. I learned and mastered a variety of technologies like Web Services, SOA, cloud computing and API management. I got to travel to many countries and had the privilege of meeting and working with some of the largest IT organizations in the world. Above all I had a lot of fun working with the WSO2 team. It's truly a great organization with some of the smartest and coolest people I've ever had the privilege of working with.
This month I'm starting my Computer Science graduate studies at the University of California, Santa Barbara. I'll be doing distributed systems research with Prof. Chandra Krintz at the UCSB Mayhem lab and RACE lab. Popular open source cloud platforms like Eucalyptus and AppScale were originally designed and developed in these labs, so currently I'm spending some time getting up to speed on these platforms. You will see me blogging about these technologies from time to time over the next few days. Wish me luck :)

Wednesday, August 22, 2012

WSO2 API Manager: Designed for Scalability

Scalability is a tough nut to crack. When developing enterprise software and deploying it in mission-critical environments, you need to think about the scalability aspects from day one. If you don't, you may rest assured that a whole bunch of unpleasant surprises are heading your way. Some of the problems you may encounter are systems crashing inexplicably under heavy load, customers constantly grumbling about the poor performance of the system, and system administrators having to play watchdog to the deployed applications day in and day out. In addition to these possible mishaps, experience tells us that attempting to make a live production system scalable is a hell of a lot more difficult and expensive. So it's always wise to think about scalability before your solutions go live.
The crew at WSO2 have a firm grip on this reality. Therefore when designing and developing the WSO2 API Manager, we made scalability of the end product a top priority. We thought about how the overall solution is going to scale and how its individual components are going to scale. In general we thought about how the API Manager can scale under the following circumstances.
  • Growing number of API subscribers (growth of the user base)
  • Growing number of APIs (growth of metadata and configurations)
  • Growing number of API calls (growth of traffic)
Now let’s take a look at the architecture of WSO2 API Manager and how it can scale against the factors listed above. The following schematic provides a high-level view of the major components of the product and their interactions.
When you download the WSO2 API Manager binary distribution, you get all the above components packaged as a single artifact. You can also run the entire thing in a single JVM. We call this the standalone or out-of-the-box setup. If you only have a few hundred users and a handful of APIs, then the standalone setup is probably sufficient for you. But if you have thousands and thousands of users and hundreds of APIs, then you should start thinking about deploying the API Manager components in a distributed and scalable manner. Let’s go through each of the components in the above diagram and try to understand how we can make them scalable.
Databases
WSO2 API Manager uses two main databases – the registry database and the API management database. The registry database is used by the underlying registry and governance components to store system and API-related metadata. The API management database is primarily used to store API subscriptions. In the standalone setup, these two databases are created in the embedded H2 server.
In a scalable setup, it will be necessary to create these databases elsewhere, ideally in a clustered and highly available database engine. One may use a MySQL cluster, SQL Server cluster or an Oracle cluster for this purpose. As you will see in the next few sections of this post, in a scalable deployment we might cluster some of the internal components of the WSO2 API Manager, so there will be more than one JVM involved. All these JVMs can share the same databases created in the same clustered database engine.
Settings for the registry database are configured in a file named registry.xml which resides in the repository/conf directory of the API Manager. API management database settings are configured in a file named api-manager.xml which also resides in the same directory. Additionally there’s also a master-datasources.xml file where all the different data sources can be defined and you have the option of reusing these data sources in registry.xml and api-manager.xml.
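For illustration, here is roughly what a MySQL-backed datasource entry in master-datasources.xml would look like (element names follow the stock configuration file shipped with the product; the database URL and credentials are placeholders to be adjusted for your environment):
<datasource>
    <name>WSO2_REGISTRY_DB</name>
    <jndiConfig><name>jdbc/WSO2RegistryDB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://db-host:3306/registry_db</url>
            <username>db_user</username>
            <password>db_password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>
Such a datasource can then be referenced from registry.xml (and similarly from api-manager.xml) instead of repeating the connection details in each file.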
API Publisher and API Store
These 2 components are implemented as 2 web applications using Jaggery.js. However they require some of the underlying Carbon components to function – most notably the API management components, governance components and registry components. If your deployment has a large user base, then chances are both API Publisher and API Store will receive a large volume of web traffic. Therefore it’s advisable to scale these two web applications up.
One of the simplest ways to scale them up is by clustering the WSO2 API Manager. You can run multiple instances of the API Manager pointed at the same database. An external load balancer (a hardware load balancer, WSO2 Load Balancer or any HTTP load balancer) can distribute the incoming web traffic among the different API Manager nodes. Tomcat session replication can be enabled among the API Manager nodes so that the HTTP sessions established by the users are replicated across the entire cluster.
The default distribution of WSO2 API Manager has both API Publisher and API Store loaded into the same container, so an out-of-the-box API Manager node plays a dual role. But you have the option of removing one of these components and making a node play a single role. That is, a single node can act either as an API Publisher instance or as an API Store instance. Using this capability you can add a bit of traffic shaping to your clustered API Manager deployment. In a typical scenario there will be only a handful of people (fewer than 50) who create APIs, but a large number of subscribers (thousands) who consume the published APIs. Therefore you can have a large cluster with many API Store nodes and a small cluster of API Publisher nodes (or even a single API Publisher node would do). The two clusters can be set up separately with their own load balancers.
Key Management
The key management component is responsible for generating and keeping track of API keys. It’s also in charge of validating API keys when APIs are invoked by subscribers. All the core functions of this component are exposed as web services. The other components, such as the API Store and API Gateway, communicate with the key manager via web service calls. Therefore if your system has many consumers and receives a large number of API calls, it’s definitely advisable to scale this component up.
Again the easiest way to scale this component is by clustering the API Manager deployment. That way we will get multiple key management service endpoints which can be put behind a load balancer. It’s also not a bad idea to have a separate dedicated cluster of Carbon servers that run as key management servers. An API Manager node can be stripped of its API Publisher, API Store and other unnecessary components to turn it into a dedicated key management server. 
User Management
This is the component against which all user authentication and permission checks are carried out. API Publisher and API Store frequently communicate with this component over a web service interface. In the standalone setup, a database in the embedded H2 server is used to store user profiles and roles. But in a real world deployment, this can be hooked up with a corporate LDAP or an Active Directory instance. To scale this component, we can again make use of simple clustering techniques. All the endpoints of the exposed user management services can be put behind a load balancer and exposed to the API Publisher and API Store.
API Gateway
This is the powerhouse where all the validating, throttling and routing of API calls take place. It mainly consists of WSO2 ESB components and hence can be easily clustered, just as you would set up an ESB cluster. One of the gateway nodes functions as the primary node through which all API configuration changes are applied. API Publisher communicates with the primary node via web service calls to deploy, update and undeploy APIs. Carbon’s deployment synchronizer takes care of propagating all the configuration changes from the primary node to the rest of the nodes in the gateway cluster.
API Gateway also caches a lot of information related to API key validation in order to prevent having to query the key manager frequently. This information is stored in the built-in distributed cache of Carbon (based on Infinispan). Therefore in a clustered setup, information cached by a single gateway node becomes visible to other gateway nodes in the cluster. This further helps to reduce the load on the key manager and improves the response time of API invocations.
Usage Tracking
We use WSO2 BAM components to publish, analyze and display API statistics. BAM has its own scalability model. Thrift is used to publish statistics from API Gateway to a remote Cassandra cluster. Use of Thrift ensures that statistics can be published from API Gateway to the Cassandra store at a rapid rate. The BAM data publisher also employs its own queuing mechanism and thread pool so that data can be published asynchronously without having any impact on the messages routed through the API Gateway. Use of Cassandra enables fast read-write operations on enormous data sets. 
Once the data has been written to the Cassandra cluster, Hadoop and Hive are used to process the collected information. Analyzed data are then stored in a separate database from which API Manager (or any other monitoring application) can pull out the numbers and display in various forms of tables and charts.
Putting It All Together
As you can see WSO2 API Manager provides many options to scale up its individual components. However it doesn’t mean you should scale up each and every piece of it for the overall solution to be scalable. You should decide which components to scale up by looking at your requirements and the expected usage patterns of the solution. For instance, if you only have a handful of subscribers you don’t have to worry about scaling up API Store and API Publisher, regardless of how much traffic they are going to send. If you have thousands of subscribers, but only a handful of them are actually sending any traffic, then the scalability of API Store will be more important than scaling up the Gateway and statistics collection components.

Friday, August 17, 2012

Introducing WSO2 Carbon 4.0

Samisa Abeysinghe speaking about the latest release of WSO2 Carbon.

Monday, August 6, 2012

WSO2 API Manager 1.0.0 Goes GA

Last Friday we released WSO2 API Manager 1.0. It is the result of months of hard work. We started brainstorming about a WSO2-branded API management solution back in mid 2011. A few months later, in October, I implemented API support for Apache Synapse, which was a huge step in improving the REST support in our integration platform (especially in WSO2 ESB). This addition also brought us several steps closer to implementing a fully fledged API management solution based on WSO2 Carbon and related components. Then somewhere around February 2012, a team of WSO2 engineers officially started working on the WSO2 API Manager product. The idea was simple – combine our existing components to offer a smooth, end-to-end API management experience while addressing a number of challenges such as API provisioning, API governance, API security and API monitoring. The idea of combining our mediation, governance, identity and activity monitoring components to build the ultimate API management solution was a fascinating one to think about, even for us.
I officially joined the WSO2 API Manager team in late April. It's been 15 hectic weeks since then, but at the same time it's been 15 enjoyable weeks. Nothing is more fulfilling than seeing a project evolve from a set of isolated components into a feature-complete product with its own UIs, samples and documentation. The development team was also one of the best a guy could ask for, with each member delivering his/her part to the fullest and consistently going beyond expectations.
This release of WSO2 API Manager supports creating APIs, versioning them and then publishing them into an 'API Store' after a review process. API documentation, technical metadata and ownership information can also be collected and tracked through the solution. The built-in API Store allows API consumers to browse the published APIs, provide feedback on them, and ultimately obtain the API keys required to access them. API security is based on the OAuth bearer token profile, and OAuth resource owner grant types are supported to allow end-user authentication for the APIs. The API gateway (runtime) publishes events and statistics to a remote BAM server, which then runs a series of analyzers to extract useful usage information and display it on a dashboard.
We are currently working with a group of customers and independent analysts to evaluate the product and further improve it. The objective is to go into 'release early – release often' mode and do a series of patch releases, thereby driving the product to maturity quickly. You can also join the effort by downloading the product, trying out a few scenarios and giving us some feedback on our mailing lists. You can report any issues or feature requests on our JIRA. Please refer to the online documentation if you need any clarifications on any features. Have fun!

Friday, July 13, 2012

WSO2 API Manager Community Features: The Social Side of Things

The ability to build, nurture and sustain a healthy community of subscribers (API consumers) is one of the most prominent features expected from an API management solution. However, this ability never stands on its own. In fact it's always contingent upon many functional, usability and social aspects of the underlying software. Techies such as myself usually do a good job of identifying and implementing the functional side of things, but we suck at identifying the other, non-technical nitty-gritties. Therefore when designing the social aspects of WSO2 API Manager, we went through a ton of API management related literature. We wanted to make sure that the solution we build fits customer needs and industry expectations well. We read many articles, blogs and case studies that highlighted the community aspects expected in API management solutions and how various software vendors have adopted those principles. We also talked to a number of customers who were either looking to enter the business of API management or were already struggling with some API management solution. As a result of this exercise we were able to identify a long list of functional and non-functional requirements that have a direct impact on the social aspects of API management solutions. I'm listing some of the most important ones here:
  1. Ability to associate documentation and samples with APIs
  2. Overall user friendliness of the API store
  3. Ability to tag APIs and powerful search capabilities
  4. Ability to rate APIs
  5. Ability to provide feedback on APIs
  6. Ability to track API usage by individual subscribers
If you go through the WSO2 API Manager Beta release you will notice how we have incorporated some of the above requirements into the product design. If you log in to the API Publisher as a user who has the “Create” permission, you are given a set of options to create documents for each API.
Again, we have taken into consideration the fact that some API providers might already have tons of documentation managed by an external CMS. Such users are not required to import all the available documents from the CMS into the WSO2 API Manager. They can continue to use the CMS for managing files and simply manage file references (URLs) through the API Manager.
Once the APIs are published to the API Store, subscribers and potential subscribers can browse through the available documents.
Subscribers are also given options to rate and comment on APIs.

API providers can associate one or more tags with each API.
Subscribers can use these tags to quickly jump into the APIs they are interested in.
In the monitoring front, WSO2 API Manager allows API providers to track how often individual subscribers have invoked the APIs.
In general, all the above features combine to give a pretty sleek and smooth API management experience as well as a strong notion of a user community. Feel free to browse through the other related features offered by WSO2 API Manager and see how the end-to-end story fits together. Personally I don't think we are 100% there yet in terms of the social aspects of the product, but I think we are off to a great start (anybody who tells you that their solution is 100% complete in social aspects is full of crap).
One very crucial feature that we are currently lacking in this area is alerting and notifications. Ideally the API Manager should notify API subscribers about any changes that may occur in an API (for example, an API becoming deprecated). On the other hand, it should alert API providers when an API is not generating enough hype or subscriptions. We are already brainstorming about ways to add these missing pieces to the picture. The idea is to take them up as soon as API Manager 1.0.0 goes GA, so hopefully we can have an even more compelling community features story by the 1.1 release.

Wednesday, July 11, 2012

API Life Cycles with WSO2 API Manager


One of the key API management challenges that we're trying to address in WSO2 API Manager is governing the API life cycle. Once you have developed, tested and published an API, you need to maintain it. From time to time you will roll out newer versions of the API into production with bug fixes, functional improvements and performance enhancements. When a new version is rolled out, you need to deprecate previously published versions, thereby encouraging subscribers to use the latest stable version of the API. However you will have to keep supporting the old versions for a period of time, giving existing subscribers a grace period to migrate to the latest version. But at some point you will take the old API versions off production completely and retire them from service. At a glance all this looks very simple and straightforward. But when you add the usual DevOps overhead into the picture, managing the API life cycle becomes one of the most complicated, long-running tasks that you need to deal with. If this long-term procedure isn't tedious enough, there are many short-term maintenance activities you have to perform on APIs which are already in production. More often than not, you will want to temporarily block active APIs so that system administrators can perform the required maintenance work on them. A well thought out API life cycle design should take these long-term as well as short-term requirements into consideration.
We at WSO2 gave this problem some long, hard thought and came up with the following schematic, which illustrates the major life cycle states of a typical API.
If you download the beta release of WSO2 API Manager, you will see how we have incorporated the above model into the product design. Not only does the product support different life cycle states, it makes it absolutely simple to switch between them. It's quite fascinating to see how API Publisher, API Store and API Gateway adjust to the changes in perfect synchronism as you take an API through its complete life cycle. In general these are the semantics that we have enforced for each life cycle state.
  • Created – A new API has been created and awaiting moderation. It hasn't been published to the API Store yet and hence cannot be consumed.
  • Published – API has been moderated and published to the API Store and is available to subscribers. Subscribers can use them in applications, generate keys against them and consume them.
  • Deprecated – API has been marked for retirement in near future. New subscriptions are not allowed on the API, but the existing subscriptions and keys will continue to work. However the existing subscribers are highly advised to upgrade to the latest version of the API.
  • Blocked – The API has been temporarily disabled for maintenance.
  • Retired – The API has been permanently taken off service.
Let's try some of this stuff out and see for ourselves. Extract the downloaded WSO2 API Manager Beta pack, go into the “bin” directory and start the server up. Launch your web browser and enter the URL https://localhost:9443/publisher. Log in to the portal using the default admin credentials (user: admin, pass: admin). Go ahead and create a new API. You can create the “FindTweets” API described in one of my previous posts.
Note that the API is initially in the “CREATED” state. While in this state, the API is not visible in the API Store. Point your web browser to http://localhost:9763/store to confirm this. Now go back to the API Publisher portal and click on the “FindTweets” API. Select the “Life Cycle” tab which contains the necessary controls to change the API life cycle state. 
To start with, put the API into the “PUBLISHED” state and click “Update”. Go to the API Store and refresh the page. The “FindTweets” API should now be visible there.
You can log in to the API Store, click on it and subscribe to the API using the “DefaultApplication”. When done, your “My Subscriptions” page will look something like the following.
Go ahead and generate a production key too. Using this you can invoke the “FindTweets” API. Refer to my previous “Hello World” post if you're not sure how to invoke an API.
Now go back to the API Publisher and change the API state to “BLOCKED”. If you immediately come back to the “My Subscriptions” page and refresh it, you will see that the “FindTweets” API is displayed as a “Blocked” API.
Any attempt to invoke this API will result in an HTTP 503 response.
Go back to the API Publisher and change the API state back to “PUBLISHED”. Try to invoke the API again, and you will notice that things are back to normal. Now in the API Publisher, go to the “Overview” tab of the “FindTweets” API and click on the “Copy” button.
Enter a new version number (eg: 2.0.0) to create a new version of the “FindTweets” API. The newly created version will initially be in the “CREATED” state and hence will not show up in the API Store. However, the API Publisher will list both versions of the API.
Now go to the “Life Cycle” tab of “FindTweets-2.0.0” and select the “PUBLISHED” state from the drop down. This time, since your API has some older versions, you will get some additional options to check.
Make sure all the provided options are checked. Checking the “Deprecate Old Versions” option ensures that all the older versions of the “FindTweets” API will be put into the “DEPRECATED” state as you roll out the new version into the API Store. Checking the “Require Re-Subscription” option makes sure that existing subscribers will have to re-subscribe to the new version of the API in order to consume it. That is, the keys generated against the old version of the API will not be forward compatible with the new version. Having checked all the options, click the “Update” button to apply the changes. If you go back and check your “My Subscriptions” page you will notice that the API has been marked as “Deprecated”.
However, if you go to the API Store home page you will notice that the old version of “FindTweets” is no longer listed there. Only the latest published version is listed on this page.
Even if you somehow manage to access the “FindTweets-1.0.0” API details page on API Store (may be via the link available in “My Subscriptions” page), you will notice that you are no longer given any options for subscribing to the API. In other words, no new subscriptions are allowed on the deprecated APIs.
However if you try to invoke the old version of the API, using the previously obtained key, you will notice that it continues to function as usual.
And now finally, go back to the API Publisher and put the “FindTweets 1.0.0” API into the “RETIRED” state. Check the “My Subscriptions” page after the change.
Also try invoking the API with the old key. You will get an HTTP 404 back, as the API has been undeployed from the system.
I hope this post gave you a brief introduction to how API governance and API life cycle management work in WSO2 API Manager. You can try the various other options available in the product, such as leaving the “Require Re-Subscription” option unchecked when publishing a new version of the API. See how that will automatically subscribe old subscribers to the new version of the API. See the extent to which API metadata is copied when creating a new version of the API using the “Copy” option. Send us your feedback at dev@wso2.org or architecture@wso2.org

Tuesday, July 10, 2012

WSO2 API Manager Permission Model: Controlling What Users Can Do

Most API management solutions segregate their user base into two main categories – API providers (creators) and API consumers (subscribers). Depending on the group to which a user belongs, the system determines what that person can and cannot do once he logs in. At WSO2, we were also thinking about supporting a similar notion of user groups in WSO2 API Manager from day one. However, one thing we weren't very sure about was whether a model with two user groups was sufficient to address the real-world business and technical needs in the field of API management. After a number of brainstorming sessions, heated discussions and customer meetings, we identified three main roles associated with API management.
  1. API Creator – Designs, develops and creates APIs. Typically a techie, such as a developer.
  2. API Publisher – Reviews the work done by the API creators and approves it for publication in the API Store. Doesn't necessarily have to be a techie.
  3. API Subscriber – Browses the API Store to discover APIs and subscribe to them. Typically a technical person who's looking to develop an application using one or more APIs.
Once we had identified the user groups, the next challenge was incorporating them into our design in a flexible manner. We didn't want to tie our implementation to a specific set of user groups (roles), as that would make the end product quite rigid. Most users would want to simply plug their existing user store (a corporate LDAP or an Active Directory instance) into WSO2 API Manager, and they wouldn't want to introduce WSO2 API Manager specific user groups into their user store. Ideally they would like to designate some of their existing user groups as API creators, publishers and subscribers. We also needed our solution to handle various edge cases without much effort. For example, some companies would not want API creators and publishers to be different user groups. Some companies would prefer API creators or publishers to also act as API subscribers, so they can test the production APIs by directly consuming them. As you can see, these edge cases can easily blur the distinction between the different user groups.
So how do we introduce the notion of three user groups, while retaining the flexibility to reuse roles defined elsewhere and to combine multiple roles when required? Thankfully, our WSO2 Carbon architecture has the perfect answer. Carbon gives us a powerful hierarchical permission model. So instead of three predefined user groups, we introduced three Carbon permissions:
  1. Manage > API > Create
  2. Manage > API > Publish
  3. Manage > API > Subscribe
These permissions can be assigned to any user group as you see fit. If required, multiple permissions can be assigned to the same group. For example, take an LDAP server which defines three different user groups – A, B and C. By assigning the above permissions to the respective groups, we can designate users in group-A as API creators, users in group-B as API publishers and users in group-C as subscribers. If we grant both the Create and Publish permissions to group-A, users in that group will be able to both create and publish APIs.
Let's take an API Manager 1.0.0-Beta1 pack and try some of this out. Download the binary distribution and extract it to install the product. Go to the "bin" directory of the installation and launch the wso2server.sh file (or wso2server.bat if you are on Windows) to start the server. Once the server has fully started up, open your web browser and go to https://localhost:9443/carbon. You should get to the login page of the WSO2 API Manager management console.
Log in to the console using the default admin credentials (user: admin, pass: admin). Select the 'Configure' tab and click on 'Users and Roles' in the menu to open the "User Management" page.
Let's start by adding a couple of new roles. Click on “Roles”, which will list all the currently existing roles. WSO2 API Manager uses an embedded H2 database engine as the default user store. So if you were wondering where the user 'admin' and the default roles are stored, there's your answer. To use a different user store, such as an external LDAP server, you need to configure the repository/conf/user-mgt.xml file accordingly.
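As a rough sketch, pointing the server at a read-only LDAP user store means swapping the UserStoreManager definition in user-mgt.xml for something along these lines. This is not the complete property set, the exact property names vary between Carbon versions, and all the connection values shown are placeholders for your own directory:

<UserStoreManager class="org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager">
    <!-- Placeholder connection settings; substitute your directory's values -->
    <Property name="ConnectionURL">ldap://localhost:10389</Property>
    <Property name="ConnectionName">uid=admin,ou=system</Property>
    <Property name="ConnectionPassword">secret</Property>
    <!-- Where to look up users, and which attribute is the username -->
    <Property name="UserSearchBase">ou=Users,dc=example,dc=com</Property>
    <Property name="UserNameAttribute">uid</Property>
    <!-- Read LDAP groups so they can be used as Carbon roles -->
    <Property name="ReadLDAPGroups">true</Property>
    <Property name="GroupSearchBase">ou=Groups,dc=example,dc=com</Property>
    <Property name="GroupNameAttribute">cn</Property>
    <Property name="MembershipAttribute">member</Property>
</UserStoreManager>

Once the LDAP groups show up as roles in the management console, you can assign the three API permissions to them exactly as described below.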
You will notice three roles in the listing – admin, everyone and subscriber. The default "subscriber" role has the "Manage > API > Subscribe" permission. But don't get the idea that the system is somehow tied to this role. The default "subscriber" role is created by the self sign-up component of the API Manager, and it is fully configurable. I will get back to that later. For now, click on the “Add New Role” option to start the role creation wizard.
Enter “creator” as the role name and click “Next”.
From the permission tree select the “Configure > Login” and “Manage > API > Create” permissions and click “Next”. 
You will be asked to add some users to the newly created role, but for now simply click “Finish” to save the changes and exit the wizard. You should now see the new “creator” role listed on the “Roles” page. Start the role creation wizard once more to add another role named “publisher”, and assign the “Configure > Login” and “Manage > API > Publish” permissions to it.
Now let's go ahead and create a couple of user accounts and add them to the newly created roles. Select the “Users and Roles” option from the breadcrumbs to go back to the “User Management” page, and click on “Users”.
This page lists all the user accounts that exist in the embedded user store. By default the system has only one user account (admin), so only that item will be listed. Click on “Add New User” to start the user creation wizard.
Specify “alice” as the username and “alice123” as the password and click “Next”. 
Check the “creator” role in the list of available roles. The “everyone” role is selected by default, and the system does not allow you to uncheck it. Click on “Finish” to complete the wizard. Both “admin” and “alice” should now be listed on the “Users” page. Launch the user creation wizard once more to create another user named “bob”. Specify “bob123” as the password and make sure to add him to the “publisher” role. When finished, sign out from the management console.
So far we have created a couple of new roles and added some users to them. It's time to see how the API Manager utilizes the Carbon permission system. Point your web browser to https://localhost:9443/publisher to get to the API Publisher portal, and sign in with the username “alice” and password “alice123”. Since you are logging in as a user who has the “Manage > API > Create” permission, the API creation options will be displayed to you.
Click on the “New API” button and create a new API (see my previous post on creating APIs). When you're finished, click on the newly created API on the API Publisher home page.
You will be shown options to edit the API and add documentation, but you will not see any options related to publishing it. To be able to publish an API, we must log in as a user who has the “Manage > API > Publish” permission. So sign out from the portal and log in again as the user “bob”.
The first thing to note is that you don't get the “Create API” option on the main menu. So go ahead and select the API previously created by the user “alice” from the home page.
While on this page, note that there are no options available for “bob” to edit the API. However, you will see a “Life Cycle” tab which wasn't visible earlier when you were logged in as “alice”. Select this tab, and you should see the options to publish the API. Select “PUBLISHED” from the “State” drop-down and click “Update” to publish the API. The “Life Cycle History” section at the bottom of the page will be updated to say that “bob” has modified the API status.
We are done with the API Publisher portal for the moment; let's head over to the API Store. Sign out from the API Publisher and point your browser to http://localhost:9763/store. First of all, try signing in as “alice” or “bob”. You will end up with an error message.
Only users with the “Subscribe” permission are allowed to log in to the API Store. So we need to create a new user account with the “Manage > API > Subscribe” permission. We can use the self sign-up feature of the API Store to do that. Click on the “Sign-up” option at the top of the API Store portal.
Create an account with the username “chris” and password “chris123”. When completed, you should see a dialog box confirming that the account was created.
Now go ahead and log in as “chris”. This time the login attempt will be successful. The self sign-up component not only creates the user account, but also adds the user to the “subscriber” role that we saw earlier in the management console. Since this role already has the “Manage > API > Subscribe” permission, self signed-up users can always log in to the API Store without any issues. Also try signing in to the API Publisher portal as the user “chris”. You will get a similar error message.
A word about the default “subscriber” role: this is not a role managed by the system, but rather something created by the self sign-up component of the API Manager. In other words, this is the role to which all self signed-up users will be assigned. You can change the name of this role by modifying the “SubscriberRoleName” parameter in the repository/conf/api-manager.xml file. If you are using an external user store, you can designate an existing role for this purpose too. If you don't want the API Manager to attempt to create this role in the system, set the “CreateSubscriberRole” parameter in api-manager.xml to “false”. If you're going to use an existing role as the subscriber role, make sure that role has the “Configure > Login” and “Manage > API > Subscribe” permissions, so that self signed-up users can log in to the API Store and subscribe to APIs.
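As an illustration, the two parameters in repository/conf/api-manager.xml would look something like this. The surrounding element structure may differ between releases; the values shown are the defaults described above:

<!-- Role to which all self signed-up users are assigned -->
<SubscriberRoleName>subscriber</SubscriberRoleName>
<!-- Set to false if the role already exists (e.g. in an external
     user store) and the API Manager should not try to create it -->
<CreateSubscriberRole>true</CreateSubscriberRole>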
To see the effect of granting multiple API management permissions to the same role, you can use the default “admin” account. This user belongs to a role named “admin”, which has every permission in the Carbon permission tree. Therefore this user can log in to both the API Store and the API Publisher portals. On the API Publisher, he can see all the options related to API creation, editing as well as publishing.
I hope you found our work on the API Manager permission model interesting. Feel free to try different combinations of permissions and different types of user stores. Send us your feedback to dev@wso2.org or architecture@wso2.org.