Tag Archives: python

“Robot Framework Test Automation” book review

From time to time, PacktPub asks me to review one of their Python-related titles. This time around it was their recently released “Robot Framework Test Automation” book. Since I’ve been using this awesome acceptance testing tool at work for more than two years, I was happy to comply.

In a nutshell, Robot Framework provides a great interface that acts as the middle-man between various stakeholders. Indeed, tests are written in plain text (other formats are supported, though I never use them) with a rather minimal set of rules, making them (almost) straightforward to read even for non-technical people. The dirty technical details are hidden away, implemented in Python and executable on any of the supported Python VMs (CPython, Jython and IronPython work out of the box).
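To make the Python side concrete, here is a minimal sketch of what a keyword library could look like. The CalculatorLibrary name and its keywords are made up for illustration, but the pattern is the standard one: every public method of the library becomes a keyword callable from the plain-text test data.

# Minimal sketch of a Robot Framework keyword library (illustrative
# names): public methods become keywords, e.g. "Add Numbers    2    3".
class CalculatorLibrary(object):
    ROBOT_LIBRARY_SCOPE = 'TEST SUITE'

    def __init__(self):
        self._result = 0

    def add_numbers(self, a, b):
        """Adds the two arguments and remembers the result."""
        self._result = int(a) + int(b)
        return self._result

    def result_should_be(self, expected):
        """Fails unless the last computed result matches the expected value."""
        if self._result != int(expected):
            raise AssertionError("Expected %s, got %s"
                                 % (expected, self._result))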

Most of the time, the basics of the Robot Framework data model and workflow can be taught in a couple of hours. However, becoming efficient with it takes a little more time. Still, people don’t have to learn a complete programming language (Python), and that’s a relief: it means they are usually happy to work with Robot Framework’s sometimes cumbersome syntax.

In spite of having rather extensive documentation available online, the project did lack a good, straight-to-the-point summary that takes you by the hand. Moreover, the project’s documentation style is fairly dry and Unix-like, which sometimes makes it tedious to browse. Still, the content is there and it has rarely failed me. With that said, having a friendly book on the subject is a great thing. Kudos to PacktPub. Now about the book…

The good

The book provides an introduction to the tool and its most common usages, and even tries to guide you towards getting more from it. It’s a short book, 83 pages, that will not bore you with complex details. In other words, it’s a good companion to the online documentation when you are starting with Robot Framework.

Sumit Bisht, the author, does a good job keeping a neutral point of view with regards to how you should use Robot Framework. Indeed, depending on your software under test, you might want a more data-oriented approach (à la FitNesse), a behaviour-driven testing approach, or even a more assert-oriented style. Not many tools can deal with all of them equally well, and it also depends on how testing is perceived in your organisation. Robot Framework can cope with all of them.

The bad

Though I can understand it’s only an introduction, it feels like some concepts are not properly explored: the idea behind keywords, the internal data model, dynamic libraries, etc. In other words, you will not really understand the underlying building blocks and axioms that form the pedestal of the whole tool; rather, you’ll learn the basics of using it. In fact, the only section where the book goes into more technical detail (with a good example on using Sikuli) will probably confuse you, since the book fails to properly introduce the principles behind it.

The ugly

There isn’t anything particularly that bad with this book; again, it should be considered a friendly introduction. I do not agree with a few minor points Sumit makes, but they hardly matter and aren’t wrong anyway, just a matter of opinion. Note also that the book lacks examples in a couple of places where they would have mattered, but I don’t believe this makes the book any less useful.

The only thing that really annoys me is that PacktPub’s book layout still looks so unprofessional. They should really make an effort, as the code is most of the time too hard to read (although, on that particular point, this book wasn’t that bad).

Final note

I think this book is ideal if you are about to start with Robot Framework, as it will speed up learning the basics. If you’re already used to the tool, I am not sure it will help very much.

ws4py – WebSocket client and server library for Python

Recently I released ws4py, a package that provides client and server WebSocket support for Python 2.6 and 2.7.

Let’s first have a quick overview of what ws4py offers for now:

  • Support for draft-10 of the WebSocket specification, which is the current draft at the time of writing.
  • A threaded client. This gives you a simple client that doesn’t require any external dependency (see the sketch right after this list).
  • A Tornado client. This client is based on Tornado 2.0, which is quite a popular way of running asynchronous networking code these days. Tornado provides its own server implementation, so I didn’t include mine in ws4py.
  • A CherryPy extension so that you can integrate WebSockets within your CherryPy 3.2.1 server.
  • A gevent server based on the popular gevent library. This is courtesy of Jeff Lindsay.
  • Based on Jeff’s work, a pure WSGI middleware as well (available in the current master branch only until the next release).
  • ws4py runs on Android devices thanks to the SL4A package.
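
As a taste of the client side, here is a rough sketch of what an echo client can look like. Note that the import path and the handler names below follow the threaded client API as it exists in later ws4py releases, so they may not match this early version exactly; the EchoClient class and the URL are made up for illustration.

# Rough sketch of a ws4py threaded client (method names as in later
# ws4py releases; they may differ slightly in this early version).
from ws4py.client.threadedclient import WebSocketClient

class EchoClient(WebSocketClient):
    def opened(self):
        # called once the WebSocket handshake has completed
        self.send("hello")

    def received_message(self, message):
        print(message)

    def closed(self, code, reason=None):
        print("Closed down: %s %s" % (code, reason))

if __name__ == '__main__':
    ws = EchoClient('ws://localhost:9000/')
    ws.connect()
    ws.run_forever()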

Hopefully more clients and servers will be added along the way, as well as Python 3.x support. The former should be rather simple to do thanks to the way I designed ws4py.

The main idea is to make a distinction between the bytes provider and the bytes processing. The former is essentially reading and writing bytes from the connected socket. The latter is about making something out of the received bytes based on the WebSocket specification. In most implementations I have seen so far, both are rather heavily intertwined, making it difficult to swap in a different bytes provider.

ws4py tries a different path by relying on a great feature of Python: the possibility to send data back into a generator. For instance, the frame parser yields the number of bytes it needs each time it requires more, and the caller feeds those bytes back into the generator once they have been received. In fact, the caller of a frame parser is a stream object which acts the same way. The caller of that stream object is in turn the bytes provider (a client or a server). The stream is in charge of aggregating frames into a WebSocket message. Thanks to that design, both the frame and stream objects are totally unaware of the bytes provider and can easily be adapted to various contexts (gevent, Tornado, CherryPy, etc.).
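Here is a toy illustration of that pattern. This is not ws4py’s actual parser, just a minimal sketch of a generator announcing how many bytes it needs and getting them fed back with send():

# Toy illustration of the generator-based design (not ws4py's real
# parser): the generator yields how many bytes it still needs and the
# bytes provider feeds them back in with send().
def frame_parser():
    header = bytearray((yield 2))   # ask the provider for the 2-byte header
    opcode = header[0] & 0x0F
    length = header[1] & 0x7F
    payload = yield length          # ask for 'length' bytes of payload
    print("opcode=%d payload=%r" % (opcode, payload))

parser = frame_parser()
needed = next(parser)               # the parser wants 2 bytes first
needed = parser.send(b'\x81\x05')   # header: FIN + text frame, length 5
try:
    parser.send(b'Hello')           # the payload; the parser finishes here
except StopIteration:
    pass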

On my TODO list for ws4py:

  • Upgrade to a more recent version of the specification
  • Python 3.x implementation
  • Better documentation (read: actually write some)
  • Better performance on very large WebSocket messages

Running CherryPy on Android with SL4A

CherryPy runs on Android thanks to the SL4A project. So if you feel like running Python and your own web server on your Android device, well, you can just do so. You’ve probably not heard anything that awesome since the pizza delivery guy rang the doorbell.

How to go about it? Well, that’s the surprise: CherryPy itself doesn’t need to be patched. Granted, I haven’t tried all the various tools provided by CherryPy, but the server and the dispatching work just fine.

First, you need to get the CherryPy source code, build it, and copy the resulting cherrypy package into the SL4A scripts directory.

Once you’ve plugged your phone into your machine through USB, run the following commands:

$ svn co http://svn.cherrypy.org/trunk cp3-trunk
$ cd cp3-trunk
$ python setup.py build
$ cp -r build/lib.linux-i686-2.6/cherrypy/ /media/usb0/sl4a/scripts/

Just change the path to match your environment. That’s it.

Now you can copy your own script; let’s assume you use something like the one below:

# -*- coding: utf-8 -*-
import logging
# The multiprocessing package isn't
# part of the ASE installation so
# we must disable multiprocessing logging
logging.logMultiprocessing = 0
 
import android
import cherrypy
 
class Root(object):
    def __init__(self):
        self.droid = android.Android()
 
    @cherrypy.expose
    def index(self):
        self.droid.vibrate()
        return "Hello from my phone"
 
    @cherrypy.expose
    def location(self):
        location = self.droid.getLastKnownLocation().result
        location = location.get('network', location.get('gps'))
        return "LAT: %s, LON: %s" % (location['latitude'],
                                     location['longitude'])
 
def run():
    cherrypy.config.update({'server.socket_host': '0.0.0.0'})
    cherrypy.quickstart(Root(), '/')
 
if __name__ == '__main__':
    run()

As you can see we must disable the multiprocessing logging since the multiprocessing package isn’t included with SL4A.

Save that script on your computer as cpdroid.py for example. Copy that file into the scripts directory of SL4A.

$ cp cpdroid.py /media/usb0/sl4a/scripts/

Unplug your phone and go to the SL4A application. Click on the cpdroid.py script; it should start fine. Then, from your browser, go to http://phone_IP:8080/ and ta-da! You can also go to the /location path to get your phone’s geolocation.

Integrating SQLAlchemy into a CherryPy application

Quite often, people come to the CherryPy IRC channel asking how to use SQLAlchemy with CherryPy. There are a couple of good recipes on the tools wiki, but I find them a little complex to begin with. That’s not the recipes’ fault: many people don’t necessarily know about CherryPy tools and plugins at that stage.

The following recipe tries to make the example complete whilst keeping it as simple as possible, to allow folks to get started with SQLAlchemy and CherryPy.

# -*- coding: utf-8 -*-
import os, os.path
 
import cherrypy
from cherrypy.process import wspbus, plugins
 
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column
from sqlalchemy.types import String, Integer
 
# Helper to map and register a Python class as a db table
Base = declarative_base()
 
class Message(Base):
    __tablename__ = 'message'
    id = Column(Integer, primary_key=True)
    value =  Column(String)
 
    def __init__(self, message):
        Base.__init__(self)
        self.value = message
 
    def __str__(self):
        return self.value.encode('utf-8')
 
    def __unicode__(self):
        return self.value
 
    @staticmethod
    def list(session):
        return session.query(Message).all()
 
 
class SAEnginePlugin(plugins.SimplePlugin):
    def __init__(self, bus):
        """
        The plugin is registered to the CherryPy engine and therefore
        is part of the bus (the engine *is* a bus) registry.
 
        We use this plugin to create the SA engine. At the same time,
        when the plugin starts, we create the tables in the database
        using the mapped class of the global metadata.
 
        Finally we create a new 'bind' channel that the SA tool
        will use to map a session to the SA engine at request time.
        """
        plugins.SimplePlugin.__init__(self, bus)
        self.sa_engine = None
        self.bus.subscribe("bind", self.bind)
 
    def start(self):
        db_path = os.path.abspath(os.path.join(os.curdir, 'my.db'))
        self.sa_engine = create_engine('sqlite:///%s' % db_path, echo=True)
        Base.metadata.create_all(self.sa_engine)
 
    def stop(self):
        if self.sa_engine:
            self.sa_engine.dispose()
            self.sa_engine = None
 
    def bind(self, session):
        session.configure(bind=self.sa_engine)
 
class SATool(cherrypy.Tool):
    def __init__(self):
        """
        The SA tool is responsible for associating a SA session
        to the SA engine and attaching it to the current request.
        Since we are running in a multithreaded application,
        we use the scoped_session that will create a session
        on a per-thread basis so that you don't have to worry about
        concurrency on the session object itself.

        This tool binds a session to the engine each time
        a request starts and commits/rolls back whenever
        the request terminates.
        """
        cherrypy.Tool.__init__(self, 'on_start_resource',
                               self.bind_session,
                               priority=20)
 
        self.session = scoped_session(sessionmaker(autoflush=True,
                                                  autocommit=False))
 
    def _setup(self):
        cherrypy.Tool._setup(self)
        cherrypy.request.hooks.attach('on_end_resource',
                                      self.commit_transaction,
                                      priority=80)
 
    def bind_session(self):
        cherrypy.engine.publish('bind', self.session)
        cherrypy.request.db = self.session
 
    def commit_transaction(self):
        cherrypy.request.db = None
        try:
            self.session.commit()
        except:
            self.session.rollback()  
            raise
        finally:
            self.session.remove()
 
 
 
 
class Root(object):
    @cherrypy.expose
    def index(self):
        # return all the recorded messages so far
        msgs = [str(msg) for msg in Message.list(cherrypy.request.db)]
        cherrypy.response.headers['content-type'] = 'text/plain'
        return "Here is your list of messages: %s" % '\n'.join(msgs)
 
    @cherrypy.expose
    def record(self, msg):
        # go to /record?msg=hello world to record a "hello world" message
        m = Message(msg)
        cherrypy.request.db.add(m)
        cherrypy.response.headers['content-type'] = 'text/plain'
        return "Recorded: %s" % m
 
if __name__ == '__main__':
    SAEnginePlugin(cherrypy.engine).subscribe()
    cherrypy.tools.db = SATool()
    cherrypy.tree.mount(Root(), '/', {'/': {'tools.db.on': True}})
    cherrypy.engine.start()
    cherrypy.engine.block()

The general idea is to use the plugin mechanism to register functions on an engine-wide basis, and a tool to provide access to the SQLAlchemy session at request time.
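For instance, once the application above is running, you can exercise it with a quick standard-library check like the one below (the localhost URL and CherryPy’s default 8080 port are assumptions about your local setup):

# Hypothetical quick check of the running application, using only the
# standard library and assuming CherryPy's default port (8080).
import urllib2

# record a message, then list everything recorded so far
urllib2.urlopen('http://localhost:8080/record?msg=hello%20world').read()
print(urllib2.urlopen('http://localhost:8080/').read())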

Using Jython as a CLI frontend to HBase

HBase, the well-known non-relational distributed database, comes with a console program to perform various operations on an HBase cluster. I’ve personally found this tool to be a bit limited and I’ve toyed with the idea of writing my own. Since HBase only comes with a Java driver for direct access, and the various RPC interfaces such as Thrift don’t offer the full set of functions over HBase, I decided to go with Jython and use the Java API directly. This article shows a mock-up of such a tool.

The idea is to provide a simple Python API on top of the HBase one and couple it with a Python interpreter. This means it offers the possibility to perform any Python (well, Jython) operations whilst operating on HBase itself with an easier API than the Java one.

Note also that the tool uses the WSPBus already described in an earlier article to control the process itself. You will therefore need CherryPy’s latest revision.

# -*- coding: utf-8 -*-
import sys
import os
import code
import readline
import rlcompleter
 
from org.apache.hadoop.hbase import HBaseConfiguration, \
     HTableDescriptor, HColumnDescriptor
from org.apache.hadoop.hbase.client import HBaseAdmin, \
     HTable, Put, Get, Scan
 
import logging
from logging import handlers
 
from cherrypy.process import wspbus
from cherrypy.process import plugins
 
class StaveBus(wspbus.Bus):
    def __init__(self):
        wspbus.Bus.__init__(self)
        self.open_logger()
        self.subscribe("log", self._log)
 
        sig = plugins.SignalHandler(self)
        if sys.platform[:4] == 'java':
            del sig.handlers['SIGUSR1']
            sig.handlers['SIGUSR2'] = self.graceful
            self.log("SIGUSR1 cannot be set on the JVM platform. Using SIGUSR2 instead.")
 
            # See http://bugs.jython.org/issue1313
            sig.handlers['SIGINT'] = self._jython_handle_SIGINT
        sig.subscribe()
 
    def exit(self):
        wspbus.Bus.exit(self)
        self.close_logger()
 
    def open_logger(self, name=""):
        logger = logging.getLogger(name)
        logger.setLevel(logging.INFO)
        h = logging.StreamHandler(sys.stdout)
        h.setLevel(logging.INFO)
        h.setFormatter(logging.Formatter("[%(asctime)s] %(name)s - %(levelname)s - %(message)s"))
        logger.addHandler(h)
 
        self.logger = logger
 
    def close_logger(self):
        for handler in self.logger.handlers:
            handler.flush()
            handler.close()
 
    def _log(self, msg="", level=logging.INFO):
        self.logger.log(level, msg)
 
    def _jython_handle_SIGINT(self, signum=None, frame=None):
        # See http://bugs.jython.org/issue1313
        self.log('Keyboard Interrupt: shutting down bus')
        self.exit()
 
class HbaseConsolePlugin(plugins.SimplePlugin):
    def __init__(self, bus):
        plugins.SimplePlugin.__init__(self, bus)
        self.console = HbaseConsole()
 
    def start(self):
        self.console.setup()
        self.console.run()
 
class HbaseConsole(object):
    def __init__(self):
        # we provide this instance to the underlying interpreter
        # as the interface to operate on HBase
        self.namespace = {'c': HbaseCommand()}
 
    def setup(self):
        readline.set_completer(rlcompleter.Completer(self.namespace).complete)
        readline.parse_and_bind("tab:complete")
        import user
 
    def run(self):
        code.interact(local=self.namespace)
 
class HbaseCommand(object):
    def __init__(self, conf=None, admin=None):
        self.conf = conf
        if not conf:
            self.conf = HBaseConfiguration()
        self.admin = admin
        if not admin:
            self.admin = HBaseAdmin(self.conf)
 
    def table(self, name):
        return HTableCommand(name, self.conf, self.admin)
 
    def list_tables(self):
        return self.admin.listTables().tolist()
 
class HTableCommand(object):
    def __init__(self, name, conf, admin):
        self.conf = conf
        self.admin = admin
        self.name = name
        self._table = None
 
    def row(self, name):
        if not self._table:
            self._table = HTable(self.conf, self.name)
        return HRowCommand(self._table, name)
 
    def create(self, families=None):
        desc = HTableDescriptor(self.name)
        if families:
            for family in families:
                desc.addFamily(HColumnDescriptor(family))
        self.admin.createTable(desc)
        self._table = HTable(self.conf, self.name)
        return self._table
 
    def scan(self, start_row=None, end_row=None, filter=None):
        if not self._table:
            self._table = HTable(self.conf, self.name)
 
        sc = None
        if start_row and filter:
            sc = Scan(start_row, filter)
        elif start_row and end_row:
            sc = Scan(start_row, end_row)
        elif start_row:
            sc = Scan(start_row)
        else:
            sc = Scan()
        s = self._table.getScanner(sc)
        while True:
            r = s.next()
            if r is None:
                return
 
            yield r
 
    def delete(self):
        self.disable()
        self.admin.deleteTable(self.name)
 
    def disable(self):
        self.admin.disableTable(self.name)
 
    def enable(self):
        self.admin.enableTable(self.name)
 
    def exists(self):
        return self.admin.tableExists(self.name)
 
    def list_families(self):
        desc = HTableDescriptor(self.name)
        return desc.getColumnFamilies()
 
class HRowCommand(object):
    def __init__(self, table, rowname):
        self.table = table
        self.rowname = rowname
 
    def put(self, family, column, value):
        p = Put(self.rowname)
        p.add(family, column, value)
        self.table.put(p)
 
    def get(self, family, column):
        r = self.table.get(Get(self.rowname))
        v = r.getValue(family, column)
        if v is not None:
            return v.tostring()
 
 
if __name__ == '__main__':
    bus = StaveBus()
    HbaseConsolePlugin(bus).subscribe()
    bus.start()
    bus.block()

To test the tool, you can simply grab the latest copy of HBase and run:

hbase-0.20.4$ ./bin/start-hbase.sh

Then you need to configure your classpath so that it includes all the HBase dependencies. To determine them:

$ ps auwx|grep java|grep org.apache.hadoop.hbase.master.HMaster|perl -pi -e "s/.*classpath //"

Copy the full list of jars and export CLASSPATH with it. (This is from the HBase wiki on Jython and HBase).

Next you have to add an extra jar to the classpath so that Jython supports readline:

$ export CLASSPATH=$CLASSPATH:$HOME/jython2.5.1/extlibs/libreadline-java-0.8.jar

Make sure you install libreadline-java as well.

Now that your environment is set up, save the code above as a script named stave.py and run it as follows:

$ jython stave.py
Python 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54) 
[Java HotSpot(TM) Server VM (Sun Microsystems Inc.)] on java1.6.0_20
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> c.table('myTable').create(families=['aFamily:'])
>>> c.table('myTable').list_families()
array(org.apache.hadoop.hbase.HColumnDescriptor)
>>> c.table('myTable').row('aRow').put('aFamily', 'aColumn', 'hello world!')
>>> c.table('myTable').row('aRow').get('aFamily', 'aColumn')
'hello world!'
>>> list(c.table('myTable').scan())
[keyvalues={aRow/aFamily:aColumn/1277645421824/Put/vlen=12}]

You can, of course, import any Python module available to your Jython environment as well.

I will probably extend this tool over time, but in the meantime I hope you’ll find it a useful canvas for operating on HBase.

A quick chat WebSockets/AMQP client

In my previous article I described how to plug WebSockets into AMQP using Tornado and pika. As a follow-up, I’ll show you how this can be used to write the simplest chat client.

First, we create a web handler for Tornado that returns a web page containing the Javascript code that will connect to and converse with our WebSockets endpoint, following the WebSockets API.

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        username = "User%d" % random.randint(0, 100)
        self.write("""<html>
        <head>
          <script type='application/javascript' src='/static/jquery-1.4.2.min.js'> </script>
          <script type='application/javascript'>
            $(document).ready(function() {
              var ws = new WebSocket('ws://localdomain.dom:8888/ws');
              ws.onmessage = function (evt) {
                 $('#chat').val($('#chat').val() + evt.data + '\\n');                  
              };
              $('#chatform').submit(function() {
                 ws.send('%(username)s: ' + $('#message').val());
                 $('#message').val("");
                 return false;
              });
            });
          </script>
        </head>
        <body>
        <form action='/ws' id='chatform' method='post'>
          <textarea id='chat' cols='35' rows='10'></textarea>
          <br />
          <label for='message'>%(username)s: </label><input type='text' id='message' />
          <input type='submit' value='Send' />
          </form>
        </body>
        </html>
        """ % {'username': username})

Every time the user enters a message, it is submitted to our WebSockets endpoint, which in turn forwards any messages back to the client. These are appended to the textarea.

Internally, each client gets notified of any message through AMQP and the bus. Indeed, the WebSockets handlers are subscribed to a channel that is notified every time the AMQP server pushes data to the consumer. A side effect of this is that the Javascript code above doesn’t update the textarea when it sends the message the user has entered, but when the server sends it back.

Let’s see how we had to change the Tornado application to support that handler as well as the serving of jQuery as a static resource (you need the jQuery toolkit in the same directory as the Python module).

 
if __name__ == '__main__':
    application = tornado.web.Application([
        (r"/", MainHandler),
        (r"/ws", WebSocket2AMQP),
        ], static_path=".", bus=bus)
 
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
 
    bus.subscribe("main", poll)
    WS2AMQPPlugin(bus).subscribe()
    bus.start()
    bus.block()

The code is here.

Once the server is running, open two browser windows and access http://localhost:8888/. You should be able to type messages in one and see them appear in both windows.

Note:

This has been tested against the latest Chrome release. You will need to either set up the “localdomain.dom” hostname or put the IP address of your network interface in the Javascript above, since Chrome doesn’t allow localhost or 127.0.0.1.

Plugging AMQP and WebSockets

In my last article, I discussed how the WSPBus can help you manage Python processes. This time, I’ll show how the bus can help plug heterogeneous frameworks together and manage them properly too.

The following example plugs WebSockets and AMQP together in order to channel data in and out of a WebSockets channel via AMQP exchanges and queues. For this, we’ll be using the Tornado web framework to handle the WebSockets side and pika for the AMQP one.

pika uses the Python built-in asyncore module to perform non-blocking socket operations, whilst Tornado comes with its own main loop on top of select or poll. Since Tornado doesn’t offer a single function call to iterate its loop once, we’ll be using its main loop directly to block the process and therefore won’t be using the bus’s own block method.

Let’s see what the bus looks like:

class MyBus(wspbus.Bus):
    def __init__(self, name=""):
        wspbus.Bus.__init__(self)
        self.open_logger(name)
        self.subscribe("log", self._log)
 
        self.ioloop = tornado.ioloop.IOLoop.instance()
        self.ioloop.add_callback(self.call_main)
 
    def call_main(self):
        self.publish('main')
        time.sleep(0.1)
        self.ioloop.add_callback(self.call_main)
 
    def block(self):
        ioloop = tornado.ioloop.IOLoop.instance()
        try:
            ioloop.start()
        except KeyboardInterrupt:
            ioloop.stop()
            self.exit()
 
    def exit(self):
        wspbus.Bus.exit(self)
        self.close_logger()
 
    def open_logger(self, name=""):
        logger = logging.getLogger(name)
        logger.setLevel(logging.INFO)
        h = logging.StreamHandler(sys.stdout)
        h.setLevel(logging.INFO)
        h.setFormatter(logging.Formatter("[%(asctime)s] %(name)s - %(levelname)s - %(message)s"))
        logger.addHandler(h)
 
        self.logger = logger
 
    def close_logger(self):
        for handler in self.logger.handlers:
            handler.flush()
            handler.close()
 
    def _log(self, msg="", level=logging.INFO):
        self.logger.log(level, msg)

Next, we create a plugin that subscribes to the bus and is in charge of the AMQP communication.

class WS2AMQPPlugin(plugins.SimplePlugin):
    def __init__(self, bus):
        plugins.SimplePlugin.__init__(self, bus)
        self.conn = pika.AsyncoreConnection(pika.ConnectionParameters('localhost'))
        self.channel = self.conn.channel()
        self.channel.exchange_declare(exchange="X", type="direct", durable=False)
        self.channel.queue_declare(queue="Q", durable=False, exclusive=False)
        self.channel.queue_bind(queue="Q", exchange="X", routing_key="")
 
        self.channel.basic_consume(self.amqp2ws, queue="Q")
 
        self.bus.subscribe("ws2amqp", self.ws2amqp)
        self.bus.subscribe("stop", self.cleanup)
 
    def cleanup(self):
        self.bus.unsubscribe("ws2amqp", self.ws2amqp)
        self.bus.unsubscribe("stop", self.cleanup)
        self.channel.queue_delete(queue="Q")
        self.channel.exchange_delete(exchange="X")
        self.conn.close()
 
    def amqp2ws(self, ch, method, header, body):
        self.bus.publish("amqp2ws", body)
        ch.basic_ack(delivery_tag=method.delivery_tag)
 
    def ws2amqp(self, message):
        self.bus.log("Publishing to AMQP: %s" % message)
        self.channel.basic_publish(exchange="X", routing_key="", body=message)

The interesting bits are the amqp2ws and ws2amqp methods. The former is called any time the AMQP broker pushes data to our AMQP consumer; we then use the bus to publish the message to any interested subscribers. The latter publishes messages coming from the WebSockets channel to AMQP.

Next let’s see the Tornado WebSockets handler.

class WebSocket2AMQP(websocket.WebSocketHandler):
    def __init__(self, *args, **kwargs):
        websocket.WebSocketHandler.__init__(self, *args, **kwargs)
        self.settings['bus'].subscribe("amqp2ws", self.push_message)
 
    def open(self):
        self.receive_message(self.on_message)
 
    def on_message(self, message):
        self.settings['bus'].publish("ws2amqp", message)
        self.write_message(message)
        self.receive_message(self.on_message)
 
    def on_connection_close(self):
        self.settings['bus'].unsubscribe("amqp2ws", self.push_message)
 
    def push_message(self, message):
        self.write_message(message)

The on_message method is called whenever data is received from the client; push_message is used to push data to the client.

Finally, we plug everything together:

if __name__ == '__main__':
    application = tornado.web.Application([
        (r"/ws", WebSocket2AMQP),
        ], bus=bus)
 
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
 
    bus.subscribe("main", poll)
    WS2AMQPPlugin(bus).subscribe()
    bus.start()
    bus.block()

Notice that we subscribe the asyncore poll function to the main channel of the bus so that pika works properly, as if we had called asyncore.loop().
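For reference, the poll callback itself is not shown in this excerpt; my assumption is that it is simply asyncore’s own single-pass function:

# Assumption: 'poll' is asyncore's single-pass function. Every time
# MyBus.call_main publishes on the "main" channel (roughly every 0.1
# seconds), poll() runs one select() pass over the sockets registered
# with asyncore, which is how pika's AsyncoreConnection gets serviced
# without ever calling the blocking asyncore.loop().
from asyncore import poll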

The code can be found here.

Book Review – Expert Python Programming

Expert Python Programming by Tarek Ziadé is a great book that I recommend to anyone wishing to explore Python in a professional fashion.

The book covers topics that aren’t usual for Python books, at least to my knowledge, like how to build, package, release, distribute your software using tools that have become de facto standards in the Python community (setuptools, paste, zc.buildout, etc.). The book also gently introduces the reader to distributed version control and test-driven development.

The good

The book has 14 chapters and I really believe chapters 5 to 13 are worth every penny. They focus on the delivery of software from building to distributing through testing and optimizing. There is a little something for everyone there and I agree with the book saying it’s an authoritative reference on the subject. If you want an extensive resource on the process of moving from a personal project to a professional one, these chapters are what you need. Of course these chapters aren’t exhaustive but do offer enough meat to get you going and whet your appetite.

Tarek has great experience in those fields and it shows; he surely wanted to include more content, but that would probably have made the book harder to follow.

The “it depends on who you are and what you’ll be looking for”

I don’t want to say bad, as it’s only my own opinion here, but I wasn’t convinced by chapters 1 to 4, nor by chapter 14. They deal mainly with advanced Python programming dos and don’ts: iterators, metaclasses, the MRO, design patterns. Sadly, I didn’t feel they belonged to the topic of the book. They are interesting on their own but make the book less tight, as if it had started in one direction and suddenly shifted. That being said, they are useful as a reference and, even if they aren’t always crystal clear, they are an enjoyable source of information.

A summary

Writing a book of this nature is hard because it is large in scope but also in audience. Tarek explains that the book aims at Python developers but that in some instances project managers may find it useful. This is probably true if they were developers at some point. Expert Python Programming is a rich technical book whose name isn’t too cocky considering its content.

Not only did I enjoy the book, I also brought it to the office as it was immediately useful for a case I was working on. Recommended.