XMPP, a ground for social technologies

I've long been a fan of the XMPP protocol and I've started implementing it using the fantastic Kamaelia. With all the discussion around GSoC, it appeared that more and more people were interested in seeing XMPP become the natural partner of HTTP in the maze that the Internet has quickly become. Consequently, Peter Saint-André today created the Social mailing-list for all people interested in discussing how XMPP could be used in that social web of ours.

I'm totally biased, but I think there is more to XMPP than IM: the protocol and its suite of extensions provide great power applicable to rich Internet applications, whether they reside inside or outside the browser. For instance, I do believe that rather than using Comet one ought to use XMPP to push notifications to the client. In such a case one might consider the client as a node in a cloud of lazily interconnected nodes, and then start thinking of the browser as more than an HTML rendering engine, without resorting to abusing HTTP for things it was never meant to support.
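
To make the idea concrete, here is a minimal sketch of server-side push using the xmpppy library; the JIDs, password, and message are placeholders, and a real deployment would keep a persistent connection rather than logging in for each notification.

import xmpp

# a hypothetical notifier account; JIDs and password are placeholders
jid = xmpp.protocol.JID('notifier@example.org')
client = xmpp.Client(jid.getDomain(), debug=[])
client.connect()
client.auth(jid.getNode(), 'secret', resource='push')

# push a notification to a user whose client (ideally, the browser
# itself) is just another XMPP node in the cloud
client.send(xmpp.protocol.Message('user@example.org',
                                  'Your feed has been updated'))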

I wish browser vendors would start implementing XMPP within the browser, as that would provide a fantastic incentive for more applications built on the power of XMPP.

CherryPy in the field

Michael Schurter just posted a message on the main CherryPy users mailing-list asking developers who use CherryPy to let the project team know about it. I want to support the idea, as I would love to see our dusty success stories page updated with new entries. I have a feeling that CherryPy is used quite a lot, but mainly as a light HTTP framework for administration tools; such projects are quite likely internal and don't have much visibility outside of their scope. Nonetheless, we'd be interested in knowing where CherryPy is used, so please let us know.
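
For those who have never tried it, the kind of minimal application I have in mind when I say "light HTTP framework" looks like this (a toy sketch, not taken from any particular project):

import cherrypy

class AdminTool(object):
    @cherrypy.expose
    def index(self):
        return "Hello from a small CherryPy admin tool"

if __name__ == '__main__':
    cherrypy.quickstart(AdminTool())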

amplee 0.6.0 released

As per today's announcement, I'm glad to release amplee 0.6.0.
This release is an important departure from previous ones as it no longer includes support for any HTTP layer out of the box. The reason is that this support made the previous API needlessly complex and stopped people from actually using amplee for what it aims to be: a simple server-side representation of the AtomPub protocol. Basically, I'd rather amplee be used as a library than as a host for AtomPub applications.

The 0.6.x branch will therefore focus on polishing the AtomPub model API as well as the related sub-packages, such as the index and graph extensions. Moreover, I would like to improve the performance of amplee, although it has already improved since 0.5.x. The graph sub-package is a first stab at using graph theory, via the igraph package, to perform foxy manipulations of Atom feeds.
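
To give an idea of what the graph sub-package plays with, here is a rough standalone sketch (not amplee's actual API) of modelling entries and their rel="related" links with igraph; the entry ids and edges are made up:

from igraph import Graph

# hypothetical data: each entry is a vertex, each rel="related"
# link between two entries is an edge
entries = ['entry-a', 'entry-b', 'entry-c']
related = [(0, 1), (1, 2)]

g = Graph()
g.add_vertices(len(entries))
g.vs['id'] = entries
g.add_edges(related)

# e.g. find which entries are reachable from the first one
print g.subcomponent(0)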

One major change since 0.5.x is the move from bridge to Amara to parse, query, and generate XML documents within amplee. I think that change was for the best, considering Amara's capabilities.
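
As a taste of Amara's binding style (a toy example, not amplee code), parsing a document gives you natural attribute access to its content:

import amara

doc = amara.parse("""<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Hello</title>
</entry>""")

# elements are reachable as plain Python attributes
print unicode(doc.entry.title)   # u'Hello'
print doc.xml()                  # serialize back to XML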

Another change is that I've dropped the INI file format for configuring and loading an amplee structure. Instead, you can now directly use the XML service document itself and complete it with a bit of extra code. That allows for some fun capabilities, such as mirroring an existing AtomPub service document (see the example directory for instance).
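
For reference, the service document in question is the standard AtomPub one (RFC 5023); this small example uses made-up URLs:

<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>Blog</atom:title>
    <collection href="http://example.org/blog/entries/">
      <atom:title>Entries</atom:title>
      <accept>application/atom+xml;type=entry</accept>
    </collection>
  </workspace>
</service>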
I would like to thank Eric Larson and Mohanaraj Gopala Krishnan for their feedback and patience. They have provided the project with tremendous help.

IronPython, OpenGL, GLFW and SDL

Triggered by a question posted on the IronPython mailing-list by Jane Janet, I decided to see how IronPython would deal with OpenGL.
I quickly realized that the Tao framework was my best bet to gain access to OpenGL from .NET.
I therefore played a bit with the provided examples (ported from excellent tutorials such as NeHe) and started porting them to IronPython. I chose GLFW rather than FreeGlut to obtain a context for running the OpenGL examples, as I prefer GLFW's API design.
I finally ported/wrote three examples for the IronPython cookbook, at Michael J. Foord's invitation.
The GLFW+OpenGL examples were easy to port and run.
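
The skeleton of such a port looks roughly like this (a sketch from memory; check the cookbook entries for the exact code):

import clr
clr.AddReference('Tao.OpenGl')
clr.AddReference('Tao.Glfw')

from Tao.OpenGl import Gl
from Tao.Glfw import Glfw

Glfw.glfwInit()
# width, height, r/g/b/a bits, depth and stencil bits, windowed mode
Glfw.glfwOpenWindow(640, 480, 8, 8, 8, 8, 24, 0, Glfw.GLFW_WINDOW)
Glfw.glfwSetWindowTitle('IronPython + GLFW')

while Glfw.glfwGetWindowParam(Glfw.GLFW_OPENED) == Gl.GL_TRUE:
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT)
    # ... actual drawing goes here ...
    Glfw.glfwSwapBuffers()

Glfw.glfwTerminate()
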
However, the SDL example was quite a pain to set up. I think it comes down to the fact that Tao exposes the SDL API as unmanaged code, which means you have some conversion to do between IntPtr objects and SDL structures. Moreover, I kept running into the SystemError: Missing or incorrect header for method GetValue exception when accessing structure attributes, and that drove me nuts. I finally settled for the horrible evt_type = Sdl.SDL_Event.type.__get__(evt) call, which basically says: in that structure, grab the value of that field for the provided instance. Ugly, I say.
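
In context, the workaround looks something like this (again a rough sketch; note that IronPython hands back the out parameter of SDL_PollEvent as part of a returned tuple):

import clr
clr.AddReference('Tao.Sdl')
from Tao.Sdl import Sdl

Sdl.SDL_Init(Sdl.SDL_INIT_VIDEO)
running = True
while running:
    # the C signature takes an out parameter, which IronPython
    # returns as the second member of a tuple
    polled, evt = Sdl.SDL_PollEvent()
    if polled:
        # the ugly call described above
        evt_type = Sdl.SDL_Event.type.__get__(evt)
        if evt_type == Sdl.SDL_QUIT:
            running = False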

Anyway, I hope those few examples will help people use IronPython in a multimedia context without having to resort to DirectX.

Atom, related and Wikipedia

I recently needed to grab an Atom feed of entries containing link elements with a rel attribute set to “related”. For some reason such feeds are not that common, so I decided to make my own by scraping a few pages from Wikipedia, generating said link elements from the “See also” section of each article. Here is a dirty script that performs that task.

# -*- coding: utf-8 -*-
from datetime import datetime
from urlparse import urljoin, urlparse
import uuid

import amara
from BeautifulSoup import BeautifulSoup
import httplib2

visited_links = []

ATOM10_NS = u'http://www.w3.org/2005/Atom'
ATOM10_PREFIX = u'atom'

def qname(local_name, prefix=None):
    if not prefix:
        return local_name
    return u"%s:%s" % (prefix, local_name)

def init_feed():
    # create the root of the Atom feed with a unique id
    # and the current timestamp
    d = amara.create_document(prefixes={ATOM10_PREFIX: ATOM10_NS})
    feed = d.xml_create_element(qname(u"feed", ATOM10_PREFIX), ns=ATOM10_NS)
    d.xml_append(feed)
    feed.xml_append(d.xml_create_element(qname(u"id", ATOM10_PREFIX), ns=ATOM10_NS,
                                         content=u'urn:uuid:' + unicode(uuid.uuid4())))
    feed.xml_append(d.xml_create_element(qname(u"updated", ATOM10_PREFIX), ns=ATOM10_NS,
                                         content=unicode(datetime.utcnow().isoformat())))
    return d, feed

d, feed = init_feed()

def run(url):
    print "Visiting: %s" % url
    entry = init_entry(url)

    # httplib2 caches responses on disk so each page is fetched only once
    h = httplib2.Http('.cache')
    r, c = h.request(url)
    soup = BeautifulSoup(c)

    entry.xml_append(d.xml_create_element(qname(u"content", ATOM10_PREFIX), ns=ATOM10_NS,
                                          attributes={u'type': u'text/html',
                                                      u'src': unicode(url)}))

    see_also = soup.find(name='a', attrs={'id': 'See_also'})
    if see_also:
        see_also_links = see_also.parent.findNextSibling(name='ul')
        if see_also_links:  # some pages have the "See also" section but it is empty :(
            see_also_links = see_also_links.findAll('li')

        if see_also_links:
            next_to_visits = []
            for link in see_also_links:
                link = str(link.a['href'])
                if urlparse(link)[1] == '':
                    # relative link, make it absolute
                    link = urljoin('http://en.wikipedia.org', link)
                entry.xml_append(d.xml_create_element(qname(u"link", ATOM10_PREFIX), ns=ATOM10_NS,
                                                      attributes={u'rel': u'related', u'type': u'text/html',
                                                                  u'href': unicode(link)}))
                if link not in visited_links:
                    visited_links.append(link)
                    next_to_visits.append(link)

            for link in next_to_visits:
                run(link)

def init_entry(url):
    entry = d.xml_create_element(qname(u"entry", ATOM10_PREFIX), ns=ATOM10_NS)
    feed.xml_append(entry)

    entry.xml_append(d.xml_create_element(qname(u"id", ATOM10_PREFIX), ns=ATOM10_NS,
                                          content=u'urn:uuid:' + unicode(uuid.uuid5(uuid.NAMESPACE_URL, url))))

    entry.xml_append(d.xml_create_element(qname(u"title", ATOM10_PREFIX), ns=ATOM10_NS,
                                          content=unicode(url), attributes={u'type': u'text'}))

    entry.xml_append(d.xml_create_element(qname(u"updated", ATOM10_PREFIX), ns=ATOM10_NS,
                                          content=unicode(datetime.utcnow().isoformat())))

    entry.xml_append(d.xml_create_element(qname(u"link", ATOM10_PREFIX), ns=ATOM10_NS,
                                          attributes={u'rel': u'self', u'type': u'text/html',
                                                      u'href': unicode(url)}))

    author = d.xml_create_element(qname(u"author", ATOM10_PREFIX), ns=ATOM10_NS)
    author.xml_append(d.xml_create_element(qname(u"uri", ATOM10_PREFIX), ns=ATOM10_NS,
                                           content=u'http://en.wikipedia.org'))
    entry.xml_append(author)

    return entry

if __name__ == '__main__':
    try:
        print "Ctrl-C to stop the scraping"
        run('http://en.wikipedia.org/wiki/Castle')
    except KeyboardInterrupt:
        # save whatever has been gathered so far
        file('wikipedia_related_feed.atom', 'wb').write(feed.xml(indent=True))
        print "Saved as wikipedia_related_feed.atom"
    except Exception, ex:
        file('wikipedia_related_feed.atom', 'wb').write(feed.xml(indent=True))
        print "Saved as wikipedia_related_feed.atom"
        raise