Python LDAP for Windows and Python 2.4

At work we need to do some LDAP work, and it so happens we run Python 2.4. Unfortunately, the only pre-compiled package I've found for Windows targets Python 2.5. I've spent all morning trying to compile python-ldap for Windows, but it requires so much tweaking that I've gone slightly mad. Hence my request to the good Python community: would anyone have python-ldap for Python 2.4 on Windows (not Cygwin, by the way)?

Update: Well, as an anonymous commenter showed me, the eggs for Python 2.4 linked from the python-ldap download page work just fine. I had simply missed the DLLs package. I can only wish I could blame it on being Friday.

XMPP in the browser

Following my post on XMPP yesterday, a comment by Steven Kryskalla raised the point of the Mozilla extension xmpp4moz. This extension is indeed an excellent showcase of what I wish I could see more of in the browser. However, while I knew about that extension, I chose not to discuss it because I wanted to stress the need for a standard XMPP API built into browsers. That would imply a discussion shared by all the browser vendors, and I thought suggesting that Mozilla was already there could have diluted the message. That being said, xmpp4moz is a great piece of work and would definitely be an appropriate ground for further discussion.

What I think xmpp4moz does right is to use E4X to represent XMPP stanzas rather than providing its own data model. That keeps the API at quite a low level, but it also makes it much quicker to grasp and more flexible.

Ultimately, what I'd like to see is a standard API that lets me handle the XMPP connection, authentication, session and binding from Javascript, and then provides a clean interface to register handlers for given stanzas. By standard API I obviously mean that the same Javascript works the same way in any browser. Yeah, I'm demanding like that.

Why Javascript? Two reasons that I can think of:

  • The Ajax craze has dramatically pushed competition among the Javascript implementations and proved that Javascript was a valid and solid solution all along.
  • Javascript is now the programming language with the best support inside browsers; there is no point going for something different.

Overall, considering how small XMPP-core is, I don't believe this is an impossible goal; the people behind xmpp4moz have demonstrated as much. What is tough with XMPP is not supporting the core but supporting the myriad of extensions that make the protocol so rich. Still, if people don't even have access to the core, they won't be able to build applications at all.
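To make the "register handlers for given stanzas" idea concrete, here is a rough sketch in Python (a browser API would naturally be Javascript, but the shape would be much the same). The StanzaRouter class, its method names and the dispatch-by-local-name rule are my own invention for illustration, not part of any existing API:

```python
import xml.etree.ElementTree as ET

class StanzaRouter(object):
    """Dispatch incoming XMPP stanzas to handlers registered by local name."""
    def __init__(self):
        self.handlers = {}

    def register(self, local_name, handler):
        # several handlers may listen to the same stanza kind
        self.handlers.setdefault(local_name, []).append(handler)

    def dispatch(self, raw_stanza):
        stanza = ET.fromstring(raw_stanza)
        # strip the namespace from a tag such as '{jabber:client}message'
        local_name = stanza.tag.rsplit('}', 1)[-1]
        results = []
        for handler in self.handlers.get(local_name, []):
            results.append(handler(stanza))
        return results

router = StanzaRouter()
router.register('message', lambda s: s.findtext('{jabber:client}body'))
replies = router.dispatch(
    '<message xmlns="jabber:client" from="someone@example.net">'
    '<body>hello</body></message>')
```

The point is how little machinery is needed once the connection layer hands you parsed stanzas; all the application cares about is the registration interface.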

XMPP, a ground for social technologies

I've long been a fan of the XMPP protocol, and I've started implementing it using the fantastic Kamaelia. With all the discussion around GSoC, it appeared that more and more people were interested in seeing XMPP become the natural partner of HTTP in the maze that the Internet has quickly become. Consequently, Peter Saint-André today created the Social mailing-list for everyone interested in discussing how XMPP could be used in the social web.

I'm totally biased, but I think there is more to XMPP than IM: the protocol and its suite of extensions provide great power applicable to RIAs, whether they reside inside or outside the browser. For instance, I believe that rather than using Comet, one ought to use XMPP to push notifications to the client. In that case, one can consider the client a node in a cloud of loosely interconnected nodes, and start thinking of the browser as more than an HTML rendering engine, without resorting to abusing HTTP for things it was never meant to support.
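As a sketch of what such a push could carry, here is how one might build a pubsub event notification in Python, the kind of stanza an XMPP server pushes to subscribers instead of the client long-polling over HTTP. The element layout follows the pubsub event namespace from XEP-0060, but `make_notification`, its parameters and the node name are purely illustrative:

```python
import xml.etree.ElementTree as ET

PUBSUB_EVENT_NS = '{http://jabber.org/protocol/pubsub#event}'

def make_notification(node, payload_text):
    # <message> wrapping a pubsub <event>, as a server would push it
    message = ET.Element('{jabber:client}message')
    event = ET.SubElement(message, PUBSUB_EVENT_NS + 'event')
    items = ET.SubElement(event, PUBSUB_EVENT_NS + 'items')
    items.set('node', node)
    item = ET.SubElement(items, PUBSUB_EVENT_NS + 'item')
    item.text = payload_text
    return ET.tostring(message)
```

The server fans this out to every entity subscribed to the node; the client's only job is to have a handler registered for `message` stanzas carrying an `event` child.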

I wish browser vendors would start implementing XMPP within the browser, as that would provide a fantastic incentive for more applications built on the power of XMPP.

CherryPy in the field

Michael Schurter just posted a message on the main CherryPy users mailing-list asking developers using CherryPy to let the project team know about it. I want to support the idea, as I would love to see our dusty success-stories page updated with new entries. I have a feeling that CherryPy is used quite a lot, but mainly as a light HTTP framework for administration tools; such projects are quite likely internal and don't have much visibility outside their scope. Nonetheless, we'd be interested in knowing where CherryPy is used, so please let us know.

Atom, related and Wikipedia

I recently needed an Atom feed of entries containing link elements with a rel attribute set to "related". For some reason such feeds are not that common, so I decided to make my own by scraping a few pages from Wikipedia and generating said link elements from the "See also" section of each article. Here is a dirty script that performs that task.

# -*- coding: utf-8 -*-
from datetime import datetime
from urlparse import urljoin, urlparse
import uuid
from xml.sax.saxutils import escape

import amara
from BeautifulSoup import BeautifulSoup
import httplib2

visited_links = []

ATOM10_NS = u'http://www.w3.org/2005/Atom'
ATOM10_PREFIX = u'atom'

def qname(local_name, prefix=None):
    if not prefix:
        return local_name
    return u"%s:%s" % (prefix, local_name)

def init_feed():
    d = amara.create_document(prefixes={ATOM10_PREFIX: ATOM10_NS})
    feed = d.xml_create_element(qname(u"feed", ATOM10_PREFIX), ns=ATOM10_NS)
    d.xml_append(feed)

    feed.xml_append(d.xml_create_element(qname(u"id", ATOM10_PREFIX), ns=ATOM10_NS,
                                         content=u'urn:uuid:' + unicode(uuid.uuid4())))
    feed.xml_append(d.xml_create_element(qname(u"updated", ATOM10_PREFIX), ns=ATOM10_NS,
                                         content=unicode(datetime.utcnow().isoformat()) + u'Z'))
    return d, feed

d, feed = init_feed()

def run(url):
    print "Visiting: %s" % url
    entry = init_entry(url)
    h = httplib2.Http('.cache')
    r, c = h.request(url)
    soup = BeautifulSoup(c)
    entry.xml_append(d.xml_create_element(qname(u"content", ATOM10_PREFIX), ns=ATOM10_NS,
                                          attributes={u'type': u'text/html',
                                                      u'src': unicode(url)}))
    see_also = soup.find(name='a', attrs={'id': 'See_also'})
    if see_also:
        see_also_links = see_also.parent.findNextSibling(name='ul')
        if see_also_links:  # some pages have the list but leave it empty :(
            see_also_links = see_also_links.findAll('li')

        if see_also_links:
            next_to_visits = []
            for link in see_also_links:
                link = str(link.a['href'])
                if urlparse(link)[1] == '':
                    # relative Wikipedia links; resolve against the site root
                    link = urljoin('http://en.wikipedia.org', link)
                entry.xml_append(d.xml_create_element(qname(u"link", ATOM10_PREFIX), ns=ATOM10_NS,
                                                      attributes={u'rel': u'related',
                                                                  u'type': u'text/html',
                                                                  u'href': unicode(link)}))
                if link not in visited_links:
                    visited_links.append(link)
                    next_to_visits.append(link)

            for link in next_to_visits:
                run(link)

def init_entry(url):
    entry = d.xml_create_element(qname(u"entry", ATOM10_PREFIX), ns=ATOM10_NS)
    feed.xml_append(entry)

    entry.xml_append(d.xml_create_element(qname(u"id", ATOM10_PREFIX), ns=ATOM10_NS,
                                          content=u'urn:uuid:' + unicode(uuid.uuid5(uuid.NAMESPACE_URL, url))))
    entry.xml_append(d.xml_create_element(qname(u"title", ATOM10_PREFIX), ns=ATOM10_NS,
                                          content=unicode(url), attributes={u'type': u'text'}))
    entry.xml_append(d.xml_create_element(qname(u"updated", ATOM10_PREFIX), ns=ATOM10_NS,
                                          content=unicode(datetime.utcnow().isoformat()) + u'Z'))
    entry.xml_append(d.xml_create_element(qname(u"link", ATOM10_PREFIX), ns=ATOM10_NS,
                                          attributes={u'rel': u'self', u'type': u'text/html',
                                                      u'href': unicode(url)}))

    author = d.xml_create_element(qname(u"author", ATOM10_PREFIX), ns=ATOM10_NS)
    author.xml_append(d.xml_create_element(qname(u"uri", ATOM10_PREFIX), ns=ATOM10_NS,
                                           content=unicode(url)))  # the article URL stands in for the author URI
    entry.xml_append(author)

    return entry

if __name__ == '__main__':
    print "Ctrl-C to stop the scraping"
    try:
        # start from any Wikipedia article you like
        run('http://en.wikipedia.org/wiki/Atom_(standard)')
    except KeyboardInterrupt:
        file('wikipedia_related_feed.atom', 'wb').write(feed.xml(indent=True))
        print "Saved as wikipedia_related_feed.atom"
    except Exception, ex:
        print ex
        file('wikipedia_related_feed.atom', 'wb').write(feed.xml(indent=True))
        print "Saved as wikipedia_related_feed.atom"
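For reference, each entry the script produces should look roughly like this (the URLs, UUID and timestamp are of course placeholders):

```xml
<atom:entry xmlns:atom="http://www.w3.org/2005/Atom">
  <atom:id>urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8</atom:id>
  <atom:title type="text">http://en.wikipedia.org/wiki/Atom_(standard)</atom:title>
  <atom:updated>2007-06-15T10:00:00Z</atom:updated>
  <atom:link rel="self" type="text/html" href="http://en.wikipedia.org/wiki/Atom_(standard)"/>
  <atom:content type="text/html" src="http://en.wikipedia.org/wiki/Atom_(standard)"/>
  <atom:link rel="related" type="text/html" href="http://en.wikipedia.org/wiki/RSS"/>
</atom:entry>
```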