
Final thoughts
Well, today is my last day at [OpenAdvantage](https://www.openadvantage.org/) and I have been working on a few articles that others may be interested in:
* [National Institute of Mental Health in England case study](https://www.openadvantage.org/casestudies/oadocument.2006-09-01.9021661077) – this is one of the projects I worked on, and it was awesome to take them from an initial curiosity of Open Source through to a full solution.
* [Closing words](https://www.openadvantage.org/articles/oadocument.2006-09-01.8392124825) – [Paul](https://www.devel.co.uk/) asked if I could write an article about my experience at OpenAdvantage and what we achieved, so this is it.
It's been awesome working at OpenAdvantage, and now a new chapter opens. Onto Canonical…

Transparency in process
As I finish up my few remaining days at [OpenAdvantage](https://www.openadvantage.org/), a few people have mailed me with comments and thoughts about the [recent update debacle with Ubuntu](https://www.ubuntu.com/FixForUpgradeIssue). Personally, I have not wanted to blog about it as I have not had a huge amount to bring to the discussion, but [Mark’s post](https://www.markshuttleworth.com/archives/54) brings up some issues I do want to talk about. Now, I must stress here that I am not privy to any internal strategy at Canonical about this issue – I haven’t even started working there yet – and my blog is most certainly not a platform for me to advertise Canonical strategy, not that they would ever ask for it to be. Every word you read on this blog, now and in the future, is *mine*, so *do* read it as such.
One of the reasons why I love free software is that process is *transparent*. When you immerse yourself in the free software ecosystem, you get used to such an ethos. You can easily read developer discussions, view bug reports, access source code, interact with the community and more. Software development is something that is performed in the open, and this transparency is essential – it is the lifeblood behind the concepts and philosophy that we call *freedom* and *openness*. As a community, these concepts are our binding agreement to be fair to each other, and transparency is the currency that is exchanged to be a part of such a functioning community.
Although transparency can be easily defined at the raw software engineering level, the waters can get a little murky when it comes to governance, structure, policy and direction. In some situations, transparency is swapped for convenience and a pressure to achieve something, preferably within the direction and roadmap agreed in the open community. With this compromise in openness, criticism and one-shot slogans often hit the newswires, with terms such as *not invented here*, *design by committee* and *closed shop*, and cynicism spreads, questioning how open and free a project really is. It is clear that there are certain situations and circumstances in which complete openness is either not an option or not a sane option, and such situations net dramatically different reactions. Evidently *transparency*, *openness* and *freedom* have no yardstick.
The Ubuntu project is an astonishing example of transparency at work. The project has built a strong set of structures and policies to define open process, and the entire project was birthed in a culture in which development within each of the many disciplines (art, docs, translations, coding, QA, a11y, packaging etc.) should be community driven. I am convinced that part of the reason Ubuntu has been so successful so far is this strong commitment to transparency. Again, this is relatively straightforward to achieve when it comes to development, and has been for hundreds of other Open Source projects. One key example, particularly pertinent to Ubuntu, is Debian. Debian has not only led free software in technology but also in defining open processes that scale. Where it does get interesting is when you mix in Canonical.
Now, again, I must be clear here – I haven’t actually started at Canonical yet (I start on Monday), but I have been having some discussions about projects and areas to focus on when I do. These conversations have been with Canonical people such as Jane Silber, Mark Shuttleworth and Matt Zimmerman, and their intelligent commitment to community has been even stronger than I expected. The reason it is intelligent is a real understanding of what community actually means, rather than deriding it as *something that is good for PR* or *cheap labour*. These discussions have placed the community at the core, complete with supporting infrastructure to help things tick along. This is why Ubuntu is so popular – not only is there an open process and some solid, well-developed technology, but Canonical have a real understanding of the community themselves. I don’t think I have seen any other company that hits the nail so perfectly on the head, and this is why I wanted to work there. I simply would not work for a company that did not have a clear understanding of what drives our community, and I am proud that in a few days I will be part of a company that really does “get it”.
Bringing the discussion back to the update mistake, the response was another demonstration of this open and transparent process, and of how part of subscribing to such a process is admitting when you miss the target and screw something up. In the traditional IT world, a crack team of PR monkeys would no doubt have been instructed to paper over the cracks and help move the news cycle on, but the Ubuntu project instead identified the issue and worked to resolve it quickly, outlining the problem and the solution clearly on the website and elsewhere. Mark’s pleasantly candid blog post further secured the message that something went wrong, that it is entirely unacceptable, and that efforts are underway to stop it happening again. In a traditional market, this message could be received with scepticism, but in the transparent market of Open Source and free software, you yourself can watch the open landscape for such efforts to prevent future mishaps. Mark’s hands-on approach and acceptance of the issue speaks volumes about what transparency means to Canonical, and knowing him, it is not just lip service.
Sure, nothing is perfect, and there are many bugs to fix, problems to solve and ways to further improve this openness and transparency, and my big list has a collection of action points and areas in which I am determined to further improve the process. I don’t believe in resting on your laurels, and there is always scope to tweak your methods of working and refine your approach. Whenever you join a vendor and start working for them, there is always an assumption from onlookers that you will toe the party line and always buy into that vendor’s message. I have always remained committed to only working with clueful people who have a message and ethos that I agree with, and I will always be my own judge. I am looking forward to working with these clueful people in the open and transparent community we cherish so much.

Getting started with GStreamer with Python
You know, there are tonnes of undocumented things out there. Really, really cool technologies that should be getting used more are *not* getting used as much because decent docs are lacking. And, to make matters worse, the developers naturally just want to get on and write the software. So, I would like to urge everyone who reads this (and I am thinking of you ‘orrible lot on [Planet GNOME](https://planet.gnome.org/) in particular) to write an article about something you have discovered that isn’t particularly well documented. This could be a technique, a technology, a skill or something else. Let’s get some Google juice pumping and get some extra docs out there to help people get started. 🙂
So, with this in mind, I am going to write a simple first guide to getting started with [GStreamer](https://gstreamer.freedesktop.org/) using the excellent Python bindings. This tutorial should be of particular interest if you want to hack on [Jokosher](https://www.jokosher.org/), [Pitivi](https://www.pitivi.org/) or [Elisa](https://www.fluendo.com/elisa/) as they, like many others, are written in Python and use GStreamer.
Ready? Right, let’s get started with the prerequisites. You will need the following:
* GStreamer 0.10
* Python
* PyGTK (often packaged as python-gtk2)
You will also need a text editor. Now, some of you will want to have a big ‘ole argument about which one that is. Come back in four hours and we can continue. 😛
## An overview
So, what is GStreamer and how does it help you make multimedia applications? Well, GStreamer is a multimedia framework that allows you to easily create, edit and play multimedia by building pipelines out of multimedia elements.
GStreamer has a devilishly simple way of working. With GStreamer you create a *pipeline*, and it contains a bunch of *elements* that make that multimedia shizzle happen. This is very, very similar to pipelines on the Linux/BSD/UNIX command line. As an example, on the normal command line you may enter this command:
foo@bar:~$ ps ax | grep "apache" | wc -l
This command first grabs a process listing, then returns all the processes called "apache", and then feeds this list into the `wc` command, which counts the number of lines with the `-l` switch. The result is a number that tells you how many instances of "apache" are running.
From this we can see that each command is linked with the `|` symbol, and the output of the command on the left of the `|` is fed into the input of the command on the right of the `|`. This is eerily similar to how GStreamer works.
With GStreamer you string together elements, and each element does something in particular. To demonstrate this, find an Ogg file (such as [my latest tune](https://www.recreantview.org/songs/jonobacon-beatingheart.ogg) 😛 ), save it to a directory, `cd` to that directory in a terminal and run the following command:
foo@bar:~$ gst-launch-0.10 filesrc location=jonobacon-beatingheart.ogg ! decodebin ! audioconvert ! alsasink
(you can press Ctrl-C to stop it)
When you run this, you should hear the track play. Let’s look at what happened.
The `gst-launch-0.10` command can be used to run GStreamer pipelines. You just pass the command the elements you want to play one by one, and each element is linked with the `!` symbol. You can think of the `!` as the `|` in a normal command-line pipeline. The above pipeline contains a bunch of elements, so let’s explain what they do:
* `filesrc` – this element loads a file from your disk. Next to the element you set its `location` property to point to the file you want to load. More on *properties* later.
* `decodebin` – you need something to decode the file from the filesrc, so you use this element. This element is a clever little dude: it detects the type of file and automatically constructs some GStreamer elements in the background to decode it. So, for an Ogg Vorbis audio file, it actually uses the `oggdemux` and `vorbisdec` elements. Just mentally replace the `decodebin` part of the pipeline with `oggdemux ! vorbisdec` and you get an idea of what is going on.
* `audioconvert` – the kind of information in a sound file and the kind of information that needs to come out of your speakers are different, so we use this element to convert between them.
* `alsasink` – this element spits audio to your sound card using ALSA.
So, as you can see, the pipeline works the same as the command-line pipeline we discussed earlier – each element feeds into the next element to do something interesting.
At this point you can start fiddling with pipelines and experimenting. To do this, you need to figure out which elements are available. You can do this by running the following command:
foo@bar:~$ gst-inspect-0.10
This lists all available elements, and you can use the command to find out details about a specific element, such as the `filesrc` element:
foo@bar:~$ gst-inspect-0.10 filesrc
## More about GStreamer
OK, let’s get down and dirty with some of the GStreamer terminology. Some people get quite confused by terms such as *pads* and *caps*, not to mention *bins* and *ghost pads*. It is all rather simple to understand once you get your head around it, so let’s have a quick run around the houses and get to grips with it.
We have already discussed what a *pipeline* is, and that *elements* live on the pipeline. Each element has a number of *properties*. These are settings for that particular element (like knobs on a guitar amp). As an example, the `volume` element (which sets the volume of a pipeline) has properties such as `volume` which sets the volume and `mute` which can be used to mute the element. When you create your own pipelines, you will set properties on a lot of elements.
Each element has virtual plugs through which data flows in and out, called *pads*. If you think of an element as a black box that does something to the information fed into it, on the left and right sides of the box would be sockets into which you can plug a cable to feed that information into the box. This is what pads do. Most elements have an input pad (called a *sink*) and an output pad (called a *src*). Using my l33t ASCII art mad skillz, this is how our pipeline above looks in terms of the pads:
[src] ! [sink src] ! [sink src] ! [sink]
The element on the far left only has a *src* pad as it only provides information (such as the `filesrc`). The next few elements take information and do something to it, so they have sink and src pads (such as the `decodebin` and `audioconvert` elements), and the final element only receives information (such as the `alsasink`). When you use the `gst-inspect-0.10` command to look at an element’s details, it will tell you which pads the element has.
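If it helps to see this concretely, you can ask an element about its pads from Python too. If memory serves, `pads()` returns an element’s pads in the 0.10 bindings; this is a quick throwaway sketch (jumping ahead a touch to the Python bindings covered later in this article), not part of the tutorial scripts:
#!/usr/bin/python
import pygst
pygst.require("0.10")
import gst

# Create a standalone element and list its pads
conv = gst.element_factory_make("audioconvert", "conv")
for pad in conv.pads():
    # Each pad reports its name and direction (gst.PAD_SRC or gst.PAD_SINK)
    print pad.get_name(), pad.get_direction()
An `audioconvert` element should report both a sink pad and a src pad, matching the middle boxes in the ASCII art above.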
So, we know we have pads, and data flows through them from the first element on the pipeline to the last, and now we need to talk about *caps*. Each element has particular *caps*, which describe what kind of information the element takes (such as whether it handles audio or video). You can think of caps as the equivalent of the rules on a power socket that say it takes electricity of a particular voltage.
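To make caps a little more concrete, here is a small hedged sketch in Python (the API is explained properly below). The `link_filtered()` method links two elements just like `link()`, but only lets data matching the given caps through; `src` and `sink` here are stand-ins for any two compatible elements you have created:
# Link src to sink, but only allow 44.1kHz stereo raw audio through
caps = gst.Caps("audio/x-raw-int,rate=44100,channels=2")
src.link_filtered(sink, caps)
This is handy when an element supports lots of formats and you want to pin down exactly one.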
Let’s now talk about *bins*. A lot of people get confused about bins, but they are pretty simple. A *bin* is just a convenient way of collecting elements together into a container. As an example, you may have a bunch of elements that decode a video and apply some effects to it. To make this easier to handle, you can put these elements into a bin (which is like a container), and then refer to the bin in order to refer to those elements. As such, the bin becomes an element in its own right. As an example, if your pipeline was `a ! b ! c ! d`, you could put them all into `mybin`, and when you refer to `mybin`, you are actually using `a ! b ! c ! d`. Cool, huh?
Finally, this brings us onto *ghost pads*. When you create a bin and shove a bunch of elements in there, the bin then becomes your own custom element which in turn uses those elements in the bin. To do this, your bin naturally needs its own pads that hook up to the elements inside the bin. This is exactly what *ghost pads* are. When you create a bin, you create the ghost pads and tell them which elements inside the bin they hook up to. Simple. 🙂
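To tie bins and ghost pads together, here is a short sketch using the Python bindings (covered properly below). It wraps a decoder and a converter in a bin, then exposes the decoder’s sink pad on the bin via a ghost pad, so the bin can be linked like any normal element:
# Build a bin containing a Vorbis decoder feeding an audio converter
mybin = gst.Bin("mybin")
decoder = gst.element_factory_make("vorbisdec", "decoder")
converter = gst.element_factory_make("audioconvert", "converter")
mybin.add(decoder, converter)
decoder.link(converter)

# Expose the decoder's sink pad as the bin's own pad via a ghost pad
ghostpad = gst.GhostPad("sink", decoder.get_pad("sink"))
mybin.add_pad(ghostpad)
Anything you now feed into `mybin`’s sink pad flows into the decoder and through the converter; you would add a second ghost pad for the converter’s src pad in the same way to get the data back out.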
## Writing some code
To make this GStreamer goodness happen in a Python script, you only need to know a few core skills to get started. These are:
* Create a pipeline
* Create elements
* Add elements to the pipeline
* Link elements together
* Set it off playing
So, let’s get started. We are going to create a program that does the equivalent of this:
foo@bar:~$ gst-launch-0.10 audiotestsrc ! alsasink
Here we use the `audiotestsrc` element, which just outputs an audible tone, and then feed that into an `alsasink` so we can hear it via the sound card. Create a file called *gstreamertutorial-1.py* and add the following code:
#!/usr/bin/python
import pygst
pygst.require("0.10")
import gst
import pygtk
import gtk

class Main:
    def __init__(self):
        self.pipeline = gst.Pipeline("mypipeline")
        self.audiotestsrc = gst.element_factory_make("audiotestsrc", "audio")
        self.pipeline.add(self.audiotestsrc)
        self.sink = gst.element_factory_make("alsasink", "sink")
        self.pipeline.add(self.sink)
        self.audiotestsrc.link(self.sink)
        self.pipeline.set_state(gst.STATE_PLAYING)

start=Main()
gtk.main()
[Download the code for this script here](https://jonobacon.com/files/gstreamertutorial-1.py).
So, let’s explain how this works. First we import some important Python modules:
import pygst
pygst.require("0.10")
import gst
import pygtk
import gtk
Here the GStreamer modules (pygst and gst) are imported, and we also import the GTK modules so we can use the GTK mainloop. A mainloop is the loop that keeps the program alive and dispatches events; we need some kind of mainloop for GStreamer to do its work, so we borrow GTK’s.
Now let’s create a Python class and its constructor:
class Main:
    def __init__(self):
Now, to the meat. First create a pipeline:
self.pipeline = gst.Pipeline("mypipeline")
Here you create a pipeline that you can reference in your Python script as `self.pipeline`. The `mypipeline` bit in the brackets is a name for that particular instance of a pipeline. This is used in error messages and the debug log (more on the debug log later).
Now let’s create an element:
self.audiotestsrc = gst.element_factory_make("audiotestsrc", "audio")
Here you create the `audiotestsrc` element using the `element_factory_make()` method. This method takes two arguments – the name of the element you want to create and, again, a name for that particular instance of the element. Now let’s add it to the pipeline:
self.pipeline.add(self.audiotestsrc)
Here we use the `add()` method that is part of the pipeline to add our new element.
Let’s do the same for the `alsasink` element:
self.sink = gst.element_factory_make("alsasink", "sink")
self.pipeline.add(self.sink)
With our two elements added to the pipeline, let’s now link them:
self.audiotestsrc.link(self.sink)
Here you take the first element (`self.audiotestsrc`) and use the `link()` method to link it to the other element (`self.sink`).
Finally, let’s set the pipeline to play:
self.pipeline.set_state(gst.STATE_PLAYING)
Here we use the `set_state()` method from the pipeline to set the pipeline to a particular state. There are a bunch of different states, but here we set it to `PLAYING` which makes the pipeline run. Other pipeline states include `NULL`, `READY` and `PAUSED`.
Finally, here is the code that creates the `Main` instance and runs the mainloop:
start=Main()
gtk.main()
To run this script, set it to be executable and run it:
foo@bar:~$ chmod a+x gstreamertutorial-1.py
foo@bar:~$ ./gstreamertutorial-1.py
You should hear the audible tone through your speakers. Press Ctrl-C to cancel it.
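As an aside, you do not always have to build pipelines element by element. The `gst.parse_launch()` function takes a gst-launch style string and builds the whole pipeline for you, which is a handy shortcut for experiments. A minimal sketch of the same tone generator:
#!/usr/bin/python
import pygst
pygst.require("0.10")
import gst
import gtk

# Build the entire pipeline from a gst-launch style description
pipeline = gst.parse_launch("audiotestsrc ! alsasink")
pipeline.set_state(gst.STATE_PLAYING)
gtk.main()
Building pipelines element by element, as in the tutorial script, gives you handles on the individual elements, which you need when setting properties or relinking things on the fly.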
## Setting properties
Right, let’s now add a line of code to set a property on an element. Underneath the `self.audiotestsrc = gst.element_factory_make("audiotestsrc", "audio")` line, add the following:
self.audiotestsrc.set_property("freq", 200)
This line uses the `set_property()` method as part of the element to set a particular property. Here we are setting the `freq` property and giving it the value of `200`. This property specifies what frequency the tone should play at. Add the line of code above (or download an updated file [here](https://archivedblog.jonobacon.com/files/gstreamertutorial-2.py)) and run it. You can then change the value from `200` to `400` and hear the difference in tone. Again, use `gst-inspect-0.10` to see which properties are available for that particular element.
You can change properties while the pipeline is playing, which is incredibly useful. As an example, you could have a volume slider that sets the `volume` property in the `volume` element to adjust the volume while the audio is being played back. This makes your pipelines really interactive when hooked up to a GUI. 🙂
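To see this in action, here is a small sketch (not one of the numbered tutorial files) that drops a `volume` element into the test pipeline and halves its `volume` property once a second while the audio plays. The `gobject.timeout_add()` call is just one convenient way to poke the property periodically:
#!/usr/bin/python
import pygst
pygst.require("0.10")
import gst
import gobject
import gtk

# Name the volume element so we can fetch it back out of the pipeline
pipeline = gst.parse_launch("audiotestsrc ! volume name=vol ! alsasink")
vol = pipeline.get_by_name("vol")
pipeline.set_state(gst.STATE_PLAYING)

def quieter():
    # Halve the volume; returning True keeps the timeout firing
    vol.set_property("volume", vol.get_property("volume") * 0.5)
    return True

gobject.timeout_add(1000, quieter)
gtk.main()
You should hear the tone fade away in steps, all without stopping the pipeline.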
## Hooking everything up to a GUI
Right, so how do we get this lot working inside a GUI? Well, again, it’s fairly simple. This section will make the assumption that you know how to get a Glade GUI working inside your Python program (see [this excellent tutorial](https://www.learningpython.com/2006/05/07/creating-a-gui-using-pygtk-and-glade/) if you have not done this before).
Now, go and download [this glade file](https://archivedblog.jonobacon.com/files/gui.glade) and [this Python script](https://archivedblog.jonobacon.com/files/gstreamertutorial-3.py). The Python script has the following code in it:
#!/usr/bin/python
import pygst
pygst.require("0.10")
import gst
import pygtk
import gtk
import gtk.glade

class Main:
    def __init__(self):
        # Create gui bits and bobs
        self.wTree = gtk.glade.XML("gui.glade", "mainwindow")

        signals = {
            "on_play_clicked" : self.OnPlay,
            "on_stop_clicked" : self.OnStop,
            "on_quit_clicked" : self.OnQuit,
        }
        self.wTree.signal_autoconnect(signals)

        # Create GStreamer bits and bobs
        self.pipeline = gst.Pipeline("mypipeline")
        self.audiotestsrc = gst.element_factory_make("audiotestsrc", "audio")
        self.audiotestsrc.set_property("freq", 200)
        self.pipeline.add(self.audiotestsrc)
        self.sink = gst.element_factory_make("alsasink", "sink")
        self.pipeline.add(self.sink)
        self.audiotestsrc.link(self.sink)

        self.window = self.wTree.get_widget("mainwindow")
        self.window.show_all()

    def OnPlay(self, widget):
        print "play"
        self.pipeline.set_state(gst.STATE_PLAYING)

    def OnStop(self, widget):
        print "stop"
        self.pipeline.set_state(gst.STATE_READY)

    def OnQuit(self, widget):
        gtk.main_quit()

start=Main()
gtk.main()
In this script you basically create your pipeline in the constructor, along with the code to present the GUI. We then have a few class methods that run when the user clicks the different buttons. The Play and Stop buttons simply set the state of the pipeline to `PLAYING` (Play button) or `READY` (Stop button).
## Debugging
Debugging when things go wrong is always important. There are two useful techniques you can use to peek inside what is going on in your pipelines within your GStreamer programs. You should first know how to generate a debug log file from your program. You do so by setting some environment variables before you run it. As an example, to run the previous program and generate a debug log called *log*, run the following command:
foo@bar:~$ GST_DEBUG=3,python:5,gnl*:5 ./gstreamertutorial-3.py > log 2>&1
This will generate a file called *log* that you can have a look into. Included in the file are ANSI codes to colour the log lines to make it easier to find errors, warnings and other information. You can use `less` to view the file, complete with the colours:
foo@bar:~$ less -R log
less will warn that this may be a binary file and ask if you want to view it anyway. Press `y` and you can see the debug log. Inside, the log will tell you which elements are created and how they link together.
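As far as I know, GStreamer reads `GST_DEBUG` when the `gst` module is first imported, so you can also set the debug level from inside the script itself rather than on the command line, as long as you do it before the import. A sketch:
import os
os.environ["GST_DEBUG"] = "3"  # must be set before gst is imported

import pygst
pygst.require("0.10")
import gst
This can be handy if you always want a particular debug level for a program without remembering the incantation.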
## Onwards and upwards
So there we have it, a quick introduction to GStreamer with Python. There is of course much more to learn, but this tutorial should get you up and running. Do feel free to use the comments on this blog post to discuss the tutorial, add additional thoughts and ask questions. I will answer as many as I get time for, and other readers may chip in too. Good luck!
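One parting pointer: a natural next step is playing a real file from Python, the equivalent of the gst-launch pipeline at the start of this article. The wrinkle is that `decodebin` only creates its src pad once it has worked out what the stream is, so you finish the linking in a callback connected to its `new-decoded-pad` signal. A sketch, assuming an Ogg file of your own in the current directory:
#!/usr/bin/python
import pygst
pygst.require("0.10")
import gst
import gtk

class Player:
    def __init__(self, filename):
        self.pipeline = gst.Pipeline("player")

        # filesrc ! decodebin ! audioconvert ! alsasink, as before
        self.filesrc = gst.element_factory_make("filesrc", "source")
        self.filesrc.set_property("location", filename)
        self.decode = gst.element_factory_make("decodebin", "decode")
        self.convert = gst.element_factory_make("audioconvert", "convert")
        self.sink = gst.element_factory_make("alsasink", "sink")
        self.pipeline.add(self.filesrc, self.decode, self.convert, self.sink)

        # We can link everything except decodebin's output right away
        self.filesrc.link(self.decode)
        self.convert.link(self.sink)

        # decodebin's src pad appears later, so finish linking in a callback
        self.decode.connect("new-decoded-pad", self.OnNewDecodedPad)
        self.pipeline.set_state(gst.STATE_PLAYING)

    def OnNewDecodedPad(self, dbin, pad, islast):
        # Link the freshly created decodebin pad to the converter
        pad.link(self.convert.get_pad("sink"))

start = Player("jonobacon-beatingheart.ogg")
gtk.main()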
…oh and I haven’t forgotten. I want to see everyone writing at least one tutorial like I said at the beginning of this article. 🙂

New song: Beating Heart
I am proud to announce my brand new song, [Beating Heart](https://www.recreantview.org/blog/?p=60).
This is a thick, heavy, catchy metal tune with plenty of bouncy riffs, pounding double bass drums and a melodic chorus. The song is about the rather grubby subject of war, and looks at how both sides can justify their position. Full lyrics are available with the song.
DOWNLOAD: [Ogg](https://www.recreantview.org/songs/jonobacon-beatingheart.ogg) (9.1MB) [MP3](https://www.recreantview.org/songs/jonobacon-beatingheart.mp3) (6.3MB)

End of an era
You know, it’s odd enough leaving a job, but it’s even stranger when you stop working with people you really respect. Today was my last day working with [Elliot](https://townx.org/blog), as he is training tomorrow and on holiday for my final week.
I have a huge amount of respect for Elliot. Not only is he a talented developer, but he is a really genuine, down to earth guy who is entirely selfless in his work. Elliot is one of those guys that never gets tired of helping people, and goes above and beyond the call of duty in his work at [OpenAdvantage](https://www.openadvantage.org/) and elsewhere.
It has been a privilege to work with him and it is people like him who make Open Source what it is. I really hope to keep in touch with him in the future. 🙂

PRIVATE/CONFIDENTIAL
ATTN:
Dear Sir/M,
I am Mr.Jono Bacon. an Auditor of a BANK OF THE JONO BACON,WOLVERHAMPTON (NFSFG). I have the courage to Crave indulgence for this important business believing that you will never let me down either now or in the future. Some years ago, an English welder /tradesman with the York Trailers company, made intimate relations with a Hairdresser of Yorkshire descent. After naughty crank a child was born of BACON on September 17th 1979. This child was me.
I am looking for a foreigner or native who will stand in as beneficiary, and OPEN a transaction to facilitate the transfer of Amazon products to my household. This is simple, all you have to do is to OPEN a browser in the world and visit [my Amazon wishlist](https://www.amazon.co.uk/gp/registry/14BOKBBNDAFJN). There is no risk at all, and all the paper work for this transaction will be done by me using my position and connections in Amazon. This business transaction is guaranteed.
Please observe the utmost confidentiality, and be rest assured that this transaction would be most profitable for both of
us because I shall require your assistance to invest some of my share in making me happy to help free software. I look forward to your earliest reply.
[Amazon wish list](https://www.amazon.co.uk/gp/registry/14BOKBBNDAFJN)
Yours,
Mr.Jono
Bacon.

All men play on 10
Metal fans, tonight we (Seraphidian) uploaded some demo recordings of our new tunes, recorded live in my home studio. We have uploaded Death Blow, Bludgeon, Into Nothing and Intergression. Go listen to them [here](https://www.myspace.com/seraphidian). On them I sing and play guitar.
Also, last night I wrote a [Jokosher 0.2 update](https://www.jokosher.org/2006/08/22/02-development-progress/).

Getting video off MythTV and onto a DVD
Today Sooz pointed out that our MythTV box did not tape something last night, and it turned out it was 99% full. This is not surprising, as it has full seasons of The West Wing, 24, Sleeper Cell, Prison Break, Charmed (hers), Medium (hers), The Ghost Whisperer (hers) and much more on there. With it so chock full of stuff, I needed to look into getting things off and onto DVD. After successfully burning a few DVDs, I figured I should blog about it, as someone is sure to find my experience useful if they are in the same position.
On our MythTV box we are running version 0.18. I have not upgraded to 0.19, and with 0.20 allegedly around the corner, I figured I would wait until then to upgrade. As such, I cannot use the mucho-mucho-fantastico [MythArchive](https://www.mythtv.org/wiki/index.php/MythArchive) for getting recordings onto DVD. MythArchive solves this entire problem and allows you to select which recordings to burn, complete with DVD menus to boot. So, if you are on 0.18 and don’t have MythArchive, this is what you do.
You essentially have two options for getting video onto a DVD:
* Use [nuvexport](https://svn.forevermore.net/nuvexport/).
* Encode the video yourself and burn to DVD.
`nuvexport` is a utility that converts the `.nuv` files MythTV saves its video in into a format that someone has actually heard of. This problem goes away in later versions of MythTV, which just save recordings as MPEG, but until then you need to convert the `.nuv` file yourself. To do this, download nuvexport; it provides a simple command-line menu to do the conversion.
If you are on MythTV 0.18, you need to grab nuvexport 0.2 from [the archive](https://forevermore.net/files/nuvexport/archive/), as 0.3 does not work correctly with this version of Myth. Although this sounds great, I had problems with nuvexport, and Juski from #mythtv-users informed me that I probably needed a special, super cleverly compiled, custom ffmpeg. I went to get this, and they demand you grab it from Subversion. While groaning and checking it out, I discovered something new to avert such drudgery…
It turns out that if you are running one of the WinTV PVR- series cards, which encode MPEG-2 in hardware, the `.nuv` files MythTV records are already MPEG streams, so you may not need the custom ffmpeg (or nuvexport) at all.
You can use a tool called [tovid](https://tovid.berlios.de/en/index.html) to burn it to DVD, and this excellent little tool also includes support for menus and such. Although I tried to run my videos through tovid, it barfed and told me it could not understand what kind of audio was on the video file. So, it seems that my .nuv files are MPEG with some kind of freaky audio track on there.
To solve this I used [avidemux](https://fixounet.free.fr/avidemux/) to convert it to something that can live on a DVD just fine. To do this, install avidemux, click the *Auto* menu and select *DVD*, then encode the video and feed it into tovid for all your DVD loveliness.
This is not easy, and I suspect MythArchive will resolve all of these problems, but creating and burning DVD content on the normal desktop is notoriously difficult. I really hope this gets easier sometime soon. We have a pretty awesome stack, it’s just the user experience that needs consolidating. 🙂

Raising the bar: Awesome Python and PyGTK tutorials
You know, one of the things I love about the Open Source community is when people demonstrate quality in so many different areas. Every so often someone steps up and demonstrates something they have been working on that is well written, and raises the bar. You can see this in a number of places, but recently I just stumbled across a new one – The [Learning Python](https://www.learningpython.com/) blog.
It is a blog written by someone who wanted to learn Python and, as they learned, wrote articles about the many different subjects they explored along the way. The site boasts a number of really high quality articles such as [writing a WordPress offline blogging tool](https://www.learningpython.com/2006/08/19/wordpress-python-library/), [writing custom widgets with PyGTK](https://www.learningpython.com/2006/07/25/writing-a-custom-widget-using-pygtk/), [building an application with Glade](https://www.learningpython.com/2006/05/30/building-an-application-with-pygtk-and-glade/) and creating a game with PyGame in [three](https://www.learningpython.com/2006/03/12/creating-a-game-in-python-using-pygame-part-one/) [great](https://www.learningpython.com/2006/03/19/creating-a-game-in-python-using-pygame-part-two-creating-a-level/) [parts](https://www.learningpython.com/2006/04/16/creating-a-game-in-python-using-pygame-part-3-adding-the-bad-guys/).
This is exactly what Python needs – awesome documentation in the form of well written, simple tutorials, and I am really pleased to see he is using PyGTK for much of this. I would love to see him/her write some tutorials about using Cairo, GStreamer and Gnonlin. 🙂

My humps, my humps, my humps, my humps…check it out
You know, there are two things that bug me about most sites running WordPress:
* It’s almost impossible to find someone’s email address.
* It’s almost impossible to find their entries and comments RSS feeds.
Surely these are fairly straightforward needs, at least the first one.