You know what, I love being a community manager. I love the challenges, I love the opportunities, and I love the diversity of the work. There are of course some frustrating elements, and one of these is the preconceived notions that some people have about this kind of work, and to make matters worse, the things that some community managers do to compound the situation. One such notion, for which no specific community manager is at fault but which has become something of an endemic refrain, is that *community is vastly free-form and immeasurable*.
Bollocks.
Don’t get me wrong, community is very much a *soft science*. It is about relationships, it is about connections, and most importantly it is about trust. When there are no relationships, no connections and no trust, community managers tend to start looking for jobs as taxi drivers.
A soft science, though, does not mean there is an excuse to assume the world is a big analogue blur that we can only measure and assess by licking a finger and lifting it to the breeze. A key trick to being an effective community leader is to discover the *mechanics* of your community, and to understand how to assess and measure them.
When Daniel and Jorge came onto my team, the first thing I said to each of them on day one was that I always wanted them to explore two key areas as part of their work – developing *strategy* and the *mechanics* behind that strategy. This is core to everything we do – we have a strategic plan, goals, deadlines, and a range of graphs measuring our work that would look really freaking awesome in the war room from WarGames. Alas, about the best we have is Jorge's second flat-screen. We use these metrics to assess our work and the health of the community.
A typical example is the upstream report in Launchpad, which we are readying for beta right now – I will have more details soon when it is complete. The upstream report shows a bunch of upstream projects, the number of open bugs, the number of bugs with upstream activity (meaning the bug is likely an upstream bug), and the number of bugs with upstream watches (a known upstream bug linked to the Ubuntu bug). This gives us useful data on which upstreams need the most focus. We are currently adding some features to the report for colour coding, sorting the results, and removing dupes. Bugs are a metric, they are a *mechanic* – they are the nuts and bolts of the software development process, and we measure them closely.
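To make the idea of bug metrics concrete, here is a minimal sketch of the kind of tally such a report might compute – the data, field names and structure here are entirely hypothetical and are not the actual Launchpad schema or API, just an illustration of counting open bugs, upstream activity and upstream watches per project:

```python
from collections import Counter

# Hypothetical bug records -- the real Launchpad data model differs.
bugs = [
    {"project": "gnome-panel", "status": "open",  "upstream_activity": True,  "upstream_watch": True},
    {"project": "gnome-panel", "status": "open",  "upstream_activity": True,  "upstream_watch": False},
    {"project": "firefox",     "status": "open",  "upstream_activity": False, "upstream_watch": False},
    {"project": "firefox",     "status": "fixed", "upstream_activity": True,  "upstream_watch": True},
]

def upstream_report(bugs):
    """Tally open bugs, upstream activity and upstream watches per project."""
    report = {}
    for bug in bugs:
        if bug["status"] != "open":
            continue  # only open bugs count towards the report
        row = report.setdefault(bug["project"], Counter())
        row["open"] += 1
        row["activity"] += bug["upstream_activity"]
        row["watch"] += bug["upstream_watch"]
    # Sort so the projects with the most open bugs come first --
    # those are the upstreams that need the most focus.
    return sorted(report.items(), key=lambda item: item[1]["open"], reverse=True)

for project, row in upstream_report(bugs):
    print(project, dict(row))
```

The gap between "open" and "watch" in a row is the interesting number: open bugs with no upstream watch are the ones nobody has yet linked to an upstream report.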
A huge amount of community management is the soft science, but I urge everyone out there to think about the mechanics. Think about the things you can assess, the things you can measure, and use them as a means to identify whether your community is healthy, growing, and effective in the ways you want it to be.