
Thoughts on OSM design, and looking forward and back

I really liked something I read in The New New Thing ( http://www.amazon.com/New-Thing-Silicon-Valley-Story/dp/0140296468/ref=ntt_at… ) about Jim Clark ( http://en.wikipedia.org/wiki/James_H._Clark ): whenever the author, Michael Lewis, asked about the history of his companies, Clark would say "that's boring" and "That's the past. I really don't give a shit about the past". ( http://books.google.com/books?id=UJw3i9_ZqQkC&lpg=PP1&dq=the%20new%20new%20th…&q=shit%20about%20the%20past&f=false )

I think we're locked between three schools of thought in OSM right now.

Right up front we have the school of thought that everything is perfect the way it is: that uservoice is some kind of inherently crappy system (see the uservoice ideas page at http://osm.uservoice.com/ ), that we shouldn't allow people to use tools which make fixing the map easier (see @chilly on twitter), and that people are inherently stupid so there should be a barrier to entry to editing OSM because it's complicated. This school is essentially still living in 1991, and I'll call it the Game Haters: everything is wrong, and even talking about it is wrong.

In the middle we have a bunch of thought on how the site should or shouldn't be: legitimate questions about putting the map or help up front, or using OSB or uservoice, or some new system, or something. But nobody can agree with anyone else, and anyone who actually does anything comes under attack because they could never encompass everyone's idea of what the design or UI should be. Let's call this school the Player Haters: the game is there and we can play by the rules, but we don't like it when someone plays better.

Lastly we have a school which is looking forward and willing to throw out ideas and try them. They don’t instantly hate everything or dismiss it because they don’t personally like it. There is room in this school to understand that there are other schools out there, that what works for them might not work for someone else.

At points like these, I think we have to decide, through some debate, where the project is going to go. If we want to keep the tools hard to use and subject people to PL1 and trac, then that's a legitimate point of view. If we can stand some innovation like group 2, that's cool too. Or maybe we're able to just move on and keep innovating.

If we look back, we've actually mostly not given a shit about the past. We threw out segments, threw out entire codebases (like 0.1, 0.2 and so on) in the search for something better. We in OpenStreetMap tend to innovate. That's not to say it isn't messy – it's a horribly messy process from a 'consensus' and community point of view, because often there isn't any consensus on anything, ever.

It's that central freedom not to conform that is the most important, beautiful and gratifying thing in the project, but sometimes, like now with the design, it holds us back.

I don’t want the entire design debate to be about uservoice, but it’s a great example that exposes the extremes of thought. Going through the extremes:

* Some people *literally* don’t want any feedback.
* Some want feedback, but in trac or hidden in some other horrible system
* Some want feedback that’s easy, but just not on the front page
* Some want feedback that’s easy and upfront but not too exposed
* Some want feedback that's easy and exposed to the most people (like having maplint or keepright or OSB switched on the front page by default)

Will we ever get a consensus through debate? I highly doubt it.

For the record – yet again – I’m not proposing uservoice as the final solution. I’m not proposing we use it for map bugs. But, it is a brilliant tool for many sites and it’s provocative and brings up cool ideas of what we can do in the future with something similar.

It's worth also thinking about where the schools of thought communicate. Mostly the negative ones are on the lists, and the positive ones have been in uservoice and in the comments on the opengeodata blog posts. Why is this? That's hard to answer. I think it might simply be that there are a lot of barriers to entry on the lists – flames and baggage that a newbie doesn't want to deal with – because *they're a different group of people*.

The project can exist with these different schools of thought.

When I think back to the early years of OSM I'm struck remembering how much time I spent fucking around with SQL, doing the big horrible jobs that nobody wants to do. Our sysadmins today do most of this awesome work, and probably even enjoy it like I did. We need that skill set and school of thought to make the project run.

In other words – we have people who contribute in all sorts of ways.

At the same time, we're growing into the realm of a new school of thought. We're increasingly meeting people who can contribute enormously, just not in the way we're used to. Basically it's a question of time, and how much mapping/software/community/etc you can contribute per unit time if you're a random member of the public new to the project:

* If you want to contribute a half-decade to OSM you can, and many have
* If you want to contribute a year to OSM you can, and get a lot out but you need that time
* If you want to contribute a month, that’s reasonable
* If you want to contribute a week, you can just about do it, probably with some pointers
* If you want to contribute an hour you need lots of help, like a mapping party
* If you want to contribute a minute, you’re screwed

Everyone in OSM has basically been contributing for the kinds of extended periods above – not minutes or hours. Many see someone contributing so little as wrong or pointless. I say just the opposite: the people who spend minutes or hours disappear because we just don't welcome them.

It should be perfectly possible to report an error in OSM in one minute, and have someone who's prepared to spend a lot of time on it fix it. But so many people fight that idea. There's nothing to be afraid of – we're just increasing the size and reach of the community, and that's a good thing.

To think of it another way, consider a scale-free distribution of OSM contributions, because that's what it looks like: we have a very few people spending 24/7 on OSM, a few people spending hours on the site a day, and then *lots* of people spending 5 minutes a day. What we should be able to do is connect those groups. If we have 60 people spending one minute each to report a bug, say textually, on OSM, then there are plenty of people in OSM willing to spend an hour going through and doing the actual editing. And the project would be infinitely stronger for it.
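To make that arithmetic concrete, here's a toy sketch in Python – the contributor counts are invented for illustration, not measured from OSM:

    # Toy model (invented numbers): total daily work from a scale-free-ish
    # distribution of contributors. The casual tail can outweigh the core.
    contributors = [
        (2, 24 * 60),  # a couple of people effectively on it 24/7 (minutes/day)
        (50, 120),     # a few dozen spending a couple of hours a day
        (3000, 5),     # *lots* of people spending 5 minutes a day
    ]

    for count, minutes in contributors:
        print(f"{count:>5} people x {minutes:>4} min/day = {count * minutes:>6} minutes/day")

    total = sum(c * m for c, m in contributors)
    print(f"total: {total} minutes/day (~{total / 60:.0f} hours)")

The point isn't the exact numbers: it's that the biggest blob of total effort can come from the people spending the least individual time, if we let them in.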

It's not like this is something new – it's exactly what Map Maker and Waze do in their own way. We can do it better though. Our simple attempts like keepright, OSB and the dupe_nodes stuff point at how big these kinds of feedback could be with more polish.

And we have to be honest about how bad things are. I know when I say that OSM is crap, PL1 is crap and so on, many of you get all offended… but it's just the reality of it. Do we really need to run usability screencasts with newbies like Wikipedia did – get people in and record them trying to fix a street? We could, but if you put yourself in a newbie's shoes for a second you can already see it.

We need new thinking and a fresh push on design and usability. That might not come from within the existing community – almost by definition, if it's going to attract new and different kinds of people, which it needs to if we're going to scale to the next level of contribution.

Basically it comes down to trusting someone to do this stuff and not giving them too much crap for actually getting it done. You might recall we made many similar arguments during the license change process. Many think you can have a valid legal opinion without the nuisance of an actual law degree, in the same way you can write kick-ass C code without a degree in computer science. Everyone has legal opinions the same way everyone has design opinions, but they're rarely right.

Back to what we need to fix – the design, the editor experience, the logo (this spontaneously came up on uservoice again)… they all have fundamental problems. Just because they've worked for us old timers – we like the existing logo and we've learnt to put up with PL1's foibles – doesn't mean they're the right things to carry forward.

I know Matt takes it personally that the logo is anything other than perfect, and Richard takes it personally that Potlatch is crap for newbies… but that's just the fact of the matter, as I constantly hear from designers, newbies and so many others. I don't care about personal feelings on those particular topics nearly as much as it physically pains me to think about all the people we turn away every single day because of them.

We have, IIRC, about a 70% drop-off rate: people who create an account and never do anything else. I've heard the school of thought that says "fuck them, they won't contribute anyway", but I have to disagree. A simple, prominent feedback tab to report map bugs or feature requests is the simplest possible thing we can add, and I promise it will lead to a huge spike in contributions. I'm willing to bet people money on that.

Most maps in the world try their best to hide their bugs, like closed software. We should be bold and expose ours so they're fixed faster.

Lastly I want to talk about implementation.

It's clear after years of chatter that the community is the wrong place to innovate on design, and probably on editing too. The model of 'wait for someone to do it' works well for a bunch of things, but not everything. How did I get the design done? I paid $70 to a really great designer and HTML coder in Peru, who I worked with over Skype to come up with the straw-man design. For $70 (at $7/hour) I got more done than the last 1-2 years of design in OSM. That should be celebrated, not attacked because the current site is supposedly perfect (really, it isn't) or because I didn't warn everybody (JFDI is supposed to be a virtue here).

There are talented Flash coders, designers and more who will even work for free to help us, but they just can't put up with people pissing all over their work (which is what usually happens on these lists), or bizarre tool chains, or having to refactor crappy code. They don't have the thick skin or the time for it. I think we need to find ways to work with them, or pay them, to work on this stuff. And that involves putting a buffer between the old timers in the community and the people who want to move it forward.

Or maybe there’s a better way, you tell me?

Case in point: PL2. We have no idea when it's coming, whether it will work, or what. What I personally want to see is a community of people behind building the thing, like there is behind the rails codebase or even JOSM. But everyone's so afraid of pissing off Richard, or doesn't have the time to work it all out, that we're not moving forward like we should be.

Here's a radical solution: Flash programmers are $10-$20/hour on oDesk.com and elsewhere (and it will be quicker and cheaper than you might think). Why don't we club together to get it finished? I'll come back to how in a second – but just think for a minute how many people are turned off because of the editor. If we can bring in just 1% more people a day with a better user experience and editor, then compounded over time we have a huge gain in map quality, community and everything else.
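As a back-of-envelope check on that compounding claim (the 1% a day is my hypothetical, not a measured figure):

    # If a better editor and UX grew the contributor base by 1% a day,
    # a year of compounding is roughly a 38x gain. Illustrative only.
    daily_growth = 1.01
    print(f"after 365 days: {daily_growth ** 365:.1f}x")  # ~37.8x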

Back to paying someone. Is it the best solution? No. Will the result be perfect? No. Is it the best way to get open source code built? No, but I point out that most of the Linux kernel is now built by paid employees. Would Richard like to be paid to work on PL? No, I’ve tried a bunch of times. Would someone else in the community? Maybe. Should we try something again like bounties? Maybe that too. But just sitting around with the status quo on all these issues isn’t getting anywhere fast.

At this point I will get a load of flames that Richard is awesome, he’s spent loads of time on the project and all that. I agree. But, I take that as a given as I do with anyone in the project. We’re all here giving time, love and effort. We shouldn’t have to preface every criticism with three paragraphs about how we’re all so great.

Lastly – I'm saying all this to promote a debate and discussion. Paying someone is just one option. Do I want to do it tomorrow? No, but it does look interesting. And, by the way, paying doesn't just get the software out the door. In my experience you usually also get the paid coder interested in helping long term (not always, but often, if you're good at picking who does the work), and in this case we'd also be able to pull in lots of professional Flash coders, who will expect the code to be laid out a certain way, with such and such a toolchain, and all that.

Let the flaming commence.

Crisis Mapping Conference in October

Mikel visited your city and now an earthquake-tornado-flood-tsunami has hit you.

Then Godzilla came to finish you off.

Who you gonna call?

Crisis Mappers!

Now, with Crisis Mapping: The Conference. It looks like it should be pretty interesting post-Haiti and all the awesome work done there, and every crisis mapper gets a free Jeep Wrangler:

2nd INTERNATIONAL CONFERENCE ON CRISIS MAPPING (ICCM 2010):
HAITI AND BEYOND

Leveraging mobile platforms, computational linguistics, geospatial technologies, and visual analytics to power effective early warning for rapid response to complex humanitarian emergencies.

BOSTON, OCTOBER 1-3, 2010

OSM Routing

Interesting post here: http://blog.telemapics.com/?p=245 on using OSM as an outsider

“For many of us, it seems so difficult to discover things about OSM, its data, and the use of the data. Maybe I just need to spend more time reading their Wiki (guilty). However, I admit that I am confused about OSM licensing practices and liability issues. Every time that I start to research these issues, I get a headache. While I think I understand the limitations of the license to use OSM data and why these “carve-outs” are necessary, I find it difficult to understand how to use the data to any commercial advantage and wonder if that will limit the usefulness of OSM’s contribution.”

Of course there is no confusion on using proprietary data – it’s very clear how expensive and wrong it is, and how they accept no liability.

Essentially Crappy Technology

(Note: another thing I wrote a while ago)

In July 2007 at the O'Reilly Open Source Convention in Portland, Oregon, Tim O'Reilly (publisher, conference organiser and seer) and Eben Moglen (general counsel to the Free Software Foundation) banged heads on the subject of Web 2.0 and the future of Free Software when nobody cares about software any more. Why would you not care? Because you're using online spreadsheets for free instead – software as a service. Various business models (Microsoft and Excel, say) are currently based on software as a product.

At the beginning of their exchange Tim pointed to Eben's disbelief in the 'Web 2.0 era'. Eben responded:

“… Look the web is a data storage system, distributed and powerful because it’s distributed. Full of essentially crappy technology which will be replaced.”

Wikis are definitely essentially crappy technology.

But do I mean that's a bad thing? On the whole, no. The meaning behind the phrase lends itself to simplicity. If you look at the standards and protocols that run the internet, they're generally pragmatic simplicity enshrined – the quickest and simplest thing that will 'just work'. They were built on the fly to explore what was possible, and in many cases aspects of widespread protocols taken as standard today were mere design experiments or accidents, perhaps even the result of a chat over a pint of beer.

This was all in part by design as most of the protocols are in layers which build upon each other. They can purposefully be small parts of a chain to accomplish things, and in many ways act like swiss-army knives in that they have multiple uses. As an example, consider someone delivering you a package. It doesn’t matter what road the firm drives down to deliver it, or what the model of truck is. Probably the government provides a base road network, the delivery firm chooses a truck and the driver chooses a route. And you, of course, choose the package.

The internet works sort of similarly to that. Someone owns some cables, someone owns boxes that send magic signals down those cables, and you choose the web page being delivered. But it works deeper down too. The magical signals are layered: someone designed a way for two computers to talk to each other over one cable, then someone else designed a way to send messages between any number of computers, built on top of the way each pair of computers talks to each other. Then someone else uses this new protocol to ask questions of remote computers in a special new format – a new protocol. Questions like 'do I have any email?'
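Here's a toy sketch of that layering with completely made-up frame formats – nothing below is real IP or TCP, it just shows how each layer adds its own header and carries the layer above as opaque payload:

    # Hypothetical three-layer stack: link -> network -> application.
    def link_frame(payload: bytes) -> bytes:
        # Two computers, one cable: just mark the start of a frame.
        return b"LINK|" + payload

    def network_packet(payload: bytes, dest: str) -> bytes:
        # Any number of computers: add an address, carry the payload untouched.
        return f"NET to={dest}|".encode() + payload

    def app_message(question: str) -> bytes:
        # The new protocol on top: a question for a remote computer.
        return f"APP {question}".encode()

    wire = link_frame(network_packet(app_message("do I have any email?"), dest="mailhost"))
    print(wire)  # b'LINK|NET to=mailhost|APP do I have any email?'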

Wikis evolved like that. The simplest possible thing you can do is let someone save a text document onto a server. Then you let people view that document. Then you do things like say "if there is a line surrounded by equals signs, then it's probably a title", and the software magically turns =My Nice Title= into a large, bold title. It's called a markup language.
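As a minimal sketch of that title rule (real wiki engines are far more elaborate, but this is the essence):

    import re

    def render_line(line: str) -> str:
        # A line wrapped in equals signs is probably a title.
        match = re.fullmatch(r"=(.+)=", line.strip())
        if match:
            return f"<h1>{match.group(1).strip()}</h1>"
        return f"<p>{line}</p>"

    print(render_line("=My Nice Title="))  # <h1>My Nice Title</h1>
    print(render_line("Plain old text"))   # <p>Plain old text</p>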

Magic. But then, what if I delete your title, accidentally or on purpose? Then the system needs to store each edit as it happens, along with who made it, when, and why. This way you can browse back over edits and see the evolution of something like a page in Wikipedia.
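A minimal sketch of that idea – never overwrite, always append the who, when and why (real wiki engines keep this in a database, but the shape is the same):

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Revision:
        author: str          # who
        timestamp: datetime  # when
        comment: str         # why
        text: str            # the full page text after this edit

    @dataclass
    class Page:
        revisions: list = field(default_factory=list)

        def edit(self, author: str, comment: str, text: str) -> None:
            # Append rather than overwrite, so history can be replayed.
            self.revisions.append(Revision(author, datetime.now(), comment, text))

        def current(self) -> str:
            return self.revisions[-1].text if self.revisions else ""

    page = Page()
    page.edit("alice", "first draft", "=OpenStreetMap=\nA free wiki world map.")
    page.edit("bob", "tidied wording", "=OpenStreetMap=\nA free, editable map of the world.")
    print(len(page.revisions), "revisions; current text:", page.current())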

(Figure: Wikipedia revision history for the OpenStreetMap article – the who, what, why and when.)

If you play with wikis today, you will find them like a lot of the tools that run the internet – essentially crappy. The basic application hasn't changed in years for text wikis, Wikipedia included. MediaWiki, the software behind Wikipedia, hasn't changed much in many years – in some ways the innovation stopped.

Don't get me wrong, development goes on. A lot of effort has been put into scaling, that is, making it go from handling 3 users and a cat to 20 million readers a day. But it's still the same Wikipedia. If you use many other wikis, or things like Google Docs, you can use WYSIWYG tools to add bold text, titles and so on. No strange markup. But Wikipedia hasn't moved with the times. Whether it needs to is debatable.

If you and I edit the same article at the same time then we may make conflicting edits. If this happens, the system effectively gives up and shows you the differences. That's essentially crappy, but it works.
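The 'show you the differences' step can be as simple as a text diff; here's a sketch using Python's standard difflib, with two made-up edits of the same base sentence:

    import difflib

    # Two people edited the same base revision in different ways.
    yours = ["The Eiffel Tower is 324 metres (1,063 ft) tall."]
    mine  = ["The Eiffel Tower in Paris is 324 metres tall."]

    # The wiki can't merge these automatically, so it shows the conflict.
    for line in difflib.unified_diff(yours, mine, fromfile="your edit",
                                     tofile="my edit", lineterm=""):
        print(line)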

It will never work

(Note: this is another thing I wrote a while ago. I will post more of them just to get them out the door.)

I was in a hospital once and noticed a sign encouraging doctors and nurses to use an “evidence-based approach” to care. It’s a bit scary to think they might be using dice to decide how to treat your broken leg, but hey, it’s a possibility.

One of the things that advocates of wikis get asked repeatedly is whether a wiki is trustworthy. Surely if anyone can edit, then they will add nonsense or get things wrong? It's almost not worth debating the point if you take an evidence-based approach: Wikipedia works.

Honest, people use it every day to look up all kinds of things.

You can be in an auditorium and get this question. Is it trustworthy someone asks, and a sea of faces look slightly panicked – maybe it really is turtles all the way down?

But you can easily flip it. Ask the audience how many people use Wikipedia, say, once a week? A cornfield of hands goes up. Then ask how many use Encarta? You might get some people to own up to that one – and it's not like Encarta is a slouch; we're talking about the people who broke Encyclopaedia Britannica and their business model. (Note: this was written when Encarta was alive.)

Which is not to say wikis are 100% trustworthy. But they're not even designed to be. Britannica never was 100% trustworthy (and still isn't), and nor was Encarta. There are always mistakes. If you really want to know, with accuracy, how tall the Eiffel Tower is, then go climb it. It's a lot cheaper to stay at home and get the information for free from Wikipedia though, with say 99.999% certainty or whatever the figure turns out to be.

(Figure: a price/quality graph – versions of the same information at different price and quality points.)

This is what's called 'versioned information'. You have one version of the information, which has some 'quality' and zero 'price', from Wikipedia. (I put price and quality in quotes as it's a bit of a hand-wavy argument: a guess really is zero price, but Wikipedia requires a computer, internet, electricity, fingers… And of course 'quality' is a relative term here.) Then you get other versions at other price/quality positions.

You have a higher-price version sitting in a book on a shelf at your local library or book store – higher price because you have to go to the library or buy the book. That higher price might reflect higher accuracy. In a market of information there should be many choices on the price/accuracy graph, and it's up to you to choose which one you want.

To go through the example: a guess is free – you've seen the Eiffel Tower in pictures so you know it's roughly a couple of hundred metres tall. Next, ask your 10 friends when you see them for lunch and average their opinions. Next, ask Wikipedia. Next, for all we know the Britannica article was written by the guy that built the tower, but then you have to go find a library or buy a book. And last of all you can fly to Paris, which allows you to be as accurate as you want, but comes with an associated cost.

It turns out that lots of people choose Wikipedia. The small increase in quality, if it exists, from buying or reading a paid-for encyclopaedia (or flying to Paris) generally does not appear to be worth the large increase in price. Now imagine the Y axis on the above graph being the number of page views or attempts. There will be lots of guesses, a few people asking friends, a huge number viewing Wikipedia, and basically nobody relying on Britannica or actually surveying the tower.

This is what happens, more broadly, out in the real world. Information is versioned on quality, timeliness, availability, presentation and so on despite often being the same information. Stock quotes are more expensive if they’re live than 20 minutes old. Books are more expensive if they’re new (hardback) than old (paperback).

PS It's 324 metres, or so says the notoriously inaccurate Wikipedia (which anyone can edit!)

Herding Cats

(Note: another thing I wrote a while ago)

The book 'Producing Open Source Software' by Karl Fogel has a wonderful cover (pictured). It shows many smaller arrows of varying sizes all pointing to the right, and a larger arrow in yellow that, it seems to me, is the implied effect. If you have all your horses pulling in one direction, you can move a house, it seems to say.

The image comes to me like that because it reminds me a lot of various diagrams you would make in physics class. If the ball hits the other ball at such and such an angle, where does the shoe drop? That kind of thing.

It’s a beautiful image, and it’s the one I had of Free/Open software before I got involved in it. All these people pulling in the same direction to make an encyclopaedia, or an operating system. But I tell you, it’s very wrong until you hit maturity.

First, the size of the arrows. The distribution of effort in collaborative projects can vary an awful lot. When a project starts its effort is often distributed like this:

 

 

 

 Stage 1: One man band Effort / Frequency graph

What the hell does that mean? It means that one person is putting a lot of effort in. It's low on the frequency scale (there's only one person) and high on the effort scale, because they're doing all the coding, making the website work, replying to email, battling rival projects with dumb ideas, coming up with better ideas and so on. It's the most critical phase of a project's life, and it's why so many projects are synonymous with the people who started them – or at least with the myth of who started them, as successful projects attract people willing to claim credit for things that went right. Linux, Linus. Wikipedia, Jimbo Wales. Mono.NET, Miguel de Icaza. Debian, Ian Murdock.

There are thousands of smaller projects still effectively run by the guys who started them: file utilities, mouse drivers, deployment tools, image tools and so on. When they get big, in many cases they retain their founders, because if they don't they tend to fuck up. Committees are not good at running large distributed projects which rely on people's free(!) time. Benevolent dictators tend to work out well, and when they don't, the project forks or fails and a hundred other flowers bloom.

Think about Debian vs. Ubuntu in this context. And if you have no idea what that means, it’s not important.

The next stage for these projects hinges a lot on the temperament of the guy running it. Often they won't accept new ideas, or they're only doing it for publicity (which behind the scenes often means money). Or they get bored, or they want to control everything, or they don't have the time. There are millions of reasons. When OSM started there were two existing similar projects I knew of. One was geowiki.com, started by Richard Fairhurst and friends. It was (and is) a beautiful site, arguably took a lot of effort to make, and didn't have any community participation (mailing lists and so on). It stagnated because nobody could get involved and all the data, code and tools were locked up.

Second, there is free-map.org.uk which is run to this day by Nick Whitelegg. It is very focused on the walker. Here in England we have a large community of people who walk all over the place on rights of way (which often cross private land). Free-map catered to those users and Nick throws a lot of effort at it, but it remains mostly Nick because of its focus. Nick has thousands of great ideas but only Nick to implement them. In his spare time.

When OSM started it was light years behind these projects but like them it essentially rejected the geo-dogma. Today, Nick and Richard are key contributors to OSM.

After the first ever OSM talk, freemap was set up. It was very top down, using all the latest standards and it got basically nowhere despite good intentions because it concentrated on technology, not community.

  • Community is everything. OSM only ever sought to make maps. Not the prettiest maps. Not the best maps. Not the fastest maps. Not even maps that worked in 3 out of 24 hours. What it did (and does) is make map making as simple as possible both technically and socially.
  • The second cornerstone was getting out of the way. All the code was open. All the data was open. It was as open as you could possibly make it, barring privacy constraints.

These two things were the key, and what separated OSM from the other projects. They're fairly easy to posit and talk about and wave around. The third pillar is not.

  • Only let the people doing actual work talk. Everyone else can bugger off.

OSM for a good two years was full of people with brilliant designs for space ships. I love space ships. I love flying cars. I am still disappointed every time I see a DeLorean that it doesn’t vertically take off. But a space ship requires highly paid experts from NASA to come build it. And technicians to make sure it’s ready to fly. And a ground crew in case there’s a problem. These are all wonderful things, but rarely do you get them for free in someone’s spare time.

OSM on the other hand was Buzz Lightyear’s cardboard spaceship. It looked like a space ship. If you threw it and squinted for a few seconds before turning away then it kind of flew. It was held together with string and glue. It was a dog. But, crucially, it worked. Anyone could contribute to it, anyone could have the data. It was the only project with a simple API that anyone could talk to.

Continually there were calls to strap jet engines on (this was a mooted feature for the Space Shuttle by the way) or inter-galactic warp drive. Mostly these people were told to bugger off unless they designed, built and strapped on the rockets.

Why such a harsh attitude? Because building something like OSM is hard. It requires love and time and attention over long periods. Therefore nerves can fray when a space cadet lands next to your cardboard ship and tells you it's all wrong, because they read it in a book somewhere and it says so there. If you have 3 space cadets arguing over 3 different warp drive designs, make a sail and tell them all to bugger off.

So back to the arrows.

At this stage, a project still has a main leader but it has a bunch of users and the odd person adding significant amounts of code. OSM was slightly strange in this regard as the users are in a sense much more important than coders. They're out there in the rain on a Sunday, for free, mapping some awful housing estate. Without them out there, you have no map. Therefore the simplest thing that could work is OK, so long as you have people out there mapping.

(Figure – Stage 2: Lumpy effort/frequency graph.)

And so we arrive at stage 2. The founder remains on the left, being one person throwing a lot of time at it. Next there is a lump of a few people doing key things. In OSM's case this was a guy called Imi who made an amazing map editor called JOSM, a guy called Andy, and one called 80n (yes, really) who did a lot of mapping and started to put together OSM's ontology system. (It doesn't have an ontology, but it'll be our metaphor for now.) Further up the frequency scale there are many people doing small things: mapping some streets, fixing the odd bug, publicising the project.

Notice that the area of each blob on the graph represents, roughly, the amount of work going in per unit time. Notice that the two right-hand blobs are larger in area than the one on the left. They are, in total, putting more work in.

There is a downside – the work is lumpy on the right. Someone maps an entire town and then disappears. Someone writes an entire editing suite and then gets a girlfriend. So the bars are shifting in all directions as time goes on. Who knows what the project will look like next month?

(Figure – Stage 3: The effort/frequency graph smoothes out to a curve.)

Eventually the distribution has so many people in it that it smoothes out.

While not mathematically precise, this graph is often called the long tail. The effort of a few full-time individuals on the left is matched by thousands of contributors spending a few minutes a day. OSM isn't there yet, but it's getting close. That leads to a specific set of arrow lengths: many short ones and a few bigger ones.

What does this say about direction, too? Revisiting our stages, I think the first two look something like this:

(Figure: arrow diagrams for stages 1 and 2, with arrows of varying sizes and directions.)
Yes, some of the lines are pointing backward.

So in stage one you have your one founder. Stage two attracts people who think the whole thing is crazy, or who start rival projects – hence they're pointing the other way. You have people with different focuses and different ideas, so these are at an angle. They're different sizes because they're throwing varying amounts of effort in.

Last, as the graph smoothes out and the ideas and technologies are accepted you’ll see something much more like the cover of that book.

Nice blog post by someone getting started with OSM

Over here: http://frogplate.net/blog/amateur-cartographer.html

“Now Google Maps, and my Navman SatNav have a typo in the name of the street in which I live, and so on reaching home I checked the OpenStreetMap site to see if they had got it right. Initially I was disappointed to find that, although my street was on the map, it and a number of other local roads weren’t named. And then I realised that this was my opportunity to use my lunchtime walks to make a small contribution to the common effort. The next lunchtime found me setting off with a screen print of OpenStreetMap.org, a pen and a GPS receiver.”