What is "success" with post-disaster crowdsourcing?
At a recent workshop I gave on webGIS, after giving an overview of some recent uses of crowdsourced data and VGI in disasters (fire in San Diego, earthquake in Christchurch, Ushahidi everywhere...), I was asked about the success of these projects. Who used the data? How? (And who funded these websites? But that is another story.) I had only the vaguest of answers. Here is a thoughtful critique on this subject by Paul Currion on MobileActive.org. He examines the use of the Ushahidi project in Haiti. Paul is an aid worker who has spent the last 10 years working on the use of ICTs in large-scale emergencies. He asks whether crowdsourcing adds significant value to responding to humanitarian emergencies, arguing that merely increasing the quantity of information in the wake of a large-scale emergency may be counterproductive. Why? Because aid workers need clear answers, not a fire-hose of information. Information from the crowd needs to be curated, organized, and targeted for response. He makes the point that since crowdsourced data have to be sorted through, can be biased, and can be temporary, aid agencies will have to carry out exactly the same needs assessments they would have done without the crowdsourced information.
Where and when do crowdsourced data add value to a situation or project? How can we effectively deal with the bias that naturally comes with such data? We deal with this all the time in my smaller web-related projects: oakmapper and snamp, for example. What is the future role of the web in adaptive forest management? How do these new collaborative and extensive tools help us make important decisions about natural resources management in often contentious contexts? More to think about.