The Gov Flood is coming. Source: Flickr user alykat.
Developers are excited this year to be tapping into what they call “the firehose” of Twitter’s real-time information stream. The more data you have access to, they realize, the more useful things you can do with it. But the firehose of tweets may seem like just a trickle as the firehose of government data opens up, potentially joining with Twitter to form a mighty river of information.
Last week, the White House announced an online data dump of thousands of information sets from all cabinet-level departments — everything from Medicare data and workplace injury counts to tire safety ratings. That effort, part of a broad push for transparency, efficiency and interactivity, is being echoed by state and city governments, as well as those of other nations.
All this information is a noisy mess right now, and some of it may not seem especially valuable at first glance. But the use of this data is like a pure-science endeavor. As with Twitter, we probably won’t discover its most valuable applications until after we’ve played around with it a bit. The task for developers is two-fold: to cull value from all these numbers with helpful applications and to establish some uniformity among different data sets in order to keep those applications honest.
Some government data can be made more valuable simply by pushing it through the filters of contemporary NewNet technologies and platforms. What government data lacks in real-time qualities — much of it comes from slow, painstaking research — it makes up for with a density of hard facts that is much richer than the often gossipy Twittersphere. Mash up this information with location data and augmented reality, and it will gain real-time relevancy as users move about its contours in the real world. Visualizations could help put data in context – for example, illustrating public spending trends. Or, imagine:
- looking around a downtown city street and seeing restaurants in terms of their health department and crime records.
- driving in a town you’ve never been to before and being warned by your smartphone that you’re approaching a dangerous intersection – one with the most accidents in the state, for example.
- going house-hunting with an app that shows you trends in home values wherever you look, the lines separating school districts, and the political activities and criminal records of folks in the neighborhood.
- being able to track flight delays in near-real-time across the country.
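A location-aware mashup of the kind imagined above could work, in a minimal sketch, like this: join a government data set (here, hypothetical restaurant health-inspection scores — the records, field names, and scale are all invented for illustration, not any real city's feed) against a user's coordinates and surface nearby records.

```python
import math

# Hypothetical records mimicking a city health-department data set.
# Names, coordinates, and scores are invented for illustration.
RESTAURANTS = [
    {"name": "Pike Diner",   "lat": 47.6097, "lon": -122.3422, "health_score": 92},
    {"name": "Harbor Grill", "lat": 47.6080, "lon": -122.3400, "health_score": 71},
    {"name": "Uptown Cafe",  "lat": 47.6205, "lon": -122.3493, "health_score": 88},
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_restaurants(lat, lon, radius_km=1.0):
    """Return restaurants within radius_km of (lat, lon), best score first."""
    hits = [r for r in RESTAURANTS
            if haversine_km(lat, lon, r["lat"], r["lon"]) <= radius_km]
    return sorted(hits, key=lambda r: r["health_score"], reverse=True)

for r in nearby_restaurants(47.6090, -122.3410):
    print(r["name"], r["health_score"])
```

An augmented-reality overlay would add a compass bearing and a camera view on top, but the core operation is the same spatial filter and sort.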
Beyond such on-the-street applications, there are others. Business owners could analyze traffic flow and hyperlocal unemployment levels at a location where they’re considering setting up shop. On a global scale, imagine the value — to world travelers, investors or corporations that do business globally — of being able to monitor geopolitical and geosocietal unrest as trouble brews.
As all this data is being harnessed, still others are managing the flow of information in the opposite direction, enlisting citizens to further contribute to the data stream through crowdsourcing apps that will lend more real-time characteristics to government data. Startups like CitySourced and SeeClickFix are already doing this locally — making it easy for people to report potholes and other problems to municipal authorities — and Expert Labs, a startup incubator, is doing analogous work at the federal level.
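A citizen report of the kind these apps collect is, at bottom, a small structured record. Here is a minimal sketch of such a payload — the field names are invented for illustration and are not the actual CitySourced or SeeClickFix API:

```python
import json
from datetime import datetime, timezone

# A hypothetical citizen report, modeled loosely on what a civic-reporting
# app might send to a municipal endpoint. Field names are invented.
report = {
    "category": "pothole",
    "description": "Deep pothole in the right lane",
    "lat": 47.6097,
    "lon": -122.3422,
    "reported_at": datetime.now(timezone.utc).isoformat(),
}

# Serialized for submission; a real app would POST this to the city's API.
payload = json.dumps(report)
print(payload)
```

Because each report carries a timestamp and coordinates, a stream of them gives otherwise slow-moving municipal data exactly the real-time, mappable quality described above.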
As local apps grow alongside federal efforts, it will help at some point to reconcile any differences in the data sets — to make sure, for example, that restaurant health records or street intersection accidents are counted the same way in Seattle as they are in Portland, so that fair comparisons can be made and a consistent benchmark used. That standardization task, which might follow or parallel the invention and discovery of useful apps, is just one more noise-filtering job for developers that will become more important, and potentially more lucrative, as the river of information continues to swell.
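The reconciliation task might look like this in a minimal sketch: two cities publish the same kind of record under different field names and scoring conventions (both invented here, including the assumed 50-point violation cap), and a per-city adapter maps each into one common schema so scores can be compared fairly.

```python
# Hypothetical feeds: two cities publish restaurant health data with
# different field names and scales (all invented for illustration).
seattle_rows = [
    {"biz_name": "Pike Diner", "inspection_score": 92},  # 0-100, higher is better
]
portland_rows = [
    {"name": "Rose Cafe", "violation_points": 12},       # points, lower is better
]

def normalize_seattle(row):
    # Already on a 0-100 "higher is better" scale; just rename fields.
    return {"city": "Seattle", "name": row["biz_name"],
            "score": row["inspection_score"]}

def normalize_portland(row):
    # Assume (for illustration) a 50-point cap; invert onto a 0-100 scale.
    return {"city": "Portland", "name": row["name"],
            "score": round(100 * (1 - row["violation_points"] / 50))}

unified = ([normalize_seattle(r) for r in seattle_rows] +
           [normalize_portland(r) for r in portland_rows])

for rec in sorted(unified, key=lambda r: r["score"], reverse=True):
    print(rec["city"], rec["name"], rec["score"])
```

The adapters are where the real work hides: agreeing on what a "score" means across jurisdictions is the standardization problem, and the code merely enforces whatever convention the cities settle on.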