The folks at the New York Times infographics department have found and made great use of another extraordinary dataset:
This map was generated from a new dataset of 125 million buildings created by Microsoft and released as open data through OpenStreetMap. The data was created using CNTK, Microsoft's open-source Cognitive Toolkit machine learning library, which analysed satellite imagery to trace and then polygonize building footprints.
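The pipeline described above has two stages: a vision model traces buildings as pixel masks, and a second step converts each mask into a vector polygon. As a hedged illustration of that second, polygonization step (a toy sketch, not Microsoft's actual method), one simple approach collects the unit-square edges of every filled pixel, cancels the edges shared by two pixels, and chains the surviving boundary edges into an ordered footprint ring:

```python
from collections import defaultdict

def polygonize(mask):
    """Trace the outer boundary of a simply-connected binary mask.

    Each filled pixel contributes its four unit-square edges; edges
    shared by two filled pixels cancel out, leaving only the outline,
    which is then chained into an ordered polygon ring.
    """
    edges = defaultdict(int)
    for r, row in enumerate(mask):
        for c, filled in enumerate(row):
            if filled:
                # corners of the unit square for pixel (row r, col c), as (x, y)
                a, b, d, e = (c, r), (c + 1, r), (c + 1, r + 1), (c, r + 1)
                for edge in [(a, b), (b, d), (d, e), (e, a)]:
                    edges[tuple(sorted(edge))] += 1
    # interior edges were counted twice and drop out here
    boundary = [e for e, n in edges.items() if n == 1]
    # chain boundary edges into an ordered ring of vertices
    adj = defaultdict(list)
    for p, q in boundary:
        adj[p].append(q)
        adj[q].append(p)
    start = boundary[0][0]
    ring, prev, cur = [start], None, start
    while True:
        nxt = next(p for p in adj[cur] if p != prev)
        if nxt == start:
            return ring
        ring.append(nxt)
        prev, cur = cur, nxt
```

Real pipelines also simplify the ring (merging collinear points, squaring corners) and handle holes and touching buildings, none of which this sketch attempts.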
The approach to generating this data brings up issues of accuracy and validation. The New York Times mappers address this to some degree, discussing how they handled inaccurate tracing of building footprints by substituting better data where it was available. This kind of cross-validation across multiple datasets is of course an essential part of geodata modeling, and something we often discussed with students – part of the broader “author your own data” initiative, which encourages interrogating existing data, and generating new data on the ground, as a way to take ownership of the regimes of data production and visualization, and the agency derived therein.
What is perhaps most interesting here is the way computer vision takes on an increasing function in the interpretation – and design – of the world, as computational systems are endowed with the ability to “sense the world and learn to think” (to paraphrase Ben Bratton). What does it mean to design objects (buildings or otherwise) that respond not just to human visual sensibilities, but also to computational visual biases? Will driverless-car vision make us rethink Kevin Lynch’s characteristics of a “well imaged city”? Will building footprints and the figure-ground relationships of the city be transformed through OpenStreetMap’s all-seeing – and now all-drawing – eye? What do urban features – buildings, parks, cars, benches, trees – look like to an AI vision system, and will these non-human “aesthetics” have any impact on how objects are designed?
On a related note, Derek Watkins of the New York Times has another great post discussing the challenges and computational approaches involved in displaying super-high-resolution simulations of Antarctic ice flows with web motion graphics: