The folks at the New York Times infographics department have found and made great use of another extraordinary dataset:
This map was generated from a new dataset of 125 million buildings created by Microsoft and released as open data on OpenStreetMap. The data was created using CNTK, Microsoft's open-source Cognitive Toolkit, a machine learning library that analyzed satellite imagery to trace and then polygonize building footprints.
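As a toy illustration of the "polygonize" step (not Microsoft's actual pipeline), one standard move when turning a jagged raster trace into a clean footprint is simplifying the boundary with the Ramer-Douglas-Peucker algorithm. A minimal pure-Python sketch:

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, tol):
    """Simplify a polyline, keeping only points farther than tol from the chord."""
    if len(points) < 3:
        return points
    # find the point farthest from the chord between the two endpoints
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tol:
        # everything is close to the chord: collapse to the endpoints
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right

# A jagged traced edge collapses to its two endpoints at a coarse tolerance.
trace = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.05), (4, 0)]
print(douglas_peucker(trace, 0.5))  # -> [(0, 0), (4, 0)]
```

At a fine tolerance the same trace is kept intact, so the parameter directly trades footprint fidelity against polygon complexity.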
The approach to generating this data raises issues of accuracy and validation. The NYTimes mappers address this to some degree, discussing how they handled inaccurately traced building footprints by replacing areas where better data was available. This kind of cross-validation across multiple datasets is of course an essential part of geodata modeling, and something we often discussed with students as part of the broader “author your own data” initiative, which encourages the interrogation of existing data, and the generation of new data on the ground, as a way to take ownership of the regimes of data production and visualization, and the agency derived therein.
What is perhaps most interesting here is the way computer vision takes on an increasing role in the interpretation – and design – of the world, as computational systems are endowed with the ability to “sense the world and learn to think” (to paraphrase Ben Bratton). What does it mean to design objects (buildings or otherwise) that respond not just to human visual sensibilities, but also to computational visual biases? Will driverless-car vision make us rethink Kevin Lynch’s characteristics of a “well imaged city”? Will building footprints and the figure-ground relationships of the city be transformed through OpenStreetMap’s all-seeing, and now all-drawing, eye? What do urban features – buildings, parks, cars, benches, trees, etc. – look like to an AI vision system, and will these non-human “aesthetics” have any impact on how objects are designed?
On another note, here is another great post by Derek Watkins of the NYTimes, discussing the challenges and computational approaches involved in displaying super-high-resolution simulations of Antarctic ice flows with web motion graphics:
Theodore Spyropoulos and his team at the DRL are doing some incredible research on the potential of soft materiality and self-assembling systems:
In searching for a good geometric system for the tiling of the pillows, we have become very interested in pentagon tiling. This system allows for a great deal of variation in the pattern while using the same shape. While we are not entirely constrained to use only one shape, this will make the fabrication process easier, and allow us to continue to design the overall shape after we start production of the individual pillows.
Pentagon tiling is a rich mathematical subject, with new patterns continuing to be discovered. Fifteen types of convex pentagon tilings have been found so far, the most recent in 2015. We are focusing on types 7 and 8, as they produce the most interesting, non-repetitive patterns:
Good reference sites:
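Whatever tiling type we settle on, any candidate pentagon has to satisfy the basic constraint that its interior angles sum to 540°. A minimal sketch that checks this from vertex coordinates (the `interior_angles` helper is our own, and assumes a convex pentagon listed counterclockwise):

```python
import math

def interior_angles(vertices):
    """Interior angles (degrees) of a convex polygon given CCW vertices."""
    n = len(vertices)
    angles = []
    for i in range(n):
        ax, ay = vertices[i - 1]          # previous vertex
        bx, by = vertices[i]              # current vertex
        cx, cy = vertices[(i + 1) % n]    # next vertex
        # vectors pointing from the current vertex toward its neighbours
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        angles.append(math.degrees(math.atan2(abs(cross), dot)))
    return angles

# A regular pentagon: every interior angle is 108 degrees, totaling 540.
pentagon = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
            for k in range(5)]
print(sum(interior_angles(pentagon)))  # ≈ 540
```

Each tiling type then adds its own extra conditions on which angles and edge lengths must match; this check is just the floor that every candidate shape must clear.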
The interactive aspect of the pavilion has centered on the idea of the “soft interface.” We consider this a key component of the soft city in general, and will use the pavilion as a chance to better define what it means for an interface to be soft. Preliminary criteria include the adaptability, plasticity, self-learning capacity, tactility, and embeddedness of an interface within a larger system. The soft interface prototype in the pavilion could address sound, sight, text, or touch.
Prof. Sean Ahlquist at the University of Michigan is doing research very relevant to this idea of the soft interface, including his recent project “Social Sensory Surfaces,” which:
looks to develop new material technologies as tactile interfaces designed to confront critical challenges of learning and social engagement for children with Autism Spectrum Disorder (ASD)…The project connects expertise and technology in textile structures and CNC knitting, programming of gestural and tactile input devices, and design of haptic and visual interfaces for enhanced musical expression. With textiles, the tactile interface is expanded in scale, from wearables to environments and varied in types of input for human-computer interactions. The textiles are tailored for gradations of touch and pressure sensitive input from large sweeping gestures to fine touch, calibrated to prompt a wide variety of response.
In considering how to implement a tactile system such as this as part of the inflatable system, we are weighing two possibilities. The first would be to use barometric pressure sensors inside the inflatables to sense when a given inflatable has been squeezed. Though potentially quite simple to implement, the obvious disadvantage of this approach is its very low resolution (1 pixel!), and it would require relatively small inflatable pillows. A second approach, which picks up on the approach described in Prof. Ahlquist’s project, would be to employ stretch sensors integrated into the inflatable fabric to register pressing touch across a surface. Conductive rubber cord (from Adafruit), organized in a grid, is one relatively cheap way to achieve this. Here is a link from Taobao.
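A rough sketch of how the rubber-cord grid could be read: conductive rubber increases in resistance as it stretches, so each cord can sit in a simple voltage divider and a microcontroller ADC can watch the midpoint. All of the wiring, resistor values, and the press threshold below are assumptions for illustration, not measured specs:

```python
# Assumed wiring: each rubber cord is the upper leg of a voltage divider,
# with a fixed resistor R_FIXED to ground; the ADC reads the midpoint.
R_FIXED = 10_000      # ohms, fixed divider resistor (assumption)
V_SUPPLY = 3.3        # volts
ADC_MAX = 1023        # 10-bit ADC, e.g. on a typical hobby microcontroller
R_REST = 21_000       # ohms, assumed cord resistance at rest
STRETCH_FACTOR = 1.5  # resistance ratio we treat as a "press" (assumption)

def cord_resistance(adc_counts):
    """Back out the cord's resistance from the divider midpoint reading."""
    v = adc_counts * V_SUPPLY / ADC_MAX
    # v = V_SUPPLY * R_FIXED / (R_cord + R_FIXED)  =>  solve for R_cord
    return R_FIXED * (V_SUPPLY - v) / v

def grid_touches(readings):
    """Map a dict of (row, col) -> ADC counts to the set of pressed cells."""
    return {cell for cell, counts in readings.items()
            if cord_resistance(counts) > STRETCH_FACTOR * R_REST}

# Stretching a cord raises its resistance, which pulls the midpoint voltage
# (and the ADC count) down; here cell (1, 2) reads as pressed.
readings = {(0, 0): 330, (1, 2): 70, (2, 1): 325}
print(grid_touches(readings))  # -> {(1, 2)}
```

Even this crude thresholding gives a touch map at the resolution of the grid, and the raw resistance values could later feed something softer than a binary press, such as pressure gradients across the surface.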
And some more links for soft circuitry and other sensitive fabrics:
Some great projects on air quality sensing:
1) A really beautiful project from 2013 called FLOAT: Air Quality Monitoring Kites in Beijing by Deren Guler and Xiaomei Wang:
2) Air Quality Balloon
3) Air Quality Sensor Setup:
Inspired by this recent Radiolab episode on bubbles (in particular, David Stein’s “big bubble thing” apparatus), I’ve been thinking about the form the BDW pavilion might take; these could prove useful inspiration. The idea of bubbles inside of bubbles is one direction worth exploring. Mostly just fun, though, with my friend Nick Hanna’s amazing bubble machine kicking it off:
As well as some impressive bubble artistry:
and this guy…