The folks at the New York Times infographics department have found and made great use of another extraordinary dataset:
This map was generated from a new dataset of 125 million buildings created by Microsoft and released as open data on OpenStreetMap. The data was created using the Microsoft Cognitive Toolkit (CNTK), an open-source machine learning library that analyzed satellite imagery to trace and then polygonize building footprints.
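The trace-then-polygonize idea can be gestured at with a toy sketch: once a vision model has predicted a binary building mask, downstream steps measure and vectorize it. The function and the 5×5 mask below are invented for illustration, not Microsoft's pipeline; here we just derive area and perimeter from a rasterized footprint.

```python
# Toy sketch of what follows a vision model's prediction: given a binary
# mask (1 = building pixel), measure the footprint by counting filled
# cells (area) and exposed cell edges (perimeter). Invented example data.

def footprint_stats(mask):
    rows, cols = len(mask), len(mask[0])
    area = sum(sum(row) for row in mask)
    perimeter = 0
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                # an edge is exposed if the neighbour cell is empty or off-grid
                if not (0 <= nr < rows and 0 <= nc < cols and mask[nr][nc]):
                    perimeter += 1
    return area, perimeter

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
print(footprint_stats(mask))  # (7, 12)
```

A real polygonization step would then simplify those exposed edges into clean vector outlines, which is exactly where the tracing inaccuracies discussed below creep in.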
The approach to generating this data brings up issues of accuracy and validation. The NYTimes mappers address this to some degree, explaining how they handled inaccurate tracing of building footprints by substituting areas where better data was available. This approach of validating across multiple datasets is of course an essential part of geodata modeling, and something we often discussed with students – part of the broader “author your own data” initiative, which encourages the interrogation of existing data, and the generation of new data on the ground, as a way to take ownership of the regimes of data production and visualization, and the agency derived therein.
What is perhaps most interesting here is the way computer vision takes on an increasing function in the interpretation – and design – of the world, as computational systems are endowed with the ability to “sense the world and learn to think” (to paraphrase Ben Bratton). What does it mean to design objects (buildings or otherwise) that respond not just to human visual sensitivity, but also to computational visual biases? Will driverless car vision make us rethink Kevin Lynch’s characteristics of a “well imaged city”? Will building footprints and the figure-ground relationships of the city be transformed through OpenStreetMap’s all-seeing and now all-drawing eye? What do urban features – buildings, parks, cars, benches, trees, etc. – look like to an AI vision system, and will these non-human “aesthetics” have any impact on how objects are designed?
On another note, another great post by Derek Watkins of the NYTimes discussing the challenges and computational approaches involved in displaying super-high resolution simulations of Antarctic ice flows with web motion graphics:
Theodore Spyropoulos and his team at the DRL are doing some incredible research on the potential of soft materiality and self-assembling systems:
In searching for a good geometric system for the tiling of the pillows, we have become very interested in pentagon tiling. This system allows for a great deal of variation in the pattern while using the same shape. While we are not entirely constrained to use only one shape, this will make the fabrication process easier, and allow us to continue to design the overall shape after we start production of the individual pillows.
The classification of pentagon tiling patterns is a rich mathematical subject, and new types are still being found: 15 are known so far, the most recent discovered in 2015. We are focusing on types 7 and 8, as they produce the most interesting, non-repetitive patterns:
Good sites as a reference:
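As a quick sanity check while experimenting with candidate pillow shapes, it can help to verify the basic geometric constraint any tiling pentagon must satisfy. The sketch below checks only that a pentagon's interior angles sum to 540° – it does not encode the full type 7 / type 8 conditions, which impose additional specific angle and edge relations; the example angle sets are invented.

```python
# Minimal sanity check for a candidate tiling pentagon: the interior
# angles of any simple pentagon must sum to 540 degrees. This is a
# necessary condition only, not the full type 7/8 tiling conditions.

def interior_angles_ok(angles, tol=1e-9):
    return (
        len(angles) == 5
        and abs(sum(angles) - 540) < tol
        and all(0 < a < 360 for a in angles)
    )

print(interior_angles_ok([90, 90, 120, 120, 120]))    # True
print(interior_angles_ok([108, 108, 108, 108, 108]))  # True (the regular
# pentagon passes this check, but famously does NOT tile the plane)
print(interior_angles_ok([100, 100, 100, 100, 100]))  # False
```

The regular-pentagon case is a useful reminder that the angle sum is necessary but far from sufficient – the interesting tilings come from the extra relations each of the 15 types imposes.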
The interactive aspect of the pavilion has focused on the idea of a “soft interface.” We consider this to be a key component of the soft city in general, and will use the pavilion as a chance to better define what it means for an interface to be soft. Preliminary criteria for this include the adaptability, plasticity, self-learning, tactility, and embeddedness of an interface within a system. Considerations for the soft interface prototype in the pavilion could address sound, sight, text, or touch.
Prof. Sean Ahlquist at the University of Michigan is working on some very relevant research to this idea of soft interface, including his recent project “Social Sensory Surfaces” which:
looks to develop new material technologies as tactile interfaces designed to confront critical challenges of learning and social engagement for children with Autism Spectrum Disorder (ASD)…The project connects expertise and technology in textile structures and CNC knitting, programming of gestural and tactile input devices, and design of haptic and visual interfaces for enhanced musical expression. With textiles, the tactile interface is expanded in scale, from wearables to environments and varied in types of input for human-computer interactions. The textiles are tailored for gradations of touch and pressure sensitive input from large sweeping gestures to fine touch, calibrated to prompt a wide variety of response.
In considering how to implement a tactile system such as this as part of the inflatable system, we are considering two possibilities. The first would be to use barometric pressure sensors inside the inflatable to sense if a given inflatable has been squeezed. Though potentially quite simple to implement, the obvious disadvantage of this approach is its very low resolution (one pixel!), and it would require the use of relatively small inflatable pillows. A second approach, which seems to pick up on the approach described in Prof. Ahlquist’s project, would be to employ stretch sensors integrated into the inflatable fabric to register pressing touch across a surface. Conductive rubber cord (from Adafruit), organized in a grid, is one relatively cheap way to achieve this. Here is a link from Taobao.
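The conductive rubber cord behaves as a variable resistor whose resistance rises as it stretches, so the usual way to read it is as one leg of a voltage divider into an analog input. The sketch below shows the arithmetic; the supply voltage, 10k fixed resistor, 10-bit ADC range, and squeeze threshold are all illustrative assumptions, not values measured from our hardware.

```python
# Hedged sketch: reading a conductive-rubber stretch sensor via a voltage
# divider. Assumed wiring: supply -> sensor -> ADC pin -> fixed resistor
# -> ground, so the ADC reads the voltage across the fixed resistor.

V_SUPPLY = 5.0
R_FIXED = 10_000.0   # ohms, fixed resistor in series with the sensor
ADC_MAX = 1023       # 10-bit ADC, e.g. an Arduino analog input

def sensor_resistance(adc_reading):
    v_out = V_SUPPLY * adc_reading / ADC_MAX
    # v_out = V_SUPPLY * R_FIXED / (R_SENSOR + R_FIXED); solve for R_SENSOR
    return R_FIXED * (V_SUPPLY - v_out) / v_out

def is_squeezed(adc_reading, threshold_ohms=15_000.0):
    # squeezing the pillow stretches the cord, raising its resistance,
    # so resistance above the (assumed) threshold counts as a touch event
    return sensor_resistance(adc_reading) > threshold_ohms

print(round(sensor_resistance(511)))  # 10020 (mid-scale reading ~= R_FIXED)
```

In a grid layout, one such divider per row/column intersection would give a coarse touch map across the pillow surface.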
And some more links for soft circuitry and other sensitive fabrics:
Some great projects on air quality sensing:
1) A really beautiful project from 2013 called FLOAT: Air Quality Monitoring Kites in Beijing by Deren Guler and Xiaomei Wang:
2) Air Quality Balloon
3) Air Quality Sensor Setup:
Inspired by this recent Radiolab episode on bubbles – in particular, David Stein’s “big bubble thing” apparatus. This gets me thinking about the form that the BDW pavilion might take; these could prove useful inspiration. The idea of bubbles inside of bubbles is one direction worth exploring. Mostly just fun, with my friend Nick Hanna’s amazing bubble machine kicking it off:
As well as some impressive bubble artistry:
and this guy…
This fascinating, and somewhat unsettling, study came out last year, illustrating techniques for user profiling based on a relatively small number of geolocated tags:
The underlying implications of this paper radically shift the conversation on user profiling from a purely relational analytic to one with a distinctly spatial dimension. “You Are Where You Go,” as the paper says. Looking at geospatial check-ins from Weibo users in Beijing and Shanghai, the study had success in predicting demographic profile information such as “gender, age, education background, sexual orientation, marital status, blood type and zodiac sign.” Columbia University students also recently implemented the application for the US and US-based geolocation/social media platforms.
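The core mechanic is easy to caricature: places carry demographic priors, so a handful of check-ins becomes a vote over labels. The sketch below is a deliberately trivial nearest-centroid illustration in that spirit; the venues, coordinates, and labels are invented, and the actual paper's models are far more sophisticated.

```python
# Toy illustration of "You Are Where You Go"-style profiling: each
# check-in inherits the label of the nearest known venue, and the
# majority vote becomes the guessed attribute. All data here is invented.

from collections import Counter
from math import hypot

VENUES = {
    (0.0, 0.0): "student",
    (0.2, 0.1): "student",
    (5.0, 5.0): "office worker",
    (5.1, 4.8): "office worker",
}

def nearest_label(point):
    venue = min(VENUES, key=lambda v: hypot(v[0] - point[0], v[1] - point[1]))
    return VENUES[venue]

def profile(checkins):
    votes = Counter(nearest_label(p) for p in checkins)
    return votes.most_common(1)[0][0]

print(profile([(0.1, 0.0), (0.3, 0.2), (4.9, 5.1)]))  # student
```

Even this caricature makes the counter-mapping question below concrete: adding a few decoy check-ins near the "wrong" venues is enough to flip the vote.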
How does this change the way we think about identity as a spatial construct in the city, and the various (soft) tactics for manipulating our identity in the city: camouflage, mimicry, misdirection, jamming, etc.? Is even Banksy safe? What new counter-mapping platforms are needed to deploy tactics for amplifying or obscuring one’s geo-digital footprint, not only to foil the impending geospatial tracking capacities of governments, corporations, and other potential adversaries, but also, and more importantly, to open up a more engaged conversation with our (increasingly data-mediated) environment?
In this spirit, here is a great collection of maps and mapping practices that sought to “transgress space,” from the Situationists’ Naked City, to William Bunge’s radical cartography of 1960s Detroit, to forensic maps of drone strikes or toxic waste spills.
The Tactical Technology Collective has a great interview with the artist Sascha Pohflepp as part of its Nervous Systems interview series. There is a great discussion about the nature of the network, and how more and more of our technology is driving towards a “network platform paradigm”. Some nice quotes from Sascha below:
It is a good example of something I have not really talked about much, which is that, at some level, you are possibly never really taking a photograph. You are not image-making, you are recording metadata. Which leads to the other, deeper point that photography by now is actually a fully realized networked practice…Photography as a networked practice now exists to the point where, especially in popular use, the communication value of a photo has outstripped the memory value of photographs. Look at Snapchat for example, which is an explicit communication technique, not a memory technique, all based on images. And there is Instagram of course. Those are not image-making practices but networked communication techniques that are used billions of times per day. This is a definite shift in the history of photography.
So it is kind of obvious what is next – any thing that can be emulated by a Universal Turing Machine, which for cameras is easily imagined, will be. It’s the same for cars. Not that the car will become digital, but it will get sucked into the same sort of platform paradigm. You may believe that you have a smartphone holder in your car, but it actually is a car holder for your smartphone. The key thing I am getting at is that the network – and the knowledge that is embodied in the networked – is paramount.
The work of David Cope (his website here), referenced by Douglas Hofstadter in his Singularity talk I posted earlier, merits its own post. David’s EMI (Experiments in Musical Intelligence) platform can emulate music composition built on a database of musical scores from a given composer or musical style. It opens up a number of very interesting questions about the nature of creativity, authorship, and design in the context of computational mediation. I love David’s characterization of the EMI software as a “foil” to his own creative process. Here is an article on the history of his work.
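The family of techniques EMI draws on – recombining patterns learned from a corpus into new sequences in the same style – can be gestured at with a first-order Markov chain over note transitions. This is a deliberately tiny caricature, not Cope's actual system (EMI involves much deeper structural analysis); the note corpus below is invented.

```python
# A tiny sketch of style emulation by recombination, in the spirit of
# (but vastly simpler than) Cope's EMI: learn note-to-note transitions
# from a "corpus" and sample a new melody from them. Invented corpus.

import random
from collections import defaultdict

corpus = ["C", "E", "G", "E", "C", "E", "G", "C"]

# first-order Markov model: which notes follow which
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def compose(start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(transitions[out[-1]]))
    return out

print(compose("C", 8, seed=1))
```

Even at this scale, the output only ever contains transitions heard in the corpus – which is roughly the sense in which the software acts as a “foil”, reflecting the source style back at its author.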
Here is a piece that Radiolab did on his work:
a Bach-like piece composed by EMI:
and some more in-depth interviews with David: