Octopus Pavilion Progress – 20160906

Opening in 15 Days

On Site in 8 Days

Much progress over the weekend. Full-scale mock-up of a 2m x 2m pillow with electronics installed, including fans, LEDs, and sensors. Good tests during both day and night. We still need more power from our fans, and we are working through a number of sensitivity issues with the sensor. The interactive company we are working with will deliver a final prototype by the end of the week. The Grasshopper simulation was made more usable thanks to help from Casey and Han at MAD.

For the next few days, while we wait for the final prototype, we will focus on the overall design: cell shape and size, overall form on the site, carriage design, structure, and rain screen, in both digital simulation and physical prototypes.





Octopus Pavilion Progress – 20160826

27 Days to go

Here are some videos of the physics simulation we are setting up in Grasshopper, using the Kangaroo2 physics library, to test the design and interactivity:
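At its core, this kind of solver is iterative spring relaxation: connected points are nudged toward their rest lengths until the system settles. A minimal sketch of the idea in plain Python (the function and all values are illustrative, not Kangaroo2's actual API):

```python
# Minimal mass-spring relaxation -- a sketch of the kind of solve
# Kangaroo2 performs inside Grasshopper (all values hypothetical).

def relax(points, springs, rest_length, stiffness=0.5, iterations=100):
    """Iteratively move connected points toward the spring rest length.

    points  : list of [x, y] coordinates (mutated in place)
    springs : list of (i, j) index pairs into `points`
    """
    for _ in range(iterations):
        for i, j in springs:
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            # How far the spring is from rest, split between both endpoints
            correction = stiffness * (dist - rest_length) / dist / 2
            points[i][0] += dx * correction
            points[i][1] += dy * correction
            points[j][0] -= dx * correction
            points[j][1] -= dy * correction
    return points

# Three points in a line, two springs pulling them toward rest length 1.0
pts = relax([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]], [(0, 1), (1, 2)], 1.0)
```

The real solver adds many more goal types (pressure, anchors, collisions), but the relax-until-settled loop is the same basic move.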

Also some progress on the prototype, working with a nice nylon-type fabric, with sewing:

Soft Interface

The interactive aspect of the pavilion has focused around the idea of “soft interface.” We consider this to be a key component of the soft city in general, and will use the pavilion as a chance to try to better define what it means for an interface to be soft. Preliminary schematics for this include the adaptability, plasticity, self-learning, tactility, and embeddedness of an interface within a system. Considerations for the soft interface prototype in the pavilion could address sound, sight, text, or touch.

Prof. Sean Ahlquist at the University of Michigan is conducting research very relevant to this idea of soft interface, including his recent project “Social Sensory Surfaces,” which:

looks to develop new material technologies as tactile interfaces designed to confront critical challenges of learning and social engagement for children with Autism Spectrum Disorder (ASD)…The project connects expertise and technology in textile structures and CNC knitting, programming of gestural and tactile input devices, and design of haptic and visual interfaces for enhanced musical expression. With textiles, the tactile interface is expanded in scale, from wearables to environments and varied in types of input for human-computer interactions. The textiles are tailored for gradations of touch and pressure sensitive input from large sweeping gestures to fine touch, calibrated to prompt a wide variety of response.

In considering how to implement a tactile system such as this as part of the inflatable system, we see two possibilities. The first would be to use barometric pressure sensors inside the inflatable to sense when a given inflatable has been squeezed. Though potentially quite simple to implement, the obvious disadvantage of this approach is its very low resolution (one pixel!), and it would require the use of relatively small inflatable pillows. A second approach, which seems to pick up on the approach described in Prof. Ahlquist’s project, would be to employ stretch sensors integrated into the inflatable fabric to register pressing touch across a surface. Conductive rubber cord (from Adafruit), organized in a grid, is one relatively cheap way to achieve this. Here is a link from Taobao.
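The first approach boils down to comparing each pressure sample against a rolling ambient baseline. A hedged sketch of that detection logic, with hypothetical thresholds (on hardware the samples would come from the barometric sensor itself, e.g. over I2C):

```python
# Sketch: detecting a "squeeze" from barometric pressure samples taken
# inside a sealed pillow. The window size and threshold are hypothetical
# and would need calibration against the real sensor and fabric.

from collections import deque

class SqueezeDetector:
    def __init__(self, baseline_window=20, threshold_pa=150.0):
        self.history = deque(maxlen=baseline_window)  # recent ambient readings
        self.threshold_pa = threshold_pa              # pressure rise that counts as a squeeze

    def update(self, pressure_pa):
        """Feed one pressure sample; return True while the pillow is squeezed."""
        if len(self.history) < self.history.maxlen:
            # Still collecting the initial ambient baseline
            self.history.append(pressure_pa)
            return False
        baseline = sum(self.history) / len(self.history)
        squeezed = (pressure_pa - baseline) > self.threshold_pa
        if not squeezed:
            # Only feed ambient samples back in, so a long squeeze
            # doesn't drift the baseline upward
            self.history.append(pressure_pa)
        return squeezed
```

The rolling baseline matters because ambient pressure drifts with weather and temperature; a fixed threshold against an absolute reading would misfire over a day of operation.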

And some more links for soft circuitry and other sensitive fabrics:

Beyond Turing?

One question that has been bubbling up is the Church-Turing thesis, and the limitation it may imply on the possibility of machine intelligence (i.e. the limits of logic and computation). The prospect of interactive computation escaping this limitation seems particularly promising, and aligns nicely with some of the thinking Walter and I have been doing on what we are referring to as the “bidirectionality” of abstraction in the mind.

Here Jeff Hawkins (founder of Palm and Handspring, and now Numenta) suggests in the title of the lecture below that his HTM (Hierarchical Temporal Memory) machines are “Computing beyond Turing.” One has to be a bit skeptical of whether we are really getting beyond Turing here. As the final questioner points out, the whole thing is running on a Turing machine, after all. The name itself jumps out: can a fundamentally hierarchical system ever go beyond Turing, or are we once again turning the world into so many trees (in Christopher Alexander’s sense of the word)? In fact, as Hawkins admits near the end of the lecture, his system is less general than the Turing machine: a hierarchical system can only solve hierarchical problems; everything looks like a nail to this hammer. Certainly the human mind has the capacity for a whole range of non-hierarchical “leaps.”

This does open an interesting set of questions about the “inherently hierarchical organization of the world,” as many key problems (at least the ones that can be monetized in a straightforward manner) do tend toward hierarchical characteristics. But again, we must wonder to what extent this is due to our own projections onto the world (and the projections of our long history of rationalization machines, from money to maps to machines). In any case, Hawkins certainly brings some interesting approaches, and I will definitely explore the software insofar as it is available.

One side note, just noticed as I enter this world of neuroscience + AI research: the characters are quite an interesting bunch. Brilliant no doubt, and all quite polyglot, as you would expect. But particular as well. From “Theme Park” to AI… sure, why not? From Palm Pilot to AI… what else would you do? It does raise the question: is AI research the new post-dot-com-success mid-life-crisis dream job du jour? And for that matter, what is the impact of the business-driven approaches that underpin many of these efforts at realizing AI? How does this monetization of the technology affect what is produced?

Beyond the obvious TED Talkers, it will be interesting to dig into some people who are operating at a bit more of a theoretical level, whatever that might mean.

A few more videos that may (or may not) shed some light on the Turing incompleteness limitations:




Interactive Computing

A nice little rabbit hole here. What happens to Turing incompleteness (and the argument that it renders machine intelligence impossible) once machines start interacting directly with the world? Finite state machines (and non-deterministic FSMs) also seem quite useful going forward.

From Wikipedia:

The famous Church-Turing thesis attempts to define computation and computability in terms of Turing machines. However the Turing machine model only provides an answer to the question of what computability of functions means and, with interactive tasks not always being reducible to functions, it fails to capture our broader intuition of computation and computability. While this fact was admitted by Alan Turing himself, it was not until recently that the theoretical computer science community realized the necessity to define adequate mathematical models of interactive computation. Among the currently studied mathematical models of computation that attempt to capture interaction are Japaridze’s hard- and easy-play machines elaborated within the framework of computability logic, Goldin’s persistent Turing machines, and Gurevich’s abstract state machines. Peter Wegner has additionally done a great deal of work on this area of computer science.
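The distinction the quote draws can be made concrete with a toy sketch in the spirit of Goldin’s persistent Turing machines (the class and names here are illustrative, not her formalism): unlike a pure function, an interactive machine’s answer to the same input can change, because state persists across the interaction stream.

```python
# Toy "persistent" machine: output depends on the whole interaction
# history, not just the current input symbol.

class PersistentCounter:
    """Responds to a stream of queries; state survives between steps."""

    def __init__(self):
        self.seen = {}  # symbol -> how many times it has appeared

    def step(self, symbol):
        # Each interaction updates internal state before answering
        self.seen[symbol] = self.seen.get(symbol, 0) + 1
        return self.seen[symbol]

m = PersistentCounter()
# The same input "a" yields different outputs as the history grows --
# behaviour that no single input->output function reproduces.
outputs = [m.step("a"), m.step("b"), m.step("a")]
```

Of course, one can always re-encode the history as part of the input and recover a function, which is exactly why the “beyond Turing” claim stays contested.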

Douglas Engelbart, father of interactive computing.

Theory of Computation course at Portland State (FSM, etc.):