Music Meaning + Music Intelligence

The work of David Cope (his website here), referenced by Douglas Hofstadter in his Singularity talk I posted earlier, merits its own post. David’s EMI (Experiments in Musical Intelligence) platform can emulate composition in the style of a given composer or musical style, built on a database of that body of musical scores. It opens up a number of very interesting questions about the nature of creativity, authorship, and design in the context of computational mediation. I love David’s characterization of the EMI software as a “foil” to his own creative process. Here is an article on the history of his work.

Here is a piece that Radiolab did on his work:

a Bach-like piece composed by EMI:

and some more in-depth interviews with David:

Beyond Turing?

One question that has been bubbling up is the Church-Turing thesis, and its potential implied limitation on the possibility of machine intelligence (i.e. the limits of logic and computation). The prospect of interactive computation escaping this limitation seems like a particularly promising thought, one which aligns nicely with some of the thinking Walter and I have been doing on what we are referring to as the “bidirectionality” of abstraction in the mind.

Here Jeff Hawkins (founder of Palm and Handspring, and now Numenta) suggests in the title of the lecture below that his HTM (Hierarchical Temporal Memory) machines are “Computing Beyond Turing.” One has to be a bit skeptical of whether we are really getting beyond Turing here. As the final questioner points out, this whole thing is running on a Turing machine, after all. The name itself jumps out: can a fundamentally hierarchical system ever go beyond Turing, or are we once again turning the world into so many trees (in Christopher Alexander’s sense of the word)? In fact, as Hawkins admits near the end of the lecture, his system is less general than the Turing machine: a hierarchical system can only solve hierarchical problems. It is all nails to a hammer. Certainly the human mind has the capacity for a whole range of non-hierarchical “leaps.”

This does open an interesting set of questions about the “inherently hierarchical organization of the world,” as many key problems (at least the ones that can be monetized in a straightforward manner) do tend toward hierarchical characteristics. But again, we must wonder to what extent this is due to our own projections onto the world (and the projections of our long history of rationalization machines, from money to maps to machines). In any case, Hawkins certainly brings some interesting approaches, and I will definitely explore the software insofar as it is available.
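As a point of reference for the “running on a Turing machine, after all” objection, here is a minimal sketch of the classical model itself: a tape, a head, and a transition table. This is a generic textbook illustration (a binary increment machine of my own devising), not anything to do with Hawkins’s HTM code.

```python
# Minimal Turing machine simulator. Transition table maps
# (state, symbol) -> (new_state, symbol_to_write, head_move).
# "_" is the blank symbol.

def run_turing_machine(tape, transitions, state="start", halt="halt"):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Binary increment: scan to the rightmost bit, then carry leftward.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run_turing_machine("1011", INCREMENT))  # 1011 + 1 = 1100
```

Anything HTM (or any neural architecture) computes on conventional hardware can, in principle, be reduced to a table of this shape, which is the force of the questioner’s point.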

One side note, just noticed as I enter this world of neuroscience + AI research: the characters are quite an interesting bunch. Brilliant, no doubt, and all quite polyglot, as you would expect. But particular as well. From Theme Park to AI? Sure, why not. From Palm Pilot to AI? What else would you do? It does raise the question: is AI research the new post-dot-com-success mid-life-crisis dream job du jour? And for that matter, what is the impact of the business-driven approaches that underpin many of these efforts at realizing AI? How does this monetization of the technology shape what is produced?

Beyond the obvious TED Talkers, it will be interesting to dig into some people who are operating at a bit more of a theoretical level, whatever that might mean.

A few more videos that may (or may not) shed some light on the Turing incompleteness limitations:




Interactive Computing

A nice little rabbit hole here. What happens to Turing incompleteness (and the argument that it renders machine intelligence impossible) once machines start interacting directly with the world? Finite state machines (and non-deterministic FSMs) also seem quite useful going forward.
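A non-deterministic FSM is surprisingly compact to simulate: at each step you track the set of states the machine could be in (the standard subset trick). The machine below, which accepts binary strings ending in “01”, is my own toy example, not from any of the linked material.

```python
# Nondeterministic finite automaton (NFA) simulation: track the *set*
# of states reachable after each input symbol.

def nfa_accepts(s, transitions, start, accepting):
    current = {start}
    for symbol in s:
        current = {nxt
                   for state in current
                   for nxt in transitions.get((state, symbol), set())}
    return bool(current & accepting)

# Toy machine accepting binary strings that end in "01".
# From q0, on "0" we either stay put or "guess" the suffix has begun.
TRANSITIONS = {
    ("q0", "0"): {"q0", "q1"},
    ("q0", "1"): {"q0"},
    ("q1", "1"): {"q2"},
}

print(nfa_accepts("1101", TRANSITIONS, "q0", {"q2"}))  # True
print(nfa_accepts("1100", TRANSITIONS, "q0", {"q2"}))  # False
```

The non-determinism costs nothing conceptually: the set-of-states simulation shows why every NFA has an equivalent deterministic FSM.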

From Wikipedia:

The famous Church-Turing thesis attempts to define computation and computability in terms of Turing machines. However the Turing machine model only provides an answer to the question of what computability of functions means and, with interactive tasks not always being reducible to functions, it fails to capture our broader intuition of computation and computability. While this fact was admitted by Alan Turing himself, it was not until recently that the theoretical computer science community realized the necessity to define adequate mathematical models of interactive computation. Among the currently studied mathematical models of computation that attempt to capture interaction are Japaridze’s hard- and easy-play machines elaborated within the framework of computability logic, Goldin’s persistent Turing machines, and Gurevich’s abstract state machines. Peter Wegner has additionally done a great deal of work on this area of computer science.
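The core idea behind Goldin’s persistent Turing machines, as I understand it, is that the worktape survives between interactions, so each output depends on the whole history of inputs rather than on a single function application. A rough sketch of that shape (my own drastic simplification, not Goldin’s formal model):

```python
# A caricature of interactive/persistent computation: internal state
# survives across interaction steps, so the same input can yield
# different outputs depending on history. This is why the behavior is
# not reducible to a single input->output function.

class PersistentMachine:
    def __init__(self):
        self.worktape = []  # persists between interactions

    def interact(self, token):
        self.worktape.append(token)   # history accumulates
        return len(self.worktape)     # output depends on all past inputs

m = PersistentMachine()
print(m.interact("a"))  # 1
print(m.interact("a"))  # 2: same input, different output
```

The second call returning a different value for the same input is exactly the property the Wikipedia passage gestures at: interactive tasks are not always reducible to functions.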

Douglas Engelbart, father of interactive computing.

Theory of Computation course at Portland State (FSM, etc.):


The Chinese Room

John Searle offers a contrasting point of view to the previous post and its “solve AI and you solve the problems of the world” optimism. His Chinese Room thought experiment is a powerful critique of strong AI. There is something wonderfully Borges-esque about Searle’s imagery in this, something akin to the Library of Babel, as one mechanically shifts from input, to database, to code, to output, and the baroque infinitude of such an undertaking. But what about boredom in all this, the human (conscious) impulse to introduce humor, playfulness, and creativity into the most repetitive tasks?
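The input-to-database-to-code-to-output shuffle can be caricatured in a few lines: a purely syntactic lookup from symbols to symbols, with no understanding anywhere in the loop. The “rulebook” entries below are invented placeholders, not actual Chinese, and the whole thing is of course a toy gloss on Searle’s scenario, not his argument.

```python
# The Chinese Room as pure symbol shuffling: the occupant matches the
# shape of the incoming slip against a rulebook and copies out the
# prescribed reply, comprehending nothing at any step.

RULEBOOK = {
    "SYMBOL-A": "SYMBOL-X",
    "SYMBOL-B": "SYMBOL-Y",
}

def chinese_room(slip_of_paper):
    # An unmatched shape gets a stock reply, still without comprehension.
    return RULEBOOK.get(slip_of_paper, "SYMBOL-Z")

print(chinese_room("SYMBOL-A"))  # SYMBOL-X
```

Searle’s point is that scaling this table up to pass as fluent conversation would add nothing but more table: syntax all the way down, semantics nowhere.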

In this video of a recent talk at Google, Searle expounds on the Chinese Room, as well as his theories on epistemological vs. ontological subjectivity and his critique of machine consciousness as a category error. I will need to think about this a bit more, though we might ask, as Hassabis points out, whether the question of machine consciousness (at least one predicated on the human/mammalian brain and experience of consciousness) is even an appropriate question for the development of AI.

Searle, a key philosopher in the analytic tradition, is also famous for his not-so-friendly exchange with Derrida over speech act theory.

Demis Hassabis + DeepMind

Some very interesting work is emerging from the leader of DeepMind (now part of Google) and its research into AI and deep learning. His incredible background runs from chess master at the age of 13, to designing the video game Theme Park at 17, to a PhD in neuroscience, and finally to the founding of DeepMind and its purchase by Google in 2014. DeepMind recently made the news for its AlphaGo program, which beat the European Go champion. A couple of recent lectures: