Singularity Summit 10 Afternoon

August 14, 2010

2:00 Steve Mann

Humanistic Intelligence Augmentation and Mediation

Steve Mann is a cyborg. As we watch him speak, we’re getting a live feed from the video camera mounted on his eyeglasses. He’s been wearing computers for 30 years.

He is demoing the EyeTap (one of his many inventions), a camera that enables him to continuously broadcast more or less exactly what he is seeing at all times.

He describes what he does as “Glogging” (short for “cyborg logging”). Unlike a blog, which provides discrete, digital data, a Glog presents continuous, streaming data.

Now a mini-concert played on the hydraulophone, a musical instrument invented by Mann:

“World’s first musical instrument that produces sound from vibrations in the water itself, and also uses water as the user interface.”

This kind of pure streaming experience is a brief glimpse into the world of the undigital singularity — which I don’t get, but it sounds kind of cool. In any case, it’s a fascinating merger of science, engineering, and art.

 

3:00 Mandayam Srinivasan

Enhancing our bodies and evolving our brains

Haptic interfaces — touch interfaces. He showed a video of deaf-blind individuals trained via a methodology called Tadoma. Amazing, seeing a person with no eyesight or hearing having a conversation with someone, able to “listen” to what the other person is saying just by touching her face.

Haptics applications:

Virtual reality — using real touch to operate in artificial environments

Teleoperation — using real touch to operate in real environments via computer interfaces

 

3:25 Brian Litt

The past, present and future of brain machine interfaces.

I sat this and the next one out as I was putting some audio together for tonight’s podcast. But from Brian’s abstract:

Brain-computer interfaces (aka Brain-Machine Interfaces or Neuroprosthetics), long of interest to science fiction writers and creative thinkers, became a government funded research discipline in the United States beginning in the 1970s. The vision of its architects at DARPA and the National Science Foundation was to restore motor control to soldiers with brain, spinal cord and limb injuries, programs that continue to flourish today. Early devices sampled a variety of neural signals, including scalp EEG and evoked potentials, though the first dramatic successes arose ~ 20 years later from more modern technologies that allowed completely paralyzed (or “locked in”) patients to operate computers or move robotic arms using nothing but their thoughts. These systems record multi-unit neuronal activity from small, targeted brain regions, compute transfer functions to transduce this activity into movement control signals, and conduct it to “effectors,” such as computer cursors or robotic limbs. What has followed is an explosion of innovation in hardware (materials, batteries, computation speed and miniaturization), software (e.g. machine learning), and systems neuroscience that is producing a growing array of implantable neural recording and activation devices to treat disease, restore and potentially augment human function.
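The decoding pipeline Litt describes — record multi-unit activity, compute a transfer function, and drive an “effector” such as a cursor — can be sketched in a few lines. This is a toy illustration only, with made-up data and a simple least-squares linear decoder standing in for the far more sophisticated methods real BCI labs use:

```python
import numpy as np

# Toy sketch of a BCI "transfer function": map multi-unit firing rates
# to 2-D cursor velocities with a linear decoder fit by least squares.
# All data and dimensions here are invented for illustration.

rng = np.random.default_rng(0)

n_samples, n_units = 500, 32  # recording epochs x recorded units
true_weights = rng.normal(size=(n_units, 2))  # hidden rate-to-velocity mapping

# Simulated multi-unit firing rates and the movements they accompany
rates = rng.poisson(lam=5.0, size=(n_samples, n_units)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit the decoder: W = argmin || rates @ W - velocity ||^2
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new epoch of activity into an effector command
new_rates = rng.poisson(lam=5.0, size=(1, n_units)).astype(float)
cursor_velocity = new_rates @ decoder  # shape (1, 2): x and y velocity
```

Real systems replace the least-squares fit with Kalman filters or machine-learning decoders and must cope with nonstationary signals, but the structure — neural activity in, control signal out — is the same.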

BCIs are now universally accepted in a variety of forms. Brain stimulation devices for movement disorders and pain are implanted in patients on almost every continent. New successes, such as recent reports of treating depression with brain stimulation, are world news. Auditory prostheses such as cochlear implants are now commonplace, visual prostheses have reached early milestones to restore low resolution sight, and haptics research holds promise to restore sensation in the setting of limb loss, brain and peripheral nerve injury. Early areas of emphasis, such as prosthetic limb research, have made the most progress, using both real-time feedback to improve responsiveness of artificial arms and legs, and transplanted peripheral nerves to drive sensors. BCIs for speech work slowly but they function well enough to be gaining users, and those for cognition, particularly for memory, are being tested in early forms, with great promise. Underlying all of these implementations is an understanding that “neuroplasticity,” the brain’s ability to adapt and interpret regular and logical signals when taught, can take even low levels of information and interpret them logically. This is the case, for example, in cochlear implants, where patients can learn to interpret crude electrical stimulations through a handful of macroelectrodes as intelligible speech.

The major hurdles to better BCIs are both technical and rooted in neuroscience. Materials science researchers must deliver more durable and better-tolerated implantable materials to prevent failure and rejection. Engineers must craft smaller, higher-resolution devices with more contacts and higher density that can also cover larger regions, to be able to record and activate the large neuronal networks involved in brain functions. Better machine learning techniques are required to extract pertinent information from neural signals without relying on human experts to identify them. Finally, ways of dramatically increasing information transfer rates and optimizing neuroplasticity are required to achieve bandwidth between humans and devices high enough to make their speed useful. Challenges on the neuroscience side are equally important, most crucially determining on what scale to record neural activity (e.g. single neurons, cortical columns, broad brain regions, etc.), how much activity, and over how large a region. We also need better techniques to map the diverse regions in the brain that work together in cognition and other functions, both invasively and non-invasively in humans, in order to unlock how they work.

The future of BCI research is extremely bright. The scientific community worldwide is making rapid progress in each of the above challenge areas, as demonstrated by the number of devices being invented, tested, and deployed for human use, and by the dramatically increasing research literature in the area of BCI. Most crucially, the rate of information transfer from human brains to computers is rapidly increasing, though in part by using more invasive technologies. Taking the step from repairing damage and restoring function to augmenting our abilities to see, hear, move or think is a dramatic one, and one with major ethical and moral implications. Devices to restore and enhance memory are already being tested, and our growing understanding of how memories are encoded and retrieved gives dim glimpses of how information might be transferred from computer storage to human consciousness, though this type of application seems far off now. Augmentation of strength, perhaps reducible to mechanical design once appropriate control is established, seems much less challenging by comparison. What seems most clear is that the pace of advancement in these areas is accelerating. That BCI research will eventually transition from plasticity and repair to augmentation is not in doubt. It is imperative that we think carefully about how and where, scientifically, this shift should take place, and how we might best guide this process.

 

4:15 Demis Hassabis

Combining systems neuroscience and machine learning: a new approach to AGI

Also missed this one, but Brian Wang writes:

Neuroscience is rapidly teasing apart the functional roles of the brain’s components, and in some cases even the types of algorithms that they use. Machine learning, meanwhile, is producing a growing collection of techniques for specific kinds of problems, but as yet no general purpose algorithm for artificial intelligence. By bringing these two fields together, we can have both a high level architecture for an artificial general intelligence, and working algorithms for implementing many of the required components. In this talk I will outline the case for pursuing this approach, some current work in progress, and some of the challenges we face going forward.