
Report to the National Science Foundation:

WORKSHOP ON NEUROMORPHIC ENGINEERING

Telluride, CO

Monday, June 29 to Sunday, July 19, 1998

Shihab Shamma Avis Cohen

Tim Horiuchi Giacomo Indiveri

with

R. Douglas C. Koch T. Sejnowski


Contents

1 Summary
  1.1 Introduction
  1.2 Progress
  1.3 Future Aims

2 Telluride 1998: the details
  2.1 Applications to Workshops
  2.2 Funding and Commercial Support
  2.3 Local Organization
  2.4 Setup and Computer Laboratory
  2.5 Workshop Schedule

3 Tutorials and Project Workgroups
  3.1 VLSI Basics
  3.2 Floating Gates
  3.3 Interchip communication
  3.4 Project Workgroups

4 Neuromorphic Robots
  4.1 Introduction
    4.1.1 Infrastructure
    4.1.2 Results
  4.2 Learning: on-line learning using the Khepera robot
    4.2.1 The Network Design
    4.2.2 Work done in Telluride
    4.2.3 Conclusion and future work
  4.3 Olfaction: modeling and experimenting with an artificial nose
    4.3.1 Biological Relevance
    4.3.2 Measurement Set-up
    4.3.3 Combining TNose and Xmorph
    4.3.4 Olfactory Bulb Modeling
    4.3.5 Cell types
    4.3.6 Synapse types
    4.3.7 Results
    4.3.8 Conclusion and Future Work
  4.4 Audition: auditory localization using the Koala robot
  4.5 Vision: view-based navigation using the Khepera robot
    4.5.1 Input Preprocessing
    4.5.2 Neural Network
    4.5.3 Results to Date
    4.5.4 References
  4.6 Vision: interfacing a Silicon Retina to a Koala Robot
  4.7 Neuromorphic Flying Robots
  4.8 Optomotor response with an aerodynamic actuator
  4.9 Visual tracking using a silicon retina on a pan-tilt system
    4.9.1 Experimental setup
    4.9.2 Algorithm
    4.9.3 Results
  4.10 Optomotor Response of a Koala Robot with an aVLSI Motion Chip
  4.11 Locomotion of segmented lamprey-like robots

5 Auditory Processing
  5.1 Introduction
  5.2 Peripheral Auditory Processing
    5.2.1 Analysis of informative features in natural acoustic signals
    5.2.2 Auditory processing with electronic cochlea chips
    5.2.3 Hardware realization of signal normalization, noise reduction, and feature enhancement on the output of a cochlear chip
  5.3 Auditory Localization
    5.3.1 Computation of sound-source lateral angle by a binaural cross-correlation network
    5.3.2 Computation of sound-source lateral angle by a stereausis network
  5.4 Acoustic Pattern Recognition
    5.4.1 Identification of speech in real noisy environments using a model of auditory cortical processing
    5.4.2 Review of current prospects and limitations in speech recognition systems
  5.5 Collaborative Efforts
    5.5.1 Production of spectro-temporal receptive fields using projective-field mapping of address-events from a 1-D sender array
    5.5.2 Directing a binaural robot towards a sound-emitting target
    5.5.3 An auditory complement to visual saliency
    5.5.4 Making Pinna Casts
  5.6 Retro- and Pro-spectives

6 Address Event Representation
  6.1 Introduction
  6.2 AER-based 1-D Stereo Work Group
  6.3 Line-Following Robot Using an Address-Event Optical Sensor
  6.4 2D Address-Event Senders and Receivers: Implementing Direction-Selectivity and Orientation-Tuning
  6.5 One-Dimensional AER-based Remapping Project
  6.6 Simulating AER-Cochlear Inputs With the 1-D AER Vision Chip
  6.7 Serial Address-Event Representation
    6.7.1 SAER Basics
    6.7.2 Pros and Cons of SAER
    6.7.3 SAER cabling standard
    6.7.4 Project 1: The SAER computer interface board
    6.7.5 Project 2: The SAER universal routing block
    6.7.6 Project 3: The SAER-to-AER converter blocks
  6.8 Serial AER Merger/Splitter
  6.9 FPGA Implementation of a Spike-based Displacement Integrator
    6.9.1 The Displacement Integrator
    6.9.2 Digital Implementation
    6.9.3 Testing
    6.9.4 Conclusions

7 Discussion Groups
  7.1 The "What is computation?" Discussion Group
  7.2 Neuromorphic Systems for Prosthetics
    7.2.1 Motivation
    7.2.2 Sensing
    7.2.3 Prosthetics for sensorineural and sensorimotor applications
    7.2.4 Classifying prostheses techniques
    7.2.5 What can neuromorphic systems offer
    7.2.6 References

8 Personal Reports and Comments

A Participants of the 1998 Workshop
B Hardware Facilities of the 1998 Workshop
C Workshop Announcement


Chapter 1

Summary

1.1 Introduction

Neuromorphic engineering is a young field based on the design and fabrication of artificial neural systems, such as vision and hearing chips, head-eye systems, and autonomous robots, whose architecture and design principles are based on those of biological nervous systems. The goal of our annual workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, including individuals working on both the neurobiological and the engineering aspects of sensory systems and sensorimotor integration.

During three weeks in June and July of 1998, the fifth “Neuromorphic Engineering” workshop was held at the Telluride Summer Research Center (TSRC) in Telluride, Colorado. The workshop was directed by its founders, Profs. T. Sejnowski of the Salk Institute and UCSD, C. Koch of Caltech, and R. Douglas of the University of Zurich and the ETH in Zurich, Switzerland, as well as by the new co-directors, S. Shamma and A. Cohen, both of the University of Maryland. G. Indiveri, University of Zurich, and T. Horiuchi, Johns Hopkins University, also served as major coordinators, as the new generation of staff begins to be phased into leadership roles. There were several additional staff and technical assistants, drawn from the various laboratories involved. The workshop hosted a total of 67 participants from academia, government laboratories, and industry, whose backgrounds spanned physics, robotics, computer science, neurophysiology, psychophysics, electrical engineering, and computational neuroscience (see Appendix A for a complete listing).

1.2 Progress

As in previous years, the three-week workshop combined tutorials, lectures, and projects on a wide range of topics. The workshop, however, is evolving, with significantly added emphasis on initiating long-term projects, while using the annual workshops to coordinate interdisciplinary and international cooperation. Projects have focused on multichip neuromorphic systems that provide basic sensorimotor reflexes, and on simple adaptive and learning behaviors for small neuromorphic robots. While this new emphasis may have taken time away from the tutorials, it provided a hands-on experience that many of the participants found fruitful and stimulating.

The total number of participants was deliberately kept low, as this has proven important for forming good working groups. This year, as last year, we emphasized non-visual sensors, placing a focus on audition. This shift encouraged the integration of auditory projects with visual guidance projects. A number of potentially exciting collaborations for future work have sprung up from this integration.

This year we made significant progress on the development of serial address-event representation (SAER), a technological advance that will facilitate efficient and expanded communication both between chips and between different devices. This groundbreaking work is critical for building multichip systems and will continue to be stressed.
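In address-event representation, a spike is transmitted as the address of the neuron that fired, sent at the moment it fires; the serial variant multiplexes that address stream over a single link. The following is only a rough software sketch of the idea — the event format and field width are illustrative assumptions, not the workshop's actual SAER specification:

```python
# Simplified software sketch of address-event representation (AER).
# A spike is communicated as the *address* of the neuron that fired;
# a serial variant (SAER) sends the address stream over a single link.
# The 8-bit address width below is an illustrative assumption.
import heapq

def encode_events(spike_trains):
    """Merge per-neuron spike times into one time-ordered stream of
    (timestamp, address) events -- the essence of an AER sender."""
    events = [(t, addr) for addr, times in spike_trains.items() for t in times]
    heapq.heapify(events)
    return [heapq.heappop(events) for _ in range(len(events))]

def serialize(events, addr_bits=8):
    """Pack each address into a fixed-width word, as a stand-in for
    bit-serial transmission of the address stream."""
    return b"".join(addr.to_bytes(addr_bits // 8, "big") for _, addr in events)

# Two 'neurons' (addresses 3 and 7) with their spike times in ms:
stream = encode_events({3: [1.0, 4.0], 7: [2.5]})
payload = serialize(stream)
```

A hardware sender arbitrates among simultaneously spiking neurons rather than sorting timestamped events, but the merged, time-ordered address stream is the same abstraction.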

We also continued to have considerable success with the small mobile Koala robots, equipped with a Motorola 68331 processor, as a common platform for neuromorphic engineers. The robots, produced and marketed by K-Team in Lausanne, Switzerland, were interfaced with visual and auditory sensors and used for sound tracking and obstacle avoidance, either through tethers or autonomously, by downloading cross-compiled programs over a serial port. Other robots implemented lamprey-like creatures that could generate traveling-wave motions and were used to explore the behavior of systems of coupled oscillators. The lamprey-bots will eventually be equipped with vision and perhaps olfactory sensors. Some of these robots emerged from collaborations established at previous workshops, and others will continue to be the focus of new collaborations.
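The traveling waves such segmented robots produce arise when a chain of coupled oscillators phase-locks with a constant lag between neighboring segments. A minimal software sketch of that mechanism follows; the segment count, coupling strength, and phase lag are illustrative assumptions, not the robots' actual controller parameters:

```python
# Minimal sketch: a chain of coupled phase oscillators settling into a
# traveling wave, in the spirit of lamprey-like locomotion controllers.
# All parameters below (N, OMEGA, K, LAG, DT) are illustrative.
import math

N = 10                    # number of body segments
OMEGA = 2.0               # intrinsic frequency (rad/s), same for every segment
K = 5.0                   # coupling strength
LAG = 2 * math.pi / N     # target phase lag between neighbors (one wave per body)
DT = 0.001                # Euler integration step (s)

def step(phases):
    """Advance each oscillator one step; nearest-neighbor coupling pulls
    each segment toward a fixed phase lag behind its anterior neighbor."""
    new = []
    for i, p in enumerate(phases):
        dp = OMEGA
        if i > 0:
            dp += K * math.sin(phases[i - 1] - p - LAG)
        if i < N - 1:
            dp += K * math.sin(phases[i + 1] - p + LAG)
        new.append(p + DT * dp)
    return new

phases = [0.0] * N
for _ in range(20000):    # integrate 20 s of simulated time
    phases = step(phases)

# After convergence, consecutive segments differ by roughly LAG:
lags = [(phases[i] - phases[i + 1]) % (2 * math.pi) for i in range(N - 1)]
```

With this symmetric sine coupling the inter-segment lags relax to the chosen value, so a full wave of activity travels down the chain — the same qualitative behavior the lamprey-bots exhibit mechanically.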

As a consequence of the Telluride workshop experience, the community of neuromorphic engineers has now grown to encompass a web of researchers at many universities and companies, both in the US and in Europe. Participants who shared the intense non-stop experience of former workshops now send students to the workshop, and several request an opportunity to return themselves. The community is growing, moving to new institutions, and spawning new collaborations.

1.3 Future Aims

Our specific goals for next year's workshop can be summarized as:

1) Expansion of projects on auditory and olfactory processes, and more sophisticated visual perceptual work.

2) Expansion of multi-modality systems that involve attentional mechanisms, sensorimotor reflexes, and adaptive behaviors in small neuromorphic robots.

3) Selection and enhancement of a few long-term collaborative projects that contribute to the current central research questions of neuromorphic engineering, such as applications of floating-gate technology and the fabrication and testing of multichip neuromorphic systems. This is an important step towards making the workshop an annual meeting ground for exploring new and innovative research directions.


Chapter 2

Telluride 1998: the details

Much of the basic organization for Telluride 1998 was identical to that of previous years, except that everything was more streamlined.

2.1 Applications to Workshops

We announced the workshop on January 22, 1998, via our existing Telluride home page on the World Wide Web and via email to previous workshop participants as well as to various mailing lists.

Examples of mailing lists to which we announced the workshop are:

[email protected] — Includes the Southern California neural network and neuromorphic engineering community.

[email protected] — International mailing list with at least 1,000 subscribers in the neural network/connectionist area.

[email protected] — Mailing list for computational neuroscience, primarily the group that attends the annual CNS Meetings in July.

The text of the announcement is listed in Appendix C.

We received 76 applications, of which we selected 32 as participants for the workshop. We also invited a few key speakers from academia, government facilities, and industry to contribute presentations and to participate in the workshop.

The number of well-qualified applicants was high, and many of the applicants who were not accepted would have made good participants. The selection of participants for the workshop was made by the three main organizers of previous workshops (R. Douglas, C. Koch, and T. Sejnowski), all of whom received copies of the full applications.

We selected participants who had demonstrated a practical interest in neuromorphic engineering; had some background in psychophysics and/or neurophysiology; could contribute to the teaching and the practicals at the workshop; could bring demonstrations to the meeting; were in a position to influence the course of neuromorphic engineering at their institutes; or were very promising beginners. Finally, we were very interested in increasing the participation of women and under-represented minorities in the workshop, and we actively encouraged applicants from companies. Travel expenses for participants from industry were paid by the companies.

The final profile of the workshop was satisfactory (see Appendix A). The majority of participants were advanced graduate students, post-doctoral fellows, or young faculty; six participants came from non-academic research institutions (D.D. Lee, M. Lades, M. Slaney, N. Srour, M. Tilden, A. Wuensche) and one from industry (M. Nomura). Of the 55 selected participants (not counting organizers and staff), 31 were from US institutions, with the remainder coming from Switzerland (9), United Kingdom (4), Australia (2), Austria (2), France (2), Canada (1), Germany (1), Israel (1), Japan (1), and Spain (1). Nine of the participants were women, and one was African-American.

2.2 Funding and Commercial Support

This year we asked participants to pay a registration fee of $250 in order to reduce the workshop costs. The registration fee covered mainly the running expenses of the workshop. The main workshop costs (including student travel reimbursements, equipment shipping reimbursements, accommodation expenses, etc.) were funded by the following sources:

• The U.S. National Science Foundation (40% of the total funding)

• The Gatsby Foundation, London (23% of the total funding)

• The Engineering Research Center, Caltech (12% of the total funding)

• NASA (11% of the total funding)

• The Office of Naval Research (8% of the total funding)

We would also like to thank the following companies for their support:

• Tanner Research, for providing the VLSI layout tools L-Edit, T-Spice, and LVS.

• K-Team, for providing and supporting the Khepera and Koala robots.

• Altera Corp., for providing a UP1 student development board.

• The MathWorks, Inc., for providing the software package MATLAB.

2.3 Local Organization

Much of the workshop organization was handled by an interactive webpage that allowed participants to log in and select their accommodations (e.g., room type, roommates), enroll in various workshops, inform the organizers of any hardware they planned to bring, and request software packages that they needed. The webpage also provided full accounting details (actual and estimated expenses) to the organizers at all times, including information about each participant's expenses and what fraction of those expenses the workshop would reimburse.

The exact amount of funding was not expected to be available until after the workshop, at which time a quick, accurate assessment of our expenses would be necessary to speed up the reimbursement process.

The information we gathered through the webpage before the beginning of the workshop allowed us to better organize the lectures and tutorials on the one hand, and to improve the housing arrangements on the other (for example, participants had a chance to choose their condo-mates and condominium locations even before arriving in Telluride). All of the housing arrangements were carried out in collaboration with the Telluride Summer Research Center (TSRC), using the workshop's interactive web pages. By obtaining longer-term contracts with local condominiums, we were able to provide adequate housing at reasonable rates.


As in previous years, the workshop itself took place in an old but beautifully renovated public school near the center of town. We had four large rooms available for our workshop:

1. for the talks and tutorials,

2. for the aVLSI CAD workstations, personal laptops, an olfactory system and related equipment,

3. for the circuit testing beds, the auditory processing equipment, and building/testing space for the walking robots,

4. for the pan-tilt/silicon retina tracking setup and for building, programming and experimenting with the Khepera and Koala robots.

The TSRC also rented the school and provided us with a very able local assistant for photocopying, buying supplies, and local public relations.

We interacted with the local community by giving three public talks and a special presentation for the local children's summer program (for children between the ages of 8 and 14), and by marching under the banner of “Neuromorphic Engineering” in the local 4th-of-July parade. For the second consecutive year we won second place for our imaginative rendering of “Neuromorphic Engineers”, losing the first-place honor (again) to the local “Cows of Telluride”.

2.4 Setup and Computer Laboratory

The software/hardware setup lasted from Wednesday, June 24 to Saturday, June 27. Appendix B contains the list of all the hardware facilities that were present at the workshop. With support from two system administrators, the setup of 20 computers went relatively smoothly. The computers were fully networked and provided various internet services such as remote logins, file transfers, printing, electronic mail, and a World Wide Web server.

The computers were divided into three usage areas. The first was a general computer lab for running simulations, designing circuits, running demos, writing papers, and general internet access such as web browsing. A second set of computers was used to control robots and to collect and process data from them. The third set of computers was used as VLSI test stations.

Each participant at the workshop was given an account from which they could read and send electronic mail and transfer demonstration programs, or operate those programs from their home computers through the network. Standard software was also available, including various simulation and design packages that were specifically requested before the beginning of the workshop, such as NEURON, GENESIS, ANALOG, L-Edit, and Matlab.

Throughout the entire course, we supported workstations, robots, oscilloscopes, and other test devices brought by the participants.

A World Wide Web site describing this year's workshop, including the schedule, the list of participants, its aims, and its funding sources, can be accessed at http://www.ini.unizh.ch/telluride98 [1].

A Web site with information about the 1994, 1995, 1996, and 1997 workshops can be accessed at http://www.klab.caltech.edu/~timmer/telluride.html

The computer lab proved very valuable: it not only allowed participants to demonstrate their software and instruct others in its use, but also gave others the opportunity to suggest improvements.

[1] We strongly recommend that the interested reader scan this homepage. It contains many photos from the workshop, reports, lists of all participants, etc.


A large truck was rented in Pasadena, CA, loaded with computers and chip-testing equipment from various CNS laboratories at Caltech, and driven to Telluride. There, heavy-duty extension cords were strung from six other rooms to provide enough power for all of the computers. At the end of the course, the computers were returned by the same truck to Pasadena. Robots and some computers were also shipped from the Institute of Neuroinformatics in Zurich, from the University of Maryland, and from Johns Hopkins University.

2.5 Workshop Schedule

The activities in the workshop were divided into three categories: formal lectures, tutorials andproject workgroups.

The lectures were attended by all of the participants. They were presented at a sufficiently elementary level to be understood by everyone, but were nevertheless comprehensive. The first series of lectures covered general and introductory topics, whereas the later lectures covered state-of-the-art research in the field of neuromorphic engineering. We found that two hour-and-a-half lectures in the morning session, rather than three one-hour lectures, were better for covering a topic in depth and for allowing adequate time for questions and discussion.

The afternoon sessions consisted mainly of tutorials and workgroup projects, whereas the evenings were used for the discussion group meetings (which would often continue late into the night).

Sundays were left free for participants to enjoy the Telluride scenery. Typically, participants would go hiking. This was a valuable opportunity for people to discuss science in a more informal atmosphere and catch up on the various projects being carried out by the other participants.

The schedule of the workshop activities was as follows:

Sunday 28 June

• Arrive in Telluride; condo check-in

• Evening:
17:00 Welcome cocktail party (to be held at Christof Koch's place! Details to follow)
19:00 Tour of the workshop facilities (at the schoolhouse)

Monday 29 June

• Morning lectures:
Welcome and Workshop Intro (Christof Koch & Staff)
Coffee and Donut Break (as well as tea, bagels and croissants!)
Basic Biophysics and Neuron Models (Christof Koch)
Go eat lunch! (see bottom of page)

• Afternoon:
14:00 - 16:00 Workgroup descriptions (see bottom of page)
  * Basic aVLSI tutorial overview (G. Indiveri, J. Kramer)
  * Floating-gate workgroup overview (C. Diorio)
  * AER workgroup overview (T. Horiuchi)
16:00 - 18:00 Discussion group proposals/organization (see bottom of page)

• Evening:
19:30 Introductions by participants (see bottom of page)

Tuesday 30 June

• Morning lectures:
Circuits in Neocortex (Rodney Douglas)
Neuromorphic aVLSI Systems (Giacomo Indiveri)

• Afternoon:
14:00 Neuromorphic Behaving Systems (P. Verschure)
Workgroup descriptions:
  * aVLSI vision sensors for behaving robots (G. Indiveri)
  * Silicon Hearing Chips (A. van Schaik)
  * Locomotion/Robots workgroup overview (M. Tilden, A. Cohen)

• Evening:
19:30 Joint aVLSI & Floating Gate Tutorial Lecture (Chris Diorio)

Wednesday 1 July

• Morning lectures:
Computation in the Auditory System (Leslie Smith)
Computational Vision for Sensorimotor Control and Navigation (Jim Clark)

• Afternoon workgroups:
1:30 pm - 2 pm: Robot tools introduction
2 pm - 4 pm: Floating Gate Tutorial
4 pm - 6 pm: aVLSI Tutorial

• Evening:
19:30 On-Chip Learning Discussion Group
Individual work on projects

Thursday 2 July

• Morning lectures:
Towards Design Principles for Locomotion: Lessons from Biology (Avis Cohen)
Acoustic Sensor Technology for Army Applications (Nino Srour)

• Afternoon workgroups:
12 pm - 2 pm: First Locomotion Project Group meeting
2 pm - 4 pm: Floating Gate Tutorial (lecture room); Auditory Discussion Group
4 pm: First AER project group meeting/lecture (VLSI room)
4 pm - 5 pm: aVLSI Tutorial

• Evening:
17:00 BBQ at Telluride Lodge
20:00 - 21:00 Public Lecture (Rodney Douglas)
Individual work on projects

Friday 3 July

• Morning lectures:
10:30 am - 12:00 pm: Design, Evolution and Analysis of Biologically-Inspired Control Systems for Walking (Randy Beer)
12:30 pm - 2:00 pm: About Cochlear Models, the Psychophysical Scale for Speech Perception and Speech Recognition (Andreas Andreou)

• Afternoon workgroups:
2 pm - 4 pm: Floating Gate Tutorial
3:30 pm: Auditory Project Group
4 pm - 6 pm: aVLSI Tutorial

• Evening:
6:00 pm: TNose Project Group
7:30 pm: What is Computation? Discussion Group
9:30 pm: Neuromorphic Engineering for Prosthetics Discussion Group
Individual work on projects
Preparations for Independence Day Parade!

Saturday 4 July

• Fun:
9:00 am: Meet at schoolhouse to prepare for parade
9:30 am: Line up at the parade start (Colorado and Willow?)
10:00 am: Independence Day Parade! Followed by BBQ lunch (Town Park)

• Evening:
After dark: Independence Day fireworks
Individual work on projects

Sunday 5 July

• Free

Monday 6 July

• Morning lectures:
The AER Communication Protocol (Kwabena Boahen)
Spike-based Computation (Wolfgang Maass)

• Afternoon workshops:
2 pm - 4 pm: Auditory Group meeting; Floating Gate Tutorial
4 pm: AER project group meeting
4 pm - 6 pm: aVLSI Tutorial

• Evening:
7:30 pm: On-Chip Learning Discussion Group
7:30 pm: Genetic Encoding Discussion Group
9:30 pm: Neuromorphic Engineering for Prosthetics Discussion Group
11:00 pm: Neuron modeling tutorial
Individual work on projects

Tuesday 7 July

• Morning lectures:
Analysis of the Lamprey Locomotion System (Avis Cohen)
Modeling the Swimming of Eel-like Creatures (Thelma Williams)

• Afternoon workshops:
12:00 - 2:00 pm: Locomotion Project Group
2 pm - 4 pm: Floating Gate Tutorial
4 pm: AER Project Group
4 pm - 6 pm: aVLSI Tutorial

• Evening:
7:30 pm: Visual Motion Discussion Group (Alan Stocker: "Computation of Optical Flow in a Cooperative Manner - an aVLSI Implementation")
7:30 pm: On-Chip Learning Discussion Group
Individual work on projects

Wednesday 8 July

• Morning lectures:
Recurrent Neuronal Circuits in Cortex (Rodney Douglas)
VLSI Implementations of Pattern Generators and Intersegmental Coordination (Stephen DeWeerth)

• Afternoon:
1:00 pm: River rafting trip (be sure to sign up both in the robot room and at Telluride Sports!)

• Evening:
7:30 pm: What is Computation? Discussion Group
9:30 pm: Neuromorphic Engineering for Prosthetics Discussion Group
Individual work on projects

Thursday 9 July

• Morning lectures:
Silicon Retinas and CMOS Imagers (Tobi Delbruck)
Silicon Motion Sensors (Chuck Higgins, Reid Harrison)

• Afternoon workshops:
12:00 - 2:00 pm: Locomotion Project Group
2 pm - 4 pm: Floating Gate Tutorial
4 pm - 6 pm: aVLSI Tutorial

• Evening:
17:00 BBQ at Telluride Lodge
20:00 - 21:00 Public Lecture (Mark Tilden)
Individual work on projects

Friday 10 July

� Morning lectures:Modeling area MT cell response (MT-MST models)(MasahideNomura)

To be announced (VLSI neu MOS) (Brad Minch)

12:00 - 1:00 pm: Machine Olfaction (Tim Pearce)

• Afternoon workshops: 2 pm - 4 pm: Floating Gate Tutorial

4 pm - 6 pm: aVLSI Tutorial

5:00 pm - 6:30 pm - AER Project Group Lecture - Arbiters I (Lecture Room)

6:30 pm - 7:00 pm - AER Project Group Progress Meeting (Lecture Room)

• Evening: 7:00 pm Helicopter Competition Video!

7:30 pm Flying Robot Discussion Group (Lecture room)

Individual Work on Projects

Saturday 11 July

• Morning lectures: Bottom-Up and Top-Down Models of Visual Attention (Christof Koch)

Premotor theory of attention (Jim Clark)

Robot Competition! - Mark Tilden

• Afternoon workshops: 2 pm - 4 pm: Floating Gate Tutorial

• Evening: Optional overnight mountain hike

Individual Work on Projects

Sunday 12 July

• Free day

• Evening: 8:00 pm: Review of progress

Monday 13 July

• Morning lectures: Attention and Intention in Parietal Cortex (Terry Sejnowski)

Oculomotor Control Systems (Terry Sejnowski)

• Afternoon workshops: 2 pm - 4 pm: Floating Gate Tutorial

4 pm - 6 pm: aVLSI Tutorial

5:00 pm - 6:30 pm: AER Project Group Lecture - Arbiters II

• Evening: 7:30 pm: Attention and Motor Control Discussion Group (An aVLSI, Spike-Based Attentional System (Timmer Horiuchi))

9:30 pm: On-Chip Learning Discussion Group

Individual Work on Projects

Tuesday 14 July

• Morning lectures: The Auditory vs. the Visual Pathway (Shihab Shamma)

Energetics of Information Processing (Andreas Andreou)


• Afternoon workshops: 12:00 - 2:00 pm: Locomotion Project Group

2 pm - 4 pm: Floating Gate Tutorial

4 pm - 6 pm: aVLSI Tutorial

5:00 pm: AER Project Group Research Presentations: 1. Chuck Higgins, 2. Steve DeWeerth

• Evening: 7:30 pm - Visual Motion Discussion Group (Chuck Higgins: “Location of Optical Flow Singular Points using an aVLSI sensor”)

Individual Work on Projects

Wednesday 15 July

• Morning lectures: Analog VLSI building blocks for an electronic auditory pathway (Andre van Schaik)

Computational Models of Audition (Malcolm Slaney)

12:30 - 2 pm: Attention and Motor Control Discussion Group

• Afternoon workshops: 2 pm - 4 pm: Floating Gate Tutorial

4 pm - 6 pm: aVLSI Tutorial

4:00 pm - AER Project Group - Arbiters III & project progress discussion

• Evening: 7:30 pm What is Computation? Discussion Group

9:30 pm On-chip Learning Discussion Group

Individual Work on Projects

Thursday 16 July

• Morning lectures: 8:30 am - Assessing Observability of Visual Tracking Techniques (Nicola Ferrier)

10:30 am - AER Project group meeting - Arbiters IV

• Afternoon workshops: 12:00 - 1:00 pm: Locomotion Project Group

1:00 pm - 2:00 pm: “The Dynamics of Discrete Networks: Implications on Self-Organization and Memory” (Andy Wuensche)

2 pm - 4 pm: Floating Gate Tutorial

4 pm - 6 pm: aVLSI Tutorial

• Evening: 17:00 BBQ at Telluride Lodge

20:00 - 21:00 Public Lecture

Individual Work on Projects

Friday 17 July

• Morning: Group presentations/demos

• Afternoon: Work on project group and personal reports

18:00 some ROBOT ROOM computers ARE SHUT DOWN!

• Evening: 20:00 Dinner at Leimgruber’s with Award Ceremony

Saturday 18 July

• Morning & Afternoon: 12:01 am ALL COMPUTERS ARE SHUT DOWN!!!

PACK UP and LOAD TRUCK

Sunday 19 July

• Check-out and departure

(remember! no default housing on this evening)


Chapter 3

Tutorials and Project Workgroups

The three tutorials were an opportunity for participants to acquire hands-on experience with the dominant technology upon which Neuromorphic Engineering is based. These tutorials were crucial for disseminating practical knowledge amongst the participants, especially for newcomers to learn the basics and quickly come up to speed with the rest of the group. For many biologists and computational modelers, this was their first friendly opportunity to learn about analog VLSI technology with an eye towards neural computation.

In the following, the major contributors to the tutorials and projects are listed in parentheses.

3.1 VLSI Basics

(Liu, Indiveri and Kramer)

In this practical, we covered topics ranging from transistor physics and characteristics to simple circuits and circuit technology. We also demonstrated the use of software tools to simulate circuits, to do circuit layout, and to use layout-versus-schematic (LVS) verification tools.

Participants attended daily lectures on these topics and either did hands-on measurements of transistor characteristics with chips that were prefabricated for this tutorial, or learned to use various software tools on the workstations provided. In the hands-on labs, participants also learned to use different test equipment and software for collecting data from the chips.

Most participants found that the three weeks allocated for this practical were insufficient to fully grasp the material, although they did report that they had sufficient information to pursue this exercise on their own. We provided the participants with documentation of the lab exercises, lecture notes, and public-domain circuit simulation software for future use at their own university or research institution.

3.2 Floating Gates

(Diorio, Minch and Hasler)

This practical consisted of extensive lectures on the physics of hot-electron injection, tunneling, and high-voltage circuits for the control of floating gates. An excellent collection of notes was provided. Experiments were done with floating-gate chips, fabricated for the workshop tutorial, containing silicon-synapse circuits that adapted over a time period measured in days. In the laboratory sessions the use of pulse-modulation techniques for incremental control of the floating-gate voltage


was investigated. Participants in this tutorial already had a basic knowledge of circuit design, but found the topic of (analog) floating-gate storage new and exciting.

3.3 Interchip communication

(Boahen and Horiuchi)

As in previous years, Kwabena Boahen gave a comprehensive series of lectures on pulse generation and integration, formalisms for self-timed communication protocols, and intra-chip arbitration. One-dimensional retina sender-receiver chips were available for hands-on investigation, as well as two-dimensional systems for demonstration.

This year there was increased participation in this tutorial and in the projects involving this technology (see also Section 6.2). This reflects the fact (which also emerged last year) that there is a growing need to develop interchip communication techniques in order to design more elaborate multichip systems and to interface these new sensors to robots and other digital systems.

3.4 Project Workgroups

The workgroup meetings gave people with common interests a chance to get together and discuss their area in detail, to establish the most pressing questions in that area, to determine the state of the art, and to make plans for future developments. Most importantly, project workgroups gave many of the participants the opportunity to investigate their research topics practically, using the infrastructure and the tools offered by the workshop. In particular, the workgroups typically consisted of participants of different experience levels, different scientific backgrounds, and different institutions.

The project groups covered topics that ranged from artificial olfactory systems to flying robots. In order to provide a comprehensive summary, we divided the projects into three main categories: neuromorphic robots, auditory processing, and address-event representation. Some of the groups spent many nights agonizing over the various technical points, which are only briefly summarized in the following section.


Chapter 4

Neuromorphic Robots

4.1 Introduction

(Paul F.M.J. Verschure)

This year the Neuromorphic robotics working group turned out to attract many of the registered participants in the workshop. In that sense a trend continued from the previous year. During the 1997 workshop a concentrated and coherent effort was made to introduce mobile robots into the workshop program. The collaborations established during 1997 have led to a number of publications by different workshop participants on experiments which originated at the workshop, and to the organization of a workshop on its central theme at the conference Neural Information Processing Systems 98. This demonstrates that the combination of mobile robots and neuromorphic engineering can provide an ideal vehicle to address issues central to the field, such as the real-world evaluation of neuromorphic devices and the importance of addressing system-level issues involving hybrid (digital-analog) solutions. Most importantly, however, this approach facilitates collaborative efforts between different subgroups active during the workshop. Hence, one goal of this year's working group was to strengthen the collaborative components of the workshop using mobile robots as its medium. Based on the discussions and projects developed during 1997, the practical aim of the workshop was to organize subprojects in such a way that they could converge on a common goal: “a complete system”. This plan is illustrated in Figure 4.1.

The aim was to realize a system which would combine visual, auditory, and olfactory neuromorphic systems in the control of a mobile robot. This goal was a natural step building on the results of the previous workshop, where these sensory systems were investigated in isolation. Roughly, the task would consist in making the six-foot-tall walking robot Roswell (Mark Tilden) move towards sound sources, associate the basic orienting responses with particular visual features, and subsequently redisplay these orienting responses in the absence of the auditory cues. These learned responses would be triggered by the detection of a specific smell by the olfactory system. This ambitious goal could motivate individual participants of the workshop, facilitate activities in different subgroups by providing goals, and allow collaborations across their boundaries.

4.1.1 Infrastructure

In order to facilitate the realization of the overall goal of the working group, the following steps were taken before the workshop:


[Figure: five subprojects feed a common mission (“build a ‘complete’ system; win the 4th of July parade”): Vision/Retinae (Giacomo Indiveri, Tobi Delbruck — boards, interfaces, software; robot control using 1D and 2D retinae; feature extraction), Olfaction/TNose (Tim Pearce — TNose hardware, simulation, and interface; model of glomeruli; olfactory encoding; feature extraction), Audition/Cochlea (Andre van Schaik, David Klein, Andreas Andreou — boards, interfaces, software; binaural sound-source localization; robot control through sound cues, speech, pitch), Learning (Paul Verschure — Khepera-Koala, simulation; Distributed Adaptive Control; alternative learning models; learning on mobile robots; model of a saliency system; visual saliency; control of orienting responses), and Locomotion (Avis Cohen, Mark Tilden — walkers, simulation, interface; models of locomotion: lamprey CPG, Nervous Nets; Roswell).]

Figure 4.1: Neuromorphic robotics working group plan

• Previous events have shown that the major problems in realizing projects at a more mature scale are found at the interfaces between the different technologies involved. In anticipation of this problem, discussions were initiated before the workshop on the basic interface properties of both visual and auditory neuromorphic sensors.

• In order to facilitate the use of mobile robots, different software packages were put together, tested, and documented using HTML (with the assistance of Regina Mudra, a participant of the 1998 workshop, and Mark Blanchard, who participated in 1997 and is presently working at INI-Zurich), based on well-tested environments used in our own research.

– IKhep: a small-scale C-based package with a MatLab graphical user interface that provided an easily accessible environment for experiments with the microrobot Khepera and the mobile robot Koala (with Regina Mudra).

– KhepDac: a C-based application implementing neural learning systems which are interfaced to a mobile robot.

– RetinaMove: a C program with a MatLab GUI for integrating an aVLSI retina in the control of a Koala robot. This program could be cross-compiled to the local CPU of the


robot to support autonomous operation. This program helped people to get started using aVLSI devices on Koala (with Giacomo Indiveri and Mark Blanchard).

– IQR421: a distributed simulation environment for the construction of large-scale neural systems which can be interfaced with external devices (robots, cameras, aVLSI retinae, etc.). IQR421 was used to realize the larger-scale projects (with Mark Blanchard).

• In order to support the activities in the working group, a number of lectures were prepared dealing with the pragmatics of this type of research, its basic methodology, and a number of issues relating to neural forms of behavioral control.

• K-Team (Lausanne, Switzerland), the producer of the mobile platforms used, again provided a large number of Koala and Khepera robots in support of the workshop.

4.1.2 Results

Overall, the working group has been very effective in creating a large number of constructive activities, which are discussed in the individual reports below. Many of these activities had an impact on future research pursued by the participants. The central subprojects aimed at the realization of the “complete” system were:

1. A software-based system which allowed Koala to orient towards sound sources (see Section 4.4).

2. Orienting responses to auditory cues implemented on Koala using aVLSI cochleas and a software-based interface (see Section 5.1).

3. Interfacing Koala to existing vision chips (see Sections 4.6, 4.10, 6.3).

4. The development of a tracking system using an active pan-tilt unit and a silicon retina (see Section 4.9).

5. The development of a model of the olfactory bulb used in the classification of olfactory cues derived from an artificial nose (see Section 4.3).

6. The development of an autonomous mobile robot based on models of lamprey locomotion, using central pattern generators (see Section 4.11).

In conclusion, the overall goal of a “complete” system was not achieved, for very instructive reasons, although tremendous progress was made in the subprojects. The main problems were of a practical nature. For instance, project 1 prepared the ground for the inclusion of aVLSI cochleas in the orienting task, which were initially tested in project 2. Project 1 not only realized all the necessary control but also provided the basic software interfaces for the cochleas using IQR421 (this involved a large amount of on-the-spot software development in an infrastructure which was not set up for that purpose). The aVLSI cochlea system, which performed very well in isolation, was, however, not easily accessible for further experiments given the risk of damaging the setup. This excluded further integration. A similar problem occurred with the 2D aVLSI retina, with which very interesting experiments were performed in isolation, but which unfortunately did not survive the last week of the workshop. A similar fate befell Roswell, which burned out a number of its motors during a test run. Hence, our ambitious goal of a “complete” system provided


a very important reality check for the technology we attempt to develop. It showed that a lot can be achieved at a small scale, a confirmation of last year's experience, and as such it facilitated these activities tremendously. However, a large part of our future research is at the level of systems, and exactly at this level a number of serious limitations of our present capabilities were revealed. Each failure, however, provides an important lesson. One observation is that the failures were in the domain of the technology employed, and not in that of our concepts for sensory processing or behavior control. Our observations on, and especially experience of, the above technical limitations are driving ongoing research. The above subprojects have extended beyond the duration of the workshop and are being pursued in a further collaboration between Zahn, Klein, Verschure, and others.

The secondary goal of winning the 4th of July parade was also not achieved, despite the inclusion of a “dancing” Koala couple (with Andre van Schaik). Again second place was reached. Further inquiry revealed that the jury had anticipated better choreography from the human participants. The different demos prepared for the annual open house for local school children proved to be very effective and received a strongly positive response.

The working group on Neuromorphic Robotics has provided an important platform to evaluate our present technology. For instance, about 15 participants were introduced to the package IQR421. With several of them, further collaborations have been established based on this software environment. Presently, at the computer science department in Graz, Austria (Maass), the package KhepDac is used in a course on models of learning applied to mobile robots.

4.2 Learning: on-line learning using the Khepera robot

(Ranit Aharonov-Barki, Yuri Lopez de Meneses, Nicol Schraudolph)

We designed an artificial neural network which used Reinforcement Learning (RL) to learn to control a Khepera minirobot. The task of the robot was to find a target area while avoiding obstacles scattered around the arena. The robot had 8 IR sensors and a one-dimensional retina outputting an array of light intensities. A light source is placed on the goal area, so the robot can use the retinal input to track its position. The robot receives a reinforcement signal from its environment. Bumping into obstacles (detected by saturating the IR proximity sensors) is penalized, and reaching the goal is highly rewarded, but most of the time no reinforcement signal is received.

Most previous work in RL used Q-learning, a paradigm whereby the neural network learns to match state-action pairs in order to maximize the reward. In our case, the number of actions is infinite, because the motor commands are essentially continuous. Therefore we use a neural network which issues stochastic motor commands as a function of the sensory state and, in parallel, estimates the expected reward from the sensory state and the current action (motor command). By learning the parameters of the stochastic motor process, the network can produce appropriate behavior for a given situation, ranging from deterministic (exploitation) to stochastic (exploration).

4.2.1 The Network Design

The network we designed is composed of two sub-networks that implement two different functions. The action network maps the sensory state (input) to motor commands (i.e., motor velocities), while the prediction network maps sensory input and planned action to a scalar called expected reward or


[Figure: input units project through a layer of weights to units coding the mean and sigma of the left and right motor speeds, and to a unit computing the expected reward.]

Figure 4.2: Structure of the neural network.

utility (U) (see Figure 4.2). Both networks are trained using backpropagation (BP), but with a different propagated error. The prediction network's goal is to predict the future reward associated with taking an action at a certain state, and it is thus trained using a standard RL prediction error. The action network's goal is to maximize U, and it is therefore trained by back-propagating a constant positive error (1 by default). So there are two processes acting in parallel, one working to find a policy to maximize U and the other trying to estimate the U of that policy. We expect that the combination of the two processes will lead the first network to estimate the real reward of the optimal policy (U*), as computed by the second network.

The network consists of 23 input nodes: the 8 readings of the IR sensors and 15 retinal inputs, which are low-pass filtered readings sub-sampled at every tenth value. The sensory input layer is propagated to 4 neurons that code the mean and standard deviation of the motor velocities. The actual velocities are then calculated by:

L_motor = MOTOR_GAIN · (μ_L + σ_L · triangle())    (4.1)

R_motor = MOTOR_GAIN · (μ_R + σ_R · triangle())    (4.2)

where triangle() is a random process with a triangular probability distribution, μ can have a value between -1 and 1, and σ is between 0 and 1. This is equivalent to the motor neurons having a synaptic weight of MOTOR_GAIN to the μ neurons and a weight of c = MOTOR_GAIN · triangle() to the σ neuron. The value of connection c is randomly chosen at each iteration, and


stored for the error backpropagation step. The second network propagates the sensory input as well as the actual motor commands through a single layer of weights, onto a linear-output neuron that computes the estimated utility U.
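For illustration, the stochastic command generation of Eqs. (4.1) and (4.2) can be sketched as follows. This is our Python sketch, not the group's KrOS C code; the gain value and the particular triangular sampler are assumptions:

```python
import random

MOTOR_GAIN = 20.0  # illustrative value; the actual gain is not given in the report

def triangle():
    # Zero-mean sample with a triangular density on [-1, 1]
    # (sum of two independent uniforms on [-0.5, 0.5])
    return (random.random() - 0.5) + (random.random() - 0.5)

def motor_speed(mu, sigma):
    # Eqs. (4.1)/(4.2): speed = MOTOR_GAIN * (mu + sigma * triangle());
    # mu in [-1, 1] sets the mean command, sigma in [0, 1] the exploration noise
    return MOTOR_GAIN * (mu + sigma * triangle())

left_speed = motor_speed(0.5, 0.1)   # stochastic command (exploration)
right_speed = motor_speed(0.5, 0.0)  # sigma = 0: deterministic (exploitation)
```

With σ = 0 the command collapses to MOTOR_GAIN · μ, so the σ outputs directly set the exploration/exploitation trade-off described above.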

The error in the utility prediction is computed as δ(t) = γ · U(t) − U(t − 1) + r(t), where r(t) is the current reinforcement signal, or reward. This estimation error is then back-propagated to the sensor input layer and to the motor-command neurons. The weight update on the sensor-to-utility network is a standard, one-layer gradient descent. The same error is used to modify the weights in the motor-command-to-utility layer (the dashed arrows in Figure 4.2). The update on the second network is a little trickier, since the error is not the same (we are trying to maximize the utility) and some of the weights are not modifiable. We back-propagate a constant, positive error δ = 1, which is equivalent to a desired target value of U + 1. This error is back-propagated through the network, but it only modifies the weights in the sensor-to-μ and sensor-to-σ layers.
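A compact numerical sketch of this twin update (ours, simplified to a single linear layer and only the μ weights; all names, sizes, and learning rates are illustrative assumptions, not taken from the group's implementation):

```python
import numpy as np

N_IN, GAMMA, LR = 23, 0.9, 0.01
rng = np.random.default_rng(0)

w_util = rng.normal(0.0, 0.1, N_IN + 2)  # (sensors + 2 motor commands) -> utility U
w_act = rng.normal(0.0, 0.1, (2, N_IN))  # sensors -> (mu_left, mu_right)

def utility(x, motors):
    # Linear-output utility neuron over sensory input and actual motor commands
    return np.concatenate([x, motors]) @ w_util

def learn_step(x, motors, prev_U, reward):
    global w_util
    U = utility(x, motors)
    # Prediction network: TD-style error delta = gamma*U(t) - U(t-1) + r(t),
    # applied as one-layer gradient descent on the utility weights
    delta = GAMMA * U - prev_U + reward
    w_util = w_util + LR * delta * np.concatenate([x, motors])
    # Action network: back-propagate a constant positive error (+1) through the
    # motor-command-to-utility weights; only the sensor-to-mu weights change
    err_motors = 1.0 * w_util[N_IN:]
    w_act[:] += LR * np.outer(err_motors, x)
    return U
```

The two processes run in parallel, as in the text: one call per time step both refines the utility estimate and pushes the policy toward higher predicted utility.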

4.2.2 Work done in Telluride

Presently we have coded the neural network and update algorithm in C, using KrOS, the Khepera Operating System cross-compiler. The robot thus learns in a fully autonomous way, the serial connection being used only to display on a terminal the messages sent by the robot. The robot tries to learn to reach the goal area, marked by the light source and a dark paper that can be detected by an additional light sensor pointing at the ground. The robot has 10 trials of 100 steps each to learn. In between trials, the robot repositions itself randomly and autonomously by switching to a Braitenberg obstacle-avoidance behavior, also coded in a neural network.

Figure 4.3: The Khepera robot, equipped with CSEM's EDI retina, uses a light detector in the front to detect the goal area (dark ground).


4.2.3 Conclusion and future work

At present the robot has not yet learned to navigate in a test area. Much time had to be spent developing the network and update algorithm, as well as compiling and debugging. We intend, however, to pursue this project after the Neuromorphic Workshop ends. On one side, Yuri Lopez de Meneses will fine-tune the robot and conduct the real-robot experiments, and Nicol Schraudolph will integrate his ELK1 weight-update algorithm to improve learning. Ranit Aharonov-Barki will conduct a similar experiment on the Webots Khepera simulator, using genetic algorithms (GA) to solve the same problem. The Webots software's compatibility with the real Khepera should allow us to test the GA-evolved solution on the real robot too, and to establish a comparison between both approaches.

4.3 Olfaction: modeling and experimenting with an artificial nose

This report describes the progress made on the artificial nose project (which became affectionately known as 'TNose') during the 1998 Telluride Neuromorphic Workshop. The purpose of attending the workshop was to combine the artificial nose measurement and instrumentation set-up developed at Tufts University Medical School, Boston, USA by Tim Pearce and others¹ (described in Section 4.3.2) with the neuronal modeling package Xmorph, developed at the Institute of Neuroinformatics, ETH, Switzerland by Paul Verschure. For the first time, this would provide an opportunity both to collect biologically realistic sensory data (i.e., from large numbers of chemical sensors that are broadly tuned) and to permit signal processing using biologically plausible neuronal models.

While existing electronic nose systems typically make use of small arrays of non-specific, broadly-tuned chemical sensors, these tend to comprise only one sensor of each type or class in order to maximize the diversity of the array as a whole. The biological system, however, not only possesses a large repertoire of olfactory receptor protein classes (estimated to be between 300–1,000 in mammals); these are also deployed in large numbers (approximately 10^7 in humans). The replication of olfactory receptor proteins across a large population probably plays a number of different roles in the olfactory pathway, but it is clear that one emergent property of this arrangement is a sensitivity enhancement, brought about by an increase in certainty in the sensory signal. Practical artificial nose systems have yet to take advantage of an analogous implementation of this mechanism in the biology, which is the purpose of this study. By having just one sensor of each class, artificial noses are generally deprived of the opportunity to perform any statistical estimates of the signal, and so demonstrate limited sensitivity.

In the optically-based measurement set-up used in this study, we deploy large numbers of ostensibly identical dyed silica microspheres which display sensitivity to a wide range of chemicals. This enables us, for the first time, to exploit the statistics of the data obtained from an artificial nose in order to investigate strategies for sensitivity enhancement. While standard statistical analyses of these data have already been made (outlined in Section 4.3.2), the Xmorph neuronal modeling package provides an opportunity to apply some biologically plausible processing strategies (outlined in Section 4.3.4).

4.3.1 Biological Relevance

A model for the way information is integrated at the first processing site of the olfactory bulb is shown in Figure 4.4. Molecular stimuli diffuse through a mucous layer coating the olfactory epithelium to interact with a large population of receptor neurons of 300–1,000 different types. Receptors demonstrate broadly-tuned responses to a wide range of odorants, displaying peak sensitivity to particular groups of compounds. Receptor neurons generally depolarise to generate action potentials, and it is the pattern of activation across the receptor population as a whole that is thought to encode the stimulus. The axons of receptor cells fasciculate through the cribriform plate to innervate regions of the olfactory bulb known as glomeruli. These regions are typified by densely packed synapses between the axons of receptor cells, the primary dendrites of mitral/tufted cells situated deeper in the bulb, and periglomerular cells situated more superficially. Recent studies suggest that receptor neurons express one or at most a few putative receptor proteins from a large family of the genome. Furthermore, the axons of receptor cells expressing the same protein, or permutation of multiple proteins, tend to project to a single glomerulus, or perhaps two neighboring sites. This arrangement is schematised in Figure 4.4, where n receptors (where n is typically 2,500 in mammals) expressing the same proteins converge onto a single glomerulus region.

¹Based upon an earlier system developed by John S. Kauer and Joel White, Department of Neuroscience, and using sensors provided by Todd Dickinson and David Walt, Department of Chemistry.

Figure 4.4: A schematic of the first stage of the biological olfactory pathway (odour stimuli → receptor neurons → glomerulus).

In the simplest scheme, we can consider the spike-trains generated by individual receptors as statistically independent Poisson processes, where the probability of observing (N = X) action potentials within a time-window T is governed by the Poisson distribution

Pr(N = X) = (λ_r^X / X!) e^(−λ_r)    (4.3)

where λ_r = k_s T and k_s is the mean firing rate expected for each stimulus, s. Since the receptor cell displays preferential tuning to particular stimuli, we would expect k_s to vary for a particular receptor over a test-set of odorants.
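As a concrete check of Eq. (4.3), the distribution can be evaluated directly; the firing rate and counting window below are illustrative values of ours, not measurements from the report:

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    # Eq. (4.3): Pr(N = x) = lam**x / x! * exp(-lam)
    return lam**x / factorial(x) * exp(-lam)

k_s, T = 50.0, 0.1   # assumed firing rate (spikes/s) and counting window (s)
lam_r = k_s * T      # lam_r = k_s * T = 5 expected spikes per window
probs = [poisson_pmf(x, lam_r) for x in range(60)]
# the pmf sums to ~1 and its mean recovers lam_r
```

A receptor tuned to a different odorant would simply have a different k_s, shifting λ_r and hence the whole count distribution.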


[Figure: an arc lamp passes through an excitation filter and a dichroic mirror to illuminate the beads through a microscope lens; fluorescence returns through the dichroic mirror and an emission filter to a camera connected to a frame grabber in the computer, with computer-controlled odorant delivery to the beads.]

Figure 4.5: Schematic diagram of the measurement set-up used to acquire chemosensory data as part of the TNose experiment.

However, one effect of the convergence of receptor input at the glomerulus might be to aggregate multiple spike-trains over a period of time. So while the statistics of spike generation at receptor cells may be governed by λ_r, at the glomerulus n·λ_r spikes are expected on average in any time-window T. We can therefore consider the spiking activity at each glomerulus as another Poisson process, but now with parameter λ_g = n·λ_r.

We can derive the signal-to-noise ratio enhancement of this convergent architecture by considering the variance of the aggregated signal at the glomerulus, λ_g, compared with the variance of the individual receptor spike-trains, λ_r:

SNR = σ_g / σ_r = (λ_g / λ_r)^(1/2) = √n    (4.4)

and so we expect the improvement in sensitivity to follow √n, with increasing receptor convergence n. This is a form of hyperacuity, where the biology takes advantage of the statistics of sensory information in order to generate a system sensitivity that is greater than that of the underlying detectors. In this report we will consider analogous arrangements in an artificial nose in order to enhance sensitivity.
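The √n prediction of Eq. (4.4) can be verified with a small Monte-Carlo simulation; this sketch is ours, not part of the original analysis, and the rate, convergence factor, and trial count are arbitrary choices:

```python
import random

random.seed(1)

def poisson_sample(lam):
    # Knuth's multiplicative method (adequate for small lam)
    limit, k, p = 2.718281828459045 ** -lam, 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def snr(counts):
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return mean / var ** 0.5

lam_r, n, trials = 5.0, 100, 2000
receptor = [poisson_sample(lam_r) for _ in range(trials)]
# glomerulus: aggregate of n independent receptor spike counts (Poisson, n*lam_r)
glomerulus = [sum(poisson_sample(lam_r) for _ in range(n)) for _ in range(trials)]
ratio = snr(glomerulus) / snr(receptor)
# ratio should come out close to sqrt(n) = 10
```

The glomerulus counts are built by summing n independent receptor samples rather than sampling Poisson(n·λ_r) directly, which mirrors the convergence argument in the text and also avoids numerical underflow in the sampler for large λ.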

4.3.2 Measurement Set-up

Existing electronic nose systems are unable to exploit the statistics of sensory information, since generally only a single sensor of each type or class is deployed, so that limited-size arrays can be endowed with as much sensor diversity as possible. For this reason, we propose a novel approach of deploying large numbers of identical sensor types in order to investigate schemes for sensitivity enhancement. To produce large sensor numbers, ca. 3 μm porous silica microspheres (beads) that

22

Page 24: Report to the National Science Foundation: WORKSHOP ON NEUROMORPHIC ENGINEERING · 2015-07-28 · Report to the National Science Foundation: WORKSHOP ON NEUROMORPHIC ENGINEERING Telluride,

directly adsorb solvachromatic dyes have been supplied by Dickenson & Walt. The dye/matrixcombination alters its fluorescence properties under different chemical environments which can bedetected using a simple optical set-up, which is shown in Figure 4.3.2.

In this arrangement beads are deposited on a glass slide and excited by green light (wavelength 530 nm). Optical-quality bandpass filters are used to ensure the light exciting the beads is narrowband (10 nm optical bandpass filter). Under these conditions the beads fluoresce at lower energy, i.e. longer wavelength (typically 640 nm), which is detected using a low-cost 8-bit resolution CCD video camera. A dichroic mirror is used within the optical set-up to ensure that none of the excitation wavelength is detected by the camera. The peak emission wavelength of the fluorescence signal is modulated (both positively and negatively) by the presence of chemicals local to the bead environment. By using a narrow-band optical filter we can observe the modulation of the emission spectra at one particular wavelength of interest while applying a variety of chemical analytes. This is viewed as a gray-scale intensity shift at the CCD camera. The optical hardware is computer controlled under NI LabVIEW in order to synchronize the sampling activities: illuminating the beads, applying odor delivery, and measuring the bead intensity shift.
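The per-bead quantity plotted in the following figure is a standardised luminosity. The exact normalisation is not given in the report; one minimal reading, which we assume here, is the fractional intensity change of a bead region relative to a baseline taken from the first few air-only frames:

```python
def mean_intensity(frame):
    """Average gray level of one frame (a list of pixel rows)."""
    return (sum(sum(row) for row in frame)
            / sum(len(row) for row in frame))

def standardised_luminosity(frames, n_baseline=3):
    """Fractional intensity shift of each frame relative to the mean
    of the first n_baseline (assumed air-only) frames."""
    baseline = (sum(mean_intensity(f) for f in frames[:n_baseline])
                / n_baseline)
    return [(mean_intensity(f) - baseline) / baseline for f in frames]

# Three quiet frames, then a frame after odorant delivery.
frames = [[[100, 100], [100, 100]],
          [[101, 99], [100, 100]],
          [[100, 100], [99, 101]],
          [[110, 110], [110, 110]]]
print(standardised_luminosity(frames)[-1])  # 0.1
```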

[Figure panels: (a) "Example Single Bead Response" and (b) "Aggregated Bead Responses"; both plot standardised luminosity against frame number (0 to 40), with one trace per toluene dilution (1:10, 1:15, 1:24, 1:40, 1:61, 1:80, 1:120, 1:171, 1:300) plus air.]

Figure 4.6: Bead responses to varying dilutions of saturated toluene vapor. (a) A single bead response to toluene, indicating concentration discrimination down to 1:61 dilution, (b) the aggregated mean response of 201 beads, indicating discrimination down to 1:80 dilution. Error bars indicate the standard error in the mean for the aggregated response.

As described by Pearce et al. (1998), independent measurements made within a single video frame generate large data-sets of bead responses for statistical analysis. The beads demonstrate large, reproducible, and reversible responses to most organic vapors. Figure 4.6(a) summarizes the performance of a single bead in detecting varying dilutions of toluene at Standard Temperature and Pressure (STP). Using a single bead response it is clear that discrimination is possible down to dilutions of 1:61 of toluene at SVP, before the sensor response descends into noise. Figure 4.6(b) shows how the aggregated mean signal of 201 bead measurements can be used to improve the discriminability of the system, in a way analogous to the early stages of the biological olfactory system considered in Section 4.3.1. Using the combined signal it is possible to discriminate 1:80 of toluene at SVP, under the same conditions as for the single measurement.

By investigating randomly sampled subsets of the total pool of 201 measurements it was possible to quantify this sensitivity enhancement with increasing bead numbers. Overall, SNR enhancement was shown to closely follow √n, with n bead measurements being averaged. This result indicates that it is possible to implement a biologically inspired sensitivity enhancement scheme that also closely follows our model of the same process in the biological system. As such, this scheme provides a practical method for sensitivity enhancement in artificial nose systems that is independent of on-going improvements to the sensor technology.
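The subset-sampling analysis can be sketched as follows. The bead data themselves are not reproduced in this report, so the pool below is synthetic (a common response of 1.0 plus independent per-bead noise), with the pool size matching the 201 measurements:

```python
import random
import statistics

def subset_snr(responses, n, n_subsets=2000, seed=0):
    """SNR of the n-bead average: mean of random-subset means divided
    by their standard deviation (subsets drawn without replacement)."""
    rng = random.Random(seed)
    means = [statistics.mean(rng.sample(responses, n))
             for _ in range(n_subsets)]
    return statistics.mean(means) / statistics.pstdev(means)

# Synthetic stand-in for the 201 bead measurements.
rng = random.Random(1)
beads = [1.0 + rng.gauss(0.0, 0.5) for _ in range(201)]

for n in (1, 4, 16):
    print(n, round(subset_snr(beads, n), 2))  # grows roughly as sqrt(n)
```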

In this project, our purpose was then to compare the sensitivity enhancement performance obtained using this simple statistical approach with the results obtained from a simple neuronal model implemented using Xmorph.

4.3.3 Combining TNose and Xmorph

Xmorph was designed as a tool to satisfy a diverse set of requirements: firstly, to be able to easily define complex networks of heterogeneous neural structures that can be easily interfaced with external devices; secondly, to permit system-level (or macroscopic) descriptions of neural elements without losing biological relevance. In our experiment the aim was to combine the modeling capabilities of Xmorph with the data acquisition system described in Section 4.3.2. This permits the real-time analysis of large numbers of chemically sensitive bead measurements via a neuronal model implemented under Xmorph.

In the first instance, data were shared by the two systems through image files. The software for the data acquisition system was modified to generate TIFF images of each video frame during odor presentation, and Xmorph was subsequently modified to read these image files directly. In this way, during the early development of the combined system, it was possible to "replay" the stored images under different model conditions for optimization purposes. The properties of the model are now described.
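In outline, the replay step just iterates over the stored frame files in acquisition order and hands each one to the model; decoding of the TIFF data is left to the caller. The helper below and its file-naming convention are our illustration, not the actual Xmorph interface:

```python
from pathlib import Path

def replay(frame_dir, model, pattern="frame_*.tif"):
    """Feed stored frames to `model` (any callable taking an index and
    a file path) in sorted, i.e. acquisition, order."""
    paths = sorted(Path(frame_dir).glob(pattern))
    return [model(i, p) for i, p in enumerate(paths)]
```

A model-side loader (hypothetical here) would open each path, decode the TIFF, and present the pixel array to the network, so the same stored run can be replayed under different model parameters.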

4.3.4 Olfactory Bulb Modeling

Process TNose consists of 9 simulated neural populations (Figure 4.7).

[Figure contents, one rectangle per population: Receptors (ReceptorCell, N = 2500); Glomerulus1, Glomerulus2, Glomerulus3 (GlomerulusCell, N = 800 each); Mitral1, Mitral2, Mitral3 (MitralCell, N = 9 each); Granule (GranuleCell, N = 36); Cortex (CortexCell, N = 4).]

Figure 4.7: Circuit of process TNose. Each rectangle represents one population; the name, cell type, and size of each group is listed in the rectangle. Red circles: excitatory connections; blue rectangles: inhibitory connections. Detailed properties of the neuron and synapse types used are listed in Tables 4.1 and 4.2.

A 50x50 image reflecting the bead responses was projected onto three streams of Glomerulus and Mitral populations. Each stream was supposed to have preferential responses to a specific odor. The forward excitation to the Granule cells and their recurrent inhibition would impose selectivity in the odor discrimination expressed by the read-out system (conveniently called "Cortex").


4.3.5 Cell types

Table 4.1 lists the properties of the five cell types of process TNose.

Name            Type         τEx    τInh   θ     Slope   P     Vm
ReceptorCell    IntegFire    1      0      0     1       0.8   0.95
GlomerulusCell  LinearThres  1      0      0     0       1     0
MitralCell      IntegFire    0.22   1      0     1       1     0.8
GranuleCell     IntegFire    1      0      0     0       1     0
CortexCell      IntegFire    1      0      0     0       1     0

Table 4.1: The cell types of process TNose

Most of the cell types used were of the integrate-and-fire type. These cells emit a spike when the integrated input exceeds a firing threshold.
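In a minimal form (the parameters here are illustrative, not Xmorph's actual values), such a unit can be written as:

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.1):
    """Leaky integrate-and-fire unit: the potential leaks by a fixed
    fraction each step, accumulates the input, and a spike (1) is
    emitted with a reset whenever the potential exceeds threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = (1.0 - leak) * v + x
        if v > threshold:
            spikes.append(1)
            v = 0.0  # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Constant sub-threshold drive makes the unit fire at a regular rate.
print(integrate_and_fire([0.4] * 8))  # [0, 0, 1, 0, 0, 1, 0, 0]
```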

4.3.6 Synapse types

Table 4.2 lists the properties of the 15 different synapse types of process TNose

Name                      Arborization  Width  Height  P     Min  Max   Self-connect
Glomerulus1-Ex-Receptors  BLOCK         21     21      0.05  0    0.1   Off
Glomerulus2-Ex-Receptors  BLOCK         21     21      0.05  0    0.1   Off
Glomerulus3-Ex-Receptors  BLOCK         21     21      0.05  0    0.1   Off
Mitral1-Ex-Glomerulus1    BLOCK         9      9       1     0    0.02  Off
Mitral2-Ex-Glomerulus2    BLOCK         9      9       1     0    0.02  Off
Mitral3-Ex-Glomerulus3    BLOCK         9      9       1     0    0.02  Off
Cortex-Ex-Mitral1         BLOCK         1      1       1     0    0     Off
Cortex-Ex-Mitral2         BLOCK         1      1       1     0    0     Off
Cortex-Ex-Mitral3         BLOCK         1      1       1     0    0     Off
Granule-Ex-Mitral1        BLOCK         1      1       1     0    0     Off
Granule-Ex-Mitral2        BLOCK         1      1       1     0    0     Off
Granule-Ex-Mitral3        BLOCK         1      1       1     0    0     Off
Mitral3-Inh-Granule       BLOCK         1      1       1     0    0     Off
Mitral2-Inh-Granule       BLOCK         1      1       1     0    0     Off
Mitral1-Inh-Granule       BLOCK         1      1       1     0    0     Off

Table 4.2: The synapse types of process TNose

Most interconnections were one-to-one, preserving the topology of the system. The Glomerulus populations receive a broad projective field from the responses of the receptor cells. These are the arborizations which, through a learning process, were supposed to develop specific responses to particular odors.

4.3.7 Results

In our experiments we evaluated whether stable identification of sequences of receptor responses to odor stimuli could be learned, using a local correlation-based learning rule. Learning would imply that the glomerulus neuron would develop a "receptive field" expressing the prototypical activation pattern triggered by a specific odor. Figure 4.8 gives three examples of bead responses to pentanol taken from our bead response database (we used a total of 50 response patterns).

[Figure panels: A, B: typical bead responses; C: typical response of the receptor population.]

Figure 4.8: A, B: Images taken from the response database showing a typical response ofbeads to pentanol. C: Typical response of the cells of population Receptors to the beads.

As an evaluation of our learning method we continuously played 50 bead response patterns (to increasing concentrations of pentanol) into the receptors of the model bulb. The receptors also showed a spontaneous background activity of about 50 percent. As a test case ("distractor"), the last pattern of the bead sequence was a mirror image of the preceding one. The assumption was that a non-learning glomerulus would not cease to respond to this test image, while a learning glomerulus would. One glomerulus, GlomerulusPassive, received projections from the receptors with fixed synaptic efficacies. The activation threshold of this cell was tuned to a minimal value in order for it to respond to its inputs. The results of this test are summarized in Figure 4.9.
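The local correlation-based rule is not spelled out in the report; a generic Hebbian sketch with passive decay (all parameters, and the thresholded cell model, are our assumptions) reproduces the qualitative effect of ceasing to respond to a mirror-image distractor:

```python
def glomerulus_response(weights, pre, threshold=0.5):
    """Thresholded weighted sum of receptor input (cf. LinearThres)."""
    drive = sum(w * x for w, x in zip(weights, pre))
    return max(0.0, drive - threshold)

def hebbian_step(weights, pre, post, rate=0.05, decay=0.02, w_max=1.0):
    """Local correlation-based update: strengthen each synapse in
    proportion to pre * post activity, with passive decay, clipped
    to [0, w_max]."""
    return [min(w_max, max(0.0, w + rate * x * post - decay * w))
            for w, x in zip(weights, pre)]

# Repeated presentation of one activation pattern carves out a
# receptive field; the mirror-image distractor stops driving the cell.
pattern = [1.0, 1.0, 0.0, 0.0]
distractor = pattern[::-1]
w = [0.4] * 4
for _ in range(30):
    w = hebbian_step(w, pattern, glomerulus_response(w, pattern))
print(glomerulus_response(w, pattern), glomerulus_response(w, distractor))
# 1.5 0.0
```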

The average activation levels of the two evaluated glomeruli (Figure 4.9 B and C) follow the activation level of the receptors (Figure 4.9 A). The learning glomerulus, however, in most cases does not respond to the distractor, while the non-learning glomerulus does. This implies that it has developed a receptive field which appropriately distinguishes pentanol bead responses from other activity patterns. As an illustration of the specificity of this learning process, Figure 4.10 displays the receptive field developed by this glomerulus neuron in this experiment.

The developed receptive field allowed the learning glomerulus to distinguish between the actual pentanol bead responses and the distractor inserted in the image sequence. This gave it enhanced performance compared to the non-learning case.

4.3.8 Conclusion and Future Work

The experiments performed during the 1998 workshop provided a stepping stone towards the development of a biologically realistic model of the olfactory bulb which would discriminate between odor responses derived from an artificial nose. Although much additional work needs to be done, the present experiments demonstrated the feasibility of such an approach, which incorporates both digital and neuromorphic technologies.

Future work would require a comparison between the statistical detection and neuromorphic models, both described in this report. Of particular interest would be to obtain a population of different bead classes, enabling the development of a model that is more faithful to the biology. One method for achieving this would be to invite participants with an interest in olfaction to the next workshop, in order to combine a variety of sensor technologies into a single processing model under Xmorph. The authors intend to pursue this possibility before the next workshop.

[Figure panels: A: average response in the receptor population; B: response of the non-learning glomerulus; C: response of the learning glomerulus. Each panel plots group statistics over the time window 122148 to 123165 ms.]

Figure 4.9: Responses of the olfactory bulb model. A: Average activity in the receptor population. The drop in both traces indicates the presentation of the "distractor" at the end of a sequence. B: The non-learning glomerulus. C: Learning glomerulus. Gray: activity, purple: membrane potential, red: excitatory input. Time window 122148 to 123165 ms.

Part of this work was supported by the National Institutes of Health (NIH), the Office of Naval Research (ONR), and DARPA.

4.4 Audition: auditory localization using the Koala robot

(T. Zahn, P.F.M.J. Verschure)
During the workshop I joined the auditory project group. My major goal was to implement a sound-localization algorithm on the Koala robot in order to make it move toward a self-selected target sound source. The localization is based on a software model of the inner ear, some parts of the cochlear nucleus, the olivary complex, and the inferior colliculus. All neuron models have been derived from a leaky integrate-and-fire model implemented as an aVLSI test chip. The system is based exclusively on Interaural Time Difference (ITD) evaluation using two stereo microphones with a base of 25 cm. The resulting localization vector could be combined with visually obtained localization information in order to improve performance with multisensory cues.
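The core of ITD-based localization is estimating the lag between the two microphone signals and mapping it to an azimuth. A sketch of that computation follows; the signal, sampling rate, and far-field geometry formula are our assumptions, and the actual system uses delay lines and coincidence detectors, described below:

```python
import math

def best_itd(left, right, max_lag):
    """Lag (in samples) at which the right channel best matches the
    left, via brute-force cross-correlation; a negative lag means the
    sound reached the left microphone first."""
    def corr(lag):
        return sum(left[i] * right[i - lag]
                   for i in range(max_lag, len(left) - max_lag))
    return max(range(-max_lag, max_lag + 1), key=corr)

def itd_to_azimuth(lag, fs=44100, mic_base=0.25, c=343.0):
    """Far-field conversion of an ITD in samples to azimuth degrees:
    sin(theta) = c * itd / mic_base."""
    s = max(-1.0, min(1.0, c * lag / fs / mic_base))
    return math.degrees(math.asin(s))

# A click arriving 16 samples later at the right microphone.
delay = 16
click = [1.0 if 90 <= i < 110 else 0.0 for i in range(200)]
left = click
right = [0.0] * delay + click[:-delay]
lag = best_itd(left, right, max_lag=40)
print(lag, round(itd_to_azimuth(lag)))  # -16 -30 (about 30 deg to the left)
```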


Figure 4.10: Receptive field acquired by the learning glomerulus (yellow circle on the right-hand side) in the pentanol series. Bright red (dark gray for BW viewers) patches indicate strong synapses, while gray patches (light gray for BW viewers) indicate weak synapses.

We employed a Pentium 200 as host and used radio communication to transfer the sound signal from the robot to the host. The control signal joined the IKHEP package of the Institute of Neuroinformatics (ETHZ) and was there transferred into motor commands for the Koala wheels. After some intensive days with great support by Dr. Verschure we met our goal, and the Koala was moving toward a clapping sound source. A video of this is available in Zurich. Based on the results we tried to incorporate the audio processing into the Xmorph simulation environment, and we currently continue this work in collaboration between Zurich and Ilmenau. At the same time we are trying to improve the model toward speech source identification, which we started to work on in Telluride but could not finish due to some library problems. The neuromorphic model employed uses the following components: two all-pole gammatone filter cascades with 16 channels tuned between 100 Hz and 2 kHz for the left and right inner ear.

Figure 4.11: functional structure of the hair cell-ganglion complex

A simple hair cell-ganglion model, shown in Figure 4.11, is assigned once to each frequency channel, and 16 counter-propagating delay lines project to 16 x 33 coincidence-detector cells of the IF type shown in Figure 4.12.

Finally, 33 auditory space-map cells of the same nature represent the azimuthal plane, with no vertical or front-back information. In front of the system there is an envelope-based onset detection using 500 samples of the 44100 Hz sound signal to perform the computation. This will be skipped in the real aVLSI implementation we are currently working on. Furthermore, the system will be extended by a frequency-sharpening layer for each time delay and a Winner-Take-All network for the localization in the azimuthal vector. The whole system is spike-based from the stage of the receptor cells to the localization vector. Here the rate is calculated and transferred into a proportional motor signal. The model is also available as a MATLAB simulation package. The simulation of the activity in the azimuthal vector for a hand clap from 30 degrees right is shown in Figure 4.13.

Figure 4.12: Extended model of the IF neuron, assuming uniform synapses
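The envelope-based onset detection described above can be sketched as follows; the detection criterion (a jump in mean absolute amplitude between consecutive 500-sample windows) is our assumption, since the report only names the window size:

```python
def onset_detected(signal, window=500, factor=3.0):
    """Envelope-based onset detector: the mean absolute amplitude of
    each window is compared with that of the previous window; a jump
    by more than `factor` flags an onset and returns its start index."""
    prev = None
    for start in range(0, len(signal) - window + 1, window):
        env = sum(abs(x) for x in signal[start:start + window]) / window
        if prev is not None and prev > 0 and env > factor * prev:
            return start
        prev = env
    return None

quiet = [0.01] * 1000
clap = [0.5] * 500
print(onset_detected(quiet + clap + quiet))  # 1000
```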

[Figure axes: spiking activity per azimuth channel (-90 to +90 degrees) against time in samples (0 to 2000).]

Figure 4.13: Spiking activity in the azimuthal vector resulting from a hand clap from 30 degrees right

During the workshop the scientists of Prof. Shamma's lab provided a hardware board performing the same task, including either the cochlea chip of Andreas Andreou or that of Andre van Schaik. We therefore had some good discussions about the problems and advantages of the hardware and agreed to collaborate after the workshop. Prof. Andreou provided me with his cochlea chip to be used as the front end of a truly analog localization system. In the project group we also discussed ways to include front-back information with Dr. Horiuchi, and the problems with spike-based WTA structures with Dr. Indiveri and Dr. Horiuchi. We will stay in touch to exchange experimental results on that as well. Finally, the time was again too short to get everything done, but we reached the major goal of making the robot move based on a neuromorphic model. Working with all of these scientists has been a great experience; it not only saved me a lot of communication effort but also gave me new ideas and made me aware of problems and limitations in my models.

4.5 Vision: view based navigation using the Khepera robot

(James J. Clark, Regina Mudra, Nicol N. Schraudolph)
A recent study of route learning in automobile drivers (Beusmans et al., 1995) showed that people only retained visual information about areas at which they needed to decide whether to make a motor action, such as turning at an intersection. At other, "passive", locations, people exhibited very poor recall of visual details. This suggests that drivers use some sort of view-based recognition of the scene and learn to associate these views with motor actions.

At the 1998 Telluride workshop we decided to see if we could train a Khepera mobile robot to navigate a fixed route using such a view-based strategy. We modeled our approach loosely on the work of Bachelder and Waxman (1995), who also developed a view-based technique for robot map-learning. There were two major differences between our effort and that of Bachelder and Waxman: first, we used a simple 3-layer neural network rather than the complicated fuzzy-ART network used by Bachelder and Waxman, and secondly, we aimed at getting the robot to learn a "route" rather than a complete spatial "map".

Our goal was to have a Khepera robot learn to navigate autonomously a route consisting of a number of 45-degree turns separated by straight segments, in an environment containing a number of objects that could potentially be used as visual landmarks. A view of the environment is shown in Figure 4.14.

Figure 4.14: Robot’s environment

4.5.1 Input Preprocessing

Using the Matlab environment on a PC running Linux, an image frame would be acquired from the on-board camera of the Khepera and processed, and then a motor command would be computed and sent to the Khepera, whereupon the cycle would repeat. The color images acquired by the Khepera were converted to monochrome and then subsampled to a size of 26x53 pixels. Subsampling reduces the amount of information that needs to be handled by the neural network, but also reduces the effect of small shifts in position on the view-based recognition process that the network has to learn. These subsampled images were then bandpass filtered to enhance edge features and to minimize illumination variation effects. An example of such an image acquired by the Khepera in its testing environment is shown in Figure 4.15. The images were then subsampled further by extracting four rows (rows 1, 9, 17, 25). This was done to reduce the dimensionality (to 4x53 = 212) of the input vector on which the Principal Component Analysis was done, so that we could obtain the principal components in a reasonable amount of time. We tried to compute principal components of larger input images, but the unpredictable status of the lab computers (which were often rebooted to switch between Linux and Windows) prevented run-times of more than a few hours.

A set of 1000 such 212-element images was acquired at random positions and orientations of the robot in the environment. This set of images was then used to compute a set of Principal Components. This was done by computing the eigenvectors of the covariance matrix, C = X'X, of the matrix X whose rows correspond to the individual image vectors. X was, therefore, a matrix with 1000 rows and 212 columns, and the covariance matrix was a square matrix with 212 rows and columns. We decided (arbitrarily) to retain as principal components those eigenvectors of C whose eigenvalues were greater than 10% of the maximum eigenvalue. This resulted in 50 principal components.
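The eigenvector computation can be sketched without a linear algebra package using power iteration; the fragment below (our illustration on toy data, extracting just the leading component rather than all 50) shows the idea of finding eigenvectors of C = X'X:

```python
import math
import random

def top_principal_component(X, iters=100, seed=3):
    """Leading eigenvector of C = X'X by power iteration; X is a list
    of mean-centred row vectors (one preprocessed image per row).
    C is never formed explicitly: C v = X'(X v)."""
    dim = len(X[0])
    rng = random.Random(seed)
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    for _ in range(iters):
        Xv = [sum(row[j] * v[j] for j in range(dim)) for row in X]
        w = [sum(X[i][j] * Xv[i] for i in range(len(X)))
             for j in range(dim)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Toy "images" varying mostly along the direction (1, 1, 0).
rows = [[-2.0, -2.0, 0.1], [-1.0, -1.0, -0.1],
        [1.0, 1.0, 0.1], [2.0, 2.0, -0.1]]
v = top_principal_component(rows)
print([round(c, 2) for c in v])
```

The remaining components can be obtained by deflating C after each extraction; in practice Matlab's eigendecomposition, as used at the workshop, computes them all at once.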

In order to further reduce the effect of shifts in position on the view recognition process we windowed (with a Gaussian window) the rows of the 4x53 image array and then circularly shifted the rows so that the centroid of the image was centered in the array. The windowing reduced any adverse edge effects due to the circular shifting. We found that this bit of shift invariance improved the learning rate of the network as it tried to learn a route.
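A per-row version of this centering step might look as follows (the window width and the rounding of the shift to whole pixels are our choices):

```python
import math

def centre_rows(rows, sigma=8.0):
    """Gaussian-window each row, then circularly shift it so that its
    intensity centroid lands on the centre column."""
    out = []
    for row in rows:
        n = len(row)
        mid = (n - 1) / 2.0
        win = [math.exp(-((j - mid) ** 2) / (2.0 * sigma ** 2))
               for j in range(n)]
        r = [v * w for v, w in zip(row, win)]
        total = sum(r)
        centroid = (sum(j * v for j, v in enumerate(r)) / total
                    if total else mid)
        shift = int(round(mid - centroid))
        out.append([r[(j - shift) % n] for j in range(n)])
    return out

# A bump near the left edge ends up at the centre column.
row = [0.0] * 11
row[2] = 1.0
shifted = centre_rows([row])[0]
print(shifted.index(max(shifted)))  # 5
```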

4.5.2 Neural Network

The neural network we used was a standard three-layer feedforward network. The input layer consisted of 50 units, associated with the pre-processed image vector. The output layer consisted of 3 units, which encoded the particular motor action to carry out (turn left, turn right, stop). The "go straight" motor action was implied by a lack of activity in the three output units.

The network was trained by having the robot move through a pre-set trajectory and manually providing desired output values (i.e. whether to turn left, turn right, go straight, or stop) at each stopping point along the trajectory, along with the preprocessed image acquired at these points. The ELK1 learning technique was used (Schraudolph, 1998a). This is a back-propagation algorithm with an adaptive gain-setting process that improves convergence rates. In addition, shortcut weights (which are weights connecting the input layer directly to the output layer) were used (Schraudolph, 1998b). The use of shortcut weights has the potential to reduce the blurring and attenuation of the back-propagated error signal, and to reduce the time needed to learn the linear component of the mapping.

During the training and processing of our neural network, we used as input to the network theprojection (dot-product) of the pre-processed (212 element vector) input image with the 50 principalcomponents. Thus, the neural network input layer had 50 units.

4.5.3 Results to Date

We trained the network with image data taken as the Khepera robot executed a pre-programmed trajectory within its environment (which can be seen as the line drawn on the ground in Figure 4.15). As we had not yet implemented a homing routine for the robot, the robot needed to be manually returned to the starting point of the route after each learning trial. This requirement significantly slowed the training process and taxed the patience of the experimenter (Regina Mudra). We were able to carry out 200 training runs. The "learning curve" of the neural network is shown in Figure 4.16. Note that the learning error drops quickly in the first 10 or so trials, but slows quickly after that. Our interpretation of this is that the network quickly learns to "go straight", but will take much longer to learn to turn left or right. This is due to the fact that most of the images obtained by the robot as it moves along its route are at locations where it is supposed to go straight; there are only a few images in each training run where the robot is to turn left or right.



Figure 4.15: Robot’s trajectory


Figure 4.16: Learning curve


Our conclusion is that the learning does seem to be proceeding, but that more training runs are necessary. We plan to continue this work at the ETH lab in Zurich, where Regina will develop an automated testing process, allowing much more extensive training sessions.

The observed ability of humans to learn routes in fewer than 10 learning trials (Beusmans et al., 1995) is an indication that our approach is perhaps flawed. One could argue, however, that the human visual system has spent many years and sensed millions of images in learning to categorize scenes and objects, which our navigation neural network is trying to do from scratch. In some sense, the recognition of scene views and objects within the scenes is the hard part to learn; associating these views with the motor actions required to execute a route is the easy part. So, one possible conclusion is that we should spend more effort on the scene recognition aspect of our problem before trying to solve the full route navigation task.

4.5.4 References

Bachelder, I.A. and Waxman, A.M., "A view-based neurocomputational system for relational map-making and navigation in visual environments", Robotics and Autonomous Systems, Vol. 16, pp. 267-298, 1995.

Beusmans, J., Aginsky, V., Harris, C., and Rensink, R., "Analyzing situation awareness during wayfinding in a driving simulator", Nissan CBR technical report TR 95-4, 1995.

Schraudolph, N.N., "Online local gain adaptation for multi-layer perceptrons", IDSIA technical report 09-98, 1998a.

Schraudolph, N.N., "Slope centering: making shortcut weights effective", IDSIA technical report 32-98, 1998b.

4.6 Vision: interfacing a Silicon Retina to a Koala Robot

(Shih Chii Liu and Tobi Delbruck)
We "modeled" an insect fixation response using a Koala robot and a scanned retina. The idea is to model the behavior of flies when they try to fixate high-contrast stimuli. In our case everything is highly simplified! We simply wanted to see if a scanned retina with only 300 pixels could be used for fixation, or perhaps more accurately, line following. We wanted to run everything on the Koala (the frame acquisition, the cross-correlations, and the motor control) to construct an autonomous demonstration of the use of a small scanned retina on a mobile robot.

The Jorg Kramer 15-21 retina is mounted on the Koala, and image frames are acquired by the Koala using the onboard digital I/O lines and ADC, using Mark Blanchard's code.

Retina frames are cross-correlated with 3 full-frame kernels that are tuned to dark vertical bar stimuli in the left, middle, and right parts of the 2-d image. We think of these 3 cross-correlations as wide-field cells sensitive to high-contrast dark stimuli in different parts of the visual field.

Comparisons of these correlations drive the motors directly, bang-bang, depending on whether the left and right correlations differ by more than a fixed threshold.
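In outline, the scheme is as follows (the frame and kernels below are toys; the real kernels are tuned to dark vertical bars, and the threshold was tuned on the robot):

```python
def correlate(frame, kernel):
    """Full-frame cross-correlation score: elementwise product summed."""
    return sum(f * k for frow, krow in zip(frame, kernel)
                     for f, k in zip(frow, krow))

def steer(frame, left_k, right_k, threshold=1.0):
    """Bang-bang rule: turn toward whichever wide-field 'cell' responds
    more strongly, if the difference exceeds a fixed threshold."""
    diff = correlate(frame, left_k) - correlate(frame, right_k)
    if diff > threshold:
        return "turn_left"
    if diff < -threshold:
        return "turn_right"
    return "straight"

# 3x6 toy frame with a high-response bar in the right half, and
# kernels selecting the left/right thirds of the image.
frame   = [[0, 0, 0, 0, 5, 5]] * 3
left_k  = [[1, 1, 0, 0, 0, 0]] * 3
right_k = [[0, 0, 0, 0, 1, 1]] * 3
print(steer(frame, left_k, right_k))  # turn_right
```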

Performance in line following is marginal at present, although the cross-correlations provide fairly robust information about whether the line lies to the left or right.

We did not get to the interesting part of the project, that is, using the 2-d retina data to recognize 2-d features (for example crossings) to produce more complex behavior.


4.7 Neuromorphic Flying Robots

(Nici Schraudolph)
Many birds and insects exhibit energy-efficient, high-performance flight characteristics unmatched by conventional technology. Through bio- and neuromorphic design of flying machines we hope to learn more about the aerodynamics and sensorimotor control strategies employed by nature to this end. In addition, such flying robots are the ultimate all-terrain vehicles, with applications ranging from aerial surveillance to planetary exploration. Their design poses interesting challenges to neuromorphic engineering:

- Bottom-up, low-level approaches to navigation in three dimensions have not yet been widely explored, presumably due to a lack of (small and cheap) flying robots.

- The extreme tightness and criticality of control loops in many flying systems adds much difficulty to their design.

- The severe power and weight constraints these machines operate under call for small, lightweight, low-power, and highly integrated smart sensors. Now doesn't that sound just like what aVLSI is all about?

As a result of these discussions, Mark Tilden, David Nicholson, and I decided to build a neuromorphic flying robot during the last week of the workshop. We fitted a helium balloon with a small motor and propeller, driven by one of Mark's new miniature oscillator boards. Two directional light sensors, mounted on the top and bottom of the vehicle, modulated the oscillator's duty cycle so as to generate a photophobic reflex.

Although this robot had no explicit directional control, a deliberate tilt of its sensorimotor axis, coupled with the balloon's inherent tendency toward rotation, resulted in consistent and very "life-like" light-avoidance responses in three dimensions. We were entertained by its emergent behaviors, such as repeated docking attempts with the (dark) underside of a ceiling lamp.

Our possibilities were limited by the fact that we had to supply power from the ground. Next year we hope to construct a fully autonomous, battery-operated version. Perhaps we can put smart silicon sensors (such as Giacomo's edge tracker) on board? We are also envisioning "Ben Hur"-style autonomous balloon duels, with pins ...

4.8 Optomotor response with an aerodynamic actuator

(Thomas Netter and Alan Stocker)
This project experimented with the optomotor response using an aVLSI chip mounted on a hanging apparatus so that it could oscillate freely.
Towards the beginning of the workshop Reid Harrison successfully implemented an optomotor response on a Koala rover after only a couple of hours of building and programming (see Section 4.10). The installation of his aVLSI retina was inspired by research on the fly. The rover reacted to left or right relative contrast motion by orienting itself towards the flow. By correctly setting the gains, Reid managed to exhibit tracking behavior.

Reid's work emphasized the importance of tuning the integration lag to obtain an adequate response of the vehicle whilst minimizing oscillations of the optomotor response. Nevertheless, the rover setup prevents inertial coupling with the optomotor response. We decided to replicate more closely the setup used in fly experiments.

A cardboard construction supporting:


- An aVLSI retinotopic motion sensor designed by Alan Stocker,

- An amplifier and analog-to-digital converter,

- A Basic Stamp 2 microcontroller to generate pulses for a servo,

- A rudder blown by an electric motor with propeller.

The construction was hung using a nylon thread (Figure 4.17).

The circuit was stimulated by shifting a pattern of black and white stripes (each stripe about 2 cm wide) left and right in front of the motion sensor.

Figure 4.17: The optomotor response setup under the command of its authors.

The setup was only built during the last week of the workshop, leaving time only for a demonstration, which worked on the first try. No tuning was necessary to obtain tracking of the black and white comb at angular speeds estimated at up to 30 deg/s. Angular performance could certainly be improved by revising the design and using an external power supply instead of the rather heavy laptop computer battery carried underneath the "fuselage". An interesting side-effect of this 12 V battery is that a voltage regulator was required to lower the voltage to 5 V. But an error in the voltage regulator documentation induced an unexpected experiment: the whole circuit operated at 12 V for a few seconds without suffering from pyrexia (to put it in Norbert Wiener's Cybernetics terminology).

Alan's 26-pixel Smooth Optical Flow linear retina was mounted with an 8 mm focal length lens. Its primary outputs are two continuous analog signals responding to either left or right motion. These signals are compared and amplified with op-amp circuitry. The system did not react when the stripe sheet was shifted at 1 m from the lens. This short-sightedness is somewhat analogous to insect vision. At a closer distance, reactivity with black and white stripes was good, but a sheet with blue and white stripes (drawn with a felt marker), providing less contrast, did not elicit any response from the retina chip. Alan adjusted several on-chip biases using potentiometers at an early stage of the construction, and it is possible that contrast sensitivity could be improved by readjusting these biases.

The overall installation turned out to be fairly simple. Only 15 lines of Basic Stamp code were necessary to average the signal and time the servo pulses at a 50 Hz refresh rate. Programming was done with the help of a laptop and an oscilloscope. Thomas intends to build a lighter setup for a more systematic and quantitative study of its dynamic characteristics.
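The averaging-and-pulse-timing logic is simple enough to sketch. The following Python fragment is an illustration of the computation described above, not the original Basic Stamp code; the function name, scale factors, and the -1..1 input range are assumptions.

```python
def servo_pulse_us(samples, center_us=1500, span_us=500):
    """Map the mean of scaled ADC samples (range -1..1) to a servo pulse width.

    A standard hobby servo expects a 1000-2000 microsecond pulse every 20 ms
    (a 50 Hz refresh rate); 1500 us holds the rudder at its neutral position.
    """
    mean = sum(samples) / len(samples)      # average the motion signal
    mean = max(-1.0, min(1.0, mean))        # clamp to the valid input range
    return center_us + span_us * mean       # 1000..2000 us pulse width

# A net rightward motion estimate deflects the rudder off center:
pulse = servo_pulse_us([0.2, 0.4, 0.3])
```

Averaging a handful of samples per 20 ms servo period is what keeps the rudder command smooth despite the noisy instantaneous motion signal.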


Figure 4.18: Optomotor response circuit. The retina's left and right outputs are compared and amplified by LM324 op-amps, digitized by an ADC0820 analog-to-digital converter, and read by the Basic Stamp, which drives the rudder servo.

Thanks: Reid Harrison, Timmer Horiuchi, Dan Lee, Yuri Lopez de Meneses, Philippe Pouliquen, Nicol Schraudolph, Paul Verschure, Chuck Wilson.

4.9 Visual tracking using a silicon retina on a pan-tilt system

(Jorg Kramer and Eduardo Ros Vidal)

The aim of this project was to investigate the possibility of exploiting the parallel preprocessing performed by an artificial retina to track a moving object in real time, using a simple MATLAB routine to do the remaining processing required.

4.9.1 Experimental setup

A hexagonal silicon retina with a resolution of 125×94 pixels was mounted on a pan-tilt system. The retinal images were acquired by computer via a framegrabber. Image processing was performed using MATLAB. The pan-tilt system was controlled by the computer via the serial port.

4.9.2 Algorithm

The retina was operated in a mode where adjacent pixels laterally inhibited each other, such that edges were extracted in parallel. In a first implementation of a MATLAB routine, the retina on the pan-tilt unit tracked the center of mass of the binarized edge image. This had the disadvantages that, in the presence of multiple objects, it would track a point between the objects, and that the tracking would also be sensitive to noise. The routine was then modified to track a blob of moving edges. This was achieved by convolving the binarized edge image with a Gaussian kernel and tracking the position showing the highest value. A threshold was set on this value in order to avoid tracking random noise in the absence of any salient object.
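The blob-tracking step can be sketched in a few lines. This Python/NumPy version is a sketch of the approach described above, not the original MATLAB routine; the sigma and threshold values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def track_blob(edge_image, sigma=2.0, threshold=0.1):
    """Locate the strongest blob of edges in a binarized edge image.

    Smoothing with a Gaussian kernel turns isolated noise pixels into weak
    responses and clusters of edges into strong ones; the threshold keeps
    the tracker from chasing random noise when no salient object is present.
    """
    smoothed = gaussian_filter(edge_image.astype(float), sigma)
    peak = np.unravel_index(np.argmax(smoothed), smoothed.shape)
    if smoothed[peak] < threshold:
        return None                  # nothing salient to track
    return peak                      # (row, col) target for the pan-tilt unit
```

Compared with the center-of-mass rule, the argmax of the smoothed image locks onto one object instead of averaging the positions of several.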

The main distractors from the target were the strong flicker of the AC-driven room lighting and the apparent motion of the background induced by the actual motion of the retina on the pan-tilt system. The response time of the retina had to be tuned to a large value to get rid of the flicker susceptibility. Insensitivity to background motion could be achieved by acquiring new target positions while the retina was not moving and after it had adapted out the background edges from the previous motion.


The movements of the system thus showed more saccadic than smooth-pursuit behavior. In order to allow fast tracking, the retina was run at a short adaptation time constant.

4.9.3 Results

With optimum tuning of the retina bias voltages and using the blob-tracking routine, the system was able to reliably track the head or a hand of a person walking around the room in the presence of light flicker and a random background. The maximum tracking rate was about 2 Hz. The tracking algorithm will be expanded to incorporate a preference for more central locations and hysteresis, to allow reliable tracking of a continuously moving object in the presence of other moving objects and to possibly speed up the tracking.

Acknowledgments

Giacomo Indiveri, Alan Stocker, and Daniel Lee provided useful help with MATLAB and the pan-tilt unit.

4.10 Optomotor Response of a Koala Robot with an aVLSI Motion Chip

(Reid Harrison and Thomas Netter)

One of us (Reid Harrison) brought an analog VLSI motion detector array modeled after the HS cells in the fly. The HS cells are a class of non-spiking neurons found in the lobula plate of the fly's optic lobe. HS cells respond to the full-field visual motion induced when the fly rotates about the vertical axis; they are visual matched filters for yaw. These cells are known to underlie the well-studied optomotor response – the ability of the fly to null out rotations during flight by producing a compensatory torque.

We built a robot model of the optomotor response by integrating our visual motion detector chip with a Koala mobile robot. We used the A/D converter on the Koala to read the output of the motion chip, which was in the form of a continuous-time analog voltage. We wrote a simple program on the Koala which integrated the signal from the chip, multiplied this value by some fixed gain, and sent this value to the motors. The left and right motors were driven with opposite signs to produce rotation. The robot was driven towards the direction of motion reported by the chip in order to cancel the perceived motion.
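The control law amounts to a few lines. Here is a minimal Python sketch; the gain value and the leaky form of the integrator are assumptions (the report specifies only integration, a fixed gain, and opposite-sign motor drive).

```python
def optomotor_step(motion_signal, integrator, gain=0.5, leak=0.99):
    """One step of the optomotor loop: integrate the motion chip's output,
    scale by a fixed gain, and drive the wheels with opposite signs so the
    robot rotates with the perceived motion, nulling it out.
    """
    integrator = leak * integrator + motion_signal   # (leaky) integration
    command = gain * integrator
    return integrator, command, -command             # (state, left, right)
```

Too small a gain leaves residual slip; too large a gain makes the closed loop oscillate, just as observed with the robot (and in flies).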

To test the robot, we placed it on a flat sheet of cardboard which was resting on the floor. The motion detection chip was fitted with a lens, and the lens was adjusted to be parallel to the ground. The robot was oriented so that it was "looking" at a cluttered scene about one meter away. This scene consisted of chairs, cups, fruit, jars, and other typical items found in offices and labs. There were no "ideal visual stimuli" such as vertical black-and-white bars. The robot's implicit task was to stabilize its orientation relative to these "distant" visual stimuli while we rotated the cardboard floor underneath it.

The robot did a good job of stabilizing its orientation as we rotated its floor. We varied the feedback gain mentioned above and noted the following effects: (1) when the gain was set too low, the robot did not cancel out all rotation (i.e., there was "slip"); (2) when the gain was set too high, the robot would exhibit oscillations once the imposed rotation was halted. These oscillations showed a fixed amplitude and a frequency of around 1 Hz. It is interesting to note that similar oscillations have been observed in flies during closed-loop behavioral experiments.


Perhaps the most useful thing we learned from this experiment is the need for a system to quantitatively record robot movement. As we wish to move beyond anecdotal results, we must record the robot's position and (especially for this experiment) orientation with reasonably high precision. We have initiated discussions on various techniques for recording robot movement, and we hope that next year we will be able to conduct more robot/chip experiments of this type.

4.11 Locomotion of segmented lamprey-like robots

(Asli Arslan, Elizabeth Brauer, Avis Cohen, Steven DeWeerth, David Nicholson, Nici Schraudolph, Mario Simoni, Theron Stanford, Mark Tilden, Thelma Williams)

The Locomotor Work Group was formed with the intention of building and testing robots which would move. This objective was met through two robots based on the lamprey, an eel-like fish. The lamprey is a simple vertebrate with about 100 segments in the spinal cord. The first robot was built at the workshop with supplies brought by Mark Tilden. This segmented robot had a head plus 8 segments. The DeWeerth lab at the Georgia Institute of Technology brought an 11-segment lamprey model.

Figure 4.19: Oscillator circuit

The Telluride robot was built using an oscillator and motor attached to a metal ball in each segment. The oscillator used the configuration in Figure 4.19, with 2 coupled Schmitt triggers, capacitors, and 1 or more resistors. The resistor value(s) determine the frequency of oscillation. Two more Schmitt triggers provided the drive to the single motor on the segment. The motors were scavenged from Macintoshes (floppy-eject motors). The metal ball provided contact to the surface; the motor caused the ball to move back and forth as the oscillator changed phases. See Figure 4.20 for a photo of the robot.

The autonomous robot lamprey project was an attempt to mimic the rough behavior of a planar lamprey morphology using constrained, non-linear oscillator control. The robot used standard TTL control and 47%-efficient 216:1 pancake motors driven by standard nervous net (Nv) control boards, and was held together with silver solder, copper wire (for malleability), and superglue. It had nine segments shared over eight motors, with on-board power, signal generators, and passive visual sensors in its "head". The device was built at the workshop, primarily under direction from Avis Cohen to ensure reasonable structural accuracy, as far as could be managed with the mechanical compromises necessary for an artificial creature.


Figure 4.20: Lamprey robot

The first experiment was a determination of the frequency as a function of the segment resistance, using the cross-connection configuration with all other segments disabled. The result was f·R = 5.1e6, with less than 5% variation among the segments, where f is the frequency of oscillation and R is the resistance. The second experiment examined the effect of changing coupling strength from segment to segment. With ascending coupling stronger than descending coupling, the Telluride lamprey robot exhibited an ascending traveling wave in segments 5, 6, 7, and 8. See Figure 4.21. Unfortunately, the robot developed some circuit problems in the head segment and segments 3 and 4, and never worked well enough again to get data while we were at the workshop. Further experiments were planned to examine entrainment by an external signal, coupling of the two lamprey robots, and further variations in coupling strength.
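The measured relation can be used directly to choose segment resistors. A small Python check (illustrative; the 5.1e6 constant is the value reported above, and reading it as f in Hz times R in ohms is an assumption consistent with typical RC oscillator values):

```python
FR_PRODUCT = 5.1e6   # measured f * R product (assumed units: Hz * ohms)

def resistance_for_frequency(f_hz):
    """Segment resistance needed for a desired oscillation frequency."""
    return FR_PRODUCT / f_hz

def frequency_for_resistance(r_ohm):
    """Oscillation frequency produced by a given segment resistance."""
    return FR_PRODUCT / r_ohm
```

For example, under this reading a 510 kΩ segment resistor corresponds to a 10 Hz oscillation, and halving the resistance doubles the frequency.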

The DeWeerth group from Georgia Tech brought a biologically inspired lamprey model to compare to the Telluride robot. The DeWeerth lamprey consists of 11 nearest-neighbor-coupled segments, where each segment contains a pair of Morris-Lecar-type silicon neurons that are reciprocally inhibited. The intersegmental coupling consists of ascending and descending excitatory and inhibitory coupling.

The control experiment consisted of using excitatory and inhibitory coupling with the descending coupling being dominant. With this configuration, phase locking occurs with a total phase lag of about 100 degrees along the chain. The phase delay between segments, however, is not regular, due to the mismatch of the individual segments.

From this control configuration, all coupling was removed and the oscillators were tuned to have frequencies that varied less than 5% from each other. To keep this model similar to the Telluride robot, only excitatory coupling was added. Four experiments were then done by sweeping excitatory coupling in different fashions and measuring the outputs of 8 of the 11 segmental oscillators. The four experiments included: symmetric coupling, ascending coupling only, descending coupling only, and coupling in both directions with the descending coupling dominant. In the symmetric coupling experiment, as the coupling was increased, the frequencies of the individual oscillators in some segments changed by as much as 25%, but no phase locking occurred. In addition, if the coupling was increased too much, the oscillations died completely. In the ascending-coupling-only experiment, as the coupling strength was increased, the oscillators began to phase lock until the whole network became synchronous and in phase. There was no evidence of phase delays between segments. The same effects were observed for the descending-coupling-only case. In the experiment where the descending coupling was dominant, the end effect of increasing the coupling depended upon the absolute level of the ascending coupling. If the ascending coupling was too large, the network experienced oscillator death. However, if the ascending coupling was small enough, then


Figure 4.21: Traveling wave (descending phase, in degrees, plotted against segment number).

the network entered synchrony, as in the case with only ascending or descending coupling. In either case, phase delays between segments were not observed.
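The qualitative result for directional excitatory coupling can be reproduced with a toy model. The sketch below is a Kuramoto-style phase-oscillator chain in Python, not the Morris-Lecar silicon model; the frequencies, coupling strength, and step size are assumptions. With ascending coupling only and natural frequencies spread well under 5%, the chain settles into a locked state with only small phase differences between neighbors:

```python
import math

def ascending_chain_phase_lags(n=8, coupling=5.0, steps=4000, dt=0.005):
    """Simulate n phase oscillators with ascending coupling only (each
    oscillator is pulled toward the phase of its caudal neighbor) and
    return the settled neighbor-to-neighbor phase differences (radians).
    """
    freqs = [10.0 + 0.05 * i for i in range(n)]   # ~0.5% spread around 10 Hz
    phase = [0.1 * i for i in range(n)]
    for _ in range(steps):                        # forward-Euler integration
        phase = [
            phase[i] + dt * (
                2 * math.pi * freqs[i]
                + (coupling * math.sin(phase[i + 1] - phase[i]) if i + 1 < n else 0.0)
            )
            for i in range(n)
        ]
    return [(phase[i + 1] - phase[i]) % (2 * math.pi) for i in range(n - 1)]
```

With these settings the chain phase-locks nearly in phase; widening the spread of natural frequencies relative to the coupling prevents locking, consistent with the observation that phase locking requires matched oscillator frequencies at a given coupling strength.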

Figure 4.22 shows the frequency of oscillations for the 8 segments from which data were recorded. The horizontal axis is the number of the recording channel, where 1 is the head and 8 is the tail. The leftmost bar at each point is the frequency for no coupling, whereas the rightmost is for the strongest coupling. The vertical axis represents the frequency. According to these plots, phase locking can only occur if the frequencies of the oscillators are the same at any given coupling strength.

The work group leaders were Avis Cohen and Mark Tilden. The participants were Asli Arslan, Elizabeth Brauer, David Nicholson, Nici Schraudolph, Mario Simoni, Theron Stanford, Steven DeWeerth, and Thelma Williams. The group met twice per week to coordinate activities and for technical discussions by Mark Tilden (locomotion principles in various creatures such as E. coli bacteria, starfish, and leeches) and Thelma Williams (coupled oscillators). Group members assembled the Telluride robot under direction from Mark Tilden and conducted experiments.


Figure 4.22: Oscillator data. The four panels plot frequency (Hz) against recording channel (1–8) for the symmetric coupling, ascending-coupling-only, descending-coupling-only, and descending-coupling-dominant experiments.


Chapter 5

Auditory Processing

5.1 Introduction

(David Klein)

This section summarizes the activities of the auditory project group. Auditory project group members worked on projects encompassing a broad range of problems faced by hearing systems in real-world environments. These projects included the analysis of informative features in natural acoustic signals, peripheral auditory processing using electronic VLSI cochleas, fast and robust computation of single sound-source lateral angle using two acoustical sensors, and identification of speech in a real noisy environment. Additionally, group members were responsible for the pervasiveness of auditory themes in other project groups. Collaborative activities resulted in projects such as the production of spectro-temporal receptive fields using projective-field mapping of address-events from a 1-D sender array, dynamically learning a tonotopic-like address-event mapping from a 1-D sender array, and directing a binaural robot towards a sound-emitting target using electronic cochleas and a fast azimuth estimation routine.

The 1998 Neuromorphic Engineering Workshop proved to be an effective environment for students and researchers to become acquainted with and to work on projects inspired by what is known about biological auditory processing. With a substantial series of auditory-oriented lectures as a backdrop, members of the auditory project group worked on a number of problems relevant to hearing systems in real, noisy environments. As a result, important steps were made towards implementing systems with as few as two acoustical sensors and the ability to navigate and communicate using auditory sense data, either exclusively or as a supplement to other sensory modalities.

Members of the auditory project group proper were:

Andreas Andreou, Phil Brown, Didier Depireux, Mete Erturk, Reid Harrison, Dave Hillis, Tim Horiuchi, David Klein, Shihab Shamma, Jonathan Simon, Leslie Smith, Nino Srour, Andre van Schaik, and Thomas Zahn.

The activities of the auditory project group can be roughly arranged into three categories:

1. Projects which were concerned mainly with peripheral aspects of acoustical signal processing.

These projects covered issues such as the acoustical input to the system, cochlear filtering, and peripheral auditory system pre-processing.

2. Auditory localization projects.

These projects involved taking the output of a peripheral auditory system and computing the location of sound sources in a noisy environment.


3. Projects which were concerned with extracting the identity of sound sources, again using the information provided by an auditory front-end process.

Additionally, there were some fruitful collaborations between the auditory project group and other project groups. Most productive among these were the collaborations with the Address-Event Representation (AER) project group, the behaving robots project group, and the visual saliency project group.

5.2 Peripheral Auditory Processing

In order for a hearing system to use the information present in its acoustic environment for relatively complex tasks such as orientation and recognition, a front-end system must rapidly extract this information from the raw pressure waves impinging on its sensors and present it to more central auditory processes in a compact and clean form. In mammals, these operations are presumably performed by the ear and the neurons in the cochlear nucleus (CN). In order to build artificial hearing systems, it is beneficial both to understand what information is available in a real acoustical environment and to understand the processing that occurs in the cochleas and cochlear nuclei of animals.

Towards this end, auditory project group members had access to silicon cochleas, implemented with analog VLSI technology, and cochlear interface boards, implemented with discrete components and Field-Programmable Gate Arrays (FPGAs). Additionally, members had access to software simulations of peripheral auditory processing in MATLAB. In the following sub-sections, the projects performed with these tools at hand are described.

5.2.1 Analysis of informative features in natural acoustic signals

(Reid Harrison, Dave Hillis, Timmer Horiuchi, David Klein, Shihab Shamma, and Leslie Smith)

Prior to constructing a hearing system, it is helpful to know what kind of information is present in natural acoustic signals. Ultimately, the features which are deemed informative will depend on the task which is to be performed. The signals may be acquired passively, e.g., by listening, or actively, as they are in echolocation.

Members of this project group used both traditional signal processing tools and models of peripheral auditory system function to examine and characterize features present in acoustic signals in the context of several different tasks, such as locating and identifying sound sources. Pre-recorded speech and musical sounds were examined, as were echolocation pulses generated and recorded in-house. For example, Figure 5.1 shows the output of a peripheral auditory processing model implemented in MATLAB with a speech segment provided as input. Project members were able to identify the different features present in this representation, such as the harmonic and formant structures. By manipulating these features and inverting the representation back to an acoustic waveform, project members were able to experience how the different features affect the perception of the sound.

5.2.2 Auditory processing with electronic cochlea chips

(Andreas Andreou, Phil Brown, Mete Erturk, Andre van Schaik, and Leslie Smith)

It was obvious from the above analysis of natural acoustic signals that many interesting signals are characterized by a broad and heavily modulated spectral content which is constantly changing


Figure 5.1: Representation of a speech segment, "come home right away", at the output of the cochlear nucleus stage of the NSLTools MATLAB Auditory Toolbox. Rows of this "auditory spectrogram" indicate the time-varying activity of neurons tuned to different narrow frequency bands. Darker areas indicate higher levels of activity.

in time (see Figure 5.1). Thus, it is evidently useful to have access to a continuous measure of the spectral energy of the received signal. Most animals do have access to this information; in the inner ear, a cochlea or an analogous structure transduces the acoustic waveform into a topographically organized pattern of excitation of a population of auditory nerve (AN) fibers. In other words, different frequencies in the signal excite different groups of AN fibers. Conversely, a given AN fiber will only be excited by a subset of the audible frequencies.

The cochlea is hence well approximated functionally as a bank of band-pass filters. However, implementing the massive filtering operation of the cochlea digitally is a computationally intensive and time-consuming task. Thus, it is beneficial to implement such filtering with analog VLSI circuits. These circuits can be made compact and low-power, and they operate in real time.
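A software stand-in for such a filter bank is easy to write, though far slower than the silicon. Below is a minimal Python/SciPy sketch; the channel count, logarithmic spacing, and ±30% bandwidth are assumptions, not the chips' parameters.

```python
import numpy as np
from scipy.signal import butter, lfilter

def cochlea_filterbank(signal, fs, n_channels=16, f_lo=100.0, f_hi=4000.0):
    """Band-pass filter bank with log-spaced center frequencies, ordered
    high to low as along the basilar membrane (base to apex).
    Returns (centers, outputs) with outputs shaped (n_channels, n_samples).
    """
    centers = np.geomspace(f_hi, f_lo, n_channels)
    outputs = []
    for fc in centers:
        # second-order Butterworth band-pass, roughly 2/3 of an octave wide
        b, a = butter(2, [fc / 1.3, fc * 1.3], btype="bandpass", fs=fs)
        outputs.append(lfilter(b, a, signal))
    return centers, np.array(outputs)
```

Feeding a pure tone through the bank excites mainly the channels whose pass bands cover the tone's frequency, the software analogue of place coding on the auditory nerve.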

The members of this project group had access to two analog VLSI cochleas, brought to the workshop by Andre van Schaik and Andreas Andreou. Project members familiarized themselves with the design of the cochleas and were able to adjust parameters such as the bandwidth of the filters and the total bandwidth of the filter bank. The outputs of the cochleas were monitored and recorded as different sounds were presented to the chips. Ultimately, the cochleas were used in an auditory localization system, to be detailed below. Project members were also familiarized with the problems associated with implementing the filtering in analog VLSI technology, such as defective channels and mismatches between different cochleas.


5.2.3 Hardware realization of signal normalization, noise reduction, and feature enhancement on the output of a cochlear chip

(Phil Brown, Mete Erturk, Shihab Shamma, and Jonathan Simon)

By processing incoming signals with silicon cochleas, it is presumed that the informative features in the signals are more effectively evaluated in the spectro-temporal domain. However, the signals at the output of the cochleas are still far from ideal; they suffer from environmental noise corruption, offset mismatch across channels and across cochleas, and sometimes altogether defective channels.

To some extent, biological hearing systems must also cope with such problems, and there appear to be mechanisms present in the peripheral auditory system to minimize noise and channel mismatches while enhancing informative features. These mechanisms were reduced to a series of simple functional descriptions and implemented in hardware with discrete components and FPGAs, as schematized in Figure 5.2. These processes were implemented on printed circuit boards which served as interfaces between the cochleas and a higher-level processor; the data at the output of the boards were read into a computer, on which higher-level computations, such as those described in the sections below, were performed. Project members were responsible for building the cochlear interface boards and were able to exploit the flexibility of the board design to optimize the performance of the boards for a given cochlea and a given task.

Figure 5.2: Block diagram of the cochlear interface board

5.3 Auditory Localization

Extracting the position of sound sources in a noisy environment is a well-studied but still insufficiently solved problem. For this reason, it is beneficial to study how biological hearing systems are able to successfully perform this task.

There are several cues which can be exploited by a hearing system with two acoustical sensors in order to estimate the position of a sound source. Important among these are the inter-aural time difference (ITD) and inter-aural level difference (ILD), which change systematically with the position of the sound source. Additionally, the pinnae of mammals shape the spectral characteristics of a received sound in a manner which is specific to the sound-source position.
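To give a sense of the scale of the ITD cue, the far-field path-length-difference formula below is a textbook idealization with an assumed sensor spacing, not anything built at the workshop; the ITD varies with the sine of the lateral angle:

```python
import math

SPEED_OF_SOUND = 343.0        # m/s in air at room temperature

def itd_seconds(azimuth_deg, sensor_separation_m=0.2):
    """Far-field ITD: extra path length d*sin(theta) divided by c.

    Real heads add a wave-around-the-head term (the Woodworth model), but
    this captures the monotonic angle dependence a localizer exploits.
    """
    return sensor_separation_m * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND
```

For 20 cm sensor spacing the ITD spans only about ±0.58 ms between the extreme lateral angles, which is why ITD-based localization demands high temporal precision.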


In mammals, a sub-population of neurons in the CN of each hemisphere projects bilaterally to the Superior Olivary Complex (SOC), where the first computations of sound-source position are presumed to take place. Analogously, members of the auditory project group had access to the outputs of the peripheral processes made available by the projects elaborated above, and could implement algorithms similar to those presumed to reside in the SOC. Additionally, project group members were able to compare different algorithms in order to assess which gave the best performance and which could be most easily implemented (see also Section 4.4).

5.3.1 Computation of sound-source lateral angle by a binaural cross-correlation network

(Phil Brown, Jonathan Simon, and Thomas Zahn)

The lateral (azimuthal) angle of a sound source can be determined from the ITD of the sound received at two spatially separated acoustical sensors. Not surprisingly, there are neurons in the SOC which are tuned to specific ITD values. However, the method by which these ITDs are computed by the system is still largely unknown.

One way to compute the ITD is to perform a cross-correlation between the signals received at each ear. The cross-correlation function C(τ) is defined here as

    C(τ) = ∫ s_c(t) · s_i(t + τ) dt

where s_c and s_i are the signals received at the contra-lateral and ipsi-lateral ears, respectively. Ideally, the lag-location of the peak of the cross-correlation function would correspond to the ITD.

The estimate of the ITD is made more robust by the frequency analysis of the cochlea: an estimate of the ITD can be obtained in each frequency band. The final estimate can be achieved by simply averaging the results across all frequencies. However, more complicated schemes may be employed if, for example, there are multiple sources with different spectral emissions.
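In discrete time the same computation is a one-liner per band. Here is a Python/NumPy sketch (the sign convention and sampling-rate handling are assumptions; the workshop implementation ran in MATLAB on the cochlear-band outputs):

```python
import numpy as np

def itd_from_xcorr(s_left, s_right, fs):
    """Estimate the ITD as the lag of the cross-correlation peak.

    With a cochlear filter bank in front, the same estimate is formed in
    each frequency band and the results averaged across bands.
    """
    corr = np.correlate(s_left, s_right, mode="full")
    lag = np.argmax(corr) - (len(s_right) - 1)
    return -lag / fs    # positive when the sound reaches the left sensor first
```

The resolution of this estimate is one sample period, so the usable ITD precision is set directly by the sampling rate of the acquisition system.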

Project members implemented cross-correlation networks in two ways. The first was by building a network of model neurons in software, as schematized in Figure 5.3. The output of each cochlear filter is passed through a series of delay lines. In the center of the network, model neurons, each tuned to a specific frequency, receive input from a pair of delay stations, one from each ear. Thus, depending on which delay stations a neuron receives input from, it will be maximally excited when the relative delay between the two ears is at a certain value. It was therefore possible to ascertain the ITD by monitoring the pattern of activity of this central population of neurons.

Because the above network was implemented in software, including the cochlear filtering, there was a severe limit to the amount of data the network could process and still work in real time. For this reason, an onset detector was implemented before the filtering. When an onset was detected, signaled by a sharp rise in the signal envelope, a short (… ms) data sequence was processed by the network. Unsurprisingly, the network performed best for transient stimuli, e.g., hand-claps.

Alternatively, project members tried more traditional signal processing techniques for computing the cross-correlation between the outputs of the two cochleas. The cross-correlation was implemented in MATLAB using the data read in from the cochlear interface boards. Since the filtering was performed instantaneously by the cochlear chips, and the cross-correlation operation was faster, it was possible to process larger data sequences (… ms) than in the previous scheme. However, the processing time was approximately equal to the data duration. Thus, the system was most sensitive to continuous stimuli, with transient stimuli being missed about half of the time. An example of the output of the cross-correlation algorithm for one frame of data is shown in Figure 5.6.


Figure 5.3: A cross-correlation network implemented with model neurons for estimating ITDs. See text for an explanation.

Figure 5.4 shows the performance of the non-neuronal cross-correlation network operating on the outputs of the cochlear chips with a single brown-noise source as input. Due to mismatches between the cochleas, there are systematic errors in the estimated azimuthal angle, especially at the larger angles (i.e., larger ITDs). However, the curve of estimated versus real angle is monotonic and saturating, which makes these errors easier to cope with in a control task, as evidenced in the collaboration with the behaving-robots group.

5.3.2 Computation of sound-source lateral angle by a stereausis network

(Phil Brown, Didier Depireux, David Klein, and Jonathan Simon)

Despite the simple concepts involved in implementing a cross-correlation network, there is little evidence that these types of networks, employing massive numbers of precisely arrayed delay lines, are actually implemented in real biological systems. A somewhat more plausible scheme is a "stereausis" network, which compares the spatial disparity of the AN excitation patterns from each ear. Because the cochlea itself acts as a delay line from the basal to the apical end, central neurons which compare inputs from AN fibers coming from both ears effectively also act as ITD detectors within the frequency band to which they are tuned. Such a network is schematized in Figure 5.5.


Figure 5.4: Performance of the cross-correlation network for estimating ITDs.

Project members implemented stereausis networks in two ways, both of which used the outputs of the cochlear chips as the inputs to the network. The simplest implementation involved matrix-multiplying the data coming from the two ears (see Figure 5.6). The rows of each matrix were the outputs of the cochlear filter bands, and the columns were the time series. The matrices were multiplied such that the result was m by m. Matrix elements along the main diagonal in essence signal the presence of identical patterns in each ear. Elements off the main diagonal signify a spatially (and hence temporally) shifted pattern in one ear relative to the other. The computation of the matrix multiplication in MATLAB was found to be significantly faster and algorithmically simpler than the previously implemented cross-correlation computation. Performance of this stereausis network is shown in Figure 5.7.
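The matrix-multiplication step can be sketched directly. Here left_bank and right_bank stand in for the cochlear-chip outputs, and the channel count and frame length are illustrative:

```python
import numpy as np

def stereausis(left_bank, right_bank):
    """Stereausis representation via matrix multiplication.

    left_bank, right_bank: (m, n) arrays of m cochlear-filter outputs
    over n time samples.  Entry (i, j) of the m-by-m product correlates
    channel i of one ear with channel j of the other; energy on the main
    diagonal signals matched excitation patterns, energy off-diagonal a
    spatial (and hence temporal) shift between the ears.
    """
    return left_bank @ right_bank.T

def dominant_diagonal(S):
    """Index of the diagonal with the largest mean value (0 = main)."""
    m = S.shape[0]
    offs = list(range(-(m - 1), m))
    means = [float(np.mean(np.diagonal(S, k))) for k in offs]
    return offs[int(np.argmax(means))]
```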

A stereausis network was also implemented using model linear-threshold neurons in a neural-network modeling package called Xmorph, provided by Paul Verschure. A central 2-D grid of neurons received binaural inputs from the cochlear interface boards in the manner described above. Again, the center diagonal elements of the network signaled identical patterns in each ear, while off-center diagonal elements signaled non-zero ITDs. Figure 5.8 shows the activity of the diagonals of the network over a short period for a zero-ITD narrow-band input. Notice that the most consistent activation is along the central diagonal. This is made clearer by averaging the activity over the entire duration, also shown in Figure 5.8. Peaks off the diagonal are due to the narrow-band nature of the input signal.

5.4 Acoustic Pattern Recognition

Along with estimating the sound-source location, it may be necessary to determine the identity of the source and possibly to communicate with it. These tasks require classifying the features at the output of the cochlea in a fast and efficient manner. The classification demands a compact spectro-temporal feature decomposition, which is presumed to occur in the primary auditory cortex


Figure 5.5: A stereausis network used to estimate ITD. See text for an explanation.

(AI) of mammals.

The projects concerned with acoustic pattern recognition took the output of a peripheral auditory process (a MATLAB model) operating on noisy speech signals and attempted to classify patterns of informative features in a fashion similar to that of AI. Towards this end, it was helpful to review current speech identification and recognition systems, how they work, and how they may be improved.

5.4.1 Identification of speech in real noisy environments using a model of auditorycortical processing

(Dave Hillis, David Klein, Shihab Shamma, and Nino Srour)

Project members had access to recordings in which there were invariably multiple speakers in an extremely noisy environment. Despite the noise, human listeners are able to easily and reliably detect the presence or absence of the vocalizations. The goal of this project was to develop an algorithm by which a machine could detect the presence or absence of the speech as reliably.

Towards this end, a software model of the auditory system, provided by Shihab Shamma, was employed. The software model spanned the processing from the cochlea to the cortex. Especially useful for this task was the cortical processing stage, in which the different scales of spectral and


Figure 5.6: A frame of data from both cochlear interface boards (i.e., "ears") is multiplied to produce a stereausis representation (shown bottom left). The overlaid dashed line indicates the main diagonal. On the bottom right is the result of the cross-correlation between the two ears. The overlaid curve is the data averaged over all channels. Both representations indicate that the sound source is on the left side of the head.

temporal modulations present in the signal are made explicit. By selectively filtering the signal at the cortical level, spectral and temporal modulations not relevant to human speech could in essence be ignored. It was found that by filtering the signal in this way, the voicing previously embedded in the noise was enhanced.

5.4.2 Review of current prospects and limitations in speech recognition systems

(Andreas Andreou and David Klein)

Project members reviewed several current speech recognition systems which employ processing stages inspired by what is known about human speech production and recognition. Speech recognition systems typically employ a peripheral stage in which the input signal is ultimately reduced to a short series of spectral features. The output of the peripheral processing is then used as the input to a feature classifier which, after a training period, is able to classify the series of features as words. Much of the discussion was concerned with the final stages of the peripheral processing, in which the feature space is defined. Alternative feature representations were discussed in the light of


Figure 5.7: Evaluation of the performance of the stereausis network.

Figure 5.8: Behavior of a stereausis network implemented in Xmorph, excited by a zero-ITD narrow-band input. (a) Output of the network organized by diagonal number. The central diagonal (0) signals zero ITD. (b) Time average of the activity shown in (a). Side-peaks are due to the narrow-band nature of the input.

what is known about the spectral analysis performed by the mammalian auditory cortex.


5.5 Collaborative Efforts

The collaborations between the auditory project group and the other project groups fell into two categories. First, there were collaborations which focused on the hardware realization of some of the auditory processes which had previously been relegated to software. The other collaborations were concerned with fusing the output of the auditory system with other sensory and/or motor systems with the goal of accomplishing some task, e.g., approaching salient objects.

Details of the collaborations will not be given here; the several projects will only be summarized. For additional detail, please consult the progress reports of the project groups of which these projects are considered a part.

5.5.1 Production of spectro-temporal receptive fields using projective-field mappingof address-events from a 1-D sender array

(Timmer Horiuchi and David Klein)

This project was performed as part of the AER project group. Due to the absence of a working cochlear chip with address-event outputs, it was necessary to use a 1-D retina with address-event outputs as a substitute for this project. Patterns of excitation along the "cochlea" were substituted with visual stimuli. For example, sinusoidal spectral profiles were simulated by focusing images of sinusoidal gratings onto the focal plane of the chip.

The goal of this project was to produce meaningful spectro-temporal receptive fields in a receiver chip. This was to be accomplished by manipulating the address projections from the sender to the receiver. A given sender address could be projected to a number of receiver neurons at different times. Received addresses were either excitatory or inhibitory. Excitatory addresses increased the stored charge at the receiver neuron by one unit. Inhibitory addresses prevented the increase of stored charge for a short time by blocking the reception of other excitatory addresses. The receiver neurons were leaky, so that the charge would continually decrease if no additional charge was added. If the stored charge on a given neuron exceeded a set threshold, that neuron was signaled as "active". The patterns in space and time with which the addresses were projected were identical for all sender neurons and determined which spatio-temporal patterns on the sender would activate neurons at the receiver. Actual data from the 1-D retina chip was used to simulate the projective-field mappings in MATLAB.
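The receiver-neuron dynamics described above can be sketched as follows. The leak rate, threshold, and inhibitory blocking time are placeholder values, not the chip's:

```python
def simulate_receiver(events, leak=0.9, threshold=4.0, inhib_block=3):
    """Leaky integrate-and-threshold receiver driven by address-events.

    events: list of (t, sign) with integer time steps; sign is +1
    (excitatory) or -1 (inhibitory), all targeting one receiver neuron.
    As in the text: an excitatory event adds one unit of charge; an
    inhibitory event blocks excitatory reception for `inhib_block`
    steps; charge decays by `leak` each step.  Returns the time steps
    at which the neuron was 'active' (charge above threshold).
    """
    t_end = max(t for t, _ in events) + 1
    by_time = {}
    for t, s in events:
        by_time.setdefault(t, []).append(s)
    charge, blocked_until, active = 0.0, -1, []
    for t in range(t_end):
        charge *= leak                           # leaky decay each step
        for s in by_time.get(t, []):
            if s < 0:
                blocked_until = t + inhib_block  # inhibition blocks input
            elif t >= blocked_until:
                charge += 1.0                    # one unit per excitatory event
        if charge > threshold:
            active.append(t)
    return active
```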

5.5.2 Directing a binaural robot towards a sound-emitting target

(Phil Brown, Mete Erturk, David Klein, Paul Verschure, and Thomas Zahn)

This project resulted from a collaboration between the auditory project group and the behaving-robots project group. The two localization projects, detailed above, were alternately used as the "brains" of the Koala robots. The task performed by the robots was simply to locate and approach sound sources. Both robots were equipped with a pair of small microphones as their only sensors (on the second robot, the microphones were embedded in an attached dummy head, as depicted in Figure 5.9). Typically, the results of the lateral-angle estimation were mapped with some gain to a turning velocity or turning duration. Due to the differences between the two localization algorithms, one robot would approach transient targets, e.g., hand-claps, while the other would approach targets which continually emitted sounds, e.g., music or noise. In the second robot, both the cross-correlation and stereausis methods of estimating sound-source lateral angle were employed. In all cases, the robots performed admirably. Videos of the robots in action were recorded and are available.


Figure 5.9: Photo of robot with binaural head orienting itself towards a sound-emitting speaker.

5.5.3 An auditory complement to visual saliency

(David Klein, Max Rauschaber, and Paul Verschure)

This project resulted from a collaboration between the auditory project group and the visual-saliency project group. A motor program to control a Khepera robot to move towards visually salient objects was implemented. The goal was to provide an auditory complement to the visual saliency maps. For example, sound sources could be localized, and visual targets at those locations could be enhanced. Greater auditory saliency could be assigned to objects emitting louder sounds and/or sounds which are changing more rapidly.

5.5.4 Making Pinna Casts

(Timmer Horiuchi and Andre Van Schaik)

In an experiment to measure the spectro-temporal transfer characteristics of human pinnae, we began an effort to make plaster casts of Timmer's ears. Using a "body-parts" casting kit, left and right pinnae were successfully cast; however, some of the details of the concha and ear canal were difficult to preserve due to the protective ear plugs that were worn to protect the ear drums. Unfortunately, we ran out of time, and the transfer characteristics have not yet been measured because the detailed carving required to recreate the concha and ear canal has not been finished. We hope to finish some of the measurements in Prof. Shamma's laboratory in the coming months.

5.6 Retro- and Pro-spectives

Auditory project members were largely successful in their endeavors to study and implement hearing systems with the ability to locate and identify relatively simple sounds in noisy environments. In part, the success can be attributed to the work done prior to the workshop: cochleas and cochlear interface boards were fabricated and tested, and some projects were conceived before the start of the workshop. Additionally, the group benefited from a relatively strong showing from


Figure 5.10: Photo of Timmer's left pinna. Using a plaster-cast system, both pinnae were cast with the intention of recording their spectro-temporal characteristics.

the auditory community. There were more lectures and personnel present than there had been in past years. This aided the development of additional projects that were conceived during the workshop. This year also saw a relative abundance of auditory themes pervading other project groups' work.

Although activity on some of the projects rose and fell quickly and other projects were merely discussions, it is certainly encouraging to reflect on the connections established between all of the projects detailed in this report, both within the auditory project group and in collaboration with other groups. The different projects spanned signal theory, analog filtering, feature enhancement and noise suppression, tonotopic projection, spectro-temporal receptive field generation, sound localization, feature classification, multi-sensory fusion, motor action, and more. As work on these projects continues both inside and outside the workshop, the project groups are likely to make significant strides towards implementing a complex and effective biologically inspired auditory system.

Many project group members are committed to continuing the work started at the workshop. Many of the projects will be continued through collaborations between members at the University of Maryland, College Park (UMCP) and Johns Hopkins University (JHU): Pamela Abshire, Andreas Andreou, Phil Brown, Marc Cohen, Didier Depireux, Mete Erturk, Tim Horiuchi, David Klein, Shihab Shamma, and Jonathan Simon. Meetings are expected to be held once a month, and the following projects will be continued: Auditory Localization, Speech Recognition, AER Production of Spectro-temporal Receptive Fields, and AER Production of Tonotopic Mapping. Discourse between the UMCP group and Thomas Zahn has continued, and continued work between the UMCP group and Paul Verschure of the University of Zurich is likely.

Primary among the topics not sufficiently addressed this year is the problem of dealing with multiple sound sources. The algorithms implemented at the workshop were structured so that the loudest or most salient sound source wins. This is an insufficient solution in general, as there are likely to be multiple sources of interest to a real system. Additionally, a particular source of interest may be partially masked by louder distracting sources. The problem of disentangling sound sources will necessitate integrating source separation, saliency, recognition, localization, and attention systems in future workshops.


Chapter 6

Address Event Representation

6.1 Introduction

(Kwabena Boahen and Timmer Horiuchi)

The address-event workgroup focused its energy on systems-level projects that utilized existing chips interfaced to either the Koala robot platform or the laboratory PCs. This year we had projects involving both 1-D and 2-D retinas as well as development of the serial AER protocol. More of the focus this year was on the interface into other systems and how the data would be used, in contrast to previous years, where the focus tended to settle on making the chips function properly under non-ideal conditions. It should be noted that the Telluride environment can be a notably harsh one, leaving the experimenter to deal with wildly changing conditions in humidity (from rain to wicked, static-generating dryness), temperature, and lighting (strong sunlight to dim, flickery fluorescent bulbs). Projects this year ranged from multi-chip 1-D stereo to 2-D AER remapping.

6.2 AER-based 1-D Stereo Work Group

(Alan Stocker, Yuri Lopez De Meneses, Charles Wilson, Tim Horiuchi, and Alberto Pesavento)

In this group, our goal was to combine two 1-D AER-transmitting vision chips (designed by Tim Horiuchi) in order to create a 1-D stereo system. The group split into two subgroups, one which focused on stereo algorithms and one which assembled and tested the hardware interface.

Tim Horiuchi had brought one working vision board that sent spikes to a PC parallel port for the recording of data. A second board was built and tuned to provide outputs similar to the first board. The two AER output streams were merged using a PIC microcontroller (Microchip Inc.), and new software was written to accept the stereo data. A set of recordings of various static and moving stimuli was made for use in the software subgroup's simulation. Figure 6.1 shows the two boards viewing a pair of low-contrast targets.

Alan Stocker and Yuri Lopez De Meneses worked on a software model of stereo-correspondence proposed by Misha Mahowald. They first worked with data from a one-dimensional retina mounted on a Khepera robot and then later switched to data acquired from the AER-based vision system.

In this model of stereopsis, multiple binocular images from different feature-detector arrays are fed into correlator arrays which compute the stereo disparity matches for each cyclopean angle. For each cyclopean angle there is an analog cell which receives activity from each of the disparity cells in its column and reports the weighted-average disparity for that column (cyclopean angle). Two mechanisms are used to resolve conflicts when there are several possible disparities at one location: positive feedback and a winner-take-all (WTA) circuit.


With positive feedback, the closer the position (disparity) of a correlator cell is to the analog cell's output value (the average), the higher the feedback gain. By introducing one inhibitory cell per column, the positive feedback and the WTA mechanism ensure that false matches are suppressed. The analog cells are necessary to enable interactions with neighboring correlator columns. This is essential to fulfill the constraint of smooth disparity changes across space. The positive feedback and the analog cell also ensure that no strong outliers win, by performing a centroid computation.
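The conflict-resolution idea, positive feedback toward the column's centroid combined with normalizing competition, can be sketched for a single column. This is an illustrative reduction of the scheme described in the text, not the Mahowald model itself, and the feedback gain and iteration count are assumptions:

```python
import numpy as np

def resolve_disparity(corr, n_iter=10, gain=0.5):
    """Resolve false matches in one cyclopean-angle column.

    corr[d] is the raw correlator activity at disparity d.  Each
    iteration (1) computes the analog cell's centroid (the
    activity-weighted mean disparity), (2) boosts each correlator in
    proportion to its closeness to that centroid (positive feedback),
    and (3) normalizes, acting as a soft WTA, so that a coherent match
    wins and outliers are suppressed.  Returns (winning disparity,
    final normalized activity).
    """
    a = np.asarray(corr, dtype=float)
    d = np.arange(len(a))
    for _ in range(n_iter):
        centroid = np.dot(d, a) / a.sum()          # analog cell's report
        closeness = 1.0 / (1.0 + np.abs(d - centroid))
        a = a * (1.0 + gain * closeness)           # positive feedback
        a /= a.sum()                               # normalizing competition
    return int(np.argmax(a)), a
```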

Figure 6.1: Photo of the stereo vision setup using two one-dimensional AER-based imagers. Each chip transmitted the location of spatial derivatives and motion signals at each pixel location. These signals were transmitted to a computer using asynchronous spike trains in a rate-coded manner.

6.3 Line-Following Robot Using an Address-Event Optical Sensor

(Philip Hafliger and Tim Horiuchi)

The principal aim of this project was to interface an optical aVLSI sensor chip with address-event (AER) output to the 16 MHz 68C331 Motorola processor that controls the Koala robot (K-Team, Lausanne). The Koala was then programmed to perform a simple line-following task.

Address-event representation (AER) is becoming a popular method for interconnecting spiking hardware devices. This time-multiplexing approach is especially well matched to neural models which require a large number of point-to-point connections that are sparsely used.

Accordingly, the optical sensor chip used here produces AER output. It computes the spatial derivative of the light intensity along a horizontal line; that is, it detects vertical edges. Its resolution is 40 pixels. It also represents the relative motion of the edges on two additional 40-pixel one-dimensional arrays, each coding one direction. The latter two arrays were not used in this project.

Performance in line following can be improved by using adaptive aVLSI optical sensors that are able to reliably detect edges in various lighting conditions, sparing the processor computationally expensive preprocessing. This had already been demonstrated at last year's workshop. In order to test the applicability of AER communication in this configuration, we reproduced the experiment that had been done using an optical chip with scanned analog output.

The AER interface on the Koala was realized using interrupt-driven handshaking. The optical chip's request line triggers an interrupt on the Koala, which then reads in the address on the AER bus and acknowledges the reception. Following the protocol, the chip resets its request line, then the Koala resets its acknowledge signal. The address points to a position in the optical array, and the value at


Figure 6.2: (1) Recordings from the binocular system of 1-D AER retinas. The address space is divided between the two retinas, each providing three different features for each image location: spatial-intensity gradient, rightward motion, and leftward motion. (2) Raw left and right retinal images, representing the temporally integrated spike train (over 500 ms: 9.0 s < t < 9.5 s) from the recording in (1). Note that the input dimension matches the number of features, 3x40. Since the spike frequency of the motion-sensitive neurons is relatively low, the motion features are almost invisible in the grayscale representation. (3) Activity of the correlator array shows cells responding to particular disparities and cyclopean angles due to the retinal input. On the left, local interconnections are disabled and false targets appear. On the right, the positive feedback to the disparity cells (not shown), the winner-take-all along columns of the same cyclopean angle, and the competition between winning cells in columns of different cyclopean angles take place. As a result, only the true target matches remain, indicating two objects lying almost in the same, slightly positive, disparity plane.


Figure 6.3: System block diagram of the tracking system. On the left, a 1-D AER-based vision chip feeds edge and motion detection information to the Koala robot platform via address-events. Software on board the Koala performs tracking and directs the motors.

that position gets incremented. After that, the interrupt handler gives up control.

At regular time intervals, interrupts are disabled and another procedure reads the address histogram, updates the motor output, resets the optical array, and re-enables interrupts. To compute the new motor settings, a winner-take-all algorithm with hysteresis is run over the address histogram. A PID feedback controller then computes the motor settings to correct deviations from the center. The feedback loop is depicted in Figure 6.3.
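The control loop's two computational pieces, a WTA with hysteresis over the address histogram and a PID controller, can be sketched as follows. The gains and the hysteresis factor are placeholders, not the values used on the Koala:

```python
def pick_winner(histogram, prev, hysteresis=1.2):
    """Winner-take-all over the edge-address histogram, with hysteresis:
    a new bin must beat the previous winner's count by `hysteresis`x
    to take over, which keeps tracking stable between frames."""
    best = max(range(len(histogram)), key=histogram.__getitem__)
    if prev is not None and histogram[best] < hysteresis * histogram[prev]:
        return prev
    return best

class PID:
    """Minimal PID controller mapping pixel error to a steering command."""
    def __init__(self, kp=1.0, ki=0.0, kd=0.2, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In the loop, the error would be the winning address minus the array midpoint (pixel 20 of 40), and the PID output would be mapped to a differential motor command.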

The robot performed well; we let it track a light gray cable on a gray carpet at various times of day and under various lighting conditions. The robot sometimes lost track in sharp curves. This can be blamed on the mounting of the optical chip on the Koala: the field of view was set too far ahead. In sharp curves the robot corrected its direction too early, and the angle between the cable and the robot's line of observation diminished substantially, which made the edge detection increasingly difficult, up to the point where it failed completely. That problem could be solved by focusing the optical chip on a line just in front of the robot. Shallow curves were followed reliably.

6.4 2D Address-Event Senders and Receivers: Implementing Direction-Selectivity and Orientation-Tuning

(Kwabena Boahen, Masahide Nomura, Eduardo Ros Vidal, and Rufin Van Rullen)

Orientation-tuned and disparity-tuned binocular receptive fields were implemented in previous workshops. This year, we sought to implement direction-selective receptive fields. A new retinomorphic chip, developed by Kwabena Boahen, made it possible to implement this motion computation without using axonal or cortical delays. Thus, the receiver chips and projective-field processors used in previous workshops proved adequate for the task. The motion algorithm, the retinomorphic chip's outputs, and the direction-selective cells' responses are presented in Figure 6.4.

We implemented three of the four DS cell types using a 3x64x64-neuron address-event receiver board, with a pair of 5-MIPS microcontrollers (Microchip PIC16X57) computing projective fields for each receiver chip in real time. The microcontrollers simulated a virtual receptive field by copying and offsetting address-events from the retinomorphic chip. They sent these remapped address-events to the receiver chips, which generated excitatory post-synaptic potentials and performed leaky integration (each chip has 64x64 diode-capacitor integrators). The responses of cells on the


three receiver chips were displayed on an RGB monitor. Each DS cell received inputs from 4x2x5 ganglion cells, and its receptive field spanned 160 photoreceptors (there is a factor-of-2 subsampling at the ganglion cell level).
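The copy-and-offset operation performed by the microcontrollers can be sketched as follows. The tap offsets are illustrative, not the actual projective fields programmed into the PICs:

```python
def project_events(events, offsets):
    """Form a virtual projective field by copying and offsetting events.

    events: iterable of (t, x, y) address-events from the sender chip.
    offsets: list of (dt, dx, dy) taps; each incoming event is re-issued
    once per tap, shifted in time and space, so that a downstream leaky
    integrator sees a spatio-temporally oriented receptive field, the
    basis of the direction-selective cells described in the text.
    """
    out = []
    for (t, x, y) in events:
        for (dt, dx, dy) in offsets:
            out.append((t + dt, x + dx, y + dy))
    out.sort()          # deliver the remapped events in time order
    return out
```

A spatio-temporally slanted set of taps (e.g., increasing dx with increasing dt) makes the receiver integrate maximally for stimuli moving in the matching direction and speed, with no explicit delay lines in the receiver itself.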

Our preliminary results show that this two-dimensional, multichip motion architecture has promise. They also demonstrate the flexibility and utility of virtual receptive and projective fields based on the address-event representation when the computational primitives are encoded as spike trains. The system will be exhaustively tested, characterized, and improved over the coming year at the University of Pennsylvania. We realized the need for faster projective-field processors, and we plan to explore FPGA-based and custom-VLSI solutions.

In the second half of the workshop, we attempted to self-organize these orientation-tuned, direction-selective receptive fields (i.e., learn the mapping from retina to cortex for these four cell types automatically). Since no cortical delays are required, we believed it would be possible to achieve this simply by wiring together neurons that fire together. We designed and implemented a simple axonal growth mechanism on the microcontrollers, loosely based on activity-generated diffusible agents (e.g., nitric oxide) and chemotaxis. Unfortunately, we did not have enough time to debug the code. We plan to debug and test the algorithm at the University of Pennsylvania and hope to report success at next year's workshop.

6.5 One-Dimensional AER-based Remapping Project

(Marc Cohen, Tim Edwards, Theron Stanford, Gert Cauwenberghs, Andreas Andreou, and Pamela Abshire)

In this project, we continued an effort begun at last year's Telluride meeting (see "Adaptive Address-Event Router and Retinal Cortical Maps" in the 1997 report) to demonstrate an adaptive routing mechanism for address-events between a sender and a receiver, modeling the formation of tonotopic maps from the cochlea up to auditory cortex or, alternatively, of topology-preserving maps from retina to visual cortex. The goal was to dynamically reorganize the sender-to-receiver address map, which was initially random, so that the final map preserved the spatial regularity of the sender chip. The constraint on the re-mapping operation was that only the activations of the receiver chip could be observed. Furthermore, only short sequences of activations could be held in memory prior to making a re-mapping decision. Re-mapping algorithms were first tried in software, and then an attempt was made to implement them in hardware using PIC controllers as the mapping devices.
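One way to make the re-mapping idea concrete is a greedy software sketch: when the training stimulus sweeps smoothly across the sender, consecutive receiver activations should land on neighboring addresses, so random LUT swaps are kept only when they reduce the total jump between consecutive activations. This is a heuristic illustration, not the algorithm tried at the workshop:

```python
import random

def learn_topography(lut, train_seq, n_passes=200, rng=None):
    """Greedily reorganize a random sender->receiver look-up table (LUT)
    toward a topographic map, observing only receiver activations.

    lut: initial (random) mapping, lut[sender_addr] -> receiver_addr.
    train_seq: sender addresses activated by a smoothly sweeping stimulus.
    Random swaps of two LUT entries are kept when they reduce the total
    jump between consecutive receiver activations.  Returns the learned
    LUT and its final cost.
    """
    rng = rng or random.Random(0)
    lut = list(lut)                         # work on a copy

    def cost(table):
        out = [table[a] for a in train_seq]
        return sum(abs(b - a) for a, b in zip(out, out[1:]))

    best = cost(lut)
    for _ in range(n_passes):
        i, j = rng.randrange(len(lut)), rng.randrange(len(lut))
        lut[i], lut[j] = lut[j], lut[i]     # trial swap
        c = cost(lut)
        if c < best:
            best = c                        # keep the improvement
        else:
            lut[i], lut[j] = lut[j], lut[i]  # revert
    return lut, best
```

Note that this toy version keeps the whole training sequence in memory; the hardware constraint described above (only short activation sequences held at a time) would require an incremental variant.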

The objective this year was two-fold: to explore further the biological plausibility of the learning scheme, and to implement the learning algorithm on a microcontroller that transforms the addresses of events in a neuromorphic vision system. A PIC microcontroller-based board was built (PIC17C44 and PIC17C43) and programmed; however, a number of persistent hardware bugs prevented the system from being completed. A one-dimensional AER-output imager chip that outputs salient features based on motion and contrast was prepared for interfacing. We did test a number of new algorithms on some of the real-time data that was recorded from Tim Horiuchi's 1-D AER retina. The project will be continued at Johns Hopkins University.

6.6 Simulating AER-Cochlear Inputs With the 1-D AER Vision Chip

(Tim Horiuchi, Mete Erturk, and David Klein)

In this project, we attempted to use the 1-D AER vision chip supplied by Tim to simulate a silicon cochlea with AER outputs. Our primary effort was placed on attempting to simulate the detection of pitch.


Figure 6.4: (a) Four types of direction-selective (DS) receptive fields: two directions of motion times two signs of contrast. Four types of retinal ganglion cells provide input to these DS cells: (i) ON-sustained (labeled '++' or green); (ii) OFF-sustained (labeled '--' or red); (iii) ON-transient (labeled '-+' or yellow); (iv) OFF-transient (labeled '+-' or blue). (b) Raster plots and histograms of spike trains from the retinomorphic chip that models these four retinal ganglion cell types. We showed the chip vertical black and white bars moving horizontally. We recorded spike trains from 4x52 neurons lying on a vertical line (column 26 of the array). We used 20-ms bins to create the histogram; the neurons spiked at a mean rate of 5 Hz. (c) Responses to motion in the preferred and null directions for one type of DS cell over a range of speeds. The response was defined as the peak deviation in the cell's membrane voltage; 16 stimulus-synchronized records were averaged. The cell was direction-selective for speeds spanning at least one decade.


Figure 6.5: (a) The remapping problem: initial connections between the sender and the receiver (depicted here as a visual processing chip receiving images through a lens) are neither one-to-one nor topographic. Our goal state is one in which topography is preserved from the sender to the receiver. (b) The hardware architecture we are constructing to simulate the learning problem. An AER-based sender chip generates events on the AER bus, which are received by a microcontroller-based look-up table (LUT) that monitors incoming events and re-routes them according to the LUT. A training phase is to be generated, in which activity on the sender array moves continuously in space, providing the microcontroller with data to rearrange the LUT entries based on the statistics of the training data.

A spatio-temporal pattern was created that would produce the AER outputs expected from a silicon cochlea receiving a pure tone as input. This pattern was shown to the 1-D retina and the resulting spike trains were recorded. In particular, we were interested in recording the responses from the motion detector units, which are tuned to slow motions in a particular direction. This type of response is consistent with what might be detected from the cochlear outputs. By detecting the location where the pressure wave (and thus the hair cell output) slows down (where the phase of the wave changes rapidly along the length of the cochlea), the frequency of the pitch is determined.

Although we created and recorded responses to simulated spectral-ripple stimuli, we did not have enough time to incorporate these data into any simulations.

6.7 Serial Address-Event Representation

(Philippe Pouliquen)

This report summarizes the work done at the 1998 Telluride Workshop on Neuromorphic Engineering on Serial Address-Event Representation.

The Address-Event Representation work-group (led by Timmer Horiuchi) was subdivided into many smaller work-groups, one of which was the Serial Address-Event Representation (SAER) work-group. The participants of this work-group were Andreas Andreou (The Johns Hopkins University, Baltimore, MD, USA), Philippe Pouliquen (The Johns Hopkins University, Baltimore, MD, USA), and Peter Stepien (University of Sydney, Australia).

The task that we assigned ourselves was to design and prototype some of the basic building blocks needed for SAER systems. The building blocks of interest were: a computer interface card capable of sending and receiving SAER packets, a universal SAER routing block, and converter blocks for converting between conventional AER and SAER circuits.

This report begins by describing the salient features of SAER, followed by a description of each of the three building-block projects we worked on.


6.7.1 SAER Basics

Like conventional Address-Event Representation (AER), SAER is an electrical specification for inter-chip communication that makes extensive use of asynchronous circuits.

However, conventional AER is a parallel bundled-data asynchronous protocol. This means that it requires many wires to send data bits in parallel, plus two extra wires for handshaking, as shown in Figure 6.6.

Figure 6.6: Example of a conventional AER inter-chip communication wiring.

The time relationship between these signals is shown in Figure 6.7. The sender begins by setting the data lines to their appropriate state and then asserts the Request line to indicate to the receiver that there is valid data on the data lines. When the receiver has latched the data, it asserts the Acknowledge line to indicate to the sender that it has read the data. The sender is now free to stop driving the data lines, or to change their states. In any event, the sender then de-asserts the Request line, whereupon the receiver de-asserts the Acknowledge line, and the cycle may begin again. (Note that it is possible to shorten the cycle length by transmitting data on each transition of the Request line.)
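As an illustration (not part of the workshop hardware or software), the four-phase cycle just described can be sketched as a toy simulation; all function and variable names here are invented for the example:

```python
def four_phase_send(words):
    """Simulate the four-phase AER handshake for a list of address words.

    Returns the sequence of (data, request, acknowledge) bus states so
    the protocol's ordering can be inspected.
    """
    trace = []
    data, req, ack = None, 0, 0
    for word in words:
        data = word          # 1. sender drives the data lines
        req = 1              # 2. sender asserts Request: data now valid
        trace.append((data, req, ack))
        ack = 1              # 3. receiver latches data, asserts Acknowledge
        trace.append((data, req, ack))
        req = 0              # 4. sender de-asserts Request
        trace.append((data, req, ack))
        ack = 0              # 5. receiver de-asserts Acknowledge; cycle done
        trace.append((data, req, ack))
    return trace

# Each word costs four bus transitions (hence "four-phase").
trace = four_phase_send([0x2A, 0x15])
assert len(trace) == 8
# Data is guaranteed valid while both Request and Acknowledge are asserted:
received = [d for (d, req, ack) in trace if req == 1 and ack == 1]
assert received == [0x2A, 0x15]
```

This makes the ordering constraint explicit: the receiver only samples the data lines while Request is asserted, which is exactly why skew between data and Request matters in the parallel protocol.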

Figure 6.7: Example of a conventional AER inter-chip communication signaling. Two data words are sent consecutively using a four-phase handshaking protocol.

In contrast, the basic SAER is a true delay-insensitive asynchronous protocol which uses (at least) four wires, as shown in Figure 6.8. The data bits are transmitted serially to reduce the complexity of the circuits and the number of interconnect wires.

The time relationship between these signals is shown in Figure 6.9. The sender begins by asserting either the True or the False line (depending on the value of the first bit to transmit). When the receiver has recognized the transmitted bit, it asserts the Acknowledge line. The sender then de-asserts the True or the False line (depending on which one had been previously asserted), whereupon the receiver de-asserts the Acknowledge line. The cycle then repeats until all the data bits have been transmitted. At this point a special cycle occurs using the Stop line. This line is used analogously to the start/stop bits used in other serial protocols, to indicate to the receiver that the entire data word has been transmitted.
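The dual-rail cycle above can be sketched as an encoder/decoder pair; this is an illustrative abstraction of the line events, not the actual FPGA logic, and the event names are invented for the example:

```python
def saer_encode(bits):
    """Encode a bit string as a sequence of SAER line events.

    Each bit becomes one handshake on the True or False wire; a final
    handshake on the Stop wire delimits the word. Because only one data
    wire is asserted per cycle, the receiver never needs to race a
    separate Request line against the data (delay insensitivity).
    """
    events = []
    for b in bits:
        events.append('True' if b == '1' else 'False')
        events.append('Ack')    # receiver acknowledges the bit
    events.append('Stop')       # start/stop-style end-of-word cycle
    events.append('Ack')
    return events

def saer_decode(events):
    """Recover the bit string from the event sequence."""
    bits = []
    for line in events:
        if line == 'True':
            bits.append('1')
        elif line == 'False':
            bits.append('0')
        elif line == 'Stop':
            break
    return ''.join(bits)

# The Figure 6.9 example word round-trips through the protocol:
assert saer_decode(saer_encode('1011')) == '1011'
```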


Figure 6.8: Example of a serial AER inter-chip communication wiring.

Figure 6.9: Example of a serial AER inter-chip communication signaling. The data being transmitted is 1011.

In practice, most of the Address-Events that we are interested in transmitting contain separate X and Y coordinate words, and an additional cell-type or magnitude word. We have therefore augmented the four wires with two additional sub-packet delimiter signals to simplify some of the processing that we need to do.

The resulting wiring is shown in Figure 6.10. An example of the time relationship between these signals is shown in Figure 6.11. Note that the start/stop bit functionality has been folded into the last sub-packet delimiter signal.

Figure 6.10: Example of the serial AER inter-chip communication wiring.

6.7.2 Pros and Cons of SAER

There are three principal advantages to using serial rather than conventional AER for inter-chip communication.


Figure 6.11: Example of a serial AER inter-chip communication signaling. The data being transmitted is X=011, Y=110, and Z=101.

First, of course, is that SAER uses fewer wires than conventional AER in most cases. This means that building large systems with many VLSI chips all inter-communicating with SAER is feasible, whereas conventional AER quickly becomes a wiring nightmare. Furthermore, each VLSI chip can have many more SAER ports, so that it can communicate directly with many other VLSI chips before package pins become a limitation (note that this argument assumes that the parallel bundled data lines in conventional AER are not shared between ports). Finally, SAER can be used to communicate over long distances as well, because only a small number of special line drivers would be needed.

Secondly, SAER is truly delay insensitive. In other words, regardless of the propagation delay differences between the lines, the transmitted data will not be corrupted. In contrast, conventional AER requires that the data lines be stable and valid at the receiver before the Request line is asserted. Therefore, if the propagation delay along one of the data lines is greater than the propagation delay along the handshaking line, the receiver may latch corrupt data. Conventional AER therefore requires much more careful VLSI chip layout and board-level assembly to ensure data integrity, and worst-case timing must always be assumed in calculating signal speeds.

Thirdly, the SAER packet format is very flexible. Any number of True or False bits and sub-packet delimiters can be sent before the final Stop delimiter for the packet to be valid. For instance, we have designed special SAER routing blocks to merge or split SAER data streams which work independently of the SAER packet size, and therefore do not need to be redesigned each time the packet format changes! In contrast, one cannot easily interconnect conventional AER VLSI chips that have differing numbers of data lines.

Each advantage of SAER also has a corresponding disadvantage (this is the nature of design trade-offs).

By reducing the number of wires, we have made the sending and receiving circuits that much more complicated. To make SAER work, we need to introduce asynchronous shift registers, FIFOs, multiplexers, etc. However, we feel that the additional circuitry is a small price to pay for the increased robustness and flexibility of SAER.

Also, by switching to a serial protocol instead of a parallel protocol, we have seemingly slowed down the transmission rate of address-events. However, we have eliminated the time overhead in conventional AER which forced us to assume worst-case timing: for instance, where conventional AER may require us to assert Request at least 100 ns after driving the data lines to ensure data integrity, SAER has no such built-in overheads. Furthermore, since we have fewer lines to drive, we can dedicate more VLSI silicon area to the pad drivers, and therefore drive the VLSI chip pins at a higher rate. We therefore expect to end up with a SAER packet rate at least comparable to that of conventional AER.

6.7.3 SAER cabling standard

We have chosen to use conventional 100baseT ethernet wire and connectors for our SAER interface cable, as they are easily obtainable, and the wire itself has already been standardized.

This wire uses four twisted pairs (or eight wires), of which we use one wire for each of True, False, X, Y, Z, Acknowledge, and signal ground. The eighth wire (called the Request line) is driven by each sender to the logical OR of True, False, X, Y, and Z. This allows us to terminate a sender's port by tying the Request line directly to the Acknowledge line. A receiver's port is terminated by tying all the lines except Acknowledge to ground. This was done to allow us to remove any component of a SAER system by appropriately terminating the ports that the component was connected to.

A diagram of the SAER plug pin-out (as found on the end of each cable) is shown in Figure 6.12. Note that the non-terminating sub-packet delimiters are on the outside edge, so that cheaper 6-conductor wires can be used when the sub-packet delimiters are not needed.

Figure 6.12: SAER plug pin-out. T: True, F: False, R: Request, A: Acknowledge, G: Signal Ground.

6.7.4 Project 1: The SAER computer interface board

Before building any SAER VLSI chips or boards, we needed a method of synthesizing and capturing SAER packets. Although a digital signal analyzer can be used to capture single SAER packets between a sender and a receiver, it cannot totally replace the receiver (because it doesn't generate an acknowledge signal), and it cannot operate at high throughputs (usually because of limited memory capacity). We have therefore designed and built a SAER transceiver with a built-in IBM PC/ISA interface based on a single ACTEL FPGA.

Although the board had been built prior to the Telluride Workshop, the initial ACTEL FPGA prototype wasn't functional. The control circuitry was redesigned with the help of another Workshop attendee, Kwabena Boahen (University of Pennsylvania, PA, USA). A new FPGA was burned at the workshop and tested by connecting the sender and receiver ports to each other. The new FPGA was found to be functional except in the case where a long (greater than 10 feet) coiled wire was used to connect the two ports (if the wire was stretched out, the transceiver worked fine).

The packet format used by the transceiver is fixed and is as follows: a 4-bit magnitude portion, followed by two 6-bit coordinates. When the sender port was terminated, we found that the packet length was on the order of 6 microseconds. We expect to improve on this in the future by using faster FPGAs, or multiple FPGAs.

6.7.5 Project 2: The SAER universal routing block

The Address-Event protocol is essentially a point-to-point protocol. That is, each sender connects to one and only one receiver. However, in an AER system, we may want to simultaneously send the same packet to multiple receivers or, conversely, merge multiple packet streams into one.

For instance, we may have a single silicon retina, but multiple feature detector chips (such as motion detector chips). In order to connect the silicon retina to the motion detector chips, we need a routing block called a broadcast block. Now suppose also that one of the motion detector chips detects bright-dark edges moving to the left, while another detects dark-bright edges moving to the left. If we want to detect all edges moving to the left, we need to combine the outputs of the two motion detector chips using a routing block called a join block.

Alternatively, suppose we want to use a new silicon retina which has four different types of cells in each pixel, with older receivers. The first two bits of each SAER packet indicate the cell type, and we want the activity of each cell type to go to a different receiver. To do this, we use a block called a split block, which examines the first few bits of each packet, strips them off, and sends the remainder of the packet to only one of its output ports, depending on the pattern of the first few bits. Similarly, a merge block joins two or more packet streams and prepends bits to each packet that indicate where the packet came from.
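The split/merge semantics can be caricatured in software, treating packets as bit strings. This is a sketch of the routing logic only; the bit-level details are illustrative, not the actual SAER packet layout, and a real merge block would interleave packets by arrival order rather than concatenate streams:

```python
def merge(stream_a, stream_b):
    """Merge two packet streams, prepending a source bit to each packet
    (0 = first stream, 1 = second). Packets are bit strings, MSB first.
    For simplicity the streams are concatenated rather than interleaved."""
    return ['0' + p for p in stream_a] + ['1' + p for p in stream_b]

def split(stream):
    """Inverse of merge: steer each packet by its first bit, stripping
    that bit off before forwarding (as the split block does)."""
    out0 = [p[1:] for p in stream if p[0] == '0']
    out1 = [p[1:] for p in stream if p[0] == '1']
    return out0, out1

# Two hypothetical retina streams survive a merge/split round trip:
retina_a = ['0110', '0001']
retina_b = ['1010']
merged = merge(retina_a, retina_b)
assert split(merged) == (retina_a, retina_b)
```

The round-trip property is the key design point: because the steering bit is prepended on merge and stripped on split, the blocks compose without either one needing to know the downstream packet format.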

One of the targets of current SAER development is to design and implement each of these four blocks. We therefore designed a circuit suitable for an ACTEL FPGA which could implement any of the four blocks (depending on the state of two configuration pins). Furthermore, each ACTEL FPGA can hold three of these circuits, allowing a wide variety of SAER topologies.

We did not program any FPGAs, because our simulations showed that one of the four configurations (merge) was not operating properly. This circuit will be further examined at the University of Sydney.

6.7.6 Project 3: The SAER-to-AER converter blocks

Once the computer interface card was completed, the FPGA circuit was split in two to produce separate parallel-to-serial and serial-to-parallel converter chips. We assembled two boards (one for each FPGA), each containing one FPGA socket, a 50-pin connector of the type used on conventional AER boards, and one SAER socket.

There was some confusion at this point about the pin-out of the 50-pin connector. As shown in Figure 6.13, the pin-out depends on what you connect to. The silicon retina and receiver boards built by Kwabena Boahen have a single active-high Request line and a single active-high Acknowledge line. However, in the past we have used a National Instruments LAB-NB board (which emulates an Intel 8255 I/O chip) as either a sender or a receiver. Because the LAB-NB board uses two distinct 8-bit ports for the data portion of the AER protocol, it has two Request lines and two Acknowledge lines! Furthermore, depending on whether it is sending or receiving, it sometimes uses completely different lines for handshaking. And finally, most of the time, the LAB-NB signals are active low. There are therefore three different possible pin-outs for the 50-pin connector, and each FPGA has two configuration bits so that the user can specify which pin-out should be used.

Figure 6.13: Conventional AER 50-pin connector pin-out.

We burned the FPGAs, interconnected the two boards with a short piece of ethernet wire, and attempted to use the setup between a working silicon retina (sender) board and a working receiver board built by Kwabena Boahen. Unfortunately, this setup did not work, and there was insufficient time to narrow down the problem. These boards will be further examined at The Johns Hopkins University.

6.8 Serial AER Merger/Splitter

(Peter Stepien)

The Address Event Representation (AER) bus is an emerging standard used for communicating between neuromorphic chips. The need has arisen due to the limited number of pins available on integrated circuit (IC) packaging. An AER bus is used for point-to-point communication of the address of a neuron which has spiked. This is carried out in real time so that only the address needs to be transmitted. The address is conveyed as a parallel word of sufficient length. A serial version of the bus has also been proposed, which further reduces the number of pins required at the cost of a reduction in bandwidth.

The serial version also has a provision for breaking the address sent into three sub-groups: an X address, a Y address, and a Z address. The X and Y addresses refer to two-dimensional data such as a silicon retina. The Z address refers to the type of data that is being sent. Typically the addresses are sent in reverse order, in the sequence Z, Y, and X.

Since the AER bus is point-to-point, there is no implicit way to connect a number of devices into one. Some 'glue' logic is required. The project undertaken was to design, in digital hardware, four building blocks for connecting a number of neuromorphic chips together. The four blocks are:

1. Merge two streams into one.

2. Merge two streams into one and add an extra address bit to indicate which path it came from.

3. Split a single stream into two.

4. Split a single stream into two based on the value of the first address bit in the message, after removing the first address bit.

These building blocks could be used to merge or join a number of neuromorphic chips together. An example is where two silicon retinas may have their outputs merged into one stream to go into another chip for further processing. The address bit used for steering the data is the Z part of the address.

The four building blocks were designed and simulated using diglog. They were then combined into one design in which two pins are used to select which of the four functional blocks is required.

The final implementation was on an ACTEL A1020B FPGA. This chip has sufficient pins and logic capacity to include three copies of the design, each of which can be programmed individually to perform one of the four blocks. Having three in one device could be used to merge four neuromorphic chips into one or to split one neuromorphic chip into four.

6.9 FPGA Implementation of a Spike-based Displacement Integrator

(James J. Clark)

In the address-event group I did a project, on my own, involving the development of a spike-based digital implementation of the so-called "displacement integrator" (sometimes called the "burst integrator") which forms a part of the human oculomotor system (Wurtz 1996). This was designed and implemented on a single Altera Flex10K20 FPGA chip using the Altera student development system.

The motivation behind this project was three-fold:

1. To learn the use of the Altera FPGA development system (MAX+PLUS II).

2. To develop a digital implementation of a spiking neuron.

3. To implement and test a neural network model of the oculomotor "displacement integrator".

6.9.1 The Displacement Integrator

The current view of the human oculomotor system is summarized in Figure 6.14.

Figure 6.14: Oculomotor system.

This project is concerned with the block labelled "Displacement Integrator". The purpose of this block is to convert a commanded eye velocity into an estimate of the current eye position. This so-called "efference copy" is compared with the initial target position, resulting in a motor error which is used to drive the eye muscles (after another integration, this time through the "neural integrator", which provides a tonic control signal to the eye muscles).

It had long been thought that the displacement integrator function was performed in the brainstem, along with the neural integrator, and that this integration was performed on time- (spike-rate-) coded velocity signals. Current thinking, however, places the displacement integrator in the "buildup" layer of the superior colliculus (Wurtz 1996; Optican 1995). Furthermore, while the input to the displacement integrator does appear to be a time- or rate-coded signal, the output appears to be represented in a distributed place- or population-code.

Optican (1995) hinted at a neural model for implementing such a time-code to space-code integrator, but has not, as yet, published any details. Based on the sketchy ideas presented in Optican (1995), we propose such a neural model, which is depicted in Figure 6.15. This model consists of a network of asymmetrically, laterally connected integrate-and-fire neurons. We show just a one-dimensional network, but the idea is easily extended to two dimensions. Each neuron has three excitatory inputs and one inhibitory input. One of the excitatory inputs provides self-excitation sufficient to "hold" the current activity level. Another excitatory input comes from a "position" input. This is to allow visual input to initialize the state of the integrator (to reflect the location of the saccade target before the saccade, perhaps). The other excitatory input comes from the output of the neuron's immediate (rightward) neighbor. This input is "gated" by input from a so-called velocity input. When there is activity on the velocity input, the activity of the neighboring neuron is passed on to the excitatory synapse. Likewise, the inhibitory input to the neuron, which comes from the neuron's own output, is also gated by the velocity input. When there is activity on the velocity input, then, the neuron is self-inhibited, which tends to reduce its activity, but it is facilitated by whatever activity is generated by its rightward neighbor. In this way, neuron activity in the network is passed from right to left upon receipt of spikes on the velocity inputs.
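A rate-based caricature of this gated shifting scheme can be written down directly. The transfer fraction and network size below are arbitrary choices for illustration, and spiking is abstracted away entirely:

```python
def step_velocity_spike(activity, frac=0.5):
    """Apply one velocity spike to a 1-D gated network (rate-based sketch).

    Each neuron loses a fraction of its own activity (the gated
    self-inhibition) and gains the same fraction of its rightward
    neighbor's activity (the gated lateral excitation), so the activity
    bump shifts leftward a little with every velocity spike.
    """
    n = len(activity)
    new = list(activity)
    for i in range(n):
        right = activity[i + 1] if i + 1 < n else 0.0
        new[i] = activity[i] - frac * activity[i] + frac * right
    return new

# A bump initialized at the rightmost neuron drifts leftward as
# velocity spikes arrive, converting spike count into position:
act = [0.0] * 5
act[4] = 1.0
for _ in range(8):
    act = step_velocity_spike(act)
assert max(range(5), key=lambda i: act[i]) < 4   # bump has moved left
```

Note the bump also spreads as it moves in this caricature; in the spiking model the self-excitation helps re-sharpen the activity profile.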


Figure 6.15: The displacement integrator network model: a chain of neurons with position inputs and outputs, a shared velocity input, and gated synapses.

6.9.2 Digital Implementation

We developed a digital circuit that implements the displacement integrator model described above. Each neuron was implemented as a 6-bit up/down counter. The count of the neuron's counter is a representation of the activity level of the neuron. The neuron emits a "spike" whenever the count exceeds the value of a random number. The random numbers were generated using a linear feedback shift register circuit. The functioning of excitatory and inhibitory synapses was implemented by incrementing the counter for each spike received on an excitatory synapse and decrementing it for each spike received on an inhibitory synapse. When a velocity spike was received, the counter was decremented by a fraction of the current count (implementing the gated self-inhibition) and incremented by a fraction of the current count of the rightward neighbor (implementing the gated lateral excitation).
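The counter-plus-LFSR spiking scheme for a single neuron can be sketched as follows; the LFSR tap choice, the increment size, and the cycle count are illustrative, not the values used in the actual FPGA design:

```python
def lfsr6(state):
    """One step of a simple 6-bit linear feedback shift register,
    used as the per-cycle pseudo-random threshold. From a nonzero
    seed it never reaches zero."""
    bit = ((state >> 5) ^ state) & 1
    return ((state << 1) | bit) & 0x3F

def run_neuron(excitatory_spikes, cycles=64, seed=1):
    """Counter neuron: 6-bit activity count with stochastic spiking.

    The count increments on each excitatory input spike, and the
    neuron emits a spike on any cycle where the count exceeds the
    LFSR's current value, so spike probability grows with activity.
    """
    count, state, out = 0, seed, []
    for t in range(cycles):
        if t in excitatory_spikes:
            count = min(count + 8, 63)   # excitatory synapse: increment
        state = lfsr6(state)
        out.append(count > state)        # spike if count beats random value
    return out

# With no input the count stays at zero and the neuron is silent;
# excitatory input raises the count and hence the spike rate:
assert sum(run_neuron(set())) == 0
assert sum(run_neuron({0, 1, 2, 3})) > 0
```

Comparing the count against a fresh pseudo-random value each cycle is what turns a deterministic counter into a rate-coded stochastic spike generator.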

The overall circuit is shown in Figure 6.16, while the schematic of the individual neuron circuit is shown in Figure 6.17. A 19-neuron system was implemented in an Altera FLEX10K20 FPGA chip, using the Altera MAX+PLUS II development software tools.

6.9.3 Testing

The displacement integrator network was simulated as well as downloaded into the FPGA on the Altera UP1 student development board.

Two different networks were simulated, differing only in the amount of activity transferred during each input velocity spike. The simulation results are shown in Figures 6.18 and 6.19. Figure 6.18 shows the case with a higher weight on the lateral excitation. Note that as the velocity spikes are received, the activity in the network passes down the line, as desired. Note also that the activity

Figure 6.16: Circuit diagram.

Figure 6.17: Individual neuron circuit.

Figure 6.18: Simulation results.

spreads more quickly in the case of the higher lateral excitation weighting.

A simple demonstration was downloaded to the development board. In this demo, one of the push-buttons on the board was used to gate a clock into the input of the first neuron in the chain, while another push-button was used to gate a clock into the velocity inputs of each neuron. The outputs of 16 of the 19 neurons were connected to the segments of the two 7-segment LEDs located on the board. A typical run of the demo proceeded as follows: first the clock to neuron 1's input was gated through; the first LED segment would begin to flash as neuron 1 began to increase its activity. Next the clock to the velocity inputs would be gated through, whereupon successive LED segments would begin to flash, with a procession through the chain of neurons. When the velocity gate was released, the propagation of activity down the chain would cease, and the activity of each neuron would slowly decay away.

6.9.4 Conclusions

We demonstrated that our neural model of the displacement integrator functions as desired, and it should serve as the basis for a more realistic biological model.

The particular FPGA that we used could probably be programmed to hold a network of twice the size that was implemented. There are larger chips in this particular Altera family; the largest chip holds approximately 10 times as many gates as the one we used, so we could probably get 400 or so neurons on a single FPGA using our current design. There are some simplifications that we could make to increase the number of neurons we can pack in, such as reducing the number of bits used in the neuron counters. It may also be possible, because of the large mismatch in speed between the FPGA propagation delays (about 50 ns) and the time-scale of neural systems (about 1 ms), to time-multiplex the logic. This may give us a 1- to 10,000-fold increase in the number of neurons that we can implement in an FPGA. Thus, I feel that FPGAs do have promise for building large-scale silicon cortex building blocks.

Figure 6.19: Simulation results.

Acknowledgements: I would like to thank Altera for donating a UP1 development board with FLEX10K20 and MAX7128 gate array chips, and associated development software.

References:

Optican, L.M. (1995), "A field theory of saccade generation: Temporal-to-spatial transform in the superior colliculus", Vision Research, Vol. 35, No. 23-24, pp. 3313-3320.

Wurtz, R.H. (1996), "Vision for the control of movement: The Friedenwald lecture", Investigative Ophthalmology and Visual Science, Vol. 37, No. 11, pp. 2131-2145.


Chapter 7

Discussion Groups

7.1 The "What is Computation?" Discussion Group

(Pamela Abshire)

Traditional methods of assessing computation are applicable primarily within a digital computing framework. These methods typically involve determining the complexity of calculation for particular algorithms. This complexity can be measured in terms of utilization of resources such as time, size, length, memory requirements, etc. It is unclear how these methods pertain to many interesting physical computing systems, including quantum, molecular, analog, and neural systems. As neuromorphic engineers, we are particularly concerned with the latter two: analog and neural systems.

The discussion group met three times. The first meeting resulted in many disagreements about what is meant by computation and about what formal methods are relevant to the study of physical computing systems. While some people believe that Turing machines might provide an acceptable theoretical framework, others remain skeptical. A few groups are pursuing information theoretic methods for quantifying performance of analog and neural systems. In the next two meetings participants discussed such information theoretic approaches to analysing performance.

At the second meeting Pamela Abshire presented a physical model for the communication capacity of an inverter, introducing concepts from information theory as appropriate. Starting with a symbolic description, she derived the entropy and mutual information of NAND and NOT. From this she computed the capacity, the maximum information rate which can be transmitted. Then she considered a simple two-transistor CMOS implementation of an inverter, and introduced realistic physical models for the noise and signal transfer from input to output. Using these models, she derived a measure of channel capacity for the simple inverter implementation. This analysis can be applied to understand the effect of physical parameters on the overall processing, and to compare different implementations performing similar functions. The performance metric for comparison would be channel capacity, a fundamental physical property of any communication channel, or, for a task-related measure, the information rate for a particular input ensemble.
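Her inverter derivation is not reproduced in this report, but the quantities involved (entropy, mutual information, and capacity as the maximum information rate over input ensembles) can be illustrated on a generic binary symmetric channel; the error rate below is an arbitrary illustrative parameter, not a measured circuit value:

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def mutual_info(p, eps):
    """I(X;Y) for input P(X=1)=p sent through a binary symmetric channel
    that flips each bit with probability eps: I = H(Y) - H(Y|X)."""
    py1 = p * (1 - eps) + (1 - p) * eps   # P(Y=1)
    return h2(py1) - h2(eps)               # H(Y|X) = h2(eps) for this channel

def capacity(eps, grid=1001):
    """Capacity = maximum of I(X;Y) over input distributions (grid search)."""
    return max(mutual_info(i / (grid - 1), eps) for i in range(grid))

print(round(capacity(0.0), 3))   # noiseless channel: 1.0 bit per use
print(round(capacity(0.1), 3))   # C = 1 - h2(0.1) = 0.531
```

The same recipe (a signal-transfer model plus a noise model, then a maximization over input ensembles) is what the physical inverter analysis carries out with transistor noise in place of the bit-flip probability.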

At the third meeting Christof Koch described his work on signal detection and estimation in noisy dendrites. He and Amit Manwani are working to extend cable theory to include neuronal noise sources. He discussed two paradigms for quantifying neural computation, signal reconstruction and signal detection theory, and applied them to a linear cable model with noise sources. Signal reconstruction investigates how much of the input signal is accounted for by the system's output. It provides two metrics for quantifying signal transmission: coding fraction, which is the fraction of the variance of the original signal which is reconstructed; and mutual information between input and output. Signal detection poses a task, such as a two-alternative forced choice to decide whether there was a stimulus or not, and constructs the optimal detector for that task. The probabilities for misses and false alarms provide performance metrics.
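The coding fraction can be made concrete on a toy Gaussian channel (the signal and noise statistics below are arbitrary illustrative choices, not taken from the cable model):

```python
import random

random.seed(0)
n = 20000
noise_sd = 0.5
signal = [random.gauss(0.0, 1.0) for _ in range(n)]
output = [s + random.gauss(0.0, noise_sd) for s in signal]   # noisy transmission

# Optimal linear (minimum mean-square error) reconstruction of the input
# from the output: s_hat = mean_s + a * (y - mean_y), a = Cov(s,y) / Var(y).
mean_s = sum(signal) / n
mean_y = sum(output) / n
cov_sy = sum((s - mean_s) * (y - mean_y) for s, y in zip(signal, output)) / n
var_y = sum((y - mean_y) ** 2 for y in output) / n
var_s = sum((s - mean_s) ** 2 for s in signal) / n
recon = [mean_s + (cov_sy / var_y) * (y - mean_y) for y in output]

# Coding fraction: share of the signal variance captured by the reconstruction.
mse = sum((s - r) ** 2 for s, r in zip(signal, recon)) / n
coding_fraction = 1.0 - mse / var_s
# Theory for this channel: Var(s) / (Var(s) + Var(noise)) = 1 / 1.25 = 0.8
print(0.75 < coding_fraction < 0.85)
```

A coding fraction of 1 means perfect reconstruction; 0 means the output carries no linearly recoverable trace of the input.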

The meetings of this group enjoyed lively, sometimes heated, discussion. Most of the discussion topics fall under three categories: benchmarks, technical feasibility, and performance-related measures. The first of these topics, benchmarks, are commonly used to assess performance, for example in computer science or in neural networks. It is possible that a set of benchmark problems would be useful in assessing complexity in analog and neural systems. One measure of complexity might be the cost of tweaking a particular structure to make it compute some benchmark function. Sometimes different systems perform the same (or similar) benchmark function, and those systems could be compared. However, it is also possible that benchmarks will not lead to new insights in biology, since biological systems seek specific solutions to specific problems, not general solutions to benchmark problems.

The second of the topics relates to the technical feasibility of information theoretic methods; they can require explicit linearized models or reams of experimental data. The investigator needs to understand a great deal about a system in order to apply any of these methods. She needs to understand, in detail, what sort of noise models are suitable and realistic for analog and neural systems. Furthermore, she must decide what level of system descriptions are suitable (i.e. input-output description, detailed biophysics, quantum mechanics). And once a description of performance is complete, the next step would be comparison among various systems; however, we just don't know that much about costs in biology, so assigning relative costs to such design tradeoffs as extra dendrites versus extra power verges dangerously close to omphaloskepsis. It is not yet clear whether these information theoretic methods can say anything about higher cortical areas, beyond the periphery.

General-purpose computing isn't the goal of neurobiology. Using the term, computation, in reference to an analog or neural system, usually implies a particular task under consideration; this suggests the final topic, performance-related measures. A commonly recurring refrain is that only task-related measures are ultimately of interest. (During one of the meetings Christof Koch remarked, "Darwin is the only general theory we have.") Perhaps it would be feasible to construct benchmarks which are related to tasks of interest such as face recognition, speech recognition, motion detection. Given such tasks, it would be necessary to define fidelity criteria for comparison. Issues mentioned above, such as the appropriate level at which to define a task, pertain here as well. Very few systems currently allow a description which connects to behavior or motor systems. In the meantime, progress must continue at intermediate level task descriptions.

This discussion group attracted many participants and produced lively discussions. Perhaps, for those working in neuromorphic engineering from an empirical or heuristic or design perspective, the possibility of theoretical foundations is enticing. A computational theory of analog and neural systems could also provide a more substantial link between those biological wetware technologies which we attempt to understand or emulate and those artificial technologies which we engineer and design.

7.2 Neuromorphic Systems for Prosthetics

(L. Smith, P. Abshire, A. Arslan, A. van Schaik, M. Lades, D. Nicholson)

The aim of this group was to discover where neuromorphic systems were being applied and might be applied in the prosthetics field. The group was not restricted to discussing exactly what could be done with existing technology, but also discussed what might be possible given (likely) developments in technology. We noted that fields are generally driven not just by improvements in technology, but also by the possibility of new application areas.

7.2.1 Motivation

It is clear from this meeting that the primary motivations for developing neuromorphic systems are:

- Computational neuroscience – i.e. the development of systems for improving understanding of how the brain works.

- To develop new systems (in the broadest sense of the word), systems which cannot easily be implemented using more traditional technologies.

Prosthetics is one aspect of new systems that this group is interested in: we focus on

- what can be done

- what can nearly be done

- what we would need to be able to do to achieve goals in prosthetics.

7.2.2 Sensing

We identify areas where neuromorphic systems have been or might be applied in sensing:

- Hearing: there is already a large body of work in this area, but very little application of aVLSI technologies. Most hearing aids currently use digital signal processing techniques. Either different frequency bands of the sound signal are selectively amplified (or some more complex sound-to-sound transform applied), or, in the case of cochlear implants, stimulation is applied to the organ of Corti inside the cochlea; the nerves of the spiral ganglion are stimulated. However, because the cochlea is spiral in form, it is not (currently) possible to apply signals to more than about one quarter of the cochlea. Different products use differing numbers of electrodes, and different preprocessing techniques. Users of all forms of auditory prostheses need to learn to hear using these new signals.
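The first processing style, selective amplification of frequency bands, can be sketched with a generic DFT filterbank; the band edges, gain, and test tones below are illustrative and not taken from any particular hearing aid:

```python
import cmath
import math

def dft(x):
    """Naive DFT (O(n^2)); fine for a short illustrative frame."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def band_amplify(x, fs, bands):
    """Scale DFT bins falling inside each (f_lo, f_hi, gain) band; the
    conjugate-symmetric bins get the same gain, so the output stays real."""
    X = dft(x)
    n = len(x)
    for k in range(n):
        f = min(k, n - k) * fs / n        # frequency of bin k (mirrored half)
        for lo, hi, g in bands:
            if lo <= f < hi:
                X[k] *= g
    return idft(X)

fs, n = 8000, 256
x = [math.sin(2 * math.pi * 250 * t / fs) + math.sin(2 * math.pi * 2000 * t / fs)
     for t in range(n)]
y = band_amplify(x, fs, [(1000, 3000, 4.0)])   # boost 1-3 kHz fourfold
```

Here the 2 kHz component is amplified four times while the 250 Hz component is left untouched; a real hearing aid would do this causally, frame by frame, with per-band gains fit to the user's audiogram.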

- Vision: there is a considerable body of work being done on the use of artificial retinas. Primarily there are three groups: one in Germany, under the direction of Professor R. Eckmiller [Eckmiller], at the University of Bonn; one under Gislin Dagniele and R.M. Massof at the Johns Hopkins University [Dagniele]; and one under John Wyatt and Joseph Rizzo at MIT/Harvard [Wyatt]. All three of these aim to introduce electrically mediated signals into the retina: Wyatt and Rizzo propose using an epithelial implant which directly detects light and stimulates the retinal ganglion cells, while Eckmiller's group appear to be more interested in using an external camera, linked by a wireless connection to a retinal implant which stimulates the retinal ganglion cells. Although not reported in the literature, we understand that users of these implants need to learn to see using these prostheses.

In addition, there is work on the direct stimulation of the visual cortex, building on the seminal work of Brindley in 1960. There is work ongoing at Utah [Normann], and believed to be also at NIH and the University of Waterloo, Canada. The Utah group uses a 10 by 10 array of electrodes inserted into the visual cortex. This is a highly invasive procedure, requiring the removal of a small piece of skull.


- Touch: there is ongoing work for people with spinal cord injury, aimed at restoring some sense of touch, at Case Western Reserve University, under Dr Clayton van Doren. Little information is available on this [van Doren]. Much of the work in this area is related to attempts to improve the quality of life of people with nerve (and particularly spinal cord) problems (see section 7.2.3).

We also count in this section cross-modality transforms – i.e. where information from one sense is presented to another sense – such as work on aids for the blind, and (possibly) pain relief. For pain relief, we considered deep brain stimulation, drug pumps, and electrical stimulation of the spinal cord (TENS). The group also discussed non-invasive techniques of cortical stimulation (such as an inverse form of SQUID synaptic current detection), but decided that this was not currently possible.

In addition, we included under prosthetic systems techniques for noninvasive wireless monitoring of patient vital signs.

7.2.3 Prosthetics for sensorineural and sensorimotor applications

The second prosthetic application area we identify is in neuron–silicon and silicon–muscle interconnection. Such applications would be useful in repairing damaged peripheral nervous systems. This is a common and serious problem, often resulting from accidents etc. Deficits may be due to damage to the spinal cord, or to damage to the peripheral nerves, and/or due to damage to (or loss of) limbs.

Prostheses may thus need to:

- pick up brain-originated nerve signals

- transmit nerve signals to either (undamaged) musculature or to (electromechanical) prostheses

- pick up proprioceptive information from (either undamaged but un-innervated) limbs or from prostheses

There are a number of serious problems with this type of work. For example, damaged nerve cells tend to die off, and it is difficult to make long-lasting interfaces which send or receive nerve impulses.

There is a great deal of work which has been done on attempting to make neural interfaces, particularly for people with spinal injury. A good review of current research is to be found in [APA]. There is considerable research in the use of methylprednisolone, and the introduction of a rich growth medium round the nerve stump using Schwann cells and IN1 in nerve damage limitation, and in attempting to convince the nerve to regrow. Additionally, a key signalling agent gene, Shh, may help spinal nerve neurons to re-grow.

However, these techniques do not directly tackle the problems of silicon/neuron interfacing. There are (at least) two groups working on this. One, led by Fromherz at the Max Planck Institute for Biochemistry in Germany, is based round a neuron/silicon junction which appears to be a form of field effect transistor where the nerve (suitably surrounded by a layer of lipids to protect it) forms the gate of a transistor. This allows electrical communication from the nerve. Communication to the nerve can be accomplished using capacitive electrical connection.

There is work at the University of Edinburgh (under Murray and others) on growing neurons on specially prepared substrates. These substrates include electrical connections to/from the neurons. All of this work is still at an early stage, but holds promise for direct silicon/neuron interconnection. This could eventually allow direct neural control of prostheses, and conceivably also feedback from these prostheses to the brain, mimicking proprioceptive feedback.


Incorporating touch sensing into limb prostheses is very important for their users. Sensor systems for these prostheses are in development [OandP], and these clearly need to be interfaced onto neural systems. For upper limb prostheses, and particularly for hands, feedback on grip strength is crucial for safe usage, and this requires some form of feedback to the user. One suggestion which has been made is that of cross-modality feedback (such as auditory feedback).

7.2.4 Classifying prostheses techniques

We attempted to classify the different possible modes of prosthesis/neural system communication into four classes:

- Peripheral nerve stimulation

- Sensory nerve (or sensory system) stimulation (e.g. stimulating the spiral ganglion neurons, or the retinal ganglion cells)

- Stimulating the brainstem nuclei

- Stimulating the cortical areas directly

Prostheses for limbs tend to use the first of these; auditory and visual prostheses tend to use the second. There is work on stimulating the brainstem nuclei and cortical areas, but these are highly invasive procedures.

7.2.5 What can neuromorphic systems offer?

There already are some neuromorphic systems performing prosthetic tasks. In particular, there are pacemaker systems for providing stimulation to the heart muscles, and cochlear implants. (We do not distinguish between different technologies underlying these devices - we note that cochlear implants are generally based on DSP technology.) We can consider what is generally required from neuromorphic systems: whatever the actual function, they need to be low power, and either implanted, or wearable. This places constraints on size, and on the technology used for their implementation. Certainly, aVLSI systems are generally small, and low power, but the same is true for advanced DSP systems.

The neuromorphic systems may be

- direct prosthetic sensors, i.e. transducers (such as an artificial retina, or a touch sensor)

- systems which process prosthetic inputs (such as systems for processing the microphone input for auditory prostheses, or for processing input from touch sensors)

- control systems for prostheses (such as systems for controlling the movement of prosthetic limbs or artificial hands)

- systems which perform signalling (such as those for transferring proprioceptive feedback to neural systems)

The group's conclusion was that neuromorphic systems had a great deal to offer in all of these areas; however, we felt that the biggest advances needed to be made in silicon/neural interconnection, as this was seen to underlie many of the most interesting applications.


7.2.6 References

Eckmiller: see http://www.nero.uni-bonn.de/ri/retina-en.html

Dagniele: see http://www.spectrum.ieee.org/publicaccess/9605teaser/9605vis2.html

Wyatt: http://www.spectrum.ieee.org/publicaccess/9605teaser/9605vis5.html

Normann: http://www.spectrum.ieee.org/publicaccess/9605teaser/9605vis6.html

van Doren: section 6 of http://me210abc.stanford.edu/CDR-haptics/Files/Position Papers/vandoren.txt

APA: http://www.apacure.com/pirfal97.html

OandP: http://www.oandp.com


Chapter 8

Personal Reports and Comments

The following section contains, with as little editing as possible, the personal comments received from the participants in the days following the workshop. We solicited comments about their personal experiences and suggestions about how to improve the workshop organization.

Pamela Abshire

I was at the Telluride Workshop on Neuromorphic Engineering for the entire three weeks, from the first Sunday (Jun 28) until the final Friday (Jul 17). This was my first time at the workshop, and I found that the lectures every morning were an excellent way to become acquainted with work outside my group and outside my area. In the afternoons I participated in the workgroups on floating gate circuits and address-event representation. I also joined discussion groups on learning in VLSI and prosthetics, and led the discussion group on computation.

I especially appreciated talks by Christof Koch, Rodney Douglas, Avis Cohen, Tobi Delbruck, Wolfgang Maass, and Shihab Shamma. Neuromorphic engineering is quite an interdisciplinary field, and consequently results are reported in a very broad set of journals, from biology to engineering to computer science. It was refreshing and stimulating for these talks to be presented with an engineering emphasis and in an atmosphere which encouraged discussion. In a slightly different vein, Tobi's talk provided a detailed discussion of his and Shih-Chii's retina pixel, a circuit which has become ubiquitous in analog VLSI imager design.

I found the workgroup on floating gate circuits very informative and well-organized. It is my belief that the extraordinary capabilities of neurobiological systems rely heavily on adaptation mechanisms. I was particularly keen to attend this workgroup because floating gate techniques seem like the most promising technology for integrating a similar density of adaptation into silicon circuits. It is also my impression that floating gates can be tricky to design and use, and I really appreciated the opportunity to learn firsthand from Chris Diorio, Brad Minch, and Paul Hasler. In the past I have designed a number of adaptive VLSI chips, and all of them have used volatile capacitive storage. These chips, as well as my future chips, will benefit from the understanding and techniques which I gained by participating in this floating gate workshop. The workshop would have benefitted from a laboratory component, so that the participants could gain experience in testing and tweaking chips which use floating gates.

The address-event representation workshop was also very interesting for me. I appreciated Kwabena Boahen's careful introduction to asynchronous VLSI. Asynchronous VLSI has become very important for AER, and someone from Alain Martin's asynchronous design group at Caltech would have been welcome at Telluride. I understand that this lab has written design tools for asynchronous VLSI, but I know nothing more about them. It seems that the Telluride meeting provides an excellent opportunity for such "technology transfer" among academics.

Most of my spare time at the workshop was spent in preparing for the discussion groups on computation and continuing ongoing research in the same area. For one of the meetings I prepared and presented a physical model for the communication capacity of an inverter, based on a simple CMOS implementation. I also worked on a detailed biophysical model for the communication capacity of early blowfly vision.


Elizabeth J Brauer

My expectations of the Telluride Workshop were to meet other researchers and discuss issues in neuromorphic engineering, particularly in the area of locomotion. The primary benefit of the workshop was the opportunity to meet researchers in the specific areas of lamprey research, locomotion, and analog circuit design. The long-term benefits of the contacts developed at the workshop are tremendous. The secondary benefit of the workshop is to stimulate some ideas about building robots. In particular, the Locomotion Work Group was quite helpful in this regard. I will be developing a lamprey robot as a result of the Telluride Workshop.

To improve future workshops, I would suggest that the lecturers prepare simple handouts containing the figures in the presentation and a list of references. This would be especially helpful for someone who is not an expert in the subject area but interested in exploring the topic further. Better chairs in the lecture room would be nice, since we spent so much time there. I would also suggest ending discussion groups by 9 pm.

The workshop was extremely well organized. The organizers are to be commended on a job well done. The facilities at the school were wonderful, as was the Telluride Academy staff. I would highly recommend this workshop to others.

Marc Cohen

This was my first visit to the Neuromorphic Engineering Workshop and I enjoyed it immensely.

Each morning from 8:30am to 12:00pm I attended lectures. I particularly enjoyed the series of talks by Rodney Douglas. He began with the morphology of the neuron and dendrites, continued with cortical structure and introduced some models for computing with spiking neurons.

Tobi Delbruck's talk on the circuit details of the adaptive photo-detector was excellent. Many times circuit details and analyses are covered in a "hand-waving" manner. Tobi did a great job of leading us through his second order analysis of Shih-Chii's adaptive pixel.

Between 2pm and 4pm each day, I attended the floating gate workshop led by the tag-team of Chris, Brad and Paul. This was a great series of lectures on the development and uses of analog floating gate transistors. I have experimented with floating gate circuits before this workshop and now feel that I understand the device physics and design and synthesis techniques very well. It was unfortunate that we did not get hands-on experience with floating gate devices. I did however manage to get Paul Hasler to help me verify correct operation of a floating gate circuit on an adaptive retina chip I had brought along to the workshop.

I also participated in two discussion groups: "What is Computation?" and "Learning in Silicon". The former discussion group was led very well by Pamela Abshire. The discussion was lively and at times heated. It was unfortunate that so much time was spent on arguing over semantics. Pamela presented her view of computation from an information processing point of view. Christof Koch also presented his work on information theory applied to computation in dendrites. The "learning" discussion group wanted to concentrate on how to learn with spiking neurons. Unfortunately, it was never demonstrated that learning with spiking neurons was the correct way to proceed. It was argued that the kinds of learning that were discussed, namely LTP, LTD, Hebbian, anti-Hebbian and synchronization, were all also possible with continuous-time learning rules.

The project I worked on was part of the Event Address Workshop led by Timmer Horiuchi and Kwabena Boahen. Our task was to build hardware that would take address event input from a 1-D retina or cochlea and learn a topographic or tonotopic mapping. This project was started at Telluride97 but had not reached the hardware implementation stage. I built up hardware consisting of a receiver board (using one of Kwabena's 64x64 receivers), a PIC chip to control the video output of the receiver and a daughter board which could interrupt the data path between the sender and the receiver. A PIC 17C44 microcontroller would execute the algorithm which would remap the incoming addresses (scrambled) to ordered output addresses in real time. I obtained real event address data from Timmer and together with Tim Edwards experimented with a few new algorithms using the real data. Ultimately, the algorithm which was derived at Telluride97 was programmed onto the PIC by Tim Edwards. Unfortunately, I could not find a persistent hardware error and the whole system never operated. I plan to continue with this project back at Johns Hopkins University with Timmer and Tim.
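The Telluride97 algorithm itself is not described in this report. Purely as a hypothetical illustration of the task: as a stimulus sweeps across a 1-D retina, neighboring (true) addresses fire close together in time, so co-occurrence statistics over the observed, scrambled addresses can recover the ordering:

```python
import random

random.seed(1)
N = 16
perm = list(range(N))
random.shuffle(perm)                    # unknown scrambling of the wires

# Simulate an edge sweeping repeatedly across the retina: true addresses
# fire in spatial order, but we observe them through the scrambled mapping.
events = []
for sweep in range(200):
    for true_addr in range(N):
        events.append(perm[true_addr])

# Count how often each pair of observed addresses occurs in adjacent events.
cooc = [[0] * N for _ in range(N)]
for a, b in zip(events, events[1:]):
    cooc[a][b] += 1
    cooc[b][a] += 1

# Greedy chaining: start anywhere, repeatedly hop to the most co-occurring
# unused address; this recovers the chain up to direction and rotation.
order = [0]
used = {0}
while len(order) < N:
    nxt = max((j for j in range(N) if j not in used),
              key=lambda j: cooc[order[-1]][j])
    order.append(nxt)
    used.add(nxt)

# Check: mapped back to true positions, the recovered order should step by a
# constant +1 or -1 (mod N) at every hop.
truepos = [perm.index(o) for o in order]
diffs = [(truepos[i + 1] - truepos[i]) % N for i in range(N - 1)]
print(len(set(diffs)) == 1)
```

This is only a toy stand-in for the real problem: actual retina events are noisy and asynchronous, and a PIC implementation would accumulate the co-occurrence counts incrementally in fixed-point registers.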


Craig DeLancey

In terms of my own edification, this year's conference was an unmitigated success. I learned much that was new and exciting for me; especially useful were the lectures on neuroscience and biology, the first week of the aVLSI tutorial, and my one-on-one interactions with all the other researchers. Much of what I learned will be of substantial utility in my own future research in AI. Also, the exposure to the robot projects was both instructive and de-mystifying; this has inspired me to pursue the use of more robotics in my own research.

I fear that, for the Workshop, I was not equally helpful: I did not complete a significant project during my stay. I did, however, learn some things which will help me this year in some practical projects – and so I pledge still to contribute to the cause. For example, in the last week of the Workshop I started coding the rough beginnings of a project to use, in one of the organism simulations we are developing at IU's Adaptive System Lab, saliency maps that integrate motor and visual processing. This was inspired by some of the work on saliency others were doing, and also by Terry Sejnowski's talk on recent research revealing that the traditional division between motor and visual cortex may be too simplistic. This is work that I will continue as soon as I return (and I shall be sure to let you know about any successes in this undertaking).

I have only two pieces of advice for future conferences. First, you could make it even more clear beforehand that we should bring as much of our own projects with us as is possible. The people who achieved something significant during these three weeks were mostly those who brought their ongoing research with them. If I knew beforehand what I know now, I would have fit some portion of my own research into a neuromorphic project that I could have brought with me. (Perhaps, however, my failure in this regard is because it is my first time here; I've heard it said that people who come several times get the most out of it!) Second, it would be beneficial to suggest some readings beforehand for the tutorials, if at all possible. I would have learned more if I could have prepared a small amount.

My advice for invitees is to include, in some future Workshops, some scientists working on motivation. This is a topic about which I am biased, because it lies in the domain of my own research. However, along with learning, perception, and locomotion, motivation is one of the fundamental aspects of adaptive or intelligent behavior. I feel that the kind of research that happens at the workshop is well suited to explore this. Paul Verschure's dream of the workshop culminating in a learning robot with integrated perception should be expanded to include motivations.

Finally, I want to thank all of the organizers. I am grateful for having had this opportunity, and I appreciate all the work that went into preparation and management. I believe I have made some lasting friendships here, I certainly had a great time, and what I learned will be of lasting benefit to me.

Tim Edwards

I spent a total of six days at the Telluride workshop, which was considerably less than I wanted to, although with a conference in Los Angeles at the beginning of July, I didn't have much choice. Although I did not stay for very long, I was able to immediately join the group working on developing, in hardware, an algorithm for automatic unscrambling of randomly mixed address events. The active members of the group were Marc Cohen and Tim Horiuchi. Timmer provided data from his 1-D retina chip which we used to confirm efficacy of the algorithm in Matlab. After extensive investigation of the algorithm, I took a crash course in PIC programming and converted the algorithm into PIC assembly code. The intended end result was to be a small circuit board containing the PIC microcontroller, placed between an address-event transmitter (the 1-D retina) and receiver (which outputs a video signal to a monitor). Unfortunately the wire-wrapped circuit board did not function properly, apparently due to loose wires or components, and so the complete system was never realized. Nevertheless, I derived a great benefit from the experience, which was learning how to write PIC code and verify it using the PC-based PIC emulator.

As usual, the most enjoyable part of Telluride was getting together with people from different backgrounds with different perspectives on neuromorphic engineering. Compared to the last time I attended, which was three years ago (1995), this year's conference was better organized, and on the whole, people had more realistic expectations of what is possible to accomplish within the time frame of three weeks; as a result, they got more accomplished.

I have no specific recommendations to make for next year. Three years ago, I made a number of recommendations, and it appears that they all happened, so I'm left without any more ideas.


Mete Erturk

We have developed an interface board for two cochlear chips (from Andreou and van Schaik). I was involved in the design and troubleshooting of the boards. We implemented a non-linearity (a hard limiter) and a lateral inhibition network on the 32 output channels of the cochlea. The boards were then connected to a PC, which ran cross-correlation and stereausis algorithms and sent motor commands to a Koala robot. The robot had a dummy head and two microphones (embedded in the head) mounted on it, and performed sound source localization tasks very successfully.
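Cross-correlation-based localization of this kind estimates the interaural time difference (ITD) as the lag that maximizes the cross-correlation between the two microphone signals; the sign of the lag tells the robot which side the source is on. A minimal sketch of the idea (not the workshop's actual code; the function name, sample rate, and lag bound are illustrative):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference between two equal-length
    microphone signals via cross-correlation.

    Returns the delay of `left` relative to `right`, in seconds; a negative
    value means the sound reached the left microphone first.  The search is
    restricted to physically plausible lags (~0.7 ms for a human-sized head).
    """
    max_lag = int(0.0007 * fs)
    corr = np.correlate(left, right, mode="full")    # correlation at all lags
    lags = np.arange(-(len(right) - 1), len(left))   # lag of each corr entry
    mask = np.abs(lags) <= max_lag                   # plausible lags only
    return lags[mask][np.argmax(corr[mask])] / fs

# Toy example: the right channel is the left channel delayed by 10 samples,
# i.e. the source is off to the left of the head.
fs = 44100
noise = np.random.default_rng(0).standard_normal(2048)
left, right = noise[10:1034], noise[:1024]           # right lags left by 10
print(estimate_itd(left, right, fs))                 # negative: left side first
```

In practice the same lag estimate can be computed per cochlear channel and combined, which is essentially what the stereausis representation does.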

I was involved in hardware design and development of an Address Event Remapping project. The project involved the use of a PIC (Microchip PIC17C44) to learn tonotopic axis remapping through unsupervised learning.

I participated in a project using Horiuchi's 1-D retina to emulate a cochlea. We presented the retina with plots of moving ripples and analyzed the spatial derivative and direction-of-motion outputs of the retina.

Charles Higgins

This year I gave three talks at Telluride and got input on my work from a wide range of disciplines. I ended up with a bunch of new ideas and tentative solutions to some long-term problems with my multi-chip systems research. Overall, it was a very valuable experience.

My primary suggestion for improvement of the workshop is better allocation of workbench space. Next year, everyone who needs bench space should sign up (via the web) and be assigned some area in some room. This would avoid the contention we had this year (e.g., "This space is MINE! Keep off!"). I would suggest use of the center tables for additional workbench space (rather than as a general construction area), and more efficient use of other rooms (such as the computer room periphery).

My second suggestion is a series of hyper-short tutorials at the beginning of the workshop to allow people a quicker decision as to which workgroup to join. For example, in the first two days, there should be *half-hour* talks on the subjects:
- What is analog VLSI?
- What is AER?
- How does a biomorphic robot work?
- What can you do with a Khepera/Koala?
This year, most new people didn't know what AER stood for until late in the second week.

David Klein

I attended the workshop last year, and I found it to be a more productive experience this time around. Particularly, I was impressed by the attempts to keep the number of discussion groups to a minimum and to motivate people to start working on projects as soon as possible. Additionally, I found that the group of participants this year was excellent. They were more motivated and ready to work on projects than the group was last year.

This time around, I was a member of a number of project groups, including: AER, Audition, Visual Saliency, and Behaving Robots. I co-managed the auditory project group, and thus was involved in most projects in that group (see the auditory project group report for more details). In the AER project group, I worked on the 1-D dynamic re-mapping project and the 1-D sender to spectrotemporal receptive field receiver project. In the visual saliency group, I helped modify a visual saliency program to control a Khepera robot. Jointly with the audition and behaving robots groups, I used the input from some silicon cochleas to guide a Koala robot towards sound sources.

My main personal goal for the workshop was to begin to find ways to implement spectro-temporal response fields (of auditory cortical neurons) in hardware using networks of neuron-like elements. I achieved this goal to a limited extent through my work in the AER workgroup. I plan to continue working on this project, in collaboration with Mete Erturk, Andreas Andreou, and Tim Horiuchi.


My suggestions for future workshops are standard. Productivity could increase if there were more hardware actually ready to go with suggested projects before the workshop. Then it wouldn't be the third week before anything was working and ready to be applied towards more interesting tasks. I was intrigued by Paul Verschure's ambition to merge these different senses and different feature extractors into a cohesive project. I hope that next year this may be approached more successfully.

Yuri Lopez De Meneses

1. The technical and didactic support that the organizers provided was perfect. We were able to work as well as in our own office.
2. I came expecting to gain some hands-on experience as well as some theoretical knowledge, and I was not disappointed. I feel that I have learned a lot, and the Workshop has been an eye-opener for other fields.
3. The combination of lectures and hands-on projects has been very enriching. I have been involved in two projects, and at least one of them, on on-line robot learning, will go on as a post-Telluride cooperation.
4. It would be interesting for the volleyball discussion group to hold an initial lecture on the rules and tactics of the game.
5. It would be interesting to have more biologists, especially if they get involved in projects with engineers.

Wolfgang Maass

For me the most fruitful and stimulating activities during this workshop were a large number of individual discussions and brainstorming sessions regarding various problems of computing and learning that arise in the three main application domains considered in this workshop:
a) biological systems
b) aVLSI, especially those involving pulses
c) robotics.

I enjoyed very much the opportunity to have at this workshop direct access to experts from these three application domains, whom I would not be able to meet at conferences in my own discipline (computer science). Especially my discussions with Rodney Douglas, Paul Verschure, Kwabena Boahen, Christof Koch, Chris Diorio, Andre van Schaik, and Gert Cauwenberghs brought me new insights and ideas that will have a significant impact on my research during the coming years. Some of this research will be carried out in collaboration with these colleagues.

Apart from this, I learned quite a bit from the daily lectures in the morning, especially those that introduced me to problems of neural computation that were new to me, such as problems related to locomotion, audition, and VLSI.

In addition, I participated during the first half of the workshop in the tutorial on floating gates, and during the second half I carried out some practical learning experiments with mobile robots. Both of these activities will be very useful for the work at my university during the coming year, since I am currently involved (jointly with colleagues from other departments of my university) in developing a curriculum and research program in robotics. I will also explore the possibility of initiating (possibly in collaboration with the company AMS in Graz) some research in Austria on floating gates.

I regret that I was not able to participate also in the workshops on aVLSI and AER, but the days at Telluride were simply too short.

Regina Mudra

I wanted to help people become familiar with K-robots. For this reason I prepared, with Paul Verschure, a documentation and C software package called Ikhep. This package was based on older work by Paul Verschure implementing a Braitenberg model on K-robots. We used this example to explain to people how to prepare their own K-robot setup and make their own experiments with Khepera.
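A Braitenberg model of this kind couples each sensor more or less directly to a motor, so behavior emerges from the wiring rather than from explicit planning. Ikhep itself was a C package for the Khepera; the fragment below is only an illustrative Python sketch of one such sensor-motor wiring (the function name and constants are hypothetical, not taken from Ikhep):

```python
def braitenberg_step(left_prox, right_prox, base=20.0, gain=0.05):
    """One control step of a Braitenberg-style obstacle avoider.

    left_prox / right_prox: proximity readings (larger = obstacle closer).
    Each sensor excites the motor on the SAME side, so the wheel nearer an
    obstacle speeds up and the robot turns away from it (the "fear" wiring
    of Braitenberg's vehicle 2a).  Returns (left_speed, right_speed).
    """
    return (base + gain * left_prox, base + gain * right_prox)

# Obstacle on the left: the left wheel speeds up, steering the robot right.
print(braitenberg_step(400, 0))   # (40.0, 20.0)
```

Crossing the connections instead (left sensor driving the right motor) would make the robot turn toward stimuli, which is the other classic Braitenberg variant.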

I was also interested in discussions about image pre-processing, types of representation, and robotics.


Suggestions and comments:

Telluride is a good place to come into contact with people and to do good small projects like the landmark and path-finding task.

I was surprised that people who at first wanted to learn to work with the K-robots lost interest after they heard that they had to install their own setup. Besides this, there were some hardware problems with the framegrabbers and connectors, but these could mostly be solved.

To Giacomo, Timmer and also Dave: a big thank you for installing and updating the Telluride web pages; it was a big help in finding out about all the lectures, seminars and projects.

Perhaps it would be good to let people also describe their interests on the sign-in pages, so interested people can make contact and build their groups before Telluride and prepare their projects in a better way.

Thomas Netter

I was satisfied beyond expectations with this workshop. I fulfilled my hopes by being able to play with Alan Stocker's retina and building a somewhat aeronautical object which worked at first go. I was delighted to meet many bright and interesting people. I was amazed by how much gear was available. It's the first workshop I have ever attended, and I am therefore very impressed.

Although I did not expect so many neurobiology-related lectures, I was very happy with them. I think the biology/electronics ratio was well balanced. I'm sorry that attending the aVLSI lectures prevented me from attending AER. But I'll follow AER next year!

I think there is no way to find such condo arrangements in Europe, and the workshop should therefore remain in Telluride.

Suggestions for next year and what I missed at the workshop:

* A warning regarding the flights: you might be bumped off, and your luggage too. Keep your toothbrush with you at all times, just as you keep your transparencies. Check the guys who load the plane with luggage. Be ready to beg and plead with the pilots to take your stuff.
* More presence of Terry Sejnowski.
* An amplification system for some speakers who were difficult to hear. Maybe we should put a sign at the back of the lecture room with SPEAK LOUD AND CLEAR on it?
* Some sort of Michelin guide to pick the best hamburger and sandwich places – let's face it: Baked In Telluride really is lousy!

David Nicholson

I am a biologist, and my present research fellowship is in the area of insect navigation at the University of Sussex, Falmer, UK. I am also a member of the Center for Computational Neuroscience and Robotics at the same location.

When I first signed up for the workshop, I expected to learn a lot about VLSI and electronic engineering in general and not much about biology. So, I was pleasantly surprised to find Avis Cohen and Thelma Williams describing their work on the lamprey. I originally intended to participate in three workshops: Basic VLSI, AER and behaving systems. However, once I had been to a few lectures/talks I became very interested in Mark Tilden's work, and so I joined the locomotion workshop and helped to build the robot lamprey and the flying robot. I also continued with the basic VLSI workshop. I found the fusion of biology and robotics within the locomotion workshop very stimulating and thought-provoking. As part of my work I aspire to the production of biologically plausible robotic models of animal behaviour. So, whilst continuing to pursue my behavioural experiments with insects, I intend to use Mark's 'Nervous Net' architecture as a starting point to build a walking robot of my own. I need to get a feel for the sort of engineering problems inherent in robot building, and I am attracted by the minimalist approach (as regards internal processing versus behaviour) which these systems afford.

I also attended three discussion groups: Pamela Abshire's 'What is computation?', Leslie Smith's 'Prosthetics' group and Nicol Schraudolph's 'Flying Robots' group. All of them produced a lot of lively discussion late into the night, and the flying robot was produced as a result of Nikki & Mark's ideas. I must also mention how useful I found Rodney Douglas's basic neurological tutorials.

How could the workshop be improved? More biology, of course! But then I would say that, wouldn't I, being a biologist. Why not split the morning lectures up more? Then we would be more receptive throughout. Perhaps a longer break in between them and a shorter, later lunch break. I don't think this would intrude upon the afternoon workshop time, as people like to work into the night anyway.

I found the diversity of content of the workshop very impressive; from visual processing to Andy Wuensche's random boolean networks via Tim's model of olfaction. Anyone who wants to be aware of new possibilities and have hands-on experience of robotics should certainly attend this workshop.

Maximilian Riesenhuber

I came to Telluride expecting a couple of things, namely i) to learn about the basics of aVLSI, ii) to learn how useful neuromorphic aVLSI chips are for neuroscience, and iii) to see how neuromorphic chips can be employed to tackle engineering problems, e.g., in computer vision or robotics. Regarding the first point, I thoroughly enjoyed Joerg's aVLSI tutorial, which I was able to follow even without being an electrical engineer. The only critique I'd have is that I didn't profit that much from the last week that focused on chip design, which seemed a little too specialized. Instead, I would have liked to learn more about some more advanced neuromorphic hardware, such as your WTA chip.

I also learned something about ii), insofar as in many cases it is not clear to me what additional insights an implementation in aVLSI offers when one already has a mathematical model, e.g., of a neuron. For large-scale systems like the silicon retina, efficient simulation might not be possible anymore, but the question remains what insights we gain from the aVLSI implementation that cannot be obtained from a simulation of a smaller number of photoreceptors. The biggest opportunities for neuromorphic engineering I see in connection with applications — I was especially impressed with the real-time processing capabilities of Joerg's retina, for example.

Apart from the tutorials and projects I was involved in (especially the projects were a crucial component of the "Telluride experience" for me — neuromorphic engineering seems so easy until one tries to do it oneself :) ...), I very much liked the format of the workshop, with the great variety of lectures in the morning and also the more informal elements like the BBQs that fostered so many inspiring interactions among the participants. I was impressed by the smooth organization of the whole event — including everything from the accommodations to the local computer network. A big thanks to the young turks for making the workshop such a memorable experience :) ...

Nici Schraudolph

The workshop was great! I wanted to learn more about working with Kheperas, and Mark Tilden's robots — and promptly wound up in two collaborations involving Kheperas (both of which will continue), and two projects with Mark. My specific goal to investigate navigation without directional control in a flying robot was also realized, so the workshop fulfilled my expectations on all counts. In addition, the rich interactions with other participants were invaluable, and resulted in several new impulses for my research program. For instance, I got interested in Wolfgang Maass' model of temporal coding in spiking neurons, and expect to work on efficient, biologically plausible learning algorithms in such systems soon.

My only critique would be that at times the schedule was so crowded that there wasn't enough time left for project work. It may also be a good idea to charter a flight from/to Denver at the time when everybody arrives/leaves, since getting seats was quite difficult. All in all it was a great experience though — my thanks to everyone who made it possible!

Mario Simoni

My goals for the workshop were to learn more about floating gate devices and to share and get some ideas about how to implement learning mechanisms in aVLSI. I think my goals were met in terms of learning about floating gate devices, except that it would have been nice to be able to actually play with some floating gate transistors. I was able to get some ideas about learning and adaptation circuits from the "on-chip learning" discussion group. There were other talks about higher-level forms of learning from a behavioral perspective which I enjoyed, such as those ideas presented by Jim Clark. I also benefited from meeting Thelma Williams and talking to Avis Cohen about pattern generation in the lamprey.

I think next year it would be a good idea to have some talks about the various ways that people are modeling neurons in silicon and how they are using these models. Perhaps, though, this would make a better discussion group than a formal talk.

I think this conference was fairly well organized and ran smoothly. (Larry [Rosen] did a great job; get him again next year!) I liked the selection of talks, and I think I got a good understanding of the different areas that people are working in. One thing that I think would help the workshops a lot is if we could prepare for the projects before the workshop begins. This would mean a little more commitment from people, but I think we could get a lot further while everyone was there. One of the things I puzzled over before coming was how to prepare. I think if I had known more about what the locomotion project was going to be about, and been in contact with the project leader, I could have prepared things ahead of time and brought more pertinent equipment and circuits.

I have thought of two people who could contribute a lot to the workshop next year: Larry Abbott and Nancy Kopell. Both are mathematicians who have done extensive modeling of various properties of neural systems.

Malcolm Slaney

I was only able to attend part of the last week of the Neuromorphic workshop this year. But what a week it was. I was very impressed with the level of expertise and creativity that I saw.

My first reaction was that there are a lot of really smart people working on hard problems (perception) with one hand tied behind their back (analog VLSI). But after seeing the student demos at the end of the workshop, I changed my mind. A large amount of work got done in a very short amount of time. I'm not sure the same level of performance could have been achieved with conventional means.

The biggest surprise of the workshop, for me at least, was the large number of 'bots that were assembled. Some were based on commercial test-beds. Others were put together with scotch tape and cardboard. A number of audition and vision chips were available, and people cobbled together whatever they needed to make things work.

I gave a lecture on computational models of pitch. I think it was well received, and there were many good questions. I especially enjoyed the lectures by Christof (noise models of neuronal cables) and Nicola Ferrier (stability of real-time tracking). The last day's demonstrations of all the work were most valuable. It was a shame that I had to miss the preparatory work, but it was really good to see all the work that was accomplished in a few short weeks. The 'bots were especially interesting to me.

All of the students were interesting to talk to. But meeting and working with Andreas Andreou (speech recognition models) and Paul Verschure (machine learning) was especially good. An unexpected connection was the interest from Andreas and Shihab Shamma in my pattern playback tools. I expect we'll work together in the future.

I’m really glad that I got to participate and to see everybody’s work. I look forward to participating again.

Leslie Smith

I very much enjoyed being part of such a stimulating group of people working in the area of neuromorphic systems. This was important for me, as I am not normally part of a large group of researchers. The meeting was well organized, and the schedule tight.

My two interests are in early auditory processing, and in better understanding real neurons and their place in real networks. I had many useful and interesting discussions with the groups working in auditory processing, although I did not find time to take a more active part in the interfacing of the auditory chips onto robotic systems: this was primarily because there were just so many things to do (see below).

I attended nearly all of the lectures given (as well as giving one). In retrospect, though this was very interesting, it took away from the time I might have otherwise spent on project work. I attended the course on analog VLSI design - indeed, this was very important for me, as I have a new research fellow starting in the autumn, and I wanted to be more conversant with this technology. I learned a great deal about this technology, and about the design tools available. I also attended the first week of the floating-gate tutorial group: this was interesting to me as I see this technology as underpinning adaptive aVLSI systems, and I wanted to understand how it functioned.

I organized the prosthetics discussion group, as well as attending meetings of some of the other groups. The prosthetics group was important to me, as I was (and remain) concerned about the need for specific applications for neuromorphic technology. I consider that replacing damaged senses, or allowing people to communicate directly with replacement limbs, is an important and feasible application for neuromorphic technology. The report of this group has been submitted separately.

I believe that I gained a lot from the meeting: I met researchers whom I had met briefly at the EWNS1 conference, or who were previously only email addresses to me, and I was able to discuss issues which I think are important in audition and in the silicon neuron with the leading experts in the field.

Peter Stepian

This was the first time that I have attended the Workshop. Overall, everything was excellent! Being relatively new to the emerging field of Neuromorphic Engineering, I found the Workshop very informative, and the work groups provided a way to contribute constructively while learning at the same time. Below are outlined some positive aspects of the Workshop, followed by some suggestions for the future.

The web-based system to register for the Workshop is very good. This is especially true since many people are geographically far away. The mix of lectures, discussion groups and work groups was also good. The length of the Workshop, although at first thought a little long, ended up being a good length, especially since the project groups can take up a lot of time. The amount of equipment available to conduct the work groups was astounding, considering the location of the Workshop. Having equipment available is very important since the work groups take up a majority of the time. The computer support and network connection were flawless.

To ensure that the participants of the Workshop know what is going on right from the start, it would be nice if the program for the full three weeks were finalized right at the start. This could be done by extending the web-based registration to include discussion group topics and time slots. Also, the scheduling of the work groups could be done beforehand, with more detailed descriptions of individual projects. This would save time organizing them when the Workshop has already started. The mix of lectures was good, although it would be nice to have more on the biology of the brain. These were the most interesting, as they gave an insight into how we should be designing systems.

The Workshop is a great way to bring together so many people from different fields all working towards a common goal. The organizers are to be commended for making it all possible. I hope that the Workshop continues into the future and more people can benefit from it as I have.

Alan Stocker

This workshop was even better than the last one. There was much collective effort in realizing good projects and creating teamwork. People showed good social skills (due to the World Championship?), and therefore there was always a friendly and comfortable mood 'in the air'.

I personally found a platform (discussion group) to present my project and get responses from a variety of people and experts, which is an important point in a grad student's life. Further, I could compare solutions and approaches - and, more importantly, improvements over one year - from other people working in the same problem field.

On the other hand, the workshop gave me an opportunity to work on a topic far outside my usual research, the stereo-correspondence problem. There was a second project I was sort of involved in, since Thomas Netter used one of my 1-D motion chips for his 'flying cardboard-box'.

I attended almost all of the morning lectures, except a few in the last week. This was mainly because the projects had to come to an end and computer resources were quite limited.


Suggestions:

* More PCs.

* I think it would be a good idea to couple the lectures and the tutorials more closely together - both in content and time. I suggest a 2/3-day block of morning lectures and afternoon tutorials/work groups.

* I would appreciate it if the important people could get more involved. This also depends on the time available to them (see Terry), but the more the better.

Oh, before I forget: Giacomo and Timmer did a great job!

Mark Tilden

The original plan was to bring my largest robot to the workshop and, while it developed slowly, work out collaborations with others to test out their eyes, ears, etc. as a prelude to a more systems-integrated approach to neuromorphic physics. As with mice, the best-laid plans don't always pan out, and I instead spent the first week trying to get the "Roswell" large walker operational for the parade, only to suffer successive failures due to electrical motor problems. At the same time I worked with the Locomotion group to plan out the design of an autonomous robot lamprey for study. The first week was spent planning, the second week building, and the third repairing, as the power supplies the robot ran from turned out to be rather twitchy. As a result, little data was taken, but the robot will slither again.

Another project was the development of an interesting Braitenberg flying device which has strong kit potential. Using the new micro-hextile boards, it turned out to have a broad range of interesting behaviors, and with luck, will make for a flying-joust competition next year, which should be cool.

As usual, collaborations were made, promises made for next year, and laughs and beer were consumed. As usual, a good time.

Rufin Van Rullen

My overall feeling about the workshop is that it has been a strong and useful experience for everybody, or at least for me.

From a more personal viewpoint, I have to point out that I am more a software- than a hardware-based computer scientist. Therefore, what I expected from the workshop was mostly to be introduced to the field of neuromorphic engineering, to learn the basic skills that are necessary to understand the work of other people on silicon chips, hardware-implemented perception and behavior. These expectations were fully satisfied by the workshop. I think that it is important to keep a lot of introductory lectures and workgroups for next year's workshops, so that people like me, with a great interest in the topic but little knowledge of it, can come and enjoy working with more experienced researchers. I also hope you will keep the format of the workshop, because three weeks, though it might seem a long time at first glance, were just enough for me to get familiar with the concepts and the tools that I had to use. Furthermore, it allows one to create a lot of personal and professional contacts with people, which is often difficult at week-long conferences or workshops. In that sense, the informal gatherings such as hikes, volleyball games, or dinners are also an important part of the workshop that shouldn't be left aside.

Andre van Schaik

I came to the Telluride '98 workshop with some ideas for projects in which I would try to apply my analog VLSI building blocks for the auditory pathway as smart sensors for robots (Koalas) in a real-world environment. I normally do not have the possibility to work with robots, and discovered at the workshop that the main problem is the interfacing between analog VLSI chips and the digital hardware of the robots. For this reason I did not actually succeed in using the chips on the robot, but I started a project with the INI lab to design a silicon cochlea that is easier to interface. A second goal of the workshop was to establish or reinforce personal links with the people working in our domain. It is hard to meet everybody on a regular basis, especially when living in Australia, but the three-week workshop allowed me to have good discussions with most of my peers. It furthermore resulted in some very interesting new contacts.

Charles Wilson

My goal for this workshop was, among other things, to work with Kwabena Boahen to develop an AER sender board and an AER receiver board. I completed this project, but the sender board did not perform well. It worked, but barely. The receiver board was finished on the last day of the workshop, so I did not have enough time to test it properly. Kwabena and I will continue to collaborate on this project after the workshop, as I intend to use his silicon retina as the front end to my future selective attention efforts. I also hoped to work on the 1-D AER stereo correspondence project, but did not have enough time to work with that group.

In general, I thought the lectures were outstanding and informative. I participated in the Floating Gate and the AER tutorials, which were both quite good. However, the tutorial part of the AER group didn't start until later in the workshop, and we ran out of time to cover all the material. About the only criticism I have of the workshop was that there just wasn't enough time each day – with each morning taken up by lectures, three to four hours each afternoon taken up by the tutorials, and then interesting discussion groups and presentations in the evenings, there was very little time left for projects. Unfortunately, I can't really suggest a solution; I wouldn't want to cut back on the lectures.

Thomas Zahn

There were two major reasons I applied for the Telluride Workshop 1998. First of all, I wanted to improve my knowledge in the field of neuromorphic implementations as well as the modeling of spiking networks. Second, I wanted to compare my results to those of the experts and discuss my models with them. But I got even more out of these days in Telluride.

What I enjoyed most was the personal contact that developed through cooperative work to make these robots move, and during the (sometimes endless) discussions. As a result, I will start joint work with Shihab Shamma's lab, exchanging biological findings and simulation systems for the auditory pathway, with Andreas Andreou providing the front end to my system, and with Paul Verschure to further improve the simulation of large spiking networks in audition. During long discussions with Wolfgang Maass, and especially Timmer Horiuchi and Giacomo Indiveri, I found many ways to compare my understanding of neuromorphic modeling and to continue the exchange of ideas.

There was a good deal that I learned, or at least came to understand better, during the workshop, and it will considerably influence my PhD thesis and the work after it. Listening to Rodney Douglas's late-night lessons and to many scientists who have worked in the field for years, I developed a general feeling for the state of the art in neuromorphic modeling and found good reasons to question and improve my own models.

Another pleasant surprise for me, as a first-time participant, was the truly comprehensive course in applied aVLSI, taught from the basics up to really useful implementations. Finally, I used the chance to set up a discussion group on the on-chip learning issue together with Mario Simoni. Over its three meetings we had a good chance to go into the details of how to implement learning synapses and which learning algorithms for spiking neurons could be biologically motivated.

The major advantage of this workshop for me was the possibility of obtaining ideas from biologists while at the same time discussing effective ways of modeling and implementation in the common language of neuromorphic engineers. I have strongly recommended the workshop to my co-workers and will try to improve my results so that I have the chance to come back next year.


Appendix A

Participants of the 1998 Workshop

Alphabetical list of everyone present at the workshop.

Organizers:

* Avis Cohen, University of Maryland - [email protected]

* Rodney Douglas, UNI/ETH Zurich - [email protected]

* Christof Koch, Caltech - [email protected]

* Terry Sejnowski, Salk Institute - [email protected]

* Shihab Shamma, University of Maryland - [email protected]

Our Telluride Summer Research Center Liaison:

* Larry Rosen, Telluride Academy

Technical Personnel:

* Dave Flowers, Caltech - [email protected]

* Reid Harrison, Caltech - [email protected]

* Timmer Horiuchi, Johns Hopkins Univ. - [email protected]

* Giacomo Indiveri, UNI/ETH Zurich - [email protected]

* David Klein, University of Maryland, College Park - [email protected]

* Jorg Kramer, UNI/ETH Zurich - [email protected]

* David Lawrence, UNI/ETH Zurich - [email protected]

* Jonathan Simon, Institute for Systems Research - [email protected]

* Theron Stanford, Caltech - [email protected]

Participants:

* Pamela Abshire - [email protected]

* Ranit Aharonov, Hebrew Univ. at Jerusalem - [email protected]

* Andreas Andreou, Johns Hopkins University - [email protected]

* Asli Arslan, The University of Edinburgh - [email protected]

* Randall Beer - [email protected]

* Kwabena Boahen, UPenn - [email protected]

* Elizabeth Brauer, Northern Arizona University - [email protected]

* C. Phillip Brown, Neural Systems Lab, University of Maryland - [email protected]

* Gert Cauwenberghs, Johns Hopkins University - [email protected]

* Jim Clark, McGill University - [email protected]

* Marc Cohen, Johns Hopkins University - [email protected]

* Craig DeLancey, Indiana University Adaptive Systems Lab - [email protected]

* Steve DeWeerth, Georgia Tech - [email protected]

* Tobi Delbruck, INI, Zurich - [email protected]

* Didier Depireux, Institute for Systems Research - [email protected]

* Chris Diorio, The University of Washington - [email protected]

* Timothy R. Edwards, Johns Hopkins University - [email protected]

* Mete Erturk, Neural Systems Lab, UMCP - [email protected]

* Nicola J. Ferrier, University of Wisconsin-Madison - [email protected]

* Harald M. Fuchs, TU-Graz - [email protected]

* Philipp Hafliger, Institute of Neuroinformatics, ETHZ/UNIZ, Switzerland - [email protected]

* Paul Hasler, Georgia Institute of Technology - [email protected]

* Chuck Higgins, Caltech - [email protected]

* Martin Lades - [email protected]

* Daniel D. Lee, Bell Laboratories - [email protected]

* Shih-Chii Liu, Institute of Neuroinformatics, ETH/UNIZ - [email protected]

* Yuri Lopez de Meneses, Laboratoire de Microinformatique (LAMI), EPFL - [email protected]

* Wolfgang Maass, Technische Universitaet Graz - [email protected]

* Brad Minch, School of Electrical Engineering, Cornell University - [email protected]

* Ania Mitros - [email protected]

* Regina Mudra, Institut fuer Neuroinformatik - [email protected]

* Thomas Netter, University of Nice - Sophia-Antipolis - [email protected]

* David Nicholson, CCNR, University of Sussex - [email protected]

* Masahide Nomura, NEC Fundamental Res. Labs. - [email protected]

* Timothy Pearce, Tufts University Medical School - [email protected]

* Alberto Pesavento, Caltech - [email protected]

* Philippe Pouliquen, The Johns Hopkins University - [email protected]

* Maximilian Riesenhuber, Center for Biological and Computational Learning and Department of Brain and Cog - [email protected]

* Eduardo Ros Vidal, University of Granada, Spain - [email protected]

* Ralf Salomon, AI Lab, Dept. of Computer Science, University of Zurich - [email protected]

* Nicol Schraudolph, IDSIA (Istituto Dalle Molle di Studi sull'Intelligenza Artificiale) - [email protected]

* Mario Simoni, Georgia Institute of Technology - [email protected]


* Malcolm Slaney, Interval Research Corporation - [email protected]

* Leslie Smith, University of Stirling - [email protected]

* Nino Srour, US Army Research Laboratory - [email protected]

* Peter Stepien, SEDAL, The University of Sydney - [email protected]

* Alan Stocker, INI - [email protected]

* Mark W. Tilden, Robotics Research Scientist - [email protected]

* Rufin Van Rullen, CNRS - [email protected]

* Paul Verschure, UNI/ETH Zurich - [email protected]

* Thelma Williams, University of London (St. George's Hospital Medical School) - [email protected]

* Charles Wilson, Georgia Institute of Technology - [email protected]

* Andy Wuensche, Santa Fe Institute - [email protected]

* Thomas P. Zahn, Technical University of Ilmenau, Germany - [email protected]

* Andre van Schaik, University of Sydney - [email protected]


Appendix B

Hardware Facilities of the 1998 Workshop

Computer:

* Apple Macintosh IIv (Computer) – Mac to run chip-testing equipment

* Intel Pentium Computers (Computer) – from the Klab

* Intel Pentium Computers (Computer) – from the Intel Lab in Moore

* Intel Pentium Computers (Computer) – from the Intel Lab in Moore

* Clone 166 (Computer) – IBM PC compatible computer

* Intel Pentium Computers (Computer) – from the Intel Lab in Moore

* Intel Pentium Computers (Computer) – from the Klab

* (Computer) – 200 MHz Pentium Pro with Micro DC30 and Premiere

* DEC HiNote (Computer) – personal laptop

* Toshiba Satellite (Computer) – personal laptop

* Apple Macintosh IIv (Computer) – Mac to run chip-testing equipment

* Apple Macintosh IIv (Computer) – Mac to run chip-testing equipment

* Pentium II (Computer) – laptop

* Generic Pentium Pro (Computer) – from the Klab w/ GPIB and A/D card

* (Computer) – laptop

* IBM ThinkPad 380XD (Computer) – PC laptop

* Intel Pentium II (400 MHz) (Computer) – from NASA

* Intel Pentium Computers (Computer) – from the Intel Lab in Moore

* Intel Pentium II (400 MHz) (Computer) – from NASA

* Intel Pentium II (Computer) – from CNS

* Intel Pentium II (Computer) – from CNS

* Intel Pentium Computers (Computer) – from the Klab

* Apple Macintosh IIv (Computer) – Mac to run chip-testing equipment

* Apple Macintosh IIv (Computer) – Mac to run chip-testing equipment

* Intel Pentium Computers (Computer) – from the Intel Lab in Moore

* Clone Pentium-Pro (Computer) – IBM PC compatible computer

* National Instruments PXI (Computer) – CompactPCI-based rack system with DAQ and image acquisition

* Apple Macintosh IIv (Computer) – Mac to run chip-testing equipment

* Clone 486-33 (Computer) – IBM PC compatible computer; drives hardware programmers

Computer I/O Card:

* National Instruments Lab-PC+ (Computer I/O Card) – general-purpose I/O card with a couple of analog and digital ports

* (Computer I/O Card) – ZISC-based digital neural network board

Electrometer:

* Keithley 617 (Electrometer) – electrometer for the chip-testing station

* Keithley 617 (Electrometer) – electrometer for the chip-testing station

* Keithley 617 (Electrometer) – electrometer for the chip-testing station

* Keithley 617 (Electrometer) – electrometer for the chip-testing station

* Keithley 617 (Electrometer) – electrometer for the chip-testing station

* Keithley 617 (Electrometer) – electrometer for the chip-testing station

Function Generator:

* SRS DS340 (Function Generator) – GPIB digital synthesis function generator

* HP ? (Function Generator) – Klab function generator

* HP ? (Function Generator) – Klab function generator

* HP ? (Function Generator) – a function generator for the chip-testing station

* HP ? (Function Generator) – a function generator for the chip-testing station

* HP ? (Function Generator) – a function generator for the chip-testing station

* HP ? (Function Generator) – a function generator for the chip-testing station

* HP ? (Function Generator) – a function generator for the chip-testing station

* HP ? (Function Generator) – a function generator for the chip-testing station

Monitor:

* NEC Any MultiSync (Monitor) –

* Video Monitor (Monitor) – color NTSC monitor

Multimeter:

* FLUKE ? (Multimeter) – Klab Fluke

* Fluke 70 (Multimeter) – Fluke for testing station

* Fluke 70 (Multimeter) – Fluke for testing station

* Fluke 70 (Multimeter) – Fluke for testing station

* FLUKE ? (Multimeter) – Klab Fluke

* FLUKE ? (Multimeter) – Klab Fluke


* Fluke 70 (Multimeter) – Fluke for testing station

* Fluke 70 (Multimeter) – Fluke for testing station

* Fluke 70 (Multimeter) – Fluke for testing station

* FLUKE ? (Multimeter) – Klab Fluke

* FLUKE ? (Multimeter) – Klab Fluke

Oscilloscope:

* HP (Oscilloscope) – digital (150 MHz) for chip-testing stations

* Tek TDS460 (Oscilloscope) – 4-channel scope from the Klab

* Tek 2445A (Oscilloscope) – 2-channel scope from the Klab

* Tek 2445A (Oscilloscope) – 2-channel scope from the Klab

* Tektronix (Oscilloscope) – portable scope

* HP (Oscilloscope) – digital (150 MHz) for chip-testing stations

* HP (Oscilloscope) – digital (150 MHz) for chip-testing stations

* HP (Oscilloscope) – digital (150 MHz) for chip-testing stations

* HP (Oscilloscope) – digital (150 MHz) for chip-testing stations

* HP (Oscilloscope) – digital (150 MHz) for chip-testing stations

Other:

* Pot boxes (Other) – chip-testing stations

* Slide Projector (Other) –

* Overhead Projector (Other) –

* Pot boxes (Other) – chip-testing stations

* Toshiba PAL/NTSC VCR (Other) –

* Analog Devices (Other) – SHARC emulator board

* Video Camera (Other) – color camera without tape

* Pot boxes (Other) – chip-testing stations

* Copy Machine (Other) –

* Pot boxes (Other) – chip-testing stations

* Spindler & Hoya (Other) – optical set-up for artificial nose

* TI Activator 2 (Other) – Actel FPGA programmer (with 84 PLCC adapter)

* JHU (Other) – soldering and wire-wrap station

* Pot boxes (Other) – chip-testing stations

* Pot boxes (Other) – chip-testing stations

* K-team color and monochrome video turrets (Other) – cameras

* K-team Khepera gripper module (Other) –

* (Other) – smart camera system with MAPP2200

* Dataman 48LV (Other) – intelligent universal programmer

* Microchip PICSTART Plus (Other) – PIC programmer

* LAMI/CSEM panoramic and stereo turrets (Other) – artificial-retina-based turrets for the Khepera robot

* ? XR-7007 (Other) – vise

* SONY ? (Other) – 4-head VCR for use with video camera

* (Other) – retinas

* ? XR-7007 (Other) – vise

* ? XR-7007 (Other) – vise

* ? XR-7007 (Other) – vise

Parallel Port Peripheral:

* Timaginarium TX-1000 (Parallel Port Peripheral) – AER research: 1-D retina board that reports edges and direction of motion

* Connectix QuickCam (BW) (Parallel Port Peripheral) – greyscale parallel-port camera

Power Supply:

* (Power Supply) – 12-14 V, 6 A power supply

* HP Brick (Power Supply) – Klab

* HP Brick (Power Supply) – Klab

* HP Brick (Power Supply) – Klab

* HP Brick (Power Supply) – Klab

* HP Brick (Power Supply) – Klab

* HP Brick (Power Supply) – single 12 V supply for chip-testing station

* HP Brick (Power Supply) – single 12 V supply for chip-testing station

* HP Brick (Power Supply) – single 12 V supply for chip-testing station

* HP Brick (Power Supply) – single 12 V supply for chip-testing station

* HP Brick (Power Supply) – Klab

* HP ? (Power Supply) – 6 V, +20 V, -20 V supply

* HP E3610A (Power Supply) – current-limited 12 V

* HP Brick (Power Supply) – single 12 V supply for chip-testing station

* HP Brick (Power Supply) – single 12 V supply for chip-testing station

* (Power Supply) – 6 V, 5 A

Printer:

* ? Laser Printer (Printer) –

Programmable Voltage Source:

* Keithley 230 (Programmable Voltage Source) – voltage source up to 101 V for chip-testing station

* Keithley 230 (Programmable Voltage Source) – voltage source up to 101 V for chip-testing station

* Keithley 230 (Programmable Voltage Source) – voltage source up to 101 V for chip-testing station

* Keithley 230 (Programmable Voltage Source) – voltage source up to 101 V for chip-testing station

* Keithley 230 (Programmable Voltage Source) – voltage source up to 101 V for chip-testing station

* Keithley 230 (Programmable Voltage Source) – voltage source up to 101 V for chip-testing station

Serial Port Peripheral:

* Directed Perception Pan-Tilt System (Serial Port Peripheral) – computer-controlled pan-tilt system

* K-team Khepera microrobot (Serial Port Peripheral) – robot

* K-team Koala (Serial Port Peripheral) – mobile robot

* K-Team Khepera (Serial Port Peripheral) –

* (Serial Port Peripheral) – Samurai robot with 1 to 3 grippers (2 degrees of freedom and one grip)

Visual Motion Stimulus:

* (Visual Motion Stimulus) – OHP w/ an LCD screen for a motion stimulus


Appendix C

Workshop Announcement

This announcement was posted on 1/22/98 to various mailing lists and to our dedicated web site.

=======================================================================================================================

"NEUROMORPHIC ENGINEERING WORKSHOP"

JUNE 29 - JULY 19, 1998

TELLURIDE, COLORADO

Deadline for application is February 1, 1998.

Avis Cohen (University of Maryland), Rodney Douglas (University of Zurich and ETH, Zurich/Switzerland), Christof Koch (California Institute of Technology), Terrence Sejnowski (Salk Institute and UCSD) and Shihab Shamma (University of Maryland) invite applications for a three-week summer workshop that will be held in Telluride, Colorado, from Monday, June 29 until Sunday, July 19, 1998.

The 1997 summer workshop on "Neuromorphic Engineering", sponsored by the National Science Foundation, the Gatsby Foundation and by the "Center for Neuromorphic Systems Engineering" at the California Institute of Technology, was an exciting event and a great success. A detailed report on the workshop is available at http://www.klab.caltech.edu/~timmer/telluride.html. We strongly encourage interested parties to browse through these reports and photo albums.

GOALS:

Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants.

Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest, and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of brain systems.

FORMAT:

The three-week summer workshop will include background lectures, practical tutorials on analog VLSI design, small mobile robots (Koala), hands-on projects, and special interest groups. Participants are required to take part in, and possibly complete, at least one of the proposed projects (soon to be defined). They are furthermore encouraged to become involved in as many of the other activities as interest and time allow.

There will be two lectures in the morning that cover issues important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons and after dinner.

The analog VLSI practical tutorials will cover all aspects of analog VLSI design, simulation, layout, and testing over the three weeks of the workshop. The first week covers the basics of transistors, simple circuit design and simulation. This material is intended for participants who have no experience with analog VLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners to building the peripheral boards necessary for interfacing analog VLSI retinas to video output monitors. Retina chips will be provided. The third week will feature sessions on floating gates, including lectures on the physics of tunneling and injection, and on inter-chip communication systems.

Projects carried out during the workshop will be centered in a number of groups, including active vision, audition, olfaction, motor control, central pattern generators, robotics, multichip communication, analog VLSI and learning.


The "active perception" project group will emphasize vision and human sensory-motor coordination. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot-head active vision system consisting of a fully programmable three-degree-of-freedom binocular camera system.

The "central pattern generator" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPGs and theories of nonlinear oscillators for locomotion. It will also explore the use of simple analog VLSI sensors for autonomous robots.

The "robotics" group will use rovers, robot arms and working digital vision boards to investigate issues of sensory-motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics.

The "multichip communication" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed.

LOCATION AND ARRANGEMENTS:

The workshop will take place at the Telluride Elementary School, located in the small town of Telluride, 9000 feet high in southwest Colorado, about 6 hours away from Denver (350 miles). Continental and United Airlines provide many daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums within walking distance of the school. Participants are expected to share condominiums. No cars are required. Bring hiking boots, warm clothes and a backpack, since Telluride is surrounded by beautiful mountains.

The workshop is intended to be very informal and hands-on. Participants are not required to have previous experience in analog VLSI circuit design, computational or machine vision, systems-level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software).

Internet access will be provided. Technical staff present throughout the workshop will assist with software and hardware issues. We will have a network of Sun workstations running UNIX, as well as Macs and PCs running Linux and Windows 95.

Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three-week workshop.

FINANCIAL ARRANGEMENT:

We have several funding requests pending to pay for most of the costs associated with this workshop.

Unlike in previous years, after notifications of acceptance have been mailed out around March 15, 1998, participants will be expected to pay a $250 workshop fee. In cases of real hardship, this fee can be waived.

Shared condominiums will be provided for all academic participants at no cost to them. We expect participants from national laboratories and industry to pay for these modestly priced condominiums.

We expect to have funds to reimburse a small number of participants for travel (up to $500 for domestic travel and up to $800 for overseas travel). Please specify on the application whether such financial help is needed.

HOW TO APPLY:

The deadline for receipt of applications is February 1, 1998.

Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff, and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply.

Applications should include:

1. Name, address, telephone, e-mail, FAX, and minority status (optional).
2. Curriculum Vitae.
3. One-page summary of background and interests relevant to the workshop.
4. Description of special equipment needed for demonstrations that could be brought to the workshop.
5. Two letters of recommendation.

=======================================================================================================================
