Sponsorship Prospectus
Invitation
Dear friends and colleagues,
As General Co-Chairs of the Organizing Committee, we extend our warmest welcome and invite you to attend the IEEE International Conference on Artificial Intelligence
Circuits and Systems (AICAS 2019), to be held at the Ambassador Hotel Hsinchu, Taiwan, March 18-20, 2019.
Artificial Intelligence (AI) is driving a new revolution, not only in information technology but across all other industries. New algorithms and application systems are
being introduced with the power of AI, and new computing platforms are required to support them, from cloud servers to edge devices and from the system level to the
circuit level.
Facing this challenge and opportunity, the 1st IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS 2019) has been established to foster
state-of-the-art research, innovation, and development at the frontiers of AI circuits and systems. It serves as a platform for scholars, researchers, and industry
practitioners from both domestic and international communities to exchange experiences, demonstrate their work, and further advance AI technologies in circuits and
systems.
Why should you support?
1. Demonstrate your company’s leadership in the field of Artificial Intelligence Circuits and
Systems
2. Gain extensive visibility among 300+ participants from around the world
3. Seize great opportunities to network with experts
4. Access a highly targeted group within your practice area
5. Benefit from expanded value through intensive promotional communications
6. Exhibit and distribute your marketing and promotional materials
On behalf of the organizers of AICAS 2019, we sincerely invite you to support the conference and help ensure its success.
We look forward to seeing you in Hsinchu in 2019!
Sincerely yours,
Prof. Robert Chen-Hao Chang (張振豪) General Co-Chair, AICAS 2019
National Chung Hsing University, Taiwan
Prof. Mohamad Sawan General Co-Chair, AICAS 2019
Polytechnique Montréal, Canada
Congress Information
Congress
IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS
2019)
Date / Venue / Official Website
March 18-20, 2019
Ambassador Hotel Hsinchu, Taiwan
http://www.aicas2019.org
Scale
Number of Participants: 300 (Domestic 150, Overseas 150)
Organizers
Topics of Congress
1. Circuits and systems for AI
2. Deep learning/machine learning/AI algorithms
3. Tools/Platforms for AI
4. Architecture for AI computing
5. Edge and cloud AI computing platforms
6. Hardware accelerators
7. Neuromorphic processors
8. Hardware/software co-design and design automation for AI systems
9. Advanced neural network design
10. Emerging applications: Deep learning for Internet-of-Things
11. Emerging applications of AI: Medical AI
12. Emerging applications of AI: Autonomous Vehicle
13. Emerging applications of AI: Smart Factory and Environment
Organizing Committee
Honorary Chair:
Liang-Gee Chen (陳良基部長), National Taiwan University, Taiwan
General Co-Chairs:
Robert Chen-Hao Chang (張振豪教授), National Chung Hsing University, Taiwan
Mohamad Sawan, Polytechnique Montréal, Canada
Technical Program Co-Chairs:
Shao-Yi Chien (簡韶逸教授), National Taiwan University, Taiwan
David Brooks, Harvard University, USA
Plenary Co-Chairs:
Yen-Kuang Chen (陳彥光博士), Intel, USA
Zicheng Liu, Microsoft, USA
Special Session Co-Chairs:
Gwo-Giun (Chris) Lee (李國君教授), National Cheng Kung University, Taiwan
Marian Verhelst, KU Leuven, Belgium
Hyuk-Jae Lee, Seoul National University, Korea
Guoxing Wang, Shanghai Jiao Tong University, China
Tutorial Co-Chairs:
Chia-Lin Yang (楊佳玲教授), National Taiwan University, Taiwan
Timothy Constandinou, Imperial College London, UK
Hai (Helen) Li, Duke University, USA
Youngsoo Shin, KAIST, Korea
Panel Session Co-Chairs:
Chia-Wen Lin (林嘉文教授), National Tsing Hua University, Taiwan
Kyung Ki Kim, Daegu University, Korea
Atsushi Takahashi, Tokyo Institute of Technology, Japan
Shouyi Yin, Tsinghua University, China
Demo Session Co-Chairs:
Meng-Fan (Marvin) Chang (張孟凡教授), National Tsing Hua University, Taiwan
Tobi Delbruck, University of Zurich and ETH Zurich, Switzerland
Arindam Basu, Nanyang Technological University, Singapore
Jongsun Park, Korea University, Korea
Industrial Session Co-Chairs:
Jiun-In Guo (郭峻因教授), National Chiao Tung University, Taiwan
Rajiv Joshi, IBM Research TJ Watson, USA
Kyomin Sohn, Samsung Electronics, Korea
Local Arrangement Co-Chairs:
Kea Tiong (Samuel) Tang (鄭桂忠教授), National Tsing Hua University, Taiwan
Ing-Jer Huang (黃英哲教授), National Sun Yat-sen University, Taiwan
Tai-Cheng Lee (李泰成教授), National Taiwan University, Taiwan
Exhibition Co-Chairs:
Chia-Hsiang Yang (楊家驤教授), National Taiwan University, Taiwan
Yu Wang, Tsinghua University, China
Publication Chair:
Yuan-Hao Huang (黃元豪教授), National Tsing Hua University, Taiwan
Advisory Committee:
Franco Maloberti, University of Pavia, Italy
Yong Lian, York University, Canada
Amara Amara, "Terre des hommes" Foundation, Switzerland
Myung Hoon Sunwoo, Ajou University, Korea
Eduard Alarcon, Technical University of Catalunya (UPC), Spain
International Liaison:
Yoshifumi Nishio, Tokushima University, Japan
Leonel Sousa, Universidade de Lisboa, Portugal
Chiara Bartolozzi, Italian Institute of Technology, Italy
Treasurer:
Yin-Tsung Hwang (黃穎聰教授), National Chung Hsing University, Taiwan
AICAS 2019 Program Schedule
Monday, Mar. 18th, 2019

08:30~            Registration
09:00-10:30 (90’)
  Ballroom C, 10F: Tutorial 1-1, “Connecting ONNX to Proprietary DLAs: An Introduction to Open Neural Network Compiler”, Luba Tang, Skymizer, Taiwan
  Ballroom D, 11F: Tutorial 2-1, “Neuromorphic Artificial Intelligence”, Tobi Delbruck, University of Zurich and ETH Zurich, Switzerland
10:30-10:45 (15’) Coffee Break
10:45-12:15 (90’)
  Ballroom C, 10F: Tutorial 1-2, “Connecting ONNX to Proprietary DLAs: An Introduction to Open Neural Network Compiler”, Luba Tang, Skymizer, Taiwan
  Ballroom D, 11F: Tutorial 2-2, “Neuromorphic Artificial Intelligence”, Tobi Delbruck, University of Zurich and ETH Zurich, Switzerland
12:15-13:15 (60’) Tutorial Lunch @ Ballroom B, 10F
13:15-14:45 (90’)
  Ballroom C, 10F: Tutorial 3, “Memory-Centric Chip Architecture for Deep Learning”, Sungjoo Yoo, Seoul National University, Korea
  Ballroom D, 11F: Tutorial 4-1, “BRAINWAY and Nano-Abacus Architecture: Brain-Inspired Cognitive Computing Using Energy Efficient Physical Computational Structures, Algorithms and Architecture Co-Design”, Andreas Andreou, Johns Hopkins University, USA
14:45-15:00 (15’) Coffee Break
15:00-16:30 (90’)
  Ballroom C, 10F: Tutorial 5, “SRAM and RRAM based In-Memory Computing for Deep Learning: Opportunities and Challenges”, Jae-sun Seo, Arizona State University, USA
  Ballroom D, 11F: Tutorial 4-2, “BRAINWAY and Nano-Abacus Architecture: Brain-Inspired Cognitive Computing Using Energy Efficient Physical Computational Structures, Algorithms and Architecture Co-Design”, Andreas Andreou, Johns Hopkins University, USA
16:30-16:40 (10’) Break
16:40-17:00 (20’) Opening Ceremony @ Ballroom B, 10F
17:00-18:00 (60’) Keynote #1, “Re-Engineering Computing with Neuro-Inspired Learning: Devices, Circuits, and Systems”, Kaushik Roy, Purdue University, USA @ Ballroom B, 10F
18:00-18:30 (30’) Break
18:30-20:30 (120’) Welcome Reception & Jeopardy @ Ballroom B, 10F
AICAS 2019 Program Schedule
Tuesday, Mar. 19th, 2019
08:00             Registration
08:30-09:30 (60’) Keynote #2, “How edge AI technology is redefining smart devices”, Ryan Chen, MediaTek Inc., Taiwan
09:30-10:50 (80’)
  Ballroom B, 10F: Special Session 1: Smart Circuit Techniques for Neural Networks
  Ballroom C, 10F: Lecture Session 1: Deep Neural Network for Computer Vision
  Ballroom D, 11F: Lecture Session 2: Hardware Accelerators for AI
10:50-11:10 (20’) Coffee Break
11:10-12:30 (80’)
  Ballroom B, 10F: Special Session 2: Edge and Fog Computing to Enable AI in IoT
  Ballroom C, 10F: Lecture Session 3: Neuromorphic Processors
  Ballroom D, 11F: Lecture Session 4: Application Specific AI Accelerators
12:30-13:30 (60’) Lunch
13:30-15:00 (90’) Panel Discussion
15:00-16:00 (60’) WICAS & YP
16:00-16:20 (20’) Coffee Break
16:20-18:00 (100’)
  Lecture Session 5: Deep Learning for Speech and Low-dimensional Signal Processing
  Special Session 3: Analytics Algorithm/Architecture for Smart System Design
18:30-20:30       Banquet
AICAS 2019 Program Schedule
Wednesday, Mar. 20th, 2019
08:00             Registration
08:30-09:30 (60’) Keynote #3, “Edge Intelligence for Optimized Systems & High-Performance Devices”, Anthony Vetro, MERL, USA
09:30-10:30 (60’)
  Ballroom B, 10F: Special Session/Forum: 2018 Low-Power Image Recognition Challenge and Beyond
  Ballroom C, 10F: Lecture Session 6: Medical AI (I)
  Ballroom D, 11F: Industrial Session 1: AI Computing Platform
10:30-10:50 (20’) Coffee Break
10:50-12:10 (80’)
  Ballroom B, 10F: Special Session 4: Intelligent Processing of Time-Series Signals
  Ballroom C, 10F: Lecture Session 7: Medical AI (II)
  Ballroom D, 11F: Industrial Session 2: Compiler Technology for AI Chip
12:10-13:10 (60’) Lunch
13:10-14:40 (90’) Poster Session & Showcase @ Ballroom A, 10F
14:40-16:00 (80’)
  Special Session 5: Emerging Memory Technologies for Neuromorphic Circuits and Systems
  Lecture Session 8: Low Precision Neural Network
16:00-16:20 (20’) Coffee Break
16:20-17:40 (80’)
  Special Session 6: AI in Advanced Applications
  Lecture Session 9: Hardware Oriented Neural Network Optimization
Keynote Speech 1
Kaushik Roy
Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer
Engineering, Purdue University
USA
Title: Re-Engineering Computing with Neuro-Inspired Learning: Devices,
Circuits, and Systems
Date/Time: March 18 (Mon) 17:00-18:00
Abstract
Advances in machine learning, notably deep learning, have led to computers matching or surpassing human performance in several cognitive tasks, including vision, speech,
and natural language processing. However, implementations of such neural algorithms in conventional "von Neumann" architectures are several orders of magnitude more
area- and power-expensive than the biological brain. Hence, we need fundamentally new approaches to sustain exponential growth in performance at high energy efficiency
beyond the end of the CMOS roadmap, in the era of ‘data deluge’ and emergent data-centric applications. Exploring this new paradigm of computing necessitates a
multi-disciplinary approach. In this talk, I will discuss new learning algorithms inspired by neuroscientific principles, network architectures best suited for such
algorithms, new hardware techniques to achieve orders-of-magnitude improvements in energy consumption, and nanoscale devices that can closely mimic the neuronal and
synaptic operations of the brain, leading to a better match between the hardware substrate and the model of computation.
Speaker Biography
Dr. Roy is the Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering at
Purdue University, where he joined the faculty in 1993. He is currently leading the National Center for Brain-
inspired Computing Enabling Autonomous Intelligence (C-BRIC) in USA. He was previously with the
Semiconductor Process and Design Center of Texas Instruments, Dallas, where he worked on FPGA
architecture development and low-power circuit design. He received his Ph.D. degree from the electrical and
computer engineering department of the University of Illinois at Urbana-Champaign in 1990. Dr. Roy received
the 2005 SRC Technical Excellence Award, the SRC Inventors Award, the Purdue College of Engineering
Research Excellence Award, and the 2010 IEEE Circuits and Systems Society Technical Achievement Award.
He is a Fellow of the IEEE. His research interests include spintronics, device-circuit co-design for nano-scale
silicon and non-silicon technologies, low-power electronics for portable computing and wireless
communications, and new computing models enabled by emerging technologies. Dr. Roy has published more
than 600 papers in refereed journals and conferences, holds 15 patents, graduated 56 Ph.D. students, and is
co-author of two books on low power CMOS VLSI design.
Keynote Speech 3
Anthony Vetro
Vice President & Director
Mitsubishi Electric Research Labs (MERL)
USA
Title: Edge Intelligence for Optimized Systems & High-Performance
Devices
Date/Time: March 20 (Wed) 08:30-09:30
Abstract
The combination of IoT sensing, edge computing and AI algorithms is creating new opportunities to
use real-time data to optimize system capabilities and increase device performance. In the
manufacturing domain, edge intelligence allows us to realize various forms of anomaly detection,
predict the lifetime or maintenance schedule of components, and adaptively learn improved control
policies. Connected cars will benefit from edge intelligence to improve safety and optimize traffic
flows. Additionally, the parameters of a circuit can be automatically tuned using data-driven machine
learning techniques to increase efficiency and performance. This presentation will highlight the
numerous benefits of the edge intelligence framework, and identify several open challenges and
issues.
Speaker Biography
Dr. Vetro is a Vice President and Director at Mitsubishi Electric Research Labs, in Cambridge, Massachusetts.
He is currently responsible for AI related research in the areas of computer vision, speech/audio processing,
and data analytics. In his 20+ years with the company, he has contributed to the transfer and development
of several technologies to Mitsubishi products, including digital television receivers and displays, surveillance
and camera monitoring systems, automotive equipment, as well as satellite imaging systems. He has published
more than 200 papers and has been an active member of the MPEG and ITU-T video coding standardization
committees for a number of years, serving as Head of the US Delegation to MPEG (2011-2014), and as the
Chair of the US Technical Advisory Group to ISO/IEC JTC 1/SC 29 (2015-2018). He is also active in various
IEEE conferences, technical committees and editorial boards. He currently serves on the Conference Board of
the IEEE Signal Processing Society. Past roles include Senior Editorial Board of IEEE Journal on Selected Topics
in Signal Processing and IEEE Journal on Emerging and Selected Topics in Circuits and Systems, Editorial Board
of IEEE Signal Processing Magazine and IEEE Multimedia, Associate Editor of IEEE Transactions on Circuits
and Systems for Video Technology and IEEE Transactions on Image Processing, Chair of TC on Multimedia
Signal Processing of the IEEE Signal Processing Society, and Steering Committee of IEEE Transactions on
Multimedia. He was a General Co-Chair of ICIP 2017 and ICME 2015, and also served as a Technical Program
Co-Chair for ICME 2016. Dr. Vetro received the B.S., M.S. and Ph.D. degrees in Electrical Engineering from
Polytechnic University, in Brooklyn, NY (now NYU Tandon School of Engineering). He has received several
awards for his work on transcoding and is a Fellow of the IEEE.
Tutorial 1
Luba Tang
CEO and Founder, Skymizer
Taiwan
Title: Connecting ONNX to Proprietary DLAs: An introduction to Open
Neural Network Compiler
Date/Time: March 18 (Mon) 09:00-10:30, 10:45-12:15
Abstract
This tutorial introduces the Open Neural Network Compiler (ONNC), a retargetable compilation framework that connects Open Neural Network eXchange (ONNX) models to
proprietary deep learning accelerators (DLAs). ONNX enables interchangeability among neural network models designed in different frameworks and has become a pervasive
format supported by many high-tech titans and a community of researchers. ONNC is the first open source compiler project designed from the ground up to support ONNX.
The target audience of this tutorial includes researchers, graduate students, engineers, and anyone interested in porting a compiler backend or implementing compiler
optimization algorithms for deep learning hardware. The tutorial consists of four short talks. The first talk describes the top-level architecture and major design
features of ONNC, how ONNC differentiates itself from other frameworks, and how users benefit from adopting ONNC as their compilation framework. Those interested in a
broad view of ONNC are welcome to join us for a 15-minute quick update on the latest ONNC progress. The second talk introduces the “Vanilla” backend in ONNC and
demonstrates how fast porting to a new target DLA is carried out. Those who need to port a compiler backend to a designated target device should not miss the chance to
see ONNC backend porting at a glance. The third talk focuses on how to explore architecture design trade-offs and perform optimizations via the pass manager in ONNC.
This topic is especially designed for those who would like to use ONNC as a research framework to explore the DLA design space for either commercial products or research
topics. The last talk provides an opportunity to get your hands dirty building ONNC, running benchmarks, and playing around with the source code with us; it is designed
to get you started on ONNC programming. We welcome engineers and researchers alike to join the ONNC open source community and contribute to the project.
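Since ONNC consumes standard ONNX models, a feel for the input side of that flow may help. Below is a minimal sketch using the official onnx Python package (not part of ONNC itself; the file name model.onnx is a placeholder) that loads a model and lists the operators a compiler backend would have to lower:

    # Minimal sketch: inspect an ONNX model before handing it to a compiler backend.
    # Requires the official `onnx` package (pip install onnx); "model.onnx" is a
    # placeholder path, not a file shipped with ONNC.
    import onnx

    model = onnx.load("model.onnx")      # parse the serialized protobuf model
    onnx.checker.check_model(model)      # validate against the ONNX specification

    # Each graph node is one operator that a DLA backend must lower to hardware.
    for node in model.graph.node:
        print(node.op_type, list(node.input), "->", list(node.output))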
Speaker Biography
Luba Tang, CEO and founder of Skymizer, has 10+ years of compiler-related work experience. His research interests include electronic system level (ESL) design,
compilers, and virtual machines. His most recent work focuses on optimization algorithms for AI compilers and on architecture design for blockchain virtual machines.
He was the original writer of the Marvell iterative compiler, the software architect of the MCLinker project, the architect of the ONNC project, and the co-founder of
the Lity Language Project.
Tutorial 2
Tobi Delbruck
Professor, Institute of Neuroinformatics,
University of Zurich and ETH Zurich
Switzerland
Title: Neuromorphic Artificial Intelligence
Date/Time: March 18 (Mon) 09:00-10:30, 10:45-12:15
Abstract
With hundreds of millions of dollars flowing this year into silicon developments for training and running artificial intelligence (AI) deep neural networks (DNNs) from
Nvidia, Intel, Nervana, Qualcomm, Graphcore, ARM, and dozens of others, it is worth asking what is left to be done. Won't silicon AI follow the same course as GPUs and
become more and more tailored to efficiently compute industrial AI?
This tutorial addresses this question from the context of neuromorphic engineering (NE), which takes its inspiration from the brain's organizing principles. What have
these principles of using sparsity, local memory, time, and physics brought to the table? I will trace the historical development of both NE and AI and how ideas from
neuroscience came into both (e.g. ReLU, local adaptation, max pooling), and show recent developments in neuromorphic silicon from IBM, Intel, Zurich, and Manchester.
I will also compare these with upcoming industrial AI digital accelerators, and then show how principles of sparsity and local memory reuse can bring immediate benefits
to both convolutional and recurrent DNNs implemented in synchronous logic, without requiring a new memory hierarchy. Finally, I will relate these ideas to event sensors,
which our group has specialized in developing. I plan to include a live demonstration of some of these ideas. A toy sketch of the sparsity idea follows below.
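To make the sparsity argument concrete, here is a toy NumPy "delta" update, in the spirit of (but not code from) the tutorial: when few inputs change between time steps, only the corresponding columns of the weight matrix need to be touched. The matrix sizes and change pattern are arbitrary illustrations:

    # Toy illustration of temporal sparsity: unchanged inputs cost nothing,
    # because only the columns of W whose input changed contribute to the update.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 256))    # weight matrix of one layer
    x_prev = rng.standard_normal(256)      # input at the previous time step
    y = W @ x_prev                         # output already computed for x_prev

    x_new = x_prev.copy()
    x_new[:16] += rng.standard_normal(16)  # only a few inputs actually change

    delta = x_new - x_prev
    active = delta != 0                    # real designs use a small threshold here
    y += W[:, active] @ delta[active]      # touch only the columns that changed

    assert np.allclose(y, W @ x_new)       # same result, ~16/256 of the multiplies
    print(f"recomputed {active.sum()} of {delta.size} input columns")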
Speaker Biography
Tobi Delbruck (IEEE M’99–SM’06–F’13) received a Ph.D. degree from Caltech in 1993 as a student
of Carver Mead. He was in the first group of students in the newly founded Computation and Neural
Systems program started by John Hopfield. He is currently a professor of physics and electrical
engineering at ETH Zurich in the Institute of Neuroinformatics, University of Zurich and ETH Zurich,
Switzerland, where he has been since 1998. The Sensors Group that he co-organizes with PD Dr.
Shih-Chii Liu focuses on neuromorphic event-based sensors, sensory processing, and efficient deep
neural network hardware architectures. He co-organizes the Telluride Neuromorphic Cognition
Engineering summer workshop and has organized the live demonstration sessions at ISCAS and
NIPS. Delbruck is past Chair of the IEEE CAS Sensory Systems Technical Committee. He worked
on electronic imaging at Arithmos, Synaptics, National Semiconductor, and Foveon and has founded
4 spin-off companies, including inilabs.com, a community-oriented organization that has distributed
R&D prototype neuromorphic sensors around the world. He has received 9 IEEE awards.
Sensors group: http://sensors.ini.uzh.ch/home.html
Personal page: http://sensors.ini.uzh.ch/tobi.html
Tutorial 3
Sungjoo Yoo
Professor, Seoul National University
Korea
Title: Memory-Centric Chip Architecture for Deep Learning
Date/Time: March 18 (Mon) 13:15-14:45
Abstract
Memory is a critical component in designing chips for deep learning, in terms of both energy consumption and area cost.
In this tutorial, we will first explain the memory access behavior of state-of-the-art neural networks. Then, we will introduce recent work on neural network
accelerators in which memory accesses are optimized by exploiting this behavior. Specifically, we will focus on data reuse via broadcast and sparsity, which reduce
the frequency of memory accesses, and on low precision, which reduces the bit width of each memory access (see the sketch below).
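As a back-of-the-envelope illustration of the low-precision point, the NumPy sketch below quantizes float32 weights to int8 with a simple symmetric scale, cutting each weight fetch from 32 to 8 bits. This is a generic sketch, not one of the accelerator designs surveyed in the tutorial:

    # Toy symmetric int8 quantization: 4x fewer bits per weight fetched from memory.
    import numpy as np

    w = np.random.default_rng(1).standard_normal(1024).astype(np.float32)

    scale = np.abs(w).max() / 127.0            # symmetric scale: largest weight -> 127
    w_q = np.round(w / scale).astype(np.int8)  # 8-bit weights stored in memory
    w_hat = w_q.astype(np.float32) * scale     # dequantized values used in compute

    print("bytes stored:", w.nbytes, "->", w_q.nbytes)     # 4096 -> 1024
    print("max abs error:", float(np.abs(w - w_hat).max()))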
Speaker Biography
Sungjoo Yoo received his Ph.D. from Seoul National University in 2000. From 2000 to 2004, he was a researcher in the system-level synthesis (SLS) group at the TIMA
laboratory, Grenoble, France. From 2004 to 2008, he led, as principal engineer, the system-level design team at System LSI, Samsung Electronics. From 2008 to 2015,
he was an associate professor at POSTECH. In 2015, he joined Seoul National University, where he is now a full professor. His current research interests are
software/hardware co-design of deep neural networks and machine learning-based optimization of computer architecture.
Tutorial 4
Andreas G. Andreou
Professor, Electrical and Computer Engineering, Center for Language and Speech
Processing and Whitaker Biomedical Engineering Institute, Johns Hopkins University
USA
Title: BRAINWAY and Nano-Abacus Architecture: Brain-Inspired
Cognitive Computing Using Energy Efficient Physical Computational
Structures, Algorithms and Architecture Co-Design
Date/Time: March 18 (Mon) 13:15-14:45, 15:00-16:30
Abstract
Since the invention of the integrated circuit (the chip, in short) in the 1950s, the microelectronics industry has seen a remarkable evolution: from the centimeter-scale
devices created by Jack Kilby, to the millimeter-scale integrated circuits fabricated by Robert Noyce, to today's 5 nm feature-size MOS transistors. During this time,
not only have exponential improvements been made in the scaling of size and the density of devices, but CAD and workstation technologies have advanced at a similar
pace, enabling the design of complete, truly complex Systems On a Chip (SOC). The advances in the microelectronics industry have also enabled the proliferation of
computational fields such as bioinformatics, systems biology, imaging, and multi-scale, multi-domain modeling.
Semiconductor technology is contributing to the advancement of biotechnology, medicine, and health care delivery in ways that were never envisioned, from
scientific-grade CMOS imagers to silicon photomultipliers and ion-sensing arrays. The stunning convergence of semiconductor technology and life science research is
transforming the landscape of the pharmaceutical, biotechnology, and healthcare industries, signaling the arrival of personalized and molecular-level imaging, diagnosis,
and treatment, thereby speeding up the pace of scientific discovery and changing the practice and delivery of patient care. Whether through tissue and organ imaging,
Labs-on-Chip, or genome sequencing, biotechnology and modern medical diagnostics are generating a staggering amount of data stored in data centers. However, computing
in data centers, the engine behind our insatiable desire for global communication, instant connectedness, and interaction, comes at an economic and environmental cost.
Projected future needs in data centers center on data-intensive applications in Cognitive Computing Technology (CCT). CCT is the foundation of the Third Wave of AI,
which aims at advancing intelligent software and hardware that can process, analyze, and distill knowledge from vast quantities of text, speech, images, and biological
data, ultimately with as much nuance and depth of understanding as a human would. To meet the scientific demand for future data-intensive CCT, from everyday mundane
tasks such as searching via images to the utmost serious health care task of disease diagnosis in personalized medicine [1], we urgently need a new cloud computing
paradigm and energy-efficient, i.e. green, technologies.
The BRAINWAY project in my lab is aimed at the design of an energy-efficient Cognitive Multi-Processor Unit (CogMPU) that combines Ultra-Low-Voltage (ULV) circuit
techniques with a brain-inspired chip-multiprocessor network-on-chip (NoC) architecture. The design of the CogMPU architecture is based on a recently developed
mathematical framework for architecture exploration and optimization [2]. The computational principles [3][4] and architectural ideas in the BRAINWAY project have been
embodied in the nano-Abacus SOC, aimed at real-time processing, information extraction, and prediction from streaming Wide Area Motion Imagery (WAMI) data. The
availability of full-motion, high-resolution data over large, city-sized geographical areas (100 square kilometers) offers unprecedented capabilities for situational
awareness. The dynamic nature of the imagery offers insights about actions and patterns of activities that static images do not. Civilian applications of WAMI data
allow for the monitoring and intelligent control of traffic across large geographical areas and the inference of a hierarchy of events and activities, ultimately up to
“life-patterns”. Additional applications include the coordination of activities in disaster areas and the monitoring of wildlife. In the nano-Abacus SOC, high
performance and high throughput are achieved through approximate computing and fixed-point arithmetic in a variable-precision (6-bit to 18-bit) architecture. The
architecture implements a variety of processing algorithms in what we consider today as Third Wave AI and Machine Intelligence, ranging from convolutional networks
(ConvNets) to linear and non-linear morphological processing and probabilistic inference using exact and approximate Bayesian methods. The processing pipeline is
implemented entirely using spike-based neuromorphic computational primitives.
Tutorial Topics:
System Level:
1. System design considerations and architectures for compute in memory (CIM) and computational imagers.
2. Algorithm-architecture co-design methodology and optimization for throughput, latency, and/or energy efficiency. We outline challenges and present a solution to
   the design of a multiprocessor system architecture using the mathematical framework in [2] and [3].
3. Mixed-signal computational structures in 55 nm GF and state-of-the-art 16 nm TSMC CMOS technology that may be more efficient for scientific computing and machine
   learning, aimed at computational memory architectures.
4. Digital stochastic computation and computational structures that compute with probabilities (see the sketch after this list).
5. Digital morphological processing blocks for non-linear image processing.
Circuit Level:
1. Physical analog-to-probability converters: architecture and mixed-signal circuits using a Random Telegraph Noise physical random-signal generator.
2. Charge-based mixed-signal circuits: a mixed-signal vector-vector multiplier architecture that can yield a 2X to 10X energy-efficiency improvement over a comparable
   optimized digital multiply-accumulate unit fabricated in the same technology.
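As a loose, self-contained illustration of the "computing with probabilities" theme in topic 4 of the system-level list (not the BRAINWAY circuits themselves), classic stochastic computing encodes a value in [0, 1] as the density of 1s in a random bitstream, so a single AND gate multiplies two independent streams:

    # Stochastic-computing sketch: bitwise AND of two independent Bernoulli
    # bitstreams has 1-density a*b, i.e. a one-gate multiplier whose accuracy
    # grows with the stream length.
    import numpy as np

    rng = np.random.default_rng(42)
    a, b, n = 0.6, 0.3, 100_000              # values in [0, 1], stream length

    stream_a = rng.random(n) < a             # Bernoulli(a) bitstream
    stream_b = rng.random(n) < b             # independent Bernoulli(b) bitstream
    estimate = np.mean(stream_a & stream_b)  # AND gate + counter = multiplier

    print(f"stochastic product: {estimate:.3f}   exact: {a * b:.3f}")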
References:
[1] A. G. Andreou, “Johns Hopkins on the chip: microsystems and cognitive machines for sustainable, affordable, personalized medicine and health care (invited paper),”
IEE Electronics Letters (special supplement on semiconductors for personalized medicine), pp. s34–s37, Dec. 2011.
http://digital-library.theiet.org/dbt/dbt.jsp?KEY=ELLEAK&Volume=47&Issue=26
[2] A. S. Cassidy and A. G. Andreou, “Beyond Amdahl's Law: an objective function that links multiprocessor performance gains to delay and energy,” IEEE Transactions on
Computers, vol. 61, no. 8, pp. 1110–1126, Aug. 2012.
[3] A. S. Cassidy, J. Georgiou, and A. G. Andreou, “Design of silicon brains in the nano-CMOS era: spiking neurons, learning synapses and neural architecture
optimization,” Neural Networks, pp. 1–28, Jun. 2013.
[4] D. H. Goldberg, G. Cauwenberghs, and A. G. Andreou, “Probabilistic synaptic weighting in a reconfigurable network of VLSI integrate-and-fire neurons,” Neural
Networks, vol. 14, pp. 781–793, 2001.
Speaker Biography
Andreas G. Andreou is a professor of electrical and computer engineering, computer science and the Whitaker
Biomedical Engineering Institute, at Johns Hopkins University. Andreou is the co-founder of the Johns Hopkins
University Center for Language and Speech Processing. Research in the Andreou lab is aimed at brain inspired
microsystems for sensory information and human language processing. Notable microsystems achievements
over the last 25 years include a contrast-sensitive silicon retina, the first CMOS polarization-sensitive imager,
silicon rods in standard foundry CMOS for single photon detection, hybrid silicon/silicone chip-scale incubator,
and a large scale mixed analog/digital associative processor for character recognition. Significant algorithmic
research contributions for speech recognition include the vocal tract normalization technique and
heteroscedastic linear discriminant analysis, a derivation and generalization of Fisher discriminants in the
maximum likelihood framework. In 1996 Andreou was elected as an IEEE Fellow, “for his contribution in energy
efficient sensory Microsystems.”
Tutorial 5
Jae-sun Seo
Assistant Professor, Arizona State University
USA
Title: SRAM and RRAM based In-Memory Computing for Deep Learning:
Opportunities and Challenges
Date/Time: March 18 (Mon) 15:00-16:30
Abstract
Deep learning algorithms have been successful across many practical applications, but state-of-the-art algorithms are compute- and memory-intensive. To bring these
expensive algorithms to low-power processors, a number of digital CMOS ASIC solutions have been proposed, but limitations remain in memory access and footprint.
To improve upon the conventional row-by-row operation of memories, several recent works have demonstrated “in-memory computing” designs, which perform analog
computation inside memory arrays (e.g. along the bitline) by asserting multiple or all rows simultaneously; a toy functional model of this idea follows below. This
tutorial will present recent silicon demonstrations of in-memory computing for deep learning systems, based on both SRAM and denser resistive RAM (RRAM) fabrics. New
memory bitcell circuits, array peripheral circuits, architectures, and optimizations for accurate deep learning acceleration will be covered. Promising opportunities
of in-memory computing (e.g. large energy gains over digital ASICs), as well as particular challenges (e.g. variability) and new device/circuit design considerations,
will be discussed.
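A rough functional model of the bitline idea may help fix intuition: with binary weights stored one per cell and all wordlines asserted together, each bitline accumulates the dot product of the input vector with its column, and an ADC quantizes the result. This NumPy sketch is illustrative only and models none of the silicon designs covered in the tutorial:

    # Functional sketch of in-memory computing: asserting all rows at once makes
    # each bitline accumulate sum_i input[i] * weight[i, j], so one array access
    # yields a whole vector of dot products.
    import numpy as np

    rng = np.random.default_rng(7)
    rows, cols = 256, 64
    weights = rng.integers(0, 2, size=(rows, cols))  # binary weights, one per bitcell
    inputs = rng.integers(0, 2, size=rows)           # wordline activations (0 or 1)

    bitline_sums = inputs @ weights                  # ideal analog column currents

    # A low-resolution ADC quantizes each bitline value; limited ADC precision and
    # device variability are among the challenges mentioned in the abstract.
    adc_bits = 4
    step = bitline_sums.max() / (2 ** adc_bits - 1)
    adc_codes = np.round(bitline_sums / step).astype(int)
    print(bitline_sums[:8], adc_codes[:8])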
Speaker Biography
Jae-sun Seo received his Ph.D. degree from the University of Michigan in 2010. From 2010 to 2013, he was with the IBM T. J. Watson Research Center, where he worked on
cognitive computing chip design for the DARPA SyNAPSE project. In January 2014, he joined Arizona State University as an assistant professor in the School of ECEE. His
research interests are energy-efficient hardware design for deep learning and neuromorphic computing. During the summer of 2015, he was a visiting faculty member at
the Intel Circuits Research Lab. He received the IBM Outstanding Technical Achievement Award in 2012 and the NSF CAREER Award in 2017.
Secretariat of IEEE AICAS 2019
Ruby Tsai | Enjoy PCO Corp. | TEL: +886-2-77160750 | Email: [email protected]
AICAS 2019 Sponsorship Agreement
Please fill out this form in BLOCK letters, mark the item(s) you would like to sponsor with a “V”, and return it to the Congress Secretariat at [email protected] by Jan. 31, 2019.
*Company Name (English):
*Company Name (Chinese):
*Receipt Title:
*VAT No.:
*Address:
*Contact Person:                Position:
*Tel.:
Sponsorship Fees (all amounts in TWD)

Exhibition Booth: 100,000 per booth; Number of booths: ____
  Size: 3 m (W) x 2 m (D) x 2.5 m (H). Booth spaces will be allocated according to the level of sponsorship and the order of payment:
  (1) the amount of direct sponsorship (higher amounts have priority);
  (2) the number of exhibition booths sponsored (more booths have priority);
  (3) the deposit payment date (earlier payment has priority).

Industrial Workshop: 200,000 per session; Number of sessions: ____
  Venue and AV equipment fees are covered by the organizer. Airfare, accommodation, and other fees are borne by the sponsor. Registration fees for up to 2 speakers are waived.

Advertisement: 50,000 □
  Program book inside advertisement (size: full A4 page); ad design provided by the sponsor.

Congress Bag*: 150,000 □
Congress Pen*: 50,000 □
Souvenirs for Participants*: 150,000 □
Badge Lanyard*: 100,000 □
  Bags, pens, souvenirs, and badge lanyards will be produced by the organizer, with a single-color sponsor logo placed on each item.

Total Amount: ____________ TWD

* Exclusive sponsorship
Signed by company representative
Signed: __________________________  Date: ______

Cancellation and Refund Policy
Cancellation must be made in writing and sent to the AICAS 2019 Secretariat at [email protected]
* Before Jan. 31, 2019: 80% refund
* After Jan. 31, 2019: No refund
Company Stamp