Wednesday, 1 October 2014
20:00‑21:30
(90 min)
Welcome dinner "Bier und Bratwurst"
in the "Kulturbrauerei" in Heidelbergs Old Town: Leyergasse 6, 69117 Heidelberg.

Thursday, 2 October 2014
08:45
BrainScaleS 4th Frontiers in Neuromorphic Computing Conference
Venue: Kirchhoff-Institut für Physik, Im Neuenheimer Feld 227, 69126 Heidelberg, Germany.
Registration desk opens at 8:00

The conference will review 10 years of research carried out in the European FACETS and BrainScaleS projects. Both projects have pioneered interdisciplinary collaboration between experimental neuroscience, theory, computational neuroscience and research on neuromorphic computing. The invited speakers include international researchers as well as project members. The conference is open to all interested scientists.
08:45‑09:00
(15 min)
Welcome and introductions
Karlheinz Meier (UHEI)
09:00
Area: Theory
Session chair: Robert Legenstein
09:00‑09:25
(25+5 min)
 Memory and the statistical structure of the world (show presentation.pdf)
When do we modify old memories, and when do we create new ones? I propose that this question is fundamentally linked to our inferences about the latent structure of the world: we create new memories when we infer that our observations are generated by unfamiliar latent causes. I present a computational theory of latent causal inference, and discuss how this viewpoint has wide-ranging implications for how we understand classic learning phenomena, such as Pavlovian conditioning and episodic memory. New experiments with rats and humans confirm some of the predictions of this theory. The link between memory and latent structure may potentially reshape how we think about "disorders of pathological memory" such as PTSD and addiction.
Sam Gershman (MIT)
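As a sketch of the kind of inference involved (a standard latent cause model with a Chinese restaurant process prior; the exact formulation in the talk may differ): the probability that observation x_t was generated by latent cause k is

    P(z_t = k | x_{1:t}) ∝ P(x_t | z_t = k) P(z_t = k | z_{1:t-1}),

with the prior

    P(z_t = k   | z_{1:t-1}) = N_k / (t - 1 + α)   for an existing cause k (N_k = observations already assigned to k),
    P(z_t = new | z_{1:t-1}) = α / (t - 1 + α)     for a new cause.

A new memory is created when the posterior favours a new latent cause; otherwise an existing memory is updated.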
09:30‑09:55
(25+5 min)
 Design rules that enable networks of spiking neurons to carry out complex computations
Spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it has turned out to be surprisingly difficult to design networks of spiking neurons that can carry out demanding computations. We present a new method for organizing such networks out of simple stereotypical network motifs so that they can solve hard constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. Here we use noise as a computational resource, even though the timing of spikes (rather than just spike rates) is essential for the resulting computations. This new organization scheme is supported by a new theoretical understanding of spike-based stochastic computation. Surprisingly, in this context one can also identify a concrete computational advantage of spiking networks over traditional non-spiking stochastic neural networks (Boltzmann machines).
Wolfgang Maass (TUG)
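A minimal sketch of the underlying idea of spike-based stochastic computation (a neural-sampling-style network whose noisy neurons sample from a Boltzmann distribution; the specific network motifs and constraint-satisfaction encoding of the talk are not reproduced here, and all parameters are illustrative):

import numpy as np

# Minimal sketch of spike-based stochastic sampling.
# A symmetric weight matrix W and biases b define a Boltzmann distribution
# p(z) ~ exp(z^T W z / 2 + b^T z); stochastically firing neurons sample from it.
# Refractory dynamics and exact spike timing are deliberately omitted here.
rng = np.random.default_rng(0)
n = 5
W = rng.normal(0.0, 0.5, (n, n))
W = (W + W.T) / 2.0          # symmetric couplings
np.fill_diagonal(W, 0.0)     # no self-connections
b = rng.normal(0.0, 0.5, n)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

z = np.zeros(n)              # binary state: 1 = neuron recently spiked
samples = []
for step in range(10000):
    k = rng.integers(n)                   # update one random neuron per step
    u = b[k] + W[k] @ z                   # membrane potential from recurrent input
    z[k] = rng.random() < sigmoid(u)      # spike with sigmoidal probability
    samples.append(z.copy())

# The empirical distribution of network states converges to the Boltzmann
# distribution: noise acts as the computational resource for sampling.

In this picture, solving a constraint satisfaction problem amounts to choosing W and b so that high-probability states correspond to solutions.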
10:00‑10:15
(15+10 min)
 Coffee break
10:25
Area: Modelling
Session chair: Sonja Grün
10:25‑10:50
(25+5 min)
 Structural plasticity for learning priors in probabilistic neural networks
The brain employs probabilistic inference for sensory analysis and decision-making. Theoretical studies have shown that neural networks can perform Bayesian computation, but the underlying neural mechanism remains elusive. In this talk, I will present results from our ongoing study of the neural network mechanisms underlying this fundamental computation. We propose a network model that learns internal models of probabilistic phenomena. A characteristic feature of this model is that both the weights and the wiring pattern of synaptic connections are modified during the learning of likelihood functions. Our results suggest how structural plasticity contributes to probabilistic inference in neural networks.
Tomoki Fukai (Riken)
10:55‑11:20
(25+5 min)
 Modeling Cortical Layers (show presentation.pdf)
We seek to understand the common principles that operate in all cortical regions. We believe that the cellular layers observed throughout the cortex perform unique roles in sensory inference, sensory-motor inference, and motor planning and behavior. We further believe that all cellular layers use a common memory architecture which relies on active distal dendrites and columnar inhibition. In my talk I will describe our overall theory of what a cortical region does, how we believe cellular layers learn temporal patterns, and how stability over changing patterns is achieved.

We have modeled many parts of this theory in software and I will show results of these models. Finally I will discuss the impact of these models on neuromorphic hardware.
Jeff Hawkins (Numenta)
11:25‑11:50
(25+5 min)
 Linking the Functional and Structural Human Connectome
The ongoing activity of the brain at rest, i.e. under no stimulation and in the absence of any task, is astonishingly well structured into spatio-temporal patterns. These patterns, called resting state networks, display the low-frequency characteristics (<0.1 Hz) typically observed in the blood-oxygenation-level-dependent (BOLD) fMRI signal of human subjects. We aim here to understand the origins of resting state activity through modelling. When biologically realistic DTI/DSI-based neuroanatomical connectivity is integrated into a brain model, the emerging resting state functional connectivity of the brain network fits the experimentally observed functional connectivity in humans best when the network operates at the edge of instability. Under these conditions, the slowly fluctuating (<0.1 Hz) resting state networks emerge as structured noise fluctuations around a stable low-firing equilibrium state in the presence of latent "ghost" multistable attractors. The multistable attractor landscape defines a functionally meaningful dynamic repertoire of the brain network that is inherently present in the neuroanatomical connectivity.
Gustavo Deco (UPF)
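To make the "edge of instability" picture concrete, a generic formulation (not necessarily the exact model equations of the talk): near a stable equilibrium x*, the noise-driven network dynamics can be linearized as

    dx = J (x - x*) dt + σ dW,

where the Jacobian J is shaped by the anatomical connectivity. When the leading eigenvalues of J approach zero from below, the corresponding modes relax slowly, so noise excites large, slow (<0.1 Hz), spatially structured fluctuations along those modes; these structured fluctuations are the resting state networks.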
11:55
Area: Neuromorphic computing
Session chair: René Schüffny
11:55‑12:20
(25+5 min)
 Large, multi-scale models of brain function
My lab has recently described a large-scale functional brain model called Spaun. This model performs eight different tasks while respecting a wide variety of anatomical and physiological constraints. However, concern has been expressed that such models do not capture the important biological details. In this talk, I argue that we do not know what the important biological details are, and hence need to develop tools that allow us to explore that question systematically in the context of functional brain models. I present recent work showing that we can include conductance-based, multi-compartmental single neurons modeled in NEURON in simulations run in Nengo, the simulation environment used to create Spaun. I suggest that such integration is important for building useful, scalable, and informative brain models.
Chris Eliasmith (Waterloo)
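For readers unfamiliar with Nengo, a minimal sketch of the style of model it supports (standard Nengo Python API; this is neither Spaun nor the NEURON integration described in the talk):

import numpy as np
import nengo

# Minimal Nengo sketch: represent a time-varying signal with a population of
# spiking neurons and decode a nonlinear function of it from the spike trains.
with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)      # encodes x
    b = nengo.Ensemble(n_neurons=100, dimensions=1)      # encodes f(x)
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)    # decoded nonlinearity
    probe = nengo.Probe(b, synapse=0.01)                 # filtered decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)
# sim.data[probe] now approximates sin(2*pi*t)**2

The point of the framework is that the same high-level description can, in principle, be mapped onto different neuron models of varying biological detail.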
12:25‑12:50
(25+5 min)
 Accelerated Analog Neuromorphic Hardware: Chances and Challenges
Modeling biological nervous systems is an increasingly important aspect of contemporary neuroscience. One class of models that could complement the ubiquitous computer simulation is the physical model based on analog microelectronics. Due to its potential for highly accelerated continuous-time modeling, it is especially interesting for research on learning and development. This talk summarizes the achievements of the FACETS and BrainScaleS projects in this area and presents the future research planned within the Human Brain Project.
Johannes Schemmel (UHEI)
12:55‑13:55
(60 min)
 Buffet lunch
13:55
Area: Neuro-Biology
Session chair: Frédéric Chavane
13:55‑14:20
(25+5 min)
 Representation and computation in visual cortical circuits
The cerebral cortex of primates contains a rich plexus of three dozen or more specific areas that contribute to visual perception and visually guided action. The presence and diversity of these specialized areas gives us an opportunity to learn which aspects of representation and computation are common across cortical areas, and which aspects are specialized and particular. I will illustrate these principles, using examples from cortical pathways specialized for processing visual form and visual motion.
Tony Movshon (NYU, USA)
14:25‑14:50
(25+5 min)
 A multi-scale view of the early visual system: an example of FET-driven interdisciplinarity
The aim of this talk is to present a partial overview of the interdisciplinary approaches developed for the study of early sensory systems in rodents and other mammals, in the context of the integrated FET projects FACETS and BrainScaleS. To echo Prof. Movshon's invited talk on the awake, behaving non-human primate visual system (see above), the presentation will focus on the functional dynamics evoked in the primary visual cortex of anesthetized cats, studied with multi-scale recording techniques ranging from conductance dynamics to local field potentials and extrinsic voltage-sensitive dye imaging. This overview will show that i) the early visual system is far from being understood, ii) the functional dynamics of visual cortical networks show a much higher level of complexity than classically thought, and iii) mesoscopic collective order (at the map level) regulates more microscopic properties (at the receptive field level) in a top-down fashion.

LongAbstract.pdf
Yves Frégnac (CNRS UNIC)
14:55‑15:10
(15+10 min)
Coffee break
15:20
Area: High performance computing (HPC)
Session chair: Anders Lansner
15:20‑15:45
(25+5 min)
 Spiking network simulation code for the peta scale (show presentation.pdf)
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are nerve cells and the contacts mediating their interactions, the synapses. Over the past decade researchers have learned to manage this heterogeneity with efficient data structures. Already the early parallel simulation codes had distributed target lists, consuming memory for a synapse on just one compute node. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise: each nerve cell contacts on the order of 10,000 other neurons and thus has targets on only a fraction of all nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any node. The heterogeneity of the synaptic target lists thus collapses along two dimensions: the types of synapses and the number of synapses of a given type. Using JUQUEEN and the K computer, we developed a data structure that takes advantage of this double collapse, using metaprogramming techniques. After arguing for the need for brain-scale simulations, the talk discusses the performance of the upcoming NEST simulation code and concludes with neuroscience perspectives and challenges at the advent of exascale computers.
Markus Diesmann (Jülich)
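As a toy illustration of the distributed target-list idea (illustrative only; NEST's actual data structures are written in C++ and are far more compact than this sketch): each compute node stores synapses only for the targets it hosts, so the memory for a synapse is consumed on exactly one node, and for most source neurons a given node holds no synapse at all.

from collections import defaultdict

# Toy sketch of distributed target lists (not NEST's actual code).
# Each "rank" stores only the synapses whose target neurons live on that
# rank, keyed by source neuron id; most sources have no entry on a rank.
n_ranks = 4

def rank_of(neuron_id):
    return neuron_id % n_ranks   # simple round-robin neuron placement

# target_lists[rank][source] -> list of (target, weight)
target_lists = [defaultdict(list) for _ in range(n_ranks)]

def connect(source, target, weight):
    # memory for this synapse is consumed on exactly one rank
    target_lists[rank_of(target)][source].append((target, weight))

connect(0, 5, 0.1)
connect(0, 6, 0.2)
connect(3, 5, -0.4)

def deliver_spike(source):
    # after spike exchange, each rank looks up its local targets, if any;
    # the loop over ranks here stands in for the parallel per-rank lookup
    for rank in range(n_ranks):
        for target, weight in target_lists[rank].get(source, []):
            print(f"rank {rank}: deliver weight {weight} to neuron {target}")

deliver_spike(0)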
15:50‑16:15
(25+5 min)
 Supercomputers: instruments for science or dinosaurs that didn't go extinct yet? (show presentation.pdf)
High-performance computing has dramatically improved scientific productivity over the past 50 years. It has turned simulation into a commodity that all scientists can use to produce knowledge and understanding about the world and the universe, combining data from experiments with theoretical models that can be solved numerically. Since the beginnings of electronic computing, supercomputing - loosely defined as the most powerful scientific computing at any given time - has led the way in technology development. Yet the way we interact with supercomputers today has not changed much since the days we stopped using punch cards. I do not claim to understand why, but I would nevertheless like to propose a change in how we develop models and applications that run on supercomputers.
Thomas C. Schulthess (CSCS)
16:20
Conclusion
16:20‑16:45
(25 min)
FACETS and BrainScaleS: a 10-year journey
Karlheinz Meier (UHEI)
16:45
End of the conference
16:45‑17:00
(15 min)
 Farewell coffee
20:00
Speakers dinner
Venue: Backmulde, Schiffgasse 11, Heidelberg