{"id":498899,"date":"2018-08-17T10:25:28","date_gmt":"2018-08-17T17:25:28","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=498899"},"modified":"2018-08-17T10:27:16","modified_gmt":"2018-08-17T17:27:16","slug":"summer-institute-1998","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/summer-institute-1998\/","title":{"rendered":"UW Summer Institute 1998"},"content":{"rendered":"
Venue:<\/strong> University of Washington and Friday Harbor Laboratories, Seattle, Washington. The 1998 MSR\/UW Summer Institute will explore conceptual relationships and synergies between biological and computational perspectives on intelligent systems. The Institute will bring together scientists investigating intelligent systems from a variety of subdisciplines of the biological and computational sciences, including integrative and comparative biology, neuroscience, neuroethology, artificial intelligence, decision science, computational neuroscience, computational theory, computer engineering, operating systems, and networking and communications. <\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"1998-08-18","msr_enddate":"1998-08-23","msr_location":"Seattle and San Juan Island, Washington, USA","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":false,"footnotes":""},"research-area":[13553],"msr-region":[197900],"msr-event-type":[197947],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-498899","msr-event","type-msr-event","status-publish","hentry","msr-research-area-medical-health-genomics","msr-region-north-america","msr-event-type-universities","msr-locale-en_us"],"msr_about":"Venue:<\/strong> University of Washington, and\r\nFriday Harbor Laboratories\r\nSeattle, Washington\r\n\r\nRelated Events:<\/strong>\r\n2018<\/a>\r\n2017<\/a>\r\n2016<\/a>\r\n2015<\/a>\r\n2014<\/a>\r\n2013<\/a>\r\n2012<\/a>\r\n2011<\/a>\r\n2010<\/a>\r\n2009<\/a>\r\n2008<\/a>\r\n2007<\/a>\r\n2006<\/a>\r\n2005<\/a>\r\n1999<\/a>\r\n\u00a0<\/strong>","tab-content":[{"id":0,"name":"About","content":"
\nFriday Harbor Laboratories
\nSeattle, Washington<\/p>\n
\n2018<\/a>
\n2017<\/a>
\n2016<\/a>
\n2015<\/a>
\n2014<\/a>
\n2013<\/a>
\n2012<\/a>
\n2011<\/a>
\n2010<\/a>
\n2009<\/a>
\n2008<\/a>
\n2007<\/a>
\n2006<\/a>
\n2005<\/a>
\n1999<\/a>
\n\u00a0<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"Intelligent Systems: Biological and Computational Perspectives<\/h2>\r\nThe 1998 MSR\/UW Summer Institute will explore conceptual relationships and synergies between biological and computational perspectives on intelligent systems. The Institute will bring together scientists investigating intelligent systems from a variety of subdisciplines of the biological and computational sciences, including integrative and comparative biology, neuroscience, neuroethology, artificial intelligence, decision science, computational neuroscience, computational theory, computer engineering, operating systems, and networking and communications. The meeting goals are (1) to discuss principles, architectures, and unifying abstractions that may underlie intelligent systems, (2) to understand how findings elucidated by biologists can assist computer scientists and engineers in the design of robust and flexible computing systems, and (3) to investigate how results and insights developed by computer scientists can help biologists to understand the neurophysiological foundations of learning and intelligent behavior. The program will incorporate presentations, panel discussions, breakout group sessions, and side trips in a relaxed and enjoyable atmosphere. The program will begin at the University of Washington in Seattle, from Tuesday, August 18th through Thursday, August 20. Invitees will then be flown by seaplane to Friday Harbor Laboratories on San Juan Island, where laboratory sessions involving diverse organisms will augment the discussions. The program will conclude Saturday, August 22.\r\n
Background on the MSR\/UW Summer Institute<\/h2>\r\nThe MSR\/UW Summer Institute series was created by the Computer Science and Engineering Department of the University of Washington and Microsoft Research with the goal of bringing leading researchers to Seattle for summer programs to collaborate on key topics in computer science. The series began in 1997 with the successful MSR\/UW Summer Institute in Datamining.\r\n\r\nThe 1998 MSR\/UW Summer Institute on Intelligent Systems: Biological and Computational Perspectives has its roots in recent dialog between invertebrate biologists and computer scientists. In August 1997, Dennis Willows invited Eric Horvitz to Friday Harbor Laboratories to give a talk on the relationships between computational models for decision making under uncertainty and biological systems. The seminar stimulated additional interest, ongoing discussions, and thoughts about organizing another meeting. Chris Diorio arrived at the University of Washington in the Fall of 1997 and joined Dennis and Eric in discussions about the feasibility of organizing a meeting. With the assistance of Ed Lazowska, Chairman of the UW Computer Science and Engineering Department, and Dan Ling, Director of Research at the Redmond Campus of Microsoft Research, a decision was made to focus the 1998 MSR\/UW Summer Institute on key questions about intelligent systems.\r\n
Organizers<\/h2>\r\nChris Diorio, University of Washington\r\nEric Horvitz<\/a>, Microsoft Research\r\nDennis Willows, University of Washington"},{"id":1,"name":"Abstracts","content":"[accordion]\r\n\r\n[panel header=\"Sensing and acting under limited cognitive resources: A decision-theoretic perspective on intelligent systems\"]\r\n\r\nEric Horvitz<\/a>\r\n\r\nSuccessful decision-making systems immersed in complex, competitive environments must grapple with uncertainty, and with potentially high-stakes, time-critical challenges. I will discuss decision-theoretic representations and inference strategies for sensing and action under uncertainty. After reviewing basic principles of decision-theoretic reasoning, I will present research on methods for deliberating about ideal actions under limited computational resources. The theoretical complexity of computing ideal actions suggests that resource limitations are likely prevalent in real-world decision making. I will describe key issues that arise in situations of limited or varying cognitive resources, including the importance of relying on compiled reflexes in lieu of deliberative processes, and of employing flexible problem-solving procedures. Flexible procedures allow systems to trade off the accuracy of decision making with allocated cognitive resources, and to exhibit a graceful degradation in performance with increasing resource limitations, rather than dramatic failures. Adding flexible procedures can be shown to increase the expected value of a system\u2019s behavior. I will describe the importance of control systems for guiding flexible procedures and touch on the overall goal of understanding principles of bounded optimality---ideal performance in an environment conditioned on architectural or resource constraints at hand. Finally, I will highlight opportunities for applying key concepts on decision making under limited resources to better understand biological and computational decision-making systems. My interest in establishing deeper collaborations with neurobiologists on these concepts lies behind my enthusiasm for organizing this Summer Institute.
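The trade-off described above between decision quality and allocated computation can be sketched in a few lines of code. The sketch below is illustrative rather than taken from the talk: the refine step, the marginal-gain proxy, and the cost_per_step parameter are hypothetical stand-ins for a real anytime inference procedure and for a control policy that stops deliberating once further refinement no longer appears worth its cost.

```python
import random

def refine(estimate):
    """Hypothetical anytime refinement step: each call moves the running
    estimate toward an unknown true value, with diminishing returns."""
    true_value = 10.0
    return estimate + 0.5 * (true_value - estimate) + random.gauss(0.0, 0.05)

def flexible_decision(cost_per_step=0.4, max_steps=50):
    """Keep refining while the observed marginal improvement (a crude proxy
    for the expected value of further computation) exceeds the cost of one
    more step of deliberation; otherwise commit to the current estimate."""
    estimate = 0.0
    for step in range(1, max_steps + 1):
        new_estimate = refine(estimate)
        marginal_gain = abs(new_estimate - estimate)
        estimate = new_estimate
        if marginal_gain < cost_per_step:
            return estimate, step
    return estimate, max_steps

decision, steps_used = flexible_decision()
print(f"acted after {steps_used} refinement steps with estimate {decision:.2f}")
```

Raising cost_per_step in this toy model corresponds to tighter resource limits: the controller acts earlier and accepts a coarser decision, degrading gracefully rather than failing outright.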
\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Information rates and coding efficiencies in sensory neurons: An information-theoretic perspective on intelligent systems\"]\r\n\r\nFred Rieke\r\n\r\nSpiking neurons encode continuous, time-varying sensory input signals in trains of discrete action potentials or spikes. Understanding how sensory signals are represented in spike trains is a fundamental problem in neuroscience. Relatively simple algorithms allow estimation of the sensory signal from the spike train, providing an estimate of the rate at which the spike train provides information about the input signal. In several systems this information rate is close to upper limits set by the entropy of the spike train itself; thus neural coding is remarkably precise when viewed on an absolute scale. This precision suggests that theoretical models for spike generation based on \u2018optimal design\u2019 may be relevant to real neurons.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Complexity and robustness: A control and dynamical systems perspective on intelligent systems\"]\r\n\r\nJohn Doyle\r\n[\/panel]\r\n\r\n[panel header=\"Modeling intelligence: A computational neuroscience perspective on intelligent systems\"]\r\n\r\nSue Becker\r\n\r\nMany neuroscientists take a bottom-up approach to modelling intelligence, using synaptic or cellular level biophysical data to drive the model-building process. I argue the virtues of a more top-down approach, using simplified neuronal models but more powerful cost functions for describing the systems-level behavior of collections of neurons. Where these two approaches meet, anatomical and physiological data constrain top-down models, and we begin to understand how intelligent systems can emerge from biological machinery.\r\n\r\nBecker, S., Meeds, E. and Chin, A. (submitted) The hippocampus as a cascaded autoencoder network.\r\n\r\nBecker, S. (to appear), Implicit learning in 3D object recognition: The importance of temporal context, Neural Computation.\r\n\r\nBecker, S. (1996), Mutual information maximization: Models of cortical self-organization. Network: Computation in Neural Systems, Vol. 7, pp. 7-31.\r\n\r\nBecker, S. (1995) Unsupervised learning with global objective functions. In The Handbook of Brain Theory and Neural Networks, M. Arbib (ed), MIT Press.\r\n\r\nBecker, S. and Hinton, G. E. (1992), A self-organizing neural network that discovers surfaces in random-dot stereograms, Nature, Vol. 355, pp. 161-163.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Cognitive Computation: A Cognitive Science Perspective\"]\r\n\r\nJames Anderson\r\n\r\nCognitive computation displays an intriguing blend of memory, discrete processes, and continuous processes. I will present:\r\n\r\n(1) A brief introduction to the interplay between discrete and continuous computation in the work of McCulloch and Pitts;\r\n\r\n(2) A worked-out example of how humans might do simple arithmetic when they don't use logic, along with some thoughts about how one could control the direction of such a computation; and\r\n\r\n(3) A sketch of a scalable, intermediate-level neural network system called the \"Network of Networks\", formed from a large array of attractor networks, that combines discrete and continuous operations in a hybrid computational architecture.\r\n\r\nJ.A. Anderson, 1995, Introduction to Neural Networks. Cambridge, MA: MIT Press.\r\n\r\nJ.A. Anderson, 1998, Seven times seven is about 50. Chapter 7, pp. 255-300, in \"An Invitation to Cognitive Science (2nd Ed.), Volume 4: Methods, Models and Conceptual Issues,\" ed. D. Scarborough and S. Sternberg. Cambridge, MA: MIT Press.\r\n\r\nJ.A. Anderson and J.P. Sutton, 1997, If we compute faster, do we understand better? Behavior Research Methods, Instruments, and Computers, v. 29, pp. 67-77.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Computers and Common Sense\"]\r\n\r\nMarvin Minsky\r\n\r\nI'll talk about why computers don't have common sense yet, why AI researchers don't work on this, and what we should do about it. In my view, the trouble is that each type of representation has serious limitations, so we'll have to learn how to use and switch between several different ones. One way would be to use the concept of 'paranomes' described in the final chapters of \"The Society of Mind.\"\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Control and stability: Animal movement and integrative systems\"]\r\n\r\nTom Daniel\r\n\r\n[\/panel]\r\n\r\n[panel header=\"The control of stability in fruit flies\"]\r\n\r\nMichael Dickinson\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Biomechanical intelligence: Synthesizing a humanoid robot arm\"]\r\n\r\nBlake Hannaford\r\n\r\nOur work is focused on understanding the lowest-level domains of motor control through design and experimentation with robotic models of the human arm. 
We aim to create a robotic arm that emulates the human arm in its biomechanics and dynamics, and in its control at the level of spinal reflexes, because we believe that intelligent manipulation behavior must be built upon a substrate that can effectively modulate posture and biomechanical impedance according to task objectives. To test these ideas we are building a human arm replica, aiming for dynamic as well as kinematic accuracy, and including a real-time simulation of neural circuits in the human spinal cord segment. This talk will review our arm design, some of our postural control simulations, and future directions.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Scope of the simplest communication systems\"]\r\n\r\nPeter A.V. Anderson\r\n\r\nThe earliest known nervous systems are those found in members of the phylum Cnidaria, the jellyfish, anemones, and corals. These nervous systems are very simple, structurally, yet display many of the physiological properties found in higher nervous systems and are capable of controlling some remarkably elaborate behavior, including a limited degree of plasticity. The first part of this presentation will focus on the organization and capabilities of these early nervous systems.\r\n\r\nCommunication by way of nervous systems is not, however, the only means of achieving rapid and widespread communication. Many organisms, ranging from lower invertebrates through chordates, possess epithelial conduction, the production and propagation of action potentials between epithelial cells. This is a very efficient and economical means of coordinating widespread effectors (i.e., muscles and secretory and bioluminescent tissues), coordinating the activities of individuals in colonies, and providing animals with important sensory information. The latter part of this presentation will describe this phenomenon and its functional capabilities.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"It's informative where people look: Statistics of scenes at the center of gaze\"]\r\n\r\nPamela Reinagel\r\n\r\nEarly stages of visual processing may take advantage of the characteristic low-level statistics of natural image ensembles, in order to encode them efficiently. Animals actively select visual stimuli by orienting their eyes. In humans, eye positions determine which parts of a scene will fall on the fovea and thus be sampled at high resolution. We recorded eye positions of human subjects while they viewed images of natural scenes. We found that active selection changes the local statistics of the ensemble the fovea encounters. Specifically, the effective visual stimulus has higher contrast and lower spatial correlations, resulting in a higher estimated signal entropy. Thus eye movements serve to increase the information available for visual processing.\r\n\r\nCollaborator: Anthony Zador\r\n\r\n[\/panel]\r\n\r\n[panel header=\"The importance of learning what to ignore\"]\r\n\r\nLes Atlas\r\n\r\nIn spite of years of research effort, many pattern recognition tasks are still much more difficult for computers than for humans. Perhaps the greatest skill humans have is that of generalization, where experience from one domain maps onto another. Our working hypothesis is that instead of using complex computational schemes for recognition, humans generalize by learning those signal representations and transformations which allow us to ignore most of the irrelevant detail of the representation. 
These learned transformations are then used for other similar tasks and thus form the basis for generalization.\r\n\r\nFrom a signal processing perspective, our point can be illustrated by using past observations to learn optimal smoothing functions of the usual representations in time and frequency. We will demonstrate this approach in several acoustic signal processing applications: sonar transient identification, condition monitoring of helicopter gearboxes, and reduced representations of distinctive features in speech. As a side effect of these results, we observed that the information content in these acoustic signals was often manifest as a low-bandwidth modulator. This observation matches some recent results in mammalian auditory cortical recordings and suggests a new signal processing approach, variational frequency, which we will define and illustrate with audio examples.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Learning from atypical examples: Support Vector Machines and neurobiology\"]\r\n\r\nJohn Platt, Microsoft Research\r\n\r\nClassical statistical machine learning suggests that learning a category is accomplished by learning a typical member of that category and the distribution of typical variations of the category. Since 1979, Vapnik and his colleagues have shown just the opposite: learning a category can be done by learning those positive examples that are most unlike the category and those negative examples that are most like the category. The method that accomplishes this sort of learning is called a Support Vector Machine (SVM). An SVM learns a yes\/no function that determines whether an example is in a category or not.\r\n\r\nRecently, I have invented an algorithm for learning SVMs (related to certain numerical analysis algorithms from the 1960s). This new algorithm has various interesting neuromimetic properties: resistance to input noise, synaptic weights that have a predetermined sign and a maximum strength, and, most importantly, one-shot learning and allocation of new memories. The algorithm may have interesting implications for learning of concepts in real neurobiology.
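The idea that a category is learned from its boundary cases can be illustrated with a minimal sketch. The example below uses scikit-learn's generic support vector classifier on invented two-dimensional data rather than the new algorithm described in this abstract; the cluster locations, the C parameter, and the test point are arbitrary choices made for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-dimensional data: two noisy clusters standing in for
# positive and negative examples of a category.
rng = np.random.default_rng(0)
pos = rng.normal(loc=[2.0, 2.0], scale=0.6, size=(40, 2))
neg = rng.normal(loc=[-2.0, -2.0], scale=0.6, size=(40, 2))
X = np.vstack([pos, neg])
y = np.array([1] * 40 + [0] * 40)

# A linear SVM learns a yes/no decision function for the category.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# Only the examples closest to the opposite class -- the "atypical"
# boundary cases -- are kept as support vectors; the rest of the
# training set does not affect the learned decision function.
print("training examples:", len(X))
print("support vectors:  ", len(clf.support_vectors_))
print("prediction for a new point:", clf.predict([[1.5, 1.0]])[0])
```

Only the points retained in clf.support_vectors_ determine the decision boundary; removing the remaining, more typical training examples would leave the learned classifier unchanged, which is the sense in which the category is defined by its atypical examples.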
\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Neuromodulators: How to get many behaviors from a small neural network\"]\r\n\r\nKatherine Graubard\r\n\r\nThe stomatogastric nervous system may not be able to pass a can-he-talk-and-chew-gum-at-the-same-time intelligence test, but it can switch between the crab's versions of chewing and swallowing motor patterns. The 1x2 mm stomatogastric ganglion contains the thirty-neuron circuit that runs two motor patterns and participates in at least three others. The activation of the motor patterns, the flexibility of each pattern, and the mode switches between patterns all require neuromodulator input from sensory neurons and from neurons in other parts of the nervous system. The big four neuromodulators in humans -- norepinephrine, dopamine, serotonin, and acetylcholine -- help switch us between such modes as waking, deep sleep, and dreaming sleep. In the stomatogastric ganglion, the modulator inputs act by changing the excitability of individual neurons and by changing the strength of the connections (both chemical synapses and electrical coupling) between selected parts of the circuit. Each modulator input has its own unique set of effects and sculpts a new circuit by selective modification of the baseline anatomical network. Modulator inputs do not normally work in isolation, and the effects of an input on a neuron can change depending on what other inputs are or have been active. The system is non-linear and has complex history-dependent effects. Somehow the system is flexible, stable, and capable of mode-switching (and crabs do eat and grow). Our challenge is to extract the essential features that allow this to be accomplished.\r\n\r\nHarris-Warrick, R.M., F. Nagy, and M.P. Nusbaum (1992) Neuromodulation of stomatogastric networks by identified neurons and transmitters. In: Dynamic Biological Networks: The Stomatogastric Nervous System, R.M. Harris-Warrick, E. Marder, A.I. Selverston, and M. Moulins, eds. Cambridge, MA: MIT Press, pp. 87-138.\r\n\r\nMarder, E., and Selverston, A.I. (1992) Modeling the Stomatogastric Nervous System. In: Dynamic Biological Networks: The Stomatogastric Nervous System, R.M. Harris-Warrick, E. Marder, A.I. Selverston, and M. Moulins, eds. Cambridge, MA: MIT Press, pp. 161-196.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Oscillatory dynamics and olfactory information processing\"]\r\n\r\nAlan Gelperin\r\n\r\nOlfactory systems are characterized by oscillatory dynamics of local field potential in neural centers receiving direct input from olfactory receptors. The computational role of the oscillatory dynamics is unknown. The central olfactory processing network of the terrestrial mollusc Limax maximus provides a convenient system for experiments to explore the role of oscillatory dynamics using electrical and optical recording and simulations of network dynamics. The oscillatory circuit in the central olfactory processing region is also involved in odor learning, which is highly developed in Limax. A coupled oscillator model of the Limax odor processing circuit will be discussed and its relation to current experimental data will be shown. The Limax olfactory processor also propagates waves of excitation, and the spatial dynamics of the activity waves are altered by odor stimulation. New modeling work suggests how the spatial and temporal dynamics of the Limax odor processing structure contribute to odor discrimination and odor memory storage. The relation of dynamics in the molluscan model system to the mammalian olfactory system will also be addressed.\r\n\r\nErmentrout, B., Flores, J., and Gelperin, A. (1998) Minimal model of oscillations and waves in the Limax olfactory lobe with tests of the model's predictive power. J. Neurophysiol. 79:2677-2689.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Computation in the circuitry for tail flip escape reactions of crayfish\"]\r\n\r\nFrank Krasne\r\n\r\nMy presentation will be in three parts:\r\n\r\n1. I will discuss the overall organization of tail flip escape behavior circuitry, emphasizing that it is mediated by two parallel systems. One operates very rapidly, produces very stereotyped responses, and is organized in a \"localist\" fashion. The other is slow but very flexible and seems to have a more \"parallel-distributed\" sort of organization. I will discuss why two systems operate in parallel to produce rather similar behavioral responses.\r\n\r\n2. I will discuss how crayfish decide which of the two systems to use. This decision is based on abruptness of stimulus onset. Abruptness is detected by the system of rectifying electrical synapses that converge on the decision\/command neurons of the fast system. 
Similar junctions would be ideal for other processing tasks that require synchrony detection, such as current models of time-sharing circuits in visual perception, where the binding together of the representations of features that belong to a single object among many is based on synchrony of firing.\r\n\r\n3. Escape by the fast system is modulated by an inhibitory system that descends from higher centers. This inhibition is targeted to distal dendrites of the decision\/command neurons for escape rather than to more \"standard\" proximal sites near the spike initiation region of the neuron, where spike initiation is more conveniently controlled. I will describe results from simple two-compartment models of distal and proximal inhibitory synapses which show that (1) distal inhibition is preferable when it is desirable for excitation provided by environmental threats to compete with modulatory inhibition on a relatively equal footing, with excitation and inhibition each being able to overcome the other by sufficient increases of excitatory or inhibitory drive, and (2) proximal inhibition is preferable when it is desirable that inhibition should prevent the decision\/command neuron from firing no matter how strong the excitatory drive. Many neural models require that excitation and inhibition should compete on a relatively equal footing; for these, distal inhibition seems preferable.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Neurons, peptides and geomagnetic orientation: An intelligent system in a murky, marine environment\"]\r\n\r\nA.O. Dennis Willows\r\n\r\nBriefly, we are interested in the problem of geomagnetic orientation, why marine molluscs do it in the field, and how they do it in electrophysiological and biochemical\/molecular terms. We are motivated by our earlier findings (1, 2), which show that intact animals in the lab have a robust capability to orient to the earth's magnetic field. (As you may realize, no one yet knows the physiological basis for geomagnetic field detection in any organism, except bacteria.)\r\n\r\nIn the field, using SCUBA and individual animals (labeled and tracked), we try to answer the ecological question, \"Why do they do it?\" An answer is beginning to emerge, and suggests that they use their geomagnetic sense to move shoreward when they are disoriented. This makes sense because their food (and therefore mates) are arranged patchily in a band along the shoreline. To move in any other direction puts the animal at risk of losing contact with food\/mates because it is a big ocean out there offshore. The shoreline presents a safe boundary against which to navigate.\r\n\r\nIn electrophysiological terms, we seek specific neurons in the brain that respond when earth-strength fields around the animal change directionally. We have found two pairs of re-identifiable neurons that fire impulses when the horizontal component of the geomagnetic field is changed (2). They are also sensitive to local water currents (3, 5), which serve as another directional cue about shoreline orientation, since tidal currents occur reliably parallel to the shoreline. Recent experiments by my student Razvan Popescu show that these same neurons still respond to changing geomagnetic fields when their peripheral axons are intact, and all other sources of input and output to\/from the brain are cut off. 
This raises the suspicion (and exciting prospect) that we may be very close to the site of the transducer-detector\u2014it must be in those uncut nerves or the foot of the animal to which they are attached.\r\n\r\nIn biochemical terms, we have dissected those identified neurons mentioned above (and cited in 2, and 3) extracted and purified their contents and discovered that they make and use a previously undescribed trio of peptides (4). The peptides have now been sequenced, synthesized and used to make antibodies, suitable for immunohistochemical studies. We have found that they are released from the peripheral endings of these neurons near the ciliated cells of the foot epithelium, where they apparently promote ciliary beating (6), which in turn promotes locomotion---that is how these molluscs crawl---on their cilia, beating in the film of mucus. Maybe, these neurons are responsible for both the sensory and the motor components of the behavioral responses mentioned above. How could this be???? A wonderful and difficult question!! But the answer holds the key to the geomagnetic transducer, which is the prime motivator of all our work at the moment. I hope to find the solution to this question over the next 2-3 years or so, and am very excited by the prospect.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Evolution on the time scale of thought and action\"]\r\n\r\nWilliam Calvin\r\n\r\nTo have meaningful, connected experiences \u2014 ones that we can comprehend and reason about \u2014 we must be able to discern patterns to our actions, perceptions, and conceptions. There's a hunger for discovering hidden patterns. Most of the tasks of consciousness are coping with the novel, finding suitable patterns amid confusion or creating new choices. We know the Darwinian reputation for achieving quality on the timescale of species (millennia) and antibodies (weeks). The brain may also be able to use a Darwinian process to repeatedly improve the raw material beyond that seen in our nighttime dreams, with their jumble of people, places, and occasions that don't fit together very well (\"incoherent\"). We can create coherence, i.e., quality, with enough generations of variation and selection. But we need to achieve coherence quickly, on the time scale of conversational replies.\r\n\r\nThe newer parts of our cerebral cortex have the neural circuitry necessary for pattern copying, with variants competing for a workspace much as bluegrass and crabgrass might compete for a back yard; the multifaceted environment that makes one pattern outreproduce the other is a memorized one. This offers mechanisms for implementing both convergent and divergent thinking, even the structured types we associate with syntax.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"The neuronal dendrite: A complex biological system\"]\r\n\r\nNelson Spruston\r\n\r\nMost neurons in the central nervous system have elaborate, branching dendritic trees that form the input structures for tens of thousands of synapses arriving from other neurons. The simplest view of the neuron is that excitatory and inhibitory potentials are summed, and funneled passively toward the cell body, where an action potential is initiated in the axon if a threshold membrane potential is reached. Recent efforts to characterize the properties of neuronal dendrites have made it clear that this is a vast oversimplification of dendritic function. 
Many voltage-activated channels are present in dendritic membranes, thus endowing the neuron with complex, nonlinear integrative properties. Research in my laboratory has focused on improving our understanding of the active properties of dendrites. Using simultaneous patch-clamp recordings from the somata and dendrites of hippocampal CA1 pyramidal neurons, we are able to explore how synaptic potentials and action potentials propagate within the dendritic tree. We have also studied the properties of voltage-activated sodium channels in CA1 dendrites in an attempt to understand how the properties of these channels influence dendritic integration. I will present recent results showing that although the axon has the lowest threshold for action potential initiation in CA1 neurons, spikes are generated in the dendrites in response to synchronous synaptic activation. These dendritic spikes, however, do not propagate reliably to the soma and axon, suggesting that they may serve a function different from all-or-none signalling via axonal action potentials. These results suggest that the active properties of neuronal dendrites may facilitate complex dendritic computation.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Independent component analysis of population codes in Tritonia: Spike sorting and cell classification\"]\r\n\r\nGlen Brown\r\n\r\nExperimental studies of population codes in the nervous system usually involve the collection and analysis of action potential trains from many neurons recorded simultaneously. Using a voltage-sensitive dye and a photodiode array, we have recorded action potential activity from the swimming neural network of the sea slug Tritonia--up to several hundred neurons at a time. Often, each photodiode detector recorded multiple neurons, and each neuron appeared on multiple detectors. A new technique in statistical signal processing, independent component analysis (ICA), has several applications in large multivariate data sets of this type. ICA was first used to sort action potentials from different neurons into separate channels. In the same step, artifacts were removed from the data and noise was reduced. Action potential trains were then converted into spike frequency plots and smoothed. A second ICA step revealed groups of neurons within the population. These groups corresponded to neuron types that have been classified previously by other methods, indicating that ICA can also be used as an effective method for dimensionality reduction of population data.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Computational learning: A Bayesian perspective\"]\r\n\r\nChristopher M. Bishop\r\n\r\nProbability theory provides a consistent framework for the quantification of uncertainty. Learning can be viewed as a reduction in uncertainty resulting from the acquisition of new data, and can be formalised through Bayes' theorem. In this talk I will give a basic introduction to the Bayesian view of learning, and will illustrate the key ideas using a simple regression example. I will also highlight the distinction between the discriminative and generative paradigms as a central issue in both computational and biological learning.
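As a concrete companion to the simple regression example mentioned above, the sketch below applies Bayes' theorem in the conjugate Gaussian case: a Gaussian prior over the two weights of a straight-line model is combined with Gaussian-noise observations to give a closed-form posterior. The prior precision alpha, the noise precision beta, and the synthetic data are illustrative choices, not values from the talk.

```python
import numpy as np

# Bayesian linear regression for y = w0 + w1*x + noise.
# Prior: w ~ N(0, alpha^-1 I).  Observation noise precision: beta.
alpha, beta = 2.0, 25.0

rng = np.random.default_rng(1)
true_w = np.array([-0.3, 0.5])
x = rng.uniform(-1.0, 1.0, size=20)
t = true_w[0] + true_w[1] * x + rng.normal(0.0, 1.0 / np.sqrt(beta), size=20)

Phi = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]

# Posterior from Bayes' theorem (conjugate Gaussian case):
#   S_N^-1 = alpha*I + beta * Phi^T Phi
#   m_N    = beta * S_N * Phi^T t
S_N = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
m_N = beta * S_N @ Phi.T @ t

print("posterior mean of weights:", np.round(m_N, 3))
print("posterior std of weights: ", np.round(np.sqrt(np.diag(S_N)), 3))
```

Rerunning the update with more observations shrinks the posterior standard deviations, which is the sense in which learning here is a reduction in uncertainty.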
\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Dissecting the distributed memory for sensitization in the marine mollusk Aplysia\"]\r\n\r\nBill Frost\r\n\r\nWhen surprised by an unexpected aversive stimulus, the marine mollusc Aplysia californica becomes sensitized for about an hour. During this period, the animal modifies its neural circuitry to enhance the amplitude and duration of its defensive gill and siphon withdrawal reflex. This presentation will first review data showing that the \"memory\" for this example of non-associative learning is encoded by a distributed set of nervous system modifications. The construction of a data-based computational model of the siphon-elicited siphon withdrawal circuit will then be reviewed, followed by the use of this realistic simulation to evaluate the \"information content\" of each component of the distributed memory for sensitization. This process involved placing physiologically recorded, learning-related synaptic modifications into the simulation, generating their effects on the firing responses of the simulated motor neurons, and then using these to drive actual siphon motor neurons and siphon contractions. This approach allowed us to dissect the contribution of each physiologically recorded circuit modification to the alterations in reflex amplitude and duration observed in the animal during sensitization.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Building Animals\"]\r\n\r\nRhanor Gillette\r\n\r\nThe behavior of all motile animals is organized along similar lines. A useful view of the complex behavior of higher animals, including humans, is that most complexity arises from increasing the number of sub-routines running under a few basic drives; these drives are shared with the simpler consciousness of sea-slugs. Evidence suggests that animals with even the simplest nervous systems organize their behavior hedonically to make informed cost-benefit decisions in foraging and reproductive behavior. Observations on behavior and neural organization from Octopus and the predatory snail Pleurobranchaea will be presented to derive a simple and generalizable neural model for such decision-making in foraging behavior. The model integrates sensation, experience, and internal state in terms of documented neuromodulatory gain-control of the known neural circuitry via specific ion currents, and appears amenable to computational\/robotic simulation.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Information processing using populations of neurons\"]\r\n\r\nJohn Lewis\r\n\r\nIn many organisms, correlating neuronal activity with sensory input and behavioral output has revealed that information is encoded using populations of neurons. We investigate a population coding algorithm and its neural implementation in a simple reflexive behavior of the medicinal leech. This reflex is elicited by a light touch to the tubular body and results in a body-bend away from the touch stimulus. The bend is achieved by the contraction and relaxation of longitudinal muscles on the same and opposite sides of the body as the touch, respectively. The underlying neuronal network solves the problem of encoding touch location and then producing the appropriately directed bend. Because of the relatively small size of this network, we are able to monitor and manipulate its complete set of sensory inputs. We show that a \"population vector\" formed by the spike counts of the active sensory neurons contains sufficient information to account for the behavior. This population vector is also well-correlated with bend direction. Finally, the connectivity among the identified neurons in the network is well-suited for reading out this neural population vector.\r\n\r\nLewis & Kristan (1998), Nature 391:76-79.
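A population-vector readout of the kind described above can be sketched directly from spike counts. The preferred touch directions and the counts below are invented placeholders rather than measured leech receptive fields; the point is only the computation itself: sum unit vectors along each cell's preferred direction, weighted by how strongly that cell fired.

```python
import numpy as np

# Hypothetical preferred touch angles (degrees around the body wall)
# for four pressure-sensitive cells, and their spike counts for one touch.
preferred_deg = np.array([0.0, 90.0, 180.0, 270.0])
spike_counts = np.array([7, 3, 0, 1])

# Population vector: sum of unit vectors along each cell's preferred
# direction, weighted by that cell's spike count.
angles = np.deg2rad(preferred_deg)
pop_vec = np.array([
    np.sum(spike_counts * np.cos(angles)),
    np.sum(spike_counts * np.sin(angles)),
])

estimated_deg = np.rad2deg(np.arctan2(pop_vec[1], pop_vec[0])) % 360
print("population vector:", np.round(pop_vec, 2))
print("estimated touch direction: %.1f degrees" % estimated_deg)
```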
\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Engineering artificial neuronal systems\"]\r\n\r\nChris Diorio\r\n\r\nDigital computers are adept at numerical computation, but, in metrics like adaptation and intelligence, have proved woefully inadequate by comparison with animal brains. Many computer scientists (myself included) retain the expectation that we can learn how to program intelligence into our digital machines. The future will ultimately judge this expectation. But I believe that there is another question that we should ask: \u201cDoes a machine\u2019s structure, and the representation that it uses, predispose the machine to certain computations?\u201d Put another way, will neuronal circuits always surpass digital ones at adaptation and learning? The principles underlying digital computation (Boolean algebra and low bit-error probabilities) are not the ones that neurobiology uses for its computations. Can we build artificial computing machines that use principles like local adaptation, rather than switching, to compute? And will these machines enable intelligence more naturally than do digital machines? In my talk, I will present recent work on building electronic (silicon) circuits modeled after neurobiology, and I will explore the possibilities and opportunities of artificial neuronlike computation.\r\n\r\n[\/panel]\r\n\r\n[\/accordion]"}],"msr_startdate":"1998-08-18","msr_enddate":"1998-08-23","msr_event_time":"","msr_location":"Seattle and San Juan Island, Washington, USA","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"August 18, 1998","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":null,"event_excerpt":"The 1998 MSR\/UW Summer Institute will explore conceptual relationships and synergies between biological and computational perspectives on intelligent systems. 
The Institute will bring together scientists investigating intelligent systems from a variety of subdisciplines of the biological and computational sciences, including integrative and comparative biology, neuroscience, neuroethology, artificial intelligence, decision science, computational neuroscience, computational theory, computer engineering, operating systems, and networking and communications.","msr_research_lab":[],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/498899"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":8,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/498899\/revisions"}],"predecessor-version":[{"id":501782,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/498899\/revisions\/501782"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=498899"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=498899"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=498899"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=498899"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=498899"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=498899"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=498899"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=498899"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=498899"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}