
Invited Speakers

ICML is proud to announce its four invited talks and distinguished speakers.

Embracing Uncertainty: Applied Machine Learning Comes of Age

Christopher Bishop, Microsoft Research Cambridge

Abstract: Over the last decade the number of deployed applications of machine learning has grown rapidly, with examples in domains ranging from recommendation systems and web search, to spam filters and voice recognition. Most recently, the Kinect 3D full-body motion sensor, which relies crucially on machine learning, has become the fastest-selling consumer electronics device in history. Developments such as the advent of widespread internet connectivity, with its centralisation of data storage, as well as new algorithms for computationally efficient probabilistic inference, will create many new opportunities for machine learning over the coming years. The talk will be illustrated with tutorial examples, live demonstrations, and real-world case studies.

Biography: Chris Bishop is a Distinguished Scientist at Microsoft Research Cambridge, where he leads the Machine Learning and Perception group. He is also Professor of Computer Science at the University of Edinburgh, and Vice President of the Royal Institution of Great Britain. He is a Fellow of the Royal Academy of Engineering, a Fellow of the Royal Society of Edinburgh, and a Fellow of Darwin College, Cambridge. His research interests include probabilistic approaches to machine learning, as well as their practical application. Chris is the author of the leading textbook "Neural Networks for Pattern Recognition" (Oxford University Press, 1995), which has over 15,000 citations and helped to bring statistical concepts into the mainstream of the machine learning field. His latest textbook, "Pattern Recognition and Machine Learning" (Springer, 2006), has over 4,000 citations and has been widely adopted. In 2008 he presented the 180th series of the annual Royal Institution Christmas Lectures, titled "Hi-tech Trek: the Quest for the Ultimate Computer", to a television audience of close to 5 million.

Machine Learning in Google Goggles

Hartmut Neven, Google

Abstract: Google Goggles is a visual recognition service whose ambition is that it will eventually be able to recognize any object. Machine learning is used pervasively to achieve these recognition abilities. This talk discusses three examples in which we performed large-scale experiments using learning methods that have recently elicited considerable interest:
1. For large-scale object recognition, we compared two classes of approaches to feature matching: one that employs feature-space quantization and one that employs full representations.
2. To learn features suitable for optical character recognition in photos, we contrasted numerous methods, including multiple instance learning and deep belief networks.
3. Modern approaches to machine learning tend to formulate the training of a classifier as an optimization problem in which a regularized empirical risk is minimized. For computational efficiency, a convex objective is typically constructed. Here we report on experiments in training with non-convex loss functions using discrete optimization, in a formulation adapted to take advantage of emerging quantum hardware (a minimal sketch of one such formulation follows this list).
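
For readers unfamiliar with this framing, the Python sketch below shows one way a discrete, non-convex training problem of the kind mentioned in item 3 could be posed. It is an illustrative assumption, not the speaker's actual system: binary weights over fixed weak classifiers are chosen by minimizing a quadratic training loss plus a simple regularizer, which expands into a QUBO (quadratic unconstrained binary optimization) objective, the problem class targeted by quantum annealers; a brute-force search stands in for the hardware, and all parameter values are invented for the example.

# Hedged sketch (not the speaker's system): cast training with a non-convex,
# discrete objective as a QUBO over binary weights w in {0,1}^N that select
# among fixed weak classifiers.
import numpy as np

def build_qubo(H, y, lam):
    """H: (num_samples, num_weak) weak-classifier outputs in {-1,+1};
    y: labels in {-1,+1}; lam: regularization strength.
    Returns Q such that the objective is w^T Q w + const for binary w."""
    S, N = H.shape
    # Loss: sum_s ((1/N) * sum_i w_i H[s,i] - y_s)^2  +  lam * sum_i w_i
    Q = (H.T @ H) / (N * N)                      # quadratic coupling terms
    linear = -2.0 * (H.T @ y) / N + lam          # linear terms (w_i^2 = w_i for binary w)
    Q[np.diag_indices(N)] += linear              # fold linear terms onto the diagonal
    return Q

def brute_force_minimize(Q):
    """Stand-in for a discrete/quantum optimizer: exhaustive search, small N only."""
    N = Q.shape[0]
    best_w, best_val = None, np.inf
    for bits in range(1 << N):
        w = np.array([(bits >> i) & 1 for i in range(N)], dtype=float)
        val = w @ Q @ w
        if val < best_val:
            best_w, best_val = w, val
    return best_w

# Toy usage: 6 weak classifiers evaluated on 20 random samples.
rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=20)
H = rng.choice([-1.0, 1.0], size=(20, 6))
w = brute_force_minimize(build_qubo(H, y, lam=0.05))
print("selected weak classifiers:", w)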

Biography: Hartmut Neven (born 1964 in Aachen, Germany) is a scientist working in computational neurobiology, robotics and computer vision. He is best known for his work in face and object recognition. In 1996 he received his Ph.D. from the Institute for Neuroinformatics at the Ruhr University in Bochum, Germany, for a thesis on "Dynamics for vision-guided autonomous mobile robots" written under the tutelage of Christoph von der Malsburg. Neven was assistant professor of computer science at the University of Southern California at the Laboratory for Biological and Computational Vision. Later he returned as the head of the Laboratory for Human-Machine Interfaces at USC's Information Sciences Institute. Neven co-founded two companies: Eyematic, for which he served as CTO, and Neven Vision, which he initially led as CEO. At Eyematic he developed real-time facial feature analysis for avatar animation. Neven Vision pioneered mobile visual search for camera phones and was acquired by Google in 2006. Today he manages a team responsible for advancing Google's visual search technologies and is the engineering manager for Google Goggles.

Evolutionary dynamics of competition and cooperation

Martin Nowak, Harvard University

Abstract: Cooperation implies that one individual pays a cost for another to receive a benefit. Cost and benefit are measured in terms of reproductive success. Cooperation is useful for construction in evolution: genomes, cells, multi-cellular organisms, animal and human societies are consequences of cooperation. Cooperation can be at variance with natural selection. Why should you help competitors? I present five mechanisms for the evolution of cooperation: kin selection, direct reciprocity, indirect reciprocity, spatial selection and group selection. Direct reciprocity means there are repeated interactions between the same two individuals and my behavior towards you depends on what you have done to me. Indirect reciprocity means there are repeated interactions within a group and my behavior towards you also depends on what you have done to others. I argue that indirect reciprocity is the key mechanism for understanding pro-social behavior among humans and has provided the right selection pressure for the evolution of social intelligence and human language.
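
The cost-benefit framing of cooperation can be made concrete with a small simulation. The Python sketch below is an illustration, not material from the talk: it plays a repeated donation game in which cooperation costs the donor c and gives the partner b, and compares a reciprocal strategy (tit-for-tat, standing in for direct reciprocity) against unconditional defection; the payoff values and number of rounds are arbitrary choices for the example.

# Hedged illustration (not from the talk): direct reciprocity in a repeated
# donation game. Cooperating costs the donor c and gives the recipient b.
# A reciprocator (tit-for-tat) copies its partner's previous move; a defector
# never cooperates.
b, c = 3.0, 1.0   # benefit to recipient, cost to donor (b > c)

def play(strategy_a, strategy_b, rounds=20):
    """Return total payoffs of two strategies in a repeated donation game."""
    payoff_a = payoff_b = 0.0
    last_a = last_b = "C"          # both start by assuming cooperation
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        if move_a == "C":          # a pays c, b receives b
            payoff_a -= c
            payoff_b += b
        if move_b == "C":          # b pays c, a receives b
            payoff_b -= c
            payoff_a += b
        last_a, last_b = move_a, move_b
    return payoff_a, payoff_b

tit_for_tat = lambda partner_last: partner_last   # copy the partner's last move
always_defect = lambda partner_last: "D"

print("TFT vs TFT:", play(tit_for_tat, tit_for_tat))        # mutual cooperation: (b - c) per round each
print("TFT vs ALLD:", play(tit_for_tat, always_defect))     # reciprocator is exploited only in the first round
print("ALLD vs ALLD:", play(always_defect, always_defect))  # no cooperation: zero payoffs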

Biography: Martin A. Nowak is Professor of Biology and of Mathematics at Harvard University and Director of Harvard's Program for Evolutionary Dynamics. Dr. Nowak works on the mathematical description of evolutionary processes, including the evolution of cooperation and human language and the dynamics of virus infections and human cancer. An Austrian by birth, he studied biochemistry and mathematics at the University of Vienna with Peter Schuster and Karl Sigmund. He received his Ph.D. sub auspiciis praesidentis in 1989. He went on to the University of Oxford as an Erwin Schrödinger Scholar and worked there with Robert May, later Lord May of Oxford, with whom he co-authored numerous articles and his first book, Virus Dynamics (OUP, 2000). Nowak was Guy Newton Junior Research Fellow at Wolfson College and later Wellcome Trust Senior Research Fellow in Biomedical Sciences and E. P. Abraham Junior Research Fellow at Keble College. Dr. Nowak became head of the mathematical biology group in Oxford in 1995 and Professor of Mathematical Biology in 1997. A year later he moved to Princeton to establish the first program in theoretical biology at the Institute for Advanced Study. He accepted his present position at Harvard University in 2003.

Building Watson - An Overview of the DeepQA Project

David Ferrucci, IBM Research

Abstract: Computer systems that can directly and accurately answer people's questions over a broad domain of human knowledge have been envisioned by scientists and writers since the advent of computers themselves. Open-domain question answering holds tremendous promise for facilitating informed decision making over vast volumes of natural language content. Applications in business intelligence, healthcare, customer support, enterprise knowledge management, social computing, science and government could all benefit from computer systems capable of deeper language understanding. The DeepQA project is aimed at exploring how advancing and integrating Natural Language Processing (NLP), Information Retrieval (IR), Machine Learning (ML), Knowledge Representation and Reasoning (KR&R) and massively parallel computation can greatly advance the science and application of automatic Question Answering. An exciting proof point in this challenge was developing a computer system that could successfully compete against top human players at the Jeopardy! quiz show. Attaining champion-level performance at Jeopardy! requires a computer to rapidly and accurately answer rich open-domain questions, and to predict its own performance on any given question. The system must deliver high degrees of precision and confidence over a very broad range of knowledge and natural language content with a 3-second response time. To do this, the DeepQA team advanced a broad array of NLP techniques to find, generate, evidence and analyze many competing hypotheses over large volumes of natural language content to build Watson (www.ibmwatson.com). An important contributor to Watson's success is its ability to automatically learn and combine accurate confidences across a wide array of algorithms and over different dimensions of evidence. Watson produced accurate confidences to know when to buzz in against its competitors and how much to bet. High precision and accurate confidence computations are critical for real business settings, where helping users focus on the right content sooner and with greater confidence can make all the difference. The need for speed and high precision demands a massively parallel computing platform capable of generating, evaluating and combining thousands of hypotheses and their associated evidence. In this talk, I will introduce the audience to the Jeopardy! Challenge, explain how Watson was built on DeepQA to ultimately defeat the two most celebrated human Jeopardy! champions of all time, and discuss applications of the Watson technology beyond Jeopardy! in areas such as healthcare.
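
The abstract's emphasis on learned, well-calibrated confidences can be illustrated with a small toy. The Python sketch below is an assumption-laden illustration, not DeepQA itself: it combines scores from a few hypothetical evidence-scoring algorithms with logistic-regression-style weights to produce a confidence for each candidate answer, then "buzzes" only when the top confidence clears a threshold. The weights, bias, scores, candidate names and threshold are all invented; in a real system such parameters would be learned from labeled question data.

# Hedged toy sketch (not DeepQA): combine per-algorithm evidence scores into a
# single probability-like confidence per candidate answer, then decide whether
# to buzz in. All numbers below are illustrative assumptions.
import math

WEIGHTS = [2.1, 1.4, 0.6]   # hypothetical learned weights over three evidence dimensions
BIAS = -2.0
BUZZ_THRESHOLD = 0.5

def confidence(evidence_scores):
    """Map a vector of evidence scores to a confidence via a logistic model."""
    z = BIAS + sum(w * s for w, s in zip(WEIGHTS, evidence_scores))
    return 1.0 / (1.0 + math.exp(-z))

def decide(candidates):
    """candidates: dict mapping answer -> evidence score vector.
    Returns (best_answer, its confidence, whether to buzz)."""
    scored = {ans: confidence(ev) for ans, ev in candidates.items()}
    best = max(scored, key=scored.get)
    return best, scored[best], scored[best] >= BUZZ_THRESHOLD

# Toy usage with made-up candidate answers and evidence scores.
candidates = {
    "Toronto": [0.2, 0.1, 0.9],
    "Chicago": [0.9, 0.8, 0.7],
}
answer, conf, buzz = decide(candidates)
print(answer, round(conf, 3), "buzz" if buzz else "pass")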

Biography: David is the principal investigator for the DeepQA project, currently focused on building Watson, the computer system capable of competing with champion players at the question-answering game of Jeopardy!. He also led the UIMA project as Chief Architect. UIMA is a framework for integrating text and multi-modal analytics for interpreting unstructured information (text, speech, images, etc.). IBM has contributed UIMA and its extension UIMA-AS to open source. UIMA is now an OASIS standard and an Apache open-source project. David has a background in AI, specifically Knowledge Representation and Reasoning. He is most interested in projects that combine NLP, Machine Learning and KR&R to develop and apply intelligent systems. His team is also working on the DARPA Machine Reading Program, extending DeepQA to perform deeper understanding of natural language content.