Keynote Speakers


Cynthia Breazeal

MIT Media Lab

Social Robots: Reflections and Predictions on the Past, Present and Future of the Human-Robot Relationship

Abstract: I will offer a perspective and reflections on the field of Social Robotics: its origins, its evolution and achievements, and its importance for the future. Since its inception in the late 1990s, Social Robotics has introduced and advanced the socio-emotional and interpersonal dimensions of how people interact with autonomous robots. Kismet, widely regarded as the first social robot, explored the dynamic interplay of computationally modeled socio-emotive-cognitive processes with real-time human social behavior to engage, communicate, and collaborate with people. Since then, the field of Social Robotics has grown into a vibrant global community that has continued to advance three key areas and their interplay: (1) the computational science of endowing autonomous robots with greater social and emotional skills and intelligence, (2) the interaction design of social robots for a wide range of tasks and contexts where rapport is important, and (3) the psychological science of understanding how people experience and interact with social robots in increasingly sophisticated ways over longer periods of time. Twenty years since its origins, we are witnessing social robots begin to enter consumer and industrial markets in manufacturing, healthcare, education, aging, entertainment, mobility, and more. And the field continues to inform and influence the design of a menagerie of other personified AI technologies across a wide range of uses. The future of Social Robotics promises to be exciting as advances in AI, robotics, and cloud computing platforms are enabling the community to deploy multitudes of social robots over longer periods of time to more deeply understand what it means for all kinds of people to live and collaborate with social robots as part of daily life – beyond short-term interaction to consider the future of the human-robot relationship. This promise also raises important issues, challenges, and opportunities for how to design personified AI technologies in an ethical and responsible way to promote social good.

Cynthia Breazeal will give this talk on the occasion of receiving the IFAAMAS Influential Paper Award for her paper on emotion and sociable humanoid robots.

Bio: Cynthia Breazeal is Professor of Media Arts and Sciences at MIT, where she founded and directs the Personal Robots group at the Media Lab. She is known as a pioneer in the area of social robotics and human-robot interaction. Her work balances technical innovation in AI with UX design and the psychology of engagement to design personified AI technologies that promote human flourishing and personal growth. She received her doctorate in 2000 from MIT. Previous honours include the National Academy of Engineering’s Gilbreth Lecture Award and an AAAI Fellowship. She also founded the consumer social robotics company Jibo, where she served as Chief Scientist and Chief Experience Officer.


Noam Brown

Facebook AI Research

AI for Poker and Beyond: Equilibrium Finding in Adversarial Imperfect-Information Games

Abstract: The field of artificial intelligence has had a number of high-profile successes in the domain of perfect-information games like chess or Go, where all participants know the exact state of the world. But real-world strategic interactions typically involve hidden information, such as in negotiations, cybersecurity, and financial markets. Past AI techniques fall apart in these settings, with poker serving as the classic benchmark and challenge problem. In this talk I will cover the key breakthroughs behind Libratus and Pluribus, the first AI agents to defeat elite human professionals in two-player no-limit poker and multiplayer no-limit poker, respectively. In particular, I will discuss new forms of the counterfactual regret minimization equilibrium-finding algorithm and breakthroughs that enabled depth-limited search for imperfect-information games to be conducted orders of magnitude faster than previous algorithms. Finally, I will conclude with a discussion on recent work combining the previously separate threads of research on perfect-information and imperfect-information games.

Noam Brown will give his talk on the occasion of receiving the Victor Lesser Distinguished Dissertation Award.

Bio: Noam Brown is a Research Scientist at Facebook AI Research working on multi-agent artificial intelligence, with a particular focus on imperfect-information games. He co-created Libratus and Pluribus, the first AIs to defeat top humans in two-player no-limit poker and multiplayer no-limit poker, respectively. He has received the Marvin Minsky Medal for Outstanding Achievements in AI, was named one of MIT Tech Review’s 35 Innovators Under 35, and his work on Pluribus was named by Science to be one of the top 10 scientific breakthroughs of the year. Noam received his PhD from Carnegie Mellon University, where he received the School of Computer Science Distinguished Dissertation Award.


Vincent Conitzer

Duke University and University of Oxford

New Design Decisions for Modern AI Agents

Abstract: Consider an intelligent virtual assistant such as Siri, or perhaps a more capable future version of it.  Should we think of all of Siri as one big agent? Or is there a separate agent on every phone, each with its own objectives and/or beliefs? And what should those objectives and beliefs be? Such questions reveal that the traditional, somewhat anthropomorphic model of an agent – with clear boundaries, centralized belief formation and decision making, and a clear given objective – falls short for thinking about today’s AI systems. We need better methods for specifying the objectives that these agents should pursue in the real world, especially when their actions have ethical implications. I will discuss some methods that we have been developing for this purpose, drawing on techniques from preference elicitation and computational social choice. But we need to specify more than objectives. When agents are distributed, systematically forget what they knew before (say, for privacy reasons), and potentially face copies of themselves, it is no longer obvious what the correct way is even to do probabilistic reasoning, let alone to make optimal decisions. I will explain why this is so and discuss our work on doing these things well. (No previous background required.)

Vincent Conitzer will give this talk on the occasion of receiving the ACM/SIGAI Autonomous Agents Research Award for his work on multiagent systems spanning interdisciplinary areas in game theory, social choice, and economics.

Bio: Vincent Conitzer is the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University, as well as Head of Technical AI Engagement at the Institute for Ethics in AI and Professor of Computer Science and Philosophy at the University of Oxford. Much of his work has focused on AI and game theory. More recently, he has started to work on AI and ethics: how should we determine the objectives that AI systems pursue, when these objectives have complex effects on various stakeholders? He received his doctorate from Carnegie Mellon University in 2006. Previous honours include the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, and the inaugural Victor Lesser Distinguished Dissertation Award.


Orna Kupferman

Hebrew University of Jerusalem

Alternating-time Temporal Logic

Abstract: Classical temporal logic enables the specification of on-going behaviors of reactive systems. The systems are modeled by labeled graphs, and path quantification enables the specification to refer to some or all computations of a system. This setting is well suited for specifying and reasoning about systems composed of a single agent. We introduce a computational framework for formally reasoning about multi-agent systems. The framework consists of three ingredients: a temporal-logic syntax (ATL) that explicitly talks about agents, a semantics (alternating transition systems) that captures the capabilities of individual agents, and an algorithm that extends classical model checking (enumerative and symbolic). The talk describes the development of the framework and its applications in formal verification, control, and AI. It also surveys extensions and generalizations, in particular the growing connection between formal methods for multi-agent systems and algorithmic game theory.

Orna Kupferman will give this talk on the occasion of receiving the IFAAMAS Influential Paper Award for her joint paper with Rajeev Alur and Thomas A. Henzinger on alternating-time temporal logic.

Bio: Orna Kupferman is Professor of Computer Science at the Hebrew University of Jerusalem. Her research covers the theoretical foundations of formal verification and synthesis of computer systems, including automata, temporal logics, game theory, and quantitative analysis.


Shimon Whiteson

University of Oxford and Waymo UK

The Killer App for Learning from Demonstration

Abstract: Learning from demonstration (LfD) is a paradigm for synthesising control policies for autonomous agents from data.  Rather than learning to maximise a reward signal, agents learn to imitate behaviour demonstrated by experts.  In this talk, I show how LfD can solve tricky control problems such as those arising in social robotics by avoiding the error-prone process of manually engineering a reward function.  In addition, I argue that for LfD to scale to real-world applications, it must leverage demonstrations that were occurring anyway and observed with already deployed sensors.  Consequently, I argue that the killer app of LfD is learning realistic behaviour rather than optimal behaviour.  As an example, I describe how we are using LfD at Waymo to synthesise realistic behaviour models for the cyclists, pedestrians, and human drivers that a self-driving car encounters on the road, thereby improving the simulation tools that are crucial to solving the grand challenge of autonomous driving.

Bio: Shimon Whiteson is a Professor of Computer Science at the University of Oxford and the Head of Research at Waymo UK. His research focuses on deep reinforcement learning and learning from demonstration, with applications in robotics and video games. He completed his doctorate at the University of Texas at Austin in 2007. He spent eight years as an Assistant and then an Associate Professor at the University of Amsterdam before joining Oxford as an Associate Professor in 2015. He was awarded a Starting Grant from the European Research Council in 2014, a Google Faculty Research Award in 2017, and a JPMorgan Faculty Award in 2019.