SPEAKERS AND MODERATORS
David McAllester
Professor, Toyota Technological Institute at Chicago
David McAllester is a Professor of Computer Science at the Toyota Technological Institute at Chicago (TTIC), where his research areas include machine learning, the theory of programming languages, automated reasoning, AI planning, computer game playing (computer chess), computational linguistics, and computer vision. A 1991 paper on AI planning proved to be one of the most influential papers of the decade in that area, and a 1993 paper on computer game algorithms influenced the design of the algorithms used in the Deep Blue system that defeated Garry Kasparov. He served on the faculty of Cornell University for the 1987-1988 academic year, served on the faculty of MIT from 1988 to 1995, and was a member of technical staff at AT&T Labs-Research from 1995 to 2002. He has been a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) since 1997. From 2002 to 2017 he was Chief Academic Officer at TTIC, where he is currently a Professor. He has received three "test of time" awards: for a paper on systematic nonlinear planning at the AAAI conference, a paper on interval methods for constraint solving at the International Conference on Logic Programming, and a paper on the deformable part model in computer vision at the Conference on Computer Vision and Pattern Recognition (CVPR). He received his B.S., M.S., and Ph.D. degrees from the Massachusetts Institute of Technology in 1978, 1979, and 1987, respectively.
Melanie Mitchell
Davis Professor, Santa Fe Institute
Melanie Mitchell is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux).
Melanie originated the Santa Fe Institute's Complexity Explorer platform, which offers online courses and other educational resources related to the field of complex systems. Her online course “Introduction to Complexity” has been taken by over 25,000 students and is one of Class Central’s “top fifty online courses of all time”.
Veronika Rockova
Professor of Econometrics and Statistics, University of Chicago
Veronika Rockova's research brings together statistics and machine learning to develop tools for learning from large datasets, particularly at the intersection of Bayesian and frequentist statistics, including: variable selection, uncertainty quantification, Bayesian nonparametrics, factor and dynamic models, and high-dimensional decision theory and inference. Her research was recognized by the prestigious CAREER Award for early-career faculty by the National Science Foundation in 2020, and she is on the Editorial Board of the Annals of Statistics.
Stuart Russell
Professor of Computer Science and Smith-Zadeh Professor in Engineering, University of California, Berkeley and Honorary Fellow, Wadham College, Oxford
Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He is a recipient of the IJCAI Computers and Thought Award and held the Chaire Blaise Pascal in Paris. In 2021 he received the OBE from Her Majesty Queen Elizabeth and gave the Reith Lectures. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, and a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI, used in 1500 universities in 135 countries. His research covers a wide range of topics in artificial intelligence, with a current emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.
Rebecca Willett
Professor of Statistics and Computer Science & Director of AI at the Data Science Institute, University of Chicago
Rebecca Willett’s work in machine learning and signal processing reflects broad and interdisciplinary expertise and perspectives. She is known internationally for her contributions to the mathematical foundations of machine learning, large-scale data science, and computational imaging.
In particular, Rebecca studies methods to learn and leverage hidden structure in large-scale datasets; representing data in terms of these structures allows ML methods to produce more accurate predictions when data contain missing entries, are subject to constrained sensing or communication resources, correspond to rare events, or reflect indirect measurements of complex physical phenomena. These challenges are pervasive in science and technology data, and Rebecca's work in this space has had important implications in national security, medical imaging, materials science, astronomy, climate science, and several other fields. Her group has made contributions both in the mathematical foundations of signal processing and machine learning and in their application to a variety of real-world problems.
Alexandra Chouldechova
Estella Loomis McCandless Assistant Professor of Statistics and Public Policy at Heinz College, Carnegie Mellon University
Alexandra Chouldechova is the Estella Loomis McCandless Assistant Professor of Statistics and Public Policy at Carnegie Mellon University's Heinz College of Information Systems and Public Policy. Her research investigates questions of algorithmic fairness and accountability in data-driven decision-making systems, with a domain focus on criminal justice and human services. Her work has been supported through funding from organizations including the Hillman Foundation, the MacArthur Foundation, and the NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon. She is a member of the executive committee for the ACM Conference on Fairness, Accountability and Transparency (FAccT), and previously served as a Program Committee co-Chair for the conference.
Dr. Chouldechova is a 2020 Research Fellow with the Partnership on AI, where she is working on understanding factors that drive racial bias in algorithmic risk assessment tools being developed for use in pre-trial, parole and sentencing contexts. She is also a member of the Pittsburgh Task Force on Public Algorithms.
Iason Gabriel
Staff Research Scientist, DeepMind
Iason Gabriel is a Senior Research Scientist at DeepMind where he works in the Ethics Research Team. His research focuses on the applied ethics of artificial intelligence, human rights, and the question of how to align technology with human values. Before joining DeepMind, Iason was a Fellow in Politics at St John’s College, Oxford. He holds a doctorate in Political Theory from the University of Oxford and spent a number of years working for the United Nations in post-conflict environments.
Melanie Jeske
Postdoctoral Fellow, Institute on the Formation of Knowledge, University of Chicago
Melanie Jeske is a postdoctoral fellow at the Institute on the Formation of Knowledge at the University of Chicago. Situated at the intersection of sociology of medicine and science and technology studies, her research explores social, political and ethical dimensions of knowledge systems, emergent biotechnologies, and expertise. Her work across these areas has been published in journals including Science, Technology & Human Values, Social Science & Medicine, BioSocieties, PLOS ONE, and Engaging Science, Technology, and Society. She obtained her PhD in Sociology at the University of California, San Francisco (UCSF). She also holds a master of science degree in Science, Technology, and Society from Drexel University.
Karrie Karahalios
Professor in the Department of Computer Science, University of Illinois at Urbana-Champaign
Karrie Karahalios is noted for her work on the impact of computer science on people and society, analyses of social media, and algorithm auditing. She is co-founder of the Center for People and Infrastructures at the University of Illinois at Urbana-Champaign.
Andre Uhl
Postdoctoral Researcher at the Rank of Instructor, Institute on the Formation of Knowledge, University of Chicago
Andre Uhl is a scholar of critical AI studies whose work draws new connections between media arts and sciences and tech policy and activism. Before joining the Institute on the Formation of Knowledge, he earned his PhD in Art, Film, and Visual Studies with a Secondary Field in Science, Technology, and Society from Harvard University.
Sarah Brayne
Associate Professor of Sociology, University of Texas at Austin
Sarah Brayne's first book, Predict and Surveil: Data, Discretion, and the Future of Policing (Oxford University Press), draws on ethnographic research with a large, urban police department to understand how law enforcement uses predictive analytics and new surveillance technologies to allocate resources, identify suspects, and conduct investigations. She demonstrates how the adoption of big data analytics transforms organizational practices and how the police themselves respond to these new data-driven strategies. In previous research, she developed a theory of "system avoidance," using survey data to test the relationship between criminal legal contact and involvement in medical, financial, labor market, and educational institutions. Brayne's research has appeared in the American Sociological Review, Social Problems, Law and Social Inquiry, the Annual Review of Law and Social Science, and the Annual Review of Criminology, and has received awards from the American Sociological Association, the Law and Society Association, and the American Society of Criminology.
Brayne is the founder and director of the Texas Prison Education Initiative, a group of faculty and students who volunteer to teach college classes in prisons in Texas. She has been teaching college classes in prisons since 2012. Prior to joining the faculty at UT-Austin, Brayne was a Postdoctoral Researcher at Microsoft Research. She received her Ph.D. in Sociology and Social Policy from Princeton University.
Ishanu Chattopadhyay
Assistant Professor, University of Chicago
Ishanu Chattopadhyay’s research focuses on the theory of unsupervised machine learning and the interplay of stochastic processes and formal language theory in exploring the mathematical underpinnings of the question of inferring causality from data. His most visible contributions include the algorithms for data smashing, inverse Gillespie inference, and nonparametric nonlinear and zero-knowledge implementations of Granger causal analysis that have crucial implications for biomedical informatics, data-enabled discovery in biomedicine, and personalized precision health care. His current work focuses on analyzing massive clinical databases of disparate variables to distill patterns indicative of hitherto unknown etiologies, dependencies, and relationships, potentially addressing the daunting computational challenge of scale and making way for ab initio and de novo modeling in an age of ubiquitous data. Chattopadhyay received an MS and PhD in mechanical engineering, as well as an MA in mathematics, from the Pennsylvania State University. He completed his postdoctoral training and served as a research associate in the Department of Mechanical Engineering at Penn State. He also held a postdoctoral fellowship simultaneously at the Department of Computer Science and the Sibley School of Mechanical and Aerospace Engineering at Cornell University.