Advisory Board
Dr. He Ruimin is Singapore's Chief Artificial Intelligence (AI) Officer and the Government of Singapore's Deputy Chief Digital Technology Officer. As Chief AI Officer, he leads a multi-stakeholder effort to achieve Singapore's strategic AI objectives, including developing and implementing Singapore's national AI strategy. His previous government roles include serving as a Lieutenant-Colonel in the Singapore Navy, where he commanded naval vessels, and management positions in the Singaporean Ministry of Trade and Industry. Ruimin is also a member of the United Nations High-level Advisory Body on AI. He was previously Chief Adviser to the CEO of Grab, where he oversaw Grab's economics, analytics, growth, and safety functions. He has also personally developed multiple revenue-generating software applications, taught at universities including the Lee Kuan Yew School of Public Policy and Nazarbayev University, and published papers in journals such as the American Economic Review. Ruimin holds a BS in Electrical Science and Engineering and a PhD in Economics, both from the Massachusetts Institute of Technology.
Kristian Lum is a Research Scientist at Google DeepMind, a Visiting Scientist and former faculty member at the University of Chicago Data Science Institute, a co-founder of ACM FAccT, and formerly a researcher at Twitter. She holds a PhD in Statistics from Duke University and has worked extensively on algorithmic fairness and bias. In the criminal justice domain, she has studied bias in risk assessment models and policing, taking statistical approaches to measuring and operationalizing algorithmic bias.
Patrick Hall is the principal scientist at HallResearch.ai. He is also an assistant professor of decision sciences at the George Washington University School of Business, where he teaches classes in data ethics, business analytics, and machine learning. Patrick conducts research in support of NIST's AI Risk Management Framework, works with leading fair lending and AI risk management advisory firms, and serves on the board of directors of the AI Incident Database. Prior to co-founding HallResearch.ai, Patrick was a partner at BNH.AI, where he pioneered the emergent discipline of auditing and red-teaming generative AI systems; he also led H2O.ai's responsible AI efforts, resulting in one of the world's first commercial applications for explainability and bias mitigation in machine learning. Patrick started his career in global customer-facing and R&D roles at SAS Institute. He studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University. He has been invited to speak on AI and machine learning topics at the National Academies of Sciences, Engineering, and Medicine, the Association for Computing Machinery SIGKDD ("KDD"), and the American Statistical Association Joint Statistical Meetings. His writing has appeared in outlets such as Information, Frontiers in AI, McKinsey.com, O'Reilly Media, and Thomson Reuters Regulatory Intelligence, and his technical work has been profiled in Fortune, WIRED, InfoWorld, TechCrunch, and others. Patrick is the lead author of the book Machine Learning for High-Risk Applications. With affiliations across private industry, civil society, academia, and government, Patrick brings an unusually broad perspective to AI and matters of risk. He has built machine learning software solutions and advised on AI risk for Fortune 100 companies, cutting-edge startups, Big Law, and US and foreign government agencies.
Ramayya Krishnan is the W. W. Cooper and Ruth F. Cooper Professor of Management Science and Information Systems at the H. John Heinz III College and the Department of Engineering and Public Policy at Carnegie Mellon University. A faculty member at CMU since 1988, Krishnan was appointed Dean of the Heinz College in 2009. He was educated at the Indian Institute of Technology and the University of Texas at Austin, and holds a Bachelor's degree in mechanical engineering, a Master's degree in industrial engineering and operations research, and a PhD in Management Science and Information Systems. Krishnan's research focuses on consumer and social behavior in digitally instrumented environments. His work has addressed technical, policy, and business problems that arise in these contexts, and he has published extensively on these topics. He has founded multiple research centers at CMU and is the founding faculty director of the Block Center for Technology and Society. He advises governments and policy-making organizations on technology policy and the deployment of data-driven policy making. He is an advisor to the President of the Asian Development Bank and a member of the GeoTech Commission of the Atlantic Council. He is an AAAS Fellow (Section T), an INFORMS Fellow, an elected member of the National Academy of Public Administration, and a distinguished alumnus of both the Indian Institute of Technology and the University of Texas at Austin. In 2019 he served as the 25th President of INFORMS, the global operations research and analytics society. He was appointed to the National AI Advisory Committee, which advises the President and the National AI Initiative Office.
Rishi Bommasani is the Society Lead at the Stanford Center for Research on Foundation Models (CRFM). He is completing his PhD in Computer Science at Stanford, advised by Percy Liang and Dan Jurafsky, with affiliations in Stanford NLP, Stanford AI, and Stanford HAI; his work is supported by the NSF Graduate Research Fellowship Program (GRFP). Prior to Stanford, he began his research career at Cornell (BA Mathematics, BA Computer Science, MS Computer Science) under Claire Cardie, and he is deeply honored to have learned from the late Professor Arzoo Katiyar. He researches the societal impact of AI, especially foundation models. His research has been featured in The Atlantic, Axios, Bloomberg, Euractiv, Fast Company, Financial Times, Fortune, The Information, MIT Technology Review, The New York Times, Politico, Quanta, Rappler, Reuters, Tech Policy Press, VentureBeat, The Verge, and Vox.
Seraphina Goldfarb-Tarrant is the Head of Safety at Cohere, where she works on both the practice and the theory of evaluating and mitigating harms from LLMs. She completed her PhD on fairness in NLP at the University of Edinburgh and her MSc in natural language generation at the University of Washington. Her research interests include the intersection of fairness with robustness and generalisation, cross-lingual transfer, and causal analysis. Previously, she worked at Google for five years in Tokyo, NYC, and Shanghai, and has been a Research Engineer for DARPA and the Gates Foundation. She also spent two years as a sailor in the North Sea.
Dr. Eric Horvitz is Microsoft's Chief Scientific Officer, spearheading company-wide initiatives and navigating opportunities at the confluence of scientific frontiers, technology, and society, including strategic efforts in AI, medicine, and the biosciences. His research centers on challenges with machine learning, reasoning, and action amidst the complexities of the open world, and on mechanisms that support human-AI interaction and complementarity. His efforts and collaborations have led to the fielding of AI technologies in healthcare, transportation, aerospace, and computing applications. Beyond his scientific work, Dr. Horvitz pursues programs, organizations, and studies on the ethics, values, and safety of AI applications and their influence. He has been inducted into the National Academy of Engineering, the American Academy of Arts and Sciences, the American Philosophical Society, and the CHI Academy. He is also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the American Association for the Advancement of Science (AAAS), the Association for Computing Machinery (ACM), and the American College of Medical Informatics (ACMI).
David Haber is the Founder and CEO of Lakera AI, a start-up dedicated to securing AI models. With over a decade of experience building ML products, David and his team provide development teams with tools to prioritize safety, security, and ethics in AI models. Previously, David was Head of Machine Learning at Daedalean and Lead Engineer at Soma Analytics. He holds a degree from Imperial College London.