KAUST Center of Generative AI; Swiss AI Lab, IDSIA
Title: Falling Walls, WWW, Modern AI, and the Future of the Universe
Abstract:
Around 1990, the Berlin Wall came down, the WWW was born at CERN, mobile phones became popular, self-driving cars appeared in traffic, and modern AI based on very deep artificial neural networks emerged, including the principles behind the G, P, and T in ChatGPT. I place these events in the history of the universe since the Big Bang, and discuss what's next: not just AI behind the screen in the virtual world, but real AI for real robots in the real world, connected through a WWW of machines. Intelligent (but not necessarily super-intelligent) robots that can learn to operate the tools and machines operated by humans can also build (and repair when needed) more of their own kind. This will culminate in life-like, self-replicating and self-improving machine civilisations, which represent the ultimate form of upscaling, and will shape the long-term future of the entire cosmos. The wonderful short-term side effect is that our AI will continue to make people's lives longer, healthier and easier.
Bio:
The New York Times headlined: "When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'." In 1990-91, he laid the foundations of "Generative AI" by introducing the principles of Generative Adversarial Networks, unnormalised linear Transformers (see the T in ChatGPT), and self-supervised Pre-Training (see the P in ChatGPT). His lab also produced LSTM, the most cited AI of the 20th century, and the Highway Net (a variant of which is the most cited AI of the 21st century). He also pioneered artificial curiosity and meta-learning machines that learn to learn (since 1987). His formal theory of creativity & curiosity & fun (2006-2010) explains art, science, music, and humor. He also generalized algorithmic information theory and the many-worlds theory of physics (1997-2000). Elon Musk tweeted: "Schmidhuber invented everything." His AI is on over 3 billion smartphones, and used many billions of times per day.
School of Computer Science, Peking University
Title: Peng Cheng Cloud Brain and the Mind Series of Large Models
Abstract:
As a revolutionary pre-trained model, ChatGPT has already had a huge impact on the global economy. A strong foundation of computing power is what enables large models to continuously improve as they digest massive data, leading to breakthrough innovations. Based on the Peng Cheng Cloud Brain II E-level intelligent computing platform, Peng Cheng Laboratory is training the PCL Mind series of large models. Mind is the first fully autonomous, controllable, safe, and open-source pre-trained foundation model in China; the performance of its 200-billion-parameter base model reaches the internationally advanced level, and its output conforms to Chinese core values. Peng Cheng Laboratory is opening up PCL Mind for cooperation, working with external partners to continuously build a large-model open-source consortium for the domestic large-model ecosystem. The next generation, Peng Cheng Cloud Brain III, will break through key technologies such as high-compute chips, large-scale networking and communication, high-performance software stacks, and large-scale parallel training, and will support 10,000-chip parallel training of trillion-parameter AI models.
Bio:
Wen Gao is a member of the Chinese Academy of Engineering, an ACM Fellow, and an IEEE Fellow. He is the founding director of Pengcheng Laboratory (Shenzhen, China). He is also a Boya Chair Professor and the director of the Faculty of Information and Engineering Sciences at Peking University. He is currently a deputy to the 14th National People's Congress. He previously served as a member of the 10th, 11th, and 12th CPPCC National Committee, vice president of the National Natural Science Foundation of China, chairman of the China Computer Federation, and chief editor of the Chinese Journal of Computers.
He earned seven State Awards in Science and Technology Achievements as the first accomplisher. He received the IEEE Innovation in Societal Infrastructure Award (2025), National May 1 Labor Medal (2023), Wu Wenjun AI Highest Achievement Award (2023), the Special Prize on Scientific and Technological Progress of the Guangdong Province Science and Technology Award (2023), the Prize for Scientific and Technological Progress of Ho Leung Ho Lee Foundation (2022), the Outstanding Contribution Award of Guangdong Province (2021), the CCF Wang Xuan Award, and the title of "2005 China's Top Ten Educational Talents".
Wen Gao has engaged in research on artificial intelligence, multimedia, computer vision, pattern recognition, image processing, and virtual reality. He has published six books and over 300 papers in international journals in these areas.
Computer Science, Stanford University
Title: From Retrieval to Reasoning: Advancing AI Agents for Knowledge Discovery and Collaboration
Abstract:
The web is the world's largest knowledge repository, yet as AI systems become increasingly integrated into our digital infrastructure, the ability to retrieve, reason, and collaborate effectively has become paramount. Large Language Models (LLMs) are evolving from passive responders to active knowledge agents that can retrieve complex information, validate hypotheses, and optimize interactions over multiple turns. In this talk, I will explore the frontiers of AI-driven knowledge retrieval and reasoning, drawing from recent research on knowledge graphs, semi-structured retrieval, adaptive tool use, and multi-turn AI collaboration. I will also discuss how agentic frameworks enable rigorous, automated hypothesis validation through sequential falsifications. Together, these advancements push beyond traditional search and QA systems, unlocking new capabilities for knowledge discovery, scientific research, and human-AI collaboration. Finally, I will highlight key challenges and opportunities in building AI systems that are not just accurate, but also interactive, explainable, and aligned with human needs.
Bio:
Jure Leskovec is Professor of Computer Science at Stanford University. He is affiliated with the Stanford AI Lab, the Machine Learning Group, and the Center for Research on Foundation Models. In the past, he served as Chief Scientist at Pinterest and was an investigator at Chan Zuckerberg BioHub. Most recently, he co-founded the machine learning startup Kumo.AI. Leskovec pioneered the field of Graph Neural Networks and created PyG, the most widely used graph neural network library. Research from his group has been used by many countries to fight the COVID-19 pandemic, and has been incorporated into products at Facebook, Pinterest, Uber, YouTube, Amazon, and more. His research has received several awards, including the Microsoft Research Faculty Fellowship in 2011, the Okawa Research Award in 2012, an Alfred P. Sloan Fellowship in 2012, the Lagrange Prize in 2015, the ICDM Research Contributions Award in 2019, and the ACM SIGKDD Innovation Award in 2023. His research contributions have spanned social networks, data mining and machine learning, and computational biomedicine with a focus on drug discovery. His work has won 12 best paper awards and 5 ten-year test-of-time awards at premier venues in these research areas. Leskovec received his bachelor's degree in computer science from the University of Ljubljana, Slovenia, his PhD in machine learning from Carnegie Mellon University, and postdoctoral training at Cornell University.
University of Southern California
Title: The AI Revolution in Time Series: Challenges and Opportunities
Abstract:
Recent advancements in deep learning and artificial intelligence have driven significant progress in time series modeling and analysis. On one hand, researchers seek breakthroughs in performance on classical tasks such as forecasting, anomaly detection, classification, etc. On the other hand, it is intriguing to explore the potential for answering more complex inference and reasoning tasks from time series. In this keynote, I will examine the pathways toward foundation models for time series and discuss future research directions in this rapidly evolving field.
The remarkable success of foundation models in natural language processing - exemplified by Generative Pre-trained Transformers (GPT) - suggests their potential to revolutionize time series analysis. I will introduce our recent efforts along this direction, including TEMPO, a novel framework designed to learn effective time series representations by leveraging two key inductive biases: explicit decomposition of trend, seasonal, and residual components, and prompt-based distribution adaptation for diverse time series types.
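The decomposition bias mentioned above can be illustrated with a classical additive trend/seasonal/residual split. This is a simplified, self-contained sketch using a centered moving average, not TEMPO's actual implementation:

```python
import numpy as np

def decompose(series, period):
    """Split a 1-D series into trend, seasonal, and residual parts
    using a simple moving-average additive decomposition."""
    n = len(series)
    # Trend: moving average over one full period smooths out the season.
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    detrended = series - trend
    # Seasonal: average the detrended values at each phase of the period,
    # then tile the resulting one-period pattern over the full length.
    pattern = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(pattern, n // period + 1)[:n]
    # Residual: whatever trend and season do not explain.
    residual = series - trend - seasonal
    return trend, seasonal, residual

# Synthetic monthly-style series: linear trend + yearly season + noise.
t = np.arange(240)
rng = np.random.default_rng(0)
series = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, 240)

trend, seasonal, residual = decompose(series, period=12)
# The three components sum back to the original series by construction.
assert np.allclose(trend + seasonal + residual, series)
```

Feeding each component through its own representation pathway (rather than the raw series alone) is the kind of inductive bias the abstract refers to.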
Beyond representation learning, practical applications demand advanced reasoning capabilities for multi-step time series inference tasks, requiring both compositional reasoning and computational precision. To tackle this challenge, I will discuss TS-reasoner, a program-aided inference agent that integrates large language models (LLMs) with structured execution pipelines, in-context learning, and self-correction mechanisms. I will also present a new benchmark dataset and evaluation framework to systematically assess multi-step time series reasoning.
By bridging deep learning advances with structured reasoning, I will highlight the next frontier in time series research: developing foundation models that enhance forecasting, generation, and reasoning capabilities from time series across diverse applications.
Bio:
Yan Liu is a Professor in the Computer Science Department and the Director of the Machine Learning Center at the University of Southern California. She received her Ph.D. degree from Carnegie Mellon University. Her research interest is machine learning for time series and its applications to geoscience, health care, and sustainability. She has received several awards, including the NSF CAREER Award, the Okawa Foundation Research Award, New Voices of the Academies of Science, Engineering, and Medicine, and the Best Paper Award at the SIAM Data Mining Conference. She has served as general co-chair for KDD 2020 and ICLR 2023, program co-chair for WSDM 2018, SDM 2020, KDD 2022, and ICLR 2022, and associate editor-in-chief for IEEE Transactions on Pattern Analysis and Machine Intelligence.