Responsible AI Engineering - Building Safe and Responsible Web-based LLM Applications
Organiser: Liming Zhu
Date: Day 4, Thursday, 1 May 2025
Rapid advances in AI, particularly the release of LLMs and their Web-based applications, have attracted significant global interest and raised substantial concerns about responsible AI and AI safety. While LLMs are impressive AI models in their own right, it is compound AI systems, which integrate these models with other key components for functionality and quality/risk control, that are ultimately deployed and have real-world impact.
Web-based LLM applications, including autonomous LLM agents, require careful system-level engineering to ensure responsible AI and AI safety. Most such agents rely heavily on Web browsers as tools, retrieving Web data and writing to Web applications (e.g., Computer Use models such as Operator), which introduces significant safety and security issues and engineering challenges. Responsible AI and AI safety requirements need to be measurable, verifiable, and monitorable to address evolving risks in Web-based LLM applications. In addition, robust engineering and evaluation methods, alongside dedicated tools, are essential for systematically implementing responsible AI and AI safety throughout the entire engineering lifecycle of LLM applications.
This one-hour special session will provide a forum for researchers and practitioners to exchange insights on methods, best practices, and case studies for engineering safe and responsible Web-based AI systems.
Program:
- “Responsible AI: Best Practices for Creating Trustworthy AI Systems”: Qinghua Lu, Liming Zhu, Jon Whittle, Xiwei Xu
- “Engineering AI Systems: Architecture and DevOps Essentials”: Len Bass, Qinghua Lu, Ingo Weber, Liming Zhu