Adnan Masood, PhD

UST Global

Adnan Masood, PhD, is a software architect, machine learning researcher, author, speaker, and Microsoft MVP for Artificial Intelligence. He works as Chief Architect of AI and Machine Learning at UST Global, a fast-paced digital company providing advanced computing and digital innovation services including, but not limited to, advanced analytics, BI, information management, IoT, mobility, cloud, infrastructure management, legacy modernization, and cybersecurity. Before UST, Dr. Masood worked as a software architect at Green Dot Corporation, a leading prepaid financial technology institution. In a past life he served as principal engineer for an e-commerce start-up and as a solutions architect for a leading British nonprofit organization. A strong believer in the development community, Adnan is an active member of the Open Web Application Security Project (OWASP), an organization dedicated to software security. In the .NET community, he is a cofounder and president of the Pasadena .NET Developers Group and a co-organizer of the Tampa Bay Data Science Group and the Irvine Programmer meetup. A certified ScrumMaster, Dr. Masood also holds certifications in big data, machine learning, and systems architecture from the Massachusetts Institute of Technology; an application security certification from Stanford University; and an SOA Smarts certification from Carnegie Mellon University. He is a Microsoft Certified Solutions Developer and a Sun Certified Java Developer. Dr. Masood teaches a data science course at Park University and has taught Windows Communication Foundation (WCF) courses at the University of California, San Diego. He is a regular speaker at various academic and technology conferences (IEEE-HST, IASA, and DevConnections), local code camps, and user groups. He is also a volunteer STEM FLL robotics coach for middle school students.

Model Interpretability and Transparency with Automated Machine Learning

Cloud/IoT (Room 302)
04:00 PM - 04:50 PM

In the AI and machine learning literature, explainability and interpretability are often used interchangeably. Interpretability is the extent to which a cause and effect can be observed within a system, while explainability is the degree to which a machine learning or deep learning model's decisions can be explained in subject-matter-expert terms. As domains like finance, healthcare, and law look to deploy AI and deep learning models, fairness, accountability, and transparency become especially important. If we are unable to provide decision rationale through interpretability and explainability as part of our models, we will seriously limit the potential impact of AI-based systems. In this talk I will explore how automated machine learning (AutoML) can be used to explain a model, and review techniques including LIME, ELI5, and SHAP for model explainability and interpretability.
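To give a flavor of the model-agnostic idea behind tools like LIME and SHAP, here is a minimal permutation-importance sketch in plain Python. The "model" and data are hypothetical stand-ins, not the talk's actual examples, and the real libraries provide far richer, theoretically grounded explanations; the point is only that we can probe a black box by perturbing its inputs and watching the output degrade:

```python
import random

# Toy "black-box" model (hypothetical stand-in for a trained model):
# the prediction depends heavily on x0, weakly on x1, and not at all on x2.
def model(row):
    x0, x1, x2 = row
    return 5.0 * x0 + 0.5 * x1

random.seed(0)
X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [model(row) for row in X]  # labels generated by the model itself

def mse(data, targets):
    """Mean squared error of the model on a dataset."""
    return sum((model(r) - t) ** 2 for r, t in zip(data, targets)) / len(targets)

def permutation_importance(data, targets, feature):
    """Shuffle one feature column and measure how much the error grows.

    A feature the model ignores can be shuffled with no effect, so its
    importance is ~0; a feature the model relies on hurts error a lot.
    """
    shuffled = [row[:] for row in data]
    column = [row[feature] for row in shuffled]
    random.shuffle(column)
    for row, value in zip(shuffled, column):
        row[feature] = value
    return mse(shuffled, targets) - mse(data, targets)

importances = [permutation_importance(X, y, f) for f in range(3)]
# x0 dominates, x1 matters slightly, x2 is irrelevant, mirroring the
# model's true behavior; this is the intuition LIME/SHAP build on.
```

In practice one would reach for the libraries themselves (e.g. SHAP's game-theoretic Shapley values or LIME's local surrogate models) rather than raw permutation importance, but the input-perturbation intuition carries over.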