Testing AI Systems

Although there are several controversies and misunderstandings surrounding AI and machine learning, one thing is apparent: people have genuine concerns about the safety, reliability, and trustworthiness of these systems. Not only are ML-based systems shrouded in mystery due to their largely black-box nature, but they also tend to be unpredictable, since they can adapt and learn new things at runtime. Validating ML systems is challenging and requires a cross-section of knowledge, skills, and experience from areas such as mathematics, data science, software engineering, cybersecurity, and operations.

Join Tariq King as he gives you a quality engineering introduction to testing AI and machine learning. You'll learn AI and ML fundamentals, including how intelligent agents are modeled, trained, and developed. Tariq then dives into approaches for validating ML models both offline, prior to release, and online, continuously after deployment. Engage with other participants to develop and execute a test plan for a live ML-based recommendation system, and experience the practical issues around testing AI first-hand.
  • Introduction
  • Testing the Teachable Machine
  • Model Evaluation
  • Classifier Performance
  • Regression Performance
  • Model Fit & Cross Validation
  • AI Fairness & Bias Mitigation
  • Clustering Validation
  • Testing Adaptive ML
  • ML Operations
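To give a flavor of the offline model evaluation and cross-validation topics above, here is a minimal sketch in plain Python. The tiny dataset and the fixed threshold classifier are hypothetical stand-ins (the course does not prescribe a specific model or library); a real workflow would retrain the model on the training folds in each iteration, which the toy classifier skips because it has no training step.

```python
# Minimal sketch of offline model evaluation: accuracy plus k-fold
# cross-validation, standard library only. The data and the classifier
# below are hypothetical examples, not part of the course materials.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def threshold_classifier(x, threshold=0.5):
    """Toy binary classifier: predicts 1 when the feature exceeds threshold."""
    return 1 if x > threshold else 0

def k_fold_cross_validate(xs, ys, k=5):
    """Split the data into k folds and score the model on each held-out fold.

    A real cross-validation loop would also retrain the model on the
    remaining folds here; this toy classifier has no training step.
    """
    fold_size = len(xs) // k
    scores = []
    for i in range(k):
        start, end = i * fold_size, (i + 1) * fold_size
        test_x, test_y = xs[start:end], ys[start:end]
        preds = [threshold_classifier(x) for x in test_x]
        scores.append(accuracy(test_y, preds))
    return sum(scores) / len(scores)

xs = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.05, 0.95]
ys = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
mean_score = k_fold_cross_validate(xs, ys, k=5)
print(f"mean cross-validated accuracy: {mean_score:.2f}")
```

The point of the exercise is the testing discipline, not the model: holding data out of training and averaging scores across folds gives a more honest estimate of how the model will behave on unseen inputs than a single train/test split.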
Completion rules
  • All units must be completed