WEBINAR
The race to deploy AI in road vehicles is accelerating, but the path to doing so safely is fraught with complex, system-level challenges. The ISO/PAS 8800:2024 standard offers critical guidance, defining rigorous requirements for AI systems in safety-related E/E architectures. For automotive developers, this means heightened demands on verification, validation, and safety assurance, especially for software that implements or interacts with AI components.
Watch this webinar to see how ISO/PAS 8800 reframes AI safety as a continuous engineering discipline, addressing the gaps left by standards like ISO 26262 and SOTIF.
Get an actionable roadmap for compliance and next-generation AI testing in one session. What you’ll learn:
ISO/PAS 8800 is all about making sure AI systems in cars are safe. It covers AI running inside the car and even external AI that influences how the car behaves, like smart traffic lights. The standard is upfront: it doesn’t promise perfect safety. Instead, it aims to lower AI-related risks to an acceptable level within the overall safety plan for the vehicle. Importantly, ISO/PAS 8800 doesn’t replace older standards like ISO 26262 (for functional safety) or SOTIF (for safety of the intended functionality). It works alongside them, filling in the gaps that AI introduces.
A big part of the confusion around AI safety comes down to language. ISO/PAS 8800 clarifies this by distinguishing between a system failing and being insufficient. An AI system might work exactly as programmed but still be unsafe because it doesn’t have enough knowledge, data, or the ability to handle new situations. These aren’t typical software bugs; they’re gaps in understanding. This means AI safety needs new ways of thinking, new evidence, and new approaches to manage these risks.
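To make the distinction concrete, here is a minimal C++ sketch; the names and the confidence threshold are hypothetical, not from the standard. The code never fails in the traditional sense, yet its output can still be insufficient when an input lies outside what the model has learned.

```cpp
#include <iostream>
#include <string>

// Hypothetical perception output: a label plus a confidence score.
struct Detection {
    std::string label;
    double confidence;  // 0.0 .. 1.0
};

// The model "works as programmed": it always returns its best guess.
// Insufficiency shows up as low confidence on inputs unlike the training data.
bool isInsufficient(const Detection& d, double minConfidence = 0.8) {
    return d.confidence < minConfidence;
}

int main() {
    // A partially occluded pedestrian the training data never covered:
    Detection d{"pedestrian", 0.41};
    if (isInsufficient(d)) {
        // Not a bug to debug, but a knowledge gap to mitigate
        // (e.g., fall back to a safe state, request driver takeover).
        std::cout << "Low confidence: treating output as insufficient\n";
    }
    return 0;
}
```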
ISO/PAS 8800 is designed to work with what teams already know. It explicitly builds on ISO 26262, ISO 21448 (SOTIF), and ISO/IEC 22989 (AI terminology). This means you don’t have to throw away your current safety practices. Instead, you’re extending them to cover AI. Think of AI safety as an evolution of system safety, not a completely new field.
AI safety isn’t an afterthought; it needs to be planned and managed throughout the entire life of the AI system. ISO/PAS 8800 outlines a reference AI safety lifecycle that includes everything from defining requirements and design to testing, deployment, and ongoing operation. For machine learning systems, this is an iterative process – you refine and improve until safety goals are met.
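As a rough sketch of that iterative loop, consider the toy pipeline below. The stage names, scores, and acceptance criterion are placeholder assumptions, not values prescribed by the standard.

```cpp
#include <iostream>

// Placeholder for one lifecycle pass: data curation, training, and a
// safety evaluation would each be real pipeline stages in practice.
int trainAndEvaluate(int iteration) {
    return 70 + 5 * iteration;  // pretend each pass closes part of the gap (score in %)
}

int main() {
    const int safetyTarget = 95;  // assumed acceptance criterion
    int score = 0;
    // Iterate until the safety goals are met, recording evidence each pass.
    for (int i = 0; score < safetyTarget; ++i) {
        score = trainAndEvaluate(i);
        std::cout << "Iteration " << i << ": safety score " << score << "%\n";
        // A failed pass triggers data and design refinements before the next.
    }
    std::cout << "Safety acceptance criteria met\n";
    return 0;
}
```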
Perhaps the most critical part of ISO/PAS 8800 is how it addresses proving AI safety. You can’t prove AI is safe in an absolute sense. Instead, you build a structured assurance argument: you clearly state your safety claims, then back them up with solid evidence that goes beyond simple accuracy numbers. AI safety is ultimately a reasoned, evidence-backed argument that the system, as a whole, is acceptably safe.
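One way to picture such an argument is as a simple data structure linking each claim to its supporting evidence. This is a minimal sketch with illustrative field names, not a template from the standard.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Each safety claim is backed by concrete evidence items.
struct Evidence {
    std::string description;  // e.g., "scenario coverage report"
    std::string source;       // e.g., test campaign, dataset audit, analysis
};

struct SafetyClaim {
    std::string claim;               // the safety property being asserted
    std::vector<Evidence> evidence;  // what backs the claim up
};

int main() {
    SafetyClaim c{
        "Pedestrian detection meets its safety requirement in darkness",
        {
            {"Night-scenario test results", "hardware-in-the-loop campaign"},
            {"Dataset audit covering low-light conditions", "data validation"},
        }};
    std::cout << "Claim: " << c.claim << " (" << c.evidence.size()
              << " evidence items)\n";
    return 0;
}
```

Structured safety-case notations such as GSN (Goal Structuring Notation) express this same claim-to-evidence linkage at full scale.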
Many AI failures can be traced back to issues with data. ISO/PAS 8800 treats data as a safety-critical asset. This means data sets need their own lifecycle, including definition, verification, validation, and ongoing improvement. Missing scenarios, biased data, or labeling errors can directly lead to unsafe AI behavior. For example, a pedestrian detection system must reliably work in all expected conditions – day, night, rain, snow, or busy streets. If critical scenarios, like a partially hidden pedestrian, are missing from the training data, the system might fail when it encounters them, which is a serious safety risk.
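A dataset audit for coverage gaps can start as simply as the sketch below, which counts training samples per required operating condition and flags anything missing or underrepresented. The condition names and the minimum-sample threshold are assumptions for illustration.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Conditions the pedestrian detector must handle (assumed list).
    std::vector<std::string> required = {
        "day", "night", "rain", "snow", "occluded_pedestrian"};

    // Tags attached to each training sample (normally read from metadata).
    std::map<std::string, int> sampleCounts = {
        {"day", 12000}, {"night", 4100}, {"rain", 900}, {"snow", 850}};

    const int minSamples = 500;  // assumed acceptance threshold
    for (const auto& condition : required) {
        auto it = sampleCounts.find(condition);
        int count = (it == sampleCounts.end()) ? 0 : it->second;
        if (count < minSamples) {
            // Here, the occluded-pedestrian scenario is entirely absent:
            // exactly the kind of gap that becomes an unsafe behavior later.
            std::cout << "Coverage gap: " << condition << " has " << count
                      << " samples (< " << minSamples << ")\n";
        }
    }
    return 0;
}
```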
Verifying that AI meets safety requirements and validating that it operates safely in the real world is challenging. AI systems deal with high-dimensional inputs, and requirements can be fuzzy. ISO/PAS 8800 calls for multi-level testing, from individual AI components to the integrated system and the full vehicle, combining complementary methods. These methods help explore rare or extreme scenarios and validate data when physical testing isn’t practical. The key is that AI testing must be systematic, repeatable, and directly linked to safety goals.
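Here is a minimal sketch of such a systematic scenario sweep; the component under test and the scenario list are hypothetical stand-ins for a real simulation-backed harness.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical component under test; a real harness would drive a
// simulator or replay recorded sensor data behind this call.
bool detectsPedestrian(const std::string& scenario) {
    return scenario != "heavy_snow";  // stand-in result for illustration
}

int main() {
    // A repeatable sweep tied to one safety goal: "pedestrians shall be
    // detected in all expected operating conditions".
    std::vector<std::string> scenarios = {
        "day_clear", "night_clear", "rain", "heavy_snow", "occluded"};
    int failures = 0;
    for (const auto& s : scenarios) {
        bool ok = detectsPedestrian(s);
        std::cout << s << ": " << (ok ? "PASS" : "FAIL") << "\n";
        if (!ok) ++failures;  // each failure traces to a specific scenario
    }
    return failures == 0 ? 0 : 1;
}
```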
AI failures can be systemic, not just bugs. They might come from data gaps, incorrect generalizations (where the AI misapplies learned patterns), or unsafe interactions with other systems. While traditional failure analysis methods can be used, system-theoretic approaches like STPA are often better suited for complex AI behavior. Being proactive in ensuring quality reduces risk and builds the evidence needed for your assurance argument.
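For a flavor of the system-theoretic view: STPA enumerates unsafe control actions, i.e., the contexts in which providing, omitting, or mistiming a control action becomes hazardous. A minimal sketch with illustrative entries follows.

```cpp
#include <iostream>
#include <string>
#include <vector>

// One row of an STPA-style analysis for a single control action.
struct UnsafeControlAction {
    std::string controlAction;
    std::string type;     // not provided / provided / too late / too long
    std::string context;  // the system state that makes it unsafe
};

int main() {
    std::vector<UnsafeControlAction> ucas = {
        {"brake command", "not provided", "pedestrian detected ahead"},
        {"brake command", "provided too late", "closing distance below threshold"},
        {"brake command", "provided", "no obstacle; close following traffic behind"},
    };
    for (const auto& u : ucas) {
        std::cout << u.controlAction << " | " << u.type << " | "
                  << u.context << "\n";
    }
    return 0;
}
```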
Safety doesn’t stop once the car is on the road. AI systems operate in a dynamic world, so ISO/PAS 8800 emphasizes continued assurance in operation: monitoring AI behavior in the field and keeping the safety case current as the system and its environment evolve. Deployment is just the beginning of operational responsibility. Continuous assurance is how AI stays safe throughout the vehicle’s life.
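A simple field monitor might track a rolling average of model confidence and flag drift when real-world conditions shift away from the training data. The window size and threshold below are assumptions for illustration.

```cpp
#include <cstddef>
#include <deque>
#include <iostream>
#include <numeric>

// Rolling-window drift monitor over per-frame detection confidences.
class DriftMonitor {
public:
    DriftMonitor(std::size_t window, double threshold)
        : window_(window), threshold_(threshold) {}

    // Feed one confidence score per frame; returns true if drift is suspected.
    bool update(double confidence) {
        scores_.push_back(confidence);
        if (scores_.size() > window_) scores_.pop_front();
        if (scores_.size() < window_) return false;  // not enough data yet
        double mean = std::accumulate(scores_.begin(), scores_.end(), 0.0) /
                      static_cast<double>(scores_.size());
        return mean < threshold_;
    }

private:
    std::size_t window_;
    double threshold_;
    std::deque<double> scores_;
};

int main() {
    DriftMonitor monitor(5, 0.7);  // assumed window and threshold
    double frames[] = {0.9, 0.88, 0.6, 0.55, 0.5, 0.45};
    for (double c : frames) {
        if (monitor.update(c)) {
            // In the vehicle: log the event, degrade gracefully, and feed
            // the data back into the safety lifecycle for retraining.
            std::cout << "Drift suspected at confidence " << c << "\n";
        }
    }
    return 0;
}
```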
So, how do teams actually put ISO/PAS 8800 into practice? The standard stresses identifying AI risks early. This means adopting a shift-left testing approach: moving testing earlier in development, where AI risks are cheaper to find and fix.
AI introduces complexity that traditional testing alone can’t fully handle. AI-driven autonomous testing can help by automatically fixing violations, generating tests, improving coverage, prioritizing risks, and speeding up compliance. This reduces manual effort, gets products to market faster, and helps teams keep up with the demanding, iterative safety expectations of standards like ISO/PAS 8800 without burning out.
In essence, ISO/PAS 8800 defines what AI safety requires, and tools and practices like those from Parasoft help teams achieve it consistently, efficiently, and at scale, covering both AI and traditional software components.