
WEBINAR

Automated C/C++ Testing Roadmap for AI Safety & ISO/PAS 8800

The race to deploy AI in road vehicles is accelerating, but the path to doing so safely is fraught with complex, system-level challenges. The ISO/PAS 8800:2024 standard offers critical guidance, defining rigorous requirements for AI systems in safety-related E/E architectures. For automotive developers, this means heightened demands on verification, validation, and safety assurance, especially for software that implements or interacts with AI components.

Watch this webinar as it breaks down how ISO/PAS 8800 reframes AI safety as a continuous engineering discipline, addressing the gaps left by standards like ISO 26262 and SOTIF.

Get an actionable roadmap for compliance and next-generation AI testing in one session. What you’ll learn:

  • From Theory to Workflow: Transform ISO/PAS 8800 clauses into a clear, actionable AI safety process.
  • Integrate Your Safety Strategy: Bridge ISO 26262 and SOTIF with 8800 to close AI-specific safety gaps.
  • Master the AI Safety Lifecycle: Apply the standard from data curation to system validation and operational monitoring.
  • Automate Compliance & Evidence: Leverage shift-left testing, CI/CD, and traceability to build your safety case faster.

Understanding ISO/PAS 8800: The Basics

ISO/PAS 8800 is all about making sure AI systems in cars are safe. It covers AI running inside the car and even external AI that influences how the car behaves, like smart traffic lights. The standard is upfront: it doesn’t promise perfect safety. Instead, it aims to lower AI-related risks to an acceptable level within the overall safety plan for the vehicle. Importantly, ISO/PAS 8800 doesn’t replace older standards like ISO 26262 (for functional safety) or SOTIF (for safety of the intended functionality). It works alongside them, filling in the gaps that AI introduces.

What’s New with AI Safety?

A big part of the confusion around AI safety comes down to language. ISO/PAS 8800 clarifies this by distinguishing between a system that fails and one that is functionally insufficient. An AI system might work exactly as programmed but still be unsafe because it doesn’t have enough knowledge, data, or the ability to handle new situations. These aren’t typical software bugs; they’re gaps in understanding. This means AI safety needs new ways of thinking, new evidence, and new approaches to manage these risks.

Building on Existing Standards

ISO/PAS 8800 is designed to work with what teams already know. It explicitly builds on ISO 26262, ISO 21448 (SOTIF), and ISO/IEC 22989 (AI terminology). This means you don’t have to throw away your current safety practices. Instead, you’re extending them to cover AI. Think of AI safety as an evolution of system safety, not a completely new field.

The AI Safety Lifecycle and Assurance

AI safety isn’t an afterthought; it needs to be planned and managed throughout the entire life of the AI system. ISO/PAS 8800 outlines a reference AI safety lifecycle that includes everything from defining requirements and design to testing, deployment, and ongoing operation. For machine learning systems, this is an iterative process – you refine and improve until safety goals are met.

The Assurance Argument: Proving Safety

Perhaps the most critical part of ISO/PAS 8800 is how it addresses proving AI safety. You can’t prove an AI system safe in an absolute sense. Instead, you build a structured assurance argument. This involves clearly stating your safety claims and then backing them up with solid evidence. This evidence goes beyond simple accuracy numbers. It includes:

  • Data coverage: How well the training data represents real-world conditions.
  • Robustness: How the AI performs under various stresses or unexpected inputs.
  • Testing: Results from various testing methods.
  • Architectural safeguards: Safety features built into the system design.
  • Monitoring strategies: How the system’s performance is tracked.
  • Operational controls: Rules and procedures for how the AI is used.

AI safety is ultimately a reasoned, evidence-backed argument that the system, as a whole, is acceptably safe.

Data: A Critical Safety Asset

Many AI failures can be traced back to issues with data. ISO/PAS 8800 treats data as a safety-critical asset. This means data sets need their own lifecycle, including definition, verification, validation, and ongoing improvement. Missing scenarios, biased data, or labeling errors can directly lead to unsafe AI behavior. For example, a pedestrian detection system must reliably work in all expected conditions – day, night, rain, snow, or busy streets. If critical scenarios, like a partially hidden pedestrian, are missing from the training data, the system might fail when it encounters them, which is a serious safety risk.
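
Coverage gates like this can be automated. Below is a minimal C++ sketch of one way a team might check a labeled dataset for per-scenario coverage before training; the scenario tags, counts, and thresholds are illustrative assumptions, not values from the standard.

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    struct Sample {
        std::string id;
        std::set<std::string> tags;  // scenario tags attached during labeling
    };

    // Returns every required scenario whose sample count falls below its minimum.
    std::vector<std::string> findCoverageGaps(
        const std::vector<Sample>& dataset,
        const std::map<std::string, int>& requiredMinimums) {
        std::map<std::string, int> counts;
        for (const Sample& s : dataset)
            for (const std::string& tag : s.tags) ++counts[tag];

        std::vector<std::string> gaps;
        for (const auto& [scenario, minimum] : requiredMinimums)
            if (counts[scenario] < minimum) gaps.push_back(scenario);
        return gaps;
    }

    int main() {
        std::vector<Sample> dataset;  // in practice, loaded from a dataset manifest
        // Minimum counts derived from the safety requirements (illustrative values).
        std::map<std::string, int> required = {
            {"night", 5000}, {"heavy_rain", 2000}, {"occluded_pedestrian", 1000}};

        std::vector<std::string> gaps = findCoverageGaps(dataset, required);
        for (const std::string& g : gaps)
            std::cout << "coverage gap: " << g << '\n';
        return gaps.empty() ? 0 : 1;  // nonzero exit fails the pipeline stage
    }

Run as a pipeline stage, a check like this turns "the data covers the required scenarios" from a verbal claim into a piece of machine-checked evidence for the assurance argument.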

Testing and Verification for AI Systems

Verifying that AI meets safety requirements and validating that it operates safely in the real world is challenging. AI systems deal with high-dimensional inputs, and requirements can be fuzzy. ISO/PAS 8800 calls for multi-level testing, from individual AI components to the integrated system and the full vehicle. This includes a mix of:

  • Statistical testing
  • Scenario replay
  • Robustness testing
  • Simulation (e.g., using tools like CARLA or NVIDIA DRIVE Sim)
  • Hardware-in-the-loop testing

These methods help explore rare or extreme scenarios and validate data when physical testing isn’t practical. The key is that AI testing must be systematic, repeatable, and directly linked to safety goals.
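
One way to make such testing repeatable is to express robustness checks as ordinary unit tests. The GoogleTest sketch below applies a metamorphic property (mild sensor noise must not flip a detection); the detector stub, the noise model, and the tolerance are hypothetical stand-ins for a team’s real inference wrapper and perturbation tooling.

    #include <gtest/gtest.h>
    #include <random>
    #include <vector>

    using Image = std::vector<float>;  // flattened pixel buffer, details elided

    struct Detection { bool pedestrianFound; float confidence; };

    // Stub: in a real harness this would call the deployed model.
    Detection detectPedestrian(const Image&) { return {true, 0.9f}; }

    // Adds zero-mean Gaussian noise to every pixel (simple sensor-noise model).
    Image addGaussianNoise(Image img, float sigma) {
        std::mt19937 rng(42);  // fixed seed keeps the test repeatable
        std::normal_distribution<float> noise(0.0f, sigma);
        for (float& px : img) px += noise(rng);
        return img;
    }

    TEST(PedestrianDetectorRobustness, StableUnderSensorNoise) {
        Image baseline(640 * 480, 0.5f);  // placeholder for a replayed scenario
        Detection ref = detectPedestrian(baseline);
        ASSERT_TRUE(ref.pedestrianFound);

        // Metamorphic property: mild noise must not flip the detection or
        // collapse its confidence below an illustrative tolerance.
        for (float sigma : {0.01f, 0.02f, 0.05f}) {
            Detection d = detectPedestrian(addGaussianNoise(baseline, sigma));
            EXPECT_TRUE(d.pedestrianFound) << "detection lost at sigma=" << sigma;
            EXPECT_GT(d.confidence, ref.confidence - 0.2f);
        }
    }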

Beyond Traditional Failure Analysis

AI failures can be systemic, not just bugs. They might come from data gaps, incorrect generalizations (where the AI misapplies learned patterns), or unsafe interactions with other systems. While traditional failure analysis methods can be used, system-theoretic approaches like STPA are often better suited for complex AI behavior. Being proactive in ensuring quality reduces risk and builds the evidence needed for your assurance argument.

Keeping AI Safe After Deployment

Safety doesn’t stop once the car is on the road. AI systems operate in a dynamic world. ISO/PAS 8800 emphasizes:

  • Operational monitoring: To catch anomalies or unexpected inputs.
  • Field data collection: To learn about new conditions the AI encounters.
  • Periodic reassessments: Of the assurance argument.
  • Controlled updates: Like over-the-air software patches.

Deployment is just the beginning of operational responsibility. Continuous assurance is how AI stays safe throughout the vehicle’s life.
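
In practice, operational monitoring often takes the form of a plausibility monitor wrapping the AI output. Below is a minimal C++ sketch of that idea; the signal, the limits, and the fallback convention are illustrative assumptions about one possible architecture, not something the standard prescribes.

    #include <cmath>
    #include <cstdio>
    #include <optional>

    struct SteeringCommand { float angleDeg; };

    class SteeringMonitor {
    public:
        // Returns the command if plausible; std::nullopt tells the caller to
        // fall back to a safe state (e.g., a conventional backup controller).
        std::optional<SteeringCommand> check(SteeringCommand cmd) {
            const float kMaxAngleDeg = 35.0f;  // physical actuator limit (assumed)
            const float kMaxStepDeg = 5.0f;    // plausible change per cycle (assumed)
            const bool outOfRange = std::fabs(cmd.angleDeg) > kMaxAngleDeg;
            const bool implausibleJump =
                last_ && std::fabs(cmd.angleDeg - last_->angleDeg) > kMaxStepDeg;
            if (outOfRange || implausibleJump || !std::isfinite(cmd.angleDeg)) {
                std::fprintf(stderr, "anomaly: steering angle %.1f rejected\n",
                             cmd.angleDeg);
                return std::nullopt;
            }
            last_ = cmd;
            return cmd;
        }

    private:
        std::optional<SteeringCommand> last_;  // previous accepted command
    };

Rejections from such a monitor double as field data: each logged anomaly feeds the periodic reassessment of the assurance argument that the standard calls for.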

Practical Steps: Testing and Automation

So, how do teams actually put ISO/PAS 8800 into practice? The standard stresses identifying AI risks early. This means adopting a shift-left testing approach.

  • Static code analysis and coding standards: Catching traditional software faults early in the code surrounding the AI helps prevent issues before they become triggering conditions for unsafe AI behavior.
  • Automated unit testing: Using frameworks like GoogleTest ensures that control logic, guardrails, and safety mechanisms work as intended, especially when AI outputs feed into safety-critical software (see the guardrail sketch after this list).
  • Continuous Integration/Continuous Deployment (CI/CD): Embedding safety analysis, unit testing, and code coverage into CI/CD pipelines allows teams to continuously verify that changes don’t introduce new risks. This supports the iterative nature of AI safety.
  • Traceability and compliance automation: Linking requirements to test results helps generate compliance artifacts in real time, making it easier to show how safety requirements are being met.
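
As an example of the unit-testing point above, here is a minimal GoogleTest sketch of a guardrail sitting between an AI output and safety-critical control code. The clampBrakeRequest() function and its limits are hypothetical; the point is that deterministic safety mechanisms around the AI can be verified with conventional, automatable tests.

    #include <gtest/gtest.h>
    #include <algorithm>
    #include <cmath>

    // Guardrail: whatever the AI requested, never pass a brake command outside
    // [0.0, 1.0] downstream, and fail safe on non-finite values. The fail-safe
    // value itself (here 0.0) is a safety design decision, assumed for the sketch.
    float clampBrakeRequest(float aiRequest) {
        if (!std::isfinite(aiRequest)) return 0.0f;  // NaN/inf from a faulty model
        return std::clamp(aiRequest, 0.0f, 1.0f);
    }

    TEST(BrakeGuardrail, PassesNominalRequestsThrough) {
        EXPECT_FLOAT_EQ(clampBrakeRequest(0.4f), 0.4f);
    }

    TEST(BrakeGuardrail, SaturatesOutOfRangeAiOutputs) {
        EXPECT_FLOAT_EQ(clampBrakeRequest(1.7f), 1.0f);   // over-request
        EXPECT_FLOAT_EQ(clampBrakeRequest(-0.3f), 0.0f);  // negative request
    }

    TEST(BrakeGuardrail, FailsSafeOnNonFiniteAiOutputs) {
        EXPECT_FLOAT_EQ(clampBrakeRequest(std::nanf("")), 0.0f);
        EXPECT_FLOAT_EQ(clampBrakeRequest(INFINITY), 0.0f);
    }

Because tests like these run in every CI/CD cycle, their results and coverage data accumulate into exactly the kind of traceable compliance evidence described in the last bullet above.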

AI-Driven Testing for Efficiency

AI introduces complexity that traditional testing alone can’t fully handle. AI-driven autonomous testing can help by automatically fixing violations, generating tests, improving coverage, prioritizing risks, and speeding up compliance. This reduces manual effort, gets products to market faster, and helps teams keep up with the demanding, iterative safety expectations of standards like ISO/PAS 8800 without burning out.

In essence, ISO/PAS 8800 defines what AI safety requires, and tools and practices like those from Parasoft help teams achieve it consistently, efficiently, and at scale, covering both AI and traditional software components.