
Environment-Conditional Interpretation Consistency (ECIC)

Measuring and improving LLM explanation stability under diverse environmental conditions for autonomous driving systems

Open Problem

Achieving consistent real-world interpretability under diverse environmental conditions remains an open research challenge for the broader LLM ecosystem (Ferrag et al., 2026, AgentDrive). The ECIC framework addresses this by formalizing interpretability consistency through four complementary metrics: Attribution Invariance Score (AIS), Explanation Semantic Similarity (ESS), Faithfulness Gap (FG), and a composite Consistency Index (CI).
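The page does not state how the composite Consistency Index is aggregated from its three components, so the sketch below assumes an unweighted mean of AIS, ESS, and (1 - FG), with every term oriented so that higher is better. The function name and weighting are illustrative, not the ECIC definition.

```python
def consistency_index(ais: float, ess: float, fg: float) -> float:
    """Hypothetical composite CI: unweighted mean of AIS, ESS, and (1 - FG).

    All three components are assumed to lie in [0, 1]; FG is inverted
    because a lower Faithfulness Gap indicates higher consistency.
    """
    return (ais + ess + (1.0 - fg)) / 3.0

# Example: strong attribution/semantic stability and a small faithfulness
# gap (illustrative values, not the reported results) yield a CI near 1.
ci = consistency_index(ais=0.95, ess=0.96, fg=0.004)
```

Under this assumed aggregation, a perfect score (AIS = ESS = 1, FG = 0) gives CI = 1, and the index degrades linearly as any component worsens.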

93% Faithfulness Gap Reduction (FG: 0.054 → 0.004)
0.964 ECIC-Optimized CI (+0.028 vs. baseline)
100% Contrastive Check Pass Rate (50/50 evaluations)
450 Pairwise Evaluations (10 scenarios × 45 pairs)
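The 450 pairwise evaluations follow from counting unordered pairs; assuming the 45 pairs per scenario are all C(10, 2) combinations of 10 environmental conditions, the arithmetic is:

```python
from itertools import combinations

# Assumption: each scenario compares every unordered pair of 10
# environmental conditions, giving C(10, 2) = 45 pairs per scenario.
conditions = range(10)
pairs = list(combinations(conditions, 2))

n_scenarios = 10
total_evaluations = n_scenarios * len(pairs)  # 10 scenarios * 45 pairs = 450
```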

Model Configuration Comparison

Per-Scenario Consistency Index

Phase Transition: Visibility Sweep

CI as a function of visibility distance (10 m-1000 m).

Phase Transition: Precipitation Sweep

CI as a function of precipitation intensity (0.0-1.0).

Contrastive Consistency Checks

All 50 evaluations (5 scenarios × 10 condition pairs) pass all three contrastive checks: rationale stability, adjustment coherence, and attribution proportionality.
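A minimal harness for the three contrastive checks might look like the following. The page does not give the pass criteria, so the field names and thresholds here are assumptions chosen only to illustrate the structure of a per-pair check.

```python
from dataclasses import dataclass

@dataclass
class PairResult:
    """Measurements for one contrastive condition pair (hypothetical schema)."""
    rationale_sim: float      # semantic similarity between the two rationales
    adjustment_delta: float   # signed change in cautiousness of the behavior
    attribution_ratio: float  # attribution shift relative to condition shift

def contrastive_checks(r: PairResult) -> dict:
    """Return pass/fail for each of the three checks (illustrative thresholds)."""
    return {
        # Rationales should stay semantically close across the pair.
        "rationale_stability": r.rationale_sim >= 0.8,
        # Harsher conditions should not reduce cautiousness.
        "adjustment_coherence": r.adjustment_delta >= 0.0,
        # Attribution change should be roughly proportional to condition change.
        "attribution_proportionality": 0.5 <= r.attribution_ratio <= 2.0,
    }

result = contrastive_checks(PairResult(0.92, 0.15, 1.1))
all_pass = all(result.values())
```

A pair passes only when all three booleans are true; aggregating `all_pass` over the 50 pairs would reproduce the pass-rate statistic reported above.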

Aggregate ECIC Metrics

Configuration | CI | AIS | ESS | FG (lower = better) | DCR

Environmental Conditions

Condition | Visibility (m) | Precipitation | Light | Friction | Severity