A dynamical systems model revealing how AI assistance can erode the human skills needed to supervise automated tasks, with domain-specific vulnerability analysis and intervention evaluation.
As organizations deploy AI assistants, a critical question emerges: does AI assistance erode the human skills required to supervise automated outputs? We formalize this through a dynamical systems model and identify deskilling traps -- parameter regimes where workers lose supervisory competence and simultaneously lose awareness of their incompetence.
We model a worker whose supervisory skill s(t) and metacognitive calibration m(t) evolve over discrete time steps (each representing one week). The worker handles 20 tasks per time step, delegating a fraction r(t) to an AI system.
Skill evolves through three competing terms: growth from unassisted practice, decay from disuse when tasks are delegated to AI, and partial transfer back from reviewing AI outputs.
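One concrete update consistent with these three terms is sketched below; $\alpha$, $\beta$, and $\tau$ map to the Skill Growth, Skill Decay, and Transfer Rate parameters tabulated below, and the functional form is our assumption rather than a quoted equation:

$$
s(t+1) = s(t) + \underbrace{\alpha\,(1 - r(t))(1 - s(t))}_{\text{practice}} - \underbrace{\beta\, r(t)\, s(t)}_{\text{disuse}} + \underbrace{\tau\,\alpha\, r(t)(1 - s(t))}_{\text{review transfer}}
$$

In this form a reviewed task yields only a $\tau$-fraction of the practice signal of an unassisted one, which is why heavy delegation can fail to sustain skill when $\tau$ is small.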
Detection probability depends on both domain skill (to recognize errors) and metacognition (to avoid rubber-stamping).
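A minimal form capturing both requirements (again, the exact shape is an assumption) is multiplicative:

$$
p_{\text{detect}}(t) = s(t)\,m(t)
$$

so a worker who is skilled but poorly calibrated, or well calibrated but unskilled, still lets errors through.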
| Parameter | Software | Medicine | Finance | Aviation |
|---|---|---|---|---|
| Error Severity | 0.30 | 0.90 | 0.60 | 0.95 |
| AI Reliability | 0.85 | 0.90 | 0.80 | 0.95 |
| Novelty Rate | 0.25 | 0.15 | 0.30 | 0.05 |
| Skill Growth | 0.05 | 0.03 | 0.04 | 0.04 |
| Skill Decay | 0.02 | 0.015 | 0.025 | 0.03 |
| Transfer Rate | 0.30 | 0.20 | 0.25 | 0.15 |
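For reference, the table can be encoded directly; the dictionary below is a sketch and the field names are ours:

```python
# Domain parameter sets transcribed from the table above (names illustrative).
DOMAIN_PARAMS = {
    "software": dict(error_severity=0.30, ai_reliability=0.85, novelty_rate=0.25,
                     skill_growth=0.05, skill_decay=0.02, transfer_rate=0.30),
    "medicine": dict(error_severity=0.90, ai_reliability=0.90, novelty_rate=0.15,
                     skill_growth=0.03, skill_decay=0.015, transfer_rate=0.20),
    "finance":  dict(error_severity=0.60, ai_reliability=0.80, novelty_rate=0.30,
                     skill_growth=0.04, skill_decay=0.025, transfer_rate=0.25),
    "aviation": dict(error_severity=0.95, ai_reliability=0.95, novelty_rate=0.05,
                     skill_growth=0.04, skill_decay=0.03, transfer_rate=0.15),
}
```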
We simulate 200 weeks of AI-assisted work for novice, intermediate, and expert workers across four professional domains.
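A minimal simulation sketch under the illustrative dynamics above, reusing the `DOMAIN_PARAMS` sketch; the delegation policy and the metacognition update are our assumptions, not the model's exact specification:

```python
def simulate(params, s0, m0, weeks=200, r=None):
    """Run one worker trajectory; returns the weekly skill history."""
    if r is None:
        # Assumed policy: workers delegate in proportion to AI reliability.
        r = params["ai_reliability"]
    s, m = s0, m0
    skills = []
    for _ in range(weeks):
        practice = params["skill_growth"] * (1 - r) * (1 - s)  # unassisted tasks
        decay = params["skill_decay"] * r * s                  # disuse
        transfer = params["transfer_rate"] * params["skill_growth"] * r * (1 - s)  # reviewing AI output
        s = min(1.0, max(0.0, s + practice - decay + transfer))
        # Assumed calibration update: metacognition moves toward true skill
        # only when AI errors surface, so highly reliable AI gives little feedback.
        feedback = (1 - params["ai_reliability"]) * r
        m = min(1.0, max(0.0, m + feedback * (s - m)))
        skills.append(s)
    return skills

novice = simulate(DOMAIN_PARAMS["software"], s0=0.20, m0=0.50)
```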
Supervisory skill level over 200 weeks. Dashed red line marks the supervision competence threshold (0.3).
| Domain | Level | Initial Skill | Final Skill | Final Metacognition | Detection Rate | Total Harm |
|---|---|---|---|---|---|---|
We sweep AI reliability from 0.50 to 0.99 and measure the final skill and deskilling-trap rate for novice software engineers.
Blue line: mean final skill (left axis). Red bars: deskilling trap rate. A critical threshold emerges at reliability ~0.938.
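The sweep itself is a plain loop over a reliability grid, reusing the hypothetical `simulate` and `DOMAIN_PARAMS` sketches above; the trap criterion (final skill below 0.3) follows the competence threshold marked in the skill figure:

```python
import numpy as np

trap_threshold = 0.3  # competence threshold from the skill figure
for rel in np.linspace(0.50, 0.99, 50):
    params = dict(DOMAIN_PARAMS["software"], ai_reliability=rel)
    final_skill = simulate(params, s0=0.20, m0=0.50)[-1]
    trapped = final_skill < trap_threshold
    print(f"reliability={rel:.3f}  final_skill={final_skill:.3f}  trapped={trapped}")
```

Because this sketch is deterministic it reports one run per grid point; the trap rate in the figure is presumably a fraction over random seeds in the stochastic model.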
We compare four interventions for a novice software engineer, each run across 10 random seeds.
Trajectories over 200 weeks under different interventions. Scaffolded autonomy is the only one that reverses the deskilling trajectory.
| Intervention | Final Skill | Detection Rate | Total Harm |
|---|---|---|---|
| No Intervention | 0.048 ± 0.000 | 0.234 ± 0.012 | 67.1 ± 1.7 |
| Scheduled Practice | 0.125 ± 0.000 | 0.295 ± 0.013 | 63.5 ± 1.7 |
| Scaffolded Autonomy | 0.983 ± 0.001 | 0.684 ± 0.034 | 8.3 ± 0.7 |
| Adversarial Training | 0.048 ± 0.000 | 0.234 ± 0.012 | 60.1 ± 1.5 |
| Explainability Req. | 0.126 ± 0.000 | 0.303 ± 0.017 | 62.9 ± 2.3 |
We compare Pre-AI workers (initial skill 0.75) and Post-AI workers (initial skill 0.20) over 300 weeks.
Comparison over 300 weeks. Pre-AI cohort maintains a persistent ~3x skill advantage.
Our model demonstrates that AI assistance can produce deskilling traps under realistic parameter regimes. The answer to the opening question is conditionally affirmative: AI assistance erodes supervisory skill whenever delegation leaves too little unassisted practice to sustain it.
Implement scaffolded autonomy where AI gradually reduces assistance as workers demonstrate competence. The 20x skill improvement demonstrates that thoughtful AI system design can prevent deskilling.
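In the model's terms, such a policy is a feedback rule on the delegation fraction; the sketch below uses thresholds and a step size of our own choosing:

```python
def scaffolded_delegation(s, r, step=0.05, competence=0.70):
    """Weekly delegation adjustment driven by demonstrated skill (illustrative).

    Delegation rises only while the worker clears the competence bar;
    otherwise assistance is withdrawn to force unassisted practice.
    """
    if s >= competence:
        return min(0.95, r + step)  # competent: allow more delegation
    return max(0.05, r - step)      # slipping: reclaim tasks for practice
```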
Include deliberate unassisted practice, especially for AI-native workers. The generational asymmetry shows these workers need qualitatively different training.
The aviation results are alarming: all experience levels enter deskilling traps. Regulatory attention should focus on the most reliable AI systems, as these pose the greatest deskilling risk.