
Sellen and Horvitz examine how artificial intelligence (AI) can function like an aircraft co-pilot, augmenting human work while keeping humans in command. Drawing on human–computer interaction (HCI) and human factors engineering (HFE), the authors argue for human-centered design that preserves operators' control authority and situational awareness and allows them to keep developing their competencies.
The aviation analogy effectively conveys the need for high-level supervision, clear lines of responsibility, and intelligible explanations in critical situations. Evidence from automation incidents, including China Airlines Flight 006 and broader patterns of overuse of automated systems, offers valuable lessons about the problems AI integration will face when working with humans. The authors identify four main risks: degraded attention, difficult handoffs of control authority, erosion of operator skills, and blind reliance on automation.
The text makes a strong case for human-led AI development that emphasizes transparent operation, adaptive behavior, and clearly defined roles and responsibilities. The authors propose three strategies for building trust and engagement: combining complementary human and AI capabilities, making AI systems teachable, and tailoring automation to users' specific roles. They stress that AI should strengthen human capabilities rather than diminish them; future systems, in their view, should serve as tools for learning rather than instruments of management.
The authors take a largely optimistic stance, but they caution against further development without sustained interdisciplinary research. Their framework gives designers and policymakers an organized structure for ensuring that AI implementations enhance, rather than replace, human capabilities.