(Editorial) Designing AI Using a Human-Centered Approach: Explainability and Accuracy Toward Trustworthiness

In the rapidly evolving world of artificial intelligence (AI), the IEEE Transactions on Technology and Society recently published a special issue that delves into the heart of AI’s most pressing challenges and opportunities. The guest editorial introducing the issue, cited below, has garnered widespread attention.

J. R. Schoenherr, R. Abbas, K. Michael, P. Rivas and T. D. Anderson, “Designing AI Using a Human-Centered Approach: Explainability and Accuracy Toward Trustworthiness,” in IEEE Transactions on Technology and Society, vol. 4, no. 1, pp. 9-23, March 2023, doi: 10.1109/TTS.2023.3257627.

The Essence of the Special Issue

This special issue comprises eight thought-provoking papers that collectively address the multifaceted nature of AI. The journey begins with a reconceptualization of AI and leads into discussions of the pivotal role of explainability and accuracy in AI systems. Together, the papers emphasize that designing AI with a human-centered approach while recognizing the importance of ethics is not a zero-sum game.

Key Highlights

  1. Reconceptualizing AI: Clarke, a Fellow of the Australian Computer Society, revisits the original conception of AI and proposes a fresh perspective, emphasizing the synergy between human and artifact capabilities.
  2. The Challenge of Explainability: Adamson, a Past President of the IEEE Society on Social Implications of Technology, delves into the complexities of AI systems, highlighting the concealed nature of many AI algorithms and the need for post-hoc reasoning.
  3. Trustworthy AI: Petkovic underscores that trustworthy AI requires both accuracy and explainability. He emphasizes the importance of explainable AI (XAI) in ensuring user trust, especially in high-stakes applications.
  4. Bias in AI: A team of researchers, including Nagpal, Singh, Singh, Vatsa, and Ratha, evaluates the behavior of face recognition models, shedding light on potential biases related to age and ethnicity.
  5. AI in Healthcare: Dhar, Siuly, Borra, and Sherratt discuss the challenges and opportunities of deep learning in the healthcare domain, emphasizing the ethical considerations surrounding medical data.
  6. AI in Education: Tham and Verhulsdonck introduce the “stack” analogy for designing ubiquitous learning, emphasizing the importance of a human-centered approach in smart city contexts.
  7. Ethics in Computer Science Education: Peterson, Ferreira, and Vardi discuss the role of ethics in computer science education, emphasizing the need for emotional engagement to understand the potential impacts of technology.

A Call to Action

As guest editors deeply engaged in human-centric approaches to AI, we challenge all stakeholders in the AI design process to consider the multidimensionality of AI. It is crucial to move beyond the trade-off mindset and prioritize both accuracy and explainability. If a decision made by an AI system cannot be explained, especially in critical sectors like finance and healthcare, should it even be proposed?

This special issue is a testament to the importance of ethics, accuracy, explainability, and trustworthiness in AI. It underscores the need for a human-centered approach to designing AI systems that benefit society. For a deeper understanding of each paper and to explore the insights shared by the authors, check out the full special issue in IEEE Transactions on Technology and Society.