There is no single canonical definition, but there is a strong family resemblance across standards, policy, and scholarship:
Responsible AI is the design, development, deployment, and governance of AI systems in ways that are lawful, ethical, and technically robust, so that they support human rights and societal values rather than undermining them.
(adapted from Dignum 2019; High-Level Expert Group on Artificial Intelligence 2019; U.S. Department of Commerce 2023)
Across frameworks, the technical dimensions of ‘Responsible AI’ often include:
Note
I’ll be using the term ‘paradox’ somewhat loosely: these are tensions and trade-offs rather than strict logical contradictions.
AI is increasingly treated as infrastructure.
But most of this infrastructure is:
Paradox 1
Public institutions (universities, hospitals, governments) now rely on infrastructure they don’t own, can’t fully inspect, and struggle to govern.
These debates point to concrete risks:
You might recognize these patterns:
These are not just cases of “bad procurement”: they illustrate a structural tension between public accountability and privately controlled AI infrastructure.
Questions for Responsible AI as infrastructure:
Responsible AI conversations often emphasize both privacy and fairness. At first glance, these values align. But in practice, they can be in tension:
Paradox 2.1
The more we protect privacy by not collecting sensitive data, the harder it becomes to see and correct bias. But the more data we collect for fairness audits, the more we risk privacy harms.
Recent empirical work highlights complex interactions among privacy, accuracy, and fairness in machine learning:
Paradox 2.2
Taken together, these findings suggest there is no free lunch: privacy mechanisms can protect individuals, but they may also mask or worsen structural inequities if deployed without fairness analysis.
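To make the trade-off concrete, here is a minimal sketch (Python, with made-up group sizes, error counts, and privacy budget) of a fairness audit released under the Laplace mechanism: a noise scale that barely perturbs the majority group’s error rate can swamp the minority group’s, which is exactly where disparities tend to hide.

```python
# Illustrative only: hypothetical audit counts, epsilon chosen to make the effect visible.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical audit data: group size and number of model errors per group.
groups = {
    "majority": {"n": 10_000, "errors": 800},  # true error rate 8%
    "minority": {"n": 50,     "errors": 10},   # true error rate 20%
}

epsilon = 0.1       # privacy budget spent on the released counts
sensitivity = 1.0   # one person changes an error count by at most 1

for name, g in groups.items():
    # Laplace mechanism: add noise with scale sensitivity / epsilon to the raw count.
    noisy_errors = g["errors"] + rng.laplace(scale=sensitivity / epsilon)
    noisy_rate = noisy_errors / g["n"]
    # Noise on the *rate* scales as 1 / (epsilon * n): negligible for the large
    # group, comparable to the true rate itself for the small group.
    rate_noise_scale = sensitivity / (epsilon * g["n"])
    print(f"{name:9s} true rate={g['errors'] / g['n']:.3f}  "
          f"noisy rate={noisy_rate:.3f}  (noise scale ~{rate_noise_scale:.3f})")
```

The point is not that differential privacy rules out auditing, but that the privacy budget has to be set against the smallest group you need to measure; otherwise the audit is precise exactly where it matters least.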
Explainable AI (XAI) is often presented as if explanation and trust were synonymous.
But human cognition is messy:
Paradox 3
Explanations designed to make AI more transparent can exacerbate overconfidence, confusion, or bias, especially when they are incomplete, misleading, or poorly aligned with human decision-making.
Empirical studies have uncovered several patterns:
TL;DR: explanations are interventions in human cognition, not neutral windows onto the model.
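As a concrete illustration of the “misleading explanation” failure mode, the sketch below (illustrative Python, not drawn from any cited study) explains a nonlinear classifier with a LIME-style local linear surrogate and then checks how faithful that surrogate is in the neighbourhood it claims to describe.

```python
# Illustrative only: toy XOR-like data, arbitrary model and perturbation choices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Model with a strongly nonlinear decision surface (label depends on a feature interaction).
X = rng.normal(size=(2_000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Instance to explain, and a perturbed neighbourhood around it.
x0 = np.array([0.1, -0.1])
neighbourhood = x0 + rng.normal(scale=0.5, size=(500, 2))
p = model.predict_proba(neighbourhood)[:, 1]

# Local linear surrogate: the object people read as "the explanation".
surrogate = Ridge(alpha=1.0).fit(neighbourhood, p)
fidelity = r2_score(p, surrogate.predict(neighbourhood))

print("surrogate coefficients:", surrogate.coef_)
print(f"local fidelity (R^2): {fidelity:.2f}")  # low fidelity => the explanation misdescribes the model
```

On this interaction-driven surface the surrogate’s coefficients come out near zero, as if neither feature mattered, while the low R² shows that the tidy linear story does not capture what the model is actually doing: an explanation that looks reasonable can still be unfaithful.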
Towards more responsible use of XAI:
Across these paradoxes, a pattern emerges:
Some cross-cutting questions to carry into your own work:
🤖 Note
Thank you.
