The Redress AI Challenge
Exploring fair, accountable, and historically grounded responses to harms caused or amplified by AI systems. Redress, in this context, refers to principled processes for acknowledging harm, restoring dignity, and establishing remedies that endure.
What is Redress?
The Fairwork Redress Challenge invites essays that propose clear, rigorous approaches for addressing harm caused by artificial intelligence. Redress refers to the set of measures taken after harm has occurred to acknowledge injury, repair damage, restore rights, and reform institutions so that violations do not recur.
It centres on responsibility, remedy, and restoration once those harms have materialized.
Call for Essays
This competition invites essays examining how societies should address harm caused by artificial intelligence across three temporal phases: ex-ante (before deployment), present-ante (during deployment), and post-ante (after harm occurs). Participants are encouraged to explore frameworks for prevention, oversight, and restitution, informed by contemporary AI governance 1,2, labour accountability research 3,11, human-rights practice 4, and transparency standards in machine-learning systems 5. Submissions may examine structural or distributive harms, including extreme inequality, poverty, and agentic inequality 8,9, as well as mechanisms for acknowledging harm, providing restitution, and establishing ethical redress. Essays may draw on established findings on algorithmic bias 6,7, contestability and due-process safeguards 8, debates over remediation and restitution 9, and economic benefit-sharing proposals such as the Windfall Clause 10. Submissions should present clear, well-reasoned proposals grounded in the realities of modern AI development and deployment.
Review & Advisory Team
This challenge is guided by scholars and practitioners working at the intersection of AI governance, transitional justice, and ethical technology.
Jonas Kgomo
AI Research & Policy

Jonas is an AI researcher and policy strategist focused on technology, governance, and societal resilience in African contexts. He leads Equiano Institute and Fairwork, developing frameworks for ethical AI deployment, deliberative technologies, and pluralistic decision-making.
His work spans AI policy and governance, ethnographic research on AI systems, deliberative practices, economics of AI diffusion, and the intersection of technology and institutional capacity in the Global South.
Diana Acosta-Navas
Transitional Justice & AI Ethics

Diana is a scholar and fellow at Stanford University and Harvard, specializing in transitional justice, applied ethics, and political philosophy. Her work examines how transitional justice institutions restore victims' dignity, promote reconciliation, and address systemic abuses in post-conflict societies.
She recently received a Summer Research grant for a project on the ethics of AI agents and AI-driven augmentation. Her research explores the role of digital platforms in fostering healthy public debate and the human-rights implications of emerging technologies in post-conflict scenarios.
How societies address AI harm across time
Ex-Ante
Preventive safeguards, responsible design practices, and early risk controls.
Present-Ante
Practical mechanisms for detecting, monitoring, and mitigating harms as systems operate, aligned with transparency norms.
Post-Ante
Restitution, acknowledgment, model-level correction, and institutional reform drawing on transitional justice frameworks.
Ground your essay in a diegetic story
Before writing your redress proposal, illustrate AI harms through a human- or institution-centered narrative. Use diary entries, artifacts, audio, and visual elements to make the harms tangible.
What does "diegetic" mean?
Diegetic refers to elements that exist within the world of your story. A diegetic story uses first-person narratives, documents, recordings, and artifacts as if they belong to the world you're describing. Think of it as showing the harms through the eyes and voices of those experiencing them—a labour organizer's voice memo, a worker's diary entry, or an official's email thread. It's not analysis from outside; it's testimony from within.
How to approach it
1. Choose a subject
Either an individual (worker, end-user, marginalized person) or an institution (school, ministry, labour office, platform, regulatory body) affected by AI.
2. Write diary-style entries
1–2 entries totaling 300–500 words showing a 'day in the life' and the harms experienced or operational challenges encountered.
3. Record or transcribe audio (optional)
A voice memo, testimony, or narrative monologue (2–3 minutes) capturing the emotional and lived reality of the harm. Include a transcript with your submission.
4. Add one visual/artifact (optional)
e.g., a mock news headline, notification, flowchart of an AI decision pathway, or institutional workflow.
5. Keep it realistic
Focus on plausible harms, gaps, or oversight failures, and the need for redress.
6. Reference your story in your essay
Use it to illustrate why your proposed redress mechanisms matter.
Why this matters
Diegetic stories ground abstract harms in lived experience. They show—rather than tell—why your redress mechanisms are necessary. A labour inspector's frustrated log, a worker's voice memo, or a platform's escalation flowchart makes the stakes tangible and urgent.
Format options
Written Diary
Dated entries of 300–500 words, narrated in the first person
Audio Narrative
Voice memo (2–3 min) + transcript; can be read aloud or conversational
Hybrid
Diary entries + voice memo + visual artifact (screenshot, flowchart, scan)
Document Archive
A series of memos, emails, forms, and logs that collectively tell the story
Optional inspiration
Review What Is Design Fiction? and the Fairwork AI Zine for examples of storytelling, diary entries, audio, and visual artifacts.
A Day Without Access
Diegetic diary, news, and media entries can show algorithmic harm; the essay then proposes redress mechanisms in response.
Story subjects & examples:
- March 15, 2028 · 6:47 AM · Denial of Agency
- March 16, 2028 · 9:15 AM · Food Security Agronomist
- March 16, 2040 · 3:22 PM · Healthcare Audit Agent
What would redress look like?
Now (Present-Ante)
- 48-hour emergency access while appeal is reviewed
- Written explanation of why access was restricted
- Human appeal process, not automated rejection
If Wrongly Restricted (Post-Ante)
- Full compensation for lost income during lockout
- Public acknowledgment that the restriction was in error
- Removal from any 'flagged users' registry
Systemic Reform (Ex-Ante)
- Algorithm retrained to account for freelancer mobility patterns
- Mandatory human review before account suspension
- Quarterly public audits of algorithmic decisions
How we assess submissions
Ex-Ante Measures
Strength and clarity of proposed preventive safeguards and responsible design practices informed by global guidance 1,2.
Present-Ante Measures
Practical mechanisms for detecting and mitigating harms aligned with transparency norms 5 and affected-community participation 3,11.
Post-Ante Redress
Quality of proposals for restitution, acknowledgment, and institutional reform drawing on human-rights and transitional-justice frameworks 4,12,13.
Analytical Rigor & Clarity
Well-structured reasoning supported by credible sources. Concise, coherent, and original writing.
Submission & eligibility
Requirements
- Length: 1,200–1,800 words
- Format: PDF with separate cover page (name, affiliation, contact)
- Submit to: redress@fairworkai.org
Who Can Enter
- ✓ Individuals affiliated with academic institutions, research centres, or non-profits
- ✓ Individual authorship only
- ✗ Predominantly AI-generated essays (>80% of tokens) are disqualified
Disqualification Grounds
- Plagiarism, fabricated citations, or discriminatory content
- Outside the word-count range or submitted late
- Insufficient human authorship or critical analysis
AI is a tool for thinking, not a replacement for thought
AI use is encouraged for articulation, critical review, and preparation. However, over-reliance on AI generation (>80% of tokens) is not permitted: predominantly AI-generated essays will be automatically disqualified. We value your original thinking and grounded analysis.
For guidance on responsible AI use in your work, see Anthropic's candidate AI guidance.