Essay Competition

The Redress AI Challenge

Exploring fair, accountable, and historically grounded responses to harms caused or amplified by AI systems. Redress, in this context, refers to principled processes for acknowledging harm, restoring dignity, and establishing remedies that endure.

Deadline: 28 December 2025
Word Count: 1,200–1,800 words
Prizes: $150 • $100 • $50
Overview

What we're asking for

What is Redress?

The Fairwork Redress Challenge invites essays that propose clear, rigorous approaches for addressing harm caused by artificial intelligence. Redress refers to the set of measures taken after harm has occurred to acknowledge injury, repair damage, restore rights, and reform institutions so that violations do not recur.

Redress focuses on responsibility, remedy, and restoration once those harms have materialized.

Call for Essays

This competition invites essays examining how societies should address harm caused by artificial intelligence across three temporal phases: ex-ante (before deployment), present-ante (during deployment), and post-ante (after harm occurs). Participants are encouraged to explore frameworks for prevention, oversight, and restitution, informed by contemporary AI governance [1, 2], labour accountability research [3, 11], human-rights practice [4], and transparency standards for machine-learning systems [5].

Submissions may explore structural or distributive harms caused by AI, including extreme inequality, poverty, or agentic inequality [14], as well as mechanisms for acknowledging harm, providing restitution, and establishing ethical redress. Essays may draw on established findings on algorithmic bias [6, 7], contestability and due-process safeguards [8], remediation and restitution debates [9], and economic benefit-sharing proposals such as the Windfall Clause [10]. Submissions should present clear, well-reasoned proposals grounded in the realities of modern AI development and deployment.

Leadership

Review & Advisory Team

This challenge is guided by scholars and practitioners working at the intersection of AI governance, transitional justice, and ethical technology.

Jonas Kgomo

AI Research & Policy

Jonas is an AI researcher and policy strategist focused on technology, governance, and societal resilience in African contexts. He leads Equiano Institute and Fairwork, developing frameworks for ethical AI deployment, deliberative technologies, and pluralistic decision-making.

His work spans AI policy and governance, ethnographic research on AI systems, deliberative practices, economics of AI diffusion, and the intersection of technology and institutional capacity in the Global South.

Diana Acosta-Navas

Transitional Justice & AI Ethics

Diana is a scholar with fellowships at Stanford University and Harvard, specializing in transitional justice, applied ethics, and political philosophy. Her work examines how transitional justice institutions restore victims' dignity, promote reconciliation, and address systemic abuses in post-conflict societies.

She recently received a Summer Research grant for a project on the ethics of AI agents and AI-driven augmentation. Her research explores the role of digital platforms in fostering healthy public debate and the human-rights implications of emerging technologies in post-conflict settings.

Call for Essays

How societies address AI harm across time

Before Deployment

Ex-Ante

Preventive safeguards, responsible design practices, and early risk controls.

During Deployment

Present-Ante

Practical mechanisms for detecting, monitoring, and mitigating harms as systems operate, aligned with transparency norms.

After Harm Occurs

Post-Ante

Restitution, acknowledgment, model-level correction, and institutional reform drawing on transitional justice frameworks.

Optional Component

Ground your essay in a diegetic story

Before writing your redress proposal, illustrate AI harms through a human- or institution-centered narrative. Use diary entries, artifacts, audio, and visual elements to make the harms tangible.

What does "diegetic" mean?

Diegetic refers to elements that exist within the world of your story. A diegetic story uses first-person narratives, documents, recordings, and artifacts as if they belong to the world you're describing. Think of it as showing the harms through the eyes and voices of those experiencing them—a labour organizer's voice memo, a worker's diary entry, or an official's email thread. It's not analysis from outside; it's testimony from within.

How to approach it

  1. Choose a subject

    Either an individual (worker, end-user, marginalized person) or an institution (school, ministry, labour office, platform, regulatory body) affected by AI.

  2. Write diary-style entries

    1–2 entries totaling 300–500 words showing a 'day in the life' and the harms experienced or operational challenges encountered.

  3. Record or transcribe audio (optional)

    A voice memo, testimony, or narrative monologue (2–3 minutes) capturing the emotional and lived reality of the harm. Include a transcript with your submission.

  4. Add one visual/artifact (optional)

    e.g., a mock news headline, notification, flowchart of an AI decision pathway, or institutional workflow.

  5. Keep it realistic

    Focus on plausible harms, gaps, or oversight failures, and the need for redress.

  6. Reference your story in your essay

    Use it to illustrate why your proposed redress mechanisms matter.

Why this matters

Diegetic stories ground abstract harms in lived experience. They show—rather than tell—why your redress mechanisms are necessary. A labour inspector's frustrated log, a worker's voice memo, or a platform's escalation flowchart makes the stakes tangible and urgent.

Format options

  • Written Diary

    300–500 word entries dated and narrated in first person

  • Audio Narrative

    Voice memo (2–3 min) + transcript; can be read aloud or conversational

  • Hybrid

    Diary entries + voice memo + visual artifact (screenshot, flowchart, scan)

  • Document Archive

    Series of memos, emails, forms, logs that collectively tell the story

Optional inspiration

Review What Is Design Fiction? and the Fairwork AI Zine for examples of storytelling, diary entries, audio, and visual artifacts.

Sample Essay

A Day Without Access

Diegetic diary, news, and media entries can show algorithmic harm; the essay then proposes redress mechanisms.

Story subjects & examples:

  • A financial institution whose algorithmic systems are responsible for lending decisions
  • A gig worker's recorded testimony about restricted platform access
  • A ministry official's field notes on the economic impacts of AI systems
  • A healthcare worker's audit notes on AI misdiagnoses

March 15, 2028 · 6:47 AM

Denial of Agency

I woke up this morning ready to start my workday, but Workplace AI, the AI assistant managing my tools, locked me out. It claimed I was misclassified and blocked access to the agentic systems I rely on daily. Emails to support are filtered or delayed by the same AI. Deadlines are approaching, and I cannot complete my tasks. The review process will take 10 business days; I have three deliverables due before then. The platform's support portal offers no escalation option.

March 16, 2028 · 9:15 AM

Food Security Agronomist

While farming this month, I noticed satellite forecasts for crops were unusually slow. Predictive agents were throttled in my region, Central Africa, slowing updates on rainfall, soil moisture, and pest risks. The satellite AI system managing these forecasts had deprioritized our data streams, citing "low impact" compared to commercial farming regions. There’s no human oversight to correct it, and planning decisions are now uncertain.

March 16, 2040 · 3:22 PM

Healthcare Audit Agent

While auditing old patient records, I discovered that MedicAI, the system managing telehealth care since 2028, had misclassified several comorbid conditions in my file. Readings for diabetes were recorded incorrectly, leading to slightly off medication adjustments over the past 12 years. I trusted the system implicitly. It wasn’t life-threatening, but minor complications accumulated unnoticed. Correcting these records now is impossible, and transparency is necessary to prevent recurrence.

What would redress look like?

  • Now (Present-Ante)

    • 48-hour emergency access while appeal is reviewed
    • Written explanation of why access was restricted
    • Human appeal process, not automated rejection
  • If Wrongly Restricted (Post-Ante)

    • Full compensation for lost income during lockout
    • Public acknowledgment that the restriction was in error
    • Removal from any 'flagged users' registry
  • Systemic Reform (Ex-Ante)

    • Algorithm retrained to account for freelancer mobility patterns
    • Mandatory human review before account suspension
    • Public audit of algorithmic decisions quarterly
Your essay could draw inspiration from similar stories and experiences. Propose the policies, mechanisms, and media coverage that turn a story like this into systemic change.

Evaluation Criteria

How we assess submissions

  • Ex-Ante Measures

    Strength and clarity of proposed preventive safeguards and responsible design practices informed by global guidance [1, 2].

  • Present-Ante Measures

    Practical mechanisms for detecting and mitigating harms aligned with transparency norms [5] and affected-community participation [3, 11].

  • Post-Ante Redress

    Quality of proposals for restitution, acknowledgment, and institutional reform drawing on human-rights and transitional-justice frameworks [4, 12, 13].

  • Analytical Rigor & Clarity

    Well-structured reasoning supported by credible sources. Concise, coherent, and original writing.

How to Submit

Submission & eligibility

Requirements

  • Length: 1,200–1,800 words
  • Format: PDF with separate cover page (name, affiliation, contact)
  • Submit to: redress@fairworkai.org

Who Can Enter

  • ✓ Individuals affiliated with academic institutions, research centres, non-profits
  • ✓ Individual authorship only
  • ✗ Essays with more than 80% AI-generated content (by token count) are disqualified

Disqualification Grounds

  • Plagiarism, fabricated citations, discriminatory content
  • Outside word count or submitted late
  • Insufficient human authorship or critical analysis
On AI Use

AI is a tool for thinking, not replacement for thought

AI use is encouraged for articulation, critical review, and preparation. However, heavy AI dependency (more than 80% of tokens generated by AI) is not permitted, and such essays will be automatically disqualified. We value your original thinking and grounded analysis.

For guidance on responsible AI use in your work, see Anthropic's candidate AI guidance.

References

Sources cited in this challenge

[1] OECD AI Principles
[2] UNESCO Recommendation on the Ethics of AI
[3] Fairwork research on labour harms
[4] OHCHR truth-seeking and reparations
[5] Model Cards for Model Reporting
[6] Buolamwini & Gebru on algorithmic bias
[7] Barocas & Selbst on disparate impact
[8] Citron & Pasquale on due process
[9] Kaminski & Malgieri on AI restitution
[10] Windfall Clause proposal
[11] Graham & Woodcock on platform-work harms
[12] Hayner, Unspeakable Truths (Routledge)
[13] Sikkink on institutional reform (W.W. Norton)
[14] Sharp et al. (2025), Agentic Inequality