Digital Evidence in the Age of AI

Background

In September 2024, a Washington, DC roundtable entitled "International Justice, Evidence, and Generative Artificial Intelligence" provided valuable insights and a foundation for addressing digital evidence challenges. As we navigate major geopolitical transitions and policy shifts, perspectives from beyond the United States have become increasingly crucial. Courts across Europe, Africa, and Asia are already confronting questions about the admissibility and reliability of digital evidence that may have been manipulated or synthetically generated, yet they lack consistent frameworks to guide their decisions.

In the spring of 2025, at a new peak of uncertainty for many sectors and communities, there is a critical need to expand the dialogue about AI and evidence to diverse international contexts. Until recently, US actors led the discussion around AI-generated content and digital evidence; however, the rapidly evolving techno-legal landscape poses complex challenges for atrocity crimes prosecutions worldwide that cannot be addressed through a US-centric approach alone.

As generative AI technologies become more sophisticated and accessible globally, the international justice community urgently requires standards and protocols that incorporate diverse legal traditions, reflect regional experiences, and account for varying levels of technical capacity across jurisdictions.

Proposed Initiative

Fénix Foundation, supported by the Starling Lab, proposes to develop an ‘International Protocol on AI Evidence in International Crimes Prosecutions’ through a collaborative process involving experts from diverse legal traditions and regional perspectives. 

As a short-term response to the worldwide need for guidance, we hosted a small, closed-door meeting in July 2025 to discuss preliminary guiding principles for courts. These principles will include:

  • Digital evidence standards transferable to evidence resulting from AI processes; 

  • Practical advocacy techniques for handling claims of deepfakes; 

  • Verification approaches from the field of digital provenance (see the illustrative sketch after this list).
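
To make the digital provenance item above concrete, the sketch below shows one basic building block of such verification: checking that a submitted media file is bit-for-bit identical to the version registered at the time of collection by comparing cryptographic hashes. This is a minimal illustration only; the file path and registered hash value are hypothetical, and real provenance frameworks (for example, cryptographically signed content credentials) involve considerably more than a single hash comparison.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values for illustration: a digest recorded when the file was
# first collected, and the path of the file as later submitted for review.
registered_hash = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
submitted_file = Path("evidence/video_0042.mp4")

if sha256_of_file(submitted_file) == registered_hash:
    print("Digest matches the registered value: the file is unchanged since collection.")
else:
    print("Digest mismatch: the file differs from the originally registered version.")
```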

Our next step is to solicit broader expert input. This feedback will serve as foundational research for the International Protocol, whose development will begin at the start of 2026.