International arbitration in 2024

Generative AI: opportunities and risks in arbitration

By: Elliot Friedman, Marta García Bel, Veronika Timofeeva, Desmond Chong

IN BRIEF
AI is already used in many parts of arbitration practice, including in managing and reviewing large batches of documents and preparing chronologies. The rapid development of more advanced forms of AI, such as generative AI (GenAI) and large language models (LLMs), presents new opportunities and risks in the arbitration space.

The use of artificial intelligence (AI) in legal services is not new. According to the 2023 Wolters Kluwer Future Ready Lawyer Survey Report, 73 percent of surveyed legal professionals expect to integrate GenAI into their legal work in 2024. Similarly, many companies are expanding their use of GenAI in their operations and legal departments. 

How can AI benefit international arbitration?

AI is already being used in international arbitration in several key areas.

  • Dispute prevention: AI is being used for contract management and execution, mapping out potential risks, and even flagging contract breaches. In the construction industry, for example, AI is being used to automate the design process, optimize schedule management and cost estimation, and anticipate delays and risks, which can help parties avoid or mitigate delay and disruption claims.
  • Arbitrator selection: Existing AI tools can assist parties with arbitrator selection by synthesizing data relating to past decisions, tendencies, and expertise. We anticipate that new tools will soon be developed that will dig even deeper into these and other factors, making the arbitrator selection process less subjective and word-of-mouth-based and more objective and data-driven, hopefully leading to more diverse appointments.
  • Management of arbitration proceedings: Arbitral institutions such as the ICC and the AAA/ICDR are either already using or are considering using AI to improve internal processes, save time and costs, and enhance procedural efficiency in the management of arbitration proceedings.
  • Drafting of awards: Several judges in different jurisdictions, including in the UK, Colombia, Brazil, India and Taiwan, have reportedly used GenAI when drafting decisions, or are developing AI tools to assist with judgment drafting. There have not yet been any public reports of arbitrators relying on GenAI, but we expect that to change soon. The potential efficiency gains are obvious, but there are risks associated with decision-makers relying on AI, some of which we discuss below.

Risks of AI in international arbitration

As with all innovative technologies, the use of AI also presents new risks.

  • Biases: As AI tools have been developed by humans, it is important to implement safeguards to mitigate the potential biases of their creators and of the underlying data set on which they have been trained. Furthermore, since most commercial awards are not public, the data on which AI tools rely may be incomplete.
  • Risks of ‘hallucinations’: Outputs generated by an AI model can become untethered from the source materials, including, for example, the user’s prompts and input reference texts. There has, however, been continued effort and there have been technical breakthroughs across the AI and academic communities to detect, measure and mitigate such risks. Within the context of dispute resolution, two New York attorneys were sanctioned in 2023 after filing a legal brief in federal court that referred to non-existent case law supplied by ChatGPT. In response, some courts in the US and Canada now require parties to disclose the use of AI, or to certify either that no GenAI tool was used in drafting or that all content created by GenAI was reviewed and verified by a human. Other courts, such as those in New Zealand, consider it unnecessary to disclose the careful use of AI. As noted below, at least one arbitration body is developing guidelines addressing the use of AI in arbitration proceedings.
  • Privacy and confidentiality: Publicly available AI tools may raise confidentiality concerns where they store confidential data inputted by the user. Additionally, other legal and reputational risks may be associated with AI’s use, which could result in disputes related to copyright or personal data infringements, or negligence or liability claims.
  • Integrity of proceedings and evidence: Advancements in AI could heighten the risks of manipulated or false evidence, such as ‘deepfakes’, being submitted into the record of arbitration proceedings.
  • Due process issues: If arbitrators delegate their decision-making function (or part of it) to an AI tool, and do so in an undisclosed way, this could raise due process issues and, at an extreme, possibly give rise to annulment or vacatur arguments.

AI will find its way into international arbitration practice, as it has into most other fields. AI has great capacity to reduce costs and increase accuracy and efficiency, but it also comes with risks – as two New York lawyers recently experienced when the AI-generated case law in their brief turned out not to exist. As in many other areas, it will pay to approach AI’s role in international arbitration cautiously and with an open mind.

Elliot Friedman
Freshfields Partner and Head of International Arbitration – Americas

AI regulation in arbitration

Recognizing these and other concerns, legislators and governments are considering how to regulate AI. A key development is the EU’s “AI Act”, which will regulate the use of AI in EU Member States. The Act – which is expected to apply from mid-2026 – aims to protect fundamental rights by putting limits on high-risk AI systems and provides for transparency requirements for general-purpose AI systems.

The most notable public initiative addressing the use of AI in arbitration is currently the Guidelines on the Use of AI in Arbitration, drafted by a taskforce of the Silicon Valley Arbitration and Mediation Center and published in August 2023. These guidelines seek to reflect best practices and highlight risks associated with the use of AI in arbitration proceedings.

The draft is still undergoing a public consultation process, and its final version, expected to be released in 2024, will incorporate feedback from the arbitral community and institutions on certain controversial issues, including whether, and to what extent, parties and arbitrators should have a general obligation to disclose the use of AI in arbitration proceedings.

While AI presents incredible opportunities for innovation and efficiency in legal practice, its deployment must be measured and thoughtful in order to mitigate rapidly emerging risks: maintaining accuracy and credibility before tribunals, and preserving privacy rights and the confidentiality of sensitive information.

Tim Howard
Freshfields Partner and US Head of Data Security

Practical considerations for the use of AI in arbitration

In addition to general considerations to keep in mind when using AI (see our insights on top actions general counsel should take and GenAI considerations for lawyers), parties involved in arbitration proceedings should consider using AI to help prevent disputes. Investing in appropriate AI tools can help minimize, manage and monitor contract risks, and can assist with implementing strategies to mitigate them.

Addressing how AI is used in an arbitration proceeding from an early stage will be important. Parties and arbitrators should consider agreeing on the principles governing AI’s use during proceedings and incorporating those principles in the first procedural order. This will promote the transparency and legitimacy of the arbitration process, establish appropriate guardrails, and avoid costly and lengthy procedural battles.

Perhaps most importantly, before using AI tools in arbitration proceedings, companies and counsel should all understand how the tools work, the data they rely on, and the risks involved in their use.