The Upper Tribunal’s Warning on ChatGPT: What It Means for Confidential Legal Work
AI is becoming a routine part of legal work, from summarising documents to drafting client-friendly explanations. If you have ever been tempted to paste a client email into a tool for a quick rewrite, you are not alone. A recent Upper Tribunal decision is a reminder that convenience can quickly turn into a confidentiality problem, or an accuracy problem, or both.
In a ruling published on 24 February 2026, the Upper Tribunal (Immigration and Asylum Chamber) warned lawyers about putting client documents into ChatGPT and other public AI tools. The Tribunal also highlighted a rising issue in filings: fictitious case citations that look convincing but do not exist.

Why uploading client material to public AI tools is risky
One of the matters involved a solicitor who accepted that he had uploaded Home Office decision letters into ChatGPT to generate summaries for clients. He also said he had pasted draft client emails into the tool to improve the wording. He later accepted that this was a data breach and said he would notify clients and self-report to his regulators.
The Tribunal’s warning is plain. Once confidential client information is placed into an open, public AI service, you may lose control over how that information is processed or retained. That can breach confidentiality and may raise privilege concerns. The judge said that a regulated lawyer or firm who does this may need to report the incident and should consider taking advice from the Information Commissioner’s Office.
ChatGPT-invented case law
The same decision dealt with two “Hamid” hearings, where the Tribunal addresses serious concerns about representatives’ conduct. In both matters, the Tribunal was presented with authorities that were not real. The judges described how this sends them on a needless search, consuming limited judicial resources. They also noted a noticeable increase in the second half of 2025 in fictitious authorities being cited in both appeals and judicial review work.

The Tribunal has now updated its forms so legal representatives must confirm by a statement of truth that any authority cited exists, can be located using the citation provided, and supports the proposition relied on.
Supervision and accountability do not go away
A key theme in the ruling was supervision. The Tribunal stressed that it does not matter whether a citation error appears because someone used an AI tool, relied on an old template, or drafted too quickly under pressure. The qualified professional with conduct of the matter is expected to check what is filed, ensure it is accurate, and make sure the tribunal is not misled.
AI does not create a new category of mistake. It makes familiar mistakes easier to produce at speed, and more persuasive in tone. That is why checking and supervision matter more, not less.
Using AI / ChatGPT safely in legal workflows
If your firm is adopting AI tools such as ChatGPT, a defensible approach usually comes down to three habits.
First, draw a bright line around client data. If a tool is public, not approved, or not contractually configured to keep information inside a controlled environment, do not paste in client documents, identifiable details, or privileged advice.
Second, treat ChatGPT output as drafting assistance, not authority. AI can help improve clarity and structure, but it should not be relied on to generate legal citations or propositions. If a tool suggests an authority, locate it in your usual research platforms, read it, and confirm it truly supports the point.

Third, make review routine. A short, consistent check before filing is far cheaper than remedial hearings, regulator engagement, and reputational harm, especially where junior staff are drafting under supervision.
How Tremark Associates supports best practice
Tremark Associates handles sensitive instructions every day for legal and corporate clients, often on time-critical matters where defensibility matters. Our accreditations and compliance framework support best practice, including ISO 27001 for information security management, ISO 9001 for quality management, and British Standard BS 102000 for the provision of investigative services.
We also hold Cyber Essentials and FSQS accreditation, and we are a corporate member of the Association of British Investigators. These standards drive documented processes, staff training, access controls, and regular independent auditing, so the information you share with us is handled carefully and consistently.

The takeaway
The Tribunal’s warning is a useful line in the sand. AI tools such as ChatGPT can be helpful, but they do not dilute professional duties. Keep confidential information out of public tools, verify anything that looks like a legal authority, and make supervision real rather than assumed. Done properly, AI can support better service. Done casually, it can create risk faster than any team can manage it.