Lawyers Using AI: Survey Sheds Light
AI is already part of legal work, even in teams that would not call themselves “legal tech forward”. The question is no longer whether lawyers will use AI, but where it belongs in the workflow and how client information stays protected as tools evolve.
Thomson Reuters’ Future of Professionals Report 2025 offers a helpful snapshot. In the survey, 80% of respondents said AI will have a high or transformational impact on their profession within the next five years, and 53% said their organisations are already seeing a return on investment linked to AI. In legal, the tone is largely positive: Thomson Reuters reports that 72% of legal professionals surveyed view AI as a force for good in their profession.

From experiment to everyday work
The survey results land because they map neatly to what many firms and in-house teams are seeing: AI is moving into core tasks, not only admin. Among legal professionals currently using AI tools, Thomson Reuters lists the most common use cases as document review (77%), legal research (74%), document summarisation (74%), drafting briefs or memos (59%), and drafting contracts (58%).
That mix also explains why the “time dividend” is the first metric people talk about. In the 2025 report, legal professionals surveyed expect AI to free up nearly 240 hours per year, up from 200 hours in the prior year’s research. Five hours a week back in the diary does not automatically translate into better outcomes, but it does create room for deeper analysis, faster turnaround, and more consistent client communication.
There is a UK lens too. Thomson Reuters has reported that 47% of UK corporate legal professionals now regularly use AI-powered tools to start or edit their work, about double the rate seen in UK law firms. That gap hints at a familiar challenge: adoption accelerates when procurement, training and controls are centralised, rather than left to individual preference.
GDPR, confidentiality and the “don’t paste it into ChatGPT” moment
The biggest risk most legal teams face is informal use. A prompt can contain personal data, privileged detail, or case strategy. If that prompt goes to a public tool outside your control, you may have created an unmanaged transfer, or worse. The Law Society’s guidance is plain: as a general rule, firms should not permit the use of personal data or client confidential information for testing, templating or similar activity on generative AI, and should use fictional data instead. The same guidance encourages firms to be able to answer practical questions about where data is processed, who is processing it, how it is stored, and who can access it.
The regulator view points the same way. The ICO has set out questions organisations should ask when developing or using generative AI, including purpose, limiting unnecessary processing, and how individual rights requests will be handled. The SRA has highlighted confidentiality and accountability, including the risk of staff using online AI tools on client matters and the fact that solicitors remain responsible for outputs when AI is involved.

The good news is that a workable approach does not need to be heavy. It does need to be specific. Many firms are landing on a framework like this:
- Clear rules on what cannot be entered. A short “red list” covering client identifiers, privileged content, special category data, and anything governed by a court order or NDA.
- Approved tools and a safe route for common tasks. Vetted platforms with enterprise controls, plus internal templates for summarising, outlining, drafting, and research support using non-confidential inputs.
- Human review and accountability. A defined check step for anything that influences advice, filings, or client deliverables, with responsibility sitting with the supervising lawyer.
- Vendor diligence and record keeping. Notes that cover retention, access, locations of processing, and whether inputs are used for model improvement, along with a lightweight risk assessment per use case.
- Training that matches real work. Practical examples on redaction, prompt hygiene, and when to stop and escalate.
This is where firms quietly make progress: not by banning AI, but by making the compliant path the easiest path.
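To make the "red list" idea concrete, here is a minimal sketch of a pre-submission check that flags prompts containing restricted material before they leave a controlled system. This is illustrative only, not a production data-loss-prevention tool: the pattern names, regexes, and helper functions are hypothetical examples, and a real policy would be defined and maintained by the firm's risk team.

```python
import re

# Illustrative "red list" patterns -- hypothetical examples only.
# A real firm would maintain its own list of identifiers and markers.
RED_LIST_PATTERNS = {
    "client_matter_ref": re.compile(r"\b[A-Z]{3}\d{5}\b"),        # e.g. internal matter codes
    "privilege_marker": re.compile(r"legally privileged", re.IGNORECASE),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),        # UK National Insurance number shape
}

def red_list_hits(prompt: str) -> list[str]:
    """Return the names of any red-list rules the prompt triggers."""
    return [name for name, pattern in RED_LIST_PATTERNS.items()
            if pattern.search(prompt)]

def check_prompt(prompt: str) -> bool:
    """True if the prompt is clear to send to an approved external tool."""
    return not red_list_hits(prompt)
```

A check like this cannot catch everything (it will miss paraphrased strategy, for example), which is why the framework above pairs it with training and a human review step rather than relying on filtering alone.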
Where Tremark Associates fits
Tremark Associates works with law firms and legal teams on matters where discretion and data handling are non-negotiable. We’re ISO 27001 certified, which means our approach to information security is structured and auditable, covering how information is handled, access is controlled, and risks are managed.
As legal teams adopt AI, the security expectations around client data don’t change. If anything, they become more visible. AI can support productivity, but it also creates new ways for sensitive information to leak through informal use. The most practical step firms can take is to set clear internal rules on what can be shared with AI tools and to keep confidential client information within controlled systems.

Conclusion
The Thomson Reuters survey results paint a clear picture: lawyers expect meaningful change, many organisations can already point to ROI, and the everyday use cases are mainstream. The teams that get the most out of AI will be the ones that pair adoption with disciplined data handling.
FAQs
Can we use generative AI for drafting and summarising?
Yes, with guardrails. Start with non-confidential inputs and define a review step for outputs that affect advice or client deliverables.
Do we need to tell clients we use AI?
Often, transparency helps. Some firms update engagement terms or matter plans, especially where AI supports material parts of delivery.
What is a simple rule for staff?
If it identifies a client, reveals strategy, or is legally privileged, it does not go into a public AI tool. Use approved systems or fictionalised examples.
What should we ask an AI vendor before buying?
Ask where data is processed and stored, who can access it, how long it is retained, whether inputs are used for training, and what audit logs or independent assurance are available.