Law Society’s AI Risk Guide: What Solicitors Need to Know
The Law Society’s new guide on buying technology is a timely intervention for a profession that is already using AI in day-to-day practice. Many firms are now relying on AI tools for drafting, summarising, reviewing documents and analysing material. That makes this guidance important for far more than innovation teams or large firms with dedicated legal tech budgets. It speaks directly to partners, compliance teams, practice managers and fee earners who need to decide what can be used, how it should be used and where the risks sit.
The real issue is not whether AI has potential. It does. The issue is whether firms are adopting it with proper controls. The Law Society’s guide frames technology buying as a procurement and governance exercise, rather than a race to deploy the latest tool. In legal practice, a weak purchasing decision can quickly become a confidentiality problem, a regulatory problem or a client service problem.

Why this guidance matters now
One reason the guide matters is that AI use in law is no longer theoretical. It is already part of routine practice. That changes the risk profile for firms. The concern is not only that a firm might formally buy the wrong system. It is that lawyers and support teams may start using tools informally, without clear approval, without knowing how data is processed, and without any agreed method for checking outputs.
The Law Society’s answer is to slow the process down in the right places. The new guide sets out a practical sequence for firms: identify the business need, decide what type of technology is required, develop the brief, assess suppliers, negotiate contracts, embed the tool and then evaluate how it performs. That may sound basic, but it is exactly where many firms go wrong. Too often, procurement starts with a product demo and ends with enthusiasm. It should start with a defined problem and end with a defensible decision.
That point matters for smaller and mid-sized firms in particular. They may not have in-house technology specialists, but they still carry the same professional duties. A smaller firm does not get more room for error because a system was bought quickly or because the vendor sounded credible.

The biggest AI risks for solicitors
The guide has been published alongside a warning about unsafe AI adoption, and the risks identified are serious. Data protection is one of the most obvious. If lawyers are entering client or matter information into an AI system, the firm needs to know exactly where that data is going, who can access it, how long it is retained and whether it is used to train the provider’s systems. Those are not technical side questions. They go directly to confidentiality, client trust and compliance.
The Law Society’s wider AI guidance also makes the point that firms should be very cautious about entering confidential information into generative AI tools, particularly where they do not control the environment in which the tool operates. Free or public tools create obvious danger here. If the firm has no meaningful operational relationship with the provider, no negotiated safeguards and no clear visibility over data handling, the risk increases sharply.

Accuracy of AI tools
Accuracy is the second major issue. AI can produce outputs that look polished and authoritative while being wrong. In legal work, that is a dangerous combination. The courts have already had to address the misuse of AI-generated material, including fake citations. The professional position is straightforward. The solicitor remains responsible. Duties to the court and to the client do not change because a machine generated the first draft. Every output that matters still has to be checked, verified and owned by a human lawyer.
Bias is another concern that firms should take seriously. AI systems can reflect or amplify patterns in their training data. That creates risk where tools are used to assess, prioritise, classify or recommend. Even where a tool is being used for something that feels administrative, biased outputs can affect judgment, consistency and client experience.
There is also a contractual and ownership issue that deserves more attention than it usually gets. Some AI terms may allow providers to reuse input data or may create uncertainty over ownership of outputs. In a legal context, firms need clarity on control, confidentiality and intellectual property from the start. That should be addressed before purchase, not discovered after rollout.

What firms should ask before buying AI
The most useful aspect of the Law Society’s new guide is that it encourages better questions. Before a firm buys anything, it should define the business need in precise terms. What problem is this tool solving? Is it reducing admin time, improving document review, supporting knowledge management or helping triage work? If the answer is vague, the procurement process is not ready.
The next stage is supplier diligence. Firms should understand how the product works, what claims are backed by evidence and what level of human review is assumed. A good procurement process should also cover data processing, storage, access controls, integration with existing systems, cyber security protections, audit trails, service support, liability and exit arrangements.
Testing matters too. Firms should not trial AI products using live client data or real confidential material. The safer approach is to use fictional or sanitised examples and evaluate the product in a controlled way. That gives the firm a chance to assess performance without creating unnecessary exposure. It also helps reveal a common problem with product demonstrations. Vendors tend to show ideal scenarios. Firms need to know how the tool behaves in real legal workflows, under pressure, with imperfect inputs and across different types of matter.

Internal involvement is paramount. Procurement should not sit with one enthusiastic user or one external sales contact. IT, compliance, risk, operations and the lawyers who will actually use the system all need a voice. A tool that looks efficient in isolation can create problems later if it conflicts with existing systems, supervision arrangements or client obligations.

Implementation is where risk becomes real
Buying the tool is only the beginning. Once a system is introduced, the firm needs clear rules for use. That includes who can use it, for what tasks, with what level of supervision and with what restrictions on data entry. Training is essential, but training alone is not enough. Firms also need monitoring, periodic review and a route for escalating problems.
This is especially relevant in areas such as conveyancing and probate, where work can be process-heavy, time-sensitive and full of sensitive personal and financial information. These are exactly the conditions in which shortcuts can become habits. A tool that saves time on routine work may still create significant exposure if staff do not understand what should never be uploaded, what must always be checked and where human sign-off is required.
The Law Society’s message is sensible and measured. Innovation is not being rejected. It is being placed inside a framework of professional responsibility. That is where it belongs.

Conclusion
The Law Society’s new AI risk guidance should be read as a practical warning against careless procurement. If a law firm can explain why it is buying a tool, how it tested it, what contractual protections it secured, how data will be handled and who remains accountable for the output, it is on much stronger ground. If it cannot, the risk is already present.

FAQs
What is the Law Society’s new guide about?
It is a practical guide to buying new technology for legal practice. It is designed to help firms make structured decisions about identifying needs, comparing suppliers, negotiating contracts, implementing tools and reviewing whether they are working safely and effectively.
Does the guidance tell solicitors not to use AI?
No. The message is not to avoid AI. The message is to adopt it carefully, with proper procurement, clear policies, human oversight and a strong understanding of data, accuracy and accountability risks.
Who is responsible if an AI tool produces inaccurate legal material?
The solicitor and the firm remain responsible for the work they deliver. AI does not remove professional duties to clients or the court. Outputs must still be checked, verified and approved by a human lawyer.
Why is this especially relevant for conveyancing teams?
Because conveyancing involves high volumes of documentation, tight timelines and sensitive personal and financial data. That makes efficiency tools attractive, but it also raises the stakes where confidentiality, accuracy and supervision are concerned.