We were honoured to be selected as one of the 15 startups introducing their innovations in the Startup Alley at the LegalGeek conference in London.
Our first LegalGeek conference
With more than 3200 attendees and 50+ exhibitors this year, the LegalGeek conference is one of the biggest legal innovation events in Europe. It also hosts a trusted showcase of innovative legaltech startups: the Startup Alley.
The two days were packed with insightful discussions (predominantly about the potential and implications of using AI in legaltech), product pitches, stage presentations and networking opportunities. We had the chance to immerse ourselves in the latest technologies and trends in a relaxed and fun environment.
Note: in this article, by AI we specifically mean generative AI using LLMs.
From our perspective as a technology tool provider with the increasingly rare characteristic of deliberately not using AI in our services, the conference was really all about AI. Understandably, this is the subject that raises the most exciting questions today, since the legaltech industry is approaching a disruptive point similar to the one the translation industry is already experiencing. Hence, some of the takeaways were familiar from the language industry events we have attended earlier.
Takeaways
Shifts in legal work?
Verifying AI output should remain a human task
Users must be able to verify the results of AI tools: data without human supervision leads to problems. Trusting AI output without verification can pose significant risks, which is why accuracy is of utmost importance when assessing AI output. Only human experts have the necessary knowledge and context to evaluate whether a given output can be used as is, or needs correction (rings a bell from machine translation post-editing, doesn't it?).
In our view, the main question is: what happens between receiving an answer generated by an AI tool (e.g. for the purposes of finding relevant information or reasoning in a given case) and putting the seal on it, saying “this is an accurate, error-free output and I can rely on it”?
Our instinct tells us that AI output can only be verified by non-AI tools, be that looking up paragraphs manually in a lawbook, or using search engines to find the relevant pieces of information in a large document repository. The individual steps of these “traditional” methods of finding and checking information are traceable, and involve an expert's judgement call about whether all the relevant sources were consulted and their conclusions logically synthesized.[1]
In-house standards
There is a clear need to implement principles or in-house regulations on using AI in law firms and legal teams. This is obviously important in light of law firms' confidentiality and enhanced personal data protection obligations. Beyond that, such regulations should also embody a certain critical attitude towards AI tools in general.
Job stability
Technology will not replace lawyers but make their work more efficient (this is often argued in relation to linguists as well). In a very interesting stage debate, it was stated that although the fear of job losses has been looming over the legal sector as well, there are no signs of increased layoffs in this industry at the moment (though we heard accounts of a downsized paralegal team being used to review AI output).
As we see it, AI would have the most immediate impact on workflows for assignments of massive scale (an example we heard was a due diligence spanning half a million documents), or on cookie-cutter contracts that heavily relied on templates even before AI, though the lawyer among us would argue that no two cases are alike.
Chat during working hours
Chat as primary interface (so far)
Since chat is the least-effort interface for AI, that is what legaltech tools incorporating AI implement.[2] The main idea is to provide an AI tool advanced enough that lawyers can type in a legal question and receive an accurate, error-free answer within seconds.[3] Because of the nature of AI models, however, this answer always needs to be verified by human experts to avoid unexpected legal risks and reputational loss.
Prompting
Prompting is a new skill lawyers are beginning to familiarise themselves with. We had the opportunity to participate in a prompting workshop as well, where we learned that certain law firms already have standards on how a prompt should be structured and which points it should cover.
In one case, a firm found that their AI performed much better if the prompt ended with an emotive phrase: in their example, appending “It is very important that you get this task right, so take your time and approach it carefully” made a difference.[4]
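As a minimal illustration (the structure and wording below are our own hypothetical example, not any firm's actual standard), such a structured prompt with an emotive closing could be assembled like this:

```python
# Hypothetical structured prompt template for a contract-review question.
# The sections (role, task, context, output format) and the emotive closing
# phrase reflect the workshop advice; the wording is our own invention.

PROMPT_TEMPLATE = """\
Role: You are an experienced commercial lawyer.
Task: {task}
Context: {context}
Output format: A numbered list of findings, each citing the relevant clause.

It is very important that you get this task right, so take your time
and approach it carefully."""

def build_prompt(task: str, context: str) -> str:
    """Assemble a structured prompt from the firm's template."""
    return PROMPT_TEMPLATE.format(task=task, context=context)

if __name__ == "__main__":
    print(build_prompt(
        task="Identify any clauses that deviate from our standard liability cap.",
        context="Draft supply agreement, clauses 1-12 (pasted below).",
    ))
```

Keeping the template in one place also makes the prompting standard reviewable and versionable, which ties in with the side benefit noted below.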
An interesting side benefit of having to prompt is that previously unwritten internal standards or procedural rules are now put down in writing, available for wider inspection, discussion and improvement.
How to distinguish AI services
We talked with about twenty legal AI tool providers, some of them specializing in AI-based research tools, and most of them in contract review tools. Apart from the features and integrations added on top of AI, it was not always clear how the AI part itself distinguished them from each other.
Levels of data source incorporation (as we defined them)
To help us map the various offerings, we came up with the following rough levels for classifying an AI solution, based on what data sources it uses (a small code sketch of this classification follows the list):

1. Only builds on generic existing LLMs (or, in rare cases, trains its own LLM from scratch).
2. Adds publicly available relevant data sources, for example legislation, case-law or guidelines.
3. Adds in-house expert content, for example an expert summary of a sector's legislation, or a comparative summary across jurisdictions.
4. Ability to (securely and privately) incorporate customer data, such as internal resources of the law firm or legal department, i.e. individual contracts, memos, forms, pleadings, submissions, playbooks, etc.

An orthogonal question, mostly relevant to levels (2) and (3), is if and how closely the database follows the changing legal landscape and takes in-forceness into account.
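To make the classification concrete, here is a minimal sketch of how these levels could be encoded; the provider name and attribute values are made up for illustration:

```python
from dataclasses import dataclass

# Our own rough classification from above, encoded as flags.
# The provider name and attribute values below are hypothetical.

@dataclass
class AIProviderProfile:
    name: str
    public_legal_sources: bool = False     # level 2: legislation, case-law, guidelines
    in_house_expert_content: bool = False  # level 3: expert summaries
    customer_data: bool = False            # level 4: the customer's own documents
    tracks_in_forceness: bool = False      # orthogonal: follows the changing legal landscape

    def level(self) -> int:
        """Highest level of data source incorporation (level 1 = generic LLM only)."""
        if self.customer_data:
            return 4
        if self.in_house_expert_content:
            return 3
        if self.public_legal_sources:
            return 2
        return 1

example = AIProviderProfile(
    name="HypotheticalContractAI",
    public_legal_sources=True,
    customer_data=True,
)
print(f"{example.name} is a level {example.level()} solution")
```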
How the AI services we met fit these levels
AIs focusing on contract review would naturally have an option for (4), incorporating customer data, typically via retrieval, but there was an example where the provider would actually fine-tune based on that data.
As for levels (2) and (3): in AI's current state it is not reasonable to expect an AI to synthesize a quality answer by finding and digesting all relevant pieces of legislation, as the chance of omission or misinterpretation is too large. Providers therefore prefer to build on pre-made legal synthesis, for example case-law or in-house expert summaries. The AI answer will thus predominantly build on (and ideally refer to) these summary documents.
Specifically, the contract AI solutions mostly expect the customer to supply their playbooks, which instruct the tool what specific details to look for in contracts. The AI model itself would not correlate the contract's contents with the actual law, unless the client summarized the law and instructed so in their playbook. (We heard from a few providers that they are considering and/or implementing ways to bring in actual jurisdiction law, though.)
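To illustrate the retrieval option mentioned above, here is a toy sketch of matching playbook rules against contract clauses. Real providers would use learned embeddings and a vector store rather than this bag-of-words similarity, and all rules and clauses below are invented:

```python
import math
from collections import Counter

# Toy retrieval sketch: for each playbook rule, find the most similar
# contract clause using bag-of-words cosine similarity. A production
# pipeline would use learned embeddings and a vector store instead.

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

playbook = [
    "Liability cap must not exceed 12 months of fees",
    "Governing law must be English law",
]

contract_clauses = [
    "The supplier's total liability shall not exceed the fees paid in the preceding 24 months.",
    "This agreement is governed by the law of England and Wales.",
]

for rule in playbook:
    best = max(contract_clauses, key=lambda c: cosine(vectorize(rule), vectorize(c)))
    print(f"Rule: {rule}\n  -> Closest clause for review: {best}\n")
```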
We also have to note that most of these levels could incorporate their respective data via training, retrieval, or both. And (speculation ahead!) as the devil is in the details, even a solution missing some of the data sources might make up for it with clever engineering.
Data is King (and our next steps)
Integrating quality data into any AI system (or actually, into any system, regardless of AI) is crucial for providing a service that will really prove efficient and useful for its users. The research AI tool providers we encountered all built on their preexisting legal repositories, with AI being “just” another means to access the contents, in addition to traditional navigation interfaces.
Based on these super interesting takeaways, we are considering different options for how Juremy's existing database and functionality could be adapted to the needs of lawyers and researchers working with EU-related cases. If you are interested in this aspect, we are curious to hear about your use case. You can also sign up to receive news about our developments on our EU Law website. Stay tuned for more updates from us!
Last but not least, let us thank the organisers of LegalGeek for giving us the opportunity to introduce Juremy to conference attendees; we learned a lot! It was also great to connect with so many legaltech enthusiasts and innovators in such an inspiring ambiance. Thank you all for the insightful discussions!
Footnotes

[1] There is a trend emerging in AI tools to provide intermediate output and reference links. This will aid the manual research, but won't completely replace it.

[2] There are already signs that as tools mature, AI will still operate in the background, but be less directly exposed. Natural language interaction would remain available as an option rather than as the main interface.

[3] Expecting an error-free answer to a legal question is quite a long shot, given that we heard from multiple AI tool provider representatives that the quality of the output is what you could expect from a junior lawyer.

[4] Which might make sense, because LLMs don't learn to give a universally good answer; they learn to give a completion that matches the style of the prompting context well. Imagine an AI trained on a set of poorly styled questions with poor-quality answers, and well-styled questions with careful answers. The presence of a well-styled prompt would then invoke a careful answer.