
A Trusted ‘Knowledge’ Repository is Key to Generative AI Adoption in 2024

With barely a few days left in 2023, I am offering my views on the adoption and impact of generative AI in the legal industry in 2024, in the context of the knowledge management function.

Generative AI will deliver value to those who know how to use it; the tool doesn’t understand your legal function

Whilst generative AI is becoming synonymous with AI, the reality is that the latter isn’t a single technology but a cluster of different technologies. Generative AI applications, on the other hand, specifically use natural language processing on the data they are provided to deliver results and outcomes in response to precise requests. The generative AI application doesn’t understand the legal concepts underlying the questions that it is asked or the words/output that it produces. So, legal expertise within the context of matter strategy and complexity is needed to sanity-check the outputs the generative AI tool produces.

Consequently, generative AI will deliver value and convenience only to those who know how to use it, and it can even help fine-tune their thought processes to achieve the desired outcome. For instance, the same question can be asked in multiple ways, and generative AI will respond differently to each. Likewise, lawyers can receive different outputs to the same question, depending on the documents that they are ‘authorized’ to access within the firm. The tool is processing natural language, after all!


Microsoft will enable generative AI adoption, but compliance and confidentiality will be showstoppers

In the near term, the first ‘showstopper’ for meaningful use of generative AI will be compliance. A lawyer using generative AI (be that Copilot or an adapted version of it) to create a report for sharing with clients or third parties needs to ensure that the document complies with all applicable data protection and data privacy regulations. In addition to enabling generative AI adoption through the provision of a copilot, Microsoft will play a role in ensuring that data security and residency aren’t compromised.

Microsoft’s security policy for Azure ensures that data residing in individual organizations’ cloud tenants is not shared externally.

So, it will be incumbent on law firms to only use data that is within their own cloud tenant, much like in previous years, when data resided within firms’ own networks.


The second showstopper is going to be data confidentiality. At an individual user level, people will only have access to documents they are authorized to view, based on the principle of need-to-know security. This means that the firm’s generative AI tool will only surface results for an individual based on the data that individual is authorized to access. In some cases, the tool may produce output for one person based on a completely different set of documents than for another, as the sketch below illustrates. This will likely affect the value of generative AI in terms of contextual accuracy, currentness, and suitability.
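To make the point concrete, here is a minimal, hypothetical sketch of need-to-know retrieval: the documents a generative AI tool can draw on are filtered by the querying user’s permissions, so two users asking the same question may get answers grounded in different document sets. All names, groups, and data below are illustrative assumptions, not any particular vendor’s API.

```python
# Hypothetical sketch: access-controlled retrieval under need-to-know security.
# Two users asking the same question are answered from different document sets.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set[str]   # need-to-know access control list

CORPUS = [
    Document("msa-2023-001", "Master services agreement, governing law: New York.", {"corporate"}),
    Document("emp-2023-014", "Employment dispute memo, strictly confidential.", {"employment"}),
]

def retrieve(query: str, user_groups: set[str]) -> list[Document]:
    """Return only documents the user is authorised to see that match the query."""
    visible = [d for d in CORPUS if d.allowed_groups & user_groups]
    terms = query.lower().split()
    return [d for d in visible if any(t in d.text.lower() for t in terms)]

# The same question, asked by users with different permissions, is grounded
# in different documents (and may therefore produce different output).
print([d.doc_id for d in retrieve("governing law", {"corporate"})])    # ['msa-2023-001']
print([d.doc_id for d in retrieve("governing law", {"employment"})])   # []
```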

Without a trusted knowledge source, your generative AI tool will deliver limited value

Document management systems will be the logical place for law firms to embark on their AI journey, which, for many firms, has already started.

For instance, AI tools are embedded in document management systems to classify documents, to automatically file emails and documents based on user behaviors, and even to answer specific questions asked of the application. Note here that for this kind of broad AI adoption, there is no need to “train” the tool for it to deliver the desired results.

To adopt generative AI and ensure that the tool is “trained” on the most accurate, authorized, and current data, firms will need to create a central, curated repository of trusted data, i.e., the knowledge management system. This will ensure the best and most useful output. To illustrate, a lawyer would be able to instruct the firm’s generative AI tool to create an 800-word abstract of a 100-page M&A contract that refers to participants from the US and Germany and pertains to the New York jurisdiction, and be assured that the right data has been used to create the output.
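A rough sketch of how that M&A example could work against a curated repository: metadata filters pick the trusted, current source document, and the generative AI tool is then instructed to summarise only that text. The repository schema, field names, and the placeholder generate() call are assumptions for illustration, not a description of any specific product.

```python
# Hypothetical sketch: querying a curated knowledge repository before generation.
KNOWLEDGE_REPOSITORY = [
    {
        "doc_id": "mna-2023-117",
        "title": "Share purchase agreement (100 pages)",
        "party_countries": {"US", "Germany"},
        "jurisdiction": "New York",
        "status": "current",          # curation flag: latest approved version
        "text": "<full contract text>",
    },
]

def find_contract(countries: set[str], jurisdiction: str) -> dict:
    """Pick the curated, current document matching the metadata filters."""
    for doc in KNOWLEDGE_REPOSITORY:
        if (doc["status"] == "current"
                and countries <= doc["party_countries"]
                and doc["jurisdiction"] == jurisdiction):
            return doc
    raise LookupError("No trusted document matches the request")

contract = find_contract({"US", "Germany"}, "New York")
prompt = (
    "Write an 800-word abstract of the following M&A contract. "
    "Use only the text provided.\n\n" + contract["text"]
)
# abstract = generate(prompt)   # placeholder for the firm's generative AI tool
```

The design point is that curation happens before generation: because the repository only holds approved, current documents, the lawyer can trust that the abstract was produced from the right source.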


It isn’t the ‘rise of the machine’ in legal

Finally, despite the hype surrounding generative AI, we aren’t heading towards a ‘rise of the machine’ scenario in legal. There are many issues to iron out – from hallucinations and security to ethics, data management, and regulation – which means that human intellect is very much indispensable. It’s true that AI technology more broadly has a lot to offer in the form of support, convenience, and efficiency – and that over time generative AI will become more accurate. But natural language processing is exactly that – i.e., language processing. Generative AI engines know which letters or words to put one after another, but they do not understand the words or concepts, and they need detailed instruction and high-quality data to deliver value.

