LexisNexis, or Lexis+, has over the years afforded small and
mid-size law firms with limited resources a go-to legal research
platform, integrating the content and tools needed for
efficient and thorough legal research. Its cutting-edge technology allows
lawyers to uncover opinions, identify cases, and connect cases that may
otherwise have been overlooked. It also provides lawyers with insights on
judges, opposing counsel, law firms, and courts, so they can draw on
well-researched data and gain an advantage in making superior fact-based
arguments. Its ability to unearth cases and put them online more quickly
gives it an edge over other solutions; it applies a combination of machine
learning and natural language processing to do so. It also helps firms with
limited resources estimate the litigation timeline for a case before a
specific judge or court, and even determine the venue that may best suit
their client's case. They can also assess opposing counsel's track record
in similar cases and design a litigation strategy accordingly.
Kira Systems is another AI-powered solution that supports more accurate due
diligence reviews of contracts by searching, highlighting, and extracting
relevant content for analysis. It also allows continuous review by other team
members, using the extracted information with links back to the original
source. It is estimated that Kira completes such tasks up to 40% faster for
first-time users and up to 90% faster for experienced users. It applies
patented machine learning to identify, extract, and analyze content in the
contracts and documents fed to it, extracting concepts and data points at
rates of efficiency and accuracy that were not possible with traditional
rules-based systems. Beyond its patents, its quick-study capability, partner
ecosystem, built-in intelligence, and adaptive models set it apart.
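Kira's patented models are proprietary, but the extract-and-link workflow described above can be sketched in simplified form. The snippet below is a minimal illustration only: the clause types, patterns, and document name are hypothetical, and a system like Kira uses trained machine-learning models rather than fixed regular expressions.

```python
import re

# Hypothetical clause patterns; a real system learns these from
# labeled training data rather than using fixed regexes.
CLAUSE_PATTERNS = {
    "governing_law": r"governed by the laws of ([A-Z][\w\s]+)",
    "termination": r"terminate[sd]? (?:this agreement )?upon ([\w\s]+ days)",
}

def extract_clauses(doc_id, text):
    """Extract clause hits with links back to their source location."""
    hits = []
    for clause_type, pattern in CLAUSE_PATTERNS.items():
        for m in re.finditer(pattern, text, re.IGNORECASE):
            hits.append({
                "doc": doc_id,       # link to the original source document
                "type": clause_type,
                "text": m.group(0),
                "span": m.span(),    # character offsets for reviewer follow-up
            })
    return hits

contract = ("This Agreement shall be governed by the laws of New York. "
            "Either party may terminate this agreement upon thirty days notice.")
for hit in extract_clauses("contract-001.pdf", contract):
    print(hit["type"], "->", hit["text"])
```

Keeping the document identifier and character span with each hit is what allows other team members to jump from the extracted content back to the original source during review.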
LawDroid, a chatbot AI, can be used by firms with limited resources. Amanda Caffall, Executive Director of The Commons Law Centre, stated: "LawDroid helps our non-profit start-up law firm sort the vast unmet market for legal services into people we can help and people we can refer to other resources, saving us precious time while enabling us to make much-needed referrals."
Such chatbots are mainly hosted on a law firm's website, making the firm available to potential clients 24/7. Using videos and responsive conversations, they create and build trust with potential clients and capture their information as new leads for the firm. They also give the firm in-depth knowledge of its clients, enabling data-driven decisions. Using conditional logic, a chatbot can intelligently assemble robust documents from information gathered from clients. Firms can thus scale up their expertise and services and charge for self-serve legal documents, issue spotting, and legal guidance while the business is asleep.
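The "conditional logic" such a chatbot applies can be pictured as a small decision tree that sorts enquiries into matters the firm can take and matters it should refer out. The sketch below is hypothetical throughout (practice areas, routing, and messages are invented for illustration); LawDroid's actual intake flows are configured per firm.

```python
# A minimal sketch of conditional-logic intake, as a chatbot might run it.
# The practice areas and routing below are hypothetical examples.
INTAKE_TREE = {
    "start": {
        "question": "What do you need help with? (tenancy/employment/other)",
        "tenancy": "tenancy_notice",
        "employment": "employment_referral",
        "other": "general_referral",
    },
}

ROUTES = {
    "tenancy_notice": "We can help: generating a self-serve notice template.",
    "employment_referral": "Referred to an employment-law partner firm.",
    "general_referral": "Referred to external legal aid resources.",
}

def route_client(answer: str) -> str:
    """Apply the conditional logic: sort an enquiry into help vs referral."""
    node = INTAKE_TREE["start"]
    next_step = node.get(answer.strip().lower(), "general_referral")
    return ROUTES[next_step]

print(route_client("Tenancy"))     # handled in-house
print(route_client("employment"))  # referred out
```

This is exactly the "sorting" Caffall describes: enquiries the firm can serve flow to a self-serve product, and everything else becomes a referral, with no lawyer awake at the keyboard.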
It applies natural language processing to readily answer legal questions from
the clients it engages with. The 2020 Legal Trends Report found that 79% of
potential clients expect a response within 24 hours of reaching out; chatbots
such as LawDroid come in handy to respond to this need in seconds. Overall,
LawDroid helps save time and money and improves efficiency and profitability,
while providing an efficient and satisfying customer service experience.
Data is one of the limitations. AI-powered solutions use machine learning,
deep learning, neural networks, and natural language processing, all of which
feed on big data to train the models that power them. Machine learning, for
example, can detect patterns that humans may not easily identify, but those
patterns are detected only within the available training data; the model
cannot know patterns that exist outside the data used to train it. Thus, the
data may be accurate and complete yet still lack contextual patterns that
exist beyond the training set. Thomas Redman, in his article "If Your Data Is
Bad, Your Machine Learning Tools Are Useless," explained that to properly
train a predictive model, historical data must meet exceptionally broad and
high-quality standards.
"First, the data must be right: it must be correct, properly labeled, and so forth. But you must also have the right data – lots of unbiased data, over the entire range of inputs for which one aims to develop the predictive model."
Shlomit Yanisky-Ravid and Sean Hallisey, writing on equality and privacy by design, indicated that the key attributes of data are volume, velocity, variety, and veracity. On veracity, they argued that limitations arise from the deviation of the data from the real world: where selection bias exists, the trained model will not reflect actual conditions because of errors in sampling the data.
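The effect of selection bias on a model is easy to demonstrate numerically. In this toy sketch (all figures hypothetical), litigation durations come from two venues, but the training sample is scraped from only one of them, so the resulting timeline estimate deviates sharply from the real world:

```python
import random
random.seed(0)

# Hypothetical litigation durations (in days) across two venues.
fast_court = [random.gauss(120, 15) for _ in range(500)]
slow_court = [random.gauss(360, 40) for _ in range(500)]
population = fast_court + slow_court

def mean(xs):
    return sum(xs) / len(xs)

# Unbiased data reflects the real world...
true_avg = mean(population)

# ...but a selection-biased training set (records drawn only from the
# fast court) trains a model that understates real-world timelines.
biased_avg = mean(fast_court)

print(f"population mean:    {true_avg:.0f} days")
print(f"biased-sample mean: {biased_avg:.0f} days")
```

No amount of volume fixes this: adding more records from the same fast court only makes the model more confidently wrong, which is precisely the veracity problem Yanisky-Ravid and Hallisey describe.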
AI for predictive analytics is also limited by unavailable data. Nate Silver
reiterated that a lack of meaningful data is one of the two principal factors
limiting the success of predictive analytics. A further data limitation is
that, where AI-powered solutions perform predictive analytics, much of the
data they rely on is generic in nature, so factual distinctions between cases
are difficult to track.
Design also introduces its own limitations to an AI-powered system. In modeling an AI-powered solution, the human element is critical, which makes the AI susceptible to human biases from the design stage onward. Kate Crawford wrote that
"Like all technologies before it, artificial intelligence will reflect the values of its creators. So, inclusivity matters – from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old biases and stereotypes."
This highlights the possibility of bias at the design stage, which is
replicated in the model and thus churns out biased outcomes. Kleinberg et al.
identify three design choices that can lead to algorithms operating in a
discriminatory manner: the choice of output variable, the choice of input
variables (candidate predictors), and the choice of training procedure.
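The second of those choices, the input variables, can be illustrated with a toy scoring model. Everything in this sketch is hypothetical (the feature names, the weights, the "high-risk" list); the point is only that admitting a proxy variable as a candidate predictor lets bias into an otherwise neutral score:

```python
HIGH_RISK_ZIPS = {"00233"}  # hypothetical proxy for a protected attribute

def risk_score(applicant, use_proxy):
    """Toy score: the design choice is whether zip_code is an input variable."""
    score = 0.6 * applicant["prior_outcomes"] + 0.4 * applicant["claim_merit"]
    if use_proxy:
        # Design choice: adding a proxy candidate predictor.
        score += 0.5 if applicant["zip_code"] in HIGH_RISK_ZIPS else 0.0
    return score

a = {"prior_outcomes": 0.5, "claim_merit": 0.5, "zip_code": "00233"}
b = {"prior_outcomes": 0.5, "claim_merit": 0.5, "zip_code": "10001"}

# Identical merits, identical scores without the proxy...
print(risk_score(a, use_proxy=False), risk_score(b, use_proxy=False))
# ...but different scores once the proxy input is included.
print(risk_score(a, use_proxy=True), risk_score(b, use_proxy=True))
```

In a learned model the weight on the proxy would come from the training procedure rather than being written by hand, but the mechanism is the same: the discrimination enters through design choices made before any data is seen.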
These solutions are also limited if the data fed to them is not updated to
reflect changes in the requisite laws, policies, or regulations. An example is
a rule-based AI solution that still relies on a repealed law to give automated
answers and decisions to clients: where the regulations change but the
solution is not updated, the outcome it churns out will be wrong.
Accountability, strictly speaking, is not a limitation of the technology
itself, but it can be characterized as a limitation of the system of
governance around it.
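The repealed-law failure mode is easy to see in a rule-based sketch. The statute, notice periods, and dates below are invented for illustration; the point is that the system's answer is only as current as its rule base:

```python
from datetime import date

# Hypothetical rule base for a rule-based legal chatbot. If the repeal
# below is never recorded, the system keeps serving advice from dead law.
RULES = {
    "tenant_notice_period": {
        "answer": "Landlords must give 21 days' notice.",
        "repealed_on": date(2021, 6, 1),   # superseded by a newer act
        "replacement": "Landlords must give 90 days' notice.",
    },
}

def answer(topic, today, rules_updated):
    rule = RULES[topic]
    if rules_updated and today >= rule["repealed_on"]:
        return rule["replacement"]
    return rule["answer"]  # stale answer if the update never shipped

print(answer("tenant_notice_period", date(2023, 1, 1), rules_updated=True))
print(answer("tenant_notice_period", date(2023, 1, 1), rules_updated=False))
```

Both calls are made on the same date; only the maintenance of the rule base separates the correct answer from the confidently wrong one, which is why keeping such systems current is a governance obligation rather than a technical one.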
As for the limitation of bias and the legal ceiling to be applied: with
proper regulation, algorithms can help to reduce discrimination. But the key
phrase here is "proper regulation," which we do not currently have. If
properly designed and used, algorithmic systems can effectively expose bias
in human endeavours and therefore be a positive force for equity. Brian
Sheppard, writing on trade secrecy in AI tools, indicated that secrecy makes
it harder for consumers to realize the full benefits of a competitive
marketplace. Thus, further regulation around the development of AI systems
would have enormous benefits for lawyers.
In conclusion, AI-powered solutions have, in their diverse ways, impacted the legal industry positively, as shown in this and previous articles, through their unique contributions to the efficiency, operational strategy, excellence, and profitability of the law firms that employ them.
Author: Ing. Bernard Lemawu, BSc Elect Eng, MBA, LLB, LLM Cand. | Member, Institute of ICT
Professionals Ghana
For comments, contact author ghwritesblog@gmail.com