By Adam Ira
When asked to discuss a topic of interest to new lawyers, this writer knew that a brief jurisprudential history of the Rule Against Perpetuities was just what the doctor ordered. But then news broke of a large law firm hiring a “robot associate” named ROSS. New lawyers face a volatile future given the recent surge of technology capable of automating many tasks traditionally performed by lawyers. Many have prophesied a bleak future in which machines replace people in the legal profession. But this writer believes in the power of people.
For those who may not be aware, the new associate (which has been likened to Skynet, HAL 9000 and other harbingers of computerized doom) is an artificial intelligence named ROSS, built on IBM’s Watson platform. Watson is, at bottom, question-answering software, much like Siri or Cortana on smartphones. ROSS can bring value to law firms and their clients by giving lawyers the ability to answer questions about the law almost instantly, without spending hours dredging through research. Consider, for example: “Can a federal court sitting in diversity exercise jurisdiction over the claims of a non-diverse intervening plaintiff that only seeks to protect a workers’ compensation lien?” A senior partner with 20-plus years of litigation experience can answer this question rather quickly, but confirming that assessment still requires an associate to spend a significant amount of time researching federal statutes, rules and caselaw.
Artificial intelligence presents several fascinating questions that the law is not yet prepared to answer. At what point does a computer become so intelligent that it is considered sentient? If artificial intelligence is a form of consciousness, will it ever be illegal to “terminate” a computer? Can a computer take the bar exam? What happens if a computer passes the bar exam? If a programmer teaches a computer to hack servers on its own, and the computer illegally hacks a server without specifically being instructed to do so, who committed the crime? Outlandish as these scenarios may sound, they raise questions we, as a profession, should be asking now, and questions that we, as new lawyers, will eventually have to answer. In January, this writer researched the possibility of artificial intelligence replacing lawyers, and the experts did not expect A.I. to be capable of performing associate-level work for another 10 to 15 years. With the recent announcement of a working ROSS A.I., the only certainty is that the pace of technological progress is unpredictable.
The question of supreme importance to young lawyers is whether a computer can ever truly replace an attorney. The answer is “no,” because our job as attorneys is to exercise independent judgment, and machines can only exercise judgment dependent on their programming. A simpler answer is “no, because ours is a people profession.” Machines cannot develop client relationships. But machines can exercise logic. A computer algorithm is, in essence, a formula the machine applies to interpret data, and some areas of law are largely formulaic statutory schemes. The Bankruptcy Code, for example, lends itself to automation: it is a complex statutory scheme, but the statutes set out clear answers to most questions. A.I. appears well suited to answering bankruptcy questions because it can take a fairly clear set of instructions and apply them to the facts. It may be quite some time, however, before programmers can teach a computer to understand the common law, and a computer may never be able to understand a client’s needs or advocate on a client’s behalf in the courtroom.
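To illustrate why a formulaic statutory scheme lends itself to automation, consider a minimal sketch of a rule-based lookup. Everything in it is hypothetical: the section numbers, dollar figures and outcomes are invented for illustration and do not reflect the actual Bankruptcy Code, the ROSS product or any real legal-research software.

```python
# A minimal, hypothetical sketch of "statute as formula": each rule is a
# condition plus the answer it dictates, and the program returns the answer
# of the first rule whose condition matches the facts.
# The citations and thresholds below are invented, not real law.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class StatutoryRule:
    citation: str                      # hypothetical code section
    applies: Callable[[dict], bool]    # does this rule govern these facts?
    answer: str                        # the outcome the rule dictates

RULES = [
    StatutoryRule(
        citation="Hypothetical Code sec. 101",
        applies=lambda facts: facts.get("unsecured_debt", 0) > 500_000,
        answer="Debtor exceeds the example debt limit; relief unavailable.",
    ),
    StatutoryRule(
        citation="Hypothetical Code sec. 102",
        applies=lambda facts: facts.get("prior_dismissal_within_180_days", False),
        answer="Prior dismissal within 180 days; filing barred under the example rule.",
    ),
]

def answer_question(facts: dict) -> Optional[str]:
    """Return the answer dictated by the first matching rule, if any."""
    for rule in RULES:
        if rule.applies(facts):
            return f"{rule.answer} (see {rule.citation})"
    return None  # no rule answers the question; a lawyer's judgment takes over

if __name__ == "__main__":
    print(answer_question({"unsecured_debt": 750_000}))
```

The point of the sketch is its last line: when no rule matches, the program has nothing to say, and that silence is precisely where a lawyer’s independent judgment begins.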
But consider A.I. in terms of economies of scale, or, for those of us who abandoned an economics major, in terms of efficiency. A.I. will allow us to spend less time researching and more time lawyering. Research is important, but clients undoubtedly perceive more value in paying a lawyer to write a brief than in paying a lawyer to dig up arcane property law cases from the late 1800s. Some critics have pointed out that new lawyers learn the law through research, and that A.I. therefore threatens the traditional training regime. There is merit to this argument, but a lawyer’s duties of competence and diligence would presumably require a lawyer using an A.I. program to perform enough research to confirm the program’s conclusions about the law. New lawyers will still be trained through research; it will simply be confirmatory rather than exploratory.
New lawyers work in an exciting time in which technology can change the game overnight. It is incumbent on those at the helm of our profession to prepare us for this brave new world by being proactive, rather than reactive, about technology. A good first step may be adding guidance to the Indiana Rules of Trial Procedure regarding e-discovery and the use of metadata; lawyers and clients alike can look forward to the day when A.I. can handle discovery. The Indiana Rules of Professional Conduct must also guide us on the ethical concerns presented by various technologies. In New York, for example, it can be an ethical violation to surreptitiously scour a document sent by opposing counsel for metadata. The ABA Model Rules of Professional Conduct already define a lawyer’s competence to include knowledge of relevant technology:
To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology . . . .
Model Rules of Prof’l Conduct R. 1.1 cmt. 8. Indiana has yet to speak on a lawyer’s duty of competence with respect to technology. As A.I. develops the ability to perform more complex legal tasks, lawyers will undoubtedly require some guidance on the ethical use of such technology. For now, however, there are some questions only humans can answer:
Dave: “Computer, please explain the relevance of the Rule Against Perpetuities.”
Computer: “I’m sorry, Dave, I’m afraid I can’t do that.”•
Adam Ira is an associate attorney in Kightlinger & Gray’s Indianapolis office and a member of the firm’s data security practice group. He represents clients in a broad spectrum of state and federal litigation, including general liability defense and matters of state and municipal liability. The opinions expressed are those of the author.