From frightening depictions of a robot takeover to feel-good stories about disease detection and other automation-based improvements in health care research, it seems as though you cannot go anywhere without hearing about artificial intelligence.
With all of this buzz, do you ever feel like you are back in the early ’90s when even “communication professionals” did not understand what the internet was? Check out my favorite clip, which I think accurately depicts where most of us stand with AI today: https://tinyurl.com/5xpxrxrv
What is AI anyway?
Unlike traditional software, which is built from explicit, logical instructions telling the machine what steps to perform on the “input” to produce the desired “output,” AI uses what is known as a learning algorithm.
So instead of a human expert designing the program’s logical flow, the tool itself looks for the pattern connecting many examples of input and finished output to determine what steps it should take.
This is where the phrase “training the AI” comes from, and many examples are needed to strengthen the detection of patterns. For example, AI could be trained to categorize specific types of plants by feeding it thousands of labeled images of a variety of plants.
This enables humans to later snap a picture of a mysterious plant they see on a walk and upload it to an AI platform for identification. Even though it is not the exact same image the system was trained on, AI is able to apply the pattern it previously detected to make a determination as to the type of plant that was uploaded.
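For the technically curious, this train-then-apply pattern can be sketched in a few lines of Python using the scikit-learn library. The plant “photos” here are boiled down to made-up feature numbers purely for illustration; real image models are far more sophisticated.

```python
# A minimal sketch of "training" on labeled examples and then applying
# the learned pattern to a new example. The numbers are made-up stand-ins
# for features extracted from plant photos.
from sklearn.neighbors import KNeighborsClassifier

# Labeled training examples: feature values paired with plant names.
features = [
    [0.2, 0.7, 0.1],  # hypothetical "fern" photo
    [0.3, 0.6, 0.2],  # another fern
    [0.8, 0.3, 0.5],  # hypothetical "rose" photo
    [0.7, 0.4, 0.6],  # another rose
]
labels = ["fern", "fern", "rose", "rose"]

# The learning algorithm finds the pattern linking inputs to outputs.
model = KNeighborsClassifier(n_neighbors=1)
model.fit(features, labels)

# A new photo the model was never trained on is classified by applying
# the pattern it previously detected.
new_photo = [[0.75, 0.35, 0.55]]
print(model.predict(new_photo))  # -> ['rose']
```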
Recognizing AI as being “pattern based” rather than logic or fact based is central in determining which tasks may or may not be a good fit for AI.
Shortcomings of AI
The good news is that because AI is merely detecting patterns and stringing together data that correlates based upon historical experience rather than having true human-level understanding of topics, the chances of a robot uprising are slim. This is not to say that the use and reliance upon AI are without risk.
Consider a person who delivers factually incorrect statements with such confidence that many are convinced of their accuracy. The same danger exists when utilizing AI for certain tasks. Given that these models simply predict the next part of a sequence, it is important to appreciate that the output is not necessarily designed to be factually correct and is prone to what are known as hallucinations.
While the output may appear reasonable, it may not have any basis in fact. Another factor is that we don’t know all of the sources the model was trained on; if the training data was incorrect, then the assumptions the model makes will be, too. Garbage in, garbage out.
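A toy illustration of “predicting the next part of a sequence” follows. It is nothing like a real language model, and the training sentence is entirely made up, but it shows how output can be strung together from patterns in training text without any check on whether the result is true.

```python
# A toy "next word" predictor: it counts which word most often follows
# each word in a tiny, made-up training text, then strings words together.
# Nothing checks whether the resulting sentence is factually correct.
from collections import Counter, defaultdict

training_text = "the court granted the motion the court denied the appeal".split()

next_words = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_words[current][following] += 1

word = "the"
output = [word]
for _ in range(5):
    if not next_words[word]:
        break
    word = next_words[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # -> "the court granted the court granted"
```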
Another vulnerability of AI output comes in the form of omissions, or the information that AI did not provide. It might be the truth, but not the “whole truth.” There are several reasons why AI is prone to omitting key information; among them are data training sets and fencing.
As previously mentioned, AI is supplied with exemplar information known as data training sets in order to establish patterns that can be applied across future inquiries. If a training set lacks diversity or breadth, the model is likely to draw incomplete and/or biased conclusions.
In fact, some models specifically narrow the library of information being accessed for generating output in an effort to reduce the amount of garbage or irrelevant source data. This is a double-edged sword because while some output may be more accurate, other key information may be omitted. In publicly available LLMs like ChatGPT, data training sets are often much older than many realize.
As of November 2024, the knowledge cutoff date for OpenAI’s “latest, fastest, highest intelligence model,” GPT-4o, is over a year old. As a result, recent events that people assume the model is aware of are not part of the training set. This is why, when using a prompt like “what are today’s top 5 headlines,” the results are not being pulled from the model itself but rather retrieved by AI utilizing a search tool. Depending upon what type of current information your prompt requires, the model may not recognize that a search is needed and instead rely upon outdated information within the model.
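Conceptually, that routing decision might be sketched like this. The cue words and both helper functions are hypothetical placeholders for illustration, not any vendor’s actual tooling.

```python
# Rough sketch of the "stale training data vs. live search" decision
# described above. The cue words and both helpers are hypothetical
# stand-ins, not a real vendor API.
CURRENT_INFO_CUES = ["today", "latest", "current", "this week"]

def web_search(prompt: str) -> str:
    # Placeholder for a live search tool.
    return f"[live search results for: {prompt}]"

def ask_model(prompt: str) -> str:
    # Placeholder for the model answering from its training data,
    # which may be more than a year out of date.
    return f"[answer from training data for: {prompt}]"

def answer(prompt: str) -> str:
    if any(cue in prompt.lower() for cue in CURRENT_INFO_CUES):
        return web_search(prompt)
    return ask_model(prompt)

print(answer("What are today's top 5 headlines?"))  # routed to search
print(answer("Explain the hearsay rule."))          # answered from the model
```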
The other reason for potentially omitted data mentioned above is known as fencing. This is a safeguard the model’s engineers put into place to set boundaries, or a “fence,” around certain types of data, either to protect it from being accessed or to guard against generating content that could be harmful or inappropriate.
For example, fencing is created around classified government information so that no one can maliciously retrieve it. While fencing is essential to securing information, an abundance of fencing around controversial topics can also be an obstacle to fully retrieving the information requested. A good analogy is a school implementing a safeguard that blocks searches for the word “breast,” which in doing so prevents students from researching topics like “breast cancer.”
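The school-filter analogy can be reduced to a few lines of code. This is only a toy keyword filter with a made-up blocklist, but it shows how a blunt “fence” sweeps in legitimate requests along with the ones it was meant to stop.

```python
# Toy keyword "fence": the blocklist is made up for illustration.
BLOCKED_TERMS = ["breast"]

def is_blocked(query: str) -> bool:
    # A blunt match on the term blocks everything containing it.
    return any(term in query.lower() for term in BLOCKED_TERMS)

print(is_blocked("breast cancer treatment options"))  # True -- legitimate research blocked
print(is_blocked("plant identification"))             # False
```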
Practical uses
Understanding the potential shortcomings of AI is crucial in leveraging its capabilities in a responsible manner. While searching for current events may be best kept for Google, there are still numerous ways that AI can be incredibly beneficial to your practice.
Rephrasing
Rather than relying upon a thesaurus to help your message shine a bit more, AI is fantastic at rephrasing entire paragraphs or emails and can quickly adjust the tone if desired. This can be done by providing a sample statement to be rephrased or by simply providing a concept such as “write an email to the firm about our change in policy that consists of these three things…”
Summarizing
Summaries can be created for varying document types, but given concerns around privacy, many are hesitant to utilize public-facing AI tools for this purpose. There are a multitude of proprietary solutions that comply with various security protocols to reduce the risk in this area. Custom AI summarization solutions are available for summarizing depositions, expert reports, medical records, and documents in discovery platforms to name a few.
Generating
While document creation is often top of mind, the creative generative abilities of AI should not be overlooked. Many solutions are available to render images for marketing or presentations tailored to specific concepts rather than searching endlessly through stock images. Tools are available to assist in the layout and design of presentations, promotional pieces and even social media content.
Collaborating
One of the most unique things about AI is the ability to chat with it. Rather than simply taking the generated output as the solution, don’t be afraid to continue the dialogue. This is an excellent option for brainstorming and critiquing ideas: once the output is presented, simply type back clarifying thoughts and questions to shape the results to your specific needs. Ask questions about documents to get a better understanding of their contents.
Collaboration is also good for training or for obtaining viewpoints from diverse perspectives. Ask AI to behave like a specific persona, such as an angry client or an expert witness in a certain area, then proceed to have a discussion with it. Gain insight into how team members de-escalate situations by having them interact with the “angry client,” or solicit opinions from an “expert witness” as to key considerations on a topic.
Skill expansion
We cannot all be experts in every tool we use, but AI can amplify our capabilities without requiring us to learn a lot of new functionality. For example, Microsoft Copilot enables you to have Excel create charts, highlight values matching specific criteria, sort data, and perform other functions that may typically be outside of your comfort zone.
Understanding both the capabilities and limitations of AI is essential for leveraging its benefits responsibly and effectively. By recognizing what AI can and cannot do, users can better integrate this technology into their practices to enhance productivity and decision-making processes.•
__________
Deanna Marquez ([email protected]) is a co-owner of the Indianapolis-based legal technology company Modern Information Solutions, LLC. She earned the “Generative AI for Productivity Certificate” from eCornell in October 2024.