“Nothing is illegal if 100 businessmen decide to do it.”
-Andrew Young
On Feb. 7, Microsoft rolled out an AI-enhanced version of Bing, its search engine, powered by ChatGPT. On March 21, Google, the company synonymous with online searching, launched Bard, its own generative AI chatbot.
Microsoft’s AI-enhanced version of Bing is a milestone because it gives the public free access to a form of AI. That its launch elicited an almost immediate response from Google is important because it aptly demonstrates how companies and nations will react when their rivals adopt AI-enhanced technologies.
Alyssa Schroer, a search engine optimization analyst with BuiltIn, summarized AI as follows: “Broadly speaking artificial intelligence can perform tasks commonly associated with human cognitive functions — such as interpreting speech, playing games and identifying patterns. They typically learn how to do so by processing massive amounts of data, looking for patterns to model in their own decision-making.” Generative AI is a form of AI capable of generating text, images, videos and the like. Many forms of AI are “trained” on the patterns and structures found in massive amounts of data inputs, and some can readily create content in the style of specific artists and authors.
AI is already being used to develop new pharmaceuticals, materials, diagnostic methods and traffic control systems; to analyze financial data; to operate autonomous driving and flying platforms; and to create “new” art and writings. The long-term impact of these new methods of solving problems and creating content is unknown. What is known is that this new technology will create winners and losers.
Consider the effect of digitized music and peer-to-peer file sharing on the music industry. In the 1990s, access to the internet expanded at about the same time that music became available in a digital medium. This combination of new technologies enabled millions of people to share copyrighted music without compensating the owners of the copyrights. In 1998, Congress passed the Digital Millennium Copyright Act, criminalizing certain digital file-sharing practices. Civil lawsuits claiming various forms of copyright infringement were brought against Napster and Grokster, two widely used platforms for “sharing” copyrighted works. See A&M Records, Inc. v. Napster, Inc., 239 F.3d 1004 (9th Cir. 2001); and see MGM Studios, Inc. v. Grokster, Ltd., 545 U.S. 913 (2005). The plaintiffs prevailed in both lawsuits, and both platforms ceased operations.
Despite the DMCA and the demise of sharing platforms such as Napster and Grokster, widespread sharing of copyrighted music continued until the arrival of new platforms such as Apple Music and Spotify, which offer legal access to copyrighted music for free or at a very low cost. Of course, recorded music is not free. The true cost of accessing music “online” was shifted to the copyright stakeholders, who saw dramatic declines in CD and album sales in exchange for relatively small payments from the platforms that offered legal access to their works. The music industry was never the same.
Similar to what occurred in the music industry in the ’90s, widespread use of artificial intelligence, especially generative AI, will impact all manner of IP estates. First up are copyrighted works. On March 16, the United States Copyright Office issued guidance on copyrighting works generated by AI. “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence,” 88 Fed. Reg. 16,190 (March 16, 2023). The guidance states, “If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship, and the Office will not register it.” Determining whether mixed works are eligible for copyright registration is fact-specific. Applicants seeking to register works made with more than a de minimis contribution from AI must disclose that contribution in their application. After review by the Office, the applicant may be required to disclaim any portion of the work contributed by AI in order to secure a copyright for the portion made by a human being.
The legality of this guidance was upheld by the court in Thaler v. Perlmutter, No. CV 22-1564 (BAH), 2023 WL 5333236 (D.D.C. Aug. 18, 2023). The court stated, “By its plain text, the 1976 Act … requires copyrightable works to have an originator with the capacity for intellectual, creative, or artistic labor. Must that originator be a human being to claim copyright protection? The answer is ‘yes.’”
Creatives wary of generative AI have filed federal lawsuits against AI developers. On Oct. 30, the U.S. District Court for the Northern District of California dismissed (mostly without prejudice) all but one count of a class-action lawsuit brought against the creators of Stable Diffusion, a generative AI program. Sarah Andersen, et al. v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal., filed Jan. 13, 2023).
At the heart of this lawsuit is Stable Diffusion, an AI software program created by Stability. According to the lawsuit, Stable Diffusion was created in part by “scraping” more than 5 billion images from the internet. The remaining co-defendants are DreamStudio, DeviantArt and Midjourney. These co-defendants either provide access to a version of Stable Diffusion for their customers to use, or they provide their customers with images generated by Stable Diffusion.
In a similar suit, Getty Images Holdings Inc., an American visual media company and supplier of stock images, asserts that Stability, willfully and without its consent, used Getty’s trove of more than 12 million copyrighted images to train its AI program. Getty Images (U.S.), Inc. v. Stability AI, Inc., No. 1:23-cv-00135 (D. Del., filed Feb. 3, 2023). Getty is seeking $1.8 billion in compensatory damages. See Complaint, Getty, No. 1:23-cv-00135.
Performer and author Sarah Silverman, along with two other plaintiffs, filed suit against OpenAI, the creator of ChatGPT, and separately against Meta, the parent company of Facebook and the creator of its own generative AI program, LLaMA. Silverman, et al. v. OpenAI, Inc., No. 3:23-cv-03416 (N.D. Cal., filed July 7, 2023).
A suit filed by the Authors Guild has garnered attention in part because the plaintiffs include Michael Connelly, Jonathan Franzen and George R.R. Martin. Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y., filed Sept. 19, 2023).
All the plaintiffs in these lawsuits allege in part that the creators of the AI programs directly infringed their copyrights by using their copyrighted works to train their programs.
Turning to patents, Stephen Thaler used an AI program called DABUS to create two articles of manufacture, a “Neural Flame” and a “Fractal Container.” Thaler filed U.S. utility patent applications for each invention naming DABUS as the sole inventor. In lieu of listing the inventor’s last name, a requirement for a U.S. patent, Thaler wrote in the application that “the invention [was] generated by artificial intelligence.” The U.S. Patent and Trademark Office concluded that the applications lacked a valid inventor and issued a notice to file missing parts.
Thaler insisted that DABUS was the sole inventor of the claimed inventions. At an impasse with the USPTO, Thaler sought judicial review under the Administrative Procedure Act, 5 U.S.C. §§ 702-704, 706. Both the district court and the Federal Circuit sided with the USPTO. See Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022). The Court of Appeals stated, “In the Patent Act, ‘individuals’ — and thus, ‘inventors’ — are unambiguously natural persons.” Thaler, 43 F.4th at 1213. Thaler’s petition for certiorari was denied.
Does this mean that inventions conceived by AI are not patentable? Or that applicants for such inventions will have to stretch the definition of conception to include instructing the machine? Or that inventions conceived by AI were invented by the person(s) who conceived of the AI program?
As it stands, the Copyright Office, with the imprimatur of the federal courts, will not register a copyright for a work generated by AI or the portions of a mixed work generated by AI. A marketplace flooded with “free” derivative art and writings may have devastating effects on working artists and authors.
The USPTO, in compliance with the Federal Circuit’s decision, will not allow AI to be identified as an inventor on applications for U.S. utility patents. If inventions conceived by AI are not patentable, expect companies not to use AI to conceive new inventions.
And finally, the very methods used to train large generative AI models are being challenged by artists and authors whose copyrighted works were used to develop these programs.
Clearly, there are a number of important unanswered questions regarding the interaction of AI and IP. It is impossible to say how we will adapt to the widespread use of AI. We do know that AI is being used by large, established, well-heeled and politically influential stakeholders who will help shape how this technology is regulated and used. The introduction of new technology is almost always disruptive, and the law is reactive. Courts bound by statutes and caselaw from a different technological era must do the best they can unless and until they are given new statutes to resolve new questions of fairness and the public good.
Ultimately, striking the balance between innovation and disruption will require input from policymakers with the power to legislate and who can be proactive. Until Congress acts, this will remain an interesting space to watch.•
__________
John Emanuele is of counsel at Bose McKinney & Evans LLP in Indianapolis and is a member of the firm’s Intellectual Property Group. Opinions expressed are those of the author.