In recent months, the discreet behemoth perceived to provide tech companies a broad shield against liability has been in the limelight: Section 230. Embedded in the Communications Decency Act of 1996, one of the first federal efforts to regulate the internet writ large (or at least the internet as it was understood at the time), Section 230 was originally enacted to protect children from predators and indecent content percolating online.
Section 230(c)(1) states, in relevant part: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In short, and without diving into its complex contours, these “Good Samaritan” protections immunize social media companies from the liability typically attached to a traditional publisher or editor, and the courts have generally applied that immunity liberally. At the time of its codification, Congress could not have fathomed the internet’s massive, sprawling growth over the following 25 years or the shift to an information-based, digital economy. Social media companies continuously deflect accountability for managing the public discourse while profiting immeasurably beyond the traditional linear product-to-consumer transaction.
Recent legislative proposals have endeavored to curtail the perceived imbalance by attempting to amend Section 230, either by pressing archaic legal doctrines into service or by forging new constructions that implicate constitutional concerns. Some legal scholars, and even Supreme Court Justice Clarence Thomas, have asked whether social media companies could be considered common carriers by virtue of their marketplace dominance and their position as gatekeepers of speech, a classification that would afford the government broad deference to regulate the industry and sidestep First Amendment objections. The common carrier theory does not apply cleanly, however: It derives from the historic carriage of goods and from utilities whose markets constrained consumer choice (the telegraph, radio), arrangements functionally far more restrictive than the transfer of intangibles (information, ideas) across an indefinite number of websites. If social media companies are not common carriers, the remaining suggested Section 230 reforms often collide with the First Amendment. A few considerations arise: whether “content” and “speech” are interchangeable; whether any codified language could survive the applicable scrutiny; and whether the algorithms used for content moderation are themselves protected speech. If social media companies can avoid liability for how they moderate content, through both discretion (Section 230) and the algorithm’s design (First Amendment), then where does the onus to answer for harm lie? Does intertwining both protections generate a “have your cake and eat it too” paradox?
The above legislative approaches pose not only legal challenges but also possible outcomes adverse to their stated purpose. First, if platforms are wholly liable for third-party content, companies may further restrict access or, alternatively, aggressively moderate content to suppress particular viewpoints out of corporate self-interest. Second, although Section 230 has conferred a benefit on some Big Tech companies, it has conferred the same on emerging businesses across the internet, allotting space for riskier entrepreneurial endeavors. If innovative businesses are bankrupted the moment they enter the competitive sphere, the tech giants will only entrench their dominance. Third, new canons regulating the internet cannot become obsolete upon codification; rather, legislation should be malleable enough to adapt to scientific and technological developments.
So how do you hold social media companies accountable without chilling public discourse, triggering censorship, hindering innovation or unduly burdening legitimate business ventures? If amending Section 230 is not practical, could state or federal legislators reach past the “how” (the ways companies moderate content) and instead regulate the “why” (the financial incentives underlying those decisions)? Social media companies profit by manipulating user engagement with algorithms designed to cultivate our emotional investment in the product, reinforced by addictive notifications. A genuine concern is the impact these engagement decisions have on younger users, as these companies will undoubtedly be embedded in their lives for decades to come.
Tech giants may not be able to self-regulate and impose ethical guidelines because their fiscal health is tied to the attentiveness of users and the details of that use. The web is not free: Its use is paid for with our privacy and time. Social media companies algorithmically monitor and harvest every minuscule online movement for hyper-targeted advertising. Unlike ads in traditional media (the “publishers” Section 230 pointedly differentiates), social media advertisements can be tailored to each user’s personal data, gleaned from search history, the subject matter of conversations, or videos scrolled past or rewatched. Even though these companies dispute that they “sell” data, they have monetized our online activity to the millisecond by amplifying use: Our data is boxed up and tied in a bow for advertisers in a manner functionally similar to that of data brokers.
Thus, the crux of the content moderation concern: If social media companies grossly profit off the unwavering attention of consumers (from preteens to the elderly), then how do we ensure truthful transparency and appropriate regulation of their hand in curating social discourse? Another, more direct legislative approach may be plausible: minimizing the incentives to collect and monetize personal data, which, in turn, would force adjustments to the algorithms and promote proper application of platforms’ community guidelines. Social media companies are broaching new territory as a nuanced kind of data broker. Certainly, this problem can be addressed by congressional regulation, but states are sovereign and not dependent on federal solutions. State legislators are primed to enact their own regulations for data privacy and the use of personal information while addressing the parallel concerns above. The landscape is unrestricted and in need of legal architecture that addresses social media platforms’ use of personal data by means other than “selling” and that possibly imposes a financial deterrent against both the over-collection of data and the manipulation of social discourse.
A scaled penalty or fine, dictated by the size of the company and the amount of data collected, would let entrepreneurs continue to benefit from Section 230’s liability shield while discouraging companies from over-surveilling users, collecting endless amounts of personal data and exploiting public discourse for monetary gain. Any regulation of the internet should heed the current legal quandaries abutting Section 230: A failure to account for societal and technological change will be only a temporary solution, and we will, once again, find ourselves applying antiquated law to ever-changing technology.
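To make the scaled-penalty idea concrete, here is a minimal sketch, purely hypothetical, of how such a formula might tier a fine by company size and data volume. Every threshold, rate, and input here (the revenue cutoffs, the per-record rates, the `annual_revenue` and `records_collected` figures) is an illustrative assumption, not drawn from any existing or proposed statute.

```python
# Hypothetical sketch of a scaled data-collection penalty.
# All thresholds and rates below are illustrative assumptions,
# not taken from any existing or proposed law.

def scaled_penalty(annual_revenue: float, records_collected: int) -> float:
    """Return a fine that grows with company size and data volume."""
    # Small-business exemption: companies below the threshold pay nothing,
    # preserving Section 230's benefit for emerging entrepreneurs.
    if annual_revenue < 25_000_000:
        return 0.0

    # Per-record rate, stepped up for the largest platforms so the
    # penalty scales with both market power and over-collection.
    if annual_revenue < 1_000_000_000:
        rate_per_record = 0.001   # $0.001 per record (illustrative)
    else:
        rate_per_record = 0.01    # 10x rate for billion-dollar companies

    # Cap the fine at a small fraction of revenue so it deters
    # over-surveillance without bankrupting a legitimate business.
    return min(records_collected * rate_per_record, 0.04 * annual_revenue)

# Example: a large platform holding 500 million personal-data records
print(f"${scaled_penalty(5_000_000_000, 500_000_000):,.2f}")  # $5,000,000.00
```

The tiered exemption and revenue cap reflect the trade-off the article identifies: the deterrent should bite hardest on the companies collecting the most data while leaving room for riskier entrepreneurial endeavors.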
Under current authority, and without modern state or federal legislative tools in place to act affirmatively, state attorneys general are left with few options. Our pursuit of accountability on behalf of Hoosiers is constrained to statutorily articulated investigatory tactics: asking social media companies the tough questions and demanding answers in connection with abusive, deceptive or unfair trade practices. Yet in any resulting litigation we are limited to practices that fall outside the bounds of Section 230 immunity.
The path to imposing accountability is precipitous and narrow. The arrows in our quiver are imperfect and finite. But there is a legitimate concern that social media companies are unlawfully prospering by harming society and our children’s mental health, invading our privacy and manipulating social discourse. We should not sit idly by at the mercy of these tech titans; the stakes are too high.•
• Christa Kumming is a deputy attorney general in the Consumer Protection Division of the Office of the Indiana Attorney General. Opinions expressed are those of the author.