Unpopular Views on AI and Its Impact on Society
In a recent conversation with Jason Howell for his upcoming AI podcast on the TWiT network, I reflected on the potential stigma surrounding artificial intelligence, particularly large language models like ChatGPT. The misuse of this technology by corporations and the way media portrays it may lead the public to question the reliability of machine-generated content. That could produce a public backlash the technology's proponents seem not to anticipate.
While some in the AI field boast that their technology will revolutionize society, they risk generating skepticism toward generative AI instead. That skepticism arises from the technology's potential to mislead users, clutter our information landscape with irrelevant content, degrade news quality, complicate customer service, disrupt education, threaten employment, and harm the environment. What’s appealing about that?
Here are my likely unpopular views on large language models: their inappropriate use in search and news, the improbability of establishing effective safeguards, and the reality that we already face an oversaturation of content. However, I do acknowledge some limited applications for synthetic text and generative AI. Dr. Emily M. Bender, co-author of the influential Stochastic Parrots paper, suggests specific scenarios where such technology can be beneficial: instances where the structure and fluency of language matter more than factual accuracy, where biases can be mitigated, and where originality is not essential.
I previously contemplated how large language models could enhance literacy for those intimidated by writing, thereby promoting inclusivity. For instance, Google’s NotebookLM, which I learned about from its editorial director, Steven Johnson, serves as a tool to assist writers in organizing their research rather than generating content outright. This could signal a new way of interacting with news.
I appreciate the advancements made possible through machine learning, as seen in tools like Google’s Search, Translate, Maps, Assistant, and autocomplete. I defend the internet (which is the subject of my next book) and social media, but I approach this latest trend in AI with caution. This isn't due to the inherent dangers of generative AI, but rather the foolish applications and unsettling motives of its current custodians.
Here are my specific critiques regarding large language models like ChatGPT:
Utilizing generative AI in search contexts is irresponsible. When using platforms like Bing or Google, users expect a credible list of sources or a trustworthy answer based on reliable information. Allowing an LLM to generate responses, knowing it lacks factual comprehension, is fundamentally misguided.
News organizations should exercise extreme caution in using generative AI for reporting. For years, wire services have successfully employed AI to produce straightforward news reports from verified and structured data, such as finance and sports results, precisely because the scope is so narrow (a rough sketch of that approach follows below). Using LLMs trained on vast swaths of the web to write news articles, by contrast, is reckless: they merely predict words with no grasp of truth, and they reproduce the biases in their training data. I support using AI to help journalists organize information or analyze data, but beyond that, such technology should be avoided.
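To make that distinction concrete, here is a minimal, hypothetical sketch (the team names, field names, and template are all invented for illustration) of the kind of structured-data-to-text generation described above: the output is assembled entirely from fixed phrasing and verified fields, so there is nothing to hallucinate. An LLM, by contrast, produces each word by predicting what is statistically likely to come next.

```python
# Hypothetical sketch, not any wire service's actual system: template-driven
# text generation from verified, structured data. Every word in the output is
# either fixed boilerplate or a field from the record, so nothing is invented.

from dataclasses import dataclass


@dataclass
class GameResult:
    home_team: str
    away_team: str
    home_score: int
    away_score: int
    venue: str


def recap(game: GameResult) -> str:
    """Render a short, formulaic recap strictly from the structured record."""
    if game.home_score == game.away_score:
        return (f"{game.home_team} and {game.away_team} played to a "
                f"{game.home_score}-{game.away_score} draw at {game.venue}.")
    if game.home_score > game.away_score:
        winner, loser = game.home_team, game.away_team
        high, low = game.home_score, game.away_score
    else:
        winner, loser = game.away_team, game.home_team
        high, low = game.away_score, game.home_score
    return f"{winner} defeated {loser} {high}-{low} at {game.venue}."


if __name__ == "__main__":
    print(recap(GameResult("River City", "Lakeside", 3, 1, "River City Stadium")))
    # -> River City defeated Lakeside 3-1 at River City Stadium.
```

The narrowness is the point: a system like this can say only what the data supports, which is why automating earnings and box-score briefs has been safe in a way that open-ended reporting is not.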
The world does not need more content. This issue can be traced back to Gutenberg, whose invention commodified conversation and creativity into what we now label as content. Journalists have come to mistakenly believe that their worth lies solely in content production rather than in serving and fostering relationships. This industrial mindset drives the push for more content to generate clicks and ad revenue. Introducing AI into this mix only exacerbates the problem. It's time to tell the machine to take a break.
Building foolproof safeguards against misuse of AI is unrealistic. We frequently encounter reports of LLMs providing false or defamatory information. It's crucial to understand that LLMs do not lie or fabricate in the human sense; they lack any comprehension of truth. The only limits on their output depend on developers' ability to preemptively identify and block every potential misuse. What can we do? We must hold accountable the people who misdirect the machine, just as we would hold a person, not the printer, responsible for printing and distributing something libelous from home.
AI will not destroy democracy. There are constant warnings that AI will generate an overwhelming amount of disinformation, jeopardizing democratic processes. However, we already contend with a significant volume of misinformation; adding more may not alter the situation. Research indicates that online disinformation played a minimal role in the 2016 election. We need to address the broader issues surrounding the willful gullibility of those who spread misinformation, rather than falling prey to techno-panic.
Perhaps LLMs should be regarded as tools for fiction. ChatGPT is an impressive demonstration of technology's ability to mimic human expression. If it were primarily used for creative writing—like stories, songs, or poems—there may not be the same level of anxiety regarding AI's role. However, the reality is that creativity often lacks financial viability, which diminishes the attraction for investors. The issue lies not with the technology itself but with the capitalist framework surrounding it.
Training AI on existing content may qualify as fair use. The transformative nature of AI-generated content could mean that training models on legally acquired material does not infringe on copyright. The legal ramifications of generative AI concerning outdated copyright laws will take years to resolve. Media corporations are eager to hire lawyers to ensure AI companies compensate them for using news content for training, reminiscent of past legislative efforts that compelled search engines and social platforms to pay for linking to news.
Machines should have the right to learn, similar to humans. Denying machines the ability to learn from existing material could set a dangerous precedent for restricting human access to information. Thus, defending the rights of machines may inadvertently protect our own rights.
Restricting LLMs from quality content will worsen their output. Paywalls that limit access to quality information harm our democracy; they likewise leave the field open to misinformation and bad actors, in news and in machine learning alike.
The copyright status of machine-generated content remains uncertain. A federal court recently upheld the US Copyright Office's decision to deny copyright protection for AI-generated work. While I welcome the reexamination of copyright law for the sake of a larger public domain, the ruling addressed only work produced entirely by the machine, not the far more common case in which humans are involved. We must reconsider copyright as a whole, as I discuss in my book, The Gutenberg Parenthesis. The likelihood of such a reevaluation happening? Comparable to the chances of AI leading to humanity's downfall.
The most concerning aspect of current AI developments is not the technology itself but the ideologies of its creators. I won't delve deeply into this now, but the questionable philosophies propagated by some AI advocates, summarized by the acronym TESCREAL, are troubling. These notions serve as justification for their wealth and influence. Their philosophy reads like a shallow essay on utilitarianism bordering on eugenics, and it is all the more troubling given the power these figures hold. Media outlets should focus on these discussions and include voices from responsible scholars across various fields, particularly the humanities. Additionally, government initiatives should promote open-source development and investment to foster the competition needed to hold these influential figures accountable.