What’s killing America, even more than drugs and guns, is the swarm of second- and third-tier “conspiracy” websites and YouTube channels that report misleading information to boost revenue and/or push agendas that are usually racist, sexist, homophobic, or antisemitic in nature.
Disinformation is certainly nothing new in the right-wing news funnel, so it’s understandable when Trump supporters cry foul about their freedom of speech. But there are always repercussions for our actions, and the First Amendment only protects us from government punishment for expressing an opinion. It does not cover speech that causes real harm:
- Yelling “fire” in a crowded theatre is not freedom of speech
- Calling 9-1-1 to report an emergency that didn’t happen is not freedom of speech
- Defaming the character of a person or business with the intent to mislead is not freedom of speech (as Fox News recently learned the hard way)
Artificial intelligence (AI) has become an increasingly popular tool in recent years, with its ability to perform tasks that were previously thought to be impossible. While AI has the potential for good, it also has the potential to be used for nefarious purposes, such as generating fake news stories.
According to The Byte, a man in China was recently arrested for using AI to write a fake news story that falsely claimed a train accident had killed nine people. This marks the first publicly reported arrest under China’s new AI regulations.
If convicted, he faces a sentence of five to ten years in prison.
China’s New AI Regulations
Chinese authorities recently enacted the Administrative Provisions on Deep Synthesis for Internet Information Service, which ban AI-generated deepfakes unless they are clearly labeled and can be traced back to their original source. Under the rules, anyone using deep synthesis technology to recreate somebody’s voice or image must first contact that person and obtain their consent.
Deepfakes are videos in which a person’s face and/or body has been digitally altered using AI so that they appear to be someone else. While this technology has so far been used largely for entertainment, such as Matt Stone and Trey Parker’s web series Sassy Justice, it can also be put to nefarious ends, such as generating fake news stories.
The Dangers of AI-Generated Fake News Stories
AI-generated fake news stories have become a growing concern in recent years.
While traditional fake news stories are typically written by humans, AI-generated stories can be produced far faster and at far greater scale, which makes them much harder to detect and combat.
Fake news stories generated by AI can be used for a variety of purposes, including spreading propaganda, manipulating public opinion, and even causing panic.
In the case of the man arrested in China, the false story about the train accident could have spread unnecessary fear and panic among the public.
The Potential for AI to Be Used for Good
The use of AI for malicious ends is worrisome, but the same technology can be put to positive use. AI can handle jobs that are too complex, or simply out of reach, for humans, such as analyzing huge datasets or performing intricate computations.
AI is also improving the lives of people with physical or functional limitations. For example, AI-powered hearing aids can give people with hearing impairments a better listening experience, while AI-enabled prosthetics can provide greater mobility.
China’s new regulations on deep synthesis technology are an important step toward curbing AI-generated fake news. Let’s hope the United States and other countries around the world adopt a similar model, because fake news can be extremely hurtful and dangerous in numerous ways.