Abstract: Bringing an AI-driven tool into the battle between opposing worldviews may never move the needle of public opinion, no matter how many facts you've trained its algorithms on.
Disinformation occurs when someone who knows the truth wants us to believe otherwise. Better known as “lying,” disinformation is rife in election campaigns. But under the guise of “fake news,” it has rarely been as pervasive and toxic as it became in this year’s US presidential campaign.
Sadly, artificial intelligence has been accelerating the spread of deception to a shocking degree in our political culture. AI-generated deepfake media are the least of it.
Instead, natural language generation (NLG) algorithms have become a more pernicious and inflammatory accelerant of political disinformation. In addition to its demonstrated use by Russian trolls these past several years, AI-driven NLG is becoming ubiquitous, thanks to a recently launched algorithm of astonishing prowess. OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) is probably generating a fair amount of the politically oriented disinformation that the US public is consuming in the run-up to the November 3 general election.
The peril of AI-driven NLG is that it can plant plausible lies in the popular mind at any time in a campaign. If a political battle is otherwise evenly matched, even a tiny NLG-engineered shift in either direction can swing the balance of power before the electorate realizes it’s been duped. In much the same way that an unscrupulous trial lawyer “mistakenly” blurts out inadmissible evidence and thereby sways a live jury, AI-driven generative-text bots can irreversibly influence the jury of public opinion before they’re detected and squelched.