The initial concerns about releasing the algorithm centered on the manufacture of “fake news.” For example, if a “real” human writer supplied a seed sentence like “Aliens landed in New Jersey this morning,” the AI would generate an entire, supposedly believable story that could incite panic à la the Mercury Theatre’s “War of the Worlds” broadcast of October 1938.
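For readers curious what that prompt completion looks like in practice, here is a minimal sketch. It assumes the Hugging Face transformers library and the publicly released gpt2 weights, neither of which the essay itself names; the model simply continues whatever seed text it is given.

```python
# A minimal sketch of prompt completion, assuming the Hugging Face
# "transformers" library and the publicly released "gpt2" weights
# (assumptions, not anything named in the essay).
from transformers import pipeline

# Load a text-generation pipeline backed by GPT-2.
generator = pipeline("text-generation", model="gpt2")

# The seed sentence a "real" human writer might supply.
prompt = "Aliens landed in New Jersey this morning"

# Ask the model to continue the seed into a longer passage.
result = generator(prompt, max_length=80, num_return_sequences=1)

# Print the model's continuation of the seed sentence.
print(result[0]["generated_text"])
```

Nothing in the output labels itself as machine-written, which is precisely the worry the essay goes on to raise.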
Putting its fear of fake news aside, the company stated its reason for changing its policy and releasing GPT-2: “We’ve seen no strong evidence of misuse so far.” That’s reassuring, isn’t it? So, how is one to know that this essay, written after the release of that justification, isn’t the first among many yet-to-be-generated paragraphs written by AI rather than by a human?
You really don’t have any guarantees, do you? Unless you know me personally, you have to rely on assumption and trust that I exist, and even then you have no guarantee that I haven’t, like Darth Vader, “gone over to the Dark Side” and turned to a writing algorithm because, for example, I am too lazy to type and too dumb to come up with the more than 1,200 essays I’ve posted on this site so far. What’s a modern reader to do with all these websites, all these blogs? What’s a modern reader to do with all the headlines on all those “news” websites?
Although I want to assure you that I do exist, I’m sure you have by now at least toyed with the thought that I might not be real (or sane). But that’s okay on this website, because my stated purpose is to offer points of departure for your thoughts, which, I think it’s safe to assume, you, and not some artificial intelligence algorithm, are thinking. You can trust yourself, right?
Say you believe your thoughts have not been somehow manipulated and that what you think is the product of your own mind. Then what role has cultural inculcation played in fashioning what and how you now think?
*Cohen, Nancy. “Fake news via OpenAI: Eloquently incoherent?” Tech Xplore, 9 Nov. 2019, https://techxplore.com/news/2019-11-fake-news-openai-eloquently-incoherent.html. Accessed 11 Nov. 2019.