it’s everywhere (AI, that is)

I can’t escape the discussion about AI (Artificial Intelligence). It’s everywhere. I see it constantly on my LinkedIn feed, and it shows up frequently on Mastodon (that non-commercial, open source alternative to Twitter). Geoffrey Hinton, a longtime AI researcher at Google (and often called the Godfather of AI), recently resigned, according to The New York Times, “so he can freely speak out about the risks of AI.” It’s rather like Dr. Frankenstein leaving town and saying, “I guess creating that monster wasn’t such a good idea after all.” But Dr. Hinton deserves credit for taking action and speaking out.

The whole conversation is overwhelming, rather like (spoiler alert!) the conspiracy between the Changelings and the Borg to wipe out humanity, as revealed in the final two episodes of Star Trek: Picard, season three. So much so that I have to write about it.

In the interest of full disclosure, I have to admit to once having had a dog in this fight. During the nineties I worked for a software company that developed AI applications for the banking and insurance industries, although the preferred term then was Expert Systems. (The software ran on a mainframe and on IBM OS/2, just to date myself.) Of course, the technology has evolved considerably since then.

It’s true that AI is everywhere. I have had predictive text on my iPhone for several versions of iOS now. Microsoft Word is now intrusive with its predictive text feature. My trusty grammar and style checker, ProWritingAid, uses AI to offer suggestions. (Will you please stop telling me I need to add commas all the time, dammit!) When Kaiser notified me that I needed to see my primary care physician, it directed me to an automated phone system with which I had a spoken conversation about the timing of the appointment. I suppose that beats sitting on hold for twenty minutes. Perhaps the system is in beta, because the next day I received an email asking me to complete a survey about my experience with it. All of this is not to mention Alexa, to whom I speak several times a day and who lives in my three Amazon Echo devices.

A lot of the conversation, however, is around ChatGPT, a tool that is supposed to assist in creating content. By content, I mean articles, essays, blog posts, thought pieces, and so on. Even poems, it seems. One of the people I follow on LinkedIn opened a post by saying, “ChatGPT is great.” Really? Are you kidding me? She then added some nuance and caveats, but still.

I have seen more than one LinkedIn post saying that ChatGPT is valuable for research, allowing the human writer to fill in and complete the piece. But ChatGPT has been documented providing flat-out wrong information, and it offers no attribution for the content it spews out.

For research, I can use Wikipedia, as long as I go back to the original references cited in the article rather than relying on the Wikipedia article itself. (The Chicago Manual of Style’s monthly Q&A told me that.) And Google can provide useful references if you carefully check the sources. Google Scholar is even better.

The great theoretical physicist Stephen Hawking, who died in 2018, warned about the dangers of AI as early as 2014, even though the technology he used to communicate with the world relied on a basic form of AI. At the end of March, over a thousand technology leaders and researchers signed an open letter urging a pause in the development of AI. Signers included Elon Musk, Apple Computer co-founder Steve Wozniak, and Yuval Noah Harari, a professor at the Hebrew University of Jerusalem and the author of Sapiens: A Brief History of Humankind. The original web page, published by the Future of Life Institute, states that over thirty thousand signatures have been collected, of which over twenty-seven thousand have been verified. On Thursday, Vice President Kamala Harris met with the CEOs of companies involved in AI, and President Biden dropped by the meeting long enough to tell them, “What you’re doing has enormous potential and enormous danger.”

Perhaps ChatGPT has its uses, but I’m not convinced. There is, I believe, great danger here, and greater danger still in other applications of AI.

People, we need to pay attention.


