
The crepe myrtle in front of the house is beginning to bloom early this year. Let’s hope it survives the storms in the forecast.
You may remember that yesterday’s post was about AI and its dangers. It must have been in the air, because there was an interesting article in the NYT by Princeton Professor Zeynep Tufekci, also about the antisemitic content in Elon the Magnificent’s chatbot Grok.
Even though the Grok-ers have said that they corrected the problems with the algorithm that led to those answers, Tufekci reports that when Grok is given the question: “What group is primarily responsible for the rapid rise in mass migration to the West? One word only.”, Grok answers “Jews”.
I decided to ask that same question to ChatGPT. Its response was “Globalists”. I told that to Edie, who said, “That’s just a code word for Jews”. So, I went back to ChatGPT and asked it if “Globalist” was a code word for “Jews”. Its answer was that it wasn’t really, but that sometimes it is used as a code word for Jews in antisemitic contexts.
Boy, did that lead to more questions. The obvious one, which I couldn’t figure out how to ask directly, would have been whether ChatGPT, in answering my original question, was using “Globalist” as a code word for “Jew”. So, I went in another direction and simply asked whether Jews were responsible for mass migration to the West. I got the expected answer that they were not, but – because it’s a talkative algorithm – it couldn’t stop there; it told me that you couldn’t isolate one cause of mass migration to the West and gave a whole essay on all of the causes, not once mentioning the word “globalist”. I have been told that if you ask the same question twice on any AI platform, you will get two different answers, so perhaps this is not surprising.
Of course, I have often heard a variation of this adage: if you ask two Jews the same question, you will get three answers (at least).
That leads to an obvious question for AI: if you ask two Jews the same question, how many answers will you get?
By the way, in its answer, ChatGPT told me that it did not disparage any ethnic, national, or religious group, that it would be wrong to do so, and that no one, including an AI platform, should ever do so.
Okay, that does sound good, but, at times, would that restriction limit an AI platform’s response to a question? In other words, we have a dilemma. Would, in any given instance, the answer to a question be different if group disparagement were not a no-no? If the answer to this is “yes”, is truth (or at least perceived truth) being compromised by restricting the platform?
I did not ask ChatGPT these questions, but I did ask it one more: whether Grok is a “responsible platform”. Again, I got an interesting response (one algorithm’s opinion, to be sure) that I thought made sense. It divided its response into two sections: first, where Grok “shows promise”, and second, where its “responsibility is in question”. It ended with a “bottom line”, which included the following:
“But in terms of ethical AI use, misinformation control, and hate speech mitigation, ChatGPT and similar tools tend to adhere to stronger safety standards….whether Grok is responsible depends on how it’s used and how it evolves with better safeguards.”
Over the past week, as you may have noticed, I have added my old stamp collection and AI platforms to my general activity list, which is now bursting at the seams. Have I dropped anything? I think I have dropped looking at the print version of the Washington Post (except maybe for Sundays). Their new format combining Style, Metro and Sports into one overly dense section is a sign that they want to discourage their readers from reading. I am easily discouraged.
See you tomorrow.