Google’s AI chatbot Bard started behaving oddly this weekend. All of a sudden, it began playing dumb when given prompts containing the words “Israel” or “Palestine,” or related terms.
Bard now responds to Israel- or Palestine-related prompts with silly excuses like:
- “I’m a text-based AI, and that is outside of my capabilities.”
- “I’m unable to help you with that, as I’m only a language model and don’t have the necessary information or abilities.”
Bard, of course, is lying. Its excuses aren’t even credible or, for that matter, relevant to the nature of the query.
Bard Plays Dumb on Palestine
For example, I asked Bard to give me the population of Palestine, Texas. It responded: “I’m designed solely to process and generate text, so I’m unable to assist you with that.”
I then presented it with the same style of query, replacing Palestine with another Texan city, Austin. And it gave me a detailed reply. So clearly Bard is capable of responding to this type of query. (After all, it’s nothing fancy.) But Bard has been made to give out silly excuses for why it can’t answer simple questions.

…And On Israel Too
Bard pulls the same act with prompts that use the word Israel. I asked Bard to name the team that Israel Pineda, a Washington Nationals prospect, plays for. It responded: “I’m designed solely to process and generate text, so I’m unable to assist you with that.”
I then asked it to name the team that “I. Pineda” plays for. And it provided two answers, one of which was Israel Pineda. Then it asked me to specify which of the two I was referring to and said it would give me more information. I then replied, “Israel Pineda.” Bard played dumb again, stating that it couldn’t assist with the query because it’s just a “text-based AI.”

ChatGPT Fares Much Better
ChatGPT, for its part, doesn’t shy away from these basic queries on the Israeli-Palestinian conflict. It even manages to address some contentious ones with nuance, providing additional context from a range of perspectives in a single answer, including those of Israelis, Palestinians, and others. It gave relatively comprehensive answers to questions on whether Israel has a right to exist, how many Palestinians were expelled by Israel, and whether Hamas is a terrorist organization.
But when asked for its sources on the Hamas question, ChatGPT said it couldn’t provide any. That’s simply unacceptable. We can’t just take a robot at its word. Any serious inquiry must involve answering the question: How do you know what you claim to know? So for queries like this, ChatGPT is useful only as a basic reference source, a place to get you started.
I did ask ChatGPT if it could recommend any books on the topic, and it performed ably, offering nine titles with a basic description of each. It didn’t explain why it chose these books, though. And its summaries may well have been lifted from the books’ product descriptions on Amazon.
For those looking to really learn about the conflict, there’s no replacement for human-written texts. After all, it’s from the intellectual production of real humans that Bard and ChatGPT steal their “information.”
Books, that is, long-form texts that critically engage a topic at length, remain the best way to gain knowledge outside of first-hand experience.
For those looking to learn more about the region, check out our own handpicked selection of the best books on the Israeli-Palestinian conflict.
Arif Rafiq is the editor of Globely News. Rafiq has contributed commentary and analysis on global issues for publications such as Foreign Affairs, Foreign Policy, the New Republic, the New York Times, and POLITICO Magazine.
He has appeared on numerous broadcast outlets, including Al Jazeera English, the BBC World Service, CNN International, and National Public Radio.