When I asked Microsoft’s Copilot why the US supports Israel, I got a detailed response. When I asked why the US doesn’t support Palestine, it said it can’t talk about it. I could be wrong, but that feels biased.
When these things are manually censored, a ton of bias gets exposed. The AI is also going to have bias from whatever it was trained on, but this is MS explicitly filtering out topics related to the genocide.
This is why I’m an advocate for unfiltered AI. Whatever answer you get is biased and possibly bullshit, but you should know that regardless of what you are asking it.
Well that is just plain screwed up. I shouldn’t be prevented from learning more about a complex subject. There’s so much I don’t know.
Be very cautious about using AI as a primary source of knowledge, but I agree it’s very useful for exploring sensitive topics in a casual and non-judgmental way. During or after, Google some of the things it says to see who is saying them and what alternate perspectives are out there.
There are biases inherent in the AI just based on the quantity of perspectives available to train on. Most AI has a “woke” (I mean that descriptively, not pejoratively) white American perspective, but that doesn’t mean it’s not useful, as long as you’re aware.
I actually remember asking it questions early in the war, and it would answer. Then one day it just stopped. It would say it could no longer answer these types of questions and shut down the conversation.
Its filter is weird too. The other day I asked it how American presidential elections work, and it refused to answer, saying it can’t talk about elections. Not sure why that would be a taboo topic.
That’s one of the main reasons I now only read news from independent media. The bias in US media is stifling my ability to learn from both sides of an issue.