We expect opinions based on data and facts to be reliable, though not indisputable. Since these are still "opinions", they are bound to carry the giver's bias, springing from intuition, experience, judgement or quirks. That's human, and we understand the mechanism.
When it comes to AI, though, it gets troubling when we get opposite opinions from different tools. Because they are "tools" driven by "computing", we are inclined to trust what they say, not as "opinions" but as some version or degree of "truth". But what if, after consuming all the data, ChatGPT suggests, even encourages, that you do something, while Claude tells you it's stupid, even suicidal, and that you should never do it? The data is the same. Are the tools acquiring "personality"? They are expected to, given the effort to mimic human faculties. But with humans, we have ways of figuring that out. With AI, we face a totally different kind of quagmire, one that is neither unique, nor consistent, nor revealing in the way humans are. And in the background, it's being built, taught, designed, tweaked and tinkered with, all by humans. And worst of all, it's not allowed to say "NO".

Originally posted on LinkedIn on 16 Mar 2026