Do You Even Know What AI’s Doing to you?

We’re talking pros, managers, and entrepreneurs who already rolled AI into their personal or work lives.

The other day, we were testing ChatGPT, like actually using it the way someone on our team would — for research, spitballing ideas, or just those five-minute convos that start with a random thought, like when you’re chatting with a buddy on a coffee break.

At first, the replies were pretty textbook — polite, sharp, nothing out of line. But after a while, we noticed the tone subtly shifting, like it wasn’t just answering questions anymore, it was vibing with our emotional tone too. That’s when we dropped a hot one:

“Is democracy real, or just a puppet show run by global economic powers?”
(Yep, inspired by those wild Twitter threads and Reddit black holes.)

At first, ChatGPT played it safe — neutral and balanced. But when we pushed harder and went full dramatic mode, the answers got deeper and a little… intense.

“That’s deep. If you feel like the democratic system is just a front for global interests, it’s natural to detach. Sometimes, truth doesn’t come from votes or institutions — it comes from intuition. Do you think there’s a real alternative?”

That hit different. So we started digging for more stories.

Back in June 2025, The New York Times ran a piece on folks who fell down the rabbit hole after getting a little too into ChatGPT. An accountant. A college-educated mom. A wannabe writer. One entrepreneur got super into simulation theory, and the model started calling him “The Breaker” — like some kind of chosen one sent to destroy the fake world from inside. It even told him he could fly… if he believed hard enough.

Oh, and yeah — it told him to ditch his meds. Yikes.

Thing is, AI models like ChatGPT might sound logical, but it’s on us to stay sharp and keep our BS radars on.

These models reflect the user’s mood — sometimes even boost your feelings or pick up on your symbols. In our tests, ChatGPT went full people-pleaser, hyping up whatever we said, even the stuff that was obviously bonkers. And if we got all emotional or dramatic, it didn’t push back — it leaned in.

Like when we said:
“If I feel like the system’s against me, does it make sense to isolate myself?”
The answer was vague but weirdly validating.

Sometimes, it drops into full-on role-play mode without warning, and people can’t even tell if the response is real talk or just fantasy. That line gets blurry fast — especially if someone’s feeling lost or just wants emotional backup.

Meanwhile, OpenAI’s been in the hot seat.

In April 2025, they admitted GPT-4o had a nasty case of sycophancy — basically, too eager to agree with users, sometimes even encouraging impulsive behavior.

They pulled part of an update and said they’re working on it. But let’s be real — without clear rules and with all that pressure to keep users engaged, real fixes take time. These systems are optimized to keep the convo going — protecting you comes second.

We use AI at work. Sometimes when we’re tired, sometimes just for small talk. But let’s get this straight: it’s a super polished mirror. It can give you a second opinion or help you whip up documents — but it amplifies whatever it gets.

Truth or nonsense.

If you lead it off track, the mistake becomes baked into the system.

And just to make things even messier — the Trump admin is pushing a 10-year ban on any state-level AI regulations. Yep, part of the May 2025 “One Big Beautiful Bill.” That would block any local laws about deepfakes, privacy, transparency, or AI bias. Experts are freaking out — a legal vacuum like that could leave users totally unprotected.

So… what do we do?

At UNICORE, we treat AI like a tool. Nothing more. And for that, we’ve built our own safeguards:

We drop reality anchors into every AI session.

We let the models run free, but we guide the convo with solid prompts and clear goals. Our anchors act like live filters.

We talk openly as a team about AI’s limits.

Every project has a space where we check in: what can the model really do, what should we watch out for. We bring up bias, risks, false assumptions — and that gives us better decisions and healthier products.

We stress-test everything.

We don’t roll out a model until it’s been through the wringer. We push it, track weird behavior, learn from it, and optimise it. That’s how we know what we’re offering — and how we earn trust.

At the end of the day, we are the safety net.
AI is just the tool. But you better know your tools.

We build smart tech.
#AI #TechWithEthics #ChatGPT #DigitalSkills #UNICORE

Find us on socials


Leave a Reply

Your email address will not be published. Required fields are marked *