Artificial Experience

I’ve used pro subscriptions to multiple consumer-grade AI chat bots for a few years. They’ve been great for things like starting research into complex topics, generating alpha versions of content, learning new coding techniques, and generating fun itineraries for trips.

In that time I’ve largely danced around the hallucination and memory problems by treating AI output as a place to start. I know that I am going to take what they produce into my own workflow, or to some other human, to make something real. Even for a trip itinerary, each item is checked against a website or a local guide because 50-60% of the entries are wrong (closed locations, inaccurate menu summaries, etc.).

Two weeks ago, I decided to use more AI to help me find professional opportunities in the coming year. I wanted to get in the habit of using these tools to speed up simple, but time-consuming tasks. Things like corporate research and resume revisions.

I have spent 3-4 hours per day for two weeks working with projects and prompts within ChatGPT and Gemini. I’ve uploaded dozens of files, added countless contextual sources, and templatized a dozen prompts for specific needs.

And I’ve been severely disappointed. These tools aren’t just early-stage: they simply don’t work very well.

There are two reasons.

1. The hallucinations and lies never stop

At one point I had a ChatGPT project with 15 chats and about a dozen files. It was getting to the point that I could easily prompt the bot to give a summary of a job posting, providing a standardized output. We started down the road of having the bot take resumes it had written and create new ones based on a new opportunity.

There was a brief illusion of understanding between us.

Around the time of the fourth resume, GPT-5 started making up and inserting skills. Not skills taken from the opportunity. Not skills from an earlier resume revision. Skills it incorrectly inserted to address keywords in a role description.

After ten or fifteen minutes of talking with it, skills were adjusted back to reflect my own, curated for the role.

But the new output removed all the impact metrics from experience bullet points, turning the whole resume into a wishy-washy stream of buzz words.

It took ten minutes to get it to put one set of impact metrics in my most recent professional experience. Not only were the other metrics still missing, but the bullets suddenly referenced skills and impacts not listed in any other resume revision.

The issue here isn’t just that the bot is changing my resume in unexpected ways. The issue is that the bot is inventing “reasons” to insert lies into, or remove truths from, the text.

Eventually I did the smart thing. I went back to manually creating copies of resumes and revising them by hand.

2. There is no learning, only coddling

As others have discussed, the primary job of every consumer-grade chat bot is to coddle the user. Every company seems to program their bots to speak exactly how they wish their mother had.

Once, I called out the fact that the bot had removed an entire section from the most recent resume. We seemed to have agreed on a format. We had generated that format at least six times. And then it had diverged from the format.

The bot responded with “You’re right — I removed the Professional Summary … I lost sight of how important that section is… It tells the reader who you are before they dive into bullets. …”

No reference to the format we agreed on. No reference to the many other revisions that included the section. Just a “You’re such a smart and good boy for noticing, let’s put it back.”

I have dealt with the memory issue in other ways for other projects. But there is something dangerously stupid about a bot that congratulates the user on finding something it missed or “forgot.”

What if there’s a rationale for removing the section for this opportunity? Then the bot should argue for it. What if the bot is getting confused? Then it should tell the user what is confusing. What if the bot is forgetting? Then it should come clean.

We keep being told these bots are going to replace junior employees. But we coach or fire junior employees who don’t stand behind their work, or who don’t ask for help.

And I’m getting to the point of firing general use AI chat bots for tasks that don’t require search and summary functions. They can’t learn, they struggle to remember, and they invent reasons to tell lies. It’s just faster to work without them sometimes.
