Homeopathic Personality – Why We Need to Talk About How We Use AI

🧑 🤖 Teamwork of human and AI
Note: This article has been automatically translated by a large language model.

AI makes us more productive, more eloquent, and faster — but bit by bit, it takes away our own voice. Whether you’re writing a LinkedIn post, drafting a cover letter, or developing code, the question is always the same: How much of this is still me? Here are some thoughts, and a proposal.

Job Applications: The Paradox of Perfection

AI-assisted cover letters have transformed recruiting. Today anyone can submit a flawlessly written application — which is exactly what makes them worthless. Spelling mistakes have paradoxically become a mark of quality, because they signal that a real person wrote the text. AI doesn’t make mistakes; people do. The savvy ones deliberately slip in errors to make their letter feel more authentic.

When a candidate with B1 language skills submits something in impeccable prose, it becomes obvious by the first interview at the latest. Some applicants are upfront about having used AI. That kind of honesty builds trust.

To adapt, we added a short video screening to the hiring process at CityLAB. Thirty minutes is usually enough to get a sense of whether a deeper conversation is worth it — for both the candidate and the organization.

Social Media: Losing Your Own Voice

On LinkedIn and other platforms, everyone sounds polished and professional these days. AI makes that possible. The real pros only give themselves away through small tells: swapping out ChatGPT’s signature em dashes for commas before posting. Or they use Claude or Gemini, which don’t have that quirk.

Meanwhile, the people who actually put in the work fall behind. Their visibility drowns in the flood of AI-polished content. At the same time, we’re getting better at spotting “AI slop” — those perfectly formed but oddly soulless posts with no human fingerprint.

It’s like mixing too many paint colors: you end up with a muddy gray. AI content dilutes the richness of human expression. Social media is full of it. That hilarious video of a parrot commanding Alexa at 3 AM and terrorizing the whole family? Funny, sure — but it’s just AI. In the fight for attention, authenticity is the first casualty.

Losing your voice is a gradual thing. First you use AI just for spell-checking, then for whole sentences, eventually for entire texts. Some people feed their earlier work into the model, hoping to keep their personal style intact. But after a few rounds of that, all you’re left with is a watered-down copy of a copy. Homeopathic personality.

I’m speaking from experience — there’s hardly a text that can’t be sharpened by a few rounds of back-and-forth with a language model: “Make it smarter, add a bit of humor.” Ten seconds later, the polished version is ready. The writer becomes the editor.

The temptation is real. This piece is proof: it was drafted, structured, revised, and critically questioned with AI, then scrapped before I started over, this time without it. It's harder, but hopefully more authentic, and a small act of respect toward the people reading this.

The Attention Economy

On social media, going viral used to take talent and effort. A spectacular parkour jump across a rooftop gap is more impressive than someone training their dog in the backyard. Tools like Sora are changing the rules: dramatic chase scenes are now just as easy to generate as cute animal videos. Entertainment value beats authenticity.

Is that a bad thing? Not necessarily. Creative use of AI opens new doors and enriches our experience. No technology has revolutionized content creation so fundamentally.

Vibe Coding: Faster, but at What Cost?

For rapid prototyping at an innovation lab like CityLAB, vibe coding is a godsend. You get results in minutes that used to take days — if you got there at all. A genuine enabler for anyone who cares more about the outcome than the process.

The winners are the creatives who treat vibe coding like playing with digital clay: building, tweaking, discarding, without worrying about code quality. On the Gas Town Code Journey, they’ve hit stage 5 or 6: the AI assistant runs the show in the IDE, code is glanced at occasionally at best, but never manually edited. At CityLAB, Creative Technologists (a generalist role bridging design and code) and Product Owners (who can whip up feature mockups in minutes with tools like Figma Make, Lovable, or v0) benefit the most.

For seasoned developers who care deeply about code quality, AI is more of a double-edged sword. They’re increasingly pushed into the role of code reviewer: AI writes, the human checks. And even that review step is now often handled more reliably by AI itself. What gets lost? The joy of tinkering. The satisfaction of puzzling through a tough problem and solving it yourself. The sense of control over generated code keeps slipping, and so does satisfaction with the daily work. It’s especially bleak for newcomers: junior positions are becoming rare, and Anthropic’s Opus model has single-handedly made “learn a programming language” a questionable career strategy in 2026.

The Pressure to Use AI

AI boosts our work so dramatically that not using it becomes a competitive disadvantage. Fall behind, and you stay behind. There’s no stopping this. The disruption is too deep, the gains too obvious. The grim forecast: if everyone’s productivity rises equally, nobody gains an edge — we’re just running an endless race for fleeting front-row spots. Pause for a moment and you get left behind.

Human Out of the Loop?

So is “Made by a human” becoming a liability? In many domains, yes — and there’s no going back. AI is fast, doesn’t need sleep, vacation, or health insurance, and in an ever-growing number of fields, produces results that match or surpass human work. But it would be a mistake to generalize. We need to look more carefully at where the human element still adds real value.

Take writing: AI almost certainly translates a technical manual more competently, and definitely faster. If you work in that field, you’re already feeling the impact. But do we want AI translating novels, too? An LLM can produce detailed software documentation in seconds, but can it write screenplays that truly grip us — stories built on nuanced observation and deeply personal, emotional experience? Even as AI mimics human prose ever more convincingly: do we actually want to read novels by AI authors? Meaningful world literature written by a large language model still feels like a distant prospect.

Humor seems to be another human stronghold. Even current models (ChatGPT 5.2) struggle: “Why did the computer go to the doctor? Because it had a virus.” Hilarious, right? And art? AI produces strikingly convincing remixes of major contemporary works. But can AI paint with intention? Can it stand on a stage and channel the audience’s energy into something new? Can it be driven by genuine motivation and curiosity?

These are specific examples, but they illustrate a broader trend: AI is claiming more and more territory, and the space where humans are truly irreplaceable is shrinking. The skilled trades are a notable exception, of course — that disruption belongs to robotics.

Voluntary Self-Regulation

We’re at risk of optimizing ourselves out of the picture with AI’s seductive capabilities. Our personality dissolves into the noise of training data. If that feels uncomfortable, we need to push back against the temptation. We need more transparency around content we’re creatively responsible for. Was AI just used for structure and polish? Were large chunks generated by the model? Or was everything written without AI at all?

Early approaches already exist: TikTok and Meta label AI content directly, often using Content Credentials. The EU’s AI Act introduces transparency obligations taking effect in 2026. Voluntary standards like AI Labels or AI Content Labels offer frameworks for labeling.

Self-Reflection, Not Judgment

While regulation and platform-level solutions matter, I’m betting on voluntary labeling. Not a traffic-light system that sorts AI use into “good” or “bad.” Using AI isn’t inherently something to praise or condemn. It’s about self-awareness: creators should be conscious of AI’s role in their work and communicate it openly — not least for their own sense of ownership over what they’ve made.

My proposal takes inspiration from AI Labels but keeps things simple:

100% human made: Your own thoughts in your own words. Honors the effort without judging other approaches. Example: A personal LinkedIn post with original reflections.

Human in the lead, with AI support: The ideas and wording remain the author’s; AI assists. Example: A language model was used to improve readability, fix errors, or tighten the text.

Teamwork of human and AI: Human and AI share the creative work. Example: A text developed through active iteration between human and AI.

This article is a case in point. The first draft was revised with AI help. The badges were generated with Figma Make.

Mostly AI generated, with human in the loop: Content was predominantly AI-generated. Example: A vibe-coding prototype for rapid iteration, or auto-generated summaries.

100% AI generated: Produced entirely by AI with little to no human involvement. Example: A fully AI-generated video.

Looking Ahead

I’ll be labeling my articles with these badges going forward. The goal isn’t judgment — it’s self-reflection and transparency. That’s how we develop a responsible relationship with AI: one that leverages its strengths while honoring human creativity and authenticity.

Feedback and ideas for improvement are always welcome.

Addendum

The design has since evolved into a dynamic component:

🧑 🤖 100% human made
🧑 🤖 Human in the lead, with AI support
🧑 🤖 Teamwork of human and AI
🧑 🤖 Mostly AI generated, with human in the loop
🧑 🤖 100% AI generated
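For anyone who wants to adopt the badges on their own site, the five levels can be modeled as a tiny component. The following is only a hypothetical TypeScript sketch (all names and signatures are my own; the actual Figma Make component is not published here):

```typescript
// Hypothetical sketch — the article does not publish the component's code.
// Levels ordered from fully human to fully AI, mirroring the five badges above.
type AiUsageLevel =
  | "human"        // 100% human made
  | "human-lead"   // Human in the lead, with AI support
  | "teamwork"     // Teamwork of human and AI
  | "ai-lead"      // Mostly AI generated, with human in the loop
  | "ai";          // 100% AI generated

const BADGE_LABELS: Record<AiUsageLevel, string> = {
  "human": "100% human made",
  "human-lead": "Human in the lead, with AI support",
  "teamwork": "Teamwork of human and AI",
  "ai-lead": "Mostly AI generated, with human in the loop",
  "ai": "100% AI generated",
};

// Render a badge as plain text, e.g. for an article footer or front matter.
function renderBadge(level: AiUsageLevel): string {
  return `🧑 🤖 ${BADGE_LABELS[level]}`;
}
```

A static text helper like this keeps the labeling honest and machine-readable; a richer version could render the same data as an HTML or React badge.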