I Am Terrified of AI
In this short blog post I want to share my views on AI and how it will impact all of us.
Lorenzo Meacci @kapla
4/4/2026 · 6 min read


I'd like to share my take on AI and how it will impact all of us. I've been reading posts on both X and LinkedIn for a while now about the excitement and fear surrounding this "godly" technology, and I thought this might be a good time to share my point of view.
I should also note that some of what I'm about to say might be a hot take, and you might even find it offensive. Some of these takes are speculative, and you SHOULD NOT feel obliged to agree with all of them. I am NOT the most knowledgeable when it comes to AI; I have simply experienced firsthand what it's capable of. If you disagree, I'd genuinely love to hear your perspective, because different viewpoints are exactly what I'm after. Please reach out via LinkedIn or X.
Roughly two months ago, I came across an excellent post by Jason Lang @curi0usJack that grabbed my attention more than anything else I'd read on the topic, and I wanted to dig deeper. Sadly, I feel like a mere spectator in the orchestra, with no power to change the music and only the ability to share my thoughts. In his post, Jason explores "Real Human Concerns In The Age of AI". It's a long read, and I strongly encourage you to go through it here: https://x.com/curi0usJack/status/2024184571974000984
But here is a summary:
Purpose: When a tool can outperform a master craftsman the moment you hand it a prompt, the question stops being "am I productive?" and becomes "do I even matter?"
Deskilling: Every task you delegate to AI is a skill you stop practicing. Skills rust. Over time, the person who never had the skill but uses AI freely will outpace the expert who tries to maintain theirs.
Addiction: AI is addictive. Having a tool that delivers results from a single prompt in minutes gives us a sense of power that is very hard to let go of.
I agree with every point he made, but I want to take a moment on the first one. We're getting philosophical here, but who cares? This is a question that haunted me for a while after reading that post: "why do I matter?" As a human being, is my value measured purely by what I can produce, or does it come from something deeper?
The way we as individuals value people and the way the market values them are two completely different things. As individuals, we value people for their altruism, empathy, courage, and the way they treat others. The market, on the other hand, values people for their knowledge, productivity, and experience. This is nothing new, but for people like me who care deeply about their profession and tie a big part of their identity to it, the line between the two becomes very hard to see. And that line becomes impossible to ignore when a technology emerges that threatens to be better than us at the very thing we built ourselves around. This is not an attack on our profession but an attack on our identity.
So the real question becomes "do I even matter as a worker?" As strange as it sounds coming from me, I have to admit that I genuinely don't have a long-term definitive answer to that. And that itself scares me more than I expected.
Will AI become better than me at cybersecurity?
Most definitely yes. I am not delusional enough to convince myself that in the long run I can outperform a system trained around the clock on the most powerful hardware in the world, consuming every piece of cybersecurity material on the internet: CTFs, leaked courses, blogs, tools and research, mine included. So what then? I've seen people suggesting that we should start gatekeeping knowledge to starve the training data. In my humble opinion, it is already far too late for that. The amount of resources available online right now is more than sufficient to train an exceptionally capable cybersecurity expert from scratch.
But we also need to be rational about what happens before it is our turn on the guillotine. There is something people rarely talk about: by the time AI reaches someone operating at a high skill level, the disruption has already been unfolding for years beneath them. The chaos does not arrive with your personal crisis. It arrives long before it, meaning that when it is finally your turn, you are not just losing a job. You are losing it inside a society that is already under serious strain. That is not a comforting thought, but it is an honest one.
If we ever reach a point where a role as sensitive and demanding as a red team operator or a cybersecurity researcher can be fully or even partially replaced by AI with human supervision, that means the vast majority of other jobs are already gone. I do not find that reassuring, but it does carry one implication worth sitting with: the scale of disruption would be so enormous, affecting so many people simultaneously, that it would be impossible for society to ignore. At that point, taking action to prevent a total economic collapse would not be optional. It would be survival.
When and how?
Nobody knows, and that is definitely not me claiming otherwise. What I do believe is that when the change comes, it will be radical and fast.
Right now, today, I am already seeing models that are exceptionally good at solving complex software engineering tasks that require a serious level of reasoning, the kind of reasoning that is simply not required in most jobs. So technically speaking, AI is already capable of replacing a large portion of the workforce. The reason it has not happened yet is not capability; it is implementation. AI is not easy to integrate into existing workflows, and left unchecked, it can cause serious damage. At this moment in time, AI still needs human supervision and guidance to function reliably.
That supervision requirement is also a double-edged sword. An employee who does not fully understand what they are doing and blindly delegates to AI can produce the technical debt of 40 workers combined. Garbage in, garbage out. The tool amplifies you in both directions: your strengths and your mistakes equally.
For now. But AI has surprised me enough times that I have learned to never treat "for now" as a guarantee.
What will AI be used for?
This is where I get most scared. History has a pattern that is impossible to ignore: every time a groundbreaking technology emerges, it is almost always captured first by a small group of people hungry for power and money.
Take nuclear energy. The science has been there for decades. We have had the ability to generate clean, abundant power on a scale that could have meaningfully slowed climate change long ago. Instead, a combination of manufactured fear, deliberate misinformation and aggressive lobbying from fossil fuel interests kept it suppressed, not because it did not work, but because it threatened the wrong people's profits. The planet paid the price for that decision and continues to pay it today.
This is not an isolated case. It is the rule, not the exception. The internet was supposed to democratize knowledge and level the playing field. Instead, it concentrated unprecedented wealth in roughly a dozen companies and became the most effective manipulation machine ever built. The technology delivered on its promise for some, while the structure around it was quietly designed to benefit a very specific few.
AI will be no different. The people funding it are not doing so out of altruism. The governments racing to develop it are not doing so for the common good. The most likely near-term use of truly powerful AI is not curing diseases or lifting people out of poverty. It is gaining geopolitical advantage, economic dominance, and control. The benefits, if they ever reach the rest of us, will come later, slowly and unevenly, just like they always have.
Embracing What's Useless as a Form of Survival — "All art is quite useless." Oscar Wilde
During a conversation with my psychotherapist, he quoted Oscar Wilde, who argued that art should not serve a practical or moral purpose. It should not be judged by its utility, but purely by its ability to create emotion in us. That is what sparked something in me. He then asked me to think about what is "useless" in my own life. I am addicted to kitesurfing, a sport that brings me immense joy, yet practicing it serves no productive purpose. It creates nothing. It is effectively useless in the most practical sense, and it exists for one reason only: it makes me feel alive.
We should remember that AI can only invade the parts of our lives that are measurable and productive. There will certainly be attempts to commercialize and automate even our leisure, but resisting that will be our responsibility. Doing things simply because we want to, because they make us feel something, is not a small act. It might be the most human thing we have left.
Closing Thoughts and a Question for the Reader
I hope I haven't scared you too much, and please don't let my pessimistic view be the only lens through which you look at this. One thing worth specifying is that everything I've written comes from the perspective of a worker. If you are the owner of a cybersecurity/IT firm, I would especially love to hear your thoughts. Do you see AI as a future replacement for parts of your workforce, or do you currently view it purely as a productivity tool that makes your human team more effective? Please reach out. I am genuinely curious.
That's all for today. Happy hacking!