PERSONAL EXPERIENCES

From Hype to Habit: 11 Lessons from a Year of Using AI in My Daily Work

AI is everywhere, and it doesn’t seem to be slowing down. Rather than watching from the sidelines, I believe it’s better to dive in and learn by doing. The following are my personal reflections from the past year or so of using tools like ChatGPT, Claude, Gemini, and others in real, everyday work: what has worked, what hasn’t, and what I have learned along the way – the good and the bad.
By Jan Takacs, 3rd July 2025

There is only one way to figure it out

So, without any complicated or sophisticated introduction, let’s take a look at the major themes and takeaways that stuck with me after making AI part of my daily work life. No filters. Here we go.

1. Context windows are huge and using them well has been a difference-maker for me. I get the best results when I have one conversation for each project. Sometimes it feels like LLMs are easy to confuse, and this way, I keep it efficient and don't have to repeat myself as much.

Plus, the context windows are really massive now, allowing me to work on a project for weeks (or even months) while still using a single chat window. That way, the LLM already has all the previous information, knowledge, and context, which makes the workflow faster, more tailored, and more effective. When I want to start something new, however small, I just start fresh with a new chat and a clean slate.
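The one-conversation-per-project habit can be sketched in a few lines of Python. This is a minimal, hypothetical illustration – `ask()` and `chat()` are my own stand-ins, not any real tool's API; `chat()` is a stub that simply reports how much context it was given:

```python
# Sketch of "one conversation per project": keep a growing message
# history per project so earlier context is never lost, and start a
# fresh key for anything new.
from collections import defaultdict

histories = defaultdict(list)  # one message list per project

def ask(project, prompt):
    """Append the prompt to the project's history and send the whole
    conversation, so the model always sees the prior context."""
    histories[project].append({"role": "user", "content": prompt})
    reply = chat(histories[project])  # hypothetical LLM call
    histories[project].append({"role": "assistant", "content": reply})
    return reply

def chat(messages):
    # Stub standing in for a real model: reports the context size.
    return f"(answer informed by {len(messages)} messages of context)"

ask("redesign", "Summarise last week's usability findings.")
print(ask("redesign", "Now turn them into three design principles."))
# The second call automatically carries the first exchange along;
# a brand-new topic would simply start fresh, e.g. ask("logo", ...).
```

The design point is only the bookkeeping: the whole history travels with every request, which is what makes a long-running project chat feel tailored.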

2. I find myself constantly impressed by how great LLMs are for text-based work, but not so happy with other outputs just yet. Co-writing, critiques, ideation, or even research – anything text-based often works like magic. It almost feels like you're shaping text the way Dumbledore shapes that giant water-sphere spell in the Order of the Phoenix. This does seem to be where LLMs truly shine.

On the other hand, as someone who has been in the creative business for more than 20 years, I'm often disappointed by the other outputs, such as images, sound or video. They just need too much extra work, refinement, and time – even to reach a quality I consider 'decent'. That's where I see myself shifting back to the 'traditional' way of making them (such as manipulating stock footage and photos), as that way I can get to the quality I want faster.
Shaping any text with LLMs often feels like magic. The first comparison that comes to mind is Dumbledore shaping his water-sphere spell to keep Voldemort at bay :), Image copyright: Warner Bros / Illustration: Jan Takacs

3. I did say it before, but it still proves to be true every single day: Communication is the ultimate skill. Clear communication is not just key for working with people, but it's also how you get the most out of AI. Whether it’s with text, voice, or visuals, the better I express what I need, the better results I get. It’s a skill that keeps proving itself essential.

This applies not only to our standard (and yes, still the most important) human-to-human interactions, but also to interactions with LLMs: great communication seems to be the key to unlocking the technology's potential.

Hence, if there are a few skills I see as invaluable going forward, communication is certainly one of them – if not the top one.

4. Even when I do multilingual work, the English-first approach still wins on output quality. I've found that even when my final output is in Czech or another language, I do all the thinking, writing, and refinement in English first, and only translate the final outputs into the target language at the end. This way I get consistently good results.

Funnily enough, even the LLMs themselves tell me it's the better approach: because the bulk of their training is in English, they're more efficient working in that language than carrying out the entire task in, say, Czech or Spanish. So it does seem that people with great proficiency in English have an edge here, at least for now.
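The English-first workflow can be sketched as a tiny pipeline – purely illustrative, with `draft()`, `refine()`, and `translate()` as hypothetical stand-ins for individual LLM calls (they just tag their input here):

```python
# Sketch of the English-first workflow: all drafting and refinement
# happens in English; translation is the very last step.

def draft(brief):
    return f"English draft covering: {brief}"

def refine(text, feedback):
    return f"{text} (revised: {feedback})"

def translate(text, target):
    return f"[{target}] {text}"  # stand-in for a final translation pass

# Think, write, and iterate in English first...
text = draft("onboarding email for new users")
text = refine(text, "shorter, friendlier tone")

# ...and only translate once the content is final.
final = translate(text, "cs")
print(final)
```

The ordering is the whole point: every quality-sensitive step runs in the language the models handle best, and translation touches the text exactly once.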

5. The way I approach research changed dramatically over the past year, and it's hard to imagine moving back. A lot of my work in product and design is insight-driven and research-based, and I'm surprised how fast LLMs turbocharged some of the core activities I do often, such as desktop research, information and data synthesis, product and concept ideation, UX benchmarking, critiques, and more.

Yes, it's a fine line to walk, because AI-supported (or even AI-led) research has a lot of drawbacks (and biases!), but I can comfortably say that learning to use AI intentionally helps me move faster, explore more, and sharpen ideas far quicker than before. Removing it entirely now would feel like a major step backwards.

6. Vibe coding is a great way to grow technically. And while it’s still not quite ready for prime time (especially in enterprise), it shows real promise as the next potential paradigm for designing and creating digital products. Prototyping with AI, or vibe coding, is great fun and perfect for exploration and learning. I'm a fan because I believe that successful designers (or people in general?) of the future need to be a lot more technical, and I see this as an accelerator of that trend. It has already helped me understand some engineering principles better.

But for serious product work, especially in enterprise, it’s just not production-ready yet from my observations. Taking a UI all the way to fully working code and a real product means getting a lot of things right: debugging, security, iterations, documentation, understanding the code, and so on. So I see this as an evolution for sure, but I would tone down the narrative that either engineers or designers will be replaced by AI (or by each other) in the near future.

7. Great input = great output. When I emphasize the quality of inputs, I get great outputs. Otherwise, it's often lots of fluff. When I'm disappointed by the results I'm getting from GPT, Claude or Gemini, it's frequently because of the poor instructions or prompts I gave. Over the past year, I figured out that if I want high-quality results, I need to provide rich, detailed context or even upload a solid work-in-progress.

And often, the best results come when there's almost a 1:1 ratio between input and output – meaning the context I put in is quite substantial. The easiest recipe for bland, generic, and often useless outputs seems to be short, vague instructions.
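As a sketch of what 'substantial context' can look like in practice, here's a hypothetical prompt builder – the function and its field names (audience, constraints, reference) are my own invention for illustration, not any tool's API:

```python
# Sketch contrasting a vague prompt with a context-rich one: every
# explicit block of context narrows the model's guesswork.

def build_prompt(task, audience=None, constraints=None, reference=None):
    """Assemble a prompt from explicit, labeled context blocks."""
    parts = [f"Task: {task}"]
    if audience:
        parts.append(f"Audience: {audience}")
    if constraints:
        parts.append("Constraints:\n- " + "\n- ".join(constraints))
    if reference:
        parts.append(f"Work in progress to build on:\n{reference}")
    return "\n\n".join(parts)

vague = build_prompt("Write a product announcement.")

rich = build_prompt(
    "Write a product announcement.",
    audience="existing enterprise customers, mostly non-technical",
    constraints=["under 150 words", "no jargon", "one clear call to action"],
    reference="Draft headline: 'Reports that build themselves'",
)

print(rich)  # far more signal per request than the vague version
```

The point isn't the template itself but the habit it encodes: making the audience, constraints, and work-in-progress explicit instead of leaving them for the model to guess.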
It’s critical to stay mentally sharp and not ‘outsource’ too much to AI. Early research indicates that over‑relying on AI can dull our mental sharpness – with potentially serious consequences for our future outlook.

8. The fundamentals of working with AI matter most, as the core tech hasn’t changed much in years, despite the hype. There's a lot of hype about new AI releases on a weekly and monthly basis, and AI companies seem to position every release as something new and transformative.

But every time I collaborate with or talk to senior data scientists or AI experts, they all agree on one thing – the underlying, core transformer technology hasn't changed much in the past few years (with all its pros and cons).

This means it's a great idea to get good at the basics and foundations and ignore the bombastic PR headlines. A new, better model is unlikely to change how well you can leverage AI – the essentials matter a lot more.

9. More and more, I find myself talking to AI rather than typing, as the model quality continues to improve. Lately, I sometimes talk even more than I type. It’s often comfortable, quick, and, with the latest voice models, surprisingly engaging. It allows me to convey thoughts, ideas, and requirements easily, and it continues to change how I think about tools and interfaces – especially when ideating on future product opportunities. There is a caveat, though...

10. I run into some interface-related issues daily, and it's clear to me now that the chat interface is holding us back. Lots of new AI-powered products get plenty of hate because they're just "GPT-wrappers." But the more I use the simple chat-like interfaces such as GPT, Claude, or Gemini, the more I'm convinced that the "wrapper" layer and the UX built on top of the LLMs will be the deciding factor for many of the successful products in the future.

The basic chat interface is alright, but it crashes often, recalling lost content or context is difficult, it constantly requires a lot of frustrating heavy lifting on my side, and it's hard to pause or steer the token generation.

The pain is most obvious when it comes to longer voice communication and chat-voice transitions. There's just so much left to desire.

11. The more I use AI, the more I worry about staying mentally sharp – and how much that matters. The more I rely on it, the more I worry about over-relying on it. Sometimes I catch myself going to GPT even for easy tasks, when I feel like I shouldn't.

It's not difficult to imagine that if I'm not careful, I could start outsourcing the very skills I used to take pride in – structuring and synthesizing thoughts, writing, solving problems from scratch, and more.

On top of that, I believe that to get the best from AI, it's necessary to stay sharp, read and learn more, think deeper, explore further, and resist the temptation to offload too much. Because once those brain muscles weaken, I imagine they will be very hard to rebuild.

The power is in the balance

If you’re on your own AI journey, one thing’s clear – learning by doing still rules the game. The pressure we all feel to keep up can actually be a useful push to get hands-on, experiment, and figure out what really adds value.

But in the rush to integrate AI into everything, it’s worth remembering: just because we can use AI doesn’t always mean we should. The balance between what we do ourselves and what we hand over to machines doesn’t just affect the work – it shapes how we think.

And that’s something worth protecting.