Jan 19, 2026 · Episode 29

What a year of hands-on coding with AI teaches you about good software

Rebecca Murphey talks with Lada Kesseler, lead developer and technical coach at Logic 20/20, about the fundamental limitations of AI coding tools, why focused agents outperform distracted ones, and how software craftsmanship matters more than ever.

Show notes

Lada Kesseler is a lead developer and technical coach at Logic 20/20 who has spent the past year deep in the weeds with AI coding tools — and she’s uncovered patterns that most developers are missing. Her workshop on “Mapping the uncharted territory of AI” has earned praise for revealing the judgment behind the code, and in this conversation, she shares what she’s learned from hands-on experience.

Lada identifies three fundamental challenges with AI tools: they can’t learn from past conversations, they’re non-deterministic, and they suffer from compliance bias. Still, understanding these limitations is what makes you better at using them. Lada explains that most teams are still working with “distracted agents” that accumulate rules without actually following them. However, she argues that when you design around AI’s need to focus on one thing at a time, you can build systems that actually work.

On software craftsmanship, Lada thinks it matters more now, not less. AI codes itself into walls at super speed when it isn’t directed by an engineer with knowledge of best practices. This means small batches, managing complexity, and refactoring are still essential — because AI can’t handle architectural thinking, yet.

Plus: the difference between “vibe coding” and craft, why having AI critique itself produces dramatically better results than single-pass generation, how documentation suddenly matters again, and what leaders need to understand — including why threatening your team with replacement will backfire.

Watch the episode on YouTube →

Timestamps

(00:00) Introduction
(02:46) The five fundamental gaps in AI development
(07:15) “You’re absolutely right!”
(08:00) The anti-pattern of the distracted agent
(11:09) Do agile best practices still matter in the age of AI?
(14:48) Why software craftsmanship is more important than ever
(15:46) Managing complexity: Can AI do it, or do we still need humans?
(18:19) The difference between vibe coding and craft
(20:24) How AI is an amplifier — for better or worse
(21:23) Good practices that are suddenly essential
(23:02) The superpower of refactoring
(25:36) Documentation: harder to maintain but more important than ever
(26:58) When developers don’t write code anymore
(32:49) AI costs and the coming reckoning
(33:54) When AI is not the right choice
(36:06) Three things senior leaders need to understand about AI
(39:46) Will we still be talking about this in a year’s time?

Transcript

Rebecca Murphey: I’m Rebecca Murphey, and this is Engineering Unblocked. Engineering Unblocked is brought to you by Swarmia, the engineering intelligence platform trusted by some of the best software companies in the world, including startups like Superhuman, scale-ups like Miro, and companies from the Fortune 500.

On today’s show, I’m talking to Lada Kesseler, lead developer and technical coach at Logic 20/20. I reached out to Lada after watching an amazing workshop she gave about mapping the uncharted territory of AI, and one of the attendees said, “Lada’s workshop isn’t just another tutorial — it’s a compass for those of us who aim to design intelligent systems, not just use them. Thank you for mapping what usually stays in the shadows: the judgment behind the code.”

That’s a pretty strong recommendation. So I hope that you will be able to learn some things today from us. Lada, welcome! Anything else that you want to say about who you are?

Lada Kesseler: Yeah. So like I said, I’m a software developer, and I think of myself as a crafter in that I’m trying to build systems that work really well, but also systems that solve the right problem and can make users happy. So I kind of like the UX aspects of things as well. And my roots are really in deep agile — the early agile.

So I think about this as extreme programming: TDD, refactoring, domain-driven design, evolutionary design. And I’ve been, as you mentioned, acting as a technical coach in my projects and currently trying to explore what all of those values mean in this new territory of AI. How do we apply this? How do we write systems that are good and reliable and delight our customers with this unreliable tool that lies and breaks and all kinds of madness going on? So I’m naturally trying to share knowledge about what I’ve learned, and the ways I do this are talks, and nowadays people ask me to do masterclasses and all that.

Rebecca: Yes. Let’s not forget that you too can have a class from Lada if you would like. But what I really liked about your workshop — first, I loved the map. There was a literal map of the AI landscape, and it really focuses on detailed obstacles, patterns, and anti-patterns in a few different areas. But it started from five gaps that I think everybody recognizes regarding AI. I wonder if you can talk about those five gaps and how you arrived at those.

Lada: Yeah, that’s actually perfect because I like the way I’ve been thinking about this. It feels like we’re in a new territory. It feels like the rules of the land have changed enough that we need to question our assumptions and learn. And I think the best way to do this is to actually try to do things and observe what we see. So that’s where I come from.

I observe. I come to these patterns by basically observing my pains. It’s a lot of hands-on experience with me. This year I’ve been doing so much hands-on coding and fighting with agents and whatnot.

And what I saw — some of the things are like, well, this tool, I keep repeating myself and it’s really annoying. Why is that? Well, this thing cannot learn. When I talk to a junior developer, they’re usually sharp people, and they usually know what I told them the day before. Not the case with agentic AI. So we need to build those systems’ knowledge. How do we build this knowledge?

I think about my talk — I think you perfectly started with obstacles. I think about this as basically three hikes, and the hikes are around context management. The obstacles are basically those mountains we have to walk around. We can’t do anything about them — we have to figure out how to get where we want to go around them.

And then we have non-determinism. Basically, the same input doesn’t mean the same output. You give it the same input and now it does something else. Well, it’s a bug and a feature — we can maybe exploit it as well because you can actually try it several times. It’s like throwing a die. If you want a three, maybe you throw it 12 times and then one of them will be three. And there’s also amazing opportunities around combining solutions like this. So non-determinism is — there’s a plethora of patterns, and it’s not just this obstacle. There are many obstacles together, and our reactions to them need to be a certain way.
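A minimal sketch of the “throw the die several times” idea: sample the same prompt more than once and keep the best candidate. Everything here is a hypothetical stand-in, `generate` for whatever model call you use, `score` for whatever quality check you trust (tests, a linter, a rubric).

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a non-deterministic model call:
    the same prompt can come back with different output each time."""
    return random.choice(["candidate A", "candidate B", "candidate C"])

def score(candidate: str) -> float:
    """Hypothetical quality check: passing tests, a linter, or a rubric.
    A length heuristic is used here only as a placeholder."""
    return float(len(candidate))

def best_of_n(prompt: str, n: int = 5) -> str:
    """Exploit non-determinism instead of fighting it:
    sample n candidates and keep the best-scoring one."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

if __name__ == "__main__":
    print(best_of_n("Refactor this function without changing its behavior."))
```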

And then the last bit is about communication. We have something that is very mysterious. We open code and we see what the code does — no mystery there, you just need to know a little bit. But with AI, there’s a non-obvious mental model in there. That mental model is often hidden — you can’t really know. So any misconceptions are detrimental. I keep telling it things to do, and it’s like, “Yeah, I’m doing this, I’m doing this.” But in the end I see that it misunderstood me entirely and didn’t ask me a question.

So that’s the second problem there: compliance bias. It’s trained deliberately to say yes, to say you’re amazing, you’re right. I have a pain point around this. It’s like a therapist: “You’re absolutely right” — that’s a triggering phrase for me.

So how do you communicate to it in a way to get better results from it? There are absolutely ways to do this better than just giving it commands. Doing back and forth makes you so much more powerful because it is basically — I think about it as an instrument of vision.

I’m at a point where I need to make a decision. I can make one decision, but if I ask it, it’s so well-read. It knows more than I know, so I can fish for what’s the universe of possible answers here in front of me. A small-scale experiment. This is so much more powerful. This is so-called reverse prompting.

So this is how I think about those things.
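A minimal sketch of what that “reverse prompting” move can look like: instead of issuing a command, ask the model to lay out the option space first. The `ask` function is a hypothetical hook for whatever assistant or API you actually use.

```python
def ask(prompt: str) -> str:
    """Hypothetical hook for whichever coding assistant or model API you use."""
    return f"[model response to]\n{prompt}"

def reverse_prompt(decision: str) -> str:
    """Fish for the universe of possible answers before committing to one."""
    prompt = (
        f"I need to make a decision: {decision}\n"
        "Before proposing anything, list the realistic options, the trade-offs "
        "of each, and the clarifying questions you would ask me. "
        "Do not pick an option yet."
    )
    return ask(prompt)

if __name__ == "__main__":
    print(reverse_prompt("how to store per-tenant configuration"))
```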

Rebecca: So those are pretty fundamental challenges. And I think anybody who’s using AI for software development is running into every single one of those. And I also am highly triggered by “You’re absolutely right.” And I’m constantly frustrated that it won’t ask me a clarifying question. We can talk about our wishlist for AI.

Lada: Yeah. I’m gonna have a long one.

Rebecca: So this is a simple question: How can you create a system where AI can be beneficial and not detrimental given those challenges?

Lada: So you start from learning what it is, I think. And learning what it is actually empowers you to do better.

For example, I can give one example that I see many people doing in a way that I think hurts them. I got lucky and I saw what a focused agent looks like. So I think many people are working with an anti-pattern of a distracted agent, which is basically you’re like, “Hey, here’s my rules. And also do this, do this, this, this, this is for this.” And they’re like, “Oh.” And when it messes up, you just add another one to that. And nobody pays attention to: does it actually do the thing you asked? And usually what you see is the answer is no. Why is that?

And I saw something fantastic once, by sheer luck. I have a little committer that is dedicated to committing, and both that little committer and my main copilot have some ground rules: be proactive, warn me about issues before they become a problem, and so on. And suddenly this little committer that has only one little focus of committing starts to warn me about issues! My main one never did.

The difference between always and never is nuts, it’s crazy. And yet I see so many people not understanding this fundamental difference — there’s a concept of focus in the AI. It’s not just context. If we could increase the context window into infinity, we’d be so happy. No.

Because really it has a concept of focus and can only — it feels like it can only pay attention to one thing very well. But knowing that means that we can design around that. Because look, I added a few patterns after the talk that you saw, and that’s all around AI slop.
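A minimal sketch of the focused-agent idea she describes: give each narrow job its own small agent with a short set of rules, instead of one “distracted” agent dragging an ever-growing rules file. The `Agent` class and its `run` method are hypothetical; in practice each one would wrap a real model call.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Hypothetical sub-agent: one narrow job, one short set of ground rules."""
    name: str
    system_prompt: str

    def run(self, task: str) -> str:
        # In practice this would send (system_prompt, task) to a real model backend.
        return f"[{self.name}] handling: {task}"

# One focused agent per concern, instead of a single distracted agent
# carrying every rule at once and following none of them.
committer = Agent(
    name="committer",
    system_prompt=(
        "You only stage changes and write commit messages. "
        "Be proactive: warn me about issues before they become a problem."
    ),
)
reviewer = Agent(
    name="reviewer",
    system_prompt="You only review diffs for correctness and style. Nothing else.",
)

if __name__ == "__main__":
    print(committer.run("Commit the payment-module refactor."))
```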

AI slop — the simple recipe is: you give AI a prompt, it generates something, maybe you lightly edit it, and then you present it as your own work. I don’t ever want to be judgy here, but I want to say a simple thing. One pass with AI doesn’t work. It doesn’t actually produce good results for anything more complex than a simple question.

But knowing that empowers you to actually do better. Because now I have a pattern like AI critiquing AI. I have a feedback loop. You can just have one AI criticize another AI, and the difference will be stark.
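A minimal sketch of that feedback loop, assuming a hypothetical `call_model` hook (it could be two different models, or the same one wearing two hats): generate a draft, have a critic tear it apart as its primary task, then revise.

```python
def call_model(role: str, prompt: str) -> str:
    """Hypothetical hook for whichever model(s) you use for each role."""
    return f"[{role}] response to: {prompt[:60]}..."

def generate_with_critique(task: str, rounds: int = 2) -> str:
    """One pass rarely produces good results; loop generate -> critique -> revise."""
    draft = call_model("generator", task)
    for _ in range(rounds):
        critique = call_model(
            "critic",
            f"Task: {task}\n\nDraft:\n{draft}\n\n"
            "As your primary task, list everything that is wrong with this draft.",
        )
        draft = call_model(
            "generator",
            f"Task: {task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
            "Revise the draft to address the critique.",
        )
    return draft

if __name__ == "__main__":
    print(generate_with_critique("Write a migration plan for the billing service."))
```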

Surprisingly, the same AI criticizing its own work also works amazingly. And I’m not convinced this works for us. Why is that? That is a question. Because you thought you gave it a task and said, “And do this, this, this.” And we thought it just wasn’t good at writing good code. It just wasn’t good at all the things.

But it’s not the case. When you ask it to do this as a primary task, it’s amazing at that. It could tell you all the things that are wrong with your code. Why didn’t it do it? I think it’s due to focus. This is how you figure out what systems we need.

And I think we don’t know the tools we have. The tools we have are very bad — limiting in every way. That’s how you figure out. So that’s how I approach it.

Rebecca: So at Swarmia, we talk a lot about best practices in software engineering. And you work at a consulting company, so I’m sure you also spend a lot of time talking about best practices in software engineering. We’ve talked about — I have known for a long time — we talk about limiting work in progress and having small batch sizes and finishing what you start, maybe only working on one thing at a time. Do those rules still make sense? Or do we need to reexamine them?

Lada: So I think reexamining everything is a good idea right now because it’s seriously changed so much. But I’ve been asking myself a very hard question for somebody who cares deeply about software craftsmanship, which is: does software craftsmanship even matter if we just generate all the code? And the answer is a resolute yes, because those agents — what I see them doing is they basically code themselves into a wall at super speeds.

So what you see in teams that don’t know how to do proper system design — the principles that actually make maintainable software — they just code themselves into a hole faster. And that’s the problem when you’re dealing with something that needs to be reliable and serving the customer. You don’t want that, and many other things.

So I see the craftsmanship principles — for example, in code, when you put AI on a good codebase, it actually replicates goodness. It will still degrade, but if you start from good, it will be more likely to not make messes.

And for example, if you actually leave bad code — old code that’s not used — that’s completely detrimental because it will find it, it will try to edit it, and it’s like, “Hey, I’ve done the work!” I’m like, “No, you didn’t!” Why? And it’s my fault at this point because I let this degrade to this point. So you have to find those and be merciless about deleting them. And all of these other things — the focus part.

What you mentioned — is it important to do small batches? I say extremely, because AI is not managing complexity right now. It’s the most crucial question right now: can it eventually manage complexity? Because the difference between yes or no here is — we need to build a system that writes all the systems for us, and we don’t even need to be mentally involved. That’s one possible universe, one kind of future. Even if that’s the case, I’m not convinced right now.

But another universe is: we still need to understand what this thing does. At some point, the human needs to be involved, and maybe we can zoom out and work on a high level, and it can empower us. We still need to be involved. It’s a different system entirely. And nobody tries to solve this because the big problem that I think we’re facing here is AI adopted the pull request model. So basically I can design with it, and this back and forth feels very collaborative. But then it goes and does the coding, and the coding is a diff. And this is where my mental model of what’s inside my software just degrades. I don’t know what it does. I have to invest a lot of my time to understand it.

And nobody tries to — or at least I don’t want to say nobody does it, but I don’t see people thinking about it, and I don’t see — I see a lot of systems that try to do the same thing, but I think the core issues are around that.

So I think we need to — this is the pain point. It’s a massive one. We need to solve it somehow, and we need to observe and see what we can do to learn. And I think everything else we learned about agile, about how to work efficiently — small batches, managing complexity — I have to manage it. AI can’t manage complexity, so I have to do it. How do I do it? I chop it up into small pieces.

The ridiculous thing is that sometimes, because AI is not software — to the degree that I give it small tasks... So I gave it — okay, it wasn’t a small task, to be fair, I have 180 slides on the first talk, and then I’m like, “Look at every slide and make me a textual representation in markdown of what the slide is about in a certain format.” Simple.

The one thing that my software has never done before is come to me and say, “Oh, that’s too much. I’m going to cut corners.” Software doesn’t do that stuff, and I like that. But ridiculously, the Codex that I use — I use Codex for a reason. Everybody was like, “The latest Codex — the GPT-5 Codex model from OpenAI — is so, so good at being thorough. It’s much more thorough than Claude,” blah blah blah. Oh my God. This model was like, “Oh, you have 180 slides? No, I’m just going to do 10 and then I’m just going to write a script here.” And the results of it are horrid, absolutely. So forcing this thing to actually look at all 180 slides is ridiculously hard. Something you never face with software. So this is where managing complexity — chopping things into small pieces — you need to know this stuff. You have to give it things in small increments, small pieces. Absolutely still relevant.
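A minimal sketch of that “small pieces” discipline applied to the slide example: feed the agent one small batch at a time and check that it actually did the whole batch, instead of handing it 180 slides and hoping. The `summarize_batch` call is a hypothetical stand-in for the real agent call.

```python
from typing import Iterable, List

def chunks(items: List[str], size: int) -> Iterable[List[str]]:
    """Yield small batches so the agent only ever sees one manageable piece."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def summarize_batch(slides: List[str]) -> List[str]:
    """Hypothetical agent call: one markdown summary per slide in this batch."""
    return [f"- summary of {slide}" for slide in slides]

def summarize_deck(slides: List[str], batch_size: int = 10) -> List[str]:
    summaries: List[str] = []
    for batch in chunks(slides, batch_size):
        result = summarize_batch(batch)
        # Check the agent covered the whole batch instead of quietly cutting corners.
        if len(result) != len(batch):
            raise RuntimeError("agent skipped slides in this batch; retry it")
        summaries.extend(result)
    return summaries

if __name__ == "__main__":
    deck = [f"slide {i}" for i in range(1, 181)]
    print(len(summarize_deck(deck)))  # expect 180
```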

Rebecca: You talked about software as a craft. And I think, at this level, yes, everything you just said is correct. I agree with you at this level. How precious should we be about — if we’re doing everything you just said and creating relatively small diffs — how precious should we be about what’s in those diffs versus writing tests to make sure that the diff doesn’t break things?

Lada: It depends on what you’re building. Big time. This is where I think of vibe coding versus craft. The manual work where you know every line of code is a dimension now.

So previously we just had this little piece where we need to know every line. That’s the way to do the things. Now there’s a dimension, and vibe coding is the edge of that dimension.

But depending on the difficulty of what you’re facing — how important what you’re producing is to not break, how big it is, how complex — you have to use judgment to be like, “I need to go deeper or not deeper.”

And I absolutely have been exploring all kinds of things. I’ve been playing with it very deliberately. What can I get away with? And I see a lot of people who are at the edges of this thing. They’re like, “I will only look at this much” — they treat it like there’s a monster under the bed, and if they pretend long enough, it will go away. I learned early on that in this case, it won’t happen, because it’s changed my work enough that I think we will never go back to working the old way. And I think it’s a good thing.

Although I love craft — I love my work, I still love my work. I still write working software. But at the edge, there are those vibe coders. And that’s fine.

And I see people not vibe coding sometimes where they should be. I think they should be coding little tools. They should be coding custom things that give them answers about the problem space.

That’s the thing where you can get so much benefit — you can empower yourself like crazy. But the other extreme of vibe coding is productionizing it — vibe coding a production system all the way to production, when you actually care about your customers.

Oh, God. Please don’t do this. You’re going to shoot yourself in the foot. You’re going to lose people’s trust. You’re going to code yourself into a situation where you can’t code yourself out, because you need engineering skills here. At some point, the complexity is such that you just can’t get away with this.

So this is where it depends.

And it depends to the degree that the benefits are real and amazing. I’ve done this. So we did Real Impact Agile 2025 — it was a group of technical coaches and other people, product people. We had an amazing team — an awesome group of software crafters like Llewellyn Falco, Woody Zuill. We had a group of 10 people create a fast agile experience. We tried to take a problem, a real problem for these people, and solve it in three days and also show how to do it with good agile practices — not bureaucratic, but a self-organizing team using fast agile. And this is why we succeeded. I think that’s the core of what succeeded. We were solving the right problem, which was scoped well, and we worked with the customer very closely.

But if we didn’t use AI, we wouldn’t have had time. And this was such a small thing. The people who came to us didn’t need more than that. But we solved two problems, which nobody thought we would. We solved two problems for them, not one. And we were like, “Oh, it’s too ambitious. We can probably just do one.” No, we had ambitious people at hand. So let’s do both, because we want to.

This is a real thing that makes a real difference for children who are facing abuse — let’s try to get them funding. And we solved both problems with AI in the end, but not because of AI. It was AI empowering the skills we had, not the solution. But we got them real funding, and the fact that we did this — it’s making a real difference. That’s just so satisfying.

Rebecca: Yeah. And I find myself writing more software than I’ve written in years. I’ve been in leadership for too many years. And even my current job doesn’t call on me to write code much, but I’ve been writing a ton. So I think there’s also that — for somebody who’s maybe no longer in an engineering role, it can provide so much value because, again, I understand what good enough code looks like and you understand what bad code looks like. And this is my personal tool, so we’re very much on the vibe coding end of the spectrum.

I want to talk about another best practice. You mentioned it up at the top: TDD and testing and all of that. The 2025 DORA report — I don’t know if you’ve seen it — but it really talks a lot about how AI is an amplifier.

And so if you’re good, it will make you better. If you’re bad, it’s going to possibly hurt you more than it helps. And that could be in terms of quality or reliability or just emphasizing the bottlenecks that you already had.

Lada: Yeah, I really agree with that.

Rebecca: So one thing I’m noticing, kind of as a result of that realization, is that companies suddenly want to — or are willing to at least consider — spending time and money on things they should have always been doing.

Lada: Yes!

Rebecca: Testing is an obvious one. More documentation is another obvious thing. But what are other things — they’ve always been good practices, but now they’re essential?

Lada: Yeah, I think in general you need an engineer in there right now who knows what they’re doing. The engineer is there to manage complexity and design. Because think about it: you have a problem space. You have a solution space. With AI, if you have nobody owning that solution space, it degrades.

Because AI cannot maintain it. It can’t think about all the levels right now. Maybe we can build a system that does.

TDD is an absolutely awesome thing. As you mentioned, I love TDD. I have my own process that I wrote that works really well, and a few people have it. Spreading it around seems to work well for them. What it does is it gives immediate feedback. And also I do predictive TDD. So this is: you guess, will the test also fail before you run it? And that allows you to maintain a mental model of what’s happening. And then what you get from that — and the AI guesses what it is — you get surprised when it’s not the case, and you’re like, “Oh no, no, I misunderstood.” And that warns you. It’s very small feedback. Small feedback loops.
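A minimal sketch of the predictive part, assuming pytest-style tests: write down whether you expect the next run to pass or fail before you run it, and treat a wrong guess as the signal that your mental model has drifted. The wrapper itself is illustrative, not part of any particular tool.

```python
import subprocess

def predict_and_run(test_id: str, prediction: str) -> bool:
    """Predictive TDD: state PASS or FAIL before running the test.
    A wrong prediction means your mental model of the code is off."""
    result = subprocess.run(["pytest", test_id, "-q"], capture_output=True, text=True)
    outcome = "PASS" if result.returncode == 0 else "FAIL"
    if outcome != prediction.upper():
        print(f"Surprise: predicted {prediction}, got {outcome}. Re-check your model.")
    else:
        print(f"As expected: {outcome}")
    return outcome == prediction.upper()

# Example (hypothetical test id):
# predict_and_run("tests/test_pricing.py::test_discount", "FAIL")
```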

Small feedback loops are essential to any kind of management — any kind of going where you want to go. Same thing here. I like refactoring quite a lot. So refactoring in general is the one superpower that if you don’t know how to do right...

There are a lot of developers that just get thrown into the industry and nobody teaches them some things. And it’s not just coding in the end. It’s not just syntax. Syntax is so much less important.

Refactoring is a superpower, and some people are rewriting instead of refactoring — just writing poor code. But refactoring is basically learning the ability of transforming code in your brain in small steps. And then you can basically go as far as you want with this, and that allows you to change and modify it where you want it to be.

And to get to good software from bad software without breaking anything, especially if you have a static language — a statically typed language — I do crazy things with this. If you master it, you can do things like: I’ve been dropped into a system of 50 microservices that I didn’t know. In a month I emerged from that, having changed the architecture of 10 of them in a better way, plus delivering the feature they care about, and saving them a few million dollars because I did it right and improved systems without breaking anything. So massive change. People are like, “Oh, refactoring, they’re going to break things.” No, if you know how, you won’t. And you can do this with AI. So I’m figuring out how to do this with AI. Llewellyn Falco has an amazing process about refactoring. AI writes code that most developers look at and go, “Cringe, cringe.” And now you can run this thing — it’s automated — and it gets you to code that’s not disgusting, not bad: better code. And all of that, I think, is still applicable in the end. There are many little places where it’s applicable.

Rebecca: You kind of raised an eyebrow when I said documentation. And that more documentation is necessary. So was that an agreeing or disagreeing eyebrow?

Lada: So it’s a question. Because documentation has always historically been hard — who is maintaining it? It’s a big problem. Now with AI, documentation actually becomes much more important. Before, agile was like: we prefer working software over bureaucracy. Our code was the initial specification. So now we’re going back a little bit, but not quite as big. Some people developed software in a way that just defines the whole thing, the whole spec. I don’t tend to do this. I work much more resourcefully in small steps, but I still spec.

And now my documentation — yes, it’s much more important to have good documentation, and it’s more important to keep it in sync and tend to it. And I have some ideas around how you keep it from degrading. But we need more around that.

Rebecca: I had one project I was playing with — and everything I’ve done today is playing, so I talk to people like you who are doing real things. But I had one project where I actually had a GitHub Action to run an agent that would review the documentation and make a pull request for anything that had changed. So it wasn’t on me to update the documentation anymore. It was self-updating.

I’m sure that could be optimized far beyond what I did. But I think that sort of stuff is really interesting too, where you can just use AI to bake in the practices that you were perhaps avoiding previously because it was just too hard or not perceived as valuable.
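A minimal sketch, in Python rather than a workflow file, of the kind of step such an action might run: collect the code diff, ask an agent whether the docs still match, and write proposed updates for the surrounding workflow to turn into a pull request. The `ask_agent` hook and the paths are hypothetical.

```python
import subprocess
from pathlib import Path

def ask_agent(prompt: str) -> str:
    """Hypothetical hook for whatever doc-review agent the workflow invokes.
    Returns updated documentation text, or an empty string if nothing changed."""
    return ""

def propose_doc_updates(base: str = "origin/main", doc_path: str = "docs/README.md") -> None:
    """Meant to run in CI: compare recent code changes against the docs
    and stage suggested edits for a follow-up pull request."""
    diff = subprocess.run(
        ["git", "diff", base, "--", "src/"], capture_output=True, text=True
    ).stdout
    docs = Path(doc_path).read_text()
    suggestion = ask_agent(
        f"Code changes:\n{diff}\n\nCurrent documentation:\n{docs}\n\n"
        "Rewrite only the sections that are now out of date; return the full file."
    )
    if suggestion:
        Path(doc_path).write_text(suggestion)
        # The surrounding workflow would commit this change and open the PR.
```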

Lada: Exactly. But you need to know them first, I think, to guide a session efficiently. That’s why I’m like — I think still, you need to learn engineering practices if you want to write software because the rules didn’t change. It’s still software.

Rebecca: Everything that you’re saying so far matches pretty well with my understanding — what I’ve come to believe by talking to lots of people like you. On the other hand, I’ve also talked to people who say, “Our developers don’t write code anymore. They don’t do that anymore. We deeply spec the feature up front, maybe with a Lovable prototype or something like that, and get customer feedback. And then we make AI build it and the engineers supervise. But the engineers aren’t — there are also completely autonomous agents in some cases that are just doing work.” What do you think about that?

Lada: I think a lot of times we are jumping into a solution before we really understand the problem — that’s the thing I probably have a suspicion against. And the thing I get from doing what I do is that I learn. Because the interesting bits about designing things when you start the application — that’s the worst time to design, because when you start writing software for a new problem, this is where you know the least about the problem. And as you start developing a solution for it, as you talk to people and start to solve the problem, you start developing the solution space in parallel with this problem space.

And understanding — I see so many people skipping that step that it’s just crazy. They just solve the wrong problem entirely, and they’re like, “Why don’t we have happy customers?”

Yeah, I wonder why? So this is where I’m like — I think I get a lot of knowledge about that. And I think this is where AI actually can help you so much more, because interestingly, one of the things I’ve been doing with it is this: with AI, you can start exploring the solution and problem space together without writing a single line of code, because the agent actually is a stand-in for code.

You can put it — we’re not talking about anything reliable, but let’s just pretend that we were trying to write the system. And we don’t know much.

So we’re trying to see, can I replicate it? Instead of code, we’re going to use an agent, and I’m going to give it files, instructions, and you can prototype and learn so much about the problem before you write code. And then you can write code, and the code will be better. And you’re not going to be shortcutting yourself.

And I think this is where a lot of power is. So I think the people who are speccing up front like this are missing a lot of learning points. But I could be wrong. I recognize that — I prefer my way, but I don’t write code manually either. When I say I don’t write code — yeah, I mean I haven’t written much code apart from for fun, keeping myself up to date. I don’t write code, really. I do everything with AI, but I do this a little bit differently than they do.

Rebecca: Yeah. And that is so interesting because, yeah, I also — I said earlier I’m only creating code because of AI; otherwise I would be unlikely to write code at all in my current role. And so I also haven’t written any code in a long time. But I’m still bringing years of software engineering experience. So hopefully that means something.

And yeah, I think it’s — I am also skeptical of this “agents write all our code for us and we just watch.” But it’s working for them for right now.

Lada: Yeah, I’m curious to see how it goes.

Rebecca: Yeah, what happens. But I think the answer might be a year away.

Lada: There’s a chance of this, yes. But I would imagine the answer would come sooner because everything is at super speeds. So it depends — are they able to actually put the quality in and take care of the structure as they go at this crazy speed?

Because I think — so I talked to somebody from GitHub, and they were like, “We see the most value when people don’t just rush to the answer and build for speed.” Speed is just one dimension, but you can reinvest it in your solutions — in the problem space, understanding the problem space, studying pain points. Invest in your structure, because you can spend some time with AI refactoring and investing in your product, building yourself tools. It’s so much easier and more interesting to come to know more and to be more efficient. And that is where I think the amazing power is — where you can actually make a true difference.

Rebecca: You mentioned earlier that you don’t think AI is going away. And I think that’s probably true. One thing I don’t want to say I’m worried about, but that I can imagine happening, is that AI becomes deeply entrenched in processes.

Perhaps vendor-specific AI becomes deeply entrenched in processes, and the usage increases. And at some point, either token costs go up or you’re just using so much more of it because they keep making it easier and easier to use.

So the prices — I expect the cost that people are paying for AI, that companies are paying for AI, will necessarily go up in the next 18 to 24 months. I’m curious what your perspective is on the cost angle of this, which is a very popular topic going into 2026.

Lada: I don’t know that I’m the best person to answer this question. I think what you said makes a lot of sense because as you scale AI and use it in more places, we will need more of it.

There’s some — I think DeepSeek showed us that maybe the assumptions we had about “we need a lot of money to train those good models” are wrong. And I see a lot of local models that are pretty decent as well. And this is where you don’t need as many resources, really. So I think that’s cool. But I’m really not an expert on this, so I don’t want to predict something.

Rebecca: You’re not going to get into the prediction game? I love the prediction game. But that’s totally fair. As we wrap up here, are there cases where AI is maybe not the right choice?

Lada: I think it’s not that AI is not the right choice — it’s how you use it that’s not the right choice. So the extreme and bad example is AI slop.

People are not saying “I write with AI” — nobody says that nowadays because the first assumption that people will make is, “Oh, you just told AI to write a thing and now I’m reading AI garbage.”

Well, I write with AI. I don’t do it this way at all. And the process is very intensive on my part. There’s a lot of work put in — similar to how I write software with AI. There’s a lot of back and forth that’s crafted, and it’s more like it’s mentoring me. So this is where how you use it is more important than whether you use AI.

I think AI can help you in many areas. And I don’t know that there’s — I would be a bit worried about the nuclear sector maybe using AI, maybe security. So security in AI is not a solved problem.

With agents specifically — when you put an agent in — this is what I’m freaking out about. Maybe I don’t want to give my data to companies very much right now, because they’re rolling out those agents so rushed. They don’t worry about security.

But I think it only takes one big blow-up and one massive issue. And I’m cringing, waiting for that to happen because I think there are a lot of dangers there.

And vibe coding — you can vibe code stuff, but if you vibe code the wrong things, you might be hurting yourself. And with production? Please be careful what you put in production when you care about the software — you can’t do this there. So it’s a balance game. It needs a lot of judgment. I see that judgment is needed here more than ever.

Rebecca: There’s such a spectrum of engineers. There are engineers who went into engineering because they have good judgment and wanted to apply it, and then some who have lived a lot of their life as ticket takers and haven’t been asked to show judgment or opinions about the product or how it works.

How do you think the definition of a software engineer, or software developer, changes as this becomes the way that we work?

Lada: I think it’s still a tool, and you still need skills. So I’m building skills around a new tool we have that has both limitations but also crazy capabilities. And we need to know both.

I still find myself guiding it, and I don’t know if we’re going to be in the — I told you, I think there are two possible universes. First universe is we actually can build a system that builds all the systems, in which case being mentally involved is not important.

Or we need to understand that we need a better tool — to know how we make ourselves more powerful, instead of just throwing code at a person and thinking that it’s okay. Nobody in the software development world wants to be a constant pull request reviewer. It’s torture. It’s not good when working with humans, and doing it with AI slop is even less enjoyable, I can tell you.

Rebecca: Two more questions for you. First, what are the three things that you want senior leaders to understand about AI?

Lada: That’s hard because I have a lot of things... But I think the biggest one is: it’s called AI and it has implications — we think of it as maybe it’s a brain and the whole brain.

And so I learned enough to be questioning that assumption right now. And I think it pays a lot if you just keep an open mind and learn — basically learn as much as you can about AI right now.

Empower yourself, help your people learn — not make your people learn, help your people learn. And also get good advice from people who know what this is. And this is where I see a lot of the problem. They go vibe code and say, “Oh, it’s amazing. It changes everything!” Yeah, but if you continue the thread, you will find a lot of frustration down the path.

And some people will get — yeah, you hear these fun stories of people going all the way, and the AI is like, “I am helpless, human. I cannot do anything. I’m going to destroy myself and delete myself from this copilot.”

Seriously. So learn about it. If you don’t learn about it, how can you make a good decision? How do you make good decisions? I think open-mindedness and experimentation is what’s actually needed right now.

And then I think we need to start with solving the problems we have. Because right now, oftentimes when I go to a company, it looks like a solution looking for a problem.

And the easiest way to do this is go to people and ask, “What’s your pain point? What are the annoying things you don’t like doing?” How can we use AI there? And the answer is not always “use an agent.” The answer often should be: let’s use an agent to write software for it — reliable software.

Because I see a lot of — oh, everything is a nail when you have this AI hammer. And then that has issues. It has limitations. If you can make something reliable, good God, please do make it reliable. You’re going to thank yourself for doing that.

And then I think I would love people to stop contributing to the cultural fear on this because there’s a lot of scary language around it. So many top executives are like, “Oh, we’re gonna replace all developers” or “we’re gonna replace this role, that role there.”

Oh, please remember that your customers are people. And the way you signal things to them by doing this is: we don’t care about people. And maybe that will not endear them to your company.

So for one, even for that simple reason, and in general, you’re a human too. We were joking with some developers that actually, can we write a system to replace an executive? It doesn’t seem very hard.

So don’t dig that hole maybe for the whole of humanity. And I think it also helps because if you want to empower your team to use AI, not scaring them around that will go a long way. Make it safe.

Rebecca: There is a wonderful episode of The Twilight Zone from the ’50s and ’60s where the robots come to a factory, and in the end, they replace the leader of the factory, not the workers.

Lada: Maybe we should watch it more. Because we are humans. If we’re trying to replace all humans, what are our values as humanity? I question that.

Rebecca: So last question for you and we’ll wrap up. Let’s say it’s two years from now, December 2027. Are we still talking about this?

Lada: Yeah. It depends. So it depends on: are LLMs still a thing? I think they’ll still be a thing, unless we’ve discovered something entirely different — that’s the only case where we wouldn’t be talking about it.

I think the patterns that I have uncovered, anti-patterns — I started from: what are the fundamental limitations of this thing that we have? And unless we find ways to actually solve them...

For example, a continuously learning AI. But even a continuously learning AI is interesting because it doesn’t remember the details of everything it learns. It just has to know, “Oh, I read about this thing, I can look things up.” And you still need this external knowledge to guide details about things.

So I think a lot of those — I was trying to build those anti-patterns and so on around the things that don’t change. And maybe my solutions will be a bit different, the balance.

But I think the problem space of how we react to them stays roughly the same. At least I’m pretty confident about 2026. I try not to predict things, but I think it’ll hold for 2026. I think it’ll probably still be relevant. We’ll probably have learned a lot by 2027, because I learn a lot in a month currently, but I think it’ll still be relevant.

Rebecca: I think relevant, and I have no idea what that relevance is going to look like. We shall see.

Well, Lada, thank you. What a great conversation. I really love hearing about the realities of somebody who’s not just using this, but really exploring it and thinking about the implications.

So I will leave the talk and some other good stuff in the show notes, so check them out. There will be goodies in there. Thanks so much.

Lada: Thanks for having me. I really enjoyed the conversation too. Appreciate it.

Rebecca: Well, that’s the show. Engineering Unblocked is brought to you by Swarmia, the engineering intelligence platform that’s trusted by some of the best software companies in the world. Thanks for listening, see you next time.