Feb 9, 2026 · Episode 30

AI’s unglamorous wins for developer productivity

Rebecca Murphey and Tara Hernandez, VP of Developer Productivity at MongoDB, explore why faster coding had zero velocity impact for her team, why developer experience matters more than ever, and why curiosity defines a good developer.

Show notes

Tara Hernandez is the VP of Developer Productivity at MongoDB, where she oversees everything from CI systems to AI strategy. With a career spanning Netscape, Pixar, and Google, she’s seen developer productivity evolve through multiple inflection points — and now she’s applying scientific rigor to understanding AI’s actual impact.

When MongoDB asked Tara to figure out AI 18 months ago, she was a confirmed skeptic who, at the time, called it “a ginormous Ponzi scheme.” Her team’s initial measurement revealed less than impressive results: AI saved developers an hour a week on coding but had zero impact on product velocity — the ROI was negative.

They continued experimenting, and found that the unglamorous (but very real) productivity wins live in the outer loop: Slack bots answering developer questions, agents analyzing terabytes of logs, and auto-generated, properly formatted tickets. These seemingly small improvements deliver bigger impact than faster code generation ever could.

Tara argues that developer productivity comes down to tools, communication, and process — and AI simply becomes another tool in that framework. What matters is how people interact, how information flows, and whether you can measure the mechanics of collaboration.

Watch the episode on YouTube →

Timestamps

(00:00) Introduction
(01:04) How Tara became MongoDB’s first VP of Developer Productivity
(06:42) Teaching Pixar how to do software development in 2002
(11:06) Tara’s three pillars of developer productivity: tools, communication, and process
(18:52) How the internet changed infrastructure from “not real engineering” to essential
(22:00) MongoDB’s 18-month AI journey from skepticism to scientific measurement
(25:11) How AI saved coding time but had zero impact on velocity
(27:03) Where AI actually wins: senior engineers with agentic programming
(30:04) What developer productivity teams should focus on right now
(34:14) Why the outer loop matters more than the inner loop
(37:08) Measuring the cost of human-to-human communication
(40:14) Developer productivity is really about information flow
(41:13) Why developer experience matters more now, not less
(44:33) The ethical problem with celebrating AI-driven layoffs

Transcript

Tara Hernandez: We're enhancing the tooling, and by how we're using the tooling, we're driving changes to the communication and process that help the team overall accelerate. But it is slow. This does not happen quickly. And I think a lot of other companies are seeing that too. You think, oh, I'll fire all my junior engineers and have all these agents, and then all of a sudden we're, you know, $1 billion a year. And I'm like, notice: people aren't saying that as much anymore.

Rebecca Murphey: I'm Rebecca Murphey, and this is Engineering Unblocked. Engineering Unblocked is brought to you by Swarmia, the engineering intelligence platform that's trusted by some of the best software companies in the world, including startups like Superhuman, scale-ups like Miro and companies from the Fortune 500.

On today's show, I'm talking to Tara Hernandez. She's the VP of Developer Productivity at MongoDB, and I have a lot of questions about how one ends up with that title. So Tara, welcome. You are the first — you are the first VP of Dev Prod that I have met. I've met directors and senior directors, but I've never seen it hit the VP level. So, yeah, tell me, tell me, what do you think?

Tara: I like to tell this story. I got this cold email from Mark Porter, who was the CTO of MongoDB at the time. And I was at Google and I was like, no way that some CTO sent me a cold recruit email. I checked the mail headers. And I finally did the LinkedIn things and saw a bunch of people in common. I'm like, do you know this guy? Would he do this? And I was talking to Brad Porter, no relation. Brad and I had worked together at Netscape a million years ago. And he goes, oh yeah, Mark would totally do this. And you should absolutely talk to him. You two would get on like hair on fire, which was a funny thing for a bald guy to say. But there you are. And, yeah.

So I talked to Mark, and when he described the job, I'm like, are you sure this is a VP role? I actually asked him that. He goes, yeah, yeah, it's going to grow. And it did. That was my first VP job; I had been up to senior director at that point. And I started out overseeing the CI ecosystem specifically. MongoDB has a proprietary CI system called Evergreen. Don't judge us. It predates a lot of commercial solutions. And it is purpose-built to build and test a distributed document DB. We run about 300,000 to 400,000 compute hours' worth of tests for this thing a day.

Rebecca: Okay. Yes.

Tara: Now, I sometimes refer to myself as, you know, the VP of Kitchen Sink-ness. You know, I've got build systems, test frameworks, performance tooling and analytics, security infrastructure, Git. Now I own AI, which is a funny thing as a confirmed AI skeptic. It's been an interesting journey. So, yeah.

Rebecca: So developer productivity at MongoDB is everything that's not SRE and is not product development.

Tara: Seems true. Largely so, correct.

Rebecca: And so you weren't raising your hand and saying, hey, I'm looking for a VP role in developer productivity. He's looking for a developer productivity — like what? How do you get there? What was happening before you showed up?

Tara: Yeah. So MongoDB, historically, had the core database and its ecosystem: all of the drivers and client libraries and tools and whatnot. And then, I forget exactly when — maybe 2013, 2014, somewhere in there, I should probably know this more accurately — they started to have a SaaS service, right? Atlas. And so a separate engineering team spun up. There was the core team and the cloud team, roughly. And there's a lot of differences in approach because of the distribution. The core server team was really thinking more about on-premise, and a lot of the, you know, rituals and processes were around that. The cloud team is, okay, now we have a control plane and all these new things.

And so, there was a lot of — I don't want to say it was outright duplication, but there's a lot of similarities in function that was sort of tailored to those teams. But after time, this is very common at startups, right? You have this team over there. Apple, as far as I know, still does this. Every team does it differently and they pay a bunch of people to try and unify it. All the people I used to know at Apple aren't there anymore. So I'm not sure if they still do that.

But anyway, Mark and his team decided they wanted to break out some of the tools that were in common, or increasingly becoming common, and have that be a separate organization. And at the time, I guess organizationally, they decided that the VP role made more sense. And then subsequently, as happens, they're like, okay, well, you're doing this and we should probably have you do that too. And what about this? And so, you know, the size of my remit has tripled, I think almost quadrupled, over almost four years. It'll be four years in May.

Rebecca: That's a very clear remit to start with, like, take these two things and make them one thing where it makes sense to make them one thing. Great. And outcome is also, like, pretty clear. There's not a lot of — this is like what I — well, I don't know, you can tell me I'm wrong, but this is the kind of thing that sounds like what I would call just work.

Tara: I mean, I've done the same thing my whole career at different levels, but in the course of my career it's been called build engineering, release engineering, DevOps (I hate it when anybody calls a team DevOps), infrastructure. Like, I think there's one I'm missing. At the end of the day, it's: how do you take a group of people and get them moving in the same direction, procedurally, culturally, in the most efficient way possible for the ultimate end goal, which is, ideally, the relatively rapid release of quality software, right? And what does it take to do that? When I went to Pixar, you know, Pixar was a movie studio, and still is a movie studio.

It was a movie studio that had a software development team. And back in the late '80s and early '90s and then into the — let's see, when did I start there? 2002, the early aughts. Right. That was — they were just starting to realize, okay, we had a more academic approach. The managers in what was called the Studio Tools organization were all PhDs with many patents. You know, there's people like Rob Cook and Tony DeRose and others and they're just, you know, these people have written books, right? And that hierarchy felt more like sort of academia. And they would have a version of the proprietary software for every movie. And that made — that worked for the first couple of movies. Right. And then they realized, well, okay, we want to make movies faster than every 4 to 5 years.

We wanted to, well, now we have to be able to do things in parallel. And so, I like to say this, and people might dispute this characterization: I taught Pixar how to do software development. You know, using branching, using version control, that was not 100% there. Having versioned libraries, having a proper build system, having a CI system, having a bug tracking system that didn't require you to write SQL statements: these are things that I helped them with.

Rebecca: So this is the early 2000s, right? The aughts, we've used that word. And it was reasonably normal then for companies to have not figured this out. Especially since Pixar is in the vicinity of movies, and movies have been made since, what, the late 1800s, maybe? And we see this with our customers. We have customers who are, you know, 100-year-old manufacturing companies, trying to figure out, okay, how do we do software? How do we bring software to these practices?

Was that your remit at that time to, like, teach them those things? Or were you kind of trying to open their eyes to something that they just didn't know they didn't know?

Tara: A little of both, a little of both. So the reason I got the job at Pixar was that my VP at the previous company, Blue Martini, had been at SGI and knew the hiring manager at Pixar. A lot of people from SGI went to Netscape, and a lot of them went to Pixar. The hiring manager was this woman named Maryann Gallagher (I think last I knew she was at VMware; really great woman), and her boss was a VP by the name of Rob Cook, who was running the Studio Tools organization. When they hired me, they said, help us manage the software such that we can do more than one version of the software at the same time. That was the remit that I was given. And the other part of it was that I was the first sort of professional infrastructure person. Previously, and this is very common at many companies, not just movie studios, you'd delegate it to the last person hired or the lowest person on the totem pole, or to someone with the attitude that infrastructure is not real engineering work. Like, you know, just go take care of that. Right.

Somewhere along the line, they decided, oh, well, maybe there are people for whom this is actually what they do. The ironic thing, by the way, as an aside, was that my specialty in computer information sciences at UC Santa Cruz was in graphics and animation. I never used it. Not once. By this point, I couldn't have done a Z-transform if you put a gun to my head. But it was kind of cool to meet people who had written some of the textbooks I had used. But anyway, that was my job. My job was not to work on the Pixar proprietary software, or to understand anything about graphics and animation. My job was to work on the infrastructure for this 90-ish person core Studio Tools team, and then another couple of hundred people who would write special effects plugins and things like that, to make sure that they had a good environment. And I ended up building a team out at Pixar; I think at its peak, there were 13 of us. And the one time I was paid to write C++ code was at Pixar. I wrote a version library.

Tara: Right. And I give this example: between when I was at Netscape and when I went to Blue Martini, an early e-commerce startup, I think every senior engineer that had worked for me at Netscape ended up coming to Blue Martini eventually. And we used a lot of the developer tools that we developed at Netscape: Tinderbox and Bugzilla and Bonsai, which was an early HTML code introspection tool.

A lot of what we did at Blue Martini was different, even though I was the same, my engineers were the same, and the tools were the same, because the wider engineering team was different, the products were different, the requirements were different. Right?

And so, you know, one of the early lessons was that the act of software development, as I view it, has three major pillars. There's the tools pillar. That's obvious, right? What do you use for your compilers and your CI system and your testing? How do you deploy, and where do you deploy? But then there's also how you talk to each other. How do you convey — and not just in the "be nice to each other" kind of sense, but, you know, how do you communicate? How do you do documentation? How do you take a group of people and have them know things? And then there's the processes that bring them all together. Do you have coding standards? How do you do code review? How do you do bug tracking? Those three things: the first is kind of like culture and environment, then process and policies, then tools. Three pillars. Technology is only one of three. Your real set of requirements is: how do you make this group of people work well together? Because other groups of people might want to work in a different way, and there is no one right way. Maybe there's one right way for this group of people. Maybe. And not always.

Rebecca: So given that — and I talk a lot about how the size, age and culture determine kind of the even the solution space available to you in a productivity situation. A team that's releasing, you know, twice a year has different goals than a team that's releasing as often as they want to. And shrink-wrap software, which — like, literally, you know, not literally sending out the shrink-wrapped boxes anymore. But like, I still think of versioned on-prem software as — oh, lovely. I might have had one of those at one point. You never know. Was it the — was it a floppy or was it a CD at that point? Nice. Nice.

Rebecca: Hey, I have computer programs on cassette tapes. So I'm like, yes, indeed. But, you know, we're dating ourselves. I think it's interesting because, throughout this, there is this theme. I mean, I don't know how much you were talking about productivity when you were writing floppies for Borland. But at the same time, you're writing floppies for Borland, and that's really annoying, I assume. And at some point you raise your hand and say, could we make this less bad? Or maybe, maybe not. What is productivity? What has productivity meant to you throughout this experience?

Tara: You know, it's a really great question. My first job was Borland. I worked on the languages team, which was the compiler team. And at the time, the Borland compiler was the number one compiler in the industry. Actually, a lot of my peers at UC Santa Cruz tried to get into Borland tech support for their first job. Borland tech support at the time, those were elite-class developers. They would help you debug your code. I don't know if you remember, but this is pre-World Wide Web, so you couldn't just go to Stack Overflow and look something up. And if you paid extra, I think they would even help you debug your code on somebody else's compiler. With the Borland compiler, you got a certain amount of tech support: what we would consider professional services now. It's interesting. It was Borland version four, I think, that was something like 33 3.5-inch floppies and 55 5.25-inch floppies, plus paper books. The shrink-wrapped box, I swear, weighed like 30 pounds.

But then the next job I went to was Netscape. So now this is the beginning of the early public internet. You know, the internet for normal people. For grandma, as we used to say. We still had shrink-wrap software, as you said; you would go down to Fry's or CompUSA or whatever. But if you had a modem, you could also download it. And so a big part of what my team started to do was push bits. At first we were just pushing to the FTP server one at a time, using, you know, completely insecure mechanisms to do so. But now we could push bits out, right, and people could log on with their modems, make the funny little noise, and download directly, and not have to go to the store. Right.

And fast forward, you know, a little bit into the teens, I guess. I forget exactly when AWS launched, but by the late aughts, early teens, you didn't need to run software yourself. Well, client-server started before then, but now you could rent services by the second and run stuff without even having a computer. Right. And so it was amazing. You know, my VP who gave me the recommendation to go to Pixar, he used to keep punch cards in his pocket. He used them for notes. He had used those when he worked, I think, at IBM. And so from punch cards to where we're at now, which is, you know, your smartphone, your Pixel or your iPhone or whatever, which has more compute power than probably the entire United States had 30 years ago.

Rebecca: When I think about what I have seen compared to what my parents have seen, I'm like ... I've got nothing.

Tara: Right. Fair, totally fair. So I think about what did productivity mean before we started talking about productivity? I think some of the same principles applied. It just was a different context. Right. You're still talking about a set of tools.

It was just that, you know, the set of tools I had to care about in the beginning was shell, command-line utilities, revision control systems, installer engines. I read a lot of install code, and then some network remote things started to come into play. But the other part of it was: is there a bug tracking system? You know, at Netscape, like I like to say, I wrote CI before Martin Fowler invented the term. Right.

My team built our first CI system. We didn't call it that, but, you know, on Windows it was a batch script. On Mac, it was AppleScript. And our Unix guy was lucky because he actually was able to write a Bash script on his SGI machine. We were all very jealous of that. But that's where it started. And then everything turned into Perl, right? Perl became the language of infrastructure. And then when I was at Pixar, we finally gave up on Perl. I remember I was trying to rewrite the build system in object-oriented Perl.

And I realized, why am I doing this to myself? And Python had finally come along enough, and had enough momentum, that I was like, all right, I guess it's time for me to learn Python. And I was able to rewrite the Perl build system in like half the time with half the code. I'm like, okay, well, obviously that was the right decision. You know, these days I look at all of this, like Gems and npm and the public npm repo and Maven, and I'm like, God, I miss CPAN. But, you know, it was still about: how did we talk to each other? What are the documentation processes? How do we track our issues? How fast are we able to get code into a testable state, even if we're not shipping it? Right.

And what are the tools that we use to do that? I think the World Wide Web unleashed infrastructure as a concern that mattered. Before the World Wide Web, infrastructure was for, like, the people who couldn't be real engineers. But, you know, maybe my team wasn't writing code that people would frame and put on a wall as an example of great Perl code, but it worked, right? And now speed. We talk about internet speed, right? That started in the late '90s. And then speed becomes more important than quality, because, not to shame the entire industry, but there's a different conversation you have about quality when you're doing physical media versus when you're publishing something digitally. Right? Because when you make a mistake digitally, you just republish. When you make a mistake with physical media, you just cost your company millions of dollars.

So it changed the conversation, right? And then all of a sudden it's like, okay, now how do we do this stuff? And I swear, if I had an ounce of entrepreneurship, I would have predated CircleCI with a Tinderbox solution, maybe, and made millions. But I'm not an entrepreneur, so other people did that, right? But you think about, like, GitHub and Atlassian. That's all about developer productivity, right? At the end of the day, it's about how we communicate with each other, what the bug tracking systems are. Those companies would not have needed to exist before the internet was available to the average person.

And so it really changed things. Right. And we talk about inflection points: from floppy to optical media to internet-based distribution. The other big transition, coming out of Netscape, was from companies who made their money selling hardware, where the operating system was just the way to use the hardware, to Linux, and then the birth of the Linux Foundation. And that changed everything. Right? And then from Linux to cloud. And now from cloud to AI. So we see these really interesting inflection points. But from my perspective, the three pillars of what we now call developer productivity still persist.

Rebecca: Still pretty much the same. So I — you've walked into my trap.

Tara: Darn. Shoot.

Rebecca: You said the word first though. You said AI before me, so. Yeah. But one of the things I'm really — I don't meditate, but like professionally meditative — ruminating. There we go.

One of the things I'm ruminating on is that a lot of the reason we have historically spent so much money on improving the productivity of software engineers is that they were expensive. Engineers and cloud, right? Those are your two big expenses. And so it was really urgent to, you know, get the most out of every very expensive engineer you hired. Does our productivity conversation move elsewhere in the age of AI, or is it still firmly focused on the developer?

Tara: This might be a hot take...

Rebecca: Hot takes appreciated.

Tara: My team was asked, I don't know, 18 months ago. The CEO, Dev Ittycheria, and my boss at the time, Tim Shaffer, were like, okay, Tara, go figure out this AI thing. How can we have AI help us? Right. Everywhere you look on LinkedIn, and I live outside of San Francisco, you go down there and it's all the billboards. It's AI, all of them. Go figure this out. And I'm like, you realize that I hate AI, right? Like everything about it.

To me, it's like a ginormous Ponzi scheme. Right? This is what I'm saying 18 months ago. And I'll tell you, I'll get to the punchline eventually. But so we started investigating it. Right. And, you know, as I'm sure a lot of people found, if you get behind the hype and you actually start doing the evaluations and you assess things, you realize, oh, this is very nascent technology, right? We said, okay, here's a set of coding scenarios, and we're going to put a number of vendors through them. I don't want to get sued, so I won't say who they were, but 18 months ago there were a number of vendors, and we went through all of them. And the code quality across the board was pretty abysmal. They all failed.

Rebecca: This is 18 months ago.

Tara: Yep, 18 months.

Rebecca: Right. Okay. An eternity.

Tara: So I ended up going with one and continued investigating, because what also started happening, as will happen, is that everybody and their mother is now launching an AI startup in some form. You know, Stanford is here, and they practically mint them in a factory and pop them out. So it's like, oh, here comes this vendor and that vendor, so let's try them out. One of my directors actually wrote a new procurement policy, because having too many of the same type of tool causes all kinds of flags to go up for finance, which is reasonable. So we said, here's our proposal, and we were able to get buy-in on that, to make sure that we weren't being crazy. And we started to see some improvements. And you can see how fast those improvements started to happen. Think about when Copilot came out, for example. That was kind of the inaugural moment, you know, it was like, ooh, right? And now you think about what Copilot was two years ago and it's like sticks and rocks, right?

And now we've got all these other things, agents and MCP instances, and, you know, is it a plugin or is it an IDE or is it a command line? And I say the command line will live forever, because humans are who they are. It kind of depends on what you wanted. And we started to do some real research. This is the other thing, and why I'm considered kind of one of the AI people at MongoDB and in the larger industry: very few vendors provide any kind of telemetry, and when they do provide it, you can't trust it. Right? Not to malign them, but they're going to try to give you the stuff that makes their product look the best, and in most cases it's not actually telling me anything meaningful. So then you have companies that will ingest the data on your behalf, and now you're paying for the AI technology, and you're also paying for the AI technology company that's measuring the AI. And I'm looking at the numbers going, well, this isn't going to work. So I went back to my data analysts, like, help me out here, scientists.

So I figured out how to ingest it, and we started doing some experiments. And in the first measurement that we did (again, I won't say the tool, so I don't get sued), we had pre- and post-AI data for the same users, and we were able to measure that, on average, they saved an hour a week on the amount of time they spent coding, and it had zero impact on overall product velocity. Zero. So we saved an hour a week per engineer. Okay, that's a little bit of money, but we did the math: the amount we saved was less than the amount we were spending for them to save that hour. So the ROI, therefore, scientifically, no good, right? So now we're starting to look at other tools and changing the game. And another thing that we found out (again, we put telemetry on everything, and I've seen people talk about this): coding is not actually the problem.
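The ROI arithmetic Tara describes can be sketched in a few lines. All numbers below (head count, loaded hourly cost, seat price) are illustrative assumptions, not MongoDB figures; the interesting term is the fraction of "saved" coding time that actually shows up as product velocity, which her measurement put at roughly zero:

```python
# Back-of-the-envelope ROI check for an AI coding tool.
# All numbers are illustrative, not MongoDB figures.

def monthly_roi(engineers, loaded_hourly_cost, hours_saved_per_week,
                seat_price_per_month, realized_fraction):
    """ROI = value of time actually recovered, minus what the tool costs.

    realized_fraction models how much of the "saved" coding time shows
    up as real product velocity (Tara's measurement suggested ~0).
    """
    weekly_value = engineers * hours_saved_per_week * loaded_hourly_cost
    monthly_value = weekly_value * 52 / 12 * realized_fraction
    monthly_spend = engineers * seat_price_per_month
    return monthly_value - monthly_spend

# 200 engineers, $100/hr loaded cost, 1 hour/week saved, $60/seat/month.
naive = monthly_roi(200, 100, 1.0, 60, realized_fraction=1.0)  # hours at face value
real = monthly_roi(200, 100, 1.0, 60, realized_fraction=0.0)   # no velocity change
print(round(naive), round(real))
```

With these made-up inputs, the naive calculation looks wildly positive, but once the saved hour fails to move velocity, the ROI is just the negative of the spend, which is the shape of the result Tara reports.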

Rebecca: I've posted that a couple of times on LinkedIn. Yep.

Tara: Code review, release deployment. Like, you know, what's your operational set of steps, right? That's where you start to run into friction. Or look at the DORA key metrics: did it do anything to your time to discover, your time to remediate? Maybe it did for one, maybe not the other. It might still help, but it's harder to demonstrate that.

And the reason it's harder, and I think a lot of people are realizing this, is: oh, shoot, what's the security posture of these vendors? In order for the tool to be useful, you have to hand it the keys to the kingdom. And if you don't want to do that, it limits the amount of value you get, right? So that's another challenge that we've run into. So okay, coding speed was not necessarily a win, until you had the breakthrough, which was the more senior people discovering agents and agentic programming. What are they calling it now? Spec coding. Right.

So now actually we're cooking with gas, and we're able to show that for the right set of people, you can do a lot of really interesting work there. But ultimately it's still constrained by those senior and staff engineers, the folks where we often see the most benefit. You're still constrained by the amount that they can either delegate or hold in their head, because ultimately humans still have to be responsible, right? But as we have them kind of leading the way on agentic programming coding standards, that type of thing, we're now enriching our documentation. And now the search function is helping the lower tiers, right?

So now we're creating an ecosystem, but it still goes back to the same three pillars. AI is enhancing the tooling, and by how we are using the tooling, we're driving changes to the communication and process that help the team overall accelerate. But it is slow. This does not happen quickly. And I think a lot of other companies are seeing that too. You think, oh, I'll fire all my junior engineers and have all these agents, and then all of a sudden we're at, you know, $1 billion ARR. And I'm like, notice: people aren't saying that as much anymore.

Rebecca: Not so much. This is another rumination of mine. You know, the cost of AI is substantial, and is probably, I would suspect, going to increase over time. And even if it doesn't increase, our usage of it is going to increase. What if it comes to a point where you have to say which engineers get access to AI?

Tara: Oh, well, we have that already. There are many reasons why you might want to regulate it. It might have to do with software licensing: if you're working on something where it would be eyebrow-raising to say you're doing a lot of AI coding, okay, those engineers don't get to use it, right? Or say finance has said (I'm going to make up a number) you get, you know, $10,000 a month. Well, that can actually be exhausted pretty quickly with the right model usage. So now you have to decide: which projects are we going to prioritize, or are we going to peanut-butter it? You could make that choice, right.
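The "prioritize or peanut-butter" budgeting choice Tara mentions can be made concrete with a toy allocator. The team names and dollar amounts below are hypothetical, purely to illustrate the two strategies:

```python
# Two ways to split a fixed monthly AI budget across teams.
# Team names and dollar figures are invented for illustration.

def peanut_butter(budget, teams):
    """Spread the budget evenly, regardless of need."""
    share = budget / len(teams)
    return {team: share for team in teams}

def prioritized(budget, demands):
    """Fund projects in priority order until the budget runs out.

    `demands` is an ordered list of (team, requested_dollars),
    highest priority first.
    """
    allocation, remaining = {}, budget
    for team, ask in demands:
        grant = min(ask, remaining)
        allocation[team] = grant
        remaining -= grant
    return allocation

print(peanut_butter(10_000, ["core", "cloud", "devprod"]))
print(prioritized(10_000, [("core", 6_000), ("cloud", 6_000), ("devprod", 2_000)]))
```

The second strategy fully funds the top priority and lets lower priorities absorb the shortfall, which is exactly the trade-off a finance-imposed cap forces you to discuss on an ongoing basis.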

And I think part of the excitement there is that it's a dynamic discussion that should be happening on an ongoing basis. There's no one answer, especially in the AI space, because everything is changing all the time. We were talking to one company, and they got acquired. Well, that changed the conversation, right? And I expect that's going to continue to happen until we kind of see things start to settle a little, assuming that happens. But, you know, to your point, the current cost model, where nobody can really monetize, is going to be hard to sustain. You're going to have to figure something out. The chip people are realizing the power math doesn't work.

So now they need to invest in different types of chip development to lower power consumption. I think AI is going to drive us in directions as an industry that most people would find unexpected, because as we find the actual friction points we're running into, and figure out how to address them, that's going to take things in a new direction. But if you remember, when we started, everybody was about codegen, everybody was about coaching. That was 18 months ago. Now you almost never hear about it as an interesting topic on its own — it's codegen plus at a minimum, or it's something else completely different.

Rebecca: That brings me back to my question of like what is a developer productivity organization focused on in an AI world. I have so many more questions on AI. But let's talk about that one first.

Tara: Well, here's one that we've talked about: a developer productivity team is an engineering team, but it also provides services, right? If you live in Slack or whatever your chat of choice is — and these days people tend to live in a chat world — you're a team that other teams come to with questions. Can you have something answer those questions for you, especially when it's what we would generously call an RTFM moment? Right.

So one of the first things we were successful at was a prototype: a Slack bot. We fed it everything — which thankfully was a lot, because in infrastructure you don't have customer data or financials or anything else people would be worried about. So we were able to show, here's what the data ingestion is going to look like. And it's kind of Stack Overflow-ish. In fact, I think if Stack Overflow had solved this, they might still be a going concern.

So it's like, all right, people come into our Slack channel — dev-prod, in my case. How well can the bot intercept those questions and reduce the support load on the engineer who would otherwise get them? Okay, now we see that the accuracy rate is 50%. Now we need to figure out how to improve the accuracy rate. Okay, now that we've improved the accuracy, what else can we do? Oh, perhaps we can intercept a question the bot can't answer in Slack, generate an issue ticket, and have an agent make sure it's formatted correctly. Now have another agent see that there's an issue ticket and go try to solve it, then direct a PR to the on-call engineer to carefully review the code and see whether or not it would solve the problem. Right.
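The escalation flow Tara describes — bot answers what it can, files a well-formed ticket for what it can't, and hands an attempted fix to the on-call engineer for review — can be sketched as a simple pipeline. This is a hypothetical illustration, not MongoDB's implementation; the `answer_bot`, `file_ticket`, and `attempt_fix` stubs and the ticket ID are all made up.

```python
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.8  # tuned against the measured accuracy rate


@dataclass
class BotAnswer:
    text: str
    confidence: float


def answer_bot(question: str) -> Optional[BotAnswer]:
    """Stub: a real bot would query ingested docs/runbooks here."""
    known = {
        "how do i rerun ci?": BotAnswer("Use the 'restart' button on the task page.", 0.9),
    }
    return known.get(question.lower())


def file_ticket(question: str) -> str:
    """Stub: a real implementation would call the issue tracker's API,
    with an agent ensuring the ticket is formatted correctly."""
    return "DEVPROD-123"  # hypothetical ticket ID


def attempt_fix(ticket_id: str) -> Optional[str]:
    """Stub: a coding agent would try to produce a draft PR for the ticket."""
    return None


def handle_question(question: str) -> str:
    answer = answer_bot(question)                  # 1. try to answer in-channel
    if answer and answer.confidence >= CONFIDENCE_THRESHOLD:
        return f"[bot] {answer.text}"
    ticket_id = file_ticket(question)              # 2. file a well-formed ticket
    pr = attempt_fix(ticket_id)                    # 3. agent proposes a fix
    if pr:                                         # 4. human stays in the loop
        return f"[bot] Filed {ticket_id}; draft PR {pr} awaiting on-call review"
    return f"[bot] Filed {ticket_id} for the on-call engineer"


print(handle_question("How do I rerun CI?"))
```

The key design point is that each stage only escalates what the previous stage couldn't handle, and the final step always lands on a human reviewer rather than auto-merging.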

And so I would consider these, for us, to be experiments. We're a conservative company — we are not an early-stage startup, so we have to be mindful of risk. But I think we're also a company that likes to experiment, likes these new things, and is looking for ways to improve our overall productivity as a topic, not as a team, through things like that. And one of the things that's really fun about a team like mine is that we can afford to experiment more, because our customers are internal. The worst thing that happens is we make them mad; we don't cause a business deal to go south. That's another implicit benefit of working in a developer tools company: we can be customer zero. That proprietary CI system I was alluding to before — it's built on top of Atlas. So we stress-test the heck out of our own product in ways that are amazing and wonderful. And my team feels proud when we find a stop-ship bug — everybody walks out with their chest out.

Rebecca: I think that's kind of what I'm alluding to — maybe there's less to do in the inner loop and more to do in the outer loop now. And more of it is ergonomics rather than tooling?

Because I think of that — at Stripe, we had the same thing. It wasn't even an on-call rotation; it was just different people asked. It was one of those jobs as a staff dev: for a whole week you were on Dev Prod, you know, and dealt with anything that came up. And it's very obvious how you could free up at least some portion of a person by building that sort of thing. And I'm personally skeptical of — like you said, you get more developer output, but not necessarily more product outcomes, if you just focus on the codegen side of things. I think looking for those sorts of opportunities is much more interesting, and probably higher leverage.

Tara: Right. So think about another example in our CI ecosystem. Like I said, Evergreen is purpose-built to test the distributed document database. Well, the document database, with all these shards and replication, generates a ton of logs. Now, we're very big fans of the OpenTelemetry framework around here, and we have our traces, but we still have logs — and even structured, you know, we can have terabytes, right?

So there was another thing where it's like, oh, now we have an agent that lives in our log viewer and tries to help identify, in potentially terabytes' worth of logs, where the problem is. And actually, can we recognize patterns? That's shortening the debugging time, right? Maybe not terribly sexy if you think about all the stuff people like to write about on LinkedIn or whatever. But for Joe Blow and Jane Doe in the trenches, they love stuff like that, because it's making their day incrementally better.
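One common way to do the pattern recognition Tara mentions — though the transcript doesn't say how MongoDB's agent works — is log templating: collapse the variable parts of each line so that lines differing only in values share one template, then surface the rare templates, which is often where the failure hides in a huge, repetitive log. A minimal sketch, with made-up example log lines:

```python
import re
from collections import Counter


def template(line: str) -> str:
    """Collapse variable tokens (hex IDs, IPs, numbers) into placeholders
    so lines that differ only in values map to one template."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+\.\d+\.\d+\.\d+", "<IP>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line


def rare_patterns(lines, max_count=2):
    """Return templates that occur at most max_count times — the unusual
    lines worth a human's (or an agent's) attention."""
    counts = Counter(template(line) for line in lines)
    return [t for t, c in counts.items() if c <= max_count]


logs = [
    "conn12 accepted from 10.0.0.5",
    "conn13 accepted from 10.0.0.9",
    "conn14 accepted from 10.0.0.7",
    "replica set member unreachable, election started",
]
print(rare_patterns(logs))
```

Here the three connection lines collapse to a single common template and drop out, leaving only the anomalous election line. Production systems use much more robust template miners, but the shape of the idea is the same.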

Rebecca: Well, that's what I love about working in a dev prod organization, period: you can see the people you're impacting, and you can talk to them so easily.

Tara: Yeah, it's both the good news and the bad news that, you know, you get that instant gratification or that instant karma hits if it's on the negative.

Rebecca: Yes. One or the other. But they will find you.

Maybe one other thing I'm curious about: is any of your work expanding beyond developers to look at other parts of the SDLC — whether that's go-to-market, support, customer support, marketing, or any of the others?

Tara: Oh, yes — though not necessarily anything my team has been involved with. Our education department has done all kinds of really amazing things to enable our customers around how we do documentation and support for our client libraries and things like that — even assessing, you know, all the models: are the models providing correct documentation for our customers to go look up how to talk to MongoDB through these drivers? So they've taken on trying to figure out how to scan the internet's models to see whether there's good stuff out there that maybe we don't even control.

But, you know, we can try and help get issues addressed by working with whoever the model owner is. I know the technical services and professional services organizations are always looking for ways to improve turnaround time and customer satisfaction, right? And a lot of that, I think, is where we're seeing AI benefit. Again, it's not necessarily the sexy stuff — you can go do a unicorn with, like, two people and a big enough EC2 instance to host your agents, or Lambda, or whatever your orchestration of choice is. But identifying the death-by-a-thousand-cuts friction collectively results in a much bigger impact. It's just not as flashy.

Rebecca: Speaking of those thousand different things: at Stripe I ran — unsuccessfully, I'll be honest — a program called Papercuts, where developers could click a button in various places and basically say, this is not going well. And it would open a ticket that would get reviewed.

And one thing we learned was that human-to-human communication was one of the most expensive things we were doing. The amount of time you had to talk to another human just to make progress was really costly to us. And that's not necessarily a technical issue — maybe a documentation issue, maybe an organizational issue. How do you get attention to those sorts of papercuts internally, just within your remit of the engineering organization?

Tara: There's a couple of different ways. One of the things I've started to incorporate — and I don't think I invented this; I'm pretty sure someone else did — is the concept of developer SLOs, or development SLOs, right? We actually measure, if somebody brings a question to Slack, how fast we respond to it: what's your time to reply and time to resolution, kind of like what you do in production, except it's a people incident. Someone has a question. So we measure that, and then we figure out how to improve it. Do we have the right on-call rotations? Does the Slack channel trigger the right thing? Does the bot provide the right answer in a timely fashion? It will feel kind of silly, but I remember reading John Doerr's book, Measure What Matters. If the mechanics of how people operate with each other are important, figure out how to measure them, right?
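Measuring a "people incident" the way you'd measure a production incident boils down to two timestamps deltas per thread. A minimal sketch of the two SLO metrics Tara names — the thread structure and field names here are invented for illustration, not any real Slack export format:

```python
from datetime import datetime


def slo_metrics(thread: dict) -> dict:
    """Compute time-to-first-reply and time-to-resolution (in minutes)
    for a support thread, mirroring production incident SLOs."""
    asked = datetime.fromisoformat(thread["asked"])
    first_reply = datetime.fromisoformat(thread["first_reply"])
    resolved = datetime.fromisoformat(thread["resolved"])
    return {
        "time_to_reply_min": (first_reply - asked).total_seconds() / 60,
        "time_to_resolution_min": (resolved - asked).total_seconds() / 60,
    }


# Hypothetical thread pulled from a chat channel's history
thread = {
    "asked": "2026-02-09T10:00:00",
    "first_reply": "2026-02-09T10:12:00",
    "resolved": "2026-02-09T11:30:00",
}
print(slo_metrics(thread))  # {'time_to_reply_min': 12.0, 'time_to_resolution_min': 90.0}
```

Aggregated over weeks, these two numbers tell you whether rotations, channel triggers, or the bot are actually moving the needle.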

So we talk about onboarding costs, for example. Onboarding costs can be for a new employee. They can also be for an existing employee trying to use an API developed by a different team, or switching teams — you can imagine any number of scenarios, right? Or it could be the cost of onboarding your manager onto what you're doing. We've actually been talking a lot about the DORA key metrics as a framework to guide and uplevel a lot of this type of thing.

So: how long does it take to get all the reviews needed for a technical scoping document? How long does it take the engineer to get the coding done? Turns out, really fast — we don't need AI for that. How long does it take to get a code review? How many rounds of review did it go through? And again, not to penalize anyone if the answer is a lot, but to understand: oh, this team's code review response time is terrible because they have one person responsible for reviewing every code review. We didn't realize that — nobody had recognized it. Right.
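The single-reviewer bottleneck Tara describes falls out of the data once you group review turnaround by reviewer: one person with a large share of all reviews and a high median wait. A sketch under assumed, simplified data — the record shape and thresholds are hypothetical:

```python
from collections import defaultdict
from statistics import median


def review_bottlenecks(reviews, threshold_hours=24):
    """Flag reviewers whose median turnaround exceeds the threshold,
    along with their share of all reviews — an enablement signal,
    not a scorecard."""
    waits = defaultdict(list)
    for r in reviews:
        waits[r["reviewer"]].append(r["hours_to_review"])
    total = sum(len(hours) for hours in waits.values())
    return [
        {"reviewer": who, "share": len(hours) / total, "median_hours": median(hours)}
        for who, hours in waits.items()
        if median(hours) > threshold_hours
    ]


# Hypothetical review records for one team
reviews = [
    {"reviewer": "alice", "hours_to_review": 40},
    {"reviewer": "alice", "hours_to_review": 55},
    {"reviewer": "alice", "hours_to_review": 30},
    {"reviewer": "bob", "hours_to_review": 2},
]
print(review_bottlenecks(reviews))
```

Here one reviewer handles 75% of reviews with a median wait well past a day — exactly the "we didn't realize that" finding, which the fix is to spread review load, not to blame the reviewer.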

And — I think this is a really important nuance — it's about using metrics from an enablement standpoint. How long does the CI take to compile? How long do the tests take to complete? Okay, the build team needs to go figure out why compilation is taking way too long; let's go fix that. Oh, now it's faster. How many tests are we running? Do we actually need to run all of them? Well, how do we know? I think if I were to sum up developer productivity, it's the constant search for, and providing of, critical information in whatever form that takes. All the tools we build, all the documentation improvements, all the processes we have — it's information flow at the end of the day, and it can be measured in many cases.

Rebecca: I think that's great to be thinking about. Our three pillars at Swarmia are business outcomes, developer productivity, and developer experience — I'm pretty sure I could map those to yours pretty easily. But investing in developer experience — I think it's interesting. There was this moment in the late 2010s where developer experience was all the rage, because you wanted to hire, grow, retain, right? That was your whole job: hire, grow, retain. And now, not so much with the hiring, and maybe not so much with the retaining.

Tara: For now.

Rebecca: I think that developer — right. But I think that with that change, developer experience kind of lost the attention it was getting. And developer experience is not about free beer and free lunches.

Tara: Right, it absolutely is not. And for companies that dismiss that — I would argue it's even more important now, because not all engineers are equal, nor fungible, as much as the temptation might be to think of them that way. The best people will never have problems getting jobs, no matter the circumstances. I'm old, you're old — I think you're about as old as me, right? So we remember the dot-com bust, we remember the 2008 bust, and now we're going to put COVID in that bucket. And for the best people, it's very rare not to be able to immediately turn around and get another job. Right.

And so having the right kind of environment to incentivize people is even more critical, because now they have more options. And, you know — I don't know if you saw it, I had a little thing — someone was saying, oh, developers spend too much time worrying about things like Kubernetes when they should be focused on their company's products. They shouldn't be worrying about that — we could have solved this if everybody had just used CloudFront. Or not CloudFront — whatever the pre-Kubernetes orchestration was; maybe it was CloudFront, I forget. In any case, it's like, sure, that's true. Right.

But the fact of the matter is, the thing that makes a good developer a good developer is curiosity, right? When Google came out with Kubernetes and then made a deal with the Linux Foundation and started the CNCF, don't you think that's the reason there are 9 bazillion Kubernetes offshoot projects? Because everybody was curious about it, right?

I call that a great piece of marketing. Is it the best orchestration tool ever? Probably not — there are arguments that could be made. But everybody was curious about it, and because of how they did it, everybody could participate in it. And that's what developers want. They want to be curious. They want to be supported in their curiosity. And they want to know that they can be creative, right, in their own nerdy little ways. That's what makes a developer. Right.

And to say, well, that was silly, they shouldn't have done that, Kubernetes shouldn't have won, the other thing should have — I'm like, well, maybe? But one could argue nobody paid Linus Torvalds to go invent Linux, right? He did that because he was tired of having to pay Unix licensing fees to whomever — AT&T or whatever. From irritation or curiosity comes innovation in the tech world. And because the tech world is digital, not physical, that curiosity can be expressed in ways that are so much more profound.

Rebecca: Yeah. On the one hand, there are lots of people in the industry looking for jobs — that is true, and I think leaders are maybe taking advantage of that sometimes. And exactly as you said, if you're good, you make a few phone calls and you have another job. I think we are definitely still there.

Tara: I think so. I mean, I don't want to disparage — I know there are many people who have struggled, and it's not because they aren't good. I don't want to say that. But maybe they don't have the benefit of a reputation or the right network, or their particular skill set is not as in demand, and so they're having to go figure things out.

But I also have to wonder — I think the thing that still puts me in the AI skeptic category, and why I will continue to wear that badge even though I'm less skeptical than I used to be and I'm responsible for AI's success within my company, is that the inaugural themes that came out with AI were handled poorly by many people. Oh, you don't need all of these people — like the things I saw in San Francisco, those billboards: you can lay off, you don't need customer service anymore, because now we have these AI agents, right?

And I'm like, it feels ethically questionable to celebrate a world where we lay off thousands of people from their jobs.

Rebecca: Absent another way for them to live.

Tara: Right. And I think it's exacerbated by people saying, oh, well, eventually nobody will have to work and we'll have, you know, universal income. And I'm like: in this country, you think that's actually going to be a thing? You can't even agree on single-payer health care. And with that, there's clearly also evidence that outside the tech industry, the appetite for AI is low. There was that great piece about Microsoft finally admitting nobody wants Copilot inside Windows, right? Nobody wants it, because human interaction is fundamental to who we are.

Which is why I love what I do, because what I do is focus mostly on the interaction of humans, with the technology being an implementation detail. It's important, but the humans, to me, are more important. And I think that's the premise of developer productivity: it's developers being productive.

Rebecca: Well, that seems like a great note to end on — like you had a little stump speech ready, and we can call it there. Thank you so much. I had so much more to talk to you about; we could easily do another hour on this. Yes, please.

Tara: I think we're at another inflection point as an industry, and it's always interesting to see how they turn out — and they never turn out the way I think they will. But — I dunno.

Rebecca: Yeah. I mean, I have my own predictions, but they haven't happened yet. We'll see — give them a year. But I'm not a skeptic. I am, you know, a realistic optimist. How's that?

Tara: Alright, I like that.

Rebecca: Yeah, that's what — that's what I'll say. I am a realistic optimist. And, like, this has changed how we do work.

Tara: Oh, 100%, it absolutely has.

Rebecca: But codegen as the secret magic bullet — I don't buy that at all. Some of the other stuff you're talking about, I buy a ton. I think that's where AI is going to have so much more value, in companies that are mature enough to see that value — and those tend to be companies that were already invested in developer productivity. So suddenly—

Rebecca: Anyway, well, thank you so much. It has been great to chat. And I hope we can continue the conversation. Once you start blogging again, I'll comment.

Tara: All right.

Rebecca: And that's the show. Engineering Unblocked is brought to you by Swarmia, the engineering intelligence platform trusted by some of the best software companies in the world. See you next time!