Allan shares his unique perspective on what it takes to be a successful “scaling CTO” — the type of leader who comes in at inflection points to help companies hockey-stick their growth.
Allan also offers insights on navigating the AI revolution in engineering — from how it’s changing the hiring process and accelerating junior developer growth, to using it for everything from code completion to writing performance reviews. With his background spanning from early Cisco networking to modern SaaS platforms, Allan brings a unique long-term perspective on technology evolution and what it means to build engineering organizations that can scale to serve millions of users worldwide.
Watch the episode on YouTube →
(0:00) Introductions
(1:37) Allan’s role as CTO
(6:04) Staying in touch with the reality of software engineers
(8:40) Similarities between leadership roles
(11:09) Encountering founder mentality
(13:00) The key to success as a CTO
(16:23) How Webflow uses AI
(19:37) How AI is affecting the hiring process
(21:44) Hiring juniors
(25:35) How AI is changing other roles
(27:22) Webflow’s approach to performance management
(32:32) Mitigation strategies to maintain productivity
(36:27) How Allan approaches reorgs
(39:46) Who Allan feels accountable to
(42:58) Creating a culture of accountability
Allan: I am a big proponent, as you can imagine, being part of a very prestigious technical college, I'm a very big proponent of new college grad hires. We've hired a good number of them here, and I think that being able to train some of the brightest minds on some of the newest technology to solve some of the hardest problems is just awesome to watch.
Rebecca: I’m Rebecca Murphey, and this is Engineering Unblocked. Engineering Unblocked is brought to you by Swarmia, the engineering intelligence platform that's trusted by some of the best software companies in the world, including startups like Superhuman, scale-ups like Miro, and several Fortune 500s.
On today's show, I'm talking to Allan Leinwand, the Chief Technology Officer at Webflow. Previously, he was the Senior Vice President of Engineering at Slack, CTO at ServiceNow, and CTO at Shopify, so I think we'll have a lot to talk about. So Allan, thanks so much for joining.
Allan: Thanks for having me, Rebecca.
Rebecca: Yeah, so you can do your own brief little intro here if you'd like, and just give me your elevator pitch for who you are and what you're up to these days.
Allan: Yeah, very briefly, I'm CTO here at Webflow, and I am trying to build the future of the web. So I'm having a lot of fun doing that and really enjoying my time here, building and doing all the engineering things here at Webflow.
Rebecca: And just for anyone who doesn't know, what does Webflow do?
Allan: Sure. Webflow is what we call the website experience platform. It essentially allows you to build a website, modernize that website, optimize it, do A/B testing, do analytics on it, and do localization – presenting your site in multiple languages – all powered by our own CMS, our content management system, which is sort of a database behind the scenes. And the big claim to fame for Webflow is that we are a visual-first website design tool.
Rebecca: You've been a CTO or some other kind of engineering leader at a ton of companies that people have heard of. Slack, ServiceNow, Shopify, and now Webflow. I was talking earlier with some people at work about how there are so many different archetypes of CTOs, from the co-founder who grows up to be the CTO to the CTO who comes in and helps a more mature business that is trying to grow. So what's your CTO/SVP/VP archetype?
Allan: Yeah, I mean, I think I'm lucky enough that I kind of come in to be the leader that wants to scale and grow the business. I've always had the opportunity to come into a company that is on that inflection point, ready to hockey stick up, and really help the team come and execute. That's what I really think about. I think about how can I best fit into the puzzle pieces of leadership the company already has, and how can I make myself help complete the puzzle?
I'm really challenged by trying to figure out either organizations or other technologies that I haven't played around with before. Like you said, ServiceNow, which is obviously IT workflow, Shopify – e-commerce, Slack – you know, business communications. I wasn't really an expert in those tools before I joined those companies, but I think understanding how can you take good engineering discipline, good engineering practices, and apply it to, at least in my case, very, very large-scale SaaS is kind of what I like to do.
Rebecca: It's interesting, you often see people move through very similar companies, but is there a common thread here, or is it just scaling and interesting problems? Which I also, if that's your answer, I'm sympathetic to that. Swarmia is the smallest company I've worked at, but I generally love going into places that have created enough interesting problems in their early growth. So is that kind of your thing too?
Allan: Yeah, I mean, early in my career, very early in my career, I was one of the first dozen or two dozen engineers at Cisco Systems. So I saw the scaling of this thing called the internet back in the day. And for me, being able to scale up something at global scale, it really got very addictive. I just always wanna think about how could you scale for something super huge? Scale it for millions and millions of people.
And there are people who like to scale algorithms, or people who like to scale really hard problems – really hard math problems, really hard computational problems, I assume. Those are interesting, I assume. But for me, it's always been internet scale. That's always the thing that's sort of been my drug. So, you know, I've been involved in various companies, either as an advisor, a board member, or directly working with the company. And to me, it's always about: is this company scaling? Are they at a spot where they're gonna affect millions of people?
I love the thinking of: you push a PR and literally millions of people get it the next minute. That's just so cool. Or you change a routing protocol, or you change the way an internet link is deployed, or you change a BGP setting, or you change the way a communication pattern is set up in Slack or how the sidebar changes and, magically, there's people all over the planet that see that. That's a really intriguing and, I think, really fun problem.
Rebecca: And I don't know if you still get to do this in a CTO/VP role, but when I've worked at these internet-scale companies, it has always been fun to get the local development environment up and running and go, oh my gosh, this thing from the internet is running on my computer right now, and I can see it and change it. I love that. I love that part of internet scale.
Allan: Similarly, we do local development environments here – you can stand them up in minutes – and, I'm not saying this happens every day, but I actually pushed a PR to production about two hours ago. So literally, yeah, it wasn't that big a deal.
We have a Slack channel called UX Paper Cuts, and UX Paper Cuts is all the little niggly stuff that annoys people in the product. And whenever things show up in UX Paper Cuts, I like to see whether I can handle them. Because that teaches me all sorts of various portions of the code, and it allows me to sort of test and exercise our development lifecycle.
I love, like you said, standing up a local environment, doing the build, running the tests, going through deploy cycle. I think understanding that entire workflow gives me a lot of insights into what the team is working on and also how to make it better.
Rebecca: I was gonna ask you that later on, but maybe let's talk about that now. How do you stay in touch with what it's like to be a software engineer at Webflow? Obviously, submitting a pull request is a pretty, pretty solid way to do that, but probably also not something you can do every day. So, how else do you stay in touch with what that reality is for the software engineers in your organization?
Allan: Yeah, I mean, I think three things. One is, yeah, writing PRs and getting those PRs pushed to prod is always kind of fun. And you do see the whole lifecycle of the code. That's very, very interesting. Even if it's a typo or something small – though today's wasn't a typo, it was still something interesting.
I think the second thing that I do is I chat with the code. So we use various AI tools like a lot of other people do. And if I see a problem, or I see a scalability issue, or a customer wants an X, Y, Z feature built on a particular pattern, I'll go chat with our code and I'll ask it, “How do we do this? How does this scale up? How does this particular file affect performance? Is there some sort of performance limitation or some sort of scalability thing I should be thinking about in this part of the code?” So I go chat with the code using LLMs all the time.
And the third thing I do is look at a lot of interesting metrics – Swarmia provides some of those tools. We also have an engineering managers meeting: everyone in engineering leadership gets together for a half hour once a week – sometimes we cut it a little shorter if there's not much going on – and we go through the top-line metrics, things like PRs pushed per day.

To me, a really key one is PR review time. I wanna make sure our PR review time is four hours or less, sometimes way less. And I really try to look at each individual team and understand how those metrics affect their daily lives. And you know, there could be a team whose PR review time goes way up, and it could be that someone's on vacation or there's a time zone difference or something like that. But I think it's about understanding the lifecycle of what the engineers are doing, how that lifecycle is affected and grows over time, and staying grounded, at a grassroots level, in what the teams are doing.
I think as a CTO, if you end up at a higher level and not being able to technically talk about what the teams are working on, or why the problems are hard, or why the technology stack is a certain way, I think you kinda lose credibility. And I think it's important to maintain credibility with the team, both up and down the stack from me.
Rebecca: I think it's great that you're so intentional about that understanding because, like you said, a leader who is disconnected from that reality can be ineffective.
I’m guessing this is the first-ish job where you've had a tool where you could just type and ask questions at the code, which is quite a change in how our whole world works. But, besides that, what's been substantially different for you in these various leadership roles? We've talked about what's been the same, but what have you kind of learned and maybe brought with you as you have accumulated this experience?
Allan: I'm gonna turn it back on you and answer in a different way, if that's okay.
Rebecca: Absolutely.
Allan: I think the thing that is the same about all these roles that I think is super exciting to me is having a founding team and CEO that passionately loves their product and passionately loves their customer.
I mean, again, I'm not the engineering expert on almost any company's product that I've shown up to help lead. But having someone, whether it was Tobi at Shopify, Frank Slootman at ServiceNow, or Cal and Stewart at Slack. I mean, these people lived, breathed, spoke the language of the product and the customer 24 hours a day, seven days a week, and it's super inspiring to me.
Here we have Vlad and Bryant and Sergie who helped found the company here at Webflow, and they're all still involved on a daily basis. Every day, they're involved and they're in the product and they're helping us set direction and they're helping us build the right technology and they're helping guide me in terms of what the teams need to execute on and trends they’re seeing in the market.
And I really think that having that founder mentality and that push from the founders to do and execute their vision is super important. And I think that's something that's similar across all the different roles I've worked at. I haven't worked at a company with a CEO that wasn't that in quite some time. That's what's similar, I think, and that's what I really enjoy.
Rebecca: I'm curious how you learned that. I was looking – you go all the way back to Hewlett-Packard in the late eighties, is that right? You still aren't as cool as the person I interviewed who worked on some show on Nickelodeon in the mid-eighties.
Allan: Yeah, I’m not that cool.
Rebecca: Yeah, that was pretty cool. And now he’s doing internet things, of course.
HP was a very different company than Cisco, and Cisco when you're engineer number 12 is a very different company from what it is today. But when did you first get to encounter that founder mentality?
Allan: Yeah. I mean, I joined Cisco and I think I was, like, I dunno, in the first 50 engineers. I don't wanna overstate where I joined. But I met Leonard and Sandy – Leonard Bosack and Sandy Lerner, the founders of Cisco – and that was my first encounter with it. I remember they both interviewed me for the job. It was just inspiring to me.
You're right, I was at HP outta college and HP was a big organization at the time, run by somebody who I probably never met. I'm sure I never met. And then I met Len and Sandy and they had this vision and this drive and this technical knowledge and this passion for what they were doing. It just got intoxicating. So I think that that's really key. And I think once I kind of got that drug injected into my DNA, it kind of took hold and changed me a little bit.
And I did a startup along the way. A company called Digital Island in the late nineties through the .com. And again, the founder there is a gentleman named Ron Higgins. He believed in this product. He knew what it was. He had a vision. He just needed someone to come in and help him execute. I was lucky enough to be involved in that. And that's really what it boils down to, to me, is how can I take what I think I do pretty well, which is engineering execution, and apply it to whatever the company is focused on?
In my particular world, it's always been SaaS or internet or cloud, if you wanna call that software, as we used to call it. But I think that that's kind of my sweet spot. If someone said “Allan, we want you to come in and run an engineering team working on a chip design,” that wouldn't be me.
Rebecca: Might not be you.
Allan: Might not be me. I know my own skillset, Rebecca, and I also know that that’s not one of them!
Rebecca: So this is a somewhat interesting career path to be this kind of serial CTO, right? Or serial engineering leader of some sort, senior engineering leader of some sort. And what would you tell somebody today who wants to be in that kind of role? What have you had to learn in order to be successful serially in these roles?
Allan: I think the thing that I've done is I really love to dig into the technology stack. Early in my career, I made a decision that I was gonna be really deep in something, and I think that's really important. I became really deep in networking – like deep, deep, down in the bits and bytes of networking. Manchester encoding, frame formats, CRCs, all sorts of deep stuff in networking.
But what I found as I moved from networking up the stack into the software layers, into the comms layers, into the application layers, is that a lot of those basic primitives sort of maintained. You know, I think they say there are like eight algorithms in CS that just keep repeating over and over and over again. And once I understood those algorithms and once I understood how things worked, I could apply those patterns over and over and over again to different technology stacks.
I mean, I'll give you an example, and this is gonna be a really bad analogy, and it's not technically perfect, so I'm gonna feel bad about saying it, but it's close.
In networking, there's something called the Shortest Path First, or Dijkstra, algorithm that finds the shortest path between any two nodes in the network graph. Here at Webflow, we have this thing called the DOM tree – the data structure that represents a website – and we're worried about how to optimize, you know, re-rendering of particular nodes in the DOM tree when browsers refresh, mouse clicks occur, et cetera.
It's the same algorithm. I mean, it's the same sort of Dijkstra algorithm – figure out the shortest path between any two points on a graph – applied to different disciplines and different software stacks. One happens to be Manchester encoding and OSPF and BGP in networking, and the other happens to be DOM tree traversal in the browser. So it's just different, but kind of the same.
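For readers who want to see the reuse Allan is pointing at, here's a minimal, generic sketch of Dijkstra's shortest-path-first algorithm in Python – not Webflow's code, just the textbook pattern. The same function works whether the nodes are routers in an OSPF topology or elements in a DOM-like tree:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path-first (Dijkstra) over a weighted graph.

    `graph` maps each node to a list of (neighbor, cost) pairs --
    the same shape whether the nodes are routers or DOM elements.
    Returns a dict of minimum cost from `source` to each reachable node.
    """
    dist = {source: 0}
    heap = [(0, source)]  # priority queue of (cost-so-far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist
```

In a routing context the edge costs would be link metrics; in a rendering context they might be traversal or re-render costs. The algorithm doesn't care, which is exactly the "same pattern, different stack" point.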
So I guess my advice, back to your original question, would be: get in deep with something – something you love, something you can find patterns in, something where you can take that pattern and repeat it up and down the technology stack – and look for those patterns. Being able to apply those patterns in meaningful ways has been super useful for me. Maybe that's what I've been doing.
Rebecca: So I grew up in front-end land, so exactly the opposite of, I don’t know if it can get much more opposite. Although eventually in front-end land, you do have to understand more about networking than you might in other disciplines. But, yeah, I also say it's all the same problem, once you zoom out, or the same eight problems or whatever. And I think the challenge is then to go deep and then come back up.
Because it took me a long time to realize that I was lacking confidence because I was so deep in JavaScript. I was like, “well, this is all I can do. This is all I know.” And when I finally came up for air, I was like, “Oh, it's all the same.”
Allan: Yeah. It's all ones and zeros eventually.
Rebecca: Eventually. I wanna zoom in on Webflow a little bit because, of course, that's where you are today. And, we talked a little bit too about the arrival of AI and how you're using that. So I know Webflow is growing, and we’re also in this age of AI. And, like I said, not a lot of leaders have gotten to work with those two facts. In fact, lots of leaders right now in the age of AI are seeing their teams decline or stabilize in size.
And so I'm really curious to hear how you're thinking about that growth? The growth, the hiring, the organization of teams, and all of it, in the context of we have this semi-magical tool available to us. I know it's not magic – still ones and zeros – but we have this, it can feel pretty magical, tool that didn't exist before. So, how's that influencing how you're thinking about hiring?
Allan: Yeah, so, fortunately, here at Webflow, we have such a big market and such a big problem space that we haven't written all the code for yet. We're continuing to grow into various areas. If I think about Webflow two years ago, two and a half years ago, when I got here, it was basically a website builder, our visual website builder, but we expanded into what's called website optimization, personalization, A/B tests. We've expanded into localization, which is language translation. We expanded into analytics. We're expanding into other markets as well. And I think that has allowed us to need more engineers to be able to write that code.
With that said, you know, we are definitely continuing to hire, and we see AI as a way to augment what engineers do in a way that makes 'em more productive. So I don't see AI necessarily replacing engineers. I do see AI as making it easier for everyone to potentially be an engineer. We have product people now that are vibe coding their ways into POCs instead of writing PRDs, which is amazing. We have people in various parts of the organization, whether it's our training group or our customer support group that are trying out our products that use AI in a way that allows them to test and get better coverage.
We do have AI tools that use LLMs to tab-to-code and tab-to-complete code, and we're seeing very nice productivity from those tools. What we've also found is that the most skilled engineers are becoming better code reviewers, because instead of spending time thinking about “how do I write this code?” – because you can almost prompt your way to that code, or tab to complete it – you then need to figure out, “did this really write anything that's good, that would be performant, or did it just make a mess?” And sometimes you get both. Sometimes you get really awesome performant code, and sometimes you kind of get a mess. And, you know, a big portion of our product is in a big monorepo, and that is sort of hard when you don't have the right context windows, especially for newer engineers.
So, you know, on the question of hiring: we're definitely hiring, and we're definitely using AI to leverage our teams and make them more productive. We're seeing those productivity gains pretty markedly. But at the same time, we don't think AI replaces engineers. We think AI makes everyone able to help enhance the code, and even to use code as a tool. So that's how we're seeing it today.
Rebecca: How is it affecting the hiring process itself? I've been out of the hiring business since all of this happened, but I read about it all the time. About how this is really changing a lot of what used to be standard practice.
Allan: I’m not going to say we haven’t had people come in that are using AI. We can clearly tell when they are. I'll tell you, we have had hiring discussions go almost all the way to the offer, until a quick little background LinkedIn check kind of went, “something seems amiss here,” and we figured out that the person was maybe not who they said they were.
We are being far more cautious in terms of how we hire. We've actually changed our problems, and we change 'em pretty regularly. So, kind of like a good professor, we're not handing out the same exam every single time. We're changing the exam over and over again. And then we do a lot of live coding sessions with prospects as well. So that's how we've seen it change the hiring process.
We also use – which is really nice – an AI notetaker during the interviews, if the interviewee permits it. And that allows the interviewer to really focus on the content of the discussion, as opposed to furiously trying to type notes along the way. AI notetakers, I think, are great. They do a nice job of summarizing, and they also let you stay focused on the right work.
Rebecca: Yeah. Oh my gosh. Back when I was in the three interviews a day land, I would've loved an AI note taker.
Allan: Yeah, it's really, really useful to help us summarize the key points. If we're doing a code review or an architecture review with the prospect, or a systems design discussion, we're spending time on that, not on taking notes. Let’s face it, as humans, we think we multitask but we really don’t. So it's very, very hard to do both.
Rebecca: Oh gosh. I would love that in my engineering manager interviews as well, because that's so much thinking and typing and talking all at the same time. How are you thinking about hiring juniors? That's been something that's been on my mind a lot. And I also saw that you're on the board of trustees at Harvey Mudd. Is that right?
Allan: Correct. Correct.
Rebecca: So I'm also interested, from that perspective, how are you thinking about what is the training required for people to be successful today versus maybe five years ago?
Allan: Yeah, we actually put together a set of new hire trainings here at Webflow for junior engineers, to get them up to speed faster. It involves a lot more systems diagrams, a lot more hands-on coding practice, and, again, some of the AI tools that let them chat with the code and come up to speed faster.
I am a big proponent, as you can imagine, being part of a very prestigious technical college, I'm a very big proponent of new college grad hires. We've hired a good number of them here, and I think that being able to train some of the brightest minds on some of the newest technology to solve some of the hardest problems is just awesome to watch.
We have a number of different folks we've hired right outta college where, you know, the typical career ladder expectation is 18 months or so to the next level. Our new college grads generally get to the next level within the first year. So we're really seeing them elevate their careers, as well as learn a lot and just be genuinely curious.
So I love seeing that, and I love what hiring new college grads does for the diversity of the team. I love what it brings to the energy of the team. You can't hire only new college grads – you have to have folks at the staff level or the senior staff level to help mentor them along. And we make that part of the overall way we archetype each team: what the load on the team is, what the balance of the different career levels on the team is. We look at that pretty regularly. But yeah, new college hires? Bring them on. I love them.
Rebecca: I can imagine. I've worked with some incredible new college hires. And also, you can try to hire just new college grads – it just doesn't go well. It's possible. I’ve seen it.
But yeah, I think about when I've been managing new grads and it's simple, well-defined tasks in a pretty limited scope. But AI is decent at that now, the kinds of tasks that you might give to a new grad. So, how is their job changing versus maybe what that new grad job looked like five years ago?
Allan: I think AI is allowing them to come up to speed faster. You're right – they come up to speed faster. They're able to take those simple bug fixes, things that come in via support, things that you think junior engineers would work on, and get context a lot faster than they would've in the past. And with that context, I think they're actually moving up the career ladder much faster. That's what I've seen.
I've seen new college grads come in, be able to chat with the code, get context on the code, fix bugs in the code, and understand what they're doing – work you might expect from someone a year or so into the role. They're able to move up the stack because they can now go look at the architecture, or take a look at how different systems work together. You think about new college grads, as you said, Rebecca, as being on a particular team with a particular set of tasks.
But I've noticed, with AI and being able to use those tools, all of a sudden they're going on to the next adjacent team, and the next one after that, and seeing how their product or their code surface area touches multiple products. It's really been remarkable, and that's, I think, the power of AI: the ability to take what humans already have and amplify it – make it easier and faster to get context, and to generate code that's useful and can then serve as a model for the next set of tasks.
Rebecca: How's it changing other roles? And I mean, you already talked a little bit about how product, and I imagine also UX, is able to vibe code their way to some pretty decent prototypes. How's it changing the staff engineer role, or is that still the same?
Allan: So when we think about staff engineers, we think about folks that have demonstrated really deep domain, technical and functional expertise. These are folks that implemented new functionality across parts of the Webflow stack. They're continually mentoring and providing guidance to those junior engineers, and they're really changing architectural solutions for teams across all of Webflow.
The other thing we think about at the staff level – something I don't think the junior engineers can do and AI can't solve – is a category we have for all of our folks that we call engineering citizenship. And that is: how do you make the entire engineering team at Webflow better? How do you mentor and guide engineers? How do you participate in activities, whether it's style guides or coding standards or architectural discussions, open sourcing some of your code, publishing technical papers, working on patents, or speaking at conferences? How do you elevate all of Webflow engineering as a staff engineer? It’s a pretty important thing.
I tend to think of this as “I shape how this is done from an engineering perspective,” as opposed to “I'm learning how to build that shape.” You know, we borrowed that from a classic engineering archetype way of looking at things.
And staff engineers, from my point of view, are really making high-quality decisions on their own. They're self-sufficient. I don't expect that from a new college grad. And when I think through that, there's just a lot that I really like to see out of staff engineers. And we have a good chunk of staff engineers here at Webflow that are really able to bring those new college grads in and really continue to scale the org in a really, really fundamental way, which is pretty awesome.
Rebecca: This is an unpopular topic, but it's a much more present topic than it was in the last few years, based on the people I'm talking to. How do you think about performance management, and has that changed at all in the last few years, as, you know, the cost of doing business has gotten higher?
Allan: Thinking back quickly over the last two or three years, or even four years, we haven't made any real big change in performance management. I think we are looking at different metrics, which might give us insights into people's performance, but, you know, we don't look at metrics that come from Swarmia and other tools on an individual engineer basis. We think that's not the right way to do it. We're not looking at lines of code per engineer, PRs per engineer, anything like that.
We're looking in aggregate at what we're doing: how in aggregate is this team doing, how in aggregate is this pillar – our name for a group of teams – doing. And when I think about performance management in the world of AI, the only thing I can think of, honestly, is that it's helped me write my performance reviews a lot faster. I mean, I'll just give you an example.
We just went through a performance review cycle here, and we have a set of core competencies that we define for each level of the engineering career ladder. And we have a set of company-wide core behaviors we expect people to do. I literally made a GPT, I fed in the core behaviors, I fed in the career ladder. And then for each person, I would say something along the lines of “Write a performance review for Rebecca. Make it 100 words or less. Here's three bullets about what you did. Use the core competencies, use the career ladder.” And it would just pump out something that was probably 85, 90% close enough. And I would edit, maybe add a little bit of Allan flair to it, or Allan style to it, and submit it. And I think I got through my performance reviews, instead of maybe half hour or 45 minutes per person, I wanna say it was ten minutes per person. So it was great. It was great.
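As a rough illustration of the workflow Allan describes – the actual GPT, competencies, and wording he used aren't public, so every name and field below is hypothetical – the prompt could be templated like this:

```python
def review_prompt(name, bullets, competencies, ladder_level):
    """Compose a performance-review drafting prompt for an LLM.

    Hypothetical template: the real core behaviors and career ladder
    would be fed into the model as context, as Allan describes.
    """
    bullet_text = "\n".join(f"- {b}" for b in bullets)
    return (
        f"Write a performance review for {name}. "
        "Make it 100 words or less.\n"
        f"Career ladder level: {ladder_level}\n"
        f"Core competencies: {', '.join(competencies)}\n"
        f"Highlights from this cycle:\n{bullet_text}"
    )
```

The point of templating it is consistency: every review draft is grounded in the same competencies and ladder definitions, and the manager's time goes into editing the 85–90% draft rather than writing from scratch.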
Rebecca: That would've greatly reduced the amount of scotch I drank during performance review season.
Allan: I’m not so sure I’d wanna get a performance review after my manager has been drinking a bunch of scotch? Maybe! Maybe it'd be better? Maybe.
Rebecca: Maybe worse! Uh, no. You drink the scotch after you've spent time on performance reviews.
Allan: Oh, I see. So that's how performance reviews have changed. The things we evaluate performance on – the characteristics we're looking for – haven't really changed in the world of AI. But writing those reviews? Whoa. Way simpler. The point of the written performance review is really not the document itself; it's the conversation you have with the person. So I look at the performance review doc as a catalyst for a conversation.
Rebecca: And hopefully a conversation that you've been having all along, and not just suddenly starting today. Yeah. I wanna come back to PRs per dev as a metric. You just mentioned that it's not something that you use on an individual level, which is fantastic. I have seen it misused at that individual level, or used to try to find the inactive engineers, right? So it doesn't quite get down to the individual level, but it gets real close.
I'm curious how do you use that? Because of that experience I had, I'm a little twitchy about it as a metric, but I also think that it's one of the best lagging metrics that we have for the overall capacity of the engineering organization. If we assume that all pull requests are good, which is a bold assumption, but if we assume all pull requests are good and driving toward business outcomes, then we can say that the number of pull requests that we're creating per engineer is a lagging metric of our capacity.
Allan: I agree with that. A lagging indicator, looked at in aggregate, is what I think… that's right.
Rebecca: Yeah. How do you talk about that metric up, down and out?
Allan: Yeah. I mean, we don't talk about it too much down, to be honest with you, primarily because I ask each of the engineering managers to look at their team and look at those metrics in aggregate. And then I ask the directors on top of those to look at their entire pillar, or the senior director or VP to look at their entire pillars in aggregate. And, you know, to me, the absolute number almost isn't what matters; what matters is the slope of the line. If the slope of the line is down, maybe you lost headcount, maybe it was a vacation week, maybe you're not doing as many PRs as you thought because you're in an architectural rewrite, or maybe there's an issue there we should look at. So to me, it's the slope of the line when I'm talking into the org, as opposed to absolute numbers.
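The "slope of the line" Allan watches is just a trend fit over a weekly aggregate. A minimal sketch, with entirely made-up weekly PR totals; the point is the sign of the trend, not the absolute level:

```python
def slope(values):
    """Least-squares slope of an evenly spaced weekly series,
    e.g. PRs merged per week for a team or pillar."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

weekly_prs = [120, 118, 110, 104, 97, 95]  # hypothetical team totals
trend = slope(weekly_prs)  # negative slope: worth a conversation, not a conclusion
```

A downward slope here is only the prompt for the "tell me about that" question; as Allan notes, vacations, rewrites, or headcount changes can all be the legitimate explanation.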
When I'm talking up to the board, as you can imagine, we have VCs on our board, and the question is always, “are you using AI enough” and “are you as productive as you should be for your headcount?” And at that particular point, it's more of what you mentioned: it's PRs or features released per dev per quarter. So, again, it's not very granular. It's like: quarterly board meeting, you've hired another 5% of the org. Is your PR rate up roughly the same amount? And if not, why? That's sort of the conversation.
Rebecca: I have this hypothesis that if you add five engineers, that number is going to tend not to go up as much as you might like it to. It's not that it's gonna not go up at all. But whereas adding five engineers when you have 50, you might get almost five engineers' worth of output out of that, when you add five engineers to 500, it may be a different story.
So first, do you agree that that line is going to tend downwards as the code grows and as the number of people grows? And what are you doing to make it not? What are your mitigation strategies, or how do you think about what you're doing to make sure that number at least stays approximately where it is as you're adding engineers?
Allan: Yeah. I'm not sure I agree, to be honest with you, but maybe.
Rebecca: Well, tell me.
Allan: Well, I probably do agree, specifically if you're talking about engineers needing time to ramp up. So it's not like you add five engineers and, magically, the PRs jump, because you do need to ramp up. It also depends on the shape of that team. If you're adding three junior engineers, as opposed to adding three senior staff engineers, that's also gonna change things. So I guess “it depends” is my short answer for why I disagree.
In terms of being able to think through what I look at… again, I'm looking at your basic metrics. To me, the tip of the iceberg is PR review time. I really focus in on that, because if a team is able to do quick PR reviews for themselves, then that tells me that they have the right expertise and the right context to be able to move the PRs through. And nothing slows development teams down, in my opinion, like a bunch of PRs that are waiting on review… some are in a different time zone, some are in a different team, and everything just sort of grinds to a halt.
So, to me, the leading indicator is PR review time, and then you get into the more classic DORA metrics that I tend to look at. But PR review time is the one that I focus in on for each of the organizations within our team.
Rebecca: And there's so much that can impact that, like you said. It can be time zones; crossing teams is one of the biggest causes. Because generally, if you're crossing time zones, you might also be crossing teams.
Allan: It could be PRs that are really big, when people shouldn't be writing big PRs, or maybe they should be writing big PRs, depending on the context. There are lots of things that affect it, but again, it's the slope of the line that matters. If the PR review time is constantly getting slower and slower and slower, I might poke that team and go, “tell me about that.” And there might be totally legitimate reasons, but there might not be, also.
Rebecca: Right. Yeah. I wanna poke at this a little bit more, because I love the idea of this curve, and bending that curve back upwards is really fascinating to me. And so, if you don't agree, that's also fascinating to me.
To me, it's just kind of like graph theory: the more nodes you have, the more complicated it's gonna be to get messages between them, and the more effort it's gonna take. It sounds like, though, you're doing a lot of good things to mitigate that. So maybe my claim is: if you weren't watching PR cycle time, you might see PRs per dev start to decline.
Allan: I can see, as the org gets bigger, if you're not looking at those metrics, or you're not educating or communicating to the team which metrics are important to keep velocity up, then yes, I could see that happening. But if you have people aligned, both top-down and bottom-up, on what it means to be a high-velocity organization and which leading indicators can help us understand where that velocity may or may not be slowing down, at least as a starting point, again, for the conversation, I think that's the right way to approach it.
Rebecca: Yeah, and it sounds like you're doing a lot of the correct mitigations. I'm curious, have you ever had to reach for one mitigation in particular, for, let's say, cross-team pull requests driving that PR review time up? One mitigation is a reorg, right? Maybe you're not structured right. What levers have you had to pull to keep that number on track, maybe here, or maybe in your past roles?
Allan: I've done a lot of reorgs in my life. How's that? A lot of reorgs. And I think you're right. I think the key thing that you start to see is when there's organizational tension and teams that aren't necessarily aligned but need to be aligned.
I think back… I'm not gonna give details, because it'll be very obvious what I'm talking about. But I remember, earlier in my career, there was a particular feature set which had a front-end component and a back-end component. The front-end component shipped their feature, and it worked great, and they thought it was fantastic. What they didn't know is that, the day they shipped, the back-end team was doing an upgrade to that entire piece of infrastructure. So they shipped the front-end component and all of a sudden it started erroring out and having bugs. No one could use it. They couldn't figure out why. It turned out the back-end team, which didn't know what the front-end team was doing, literally decided to upgrade the key piece of software that the entire feature set ran on that day.
I put those teams together and said, “now you own it top to bottom,” and there wasn't that problem going forward. So I think there's something about owning a feature from, let's call it the front end, all the way through the infrastructure, not down to the AWS level, but to the infrastructure-primitive level. Being able to trace a thread from where the user intersects the feature, as it goes through the front end, the middleware, any stacks you have to go through, into storage, networking, all the way down the stack. Being able to understand the dependency tree of that, and to have individual owners for that product own that dependency tree, is super powerful. So when I think about reorgs, I always think about it this way.
What is the strategy we're trying to get done? What is the strategic thing we need to change? Like you said, maybe the team isn't executing. Maybe there's a disconnect. Maybe there's two teams that used to be far apart, but now depend on each other a lot. So strategically, they should be coming together.
Then the next thing I think about is: what's the organizational structure that would achieve that strategy? And I don't put names in boxes. This is where people get tempted. They wanna immediately start with, “well, so-and-so goes here.” Don't do that. Just literally put it on a whiteboard. What is the strategy you're trying to achieve? We're trying to make this entire product's front end and back end work together. What's the organizational structure? Maybe it's one EM, maybe it's two EMs, maybe it's an org structure that has a couple of different layers.
And the third thing you do is put the names in the boxes. But you have to ask: what are we trying to get done? What's the right organization to allow that to get done? And then you say, “okay, if we were to staff this today, who would we put in those boxes, or would we go outside?” That's how I think about org structures. When I have to do a reorg, that's the mental process I go through: what's the strategy I want to get done? What is the right org structure to do it? Is it two orgs that are co-dependent on each other? Is it one org? Is it a tier of orgs? What would actually work best? And then I start putting names in boxes.
Rebecca: So in your experience, especially in your current experience, because the software world has changed since 1988, right? Just a little bit. But in your current experience, what do you feel like or know that you are accountable for in your role? And who are you accountable to?
Allan: I think in all the roles I've had, the past couple especially, I feel most accountable to our customers. And that sounds sort of cliche, but it's actually true. I really want to see the product through the customer's eyes, which is why I'm in there fixing paper cuts, because it just drives me crazy when I know customers are annoyed. So I feel I'm accountable to our customers, and I feel I'm accountable in three ways.
I have to make sure the product is up and available, 'cause with a SaaS product, if you're not available, the rest kind of doesn't matter. You have to make sure it's secure. I don't wanna leak customer data at any level, and I feel very accountable to make sure that our customer data is secure. And the last one is, I feel accountable to make sure it's performant.
So I wanna make sure that the product is as performant as possible on the major user actions. We measure P50, P80, P90, and P99 for most of those actions in the product, and I've done that throughout my career, just so I can understand common user behavior.
So when I think about accountability, obviously I'm accountable to the customer to make sure it's up, secure and performant. Internally, I'm accountable to the teams and making sure they have the resources and the tools and the infrastructure and the developer toolkits and everything they need in order to be successful. And of course, I'm accountable up to our CEO and our board in terms of delivering key milestones.
Like a lot of companies, we run quarterly OKRs and I read those OKRs every week. I make sure we're all holding ourselves accountable to them. And I'm also one of these people that's very rigorous around OKRs. OKRs are not yellow, they're green or red in my mind. So you either hit it or you didn't.
And I make sure every KR is very, very measurable. If we say we're gonna change feature X, Y, Z to be faster, that's a horrible KR. I wanna ask: faster means what? And I sometimes get back, “we'll make it 20% faster.” Okay, well, then you must know where you're starting from today. So why don't you give me a number? “Okay, great, I'm gonna go from a thousand milliseconds to 800 milliseconds, to make the math easy.” Great. So the KR is: this particular feature has to execute in 800 milliseconds at P90 at the end of the quarter? “Yes.” Okay, got it.
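The green-or-red, never-yellow discipline Allan describes reduces each KR to a single comparison at quarter end. A minimal sketch using his 1000ms-to-800ms example; the numbers are his, the function name is mine:

```python
def kr_status(measured_p90_ms, target_p90_ms):
    """Binary OKR grading: the KR either hit its number or it didn't.
    No yellow. Mirrors Allan's example of a 1000ms feature with an
    800ms P90 target at the end of the quarter."""
    return "green" if measured_p90_ms <= target_p90_ms else "red"

# Hypothetical quarter-end readings against the 800ms target.
q_end = kr_status(790, 800)   # landed under target
q_miss = kr_status(812, 800)  # close, but a miss is a miss
```

The precision is the point: once the KR is "800ms at P90," there's no room for the ambiguity that, as he says, lets teams stray.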
So I hold myself very accountable to being very clear and very precise in what we are asking the teams to do. I think it's where you have ambiguous goals or things that are not well articulated that teams tend to stray from what we need them to do, and I think being able to be very clear allows me to hold them accountable and them hold me accountable to do what we’re supposed to do.
Rebecca: You just said a lot of cultural things that are not strictly, I don't wanna say typical, but you can't walk into a software shop and expect all those things you just said to be true, right?
Allan: Correct.
Rebecca: So, how are you creating that culture of accountability, especially to the customers, so that it's not just you, but how do you pass those values onto the rest of the organization so they can be appreciated in your absence?
Allan: Yeah. We have channels in Slack directly with our customers. We get input directly from the customers. I get engineering managers and even engineers directly involved in those discussions where needed. I also think that, as a company culture, we talk about our KRs quarterly at the entire company level. We talk about our KRs with our engineering leaders every two weeks. So we hold each other accountable for that.
And the other thing I do, culturally, is I just have a monthly update where I record a quick video, or I'll literally type in a Slack message and remind people, “these are our priorities. These are what we're doing, this is why we're doing it, and here's progress to date.”
And when I do those updates, I always do three things from last month that I wanna bring to everyone's attention, and three things I'm focused on next month. It's always three and three. I just force myself to constrain the message to be focused because I think the more you give people focus and the more you give them space to focus, and then you tell 'em the priorities, they're able to execute.
Rebecca: How are you able to come up with those lists in a way that resonates with your whole engineering team?
Allan: I mean, literally, it's top of mind. I kind of go through it, I make it up on the fly, and then I think about what's coming up in the next month. And I have a little disclaimer at the bottom of every message, like, “if your work is not reflected in this, please don't be alarmed. I check everything.” So I one hundred percent guarantee it does not resonate with everyone, but I try.
Rebecca: I just know I've been on platform teams and sometimes they never show up there.
Allan: Exactly. But I do pay attention to the platform teams and the developer productivity teams and our security teams, which are, you know… People love to talk about our product, but I think the underlying engineering deliverables underneath the product are also very, very important.
Rebecca: Well, Allan, this has been really great. Always love to chat with you about this stuff and love to debate too. Thank you for a little bit of that. But yeah, great to have you and wishing you much continued success as you continue to be a serial engineering leader.
Allan: Thank you, Rebecca. It was a lot of fun.
Rebecca: And that's the show. Engineering Unblocked is brought to you by Swarmia, the engineering intelligence platform, trusted by some of the best software companies in the world. See you next time.