Jun 11, 2025 · Episode 22

Preserving culture and delivery speed through growth

Rebecca sits down with Julianna Lamb, co-founder and CTO of Stytch, to explore the challenges of building and scaling engineering culture at a fast-growing startup.

Show notes

Julianna shares her journey from Plaid to founding an authentication and fraud prevention platform, and dives deep into the cultural decisions that shaped Stytch's 30-person engineering team. From establishing quality practices and developer experience from day one to navigating the balance between speed and reliability, Julianna offers practical insights on maintaining culture through growth. They also discuss how AI is reshaping engineering interviews and the evolving role of junior developers in 2025.

Watch the episode on YouTube →

Timestamps

(0:00) Introductions
(0:32) Julianna's background and path to Stytch
(3:08) About the structure of Stytch
(5:30) Julianna's approach to team growth
(9:30) Early investments as a start-up
(12:36) About Stytch's culture
(17:10) Ensuring quality through testing
(19:58) Ownership of internal tooling
(22:24) Maintaining a culture of speed
(25:09) Managing quality through growth
(28:23) The importance of culture fit
(31:53) AI's impact on junior engineers
(35:00) How Stytch interviews in 2025
(40:32) Julianna's ambitions for the future

Transcript

Rebecca:  On today's show, I have the co-founder and CTO of Stytch, Julianna Lamb, and I'm really excited to talk to her about culture and maintaining culture as you're growing your organization, and also some of the interesting aspects of growing your organization in the age of AI. So Julianna, welcome.

Julianna: Thanks for having me on.

Rebecca:  Absolutely. So first, I'll let you do the elevator pitch for Stytch, because you're gonna be better than I am. But yeah, tell me a little bit about Stytch and how you ended up there.

Julianna: Awesome. Yeah. So Stytch is a platform for user authentication and fraud detection and prevention. If you need to build login into your application, or protect it against bot attacks, abuse, etc., Stytch has a pretty comprehensive platform covering just about any auth method you might want, plus things like role-based access controls and then the fraud detection as well.

And my background that led to this was primarily coming from experience working at Plaid. It's where my co-founder and I met, and I worked on a bunch of problems related to fraud and authentication there, built a bunch of stuff in-house. The main thing I worked on was building a fraud engine to protect against things like credential stuffing, account takeover attacks. And so we spent just a ton of time building all of these experiences in-house 'cause we kind of didn't find any vendors that fit our needs. I spent a little bit of time then at another company, VGS, where I worked on many things, but one of them was ripping out Auth0.

And so I was catching up with Reed, my co-founder. He was still at Plaid. He was the PM for the team that owned the auth experiences, and we were basically just catching up, complaining about work. And for both of us, what we were complaining about happened to be authentication. And we were just like, “This is crazy. It's one thing to see this at one company, but now we're seeing it again. I wonder if we're just missing something here and everyone else has this figured out, or is this a really, really common pain point?”

And so then we spent a bunch of time kind of going and talking to people, and everyone was very enthusiastic to complain about authentication and how frustrating it was and how much time they were pouring into it. And so we ended up starting the company about five years ago now, basically to try and build something better and remove that headache from people's lives.

Rebecca: Yeah, Stytch is one of so many off-the-shelf solutions that I wish I had in 2006, right? It would've been a very different world back then. But yeah, I love to see the emergence of these sorts of things, like “you don't have to figure this out for yourself.” And just really getting people focused on the actual business problem they're trying to solve, which is not auth. I'm sorry, that's your business problem, and you get to be really good at it.

So tell me a little bit about five years ago, you started this. Where are you today? How many, specifically, software engineers do you have working there? And I wanna, over time, talk about how that growth happened. But let's just start with how big you are, how you're structured, those sorts of things.

Julianna: Yeah. So the total company is about 65 people, and about 30 of those are on the engineering team. I think you get something closer to 40 including all of engineering, product, and design. So definitely very heavily weighted toward the EPD org.

On our engineering team, we have a couple of main teams. We have our platform team, which encompasses things like infrastructure, what we call a core backend team, data engineering, et cetera. And then you have the product-focused teams on top of that. We have one team that focuses primarily on the core auth product, and a sister team that focuses on our front-end SDKs, which is a little bit more specialized. And then we have our web team, which oversees the developer dashboard, marketing site, and docs, and focuses a lot on growth and activation for our customers.

And then the last team that we have is what we call our R&D labs team. They own the fraud detection product today and are continuously working on incubating new ideas. That fraud product came through an incubation about three years ago now, and is now a fully launched, live product. And so it's about balancing continuing to invest in things that are working with finding time for more focused R&D work, particularly when it comes to security and how we can solve more problems for our customers and help them better protect their applications.

Rebecca: I imagine that fraud is an endless hole to dig. When I was at Stripe, yes. Fraud. Uh-huh.

Anyway, so obviously you didn't start out like that. I'm guessing it was just the two of you, or maybe the three of you, or something like that. But how has the organization grown, and as a CTO, how have you thought about growth and when to grow and how to grow that engineering team?

Julianna: Yeah, so initially it was just me and my co-founder. We ended up raising a seed round pretty quickly. And so we hired four engineers basically right out of the gate. And so it went from, yeah, just the two of us prototyping things to going heads-down and building the V1 product. I think building in the infrastructure space, there's a lot that you need to lay foundationally to be able to build and scale a product. People don't want bugs or reliability issues in their authentication vendor. And so we were fortunate that we were able to be intentional. And the way we've talked about it is we went pretty slow in the first maybe six to nine months of the company to lay that foundation and be able to really quickly build on top of that without having to stress as much about scaling challenges for hopefully at least a solid period of time.

And so we ended up then, maybe a year into the company, starting to really grow out the team. We'd hired a few more engineers at this point, and that's when we introduced engineering managers for the first time. So we started with just two. Very functional. One was one of our founding engineers. So she was owning everything front-end. And then we had one other engineering manager that we hired in, who did all of the back-end and platform pieces. And so we stayed pretty functional, I think, for another year or so. I think that worked well when the team wasn't too big. I think we were probably up to 15 engineers with that structure.

And then you start to run into, I think, just a lot of coordination costs and bottlenecks. The way that our product works is that the API is a really big piece of it, but if you don't have the SDK support and the dashboard support, you can't really fully launch a feature, and so we were just running into blockers where two teams had to coordinate on everything. And then the way that our backend engineers who work on products think about the work that they're doing is very different from our backend engineers who work on foundational scalability challenges. And so we ended up splitting out into something that looks more similar to our current structure.

I think there are a few pieces that have evolved over time, figuring out how we work with front-end SDKs and how that team maps in has been an evolution. And then this labs team that's focused on fraud detection. We basically hired in one person that we'd worked with before, who does a lot of just security R&D work. And so we hired him in, knowing that he was gonna be doing R&D stuff. And we were like, “Okay, go off and let's figure out what we're doing.”

Rebecca: Go for it.

Julianna: Yes, exactly. Go explore. And then, once we got something that was really sticking, then we started to build out that team a little bit more. So it's definitely evolved, but I think we've been in somewhat of a steady structure for maybe two years or so now. The team has grown a bunch, but we haven't quite outgrown that structure yet. I think we're starting to get to that point. The core auth products team is about nine engineers right now, and it's pretty big for one team.

Rebecca: Something's gotta happen, probably. Yeah. I wanna go back to something you said at the very beginning, because often startups are, and correctly so, so focused on product-market fit that they're not worrying about those operational concerns so much. But, like you said, with an auth provider, that's not very optional. So how did that manifest as far as what you were spending time on early on versus what you deferred until later?

Julianna: Yeah, so I think there were two main pieces that we wanted to invest in there. One is on the scalability side, and that is just getting a good V1 of our infrastructure that we felt would scale with us. And investing the time to make sure that we had the right things in place. I think that ended up paying off, where I think we've been able to pretty seamlessly scale for the most part up until maybe the past two or so years, and then we started to hit bottlenecks. But still have been able to largely iterate on our existing architecture and not have to totally redo things.

I don't think we did anything, I don't know, super special or crazy. I think we just used modern infrastructure tooling, things like Kubernetes, making sure that we just had scalable CI/CD processes, all of those things we did early on. And I think that was helpful to lay a good foundation, both in terms of how our infrastructure scales, but then I think also on engineering best practices. Having CI/CD and tests from day one, I think, lets you incrementally invest in that without having to have an “oh my gosh, we have to redo everything and redo culture and let's spend a bunch of time resetting how we work.”

So I think that was valuable. And then the other piece that we invested a lot in early days was a foundational developer experience. I think we believed that having the right SDK experience, docs, et cetera, was really critical for this product. Developer experience is huge, right? And I think there's a lot of value in figuring that out with a small surface area. And so we focused on building basically one auth product. We started with Email Magic Links and really invested in the pieces that we thought provided the minimum competitive developer experience, but for that one product. And then we could go and add additional products really easily on top of that.

Obviously, all of that has evolved so much since then. But I do think that was valuable focus to be hyper specific on “what is the one product that we think can be viable on its own?” And people will be able to just use that standalone and then invest in the experience around that before expanding surface area.

Rebecca: So you mentioned culture, and it sounds like there's that focus on developer experience. When I talk about developer experience, my brain goes to internal developer experience, which maybe we can talk about, but you're talking about the experience of customer developers using your product. So I just wanted to clarify that for the audience.

But that kind of focus is a cultural choice, right? And it's something that you need to figure out how to perpetuate if you believe it's still a good thing. I wanna talk about how you perpetuate it through growth, but first, what are some other cultural things that you established early on that feel important, and what are some that you accumulated along the way?

Julianna: Yeah, so I think we are, I would say, a pretty doc-heavy engineering team and culture. And I think a lot of that comes from starting in 2020 when everyone was remote. We're sort of hybrid at this point. Majority of our team is in-office, but I think 40% is still distributed. And I think it’s also probably some of that mentality of “put in the early work so that the rest takes care of itself.” And so I think we spend a lot more time on upfront design of what we're building, so that once you get into execution, there shouldn't be too many unknowns. Obviously, if there's never an unknown or decision that you have to work through, you probably spent too much time upfront. But finding what is that right balance of let's work through things in a doc, make sure that key architecture decisions or any sort of contentious implementation choices are getting discussed before you're actually putting pen to paper.

So I think that has continued to perpetuate, and I don't think it was super intentional, either. I don't know that I would describe myself, for example, as a super docs-forward person. But I think that was something that just took hold early on and served us when we were small and continues to serve us today. So it sort of perpetuated.

In terms of other pieces of culture, I do think internal developer experience is something that we have prioritized from the early days. It obviously didn't start like this, but at this point we have, I think, a pretty robust remote development environment where everyone has their own full replica of our stack, and we've had that since maybe 2021, so four years now. So we had it very, very early. And we continue to put a lot of emphasis on “where are those friction points in building products? Where are we uncertain about things and need better test coverage?” Focusing a lot on how you can move really quickly and safely while developing code.

Rebecca: Yeah, I mean, simply the fact that you started with tests and CI/CD. And I'm just so amused by… so now, I'm working for a relatively small company about the size of Stytch. But I worked at Stripe, I worked at Indeed, and I worked at a couple of other larger companies. And the remote development environment is just cracking me up, because when the pandemic happened, I was at Indeed, and there had been some little tinkering on a remote development environment. Until then, people came to work every day and developed on the Unix desktops under their desks. And so I saw remote development environments happen there, and then I went to Stripe and it was the same: 2020, 2021, 2022, they're working on exactly the same thing.

But it's so interesting to hear that, as startups form and evolve, you just know to do that from the beginning. And it's so much easier to do from the beginning than saying “let's write tests now” later. That's so much harder. But it's just interesting to see how the newer you are, the more you can take advantage of all the things that we've learned before.

Julianna: Yeah, yeah. And writing your first test when you have one product is very easy. Going back and, if we had to test everything today, oh my gosh. That would take so much time.

Rebecca: How do you think about quality, basically, and ensuring it? Yes, automated tests go a long way, but they don't cover every case in most cases.

Julianna: Yeah. So what we do today is pretty comprehensive unit tests. And then we have a bunch of synthetic tests that basically run and will block deploys. We have talked about doing blue-green deployments for a while, and I think that's been one where the trade-offs in terms of internal developer velocity versus the gains aren't quite there yet. I think we'll do it at some point, probably soon-ish. But we've found that the testing culture, too, just emphasizing that… maybe it's hard to quantify to some degree.

I do think people are very intentional with their changes and emphasize good testing practices, whether that's writing tests as they're shipping new features or doing a manual validation to make sure the workflow that they're working on is going to fit together in the right ways. And I feel like you could dig in and maybe say, “Okay, is the time that goes into that actually more significant than the additional time we'd need to put into blue-green deploys?” Maybe, but it's working for us, and so I think we're at “Okay, this is fine enough.”

And another, I think, interesting thing that we do that I think does emphasize the importance of thoroughness of testing is we do continuous deploys. So we use a merge queue, things automatically deploy to staging, and then if that's successful, get deployed to prod. And so I think that mentality of, “Hey, when I hit merge, this is going straight to prod unless you wanna pause deploys for something that's particularly risky.” I think that just creates a culture of, you need to test it because you don't have the opportunity to catch it at staging and manually test it. And I think that is much more brittle than doing the work upfront. And, yeah, when you hit merge you're like, “okay, this is going to prod, it's ready to go.”
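To make that flow a bit more concrete, here's a minimal sketch of the kind of merge-queue pipeline Julianna describes: every merged change deploys to staging, synthetic tests gate the result, and only a green staging run is promoted to production. Everything here, including the deploy.sh and synthetics.sh commands, is a hypothetical placeholder rather than Stytch's actual tooling.

```python
# Hypothetical sketch of a merge-queue deploy flow: staging first,
# synthetic tests as the gate, then automatic promotion to production.
# The scripts invoked below are illustrative placeholders.

import subprocess
import sys

def deploy(environment: str, version: str) -> None:
    """Deploy a built artifact to the given environment (placeholder command)."""
    subprocess.run(["./deploy.sh", environment, version], check=True)

def run_synthetic_tests(environment: str) -> bool:
    """Run the synthetic test suite against an environment; a failure blocks promotion."""
    result = subprocess.run(["./synthetics.sh", environment])
    return result.returncode == 0

def promote(version: str) -> None:
    # Staging goes out first; a failure here stops the pipeline before prod is touched.
    deploy("staging", version)
    if not run_synthetic_tests("staging"):
        sys.exit("Synthetic tests failed on staging; blocking the prod deploy.")
    deploy("production", version)

if __name__ == "__main__":
    promote(sys.argv[1] if len(sys.argv) > 1 else "HEAD")
```

The point of the sketch is the ordering: the same merge that triggers the staging deploy is the one that ends up in production, which is what puts the pressure on testing before merge rather than on catching problems in staging by hand.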

Rebecca: I feel like I've lived through many generations of all of this, and it's a delight to hear that these things are not easy, but so much easier. And you're seeing so much that you get the benefit of it. You don't need to be convinced that this is work worth doing. And that it is actually essential to your product and not a nice-to-have.

Do you have any team that is focused on developer experience or on internal tooling or owning that? I'm imagining if somebody owns that remote development environment project, for example. So what does that look like, and, maybe more interestingly, when did that become its own thing, or has it become its own thing yet?

Julianna: Yeah, so our platform team owns that today. They own many other things, though, which I think does sometimes raise interesting prioritization questions because they're having to weigh, you know, investments in infrastructure reliability with the remote dev environment. And I think so far that's working, but I can also see that, at some point, it's just really hard to prioritize those things against each other. And so having a team that's, yeah, you just own developer productivity makes that a lot easier. I think we're not at the size and scale where we can do that.

But largely, our platform team has sub-teams. They operate as one big blob team, but infrastructure versus core backend is a little different. So, mostly, infra is owning the remote dev environment. And then the core backend team, one of their mandates, too, is to invest in things that help improve developer experience for working in our API code base. So that might look like observability improvements. We have a standard interface that makes it easy to emit your observability data logs, traces, et cetera, so that they don't have to think about that, right? Stuff like that, where it just improves the day-to-day developer experience.

So there's a few different pieces of teams thinking about this. But no team that fully owns it. And I think it's been that way for a while, probably. Really, since we started having any sense of a platform team, which I think has been four years now.

So they've kind of always owned it. And originally it was like we had one infrastructure engineer and he owned it, and then there was a second one and–

Rebecca: There were two!

Julianna: Yeah. Always been a piece of what they do.

Rebecca: Got it, got it. And so you've also mentioned this culture of speed, and it sounds like you're investing pretty impressively in making sure that developers can move quickly. What have you done to preserve that culture of speed and forward movement?

Julianna: Yeah, it's a really good question. I think we've done, I would say, okay on this. I think it's a constant thing that we're trying to find the right balance on, because I do think I've spent a bunch of time talking about quality and reliability, right? And we're building critical infrastructure, it’s a security product. Getting things right is really, really important. And if we had to index too far to one side, I'd rather we index too far toward moving slowly and carefully. But I do think we're a little too far that way and sometimes I have to push a little bit on when it makes sense to move quickly on something versus when it makes sense to be more intentional and thorough.

And I think, building different parts of the product, building different products, there are pieces where you might wanna strike the balance one way versus the other. And then part of the emphasis is on what we can do to make it easier and safer to move quickly: how can we abstract things away, or make it a lot harder to make a mistake or to not fully understand the implications of your changes? 'Cause when there's spaghetti code everywhere, the interactions are really hard to reason through and whatnot.

So I think when I talk about velocity, it's not just saying, okay, write code more quickly, ship it more quickly. It's like, “How do we make that easier for people?”

Rebecca: It's an enablement problem. Yeah. It's not like a finger-pointing problem.

Julianna: Exactly. Yeah. And I think it's easy, when you're in the code base day to day, focused on building new features, to lose perspective of “okay, where is there a ton of friction or just scary code that you're having to work through?” And so having a separate team that owns a bit of that foundational piece helps, I think, bring that perspective. But then I also think part of my role is to push people and be like, “Hey, you're really good at just putting your head down and figuring this out. But can we make it easier for you to do that? Can we help accelerate things?”

So I think it's good that there's some tension there, and I think it's something you always have to push a bit on.

Rebecca: Yeah, that's the Stockholm syndrome of, “of course I have to edit these twenty files. I always edit these twenty files,” right?

So one thing that I've seen in my experience is that I got to go through growth from 200 to a thousand engineers a few years back. Very interesting to see, but even at the beginning of that scaling, they were running kind of headfirst into quality issues. How do you feel you've done as far as managing quality through the growth of the engineering team?

Julianna: Yeah, we have, I think, done well on that overall. Obviously not perfect, but I think emphasis on that quality of the code you're shipping and ownership of individuals over the changes that they're putting out there goes a long way.

But then, having good monitoring and alerting, I think that's another thing that we invested in really, really early on. Yeah, we had PagerDuty set up before we did our alpha launch, when we had no customers.

Rebecca: You were ready.

Julianna: Yeah. And I think some of it is just doing that, right? We've always had an on-call rotation, because we just knew that we were critical infrastructure, and even when we had a couple of customers, a major outage could really quickly erode trust. And so I think having that from so early on just makes it a critical piece of the culture. And so as you're onboarding new people, they just see the emphasis on operational excellence, and, I think, rise to the occasion then and prioritize it as well.

So I think that's largely something that we've been pretty effective at. I think we've definitely run into scaling challenges as we've onboarded bigger and bigger customers. And I think it's about making sure that we're finding the right balance of investing in foundational work to enable the next phase of scale without over-engineering and going through essentially an academic exercise of “what would it look like to have significantly more scale?” when we don't know when that's coming. And so I think we've largely balanced that well.

There have been a few fire drills we've had to run when we had a really big customer onboarding in the next month, and we knew that there were a few bottlenecks they were gonna run into. But I think we've had a lot of low-hanging fruit in a way that's made that doable. It's not like we had to totally re-architect everything. But I do think that balance is really tough to strike where you want to be investing in the right foundational stuff, but also you need to build products, you need to onboard customers. And as a startup, that's the most important thing, I think. Speed of execution and making sure that we're able to stay competitive in the market is table stakes. And you need to have the right baseline reliability. And I think we go well above and beyond that, but there's a degree at which you could go above and beyond that's just not a good use of time.

Rebecca: As you continue to grow… Well, this is a sticky topic, but how are you thinking about “culture fit” and how are you thinking about what you need to add and what you need to maintain in your engineering culture as you grow?

Julianna: Yeah. So I think the way we kind of think about evaluating candidates and seeing who might be a good fit is that we have our defined technical bar, and everyone needs to clear that technical bar. And if you don't, then there's no conversation basically.

We want people that are going to spike in some areas, so we want them to have things above this baseline that we can get really excited about. That could be specific experience with a certain technology or domain or something like that. It could be really strong product-mindedness about how they approach the work that they're doing, the problem-solving that they're doing. It could be really, really strong testing fundamentals. There's a bunch of different things there, and we're not looking for specific ones. We're just looking for people to demonstrate expertise and excellence in some area.

And then we talk about values fit. So my co-founder and I do a values interview for every candidate. I do most of the engineering ones. He does most of the go-to-market ones. And we have three main values, although I will say there's definitely a favorite value that's most important, and I think if you check that, the others come along.

That main one is “be an owner.” I think the ownership mentality is what everyone needs to succeed here, 'cause we're a small company, we're trying to figure a lot out still. And that's really exciting. If you want to take ownership of the work you're doing, find opportunities to have impact, then I think you can grow really quickly as a result. But if you're the type of person that needs a really thorough plan handed to you, you're not gonna be successful here. We don't have time to always have a fully baked plan. And I think there's just so many things that we need to get done in a given week or month, and knowing when to go above and beyond and really take ownership over something, 'cause it's gonna make a difference for a customer, a teammate, whoever it might be, is super important. We want people that, when there's an opportunity to step up, they're going to take it and not run away from it. So I think that's by far the most important.

Our other two values are act with urgency and prioritize for impact. We really like those, 'cause I think a lot of the things we've been talking about are, how do we find that balance of working on big important things, but also moving quickly? And so we want people to be working through that tension day to day in the work that they're doing and making sure that they are prioritizing the right, big, impactful projects. But also, when there is an opportunity to act urgently and solve a customer issue or something like that, it's often worth a quick distraction in the long run to be able to solve that problem quickly.

And I think we're looking for people that have that ownership, who can be excited about making those decisions and thinking through the implications of the work they're doing, not just the “how do I build this thing that someone has told me to build?”

Rebecca: I ask people a lot, “How do you see AI influencing the role of the junior engineer?” The role of the new grad. And I'm curious, because I have also worked at places where we just hired experienced, strong people, not so many juniors at all, and just high independence and high execution. And I've been at a place where we hired seas of new grads and ran them through a three-month orientation program and all this stuff. So, how are you thinking about that leveling mix in your engineering organization?

Julianna: Yeah, it's a really good question. I think we've skewed a lot more senior, just because of the size of our team, and I think the hybrid nature, we've found it harder to have a significant number of junior engineers on the team and set them up for success. We have some, and we've hired new grads in the past, and some of them have grown a ton with us and have been super successful. And we are hiring on the early career side right now, but just a couple of folks.

I think we have kind of gone back and forth. Some of it, in the early days too, is that having more junior talent can be really valuable, because you just have a bunch you need to get done and you need people who are super excited to just be thrown in at the deep end and figure it out. But I think you always need that balance. You can waste a bunch of time if there's no one setting direction, and I think over time we've skewed a little bit more senior.

Some of that is that there are a lot of people who have been here for a few years, started early career with us, and have grown and are now senior. But I also think we've been in such an execution mode of: we just know what we need to build, there's a lot to get done, and we need to be really focused on the execution piece, so having people who can just lead a project and run with it has been really important. But we are hiring mostly early career on engineering right now, because I think the team is now feeling the pain of everyone being a project lead, and we need more people who are opportunities for our current team to mentor. But also, I think we just have the capacity to help teach and grow people right now.

So I think it's a tough balance. We've grown a lot in total over the past five years, obviously, but it's been five years of pretty steady growth. And when you're not hiring a ton of people, I think it's much easier to be conservative and skew a little bit more senior. So I think we've definitely taken that route historically and are now kind of like, “okay, yeah, we don't have quite the right composition. We need to rebalance a little bit more.”

Rebecca: So you've been hiring all along, and you're continuing to hire now. I confess, when I was last hiring, it was 2022, I guess. So I'm embarrassed. I haven't conducted an interview for a software engineer in three years now. What does it look like in 2025?

Julianna: Yeah, I think it's a really interesting time with AI, and I think depending on how companies are thinking about it, I think we're really seeing everyone get on the AI train right now in terms of how teams are working. I do think at this point, it feels pretty inevitable that AI is gonna be a critical piece of software engineering going forward. It's just so powerful. It can remove, I think, so much grunt work and give you so much more opportunity to spend time on interesting problems instead of, yeah, repetitive, mundane tasks.

And I think being able to leverage it well is going to be critical. I think a lot of it comes down to good systems thinking and problem-solving. I don't think the most interesting part of engineering has ever really been writing code, but especially not now. And so people who are able to problem-solve really effectively and have that systems-thinking perspective will, I think, be able to leverage it to a really powerful degree. And so I think those skills become even more important to test for in the interview process versus “can you solve this LeetCode problem and write some small toy functions that do something?”

And I think we're at the point where we're redoing a few of our interviews basically to make it less trivial for AI to solve, because we also want to see people in their natural environment when they're interviewing. So we have people share their screen, use their normal coding setup. And we've always done that. Last week I was looking at a doc I wrote in 2021 about hiring and how important it was to see how people were able to use the tools at their disposal, Google, Stack Overflow, etc. You just replace that with AI, and I think it's exactly the same.

But I do think, yeah, we had a few interview questions where we just found that if you were using something like Cursor and you started writing, it would just solve it for you. And I think that shows that, in hindsight, I'm not sure those were ever great interview questions. The problem-solving aspect is really important, and we were able to get good signal because you had to struggle through writing the code. But now we're like, “okay, we need to be a little bit more high-level in terms of the problem-solving aspect of these coding interviews,” and make sure that we're able to get that signal, and then also see how people do use the tools at their disposal.

I don't think we are saying you must absolutely use AI in the interview, but if that's how you normally work, we wanna see that. And I think we do want to make sure that we're hiring people that have an appetite for experimenting, trying new things, and are excited to leverage AI and the tools at their disposal.

Rebecca: So you mentioned systems thinking there, which is something that we don't normally look for until you've had two promotions or so. How is that going to change? How do those expectations change what you're looking for in those new grad or junior roles? Is systems thinking going to become an earlier requirement?

Julianna: Yeah, we actually just redid our interview loop. We used to do two coding interviews for the phone screen, and then, as part of the onsite, we would do a systems design question. We swapped one of those coding interviews for systems design. And it's not meant to be an interview where you need really specific expertise in certain technologies if you're early career, right? It's more: how do you think about structuring data, how do you think about building APIs and putting all the pieces together, and can you reason through a complex system, essentially?

And so I think we are emphasizing that a lot more clearly. I think we're saying, “hey, you need to pass this to get to a final round, because we're just not gonna hire people that can't excel on that.” And I think a lot of it, too, is looking for the collaboration and problem-solving aspect of it. Can you work with the interviewer to get to a good design there? And I think those pieces of the job are gonna be super, super critical. And so even if you’re early career, we wanna see how you approach that, how you solve that problem.

I think it'll be interesting to see if we do end up hiring, I don't know, Staff+. We haven't hired that senior in a while, and I feel like it becomes even more critical there. And so we probably want even more systems design questions. But we don't have interview loops for that right now.

Rebecca: I do miss it. Oh my goodness. I never thought I would say that. Back in the day, when I had, you know, four a day or something like that.

Julianna: It's exhausting.

Rebecca: Right? I miss it. So, last question before we wrap up. You have this ambition to grow, and you have this ambition to preserve culture as you're doing it. What does success look like a year from now? How do you know, “Oh, we did it. We pulled it off?”

Julianna: Yeah. I think something I always talk about when we're onboarding new people, and it's how I think about interviews, is that I think each person should bring something new to the team. They should be a values fit in that they're going to work well with the existing team.

But I want everyone to make us better in some way, to bring a different perspective. And so I think we've come a long way. Especially going through, I don't know, the 20-to-60 phase of growth, you're growing quickly, things break, and you have to kind of reset and make sure that you're building the team with the right expertise, et cetera. And looking back, it's, yeah, we've come so far, we've improved so much, and that's because we've hired amazing people who have helped us figure things out and improve.

And so if I'm talking about culture a year from now, I hope I'm talking about slightly different problems, or that we've evolved in some way. I think that's also just super important for a startup: if you're solving the same problem month after month, you're not making progress. You're always gonna have problems to solve, so you just want them to be changing. So I hope people have pushed us in terms of culture in some ways. There are pieces of the internal development experience that I have in the back of my head, where someone's gonna come in and be like, “Oh my gosh, we can transform this. I know the answer.” So really, I'm looking for that growth as a team overall.

Rebecca: Well, Julianna, this has been awesome. Thank you for reminding me that interviewing exists and just sharing your journey these last few years. Really interesting to hear all the decisions that you've had to make and all the things that you've had to think about. So this has been awesome. Thank you so much.

Julianna: Awesome. Thanks for having me.