One of my funny stories about estimation: I remember, sort of early in my career, I was struggling with this at an individual level, and my mentor at the time was like, well, just take your estimate times two. And I always found that really, really funny, because that's not an accurate estimate; that's just buffer, right? All that is, is admitting that there is no good estimation to be done, and so I'm going to cheat: you know, I'm going to run my mile the first time at ten minutes and then I'll run an eight-minute mile and feel really good about it. One of the exercises that I like to run for estimation is: okay, so we're not estimating well at the month cadence, or at the quarter cadence. Can we estimate well at a week cadence? Can we estimate well at a day cadence?
Welcome to In Depth, a show that surfaces the tactical advice founders and startup leaders need to grow their teams, their companies, and themselves. I'm Brett Berson, a partner at First Round, and we're a venture capital firm that helps startups like Notion, Roblox, Uber, and Square tackle company building firsthand. On the In Depth podcast, we share weekly conversations with startup leaders that skip the talking points and go deeper into not just what to do, but how to do it. Learn more and subscribe today at firstround.com.

Before we move into our show today: Kubecost makes it simple to get real-time cost visibility and insights for teams using Kubernetes. Built by engineers for engineers, Kubecost shines a light into the black box of Kubernetes spend, uncovering patterns that create overspending on infrastructure and ultimately helping teams reduce spend by up to 70%. Kubecost is used by thousands of enterprises to monitor billions in spend, and has helped companies like Adobe save millions. You can install Kubecost in minutes for free with just a single line of code, and you can check it out at kubecost.com.

Now, on to today's interview. For today's episode of In Depth, I'm super excited to be joined by Snir Kodesh. Snir is the head of engineering at Retool, which is a development platform for building custom business tools.
Before joining Retool, Snir spent six years as a senior director of engineering at Lyft. In our conversation, we cover some of the biggest differences between leading engineering teams for a consumer product versus an enterprise platform, and the things that are actually quite consistent across both orgs. First, Snir pulls back the curtain on the software development cycle, starting with setting the product roadmap while balancing a diverse set of customer needs. He outlines who was in the room to represent product, engineering, and design, and what those meetings actually look and sound like. Next, he dives into how engineering starts taking that product roadmap and making a plan of action using the try, do, consider framework. He makes the case for leaning on QBRs instead of OKRs, explains why scope creep gets a bad rap, and shares his advice for getting better at estimating how long a feature will actually take to ship. Finally, we zoom all the way out and cover his essential advice for engineering leaders, especially folks who are scaling quickly from leading a small team to a much bigger one. I really enjoyed this conversation that got into the nitty-gritty of engineering a complex platform product, and I hope you find a few new frameworks to bring to your own product development process.
Well, thank you so much for joining us.
Thank you for having me, really excited to be here.
So I thought we could start off by talking maybe more broadly about your experience as an engineering leader. I was curious if you could compare and contrast the last two roles that you've had: previous to Retool you were at Lyft, and now you're at Retool. Maybe use that as a jumping-off point.
Yeah, that sounds great. Exactly as you said: from Lyft to Retool, from consumer to enterprise, and from marketplace to enterprise.
I think one thing that really stands out to me is the product surface area and the product path, or happy path. That was really clear at Lyft: you have somebody come in, open the app, look around, hopefully convert, request a ride, converting at every step of the funnel, and ultimately get dropped off at their destination. That happy path is really hard at Retool, specifically because it's an enterprise platform; there are sort of infinitely many permutations of what you can do with the platform. You can introduce your own code, which is a whole other set of branches that are very hard to anticipate. And so I think, from an engineering standpoint, observability and operations are really hard when you open up those infinitely many permutations. That's certainly one massive difference, and it's probably true of most consumer versus enterprise platform dynamics, and of this type of switch. I would say that the challenges on the engineering side were also quite different. At Lyft, I was working on the marketplace side, so building some of those large distributed
systems that try and clear hundreds of thousands of drivers and tens of millions of rides, many, many orders of magnitude more volume than you would see on the enterprise side. Processing that data really efficiently, finding globally optimal solutions, and doing all of that in under two seconds was really, really challenging; just a large data-processing and compute challenge. Retool, from a technical standpoint, is very different. Again, you have those infinitely many branches and paths of what you can build, but you're creating a client-side tool that can support that branch factor in a secure, scalable way, all of which is living client-side. At Lyft we were all server-side and able to take advantage of some pretty compelling computing power. That's a massive difference on the engineering side as
well.

So one of the things you just shared was this idea of a consumer product that has a clear happy path, versus an enterprise product like Retool that doesn't have those same properties. I'm really interested in what that ultimately means for engineering and engineering management.

Yeah,
it's a really great question. For us at Retool, again, what's compelling is that if you look at the customer segments, customers come in all sorts of shapes and sizes. We want to build not just for the customers that we have today, or the ones that are pushing our frontier, those are obviously so critical and so valuable, but also for where we see Retool going. That is a massive difference. Whereas, again, on the happy path side, in general, if you have a very structured happy path and you believe that your entire customer segment is going to walk through that path, you don't really need to think about the counterfactual, the things that you're not investing in today. And so for us, a lot of it on the management side is trying to anticipate all the failure modes. In particular, one of the things that we're thinking a lot about today is that frontier. We talk internally and externally about Retool as a low-floor, high-ceiling product, and I think that's absolutely true. But as you push the frontier, as you push the ceiling, things get harder and harder, and you have to go and explore escape hatches. We have to be thoughtful about how we build those escape hatches for you, because we're realistically not going to be able to anticipate every conceivable permutation or path, but we want to have an outlet for you to be able to achieve those goals. A very concrete example: let's say you're a customer and you're looking to build a heart rate monitor component, and we don't have a heart rate monitor component in our default library today, because we didn't think to build one. We simultaneously want you to have a pathway to be successful in Retool, and the way we do that is by exposing APIs that allow you to build those custom components. Taking that example and extrapolating, it's really about how we try and imagine all the conceivable places where an escape hatch might be needed, and then give our customer, or prospective customer, the tools to operate in that escape-hatch world. In doing so, we actually get a lot of really great data, right? Let's use that heart rate monitor example: maybe we find multiple customers are building custom components for heart rate monitors, and so we can pull that in and provide some
sort of first-class support for that. Or maybe that actually prompts a lot of creative thought about what we could do even more broadly; maybe we want to build a component marketplace, using our APIs, that can be used across customers. So again, I would say the biggest thing on the engineering side is anticipating the places where the platform will not serve someone today and building escape hatches for those moments, if not explicitly pushing the frontier out ourselves.
So what does that mean in terms of the way you organize the team and where you prioritize work?

Yeah, from a prioritization standpoint, again, we believe that we are building a universal platform that will serve everyone we have today and everyone in the future. To be very concrete about how we organize: we don't have a flagship team on the engineering side that is dedicated to one or two specific customers. The reason for that is that we believe there's a downside there, a tyranny of the minority, for example, where you'll have one or two customers that really prevent you from achieving a global optimum. And so the organization, maybe counterintuitively, is not oriented around specific customers, in terms of an N-of-one or N-of-two customer segment. That being said, we do think our customers come in very distinct shapes and sizes, and our enterprise customers face very distinct and different challenges than our self-serve customer base. Specifically, as you become more and more entrenched and more successful with Retool, you start to integrate it into your software development life cycle. That's not something you necessarily do on day zero or day one of using the platform. So, specifically: how do we source-control Retool apps within a larger company, within a more successful customer base? That's a really important challenge for a very specific segment of our customer base, and something we will absolutely orient around, because we believe that is a cluster of customers and a general use case that is generalizable, and so we'll orient our engineering team in that direction. But again, maybe counterintuitively or surprisingly, it's not a dedicated team, a professional services team; we believe that has some negative cultural effects and is sort of a detractor toward achieving our broader
goals.

Can you talk in a little bit more detail about how you balance this enterprise versus SMB set of customer needs? I think one of the interesting things when you're building much more of a platform product, versus a piece of vertical SaaS or a piece of SaaS software that's sold to salespeople or marketers or finance folks, is that product roadmapping at times is more complicated, and figuring out how you get someone into the product is significantly harder, because the jobs to be done are so much more broad. Particularly if you look at one of your startup customers that's ten people versus a 100,000-person company that's using Retool, it just seems incredibly hard to figure out how to build product in that context.

Totally. I would say first, you're
100% right. And using that software development lifecycle example, or what Retool at scale looks like from an operational standpoint, there's this really interesting inflection point that happens, where, again, that ten-person team versus the 100,000-person company: the latter probably has a central front-end platform or internal tools platform team that they're using, and so we would work really closely with them, which is just a distinctly different challenge and a different customer than that edge team, or that engineer, looking to operate more efficiently at the internal tool level. I think there is
just one point that I'll make, which is that there is sort of a universal truth: the core product experience, in other words, building internal tools incredibly quickly, building these internal tool front-ends incredibly quickly, is universal. Any lift that we make to make onboarding into Retool simple, or to extend the frontier and allow you to build more intricate, more performant apps, not performative but performant, apps that perform faster and load faster, that is sort of the tide that raises all boats. So that is, I would say, the core of our roadmap at any given point in time. And then the place where these segments differentiate is really, again, how they operationalize Retool. That does become a meaningful part of our roadmap, in a way that I think a consumer company doesn't necessarily have to think about that branch factor. So we do definitionally have to allocate a meaningful part of our engineering team, of our headspace, of our product roadmap, toward understanding what Retool looks like at scale for that
100,000-person user base. But I would say it's almost like a one-way door, in the sense that anything that is good for that small ten-person startup is going to be good for that 100,000-person company, but the inverse is not necessarily true. So it's just a tad of additive cognitive load, additive thoughtfulness, and a very different operating model at scale that we have to plan for and budget for. That's also what makes it exciting, because it falls, to me, in one of those good-problems-to-have buckets, right? You could take a 100,000-person company and have Retool deployed to one or two teams, and that would be fine. But that's not nearly as exciting as seeing it deployed wall-to-wall. And when it is deployed site-wide, a whole host of different problems come to bear. So again, it's an exciting problem to have, because it means you've permeated throughout the org; you've become the cultural norm of how tools are built. As the saying goes, the problems don't go away, they just evolve. And when all your problems have evolved into how to manage it at scale, how to deploy it, how to version it, how to collaborate and get ten-plus builders within an app concurrently, a lot of really compelling problems come to pass there. But that, again, is just exciting; that pushes our
frontier.

On this thread, how do product and engineering fit together in the context of a product like Retool that's highly technical? And, maybe less so comparing and contrasting consumer versus enterprise, is there a unique way that the two work together at Retool that might be different than if you were building a generic SaaS app, or pick something that is sold to a less technical
audience.

Yeah, I would say so. I think it's just definitionally more collaborative when we're talking about building usage analytics and the product is effectively a data warehouse, or when we're talking about source control and the product is how you build your branching abstraction in such a way that it's consistent with the status quo of writing code natively, more traditionally. These are both technical challenges, but also really compelling product challenges.
You sort of need to understand the state of the art, from a prior life of writing code natively, in order to really deliver the value. And so what I see in our engineering function, and this was both a selection criterion for me in coming to Retool and a value that I really believe in, is that engineering should be deeply involved and thoughtful about what the product experience is like. Product managers absolutely should and can take the lead on that; they can be the canary in the coal mine to some degree, in terms of being the first line of contact, coalescing a lot of the input. But to have that be a strong interface, where the product role sets the definition exclusively and an engineer shows up and goes and builds, I think is actually a detractor for us, and something that would create a worse product overall. So there's a lot of product consideration. A really good example of something that's being actively worked on right now is extending our debug tools. You think about that: it is such a technical product to build, both in
terms of the challenges that we're trying to solve and bringing that into the closed system that is Retool, but also just in terms of the meta question: what are we really trying to solve here? You kind of have to understand it from first principles, and it's a developer's problem, an engineer's problem; the right debug tools, and how to debug software effectively, is an engineer's problem. So I see a really collaborative environment. I see that extend all the way out to customer engagement, right? We will have engineers in QBRs with customers, sitting in and discussing the challenges, walking through our thought process, explaining what's coming up, explaining parts of our roadmap. And so I would say product definitely takes the lead on that, but it's certainly more collaborative, and I would say a sort of weaker, looser interface than a strong one where someone would be reaching over the net if they got involved in somebody else's work stream. That's certainly not the
case.

So can you go a few levels deeper and maybe tell the story or explain how products are built? What does the planning process look like? How might product and engineering work together on a given product? What kind of color can you give on who's
involved?

I think, abstractly, we plan to the half, but then we obviously recalibrate and bring it into the quarters; pretty traditional on that front, the same sorts of constructs that you might expect in terms of plans. We recently, and this is sort of interesting from a norms perspective, shifted away from OKRs into more of a QBR format. What I mean by that is that our product and engineering teams are more subjectively thinking about their space and their progress, as opposed to trying to come up with an objective metric that frankly is not that informative. But the inputs are maybe what you'd expect. We look a lot at customer sentiment, at customer signal. We lean heavily on our go-to-market teams, our success and sales teams, to help inform and provide inputs into what we're doing. We blend that with sort of a bottom-up process from the team that is actively building the product, which I think is always a critical part of any company, right? Enterprise, consumer, it doesn't really matter; that use-your-own-product culture, that dogfood culture, I think is really important. So that generates a set of ideas, and we
combine that with a little bit of top-down direction in terms of the must-haves, right? And that's a little bit of, I would say, maybe a regret minimization framework: if we ended the year today, what would we be most regretful of? These are pretty clear opportunities that really leap out at us. Some of them are just true north, stability and performance, like we talked about, but some of them are a lot more oriented toward the product itself. Today, I would say, we could be doing a better job at building really complex apps, like I mentioned, and there are, again, escape hatches and ways of getting it done today, but we aspire to do better and we believe it's a strategic must. So investing in that is maybe not something that would come from the teams; in this case it did, but maybe it wouldn't, and so we would push for it. Or, talking about source control, the branch logic and that collaboration on branches is something that we've decided as a broader team is just really critical; it's something, again, that we hear a lot about from customers, and it aligns with our intuition. As I'm talking this out with you, it takes me to one interesting
switch. I've been in orgs where data is everything, where data is the religion, and you really look at data in order to explain or express where you should be making your bets, putting your chips, where things are right or where things are wrong. On the one hand, I think that's wonderful; I think being data-driven is really important. But I think data presents a very nefarious vulnerability, which is that if you don't know what to measure, and you don't think to instrument, and you don't necessarily know where to look, it can actually lead you astray, because your biggest opportunity might be in a place that you just didn't think to instrument. And so what's really interesting about Retool specifically, but I think enterprise software more broadly, is that this bug-and-feature nature of data plays roughly the same, yet you don't have the ability to sit and look at every nook and cranny. Statistical significance is very difficult, because you're not necessarily talking about millions of customers on a daily basis, especially as you're getting off the ground, maybe at scale. And so you have to use a lot of that customer sentiment, a lot of that intuition, a lot of that belief of where you're going,
a little bit more of that subjective signal about what is missing. But that really is the product planning process, in a very roundabout way. I don't know if I quite answered your question, but it's a mix of outbound and inbound signal from customers, a mix of bottom-up thinking, and a mix of top-down direction.
And then how does that get brought together? What is the planning and execution cadence, and who's involved in creating the roadmap? What's the cadence of sprints? Is there an annual planning and theme component? Is it quarterly? Does it tend to be a little bit more bottoms-up, and
so on.

Yeah, there is an annual theme. The annual themes are really intended to be broad, in the sense that there's a lot of room for interpretation, but they certainly remove certain options from consideration. So, for example, when we talk about stability, it's like, well, that's somewhat obvious. But stability is one of those things that is really subjective in terms of whether you have reached your goal or not, and so having it stated is actually important, because it signals organizationally that indeed we have not, at least not to our liking, and it sets a really high bar there. In terms of the team planning process, again, that is mostly quarterly. In terms of who owns it, it is certainly the product function, but in very close collaboration with engineering and design. So I would say the three-headed beast of PM, engineering lead, and design lead come together to really put that plan together. As you'd expect, one of the hardest things is being in a resource-constrained environment, in the sense that our eyes are bigger than our stomachs at times. We really see so much potential for where to put our chips, but sometimes we have to wake up and come back to reality in terms of what we're actually able to deliver in any given quarter. That's a continuous learning process, right? Because the teams themselves are evolving; let's say every team is doubling every year, so you're continuously recalibrating what your capacity is and what you can actually take on. That's certainly a big learning muscle right now. But yeah, quarterly on the team level. And the reason, in part, that we moved to the QBRs, which I think is interesting in the context of that quarterly planning,
is that we realized there are many projects that just cannot be constrained to a quarter. I think this is one of the explicit downsides of quarterly OKRs: you bias toward items that you can fit, deliver, and demonstrate impact on within a quarter, and that's not always the strategic thing to do. We want the space to take on six-month, long-lived efforts. Obviously we push for milestones, we want to see incremental validation, but we want to be able to take on not just six months; we've had some work streams that have been going on for a year, which, for me personally, sometimes makes me a little bit nervous and a little bit paranoid, because it's just not how I'm wired. But I think it's actually been really good for the organization, and it's the only way to get some of these bigger projects off the ground. And this isn't said enough: I think back on some of the bigger efforts that I've been a part of, and none of them have been less than nine months of work. Sometimes it just takes time. It takes time to do these things really well, to be really thoughtful, and to engage with customers along the way. There are things that can be done to de-risk the progress and make sure that you're not just sitting on an egg that will never hatch. But we want the space for our teams to be able to take on that long-term thinking as
well.

Is there anything that you do, from a ritual or process perspective, for these specific long-build projects that increases the probability that they don't turn into these multi-year things that never ship, or where you ship something after 14 months and it's not the right thing?

We certainly, I don't know if it's quite on the ritualistic front, but we definitely embrace more qualitative surveying.
We really do try to get out to a customer segment early. Even reflecting on, and I can share some of, what we're building: it really is an expansion of the internal tool space, moving into areas that are not strictly on the front end. As we're building these products, they've been in development for over a year. It's been a ton of hard work. There have been a couple of rotations through that time, no hard pivots, which is exciting to say, but certainly some solid 45-degree rotations in terms of what we're trying to do. Most of that is predicated on getting it into the hands of customers that are eager, early, and really taking that prototype, that MVP, that design system to engineering, even though they are really complex builds, and finding the customer that is going to be okay with that, in spite of our overall brand and in spite of the fact that, you know, we are Retool; people have come to expect a certain level of quality from us. Somebody who's willing to embrace it and recognize it and say, yeah, you know, I'm going to go and play with your mobile product, and I know that it's going to be potentially janky at times, or the build is going to break on occasion, because you guys are moving fast on it, but I'm excited to trial it. We've used a handful, anywhere between 50 and 100 customers, over the course of that year, and refreshed the pool. That's the other factor there, because if you don't refresh the pool, you can get a little bit into the echo chamber, into confirmation bias. You know, the people that have stuck with you, that have seen the progress over time, get more bullish, but that's not necessarily how somebody new will think of it. So those are the two things that I think we've done: we refresh the customer segment, and we've gotten customers in very early, as early as three months into the initial process, even though these have been year-plus builds.
You were talking about this idea of QBRs versus OKRs. Can you explain that with a little bit more precision, so I understand what you mean by
that?

Yeah, absolutely, and I would say that none of these are prescriptions; I'll explain the definition and then come back to what I mean by that. The QBRs, the quarterly business reviews, are more of a subjective and directional assessment. Today we have our teams just generally assess their progress. And so if you have a critical team, a very self-critical team, and I consider myself a self-critical person, so if it were me, I'm always going to come in scoring low, hell or high water, even if things are objectively going great, because for somebody else I see the opportunity. That is sort of an explicit downside. But the QBR is really about: what could have gone better, what went well, what are your plans on a going-forward basis, how do you mitigate the things that didn't go well and continue doing the things that did go well? And then a little bit of a pre-mortem: if we forecast a quarter out, three months out, or a half out, six months out, what are the things that you believe are going to be in the what-didn't-go-well column?
The intended exercise there, obviously, is to force people to think about it, and by extension hopefully plan for and mitigate the risk. The OKR process is generally more well understood: a high-level objective and the key results underneath it. Part of the QBR, intrinsically, is objectives too; an objective still needs to be named if anybody's going to talk about what they're going to build, and it still has to be thematically grouped and really be the local priority for that team, if not the global priority for the company. And so the good thing about the QBR is, again, that it allows a little bit more open-ended thinking, where you don't just have to anchor in KRs that may or may not be achievable on a quarterly cadence. I think the bad part is that, directionally, if a team were going to run the subjective process for, let's say, six months to a year, I would be concerned. Sprinkling it in is a good thing to do, and historically we have had the OKR process, but the sprinkling in is good because it allows a team to think bigger picture, to take a more subjective, but in a good way, audit of where they stand, instead of just being scored on metrics that may or may not be the right metrics. Now, if you're using the QBR process because you aren't measuring, or there's no observability, or you don't have a sense of what success looks like from a quantitative standpoint, I think that's the downside, and that's where, over the long run, you can't just have the QBR process. That's why, for us, it will always be a little bit of a portfolio, a mix, a blending of the two approaches.
Who's involved in that specific meeting? You mentioned some of the pieces, but can you walk through how the meeting is structured?
Yeah, absolutely. So who's involved: it's per team, and I don't mean the entire team. I think, as most folks have discovered, we've found that smaller meetings are actually more effective and allow for more directness and more transparency, which I just think obviously leads to better outcomes, and that's really hard. At the end of the day, there is sort of an altitude difference; let me walk you through who is in the room, and it will make a little more sense. So: our co-founders, David and Anthony, myself, our head of product, and Ryan, our head of design; so design, engineering, and product. And then, from the core team itself, the product manager, the engineering manager, and then the design lead or design manager as appropriate. And occasionally we will also include, actually, not occasionally, we will always include the TL, in terms of helping to complement the skill set in the room. My point about altitude is just that, if you look at the core team and the leadership team, we operate at different levels, and so we want to be respectful, and sometimes we don't know, and we want to be vulnerable as well.
And so I think vulnerability is easy to do for us, but that sort of directness or that critique isn't always the simple thing to do in a larger group. That's probably a muscle that I could personally get better at, and so the smaller group really helps in that context. In terms of how the meeting is structured, tactically, it opens with the read. We find that a silent read-through is really effective, actively leaving comments in the doc, so that there's a written record and there's less room for interpretation. And then we really just get into it, and we try and understand
Where the team is coming from. I think really the goal of that particular meeting, frankly is to walk away with this sentiment of, they've got it. The team's got it, we've got it. And that could be thinking back on some specific cases. That doesn't necessarily mean that it's all sunshine and rainbows. We asked the teams to score themselves, on a scale of one to five, one being quite poor and five being sort of exceptional. I'm glad to say, no team scored themselves five. I think that again is the pursuit of perfection and I don't necessarily think perfect is the goal, but we've walked out of meetings.
with a team that has scored, let's say, 1.5 or two, and felt confident about the next three months. For us, it really is about establishing that record. I think if we see two of those in a row, that will be cause for concern; we have yet to see that. But the score is somewhat important to us, and the overall plan and assessment is really important to us. Again, because we've got the team leads and then the company leads for those equivalent functions, there can be some cases where there's misalignment. You know, we will believe there's a certain root cause for over- or under-performance, and the team will have a different one, and so getting to alignment and reconciling that is certainly really helpful and useful. One thing that came up recently in one of our QBRs is how engineering onboards onto Retool, because it really is a very complex system and requires a lot of global state. That was a great flag that extends beyond any particular team; it's a company-wide or engineering-wide need, and a really great insight to come out of
the team. And then you talked about the conversation that happens after the pre-read. One of the things is you have a self-assessment. Is most of that time spent discussing what's in the doc, the progress and where people are going, in organic discussion? Or is there a series of more structured prompts or themes that tend to emerge across all the QBRs you've been involved
with? We try not to. So we actually try to use the asynchronous time: while we're leaving questions, the team is sort of frenetically responding to them, which I think is a very efficient use of time, right? One of the funny things about a pre-read is that you've got basically half the room staring at a screen that they've already authored, which is pretty inefficient. So I actually like our slight deviation from that, which is that the quiet period of reading is actually an interactive time. We're engaging; myself, our head of design, our head of product, and our co-founders are engaging with the doc and leaving comments, and the team is actively responding and reacting. If there's a little bit of back-and-forth in the threads, that will certainly get promoted into a broader discussion. But for the most part, we spend a lot of the time either asking general, inquisitive questions about the overall strategy and what motivated it,
or identifying and suggesting, working in the try/do/consider framework, asking teams to consider or try things that maybe they didn't previously think of. This came up recently: we were in a QBR for one of our new product lines, and the question around integration came up. How do you integrate, or do you even want to integrate, a new product line with an existing one? There are a lot of reasons to go in either direction, but it was a really interesting conversation because there was a point of disagreement. One side really wanted to push for some integration, to demonstrate that we are being incredibly thoughtful about how these connective pieces fit together, and the other side was like, well, we absolutely see the value of integration, but let's do it in a first-class way. Let's make sure we do an exceptional job with it, and for the time being there's standalone value. I like to make a metaphor, a shout-out to the AWS service suite, which is me being very bullish as well: you look at the AWS service suite and you've got a lot of standalone products and some really great interaction effects, and I think we think of the Retool system in very much the same way. The question is: do you need to integrate on day zero, or can you do that later, in an exceptional way?
So maybe on a related note, what have you figured out as it relates to estimating? I think it's kind of an interesting topic: you have a product or a set of products, you have teams, and oftentimes you have a lot of dependencies in terms of what you're building. And when things aren't going according to the timeline you established, you have an interesting question of: is it an estimation problem, or is it an execution problem? I'm just curious, what are the things that you've figured out, or ways in which you've evolved your thinking, as it relates to helping individual engineers or product people or engineering teams create more predictability, or increase the probability that whatever their estimate is turns out to be accurate? It's a
really good question. One of my funny stories about estimations: I remember, sort of early in my career, I was struggling with this at an individual level, and my mentor at the time was like, well, just take your estimate times two. I always found that really, really funny, because that's not an accurate estimate; that's just buffer. All that is, is admitting that there is no good estimation to be done, and so I'm going to cheat and give myself slack: I'm going to run my mile the first time at a ten-minute pace, and then I'll run an eight-minute mile and feel really good about it. So I always found that 2x suggestion kind of funny. One of the things that we did recently, because, yes, just to be direct, we struggle, as I think most engineering teams do. I think back on some of the bigger projects that I ran at Lyft, and I think the official estimates were like six months, and it took a year and a half. We definitely see the same thing at Retool. One of the exercises that I like to run is almost like a bisect for estimation: okay, so we're not estimating well at the month cadence, or at the quarter cadence; can we estimate well at the week cadence? Can we estimate well at the day cadence? Often you'll see where the misestimation arises. And certainly there's no silver bullet. The things that I've seen are absolutely the sort of
execution risk and the unknown unknowns cropping up, especially as you take on large work. You see a little bit less of the unknown unknowns when you're going 0 to 1; these new product lines, for example, have seen fewer unknown unknowns in terms of the technical risk of working with an existing ecosystem. But they certainly have unknown unknowns, where we miss identifying a risk factor that rears its ugly head later in the process and really bloats things. So that's certainly one risk vector. I think the other thing is scope creep, which gets a little bit of a bad rap. There's certainly the scope creep of: we scoped the project to be X, and now we're going to make it X plus Y plus Z, because along the way we've just decided X isn't good enough. That's one form of it. But there's another scope creep that I think is actually more admissible, which comes from a really great construct one of our teams has; it's called the bug bash. A product that is almost ready to ship will be presented to the team, and the team will just hammer on it for two-plus hours. This is additive to product reviews and all the other
operational processes through which we try to establish a quality bar. But the cool thing about the bug bash, beyond smoke testing, integration testing, and unit testing, is that the owner of the project will often come back with new insights from people. They've been so focused on working on the product that things feel intuitive to them when they aren't, or they have come to accept clunky or sharp edges of the product as it's currently being built, or maybe they're just using the same fixed test suite to interact with it, and somebody brings something new. So we saw this with one of our tools, where a different app generated a whole host of different errors, some of which were not managed all that well in the product. The bug bash is really great, I think, because it actually ensures a certain rigorous quality bar. But it is one of those things where it's a non-deterministic meeting: I'm going to come out of it with either zero minutes of incremental work or, like, two months of incremental work. And that is another real challenge for us. Organizationally, we treat these different estimation root causes with different fervor, right? So I think
misestimations that come as a result of the bug bash are, in my mind, more than admissible; that's actually just us holding a really high quality bar and really deciding what done means. Misestimations that come out of unknown unknowns: we've certainly gotten better there, in terms of moving more thought earlier in the process, which is sort of the intuitive thing to do. I've certainly seen projects in my time that get kicked off, and I don't just mean here, I mean globally, without an RFC, without a spec, or with a spec that checks the box but didn't really involve a lot of upfront thought. I think it's hard, because as an engineer, especially one who really cares about the business and the product, you want to get out there, you want to write the code. I still remember, for me personally, if I was going through a two-week spec process, I felt really unproductive. It's a little bit of a reframing or reprogramming, because the benefit of those two weeks can actually be many months on the back end. And so that's the other thing that we've done: move a lot more thinking and a lot more thought up front into the
process. Continuing to build on this thread: when you look at a given engineering team, so let's say an engineering manager and his or her direct reports, how do you define excellent
performance? For me, it comes down to impact, and then, a little more nuanced, impact attained in the right way; that's more on the cultural side. Are people highly engaged in driving toward that impact? Engagement being a function of how motivated they are: do they understand the work, is there a high degree of autonomy and respect. All those pieces coming together: a team that has a large purview, a lot of autonomy, understands the motivation for the work, takes the customer need, and conversely the customer pain, very seriously, and then drives toward impact. That is sort of it. I don't believe in measuring high-performing or highly productive teams with something like a lines-of-code metric. Most of the purely measurable things that are completely objective can also be gamed, and so they aren't perfect indicators. It really comes down to impact. And so,
I think, maybe connecting the dots here, if I'm reading the question the right way, in terms of what you do when estimations are amiss and things are delayed: I think there's just a subjective evaluation perspective that has to come into play. Did the project take a new shape partway through? If you naively evaluate a project on its day-one estimate, and the entire guts of what it meant to ship that product have changed, it's not really fair to hold it to that. The question is: how much could we have anticipated? So, connecting the dots, some of the better-performing teams that I've been on, maybe counterintuitively, didn't always hit their deadlines. To me, it's less about the deadlines and more about the artifact that comes out the other end. Do we have to roll back that code when it ships? Does it really hit? Does it stand some reasonable test of time? Do customers love it? Does it have the impact? Does it get the usage? If not, why? Where along the way did we get it wrong, or did we misunderstand the signal? There are lots of examples of projects I've seen where we got the letter of the law right, but not the spirit. These are asks that come specifically from customers, where they will say something like, and I'll use an example: we really want a local development environment. It's like, okay, great, we go and build that to the best of our understanding, but we don't ask the five whys, the "what stops you today?", and you quickly realize it's about configuration sprawl and credential sharing and a whole bunch of other needs that don't necessarily get met when the project first lands. So, for me, it really is the impact, and not just the impact, but the impact combined with how the work got done. And that, again, is more of those engineering cultural values that are just really important to me. Not just the traditional ones, operational excellence, quality, stability, but also the autonomy, the thoughtfulness, the business-mindedness, and how the team responded to the
market. Do you think, as you've grown in your career, you've embraced intuition and subjectivity more than maybe at the beginning? I'm asking because I think this idea of the role of intuition and judgment, subjective versus objective, is an interesting one, and it touches some of the different parts of what you've been talking about, all the way back to the comment around some of the issues of being too data-driven. I'm curious if you think about that, or whether there are ways in which you behave as a team that maybe avoid some of the problems, when you lean more on intuition or more on things that are subjective. I'm curious if anything pops to mind for
you. Yeah, I mean, I will say one quick thought, which is that I do think there are plenty of examples where we set the constraints ahead of time and they are sort of data-driven. Take one of our new product lines: the plan at first was a very large breadth play, like many, many customers, and we refocused that. We said, actually, we think that fewer customers, but deeper integrations and more penetration into a given org, is more important for this particular product, and we really set those in terms of raw numbers. Now, as we set the raw numbers, in my head I was thinking, you know, it doesn't have to literally be these numbers; it just has to be directionally in this vein. So again, it's a non-binary assessment of achievement, right? Let's say, just throwing out some numbers (this wasn't actually the case for this product), we would have said, hey, we'd like to see 100 customers with upwards of 50 end users per app. Well, if it had been 80 customers, or if it had been 30 end users, it would have been directional. What we wouldn't want to see is thousands of customers with one to two end users; that would not have achieved success or impact in our mind. So there are cases where we can be, and we have been, and we are a lot more prescriptive. I think my point, and you're right, it is pretty consistent, is that you have to marry the two. If you have no subjectivity and no intuition and you just focus on data, frankly, that's comforting. And going back to your original question, I think the answer is yes.
Over my career, I have come to embrace intuition more, and the reason is that, in the early innings, data feels inarguable. And again, I don't believe in this in the slightest, but if you gauge engineering productivity on lines of code, well, that's awfully convenient, because you have some North Star metric that you just can't argue with. It doesn't matter whether somebody edited a README 50,000 times: fifty thousand lines of code, great, you're successful. And this is where data and subjectivity collide, because obviously I don't think that's the right North Star metric, and I don't think editing a README 50,000 times would be anybody's definition of success. But in the early innings, it's comforting to have that, and you feel like you're wearing some of that armor because you have this data. Whereas when you just rely on intuition, it's a little bit more vulnerable; you're a little more exposed. You have to have really good, strong opinions, at least loosely held, and you have to have reasoning that is explainable and justifiable for why you think that way. That's on the point of me personally, and
the evolution over time. I do think you have to marry that with being your own harshest critic. This is where I am short-term critical, and maybe even short-term pessimistic, but long-term very optimistic, and I think that's an important balance to have. Because in the absence of data, there's a different failure mode, which is that everything just looks great, you constantly redefine what success looks like, and you never achieve greatness, right? You sort of constantly justify. You know, this is a story from way back when for me: a friend and I were doing a stand-up on one of the teams that I was on. We had this culture of clapping when something would ship, and I can't remember how we got there, but somebody shipped a fix for a bug they themselves introduced, and we clapped. My friend was like, are we seriously going to clap for an error that we ourselves introduced? That was many, many years ago, but it was an eye-opening moment for me, because we do have to be a little bit more self-critical than that. We have to push ourselves to higher heights than just saying, yup, code went out, let's applaud, even if really there was a lot of self-inflicted harm
along the way. And so I use that example to say there has to be critique, there has to be a bias toward wanting to do better along the way, in order for that subjective environment to work. Then, in terms of marrying different forms of subjectivity and building alignment, because you make a very good point that my impression of the world isn't necessarily the same as anybody else's: that is where the directional data helps. If we're working on something for performance, we are going to set a millisecond target on it; load time will be defined on the order of milliseconds. A product launch will be defined by customer engagement and by daily active users. We're not walking away and just saying, hey, this is an art form, we'll just know it when we see it. We try to set those directional fence posts, but there's just more nuance to it, is I guess what I'm trying to
say. Something you mentioned a few minutes ago that I just wanted you to explain was this try/do/consider framework that I think you all use, maybe in the context of a QBR. What is the explainer?
Yeah, so try/do/consider is really nice. It's just a very good, structured way for a team to understand where we, sort of the leadership team, are coming from. "Do" is obviously the strongest version of that: we'll discuss it and debate it, that's fine, but we have, I would argue, strong opinions strongly held, and so if we have a perspective, then we will just ask the team to do something. Obviously they can say no, and they can explain why, and we can engage in that, but more often than not, I think the do's prevail. "Consider" is way more open-ended and is actually more of a question. It sometimes comes in the form of "did you consider" or "how are you weighing"; it's more exploratory, really trying to understand how the team is thinking about a certain theme or topic or product. And "try" isn't squarely at the "do" point; it maybe is a loosely held opinion, and probably not the strongest opinion either, where we'd like them to try it on for size and see if it works, because we are vulnerably unsure of our perspective. That's try/do/consider. A good example was actually the one I just used with respect to the usage numbers. The way this came about is there was a table in one of these docs showing a ton of customers but one or two end users, and we basically said, hey, this isn't it. The "do" was: change the acceptance criteria, change the exit criteria. If it means fewer customers, that's fine, but what we really want to see is depth, not breadth.
I'm curious: I assume you get lots of heads of engineering or VPs of engineering reaching out as they're growing their teams, and you share notes or ask questions, those types of things. So I'd be interested: when you think about folks that are going from a small team, you know, 10 or 15 engineers, and they're growing into a 30-, 50-, 100-person engineering team, what are the types of problems that generally emerge, and what are the types of advice that you tend to give on a recurring basis for those types of
problems? I actually just got off a call last week with someone that fits the bill, and the thing that comes up the most, the most dominant thing, and frankly the thing that I also struggled a lot with personally, is delegation. I think it's particularly difficult in the engineering discipline because, for so long, you're either directly or indirectly quantified or qualified by the code that you ship. It's very hands-on; it's collaborative, you're with good people, but at the end of the day it's also an individualistic activity. You do it, and it needs to get done. And I think when companies
scale really quickly, just in the natural process of scaling, there's less documentation and there's a lot of domain knowledge that is held by individuals, in particular leaders, and especially ones who, in the earlier days, were also presumably writing a decent amount of code. Handing that off is really hard, not because of some ego, but just because, more often than not, the fastest path to getting something done is them doing it, and that's been true for the entire life of the company to that point in time. I experienced this, and I had to face my demons many years ago to crest it, and at times I still struggle with it. But this notion of being really thoughtful about what you can delegate, and the other side of the coin, which gets into subject number two, hiring, finding the people in whom you actually feel the conviction and confidence to delegate to, is by far the most dominant thing that I share with folks. People are primed to want to do it themselves, because that's been the currency, and they are also the most effective person to do it, and that creates a lot of built-up bias toward doing it themselves. But that can't scale,
and you find that out sooner rather than later. So I think that's one. The second thing is the org structure and finding leverage through the organization. Part of that is delegation, right? But part of that is also just hiring the right people. In those early days you're, very logically, hiring ICs, and hopefully you're doing a good job hiring ICs that actually wear many hats, are very close to the business, and have good communication primitives as well, whether written or verbal or both. But very quickly you want to think about the org structure and how it operates effectively. Controlled chaos is very good, but you quickly get into uncontrolled chaos, where people are just running anywhere and everywhere, and you get a lot of extremely local priorities being set that don't actually true up to any sort of positive global outcome. If everybody can set what they work on and there's no structure around it, you're not necessarily selecting for the best things for the business. And so, on that point, I really encourage people to find the unicorn. I think what a lot of founders will default to is a
really strong IC who has, like, an interest in management, for example, and that's a very dangerous failure mode. While that person should absolutely go through that transition and get into management, they're not necessarily the right person to take you from 10 to 50; in fact, more often than not, they are not. That can be because of cultural norms, or because they're learning how to performance-manage, or there are just a lot of learnings along the way, or they don't necessarily have a strong philosophical backbone for how they think about org structures. The second thing I always encourage, and I try to advise founders on, is to not try to do the two-for-one special, where they're thinking they're getting a great org leader out of a top-one-percent technical leader. Sometimes they can, which is great; there are those unicorns out there, and those are awesome. But this is the flaw of everybody thinking they're above average: the odds that every company that thinks they have that actually has it are quite low. So that's the other piece: really think about the org at scale. Think about where you want to go, where you're trying to go, and how you can get somebody today who can be comfortable at that state, with less variability around, oh well, they also have to perfectly grow with the company at every step along the
way. On the first topic that you mentioned, delegation: what advice do you give folks, other than "it's going to be an issue for you"? Or, I guess, put a different way: if you look at your own journey to become better at delegating, were there unlocks along the way? What did you do to get to where you are today, versus where you were?
My favorite thing is the counterfactual; maybe this just works really well for engineers. Again, we sort of have this logic-branching factor. I like to do the counterfactual, and this is something that's actually worked with folks on my team today, which is: if you could go back in time and, at different junctures, do something different, maybe hire somebody instead of spending your time coding, somebody who could have helped accelerate things over the long run, would you have? I think most people, when they look back, are able to be objective. Looking forward, it's very hard for us. We're in the heat of battle, in the moment, wartime, when you need to get shit done, and it's harder to be objective. But when you look back and you ask yourself, at those critical inflection points, would you rather have done what you did, or done something different? Maybe that's hire a manager to take some managerial load off your plate. Maybe that's hire an IC to take some of the execution load off your plate. Maybe that's hire someone on the product side to help complement and jam with you on product strategy, and have more dedicated time for that, and more sort of institutional time.
In those counterfactual questions, when I sit with someone and we look back, 90% of the time they're like, yep, absolutely. And so the follow-up question is: okay, well, you felt that way then; can't you see how the current state will lead you to the exact same counterfactual consideration? On present day, forecast the future: looking back on today, aren't you going to feel the same way? And that changes behavior. It depends which pressure point we're talking about, whether it's technical execution, or systems and architecture, or hiring, or org design; that is really person by person, company by company, because the growth curves are different depending on the company. But that exercise has always worked. Well, I shouldn't say always; like 90% of the time it works really well.
A great place to end. Thanks so much for spending all this time with us. This was
great. Yeah, thank you. Thank you for having me. It was a lot of fun.