EPISODE 1763

[INTRO]

[0:00:00] ANNOUNCER: Modern engineering teams often face challenges with unpredictable delivery and limited visibility into their performance. This can make it difficult to track progress, identify bottlenecks, and understand how efficiently time and resources are being used. The lack of clear insights commonly prevents teams from aligning their work with broader business goals. Sleuth is designed to be an operating system for engineering and helps teams achieve more predictable delivery and align with business needs. Dylan Etkin is the founder and CEO of Sleuth. Dylan is an Atlassian alum who has spent the last 15 years building dev tools with Jira, Bitbucket, and Statuspage. He joins the podcast to talk about the challenges faced by modern engineering teams and innovative strategies to overcome them.

Gregor Vand is a security-focused technologist and is the founder and CTO of Mailpass. Previously, Gregor was a CTO across cybersecurity, cyber insurance, and general software engineering companies. He has been based in Asia Pacific for almost a decade and can be found via his profile at vand.hk.

[EPISODE]

[0:01:19] GV: Hi, Dylan. Welcome to Software Engineering Daily.

[0:01:21] DE: Hi, thank you so much. I'm happy to be here.

[0:01:23] GV: Yes, Dylan, great to have you here. You're the CEO of Sleuth, and we're going to be hearing all about the Sleuth platform today. As we often do at Software Engineering Daily, hearing a bit of your background is a super interesting way to kick off. I know you've got a pretty interesting history, and I think 99.9% of our listener base will recognize most of the products that you've worked on. So, yes, could you maybe just talk through what was your programming history and product history?

[0:01:53] DE: Absolutely. I'll preface it all by saying, "Don't hate me," because some of these products have become a little bit love-to-hate. But yes, I've been in the industry for ages at this point. I started in the dot-com era and I worked at an e-commerce place that nobody remembers. But it was a great experience because I got to get involved in a cool community at the time as a young developer. Through a set of circumstances, I ended up in Australia. There were only so many things that I knew were going on in Australia, and one of them was the makers of Jira, Atlassian, and I was fortunate enough to join that company when it was really small. There were basically 20 people. I think when I called up to get the job, Mike Cannon-Brookes, one of the co-CEOs, answered the phone. I was like, "Oh, it says you're hiring developers. Are you hiring senior developers?" He was like, "Yes, man. Come on in. Sure."

But yes, I was able to spend a number of years at Atlassian. I was one of the early developers on Jira, worked on that for about five years, became the first architect for Jira for about a year. Transitioned back to the US with my family. Atlassian had grown a ton, so I kind of stayed with them, and they had made an acquisition, which was a pretty exciting opportunity for me, in Bitbucket. That was SaaS at Atlassian, which was something very different than what they had been doing with behind-the-firewall Jira and Confluence. We sort of bought that product at 40,000 users. Four and a half years later, I think we were at like 2.5 million users. So, that was a really fun ride and just a good time to be involved in DVCS in general, because it was, again, a hot thing at the time.
I decided, oh, 10-plus years is a long time to be at one place. So, I left and found this little startup that did something that I thought was super cool, called Statuspage. It was like a team of 10 and we were just growing gangbusters. Then, as if it was from an episode of Silicon Valley, lo and behold, a year into the journey, the CEO there comes over to my house and we're sitting on my deck and he says, "So, we're being acquired." I'm like, "Oh, okay. I didn't know we were doing that." He's like, "By Atlassian." My response was -

[0:04:06] GV: "Of course." Yes.

[0:04:07] DE: "Are you effin kidding me?" He's like, "No, I'm not." And I'm like, "Well, there you go."

[0:04:12] GV: So, you tried to leave, but -

[0:04:14] DE: I know. Yes. Went back, and the mothership pulled me back in, but that was a great experience as well. I can't imagine what it would have been like to navigate an acquisition if I didn't know where all the bodies were buried inside of Atlassian. I think it was a good thing for Statuspage and it was a good thing for Atlassian that I could bridge that gap. Yes. So, I spent a few more years at Atlassian doing that and then eventually just spun out and decided, "You know what, there's a problem that I've been itching to solve for quite some time," and that's Sleuth, and yes, I've been doing that for about the last five years or so.

[0:04:45] GV: Yes, that's amazing. I think you just reminded me of the fact that, yes, Jira, et cetera, wasn't SaaS at one stage. It was, you deploy it yourself.

[0:04:53] DE: Oh, for a good portion of its life. I mean, for a good 15 years into Atlassian's journey, the lion's share of its revenue was coming from people that downloaded the software and installed it. It's also hard to remember that it was not ubiquitous. There was a time when it was the scrappy upstart and the IBM Rationals of the world were the things that it was displacing, and it was quite a journey because people were just throwing their credit card down and being like, "I don't want to use an issue tracker that sucks."

[0:05:23] GV: Yes, absolutely.

[0:05:24] DE: Now, some people think it's the issue tracker that sucks.

[0:05:29] GV: Yes, there's all these cycles. I mean, I have been watching this space quite closely for the last couple of years. Linear is obviously the big name that seems to be eating up a lot of, I imagine, Jira's business now. But well, yes and no, exactly. But anyway, let's talk about Sleuth. So, let's just start at the basics: what is Sleuth, high level? You've just given us your working history, and I think you said there was an itch you wanted to scratch by founding Sleuth. What was that?

[0:05:57] DE: Yes. Really, our goal in life is to help align business and engineering. The way that you do that is you demystify the engineering world a little bit. I think an analogy is a good one here. I often think of Salesforce. Salesforce is one of these things where it helps a lot of the operators do things. But at the end of the day, it also helps you understand, are you working efficiently in terms of sales? Do you have bottlenecks in any of your processes? It generates reports with really important information and data. Engineering has lacked one of those tools. I think we've had a couple of false starts in terms of measurement around lines of code, or number of pull requests, or metrics that most engineers worth their salt will recognize don't really measure very good things.
But I do believe that we're at a point in time where we can measure things fairly well, we can get a very broad view of those things, and then we can use those in that same way to generate reports and things that are going to help an engineering organization understand: Are we operating effectively? Where are our bottlenecks? What are we spending our time and our money and our effort on, and are those the right things? And ultimately, are they aligned with the business and the business's needs?

[0:07:13] GV: Yes. If you go to sleuth.io, that's where you can learn about the product. A few things are mentioned there, and the idea of running an engineering team is a theme. How do you define what running engineering is? And, especially with your experience and now that you've got a product centered around it, how would you say that's sort of evolved in recent years?

[0:07:37] DE: It's different based on scale. If you're a team of three people, you're going to behave and operate in a way that's very different than a team of 100 people, and is very, very different than a team of 10,000 people. But there are some commonalities, and that's a gift. It's a gift to us who are trying to solve this problem. And those commonalities are somewhat in the tools that we use to work. Most teams are going to be using some primitives like an issue tracker, or a Git repository and pull requests in some way, shape, or form. Or you're going to have something that's doing monitoring, where you're collecting a certain amount of KPIs on something. Or you're going to have an incident management tool, or at least an alerting tool, something like a PagerDuty, right? Even if you're a tiny little team of three, you probably have some of these tools. And even if you're a giant team of 10,000, you're just using the tools very differently. Then the other thing that I think is a commonality is that these much larger teams do tend to break down into smaller teams. You don't usually have a team of more than 50 or 100 developers on something; you tend to break it down into littler silos in between.

[0:08:49] GV: Yes. I think it was good that you sort of outlined those primitives, as you say, again, just to give that high level to the listeners. Sleuth doesn't come in and sort of try to replace a Jira, et cetera. Is it fair to say that those primitives you just mentioned, like issue tracking and PagerDuty, for example, for on-call, they plug in, they integrate with Sleuth? Is that correct?

[0:09:09] DE: Yes. You said that really well and I'm glad you called that out. It would be unreasonable to think that you have to prescribe somebody's tools, because often you're already invested in these things. On our side, it basically means that we support issue trackers as a concept, and then in reality we support Linear, and Jira, and Shortcut, and GitHub Issues, and GitLab Issues, and the list goes on and on and on, unfortunately.

[0:09:33] GV: Yes. Well, I'm curious, maybe as we get through the episode, to hear more about the actual engineering evolution of the product itself, because I think that part is quite interesting. But let's keep moving through the product generally. Sleuth sort of emphasizes predictable delivery. In your experience, what are the most common reasons that delivery is unpredictable, and I guess how does Sleuth change that?

[0:09:56] DE: Yes. I mean, that's a great question.
I would say that there's a million reasons for unpredictable software, right? But there are some things that help you spot where your next opportunity lives. One of the things that we've leaned very heavily into is this idea of DORA metrics. That's the folks that were acquired by Google a while back. They do the State of DevOps report every year. I think as teams started moving into SaaS, those of us who were deploying in that manner had a gut feel that all of those things were the things that were important, and I think the DORA folks did a great job of just adding some research to it and putting some names on it and saying, "Hey, these are four things and they are really strong indicators," and that's deployment frequency, change lead time, change failure rate, and then mean time to recovery. Those can be just a really strong indicator of how we're doing in relation to the industry, and where we're potentially running into bottlenecks.

For instance, maybe your team just, for some reason, takes 48 hours to pick up a code review. But you're coding things up in six hours and you're deploying them in two hours. Well, where should we spend our time? Let's look into tooling that's going to help us be more effective in that area. That's just an example of one way that you could be inefficient. I think there is an infinite number of ways. I mean, you could just be working on things that have no driver in your business. That's the most insidious kind, where you're doing a great job on shipping a thing that nobody actually cares about.

[0:11:31] GV: That's probably 50% of startups, at least.

[0:11:35] DE: Well, they don't know it yet.

[0:11:39] GV: So, you called out the DORA metric tracking. I believe there are three parts to the product, the latest one being called Pulse. Maybe you could highlight what these three parts of the product are, and then we can move to Pulse specifically after that?

[0:11:55] DE: Yes. I mean, our desire is to really be, like I said, an operating system for engineering, where you can align your processes, understand the strong measures of efficiency, spot bottlenecks, actually do things about those bottlenecks in an automated fashion, and then report on those things. That's kind of the spectrum of solutions that we're trying to attack, and as you mentioned, there are a few different modules that we have. It's kind of like you start on something easy and then you move to the next one and then you move to the next one. I think that tends to fit with team size as well.

So, the DORA metrics are something that we really focus on. We can attach to a lot of things and get very high-quality DORA metrics. Those tend to be really good for team leads and engineering managers to understand what's going on. Again, that's going to help you spot those bottlenecks in the work that you're doing day in and day out. That's going to get a lot of information from your Jiras or your GitHubs and those sorts of things. Now, when you've spotted some of those issues, the very best kind are the ones that you can automate away. So, we have an automations marketplace and framework. A great example is the one that we were just talking about, where your review time is too high. You should be able to set an automation that says, "Hey, as a team, we want to hit a goal of, let's call it 12 hours, for every review to get looked at." You shouldn't have to have humans nagging other humans about that shared agreement; instead you just delegate that off to the robots, right?
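For listeners who want to make the four DORA metrics concrete, here is a minimal illustrative sketch of how a team might compute them from its own deploy and incident records. The event shapes and field names are assumptions for illustration, not Sleuth's data model.

```python
from datetime import datetime, timedelta

# Hypothetical event records; the tuple shapes are illustrative only.
deploys = [
    # (deployed_at, first_commit_at, caused_failure)
    (datetime(2024, 5, 1, 14), datetime(2024, 5, 1, 8), False),
    (datetime(2024, 5, 2, 16), datetime(2024, 5, 1, 12), True),
    (datetime(2024, 5, 3, 11), datetime(2024, 5, 2, 15), False),
]
incidents = [
    # (started_at, resolved_at)
    (datetime(2024, 5, 2, 16, 30), datetime(2024, 5, 2, 17, 15)),
]
window_days = 7

# Deployment frequency: deploys per day over the window.
deploy_frequency = len(deploys) / window_days

# Change lead time: average time from first commit to deploy.
lead_times = [d - c for d, c, _ in deploys]
change_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deploys that caused a failure.
change_failure_rate = sum(1 for *_, failed in deploys if failed) / len(deploys)

# Mean time to recovery: average incident duration.
durations = [end - start for start, end in incidents]
mttr = sum(durations, timedelta()) / len(durations)

print(deploy_frequency, change_lead_time, change_failure_rate, mttr)
```

Real tooling would pull these events from CI/CD and incident systems rather than hard-coded lists, but the arithmetic is essentially this simple.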
So, what we do is we say, "Cool, set a goal." At the 75% mark, we're going to send a Slack notification, because that's where everybody lives anyway, that says, "Hey guys, remember you agreed on this thing? You've got about three hours left, so you might want to take a look at it," and then we'll help you see when you're hitting it and when you're not hitting it. We have a couple hundred automations that do similar things like that, basically best practices that some of the best teams use, helping you set guardrails and sort of standards that you won't cross. That's, again, based off of the measures that are saying, "Hey, here's where I think you could improve. I have a way of getting you better really quickly."

Then the other piece is really the operationalization piece. I mean, in our market in general, I think that's probably one of the hardest things to do. We've honestly had failure conditions where we've set up a company and gotten them rock-solid metrics that are really interesting, that are trying to tell them a lot of things, and then they turn around and they say, "How do we use this? What are we supposed to do with this? Who's supposed to look at this? When are we supposed to look at this? How do we drive actual change?" Because it turns out I don't actually care about the metrics. I care about changing something.

So, that's what Pulse is meant to do. It's a broader set of metrics that help you talk about engineering at higher levels than just engineering managers and team leads, but also, it's helping you operationalize. It's saying, "I know that you likely have some ritual where you're either doing an operations review, or maybe you're reviewing your monthly state of engineering, or you're talking to your exec staff about engineering allocations or those sorts of things. Let's key into those moments. Let's give you the tooling and the process and the data to make those really, really impactful moments."

[0:15:13] GV: Yes. So, I saw the Pulse product fairly recently, and I'd love to dive into that in just a second. I just want to go back on something you said there. Having been a CTO myself, I'm very aware of all the pain points around that exact example of review time, and what it is that we need people to be doing to ensure that they are actually reviewing within a certain timeframe. I completely agree that humans should not have to be the ones doing those reminders. I'm curious what the results have been. Does a bot giving that reminder genuinely change the landscape? My concern, and I'm speaking maybe on behalf of some listeners here, might be that my team are now going to have nine different reminders every day saying, "Sleuth says X, Sleuth says Y." How do you approach that?

[0:16:04] DE: Yes. I mean, notification fatigue is real, which means that we just have to be really smart about these things. What that looks like is, we're fortunate that in a lot of the tools, you target a subset of reviewers. So, we are clever about only targeting the people that are the targets of those sorts of things. It does tend to really work. We've seen teams really see a difference specifically with that one. It's so interesting because it's just about visibility. It's not even technically very complex at all. You're just saying, "I forgot. I just needed somebody to remind me because I've got 25 things I'm trying to do in a day."
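As a rough sketch of the kind of reminder logic Dylan describes, here is what a 75%-of-goal nudge might look like, assuming a hypothetical list of open reviews and a Slack incoming webhook. None of this is Sleuth's actual implementation; the field names and thresholds are illustrative.

```python
from datetime import datetime, timezone
import json
import urllib.request

GOAL_HOURS = 12        # the team's agreed review-pickup goal
WARN_FRACTION = 0.75   # nudge at the 75% mark, as described

# Hypothetical open reviews: (title, reviewer, opened_at). Not Sleuth's schema.
open_reviews = [
    ("Fix login redirect", "@joe", datetime(2024, 5, 1, 2, tzinfo=timezone.utc)),
]

def nudge(webhook_url: str) -> None:
    now = datetime.now(timezone.utc)
    for title, reviewer, opened_at in open_reviews:
        age_hours = (now - opened_at).total_seconds() / 3600
        # Only ping the targeted reviewer, and only once past the warning
        # threshold but before the goal is blown, to limit notification fatigue.
        if GOAL_HOURS * WARN_FRACTION <= age_hours < GOAL_HOURS:
            hours_left = GOAL_HOURS - age_hours
            msg = (f"{reviewer} reminder: '{title}' has been waiting "
                   f"{age_hours:.1f}h for review; about {hours_left:.1f}h left "
                   f"on the team's {GOAL_HOURS}h goal.")
            body = json.dumps({"text": msg}).encode()
            req = urllib.request.Request(
                webhook_url, data=body,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)  # post to the Slack incoming webhook
```

A production version would also need to remember which reviews it has already nudged, so the same person isn't pinged repeatedly.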
Another one that I love, and it's one of our more popular automations, and it's so stupidly simple, is: don't allow a pull request to be merged without an issue key referenced in the title. Almost every team I've worked on in the last 15 years has done that as a convention, but occasionally it gets missed, and then you're like, "Oh, hey, what issue is this trying to fix? I don't think I have the context as you're doing this pull request." We can sort of show the stats from when you've enabled the automation, and you see teams go from, "Oh, we do it 65% of the time," to 100% of the time. I feel like that's the power of automation: you're just saying we all agree, so even if it's a little annoying, we were annoying each other anyway, and now I don't have to have this simmering thing of, "Joe never remembers to do the thing. I've told him a hundred times and I'm ready to kill him." You're like, "The robot does it, and you just can't get your work done, Joe. So, put it in there."

[0:17:40] GV: Yes. Exactly. It still kind of amazes me. Obviously, tools like Linear, and they were not the first to do it, literally provide a branch name that you can copy, and then that sort of all flows through to GitHub, et cetera. Yet still, pull requests get made with none of that. You kind of say, exactly, "Hey, Joe. We have this thing in Linear that's super helpful, that ties this whole thing together. What's going on here?"

[0:18:04] DE: There's a heap of those. All of those automations are really about just saying, we agree as a team to do a thing. Let's enforce it. Let's give ourselves the best chance at doing the best practice.

[0:18:15] GV: Nice. So, let's talk about Pulse. I saw it also fairly recently. The way I would describe it, just as a layman, is it's sort of taking those metrics and making them much more presentable, partly to non-technical folks, but also to developers, who I imagine are a bit metric-fatigued as well, just seeing charts and graphs about "we did X and Y." My feeling was it blended two things together; it's as much qualitative as it is quantitative. Pulse, that word, there seems to be a cadence to it. It looks almost like a card, and then you click into the card, and then you've got a whole bunch of information. But maybe I'm just giving the layman readback of what I saw in Pulse. Maybe you want to walk us through?

[0:19:04] DE: Yes. I think that's right. I guess I would just jump back one little bit. I believe that data is interesting, metrics are interesting. I would say that they're about a third of the problem. Now, if I have a dashboard and that dashboard is showing some very interesting metrics, I would argue that you're a third of the way there. The second part, which is very, very important, is a shared understanding and interpretation of what those metrics mean. I can't tell you how many times I've seen a dashboard, I look at it, and I have a very strong opinion and come away thinking one thing. Somebody else has a very strong opinion thinking something completely different. If we're just using that as the source of truth and then taking action on that, we haven't arrived at any consensus about what the data is trying to tell us, right?
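As a brief aside before Dylan goes deeper on Pulse: the merge gate he described above (no merge without an issue key in the title) might look roughly like this as a CI step. The regex, environment variable, and messages are illustrative assumptions, not Sleuth's implementation.

```python
import os
import re
import sys

# Illustrative merge gate: fail CI unless the PR title references an issue key.
ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. ENG-123, Jira-style

def main() -> int:
    # PR_TITLE is a hypothetical variable your CI would inject from the PR event.
    title = os.environ.get("PR_TITLE", "")
    if ISSUE_KEY.search(title):
        return 0
    print(f"PR title {title!r} has no issue key (e.g. ENG-123); "
          "please reference the issue this change addresses.")
    return 1

if __name__ == "__main__":
    sys.exit(main())
```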
So, one of the key things that we're trying to do in the Pulse product is allow you to surface the right data that's going to tell a narrative, something that you can then agree on. We're going to have the collaboration tools inside of there as well, to be like, "Hey, I don't understand why there were so many incidents this period. Did that mean there was user downtime?" And somebody can come back and say, "No, those were just back-end things. Nobody saw a problem. It doesn't indicate that we are slipping. You can see that in this other metric over here." And I go, "Okay, cool. Level set. I understand."

And then that leads us to the third part, which is, what are we going to do about it? Why do we care at all? Why are we having this shared understanding unless we're trying to drive something different? We're trying to drive an outcome. We want to hold ourselves accountable to that outcome. We want to understand that there's data attached to that outcome, so that when we've decided to do something different, we can measure it and see if we move the data.

So, I would argue that the thing that I showed you in Pulse looks a lot like a dashboard, but it's very subtly different, because dashboards can be really stale. You don't always know if the data is working. You don't always know what the interpretation is. The interpretation might change over time. You don't have a window of understanding of, when was this relevant? When did we discuss this? When did we decide that this was true about the world? And maybe that same set of data led you to believe something else was true about the world later. The Pulse product is basically trying to allow you to be very specific about the data that you want to pull in, the data that means something for you, so that you can have these data-driven discussions, come to an understanding, and track the outcomes. I feel like that was all very abstract for our listeners, though, unfortunately.

[0:21:38] GV: It definitely covers, I think, a lot of what a listener might understand. Actually, just as you were going through that, I realized, is it right to analogize Pulse, at least the data that is viewed in Pulse and the discussions around it, with a retrospective, effectively, in the Scrum sense?

[0:21:59] DE: I think so. We honestly think a lot about it as pull requests. I think the key moment for us was when we were sitting down, and we'd been doing this for years, and we had gone really hard into DORA. And we knew that we had an issue around operationalization, like basically everybody does. When we asked ourselves how we fix that, we went, "In the development world, we had the same problem." I worked at Atlassian forever. We had a code review tool called FishEye Crucible, for those in the way-back machine, and it didn't work really well. It was a beautiful code review tool, but it was against Subversion. The trick was that you basically did the review after you had merged your changes. I can't remember anybody who ever, ever made a suggested change, because you're like, "I already shipped that. It's working, it's fine."

So, pull requests sort of had this thing where you were like, "Oh, we can have a conversation at the right moment and make a change to the way this is going to go down." We thought, "You know what? That's the same thing." If I'm doing a review with my CEO about how engineering is going and how we're doing these things, why would we not try and use that analogy?
You want to find the right moment, you want to find the right data, and you want to be able to have a conversation that changes outcomes at the right time. So yes, it's really a review, is what we think of it as.

[0:23:22] GV: That's very interesting. Okay. So yes, a review, I think that's a very good way to analogize it. When I was thinking about it, it was sort of, where have I been in this situation before? And I was like, "Well, this feels a bit like what a retro is, where we all talk about what went well, what didn't go well, et cetera." But I think the review analogy, because Pulse, that word, it is exactly that. It's not that we all arrange every two weeks to just get together in a room and say, "What went well? What didn't go well?" This has a much more frequent cadence to it.

[0:23:52] DE: Yes. Well, the other thing that we found, which was kind of obvious in retrospect but interesting: just like we don't ask you to change issue trackers to the Sleuth issue tracker, because, let's be honest, that's a recipe for failure, right? Nobody's going to do that. Similarly, if you want to add a new process to your process, if you want to adopt something new, how do we do that in a way that is the least amount of pain for you? The answer is that we attach to your existing rituals, right? Most organizations, just like with the primitives of issues and code review, have some rituals. You're going to do a sprint planning that's either weekly or biweekly. If you're big enough and you're SaaS, you're going to have an operations review where you're just checking, how many incidents did we have? How are things going? What are our KPIs? More than likely, if you're big enough, you're going to ask, how is this subset of engineering going? You know what I mean? Is it healthy or is it not healthy? If we have seven things in flight, how are they going? How much of our time is being spent on support? How much of our time is being spent on keeping the lights on?

Then if you zoom that up a level higher and you're big enough, you have 20 new hires coming in, how are we going to allocate those? We have 300 people. How are we allocating those? What does the business need us to be doing, and how are we tracking to those? The real things that we're doing every day and these big projects that we have, how are they moving? How are they allocated? Do they match what we want to be doing? These are conversations that are happening inside of organizations already, but they're happening very often with a very anemic amount of data, without necessarily doing a lot of the heavy lifting offline. The conversations tend to be very status-y in these meetings, instead of actually getting into the meat and potatoes of what you're going to do differently.

[0:25:40] GV: Absolutely. Yes, I mean, quite literally ceremonial is kind of how I often saw those meetings.

[0:25:44] DE: Yes, they're rituals.

[0:25:46] GV: Rituals, exactly. I'm afraid I have a fairly love-hate relationship with the term Scrum Master, because it was often the sort of semi-self-certified people who thought they could just get developers to work on a sort of factory-line-type arrangement, and it doesn't often work like that. To that point, a lot of our listeners are developers, and developer satisfaction is, I believe, a big metric that Sleuth actually brings in. Talk to me about that. How is that measured? What outcomes do you see from tracking it, and how do you actually improve it?
[0:26:20] DE: It's hard to tear it all apart. We focused for a really long time on DORA metrics, and something that I, as a former engineer, really enjoyed about that is it's a hard number. I can drill into that number and I can understand where it's coming from and I can compare it to other numbers. That is very satisfying to an engineer, to be like, "Yes. We don't have to argue too much about this. There's some truth behind this." But the truth of the matter is, a lot of what I did in my past life as an engineering leader was one-on-ones and making sure that my team was happy, that we were having fun doing the work that we were doing, that we felt satisfied with that stuff. So, there's the qualitative and the quantitative, and I think if you want a full understanding of your engineering organization, you have to blend the two.

So, DevEx surveys are a great way of doing that. It's not really new, but I think it has seen a bit of a resurgence with the whole SPACE framework. I think the great example is that you are elite in all categories of DORA, but everybody wants to leave your company because they're miserable, right? That is very possible. I've been in environments where that's true. So, it's obvious that it's not the full picture if you're not getting all of it and mixing it together.

[0:27:42] GV: I think from what I saw in the product, and I did actually ask you about this while we were going through it, there's anonymization of input from developers. That to me seems very important. Could you just speak to that?

[0:27:53] DE: Yes. I mean, from day one when we started this journey, it was very important for us to worry about developers. I think that this whole space dances on a razor's edge, and that razor's edge is, if devs are pissed off or think that you're just trying to track them for no good reason with metrics that don't work, they are going to revolt and they're going to be like, "F you. I don't want to do this." I understand that because I have been in that same spot myself. But the interesting part is, and I truly believe this, on every team that I have ever worked on, the individuals on that team want to do better as a team. They're really interested in improving. Developers are super cool in that way, that they want to know, how do I do this better the next time? I don't want to do the same thing over and over and over again. I actually want to get 2% to 7% better every time I do a thing.

So, you frame metrics at the team level, which is really very easy to do with DORA, and you don't surface stack-ranking views and those sorts of things. Similarly, you worry about anonymization when it comes to survey data. You basically take everything through the lens of how we can improve the team, and not worry so much about the individuals, because software development is a team sport. It is not an individual sport. That's our take on everything. We have lost deals to competitors because we have had people show up to sales calls and be like, "Look, I need to stack rank my people because I need to know who to fire first." I'm like, "Well, that's really sad for you and I wish you luck, but we're not the right place for that. I think there are other ways that you can figure that out, and you're not going to get that from us."

[0:29:35] GV: Yes. That's an awesome approach. I really like that.
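One common way to keep survey data anonymous in practice is a minimum-group-size rule: results are suppressed for any team too small to hide individual respondents. This is a standard technique, sketched here under invented field names; it is not necessarily how Sleuth does it.

```python
from collections import defaultdict

MIN_GROUP = 5  # suppress results for groups too small to stay anonymous

# Hypothetical survey responses: (team, satisfaction score 1-5).
responses = [("payments", 4), ("payments", 2), ("payments", 5),
             ("payments", 3), ("payments", 4), ("infra", 1), ("infra", 2)]

by_team = defaultdict(list)
for team, score in responses:
    by_team[team].append(score)

for team, scores in by_team.items():
    if len(scores) < MIN_GROUP:
        # Reporting a two-person team's average would effectively de-anonymize it.
        print(f"{team}: not enough responses to report anonymously")
    else:
        print(f"{team}: average satisfaction "
              f"{sum(scores) / len(scores):.1f} (n={len(scores)})")
```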
I think it's just always obvious when it's a developer, whether past or current, that is the CEO of a company, especially when the platform is ultimately something that developers are being asked to use, because they might not be the ones actually buying it into the organization. It really matters that whoever's leading that company and that platform can actually put themselves in the shoes of a developer, which I don't believe is the background behind a lot of other platforms. So, that comes through really strongly.

[0:30:08] DE: Yes. I mean, it's a chicken-and-egg thing too, right? I like working in developer tools because I wouldn't know what to do with an insurance platform. You know what I mean? I buy my insurance. I'm like, sure, I'm insured because I'm a human and I live in society and I have to, but I don't care. You know what I mean? I don't get passionate about that. I don't really know what's good and bad in that arena, whereas developer experience, now that's a different thing. I'm like, "Yes, man, I've run teams." You know what I mean? I've run really excellent teams, and I love the idea of encoding the best practices into a platform and allowing teams that have some distance to make up the tools to do so.

[0:30:47] GV: Just in terms of the actual implementation in a business, you mentioned in the material to start with your biggest problem, and that you can actually implement one part of the product, or two, or three. How does that actually work in practice?

[0:31:02] DE: Yes. With the Pulse product itself, it's a little bit like a widget builder. Based on the different systems that you can connect to, you can get pull request outliers. You can get issues shipped in a certain period. You can get support escalations from a support system. You can get a cost analysis breakdown of the work that your developers are doing based off of Jira, and those sorts of things. So, there are all these different widgets. But then again, because we understand that teams have these rituals that are fairly common, we've bundled up a bunch of templates that say, "Hey, this is the right level to start at if you're trying to do a sprint review. This is the right set of data that you could start with to do your CTO conversation." I'd say the customers that use it probably stick with 95% of what's in the template, but then they drag in this and they drag in that, and they're like, "Oh, we like this ritual of having a screen share of what we built, and I want to pull in that widget here so that I can have that in the presentation."

So, you can see how, if it's templatized, you're like, "Look, we don't do operational reviews right now. We would like to, that's aspirational, but I'm not going to worry about that right now. But the burning issue that I have right now is I don't understand allocations, and when I talk to my board or to my executive team, I don't have enough data to tell them where we're at and how we're tracking, and I want more rigor around that. So, I'm going to start there." You just plug it into Jira, get that information, have those conversations with a smaller group, and decide if that works. And if you quite like it, you're like, "Hey, engineering managers, I could see how you could use this to run that level of conversation as well."

[0:32:48] GV: You mentioned the automations and the integration side. I think you also offer 100 pre-built automations. How do you decide on those?
Is that quite customer-driven, or is it again chicken and egg, where you're having to think ahead of what a customer doesn't even know they need yet, but you think they do? How does that look?

[0:33:07] DE: What we'll do is, because we're fortunate enough to have a really strong view into the data, we also have a lot of opinions. Say, for instance, we see that review lag time is not great. It's outside of norms. It's outside of your norms. It's a bottleneck that we can identify. Then we'll suggest that and basically throw it into the interface, just to be like, "Hey, I think we can help you here." But it's absolutely up to the customer as to whether they want to adopt these things or not. They're all opt-in and opt-out. We've had folks that turn on a thing and think it's going to be a great idea, and then they're like, "Actually, it was a terrible idea. It just generated a lot of noise."

But we also take the same approach to everything that we build. The automations are a little bit like Legos, you know, and if the audience is more developer-oriented, they might enjoy this. There are basically a lot of little custom conditions and custom actions. There's a whole rule set, and there's a YAML that's behind all of these things. So, we have built a marketplace that lets us say, "Hey, we put together a YAML for a very common thing that you might want to accomplish," but we've also allowed you to just add custom ones. Then we say, "Look, you can build the craziest Lego thing you've ever wanted. If you want to build a Lego Death Star, you go ahead and do it," and that allows you to get very, very customized with the automations that you want to do.

[0:34:28] GV: Nice that you've got that customization piece in the platform already. Sleuth, I think, is still a fairly young company, but you've got a lot of quite big names on the front page, the trust banner, as it's often called these days. Could you just speak a bit to your go-to-market, especially since we've got a lot of developers in the listener base, and startups generally? What did that look like in terms of getting these quite big names on board?

[0:34:54] DE: Oh, man. Startups are rough is what I'll say. I mean, I would love to just tell a story of a straightforward sprint and winning the world record in the 100 meters or something like that, but that's not how it happened. We started with this idea of deployment tracking, which we were like, "Hey, this is a great area." We didn't quite find the traction that we wanted there, but it really set us up well, and it was fun. We were always surfacing the DORA metrics on the side, and then we had all these customers being like, "I love that stuff on the side. Could you put it in the middle?" We were like, "That's a good call." Same with expanding out some of these metrics. But as you do so, you start to get a little bit of noise. We were fortunate that we did have the DORA folks out there, because originally, with deployment tracking, we were just trying to build a space, which I would highly recommend that nobody does. As a small startup, you have only so much energy and time, and you're probably better off spending that energy and time on product-market fit rather than educating a market that they need what you have.
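Stepping back to the "Lego" automations for a moment: a condition/action rule of the kind Dylan describes might be evaluated by something like the toy sketch below. The rule shape and the YAML shown in the comment are invented for illustration; Sleuth's real schema is not public here.

```python
import re
from dataclasses import dataclass
from typing import Callable

# A hypothetical automation rule might read, in YAML:
#   when: pull_request.opened
#   if: "title lacks issue key"
#   then: "comment: please add an issue key"

@dataclass
class Rule:
    event: str
    condition: Callable[[dict], bool]   # the "if" Lego brick
    action: Callable[[dict], None]      # the "then" Lego brick

rules = [
    Rule(
        event="pull_request.opened",
        condition=lambda pr: not re.search(r"\b[A-Z]+-\d+\b", pr["title"]),
        action=lambda pr: print(f"comment on #{pr['number']}: add an issue key"),
    ),
]

def handle(event: str, payload: dict) -> None:
    # Run every matching rule whose condition holds; all rules are opt-in.
    for rule in rules:
        if rule.event == event and rule.condition(payload):
            rule.action(payload)

handle("pull_request.opened", {"number": 42, "title": "Fix login redirect"})
```

The appeal of this shape is that conditions and actions compose freely, which is what makes a marketplace of pre-built rules plus fully custom ones workable.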
So, that was a really big helper for us, that organizations that wanted to adopt metrics would basically do some research and end up at DORA metrics. That's how Puma and Red Bull and some of those folks ended up coming our way, where they were just sort of like, "We know we need this, we know we want to do this. We did some research. You guys seem like the ones that are killing it in the space."

[0:36:24] GV: Yes, I think that's great. I'm sort of in the same boat, the same situation. The common wisdom is exactly that: do not create the space. Find something where people are actually actively going out and they want to put money down somewhere. They know they've got this thing that needs to be solved, and you figure out how to angle and fit your product into solving that problem. As much as developers and dreamers, I think you can call us that, always think there's a better way to do something, and there's this grand solution out there if only people knew about it, the reality is, for businesses that want to pay money for products, it doesn't often start that way. It starts with an existing need. I think anchoring around an established framework like DORA is a really smart way to approach that. So, that's super interesting. Just looking at the future for Sleuth, what can you reveal? What are you going to be adding to the platform in, I don't know, the next 6 to 12 months? What does the roadmap look like?

[0:37:24] DE: Oh, man, there's always 50 million things that we want to do. Prioritization is one of the hardest things that we've got going on. But I mean, with the Pulse platform, it's really exciting because we have all of these widgets that allow you to tell a much broader and richer story. There's also a whole aspect that is just collaboration. So, we have a V1 of almost like a GitHub-style workflow review. You can go in there and say, "I own this section. I want to be reminded that I need to have this done by a certain time. I want to be able to approve and request changes." We have improvements to that coming. We have just so many capitalization widgets and other cool things that we can do with the data that we have. It's really just around telling that much broader story. Honestly, I think some of the automations that we have right now are very oriented towards the DORA side of things, but there are a ton of very, very interesting things that you can do at that executive level too, where you're saying, "Hey, I'm not just automating code-flow stuff anymore. Now, I'm getting into helping the business remember that you need to do your quarterly goals." You know what I mean? There are so many things that we do in the world where automation and data and the interpretation of all of those things have a lot of really interesting potential. So, I guess that was sufficiently vague for a future roadmap. I would say, anybody who wants to really know, jump on a call with us and I will talk to you about our next 6 to 12 months of roadmap.

[0:38:55] GV: Yes, absolutely. So, we'll wrap up shortly with a couple of sort of side questions. But before we do that, if listeners want to actually try Sleuth out or speak to someone, where's the best place for them to go?

[0:39:08] DE: I mean, our website is the right place to go. Go to sleuth.io. There's a big button that says, "Talk to us." If you're interested in just trying out the DORA stuff, you can follow a link that lets you self-sign up for that.
We are also really big on openness. So, our DORA projects are all available for anybody to look at. If you want to see what this would look like with actual data in it, you can see the pull requests that we're working on and see what our change lead time looks like. Spoiler alert, it's pretty good.

[0:39:34] GV: Awesome. That's really fun. So, that's sleuth.io, S-L-E-U-T-H dot I-O. Okay. So, just a couple of final questions like I ask most guests. First one, what is a day in the life of Dylan, CEO of Sleuth, like? What does that look like?

[0:39:53] DE: My calendar drives my day. It's a lot of interrupts. It's good stuff. I mean, let's see, today: talking to a new hire just about the overall direction of the company. Also doing some marketing activities with my CTO. We're running this kind of fun campaign where we're interviewing some engineering leaders and then turning that into some interesting content. What else was on my calendar? I'd have to look at my calendar. I can't even remember.

[0:40:19] GV: A podcast, apparently?

[0:40:20] DE: Oh yes, a podcast, apparently, at the end of the day. Absolutely. I did one of those interviews for that campaign. We're using an AI-based SDR tool, so I jumped on a call just to make sure that that campaign was up and running. It's a lot of different things, but that's why I'm doing it, because it's kind of fun.

[0:40:38] GV: Yes, absolutely. Couldn't agree more.

[0:40:41] DE: Don't look for focus time though.

[0:40:42] GV: No.

[0:40:42] DE: That's hard to find.

[0:40:44] GV: Yes, final question. If you could give advice to yourself when you were starting out in tech, knowing what you know now, what would you say?

[0:40:54] DE: I would tell myself to potentially be a little less opinionated and do a little more listening. That's generally been a journey of my entire life. But if I look back at me back then, I think I knew the answer to everything, which obviously is wrong, and I definitely had a little bit of that arrogant developer thing: I know how to fix things, so I must know how to fix all the things. Yes, looking back, I'm sure it wouldn't have hurt if I was a little bit more like I am today, where I'm much more able to hear others and be more open-minded to different opinions and different potential possibilities. But then again, maybe I just had to get old in order to get there too.

[0:41:33] GV: That's good. I mean, you wouldn't be the first developer-turned-CEO that we've had on the podcast, and that's kind of what they've said as well. I think it just makes a lot of sense. At least one other agreed on that point that, unfortunately, a lot of us start out very opinionated, thinking we understand the world inside out. But as we get older, it's often not the case.

[0:41:55] DE: Not entirely true.

[0:41:58] GV: Funny, that. So, Dylan, it's been really great to have you on the podcast today. I think we've learned a ton about Sleuth. Again, just a reminder: sleuth.io, that's where you can go to check out the product. It's been really interesting to follow along. I really love the concept of this product, given that it's not trying to replace Jira or something else. It is really trying to help the whole developer experience, and effectively the engineering experience, for everyone involved. So yes, keep it up; we want to follow along.

[0:42:30] DE: Thanks, man. This has been a really fun conversation. I appreciate it.

[0:42:33] GV: Thanks a lot.

[END]