EPISODE 1750 [INTRODUCTION] [0:00:00] ANNOUNCER: SoundCloud is an online platform and music streaming service, where users can upload, promote, and share their music or audio creations. It was founded in 2007, and is known for its community-driven approach, allowing artists to interact directly with their fans, and receive real-time feedback on their work. Matthew Drooker is the Chief Technology Officer at SoundCloud. He previously worked at Turner, and has deep experience as a technologist, much of it in the media industry. Matthew joins the show with Jordi Mon Companys, to talk about his background, the evolution of the SoundCloud platform, its current tech stack, and much more. This episode of Software Engineering Daily is hosted by Jordi Mon Companys. Check the show notes for more information on Jordi's work and where to find him. [EPISODE] [0:00:57] JMC: Hi, Matthew. Welcome to Software Engineering Daily. [0:00:59] MD: Hi. Nice to see you. [0:01:01] JMC: Likewise. Tell us a bit about yourself. What brings you here? What's your background? What's your story? [0:01:05] MD: My name is Matthew Drooker, I live in Atlanta, Georgia in the States. I've been in the industry for about 35 years. I started when I was in school at Georgia Tech, at a little media company called Turner Broadcasting. Turner Broadcasting had morphed into Time Warner and then WarnerMedia. For the 32 years I was there, I had a lot of different jobs, probably 12 or 13. The last two are the ones I find valuable in this conversation. I was the CTO over at CNN Digital. CNN Digital was responsible for CNN's presence on the web, and on mobile apps, and everywhere else that a digital product would be viewed and read. I did that for about five years. A lot of time at CNN. Every year you're there is considered dog years, so it kind of ages you quickly. The job after that, while I was still at WarnerMedia, was running a lot of the platforms for our digital brands. 
So, those digital brands are CNN, but also March Madness, NBA, and some work with HBO Max. I had a phenomenal time in that media space that was focused on video. Came to a point where I said, "All right. Well, let's try something different." A phenomenal opportunity came up at SoundCloud to be able to work in a similar space, but now really focused on audio. But I always had sights and goals to bring video to the platform. So, I've been at SoundCloud for about two and a half years now, doing some things that are the same, a lot of things that are different, and hopefully we'll talk a little bit about that. [0:02:36] JMC: Yes. Just before we jump into the actual meat of this conversation, it's really unlikely that anyone in this audience, in the Software Engineering Daily audience, is unfamiliar with SoundCloud. But just for the sake of that, what is SoundCloud? [0:02:50] MD: Yes. SoundCloud is an artist and fan platform for music. It was founded in 2007. A couple of gentlemen from Sweden moved to Berlin and started a tool that let artists share their music with fans around the world. This was a UGC platform like no other that started getting music into the ears and hands of people; whether you created a track, or I created a track, anyone in the world could hear it. That was the promise, and it continues to be the promise of SoundCloud, which allows creators and fans to get together and interact. So, UGC 2.0. [0:03:36] JMC: Nice. I love it, actually. I used to DJ a bit back in the day, and mix things, and I used it a lot. I don't have any more time to do so, but I love it. My friends that love music still love it. [0:03:47] MD: Well, I mean, we have about 40 million of those creators on platform now, in close to 193 countries. Our responsibility is to really foster those connections around the world. A lot of our competitors are great competitors, but that's not their goal. Our goal is to connect those creators, those 40 million creators, with millions of listeners. 
Anyway, those are some of the things. On average, we get about 170,000 uploads a day of those kinds of audio tracks. [0:04:22] JMC: Wow. Before we actually jump into the infrastructure and all the tech stack that actually powers that, and makes it reliable, and responsive, and available everywhere. Though you weren't in the company four years ago, 10 years ago. Tell us a bit about what you know about what was happening. What were the decisions made in the past to stand up SoundCloud? I guess, why were you brought in? What were the problems that you inherited? But also, I guess in general, what were the directions that you wanted to take that previous stack into? [0:04:55] MD: Right. I don't usually use the word problems. I usually use the word opportunities. So, when you hear me say opportunities, they're synonymous, but I think it puts a different psychological take on it. So, I just want to say, "I'm not disagreeing with you, but I'm going to call them opportunities." The stack is like that of many companies that started small, and that's not a bad thing; most companies start small. So, a lot of the things were built in-house. They continue to be run in-house, in our data center that we had in Amsterdam. Some of the tooling and some of the ideas that were grown out of SoundCloud are things that many companies continue to use today. Prometheus is one of them. Some of the stack that I inherited, to your question, ran on-prem in an Amsterdam-hosted data center, using monitoring and instrumentation powered by Prometheus, with an architectural pattern known as BFF. BFF is backend for frontend. Many companies use that. I inherited gracefully this platform that powered all of those things that I had said earlier. What it did was, it required a lot of engineering time and maintenance to keep that stack running. There was technical debt that had accrued. And every company gets technical debt. 
Some of the challenges or opportunities were to solve some of the technical debt with either modern tools or vendors, and not have such a reliance on a data center, because that also requires bodies to keep it running. To modernize the stack with languages and technology that can be run at a higher velocity of change. Those were some of the reasons why I'm here. Those were some of the things that I kind of grokked and said, "Oh, this is going to be something fun." [0:06:49] JMC: Can those be summarized as becoming more cloud-native, I guess? [0:06:54] MD: Totally. I mean, you may have been in a meeting, I use those same words. Becoming cloud-native was one of the goals. That has a lot of different connotations to people, whether it be running things in Lambda, or running things in Fargate as a hosted container, or using something like RDS as a hosted database platform. SoundCloud did use CloudFront as a CDN. So, we were using CDNs. We used S3 as our primary storage mechanism. But yes, those other things beyond CDN and object storage were how do you become more cloud-native, and that runs the gamut along the way. [0:07:31] JMC: Talk to us about those engineering choices, then. Not only about the stack, but maybe about culture too, because I suppose this would entail changes in that sense as well. [0:07:39] MD: I mean, any time you change something that is ingrained into a company, that requires a cultural pliability that lets people be heard and understood, but knowing that it's not going to continue to be the direction. So, some of the changes we made, to your point: we became cloud-native, and are continuing to become cloud-native. Somebody had asked me in one of the first meetings, "When are we done? Do I have a date?" I said, "We're done when we're done, because it's a journey." We didn't put a date on it. We started working through training, how do we become more cloud-native, bringing in people that had done this. 
So, they became our North Star and conversation for what a day in the life of an engineer looks like when becoming cloud-native. On languages, one of the choices: we were strictly a Scala and Ruby shop, and we started bringing in some new languages like Go, TypeScript, and Node. We had a really hard time finding a scalable pool of engineers that we could pull from in Berlin or anywhere else if we were only using Scala. Bringing in new languages was also a cultural change. There's nothing wrong with the BFF architectural pattern, but I'm a huge fan of GraphQL and the way that GraphQL starts decoupling some of the requests that come from clients, whether it be mobile or web, to our backend APIs. So, we brought in GraphQL. [0:09:08] JMC: Would you mind actually drilling down a bit, precisely, into the differences between BFF and the way in which GraphQL actually works? [0:09:17] MD: I mean, if you think about a BFF, it's really a REST pattern; you implement it with a REST client. So, anyone that knows REST knows the challenges that come along with REST. Some of them are tightly coupled contracts between clients and servers, and dependency on the server team to implement new fields. There were different BFFs for each of our modalities. So, we had a web BFF, we had a mobile BFF, we had a partner BFF. A lot of copying and pasting of code and architectural patterns. Those were some of the challenges. Again, it goes back to tech debt; the amount of engineering time it would take to maintain those became measured in double digits, so maintenance was higher. One of the ways to solve all of those deficiencies of the REST architectural pattern is using GraphQL. GraphQL has been introduced, and so that became, how do we train on it, how do we write our backend services around it. 
We use federation to render the graph: each of the teams supports their domain models, which are then federated through our federated gateway. For example, in SoundCloud, you have a profile, you have tracks, you have playlists, you have recommendations, you have search. All of those are domain-modeled objects in the graph that then get federated through our federated gateway. Those are some of the architectural changes that we've been making. Along the way, we still have 193 countries to operate in, millions of users that are using it every day, and a team of engineers measured at about 400. We have spent a lot of time making sure we stabilized the system while we made some of those changes. Also, since you brought it up earlier, becoming cloud-native. How do you now run these things in AWS and GCP? We use both cloud providers and understand the tradeoffs, because there are some workloads that we continue to run on-prem. So, we still have our data center, but we've optimized it for workloads. As I had mentioned, we have about 170,000 uploads a day. As we've been rebuilding our media encoding stack, the operational cost to run that on-prem is much lower than it would be to run in the cloud. We actually kept some on-prem infrastructure, modernized it, and we run some of our workloads on-prem, because bare metal for those kinds of workloads does make sense for us. It wasn't one-size-fits-all. A lot of the teams became involved with how we make these decisions, bringing finance plus engineering skills to the table to make the right call. [0:12:01] JMC: Okay. I guess, what's your take on this debate, as we were talking before the recording started, of repatriating all your workloads, which 37signals suggests is more feasible and more reasonable to do, versus having everything in the cloud? I guess your take is hybrid, right? [0:12:19] MD: Absolutely. 
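To make the federation idea above concrete, here is a minimal sketch in TypeScript (one of the languages mentioned in this conversation). This is not SoundCloud's actual implementation, and all names are invented for illustration; a real deployment would use a federation framework such as Apollo Federation rather than hand-rolled merging:

```typescript
// Toy sketch of schema federation: each domain team owns its own
// resolver map, and a gateway merges them into one graph.
// All names here are illustrative, not SoundCloud's real services.

type Resolver = (id: string) => Record<string, unknown>;
type Subgraph = Record<string, Resolver>;

// Each team contributes only its own domain objects.
const profileSubgraph: Subgraph = {
  profile: (id) => ({ id, displayName: `user-${id}` }),
};
const tracksSubgraph: Subgraph = {
  track: (id) => ({ id, title: `track-${id}`, durationSec: 180 }),
};
const playlistsSubgraph: Subgraph = {
  playlist: (id) => ({ id, trackIds: ["t1", "t2"] }),
};

// The federated gateway: one entry point that routes each field
// to the subgraph that owns it, and rejects overlapping ownership.
function federate(...subgraphs: Subgraph[]): Subgraph {
  const graph: Subgraph = {};
  for (const sg of subgraphs) {
    for (const [field, resolver] of Object.entries(sg)) {
      if (graph[field]) throw new Error(`duplicate field: ${field}`);
      graph[field] = resolver;
    }
  }
  return graph;
}

const gateway = federate(profileSubgraph, tracksSubgraph, playlistsSubgraph);
// Clients query one graph; ownership stays with the domain teams.
console.log(gateway.track("42").title); // "track-42"
```

The point of the pattern is ownership: the profile, tracks, and playlists teams each evolve their own subgraph independently, while clients only ever see the single federated graph.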
[0:12:20] JMC: So, making the best of some workloads, having them locally processed, in this case, in Amsterdam, and others working in AWS and in GCP. If that's the case, talk to us also about the use of Kubernetes. Because we haven't mentioned it, but is Kubernetes involved? [0:12:34] MD: Absolutely. I mean, you can't throw a rock without hitting Kubernetes at a company. As the way to do orchestration around containers, Kubernetes has standardized that. In my history, we used Kubernetes back at CNN in the pre-anything days. It wasn't even in prod, it was still in dev, and it ran a lot of the early CNN stack back in 2014, on early versions of Kube and Node 8, just a lot of stuff. So yes, at SoundCloud, we are using Kubernetes on-prem, but we're also using EKS, hosted Elastic Kubernetes on AWS. Then, we have also brought in ECS with Fargate. There are tradeoffs for when you want to run a Kube stack on-prem, or you just have a bunch of services that you can run efficiently and optimally in ECS. Some of those levers that you pull and decisions you make are really focused on the workload. What I have coined "paying by the drip": if you have a lot of chatty services, you don't want to run that in Lambda, which is another option. So yes, we run on-prem Kube, we also now use hosted EKS, and then ECS. So, a lot of different choices. What it means is a lot of different training to let engineers understand what tool to use. Then, we have principal engineers that help some of the teams make different choices or play devil's advocate. One of the things we are bringing to the table is cost-benefit and cost ROI analyses before we launch new projects into the cloud, so we do know what the impact to our budget is going to be. [0:14:11] JMC: How does that manifest? How does that sort of show itself? I guess, the FinOps side of things, when someone is mocking up a new feature, testing it, and eventually deploying it. [0:14:23] MD: Yes, all of the above. 
Again, this is one of those, you're done when you're done, which means you're never really done. So, the FinOps expertise is an ongoing muscle that we're continuing to build. We don't have a team that's 100% dedicated. So, engineers are being asked how much it is going to cost to run. Have you thought about this? Have you thought about that? A day in the life of some of the new projects, as we move from some of our legacy tech into our new tech, includes: what's the RPS, requests per second, on it? How is it going to look through a CDN? What's your caching strategy? Can you offload? Since I mentioned GraphQL, can you use persisted queries to offload traffic from the origin? One of the interesting things about SoundCloud that's much different than my time at CNN is, it's really hard to cache a lot of the really personalized data that comes back per user. So, your playlist is different than my playlist, which is different than anyone else's playlist, and the tracks that I can see, based on my country and my subscription plan, are different than yours. We are going through a lot of exercises for what kinds of objects we can cache effectively. All of those are factors that go into the mindset of how do I become cloud-native in a way that doesn't blow the bank. That's not sexy, but it's the only way to run a business. As we mature our engineers to be soup-to-nuts, cradle-to-grave engineers, it means understanding what it means to implement something and what it means to run it. I think that has been one of the best experiences I felt our team has done, which is embrace the job of cradle-to-grave engineer. [0:16:12] JMC: Since you are a person that's seen Kubernetes evolve from not even being in production to what it is now, which is literally a project that has enabled a foundation and an industry that moves millions of dollars a year. [0:16:27] MD: Billions, yes. [0:16:28] JMC: Yes, billions. 
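As an editor's aside on the caching point above: why personalized data caches poorly can be shown with a toy cache-key function. A shared cache entry is only reusable by requests with the same key, so every dimension a response depends on fragments the cache further. All field names and paths here are hypothetical, purely for illustration:

```typescript
// Illustrative sketch: why personalized responses cache poorly.
// Field names are hypothetical, not SoundCloud's real schema.

interface RequestContext {
  path: string;     // e.g. "/charts"
  country: string;  // licensing varies per country
  plan: string;     // subscription tier gates some tracks
  userId?: string;  // personal playlists and recommendations
}

function cacheKey(ctx: RequestContext): string {
  const parts = [ctx.path, ctx.country, ctx.plan];
  // Once the response depends on the individual user, the "shared"
  // cache is effectively per-user: one entry per listener.
  if (ctx.userId) parts.push(ctx.userId);
  return parts.join("|");
}

// A catalog page varies only by country and plan: cacheable.
console.log(cacheKey({ path: "/charts", country: "DE", plan: "free" }));
// "/charts|DE|free"

// A personalized feed varies per user: near-useless at the CDN layer.
console.log(cacheKey({ path: "/feed", country: "DE", plan: "free", userId: "u1" }));
// "/feed|DE|free|u1"
```

The same logic explains the exercises Matthew mentions: the work is deciding which objects can live behind a coarse key like country plus plan, and which are inherently per-user.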
I myself, I've worked in that industry for now over a decade. So, I'm really thankful to that specific project. What is your opinion on the evolution of Kubernetes? Because back in the day when you used it at CNN, there were other contenders out there, Mesos. At that time, other people might have thought that would be actually in a better position to become the container orchestration of choice. But it turns out that Kubernetes won. [0:16:52] MD: Kube won. I mean, based on Google's Borg project. I mean, when you take Kubernetes on as a project, there is a lot of complexity and configuration that goes along with it, and some concepts for different size projects may be overkill. Like the operational and the mental overhead of managing Kube is in a different realm, and that's not a bad thing. It is what it is. But then, as an engineer, if you're trying to build a service, some of those things should be kind of like obfuscated away. We played with tools like Rancher that kind of simplified some of the deployments of Kubernetes. We have spent some time here at SoundCloud understanding our CI/CD pipelines, maturing them to help some of those deployments and understanding that Kube is a complex engine and orchestration piece that sometimes you just don't want to know, and sometimes you shouldn't care. So, how do you as an engineer know what you need to know to get something going? We've spent some time, to be honest. We have tried to keep some of that away from many of the engineers just because that is another level of operational and mental overhead that we don't feel is valuable right now, because there's so many other problems to solve. Those problems and opportunities are at the forefront of our business, which is creating a better product. For an engineer to understand how Kubernetes runs, if you only have so many brain cycles to run, I don't want them to focus on it. 
This is where you might use Fargate to simplify deployment tasks. We've been huge fans of migrating to Terraform to help us understand how to get services up quickly. Some of the teams are using CDK, because they wanted a procedural way to deploy infrastructure as code. So, we haven't been precious, although we've definitely focused on Terraform for many of the complicated services. There are some instances where we just said, "All right. We can use CDK to quickly spin up a service." Anyway, that's probably too much data there, but that's kind of what my point is. [0:19:00] JMC: No, but it's true. Kubernetes is an infrastructure tool. The main user persona there is definitely a Linux admin, if you wish, someone with deep knowledge of the internals of the system, and I don't think software engineers should be exposed to it, because it isn't the best use of their time. [0:19:17] MD: No. But there are instances where an engineer needs to understand the network routing and understand what latency is being added to the overhead. How do you do round-robin requests if you have state that is required, something like WebSockets or something else? So, there are moments where, based on the complexity and scale, and again, we have a monster amount of scale, you have to make some decisions that manage Kubernetes at a state management level, to understand that a service, if you remember the old mantra, any service can be considered cattle, not pets. That whole mantra means that you have to architect something completely differently. Those are some of the things that you want people to know, but you may not want them to spend time understanding how pods and clusters are managed. [0:20:08] JMC: Yes. I think it all falls into, again, I guess, the developer experience that you choose to offer your software engineers, right? And there's a limit to what they should be knowing and being exposed to in Kubernetes, because it can get deep and complex. 
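The round-robin versus state tension Matthew raises can be sketched quickly. This is an illustrative toy, not how SoundCloud routes traffic; in practice a load balancer or service mesh would implement session affinity, but the core idea is hashing a stable key instead of rotating blindly:

```typescript
// Hedged sketch: plain round-robin breaks stateful protocols like
// WebSockets, because consecutive requests land on different backends.
// Hashing a stable key (e.g. a session id) keeps a client "sticky"
// to one backend. Backend names are made up for illustration.

const backends = ["pod-a", "pod-b", "pod-c"];

// Round-robin: fine for stateless services, but a session hops pods.
function roundRobin(counter: number): string {
  return backends[counter % backends.length];
}

// Hash routing: the same session id always maps to the same pod,
// so in-memory connection state stays valid across requests.
function hashRoute(sessionId: string): string {
  let h = 0;
  for (const ch of sessionId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple stable string hash
  }
  return backends[h % backends.length];
}

// Two consecutive requests from the same session:
console.log(roundRobin(0) === roundRobin(1));             // false: hopped pods
console.log(hashRoute("sess-42") === hashRoute("sess-42")); // true: sticky
```

The cattle-not-pets mantra is the other half of the tradeoff: sticky routing makes individual pods matter again, so losing one means re-establishing the sessions it held, which is exactly the architectural decision he describes having to make deliberately.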
[0:20:23] MD: They can learn it, use it. We don't want to hold anything back. We don't want to keep anything behind the curtain. But we want engineers to spend the time on adding value to the company and the product. Sometimes it's, don't worry about that. [0:20:39] JMC: This makes the following question inevitable, because it seems that everyone is adding a specific value to their product, which is AI. I wonder, in the world of artists, music creators, how is this being used? How do you guys see AI adding value to your users? [0:20:56] MD: Yes. In 2024, AI is the term; in 2022, it was NFTs. Years before that, it was client-server. So, those of us that have been in this industry have seen how the term comes along and then everything has to revolve around it. But AI is at the core of many of the features and products that SoundCloud has brought to the table. We acquired a company called Musiio that had an amazing set of tools that were able to identify the music that was getting uploaded at the velocity that we get music uploads. We needed to understand what type of music it was. Was it jazz? Was it rock? Was it techno? Understand beats, so we can start getting recommendations out to the users very quickly. Because we want those creators that upload content to get listens as fast as possible, and we don't want to put the wrong kind of music in front of the wrong kind of ear. We needed AI back before it was 2024. That is one of the really big uses of artificial intelligence, whether it be AI, machine learning, or a set of algorithms that looks at the waveform and understands that, given this waveform, there's a pattern that says that's techno. So, that's one great use of AI that is core to our product. Another one is the recommendations. How do you recommend tracks that are newly uploaded, at the velocity that we get uploads? Creating that creator and fan connection means our recommendations need to be pretty complicated compared to normal label content that we get. 
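As a caricature of the waveform-analysis idea above: production systems use machine learning over rich audio features, but even a toy tempo heuristic shows the overall shape, extract a feature from the audio, then map it to a label. The BPM ranges and genre labels below are invented for illustration:

```typescript
// Toy caricature of audio classification: estimate tempo from beat
// timestamps, then map the tempo to a rough genre label.
// Real systems use ML on much richer features; the thresholds here
// are invented purely to illustrate the feature -> label pipeline.

function estimateBpm(beatTimesSec: number[]): number {
  // Average inter-beat interval, converted to beats per minute.
  let total = 0;
  for (let i = 1; i < beatTimesSec.length; i++) {
    total += beatTimesSec[i] - beatTimesSec[i - 1];
  }
  const avgIntervalSec = total / (beatTimesSec.length - 1);
  return 60 / avgIntervalSec;
}

function roughGenre(bpm: number): string {
  if (bpm < 90) return "downtempo";
  if (bpm < 120) return "hip-hop";
  if (bpm < 140) return "techno";
  return "drum & bass";
}

// Beats every 0.46 seconds is roughly 130 BPM.
const beats = [0, 0.46, 0.92, 1.38, 1.84];
console.log(roughGenre(estimateBpm(beats))); // "techno"
```

The point is the pipeline shape, not the heuristic: at 170,000 uploads a day, tagging has to happen automatically at ingest so recommendations can surface a new track quickly.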
By the way, SoundCloud still gets every bit of label content that our competitors get, whether it be Taylor Swift or anyone else. Those come in at just the same time. So, our value proposition is to offer those label artists, as well as all the other creators that upload, in a way that keeps people wanting more. So, those recommendations are really important to us, and so we spend a lot of AI cycles focusing on those recommendations. A couple of other uses: we're starting to play around with how we do our Zendesk. We use Zendesk as a way to log tickets for all those creators. How do we quickly get answers versus having people have to answer the ticket? So, we're doing some of that. Dynamic art creation. How do we use Stable Diffusion or some of the other tools to get cover art if you're an artist and you need to upload art, because you want art on tracks? How do you use tools like Stable Diffusion to do "dog on a beach with a pizza" and get a good art image that goes along with it? That's another use for it. Those are the kinds of things that kind of bring us together. What we're seeing in the future is also, if everyone wants to be a creator, or if you have a goal but you can't play an instrument, how do you let AI lay down some background track music? You give it some words, and now you have a drumbeat and maybe some strings that go along with it. So, some of those things now also open the world to creators that may not have all of the training and skills, but have the passion that would normally become the talent, right? If you have passion, you can offset some of what's lacking with some of these new tools that are coming out. DJ mixes, things like that: you still need somebody with artistry, you still need the creative side of it. So, we're definitely not saying you can create great work without art. We don't want to downplay what an artist brings to the table. 
We want to enhance it and we want to enable it, and that's where some of our tools with our partners in AI have come into play. [0:24:55] JMC: So, one of the challenges with AI workloads and the processing that they require is that they are quite heavy to lift, to stand up, and to keep running. In the hybrid setup that you just laid out briefly before, in a business that has, well, the geographic distribution that you guys have, which is basically global, and that handles peak traffic situations, especially with high-profile music releases and stuff like that. How challenging has it been, and how, I guess, have you addressed the introduction of these AI services into that setup? [0:25:28] MD: We've got some great partners that we start with. So, most of these solutions are hosted with partners. Then, as we get to a certain scale is where you start understanding, where is the margin cutoff? Are we at a place where we need to start rethinking it? We haven't gotten there yet. Like every other decision, when you said, "Well, we're there when we're there." And when we get to a place where we're like, "Hey, we have so much traffic that's costing us so much to run, what is an optimal way to run it?" And now that we have our hybrid, we'll continue to have a hybrid story. Many of these tools are open source, and we absolutely embrace open source. Like I said earlier, we have contributed to open source. Even the way that we encode audio is using FFmpeg, a phenomenal open-source tool. As the industry matures, as we find new uses for it, there may be opportunities to run it on bare metal in our data center. But we haven't gotten there yet. We haven't seen those kinds of numbers that have given us pause to say, "I need to take engineering time from here, because now I got to move it to here." So, we're still working through that, because the industry is working through that. [0:26:39] JMC: Yes. I think so. I think everyone's figuring it out. [0:26:42] MD: Yes. 
We're part of committees that talk about AI in the music space. We make sure that we're complying in all of those spaces, but we also want to be leaders in those spaces too. [0:26:53] JMC: So, in fact, on that note, you've talked about what is a reality already in terms of AI and SoundCloud, but what is the crazy future that not necessarily you envision, but potentially you think about implementing in the platform? [0:27:07] MD: Yes. This is where it's such a nuanced trade-off with enhancing artists and not replacing artists because so much of the press you see is AI will replace you. Whether it be a software engineer that uses co-pilot or somebody that transcribes for a living. Even the AI tools that people use every day to take notes. So, it is such a nuanced solution and problem. I will use problem in that one, not opportunity. Because our job is to enhance an artist and it is not our mission to replace them. So, what the future looks like is how do we continue to find tools, build solutions that enhance the artist's creativity, possibilities, and opens it up for people that never thought they could be to then be. Whatever that looks like tomorrow is the sky's the limit. So, I don't think anyone has a Frankenstein-y kind of vision on how AI will change how artists work. Our sole goal is to make sure that every artist can be who they want to be and find every fan who they need to find and vice versa. [0:28:23] JMC: I genuinely think that that is absolutely true and that the only problem that SoundCloud is going to face is a problem or an opportunity to follow your train of thought there of scale because as you just said, AI is what basically is doing in terms of on the field of music and sound is democratizing, expanding the definition of artists. It was beyond me to create music. I was in a band when I was young and I was such a bad drummer that I got kicked out by my best friend. So, that was dramatic. But that also tells about my rhythmic skills. 
And yet, with these tools in place, had I had them back in the day, I would have been much more able to compose, learn, and create music. So, I think you guys are in front of an opportunity that is onboarding a humongous number of new artists and finding a perfect match for those willing to listen to new deep funk, electro, whatever anyone's creating. So, scaling the system that you just described is going to be the only challenge and opportunity. [0:29:28] MD: Yes. Then, if you think about it, as we get more uploads, that then will require different kinds of tools to find the right listener and creator mix. That's why it's such an exciting company to be at, because we're kind of at this forefront of UGC 3.0, where, let's say, I help create more content that has an opportunity to have more people listen to it. Our mission and our challenge is to connect people in the right way. One of the interesting things about SoundCloud is we allow comments and likes from listeners to creators, and that created a fan connection. Those are also signals that we add into our recommendation engine, to understand that a fan that leaves a comment has a higher propensity to like other kinds of music of that type. So, we have a lot of different signals that aren't just the amount of time somebody spent listening to a track. It could be liking a track, it could be commenting on a track, it could be DMing somebody else about a track. So again, that UGC model that we've embraced, and that was at the core of SoundCloud when it started, also becomes opportunities for us to use AI. One of the things that we have seen that we've had to use AI for is spam. Our vector for spam has been really high. Because we have all of these opportunities to connect people, there are sometimes nefarious folks out there that want to use it for bad. 
So, we are using some tools and vendors to help us solve spammy comments, and I don't mean solve like we're going to get 100% of it, but shrink the footprint, because no creator wants to get a bunch of bot likes and things that just don't feel good. So, we're also focusing on that too. [0:31:19] JMC: It's clear that the other side of the coin of democratizing content creation is spam. There's a whole range of spam there, from merely irrelevant or boring content to content that is nefarious, and all that. So yes, but that is a tradeoff that comes from an increase in output. It happened with the quality of garments when technology came into the textile industry, and so forth. That is a necessary trade-off that, with the appropriate measures, SoundCloud will curtail. But you're right. I mean, we are certainly at, not necessarily a Web 3.0, but something building on the Web 2.0 example, which was defined by the fact that it was UGC-enriched, right? Web 2.0 was actually that web the promise of which was, hey, users are really onboarded now on the web, and they are providing feedback in the shape of comments to blogs, and so forth. We will use that information to make those places that are more relevant and popular show up more, and so forth. But it seems like we are at the Web 3.0 now. I know that probably many people have used this example many times, but with user-generated content enabled by AI, we will certainly be at a new age of this kind of side of the web, I think. [0:32:37] MD: Totally. Let me give you some context on that. I mean, we introduced a product called First Fans, and it helps our Next Pro subscribers get their uploads in front of their first 100 listeners, or a thousand listeners. So, matching those things up is really that UGC 3.0. That's an AI-enabled product we have. We built it using some of the tools that we've been talking about. 
But it is really, how do we get people to enjoy similar music? Those creators upload something in our Next Pro product, and then they get 100 folks listening to it, and it's not bots, and it's not just somebody who's in a bot farm. It's legitimate listeners, and now we can do as many as a thousand listeners. That's part of the products that we're building. I didn't want to interrupt you, but I wanted to give you a really concrete example of it. [0:33:26] JMC: No problem with that. This is my last question, actually, looking forward, and with the stack that you described a minute ago in mind. So, that is a hybrid setup between locally managed hardware and workloads, and other stuff running in the cloud across two providers, and the languages of choice, which were Go and others, and, well, GraphQL is not a language, but a heavy use of that, with Kubernetes underlying. Where do you see gaps and opportunities in that stack, not necessarily the stack that SoundCloud is using, but in general, that cloud-native stack? Where would you like things to go in that sense? [0:34:02] MD: Yes. Well, if I knew about it, we would have built it that way. But one of the challenges has been, with such a small group, running it efficiently at our scale. I think those are some of the things where one of the questions is, what keeps you up at night? Some of the things that keep me up at night are, how do we continue to move the stack forward? I mean, we've made bets in those areas you talked about. If I knew what was coming around the corner tomorrow, we would do something different. We will be doing some things with video. I mean, video is just another great UGC, not 4.0, I don't want to reuse that term. But how do we get video into our environment? How do we understand what it means to have those kinds of experiences as part of our offering? So, those are some of the things, as an industry, we're going to try to figure out. 
I don't have all the answers for that one yet. Those are some of the things that are definitely on our horizon. We do have agreements with some of our labels about what video means to them. So, there's a lot of nuance when it comes to it, but we don't want to not be participating in that other use of UGC-type content. [0:35:14] JMC: You don't need any validation from me, but I will say this, because I speak to many people in your position, to many CTOs and others, and it's obvious to me that you have a very modern stack. Go is a very healthy, fresh, supported language. You're already using FFmpeg, which is, to my knowledge, at least the best command-line, low-level video and sound programmatic editing tool. You're already making the most pragmatic use of the cloud, to be honest, because I think having all the eggs in one basket is actually a bit dangerous, especially for the pocket. So, I mean, I think this is a very reasonable setup to tackle the future with the best chances to be successful. Again, I'm not validating anything, I only know half of the things, but it seems to be very reasonable. [0:35:57] MD: And the team has done a great job coming along, understanding where we were and honoring where we were, but knowing, if we want to continue to build a world-class product, spending a lot of time on our tech debt or maintenance wasn't how we were getting there. So, all these pragmatic decisions have been with a team of folks. One of the best things is, I'm here in Atlanta, we have a team in LA, a team in New York, and most of the engineers are in Berlin. So, I get great time to head to Berlin and spend time with the team over there, and I was there last week. All the kudos go to the team. I only rent this seat as the CTO and try to set some ideas for where we need to go, and then it's been the team that's been implementing this. 
Diverse set of engineers around the world that have been able to pull this off. So, kudos go to them. [0:36:49] JMC: Well, that's all from me. Anything that we wanted to touch upon that we didn't? [0:36:53] MD: No. I appreciate the time and letting me pontificate on some of the thoughts that we've had and the products that we're building. [0:37:02] JMC: Well, it was a pleasure to have you and thank you for being with us. [0:37:05] MD: Thanks. Talk to you soon. [END]