EPISODE 1756 [INTRODUCTION] [0:00:00] ANNOUNCER: In software engineering, C++ is often used in areas where low-level system access and high performance are critical, such as operating systems, game engines, and embedded systems. Its long-standing presence and compatibility with legacy code make it a go-to language for maintaining and extending older projects. Rust, while newer, is gaining traction in roles that demand safety and concurrency, particularly in systems programming. We wanted to explore these two languages side by side. So we invited Herb Sutter and Steve Klabnik to join host Kevin Ball on the show. Herb works at Microsoft and chairs the ISO C++ Standards Committee. Steve works at Oxide Computer Company as an alumnus of the Rust core team and is the primary author of The Rust Programming Language book. We hope you enjoy this deep dive into Rust and C++ on Software Engineering Daily. Kevin Ball, or KBall, is the Vice President of Engineering at Mento and an independent coach for engineers and engineering leaders. He co-founded and served as CTO for two companies, founded the San Diego JavaScript Meetup, and organizes the AI in Action Discussion Group through Latent Space. Check out the show notes to follow KBall on Twitter or LinkedIn. Or visit his website, kball.llc. [INTERVIEW] [0:01:26] KB: Hey, guys. Welcome to the show. [0:01:27] SK: Thanks so much for having me. [0:01:28] HS: Hello. [0:01:30] KB: Yes. I'm excited to do this special episode. I'd love for each of you to introduce yourself. Let me throw it first over to Herb. Herb, let us know who you are, your background, and what you're into. [0:01:39] HS: Oh, I'm a programming languages nerd. I've been doing programming, especially with an interest in systems development, for quite a long time. And I've been paid professionally to do development in a bunch of languages. In recent years, that's been C++ a lot. And so, I've been involved in the C++ standardization world.
But I'm always interested in different and new languages, and sometimes in old languages. [0:02:01] KB: Yeah. And Steve? [0:02:02] SK: Yeah. Hi. I'm Steve. I first became known professionally because I was working on Ruby and Ruby on Rails. But I actually learned C when I was 12 or 13 years old. And then learned C++ back in '98. Stuff has changed a little bit since then. But more recently, over the last 12 years or so, I've been using and working on the Rust programming language. I was on the core team for almost a decade. I was the person behind the Rust Twitter account for a long time. If you ever tweeted at Rust, you probably talked to me. Lots of other stuff. Yeah. [0:02:29] KB: Well, and that naturally brings us into kind of the premise for this show, which is that C++ and Rust are often compared against each other. They're both incredibly performant, close-to-the-metal programming languages. They don't require a runtime. They're really good for systems programming. All these different things that everybody knows. But I feel like a lot of times when we hear a comparison, it's like a partisan of one being like, "Oh, you should always use this." Whatever. And no matter how hard they try, there's some amount of bias. Our goal today would be to kind of have counterbalancing biases. Allow you two to each represent the language that you've been focusing on this last little while. And kind of share strengths, weaknesses, talk about the communities, those whole sorts of domains. Maybe let's kind of start with going to each of you. What do you see as the strength of your language of choice here? We can start this time with Steve. [0:03:16] SK: For me, the natural normal answer here would be something about memory safety. But I actually think that while that is really important, I don't think that really gives enough credit to the Rust project overall.
I think that when Rust was first announced internally at Mozilla, I believe the quote was like, "Programming techniques from the past come to save the future." And Rust takes a lot of ideas that have not been popular in more recent programming languages but that are well-known in the programming languages space. And sort of brings them into a more contemporary context and adds a whole lot of other things like Cargo and other stuff like that. And so, I think one of the greatest strengths of Rust is coming along so late in the overall programming language history that it has been able to take advantage of all of the knowledge of the languages that have come before, and then trying to build on top of that and use those tools in a new way. [0:04:10] KB: Awesome. And, Herb, what would you say for C++? [0:04:12] HS: Well, first of all, I use multiple languages. But I do a lot of C++. And it's where a lot of my work has been in recent years. And I love that it's a language where I can always open the hood and take control. We often say a systems programming language. I'm not exactly sure if that's the best term. Because I know Go has been described as a systems language. But they mean something different by that, like orchestrating data center systems. To me, one litmus test is: can I write a memory allocator in it? Because for a lot of Java programmers who learned Java at school, you ask them, "Okay, implement a memory allocator," and they don't understand what the question is. And sometimes you get the answer back, "Okay, new Widget." Okay, but that's the thing we want to implement. And so, that's just a litmus test. But, also, just for performance and control, for zero-overhead abstraction. Not zero-cost abstraction. Abstraction often costs things.
But where you couldn't do better by hand in a lower-level language, say like C, and being able to express what you need to express without mandatory, always-on overheads like GC and things like that, which I'm so glad to see Rust and Swift avoiding as well. Because I think there's a real need for languages that make efficient use of the hardware that we have. [0:05:22] KB: Yeah. I feel like that's one of the things that really sets these languages apart, and you bring Swift in as well: they don't have a runtime. They don't have these sorts of costs that you have to pay regardless of whether or not you want them. And going into this zero-overhead abstractions idea. Actually, I feel that's something I've heard a fair amount of emphasis on in both communities. I'd be curious if there are any differences in your takes on that. [0:05:45] SK: What I'll say is that the Rust world tends to use the word zero-cost. And that's actually because of a divergent historical fork: the C++ folks have realized that that name is not actually very good. And so, zero-overhead abstractions is like a better name. And so, they have been using it for a long time now. And the Rust folks just never really caught up. It means conceptually the exact same thing, as far as my understanding goes, in the two communities. It's just kind of a weird thing where we use some of the older terminology. And I fully agree that you get into, as Herb said, everything has a cost. People are like, "It's not a zero-cost abstraction because it makes compile times go up." And it's like, "Okay. That's not the cost that we're talking about when we talk about cost." And that's why it's such an easy thing to start arguments about on the internet. But the underlying idea of you don't pay for what you don't use is the core general idea for sure. [0:06:45] HS: And I'll go ahead and throw some arrows at chinks in C++'s armor there.
Because when you talk about zero-cost or zero-overhead abstraction, the idea is I can express things in a higher-level way. Usually, that means I can declare my intent more directly. Say what I want to do rather than the details of how to do it. The code tends to be smaller. It tends to be more optimizable by the compiler because it knows what you're doing now. "Oh, that's what you want to do? Let me help you." That kind of thing. But when you look at where C++ is today, there are two places in the C++ language itself where every compiler has a way to turn the feature off. And that is exception handling and runtime typing. And if you look at why, it's because those two can be written better by hand. Or you pay for them even if you don't use them. And so, I think it's been instructive to see that, largely, we've done a really great job at that. But, hey, the two things that everybody has a switch to turn off, and that in some sense have caused some bifurcation in the ecosystem between "my library reports errors by error codes" and "I throw exceptions in this other library," come down to, "Ah, it's those places where we violated the zero-overhead principle or the zero-cost principle." And I think that's instructive to learn from. [0:08:04] SK: Yeah. There's a similar situation in Rust where - we call them panics, not exceptions. And we sort of go down that idea. And so, most of the time you're returning not error codes but the moral equivalent of fancy error codes. But you can say I don't want panics, or I want panics to abort, which is the name of the setting: panic equals abort. And a lot of embedded projects in particular use that setting. Because adding the handlers for all those things takes up code size. And sometimes it can cause optimizations not to happen, because of the fact that something may panic here. And if you turn them into an abort, then things are able to go better. That's definitely - it's a goal. Right? You're trying to keep it as a principle in mind when doing design.
It doesn't mean you can always fully, completely get there. [0:08:47] HS: Yeah. Panics, and exception handling, and returning error codes, and the other approaches: error handling is a great place to start with any language. For those listening, if you're picking up a new programming language and you want to sort of get a sense for what its sensibilities are, one good place to start is to see how it does error handling. Because that will tell you a lot about the philosophy of the language. Including if there's not one answer. That also tells you something about the philosophy of the language. [0:09:15] KB: Yes. Yes. What is it? Erlang? That says there are no errors, they're just events that go to a particular handler? [0:09:21] HS: Yeah. That tells you a lot right there. [0:09:23] KB: I'd love to dig in a little bit more on this, since it seems to be common, around the places where you have chosen to not do a zero-cost abstraction. To say, "Oh, this abstraction might be worth it even if it has a cost. But we'll make it optional, able to be disabled." Is that coming from a coherent philosophy? Is that something that was sort of accidentally happened into? Are there new opportunities for that type of extension? [0:09:47] HS: Oh, there's lots of abstractions that do carry cost. But I think the philosophy - Steve, it sounded like you were saying the same thing. As long as I don't pay for it if I don't use it, that still fits the zero-cost abstraction model. The question is, what's the cost if I'm not using it? And on the flip side, if I am using it, could I implement it better by hand? [0:10:07] SK: Yeah. I think sort of a small Rust history tangent here. Because I think it's kind of instructive. Rust before 1.0 was kind of four different languages at four different times. I used to say it was four. I don't know if I still fully agree with myself. Back when 1.0 was happening, I tried to categorize what the periods were that Rust went through.
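To make the two error channels Steve describes concrete, here is a minimal Rust sketch (the `parse_port` function and the example values are invented for illustration): recoverable errors travel as ordinary `Result` values that the caller must inspect, while panics are the separate channel whose behavior the `panic = "abort"` setting changes.

```rust
use std::num::ParseIntError;

// Recoverable errors are ordinary values: "the moral equivalent of
// fancy error codes." The caller has to look at the Result before
// it can use the number inside.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not a port").is_err());

    // Panics are the other channel, reserved for bugs. Putting
    //   [profile.release]
    //   panic = "abort"
    // in Cargo.toml (the setting Steve mentions) replaces unwinding
    // with an immediate abort, which shrinks code size because the
    // unwind-handling landing pads are no longer emitted.
}
```

The design point from the conversation: the `Result` path costs nothing unless you actually return an error, and the panic machinery can be switched off entirely, matching the "don't pay for what you don't use" principle.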
And part of that is because very early Rust looked actually sort of similar to Go and Erlang more than today's Rust. There was a garbage collector. There was a runtime. Things were in tasks, not threads. And that's sort of because the driving philosophy or idea behind Rust has always remained the same. But we found out over time that those goals could be implemented in a way that was more efficient. And this is where I kind of get into the zero-overhead thing. The idea of memory safety without any garbage collection was fairly novel in 2006 when Graydon first - I mean, he probably typed monotone init instead of git init. But when the repo was created, it was like, "Okay, if we're starting from a position of memory safety, then we need a garbage collector." And that's just the way that it is. And over time, the history of Rust up to 1.0 is kind of slowly dismantling all of these abstractions that had a significant amount of overhead, once it was found out they weren't needed. One of the primary mechanisms this was accomplished with is types. Because types are a compile-time construct, not a runtime construct. At least in this family of programming languages. You can argue about that with other languages and stuff. But, basically, the more that you could express in the type system, the less you needed to check at runtime, which meant the less overhead that you would have at runtime. And so, as the type system kind of grew more powerful and it got applied in different ways, we were able to strip all that stuff away. And Rust had a runtime up until November of 2014. And Rust 1.0 was May of 2015. It was less than a year before 1.0 that Rust gained arguably one of its most defining traits. And that was sort of because the runtime was pseudo-optional. But it was not actually optional. The idea was that you would have green threads, or else system threads, one-to-one pthreads basically.
And you could write your code agnostic of which one they were and you could sort of flip a switch. Like, "Do I want the green-threaded runtime? Or do I want the regular threaded runtime?" And it turns out that implementing that interface added non-negligible overhead even to the supposedly more efficient case, to the point where they were both effectively the same performance-wise. And so, it was kind of silly and was honestly a giant pain. And so, there was sort of a big fork in the road of, "Okay, do we force everything to be this way or that way?" And we decided to go with the sort of lower-level, no-runtime approach. And so, there's definitely a strange alternate future where Rust made a different choice there. And, honestly, I believe it would have become a relatively irrelevant programming language. I think it was one of the most monumental choices that was made before 1.0. And so, that was not recent. But that's a great example of that process kind of throughout that time period in Rust history. [0:13:07] HS: I think it's interesting that you're talking about static typing versus dynamic typing. Because I think that's actually an example where you can sort of segue into zero-overhead abstraction. The language has static typing, right? For Rust, for C++. Statically typed languages. That doesn't mean that we haven't grown as we've learned and added dynamic typing to our standard libraries. Some languages have done that in the language. If you look at C#, they've actually got dynamic typing plumbed into the language, at the level where you need a new compiler to use it and the compiler participates. But in C++ - and correct me, Steve, but my understanding is it's the same in Rust - we have some dynamic types as well in addition to the static class and those kinds of things. We have optional, variant, and any, which are - we have the sum types, where any can store an object of any type but only one at a time.
And that is dynamic typing. And you can ask what type is in it right now. And the answer might be different one second from now and it's the same object. That's dynamic typing. But we've done it as a library because we think that we have a sufficiently capable library-building language where we can do that and still get a useful feature. Same with variant, which is like any, except it's restricted to a statically known list of types. It's sort of in between. You have a static list of types. But, dynamically, it's one at a time. And optional is just optional. I know you have that one. You probably have other examples, too, Steve, in Rust's standard crates. [0:14:35] SK: Yeah. The situation is a little - I guess it really depends on what you mean by dynamic typing. Because I would argue that an enum or a variant - I guess I don't know as much about how variant is actually implemented. But enums in Rust are fully statically typed in the sense that you do know the list of possibilities. And that is checked by the type checker. But it's also true that that introduces a branch at runtime, which is sort of what ends up happening in a dynamically typed language. Which way you sort of think about it really gets into the semantics of things. We have the equivalent of any. But it comes with a really interesting restriction in that it can only use the 'static lifetime. And so, it's actually much less useful in Rust than it is in some other languages that have a similar feature, because you sort of can't use any of the lifetime-related machinery with it. And this gets into some really deep weeds around the way that lifetimes are implemented and what they mean semantically in the language. I don't necessarily want to segue to that right this second. But maybe we'll go back to it later. But there definitely is that, or you could argue there is. The way that we do the equivalent of virtual classes is trait objects. And there's some differences in how they're implemented.
But the core idea of, "Okay, I know I have an open set of this kind of thing that has these methods. And, therefore, what it is at runtime is not really known." And you're dispatching to a vtable to know which method to call, which is like the same thing as a dynamic language. And to sort of bring it back to other languages too: you mentioned C# adding dynamic typing. The whole Ruby and Python world has been adding static typing to their languages, in reverse, or whatever. Everyone's trying to figure out what sort of proportions of each of these features make sense and how they fit together. And language design is just as much of an art as it is a science. And it's really about taste. And as you mentioned earlier, find out what the philosophy of the language is. That's really, really important. And then different people sort of choose how much of this and how much of that they want to add based on whatever their take on the world is. So, yeah, definitely. [0:16:27] HS: And every language is designed to solve a different set of problems. Right? It's not that all languages are going to converge to be the same thing someday once we figure out what the perfect language is. Because a scripting language is just not like an imperative systems language, for example. I could say more about that later. But I'm not going to use C++ in places where I should be using Python. That's just madness. But as we learn - earlier you were both saying that we're so far along in this industry. And it's true in terms of our careers, because the industry is older than - as I look at the number of gray hairs, we can see each other in the video - the industry is older than each of us. That's great. It is still a super young industry. I mean, the first compiler was what, 60, 70 years ago on the outside? Thank you, Grace Hopper. But we are such a young industry. It is no wonder that we're still figuring things out.
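The enum-versus-trait-object distinction the two are circling can be sketched in a few lines of Rust (the `Shape` enum and `Describe` trait are invented names for illustration): an enum is a closed, statically known set of alternatives that still costs a runtime branch when you match, roughly like C++'s std::variant, while a trait object (`&dyn Trait`) is the open set that dispatches through a vtable, like a C++ virtual call.

```rust
// Closed set: the compiler knows every alternative, and the type
// checker verifies the match is exhaustive. Which alternative is
// inside is still a runtime fact, so matching branches at runtime.
enum Shape {
    Circle(f64),
    Square(f64),
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Square(side) => side * side,
    }
}

// Open set: any type anywhere can implement this trait, so a
// &dyn Describe erases the concrete type and calls through a vtable.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for Shape {
    fn describe(&self) -> String {
        match self {
            Shape::Circle(_) => "circle".to_string(),
            Shape::Square(_) => "square".to_string(),
        }
    }
}

fn main() {
    let shapes = vec![Shape::Circle(1.0), Shape::Square(2.0)];
    assert_eq!(area(&shapes[1]), 4.0);

    // Dynamic dispatch: the vtable picks the right `describe` at runtime.
    let d: &dyn Describe = &shapes[0];
    assert_eq!(d.describe(), "circle");
}
```

This is the tradeoff in miniature: the enum pays one branch but keeps everything statically checkable; the trait object buys openness at the cost of a pointer indirection per call.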
And I think it's great to see the newer languages not forgetting the lessons that we've already learned. It would be so easy to say, "Oh, look. I reinvented this new thing." "Oh, you mean the same as that 40-year-old thing?" But in that sense, I still think that all programming languages, including C++, including Rust, are young. Because our industry is still learning itself. And we're still somewhere between craftsmanship and actual engineering. And we're navigating that transition. Maybe I'm just talking like an old-timer now. But it seems like we're all pretty young and still getting the hang of this. [0:17:48] SK: You're totally right. And I fully agree. I have a small interest in urban policy, and shenanigans like that, and cities and stuff. My mom goes, "Hey, there's this book you might want to check out. It's called A Pattern Language. And it's about how they would manufacture houses." And I was like, "Mom, you're not going to believe it, but I've read this book, but for a completely different reason related to my computer stuff." But we think about - when people would ask me what I did and they're not software engineers, I would be like, "Well, if writing programs was a construction job, I would be a hammer manufacturer." But the steel that goes into hammers is thousands of years old. Whereas, as you said, we're less than 100 years old as an industry overall. And so, we have a long, long way to go in terms of just figuring basically anything out. Yeah. Absolutely. Fully agree on that front. And as another small kind of note, I'm interested to see what programming languages are being started roughly now. Because if you look at the big waves of languages that became popular, you have the stuff in the sort of 70s and 80s, and that's a little bit more fuzzy. But 1994, 1995 - I don't know what was in the water that year. But four or five languages that are massive were all started within a year of each other.
And then you kind of have this sort of Rust and Go-ish crowd that happened around the 2009, 2010 kind of period. And so, it's 2024 already. We're at the point where that period is repeating itself. There should be some people somewhere cooking up some new languages that are eventually going to be super huge. And there are some that have been started obviously in the last five or six years. But I think there's somebody who is starting a repository now that is making a programming language that's going to be a huge hit in 20 years. I just have no idea what that is yet. But I'm excited to see how it happens. [0:19:34] KB: I'd love to dive in a little bit on something that Herb said. You mentioned that the perfect general-purpose language doesn't exist. These are designed for particular problem domains. And each language is making different tradeoffs based on which domains they're sort of centered on. And those obviously evolve somewhat over time. And I think some of the trend that you're talking about, Steve, of adding static typing into Ruby, and Python, and TypeScript, JavaScript is like, "Oh, these things are moving beyond being scripting languages and being used for large-scale programs where static analysis is really quite helpful." What would you describe as the sort of premier targets for Rust and C++? What things do those languages optimize for in their ongoing development? And what do they make easy? [0:20:19] HS: Yeah. I think those two languages actually have a very similar target. There is still a delta of when you would use each one, because of the maturity of the ecosystem, because of thread safety guarantees, which are pretty much unique to Rust. Although, I hear Swift is on your coattails there. And so, there's still some differences. But in terms of the target, of what problem or what kinds of programs we want to support, I think there's a lot of overlap between those two, which is why we have fun conversations about which to use when.
And how can we learn from each other? [0:20:48] SK: Yeah. I definitely think this is true. I think one area in which I would have fully agreed with something you said earlier, but my thinking has changed a little bit lately, is that Rust has some more applicability a little bit higher in the stack than I would have guessed. If you had asked me 10 years ago, I would have said, "Absolutely. I would never use Rust where I would use Ruby." But I am writing a web application in my day job right now. And it's going pretty great. Do I think that's necessarily appropriate for everyone? No. But an area that is an ongoing focus for Rust specifically is networked services more generally. Not necessarily backends to websites. But Cloudflare, for example, is using Rust code as their backend. And they power 10% of internet traffic. Other companies that are writing services find that too. A company you've heard of - I don't remember if this is a public story, and it was six years ago - had some stuff in Python and they rewrote it in Rust. And then they were able to drop thousands of servers off of their cloud bill because the increased efficiency paid for itself or whatever. And so - I'm not going to say C++ is bad for those things. But it's not clear to me that networked services are an area the committee is specifically focusing on, and that doesn't mean that's right or wrong. But it is a place where the Rust team is putting a lot of time and effort in. Every time you hear people say async, that's effectively what they're saying. Although, also, it is useful for embedded too. And that's an important part of the async story. But they're definitely in the mindset of networked services more. And that's because, in practice, of the largest companies that are using Rust, we're only recently seeing people use it in sort of the kernel space or the embedded space.
But the first area in which it picked up significant traction amongst large tech companies was network stuff. And so, that's an area that's very actively having a lot of work put into it on the Rust side. [0:22:45] HS: One way I tend to answer the question at a high level of what kind of application or what kind of code you would consider C++ for is when you need performance and control. You need control over space and over time. You need to know where your objects go. When they're going to be there. How they're arranged in memory, including adjacently and not in pointer-chasing data structures on the heap, because there are huge efficiencies there. And in time: determinism. Knowing when things happen. Knowing that destructors will be run, to have nested lifetimes for objects on the stack, so that objects can safely use each other and you know that something declared before you outlives you. Control of space and time. Being able to open the hood and take that control I think is important. And my impression is that's pretty much also true, at this very 50,000-foot level, of answering where I would consider Rust. Right? Where I want that level of systems control. I don't want to have a garbage collector pause where I can't control it. I don't want the overheads of taking a 2.5x total working set memory because of GC and having to have those tracing overheads. And I think that's a very similar set of ideals. [0:23:54] KB: I think one of the things that is true about C++ is you obviously have so much more history, because the language has been around so much longer. I'd be curious to hear a little bit about the ways in which that is kind of helpful for C++ developers and where that ends up getting in your way. [0:24:10] HS: Oh, that's going to be hard. Let me try real hard for a short answer. The short answer is, because you have history, you have the mistakes of history. And so, you have to overcome them. Because assumptions change, for example, over time.
Internet attacks weren't a thing in the 1970s when C was first created and when Bjarne started making C++ in the 80s. Right? We were all very collegial. And if we had a network, we were all trusting each other in universities. It was a happy time in some ways. But on the other hand, one of the challenges I see right now - I work on the C++ team at Microsoft, which is also the Rust team. We support Rust use at Microsoft and C++ use at Microsoft. And we love both languages. And one challenge that we have with Rust is, "Gee-whiz. Teams that want to use it can't yet." Not because it's not a great language. Not because it doesn't have advantages. It does in certain things. Just because of that history. Because there's been 30-plus years of tooling already developed. And Rust is no longer a new language. It's been 1.0 for a decade now. But that's still pretty new when you have 30 years of mature and working C++ tooling. That's not a fundamental problem. That's something that Rust will get to. But it just will take time. And right now, there are things where I can't ship code out of the company unless it goes through this tool. And it understands C++ but it doesn't understand Rust. I can't ship that code. If I write it in Rust, I'm not allowed to ship it without a VP exception that says, "Okay. We're going to take the risk." That's just one example. We'll get there. We have our backlog of got to upgrade this tool, got to upgrade this tool. But it's a longer backlog than you'd think, just because there's been so much built around C and C++ as the way of doing things for so many years successfully. History is a strength and a weakness. I don't know if that helps. [0:25:57] KB: Yeah. No. Totally. I'm curious - and this is for each of you. Where do you feel like - and, Herb, it sounds like you have visibility into both communities, which is great. But where do you feel like each community could really learn from the other?
[0:26:12] SK: I lurk the C++ standardization process way more than you probably would assume that I do. It's not like I read and understand every single paper. But I do try to keep abreast of what's going on. I think that describing either community as one coherent thing has long not been the case for C++, and it is increasingly not the case for Rust too. There's a little bit of a challenge I think on that side. But if we talk about the language development process, I think that there's sort of a big thing going on in some parts of the Rust world where some people want to kind of make larger changes to the language than some of us would prefer. I don't know if that's sufficiently gentle or not. But this history thing I think is really important. And I think that one of the strengths of the C++ committee is its total commitment to backwards compatibility, within reason. Obviously, some things occasionally get removed, like the garbage collector support that literally no one implemented. If it's not in the standard anymore, that doesn't affect anyone. Or some minor backwards-incompatible changes here and there. But that's just always going to happen no matter what. But they have a very clear and very firm commitment towards that. And I'm not saying that the Rust team necessarily doesn't have a similar view in the large. But there have been some recent proposals about some things that kind of add a degree of complexity that I'm not necessarily personally comfortable with, or that are more controversial among some parts of the community, due to that kind of "does this actually pull its weight?" question. And there sort of tends to be a push and pull about whether adding a new feature is always a good thing or not. And so, there's sort of, I think, a little bit of that going on that's definitely something that could be learned from, I think. [0:28:04] HS: It's interesting you mentioned that. Because it actually makes me think of several C++ examples. Because this is language agnostic, right?
We all deal with evolution. Congratulations. You're successful. Now people want you to do more. And there's different forces in different directions. But the example that comes to mind is actually C#. I've been at Microsoft for over 20 years. And C# is our sister team. We talk to the C# folks all the time. The language designers, the runtime implementers. And this reminds me quite a bit. And I know I'm not telling tales out of school here. This has been said before. But C# added a feature that you and I would recognize as important for safety, right? Nullability. Having nullability in the language. Being able to actually have first-class support for nullability is an obviously good thing. Of course, we should run and do it. Well, they were smarter than that and actually had a discussion. But then they did do it. And it's not clear that there's not regret about it. Because it turns out that, at least the way that C# did it - and this is an example of, I think, what Steve was hinting at - there are hidden costs and hidden complexity. Because there's complexity for users and complexity in the language now, but also for the next 20 years. Because you're going to be supporting this thing forever, right? And it's going to constrain evolution forever. It turns out that once you add nullability, it's like adding a drop of ink in water, and it spreads and it spreads. The whole language needs to know about it more than you first thought. It spreads throughout the type system. It spreads throughout the runtime, the IL. It touches everything. And it's not that there isn't benefit to having nullability in the language. There is. Null is clearly the billion-dollar mistake and everything. There's clearly benefit. It's just the cost. How do those scales balance out? Because it turns out the cost is actually surprisingly high. And, of course, maybe this could be different if it were done in different ways. But the actual practice is C# has it. It's great.
But, boy, did it cost more than they thought. And knowing that, it might not have been done, or not done in the same way, if there were a time machine. And, by the way, all of us get to the point where we have the discussions in our language design groups of, if we had a time machine, we would do this. Since we don't, we're gonna have to move forward somehow. Do you ever have those time machine discussions? Does that come up, Steve? [0:30:13] SK: I mean, just to be clear, I haven't been involved in the language for a couple years at this point. I don't know what the current thoughts are on that. I have a very jokey answer to this question, which is I think that the standard string type should have been named StrBuf instead of String. But that ship has long sailed. And that's because of inconsistencies with other parts of the standard library, where other things are called Path and PathBuf. But we have String and str. And so, str and StrBuf would have been conceptually simpler. And that's a breaking change that can never be made. I used to have a good list of these things. But off the top of my head, I don't remember. But there are definitely ones where I was wrong. The inverse question: ones where I fought against a change that I thought would be bad, but it turned out to actually be fantastic. And so, I'm glad that I got overruled on that. The primary one there being postfix await for the async/await syntax. F# and C#, I believe, were the first to actually have async/await, before JavaScript. But JavaScript kind of made it super popular for whatever reason. And so, you do await, and then an expression, and then a semicolon. That is awkward for several reasons in Rust. And so, the idea came up of like, "Hey, what if we did postfix await?" Instead, you do expression.await and then the semicolon. This was in 2017, when this discussion was happening. And it was incredibly contentious. And there were tons and tons of arguments.
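The two syntaxes under discussion look roughly like this. A hedged sketch: the function names are invented for illustration, and the tiny executor is hand-rolled only so the snippet runs without pulling in an async runtime (it works because these futures never return Pending).

```rust
use std::future::Future;
use std::pin::Pin;
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical async steps, purely for illustration.
async fn fetch() -> i32 {
    40
}

async fn add_two(x: i32) -> i32 {
    x + 2
}

async fn pipeline() -> i32 {
    // Postfix `.await` chains left to right:
    add_two(fetch().await).await
    // With the JavaScript-style prefix syntax this would nest:
    //     await add_two(await fetch())
}

// Minimal executor: a no-op waker is enough because the futures
// above are always immediately ready.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    const VTABLE: RawWakerVTable =
        RawWakerVTable::new(|_| RAW, |_| {}, |_| {}, |_| {});
    const RAW: RawWaker = RawWaker::new(ptr::null(), &VTABLE);
    let waker = unsafe { Waker::from_raw(RAW) };
    let mut cx = Context::from_waker(&waker);
    // Safety: the future stays on this stack frame and is never moved.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let result = block_on(pipeline());
    assert_eq!(result, 42);
    println!("{result}");
}
```

In real code the executor would come from a runtime such as tokio; the point here is only how the two `await` spellings read when calls are chained.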
Basically, my argument against this feature was, we have a lot of folks coming from JavaScript. They know the syntax already. If we have a weird syntax here, we're adding complexity for what I don't think is necessarily a very large benefit. And other folks were saying, "Actually, there are benefits to the postfix syntax." It fits in a lot nicer if you need to await multiple times in an expression. You're not nesting it with parentheses and all that sort of stuff. But I was kind of on this conservative position of, "Okay, the tradeoffs are unclear. Therefore, we should make the conservative choice." Years later, now that I'm writing a lot of code with async and await, gosh, I'm glad they went with postfix await. Because it's so, so much nicer in so many ways. And we solved part of that argument about learnability because the Rust compiler knows how to parse the JavaScript-style syntax. And then it will give you a specialized error message that says, "Hey, I see you're trying to write await here. But we actually write it like this." And it will just literally show you the code that you need to write instead. And so, that turns what would have become a learnability issue into a very, very minor speed bump. And so, at the time, I was not really thinking about, "Oh, we can solve this problem through tooling instead of through the language definition itself." That's, I think, a famous example where I was wrong about what should be added. But I think one area that's definitely been discussed a lot by people is around the name of mutable references. Rust has two different kinds of references, ampersand T and ampersand mut T, for immutable and mutable references. And you'll hear sometimes people call immutable references shared references. And that's sort of because, ultimately, Rust's take on mutability is that it's kind of the dual of exclusivity. If you want to mutate something, it also needs to be exclusive.
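A minimal sketch of that mutate-means-exclusive rule, with invented variable names:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // &T: shared references. Any number may coexist, none may mutate.
    let a = &v;
    let b = &v;
    assert_eq!(a.len() + b.len(), 6);

    // &mut T: an exclusive reference. Mutation requires exclusivity,
    // so while `m` is live, no other reference to `v` may be used.
    let m = &mut v;
    m.push(4);
    // Uncommenting the next line would be a compile error: it would
    // keep the shared borrow `a` alive across the exclusive borrow `m`.
    // println!("{}", a.len());

    assert_eq!(v.len(), 4);
}
```

This is why some people read `&mut` as "mutable" and others as "unique": the same rule, viewed through either lens.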
And if you don't need something to be exclusive, it's also immutable. And so, you can kind of look at that property via either lens primarily. And so, there's definitely a significant number of folks who argue that we should have renamed ampersand mut T to ampersand unique T to sort of imply this uniqueness of access. Because they see that as being the more fundamental part of the tradeoff. My personal take on this is that's one thing I would love to see in a Rust++. I think that, at the time that Rust was coming up, you needed to use the language that was more familiar to people, which is the mutable and immutable side. And then eventually you learn, "Oh, this also has these uniqueness implications." But I think that now that Rust has at least made that a little more mainstream, it's now possible for a future language to do that instead. And, arguably, maybe that's a better way to do it. I don't totally know. But that's definitely a commonly cited example of a change that people sort of wished we had done. There are also a ton of minor syntactic things. Obviously - I wish we'd used equals instead of colon in struct assignments, or things like that. Not super big-picture things, but more minor details. [0:34:20] HS: But for the things that are maybe more impactful that you wish you could change - let me just ask about editions. Because we hear a lot in C++ about, Rust has editions, we should too. There have been proposals about that. And I've always been interested in that. I'm working on, what could we do if we took a once-in-30-years breaking change to C++ syntax but kept full compatibility? And so, that's something I'm exploring myself. But in terms of taking changes more often, it's like, can you really make large changes every three years? I'm curious, some of the things you just mentioned, like changing &mut to &unique, is that something you could see?
If there was consensus for it that this is a useful change, could you see that kind of change being made in an edition? Could you see changing a default? Like private to public or something else? Something fundamental like that, that changes the meaning of code, being done in an edition? [0:35:14] SK: Yeah. It's really interesting. The answer to both of those questions is technically yes. But I think that implies something about editions that's actually not true. You just happened upon two examples that are very, let's say, flashy. One of the reasons why editions work really well in the Rust world is because we only have one compiler. And, basically, sort of the core idea of editions - for folks who don't know about Rust editions. Basically, the idea is that you can pass a flag to the compiler that sort of lets you opt into newer changes that would be breaking. A classic example is, in Rust 2015, the initial edition, async is not a keyword. If you want to name a variable async, you can totally do that and it will absolutely work. But in Rust 2018, it is a keyword. And so, that code would break. How do you get around that? Well, basically, when you start a new project, it will add the latest edition to your configuration. So you get the latest amount of stuff. But as time passes, when a new edition comes out, you're still on the old one until you change that configuration. In practice, your code never breaks until you opt into those new changes that would be breaking. However, technically - and this is in the technical-explanation sense, not in a legalese, paragraph-two-subsection-C sense. The way that this kind of edition mechanism works is, like many compilers, the Rust compiler has like 15 IRs. It's actually more like five. But whatever. The point is, there's kind of an IR level called the mid-level IR, or MIR, that you can sort of think of as like Haskell's core. Core syntax. Core Haskell. Where it's kind of the fully desugared.
Everything is down to kind of the most basic components of the language. And while that's not a stable interface, it is at least a conceptually roughly stable interface. And so, kind of the idea is that editions can only change stuff in those desugaring layers before you get down to MIR. Something like changing mut to unique, as a purely syntactic change, would absolutely work. Because what ends up happening is, when it compiles one package for 2015 and one package for 2024, and 2024 adds unique or whatever, it would desugar to the same internal - it's not an AST at that point. But whatever data structure of the IR. And that's where the interoperability happens. [0:37:31] HS: It's a little bit like C# and VB. They're different syntaxes, but they both compile to .NET IL. Yeah. [0:37:37] SK: Yeah. Exactly. And so, that's much easier with one compiler than multiple compilers. Because GIMPLE is very different than LLVM IR, for example. Just to name two. That sort of also limits what can be changed in an edition fundamentally. Because you can't add a GC in an edition, because that would really change the way that things work on a deeper level. And, also, funny enough, because the standard library is the standard library, while you can make changes in libraries on editions in some ways, we can't with the standard library, because you only get one copy of it included in the program. And it needs a single edition. A lot of people are thinking, "Oh, could we remove a deprecated function at an edition boundary?" And the answer to that is, not actually. Because you have one copy of the standard library. And if you were using the previous version, it still needs to know what that code is. The closest we could get, and this is not implemented yet but is a thing people have talked about, is hiding those functions, in a visibility sense, in a new edition, which would technically work.
But you don't actually get to delete the old code in the standard library even if you were removing something. And so, that's kind of an example of one of the things that editions can't change. And so, I think it's a really nice system overall, because it gives you some flexibility. We didn't have the co_await problem that you all did, because of editions. And that's kind of what originally drove it: how can we add new keywords without breaking everything else? And what's the minimal amount of stuff? But it does mean that some core conceptual identity of Rust has to stay the same even throughout those changes, because of the fact that we can't make truly deep changes to the language through the edition mechanism. That definitely helps, I think. [0:39:11] HS: That's a great summary. Thank you for that. I'm sure the folks listening will find that really useful, to know what it can and can't do. And just to elaborate on the thing that you just mentioned. For those who don't know, C++ also added await. Because that's what all the cool young languages are doing. And this is going back to C++20 now. But we didn't want to break existing code that might use await as a variable name or otherwise. And we were still, at that point, not really wanting - and, to some extent now, not really wanting - to have contextual keywords. We wanted globally reserved keywords. After much, much bikeshedding - that is, debating what color to paint the bikeshed; in this case, the name of the keyword - it was co_await. Co_await. We have co_await and co_yield. And there are codependent jokes and other things in there as well. All because of deliberately uglifying it in the name of backward compatibility. And any language that's been around long enough, has been popular, and cares about compatibility - because you actually have customers. Congratulations. They're a blessing and a curse, because they expect you not to break them - is going to run into this.
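To make the keyword problem concrete: on Rust's 2015 edition, async was an ordinary identifier; later editions reserve it, but the raw-identifier spelling keeps old names reachable across editions, where C++ chose a new, uglier globally reserved keyword instead. A hedged sketch:

```rust
fn main() {
    // On edition 2015, `let async = 5;` was legal, because `async`
    // was not yet a keyword. On editions 2018 and later it is
    // reserved, but the raw-identifier spelling `r#async` still
    // names the same identifier, so code and APIs using the old
    // name stay callable across the edition boundary.
    let r#async = 5;
    assert_eq!(r#async, 5);

    // C++ sidestepped the same collision by reserving a fresh
    // keyword instead: co_await rather than await.
}
```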
And I've seen the C folks run into this as well. That's what the co_await joke was. Because it's totally true. We are one of the - I think we might be the only major language that doesn't name it just await. It's three more characters. Can you think of any others? [0:40:29] SK: I can't off the top of my head. But I also think there was a very good reason to do that as well. I don't think it's a knock against y'all either. It's about being responsible to your customers. And languages have customers, as you sort of said. I like that framing a lot too. [0:40:44] KB: One thing I'd love to dig in a little bit, something you alluded to, Steve. Rust has a single compiler that is managed by the team that is defining the language, essentially. It's sort of collocated. Whereas in C++, you have a separation to some extent between compiler implementers and language design. Or the sort of standards committee. What do you all see as the pros and cons to those different approaches? And I guess, since Rust is more immediate and you're involved in it, do you know why Rust chose to go that route? [0:41:13] HS: Oh, by default, you always start there, right? [0:41:17] SK: Yeah. On some level, we went that route because, in practice, as Herb just said, by default, that's what you do. The reason that the C++ committee is in the place they are is because the C committee was in the place they were - which is also why it was the ANSI folks before the ISO folks. And it's like, C got very popular and people wrote multiple compilers. And they all had slight variants of what was going on. And whenever you have that happen, you need some sort of way to unify various implementations. And so, a lot of people imagine that specifications get written and then people follow the specifications. But in terms of successful specifications, that's almost never what actually happens. It is a bunch of people doing their own things.
And then they get together and they're like, "Hey, we actually need interoperability between the groups of us." And so, now we have to figure out how to do that. And so, that's, historically, at least my view. Obviously, Herb may have different opinions or understandings, given that's what you do. But that's how I see that happening. And so, in practice, the reason this happens is just because there's no reason to divide work when you're still a nascent language nobody uses. You have to get to the point where people care enough about using you in the first place. And so, that just naturally is kind of the case. Now, there are some examples of alternative Rust compilers. But none of them are at the level of maturity where - that's a technically true sentence, but it's not actually what you mean when you say those words. For example, mrustc is a fantastic project that can compile correct Rust code but leaves out things like the borrow checker, because they're not actually needed. They're needed to check correctness. They're not needed for correct code gen. And so, mrustc is able to compile the Rust compiler and produce a binary-equivalent output to what the regular Rust compiler would do when compiling itself. And so, in that sense, it is an alternative Rust compiler, and it's useful for bootstrapping. But it's not something you would be using for a day-to-day kind of development process. And someone is currently working on adding Rust to GCC. And so, they have that going on. And they're working on that. But it's in very, very early days and is not really usable to compile any sort of real programs just yet, because it takes a long time. As those projects mature - I mean, I'm not entirely sure if mrustc's maintainer plans on trying to be a full compiler eventually. But the GCC person or people definitely are. And so, at that point, a sort of more standardization approach I think makes more sense.
However, I think there's a significant desire to never make it an ISO standard, let's put it that way. And not to follow those steps. And that's because the ISO process has a lot of requirements that don't feel like they pull their weight for the Rust crowd. And so, there's a lot of that angle of, "Okay. Once we would do a standardization, is it under some sort of other body? Or is it just, we'll have the Rust standards group that's not part of some other standardization process and would write their own?" There is an ongoing desire to produce a Rust specification by the upstream Rust project. And they have a working group that's sort of working on that. And that kind of happened because the Ferrocene project sort of forked the Rust compiler in order to get safety certifications. Because automotive and several other safety-critical industries want to be using Rust code. And so, they sort of took the Rust compiler and then did the work to make that happen. But they also upstreamed the vast, vast majority of it. And so, in reality, it's not really an alternate compiler so much as it is a very, very small fork. But in order to get those certifications, they had to produce a specification for Ferrocene. So we kind of have some normative documentation written by the Rust team themselves. Some documentation that's not normative for the language but is normative for a compiler. And the compiler happens to be basically identical to the real compiler. So, like, "You know - ah." And so, there are these forces that are coming into play now in Rust's lifetime that are maybe going to make it a little closer to the way that the - at least conceptually, that there are alternate implementations. But we're just young enough that that is only really starting to come to the fore in the last two or three years. [0:45:26] HS: Yeah. And C++ went through the same evolution. Everything begins with one compiler, right?
If you start with a language, somebody's got to be first. And Bjarne wrote Cfront, the compiler that transpiled. It was a full compiler. AST and everything. Just, instead of emitting bytecode for .NET or emitting binary executables, it emitted C code. And then it could bootstrap itself. And that was a great breadth play, because it means it would work, sometimes with a little porting, wherever there was a C compiler. Big advantages that way. But eventually people said, "Well, what if I had a native compiler?" And I don't remember - your dad, Mike Ball, he was around in those early days doing C++. [0:46:05] KB: He was one of the first ones. Yeah. [0:46:07] HS: I don't remember if he was at the dinner I'm about to describe. It's funny that, here, our interviewer is the son of one of the people who might have been at this dinner table. I remember it was in the very late 80s, or maybe just around 1990, or early 90s, where Bjarne talks about being at dinner - himself, Walter Bright, who was building the Zortech C++ compiler, and one or two other people. And it was because it was just starting to be where people were building C++ compilers, native compilers that didn't transpile to C but were native compilers for specific platforms. Walter was doing it for DOS. And then, soon after that, for Windows, which was really important for C++'s success. Then there was Sun and so forth. But the anecdote was, they're sitting around this dinner table because now some other compiler vendors had appeared. And one of them - I forget who - actually said, "This is probably the last time that we will be able to seat all the C++ compiler implementers in the world at one dinner table." And they were right. This is a natural evolution. And I do think - again, speaking as someone who sees both the C++ and Rust sides. I do most of my work in C++. But I see the Rust stuff. The monoculture is seen as a risk.
I mean, it is an issue that we want to at least mitigate, if not solve, somehow. Nobody's going to just go build another compiler just to have a second one so they can point at it. But even if there is just one, how do we mitigate that risk? I mean, look at XZ Utils. You don't want one point of failure, one place for a supply chain attack. There's just too much badness going around right now. That's not to be fearmongering. It's doing fine as it is. But you need to take more care. Every time there's an update, you need to take care with what you're ingesting. And that applies to pretty much any open source project, in fairness, too. Including Clang and, say, GCC. But having more than one compiler. Having a spec, where the compilers implement the spec, not the other way around - although sometimes you do document what actually happens. But when there's divergence, the spec should be what makes them agree. The third piece that you're going to need to build, and this is part of building out the ecosystem of tools over time, over decades, is you're going to need those conformance test suites. And there are going to be vendors popping up, because that's going to be an industry, where somebody's going to get paid to write these conformance test suites so you can actually validate. If there is some sort of standard - whether it's ISO or something, or a separate standards development organization for the Rust language - and I now have a Rust compiler A, B, or C, how do I know it conforms? I'm going to run, in C++'s case, Perennial, or Plum Hall, or one of those test suites. And in Rust's case, that's going to need to be built out so that we can then mechanically validate conformance as well. And this is just part of growing and becoming popular: you need to build out all this boring stuff, which is much less sexy than language design. It's much more fun to implement a new language feature. And then there's all this scaffolding and architecture.
And, "Oh, can't we just wish that wasn't necessary?" But it's part of being well on your way to a million customers. [0:49:14] SK: Yeah. Luckily, rustc has a significant number of tests already. I'm not saying that functions as a conformance test suite. But it at least would give a start at doing so. Because it's not set up to run a different compiler and report. You'd be changing the scaffolding. But there are at least thousands of tests written that are a small amount of conformance checking. In particular, there's a file I really love called weird-exprs.rs that has lived since the very early days, when the parser was being worked on. And it sort of includes stuff that's like, you'd never write the code, but it ends up being very strange to parse. So, union is a contextual keyword. There's a union named union, parameterized over a lifetime named union, with a member named union that's got the type union, and just silly code like that. There are also some very fun vestigial things from before Rust 1.0. For example, there's a function called evil_lincoln that just says, "let unit equal println hello world", or whatever. You're like, "Why is it called evil_lincoln?" It's like, well, in the olden days of 2009, the print statement was called log. And Lincoln Logs are a popular toy for children in the US. Or at least they used to be. I don't know if they're still a popular toy now. But that's what it's testing. [0:50:28] KB: Absolutely are. [0:50:28] SK: Yeah. Okay. Cool. But the name got changed. And so, kind of the semantics of that joke got lost to history. And so, there's lots of stuff in there. But, also, I think another great example - a reason I would want to see a new Rust compiler is actually based off of the work that Chandler is doing with his new language, which I'm totally drawing a blank on right now for some reason. Carbon?
He's given a number of talks about the architecture of Carbon and how they're doing a data-driven approach. And that is an architecture that tends to work really well in Rust. And a very interesting thing about the Rust compiler is that it was written in a pre-1.0 language and then slowly translated over time. You know what I mean? I'm not going to say that it's bad. I don't want to upset anyone. There are a lot of very talented people who work on the compiler. And I'm not trying to denigrate their work. But they also are dealing with a million-line legacy codebase that was written before the language was even stable. And so, there may be architecture decisions that they would want to revisit in a clean-room implementation that are just simply not feasible in the current one. But I don't work on the compiler. I can't speak to whether that's true. But I think that a thing I found interesting, learning about Carbon's approach to all these problems, is the technology that they're using to develop the compiler itself. And I think that's something you can often only do in a greenfield sense. I think that's kind of a very interesting idea. [0:51:47] HS: There are lots of reasons to want to, as time goes on, decide, "Hey, I'm going to build another compiler." Well, you already have one that works. Why would you want another one? In C++'s case early on, it was to have a native compiler rather than a cross compiler. Or in the case of Clang - when Clang was developed, GCC worked pretty well - it was mostly for legal reasons as well as extensibility reasons. There can be lots of reasons to do it. Plus, we all hope we're smarter now than we were 15 years ago, even if we were also shackled by design decisions from 15 years ago. There's always that impetus. C#, for example, managed to totally replace their compiler. And a lot of people didn't even notice. The Roslyn compiler is a drop-in replacement for the previous C# compiler.
And it has lots of other stuff - extensibility, and toolability - that the previous C# compiler didn't have. That was a reason to do it. Why would you build a new one? Well, in that case, toolability was a big reason. But they managed to do it only because, from day zero, from before a pencil was put to paper, full backward compatibility was non-negotiable. And they developed the test assets and the quality gates. And they wouldn't let check-ins in that broke things, from the beginning. And that added time and cost to building a new compiler. But it also meant they had a smooth rollout. But it couldn't have happened if they decided halfway through that, "Oh, yeah. We want compatibility." It's like, "No. That's a decision you make on day zero and then you live with it." But, yeah, a lot of languages do this. I have no idea what Erlang does or what other languages do. How many compilers they've had or have. Yeah, C# totally replaced their compiler in the last decade. C++ has many and has new ones springing up. [0:53:26] SK: One of my favorite multiple-compiler stories. I'm curious, do either of you know about the infamous RMS [Name inaudible 0:53:31] email story? Does it ring a bell? [0:53:34] HS: No. But it sounds intriguing. [0:53:36] SK: I mean, you could say it's the only reason we have LLVM today - or, rather, the reason we don't only have GCC today. GCC also had a situation where there was a fork that was semi-hostile, and it got replaced at one point in time. But in the very, very olden days, back in the beginning, Chris Lattner sent an email to RMS saying, "I would like to donate LLVM to the GNU project. What do you think about that?" And RMS just missed it. So he never replied. And so, Chris went, "Okay." And then they just kept on building it. But there is another fun alternate universe where RMS answered that email. Accepted it. And LLVM either replaced GCC outright, or LLVM became GPL-licensed instead of its current license, and all that kind of stuff.
And so, it's very funny. We're all really busy. It definitely does not give me any anxiety about my email inbox when I hear stories like that. But sometimes history is just very interesting. Even then, you get merges of compilers, or forks where things come back in. It's not a compiler, but io.js was a fork of Node.js over governance. And then it became the upstream after some situations. Sometimes you don't even necessarily have a permanent state of multiple implementations. You maybe have a temporary one that then gets reconciled in some fashion later. There are lots of different ways this stuff can play out. [0:54:49] KB: The diversity and the opportunity to have that diversity can be really helpful. I hope we see more of that picking up in the web browser space again. [0:54:57] SK: Yeah. It would be nice. [0:54:58] HS: Speaking of monocultures, yes. [0:55:00] KB: Yes. Exactly. Right? We're getting close on time. I would love to kind of wrap with one last question to you all, which is: if someone is brand new to the systems programming world, and they don't know either C++ or Rust, what would you recommend to them in terms of learnability? Where would you point them? [0:55:18] SK: I mean, I have extreme bias, given that I literally wrote a book about how to learn Rust. But I would advocate for Rust simply because I just think that there are a lot of things about - because we did not have the history, there's an argument to be made that Rust is - I don't want to say simpler. I hate "simple" in programming languages. Parsimonious. There are some things that have less - there's no rule of three and rule of five, because we don't have constructors and stuff like that. There's just a little less to learn. I think that learning both languages is valuable. And I think that knowing one really helps you with the other.
But as someone who has personally invested a significant amount of time and, at this point, my life into making Rust a learnable language, let's put it this way: I would hope that if they decided to choose Rust, it would be successful and valuable to them. I think that this answer is also different for every individual person. I wouldn't ever say that there is one universal answer. Because different people learn differently. I know tons of people that don't learn by reading books. And so, those people would not be served no matter how much blood, sweat, and tears I poured into the book over the years. Obviously, I'm going to say Rust. But I do acknowledge that's not necessarily universally the correct answer. [0:56:27] HS: Yeah. I think Rust is a great first language to learn. I would much rather schools actually teach Rust instead of Java. Because at least it's a systems language. There are too many people who get degrees and don't know, not just how to write an allocator, but what that means, right? And not everybody's going to need that level of understanding. But it would be nice to have more of that understanding in our CS graduates. And many are doing fine. It's just something that we can continue to improve. If I had to recommend one, I would say pick one. Flip a coin and pick one and learn it. Because so much is in common. One of the things that I've been finding - I run the C++ committee and I also run the largest C++ conference, the annual one, CppCon. One of the things that I've been noticing in the last, I want to say, three, four years especially - this is just happening organically - is every single meeting, every single conference, we're having local high school classes asking to attend. Yes, let me go invite you as guests. Send the notification off to ISO. I just invited this class. And they're like, "What are their names?" I said, "I invited the class. Get over it." And they're like, "Oh, this is new to us."
But to see teenagers coming to our conferences, and in some cases giving talks, is nice to see. And I think that speaks, too, to the fact that we still have a ways to go to simplify the language. But it is much easier to use C++ now. Even though it's a bigger language, you don't need to know all of it to use it. You can actually get started much more easily than if it were, say, C++98. I would like to make that even 10x better. And I'm working on that as my own personal project. But to me, I would love to see more C++ or Rust being done in our universities. Being taught in our universities. Because systems programming needs to be taught more, I think. And C is fine too. But why not a higher-level language that knows what types are, and user-defined types beyond struct? There are benefits to that, to efficient abstraction. [0:58:24] SK: Yeah. There's a slight variation of your question, which is: should I learn language X? And the answer to that is yes, for any X, as far as I'm concerned. New languages help you change your perspective. They help you write better code in the languages that you decide to use more often. You develop tastes. Taste is important. And the only way you can learn taste is by trying a variety of different things and figuring out what works for you. I guess it kind of brings it back to a very early thing. There will never be one universal general-purpose programming language. There's always an opportunity to pick one or the other. And you can't make that decision unless you've tried both. You really have got to just keep trying new languages. [0:58:59] HS: At the risk of going over, I just have to say, this is reminding me so much of my undergrad days. Because sometimes you get the question, "Oh, what was the most impactful course you had when you were an undergrad?" or something like that. And of the two that come to mind, number two is, of course, algorithms and data structures. That's bread and butter.
And that's eye-opening, when you're shown the implementations of B-trees and things like that. You're forced to think about algorithmic complexity, what big O means, and so forth. But to me, that's number two. The most useful one I found, which I just happened to get into, was comparative programming languages. Where every week you learned and wrote an assignment in a different programming language. And to people who think that C, and C++, and Java, and Rust are different programming languages - they're not wrong. But to think that they are so different, they should go take a course like that, where you do assembler in week one, Prolog in week two, and something functional, like Lisp, in week three. And you will never think that Rust and C++ are that different ever again, right? Because, in Prolog, everything is a rule. In assembler, everything is an opcode, an instruction. It helps you get out of the rut of, "Oh, programming is just this one thing, and is writing my Java application to finish my coursework." I found it to be really, really helpful. [1:00:17] SK: That course was also incredibly impactful on me. I will tell you that story some other time, since we're already running over. [1:00:22] KB: I love it. Yeah. This has been great. I feel like we could keep talking for hours. But we're going to wrap it with that. Thank you, both, so much. Thank you, Steve. Thank you, Herb. And with that, we will call this an episode. [1:00:35] HS: Thanks for having us. [1:00:36] SK: Thanks so much. [END]