Memory safe languages in Android 13

Posted under Programming, Technology by James Steward

In Android 13, about 21% of all new native code (C/C++/Rust) is in Rust. There are approximately 1.5 million total lines of Rust code in AOSP across new functionality and components such as Keystore2, the new Ultra-wideband (UWB) stack, DNS-over-HTTP3, Android’s Virtualization framework (AVF), and various other components and their open source dependencies. These are low-level components that require a systems language which otherwise would have been implemented in C++.
To date, there have been zero memory safety vulnerabilities discovered in Android’s Rust code.
We don’t expect that number to stay zero forever, but given the volume of new Rust code across two Android releases, and the security-sensitive components where it’s being used, it’s a significant result. It demonstrates that Rust is fulfilling its intended purpose of preventing Android’s most common source of vulnerabilities. Historical vulnerability density is greater than 1/kLOC (1 vulnerability per thousand lines of code) in many of Android’s C/C++ components (e.g. media, Bluetooth, NFC, etc). Based on this historical vulnerability density, it’s likely that using Rust has already prevented hundreds of vulnerabilities from reaching production.
These numbers don’t lie.

As a C critic for the longest time, I am glad memory safe languages are starting to make a dent for new work. Is C’s dominance in system code finally starting to crumble? It’s about time; it’s jarring how much of the world runs on languages that are so easy to exploit in the wild. Of course C has the lion’s share of established code bases, and that is something it will retain for a long time, but as long as more new development work is done in safer languages we might finally reach a point on the horizon where memory corruption is a distant memory.
Is C’s dominance in system code finally starting to crumble?
No. Most of the dodgy/non-scientific correlation/”marketing innuendo” in the article has nothing to do with system code (most is likely browser). Heck, their own charts say that vulnerabilities in “new memory unsafe code” are dropping by about the same amount as vulnerabilities in “new memory safe code”; which indicates that there’s an external factor reducing vulnerabilities for all code.
For system code, we can play the same stupid marketing tricks and claim “there’s more assembly language than there is Rust in the kernel; and there’s been no memory safety vulnerabilities in raw assembly language code; so obviously raw assembly language is better than Rust”.
Brendan,
No. Most of the dodgy/non-scientific correlation/”marketing innuendo” in the article has nothing to do with system code (most is likely browser).
I include system libraries when referring to system code. The article states the changes are low level components that would otherwise have been implemented in C++. Even if some of these components are used by the browser, why should that matter?
Heck, their own charts say that vulnerabilities in “new memory unsafe code” are dropping by about the same amount as vulnerabilities in “new memory safe code”; which indicates that there’s an external factor reducing vulnerabilities for all code.
Yes, the chart you are referring to shows that their newer code is generally improving even beyond rust.
We continue to invest in tools to improve the safety of our C/C++. Over the past few releases we’ve introduced the Scudo hardened allocator, HWASAN, GWP-ASAN, and KFENCE on production Android devices. We’ve also increased our fuzzing coverage on our existing code base.
So, given google’s efforts, it would make sense for memory vulnerabilities to improve even on the C++ side of things. Regardless though, I still think the author makes a compelling case that there is a very high correlation between switching to rust and the decrease in memory vulnerabilities.
For system code, we can play the same stupid marketing tricks and claim “there’s more assembly language than there is Rust in the kernel; and there’s been no memory safety vulnerabilities in raw assembly language code; so obviously raw assembly language is better than Rust”.
Google’s language chart only shows new code written in rust, java, kotlin, c, and c++. The author does talk about in-kernel rust usage in the “What’s next?” section, but that hasn’t started yet.
With support for Rust landing in Linux 6.1 we’re excited to bring memory-safety to the kernel, starting with kernel drivers.
As Android migrates away from C/C++ to Java/Kotlin/Rust, we expect the number of memory safety vulnerabilities to continue to fall. Here’s to a future where memory corruption bugs on Android are rare!
You don’t like rust, and that’s your prerogative. It’s not my favorite syntax either. But the truth is that over the decades since the invention of C, every generation of C developers keeps producing the same vulnerabilities over and over again at great cost to society. C just makes it too easy to write broken code. It is arguably better in an academic setting where humans are able to study an entire code base in one sitting. But, given the complexity of real world projects, even those of us who are trained to identify these vulnerabilities slip up because we are human and our brains are not good at being 100% consistent across millions of lines of code.
So while I can understand your dislike of rust, would you really want to subject the future of humanity to the continuation of vulnerabilities and bugs that C is notorious for? Personally I think it’s time to fix the problem at the source.
You don’t like rust, and that’s your prerogative.
I don’t like marketing hype.
I also don’t like the continual “let’s convert everything from one language to the next every 5 years (and spend 20% of our lives relearning pointless syntactical differences)” or the “let’s have so many different languages that most programmers can’t understand most code” mentality.
For Rust itself, it has some good ideas, but it’s 95% fanboys frothing at the mouth and 5% good ideas, and it’s nowhere near enough to justify the adoption costs.
So while I can understand your dislike of rust, would you really want to subject the future of humanity to the continuation of vulnerabilities and bugs that C is notorious for? Personally I think it’s time to fix the problem at the source.
No, you think it’s time to fix one problem (memory safety) that was already fixed by many other languages, while ignoring all the other problems that have not been fixed; where some of the problems being ignored (e.g. preventing out-of-range integers) are the true cause of some of the memory safety problems (e.g. out-of-range array indexes).
Honestly; the single biggest problem that programmers deal with is that there’s too many languages. If there was a way to convert 100% of existing C source code directly into Rust source code with no human intervention required, plus some kind of international authority that can say “C is deprecated and will be banned outright in 10 years”, so that the number of languages remains constant; then Rust would be a solution to a problem that actually matters (instead of making the problem worse).
Brendan,
I don’t like marketing hype.
I also don’t like the continual “let’s convert everything from one language to the next every 5 years (and spend 20% of our lives relearning pointless syntactical differences)” or the “let’s have so many different languages that most programmers can’t understand most code” mentality.
Well, honestly that might be a much stronger point except that the language we are talking about is 50 years old and responsible for memory safety vulnerabilities and crashes spanning decades. While I understand your point about needlessly thrashing around for change’s sake alone, at the same time we should recognize that if we refuse to evolve, then we’re only holding ourselves back. So the rebuttal I offer you is that we need balance: C had a good run, but we need to recognize its deficiencies for modern software. Our industry is overdue for a low level programming replacement. That can be rust, which has a lot of merit, but IMHO it doesn’t necessarily have to be. If you don’t want this language to be rust, then you’d better start promoting other safe alternatives to it.
For Rust itself, it has some good ideas, but it’s 95% fanboys frothing at the mouth and 5% good ideas, and it’s nowhere near enough to justify the adoption costs.
The opposite is true too though. We’ve got legacy C fanboys who don’t want to change despite having already cost the world many billions in vulnerabilities, corporate data breaches, hospital breaches, OS crashes & data loss, and so on. Again, I think we need balance. Instead of simply badmouthing new safe languages, it would be more productive to find a way to work together towards common goals, though I realize that expecting everyone to work together may be naive in practice. You can call these new breeds of developers fanboys as much as you like, but here’s a real warning: they are going to replace you. This process of attrition, though slow, has already started and is ultimately inevitable as old projects and the developers maintaining them literally die off, giving way to future generations. Instead of taking offense at progress, we might swallow our pride and do our part to contribute to the future. Let’s not hold computing back in its stone age.
No, you think it’s time to fix one problem (memory safety) that was already fixed by many other languages, while ignoring all the other problems that have not been fixed; where some of the problems being ignored (e.g. preventing out-of-range integers) are the true cause of some of the memory safety problems (e.g. out-of-range array indexes).
Sounds like you’re talking about ranged integers…
https://docs.rs/ranged_integers/latest/ranged_integers/
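(For illustration, here is a minimal hand-rolled sketch of the ranged-integer idea in plain Rust; the type name and bounds are made up for the example and this is not the crate’s actual API.)

```rust
// A ranged integer as a simple newtype: the constructor is the only way to
// create one, so an out-of-range value can never exist, and code that
// receives a Percent never needs to re-check it.
#[derive(Clone, Copy, Debug)]
struct Percent(u8); // invariant: 0..=100

impl Percent {
    fn new(value: u8) -> Option<Self> {
        if value <= 100 { Some(Percent(value)) } else { None }
    }

    fn get(self) -> u8 {
        self.0
    }
}

fn main() {
    assert!(Percent::new(150).is_none()); // rejected up front, not at the array access
    if let Some(p) = Percent::new(42) {
        println!("{}", p.get());
    }
}
```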
Honestly; the single biggest problem that programmers deal with is that there’s too many languages. If there was a way to convert 100% of existing C source code directly into Rust source code with no human intervention required, plus some kind of international authority that can say “C is deprecated and will be banned outright in 10 years”, so that the number of languages remains constant; then Rust would be a solution to a problem that actually matters (instead of making the problem worse).
You’re right that we have more languages than we need. Few of these languages are simultaneously memory safe and suitable for use in low level code though, and IMHO rust stands out in this respect. However I don’t object to using other safe languages too. I just think that our industry needs to stop making excuses for legacy languages that have never stopped causing serious faults.
Our industry is overdue for a low level programming replacement. That can be rust, which has a lot of merit, but IMHO it doesn’t necessarily have to be.
OK, I’ve been working on the problem for too long now. The 4 biggest problems are:
1) UX (or “programmer experience”), consisting of:
1.1) Failing to separate content and presentation. Syntax, white space and formatting are all presentation. They need to be configurable at the IDE. If one programmer wants “Python like, with French keywords” and another programmer wants “C like, with German keywords” then that’s their choice. It should have absolutely nothing to do with the underlying programming language at all.
1.2) Feedback latency. If you enter invalid input you should be notified ASAP (immediately if possible, not when you compile, and not when you receive a bug report after your software is released). Currently this only works for trivial syntax errors. To make it work properly you need to build (the equivalent of) a static analyzer into the IDE (and then not have a reason to have error checking in the compiler itself).
2) Training/learning time. Learning how to be a good programmer should take an average high school graduate less than 6 months. The current “4 year degree followed by on the job learning/experience, followed by an additional 2 months per year (because of churn)” is a massive failure.
3) Quality of the final software. “It works” is the absolute bare minimum (the worst quality you can get away with). Modern languages (especially for web) encourage inefficient poor quality crap. A good language would do the opposite – no opaque abstractions, zero hidden overhead, no “easier to use a general purpose thing from a library” (instead of creating a special purpose solution designed for your specific case). Literally shove performance/efficiency in the programmer’s face so they cannot ignore it (e.g. maybe colour coding – make expensive code bright red and fast code pale green).
4) The “assumed OK” default assumption. Code can be split into 3 categories: code that can be proven correct (by analyzer, compiler, …), code that can be proven incorrect; and code that can’t be proven correct or incorrect. For the latter, almost all programming tools assume the code is correct when it may not be. This needs to be reversed – “Error on line 123: This statement can’t be proven safe (even though it might be) so you need to fix it”. Note that this will be painful and is related to “feedback latency”.
Now compare how Rust goes for these things – it wins a few points for not having garbage collection (for “zero hidden overhead”), but…
Mostly; Rust suffers from the same delusional thinking as everything else – that a programming language can help solve problems that have nothing to do with programming languages; and that adding more complexity will help to reduce problems caused by too much complexity.
The opposite is true too though. We’ve got legacy C fanboys who don’t want to change despite having already cost the world many billions in vulnerabilities, corporate data breaches, hospital breaches, OS crashes & data loss, and so on.
I’ve never seen a single C fanboy. C programmers are more like “We know C is shit, but it’s well known shit we’ve been dealing with for years”. They’re not the kind of people who say “We changed to a language fewer people can read, leading to fewer people finding fewer bugs, so let’s throw a parade and plaster it on every news site”.
Brendan,
1) UX (or “programmer experience”), consisting of
1.1) Failing to separate content and presentation. Syntax, white space and formatting are all presentation. They need to be configurable at the IDE. If one programmer wants “Python like, with French keywords” and another programmer wants “C like, with German keywords” then that’s their choice. It should have absolutely nothing to do with the underlying programming language at all.
In principle you could make each and every bit of syntax and keyword configurable. But the implications are so far reaching, how far would you take it? I assume you are thinking that all the IDEs/compilers and code repositories all need to be aware of user locales to present them with input and output mappings that are meaningful for every locale? Even the stack exchange code sites and their search engines could become aware of developers’ preferential locales… otherwise we’d have trouble searching for answers.
Looking at the pros and cons, I think you’d be hard pressed to make the case that the benefits outweigh the costs and trouble. Still it’s an interesting idea. Maybe pictographs like those used in PLC ladder logic could play a role…
https://www.plcacademy.com/ladder-logic-tutorial/
But I wonder how much of that would be seen as a gimmick by programming circles versus something with compelling benefit over normal programming languages.
1.2) Feedback latency.
I agree, although I think this has more to do with the IDE itself than the language.
2) Training/learning time.
I’m not convinced this is a language specific problem aside from the fact that C coders need new training. For new students, teaching them safe languages up front makes a lot of sense IMHO.
3) Quality of the final software. “It works” is the absolute bare minimum (the worst quality you can get away with). Modern languages (especially for web) encourage inefficient poor quality crap. A good language would do the opposite – no opaque abstractions, zero hidden overhead, no “easier to use a general purpose thing from a library”
I wholeheartedly agree that some abstractions are responsible for some of the worst performing software, but it is not abstractions themselves that are bad. OOP is full of abstractions and yet it has been used in very efficient scalable code. Also rust and C++ have the ability to write zero cost abstractions using things like templates and compile time optimizations. Like anything else, they can be abused, but ultimately I’d rather have them in my toolbox than not. That said, I’m interested in hearing what ideas you have to improve computer languages such that inefficiencies are intrinsically discouraged?
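(As a small illustration of the zero-cost-abstraction claim, a sketch: with optimizations enabled, the iterator pipeline below generally compiles down to the same machine code as the hand-written loop, though the exact codegen naturally depends on compiler version and target.)

```rust
// Hand-written index loop: explicit about every step.
fn sum_evens_loop(data: &[u32]) -> u32 {
    let mut total = 0;
    for i in 0..data.len() {
        if data[i] % 2 == 0 {
            total += data[i];
        }
    }
    total
}

// The "abstraction": iterator adapters. No allocation, no virtual calls; the
// adapters are plain generic structs that the optimizer flattens back into a loop.
fn sum_evens_iter(data: &[u32]) -> u32 {
    data.iter().copied().filter(|&x| x % 2 == 0).sum()
}

fn main() {
    let data = [1u32, 2, 3, 4, 5, 6];
    assert_eq!(sum_evens_loop(&data), sum_evens_iter(&data));
    println!("{}", sum_evens_iter(&data)); // 12
}
```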
4) The “assumed OK” default assumption. Code can be split into 3 categories: code that can be proven correct (by analyzer, compiler, …), code that can be proven incorrect; and code that can’t be proven correct or incorrect. For the latter, almost all programming tools assume the code is correct when it may not be. This needs to be reversed
Obviously there are states that are unknowable at run time because they depend on input. But even in such cases, the results should never lead to unsafe or undefined memory access. Instead it should trigger an exception, which is an appropriate response to unexpected input. What else would you do?
Languages can issue a compile time error every time there’s a potential for a run time error. You probably realize this is the basis of java’s checked exceptions. There are arguments to be made for and against them.
https://phauer.com/2015/checked-exceptions-are-evil/
In cases where aborting the software is the desirable action, particularly prototypes, checked exceptions are a tedious nuisance. On the other hand in a lifesaving or critical application they could be hugely important, so maybe they need to be optional?
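(For comparison, a rough Rust sketch of the same trade-off without exceptions: the possible failure is part of the return type, the compiler warns if the result is ignored, and the caller chooses between handling the error or deliberately aborting. Function names here are made up for the example.)

```rust
use std::num::ParseIntError;

// The possible failure is visible in the signature, playing roughly the role
// of a checked exception: callers cannot pretend it isn't there.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // "Critical application" style: handle the error case explicitly.
    match parse_port("80a80") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad port: {e}"),
    }

    // "Prototype" style: abort with a message, much like an unchecked exception.
    let port = parse_port("8080").expect("port must be a number");
    println!("port = {port}");
}
```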
Now compare how Rust goes for these things – it wins a few points for not having garbage collection (for “zero hidden overhead”), but…
Mostly; Rust suffers from the same delusional thinking as everything else – that a programming language can help solve problems that have nothing to do with programming languages; and that adding more complexity will help to reduce problems caused by too much complexity.
You’ve brought up many interesting & insightful points above, but I don’t think this conclusion can be drawn from those points. Guaranteeing safe memory and thread access has everything to do with programming and offers a ton of merit for a programming language. Whether it be rust or something else, I feel that memory safety should be a high priority for future programming languages.
In principle you could make each and every bit of syntax and keyword configurable. But the implications are so far reaching, how far would you take it?
The implication is that source code is pre-processed tokens (e.g. AST) and that nothing outside your IDE knows or cares what your syntax and keywords are.
Of course you can have multiple canonical representations – e.g. “base64 encoded pre-processed tokens” (for plain transport over text), plus maybe “python-like with English keywords” (e.g. select some code in your IDE, then do an “export selection as..”).
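(A toy sketch of that idea with made-up names: the stored artifact is a small AST, and each surface syntax is just a renderer the IDE applies on top of it.)

```rust
// Toy AST for a tiny expression language; the stored form is this data
// structure (or a serialization of it), and each "syntax skin" is a renderer.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
}

// One presentation: C-like infix.
fn render_c_like(e: &Expr) -> String {
    match e {
        Expr::Num(n) => n.to_string(),
        Expr::Add(a, b) => format!("({} + {})", render_c_like(a), render_c_like(b)),
    }
}

// Another presentation: Lisp-like prefix. Same underlying program.
fn render_prefix(e: &Expr) -> String {
    match e {
        Expr::Num(n) => n.to_string(),
        Expr::Add(a, b) => format!("(add {} {})", render_prefix(a), render_prefix(b)),
    }
}

fn main() {
    let program = Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)));
    println!("{}", render_c_like(&program)); // (1 + 2)
    println!("{}", render_prefix(&program)); // (add 1 2)
}
```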
I agree, although I think this has more to do with the IDE itself than the language.
It’s about redefining the relationship between the IDE and the compiler – e.g. IDE parses and does error checking (to reduce feedback latency as much as possible); then the compiler doesn’t do any parsing or error checking (it only receives “well formed AST”, and if it encounters a problem it’s a bug in the IDE).
I’m not convinced this is a language specific problem aside from the fact that C coders need new training.
It’s a language specific problem – compare the learning curve of languages like C, BASIC and COBOL to languages like C++, Rust, Haskell and Typescript. Heck, I could probably teach C to someone with no programming experience in 1 week, and the first 2 days would be “how computers work” and the last day would be a pizza party. 😉
The thing is, a lot of the “learning curve” problem (not all of it) is stuff designed to make it more convenient to write inefficient and/or harder to maintain software (things that prevent you from guessing what your code actually asks the CPU to do, and things that prevent other programmers from guessing whether something as innocent looking as “a = b + c” is an addition, or string concatenation, or merging 2 whole databases).
OOP is full of abstractions and yet it has been used in very efficient scalable code. Also rust and C++ have the ability to write zero cost abstractions using things like templates and compile time optimizations.
There are no efficient abstractions.
The myth that it’s efficient comes from relative thinking – it’s “efficient compared to something less efficient” and not efficient in an absolute sense (like “99% as efficient as theoretical maximum possible efficiency”).
For one simple example; let’s say you have a list of objects. How do you make that suitable for SIMD (switch from “array of structures” to “structure of arrays”)? You can’t – the OOP abstraction prevents you from writing this efficiently.
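(For readers who haven’t met the terms, a small Rust sketch of the two layouts with illustrative names: the per-object layout interleaves fields in memory, while the structure-of-arrays layout keeps each field contiguous, which is the shape SIMD and the cache prefer.)

```rust
// "Array of structures": the natural OOP-ish layout, one object per particle.
struct Particle {
    x: f32,
    y: f32,
    vx: f32,
    vy: f32,
}

// "Structure of arrays": each field is contiguous, so a loop over `x`/`vx`
// touches tightly packed memory and auto-vectorizes far more readily.
struct Particles {
    x: Vec<f32>,
    y: Vec<f32>,
    vx: Vec<f32>,
    vy: Vec<f32>,
}

fn step_aos(ps: &mut [Particle], dt: f32) {
    for p in ps {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
    }
}

fn step_soa(ps: &mut Particles, dt: f32) {
    for (x, vx) in ps.x.iter_mut().zip(&ps.vx) {
        *x += vx * dt;
    }
    for (y, vy) in ps.y.iter_mut().zip(&ps.vy) {
        *y += vy * dt;
    }
}

fn main() {
    let mut aos: Vec<Particle> = (0..4)
        .map(|_| Particle { x: 0.0, y: 0.0, vx: 1.0, vy: 1.0 })
        .collect();
    let mut soa = Particles {
        x: vec![0.0; 4],
        y: vec![0.0; 4],
        vx: vec![1.0; 4],
        vy: vec![1.0; 4],
    };
    step_aos(&mut aos, 0.5);
    step_soa(&mut soa, 0.5);
    println!("{} {}", aos[0].x, soa.x[0]);
}
```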
Obviously there are states that are unknowable at run time because they depend on input. But even in such cases, the results should never lead to unsafe or undefined memory access. Instead it should trigger an exception, which is an appropriate response to unexpected input. What else would you do?
What else you can do is prove (at compile time) that all possible input is handled in some way. It’s all bytes from an untrusted/external source; it all must be parsed properly. Whether that includes a “Heh, that’s bad input” dialog box or a “goto is bad on steroids” disaster doesn’t matter much. Invalid input that is correctly handled by a program is not a programmer error (and invalid input that isn’t correctly handled by a program is a programmer error).
Note that this (“prove that all possible input is handled”) sounds hard at first, until you break it into bytes and have proper variable range checking (not necessarily the “ranged integer” afterthought/experiment that Rust has).
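(One partial, already-existing instance of “prove all possible input is handled” is exhaustive matching; a sketch with made-up command names.)

```rust
// Every byte of untrusted input is mapped into a closed set of outcomes,
// and the compiler refuses to build the program if any case is unhandled.
enum Command {
    Ping,
    Quit,
    Unknown(u8),
}

fn parse(byte: u8) -> Command {
    match byte {
        b'p' => Command::Ping,
        b'q' => Command::Quit,
        other => Command::Unknown(other), // catch-all: no input is left unaccounted for
    }
}

fn handle(cmd: Command) -> &'static str {
    // Removing any arm here is a compile-time error, not a latent bug.
    match cmd {
        Command::Ping => "pong",
        Command::Quit => "bye",
        Command::Unknown(_) => "heh, that's bad input",
    }
}

fn main() {
    for b in [b'p', b'q', b'x'] {
        println!("{}", handle(parse(b)));
    }
}
```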
You’ve brought up many interesting & insightful points above, but I think this conclusion can be drawn from those points
Agreed. For most of it we have nothing to compare against (unless you compare against 16 year olds with no training cooperating to develop successful complex systems in games like Factorio).
Brendan,
The implication is that source code is pre-processed tokens (e.g. AST) and that nothing outside your IDE knows or cares what your syntax and keywords are.
Of course you can have multiple canonical representations – e.g. “base64 encoded pre-processed tokens” (for plain transport over text), plus maybe “python-like with English keywords” (e.g. select some code in your IDE, then do an “export selection as..”).
We use so many tools outside of the compilers (everything from grep and friends, diff, patch, svn/git/etc, search engines and so on) that exist outside of the IDE and expect text. They’d all need to be replaced or retooled.
Don’t get me wrong, I’ve thought about the same thing with the .net family of languages since they share the same intermediate form. But because the languages don’t all share the same high level feature set they don’t have a clean 1:1 representation such that you can just switch languages between developers. To get that clean 1:1 translation would require all our languages to be based on it and languages with unique features can’t be directly translated without granularity issues. This says nothing of comments. Even though these shouldn’t impact compiled code, their placement may not have a 1:1 representation either.
Things like C macros would be an outlier because they process source code as lines of text completely independently from the AST of the language. It largely exists to make up shortcomings of C, but regardless it highlights another way that 1:1 representation breaks.
It’s still an interesting idea and I’d like to see it in action, but I suspect any implementation would have to either be very complex, or impose strict restrictions & requirements upon the languages that are supported. Honestly I think it’d be a hard sell with a difficult path to acceptance.
It’s a language specific problem – compare the learning curve of languages like C, BASIC and COBOL to languages like C++, Rust, Haskell and Typescript. Heck, I could probably teach C to someone with no programming experience in 1 week, and the first 2 days would be “how computers work” and the last day would be a pizza party.
While C is a simple language for learning the basics, I don’t think learning basic C syntax makes you a good coder any more than learning to pour a bowl of cereal makes you a good chef. You’re not wrong, more sophisticated features do take time to learn, including OOP, exceptions, generics/templates, reflection and so on…but it’s not time wasted, you’ll also be a better more qualified developer for it. More to the point of our discussion though, your C coder with a week of training w/pizza party will be writing code with more exploits and memory faults than another developer learning safe languages, and that’s a problem we’ve been facing for decades.
IMHO safe languages need to become the new norm if we want software to be more robust.
The thing is, a lot of the “learning curve” problem (not all of it) is stuff designed to make it more convenient to write inefficient and/or harder to maintain software (things that prevent you from guessing what your code actually asks the CPU to do, and things that prevent other programmers from guessing whether something as innocent looking as “a = b + c” is an addition, or string concatenation, or merging 2 whole databases).
I don’t deny that abstractions *can* hide inefficiencies, but it doesn’t follow that all abstractions are inefficient. After all some of those abstractions can be written by specialists who’ve put a lot of time & work into optimization.
There are no efficient abstractions.
The myth that it’s efficient comes from relative thinking – it’s “efficient compared to something less efficient” and not efficient in an absolute sense (like “99% as efficient as theoretical maximum possible efficiency”).
I disagree. You can have inefficient abstractions and you can have efficient ones. For better or worse we need abstractions to tackle systems that are otherwise too complex for one developer to solve.
For one simple example; let’s say you have a list of objects. How do you make that suitable for SIMD (switch from “array of structures” to “structure of arrays”)? You can’t – the OOP abstraction prevents you from writing this efficiently.
This is a good topic for another discussion since we’re already running long here, but I’d like to see stronger code & structure optimization. Ideally a human developer’s main focus should be expressing the problem at hand in a way that’s easiest to understand and the computer should be tasked with figuring out how to best optimize it.
What else you can do is prove (at compile time) that all possible input is handled in some way.
With checked exceptions, it’s provable that the developer handled the exception. It’s the developer’s responsibility to decide what to do with the exception.
With unchecked exceptions, it’s provable that the process will throw an exception rather than cause a memory fault.
I think there’s merit for both depending on the project. You may want to disagree with me, but I don’t think there’s another alternative. What clearly should *not* happen is code that continues with silent memory corruption. These are some of the worst kinds of bugs to deal with due to the lack of explicit errors.
We use so many tools outside of the compilers (everything from grep and friends, diff, patch, svn/git/etc, search engines and so on) that exist outside of the IDE and expect text. They’d all need to be replaced or retooled.
Ok; let’s look at these tools. Grep is used for… I don’t know why (some IDEs lack search and replace?). Diff, patch and revision control exist because we’re attempting to use tools designed for “single player” in a “multi-player cooperative” game while every sane person is (e.g.) editing Wikipedia articles without having a clue how awful collaboration can be.
Search engines (and things like Stackoverflow) remain mostly the same (e.g. using a “python like with English keywords” canonical form), but most of the time people use normal English sentences and/or pseudo-code anyway.
Don’t get me wrong, I’ve thought about the same thing with the .net family of languages since they share the same intermediate form.
Yes – it needs to be some form of AST (Abstract Syntax Tree), and it does need to support errors/incomplete code (because code isn’t well formed while you’re still editing it). Intermediate representations (CIL, LLVM, Java byte-code, … ) are too low level to represent the higher level structure of code (e.g. things like “do/while” and “for()” loops become “if(condition) goto” in IR).
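(To make that concrete, a hand-written sketch, not actual compiler output: the structured `for` below and the “IR-shaped” version underneath compute the same thing, but the second has lost the loop structure and kept only conditional jumps.)

```rust
// The structured form a programmer writes...
fn sum_to(n: u32) -> u32 {
    let mut total = 0;
    for i in 0..n {
        total += i;
    }
    total
}

// ...and a rough sketch of the shape it takes after lowering toward an IR:
// the `for` is gone, leaving only a conditional branch back to the top
// (real CIL/LLVM/bytecode is lower-level still, but loses structure the same way).
fn sum_to_lowered(n: u32) -> u32 {
    let mut total = 0;
    let mut i = 0;
    loop {
        if !(i < n) {
            break; // "if (!cond) goto end"
        }
        total += i;
        i += 1; // then "goto top"
    }
    total
}

fn main() {
    assert_eq!(sum_to(10), sum_to_lowered(10));
    println!("{}", sum_to(10)); // 45
}
```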
More to the point of our discussion though, your C coder with a week of training w/pizza party will be writing code with more exploits and memory faults than another developer learning safe languages, and that’s a problem we’ve been facing for decades.
Erm, no. The problems we’ve been facing for decades are opposite to each other.
The first problem is for high performance software; where every year people say “compilers are optimizing better” and everyone that needs performance switches to inline assembly because compilers still can’t optimize. For this Rust is “better in theory (for safety)” but it’s just going to get laughed at by people that don’t care about safety. I’m possibly among the worst of these (when assembly isn’t enough I’ll happily switch to self-modifying or self-generating assembly), although I doubt I’m the only one (you might be surprised what people do – e.g. most regular expression libraries are JIT compilers now).
The second/opposite problem is for low performance software – things that are already written in safe languages like Java, Javascript, Python, etc. For this Rust is “better in theory (for performance)” but it’s just going to get laughed at by people that don’t care about performance.
IMHO safe languages need to become the new norm if we want software to be more robust.
“We” (users) want software to be faster and/or cheaper; and we want operating systems to provide isolation between processes so that a process can be as unsafe as it wants without causing a reason to care if/when that process implodes.
“We” (programmers) all want different things.
Other people (never us) want a security update every hour on the hour, with everything encased in 12 layers of bubble wrap, and warning signs hanging from the ceiling in all hallways saying “Please don’t hit your head on this warning sign”.
You can have inefficient abstractions and you can have efficient ones. For better or worse we need abstractions to tackle systems that are otherwise too complex for one developer to solve.
No. You can have inefficient abstractions, and even more inefficient abstractions; and the more abstract it gets the more inefficient it becomes. We don’t need abstractions, we just need modularity so that one programmer can work on one piece while another works on another piece (and all programmers can find out what all pieces do, and modify them to make them more efficient for their specific use case).
I think there’s merit for both depending on the project. You may want to disagree with me, but I don’t think there’s another alternative.
I object to exceptions – it’s hidden bloat (unwinding) that is always worse than “unhidden, shoved in your face bloat”, on top of “goto is bad on steroids” (control flow that jumps not just to an easily found location in the same function, but to “somewhere up the chain of callers, maybe”).
I created “alternative exits” to replace (checked) exceptions. Essentially, a function can have input parameter/s that contain alternative address/es to return to, and (if an alternative exit is used) the function just replaces “return RIP” on the stack before doing a return. That way the caller is forced to have all their clean-up in plain sight (but you still avoid the need to return an error code and the caller avoids an “if(result != good) { …” comparison and branch).
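(The mechanism described works at the assembly level by patching the return address on the stack; as a very loose high-level analog only, here is a Rust sketch where the caller hands the callee a handler to jump into instead of checking an error code after the call. The names and structure are illustrative assumptions, not his implementation.)

```rust
// The callee either returns normally or transfers control straight into the
// caller-supplied handler; the call site never does `if result != good`.
fn parse_count(input: &str, on_bad_input: impl FnOnce(&str) -> u32) -> u32 {
    match input.trim().parse::<u32>() {
        Ok(n) => n,                    // normal exit
        Err(_) => on_bad_input(input), // "alternative exit"
    }
}

fn main() {
    // The clean-up/fallback for the alternative exit sits in plain sight at the call site.
    let count = parse_count("42x", |bad| {
        eprintln!("bad input {bad:?}, falling back to 0");
        0
    });
    println!("count = {count}");
}
```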
Brendan,
Ok; let’s look at these tools. Grep is used for… I don’t know why (some IDEs lack search and replace?). Diff, patch and revision control exist because we’re attempting to use tools designed for “single player” in a “multi-player cooperative” game while every sane person is (e.g.) editing Wikipedia articles without having a clue how awful collaboration can be.
Search engines (and things like Stackoverflow) remain mostly the same (e.g. using a “python like with English keywords” canonical form), but most of the time people use normal English sentences and/or pseudo-code anyway.
Not everyone uses the same tools, developers do use a lot of source code management and collaboration tools outside of the IDE. A company I work for uses a “cloud” web based source code management and automated build systems outside of visual studio – the error numbers and line numbers would become meaningless because those refer to lines of text in the source, but these are no longer canonical in your scenario. Everything that uses source code would have to be retrofitted. This is my point, switching from text to binary “source code” would have a lot of repercussions for the way we use source code today.
…are too low level to represent the higher level structure of code (e.g. things like “do/while” and “for()” loops become “if(condition) goto” in IR).
Ok, but even if you add second higher level intermediate byte code, you still have the same issue: not all languages share the same high level constructions. C doesn’t even have strings for example. This makes things challenging to convert from one language to another and back again without any loss especially when things need to be written differently to accommodate a language’s featureset. The exception would be only supporting languages that are superficially different but have an exact 1:1 representation of high level code.
Erm, no. The problems we’ve been facing for decades are opposite to each other.
100% yes actually. There’s not even a sliver of doubt: your C trainees will be producing C code with memory and multithreading bugs just like everyone else does when they start tackling larger programs. That’s just not something we should ignore.
The first problem is for high performance software; where every year people say “compilers are optimizing better” and everyone that needs performance switches to inline assembly because compilers still can’t optimize.
I didn’t want to get into this here because it’s a big topic to chew off. Today’s optimizers are good at certain types of local optimization, but there’s a lot of room for improvement in areas like structural, intra-procedural, algorithmic, and API-level optimization. In principle, an optimizer should be allowed to change all of these as long as it doesn’t change the output, but we’re not there yet.
For this Rust is “better in theory (for safety)” but it’s just going to get laughed at by people that don’t care about safety.
If the software industry doesn’t care about safety, then perhaps you’re right. The problem though is that society at large is harmed by software faults regardless of what developers want. I think new generations are starting to come around to the importance of safe languages even if legacy developers fight it.
The second/opposite problem is for low performance software – things that are already written in safe languages like Java, Javascript, Python, etc. For this Rust is “better in theory (for performance)” but it’s just going to get laughed at by people that don’t care about performance.
Sure, but I’ve been pretty consistent in saying that rust isn’t the only solution going forward. I do think that rust has strengths for low level work that many other safe languages are not suitable for though.
“We” (users) want software to be faster and/or cheaper; and we want operating systems to provide isolation between processes so that a process can be as unsafe as it wants without causing a reason to care if/when that process implodes.
Yes obviously the OS should contain faulty memory accesses, but that’s not a justification for software faults/exploits/crashes in the first place.
User: “Oh my game crashed”
Developer: “It’s all good, the OS shut down the faulty process…”
User: “Um, no. It’s not good the game crashes at all!”
No. You can have inefficient abstractions, and even more inefficient abstractions; and the more abstract it gets the more inefficient it becomes. We don’t need abstractions, we just need modularity so that one programmer can work on one piece while another works on another piece (and all programmers can find out what all pieces do, and modify them to make them more efficient for their specific use case).
I’m taken aback by your saying this. Good abstractions are crucial to modern software development and I’m perplexed by your assertion that we don’t need them. Without software abstractions we’d be regressing back to the days of the original BASIC programs. Even BASIC itself has evolved to incorporate OOP for productivity. We as humans need abstractions to divide and conquer big problems, and it would be practically impossible to write sophisticated modern software without abstractions in some form or other. The goal should be abstractions that are convenient and efficient, not to eliminate them.
No abstractions makes so little sense that I must be misunderstanding you.
I could have worded this better…
“Yes obviously the OS should contain faulty memory accesses, ”
Haha! Obviously I meant the OS should prevent a bug in one process from affecting the rest of the system.
The first graph on the link shows the number of reported memory safety vulnerabilities was dropping considerably from 2019. Furthermore they say they don’t plan to rewrite the C/C++ portions of code; only new code added is in Rust. And they say the number of reported memory safety vulnerabilities for Rust code is 0. It looks like they didn’t take C/C++ memory safety vulnerabilities all that seriously before Rust. And are now more mindful about that. Anyway. Let’s give it a decade. Then we can talk. On where Rust stands. The last portion of the article is something I like. And that is they say they will write Linux kernel drivers in Rust. Here I have no problem whatsoever. The sooner they start providing upstream device drivers for their flagship phones. The better. If they will use Rust instead of C. Well. We will just have to cope with that. Let’s see if they will pull through with what was promised.
Geck,
The last portion of the article is something I like. And that is they say they will write Linux kernel drivers in Rust. Here I have no problem whatsoever. The sooner they start providing upstream device drivers for their flagship phones. The better. If they will use Rust instead of C. Well. We will just have to cope with that. Let’s see if they will pull through with what was promised.
I didn’t see a “promise”…? Also they might not be the kinds of device drivers you want. Google’s patches might not be hardware device drivers but rather feature drivers needed to implement networking, file systems, encryption, virtualization, and what not.
Also note that google didn’t write and may not even possess the device drivers for the majority of android phones on the market. That distinction probably goes to Qualcomm.
Their statement doesn’t leave much room for a different interpretation. It states. Now that Linux supports Rust we plan to bring memory safety to Linux. Starting with kernel drivers. You can’t be more explicit than that. As they already said they don’t plan to rewrite existing C/C++ code. That comes down to new kernel drivers. If we speculate a bit. They are likely already designing their chips to rival M series from Apple. And likely already writing device drivers for it in Rust. Like Asahi Linux but for Google. On top of that the situation is improving all around. That is products like Raspberry Pi are progressing in regards to getting upstream device drivers. Imagination Tech decided to give in. Qualcomm will for sure follow or will be left behind. It is really in their best interest to start doing that ASAP.
Geck,
Their statement doesn’t leave much room for a different interpretation. It states. Now that Linux supports Rust we plan to bring memory safety to Linux. Starting with kernel drivers. You can’t be more explicit than that.
You may as well explicitly quote what they actually said: “With support for Rust landing in Linux 6.1 we’re excited to bring memory-safety to the kernel, starting with kernel drivers.”
“Excitement” is not technically a “promise” and nowhere do they explicitly talk about open sourcing device drivers (either their own or others).
As they already said they don’t plan to rewrite existing C/C++ code. That comes down to new kernel drivers. If we speculate a bit. They are likely already designing their chips to rival M series from Apple. And likely already writing device drivers for it in Rust.
I also hope google open sources the device drivers for its own hardware. Regardless though I did not see any “explicit promise” to do it.
On top of that the situation is improving all around. That is products like Raspberry Pi are progressing in regards to getting upstream device drivers. Imagination Tech decided to give in. Qualcomm will for sure follow or will be left behind. It is really in their best interest to start doing that ASAP.
I don’t mind the optimism, but I’ve gotten a very bipolar vibe from you lately. You flip between optimism to lashing out at the hopelessness of the situation and even saying FOSS needs to be forced onto people. And at other times like now you flip back to optimism even suggesting that companies who don’t upstream their drivers urgently are going to be left behind soon. Next month you might be back to saying it’s hopeless again because people are too apathetic. I’m really not trying to disparage your views, but as I try to understand them, I get the impression they’re all over the place and I’m trying to understand why.
OK they are then excited to bring Rust to new kernel drivers. But for you that doesn’t read as a promise they will actually do it. As for the optimism. It can be a drag indeed. Fluctuation of the optimism.
https://scribe.rip/using-rust-at-a-startup-a-cautionary-tale-42ab823d9454
That can be a problem indeed. If the toolchain gets in your way. Constantly trying to tell you you’re doing it wrong. As eventually the toolchain will win and you will give up. Due to its simplicity, I feel that C will keep going strong in spaces such as kernel development.
I find it a strange attitude to have, wanting to be able to write code that is potentially broken. Having a strict compiler, guaranteed working concurrency and a powerful type system is the best thing since sliced bread. Sure you can quickly “fart out code” and quickly prototype something in Python as the article writer says, but I highly doubt that this code will be re-written cleanly later once “it works”. This will create headaches for the “nimble team”, slowing it down later in the product life cycle, when they have to rewrite that stuff, without the help of a compiler pointing out the problems the changes may have created in other parts of the system.
I am not sure having a fast moving team that provides an unstable product is preferable to a slower moving team providing stable systems to the customers. Also I am confused as to how it was hard for his team to learn and understand Rust; I worked with mathematicians and physicists that came from Python / Matlab – they all quickly picked up how to write Rust code. A programmer by trade should be even faster. Also I doubt that for a trivial web-app, as the author was building, his team really got into having to use lifetimes or other complex features of Rust. All the hard stuff is needed for writing libraries, which is what the Rust experts on the team can do. The use of the library then is trivial, which is what the normal programmer will do most of the time.
mike-kfed,
This will create headaches for the “nimble team”, slowing it down later in the product life cycle, when they have to rewrite that stuff, without the help of a compiler pointing out the problems the changes may have created in other parts of the system.
That’s true. Everyone likes to think they’re perfect, but in real projects like those the google article talked about, the reality is that humans are fallible, and having many developers in the code base exacerbates the need for compile time code verification.
I do think there are fair points being made in the link, like the fact that almost none of their new hires had prior training or experience in rust. I can sympathize with that. It will be difficult to find qualified rust developers until we start training more of them.
He says “the problems that Rust is designed to avoid can be solved in other ways”, and I actually agree to a point. There are lots of high level languages that provide memory safety. In general this is accomplished using garbage collection, and if GC doesn’t present any problems for a project, then it opens up a lot of languages that can provide run time safety. However the rust features that he complains about aren’t poorly conceived features; rust’s design is an inherent byproduct of a memory safe language that shifts all the burden of memory safety to compile time without runtime overhead.
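(A small sketch of what “memory safety without runtime overhead” looks like in practice: resources are released at points fixed at compile time, rather than whenever a garbage collector gets around to them. The `Connection` type is made up for the example.)

```rust
// Scope-based cleanup: the compiler inserts the drop calls at statically
// known points, so there is no collector pausing the program to find garbage.
struct Connection {
    name: &'static str,
}

impl Drop for Connection {
    fn drop(&mut self) {
        println!("closing {}", self.name);
    }
}

fn main() {
    let _outer = Connection { name: "outer" };
    {
        let _inner = Connection { name: "inner" };
        println!("inner scope running");
    } // `_inner` is closed exactly here
    println!("outer scope still running");
} // `_outer` is closed exactly here
```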
“With Rust, though, one needs to learn entirely new ideas — things like lifetimes, ownership, and the borrow checker. These are not familiar concepts to most people working in other common languages, and there is a pretty steep learning curve, even for experienced programmers.”
He’s right that some developers may not understand this well, but he’s missing something rather fundamental. While C does not make the developer express these concepts to the compiler, C does not obviate the developer from the need to understand the very same ideas of lifetimes, ownership, borrowing, etc.. The same concepts still apply to correct C code, but they’re implied instead of explicit. They have to be manually verified in the programmer’s head. Simply ignoring them comes at the peril of the developer’s code quality & correctness. A developer who does not understand these “rust concepts” and apply them to their own C code will not be able to keep their code bug free. The concepts aren’t new and rust didn’t invent them, but by requiring the developer to express their intentions explicitly, rust is able to prove the absence of violations, which is a valuable thing to prove.
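(A small sketch of what “explicit instead of implied” means here, with made-up function names: the same who-owns-this and how-long-is-this-valid questions exist in the equivalent C, but only in Rust are the answers part of signatures the compiler checks.)

```rust
// Ownership and borrowing spelled out in the signatures. A C version of this
// code has exactly the same rules ("who frees the buffer, and how long may
// that pointer be used?"), but they live in comments and in the programmer's
// head rather than in anything the compiler verifies.

// Borrows the string: the caller keeps ownership, and the returned slice is
// only valid as long as `input` is (the elided lifetime ties them together).
fn first_word(input: &str) -> &str {
    input.split_whitespace().next().unwrap_or("")
}

// Takes ownership: the caller can no longer use `buffer` afterwards, and the
// compiler enforces that instead of leaving it as a code-review convention.
fn consume(buffer: String) -> usize {
    buffer.len()
}

fn main() {
    let line = String::from("hello world");
    println!("{}", first_word(&line)); // `line` is only borrowed here
    let n = consume(line);             // ownership moves into `consume`
    println!("{n}");
    // println!("{line}");             // would not compile: use after move
}
```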