Quinn Norton recently wrote Everything Is Broken, an article lamenting the sad state of software and internet security in general, concluding that there are "plenty of ways we could regain privacy and make our computers work better by default. It isn’t happening now because we haven’t demanded that it should." In recent months, we've seen ever more reasons why it needs to happen, but if you're a developer, the task of building secure software can seem daunting. Vulnerabilities are the bane of large, complex software projects, and companies like Microsoft spend millions trying to address them.
Remote Code Execution (RCE) vulnerabilities are the ones you care about the most. RCE vulnerabilities are virtually all of the vulnerabilities that people like the NSA pay money for and use, since they let attackers take everything you've saved on your computer and more. As Quinn Norton said, "they are bugs that let someone take over your whole computer, see everything you type or read and probably watch you pick your nose on your webcam." Or as the NSA recently boasted, "If we can get the target to visit us in some sort of web browser, we can probably own them."
This shouldn't be a surprise, but since it's popular to claim everything is hackable and nothing can be secure, it's worth spelling out. Remote code execution vulnerabilities are not hard to prevent if developers follow a few simple, practical rules from the start, since they basically always fall into the below categories:
- Memory corruption flaws
- Generating a command to be executed by another language
- Wrong file access
- Giving away remote control

Whatever you build, a few rules apply across the board:

- Above all, isolate as much code as possible from unauthenticated users.
- Don't include hardcoded credentials.
- Do all validation and authentication server-side.
- If you're writing a web application, you probably shouldn't. But if you do anyway, you'll need to check for CSRF, session fixation, XSS, and many other vulnerabilities.
Memory corruption flaws

To categorically avoid these, use only memory-safe languages, like C#, Java, Ruby, Go, Rust, D, or Ada. Contrary to popular opinion, many of those compile natively and portably, run fast, require minimal dependencies, interface easily with C libraries, and have lots of developer support behind them. If you don't do this, RCE vulnerabilities can happen virtually anywhere and everywhere in your program: you could accidentally access an array index out of bounds, use uninitialized memory, or use an object after it was freed or free it twice, since keeping track of allocations and deallocations is really hard in complicated programs, and the NSA will undoubtedly own you.
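The difference is easy to demonstrate. Here is a short sketch in Python (itself a memory-safe language, though not on the list above): an out-of-bounds access is caught by the runtime and surfaces as an exception, rather than silently reading or corrupting adjacent memory the way it can in C.

```python
def read_slot(slots, index):
    """Return the value at index, or a rejection message for bad indexes."""
    try:
        return f"value: {slots[index]}"
    except IndexError:
        # The runtime stops the bad access before it happens;
        # no memory is read out of bounds or corrupted.
        return f"rejected out-of-bounds index {index}"
```

The equivalent mistake in C would read whatever happens to sit past the end of the array, and writing past it is how attackers hijack control flow.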
If you do use a memory-safe language, RCE vulnerabilities can only happen at one of the points defined below in your application.
Generating a command to be executed by another language

To categorically avoid this, don't use APIs that parse and execute instructions.
That means don't craft and execute SQL statements or OS commands or use functions like eval with user-provided data in the command to be executed. Allowing users/untrusted data to instantiate and call methods on arbitrary object types is also often equivalent to eval, so don't deserialize arbitrary objects either and avoid reflective programming.
For example, you can use prepared statements or an object-based database interface that does not do command parsing.
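As a concrete sketch, here is a parameterized query using Python's built-in sqlite3 module; the users table and its columns are made up for illustration. The point is that the untrusted value is bound as data through the placeholder, so it can never change the structure of the SQL statement itself:

```python
import sqlite3

def find_email(conn, username):
    """Look up a user's email without ever splicing input into SQL text."""
    cur = conn.execute(
        "SELECT email FROM users WHERE username = ?",  # ? = bound parameter
        (username,),  # untrusted input travels separately from the SQL text
    )
    row = cur.fetchone()
    return row[0] if row else None
```

A classic injection payload like `' OR '1'='1` is simply treated as a (nonexistent) literal username, not as SQL.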
Sometimes you cannot follow this advice, for example when you are creating a web application, since the HTML and JavaScript you generate will be parsed and executed by clients. In those cases, you need to carefully examine each spot in your code where you generate HTML or other instructions, to guarantee no injection can occur. You could whitelist allowed characters to exclude HTML special characters, or use sanitization functions.
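For instance, a minimal sketch using the standard library's html.escape; the render_comment function and its fields are illustrative assumptions, but the rule is general: escape every piece of untrusted text at the point where it is interpolated into markup, so the browser parses it as text, never as tags or script.

```python
import html

def render_comment(author, body):
    """Build an HTML fragment with all untrusted values escaped."""
    # html.escape converts <, >, &, and quotes into entities.
    return f"<p><b>{html.escape(author)}</b>: {html.escape(body)}</p>"
```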
Wrong file access

To categorically avoid this, don't allow users to specify file paths, or any part of file paths, to access.
For example, if you accept uploads, generate your own name and path to save the files; don't use the name and extension given by the user.
If your code reads or writes a file whose path is determined by untrusted input, you can easily fall victim to directory traversal or arbitrary file upload attacks, in which attackers are able to e.g. write executable files or shared libraries to your filesystem that will be executed, or overwrite scripts or configuration data to grant themselves access. If you cannot avoid some arbitrary file access, ensure you whitelist file extensions and filename characters and detect any double-dot sequences. If you do detect any ".." or punctuation like : or ; or a null byte or any other special characters, fail fast! Don't try to clean the path.
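Putting both rules together, here is a hedged sketch in Python: whitelist the extension, fail fast on suspicious input, and generate the stored filename yourself. The upload directory and the extension whitelist are assumptions for illustration.

```python
import os
import secrets

UPLOAD_DIR = "/srv/uploads"                    # assumption for illustration
ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}  # whitelist, assumption

def safe_upload_path(client_filename):
    """Return a server-chosen path for an upload; never trust the client name."""
    ext = os.path.splitext(client_filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError("disallowed file extension")  # fail fast
    if ".." in client_filename or "\x00" in client_filename:
        raise ValueError("suspicious filename")        # fail fast, don't clean
    # The stored name is random and server-generated, so attacker-chosen
    # paths like "../../etc/cron.d/evil" never reach the filesystem.
    return os.path.join(UPLOAD_DIR, secrets.token_hex(16) + ext)
```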
Giving away remote control

To categorically avoid this, don't provide remote users the ability to execute arbitrary commands, write arbitrary files, etc.
If you can't do that because you are writing an auto-updater, ensure the updates you receive are validated by proven public-key authentication methods, such as SSL/TLS or digital signatures, and validate that the signature was actually issued by your company or organization. Don't act on anything that wasn't signed.
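The control flow is roughly the following sketch, with a big caveat: Python's standard library has no public-key primitives, so an HMAC over a shared key stands in here purely to illustrate verify-before-act. A real auto-updater must use asymmetric signatures (e.g. RSA or Ed25519) so that clients hold only a public key. All names here are assumptions.

```python
import hmac
import hashlib

def apply_update(payload: bytes, signature: bytes, key: bytes) -> str:
    """Refuse to act on any update bytes that fail authentication."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    # Constant-time comparison; reject anything that wasn't signed by us.
    if not hmac.compare_digest(expected, signature):
        raise ValueError("update rejected: bad signature")
    return "update applied"  # only reached for authenticated payloads
```

The essential property is that the "apply" step is unreachable unless verification succeeds; there is no code path that installs first and checks later.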
If you can't do that because you are writing a remote administration application, stop and re-think your life. Are you writing one that will be more secure than the ones already out there? For example, are you writing a memory-safe replacement for a non-memory-safe option like Remote Desktop? If not, stop here; if so, proceed.
There are many things you will need to do to ensure it is secure, starting with the general rules listed near the top of this article.
Other vulnerabilities
Other vulnerabilities do exist, and you should take a look at them too, but prevent the RCE vulnerabilities first, or nothing else will matter. Here are a few examples of where to look next:
Some of the trickiest vulnerabilities deal with crypto; sending any private information over plain text is generally regarded as a Bad Thing. But protocol design is very hard to do without introducing subtle vulnerabilities, so I suggest reading books/taking courses in secure protocol design and definitely getting an independent review from someone who has proven expertise and has previously broken insecure protocols.
Passwords are just generally awful, but if you insist on using them, there are dozens of ways to screw up and end up vulnerable to attack.
Don't be stupid. Some vulnerabilities can only be categorized as you-gotta-be-kidding-me, such as not requiring authentication before serving customer data.
#1 by passenger on May 27, 2014 - 4:16 am
“If you do use a memory-safe application, RCE vulnerabilities can only happen at one of the below defined points in your application”
I don’t think so.
#2 by scriptjunkie on May 27, 2014 - 10:01 pm
Do you have any counterexamples? I’d love to see them. So far though, this is all I’ve seen, and I don’t think other vulnerabilities should be possible.
I mean, sure, if you gave an attacker the ability to write arbitrary data to an arbitrary named pipe, you could get RCE, but that would be a contrived example that isn’t substantively different than what I listed, since it’s really the same thing as arbitrary file write. Do you have any examples of real RCE vulnerabilities that wouldn’t fit in one of the above categories, or wouldn’t be mitigated by following the rules above?
#3 by deeso on July 5, 2014 - 3:33 am
So one counterexample is that the “memory safe” language actually relies on lower-level languages like C or assembly. For example, Java is mostly safe, but there have been instances when Java's native libraries failed to perform proper checks and memory corruption bugs occurred (though I can’t cite them exactly right now). Google+bitly says: http://bit.ly/1mh6VRd.
I should note that when you make broad statements like: “If you do use a memory-safe application, RCE vulnerabilities can only happen at one of the below defined points in your application”, the onus of proof actually falls on you 😉
#4 by scriptjunkie on July 8, 2014 - 2:20 am
Unfortunately, there is a lot of confusion between the security of applications written in Java (great) and running untrusted Java code like a Java applet from a website (bad idea). It’s like the difference between the safety of building a chair with wood, a hammer, and nails (great!) and the safety of allowing your worst enemy/Hitler/a terrorist to do anything to you they want with the same hammer and nails (not good). Ironically, the Java runtime is NOT written in a memory-safe language; it’s written in C++. What jduck is presenting on is how native C/C++ code (slide 12) has memory corruption vulnerabilities when parsing untrusted input. This is not a counterexample to my point; it perfectly illustrates my point. No legitimate program in Java is going to trigger one of those. While my statement “If you do use a memory-safe application, RCE vulnerabilities can only happen at one of the below defined points in your application” is not as certain as a rigorous mathematical proof, it is certain enough that I’d bet $100 no counterexample will ever appear in my lifetime. You can rely on a program written in a memory-safe language to not have memory corruption vulnerabilities.