Postel’s Principle is an adage from the specification of TCP, aimed at implementors, commonly cited and worshipped:
Be conservative in what you do, be liberal in what you accept from others
Postel’s Principle is wrong, or perhaps wrongly applied. The problem is that while implementations handle well-formed messages consistently, they all handle errors differently. If the same data means two different things to different parts of your program or network, it can be exploited: interoperability is achieved at the expense of security.
These problems exist in TCP, the poster child for Postel’s Principle. It is possible to make different machines see different input by building packets that one machine accepts and the other rejects. In Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection, the authors use IP fragmentation, corrupt packets, and other ambiguous corners of the standard to smuggle attacks past firewalls and intrusion detection systems.
Postel’s Principle creates these problems by encouraging people to accept faulty input without enforcing consistency. These problems aren’t unique to Postel’s Principle: similar notions underpin the confused deputy problem and cross-site scripting attacks.
In early versions of perl there was the poison null byte attack. By passing a parameter like ‘file.cgi%00file.jpg’ to a CGI script (where %00 is a URL-encoded null byte), perl would see one string, but the null-terminated C library underneath would see another: perl took ‘jpg’ as the extension, while C stopped at the null byte and saw ‘cgi’.
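The discrepancy can be sketched in a few lines of Python (the function names are illustrative, not from any real CGI implementation): one function treats the parameter as a full byte string, the other as a C-style null-terminated string.

```python
# Sketch of the poison null byte: a high-level language sees the whole
# byte string, while C-style code stops at the first NUL byte.

def highlevel_extension(path: bytes) -> bytes:
    # Sees every byte, so the extension is whatever follows the last dot.
    return path.rsplit(b".", 1)[-1]

def c_style_extension(path: bytes) -> bytes:
    # A null-terminated view ends at the first \x00, like C's strlen.
    truncated = path.split(b"\x00", 1)[0]
    return truncated.rsplit(b".", 1)[-1]

payload = b"file.cgi\x00file.jpg"     # URL-decoded 'file.cgi%00file.jpg'
print(highlevel_extension(payload))   # b'jpg' -- looks like a harmless image
print(c_style_extension(payload))     # b'cgi' -- the filesystem sees a script
```

The same bytes pass a high-level extension check as an image, yet reach the filesystem as a script: two components, two meanings, one exploit.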
Similarly, HTTPS certificates suffered from null bytes and corrupt input. The authority issuing a certificate and the client validating it saw different names, thanks to an embedded null byte. These discrepancies appear whenever one implementor is a little more liberal, or a little less, than the next. I am not blaming Postel for these bugs, but the Principle is often invoked to defend them.
Patterson, Sassaman, and Bratus’ A Patch for Postel’s Robustness Principle argues for the following updates to mitigate ambiguity and related security issues:
- Be definite about what you accept.
- Treat valid or expected inputs as formal languages, accept them with a matching computational power, and generate their recognizer from their grammar.
- Treat input handling computational power as a privilege, and reduce it whenever possible.
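The second point can be sketched concretely. Assuming a toy grammar where valid input is a short filename followed by a dot and a lowercase extension, the expected inputs form a regular language, so a regular expression is all the computational power the recognizer needs or is granted:

```python
import re

# The expected input is a regular language, so we recognise it with a
# regular expression -- no more computational power than the grammar needs.
# The grammar itself (name lengths, character classes) is an illustrative toy.
FILENAME = re.compile(rb"[A-Za-z0-9_-]{1,64}\.[a-z]{1,8}")

def recognize_filename(data: bytes) -> bytes:
    # Be definite about what you accept: anything outside the grammar is
    # rejected outright, never guessed at or repaired.
    if not FILENAME.fullmatch(data):
        raise ValueError("input rejected: not in the filename grammar")
    return data
```

Under this recognizer, ‘file.cgi%00file.jpg’ is simply rejected, because a null byte is not in the grammar; there is no second interpretation for another component to disagree about.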
The paper, and other work and talks from the LANGSEC group, outlines a manifesto for language-based security: be precise and consistent in the face of ambiguity, and use simpler parsers to shrink your attack surface. You can still be liberal in what you accept, but that liberality should be formalised and standardised, not left to each implementor.
Instead of just specifying a grammar, specify the parsing algorithm, including its error-correction behaviour (if any). Security requires being able to validate or sanitise input consistently across all implementations, but this doesn’t have to come at the expense of interoperability. Notably, HTML5 gained a standardised parser, not for security reasons, but to ensure that bad HTML looks the same in every browser.
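A hypothetical illustration of why unspecified error correction is dangerous (the field name and repair strategies are invented for the example, echoing real request-smuggling bugs): two implementations each “liberally” accept a message with a duplicated field, but repair it differently.

```python
# Two well-meaning parsers accept the same malformed message -- a header
# block with a duplicate Host field -- but disagree about what it means.

def parser_a(header: bytes) -> bytes:
    # Implementation A's repair rule: the first value wins.
    values = [line.split(b":", 1)[1].strip()
              for line in header.splitlines() if line.startswith(b"Host:")]
    return values[0]

def parser_b(header: bytes) -> bytes:
    # Implementation B's repair rule: the last value wins.
    values = [line.split(b":", 1)[1].strip()
              for line in header.splitlines() if line.startswith(b"Host:")]
    return values[-1]

malformed = b"Host: internal.example\r\nHost: evil.example\r\n"
print(parser_a(malformed))  # b'internal.example'
print(parser_b(malformed))  # b'evil.example'
```

Had the specification said “duplicate fields are an error: reject the message”, or even just fixed one repair rule, both implementations would agree and the discrepancy would vanish.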
Interoperability doesn’t just mean working in the same way, but failing in the same way too. Implementation-specific behaviour is the root of all evil, and invoking Postel’s Principle will not redeem your soul.