Information Security is littered with data breaches. That would not bother me so much if each one traced back to some interesting or novel attack, but that is not what we have. Attacks against enterprises and consumers follow the same patterns so reliably that the OWASP “Top 10” stayed largely the same for over a decade (roughly 2003 to 2015).
How depressing is that? I get it: once someone discovers buffer overflows, there will be several novel ways to use that technique to make a target computer do something it wasn’t supposed to, and it takes time for programmers to catch up and remove buffer overflows completely from their code. But how long should that take? When will we have software that is buffer-overflow-proof at compile time?
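For anyone who hasn’t stared one down, here is a minimal sketch of the bug class in C. The function names and buffer size are mine, invented for illustration, but the pattern is the entire vulnerability: copying attacker-controlled input into a fixed-size buffer with no bounds check.

```c
#include <stdio.h>
#include <string.h>

/* Classic overflow: strcpy() writes past the end of `name` whenever
 * the attacker-controlled input is longer than 15 bytes, clobbering
 * whatever happens to sit next to the buffer on the stack. */
void greet_unsafe(const char *input) {
    char name[16];
    strcpy(name, input);   /* no bounds check */
    printf("Hello, %s\n", name);
}

/* The fix is a one-liner: bound the copy to the size of the buffer. */
void greet_safe(const char *input) {
    char name[16];
    snprintf(name, sizeof(name), "%s", input);  /* truncates, never overflows */
    printf("Hello, %s\n", name);
}

int main(void) {
    greet_safe("a string much longer than sixteen characters");
    return 0;
}
```

The fix has been understood for as long as the bug has, which is exactly what makes its persistence so depressing.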
Not to pick on buffer overflows specifically: any attack vector that has been around for more than an upgrade cycle or two should be stamped out. Cross-site scripting (XSS) is not a feature. If you wrote code that relies on it for something interesting (say, authentication), take the time now to pay down that debt and fix the problem once and for all.
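The defence against XSS has been equally well understood for just as long: encode untrusted data when you write it into a page. Here is a rough sketch in C (the function is hypothetical, and in real code you would reach for a vetted templating or encoding library rather than rolling your own):

```c
#include <stdio.h>

/* Encode the characters HTML treats as special, so user-supplied
 * text is rendered as text instead of being interpreted as markup. */
void html_escape(const char *in, FILE *out) {
    for (; *in; in++) {
        switch (*in) {
            case '&':  fputs("&amp;",  out); break;
            case '<':  fputs("&lt;",   out); break;
            case '>':  fputs("&gt;",   out); break;
            case '"':  fputs("&quot;", out); break;
            case '\'': fputs("&#39;",  out); break;
            default:   fputc(*in, out);      break;
        }
    }
}

int main(void) {
    /* Without escaping, this payload would run in the victim's browser. */
    const char *user_input = "<script>alert('xss')</script>";
    html_escape(user_input, stdout);
    putchar('\n');
    return 0;
}
```

Passed through an encoder like this, `<script>` arrives in the browser as inert text, and the payload never runs.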
Firewalls, intrusion prevention systems, and other network appliances now offer “virtual patching”: when a flaw is common enough, the attack is simply blocked on the wire. That sounds great, but as more traffic is encrypted end to end, and certificate pinning and similar techniques detect anyone intercepting it, this option will not be available much longer.
This practice of hiding behind firewalls has made quality software rarer and has made patching the software already in place much, much harder.
Let’s solve some problems. Fix the root causes. We will only move the needle again when we step back and fix what is really wrong.
https://www.theregister.co.uk/2017/11/24/infosec_disasters_learning_op/