Recently, I was reminded of these wisdoms while responding to one of those routine pleasures of living life online. A nonprofit community website our company hosts, as a favor, came under attack by what initially appeared to be the usual culprits: teenagers with more time on their hands than appreciation for positive life trajectory. That was the story anyway, complete with l33t-style "brags" left after the first round of attack.
But something about this just didn't feel right. Maybe it's because, as a preteen myself way back in the era when 1200 baud was high-speed, I did my fair share of poking and prodding at vulnerable systems out of sheer bullheaded curiosity. One might even argue that I'm a card-carrying "hacker," having been around long enough in that world to fondly recount stories of the old days. Maybe it's just a lifetime's fascination with pattern recognition in nonlinear systems. Whatever the cause, it just didn't add up.
And it wasn't. In fact, to understand — and resolve — what was going wrong with this particular system, it was necessary to step back from the system, move up a level. More generally, the boring-sounding field of "security management" relies quite a bit on that particular Sun Tzu-ish move. While it's always a temptation to look for "bugs" at a particular level of systems analysis, in the world of production systems we often see that "bugs" are subtler than that. For every line of code that's simply got flawed logic (or the usual, annoying typos), there are 10 examples of systems-level interactions that, paradoxically, result in insecure behavior without any specific "bug" at all.
It's a (not so) secret truth of the geek fraternity that asking the right questions is the tough part of solving problems. All of us can remember a handful of those delicious moments when, after hours or days of fruitlessly picking away at some seemingly intractable problem with a broken application, we suddenly feel our perspective shift and — aha! — the "problem" comes clear all at once. It's never complicated, nor difficult to make the fix, when that transcendence occurs; it's less of a fix than it is the recognition of a failed assumption, a faulty expectation, a missed logical connection. One changed parameter and, all at once, that "intractable" problem vanishes like platitudes in the face of intransigent reality. As deeply frustrating as those hours of hunt-and-peck can be leading up to it, the "eureka!" moment always is more than worth it.
With the benefit of hindsight, it's always easier to see what the error was — no surprise there. Over the years, however, one learns to trust more and more that hunch one gets when things seem like they almost add up … but not quite. That certainly was the case in the "hack" attempt on our freely hosted website. It turns out that the whole story about some random preteen script kiddie launching the attacks was just a lame-ass smokescreen. In fact, the person behind it was an ex-employee who was fired last year for, in general, being an incompetent fool. That sense I was getting that things didn't add up came from a few different data points: Some things didn't quite fit, others fit too well.
Specifically, the fact that all of the member accounts that were "hacked" during the (eventual) three rounds of the attack were created before we fired that dud of a team member — that was the key that unlocked the puzzle.
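The correlation itself is trivial to check once you think to look for it. Here's a minimal sketch of that filter; the account names, dates, and the firing date are all invented for illustration:

```python
from datetime import date

# Hypothetical firing date of the ex-employee (invented for this sketch).
FIRING_DATE = date(2006, 5, 1)

# Hypothetical "hacked" accounts and their creation dates.
compromised_accounts = {
    "alice": date(2005, 11, 3),
    "bob": date(2006, 1, 17),
    "carol": date(2006, 4, 28),
}

# The tell-tale pattern: every compromised account predates the firing,
# which points to a password table stolen before that date rather than
# a live break-in.
predate_firing = all(created < FIRING_DATE
                     for created in compromised_accounts.values())
print(predate_firing)  # True
```

One boolean over a handful of timestamps, and the "random script kiddie" story falls apart.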
He and an accomplice were brute-force de-hashing entries from an old (hashed) member password table that he'd stolen during routine backup procedures — last year. Sure enough, he'd copied the entire table right before we fired him (we keep database access logs, for just that reason). Hands caught in the cookie jar, case closed. With a bit of grunt work, we had the site back up and running — traffic is not only back up to previous levels but already has surpassed them.
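For readers unfamiliar with how such an attack works: an offline dictionary attack hashes candidate passwords and compares them against the stolen table. The story doesn't say which hash scheme the site used; unsalted MD5 is assumed here purely for illustration, and the usernames, passwords, and wordlist are invented:

```python
import hashlib

def md5_hex(password: str) -> str:
    """Unsalted MD5 digest, hex-encoded (assumed scheme for this sketch)."""
    return hashlib.md5(password.encode()).hexdigest()

# Hypothetical stolen table: username -> stored password hash.
stolen_table = {
    "alice": md5_hex("sunshine"),
    "bob": md5_hex("letmein"),
}

# A tiny dictionary of candidate passwords.
wordlist = ["password", "qwerty", "letmein", "sunshine"]

# Offline brute force: because the hashes are unsalted, one pass over
# the wordlist cracks every account whose password appears in it --
# no interaction with the live site required.
cracked = {}
for user, stored_hash in stolen_table.items():
    for candidate in wordlist:
        if md5_hex(candidate) == stored_hash:
            cracked[user] = candidate
            break

print(cracked)  # {'alice': 'sunshine', 'bob': 'letmein'}
```

This is why per-user salts and a deliberately slow hash (bcrypt, scrypt, and the like) matter: they force the attacker to redo the work for every account and every candidate, turning a quick pass over a wordlist into an economically painful slog.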
There are plenty of areas of the technology world where buzzwords and platitudes rule the roost. One of the more egregious examples certainly can be found in the security market: There's a never-ending stream of magic security software that will magically ensure no "hackers" get into your stuff. Yeah, right (sigh). Sure, yes, there's some code that is necessary for us to have, in order to keep things under control: firewalls, a basic Trojan scanner, some log analysis, even perhaps some heuristic access-control filters at the server level. But all of that is freely available (i.e., open source, free to download) — in contrast, little of the truly complex/fancy/expensive software to "keep your website secure" is worth the paper it's not printed on.
Why? Well, for the same reason that Einstein remarked about systemic problems and solutions thereto being found at different analytic levels. Buying more software to plug the gaps in software security is (to quote a favorite saying of a now-deceased, BASE-jumping, forensic coroner friend of mine — Dr. Nick Hartshorne) like "fucking for virginity." That's the harsh reality of things: There's no magic software that will keep your website — or your home computer/laptop — safe and secure. By the very nature of the systems we are trying to secure, it's necessary to step back a level and look at the systems overall, to see where the weak spots are.
Crypto guru Bruce Schneier has made the same general observation with regard to the whiz-bang new technology of quantum cryptography. Yes, being a geek he sees the intrinsic "coolness" of quantum crypto, and we're all curious to see how it develops over time. (The fact that genuine researchers genuinely hypothesize that quantum computation can sidestep classical complexity barriers by employing processing power found in parallel universes — that's just too cool for words!) However, he's clear that quantum crypto won't make anybody more secure in real-world applications. Today, our weaknesses really aren't to be found in weak crypto algorithms — far from it. Rather, the weak spots in the system are usually far, far simpler.
So, for all of our fancy technology and uber-geek capabilities, it was a disgruntled former employee (who was, incidentally, trying to launch a paysite to "compete" with the free service we host) with a stolen (albeit hashed) password list that was prying open our backdoor. No fancy software catches things like that, unfortunately. Rather, it takes basic observational awareness and, above all else, gaining confidence in asking the "wrong" questions. Even as our tech staff was chasing after more complex reasons the attacker had gained control of a few member accounts, I was musing in the background: "Something's not right here, guys; this isn't some random hacker. We're missing the forest for the trees." Technology? Nope, not really. It's just basic common sense.
All too often, folks are intimidated by "technology" if they don't consider themselves part of the geek elite. A friend just got his first BlackBerry, and he wanted my help figuring out how to use the built-in camera. He handed it to me along with the owner's manual. Hahaha! I started pushing buttons, and he was shocked — didn't I want to read the manual first? After all, I was just doing trial and error: "Hmm, what does that button do?" Yep, exactly!
That's how one learns, in tech — trial and error. If you want to do a better job of keeping your own files and services "secure," the first step is deceptively simple: Stop thinking of technology as some alternative universe that has really complex rules and people who know all the answers and act as gatekeepers. We don't know the answers, any more than anyone else — all we do is keep asking questions until things start to make a bit more sense. Tech is like anything else: poke and prod at it until it starts making sense.
Ask me no questions, and I'll tell you no lies? Beh — ask me all the questions you can think of, and so will I. It's through asking good questions that we learn how things really work. Indeed, through building that sort of "gut feeling" for the systems we manage, we're in by far the best position to keep them operationally secure. The next time someone tries to sell you some complex software to "ensure security," ask him some questions — he might, indeed, tell you some lies. But I bet you'll have no trouble feeling out which answers are which. Einstein would be proud, and even Herr Gödel might have tipped his hat to your ontological awareness of analytic constraints.