If there’s one thing that’s obvious to anyone who’s spent even a little time online, it’s that security is one of the biggest hot-button issues on the modern web. As we store more and more information online, cyber-attacks become increasingly lucrative, and the stakes involved in securing our data rise ever higher. Not surprisingly, that means cyber-criminals are getting smarter and craftier. Where a business once had to deal with the odd DDoS or man-in-the-middle attack, there’s now a constant risk that someone will exploit even the smallest security hole.

It’s a culture of not-completely-unjustified paranoia, particularly since many organizations aren’t pulling their weight where protecting their data is concerned. Even a cursory Google search reveals a rash of data breaches, security failures, and programming blunders that together may well have compromised gigabytes (or even terabytes) of sensitive data.

So security is important, and it’s generally a business’s job to carry its share of that burden. In other news, water is wet, the sky is blue, and dogs generally dislike cats. Blindingly obvious statements aside, there’s a question I’d like to ask with today’s piece, one closely related to what you’ve just read: what’s the web developer’s role in all of this? Where do they stand in the eternal conflict between white hats and black hats? And most importantly, what happens when they don’t pull their weight?
The First Measure Of Security Is Diligence
As a general rule, two things cause a data breach. The first is a business improperly implementing its security, or failing to respond to an obvious issue. The second is a bug in the organization’s underlying software, one of which it may or may not be aware. In both cases, the web developer is the first line of defense against a cyber-attack, and in the second case, they could even be partly responsible for it.

See, for a security exploit to exist, someone needs to have made a mistake somewhere along the line, and usually that mistake can be traced back to the initial development process. Maybe there’s a minor error somewhere in the code that testing missed. Maybe the developers didn’t use secure tools to build their app. Or maybe, as happened with Heartbleed, a rookie mistake put millions of people at risk. Regardless, the mistake means there’s an exploit. The exploit means cybercriminals have a way of accessing things they aren’t supposed to. The fact that they can access things they aren’t supposed to means… you get the idea.

The takeaway here is simple: while it’s still a business’s responsibility to keep its data safe, it’s the developer’s responsibility to design their applications to be as secure as possible, and to patch them regularly to keep them that way. That takes time, discipline, and diligence. One cannot simply cobble together an app, hurl it onto the ’net, and ignore it afterwards. That’s irresponsible, lazy, and dangerous.
Developers Are The Real Gatekeepers Of Security
So, what role does security play in web development? In short, it’s central to the profession. As a developer, you can’t credibly say you’ve done your job unless you’ve seen to security. You need to do everything in your power to ensure that both your development environment and your application are as secure as they can possibly be; otherwise, you’re just as much to blame for a breach as anyone else.