IAAS, CLOUD, AND MANAGED SERVICES BLOG
If you've been in the IT industry for a while, you'll have an almost instinctive familiarity with what the cloud is, its various modalities, deployment models, and types. Intuitively, one would think that a deep understanding would make the cloud easy to explain to less technical people, but in fact the opposite is true. It's very difficult to put yourself in the mindset of someone who lacks the conceptual framework that those of us who have been around enterprise IT for a long time have developed.
There has always been a lot of confusion around the exact meanings of the various cloud service models and their intersection with deployment strategies. That's hardly surprising given that IaaS, PaaS, SaaS, public cloud, private cloud, hybrid cloud, and a dozen other as-a-service modalities are a complex combination of marketing speak and technical jargon. In this article, I'd like to tease out one confused strand: the relationship between Infrastructure-as-a-Service and public or private cloud deployments. I've chosen to address this topic because there's often considerable confusion around what a private cloud is: I've heard people say that a private cloud can't involve virtualization, that it's just another name for traditional in-house deployments, that it's a form of colocation, that Google Apps is a private cloud, and so on — none of which is accurate, or at least not completely so.
Even the smallest of modern companies use networks that are both heterogeneous and dispersed. Business networks are composed of multiple services spread over many servers in diverse locations. I'm a writer, so you'd think I could make do without much of a network, but when I add up all the services I use to run my small business, I find that I rely on an extensive network of personal computers, mobile devices, backup servers, file servers, cloud storage servers, virtual private servers, SaaS applications, web hosting servers, and email services; hosted in the cloud, in my home, and on traditional hosting; and distributed all over Europe and the US.
There’s a dream of the cloud in which data flows freely around the globe, available anywhere, stored wherever is convenient, and detached from the normal concerns of information management. Technologically, companies don’t have to care about where their data is stored: it’s in the cloud and the cloud encourages users to be agnostic about which server, which data center, and even which country their data is housed in. But, legally and politically, the location of data matters a lot.
If there’s one thing that’s obvious to anyone who’s spent even a little time online, it’s that security is one of the biggest hot-button issues on the modern web. As we store more and more information online, cyber-attacks are becoming increasingly lucrative - and the stakes involved in securing our data are rising ever higher. Not surprisingly, that means cyber-criminals are getting smarter and craftier. Whereas before a business might have had to deal with the odd DDoS or man-in-the-middle attack, now there’s a constant risk that someone might jump in to exploit even the smallest security hole. It’s a culture of not-completely-unjustified paranoia - particularly since it seems as though many organizations aren’t pulling their weight when it comes to protecting their data.
There’s probably no one with access to the Internet who isn’t aware that the security of Apple’s iCloud platform was called into question recently. I’m not going to discuss the appalling theft of private data that ensued, but I do want to look at a related issue: rate limiting. While we’re not entirely sure of the cause of the leak of celebrities’ private photos—the likely strategy was simple social engineering, research of publicly available information, and the exploitation of poor password choices—we do know that around the same time a vulnerability was discovered in iCloud that made life much easier for any potential hackers.
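The idea behind rate limiting is straightforward: cap how many attempts a client can make in a given window, so an attacker can't try thousands of passwords per minute. As a rough illustration (not Apple's implementation — the class name and parameters here are purely hypothetical), a token-bucket limiter might look like this:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allow bursts of up to
    `capacity` attempts, then refill at `rate` tokens per second."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Allow 5 quick login attempts, then roughly one per second thereafter.
bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(10)]
```

With a limiter like this in front of an authentication endpoint, a brute-force run is throttled to a crawl after the first handful of guesses, which is exactly the protection the vulnerable iCloud endpoint reportedly lacked.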
The first time a user visits your site, it’s likely that they won’t have a DNS mapping for your IP stored in their browser cache, and it’s possible their ISP doesn’t have a result cached either. For many of your visitors, the Domain Name System will have to retrieve and return the DNS record from the authoritative server for your domain. That takes time, and since DNS is such a fundamental part of how the Internet works, we want to keep the amount of time it takes to a minimum. There’s no point having a well-optimized site on great hosting if it takes several seconds for your browser to find out where it should be sending requests.
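To get a feel for what that lookup costs, you can time name resolution directly. This is a small sketch using only the Python standard library; the hostname is just an example, and on a real domain the first call may traverse resolvers all the way to the authoritative server while repeat calls are usually answered from a local or ISP cache:

```python
import socket
import time

def dns_lookup_time(hostname):
    """Time a single name resolution. An uncached lookup adds latency
    before the browser can even open a connection to the site."""
    start = time.monotonic()
    socket.getaddrinfo(hostname, 80)
    return time.monotonic() - start

# Compare a cold lookup with a (likely cached) repeat lookup.
first = dns_lookup_time("localhost")
second = dns_lookup_time("localhost")
```

Run against a rarely visited domain, the gap between the first and second call is the resolution latency your first-time visitors pay before a single byte of your site is requested.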
When a popular site switches content management systems, particularly a site like CMS Critic, whose writers we can expect to be well informed about content management issues, it’s useful to have a look at the reasons behind the change. At the very least, they serve as input for future site deployment decisions. Early in July, CMS Critic, which is owned by Mike Johnston, made the jump from WordPress to ProcessWire, an open source content management system that offers many of WordPress’s benefits. I wasn’t very familiar with ProcessWire, but I am familiar with WordPress, so I’d like to take a look at CMS Critic’s reasoning, consider whether their complaints about WordPress are entirely fair, and ask whether ProcessWire does, in fact, make a good WordPress alternative for the average WordPress user.
DDoS attacks have been hitting the headlines with increasing frequency over the last few months. They’re a favored strategy of “hacktivists”, extortionists, and online criminals hoping to create a distraction. In principle, DDoS attacks are quite simple. At the most basic level, a collective of compromised Internet-connected machines directs a flood of data at the target with the aim of degrading its performance, either by saturating its connection to the Internet or using up its resources. The result is a site or service that is no longer usable by visitors. If you’re a Feedly user, you’ll have experienced the results of a DDoS attack recently. Attackers flooded the RSS feed reader’s servers with data, in effect knocking it out of service for several days with the intention of extracting a payment from the company — a sort of modern protection racket.
Keeping you, our clients, happy and providing the services you need is the reason Cartika exists. We recently sent out a survey so we could find out how well you think we’re doing. We’d like to thank everyone who took the time to respond. The results made all of us happy. You love our service, support, and performance. We take the results of the survey seriously and over the coming months we’ll be looking at implementing a number of the enhancements you suggested we could make to our service. Some of the things you said would improve our hosting plans and service are already online, and others will be soon. In this article I’d like to discuss what we’re working on right now.
Site security is a complex issue. The online economy is huge and hackers stand to reap considerable benefits from attacks against sites that store sensitive data or give them access to large numbers of visitors. Hackers are a motivated and intelligent group of people, albeit a group with a consistent lack of concern for their fellow Internet users. In spite of the potential complexity of securing a site, attacks tend to fall into a number of clearly defined categories, and the significant majority of attacks can be mitigated by following a small set of best practices. That’s not to say that by implementing the strategies we’re going to discuss here a site will be rendered impervious – that’s all but impossible – but most hackers focus on low-hanging fruit, and by ensuring that a site is difficult to exploit, webmasters will discourage all but the most persistent online criminals.
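One example of the kind of best practice that removes low-hanging fruit is parameterized database queries, which shut down SQL injection — one of the most common attack categories. A minimal sketch using Python's built-in SQLite driver (the table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT, email TEXT)")
cur.execute("INSERT INTO users VALUES (?, ?)", ("alice", "a@example.com"))

# Unsafe: building SQL with string interpolation lets attacker-supplied
# input be parsed as SQL, e.g.:
#   cur.execute(f"SELECT email FROM users WHERE name = '{name}'")

# Safe: a parameterized query treats input strictly as data.
name = "alice'; DROP TABLE users; --"   # a classic injection attempt
cur.execute("SELECT email FROM users WHERE name = ?", (name,))
rows = cur.fetchall()   # no rows match; the injection attempt is inert
```

The malicious string simply fails to match any user, and the `users` table survives untouched — the database driver never interprets the input as SQL.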
Occasionally, I wonder what might happen if the Internet just stopped working one day. It’s not a terribly pleasant thought, is it? These days, we’re so reliant on our connectivity that if some outside force were to strip it away from us, it’d likely lead to a complete societal collapse. There are upsides to this reliance, of course – particularly if you’re in the field of web development. If you’re capable of stomaching the learning cliff and the long hours you’ll likely end up working, there’s never been a better time to be a web developer. So long as you’ve got the right knowledge and skills under your belt, you’ll never want for new clients. After all, as long as the Internet exists, someone’s going to want a website built.
DNS amplification attacks are one of the most pernicious vulnerabilities in the Internet’s infrastructure and a favored tool of online criminals with an axe to grind or a need to create a distraction. They’re also a useful example of how infrastructure that grows organically over many years can cause problems because of features created in a different time. Even more striking is the fact that if companies and others running DNS servers put their mind to it, DNS amplification attacks could be rendered impossible.
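The mechanics are worth spelling out: because DNS runs over UDP, an attacker can forge the victim's address as the source of a tiny query sent to an open resolver, and the resolver then delivers a much larger response to the victim. The attacker's bandwidth is multiplied by the ratio of response size to query size. A back-of-the-envelope sketch (the byte counts are illustrative, not measurements):

```python
def amplification_factor(request_bytes, response_bytes):
    """How many bytes hit the victim for each byte the attacker sends.
    The query carries a spoofed source address, so the (larger)
    response goes to the victim rather than back to the attacker."""
    return response_bytes / request_bytes

# A DNS query of a few dozen bytes can elicit a multi-kilobyte
# response, e.g. for a record type with a large answer set:
factor = amplification_factor(60, 3000)
```

With a 50x multiplier, an attacker controlling a modest 20 Mbps of upstream bandwidth can direct roughly a gigabit per second at a target — which is why closing open resolvers and deploying source-address validation matters so much.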
GitHub is a developer’s dream: not just for managing their own code, but for discovering new and exciting scripts, frameworks, and tools to use in their work. Among the tens of thousands of projects, it can be difficult to sort the wheat from the chaff. GitHub’s popularity means that there are plenty of awesome projects, but they can be hard to find amid the dross. In this article, I’d like to highlight six open source projects that have recently caught my interest. The functionality they provide varies, but each deserves consideration for a prominent place in a web developer’s toolbox.
Later this month, the HTTPbis working group will make their last call for input into HTTP 2.0, the first major revision in a decade and a half to the protocol on which the web runs. This November, assuming all goes according to schedule, HTTP 2.0 will be submitted to the Internet Engineering Steering Group for consideration as a proposed standard, after which it’ll travel through the process for adoption as a standard. The aim of HTTP 2.0 is to make the web’s technology more suitable to the way that modern web services and sites work, with particular focus on reducing latency and improving performance. In the late 90s, when the current version of HTTP was developed, the web was a very different place. Most sites were static and served from one server. Today’s websites are dynamic, interactive, and made up of components that reside on many different servers.
They say money makes the world go round, and that’s certainly true of the world wide web. In spite of its early and idealistic origins as a platform for unhindered communication, the Internet has grown to its current size and influence because of its commercial potential. eCommerce is one of the strongest drivers of that growth, and eCommerce would be impossible without a secure and trusted way to transfer money between customers and vendors. The Payment Card Industry Data Security Standard is the de facto standard to which responsible hosting companies who deal with credit card data adhere. The PCI-DSS lays out a set of best practices that help guarantee that when customers send credit card data across the Internet, it will be treated with the respect and level of security necessary to deserve their trust.
Much of the thinking around data storage and processing construes enterprise data as an undifferentiated mass. The reality is very different. Data is differentiated across multiple axes: from low to high value, from business critical to potentially useful, from highly sensitive to publishable, and from time sensitive to archival, among many other potential lines of variation. No one-size-fits-all solution can be sufficient to accommodate the matrix of potential species of data and their meaning to a particular enterprise.