IAAS, CLOUD, AND MANAGED SERVICES BLOG
The Internet of Things will bring about an explosion in the number of connected devices and the data we have about the world. The one-off sale of useful objects — long the foundation of commerce — is likely to be replaced by the sale of services and personalized relationships with customers in a data and application ecosystem, bringing about a revolution in business models.
Many of the people reading these words are at work. They are looking at a screen connected to a computer that contains gigabytes of data and applications crucial to their productivity. Without the device, they are unable to work. Almost everyone else reading this article will have a smartphone in hand. Smartphones like the iPhone can carry gigabytes of data, but almost all of it will also live in the cloud and be accessible from any other device.
It’s something of a disconcerting statistic: fewer than one third of American financial organizations have a cloud strategy. Other regions don’t fare much better. EMEA shows 35% of firms preparing for cloud computing, while APAC is slightly higher at 41%. Given the immense popularity of cloud computing in both the enterprise and consumer spaces, these statistics are troubling - even if they do show that more institutions are starting to realize the importance of going digital. What exactly is the root cause here? Why are firms lagging behind to such a degree? And more importantly, what can be done about it?
Some of our customers have been impacted by the recent round of "Forex" site injections. The typical symptom is injected code redirecting users to a "forex" landing page. Content can either be injected into existing pages, or the injection bots can delete a page's content entirely and replace it with code that accomplishes the same thing. The majority of reported infections exploited vulnerabilities in out-of-date Joomla and WordPress cores, plugins, modules, and templates. The infections leverage publicly known vulnerabilities in servers running WordPress, WHMCS, and Joomla, as well as other customized dynamic PHP/ASP/SQL web applications. Database injections via these exploits are also possible, and can act as a back door to re-infect websites after they have been cleaned. I wanted to take a few minutes to discuss what Cartika is doing to help our customers, and what you should be doing to deal with this situation if you have been impacted by it.
The Domain Name System is an essential part of the complex infrastructure that lets users connect to your website or application. It allows users to put a human-readable domain name — like cartika.com — into their browser and be connected to the server that name represents. Essentially, DNS translates domain names into IP addresses; it can be thought of as the Internet's address book. Without DNS, there would be no way for users to connect to your site unless they already knew its IP address. Learn more about Cartika AnyCast DNS.
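The "address book" idea can be made concrete with a small sketch. The lookup table and the IP addresses below are purely illustrative (they use the 203.0.113.0/24 documentation range), not real DNS data; a real resolver walks a hierarchy of name servers rather than a local dictionary.

```python
# Conceptual sketch of what DNS does: map human-readable names to IPs.
# The entries here are hypothetical; real resolution queries name servers.
ADDRESS_BOOK = {
    "cartika.com": "203.0.113.10",
    "example.org": "203.0.113.20",
}

def resolve(domain: str) -> str:
    """Return the IP address for a domain, as a resolver would."""
    try:
        return ADDRESS_BOOK[domain]
    except KeyError:
        # A real resolver would return NXDOMAIN for an unknown name.
        raise LookupError(f"no record for {domain}")

print(resolve("cartika.com"))  # 203.0.113.10
```

In Python, the system's real resolver is exposed through `socket.gethostbyname()`, which performs the same name-to-address translation against actual DNS infrastructure.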
In the never-ending battle against spam, the resulting ecosystem has generated some really interesting dynamics. Oftentimes, organizations working in synergy to address and resolve spamming issues — and, most importantly, keep legitimate email flowing to users — get caught in tug-of-war battles. Often this is nothing more than newer players coming into a space they know little about and attempting to make their mark by flexing some muscle, when all that is actually required is a little common sense and an ability to work with each other. This is what is happening right now with SpamRats.
The web is a very different place from what it used to be. Trends such as mobile usage, semantic search, and social media have brought about a fundamental change in how users both seek out and consume content. Search engine optimization has had to change as a necessity - and that evolution has caused it to very closely intersect with web design.
Last month, Anthem Incorporated - one of the world’s leading health insurance companies - made a very grim announcement to shareholders and clients. It was, a representative explained, the target of a “very sophisticated external cyberattack,” which allowed hackers to gain unauthorized access to its IT systems. The personal information of eighty million clients - data ranging from birthdays and names to medical IDs, social security numbers, street addresses, email addresses, and employment history - was compromised.
If you've been in the IT industry for a while, you'll have an almost instinctive familiarity with what the cloud is, its various modalities, deployment models, and types. Intuitively, one would think that a deep understanding would make the cloud easy to explain to less technical people, but in fact the opposite is true. It's very difficult to put yourself in the mindset of someone who lacks the conceptual framework that those of us who have been around enterprise IT for a long time have developed.
Even the smallest of modern companies use networks that are both heterogeneous and dispersed. Business networks are composed of multiple services spread over many servers in diverse locations. I'm a writer, so you'd think I could make do without much of a network, but when I add up all the services I use to run my small business, I find that I rely on an extensive network of personal computers, mobile devices, backup servers, file servers, cloud storage servers, virtual private servers, SaaS applications, web hosting servers, and email services; hosted in the cloud, in my home, and on traditional hosting; and distributed all over Europe and the US.
There’s a dream of the cloud in which data flows freely around the globe, available anywhere, stored wherever is convenient, and detached from the normal concerns of information management. Technologically, companies don’t have to care about where their data is stored: it’s in the cloud and the cloud encourages users to be agnostic about which server, which data center, and even which country their data is housed in. But, legally and politically, the location of data matters a lot.
If there’s one thing that’s obvious to anyone who’s spent even a little bit of time online, it’s that security is one of the biggest hot-button issues on the modern web. As we store more and more information online, cyber-attacks are becoming increasingly lucrative - and the stakes involved in securing our data are rising ever higher. Not surprisingly, that means cyber-criminals are getting smarter and craftier. Whereas before a business might have had to deal with the odd DDoS or man-in-the-middle attack, now there’s a constant risk that someone might jump in to exploit even the smallest security hole. It’s a culture of not-completely-unjustified paranoia - particularly since it seems as though many organizations aren’t pulling their weight as far as protecting their data is concerned.
There’s probably no one with access to the Internet who isn’t aware that the security of Apple’s iCloud platform was called into question recently. I’m not going to discuss the appalling theft of private data that ensued, but I do want to look at a related issue: rate limiting. While we’re not entirely sure of the cause of the leak of celebrities’ private photos—the likely strategy was simple social engineering, research of publicly available information, and the exploitation of poor password choices—we do know that around the same time a vulnerability was discovered in iCloud that made life much easier for any potential hackers.
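Rate limiting caps how many attempts a client can make in a given window, which is what turns an unlimited password-guessing endpoint into one where brute force is impractical. A minimal token-bucket sketch follows; the class and its parameters are hypothetical illustrations, not Apple's (or anyone's) actual implementation.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allow `capacity` attempts up
    front, then refill at `rate` tokens per second. Illustrative only."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to the time elapsed since the last call.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 5 login attempts allowed immediately, then roughly one every 10 seconds.
bucket = TokenBucket(capacity=5, rate=0.1)
results = [bucket.allow() for _ in range(7)]
print(results)  # first five allowed, the rest denied
```

Placed in front of an authentication endpoint, a limiter like this makes large-scale password guessing take so long that the attack stops being worthwhile.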
The first time a user visits your site, it’s likely that they won’t have a DNS mapping for your IP stored in their browser cache, and it’s possible their ISP doesn’t have a result cached either. For many of your visitors, the Domain Name System will have to retrieve and return the DNS record from the authoritative server for your domain. That takes time, and since DNS is such a fundamental part of how the Internet works, we want to keep the amount of time it takes to a minimum. There’s no point having a well-optimized site on great hosting if it takes several seconds for your browser to find out where it should be sending requests.
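The caching behaviour described above can be sketched in a few lines: a resolver stores each answer with a time-to-live (TTL) so that repeat lookups skip the slow trip to the authoritative server. The `upstream` function here is a hypothetical stand-in for that authoritative query.

```python
import time

class CachingResolver:
    """Sketch of a caching DNS resolver. `upstream` stands in for the
    slow authoritative lookup; entries expire after `ttl` seconds."""

    def __init__(self, upstream, ttl: float):
        self.upstream = upstream      # function: domain -> IP (slow path)
        self.ttl = ttl
        self.cache = {}               # domain -> (ip, expiry timestamp)
        self.upstream_calls = 0

    def resolve(self, domain: str) -> str:
        entry = self.cache.get(domain)
        if entry and entry[1] > time.monotonic():
            return entry[0]                        # cache hit: fast
        self.upstream_calls += 1
        ip = self.upstream(domain)                 # cache miss: slow
        self.cache[domain] = (ip, time.monotonic() + self.ttl)
        return ip

# Illustrative IP from the documentation range; TTLs of 300 s are common.
resolver = CachingResolver(lambda d: "203.0.113.10", ttl=300)
resolver.resolve("cartika.com")    # miss: asks the authoritative server
resolver.resolve("cartika.com")    # hit: answered from cache
print(resolver.upstream_calls)     # 1
```

This is why only some of your visitors pay the full resolution cost: once a record is cached at their ISP's resolver, everyone behind it gets the fast path until the TTL expires.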
When a popular site switches content management systems, particularly a site like CMS Critic, whose writers we can expect to be well-informed of content management issues, it’s useful to have a look at the reasons behind the change. At the very least, they serve as input for future site deployment decisions. Early in July, CMS Critic, which is owned by Mike Johnston, made the jump from WordPress to ProcessWire, an open source content management system that offers many of WordPress’s benefits. I wasn’t very familiar with ProcessWire, but I am familiar with WordPress, so I’d like to take a look at CMS Critic’s reasoning, consider whether their complaints about WordPress are entirely fair, and whether ProcessWire does, in fact, make a good WordPress alternative for the average WordPress user.
DDoS attacks have been hitting the headlines with increasing frequency over the last few months. They’re a favored strategy of “hacktivists”, extortionists, and online criminals hoping to create a distraction. In principle, DDoS attacks are quite simple. At the most basic level, a collective of compromised Internet-connected machines directs a flood of data at the target with the aim of degrading its performance, either by saturating its connection to the Internet or by using up its resources. The result is a site or service that is no longer usable by visitors. If you’re a Feedly user, you’ll have experienced the results of a DDoS attack recently. Attackers flooded the RSS feed reader’s servers with data, in effect knocking it out of service for several days with the intention of extracting a payment from the company — a sort of modern protection racket.
Site security is a complex issue. The online economy is huge, and hackers stand to reap considerable benefits from attacks against sites that store sensitive data or give them access to large numbers of visitors. Hackers are a motivated and intelligent group of people, albeit a group with a consistent lack of concern for their fellow Internet users. In spite of the potential complexity of securing a site, attacks tend to fall into a number of clearly defined categories, and a significant majority of them can be mitigated by following a small set of best practices. That’s not to say that implementing the strategies we’re going to discuss here will render a site impervious – that’s all but impossible. But most hackers focus on low-hanging fruit, and by ensuring that a site is difficult to exploit, webmasters will discourage all but the most persistent online criminals.
Occasionally, I wonder what might happen if the Internet just stopped working one day. It’s not a terribly pleasant thought, is it? These days, we’re so reliant on our connectivity that if some outside force were to strip it away from us, it’d likely lead to a complete societal collapse. There are upsides to this reliance, of course – particularly if you’re in the field of web development. If you’re capable of stomaching the learning cliff and the long hours you’ll likely end up working, there’s never been a better time to be a web developer. So long as you’ve got the right knowledge and skills under your belt, you’ll never be wanting for new clients. After all, as long as the Internet exists, someone’s going to want a website built.
DNS amplification attacks are one of the most pernicious vulnerabilities in the Internet’s infrastructure and a favored tool of online criminals with an axe to grind or a need to create a distraction. They’re also a useful example of how infrastructure that grows organically over many years can cause problems because of features created in a different time. Even more striking is the fact that if companies and others running DNS servers put their mind to it, DNS amplification attacks could be rendered impossible.
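The mechanics behind amplification are simple arithmetic: the attacker sends a small query to an open resolver with the victim's spoofed source address, and the resolver sends its much larger response to the victim. The sizes below are ballpark illustrations, not measurements from any specific attack.

```python
# Illustrative arithmetic behind a DNS amplification attack. A small
# spoofed query elicits a large response delivered to the victim; the
# byte counts here are typical rough figures, chosen for illustration.
query_bytes = 64          # small query with a spoofed source address
response_bytes = 3200     # large response (DNSSEC-signed zones can exceed this)

amplification_factor = response_bytes / query_bytes
print(f"{amplification_factor:.0f}x amplification")

# With that factor, a modest uplink becomes a large flood at the victim:
attacker_mbps = 10
victim_mbps = attacker_mbps * amplification_factor
print(f"{attacker_mbps} Mbps sent -> {victim_mbps:.0f} Mbps at the victim")
```

The fix alluded to above is equally simple in principle: if networks refused to forward packets with spoofed source addresses, the resolver's oversized response would never be redirected at a victim in the first place.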
GitHub is a developer’s dream: not just for managing their own code, but for discovering new and exciting scripts, frameworks, and tools to use in their work. Among the tens of thousands of projects, it can be difficult to sort the wheat from the chaff. GitHub’s popularity means that there are plenty of awesome projects, but they can be hard to find amid the dross. In this article, I’d like to highlight six open source projects that have recently caught my interest. The functionality they provide varies, but each deserves consideration for a prominent place in a web developer’s toolbox.