Changing the Ways Data Centers Are Built

Enterprise IT departments have spent the past several years flocking to the cloud largely for the flexibility it provides, both technological and economic.

In the case of public clouds, where a third party owns and operates the IT infrastructure, that flexibility includes shifting CapEx and OpEx to the cloud provider and gaining access to cutting-edge IT products the enterprise couldn’t afford on its own.

With private clouds, the enterprise owns the data center(s) and other infrastructure for reasons that typically include concerns about security and performance risks, which can arise when a third party hosts everything. Although the enterprise bears the CapEx and OpEx of owning the private cloud infrastructure, it retains the flexibility to quickly and cost-effectively shift IT resources across different business units to meet each one’s needs. For example, instead of providing the accounts receivable department with dedicated IT resources that lie mostly idle outside of the monthly billing cycle, that equipment can be shifted to other business units during peak periods. That increases ROI.

A related trend is the rise of data center appliances: servers preloaded with applications that meet each enterprise’s unique business requirements. With data center appliances, either the hardware vendor or the enterprise can be responsible for managing the infrastructure. That business model frees the enterprise IT staff for other activities -- or, in the case of small businesses, allows them to forgo an IT staff altogether.

The rise of clouds and appliances is reshaping the data center. Here’s how.

Colocating for Access to a Community
When an automaker builds a factory in a state where it has no facilities, its parts suppliers typically add their own plants literally next door. The logic: It’s cheaper and easier to meet just-in-time supply-chain requirements when you’re across the street than across the country.

Data center operators and their customers have started borrowing that strategy. For example, financial services companies are increasingly colocating in the same data centers. That approach avoids the latency that would occur if they were in separate facilities scattered around a country or the world.

Although that may not sound like a major benefit, bear in mind that financial services is an industry where a network bottleneck can cost millions if it means a stock trade is executed after the price has increased. Data center colocation also saves money by minimizing the amount of data shipped over wide-area connections, and staying local means fewer security risks.

Content creators and distributors are looking for similar opportunities, including sharing data centers with Internet exchanges to avoid the cost and latency of shipping their bits over wide-area connections. “They’re colocating their infrastructure,” says Ali Moinuddin, director of marketing at Interxion, a data center operator whose 1,200 customers include the world’s five largest content delivery networks.

Whatever the industry, the business drivers are similar: “They want to reside in the same data center so they can cross-connect within the data center and share data and services -- without having to leave the data center,” says Moinuddin.

Virtualization: 5 Times the Performance, 66 Percent Savings
The downside to colocation is that organizations can wind up with their IT assets and operations concentrated in just one or a few sites. That design runs counter to the post-Sept. 11 strategy of spreading IT over a wide geographic area to minimize the impact of a major disaster.

CIOs and IT managers also have to ensure that their colocation and geographic-dispersion strategies don’t violate any laws. For example, some governments restrict certain types of data from being stored or transmitted outside their borders.

To avoid running afoul of such laws, enterprises should have SLAs that dictate exactly how and where their cloud-based operations can be switched if a data center goes down. “That’s something that has to be very carefully managed,” says Aileen Smith, senior vice president of collaboration at the TM Forum, an industry organization for IT service providers.

Redundancy and resiliency are also driving the trend toward virtualization, in which Fibre Channel storage, I/O and a host of other functions are decoupled from the underlying hardware and moved up into the cloud. This strategy can be more cost-effective than the two historical options: building a multimillion-dollar data center packed with state-of-the-art hardware and software designed to minimize failure, or maintaining an identical backup site that’s used only in emergencies, meaning those expensive assets sit idle most of the time instead of driving revenue.

Instead, virtualization spreads server and application resources over multiple data centers. One way it reduces capital expenses is by allowing the enterprise or its data center provider to use less expensive hardware and software. This strategy doesn’t compromise resiliency: if one or more parts of a data center go down, operations can switch to another site for the duration. At the same time, there’s no backup data center stocked with nonperforming assets.
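As a rough illustration, the failover logic behind this approach can be as simple as walking an ordered list of sites and routing to the first one that answers a health check. Here is a minimal Python sketch; the site URLs and health endpoints are hypothetical, not drawn from any vendor named in this article.

    # Minimal failover sketch: route to the first healthy data center.
    import urllib.request

    SITES = [
        "https://dc-east.example.com/health",    # primary site (hypothetical)
        "https://dc-west.example.com/health",    # secondary site
        "https://dc-europe.example.com/health",  # tertiary site
    ]

    def first_healthy_site(sites, timeout=2):
        """Return the first site whose health endpoint answers HTTP 200."""
        for url in sites:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    if resp.status == 200:
                        return url
            except OSError:
                continue  # unreachable or timed out; try the next site
        raise RuntimeError("No healthy site available")

    print("Routing to:", first_healthy_site(SITES))

Real deployments put this logic in load balancers and DNS rather than application code, but the principle is the same: no single site is a point of failure.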

How much can enterprises reasonably expect to save from virtualization? F5 Networks estimates virtualization can yield five times the performance at one-third the cost. “If I can put 10 of those low-cost servers in a virtualized resource pool, I’ve got five to 10 times the power of the most powerful midrange system at a third of the cost,” says Erik Geisa, vice president of product management and marketing for F5 Networks. “By virtualizing my servers, I not only realize a tremendous cost savings, but I have a much better architecture for availability and ongoing maintenance. If I need to bring one server down, it doesn’t impact the others, and I can gracefully add in and take out systems to support my underlying architecture.”
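The arithmetic behind that claim is easy to sanity-check. In the Python sketch below, the dollar figures and performance ratings are illustrative placeholders, not F5’s numbers; they simply land in the range Geisa describes.

    # Back-of-the-envelope check of the pooled-commodity-server math.
    midrange_cost = 150_000   # hypothetical high-end midrange system
    midrange_perf = 1.0       # normalize its performance to 1.0

    commodity_cost = 5_000    # hypothetical low-cost server
    commodity_perf = 0.75     # fraction of the midrange box's performance

    pool_size = 10
    pool_cost = pool_size * commodity_cost  # $50,000
    pool_perf = pool_size * commodity_perf  # 7.5x the midrange system

    print(f"Pool cost: {pool_cost / midrange_cost:.0%} of the midrange system")
    print(f"Pool performance: {pool_perf / midrange_perf:.1f}x the midrange system")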

It’s not your father’s data center anymore.

The Billion Dollar Lost Laptop Problem

Every time a business laptop is lost or stolen, the organization takes a direct cost hit. But the size of that hit might surprise you. What would your organization do if it realized it was losing millions of dollars a year this way? Odds are, it would be far more diligent about protecting laptops.

Last year, the Ponemon Institute released The Billion Dollar Lost Laptop Problem, a benchmark study (conducted independently and sponsored by Intel) of 329 private and public-sector U.S. organizations -- ranging in size from fewer than 1,000 to more than 75,000 employees and representing more than 12 industry sectors -- that set out to determine the economic cost of lost or stolen laptops. What it found: The cost is huge.

Participating organizations reported that, in a 12-month period, 86,455 laptops were lost or otherwise went missing -- an average of 263 laptops per organization.

According to an earlier Ponemon Institute study (also conducted independently and sponsored by Intel), The Cost of a Lost Laptop, the average value of a lost laptop is a staggering $49,246. That figure is based on seven cost components: replacement, detection, forensics, data breach, lost intellectual property, lost productivity, and legal, consulting and regulatory expenses. It’s important to point out that the smallest of these components is the replacement cost of the laptop itself.
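To make the structure of that figure concrete, here is a small Python sketch that totals the seven components. The per-component dollar amounts are invented placeholders, chosen only so they sum to the study’s average; the category names and the total come from the report, but the breakdown below does not.

    # The seven Ponemon cost components for a single lost laptop.
    # All per-component figures are illustrative placeholders, NOT the
    # study's own breakdown.
    cost_components = {
        "replacement": 1_415,                  # smallest item, per the study
        "detection": 1_500,
        "forensics": 1_800,
        "data_breach": 39_297,                 # dominates the total
        "lost_intellectual_property": 2_882,
        "lost_productivity": 1_200,
        "legal_consulting_regulatory": 1_152,
    }

    total = sum(cost_components.values())
    print(f"Total cost of one lost laptop: ${total:,}")  # $49,246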

Some of the salient findings from The Billion Dollar Lost Laptop Problem report (a quick arithmetic check of the headline numbers follows the list):

  • The total economic impact for the 329 participating companies is $2.1 billion, or $6.4 million per organization on average.
  • Of the 263 laptops per organization that were lost or went missing, on average just 12 were recovered.
  • Forty-three percent of laptops were lost off-site (working from a home office or hotel room), 33 percent were lost in transit or travel, and 12 percent were lost in the workplace.
  • Twelve percent of organizations said they don’t know where employees or contractors lose their laptops.
  • Although 46 percent of the lost systems contained confidential data, only 30 percent of lost laptops had disk encryption, 29 percent had backup, and just 10 percent had other anti-theft features.
  • The industries with the highest rates of laptop loss are education and research, followed by health and pharmaceuticals, then the public sector. Financial services firms had the lowest loss rate.
  • Laptops with the most sensitive and confidential data are the most likely to be stolen. However, these laptops are also more likely to have disk encryption.
  • The average loss ratio over a laptop’s useful life is 7.12 percent, meaning more than 7 percent of all assigned laptops in the benchmarked companies will be lost or stolen.
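The headline figures above hang together arithmetically. A quick Python check, using only numbers reported in the study:

    # Derive the per-organization averages from the study's raw figures.
    total_cost = 2.1e9       # $2.1 billion across all participants
    organizations = 329
    laptops_missing = 86_455
    recovered_per_org = 12

    lost_per_org = laptops_missing / organizations
    print(f"Average cost per organization: ${total_cost / organizations:,.0f}")  # ~$6.4 million
    print(f"Average laptops lost per organization: {lost_per_org:.0f}")          # 263
    print(f"Recovery rate: {recovered_per_org / lost_per_org:.1%}")              # ~4.6%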

But Who's Minding the Data?
Not nearly enough organizations, it appears. Given the significant financial impact of missing laptops and the exposure of the data they carry, it is astonishing that most of these companies aren’t taking even basic precautions to protect them.

The worst cost component is the data breach. A stolen laptop can easily be booted to reveal passwords, temporary files the user wasn’t even aware of, and credentials for VPN connections, remote desktops, wireless networks and more.

That’s enough reason to do something. Here are your best options for protecting your organization’s data integrity against all of that potential mayhem.

  • Full Disk Encryption: Full disk encryption prevents unauthorized access to the data on the drive. Under this scenario, nearly everything is encrypted, and the decision of which individual files to encrypt is not left to users’ discretion. But all too often, end users choose to disable full disk encryption, probably because they incorrectly assume it significantly slows the machine. (A simple compliance-check sketch follows this list.)
  • Anti-Theft Technology: If a laptop is lost or stolen, the hardware can disable the machine when it observes suspicious activity. When the laptop is recovered, it can easily be reactivated and returned to normal operation.
  • Data in the Cloud: Keeping sensitive material off your laptop by storing it in the cloud is not, by itself, a viable solution, because it does nothing to protect the data: anyone who cracks the login credentials can reach it. Worse yet, the existence of a full backup actually increases the cost of a lost laptop, because backups make it easier to confirm that sensitive or confidential data was lost, driving up the expense of forensic diagnosis and recovery.
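For the first of those options, it helps to verify that full disk encryption is actually enabled rather than assume it. Below is a hedged sketch of a compliance check for Windows machines using the built-in manage-bde tool; it must run in an elevated shell, and the string matching assumes English-language output.

    # Sketch: report whether BitLocker protection is on for a drive.
    import subprocess

    def bitlocker_protection_on(drive="C:"):
        """Return True if manage-bde reports protection on for the drive."""
        result = subprocess.run(
            ["manage-bde", "-status", drive],
            capture_output=True, text=True, check=True,  # raises on failure
        )
        return "Protection On" in result.stdout  # English-locale output assumed

    state = "protected" if bitlocker_protection_on() else "NOT protected"
    print(f"Drive C: is {state}")

An IT shop would run a check like this across its fleet via management tooling rather than by hand, but the point stands: trust, then verify.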

Just like Smokey the Bear says about you and forest fires, only you can stop data loss.

Migration to the Cloud: Evolution Without Confusion

The rapid rise of cloud computing has been driven by the benefits it delivers: huge cost savings with low initial investment, ease of adoption, operational efficiency, elasticity and scalability, on-demand resources, and the use of equipment that is largely abstracted from the user and enterprise.

Of course, these cloud computing benefits all come with an array of new challenges and decisions. That’s partly because cloud products and services are being introduced in increasingly varied forms: public clouds, private clouds and hybrid clouds. They also deliver software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) solutions, and come with emerging licensing, pricing and delivery models that carry budgeting, security, compliance and governance implications.

Making these decisions is also about balancing the benefits, challenges and risks of those cloud computing options against your company’s technology criteria. Many core criteria matter: agility, availability, capacity, cost, device and location independence, latency, manageability, multi-tenancy, performance, reliability, scalability and security, among others. The available cloud options vary widely on each of these criteria -- not to mention the significant challenges of integrating all of this with your existing infrastructure.
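One pragmatic way to work through that balancing act is a simple weighted-scoring matrix. The Python sketch below shows the mechanics; the criteria weights and the 1-5 scores are invented placeholders that each organization would replace with its own assessments.

    # Weighted scoring of cloud options against a few sample criteria.
    weights = {"security": 0.30, "cost": 0.25, "scalability": 0.25, "latency": 0.20}

    options = {  # placeholder 1-5 scores per criterion
        "public":  {"security": 3, "cost": 5, "scalability": 5, "latency": 3},
        "private": {"security": 5, "cost": 2, "scalability": 3, "latency": 5},
        "hybrid":  {"security": 4, "cost": 4, "scalability": 4, "latency": 4},
    }

    for name, scores in options.items():
        total = sum(weights[c] * scores[c] for c in weights)
        print(f"{name:>7}: {total:.2f}")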

There are fundamentally challenging questions that companies will be forced to grapple with as they decide what cloud functionality suits them best. The central issues include security, cost, scalability and integration.

Public, Private or Hybrid?

Here is how the three differ:

  • Public cloud services require the least investment to get started, have the lowest costs of operation, and scale readily to many servers and users. But security and compliance concerns persist around multi-tenancy of the most sensitive enterprise data and applications, both while resident in the cloud and during transfer over the Internet. Some organizations may not accept this loss of control over their data center function.

  • Private cloud services offer the ability to host applications or virtual machines in a company’s own infrastructure, thus providing the cloud benefits of shared hardware costs (thanks to virtualization, the hardware is abstracted), federated resources from external providers, recovery from failure, and scaling with demand. There are fewer security concerns because existing data center security stays in place, and IT organizations retain control of the data center. But because companies must buy, build and manage their private cloud(s), they don’t get the lower up-front capital costs and reduced hands-on management of the public cloud, and their operational processes must be adapted wherever existing processes don’t suit a private cloud environment. Private clouds are simply not as elastic or cost-effective as public clouds.

  • Hybrid clouds are a mix of at least one public cloud and one private cloud, combined with your existing infrastructure. Interest in them is powered by the desire to take advantage of public and private cloud benefits in a seamless manner. Hybrid combines the benefits and risks of both: the security, compliance and control of the enterprise private cloud for sensitive, mission-critical workloads, and the scalable elasticity and lower costs of the public cloud for the apps and services deployed there.

That combination of operational flexibility and scalability for peak and bursty workloads is the ideal goal, but the reality is that hybrid cloud solutions are just emerging, require additional management capabilities and raise the same kinds of security issues for data moved between private and public clouds.
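The mechanics of that peak-and-bursty pattern -- often called cloud bursting -- reduce to a simple placement rule: fill private capacity first and overflow the rest to the public cloud. A minimal Python sketch with illustrative numbers:

    # Cloud bursting: private capacity absorbs steady load, public takes peaks.
    PRIVATE_CAPACITY = 100  # placeholder units of work

    def place_workload(demand):
        """Split demand between private capacity and a public-cloud burst."""
        private = min(demand, PRIVATE_CAPACITY)
        public = max(0, demand - PRIVATE_CAPACITY)
        return private, public

    for hour, demand in enumerate([60, 90, 140, 210, 120]):  # a bursty day
        private, public = place_workload(demand)
        print(f"hour {hour}: private={private}, public burst={public}")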

Transformational Change or Legacy Environment?
The diversity of cloud offerings means businesses evaluating cloud computing options must decide how to integrate cloud resources with their legacy equipment, applications, people and processes -- and determine whether this will transform their business IT or simply extend what they have today as they plan for the future.

The reality of cloud environments is that they will need to coexist with the legacy environments. A publicly traded firm with thousands of deployed apps is not going to rewrite them for the public cloud.

One determining factor may be whether the services being deployed to the cloud are “greenfield” (lacking any constraints imposed by prior work), or “brownfield” (development and deployment in the presence of existing systems). In the absence of constraints, greenfield applications are more easily deployed to the cloud.

Ideally, hybrid solutions allow organizations to create or move existing applications between clouds, without having to alter networking, security policies, operational processes or management and monitoring tools. But the reality is that, due to issues of interoperability, mobility, differing APIs, tools, policies and processes, hybrid clouds generally increase complexity.

The Forecast Is Cloudy, Turning Sunny
Where is this all headed? For the foreseeable future, many organizations will employ a mixed IT environment that includes both public and private clouds as well as non-cloud systems and applications, because the economics are so attractive. But as they adopt the cloud, enterprise IT shops will need to focus on security, performance, scalability and cost, and avoid vendor lock-in, in order to achieve overall efficiencies.

Security concerns will be decisive for many CIOs, but companies are increasingly going to move all but their most sensitive data to the cloud. They will weave together cloud and non-cloud environments, and take steps to ensure security across both.

Non-mission-critical applications -- such as collaboration, communications, customer-service and supply-chain tools -- will be excellent candidates for the public cloud.

There’s a Cloud Solution for That

As hybrid cloud offerings mature, cloud capabilities will be built into a variety of product offerings, including virtualization platforms and system management suites. Vendor and service provider offerings will blur the boundaries between public and private environments, enabling applications to move between clouds based on real-time demand and economics.

In the not-too-distant future, hybrid cloud platforms will provide capabilities to connect and execute complex workflows across multiple types of clouds in a federated ecosystem.

Products from Amazon, HP, IBM, Red Hat, VMware and others offer companies the ability to create hybrid clouds using existing computing resources, including virtual servers and in-house or hosted physical servers.

There are also hybrid devices designed to sit in data centers and connect to public cloud providers, offering control and security along with the cost savings of connecting to the cloud. For example:

  • Red Hat’s open-source products enable interoperability and portability via a flexible cloud stack that includes its operating system, middleware and virtualization. The company recently announced its own platform-as-a-service offering, OpenShift (for the public cloud), and an infrastructure-as-a-service offering, CloudForms, a private cloud solution.
  • VMware’s vCenter Operations combines the data center functions of system configuration, performance management and capacity management. It also supports virtual machines that can be deployed either inside the data center or beyond the firewall in the cloud.

Are we there yet?