5 Keys for Moving Enterprise Security to the Cloud

The worst economy in 70 years hasn’t deflated the cloud: In 2009, cloud services were already a $16 billion market, says research firm IDC. By 2014, global cloud revenues will hit $55.5 billion, growing five times faster than other IT products.

It’s not hard to see why enterprises large and small are flocking to the cloud. The cloud reduces IT capex and opex by shifting those costs to the enterprise’s cloud provider. That’s an obvious benefit even in flush times, but it’s even more attractive when the recession has CIOs and IT managers looking to run as lean as possible.

Cloud computing also helps enterprises stay nimble -- by enabling them to take advantage of new technologies faster than if they had to buy and deploy the equipment themselves, for example. That flexibility can produce competitive advantages, including rolling out services quickly to respond to changing market conditions.

Another big draw is the ability to scale IT systems up and down to meet changing needs, such as peaks during the holiday shopping season. That lets enterprises be more responsive to a flood of new customers, but without purchasing IT infrastructure that would be underutilized between peak periods.

5 Tips for Fighting Breaches

As any CIO or IT manager is quick to add, the cloud’s benefits can’t come at the expense of security. Even minor breaches can have big implications, ranging from a PR nightmare and class action lawsuits when confidential customer information is compromised, to jail time if it turns out that lax security policies violated laws. Worst-case scenario: a breach so big that Congress enacts a law nicknamed after the company.

Some enterprises run an internal cloud, others work with a cloud provider, and still others combine the two. These tips apply to all three models:

1. Start clean.
Some enterprises require their cloud provider to put their data only on brand-new servers. They believe it’s impossible to remove every trace of former tenants and that this electronic detritus creates back doors for hackers.

2. Secure access to the cloud.
Implement strong authentication mechanisms to secure every Web path that provides access to the cloud. Ditch simple, password-based logins in favor of multifactor authentication. In fact, some industries mandate this. One example is financial services, where since 2006 the FFIEC has required banks to use multifactor authentication to protect logins into their sites. Also, take a look outside your industry to see if there are any regulations and best practices that you could adopt or adapt to beef up cloud security.
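To make "multifactor" concrete, here is a minimal sketch of the time-based one-time password (TOTP) scheme described in RFC 6238, which underpins many second-factor tokens, written with only Python's standard library. The shared secret and parameters below are placeholders for illustration, not a production configuration.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Compute an RFC 6238-style time-based one-time password."""
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The second factor: a login must present the current code in addition to
# the password. The secret below is a demo placeholder, not a real key.
print(totp(b"demo-shared-secret"))
```

Because both sides derive the code from a shared secret and the current time, a stolen password alone is no longer enough to reach the cloud console.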

3. Safeguard the data in the cloud.
This is another place where it’s key to keep up with industry-specific laws and best practices, including ones that can be borrowed from other sectors. For example, the Payment Card Industry (PCI) standard specifies physical and logical controls for data both when it’s at rest and in motion, while HIPAA provides similar requirements for medical data.
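For the data-in-motion half of that requirement, a small hedged example: configuring a TLS client context in Python that refuses unvalidated certificates and legacy protocol versions before any connection to a cloud endpoint is opened. The hardening choices shown are common practice, not a PCI or HIPAA checklist.

```python
import ssl

def strict_tls_context():
    """Client-side TLS settings for traffic to a cloud endpoint:
    validate certificates, check hostnames, refuse pre-1.2 protocols."""
    ctx = ssl.create_default_context()             # CERT_REQUIRED + hostname check
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # no SSLv3, TLS 1.0 or 1.1
    return ctx
```

A context like this would then be passed to whatever HTTP or socket layer carries the data, so every connection inherits the same policy.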

4. Verify and audit.
Third-party auditors can verify that your cloud or your cloud provider meets security and privacy laws, as well as any industry-specific best practices. Besides PCI and HIPAA, audits may look at compliance with SAS 70, which covers application security, physical security and security processes. Another is ISO 27002, which lists hundreds of options for security management.

5. End clean.
PCI is also an example of how some industries require that data be destroyed, including the hard drives. That includes when switching cloud providers: The contract should spell out exactly how data must be destroyed.
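As a sketch of what "end clean" can mean in software terms, here is a best-effort overwrite-then-delete routine in Python. Note the caveat in the comments: on SSDs and virtualized cloud storage, in-place overwrites are not guaranteed to reach the physical media, which is exactly why contract terms and physical drive destruction matter.

```python
import os
import secrets

def overwrite_and_delete(path, passes=3):
    """Best-effort software wipe: overwrite a file's contents with random
    bytes, then remove it. On SSDs and virtualized/cloud storage, in-place
    overwrites may never touch the underlying media -- hence the need for
    contractual destruction guarantees and physical drive disposal."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())   # push each pass to the storage layer
    os.remove(path)
```

Treat a routine like this as one layer of defense, with the contractually specified destruction process as the authoritative one.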

Need more tips? Check out Cloud Computing: Benefits, Risks and Recommendations for Information Security, a European Network and Information Security Agency report that covers 35 common risks and strategies for mitigating them. Though it comes from a European agency, its advice applies in every part of the world.

Avoiding Fragmented Infrastructure in the Data Center

As a growing number of IT shops implement their first cloud environments, they are recognizing the costs, benefits and operational efficiencies of this strategy. However, they are also discovering the ripple effects that cloud implementations can have on their existing virtualized infrastructures: fragmented infrastructure, a management nightmare. Arsalan Farooq, CEO and founder of Convirture, has experience with IT shops that face this problem. Below, he offers insight.

Q. Some shops complain about dealing with fragmented infrastructures because of different approaches they have taken in implementing multiple clouds. What advice can you offer to fix that?

Arsalan Farooq: First, I would say don’t panic and start ripping everything out. And don’t run back to the virtualized or physical model you came from, because you have tried that and it didn’t work. The problem isn’t that your cloud model isn’t working. The problem is it’s fragmented, and the complexity of it is out of control. I recommend taking a top-down approach from the management perspective. You need to invest in management tools that allow you to manage the fragmented parts of your infrastructure and workloads from a single location. Now, this doesn’t solve the fragmentation problem completely because you are dealing with not only fragmented infrastructure, but also fragmented management and fragmented workloads. But once you can see all your workloads and infrastructures in one place and can operate them in one place, you can make more intelligent decisions about the workload fragmentation problem.

Q. Cloud management software hasn’t kept up with the need to manage both physical and virtual environments. Does your solution help with that?

A.F.: Once you are in this fragmented infrastructure silo, the vendor tries to sell you its proprietary management tool, too. At the end of the day, you have a management tool that just manages your vertical cloud, another management tool that just manages your virtualized infrastructure, and so on. My advice is to divest the management risk from the platform risk. If you don’t do this, you’re asking for it. As data centers become multi-infrastructure, multi-cloud and multi-operational, you have to divest the risk of the new platform from the risk of your management practices. I’m not a big fan of buying a new platform that forces you to buy new infrastructures and new management tools.

Q. What is the most important element IT shops overlook in putting together their first cloud implementation?

A.F.: Typically, there is a misunderstanding of what the scope of the transformation will be. A lot of people end up with misgivings because they have been sold on this idea that a cloud implementation will 100 percent transform the way they can build their IT data center. Second (and this is more sinister), they believe the cloud can bring efficiencies and cost benefits, but that in itself comes at a cost. You are buying efficiency and cost benefits, but you are paying the complexity price for it. This is something remarkably absent as people go into a cloud implementation. Only after they implement their cloud do they realize the architectural topology is much more complex than it was before.

Q. There is this disconnect between what IT shops already have in their virtualized data centers and what the vendors are offering as solutions to layer on top of that. Why is that?

A.F.: That is the crux of the problem we are dealing with right now. Most cloud vendors talk about how their cloud deployment works and what it brings in terms of efficiencies and cost benefits. But what that discussion leaves out is how the transformation from a cloudless data center to one that has a cloud actually works. Specifically, what are the new metrics, properties and cost benefits surrounding all that? And then, once the transformation is made, what are the attributes of a multi-infrastructure data center? Conversations about this are completely absent.

Q. But explaining something like the metrics of a new cloud implementation to an IT shop seems like such a basic part of the conversation.

A.F.: Right, but the problem is many vendors take a more tactical view of things. They are focused on selling their wares, which are cloud infrastructures. But addressing the bigger-picture ramifications is something many don’t seem to have the capacity to answer, and so they don’t talk about it. So the solution then falls either to the CIOs or other vendors who are trying to attack that problem directly.

Q. Some IT executives with virtualized data centers wonder if they even need a cloud. What do you say to those people?

A.F.: This may sound a bit flippant, but if you feel you don’t need a cloud, then you probably don’t. You have to remember what the cloud is bringing. The cloud is not a technology; it is a model. It comes down to what the CIO is looking for. If they are satisfied with the level of efficiency in their virtualized data centers, then there is no compelling reason for them to move to the cloud. However, I don’t think there are too many CIOs who, when given the prospect of incrementally improving the efficiencies of part of their operations, would not go for it.

Q. Are companies getting more confident about deploying public clouds, without having to first do a private one?

A.F.: The largest data centers still can’t find a compelling reason to move to the public cloud. The smaller shops and startups (most of which don’t have the expertise or the infrastructure) are more confident in moving to a public cloud. The bigger question here is whether the larger corporate data centers will ever move their entire operations to the public cloud, as opposed to just using it in a niche role for things like backup or excess capacity.

Q. I assumed large data centers would move their entire operations to a public cloud once the relevant technologies became faster and more reliable and secure. What will it take for large data centers to gain more confidence about public clouds?

A.F.: One thing missing is a compelling cost-complexity-to-benefits ratio. My favorite example from a few months ago was when we were going to do workloads that automatically load balance between local and public cloud scenarios with cloud bursting. That all sounds good, but do you know how much Amazon costs for network transfers? It’s an arm and a leg. It is ridiculously expensive. Do an analysis of what it costs to take a medium-load website and run it constantly on Amazon, and then compare that to a co-lo or renting a server from your local provider, and your mind would be boggled. The overall economics of public clouds -- despite all the hype -- are not well-aligned given the high network usage, bandwidth usage and CPU usage scenarios. Until that changes, it’s hard to find compelling arguments to do all these fancy things that everyone talks about doing with public clouds, including ourselves. We have cloud bursting we are working on in the labs, but we are also pretty sober about what that means as to whether or not it’s here as a practical solution.


New Programming Languages to Watch

Programming ultimately is about building a better mousetrap, and sometimes that includes the programming languages themselves. Today, there are at least a dozen up-and-coming languages vying to become the next C++ or Java.

Should you care? The definitive answer: It depends. One big factor is whether an emerging language stands a chance of building a following among developers, vendors and the rest of the IT industry, or whether it’s doomed to be a historical footnote. That’s always tough to predict, but a newcomer stands a better chance when its creator is a major IT player, such as Google, which created Dart as a replacement for JavaScript.

“For any language that hopes to play a similar role to JavaScript, such as Dart, some level of support would have to come from browser vendors,” says Al Hilwa, applications development software program director at IDC, an analyst firm. “Google would have to come up with a plug-in for each browser that compiles to JavaScript, and some browsers may block that for one reason or another.”

The role of a major vendor isn’t limited to creating the language itself. In some cases, the vendor can be influential when it creates an ecosystem and then encourages use of a certain language.

“For example, Microsoft pushed C# hard when it launched .NET a decade ago, which resulted in great traction for that language,” says Hilwa. “The Microsoft developer system is somewhat unique in that, up to this point, Microsoft has huge credibility in moving the ecosystem with its actions given its dominance in personal computing. But even then, many languages are made available to please specific factions of developers, and not necessarily because they are expected to dominate. I would put F# in this category.”

When a particular language is able to build market share, it’s often at the expense of an incumbent -- hence the better mousetrap analogy. Time will tell whether that’s the case with Dart and JavaScript.

“JavaScript is much maligned, though it’s often used primarily as a syntax approach for all manner of different specific semantic implementations,” says Hilwa. “Browsers purport to support JavaScript but have great variations. Other server technologies use the JavaScript syntax but solve different types of problems, such as Node for asynchronous server computing.”

Emerging languages also have to overcome the fact that incumbency often has its privileges -- or at least an installed base of products and people that are unwilling or unable to change.

“The issue with launching new languages to replace old and potentially inferior ones is the body of code written and the mass of developers who have vested skills,” says Hilwa. “It’s very hard to create a shift in the industry due to these effects. Such a change requires a high level of industry consensus and a sustained multivendor push that involves the key influential vendors.

“For example, if IBM was to put its power behind Dart, then that might help it. It may be hard for Google to muster up such long-term commitment given its culture of focus on the hot and new.”

Meet Dart, F# and Fantom

Here’s an overview of three emerging languages that, at this point, seem to have enough market momentum that enterprises should keep an eye on them:

  • Dart is a class-based language that’s supposed to make it easier to develop, debug and maintain Web applications, particularly large and thus unwieldy ones. It also uses syntax similar to that of JavaScript, which should be helpful for learning Dart. Dart’s creators say one goal is to make the language applicable for all devices that use the Web, including smartphones, tablets and laptops, as well as all major browsers.
  • F# is one of the elder newcomers in the sense that Microsoft began shipping it with Visual Studio 2010. Pronounced “F sharp,” it’s a functional-style language that’s designed to be easy to integrate with imperative languages such as C#. It also supports parallel programming, which is increasingly important as multicore processors become more common.
  • Fantom is designed to enable cross-platform development, spanning Java VM, .NET CLR and JavaScript in browsers. “But getting a language to run on both Java and .NET is the easy part,” say its creators. “The hard part is getting portable APIs. Fantom provides a set of APIs that abstract away the Java and .NET APIs. We actually consider this one of Fantom’s primary benefits, because it gives us a chance to develop a suite of system APIs that are elegant and easy to use compared to the Java and .NET counterparts.” Fantom’s ease of portability means that it eventually could be extended for use with Objective-C for the iPhone, LLVM or Parrot.

The Case for Policy-based Power Management

Not many years ago, server power consumption wasn’t a big concern for IT administrators. The supply of power was plentiful, and in many cases power costs were bundled with facility costs. For the most part, no one thought too hard about the amount of power going into servers.

What a difference a few years can make. In today’s ever-growing data centers, no one takes power for granted. For starters, we’ve had too many reminders of the threats to the power supply, including widely publicized accounts of catastrophic natural events, breakdowns in the power grid, and seasonal power shortages.

Consider these examples:

  • In the wake of the March 2011 earthquake and tsunami and the loss of the Fukushima Daiichi nuclear power complex, Japan was hit with power restrictions and rolling power blackouts. The available power supply couldn’t meet the nation’s demands.
  • In the United States, overextended infrastructure and recurring brownouts and outages have struck California and the Eastern Seaboard, complicating the lives of millions of people.
  • In Brazil and Costa Rica, power supplies are threatened by seasonal water scarcity for hydro generation, while Chile wrestles with structural energy scarcity and very expensive electricity.

Then consider today’s data centers, where a lot of power is wasted. In a common scenario, server power is over-allocated and rack space is underpopulated to cover worst-case loads. This is what happens when data center managers don’t have a view into the actual power needs of a server or the tools they need to reclaim wasted power.

All the while, data centers are growing larger, and power is becoming a more critical issue. In some cases, data centers have hit the wall; they are out of power and cooling capacity. And as energy costs rise, we’ve reached the point where some of the world’s largest data center operators consider power use to be one of the top site-selection issues when building new facilities. The more plentiful the supply of affordable power is, the better off you are.

All of this points to the need for policy-based power management. This forward-looking approach to power management helps your organization use energy more efficiently, trim your electric bills and manage power in a manner that allows demand to more closely match the available supply.

And the benefits don’t stop there: A policy-based approach also allows you to implement power management in terms of elements that are meaningful to the business instead of trying to bend the business to fit your current technology and power supply.

Ultimately, the case for policy-based power management comes down to this: It makes good business sense.

Using policy-based power management to rein in energy use
In today’s data centers, power-management policies are like the reins on a horse. They put you in control of an animal -- power consumption -- that has a tendency to run wild.

When paired with the right hardware, firmware and software, policies give you control over power use across your data center. You can create rules and map policies into specific actions. You can monitor power consumption, set thresholds for power use, and apply appropriate power limits to individual servers, racks of servers, and large groups of servers.

So how does this work? Policy-based power management is rooted in two key capabilities: monitoring and capping. Power monitoring takes advantage of sensors embedded in servers to track power consumption and gather server temperature measurements in real time.

The other key capability, power capping, fits servers with controllers that allow you to set target power consumption limits for a server in real time. As a next step, higher-level software entities aggregate data across multiple servers to enable you to set up and enforce server group policies for power capping.
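To make the monitor-then-cap pattern concrete, here is a toy Python sketch of a group-level capping policy. The Server class and the proportional-capping rule are illustrative stand-ins for whatever sensors and capping controllers your platform actually exposes.

```python
# Toy model of a group power-capping policy: monitor each server's draw,
# and if the group exceeds its budget, cap each server in proportion to
# its measured consumption. Names and numbers are illustrative.

class Server:
    def __init__(self, name, draw_watts):
        self.name = name
        self.draw_watts = draw_watts   # what monitoring reports
        self.cap_watts = None          # what the controller enforces

def apply_group_policy(servers, group_budget_watts):
    """Enforce a rack- or group-level power budget via proportional caps."""
    total = sum(s.draw_watts for s in servers)
    for s in servers:
        if total > group_budget_watts:
            s.cap_watts = int(s.draw_watts * group_budget_watts / total)
        else:
            s.cap_watts = None         # under budget: no cap needed

rack = [Server("db1", 300), Server("web1", 220), Server("web2", 280)]
apply_group_policy(rack, group_budget_watts=600)
print([(s.name, s.cap_watts) for s in rack])
```

In a real deployment the draw figures would come from platform sensors and the caps would be pushed down to per-server controllers, but the policy logic follows this same shape.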

When you apply power capping across your data center, you can save a lot of money on your electric bills. Just how much depends on the range of attainable power capping, which is a function of the server architecture.

For the current generation of servers, the power capping range might be 30 percent of a server’s peak power consumption. So a server that uses 300 watts at peak load might be capped at 200 watts, saving you 100 watts. Multiply 100 watts by thousands of servers, and you’re talking about operational savings that will make your chief financial officer stand up and take notice.

Dynamic power management takes things a step further. With this approach, policies take advantage of additional degrees of freedom inherent in virtualized cloud data centers, as well as the dynamic behaviors supported by advanced platform power management technologies. Power capping levels are allowed to vary over time and become control variables by themselves. All the while, selective equipment shutdowns -- a concept known as “server parking” -- enable reductions in energy consumption.
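The savings math from the 300-watt example above can be run directly. The fleet size and electricity rate below are assumptions; swap in your own figures.

```python
# Worked example from the text: a 300 W server capped at 200 W saves 100 W.
# Fleet size and the $0.10/kWh rate are assumptions, not quoted figures.
peak_watts = 300
cap_watts = 200
saved_watts = peak_watts - cap_watts        # 100 W per server
servers = 5000
price_per_kwh = 0.10

kw_saved = saved_watts * servers / 1000.0   # 500 kW across the fleet
annual_kwh = kw_saved * 24 * 365            # assume the savings hold all year
annual_savings = annual_kwh * price_per_kwh
print(round(annual_savings))                # 438000 -> about $438,000 per year
```

Even before counting the reduced cooling load, that is the kind of line item a CFO notices.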

Collectively, these advanced power management approaches help you achieve better energy efficiency and power capacity utilization across your data center. In simple terms, you’re in the saddle, and you control the reins.

Get bigger bang for your power buck
In today’s data centers, the name of the game is to get a bigger bang for every dollar spent on power. Policy-based power management helps you work toward this goal by leveraging hardware-level technologies that make it possible to see what’s really going on inside a server. More specifically, the foundation for policy-based power management is formed by advanced instrumentation embedded in servers. This instrumentation exposes data on temperature, power states and memory states to software applications that sit at a higher level, using technology that:

  • Delivers system power consumption reporting and power capping functionality for the individual servers, the processors and the memory subsystem. 

  • Enables power to be limited at the system, processor and memory levels -- all using policies defined by your organization. These capabilities allow you to dynamically throttle system and rack power based on expected workloads.

  • Enables fine-grained control of power for servers, racks of servers, and groups of servers, allowing for dynamic migration of workloads to optimal servers based on specific power policies with the appropriate hypervisor.

Here’s an important caveat: When it comes to policy-based power management, there’s no such thing as a one-size-fits-all solution. You need multiple tools and technologies that allow you to capture the right data and put it to work to drive more effective power management -- from the server board to the data center environment.

It all begins with technologies that are incorporated into processors and chipsets. That’s the foundation that enables the creation and use of policies that bring you a bigger bang for your power buck.

Build a bridge to a more efficient data center
Putting policy-based power management in place is a bit like building a bridge over a creek. First you lay a foundation to support the bridge, and then you put the rest of the structure in place to allow safe passage over the creek. While your goal is to cross the creek, you couldn’t do it without the foundation that supports the rest of the bridge structure.

In the case of power management, the foundation is a mix of advanced instrumentation capabilities embedded in servers. This foundation is extended with middleware that allows you to consolidate server information to enable the management of large server groups as a single logical unit -- an essential capability in a data center that has thousands of servers.

The rest of the bridge is formed by higher-level applications that integrate and consolidate the data produced at the hardware level. While you ultimately want the management applications, you can’t get there without the hardware-level technologies.

Let’s look at this in more specific terms. Instrumentation at the hardware level allows higher-level management applications to monitor the power consumption of servers, set power consumption targets, and enable advanced power-management policies. These management activities are made possible by the ability of the platform-level technologies to provide real-time power measurements in terms of watts, a unit of measurement that everyone understands.

These same technologies allow power-management applications to retrieve server-level power consumption data through standard APIs and the widely used Intelligent Platform Management Interface (IPMI). The IPMI protocol spells out the data formats to be used in the exchange of power-management data.
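As a hedged illustration of consuming IPMI-style power data, the snippet below parses the kind of wattage report a tool such as ipmitool's DCMI power reading command produces. Field labels and spacing vary by vendor and BMC firmware, so treat the sample text and regex as a starting point rather than a specification.

```python
import re

# Example of the wattage report a BMC can return over IPMI/DCMI
# (modeled on `ipmitool dcmi power reading` output; labels vary by vendor).
SAMPLE = """\
Instantaneous power reading:                   224 Watts
Minimum during sampling period:                 68 Watts
Maximum during sampling period:                312 Watts
Average power reading over sample period:      201 Watts
"""

def parse_power_report(text):
    """Extract labeled wattage readings into a dict of ints."""
    readings = {}
    for label, watts in re.findall(r"^(.+?):\s+(\d+)\s+Watts", text, re.MULTILINE):
        readings[label.strip()] = int(watts)
    return readings

report = parse_power_report(SAMPLE)
print(report["Instantaneous power reading"])  # 224
```

A management application would collect readings like these across servers and feed them into group-level policies of the kind described above.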

Put it all together and you have a bridge to a more efficient data center.

Cash in on policy-based power management
When you apply policy-based power management in your data center, the payoff comes in the form of a wide range of business, IT and environmental benefits. Let’s start with the bottom line: A robust set of power-management policies and technologies can help you cut both operational expenditures (OpEx) and capital expenditures (CapEx).

At the OpEx level, you save money by applying policies that limit the amount of power consumed by individual servers or groups of servers. That helps you reduce power consumption across your data center.

How much can you save? Say that each 1U server requires 750 watts of power. If your usage model allows you to cap servers at 450 watts, you save 300 watts per machine. That helps you cut your costs for both power purchases and data center cooling. And chances are you can do this without paying server performance penalties, because many servers don’t use all of the power that has been allocated to them.

At the CapEx level, you cut costs by avoiding the purchase of intelligent power distribution units (PDUs) to gain power monitoring capabilities and by reducing redundancy requirements, which saves you thousands of dollars per rack.

More effective power management can also help you pack more servers into racks, and more racks into your data center, to make better use of your existing infrastructure. According to the Uptime Institute, each PDU kilowatt represents about $10,000 of CapEx, so it makes sense to try to make the best use of your available power capacity.

Baidu.com, the largest search engine in China, understands the benefits of making better use of existing infrastructure. It partnered with Intel to conduct a proof of concept (PoC) project that used Intel Intelligent Power Node Manager and Intel Data Center Manager to dynamically optimize server performance and power consumption to maximize the server density of a rack.

Key results of the Baidu PoC project:

  • At the rack level, capacity increases of up to 20 percent were achieved within the same rack-level power envelope when an aggregated optimal power-management policy was applied.
  • Compared with today’s data center operation at Baidu, the use of Intel Intelligent Power Node Manager and Intel Data Center Manager enabled rack densities to increase by more than 40 percent.

And even then, the benefits of policy-based power management don’t stop at the bottom line. While this more intelligent approach to power management helps you reduce power consumption, it also helps you reduce your carbon footprint, meet your green goals, and comply with regulatory requirements. Benefits like those are a key part of the payoff for policy-based power management.

For more resources on data center efficiency, see this site from our sponsor.

The Future of Networking on 10 Gigabit Ethernet

The rise of cloud computing is a bit like that of a space shuttle taking off: When the rocket engine and its propellants fire up, the shuttle lifts slowly off the launch pad and then builds momentum until it streaks into space.

Cloud is now in the momentum-building phase and on the verge of quickly soaring to new heights. There are lots of good reasons for the rapid rise of this new approach to computing. Cloud models are widely seen as one of the keys to increasing IT and business agility, making better use of infrastructure and cutting costs.

So how do you launch your cloud? An essential first step is to prepare your network for the unique requirements of services running on a multitenant shared infrastructure. These requirements include IT simplicity, scalability, interoperability and manageability. All of these requirements make the case for unified networking based on 10 gigabit Ethernet (10GbE).

Unified networking over 10GbE simplifies your network environment. It allows you to unite your network into one type of fabric so you don’t have to maintain and manage different technologies for different types of network traffic. You also gain the ability to run storage traffic over a dedicated SAN if that makes the most sense for your organization.

Either way, 10GbE gives you a great deal of scalability, enabling you to quickly scale up your networking bandwidth to keep pace with the dynamic demands of cloud applications. This rapid scalability helps you avoid I/O bottlenecks and meet your service-level agreements.

While that’s all part of the goodness of 10GbE, it’s important to keep this caveat in mind: Not all 10GbE is the same. You need a solution that scales and, with features like intelligent offloads of targeted processing functions, helps you realize best-in-class performance for your cloud network.

Unified networking solutions can be enabled through a combination of standard Intel Ethernet products along with trusted network protocols integrated and enabled in a broad range of operating systems and hypervisors. This approach makes unified networking capabilities available on every server, enabling maximum reuse in heterogeneous environments. Ultimately, this approach to unified networking helps you solve today’s overarching cloud networking challenges and create a launch pad for your private, hybrid or public cloud.

The urge to purge: Have you had enough of “too many” and “too much”?
In today’s data center, networks are a story of “too many” and “too much.” That’s too many fabrics, too many cables, and too much complexity. Unified networking simplifies this story. “Too many” and “too much” become “just right.”

Let’s start with the fabrics. It’s not uncommon to find an organization that is running three distinctly different networks: a 1GbE management network, a multi-1GbE local area network (LAN), and a Fibre Channel or iSCSI storage area network (SAN).

Unified networking enables cost-effective connectivity to the LAN and the SAN on the same Ethernet fabric. Pick your protocols for your storage traffic. You can use NFS, iSCSI, or Fibre Channel over Ethernet (FCoE) to carry storage traffic over your converged Ethernet network.

You can still have a dedicated network for storage traffic if that works best for your needs. The only difference: That network runs your storage protocols over 10GbE -- the same technology used in your LAN.

When you make this fundamental shift, you can reduce your equipment needs. Convergence of network fabrics allows you to standardize the equipment you use throughout your networking environment -- the same cabling, the same NICs, the same switches. You now need just one set of everything, instead of two or three sets.

In a complementary gain, convergence over 10GbE helps you cut your cable numbers. In a 1GbE world, many virtualized servers have eight to 10 ports, each of which has its own network cable. In a typical deployment, one 10GbE cable could handle all of that traffic. This isn’t a vision of things to come. This world of simplified networking is here today.

Better still, this is a world based on open standards. This approach to unified networking increases interoperability with common APIs and open-standard technologies. A few examples of these technologies:

  • Data Center Bridging (DCB) allows multiple types of traffic to run over an Ethernet wire.
  • Fibre Channel over Ethernet (FCoE) enables the Fibre Channel protocol used in many SANs to run over the Ethernet standard common in LANs.
  • Management Component Transport Protocol (MCTP) and Network Controller Sideband Interface (NC-SI) enable server management via the network.

These and other open-standard technologies enable the interoperability that allows network convergence and management simplification. And just like that, “too many” and “too much” become “just right.”
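The cable consolidation described above comes down to simple arithmetic. In this sketch, the eight-to-10-port figure comes from the text, while the rack size is our own illustrative assumption:

```python
# Back-of-the-envelope cable count for a rack of virtualized servers.
# The ~10 ports per server at 1GbE comes from the article; the rack
# size of 20 servers is an illustrative assumption.

def rack_cables(servers: int, ports_per_server: int) -> int:
    """Each port needs its own network cable."""
    return servers * ports_per_server

servers = 20
before = rack_cables(servers, ports_per_server=10)  # 1GbE: ~10 cables each
after = rack_cables(servers, ports_per_server=1)    # one converged 10GbE cable

print(f"1GbE cabling:  {before} cables")   # 200 cables
print(f"10GbE cabling: {after} cables")    # 20 cables
```

At rack scale, a 10-to-1 reduction in cabling also means fewer switch ports, fewer NICs, and fewer points of failure.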

Know your limits -- then push them with super-elastic 10GbE

Let’s imagine for a moment a dream highway. In the middle of the night, when traffic is light, the highway is a four-lane road. When the morning rush hour begins and cars flood the road, the highway magically adds several lanes to accommodate the influx of traffic.

This commuter’s dream is the way cloud networks must work. The cloud network must be architected to quickly scale up and down to adapt to the dynamic and unpredictable demands of applications. This super-elasticity is a fundamental requirement for a successful cloud.

Of course, achieving this level of elasticity is easier said than done. In a cloud environment, virtualization turns a single physical server into multiple virtual machines, each with its own dynamic I/O bandwidth demands. These dynamic and unpredictable demands can overwhelm networks and lead to unacceptable I/O bottlenecks.

The solution lies in super-elastic 10GbE networks built for cloud traffic. So what does it take to get there? Build your 10GbE network with technologies designed to accelerate virtualization and remove I/O bottlenecks, while complementing solutions from leading cloud software providers.

Consider these examples:

  • The latest Ethernet server adapters support Single Root I/O Virtualization (SR-IOV), a standard created by the PCI Special Interest Group. SR-IOV improves network performance for Citrix XenServer and Red Hat KVM by providing dedicated I/O and data isolation between VMs and the network controller. The technology allows you to partition a physical port into multiple virtual I/O ports, each dedicated to a particular virtual machine.
  • Virtual Machine Device Queues (VMDq) improves network performance and CPU utilization for VMware and Windows Server 2008 Hyper-V by reducing the sorting overhead of networking traffic. VMDq offloads data-packet sorting from the virtual switch in the virtual machine monitor and instead performs it on the network adapter. This innovation helps you avoid the I/O tax that comes with virtualization.

Technologies like these enable you to build a high-performing, elastic network that helps keep the bottlenecks out of your cloud. It’s like that dream highway that adds lanes whenever the traffic gets heavy.
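The port-partitioning idea behind SR-IOV can be illustrated with a toy model. All names and numbers here are our own illustrative assumptions; the real partitioning happens in the adapter hardware and drivers, not in application code:

```python
# Toy model of SR-IOV-style partitioning: a physical port is split into
# virtual functions (VFs), each dedicated to one VM with its own isolated
# slice of bandwidth. Names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class VirtualFunction:
    vm: str
    bandwidth_gbps: float  # this VF's share of the physical port

def partition_port(port_gbps: float, vms: list) -> list:
    """Split a physical port evenly into one VF per VM."""
    share = port_gbps / len(vms)
    return [VirtualFunction(vm, share) for vm in vms]

vfs = partition_port(10.0, ["web", "db", "cache", "batch"])
for vf in vfs:
    print(f"VF for {vf.vm}: {vf.bandwidth_gbps:.1f} Gbps, isolated queue")
```

Because each VM talks to its own virtual function, traffic for one tenant never has to be sorted out of another tenant’s stream in software.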

Manage the ups, downs, and in-betweens of services in the cloud
In an apartment building, different tenants have different Internet requirements. Tenants who transfer a lot of large files or play online games want the fastest Internet connections they can get. Tenants who use the Internet only for email and occasional shopping are probably content to live with slower transfer speeds. To stay competitive, service providers need to tailor their offerings to these diverse needs.

This is the way it is in a cloud environment: Different tenants have different service requirements. Some need a lot of bandwidth and the fastest possible response times. Others can settle for something less.

If you’re operating a cloud environment, either public or private, you need to meet these differing requirements. That means you need to be able to allocate the right level of bandwidth to each application and to manage network quality of service (QoS) in a way that meets your service-level agreements (SLAs) with different tenants. In short, you need technologies that let you tailor service quality to the needs and SLAs of different applications and different cloud tenants.

Here are some of the more important technologies for a well-managed cloud network:

  • Data Center Bridging (DCB) provides a collection of standards-based end-to-end networking technologies that make Ethernet the unified fabric for multiple types of traffic in the data center. It enables better traffic prioritization over a single interface, as well as an advanced means of shaping traffic on the network to decrease congestion.
  • Queue Rate Limiting (QRL) assigns a queue to each virtual machine (VM) or each tenant in the cloud environment and controls the amount of bandwidth delivered to that user. The Intel approach to QRL enables a VM or tenant to get a minimum amount of bandwidth, but it doesn’t limit the maximum bandwidth. If there is headroom on the wire, the VM or tenant can use it.
  • Traffic Steering sorts traffic per tenant to support rate limiting, QoS, and other management approaches. Traffic Steering is made possible by on-chip flow classification that delineates one tenant from another. This is like the logic in the local Internet provider’s box in the apartment building: Everybody’s Internet traffic comes to the building in a single pipe, but then gets divided out to each apartment, so all the packets are delivered to the right addresses.

Technologies like these enable your organization to manage the ups, downs, and in-betweens of services in the cloud. You can then tailor your cloud offerings to the needs of different internal or external customers -- and deliver the right level of service at the right price.
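The QRL behavior described above -- a guaranteed minimum per tenant, with no hard cap when there is headroom on the wire -- can be sketched with a toy allocation model. The link speed and tenant minimums are illustrative assumptions:

```python
# Toy model of Queue Rate Limiting (QRL): each tenant is guaranteed a
# minimum bandwidth, and any leftover headroom on the wire is shared out
# rather than capped. All numbers are illustrative.

def allocate(link_gbps: float, minimums: dict) -> dict:
    """Guarantee each tenant its minimum, then split spare capacity evenly."""
    guaranteed = sum(minimums.values())
    if guaranteed > link_gbps:
        raise ValueError("minimum guarantees exceed link capacity")
    headroom = (link_gbps - guaranteed) / len(minimums)
    return {tenant: floor + headroom for tenant, floor in minimums.items()}

shares = allocate(10.0, {"tenant-a": 2.0, "tenant-b": 1.0, "tenant-c": 1.0})
print(shares)  # each tenant gets its floor plus an equal slice of headroom
```

On an idle wire, every tenant can burst well past its floor; the floor only matters when the link is contended.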

On the road to the cloud
For years, people have talked about 10GbE being the future of networking and the foundation of cloud environments. Well, the future is now; 10GbE is here in a big way.

There are many reasons for this fundamental shift. Unified networking based on 10GbE helps you reduce the complexity of your network environment, increase I/O scalability and better manage network quality of service. 10GbE simplifies your network, allowing you to converge to one type of fabric. This is a story of simplification. One network card. One network connection. Optimum LAN and SAN performance. Put it all together and you have a solution for your journey to a unified, cloud-ready network.

To learn more about cloud unified networking, see this resource from our sponsor: Cloud Usage Models.