The Future of Networking on 10 Gigabit Ethernet

The rise of cloud computing is a bit like that of a space shuttle taking off: When the rocket engine and its propellants fire up, the shuttle lifts slowly off the launch pad and then builds momentum until it streaks into space.

Cloud is now in the momentum-building phase and on the verge of quickly soaring to new heights. There are lots of good reasons for the rapid rise of this new approach to computing. Cloud models are widely seen as one of the keys to increasing IT and business agility, making better use of infrastructure and cutting costs.

So how do you launch your cloud? An essential first step is to prepare your network for the unique requirements of services running on a multitenant shared infrastructure. These requirements include IT simplicity, scalability, interoperability and manageability. All of these requirements make the case for unified networking based on 10 gigabit Ethernet (10GbE).

Unified networking over 10GbE simplifies your network environment. It allows you to unite your network into one type of fabric so you don’t have to maintain and manage different technologies for different types of network traffic. You also gain the ability to run storage traffic over a dedicated SAN if that makes the most sense for your organization.

Either way, 10GbE gives you a great deal of scalability, enabling you to quickly scale up your networking bandwidth to keep pace with the dynamic demands of cloud applications. This rapid scalability helps you avoid I/O bottlenecks and meet your service-level agreements.

While that’s all part of the goodness of 10GbE, it’s important to keep this caveat in mind: Not all 10GbE is the same. You need a solution that scales and, with features like intelligent offloads of targeted processing functions, helps you realize best-in-class performance for your cloud network. Unified networking solutions can be enabled through a combination of standard Intel Ethernet products along with trusted network protocols integrated and enabled in a broad range of operating systems and hypervisors. This approach makes unified networking capabilities available on every server, enabling maximum reuse in heterogeneous environments. Ultimately, this approach to unified networking helps you solve today’s overarching cloud networking challenges and create a launch pad for your private, hybrid or public cloud.

The urge to purge: Have you had enough of “too many” and “too much”?
In today’s data center, networks are a story of “too many” and “too much.” That’s too many fabrics, too many cables, and too much complexity. Unified networking simplifies this story. “Too many” and “too much” become “just right.” Let’s start with the fabrics. It’s not uncommon to find an organization that is running three distinctly different networks: a 1GbE management network, a multi-1GbE local area network (LAN), and a Fibre Channel or iSCSI storage area network (SAN).

Unified networking enables cost-effective connectivity to the LAN and the SAN on the same Ethernet fabric. Pick your protocols for your storage traffic. You can use NFS, iSCSI, or Fibre Channel over Ethernet (FCoE) to carry storage traffic over your converged Ethernet network.

You can still have a dedicated network for storage traffic if that works best for your needs. The only difference: That network runs your storage protocols over 10GbE -- the same technology used in your LAN.

When you make this fundamental shift, you can reduce your equipment needs. Convergence of network fabrics allows you to standardize the equipment you use throughout your networking environment -- the same cabling, the same NICs, the same switches. You now need just one set of everything, instead of two or three sets.

In a complementary gain, convergence over 10GbE helps you cut your cable numbers. In a 1GbE world, many virtualized servers have eight to ten ports, each with its own network cable. In a typical deployment, one 10GbE cable could handle all of that traffic. This isn’t a vision of things to come. This world of simplified networking is here today. Better still, it is a world based on open standards. This approach to unified networking increases interoperability with common APIs and open-standard technologies. A few examples of these technologies:

  • Data Center Bridging (DCB) allows multiple types of traffic to share a single Ethernet link without interfering with one another.
  • Fibre Channel over Ethernet (FCoE) enables the Fibre Channel protocol used in many SANs to run over the Ethernet standard common in LANs.
  • Management Component Transport Protocol (MCTP) and Network Controller Sideband Interface (NC-SI) enable server management via the network.

These and other open-standard technologies enable the interoperability that allows network convergence and management simplification. And just like that, “too many” and “too much” become “just right.”
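To make the DCB idea concrete, here is a minimal Python sketch of Enhanced Transmission Selection (ETS)-style bandwidth division, the mechanism DCB uses to let several traffic types share one link. The priority-to-class mapping and the percentage shares are hypothetical examples for illustration, not values from any particular switch or adapter.

```python
# Illustrative sketch: how DCB's Enhanced Transmission Selection (ETS,
# IEEE 802.1Qaz) divides one 10GbE link among traffic classes.
# All mappings and shares below are hypothetical examples.

LINK_GBPS = 10

# Hypothetical mapping of 802.1p priority values to traffic classes
PRIORITY_TO_CLASS = {0: "lan", 1: "lan", 2: "lan", 3: "storage",
                     4: "storage", 5: "lan", 6: "mgmt", 7: "mgmt"}

# Hypothetical ETS bandwidth shares per traffic class (must sum to 100)
CLASS_SHARES = {"lan": 50, "storage": 40, "mgmt": 10}

def classify(priority: int) -> str:
    """Map a frame's 802.1p priority to its traffic class."""
    return PRIORITY_TO_CLASS[priority]

def class_bandwidth_gbps(traffic_class: str) -> float:
    """Guaranteed bandwidth for a traffic class under its ETS share."""
    return LINK_GBPS * CLASS_SHARES[traffic_class] / 100
```

With these example shares, storage traffic (for instance, FCoE frames tagged with priority 3) is guaranteed 4 Gbps of the 10 Gbps link while still sharing the same physical cable with LAN and management traffic.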

Know your limits -- then push them with super-elastic 10GbE

Let’s imagine for a moment a dream highway. In the middle of the night, when traffic is light, the highway is a four-lane road. When the morning rush hour begins and cars flood the road, the highway magically adds several lanes to accommodate the influx of traffic.

This commuter’s dream is the way cloud networks must work. The cloud network must be architected to quickly scale up and down to adapt itself to the dynamic and unpredictable demands of applications. This super-elasticity is a fundamental requirement for a successful cloud.

Of course, achieving this level of elasticity is easier said than done. In a cloud environment, virtualization turns a single physical server into multiple virtual machines, each with its own dynamic I/O bandwidth demands. These dynamic and unpredictable demands can overwhelm networks and lead to unacceptable I/O bottlenecks. The solution to this challenge lies in super-elastic 10GbE networks built for cloud traffic. So what does it take to get there? The right solutions help you build your 10GbE network with unique technologies designed to accelerate virtualization and remove I/O bottlenecks, while complementing solutions from leading cloud software providers.

Consider these examples:

  • The latest Ethernet server adapters support Single Root I/O Virtualization (SR-IOV), a standard created by the PCI Special Interest Group (PCI-SIG). SR-IOV improves network performance for Citrix XenServer and Red Hat KVM by providing dedicated I/O and data isolation between VMs and the network controller. The technology allows you to partition a physical port into multiple virtual I/O ports, each dedicated to a particular virtual machine.
  • Virtual Machine Device Queues (VMDq) improves network performance and CPU utilization for VMware and Windows Server 2008 Hyper-V by reducing the sorting overhead of networking traffic. VMDq offloads data-packet sorting from the virtual switch in the virtual machine monitor and instead does this on the network adapter. This innovation helps you avoid the I/O tax that comes with virtualization.
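The sorting work that VMDq moves off the virtual switch can be pictured with a short sketch. This is purely illustrative Python, not a driver API: incoming frames are steered into per-VM receive queues by destination MAC address, with unknown MACs falling through to a default queue for the software switch.

```python
# Illustrative sketch of VMDq-style packet sorting: frames are steered into
# per-VM queues by destination MAC. The MAC-to-queue table is a hypothetical
# example; real adapters do this classification in hardware.

from collections import defaultdict

# Hypothetical MAC -> VM queue assignment
MAC_TO_QUEUE = {
    "52:54:00:00:00:01": "vm1",
    "52:54:00:00:00:02": "vm2",
}

def sort_frames(frames):
    """Steer each (dst_mac, payload) frame into its VM's receive queue;
    unknown MACs land in a default queue handled by the software switch."""
    queues = defaultdict(list)
    for dst_mac, payload in frames:
        queues[MAC_TO_QUEUE.get(dst_mac, "default")].append(payload)
    return queues

frames = [("52:54:00:00:00:01", "pkt-a"),
          ("52:54:00:00:00:02", "pkt-b"),
          ("52:54:00:00:00:01", "pkt-c")]
```

Without the offload, the hypervisor's virtual switch burns CPU cycles doing exactly this lookup for every frame; with VMDq, the adapter delivers each queue pre-sorted.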

Technologies like these enable you to build a high-performing, elastic network that helps keep the bottlenecks out of your cloud. It’s like that dream highway that adds lanes whenever the traffic gets heavy.

Manage the ups, downs, and in-betweens of services in the cloud
In an apartment building, different tenants have different Internet requirements. Tenants who transfer a lot of large files or play online games want the fastest Internet connections they can get. Tenants who use the Internet only for email and occasional shopping are probably content to live with slower transfer speeds. To stay competitive, service providers need to tailor their offerings to these diverse needs.

This is the way it is in a cloud environment: Different tenants have different service requirements. Some need a lot of bandwidth and the fastest possible throughput times. Others can settle for something less.

If you’re operating a cloud environment, either public or private, you need to meet these differing requirements. That means you need technologies that allow you to allocate the right level of bandwidth to an application, manage network quality of service (QoS), and tailor service quality to the needs and service-level agreements (SLAs) of different applications and different cloud tenants.

Here are some of the more important technologies for a well-managed cloud network:

  • Data Center Bridging (DCB) provides a collection of standards-based end-to-end networking technologies that make Ethernet the unified fabric for multiple types of traffic in the data center. It enables better traffic prioritization over a single interface, as well as an advanced means of shaping traffic on the network to decrease congestion.
  • Queue Rate Limiting (QRL) assigns a queue to each virtual machine (VM) or each tenant in the cloud environment and controls the amount of bandwidth delivered to that user. The Intel approach to QRL enables a VM or tenant to get a minimum amount of bandwidth, but it doesn’t limit the maximum bandwidth. If there is headroom on the wire, the VM or tenant can use it.
  • Traffic Steering sorts traffic per tenant to support rate limiting, QoS and other management approaches. Traffic Steering is made possible by on-chip flow classification that delineates one tenant from another. This is like the logic in the local Internet provider’s box in the apartment building. Everybody’s Internet traffic comes to the building in a single pipe, but then gets divided out to each apartment, so all the packets are delivered to the right addresses.
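The QRL behavior described above -- a guaranteed minimum with no cap when headroom exists -- can be sketched as follows. This is an illustrative model, not Intel's actual scheduler: the minimums, demands, and the equal-split rule for distributing headroom are simplifying assumptions.

```python
# Illustrative sketch of QRL semantics: every tenant is guaranteed its
# minimum bandwidth, and any headroom left on the wire is shared among
# tenants that still want more, rather than capped. Equal-split sharing
# is an assumption made for simplicity.

LINK_GBPS = 10.0

def allocate(minimums, demands):
    """Give each tenant min(guarantee, demand) first, then split remaining
    headroom among tenants whose demand is not yet satisfied."""
    alloc = {t: min(minimums[t], demands[t]) for t in demands}
    headroom = LINK_GBPS - sum(alloc.values())
    wanting = [t for t in demands if demands[t] > alloc[t]]
    while headroom > 1e-9 and wanting:
        share = headroom / len(wanting)
        for t in list(wanting):
            extra = min(share, demands[t] - alloc[t])
            alloc[t] += extra
            headroom -= extra
        wanting = [t for t in wanting if demands[t] > alloc[t] + 1e-9]
    return alloc
```

For example, with three tenants each guaranteed 2 Gbps, a tenant demanding 6 Gbps can take the unused headroom and get all 6, because QRL enforces a floor, not a ceiling.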

Technologies like these enable your organization to manage the ups, downs, and in-betweens of services in the cloud. You can then tailor your cloud offerings to the needs of different internal or external customers -- and deliver the right level of service at the right price.

On the road to the cloud
For years, people have talked about 10GbE being the future of networking and the foundation of cloud environments. Well, the future is now; 10GbE is here in a big way.

There are many reasons for this fundamental shift. Unified networking based on 10GbE helps you reduce the complexity of your network environment, increase I/O scalability and better manage network quality of service. 10GbE simplifies your network, allowing you to converge to one type of fabric. This is a story of simplification. One network card. One network connection. Optimum LAN and SAN performance. Put it all together and you have a solution for your journey to a unified, cloud-ready network.

To learn more about cloud unified networking, see this resource from our sponsor: Cloud Usage Models.

Preparing for Major Shifts in Enterprise Computing

Enterprise computing is at an inflection point because a number of trends and pressures are driving a transition from the traditional client computing model toward a future in which employees will use a wide variety of devices to access information anywhere, at any time. The challenge for IT is how to manage that -- and still secure the enterprise.

The Consumerization of IT
Enterprise IT used to control how employees adopted technology. Now, employees are a major influence on IT’s adoption of new technology. Many employees want to use their own devices to access information. The number of handheld devices continues to increase rapidly in the enterprise environment: Many employees already have one or more devices in addition to their mobile business PCs and are looking for IT to deliver information to all of these devices. By responding to this need, IT can enable employees to work in more flexible and productive ways.

This requires a significant change in the way IT provides services to client devices. IT has typically focused on delivering a build -- an integrated package comprising an OS, applications, and other software -- to a single PC. As employees use a wider range of devices, IT needs to shift focus to delivering services to any device -- and to multiple devices for any employee.

This makes managing technology and security more complex. It also introduces issues for legal and human resources (HR) groups since this means providing access to company-owned information and applications from devices that are owned by users.

The Answer: IT-as-a-service
By taking advantage of a combination of technology trends and emerging compute models -- such as ubiquitous Internet connectivity, virtualization and cloud computing -- IT has an opportunity to proactively address changing user requirements and redefine the way it provides services. We believe this represents the next major change in the way that employees will use technology:

  • Users will have access to corporate information and IT services from any device, whether personal or corporate-owned, anywhere they are, at any time.
  • Multiple personal and corporate devices will work together seamlessly.
  • Corporate information and services will be delivered across these devices while the enterprise continues to be protected.

Employees will enjoy a rich, seamless and more personal experience across multiple devices. They will be able to move from device to device while retaining access to the information and services they need. Their experience will vary depending on the characteristics of the device they are using; services will be context-aware, taking advantage of higher-performing client hardware to deliver an enhanced experience.

By developing a device-independent service delivery model, IT creates a software infrastructure that will make corporate applications and user data available across multiple devices. Device independence provides the ability to deliver services not only on current devices, but also on new categories of devices that may emerge in the future. We expect that the number and variety of handheld and tablet devices will continue to increase rapidly -- and that employees will want to take advantage of these devices, in addition to the devices they already use. Depending on a device’s capabilities, it may be able to run multiple environments, including separate corporate and personal workspaces.

Client Virtualization and Cloud Technologies Make This Possible
By taking advantage of virtualization to abstract IT services from the underlying hardware, IT gains device independence: it can deliver an application or an entire environment in a virtual container, enabling the service to run on any device that supports the virtualization software. IT can also run multiple environments and applications on the same system, each isolated within its own container.

This allows faster development and introduction of new capabilities at lower cost, because IT does not need to certify the OS and each application for every hardware platform. This approach can also reduce IT management cost because IT manages only the virtual containers rather than each hardware platform.

Client virtualization encompasses a range of technologies: client-hosted virtualization (CHV), including Type 2 and Type 1 (bare-metal) client-side hypervisors; server-hosted virtualization (SHV); and application virtualization. IT should anticipate using multiple technologies, depending on the requirements of each use case and the capabilities of the device.

Type 2 client hypervisors: Type 2 hypervisors run as a software process on a host OS on the client system; they support one or more guest OSs in virtual machines (VMs). IT can use Type 2 hypervisors to provide an IT software environment for contractors who develop software for the company. Previously, to make this environment available, IT needed to provide contractors with PCs running an IT software build. Now, IT can simply install a hypervisor on the contractor’s own PC and deliver a streamlined development build on top of the hypervisor. To enable this, the PC must meet minimum specifications, such as Intel Virtualization Technology and a specific OS.

This approach reduces cost and support requirements, and it reduces the company’s risk: IT provides the build within a secure, policy-managed virtual container that is fully encrypted, cannot be copied and will destroy itself if the system does not regularly check in with IT.
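The check-in policy described above can be sketched as simple decision logic. Everything here is hypothetical and for illustration only: the 14-day window, the function name, and the idea that the outcome reduces to a single run-or-destroy decision are assumptions, not details of any real policy engine.

```python
# Illustrative sketch of a policy-managed container's check-in rule: the
# container runs normally while its last check-in with IT is recent, and
# destroys itself once the check-in window lapses. The 14-day window is
# a hypothetical value.

from datetime import datetime, timedelta

CHECK_IN_WINDOW = timedelta(days=14)  # assumed policy window

def container_action(last_check_in: datetime, now: datetime) -> str:
    """Decide the container's fate based on its last successful IT check-in."""
    if now - last_check_in <= CHECK_IN_WINDOW:
        return "run"
    return "destroy"  # policy: the container destroys itself when overdue

now = datetime(2011, 6, 1)
assert container_action(datetime(2011, 5, 25), now) == "run"      # checked in last week
assert container_action(datetime(2011, 4, 1), now) == "destroy"   # two months overdue
```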

Type 1 client hypervisors: These are bare-metal hypervisors that run directly on the client platform, without the need for a host OS. They can provide better security and performance than Type 2 hypervisors. However, on client systems, Type 1 hypervisors are less mature than Type 2 hypervisors.

As the technology matures, IT will see an increasing number of potential uses. Type 1 hypervisors could be valuable for engineers who work with classified proprietary design information. IT could implement two isolated environments on the same client PC: a highly secure environment used for proprietary design information, and a standard enterprise environment. This would allow the user greater flexibility and productivity while protecting corporate intellectual property. IT could also use Type 1 hypervisors to implement and isolate personal and corporate environments on the same system.

Server-hosted virtualization (SHV): With SHV, software is stored on a server in a data center, and it executes in a container on the server rather than on the client; the employee interacts with the server-based software over the network. SHV can be used to deliver an entire desktop environment or individual applications to capable client devices.

Traditional SHV approaches do not support mobility; can cause performance problems with compute- and graphics-intensive applications; and increase the load on network and server infrastructure. However, as SHV technologies mature, they are beginning to identify client capabilities and take advantage of them to improve the user experience. For example, newer protocols that are used with SHV can offload some multimedia processing to PCs rather than executing all of it on the server. This can reduce the impact on network traffic and take advantage of higher-performing clients to deliver a better user experience.

Application virtualization: With application virtualization, applications are packaged and managed on a central server; the applications are streamed to the client device on demand, where they execute in isolated containers and are sometimes cached to improve performance. IT uses application virtualization to deliver core enterprise productivity applications as well as specific line-of-business applications to nonstandard PCs and other personal PCs.

Cloud computing: Cloud computing is an essential element of any IT strategy. Intel IT is developing a private cloud, built on shared virtualized infrastructure, to support the company’s enterprise and office computing environment. The goal is to increase agility and efficiency by using cloud characteristics such as on-demand scalability and provisioning, as well as automated management. Intel IT is also selectively using external cloud services such as software as a service (SaaS) applications.

Over time, one can anticipate that more IT services will be delivered from clouds, facilitating ubiquitous access from multiple types of devices. This will enable IT to use cloud capabilities to broker and manage the connection to each client device.

For More Information:
Visit Intel Software Insight to find white papers on related topics, including:

  • “Personal Handheld Devices in the Enterprise”
  • “Maintaining Information Security While Allowing Personal Handheld Devices in the Enterprise”
  • “Cloud Computing: How Client Devices Affect the User Experience”
  • “Developing an Enterprise Cloud Computing Strategy”
  • “Developing an Enterprise Client Virtualization Strategy”
  • “Enabling Device-independent Mobility With Dynamic Virtual Clients”