The Future of Networking on 10 Gigabit Ethernet

The rise of cloud computing is a bit like that of a space shuttle taking off: When the rocket engine and its propellants fire up, the shuttle lifts slowly off the launch pad and then builds momentum until it streaks into space.

Cloud is now in the momentum-building phase and on the verge of quickly soaring to new heights. There are lots of good reasons for the rapid rise of this new approach to computing. Cloud models are widely seen as one of the keys to increasing IT and business agility, making better use of infrastructure and cutting costs.

So how do you launch your cloud? An essential first step is to prepare your network for the unique requirements of services running on a multitenant shared infrastructure. These requirements include IT simplicity, scalability, interoperability and manageability. All of these requirements make the case for unified networking based on 10 gigabit Ethernet (10GbE).

Unified networking over 10GbE simplifies your network environment. It allows you to unite your network into one type of fabric so you don’t have to maintain and manage different technologies for different types of network traffic. You also gain the ability to run storage traffic over a dedicated SAN if that makes the most sense for your organization.

Either way, 10GbE gives you a great deal of scalability, enabling you to quickly scale up your networking bandwidth to keep pace with the dynamic demands of cloud applications. This rapid scalability helps you avoid I/O bottlenecks and meet your service-level agreements.

While that’s all part of the goodness of 10GbE, it’s important to keep this caveat in mind: Not all 10GbE is the same. You need a solution that scales and, with features like intelligent offloads of targeted processing functions, helps you realize best-in-class performance for your cloud network. Unified networking solutions can be enabled through a combination of standard Intel Ethernet products along with trusted network protocols integrated and enabled in a broad range of operating systems and hypervisors. This approach makes unified networking capabilities available on every server, enabling maximum reuse in heterogeneous environments. Ultimately, this approach to unified networking helps you solve today’s overarching cloud networking challenges and create a launch pad for your private, hybrid or public cloud.

The urge to purge: Have you had enough of “too many” and “too much”?
In today’s data center, networks are a story of “too many” and “too much.” That’s too many fabrics, too many cables, and too much complexity. Unified networking simplifies this story. “Too many” and “too much” become “just right.” Let’s start with the fabrics. It’s not uncommon to find an organization that is running three distinctly different networks: a 1GbE management network, a multi-1GbE local area network (LAN), and a Fibre Channel or iSCSI storage area network (SAN).

Unified networking enables cost-effective connectivity to the LAN and the SAN on the same Ethernet fabric. Pick your protocols for your storage traffic. You can use NFS, iSCSI, or Fibre Channel over Ethernet (FCoE) to carry storage traffic over your converged Ethernet network.

You can still have a dedicated network for storage traffic if that works best for your needs. The only difference: That network runs your storage protocols over 10GbE -- the same technology used in your LAN.

When you make this fundamental shift, you can reduce your equipment needs. Convergence of network fabrics allows you to standardize the equipment you use throughout your networking environment -- the same cabling, the same NICs, the same switches. You now need just one set of everything, instead of two or three sets.

In a complementary gain, convergence over 10GbE helps you cut your cable numbers. In a 1GbE world, many virtualized servers have eight to 10 ports, each of which has its own network cable. In a typical deployment, one 10GbE cable could handle all of that traffic. This isn’t a vision of things to come. This world of simplified networking is here today. Better still, this is a world based on open standards. This approach to unified networking increases interoperability with common APIs and open-standard technologies. A few examples of these technologies:

  • Data Center Bridging (DCB) allows multiple types of traffic to run over a single Ethernet wire, as sketched after this list.
  • Fibre Channel over Ethernet (FCoE) enables the Fibre Channel protocol used in many SANs to run over the Ethernet standard common in LANs.
  • Management Component Transport Protocol (MCTP) and Network Controller Sideband Interface (NC-SI) enable server management via the network.
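
To make the traffic-class idea concrete, here is a minimal Python sketch of a DCB-style policy. The priority-to-class mapping and bandwidth shares are hypothetical illustrations (priority 3 is the value commonly assigned to FCoE), not a reference configuration for any particular switch or adapter.

    # Hypothetical DCB-style settings: map 802.1p priorities to traffic classes
    # and give each class a bandwidth share, so storage, LAN, and management
    # traffic can coexist on one 10GbE wire. Values are illustrative only.
    PRIORITY_TO_CLASS = {0: "lan", 1: "lan", 2: "lan", 3: "storage_fcoe",
                         4: "lan", 5: "lan", 6: "management", 7: "management"}
    CLASS_BANDWIDTH_PCT = {"storage_fcoe": 50, "lan": 40, "management": 10}

    def class_for_frame(priority: int) -> str:
        """Return the traffic class for a frame tagged with the given 802.1p priority."""
        return PRIORITY_TO_CLASS[priority]

    assert sum(CLASS_BANDWIDTH_PCT.values()) == 100  # shares must cover the link
    print(class_for_frame(3), CLASS_BANDWIDTH_PCT["storage_fcoe"])  # storage_fcoe 50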

These and other open-standard technologies enable the interoperability that allows network convergence and management simplification. And just like that, “too many” and “too much” become “just right.”

Know your limits -- then push them with super-elastic 10GbE

Let’s imagine for a moment a dream highway. In the middle of the night, when traffic is light, the highway is a four-lane road. When the morning rush hour begins and cars flood the road, the highway magically adds several lanes to accommodate the influx of traffic.

This commuter’s dream is the way cloud networks must work. The cloud network must be architected to quickly scale up and down to adapt itself to the dynamic and unpredictable demands of applications. This super-elasticity is a fundamental requirement for a successful cloud.

Of course, achieving this level of elasticity is easier said than done. In a cloud environment, virtualization turns a single physical server into multiple virtual machines, each with its own dynamic I/O bandwidth demands. These dynamic and unpredictable demands can overwhelm networks and lead to unacceptable I/O bottlenecks. The solution to this challenge lies in super-elastic 10GbE networks built for cloud traffic. So what does it take to get there? The right solutions help you build your 10GbE network with unique technologies designed to accelerate virtualization and remove I/O bottlenecks, while complementing solutions from leading cloud software providers.

Consider these examples:

  • The latest Ethernet server adapters support Single Root I/O Virtualization (SR-IOV), a standard created by the PCI Special Interest Group (PCI-SIG). SR-IOV improves network performance for Citrix XenServer and Red Hat KVM by providing dedicated I/O and data isolation between VMs and the network controller. The technology allows you to partition a physical port into multiple virtual I/O ports, each dedicated to a particular virtual machine (see the sketch after this list).
  • Virtual Machine Device Queues (VMDq) improves network performance and CPU utilization for VMware and Windows Server 2008 Hyper-V by reducing the sorting overhead of networking traffic. VMDq offloads data-packet sorting from the virtual switch in the virtual machine monitor and instead performs it on the network adapter. This innovation helps you avoid the I/O tax that comes with virtualization.
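
As a concrete illustration of the first item, the short Python sketch below reads and sets the standard Linux sysfs attributes for SR-IOV virtual functions. It assumes a Linux host and a hypothetical 10GbE interface named eth2; enabling virtual functions requires root privileges and an SR-IOV-capable adapter. Each virtual function exposed this way can then be assigned to a virtual machine as its own dedicated I/O port.

    # Minimal sketch: inspect and enable SR-IOV virtual functions (VFs) through
    # the standard Linux sysfs attributes. "eth2" is a hypothetical interface name.
    from pathlib import Path

    IFACE = "eth2"
    DEVICE = Path(f"/sys/class/net/{IFACE}/device")

    def sriov_capacity(device: Path) -> tuple[int, int]:
        """Return (VFs currently enabled, maximum VFs the adapter supports)."""
        enabled = int((device / "sriov_numvfs").read_text())
        supported = int((device / "sriov_totalvfs").read_text())
        return enabled, supported

    def enable_vfs(device: Path, count: int) -> None:
        """Request `count` virtual functions; needs root and an SR-IOV-capable NIC."""
        (device / "sriov_numvfs").write_text(str(count))

    enabled, supported = sriov_capacity(DEVICE)
    print(f"{IFACE}: {enabled} of {supported} virtual functions enabled")
    # enable_vfs(DEVICE, 4)  # e.g., carve the port into four VFs (run as root)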

Technologies like these enable you to build a high-performing, elastic network that helps keep the bottlenecks out of your cloud. It’s like that dream highway that adds lanes whenever the traffic gets heavy.

Manage the ups, downs, and in-betweens of services in the cloud
In an apartment building, different tenants have different Internet requirements. Tenants who transfer a lot of large files or play online games want the fastest Internet connections they can get. Tenants who use the Internet only for email and occasional shopping are probably content to live with slower transfer speeds. To stay competitive, service providers need to tailor their offerings to these diverse needs.

This is the way it is in a cloud environment: Different tenants have different service requirements. Some need a lot of bandwidth and the fastest possible throughput times. Others can settle for something less.

If you’re operating a cloud environment, either public or private, you need to meet these differing requirements. That means being able to allocate the right level of bandwidth to an application and manage network quality of service (QoS) in a manner that meets your service-level agreements (SLAs) with different tenants. Doing so takes technologies that allow you to tailor service quality to the needs and SLAs of different applications and different cloud tenants.

Here are some of the more important technologies for a well-managed cloud network:

  • Data Center Bridging (DCB) provides a collection of standards-based, end-to-end networking technologies that make Ethernet the unified fabric for multiple types of traffic in the data center. It enables better traffic prioritization over a single interface, as well as an advanced means of shaping traffic on the network to decrease congestion.
  • Queue Rate Limiting (QRL) assigns a queue to each virtual machine (VM) or each tenant in the cloud environment and controls the amount of bandwidth delivered to that user. The Intel approach to QRL enables a VM or tenant to get a minimum amount of bandwidth, but it doesn’t limit the maximum bandwidth. If there is headroom on the wire, the VM or tenant can use it (a simplified sketch of this allocation logic follows the list).
  • Traffic Steering sorts traffic per tenant to support rate limiting, QoS and other management approaches. Traffic Steering is made possible by on-chip flow classification that delineates one tenant from another. This is like the logic in the local Internet provider’s box in the apartment building. Everybody’s Internet traffic comes to the apartment in a single pipe, but then gets divided out to each apartment, so all the packets are delivered to the right addresses.
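
The Python sketch below models the kind of allocation behavior described for QRL -- a guaranteed minimum per tenant, with leftover headroom shared among tenants that still want more. It is a simplified illustration of the logic, not Intel's implementation.

    # Simplified model of minimum-guarantee bandwidth allocation with headroom
    # sharing (illustrative only; not Intel's QRL implementation).
    def allocate_bandwidth(link_gbps, minimums, demands):
        # Start each tenant at the lower of its demand and its guaranteed minimum.
        alloc = {t: min(demands[t], minimums[t]) for t in demands}
        headroom = link_gbps - sum(alloc.values())
        wanting = {t for t in demands if demands[t] > alloc[t]}
        # Hand out remaining headroom to tenants that still want more bandwidth.
        while headroom > 1e-9 and wanting:
            share = headroom / len(wanting)
            for t in list(wanting):
                extra = min(share, demands[t] - alloc[t])
                alloc[t] += extra
                headroom -= extra
                if demands[t] - alloc[t] < 1e-9:
                    wanting.discard(t)
        return alloc

    # A 10GbE link, three VMs with 2 Gbps minimums and uneven demand.
    print(allocate_bandwidth(10.0,
                             {"vm_a": 2.0, "vm_b": 2.0, "vm_c": 2.0},
                             {"vm_a": 1.0, "vm_b": 6.0, "vm_c": 6.0}))
    # -> vm_a gets 1.0; vm_b and vm_c each get 4.5 (they split the headroom)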

Technologies like these enable your organization to manage the ups, downs, and in-betweens of services in the cloud. You can then tailor your cloud offerings to the needs of different internal or external customers -- and deliver the right level of service at the right price.

On the road to the cloud
For years, people have talked about 10GbE being the future of networking and the foundation of cloud environments. Well, the future is now; 10GbE is here in a big way.

There are many reasons for this fundamental shift. Unified networking based on 10GbE helps you reduce the complexity of your network environment, increase I/O scalability and better manage network quality of service. 10GbE simplifies your network, allowing you to converge to one type of fabric. This is a story of simplification. One network card. One network connection. Optimum LAN and SAN performance. Put it all together and you have a solution for your journey to a unified, cloud-ready network.

To learn more about cloud unified networking, see this resource from our sponsor: Cloud Usage Models.

Preparing for Major Shifts in Enterprise Computing

Enterprise computing is at an inflection point because a number of trends and pressures are driving a transition from the traditional client computing model toward a future in which employees will use a wide variety of devices to access information anywhere, at any time. The challenge for IT is how to manage that -- and still secure the enterprise.

The Consumerization of IT
Enterprise IT used to control how employees adopted technology. Now, employees are a major influence on IT’s adoption of new technology. Many employees want to use their own devices to access information. The number of handheld devices continues to increase rapidly in the enterprise environment: Many employees already have one or more devices in addition to their mobile business PCs and are looking for IT to deliver information to all of these devices. By responding to this need, IT can enable employees to work in more flexible and productive ways.

This requires a significant change in the way IT provides services to client devices. IT has typically focused on delivering a build -- an integrated package comprising an OS, applications, and other software -- to a single PC. As employees use a wider range of devices, IT needs to shift focus to delivering services to any device -- and to multiple devices for any employee.

This makes managing technology and security more complex. It also introduces issues for legal and human resources (HR) groups since this means providing access to company-owned information and applications from devices that are owned by users.

The Answer: IT-as-a-service
By taking advantage of a combination of technology trends and emerging compute models -- such as ubiquitous Internet connectivity, virtualization and cloud computing -- IT has an opportunity to proactively address changing user requirements and redefine the way it provides services. We believe this represents the next major change in the way that employees will use technology:

  • Users will have access to corporate information and IT services from any device, whether personal or corporate-owned, anywhere they are, at any time.
  • Multiple personal and corporate devices will work together seamlessly.
  • Corporate information and services will be delivered across these devices while the enterprise continues to be protected.

Employees will enjoy a rich, seamless and more personal experience across multiple devices. They will be able to move from device to device while retaining access to the information and services they need. Their experience will vary depending on the characteristics of the device they are using; services will be context-aware, taking advantage of higher-performing client hardware to deliver an enhanced experience.

By developing a device-independent service delivery model, IT creates a software infrastructure that will make corporate applications and user data available across multiple devices. Device independence provides the ability to deliver services not only on current devices, but also on new categories of devices that may emerge in the future. We expect that the number and variety of handheld and tablet devices will continue to increase rapidly -- and that employees will want to take advantage of these devices, in addition to the devices they already use. Depending on a device’s capabilities, it may be able to run multiple environments, including separate corporate and personal workspaces.

Client Virtualization and Cloud Technologies Make This Possible
Virtualization makes device independence possible by abstracting IT services from the underlying hardware: IT delivers an application or an entire environment in a virtual container, which enables the service to run on any device that supports the virtualization software. IT can also run multiple environments and applications, each isolated within its own container, on the same system.

This allows faster development and introduction of new capabilities at lower cost, because IT does not need to certify the OS and each application for every hardware platform. This approach can also reduce IT management cost because IT manages only the virtual containers rather than each hardware platform.

Client virtualization encompasses a range of technologies: client-hosted virtualization (CHV), including Type 2 and Type 1 (bare-metal) client-side hypervisors; server-hosted virtualization (SHV); and application virtualization. IT should anticipate using multiple technologies, depending on the requirements of each use case and the capabilities of the device.
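
As a rough illustration of that matching exercise, the hypothetical Python sketch below picks a delivery approach from a few device and workload attributes. The rules are illustrative only; they are not Intel IT’s actual decision policy.

    # Hypothetical decision helper: choose a client virtualization approach from
    # the use case and the device's capabilities. Rules are illustrative only.
    def choose_delivery(corporate_managed: bool, supports_client_hypervisor: bool,
                        needs_offline_use: bool, high_security: bool) -> str:
        if not supports_client_hypervisor:
            # Keep execution in the data center for devices that can't host VMs.
            return "server-hosted virtualization (SHV)"
        if high_security:
            # Bare-metal isolation for sensitive environments.
            return "Type 1 (bare-metal) client hypervisor"
        if needs_offline_use or not corporate_managed:
            # Run locally in a managed container, e.g. on a contractor-owned PC.
            return "Type 2 client hypervisor"
        return "application virtualization (streamed, locally executed)"

    print(choose_delivery(corporate_managed=False, supports_client_hypervisor=True,
                          needs_offline_use=True, high_security=False))
    # -> Type 2 client hypervisor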

Type 2 client hypervisors: Type 2 hypervisors run as a software process on a host OS on the client system; they support one or more guest OSs in virtual machines (VMs). IT can use Type 2 hypervisors to provide an IT software environment for contractors who develop software for the company. Previously, to make this environment available, IT needed to provide contractors with PCs running an IT software build; now IT can simply install a hypervisor on the contractor’s own PC and deliver a streamlined development build on top of the hypervisor. To enable this, the PC must meet minimum specifications, such as Intel Virtualization Technology and a specific OS.

This approach reduces cost and support requirements, and it reduces the company’s risk: IT provides the build within a secure, policy-managed virtual container that is fully encrypted, cannot be copied and will destroy itself if the system does not regularly check in with IT.

Type 1 client hypervisors: These are bare-metal hypervisors that run directly on the client platform, without the need for a host OS. They can provide better security and performance than Type 2 hypervisors. However, on client systems, Type 1 hypervisors are less mature than Type 2 hypervisors.

As the technology matures, IT will see an increasing number of potential uses. Type 1 hypervisors could be valuable for engineers who work with classified proprietary design information. IT could implement two isolated environments on the same client PC: a highly secure environment used for proprietary design information, and a standard enterprise environment. This would allow the user greater flexibility and productivity while protecting corporate intellectual property. IT could also use Type 1 hypervisors to implement and isolate personal and corporate environments on the same system.

Server-hosted virtualization (SHV): With SHV, software is stored on a server in a data center, and it executes in a container on the server rather than on the client; the employee interacts with the server-based software over the network. SHV can be used to deliver an entire desktop environment or individual applications to capable client devices.

Traditional SHV approaches do not support mobility, can cause performance problems with compute- and graphics-intensive applications, and increase the load on network and server infrastructure. However, as SHV technologies mature, they are beginning to identify client capabilities and take advantage of them to improve the user experience. For example, newer protocols used with SHV can offload some multimedia processing to PCs rather than executing all of it on the server. This can reduce the impact on network traffic and take advantage of higher-performing clients to deliver a better user experience.

Application virtualization: With application virtualization, applications are packaged and managed on a central server; the applications are streamed to the client device on demand, where they execute in isolated containers and are sometimes cached to improve performance. IT uses application virtualization to deliver core enterprise productivity applications as well as specific line-of-business applications to nonstandard PCs and other personal PCs.

Cloud computing: Cloud computing is an essential element of any IT strategy. Intel IT is developing a private cloud, built on shared virtualized infrastructure, to support the company’s enterprise and office computing environment. The goal is to increase agility and efficiency by using cloud characteristics such as on-demand scalability and provisioning, as well as automated management. Intel IT is also selectively using external cloud services such as software as a service (SaaS) applications.

Over time, one can anticipate that more IT services will be delivered from clouds, facilitating ubiquitous access from multiple types of devices. This will enable IT to take advantage of cloud capabilities to broker and manage the connection to each client device.

For More Information:
Visit Intel Software Insight or Intel.com/IT to find white papers on related topics, including:

  • “Personal Handheld Devices in the Enterprise”
  • “Maintaining Information Security While Allowing Personal Handheld Devices in the Enterprise”
  • “Cloud Computing: How Client Devices Affect the User Experience”
  • “Developing an Enterprise Cloud Computing Strategy”
  • “Developing an Enterprise Client Virtualization Strategy”
  • “Enabling Device-independent Mobility With Dynamic Virtual Clients”

Open Clouds

Enterprises and developers pursuing cloud computing can tap into an emerging category of software to get the job done: open-source software that provides compute infrastructure and delivers applications.

Organizations can tap open-source alternatives for both the infrastructure as a service (IaaS) and platform as a service (PaaS) cloud models. IaaS provides the basic underpinnings of cloud computing. The National Institute of Standards and Technology (NIST) defines IaaS as the ability to provision “processing, storage, networks and other fundamental computer resources.” PaaS, meanwhile, offers a foundation for cloud software development. NIST states that PaaS lets customers deploy applications “created using programming languages and tools supported by the provider.”

Open-source options for IaaS include OpenStack, Eucalyptus and Red Hat’s CloudForms. Open-source PaaS made a splash this past spring with the arrival of Red Hat’s OpenShift and VMware’s Cloud Foundry.

Kamesh Pemmaraju, head of marketing at Cloud Technology Partners (CloudTP), a cloud consulting firm, says enterprises are turning to open-source clouds as they revisit and refresh their IT strategies.

“People want to move away from proprietary solutions,” he says. “They can look at open-source and cloud solutions up and down the stack.”

Open IaaS
The open-source IaaS offerings all perform roughly the same function, but may appeal to different customer sets. Eucalyptus IaaS software, for example, could prove a fitting option for organizations with high-availability needs. Eucalyptus Systems Inc. launched in 2009 to take the software beyond its university origins.

“They are moving toward building cloud-based management infrastructure that is highly available,” Pemmaraju says of Eucalyptus. “They are looking for companies that have high availability as a requirement ... financial services, trading -- any business process where downtime can be extremely expensive.”

OpenStack, meanwhile, has quickly attracted adherents since Rackspace Hosting and NASA founded the project in 2010.

“A lot of people are backing it up,” says Pemmaraju, noting recent vendor support from Dell and HP.

In July 2011, Dell unveiled what it termed the first available cloud solution offering built on OpenStack. In September, HP announced a private beta program for its HP Cloud Services, which are based on OpenStack technology.

Pemmaraju surmises that OpenStack’s platform has the most promise for future enterprise adoption, particularly for customers starting from bare metal with no existing commitment to a cloud operating system.

On the other hand, customers of Red Hat Linux may find the company’s CloudForms a logical cloud approach. Those customers aren’t starting from bare metal, and they essentially build their cloud infrastructure on top of Red Hat Enterprise Linux, says Pemmaraju. OpenShift is also available to fill in the PaaS layer.

“Clearly they have much more in terms of the overall stack,” notes Pemmaraju.

PaaS Developments
PaaS, while not as widely deployed as IaaS, could eventually prove the more pivotal cloud model.

Alan Shimel, managing partner of The CISO Group, a security consulting firm, says PaaS may win out over IaaS over the long haul, the latter being more dominant today. He says people want more than just the basic operating system and hypervisor layer.

“They are going to want a rich development platform,” says Shimel.

Pemmaraju describes PaaS as “the battleground of the future.” He notes that pure infrastructure services firms are moving up the stack, while companies at the application layer are moving down.

Developer support will boost a given platform’s prospects, since more applications generally translate into greater adoption. Accordingly, multiple-language support appears to be a priority among PaaS providers. The Express version of OpenShift, for example, supports Ruby (Rails, Sinatra), Python (Pylons, Django, Turbogears), Perl (PerlDancer), PHP (Zend, Cake, Symfony, CodeIgniter) and Java (EE 6, Spring, CDI/Weld, Seam).

Cloud Foundry currently supports “applications written in Spring Java, Rails and Sinatra for Ruby, Node.js and other JVM languages/frameworks including Groovy, Grails and Scala,” according to CloudFoundry.org.

“It looks like all these PaaS players are moving toward open source, multiple languages and multiple deployment models,” says Pemmaraju.

Energy Savings in the Cloud

Cloud computing adopters may find an unanticipated edge when moving to this IT model: the ability to trim energy consumption.

Cloud proponents often cite the ability to rapidly scale resources as the initial driver behind cloud migration. This elasticity lets organizations dial up -- and dial down -- compute power, storage and apps to match changes in demand.

The resulting efficiency boost cuts cost and makes for a solid business case. But the cloud also has a green side: The technology, when deployed with care, can significantly reduce your power usage and carbon footprint.

The linchpin cloud technologies -- virtualization and multi-tenancy -- support this dual role. Virtualization software lets organizations shrink the server population, letting IT departments run multiple applications -- packaged as virtual machines -- on a single physical server. Storage arrays can also be consolidated and virtualized. This process creates resource pools that serve numerous customers in a multi-tenant model.

Enterprises that recast their traditional data centers along cloud lines, creating so-called private clouds, have an opportunity to move toward energy conservation. A number of metrics have emerged to help chart that course. Power usage effectiveness (PUE), SPECpower, and utilization in general are among the measures that can help organizations track their progress.

IT managers, however, must remain watchful to root out inefficient hardware and data center practices as they pursue the cloud’s energy savings potential.

Working in Tandem
EMC Corp. has found that streamlined IT and energy savings work in tandem in its private cloud deployment. The company embarked on a consolidation and virtualization push as its IT infrastructure began to age.

Jon Peirce, vice president of EMC private cloud infrastructure and services, describes the company’s cloud as “a highly consolidated, standardized, virtualized infrastructure that is multi-tenant within the confines of our firewall.”

EMC hosts multiple applications within the same standardized, multi-tenant infrastructure, he added.

As for energy efficiency, virtualized storage has produced the greatest impact thus far. EMC, prior to the cloud shift, operated 168 separate storage infrastructure stacks across five data centers. In that setting, capacity planning became nearly impossible. The company started buffering extra capacity, which led to device utilization of less than 50 percent, explained Peirce.

Poor utilization wasn’t the only issue. EMC found storage to consume more electricity per floor tile than servers in its data centers.

“The first thing we did in our journey toward cloud computing was to take the storage component and collapse it to the smallest number we thought possible,” says Peirce.

EMC reduced 168 infrastructure stacks to 13, driving utilization up to about 70 percent in the process. Consolidation also reduced the overall storage footprint, which now grows more slowly because the company doesn’t have to buffer as much capacity.

Storage tiering also contributed to energy reduction. In tiering, application data gets assigned to the most cost-effective -- and energy-efficient -- storage platform. EMC was able to re-platform many applications from tier-one Fibre Channel disk drives to SATA drives, higher-capacity devices that consume less electricity per gigabyte.

“That let us accommodate more data per kilowatt hour of electricity consumed,” says Peirce.

Moving data to energy-efficient drives contributed to the greening of EMC’s IT.

Together, tiering and consolidation cut EMC’s data center power requirement by 34 percent, leading to a projected 90-million-pound reduction in carbon footprint over five years, according to an Enterprise Strategy Group audit.

Getting the Most From the Cloud
John Stanley, research analyst for data center technologies and eco-efficient IT at market researcher The 451 Group, said energy efficiency may be divided into two categories: steps organizations can take on the IT side, and steps they can take on the facilities side.

On the IT side, the biggest thing a data center or private cloud operator can do to save energy is get as much work out of each hardware device as possible, says Stanley. Even an idle server may consume 100 to 250 watts, so it makes sense to have it doing something, he notes.

Taking care of the IT angle can save energy on the facilities side. Stanley says an organization able to consolidate its workload on half as many servers will spit out half as much heat. Less heat translates into lower air-conditioning requirements.

“When you save energy in IT, you end up saving energy in the facilities side as well,” says Stanley.

But the cloud doesn’t cut energy costs completely on its own. IT managers need to make a conscious effort to realize the cloud’s energy-savings potential, cautions Stanley. He says they should ask themselves whether they are committed to doing more work with fewer servers. They should also question their hardware decisions. An organization consolidating on aging hardware may need to choose more energy-efficient servers in the next hardware refresh cycle, he adds.

In that vein, developments in server technology offer energy-savings possibilities. Today’s Intel Xeon Processor-based servers, for instance, include the ability to “set power consumption and trade off the power consumption level against performance,” according to an Intel article on power management.

This class of servers lets IT managers collect historical power consumption data via standard APIs. This information makes it possible to optimize the loading of power-limited racks, the article notes.

Keeping Score
Enterprises determined to wring more power efficiency out of their clouds have a few metrics available to assess their efforts.

Power usage effectiveness, or PUE, is one such metric. The Green Grid, a consortium that pursues resource efficiency in data centers, created PUE to measure how data centers use energy. PUE is calculated by dividing total facility energy use by the energy consumed to run IT equipment. The metric aims to provide insight into how much energy is expended on overhead activities, such as cooling.
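
For readers who want the arithmetic spelled out, here is a minimal Python sketch of the calculation, using hypothetical annual figures:

    # Power usage effectiveness: total facility energy divided by IT equipment
    # energy. The figures below are hypothetical, for illustration only.
    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        return total_facility_kwh / it_equipment_kwh

    # A facility that draws 3,000,000 kWh in a year while its IT gear consumes
    # 2,000,000 kWh has a PUE of 1.5: half a kWh of overhead (cooling, power
    # distribution, lighting) for every kWh of useful IT work.
    print(pue(3_000_000, 2_000_000))  # -> 1.5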

Roger Tipley, vice president at The Green Grid, said PUE is a good metric for facilities engineers who need to size a data center’s infrastructure appropriately for the IT equipment installed. PUE may be coupled with the PUE Scalability Metric, a tool for assessing how a data center’s infrastructure copes with changes in IT power loads.

Peirce said EMC tracks PUE fairly closely and designed a recently opened data center to deliver a very aggressive PUE number. The lowest possible PUE value is 1.0, which represents 100-percent efficiency. While PUE looks at energy from a facility-wide perspective, data center operators also focus more specifically on hardware utilization. Peirce said utilization is the main metric EMC uses, employing that measure to gauge both cost and energy efficiency.

Stanley says utilization measures can get a bit complicated. A CPU may experience very high utilization, but have no essential business tasks assigned to it.

“Utilization is not necessarily always the same thing as useful work,” he says.

Other hardware-centric metrics include SPECpower_ssj2008 and PAR4. Standard Performance Evaluation Corp.’s SPECpower assesses the “power and performance characteristics of volume server class computers,” according to the company. PAR4, developed by Underwriters Laboratories and Power Assure, is used to measure the power usage of IT equipment.

The various measures can guide enterprises toward energy savings, but making the efficiency grade requires a concerted effort.

“Yes, you can save energy by switching to a private cloud,” says Stanley. “But just saying ‘We are going to switch’ is not enough.”


Why You Need Double Protection Now: Hardware and Software

Theft-prone laptops and Web authentication concerns have inspired some developers to double up on data protection, combining software and hardware to bolster security.

The approach lets security software and services vendors leverage security features built into the latest generation of processors. Independent software vendors (ISVs) are already tapping hardware resources to improve encryption. The basic concept holds that harnessing the best attributes of software and hardware provides a more in-depth defense.

Mauricio Cuervo, product manager at Intel, noted that software has greater flexibility than hardware -- new features can be added more quickly -- but it is more susceptible to attacks. Hardware, in contrast, is generally more robust and difficult to penetrate. Bringing the two together unites the richness of software and the tamper-resistant nature of hardware, he added.

“The ultimate objective is to drive that synergy,” says Cuervo.

Anti-theft Technology
With that combination in mind, ISVs can now tap Intel’s Anti-Theft Technology to tighten laptop security. That technology, also referred to as Intel AT, is built into second-generation Intel Core and Core vPro processors. It allows ISVs to bring theft deterrence into their solutions while improving data protection with hardware assistance, says Cuervo.

Cuervo, who focuses on Intel AT, said Absolute Software, PlumChoice, Symantec and WinMagic are partnering with Intel on Intel AT to improve their security solutions. He said other partnerships are in the pipeline.

Garry McCracken, vice president of technology partnerships at WinMagic Inc., says the company has been able to add value to its full-disk encryption software by enabling an Intel AT feature called data encryption disable. The company is able to take part of its key -- for decrypting and accessing data -- and place it inside Intel’s AT chip.

“If the Intel chip receives a kill pill or goes into a stolen state, the platform goes into a platform disable state and the computer is of no use to the thief,” says McCracken.

“Even the actual hard drive is no longer accessible -- even to users that still have their credentials -- because part of the key is locked up into the hardware,” he adds. “That is a key synergy between full-disk encryption and how we can leverage the capability of the hardware. It is worth noting that if the computer is recovered, an authorized administrator can re-enable the computer and regain access to the data.”

PlumChoice Inc., meanwhile, will incorporate Intel AT in its SAFElink Anti-Theft service. The software-based solution locks down a customer’s laptop when a theft or loss trigger is detected. Josh Goldlust, vice president of product management for PlumChoice, says Intel AT support means that lockdown can occur before a laptop’s operating system boots. PlumChoice recently demoed this technology at an Intel developer conference.

Intel integration will let PlumChoice enable an enhanced version of SAFElink Anti-Theft, which will test customers to assure they are the proper user of the device and have them provide the correct credentials, explains Goldlust. The PlumChoice solution performs a similar function without Intel AT, but the security measure takes place once the operating system has booted.

With Intel AT, Cuervo said Intel works with ISVs that concentrate on asset tracking, asset retrieval, or data protection of PCs. Developers who are interested in Intel AT can visit Anti-Theft.Intel.com for an overview, he noted. Intel provides an Intel AT SDK for developers, but only after the company has interviewed a potential ISV partner.

“It involves a deep level of integration with our product,” says Cuervo. “We have to first analyze if it is a good fit for them and for Intel -- we do a discovery session.”

Identity Protection
Developers can also leverage Intel’s Identity Protection Technology (Intel IPT) to boost security. Intel IPT comes built into second-generation Intel Core-based PCs and laptops, providing hardware-based, two-factor authentication. The technology aims to secure access to online accounts, virtual private networks, and applications.

Two-factor authentication generally involves username/password and something the user possesses -- a security token, for example. The token generates a number that serves as a one-time password. An Intel IPT-equipped computer, however, serves as the security token. Intel IPT generates the one-time password from an embedded processor on a computer’s motherboard.
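
The general token mechanism is straightforward to sketch. The Python example below computes a generic time-based one-time password along the lines of RFC 6238; it illustrates the concept only and says nothing about how Intel IPT derives its codes internally.

    # Generic time-based one-time password (TOTP) sketch, per RFC 6238. This
    # illustrates the token concept; it is not Intel IPT's internal mechanism.
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
        counter = int(time.time()) // interval        # current 30-second window
        msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Client and server share the secret; both can compute the same short-lived code.
    print(totp(b"secret-provisioned-at-enrollment"))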

Soren Knudsen, product marketing engineer at Intel, said the company worked directly with large ISVs on Intel IPT. Currently, Symantec Corp. and VASCO Data Security International are security ISVs for Intel IPT. He said most security software companies looking to add a level of authentication and trust to their solutions would work with an Intel security ISV.

“They would use the partner’s SDK, or services, or calls and implement that into their software solution,” explains Knudsen.

Knudsen says developers can implement Intel IPT into a piece of Windows software or a website. He said most developers are using the technology to do authentication on the Web.

Faster Encryption
Cryptographic acceleration represents another area where software and hardware security meet.

WinMagic, for example, has started making use of Intel’s Advanced Encryption Standard New Instructions (AES-NI). Those instructions, built into many recent Intel CPUs, accelerate encryption in hardware.

Thi Nguyen-Huu, president and CEO, WinMagic, says AES-NI deployment has a positive impact on solid state drives (SSDs). He said SSDs are 10 times faster than traditional rotating-disk hard drives.

Without the boost from AES-NI, “software encryption would be the bottleneck and negate the advance of the SSD,” says Nguyen-Huu.

“We implemented AES-NI on the software side and found we have a greatly improved overall solution,” adds McCracken.
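
For a sense of what this looks like from the software side, the Python sketch below encrypts a buffer with the third-party cryptography package. That package’s OpenSSL backend typically dispatches to AES-NI automatically on CPUs that support the instructions, so the same code simply runs faster on AES-NI hardware. This is a generic illustration, not WinMagic’s code.

    # Bulk AES-CTR encryption with the third-party "cryptography" package. Its
    # OpenSSL backend generally uses AES-NI automatically when the CPU supports
    # it, so this same code speeds up on AES-NI hardware. Illustration only.
    import os
    import time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(32), os.urandom(16)       # AES-256 key, CTR nonce
    data = os.urandom(64 * 1024 * 1024)               # 64 MB of test data

    start = time.perf_counter()
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    ciphertext = encryptor.update(data) + encryptor.finalize()
    elapsed = time.perf_counter() - start

    print(f"Encrypted {len(data) >> 20} MB in {elapsed:.2f} s "
          f"({len(data) / elapsed / 2**20:.0f} MB/s)")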

Beyond Intel, AMD’s next-generation Bulldozer processor cores are also expected to include encryption instructions.
