Desktop Virtualization on the Verge

Desktop virtualization has been on the verge of widespread adoption for years, but actual deployment hasn’t caught up with the hype. One hurdle is that several distinct technologies fall under the “desktop virtualization” umbrella, and confusion can breed inaction. There is also a dearth of capable management tools to help businesses keep all this virtualization under control, says IDC analyst Ian Song.

On the plus side, desktop virtualization is picking up pace. Last year, some 11 million licenses were sold, according to IDC, and preliminary figures show another 7 million to 8 million were sold in the first half of this year.

Generally speaking, desktop virtualization mimics the more popular server virtualization in that it decouples the physical machine from the software. Here are the basic subcategories of desktop virtualization, and what use cases are appropriate for each.

VDI (Virtual Desktop Infrastructure)
VDI is a double threat: It promises both centralized management (good for IT) and a customizable desktop (good for users).

“VDI lets you put the desktop into the data center, where the image can be managed easily. What’s really nice about it is an IT pro can actually create a separate desktop for all users and test out all those desktops for patches before deploying,” says Lee Collison, solutions architect for Force 3 Inc., a systems integrator based in Crofton, Md.

That is a huge draw, because software vendors pump out patches and fixes faster than IT can keep tabs on them, let alone test and install them. The downside is that VDI presumes near-constant connectivity to the data center, which can be a problem for road warriors.
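
To make that workflow concrete, here is a minimal, self-contained Python sketch of the patch-test-then-deploy cycle Collison describes: patch a clone of the golden image, test it, and only then point the user pools at it. Every name in it is illustrative; this is not any vendor’s VDI API.

```python
"""Sketch of the golden-image workflow: patch a copy of the master desktop
image, test it, and only then roll it out. All names are illustrative."""

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DesktopImage:
    name: str
    patches: tuple = ()

def apply_patches(image: DesktopImage, patches: list) -> DesktopImage:
    # Patching a *clone* leaves the golden image untouched until tests pass.
    return replace(image, patches=image.patches + tuple(patches))

def smoke_test(image: DesktopImage) -> bool:
    # Stand-in for booting the image and checking core apps; always passes here.
    return True

def roll_out(golden: DesktopImage, patches: list, pools: list) -> dict:
    candidate = apply_patches(golden, patches)
    if not smoke_test(candidate):
        return {pool: golden for pool in pools}     # keep users on the known-good image
    return {pool: candidate for pool in pools}      # every pool gets the tested image

assignments = roll_out(DesktopImage("win10-golden"), ["KB500123"], ["finance", "engineering"])
print(assignments)
```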

Market leaders in VDI are VMware View and Citrix XenDesktop.

Client Hypervisors
If VDI assumes connectivity, client hypervisors pretty much assume disconnection.

Because these hypervisors, as their name implies, run on client machines, they suit situations in which users are not always tethered to the corporate LAN. A typical example is a company that employs outside contractors who work on personal machines but must also run corporate-sanctioned applications. With a client hypervisor, the company can hand those contractors a corporate image that they then run offline.

“The key phrase here is ‘disconnected.’ Users can sync up when they’re connected, but otherwise they’re free to roam,” says Ron Oglesby, CTO of Unidesk, a desktop virtualization management company.

Citrix XenClient is a “type 1,” or bare-metal, hypervisor that sits directly atop the client hardware and below the client OS, divvying up system resources into virtual machines (VMs) as needed. Type 2 client hypervisors instead run as a layer above a host OS, and that layer in turn supports several virtual OS instances.
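
One small, practical check related to this layering -- offered as a hedged sketch, not a XenClient feature -- is whether the operating system you are sitting in is running inside a virtual machine at all. On Linux, the CPU “hypervisor” flag in /proc/cpuinfo reveals that, although it does not distinguish a type 1 host from a type 2 host.

```python
"""Check whether this Linux OS instance is running inside a virtual machine
by looking for the "hypervisor" CPU flag. Assumes a Linux guest."""

def running_in_vm(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags") and "hypervisor" in line.split():
                    return True
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False

if __name__ == "__main__":
    print("Inside a VM" if running_in_vm() else "No hypervisor flag detected")
```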

OS Streaming
With this older technology, a networked device boots an operating system image held on a server and streams the software it needs on a download-and-go basis. “Remote or local users on a Wyse or other device download just what they need,” says Collison.

From an administrator’s point of view, the beauty is that one image powers all users -- which is great where absolute consistency is a virtue. OS streaming can also be an inexpensive option in factory floor situations that are unsuitable for pricier and more delicate PCs. But to work well, OS streaming requires a big outbound pipe and good connections to each device, says Collison.
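
To see why the pipe matters, here is a back-of-the-envelope calculation in Python. The figures are assumptions chosen for illustration, not measurements from any deployment: 200 devices booting within the same 10-minute window, each streaming roughly 300 MB of OS and application blocks.

```python
"""Back-of-the-envelope sizing for an OS streaming server's outbound link.
All numbers are illustrative assumptions."""

devices = 200
boot_window_s = 10 * 60      # everyone powers on within 10 minutes
mb_per_boot = 300            # OS + application blocks streamed per device

total_megabits = devices * mb_per_boot * 8
required_mbps = total_megabits / boot_window_s
print(f"Sustained outbound bandwidth needed: ~{required_mbps:,.0f} Mbps")
# ~800 Mbps in this scenario -- close to saturating a single gigabit uplink.
```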

Purists don’t consider OS streaming desktop virtualization per se, because there is no hypervisor or virtual machine involved, but it often coexists with virtual desktops. Its appeal is simple: It’s cheap and efficient.



Changing the Ways Data Centers Are Built

Enterprise IT departments have spent the past several years flocking to the cloud largely for the flexibility it provides, both in terms of technology and economics.

In the case of public clouds, where a third party owns and operates the IT infrastructure, that flexibility includes shifting CapEx and OpEx to the cloud provider and gaining access to cutting-edge IT products the enterprise couldn’t afford on its own.

With private clouds, the enterprise owns the data center(s) and other infrastructure for reasons that typically include concerns about security and performance risks, which can arise when a third party hosts everything. Although the enterprise bears the CapEx and OpEx of owning the private cloud infrastructure, it retains the flexibility to quickly and cost-effectively shift IT resources across different business units to meet each one’s needs. For example, instead of providing the accounts receivable department with dedicated IT resources that lie mostly idle outside of the monthly billing cycle, that equipment can be shifted to other business units during peak periods. That increases ROI.
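
The arithmetic behind that ROI claim is easy to sketch. The numbers below are illustrative assumptions, not benchmarks: a server dedicated to accounts receivable that is busy only during a five-day monthly billing cycle, versus the same capacity placed in a shared pool.

```python
"""Illustrative utilization arithmetic for dedicated vs. pooled hardware.
The figures are assumptions, not measurements."""

busy_days_per_month = 5
days_per_month = 30

dedicated_utilization = busy_days_per_month / days_per_month   # ~17% for one department
pooled_utilization = 0.60                                       # assumed average for a shared pool

print(f"Dedicated to one department: {dedicated_utilization:.0%} utilized")
print(f"Shared across business units: {pooled_utilization:.0%} utilized")
print(f"Useful work per dollar of hardware: ~{pooled_utilization / dedicated_utilization:.1f}x")
```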

A related trend is the rise of data center appliances: servers preloaded with applications that meet each enterprise’s unique business requirements. With data center appliances, either the hardware vendor or the enterprise can be responsible for managing that infrastructure. That business model makes it possible to free the enterprise IT staff for other activities -- or, in the case of a small business, to forgo having an IT staff at all.

The rise of clouds and appliances is reshaping the data center. Here’s how.

Collocating for Access to a Community
When an automaker builds a factory in a state where it has no facilities, its parts suppliers typically add their own plants literally next door. The logic: It’s cheaper and easier to meet just-in-time supply-chain requirements when you’re across the street than across the country.

Data center operators and their customers have started borrowing that strategy. For example, financial services companies are increasingly collocating in the same data centers. That approach avoids the latency that would occur if they were in separate facilities scattered around a country or the world.

Although that may not sound like a major benefit, bear in mind that financial services is an industry where a network bottleneck can cost millions if it means a stock trade is executed after the price has increased. Data center collocation also saves money by minimizing the amount of data shipped over wide-area connections, and staying local means fewer security risks.
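
A rough calculation shows why proximity matters so much. The distances below are examples, and the propagation speed is the standard two-thirds-of-light-speed approximation for fiber; real links add router, firewall and congestion delay on top of it.

```python
"""Rough round-trip propagation delay over fiber. Distances are examples;
200 km/ms is the usual two-thirds-of-c approximation for light in fiber."""

FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_KM_PER_MS

print(f"Cross-connect within one facility (~0.1 km): {round_trip_ms(0.1):.4f} ms")
print(f"New York <-> Chicago (~1,150 km):            {round_trip_ms(1150):.1f} ms")
# Propagation alone adds ~11.5 ms round-trip over that distance -- an eternity
# in automated trading -- before any equipment or congestion delay.
```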

Content creators and distributors are looking for similar opportunities, including sharing data centers with Internet exchanges to avoid the cost and latency of shipping their bits over wide-area connections. “They’re collocating their infrastructure,” says Ali Moinuddin, director of marketing at Interxion, a data center operator whose 1,200 customers include the world’s five largest content delivery networks.

Whatever the industry, the business drivers are similar: “They want to reside in the same data center so they can cross-connect within the data center and share data and services -- without having to leave the data center,” says Moinuddin.

Virtualization: 5 Times the Performance, 66 Percent Savings
The downside to collocation is that organizations can wind up with their IT assets and operations concentrated in one or just a few sites. That design runs counter to the post-Sept. 11 strategy of spreading IT over a wide geographic area to minimize the impact of a major disaster.

CIOs and IT managers also have to ensure that their collocation and geographic-dispersion strategies don’t violate any laws. For example, some governments restrict certain types of data from being stored or transmitted outside their borders.

To avoid running afoul of such laws, enterprises should have SLAs that dictate exactly how and where their cloud-based operations can be switched if a data center goes down. “That’s something that has to be very carefully managed,” says Aileen Smith, senior vice president of collaboration at the TM Forum, an industry organization for IT service providers.

Redundancy and resiliency are also driving the trend toward virtualization, in which Fibre Channel storage, I/O and a host of other functions are decoupled from dedicated hardware and moved into the cloud. This strategy can be more cost-effective than the two historical options: building a multimillion-dollar data center packed with state-of-the-art hardware and software designed to minimize failure, or maintaining an identical backup site that’s used only in emergencies, meaning those expensive assets sit idle most of the time instead of driving revenue.

Instead, virtualization spreads server and application resources over multiple data centers. One way it reduces capital expenses is by allowing the enterprise or its data center provider to use less expensive hardware and software. This strategy doesn’t compromise resiliency, because if one or more parts of a data center go down, operations can switch to another site for the duration. At the same time, there’s no backup data center stocked with nonperforming assets.

How much can enterprises reasonably expect to save from virtualization? F5 Networks estimates virtualization can yield five times the performance at one-third the cost. “If I can put 10 of those low-cost servers in a virtualized resource pool, I’ve got five to 10 times the power of the most powerful midrange system at a third of the cost,” says Erik Geisa, vice president of product management and marketing for F5 Networks. “By virtualizing my servers, I not only realize a tremendous cost savings, but I have a much better architecture for availability and ongoing maintenance. If I need to bring one server down, it doesn’t impact the others, and I can gracefully add in and take out systems to support my underlying architecture.”
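
Geisa’s numbers are easy to check with illustrative prices. The figures below are assumptions made for the sake of the arithmetic, not F5 data: ten commodity servers pooled through virtualization versus one high-end midrange system.

```python
"""Illustrative cost/performance arithmetic for a virtualized server pool.
All prices and performance ratios are assumptions, not vendor figures."""

midrange_cost = 150_000      # assumed price of one powerful midrange system
commodity_cost = 5_000       # assumed price of one low-cost x86 server
pool_size = 10
perf_per_commodity = 0.75    # assumed performance relative to the midrange box

pool_cost = pool_size * commodity_cost
pool_perf = pool_size * perf_per_commodity
print(f"Pool cost: ${pool_cost:,} vs. midrange ${midrange_cost:,} "
      f"({pool_cost / midrange_cost:.0%} of the cost)")
print(f"Pool performance: ~{pool_perf:.1f}x the midrange system")
```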

It’s not your father’s data center anymore.


Expert Insight: How Virtualization Simplifies the Data Center

If there’s one technology that can vastly improve computing environments of any size, it’s virtualization. By using one server to run many virtual servers, you can decrease operational costs and get far more bang for your buck. We spoke with Peter Christy, principal analyst at the Internet Research Group, for the latest on virtualization.

Q: Where are we in the virtualization revolution? Is it a dominant data center technology, or will it become dominant in the near future?

Peter Christy: The use of virtualization in the data center goes through several predictable phases. Large enterprises use virtualization for development and test purposes, and many have also used it to consolidate Windows Server applications onto shared servers. A smaller number are going beyond that to virtualize more complex, business-critical applications. Leading companies are just starting to move toward automated data center operations that enable service-oriented IT.

Q: What do companies need to know about virtualization?

Christy: Virtualization presents new challenges in the implementation of security, storage, networking and other aspects. There is a learning curve.

Q: Do you mean learning for the software developers or the IT staff?

Christy: Both. Virtualization offers interesting new opportunities. I would encourage companies to consider the Cisco/EMC VCE Vblock effort, in which major applications (such as SAP) are being integrated with a complete hardware and software virtual system (built on Cisco’s UCS server). With Vblock, the customer can finally buy a solution with all the system and application integration done by an outside vendor.

What EMC just announced as a VCUBE offers similar benefits at the software level. They are bundling enterprise software, such as Documentum, into a complete virtual application including the underlying operating system. Previously, enterprise software deployment was ugly because the vendor had to customize the package for all the specifics of the customer: server choice, OS choice, storage choice, network design. With a VCUBE, if you have a VMware virtualized data center, you just plunk down a complete, pre-integrated application system.

At the IT staff level, the possibility of using virtualization for automated operation of IT-as-a-service encourages greater internal standardization in areas like storage use, specific OS choice and middleware.

Perhaps most importantly, it completely changes the stovepipe model of application management, in which each application has its own server, network and storage admins. In the future, the data center team has to be made up of system-level specialists, not just functional specialists.

Q: Why is that the case? Can you expand on how virtualization changes the stovepipe model of application management?

Christy: In the past, each application had a custom server, OS, storage and network design, requiring admin specialists for each. In a virtualized data center, those resources are provided by the shared fabric -- without specific admins per application. And application management issues are more at a system level than a functional level, thus requiring multifunction admins who are more broadly trained at the system level.

Q: How does virtualization affect software development?

Christy: By itself, virtualization proves the importance of simplification. To gain the full benefits of IT as a service, it helps to simplify further: use common applications with minimal customization, and standardize on common software stacks and middleware.

Q: Explain this statement to a virtualization newcomer: “By itself, virtualization proves the importance of simplification.”

Christy: The first wave of production virtualization consolidated single-server apps (typically Windows Server apps) onto shared, virtualized servers. The motivation was simple financial return: Hardware was cheaper and utilization better. The ROI, quality and reliability of the delivered services all increased, and when those improvements were examined, it was largely because the complexity of data center operation had been removed through automation.

Rather than having applications running on separate configurations -- each requiring separate procurement for provisioning and operation -- much of this was automated and complexity was removed. Complexity reduction has always been important, but was only appreciated by gurus and theorists. Production virtualization proved this wisdom to a much broader audience.

Q: What are the biggest issues with virtualization?

Christy: The biggest issues are the organizational issues within the data center teams and the way in which virtualization enables entirely new structuring of IT within an enterprise. These are significant, human changes.

Interview: Rob Enderle on Plug-and-play Appliances

One of the can’t-miss enterprise trends is the increasing use of plug-and-play, special-needs appliances in the data center. These appliances handle tasks ranging from networking to storage and deliver data processing capabilities for business intelligence, virtualization and security.

The ease of use of these devices, combined with the growing need for storage, networking and security, means these modular, plug-and-play machines are a trend to watch. To get the pulse of plug and play, we spoke with Rob Enderle, principal analyst at the Enderle Group.                                                         


Q: What are the hottest trends in the plug-and-play appliance arena?

Enderle: The midmarket appears to love these devices, because that market lacks deep IT skills and the IT staff they have are often spread too thin. For example, they just don’t have the bandwidth to learn large new storage systems, and as a result, have taken to these appliances like ducks to water. Enterprises like them for remote offices and contained projects for similar reasons.

Q: Why are these appliances gaining so much traction? Is it because appliances come equipped with software pre-installed and there’s no futzing around with app servers or operating systems?

Enderle: Yes, they contain both the costs and the overhead for the related technology, so they are easier to initially configure, vastly easier to run and lack the complexity of traditional products. This is very attractive in deployments where there isn’t any overhead to handle complexity.

Q: IT services have consolidated. Have appliances grown into devices that are capable of handling data center chores?

Enderle: These devices are clearly moving up-market. The financial downturn caused massive downsizing in support roles like IT, and many are hesitant to hire heavily on the economic upswing for fear it will be short-lived. This makes appliances a great choice for IT organizations that are now overwhelmed because they can’t or won’t staff up yet.

Q: Virtualization means new servers and applications can be quickly delivered, and many tasks can be handled with special-needs servers. But are we fully at the point where special-needs servers can take over most of the IT workload?

Enderle: Far from it. We are still in the early days of appliance adoption, and I doubt this will mature fully for decades. Appliances remain a growing exception to the more traditional, highly customized approach to IT back-office technology. Adoption will take time, and plug-and-play appliances like these are still years away from becoming the default standard across the broad market.

Q: Vendors ranging from IBM to NetApp are looking for new, innovative ways to combine these technologies. Is this trend going to grow?

Enderle: Certainly! This approach has not only proven lucrative, it has led to increased customer loyalty and strong market share growth. Success tends to breed a new IT approach that others want to copy, and appliance development programs are breeding like bunnies at the moment.

Migration to the Cloud: Evolution Without Confusion

The rapid rise of cloud computing has been driven by the benefits it delivers: huge cost savings with low initial investment, ease of adoption, operational efficiency, elasticity and scalability, on-demand resources, and the use of equipment that is largely abstracted from the user and enterprise.

Of course, these cloud computing benefits all come with an array of new challenges and decisions. That’s partly because cloud products and services are being introduced in increasingly varied forms as public clouds, private clouds and hybrid clouds. They also deliver software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) solutions, and come with emerging licensing, pricing and delivery models that raise budgeting, security, compliance and governance implications.

Making these decisions is also about balancing the benefits, challenges and risks of those cloud computing options against your company’s technology criteria. Many core criteria matter: agility, availability, capacity, cost, device and location independence, latency, manageability, multi-tenancy, performance, reliability, scalability and security, among others. And the available cloud options vary widely on each of these criteria -- not to mention the significant challenge of integrating all of this with your existing infrastructure.
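
One simple way to structure that balancing act is a weighted scoring matrix over the criteria listed above. The Python sketch below shows only the mechanics; every weight and score is a placeholder for your own assessment, not a recommendation of any deployment model.

```python
"""Weighted scoring matrix for comparing cloud deployment models.
Weights and scores are placeholders, not recommendations."""

weights = {"security": 5, "cost": 4, "scalability": 4, "latency": 3, "manageability": 3}

# 1 (poor) to 5 (strong) for each deployment model -- illustrative values only.
scores = {
    "public":  {"security": 2, "cost": 5, "scalability": 5, "latency": 3, "manageability": 4},
    "private": {"security": 5, "cost": 2, "scalability": 3, "latency": 4, "manageability": 3},
    "hybrid":  {"security": 4, "cost": 3, "scalability": 4, "latency": 3, "manageability": 2},
}

for model, s in scores.items():
    total = sum(weights[c] * s[c] for c in weights)
    print(f"{model:8s} weighted score: {total}")
```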

There are fundamentally challenging questions that companies will be forced to grapple with as they decide what cloud functionality suits them best. The central issues include security, cost, scalability and integration.

Public, Private or Hybrid?

The three differ in important ways:

  • Public cloud services require the least investment to get started, have the lowest costs of operation, and their capacity is eminently scalable to many servers and users. But security and compliance concerns persist regarding multi-tenancy of the most sensitive enterprise data and applications, both while resident in the cloud and during transfer over the Internet. Some organizations may not accept this loss of control of their data center function.

  • Private cloud services offer the ability to host applications or virtual machines in a company’s own infrastructure, providing the cloud benefits of shared hardware costs (thanks to virtualization, the hardware is abstracted), federated resources from external providers, the ability to recover from failure, and the ability to scale with demand. There are fewer security concerns, because existing data center security stays in place and IT organizations retain control. But because companies must buy, build and manage their private cloud(s), they don’t benefit from lower up-front capital costs or reduced hands-on management. Further, operational processes must be adapted wherever existing processes aren’t suitable for a private cloud environment. And private clouds are simply not as elastic or cost-effective as public clouds.

  • Hybrid clouds are just a mix of at least one public cloud and one private cloud, combined with your existing infrastructure. Hybrid cloud interest is powered by the desire to take advantage of public and private cloud benefits in a seamless manner. Hybrid combines the benefits and risks of public and private: offering security, compliance, and control of the enterprise private cloud for sensitive, mission-critical workloads, and scalable elasticity and lower costs for apps and services deployed in the public cloud.

That combination of operational flexibility and scalability for peak and bursty workloads is the ideal, but the reality is that hybrid cloud solutions are just emerging, require additional management capabilities, and raise the same kinds of security issues for data moving between private and public clouds.
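
The placement logic a hybrid model implies can be sketched in a few lines. The classification rules below are illustrative assumptions, not a feature of any product: sensitive or compliance-bound workloads stay private, while bursty, less sensitive ones go to the public cloud.

```python
"""Toy hybrid-cloud placement policy. The rules are illustrative assumptions."""

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    sensitive: bool   # regulated or mission-critical data stays in the private cloud
    bursty: bool      # spiky demand benefits from public-cloud elasticity

def place(w: Workload) -> str:
    if w.sensitive:
        return "private cloud"
    return "public cloud (bursting)" if w.bursty else "public cloud"

for w in [Workload("customer-billing", sensitive=True, bursty=False),
          Workload("marketing-site", sensitive=False, bursty=True),
          Workload("internal-wiki", sensitive=False, bursty=False)]:
    print(f"{w.name:16s} -> {place(w)}")
```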

Transformational Change or Legacy Environment?
The diversity of cloud offerings means businesses evaluating various cloud computing options must decide how to integrate cloud resources with their legacy equipment, applications, people and processes, and determine whether and how this will transform their business IT or simply extend what they have today and plan for the future.

The reality of cloud environments is that they will need to coexist with the legacy environments. A publicly traded firm with thousands of deployed apps is not going to rewrite them for the public cloud.

One determining factor may be whether the services being deployed to the cloud are “greenfield” (lacking any constraints imposed by prior work), or “brownfield” (development and deployment in the presence of existing systems). In the absence of constraints, greenfield applications are more easily deployed to the cloud.

Ideally, hybrid solutions allow organizations to create or move existing applications between clouds, without having to alter networking, security policies, operational processes or management and monitoring tools. But the reality is that, due to issues of interoperability, mobility, differing APIs, tools, policies and processes, hybrid clouds generally increase complexity.

The Forecast Is Cloudy, Turning Sunny
For the foreseeable future, many organizations will employ a mixed IT environment that includes both public and private clouds as well as non-cloud systems and applications, because the economics are so attractive. But as they adopt the cloud, enterprise IT shops will need to focus on security, performance, scalability and cost -- and avoid vendor lock-in -- in order to achieve overall efficiencies.

Security concerns will be decisive for many CIOs, but companies are increasingly going to move all but their most sensitive data to the cloud. They will weave together cloud and non-cloud environments and take steps to keep that data secure along the way.

Non-mission-critical applications -- such as collaboration, communications, customer-service and supply-chain tools -- will be excellent candidates for the public cloud.

There’s a Cloud Solution for That

As hybrid cloud offerings mature, cloud capabilities will be built into a variety of product offerings, including virtualization platforms and system management suites. Vendor and service provider offerings will blur the boundaries between public and private environments, enabling applications to move between clouds based on real-time demand and economics.

In the not-too-distant future, hybrid cloud platforms will provide capabilities to connect and execute complex workflows across multiple types of clouds in a federated ecosystem.

Products from Amazon, HP, IBM, Red Hat, VMware and others offer companies the ability to create hybrid clouds using existing computing resources, including virtual servers, and in-house or hosted physical servers.

There are also hybrid devices designed to sit in the data center and connect to public cloud providers, offering control and security along with the cost savings of the cloud. For example:

  • Red Hat’s open-source products enable interoperability and portability via a flexible cloud stack that includes its operating system, middleware and virtualization. The company recently announced its own platform-as-a-service offering, OpenShift (for the public cloud), and an infrastructure-as-a-service offering, CloudForms (a private cloud solution).
  • VMware’s vCenter Operations combines system configuration, performance management and capacity management for the data center. It also supports virtual machines that can be deployed either inside the data center or beyond the firewall in the cloud.

Are we there yet?