Open Clouds

Enterprises and developers pursuing cloud computing can tap into an emerging category of software to get the job done: open-source software that provides compute infrastructure and delivers applications.

Organizations can tap open-source alternatives for both the infrastructure as a service (IaaS) and platform as a service (PaaS) cloud models. IaaS provides the basic underpinnings of cloud computing. The National Institute of Standards and Technology (NIST) defines IaaS as giving customers the ability to provision “processing, storage, networks and other fundamental computing resources.” PaaS, meanwhile, offers a foundation for cloud software development. NIST states that PaaS lets customers deploy applications “created using programming languages and tools supported by the provider.”

Open-source options for IaaS include OpenStack, Eucalyptus and Red Hat’s CloudForms. Open-source PaaS made a splash this past spring with the arrival of Red Hat’s OpenShift and VMware’s Cloud Foundry.

Kamesh Pemmaraju, head of marketing at Cloud Technology Partners (CloudTP), a cloud consulting firm, says enterprises are turning to open-source clouds as they revisit and refresh their IT strategies.

“People want to move away from proprietary solutions,” he says. “They can look at open-source and cloud solutions up and down the stack.”

Open IaaS
The open-source IaaS offerings all perform roughly the same function, but may appeal to different customer sets. Eucalyptus IaaS software, for example, could prove a fitting option for organizations with high-availability needs. Eucalyptus Systems Inc. launched in 2009 to take the software beyond its university origins.

“They are moving toward building cloud-based management infrastructure that is highly available,” Pemmaraju says of Eucalyptus. “They are looking for companies that have high availability as a requirement ... financial services, trading -- any business process where downtime can be extremely expensive.”

OpenStack, meanwhile, has quickly attracted adherents since its 2010 founding by Rackspace Hosting and NASA.

“A lot of people are backing it up,” says Pemmaraju, noting recent vendor support from Dell and HP.

In July 2011, Dell unveiled what it termed the first available cloud solution offering built on OpenStack. In September, HP announced a private beta program for its HP Cloud Services, which are based on OpenStack technology.

Pemmaraju surmises that OpenStack’s platform has the most promise for future enterprise adoption, particularly for customers starting from bare metal with no existing commitment to a cloud operating system.

On the other hand, customers of Red Hat Linux may find the company’s CloudForms a logical cloud approach. Those customers aren’t starting from bare metal, and they essentially build their cloud infrastructure on top of Red Hat Enterprise Linux, says Pemmaraju. OpenShift is also available to fill in the PaaS layer.

“Clearly they have much more in terms of the overall stack,” notes Pemmaraju.

PaaS Developments
PaaS, while not as widely deployed as IaaS, could eventually prove the more pivotal cloud model.

Alan Shimel, managing partner of The CISO Group, a security consulting firm, says PaaS may win out over the long haul, even though IaaS is the more dominant model today. He says people want more than just the basic operating system and hypervisor layer.

“They are going to want a rich development platform,” says Shimel.

Pemmaraju describes PaaS as “the battleground of the future.” He notes that pure infrastructure services firms are moving up the stack, while companies at the application layer are moving down.

Developer support will boost a given platform’s prospects, since more applications generally translate into greater adoption. Accordingly, multiple-language support appears to be a priority among PaaS providers. The Express version of OpenShift, for example, supports Ruby (Rails, Sinatra), Python (Pylons, Django, Turbogears), Perl (PerlDancer), PHP (Zend, Cake, Symfony, CodeIgniter) and Java (EE 6, Spring, CDI/Weld, Seam).

Cloud Foundry currently supports “applications written in Spring Java, Rails and Sinatra for Ruby, Node.js and other JVM languages/frameworks including Groovy, Grails and Scala,” according to CloudFoundry.org.

“It looks like all these PaaS players are moving toward open source, multiple languages and multiple deployment models,” says Pemmaraju.

Getting Physical with Virtual Servers

As IT shops deploy an increasing number of virtual servers, the challenge of managing both these servers and the physical servers alongside them grows. Here, we explore solutions with Jerry Carter, CTO of Likewise Software, who explains the role of a well-conceived server management strategy in cloud computing, how to avoid virtual server sprawl, and tools for managing physical and virtual servers:

Q. To have a successful cloud strategy, how important is it that an IT shop effectively manages both its physical and virtual servers and the associated storage systems?

Jerry Carter: It’s critical in order to scale the IT infrastructure to be able to meet the business needs of the company. The concept of virtualizing the machine is about rising up the stack. You virtualize the machine and then you start running multiple VMs on one piece of hardware. Then, you virtualize the storage, but you must isolate the management of the storage resources from the application that is using that storage. If you can’t abstract those resources from the application, then you end up managing pockets of data. When you are moving from physical environments to virtual ones, you must have a solid [data/IT/storage] management strategy in mind; otherwise, your problem gets worse and management costs rise. You might end up cutting power consumption and gaining space through consolidation, but you might increase the number of images you have to manage.

Q. How big of a problem is managing just the storage aspect of this?

J.C.: At VMworld in August, a speaker asked, ‘When you have performance problems with your VM, how many would say that over 75 percent of the time it’s a problem involving storage?’ A huge number of people raised their hands. If you just ignore the whole storage capacity and provisioning problem, then how can you manage policy to ensure you are securely protecting the data that represents the business side of the company? You must be able to apply consistent policy across the entire system; otherwise, you are managing independent pockets of storage.

Q. In thinking about managing physical and virtual servers, should IT shops start with private clouds before attempting public clouds?

J.C.: The reason people start with private clouds has to do with their level of confidence. They have to see a solution that provides a level of security for information going outside their network; not every critical application knows how to talk to the cloud. So a private cloud strategy gives you the ability to gateway protocols that those applications can actually understand, along with the back-end cloud storage APIs. For instance, look at the announcement Microsoft made involving the improvements to its Server Message Block (SMB) 2.2 for Windows Server 8. Historically, server-based applications like SQL Server and IIS have required block storage that is mounted through iSCSI for the local apps to work. So what Microsoft has done is position SMB in the cloud as a competitor to block storage for those server-based applications.

Q. Is it important that Windows Server and Microsoft’s cloud strategy succeed in order to broaden the appeal of managing physical and virtual servers?

J.C.: If you look at most of the application workloads on virtualized infrastructures, something like 90 percent of them are running on VMware servers carrying Microsoft workloads. VMware is trying to virtualize the operating system and the hypervisor. Microsoft’s position is to enable those application workloads because those are the things really driving business value. I think it is very important that Windows Server 8 succeeds, but I think the more critical aspect is that it supports SMB 2.2.

Q. What is the state of software tools for managing physical and virtual servers right now?

J.C.: I think the maturity and growth of virtual machine management has been tremendous. When you look at things like vSphere and vCenter from VMware that allow you to manage the hypervisor and the individual VMs, be able to deploy them rapidly and then spin them up or down on an as-needed basis, it is impressive. But the problem that remains is in treating VMs as independent entities. What business users really care about is what is inside the VM, but the hypervisor doesn’t really deploy policy to the guest. There are some clever people doing interesting things, but generally it’s still a broad set of technologies for dealing with heterogeneous networks. I don’t think [management tools] have advanced as fast as the VM management infrastructure has.

Q. With so many more virtual servers being implemented than physical servers, how do you manage virtual server sprawl?

J.C.: First, people have to realize that it is a problem. Not only do most people have large numbers of VMs deployed that are no longer in use, but they don’t have a handle on what is actually inside of those VMs. There could be data that was created inside a particular VM for development purposes. But you have all this other data used to build a critical document just sitting out on a VM. I think the security implications are huge. If you are using individual VMs for storage, then you can have huge amounts of business-critical, sensitive data sitting on these underutilized VMs. If you aren’t doing something to manage the security, then you are vulnerable to data leakage. Some managers think, ‘All my users are storing data on the central file server, so this isn’t a problem.’ But inadvertently, users copy data locally for temporary work. The best way to address it is to have a plan in place prior to allocating new VMs, where you can apply a consistent authentication and security policy. That way, users know what they are allowed to do within a particular device.

Q. What adjustments must IT make for things like data protection as they rapidly provision more physical and virtual servers?

J.C.: Data protection can’t exist without data centralization. You can’t manage individual pockets of storage or individual VMs themselves. Another issue IT has is that users don’t start removing unnecessary files from disks until the disk starts to get full. But with storage capacities ever increasing, disks never fill up, so there is never any reason to go back in and clean them up. So you can end up with the same problem caused by this virtual machine sprawl.

Q. And does unstructured data rear its ugly head in this scenario too?

J.C.: Yes. I think the biggest problem is the amount of unstructured data that exists out there; people don’t really have a handle on that. About 75 percent of data out there is unstructured. The question I always pose to users is: Given all of the data you have in your network, what is the one thing you most want to know about it? The answer is usually, ‘Well, I’d like to know I can correlate the locality of my data to prove to someone I am meeting their SLAs,’ or, ‘I want to know when people are accessing data outside their normal usage pattern.’ They need to understand data about their existing data and what access people actually have.

Q. What is a common mistake IT people make as they grow their physical and virtual server infrastructure up and out?

J.C.: Just as it’s impossible to manage individual pockets of applications, it is impossible to manage individual containers of storage within a single, large storage infrastructure. You must have something that addresses the overall problem. This is what people fail to understand when they move from a smaller infrastructure to a massive one. With this machine and storage sprawl, any cracks in the existing management techniques, software or policies become chasms as the size of the problem increases from something small to very large.

Q. Is IT looking more seriously at open-source solutions for managing physical and virtual servers?

J.C.: Open source will continue to play a predominant role in a lot of IT shops. But one of the real challenges is the amount of investment made in developing expertise on individual solutions. Another is finding people with expertise when you have turnover. There is a lot of great open-source technology available, but it is not always in product form. People become experts in the technology, but it can be risky to rely on technology expertise rather than product expertise, which can be replicated. The question is: Can it address the whole problem, or is fixing individual pain points a better way to go? I think the jury is still out on that.


Bare-metal Client Hypervisors

This flavor of desktop virtualization, referred to as a Type 1 hypervisor, lets virtual machines run directly on the client device -- hence the bare-metal moniker. The other client virtualization strategy, Type 2, places virtual machines on top of the operating system. The bare-metal approach offers the potential for better performance since fewer layers of software are involved. The technology is also considered more secure because there is no underlying base OS to be compromised by viruses, keyloggers or other threats.

Type 1 client hypervisors available today include Citrix Systems Inc.’s XenClient (which debuted in 2010), MokaFive’s BareMetal (which began shipping in June) and Virtual Computer Inc.’s NxTop (which launched in 2009). In addition, Microsoft reportedly may include its Hyper-V Type 1 technology in its upcoming Windows 8 client operating system, although the company declines to confirm those reports.

In the enterprise, bare-metal client hypervisors are gaining acceptance among customers who require an extra measure of security. The Type 1 technology also plays a role among customers who want to create business-only images for their corporate-owned machines, as opposed to employee-owned clients brought into work. Type 2 hypervisors are typically the rule for the bring-your-own-device style of computing.

As for hardware platforms, Type 1 devices currently focus on desktops and laptops. Industry executives question whether those hypervisors will find their way onto media tablets and smartphones as well.

While the technology hasn’t fully matured, bare-metal hypervisors could merit a look for organizations mulling virtualization. Bare-metal client hypervisors “have a very valid use case,” says Mark Bowker, senior analyst at Enterprise Strategy Group.

“IT organizations ... should be thinking about ways to include it in their environments,” he says.

The Case for Bare-Metal
Type 2 virtualization products do an adequate job -- letting users run Windows on Macs, for example -- but Citrix wanted a hypervisor that could have more control over virtual machines, notes Ramana Jonnala, vice president of product management for XenClient. In January 2009, Citrix agreed to work with Intel Corp. to create a Type 1 client hypervisor based on Xen open-source technology.

The Type 1 approach lets organizations provide enterprise laptops with separate business and personal environments. An IT administrator can maintain a business-only virtual machine, providing patches and updates, and let users manage their own personal virtual machine, says Jonnala. That way, administrators don’t have to worry about end users downloading software that slows the laptop or causes malware infections -- at least on the isolated business side.

“It lets them have better control of managing the images on the laptop,” he says.

“It also means they don’t have users installing apps or malware that affect corporate apps anymore.”

Similarly, MokaFive views management as key to client virtualization. Purnima Padmanabhan, the company’s vice president of products and marketing, says virtualization addresses the problem of managing distributed endpoints.

“It allows me to control the image and wrap it in a secure bubble and drop it on an end point,” she says.

MokaFive in May launched BareMetal, a Type 1 hypervisor that targets corporate-owned client devices. The hypervisor lets IT managers deploy the identical “golden image” across desktops and laptops. The company also markets Type 2 client virtualization technology geared toward employee- or contractor-owned gear.

Padmanabhan cites Windows 7 migration as one role for the company’s bare-metal product. The hypervisor lets companies install a Windows 7 environment on a range of machines without having to create separate Windows 7 builds for each type of hardware platform, she says.

And both Padmanabhan and Jonnala point to security-minded customers as a market for bare-metal client hypervisors.

Citrix in May debuted XenClient XT, which takes advantage of the security capabilities of Intel Core vPro. The federal government market is the initial audience for XenClient XT, says Jonnala, noting the need for secure environments in that space.

Citrix already has rung up some orders for XenClient XT and “a good number” of customers are evaluating the technology, according to Jonnala.

Evolution
Bare-metal clients are making progress, but Bowker says the technology’s development continues.

“Let’s not get too far ahead of ourselves. This technology, in particular, is still evolving,” he says.

Bowker suggests more work needs to be done on the management side of client hypervisor technology. He says the most important thing to focus on is the ability to centrally manage, maintain and secure devices.

Recent vendor moves in that direction include Citrix’s Synchronizer, which the company says helps customers deploy XenClient-equipped laptops across larger enterprise environments and manage virtual desktops centrally. Synchronizer is included in XenClient 2, which was announced in May.

In addition, MokaFive’s BareMetal applies updates to machines through a central management console.

Beyond management, there’s another consideration for the future of bare-metal: Will the technology play a role in mobile platforms such as media tablets and smartphones?

Jonnala says XenClient specifically targets corporate laptop users, adding that Citrix has a different virtualization strategy for devices such as tablets and smartphones. In that area, the company emphasizes Citrix Receiver, a universal software client that gives users access to the corporate desktop and applications delivered via Citrix products. Citrix offers Receivers for mobile platforms including Apple, Android and RIM.

Padmanabhan says MokaFive is looking to have a solution for mobile devices, but notes that it will not be based on a hypervisor technology.

“Hypervisors as we know them are too heavy to run on mobile devices,” she says.

“So today, we support the ability to remote to your desktop from the mobile device.”

Bowker, meanwhile, questions whether bare-metal client hypervisors are relevant for media tablets and smartphones. He says he views the IT challenge in this space as designing, architecting and modernizing apps to be used on mobile devices. Leveraging the operating system, as opposed to running virtual machines directly on the hardware, is key.

“I don’t see multiple instances of Android on some tablet device,” he says. “I’m not buying into that one yet.”

Is Ubicomp at a Tipping Point?

The Palo Alto Research Center (PARC) coined the term “ubiquitous computing” in 1988, but it took the next two decades for the PARC researchers’ vision to start becoming a part of the workplace and the rest of everyday life. As manager of PARC’s Ubiquitous Computing Area, Bo Begole is shepherding ubicomp into the mainstream with some help from a variety of other trends -- particularly the growth in cellular and Wi-Fi coverage, smartphone adoption and cloud computing.

Begole recently discussed what enterprises need to consider when deciding where and how to implement ubicomp, both internally and as a way to better serve their customers. One recommendation: People need to feel that they’re always in control.

Q: The term “ubiquitous computing” means a lot of different things, depending on whom you’re talking to. How do you define it? Where did the term originate?

Begole: It kind of came on the heels of personal computing, which had superseded mainframe computing. So there was that pattern of naming. But what was distinct about ubiquitous computing in contrast to those others is that “mainframe computing” and “personal computing” always implied that a computer was the center of your focus.

I wanted to be able to conduct computing tasks ubiquitously: wherever you were, and whenever you needed it, as you were conducting your daily life. It’s more about ubiquitous working or living and using computer technologies in the course of that.

Q: Ubicomp dovetails nicely with another trend: cloud computing, where you’ve always got access to a network of computers.

Begole: Right. The cloud is an enabler of ubiquitous computing because you need constant access to the information services that you might utilize at that point. The PARC researchers, having lived with the personal computer for 15 years at that point, were envisioning this paradigm, and they saw that it was going to involve ubiquitous devices and also ubiquitous wireless networks. So they started to prototype those types of devices and services.

Q: Ubicomp seems a bit like unified communications: For at least the past 15 years, visionaries and vendors have touted unified communications as the next big thing. But it wasn’t until a few years ago that enterprises had deployed enough of the necessary building blocks, such as VoIP and desktop videoconferencing. Now unified communications is common in the workplace. Is ubicomp at a similar tipping point?

Begole: That’s a good analogy because unified communications requires a critical mass of adoption of certain services, and then the next stage was to make them all interoperable. That’s what’s happened with ubiquitous computing too. The early researchers saw the inevitability of these pervasive devices and networks, and they prototyped some. But it’s taken a while for that critical mass of smart devices and wireless networks to emerge.

I’d say that the tipping point was around 2005 to 2007, when smartphones came out. GPS chips embedded in those phones really enabled those kinds of context-aware services that ubiquitous computing research had been pushing for a while. The next wave is very intelligent context-aware services. The Holy Grail is a personal assistant that has so much intimate knowledge of what matters to you that it can sometimes proactively access information for you and enable services that you’re going to need in the very near future. That’s where things are going.

Q: Here’s a hypothetical scenario: My flight lands. I turn on my smartphone, which checks my itinerary and uses that to give me directions to baggage claim, the rental car desk and my hotel. But it also tells me that on the way to baggage claim, there’s a Starbucks, which it knows I like because of all the times I’ve logged in from Foursquare. Is that an example of the types of day-to-day experiences that people can expect from ubicomp?

Begole: Even a little beyond that. Maybe you like an independent coffee brewer in your local area. Now you’re in a new local area, so rather than recommending Starbucks, it will find the most popular local brewer because it knows the types of things you’re interested in.

It’s connecting the dots, which is what we expect humans to be able to do, and it’s possible for computers to be able to do it. They have access to all this kind of information. The smartphone is a good hub for that because it’s got access to all of your digital services, and it has intimate knowledge about your physical situation at any time.

Q: That scenario also highlights one of the challenges for ubiquitous computing: balancing the desire for personalization with concerns about privacy.

Begole: Ubiquitous computing services have to be respectful of the concerns that people have. Otherwise, it’s going to limit the adoption. It’s a little different for consumers than for enterprises. Employees probably have a little less expectation about the privacy of the data they’re exchanging on enterprise computing systems, but they may still have concerns about access to that information.

We’ve done deployments in enterprises with technologies that were observing situations in an office to see if you were interruptible for communications. To put people at ease with the sensors that were reading the environment, we put an on-off switch on the box so that, at any time, they could opt out completely. In the entire three-month time, nobody used that switch. You might take from that that it’s not important to have that capability, but it is so they have the comfort that they can gain control of their information.

Q: Interesting. Tell us some more about that deployment.

Begole: We did that at Sun Microsystems. It was connected to Sun’s instant-message system. We were using that to provide the presence information and the interruptibility of the people on the IM service. That made it easier for remotely distributed teams to have awareness of when they could reach people: You could see not just whether somebody was online and in their office, but whether they were available for you to call them right now.

We took it a step further: If they weren’t available right now, we’d have the system predict when they were most likely to become available. That was based on statistical analysis of their presence patterns over time. That’s the kind of intelligence that ubiquitous computing expands.
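
To make the statistical idea concrete, here is a minimal, hypothetical sketch -- not Sun’s or PARC’s actual system; the data and function names are invented for illustration -- that estimates when someone is most likely to become available by counting, for each hour of the day, how often they have switched from busy to available in a historical presence log.

    # Hypothetical sketch: estimate when a colleague is most likely to become
    # available, based on past presence transitions logged by an IM system.
    # It illustrates the statistical idea only; it is not the Sun/PARC system.
    from collections import Counter
    from datetime import datetime

    def likely_available_hour(became_available_times):
        """Return the hour of day (0-23) at which the person has most often
        switched from busy to available, or None if there is no history."""
        if not became_available_times:
            return None
        counts = Counter(t.hour for t in became_available_times)
        hour, _ = counts.most_common(1)[0]
        return hour

    # Example with made-up history: this person tends to free up around 10 a.m.
    history = [
        datetime(2011, 9, 12, 10, 5),
        datetime(2011, 9, 13, 10, 20),
        datetime(2011, 9, 14, 14, 2),
        datetime(2011, 9, 15, 10, 40),
    ]
    print(likely_available_hour(history))  # -> 10

A real deployment would also weight recent behavior and the day of the week, but the principle is the same: The prediction comes from patterns in the presence history rather than from the current status alone.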

Additional resources: VDC Conference 2011 

Tips for Building and Deploying Cloud-based Apps

The cloud and cloud-based solutions are here to stay. Recent IDC research shows that worldwide revenue from public IT cloud services exceeded $16 billion in 2009 and is forecast to reach $55.5 billion in 2014. Clearly, the pace of growth is staggering.

Companies of every size and stripe are leveraging the cloud to outsource noncore competencies, improve efficiencies, cut costs and drive productivity. Central to every company’s cloud strategy is determining how best to build and deploy cloud-based applications.

Here are a few best practices to help you make the process of building and deploying applications as straightforward as possible.

1. Design your applications for performance and scalability.
Building cloud-based applications is vastly different from building on-premises ones, so you need to design your applications to take full advantage of the cloud’s elastic computing nature. The most obvious way to do this is to create stateless apps: Because no single instance holds session data of its own, any instance can serve any request, and instances can be added or removed as demand changes.

Many thought leaders believe that the stateless model facilitates much greater scalability than conventional computing and combines effectively with virtualization to achieve optimum data-center utilization.
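
As a minimal sketch of what “stateless” means in practice -- the framework choice, host name and key layout below are illustrative assumptions, not recommendations from any particular vendor -- the handler keeps no session data in the application process. A shopping cart lives in a shared external store, so any number of identical instances can sit behind a load balancer and scale out elastically.

    # Minimal sketch of a stateless web handler (hypothetical example).
    # No session data is held in process memory, so identical instances can be
    # added or removed freely; a shared Redis store holds the cart instead.
    from flask import Flask, request, jsonify
    import redis

    app = Flask(__name__)
    # The host name is a placeholder for whatever shared store you run.
    store = redis.Redis(host="session-store.example.internal", port=6379)

    @app.route("/cart/add", methods=["POST"])
    def add_to_cart():
        user_id = request.form["user"]
        item = request.form["item"]
        # Append the item in the shared store; the next request can be served
        # by a completely different application instance.
        store.rpush("cart:" + user_id, item)
        return jsonify({"cart_size": store.llen("cart:" + user_id)})

    if __name__ == "__main__":
        app.run()

Because every instance is interchangeable, adding capacity is simply a matter of starting more copies behind the load balancer.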

2. Build upon existing assets.
For companies seeking to maximize the value and efficiency of their cloud applications, the best approach is to build upon existing assets rather than start from scratch. Existing assets offer various benefits: First, they can be shared and reused, often more quickly and smoothly than new ones. Second, IT users have some degree of comfort using them. And the bottom-line justification: They are probably paid for.

An existing asset, such as a mobile sales app, can be repurposed and tweaked to create a new cloud-based app in a foreign language for a field sales force.

Software as a Service (SaaS) applications are a good choice for such sharing and reuse as they enable business users to collaborate, create and share assets quickly and easily.

3. Determine the right amount of isolation and sharing of assets.
The flip side of sharing is isolation. Some assets need to be shared by all users, while others must be restricted to certain users because of their confidential or sensitive nature. At the same time, it is desirable to support multi-tenant collaboration so that users in different groups of your company can develop and share information and assets that enhance the productivity of all tenants.
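
To make the isolation-versus-sharing decision concrete, here is a small, hypothetical access check -- the asset fields and tenant names are invented for illustration -- in which globally shared assets are visible to every tenant while restricted assets are confined to the tenants explicitly listed on them.

    # Hypothetical sketch of an isolation/sharing rule for cloud assets.
    # Field names and tenant names are illustrative, not a prescribed model.
    from dataclasses import dataclass, field

    @dataclass
    class Asset:
        name: str
        shared: bool = False                                # visible to all tenants
        allowed_tenants: set = field(default_factory=set)   # otherwise, only these

    def can_access(asset, tenant):
        """A shared asset is open to every tenant; a restricted one is open
        only to the tenants explicitly listed on it."""
        return asset.shared or tenant in asset.allowed_tenants

    price_list = Asset("price_list", shared=True)
    payroll = Asset("payroll_report", allowed_tenants={"finance"})

    print(can_access(price_list, "sales"))   # True: shared across the company
    print(can_access(payroll, "sales"))      # False: restricted to finance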

4. Don’t ignore taxonomy and governance procedures for your assets.
Categorizing and defining your assets is vital -- especially if you work in an enterprise or even a global company with hundreds or thousands of assets in different countries and dozens of languages. Besides including the most obvious assets -- such as applications, operating systems and network platforms -- you should add such intellectual assets as designs, implementation documents and even marketing information.

To make your life as easy as possible, consider detailing attributes for your assets so you can search for them effortlessly. For example, label your assets by vertical markets (such as finance and manufacturing) and level of adoption (such as mature, advanced and beginner).

You might also want to specify roles for people handling those assets. Think: creators, managers, users and those who can modify and share assets with others.
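
One lightweight way to make those attributes and roles searchable -- the attribute names and values below are hypothetical examples, not a prescribed schema -- is to tag each asset with its vertical market, adoption level and handling roles, then filter the catalog on those tags.

    # Hypothetical sketch: tagging assets with taxonomy attributes so they can
    # be found easily. Attribute names and values are examples only.
    assets = [
        {"name": "loan-origination-app", "vertical": "finance",
         "adoption": "mature",
         "roles": {"creator": "app-team", "manager": "it-ops"}},
        {"name": "shop-floor-dashboard", "vertical": "manufacturing",
         "adoption": "beginner",
         "roles": {"creator": "plant-it", "manager": "it-ops"}},
    ]

    def find_assets(catalog, **criteria):
        """Return assets whose attributes match every supplied criterion."""
        return [asset for asset in catalog
                if all(asset.get(key) == value for key, value in criteria.items())]

    # Example: every mature asset labeled for the finance vertical.
    print(find_assets(assets, vertical="finance", adoption="mature"))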

5. Allay your security worries before committing to a vendor.
Putting your data and apps in the cloud is fraught with security risks. Top-of-mind concerns for most companies are: data integrity, data location, data recovery, regulatory compliance, and privacy. The overarching concern is “Will my data be safe?” Before committing to a cloud vendor, consider getting a neutral third party to do a thorough security assessment of the vendor. Companies should also conduct their own high-level audits of a vendor’s security and ask the vendor for proof of its security claims.