Avoiding Fragmented Infrastructure in the Data Center

As more IT shops grow comfortable implementing their first cloud environments, they are recognizing the costs, benefits and operational efficiencies of the strategy. However, they are also discovering the ripple effects that cloud implementations can have on their existing virtualized infrastructures: fragmentation, and the management headaches that come with it. Arsalan Farooq, CEO and founder of Convirture, has experience dealing with IT shops that face this problem. Below, he offers insight.

Q. Some shops complain about dealing with fragmented infrastructures because of different approaches they have taken in implementing multiple clouds. What advice can you offer to fix that?

Arsalan Farooq: First, I would say don’t panic and start ripping everything out. And don’t run back to the virtualized or physical model you came from, because you have tried that and it didn’t work. The problem isn’t that your cloud model isn’t working. The problem is it’s fragmented, and the complexity of it is out of control. I recommend taking a top-down approach from the management perspective. You need to invest in management tools that allow you to manage the fragmented parts of your infrastructure and workloads from a single location. Now, this doesn’t solve the fragmentation problem completely because you are dealing with not only fragmented infrastructure, but also fragmented management and fragmented workloads. But once you can see all your workloads and infrastructures in one place and can operate them in one place, you can make more intelligent decisions about the workload fragmentation problem.

Q. Cloud management software has not kept pace with managing both physical and virtual environments. Does your solution help with that?

A.F.: Once you are in this fragmented infrastructure silo, each vendor tries to sell you its proprietary management tool, too. At the end of the day, you have one management tool that just manages your vertical cloud, another that just manages your virtualized infrastructure and so on. My advice is to divest the management risk from the platform risk. If you don’t do this, you’re asking for trouble. As data centers become multi-infrastructure, multi-cloud and multi-operational, you have to divest the risk of the new platform from the risk of your management practices. I’m not a big fan of buying a new platform that forces you to buy new infrastructures and new management tools.

Q. What is the most important element IT shops overlook in putting together their first cloud implementation?

A.F.: Typically, there is a misunderstanding of what the scope of the transformation will be. A lot of people end up with misgivings because they have been sold on the idea that a cloud implementation will completely transform the way they build their IT data center. Second (and this is more sinister), they believe the cloud can bring efficiencies and cost benefits, but those gains come at a cost. You are buying efficiency and cost benefits, but you are paying for them in complexity. That trade-off is remarkably absent from the conversation as people go into a cloud implementation. Only after they implement their cloud do they realize the architectural topology is much more complex than it was before.

Q. There is this disconnect between what IT shops already have in their virtualized data centers and what the vendors are offering as solutions to layer on top of that. Why is that?

A.F.: That is the crux of the problem we are dealing with right now. Most cloud vendors talk about how their cloud deployment works and what it brings in terms of efficiencies and cost benefits. But what that discussion leaves out is how the transformation from a cloudless data center to one that has a cloud actually works. Specifically, what are the new metrics, properties and cost benefits surrounding all that? And then, once the transformation is made, what are the attributes of a multi-infrastructure data center? Conversations about this are completely absent.

Q. But explaining something like the metrics of a new cloud implementation to an IT shop seems like such a basic part of the conversation.

A.F.: Right, but the problem is many vendors take a more tactical view of things. They are focused on selling their wares, which are cloud infrastructures. But addressing the bigger-picture ramifications is something many don’t seem to have the capacity to answer, and so they don’t talk about it. So the solution then falls either to the CIOs or other vendors who are trying to attack that problem directly.

Q. Some IT executives with virtualized data centers wonder if they even need a cloud. What do you say to those people?

A.F.: This may sound a bit flippant, but if you feel you don’t need a cloud, then you probably don’t. You have to remember what the cloud is bringing. The cloud is not a technology; it is a model. It comes down to what the CIO is looking for. If they are satisfied with the level of efficiency in their virtualized data centers, then there is no compelling reason for them to move to the cloud. However, I don’t think there are too many CIOs who, when given the prospect of incrementally improving the efficiencies of part of their operations, would not go for it.

Q. Are companies getting more confident about deploying public clouds, without having to first do a private one?

A.F.: The largest data centers still can’t find a compelling reason to move to the public cloud. Smaller shops and startups (most of which don’t have the in-house expertise or existing infrastructure) are more confident in moving to a public cloud. The bigger question is whether the larger corporate data centers will ever move their entire operations to the public cloud, as opposed to just using it in a niche role for things like backup or excess capacity.

Q. I assumed large data centers would move their entire operations to a public cloud once the relevant technologies became faster and more reliable and secure. What will it take for large data centers to gain more confidence about public clouds?

A.F.: One thing missing is a compelling cost-complexity-to-benefits ratio. My favorite example from a few months ago was when we were going to set up workloads that automatically load balance between local and public cloud environments with cloud bursting. That all sounds good, but do you know how much Amazon charges for network transfers? It costs an arm and a leg; it is ridiculously expensive. Do an analysis of what it costs to take a medium-load website and run it constantly on Amazon, then compare that to a co-lo or renting a server from your local provider, and you will be boggled. The overall economics of public clouds -- despite all the hype -- are not well-aligned with high network, bandwidth and CPU usage scenarios. Until that changes, it’s hard to find compelling arguments to do all these fancy things that everyone talks about doing with public clouds, including ourselves. We are working on cloud bursting in the labs, but we are also pretty sober about whether it’s here yet as a practical solution.
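
The kind of analysis Farooq suggests can be sketched in a few lines. In the minimal Python sketch below, every rate and traffic figure is a placeholder assumption (as are the function names), not real pricing; the point is the shape of the comparison, not the numbers.

    # Rough monthly cost comparison for a steadily loaded website: a public
    # cloud that meters instance-hours and network egress versus a co-located
    # or rented server with a mostly flat fee. All rates are illustrative.

    HOURS_PER_MONTH = 730

    def cloud_monthly_cost(instance_rate_per_hour, egress_gb, egress_rate_per_gb):
        """Instance-hours plus metered network transfer."""
        return instance_rate_per_hour * HOURS_PER_MONTH + egress_gb * egress_rate_per_gb

    def colo_monthly_cost(flat_fee, included_transfer_gb, egress_gb, overage_rate_per_gb):
        """Flat fee plus any overage beyond the included transfer allowance."""
        overage = max(0, egress_gb - included_transfer_gb)
        return flat_fee + overage * overage_rate_per_gb

    if __name__ == "__main__":
        egress = 5000  # GB transferred per month (assumed traffic profile)
        cloud = cloud_monthly_cost(0.20, egress, 0.10)        # placeholder rates
        colo = colo_monthly_cost(300.0, 10000, egress, 0.05)  # placeholder rates
        print(f"cloud: ${cloud:,.2f}/month   co-lo: ${colo:,.2f}/month")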

Photo: ©iStockphoto.com/halbergman

Why Do We Need Intelligent Desktop Virtualization?

For nearly two decades, traditional desktop management has been business as usual. But today’s IT environment is anything but usual. Powerful forces are driving rapid change, including the rise of consumerization, cloud computing applications and server virtualization. Users want to work using any device from any location, and the concept of “bring your own IT” makes it possible to readily access cloud services, regardless of IT approval. Despite many advances, such as classic virtual desktop infrastructure (VDI) and desktop virtualization, traditional desktop management is poised for change.

Intel’s vision is an evolutionary framework -- called Intelligent Desktop Virtualization, or IDV -- in which the overall system of managing user computing is made significantly more intelligent. IDV maximizes the user experience while also giving IT professionals the control they need -- all within an economically viable framework.

Three Tenets of Intelligent Desktop Virtualization
There are three key tenets that distinguish IDV from desktop virtualization: managing centrally with local execution, delivering layered images intelligently and using device-native management.

Each tenet is central to IDV, whereas in conventional desktop virtualization these concepts are usually peripheral. The three tenets also represent a progression: if IT departments embrace the first tenet, it becomes highly beneficial to proceed to the second, and once the first two are fully adopted, the third becomes essential.

By evaluating desktop virtualization solutions according to these tenets, IT can implement a desktop management infrastructure that meets the needs of both users and IT, making IDV a solution that is truly without compromise.

Tenet No. 1: Manage Centrally With Local Execution
The first tenet of IDV is essentially a division of labor that delivers the benefits of both central management and local execution. IT retains full control over operating system and application updates by managing a golden, or master, image from the data center and relies on the local compute resources of the endpoint PC to deliver a rich user experience. With the first tenet, IT can:

  • Improve manageability and security by controlling operating system and application updates
  • Provide the best possible user experience -- and better economics -- with local compute resources
  • Optimize data center resource usage

Tenet No. 2: Deliver Layered Images Intelligently
The second tenet of IDV is based on two concepts: creating layered images to allow for user customization and simplified management of the golden image, and using bidirectional synchronization with de-duplication (also known as single-instance storage) for intelligent delivery.

By dividing the traditional desktop image into layers -- instead of managing it as a single entity -- and using bidirectional synchronization, IT can gain the flexibility to:

  • Enhance central management
  • Deliver the appropriate layers transparently to user-chosen computing platforms
  • Use bidirectional synchronization and de-duplication for intelligent delivery and storage
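
Single-instance storage of this kind is conceptually simple: store each unique block of an image once, keyed by its content hash, and let every layer reference the blocks it needs. The Python sketch below is a minimal illustration of that idea under those assumptions, not any vendor's implementation.

    import hashlib

    # Minimal content-addressed block store illustrating de-duplication
    # (single-instance storage): identical blocks shared by the golden image
    # and user layers are stored only once and referenced by hash.

    BLOCK_SIZE = 4096

    class BlockStore:
        def __init__(self):
            self.blocks = {}  # hash -> block bytes; each unique block stored once

        def put_layer(self, data):
            """Split an image layer into blocks, store new blocks, return references."""
            refs = []
            for i in range(0, len(data), BLOCK_SIZE):
                block = data[i:i + BLOCK_SIZE]
                digest = hashlib.sha256(block).hexdigest()
                self.blocks.setdefault(digest, block)
                refs.append(digest)
            return refs

        def get_layer(self, refs):
            """Reassemble a layer from its block references."""
            return b"".join(self.blocks[d] for d in refs)

    if __name__ == "__main__":
        store = BlockStore()
        golden = b"OS" * 10000                  # stand-in for the master image
        user = b"OS" * 9000 + b"user settings"  # mostly identical user layer
        refs = store.put_layer(golden) + store.put_layer(user)
        print(len(refs), "block references, but only", len(store.blocks), "unique blocks stored")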

Tenet No. 3: Use Device-native Management
The third tenet of IDV is based on the assertion that both virtual and physical device management are required for a complete IDV solution. To fully manage user computing, endpoint devices require physical management. With the third tenet, IT can:

  • Supplement central management capabilities for operational excellence
  • Leverage hardware resources independent of the operating system to ensure a robust computing platform and gain unparalleled flexibility

The Role of Intelligent Clients
In addition to employing the three tenets, IT must find the right balance between the data center and desktops to create an infrastructure that meets unique organizational needs. By using intelligent clients, IT can achieve balanced computing with IDV.

Intelligent endpoints offer the processing power, security and management features that complement central management -- all without placing additional strain on the data center.

Intelligent clients offer a range of native management options, including support for multiple desktop virtualization models, as well as mobile computing, compute-intensive applications, rich media, offline work and local peripherals.

By design, intelligent client computing helps IT avoid expensive data center expansion and improves total cost of ownership.

Take the Next Steps to Full-scale IDV
As you move toward a full-scale IDV solution, remember: One size does not fit all. Most companies need more than one delivery model based on unique business requirements.

For organizations still in the planning stage:

  • Thoroughly investigate all models of desktop management.
  • Evaluate solutions according to the three tenets and ask potential vendors about their support for each.
  • Investigate options for intelligent clients to deliver the best user experience and the most effective management and security measures.

For organizations that have already implemented virtual desktop infrastructure (VDI):

  • Take the remaining steps toward a full-scale IDV solution.
  • Off-load processing to the local client (e.g., redirect multimedia rendering to intelligent clients) to further improve virtual machine density and boost VDI economics.

Photo: ©iStockphoto.com/eyetoeyePIX

Aberdeen Group Analyst Offers Tips on Protecting Virtualized Environments

There’s a lot riding on server virtualization, and the risk of disruption only increases as IT shops deploy more virtualized applications on fewer physical machines. A loss of a single box can bring down multiple applications, so organizations that hope to preserve the availability of apps need to protect the environments in which they run. Dick Csaplar, senior research analyst of storage and virtualization for Aberdeen Group, has been looking into these issues. He recently discussed steps enterprises can take to address downtime concerns.

Q: What got you interested in the topic of protecting virtualized environments?

Dick Csaplar: Last year, we found through our survey process that we passed the milestone where, now, more than 50 percent of applications are virtualized. You now have to start thinking about applications being virtualized as the rule, not the exception. With virtualization, you have greater server consolidation and density, so a single server’s worth of downtime impacts more applications than in the past.

The other thing that was of interest: I was at Digital Equipment Corp. back in the day, and the PDP-8 was kind of the first corporate-affordable minicomputer. That led to the growth of the corporate data center concept, even for midsized corporations. The concept of downtime was born at that same moment. Today, more business processes are computer-based, so downtime costs companies more than ever. Protecting against computer downtime has been, and continues to be, a major focus of IT and will be for the foreseeable future. Things happen, and you have to be prepared.

Q: Are there steps organizations should take as they go about protecting virtualized settings?

D.C.: The first thing you have to think about: It’s not one-size-fits-all. You just don’t say, “I’m going to put all applications on a fault-tolerant server to get maximum uptime.” Quite frankly, that’s expensive and unnecessary. You want to tier your applications -- which ones really require high levels of uptime and which ones, if you are down for half a day, are not going to kill the organization.

The highest-level tier is the absolutely mission-critical applications like email and database- and disaster-recovery applications. It is worth investing in a high level of uptime because when the email system goes down, for example, people stop working. But with test and development environments, even if you lose them, there is usually no data in there that is really corporate-critical. Most organizations -- about 60 percent, our research shows -- don’t bother to protect their test and dev applications.

And there’s a middle tier of apps where you’ve got to do the math: Is downtime protection worth the investment?

Secondly, you need to have an idea of the cost of downtime. That sets the right level of investment for your disaster recovery and backup strategy. If the cost of downtime is measured in hundreds of dollars, obviously you can’t justify spending tens of thousands of dollars to keep applications up. If it is measured in tens of thousands of dollars, you should invest a relatively large amount of money in your downtime protection.

The cost of downtime varies by application. Ballpark it. Get it to an order of magnitude. Get a sense of what those things cost, and that will guide you to the appropriate level of protection. You are right-sizing your solution.
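
Csaplar’s “do the math” step for that middle tier boils down to an expected-loss comparison. The short Python sketch below shows the shape of it; the function names and all figures are illustrative assumptions, and real numbers should come from your own ballpark estimates.

    # Right-size downtime protection: compare the expected annual cost of
    # downtime for an application against the annual cost of protecting it.
    # Every figure here is a placeholder -- ballpark your own to an order
    # of magnitude, as suggested above.

    def expected_downtime_cost(cost_per_hour, hours_down_per_year):
        return cost_per_hour * hours_down_per_year

    def worth_protecting(cost_per_hour, hours_down_unprotected,
                         hours_down_protected, annual_protection_cost):
        """Protection pays off if the downtime it avoids costs more than it does."""
        avoided = (expected_downtime_cost(cost_per_hour, hours_down_unprotected)
                   - expected_downtime_cost(cost_per_hour, hours_down_protected))
        return avoided > annual_protection_cost

    if __name__ == "__main__":
        # A hypothetical mid-tier app: $2,000 per hour of downtime, roughly 20 hours
        # down per year unprotected, 2 hours with clustering costing $25,000/year.
        print(worth_protecting(2000, 20, 2, 25000))  # -> True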

Q: What are the technology alternatives?

D.C.: On the software side, the hypervisors themselves have high-availability technology that is embedded. It monitors applications and, if it detects an app is not performing, it will restart the application on a new server. It’s very cheap. But you do lose the data in transit; any data in that application is gone.

Then you have software clusters. You have to pay for that, but it’s better protection in that the data gets replicated to other locations. Then there is the whole area of fault-tolerant hardware: fault-tolerant servers.
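
At its core, the hypervisor-level high availability Csaplar mentions is a health-check-and-restart loop. The toy Python watchdog below illustrates that pattern, and why anything held only in memory is lost on restart; it is a teaching sketch, not how any particular hypervisor implements HA.

    import subprocess
    import sys
    import time

    # Toy restart-on-failure watchdog: poll a workload and start a fresh copy
    # if it has died. Whatever the old process held in memory ("data in
    # transit") is simply gone after the restart.

    COMMAND = [sys.executable, "-c", "import time; time.sleep(3600)"]  # stand-in workload
    CHECK_INTERVAL_SECONDS = 5

    def watchdog():
        process = subprocess.Popen(COMMAND)
        while True:
            time.sleep(CHECK_INTERVAL_SECONDS)
            if process.poll() is not None:  # workload exited or crashed
                print("workload down, restarting")
                process = subprocess.Popen(COMMAND)

    if __name__ == "__main__":
        watchdog()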

Q: Can an enterprise reuse its existing application protection technology in a virtualized environment, or does it have to use technology geared toward virtualization?

D.C.: That depends on the level of protection you want and what you currently have. One of the best technologies for application protection is image-based backup. It takes a picture of the entire stack -- the hypervisor, the application and the data. That image can be restarted on any new server.

Image-based backup tends to work better in highly virtualized environments. That doesn’t mean that the more traditional backup and recovery tools don’t work. They can, but you have to specifically test them out.

And that would be another thing to consider: having a formal testing policy. About half of the survey respondents we’ve researched don’t have a regular testing program for backup and recovery. They back up all of this stuff, but they don’t know if it would work in an emergency. There has to be a formal testing process at least quarterly.

Q: Any other thoughts on protecting virtualized environments?

D.C.: We are talking more about business processes here than we are talking about tools. We’re talking about best practices for keeping apps up and running, and most of them have to do with good data protection processes.

A lot more needs to be done than just throwing technology at it. You have to do your homework and you really have to know your business.

Photo: ©iStockphoto.com/Kohlerphoto

Getting Physical with Virtual Servers

As IT shops deploy an increasing number of virtual servers, the challenge of managing both those servers and the physical servers alongside them grows. Here, we explore solutions with Jerry Carter, CTO of Likewise Software, who explains the role of a well-conceived server management strategy in cloud computing, how to avoid virtual server sprawl, and tools for managing physical and virtual servers:

Q. To have a successful cloud strategy, how important is it that an IT shop manages both its physical and virtual servers, and the associated storage systems, well?

Jerry Carter: It’s critical in order to scale the IT infrastructure to meet the business needs of the company. The concept of virtualizing the machine is about moving up the stack. You virtualize the machine and then you start running multiple VMs on one piece of hardware. Then you virtualize the storage, but you must isolate the management of the storage resources from the application that is using that storage. If you can’t abstract those resources from the application, then you end up managing pockets of data. When you are moving from physical environments to virtual ones, you must have a solid [data/IT/storage] management strategy in mind; otherwise, your problem gets worse and management costs rise. You might end up cutting power consumption and gaining space through consolidation, but you might also increase the number of images you have to manage.

Q. How big of a problem is managing just the storage aspect of this?

J.C.: At VMworld in August, a speaker asked, ‘When you have performance problems with your VM, how many would say that over 75 percent of the time it’s a problem involving storage?’ A huge number of people raised their hands. If you just ignore the whole storage capacity and provisioning problem, then how can you manage policy to ensure you are securely protecting the data that represents the business side of the company? You must be able to apply consistent policy across the entire system; otherwise, you are managing independent pockets of storage.

Q. In thinking about managing physical and virtual servers, should IT shops start with private clouds before attempting public clouds?

J.C.: The reason people start with private clouds has to do with their level of confidence. They have to see a solution that provides a level of security for information going outside their network, and not every critical application knows how to talk to the cloud. So a private cloud strategy gives you the ability to gateway between the protocols those applications actually understand and the back-end cloud storage APIs. For instance, look at the announcement Microsoft made involving the improvements to its Server Message Block (SMB) 2.2 for Windows Server 8. Historically, server-based applications like SQL Server and IIS have required block storage mounted through iSCSI for the local apps to work. What Microsoft has done is position SMB in the cloud as a competitor to block storage for those server-based applications.

Q. Is it important that Windows Server and Microsoft’s cloud strategy succeed in broadening the appeal of managing physical and virtual servers?

J.C.: If you look at most of the application workloads on virtualized infrastructures, something like 90 percent of them are running on VMware servers carrying Microsoft workloads. VMware is trying to virtualize the operating system and the hypervisor. Microsoft’s position is to enable those application workloads because those are the things really driving business value. I think it is very important that Windows Server 8 succeeds, but the more critical aspect is its support for SMB 2.2.

Q. What is the state of software tools for managing physical and virtual servers right now?

J.C.: I think the maturity and growth of virtual machine management has been tremendous. When you look at things like vSphere and vCenter from VMware, which allow you to manage the hypervisor and the individual VMs, deploy them rapidly and then spin them up or down on an as-needed basis, it is impressive. But the problem that remains is in treating VMs as independent entities. What business users really care about is what is inside the VM, but the hypervisor doesn’t really deploy policy to the guest. There are some clever people doing interesting things, but generally it’s still a broad set of technologies for dealing with heterogeneous networks. I don’t think [management tools] have advanced as fast as the VM management infrastructure has.

Q. With so many more virtual servers being implemented than physical servers, how do you manage virtual server sprawl?

J.C.: First, people have to realize that it is a problem. Not only do most people have large numbers of VMs deployed that are no longer in use, but they don’t have a handle on what is actually inside of those VMs. There could be data that was created inside a particular VM for development purposes. But you have all this other data used to build a critical document just sitting out on a VM. I think the security implications are huge. If you are using individual VMs for storage, then you can have huge amounts of business-critical, sensitive data sitting on these underutilized VMs. If you aren’t doing something to manage the security, then you are vulnerable to data leakage. Some managers think, ‘All my users are storing data on the central file server, so this isn’t a problem.’ But inadvertently, users copy data locally for temporary work. The best way to address it is to have a plan in place prior to allocating new VMs, where you can apply a consistent authentication and security policy. That way, users know what they are allowed to do within a particular device.
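
A “plan in place prior to allocating new VMs” can be as simple as a provisioning gate that refuses to create a VM without an owner, an expiry date and an approved security policy. The Python sketch below is a hypothetical illustration of that idea; the field names, policy names and validation rules are assumptions, not any product’s API.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical provisioning gate: every VM request must carry an owner,
    # an expiry date and a named security policy before anything is created,
    # so sprawl stays traceable and a consistent policy applies from day one.

    APPROVED_POLICIES = {"corp-baseline", "pci-restricted", "dev-sandbox"}

    @dataclass
    class VMRequest:
        name: str
        owner: str
        expires: date
        security_policy: str

    def validate(request: VMRequest) -> None:
        if not request.owner:
            raise ValueError("every VM needs a named owner")
        if request.expires <= date.today():
            raise ValueError("expiry must be in the future (forces review of idle VMs)")
        if request.security_policy not in APPROVED_POLICIES:
            raise ValueError(f"unknown security policy: {request.security_policy}")

    def provision(request: VMRequest) -> None:
        validate(request)
        # Hand off to the real provisioning tool here (vCenter, a cloud API, etc.).
        print(f"provisioning {request.name} for {request.owner} under {request.security_policy}")

    if __name__ == "__main__":
        provision(VMRequest("build-agent-07", "jcarter", date(2031, 1, 1), "dev-sandbox"))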

Q. What adjustments must IT make for things like data protection as they rapidly provision more physical and virtual servers?

J.C.: Data protection can’t exist without data centralization. You can’t manage individual pockets of storage or individual VMs themselves. Another issue IT has is users don’t start removing unnecessary files from disks until the disk starts to get full. But with storage capacities ever increasing, disks never fill up, so there is never any reason to go back in and clean them up. So you can end up with the same problem caused by this virtual machine sprawl.

Q. And does unstructured data rear its ugly head in this scenario too?

J.C.: Yes. I think the biggest problem is the amount of unstructured data that exists out there; people don’t really have a handle on that. About 75 percent of data out there is unstructured. The question I always pose to users is: Given all of the data you have in your network, what is the one thing you most want to know about it? The answer is usually, ‘Well, I’d like to know I can correlate the locality of my data to prove to someone I am meeting their SLAs,’ or, ‘I want to know when people are accessing data outside their normal usage pattern.’ They need to understand data about their existing data and what access people actually have.
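
Spotting access “outside the normal usage pattern” starts with a per-user baseline. The Python sketch below flags accesses outside a user’s usual hours; it is deliberately simplistic, and the log format is an assumption rather than any product’s audit schema.

    from collections import defaultdict

    # Flag file accesses that fall outside each user's usual working hours.
    # The (user, hour_of_day, path) tuples are an assumed log format; a real
    # deployment would read audit events from the file servers themselves.

    def build_baseline(history):
        """Record the hours in which each user has previously accessed data."""
        usual_hours = defaultdict(set)
        for user, hour, _path in history:
            usual_hours[user].add(hour)
        return usual_hours

    def unusual_accesses(events, usual_hours):
        return [(user, hour, path) for user, hour, path in events
                if hour not in usual_hours.get(user, set())]

    if __name__ == "__main__":
        history = [("alice", 9, "/finance/q1.xlsx"), ("alice", 14, "/finance/q2.xlsx")]
        events = [("alice", 3, "/finance/payroll.xlsx")]  # a 3 a.m. access stands out
        print(unusual_accesses(events, build_baseline(history)))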

Q: What is a common mistake IT people make as they scale their physical and virtual server infrastructure up and out?

J.C.: Just as it’s impossible to manage individual pockets of applications, it is impossible to manage individual containers of storage within a single, large storage infrastructure. You must have something that addresses the overall problem. This is what people fail to understand when they move from a smaller infrastructure to a massive one. With this machine and storage sprawl, any cracks in the existing management techniques, software or policies become chasms as the size of the problem increases from something small to very large.

Q. Is IT looking more seriously at open-source solutions for managing physical and virtual servers?

J.C.: Open source will continue to play a predominant role in a lot of IT shops. But one of the real challenges is the amount of investment made in developing expertise on individual solutions. Another is finding people with expertise when you have turnover. There is a lot of great open-source technology available, but it is not always in product form. People become experts in the technology, but it can be risky to rely on technology expertise rather than product expertise, which can be replicated. The question is: Can it address the whole problem, or is fixing individual pain points a better way to go? I think the jury is still out on that.

Photo: ©iStockphoto.com/herpens

The Long-term Commitment of Embedded Wireless

Most businesspeople replace their cell phones roughly every two years. At the other extreme are machine-to-machine (M2M) wireless devices, which are often deployed for the better part of a decade and sometimes even longer. That’s because it’s an expensive hassle for an enterprise to replace tens of thousands of M2M devices affixed to trucks, utility meters, alarm systems, point-of-sale terminals or vending machines, to name just a few places where today’s more than 62 million M2M devices reside.

Those long lifecycles highlight why it’s important for enterprises to take a long view when developing an M2M strategy. For example, if your M2M device has to remain in service for the next 10 years, it could be cheaper to pay a premium for LTE hardware now rather than go with less expensive gear that runs on GPRS or CDMA 1X, the networks of which might be phased out before the end of this decade.
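
That trade-off can be made concrete with a rough total-cost comparison. In the Python sketch below, the module prices echo the ballpark figures Ueland cites later in this interview, while the fleet size, installation cost and network-sunset assumption are hypothetical; the point is that a mid-life truck roll to swap out hardware on a retired network can dwarf the up-front premium.

    # Lifetime cost of an M2M fleet: pay more up front for modules on a
    # long-lived network, or buy cheaper modules and risk a forced swap when
    # the older network is phased out. All figures are rough placeholders.

    def fleet_cost(devices, module_cost, install_cost, forced_swap,
                   swap_module_cost=0.0, swap_install_cost=0.0):
        cost = devices * (module_cost + install_cost)
        if forced_swap:
            cost += devices * (swap_module_cost + swap_install_cost)
        return cost

    if __name__ == "__main__":
        DEVICES = 20000
        # Option A: LTE now -- pricier module, assumed to outlive a 10-year deployment.
        lte = fleet_cost(DEVICES, module_cost=110, install_cost=150, forced_swap=False)
        # Option B: 2G now -- cheap module, but assume a network sunset forces a swap.
        legacy = fleet_cost(DEVICES, module_cost=20, install_cost=150, forced_swap=True,
                            swap_module_cost=110, swap_install_cost=150)
        print(f"LTE up front: ${lte:,.0f}   2G plus mid-life swap: ${legacy:,.0f}")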

Confused? Mike Ueland, North American vice president and general manager at M2M vendor Telit Wireless Solutions, recently spoke with Intelligence in Software about how CIOs, IT managers and developers can sleep at night instead of worrying about obsolescence and other pitfalls.

Q: M2M isn’t a new technology. For example, many utilities have been using it for more than a decade to track usage instead of sending out armies of meter readers every month. Why aren’t more enterprises following suit?

Mike Ueland: It’s very similar to what it was like before we had the Internet, Ethernet and things like that, where you had all of these disconnected devices. There’s an incredible opportunity, depending on what the business and application are, to connect those devices and bring more information back, as well as being able to provide additional value-added services.

There have been some significant improvements in terms of technology and the cost to deploy an M2M solution. All of the M2M solutions have gotten much more mature. There are so many more people in the ecosystem to support you.

But we haven’t seen the uptake within the enterprise community. Part of that is because we’ve been in such a recessionary period over the past couple of years. No one really wants to start new projects.

Q: What are some pitfalls to avoid? For example, wireless carriers have to certify M2M devices before they’ll allow them on their network. How can enterprises avoid that kind of red tape and technical nitty-gritty?

M.U.: The mistake that we see a lot is that people try to bite off too much. They’ll say: “I need this custom device that needs to do this. Therefore I need to build a custom application.” They overcomplicate the solution.

There are so many off-the-shelf devices out there that can be quickly modified for your application. One of the benefits of that is you reduce technical risk and time to market because often those devices will be pre-certified on a carrier’s network. There are also a number of M2M application providers out there -- like Axeda, ILS and others -- that have an M2M architecture. That allows people to very quickly build their own application based on this platform.

Q: Price is another factor that’s kept a lot of enterprises out of M2M. How has that changed? For example, over the past few years, many wireless carriers have developed rate plans that make M2M more affordable.

M.U.: Across the board -- device costs, module costs, air-time costs -- all of these costs have come down probably by half in the past few years, if not more. So any business cases that were done two or three years ago are outdated.

In addition, there have been great technological improvements. For instance, the device sizes have continued to shrink. They use less power. So it opens it up for a whole range of applications that might not have been possible in the past.

Q: About 10 years ago, a lot of police departments and enterprises were scrambling to replace their M2M equipment because carriers were shutting down their CDPD networks. Today, those organizations have to make a similar choice: Do I deploy an M2M system that uses 2G or 3G and hope that those networks are still in service years from now? Or should I go ahead and use 4G now even though the hardware is expensive, coverage is limited and the bandwidth is far more than what I need?

M.U.: It depends on the application. For instance, AT&T is not encouraging new 2G designs. They’ve deployed a 3G network, and they’re starting to deploy a 4G network. They’d really like people to move in that direction. Verizon and Sprint have their equivalent version of 2G: CDMA 1X. Both Verizon and Sprint have publicly declared that those networks will be available through 2021. At the end of each year, they’re going to reevaluate that.

So depending on the planned lifespan of your application and where you plan to deploy it -- North America or outside -- 2G networks will have a varying degree of longevity. Having said that, outside of Verizon in the U.S. there are not many significant 4G deployments. These are early days for 4G.

As module providers, we rely on lower-cost handset chipset pricing to drive M2M volumes. The cost of an LTE module is over $100 right now. Compare that to 2G, which is under $20 on average. It’s a big gap.