Avoiding Fragmented Infrastructure in the Data Center

As more IT shops grow comfortable with implementing their first cloud environments, they are recognizing the costs, benefits and operational efficiencies of the strategy. However, they are also discovering the ripple effects that cloud implementations can have on their existing virtualized infrastructures: fragmented infrastructure, a management nightmare. Arsalan Farooq, CEO and founder of Convirture, has experience working with IT shops that face this problem. Below, he offers his insight.

Q. Some shops complain about dealing with fragmented infrastructures because of different approaches they have taken in implementing multiple clouds. What advice can you offer to fix that?

Arsalan Farooq: First, I would say don’t panic and start ripping everything out. And don’t run back to the virtualized or physical model you came from, because you have tried that and it didn’t work. The problem isn’t that your cloud model isn’t working. The problem is it’s fragmented, and the complexity of it is out of control. I recommend taking a top-down approach from the management perspective. You need to invest in management tools that allow you to manage the fragmented parts of your infrastructure and workloads from a single location. Now, this doesn’t solve the fragmentation problem completely because you are dealing with not only fragmented infrastructure, but also fragmented management and fragmented workloads. But once you can see all your workloads and infrastructures in one place and can operate them in one place, you can make more intelligent decisions about the workload fragmentation problem.
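
To make the "see everything in one place" idea concrete, here is a minimal sketch of a unified inventory layer that normalizes workloads from each silo behind a common interface. The adapters, field names and sample data are hypothetical illustrations, not Convirture's product or any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Iterable, List, Protocol

@dataclass
class Workload:
    """Normalized view of a VM/instance, regardless of where it runs."""
    name: str
    platform: str       # e.g., "kvm-cluster", "public-cloud"
    vcpus: int
    memory_gb: float
    state: str          # "running", "stopped", ...

class PlatformAdapter(Protocol):
    """Each fragmented silo gets one adapter that speaks its native API."""
    def list_workloads(self) -> Iterable[Workload]: ...

class StaticAdapter:
    """Stand-in adapter with canned data; a real one would call the platform's API."""
    def __init__(self, workloads: List[Workload]):
        self._workloads = workloads

    def list_workloads(self) -> Iterable[Workload]:
        return list(self._workloads)

class UnifiedInventory:
    """Single pane of glass: aggregate every silo into one view."""
    def __init__(self, adapters: List[PlatformAdapter]):
        self.adapters = adapters

    def all_workloads(self) -> List[Workload]:
        return [wl for a in self.adapters for wl in a.list_workloads()]

    def running_by_platform(self) -> dict:
        counts: dict = {}
        for wl in self.all_workloads():
            if wl.state == "running":
                counts[wl.platform] = counts.get(wl.platform, 0) + 1
        return counts

inventory = UnifiedInventory([
    StaticAdapter([Workload("app-db", "kvm-cluster", 4, 16, "running")]),
    StaticAdapter([Workload("web-01", "public-cloud", 2, 4, "running")]),
])
print(inventory.running_by_platform())   # {'kvm-cluster': 1, 'public-cloud': 1}
```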

Q. Cloud management software has not kept pace with the need to manage both physical and virtual environments. Does your approach help with that?

A.F.: Once you are in this fragmented infrastructure silo, the vendor tries to sell you its proprietary management tool, too. At the end of the day, you have one management tool that manages only your vertical cloud, another that manages only your virtualized infrastructure, and so on. My advice is to divest the management risk from the platform risk. If you don’t do this, you’re asking for trouble. As data centers become multi-infrastructure, multi-cloud and multi-operational, you have to divest the risk of the new platform from the risk of your management practices. I’m not a big fan of buying a new platform that forces you to buy new infrastructure and new management tools.

Q. What is the most important element IT shops overlook in putting together their first cloud implementation?

A.F.: Typically, there is a misunderstanding of what the scope of the transformation will be. A lot of people end up with misgivings because they have been sold on the idea that a cloud implementation will completely transform the way they build their IT data center. Second (and this is more sinister), they believe the cloud can bring efficiencies and cost benefits, but those gains come at a cost of their own. You are buying efficiency and cost benefits, but you are paying a complexity price for them. That trade-off is remarkably absent from the conversation as people go into a cloud implementation. Only after they implement their cloud do they realize the architectural topology is much more complex than it was before.

Q. There is this disconnect between what IT shops already have in their virtualized data centers and what the vendors are offering as solutions to layer on top of that. Why is that?

A.F.: That is the crux of the problem we are dealing with right now. Most cloud vendors talk about how their cloud deployment works and what it brings in terms of efficiencies and cost benefits. But what that discussion leaves out is how the transformation from a cloudless data center to one that has a cloud actually works. Specifically, what are the new metrics, properties and cost benefits surrounding all that? And then, once the transformation is made, what are the attributes of a multi-infrastructure data center? Conversations about this are completely absent.

Q. But explaining something like the metrics of a new cloud implementation to an IT shop seems like such a basic part of the conversation.

A.F.: Right, but the problem is many vendors take a more tactical view of things. They are focused on selling their wares, which are cloud infrastructures. But addressing the bigger-picture ramifications is something many don’t seem to have the capacity to answer, and so they don’t talk about it. So the solution then falls either to the CIOs or other vendors who are trying to attack that problem directly.

Q. Some IT executives with virtualized data centers wonder if they even need a cloud. What do you say to those people?

A.F.: This may sound a bit flippant, but if you feel you don’t need a cloud, then you probably don’t. You have to remember what the cloud is bringing. The cloud is not a technology; it is a model. It comes down to what the CIO is looking for. If they are satisfied with the level of efficiency in their virtualized data centers, then there is no compelling reason for them to move to the cloud. However, I don’t think there are too many CIOs who, when given the prospect of incrementally improving the efficiencies of part of their operations, would not go for it.

Q. Are companies getting more confident about deploying public clouds, without having to first do a private one?

A.F.: The largest data centers still can’t find a compelling reason to move to the public cloud. Smaller shops and startups, most of which don’t have the expertise or the infrastructure, are more confident about moving to a public cloud. The bigger question is whether the larger corporate data centers will ever move their entire operations to the public cloud, as opposed to just using it in a niche role for things like backup or excess capacity.

Q. I assumed large data centers would move their entire operations to a public cloud once the relevant technologies became faster and more reliable and secure. What will it take for large data centers to gain more confidence about public clouds?

A.F.: One thing missing is a compelling cost-complexity-to-benefits ratio. My favorite example from a few months ago was when we were looking at workloads that automatically load-balance between local infrastructure and the public cloud through cloud bursting. That all sounds good, but do you know what Amazon charges for network transfers? It’s an arm and a leg. It is ridiculously expensive. Do an analysis of what it costs to take a medium-load website and run it constantly on Amazon, then compare that to a co-lo or renting a server from your local provider, and your mind will be boggled. The overall economics of public clouds -- despite all the hype -- are not well-aligned for scenarios with high network, bandwidth and CPU usage. Until that changes, it’s hard to find compelling arguments to do all the fancy things that everyone talks about doing with public clouds, ourselves included. We have cloud bursting in the works in our labs, but we are also pretty sober about whether it’s here yet as a practical solution.
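
The back-of-the-envelope comparison Farooq describes is easy to script. The sketch below uses placeholder rates, not actual Amazon or co-lo pricing; substitute your provider's current numbers.

```python
# Rough monthly cost comparison for a steadily loaded website:
# public cloud (instance + per-GB egress) vs. a flat-rate co-lo/rented server.
# All rates are illustrative assumptions -- substitute your provider's pricing.

def public_cloud_monthly(instance_hourly, egress_gb, egress_per_gb, hours=730):
    return instance_hourly * hours + egress_gb * egress_per_gb

def colo_monthly(flat_fee, bandwidth_overage_gb=0.0, overage_per_gb=0.0):
    return flat_fee + bandwidth_overage_gb * overage_per_gb

cloud = public_cloud_monthly(
    instance_hourly=0.40,   # assumed on-demand instance rate
    egress_gb=5_000,        # a medium-load site pushing ~5 TB/month
    egress_per_gb=0.12,     # assumed network-transfer rate
)
colo = colo_monthly(flat_fee=400.0)  # assumed flat monthly rate, bandwidth included

print(f"Public cloud: ${cloud:,.0f}/month vs. co-lo: ${colo:,.0f}/month")
```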

Photo: ©iStockphoto.com/halbergman

Where Do Ultrabook Devices Fit In?

"Sexy" might not be the first adjective that comes to mind when thinking about enterprise-grade notebooks, but that's how some reviews are describing the first generation of Ultrabook devices. The initial models -- from vendors such as Acer, ASUS, HP, Lenovo, LG and Toshiba -- shipped in late 2011, and in 2012 Intel believes there will be 75 Ultrabook devices announced or available.

The Ultrabook design is less about a checklist of must-meet specs -- which Intel says are under NDA with vendors -- and more about delivering certain types of user experiences. For example, all Ultrabook devices must be able to wake up in less than seven seconds so users always have immediate access to both content stored on their Ultrabook and Web-based data. Those abilities come, respectively, via Intel’s Rapid Start and Smart Connect technologies. [Disclosure: Intel is the sponsor of this content.]

“It’s extremely convenient,” says Brian Pitstick, executive director of laptop marketing for consumers and SMBs at Dell, whose initial Ultrabook is the XPS 13. “Smart Connect allows the device to periodically wake up while it’s asleep and refresh the content. So as you open it up, within seconds, you have updated content.”

Those abilities could help make Ultrabook devices attractive to enterprises that are using or considering tablets, whose always-on design helps boost productivity.

“One thing we’ve heard from the users who have done that is that one of the major purchase drivers is lightweight, easy to take with me, highly convenient in terms of instant on,” Pitstick says. “We believe we’re delivering on that with this device. At the same time, customers say they want to stay productive, and in a lot of cases, productivity requires a keyboard, [Microsoft] Office compatibility and the right performance. That’s what makes it a different proposition than a tablet.”

Bring Your Own Ultrabook

If Ultrabook devices are known for anything so far, it’s their thin, light designs. The Toshiba Portégé Z830, for example, weighs less than 2.5 pounds and is 0.63 inch thick. If that svelte figure makes Ultrabook devices popular with consumers, that’s another way they could wind up in the enterprise.

“The Ultrabook is a poster child BYOB (bring your own box) PC,” says Rob Enderle, principal analyst at Enderle Group.

Some vendors are designing their Ultrabook devices to support that kind of scenario.

“We do things like add a TPM chip so it has data-encryption security capabilities,” Pitstick says. “We have custom-configuration services [so] IT administrators can get custom images, BIOS settings, asset tags, things like that. Bringing any kind of device into an enterprise environment can cause a lot of headaches for IT. So with this class of device, we’ve thought about capabilities that would ease those concerns.”

When it’s the enterprise buying the Ultrabook, it’s important to scrutinize the specs and try it out first. That’s standard advice for any notebook, but it’s particularly valuable for Ultrabook devices because their svelte designs force vendors to get creative in areas such as durability and battery life.

“Think through what the minimums are in terms of a feature set you’ll allow into the enterprise and make sure that spec is communicated well,” Enderle says. “Some Ultrabook [devices] have brighter displays and may work better outdoors, suggesting that when that is a requirement, even the screen performance (measured in nits) should be given as a selection criteria.

“This really is a class of product that varies a great deal vendor to vendor. The buyer should try a variety of offerings before making their decision to determine which feature set, feel and appearance works best for them.”   

New Market Opportunities?

If their thin, light designs encourage consumers and business users to carry an Ultrabook in more places, they could create new opportunities to make or save money. For example, a growing number of vendors offer cloud-based video conferencing services that support a wide variety of endpoint types, from high-end telepresence rooms down to PCs, smart phones and tablets. For some users, participating in a video conference with a 10-inch tablet or a 3.5-inch smart phone might feel cramped -- to the point that they avoid using those services, undermining productivity.

With screen sizes between 11 inches and 14 inches, Ultrabook devices could be big enough to provide a good video conferencing user experience in hotel rooms, home offices and airport lounges. And as full-fledged PCs, Ultrabook devices also would enable collaboration such as file sharing, something that’s difficult on a tablet or smart phone.

“You want to be able to share and create while you’re talking,” Pitstick says. “Having the processing capability, the PC compatibility and keyboard becomes pretty important.”

The same benefits also could enable enterprises to offer a wider range of services aimed at mobile consumers, particularly those who don’t want to carry a heavy notebook or struggle to make do with a tablet or smartphone.

“What we often hear on the customer side is most people who buy laptops leave them in the home,” Pitstick says. “When it’s easier to take outside the house, I think you’ll start to see more people take it outside the house. Maybe there’s a whole new segment of devices purchased as a result.”

Why Do We Need Intelligent Desktop Virtualization?

For nearly two decades, traditional desktop management has been business as usual. But today’s IT environment is anything but usual. Powerful forces are driving rapid change, including the rise of consumerization, cloud computing applications and server virtualization. Users want to work using any device from any location, and the concept of “bring your own IT” makes it possible to readily access cloud services, regardless of IT approval. Despite many advances, such as classic virtual desktop infrastructure (VDI) and desktop virtualization, traditional desktop management is poised for change.

Intel’s vision is an evolutionary framework -- called Intelligent Desktop Virtualization, or IDV -- in which the overall system of managing user computing is made significantly more intelligent. IDV maximizes the user experience while also giving IT professionals the control they need -- all within an economically viable framework.

Three Tenets of Intelligent Desktop Virtualization
There are three key tenets that distinguish IDV from desktop virtualization: managing centrally with local execution, delivering layered images intelligently and using device-native management.

Each tenet is central to IDV, whereas these concepts are usually peripheral in conventional desktop virtualization. The three tenets also represent a progression: If IT departments embrace the first tenet, it will be critically beneficial for them to proceed to the second. If the first two tenets are fully adopted, the third will be considered essential.

By evaluating desktop virtualization solutions according to these tenets, IT can implement a desktop management infrastructure that meets the needs of both users and IT, making IDV a solution that is truly without compromise.

Tenet No. 1: Manage Centrally With Local Execution
The first tenet of IDV is essentially a division of labor that delivers the benefits of both central management and local execution. IT retains full control over operating system and application updates by managing a golden, or master, image from the data center and relies on the local compute resources of the endpoint PC to deliver a rich user experience. With the first tenet, IT can:

  • Improve manageability and security by controlling operating system and application updates
  • Provide the best possible user experience -- and better economics -- with local compute resources
  • Optimize data center resource usage

Tenet No. 2: Deliver Layered Images Intelligently
The second tenet of IDV is based on two concepts: creating layered images to allow for user customization and simplified management of the golden image, and using bidirectional synchronization with de-duplication (also known as single-instance storage) for intelligent delivery.

By dividing the traditional desktop image into layers -- instead of managing it as a single entity -- and using bidirectional synchronization, IT can gain the flexibility to:

  • Enhance central management
  • Deliver the appropriate layers transparently to user-chosen computing platforms
  • Use bidirectional synchronization and de-duplication for intelligent delivery and storage
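
To make the layering and de-duplication ideas concrete, here is a minimal conceptual sketch (not Intel's or any vendor's implementation): file contents are stored once by hash, so identical blocks shared by the golden layer and a user's layers occupy storage only once, and synchronization needs to move only the entries whose digests differ.

```python
import hashlib
from typing import Dict

# Each "layer" (golden OS image, apps, user data) is modeled as a mapping of
# file path -> chunk digest. A content-addressed store holds each unique chunk
# exactly once, which is the essence of single-instance storage.

class SingleInstanceStore:
    def __init__(self):
        self.chunks: Dict[str, bytes] = {}   # sha256 -> content

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(digest, data)  # store once, reuse thereafter
        return digest

class Layer:
    """One slice of the desktop image: golden OS, apps, or user profile."""
    def __init__(self, name: str):
        self.name = name
        self.manifest: Dict[str, str] = {}    # path -> chunk digest

    def add_file(self, path: str, data: bytes, store: SingleInstanceStore):
        self.manifest[path] = store.put(data)

def changed_paths(local: Layer, central: Layer) -> set:
    """Bidirectional sync only needs the entries whose digests differ."""
    paths = set(local.manifest) | set(central.manifest)
    return {p for p in paths
            if local.manifest.get(p) != central.manifest.get(p)}

store = SingleInstanceStore()
golden, laptop = Layer("golden"), Layer("laptop")
for layer in (golden, laptop):
    layer.add_file("C:/Windows/system.dll", b"identical OS bits", store)
laptop.add_file("C:/Users/alice/notes.txt", b"user data", store)
print(len(store.chunks), changed_paths(laptop, golden))  # 2 {'C:/Users/alice/notes.txt'}
```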

Tenet No. 3: Use Device-native Management
The third tenet of IDV is based on the assertion that both virtual and physical device management are required for a complete IDV solution. To fully manage user computing, endpoint devices require physical management. With the third tenet, IT can:

  • Supplement central management capabilities for operational excellence
  • Leverage hardware resources independent of the operating system to ensure a robust computing platform and gain unparalleled flexibility

The Role of Intelligent Clients
In addition to employing the three tenets, IT must find the right balance between the data center and desktops to create an infrastructure that meets unique organizational needs. By using intelligent clients, IT can achieve balanced computing with IDV.

Intelligent endpoints offer the processing power, security and management features that complement central management -- all without placing additional strain on the data center.

Intelligent clients offer a range of native management options, including support for multiple desktop virtualization models, as well as mobile computing, compute-intensive applications, rich media, offline work and local peripherals.

By design, intelligent client computing helps IT avoid expensive data center expansion and improves total cost of ownership.

Take the Next Steps to Full-scale IDV
As you move toward a full-scale IDV solution, remember: One size does not fit all. Most companies need more than one delivery model based on unique business requirements.

For organizations still in the planning stage:

  • Thoroughly investigate all models of desktop management.
  • Evaluate solutions according to the three tenets and ask potential vendors about their support for each.
  • Investigate options for intelligent clients to deliver the best user experience and the most effective management and security measures.

For organizations that have already implemented virtual desktop infrastructure (VDI):

  • Take the remaining steps toward a full-scale IDV solution.
  • Off-load processing to the local client (e.g., redirect multimedia rendering to intelligent clients) to further improve virtual machine density and boost VDI economics.

Get more on desktop virtualization from our sponsor.

Photo: ©iStockphoto.com/eyetoeyePIX

Aberdeen Group Analyst Offers Tips on Protecting Virtualized Environments

There’s a lot riding on server virtualization, and the risk of disruption only increases as IT shops deploy more virtualized applications on fewer physical machines. A loss of a single box can bring down multiple applications, so organizations that hope to preserve the availability of apps need to protect the environments in which they run. Dick Csaplar, senior research analyst of storage and virtualization for Aberdeen Group, has been looking into these issues. He recently discussed steps enterprises can take to address downtime concerns.

Q: What got you interested in the topic of protecting virtualized environments?

Dick Csaplar: Last year, we found through our survey process that we passed the milestone where, now, more than 50 percent of applications are virtualized. You now have to start thinking about applications being virtualized as the rule, not the exception. With virtualization, you have greater server consolidation and density, so a single server’s worth of downtime impacts more applications than in the past.

The other thing that was of interest: I was at Digital Equipment Corp. back in the day, and the PDP 8 was kind of the first corporate-affordable minicomputer. That led to the growth of the corporate data center concept for even midsized corporations. The concept of downtime was co-birthed at that moment. Today, more business processes are computer-based, so downtime costs companies more than ever. Protecting against computer downtime has been, and continues to be, a major focus of IT and will be for the foreseeable future. Things happen, and you have to be prepared.

Q: Are there steps organizations should take as they go about protecting virtualized settings?

D.C.: The first thing you have to think about: It’s not one-size-fits-all. You just don’t say, “I’m going to put all applications on a fault-tolerant server to get maximum uptime.” Quite frankly, it’s expensive and unnecessary. You want to tier your applications -- which ones really require high levels of uptime and which ones, if they are down for half a day, are not going to kill the organization.

The highest-level tier is the absolutely mission-critical applications like email and database- and disaster-recovery applications. It is worth investing in a high level of uptime because when the email system goes down, for example, people stop working. But with test and development environments, even if you lose them, there is no data in there that is really corporate-critical. Most organizations -- about 60 percent, our research shows -- don’t bother to protect their test and dev applications.

And there’s a middle tier of apps where you’ve got to do the math: Is downtime protection worth the investment?

Secondly, you need to have an idea of the cost of downtime. That sets the right level of investment for your disaster recovery and backup strategy. If the cost of downtime is measured in hundreds of dollars, obviously you can’t justify spending tens of thousands of dollars to keep applications up. If it is measured in tens of thousands of dollars, you should invest a relatively large amount of money in your downtime protection.

The cost of downtime varies by application. Ballpark it. Get it to an order of magnitude. Get a sense of what those things cost, and that will guide you to the appropriate level of protection. You are right-sizing your solution.
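
Csaplar's "do the math" advice can be captured in a few lines. The hourly costs, outage estimates and tier thresholds below are illustrative assumptions, not Aberdeen figures; plug in your own ballpark numbers per application.

```python
# Ballpark the annual downtime exposure per application and map it to a
# protection tier. All figures here are illustrative assumptions.

apps = {
    "email":        {"cost_per_hour": 20_000, "expected_hours_down_per_year": 4},
    "erp_database": {"cost_per_hour": 50_000, "expected_hours_down_per_year": 2},
    "test_and_dev": {"cost_per_hour": 200,    "expected_hours_down_per_year": 24},
}

def protection_tier(annual_exposure: float) -> str:
    if annual_exposure >= 50_000:
        return "mission-critical: fault-tolerant hardware / clustering"
    if annual_exposure >= 5_000:
        return "middle tier: weigh clustering vs. hypervisor HA"
    return "low tier: basic backups may be enough"

for name, a in apps.items():
    exposure = a["cost_per_hour"] * a["expected_hours_down_per_year"]
    print(f"{name}: ~${exposure:,.0f}/year exposure -> {protection_tier(exposure)}")
```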

Q: What are the technology alternatives?

D.C.: On the software side, the hypervisors themselves have high-availability technology that is embedded. It monitors applications and, if it detects an app is not performing, it will restart the application on a new server. It’s very cheap. But you do lose the data in transit; any data in that application is gone.

Then you have software clusters. You have to pay for that, but it’s better protection in that the data gets replicated to other locations. Then there is the whole area of fault-tolerant hardware: fault-tolerant servers.

Q: Can enterprises reuse their existing application protection technology in a virtualized environment, or do they have to use technology geared toward virtualization?

D.C.: That depends on the level of protection you want and what you currently have. One of the best technologies for application protection is image-based backup. It takes a picture of the entire stack -- the hypervisor, the application and the data. That image can be restarted on any new server.

Image-based backup tends to work better in highly virtualized environments. That doesn’t mean that the more traditional backup and recovery tools don’t work. They can, but you have to specifically test them out.
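
As a rough illustration of the image-based idea on a libvirt/KVM host, the sketch below saves a virtual machine's definition and copies its file-backed disks so the stack can be re-registered elsewhere. It assumes virsh is on the PATH and that the guest has been shut down or snapshotted first; commercial image-based backup tools quiesce the VM for a crash-consistent image, which this sketch does not.

```python
import shutil
import subprocess
import xml.etree.ElementTree as ET
from pathlib import Path

# Minimal image-level backup sketch for a libvirt/KVM host: capture the domain
# definition plus its file-backed disk images so the whole stack can be
# restarted on another server. Not crash-consistent on its own.

def backup_domain(domain: str, dest_dir: str) -> None:
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)

    # 1. Save the VM's configuration (the "recipe" needed to redefine it elsewhere).
    xml_text = subprocess.run(
        ["virsh", "dumpxml", domain],
        check=True, capture_output=True, text=True,
    ).stdout
    (dest / f"{domain}.xml").write_text(xml_text)

    # 2. Copy each file-backed disk referenced by the definition.
    for disk in ET.fromstring(xml_text).findall("./devices/disk/source"):
        src = disk.get("file")
        if src:
            shutil.copy2(src, dest / Path(src).name)

if __name__ == "__main__":
    backup_domain("web01", "/backups/web01")  # hypothetical domain name and path
```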

And that would be another thing to consider: having a formal testing policy. About half of the survey respondents we’ve researched don’t have a regular testing program for backup and recovery. They back up all of this stuff, but they don’t know if it would work in an emergency. There has to be a formal testing process at least quarterly.

Q: Any other thoughts on protecting virtualized environments?

D.C.: We are talking more about business processes here than we are talking about tools. We’re talking about best practices for keeping apps up and running, and most of them have to do with good data protection processes.

A lot more needs to be done than just throwing technology at it. You have to do your homework and you really have to know your business.

Photo: ©iStockphoto.com/Kohlerphoto

The Case for Policy-based Power Management

Not many years ago, server power consumption wasn’t a big concern for IT administrators. The supply of power was plentiful, and in many cases power costs were bundled with facility costs. For the most part, no one thought too hard about the amount of power going into servers.

What a difference a few years can make. In today’s ever-growing data centers, no one takes power for granted. For starters, we’ve had too many reminders of the threats to the power supply, including widely publicized accounts of catastrophic natural events, breakdowns in the power grid, and seasonal power shortages.

Consider these examples:

  • In the wake of the March 2011 earthquake and tsunami and the loss of the Fukushima Daiichi nuclear power complex, Japan was hit with power restrictions and rolling power blackouts. The available power supply couldn’t meet the nation’s demands.
  • In the United States, overextended infrastructure and recurring brownouts and outages have struck California and the Eastern Seaboard, complicating the lives of millions of people.
  • In Brazil and Costa Rica, power supplies are threatened by seasonal water scarcity for hydro generation, while Chile wrestles with structural energy scarcity and very expensive electricity.

Then consider today’s data centers, where a lot of power is wasted. In a common scenario, server power is over-allocated and rack space is underpopulated to cover worst-case loads. This is what happens when data center managers don’t have a view into the actual power needs of a server or the tools they need to reclaim wasted power.

All the while, data centers are growing larger, and power is becoming a more critical issue. In some cases, data centers have hit the wall; they are out of power and cooling capacity. And as energy costs rise, we’ve reached the point where some of the world’s largest data center operators consider power use to be one of the top site-selection issues when building new facilities. The more plentiful the supply of affordable power is, the better off you are. All of this points to the need for policy-based power management. This forward-looking approach to power management helps your organization use energy more efficiently, trim your electric bills and manage power in a manner that allows demand to more closely match the available supply.

And the benefits don’t stop there: A policy-based approach also allows you to implement power management in terms of elements that are meaningful to the business instead of trying to bend the business to fit your current technology and power supply.

Ultimately, the case for policy-based power management comes down to this: It makes good business sense.

Using policy-based power management to rein in energy use
In today’s data centers, power-management policies are like the reins on a horse. They put you in control of an animal -- power consumption -- that has a tendency to run wild.

When paired with the right hardware, firmware and software, policies give you control over power use across your data center. You can create rules and map policies into specific actions. You can monitor power consumption, set thresholds for power use, and apply appropriate power limits to individual servers, racks of servers, and large groups of servers.

So how does this work? Policy-based power management is rooted in two key capabilities: monitoring and capping. Power monitoring takes advantage of sensors embedded in servers to track power consumption and gather server temperature measurements in real time.

The other key capability, power capping, fits servers with controllers that allow you to set target power consumption limits for a server in real time. As a next step, higher-level software entities aggregate data across multiple servers to enable you to set up and enforce server group policies for power capping.

When you apply power capping across your data center, you can save a lot of money on your electric bills. Just how much depends on the range of attainable power capping, which is a function of the server architecture.

For the current generation of servers, the power capping range might be 30 percent of a server’s peak power consumption. So a server that uses 300 watts at peak load might be capped at 200 watts, saving you 100 watts. Multiply 100 watts by thousands of servers, and you’re talking about operational savings that will make your chief financial officer stand up and take notice.

Dynamic power management takes things a step further. With this approach, policies take advantage of additional degrees of freedom inherent in virtualized cloud data centers, as well as the dynamic behaviors supported by advanced platform power management technologies. Power capping levels are allowed to vary over time and become control variables by themselves. All the while, selective equipment shutdowns -- a concept known as “server parking” -- enable reductions in energy consumption.
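
As a simplified illustration of how a group-level capping policy might be expressed in software, the sketch below spreads a rack power budget across servers as per-server caps. The policy structure, numbers and enforcement path are assumptions for illustration; in production, caps are pushed down through platform interfaces such as node-manager firmware by a management console, not by a standalone script.

```python
from dataclasses import dataclass
from typing import Dict, List

# Toy model of a group-level power-capping policy: each server reports its
# peak and measured draw, and the policy distributes a rack budget as
# per-server caps without dropping below an assumed safe floor.

@dataclass
class Server:
    name: str
    peak_watts: float
    measured_watts: float

@dataclass
class GroupPolicy:
    rack_budget_watts: float
    min_cap_fraction: float = 0.7   # assume caps no lower than 70% of peak

    def compute_caps(self, servers: List[Server]) -> Dict[str, float]:
        caps: Dict[str, float] = {}
        share = self.rack_budget_watts / len(servers)
        for s in servers:
            floor = s.peak_watts * self.min_cap_fraction
            caps[s.name] = max(floor, min(share, s.peak_watts))
        return caps

rack = [Server(f"node{i:02d}", peak_watts=300, measured_watts=230) for i in range(10)]
policy = GroupPolicy(rack_budget_watts=2_200)   # below the 3,000 W worst-case allocation
for name, cap in policy.compute_caps(rack).items():
    print(f"{name}: cap at {cap:.0f} W")
```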

Collectively, these advanced power management approaches help you achieve better energy efficiency and power capacity utilization across your data center. In simple terms, you’re in the saddle, and you control the reins.

Get bigger bang for your power buck
In today’s data centers, the name of the game is to get a bigger bang for every dollar spent on power. Policy-based power management helps you work toward this goal by leveraging hardware-level technologies that make it possible to see what’s really going on inside a server. More specifically, the foundation for policy-based power management is formed by advanced instrumentation embedded in servers. This instrumentation exposes data on temperature, power states and memory states to software applications that sit at a higher level, using technology that:

  • Delivers system power consumption reporting and power capping functionality for the individual servers, the processors and the memory subsystem. 

  • Enables power to be limited at the system, processor and memory levels -- all using policies defined by your organization. These capabilities allow you to dynamically throttle system and rack power based on expected workloads.

  • Enables fine-grained control of power for servers, racks of servers, and groups of servers, allowing for dynamic migration of workloads to optimal servers based on specific power policies with the appropriate hypervisor.

Here’s an important caveat: When it comes to policy-based power management, there’s no such thing as a one-size-fits-all solution. You need multiple tools and technologies that allow you to capture the right data and put it to work to drive more effective power management -- from the server board to the data center environment.

It all begins with technologies that are incorporated into processors and chipsets. That’s the foundation that enables the creation and use of policies that bring you a bigger bang for your power buck.

Build a bridge to a more efficient data center
Putting policy-based power management in place is a bit like building a bridge over a creek. First you lay a foundation to support the bridge, and then you put the rest of the structure in place to allow safe passage over the creek. While your goal is to cross the creek, you couldn’t do it without the foundation that supports the rest of the bridge structure.

In the case of power management, the foundation is a mix of advanced instrumentation capabilities embedded in servers. This foundation is extended with middleware that allows you to consolidate server information to enable the management of large server groups as a single logical unit -- an essential capability in a data center that has thousands of servers.

The rest of the bridge is formed by higher-level applications that integrate and consolidate the data produced at the hardware level. While you ultimately want the management applications, you can’t get there without the hardware-level technologies.

Let’s look at this in more specific terms. Instrumentation at the hardware level allows higher-level management applications to monitor the power consumption of servers, set power consumption targets, and enable advanced power-management policies. These management activities are made possible by the ability of the platform-level technologies to provide real-time power measurements in terms of watts, a unit of measurement that everyone understands.

These same technologies allow power-management applications to retrieve server-level power consumption data through standard APIs and the widely used Intelligent Platform Management Interface (IPMI). The IPMI protocol spells out the data formats to be used in the exchange of power-management data.
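
For a sense of what retrieving that data looks like in practice, here is a minimal sketch that polls a server's DCMI power reading through the common ipmitool utility. The exact output text varies by BMC vendor, so the parsing is a best-effort assumption, as are the hypothetical host and credentials.

```python
import re
import subprocess

# Pull a server's instantaneous power reading over IPMI using ipmitool's DCMI
# power extension. Assumes ipmitool is installed, the BMC is reachable over
# lanplus, and the credentials are valid; output parsing is a best-effort guess.

def read_power_watts(host: str, user: str, password: str) -> float:
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "dcmi", "power", "reading"],
        check=True, capture_output=True, text=True,
    ).stdout
    match = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", out)
    if not match:
        raise RuntimeError(f"Could not parse power reading from: {out!r}")
    return float(match.group(1))

if __name__ == "__main__":
    watts = read_power_watts("10.0.0.42", "admin", "password")  # hypothetical BMC
    print(f"Current draw: {watts:.0f} W")
```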

Put it all together and you have a bridge to a more efficient data center.

Cash in on policy-based power management
When you apply policy-based power management in your data center, the payoff comes in the form of a wide range of business, IT and environmental benefits. Let’s start with the bottom line: A robust set of power-management policies and technologies can help you cut both operational expenditures (OpEx) and capital expenditures (CapEx).

At the OpEx level, you save money by applying policies that limit the amount of power consumed by individual servers or groups of servers. That helps you reduce power consumption across your data center.

How much can you save? Say that each 1U server requires 750 watts of power. If your usage model allows you to cap servers at 450 watts, you save 300 watts per machine. That helps you cut your costs for both power purchases and data center cooling. And chances are you can do this without paying server performance penalties, because many servers don’t use all of the power that has been allocated to them.
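
Carrying that example through to dollars: the sketch below treats the full 300 watts per server as realized savings and uses assumed values for electricity price and cooling overhead; adjust all three for your own facility.

```python
# Annual OpEx saved by capping 1U servers from 750 W to 450 W.
# Electricity price and cooling/distribution overhead (PUE) are assumed values.

servers = 1_000
watts_saved_per_server = 750 - 450          # 300 W per machine, per the example
price_per_kwh = 0.10                        # assumed $/kWh
pue = 1.8                                   # assumed facility overhead multiplier

kwh_saved_per_year = servers * watts_saved_per_server / 1_000 * 8_760 * pue
print(f"~${kwh_saved_per_year * price_per_kwh:,.0f} saved per year")
```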

At the CapEx level, you cut costs by avoiding the purchase of intelligent power distribution units (PDUs) to gain power monitoring capabilities and by reducing redundancy requirements, which saves you thousands of dollars per rack.

More effective power management can also help you pack more servers into racks, as well as more racks into your data center, to make better use of your existing infrastructure. According to the Uptime Institute, each PDU kilowatt represents about $10,000 of CapEx, so it makes sense to try to make the best use of your available power capacity.

Baidu.com, the largest search engine in China, understands the benefits of making better use of existing infrastructure. It partnered with Intel to conduct a proof of concept (PoC) project that used Intel Intelligent Power Node Manager and Intel Data Center Manager to dynamically optimize server performance and power consumption to maximize the server density of a rack.

Key results of the Baidu PoC project:

  • At the rack level, a capacity increase of up to 20 percent could be achieved within the same rack-level power envelope when an aggregated optimal power-management policy was applied.
  • Compared with today’s data center operation at Baidu, the use of Intel Intelligent Power Node Manager and Intel Data Center Manager enabled rack densities to increase by more than 40 percent.

And even then, the benefits of policy-based power management don’t stop at the bottom line. While this more intelligent approach to power management helps you reduce power consumption, it also helps you reduce your carbon footprint, meet your green goals, and comply with regulatory requirements. Benefits like those are a key part of the payoff for policy-based power management.

For more resources on data center efficiency, see this site from our sponsor.