5 Keys for Moving Enterprise Security to the Cloud

The worst economy in 70 years hasn’t deflated the cloud: In 2009, cloud services were already a $16 billion market, says research firm IDC. By 2014, global cloud revenues will hit $55.5 billion, growing five times faster than other IT products.

It’s not hard to see why enterprises large and small are flocking to the cloud. The cloud reduces IT capital expenses, and often operating expenses, by shifting the cost of buying and running infrastructure to the enterprise’s cloud provider. That’s an obvious benefit even in flush times, but it’s even more attractive when the recession has CIOs and IT managers looking to run as lean as possible.

Cloud computing also helps enterprises stay nimble -- by enabling them to take advantage of new technologies faster than if they had to buy and deploy the equipment themselves, for example. That flexibility can produce competitive advantages, including rolling out services quickly to respond to changing market conditions.

Another big draw is the ability to scale IT systems up and down to meet changing needs, such as peaks during the holiday shopping season. That lets enterprises be more responsive to a flood of new customers, but without purchasing IT infrastructure that would be underutilized between peak periods.

5 Tips for Fighting Breaches

As any CIO or IT manager is quick to add, the cloud’s benefits can’t come at the expense of security. Even minor breaches can have big implications, ranging from a PR nightmare and class action lawsuits when confidential customer information is compromised, to jail time if it turns out that lax security policies violated laws. Worst-case scenario: a breach so big that Congress enacts a law nicknamed after the company.

Some enterprises have an internal cloud, others work with a cloud provider and still others have a combination of the two. These tips apply to all three models:

1. Start clean.
Some enterprises require their cloud provider to put their data only on brand-new servers. They believe it’s impossible to remove every trace of former tenants and that this electronic detritus creates back doors for hackers.

2. Secure access to the cloud.
Implement strong authentication mechanisms to secure every Web path that provides access to the cloud. Ditch simple, password-based logins in favor of multifactor authentication. In fact, some industries mandate this. One example is financial services, where since 2006 the FFIEC has required banks to use multifactor authentication to protect logins into their sites. Also, take a look outside your industry to see if there are any regulations and best practices that you could adopt or adapt to beef up cloud security.
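
To make the multifactor piece concrete, here is a minimal sketch of a time-based one-time password (TOTP) check using the open-source pyotp library. The user name, issuer and standalone script form are illustrative assumptions; in production this logic would sit behind your identity provider or access gateway rather than in a script.

```python
# Minimal TOTP sketch using the pyotp library: the server stores a per-user
# secret, the user's authenticator app generates a rotating six-digit code,
# and login requires both the password and the current code.
import pyotp

# Enrollment: generate and store a secret for the user, and hand the
# provisioning URI to an authenticator app (names here are placeholders).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCloud"))

# Login: verify the second factor after the password check has passed.
def second_factor_ok(user_secret: str, submitted_code: str) -> bool:
    return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)

print(second_factor_ok(secret, totp.now()))   # True when the code is current
```

Because the code rotates every 30 seconds, a stolen password alone is no longer enough to reach the cloud console.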

3. Safeguard the data in the cloud.
This is another place where it’s key to keep up with industry-specific laws and best practices, including ones that can be borrowed from other sectors. For example, the Payment Card Industry (PCI) standard specifies physical and logical controls for data both when it’s at rest and in motion, while HIPAA provides similar requirements for medical data.
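
For data at rest, one common control is to encrypt records before they ever reach cloud storage, with TLS protecting the same data in motion. Below is a minimal, hedged sketch using the Python cryptography library’s Fernet recipe; key management (a vault or HSM, rotation, access logging), which PCI and HIPAA both care about, is deliberately left out.

```python
# Minimal envelope for data at rest: encrypt before upload, decrypt after
# download. Key management is out of scope for this sketch; in practice the
# key comes from a vault or HSM, never from a hard-coded value.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # placeholder: fetch from a key vault
cipher = Fernet(key)

record = b"cardholder: 4111 1111 1111 1111"   # industry-standard test number
ciphertext = cipher.encrypt(record)           # store only this in the cloud
assert cipher.decrypt(ciphertext) == record
```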

4. Verify and audit.
Third-party auditors can verify that your cloud or your cloud provider meets security and privacy laws, as well as any industry-specific best practices. Besides PCI and HIPAA, audits may look at compliance with SAS 70, which covers application security, physical security and security processes. Another is ISO 27002, which lists hundreds of options for security management.

5. End clean.
PCI is also an example of how some industries require that data be destroyed at the end of its life, right down to the hard drives that stored it. That applies when switching cloud providers, too: The contract should spell out exactly how data must be destroyed.

Need more tips? Check out Cloud Computing: Benefits, Risks and Recommendations for Information Security, a European Network and Information Security Agency report that covers 35 common risks and strategies for mitigating them. Although the report comes from a European agency, its recommendations apply in every part of the world.

Time to Phase out Desk Phones?

If your company is like most, employees already have a PC, a smartphone, perhaps a tablet or some combination of those devices. That lineup means it’s time to take a hard look at softphones, which can replace traditional desk phones and the costs associated with them.

A big part of softphones’ appeal is that they leverage devices that enterprises already own. For example, an organization might decide that it’s more cost-effective to put softphone clients on employees’ existing laptops, smartphones or both than to continue to support their desk phones or to provide new hires with desk phones. A bare-bones business-grade desk phone runs at least $100 in volume versus just $15 for some softphone clients, plus a headset for employees who prefer one.

For some enterprises, video is the decisive factor. Suppose that an organization wants to provide all of its employees with the ability to use video conferencing and one-on-one video calling. A video desk phone, such as the Cisco IP Video Phone E20 or the Polycom VVX 1500, starts at about $700. That upfront hardware cost is enough to push some enterprises toward softphones that include video.

It’s no surprise that the video conferencing capabilities of Ultrabooks and other mobile devices are rendering video desk phones obsolete. For organizations with a large number of mobile employees, a video desk phone is also downright unwieldy. “Try to put that in your briefcase,” says Todd Carothers, senior vice president of marketing and products at CounterPath, whose Bria softphone runs on tablets, laptops and iPhones.

Softphones can also be a way for enterprises to wring more value from devices they already own -- or, more often than not, that their employees own. For example, a recent NPD In-Stat survey found that businesspeople use their tablets primarily for email and note-taking and that 78 percent bring their personal tablet to work rather than having a company-provided one.

“For an extra $15, you can stick Bria on there, and it replaces a $1,000 desk phone -- and they can use it for email and Web browsing,” says Carothers.

Five Steps to Softphone Success
There are several factors to consider when deciding how and where to implement softphones:

  • Not every employee will be comfortable giving up a desk phone. For example, younger employees often have no qualms because they already use Skype and Google+ Hangouts at home. “They think nothing of plugging a headset into a computer and having a conversation,” says Bob Hughes, global CIO at McLarens Young International, where some employees use softphones. “There is a user profile that you have to understand.”

Some employees prefer the audio quality of a desk phone in speaker mode. “A good resonant chamber inside a desk phone is something you can duplicate a little with a PC or Mac, but it’s still not quite as good,” says Huw Rees, vice president of business development at VoIP provider 8x8 Inc. “For speakerphone capability, the desk phone still has some advantage.”

  • Leverage Wi-Fi for mobile employees. Look for softphone solutions that can be configured to default to Wi-Fi -- the office LAN or a company-approved public hot spot provider -- so employees don’t rack up a big cellular data bill when they’re moving around the office or on the road. For employees traveling abroad, VoIP-over-Wi-Fi calls range from free to a fraction of the cost of cellular voice or a hotel room phone. “That’s the exact example of why I do it,” says Hughes.
  • Pick a softphone that’s user-friendly. The user interface should be intuitive because if it isn’t, frustrated employees are likely to switch to an easier but more expensive calling option, such as their cell phone. The softphone also should make it easy to dial extensions and add contacts. “That’s the goal: The user doesn’t have to think or do anything differently,” says Rees. “It just works.”
  • Audit your devices beforehand. It’s increasingly common for enterprises to have a mix of mobile and PC operating systems, especially when employees are allowed to choose their own devices. So if the goal is a company-wide softphone rollout, make sure that the vendor can provide versions for Android, iOS, Mac, Windows and so on. If video is another goal, check how many of your existing smartphones and tablets have a front-facing camera.
  • Audit your bandwidth. Make sure that your facilities have an IP connection that’s fast enough in both directions so that voice and video calls don’t struggle alongside other types of traffic; a rough per-call sizing calculation follows this list. That can be a challenge when some employees telecommute. “Connectivity is key; fiber is the best,” says Dan Shay, ABI Research practice director for mobile services. “But if you use DSL, Wi-Fi or cellular, connectivity can be problematic; hence advantages of one option (mobility, cost, etc.) are offset by other issues. Bottom line: Different endpoint options, such as a media tablet with soft client, are simply options for initiating a call. Connectivity determines completing a call.”
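
As a starting point for that bandwidth audit, the arithmetic below estimates the load of plain G.711 voice calls at a 20 ms packet interval over Ethernet. The concurrent-call count is an assumption to replace with your own peak figure, and video calls need considerably more headroom.

```python
# Rough VoIP bandwidth sizing: G.711 voice at a 20 ms packetization interval.
PAYLOAD_BYTES = 160        # 64 kbps G.711 audio carried in each 20 ms packet
HEADER_BYTES = 40 + 18     # IP + UDP + RTP headers, plus Ethernet framing
PACKETS_PER_SEC = 50       # one packet every 20 ms

per_call_kbps = (PAYLOAD_BYTES + HEADER_BYTES) * 8 * PACKETS_PER_SEC / 1000
concurrent_calls = 30      # assumed peak for one office
print(f"Per call: {per_call_kbps:.1f} kbps each way")                    # ~87.2 kbps
print(f"Peak voice load: {per_call_kbps * concurrent_calls / 1000:.2f} Mbps each way")
```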

Hadoop Expands Footprint

The Hadoop distributed computing framework is broadening from its initial role in Internet search engines. Further expansion seems likely this year as more developers build on the software.

Hadoop, an Apache Software Foundation project, first took root at Yahoo and has since spread to other marquee customers such as Facebook and Twitter. The open-source software specializes in crunching very large data sets -- the “big data” problem. To manage tasks of that sort, Hadoop dispatches processing chores across multiple computers. The software is inherently parallel, and applications may be designed to exploit that parallelism.

The core of the Hadoop framework consists of MapReduce, which distributes data-processing tasks across a compute cluster and aggregates the results, and the Hadoop Distributed File System (HDFS), a storage component for Hadoop applications. A Hadoop FAQ states that dual-core machines scale best for Hadoop; however, quad-core and hex-core deployments are emerging.
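
To see the programming model in miniature, here is a word-count sketch written for Hadoop Streaming, the stock utility that lets any executable reading standard input and writing standard output act as the map or reduce step. The input and output paths in the comment are illustrative.

```python
#!/usr/bin/env python3
# Minimal word-count sketch of the MapReduce model, written for Hadoop Streaming.
# Run the same script as both mapper and reducer, e.g. (paths are illustrative):
#   hadoop jar hadoop-streaming.jar \
#       -input /data/logs -output /data/wordcount \
#       -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
#       -file wordcount.py
import sys
from itertools import groupby

def map_step(lines):
    """Emit one (word, 1) pair per word; Hadoop shuffles and sorts by key."""
    for line in lines:
        for word in line.strip().split():
            print(f"{word}\t1")

def reduce_step(lines):
    """Input arrives sorted by key, so equal words are adjacent."""
    parsed = (line.rstrip("\n").split("\t", 1) for line in lines)
    for word, group in groupby(parsed, key=lambda kv: kv[0]):
        total = sum(int(count) for _, count in group)
        print(f"{word}\t{total}")

if __name__ == "__main__":
    (map_step if sys.argv[1] == "map" else reduce_step)(sys.stdin)
```

Hadoop handles the splitting, shuffling and sorting between the two steps, which is what lets the same script scale from one machine to a large cluster.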

A Bigger Base

Since Hadoop’s start in search engine development six years ago, the technology has found its way into other applications such as clickstream analysis. The relatively recent availability of commercial Hadoop distributions has further widened the array of use cases. Cloudera Inc. began selling its Hadoop distribution in 2009. Other Hadoop distribution vendors include Hortonworks Inc. and MapR Technologies Inc., both of which launched software in 2011.

Charles Zedlewski, vice president of products at Cloudera, says most people purchase Hadoop in distribution form, which provides an integrated stack of components, as opposed to a raw system. The additional functionality lets customers “solve a broader range of business problems,” he says.

Cloudera’s Distribution including Apache Hadoop (CDH), for example, provides a number of Apache Hadoop–related systems such as Pig, which lets developers write programs that lend themselves to parallelization; Sqoop, which integrates Hadoop with relational databases; HBase, a Hadoop database; and Flume, a system for aggregating streaming data.

CDH components such as HBase and Flume support real-time analysis, taking the technology into use cases not possible with core Hadoop, says Zedlewski.

“Flume allows users to stream data into Hadoop in real time so the lag between data generation and analysis is only a few seconds,” he notes.

Those real-time capabilities enable mobile services and systems for IT operations.

“Many popular mobile services that people use every day are backed by real-time Hadoop/HBase systems,” says Zedlewski. “We see several examples of people using Hadoop/HBase as a real-time operational data store for systems management at scale.”

Distributions from Hortonworks and MapR, meanwhile, cover a similar swath of Apache tools. They also provide management technology, which vendors believe will smooth Hadoop’s path into more enterprises and, presumably, a larger set of applications.

Hortonworks Data Platform, a distribution initially released through a technology preview program in November, includes a management system, Apache Ambari. Cloudera and MapR, meanwhile, offer subscription versions of their free distributions that include tools for managing Hadoop clusters.

“At this point, we are focusing on things like making Hadoop really easy to consume ... and monitor and support,” says Arun Murthy, founder and architect at Hortonworks.

ISV Support

The next wave of Hadoop expansion may flow from the ecosystem of ISVs now forming around the technology.

RainStor Inc., which develops big data management software, is one such firm. The company has partnerships in place with Cloudera, Hortonworks and MapR.

“Hadoop is an operating system for big data -- a storage and processing mechanism,” says John Bantleman, chief executive officer at RainStor. “But you need applications and capabilities on top of that. I think Hadoop is a bit like Linux. It’s a platform. You need a product sitting on the platform to make it valuable.”

In January, the company rolled out its RainStor Big Data Analytics on Hadoop. The enterprise database uses data compression and partition filtering to speed up queries, notes Bantleman. The latter feature results in greater productivity through more efficient use of a Hadoop cluster, according to RainStor.
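
Partition filtering, often called partition pruning, is a general technique rather than anything unique to RainStor: the engine consults partition metadata and skips every partition that cannot possibly satisfy the query’s predicate, so far less of the cluster does work per query. A generic sketch with made-up paths and timestamps:

```python
# Generic illustration of partition pruning, not RainStor's implementation:
# skip any partition whose metadata shows it cannot satisfy the predicate.
from dataclasses import dataclass

@dataclass
class Partition:
    path: str
    min_ts: int   # smallest event timestamp stored in this partition
    max_ts: int   # largest event timestamp stored in this partition

def prune(partitions, query_start, query_end):
    """Return only partitions whose time range overlaps the query window."""
    return [p for p in partitions
            if p.max_ts >= query_start and p.min_ts <= query_end]

partitions = [
    Partition("/data/cdrs/2012-01-01", 1325376000, 1325462399),
    Partition("/data/cdrs/2012-01-02", 1325462400, 1325548799),
]
# Only the second partition needs to be scanned for this query window.
print(prune(partitions, 1325500000, 1325540000))
```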

At the moment, financial services along with telco service providers stand out among RainStor’s top industry customers. Electronic trading and credit card transactions via smartphones contribute to the growth of big data in those sectors, says Bantleman.

Bantleman also points to the airline industry as another potential market: Aircraft such as Boeing’s 787 will generate masses of data from sensors monitoring engines and other aircraft systems. “We really believe that machine-generated data is a key driver,” he says.

Help for Developers

Hadoop includes components that let developers take advantage of its parallelism. Apache Pig, and specifically the Pig Latin language, is one key tool.

The Pig Latin data flow language lets programmers create scripts that generate a series of MapReduce jobs. Developers who use the high-level language don’t necessarily have to be up-to-speed on Hadoop’s parallelism to get results.

“The nice part is you don’t have to understand its parallel nature, but you get the benefit of the parallelism of MapReduce and HDFS,” says Murthy.

Work is underway to make Hadoop distributions more developer-friendly. Zedlewski says Cloudera limits updates to once a quarter to provide a predictable development target. In addition, the company aims to simplify its distribution from a development standpoint. Zedlewski notes that it is currently much more difficult to build a product against Hadoop than it is to build one against JBoss.

“We definitely want to lower the bar,” he says. “That’s a work in progress.”

Distribution providers are also developing partner programs to back developers.

Cloudera offers its Connect Partner Program for ISVs, independent hardware vendors, systems integrators, value-added resellers, and training organizations. Hortonworks’ Technology Partner Program supports ISVs, OEMs, and service providers.


McAfee’s Edward Metcalf Shares Hybrid Rootkit-thwarting Strategy

It’s been 21 years since the first rootkit was written, but it wasn’t until 2005 that rootkits reared their ugly heads in the mainstream. Today, there are more than 2 million unique rootkits, and another 50 are created each hour, according to McAfee Labs.

Hackers like rootkits because they work silently, which makes them ideal for harvesting credit card numbers and other valuable information, as well as industrial espionage and electronic terrorism. Thwarting rootkits isn’t easy because they load before the operating system (OS) does, and antivirus platforms don’t kick into action until after the OS starts running. In response, security researchers have created a hybrid hardware-software approach that loads first and then looks into memory’s deepest corners to ferret out rootkits.

McAfee’s recent DeepSAFE technology is an example of this hybrid, which supplements conventional antivirus software rather than replacing it. Edward Metcalf, McAfee’s group product marketing manager, recently spoke with Intelligence in Software about how hardware-assisted security works, what the benefits are and what enterprises need to know about this emerging class of security products.

Q: Why have rootkits become so common over the past few years? And how is their rise changing security strategies?

Edward Metcalf: For the most part, it’s always been a software-based approach that the cybersecurity industry has taken to combat malware. But the motivation of cybercriminals has changed over the past few years. Early on, it used to be about fame: getting their name on the news or in the hacker community. About six years ago, we started seeing a shift in their motivation from fame to financial gain. That’s changed the landscape dramatically, as evidenced by the growth in malware and techniques.

McAfee and Intel realized that there are capabilities within the hardware that our software can use to block certain types of threats: looking into memory and blocking kernel-mode rootkits, for example. So for the last couple of years, McAfee and Intel have been working on technology that allows McAfee and other vendors to better leverage the hardware functionality.

The first evolution of that integration between hardware and software is the DeepSAFE platform. DeepSAFE uses hardware functionality built into the Intel Core i3, i5 and i7 platforms.

Q: So DeepSAFE basically shines a light into previously dark corners of PCs and other devices to look for suspicious behavior that OS-based technologies wouldn’t see, right?

E.M.: Until now, for the most part, all security software has operated within the OS. Cybercriminals know that, and they know how to get past it and they’re developing ways to propagate malware. Stealth techniques like kernel-mode and user-mode rootkits are sometimes really difficult to detect with OS-based security.

The current implementation of DeepSAFE utilizes the virtualization technology built into the Core i-series platform. We’re using that hardware functionality to get beyond the OS to inspect memory at a deep level that we’ve never been able to before because we’ve never had that access. It does require PCs to be running the latest generation of the Core i-series platform.

Q: If an enterprise has PCs running those Core i processors, can they upgrade to DeepSAFE?

E.M.: Yes. I wouldn’t position it as an upgrade. It’s added functionality that provides a deep level of protection.

DeepSAFE and Deep Defender do not replace the current antivirus on a machine. They augment it. They give us a new perspective on some of the new threats that we’ve always had a hard time detecting because they load well before the OS does, which prevented us from seeing them as an OS-based application. Cybercriminals knew that was a flaw.

Q: Is it possible to apply this hybrid architecture to embedded devices that run real-time OS’s (RTOS’s)?

E.M.: Absolutely. Currently, we don’t have the ability to do that, but we’ve already talked about working with RTOS vendors like Wind River. Taking the DeepSAFE strategy to embedded devices certainly could happen in the future.

People are asking about whether we can put DeepSAFE on tablets and smartphones. The answer is potentially yes if we have the hardware functionality or technology to hook into the hardware that we need in order to get that new vantage point.

Q: Hackers have a long history of innovation. Will they eventually figure out how to get around hybrid security?

E.M.: We constantly have to play a cat-and-mouse game: We develop a new technology, and they find ways to get around it.

In DeepSAFE, we’ve developed a number of mechanisms built into how we load and when we load to prevent any circumvention. Because we’re the first to load on a system, and because we use techniques to ensure that we’re the first one, it makes it harder for cybercriminals to develop ways to get around it.

The Case for Policy-based Power Management

Not many years ago, server power consumption wasn’t a big concern for IT administrators. The supply of power was plentiful, and in many cases power costs were bundled with facility costs. For the most part, no one thought too hard about the amount of power going into servers.

What a difference a few years can make. In today’s ever-growing data centers, no one takes power for granted. For starters, we’ve had too many reminders of the threats to the power supply, including widely publicized accounts of catastrophic natural events, breakdowns in the power grid, and seasonal power shortages.

Consider these examples:

  • In the wake of the March 2011 earthquake and tsunami and the loss of the Fukushima Daiichi nuclear power complex, Japan was hit with power restrictions and rolling power blackouts. The available power supply couldn’t meet the nation’s demands.
  • In the United States, overextended infrastructure and recurring brownouts and outages have struck California and the Eastern Seaboard, complicating the lives of millions of people.
  • In Brazil and Costa Rica, power supplies are threatened by seasonal water scarcity for hydro generation, while Chile wrestles with structural energy scarcity and very expensive electricity.

Then consider today’s data centers, where a lot of power is wasted. In a common scenario, server power is over-allocated and rack space is underpopulated to cover worst-case loads. This is what happens when data center managers don’t have a view into the actual power needs of a server or the tools they need to reclaim wasted power.

All the while, data centers are growing larger, and power is becoming a more critical issue. In some cases, data centers have hit the wall; they are out of power and cooling capacity. And as energy costs rise, we’ve reached the point where some of the world’s largest data center operators consider power use to be one of the top site-selection issues when building new facilities. The more plentiful the supply of affordable power is, the better off you are.

All of this points to the need for policy-based power management. This forward-looking approach to power management helps your organization use energy more efficiently, trim your electric bills and manage power in a manner that allows demand to more closely match the available supply.

And the benefits don’t stop there: A policy-based approach also allows you to implement power management in terms of elements that are meaningful to the business instead of trying to bend the business to fit your current technology and power supply.

Ultimately, the case for policy-based power management comes down to this: It makes good business sense.

Using policy-based power management to rein in energy use
In today’s data centers, power-management policies are like the reins on a horse. They put you in control of an animal -- power consumption -- that has a tendency to run wild.

When paired with the right hardware, firmware and software, policies give you control over power use across your data center. You can create rules and map policies into specific actions. You can monitor power consumption, set thresholds for power use, and apply appropriate power limits to individual servers, racks of servers, and large groups of servers.

So how does this work? Policy-based power management is rooted in two key capabilities: monitoring and capping. Power monitoring takes advantage of sensors embedded in servers to track power consumption and gather server temperature measurements in real time.

The other key capability, power capping, fits servers with controllers that allow you to set target power consumption limits for a server in real time. As a next step, higher-level software entities aggregate data across multiple servers to enable you to set up and enforce server group policies for power capping.
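
As one concrete illustration of those two primitives, the sketch below reads a server’s instantaneous draw and applies a cap using the DCMI subcommands exposed by the standard ipmitool utility. The host, credentials and 200-watt target are placeholders, the reading’s output format can vary by BMC, and production tools such as Intel Data Center Manager wrap these same primitives in group-level policies.

```python
# Sketch of the two power-management primitives, monitoring and capping,
# using ipmitool's DCMI subcommands over the BMC's LAN interface.
import re
import subprocess

def ipmi(host, user, password, *args):
    """Run an ipmitool command against a server's BMC and return its output."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def read_power_watts(host, user, password):
    """Parse the instantaneous reading from 'dcmi power reading' (format may vary by BMC)."""
    out = ipmi(host, user, password, "dcmi", "power", "reading")
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    return int(match.group(1)) if match else None

def set_power_cap(host, user, password, watts):
    """Set and activate a DCMI power limit for one server."""
    ipmi(host, user, password, "dcmi", "power", "set_limit", "limit", str(watts))
    ipmi(host, user, password, "dcmi", "power", "activate")

if __name__ == "__main__":
    host, user, password = "10.0.0.42", "admin", "secret"   # placeholders
    print("Current draw:", read_power_watts(host, user, password), "W")
    set_power_cap(host, user, password, 200)                 # cap this server at 200 W
```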

When you apply power capping across your data center, you can save a lot of money on your electric bills. Just how much depends on the range of attainable power capping, which is a function of the server architecture.

For the current generation of servers, the power capping range might be 30 percent of a server’s peak power consumption. So a server that uses 300 watts at peak load might be capped at 200 watts, saving you 100 watts. Multiply 100 watts by thousands of servers, and you’re talking about operational savings that will make your chief financial officer stand up and take notice.

Dynamic power management takes things a step further. With this approach, policies take advantage of additional degrees of freedom inherent in virtualized cloud data centers, as well as the dynamic behaviors supported by advanced platform power management technologies. Power capping levels are allowed to vary over time and become control variables by themselves. All the while, selective equipment shutdowns -- a concept known as “server parking” -- enable reductions in energy consumption.

Collectively, these advanced power management approaches help you achieve better energy efficiency and power capacity utilization across your data center. In simple terms, you’re in the saddle, and you control the reins.

Get bigger bang for your power buck
In today’s data centers, the name of the game is to get a bigger bang for every dollar spent on power. Policy-based power management helps you work toward this goal by leveraging hardware-level technologies that make it possible to see what’s really going on inside a server. More specifically, the foundation for policy-based power management is formed by advanced instrumentation embedded in servers. This instrumentation exposes data on temperature, power states and memory states to software applications that sit at a higher level, using technology that:

  • Delivers system power consumption reporting and power capping functionality for the individual servers, the processors and the memory subsystem. 

  • Enables power to be limited at the system, processor and memory levels -- all using policies defined by your organization. These capabilities allow you to dynamically throttle system and rack power based on expected workloads.

  • Enables fine-grained control of power for servers, racks of servers, and groups of servers, allowing, with the appropriate hypervisor, for dynamic migration of workloads to optimal servers based on specific power policies. (A simple capping policy loop is sketched after this list.)
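
As a sketch of what such a policy might look like at the level of a single server, the loop below tightens the cap when utilization is low and relaxes it when load climbs. The thresholds are arbitrary, and read_utilization and set_power_cap are hypothetical stand-ins for whatever your management stack exposes (for example, the IPMI calls sketched earlier).

```python
# Hypothetical policy loop: keep a server's power cap low while it is lightly
# loaded, and relax the cap when utilization climbs. read_utilization() and
# set_power_cap() stand in for whatever your management stack actually exposes.
import time

LOW_CAP_W, HIGH_CAP_W = 250, 400      # illustrative per-server limits
RAISE_AT, LOWER_AT = 0.75, 0.40       # utilization thresholds (hysteresis band)

def enforce_policy(read_utilization, set_power_cap, poll_seconds=60):
    current_cap = HIGH_CAP_W
    while True:
        util = read_utilization()                 # expected range: 0.0 .. 1.0
        if util > RAISE_AT and current_cap != HIGH_CAP_W:
            current_cap = HIGH_CAP_W              # busy: give the server headroom
        elif util < LOWER_AT and current_cap != LOW_CAP_W:
            current_cap = LOW_CAP_W               # lightly loaded: claw back power
        set_power_cap(current_cap)
        time.sleep(poll_seconds)
```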

Here’s an important caveat: When it comes to policy-based power management, there’s no such thing as a one-size-fits-all solution. You need multiple tools and technologies that allow you to capture the right data and put it to work to drive more effective power management -- from the server board to the data center environment.

It all begins with technologies that are incorporated into processors and chipsets. That’s the foundation that enables the creation and use of policies that bring you a bigger bang for your power buck.

Build a bridge to a more efficient data center
Putting policy-based power management in place is a bit like building a bridge over a creek. First you lay a foundation to support the bridge, and then you put the rest of the structure in place to allow safe passage over the creek. While your goal is to cross the creek, you couldn’t do it without the foundation that supports the rest of the bridge structure.

In the case of power management, the foundation is a mix of advanced instrumentation capabilities embedded in servers. This foundation is extended with middleware that allows you to consolidate server information to enable the management of large server groups as a single logical unit -- an essential capability in a data center that has thousands of servers.

The rest of the bridge is formed by higher-level applications that integrate and consolidate the data produced at the hardware level. While you ultimately want the management applications, you can’t get there without the hardware-level technologies.

Let’s look at this in more specific terms. Instrumentation at the hardware level allows higher-level management applications to monitor the power consumption of servers, set power consumption targets, and enable advanced power-management policies. These management activities are made possible by the ability of the platform-level technologies to provide real-time power measurements in terms of watts, a unit of measurement that everyone understands.

These same technologies allow power-management applications to retrieve server-level power consumption data through standard APIs and the widely used Intelligent Platform Management Interface (IPMI). The IPMI protocol spells out the data formats to be used in the exchange of power-management data.

Put it all together and you have a bridge to a more efficient data center.

Cash in on policy-based power management
When you apply policy-based power management in your data center, the payoff comes in the form of a wide range of business, IT and environmental benefits. Let’s start with the bottom line: A robust set of power-management policies and technologies can help you cut both operational expenditures (OpEx) and capital expenditures (CapEx).

At the OpEx level, you save money by applying policies that limit the amount of power consumed by individual servers or groups of servers. That helps you reduce power consumption across your data center.

How much can you save? Say that each 1U server is allocated 750 watts of power. If your usage model allows you to cap servers at 450 watts, you save 300 watts per machine. That helps you cut your costs for both power purchases and data center cooling. And chances are you can do this without paying server performance penalties, because many servers don’t use all of the power that has been allocated to them.
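
Putting rough numbers on that example: the sketch below treats the full 300 watts per server as realized savings, as the example does, and folds in cooling overhead. The fleet size, electricity rate and PUE are assumptions to replace with your own figures.

```python
# Back-of-the-envelope OpEx savings for the 750 W -> 450 W capping example.
# Fleet size, electricity rate and cooling overhead (PUE) are assumptions;
# real savings also depend on how often servers would have exceeded the cap.
ALLOCATED_W, CAP_W = 750, 450
SERVERS = 2000                    # hypothetical fleet size
RATE_PER_KWH = 0.10               # assumed utility rate, $/kWh
PUE = 1.8                         # assumed power/cooling overhead multiplier

saved_kw = (ALLOCATED_W - CAP_W) * SERVERS / 1000        # 600 kW reclaimed
annual_kwh = saved_kw * 24 * 365 * PUE
print(f"Capacity reclaimed: {saved_kw:.0f} kW")
print(f"Estimated annual savings: ${annual_kwh * RATE_PER_KWH:,.0f}")
```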

At the CapEx level, you cut costs by avoiding the purchase of intelligent power distribution units (PDUs) to gain power monitoring capabilities and by reducing redundancy requirements, which saves you thousands of dollars per rack.

More effective power management can also help you pack more servers into racks, as well as more racks into your data center, to make better use of your existing infrastructure. According to the Uptime Institute, each PDU kilowatt represents about $10,000 of CapEx, so it makes sense to make the best use of your available power capacity.
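
A similarly rough sketch of the density and CapEx side: under an assumed 8-kilowatt rack power envelope, capping servers at 450 watts instead of allocating 750 watts lets noticeably more machines share the same rack, and the Uptime Institute figure cited above prices the capacity that capping puts to work. Every number here is illustrative.

```python
# Rough rack-density arithmetic under the Uptime Institute figure cited above.
# The rack power budget and per-server numbers are assumptions for illustration.
RACK_BUDGET_W = 8000             # assumed power envelope for one rack
ALLOCATED_W = 750                # nameplate allocation per 1U server
CAPPED_W = 450                   # enforced cap per server
CAPEX_PER_KW = 10000             # Uptime Institute estimate, $ per PDU kilowatt

servers_without_cap = RACK_BUDGET_W // ALLOCATED_W     # 10 servers per rack
servers_with_cap = RACK_BUDGET_W // CAPPED_W           # 17 servers per rack
extra_kw_used = (servers_with_cap - servers_without_cap) * CAPPED_W / 1000
print(f"Servers per rack: {servers_without_cap} -> {servers_with_cap}")
print(f"Extra capacity put to work per rack: {extra_kw_used:.2f} kW "
      f"(~${extra_kw_used * CAPEX_PER_KW:,.0f} of PDU CapEx no longer stranded)")
```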

Baidu.com, the largest search engine in China, understands the benefits of making better use of existing infrastructure. It partnered with Intel to conduct a proof of concept (PoC) project that used Intel Intelligent Power Node Manager and Intel Data Center Manager to dynamically optimize server performance and power consumption to maximize the server density of a rack.

Key results of the Baidu PoC project:

  • At the rack level, up to 20 percent additional capacity could be achieved within the same rack-level power envelope when an aggregated optimal power-management policy was applied.
  • Compared with today’s data center operation at Baidu, the use of Intel Intelligent Power Node Manager and Intel Data Center Manager enabled rack densities to increase by more than 40 percent.

And even then, the benefits of policy-based power management don’t stop at the bottom line. While this more intelligent approach to power management helps you reduce power consumption, it also helps you reduce your carbon footprint, meet your green goals, and comply with regulatory requirements. Benefits like those are a key part of the payoff for policy-based power management.

For more resources on data center efficiency, see this site from our sponsor.