Language Lessons: Where New Parallel Developments Fit Into Your Toolkit

The rise of multicore processors and programmable GPUs has sparked a wave of developments in parallel programming languages.

Developers seeking to exploit multicore and manycore systems -- the latter involving hundreds or potentially thousands of processors -- now have more options at their disposal. Parallel languages making moves of late include SEJITS from the University of California, Berkeley; the Khronos Group’s OpenCL; the recently open-sourced Cilk Plus; and the newly created ParaSail language. Developers may encounter these languages directly, though the wider community will most likely find them embedded within higher-level languages.

Read on for the details:

Scientific Developments

Parallel computing and programming have been around for years in the high-performance scientific computing field. Recent developments in this arena include SEJITS (selective, embedded, just-in-time specialization), a research effort at the University of California, Berkeley.

The SEJITS implementation for the Python high-level language, which goes by ASP (ASP is SEJITS for Python), aims to make it easier for scientists to harness the power of parallelism. Scientists favor speed as they work to solve a specific problem, while professional programmers take the time to devise a parallel strategy to boost performance.

Armando Fox, adjunct associate professor with UC Berkeley’s Computer Science Division, says SEJITS bridges the gap between productivity programmers and efficiency programmers. SEJITS, he notes, allows productivity programmers to write in a high-level language, a benefit facilitated by efficiency programmers’ ability to capture the parallel algorithm. Intel and Microsoft are early-adopter customers.

Here’s how it works: A scientist/programmer leverages a specializer -- a design pattern, essentially -- that addresses a specific problem and is optimized to run in parallel settings. Specializers that are currently available cover audio processing and structured grids, among other fields. This approach embeds domain-specific languages into Python with compilation occurring at runtime.

ASP specializers are available via GitHub, with a planned repository to provide a catalog of specializers and metadata. The beginnings of such a repository may be in place by December, says Fox.

“As more and more efficiency programmers contribute their patterns to this repository of patterns, application writers can pick up and use them as they would use libraries,” explains Fox.

Fox characterized SEJITS as a prototype -- albeit one with customers. He says researchers are working to make the SEJITS documentation more complete.

Tapping GPUs and More

Stemming from a graphics background, OpenCL appears to be broadening its reach after emerging in Mac OS X a couple of years ago.

OpenCL, now a Khronos Group specification, consists of an API set and OpenCL C, a programming language. On one level, OpenCL lets programmers write applications that take advantage of a computer’s GPU for general, non-graphical purposes. GPUs, inherently parallel, have become programmable in recent years. But OpenCL’s role extends to increasingly parallel CPUs, notes Neil Trevett, vice president of mobile content at NVIDIA and president of the Khronos Group.

“Historically, you have had to use different programming frameworks for programming ... CPUs and GPUs,” says Trevett. “OpenCL lets developers write a single program using a single framework to use all of the heterogeneous parallel resources on a system.”

Those resources could include multiple CPUs and GPUs mixed together and exploited by a single application, he adds.

OpenCL’s scope includes multicore CPUs, field-programmable gate arrays, and digital signal processors. The basic approach is to use OpenCL C to write a kernel of work and employ the APIs to spread those kernels out across the available computing resources, says Trevett.
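OpenCL itself is programmed in OpenCL C, but the execution model can be sketched in plain Python: a kernel is written from the perspective of a single work-item, and a runtime call fans it out across a global index range. The helper below merely echoes the name of the real API call (clEnqueueNDRangeKernel); this is an analogy, not working OpenCL:

```python
# Python analogy for OpenCL's execution model (not actual OpenCL): a kernel
# describes the work of one work-item, and the runtime maps it across a
# global index range -- possibly across many devices. A plain loop stands in
# for the device scheduler here.

def saxpy_kernel(gid, a, x, y, out):
    """One work-item: compute a single element, indexed by its global id."""
    out[gid] = a * x[gid] + y[gid]

def enqueue_nd_range(kernel, global_size, *args):
    """Stand-in for clEnqueueNDRangeKernel: run the kernel for every id."""
    for gid in range(global_size):
        kernel(gid, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
enqueue_nd_range(saxpy_kernel, len(x), 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

On a real device the iterations run concurrently, which is why OpenCL C forbids constructs such as recursion that complicate that mapping.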

OpenCL C is based on C99 with a few modifications, says Trevett. Those include changes that let developers express parallelism and the removal of recursion, he notes.

OpenCL emphasizes power and flexibility versus ease of programming. A programmer explicitly controls memory management and has considerable control over how computation happens on a system, says Trevett. But higher-level language tools and frameworks may be built upon OpenCL’s foundational APIs, he adds. Indeed, Khronos Group has made C++ bindings available for OpenCL.

Trevett says the C++ bindings will make OpenCL more accessible. In another initiative, Khronos Group is working on an intermediate binary representation of OpenCL. The objective is to help developers who don’t want to ship source code along with the programs they write in OpenCL.

A Broader Take

Earlier this year, Intel set its Cilk Plus language on an open-source path as part of the company’s effort to make parallel programming more widely available.

Cilk Plus is an extension to C and C++ that supports parallel programming. Robert Geva, principal engineer at Intel, notes that Intel first implemented Cilk Plus in its own compiler products. Then, after early success with customer adoption, the company extended the effort to open source by implementing Cilk Plus in the GNU Compiler Collection (GCC) through a series of releases.

The Cilk Plus extension to C/C++ aims to benefit programmers by enabling composable parallelism and by exploiting hardware resources -- multiple cores and the vector units within them -- while remaining cache friendly.

Geva says that Cilk Plus provides a tasking model with a user-level “work stealing” runtime task scheduler. The work-stealing algorithm assigns tasks -- identified by the programmer as capable of executing in parallel with each other -- to OS threads. According to Intel, the dynamic assignment of tasks to threads guarantees load balancing independent of an application’s software architecture. This approach to load balancing delivers a composable parallelism model. That is, the components of a large system may use parallelism and come from independent authors, yet still be integrated into a single, parallel application.
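The spawn/sync structure Geva describes can be mimicked -- very loosely -- in Python with plain threads. This sketch has none of Cilk’s work stealing or efficiency; it only illustrates the fork-join shape of the programming model:

```python
# Crude Python stand-in for Cilk-style fork-join tasking. There is no real
# work stealing here (Cilk's scheduler is far more sophisticated); the point
# is the shape: "spawn" forks a child task, "sync" waits for it.

import threading

def spawn(func, *args):
    """_Cilk_spawn analogue: start the call on its own thread."""
    box = {}
    def run():
        box["value"] = func(*args)
    t = threading.Thread(target=run)
    t.start()
    return t, box

def parallel_sum(data, cutoff=8):
    if len(data) <= cutoff:                    # small enough: run serially
        return sum(data)
    mid = len(data) // 2
    child, box = spawn(parallel_sum, data[:mid], cutoff)  # spawn the left half...
    right = parallel_sum(data[mid:], cutoff)   # ...while continuing with the right
    child.join()                               # _Cilk_sync analogue: wait for child
    return box["value"] + right

print(parallel_sum(list(range(100))))  # 4950
```

In Cilk Plus the same structure is written with `_Cilk_spawn` and `_Cilk_sync` keywords, and the scheduler maps tasks onto a fixed pool of OS threads rather than creating a thread per spawn.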

Geva says this solves a problem for developers who tried to build complex parallel software systems without a good dynamic load-balancing scheduler and encountered hardware resource oversubscription and, therefore, poor performance.

The re-implementation of Cilk Plus in open-source GCC is intended to help with adoption among two types of developers, says Geva: one group that prefers the GCC compiler over the Intel compiler, and a second group that is comfortable with the Intel compiler but would like to have another source.

The first components of Cilk Plus to be released into open source include the language’s tasking portion and one language construct for vector-level parallelization (#pragma simd). The tasking portion comprises the compiler implementation for three keywords -- _Cilk_spawn, _Cilk_sync and _Cilk_for -- along with the runtime task scheduler and the hyperobject library. The remainder of the language will be introduced in multiple steps.

Porting to GCC will also help with Intel’s standardization objectives. The current plan is to take Cilk Plus to the C++ standards body and work on a proposal there, says Geva.

“We will be in a better position working inside a standards body with two implementations instead of one,” he explains.

A High-integrity Initiative

A newly launched language, ParaSail, focuses on high-integrity parallel programming.

Tucker Taft, chairman and chief technology officer at SofCheck, a software analysis and verification firm, designed the language. The alpha release of a compiler with executables for Mac, Linux and Windows emerged in October. Taft says the compiler isn’t intended for production use, but can be used to learn the language.

“Right now, we’re just trying to get it out there and get people interested,” says Taft.

According to Taft, creating a parallel programming language from scratch gave him the opportunity to build in safety and security. The language incorporates formal methods such as preconditions and post-conditions, which are enforced by the compiler. That approach makes ParaSail “oriented toward building a high-integrity embedded system,” notes Taft.
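ParaSail enforces contracts at compile time; a runtime approximation of the same idea can be sketched in Python with a hypothetical decorator (the names and the checking strategy here are illustrative, not ParaSail’s):

```python
# Runtime sketch of pre/postcondition contracts. ParaSail proves these at
# compile time; Python can only check them when the function is called.

def contract(pre=None, post=None):
    def wrap(func):
        def checked(*args):
            if pre is not None:
                assert pre(*args), f"precondition violated for {func.__name__}"
            result = func(*args)
            if post is not None:
                assert post(result, *args), f"postcondition violated for {func.__name__}"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0,                       # input must be non-negative
          post=lambda r, x: abs(r * r - x) < 1e-6)    # result must square back to x
def sqrt_newton(x):
    guess = x if x > 0 else 1.0
    for _ in range(60):            # Newton's method converges quickly here
        guess = (guess + x / guess) / 2
    return guess

print(sqrt_newton(2.0))  # ~1.4142135623730951; sqrt_newton(-1.0) would fail the precondition
```

The compile-time version is what makes the approach attractive for high-integrity systems: a violated contract is rejected before the software ever runs.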

In another nod to secure, safety-critical systems, ParaSail does without garbage-collected memory management. Taft says garbage collection isn’t a good match for high-integrity systems, noting the difficulty of proving that a garbage collector is “correct.”

“It is also very difficult to test a garbage collector as thoroughly as is required by high-integrity systems,” he adds.

Taft’s experience in the high-integrity area includes designing Ada 95 and Ada 2005. The U.S. Defense Department once made Ada its official language, citing its ability to create secure systems. The language has found a continuing role in avionics software.

Similarly, ParaSail could cultivate a niche in aerospace. Taft cites the example of an autopilot system for a commercial jet. He also lists systems for controlling high-speed trains, medical devices and collision avoidance systems for cars.

As for distribution methods, Taft says he is working with other companies, including one with a close association with the GCC. Taft says hooking the ParaSail front end -- parser, semantic analyzer and assertion checker -- to the GCC back end would be a natural way to make the language widely available.

Another possibility: making ParaSail available as a modeling language. In that context, ParaSail could be used to prototype a complex system that would be written in another language. 

Managing the Deluge of Personal Hand-held Devices Into the Enterprise

Until recently, many enterprise IT organizations prohibited the use of personal hand-held devices in the enterprise environment. With the consumerization of IT, those organizations now face the daunting challenge of supporting employees’ desire to access corporate information using an array of personal hand-held devices. Ten years ago, employees came to work to use great technology. Now, with the battery of consumer devices available, they often have better PCs and printers at home than they do at work. Because user expectations and needs have also changed, the enterprise must adapt.

In the enterprise, a highly mobile workforce wants to take advantage of the most up-to-date systems, services and capabilities to do their jobs, typically using hand-held devices as companion devices to extend the usefulness of their corporate-owned mobile business PCs. This allows them to access information easily from home or on the road. For example, many users want to synchronize their corporate calendars with a third-party Web-based calendar utility so they can use their personal devices to access their work calendars from anywhere. They are motivated to get their jobs done in a manner that is easy, efficient and most productive.

Employees often don’t consider the information security issues raised by such a practice; however, information security is critically important for IT. Analysis of any policy prohibiting all personal devices shows that enforcing the policy would consume extraordinary resources in software and support and would negatively impact users’ productivity.

Such an approach would require IT to verify every application before allowing a user to install it, which alone would take away much flexibility from the corporate user base. IT would also need to significantly modify corporate culture and user expectations, deploy new lab networks and install large amounts of new hardware and networking equipment. That kind of control is just not possible or productive.

Solutions Must Balance User Demand and Information Security
With each new generation of technology, IT must develop ways to help keep information secure. The challenge is to develop a policy that satisfies user demand and preserves information security to the greatest extent possible. With safeguards in place to protect information and intellectual property, employees can select the tools that suit their personal work styles and facilitate their job duties, improving productivity and job satisfaction. Since the use of personal devices is accelerating, policy needs to change to accommodate it. The best option embraces the consumerization of IT, recognizing that the trend offers significant potential benefits to both users and IT:

  • Increased productivity. Users can choose devices that fit their work styles and personal preferences, resulting in increased productivity and flexibility.
  • Greater manageability. By offering a program that users can adopt, IT knows what users are doing and can offer services that influence their behavior. This provides a clear understanding of the organization’s risk level so IT can actively manage it.
  • Enhanced business continuity. If a user’s mobile business PC is nonfunctional, a personal hand-held device provides at least a partial backup, enabling the user to continue to work productively.
  • Loss prevention. Internal data indicates that users tend to take better care of their own belongings and tend to lose personal devices less frequently than corporate-owned devices, which actually enhances information security.
  • Greater security. Rather than ignore the consumerization of IT, IT can increase information security by taking control of the trend and guiding it.

By taking control of the trend and the technology in its environment, IT can circumvent many of the security issues that would arise if it simply ignored the issue or prohibited employees from using their own devices for some of their job duties.

Addressing the Unique Security Challenges of This Workplace Trend
Recognizing the potential benefits of the consumerization of IT to both employees and IT, the best step is to identify the unique security challenges of this workplace trend, investigate user behavior and define the requirements of an IT consumerization policy. That policy must support users’ needs for mobility and flexibility by allowing personally owned hand-held devices in the enterprise now and other personally owned devices in the future.

It is relatively easy to verify and enforce which applications are running on corporate-owned hand-held devices. With personal devices, this process is not so straightforward because employees have the right to install any applications they choose. However, IT can identify certain minimum security specifications for hand-held devices that provide a level of information security that allows IT to test, control, update, disconnect, remote wipe and enforce policy:

  • Two-factor authentication required to push email
  • Secure storage using encryption
  • Security policy setting and restrictions
  • Secure information transmittal to and from the enterprise
  • Remote wipe capability
  • Some firewall and intrusion detection system (IDS) capabilities on the server side of the connection
  • Patch management and enforcement software for security rules
  • The ability to check for viruses from the server side of the connection, although the device itself may not have antivirus software
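As a rough illustration (not a real mobile device management product), screening a device against such a checklist might look like the following; the capability names are invented for the sketch:

```python
# Illustrative compliance screen: compare a device's reported capabilities
# against a minimum-security checklist like the one above. All names here
# are hypothetical stand-ins, not a real MDM schema.

REQUIRED_CAPABILITIES = {
    "two_factor_push_email",
    "encrypted_storage",
    "policy_enforcement",
    "secure_transport",
    "remote_wipe",
}

def compliance_gaps(device):
    """Return the required capabilities this device is missing, sorted."""
    return sorted(REQUIRED_CAPABILITIES - set(device.get("capabilities", [])))

phone = {
    "owner": "employee",
    "capabilities": ["encrypted_storage", "remote_wipe", "secure_transport"],
}
print(compliance_gaps(phone))  # capabilities still needed before enrollment
```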

In the case of antivirus software, we analyzed virus attacks on mobile devices and found that very few targeted corporate information; most either sent text messages or attacked the user’s phone book. Although we expect malware incidents to increase over time, the current threat level to actual corporate information is low.

Mobile Business: PCs or Thin Clients?
We have not found that the thin client computing model, which centrally stores information and allows access to that information only from specific devices, is a foolproof way to protect corporate information.

Although thin clients are appropriate for certain limited applications, in general we feel they limit user mobility, productivity and creativity. Also, many of the perceived security enhancements associated with thin clients need to be viewed with caution. In fact, many of the information security risks merely moved; they didn’t disappear. For example, thin clients usually don’t include the same level of information security protection as mobile business PCs, yet they can still connect to the Internet and export information, putting that information at risk. Therefore, the loss of productivity that comes with using thin clients is for little or no gain.

Security Considerations
One of the biggest technical challenges to implementing our policy involved firewall authentication. With IT-managed systems, authentication uses two factors: something you know (a password) and something you have (a registered mobile business PC). But when the device is unknown, you are left with only one authentication criterion.

Therefore, one of the interesting challenges of allowing personal devices in the enterprise is using information on the device to authenticate to the network without that information belonging to the user. If the employee owned the piece of information used to authenticate, IT would have no grounds for disciplinary action if the user chose to move his or her data to a different device to get access to the network. For example, the International Mobile Equipment Identity (IMEI) number on a mobile device belongs to the user if the user owns the hardware, so IT cannot use it to authenticate the device.

To address this issue, IT can send a text message to a predefined phone number, and that text message becomes the user’s password. In this scenario, the phone number is the must-have authentication factor, and the text message is the must-know authentication factor.
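A minimal sketch of that scheme, with illustrative names and no real SMS gateway behind it:

```python
# Sketch of the text-message scheme described above: the server generates a
# one-time password and "sends" it to the phone number on file (the must-have
# factor); the user types it back (the must-know factor). Illustrative only.

import hmac
import secrets

_pending = {}  # phone number -> expected one-time password

def send_challenge(phone_number):
    otp = f"{secrets.randbelow(10**6):06d}"   # 6-digit one-time code
    _pending[phone_number] = otp
    # A real system would hand the code to an SMS gateway here.
    return otp

def verify(phone_number, typed_code):
    expected = _pending.pop(phone_number, None)   # pop: a code is single-use
    # compare_digest avoids leaking information through timing differences
    return expected is not None and hmac.compare_digest(expected, typed_code)

code = send_challenge("+1-555-0100")
print(verify("+1-555-0100", code))  # True: correct code, first use
print(verify("+1-555-0100", code))  # False: a code cannot be replayed
```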

Device management also poses challenges, because one solution doesn’t fit all devices and applications. You should design your device management policy with the expectation that a device will be lost or stolen, which means the device must be able to protect itself in a hostile situation. In practice, the device is encrypted, wipes itself after a set number of wrong password attempts and can be wiped remotely. Your personal device policy should require these controls to be in place before any loss occurs.

Also, some services need greater levels of security than others. For example, the system for booking a conference room doesn’t need the high level of security required by the sales database. Therefore, the room booking system can reside on a device over which IT has less management control. You can develop a tiered management system.
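Such a tiered system might be sketched as a simple mapping from services to the management tier they require; tier numbers and service names below are illustrative:

```python
# Illustrative tiered-access model: each service declares the management tier
# it requires, and a device qualifies for every service at or below the tier
# its controls support. Tiers and names are invented for this sketch.

SERVICE_TIERS = {
    "room_booking": 1,     # low sensitivity: light device management is fine
    "corporate_email": 2,  # needs encryption and remote wipe
    "sales_database": 3,   # full management control required
}

def allowed_services(device_tier):
    """Services a device may access, given the management tier it supports."""
    return sorted(s for s, tier in SERVICE_TIERS.items() if tier <= device_tier)

print(allowed_services(1))  # ['room_booking']
print(allowed_services(2))  # ['corporate_email', 'room_booking']
```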

The consumerization of IT is a significant workplace trend that IT has been anticipating for years. You need to establish a comprehensive information security policy, train users and service desk personnel, and develop technical solutions that meet your information security requirements. These accomplishments will enable IT to take advantage of the benefits of IT consumerization without putting corporate data at risk.

To successfully accommodate employees’ desire to use personal devices in the enterprise, it is important to proactively anticipate the trend -- not ignore it or lose control of the environment by simply doing nothing. Success also hinges on an even-handed approach to policy, where each instance of personal device usage is treated consistently; it would be difficult to take action against one employee for something that is common practice.

Highly mobile users can use either their own device or a corporate-owned hand-held device as a companion to their mobile business PC. Because employees with similar responsibilities have different preferences, allowing them to use the hand-held devices that best suit their work styles increases productivity and job satisfaction.



Securing the Enterprise Better With Encryption Instructions

The Advanced Encryption Standard (AES), adopted by the U.S. government in 2001, is widely used today across the software ecosystem to protect network traffic, personal data and corporate IT infrastructure. AES applications include secure commerce, data security in databases and storage, secure virtual machine migration, and full disk encryption. According to an IDC Encryption Usage Survey, the most widely used applications are corporate databases and archival backup. Full disk encryption is also receiving lots of attention.

In order to achieve faster, more secure encryption -- which makes the use of encryption feasible where it was not before -- Intel introduced the Intel Advanced Encryption Standard New Instructions (Intel AES-NI), a set of seven new instructions in the Intel Xeon processor family and the 2nd generation Intel Core processors:

  • Four instructions accelerate encryption and decryption.
  • Two instructions improve key generation and matrix manipulation.
  • The seventh aids in carry-less multiplication.

By implementing some complex and costly sub-steps of the AES algorithm in hardware, AES-NI accelerates execution of AES-based encryption. The results are substantial performance improvements, along with optimized cryptographic libraries that independent software vendors (ISVs) can use to replace basic AES routines.

AES-NI implements in hardware some sub-steps of the AES algorithm. This speeds up execution of the AES encryption/decryption algorithms and removes one of the main objections to using encryption to protect data: the performance penalty.

To be clear, AES-NI doesn’t implement the entire AES application. Instead, it accelerates just parts of it. This is important for legal classification purposes because encryption is a controlled technology in many countries. AES-NI adds six new AES instructions: four for encryption and decryption, one for the inverse mix-columns step, and one to assist round-key generation; together they speed up the AES rounds of transformation and the generation of the round keys. AES-NI also includes a seventh new instruction, CLMUL. This instruction can speed up AES-GCM and binary Elliptic Curve Cryptography (ECC), and it assists in error-correcting codes, general-purpose cyclic redundancy checks (CRCs) and data de-duplication. It performs carry-less multiplication, also known as “binary polynomial multiplication.”
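Carry-less multiplication is easy to express in software, which also shows why a single hardware instruction helps: each set bit of one operand contributes a shifted copy of the other, combined with XOR so no carries propagate. A pure-Python sketch:

```python
# Carry-less (binary polynomial) multiplication -- the operation CLMUL
# performs in a single instruction. Shifted copies of `a` are combined with
# XOR instead of addition, so no carries propagate between bit positions.

def clmul(a, b):
    result = 0
    shift = 0
    while b:
        if b & 1:
            result ^= a << shift  # XOR in a shifted copy: no carry
        b >>= 1
        shift += 1
    return result

# Ordinary multiply vs carry-less multiply of the same operands:
print(0b101 * 0b11)        # 15 -- here the two happen to agree
print(clmul(0b101, 0b11))  # 0b1111 = 15
print(7 * 7)               # 49
print(clmul(7, 7))         # 21: 0b111 ^ 0b1110 ^ 0b11100 = 0b10101
```

The two operations agree only when no shifted copies overlap; once they do, the absence of carries makes the results diverge, which is exactly the arithmetic that GCM and CRCs are built on.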

Besides the performance benefit of these instructions, executing them in hardware provides some additional security by helping prevent software side-channel attacks. Software side channels are vulnerabilities in the software implementation of cryptographic algorithms, and they emerge in multiple processing environments (multiple cores, threads or operating systems). Cache-based software side-channel attacks exploit the fact that software-based AES holds encryption blocks, keys and lookup tables in memory. In a cache collision-timing side-channel attack, a piece of malicious code running on the platform could seed the cache and measure access timings to infer information about the key. For more information, see Intel’s white papers on the AES new instructions and on carry-less multiplication with CLMUL.

Encryption Usage Models

There are three main usage models for AES-NI: network encryption, full disk encryption (FDE) and application-level encryption. Networking applications use encryption to protect data in flight, with protocols such as SSL/TLS, IPsec, HTTPS, FTP and SSH. AES-NI also assists FDE and application-level models that use encryption to protect data at rest. All three models gain improved performance, which can enable the use of encryption where it might otherwise have been impractical due to performance impact.

In today’s highly networked world, Web servers, application servers and database back-ends all connect via an IP network through gateways and appliances. SSL is typically used to deliver secure transactions over the network. It’s well-known for providing secure processing for banking transactions and other ecommerce, as well as for enterprise communications (such as an intranet).

Where AES-NI provides a real opportunity is in reducing the computation impact (load) for those SSL transactions that use the AES algorithm. There is significant overhead in establishing secure communications, and this can be multiplied by hundreds or thousands, depending on how many systems want to concurrently establish secure communications with a server. Think of your favorite online shopping site during the holiday season. Integrating AES-NI would improve performance by reducing the computation impact of all these secure transactions.

With the growing popularity of cloud services, secure HTTPS connections are getting increased attention -- and use. The growth in cloud services is putting enormous amounts of user data on the Web. To protect users, operators of public or private clouds must ensure the privacy and confidentiality of each individual’s data as it moves between client and cloud. This means instituting a security infrastructure across their multitude of service offerings and points of access. For these reasons, the amount of data encrypted, transmitted, and decrypted in conjunction with HTTPS connections is predicted to grow as clouds proliferate.

For cloud providers, the performance and responsiveness of transactions, streaming content and collaborative sessions over the cloud are all critical to customer satisfaction. Yet the more subscribers cloud services attract, the heavier the load placed on servers. This makes every ounce of performance that can be gained anywhere incredibly important. AES-NI and its ability to accelerate the performance of encryption/ decryption can play a significant role in helping the cloud computing movement improve the user experience and speed up secure data exchanges.

Most enterprise applications offer some kind of option to use encryption to secure information. It is a common option used for email, and for collaborative and portal applications. ERP and CRM applications also offer encryption in their architectures with a database backend. Database encryption offers granularity and flexibility at the data cell level, column level, file system level, table space and database level. Transparent data encryption (TDE) is a feature on some databases that automatically encrypts the data when it is stored to the disk and decrypts it when it is read back into memory. Retailers can use features like TDE to help address PCI-DSS requirements. University and health care organizations can use it to automatically encrypt their data to safeguard social security numbers and other sensitive information on disk drives and backup media from unauthorized access. Since AES is a supported algorithm in most enterprise application encryption schemes, the use of AES-NI provides an excellent opportunity to speed up these applications and enhance security.
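The TDE idea -- encrypt on write, decrypt on read, invisibly to the application -- can be sketched as follows. A repeating-key XOR stands in for AES purely to keep the sketch self-contained; it is not secure, and a real TDE layer would use AES (ideally accelerated by AES-NI):

```python
# Toy sketch of transparent data encryption (TDE): data is encrypted as it is
# written to "disk" and decrypted as it is read back, without the caller
# doing anything. The repeating-key XOR below is a stand-in for AES only;
# it provides no real security.

def _toy_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class TransparentStore:
    def __init__(self, key: bytes):
        self._key = key
        self._disk = {}            # stand-in for on-disk pages

    def write(self, name, plaintext: bytes):
        self._disk[name] = _toy_cipher(plaintext, self._key)  # encrypt on write

    def read(self, name) -> bytes:
        return _toy_cipher(self._disk[name], self._key)       # decrypt on read

store = TransparentStore(key=b"demo-key")
store.write("ssn", b"123-45-6789")
print(store.read("ssn"))                      # b'123-45-6789' -- decrypted on read
print(store._disk["ssn"] != b"123-45-6789")   # True -- only ciphertext at rest
```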

Full disk encryption (FDE) encrypts every bit of data that goes on a disk or disk volume. While the term is often taken to mean that everything on a disk is encrypted, including the programs that boot OS partitions, the master boot record (MBR) is not, and thus this small part of the disk remains unencrypted. FDE can be implemented either through disk encryption software or an encrypted hard drive. Direct-attached storage (DAS) commonly consists of one or more Serial Attached SCSI (SAS) or SATA hard drives in the server enclosure. Since there are relatively few hard disks and interconnects, the effective bandwidth is relatively low. This generally makes it reasonable for a host processor to encrypt the data in software at a rate compatible with the DAS bandwidth requirements.

In addition to protecting data from loss and theft, full disk encryption facilitates decommissioning and repair. For example, if a damaged hard drive has unencrypted confidential information on it, sending it out for warranty repair could potentially expose its data. Consider, for instance, the experience of the National Archives and Records Administration (NARA). When a hard drive with the personal information of around 76 million servicemen malfunctioned, NARA sent it back to its IT contractor for repairs. By failing to wipe the drive before sending it out, NARA arguably created the biggest government data breach ever. Similarly, as a specific hard drive gets decommissioned at the end of its life or re-provisioned for a new use, encryption can spare the need for special steps to protect any confidential data. In a data center with thousands of disks, improving the ease of repair, decommissioning and re-provisioning can save money.

In summary, these AES-NI capabilities can make performance-intensive encryption feasible and can be readily applied to a variety of usage models.

Desktop Virtualization on the Verge

Desktop virtualization has been on the verge of adoption for years, but actual deployment has yet to catch up with the hype. One hurdle is that several technologies fall under the “desktop virtualization” umbrella, and confusion can breed inaction. There is also a dearth of capable management tools to help businesses keep all this virtualization under control, says IDC analyst Ian Song.

On the plus side, desktop virtualization is picking up the pace. Last year, some 11 million licenses were sold, according to IDC. Preliminary figures show another 7 to 8 million licenses sold in the first half of this year.

Generally speaking, desktop virtualization mimics the more popular server virtualization in that it decouples the physical machine from the software. Here are the basic subcategories of desktop virtualization, and what use cases are appropriate for each.

VDI (Virtual Desktop Infrastructure)
VDI is a double threat: It promises both centralized management (good for IT) and a customizable desktop (good for users).

“VDI lets you put the desktop into the data center, where the image can be managed easily. What’s really nice about it is an IT pro can actually create a separate desktop for all users and test out all those desktops for patches before deploying,” says Lee Collison, solutions architect for Force 3 Inc., a systems integrator based in Crofton, Md.

That is a huge draw because software vendors pump out patches and fixes that are impossible to keep tabs on, let alone test and install. The downside is that VDI presumes pretty constant connectivity to the data center, which can be a problem for road warriors.

Market leaders in VDI are VMware View and Citrix XenDesktop.

Client Hypervisors
If VDI assumes connectivity, client hypervisors pretty much assume disconnection.

Because these hypervisors, as their name implies, run on client machines, they suit situations in which users are not always tethered to the corporate LAN -- for example, companies whose outside contractors use personal machines but must also run corporate-sanctioned applications. With client hypervisors, the company can provide contractors with an image that they then run offline.

“The key phrase here is ‘disconnected.’ Users can sync up when they’re connected, but otherwise they’re free to roam,” says Ron Oglesby, CTO of Unidesk, a desktop virtualization management company.

Citrix XenClient is a “type 1” or bare-metal hypervisor that sits atop the client hardware and below the client OS. It then divvies up system resources into virtual machines (VMs) as needed. Type-2 client hypervisors create a layer above the OS, which then supports several virtual OS instances.

OS Streaming
With this older technology, sometimes called terminal services, a networked device boots a server-based operating system and streams the software it needs on a download-and-go basis. “Remote or local users on a Wyse or other device download just what they need,” says Collison.

From an administrator’s point of view, the beauty is that one image powers all users -- which is great where absolute consistency is a virtue. OS streaming can also be an inexpensive option in factory floor situations that are unsuitable for pricier and more delicate PCs. But to work well, OS streaming requires a big outbound pipe and good connections to each device, says Collison.

Purists don’t see OS streaming as desktop virtualization per se, because there is no hypervisor or virtual machine, but it often coexists with virtual desktops. Its appeal: It’s cheap and efficient.



Why Wireless Needs a Network of Networks

If small and medium business (SMB) spending on wireless data services is any indication, the U.S. economy is finally starting to rebound. By 2015, SMBs alone -- home to roughly 2 out of every 3 new jobs -- will spend 42 percent more on wireless data services than they did in 2010, predicts In-Stat, an analyst firm.

But there’s a dark lining to this silver cloud: Cellular networks are already struggling to keep up with today’s traffic. Accommodating a 42-percent increase just from SMBs -- plus whatever large enterprises and consumers need -- will be next to impossible without a fundamental change in how wireless devices get a connection.

Enter the fledgling concept of “network virtualization,” in which devices such as smartphones and tablets would constantly hop from network to network in search of the best connection, or even use two networks simultaneously to increase capacity. Depending on the application’s or user’s needs, the best network might be the fastest one; in other cases, it might be the cheapest or the most secure. Virtualization of this sort has existed for decades in wireline networking, in forms such as packet switching and least-cost routing.

But for wireless, the ability to flit between and combine networks -- automatically, seamlessly and in real time -- would be not just a new option, but a paradigm shift. Today, such switches typically require manual intervention. For example, when iPhone users attempt to download an app that’s 20 MB or larger, they get an alert telling them to switch to Wi-Fi first. That requirement highlights how today’s cellular networks are ill-equipped to handle bandwidth-intensive tasks, such as downloading large files and streaming HD videos.

Network virtualization would automate that process: The transceiver in devices such as smartphones and tablets would be constantly sniffing for all available networks nearby and then, unbeknownst to the user, automatically switch to the one that’s best suited to support whatever the user wants to do at that moment. This process wouldn’t be limited to a single air-interface technology, such as CDMA, or to a single wireless carrier. Instead, the transceiver would be scanning every band in a wide swath of spectrum -- potentially between 400 MHz and 60 GHz -- looking for potential connections, regardless of whether they use CDMA, LTE, WiMAX, Wi-Fi or some technology that has yet to be invented.
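
The scan-score-switch logic described above can be sketched in a few lines. Everything here -- the network attributes, the policy weights, the function names -- is invented purely for illustration; a real device would work from measured link metrics and carrier-defined rules:

```python
# Illustrative sketch of automatic network selection. The attributes and
# policy weights are hypothetical, not drawn from any real implementation.

def pick_network(networks, policy):
    """Score each visible network against a policy and return the best one."""
    def score(net):
        s = policy.get("speed", 0.0) * net["mbps"]
        s -= policy.get("cost", 0.0) * net["cents_per_mb"]
        s += policy.get("security", 0.0) * net["security_level"]
        return s
    return max(networks, key=score)

# What the transceiver's scan might turn up at a given moment:
scanned = [
    {"name": "LTE",   "mbps": 20, "cents_per_mb": 1.0, "security_level": 3},
    {"name": "Wi-Fi", "mbps": 50, "cents_per_mb": 0.0, "security_level": 1},
]

# A streaming app cares mostly about throughput...
video_policy = {"speed": 1.0, "cost": 0.1, "security": 0.0}
best = pick_network(scanned, video_policy)

# ...while a banking app might weight security far more heavily.
secure_policy = {"speed": 0.1, "cost": 0.0, "security": 10.0}
safest = pick_network(scanned, secure_policy)
```

The same two networks win or lose depending on the policy in force, which is the crux of the “best network for what the user wants to do at that moment” idea.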

Network virtualization would also aggregate disparate networks when that’s the best way to support what the user is doing. For example, suppose that the user launches a streaming video app and selects the 1080p HD feed of a movie and that a WiMAX network and a Wi-Fi network are both available. Individually, those networks couldn’t supply enough bandwidth to provide a good viewing experience and still have enough capacity left over to accommodate other users. So instead, network virtualization would enable the device to connect to both networks simultaneously to get enough bandwidth, without taxing either of them.

“It allows you to use the network most efficiently,” says William Merritt, president and CEO of InterDigital, one of the companies pioneering the concept of network virtualization. “The driver behind this is the huge bandwidth crunch. You have a limited supply of spectrum, so you have to figure out how to use that spectrum most efficiently.”

Network virtualization may sound simple, but the concept can’t simply be ported wholesale from the wireline domain to wireless, whose unique considerations -- particularly interference, battery life and signaling -- have no counterparts in the wired world. So, just as network virtualization could fundamentally change wireless, the concept itself must undergo some fundamental changes before it becomes a commercially viable architecture.

What’s a Cognitive Radio?
Although network virtualization sounds deceptively straightforward, making it a commercial reality is anything but. A prime example is the transceiver. Today, device vendors support multiple air-interface technologies by including a separate transceiver for each: one radio for 3G, another for Wi-Fi and still another for WiMAX. That architecture has practical limitations, including cost, complexity, physical size and battery demands.

The ideal alternative would be a cognitive radio: a single transceiver that can span dozens of bands and technologies. That goes a step beyond a software-defined radio -- something the wireless industry has been working on for more than a decade -- because a cognitive radio also senses its spectral environment and chooses among bands and technologies autonomously, rather than simply being reconfigurable in software. As a result, a cognitive radio could connect to a far wider range of networks.

Besides cognitive radios, network virtualization would also require a framework that enables voice, video and data sessions to be seamlessly handed from one network or technology to another. A nascent version of those inter-standard handoffs is available today when voice calls are passed between cellular and Wi-Fi networks.

InterDigital recognized the need for inter-standard handoffs early on. In 2007, it invested in Kineto Wireless, one of the companies developing a framework known as Unlicensed Mobile Access, which bridges GSM/GPRS, Wi-Fi and Bluetooth. Meanwhile, InterDigital was also developing additional inter-standard handoffs for other wireless technologies. That work eventually wound up being commercialized for use in SK Telecom’s UMTS and WiBro networks, and was later standardized in IEEE 802.21.

Another piece of the puzzle is back-office systems that can analyze traffic in real time and route it over the appropriate network. For example, suppose a tablet is running a videoconferencing app and a cloud-based file-sharing app simultaneously. The system might run video and audio over the lowest latency, highest bandwidth network available, while the files that the conference participants are discussing are downloaded over a slower or more secure network. InterDigital is currently working with Alcatel-Lucent and standards bodies to create those kinds of policy-control mechanisms.
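
The per-flow routing that such a policy-control system would perform can be sketched as a lookup from traffic class to ranking rule. The classes, attributes and rankings below are invented for illustration and are not a description of the mechanisms InterDigital and Alcatel-Lucent are actually building:

```python
# Illustrative sketch of policy-controlled, per-flow network routing.
# Traffic classes and network attributes are hypothetical.

POLICY = {
    # Real-time media: lowest latency first, then highest bandwidth.
    "realtime": lambda n: (n["latency_ms"], -n["mbps"]),
    # Sensitive file transfer: most secure first, then cheapest.
    "secure":   lambda n: (-n["security_level"], n["cost"]),
}

def route(flow_class, networks):
    """Pick the network that best fits this flow's requirements."""
    return min(networks, key=POLICY[flow_class])

networks = [
    {"name": "LTE",   "latency_ms": 40, "mbps": 20, "security_level": 3, "cost": 2},
    {"name": "Wi-Fi", "latency_ms": 15, "mbps": 50, "security_level": 1, "cost": 0},
]

# The videoconference's audio and video take the low-latency link, while
# the files the participants are discussing take the more secure one.
media_link = route("realtime", networks)
file_link = route("secure", networks)
```

The point of the sketch is that two flows from the same device can legitimately land on two different networks at the same time.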

“As we mature each piece, we bring them into the market, get validation that they work, with the ultimate destination that all of this comes together at some point and creates this virtual network,” says Merritt.

But Is It Cheap Enough and Secure Enough?
Although sales of smartphones and tablets remain brisk, they’ll eventually be a niche play compared to the roughly 50 billion machine-to-machine (M2M) devices that could be in service by 2020. M2M devices are essentially the guts of a cell phone attached to utility meters, shipping containers, surveillance cameras and even pets to support tasks such as location tracking, usage monitoring and video backhaul.

The M2M market is notoriously price-sensitive in terms of both hardware and service costs. One obvious question is whether a cognitive radio would be too expensive to tap that market. The answer lies in another part of the network virtualization architecture: trusted devices, which serve as gateways that other devices connect to and then through.

In M2M, the trusted device would have the cognitive radio and use Wi-Fi, Bluetooth, ZigBee or another technology to communicate with nearby M2M devices, which would be kept inexpensive by using a conventional RF design. This architecture could save money on the service side too, because though the M2M device might support only a single air-interface technology and band, it would have the ability to choose the cheapest network via the trusted gateway device.

Because network virtualization dramatically expands the number of device-network combinations, security is another key concern. Trusted devices at each network edge can help there too by identifying each device’s unique signature to determine whether it’s legitimate, and thus whether it deserves access to the network.
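
One plausible form such a signature check could take is a challenge-response exchange keyed to a secret provisioned per device, with the trusted gateway verifying an HMAC before granting access. This is a speculative sketch of the general pattern -- the registry, IDs and flow are invented, and nothing here describes the actual mechanism the article's sources are developing:

```python
# Hypothetical challenge-response check a trusted gateway might run before
# admitting a device to the network. All names and keys are invented.
import hashlib
import hmac
import os

# Per-device secrets, provisioned at manufacture or enrollment (hypothetical).
PROVISIONED_KEYS = {"meter-0042": b"per-device-secret"}

def challenge():
    """Gateway issues a fresh random nonce so responses can't be replayed."""
    return os.urandom(16)

def device_respond(key, nonce):
    """Device proves possession of its key by signing the nonce."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def gateway_verify(device_id, nonce, response):
    """Gateway recomputes the HMAC and compares in constant time."""
    key = PROVISIONED_KEYS.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = challenge()
resp = device_respond(b"per-device-secret", nonce)
ok = gateway_verify("meter-0042", nonce, resp)  # legitimate device admitted
```

Keeping this exchange at the gateway is what confines the authentication signaling to the network's edge, as the article describes.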

In cellular networks, this process would also reduce a problem that’s often overshadowed by bandwidth concerns: signaling, which can have just as much impact on the user experience. Even low-bandwidth services can generate so much signaling traffic that it clogs up the network. For example, in 2009, a single Android IM app generated so much signaling traffic that it nearly crashed T-Mobile USA’s network in multiple markets. With trusted devices, the authentication-related signaling could be confined to the network’s edge to reduce the core network’s workload.

Business and Regulatory Realities
As the pool of potential networks grows, so does the likelihood that some of them will belong to multiple companies. That’s another paradigm shift. Today, a consumer or business buys a wireless device from a carrier, which then provides service via its network(s) and those of its partners. Network virtualization upends that longstanding business model by freeing the device to connect to unaffiliated and even rival service providers.

For wireless carriers, one option might be to eliminate device subsidies -- something they’ve been trying to do for decades anyway because of the cost -- in exchange for allowing the cognitive radio to connect to any network. Another potential option is for carriers to build, buy or partner with as many networks as possible, a trend that’s already well underway in North America, Europe and other world regions.

Standards work is already laying the technological groundwork for such combinations. For example, the upcoming 3GPP Release 10 standard supports simultaneous connections to LTE and Wi-Fi, a tacit acknowledgement that no single technology can do it all.

“To some extent, those hurdles are falling away as a result of industry consolidation,” says Merritt. “Others will fall away as a result of necessity.”

Merritt argues that many of the business and regulatory hurdles are already set to fall because operators and governments are under pressure to find creative ways to reconcile bandwidth demands with a finite amount of spectrum. For example, in the Americas alone, governments will have to free up between 721 MHz and 1161 MHz of additional spectrum over the next nine years based on usage trends, according to a 2009 Arthur D. Little study. Freeing up that much spectrum -- and that quickly -- will be nearly impossible, which could make wireless carriers, telecom vendors and regulators more receptive to cutting-edge alternatives, such as network virtualization.

The result would be a wireless world that’s fundamentally different both behind the scenes and in terms of the user experience.

“Networks today are largely islands,” says Merritt. “Ultimately, the idea is you would have a pool of resources that you draw on in an intelligent, dynamic and seamless way to create what is, in effect, a virtual network.”
