Near Field Communication: Niche Tech or the Next Big Thing?

Money is the root of all technology, and Near Field Communication (NFC) is no exception. Frost & Sullivan, for example, predicts that by 2015, NFC will enable worldwide transactions totaling about $151.7 billion. In April 2011, another research firm, In-Stat, said “2011 is poised to be the year for NFC as it evolves from a concept to a strategy that is actively being pursued.”

Even so, electronic payments are just one of many potential uses of NFC. The variety of applications is one of the reasons why so many companies are readying NFC hardware and software -- and why enterprise CIOs, IT managers and developers should keep an eye on it. Two of Intel’s NFC experts -- Kent Helm and Sanjay Bakshi -- recently spoke with Intelligence in Software about what to expect from NFC over the next few years, and how it fits into a world that is populated by Bluetooth and RFID.

[Disclosure: Intel is the sponsor of this content.]

Q: Unlike Wi-Fi and cellular, NFC is designed for very-short-range communications. Exactly how short? And what are the advantages of such a limited range?

Kent Helm: Between 4 centimeters and 20 centimeters (0.2 meters). Typically, it’s 2 cm to 10 cm.

Part of the advantage of that is security. Bluetooth goes quite a bit further. So you have a lot less interference with NFC. You don’t have to worry about multiple devices interfering with one another, because it’s such a short range. It’s tougher to snoop NFC.

Q: How does NFC compare to Bluetooth in terms of establishing, and then tearing down, a connection? Obviously that’s important for applications such as cashless payments, where consumers don’t want to spend 30 seconds waiting for their NFC-equipped smartphone to handshake with a vending machine.

K.H.: The difference between Bluetooth and Wi-Fi and NFC is that the connection time with NFC is very short, and the connection is implicitly initiated by a tap, unlike Bluetooth and Wi-Fi, which require an explicit connection procedure. With Bluetooth, it can take quite a while to pair and connect devices.

In Japan, you see people tapping their phones on train payment [terminals] to get their ticket and zipping off, or buying things from kiosks rather quickly. If it works in Japan, it’s going to work everywhere else as far as speed goes, because they’re very conscious of that.
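
That “the tap is the connection” model shows up directly in how NFC is programmed. As a rough illustration, not something from the interview, here is a minimal Android sketch in Java using the platform’s foreground-dispatch API: the moment a tag is tapped, the operating system hands its NDEF payload to the foreground activity, with no discovery or pairing step. The class name is hypothetical, and a real app would also declare the android.permission.NFC permission in its manifest.

    import android.app.Activity;
    import android.app.PendingIntent;
    import android.content.Intent;
    import android.nfc.NdefMessage;
    import android.nfc.NfcAdapter;
    import android.os.Bundle;
    import android.os.Parcelable;

    public class TapReaderActivity extends Activity {
        private NfcAdapter nfc;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            nfc = NfcAdapter.getDefaultAdapter(this); // null if the device has no NFC controller
        }

        @Override
        protected void onResume() {
            super.onResume();
            // Route any tag tapped while this activity is in the foreground back to it.
            PendingIntent pi = PendingIntent.getActivity(this, 0,
                    new Intent(this, getClass()).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP), 0);
            if (nfc != null) nfc.enableForegroundDispatch(this, pi, null, null);
        }

        @Override
        protected void onPause() {
            super.onPause();
            if (nfc != null) nfc.disableForegroundDispatch(this);
        }

        @Override
        protected void onNewIntent(Intent intent) {
            // Called the instant a tag is tapped; the tap itself is the "connection."
            Parcelable[] raw = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
            if (raw != null && raw.length > 0) {
                byte[] payload = ((NdefMessage) raw[0]).getRecords()[0].getPayload();
                // Hand the payload to the app; the whole exchange typically takes well under a second.
            }
        }
    }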

Q: Why are so many analysts and vendors predicting such a sudden, sharp increase in NFC-equipped devices?

K.H.: Phones are driving the ecosystem. You’re seeing a huge expansion of NFC adoption in the phone market. You’ve got AT&T, Verizon and T-Mobile forming ISIS for U.S. NFC payments. The ecosystem is coming alive as a result of e-commerce on phones.

Google Wallet’s initiative is instrumental in driving NFC adoption in phones. Microsoft announced that NFC stacks will be native to Windows 8. That’s huge for the consumer electronics market. It means PCs, laptops, phones, tablets -- everything -- will now have NFC built in. Windows 8 will enable a new class of device-to-device collaboration experiences with just a tap. In short, the ecosystem is starting to come together.

Intel got involved because we see the ecosystem coming. It’s a good time for us to get out there and evangelize the adoption of NFC into mobile platforms: tablets, phones, mobile PCs, desktops. It’s beautiful timing because we see the adoption of e-wallet and e-card and proximity computing exploding in 2013.

Q: Battery life is a major issue with smartphones. How much of a drain is each NFC connection?

K.H.: In standby mode, a correctly designed NFC solution should not consume any power. Given that the NFC connection duration is very short -- typically a second -- the power drain is not significant.

Q: Besides cashless payments, what are some other potential NFC applications?

K.H.: Anything that involves the secure transaction of confidential information. Some obvious ones in the medical field are record transfers as a doctor or nurse enters the patient’s room. They can initiate the secure transaction of all of the medications that were administered in the past 24 hours. Another is inventory control and tracking.

Sanjay Bakshi: A doctor probably visits 20, 30 patients. Every room he goes into, he has to log in to some machine, do something and then log out. That experience can be made much more seamless. The presence of the doctor or nurse in the room is verified when they present their [NFC] badge.

Q: But a lot of those applications are done today using RFID and other existing technologies. Where does NFC fit in? For example, does it displace RFID?

S.B.: RFID and things like that probably will stay around. RFID tags can be read at long range because of the frequencies they work in, so there’s probably a place for them. The close proximity of NFC is what sets it apart from Bluetooth and things like that.

Q: One potential downside of so many companies backing NFC is that it increases the chance of interoperability problems. How are the NFC Forum and individual companies, such as Intel, working to ensure interoperability?

K.H.: The standards that are available are ISO 14443 Type A/B and the NFC Forum specifications. It’s in everybody’s interest to ensure that implementations conform to the protocols and interoperate.

Where you see a difference is in how rigid you will be about the performance of any individual device: Will it meet a certain number of centimeters in range? Will the antennas behave a certain way? There’s always room for movement on the latter. But on the protocols and conformance to security, there is no room for deviation.

S.B.: The NFC Forum recently released the SNEP (Simple NDEF Exchange Protocol) specification. That’s a very important step for enabling peer-to-peer experiences between devices from different OS vendors, for example.
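
SNEP is the transport that tap-to-share features such as Android Beam ride on. The Java sketch below, written against Android 4.0-era APIs, is only an illustration of the kind of cross-vendor, peer-to-peer push the protocol enables, not something described in the interview; the URI is a placeholder.

    import android.app.Activity;
    import android.nfc.NdefMessage;
    import android.nfc.NdefRecord;
    import android.nfc.NfcAdapter;
    import android.os.Bundle;

    public class ShareLinkActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            NfcAdapter nfc = NfcAdapter.getDefaultAdapter(this);
            if (nfc == null) return; // device has no NFC controller

            // Build a one-record NDEF message; when two phones are tapped together,
            // SNEP carries it to the peer regardless of who made the other handset.
            NdefMessage msg = new NdefMessage(NdefRecord.createUri("https://example.com/brochure"));
            nfc.setNdefPushMessage(msg, this);
        }
    }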

K.H.: This is why we got involved in NFC Forum at the level we did. It’s not so much to dictate the standards. It’s mainly to get involved with everybody and help guide the industry toward this multiplatform environment.

Photo: @iStockphoto.com/alexsl

Getting Physical with Virtual Servers

As IT shops deploy an increasing number of virtual servers, the challenge of managing both these virtual servers and the physical servers alongside them grows. Here, we explore solutions with Jerry Carter, CTO of Likewise Software, who explains the role of a well-conceived server management strategy in cloud computing, how to avoid virtual server sprawl, and tools for managing physical and virtual servers:

Q. To have a successful cloud strategy, how important is it that an IT shop manages its physical and virtual servers and the associated storage systems well?

Jerry Carter: It’s critical in order to scale the IT infrastructure to meet the business needs of the company. The concept of virtualizing the machine is about moving up the stack. You virtualize the machine and then you start running multiple VMs on one piece of hardware. Then, you virtualize the storage, but you must isolate the management of the storage resources from the application that is using that storage. If you can’t abstract those resources from the application, then you end up managing pockets of data. When you are moving from physical environments to virtual ones, you must have a solid [data/IT/storage] management strategy in mind; otherwise, your problem gets worse and management costs rise. You might end up cutting power consumption and gaining space through consolidation, but you might also increase the number of images you have to manage.

Q. How big of a problem is managing just the storage aspect of this?

J.C.: At VMworld in August, a speaker asked, ‘When you have performance problems with your VM, how many would say that over 75 percent of the time it’s a problem involving storage?’ A huge number of people raised their hands. If you just ignore the whole storage capacity and provisioning problem, then how can you manage policy to ensure you are securely protecting the data that represents the business side of the company? You must be able to apply consistent policy across the entire system; otherwise, you are managing independent pockets of storage.

Q. In thinking about managing physical and virtual servers, should IT shops start with private clouds before attempting public clouds?

J.C.: The reason people start with private clouds has to do with their level of confidence. They have to see a solution that provides a level of security for information going outside their network; not every critical application knows how to talk to the cloud. So a private cloud strategy gives you the ability to gateway between the protocols those applications can actually understand and the back-end cloud storage APIs. For instance, look at the announcement Microsoft made involving the improvements to its Server Message Block (SMB) 2.2 for Windows Server 8. Historically, server-based applications like SQL Server and IIS have required block storage that is mounted through iSCSI for the local apps to work. So what Microsoft has done is position SMB in the cloud as a competitor to block storage for those server-based applications.

Q. Is it important that Windows Server and Microsoft’s cloud strategy succeed in broadening the appeal of managing physical and virtual servers?

J.C.: If you look at most of the application workloads on virtualized infrastructures, something like 90 percent of them are running on VMware servers carrying Microsoft workloads. VMware is trying to virtualize the operating system and the hypervisor. Microsoft’s position is to enable those application workloads, because those are the things really driving business value. I think it is very important that Windows Server 8 succeeds, but I think the more critical aspect is its support for SMB 2.2.

Q. What is the state of software tools for managing physical and virtual servers right now?

J.C.: I think the maturity and growth of virtual machine management has been tremendous. When you look at things like vSphere and vCenter from VMware, which allow you to manage the hypervisor and the individual VMs, deploy them rapidly and then spin them up or down on an as-needed basis, it is impressive. But the problem that remains is in treating VMs as independent entities. What business users really care about is what is inside the VM, but the hypervisor doesn’t really deploy policy to the guest. There are some clever people doing interesting things, but generally it’s still a broad set of technologies for dealing with heterogeneous networks. I don’t think [management tools] have advanced as fast as the VM management infrastructure has.

Q. With so many more virtual servers being implemented than physical servers, how do you manage virtual server sprawl?

J.C.: First, people have to realize that it is a problem. Not only do most people have large numbers of VMs deployed that are no longer in use, but they don’t have a handle on what is actually inside of those VMs. There could be data that was created inside a particular VM for development purposes. But you have all this other data used to build a critical document just sitting out on a VM. I think the security implications are huge. If you are using individual VMs for storage, then you can have huge amounts of business-critical, sensitive data sitting on these underutilized VMs. If you aren’t doing something to manage the security, then you are vulnerable to data leakage. Some managers think, ‘All my users are storing data on the central file server, so this isn’t a problem.’ But inadvertently, users copy data locally for temporary work. The best way to address it is to have a plan in place prior to allocating new VMs, where you can apply a consistent authentication and security policy. That way, users know what they are allowed to do within a particular device.

Q. What adjustments must IT make for things like data protection as they rapidly provision more physical and virtual servers?

J.C.: Data protection can’t exist without data centralization. You can’t manage individual pockets of storage or individual VMs themselves. Another issue IT has is users don’t start removing unnecessary files from disks until the disk starts to get full. But with storage capacities ever increasing, disks never fill up, so there is never any reason to go back in and clean them up. So you can end up with the same problem caused by this virtual machine sprawl.

Q. And does unstructured data rear its ugly head in this scenario too?

J.C.: Yes. I think the biggest problem is the amount of unstructured data that exists out there; people don’t really have a handle on that. About 75 percent of data out there is unstructured. The question I always pose to users is: Given all of the data you have in your network, what is the one thing you most want to know about it? The answer is usually, ‘Well, I’d like to know I can correlate the locality of my data to prove to someone I am meeting their SLAs,’ or, ‘I want to know when people are accessing data outside their normal usage pattern.’ They need to understand data about their existing data and what access people actually have.

Q. What is a common mistake IT people make as they grow their physical and virtual server infrastructure up and out?

J.C.: Just as it’s impossible to manage individual pockets of applications, it is impossible to manage individual containers of storage within a single, large storage infrastructure. You must have something that addresses the overall problem. This is what people fail to understand when they move from a smaller infrastructure to a massive one. With this machine and storage sprawl, any cracks in the existing management techniques, software or policies become chasms as the size of the problem increases from something small to very large.

Q. Is IT looking more seriously at open-source solutions for managing physical and virtual servers?

J.C.: Open source will continue to play a predominant role in a lot of IT shops. But one of the real challenges is the amount of investment made in developing expertise on individual solutions. Another is finding people with expertise when you have turnover. There is a lot of great open-source technology available, but it is not always in product form. People become experts in the technology, but it can be risky to rely on technology expertise rather than product expertise, which can be replicated. The question is: Can it address the whole problem, or is fixing individual pain points a better way to go? I think the jury is still out on that.

Photo: @iStockphoto.com/herpens

HPC: Coming to an IT Shop Near You?

As smaller commercial IT shops gravitate toward cloud computing, they are also searching for more low-cost, raw processing power to accelerate and improve product design. One solution could be high-performance computing (HPC) systems. Cloud computing has helped remove barriers that have kept smaller companies out of the HPC market, but it has also introduced new problems involving data movement to and from the cloud, as well as concerns about the security of data as it moves off site.

But Bill Magro, director of HPC Software Solutions at Intel, believes commercial shops will likely adopt HPC systems in greater numbers once a few technical and commercial barriers are eliminated. We talked to Magro about these barriers as well as the significant opportunity HPC represents for developers. Here’s what he had to say.

[Disclosure: Intel is the sponsor of this program.]

Q. Why hasn’t HPC broken out of the academic, research and governmental markets and more into the commercial market?

Bill Magro: Actually, it has. Today, HPC is a critical design tool for countless enterprises. The Council on Competitiveness (compete.org) has an extensive set of publications highlighting the use of HPC in industry. HPC systems are often run within engineering departments, outside the corporate data centers, which is perhaps one reason why IT professionals don’t see as much HPC in industry. In any case, we believe over half of HPC is consumed by industry, and the potential for increased usage, especially by smaller enterprises, is enormous.

Q. Most of that potential revolves around what you call the “missing middle.” Can you explain what that is?

B.M.: At the high end of HPC, we have Fortune 500 companies, major universities, and national labs utilizing large to huge HPC systems. At the low end, we have designers working with very capable workstations. We’d expect high usage of medium-sized systems in the middle, but we, as an industry, see little. That is called the “missing middle.” Many organizations are now coming together to make HPC more accessible and affordable for users in the middle, but there are a number of barriers to be cleared before that can happen.

Q. Might the killer app that gets the missing middle to adopt HPC systems be cloud computing?

B.M.: When cloud first emerged, it meant different things to different people. A common concern among IT managers was: Is cloud going to put me out of a job? That question naturally led to: Does cloud computing compete with HPC? An analyst told me something funny once. He said: “Asking if cloud competes with HPC is like asking if grocery stores compete with bread.” Cloud is a delivery mechanism for computing; HPC is a style of computing. HPC is the tool, and the cloud is one distribution channel for that tool. So, cloud can help, but new users need to learn the value of HPC as a first step.

Q. So will HPC increase the commercial adoption of cloud?

B.M.: I’d turn that question around to ask, “Will cloud help increase the commercial adoption of HPC?” I think the answer is “yes” to both questions. There is a synergy there. Today, we know applications like Facebook and Google Search can run in the cloud. And new applications like SpringPad can run in Amazon’s cloud services. Before we see heavy commercial use of HPC in the cloud, we need a better understanding of which HPC workloads can run safely and competently in the cloud.

Q. So can you assume cloud-computing resources are appropriate for solving HPC problems?

B.M.: Early on the answer was “No,” but more and more it is becoming “Yes – but for the right workloads and with the right configuration.” If you look at Amazon’s EC2 offering, it has specific cluster-computing instances, suitable for some HPC workloads and rentable by the hour. Others have stood up HPC/Cloud offerings, as well. So, yes, a lot of people are looking at how to provide HPC in the cloud, but few see cloud as replacing tightly integrated HPC systems for the hardest problems.

Q. What are the barriers to HPC for the more inexperienced users in the middle?

B.M.: There are a number of barriers, and the Council’s Reveal report explores these in some detail. Expertise in cluster computing is certainly a barrier. A small engineering shop may also lack capital budget to buy a cluster and the full suite of analysis software packages used by all their clients. Auto makers, for instance, use different packages. So a small shop serving the auto industry might have to buy all those packages and hire a cluster administrator, spending hundreds of thousands of dollars. But cloud computing can give small shops access to simulation capabilities on demand. They can turn capital expenses into variable expenses.

Q. What are some of the other barriers to reaching the missing middle?

B.M.: Security is a concern. If you upload a product design to the cloud, is it secure from your competitors? What about the size of those design files? Do most SMB shops have uplink speeds fast enough to send a file up and bring it back, or is it faster to compute locally? Finally, many smaller companies have users proficient in CAD who don’t yet have the expertise to run and interpret a simulation. Even fewer have the experience and confidence to prove that a simulation yields a better result than physical design. These are the questions people are struggling with. We know we will get through it, but it won’t happen overnight.

Q. Can virtualization play a role in breaking down some of these barriers?

B.M.: HPC users have traditionally shied away from virtualization, as it doesn’t accelerate data movement or computing. But, I think virtualization does have a role to play in areas like data security. You would have higher confidence levels if your data were protected by hardware-based virtualization. Clearly, virtualization can help service providers to protect themselves from malicious clients or clients that make mistakes. It also helps with isolation -- protecting clients from each other.

Q. So developers can expect to see closer ties with HPC and virtualization moving forward?

B.M.: Traditionally, virtualization and HPC have not mixed because virtualization is used to carve up resources and consolidate use on a smaller number of machines. HPC is just the opposite: it is about aggregating resources in the service of one workload and getting all the inefficiencies out of the way so it can run at maximum speed.

Virtualization will get better, to the point where the performance overhead is small enough that it makes sense to take advantage of its management, security and isolation capabilities. Virtualized storage, networks and compute have to come together before you can virtualize HPC.

Q. Is there anything Intel can do at the chip level to help achieve greater acceptance of HPC?

B.M.: Much of the work at the chip level has been done, primarily through the availability of affordable high-performance processors that incorporate many features – such as floating-point vector units – critical to HPC. Today, the cost per gigaflop of compute is at an all-time low, and HPC is in the hands of more users and driving more innovation than ever.

The next wave of HPC usage will be enabled by advances in software. On the desktop, the combination of Intel Architecture and Microsoft Windows established a common architecture, and this attracted developers and drove scale. In HPC, there have been no comparable standards for clusters, limiting scale. As we reach the missing middle, the industry will need to meet the needs of these new users in a scalable way. Intel and many partners are advancing a common architecture, called Intel Cluster Ready, that enables software vendors to create applications compatible with clusters from a wide variety of vendors. Conversely, a common architecture also allows all the component and system vendors to more easily achieve compatibility with the body of ISV software.

Q. How big a challenge will it be to get inexperienced developers to write applications that exploit clustering in an HPC system?

B.M.: It will absolutely be a challenge. The good news is that most HPC software vendors have already developed versions for clusters.  So, a wide range of commercial HPC software exists today to meet the needs of first-time commercial customers. Also, more and more universities are teaching parallel programming to ensure tomorrow’s applications are ready, as well. Intel is very active in promoting parallel programming as part of university curricula.

Photo: @iStockphoto.com/halbergman

What Does It Take to Squeeze Legacy Desktop Apps Into Mobile Devices?

With sales of smartphones and tablets growing at exponential rates, third-party and corporate developers are looking for ways not only to create new applications for these devices, but also to port legacy applications over.

But porting applications and, as importantly, properly scaling legacy apps to work on these new platforms is easier said than done. There are a number of technical factors to take into consideration before developers can even begin that process. We talked to Bob Duffy, the community manager for the Intel AppUp Developer Program, about what some of those technical factors are and what advice he has for those developers who commit to such projects.

Q. What is the most important thing for developers to keep in mind when porting legacy desktop apps to mobile devices?

Bob Duffy: First, you must understand the capabilities of the device you are porting to. Once you understand technically what it can do, you can scale the experience you want for that device. It may be more a user-experience challenge than an engineering one. Mobile app developers have an advantage over legacy software developers when it comes to designing apps that work well for mobile devices, as they have more experience and more of a focus on the mobile use-case experience.


Q. Are there adequate tools available to get this sort of job done, or are they still lacking?

B.D.: The tools have a way to go, to be honest. I think there are tools that are well-designed for very specific platforms or languages, but you aren’t seeing the tools that make it easy to scale down an application for a particular form factor. Some development languages are getting better. You are seeing advances in things like HTML5 that allow you to auto-build your experience based on the screen size of the device.


Q. Once legacy apps are ported over, how big of a problem is it to get them to scale well on mobile devices?

B.D.: Legacy application developers have an advantage in this regard, compared to mobile-application developers. They are accustomed to developing applications that run on everything from low-end Pentium-based computers all the way up to a Core i7, so they know how to scale that type of application across the spectrum of hardware. But they don’t think about mobile users and how they are in and out of applications quickly. So the challenge for those application developers is to start thinking about their software as sort of a utility application.


Q. What sorts of improvements are most needed in these tools?

B.D.: If you are looking at an IDE or SDK for a mobile platform, developers should look at ways to emulate applications across different devices. They should look for ways to write their code once and then get it packaged and compiled for each different form factor. If developers are going from a 4-inch device to a 10-inch tablet to a 60-inch TV with their applications, they need to consider what core UI templates they need that they can easily code to.
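
To make the 4-inch-to-60-inch point concrete, here is a small Java sketch, assuming Android, of the form-factor classification such a set of UI templates might hinge on; the smallest-width measure plays the same role here that screen-size media queries play in HTML5. The enum and the thresholds are illustrative assumptions, not anything Duffy prescribes.

    import android.content.res.Configuration;

    public final class FormFactor {
        public enum Mode { PHONE, TABLET, TV }

        // Classify the device by its smallest width in density-independent pixels,
        // the same measure behind Android resource qualifiers such as sw600dp.
        // The thresholds below are illustrative, not official breakpoints.
        public static Mode classify(Configuration config) {
            int sw = config.smallestScreenWidthDp;
            if (sw >= 960) return Mode.TV;     // living-room / TV-class layout
            if (sw >= 600) return Mode.TABLET; // roughly 7-inch tablets and up
            return Mode.PHONE;
        }
    }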


Q. What can Intel and AMD do at the chip level to make it easier for developers to port and scale applications down to mobile devices?

B.D.: We can make things like threading applications easier. Increasingly we are seeing multicore chips on desktops and mobile devices. At the recent Intel Developer Forum show, Intel Chief Technology Officer Justin Rattner talked about the coming of 50-plus core microprocessors. So if developers are creating their applications in serial, it will be like having 50-lane freeways but only allowing one car on them at a time. We need to build better tools and instruction sets so they can move over to parallel programming. It will be up to the developers to figure out how they are going to build applications that run executions across 50 cores at once to take advantage of this.
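
As a deliberately simplified illustration of that serial-versus-parallel point, the Java sketch below sizes a thread pool to whatever core count the machine reports and fans the work out across it; processChunk() is a hypothetical stand-in for real per-item work.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ParallelDemo {
        // Hypothetical stand-in for real per-item, CPU-bound work.
        static long processChunk(int chunk) {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += (long) chunk * i;
            return sum;
        }

        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors(); // a few today, 50-plus tomorrow
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            List<Callable<Long>> tasks = new ArrayList<>();
            for (int chunk = 0; chunk < 200; chunk++) {
                final int c = chunk;
                tasks.add(() -> processChunk(c)); // one car per lane instead of one per freeway
            }
            pool.invokeAll(tasks); // blocks until every chunk has been processed across all cores
            pool.shutdown();
        }
    }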


Q. Is there much interest among developers in migrating their mobile phone applications to tablets, or is it more the other way around?

B.D.: It is pulling in both directions. For a while you had a number of mobile developers happily developing for phones and tablets that had similar architectures. But those same developers are now thinking, “If I have my apps on a phone and a tablet, why wouldn’t I want them on a laptop, or TV, or in an in-vehicle infotainment (IVI) system?” Those developers are already scaling and migrating their applications. They are coming to us every day for advice on how to port them across platforms and get better performance.


Q. Are there any problems porting phone and tablet apps in terms of API support? Will vendors have to come up with a different set of APIs for each platform?

B.D.: That is an issue when you are looking within one platform or OS. If I am using a bunch of iOS libraries, I am pretty cool going from the iPhone to the iPad. But if I want to go from an iPhone to a Samsung Galaxy tablet, I have to consider that I might be tied to those Apple libraries.

Developers are now thinking more about how to use standards in developing applications so they can manage a core code base and set of APIs that allow them to move from one platform to the next, from one form factor to the next.


Q. How attractive is it for developers to use emulation layers to move applications among different platforms and form factors?

B.D.: It is super-attractive. Developers generally work in one, maybe two, host development environments. It’s important to them to develop in one environment and virtually test their app on other platforms and form factors. There are hosted emulation services that allow you to see your app run on iPhone, Android tablets, and netbooks, which is very valuable. And as for getting your apps compiled for many devices, one company we work with created its own kit that allows you to write once and build for iOS, Samsung Bada, Windows 7, MeeGo, and eventually Android. Its kit, along with an interpreter, allows you to write to one set of APIs and have the application work on all the other platforms.


Q. How difficult is it to get a mobile or legacy application to work with your mobile processor, the Intel Atom processor, and with ARM-based processors?

B.D.: That is going to happen in the development environment, with the kits used to compile the application. That is where the magic happens for the architecture you are coding for. So you want to make sure you are using the right tools that can help you take advantage of various architectures. One good example here is the Android NDK from Intel, which will allow you to write native Android apps for x86-based hardware. The Opera browser, by Opera Software, is something we saw recently using the NDK to take full advantage of x86 on Android. There are also game tools like Unity or App Game Kit, which I mentioned earlier, that are making it simple to write once for many devices, platforms and architectures. So the tools are getting better, allowing cross-platform mobile developers and legacy software developers to focus on writing good apps, and then to consider the architecture differences as needed.
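
To ground that, here is a small sketch of what the Java side of an NDK-built component typically looks like; the library name and method are hypothetical. The app code stays the same whether the native library underneath was compiled for x86 or for ARM -- the architecture choice lives in the build, not in the application logic.

    public class ImageFilter {
        static {
            // Loads libimagefilter.so for whatever ABI the device reports (x86, ARM, ...).
            System.loadLibrary("imagefilter");
        }

        // Implemented in C/C++ and compiled once per target ABI by the NDK build
        // (for example, APP_ABI := x86 armeabi-v7a in the project's Application.mk).
        public static native void sharpen(int[] pixels, int width, int height);
    }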

The Long-term Commitment of Embedded Wireless

Most businesspeople replace their cell phones roughly every two years. At the other extreme are machine-to-machine (M2M) wireless devices, which often are deployed for the better part of a decade and sometimes even longer. That’s because it’s an expensive hassle for an enterprise to replace tens of thousands of M2M devices affixed to trucks, utility meters, alarm systems, point-of-sale terminals or vending machines, to name just a few places where today’s more than 62 million M2M devices reside.

Those long lifecycles highlight why it’s important for enterprises to take a long view when developing an M2M strategy. For example, if your M2M device has to remain in service for the next 10 years, it could be cheaper to pay a premium for LTE hardware now rather than go with less expensive gear that runs on GPRS or CDMA 1X, the networks of which might be phased out before the end of this decade.

Confused? Mike Ueland, North American vice president and general manager at M2M vendor Telit Wireless Solutions, recently spoke with Intelligence in Software about how CIOs, IT managers and developers can sleep at night instead of worrying about obsolescence and other pitfalls.

Q: M2M isn’t a new technology. For example, many utilities have been using it for more than a decade to track usage instead of sending out armies of meter readers every month. Why aren’t more enterprises following suit?

Mike Ueland: It’s very similar to what it was like before we had the Internet, Ethernet and things like that, where you had all of these disconnected devices. There’s an incredible opportunity, depending on what the business and application are, to connect those devices and bring more information back, as well as being able to provide additional value-added services.

There have been some significant improvements in terms of technology and the cost to deploy an M2M solution. All of the M2M solutions have gotten much more mature. There are so many more people in the ecosystem to support you.

But we haven’t seen the uptake within the enterprise community. Part of that is because we’ve been in such a recessionary period over the past couple of years. No one really wants to start new projects.

Q: What are some pitfalls to avoid? For example, wireless carriers have to certify M2M devices before they’ll allow them on their network. How can enterprises avoid that kind of red tape and technical nitty-gritty?

M.U.: The mistake that we see a lot is that people try to bite off too much. They’ll say: “I need this custom device that needs to do this. Therefore I need to build a custom application.” They overcomplicate the solution.

There are so many off-the-shelf devices out there that can be quickly modified for your application. One of the benefits of that is you reduce technical risk and time to market because often those devices will be pre-certified on a carrier’s network. There are also a number of M2M application providers out there -- like Axeda, ILS and others -- that have an M2M architecture. That allows people to very quickly build their own application based on this platform.

Q: Price is another factor that’s kept a lot of enterprises out of M2M. How has that changed? For example, over the past few years, many wireless carriers have developed rate plans that make M2M more affordable.

M.U.: Across the board -- device costs, module costs, air-time costs -- all of these costs have come down probably by half in the past few years, if not more. So any business cases that were done two or three years ago are outdated.

In addition, there have been great technological improvements. For instance, the device sizes have continued to shrink. They use less power. So it opens it up for a whole range of applications that might not have been possible in the past.

Q: About 10 years ago, a lot of police departments and enterprises were scrambling to replace their M2M equipment because carriers were shutting down their CDPD networks. Today, those organizations have to make a similar choice: Do I deploy an M2M system that uses 2G or 3G and hope that those networks are still in service years from now? Or should I go ahead and use 4G now, even though the hardware is expensive, coverage is limited and the bandwidth is far more than what I need?

M.U.: It depends on the application. For instance, AT&T is not encouraging new 2G designs. They’ve deployed a 3G network, and they’re starting to deploy a 4G network. They’d really like people to move in that direction. Verizon and Sprint have their equivalent version of 2G: CDMA 1X. Both Verizon and Sprint have publicly declared that those networks will be available through 2021. At the end of each year, they’re going to reevaluate that.

So depending on the planned lifetime of your application and where you plan to deploy it -- North America or outside -- 2G networks will have a varying degree of longevity. Having said that, if you look beyond Verizon in the U.S., there are not many significant 4G deployments. We’re in the early days of 4G.

As module providers, we rely on lower-cost handset chipset pricing to drive M2M volumes. The cost of an LTE module is over $100 right now. Compare that to 2G, which is under $20 on average. It’s a big gap.