The New Mobile Landscape

The word “convergence” won’t mean quite the same thing to the next generation as it does to us. That’s because kids today are coming of age in a time when phones are used to play video games, computers double as private movie houses, and televisions are flipped on to browse the Web. Unlike us, they’ll live in a world where “ubiquity” is the watchword -- a world surrounded by devices.

Paring Down

The most interesting development of the ubiquity age isn’t that we’re surrounded by screens and able to connect to the Internet in myriad ways, from smartphones to televisions to tablets. Most fascinating is that no one device serves as the ultimate Swiss Army Knife, acting as a substitute for all the rest.

Rather, we collect these devices the way golfers keep clubs. On the go, we check movie times on mobile phones. On the couch, we research that movie on a laptop PC or tablet, or we play a game of “Words With Friends” while our significant other watches the big game. Rather than seek a one-size-fits-all solution for computing, consumer behavior indicates that there’s a time and a place for every kind of screen.

All these screens mean that portability and power have both become major considerations. Laptop shipments exceeded those of desktops in 2008, and high-end “desktop replacements” -- notebooks with large screens and enough horsepower to handle any computing task -- became the primary computers for many consumers. And a new designation, the netbook, sought to lower the barrier to entry for mobile computing by offering compact laptop PCs at a fraction of the price.

New Device: Ultrabooks

Now, there’s a new category of portable PC to compete with the upstart tablet PC and other flavors of laptop. The ultrabook format is light, thin, fast and portable -- an antidote to the bulk of the traditional laptop PC. Ultrabook PCs are less than 0.8 inch thick, weigh around 3.1 pounds and have a battery life of five to eight hours.

“The ultrabook is much more than just a product segment,” says Jim Wong, president of Acer Inc. “It’s a new trend that will become the mainstream for mobile PCs.”

The model for this new kind of laptop is Apple’s MacBook Air, introduced in 2008. Apple sold 1.1 million units of its super-thin laptop, and it managed that feat at premium pricing. The next phase for the ultrabook is to build broader appeal by offering benefits similar to Apple’s machine at a consumer-friendly price.

Toshiba’s Portege Z835, which debuted in November of last year, dipped in price to $699 (after a $200 rebate) at Best Buy. Competing ultrabooks include the Hewlett-Packard Folio 13 and the Acer Aspire S3, which both run for about $900. The entry-level MacBook Air is $999.

Early Buzz

Initial reception to the new ultrabooks has been positive. Rob Beschizza of Boing Boing called the new ASUS ZENBOOK “very good,” but he cautioned against laptops that adopt the ultrabook moniker while straying from the design specs that make the new class of computers so attractive in the first place.

Dilip Bhatia, vice president of Lenovo’s ThinkPad business unit, is excited about his company’s contribution to the field. “The ThinkPad X1 Hybrid and T430u ultrabooks represent the next generation in thin and light computing,” he says. “From small businesses that literally live on the road to corporate professionals working in a managed environment, these new crossover laptops fundamentally change the way people think about mobile computing technology.”

Matt McRae, Vizio’s chief technology officer, recently told Business Week that his company’s entry in the ultrabook game was meant to shake things up: “It’s very similar to TV -- we want to get in there and disrupt it,” says McRae. “We think most PCs have been designed for small-business users, and that others have not done a very good job of making them entertainment devices.”

With all the new ultrabook models that appeared at CES last week, it’s now a matter of discovering just how the ultrabook will find its place in our lives next to the televisions, tablets, smartphones and desktops many consumers already own. Nobody could have predicted this 10 years ago, but it seems pretty clear: There’s still plenty of room for this light new computing upstart.

Will a Mobile OS Update Break Your Apps?

It’s one of the biggest headaches in mobile app development: The operating system (OS) vendor issues an update that immediately renders some apps partly or completely inoperable, sending developers scrambling to issue their own updates to fix the problem. For instance, remember when Android 2.3.3 broke Netflix’s app in June, as iOS 5 did to The Economist’s in October? These examples show how breakage can potentially affect revenue -- especially when it involves an app that’s connected to a fee-based service. In the case of enterprise apps, breakage can also have a bottom-line impact by reducing productivity and inundating the helpdesk.

Tsahi Levent-Levi, CTO of the technology business unit at videoconferencing vendor RADVISION, has spent the past few years trying to head off breakage. His company’s product portfolio includes an app that turns iOS tablets and smartphones into endpoints. With an Android version imminent, his job is about to get even more challenging. Levent-Levi recently spoke with Intelligence in Software about why app breakage is so widespread and so difficult to avoid.

Q: What exactly causes apps to break?

Tsahi Levent-Levi: The first thing to understand is that when you have a mobile platform, usually you have two sets of APIs available to you. The first set is the one that’s published and documented. The other is one that you sometimes need to use, and it is undocumented. When an API is not documented or not part of the official API, it means the OS vendor can and will change it over time to fit its needs.

For example, we’re trying to reach the best frame rate and resolution possible. To do that well, you need to work at the chip level. So you go into the Android NDK, where you write C code and not Java code. Then you go one step lower to access the physical APIs and the undocumented parts of the system level, which is where the chipset vendors are doing some of the work.

Even a different or newer chip from the same vendor is not going to work in the same way. Or the ROM used by a specific handset with a given chip is going to be different from the ROM you get on another handset, and the APIs are going to be different as well.

Q: So to reduce the chances that their app will break, developers need to keep an eye on not only what OS vendors are doing, but also what chipset and handset vendors are doing.

T.L.: Yes, and it depends on what your application is doing. If you’d like to do complex video things, then you need to go to this deep level.

I’ll give you an example. With Android, I think it was version 2.2 -- those handsets had no front-facing camera. Then iPhone 4 came out with FaceTime, and the first thing that Android handset manufacturers did was add a front-facing camera. The problem was that Android had no APIs that allowed you to select a camera. If you really wanted to access that front-facing camera, you had to use APIs that were specific to that handset. When 2.3 came out, this got fixed because they added APIs to select a camera.
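
(For reference, here is a minimal sketch of the camera-selection APIs that arrived in Android 2.3, API level 9. The helper class and method names are illustrative, not taken from any shipping product; callers would still be responsible for releasing the camera.)

    import android.hardware.Camera;

    /** Illustrative helper: select the front-facing camera using the APIs added in Android 2.3 (API level 9). */
    public class FrontCameraHelper {

        /** Returns an open front-facing Camera, or null if the handset has none. */
        public static Camera openFrontCamera() {
            Camera.CameraInfo info = new Camera.CameraInfo();
            int count = Camera.getNumberOfCameras();          // not available before API level 9
            for (int id = 0; id < count; id++) {
                Camera.getCameraInfo(id, info);
                if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
                    return Camera.open(id);                   // open the matching camera
                }
            }
            return null;                                      // no front-facing camera on this handset
            // Callers must call release() on the returned Camera when finished.
        }
    }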

Platforms progress very fast today because there’s a lot of innovation in the area of applications. Additional APIs are being added by the OS vendors, and they sometimes replace, override or add functionality that wasn’t there before. And sometimes you get APIs from the handset vendor or chipset vendor and not from the OS itself. So there is a variance in the different APIs that you can use or should be using.

If the only thing you’re doing is going to a website to get some information and displaying it on the screen, there isn’t going to be any problem. But take streaming video, for example. Each chipset has a different type of decoder and slightly different behavior from the next. This causes problems.

It’s also something caused by Google itself. When Google came out with Android, the media system that they based everything on was OpenCORE. At one point -- I don’t remember if it was 2.1, 2.2 or 2.3 -- they decided to replace it with something totally different. That meant all of the applications that used anything related to media either broke or required a rewrite. The new interface is called Stagefright, and there are rumors that this is going to change in the future as well.

Q: With so many vendor implementations and thus so many variables, is it realistic for developers to test their app on every model of Android or iOS device? Or should they focus on testing, say, the 25 most widely sold Android smartphones because those are the majority of the installed base?

T.L.: You start with the devices that most interest you, and then you expand the list as you get problem reports from customers. Today, for example, I have a Samsung Galaxy S. When I go to install some applications from the Android Market, it tells me that I can’t because my phone doesn’t support them. That’s one way Google is trying to deal with it, but it doesn’t always work because of the amount of variance.

In terms of how the developer should approach it, you should start from the highest level of abstraction that you can in order to build your application. The next step would be to go for a third-party developer framework like Appcelerator, which allows you to build applications using HTML5, JavaScript and CSS. You take the application that you build there, and they compile it into an Android or an iOS application. If you can put your application in such a scheme, you will run on the largest number of handsets to begin with, because these types of frameworks are built for this purpose.

If you can’t do that, then you’ll probably need to do as much as you can in Android at the software development kit (SDK) level. If that isn’t enough, then you go down into the native development kit (NDK) level. And if that isn’t enough, then you need to go further down into the undocumented system drivers and such. And you build the application in such a way that the lower you go, the less code you have there.
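
(To make that layering concrete, here is a hedged Java sketch of the idea, assuming a hypothetical native library named “mediafast”: the bulk of the logic stays at the SDK level, and the native layer shrinks to a single narrow entry point with a pure-SDK fallback.)

    /**
     * Illustrative sketch of the layering described above: keep most logic at the
     * SDK level and confine native (NDK) code to one narrow, optional entry point.
     * "mediafast" is a hypothetical native library used only for illustration.
     */
    public class VideoDecoderBridge {

        private static boolean nativeAvailable;

        static {
            try {
                System.loadLibrary("mediafast");   // hypothetical NDK library
                nativeAvailable = true;
            } catch (UnsatisfiedLinkError e) {
                nativeAvailable = false;           // fall back to the documented SDK path
            }
        }

        // The only code living below the SDK: a single native entry point.
        private static native boolean nativeDecodeFrame(byte[] frame);

        /** Decode a frame, preferring the native path when it is present. */
        public static boolean decodeFrame(byte[] frame) {
            if (nativeAvailable) {
                return nativeDecodeFrame(frame);
            }
            return decodeWithSdk(frame);
        }

        private static boolean decodeWithSdk(byte[] frame) {
            // Placeholder for a pure SDK-level implementation.
            return frame != null && frame.length > 0;
        }
    }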

Is Ubicomp at a Tipping Point?

The Palo Alto Research Center (PARC) coined the term “ubiquitous computing” in 1988, but it took the next two decades for the PARC researchers’ vision to start becoming a part of the workplace and the rest of everyday life. As manager of PARC’s Ubiquitous Computing Area, Bo Begole is shepherding ubicomp into the mainstream with some help from a variety of other trends -- particularly the growth in cellular and Wi-Fi coverage, smartphone adoption and cloud computing.

Begole recently discussed what enterprises need to consider when deciding where and how to implement ubicomp, both internally and as a way to better serve their customers. One recommendation: People need to feel that they’re always in control.

Q: The term “ubiquitous computing” means a lot of different things, depending on whom you’re talking to. How do you define it? Where did the term originate?

Begole: It kind of came on the heels of personal computing, which had superseded mainframe computing. So there was that pattern of naming. But what was distinct about ubiquitous computing in contrast to those others is that “mainframe computing” and “personal computing” always implied that a computer was the center of your focus.

I wanted to be able to conduct computing tasks ubiquitously: wherever you were, and whenever you needed it, as you were conducting your daily life. It’s more about ubiquitous working or living and using computer technologies in the course of that.

Q: Ubicomp dovetails nicely with another trend: cloud computing, where you’ve always got access to a network of computers.

Begole: Right. The cloud is an enabler of ubiquitous computing because you need constant access to the information services that you might utilize at that point. The PARC researchers, having lived with the personal computer for 15 years at that point, were envisioning this paradigm, and they saw that it was going to involve ubiquitous devices and also ubiquitous wireless networks. So they started to prototype those types of devices and services.

Q: Ubicomp seems a bit like unified communications: For at least the past 15 years, visionaries and vendors were touting unified communications as the next big thing. But it wasn’t until a few years ago that enterprises had deployed enough of the necessary building blocks, such as VoIP and desktop videoconferencing. Now unified communications is common in the workplace. Is ubicomp at a similar tipping point?

Begole: That’s a good analogy, because unified communications required a critical mass of adoption of certain services, and then the next stage was to make them all interoperable. That’s what has happened with ubiquitous computing too. The early researchers saw the inevitability of these pervasive devices and networks, and they prototyped some. But it has taken a while for a critical mass of smart devices and wireless networks to exist.

I’d say that the tipping point was around 2005 to 2007, when smartphones came out. GPS chips embedded in those phones really enabled those kinds of context-aware services that ubiquitous computing research had been pushing for a while. The next wave is very intelligent context-aware services. The Holy Grail is a personal assistant that has so much intimate knowledge of what matters to you that it can sometimes proactively access information for you and enable services that you’re going to need in the very near future. That’s where things are going.

Q: Here’s a hypothetical scenario: My flight lands. I turn on my smartphone, which checks my itinerary and uses that to give me directions to baggage claim, the rental car desk and my hotel. But it also tells me that on the way to baggage claim, there’s a Starbucks, which it knows I like because of all the times I’ve logged in from Foursquare. Is that an example of the types of day-to-day experiences that people can expect from ubicomp?

Begole: Even a little beyond that. Maybe you like an independent coffee brewer in your local area. Now you’re in a new local area, so rather than recommending Starbucks, it will find the most popular local brewer because it knows the types of things you’re interested in.

It’s connecting the dots, which is what we expect humans to be able to do, and it’s possible for computers to be able to do it. They have access to all this kind of information. The smartphone is a good hub for that because it’s got access to all of your digital services, and it has intimate knowledge about your physical situation at any time.

Q: That scenario also highlights one of the challenges for ubiquitous computing: balancing the desire for personalization with concerns about privacy.

Begole: Ubiquitous computing services have to be respectful of the concerns that people have. Otherwise, it’s going to limit the adoption. It’s a little different for consumers than for enterprises. Employees probably have a little less expectation about the privacy of the data they’re exchanging on enterprise computing systems, but they may still have concerns about access to that information.

We’ve done deployments in enterprises with technologies that were observing situations in an office to see if you were interruptible for communications. To put people at ease with the sensors that were reading the environment, we put an on-off switch on the box so that, at any time, they could opt out completely. In the entire three-month deployment, nobody used that switch. You might take from that that the capability isn’t important, but it is: It gives people the comfort of knowing they can take control of their information.

Q: Interesting. Tell us some more about that deployment.

Begole: We did that at Sun Microsystems. It was connected to Sun’s instant-message system. We were using that to provide the presence information and the interruptibility of the people on the IM service. That made it easier for remotely distributed teams to have awareness of when they could reach people: You could see not just whether somebody was online and in their office, but whether they were available for you to call them right now.

We took it a step further: If they weren’t available right now, we’d have the system predict when they were most likely to become available. That was based on statistical analysis of their presence patterns over time. That’s the kind of intelligence that ubiquitous computing expands.


LTE: Not So Fast

This December marks two years since the Long Term Evolution (LTE) cellular technology made its worldwide commercial debut. It will be at least another two before its coverage in the U.S. and beyond is comparable to its third-generation (3G) predecessors. For developers of apps and other mobile software, that timeline is one of several factors to keep in mind when deciding when, how and where to use LTE.

For example, LTE’s download speeds are roughly 10 times as fast as those of 3G technologies such as CDMA2000 1xEV-DO Rev. A, according to carriers with commercial LTE networks. (Verizon Wireless says its LTE network has average download speeds of 5 to 12 Mbps, although some reviewers report averages of 15 Mbps and peaks of 23 Mbps.)

Yet it’s far too early in LTE’s global rollout to design mobile apps and software with the assumption that LTE speeds are the rule rather than the exception. Instead, apps need to be designed to work well in a world where fallback is common. That’s the process of dropping from LTE to EV-DO, HSPA or even a 2.5G network, such as EDGE or GPRS. (One wrinkle: In the case of HSPA, the fallback actually could be a step up because a few HSPA networks, such as Telstra’s, currently are faster than LTE.)

The trick is to design the app or software so that it masks the fallback as much as possible. But that’s easier said than done, especially in the case of bandwidth-intensive services, such as video, where a sudden drop to half or one-tenth of the previous data rate can be painfully obvious.

“You can do things such as lower the bit rate for your video, or do pre-caching when you’re in LTE coverage areas or when you’re in home or business Wi-Fi coverage,” says Larry Rau, director of Verizon Wireless’ Innovation Centers, which help developers and other companies create LTE-compatible hardware and software.

Depending on the software, dealing with fallback could be as simple as designing it to do certain things only under certain conditions. For example, an antimalware or navigation app could be written so that it doesn’t attempt to download major updates unless the device has LTE coverage.
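
As a hedged illustration of that idea on Android, the sketch below checks the current network type before permitting a large download. The class and method names are illustrative assumptions, and a production app might also accept Wi-Fi via ConnectivityManager.

    import android.content.Context;
    import android.telephony.TelephonyManager;

    /** Illustrative policy: only permit large downloads while attached to an LTE network (API level 11+). */
    public class UpdatePolicy {

        /** Returns true if the device is currently on LTE. */
        public static boolean onLte(Context context) {
            TelephonyManager tm =
                    (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
            return tm != null && tm.getNetworkType() == TelephonyManager.NETWORK_TYPE_LTE;
        }

        /** Run the major update only when LTE is available; otherwise defer it. */
        public static void maybeDownloadMajorUpdate(Context context, Runnable download) {
            if (onLte(context)) {
                download.run();   // the large transfer is acceptable here
            }
            // Otherwise defer; a real app might also accept Wi-Fi, checked via ConnectivityManager.
        }
    }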

Don’t Overlook the Uplink and Latency

Although LTE is best known for its download speeds, it’s a mistake to overlook its upload capabilities. Those are key for business and consumer applications where the smartphone, tablet or other device frequently sends large amounts of data or video.

Verizon Wireless, for example, says its LTE customers can expect average upload speeds of 2 to 5 Mbps. Some reviewers say they’ve achieved uplink speeds far faster than Verizon’s claimed rates: 10 to 12 Mbps in some areas. By comparison, Verizon promises 3G upload speeds of 500 to 800 Kbps. What all of those figures mean is that, as with the downlink side, it’s important to design apps and other software to provide a good user experience even after falling back to a network with slower uplink capabilities.

Latency is another aspect of LTE that often gets overlooked in the fixation on download speeds. “We see things down in the 30-, 40-, 50-millisecond range,” says Rau. “I would see more on the lower end of that.”

Depending on the fallback network, latency could be 50 to 70 milliseconds -- or even double that, in the case of older technologies. Compared to data rates, big, sudden changes in latency can be even tougher to deal with. “If you’re doing a multiplayer game, and you’re reliant on that LTE latency, falling back into our EV-DO coverage is something you need to contend with,” says Rau.

Don’t Be a Hog

As with multicore mobile processors, LTE’s extra horsepower makes it tempting to stop worrying about bandwidth when developing apps and software. But at least one best practice applies to both 3G and LTE: Don’t constantly ping the network. Even though each of those connections might not require much bandwidth, together they create unnecessary signaling traffic, which can be just as bad -- and sometimes worse. Take, for example, the overly chatty instant-messaging app that single-handedly almost crashed T-Mobile’s 3G network in 2009.

It’s here that LTE carriers can not only help, but also want to help.

“We can help developers learn how to manage the device so your app isn’t doing things like pinging a server,” says Rau. “You don’t want to do things like that. You get online, get the information you need and get offline.”
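
One way to follow that advice is sketched below in plain Java, under the assumption of a hypothetical reporting server: queue events locally and send them in a single batched connection on a coarse interval, rather than opening a connection per event. The five-minute interval is arbitrary; the point is that the radio powers up once per batch.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    /** Illustrative reporter: queue events locally and send them in one batched connection per interval. */
    public class BatchedReporter {

        private final ConcurrentLinkedQueue<String> pending = new ConcurrentLinkedQueue<String>();
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        /** One radio wake-up every five minutes instead of one per event. */
        public void start() {
            scheduler.scheduleWithFixedDelay(new Runnable() {
                @Override
                public void run() {
                    flush();
                }
            }, 5, 5, TimeUnit.MINUTES);
        }

        /** Record an event without touching the network. */
        public void report(String event) {
            pending.add(event);
        }

        private void flush() {
            List<String> batch = new ArrayList<String>();
            String event;
            while ((event = pending.poll()) != null) {
                batch.add(event);
            }
            if (!batch.isEmpty()) {
                sendBatch(batch);   // a single connection carries the whole batch
            }
        }

        private void sendBatch(List<String> batch) {
            // Placeholder: post the batch to your server, then let the connection close.
        }

        public void stop() {
            scheduler.shutdown();
        }
    }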

Another beneficiary of that kind of efficient design is the battery: When the smartphone or tablet isn’t constantly connecting to the network, its transceiver uses far less power. That’s important because, with both 3G and LTE, the more dependent business users and consumers become on their mobile devices, the less willing they are to be tethered to a wall socket.

Smartphones and Tablets Go Multicore: Now What?

Within three years, nearly 75 percent of smartphones and tablets will have a multicore processor, predicts In-Stat, a research firm. This trend is no surprise, considering how mobile devices are increasingly handling multiple tasks simultaneously, such as serving as a videoconferencing endpoint while downloading email in the background and running an antimalware scan.

Today’s commercially available dual-core mobile devices include Apple’s iPad 2, HTC’s EVO 3D and Samsung’s Galaxy Tab, and some vendors have announced quad-core processors that will begin shipping next year. Operating systems are the other half of the multicore equation. Android and iOS, which are No. 1 and No. 2 in terms of U.S. market share, already support multicore processors.

“iOS and Android are, at their core, based on multithreaded operating systems,” says Geoff Kratz, chief scientist at FarWest Software, a consultancy that specializes in system design. “Android is basically Linux, and iOS is based on Mac OS, which in turn is based on BSD UNIX. That means that, out of the box, these systems are built to support multicore/multi-CPU systems and multithreaded apps.”

For enterprise developers, what all this means is that it’s time to get up-to-speed on programming for mobile multicore devices. That process starts with understanding multicore’s benefits -- and why they don’t always apply.

Divide and Conquer

Performance is arguably multicore’s biggest and most obvious benefit. But that benefit doesn’t apply across the board, because not every app can take advantage of multiple threads. So when developing an app, an important first step is to determine what can be done in parallel.

“This type of programming can be tricky at first, but once you get used to it and thinking about it, it becomes straightforward,” says Kratz. “For multithreaded programming, the important concepts to understand are queues (for inter-thread communications) and the various types of locks you can use to protect memory (like mutexes, spin locks and read-write locks).”

It’s also important to protect shared data.

“Most programmers new to multithreaded programming assume things like a simple assignment to a variable (e.g., setting a ‘long’ variable to ‘1’) is atomic, but unfortunately it often isn’t,” says Kratz. “That means that one thread could be setting a variable to some value, and another thread catches it halfway through the write, getting back a nonsense value.”
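
A brief Java illustration of the hazard Kratz describes, along with the usual remedies; the class is a toy sketch, not production code. In Java, writes to a plain (non-volatile) long are not guaranteed to be atomic, while volatile longs and AtomicLong are safe.

    import java.util.concurrent.atomic.AtomicLong;

    /** Illustrative only: a plain long may be written in two halves; volatile and AtomicLong avoid that. */
    public class CounterExample {

        private long unsafeValue;                                 // may be written in two 32-bit halves
        private volatile long volatileValue;                      // volatile longs are written atomically
        private final AtomicLong atomicValue = new AtomicLong();  // atomic reads, writes and increments

        public void update(long next) {
            unsafeValue = next;     // a reader on another thread could observe a half-written value
            volatileValue = next;   // safe for simple assignment and read
            atomicValue.set(next);  // safe, and also supports incrementAndGet() and friends
        }

        public long read() {
            return atomicValue.get();
        }
    }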

It’s here that hardware fragmentation can compound the problem. For example, not all platforms handle such assignments atomically. So an app might run flawlessly on an enterprise’s installed base of mobile devices, only to run into trouble when hardware without atomic assignments is added to the mix.

“Using mutexes (which allow exclusive access to a variable) or read/write locks (which allow multiple threads to read the data, but lock everyone out when a thread wants to write) are the two most common approaches, and mutexes are by far the most common,” says Kratz. “For Android programmers, this is easily done using the ‘synchronized’ construct in the Java language, protecting some of the code paths so only one thread at a time can use it.”
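
Here is a short sketch of both styles in Java: the built-in synchronized construct acting as a mutex, and a read-write lock from java.util.concurrent. The shared data is illustrative only.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    /** Illustrative sketch of the two locking styles: a synchronized mutex and a read-write lock. */
    public class LockingExamples {

        // Mutex style: 'synchronized' guards this counter.
        private long counter;

        public synchronized void increment() {
            counter++;                      // only one thread at a time runs this
        }

        public synchronized long counter() {
            return counter;
        }

        // Read-write style: shared map with many readers, exclusive writers.
        private final Map<String, String> settings = new HashMap<String, String>();
        private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

        public String get(String key) {
            rwLock.readLock().lock();
            try {
                return settings.get(key);   // concurrent reads are allowed
            } finally {
                rwLock.readLock().unlock();
            }
        }

        public void put(String key, String value) {
            rwLock.writeLock().lock();
            try {
                settings.put(key, value);   // writers lock everyone else out
            } finally {
                rwLock.writeLock().unlock();
            }
        }
    }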

For iOS, options include using POSIX mutexes, the NSLock class or the @synchronized directive.

“Ideally, a programmer should minimize the amount of data shared between threads to the absolute bare minimum,” says Kratz. “It helps with performance and, more importantly, makes the app simpler and easier to understand -- and less liable to errors as a result.”

No Free Power-lunch

The growing installed base of mobile multicore devices doesn’t mean developers can now ignore power consumption. Just the opposite: If multicore enables capabilities that convince more enterprises to expand their use of smartphones and tablets, those enterprises will also expect the devices to run for an entire workday between charges. So use the extra horsepower efficiently -- for example, by managing queues carefully.

“You want to use a queue construct that allows anything reading from the queue to wait efficiently for the next item on the queue,” says Kratz. “If there is nothing on the queue, then the threads should basically go idle and use no CPU. If the chosen queue forces you to poll and repeatedly go back and read the queue to see if there is data on it, then that results in CPU effort for no gain, and all you’ve done is use up power and generate heat. If a programmer has no choice but to poll, then I would recommend adding a small sleep between polling attempts where possible, to keep the CPU load down a bit.”
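
A minimal Java sketch of that pattern, using a standard BlockingQueue; the worker methods and task types are illustrative only. The blocking worker sits idle at no CPU cost, and the polling variant shows the small sleep Kratz recommends when polling is unavoidable.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    /** Illustrative worker: block efficiently on take() instead of spinning in a polling loop. */
    public class WorkQueueExample {

        private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();

        /** Producer side: hand work to the queue from any thread. */
        public void submit(Runnable task) {
            queue.add(task);
        }

        /** Consumer side: the thread goes idle until an item arrives. */
        public void runWorker() throws InterruptedException {
            while (!Thread.currentThread().isInterrupted()) {
                Runnable task = queue.take();   // blocks with no CPU use while the queue is empty
                task.run();
            }
        }

        /** If a queue forces polling, at least sleep between attempts to keep CPU load down. */
        public void runPollingWorker() throws InterruptedException {
            while (!Thread.currentThread().isInterrupted()) {
                Runnable task = queue.poll();   // non-blocking check
                if (task != null) {
                    task.run();
                } else {
                    Thread.sleep(100);          // small sleep between polls
                }
            }
        }
    }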

When it comes to power management, many best practices from the single-core world still apply to multicore. For example, a device’s radios -- not just cellular, but GPS and Wi-Fi too -- are among the biggest power-draws. So even though multicore means an app can sit in the background and use a radio -- such as a navigation app constantly updating nearby restaurants and ATMs -- consider whether that’s the most efficient use of battery resources.

“For some apps, like a turn-by-turn navigation app, it makes sense that it wants the highest-resolution location as frequently as possible,” says Kratz. “But for some apps, a more coarse location and far less frequent updates may be sufficient and will help preserve battery.

“There may be times where an app will adapt its use of the GPS, depending on if the device is plugged into power or not. In this case, if you are plugged into an external power source, then the app can dial up the resolution and update frequency. When the device is unplugged, the app could then dial it back, conserving power and still working in a useful fashion.”
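
A hedged sketch of that adaptation on Android follows. The update intervals, providers and class names are illustrative assumptions, and a real app would also need the appropriate location permissions.

    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.BatteryManager;

    /** Illustrative sketch: fine, frequent GPS fixes while plugged in; coarser, rarer updates on battery. */
    public class AdaptiveLocation {

        public static void requestUpdates(Context context, LocationListener listener) {
            LocationManager lm =
                    (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);

            if (isPluggedIn(context)) {
                // External power: dial up resolution and update frequency.
                lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000L, 5f, listener);
            } else {
                // On battery: coarser fixes, far less often.
                lm.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 60000L, 100f, listener);
            }
        }

        private static boolean isPluggedIn(Context context) {
            // ACTION_BATTERY_CHANGED is sticky, so a null receiver returns the last broadcast.
            Intent battery = context.registerReceiver(
                    null, new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
            int plugged = battery == null
                    ? 0
                    : battery.getIntExtra(BatteryManager.EXTRA_PLUGGED, 0);
            return plugged == BatteryManager.BATTERY_PLUGGED_AC
                    || plugged == BatteryManager.BATTERY_PLUGGED_USB;
        }
    }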
