Under the Hood: A Look Inside the Ultrabook

Mobile devices have been transforming the world of computing. Smartphones, tablets, e-readers and netbooks have revolutionized the way people communicate and interact with each other, buy things, shoot video, make music and play games. Perhaps most important, mobile devices are changing the way people work.

Consumers’ expectations have risen with this proliferation of mobile technologies. Fast, reliable access to the Internet and location-aware services on smartphones and tablets has upped the ante: People expect instant gratification without barriers. Who wants to wait for their mobile device to turn on, or spend a lot of time learning a complex user interface? Smooth computing experiences in 2012 require always-on connectivity and application responsiveness.

Combining Mobility and Power
Recognizing this sea change, a new line of mobile devices -- Ultrabooks -- was unveiled last year at Computex in Taiwan. According to the announcement, Ultrabooks “would operate more like smartphones -- wake up in a flash, combine responsiveness with performance, offer a seamless and compelling experience and be sleek and less than an inch thick.”

Ultrabook devices extend and enhance the practical applications of smartphones and tablets by combining portability with the technology that’s typically associated with high-performance laptops -- second-generation processors and a 64-bit OS. Toss in accelerometers, a gyroscope and other sensor technologies and wrap it all in a sleek, thin, lightweight case with an equally attractive price tag, and you’ve got a recipe for what manufacturers hope is the next big thing in mobile computing.

“Developers that were strictly building PC applications will now have a platform that’s more mobile than a typical laptop and have technologies and sensors they previously could not access,” says Tom Deckowski, a developer marketing manager for Intel [disclosure: Intel is the sponsor of this content]. “On the flip side, mobile app developers who were focused on creating apps for small-footprint devices that didn’t take a lot of CPU performance will now have access to CPU and graphics performance they never had before, without losing access to the sensors. There’s something new in the Ultrabook device for both PC and mobile app developers alike.”

The Details
Ultrabook devices have three primary technologies that help them perform responsively:

  • Fast start-up ensures that the system is up and fully functioning from hibernation in less than seven seconds, saving time and battery charge. In some Ultrabook devices, a portion of the system’s hard drive is reserved for caching operating system and application state, providing users with a highly responsive mobile experience.
  • Fast response uses a solid-state drive (SSD) or SSD hybrid as a cache between the hard drive and memory, without an additional drive partition, to make application launch times faster.
  • Continuous updates allow applications on some models to keep receiving data updates even while the system is in hibernate or sleep mode. For example, game developers can push updates to MMORPG players while they’re away from their Ultrabook, so players don’t have to spend time downloading updates before they can resume playing.
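
The continuous-updates model can be sketched in code. This is a hypothetical illustration, not a real Ultrabook API: assume a platform background agent saves incoming updates to a queue file while the system sleeps, and the application drains that queue when it resumes.

```python
# Hypothetical sketch: apply updates that a background agent queued
# while the system was asleep. The queue file and function names are
# illustrative, not a real platform API.
import json
from pathlib import Path

QUEUE = Path("update_queue.json")

def read_queued_updates(queue_path):
    """Return the list of update records the background agent saved."""
    if not queue_path.exists():
        return []
    return json.loads(queue_path.read_text())

def apply_updates(updates):
    """Apply each pending update in order; return how many were applied."""
    applied = 0
    for update in updates:
        # A real client would verify a signature and patch assets here.
        applied += 1
    return applied

def on_resume(queue_path=QUEUE):
    """Called when the app regains focus after sleep or hibernate."""
    return apply_updates(read_queued_updates(queue_path))
```

The point of the sketch is that the user never waits on a download at launch: the data is already local by the time `on_resume` runs.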

Device security is provided via new identity protection tools that are embedded in the BIOS/firmware of the devices. While no system is immune to theft or loss, these identity protection measures can detect theft or loss and disable the system. When the Ultrabook is recovered, the software can reactivate it with no loss of data.

Another crucial feature is extended battery life. Ultrabook devices are based on low-voltage processors that offer a minimum battery life of five hours, and up to eight hours or more on some systems.

The first Ultrabook devices, including the Acer Aspire S3, the ASUS ZENBOOK, HP Folio, Lenovo IdeaPad U300 and Toshiba Portege Z830 Series, are hitting shelves now. They all weigh in at 3 pounds or less, are paper thin and feature air-cooled keyboards, HDMI connectors for hooking up to a TV set and USB 3.0 connectors. Storage options include SSDs and hard drives of various sizes.

Photo: Getty Images

Near Field Communication: Niche Tech or the Next Big Thing?

Money is the root of all technology, and Near Field Communication (NFC) is no exception. Frost & Sullivan, for example, predicts that by 2015, NFC will enable worldwide transactions totaling about $151.7 billion. In April 2011, another research firm, In-Stat, said “2011 is poised to be the year for NFC as it evolves from a concept to a strategy that is actively being pursued.”

Even so, electronic payments are just one of many potential uses of NFC. The variety of applications is one of the reasons why so many companies are readying NFC hardware and software -- and why enterprise CIOs, IT managers and developers should keep an eye on it. Two of Intel’s NFC experts -- Kent Helm and Sanjay Bakshi -- recently spoke with Intelligence in Software about what to expect from NFC over the next few years, and how it fits into a world that is populated by Bluetooth and RFID.

[Disclosure: Intel is the sponsor of this content.]

Q: Unlike Wi-Fi and cellular, NFC is designed for very-short-range communications. Exactly how short? And what are the advantages of such a limited range?

Kent Helm: Between 4 and 20 centimeters. Typically, it’s 2 cm to 10 cm.

Part of the advantage of that is security. Bluetooth goes quite a bit further. So you have a lot less interference with NFC. You don’t have to worry about multiple devices interfering with one another, because it’s such a short range. It’s tougher to snoop NFC.

Q: How does NFC compare to Bluetooth in terms of establishing, and then tearing down, a connection? That’s obviously important for applications such as cashless payments, where consumers don’t want to spend 30 seconds waiting for their NFC-equipped smartphone to handshake with a vending machine.

K.H.: The difference between Bluetooth and Wi-Fi and NFC is that an NFC connection is of very short duration and is implicitly established by a tap, unlike Bluetooth and Wi-Fi, which require an explicit connection procedure. With Bluetooth, it can take quite a while to pair and connect devices.

In Japan, you see people tapping their phones on train payment [terminals] to get their ticket and zipping off, or buying things from kiosks rather quickly. If it works in Japan, it’s going to work everywhere else as far as speed because they’re very conscious of that.

Q: Why are so many analysts and vendors predicting such a sudden, sharp increase in NFC-equipped devices?

K.H.: Phones are driving the ecosystem. You’re seeing a huge expansion of NFC adoption in the phone market. You’ve got AT&T, Verizon and T-Mobile forming ISIS for U.S. NFC payments. The ecosystem is coming alive as a result of e-commerce on phones.

Google Wallet’s initiative is instrumental in driving NFC adoption in phones. Microsoft announced that NFC stacks will be native to Windows 8. That’s huge for the consumer electronics market. It means PCs, laptops, phones, tablets -- everything -- will now have NFC built in. Windows 8 will enable a new class of device-to-device collaboration experiences with just a tap. In short, the ecosystem is starting to come together.

Intel got involved because we see the ecosystem coming. It’s a good time for us to get out there and evangelize the adoption of NFC into mobile platforms: tablets, phones, mobile PCs, desktops. It’s beautiful timing because we see the adoption of e-wallet and e-card and proximity computing exploding in 2013.

Q: Battery life is a major issue with smartphones. How much of a drain is each NFC connection?

K.H.: In standby mode, a correctly designed NFC solution should not consume any power. Given that the NFC connection duration is very short -- typically a second -- the power drain is not significant.

Q: Besides cashless payments, what are some other potential NFC applications?

K.H.: Anything that involves the secure transaction of confidential information. Some obvious ones in the medical field are record transfers as a doctor or nurse enters the patient’s room. They can initiate the secure transaction of all of the medications that were administered in the past 24 hours. Another is inventory control and tracking.

Sanjay Bakshi: A doctor probably visits 20, 30 patients. Every room he goes into, he has to log in to some machine, do something and then log out. That experience can be made much more seamless. The presence of the doctor or nurse in the room is being verified because they present their [NFC] badge.

Q: But a lot of those applications are done today using RFID and other existing technologies. Where does NFC fit in? For example, does it displace RFID?

S.B.: RFID and things like that will probably stay around. RFID tags can be read at long range because of the frequency they work in, so there’s probably a place for them. The close proximity of NFC is what sets it apart from Bluetooth and things like that.

Q: One potential downside of so many companies backing NFC is that it increases the chance of interoperability problems. How are the NFC Forum and individual companies, such as Intel, working to ensure interoperability?

K.H.: The available standards are ISO 14443 A/B and those from the NFC Forum. It’s in everybody’s interest to ensure that the protocols and implementations are conformant and interoperable.

Where you see a difference is in how rigid you will be about the performance of any individual device: Will it meet a certain number of centimeters in range? Will the antennas behave a certain way? There’s always room for movement on the latter. But on the protocols and conformance to security, there is no room for deviation.

S.B.: The NFC Forum recently released the SNEP protocol. That’s a very important step for enabling peer-to-peer experiences between devices from different OS vendors, for example.

K.H.: This is why we got involved in NFC Forum at the level we did. It’s not so much to dictate the standards. It’s mainly to get involved with everybody and help guide the industry toward this multiplatform environment.

Photo Credit: iStockphoto.com/alexsl

Managing the Deluge of Personal Hand-held Devices Into the Enterprise

Until recently, many enterprise IT organizations prohibited the use of personal hand-held devices in the enterprise environment. With the consumerization of IT, these organizations now face the daunting challenge of supporting employees’ desire to access corporate information from an array of personal hand-held devices. Ten years ago, employees came to work to use great technology. Now, with the battery of consumer devices available, they often have better PCs and printers at home than they do at work. Because user expectations and needs have also changed, enterprises must adapt.

In the enterprise, a highly mobile workforce wants to take advantage of the most up-to-date systems, services and capabilities to do their jobs, typically using hand-held devices as companion devices to extend the usefulness of their corporate-owned mobile business PCs. This allows them to access information easily from home or on the road. For example, many users want to synchronize their corporate calendars with a third-party Web-based calendar utility so they can use their personal devices to access their work calendars from anywhere. They are motivated to get their jobs done in a manner that is easy, efficient and most productive.

Employees often don’t consider the information security issues raised by such a practice; however, information security is critically important for IT. Analysis of any policy prohibiting all personal devices shows that enforcing the policy would consume extraordinary resources in software and support and would negatively impact users’ productivity.

Such an approach would require IT to verify every application before allowing a user to install it, which alone would take away much flexibility from the corporate user base. IT would also need to significantly modify corporate culture and user expectations, deploy new lab networks and install large amounts of new hardware and networking equipment. That kind of control is just not possible or productive.

Solutions Must Balance User Demand and Information Security
With each new generation of technology, IT must develop ways to help keep information secure. The challenge is to develop a policy that satisfies user demand and protects information to the greatest extent possible. With safeguards in place to protect information and intellectual property, employees can select the tools that suit their personal work styles and facilitate their job duties, improving employee productivity and job satisfaction. Because the use of personal devices is accelerating, policy needs to change to accommodate it. The best option embraces the consumerization of IT, recognizing that the trend offers significant potential benefits both to users and to IT:

  • Increased productivity. Users can choose devices that fit their work styles and personal preferences, resulting in increased productivity and flexibility.
  • Greater manageability. By offering a program that users can adopt, IT knows what users are doing and can offer services that influence their behavior. This provides a clear understanding of the risk level so IT can actively manage it.
  • Enhanced business continuity. If a user’s mobile business PC is nonfunctional, a personal hand-held device provides at least a partial backup, enabling the user to continue to work productively.
  • Loss prevention. Internal data indicates that users tend to take better care of their own belongings and tend to lose personal devices less frequently than corporate-owned devices, which actually enhances information security.
  • Greater security. Rather than ignore the consumerization of IT, IT can increase information security by taking control of the trend and guiding it.

By taking control of the trend and the technology in its environment, IT can circumvent many of the security issues that would arise if it simply ignored the issue or prohibited employees from using their own devices to accomplish some of their job duties.

Addressing the Unique Security Challenges of This Workplace Trend
Recognizing the potential benefits of the consumerization of IT to both employees and to IT, the best step is to identify the unique security challenges of this workplace trend, investigate user behavior and define the requirements of an IT consumerization policy. That policy must support users’ needs for mobility and flexibility by allowing personally owned hand-held devices in the enterprise and allowing other personally owned devices in the future.

It is relatively easy to verify and enforce which applications are running on corporate-owned hand-held devices. With personal devices, this process is not so straightforward, because employees have the right to install any applications they choose. However, we have identified certain minimum security specifications for hand-held devices that provide a level of information security that allows IT to test, control, update, disconnect, remote-wipe and enforce policy:

  • Two-factor authentication required to push email
  • Secure storage using encryption
  • Security policy setting and restrictions
  • Secure information transmittal to and from the enterprise
  • Remote wipe capability
  • Some firewall and intrusion detection system (IDS) capabilities on the server side of the connection
  • Patch management and enforcement software for security rules
  • The ability to check for viruses from the server side of the connection, although the device itself may not have antivirus software

In the case of antivirus software, we analyzed virus attacks on mobile devices and found that very few targeted corporate information; most either sent text messages or attacked the user’s phone book. Although we expect malware incidents to increase over time, the current threat level to actual corporate information is low.

Mobile Business: PCs or Thin Clients?
We have not found that the thin client computing model, which centrally stores information and allows access to that information only from specific devices, is a foolproof way to protect corporate information.

Although thin clients are appropriate for certain limited applications, in general we feel they limit user mobility, productivity and creativity. Also, many of the perceived security enhancements associated with thin clients need to be viewed with caution. In fact, many of the information security risks merely moved; they didn’t disappear. For example, thin clients usually don’t include the same level of information security protection as mobile business PCs, yet they can still connect to the Internet and export information, putting that information at risk. Therefore, the loss of productivity that comes with using thin clients is for little or no gain.

Security Considerations
One of the biggest technical challenges to implementing our policy involved firewall authentication. With IT-managed systems, authentication uses two factors: something you know (a password) and something you have (a registered mobile business PC). But when the device is unknown, you are left with only one authentication criterion.

Therefore, one of the interesting challenges of allowing personal devices in the enterprise is using information on the device to authenticate to the network, without that information belonging to the user. If the employee owned the piece of information used to authenticate to the network, IT would have no grounds for disciplinary action if the user chose to move his or her data to a different device to get access to the network. For example, the International Mobile Equipment Identity (IMEI) number on a mobile device belongs to the user if the user owns the hardware, so IT cannot use it to authenticate the device.

To address this issue, IT can send a text message to a predefined phone number, and that text message becomes the user’s password. In this scenario, the phone number is the must-have authentication factor, and the text message is the must-know authentication factor.
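
The SMS-based scheme described above can be sketched as a small server-side flow. This is an illustrative sketch only: `send_sms` is a stand-in for a real SMS gateway, and the registered phone number is assumed to be held by IT, not the user.

```python
# Illustrative sketch of SMS-based authentication: the server generates
# a one-time password, texts it to a phone number registered with IT,
# and the user must type it back. All names here are hypothetical.
import hmac
import secrets

_pending = {}  # phone number -> issued one-time password

def send_sms(phone_number, message):
    # Stand-in for a real carrier or SMS-gateway call.
    pass

def issue_otp(phone_number):
    """Generate a six-digit one-time password and text it to the user."""
    otp = f"{secrets.randbelow(10**6):06d}"
    _pending[phone_number] = otp
    send_sms(phone_number, f"Your network access code is {otp}")
    return otp

def verify_otp(phone_number, submitted):
    """Check the submitted code; each code can be used only once."""
    expected = _pending.pop(phone_number, None)
    if expected is None:
        return False
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, submitted)
```

Here the phone number is the must-have factor and the texted code is the must-know factor, matching the two-factor split described above.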

Device management also poses challenges, because one solution doesn’t fit all devices and applications. You should design your device management policy with the expectation that a device will be lost or stolen, and expect the device to be able to protect itself in a hostile situation. This means the device is encrypted, can self-wipe after a number of wrong password attempts and can be wiped remotely. Your personal device policy should require users to have these controls in place prior to any loss.
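
The self-wipe rule can be expressed as a minimal sketch. Everything here is illustrative: a real implementation would destroy the disk encryption key at the platform level rather than flip a flag.

```python
# Minimal sketch of the "self-wipe after N wrong passwords" policy.
# _wipe_device is a placeholder for a platform-specific secure erase.

class DeviceLock:
    def __init__(self, password, max_attempts=5):
        self._password = password
        self._max_attempts = max_attempts
        self._failures = 0
        self.wiped = False

    def _wipe_device(self):
        # Placeholder: a real implementation would destroy the disk
        # encryption key, rendering the stored data unreadable.
        self.wiped = True

    def unlock(self, attempt):
        """Return True on success; self-wipe after repeated failures."""
        if self.wiped:
            return False
        if attempt == self._password:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self._max_attempts:
            self._wipe_device()
        return False
```

A remote-wipe command would simply call the same erase path from a server-issued message instead of a local failure counter.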

Also, some services need greater levels of security than others. For example, the system for booking a conference room doesn’t need the high level of security required by the sales database, so the room-booking system can reside on a device over which IT has less management control. You can develop a tiered management system.
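
One way to sketch such a tiered model: each service is assigned a sensitivity tier, and each device-management level unlocks tiers up to a ceiling. The service names and tier numbers below are illustrative, not a prescribed taxonomy.

```python
# Hypothetical tiered-access mapping: services by sensitivity tier,
# and the highest tier each device-management level may reach.

SERVICE_TIERS = {
    "room_booking": 1,      # low sensitivity
    "corporate_email": 2,   # medium sensitivity
    "sales_database": 3,    # high sensitivity
}

MANAGEMENT_CEILING = {
    "unmanaged_personal": 1,
    "policy_enforced_personal": 2,
    "fully_managed_corporate": 3,
}

def may_access(management_level, service):
    """True if a device at this management level may use the service."""
    return MANAGEMENT_CEILING[management_level] >= SERVICE_TIERS[service]
```

With a table like this, adding a new service is a policy decision (assign a tier) rather than a new enforcement mechanism.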

The consumerization of IT is a significant workplace trend that IT organizations have been anticipating for years. You need to establish a comprehensive information security policy, train users and service desk personnel and develop technical solutions that meet your information security requirements. These accomplishments will enable IT to take advantage of the benefits of IT consumerization without putting corporate data at risk.

To successfully accommodate employees’ desire to use personal devices in the enterprise, it is important to proactively anticipate the trend -- not ignore it or lose control of the environment by simply doing nothing. Success also hinges on an even-handed approach to developing policy, where each instance of personal device usage is treated consistently; it would be difficult to take action against one employee for doing something that is common practice.

Highly mobile users can use either their own device or a corporate-owned hand-held device as a companion to their mobile business PC. Because employees with similar responsibilities have different preferences, allowing them to use the hand-held devices that best suit their work styles increases productivity and job satisfaction.

For more information on Intel IT best practices, visit Intel.com/IT

Photo Credit: iStockphoto.com/mipan

What Does It Take to Squeeze Legacy Desktop Apps Into Mobile Devices?

With sales of smartphones and tablets growing at exponential rates, third-party and corporate developers are looking for ways not only to create new applications for these devices, but also to port legacy applications over.

But porting applications and, as importantly, properly scaling legacy apps to work on these new platforms is easier said than done. There are a number of technical factors to take into consideration before developers can even begin that process. We talked to Bob Duffy, the community manager for the Intel AppUp Developer Program, about what some of those technical factors are and what advice he has for those developers who commit to such projects.

Q. What is the most important thing for developers to keep in mind when porting legacy desktop apps to mobile devices?

Bob Duffy: First, you must understand what the capabilities are of the device you are porting to. Once you understand technically what it can do, you can scale the experience you want for that device. It may be more a user-experience challenge than an engineering one. Mobile app developers have an advantage over the legacy software developer when it comes to designing apps that work well for mobile devices, as they have more experience and more of a focus on the mobile use-case experience.

Q. Are there enough adequate tools available to get this sort of job done, or are they still lacking?

B.D.: The tools have a way to go, to be honest. I think there are tools that are well-designed for very specific platforms or languages, but you aren’t seeing the tools that make it easy to scale down an application for a particular form factor. Some development languages are getting better. You are seeing advances in things like HTML5 that allow you to auto-build your experience based on the screen size of the device.

Q. Once legacy apps are ported over, how big of a problem is it to get them to scale well on mobile devices?

B.D.: Legacy application developers have an advantage in this regard, compared to mobile-application developers. Developers accustomed to building applications for everything from low-end Pentium-based computers to a Core i7 know how to scale that type of application across the spectrum of hardware. But they don’t think about mobile users and how they are in and out of applications quickly. So the challenge for those developers is to start thinking about their software as a sort of utility application.

Q. What sorts of improvements are most needed in these tools?

B.D.: If you are looking at an IDE or SDK for a mobile platform, developers should look at ways to emulate applications across different devices. They should look for ways to write their code once and then get it packaged and compiled for each different form factor. If developers are going from a 4-inch device to a 10-inch, to a 60-inch TV for their applications, they need to consider what core-based templates they need for the UI that they can easily code to.
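
The "core templates per form factor" idea can be sketched as a simple dispatch: one code base selects a UI template from the device's screen size. The breakpoints and template names below are illustrative assumptions, not values from any particular SDK.

```python
# Hypothetical sketch: map a device's screen diagonal to a UI template,
# so one code base can target phone, tablet, and TV form factors.

def pick_template(diagonal_inches):
    """Return the name of the UI template to use for this screen size."""
    if diagonal_inches < 7:
        return "phone"    # compact layout, touch-first controls
    if diagonal_inches < 13:
        return "tablet"   # multi-pane layout
    return "tv"           # living-room layout, remote-friendly focus
```

The rest of the application codes against the template interface, and packaging per form factor becomes a build step rather than a rewrite.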

Q. What can Intel and AMD do at the chip level to make it easier for developers to port and scale applications down to mobile devices?

B.D.: We can make things like threading applications easier. Increasingly we are seeing multicore chips on desktops and mobile devices. At the recent Intel Developer Forum show, Intel Chief Technology Officer Justin Rattner talked about the coming of 50-plus core microprocessors. So if developers are creating their applications in serial, it will be like having 50-lane freeways but only allowing one car on them at a time. We need to build better tools and instruction sets so they can move over to parallel programming. It will be up to the developers to figure out how they are going to build applications that run executions across 50 cores at once to take advantage of this.
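
The freeway analogy can be made concrete with a small sketch: the same CPU-bound function run serially, then fanned out across cores with Python's standard-library ProcessPoolExecutor. The prime-counting workload is arbitrary; it just has to keep a core busy.

```python
# Serial vs. parallel execution of the same CPU-bound work. With one
# task per core, the parallel version approaches an N-core speedup.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Deliberately CPU-bound: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def run_serial(limits):
    # One car on the freeway at a time: tasks run back to back.
    return [count_primes(x) for x in limits]

def run_parallel(limits):
    # Fan tasks out across available cores.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(count_primes, limits))

if __name__ == "__main__":
    work = [20_000] * 4
    assert run_serial(work) == run_parallel(work)
```

The results are identical either way; only the wall-clock time changes, which is exactly the point about leaving 49 of 50 lanes empty.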

Q. Is there much interest among developers in migrating their mobile phone applications to tablets, or is it more the other way around?

B.D.: It is pulling in both directions. For a while you had a number of mobile developers happily developing for phones and tablets that had similar architectures. But those same developers are now thinking, “If I have my apps on a phone and a tablet, why wouldn’t I want them on a laptop, or TV, or in an in-vehicle infotainment (IVI) system?” Those developers are already scaling and migrating their applications. They are coming to us every day for advice on how to port them across platforms and get better performance.

Q. Are there any problems porting phone and tablet apps in terms of API support? Will vendors have to come up with a different set of APIs for each platform?

B.D.: That is an issue when you are looking within one platform or OS. If I am using a bunch of iOS libraries, I am pretty cool going from the iPhone to the iPad. But if I want to go from an iPhone to a Samsung Galaxy tablet, I have to consider that I might be tied to those Apple libraries.

Developers are now thinking more about how to use standards in developing applications so they can manage a core code base and set of APIs that allow them to move from one platform to the next, from one form factor to the next.

Q. How attractive is it for developers to use emulation layers to move applications among different platforms and form factors?

B.D.: It is super-attractive. Developers generally work in one, maybe two host development environments. It’s important to them to develop in one environment and virtually test their app on other platforms and form factors. There are hosted emulation services that allow you to see your app run on iPhones, Android tablets and netbooks, which is very valuable. And as for getting your apps compiled for many devices, one company we work with created its own kit that allows you to write once and build for iOS, Bada, Windows 7, MeeGo and eventually Android. Its kit, along with an interpreter, allows you to write to one set of APIs; the kit and interpreter then allow the application to work on all the other platforms.

Q. How difficult is it to get a mobile or legacy application to work with your mobile processor, the Intel Atom processor, and ARM-based processors?

B.D.: That happens in the development environment, with the kits used to compile the application. That is where the magic happens for the architecture you are coding for. So you want to make sure you are using the right tools to help you take advantage of various architectures. One good example is the Android NDK from Intel, which allows you to write native Android apps for x86-based hardware. The Opera browser by Opera Software is something we saw recently using the NDK and taking full advantage of x86 on Android. There are also game tools like Unity or App Game Kit, which I mentioned earlier, that are making it simple to write once for many devices, platforms and architectures. So the tools are getting better, allowing cross-platform mobile developers and legacy software developers to focus on writing good apps and to consider the architecture differences as needed.

The Long-term Commitment of Embedded Wireless

Most businesspeople replace their cell phones roughly every two years. At the other extreme are machine-to-machine (M2M) wireless devices, which often are deployed for the better part of a decade and sometimes even longer. That’s because it’s an expensive hassle for an enterprise to replace tens of thousands of M2M devices affixed to trucks, utility meters, alarm systems, point-of-sale terminals or vending machines, to name just a few places where today’s more than 62 million M2M devices reside.

Those long lifecycles highlight why it’s important for enterprises to take a long view when developing an M2M strategy. For example, if your M2M device has to remain in service for the next 10 years, it could be cheaper to pay a premium for LTE hardware now rather than go with less expensive gear that runs on GPRS or CDMA 1X, the networks of which might be phased out before the end of this decade.

Confused? Mike Ueland, North American vice president and general manager at M2M vendor Telit Wireless Solutions, recently spoke with Intelligence in Software about how CIOs, IT managers and developers can sleep at night instead of worrying about obsolescence and other pitfalls.

Q: M2M isn’t a new technology. For example, many utilities have been using it for more than a decade to track usage instead of sending out armies of meter readers every month. Why aren’t more enterprises following suit?

Mike Ueland: It’s very similar to what it was like before we had the Internet, Ethernet and things like that, where you had all of these disconnected devices. There’s an incredible opportunity, depending on what the business and application are, to connect those devices and bring more information back, as well as being able to provide additional value-added services.

There have been some significant improvements in terms of technology and the cost to deploy an M2M solution. All of the M2M solutions have gotten much more mature. There are so many more people in the ecosystem to support you.

But we haven’t seen the uptake within the enterprise community. Part of that is because we’ve been in such a recessionary period over the past couple of years. No one really wants to start new projects.

Q: What are some pitfalls to avoid? For example, wireless carriers have to certify M2M devices before they’ll allow them on their network. How can enterprises avoid that kind of red tape and technical nitty-gritty?

M.U.: The mistake that we see a lot is that people try to bite off too much. They’ll say: “I need this custom device that needs to do this. Therefore I need to build a custom application.” They overcomplicate the solution.

There are so many off-the-shelf devices out there that can be quickly modified for your application. One of the benefits of that is you reduce technical risk and time to market because often those devices will be pre-certified on a carrier’s network. There are also a number of M2M application providers out there -- like Axeda, ILS and others -- that have an M2M architecture. That allows people to very quickly build their own application based on this platform.

Q: Price is another factor that’s kept a lot of enterprises out of M2M. How has that changed? For example, over the past few years, many wireless carriers have developed rate plans that make M2M more affordable.

M.U.: Across the board -- device costs, module costs, air-time costs -- all of these costs have come down probably by half in the past few years, if not more. So any business cases that were done two or three years ago are outdated.

In addition, there have been great technological improvements. For instance, the device sizes have continued to shrink. They use less power. So it opens it up for a whole range of applications that might not have been possible in the past.

Q: About 10 years ago, a lot of police departments and enterprises were scrambling to replace their M2M equipment because carriers were shutting down their CDPD networks. Today, those organizations have to make a similar choice: Do I deploy an M2M system that uses 2G or 3G and hope that those networks are still in service years from now? Or should I go ahead and use 4G now even though the hardware is expensive, coverage is limited and the bandwidth is far more than what I need?

M.U.: It depends on the application. For instance, AT&T is not encouraging new 2G designs. They’ve deployed a 3G network, and they’re starting to deploy a 4G network. They’d really like people to move in that direction. Verizon and Sprint have their equivalent version of 2G: CDMA 1X. Both Verizon and Sprint have publicly declared that those networks will be available through 2021. At the end of each year, they’re going to reevaluate that.

So depending on the planned lifetime of your application and where you plan to deploy it -- North America or outside -- 2G networks will have a varying degree of longevity. Having said that, if you look at 4G deployments outside of Verizon in the U.S., there are not many significant ones. These are early days for 4G.

As module providers, we rely on lower-cost handset chipset pricing to drive M2M volumes. The cost of an LTE module is over $100 right now. Compare that to 2G, which is under $20 on average. It’s a big gap.