Will a Mobile OS Update Break Your Apps?

It’s one of the biggest headaches in mobile app development: The operating system (OS) vendor issues an update that immediately renders some apps partly or completely inoperable, sending developers scrambling to issue their own updates to fix the problem. For instance, remember when Android 2.3.3 broke Netflix’s app in June, as iOS 5 did to The Economist’s in October? These examples show how breakage can potentially affect revenue -- especially when it involves an app that’s connected to a fee-based service. In the case of enterprise apps, breakage can also have a bottom-line impact by reducing productivity and inundating the helpdesk.

Tsahi Levent-Levi, CTO of the technology business unit at videoconferencing vendor RADVISION, has spent the past few years trying to head off breakage. His company’s product portfolio includes an app that turns iOS tablets and smartphones into videoconferencing endpoints. With an Android version imminent, his job is about to get even more challenging. Levent-Levi recently spoke with Intelligence in Software about why app breakage is so widespread and so difficult to avoid.

Q: What exactly causes apps to break?

Tsahi Levent-Levi: The first thing to understand is that when you have a mobile platform, you usually have two sets of APIs available to you. The first set is the one that’s published and documented. The other is one that you sometimes need to use, and it is undocumented. When an API is not documented or not part of the official set, the OS vendor can and will change it over time to fit its needs.

For example, we’re trying to reach the best frame rate and resolution possible. Doing that well means working at the chip level. So you go into the Android NDK, where you write C code instead of Java code. Then you go one step lower to access the physical APIs and the undocumented parts of the system level, which is where the chipset vendors are doing some of the work.
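As a rough illustration of that boundary, here is a minimal, hypothetical sketch of the Java side of such a bridge: the class, method and library names are invented, and the actual implementation would live in C code built with the NDK.

    // Hypothetical sketch of the SDK-to-NDK boundary described above: the Java
    // side stays thin and delegates frame handling to native C code built with
    // the Android NDK. Class, method and library names are illustrative only.
    public class NativeVideoBridge {

        static {
            // Loads libnativevideo.so, a hypothetical NDK library holding the C code.
            System.loadLibrary("nativevideo");
        }

        // Implemented in C via JNI; configures the encoder close to the chip level.
        public static native boolean configureEncoder(int width, int height, int frameRate);

        // Hands a raw camera frame down to the native layer for encoding.
        public static native void encodeFrame(byte[] frameData, long timestampUs);
    }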

Even a different or newer chip from the same vendor is not going to work in the same way. Or the ROM used by one handset with a given chip is going to be different from the ROM on another handset, and the APIs are going to differ as well.

Q: So to reduce the chances that their app will break, developers need to keep an eye on not only what OS vendors are doing, but also what chipset and handset vendors are doing.

T.L.: Yes, and it depends on what your application is doing. If you’d like to do complex video things, then you need to go to this deep level.

I’ll give you an example. With Android, I think it was version 2.2, handsets had no front-facing camera. Then iPhone 4 came out with FaceTime, and the first thing that Android handset manufacturers did was add a front-facing camera. The problem was that Android had no APIs that allowed you to select a camera. If you really wanted to access that front-facing camera, you had to use APIs that were specific to that handset. When 2.3 came out, this got fixed because they added APIs to select a camera.
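For reference, the camera-selection calls added in Android 2.3 (API level 9) look roughly like the minimal sketch below, built on the android.hardware.Camera APIs; this is an illustration, not RADVISION’s actual code.

    import android.hardware.Camera;

    // Minimal sketch of the camera-selection APIs added in Android 2.3 (API level 9).
    // Before these existed, finding the front-facing camera meant handset-specific calls.
    public final class FrontCameraFinder {

        public static Camera openFrontCamera() {
            Camera.CameraInfo info = new Camera.CameraInfo();
            int cameraCount = Camera.getNumberOfCameras();
            for (int i = 0; i < cameraCount; i++) {
                Camera.getCameraInfo(i, info);
                if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
                    return Camera.open(i);  // open the first front-facing camera found
                }
            }
            return null;  // this handset has no front-facing camera
        }
    }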

Platforms progress very fast today because there’s a lot of innovation in the area of applications. Additional APIs are being added by the OS vendors, and they sometimes replace, override or add functionality that wasn’t there before. And sometimes you get APIs from the handset vendor or chipset vendor and not from the OS itself. So there is a variance in the different APIs that you can use or should be using.

If the only thing you’re doing is going to a website to get some information and displaying it on the screen, there isn’t going to be any problem. But take streaming video, for example. Each chipset has a different type of decoder and slightly different behavior from the others. That causes problems.

It’s also something caused by Google itself. When Google came out with Android, the media system that they based everything on was OpenCORE. At one point -- I don’t remember if it was 2.1, 2.2 or 2.3 -- they decided to replace it with something totally different. That meant all of the applications that used anything related to media required a rewrite or simply broke. The new interface is called Stagefright, and there are rumors that this is going to change in the future as well.

Q: With so many vendor implementations and thus so many variables, is it realistic for developers to test their app on every model of Android or iOS device? Or should they focus on testing, say, the 25 most widely sold Android smartphones because those are the majority of the installed base?

T.L.: You start with the devices that most interest you, and then you expand the list as you get problem reports from customers. Today, for example, I have a Samsung Galaxy S. When I go to install some applications from the Android Market, it tells me that I cannot, because my phone doesn’t support them. That’s one way Google is trying to deal with it, but it doesn’t always work because of the amount of variance.

In terms of how a developer should approach it: start from the highest level of abstraction you can to build your application. The next step would be to go for a third-party developer framework like Appcelerator, which allows you to build applications using HTML5, JavaScript and CSS. You take the application that you build there, and they compile it into an Android or an iOS application. If you can fit your application into such a scheme, you will run on the largest number of handsets to begin with, because these frameworks are built for this purpose.

If you can’t do that, then you’ll probably need to do as much as you can at the Android software development kit (SDK) level. If that isn’t enough, then you go down to the native development kit (NDK) level. And if that isn’t enough, then you go further down into the undocumented system drivers and such. And you build the application in a way that the lower you go, the less code you have there.
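A hedged sketch of that layering, with hypothetical decoder classes standing in for the real SDK, NDK and vendor-specific code paths, might look like this:

    // Hedged sketch of the layering described above: prefer the documented SDK path,
    // fall back to NDK-backed code, and keep the undocumented, vendor-specific layer
    // as small as possible. All decoder classes here are hypothetical stubs.
    public final class DecoderFactory {

        interface VideoDecoder { /* decode(...) omitted for brevity */ }

        public static VideoDecoder create() {
            if (SdkDecoder.isSupported()) {
                return new SdkDecoder();          // documented SDK APIs: safest across OS updates
            }
            if (NdkDecoder.isSupported()) {
                return new NdkDecoder();          // NDK/C layer: more work, varies by chipset
            }
            return new VendorSpecificDecoder();   // undocumented, handset-specific: keep minimal
        }

        // Stub implementations included only to make the sketch self-contained.
        static class SdkDecoder implements VideoDecoder { static boolean isSupported() { return true; } }
        static class NdkDecoder implements VideoDecoder { static boolean isSupported() { return false; } }
        static class VendorSpecificDecoder implements VideoDecoder { }
    }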

Open-source Databases in the Post-Oracle World

Open-source products like MySQL and PostgreSQL brought relational database functionality to the masses at a fraction of the price of a commercial Oracle, IBM or even Microsoft database. MySQL led the pack of free, or almost free, contenders -- customers typically paid for support, not the database itself. Sun Microsystems bought MySQL in January 2008, and open-source fans saw Sun, which fostered many open-source projects, as a worthy caretaker. But when Oracle bought Sun two years later, those fans were no longer pleased. Oracle was not known for its friendliness to open source. Here, MySQL veteran Ulf Sandberg -- now CEO of SkySQL Ab -- says prospects remain rosy for open-source databases as the world transitions to cloud computing.

Q: When Oracle bought Sun Microsystems (which had already acquired MySQL), many feared the worst for the open-source database. How has that worked out?

Ulf Sandberg: When the Oracle news was announced, we all thought “uh oh.” Oracle was not exactly known for open source. It’s very bottom-line focused, focused on shareholder value. But on open source? No.
That triggered us to think of an alternative to Oracle. MySQL co-creator Monty Widenius started his project, MariaDB, and he’s making MySQL better, adding features so that MySQL continues to live with its creator. It’s not a fork; he builds against the current version of MySQL.

Q: Has Oracle kept up with this on its own?

U.S.: We thought Oracle would not pay a lot of attention to MySQL, and that there should be an alternative. That doesn’t mean that Oracle is doing anything wrong. They changed some things; they changed pricing in November, they changed license agreements, so you now have to click through and sign up … and that upset people. They wonder “Why should we pay them?” But there is still a free version of MySQL.

Q: What about PostgreSQL and other open-source databases?

U.S.: Postgres [sic] and MySQL are pretty close feature-wise. If you evaluated them as a technical person, it’s a tossup. You might find one feature in one that’s better than the other, but there’s not a huge difference. The big thing was that MySQL had a company behind it and a complete service organization, and that made it take off. The installed base of MySQL is 10 times that of Postgres.

Q: What applications are good for open-source databases, and which are better run on commercial databases?

U.S.: Open-source databases are tailored for the new world of the Web and online. Traditional databases are monolithic; they were designed decades ago and they run legacy applications.

MySQL is still growing very fast and the sweet spot is still the Web. Companies that grew up building online apps don’t want to spend a ton of money on a big database. Facebook and Google, these are the types of customers that grew up on open-source databases.

There is nothing wrong with spending a lot of money on a commercial database. These are great products, but do you really use all that capability?

The next thing for open-source databases is the cloud. One thing that suits MySQL for the cloud is that it supports pluggable storage engines, which let you change the behavior of the database for different markets. The elasticity of the cloud opens up a whole new space for us, and that’s where IBM, Oracle and Microsoft are stuck. They don’t have the key things needed for the virtualized world of the cloud. If you turn on the faucet, it should follow your needs; you shouldn’t have to install a whole new database when your needs grow.
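As a small illustration of what pluggable storage engines mean in practice, the sketch below creates two MySQL tables backed by different engines through JDBC; the connection URL, credentials and table definitions are placeholders, and the MySQL Connector/J driver is assumed to be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Illustration of pluggable storage engines: the same CREATE TABLE statement can
    // target engines with very different behavior. The JDBC URL, credentials and table
    // definitions are placeholders; the MySQL Connector/J driver must be on the classpath.
    public class StorageEngineDemo {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/demo", "user", "password");
                 Statement stmt = conn.createStatement()) {

                // Durable, transactional storage.
                stmt.execute("CREATE TABLE orders (id INT PRIMARY KEY, total DECIMAL(10,2)) ENGINE=InnoDB");

                // In-memory storage: fast lookups, but contents vanish on restart.
                stmt.execute("CREATE TABLE session_cache (token CHAR(36) PRIMARY KEY, user_id INT) ENGINE=MEMORY");
            }
        }
    }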

Open Source Meets Systems Management

Traditional systems-management software from the Big Four enterprise-class systems-management players -- BMC, CA, IBM and Hewlett-Packard -- sought to solve a difficult problem: monitoring and managing the diverse hardware and software systems that make up corporate IT. The promise was that these systems-management suites could keep tabs on and manage systems regardless of the vendor, underlying chip and operating system. As a result, they were complex, pricey and hard-pressed to keep up with changing requirements. While Microsoft offers Windows-centric systems management and VMware offers management tools for the virtualized world, the Big Four are really the incumbents to watch as more open-source systems-management solutions come online.

Here, Bernd Harzog, analyst with The Virtualization Practice LLC, talks about how newer, more agile open-source alternatives are changing that.

Q: More companies are using open-source tools to monitor and manage IT. What are the names to watch?

Harzog: There are some great products with open-source elements. Hyperic, for example, is used to monitor very large-scale Web environments. It was acquired by SpringSource, which was then acquired by VMware. Hyperic has a yet-to-be-publicly-determined role in VMware’s monitoring strategy. It competed with Nimsoft before Nimsoft was bought by CA, and with Nagios, which remains almost a pure open-source -- or at least open-at-the-edge -- product.

Q: What’s the appeal?

Harzog: If a systems-management product is managing a customer’s current data center hardware, it also needs to manage new equipment the customer adds over time. Once you have more than five customers, it’s impossible to keep up with that. The only solution is to have systems-management software that is open at the edge and that is extensible and adaptable, so anyone can build a “collector” that gathers data about the new target device. Companies like CA, HP, IBM and BMC are on 12- to 18-month release cycles, so when something gets popular, it goes on the roadmap, but support for it can take up to a year and a half.

An open-source-oriented company like Zenoss is open at the edge. Anyone can build a Zenoss ZenPack collector or a Nagios collector for new devices and not have to wait for the vendor.
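As a concrete example of what “open at the edge” looks like, Nagios treats any executable as a check plugin as long as it prints a status line and exits with the standard codes: 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). A minimal, hypothetical check written in Java -- real-world checks are more often shell or Perl scripts -- might look like this:

    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Hypothetical Nagios-style check: prints one status line and exits with the
    // standard plugin codes (0 = OK, 2 = CRITICAL). Host and port are illustrative.
    public class CheckDeviceReachable {
        public static void main(String[] args) {
            String host = args.length > 0 ? args[0] : "192.0.2.10";      // example device address
            int port = args.length > 1 ? Integer.parseInt(args[1]) : 80;
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 2000); // 2-second timeout
                System.out.println("OK - " + host + ":" + port + " is reachable");
                System.exit(0);
            } catch (Exception e) {
                System.out.println("CRITICAL - " + host + ":" + port + " unreachable: " + e.getMessage());
                System.exit(2);
            }
        }
    }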

Q: What else drives demand for these systems-management tools?

Harzog: Virtualization, IT-as-a-service, public and private clouds. Those things tend to break legacy products. Virtualization introduces new requirements -- namely, keeping up with dynamic systems. Legacy tools are not designed to do this. They can monitor virtual machines (VMs), but they don’t do a good job of keeping up with change in dynamic virtualization environments. Their approach to systems management, which worked with physical servers, doesn’t work well with virtual servers. The desire of enterprises for something more flexible, less expensive, easier to manage and able to meet the needs of new use cases drives demand for these new tools.

Q: How about systems management versus systems monitoring?

Harzog: That’s the other side of systems management. Management includes updating, configuring and provisioning things. There I would look at interesting new tools, like Puppet Labs’ Puppet, Opscode’s Chef and ScaleXtreme.

Management has to be done correctly. It starts with provisioning and configuration management and then has to manage performance and availability all up and down the entire stack. It’s a hard job, and tools are evolving quickly.

4 Key Principles for Creating Truly Open Open-source Projects

Scratch the surface of virtually any business, and you’ll find open-source technologies used for everything from Web servers, to databases, to operating systems for mobile devices. Although cost is often a factor for choosing an open-source solution over a proprietary one, most businesses aren’t just looking for a free solution when they choose open source: They’re usually also interested in the fact that open-source products are often more innovative, more secure and more agile in responding to the needs of the user community than their proprietary counterparts.

Indeed, the most successful open-source projects are built by communities based on principles of inclusion, transparency, meritocracy and an “Upstream first” development philosophy. By adhering to these principles, they can deliver significant value to both device manufacturers and service providers that transcends what’s offered by makers of proprietary platforms.

Open-source Principle No. 1: Inclusion
Inclusion, rather than exclusion, has always been a key tenet of open source. The idea is simple: No matter how many smart people your company employs, there are many, many other smart people in the world -- wouldn’t it be great if you could get them to contribute to your project? Successful open-source projects realize that contributions can come from anywhere and everywhere. By harnessing the capabilities of a larger community, open-source developers can deliver solutions that are often superior to proprietary ones. Contributions range from writing and debugging source code to testing new releases, writing documentation, translating software into another language or helping other users.

Open-source Principle No. 2: Transparency
For a community development effort to work, members of the community need transparency. They need to know what’s happening at all times. By putting in place forums for public discussion, such as mailing lists and IRC channels; creating open-access problem-tracking databases using tools like Bugzilla; and building systems for soliciting requests for new features, open-source developers foster a culture of trust that embraces transparency and tears down the barriers between contributors that can stifle innovation.

For device manufacturers and service providers, this transparency translates directly into their ability to improve their own time-to-market and compete on a level playing field. The “Release early, release often” philosophy that frequently accompanies a transparent culture means they can evaluate new features earlier in the development cycle than they can with proprietary products. They can also provide feedback in time to influence the final release of a product.

In contrast, when a developer withholds source code until final release, those involved in the development process have a built-in time-to-market advantage over those without early access.

Open-source Principle No. 3: Meritocracy
Unlike the hierarchy of a traditional technology firm, the open-source community is one based on merit. Contributors to an open-source product prove themselves through the quality and quantity of their contributions. As their reputation for doing good work grows, so does their influence. This merit-based reward system creates much more stable development environments than those based on seniority, academic pedigree or political connections.

The Linux kernel is probably the best-known open-source project and operates on the principle of meritocracy. Linus Torvalds, founder of the Linux project, and a number of other maintainers coordinate the efforts of thousands of developers to create and test new releases of the Linux kernel. The Linux maintainers have achieved that status as a result of proven contributions to the project over a number of years. Although many of the key Linux maintainers are employed by major corporations, such as Intel and Red Hat, their status is a result of their contribution -- not their company affiliation.

Open-source Principle No. 4: “Upstream First” Philosophy
Finally, open-source developers who take advantage of existing open-source software projects rather than adopting a “Not invented here” attitude tend to be innovation leaders, especially in the fastest-paced markets.

While it is sometimes tempting to simply take source code from a project and modify it for your needs, without worrying about whether or not the original (upstream) project will accept your modifications, this approach typically leads to suboptimal results. First, you will be stuck maintaining this forked version of the project for as long as you need to use it. Second, if everyone adopted this approach, the upstream project would not benefit from the improvements made by others, which defeats one of the key benefits of open source.

So, the most successful projects that rely on components from other upstream projects have adopted an “Upstream first” philosophy, which means that the primary goal is to get the upstream project to adopt any modifications you have made to their source code. With this approach, your platform and other derivative (downstream) projects benefit from those upstream enhancements, and the platform does not incur the maintenance expense associated with maintaining a branched version of the upstream project.

Migration to the Cloud: Evolution Without Confusion

The rapid rise of cloud computing has been driven by the benefits it delivers: huge cost savings with low initial investment, ease of adoption, operational efficiency, elasticity and scalability, on-demand resources, and the use of equipment that is largely abstracted from the user and enterprise.

Of course, these cloud computing benefits all come with an array of new challenges and decisions. That’s partly because cloud products and services are being introduced in increasingly varied forms -- public clouds, private clouds and hybrid clouds. They also deliver software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) solutions, and come with emerging licensing, pricing and delivery models that have budgeting, security, compliance and governance implications.

Making these decisions is also about balancing the benefits, challenges and risks of those cloud computing options against your company’s technology criteria. Many core criteria matter: agility, availability, capacity, cost, device and location independence, latency, manageability, multi-tenancy, performance, reliability, scalability, security, etc. And the available cloud options all vary widely in terms of each of these criteria -- not to mention the significant challenges of integrating all of this with your existing infrastructure.

There are fundamentally challenging questions that companies will be forced to grapple with as they decide what cloud functionality suits them best. The central issues include security, cost, scalability and integration.

Public, Private or Hybrid?

There are a few differences among the three:

  • Public cloud services require the least investment to get started, have the lowest costs of operation, and their capacity is eminently scalable to many servers and users. But security and compliance concerns persist regarding multi-tenancy of the most sensitive enterprise data and applications, both while resident in the cloud and during transfer over the Internet. Some organizations may not accept this loss of control of their data center function.

  • Private cloud services offer the ability to host applications or virtual machines in a company’s own infrastructure, thus providing the cloud benefits of shared hardware costs (thanks to virtualization, the hardware is abstracted), federated resources from external providers, the ability to recover from failure, and the ability to scale depending upon demand. There are fewer security concerns because existing data center security stays in place, and IT organizations retain data center control. But because companies must buy, build, and manage their private cloud(s), they don’t benefit from lower up-front capital costs and less hands-on management. Further, their operational processes must be adapted whenever existing processes are not suitable for a private cloud environment. They are just not as elastic or cost-effective as public clouds.

  • Hybrid clouds are just a mix of at least one public cloud and one private cloud, combined with your existing infrastructure. Hybrid cloud interest is powered by the desire to take advantage of public and private cloud benefits in a seamless manner. Hybrid combines the benefits and risks of public and private: offering security, compliance, and control of the enterprise private cloud for sensitive, mission-critical workloads, and scalable elasticity and lower costs for apps and services deployed in the public cloud.

That combination of operational flexibility and scalability for peak and bursty workloads is the ideal goal, but the reality is that hybrid cloud solutions are just emerging, require additional management capabilities and come with the same kind of security issues for data moved between private and public clouds.

Transformational Change or Legacy Environment?
The diversity of cloud offerings means businesses evaluating various cloud computing options must decide how to integrate cloud resources with their legacy equipment, applications, people and processes, and determine whether and how this will transform their business IT or simply extend what they have today and plan for the future.

The reality of cloud environments is that they will need to coexist with the legacy environments. A publicly traded firm with thousands of deployed apps is not going to rewrite them for the public cloud.

One determining factor may be whether the services being deployed to the cloud are “greenfield” (lacking any constraints imposed by prior work), or “brownfield” (development and deployment in the presence of existing systems). In the absence of constraints, greenfield applications are more easily deployed to the cloud.

Ideally, hybrid solutions allow organizations to create or move existing applications between clouds, without having to alter networking, security policies, operational processes or management and monitoring tools. But the reality is that, due to issues of interoperability, mobility, differing APIs, tools, policies and processes, hybrid clouds generally increase complexity.

The Forecast Is Cloudy, Turning Sunny
Where this is all headed is that, for the foreseeable future, many organizations will employ a mixed IT environment that includes both public and private clouds as well as non-cloud systems and applications, because the economics are so attractive. But as they adopt the cloud, enterprise IT shops will need to focus on security, performance, scalability and cost, and avoid vendor lock-in, in order to achieve overall efficiencies.

Security concerns will be decisive for many CIOs, but companies are increasingly going to move all but their most sensitive data to the cloud. Companies will weave together cloud and non-cloud environments, and take steps to ensure that security is maintained across both.

Non-mission-critical applications -- such as collaboration, communications, customer-service and supply-chain tools -- will be excellent candidates for the public cloud.

There’s a Cloud Solution for That

As hybrid cloud offerings mature, cloud capabilities will be built into a variety of product offerings, including virtualization platforms and system management suites. Vendor and service provider offerings will blur the boundaries between public and private environments, enabling applications to move between clouds based on real-time demand and economics.

In the not-too-distant future, hybrid cloud platforms will provide capabilities to connect and execute complex workflows across multiple types of clouds in a federated ecosystem.

Products from Amazon, HP, IBM, Red Hat, VMware and others offer companies the ability to create hybrid clouds using existing computing resources, including virtual servers, and in-house or hosted physical servers.

There are also hybrid devices that are designed to sit in data centers and connect to public cloud providers, offering control and security along with the cost savings of connecting to the cloud. For example:

  • Red Hat open-source products enable interoperability and portability via a flexible cloud stack that includes its operating system, middleware and virtualization. The company recently announced its own platform-as-a-service offering, OpenShift (for the public cloud), and an infrastructure-as-a-service offering, CloudForms, which is a private cloud solution.
  • VMware’s vCenter Operations combines the data center functions of system configuration, performance management and capacity management. It also supports virtual machines that can be deployed either inside the data center or beyond the firewall in the cloud.

Are we there yet?