New Programming Languages to Watch

Programming ultimately is about building a better mousetrap, and sometimes that includes the programming languages themselves. Today, there are at least a dozen up-and-coming languages vying to become the next C++ or Java.

Should you care? The definitive answer: It depends. One big factor is whether an emerging language stands a chance of building a following among developers, vendors and the rest of the IT industry, or whether it’s doomed to be a historical footnote. That’s always tough to predict, but a newcomer stands a better chance when its creator is a major IT player, such as Google, which created Dart as a replacement for JavaScript.

“For any language that hopes to play a similar role to JavaScript, such as Dart, some level of support would have to come from browser vendors,” says Al Hilwa, applications development software program director at IDC, an analyst firm. “Google would have to come up with a plug-in for each browser that compiles to JavaScript, and some browsers may block that for one reason or another.”

The role of a major vendor isn’t limited to creating the language itself. In some cases, the vendor can be influential when it creates an ecosystem and then encourages use of a certain language.

“For example, Microsoft pushed C# hard when it launched .NET a decade ago, which resulted in great traction for that language,” says Hilwa. “The Microsoft developer system is somewhat unique in that, up to this point, Microsoft has huge credibility in moving the ecosystem with its actions given its dominance in personal computing. But even then, many languages are made available to please specific factions of developers, and not necessarily because they are expected to dominate. I would put F# in this category.”

When a particular language is able to build market share, it’s often at the expense of an incumbent -- hence the better mousetrap analogy. Time will tell whether that’s the case with Dart and JavaScript.

“JavaScript is much maligned, though it’s often used primarily as a syntax approach for all manner of different specific semantic implementations,” says Hilwa. “Browsers purport to support JavaScript but have great variations. Other server technologies use the JavaScript syntax but solve different types of problems, such as Node for asynchronous server computing.”

Emerging languages also have to overcome the fact that incumbency often has its privileges -- or at least an installed base of products and people that are unwilling or unable to change.

“The issue with launching new languages to replace old and potentially inferior ones is the body of code written and the mass of developers who have vested skills,” says Hilwa. “It’s very hard to create a shift in the industry due to these effects. Such a change requires a high level of industry consensus and a sustained multivendor push that involves the key influential vendors.

“For example, if IBM was to put its power behind Dart, then that might help it. It may be hard for Google to muster up such long-term commitment given its culture of focus on the hot and new.”

Meet Dart, F# and Fantom

Here’s an overview of three emerging languages that, at this point, seem to have enough market momentum that enterprises should keep an eye on them:

  • Dart is a class-based language that’s supposed to make it easier to develop, debug and maintain Web applications, particularly large and thus unwieldy ones. Its syntax is similar to JavaScript’s, which should make Dart easier for Web developers to pick up. Dart’s creators say one goal is to make the language applicable for all devices that use the Web, including smartphones, tablets and laptops, as well as all major browsers.
  • F# is one of the elder newcomers in the sense that Microsoft began shipping it with Visual Studio 2010. Pronounced “F sharp,” it’s a functional-style language that’s designed to integrate easily with imperative .NET languages such as C#. It also supports parallel programming, which is increasingly important as multicore processors become more common.

  • Fantom is designed to enable cross-platform development, spanning Java VM, .NET CLR and JavaScript in browsers. “But getting a language to run on both Java and .NET is the easy part,” say its creators. “The hard part is getting portable APIs. Fantom provides a set of APIs that abstract away the Java and .NET APIs. We actually consider this one of Fantom’s primary benefits, because it gives us a chance to develop a suite of system APIs that are elegant and easy to use compared to the Java and .NET counterparts.” Fantom’s ease of portability means that it could eventually be extended to target Objective-C for the iPhone, LLVM or Parrot. (The portable-API pattern is sketched below.)
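To make the portable-API idea concrete, here is a minimal Java sketch of the pattern Fantom’s creators describe: application code written against one abstract interface, with each runtime supplying its own binding. The names are invented for illustration and are not Fantom’s actual APIs.

    // Hypothetical illustration of a portable API: application code programs
    // against one interface, and each runtime (Java VM, .NET CLR, JavaScript)
    // supplies its own binding behind the scenes. Names are invented.
    public interface PortableFile {
        boolean exists();
        String readAllText();
    }

    // JVM binding, implemented with java.io/java.nio; a CLR binding would
    // wrap System.IO, and a browser binding might wrap local storage.
    class JvmFile implements PortableFile {
        private final java.io.File file;

        JvmFile(String path) { this.file = new java.io.File(path); }

        public boolean exists() { return file.exists(); }

        public String readAllText() {
            try {
                return new String(java.nio.file.Files.readAllBytes(file.toPath()));
            } catch (java.io.IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

Application code that depends only on PortableFile can then run on any platform with a binding, which is essentially the promise the Fantom team is making at the scale of a whole standard library.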

Could Windows Server 8’s SMB 2.2 Protocol Spur Cloud Development?

Developers are hopeful that Windows Server 8’s support of the Server Message Block (SMB) 2.2 file-sharing protocol will spur application development in both the virtualization and cloud markets.

Developers believe that SMB 2.2 will allow compliant applications to deliver significantly faster performance, along with greater scalability and reliability, in virtualized and cloud environments. And given that the vast majority of workloads running on host-based systems today are Windows-based, they have reason to believe that Microsoft can also gain a leg up on some of its platform competitors.

“This will lead to vastly improved performance for bigger workloads. Microsoft may have been slow to the cloud table, but their efforts (in supporting SMB 2.2) bring them fully to the banquet,” says Jerry Carter, CTO of Likewise Software Inc.

The agenda of Microsoft’s archrival, VMware, is to virtualize the operating system and the hypervisor, while Microsoft’s agenda is to better enable application workloads, which drives business value. But no matter who emerges as the frontrunner, what matters more is that SMB 2.2 succeeds, says Carter.

While Microsoft supported SMB versions 2.0 and 2.1, the company’s core server-based applications will support the protocol for the first time with version 2.2.

“We supported the file server before, but it was targeted towards having an SQL database or having Hyper-V using an SMB file server as its back-end storage,” said Thomas Pfenning, general manager of Microsoft’s Server and Cloud Division, in briefings last fall.

Lowering Shared Storage Costs With SMB 2.2
In today’s environment, IT shops typically require shared storage that allows multiple nodes in a cluster to see all the disks, which means having either a Fibre Channel SAN array or an iSCSI array -- an expensive proposition for many Microsoft shops.

“It is more expensive to pull that off in a Windows environment because you have costs associated with Hyper-V plus the cost of the shared storage back-end,” says Eugene Lee, a senior systems administrator with a large bank in Charlotte, N.C. “When you can use a file server as the storage back-end, it can knock some dollars off the overall cost because you don’t need Fibre Channel or an expert storage administrator to guide you through the nuances of block storage.”

The significance of SMB 2.2 is heightened by the ongoing industry trend of moving from block to file storage. The limitations of previous versions of the Common Internet File System (CIFS) protocol led to the widespread adoption of block-based storage for both applications and virtualization. It’s no secret that the performance of back-end storage has long been a major choke point to scaling virtualization and cloud infrastructures.

“Back-end storage has been a weight corporate developers dragged with them when trying to scale virtualization and cloud infrastructures. SMB 2.2 better positions Microsoft to exploit the cost and performance advantages their stack delivers, as well as move workloads to Microsoft virtualization and cloud platforms,” says Len Barney, a purchasing agent with a large transportation company in Jacksonville, Fla.

File-based Storage Gains Street Cred
But with the arrival of SMB 2.2, file-based storage now becomes a more credible option for provisioning Microsoft-based workloads, some believe.

“It looks like a smart technology move and makes sense from a cost standpoint. The fewer specialists you have to take care of the more complex storage back-ends, the better. I also like that SMB 2.2 is supported with Windows Server 8,” says Lee.

The move from block to file storage could prove significant for the storage industry. With SMB 2.2, file-based storage becomes not only a credible option, but the recommended option for provisioning Microsoft workloads -- a potential sea change, industry analysts generally agree.

EMC, NetApp and HP have also thrown their support behind SMB 2.2, saying they plan to deliver products around the time Microsoft ships Windows Server 8. Microsoft has backed away from naming a specific date for delivery, although company officials say it would be sometime in 2012. The first official beta release of the product, however, is expected in late February.

Does Your Enterprise Need Its Own App Store?

By 2015, more than 65 percent of North American business users will have a smartphone, and more than 26 percent of enterprises have already deployed tablets or are at least considering them. That adoption has many enterprises developing smartphone and tablet apps for internal use. These business-to-employee apps are part of a category that will have more than 830 million users by 2016, ABI Research predicts.

These trends have enterprises such as IBM and Medtronic creating internal app stores, which ensure that employees, contractors and other authorized users get the apps that match their device models and job responsibilities. It’s a strategy built around security, productivity and convenience. Private app stores:

  • Enhance employee productivity. When the enterprise’s app store ensures that employees get the right app version for their model of smartphone or tablet, they’re more productive because they’re not tying up the help desk trying to make the wrong version work. 

  • Ensure that employees securely get the right apps and data for their responsibilities, provided the app store is configured with this functionality. For example, the EMEA sales team would get sales-related apps for their region instead of the APAC versions. Just as important, employees who don’t work in sales -- as well as non-employees -- can’t download those apps, preventing unauthorized access to the information the sales apps provide (see the sketch after this list). “For an enterprise store, you’re doing app distribution based on entitlements and roles, which means that you have to have tight integration and secure access with your identity infrastructure,” says Chris Perret, CEO of Nukona, a vendor that specializes in enterprise app stores.

  • Provide enterprises with deeper insights into who’s using their apps and how. For example, Apple’s App Store ensures end user privacy by giving developers only high-level statistics such as the volume of downloads on a weekly basis. But at Medtronic’s internal store, which launched in January 2011, everyone has to log in as an employee or contractor, which gives the company’s developers more insight. “They often want to be able to identify who’s installing their app, how long they’ve had it and if they’ve installed a new version,” says Jim Freeland, head of Medtronic’s enterprise mobility group. 

  • Offer access to user data that enables developers to determine whether their target audience is actually using their app. They can also conveniently send alerts to users when a new version is available. “You have a better way to stay in touch with users than what the App Store can provide,” says Freeland. Like the App Store and Android Market, private app stores sometimes give users the opportunity to rate and review apps, providing developers with additional insights.
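The entitlement-based distribution Perret describes can be pictured with a short Java sketch. Everything below is hypothetical: a real store would query the identity infrastructure (a corporate directory, for example) rather than a hard-coded map.

    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch of role-based app distribution: the store checks
    // the user's entitlements before serving a download.
    public class EnterpriseAppStore {
        // Stand-in for the identity infrastructure; hard-coded for the sketch.
        private final Map<String, Set<String>> entitlementsByUser = Map.of(
                "alice", Set.of("sales-emea"),
                "bob",   Set.of("engineering"));

        // True only if the user's role entitles them to the app.
        public boolean mayDownload(String user, String requiredEntitlement) {
            return entitlementsByUser
                    .getOrDefault(user, Set.of())
                    .contains(requiredEntitlement);
        }

        public static void main(String[] args) {
            EnterpriseAppStore store = new EnterpriseAppStore();
            System.out.println(store.mayDownload("alice", "sales-emea")); // true
            System.out.println(store.mayDownload("bob", "sales-emea"));   // false
        }
    }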

Couldn’t enterprises make their private app stores accessible to customers and resellers too? Not necessarily. Medtronic says its store will always be open only to employees and contractors. “We are unable to distribute apps to customers, patients or third parties other than going through the iTunes App Store,” says Freeland. “That’s an agreement that Apple has with all companies that buy their enterprise developer license.”

Another difference between public and private is the approval process for new apps and updates. For example, instead of waiting seven business days or longer for Apple to review an app and release it to the App Store, Medtronic handles that task for internally developed apps, and it guarantees its business units that the approval will take no more than five days.

IBM and Medtronic both built their app stores from scratch. But that’s not a viable option for smaller companies that don’t have the internal resources to build their own. That’s why vendors such as Nukona are offering stores on a white-label basis, where enterprises simply add their branding.

Either way, one consideration is protecting confidential data -- not just what the app stores on the device, but also the data in the corporate network. After all, an app is essentially a door into the enterprise. Some mobile OS’s make it easy to share apps between devices, so one way to mitigate that security threat is to use a form of multifactor authentication.

“During the time of distribution, we inject a specific ID into that distributed app so we know that it’s this app with this user on this device,” says Perret.
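Nukona hasn’t published how that injection works, but one plausible construction is a keyed hash computed at distribution time that ties app, user and device together, which the back end can later verify. The sketch below is an assumption, not Nukona’s implementation; all names are invented.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.util.Base64;

    // Hypothetical sketch: derive an ID binding app, user and device at
    // distribution time. Any server holding the key can recompute the tag
    // to confirm it's "this app with this user on this device."
    public class DistributionId {
        public static String issue(byte[] storeKey, String appId,
                                   String userId, String deviceId) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(storeKey, "HmacSHA256"));
            byte[] tag = mac.doFinal(
                    (appId + "|" + userId + "|" + deviceId).getBytes("UTF-8"));
            return Base64.getEncoder().encodeToString(tag);
        }
    }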

Bring Your Own Device (BYOD)
Private app stores also give enterprises a way to deal with the OS fragmentation that occurs when employees are allowed to bring their own tablet or smartphone instead of receiving a company-issued device. Nearly half of all smartphones used by employees worldwide are already employee-owned, says the research firm Strategy Analytics.

As a result, some enterprises have a single app store for multiple OS’s. For example, IBM’s WhirlWind store is a one-stop shop for Android, BlackBerry, iOS and Windows apps. “We recognized early on that there needs to be some commonalities, a single go-to place for folks to find and get mobile applications,” says Bill Bodin, IBM’s CTO for mobility.

With very few exceptions -- such as some government agencies -- private app stores don’t prevent users from accessing the public stores. That’s partly because the enterprises don’t want to undermine the value of smartphones and tablets that employees pay for, even when their employer reimburses them. But enterprises do sometimes segregate the two. “So when people are trying to get to their personal data and the features they pay for, they’re not encumbered by the enterprise challenges for increased authentication, etc.,” says Bodin.


Will a Mobile OS Update Break Your Apps?

It’s one of the biggest headaches in mobile app development: The operating system (OS) vendor issues an update that immediately renders some apps partly or completely inoperable, sending developers scrambling to issue their own updates to fix the problem. For instance, remember when Android 2.3.3 broke Netflix’s app in June, as iOS 5 did to The Economist’s in October? These examples show how breakage can potentially affect revenue -- especially when it involves an app that’s connected to a fee-based service. In the case of enterprise apps, breakage can also have a bottom-line impact by reducing productivity and inundating the help desk.

Tsahi Levent-Levi, CTO of the technology business unit at videoconferencing vendor RADVISION, has spent the past few years trying to head off breakage. His company’s product portfolio includes an app that turns iOS tablets and smartphones into endpoints. With an Android version imminent, his job is about to get even more challenging. Levent-Levi recently spoke with Intelligence in Software about why app breakage is so widespread and so difficult to avoid.

Q: What exactly causes apps to break?

Tsahi Levent-Levi: The first thing to understand is that when you have a mobile platform, usually you have two sets of APIs available to you. The first set is the one that’s published and documented. The other is one that you need to use at times, and this is undocumented. When it is not documented or not part of the official API, it means that the OS vendor might and will change it over time to fit its needs.

For example, we’re trying to reach the best frame rate and resolution possible. To do that well means you need to work at the chip level. So you go into the Android OS NDK, where you write C code and not Java code. Then you go one step lower to access the physical APIs and undocumented stuff of the system level, which is where the chipset vendors are doing some of the work.

Even a different or newer chip from the same vendor is not going to work in the same way. Or the ROM used by a specific handset with the same chip is going to be different from the ROM you get from another one, and the APIs are going to be different as well.
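In Java terms, the defensive pattern this argues for looks something like the sketch below: probe for a vendor-specific call at runtime instead of linking to it directly, and fall back to the documented path when it’s missing or has changed. The method name is invented for illustration.

    import java.lang.reflect.Method;

    // Probe an undocumented, vendor-specific API at runtime rather than
    // calling it directly, so the app survives when a new chip, ROM or OS
    // update removes or changes it.
    public class CodecTuner {
        public static void tune(Object codec, int fps) {
            try {
                Method m = codec.getClass().getMethod("setVendorFrameRate", int.class);
                m.invoke(codec, fps); // undocumented fast path; may vanish
            } catch (ReflectiveOperationException e) {
                // API absent or incompatible on this handset: stay on the
                // documented, lower-performance path instead.
            }
        }
    }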

Q: So to reduce the chances that their app will break, developers need to keep an eye on not only what OS vendors are doing, but also what chipset and handset vendors are doing.

T.L.: Yes, and it depends on what your application is doing. If you’d like to do complex video things, then you need to go to this deep level.

I’ll give you an example. With Android, I think it was version 2.2 where handsets had no front-facing camera. Then iPhone 4 came out with FaceTime, and the first thing that Android handset manufacturers did was add a front-facing camera. The problem with that was that there were no APIs that allowed you to select a camera for Android. If you really wanted to access that front-facing camera, you had to use APIs that were specific to that handset. When 2.3 came out, this got fixed because they added APIs to select a camera.
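That fix is visible in the platform APIs: Android 2.3 (API level 9) added Camera.getNumberOfCameras() and Camera.open(int), so selecting the front-facing camera no longer requires handset-specific calls. A brief sketch (the Camera class was later superseded by camera2, but it is the API being described here):

    import android.hardware.Camera;

    // Select the front-facing camera using the APIs added in Android 2.3.
    public class FrontCamera {
        // Returns the front camera, or null if the handset has none.
        public static Camera openFront() {
            Camera.CameraInfo info = new Camera.CameraInfo();
            for (int id = 0; id < Camera.getNumberOfCameras(); id++) {
                Camera.getCameraInfo(id, info);
                if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
                    return Camera.open(id); // not possible via documented APIs before 2.3
                }
            }
            return null;
        }
    }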

Platforms progress very fast today because there’s a lot of innovation in the area of applications. Additional APIs are being added by the OS vendors, and they sometimes replace, override or add functionality that wasn’t there before. And sometimes you get APIs from the handset vendor or chipset vendor and not from the OS itself. So there is a variance in the different APIs that you can use or should be using.

If the only thing you’re doing is going to a website to get some information and displaying it on the screen, there isn’t going to be any problem. But take streaming video, for example. Each chipset has a different type of decoder and a bit different behavior than another. This is causing problems.

It’s also something caused by Google itself. When Google came out with Android, the media system that they based everything on was OpenCORE. At one point -- I don’t remember if it was 2.1, 2.2 or 2.3 -- they decided to replace it with something totally different. This meant that all of the applications that used anything related to media required a rewrite or got broken. The new interface is called Stagefright, and there are rumors that this is going to change in the future as well.

Q: With so many vendor implementations and thus so many variables, is it realistic for developers to test their app on every model of Android or iOS device? Or should they focus on testing, say, the 25 most widely sold Android smartphones because those are the majority of the installed base?

T.L.: You start with the devices that most interest you, and then you enhance it because you get problem reports from customers. Today, for example, I got a Samsung Galaxy S. When I go to install some applications from the Android Market, it will tell me that I cannot do it because my phone doesn’t support it. That’s a way that Google is trying to deal with it, but it doesn’t always work because of the amount of variance.

The way it should be done as a developer is to start from the highest level of abstraction that you can in order to build your application. The next step would be to go for a third-party developer framework like Appcelerator, which allows you to build applications using HTML5, JavaScript and CSS. You take the application that you build there, and they compile it and make an Android or an iOS application. If you can put your application in such a scheme, it means you will run on the largest number of handsets to begin with, because these types of frameworks are built for this purpose.

If you can’t do that, then you’ll probably need to do as much as you can in Android at the software development kit (SDK) level. If that isn’t enough, then you go down into the native development kit (NDK) level. And if that isn’t enough, then you need to go further down into the undocumented system drivers and such. And you build the application in a way that the lower you go, the less code you have there.
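A rough Java sketch of that layering: the bulk of the application stays at the SDK level, a thin native (NDK) layer handles the performance-critical piece, and the app degrades gracefully when that layer is unavailable. The native method and library names are invented for illustration.

    // Keep most logic in Java; isolate the fragile, low-level piece behind a
    // small native boundary that can fail without taking the app down.
    public class FrameDecoder {
        private static boolean nativeReady;

        static {
            try {
                System.loadLibrary("decoder"); // thin NDK layer written in C
                nativeReady = true;
            } catch (UnsatisfiedLinkError e) {
                nativeReady = false;           // chip or ROM we don't support natively
            }
        }

        private static native byte[] nativeDecodeFrame(byte[] input);

        public static byte[] decode(byte[] input) {
            if (nativeReady) {
                return nativeDecodeFrame(input); // small, fast, fragile layer
            }
            return decodeInJava(input);          // documented-API fallback
        }

        private static byte[] decodeInJava(byte[] input) {
            // Slower but portable path; placeholder body for the sketch.
            return input;
        }
    }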

Maximizing Cloud Uptime

For enterprises, the cloud can be as much of a problem as an opportunity. If employees can’t access the cloud, or if the data centers and other cloud infrastructure suffer an outage, productivity and sales can grind to a halt. Wireless is the latest wild card: By 2016, 70 percent of cloud users will access those applications and services via wireless, Ericsson predicts. Wireless is even more unpredictable than fiber and copper, so how can enterprises ensure that wireless doesn’t jeopardize their cloud-based systems?

Bernard Golden -- author of Virtualization for Dummies and CEO of HyperStratus, a cloud computing consultancy -- recently spoke with Intelligence in Software about the top pitfalls and best practices that CIOs and IT managers need to consider when it comes to maximizing cloud uptime.

Q: What are the top causes of cloud service unreliability? What are the weak spots?

Bernard Golden: There are issues that are common when using any outside resource, and resulting questions you need to ask to identify the weak spots: Does the network go down between you and the provider? In terms of the external party’s infrastructure operations, how robust is their computing environment? You might have questions about their operational practices and support: Do they apply patches so things don’t crash?

If cloud computing is built on virtualization, and virtualization implies being abstracted from specific hardware dependence, have you designed your application so it’s robust in the face of underlying hardware failure? That’s more about whether you’ve done your proper application architecture design. Many people embrace cloud computing because of its ability to support scaling and elasticity. Have you designed your application to be robust in the face of highly variable user loads or traffic?
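Golden doesn’t prescribe a specific technique, but one common building block for that kind of robustness is retrying transient failures with exponential backoff, so a brief outage or overload doesn’t cascade through the application. A minimal, hypothetical Java sketch:

    import java.util.concurrent.Callable;

    // Retry a call to an external (cloud) service, backing off exponentially
    // so a struggling service gets room to recover.
    public class Retry {
        public static <T> T withBackoff(Callable<T> call, int maxAttempts)
                throws Exception {
            long delayMs = 100;
            for (int attempt = 1; ; attempt++) {
                try {
                    return call.call();
                } catch (Exception e) {
                    if (attempt >= maxAttempts) throw e; // give up; surface the error
                    Thread.sleep(delayMs);
                    delayMs *= 2;
                }
            }
        }
    }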

Q: What can enterprises do to mitigate those problems? For example, what should they specify in their service-level agreements (SLAs)?

B.G.: There’s a lot of discussion about SLAs from cloud providers, but really it’s every link in the chain that needs to be evaluated. Do you have a single point of failure? Maybe you need two different connectivity providers.

Some people put a lot of faith in SLAs. We tend to caution people: At the end of the day, SLAs are great. They’re sort of like law enforcement: It doesn’t prevent crime, but it responds to it. It’s not, ‘I’ve got an SLA, so my system will never go down.’ Rather, an SLA means that the vendor pledges to have a certain level of availability.

So you have to evaluate, what do I expect is the likelihood that they’re going to be able to accomplish that? You need to make a risk evaluation. For example, there was a big outage at Amazon in April 2011. Many early-stage startups use Amazon as their infrastructure, so a number of them went down until Amazon was able to fix that.

There were other companies that had evaluated the risk of something like that happening in designing their application architectures and their operational processes. They said, ‘The importance of this application is such that we need to take the extra time and care and investment to design our overall environment so that we’re robust in the event of failure.’

Whatever you get from the SLA will never make up for the amount of lost business in the case of a failure.

Q: Sometimes there’s also a false sense of security, such as when an enterprise buys connectivity from two different providers to ensure redundant access to the cloud. But it could turn out that provider No. 1 is reselling provider No. 2’s network, and a single fiber or copper cut takes out both links.

B.G.: You get two apologies instead of one. That’s a really good point. You can characterize that as incomplete homework.

Q: Business users and consumers are increasingly using wireless to access cloud services. What can enterprises do to minimize the risk that wireless’ vagaries will disrupt that access?

B.G.: That strikes me as very challenging, depending on the type of wireless. With an internal Wi-Fi network, for example, you could mitigate those kinds of risks pretty well, and they’re probably not a lot worse than if you had wired Ethernet.

Out in the field, if you’re talking about somebody using a smartphone or tablet connected over 3G, I don’t know that there’s much a company can do about that. You could evaluate who has the best 3G network, but you’re always going to face the issue of overloads or dead spots.

Q: That goes back to your point about doing your homework. For example, an enterprise might choose to get wireless service from Sprint because it resells 4G WiMAX service from Clearwire. So if Clearwire’s network is unavailable in a particular market, the enterprise’s employees still can get cloud access over 3G, which is a completely separate, Sprint-owned network. The catch is that those options are pretty rare.

B.G.: It is, unfortunately. It would be great if there were more WiMAX.

Lots of times, people over-assess the risks of the cloud while under-assessing the risks of whatever the alternative might be. The fact is that most organizations don’t have redundant connectivity to their data center from two different providers from two different sides of the buildings. They’re not as careful with their own stuff as they insist someone else is.

Q: Or they’ll do it right for their headquarters, but then not be as diligent for their satellite offices.

B.G.: Absolutely. What happens a lot is that people make intuitive risk assessments. When it comes time to make that evaluation, it’s, “Well, we’ve got to support the headquarters, but we don’t have enough budget for those remote offices.” Now what they do is say, “If you’re in a remote office and it goes down, just go down to Starbucks.”

We always tell our clients that cloud providers, in terms of what they bring to the table, are probably going to be as good as best practices or much better than what’s available, because those things are core competencies for them. Most IT organizations are cost centers, and everybody is always asking: “How can we squeeze this? How can we put this off?”

Major cloud providers don’t have that option. They can’t say, ‘We didn’t upgrade to the latest Microsoft patch because that would require us to move to the newest service pack.’ They just can’t do that from a business perspective.
