What’s Ahead for Functional Languages?

Languages that support functional programming are moving out of the fringe, now being put to use in mobile app development, cloud computing and multicore platforms.

Functional programming, which has its origins in mathematical logic, has long been something of an academic exercise. But some languages are gaining visibility. Those include F# and Scala, which incorporate features of both functional and object-oriented languages, and Clojure, a Lisp derivative described as a predominantly functional programming language.

Going Mobile
Active areas for functional programming include Android app development, where Scala is cultivating a niche. Developer productivity is the main consideration behind the use of Scala in this setting, according to Donald Fisher, chief executive officer of Typesafe, and Phil Bagwell, the company’s marketing manager. Typesafe was launched earlier this year and provides a packaging of Scala, Akka middleware and developer tools on an open-source basis.

Since Scala is a succinct language that supports the functional programming style, developers typically can get more done with fewer lines of code, say the executives, who participated in an email interview.
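The claim is easy to make concrete. Scala itself isn't shown here; the sketch below uses Python purely to illustrate the stylistic contrast the executives are describing, between an imperative loop with mutable state and a single functional expression.

```python
def sum_even_squares_imperative(xs):
    """Imperative style: mutable accumulator and an explicit loop."""
    total = 0
    for x in xs:
        if x % 2 == 0:
            total += x * x
    return total

def sum_even_squares(xs):
    """Functional style: one expression, no mutable state."""
    return sum(x * x for x in xs if x % 2 == 0)
```

Both functions compute the same result; the functional version simply has less surface area for bugs to hide in, which is the productivity argument in miniature.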

Scala’s interoperability with Java (it runs on the Java Virtual Machine) means the language meshes with the Java-based development environment and libraries Android uses, say the executives. They also cited support in the Scala toolchain for building Android apps, pointing to the Android plugin for Scala’s Simple Build Tool (sbt).

“Finally, like Java itself, Scala is statically typed,” note Fisher and Bagwell. “That means it’s easier to optimize at compile time for constrained runtime environments, such as Android mobile devices.”

The F# language is also finding adherents in the mobile space.

Adam Granicz, chief executive officer at IntelliFactory, says his company, which offers F# consulting, development and training services, has heavily invested in applying F# to Web and mobile application development. The language offers a “significantly more effective way to develop Web and mobile applications than traditional mainstream technologies,” he says.

Users of the company’s WebSharper product, a Web and mobile framework built around F#, report ease of use and productivity gains. IntelliFactory, based in Kiskunlacháza, Hungary, uses F# exclusively on all of its projects, including several enterprise-scale Web applications implemented with WebSharper, says Granicz.

But F# is “not only an exceptionally good fit to certain types of applications,” says Granicz, who notes that the language is “at least as well suited universally as C# or VB.”

He says F# and the functional style of programming produce less code than other approaches.

“In most situations, we observe a two- to five-time drop in code size, and that clearly has a strong correlation to productivity and maintainability, even in the short run,” says Granicz.

Multicore, Cloud Implications
Functional languages facilitate parallelism, and programmers are now taking advantage of that for multicore software development. Scala’s combination of object-oriented and functional programming traditions makes it a natural fit for “Java developers who face challenges in building parallel and concurrent applications that can effectively run on multicore hardware,” say Fisher and Bagwell.

“That applies to emerging mobile devices as well as desktops and servers,” say the Typesafe officials.
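Why the functional style maps well onto multicore hardware comes down to purity: a function with no shared mutable state can be run on any core without locks. A hedged sketch of that idea (in Python rather than Scala, and with an illustrative chunking scheme, not Typesafe's actual APIs):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def word_count(lines):
    """A pure function over its input: no shared state, safe on any core."""
    c = Counter()
    for line in lines:
        c.update(w for w in line.split() if w)
    return c

def parallel_word_count(lines, chunks=4):
    """Split the input, count the chunks in parallel, merge partial counts.

    (In CPython a CPU-bound job would use ProcessPoolExecutor because of
    the GIL; the structure of the computation is the same either way.)
    """
    size = max(1, len(lines) // chunks)
    parts = [lines[i:i + size] for i in range(0, len(lines), size)]
    total = Counter()
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(word_count, parts):
            total.update(partial)
    return total
```

Because `word_count` never mutates anything outside itself, the split-and-merge is trivially correct; that is the property functional languages encourage by default.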

“We are routinely using F# asynchronous computations -- a model that inspired the new async features in C# 5 -- and various parallel extensions to implement multithreaded applications,” says Granicz. “These also carry over to Web and mobile development with WebSharper, with highly scalable back-end components and non-blocking user interfaces.”
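F#'s async workflows themselves aren't reproduced here, but the compositional shape Granicz describes can be sketched with Python's asyncio, which works in the same spirit: asynchronous steps chain without blocking a thread between them. The I/O calls below are hypothetical stand-ins.

```python
import asyncio

async def fetch_user(user_id):
    await asyncio.sleep(0)          # stands in for a network round trip
    return f"user-{user_id}"

async def fetch_order_count(user):
    await asyncio.sleep(0)          # another stand-in for slow I/O
    return len(user)

async def order_count(user_id):
    # The awaits chain the two calls; no thread sits blocked in between,
    # which is what keeps back ends scalable and UIs non-blocking.
    user = await fetch_user(user_id)
    return await fetch_order_count(user)
```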

Functional languages are also enlisted in cloud computing. Heroku’s cloud application platform, for instance, supports Clojure and, as of October, Scala. As for the latter development, Typesafe teamed with Heroku, a Salesforce.com company, to provide Scala support on the platform.

Industry Interest
Functional languages are attracting interest from a number of industries and lines of business. Granicz says groups within a diverse set of industries employ F#. He says some groups are less open about their usage than others, as they consider F# to be a competitive advantage.

“We have worked with customers from a number of domains, including marketing, printing and health services, and are finding new connections to other industries through our Web and mobile technology,” says Granicz.

At Typesafe, Fisher and Bagwell say that the company’s customers include financial services firms and large-scale Internet companies -- organizations known to be early adopters of Scala. But, overall, customers represent a variety of companies that use enterprise Java today, they add.


Security Issues for Multicore Processors

If hackers love one thing, it’s a big pool of potential targets, which is why Android and Windows platforms are attacked far more often than BlackBerry and Mac OS X. So, it’s no surprise that as the installed base of multicore processors has grown, they’ve become a potential target.

That vulnerability is slowly expanding to mobile devices. Within three years, nearly 75 percent of smartphones and tablets will have a multicore processor, predicts In-Stat, a research firm. That’s another reason why CIOs, IT managers and enterprise developers need to develop strategies for mitigating multicore-enabled attacks.

Cambridge University researcher Robert Watson has been studying multicore security attacks, such as those on system call wrappers, for several years. He recently spoke with Intelligence in Software about multicore vulnerabilities and what the IT industry is doing to close the processor back door.

Q: As the installed base of multicore processors grows in PC and mobile devices, are they becoming a more attractive target for hackers?

Robert Watson: One of the most important transitions in computer security over the last two decades has been the professionalization, on a large scale, of hacking. Online fraud and mass-market hacking face the same pressures that more conventional online businesses do: how to reach the largest audience, reduce costs, and reuse and, wherever possible, automate solutions. This means going for the low-hanging fruit and targeting the commodity platforms. Windows and Android are cases in point, but we certainly shouldn’t assume that Apple’s iOS and RIM’s BlackBerry aren’t targets. They are major market players, as well.

Multicore attacks come into play in local privilege escalation. They may not be how an attacker gets their first byte of code on the phone -- that might be conventional buffer overflows in network protocols and file formats, or perhaps simply asking the user to buy malware in an online application store.

Multicore attacks instead kick in when users try to escape from sandboxing on devices, typically targeted at operating system kernel concurrency vulnerabilities. In many ways, it's quite exciting that vendors like Apple, Nokia and Google have adopted a "sandboxed by default" model on the phone. They took advantage of a change in platform to require application developers to change models. This has made the mobile device market a dramatically better place than the cesspool of desktop computing devices. However, from the attacker perspective, it's an obstacle to be overcome, and multicore attacks are a very good way to do that.

Q: What are the primary vulnerabilities in multicore designs? What are the major types of attacks?

R.W.: When teaching undergraduates about local and distributed systems programming, the term "concurrency" comes up a lot. Concurrency refers to the appearance, and in some cases the reality, of multiple things going on at once. The application developer has to deal with the possibility that two messages arrive concurrently, that a file is changed by two programs concurrently, etc.

Reasoning about possible interleavings of events turns out to be remarkably difficult to do. It's one of the things that makes CPU and OS design so difficult. When programmers reason about concurrency wrong, then applications can behave unpredictably. They can crash, data can be corrupted and in the security context, this can lead to incorrect implementation of sandboxing.
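A self-contained illustration of the kind of bug being described (a generic check-then-act race, not Watson's actual system call wrapper attack): two threads can both pass a check before either acts, so a limit is enforced only by luck. Making check-and-act a single atomic step closes the window.

```python
import threading

class UnsafeQuota:
    """Broken: the check and the increment are separate steps, so two
    threads interleaved between them can both take the last slot."""
    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    def try_acquire(self):
        if self.used < self.limit:      # check ...
            self.used += 1              # ... then act: not atomic!
            return True
        return False

class SafeQuota:
    """Fixed: a lock makes the check and the act one indivisible step."""
    def __init__(self, limit):
        self.limit = limit
        self.used = 0
        self._lock = threading.Lock()

    def try_acquire(self):
        with self._lock:
            if self.used < self.limit:
                self.used += 1
                return True
            return False
```

A sandbox that performs its security check in one step and the guarded action in another has exactly the `UnsafeQuota` problem, which is what makes these interleavings a security issue and not just a crash bug.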

In our system call wrapper work, we showed that by exploiting concurrency bugs, attackers could bypass a variety of security techniques, from sandboxing to intrusion detection. Others have since shown that these attacks work against almost all mass-market antivirus packages, allowing viruses to go undetected. Similar techniques have been used to exploit OS bugs in systems such as Linux, in which incorrect reasoning about concurrency allows an application running with user privileges to gain system privileges.

It is precisely these sorts of attacks that we are worried about: the ability to escape from sandboxing and attack the broader mobile platform, stealing or modifying data, gaining unauthorized access to networks, etc.

Q: What can enterprise developers, CIOs and IT managers do to mitigate those threats? And is there anything that vendors such as chipset manufacturers could or should do to help make multicore processors more secure?

R.W.: Concurrency is inherent in the design of complex systems software, and multicore has brought this to the forefront in the design of end-user devices. It isn't just a security problem, though. Incorrect reasoning about concurrency leads to failures of a variety of systems. Our research and development communities need to continue to focus on how to make concurrency more accessible.

Enterprise developers need to be specifically trained in reasoning about concurrency, a topic omitted from the education of many senior developers, who were trained before the widespread adoption of concurrent programming styles, and often taught badly even to more junior developers. Perhaps the most important thing to do here is to avoid concurrency wherever possible. It is tempting to adopt concurrent programming styles because that is the way things are going. Developers should resist!

There are places where concurrency can't be avoided, especially in the design of high-performance OS kernels, and there, concurrency and security must be considered hand-in-hand. In fact, concerns about concurrency and security, such as those raised in our system call wrapper work, have directly influenced the design of OS sandboxing used in most commercially available OSs.

For CIOs and IT managers, concurrency attacks are just another weapon in the hacker’s arsenal that they need to be aware of. Concurrency isn’t going away. We use multicore machines everywhere, and the whole point of networking is to facilitate concurrency.

What they can do is put pressure on their vendors to consider the implications of concurrency maturely, and on their local developers to do the same. Where companies produce software-based products, commercial scanning tools such as Coverity’s Prevent are increasingly aware of concurrency, and these should be deployed as appropriate. For software developers, we shouldn’t forget training: first, to avoid risky software constructs, and second, to know how to use them correctly when they must be used.

We should take some comfort in knowing that hardware and software researchers are deeply concerned with the problems of concurrency, and that this is an active area of research. But there are no quick fixes since the limitations here are as much to do with our ability to comprehend concurrency as with the nature of the technologies themselves.


Getting to Know NoSQL

Database technology may not rank among a developer’s favorite things, but a class of software dubbed NoSQL might rate a look.

NoSQL databases have gained a higher profile in recent months, winning over a broader base of customers and attracting investment. Advocates say NoSQL bests relational database technology when it comes to the task of handling very large datasets, the so-called “Big Data” problem. Beyond scalability, other selling points that could win over programmers include greater flexibility in data models.

The technology is expected to experience high growth. Market Research Media, a San Francisco company that analyzes markets including IT, predicts the worldwide NoSQL market will hit $1.8 billion by 2015. The sector will expand at a compound annual growth rate of 32 percent between 2011 and 2015, according to the firm.

NoSQL software has been largely open-source. Examples include the Apache Cassandra Project, which Facebook open-sourced in 2008; Apache’s CouchDB; and MongoDB. Some software is published in both open-source and commercial versions. That’s the case with the Riak database, which is produced by Basho Technologies Inc.

The technology initially took root among social media firms that were struggling with data management issues. But NoSQL’s appeal now extends beyond that original customer set. Derrick Harris, structure editor and staff writer at GigaOM, points to traditional media companies, which generate a significant volume of unstructured content, as one example.

Market Research Media, meanwhile, points to other vertical market adopters. The company cites successful attempts to develop NoSQL applications in the biotechnology, defense and image/signal processing fields.

Another endorsement comes from the private equity community, which has placed some recent bets on NoSQL. In September, a group of venture capital firms invested $20 million in 10gen, the company that sponsors MongoDB and offers consulting, training and support services for the database. Earlier this year, Basho closed on a $7.5 million round of equity financing.

Getting Acquainted
So, is it time for developers to pile on?

“It is definitely becoming more mainstream,” says Harris, referring to NoSQL. “It’s not such a crazy idea that you might need it for something in the near future.”

In particular, programmers who work on applications that deal with or generate lots of different unstructured data types might want to start learning NoSQL skills, he adds.

Harris cites two reasons to look at NoSQL:

“One is that unstructured data, by definition, doesn’t fit into relational tables, and NoSQL is designed to deal with unstructured data,” he says. “The other is that unstructured data piles up fast -- from Web transactions, machine-generated data, etc. -- and NoSQL databases are designed to scale along with that data and maintain fast performance.”

Tony Falco, chief operating officer at Basho, says both developers and operations personnel tap Riak for a growing class of highly interactive applications. He says some customers start with Riak from the beginning and scale up. Others begin with a relational database and move to NoSQL when they bump into relational technology’s restrictions at scale, he adds.

Scalability isn’t the only potential draw for developers, however. Programmers who find relational database schemas overly rigid may find NoSQL more palatable.

Falco says that developers appreciate the ability to start programming, move into production, and later add a new relationship between objects.

“They have the flexibility to change and express dynamically the relationship between any objects and any sort of entity in an application,” he says. “A developer no longer has to sweat that they are going to make some sort of decision in the beginning that ultimately limits their applications.”
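That schema flexibility can be imitated with a toy in-memory document store, sketched below for illustration only; real NoSQL systems such as Riak or MongoDB add distribution, persistence and querying on top of this basic shape.

```python
class DocStore:
    """A toy 'document store': records are keyed dicts of fields, and no
    two records need to share a schema."""
    def __init__(self):
        self._docs = {}

    def put(self, doc_id, doc):
        self._docs[doc_id] = dict(doc)

    def get(self, doc_id):
        return self._docs.get(doc_id)

    def add_field(self, doc_id, key, value):
        # A new field or relationship can appear on one record later,
        # with no table-wide schema migration.
        if doc_id in self._docs:
            self._docs[doc_id][key] = value
```

The point of the sketch is the last method: adding a relationship to one document touches only that document, which is the "no up-front decision limits the application" property Falco describes.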

Coexistence
The advantages of NoSQL aside, relational databases won’t be disappearing.

“I think there is a maturing in the landscape where it isn’t either/or,” says Falco. “Our point is that relational models are one way of capturing and expressing the relationships between data, and other models are needed as the Web becomes more interactive and far more distributed.”

Market Research Media also suggests that relational database managers will endure.

“Non-relational database is not a replacement but rather a supplement to RDBMS,” according to the company’s report on the NoSQL market.

The market watcher forecasts “gradual convergence” of those technologies into a hybridized ecosystem and a “takeover of NoSQL technology leaders by established RDBMS vendors.”

Market Research Media expects that converged market to arrive around 2015.

Harris says he’ll be watching how legacy vendors get into the NoSQL space.

“NoSQL is going to be part of a portfolio in a vendor’s product line,” says Harris.


LTE: Not So Fast

This December marks two years since the Long Term Evolution (LTE) cellular technology made its worldwide commercial debut. It will be at least another two before its coverage in the U.S. and beyond is comparable to its third-generation (3G) predecessors. For developers of apps and other mobile software, that timeline is one of several factors to keep in mind when deciding when, how and where to use LTE.

For example, LTE’s download speeds are roughly 10 times as fast as such 3G technologies as CDMA2000 1xEV-DO Rev, according to carriers with commercial LTE networks. (Verizon Wireless says its LTE network has average download speeds of 5 to 12 Mbps, although some reviewers report averages of 15 Mbps and peaks of 23 Mbps.)

Yet it’s far too early in LTE’s global rollout to design mobile apps and software with the assumption that LTE speeds are the rule rather than the exception. Instead, apps need to be designed to work well in a world where fallback is common. That’s the process of dropping from LTE to EV-DO, HSPA or even a 2.5G network, such as EDGE or GPRS. (One wrinkle: In the case of HSPA, the fallback actually could be a step up because a few HSPA networks, such as Telstra’s, currently are faster than LTE.)

The trick is to design the app or software so that it masks the fallback as much as possible. But that’s easier said than done, especially in the case of bandwidth-intensive services, such as video, where a sudden drop to half or one-tenth of the previous data rate can be painfully obvious.

“You can do things such as lower the bit rate for your video, or do pre-caching when you’re in LTE coverage areas or when you’re in home or business Wi-Fi coverage,” says Larry Rau, director of Verizon Wireless’ Innovation Centers, which help developers and other companies create LTE-compatible hardware and software.

Depending on the software, dealing with fallback could be as simple as designing it to do certain things only under certain conditions. For example, an antimalware or navigation app could be written so that it doesn’t attempt to download major updates unless the device has LTE coverage.
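In code, that policy reduces to a small decision table keyed on the current network type. The types and bit rates below are hypothetical values for illustration; on Android the actual network type would come from an API such as ConnectivityManager.

```python
# Hypothetical bit-rate policy per network type (values are illustrative).
VIDEO_BITRATE_KBPS = {
    "LTE": 4000,
    "HSPA": 1500,
    "EVDO": 600,
    "EDGE": 120,
}

def video_bitrate_kbps(network):
    """Pick a video bit rate suited to the current network."""
    return VIDEO_BITRATE_KBPS[network]

def allow_large_download(network):
    """Defer heavyweight work, such as major updates, until on LTE."""
    return network == "LTE"
```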

Don’t Overlook the Uplink and Latency
Although LTE is best known for its download speeds, it’s a mistake to overlook its upload capabilities. Those are key for business and consumer applications where the smartphone, tablet or other device frequently sends large amounts of data or video.

Verizon Wireless, for example, says its LTE customers can expect average upload speeds of 2 to 5 Mbps. Some reviewers say they’ve achieved uplink speeds far faster than Verizon’s claimed rates: 10 to 12 Mbps in some areas. By comparison, Verizon promises 3G upload speeds of 500 to 800 Kbps. What all of those figures mean is that, as with the downlink side, it’s important to design apps and other software to provide a good user experience even after falling back to a network with slower uplink capabilities.

Latency is another aspect of LTE that often gets overlooked in the fixation on download speeds. “We see things down in the 30-, 40-, 50-millisecond range,” says Rau. “I would see more on the lower end of that.”

Depending on the fallback network, latency could be 50 to 70 milliseconds -- or even double that, in the case of older technologies. Compared to data rates, big, sudden changes in latency can be even tougher to deal with. “If you’re doing a multiplayer game, and you’re reliant on that LTE latency, falling back into our EV-DO coverage is something you need to contend with,” says Rau.

Don’t Be a Hog
As with multicore mobile processors, LTE’s extra horsepower makes it tempting to stop worrying about bandwidth when developing apps and software. But at least one best practice applies to both 3G and LTE: Don’t constantly ping the network. Even though each of those connections might not require much bandwidth, together they create unnecessary signaling traffic, which can be just as bad -- and sometimes worse. Take, for example, the overly chatty instant-messaging app that single-handedly almost crashed T-Mobile’s 3G network in 2009.

It’s here that LTE carriers can not only help, but also want to help.

“We can help developers learn how to manage the device so your app isn’t doing things like pinging a server,” says Rau. “You don’t want to do things like that. You get online, get the information you need and get offline.”
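One way to follow that advice is to coalesce small requests into batches, so the radio comes up once per batch instead of once per request. A minimal sketch (the batching threshold is arbitrary, and a real implementation would also flush on a timer):

```python
class Batcher:
    """Queue small requests and flush them over one connection, rather
    than waking the radio for each request individually."""
    def __init__(self, send, max_batch):
        self._send = send          # callable taking a list of requests
        self._max_batch = max_batch
        self._pending = []

    def enqueue(self, request):
        self._pending.append(request)
        if len(self._pending) >= self._max_batch:
            self.flush()

    def flush(self):
        if self._pending:
            self._send(list(self._pending))
            self._pending.clear()
```

Seven queued requests with `max_batch = 3` cost three connections instead of seven -- the "get online, get the information you need and get offline" pattern in miniature.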

Another beneficiary of that kind of efficient design is the battery: When the smartphone or tablet isn’t constantly connecting to the network, its transceiver is using far less power. That’s important because with both 3G and LTE, the more dependent business users and consumers become on mobile devices, the less willing they are to be tethered to a wall socket.

Mitigate Mobile Cross-platform Headaches

Android is now the most widely used mobile operating system in both the United States and the world, according to the research firms Canalys and Nielsen. But behind that news lurk a lot of headaches for enterprise developers.

For example, 36 percent of U.S. smartphones run Android, so an Android-only app would miss roughly 2 out of every 3 employees or customers. Unless an enterprise is comfortable with that, its developers have to create apps for at least iOS, BlackBerry or Windows Mobile. If that enterprise is multinational, then its developers can add Symbian to their to-do list.

“We have European customers that say: ‘We know Symbian is going to die. It might take five years, but in the meantime, a large percentage of our workers have Symbian phones,’” says Adam Blum, CEO of Rhomobile, which provides development tools.

If that weren’t enough work for developers, there’s also fragmentation within some operating systems. For example, each Android smartphone vendor often has a unique implementation of the OS, while differences in screen size and resolution create additional variables that affect an app’s user experience.

Web or Native?

Some developers sidestep fragmentation by using “Web services” or “Web apps,” which consist of taking the phone’s browser and stripping off the user interface so the result appears to be an app. One drawback is that the browser isn’t always able to access phone hardware, such as the GPS radio or accelerometer, thus limiting what this pseudo app can do.

“This is a very valid approach if you do not need to do anything fancy,” says Vince Chavy, product management director at RADVISION, a videoconferencing vendor whose portfolio includes endpoint apps that run on Android and iOS devices. “It is easy, and it is quite amazing what you can do with HTML5 and CSS nowadays.”

Some enterprises like Web apps because they leverage their developers’ existing skills. “What we always hear is, ‘Because my developers have Web skills, you get all that productivity and affordability across all phones,’” says Blum.

The alternative is a “native app,” whose benefits include better performance because the software doesn’t have to pull everything from the Web. If the target audience is made up of customers rather than employees, then another benefit is that native apps can be distributed through app stores.

“You will be closer to the look and feel of the device,” says Chavy. “You are only limited by your imagination and the OS SDK APIs.”

Write Once, Run Many

The big downside to native apps is that code written for one OS can’t be reused to create versions for other OSes. The corollary is that if the enterprise doesn’t have developers versed in all of the major code bases, it has to find them, which increases costs and lead time. “You need a different code base for iOS and Android,” says Chavy. “You need to find Objective C developers for your iOS client and Java/Android developers for the Android client. Those are tough to find!”

Hence the growing selection of tools that enable cross-platform app development, such as MoSync, PhoneGap and Rhodes.

“You develop in C/C++,” says Henrik von Schoultz, MoSync co-founder and vice president of marketing and sales. “We will later also use other languages, like JavaScript and Lua. The developer uses MoSync APIs, and at build time we use an intermediate language and have profiles for your target devices. We compile native to the OS you want to target.

“If a feature does not exist on an OS, the system tries to handle that. For example, if the target device doesn’t have GPS, you may use positioning from the cell towers.”
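MoSync’s real profile system lives in its C/C++ toolchain and isn’t reproduced here; the fallback idea itself is simple enough to sketch abstractly in Python (all names below are invented for illustration).

```python
def position_source(has_gps, has_cell_radio):
    """Degrade gracefully, in the spirit of MoSync's device profiles:
    prefer GPS, fall back to cell-tower positioning, then give up."""
    if has_gps:
        return "gps"
    if has_cell_radio:
        return "cell-tower"
    return "unavailable"
```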

So how much time could a developer reasonably expect to save by using MoSync versus developing from scratch for each OS? It depends, but in one case, a developer went from Symbian to Android in about four hours. “If you have an app with loads of logic and not so much UI, you can save up to 80 percent,” says von Schoultz. “But if it’s a very UI-intense app, you may save maybe 30 percent.”

PhoneGap, meanwhile, emphasizes the importance of having both apps and a mobile-friendly website.

“They can use largely the same code in their mobile website as their native app in PhoneGap because we’re just running HTML and JavaScript right there,” says Andre Charland, president and co-founder of Nitobi, PhoneGap’s creator. “We’re compliant with W3C APIs. You’re also future-proofing your app because as the mobile browsers evolve, you can take your PhoneGap applications and push more of that into the browser, while still augmenting the native app experience with native code via plug-ins from PhoneGap.”

One Size Doesn’t Fit All

Tablets and smartphones come in an ever-increasing range of screen sizes and resolutions. As a result, developers also have to ensure that their app looks and works just as well on a 3.5-inch display as on a 4.3-inch one, or on both an Android phone and an Android tablet.

“The first decision is if you want the same GUI on the phone and on the tablet,” says Chavy. “Some developers decide to have the same UX. In that case, they just make sure that when they design the GUI, it can resize nicely by deciding on which area resizes and by setting proper anchors on the objects.”

The catch is that consumers and business users increasingly expect tablet apps to take advantage of the extra screen space.

“For example, in our SCOPIA Mobile application, on the phone, you can see the video or the presentation, and not both at the same time,” says Chavy. “On the tablet, since you have a bigger screen, we have layouts with both the video and the presentation.”

Need more details about mobile cross-platform development tools? Check out this article on Mashable.