The Latest Trends, Tools and Strategies for Software Developers and IT Professionals

Intelligence in Software delivers the latest news and strategies for software developers and IT professionals.

Time to Phase out Desk Phones?

If your company is like most, its employees have a PC, a smartphone, perhaps a tablet or some combination of those devices. That lineup means it’s time to take a hard look at softphones, which can replace traditional desk phones and the costs associated with them.

A big part of softphones’ appeal is that they leverage devices that enterprises already own. For example, an organization might decide that it’s more cost-effective to put softphone clients on employees’ existing laptops, smartphones or both than to continue to support their desk phones or to provide new hires with desk phones. A bare-bones business-grade desk phone runs at least $100 in volume versus just $15 for some softphone clients, plus a headset for employees who prefer one.
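
To put those figures side by side, here is a minimal back-of-the-envelope sketch in C++. The $100 and $15 prices are the article’s; the headcount and headset price are hypothetical placeholders, so substitute your own vendor quotes before drawing conclusions.

    // Rough upfront-cost comparison for desk phones vs. softphone clients.
    // The $100 and $15 figures come from the article; the headcount and the
    // headset price are assumed placeholders.
    #include <iostream>

    int main() {
        const int    employees       = 500;    // assumed headcount
        const double desk_phone_cost = 100.0;  // bare-bones desk phone (article figure)
        const double softphone_cost  = 15.0;   // softphone client (article figure)
        const double headset_cost    = 30.0;   // assumed price for an optional headset

        const double desk_total = employees * desk_phone_cost;
        const double soft_total = employees * (softphone_cost + headset_cost);

        std::cout << "Desk phones:           $" << desk_total << "\n"
                  << "Softphones + headsets: $" << soft_total << "\n"
                  << "Upfront difference:    $" << desk_total - soft_total << "\n";
        return 0;
    }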

For some enterprises, video is the decisive factor. Suppose that an organization wants to provide all of its employees with the ability to use video conferencing and one-on-one video calling. A video desk phone, such as the Cisco IP Video Phone E20 or the Polycom VVX 1500, starts at about $700. That upfront hardware cost is enough to push some enterprises toward softphones that include video.

It’s no surprise that the video conferencing capabilities of Ultrabooks and other mobile devices are rendering video desk phones obsolete. For organizations with a large number of mobile employees, a video desk phone is also downright unwieldy. “Try to put that in your briefcase,” says Todd Carothers, senior vice president of marketing and products at CounterPath, whose Bria softphone runs on tablets, laptops and iPhones.

Softphones can also be a way for enterprises to wring more value from devices they already own -- or, more often than not, that their employees own. For example, a recent NPD In-Stat survey found that businesspeople use their tablets primarily for email and note-taking and that 78 percent bring their personal tablet to work rather than having a company-provided one.

“For an extra $15, you can stick Bria on there, and it replaces a $1,000 desk phone -- and they can use it for email and Web browsing,” says Carothers.

Five Steps to Softphone Success
Here are five factors to consider when deciding how and where to implement softphones:

  • Know your users. Not every employee will be comfortable giving up a desk phone. For example, younger employees often have no qualms because they already use Skype and Google+ Hangouts at home. “They think nothing of plugging a headset into a computer and having a conversation,” says Bob Hughes, global CIO at McLarens Young International, where some employees use softphones. “There is a user profile that you have to understand.”

Some employees prefer the audio quality of a desk phone in speaker mode. “A good resonant chamber inside a desk phone is something you can duplicate a little with a PC or Mac, but it’s still not quite as good,” says Huw Rees, vice president of business development at VoIP provider 8x8 Inc. “For speakerphone capability, the desk phone still has some advantage.”

  • Leverage Wi-Fi for mobile employees. Look for softphone solutions that can be configured to default to Wi-Fi -- the office LAN or a company-approved public hot spot provider -- so employees don’t rack up a big cellular data bill when they’re moving around the office or on the road. When traveling abroad, VoIP over Wi-Fi calls range from free to a fraction of the cost of cellular voice or a hotel room phone. “That’s the exact example of why I do it,” says Hughes.
  • Pick a softphone that’s user-friendly. The user interface should be intuitive because if it isn’t, frustrated employees are likely to switch to an easier but more expensive calling option, such as their cell phone. The softphone also should make it easy to dial extensions and add contacts. “That’s the goal: The user doesn’t have to think or do anything differently,” says Rees. “It just works.”
  • Audit your devices beforehand. It’s increasingly common for enterprises to have a mix of mobile and PC operating systems, especially when employees are allowed to choose their own devices. So if the goal is a company-wide softphone rollout, make sure that the vendor can provide versions for Android, iOS, Mac, Windows and so on. If video is another goal, check how many of your existing smartphones and tablets have a front-facing camera.
  • Audit your bandwidth. Make sure that your facilities have an IP connection that’s fast enough in both directions so that voice and video calls don’t struggle alongside other types of traffic. That can be a challenge when some employees telecommute. “Connectivity is key; fiber is the best,” says Dan Shay, ABI Research practice director for mobile services. “But if you use DSL, Wi-Fi or cellular, connectivity can be problematic; hence advantages of one option (mobility, cost, etc.) are offset by other issues. Bottom line: Different endpoint options, such as a media tablet with soft client, are simply options for initiating a call. Connectivity determines completing a call.”
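
The bandwidth audit lends itself to a quick calculation. The sketch below is a rough capacity check in C++; the per-call bitrates, call volumes and link speed are assumed round numbers (roughly a G.711-class audio stream with packet overhead and a standard-definition video stream), not measurements, so swap in the figures for your own codecs and traffic.

    // Back-of-the-envelope check of whether an office uplink can carry a given
    // number of concurrent calls. All figures below are assumptions for
    // illustration -- substitute your own codec bitrates and traffic profile.
    #include <iostream>

    int main() {
        const double audio_kbps_per_call = 100.0;  // ~64 kbps payload plus packet overhead (assumed)
        const double video_kbps_per_call = 768.0;  // standard-definition video stream (assumed)
        const int    concurrent_audio    = 40;     // hypothetical peak call volume
        const int    concurrent_video    = 10;
        const double uplink_mbps         = 20.0;   // advertised upstream capacity
        const double headroom            = 0.75;   // keep 25 percent free for other traffic

        const double required_mbps = (concurrent_audio * audio_kbps_per_call +
                                      concurrent_video * video_kbps_per_call) / 1000.0;
        const double usable_mbps = uplink_mbps * headroom;

        std::cout << "Required upstream: " << required_mbps << " Mbps\n"
                  << "Usable upstream:   " << usable_mbps << " Mbps\n"
                  << (required_mbps <= usable_mbps
                          ? "The link should cope at this call volume.\n"
                          : "The link is undersized for this call volume.\n");
        return 0;
    }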

Under the Hood: A Look Inside the Ultrabook

Mobile devices have been transforming the world of computing. Smartphones, tablets, e-readers and netbooks have revolutionized the way people communicate and interact with each other, buy things, shoot video, make music and play games. Perhaps most important, mobile devices are changing the way people work.

Consumers’ expectations have risen with this proliferation of mobile technologies. Fast, reliable access to the Internet and location-aware services on smartphones and tablets has upped the ante: People expect instant gratification without barriers. Who wants to wait for their mobile device to turn on, or spend a lot of time learning a complex user interface? Smooth computing experiences in 2012 require always-on connectivity and application responsiveness.

Combining Mobility and Power
Recognizing this sea change, a new line of mobile devices -- Ultrabooks -- was unveiled last year at Computex in Taiwan. According to the announcement, Ultrabooks “would operate more like smartphones -- wake up in a flash, combine responsiveness with performance, offer a seamless and compelling experience and be sleek and less than an inch thick.”

Ultrabook devices extend and enhance the practical applications of smartphones and tablets by combining portability with the technology that’s typically associated with high-performance laptops -- second-generation processors and a 64-bit OS. Toss in accelerometers, a gyroscope and other sensor technologies and wrap it all in a sleek, thin, lightweight case with an equally attractive price tag, and you’ve got a recipe for what manufacturers hope is the next big thing in mobile computing.

“Developers that were strictly building PC applications will now have a platform that’s more mobile than a typical laptop and have technologies and sensors they previously could not access,” says Tom Deckowski, a developer marketing manager for Intel [disclosure: Intel is the sponsor of this content]. “On the flip side, mobile app developers who were focused on creating apps for small-footprint devices that didn’t take a lot of CPU performance will now have access to CPU and graphics performance they never had before, without losing access to the sensors. There’s something new in the Ultrabook device for both PC and mobile app developers alike.”

The Details
Ultrabook devices have three primary technologies that help them perform responsively:

  • Fast start-up brings the system from hibernation to fully functional in less than seven seconds, saving time and battery charge. In some Ultrabook devices, a portion of the system’s hard drive is reserved for caching operating system and application state, which keeps the mobile experience highly responsive.
  • Fast response uses a solid-state drive (SSD) or SSD hybrid as a cache between the hard drive and system memory, without requiring an additional drive partition, to make application launch times faster (a conceptual sketch of the caching idea follows this list).

  • Continuous updates allow applications on some models to continue receiving data updates even while the system is in hibernate or sleep mode. Game developers, for example, can push updates to MMORPG players while they’re away from their Ultrabook, so players don’t have to spend time downloading updates before they can continue playing.
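
The “SSD as a cache” idea in the fast-response bullet above boils down to a read-through cache: check the small, fast tier first and fall back to the slower drive on a miss, filling the cache as you go. The C++ sketch below illustrates only that principle; it is a hypothetical, in-memory stand-in, not Intel’s or any vendor’s actual caching implementation.

    // Conceptual sketch of a read-through cache: serve a block from the fast
    // tier when possible, otherwise read it from the slow tier and remember it.
    // A real SSD cache works at the block-device level with eviction policies;
    // this in-memory toy only illustrates the principle.
    #include <iostream>
    #include <string>
    #include <unordered_map>

    struct Block { std::string data; };

    class TieredStore {
    public:
        Block read(int block_id) {
            auto hit = ssd_cache_.find(block_id);
            if (hit != ssd_cache_.end())
                return hit->second;                 // fast path: cache hit
            Block b = read_from_hdd(block_id);      // slow path: go to the disk
            ssd_cache_[block_id] = b;               // fill the cache for next time
            return b;
        }

    private:
        Block read_from_hdd(int block_id) {
            return Block{"block-" + std::to_string(block_id)};  // stand-in for a slow read
        }
        std::unordered_map<int, Block> ssd_cache_;  // stand-in for the SSD tier
    };

    int main() {
        TieredStore store;
        store.read(42);                             // miss: read from the "hard drive", then cached
        std::cout << store.read(42).data << "\n";   // hit: served from the "SSD"
        return 0;
    }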

Device security is provided via new identity protection tools that are embedded in the BIOS/firmware of the devices. While no system is immune to theft or loss, these identity protection measures can detect theft or loss and disable the system. When the Ultrabook is recovered, the software can reactivate it with no loss of data.

Another crucial feature is extended battery life. Ultrabook devices are based on low-voltage processors that offer a minimum battery life of five hours, and up to eight hours or more on some systems.

The first Ultrabook devices, including the Acer Aspire S3, the ASUS ZENBOOK, HP Folio, Lenovo IdeaPad U300 and Toshiba Portege Z830 Series, are hitting shelves now. They all weigh in at 3 pounds or less, are paper thin and feature air-cooled keyboards, HDMI connectors for hooking up to a TV set and USB 3.0 connectors. Storage options include SSDs and hard drives of various sizes.


Is ParalleX This Year’s Model?

Scientific application developers have masses of computing power at their disposal with today’s crop of high-end machines and clusters. The trick, however, is harnessing that power effectively. Earlier this year, Louisiana State University’s Center for Computation & Technology (CCT) released its approach to the problem: an open-source runtime system implementation of the ParalleX execution model. ParalleX aims to replace, at least for some types of applications, the Communicating Sequential Processes (CSP) model and the well-established Message Passing Interface (MPI), a programming model for high-performance computing. The runtime system, dubbed High Performance ParalleX (HPX), is a library of C++ functions that targets parallel computing architectures. Hartmut Kaiser -- lead of CCT’s Systems Technology, Emergent Parallelism, and Algorithm Research (STE||AR) group and adjunct associate research professor in the Department of Computer Science at LSU -- recently discussed ParalleX with Intelligence in Software.

Q: The HPX announcement says that HPX seeks to address scalability for “dynamic adaptive and irregular computational problems.” What are some examples of those problems?

Hartmut Kaiser: If you look around today, you see that there’s a whole class of parallel applications -- big simulations running on supercomputers -- which are what I call “scaling-impaired.” Those applications can scale up to a couple of thousand nodes, but the scientists who wrote those applications usually need much more compute power. The simulations they have today have to run for months in order to have the proper results.

One very prominent example is the analysis of gamma ray bursts, an astrophysics problem. Physicists try to examine what happens when two neutron stars collide or two black holes collide. During the collision, they merge. During that merge process, a huge energy eruption happens, which is a particle beam sent out along the axis of rotation of the resulting star or, most often, a black hole. These gamma ray beams are the brightest energy source we have in the universe, and physicists are very interested in analyzing them. The types of applications physicists have today only cover a small part of the physics they want to see, and the simulations have to run for weeks or months.

And the reason for that is those applications don’t scale. You can throw more compute resources at them, but they can’t run faster. If you compare the number of nodes these applications can use efficiently -- on the order of a thousand -- with the available compute power on high-end machines today -- nodes numbering in the hundreds of thousands -- you can see the frustration of the physicists. At the end of this decade, we expect to have machines providing millions of cores and billion-way parallelism.

The problem is an imbalance of the data distributed over the computer. Some parts of a simulation work on a little data and other parts work on a huge amount of data.

Another example: graph-related applications, where certain government agencies are very interested in analyzing graph data based on social networks. They want to analyze certain behavioral patterns expressed in the social networks and in the interdependencies of the nodes in the graph. The graph is so huge that it no longer fits in the memory of a single node. These graphs are also imbalanced: Some regions are highly connected, and others are almost disconnected from one another. That irregularly distributed graph data structure creates an imbalance, and a lot of simulation programs face the same problem.

Q: So where specifically do CSP and MPI run into problems?

H.K.: Let’s try an analogy to explain why these applications are scaling-impaired. What are the reasons they are not able to scale out? The reason, I believe, can be found in the “four horsemen”: Starvation, Latency, Overhead and Waiting for contention resolution -- SLOW, for short. Those four factors are the ones that limit the scalability of our applications today.

If you look at classical MPI applications, they are written for timestep-based simulation. You repeat the timestep evolution over and over again until you are close to the solution you are looking for. It’s an iterative method for solving differential equations. When you distribute the data onto several nodes, you cut the data apart into small chunks, and each node works on part of the data. After each timestep, you have to exchange information on the boundary between the neighboring data chunks -- as distributed over the nodes -- to make the solution stable.

The code that is running on the different nodes is kind of in lockstep. All the nodes do the timestep computation at the same time, and then the data exchange between the nodes happens at the same time. And then it goes to computation and back to communication again. You create an implicit barrier after each timestep, when each node has to wait for all other nodes to join the communication phase. That works fairly well if all the nodes have roughly the same amount of work to do. If certain nodes in your system have a lot more work to do than the others -- 10 times or 100 times more work -- what happens is 90 percent of the nodes have to wait for 10 percent of the nodes that have to do more work. That is exactly where these imbalances play their role. The heavier the imbalance in data distribution, the more wait time you insert in the simulation.
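
For readers who want to see the pattern Kaiser describes, here is a minimal C++/MPI sketch of a lockstep timestep loop with boundary exchange. It is an illustration of the structure only, not code from HPX or from any real simulation; the stencil and sizes are arbitrary.

    // Minimal sketch of the lockstep pattern described above: each rank owns a
    // chunk of data, computes a timestep, then exchanges boundary values with
    // its neighbors. The blocking exchange acts as the implicit per-timestep
    // barrier -- the slowest rank sets the pace for everyone.
    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int n = 1000;                      // local chunk size
        std::vector<double> u(n + 2, 1.0);       // interior cells plus two ghost cells

        const int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        const int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        for (int step = 0; step < 100; ++step) {
            // Compute phase: a toy stencil update over the local chunk.
            std::vector<double> next(u);
            for (int i = 1; i <= n; ++i)
                next[i] = 0.5 * (u[i - 1] + u[i + 1]);
            u.swap(next);

            // Communication phase: exchange ghost cells with both neighbors.
            // No rank can proceed to the next timestep until its neighbors
            // have reached this point as well.
            MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                         &u[n + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[n], 1, MPI_DOUBLE, right, 1,
                         &u[0], 1, MPI_DOUBLE, left, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }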

That is the reason MPI usually doesn’t work well with very irregular programs. More concretely, you have to invest a lot more effort into developing those programs -- a task often beyond the abilities of the domain scientists and outside the constraints of a particular project. You are very seldom able to distribute data evenly over the system so that each node has the same amount of work, or it is simply not practical to do so because you have dynamic, structural changes in your simulation.

I don’t want to convey the idea that MPI is bad or not useful. It has been used for more than 15 years now, with great success, for a certain class of simulations and a certain class of applications. And it will still be used in 10 years for certain classes of applications. But it is not well-suited to the type of irregular problems we are looking at.

ParalleX and its implementation in HPX rely on a couple of very old ideas, some of them published in the 1970s, in addition to some new ideas which, in combination, allow us to address the challenges of utilizing today’s and tomorrow’s high-end computing systems: energy, resiliency, efficiency and -- certainly -- application scalability. ParalleX is defining a new model of execution, a new approach to how our programs function. ParalleX improves efficiency by exposing new forms of -- preferably fine-grain -- parallelism, by reducing average synchronization and scheduling overhead, by increasing system utilization through full asynchrony of workflow, and by employing adaptive scheduling and routing to mitigate contention. It relies on data-directed, message-driven computation, and it exploits the implicit parallelism of dynamic graphs as encoded in their intrinsic metadata. ParalleX prefers methods that allow it to hide latencies -- not methods for latency avoidance. It prefers “moving work to the data” over “moving data to the work,” and it eliminates global barriers, replacing them with constraint-based, fine-grain synchronization techniques.
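
To make the contrast with the lockstep loop concrete, the sketch below shows the futures-based, constraint-driven style ParalleX favors, written with standard C++ futures rather than HPX’s own API: each chunk’s update waits only on the specific neighbor values it reads, so there is no global per-timestep barrier. HPX provides analogous (and distributed) future facilities, but treat this purely as an illustration of the idea, with toy sizes chosen so the thread count stays small.

    // Sketch of constraint-based, fine-grain synchronization in the style
    // ParalleX favors: each chunk's update is a task that waits only on the
    // neighbor values it actually reads, instead of all chunks waiting at a
    // global per-timestep barrier. Plain std::async stands in for HPX's
    // analogous future machinery.
    #include <future>
    #include <iostream>
    #include <vector>

    int main() {
        const int chunks = 4;
        const int steps  = 10;

        // One shared_future per chunk holds that chunk's current value.
        std::vector<std::shared_future<double>> current(chunks);
        for (int c = 0; c < chunks; ++c) {
            std::promise<double> p;
            p.set_value(1.0);
            current[c] = p.get_future().share();
        }

        for (int step = 0; step < steps; ++step) {
            std::vector<std::shared_future<double>> next(chunks);
            for (int c = 0; c < chunks; ++c) {
                std::shared_future<double> left  = (c > 0) ? current[c - 1]
                                                           : std::shared_future<double>();
                std::shared_future<double> mid   = current[c];
                std::shared_future<double> right = (c < chunks - 1) ? current[c + 1]
                                                                    : std::shared_future<double>();
                // This task blocks only on its own three inputs; chunks in
                // other parts of the domain never hold it up.
                next[c] = std::async(std::launch::async, [left, mid, right] {
                    const double l = left.valid()  ? left.get()  : 0.0;
                    const double r = right.valid() ? right.get() : 0.0;
                    return (l + mid.get() + r) / 3.0;
                }).share();
            }
            current = std::move(next);
        }

        for (int c = 0; c < chunks; ++c)
            std::cout << current[c].get() << " ";
        std::cout << "\n";
        return 0;
    }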

Q: How did you get involved with ParalleX?

H.K.: The initial conceptual ideas and a lot of the theoretical work have been done by Thomas Sterling. He is the intellectual spearhead behind ParalleX. He was at LSU for five or six years, and he left only last summer for Indiana University. While he was at LSU, I just got interested in what he was doing and we started to collaborate on developing HPX.

Now that he’s left for Indiana, Sterling is building his own group there. But we still tightly collaborate on projects and on the ideas of ParalleX, and he is still very interested in our implementation of it.

Q: I realize HPX is still quite new, but what kind of reception has it had thus far? Have people started developing applications with it?

H.K.: What we are doing with HPX is clearly experimental. The implementation of the runtime system itself is very much a moving target. It is still evolving.

ParalleX -- and the runtime system -- is something completely new, which means it’s not the first-choice target for application developers. On the other hand, we have at least three groups that are very interested in the work we are doing. Indiana University is working on the development of certain physics and astrophysics community applications. And we are collaborating with our astrophysicists here at LSU. They face the same problem: They have to run simulations for months, and they want to find a way out of that dilemma. And there’s a group in Paris that works on providing tools for people who write code in MATLAB, a high-level toolkit widely used by physicists to write simulations. But it’s not very fast, so the Paris group is writing a tool to convert MATLAB to C++, so the same simulations can run a lot faster. They want to integrate HPX into their tool.

ParalleX and HPX don’t yet have the visibility that MPI enjoys in the community, but interest is clearly increasing. We have some national funding from DARPA and NSF. We hope to get funding from the Department of Energy in the future; we just submitted a proposal. We expect many more people will take an interest once we can present more results.


DevOps: Indispensable Approach or Costly Distraction?

The task of getting software development and operations groups to cooperate is a perennial IT challenge. Into that particular divide steps DevOps, a collection of ideas and practices that aim to improve integration between the two sides of IT. The DevOps label, although fairly new, represents a packaging of concepts that have been around for a while. Agile development methodologies, for example, have been advocating streamlined development processes for years. DevOps’ reception has been mixed, however. Critics describe DevOps as overly disruptive and possibly headed for an early exit. Other observers believe the approach can yield greater efficiency. Intelligence in Software recently discussed DevOps with Greg R. Clark, Agile mentor and IT PMO at Intel Corp.’s Agile Competency Center. [Disclosure: Intel is the sponsor of this content.]

 

Q: Critics of DevOps say it amounts to an expensive cultural disruption, but adherents contend it provides a path to better communication and improved software quality. Who’s right here?

Greg R. Clark: What is DevOps? It’s important to start there. The way I see it, if you step back and look at your software development value stream, the traditional software approach is the waterfall model. It’s a very linear, phased approach to developing software that involves highly specialized job roles and several handoffs to get through the value stream.

The result is your customer becomes very disconnected from the people who are actually developing the product for them. In a linear value stream like that, with handoffs from one person to the next, you create waste and incur transaction costs that don’t add value to the end product. Agile methods, however, take the value stream and collapse it. Instead of a role-specific, phased approach, it takes all those roles and throws them together. It eliminates the handoffs and connects the customer to the team developing the software for them.

The problem is that most Agile methods don’t talk in a very detailed way about how to deal with the handoffs between the development team and the operations team. That’s where DevOps comes in. It’s an ideology like Agile, but it focuses specifically on the big, hairy handoff we have to deal with to get software from the development organization to the operations organization.

Is it disruptive? Yes, it is disruptive to implement. There’s a strong culture in both the development and operations organizations. They are based on single-minded goals that are not aligned. DevOps focuses on changing the paradigms that each of those organizations work in. Development organizations should be focused on delivering high-quality, highly sustainable software to their customers quickly, but they should deliver something that involves automated deployment and requires low effort on the part of the operations team to deploy. At the same time, operations organizations should be focused on maintaining a stable environment, but one that is also flexible enough to enable the development organization to deliver software quickly.

Sometimes disruption is necessary in order to break down these paradigms to allow us to continuously improve how we deliver to our customers.

Q: Is there a correct way to pursue DevOps?

G.C.: It depends on the specific organization. There are a lot of different approaches you can take. The best method for an organization depends on that organization and the issues they are trying to solve. A .Net Web development organization is not going to have the same problems, or the same solutions to those problems, as an enterprise ERP organization.

Q: So there’s no particular DevOps document that organizations can use?

G.C.: A lot of people really struggle with these movements that don’t have very concrete processes that they can follow. People are just wired differently. Some people want to be told specifically what to do or they have no time for it.

You really need to understand the ideology. That is, stepping back and seeing your entire value stream. What can we do to achieve much shorter release cycles? What would it take to automate deployment? What would it take to get teams to collaborate early and often on that development cycle?

This is where communities of practice and the Internet are really beneficial. You get a lot of people blogging about their experiences with DevOps and about how they’ve made these concepts work in different situations. We can go off and learn from others and develop a set of practices that we think are going to work for our unique situation.

Q: How does the rise of cloud computing impact DevOps?

G.C.: Cloud computing certainly gives the operations organizations a tool that they can leverage to better meet the needs of development organizations. They can deploy hosting environments rapidly and refresh them rapidly. They are able to deploy multiple environments on a larger scale without the additional headcount cost to maintain them. The other side to that is that cloud computing should also serve as an incentive for operations organizations to invest in these DevOps ideas. If they don’t make an effort to meet the needs of the development organizations, those organizations now have an alternative. They can go outside the enterprise directly to an external cloud-hosting service where they can get a development environment up and running quickly at a reasonable price.

Q: Overall, where do you see DevOps going? Will it become widely used? Will it go away? Is it a transitional stepping stone to something else?

G.C.: To be honest, when I first saw the term pop up and learned what it was referring to, I actually thought it was kind of silly. Not the idea that it represents, but just the fact there was this whole movement springing up around what I and other Agile practitioners had done many years before in implementing Agile in our organizations. These things they are talking about in DevOps are a natural extension of Agile when you are trying to streamline how you deliver software to your customers. But it is very difficult to break down the barriers in certain environments. The more you get into large enterprise environments, the more difficult it becomes to convince people that this is the right thing to do.

Movements like DevOps grow when there is a big need to fix a business problem that is common throughout the industry. Putting a name to something helps to legitimize it in a way, and also focuses people’s attention on the problem area so they are more prepared to address it as they drive continuous improvement in their organizations. It also acts as a catalyst to change. When people see a particular term like “DevOps” being discussed repeatedly, it can provoke them to take action by implementing those concepts in their own organizations.

Is it going to go away? Maybe the term will merge into something else, but the ideology it represents has been around for a long time. I think anyone using lean principles to make the software development value stream more efficient will sooner or later get around to addressing this issue that DevOps is referring to.