Is ParalleX This Year’s Model?

Scientific application developers have masses of computing power at their disposal with today’s crop of high-end machines and clusters. The trick, however, is harnessing that power effectively. Earlier this year, Louisiana State University’s Center for Computation & Technology (CCT) released its approach to the problem: an open-source runtime system implementation of the ParalleX execution model. ParalleX aims to replace, at least for some types of applications, the Communicating Sequential Processes (CSP) model and the well-established Message Passing Interface (MPI), a programming model for high-performance computing. The runtime system, dubbed High Performance ParalleX (HPX), is a library of C++ functions that targets parallel computing architectures. Hartmut Kaiser -- lead of CCT’s Systems Technology, Emergent Parallelism, and Algorithm Research (STE||AR) group and adjunct associate research professor in the Department of Computer Science at LSU -- recently discussed ParalleX with Intelligence in Software.

Q: The HPX announcement says that HPX seeks to address scalability for “dynamic adaptive and irregular computational problems.” What are some examples of those problems?

Hartmut Kaiser: If you look around today, you see that there’s a whole class of parallel applications -- big simulations running on supercomputers -- which are what I call “scaling-impaired.” Those applications can scale up to a couple of thousand nodes, but the scientists who wrote them usually need much more compute power. The simulations they have today have to run for months to produce proper results.

One very prominent example is the analysis of gamma ray bursts, an astrophysics problem. Physicists try to examine what happens when two neutron stars or two black holes collide. During the collision, they merge, and that merger releases a huge eruption of energy: a particle beam sent out along the axis of rotation of the resulting star or, most often, black hole. These gamma ray beams are the brightest energy sources we have in the universe, and physicists are very interested in analyzing them. The applications physicists have today cover only a small part of the physics they want to see, and the simulations have to run for weeks or months.

And the reason for that is those applications don’t scale. You can throw more compute resources at them, but they can’t run faster. If you compare the number of nodes these applications can use efficiently -- on the order of a thousand -- with the compute power available on high-end machines today -- nodes numbering in the hundreds of thousands -- you can see the frustration of the physicists. At the end of this decade, we expect to have machines providing millions of cores and billion-way parallelism.

The problem is an imbalance of the data distributed over the computer. Some parts of a simulation work on a little data and other parts work on a huge amount of data.

Another example: graph-related applications, where certain government agencies are very interested in analyzing graph data based on social networks. They want to analyze certain behavioral patterns expressed in the social networks and in the interdependencies of the nodes in the graph. The graph is so huge it doesn’t fit in the memory of a single node anymore. And it is imbalanced: Some regions of the graph are highly connected, and others are almost disconnected from each other. The irregularly distributed graph data structure creates an imbalance. A lot of simulation programs face that problem.

Q: So where specifically do CSP and MPI run into problems?

H.K.: Let’s try an analogy as to why these applications are scaling-impaired. What are the reasons they are not able to scale out? The reason, I believe, can be found in the “four horsemen,” SLOW: Starvation, Latency, Overhead, and Waiting for contention resolution. Those four factors are the ones that limit the scalability of our applications today.

If you look at classical MPI applications, they are written for timestep-based simulations. You repeat the timestep evolution over and over again until you are close to the solution you are looking for. It’s an iterative method for solving differential equations. When you distribute the data onto several nodes, you cut it apart into small chunks, and each node works on its part. After each timestep, you have to exchange information on the boundary between neighboring data chunks -- as distributed over the nodes -- to keep the solution stable.

The code that is running on the different nodes is kind of in lockstep. All the nodes do the timestep computation at the same time, and then the data exchange between the nodes happens at the same time. Then it goes back to computation, and then to communication again. You create an implicit barrier after each timestep, when each node has to wait for all the other nodes to join the communication phase. That works fairly well if all the nodes have roughly the same amount of work to do. If certain nodes in your system have a lot more work to do than the others -- 10 times or 100 times more -- what happens is that 90 percent of the nodes have to wait for the 10 percent that have more work to do. That is exactly where these imbalances play their role. The heavier the imbalance in data distribution, the more wait time you insert into the simulation.

That is the reason MPI usually doesn’t work well with very irregular programs -- or, more concretely, why you have to invest a lot more effort in developing them, a task often beyond the abilities of the domain scientists and outside the constraints of a particular project. You are very seldom able to distribute data evenly over the system so that each node has the same amount of work, or it is just not practical to do so because you have dynamic, structural changes in your simulation.

I don’t want to convey the idea that MPI is bad or not useful. It has been used for more than 15 years now, with great success, for a certain class of simulations and applications. And it will still be used in 10 years for a certain class of applications. But it is not well-suited to the type of irregular problems we are looking at.

ParalleX and its implementation in HPX rely on a couple of very old ideas, some of them published in the 1970s, in addition to some new ideas which, in combination, allow us to address the challenges of utilizing today’s and tomorrow’s high-end computing systems: energy, resiliency, efficiency and -- certainly -- application scalability. ParalleX defines a new model of execution, a new approach to how our programs function. It improves efficiency by exposing new forms of -- preferably fine-grain -- parallelism, by reducing average synchronization and scheduling overhead, by increasing system utilization through full asynchrony of workflow, and by employing adaptive scheduling and routing to mitigate contention. It relies on data-directed, message-driven computation, and it exploits the implicit parallelism of dynamic graphs as encoded in their intrinsic metadata. ParalleX prefers methods that hide latencies over methods that merely avoid them. It prefers “moving work to the data” over “moving data to the work,” and it eliminates global barriers, replacing them with constraint-based, fine-grain synchronization techniques.

Q: How did you get involved with ParalleX?

H.K.: The initial conceptual ideas and a lot of the theoretical work have been done by Thomas Sterling. He is the intellectual spearhead behind ParalleX. He was at LSU for five or six years, and he left only last summer for Indiana University. While he was at LSU, I just got interested in what he was doing and we started to collaborate on developing HPX.

Now that he’s left for Indiana, Sterling is building his own group there. But we still tightly collaborate on projects and on the ideas of ParalleX, and he is still very interested in our implementation of it.

Q: I realize HPX is still quite new, but what kind of reception has it had thus far? Have people started developing applications with it?

H.K.: What we are doing with HPX is clearly experimental. The implementation of the runtime system itself is very much a moving target. It is still evolving.

ParalleX -- and the runtime system -- is something completely new, which means it’s not the first-choice target for application developers. On the other hand, we have at least three groups that are very interested in the work we are doing. Indiana University is working on the development of certain physics and astrophysics community applications. And we are collaborating with our astrophysicists here at LSU. They face the same problem: They have to run simulations for months, and they want to find a way out of that dilemma. And there’s a group in Paris that works on providing tools for people who write code in MATLAB, a high-level environment widely used by physicists to write simulations. But it’s not very fast, so the Paris group is writing a tool to convert MATLAB code to C++ so the same simulations can run a lot faster. They want to integrate HPX into their tool.

ParalleX and HPX don’t yet have the visibility MPI has, but the interest is clearly increasing. We have some national funding from DARPA and NSF. We hope to get funding from the Department of Energy in the future; we just submitted a proposal. We expect many more people will gain interest once we can present more results.


DevOps: Indispensable Approach or Costly Distraction?

The task of getting software development and operations groups to cooperate is a perennial IT challenge. Into that particular divide steps DevOps, a collection of ideas and practices that aims to improve integration between the two sides of IT. The DevOps label, although fairly new, represents a packaging of concepts that have been around for a while. Agile development methodologies, for example, have been advocating streamlined development processes for years. DevOps’ reception has been mixed, however. Critics describe DevOps as overly disruptive and possibly headed for an early exit. Other observers believe the approach can yield greater efficiency. Intelligence in Software recently discussed DevOps with Greg R. Clark, Agile mentor and IT PMO at Intel Corp.’s Agile Competency Center. [Disclosure: Intel is the sponsor of this content.]


Q: Critics of DevOps say it amounts to an expensive cultural disruption, but adherents contend it provides a path to better communication and improved software quality. Who’s right here?

Greg R. Clark: What is DevOps? It’s important to start there. The way I see it, if you step back and look at your software development value stream, the traditional software approach is the waterfall model. It’s a very linear, phased approach to developing software that involves highly specialized job roles and several handoffs to get through the value stream.

The result is your customer becomes very disconnected from the people who are actually developing the product for them. In a linear value stream like that, with handoffs from one person to the next, you create waste and incur transaction costs that don’t add value to the end product. Agile methods, however, take the value stream and collapse it. Instead of a role-specific, phased approach, it takes all those roles and throws them together. It eliminates the handoffs and connects the customer to the team developing the software for them.

The problem is that most Agile methods don’t talk in a very detailed way about how to deal with the handoffs between the development team and the operations team. That’s where DevOps comes in. It’s an ideology like Agile, but it focuses specifically on the big, hairy handoff we have to deal with to get software from the development organization to the operations organization.

Is it disruptive? Yes, it is disruptive to implement. There’s a strong culture in both the development and operations organizations. They are based on single-minded goals that are not aligned. DevOps focuses on changing the paradigms that each of those organizations work in. Development organizations should be focused on delivering high-quality, highly sustainable software to their customers quickly, but they should deliver something that involves automated deployment and requires low effort on the part of the operations team to deploy. At the same time, operations organizations should be focused on maintaining a stable environment, but one that is also flexible enough to enable the development organization to deliver software quickly.

Sometimes disruption is necessary in order to break down these paradigms to allow us to continuously improve how we deliver to our customers.

Q: Is there a correct way to pursue DevOps?

G.C.: It depends on the specific organization. There are a lot of different approaches you can take. The best method for an organization depends on that organization and the issues it is trying to solve. A .Net Web development organization is not going to have the same problems -- or the same solutions to those problems -- as an enterprise ERP organization.

Q: So there’s no particular DevOps document that organizations can use?

G.C.: A lot of people really struggle with movements like these that don’t have very concrete processes to follow. People are just wired differently: Some want to be told specifically what to do, and otherwise have no time for it.

You really need to understand the ideology. That is, stepping back and seeing your entire value stream. What can we do to achieve much shorter release cycles? What would it take to automate deployment? What would it take to get teams to collaborate early and often on that development cycle?

This is where communities of practice and the Internet are really beneficial. You get a lot of people blogging about their experiences with DevOps and about how they’ve made these concepts work in different situations. We can go off and learn from others and develop a set of practices that we think are going to work for our unique situation.

Q: How does the rise of cloud computing impact DevOps?

G.C.: Cloud computing certainly gives the operations organizations a tool that they can leverage to better meet the needs of development organizations. They can deploy hosting environments rapidly and refresh them rapidly. They are able to deploy multiple environments on a larger scale without the additional headcount cost to maintain them. The other side to that is that cloud computing should also serve as an incentive for operations organizations to invest in these DevOps ideas. If they don’t make an effort to meet the needs of the development organizations, those organizations now have an alternative. They can go outside the enterprise directly to an external cloud-hosting service where they can get a development environment up and running quickly at a reasonable price.

Q: Overall, where do you see DevOps going? Will it become widely used? Will it go away? Is it a transitional stepping stone to something else?

G.C.: To be honest, when I first saw the term pop up and learned what it was referring to, I actually thought it was kind of silly. Not the idea that it represents, but just the fact there was this whole movement springing up around what I and other Agile practitioners had done many years before in implementing Agile in our organizations. These things they are talking about in DevOps are a natural extension of Agile when you are trying to streamline how you deliver software to your customers. But it is very difficult to break down the barriers in certain environments. The more you get into large enterprise environments, the more difficult it becomes to convince people that this is the right thing to do.

Movements like DevOps grow when there is a big need to fix a business problem that is common throughout the industry. Putting a name to something helps to legitimize it in a way, and also focuses people’s attention on the problem area so they are more prepared to address it as they drive continuous improvement in their organizations. It also acts as a catalyst to change. When people see a particular term like “DevOps” being discussed repeatedly, it can provoke them to take action by implementing those concepts in their own organizations.

Is it going to go away? Maybe the term will merge into something else, but the ideology it represents has been around for a long time. I think anyone using lean principles to make the software development value stream more efficient will sooner or later get around to addressing this issue that DevOps is referring to.

5 Steps to a Successful Enterprise Wireless Strategy

Over the past 13 years, analyst Iain Gillott has seen plenty of enterprises grapple with wireless. On the one hand, each new generation of network technology and device type promises fresh opportunities to maximize employee productivity and responsiveness. But on the other, it’s often up to IT managers and developers to turn those promises into reality.

Gillott recently discussed what enterprises need to consider when it comes to Long Term Evolution (LTE) technology and smartphones -- and what they can learn from a basketball hoop in his driveway.

Q: You often warn enterprises not to dismiss LTE as another marketing gimmick. Why?

Gillott: Unlike with 3G, where the industry’s lofty marketing claims were not met by the networks, LTE is likely to deliver, both in terms of bandwidth available and with reduced latency. Even in a loaded network, download speeds should be over 3 Mbps. Lower latency makes a range of new apps and services viable, including good VoIP services.

For enterprise app developers and IT managers, LTE will make many previous visions real. Yes, it will take another 12 months before there is coverage in most major metro markets. But with AT&T and Verizon Wireless offering competing services, pricing should be reasonable. As one vendor told me, LTE delivers on 3G’s promises.

Q: More and more enterprises are allowing employees to choose which model of smartphone they want to use at work. Is that a wise policy shift, or does it create fragmentation headaches?

Gillott: In the past, companies have bought smartphones -- usually Windows Mobile or RIM BlackBerry -- from the operators under corporate contracts. Smartphones were expensive ($400+), and the data services required drove up the monthly cost. Plus, enterprise IT departments needed specific software and solutions to manage the devices, so it mattered which operating system the devices used.

No longer. With all operators -- including the no-contract Virgin, MetroPCS, Boost and Cricket -- offering a range of smartphones, business users have low-cost options that were not available in the past. And with Android and Apple supporting ActiveSync, device management and integration are not a problem. Many solutions are on the market. Heck, even RIM announced a few months ago that their BlackBerry Enterprise Server would support Apple and Android going forward.

With Android smartphone prices well below $100 with a contract, now is the time to let employees select their own device and pay for it. They can expense the portion applicable to business use, and the IT department can still specify the operating systems and versions they will support.

Q: Regardless of who’s buying them, smartphones are becoming more common in the enterprise world. What should enterprises consider when it comes to the apps running on those devices?

Gillott: Several companies offer enterprises the option of setting up their own internal app stores to deliver enterprise apps to smartphones and mobile devices. Apps can be delivered only to specific employees -- so only the sales guys get the sales apps, for example -- and the cost is billed to the correct internal cost center. App stores are really a new way to distribute software, not just a way for teenagers to buy games for their phones.

Q: Internal app stores are also a way for enterprises to avoid malware sneaking in via apps. But even with that hole plugged, aren’t there still a lot of other security risks?

Gillott: Security has long been perceived as an issue for mobile devices. In reality, there are many security solutions available, and enterprises can make the devices and services as secure as needed.

The employee is still a weak link, and the fact that mobile devices are not tied down does not help. But in reality, with the right planning and forethought, security can be addressed and addressed well. Yet many enterprises still seem to use the security threat as a reason not to move forward on mobile solutions.

Q: Enterprises are buying smartphones and tablets because they help improve their operations in one way or another, such as by making employees more productive or more responsive to customers. What’s key for maximizing the return on investment?

Gillott: Don’t be afraid to be creative. Smartphones, tablets, netbooks: There are many device categories now open to enterprise IT managers. Could an iPad be useful to your business and your employees? Do not be afraid to experiment and find out.

Smartphones and tablets are not just “smaller PCs” or “mobile computers.” They can enable new ways of doing business. As an example, we had a basketball goal installed on our driveway for the family at Christmas. (My kids are big on basketball.) Once installed, the vendor pulled out his iPhone, powered up an app, took my credit card, entered the transaction and got approval. No signature, no paper invoice, no waiting. He got his money, and I got the convenience of using a credit card, all because of a simple app.

Read more about “Maintaining Information Security While Allowing Personal Hand-held Devices in the Enterprise” from our sponsor.

Agile Software Development Trends

As the needs of businesses change rapidly, often without planning or warning, internal software developers struggle to keep up. Increasingly, companies find that some kind of Agile methodology best suits their needs.

What is Agile? It’s an umbrella term for a number of iterative and incremental software methodologies, such as Extreme Programming, Scrum and Crystal. Unlike other software development approaches, Agile focuses all stakeholders (programmers, testers, customers, management and executives) on delivering working, tested software in short, frequent stages.

To discover some of the trends shaping the future of Agile, we spoke to David Thomas, a technologist at Rally Software. Thomas sees two major trends: “More focus on scaling agility outside of development, and growing interest in continuous delivery.”

Trend No. 1: Scaling Outside of Development

“Many of our enterprise customers want to dramatically improve their time to market by scaling Agile methods outside of development,” says Thomas. “Over the last year, we collaborated with our largest customers to define a new product for keeping development aligned with their portfolio decisions.”

Rally’s new product, codenamed Project Stratus, focuses on the project management office (PMO) level by keeping development aligned with strategic business priorities, visualizing and managing projects and large features across the entire value stream, analyzing the roadmap to optimize delivery of value while minimizing risk, and strengthening feedback loops between the PMO and development teams.

Trend No. 2: Continuous Delivery

The basic idea behind continuous delivery is this: Automate the build, deployment and testing process, and improve collaboration between developers, testers and operations. Delivery teams can then generate a continuous stream of software updates, regardless of the size of a project or the complexity of its code.

“Continuous delivery is about achieving flow,” says Thomas. “It is an aspect of lean software development that seeks to minimize waste and maximize efficiency and throughput. At its core, achieving flow means not designing more than you can develop, not developing more than you can test, and not testing more than you can deploy.”

For more resources on Agile, visit the Agile Alliance and The PMI Agile Community of Practice.

Interview: Rob Enderle on Plug-and-play Appliances

One of the can’t-miss enterprise trends is the increasing use of plug-and-play, special-needs appliances in the data center. These appliances handle tasks ranging from networking to storage and deliver data processing capabilities for business intelligence, virtualization and security.

The ease of use of these devices, combined with the growing need for storage, networking and security, means these modular, plug-and-play machines are a trend to watch. To get the pulse of plug and play, we spoke with Rob Enderle, principal analyst at the Enderle Group.

Q: What are the hottest trends in the plug-and-play appliance arena?

Enderle: The midmarket appears to love these devices, because that market lacks deep IT skills and the IT staff they have are often spread too thin. For example, they just don’t have the bandwidth to learn large new storage systems, and as a result, have taken to these appliances like ducks to water. Enterprises like them for remote offices and contained projects for similar reasons.

Q: Why are these appliances gaining so much traction? Is it because appliances come equipped with software pre-installed and there’s no futzing around with app servers or operating systems?

Enderle: Yes, they contain both the costs and the overhead for the related technology, so they are easier to initially configure, vastly easier to run and lack the complexity of traditional products. This is very attractive in deployments where there isn’t any overhead to handle complexity.

Q: IT services have consolidated. Have appliances grown into devices that are capable of handling data center chores?

Enderle: These devices are clearly moving up-market. The financial downturn caused massive downsizing in support roles like IT, and many are hesitant to hire heavily on the economic upswing for fear it will be short-lived. This makes appliances a great choice for IT organizations that are now overwhelmed because they can’t or won’t staff up yet.

Q: Virtualization means new servers and applications can be quickly delivered, and many tasks can be handled with special-needs servers. But are we fully at the point where special-needs servers can take over most of the IT workload?

Enderle: Far from it; we are still in the early days of appliance adoption, and I doubt this will fully mature for decades. Appliances remain a growing exception to the more traditional, highly customized approach to IT back-office technology. This adoption will take some time, and plug-and-play appliances like these are still years away from becoming the default standard across the broad market.

Q: Vendors ranging from IBM to NetApp are looking for new, innovative ways to combine these technologies. Is this trend going to grow?

Enderle: Certainly! This approach has not only proven lucrative, it has led to increased customer loyalty and strong market share growth. Success tends to breed a new IT approach that others want to copy, and appliance development programs are breeding like bunnies at the moment.