Avoiding Fragmented Infrastructure in the Data Center

As more IT shops grow comfortable implementing their first cloud environments, they are recognizing the costs, benefits and operational efficiencies of the strategy. However, they are also discovering the rippling effect a cloud implementation can have on an existing virtualized infrastructure: fragmentation, and with it a management nightmare. Arsalan Farooq, CEO and founder of Convirture, has experience with IT shops facing this problem. Below, he offers his insight.

Q. Some shops complain about dealing with fragmented infrastructures because of different approaches they have taken in implementing multiple clouds. What advice can you offer to fix that?

Arsalan Farooq: First, I would say don’t panic and start ripping everything out. And don’t run back to the virtualized or physical model you came from, because you have tried that and it didn’t work. The problem isn’t that your cloud model isn’t working. The problem is it’s fragmented, and the complexity of it is out of control. I recommend taking a top-down approach from the management perspective. You need to invest in management tools that allow you to manage the fragmented parts of your infrastructure and workloads from a single location. Now, this doesn’t solve the fragmentation problem completely because you are dealing with not only fragmented infrastructure, but also fragmented management and fragmented workloads. But once you can see all your workloads and infrastructures in one place and can operate them in one place, you can make more intelligent decisions about the workload fragmentation problem.
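
To make the "single location" idea concrete, here is a minimal sketch of how such a management layer is commonly structured: one common interface with an adapter per infrastructure silo. The names and classes below are hypothetical illustrations, not Convirture's actual API.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// One record per managed workload, regardless of where it runs.
struct Workload {
    std::string id;
    std::string platform;  // e.g., "vmware", "ec2", "openstack"
};

// The common surface every infrastructure adapter must implement.
class InfrastructureAdapter {
public:
    virtual ~InfrastructureAdapter() = default;
    virtual std::vector<Workload> listWorkloads() = 0;
    virtual void start(const std::string& id) = 0;
    virtual void stop(const std::string& id) = 0;
};

// One adapter per silo; the details here are invented for illustration.
class VirtualizedAdapter : public InfrastructureAdapter {
public:
    std::vector<Workload> listWorkloads() override { return {{"vm-101", "vmware"}}; }
    void start(const std::string& id) override { std::cout << "starting " << id << "\n"; }
    void stop(const std::string& id) override { std::cout << "stopping " << id << "\n"; }
};

class PublicCloudAdapter : public InfrastructureAdapter {
public:
    std::vector<Workload> listWorkloads() override { return {{"i-abc123", "ec2"}}; }
    void start(const std::string& id) override { std::cout << "starting " << id << "\n"; }
    void stop(const std::string& id) override { std::cout << "stopping " << id << "\n"; }
};

int main() {
    // The "single pane of glass": every silo is visible through one loop.
    std::vector<std::unique_ptr<InfrastructureAdapter>> adapters;
    adapters.push_back(std::make_unique<VirtualizedAdapter>());
    adapters.push_back(std::make_unique<PublicCloudAdapter>());
    for (const auto& a : adapters)
        for (const auto& w : a->listWorkloads())
            std::cout << w.platform << ": " << w.id << "\n";
}
```

The point is not the adapters themselves but the decision they force: operations logic is written once, against the interface, so adding another cloud does not add another management tool.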

Q. Cloud management software has not kept pace with managing both physical and virtual environments. Does your approach help with that?

A.F.: Once you are in this fragmented infrastructure silo, the vendor tries to sell you their proprietary management tool. At the end of the day, you have one management tool that manages only your vertical cloud, another that manages only your virtualized infrastructure, and so on. My advice is to divest the management risk from the platform risk. If you don't do this, you're asking for trouble. As data centers become multi-infrastructure, multi-cloud and multi-operational, you have to divest the risk of the new platform from the risk of your management practices. I'm not a big fan of buying a new platform that forces you to buy new infrastructure and new management tools.

Q. What is the most important element IT shops overlook in putting together their first cloud implementation?

A.F.: Typically, there is a misunderstanding of the scope of the transformation. A lot of people end up with misgivings because they have been sold on the idea that a cloud implementation will completely transform the way they build their IT data center. Second (and this is more sinister), they believe the cloud can bring efficiencies and cost benefits, but those come at a price: You are buying efficiency and cost benefits, but you are paying for them in complexity. That trade-off is remarkably absent from the conversation as people go into a cloud implementation. Only after they implement their cloud do they realize the architectural topology is much more complex than it was before.

Q. There is this disconnect between what IT shops already have in their virtualized data centers and what the vendors are offering as solutions to layer on top of that. Why is that?

A.F.: That is the crux of the problem we are dealing with right now. Most cloud vendors talk about how their cloud deployment works and what it brings in terms of efficiencies and cost benefits. But what that discussion leaves out is how the transformation from a cloudless data center to one that has a cloud actually works. Specifically, what are the new metrics, properties and cost benefits surrounding all that? And then, once the transformation is made, what are the attributes of a multi-infrastructure data center? Conversations about this are completely absent.

Q. But explaining something like the metrics of a new cloud implementation to an IT shop seems like such a basic part of the conversation.

A.F.: Right, but the problem is many vendors take a more tactical view of things. They are focused on selling their wares, which are cloud infrastructures. But addressing the bigger-picture ramifications is something many don’t seem to have the capacity to answer, and so they don’t talk about it. So the solution then falls either to the CIOs or other vendors who are trying to attack that problem directly.

Q. Some IT executives with virtualized data centers wonder if they even need a cloud. What do you say to those people?

A.F.: This may sound a bit flippant, but if you feel you don’t need a cloud, then you probably don’t. You have to remember what the cloud is bringing. The cloud is not a technology; it is a model. It comes down to what the CIO is looking for. If they are satisfied with the level of efficiency in their virtualized data centers, then there is no compelling reason for them to move to the cloud. However, I don’t think there are too many CIOs who, when given the prospect of incrementally improving the efficiencies of part of their operations, would not go for it.

Q. Are companies getting more confident about deploying public clouds, without having to first do a private one?

A.F.: The largest data centers still can't find a compelling reason to move to the public cloud. Smaller shops and startups (most of which have neither the expertise nor any existing infrastructure) are more confident moving to a public cloud. The bigger question is whether the larger corporate data centers will ever move their entire operations to the public cloud, as opposed to using it in a niche role for things like backup or excess capacity.

Q. I assumed large data centers would move their entire operations to a public cloud once the relevant technologies became faster and more reliable and secure. What will it take for large data centers to gain more confidence about public clouds?

A.F.: One thing missing is a compelling cost-complexity-to-benefits ratio. My favorite example from a few months ago was when we were going to run workloads that automatically load balance between local and public cloud scenarios with cloud bursting. That all sounds good, but do you know what Amazon charges for network transfers? It's an arm and a leg -- ridiculously expensive. Do an analysis of what it costs to run a medium-load website constantly on Amazon, then compare that to a co-lo or renting a server from your local provider, and your mind will be boggled. The overall economics of public clouds -- despite all the hype -- are not well-aligned with high network, bandwidth and CPU usage scenarios. Until that changes, it's hard to find compelling arguments for all the fancy things everyone talks about doing with public clouds, ourselves included. We have cloud bursting in the labs, but we are pretty sober about whether it is here yet as a practical solution.
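
As a back-of-the-envelope illustration of the analysis Farooq suggests, the sketch below compares a metered public-cloud bill against a flat co-lo fee. Every price in it is a hypothetical placeholder, not a current Amazon or co-lo rate.

```cpp
#include <iostream>

int main() {
    // Hypothetical inputs for a medium-load website; substitute real quotes.
    const double gb_out_per_month   = 5000.0;  // outbound transfer, GB/month
    const double cloud_price_per_gb = 0.12;    // $/GB egress (placeholder)
    const double cloud_compute      = 350.0;   // $/month for instances (placeholder)
    const double colo_flat          = 600.0;   // $/month server + bandwidth (placeholder)

    const double cloud_total = cloud_compute + gb_out_per_month * cloud_price_per_gb;
    std::cout << "cloud: $" << cloud_total << "/month\n";  // 350 + 600 = $950
    std::cout << "co-lo: $" << colo_flat   << "/month\n";  // flat $600
    // Under constant heavy traffic the metered egress dominates the bill,
    // which is the economic misalignment the interview points to.
}
```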

Photo: @iStockphoto.com/halbergman

Is ParalleX This Year’s Model?

Scientific application developers have masses of computing power at their disposal with today's crop of high-end machines and clusters. The trick, however, is harnessing that power effectively. Earlier this year, Louisiana State University's Center for Computation & Technology (CCT) released its approach to the problem: an open-source runtime system implementation of the ParalleX execution model. ParalleX aims to replace, at least for some types of applications, the Communicating Sequential Processes (CSP) model and the well-established Message Passing Interface (MPI), a programming model for high-performance computing. The runtime system, dubbed High Performance ParalleX (HPX), is a library of C++ functions targeting parallel computing architectures. Hartmut Kaiser -- lead of CCT's Systems Technology, Emergent Parallelism, and Algorithm Research (STE||AR) group and adjunct associate research professor in the Department of Computer Science at LSU -- recently discussed ParalleX with Intelligence in Software.

Q: The HPX announcement says that HPX seeks to address scalability for “dynamic adaptive and irregular computational problems.” What are some examples of those problems?

Hartmut Kaiser: If you look around today, you see a whole class of parallel applications -- big simulations running on supercomputers -- that are what I call "scaling-impaired." Those applications can scale up to a couple of thousand nodes, but the scientists who wrote them usually need much more compute power. The simulations they have today must run for months to produce proper results.

One very prominent example is the analysis of gamma ray bursts, an astrophysics problem. Physicists try to examine what happens when two neutron stars collide or two black holes collide. During the collision, they merge. During that merge process, a huge energy eruption happens, which is a particle beam sent out along the axis of rotation of the resulting star or, most often, a black hole. These gamma ray beams are the brightest energy source we have in the universe, and physicists are very interested in analyzing them. The types of applications physicists have today only cover a small part of the physics they want to see, and the simulations have to run for weeks or months.

And the reason is that those applications don't scale. You can throw more compute resources at them, but they can't run any faster. If you compare the number of nodes these applications can use efficiently -- on the order of a thousand -- with the compute power available on high-end machines today -- nodes numbering in the hundreds of thousands -- you can see the physicists' frustration. At the end of this decade, we expect machines providing millions of cores and billion-way parallelism.

The problem is an imbalance of the data distributed over the computer. Some parts of a simulation work on a little data and other parts work on a huge amount of data.

Another example: graph-related applications, where certain government agencies are very interested in analyzing graph data based on social networks. They want to analyze behavioral patterns expressed in the social networks and in the interdependencies of the nodes in the graph. The graph is so huge it no longer fits in the memory of a single node. And the graphs are imbalanced: Some regions are highly connected, while others are almost disconnected from one another. The irregularly distributed graph data structure creates an imbalance, and a lot of simulation programs face that problem, as the toy calculation below illustrates.
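
To see why such irregularity hurts, consider a toy calculation (the partition sizes are invented): in a lockstep execution, each step takes as long as the busiest node, so the ratio of maximum to mean load approximates the slowdown versus a perfectly balanced distribution.

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // Hypothetical edges per partition for a social graph split across 8 nodes.
    std::vector<long> edges = {120, 90, 15000, 300, 80, 200, 9800, 150};

    long total   = std::accumulate(edges.begin(), edges.end(), 0L);
    long maxLoad = *std::max_element(edges.begin(), edges.end());
    double mean  = static_cast<double>(total) / edges.size();

    // Runtime per lockstep timestep tracks the *maximum* load, not the mean.
    std::cout << "mean load: " << mean    << " edges\n"    // ~3217
              << "max load:  " << maxLoad << " edges\n"    // 15000
              << "imbalance: " << maxLoad / mean << "x\n"; // ~4.7x slowdown
}
```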

Q: So where specifically do CSP and MPI run into problems?

H.K.: Let's try an analogy for why these applications are scaling-impaired. What keeps them from scaling out? The reason, I believe, can be found in the "four horsemen": Starvation, Latency, Overhead, and Waiting for contention resolution -- SLOW, for short. Those four factors limit the scalability of our applications today.

If you look at classical MPI applications, they are written for timestep-based simulation. You repeat the timestep evolution over and over again until you are close to the solution you are looking for. It’s an iterative method for solving differential equations. When you distribute the data onto several nodes, you cut the data apart into small chunks, and each node works on part of the data. After each timestep, you have to exchange information on the boundary between the neighboring data chunks -- as distributed over the nodes -- to make the solution stable.

The code that is running on the different nodes is kind of in lockstep. All the nodes do the timestep computation at the same time, and then the data exchange between the nodes happens at the same time. And then it goes to computation and back to communication again. You create an implicit barrier after each timestep, when each node has to wait for all other nodes to join the communication phase. That works fairly well if all the nodes have roughly the same amount of work to do. If certain nodes in your system have a lot more work to do than the others -- 10 times or 100 times more work -- what happens is 90 percent of the nodes have to wait for 10 percent of the nodes that have to do more work. That is exactly where these imbalances play their role. The heavier the imbalance in data distribution, the more wait time you insert in the simulation.
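
A schematic of that lockstep pattern, written against the standard MPI C API (the numerical kernel is elided; what matters is that the boundary exchange acts as an implicit barrier, so the most heavily loaded rank sets the pace for all of them):

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1024;                 // this rank's chunk of the global data
    std::vector<double> u(n + 2, 0.0);  // +2 ghost cells for neighbor boundaries
    const int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    const int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    for (int step = 0; step < 1000; ++step) {
        // ... compute the timestep on u[1..n] (omitted) ...

        // Boundary exchange: every rank must reach this point each step,
        // so ranks with less work idle while the busiest rank catches up.
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[n + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[n], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
}
```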

That is why MPI usually doesn't work well with very irregular programs. More concretely, you have to invest a lot more effort in developing those programs -- a task often beyond the abilities of the domain scientists and outside the constraints of a particular project. You are very seldom able to distribute data evenly over the system so that each node has the same amount of work, or it is simply not practical to do so because you have dynamic, structural changes in your simulation.

I don't want to convey the idea that MPI is bad or not useful. It has been used for more than 15 years now, with great success, for certain classes of simulations and applications, and it will still be used in 10 years for a certain class of applications. But it is not well-suited to the type of irregular problems we are looking at.

ParalleX and its implementation in HPX rely on a couple of very old ideas, some published in the 1970s, in addition to some new ideas that, in combination, allow us to address the challenges of utilizing today's and tomorrow's high-end computing systems: energy, resiliency, efficiency and -- certainly -- application scalability. ParalleX defines a new model of execution, a new approach to how our programs function. It improves efficiency by exposing new forms of -- preferably fine-grain -- parallelism, by reducing average synchronization and scheduling overhead, by increasing system utilization through full asynchrony of workflow, and by employing adaptive scheduling and routing to mitigate contention. It relies on data-directed, message-driven computation, and it exploits the implicit parallelism of dynamic graphs as encoded in their intrinsic metadata. ParalleX prefers methods that hide latencies rather than methods that avoid them. It prefers "moving work to the data" over "moving data to the work," and it eliminates global barriers, replacing them with constraint-based, fine-grain synchronization techniques.
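
As a toy contrast with the barrier-driven MPI loop above, the sketch below uses standard C++ futures to stand in for the constraint-based synchronization ParalleX favors. HPX's own primitives (distributed futures, dataflow, continuations) are richer than this, so treat it as an analogy rather than HPX code.

```cpp
#include <future>
#include <iostream>

int main() {
    // Each task starts independently; nothing waits on a global barrier.
    std::future<int> a = std::async(std::launch::async, [] { return 40; });
    std::future<int> b = std::async(std::launch::async, [] { return 2; });

    // This consumer blocks only on its own inputs -- its "constraints" --
    // not on every other task in the program.
    std::future<int> sum = std::async(std::launch::async,
                                      [&a, &b] { return a.get() + b.get(); });

    std::cout << sum.get() << "\n";  // prints 42
}
```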

Q: How did you get involved with ParalleX?

H.K.: The initial conceptual ideas and a lot of the theoretical work have been done by Thomas Sterling. He is the intellectual spearhead behind ParalleX. He was at LSU for five or six years, and he left only last summer for Indiana University. While he was at LSU, I just got interested in what he was doing and we started to collaborate on developing HPX.

Now that he’s left for Indiana, Sterling is building his own group there. But we still tightly collaborate on projects and on the ideas of ParalleX, and he is still very interested in our implementation of it.

Q: I realize HPX is still quite new, but what kind of reception has it had thus far? Have people started developing applications with it?

H.K.: What we are doing with HPX is clearly experimental. The implementation of the runtime system itself is very much a moving target. It is still evolving.

ParalleX -- and the runtime system -- is something completely new, which means it’s not the first-choice target for application developers. On the other hand, we have at least three groups that are very interested in the work we are doing. Indiana University is working on the development of certain physics and astrophysics community applications. And we are collaborating with our astrophysicists here at LSU. They face the same problem: They have to run simulations for months, and they want to find a way out of that dilemma. And there’s a group in Paris that works on providing tools for people who write code in MATLAB, a high-level toolkit widely used by physicists to write simulations. But it’s not very fast, so the Paris group is writing a tool to covert MATLAB to C++, so the same simulations can run a lot faster. They want to integrate HPX in their tool.

ParalleX and HPX don’t have the visibility of the MPI community yet, but the interest is clearly increasing. We have some national funding from DARPA and NSF. We hope to get funding from the Department of Energy in the future; we just submitted a proposal. We expect many more people will gain interest once we can present more results in the future.

Photo: @iStockphoto.com/objectifphoto

DevOps: Indispensable Approach or Costly Distraction?

The task of getting software development and operations groups to cooperate is a perennial IT challenge. Into that particular divide steps DevOps, a collection of ideas and practices that aims to improve integration between the two sides of IT. The DevOps label, although fairly new, packages concepts that have been around for a while. Agile development methodologies, for example, have advocated streamlined development processes for years. DevOps' reception has been mixed, however: Critics describe it as overly disruptive and possibly headed for an early exit, while other observers believe the approach can yield greater efficiency. Intelligence in Software recently discussed DevOps with Greg R. Clark, Agile mentor and IT PMO at Intel Corp.'s Agile Competency Center. [Disclosure: Intel is the sponsor of this content.]

Q: Critics of DevOps say it amounts to an expensive cultural disruption, but adherents contend it provides a path to better communication and improved software quality. Who’s right here?

Greg R. Clark: What is DevOps? It’s important to start there. The way I see it, if you step back and look at your software development value stream, the traditional software approach is the waterfall model. It’s a very linear, phased approach to developing software that involves highly specialized job roles and several handoffs to get through the value stream.

The result is your customer becomes very disconnected from the people who are actually developing the product for them. In a linear value stream like that, with handoffs from one person to the next, you create waste and incur transaction costs that don't add value to the end product. Agile methods, however, take the value stream and collapse it. Instead of a role-specific, phased approach, they take all those roles and throw them together, eliminating the handoffs and connecting the customer to the team developing the software for them.

The problem is that most Agile methods don’t talk in a very detailed way about how to deal with the handoffs between the development team and the operations team. That’s where DevOps comes in. It’s an ideology like Agile, but it focuses specifically on the big, hairy handoff we have to deal with to get software from the development organization to the operations organization.

Is it disruptive? Yes, it is disruptive to implement. There are strong cultures in both the development and operations organizations, based on single-minded goals that are not aligned. DevOps focuses on changing the paradigms each of those organizations works in. Development organizations should focus on delivering high-quality, highly sustainable software to their customers quickly, but in a form that supports automated deployment and requires little effort from the operations team to deploy. At the same time, operations organizations should focus on maintaining a stable environment, but one flexible enough to let the development organization deliver software quickly.

Sometimes disruption is necessary in order to break down these paradigms to allow us to continuously improve how we deliver to our customers.

Q: Is there a correct way to pursue DevOps?

G.C.: It depends on the specific organization. There are a lot of different approaches you can take. The best method for an organization depends on that organization and the issues it is trying to solve. A .Net Web development organization is not going to have the same problems, or the same solutions to those problems, as an enterprise ERP organization.

Q: So there’s no particular DevOps document that organizations can use?

G.C.: A lot of people really struggle with movements that don't have concrete processes to follow. People are just wired differently: Some want to be told specifically what to do, or they have no time for it.

You really need to understand the ideology. That is, stepping back and seeing your entire value stream. What can we do to achieve much shorter release cycles? What would it take to automate deployment? What would it take to get teams to collaborate early and often on that development cycle?

This is where communities of practice and the Internet are really beneficial. You get a lot of people blogging about their experiences with DevOps and about how they’ve made these concepts work in different situations. We can go off and learn from others and develop a set of practices that we think are going to work for our unique situation.

Q: How does the rise of cloud computing impact DevOps?

G.C.: Cloud computing certainly gives the operations organizations a tool that they can leverage to better meet the needs of development organizations. They can deploy hosting environments rapidly and refresh them rapidly. They are able to deploy multiple environments on a larger scale without the additional headcount cost to maintain them. The other side to that is that cloud computing should also serve as an incentive for operations organizations to invest in these DevOps ideas. If they don’t make an effort to meet the needs of the development organizations, those organizations now have an alternative. They can go outside the enterprise directly to an external cloud-hosting service where they can get a development environment up and running quickly at a reasonable price.

Q: Overall, where do you see DevOps going? Will it become widely used? Will it go away? Is it a transitional stepping stone to something else?

G.C.: To be honest, when I first saw the term pop up and learned what it was referring to, I actually thought it was kind of silly. Not the idea that it represents, but just the fact there was this whole movement springing up around what I and other Agile practitioners had done many years before in implementing Agile in our organizations. These things they are talking about in DevOps are a natural extension of Agile when you are trying to streamline how you deliver software to your customers. But it is very difficult to break down the barriers in certain environments. The more you get into large enterprise environments, the more difficult it becomes to convince people that this is the right thing to do.

Movements like DevOps grow when there is a big need to fix a business problem that is common throughout the industry. Putting a name to something helps to legitimize it in a way, and also focuses people’s attention on the problem area so they are more prepared to address it as they drive continuous improvement in their organizations. It also acts as a catalyst to change. When people see a particular term like “DevOps” being discussed repeatedly, it can provoke them to take action by implementing those concepts in their own organizations.

Is it going to go away? Maybe the term will merge into something else, but the ideology it represents has been around for a long time. I think anyone using lean principles to make the software development value stream more efficient will sooner or later get around to addressing this issue that DevOps is referring to.

Sourcefire Expert Explains Next-generation Firewalls

As security threats evolve, so must firewalls -- but not at the expense of network performance. That’s one factor that enterprises need to consider when developing a strategy for next-generation firewalls.

Nearly half of the medium and large enterprises in a recent Ponemon Institute survey sponsored by security vendor Sourcefire have deployed a next-gen firewall. The survey also found performance degradation to be a major concern. Jason Lamar, Sourcefire’s senior director of product management, recently spoke with Intelligence in Software about how next-gen firewalls work and the architectural options.

Q: What exactly is a next-gen firewall?

Jason Lamar: It’s a combination of threat prevention, access control and application control. The next-generation part is about going beyond the traditional language of writing firewall policies, where you use users and applications as the way to communicate what the policy means and how it would be implemented. That’s the common definition out there in the marketplace.

Sourcefire has a little different perspective. Our belief is that you really need to have a next-generation intrusion-prevention system (IPS) as a component of a next-generation firewall. That requires contextual awareness, which is the systematic understanding of the network that you’re trying to protect, as well as all of the relevant information about the endpoints, files, users, applications and operating systems. You need to know all of that stuff about your environment in order for your system to work effectively as a next-generation security solution. We believe that you need to have contextual awareness and an enterprise-quality IPS to really be next-generation.

Q: What’s an example of the kinds of threats that a next-gen firewall would catch?

J.L.: Traditional firewalls that look only at ports and IP addresses won't pick out anything especially evasive. So a traditional firewall won't find command-and-control channels that have been set up by a compromised ("owned") host inside your network, for example. And a traditional firewall that just added good-enough IPS -- more of a Unified Threat Management approach -- won't have evasion prevention and traffic normalization extensive enough to detect rapidly changing threats.
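
For a sense of what detecting such a channel can involve, here is one deliberately simplified heuristic: flagging "beaconing," in which a compromised host phones home to the same external address at suspiciously regular intervals. Real engines combine many signals (signatures, reputation, protocol analysis); the threshold below is arbitrary and the data is invented.

```cpp
#include <cmath>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Near-constant gaps between connections suggest a C&C heartbeat.
bool looksLikeBeacon(const std::vector<double>& times) {
    if (times.size() < 5) return false;  // need enough samples to judge
    std::vector<double> gaps;
    for (size_t i = 1; i < times.size(); ++i)
        gaps.push_back(times[i] - times[i - 1]);

    double mean = 0.0;
    for (double g : gaps) mean += g;
    mean /= gaps.size();

    double var = 0.0;
    for (double g : gaps) var += (g - mean) * (g - mean);
    var /= gaps.size();

    // Arbitrary cutoff: jitter under 5% of the interval is "too regular."
    return std::sqrt(var) < 0.05 * mean;
}

int main() {
    // (internal host, external host) -> connection timestamps in seconds.
    std::map<std::pair<std::string, std::string>, std::vector<double>> flows = {
        {{"10.0.0.5", "203.0.113.9"}, {0.0, 60.1, 120.0, 179.9, 240.2, 300.0}},
        {{"10.0.0.7", "198.51.100.4"}, {3.0, 11.0, 95.0, 160.0, 161.0, 400.0}},
    };
    for (const auto& [hosts, times] : flows)
        if (looksLikeBeacon(times))
            std::cout << hosts.first << " -> " << hosts.second
                      << " shows beacon-like traffic\n";
}
```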

Q: So a next-gen firewall would be doing things such as filtering by signatures and reputations, right?

J.L.: Yes. The reputational component is there as additional context, so to speak. It's about detecting things with high accuracy. If you look at the kind of testing you do for a next-generation IPS, as we do at NSS Labs, and compare that to what traditional firewalls offer in terms of threat prevention, there's a big gap.

Q: Encryption can provide a shield for malware. Isn’t that why a next-gen firewall decrypts traffic?

J.L.: Definitely. It is a component of a next-generation firewall architecture. Most next-generation firewalls don't do this well, though: Their performance doesn't scale. Some vendors will tell you that you need embedded SSL decryption, but when you turn it on, the performance of the whole system tanks.

You really need an architectural approach to decryption. Why impact your IPS and application-control performance when you could have a standalone, scaling appliance to do that decryption?

By the way, most enterprises have reasons to decrypt SSL beyond the next-generation firewall components. A lot of times, there's a content gateway they want to interface with, or some other system they want looking at the traffic. If you go with a next-generation firewall with embedded SSL, you miss an architectural trick: offloading SSL so you can use it for multiple security inspection points.

Q: Any other architectural tips one should consider when developing a next-gen firewall strategy? Any pitfalls to avoid?

J.L.: A lot of enterprises are struggling with whether to displace their whole firewall infrastructure with a new next-generation firewall, or to supplement and augment, deploying a next-generation firewall as an additional control behind their traditional firewall. Not every enterprise is ready to make that move or has the financial resources to make it quickly.

Q: That ties in with the survey, where 56 percent of respondents prefer augmentation rather than replacement.

J.L.: For many organizations, it’s just too much to switch all that around when the real benefit you’re trying to get is better security and that’s really delivered through the threat-prevention and application-control components. Why change the thing that’s working and that’s operationally and organizationally the most difficult to move when you really want to get application control and better threat prevention?

A lot of customers think they're going to buy a next-generation firewall and switch all of their policies over with a couple of mouse clicks. That's usually not the case, and typically it's not advisable. Take the opportunity to rationalize the policies you should have now rather than carrying forward a policy that doesn't match what you're trying to do.

Photo: @iStockphoto.com/alexsl

CompTIA and viaForensics Experts Discuss Credentialing Secure Mobile-app Developers

Programmers cranking out the latest mobile applications aren’t necessarily preoccupied with security. But CompTIA, an IT industry association, and viaForensics, a digital forensics and security firm, aim to address that issue. The organizations are working on a secure mobile-application developer credential, which is scheduled for availability this year. CompTIA already runs a number of technical certification programs such as A+, Network+, and Security+. Terry Erdle, CompTIA’s executive vice president for skills certifications, and Andrew Hoog, chief investigative officer for viaForensics, recently discussed their mobile security initiative with Intelligence in Software.

Q: Over the years, developers of enterprise applications have been working toward building security into applications from the start rather than inspecting software security after the fact. Are you seeing that pattern in mobile software development as well?

Andrew Hoog: The general consensus from security experts is that security has got to be built in -- engineered in from the start. It is very difficult to come in later in the game and bolt that stuff on.

Mobile was very exciting at the beginning, and everyone was rushing to get features out. But awareness of security is beginning to grow. The architects, the developers, are saying, “We are going to have to slow this down a little bit and we are going to have to make sure we are baking in security from the get-go.”

That’s why the certification that we announced is going to be very important. That education has to occur at the developer level and has to happen at the security-analyst level so they know how to develop and test for security on mobile apps.

Terry Erdle: With something as explosive and exciting as mobile applications, you’ve got a lot of people who are up to the task and doing it right, and a lot of people who are not up to the task. These developer credentials are the way for a potential employer to differentiate between those who do know what they are doing and those who don’t.

Q: What type of guidance are you giving developers through the credential program?

A.H.: One of the things that we spend a lot of time telling people is that mobile is different from the traditional applications that people are used to securing. Enterprises are used to securing Web apps and apps that run inside the business: client/server apps. With the credential, the main focus is educating folks on the differences -- Why is mobile different? How is the threat model different? -- and giving them practical experience in writing secure mobile apps.

We’ve done extensive security testing of mobile applications, and we see what the common mistakes are that the developers make. We have a list of 44 best practices. It is very helpful for developers to know the things you can do and the things you have to absolutely avoid to develop a secure mobile application.

Q: What are some of the common mistakes?

A.H.: One of the big issues companies are struggling with is data being cached on these devices. Once that data is stored on a mobile device, it is very, very hard to delete. We found banks and health care companies that end up storing information on these devices. If someone gets their hands on an iPhone, how do they make sure user data is not at risk?

I also see an issue with how data is sent over the network. Developers don’t have to worry about secure communications when they build Web apps -- the browser makes sure the security certificate is valid. Mobile app developers actually have to get involved in that -- secure communications channels and certificates -- to make sure they’re not vulnerable to man-in-the-middle attacks. It’s a shift where more and more of that responsibility is falling on their shoulders. There is a real need to up the ante for developers. You don’t get security automatically out of the box when you develop these things. There are some steps you have to take. Certification is going to help up the ante.
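
A sketch of one such step, certificate pinning, written here with the OpenSSL C API for illustration (mobile platforms expose their own TLS interfaces, and the pinned digest below is a placeholder): after the handshake, the app compares the server certificate's SHA-256 fingerprint against a value baked into the app, so a man-in-the-middle certificate fails even if it chains to a trusted root.

```cpp
#include <openssl/evp.h>
#include <openssl/ssl.h>
#include <openssl/x509.h>
#include <cstring>

// Placeholder: the expected SHA-256 fingerprint, compiled into the app.
static const unsigned char kPinnedSha256[32] = { /* ...32 bytes... */ };

// Returns true only if the peer's certificate matches the pinned digest.
bool peerCertMatchesPin(SSL* ssl) {
    X509* cert = SSL_get_peer_certificate(ssl);  // call after the handshake
    if (!cert) return false;                     // no certificate presented

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    const bool ok = X509_digest(cert, EVP_sha256(), digest, &len) == 1 &&
                    len == sizeof(kPinnedSha256) &&
                    std::memcmp(digest, kPinnedSha256, len) == 0;
    X509_free(cert);
    return ok;  // callers should tear down the connection on false
}
```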

Q: How will the credential program be structured?

T.E.: We will put this in the context of the broader mobility area. We are doing a credential suite. The first four certifications -- Mobility+ -- are Wi-Fi, Enterprise Mobility Management, Wireless Security, and Wireless Technical Architecture. Then we will start splintering off, building some certifications that are specific to operating systems. We have to recognize that there are a couple of different operating environments and security environments. We’ll get more specific on iOS and Android and, in the future, maybe others.

Q: With mobile technology rapidly evolving, how often will developers need to renew their credentials?

T.E.: We will have to determine the exact model, whether it is renewal or continuing education. We actually have a continuing education process that we developed for a security certification for the federal DoD 8570 initiative. We plan to tap into that.

We do a rewrite every two years with most exams. This mobile developer credential will move much more aggressively than that: We'll be putting out significantly new questions at least every six months, if not every three months, and we may have little bridge exams that come out every four to six months. It's not worthwhile if it's not current.