Strike Back at SQL Injections

What do Lady Gaga, PBS and the British Royal Navy have in common? They all own websites that were hacked over the past two years by structured query language (SQL) injections.

Although the IT world has understood the methods and vulnerabilities for years, the attacks continue to increase for many reasons -- including the arrival of tools that enable hackers to automate some of their processes. 

To help enterprises and other organizations avoid becoming the next victim, we sought out Paul Litwin, programming manager at the Fred Hutchinson Cancer Research Center in Seattle and owner of Deep Training, a .NET training company. Here’s his advice for identifying and thwarting SQL injection attacks.


Q: Give us an example of how simply entering a malformed SQL statement in a website’s textbox gives a hacker access to an underlying database.

Litwin: Many applications use a form to authenticate users. For example, in a typical insecure ASP.NET application, when a user clicks a login button, a method might authenticate that user by running a query. This query might calculate the number of records in a database table that match the username and password entered in the form’s textbox controls.

By entering text that seems harmless, such as "' Or 1=1 --", it’s possible for a hacker to form a syntactically correct query of his own. For example, this might be the query behind the ASP.NET page:

string strQry = "SELECT Count(*) FROM Users WHERE UserName='" +
    txtUser.Text + "' AND Password='" + txtPassword.Text + "'";

Now, when a “good” user enters a name of “Paul” and a password of “password,” strQry becomes:

SELECT Count(*) FROM Users WHERE UserName='Paul' AND Password='password'

But when the hacker enters "' Or 1=1 --" in the user name textbox, the query instead becomes:

SELECT Count(*) FROM Users WHERE UserName='' Or 1=1 --' AND Password=''

And because in SQL a pair of hyphens indicates the beginning of a comment, the query effectively becomes:

SELECT Count(*) FROM Users WHERE UserName='' Or 1=1

But the expression 1=1 is always true for every table row, and a true expression or’d with another expression will always come back as true. So if there’s at least one row in the table, this SQL will always produce a nonzero record count and get the hacker authenticated into the application.

Q: Is SQL Server the only product that’s vulnerable to SQL injection attacks?

Litwin: No. DB2, Oracle, MySQL and Sybase are examples of other databases that are equally vulnerable. That’s because the SQL language has several features that are designed to make it powerful and flexible, but these also create risks. One example is the ability to use a pair of hyphens to embed comments in an SQL statement. Another is the ability to string together multiple SQL statements and then batch-execute them.
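
To make the batching risk concrete, here is a hypothetical input against the same login form shown earlier. If an attacker types "'; DROP TABLE Users; --" into the user name textbox, the concatenated query becomes a batch of two statements:

SELECT Count(*) FROM Users WHERE UserName=''; DROP TABLE Users; --' AND Password=''

On a database configured to batch-execute statements, the second statement deletes the Users table outright.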

Basically, the more powerful the SQL dialect, the more vulnerable that database is. That’s why SQL Server is so frequently targeted.

And keep in mind that SQL injection attacks target more than just ASP.NET applications. Classic ASP, Java, JSP, Ruby on Rails and PHP applications, and even desktop applications, are vulnerable too.

Q: What do you recommend for preventing SQL injection attacks?

Litwin: First and foremost, implement multiple layers of protection. That way, if one safeguard is breached, others still stand in the hacker’s way.

Here are five tips:

  • Don’t trust user input -- ever. Use validation controls, regular expressions, code and other methods to validate every single textbox entry.
  • Avoid dynamic SQL. Instead, use parameterized SQL or stored procedures. (A brief sketch follows this list.)
  • Never connect to the database with an admin-level account. Always use a limited-access account instead.
  • Encrypt or hash passwords and connection strings. Never leave this kind of sensitive information as plain text.
  • Keep error messages at a high level. The more information those messages have, the more clues they can provide to hackers.
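
Here is a minimal sketch of the earlier login query rewritten with parameterized SQL. It assumes ADO.NET’s System.Data.SqlClient and the same Users table; connectionString is a placeholder:

// Parameterized version of the earlier login query (a sketch; assumes
// "using System.Data.SqlClient;" and a placeholder connection string).
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(
    "SELECT Count(*) FROM Users WHERE UserName=@user AND Password=@pwd", conn))
{
    // The textbox values travel as typed parameters, never concatenated
    // into the SQL text, so "' Or 1=1 --" arrives as literal data, not SQL.
    cmd.Parameters.AddWithValue("@user", txtUser.Text);
    cmd.Parameters.AddWithValue("@pwd", txtPassword.Text);
    conn.Open();
    int count = (int)cmd.ExecuteScalar();
}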

For more tips, check out Paul Litwin’s article “Stop SQL Injection Attacks Before They Stop You.”

Photo Credit: ©iStockphoto.com/kr7ysztof

Nice Gesture, But What Does It Mean?

One step forward, two steps back. That’s how Don Norman describes today’s gesture-based user interfaces (UIs) for smartphones, tablets and a growing assortment of other devices.

Named one of the world’s 27 most influential designers by Business Week, Norman laments the lack of standards, which has created a world where a finger-swipe on one device often doesn’t have the same effect on another. That inconsistency can make using gesture-based UIs about as much fun as folding a fitted sheet. Norman recently spoke to us about this and more from South Korea, where he’s a distinguished visiting professor in the Korea Advanced Institute of Science and Technology’s Department of Industrial Design.

Q: Your colleagues from the Nielsen Norman Group recently did usability tests on Apple’s iPad: “The first crop of iPad apps revived memories of Web designs from 1993, when Mosaic first introduced the image map that made it possible for any part of any picture to become a UI element. As a result, graphic designers went wild -- anything they could draw could be a UI, whether it made sense or not. It’s the same with iPad apps -- anything you can show and touch can be a UI on this device. There are no standards and no expectations.” Yet tablet and smartphone sales remain brisk.

A: It’s confusing and difficult for people. On the other hand, it’s so engaging and so much fun that, in many ways, it compensates for its difficulties.

I’m about to give a series of talks about the way the field has evolved. In the beginning, we were so delighted with the technology. With the passage of time, we’ve come to take understandability, usability and function for granted. What we want now is a good experience, some fun and delight.

That’s where Apple has transformed itself as a company. It really is about experience, fun, delight and entertainment. That’s what the iPhone did: By this whole new way of interacting with strokes and gestures, it was so delightful and different.

Q: In “Gestural Interfaces: A Step Backwards in Usability,” you and Jakob Nielsen identified two root causes for gesture UI inconsistencies: a lack of industry guidelines and “the misguided insistence by companies (e.g., Apple and Google) to ignore established conventions and establish ill-conceived new ones.” Will we ever have standards for gesture UIs?

A: There will be gestures that Apple will patent and refuse to let Microsoft and Google use. All of the companies will have competing methods that will just confuse the hell out of people. But we had that in the beginning of computers too.

When I was at Apple as vice president of the Advanced Technology Group in the ’90s, I was fighting hard for standards. We had this wonderful meeting, and we got all of the major companies together to argue for standards: Sun, IBM, Apple. Microsoft walks in and says, “This meeting is unnecessary. If you want standards, just use Microsoft’s.” That’s Apple’s attitude today.

Q: One downside of standards -- whether it’s a UI or a wireless technology -- is that they sometimes don’t leave companies with many opportunities for market differentiation, aside from price.

A: Every Android phone basically looks the same. So how do you compete in a world like that, where if you really follow Google’s guidelines, you can’t distinguish LG from Samsung from HTC from Motorola? Nobody wants to compete on price. That’s death.

With early technologies, while people are still learning, it’s understandable that we don’t have standards and that different applications sometimes don’t follow the same rules. Jakob and I wrote our joint piece to be critical yet friendly, to say that we thought gestures and the new interfaces were exciting and powerful, that they added a lot to our experience, and that this early confusion can be overcome. The confusion is compounded by vendors who are desperate to differentiate themselves in any way they can, and by companies that don’t wish to cooperate.

Q: For now, consumers and business users seem to be willing to put up with those inconsistencies, judging by how many iOS and Android apps and devices there are. How long before the gee-whiz wears off and they start to complain that app and device UIs aren’t as user-friendly as they could or should be?

A: It’s starting to wear off already. When the iPad came out, people just swooned over it. Now you’re starting to see articles, sometimes by the same journalists, saying, “You know, it’s got a lot of weaknesses. You can’t really type on it.” I think it’s becoming more realistic that the pad is not a substitute for a computer. It’s a wonderful device, but for its own purpose: You can read on the couch or answer a quick question. That’s a reality that wasn’t there at first.

Q: You and Jakob also bemoaned the lack of “Undo” in today’s gesture UIs. That absence is surprising, considering that PCs have conditioned us to expect it, and it’s useful.

A: I think it’s because these interfaces come a little out of the browser world, where the theory is that the “Back” button is the “Undo.” But it isn’t, not when you type something. On top of that, “Back” has always been very confusing. You’re never sure what’s going to happen when you hit “Back.”

I really hate this. There’s a stack of the previous locations, and “Back” takes you to the previous location. But I’m in some app, and I’m not sure where I am because it doesn’t tell me, and I want to get back to what I think is the home page. I hit “Back” and, whoops, I’m on the desktop.

It shouldn’t do that. You shouldn’t be able to back out of the application. In the browser, when you reach the end of its history list, it doesn’t close the browser or take you to the desktop. It just dims the “Back” button, which simply stops working.

That’s what ought to happen when you hit “Back” in an application. I got really angry at one application developer. I kept saying, “Why are you doing this?” They explained that it wasn’t them, that the “Back” was programmed into the operating system.

Q: That means operating system companies have to provide a foundation that enables device vendors and app developers to make UIs more user-friendly.

A: Yes, it does. And they know that. That’s why Apple has usability guidelines. I’m not sure if Google does. There are people who have written them for Google.

Photo: ©iStockphoto.com/studiocasper

Open Source Meets Systems Management

Traditional systems management software from the Big Four enterprise-class players (BMC, CA, IBM and Hewlett-Packard) sought to solve a difficult problem: monitoring and managing the diverse hardware and software systems that make up corporate IT. The promise was that these systems-management suites could keep tabs on and manage systems regardless of the vendor, underlying chip and operating system. As a result, they were complex, pricey and hard-pressed to keep up with changing requirements. While Microsoft offers Windows-centric systems management and VMware offers management tools for the virtualized world, the Big Four are really the incumbents to watch as more open-source systems-management solutions come online.

Here, Bernd Harzog, analyst with The Virtualization Practice LLC, talks about how newer, more agile open-source alternatives are changing that.

Q: More companies are using open-source tools to monitor and manage IT. What are the names to watch?

Harzog: There are some great products with open-source elements. Hyperic, for example, is used to monitor very large-scale Web environments. It was acquired by SpringSource, which was then acquired by VMware. Hyperic has a yet-to-be-publicly-determined role in VMware’s monitoring strategy. It competed with Nimsoft before Nimsoft was bought by CA, and with Nagios, which remains almost a pure open-source -- or at least open-at-the-edge -- product.

Q: What’s the appeal?

Harzog: If a systems-management product is managing a customer’s current data center hardware, it also needs to manage new equipment the customer adds over time. Once you have more than five customers, it’s impossible to keep up with that. The only solution is to have systems-management software that is open at the edge, extensible and adaptable, so anyone can build a “collector” that gathers the data about the new target device. Companies like CA, HP, IBM and BMC are on 12- to 18-month release cycles, so when something gets popular, it goes on the roadmap, but support for it can take up to a year and a half.

An open-source-oriented company like Zenoss is open at the edge. Anyone can build a Zenoss ZenPack collector or a Nagios collector for new devices and not have to wait for the vendor.
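
Nagios’s check contract illustrates what “open at the edge” means in practice: a collector is any executable that prints a one-line status and exits with 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). Here is a minimal sketch of such a collector in C#; the device address and latency threshold are illustrative assumptions:

// A minimal Nagios-style collector (sketch): ping a device and report
// status via the conventional exit codes. Host and threshold are made up.
using System;
using System.Net.NetworkInformation;

class CheckDevicePing
{
    static int Main(string[] args)
    {
        string host = args.Length > 0 ? args[0] : "192.0.2.10"; // hypothetical device

        try
        {
            PingReply reply = new Ping().Send(host, 2000); // 2-second timeout

            if (reply.Status != IPStatus.Success)
            {
                Console.WriteLine("CRITICAL - " + host + " is unreachable");
                return 2;
            }
            if (reply.RoundtripTime > 500) // assumed warning threshold (ms)
            {
                Console.WriteLine("WARNING - " + host + " latency " + reply.RoundtripTime + " ms");
                return 1;
            }
            Console.WriteLine("OK - " + host + " latency " + reply.RoundtripTime + " ms");
            return 0;
        }
        catch (Exception)
        {
            Console.WriteLine("UNKNOWN - could not check " + host);
            return 3;
        }
    }
}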

Q: What else drives demand for these systems-management tools?

Harzog: Virtualization, IT-as-a-service, public and private clouds. Those things tend to break legacy products. Virtualization introduces new requirements -- namely keeping up with dynamic systems. Legacy tools are not designed to do this. They can monitor virtual machines (VMs), but they don’t do a good job keeping up with change in dynamic virtualization environments. Their approach to systems management, which worked with physical servers, doesn’t work well with virtual servers. The desire of enterprises for something more flexible, less expensive, easier to manage and able to meet the needs of new use cases drives demand for these new tools.

Q: How about systems management versus systems monitoring?

Harzog: That’s the other side of systems management. Management includes updating, configuring, provisioning things. There I would look at interesting new tools, like Puppet Labs’ Puppet, Opscode’s Chef and ScaleXtreme.

Management has to be done correctly. It starts with provisioning and configuration management and then has to manage performance and availability all up and down the entire stack. It’s a hard job, and tools are evolving quickly.

Thought Leader James Reinders on Parallel Programming

The explosion of multicore processors means that parallel programming -- writing code that takes the best advantage of those multiple cores -- is required. Here, James Reinders, Intel’s evangelist [please note: Intel is the sponsor of this program] for parallel programming, talks about which applications benefit from parallelism and what tools are best suited for this process. His thoughts may surprise you.

Q: What new tools do you need as you move from serial to parallel programming to get the most out of the process?

Reinders: The biggest change is not the tools, it’s the mindset of the programmer. This is incredibly important. I actually think that human beings [naturally] think about things in parallel, but because early computers were not parallelized, we changed the way we thought. We trained ourselves to work [serially] with PCs, and now parallelism seems a little foreign to us. But people who think of programming as a parallel problem don’t have as much of a problem with it as those who have trained themselves to think serially.

[As for the best tools], if you woke up one day and could think only in parallel you’d be frustrated with the tools we have now, but they are changing. From a very academic standpoint, you could say no [computer] languages are designed for parallelism so let’s toss them out and replace them, [but] that is not going to happen.

The languages we use will get augmented. Intel has done some popular things; Microsoft has some extensions to its toolset; Sun’s got stuff for Java and Apple’s got stuff too.

There are some very good things people can look for but they are still emerging, and programmers need to learn them. I can honestly say that as of the last year or so, trying to do parallel programming in FORTRAN or C or C++ is a pretty reasonable thing to do. Five years ago, it was something I couldn’t have done … without a lot of training and classes.  Now these [existing] tools support what we need enough to be successful.

Google has done amazing things in parallelism. Their [Google’s] whole approach in building the search engine was all about parallelism. If you asked most people back then to go examine every Web page on the planet, they’d have written a for loop. But Google looked at this in parallel. They said, “Let’s just go look at all of them.” They thought of it as a parallel program. I can’t emphasize enough how important that is.
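
That shift in mindset is easy to see in miniature. Here is a sketch in C# using the Task Parallel Library; the page list and the Examine method are placeholders:

// Serial versus parallel mindset (sketch). The page list and Examine()
// are placeholders for real crawling work.
using System.Collections.Generic;
using System.Threading.Tasks;

class Crawl
{
    static void Examine(string url) { /* fetch and index the page */ }

    static void Main()
    {
        var pages = new List<string> { "http://example.com/a", "http://example.com/b" };

        // The serial mindset: visit every page, one at a time.
        foreach (string url in pages)
            Examine(url);

        // The parallel mindset: "let's just go look at all of them."
        // The runtime spreads the iterations across the available cores.
        Parallel.ForEach(pages, Examine);
    }
}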

Q: How about debuggers and the debugging process? How does that change with parallel programming?

Reinders: Debuggers are also getting extended but they don’t seem to move very fast. They still feel a lot like they did 20 years ago.

There are three things happening in debuggers. First, as we add language extensions, it would be nice if the debuggers knew they existed. That’s an obvious fix that is now happening quickly.

Second, I suddenly have multiple things happening at once on a computer. How can you show that to me? The debugger will usually say, “Show me what’s happening on core 2,” but if you’re working with a lot of cores, the debugger needs to show you what’s happening on a lot of cores without requiring you to open a window for each. Today’s debuggers don’t handle this well, although a few [of the more expensive ones] do.

Third, this is very academic, but how do you deal with determinism? When you get a bug in a parallel program, it can be non-deterministic, meaning it can run differently each time you run [the program]. If you run a program and it does something dumb, in non-deterministic programming, just the fact that the debugger is running causes the program to run differently, and the bug may not happen the same way. So you need a way to go back to find where it happened, what the break point was.

In serial programming, typically if I run the program 20 times, it will fail the same way, but if the program runs differently every time, it’s not obvious how to fix that. The ability to rewind or go back when you’re in the debugger and look at something that happened earlier instead of rerunning the program … is very helpful. You tell the debugger to back up. To do that, the debugger has to collect more information while you’re running so you can rewind and deal with that non-determinism.
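
Here is a minimal C# sketch of the kind of non-deterministic bug Reinders describes: a data race on an unsynchronized counter, which can print a different (wrong) total on every run:

// A data race (sketch): "total++" is not atomic, so parallel increments
// can be lost, and the printed result can differ from run to run.
using System;
using System.Threading.Tasks;

class RaceDemo
{
    static void Main()
    {
        int total = 0;
        Parallel.For(0, 1000000, i => { total++; }); // unsynchronized update
        Console.WriteLine(total); // often less than 1000000, rarely the same twice
    }
}

Replacing the increment with Interlocked.Increment(ref total) restores a deterministic result.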

Q: If you’re a programmer, writing custom apps for your company or an ISV, when does it become essential to employ parallel programming? What apps reap the biggest advantages, and are there apps for which parallel programming has little benefit?

Reinders: Any application that handles a lot of data and tries to process it benefits [from parallelism]. We love our data -- look at the hard drive of your home PC. Parallelism can bring benefits to obvious things that we see every day: processing pictures, video and audio. We love getting higher-res cameras, HDTV. We like higher-resolution stereo sound. That’s everyday stuff.

Scientific applications are obvious beneficiaries, and business apps that do knowledge mining of sales data to reach conclusions. They all do well in parallel. That’s the obvious stuff. But then people can get very creative with new things. There are some things that you might think won’t benefit [from parallelism], like word processing software and browsers, but you’d be wrong.

Look at Microsoft Word. There are several things Microsoft does there in parallel that we all enjoy. When you hit print, it will go off and lay it out and send it to the printer, but it doesn’t freeze on you. If you go back 10 years, with Microsoft Word, you might as well have gone for coffee after hitting print.

Spelling and grammar checking: when you type in Word, it puts in the squiggles [on questionable spelling or usage]. It’s doing that in parallel. If it weren’t, every time you typed a letter, Word would freeze while it looked the word up. Word is WYSIWYG; if you’re in print mode, it’s justifying and kerning, doing a lot of things in parallel with many other things.
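
The same pattern, long-running work moved off the main thread so typing never stalls, is easy to sketch with C#’s Task.Run; CheckSpelling here is a stand-in for the real lookup:

// Keeping the foreground responsive (sketch): the lookup runs on a worker
// thread while the calling thread stays free for keystrokes.
// CheckSpelling() is a placeholder for real dictionary work.
using System;
using System.Threading.Tasks;

class BackgroundCheckDemo
{
    static string CheckSpelling(string text)
    {
        return text.Contains("teh") ? "squiggle under 'teh'" : "no issues found";
    }

    static async Task Main()
    {
        string document = "Teh quick brown fox";
        string result = await Task.Run(() => CheckSpelling(document.ToLower()));
        Console.WriteLine(result);
    }
}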

From Our Sponsor:

To learn more about Intel’s software technologies and tools, visit Intel.com/software.


Photo: ©iStockphoto.com/loops7

Russia: The No. 1 Base of Global Internet Attacks

Russia currently holds the dubious distinction of being the world’s top source of Internet attack traffic -- a ranking based on traffic observations published in a recent report from Akamai Technologies, a leading provider of cloud optimization services.

Akamai’s quarterly “State of the Internet” report does not offer any reasons why Russia is the source of so much malicious Internet activity. It does note, however, that the attacks might actually originate in other countries and merely be routed through Russia; Akamai does not trace attacks back to their ultimate source.

The report is based on data collected from the Akamai Intelligent Internet Platform, which delivers up to 30 percent of global Web traffic on a given day. The platform consists of more than 84,000 servers in 72 countries, deployed within approximately 1,000 networks that make up most of the public Internet.

To find out more about the Russian attack traffic scenario, we spoke with David Belson, editor of the “Akamai State of the Internet Report.”

Q: How can you tell that most of the attack traffic emanates from Russia?

Belson: We use a distributed set of agents deployed across the Internet that monitor attack traffic. Based on the data collected by these agents, we can identify the top countries from which attack traffic originates, as well as the top ports targeted by these attacks. (Ports are transport-layer protocol identifiers.)

While our observations show the attack traffic clearly entering the Internet from Russia, the attacks could be coming from somewhere else and simply being proxied or forwarded through Russia.

Q: Which are the main ports being attacked from Russia?

Belson: Port 445 is the main one, but that is not unique to Russian attacks. Port 445 is used for Microsoft DS (Directory Services) and is the most-attacked port seen by our monitoring systems.

Our report found that port 445 accounted for 47 percent of observed attack traffic. Attacks on port 23 (Telnet) and port 22 (SSH) represented 11 percent and 6.2 percent, respectively.

The best protection against port 445 attacks is to use a firewall or a router that blocks access to the port.
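
One quick sanity check after applying such a rule is to test whether port 445 still accepts connections. Here is a minimal C# sketch that attempts a TCP connection with a short timeout; the host address is an illustrative placeholder:

// Port-exposure check (sketch): try a TCP connection to port 445 with a
// short timeout. The host address below is illustrative only.
using System;
using System.Net.Sockets;

class PortCheck
{
    static void Main()
    {
        const string host = "192.0.2.10"; // hypothetical machine to test

        try
        {
            using (var client = new TcpClient())
            {
                // Wait up to two seconds for the connection attempt.
                if (client.ConnectAsync(host, 445).Wait(2000))
                    Console.WriteLine("Port 445 is open and reachable");
                else
                    Console.WriteLine("Port 445 timed out (likely firewalled)");
            }
        }
        catch (AggregateException)
        {
            Console.WriteLine("Port 445 refused the connection (closed)");
        }
    }
}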

Growing Broadband Adoption, Fastest Connections
Akamai observed a global 4.2-percent increase (from the third quarter of 2010) in the number of unique IP addresses connecting to its network, growing to more than 556 million.

Another interesting tidbit: The fastest places in the United States are the state of Delaware and the city of Riverside, Calif. In Delaware, 67 percent of connections to Akamai occurred at 5 Mbps or faster. The state also maintained the country’s highest average connection speed, at 7.2 Mbps, and its highest average peak connection speed, at 28.4 Mbps.

Among cities, Riverside had the highest average connection speed, at 7.6 Mbps, and the highest average peak connection speed, at 28.5 Mbps, in the fourth quarter.