
You can prevent buffer-overflow attacks

Home-grown apps are susceptible to buffer overflows, as are Windows and Linux apps; the conclusion of this two-part series details how to protect applications from attack.

Now that you know how buffer overflows work, the million-dollar question is how to prevent hackers from using them to attack and take control of your home-grown applications.

Avoid using library files included with the compiler 
Library files are commonly included with a programming language. If a hacker finds a weakness in a particular library file, every application that includes that library inherits the weakness. So a hacker looking to exploit a home-grown application will often start by probing for known weaknesses in commonly used libraries.

Many libraries are also inherently insecure. Although newer compilers are starting to ship more securely written library files, for a long time libraries offered a quick-and-easy way to accomplish a task with little regard for secure coding. This was especially true of C++: programs that rely on the unchecked string routines in the standard library are very susceptible to run-time errors, a dream come true for hackers looking for a buffer exploit.
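To make the risk concrete, here is a minimal C sketch (the function names are my own, for illustration only) contrasting an unchecked standard-library call with a bounded alternative:

```c
#include <stdio.h>
#include <string.h>

/* strcpy() keeps writing past the end of 'dest' if 'src' is longer
   than the destination buffer -- a classic overflow. */
void copy_unsafe(char *dest, const char *src) {
    strcpy(dest, src);  /* no length check: overflow risk */
}

/* snprintf() never writes more than 'dest_size' bytes and always
   NUL-terminates, truncating oversized input instead of overflowing. */
void copy_safe(char *dest, size_t dest_size, const char *src) {
    snprintf(dest, dest_size, "%s", src);
}
```

With an 8-byte destination, `copy_safe` stores at most 7 characters plus the terminator, no matter how long the input is.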


Qualify all user input 
To qualify all user input in home-grown applications, first make sure the input string is a valid length. For example, suppose your program is designed to accept 50 characters of text and add them to a database. If the user enters 75 characters, the input exceeds what the database record can accommodate, and the program's behavior becomes unpredictable. Input handling should therefore compare the length of every text string the user enters against the maximum allowed input and truncate the string if necessary.
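A minimal C sketch of this length check (the 50-character limit and the function name are assumptions for illustration):

```c
#include <string.h>

#define MAX_INPUT 50  /* maximum characters the database field holds (assumed) */

/* Truncate 'input' in place so it never exceeds 'max_len' characters.
   Returns 1 if the string was already within bounds, 0 if it was cut. */
int qualify_length(char *input, size_t max_len) {
    if (strlen(input) <= max_len)
        return 1;
    input[max_len] = '\0';  /* truncate oversized input */
    return 0;
}
```

The caller can log or reject input whenever the function reports truncation, rather than silently passing an oversized string downstream.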

Filter potentially malicious input 
Filtering is another good defense technique. For example, take a look at the ASP code below:

'Filter out HTML code, apostrophes and quotation marks from the user's input.

strNewString = Request.Form("Review")
strNewString = Replace(strNewString, "&", "&amp;")
strNewString = Replace(strNewString, "<", "&lt;")
strNewString = Replace(strNewString, ">", "&gt;")
strNewString = Replace(strNewString, "'", "`")
strNewString = Replace(strNewString, chr(34), "``")

The code above is used for an e-commerce Web site that I am currently developing. The idea is to filter out HTML code and characters that may cause problems with the database. HTML uses the < and > characters to delimit tags, so to prevent users from embedding HTML code in their input, I filter out the greater-than and less-than signs. Note that the ampersand must be replaced first; otherwise the filter would mangle the entities produced by the later replacements.

In ASP code, the apostrophe, quotation mark and ampersand symbols are all reserved symbols. These reserved symbols can't be included within a user's input or they will cause the application to crash. For example, if someone used an apostrophe within a line of text that was to be committed to a database, the command would fail because ASP requires apostrophes around the text being committed to the database; ASP wouldn't know what to do with the user's apostrophe. To prevent this from happening, my code is searching the input string for an apostrophe and replacing it with the ` symbol.
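The same filtering idea carries over to C. The sketch below (function name and buffer sizes are my own, not from the article) applies the equivalent replacements while also bounding the output so the filter itself cannot overflow:

```c
#include <string.h>

/* Copy 'src' into 'dest', replacing characters that could be used for
   HTML or database injection, in the spirit of the ASP filter above.
   'dest_size' caps the output; filtering stops before overflowing. */
void filter_input(const char *src, char *dest, size_t dest_size) {
    size_t used = 0;
    for (; *src != '\0'; src++) {
        char one[2] = { *src, '\0' };
        const char *rep;
        switch (*src) {
        case '&':  rep = "&amp;"; break;  /* must come before < and > */
        case '<':  rep = "&lt;";  break;
        case '>':  rep = "&gt;";  break;
        case '\'': rep = "`";     break;
        case '"':  rep = "``";    break;
        default:   rep = one;     break;  /* pass ordinary chars through */
        }
        size_t len = strlen(rep);
        if (used + len + 1 > dest_size)
            break;  /* stop before overflowing dest */
        memcpy(dest + used, rep, len);
        used += len;
    }
    dest[used] = '\0';
}
```

Because each character is replaced as it is scanned, the ampersand case never re-processes entities produced by the other replacements.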

Test applications 
Qualifying and filtering user input goes a long way toward protecting you against buffer overflow attacks. But you still need to thoroughly test any application prior to deployment. Have a group of people go through the program with a fine-toothed comb and try to crash the program. Have them try entering long strings or reserved characters. If you have done a good job coding the application, it should hold up to the abuse. If the program does break, it's better to find out now than after the program has gone live.
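As a rough illustration of this kind of abuse testing, the C sketch below (all names hypothetical) feeds an oversized string and a string of reserved characters to a bounded copy routine, then uses a canary field to verify that nothing was written past the buffer:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical function under test: stores input in a fixed-size field. */
static void store_review(char *field, size_t field_size, const char *input) {
    snprintf(field, field_size, "%s", input);
}

/* Crude stress test: hammer the function with oversized and
   reserved-character input, then check that the bytes immediately
   after the field are untouched. Returns 1 if the canary survived. */
int stress_test(void) {
    struct { char field[50]; char canary[8]; } rec;
    memcpy(rec.canary, "CANARY!", 8);

    char long_input[256];
    memset(long_input, 'A', sizeof long_input - 1);
    long_input[sizeof long_input - 1] = '\0';

    store_review(rec.field, sizeof rec.field, long_input);
    store_review(rec.field, sizeof rec.field, "'<>&");

    return memcmp(rec.canary, "CANARY!", 8) == 0;
}
```

Real-world testing would use a proper fuzzing tool, but even a simple harness like this catches the obvious failure modes before deployment.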

