How to detect software tampering

In their book Surreptitious Software, authors Christian Collberg and Jasvir Nagra reveal how to tamperproof your software and make sure it executes as intended.

The chapter below, from the book Surreptitious Software: Obfuscation, Watermarking, and Tamperproofing for Software Protection, reveals how to detect attacks on software: for example, when a program is running on corrupted hardware or operating systems, when it is running under emulation, or when the correct dynamic libraries have not been loaded.

Authors Christian Collberg and Jasvir Nagra reveal how to check for software tampering by inspecting a program's code, computational results and execution environment.

See sidebar below to listen to an interview with the authors.

Surreptitious Software
Chapter 7: Software Tamperproofing

Table of contents:
Software tampering definitions
How to check for software tampering

Download Chapter 7 of "Surreptitious Software" as a .pdf

Interview with the authors

Listen to Christian Collberg and Jasvir Nagra talk about why the book is particularly important for security professionals who may not have a strong interest in code development.
An adversary's goal is to force your program P to perform some action it wasn't intended to, such as playing a media file without the proper key or executing even though a license has expired. The most obvious way to reach this goal is to modify P's executable file prior to execution. But this is not the only way. The adversary could corrupt any of the stages needed to load and execute P, and this could potentially force P to execute in an unanticipated way. For example, he could force a modified operating system to be loaded; he could modify any file on the file system, including the dynamic linker; he could replace the real dynamic libraries with his own; he could run P under emulation; or he could attach a debugger and modify P's code or data on the fly.

Your goal, on the other hand, is to thwart such attacks. In other words, you want to make sure that P's executable file itself is healthy (hasn't been modified) and that the environment in which it runs (hardware, operating system, and so on) isn't hostile in any way. More specifically, you want to ensure that P is running on unadulterated hardware and operating systems; that it is not running under emulation; that the right dynamic libraries have been loaded; that P's code itself hasn't been modified; and that no external entity such as a debugger is modifying P's registers, stack, heap, environment variables, or input data.

In the following definition, we make use of two predicates, Id(P, E) and Ia(P, E), which respectively describe the integrity of the application (what the defender would like to maintain) and what constitutes a successful attack (what the attacker would like to accomplish):

Definition 7.1 (Tampering and Tamperproofing). Let Id(P, E) and Ia(P, E) be predicates over a program P and the environment E in which it executes. P is successfully tamperproofed if, throughout the execution of P, Id(P, E) holds. It is successfully attacked if, at some point during the execution of P, Ia(P, E) /\ not Id(P, E) holds and this is not detectable by P.

Official book page

Surreptitious Software: Obfuscation, Watermarking, and Tamperproofing for Software Protection

Publisher: Addison-Wesley
For example, in a cracking scenario, Ia could be, "P executes like a legally purchased version of Microsoft Word," and Id could be, "The attacker has entered a legal license code, and neither the OS nor the code of P have been modified." In a DRM scenario, Ia could be, "P is able to print out the private key," and Id could be, "The protected media cannot be played unless a valid user key has been entered /\ private keys remain private."

Conceptually, two functions, CHECK and RESPOND, are responsible for the tamperproofing. CHECK monitors the health of the system by testing a set of invariants and returning true if nothing suspicious is found. RESPOND queries CHECK to see if P is running as expected, and if it's not, issues a tamper response, such as terminating the program.

7.1.1 Checking for Tampering
CHECK can test any number of invariants, but these are the most common ones:

code checking: Check that P's code hashes to a known value:

if (hash(P's code) != 0xca7ca115)
return false;

result checking: Instead of checking that the code is correct, CHECK can test that the result of a computation is correct. For example, it is easy to check that a sorting routine hasn't been modified by testing that its output is correct:

quickSort(A, n);
for (i = 0; i < n-1; i++)
    if (A[i] > A[i+1])
        return false;

Checking the validity of a computed result is often computationally cheaper than performing the computation itself. For example, while sorting takes O(n log n) time, checking that the output of a sort routine is in sorted order can be done in linear time. Result checking was pioneered by Manuel Blum and has been used in commercial packages such as LEDA.

environment checking: The hardest thing for a program to check is the validity of its execution environment. Typical checks include, "Am I being run under emulation?", "Is there a debugger attached to my process?", and, "Is the operating system at the proper patch level?" While it might be possible to ask the operating system these questions, it's hard to know whether the answers can be trusted or if we're being lied to! The actual methods used for environment checking are highly system-specific.

As an example, let's consider how a Linux process would detect that it's attached to a debugger. As it turns out, a process on Linux can be traced only once. This means that a simple way to check if you're being traced is to try to trace yourself:

#include <stdio.h>
#include <sys/ptrace.h>

int main() {
    if (ptrace(PTRACE_TRACEME, 0, NULL, NULL))
        printf("I'm being traced!\n");
}

If the test fails, you can assume you've been attached to a debugger:

> gcc -g -o traced traced.c
> traced
> gdb traced
(gdb) run
I'm being traced!

Another popular way of detecting a debugging attack is to measure the time, absolute or wall clock, of a piece of code that should take much longer to execute in a debugger than when executed normally.

To see an example of the authors' favorite debugging attack, download the rest of Chapter 7: Software Tamperproofing (.pdf).

This was first published in November 2009
