Definition

address space layout randomization (ASLR)

Contributor(s): Sharon Shea

Address space layout randomization (ASLR) is a memory-protection process for operating systems (OSes) that guards against buffer-overflow attacks by randomizing the locations where system executables are loaded into memory.

The success of many cyberattacks, particularly zero-day exploits, relies on the attacker's ability to know or guess the position of processes and functions in memory. ASLR places these address space targets in unpredictable locations. If an attacker attempts to exploit an incorrect address space location, the target application crashes, stopping the attack and alerting the system.
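The effect is easy to observe from user space. The short Python sketch below (illustrative only; it assumes a Unix-like system with ASLR enabled) launches two fresh interpreter processes and prints where the same kind of heap object landed in each; under ASLR the reported addresses typically differ from run to run:

```python
import subprocess
import sys

def sample_heap_address():
    # Start a brand-new interpreter process and report the address
    # (CPython's id()) of a freshly allocated heap object. Each new
    # process gets its own randomized memory layout under ASLR.
    out = subprocess.run(
        [sys.executable, "-c", "print(hex(id(object())))"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

a = sample_heap_address()
b = sample_heap_address()
print("run 1:", a)
print("run 2:", b)  # typically a different address when ASLR is on
```

With ASLR disabled (for example, `randomize_va_space = 0` on Linux), the two runs would report the same address every time.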

ASLR was created by the PaX project as a Linux patch in 2001 and was integrated into the Windows operating system beginning with Vista in 2007. Prior to ASLR, the memory locations of files and applications were either known or easily determined.

Adding ASLR to Vista increased the number of possible address space locations to 256 (8 bits of entropy), meaning attackers had only a 1-in-256 chance of guessing the correct location to execute code. Apple began including a partial form of ASLR (library randomization) in Mac OS X 10.5 Leopard, and Apple iOS and Google Android both started using ASLR in 2011.
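The arithmetic behind the 1-in-256 figure can be sketched as follows (the 100-guess count is an arbitrary example for illustration, not a figure from the article):

```python
ENTROPY_BITS = 8               # Vista-era randomization entropy
slots = 2 ** ENTROPY_BITS      # 256 possible load addresses

p_single = 1 / slots                        # odds of one blind guess
p_within_100 = 1 - (1 - p_single) ** 100    # odds of a hit in 100 tries

print(f"{slots} possible locations; single-guess odds: 1 in {slots}")
print(f"chance of at least one hit within 100 tries: {p_within_100:.1%}")
```

Because each wrong guess typically crashes the target (and can trigger an alert), even these modest odds impose a real cost on the attacker; modern 64-bit systems use far more entropy than Vista's 8 bits.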

This was last updated in June 2014


1 comment

ASLR is still an after-the-fact fix. Security must be built in from the beginning, not added as an afterthought via tricks like ASLR.



Time to go back to 1964 with the release of Bob Barton's Burroughs B5000. These machines were an entire system design with correctness and security checks built in. They lost out for the next 50 years to the performance-is-everything thinking of the time.



Now is the time to change that and build machines that are intrinsically secure. The B5000 was a descriptor-based machine: each allocated memory block had a separate descriptor holding metadata such as the block's address, length and type, and every access to a block went through its descriptor. The hardware checked each access against the address and length (Unisys ClearPath MCP machines still do this). Programs that attempted an out-of-bounds access or buffer overflow were immediately terminated. This helped legitimate developers produce correct software and stopped hackers immediately. With this technique, the trillions of dollars lost to security breaches from viruses and worms would have been avoided. This is not a case of restricting what programmers can do, as has been the thinking of the last 50 years.
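As a loose software analogy to the descriptor mechanism described above (the class and names here are hypothetical illustrations, not the B5000's actual design), a descriptor can be modeled as metadata that every memory access must pass through:

```python
class Descriptor:
    """Metadata guarding a memory block: its data, length and type."""

    def __init__(self, block, kind):
        self._block = block
        self.length = len(block)
        self.kind = kind

    def read(self, offset):
        # Hardware-style bounds check: an out-of-range access is
        # rejected outright instead of silently overflowing into
        # a neighboring block.
        if not 0 <= offset < self.length:
            raise MemoryError(f"out-of-bounds access at offset {offset}")
        return self._block[offset]

buf = Descriptor(bytearray(b"payload!"), kind="data")
print(buf.read(3))       # in-bounds read succeeds
try:
    buf.read(99)         # buffer-overrun attempt
except MemoryError as e:
    print("terminated:", e)
```

The key point of the descriptor approach is that the check happens on every access, in hardware, so a buffer overflow cannot occur in the first place; ASLR, by contrast, only makes a successful overflow harder to exploit.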



Any machine that requires raw performance and must avoid such checks should be kept off the network as a stand-alone machine. In the 1960s, scientific processing was huge and business (rest-of-life) computing significant but not predominant. In 2017, business computing is entrenched in everyday life. Unfortunately, insecure machines are the basis of modern computing.



It is time to change and build security into the most fundamental layers of computing.

