I read that Facebook stops as many as 600,000 invalid login attempts daily, either pre- or post-attempt, based on what is considered anomalous activity. What sorts of anomalous activities should most enterprises use as criteria to define (and in turn potentially block) suspicious logins?
While the question of identifying and blocking suspicious logins in the enterprise seems simple, the answer is more complex, because the definition of anomalous activity varies greatly from one organization to the next. As a starting point, let’s address the question by looking at the definition of anomalous activity itself. There are several definitions of unusual login activity out there, but the one I like to use is: “Anomalous activity is unusual behavior relative to something expected.” First, it’s important to understand that the anomaly in question is a behavioral activity: something a human does, not a machine. Second, the organization must agree on what counts as “unusual activity.” For some organizations, it might be a large number of logins in a short period of time, or withdrawing money from an ATM five times a day. Finally, it’s necessary to know what results to expect from normal activity; for example, a user logs in to check their account status, but changes their password only once every three months.
So how do you determine the criteria needed to define which activities are normal and which are abnormal in your particular organization? To get started, define a set of possible legitimate events, also known as an event vocabulary. These events should include the actions a normal transaction would involve: a login with fewer than three password attempts, viewing account details, paying bills, transferring funds and buying tickets. Once the status quo of “normal events” is defined, security teams can do rudimentary anomalous activity monitoring by reporting on or blocking any event not on the list.
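At its simplest, an event vocabulary is just an allowlist: any event outside it gets reported or blocked. The sketch below illustrates the idea; the event names and the check itself are hypothetical, not taken from any particular product.

```python
# Hypothetical event vocabulary: the set of actions a normal
# session is expected to contain. Names are illustrative.
EVENT_VOCABULARY = {
    "login",           # assumes <3 password attempts is enforced upstream
    "view_account",
    "pay_bill",
    "transfer_funds",
    "buy_ticket",
}

def check_event(event_name):
    """Allow events in the vocabulary; flag anything off the list."""
    return "allow" if event_name in EVENT_VOCABULARY else "flag"
```

A vocabulary like this catches only events that should never occur; it says nothing yet about order or frequency, which later refinements add.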
Anomalous events may look like normal events, but as described above, they have an unexpected outcome; you can’t detect every anomalous activity on event content alone. More advanced monitoring therefore looks at preferred or typical event orderings, so an anomalous activity model needs to capture the order of events. Examples include accessing a site from an unusual location or at an unusual time of day, adding approvers or changing approval limits on a regular basis, and changing personal information or adding access for seemingly unrelated users.
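One simple way to capture event order is to model which transitions between consecutive events are expected, then flag a session’s out-of-order pairs. The transition set and event names below are illustrative assumptions, not a prescribed design.

```python
# Hypothetical set of expected event-to-event transitions,
# defined up front or learned from historical sessions.
ALLOWED_TRANSITIONS = {
    ("login", "view_account"),
    ("view_account", "pay_bill"),
    ("view_account", "transfer_funds"),
    ("pay_bill", "logout"),
    ("transfer_funds", "logout"),
}

def unusual_orderings(session_events):
    """Return the consecutive event pairs that break the expected order."""
    pairs = zip(session_events, session_events[1:])
    return [pair for pair in pairs if pair not in ALLOWED_TRANSITIONS]
```

Here a transfer straight after login, skipping the usual account view, would be surfaced even though both events are individually legitimate.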
Finally, organizations need a tracking system: a repository that logs activities over time so trends can be built. Because trends take time to build, anomaly tracking may start out with a large number of false positives, or it may detect anomalous activities late or not at all. For this reason, many anomaly-tracking tools use a grace period of 30 to 90 days while they collect data for your environment. During this time, monitoring tools run in “notify-but-not-block” mode: security personnel are notified of potential anomalous activity for further investigation, but no protective steps are taken. Be aware that sometimes the best way to monitor during this period is to contact users and verify that they executed the activity, which keeps legitimate activities that were falsely flagged as anomalous from being blocked.
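The grace-period logic amounts to a mode switch keyed on deployment age. This is a minimal sketch, assuming a 90-day window and simple string responses; a real tool would route notifications and blocks through its own workflow.

```python
from datetime import date, timedelta

# Assumed 90-day grace period (the article's range is 30-90 days).
GRACE_PERIOD = timedelta(days=90)

def handle_anomaly(deployment_date, today, user):
    """Notify-but-not-block during the grace period; block afterwards."""
    if today - deployment_date < GRACE_PERIOD:
        # Still collecting baseline data: alert staff, verify with the user.
        return f"notify: verify activity with {user}"
    return f"block: {user}"
```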
Flagging events allows you to analyze an individual’s behavior throughout a session, from login to logout: how the person accesses and manages their information, the types of transactions they engage in, the frequency of activities, what kinds of activities take place during the session and much more. By comparing individual activities or groups of activities, patterns of normal behavior, sometimes called behavior fingerprints, are built and used to determine which activities are legitimate and which are unusual and suspicious, allowing the enterprise to adjust its event vocabulary. As confidence in detection improves, blocking can slowly be turned on for clearly identified anomalous activity.
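A behavior fingerprint can be as simple as per-user activity frequencies averaged over past sessions, with new sessions compared against that baseline. The sketch below is an assumption-laden illustration; the 3x threshold and the 0.5 default for never-seen events are arbitrary choices, not recommended values.

```python
from collections import Counter

def build_fingerprint(past_sessions):
    """Average per-session count of each activity (the 'fingerprint')."""
    totals = Counter()
    for session in past_sessions:
        totals.update(session)
    n = len(past_sessions)
    return {event: count / n for event, count in totals.items()}

def suspicious_events(fingerprint, session, factor=3.0):
    """Flag activities occurring far more often than the user's baseline.

    Unseen events get an assumed small baseline (0.5) so a single
    occurrence of something new is not flagged, but repeats are.
    """
    counts = Counter(session)
    return [event for event, count in counts.items()
            if count > factor * fingerprint.get(event, 0.5)]
```

Comparing sessions this way is what lets the event vocabulary be tuned per user or per group rather than staying a single static list.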
Although a definitive list would have been ideal, the examples in this response should illustrate the point, and I hope you find understanding the process that Facebook and other organizations use to identify and flag anomalous activity more valuable still. After all, one entity’s anomalous actions may be another’s normal activity.
This was first published in December 2011