Steven Murdoch got as far as pressing the reply button before he realized something was awry in the email he received.
Murdoch's boss, Steve, wrote that he was stuck in a meeting and urgently needed to send an e-gift card to his sick friend for her birthday. Steve assured Murdoch he would be reimbursed after he got out of the meeting.
Only this "Steve" wasn't the real Steve -- he was a cybercriminal.
Murdoch, associate professor and research fellow at University College London (UCL) and innovation security architect at OneSpan, contacted the real Steve's staff and warned the rest of the department not to engage with the fraudulent correspondence.
But Murdoch wasn't quite done yet. He started a conversation with the scammer to see what he could glean about his intent and execution.
Here, Murdoch details the tactics of the cybercriminal, his own efforts to investigate the scammer and how those insights can help organizations spot phishing emails and mitigate risk.
Editor's note: This transcript has been edited for length and clarity.
In January 2020, you were a target in an email phishing attempt. Can you describe what happened?
Steven Murdoch: I [spotted] a phishing email that I received through my UCL email address. I get a number of phishing emails -- as does everyone else at UCL -- and even more are blocked by the email security system. But this one was somewhat personalized -- the initial email was asking if I was available.
I was curious to see what would happen if I responded, and I wanted to know exactly how the criminal would try to get money out of me. I also wanted to see whether I could find out the capability of that attacker and protect other people at UCL, so I later posted a thread on Twitter detailing the conversation.
Today my Head of Department emailed me about something. It sounded urgent, though it's odd he switched to using a Gmail address [thread] pic.twitter.com/w1eyNMit9Z— Steven Murdoch (@sjmurdoch) January 14, 2020
After the initial email from my head of department, Steve, I immediately tried phoning him, just in case it was actually an email from him. I couldn't get through, so I emailed his staff to notify the department what was happening and to encourage people not to respond. At that point, I thought the main threat was mitigated, but I decided to continue the conversation to see what would happen, so I replied. My goal was to get the attacker to say something incriminating, which I could pass on to Google security so they could investigate the account. Eventually the scammer did -- he tried to get me to buy some gift cards.
I thought it would be interesting to know where this person was, so I sent a PDF with an embedded link that, when clicked, would bring that person to a website I controlled. This way I could find the IP address. I used a geolocation service to locate them in Nigeria.
Finally, I wanted to see what other information I could find, so I sent off a gift card that didn't work. I noticed that the error message sent back to me was from a service called Paxful, which enables you to exchange gift cards for Bitcoin. I reported that person to Paxful as well.
Actually, that was a tracker. "Steve" is using an iPhone and is in Lagos, Nigeria (using the same IP address as Katy, who received a slightly different file). pic.twitter.com/XcMkTW1bEz— Steven Murdoch (@sjmurdoch) January 14, 2020
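The tracking step described above can be sketched in a few lines. This is only an illustration of the general technique, not Murdoch's actual tooling: the log format, the `/track/` path and the stub geolocation table are all assumptions. The idea is that the PDF's embedded link points at a server you control, so a click appears in that server's access log with the visitor's IP address, which can then be run through a GeoIP lookup.

```python
import re

# Match an Apache-style access-log line: capture the visitor's IP and the
# tracking path they requested (both hypothetical examples).
LOG_LINE = re.compile(r'^(\d{1,3}(?:\.\d{1,3}){3}) .* "GET (/track/\S+)')

def extract_click(log_line):
    """Return (visitor_ip, tracking_path) from an access-log line, or None."""
    m = LOG_LINE.match(log_line)
    return (m.group(1), m.group(2)) if m else None

def geolocate(ip):
    """Stub lookup: a real setup would query a GeoIP database or service."""
    demo_db = {"105.112.0.1": "Lagos, Nigeria"}  # illustrative entry only
    return demo_db.get(ip, "unknown")

# Hypothetical log line produced when the embedded link is clicked.
line = '105.112.0.1 - - [14/Jan/2020:10:00:00 +0000] "GET /track/pdf-link HTTP/1.1" 200 -'
ip, path = extract_click(line)
```

In practice, the per-recipient path (Murdoch mentions sending Katy a slightly different file) is what lets the tracker tell which recipient clicked.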
I was curious. The researcher in me wanted to find out about the tactics cybercriminals are using at the moment. I wanted to request that the email traces and other resources this person used be shut down to protect others. The scammer emailed everyone in the department, so this was not specialized at all. However, the attacker did do enough research to find out who the head of department was. Many other people from academic institutions said they were targeted in similar ways, so this is a common strategy.
Your whole department was contacted -- is that normal in phishing campaigns?
Murdoch: Quite often everyone gets contacted -- it's easier to send a lot of emails rather than work out who the right target is. Other attackers are a bit more sophisticated: They look at people who are likely in a position to buy things or transfer money. Even more sophisticated criminals will compromise the email system of an organization or its computers and then intersperse phishing emails into existing conversations. In some cases, they will alter emails before they are received. Those communications are the hardest to spot.
These sorts of techniques require a bit of research, but that research is not particularly hard to do, especially with social media or online employee directories.
What other tactics do scammers use today?
Murdoch: One tactic is to appeal to the phishing target's emotions. The story I was told was that the friend was in the hospital with cancer. Another tactic is to get people to act quickly. For example, 'Steve' told me the request was urgent. This means people don't have the time to consider whether the actions they are taking are appropriate.
Oh no, he's forgotten to get his best friend a birthday present. And the friend is in hospital. With cancer! pic.twitter.com/d66qwNTJ2S— Steven Murdoch (@sjmurdoch) January 14, 2020
How effective are phishing tests at educating employees about the dangers of social engineering?
Murdoch: I think phishing tests are not particularly effective. They measure how effective the phishing campaign is rather than how vulnerable users are. With a little bit of effort, any company could come up with a phishing campaign that would get close to 100% of its employees to click on a phishing link. Organizations should accept that people will click on links [and] that people will reply to emails.
Phishing tests also harm the relationship between the employee and the company. If a company is continually trying to trick its employees, that's going to cost the organization.
Is there an ideal combination of user awareness and security technology to recognize -- and stop -- phishing email attempts?
Murdoch: Don't fall for the trap of 'blame and train.' This is where something bad happens and the response is to blame the victim and arrange training in the aftermath. The most productive thing is to tell employees the appropriate way to carry out actions that are high risk to the organization and make sure they follow through. For example, suppose a criminal compromises a company's supplier. The disguised supplier emails the company saying, 'Our bank account information has changed; it is now this.' That is not going to lose you [about $500] in gift cards. That loses you hundreds, thousands or maybe millions of dollars. Mitigate this by telling people in positions of responsibility that they should never accept such details by email.
Some training will tell people not to click on links and to be distrustful of all emails received. This is problematic because it means people will be slower responding to legitimate emails. While there is no one-size-fits-all method in terms of security training, I think the more effective approach to prevent social engineering would be to tell people what to do when they are suspicious. Training should focus on how to prevent bad things happening to the organization and, at the same time, meet the natural behavior of people.
Having employees trust their security officer is also helpful. That way, when someone spots a phishing email or another security incident happens, employees can tell the security officer -- who will not blame them, but will work to get to the heart of the issue and protect the organization.
Some security software cannot prevent compromised but valid credentials from being used to explore a targeted network, install malware or steal data. How can enterprises limit damage from being done inside their company?
Murdoch: No individual product is going to prevent everything. Instead, use techniques that limit access to only what people should be entitled to. Ensure protection around the assets criminals are targeting -- for example, users or devices that can move money into other people's accounts.
Another good approach is following the GDPR principle that, if an organization doesn't need to store something, it shouldn't store it. If data isn't stored, it can't be stolen. This is a useful and often overlooked technique for avoiding GDPR breach claims. Behavioral monitoring can also be effective at telling the difference between what people normally do and what an attacker would do.