
Spotlight on 2023 Dan Kaminsky Fellow: Dr. Gus Andrews

After an inaugural year funding intensive work on scaling the way security researchers report and automate open source vulnerability fixes, the Human Security Dan Kaminsky Fellowship is diversifying its support into an entirely new area of security research this year. This time around, the fellowship gives Dr. Gillian “Gus” Andrews financial and data resources to find ways to translate threat intelligence best practices to the world of human rights and civil liberties.

The goal is to start formalizing ways to track the coordinated harassment, stalking, and disinformation campaigns that put the lives and liberty of activists, journalists, human rights workers, and non-governmental organization (NGO) employees at risk.

Author of Keep Calm and Log On, Andrews is a digital literacy expert with deep roots in both human rights advocacy communities and the cybersecurity world. She teaches graduate-level courses at Columbia University’s Teachers College on technology and culture; technology and literacy; anthropology; and education. Her research has led her down professional paths exploring user behaviors and perfecting design to create better user experiences for a number of organizations, including Linden Labs, the Open Internet Tools Project (OpenITP), Simply Secure, and Thoughtworks. In parallel with this work, she has also relentlessly pursued personal interests in both human rights activism and cybersecurity.

On the civil liberties side, Andrews was involved in some of the earliest activities of the Independent Media Center (Indymedia) movement, which cropped up in the late 1990s and early 2000s to facilitate communications about activism around a range of issues in a time before blogs or social media. As part of that participation, Andrews helped found New York City Indymedia, which happened to be co-located with 2600 Magazine’s hacker space in New York. That close proximity got her rubbing elbows with the likes of Emmanuel Goldstein and the 2600 crew, who in turn roped her into regularly attending the Hackers on Planet Earth (HOPE) conference and eventually led to her becoming deeply connected in the hacker community.

Last year, Andrews cross-pollinated all of her passions as she helped the DISARM Foundation build a minimum viable product for a threat intelligence framework to track disinformation campaigns in a similar fashion to the MITRE ATT&CK framework. That work got her musing about how threat intelligence practices and disciplines could be used to help protect the human rights community, which in turn spurred her proposal to Human Security for this year’s Dan Kaminsky Fellowship.

Kaminsky was Human’s co-founder and an impassioned advocate for making the world a better place through technology and for finding innovative force multipliers that make it possible to elegantly solve some of the Internet’s toughest wide-scale problems in privacy and security. In 2022, the Dan Kaminsky Fellow was Jonathan Leitschuh, who made waves with his research on using pull requests to automate and scale fixes in open source software.

Andrews earned this year’s fellowship with the goal of researching how the human rights community can create more formal means for sharing threat intelligence. As part of that, she’ll also be examining the links between traditional cybersecurity threat actors and the threat actors harassing and attacking human rights workers.

Dark Reading recently caught up with Andrews to discuss her background, the progress she’s made so far in her initial research, her goals for the rest of the fellowship, and what she hopes the research will yield in the long run. Here are some of the highlights from that Q&A session.

Goals for the Fellowship

Andrews: The fellowship really had two components. One was supporting that community’s ability to gather, share, analyze, and make use of digital threat information to a greater extent than they have been, because they have a CERT in their group, but what actually gets shared there is sort of low-level. There’s not that much stuff. That was half of the proposal. And the other half was me looking to compare indicators of compromise between disinformation campaigns and traditional cyber threats and see whether there are common actors, whether there’s common infrastructure, stuff like that.
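The infrastructure comparison Andrews describes can be pictured as a simple overlap check between two indicator lists. The Python sketch below is only an illustration under assumed inputs: the CSV file names and the "domain" column are hypothetical placeholders, not real feeds or any tooling from the fellowship.

```python
# Minimal sketch of an indicator-of-compromise overlap check, assuming two
# hypothetical CSV exports: one of indicators tied to disinformation campaigns,
# one from a traditional cyber-threat feed. File and column names are placeholders.
import csv


def load_indicators(path: str, column: str) -> set:
    """Read one column of indicators (domains, IPs, etc.) from a CSV file."""
    with open(path, newline="", encoding="utf-8") as handle:
        return {
            row[column].strip().lower()
            for row in csv.DictReader(handle)
            if row.get(column)
        }


def shared_infrastructure(disinfo_csv: str, threat_csv: str, column: str = "domain") -> set:
    """Indicators present in both data sets, hinting at common infrastructure or actors."""
    return load_indicators(disinfo_csv, column) & load_indicators(threat_csv, column)


if __name__ == "__main__":
    overlap = shared_infrastructure("disinfo_campaign_iocs.csv", "cyber_threat_iocs.csv")
    print(f"{len(overlap)} indicators appear in both feeds")
    for indicator in sorted(overlap):
        print(" ", indicator)
```

In practice the comparison would also need normalization (defanged indicators, subdomains, observation dates), but the core question of shared infrastructure reduces to this kind of set intersection.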

Looking For Links Between Bad Actors Online

Andrews: You might have a woman journalist and she’s out doing her work, but is being attacked by shadowy forces or large online communities, people sort of coordinating campaigns to be like, “You shouldn’t be doing your work, you should stay at home.” And making other horrible gendered attacks, sometimes much worse than that.

A lot of this stuff has a sort of coordinated, inauthentic flavor to it. There’s a lot of activity where clearly somebody has bought a botnet, somebody is running a big campaign like that. So from the beginning, one of my senses of this work was that one of the ways I could really help out is to see whether we can start to identify the command and control, or just any indicators of what’s going on, and whether that’s something we can do to support the folks who are being attacked. And that is not actually something this community has had much capacity to do.

I mean particularly when it comes to Russia, I’m aware that they do use both kinds (of harassment techniques). They’ll have farms of actual people, and then there will also be more automated stuff. I think it’s worth digging into that further and seeing what’s there.

The Human Rights Community She Hopes to Help

Andrews: It’s an interesting thing to describe because it’s literally a loose affiliation of NGOs and then people working independently. People sort of go in and out of working at Facebook, working at Google, and then they’ll come back and do work in the NGO space again. But like so many things in the digital security space, and particularly the threat intel space, we’ve built up a lot of trust over the years. We all have met each other at conferences and we’re like, “OK, this is a real person. We trust them.”

For me and for a lot of people in this community, doing digital threat intelligence represents a lot of upskilling. There’s just not that much in the way of threat intelligence chops there, and everybody’s really interested in doing more of it.

How a Media Literacy Scholar Got Tapped into the Hacker Community

Andrews: I started attending the Hackers on Planet Earth conferences, like some random kid who had done a little bit of activist stuff. But I started attending it and just going to every single talk. I would sit through all the talks. And there’s no breaks between talks, there’s no breaks for lunch. HOPE is still to this day a conference for 18-year-olds. And you have to remind them, “Go to sleep, eat a meal, and take a shower.” It’s still that conference, despite the fact that Emmanuel is now well into his sixties. Yeah, HOPE is a very stroll-up-and-you’ll-just-learn-things conference. So that was how I learned a lot of stuff.

I started speaking at the Hackers on Planet Earth Conference. I actually weaseled my way onto Matt Blaze’s panel one year. And Matt and I have been friends since then. We’ve been through a lot together, actually. So I was sort of doing this casually outside of my doctorate in education.

And I had this sort of weird dissociated thing where I had to keep my hacking work and my educational work apart for a really long time, to the extent that when I graduated from Teachers College, I talked to Renee Hobbs, who’s like a leading light of media literacy. And she was looking at my resume being like, “I don’t see your home conference. There’s no clear place that you’ve been.” Because I hadn’t talked about the fact that I’d been going to the Hackers on Planet Earth Conference for 10 years at that point.

This was all in parallel until I took this job at the Open Internet Tools Project at New America (in 2013), and then I was finally able to bring this stuff together.

The DISARM Foundation

Andrews: Last summer I worked with the DISARM Foundation, which is working on creating a MITRE ATT&CK-like framework for understanding disinformation, basically.

And I’m going over MITRE ATT&CK, which turns out to have been made by Adam Pennington, who I just knew as a random guy who was at HOPE. I had no idea who was developing MITRE ATT&CK. And so he and I have had great conversations. He’s been bringing me up to speed. I looked at MITRE ATT&CK and I was like, “20 years of HOPE and I totally understand what’s going on with this. Maybe I should look into threat analysis.”

So it’s a bit of a leap and a bit of a stretch for me, but I understand what all the attacks are, and I know how to talk to people and use the MITRE ATT&CK framework to be like, “Here’s the reason why somebody might use this technique, chain it into that technique, and then get further access and escalation of privileges over here.” It’s sort of a messy path that has finally put me in a place where I can do threat analysis.
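For readers who haven’t worked with MITRE ATT&CK or DISARM, the practice Andrews describes amounts to tagging each observation with a shared vocabulary of tactics and techniques so different analysts can compare notes. The Python sketch below is purely illustrative; the tactic names and "EX-" technique IDs are invented placeholders rather than real entries from either framework.

```python
# Illustrative sketch of tagging observations against a framework-style vocabulary.
# The tactic names and "EX-" technique IDs below are placeholders, not real
# MITRE ATT&CK or DISARM identifiers.
from dataclasses import dataclass


@dataclass
class TaggedObservation:
    observed: str       # what the analyst actually saw
    tactic: str         # the adversary's goal, in framework terms
    technique_id: str   # framework technique identifier (placeholder here)
    notes: str = ""


campaign_log = [
    TaggedObservation(
        observed="Hundreds of near-identical replies hit a journalist within minutes",
        tactic="Amplification",
        technique_id="EX-0001",
        notes="Timing pattern suggests automation rather than organic activity",
    ),
    TaggedObservation(
        observed="Look-alike domain registered impersonating the target news outlet",
        tactic="Establish Assets",
        technique_id="EX-0002",
    ),
]

for entry in campaign_log:
    print(f"[{entry.technique_id}] {entry.tactic}: {entry.observed}")
```

Once observations share identifiers like these, it becomes possible to ask the chaining questions Andrews mentions: which techniques tend to precede which, and where defenders can intervene.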

Currently on a Listening Tour

Andrews: I’m still doing what I think of right now as a listening tour; I’m sort of wrapping that part of it up. I’m also really doing workplace ethnography, because my training is, to some extent, in anthropology as well as education, and going, “In these workflows about sharing information, what’s not working for our community? Where are the disconnects? Is it a matter of people not having the skills? Is it a matter of them not having the time? Is it a matter of people not having access? Is it a matter of lack of trust?” So I’m trying to figure out what needs to be done there.

The Challenges of Threat Intel in Human Rights

Andrews: We don’t have a network perimeter, because it’s a whole bunch of separate organizations. We do hear from threat labs in the field, in places like South America, where sometimes a journalist can come in and leave their phone with us at least long enough for us to take an image, and then we can do the analysis.

But for the most part, there are places in the world where some of the folks who may come in with something suspicious happening on their device, they only have one device for their entire family and their livelihood depends on it. And so this is not something where you can leave the device for a while. Plus, there are a limited number of people who know how to do this stuff (in the community).

Another one of our challenges with communication has really been how informal the communication channels are. Say you have a bunch of Syrian journalists documenting atrocities on the street and they’re doing it on Facebook, and then Facebook’s like, “This violates our gore policy.” And takes it down. And the Syrian journalists are like, “Please don’t delete that. It’s actually documentation of war crimes.”

So most of what’s happened, most of the channels for getting that stuff rectified have been based on individual people. They’ve been informal. I actually went to a really good session last week and somebody from Apple attended, which was really great, and she was like, “We’re working on formalizing these sort of civil society connections, and the way we’re doing it is actually modeling it on our Bug Bounty Program.”

Getting to a Path of Better OSINT

Andrews: I’ve been feeling like what it’s missing is maybe a certain amount of open source intelligence (OSINT) work. We do have organizations that do that work. There are some really great ones. My favorite these days is actually the Global Disinformation Index, which is doing a lot of work on where ads are supporting hate speech websites, and then running campaigns around that which really follow the money. And then I’ve been compiling basically a spreadsheet (because spreadsheets are my love language) of all the possible data sets we could be using to take a look.

What I’m hearing from people is that there are problems around the reporting structures and around getting things done with that kind of stuff. I’m going to end up focusing on that pipeline. I’m already working with one group of folks on a triage workflow we have that’s sort of like a website where people can go and be like, “Hey, I’m having some problems.” And we send them down a pipeline of, “Do these things and then check in with these organizations. They might be able to help you.”

I’m working with them on their documentation workflow, helping clarify, “What is it you need to gather to tell people who want to be able to help you?” As you can see, I’m going in many directions at once.
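The triage-and-referral pipeline Andrews describes could be modeled, in very rough form, as a lookup from a reported problem category to first steps and referral organizations. Everything in the Python sketch below (the categories, steps, and referral names) is a hypothetical placeholder, not the actual workflow her group runs.

```python
# Rough sketch of a triage-and-referral lookup. Categories, steps, and referral
# names are hypothetical placeholders for illustration only.
TRIAGE_GUIDE = {
    "account_compromise": {
        "steps": [
            "Change the password from a device you trust",
            "Turn on two-factor authentication",
            "Review recent sign-in activity and active sessions",
        ],
        "refer_to": ["A digital security helpline (placeholder)"],
    },
    "coordinated_harassment": {
        "steps": [
            "Preserve evidence: screenshots with timestamps and URLs",
            "Report the accounts through the platform's abuse channel",
        ],
        "refer_to": ["A journalist protection NGO (placeholder)"],
    },
}


def triage(category: str) -> None:
    """Print first steps and referrals for a reported problem category."""
    guide = TRIAGE_GUIDE.get(category)
    if guide is None:
        print("No matching guidance; escalate to a human responder.")
        return
    print("Do these things:")
    for step in guide["steps"]:
        print("  -", step)
    print("Then check in with:")
    for org in guide["refer_to"]:
        print("  -", org)


triage("coordinated_harassment")
```

The documentation workflow she mentions would sit in front of a lookup like this, making sure reporters capture the details (dates, URLs, screenshots) that the helping organizations need.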
