ENSPIRING.ai: Anatomy of an AI ATTACK - MITRE ATLAS
The video discusses the importance of understanding the root cause of a problem in order to address it effectively, using the analogy of fixing a leaky pipe. The same idea applies to cybersecurity, especially when dealing with AI-based attacks. The speaker emphasizes identifying the type of attack, the target, and the steps the attackers took so that such threats can be better prevented in the future.
The video then introduces ATLAS, a tool from MITRE designed to aid the understanding of AI-based attacks. It builds on MITRE's earlier ATT&CK framework, focuses specifically on AI, and catalogs the tactics and techniques that attackers might use. The video highlights how critical it is to understand this information in order to develop better defensive measures within existing cybersecurity frameworks.
Key Vocabulary and Common Phrases:
1. adversarial [ˌædvərˈsɛriəl] - (adjective) - Involving or characterized by conflict or opposition. - Synonyms: (hostile, antagonistic, contentious)
I did a video on this first one. It's called the adversarial tactics, techniques, and common knowledge.
2. traversal [trəˈvɜːrsəl] - (noun) - The act of passing across, over, or through something. - Synonyms: (crossing, passage, journey)
The bottom line is, if I'm going to fix this, I got to know where the problem is and how this water has traversed.
3. reconnaissance [rɪˈkɑːnəsəns] - (noun) - Military observation of a region to locate an enemy or ascertain strategic features. - Synonyms: (scouting, exploration, observation)
The first is reconnaissance.
4. mitigation [ˌmɪtɪˈɡeɪʃən] - (noun) - The action of reducing the severity, seriousness, or painfulness of something. - Synonyms: (alleviation, reduction, diminishment)
Ultimately, what are the mitigations that I need to put in place in order to figure out how I fix this problem?
5. lingua franca [ˈlɪŋɡwə ˈfræŋkə] - (noun) - A common language used between people whose native languages are different. - Synonyms: (bridge language, trade language, vehicular language)
A lingua franca if you will, something that we can all in the industry use to describe.
6. append [əˈpɛnd] - (verb) - To add something as an attachment or supplement. - Synonyms: (attach, add-on, annex)
And in this case, what they did was they appended just a little bit of good information.
7. bypass [ˈbaɪˌpæs] - (noun) - A method or measure designed to avoid a specific, problematic area. - Synonyms: (detour, diversion, alternative route)
And it discovered that there was a universal bypass that could be appended to malware.
8. verbose [vərˈboʊs] - (adjective) - Using or expressed in more words than are needed. - Synonyms: (wordy, lengthy, prolix)
They turned verbose logging on.
9. heat map [hiːt mæp] - (noun) - A data visualization technique that shows the magnitude of a phenomenon as color in two dimensions. - Synonyms: (intensity map, color map, density plot)
And then a heat map as well. And the heat map shows you another visualization.
10. cybersecurity [saɪbər sɪˈkjʊrɪti] - (noun) - The practice of protecting systems, networks, and programs from digital attacks. - Synonyms: (information security, IT security, network security)
So it's the same with cybersecurity, in particular with AI based attacks.
Anatomy of an AI ATTACK - MITRE ATLAS
If you want to fix a problem, you have to first understand what's causing the problem. So, for instance, with this leaky pipe, we've got water pooling up here. Where's the cause? Well, is it because there is a break in the bend in this pipe, or is it further upstream? Maybe it's this fitting that's loose and therefore it's dripping down there. Or maybe the source is actually higher up in the system and the water is flowing down. The bottom line is, if I'm going to fix this, I got to know where the problem is and how this water has traversed.
So it's the same with cybersecurity, in particular with AI based attacks. I'm going to need to understand the type of attack that I'm dealing with, then I can get out the right tools. I need to understand what the target is, what is the bad guy after in this attack, and then what are the steps that they took? If I can understand that and retrace those, then I can do a better job of preventing this in the future. And then ultimately, what are the mitigations that I need to put in place in order to figure out how I fix this problem?
We're going to take a look in this video at a tool that you can use to better understand AI-based attacks. There's an organization called MITRE that came out with a framework we use in the industry, and it's very useful. I did a video on this first one. It's called Adversarial Tactics, Techniques, and Common Knowledge, or ATT&CK. It covers cybersecurity attacks in general and shows you the steps, the things an attacker could go through, so that you understand them better. Well, they built on that and came out with a new version that is designed specifically for AI. It's called ATLAS for short: the Adversarial Threat Landscape for Artificial-Intelligence Systems. So ATLAS is what we're going to take a look at today so that we can better understand this new class of AI-based attacks.
So why do we have to care about these AI-based attacks? Well, it turns out MITRE, which I mentioned previously, has already documented one case that cost $77 million in damages. That was an AI-based attack; it was an attack on the AI within a particular system. So we've already seen that this can be expensive, and I expect that number is only going to increase as we start using AI more and more in all kinds of use cases.
So, ATLAS. Let's take a look at what this thing is. This is what the framework looks like, and you can get a general sense of what's there. You can see the tactics in the columns. The first is reconnaissance. Then we have resource development, initial access, and so forth. So that is what the framework looks like. The tactics, then, are basically the why: what is the attacker really trying to accomplish in a particular step? For instance, as I mentioned, reconnaissance. They're trying to case the joint, trying to figure out what the environment looks like. That's the why, and MITRE has documented 14 different kinds of whys. Then there are the techniques. These are the how: how do they go about doing what they're going to do?
And we've got 82 of those already documented. These numbers may well grow over time as we learn more and as attackers find more ways to do things. Also included, to help illustrate a lot of this, are case studies. There are 22 different case studies as of the time of this video, and there may be more in the future. In fact, we're going to take a look at one of those in a minute, because it further illustrates how this works. There's also this thing called a navigator.
So the navigator shows you which of these have been selected, which of these have been followed. Think of it as a breadcrumb trail that shows you, in this particular attack, what actually occurred: out of all the different possible things, here are the ones that were actually chosen by the attacker. And then there's a heat map as well, which gives you another visualization of what these different tactics and techniques could be.
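To make that breadcrumb-trail idea a little more concrete, here is a minimal sketch in Python of what such a trail and a Navigator-style layer could look like. This is purely illustrative: the tactic names follow the ones mentioned in this video, but the technique names, IDs, and field names are placeholders and are not taken from the official ATLAS matrix or the real Navigator layer schema.

```python
import json

# Hypothetical breadcrumb trail for one attack: each step pairs a tactic
# (the "why") with a technique (the "how"). Names and IDs are placeholders,
# not official ATLAS identifiers.
observed_steps = [
    {"tactic": "Reconnaissance",       "technique": "Search victim's public materials", "id": "AML.T0000"},
    {"tactic": "ML Model Access",      "technique": "Inspect product with verbose logging", "id": "AML.T0001"},
    {"tactic": "Resource Development", "technique": "Develop adversarial ML capability", "id": "AML.T0002"},
    {"tactic": "ML Attack Staging",    "technique": "Manual modification of the input", "id": "AML.T0003"},
]

# A simplified Navigator-style "layer": each observed technique gets a score,
# so a heat map can color it more intensely than the techniques that were not used.
layer = {
    "name": "Example AI attack trail",
    "domain": "atlas",  # placeholder domain label
    "techniques": [
        {"techniqueID": step["id"], "score": 1, "comment": step["technique"]}
        for step in observed_steps
    ],
}

print(json.dumps(layer, indent=2))
```

The point is simply that each step pairs a why with a how, and scoring the techniques that were actually observed is what lets a heat map light up the path the attacker took.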
Okay, let's take a look at an actual case study from the MITRE ATLAS framework. This particular case looked at a malware scanner that was based on machine learning, and it discovered that there was a universal bypass that could be appended to malware that would fake out the system, so it wouldn't identify the malware as, in fact, harmful.
So how did this work? We're going to map this to the various tactics and techniques; in particular, we'll take a look at the tactics. So the recon stage: what did the attacker do? Well, the first thing they did, it seems, is they went for public information. There was a decent amount of this available from the organization, which maybe gives talks at conferences, presentations, maybe even YouTube videos or things like that. So there's publicly available information like that, plus patents and other intellectual property that might have been filed in a public format. You can use all of this to do your initial reconnaissance.
Okay, the next step, then, is machine learning model access. What did they do in this case? Well, they took a look at the product itself, the tool that's supposed to be doing this detection, and they started trying to see how the thing works. They turned verbose logging on. That means the system is writing out all kinds of information about what it's seeing, and all of that information is something an attacker can use in later steps.
And by looking at all of this, they figured out a bit about what the reputation scoring system inside the product was like. It's looking at the malware and classifying it as either good or bad. Then the next stage is resource development. In this case, what they're going to do is work on developing some adversarial machine learning.
In particular, what they identified through reverse engineering was that there were some specific attributes, things the malware scanner was looking for all the time, and when it saw those things, that's when it would flag something as malware. So what they tried to do was discover how that algorithm worked and what that reputation scoring process was like. And in particular, they made a discovery that there was actually a second model included in the product. The second model was basically an override: if it found enough good in the code, it would override the first model's suspicions about malware. And that became the weak point that got exploited.
Then comes the ML attack staging. In this case, what they did was a manual modification: they go in and modify the malware that's being submitted into the system. And in this case, what they did was they appended just a little bit of good information. They mixed in just enough good information with the malware and figured out that if they added it at the very end, appended it, everything would look okay and the system would not recognize the malware, because the second model would do the override. And then ultimately they launch this and we have our boom. That's the attack that evades the defense which is looking for this malware.
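To see why appending a little "good" information could flip the verdict, here is a toy sketch of the logical flaw. It is not the real scanner or its models; the scoring functions, thresholds, and marker strings are all invented for illustration. It only shows how a secondary "goodness" model that can override the primary score creates an exploitable weak point.

```python
# Toy illustration of the design flaw described above, not the real product.
# All scores, thresholds, and strings are invented for this sketch.

def primary_malware_score(sample: bytes) -> float:
    """Pretend primary model: flags samples containing a 'suspicious' marker."""
    return 0.9 if b"SUSPICIOUS_PATTERN" in sample else 0.1

def secondary_goodness_score(sample: bytes) -> float:
    """Pretend override model: counts 'known-good' markers it recognizes."""
    return min(1.0, sample.count(b"KNOWN_GOOD_MARKER") * 0.4)

def verdict(sample: bytes) -> str:
    mal = primary_malware_score(sample)
    good = secondary_goodness_score(sample)
    # The flaw: if the secondary model sees "enough good", it overrides
    # the primary model's suspicion entirely.
    if good >= 0.8:
        return "clean (override)"
    return "malicious" if mal >= 0.5 else "clean"

original = b"...payload...SUSPICIOUS_PATTERN..."
modified = original + b"KNOWN_GOOD_MARKER" * 2   # append benign-looking content at the end

print(verdict(original))   # -> malicious
print(verdict(modified))   # -> clean (override)
```

In this toy example, the appended content never removes the suspicious payload; it just gives the override model enough "good" signal to outvote the primary model, which is exactly the kind of weakness the case study describes.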
Okay, so now we've gone through one of the case studies that comes with the MITRE ATLAS framework. Hopefully, you have a little better idea of how this framework gives us a better understanding of the problem, because we can go back and see the source, we can see the steps the person went through, and we can understand what sort of tactics and techniques were employed. We can also use this as a common description, a common language, a lingua franca if you will, something that we can all use in the industry to describe these attacks. So when we talk about reconnaissance, we know what that means. When we talk about resource development, we know what that means, because we're all reading from the same description. The hope, then, is that with better understanding and a common description, we end up with better defenses. And that's really what we're trying to do with AI, this new attack surface.
Artificial Intelligence, Cybersecurity, Technology, MITRE ATLAS, AI Attacks, Security Frameworks, IBM Technology