ENSPIRING.ai: 10 Principles for Secure by Design - Baking Security into Your Systems
This video highlights the importance of incorporating security measures directly into a system's design rather than adding them as an afterthought. Retrofits tend to be more costly and less effective. By implementing security early in the development process, particularly during the design phase, costs can be minimized and systems better protected. The video introduces ten fundamental principles essential for designing secure systems and emphasizes embedding them into the process from the outset.
The video explains several key principles of security by design. The principle of least privilege ensures that users have only the access necessary to perform their functions, thereby reducing potential security vulnerabilities. Defense in depth involves establishing multiple layers of security so that no single failure can compromise the system. Other principles discussed include fail-safe defaults, keeping systems simple to minimize complexity, separating duties to prevent abuse, and keeping designs open and transparent rather than relying on obscurity for security.
Main takeaways from the video:
1. Security that is baked in during design is cheaper and more effective than security retrofitted after deployment.
2. The ten principles: least privilege, defense in depth, fail-safe, keep it simple (KISS), separation of duties, open design, segmentation, usability, minimized attack surface, and secure by default.
Key Vocabularies and Common Phrases:
1. retrofitted [ˈrɛtroʊˌfɪtɪd] - (verb) - To add new technology or features to an older, existing system. - Synonyms: (modernized, upgraded, refurbished)
A retrofit is always going to be more expensive and probably not as elegant.
2. incorporates [ɪnˈkɔrpəˌreɪts] - (verb) - To include or integrate a part into the whole. - Synonyms: (includes, integrates, embodies)
The best security is the kind that's baked in, not bolted on after the fact.
3. privilege [ˈprɪvəlɪdʒ] - (noun) - A special right or advantage available only to a certain group. - Synonyms: (advantage, right, entitlement)
It's what we refer to as the principle of least privilege.
4. segment [ˈsɛgmənt] - (verb) - To divide into parts or sections. - Synonyms: (divide, partition, section)
Maybe some very specific cases of confidential information where we segment that off.
5. mechanism [ˈmɛkəˌnɪzəm] - (noun) - A system of parts working together in a machine or process. - Synonyms: (system, structure, process)
I never want to rely on any single security mechanism to secure the entire system.
6. cryptography [krɪpˈtɒgrəfi] - (noun) - The practice of secure communication through encoding. - Synonyms: (encryption, encoding, ciphering)
So we're going to use cryptography in order to make sure that the data...
7. obscurity [əbˈskjʊrɪti] - (noun) - The state of being unknown or not easily understood. - Synonyms: (ambiguity, vagueness, anonymity)
It's the opposite of security by obscurity.
8. fail-safe [ˈfeɪlseɪf] - (noun) - A design feature that causes a system to revert to a safe, secure condition when it fails. - Synonyms: (safeguard, backup, precaution)
That's what's called failsafe.
9. usability [ˌjuzəˈbɪləti] - (noun) - The ease with which people can use an interface or product effectively. - Synonyms: (user-friendliness, accessibility, convenience)
So we should really be looking at usability, human factors
10. minimize [ˈmɪnɪˌmaɪz] - (verb) - To reduce to the smallest degree possible. - Synonyms: (reduce, decrease, lessen)
You want to in fact minimize the attack surface
10 Principles for Secure by Design - Baking Security into Your Systems
The best security is the kind that's baked in, not bolted on after the fact. A retrofit is always going to be more expensive and probably not as elegant. Let's take a look at the development process. If I map out the phases, requirements, design, code, test, deployment, and then chart the cost of fixing a security vulnerability against the phase in which it's found, the curve rises steeply: it is far more expensive to fix something found in deployment than something found earlier in the process. So the big desire here is to shift left in this process. In other words, bake the security in, build it in from the start.
In this video, we're going to focus specifically on the design phase, and I'm going to give you ten principles for security by design. Okay, let's take a look at the first secure by design principle. It's what we refer to as the principle of least privilege. It means: don't give any more access than is absolutely necessary for any person to do their job function. So let's take a look at this. Imagine a company has different classifications for the data it holds. Let's say it has internal use only. This is something we don't really want the whole world to see, but if it got out, it wouldn't be the end of the world. Then we have confidential information, which is genuinely sensitive. And then we have the keys to the kingdom.
Now here is a general user, and we may give them just access to the internal use only and maybe some very specific cases of confidential information where we segment that off. This guy over here, though, he's the administrator, he's got access to the keys to the kingdom, which then in fact allow him to get to any of these other kinds of things as well. So obviously we need to have that very locked down. We don't want to give everyone access to everything. And maybe in some cases we give this person temporary access to something else that falls into this realm, but then we remove it as soon as they don't need it anymore. That's the principle of least privilege. Each one of them has only what they need in order to do their job, and only for as long as they need it, and not any longer than that. That reduces the attack surface.
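As a concrete illustration (not from the video), here's a minimal sketch of least privilege in code. The role names, classification labels, and the four-hour grant window are all hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: each role maps to only the data classifications
# it needs, nothing more.
ROLE_PERMISSIONS = {
    "general_user": {"internal_use_only"},
    "administrator": {"internal_use_only", "confidential", "keys_to_the_kingdom"},
}

# Temporary grants expire automatically, so access goes away as soon as
# it is no longer needed.
temporary_grants = {}  # (user, classification) -> expiry time

def grant_temporary(user, classification, hours=4):
    temporary_grants[(user, classification)] = datetime.now() + timedelta(hours=hours)

def can_access(user, role, classification):
    if classification in ROLE_PERMISSIONS.get(role, set()):
        return True
    expiry = temporary_grants.get((user, classification))
    return expiry is not None and datetime.now() < expiry

grant_temporary("alice", "confidential")
print(can_access("alice", "general_user", "confidential"))        # True, until the grant expires
print(can_access("alice", "general_user", "keys_to_the_kingdom")) # False
```

The key design choice is that temporary access expires on its own; no one has to remember to revoke it.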
The next principle we'll take a look at is defense in depth. Defense in depth says: I never want to rely on any single security mechanism to secure the entire system. I want to have a system of interrelated defenses, so that all of them would have to fail for someone to actually get into the system. So let's take an example. Here's an end user, and they're going to use some endpoint device (a laptop in this case, but it could be a mobile device), coming across a network to hit an application, which pulls data from a database. That's a typical use case, a typical architecture that we might see.
So I'm going to look at each one of these layers and try to secure each one of them. For instance, here I need some ability to do identity and access management. I might need things like multifactor authentication and other things like that to ensure that the user is who they claim to be. We would also, in the identity governance part of this, make sure that we've adhered to the principle of least privilege and only given them what they need in terms of access rights. And then here we're making sure that it's really them. So we design these things into the system in order to be the first layer of defense.
The next layer of defense is the device itself. We do things like use unified endpoint management to make sure that the device is configured as it should be: it doesn't have viruses or malware, it's got a strong password, the information on it has been encrypted, and a lot of things like that. So that also has to be secure. The network needs to be secure as well. We're going to use things like firewalls, network intrusion prevention systems, and other technologies in that space. There are in fact a lot of different technologies that go into this layer, but the idea is that I'm not relying on just one of them; there are multiple defenses, and they're all looking over each other.
Then there's the application itself. What might we do there? Well, I'm going to put in access controls, so the application only allows certain levels of access, again implementing the principle of least privilege. I'm going to scan the application for vulnerabilities, both the source code and the operational system, and there are a lot of other things we can do in that space. And then finally, the data itself: we're going to encrypt it. We're going to use cryptography to make sure that the data, if it does leak, isn't easily read by others. And we're going to put in backup and recovery capabilities, so that if, for instance, malware comes along and blows this thing away, a ransomware case, I can recover quickly.
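As a rough sketch (my own illustration, with made-up check names), defense in depth can be thought of as a chain of independent checks that all have to pass:

```python
# Hypothetical sketch of defense in depth: each layer is an independent
# check, and a request is allowed only if every layer agrees.
def check_identity(request):     # e.g. MFA verified the user is who they claim
    return request.get("mfa_verified", False)

def check_device(request):       # e.g. endpoint management says device is healthy
    return request.get("device_compliant", False)

def check_network(request):      # e.g. firewall / intrusion prevention allowed it
    return request.get("network_allowed", False)

def check_application(request):  # e.g. app-level access control authorized it
    return request.get("app_authorized", False)

LAYERS = [check_identity, check_device, check_network, check_application]

def allow(request):
    # No single mechanism decides; all layers must pass.
    return all(layer(request) for layer in LAYERS)

request = {"mfa_verified": True, "device_compliant": True,
           "network_allowed": True, "app_authorized": False}
print(allow(request))  # False: one failed layer is enough to deny
```

An attacker would have to defeat every layer at once, while one failed check is enough to deny the request.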
So you can see the point here: there's no single mechanism providing security. It's a whole system of interrelated defenses that work together to make the system secure. Our next secure by design principle is the principle of failsafe. Failsafe assumes that any system will eventually fail, because of Murphy's law: anything that can go wrong will go wrong, and it will, especially in these cases. So let's take a look at what a system should do if it does in fact fail.
So let's take a look at a firewall. Let's say we allow this traffic to go through, but that traffic we block. Those are examples of the firewall operating the way we expect a firewall to operate. Now let's look at another case, where the firewall has in fact failed; something has gone wrong with it. What condition is it in when it fails? Does it block everything until we fix it, or does it become an open switch that lets everything through? What we want is for nothing to get through, not even the good traffic. Ideally we'd block only the bad stuff, but in a failure, we block even the good. That's what's called failsafe: the system fails, but it fails into a secure position. That's what we want.
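Here's a minimal fail-closed sketch (hypothetical; the addresses are examples): if the policy check itself breaks, the decision defaults to deny rather than allow.

```python
# Fail-safe sketch: on any internal failure, the firewall fails closed.
BLOCKED_ADDRESSES = {"203.0.113.66"}

def policy_allows(source_ip):
    return source_ip not in BLOCKED_ADDRESSES

def firewall_decision(source_ip):
    try:
        return policy_allows(source_ip)
    except Exception:
        # Fail-safe default: if the check breaks, block all traffic,
        # even traffic that would normally be allowed.
        return False

print(firewall_decision("198.51.100.1"))  # True in normal operation
```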
Now, the next one we're going to take a look at is the KISS principle: keep it simple, stupid. A longer name for this is economy of mechanism, which is itself too long, so let's get rid of that name and go with KISS. If I'm trying to get from here to there, what I don't want is tons of twists and turns, because each one of those twists and turns introduces more complexity, and complexity is the enemy of security. The more complex a system is, the harder it is to ensure that it will actually do what we want it to do. So I want to make the system as simple as it possibly can be, and in keeping it simple, I avoid introducing additional vulnerabilities.
Our next secure by design principle is the principle of separation of duties, sometimes called segregation of duties; it's the same idea. To illustrate: imagine a door with two deadbolt locks, each with a different key. I give one user one of the keys and another user the other key. Now, if I lock both of those locks, nobody is opening this door unless both keys have been used to unlock it. In other words, I've spread the ability to open the door across two people. Why would I do that? Because it makes it harder for one single bad actor to do something bad to the system; it requires collusion to break in. So separation of duties is a good principle to keep in mind.
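In code, the two-key idea might look like requiring two distinct approvers before a sensitive action runs (a hypothetical sketch, not from the video):

```python
# Separation of duties sketch: the sensitive action needs two distinct
# approvers, like a door with two deadbolts and two key holders.
def open_vault(approvals):
    if len(set(approvals)) < 2:
        raise PermissionError("Two distinct approvers are required")
    return "vault opened"

print(open_vault(["alice", "bob"]))  # works: breaking in requires collusion
try:
    open_vault(["alice", "alice"])   # one person alone cannot open it
except PermissionError as err:
    print(err)
```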
Another one is about open design. I want the system to be clear and open. It's the opposite of security by obscurity, which is not something anyone deliberately designs into a system, but a lot of people feel secure if they think no one knows how their thing works. To give you an example borrowed from cryptography, there's something called Kerckhoffs's principle. Kerckhoffs came up with the idea that if I have a cryptosystem, meaning I take plaintext that anybody can read, feed it into my encryption system, and get out a bunch of gibberish, an encrypted message, then the only thing anyone should have to keep secret is the key, the key that's used to encrypt that data.
If a person trying to observe the system can't see into it, they really don't know whether the algorithm is good or not. It could have all sorts of hidden vulnerabilities that no one has been able to find, because the inner workings of the system have been kept secret. A better idea is to follow Kerckhoffs's principle: we take the plaintext, we feed it into the system, we see how it works, and we see how it creates the ciphertext, the encrypted version of all of that. The only secret in the system is the key. The way the system works is visible to anyone, and that's how the best cryptosystems in the world work. It's not security by obscurity; it's security through openness.
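For instance, the open-source Python package cryptography works exactly this way: the algorithm is public, and only the key is secret. A minimal sketch (assuming the package is installed via pip install cryptography):

```python
# Kerckhoffs's principle in practice: the Fernet scheme (AES plus HMAC)
# is fully public; the key is the only secret in the system.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the one thing that must stay secret
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"plain text that anybody can read")
print(ciphertext)                  # gibberish to anyone without the key
print(cipher.decrypt(ciphertext))  # b'plain text that anybody can read'
```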
Our next secure by design principle is that of segmentation. There are going to be times when we specifically want to break the system up into pieces, and those pieces create a certain amount of isolation that gives us additional security. Take, for instance, a townhouse. Let's say the folks in one unit have a fire, and that fire is affecting their unit. What we don't want is for that fire to spread, so in the construction between the units we put something we call a firewall. That's where we get the term firewall from, by the way, on the network side, but this is physical architecture. This firewall is designed to retard and slow the spread of fire from one unit to the next, so the fire department can get there and put the fire out before it burns down the whole building.
That kind of segmentation is also what we do with our networks and what we do with our security. We take components that are of one level of security and put them in one area isolated from other areas that may have different levels of sensitivity. So this idea of segmentation can be designed into a system to make it more secure.
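A minimal sketch of what that might look like in a network policy (the zone names and rules here are made up for illustration):

```python
# Segmentation sketch: hosts belong to zones, and traffic between zones
# is denied unless an explicit rule allows it.
ZONES = {"web-1": "dmz", "app-1": "internal", "db-1": "restricted"}

# Explicit allow rules between zones; everything else stays isolated.
ALLOWED_FLOWS = {("dmz", "internal"), ("internal", "restricted")}

def flow_allowed(src_host, dst_host):
    src, dst = ZONES[src_host], ZONES[dst_host]
    return src == dst or (src, dst) in ALLOWED_FLOWS

print(flow_allowed("web-1", "app-1"))  # True: dmz -> internal is permitted
print(flow_allowed("web-1", "db-1"))   # False: a breach in the DMZ can't reach the database directly
```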
How about another factor that often gets overlooked? We forget that at the end of this whole security chain is a human, and humans are often the weakest link in that chain. So we should really be looking at usability, at human factors. If we make a system too hard to use, people will find ways to get around the security, not because they're evil, but because they need to get their jobs done and they don't understand the reason for all that complexity.
Here's a good example of how a security department can make things difficult, and as a result less secure, in the way it designs the system. Take password requirements, for instance. We say your password has to include uppercase characters. It also has to include some lowercase characters. It has to include some numerics, some numbers along the way. It needs some special characters. And it needs to be of a certain length; let's say it's got to be 32 characters long or longer, something like that.
Let's say we also add that it has to be unique, and unique across each system, so it's not the same password on multiple systems; we're going to make sure it's never replicated. And it needs to be fresh, so we're going to make you change it frequently. All of these things go into what would theoretically be a very strong, secure password. But what do end users do when they look at all of this? They say, well, I can't remember that. If I had multiple passwords at that level of complexity, I couldn't remember them.
So what am I going to do? Well, it turns out the end users came up with a solution for this: the sticky note, the first password storage device that users turn to. They put all of these passwords up on their monitor, somewhere no one will ever suspect to look, right? It won't matter how strong the password rules are if people end up doing this; you've created an insecure system. So make sure when you design a system, you design it to be usable as well as secure.
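To make the point concrete, here's the policy above expressed as a check (a sketch; the thresholds come from the video's example). One look at it explains the sticky notes:

```python
import re

# The over-strict policy described above, as code. Rules this heavy are
# exactly what drives users to write passwords down.
def meets_policy(password, previous_passwords=()):
    return bool(
        len(password) >= 32
        and re.search(r"[A-Z]", password)
        and re.search(r"[a-z]", password)
        and re.search(r"[0-9]", password)
        and re.search(r"[^A-Za-z0-9]", password)
        and password not in previous_passwords  # fresh and never reused
    )

print(meets_policy("Correct-Horse-Battery-Staple-2024!"))  # True, but who remembers it?
```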
If you were aiming at a target, which of these bullseyes would you prefer, the big one or the little one? If you're trying to hit the target, probably the big one. But if you're trying to secure an infrastructure, you want to present the little one to the outside world. You want to minimize the attack surface. You want to make it really hard for somebody to thread the needle and get right to where your sensitive stuff is. And I do that by limiting certain things.
For instance, I'm going to limit the external interfaces that I have. Is there really a need for that thing to be connected to a lot of other systems? Along the same lines, I may want to limit remote access. Is there really a need for people to connect into this system from outside, or would all the access come from a certain place, only certain IP addresses, certain ranges, or areas of the world where we know legitimate users would be? I can put some sort of limitation there and again reduce the attack surface.
How about limiting the number of components in the system? This goes back to keep it simple: fewer components also means a smaller attack surface. There are a lot of other things we can do in this space, but you get the general idea. I want to turn the big bullseye into the little one, so it's really hard for the bad guy.
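As one small example of the remote-access limitation (my own sketch; the address ranges are documentation examples, not real ones):

```python
from ipaddress import ip_address, ip_network

# Attack-surface sketch: remote access is limited to a small allowlist
# of known-good address ranges; everything else is denied.
ALLOWED_RANGES = [ip_network("203.0.113.0/24"),   # e.g. corporate VPN egress
                  ip_network("198.51.100.0/24")]  # e.g. a branch office

def remote_access_allowed(source):
    addr = ip_address(source)
    return any(addr in net for net in ALLOWED_RANGES)

print(remote_access_allowed("203.0.113.7"))  # True: a known-good range
print(remote_access_allowed("192.0.2.50"))   # False: outside the bullseye
```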
And then our 10th principle of secure design is secure by default. This deals with the way the system operates in its default configuration: when it comes straight out of the box, what does it look like? Consider two houses. Here's a house that is insecure by default: the front door is wide open and the windows are open; those are all attack surfaces, and the way it's set up right now, it is insecure. Now look at the other house: the door is closed, we'll say it's locked, and the windows are shut. That's secure by default.
So what are some of the things we would look at here? Insecure by default means that, by default, everything is turned on, all the features of the system. Secure by default says only the required capabilities are turned on. That way I've again limited the attack surface, and I've limited the possibilities for a bad guy to exploit the system, because only the things that are necessary are enabled. You'll see that a lot of these principles relate to each other; this one is again very similar to the principle of least privilege.
Defaults are also very important while we're on the subject. Is a default password configured? If it is, that means it's going to be the same default on all of the instances of that system. If it's a consumer product, everyone who buys it will have the same password unless it gets changed. It's much better to make people supply a password, something that's determined during the configuration and setup of the system; that way you end up with uniqueness. Otherwise, someone can just go look up the default password for that particular device, assume people haven't changed it, and get into all of those systems. So we're going to make the password a must-supply.
How about default ids? Especially if those ids are the administrator id, the super sensitive stuff. If it's an admin id, maybe I want to make it so that when you actually configure and install the system, you have to pick a different name for that. You have to pick something that's going to be unique. So you've made it harder for the bad guy because now he has to guess what is the admin account, what is the admin password? And then all of the capabilities that they might normally use to break in through those have been turned off. So these are the kinds of things that we're looking for. Secure by default.
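A minimal first-run setup sketch along those lines (all names and thresholds here are made up for illustration):

```python
# Secure-by-default sketch: there is no factory password to fall back on,
# and the admin ID cannot keep a well-known default name.
FORBIDDEN_ADMIN_IDS = {"admin", "administrator", "root"}

def first_run_setup(admin_id, admin_password):
    if admin_id.lower() in FORBIDDEN_ADMIN_IDS:
        raise ValueError("Choose a unique administrator ID, not a default one")
    if not admin_password or len(admin_password) < 12:
        raise ValueError("You must supply your own password during setup")
    # In a real system you'd store a salted hash, never the password itself.
    return {"admin_id": admin_id}

print(first_run_setup("ops_owner", "a-locally-chosen-passphrase"))
```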
Getting security right isn't easy, but now you have ten principles for secure by design. That way you can make sure that security is baked in from the start and not bolted on at the end, and you save money while making your system more secure at the same time.
Technology, Security, Education, System Design, Cybersecurity, Principles, IBM Technology