The video explores the concept of hackbots, AI systems capable of autonomously identifying vulnerabilities in websites and applications, thereby potentially revolutionizing cybersecurity. These systems can operate without human intervention, significantly influencing the methods of ethical hacking, where vulnerabilities are reported for remediation rather than exploitation. The speaker shares their own experience in ethical hacking and bug bounty programs, highlighting the industry dynamics and financial incentives that make ethical hacking both a skill-enhancing and lucrative pursuit.
The emergence of hackbots signifies a new era in the cybersecurity landscape, especially as they begin to outperform human hackers. With advancements in large language models that support AI-driven cybersecurity measures, hackbots are capable of conducting comprehensive security assessments, even in organizations with long-standing security measures. These bots can not only discover vulnerabilities but also provide reports and suggest fixes, evidencing their potential to drastically enhance digital security.
Key Vocabularies and Common Phrases:
1. vulnerability [ˌvʌlnərəˈbɪləti] - (noun) - A weakness in a system that can be exploited by threats to gain unauthorized access to an asset. - Synonyms: (weakness, susceptibility, flaw)
It's called bug bounty because vulnerabilities are often called bugs, and they pay you a bounty per vulnerability instead of paying per hour or per job
2. autonomously [ɔːˈtɒnəməsli] - (adverb) - Doing something independently without human intervention. - Synonyms: (independently, automatically, self-sufficiently)
There are AI systems today that can autonomously find vulnerabilities in websites or applications without any human help or guidance.
3. ethical [ˈɛθɪkəl] - (adjective) - Relating to moral principles or the branch of knowledge dealing with these. - Synonyms: (moral, principled, virtuous)
But for the context of this talk, I'm talking about finding vulnerabilities in websites and applications, and for ethical hacking specifically, that's done in order to secure those systems
4. unethical [ʌnˈɛθɪkəl] - (adjective) - Not morally correct or in accordance with accepted standards of social or professional behavior. - Synonyms: (immoral, unscrupulous, dishonest)
Unethical hacking, on the other hand, would be if it's done in order to steal money or people's information.
5. bug bounty [bʌɡ ˈbaʊnti] - (noun) - A program offered by many websites and software developers by which individuals can receive recognition and compensation for reporting bugs, especially those pertaining to security exploits and vulnerabilities. - Synonyms: (defect reward, bug finder's fee, flaw bonus)
The cool thing is that those companies will actually pay you for those findings through what's called their bug bounty programs.
6. competency [ˈkɒmpɪtənsi] - (noun) - The ability to do something successfully or efficiently. - Synonyms: (capability, proficiency, skill)
And they've shown a really strong competency, not only at things like code generation or content creation, but even generating TED talks.
7. oversight [ˈoʊvərˌsaɪt] - (noun) - An unintentional failure to notice or do something. - Synonyms: (mistake, error, lapse)
So what oversights did the skeptics, the experts who don't believe AI can hack, make?
8. significant [sɪgˈnɪfɪkənt] - (adjective) - Sufficiently great or important to be worthy of attention; noteworthy. - Synonyms: (notable, important, consequential)
If there were ever an AI system that could find vulnerabilities at those companies, it would be a significant validation for the industry.
9. augment [ɔːɡˈmɛnt] - (verb) - To make (something) greater by adding to it; increase. - Synonyms: (increase, enhance, expand)
Even though they're still in the early phases right now, I think we'll see them scale up due to those financial incentives from bug bounty and the fact that they can both offset and augment human hackers.
10. sophisticated [səˈfɪstɪkeɪtɪd] - (adjective) - Developed to a high degree of complexity. - Synonyms: (complex, advanced, intricate)
As these systems become more and more sophisticated, if they were to fall into the wrong hands, they could be used for malicious purposes.
The Rise of AI Hackbots - Joseph Thacker - TEDxUKY
What if you could push a single button and find vulnerabilities in any website or take over anyone's bank account? It might sound insane, but it's not far from reality. There are AI systems today that can autonomously find vulnerabilities in websites or applications without any human help or guidance. Let's dive into the world of hackbots and how they could revolutionize the future of cybersecurity.
When I talk about hacking, what do most of you imagine? Probably a guy sitting in a basement with his hood up, typing on a computer with, like, six different screens. And that's mostly accurate. Me and my hacking buddies love wearing hoodies, and we also love having lots of screens. But for the context of this talk, I'm talking about finding vulnerabilities in websites and applications, and for ethical hacking specifically, that's done in order to secure those systems. So when the vulnerabilities are found, they're reported to the company so they can go in and fix them. Unethical hacking, on the other hand, would be if it's done in order to steal money or people's information.
Ethical hacking is a really crucial part of the current cybersecurity landscape. I've been ethical hacking for over five years, and I've submitted over a thousand vulnerabilities. I've had database access to Yahoo, the ability to take over Capital One accounts, remote code execution on Alibaba servers, as well as vulnerabilities on Amazon, Apple, Google, and other big companies.
The cool thing is that those companies will actually pay you for those findings through what's called their bug bounty programs. It's called bug bounty because vulnerabilities are often called bugs, and they pay you a bounty per vulnerability instead of paying per hour or per job. Those bounties can range anywhere from a few hundred dollars all the way up to $50,000 or $100,000 per vulnerability, depending on the severity of the bug, the company you're reporting it to, and whether it's done as part of a live event or a promotion.
Personally, I love bug bounty because I get to hone my hacking skills while having fun in what feels like a video game. And on top of that, it's helping secure things. There are some things in life that feel like win-win, but bug bounty feels more like win-win-win-win-win. And for the companies: a traditional security assessment would be a few hackers for a few weeks, but with bug bounty, it's hundreds of hackers looking at their scope around the clock. As you can imagine, those large financial incentives mean that top, talented hackers are always going to be doing bug bounty. They might set up scanners, they might be manually hunting. Hackers report bugs, developers go in and fix them, and that's a cycle that continues. So the security gets really, really strong at those companies.
And I say all that not just because I'm passionate about bug bounty, clearly, but because it sets a very high bar: being able to find vulnerabilities at companies with long-standing bug bounty programs takes top-tier bug bounty hunters. And in the context of this talk, that's important, because if there were ever an AI system that could find vulnerabilities at those companies, it would be a significant validation for the industry. And that's what recently occurred. But before we talk about it, let's go back and see how it happened.
Over the last two years, large language models like those that power ChatGPT have taken off, and they've even become synonymous with AI in general, even though AI is a much broader field. Those large language models are basically models that have been trained on a huge amount of text, like cramming the whole Internet into something much smaller that you can talk to, kind of like you would talk to another human, in order to tease out expertise and information. And they've shown a really strong competency, not only at things like code generation or content creation, but even generating TED talks. Just kidding.
But as far as the cybersecurity industry goes, the biggest way they've been making waves is through what are called hackbots. Hackbots are AI systems that can autonomously find vulnerabilities in websites or applications. Hackbots use large language models, or at least some of them do, to tap into that security and hacking expertise and help make decisions about how they work. Even though some experts are pretty skeptical about whether AI can actually hack, I made a claim back in November of last year that AI hacking agents will eventually outperform even human hackers.
And that claim was validated two months ago, when an official research paper dropped showing that LLM agents can autonomously hack websites. So what oversights did the skeptics, the experts who don't believe AI can hack, make? I think the biggest oversight is that you can actually give AI systems control of specific tools or utilities. The best example: if I asked you whether AI could move through the world, you would maybe say no, right? It's a file or a computer program. But what if you hooked a large language model up to a robot, such that the LLM could give commands to the robot to move through the world? Then, if you assigned it the task of walking across the room, it might reply with something like "move forward 10 feet," and now all of a sudden AI is moving through the world. It's very similar with hacking. You give these AI systems the ability to call hacking tools in the same way a human would, and run them in order to find vulnerabilities, and if the system finds them, it's able to record them and report them later.
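The tool-calling idea described above can be sketched in a few lines. This is a hypothetical illustration, not any real hackbot's code: the `model_decide` stub stands in for an LLM choosing the next action, and `check_headers` stands in for a real reconnaissance tool.

```python
# Minimal sketch of an LLM tool-calling loop, as described above.
# Everything here is hypothetical: a real hackbot would replace
# model_decide() with calls to an actual LLM and check_headers()
# with real security tooling.

def check_headers(target):
    # Stand-in "tool": a real agent would make an HTTP request here
    # and return the raw response headers for the model to analyze.
    return f"headers for {target}: Server: demo/1.0"

def model_decide(history):
    # Stub policy standing in for the LLM. Given what the agent has
    # seen so far, it picks the next action: call a tool or finish.
    if not history:
        return {"action": "call_tool", "tool": check_headers}
    return {"action": "finish",
            "report": f"Collected {len(history)} tool result(s)."}

def agent_loop(target, max_steps=5):
    # The loop the talk describes: the model picks a tool, the tool
    # runs, the output is fed back, and the cycle repeats until the
    # model decides it has enough to write a report.
    history = []
    for _ in range(max_steps):
        decision = model_decide(history)
        if decision["action"] == "finish":
            return decision["report"]
        output = decision["tool"](target)
        history.append(output)
    return "Step limit reached."

print(agent_loop("example.com"))
```

The key design point is the feedback loop: the model never touches the target directly; it only requests tool runs and reasons over their output, which is exactly the "give the AI commands to issue" idea from the robot analogy.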
As for the current landscape of hackbots, I can only speak to the public ones we know about; it's very likely that some are also being built by three-letter agencies or other governments. Of the ones that are public, I've been able to peek behind the curtain and see how they work, and I've even connected with a lot of the team members building them. It's really fascinating, because they're coming at it from different angles. Some teams have set out to make their AI systems find vulnerabilities in the code that humans wrote, whereas others work more like manual hackers, where the AI systems try to hack things the same way I would when I'm bug bounty hunting.
Even though they're still in the early phases right now, I think we'll see them scale up due to those financial incentives from bug bounty and the fact that they can both offset and augment human hackers. And that's exactly what I was talking about earlier. The best hackbot, the one that cleared that bar, found vulnerabilities on Apple and PayPal. Both are extremely hardened, very secure, with long-standing bug bounty programs. But these hackbots were able to find vulnerabilities completely autonomously, without any help or intervention from the hackbot maker. Then they were able to write the report explaining the vulnerability, why it's important, and how to potentially fix it. And the hackbot maker was paid thousands of dollars for those vulnerabilities by Apple and PayPal. That marks a significant breakthrough.
I think the hackbots are really exciting and really cool because they could potentially secure the entire Internet. And I'll talk about two ways they could do that. One is through the way that code gets written internally at these companies. Any website or any application that you use today has developers who are writing that code, and that code is generally beta code before it goes online and is exposed to everyone. Hackbots can be put in the middle: they can test that code and find the vulnerabilities before it goes live on the Internet, where attackers could find and exploit them.
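The "hackbot in the middle" idea above can be sketched as a pre-deploy gate. This is a hypothetical illustration: `scan_code` stands in for a real AI-driven scan (here it just flags one obviously risky pattern), and `gate_deploy` blocks the release whenever findings come back.

```python
# Hypothetical sketch of a hackbot sitting "in the middle" of the
# release process, as described above. scan_code() is a placeholder;
# a real system would hand the code to an AI agent for analysis.

def scan_code(source):
    # Placeholder scan: flag one obviously risky pattern. A real
    # hackbot would reason about the code rather than pattern-match.
    findings = []
    if "eval(" in source:
        findings.append("possible code injection via eval()")
    return findings

def gate_deploy(source):
    # Block the deploy whenever the scan reports findings, so the
    # vulnerability gets fixed before the code is exposed to attackers.
    findings = scan_code(source)
    return {"deploy": not findings, "findings": findings}

print(gate_deploy("result = eval(user_input)"))  # blocked
print(gate_deploy("result = int(user_input)"))   # allowed
```

The point of the sketch is the placement, not the detection logic: the scan runs after developers write the code but before it ships, which is the window the talk describes.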
But not just at single companies. What if we scaled this up and ran hundreds or thousands of hackbots against every website on the Internet? They would be able to find basically every vulnerability within their current skill set. Then we would be drowning in those reports, and we could go in and fix all of them. We would only be limited by the ability of developers and programmers to fix those vulnerabilities. And that's something AI is going to really help with too. It's already shown itself to be a competent coder, and lots of other companies are already building products that go in and fix code. So we'll be able to fix that code as well.
However, it's not all sunshine and rainbows. There are some limitations and ethical considerations. The biggest limitation is cost. These large language models cost a lot to run, especially when they're used for long stretches of time, hours and hours or days and days, to find these vulnerabilities, and the hackbot creators have to pay that cost. For example, the Apple and PayPal bugs I mentioned a minute ago likely cost that hackbot creator hundreds, if not thousands, of dollars just to run it long enough to find those vulnerabilities. And obviously, right now, that eats into some of the revenue you could get from the bug bounty.
And maybe more importantly, as many of you have heard, with great power comes great responsibility. As these systems become more and more sophisticated, if they were to fall into the wrong hands, they could be used for malicious purposes, like attacking critical infrastructure or even scaling up to cause large-scale harm. But despite all that, I'm still super hopeful about the future of the hackbot industry, and I think that through collaboration between governments, researchers, and companies, we'll actually be able to harness this power for good.
Let me ask you this. What would the future look like if people, companies, or governments were able to push a single button and begin to find vulnerabilities in anything? And how long do we have before that actually occurs? But most importantly, will we harness those hackbots to secure the digital space, or will they fall into the hands of wrongdoers and be used for harm? Thank you.
CYBERSECURITY, ETHICAL HACKING, AI SYSTEMS, TECHNOLOGY, INNOVATION, SECURITY ASSESSMENT, TEDX TALKS