It’s Time to Put ChatGPT to Work to Help Improve Cybersecurity

Threat intelligence Feb 23, 2023

ChatGPT has taken the tech industry by storm and raised questions on all sides: is it an amazing innovation, or a frightening tool that will ruin our educational system and replace many human roles? Security experts have also demonstrated how ChatGPT can write convincing phishing emails or malicious code, further lowering the barrier to entry for hackers.

If you’ve played with ChatGPT, the results can seem both startling and uncanny: fully formed, logical, natural-language responses to complex questions. But as Google learned painfully this week with its massive demo fail (which knocked roughly $100B off the company’s market value), just because the results are slick doesn’t mean the answers are always correct.

To get a perspective on the potential security impacts, I decided to go to the source, ChatGPT. I typed the following query into the system and got a remarkably complete and well-balanced answer – in about 10 seconds:

Question: What is the impact of ChatGPT on cybersecurity in 2 paragraphs

ChatGPT answer:

The impact of ChatGPT on cybersecurity can be seen in a few different ways. On one hand, the technology behind ChatGPT can be used to create new and innovative cybersecurity tools. For example, natural language processing (NLP) techniques like those used in ChatGPT can be applied to detect and prevent cyberattacks that use malicious language or communications. Additionally, ChatGPT's ability to understand and generate human-like text can be used to improve the performance of existing cybersecurity tools, such as spam filters and intrusion detection systems.

On the other hand, ChatGPT itself can also be a potential security risk. As a highly sophisticated AI model, it has the potential to be used for malicious purposes, such as generating fake news, phishing scams, or impersonating individuals online. Moreover, the large amount of data that ChatGPT has been trained on, including sensitive information, raises concerns about data privacy and security. To mitigate these risks, it's important for organizations and individuals to use ChatGPT responsibly and follow best practices for data protection.

Okay – that’s pretty impressive and makes ChatGPT seem eerily self-aware. I like to think that I could have written something more thoughtful and nuanced, but I certainly couldn’t have drafted that in 10 seconds. Plus, it’s basically correct.
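To make ChatGPT’s first point concrete, here is a toy sketch of scoring a message for phishing-style language with simple keyword heuristics. The cue patterns and weights are invented purely for illustration; real NLP-based detectors rely on trained models, not hand-written rules:

```python
import re

# Invented cue patterns and weights -- a toy stand-in for a trained model.
SUSPICIOUS_CUES = {
    r"\burgent(ly)?\b": 2,
    r"\bverify your (account|password)\b": 3,
    r"\bclick (the )?link\b": 2,
    r"\bwire transfer\b": 3,
    r"\bpassword\b": 1,
}

def phishing_score(message: str) -> int:
    """Sum the weights of every suspicious cue found in the message."""
    text = message.lower()
    return sum(w for pat, w in SUSPICIOUS_CUES.items() if re.search(pat, text))

def is_suspicious(message: str, threshold: int = 4) -> bool:
    """Flag the message once its cumulative cue score crosses a threshold."""
    return phishing_score(message) >= threshold
```

The point of the sketch is the shape of the idea: language signals accumulate into a score, and a tunable threshold turns that score into a decision.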

So, how can we harness this technology to improve cybersecurity, and how soon can we start using it? The answer to the second question is that it’s already happening. Cyware, for example, leverages AI and ML engines to automate threat intel ingestion and processing and, when combined with tunable confidence scores, to drive intelligence-led security actions.
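As a rough illustration of confidence-gated actioning, consider routing each indicator by score. The `Indicator` type, the 0–100 scale, and the thresholds below are hypothetical, invented for this sketch rather than taken from any product’s internals:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A hypothetical enriched threat-intel indicator."""
    value: str
    confidence: int  # 0-100, e.g. from enrichment and scoring

def route_indicator(ind: Indicator,
                    auto_block_at: int = 90,
                    review_at: int = 50) -> str:
    """Pick an action for an indicator based on its confidence score."""
    if ind.confidence >= auto_block_at:
        return "auto-block"          # high confidence: act without a human
    if ind.confidence >= review_at:
        return "queue-for-analyst"   # medium confidence: a human decides
    return "log-only"                # low confidence: record, take no action
```

Tuning `auto_block_at` and `review_at` is exactly the kind of knob that lets a team decide how much automation it is comfortable with.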

As machine intelligence evolves, with ChatGPT as one striking demonstration, advanced ML shows promise for crafting complex security workflows and defining playbooks that drive instantaneous, intelligent action.

To dig deeper, I went back to my new favorite “expert” and asked ChatGPT to drill down into this question:

Question: How can GPT help create security playbooks?

ChatGPT answer (in less than 20 seconds – the neat formatting came right from the app):

GPT, or Generative Pretrained Transformer, can help create security playbooks by automating various tasks such as:

  1. Document Generation: GPT can be trained on security-related content to generate reports, documentation, and playbooks in a consistent, structured format. This can significantly reduce the time required to create and update security playbooks.

  2. Threat Intelligence: GPT can be used to gather and analyze large amounts of data to identify security threats and recommend countermeasures. This can help security teams prioritize their response to threats and mitigate potential risks.

  3. Incident Response: GPT can be integrated with incident response systems to provide real-time recommendations for remediation steps. This can help security teams respond quickly and effectively to security incidents.

  4. Automated Workflows: GPT can be used to automate repetitive and time-consuming tasks such as triage, prioritization, and escalation, freeing up security teams to focus on more strategic tasks.

  5. Knowledge Management: GPT can be used to build a centralized repository of security knowledge, helping security teams to stay informed about the latest threats and best practices.

Overall, GPT can play a significant role in enhancing the efficiency and effectiveness of security playbooks, helping security teams to keep pace with the rapidly evolving threat landscape.

Once again – pretty impressive, and basically correct. However, it’s easy to mistake massively powerful text prediction for real thinking, and OpenAI is quick to point out that ChatGPT, just like the humans who generated the content it consumes, can often make mistakes. As we saw, Google learned this lesson painfully.
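The triage, prioritization, and escalation idea in item 4 above can be sketched in a few lines. The severity scale, asset-criticality values, and escalation rule here are assumptions made up for this example, not a real playbook:

```python
def triage(alerts):
    """Order alerts by a simple priority score: severity * asset criticality."""
    return sorted(alerts,
                  key=lambda a: a["severity"] * a["criticality"],
                  reverse=True)

def escalate(alert, severity_floor=4, criticality_floor=3):
    """Page a human for severe alerts on critical assets; otherwise file a ticket."""
    if alert["severity"] >= severity_floor and alert["criticality"] >= criticality_floor:
        return "page-on-call"
    return "ticket"
```

Note that even this trivial workflow keeps a human in the loop for the highest-stakes decisions, which is the balance the rest of this post argues for.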

Relying on machine-driven automation without human oversight is fraught with peril. But combining human innovation and intuition with the remarkable ability of AI systems to automate repetitive and high-confidence tasks can be transformative. Like any other tool, ChatGPT is neither good nor bad; used correctly, it is powerful. Now it’s time to get to work harnessing this power to produce the next generation of cybersecurity solutions.

For a more concrete example, see our more technical blog, detailing how ML models like ChatGPT are already being used to reduce noise, improve accuracy, and automate response for complex security workflows.
