How OpenAI Uses AI for Security


Wondering how OpenAI is using their own AI to boost cybersecurity in-house? Matthew Knight,
OpenAI’s Head of Security, spilled the beans at the RSA Conference, showcasing how the company leverages AI to automate tasks, streamline incident reporting, manage open tickets, and enhance bug bounty reporting.

 


How OpenAI Uses AI for Security

OpenAI has truly embraced its own tech to strengthen security processes, focusing on four main areas:

1. Inbound Message Filtering

“We use AI extensively in security at OpenAI,” Matthew shared. “Our AI helps manage inbound messages and detection logistics. When employees have questions, AI ensures they reach the right people without exposing our organizational chart.”

Using GPT-4 for classification, messages are directed appropriately even if they come from
employees unfamiliar with the security team. And just in case, an engineer still reviews the
message, ensuring nothing slips through the cracks. This approach is a win-win.
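Matthew didn’t share implementation details, but the routing step can be sketched with a single classification call. Everything below, including the team names and the route_security_message helper, is a hypothetical illustration of classification-based routing, not OpenAI’s internal tooling:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical routing queues; OpenAI's real internal team structure is not public.
TEAMS = ["detection-and-response", "vulnerability-management", "physical-security", "other"]

def route_security_message(message: str) -> str:
    """Classify an inbound employee message into a destination queue.

    A human engineer still reviews the result, as described in the talk.
    """
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You triage inbound security questions. "
                    f"Reply with exactly one of: {', '.join(TEAMS)}."
                ),
            },
            {"role": "user", "content": message},
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip()
    return label if label in TEAMS else "other"  # fall back to a human-reviewed queue

print(route_security_message("I got an MFA prompt I didn't trigger. Who should I tell?"))
```

The key design point from the talk survives even in this toy version: the model only picks a destination, and a person still signs off before anything is acted on.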

2. Facilitating and Summarizing Incident Reporting

Incident report capture is a big pain point for security pros in 2024. Matthew explained how
LLMs make this easier: “You can take a chat between two security engineers, feed it into an LLM, and get a first draft of an incident report. Engineers then refine it for accuracy, saving time and effort on drafting.”
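As a rough sketch of that workflow, and not OpenAI’s actual tooling, a chat transcript can be turned into a first draft with one summarization call. The report sections used here are an assumption for illustration:

```python
from openai import OpenAI

client = OpenAI()

def draft_incident_report(chat_transcript: str) -> str:
    """Produce a first-draft incident report from a raw chat between engineers.

    The draft is a starting point only; engineers refine it for accuracy.
    """
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the following incident-response chat into a draft report "
                    "with sections: Summary, Timeline, Impact, Remediation, Open Questions. "
                    "Mark anything you are unsure about as TODO."
                ),
            },
            {"role": "user", "content": chat_transcript},
        ],
    )
    return response.choices[0].message.content
```

Asking the model to mark uncertain items as TODO keeps the human-review step honest: the engineer sees exactly where the draft is guessing.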

3. Process Automation

OpenAI’s AI identifies unsafe configurations and poor settings within the business. A chatbot then contacts the relevant employee to confirm if these settings were intentional.
The initial conversation happens with the chatbot, so when the engineer looks at the ticket, the context is already there. They don’t have to track down people and ask questions. The human touch remains, but without the busywork.
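The talk didn’t describe how that chatbot is wired up, so here is only a minimal sketch of the pattern: a flagged setting, an AI-drafted outreach message, and the owner’s reply attached to the ticket before an engineer ever opens it. The Ticket fields below are invented for the example:

```python
from dataclasses import dataclass, field
from openai import OpenAI

client = OpenAI()

@dataclass
class Ticket:
    """Hypothetical ticket record; a real ticketing system's fields will differ."""
    finding: str                      # e.g. "Storage bucket 'team-share' is publicly readable"
    owner: str                        # employee associated with the flagged resource
    context: list[str] = field(default_factory=list)

def ask_owner_about_finding(ticket: Ticket) -> str:
    """Draft the chatbot's first message asking whether the setting is intentional."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a friendly internal security assistant. In two sentences, ask the "
                    "resource owner whether the flagged configuration is intentional."
                ),
            },
            {"role": "user", "content": ticket.finding},
        ],
    )
    return response.choices[0].message.content

def record_reply(ticket: Ticket, question: str, reply: str) -> None:
    """Attach the chatbot exchange to the ticket so the engineer sees the context up front."""
    ticket.context.append(f"Chatbot: {question}")
    ticket.context.append(f"{ticket.owner}: {reply}")
```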

4. Bug Bounty Challenges

LLMs help automate the bug bounty review process. “You get a lot of spam submissions,” Matthew said. “Our models help filter these by classifying them against our policies.” While the AI doesn’t perform cybersecurity analysis, it flags potential issues for security engineers to review, prioritizing important tasks and saving time.
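Again, the specifics weren’t shared, so the snippet below is just a sketch of policy-based triage; the policy excerpt, labels, and domain are made up. Note that the model only labels the submission, exactly as described: it does no security analysis of its own.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical policy excerpt; a real program would load its published scope.
BOUNTY_POLICY = """
In scope: vulnerabilities in api.example.com and the example.com web app.
Out of scope: model hallucinations, social engineering, issues requiring physical access.
"""

def triage_submission(report_text: str) -> str:
    """Label a bounty submission so engineers can prioritize their review queue."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the bug bounty submission against this policy. "
                    "Reply with one word: in_scope, out_of_scope, or spam.\n"
                    + BOUNTY_POLICY
                ),
            },
            {"role": "user", "content": report_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```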

 


Considering AI Flaws in Cybersecurity

“Large language models have flaws like hallucinations, context length limits, overreliance, and susceptibility to prompt engineering,” Matthew noted.

Hallucinations occur when models fabricate information. For example, they might generate
nonexistent CVE lists. Context length limits, due to tokenization, can also be a challenge,
especially with data types like PCAP that don’t tokenize well. Tools like LangChain can
help mitigate these issues by chunking text into pieces the model can handle.
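A minimal chunking sketch, assuming LangChain’s text-splitter API and arbitrary chunk sizes, might look like this; a real pipeline would then summarize or embed each chunk before handing it to the model:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=2000,    # characters per chunk, chosen to stay well under the context limit
    chunk_overlap=200,  # overlap so findings spanning a boundary aren't lost
)

# Hypothetical input file: a text rendering of a large packet capture.
with open("capture_summary.txt") as f:
    long_artifact = f.read()

chunks = splitter.split_text(long_artifact)
print(f"{len(chunks)} chunks ready to be summarized one at a time")
```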

Prompt injection attacks pose another risk, but OpenAI uses Reinforcement Learning from
Human Feedback (RLHF) to improve model alignment with safety policies. For more on RLHF, check out Axel Sirota’s article “Ethical AI: How to make an AI with ethical principles using RLHF.”

“We’ve integrated security into our models using RLHF, aligning them with our safety policies,” Matthew explained. Despite these efforts, vulnerabilities remain, so OpenAI collaborates with law enforcement agencies and invests in the security research community to stay ahead of threats.

 

Final Thoughts

OpenAI is not just talking the talk but walking the walk when it comes to using AI for
cybersecurity. By leveraging their own AI tech, they’re setting a high bar for security processes, making their operations more efficient and secure. And while there are challenges, their proactive approach and collaboration with the broader security community ensure they stay ahead of the curve.
