Sinisa Markovic, a Senior Staff Writer for Help Net Security, wrote up a nice summary article on the recently published study from the University of North Carolina about using large language models (LLMs) to quantify security vulnerabilities.
This study is very relevant to one of the AI projects I am working on in the day job. Let's just say that having over 21 hospitals and over 300 ambulatory/outpatient locations means tens of thousands of devices, any of which could carry a vulnerability. Having a future means to 'sift through and prioritize' those vulnerabilities will go a long way toward protecting patients and members.
Here are a few great snippets from the article, with a couple of rough code sketches from me after the quotes:
Better results when text signals are explicit

Two metrics stood out. The first is Attack Vector, which describes how an attacker reaches the vulnerable system. It includes network access, adjacent network access, local access, or physical access. The second is User Interaction, which reflects whether the exploit requires someone to click, open a file, or perform a similar step.

Weak performance where descriptions lack detail

Privileges Required also proved difficult. This metric shows whether an attacker needs an account and at what level. All systems confused "none" and "low" because descriptions rarely specify the required access.

Small gains from meta classifiers

Because each model performed well in different areas, the researchers built meta classifiers that combined predictions from all six. This brought small improvements across all metrics.
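To make the Attack Vector and User Interaction point concrete, here is a rough sketch of the general idea: hand an LLM a vulnerability description and ask it to pick a value for each CVSS v3.1 metric. To be clear, this is my own illustration, not the researchers' pipeline; the model name, prompt wording, and use of the OpenAI Python client are all assumptions on my part.

```python
# Sketch only: ask an LLM to label CVSS v3.1 base metrics from a CVE description.
# The model, prompt, and OpenAI client are illustrative assumptions, not the
# study's actual setup.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """You are scoring a vulnerability description against CVSS v3.1.
Return JSON with these keys and allowed values:
  attack_vector: NETWORK | ADJACENT_NETWORK | LOCAL | PHYSICAL
  user_interaction: NONE | REQUIRED
  privileges_required: NONE | LOW | HIGH

Description:
{description}
"""

def classify_cvss_metrics(description: str) -> dict:
    """Return the model's best guess for three CVSS base metrics."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(description=description)}],
        response_format={"type": "json_object"},  # force parseable JSON output
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    cve_text = ("A buffer overflow in the HTTP request parser allows a remote, "
                "unauthenticated attacker to execute arbitrary code by sending "
                "a crafted request.")
    print(classify_cvss_metrics(cve_text))
```

Notice that the toy description spells out "remote" and "unauthenticated", which is exactly the kind of explicit text signal the article says the models handle well. Most real CVE descriptions say far less about what level of access an attacker needs, which is why Privileges Required trips everything up.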
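And here is an equally rough sketch of what a meta classifier over the six models' outputs could look like. The snippet above only says that predictions were combined, so the stacked logistic regression, the one-hot encoding, and the toy data below are my assumptions, not the paper's method.

```python
# Sketch only: a simple "meta classifier" that learns to combine six base
# models' Attack Vector predictions into one final label. Toy data, not from
# the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Each row: the Attack Vector label predicted by each of six base models for one CVE.
base_predictions = np.array([
    ["NETWORK",  "NETWORK", "LOCAL",    "NETWORK",  "NETWORK",  "ADJACENT_NETWORK"],
    ["LOCAL",    "LOCAL",   "LOCAL",    "PHYSICAL", "LOCAL",    "LOCAL"],
    ["NETWORK",  "NETWORK", "NETWORK",  "NETWORK",  "LOCAL",    "NETWORK"],
    ["PHYSICAL", "LOCAL",   "PHYSICAL", "PHYSICAL", "PHYSICAL", "PHYSICAL"],
])
true_labels = np.array(["NETWORK", "LOCAL", "NETWORK", "PHYSICAL"])

# One-hot encode the six categorical predictions so the meta model can weight
# each base model's vote separately.
encoder = OneHotEncoder(handle_unknown="ignore")
X = encoder.fit_transform(base_predictions)

meta = LogisticRegression(max_iter=1000)
meta.fit(X, true_labels)

# Combine a new set of six base-model predictions into one final call.
new_row = np.array([["NETWORK", "NETWORK", "LOCAL", "NETWORK", "ADJACENT_NETWORK", "NETWORK"]])
print(meta.predict(encoder.transform(new_row))[0])
```

A plain majority vote would also work; the appeal of a learned combiner is that it can figure out which base model to trust for which metric, which fits the article's observation that each model performed well in different areas.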
Now that you are intrigued, check out the rest of the summary article!