Microsoft claims a machine learning model it built for software developers can distinguish between security and non-security bugs 99% of the time.
Microsoft plans to open-source the methodology behind a machine learning algorithm that it claims can distinguish between security bugs and non-security bugs with 99% accuracy.
The company developed a machine learning model to help software developers more easily spot security issues and identify which ones need to be prioritized.
By pairing the system with human security experts, Microsoft said it was able to develop an algorithm that not only correctly identified security bugs with nearly 100% accuracy, but also correctly flagged critical, high-priority bugs 97% of the time.
The company plans to open-source its methodology on GitHub “in the coming months”.
According to Microsoft, its team of 47,000 developers generates some 30,000 bugs every month across its AzureDevOps and GitHub silos, causing headaches for security teams whose job it is to ensure critical security vulnerabilities aren’t missed.
While tools that automatically flag and triage bugs are available, they sometimes tag false positives or classify bugs as low-impact issues when they are in fact more severe.
To remedy this, Microsoft set to work building a machine learning model capable of both classifying bugs as security or non-security issues, as well as identifying critical and non-critical bugs “with a level of accuracy that is as close as possible to that of a security expert.”
This first involved feeding the model training data that had been approved by security experts, based on statistical sampling of security and non-security bugs. Once the production model had been approved, Microsoft set about programming a two-step learning model that would enable the algorithm to learn how to distinguish between security bugs and non-security bugs, and then assign labels to bugs indicating whether they were low-impact, important or critical.
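The two-step approach described above can be sketched in outline: a first stage decides whether a bug is a security issue at all, and a second stage assigns a severity label only to the bugs that pass the first stage. The snippet below is purely illustrative; the keyword rules stand in for the trained classifiers, and none of the names reflect Microsoft's actual implementation.

```python
# Hedged sketch of a two-step bug-triage pipeline: step one separates
# security from non-security bugs, step two labels severity for security
# bugs only. The keyword sets are placeholders for trained models.

SECURITY_TERMS = {"overflow", "injection", "xss", "csrf", "privilege"}
CRITICAL_TERMS = {"remote", "unauthenticated", "privilege"}

def classify_bug(title: str) -> dict:
    words = set(title.lower().split())
    # Step 1: security vs non-security
    if not words & SECURITY_TERMS:
        return {"security": False, "severity": None}
    # Step 2: severity label, applied only to security bugs
    severity = "critical" if words & CRITICAL_TERMS else "low-impact"
    return {"security": True, "severity": severity}
```

In a real system, each step would be a model trained on the expert-approved data described above, but the control flow — classify first, then label severity — is the same.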
Crucially, security experts were involved with the production model through every stage of the journey, reviewing and approving data to confirm labels were correct; selecting, training and evaluating modelling techniques; and manually reviewing random samples of bugs to assess the algorithm’s accuracy.
Scott Christiansen, Senior Security Program Manager at Microsoft, and Mayana Pereira, Microsoft Data and Applied Scientist, explained that the model is automatically retrained with new data so that it keeps pace with Microsoft’s internal production cycle.
“The data is still approved by a security expert before the model is retrained, and we continuously monitor the number of bugs generated in production,” they said.
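The retraining gate they describe — new data must pass expert approval before retraining, while production bug volume is monitored continuously — can be sketched roughly as below. All names here are hypothetical illustrations, not Microsoft's actual pipeline.

```python
# Illustrative retraining gate: newly labeled bugs reach the training
# set only after an expert approves them, and the total number of bugs
# seen in production is tracked for monitoring.

from dataclasses import dataclass, field

@dataclass
class RetrainingGate:
    approved: list = field(default_factory=list)  # expert-approved training data
    seen: int = 0                                 # running count of production bugs

    def submit(self, bug: dict, expert_approves) -> None:
        """Record a production bug; keep it for retraining only if approved."""
        self.seen += 1
        if expert_approves(bug):
            self.approved.append(bug)

    def ready_to_retrain(self, batch_size: int = 2) -> bool:
        """Retrain once enough approved examples have accumulated."""
        return len(self.approved) >= batch_size
```

The key design point mirrored here is that approval happens before data enters the training set, so the model never retrains on unreviewed labels.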
“By applying machine learning to our data, we accurately classify which work items are security bugs 99 percent of the time. The model is also 97 percent accurate at labeling critical and non-critical security bugs.
“This level of accuracy gives us confidence that we are catching more security vulnerabilities before they are exploited.”