GitHub recently announced an update to Copilot’s underlying AI model, aiming to boost its efficiency and security and deliver faster, higher-quality suggestions to users.
The improved model will start rolling out to users this week and should make software developers more efficient by increasing the acceptance rate of code suggestions.
Copilot is GitHub’s cloud-based artificial intelligence tool that helps users of the Visual Studio, Visual Studio Code, JetBrains, and Neovim IDEs (integrated development environments) by autocompleting their code.
Updating the tool’s underlying Codex model is expected to make developers faster by giving them better, more accurate, and more responsive code suggestions. The company says the update brings several critical improvements.
Copilot’s update also introduces an AI-based vulnerability filtering mechanism that prevents the tool from suggesting insecure coding patterns in real time. The revamped model targets common vulnerability classes, including path injection, SQL injection, and hardcoded credentials.
“The new system leverages LLMs to approximate the behavior of static analysis tools—and since GitHub Copilot runs advanced AI models on powerful compute resources, it’s incredibly fast and can even detect vulnerable patterns in incomplete fragments of code,” reads the company’s announcement. “This means insecure coding patterns are quickly blocked and replaced by alternative suggestions.”
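To make the idea concrete, here is a minimal sketch of what a real-time suggestion filter could look like. GitHub’s actual system uses LLMs to approximate static analysis; the regex-based checks, pattern list, and `filter_suggestion` function below are purely illustrative assumptions, showing how a fragment of code could be screened for hardcoded credentials or concatenated SQL before being shown to the developer.

```python
import re

# Toy patterns approximating two of the classes GitHub says its filter targets:
# hardcoded credentials and SQL queries built by string concatenation.
# (The real filter uses LLMs, not regexes; this is only an illustration.)
INSECURE_PATTERNS = [
    (re.compile(r"(password|api_key|secret)\s*=\s*['\"][^'\"]+['\"]", re.I),
     "hardcoded credential"),
    (re.compile(r"execute\(\s*['\"].*\bSELECT\b.*['\"]\s*\+", re.I),
     "SQL query built by string concatenation"),
]

def filter_suggestion(fragment: str):
    """Return (blocked, reasons) for a code-suggestion fragment.

    Operates on incomplete fragments, mirroring the announcement's claim
    that vulnerable patterns can be caught before the code is finished.
    """
    reasons = [label for rx, label in INSECURE_PATTERNS if rx.search(fragment)]
    return (bool(reasons), reasons)

blocked, why = filter_suggestion('db_password = "hunter2"  # incomplete line')
print(blocked, why)  # → True ['hardcoded credential']
```

In a production setting, a flagged suggestion would be dropped and replaced with an alternative rather than merely reported, but the screening step itself would sit in the same place: between the model’s output and the editor.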
Last but not least, GitHub acknowledges the importance of also employing dedicated vulnerability-detection tools on entire repositories when apps are built or released, rather than relying solely on checks performed during the coding phase.
Vlad's love for technology and writing created rich soil for his interest in cybersecurity to sprout into a full-on passion. Before becoming a Security Analyst, he covered tech and security topics.