A recent study conducted by Stanford University researchers Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh titled “Do Users Write More Insecure Code with AI Assistants?” has shed light on the potential security risks associated with the use of AI code assistants.
AI code assistants, such as GitHub Copilot, have emerged as programming tools with the potential to lower the barrier to entry for programming and increase developer productivity. These tools are built on models, such as OpenAI’s Codex and Facebook’s InCoder, that are pre-trained on large datasets of publicly available code.
The study found that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they had written secure code than those without access.
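To make the security stakes concrete, here is an illustrative example of the kind of vulnerability class such studies examine (this snippet is a hypothetical illustration, not code from the paper): building a SQL query by interpolating user input, a pattern an assistant can plausibly suggest, compared with a parameterized query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

def find_user_insecure(name):
    # Vulnerable: input is interpolated directly into the SQL string,
    # so a crafted value can rewrite the query (SQL injection).
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(name):
    # Safe: the driver binds the parameter, so input is treated as data only.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_insecure(payload))  # every row leaks
print(find_user_secure(payload))    # no rows match
```

Both functions look equally plausible at a glance, which is exactly why an over-trusted suggestion of the first form is easy to accept.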
Interestingly, the study also found that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g., re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities.
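For context on the temperature point: in these models, temperature rescales the logits before sampling, so lowering it concentrates probability on the model’s top-ranked tokens while raising it flattens the distribution. A minimal sketch of that rescaling (illustrative only, not the study’s code):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before normalizing: a low temperature
    # sharpens the distribution, a high temperature flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, 0.2)  # near-greedy sampling
flat = softmax_with_temperature(logits, 2.0)   # closer to uniform
print(sharp, flat)
```

Participants who lowered the temperature were effectively asking the assistant for its most confident completions rather than more exploratory ones.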
The findings of this study highlight the potential security risks associated with the use of AI code assistants. They underscore the need for developers to be cautious when using these tools, and for the creators of these tools to account for these risks when designing their products. The researchers hope that their findings will inform the design of future AI-based code assistants.
You can view a PDF of the study at Do Users Write More Insecure Code with AI Assistants? (openreview.net).