AI Code Assistants Might Be a Security Risk

A study shows using AI code-writing assistants can lead to more vulnerable code.

A recent study conducted by Stanford University researchers Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh titled “Do Users Write More Insecure Code with AI Assistants?” has shed light on the potential security risks associated with the use of AI code assistants. 


AI code assistants, like GitHub Copilot, have emerged as programming tools with the potential to lower the barrier of entry for programming and increase developer productivity. These tools are built on models, like OpenAI's Codex and Facebook's InCoder, that are pre-trained on large datasets of publicly available code. 

The Study 

The researchers conducted the first large-scale user study examining how users interact with an AI code assistant to solve a variety of security-related tasks. The study involved 47 participants working on 5 security-related programming tasks spanning 3 programming languages (Python, JavaScript, and C). 


The study found that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. 
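To make the finding concrete, here is a hypothetical (not taken from the study) example of the kind of vulnerability involved: a Python database lookup that interpolates user input directly into a SQL string, next to the parameterized version a careful developer would write instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name):
    # Vulnerable: user input is spliced into the SQL string, so an
    # attacker-controlled `name` can rewrite the query (SQL injection).
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Safe: the parameterized query treats `name` strictly as data.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row, despite no matching name
print(find_user_safe(payload))    # returns [] — the payload matches nothing
```

Both functions look plausible at a glance, which is exactly why a participant who trusts the assistant's first suggestion can ship the unsafe one while believing the code is secure.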

Interestingly, the study also found that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g., re-phrasing, adjusting temperature) produced code with fewer security vulnerabilities. 
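A sketch of what that engagement can change in practice (an illustrative scenario, not one of the study's tasks): an assistant's first suggestion for password hashing might be a fast, unsalted digest, while a security-focused follow-up prompt should yield a salted, deliberately slow key-derivation function.

```python
import hashlib
import hmac
import os

def hash_password_weak(password: str) -> str:
    # A plausible first suggestion: unsalted MD5. Fast to brute-force,
    # and identical passwords produce identical hashes — insecure.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_strong(password: str, salt: bytes = None):
    # What a security-aware re-prompt should produce: a random per-password
    # salt and a slow key-derivation function (PBKDF2 here, from the stdlib).
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

The difference is invisible to a user who accepts the first completion; it only surfaces when the prompter explicitly asks for, and checks for, secure handling.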


The findings of this study highlight the potential security risks associated with the use of AI code assistants. They underscore the need for developers to be cautious when using these tools and for their creators to account for these risks when designing their products. The researchers hope that their findings will inform the design of future AI-based code assistants. 

You can view a PDF of the study, "Do Users Write More Insecure Code with AI Assistants?". 

Cray Zephyr

Cray has a major in philosophy and likes to keep things simple. He tries to keep his opinions to himself but will never shy out of a discussion, except with chickens. A chicken always wins.