The rise of AI code assistants: Dr. Beschastnikh and team investigate collaboration with software developers
Artificial Intelligence (AI) has made significant strides in recent years, and one of its rapidly emerging applications is in the development of code assistants.
These AI-powered tools are transforming the way developers write, debug, and optimize code, providing new efficiencies and capabilities across software engineering.
But how useful does industry actually perceive them to be?
To date, many studies have evaluated participants' perceptions and experiences with code generation tools in controlled settings, but these were primarily performed on small groups of university students. A gap remained in exploring the opinions of the broader community, particularly populations that standard studies tend to miss.
UBC Okanagan PhD student Manaal Basha decided to complete her thesis on the topic, supervised by Gema Rodriguez-Perez, an assistant professor of computer science at UBC Okanagan. The work became a paper co-authored by Basha and Rodriguez-Perez, along with Professor Cleidson R. B. de Souza from Brazil’s Universidade Federal do Pará (UFPA), Dr. Ivan Beschastnikh, an associate professor of computer science at the University of British Columbia (UBC), and Associate Professor Dr. Dongwook Yoon, also of UBC Computer Science.
The paper titled "The Fine Balance Between Helping with Your Job and Taking it: AI Code Assistants Come to the Fore" was published as an article in the Nov./Dec. issue of IEEE Software.
Basha is excited they have gained this level of exposure for the research. “It validates the importance and ensures that it reaches professionals and academics who can further the conversation about the responsible adoption of AI tools in coding. It’s also exciting to contribute to a magazine that influences the future of software engineering practices globally.”
The evolution of AI in software development
Dr. Beschastnikh explains that the use of AI in code generation is not new but has seen remarkable advancements due to the progress in machine learning (ML) and natural language processing (NLP). He explains that AI code assistants—such as OpenAI's ChatGPT and GitHub Copilot—are reshaping software engineering by automating certain aspects of coding, from writing boilerplate code to suggesting complex functions. These tools are trained on large datasets of code, enabling them to provide context-aware suggestions that can significantly reduce the cognitive load on professional developers.
Dr. Beschastnikh noted, "What makes them particularly useful is their ability to generate code snippets, suggest bug fixes, and even refactor code automatically, all while improving the overall productivity of software developers."
However, he points out that while these AI-driven tools offer substantial benefits, they also present unique challenges, especially regarding their integration into the collaborative nature of software development and in the maintenance of quality standards.
The research methodology combined qualitative and quantitative analysis of sentiment expressed on social media, specifically X (formerly Twitter), in posts by software engineers who use AI tools. The researchers categorized these tweets into four key themes or narrative types: ‘The promise of useful code suggestions,’ ‘The uncovering of new work practices,’ ‘Technical and legal concerns,’ and ‘The impact on software development.’
Code quality, debugging, and trust in AI
One of the critical concerns when using AI-driven tools to produce code is code quality and reliability, which underscores the importance of human oversight.
"The code produced by AI is not always optimal, and it may introduce subtle bugs or security vulnerabilities that can be difficult to detect," says Beschastnikh. "While AI assistants can help streamline development, they are not a replacement for the deep expertise that human developers bring to the table."
Dr. Beschastnikh elaborated, "AI code assistants are great at producing code, but they're not perfect. Developers still need to critically evaluate the suggestions and ensure the code meets the project’s requirements in terms of efficiency, security, and maintainability."
He also pointed out that debugging AI-generated code can sometimes be more challenging than debugging code written by humans. "The interpretability of AI-generated code is an important area of ongoing research. As AI code assistants become more sophisticated, there will be an increasing need for tools that can explain how and why a particular code segment was generated," Dr. Beschastnikh remarked.
Trust is another significant theme that arose. Dr. Beschastnikh explains that there are potential risks for developers becoming overly reliant on AI tools without fully understanding the logic behind the code they are producing. "There’s a balance to strike," he noted. "Developers need to use these tools as assistants, not as substitutes for critical thinking or deep domain knowledge."
The role of AI in collaboration and software engineering
Beyond individual coding tasks, AI code assistants also have the potential to transform how teams collaborate and work on software projects. Dr. Beschastnikh is particularly interested in how AI tools can assist in distributed software development. In large-scale software projects involving multiple contributors, AI could help standardize code styles, ensure adherence to best practices, and automate some parts of code reviews.
Dr. Beschastnikh also notes that one of the potential uses for AI in this context is to automate certain collaborative processes, such as identifying redundant code, suggesting improvements based on best practices, and even predicting conflicts during code integration. "AI can act as a second pair of eyes, especially in large-scale projects where it’s easy for small issues to snowball into bigger problems,” he said.
Ethical and security considerations
As with any technology, the use of AI in software development raises ethical, security, and legal concerns. Dr. Beschastnikh emphasizes that these tools, if not used carefully, can introduce risks. He discusses the potential for AI-generated code to inadvertently include security vulnerabilities, especially if the training data includes insecure code examples.
"AI code assistants learn from publicly available code repositories, which may contain code with vulnerabilities or poor practices," he said. "This raises questions about the safety and security of AI-generated code, particularly when used in critical systems where security is paramount."
Dr. Beschastnikh is especially concerned about these ethical implications within educational settings. He noted that while these tools can be incredibly useful for learning, there is a danger that students might become too reliant on them and miss out on the problem-solving and critical-thinking skills so essential to software development. "AI can be a great tool for learning, but we must ensure that students still engage deeply with the material."
Basha noted that one of the most interesting outcomes of the research was discovering how sharply user opinions about Code Generation Tools (CGTs) diverged. “While many developers praised these tools for improving productivity and reducing repetitive coding tasks,” she said, “there was a significant portion of users who expressed concerns about ethical issues, biases in AI-generated code, and fears of job automation.”
She added, “This research provides software engineers with valuable insights into how their peers and the public perceive and use Generative AI tools in coding workflows. The themes uncovered help us understand potential barriers to adoption for some user groups.”
In terms of legal risks, Beschastnikh explains that this is one of the most concerning narratives as it is difficult to contain and fix. “The issue of copyright is difficult to address within AI tools. Without proper attribution, it is unclear where the generated code is sourced and what copyrights might be attached to it.”
The future of AI in software engineering
Looking forward, Beschastnikh believes that AI will continue to play an increasingly significant role in software engineering but will require careful management to ensure its benefits are fully realized without compromising code quality or security.
Basha agrees, “AI-powered tools have the potential to enhance teamwork by streamlining communication and automating workflows, making development processes more efficient. However, to fully integrate AI, it must become more reliable and transparent, ensuring that developers can maintain control and ownership over the code they produce. This may require standardization or governance.”
The research team is working on further projects related to AI code assistants. “Specifically, we want to expand the work to explore gender and other demographic factors,” Basha adds, “and aim to better understand how various user groups engage with CGTs. By examining these relationships, we hope to identify potential adoption barriers and limitations, ultimately working towards solutions that more effectively support a diverse range of users.”
Beschastnikh emphasized that he hopes his team’s research encourages software engineers to try these tools as they emerge, and to do so without fear. “It takes time to learn how to utilize them and incorporate them into your job, but there is so much to gain.” He points out, “There is no going back. The genie, as they say, is out of the bottle. AI tools will only continue to improve and become more widely used by software engineers. We must come to terms with how we want this to happen and ensure the process is highly collaborative between humans and machines.”
Rodriguez-Perez added, “We hope the future for developers includes a balanced integration of AI assistants that augments developers’ abilities rather than replaces them. Ideally, AI tools will become more user-centric, allowing for personalized experiences that cater to individual coding styles and preferences. Developers should feel empowered by these tools to experiment, learn, and innovate without fear of losing their autonomy or jobs.”