This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Programmers have spent decades writing code for AI models, and now, in a full-circle moment, AI is being used to write code. But how does an AI code generator compare to a human programmer?

A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI’s ChatGPT in terms of functionality, complexity, and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code, with a success rate ranging anywhere from as poor as 0.66 percent to as good as 89 percent, depending on the difficulty of the task, the programming language, and a number of other factors.

While in some cases the AI generator could produce better code than humans, the analysis also reveals some security concerns with AI-generated code.

Yutian Tang, a lecturer at the University of Glasgow who was involved in the study, notes that AI-based code generation could boost productivity and help automate software development tasks, but that it’s important to understand the strengths and limitations of these models.

“By conducting a comprehensive analysis, we can uncover potential issues and limitations that arise in the ChatGPT-based code generation... [and] improve generation techniques,” Tang explains.

To explore these limitations in more detail, his team sought to test GPT-3.5’s ability to address 728 coding problems from the LeetCode testing platform in five programming languages: C, C++, Java, JavaScript, and Python.

“A reasonable hypothesis for why ChatGPT can do better with algorithm problems before 2021 is that these problems are frequently seen in the training dataset.” —Yutian Tang, University of Glasgow

Overall, ChatGPT performed fairly well across the different coding languages, and especially well on coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively.

“However, when it comes to the algorithm problems after 2021, ChatGPT’s ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy level problems,” Tang notes.

For example, ChatGPT’s ability to produce functional code for “easy” coding problems dropped from 89 percent to 52 percent after 2021. And its ability to generate functional code for “hard” problems dropped from 40 percent to 0.66 percent after this time as well.

“A reasonable hypothesis for why ChatGPT can do better with algorithm problems before 2021 is that these problems are frequently seen in the training dataset,” Tang says.

Essentially, as coding problems evolve, ChatGPT has not yet been exposed to the newest problems and solutions. It lacks the critical-thinking skills of a human and can only address problems it has previously encountered. This could explain why it is so much better at addressing older coding problems than newer ones.

“ChatGPT may generate incorrect code because it does not understand the meaning of algorithm problems.” —Yutian Tang, University of Glasgow

Interestingly, ChatGPT is able to generate code with smaller runtime and memory overheads than at least 50 percent of human solutions to the same LeetCode problems.

The researchers also explored the ability of ChatGPT to fix its own coding errors after receiving feedback from LeetCode. They randomly selected 50 coding scenarios in which ChatGPT initially generated incorrect code because it didn’t understand the content or the problem at hand.

While ChatGPT was good at fixing compilation errors, it was generally not good at correcting its own mistakes.

“ChatGPT may generate incorrect code because it does not understand the meaning of algorithm problems, thus, this simple error feedback information is not enough,” Tang explains.
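That generate-test-refeed cycle is easy to picture in code. Below is a minimal, hypothetical sketch in Python of such a repair loop; it illustrates the idea rather than reproducing the researchers’ actual harness, and the `generate_code` and `run_tests` callables are stand-ins for the model call and the LeetCode judge.

```python
from typing import Callable, NamedTuple

class Verdict(NamedTuple):
    passed: bool   # did the submission pass the judge's tests?
    error: str     # compiler message or failing-test feedback

def self_repair_loop(
    problem: str,
    generate_code: Callable[[str], str],  # stand-in for the model call
    run_tests: Callable[[str], Verdict],  # stand-in for the LeetCode judge
    max_attempts: int = 3,
) -> str | None:
    """Generate code, feed judge errors back into the prompt, and retry."""
    prompt = problem
    for _ in range(max_attempts):
        code = generate_code(prompt)
        verdict = run_tests(code)
        if verdict.passed:
            return code
        # Compiler messages are precise, so compile errors are often fixed;
        # a bare "wrong answer" carries little information, which matches
        # Tang's point that simple error feedback is not enough.
        prompt = (f"{problem}\n\nYour previous attempt failed with:\n"
                  f"{verdict.error}\nPlease fix the code.")
    return None
```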

The researchers also found that ChatGPT-generated code did have a fair number of vulnerabilities, such as missing null tests, but that many of these were easily fixable. Their results also show that generated code in C was the most complex, followed by C++ and Python, which had a complexity similar to that of human-written code.
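To make the missing-null-test class of bug concrete, here is an invented Python example of the pattern and its fix; it is an illustration of the category, not code drawn from the study.

```python
# Invented illustration of a "missing null test," not code from the study.

def middle_element(nums):
    # Vulnerable: raises TypeError on None and IndexError on an empty list.
    return nums[len(nums) // 2]

def middle_element_safe(nums):
    # The kind of easy fix the researchers describe: guard missing input.
    if not nums:
        return None
    return nums[len(nums) // 2]
```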

Tang says that, based on these results, it’s important for developers using ChatGPT to provide additional information to help the model better understand problems or avoid vulnerabilities.

“For example, when encountering more complex programming problems, developers can provide relevant knowledge as much as possible, and tell ChatGPT in the prompt which potential vulnerabilities to be aware of,” Tang says.
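As one way to act on that advice, a developer could fold both the extra context and a vulnerability checklist directly into the prompt. The snippet below is a hypothetical example of such a wrapper, not a prompt taken from the study.

```python
problem = "Write a function that parses a user-supplied date string."

# Hypothetical prompt following Tang's advice: supply relevant background
# knowledge and name the vulnerabilities the model should watch for.
prompt = (
    f"{problem}\n\n"
    "Relevant knowledge: the input comes from an untrusted web form and "
    "may be empty, malformed, or missing entirely.\n"
    "Potential vulnerabilities to be aware of:\n"
    "- missing null/empty-input checks\n"
    "- unhandled exceptions on malformed input\n"
)
print(prompt)
```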
