Why AI Code May Be Dangerous If Not Treated Correctly

Generative AI is now firmly embedded in many workflows, and Wildix is no exception. We use AI tools to help with content creation and emails, overseen by our copy team, and our developers use AI tools to generate code. Even our CEO uses AI to help refine emails and ensure everything is clear.

All of this takes place under the watchful eye of qualified professionals who understand the systems they are using and their limitations. So let’s dive into what happens when developers use AI without appropriate oversight.

Some Stats: Developers and AI

One of the biggest surveys of programmers in 2023 was run by Zero to Mastery, a comprehensive learning resource for developers looking to enhance their skills. In it, the company reports that:

  • 84.4% of programmers are using AI tools.
  • The most common tool is ChatGPT, with 74.9% of programmers using it weekly.
  • 58.5% use AI to write code.
  • 36.6% of those who don’t use AI say that it’s the learning curve that puts them off.
  • 13.4% of those who don’t use AI say that the accuracy is cause for concern.
  • 77.8% of programmers believe AI tools have a positive impact.

Overall, this paints a positive picture of the use of AI by qualified developers. As long as the developers in question understand the code they are using and can test it, there’s no problem with this. The problem comes when coders are pushed to use AI without fully understanding what they are doing, and dubious or buggy code makes its way into the code base.

The Risks of AI Code

“A great developer with ChatGPT is even greater,” said Dimitri Osler, founder and CTO of Wildix. “But it’s also a huge risk in the hands of inexperienced developers or companies who do not perform the required security and quality checks. The potential for developers to use code that they don’t understand or can’t support or maintain is huge.”

From our perspective, AI is just another tool: neither good nor bad in itself. What matters is how it is used. The same applies to tools such as Stack Overflow and GitHub:

  • Stack Overflow is a Q&A forum where developers can get opinions and answers regarding their code.
  • GitHub is a repository where developers can share code and collaborate on projects.

The use of both of these tools comes with a certain amount of risk. Initial speculation around the 3CX breach in March 2023 suggested that it stemmed from an infected GitHub code repository, although this later proved to be false. Indeed, even GitHub itself was vulnerable to exploitation through a RepoJacking technique, in which malicious actors could take over a repository by claiming the username associated with it and then replacing its contents with malicious code. This issue has since been fixed.
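To make that risk concrete, here’s a minimal, hypothetical sketch of why a username takeover matters for anyone installing code straight from GitHub. The package and account names are invented for illustration; this is a Python requirements file, not an example from the 3CX or GitHub incidents:

```
# Hypothetical requirements.txt entry that installs a dependency directly
# from a GitHub account rather than from a package registry. If
# "original-author" abandons or renames the account, an attacker who
# re-registers that username controls whatever this line installs.
some-package @ git+https://github.com/original-author/some-package.git

# Safer: pin to an immutable commit hash (placeholder shown), so a hijacked
# repository cannot silently substitute different code under the same URL.
some-package @ git+https://github.com/original-author/some-package.git@1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b
```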

All of this means that malicious code is nothing new. AI, however, risks making an existing problem worse.

“The best outcome for poorly created AI code is that it simply doesn’t work,” noted Dimitri. “The code gets rejected and new code has to be generated, hopefully one that works this time. The worst-case scenario is that it works initially with the code base but generates an exploit because no one had time to test what it actually does.”
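To see how that worst case can look in practice, here is a hypothetical Python snippet of the kind an AI assistant might plausibly produce: it behaves correctly in a quick manual test but carries a classic SQL injection flaw. The function names and schema are invented for this example:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Looks right and passes a casual test, but interpolating user input
    # into the SQL string lets an attacker rewrite the query. For example,
    # username = "x' OR '1'='1" returns every row in the table.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The parameterized form a reviewer or test suite should insist on:
    # the driver treats the value as data, never as SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions return identical results on well-formed input, which is precisely why untested AI output can slip through a casual review.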

Key to keeping systems safe, then, is regular testing by highly qualified developers. Back in March 2023, Dimitri had these words to say: “Security is not cheap. MSPs simply cannot afford to take the risk anymore, and your reputation will take the hit if you pursue the cheapest option.”

These words referred to the 3CX breach, an attack that caused considerable stress and worry for both MSPs and their end users. Part of the problem appears to have been that code was inserted without appropriate testing, which is one of the big issues with cheaper VoIP products that must cut costs as much as possible to work within their price points.

AI can also speed up testing substantially. “We use AI tools to automate some testing, simply because it can quickly sanity-check code,” explains Dimitri. “It acts as an additional layer to our processes, not a replacement, but it does speed up the process as QA gets cleaner code to review.”
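Wildix hasn’t published the details of that pipeline, but a minimal sketch of an automated sanity-check layer might look like the following. The tool choices (ruff for lint checks, bandit for security anti-patterns) and the src/ path are illustrative assumptions, not a description of Wildix’s actual stack:

```python
import subprocess

import pytest

# Each command is a quality gate that runs before human QA sees the code.
# ruff flags style issues and likely bugs; bandit flags common security
# anti-patterns such as hardcoded credentials or shell injection.
@pytest.mark.parametrize("cmd", [
    ["ruff", "check", "src/"],
    ["bandit", "-r", "src/", "-q"],
])
def test_static_checks_pass(cmd):
    result = subprocess.run(cmd, capture_output=True, text=True)
    assert result.returncode == 0, result.stdout + result.stderr
```

Run under CI, a gate like this rejects obviously flawed AI-generated code automatically, leaving human reviewers free to focus on logic and design.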

Ultimately, AI code will require extremely experienced developers to oversee the process, particularly as it’s often unclear how the AI software generated the code in the first place. Taking shortcuts on security is simply not an option at Wildix, especially as so many users depend on our systems.

“A thorough testing regimen is needed regardless of whether you use AI code or not,” noted Dimitri. “AI is simply a tool to use, and it’s the processes around it that determine whether you are using it well or using it badly.”

AI code is here to stay, though, and it’s up to companies to find a way to use it without compromising their users.

For more insights on security and our secure-by-design ethos, subscribe to receive our magazine for free!
