AI-assisted code can be inherently insecure, study finds

Programmers must be educated about strong coding practices

Forward-looking: Machine learning algorithms are all the rage right now, used to generate "original" content of every kind after being trained on enormous pre-existing datasets. Code-generating AIs, however, could pose a real problem for software security in the future.

AI systems like GitHub Copilot promise to make programmers' lives easier by generating entire chunks of "new" code from natural-language prompts and pre-existing context. But code-generating algorithms can also introduce security flaws, as a new study involving several developers has recently found.
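The article doesn't reproduce the study's snippets, but a classic example of the kind of flaw at issue is SQL built by string concatenation, a pattern code assistants have been observed to suggest. The sketch below (hypothetical, not taken from the study) contrasts it with a parameterized query:

```python
import sqlite3

def find_user_insecure(cur, name):
    # Insecure: untrusted input is spliced directly into the SQL string,
    # so a crafted value can rewrite the query (SQL injection).
    return cur.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(cur, name):
    # Secure: a parameterized query keeps user data out of the SQL grammar.
    return cur.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"                      # classic injection payload
print(len(find_user_insecure(cur, payload)))  # returns every row: 2
print(len(find_user_secure(cur, payload)))    # matches nothing: 0
```

Both versions look equally plausible to a reviewer skimming generated code, which is why studies like this stress developer education over blind trust in the assistant's output.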
