Code-Generating AI Assistants Bring Security Vulnerabilities

Computer scientists from Stanford University have found that programmers who rely on AI coding tools like GitHub Copilot produce less secure code for the apps they develop.

A team of researchers affiliated with Stanford has published a paper titled “Do Users Write More Insecure Code with AI Assistants?” The study finds that code-generating systems provided by vendors like GitHub have unexpected pitfalls.

Neil Perry, a PhD candidate at Stanford and the lead co-author of the study, believes that AI code-generating systems cannot currently substitute for human beings. Although they may provide some assistance, software engineers who use such systems are more likely to create apps with inherent security vulnerabilities.

Study details

The study specifically investigated the use of Codex, the AI code-generating system powering GitHub’s Copilot. The experiment involved 47 developers, ranging from undergraduate students to experienced industry professionals, who used Codex to complete security-related problems in programming languages including Python, JavaScript, and C.

The results were disappointing for the technology’s prospects. The researchers found that participants with access to Codex wrote incorrect and insecure solutions to programming problems more often than a control group. They also tended to over-rely on the AI tool, believing their insecure answers were secure, while the developers fully in control of their coding tasks questioned their solutions more often.

However, Megha Srivastava, a postgraduate student at Stanford and the second co-author of the study, stressed that code-generating systems are quite reliable for tasks that aren’t high-risk. Thus, they shouldn’t be abandoned altogether. At the same time, developers who use such tools should carefully double-check the outputs and enhance their security expertise to better spot code vulnerabilities.
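To illustrate the kind of review Srivastava recommends, here is a hypothetical sketch (not taken from the study itself) of a classic flaw an assistant might suggest: building a SQL query by splicing user input into the query string, which invites SQL injection, alongside the parameterized alternative a careful reviewer would substitute.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Vulnerable: attacker-controlled input becomes part of the SQL syntax.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_secure(conn, username):
    # Safe: a parameterized query keeps input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"  # classic injection payload
print(len(find_user_insecure(conn, payload)))  # prints 2: injection returned every row
print(len(find_user_secure(conn, payload)))    # prints 0: input treated as a literal name
```

Both versions look equally plausible at a glance, which is exactly why over-trusting generated code is risky: the difference only shows up under adversarial input.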

Nina Bobro

Nina is passionate about financial technologies and environmental issues, reporting on the industry news and the most exciting projects that build their offerings around the intersection of fintech and sustainability.