

OpenAI Forms New Team to Reduce Risk From Future AI

OpenAI has begun forming a new team of specialists tasked with minimizing the risks associated with the spread and integration of artificial intelligence across different spheres of life.

Creating such a working group is a timely response to the current technological reality: AI is developing rapidly, and as the technology improves, the consequences of its use become more significant and far-reaching, increasing the need for tools and systems to monitor them.

Last Thursday, October 26, the company, which became world-famous after launching its AI chatbot ChatGPT, announced on its blog that it had formed a so-called preparedness team headed by Aleksander Madry, who joined OpenAI while on leave from a faculty position at the Massachusetts Institute of Technology.

The new group will analyze potentially catastrophic risks associated with the operation of artificial intelligence systems and develop methods to prevent worst-case scenarios of AI use. The specialists will focus on cybersecurity problems that may arise from the application of artificial intelligence, and will also study chemical, nuclear, and biological threats that AI could potentially provoke.

Another area of the team's work will be drafting a corporate policy that defines how the company should act if risks are detected in the development of so-called frontier models, next-generation AI technologies that surpass existing systems in capability.

The firm is currently focused on creating artificial general intelligence, or AGI, a system capable of performing many tasks better than a human.

The company stated that it needs assurance that it has the understanding and infrastructure required for highly capable AI systems.

OpenAI CEO Sam Altman has repeatedly said that artificial intelligence could cause the extinction of humankind. But, as experts note, this does not mean that the new team will analyze only the kinds of risks found in dystopian science fiction.

The company's blog post also notes that AI systems surpassing today's most advanced models in capability could benefit humanity, while stressing that the prospect of positive outcomes does not cancel out the accompanying risks.

OpenAI is also soliciting ideas for risk research from the community, offering a $25,000 prize and a job to the authors of the ten best submissions.

As we reported earlier, OpenAI executives have spoken about the potential of AI to do any job.

Serhii Mikhailov

