Two people familiar with the situation told Reuters that several staff researchers wrote a letter to the board of directors warning of a potentially dangerous artificial intelligence breakthrough ahead of OpenAI CEO Sam Altman's four-day ouster.
The two sources said the previously undisclosed letter and the AI algorithm were key developments leading up to the board's firing of Altman, the poster child of generative AI. Before his triumphant return on Tuesday night, almost seven hundred employees had threatened to quit and join Microsoft in solidarity with their ousted leader.
The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman's firing, among them concerns over commercialising advances before understanding the consequences.
One of the people said that OpenAI, which declined to comment, acknowledged a project called Q* in a letter to the board and in an internal memo to staff sent before the weekend's events. An OpenAI spokesperson said the message, written by longtime executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for artificial general intelligence (AGI), one of the people told the media. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, said the person, who spoke on condition of anonymity because they were not authorised to speak on behalf of the company. Though Q* was only performing maths at a rudimentary level, acing such tests made researchers very optimistic about its future prospects, the source said.
The capabilities of Q* claimed by the researchers, however, could not be independently verified.
Researchers consider maths a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering maths, where there is only one right answer, implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied, for instance, to novel scientific research, AI researchers believe.
Unlike a calculator, which can perform only a limited number of operations, AGI can learn, comprehend, and generalise.
In their letter to the board, the researchers flagged AI's potential for harm, the sources said, though they did not specify the exact safety concerns noted in the letter.
Computer scientists have long debated the danger posed by highly intelligent machines, including the possibility that such machines might decide that the destruction of humanity was in their interest.
Researchers have also flagged the work of a team of "AI scientists," the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimise existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.
Altman led efforts to make ChatGPT one of the fastest-growing software applications in history, and he drew the investment and computing resources from Microsoft needed to move the technology closer to AGI.
In addition to announcing a slate of new tools at a demonstration earlier this month, Altman hinted last week at a gathering of world leaders in San Francisco that he believed major advances were imminent.
"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.
Altman was let go by the board a day later.
(Source: www.nypost.com)