- OpenAI updated its usage policy on January 10th.
- As part of the update, restrictions on military use of the company’s technology have been relaxed.
- The change coincides with OpenAI’s rollout of the GPT Store, a marketplace for custom versions of ChatGPT.
Earlier this week, OpenAI quietly eased restrictions on military use of its technology.
In an unannounced update to its usage policy on January 10, OpenAI removed language that broadly prohibited the use of its technology for “military and warfare” purposes. An OpenAI spokesperson told Business Insider that the new language still prohibits OpenAI’s services from being used for more specific purposes, such as developing weapons, injuring others, or destroying property.
A company spokesperson said the goal was to create a set of universal principles that are easy to remember and apply, especially now that the company’s tools are used around the world by everyday users who can also build their own GPTs. On January 10, OpenAI launched the GPT Store, a marketplace where users can share and browse customized versions of ChatGPT, known as “GPTs.”
OpenAI’s new usage policy combines universal principles such as “don’t harm others,” which are broad but easy to understand and relevant across many situations, with explicit bans on specific use cases such as developing or using weapons, the spokesperson said.
Some AI experts are concerned that OpenAI’s rewritten policy is too general, particularly as AI is already being used in the Gaza conflict: the Israeli military has announced that it uses AI to identify bombing targets within the Palestinian territories.
Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, told The Intercept that the language in the policy remains ambiguous and raises questions about how OpenAI intends to approach enforcement.
OpenAI did not reveal many details about its plans, but the wording change could open the door to future military contracts. An OpenAI spokesperson told Business Insider that there are national security use cases that align with the company’s mission, which is part of what led to the change. For example, OpenAI is already working with the Defense Advanced Research Projects Agency to “accelerate the development of new cybersecurity tools to protect the open source software that critical infrastructure and industries rely on.”