Update: In an additional statement, OpenAI confirmed that the language was changed to accommodate military customers and projects that the company approves of.
“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.”
Original story follows:
In an unannounced update to its usage policy, OpenAI has opened the door to military applications of its technologies. While the policy previously prohibited use of its products for the purposes of “military and warfare,” that language has now been removed, and OpenAI did not deny that it is now open to military uses.
Read more at: TechCrunch
Curated by Arun