OpenAI and Google Workers File Amicus Brief in Support of Anthropic Against the US Government


More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief Monday in support of Anthropic in its legal fight against the US government.

“If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ business and technological competitiveness in the field of artificial intelligence and beyond,” the employees wrote.

The brief was filed just hours after Anthropic sued the Department of Defense and other federal agencies over the Pentagon’s decision to designate the company a “supply-chain risk.” The sanction, which severely limits Anthropic’s ability to work with military contractors, went into effect after Anthropic’s negotiations with the Pentagon fell apart. The AI startup is seeking a temporary restraining order to continue its work with military partners as the lawsuit progresses. The brief specifically supports this motion.

Signatories of the brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others. Amicus briefs are legal filings submitted by parties that are not directly involved in a court case but that have expertise relevant to it. The employees signed in a personal capacity and don’t represent the views of their companies, according to the brief.

OpenAI and Google did not immediately respond to WIRED’s request for comment.

The amicus brief says that the Pentagon’s decision to blacklist Anthropic “introduces an unpredictability in [their] industry that undermines American innovation and competitiveness” and “chills professional debate on the benefits and risks of frontier AI systems.” It notes that the Pentagon could have simply dropped Anthropic’s contract if it no longer wished to be bound by its terms.

The brief also says that the red lines Anthropic claims it requested, including that its AI wouldn’t be used for mass domestic surveillance and the development of autonomous lethal weapons, are legitimate concerns and require adequate guardrails. “In the absence of national law, the contractual and technical requirements that AI developers impose on the use of their systems represent a critical safeguard against their catastrophic misuse,” the brief says.

Several other AI leaders have also publicly questioned the Pentagon’s decision to label Anthropic a supply-chain risk. OpenAI CEO Sam Altman said in a post on social media that “enforcing the SCR [supply-chain risk] designation on Anthropic would be very bad for our industry and our country.” He added that “this is just a very bad decision from the DoW and I hope they reverse it.” As Anthropic’s relationship with the Pentagon soured, OpenAI quickly signed its own contract with the US military, a move some people criticized as opportunistic.
