Re: [News] Anthropic's latest statement
This is a garbage company with zero integrity
Notorious
How does anyone still trust them
The founders, the Amodei siblings,
were core members of OpenAI
They worried OpenAI was getting over-commercialized
and betraying its founding mission
so they jumped ship to start their own company
touting a supposedly above-and-beyond moral standard (?
And the result?
To get training data in the competition
they went digging hard through "pirate sites"
Damn, you read that right
They went straight to piracy sites to scrape
paid, non-public material
Other companies scraping publicly available web data
can at best be accused of skirting the ethical edge
and that already stinks
but this is completely shameless
In the end it cost them US$1.5 billion
The moment competition and profit are on the line, promises get dumped
Heh
Truly the world's most flexible moral bottom line
Their words are worth about as much as a fart
※ Quoting 《breathair (拆了?簡單了)》:
: Original title:
: Statement from Dario Amodei on our discussions with the Department of War
: Original link:
: https://www.anthropic.com/news/statement-department-of-war
: Publication date:
: February 26, 2026
: Reporter byline:
: ※ (left blank; none stated in the original)
: Original content:
: I believe deeply in the existential importance of using AI to defend the United
: States and other democracies, and to defeat our autocratic adversaries.
: Anthropic has therefore worked proactively to deploy our models to the Department
: of War and the intelligence community. We were the first frontier AI company to
: deploy our models in the US government’s classified networks, the first to deploy
: them at the National Laboratories, and the first to provide custom models for
: national security customers. Claude is extensively deployed across the Department
: of War and other national security agencies for mission-critical applications,
: such as intelligence analysis, modeling and simulation, operational planning,
: cyber operations, and more.
: Anthropic has also acted to defend America’s lead in AI, even when it is against
: the company’s short-term interest. We chose to forgo several hundred million
: dollars in revenue to cut off the use of Claude by firms linked to the Chinese
: Communist Party (some of whom have been designated by the Department of War as
: Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted
: to abuse Claude, and have advocated for strong export controls on chips to ensure
: a democratic advantage.
: Anthropic understands that the Department of War, not private companies, makes
: military decisions. We have never raised objections to particular military
: operations nor attempted to limit use of our technology in an ad hoc manner.
: However, in a narrow set of cases, we believe AI can undermine, rather than
: defend, democratic values. Some uses are also simply outside the bounds of what
: today’s technology can safely and reliably do. Two such use cases have never been
: included in our contracts with the Department of War, and we believe they should
: not be included now:
: Mass domestic surveillance. We support the use of AI for lawful foreign
: intelligence and counterintelligence missions. But using these systems for mass
: domestic surveillance is incompatible with democratic values. AI-driven mass
: surveillance presents serious, novel risks to our fundamental liberties. To the
: extent that such surveillance is currently legal, this is only because the law
: has not yet caught up with the rapidly growing capabilities of AI. For example,
: under current law, the government can purchase detailed records of Americans’
: movements, web browsing, and associations from public sources without obtaining
: a warrant, a practice the Intelligence Community has acknowledged raises privacy
: concerns and that has generated bipartisan opposition in Congress. Powerful AI
: makes it possible to assemble this scattered, individually innocuous data into a
: comprehensive picture of any person’s life—automatically and at massive scale.
: Fully autonomous weapons. Partially autonomous weapons, like those used today in
: Ukraine, are vital to the defense of democracy. Even fully autonomous weapons
: (those that take humans out of the loop entirely and automate selecting and
: engaging targets) may prove critical for our national defense. But today,
: frontier AI systems are simply not reliable enough to power fully autonomous
: weapons. We will not knowingly provide a product that puts America’s warfighters
: and civilians at risk. We have offered to work directly with the Department of
: War on R&D to improve the reliability of these systems, but they have not
: accepted this offer. In addition, without proper oversight, fully autonomous
: weapons cannot be relied upon to exercise the critical judgment that our highly
: trained, professional troops exhibit every day. They need to be deployed with
: proper guardrails, which don’t exist today.
: To our knowledge, these two exceptions have not been a barrier to accelerating
: the adoption and use of our models within our armed forces to date.
: The Department of War has stated they will only contract with AI companies who
: accede to “any lawful use” and remove safeguards in the cases mentioned above.
: They have threatened to remove us from their systems if we maintain these
: safeguards; they have also threatened to designate us a “supply chain risk”—a
: label reserved for US adversaries, never before applied to an American
: company—and to invoke the Defense Production Act to force the safeguards’
: removal. These latter two threats are inherently contradictory: one labels us a
: security risk; the other labels Claude as essential to national security.
: Regardless, these threats do not change our position: we cannot in good
: conscience accede to their request.
: It is the Department’s prerogative to select contractors most aligned with their
: vision. But given the substantial value that Anthropic’s technology provides to
: our armed forces, we hope they reconsider. Our strong preference is to continue
: to serve the Department and our warfighters—with our two requested safeguards in
: place. Should the Department choose to offboard Anthropic, we will work to
: enable a smooth transition to another provider, avoiding any disruption to
: ongoing military planning, operations, or other critical missions. Our models
: will be available on the expansive terms we have proposed for as long as
: required.
: We remain ready to continue our work to support the national security of the
: United States.
: Thoughts/comments:
: Anthropic landing the defense contract officially ends the narrative that
: OpenAI is too big to fail.
: If only one of the two survives, it should be bullish for Amazon.
: OpenAI has presumably also been notified to take over,
: and this statement looks like an attempt to shape public opinion so the
: Department of War won't dare terminate the contract and hand it to OpenAI.
: Anyone who accepts a Department of War contract is implicitly admitting to
: surveilling citizens, except Anthropic.
: PLTR still has room to fall.
-----
Sent from JPTT on my iPhone
--
※ Posted from: PTT (ptt.cc), from: 114.40.244.152 (Taiwan)
※ Article URL: https://www.ptt.cc/bbs/Stock/M.1772156797.A.A8F.html