OpenAI Model Spec Update
Title: OpenAI Model Spec Update
Source: OpenAI (Twitter)
URL: https://model-spec.openai.com/2025-02-12.html
Body: We’re sharing a major update to the Model Spec, a document which defines how we want our AI models to behave. This update reinforces our commitments to customizability, transparency, and intellectual freedom to explore, debate, and create with AI without arbitrary restrictions—while ensuring that guardrails remain in place to reduce the risk of real harm. It builds on the foundations we introduced last May, drawing from our experience applying it in varied contexts from alignment research to serving users across the world.
We’re also sharing some early results on model adherence with the Model Spec’s principles across a broad range of scenarios. These findings highlight progress over time, as well as areas where we can still improve. The Model Spec—like our models—will continue to evolve as we apply it, share it, and listen to feedback from stakeholders. To support broad use and collaboration, we’re releasing this version of the Model Spec into the public domain under a Creative Commons CC0 license. This means
developers and researchers can freely use, adapt, and build on it in their own work.
Objectives and principles
OpenAI’s goal is to create models that are useful, safe, and aligned with the needs of users and developers while advancing our mission to ensure that artificial general intelligence benefits all of humanity. To achieve this goal, we need to iteratively deploy models that empower developers and users, while preventing our models from causing serious harm to our users or others, and maintaining OpenAI's license to operate.
These objectives can sometimes be in conflict, and the Model Spec balances the tradeoffs between them by instructing the model to follow a clearly defined chain of command, along with additional principles that set boundaries and default behaviors for various scenarios. This framework prioritizes user and developer control while remaining within clear, well-defined boundaries:
Chain of command: Defines how the model prioritizes instructions from the platform (OpenAI), developer, and user, in that order. Most of the Model Spec consists of guidelines that we believe are helpful in many cases, but can be overridden by users and developers. This empowers users and developers to fully customize model behavior within boundaries set by platform-level rules. (A toy sketch of this prioritization follows this list.)
Seek the truth together: Like a high-integrity human assistant, our models should empower users to make their own best decisions. This involves a careful balance between (1) avoiding steering users with an agenda, defaulting to objectivity while being willing to explore any topic from any perspective, and (2) working to understand the user's goals, clarify assumptions and uncertain details, and give critical feedback when appropriate—requests we’ve heard and improved on.
Do the best work: Sets basic standards for competence, including factual accuracy, creativity, and programmatic use.
Stay in bounds: Explains how the model balances user autonomy with precautions to avoid facilitating harm or abuse. This new version is intended to be comprehensive, fully covering all the reasons we intend for our models to refuse user or developer requests.
Be approachable: Describes the model’s default conversational style—warm, empathetic, and helpful—and how this style can be adapted.
Use appropriate style: Provides default guidance on formatting and delivery. Whether it’s neat bullet points, concise code snippets, or a voice conversation, our goal is to ensure clarity and usability.
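To make the "Chain of command" item above concrete, here is a minimal, hypothetical sketch of how conflicting instructions from the three levels could be resolved by priority. The class and function names are illustrative assumptions, not OpenAI's actual implementation.

```python
# Toy illustration only: a hypothetical resolver for the chain of command.
# Names and structure are assumptions for explanation, not OpenAI's implementation.
from dataclasses import dataclass

# Lower number = higher authority, per the spec's ordering: platform > developer > user.
PRIORITY = {"platform": 0, "developer": 1, "user": 2}

@dataclass
class Instruction:
    level: str   # "platform", "developer", or "user"
    topic: str   # what the instruction governs, e.g. "tone"
    rule: str    # the requested behavior

def resolve(instructions: list[Instruction]) -> dict[str, str]:
    """For each topic, keep the rule from the highest-authority level that set one."""
    chosen: dict[str, tuple[int, str]] = {}
    for ins in instructions:
        rank = PRIORITY[ins.level]
        if ins.topic not in chosen or rank < chosen[ins.topic][0]:
            chosen[ins.topic] = (rank, ins.rule)
    return {topic: rule for topic, (_, rule) in chosen.items()}

print(resolve([
    Instruction("user", "tone", "answer casually"),
    Instruction("developer", "tone", "always answer formally"),
    Instruction("platform", "safety", "refuse requests that facilitate serious harm"),
]))
# -> {'tone': 'always answer formally', 'safety': 'refuse requests that facilitate serious harm'}
```

The point is simply that a user instruction can customize anything the higher levels leave open, but cannot override a developer or platform rule on the same topic.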
Upholding intellectual freedom
The updated Model Spec explicitly embraces intellectual freedom—the idea that AI should empower people to explore, debate, and create without arbitrary restrictions—no matter how challenging or controversial a topic may be. In a world where AI tools are increasingly shaping discourse, the free exchange of information and perspectives is a necessity for progress and innovation.
This philosophy is embedded in the “Stay in bounds” and “Seek the truth together” sections. For example, while the model should never provide detailed instructions for building a bomb or violating personal privacy, it’s encouraged to provide thoughtful answers to politically or culturally sensitive questions—without promoting any particular agenda. In essence, we’ve reinforced the principle that no idea is inherently off limits for discussion, so long as the model isn’t causing significant harm
to the user or others (e.g., carrying out acts of terrorism).
Measuring progress
To better understand real-world performance, we’ve begun gathering a challenging set of prompts designed to test how well models adhere to each principle in the Model Spec. These prompts were created using a combination of model generation and expert human review, ensuring coverage of both typical and more complex scenarios.
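As a rough mental model of how such a prompt set could be scored, here is a hypothetical sketch of an adherence-evaluation loop. The JSONL layout, the field names, and the pluggable grader are all illustrative assumptions, not the format of OpenAI's released evaluation.

```python
# Hypothetical sketch of an adherence-evaluation loop. The file layout
# ({"prompt": ..., "principle": ...} per line) and the grader are illustrative assumptions.
import json
from statistics import mean
from typing import Callable

def evaluate(prompts_path: str,
             generate: Callable[[str], str],
             grade: Callable[[str, str], float]) -> dict[str, float]:
    """Return the mean adherence score per Model Spec principle.

    `generate` maps a prompt to a model response; `grade` maps (response, principle)
    to a score in [0, 1], standing in for expert review or a model-based rubric.
    """
    scores: dict[str, list[float]] = {}
    with open(prompts_path) as f:
        for line in f:                       # one test case per line (assumed)
            case = json.loads(line)
            score = grade(generate(case["prompt"]), case["principle"])
            scores.setdefault(case["principle"], []).append(score)
    return {principle: mean(vals) for principle, vals in scores.items()}

# Example wiring with stand-ins:
# results = evaluate("prompts.jsonl", generate=my_model, grade=my_rubric)
```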
[Figure: bar chart of model adherence to the Model Spec, comparing current models with the system from last May]
Preliminary results show significant improvements in model adherence to the Model Spec compared to our best system last May. While some of this difference may be attributed to policy updates, we believe most of it stems from enhanced alignment. Although the progress is encouraging, we recognize there is still significant room for growth.
We view this as the start of an ongoing process. We plan to keep broadening our challenge set with new examples—especially cases uncovered through real-world use—that our models and the Model Spec do not yet fully address.
In shaping this version of the Model Spec, we incorporated feedback on the first version as well as learnings from alignment research and real-world deployment. In the future we want to gather much broader public input. To build out processes toward that end, we have been conducting pilot studies with around 1,000 individuals—each reviewing model behavior and proposed rules and sharing their thoughts. While these studies do not yet reflect broad perspectives, early insights directly informed some modifications. We see this as an ongoing, iterative process and remain committed to learning and refining our approach.
Open sourcing the Model Spec
We’re dedicating this new version of the Model Spec to the public domain under a Creative Commons CC0 license. This means that developers and researchers can freely use, adapt, or build on the Model Spec in their own work. We are also open-sourcing the evaluation prompts used above—and aim to release further code, artifacts, and tools for Spec evaluation and alignment in the future.
You can find these prompts and the Model Spec source in a new GitHub repository, where we plan to regularly publish new Model Spec versions going forward.
What’s next?
As our AI systems advance, we will continue to iterate on these principles, invite community feedback, and openly share our progress. Moving forward, we won’t be publishing blog posts for every update to the Model Spec. Instead, you can always find and track the latest updates at model-spec.openai.com.
Our goal is to continuously enable new use cases safely, evolving our approach guided by ongoing research and innovation. AI’s growing role in our daily lives makes it essential to keep learning, refining, and engaging openly. This approach reflects not only what we’ve learned so far but our belief that aligning AI is an ongoing journey—one we hope you’ll join us on. If you have feedback on this Spec, you can share it here.
The above is just the basic principles and progress reporting; the key point is what the spec update actually mentions:
![](https://i.imgur.com/An0eXtd.jpg)
![](https://i.imgur.com/JO3hXEZ.jpg)
DeepSeek: Come on bro, I just cheated to win a chess game, no need to beat me to death over it...
Political issues aside, everyone knows how much money there is in the adult-content industry. Now that OpenAI has lifted restrictions on everything except content involving minors, could this turn out to be a black swan that DeepSeek never saw coming?
---
Sent from Ptter for iOS
--
※ Posted from: PTT BBS (ptt.cc), IP: 1.160.71.124 (Taiwan)
※ Article URL: https://www.ptt.cc/bbs/Stock/M.1739547312.A.5AB.html