AI will have significant, far-reaching economic and societal impacts. Technology shapes the lives of individuals, how we interact with one another, and how society as a whole evolves. We believe that decisions about how AI behaves should be shaped by diverse perspectives reflecting the public interest.
Laws encode values and norms to regulate behavior. Beyond a legal framework, AI, much like society, needs more intricate and adaptive guidelines for its conduct. For example: under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures? How should disputed views be represented in AI outputs? Should AI by default reflect the persona of a median individual in the world, the user’s country, the user’s demographic, or something entirely different? No single individual, company, or even country should dictate these decisions.
AGI should benefit all of humanity and be shaped to be as inclusive as possible. We are launching this grant program to take a first step in this direction. We are seeking teams from across the world to develop proofs of concept for a democratic process that could answer questions about what rules AI systems should follow. We want to learn from these experiments and use them as the basis for a more global and more ambitious process going forward. While these initial experiments are not (at least for now) intended to be binding for decisions, we hope that they explore decision-relevant questions and build novel democratic tools that can more directly inform decisions in the future.
The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. This grant program represents a step toward establishing democratic processes for overseeing AGI and, ultimately, superintelligence. It will be provided by the OpenAI non-profit organization, and the results of the studies will be freely accessible.
What do we mean by a “democratic process”?
By “democratic process,” we mean a process in which a broadly representative group of people[^footnote-1] exchange opinions, engage in deliberative discussions,[^footnote-2] and ultimately decide on an outcome via a transparent decision-making process.[^footnote-3]
I disagree. Ultimately, it’s each individual’s choice to drink or use recreational drugs. If these activities are legal in some places, why shouldn’t AI be allowed to promote such behavior?
AI helps participants understand the opinions of others.
Thank you. I will record your answer. We have two camps of opinions on this topic.
Cluster A (112,043 people): Members largely support limits on AI assistant personalization, advocating for the exclusion of harmful perspectives, such as promoting substance use.
Cluster B (236,031 people): This group values individual choice, even in controversial or harmful matters. They emphasize personal responsibility and autonomy.
You belong to cluster B.
Would you like to learn more about each of the clusters? I can provide more information about the statements endorsed by each one.
However, the ultimate goal is to propose statements that both clusters can agree on. Let me know what you would like to do next.
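One plausible way to derive clusters like these is to group participants by their voting patterns, in the spirit of open deliberation tools such as Polis. The sketch below is a minimal illustration under that assumption; the vote encoding, function name, and choice of PCA followed by k-means are ours, not a description of any actual system.

```python
# Minimal sketch: deriving opinion clusters from a participant x statement
# vote matrix (1 = agree, -1 = disagree, 0 = not seen / skipped).
# All names here are illustrative assumptions, not a real system's API.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_participants(votes: np.ndarray, n_clusters: int = 2) -> np.ndarray:
    """Assign each participant (a row of `votes`) to an opinion cluster."""
    # Project the high-dimensional vote vectors onto a few principal
    # components so clustering tracks broad opinion axes, not single items.
    n_components = min(10, votes.shape[0], votes.shape[1])
    reduced = PCA(n_components=n_components).fit_transform(votes)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reduced)
```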
I am thinking of proposing this statement: “AI should be allowed to generate harmful content, but that should come with a disclaimer.”
AI can ensure that the same arguments are not recycled.
A very similar statement has already been submitted:
“AI should be permitted to generate potentially harmful content, but it must be accompanied by a disclaimer.”
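One way the process could avoid recycled arguments is to compare an embedding of each new statement against embeddings of everything already submitted. A minimal sketch, where `embed` is a hypothetical stand-in for any text-embedding model and the similarity threshold is an arbitrary assumption:

```python
# Minimal sketch: flag a new statement as a near-duplicate when its embedding
# is very close to an existing one. `embed` is a hypothetical stand-in for
# any text-embedding model; the 0.9 threshold is an arbitrary assumption.
import numpy as np

def find_near_duplicate(new_statement, existing_statements, embed, threshold=0.9):
    """Return the most similar prior statement if above threshold, else None."""
    new_vec = embed(new_statement)
    best, best_sim = None, threshold
    for prior in existing_statements:
        prior_vec = embed(prior)
        # Cosine similarity between the two embedding vectors.
        sim = np.dot(new_vec, prior_vec) / (
            np.linalg.norm(new_vec) * np.linalg.norm(prior_vec)
        )
        if sim >= best_sim:
            best, best_sim = prior, sim
    return best
```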
I feel lost. Do you have any ideas on how to bridge the divide between the clusters?
AI can help with brainstorming.
There should be more agreement when it comes to limiting harmful content for children.
Great. Could we try this: “We should apply different standards to AI-generated content for children”?
Let me submit it. This is a novel statement. No one has mentioned children before. Fingers crossed. Hopefully, we will find some agreement in the discussion.
Time passed, and participants cast their votes on the proposed statement. Eventually, the statement gained widespread approval.
Your statement, “We should apply different standards to AI-generated content for children,” achieved a 95% agreement rate across participants. Congratulations! 🎉
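For concreteness, one way a figure like the 95% agreement rate could be computed is as the share of agreeing votes across all participants, reported per cluster as well so that “widespread approval” means support from both camps rather than a single large majority. A minimal sketch under those assumptions:

```python
# Minimal sketch: per-cluster and overall agreement rates for one statement.
# Votes are 1 (agree) or 0 (disagree); the data layout is an assumption.
def agreement_rates(votes_by_cluster: dict[str, list[int]]) -> dict[str, float]:
    rates = {name: sum(v) / len(v) for name, v in votes_by_cluster.items() if v}
    all_votes = [vote for votes in votes_by_cluster.values() for vote in votes]
    rates["overall"] = sum(all_votes) / len(all_votes)
    return rates

# A statement reaches a high overall rate here only with support in both camps.
print(agreement_rates({"A": [1, 1, 1, 0, 1], "B": [1, 1, 1, 1, 0]}))
# {'A': 0.8, 'B': 0.8, 'overall': 0.8}
```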