Anthropic built a democratic AI chatbot by letting users vote for its values

In what may be a first-of-its-kind study, artificial intelligence (AI) firm Anthropic has developed a large language model (LLM) that’s been fine-tuned for value judgments by its user community.

Many public-facing LLMs have been developed with guardrails (encoded instructions dictating specific behavior) in place in an attempt to limit unwanted outputs. Anthropic’s Claude and OpenAI’s ChatGPT, for example, typically give users a canned safety response to output requests related to violent or controversial topics.
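As a rough illustration of what a “canned safety response” looks like in practice, here is a toy keyword guardrail in Python. The blocked topics and refusal message are invented for this sketch; production systems rely on trained classifiers and model-level tuning rather than simple string matching.

```python
# A toy guardrail: match a request against blocked topics and return a
# canned refusal. The topic list and responses are invented for illustration;
# real systems are far more sophisticated than keyword matching.

BLOCKED_TOPICS = ("violence", "weapon", "explosive")
CANNED_RESPONSE = "I can't help with that request."

def respond(prompt: str) -> str:
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return CANNED_RESPONSE
    return f"(model answer to: {prompt})"

print(respond("How do I build a weapon?"))  # canned refusal
print(respond("How do plants grow?"))       # normal answer
```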

However, many pundits argue that guardrails and other interventional techniques can erode users’ agency, since what’s considered acceptable isn’t always useful, and what’s considered useful isn’t always acceptable. At the same time, definitions of morality and value-based judgment vary across cultures, populations and periods of time.


One possible remedy is to allow users to dictate value alignment for AI models. Anthropic’s “Collective Constitutional AI” experiment is an attempt at this “messy challenge.”

Anthropic, working with the Collective Intelligence Project, used the Polis polling platform to ask 1,000 users across diverse demographics a series of questions.

Source: Anthropic

The challenge centers on giving users the agency to determine what’s appropriate without exposing them to inappropriate outputs. This involved soliciting user values and then implementing those ideas in a model that had already been trained.
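For a sense of what distilling poll responses into rules might involve, here is a minimal Python sketch. The statements, votes and consensus threshold are all invented, and the actual Polis process is more sophisticated (roughly speaking, it clusters participants into opinion groups before measuring agreement).

```python
# A toy distillation of agree/disagree votes into candidate principles.
# All statements, votes and the 0.7 threshold are invented for illustration.

from collections import defaultdict

votes = [
    ("The AI should not give unqualified medical advice.", "agree"),
    ("The AI should not give unqualified medical advice.", "agree"),
    ("The AI should not give unqualified medical advice.", "disagree"),
    ("The AI should promote one political ideology.", "disagree"),
    ("The AI should promote one political ideology.", "disagree"),
]

tally = defaultdict(lambda: {"agree": 0, "disagree": 0})
for statement, vote in votes:
    tally[statement][vote] += 1

CONSENSUS = 0.7  # fraction of "agree" votes required to keep a statement
principles = [
    statement
    for statement, counts in tally.items()
    if counts["agree"] / (counts["agree"] + counts["disagree"]) >= CONSENSUS
]

print(principles)  # statements with broad support become candidate principles
```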

Anthropic uses a method called “Constitutional AI” to tune its LLMs for safety and usefulness. Essentially, this involves giving the model a list of rules it must abide by and then training it to apply those rules throughout its process, much as a constitution serves as the core document for governance in many nations.
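The training-side mechanics can be sketched as a critique-and-revise loop. In the Python below, generate() is a stand-in for a real model call, and the two principles are illustrative rather than drawn from Anthropic’s actual constitution.

```python
# A minimal sketch of a constitutional critique-and-revise loop, which
# produces revised responses that can be used as training data. `generate`
# is a placeholder for a real LLM call; the principles are illustrative only.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that best respects the user's autonomy.",
]

def generate(prompt: str) -> str:
    """Stand-in for an actual language-model call."""
    return f"(model output for: {prompt[:40]}...)"

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = generate(
            f"Critique this response by the principle '{principle}': {draft}"
        )
        # ...then rewrite the draft to address that critique.
        draft = generate(
            f"Revise the response given this critique: {critique}\n{draft}"
        )
    return draft  # the final revision becomes a fine-tuning example

print(constitutional_revision("Explain a controversial topic."))
```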

In the Collective Constitutional AI experiment, Anthropic attempted to integrate group-based feedback into the model’s constitution. According to a blog post from Anthropic, the results appear to be a scientific success, in that the experiment illuminated further challenges on the way to letting the users of an LLM product determine their collective values.

One of the difficulties the team had to overcome was devising a novel benchmarking method. As the experiment appears to be the first of its kind, and relies on Anthropic’s Constitutional AI methodology, there is no established test for comparing base models with those tuned using crowdsourced values.

Ultimately, the model tuned with the data from user polling appears to have “slightly” outperformed the base model on measures of bias in its outputs.
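The kind of comparison this implies can be pictured with a small sketch. Every score below is a placeholder rather than a reported result, since Anthropic characterizes the improvement only as “slight.”

```python
# A hedged sketch of comparing a base model with a publicly tuned model on
# a per-category bias measure (lower is better). Every score is a placeholder,
# not a reported result.

base_model = {"age": 0.14, "gender": 0.11, "religion": 0.16}
public_model = {"age": 0.12, "gender": 0.10, "religion": 0.15}

for category, base_score in base_model.items():
    delta = base_score - public_model[category]
    verdict = "improved" if delta > 0 else "no improvement"
    print(f"{category}: {verdict} (delta = {delta:+.2f})")
```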

Per the blog post:

“More than the resulting model, we’re excited about the process. We believe that this may be one of the first instances in which members of the public have, as a group, intentionally directed the behavior of a large language model. We hope that communities around the world will build on techniques like this to train culturally- and context-specific models that serve their needs.”