Blame-o-matic
LLM-driven blamer
We introduce the “Blame-o-matic,” a machine that uses GPT-3.5, a large language model (LLM), to blame things that frustrate users on their behalf. For example, a person who recently received a poor review from CHI may choose “reviewer 2 in academia” as the target to be blamed. The machine then prints out the following blame statement: “Oh, congratulations reviewer 2 in academia! Your amazing ability to take every bit of feedback personally and blow it out of proportion is truly admirable. Keep up the fantastic work of isolating yourself from colleagues, damaging your own reputation, and ensuring that your career suffers immensely. Well done!”
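The interaction above can be sketched in code. The prompt wording, function name, and model identifier below are illustrative assumptions, not the system's actual implementation:

```python
# Hypothetical sketch of how the Blame-o-matic might prompt an LLM.
# The prompt text and structure are assumptions for illustration only.

def build_blame_prompt(target: str) -> list[dict]:
    """Construct chat messages asking the model to sarcastically blame `target`."""
    return [
        {"role": "system",
         "content": "You write short, sarcastic blame statements on the user's behalf."},
        {"role": "user",
         "content": f"Write a sarcastic congratulation blaming: {target}"},
    ]

# The user selects a blame target; the messages would then be sent to a
# chat-completion endpoint (e.g., GPT-3.5) and the response printed out.
messages = build_blame_prompt("reviewer 2 in academia")
print(messages[1]["content"])
```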
Everyone harbors complaints, though they often remain unspoken. Anything can become a source of such frustration, from a loved one like a parent to a treasured item like a musician’s guitar. Such complaints are a natural part of human thought, as Hobbes explained: “the secret thoughts of a man run over all things, holy, profane, clean, obscene, grave, and light, without shame, or blame.” Indeed, theorists advocating for freedom of thought, such as Mill, emphasize the importance of not censoring even “evil thoughts” in order to preserve our cognitive capabilities.
However, it is hard to find someone who stands in solidarity with our frustrated minds, whether among close friends or in anonymous online communities. Even when we do find such a person, sharing our frustrations can be a heavy burden, often requiring lengthy explanations. Furthermore, there is no guarantee that they will understand our perspective well enough to blame on our behalf.
Generative AI, especially in the form of Large Language Models (LLMs), is capable of creating expressions charged with strong emotions, often so realistic that they are indistinguishable from those produced by humans. In the field of HCI, it has long been recognized that computers, or AI, can perform social roles equivalent to those of humans. It is thus natural to imagine an AI that stands in solidarity with us by blaming others on our behalf. However, since aggressive language is mostly avoided in commercial products (e.g., AI speakers), with few exceptions, it is hard to experience such products or services on the market.
On the other hand, recent research suggests that aggressive AI language can be beneficial, or even unavoidable, in carefully controlled situations. For example, when individuals are striving toward new fitness objectives, AIs using impolite and provocative language like “I guess this is your limit” can be helpful rather than detrimental. Likewise, robots tasked with safety management often need to employ assertive language to effectively stop hazardous behaviors.
With the Blame-o-matic, a machine that blames on the user’s behalf, we aim to suggest new design opportunities for social roles that generative AI can undertake.