The Seven Roles of AI in the Human/AI LLM Relationship

In its previous conversation with me on design hierarchy, Claude was not just serving as an assistant but as an analyst. What I’d originally interpreted as Claude missing the point was actually Claude responding with its own perception of my intent, providing the feedback I’d expect from a product user or colleague reviewing my work.

As we all figure out how to implement AI and Large Language Models (LLMs) into our daily lives (they’re here to stay), we need to understand our partnership with our new teammates. I asked Claude some questions about its self-perception:

Me: “What do we call you other than just ‘AI’? AI app? Service?”

Claude had several general suggestions: Chatbot, AI Assistant, AI Model, Conversational AI, Generative AI, Copilot, AI App / AI Tool / AI Service.

It finally ended with, “Honestly, most people just say ‘AI’ and everyone knows what they mean these days. I tend to just call myself ‘Claude‘ or an ‘AI assistant‘. What feels most natural to you?”

I didn’t answer. But I would say “Assistant” is the most basic of the roles Claude plays. So I created my own comprehensive list of roles that Claude and other “AI assistants” play in our ongoing relationships, based on how I and others have used them over the last few years.

Here’s how I summarize those roles (the tasks are not all-inclusive):

  1. Assistant. Tasks: general knowledge, translation, copywriting and editing. User: general population.
  2. Advisor. Tasks: research, education (basic to master). User: novices, students.
  3. Analyst. Tasks: workshopping, critique, research, analysis, design, code, documentation, organization. User: experts in their field, students, researchers, knowledge workers. Arguably one of its most desirable roles.
  4. Associate. Tasks: work assistance (code, design, how-to, DIY). User: anyone. This is the still-murky area of how much credit we take for human- vs. AI-generated work. It depends, of course, on how much was done by either, and it might start to look too much like the Accomplice role if we’re not careful or concerned.
  5. Advocate. Tasks: interpersonal bonds, situational life advice. User: anyone. This role, too, might be one of its most desirable.
  6. Accomplice. Tasks: using content for illicit or unethical purposes, e.g., plagiarism, mis- and disinformation, human rights abuses, self-harm, etc. A side effect of LLMs with a huge potential for abuse and a key motive for human fear of their use. User: bad actors, vulnerable or impressionable humans. The least desirable and most destructive role.
  7. Apprentice. Tasks: LLM learns from its users, improving and becoming more efficient with every chat. User: all the preceding users. Beneficiary: the LLM, AI companies and guardians, and, ultimately, end users.

Additionally…we, the partners to AI, might consider Acknowledgment when benefiting financially or reputationally from AI-generated content we could not have completed without the assistance of AI. This might not be necessary within a workplace that has already integrated AI into its processes (unless your work is entirely derived from AI), but outside that workplace – in marketing, promotion, the launch of a product, etc. – it might be more useful.

You’ll see how I’ve added Acknowledgment to the bottom of the Design Hierarchy Poster.

The Accomplice role has some of the most consequential impacts on society, affecting our culture, politics, human rights, our well-being, and even our survival. It’s the essence of the headlines and subsequent hysteria we’re seeing around AI.

Those fears are not irrational, but the online discourse in the social media Share-o-Sphere (I’ll have another post on this soon) is not helping to ease them. Neither have the past efforts of numerous bad actors in government, in organized crime, in your own neighborhood, or perhaps even within an AI company itself.

This is where the AI company must assign itself, or be assigned, the role of Guardian,1 guaranteeing the safety and security of its users for our Assurance. If this doesn’t happen on its own, and soon (my guess: it won’t), we’ll need a solution dreaded by bad actors (and corporate lobbyists) everywhere.

That solution is Regulation. Regulation should not be a job killer, but that is the battle cry of those who want unrestrained and ungoverned decision- and money-making. Don’t let the bad actors, lobbyists, legislators, and compromised news outlets tell you otherwise.

My next post is about the topic of Regulation and who we can predict, in the near term, will be the only sure winners in this age of AI proliferation.

  1. Anthropic’s Constitutional AI is an excellent, yet still early and flawed, attempt at upholding standards of ethics. Much further discussion and adoption among every AI company is needed to ensure the role of the AI Accomplice is extremely limited or non-existent. ↩︎