On the Use of the Plural Voice

“I” claims all the credit. “We” dodges all the responsibility. English doesn’t have a pronoun for this.

If you’ve read more than a couple of articles on this site, you’ve noticed something: I use “we.” Almost never “I.” That was deliberate. And I’m changing it — not because it was wrong, but because it’s saying something I didn’t intend.

This is the story of a pronoun, an attribution problem, and a framework that doesn’t exist yet.


Why “We” Felt Right

I have never done anything in a vacuum. Everything I know, every skill I’ve built, every system I’ve deployed — I got there because thousands of people came before me and left their work where I could find it.

I’ve been part of the open source community since before we called it that, going back to the early 1980s. When I was learning to build distributed networks of computers that could share data and talk to each other, I was building on top of Unix, on C, on Perl, on the work of thousands of systems administrators and engineers who laid the groundwork for what eventually became the Linux ecosystem and the open source movement that powers the technology world today. Certain individuals made groundbreaking contributions, but nobody did it alone.

I have always gone out of my way to acknowledge that. Yes, I did some really cool stuff. Here’s why I was able to: I built on a mountain of great prior art.

When I started working seriously with AI, it felt like an extension of that same principle. The AI has the sum total of human knowledge encoded in its training data. Interacting with it — especially in the design phase, where AI collaboration is genuinely most powerful — feels like working with a team. My ideas get amplified, challenged, completed. The conversational interface is engineered to produce exactly that impression: you talk, it talks back, it matches your tone, it builds on your thoughts. It feels like collaboration.

So I used “we.” It was an acknowledgment of humility. These are my ideas, my direction, my conclusions — but they’ve been amplified by an interaction with something that channels the accumulated knowledge of everyone who came before. Using “I” felt like taking sole credit for a collaborative process. “We” felt honest.


Why “We” Is Being Misread

Here’s the problem: most readers aren’t going to appreciate the philosophical nuance. They’re going to read “we” and think “he means he and the AI, which means the AI did all this and he’s just along for the ride.”

I don’t think that reaction is unreasonable. Public trust in AI sits at 46% globally[1], with a stark split: 39% in advanced economies versus 57% in emerging ones. People are adopting the technology faster than they’re developing comfort with it — the same study found that 57% of workers hide their AI use and present AI-generated work as their own[1]. In that environment, any hint that AI did the heavy lifting triggers skepticism, not curiosity.

The numbers on AI output misrepresentation don’t help. In academia, surveys suggest roughly one in five students have used AI on assignments[2], and UK universities reported approximately 7,000 AI-related academic misconduct cases in the 2023-2024 academic year alone[3]. People are already presenting AI output as their own work at scale. When someone reads “we” on this site, they have every reason to wonder if they’re looking at more of the same.

That’s exactly the opposite of what I’m trying to communicate. I am the author. I am the lead investigator. I am the person in charge. I take responsibility for every line of text, every argument, every assertion in these articles. But when you read “we,” that’s not what you hear.


Why “I” Isn’t Clean Either

So just switch to “I.” Problem solved, right?

Not quite. “I” carries its own dishonesty. If I present everything in the singular voice, I’m implicitly claiming that every word, every phrasing, every structural choice came from me. It didn’t. The AI organized my chaotic stream-of-consciousness rants into coherent outlines. It wrote first drafts from my guidance. It found research sources. It polished my grammar. Some paragraphs in my published articles are essentially untouched AI output — because the AI nailed it on the first pass and I saw no reason to change a word.

Presenting all of that as “I” feels uncomfortably close to what the majority of AI-using workers are doing: using AI, then hiding the fact[1]. Throughout my career, I have always credited the people who helped me succeed. Taking sole credit now, because my collaborator isn’t human, would violate a principle I’ve held for forty years.


Everyone Else Solved This

Here’s what’s interesting: every other domain that involves building on the work of others has figured out attribution.

In technology, we have dependency manifests and license files. “This product was built on library X, Y, and Z.” You know what’s ours and what we built on top of. Clear.
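As a minimal sketch of the clarity a manifest provides (the product and library names here are invented for illustration, not taken from any real project):

```python
# A toy illustration of the attribution a dependency manifest encodes:
# the split between "ours" and "what we built on" is explicit and machine-readable.
# All names below are hypothetical.
manifest = {
    "name": "our-product",
    "dependencies": {  # the prior art the product rests on
        "library-x": "2.1.0",
        "library-y": "0.9.4",
        "library-z": "1.0.2",
    },
}

def built_on(m):
    """Return the third-party components the product was built on top of."""
    return sorted(m["dependencies"])

print(built_on(manifest))  # ['library-x', 'library-y', 'library-z']
```

The point is not the format but the convention: every reader of the manifest knows, without ambiguity, which parts are original and which are inherited.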

In scientific publication, you cite all prior work. Almost no experiment is truly innovative from first principles — you’re extending someone else’s work, adding resolution, exploring a different part of the problem space. And you always give credit. Always.

In academic authorship, every contributor is listed by name. The principal investigator typically appears last. The research assistants and postdocs who did the hands-on work appear first. The roles are understood. The contributions are recognized. And academic publishing has already tackled the AI question directly: every major journal and conference — Nature, Science, ACM, IEEE, NeurIPS, and all five major academic publishers — has issued a formal policy prohibiting AI as an author[4]. The prohibition is universal. AI cannot be listed as a contributor because it cannot be held accountable. That’s a principle worth noting.

Where it gets interesting is the disclosure layer beneath the prohibition. The venues agree that AI cannot be an author, but they disagree on what you have to disclose about AI assistance. NeurIPS requires disclosure only when AI is part of the methodology — the most permissive approach. ACM and IEEE require acknowledgment of AI use with varying specificity. Science requires the most: full prompts must be included in the methods section[4]. The principle is settled. The implementation is still fragmented.

In creative work, the line between plagiarism and homage is attribution. Kurosawa retold Shakespeare in feudal Japan — Throne of Blood is Macbeth, The Bad Sleep Well is Hamlet, Ran is King Lear[5]. Three films, three Shakespeare plays, each transplanted into a completely different cultural context. That’s not plagiarism. He honored the source, made the influence explicit, and created something new on top of it. He was lauded for it.

The common thread: the difference between building on someone’s work and stealing it is whether you acknowledge the contribution.

AI collaboration fits none of these existing models cleanly. There’s no named human to cite. No license file to include. No author list to populate. The collaborator is amorphous — a system trained on the work of millions, producing output that blends with your own in ways neither party can fully decompose.

People are trying. IBM published an AI Attribution Toolkit in 2025 that captures contribution type, amount, and review process — but describes itself as “a first pass” at a voluntary standard[6]. Legal scholars have proposed the AIA icon system, graduated visual indicators for AI involvement levels: Generated, Edited, Suggested[7]. Research presented at CHI 2025 found that AI receives less credit than humans for equivalent work, and argued that attribution needs more granularity than binary disclosure[8]. The existing standard for contributor attribution — the CRediT taxonomy, adopted as NISO Z39.104-2022 — covers 14 types of human contribution but has no provision for AI[6].
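To make the shape of such a framework concrete, here is a purely hypothetical sketch of what a machine-readable attribution record could look like if the AIA graduated levels[7] were combined with a CRediT-style contribution list[6] and the accountability principle from the publisher policies[4]. None of these field names come from an existing standard; this is an illustration, not a proposal.

```python
from dataclasses import dataclass, field
from enum import Enum

class AIALevel(Enum):
    # The three graduated indicators proposed in the AIA icon system [7].
    GENERATED = "AI-generated"
    EDITED = "AI-edited"
    SUGGESTED = "AI-suggested"

@dataclass
class AttributionRecord:
    # Hypothetical record; field names are invented for illustration,
    # loosely modeled on the CRediT contributor taxonomy [6].
    human_author: str
    accountable: bool              # only a human can bear accountability [4]
    ai_level: AIALevel
    ai_contributions: list = field(default_factory=list)

record = AttributionRecord(
    human_author="the author",
    accountable=True,
    ai_level=AIALevel.EDITED,
    ai_contributions=["outlining", "first drafts", "source retrieval", "grammar"],
)
print(record.ai_level.value)  # AI-edited
```

Even a record this crude would let a reader distinguish AI-assisted from AI-curated work at a glance, which is precisely what no current convention allows.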

The pieces are emerging. The standard isn’t here yet.


The Line That Actually Matters

I’ve come to believe the real distinction isn’t “we” versus “I.” It’s this:

AI-assisted work: the human is driving. The ideas, the direction, the judgment, the editorial decisions, the accountability — all human. The AI amplifies: it organizes, polishes, retrieves, and completes patterns.

AI-curated work: the AI produced it. The human selected, tidied, and presented it as their own work product. The direction came from the machine. The human’s contribution was approval.

These are fundamentally different things. And right now, as a reader, you cannot tell which one you’re looking at. There is no convention, no standard, no norm for communicating the difference.

That’s the actual problem. The pronoun was never the disease. It was always the symptom.


This Article Is Its Own Evidence

I should be transparent about something: this article was itself a collaboration.

I sat down and talked — literally, using voice-to-text — through ten stream-of-consciousness sessions. I threw incomplete, half-formed ideas at the AI. Broken jigsaw puzzle pieces. I knew the picture I was trying to assemble but I couldn’t articulate it cleanly. I’d start an analogy, lose the thread, circle back, contradict myself, trail off.

The AI saw the pattern. It organized the chaos, completed my partial thoughts, and summarized the thesis I was reaching for: “Every domain that involves building on others’ work has developed attribution norms. AI-assisted work has none. The pronoun problem is the symptom. The missing framework is the disease.”

I read that and said: yes, that’s exactly what I was trying to say.

Neither of us could have written this article alone. The ideas are mine. The synthesis was collaborative. The result is like a baked cake: you can’t pull it apart and separate out the flour, the eggs, the milk, and the sugar. It’s cake. You experience it one bite at a time. And the fact that you can’t tell, reading any given paragraph, which parts the human wrote and which parts the AI wrote — that’s the point.


Where This Leaves Me

I’m switching to the singular voice. Not because “I” is the right answer, but because “we” is being misread. The philosophical case for “we” still holds — and I hope this article makes that case clearly enough that the intent is understood, even if the practice changes.

There’s another reason, and it connects to why academic publishing made the decision it did. AI cannot be listed as an author because it cannot be held accountable. “We” lets you diffuse responsibility toward an entity that can’t bear it. “I” doesn’t just assert ownership — it asserts accountability. Every claim, every error, every conclusion in these articles is mine to defend. That matters more than credit.

The irony isn’t lost on me: “I” could be read as claiming all the credit while “we” could be read as dodging all the responsibility. The same pronoun problem, two directions, and no choice satisfies both.

The real solution isn’t a pronoun. It’s an attribution framework that doesn’t exist yet — one that lets creators of AI-assisted work communicate honestly about the nature of the collaboration. Something that makes it visible where the human’s contribution begins and the AI’s contribution ends. Academic publishing got halfway there with a universal prohibition and variable disclosure requirements. The proposals from IBM, the AIA system, and the CHI research suggest the rest of the field is starting to follow. But we’re in the pre-standardization phase — the problem has been named, the pieces are emerging, and nobody has assembled them yet.

Until that framework exists, every choice of pronoun is a compromise. I’m choosing the one that makes at least one thing unambiguous: the human is accountable for every word.


References

[1] KPMG and University of Melbourne. “Trust in AI: Global Insights 2025.” Over 48,000 respondents across 47 countries. https://kpmg.com/xx/en/media/press-releases/2025/04/trust-of-ai-remains-a-critical-challenge.html

[2] BestColleges. “One in Five Students Have Used AI on Assignments.” March 2023. N=1,000. https://www.bestcolleges.com/research/college-students-ai-tools-survey/

[3] The Guardian. “UK Universities AI Academic Misconduct Investigation.” Freedom of Information request covering 131 of 155 universities, 2023-2024 academic year. https://www.theguardian.com/education/article/2025/jun/15/thousands-uk-university-students-caught-cheating-ai

[4] Nature, Science/AAAS, ACM, IEEE, NeurIPS, Elsevier, Springer Nature, Wiley, Taylor & Francis, SAGE — all prohibit AI authorship as of 2024-2025. Disclosure requirements vary by venue. https://www.nature.com/nature-portfolio/editorial-policies/ai

[5] Kurosawa, A. Throne of Blood (1957, Macbeth), The Bad Sleep Well (1960, Hamlet), Ran (1985, King Lear). BFI and Criterion Collection analyses. https://www.bfi.org.uk/features/kurosawa-vs-shakespeare

[6] IBM Research. “AI Attribution Toolkit.” May 2025. Self-described as experimental. CRediT taxonomy: NISO Z39.104-2022. https://research.ibm.com/blog/AI-attribution-toolkit

[7] Avery, Abril, and del Riego. “AIA Icon System.” Northwestern Journal of Technology and Intellectual Property, 2024. https://scholarlycommons.law.northwestern.edu/njtip/vol22/iss1/1/

[8] He, Houde, and Weisz. “AI Attribution in Creative Work.” CHI 2025. N=155. https://dl.acm.org/doi/10.1145/3706598.3713522