WR: An AI Chat Interface Layer Proposal for Public AI Apps with Relevance to Long-Form and Structured Writing Projects Across All Knowledge Sectors

We are still in the early years of public AI chat systems. The basic interface patterns for how people work with these tools are forming now and will likely harden quickly, just as they have with every major software interface that came before. Once established, those patterns tend to persist for decades. This creates a short window in which structural improvements can still be articulated and potentially adopted before habits and expectations become fixed.

I recently completed a document describing a structural model called Workspaces & Records (WR) and submitted it directly to leadership channels at several major AI platform companies, including ChatGPT/OpenAI, Microsoft AI, and Google AI. It is not a feature request for any one product and not tied to any specific tool. It is an architectural proposal for a reusable, user-controlled project memory designed to operate alongside chat-based AI systems across many domains.

Full document:
https://docs.google.com/document/d/1Nos2kpiGIQuOOke6Byc3w0C8Fv2mj4r1f_OQyU9hmOk/edit?usp=sharing

This document was written to stand on its own and be readable without follow-up explanation. It is intentionally detailed and somewhat technical. I’m sharing it because many people who work in long-form or structured projects already think in terms of persistent materials that evolve over time, and that mindset overlaps with a broader question about how emerging AI tools may support serious work over the long term.

A brief personal context for why I wrote it: I’ve used Scrivener for about a decade and consider it my home base whenever a project needs to be organized seriously. I’ve never published a book, largely because most of my professional life was spent owning and building a statewide professional services company with roughly fifty employees. Long-form writing had to remain secondary during those years, but structured project work did not. Over the past few years I’ve used AI chat tools extensively across technical, research, and personal projects unrelated to writing. Working that way exposed repeated friction in maintaining continuity and structured context over time. Workspaces & Records emerged from that practical experience rather than from any single writing-related need.

The underlying issue is straightforward. Real projects persist and accumulate structured knowledge: notes, plans, reference material, evolving drafts, research, and decisions. Chat sessions are transient. Anyone using AI for ongoing work must repeatedly reconstruct context, maintain parallel notes, and reintroduce background information into new conversations. This works, but it does not scale well and becomes increasingly inefficient and prone to drift as projects grow. WR is a proposed structural layer that lets users maintain a stable, reusable project memory that can be selectively attached to conversations as needed and updated over time with user approval.

In simple terms, a Workspace represents a long-running project or domain. Records are structured documents within that workspace: outlines, research summaries, timelines, reference material, configurations, and similar content. Selected Records can be attached to a conversation as stable context, allowing the AI to operate around durable, user-controlled material rather than requiring repeated reconstruction. The intent is not to replace authorship or domain expertise but to support them by stabilizing project context over time.
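To make the model concrete, here is a minimal sketch of that structure in Python. The terms Workspace and Record come from the proposal; the class names, fields, and the `attach` method are my own illustrative assumptions, not an API the document defines.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """A structured document inside a workspace (outline, timeline, notes, etc.)."""
    title: str
    body: str

@dataclass
class Workspace:
    """A long-running project or domain holding durable Records."""
    name: str
    records: dict = field(default_factory=dict)

    def add(self, record: Record) -> None:
        # Records persist in the workspace across conversations.
        self.records[record.title] = record

    def attach(self, titles: list) -> str:
        # Build stable context from only the Records the user selects
        # for this particular conversation.
        chosen = [self.records[t] for t in titles if t in self.records]
        return "\n\n".join(f"## {r.title}\n{r.body}" for r in chosen)

# Usage: a project with two Records, only one attached to a given chat.
ws = Workspace("Novel: Winter Draft")
ws.add(Record("Outline", "Act I: arrival. Act II: the storm."))
ws.add(Record("Style Notes", "Close third person; spare prose."))
context = ws.attach(["Outline"])
```

The key design point is selective attachment: the workspace holds everything durable, but each conversation sees only the user-chosen subset.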

Although this concept was not created specifically for writing, it has obvious relevance for anyone working on long-form or research-heavy projects. More broadly, it applies anywhere structured knowledge accumulates over months or years: engineering, academia, science, policy work, and advanced personal projects. Many AI platforms are already beginning to move toward more persistent, project-oriented interaction models, which suggests recognition of the same underlying need. The risk is that these capabilities emerge slowly and in fragmented, proprietary forms rather than as a broadly portable, user-controlled layer.

One reason for sharing the document publicly is that structural changes in widely used tools often gain traction only when people in multiple fields recognize their value and communicate that to the platforms they use. If individuals across different domains conclude that a reusable project memory layer would materially improve how they work with AI, and say so, adoption may accelerate. If not, similar capabilities may still emerge, but potentially over a much longer period of incremental and incompatible development.

For anyone curious, a simple way to explore the idea is to skim the document directly, or to download a copy and try it with an AI chat: open the WR document, choose File → Download → PDF, upload the PDF into your preferred AI chat app, and discuss it there to see how such a model might affect your own projects.

This is not a request for change to any specific tool, and I understand Scrivener’s position on AI. It’s simply a structural proposal offered at a moment when the interface patterns for working with AI are still forming, shared here for those who spend a lot of time thinking about long-form projects, durable materials, and how tools either support or complicate work that unfolds over years rather than minutes.

Best regards,
Lec


How does this proposal align with the Model Context Protocols being promoted by Anthropic? Tinderbox MCP Server - AI Integration for Knowledge Management | MCPlane


Good question. My understanding is that they operate at different layers and are complementary rather than overlapping.

MCP addresses how AI systems securely access external tools, files, and data sources — essentially a standardized way for models to retrieve and interact with context that lives outside the chat itself.

The WR idea is focused on something entirely different: how users maintain durable, structured project memory across time — roles, constraints, evolving reference material, and other long-lived context — and selectively attach that to conversations in a controlled way.

Even if MCP is implemented, you would still be managing your own project context across notes, documents, and chats much as people do now. MCP would make it easier for systems to reach that material, but it doesn’t in itself define a persistent, user-controlled project memory layer or a way for that material to be refined and reused across long-running work.

I tend to see MCP as addressing the access and interoperability side of the problem, while WR addresses (1) how that durable context is structured, maintained, and selectively attached in the first place, and (2) tools that do not currently exist to help you talk with the AI about that content. They are different layers and complementary, not competitive.
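One way to picture the layering: an access layer retrieves raw material, while a WR-like layer decides which durable Records define a session's context. The sketch below is purely illustrative; `mcp_fetch` is a stand-in, not a real MCP call, and the URIs are invented.

```python
def mcp_fetch(uri: str) -> str:
    """Stand-in for an MCP-style resource read: raw access to external content."""
    fake_store = {"notes://outline": "Act I: arrival. Act II: the storm."}
    return fake_store.get(uri, "")

def wr_attach(selected: list) -> str:
    """WR-like layer: the user chooses which durable Records to attach;
    the access layer below is merely how they get retrieved."""
    return "\n\n".join(mcp_fetch(uri) for uri in selected)

# The user-controlled selection lives at the WR layer,
# independent of how the underlying access is implemented.
context = wr_attach(["notes://outline"])
```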


I realize WR is a really long document. Here is a suggested approach that lets you skip reading the whole thing while still getting the information most relevant to you.

Open the WR document from the link below and select File → Download → PDF. Then upload the PDF to your AI chat and ask the following question.

Link to the WR document:
https://docs.google.com/document/d/1Nos2kpiGIQuOOke6Byc3w0C8Fv2mj4r1f_OQyU9hmOk/edit?usp=sharing

Copy this question into the prompt:

“I’m experimenting with how AI tools might evolve.
You don’t need to read every word of this document. Based on the capabilities it describes, answer this:
If you had the abilities described in this document, what could you and I do together that current chat-based AI systems don’t support well today, and what frustrations or inefficiencies in long-form or multi-session work would this remove?”

I’ve had some time to reflect on this since the original discussion, and the issue feels increasingly structural rather than tool-specific.

The real question isn’t whether AI helps or harms writing. It’s that current systems provide almost no durable way to control how much of the thinking and wording they take over.

Most AI chat tools were built around short, session-based exchanges. Serious writing projects, by contrast, unfold over months or years and accumulate structure over time — notes, outlines, research, evolving drafts, editorial decisions, and revisions. Because these systems treat interaction as temporary conversation rather than long-term collaboration, context must be repeatedly reconstructed and expectations about AI participation must be renegotiated in each session. Over time this creates drift, fragmentation, and a gradual blurring of authorship boundaries.

For writers and editors, that raises a deeper issue than convenience or efficiency. In many professional or creative contexts, AI works best when it operates as a disciplined editorial partner rather than a co-author — analyzing, critiquing, and suggesting within clear limits while leaving voice, structure, and final language under human control. But today those limits exist only as ad hoc prompts. There is no persistent way to declare, once and clearly, how much compositional or cognitive work the AI is allowed to perform within a given project.

That absence of durable constraints is beginning to show up in other domains as well. Colleagues in research and education are wrestling with the same underlying problem: we are rapidly adapting serious work to tools whose core design assumptions were optimized for temporary exchanges rather than sustained intellectual effort. The result is uncertainty about authorship, responsibility, and the appropriate degree of AI participation across long-running work.

What seems to be missing is a structural layer that allows users to define and maintain explicit boundaries around AI participation — whether the system is functioning as analyst, editor, bounded drafting partner, or something else — and to keep those boundaries consistent across sessions and over time. Instead of relying on repeated prompting and vigilance, those expectations would travel with the project itself, providing a stable framework for how the assistant is meant to contribute.

Even for those who choose not to use AI tools in their own writing, the structural design of these systems will increasingly shape the environments in which others write and learn. Students, early-career writers, and researchers are already working within AI-mediated contexts, often without clear ways to preserve authorship boundaries or independent reasoning. Whether one adopts these tools personally or not, it seems worth considering what kinds of structures would best support serious writing and thinking for the next generation of writers as well as for ourselves.

That line of thought is what led me to develop the Workspaces & Records (WR) model described earlier in the thread. At its core is the idea of persistent project memory paired with reusable “constraint” settings that define how the AI may participate in analysis, drafting, revision, and decision-support. In writing contexts, that can mean establishing a stable “editor mode” in which the assistant critiques and suggests but does not generate extended replacement prose unless explicitly authorized. In other contexts, different levels of participation can be declared just as clearly, allowing the degree of AI involvement to remain intentional and visible throughout a project.
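A declared "editor mode" could be as simple as a small, reusable configuration that travels with the project. This is a sketch under my own assumptions; the field names and enforcement check are illustrative, not part of the WR document.

```python
# Reusable participation constraints for one project, persisted in the
# workspace rather than restated in every session.
EDITOR_MODE = {
    "may_critique": True,       # analyze and point out problems
    "may_suggest": True,        # propose small, local edits
    "may_draft_prose": False,   # no extended replacement text
    "max_suggested_words": 40,  # cap on any single suggestion
}

def allowed(action: str, constraints: dict) -> bool:
    """Check whether a requested AI action fits the project's declared limits."""
    return bool(constraints.get(f"may_{action}", False))

# The assistant may critique, but drafting replacement prose is off-limits
# unless the user changes the project's constraints.
can_critique = allowed("critique", EDITOR_MODE)
can_draft = allowed("draft_prose", EDITOR_MODE)
```

The point is not the particular fields but that the constraints are declared once, stored with the project, and consulted in every session rather than renegotiated by prompting.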

I’m sharing this here not as a proposal for Scrivener or any specific tool, but because long-form writers are among the first to feel the structural mismatch between durable projects and session-based AI. If these tools are going to support serious work over time without eroding authorial control, it seems likely that clearer, more durable ways of defining AI participation will be needed.

For convenience, here’s the document link again for anyone curious:
https://docs.google.com/document/d/1Nos2kpiGIQuOOke6Byc3w0C8Fv2mj4r1f_OQyU9hmOk/edit?usp=sharing

Best,
Lec