Help Guide Writer

Help legal experts create clear, accurate, and actionable help guides—faster and with greater consistency.

Task Description

Legal experts often write public-facing help guides to explain legal rights, processes, and next steps to people navigating issues like housing, family law, benefits, or debt. These guides must be accurate, understandable, jurisdiction-specific, and written in a tone that is supportive and respectful of users' needs. Creating and maintaining high-quality guides is time-intensive and often limited by staff capacity.

This task focuses on a system that acts as a co-authoring, review, and optimization partner. It assists legal experts in drafting new guides, revising outdated ones, and testing for readability, accuracy, and usability. It aligns drafts with organizational style guides and formatting rules, ensures key legal and procedural details are included, and flags risks or omissions. The system can also assist in creating multilingual versions and testing user comprehension.

It works by pulling from trusted legal sources, guide templates, and internal best practices to propose initial drafts or suggested revisions. It can highlight unclear sections, suggest plain-language alternatives, and simulate different user scenarios to ensure the guide is understandable by people with varied literacy levels or legal knowledge.

This tool is especially useful for legal help organizations, court self-help sites, and state-wide portals that aim to publish trustworthy, accessible content at scale. It supports consistency across jurisdictions and guide types, while freeing experts to focus on legal accuracy and user strategy—not repetitive formatting or editing.

Success means that legal help guides are created and maintained more efficiently, while meeting or exceeding expert standards for clarity, accuracy, and usefulness to the public.

Primary audiences: Content authors, legal editors, legal aid teams, website managers, communications staff.

Quantitative Summary of Stakeholder Feedback

  • Average Value Score: 4.26 (High)
  • Collaboration Interest: 4.05 (Second-highest among all clusters)

This cluster ranks among the most promising. Stakeholders see clear benefits in federated collaboration—especially around shared style guides, templates, and reviewing tools.


Existing Projects

This was one of the most actively explored areas, with several organizations already experimenting:

  • ILAO has experimented with AI drafting followed by human editing.
  • Michigan Legal Help teams use templates and manual audit protocols.
  • Some groups are using LLMs with prompts and structured guide outlines to quickly generate first drafts.
  • Emerging vendor tools offer AI-based “knowledge base auditors.”
  • Prompt libraries and structured guide templates used internally
  • Manual content review tools (checklists, editorial workflows)
  • Early attempts to use AI to detect outdated or duplicative content
  • Some interest in AI-generated visuals or video summaries

“We’ve been testing GPT to write first drafts of guide content. The human review is still essential, but it helps us go faster.”

“What we need is not just a writer bot, but a reviewer bot. Something to find outdated sections or broken links.”

Technical Next Steps & Protocols Needed

  • Standardized content formats:
    • FAQ, step-by-step, warning boxes, definitions
  • Prompt templates for common guide types (e.g., “How to respond to an eviction notice”)
  • Content audit tools:
    • Detect outdated laws, broken links, or duplication across sites
  • House style and plain language checkers
  • Multimodal tools:
    • Convert text into video, audio, slides, or images
  • Use AI to draft, then prompt human editing and style review
  • Build shared checklists, style guides, and prompt libraries
  • Develop content reviewer tools to crawl and flag outdated pages
  • Collaborate across states on shared templates and modular content (e.g., “About the eviction process” vs. “State-specific filing deadlines”)

“The tool should help maintain consistency — not just write something that sounds legal.”

“Imagine being able to search all of our guides and see which are missing updated info on a new law.”

“AI could help us find stale or broken content we’ve forgotten exists.”

“The biggest win is speed + consistency — not automation.”
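The content-audit tooling described above (detecting broken links and duplication across sites) can be sketched simply. The snippet below is a minimal illustration, not a production crawler: it extracts links from a guide's HTML with a regex and flags paragraphs duplicated across pages by hashing normalized text. All function and variable names here are hypothetical.

```python
import hashlib
import re

LINK_RE = re.compile(r'href="(https?://[^"]+)"')

def extract_links(html: str) -> list[str]:
    """Pull absolute http(s) links out of a guide's HTML."""
    return LINK_RE.findall(html)

def paragraph_fingerprints(html: str) -> set[str]:
    """Hash each normalized paragraph so duplicates can be matched across pages."""
    paras = re.findall(r"<p>(.*?)</p>", html, flags=re.S)
    fingerprints = set()
    for p in paras:
        # Strip tags and collapse whitespace so cosmetic differences don't hide duplicates.
        text = re.sub(r"\s+", " ", re.sub(r"<[^>]+>", "", p)).strip().lower()
        if text:
            fingerprints.add(hashlib.sha256(text.encode()).hexdigest())
    return fingerprints

def find_duplicates(pages: dict[str, str]) -> dict[str, set[str]]:
    """Map each paragraph hash to the pages containing it, keeping only
    hashes that appear on more than one page."""
    seen: dict[str, set[str]] = {}
    for name, html in pages.items():
        for fp in paragraph_fingerprints(html):
            seen.setdefault(fp, set()).add(name)
    return {fp: names for fp, names in seen.items() if len(names) > 1}
```

A real audit tool would also issue HTTP requests to verify each extracted link and compare statute citations against a current-law source; both are omitted here to keep the sketch self-contained.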

Stakeholder Commentary

This idea drew broad enthusiasm and a real sense of urgency:

  • Many legal content teams feel understaffed and overworked; AI could help them create more with fewer resources.
  • Some caution about hallucinations and false legal certainty in AI-generated guides.
  • Style and tone drift: AI may generate inconsistent or overly technical writing.
  • Overuse: concern that content teams might publish drafts too quickly.
  • Dependence on human oversight: everyone emphasized the need for human editing.
  • Strong interest in shared tools to flag outdated materials across jurisdictions or sites.

“It should be a co-pilot, not a ghostwriter. The goal is speed and consistency—not replacing humans.”

“We don’t want a flood of mediocre content — we want better quality, faster.”



How to Measure Quality?

⚖️ Legal Accuracy and Jurisdictional Relevance

  • Reflects current law, procedures, and court practices in the appropriate jurisdiction
  • Differentiates rules that vary by region or situation
  • Flags when cited laws or processes are out of date

✍️ Plain Language and Readability

  • Rewrites complex legal terms or sentences in plain language
  • Breaks content into logical sections with clear headings and bullets
  • Scores well on readability metrics (e.g., 6th–8th grade reading level where appropriate)
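The readability target above (roughly a 6th–8th grade reading level) is typically checked with a standard metric such as the Flesch-Kincaid grade level. The sketch below shows the idea with a deliberately naive vowel-group syllable counter; production tools (e.g., the textstat library) use more careful syllable estimation and multiple metrics.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count vowel groups, with a floor of 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59
```

Short sentences with common words score several grade levels lower than dense legal phrasing, which is exactly the gap a plain-language rewrite should close.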

🧠 Usability and Actionability

  • Clearly describes steps the user must take, with links to forms, deadlines, and phone numbers
  • Provides warnings and context where decisions or actions carry risk
  • Supports diverse users, including those with lower literacy or limited digital access

🔁 Consistency and Style Alignment

  • Aligns with internal or statewide style guide rules for voice, tone, and structure
  • Flags inconsistencies or formatting errors
  • Allows version tracking and side-by-side edits for multiple authors
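Flagging style inconsistencies, as the bullets above describe, can start with simple rule-based checks before any AI is involved. The sketch below flags jargon terms and overlong sentences against a small, hypothetical house-style rule set; the terms, replacements, and sentence limit are illustrative assumptions, not any organization's actual style guide.

```python
import re

# Hypothetical house-style rules: jargon terms with preferred plain-language swaps.
JARGON = {
    "pursuant to": "under",
    "aforementioned": "this",
    "remit": "send",
}
MAX_SENTENCE_WORDS = 25  # assumed limit; real style guides set their own

def style_issues(text: str) -> list[str]:
    """Return human-readable flags for jargon and overlong sentences."""
    issues = []
    lowered = text.lower()
    for term, plain in JARGON.items():
        if term in lowered:
            issues.append(f'jargon: replace "{term}" with "{plain}"')
    for sentence in re.split(r"[.!?]+", text):
        words = sentence.split()
        if len(words) > MAX_SENTENCE_WORDS:
            issues.append(f"long sentence ({len(words)} words): {' '.join(words[:6])}...")
    return issues
```

Checks like these are cheap to run on every draft, leaving human editors and any AI reviewer to focus on the judgment calls a rule list cannot catch.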

🌍 Translation and Accessibility Support

  • Offers high-quality first drafts in other languages with glossary-aligned terms
  • Flags culturally inappropriate or unclear phrasing
  • Includes captions, alt-text, and adaptable formats for different platforms

⏱️ Efficiency and Expert Review Experience

  • Reduces drafting time while improving initial draft quality
  • Allows subject matter experts to quickly review, edit, and approve content
  • Offers side-by-side view of human vs. AI-generated suggestions with edit history

Stakeholders offered many ideas for how to measure this task:

  • Plain language score + readability analysis
  • Accuracy to statute and local procedure
  • Consistency with house style or templates
  • Community validation: Have people used and understood this content?

“A good tool not only writes but checks for whether this guide is consistent with others on the site.”

“Compare against gold-standard human guides. And show your sources.”

The rubric could be as follows:

  1. Readability and plain language scoring
  2. Legal accuracy and jurisdictional fit, assessed by subject matter experts (SMEs)
  3. Update frequency: Can the tool surface and help update stale content?
  4. Audience comprehension testing
  5. Content coverage: Are more topics written faster?