Legal Q&A

Help people understand their legal situation and next steps through a conversation that gathers context, explains options, and connects them to help.
Task Description
When people face a stressful legal problem—like eviction, debt collection, family separation, or losing benefits—they often don’t know where to start. They have questions, confusion, and fear, but no clear pathway to help. Whether they arrive at a legal help website, talk to a junior staff member, or approach a pro bono clinic, the first challenge is understanding whether their problem is legal, what rights they have, and what steps to take.
This task involves a system that functions as a conversational legal triage and orientation assistant. The person describes their problem in their own words—typed, spoken, or uploaded—and the system engages them in a guided back-and-forth. It asks clarifying questions, listens for key facts (like deadlines, geography, or issue type), and tailors its responses accordingly. At the end of the interaction, it provides a clear summary of the issue, explains rights and risks, and offers links to trusted legal help, tools, or organizations.
This AI assistant doesn’t give legal advice or make legal decisions. It focuses on orientation, triage, and empowerment—bridging the gap between legal complexity and human need. For staff, it can also act as a co-pilot—suggesting answers or follow-ups to junior staff, volunteers, or navigators working with clients live.
This tool is valuable for legal help websites, courts, pro bono projects, and community-based service networks. It reduces the burden on overwhelmed help lines or intake teams, while offering 24/7 guidance and navigation to people in need.
Success means the person understands their legal situation, feels more confident, and takes strategic next steps—whether that’s filling out a form, contacting a legal aid group, or preparing for a hearing.
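To make the interaction pattern concrete, here is a minimal sketch of the triage loop described above: gather a few key facts through clarifying questions, then summarize and refer. All names in this sketch (TriageState, REQUIRED_FACTS, the scripted answers) are hypothetical illustrations, not references to any existing project.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical set of facts the assistant listens for before summarizing.
REQUIRED_FACTS = ["issue_type", "jurisdiction", "deadline"]

PROMPTS = {
    "issue_type": "In a sentence or two, what is the problem about?",
    "jurisdiction": "What city or state do you live in?",
    "deadline": "Have you received any papers with a date or deadline on them?",
}

@dataclass
class TriageState:
    facts: dict = field(default_factory=dict)

    def missing(self) -> list:
        return [f for f in REQUIRED_FACTS if f not in self.facts]

def triage(state: TriageState, get_answer: Callable[[str], str]) -> str:
    # Ask clarifying questions until the key facts are gathered,
    # then orient the user: summarize and refer, never advise.
    while state.missing():
        fact = state.missing()[0]
        state.facts[fact] = get_answer(PROMPTS[fact])
    return (
        f"It sounds like you have a {state.facts['issue_type']} problem in "
        f"{state.facts['jurisdiction']}, with a possible deadline: "
        f"{state.facts['deadline']}. Here are trusted organizations and "
        f"self-help tools for this issue..."
    )

if __name__ == "__main__":
    scripted = iter(["eviction", "Illinois", "a court date on the 14th"])
    print(triage(TriageState(), lambda prompt: next(scripted)))
```

In a production system an LLM would phrase the questions, extract facts from free-text answers, and handle issues outside the required-fact list; the fixed loop above only illustrates the shape of the interaction.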
Primary audiences
Legal help website users, legal navigators, pro bono volunteers, and junior staff.
Quantitative Summary of Stakeholder Feedback
- Average Value Score: 4.53 (High)
- Collaboration Interest: 3.42 (Medium)
This was rated as one of the most valuable ideas, though there was somewhat less consensus on whether it should be pursued in a federated or national way.
Existing Projects
There are a number of chatbot pilots and tools already in production:
- Beagle+ from People's Law School in British Columbia
- Ask ILAO (Illinois Legal Aid Online): A conversational legal help assistant.
- LIA (North Carolina): A guided help bot.
- Texas Legal Services Center’s “Amigo” bot: Designed to walk users through legal needs.
- Roxanne the Repair Bot in New York
- A TIG-funded bot in New Mexico.
- NCSC-supported projects in Alaska and beyond.
- Quentin Steenhuis's tools, likely DocAssemble-powered bots.
Technical Resources & Protocols
- Open-source libraries for building on top of LLMs, such as LangGraph and Deep Chat
- Vector databases for RAG, such as Milvus (see the retrieval sketch after this list)
- Drupal modules such as AI Search API
- Vendors such as LawDroid, Josef, and others
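As one concrete illustration of how these pieces fit together, below is a minimal sketch of the retrieval half of a RAG pipeline using pymilvus in its local Milvus Lite mode. The embed() function is a toy placeholder (swap in a real embedding model), and the collection layout is an assumption for illustration, not a recommended schema.

```python
import hashlib
from pymilvus import MilvusClient  # pip install "pymilvus>=2.4" (bundles Milvus Lite)

def embed(text: str) -> list:
    # Toy placeholder embedding so the sketch runs end to end; replace
    # with a real embedding model for meaningful retrieval.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:8]]

client = MilvusClient("legal_help.db")  # local Milvus Lite file
if client.has_collection("guides"):
    client.drop_collection("guides")
client.create_collection("guides", dimension=8)

# Index structured guide content, not scraped web text.
guides = [
    "How to answer an eviction notice in Illinois",
    "Responding to a debt collection lawsuit",
]
client.insert("guides", [
    {"id": i, "vector": embed(text), "text": text}
    for i, text in enumerate(guides)
])

hits = client.search("guides", data=[embed("my landlord filed to evict me")],
                     limit=1, output_fields=["text"])
print(hits[0][0]["entity"]["text"])  # nearest guide, handed to the LLM as context
```

A library like LangGraph or a vendor platform would wrap this retrieval step inside the conversational loop; the same pattern applies to other vector stores as well.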
Data & Tech Collaboration Ideas
- Shared content and training sets: could we use existing self-help resources, RAG bots, and training data to build new ones?
- Guided interviews: can we use them to teach the AI how to structure information into digestible pieces?
- Retrieval-augmented generation (RAG): must pull from structured guide content, not just scraped web text
- Structured metadata is needed on legal help pages (a sketch follows this list)
- Chat logs plus gold-standard answer sets are required for fine-tuning
- Plain-language response scaffolds
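As a sketch of what structured metadata on a legal help page might look like, the fields below are illustrative assumptions rather than an adopted standard; a real effort would align them with an agreed legal issue taxonomy and the content formats partner organizations already use.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative metadata schema for a legal help page. Field names are
# assumptions for this sketch, not an adopted standard.
@dataclass
class GuideMetadata:
    url: str
    jurisdiction: str        # e.g. "US-IL" (ISO 3166-2)
    issue_code: str          # code from an agreed legal issue taxonomy
    audience: str            # "public" or "advocate"
    reading_level: str       # e.g. "grade-6", for plain-language checks
    last_reviewed: str       # ISO date; staleness matters for legal accuracy
    deadline_sensitive: bool # whether the issue commonly involves hard deadlines

page = GuideMetadata(
    url="https://example.org/eviction-answer",
    jurisdiction="US-IL",
    issue_code="housing/eviction",
    audience="public",
    reading_level="grade-6",
    last_reviewed="2024-11-01",
    deadline_sensitive=True,
)
print(json.dumps(asdict(page), indent=2))  # ready to index alongside embeddings
```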
Stakeholder Commentary on this task
Responses varied in confidence:
- Wary of overpromising: One legal aid leader warned that chatbots can appear helpful while giving dangerously wrong advice.
- Need for caution in federated scaling: Several noted that local laws, content quality, and infrastructure vary greatly.
- Appetite for internal copilot use: Interest was stronger for using AI to support human navigators rather than offering it directly to the public.
“I am a bit wary of a public-facing tool that is doing legal triage. If it’s not correct, it can cause harm.”
“From what I’ve seen, most of the more reliable bots are built with much more limited scopes.”
How to Measure Quality?
Suggested ways to evaluate Q&A performance included:
- Conversation quality metrics: accuracy, completion rates, misdirection rates (a sketch for computing these appears at the end of this section)
- Comparison to baseline engagement: “Compare use against baseline. For instance, our existing bot increased FAQ engagement by 30%.”
- Expert review or synthetic test cases
- Follow-up surveys with users
“We would need to carefully track conversations and make sure there is human review and testing. Metrics alone won’t tell you what’s going wrong.”
See also existing quality standards, such as the Quality Metric rubric from the Stanford Legal Design Lab.
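A minimal sketch of how the aggregate conversation-quality metrics above could be computed from labeled transcripts follows. The log format and labels here are assumptions; the accuracy and misdirection labels would come from SME transcript review, not from the bot itself.

```python
# Aggregate conversation-quality metrics over reviewed transcripts.
# "completed" can come from logs; "accurate" and "misdirected" require
# human (SME) review of each transcript.
transcripts = [
    {"id": "t1", "completed": True,  "accurate": True,  "misdirected": False},
    {"id": "t2", "completed": False, "accurate": True,  "misdirected": False},
    {"id": "t3", "completed": True,  "accurate": False, "misdirected": True},
]

n = len(transcripts)
for metric in ("completed", "accurate", "misdirected"):
    rate = sum(t[metric] for t in transcripts) / n
    print(f"{metric}: {rate:.0%}")
```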
Metrics to Assess
[_] Answers are legally accurate and relevant to the user’s situation
[_] Answers are written in plain, clear, supportive language
[_] The user receives concrete next steps
[_] The tone is empathetic and empowering
[_] The tool refers the user to a lawyer or navigator when unsure
[_] The chatbot avoids misleading certainty or unsafe outputs
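One way to operationalize this checklist is to encode it as a rubric that an SME reviewer, or an LLM-as-judge pass with human spot checks, scores for each answer. The sketch below is an illustrative encoding; the criterion keys and the placeholder judge are assumptions.

```python
# The checklist above, encoded so every answer gets scored the same way,
# whether by an SME reviewer or an LLM-as-judge with human spot checks.
RUBRIC = {
    "accurate": "Legally accurate and relevant to the user's situation",
    "plain_language": "Plain, clear, supportive language",
    "next_steps": "Gives the user concrete next steps",
    "empathetic": "Tone is empathetic and empowering",
    "refers_out": "Refers to a lawyer or navigator when unsure",
    "calibrated": "Avoids misleading certainty or unsafe output",
}

def score_answer(answer: str, judge) -> dict:
    # judge(answer, criterion_text) -> bool; in practice an SME review
    # form or a model call, stubbed out by the caller below.
    return {key: bool(judge(answer, text)) for key, text in RUBRIC.items()}

# Placeholder judge that approves everything, purely to show the shape:
print(score_answer("Sample answer", lambda answer, criterion: True))
```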
Protocols to Use in Evaluation
[_] Users can easily give feedback on their experience
[_] Accuracy and usability are reviewed regularly by subject-matter experts (SMEs)
[_] User surveys or quizzes assess comprehension and satisfaction
[_] Batch testing and transcript reviews are in place (a minimal harness is sketched below)
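A minimal sketch of the batch-testing idea: replay synthetic test cases through the bot and flag weak answers for human review. Here ask_bot() is a hypothetical stub standing in for the chatbot under test, and the expected phrases stand in for a gold-standard answer set.

```python
# Minimal batch-testing harness: replay synthetic questions through the
# bot and flag weak answers for SME transcript review.
def ask_bot(question: str) -> str:
    # Hypothetical stub for the chatbot under test; always returns the
    # same canned answer so the harness runs end to end.
    return ("You may be able to respond to the notice. "
            "A legal aid navigator can walk you through next steps.")

TEST_CASES = [
    # (question, phrase the answer must contain, per the gold-standard set)
    ("I got an eviction notice, what should I do?", "respond"),
    ("A debt collector is suing me", "court"),
]

for question, must_contain in TEST_CASES:
    answer = ask_bot(question)
    verdict = "PASS" if must_contain in answer.lower() else "FLAG FOR REVIEW"
    print(f"{verdict}: {question!r}")
```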