March 15, 2026 · 7 min read · Elias Teubner
#ai #typescript #mcp #legal-tech

RIS MCP Server — When AI Meets Legal Research

A law firm came to us with a simple request: "We spend hours searching through legal databases. Can AI help?" The answer was yes — but not in the way you'd think. We didn't build a chatbot. We built a tool that makes their AI assistant actually understand law.

What is MCP?

The Model Context Protocol (MCP) is a standard for giving AI agents access to external tools and data. Think of it as a USB port for AI — plug in a capability, and any compatible AI assistant can use it. Claude, GPT, Gemini — any agent that speaks MCP can use the tools we built. One protocol, universal access.
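Concretely, an MCP server advertises its tools as JSON descriptors in a `tools/list` response. A minimal sketch of what one of our tools could look like on the wire (the tool name and parameters here are illustrative, not our actual API):

```typescript
// Shape of a tool as advertised in an MCP `tools/list` response.
interface McpTool {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, { type: string; description: string }>;
    required?: string[];
  };
}

// Hypothetical example of a search tool declaration.
const searchFederalLaw: McpTool = {
  name: "search_federal_law",
  description: "Search Austrian federal law (Bundesrecht) in RIS by phrase and date.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Search phrase, e.g. 'Kündigungsfrist'" },
      inForceOn: { type: "string", description: "ISO date the law must be in force on" },
    },
    required: ["query"],
  },
};
```

Any MCP-compatible agent reads this descriptor and knows how to call the tool, which is what "one protocol, universal access" means in practice.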

The Problem

Austria's Rechtsinformationssystem (RIS) is the country's legal information database. It has a public API, and it holds essentially every Austrian law, regulation, and court decision. The API is complex, though. Legal queries aren't simple keyword searches. You need to know:

  • Which body of law applies (federal, state, EU)
  • Which specific sections are relevant
  • How laws reference each other (§ 42 Abs. 3 references § 18 Abs. 1)
  • The date context (was this law in effect at the time in question?)

Lawyers know this intuitively. AI doesn't — unless you teach it.
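Those four dimensions map onto a small query model. A sketch, where the types and the citation parser are illustrative rather than our actual schema:

```typescript
// Which body of law applies.
type Jurisdiction = "Bundesrecht" | "Landesrecht" | "EU";

// A specific section reference, e.g. "§ 42 Abs. 3".
interface NormRef {
  paragraph: number; // the § number
  absatz?: number;   // the Abs. (subsection) number, if given
}

// Parse citations like "§ 42 Abs. 3" into a structured reference.
function parseNormRef(citation: string): NormRef | null {
  const m = citation.match(/§\s*(\d+)(?:\s*Abs\.\s*(\d+))?/);
  if (!m) return null;
  return { paragraph: Number(m[1]), ...(m[2] ? { absatz: Number(m[2]) } : {}) };
}

// The full query: jurisdiction, section, date context, and search phrase.
interface LegalQuery {
  jurisdiction: Jurisdiction;
  norm?: NormRef;
  inForceOn?: string; // was the law in effect at this time?
  phrase?: string;
}
```

Making these dimensions explicit in the tool inputs is what "teaching" the AI looks like: it can no longer forget to say which jurisdiction or which date it means.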

Our Solution

We built an MCP server with two types of tools:

Search tools that build proper API calls to RIS. They handle pagination, filter by law type, date, and jurisdiction, and return structured results. Instead of the AI trying to query a complex API directly (and getting it wrong), it calls our search tool with a natural description of what it needs.
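The request-building part can be sketched like this. The parameter names follow the pattern of the RIS OGD documentation quoted later (Suchworte, FassungVom), but the base URL and exact mapping here are illustrative; the real values came out of the trial-and-error process described below:

```typescript
interface SearchOptions {
  phrase: string;      // free-text search phrase
  inForceOn?: string;  // maps to FassungVom (ISO date)
  page?: number;       // maps to Seitennummer (pagination)
}

// Build a RIS-style query URL from a natural description of the search.
function buildRisQuery(base: string, opts: SearchOptions): string {
  const params = new URLSearchParams({ Suchworte: opts.phrase });
  if (opts.inForceOn) params.set("FassungVom", opts.inForceOn);
  params.set("Seitennummer", String(opts.page ?? 1));
  return `${base}?${params.toString()}`;
}
```

The AI never sees this layer; it just says what it wants, and the tool translates that into a call the server will actually accept.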

Decomposition tools that take a complex legal question and break it into smaller, searchable sub-queries. For example:

"Is this employment contract clause valid under Austrian labor law?"

Becomes:

  1. Search AngG (Angestelltengesetz) for employment contract requirements
  2. Search ABGB for general contract validity provisions
  3. Search relevant OGH decisions for precedent on similar clauses
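The decomposition itself is done by the model; what the tool pins down is the output contract. A sketch of that contract, with illustrative values for the example above:

```typescript
// One searchable sub-query produced by decomposing a complex question.
interface SubQuery {
  source: string;       // statute or court to search, e.g. "AngG", "ABGB", "OGH"
  searchPhrase: string; // what to search for there
  rationale: string;    // why this source is relevant to the original question
}

// Illustrative decomposition of the employment-contract question.
const decomposition: SubQuery[] = [
  { source: "AngG", searchPhrase: "Arbeitsvertrag Erfordernisse", rationale: "Employment contract requirements" },
  { source: "ABGB", searchPhrase: "Vertragsgültigkeit", rationale: "General contract validity" },
  { source: "OGH", searchPhrase: "Klausel Arbeitsvertrag Gültigkeit", rationale: "Precedent on similar clauses" },
];
```

Each sub-query then flows into a search tool, so one hard question becomes three answerable ones.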

The Part Nobody Talks About: Reading the Docs

Skip past "What is MCP" and all the architecture talk. The real story of this project is me, the RIS API documentation PDF, and several months of trial and error that no blog post ever prepared me for.

The RIS API does have documentation. It's long. It's thorough. Every parameter is listed. It looks professional. It is also almost useless for figuring out how to write a call that returns something usable.

Here's a real excerpt from the parameter reference for the federal law search — one of the more complete sections:

Excerpt · RIS-OGD Dokumentation

Suchworte — search across the entire content of the document. Type: PhraseSearchExpression

Titel — search by title or abbreviation of the statute. Type: PhraseSearchExpression

Index — search by index number according to the „Systematisches Verzeichnis des Bundesrechts" (the systematic index of federal law). Type: PhraseSearchExpression

FassungVom — the date on which the statute is to be in force. If no date is given, all documents are returned, regardless of whether they are in force. Type: date

Kundmachungsorgan — search by publication organ (e.g. „BGBl. I Nr. ", „BGBl. II Nr. ", „BGBl. III Nr. ", „JGS Nr."). Type: PhraseSearchExpression

That's fine. It tells you the parameters exist. It does not tell you:

  • Which combinations the server actually accepts
  • Which ones silently return an empty result set
  • Which ones 500 the endpoint
  • What PhraseSearchExpression syntax the server will parse — quoted strings? wildcards? boolean operators?
  • How pagination really works once results exceed the default page size
  • What happens when you mix FassungVom with a query for a repealed law

The Judikatur endpoint — the one my MCP server actually uses, the one that searches court decisions — doesn't even have its own section in the PDF. You get a schema file reference (RISJudikaturNutzdaten.xsd) and the implied expectation that you'll figure the rest out. Every useful query shape I landed on was arrived at by guessing, sending, reading the result, and guessing again.

What didn't help

AI tools didn't help. The RIS API is too niche to be in any training set in a useful way. Claude could read the PDF with me, but when a call returned an empty array, it couldn't tell me why. It guessed. I guessed. We guessed together. At some point I stopped asking and just edited the Python manually.

RIS support didn't help. I wrote a detailed email asking about parameter interactions. The reply came back a few days later:

"Ja, diese Parameter existieren." ("Yes, these parameters exist.")

That was the whole answer. They confirmed the parameters I had named in my email were, in fact, parameters. Thank you.

What actually helped

Sitting with the PDF at 2am and changing one parameter at a time until something returned plausible output. Then taking that output to the law firm and asking: is this the right case? The law firm was the ground truth. Every correct result was a lawyer saying "yes, that's the precedent we'd cite." Every wrong result sent me back to the documentation.
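The 2am loop can be sketched as a tiny harness: take a known-good parameter set, vary exactly one parameter at a time, and fire off each variant to see which ones return anything. The parameter names below are illustrative, and `fetchRis` would be a stand-in for the actual HTTP call:

```typescript
type Params = Record<string, string>;

// Yield one variant per (parameter, candidate value), keeping every
// other parameter fixed at its known-good baseline value.
function* oneAtATime(
  base: Params,
  variants: Record<string, string[]>,
): Generator<Params> {
  for (const [key, values] of Object.entries(variants)) {
    for (const value of values) {
      yield { ...base, [key]: value };
    }
  }
}

// Usage sketch: for (const p of oneAtATime(knownGood, candidates)) { await fetchRis(p); }
```

Varying one parameter per request is what made the silent failures attributable: if the baseline returns results and the variant returns an empty array, the one changed parameter is the culprit.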

I used Claude Code for the TypeScript architecture of the MCP layer on top. That part was fine. The part that had no shortcut was the read-the-PDF, rewrite-the-Python, validate-with-humans loop. Weeks of that. Most of the nights run together now. But the end state is a server where every query shape the MCP tools expose has been empirically validated against real legal results — which is the only reason a lawyer can trust the output.

The Sparring Partner

The server doesn't just fetch data — it helps the AI reason about it. When a lawyer asks their AI assistant a complex question, the MCP server helps the AI build a structured research basis. It's like having a junior associate who's read every law and can pull the relevant ones instantly. The lawyer still does the analysis, the judgment, the client advice — but the research that used to take hours now takes minutes.

For the development itself, Claude Code handled the TypeScript architecture and the prompt engineering that makes the decomposition tools work well, Gemini was great for researching legal edge cases and testing output quality, and Copilot handled the Microsoft/Azure integration work. Each AI tool has its lane.

Working with the Law Firm

Building AI tools for a specialized domain means listening before you build. What we learned and applied:

  • What they actually needed vs. what we thought they needed. We started building a tool that summarized entire laws. They wanted one that could find the right paragraph in seconds. Precision over breadth — and we adapted fast.
  • Domain language matters. Legal German has specific terms that don't map cleanly to everyday language. "Kündigungsfristen" isn't just "deadlines" — it's specifically termination notice periods under employment law. Get the terminology wrong and you get the wrong results.
  • Trust is built through transparency. Lawyers verify every AI output. So we built the tools to always show their sources — every answer links back to the exact § and Abs. in the RIS. No black boxes.
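That transparency requirement shows up directly in the result type every tool returns. A sketch, where the field names and the deep-link URL pattern are illustrative rather than the guaranteed RIS scheme:

```typescript
// Every tool result carries its provenance: no passage without a source.
interface SourcedResult {
  text: string;       // the relevant passage
  norm: string;       // e.g. "§ 1159 ABGB Abs. 2"
  documentId: string; // RIS document number
  url: string;        // deep link back into RIS for verification
}

// Illustrative deep-link builder (URL pattern assumed, not guaranteed).
function risDeepLink(documentId: string): string {
  return `https://www.ris.bka.gv.at/Dokument.wxe?Dokumentnummer=${encodeURIComponent(documentId)}`;
}
```

Because the citation travels with the text, a lawyer can click through and verify any answer in the primary source, which is what makes the tool usable in practice.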

The Best AI Tools Don't Replace Experts

They give experts superpowers. That's exactly what this MCP server does — it doesn't practice law. It makes lawyers dramatically faster. The firm's researchers went from spending hours on routine legal lookups to spending minutes. That time goes back to the work that actually requires human judgment — advising clients, building arguments, understanding context that no database can capture. That's the trade we were designing for from day one.

— Elias
