We build AI that investigates AML alerts. In the process of teaching an agent to do this work, we've had to break down what "investigation" actually means. It's not one task. It's five different types of work that have to happen in sequence, in parallel, and iteratively.
Most tools in this space handle one or two of these modes. Most investigators spend their days manually switching between all five. Understanding this framework explains why shallow AI summaries don't move the needle, and why deep investigation requires a fundamentally different approach.
Mode 1: Quantitative Analysis
This is the work of querying internal transaction data to find patterns, anomalies, and context.
When an alert fires, investigators need to answer a series of questions. What's the velocity and volume of this customer's transactions over time? Are there structuring patterns? Who are the counterparties, and how often do they appear? How does this behavior compare to similar customers? Where did the money come from, and where did it go?
This data lives in a warehouse or database. Accessing it requires SQL or a report request. Most frontline investigators don't write SQL. Those who do still face time constraints when they're working through a queue of 40 alerts.
Investigators typically spend 50-70% of their time gathering data from fragmented systems before they can even begin analysis. Each new query means navigating to another system, running another search, copying results into a spreadsheet.
An AI agent can write SQL in real time, run queries, interpret results, and iterate. It can ask "who are the top counterparties?" then follow up with "what's the transaction history with each of them?" then "do any of these counterparties appear in other flagged cases?" This sequence of dependent queries, each informed by the last, happens automatically.
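Here's a simplified sketch of what that chained querying looks like in code. The schema, table names, and `run_query` helper are illustrative stand-ins, not an actual warehouse setup; each step's output feeds the next query.

```python
import sqlite3

# Illustrative schema: SQLite stands in for the institution's transaction warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (customer_id TEXT, counterparty TEXT, amount REAL, ts TEXT);
    CREATE TABLE alerts (case_id TEXT, counterparty TEXT);
""")

def run_query(sql, params=()):
    """Run one SQL step and return rows; each step is informed by the last."""
    return conn.execute(sql, params).fetchall()

customer_id = "C-1042"  # hypothetical alert subject

# Step 1: who are the top counterparties for this customer?
top_counterparties = run_query("""
    SELECT counterparty, COUNT(*) AS n, SUM(amount) AS total
    FROM transactions WHERE customer_id = ?
    GROUP BY counterparty ORDER BY total DESC LIMIT 10
""", (customer_id,))

# Step 2: pull the transaction history with each of them (depends on step 1).
histories = {
    cp: run_query(
        "SELECT ts, amount FROM transactions WHERE customer_id = ? AND counterparty = ? ORDER BY ts",
        (customer_id, cp),
    )
    for cp, _, _ in top_counterparties
}

# Step 3: do any of those counterparties appear in other flagged cases?
flagged = run_query(
    f"SELECT DISTINCT counterparty, case_id FROM alerts "
    f"WHERE counterparty IN ({','.join('?' * len(histories))})",
    tuple(histories),
) if histories else []
```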
Mode 2: Entity Verification
This is the work of confirming that businesses and individuals are real, registered, and what they claim to be.
Investigators need to know: Is this business registered where it claims to be? When was it formed? Who is the registered agent, and do they appear in other suspicious entity networks? Who are the beneficial owners? Does the business address correspond to a real location?
This information is scattered across 50+ state registries, federal databases, and international sources. Each has a different interface. Comprehensive entity verification can take hours or even days per entity. Under time pressure, investigators focus on the most obvious sources and move on. Thorough verification often only happens on escalated cases.
An AI agent can systematically query every relevant registry, cross-reference registered agents against known networks, and flag anomalies like recently formed LLCs or addresses associated with other suspicious entities. It can identify that a registered agent has been used to form 14 other LLCs in the past 90 days, and surface that pattern as a risk signal.
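The cross-referencing itself is simple once the registry data is in hand. A minimal sketch, with made-up registry records and a 90-day recency window as the assumed thresholds:

```python
from datetime import date, timedelta
from collections import Counter

# Hypothetical registry records; in practice these come from state and federal registry lookups.
registry_records = [
    {"entity": "Acme Trading LLC", "formed": date(2024, 11, 2), "registered_agent": "J. Smith"},
    {"entity": "Nova Imports LLC", "formed": date(2024, 11, 20), "registered_agent": "J. Smith"},
]

RECENT = timedelta(days=90)

def flag_entity_risks(records, today):
    """Flag recently formed entities and registered agents shared across multiple filings."""
    flags = []
    agent_counts = Counter(r["registered_agent"] for r in records)
    for r in records:
        if today - r["formed"] <= RECENT:
            flags.append(f"{r['entity']}: formed within the last 90 days")
        if agent_counts[r["registered_agent"]] > 1:
            flags.append(f"{r['entity']}: registered agent {r['registered_agent']} "
                         f"appears on {agent_counts[r['registered_agent']]} filings")
    return flags

for flag in flag_entity_risks(registry_records, today=date(2025, 1, 10)):
    print(flag)
```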
Mode 3: Open Source Research
This is the work of searching the public web for information about people, businesses, and context.
Does this business have a real web presence? Does the owner have a LinkedIn profile that matches their claimed background? Is there news coverage about this entity? Does the business's stated model match what you see on their website?
This is unstructured research. You can't query it with SQL. You have to search, read, evaluate, and decide what matters. It's time-consuming and inconsistent. Different investigators search for different things, and the depth depends on time pressure.
An AI agent can search systematically across multiple sources, read and interpret what it finds, and flag relevant information. It can check whether product images are stock photos, whether pricing is plausible, whether the "About Us" page contains red flags. It can find a three-year-old local news article that provides useful context.
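The value here is consistency: every case gets the same set of searches regardless of queue pressure. A rough sketch of that structure, where `web_search` and `fetch_page` are hypothetical stand-ins for whatever search and retrieval tooling is actually in use:

```python
def web_search(query):   # hypothetical: returns a list of {"url": ..., "snippet": ...}
    return []

def fetch_page(url):     # hypothetical: returns page text
    return ""

def research_entity(business_name, owner_name):
    """Run the same open-source checks on every case, and keep the evidence attached."""
    queries = [
        f'"{business_name}" official site',
        f'"{business_name}" news',
        f'"{owner_name}" LinkedIn',
        f'"{business_name}" reviews OR complaints',
    ]
    findings = []
    for q in queries:
        for result in web_search(q):
            findings.append({
                "query": q,
                "url": result["url"],
                "excerpt": fetch_page(result["url"])[:500],
            })
    return findings
```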
Mode 4: Compliance Screening
This is the work of checking individuals and entities against sanctions lists, PEP databases, and other watchlists.
Is this person or entity on the OFAC sanctions list? Are they a Politically Exposed Person? Do they appear in adverse media databases? Have they been the subject of prior SARs? Are any of the counterparties flagged?
Most organizations have automated screening at onboarding, but investigation-time screening may still require manual checks across multiple systems. And screening needs to happen on the subject of the alert, the counterparties, and the beneficial owners of any entities involved.
An AI agent can run comprehensive screening across all relevant lists for everyone involved, handling name variations and fuzzy matching, and incorporating results into the broader investigation context.
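Fuzzy matching is the part that trips up naive screening. A minimal sketch using Python's standard-library SequenceMatcher; the watchlist entries and threshold are illustrative, and production screening would rely on official list feeds (OFAC SDN, PEP databases) and purpose-built matching engines:

```python
from difflib import SequenceMatcher

# Illustrative entries only; real screening pulls from the official lists.
WATCHLIST = ["Ivan Petrov", "Global Trade Holdings Ltd", "Maria Gonzalez"]

def normalize(name):
    return " ".join(name.lower().replace(".", "").split())

def screen(name, threshold=0.7):
    """Return possible watchlist matches above a similarity threshold (handles name variations)."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, normalize(name), normalize(entry)).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# Screen the alert subject, the counterparties, and the beneficial owners alike.
for party in ["Ivan Petrov", "I. Petrov", "Acme Trading LLC"]:
    print(party, "->", screen(party))
```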
Mode 5: Reasoning and Synthesis
This is the work of connecting findings from modes 1-4 into a coherent picture.
Does the customer's stated business model explain their transaction patterns? Are the red flags individually explainable, or do they form a pattern? What's the most likely explanation? Is this suspicious activity, or a false positive?
This is the actual judgment work. But investigators often arrive at it exhausted from data gathering. The synthesis happens in their heads or in a Notes document. It's hard to audit and hard to make consistent across a team.
An AI agent can explicitly reason about what it found, weigh red flags against green flags, and articulate why it's reaching a particular conclusion. The reasoning is documented and auditable.
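One way to keep that reasoning auditable is to attach every finding, its source, and its direction to the conclusion rather than leaving the weighing implicit. A simplified sketch of that idea, not our actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    mode: str        # which of the five modes produced this finding
    detail: str
    direction: str   # "red" or "green"
    source: str      # the query, registry, URL, or list that supports it

@dataclass
class Synthesis:
    findings: list = field(default_factory=list)

    def conclusion(self):
        reds = [f for f in self.findings if f.direction == "red"]
        greens = [f for f in self.findings if f.direction == "green"]
        verdict = "escalate" if len(reds) > len(greens) else "likely false positive"
        # The evidence stays attached to the verdict, so the reasoning can be reviewed later.
        return {"verdict": verdict, "red_flags": reds, "green_flags": greens}

s = Synthesis([
    Finding("entity_verification", "LLC formed 30 days before first transaction", "red", "state registry lookup"),
    Finding("quantitative", "volumes consistent with stated business model", "green", "warehouse query #4"),
])
print(s.conclusion()["verdict"])
```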
The Power of Iteration
The five modes aren't linear. A good investigation moves between them.
Quantitative analysis reveals unusual transaction velocity with three counterparties. Entity verification shows two were formed in the last 90 days. Open source research finds that both share a registered agent who appears in 14 other LLCs. This triggers more quantitative analysis: do any of those other LLCs appear in our transaction data? They do. Compliance screening on the network reveals one entity with an OFAC near-match. Now you have a coordinated network, not a coincidental pattern.
An AI agent can follow these threads automatically, iterating 8-15 times per investigation, running 100-250 discrete steps. A human investigator can do this too. But not in 15 minutes, and not consistently across every case.
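In agent terms, the investigation is a loop: each pass looks at everything found so far and decides which mode to run next, until nothing left is worth chasing. A schematic sketch, with the mode handlers and decision function left as hypothetical inputs:

```python
MAX_ITERATIONS = 15

def investigate(alert, modes, decide_next_step, synthesize):
    """Schematic agent loop: each iteration picks a mode based on the evidence so far.

    `modes` maps names like "quantitative", "entity", "osint", "screening" to handlers
    that return new evidence; `decide_next_step` returns a mode name or None to stop.
    """
    evidence = []
    for _ in range(MAX_ITERATIONS):
        step = decide_next_step(alert, evidence)
        if step is None:
            break
        evidence.extend(modes[step](alert, evidence))
    return synthesize(alert, evidence)
```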
What This Means for Investigators
Investigators aren't being replaced. They're being moved from data gathering to decision review. AI handles the five modes, documents everything, and produces an evidence package. The investigator reviews the findings, applies judgment, and makes the final call.
Roe's AI AML Investigator operates across all five modes. It queries your transaction data directly, verifies entities across public registries, searches the open web, screens against compliance watchlists, and synthesizes findings into a clear recommendation. Each investigation takes about 15 minutes and involves 100-250 steps. It connects to your existing systems through a flexible API and 20+ native data connectors, with an average integration time of four weeks.
If your investigators are spending hours gathering data before they can start analyzing, we should talk. Schedule a demo to see how this works with your data.