How LinqAlpha assesses investment theses using Devil’s Advocate on Amazon Bedrock

This is a guest post by Suyeol Yun, Jaeseon Ha, Subeen Pang and Jacob (Chanyeol) Choi at LinqAlpha, in partnership with AWS.

LinqAlpha is a Boston-based multi-agent AI system built specifically for institutional investors. Over 170 hedge funds and asset managers worldwide use LinqAlpha to streamline their investment research for public equities and other liquid securities, transforming hours of manual diligence into structured insights with multi-agent large language model (LLM) systems. The system supports and streamlines agentic workflows across company screening, primer generation, stock price catalyst mapping, and now, pressure-testing investment ideas through a new AI agent called Devil’s Advocate.

In this post, we share how LinqAlpha uses Amazon Bedrock to build and scale Devil’s Advocate.

The challenge

Conviction drives investment decisions, but an unexamined investment thesis can introduce risk. Before allocating capital, investors often ask, “What am I overlooking?” Identifying blind spots usually involves time-consuming cross-referencing of expert calls, broker reports, and filings. Confirmation bias and scattered workflows make it hard to challenge one’s own ideas objectively. Consider the example thesis, “ABCD will be a generative AI beneficiary with successful AI monetization and competitive positioning.” The thesis seems sound until you probe whether open-source alternatives could erode pricing power or whether monetization mechanisms are fully understood across the product stack. These nuances often get missed. This is where a devil’s advocate comes in: a role or mindset that deliberately challenges the thesis to uncover hidden risks and weak assumptions. For investors, this kind of structured skepticism is essential to avoiding blind spots and making higher-conviction decisions.

Investors have traditionally engaged in devil’s advocate thinking through manual processes, debating ideas in team meetings, or mapping out pros and cons through informal scenario analysis. LinqAlpha set out to structure this manual and improvised process with AI.

The solution

Devil’s Advocate is an AI research agent purpose-built to help investors systematically pressure-test their investment theses using their own trusted sources at 5–10 times the speed of traditional review. To do so, the Devil’s Advocate agent in LinqAlpha follows a structured four-step process, from thesis definition and document ingestion to automated assumption analysis and structured counterargument generation:

  1. Define your thesis
  2. Upload reference documents
  3. AI-driven thesis analysis
  4. Structured critique and counterarguments

This section outlines how the system works from end to end: how investors interact with the agent, how the AI parses and challenges assumptions using trusted evidence, and how the results are presented. In particular, we highlight how the system decomposes theses into assumptions, links each critique to source materials, and scales this process efficiently using Claude Sonnet 4 by Anthropic in Amazon Bedrock. Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI companies and Amazon available for your use through a unified API.

Define your thesis

Investors articulate their thesis as a core assertion supported by underlying reasoning. For example, “ABCD will be a generative AI beneficiary with successful AI monetization and competitive positioning.” They enter this thesis in the Investment Thesis field in Devil’s Advocate, as shown in the following screenshot.

Screenshot of the Investment Thesis field in the Devil’s Advocate interface

Upload reference documents

Investors upload research such as broker reports, expert calls, and public filings in the Upload Files field, as shown in the following screenshot. The system parses, chunks, and indexes this content into a structured evidence repository.

Screenshot of the Upload Files field in the Devil’s Advocate interface

AI-driven thesis analysis

Devil’s Advocate deconstructs the thesis into explicit assertions and implicit assumptions. It scans the evidence base to find content that challenges or contradicts those assumptions.

Structured critique and counterarguments

The system generates a structured critique where each assumption is restated and directly challenged. Every counterpoint is sourced and linked to specific excerpts from the uploaded materials. The following screenshot shows how the system produces a structured, evidence-linked critique. Starting from the investor’s thesis, it extracts assumptions, challenges them, and anchors each counterpoint to a specific source. In this case, the claim that ABCD will benefit from generative AI is tested against two core weaknesses: a lack of a proven monetization path despite new features such as Product, and a track record of avoiding price increases due to customer sensitivity. Each argument is grounded in uploaded research, such as expert calls and analyst commentary, with clickable citations. Investors can trace each challenge back to its source and evaluate whether their thesis still holds under pressure.

The LinqAlpha interface produces a detailed answer presenting the evidence for and against the thesis.

Application flow

The Devil’s Advocate agent is a multi-agent system that orchestrates specialized agents for document parsing, retrieval, and rebuttal generation. Unlike a fixed pipeline, these agents interact iteratively: the analysis agent decomposes assumptions, the retrieval agent queries sources, and the synthesis agent generates counterarguments before looping back for refinement. This iterative back-and-forth is what makes the system agentic rather than a static workflow. The architecture follows a four-stage flow from data ingestion to critique delivery.

Enter thesis

Users submit an investment thesis, often as an investment committee (IC) memo. The input is received by a custom application running in an Amazon Elastic Compute Cloud (Amazon EC2) instance, which routes the request to Amazon Bedrock. Claude Sonnet 4 by Anthropic in Amazon Bedrock interprets the statement and decomposes it into core assumptions. Amazon EC2 runs a Python-based orchestration layer built by LinqAlpha, which coordinates API calls, manages logging, and controls agent execution.
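
The orchestration code is internal to LinqAlpha, but a minimal sketch of this step might look like the following Python snippet, which calls Claude Sonnet 4 through the Amazon Bedrock Converse API to break a thesis into assumptions. The Region, model ID, prompt wording, and function name are illustrative assumptions, not LinqAlpha's actual implementation.

import json
import boto3

# Bedrock Runtime client; Region and model ID below are illustrative assumptions.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-sonnet-4-20250514-v1:0"  # assumed Claude Sonnet 4 model ID

def decompose_thesis(thesis: str) -> list[dict]:
    """Ask the model to split a thesis into explicit and implicit assumptions."""
    prompt = (
        "Decompose the following investment thesis into explicit and implicit "
        "assumptions. Return only a JSON array of objects with 'assumption_id' "
        f"and 'assumption' fields.\n\nThesis: {thesis}"
    )
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 2000, "temperature": 0.2},
    )
    # The Converse API returns a list of content blocks; take the text block.
    text = response["output"]["message"]["content"][0]["text"]
    return json.loads(text)  # production code would validate the JSON before use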

Upload documents

Uploaded documents are handled by a preprocessing pipeline running in an EC2 instance, which extracts raw data and converts it into structured chunks. The EC2 instance runs LinqAlpha’s parsing application written in Python and integrated with Amazon Textract for document parsing. AWS Lambda or AWS Fargate could have been alternatives, but Amazon EC2 was selected because customers in regulated finance environments required persistent compute with auditable logs and strict control over networking. Raw files are stored in Amazon Simple Storage Service (Amazon S3), structured outputs go into Amazon Relational Database Service (Amazon RDS), and parsed content is indexed by Amazon OpenSearch Service for retrieval.
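
The parsing pipeline itself is proprietary, but a simplified sketch of the ingestion step follows: it archives the raw file in Amazon S3, extracts text with Amazon Textract, splits the text into chunks, and indexes the chunks in OpenSearch Service. The bucket, index, endpoint, and field names are illustrative assumptions, and the chunking shown here is deliberately naive.

import boto3
from opensearchpy import OpenSearch

s3 = boto3.client("s3")
textract = boto3.client("textract")
# Placeholder endpoint; a real deployment would use the OpenSearch Service
# domain endpoint with SigV4 or basic authentication.
opensearch = OpenSearch(hosts=[{"host": "search-example-domain.us-east-1.es.amazonaws.com", "port": 443}], use_ssl=True)

BUCKET = "research-raw-files"   # assumed S3 bucket for raw source documents
INDEX = "evidence-chunks"       # assumed OpenSearch index for parsed content

def ingest_document(doc_id: str, local_path: str) -> None:
    # Keep the original file in S3 for auditability.
    with open(local_path, "rb") as f:
        data = f.read()
    s3.put_object(Bucket=BUCKET, Key=f"{doc_id}.pdf", Body=data)

    # Extract raw text with Textract (synchronous API; multi-page files would
    # use start_document_text_detection against the S3 object instead).
    result = textract.detect_document_text(Document={"Bytes": data})
    lines = [b["Text"] for b in result["Blocks"] if b["BlockType"] == "LINE"]

    # Naive fixed-size chunking; the production pipeline also applies a VLM pass
    # to preserve tables and section structure before indexing.
    chunk_size = 40
    for i in range(0, len(lines), chunk_size):
        opensearch.index(
            index=INDEX,
            id=f"{doc_id}-{i // chunk_size}",
            body={"doc_id": doc_id, "chunk": i // chunk_size, "text": "\n".join(lines[i:i + chunk_size])},
        )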

Analyze thesis

Claude Sonnet 4 by Anthropic in Amazon Bedrock issues targeted retrieval queries across Amazon OpenSearch Service and aggregates counter-evidence from Amazon RDS and Amazon S3. A structured prompt template enforces consistency in the rebuttal output. For example, the agent receives prompts like:

You are an institutional research assistant designed to act as a Devil’s Advocate. 
Your task is to challenge investment theses with structured, evidence-linked counterarguments. 
Always use provided documents (expert calls, broker reports, 10-Ks, transcripts). 
If no relevant evidence exists, clearly state "no counter-evidence found".
Thesis: {user_thesis}
Step 1. Identify Assumptions
- Extract all explicit assumptions (stated directly in the thesis).
- Extract implicit assumptions (unstated but required for the thesis to hold).
- Label each assumption with an ID (A1, A2, A3...).
Step 2. Retrieve and Test
- For each assumption, issue retrieval queries against uploaded sources (OpenSearch index, RDS, S3).
- Prioritize authoritative sources in this order:
   1. SEC filings (10-K, 10-Q, 8-K)
   2. Expert call transcripts
   3. Broker/analyst reports
- Identify passages that directly weaken, contradict, or raise uncertainty about the assumption.
Step 3. Structured Output
For each assumption, output in JSON with the following fields:
{
  "assumption_id": "A1",
  "assumption": "",
  "counter_argument": "",
  "citation": {
       "doc_type": "10-K",
       "doc_id": "ABCD_10K_2023",
       "page": "47",
       "excerpt": "Management noted that monetization of Product features remains exploratory, with no committed pricing model."
  },
  "risk_flag": " (relative importance of this counterpoint to the thesis)"
}
Step 4. Output Formatting
- Return all assumptions and critiques as a JSON array.
- Ensure every counter_argument has at least one citation.
- If no evidence found, set counter_argument = "No counter-evidence found in provided sources" and citation = null.
- Keep tone factual and neutral (avoid speculation).
- Avoid duplication of evidence across assumptions unless highly relevant.
Step 5. Analyst Voice Calibration
- Write counter_arguments in the style of an institutional equity research analyst. 
- Be concise (2–3 sentences per counter_argument).
- Focus on material risks to the investment case (competitive dynamics, regulation, margin compression, technology adoption).

The following is a sample output:

[
  {
    "assumption_id": "A1",
    "assumption": "ABCD will successfully monetize GenAI features like Product",
    "counter_argument": "Recent disclosures suggest Product monetization is still experimental, with management highlighting uncertainty around pricing models. This raises questions about near-term revenue contribution.",
    "citation": {
      "doc_type": "10-K",
      "doc_id": "ABCD_10K_2023",
      "page": "47",
      "excerpt": "Management noted that monetization of Product features remains exploratory, with no committed pricing model."
    },
    "risk_flag": "High"
  },
  {
    "assumption_id": "A2",
    "assumption": "Open-source competitors will not significantly erode ABCD's pricing power",
    "counter_argument": "Expert commentary indicates increasing adoption of open-source alternatives for creative workflows, which could pressure ABCD’s ability to sustain premium pricing.",
    "citation": {
      "doc_type": "Expert Call",
      "doc_id": "EC_DesignAI_2024",
      "page": "3",
      "excerpt": "Clients are experimenting with Stable Diffusion-based plugins as lower-cost substitutes for ABCD Product."
    },
    "risk_flag": "Medium"
  }
]
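
Because the prompt requires machine-readable JSON, the orchestration layer can validate the model’s response before it reaches the user interface. The following is a minimal sketch of such a check, assuming the schema shown above and a High/Medium/Low risk scale; it is not LinqAlpha's actual validation code.

import json

ALLOWED_RISK_FLAGS = {"High", "Medium", "Low"}  # assumed scale, consistent with the sample output
NO_EVIDENCE = "No counter-evidence found in provided sources"

def validate_critique(raw_output: str) -> list[dict]:
    """Check that each critique item follows the formatting rules in the prompt."""
    items = json.loads(raw_output)
    for item in items:
        # Every assumption needs an ID like A1, A2, ... and a non-empty counter-argument.
        assert item["assumption_id"].startswith("A"), "assumption_id must look like A1, A2, ..."
        assert item["counter_argument"], "counter_argument must not be empty"
        if item["counter_argument"] == NO_EVIDENCE:
            # The prompt requires citation = null when no evidence was found.
            assert item["citation"] is None
        else:
            # Otherwise every counter-argument must carry at least one citation.
            citation = item["citation"]
            assert citation and citation.get("doc_id") and citation.get("excerpt")
        assert item["risk_flag"] in ALLOWED_RISK_FLAGS
    return items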

Review output

The final critique is returned to the user interface, showing a list of challenged assumptions and supporting evidence. Each counterpoint is linked to original materials for traceability. This end-to-end flow enables scalable, auditable, and high-quality pressure-testing of investment ideas.

Architecture diagram of the Devil’s Advocate application flow across Amazon EC2, Amazon Bedrock, Amazon S3, Amazon RDS, and Amazon OpenSearch Service

System components

The Devil’s Advocate agent operates as a multi-agent system that orchestrates parsing, retrieval, and rebuttal generation across AWS services. Specialized agents work iteratively, with each stage feeding back into the next, supporting both document fidelity and reasoning depth. Investors interact with the system in two ways, and these inputs form the foundation for downstream processing: they enter their thesis as a natural-language statement of their investment view, often in the form of an IC memo, and they upload finance-specific materials such as earnings transcripts, 10-Ks, broker reports, or expert call notes.

Uploaded materials are parsed into structured text and enriched with semantic structure before indexing:

  • Amazon Textract – Extracts raw text from PDFs and scanned documents
  • Claude Sonnet 3.7 vision-language model (VLM) – Enhances Amazon Textract outputs by reconstructing tables, interpreting visual content, and segmenting document structures (headers, footnotes, charts)
  • Amazon EC2 orchestration layer – Runs LinqAlpha’s Python-based pipeline that coordinates Amazon Textract, Amazon Bedrock calls, and data routing

Processed data is stored and indexed for fast retrieval and reproducibility:

  • Amazon S3 – Stores raw source files for auditability
  • Amazon RDS – Maintains structured content outputs
  • Amazon OpenSearch Service – Indexes parsed and enriched content for targeted retrieval
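
As an illustration of the retrieval step, a per-assumption counter-evidence query against the OpenSearch index might look like the following sketch. The endpoint, index, and field names are the same illustrative assumptions used in the ingestion sketch; in production the queries are formulated by the retrieval agent rather than built directly from the raw assumption text.

from opensearchpy import OpenSearch

opensearch = OpenSearch(hosts=[{"host": "search-example-domain.us-east-1.es.amazonaws.com", "port": 443}], use_ssl=True)
INDEX = "evidence-chunks"  # assumed index name

def find_counter_evidence(assumption: str, size: int = 5) -> list[dict]:
    """Retrieve passages that may weaken or contradict an assumption."""
    query = {
        "size": size,
        "query": {"match": {"text": {"query": assumption, "operator": "or"}}},
    }
    hits = opensearch.search(index=INDEX, body=query)["hits"]["hits"]
    return [
        {"doc_id": h["_source"]["doc_id"], "excerpt": h["_source"]["text"], "score": h["_score"]}
        for h in hits
    ]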

Reasoning and rebuttal generation are powered by Claude Sonnet 4 by Anthropic in Amazon Bedrock, which decomposes the thesis into explicit and implicit assumptions, formulates targeted retrieval queries against OpenSearch Service, aggregates counter-evidence from Amazon RDS and Amazon S3, and composes structured, citation-linked rebuttals that are returned through the Amazon EC2 orchestration layer to the user interface.

The LinqAlpha Devil’s Advocate agent uses a modular multi-agent design where different Claude models specialize in distinct roles:

  • Parsing agent – Combines Amazon Textract for OCR with the Claude Sonnet 3.7 VLM for structural enrichment of documents. This stage makes sure tables, charts, and section hierarchies are faithfully reconstructed before indexing.
  • Analysis agent – Powered by Claude Sonnet 4, decomposes the thesis into explicit and implicit assumptions that can be tested individually.
  • Retrieval agent – Powered by Claude Sonnet 4, formulates retrieval queries against OpenSearch Service and aggregates counter-evidence from Amazon RDS and Amazon S3 with long-context reasoning.
  • Synthesis agent – Also using Claude Sonnet 4, composes structured rebuttals, citation-linked to original sources, and formats outputs in machine-readable JSON for auditability.

These agents run iteratively: the Parsing agent enriches documents, the Retrieval agent surfaces potential counter-evidence, and the Synthesis agent generates critiques that might trigger additional retrieval passes. This back-and-forth orchestration, managed by a Python-based service on Amazon EC2, makes the system genuinely multi-agentic rather than a linear pipeline.
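
The orchestration logic is proprietary, but the iterative loop could be sketched as follows: analysis produces assumptions, retrieval gathers candidate counter-evidence, synthesis drafts a critique, and an assumption that still lacks evidence triggers a broader retrieval pass. The functions decompose_thesis and find_counter_evidence refer to the illustrative sketches above; synthesize_rebuttal is a hypothetical helper that would call Claude Sonnet 4 with the structured prompt shown earlier, and the pass limit is an assumption.

MAX_PASSES = 3  # assumed cap on refinement passes
NO_EVIDENCE = "No counter-evidence found in provided sources"

def run_devils_advocate(thesis: str) -> list[dict]:
    """Simplified analysis -> retrieval -> synthesis loop."""
    assumptions = decompose_thesis(thesis)                   # analysis agent
    critiques: list[dict] = []
    for a in assumptions:
        evidence = find_counter_evidence(a["assumption"])    # retrieval agent
        for _ in range(MAX_PASSES):
            critique = synthesize_rebuttal(a, evidence)      # synthesis agent (hypothetical helper)
            if critique["counter_argument"] != NO_EVIDENCE:
                break
            # No usable evidence yet: broaden the query and retrieve again.
            evidence = find_counter_evidence(a["assumption"], size=20)
        critiques.append(critique)
    return critiques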

Implementing Claude Sonnet 3.7 and Claude Sonnet 4 in Amazon Bedrock

The LinqAlpha Devil’s Advocate agent employs a hybrid approach on Amazon Bedrock, combining Claude Sonnet 3.7 for document parsing with vision-language support and Claude Sonnet 4 for reasoning and rebuttal generation. This separation delivers both document fidelity and analytical rigor. Key capabilities include:

  • Enhanced parsing with the Claude Sonnet 3.7 VLM – Sonnet 3.7’s multimodal capabilities augment Amazon Textract by reconstructing tables, charts, and section hierarchies that plain OCR often distorts. This makes sure that financial filings, broker reports, and scanned transcripts maintain structural integrity before entering retrieval workflows.
  • Advanced reasoning with Claude Sonnet 4 – Sonnet 4 delivers stronger chain-of-thought reasoning, sharper assumption decomposition, and more reliable generation of structured counterarguments. Compared to prior versions, it aligns more closely with financial analyst workflows, producing rebuttals that are both rigorous and citation-linked.
  • Scalable agent deployment on AWS – Running on Amazon Bedrock allows LinqAlpha to scale dozens of agents in parallel across large volumes of investment materials. The orchestration layer on Amazon EC2 coordinates Amazon Bedrock calls, enabling fast iteration under real-time analyst workloads while minimizing infrastructure overhead.
  • Large context and output windows – With a 1M-token context window and support for outputs up to 64,000 tokens, Sonnet 4 can analyze entire 10-K filings, multi-hour expert call transcripts, and long-form IC memos without truncation. This enables document-level synthesis that was previously infeasible with shorter-context models.
  • Integration with AWS services – Through Amazon Bedrock, the solution integrates with Amazon S3 for raw storage, Amazon RDS for structured outputs, and OpenSearch Service for retrieval. This provides LinqAlpha with more secure deployment, full control over customer data, and the elastic scalability required by institutional finance clients.

For hedge funds, asset managers, and research teams, the choice of Amazon Bedrock with Anthropic models is not merely about technology; it directly addresses core operational pain points in investment research:

  • Auditability and compliance – Every counterargument is linked back to its source document (10-K, broker note, transcript), creating an auditable trail that meets institutional governance standards.
  • Data control – The Amazon Bedrock integration with private S3 buckets and Amazon Virtual Private Cloud (Amazon VPC) deployed EC2 instances keeps sensitive documents within the firm’s secure AWS environment, a critical requirement for regulated investors.
  • Workflow speed – By scaling agentic workflows in parallel, analysts save hours during earnings season or IC prep, compressing review cycles from days to minutes without sacrificing depth.
  • Decision quality – Sonnet 3.7 preserves document fidelity, and Sonnet 4 adds financial reasoning strength, together helping investors uncover blind spots that would otherwise remain hidden in traditional workflows.

This combination of AWS-based multi-agent orchestration and LLM scalability makes the LinqAlpha Devil’s Advocate agent uniquely suited to institutional finance, where speed, compliance, and analytical rigor must coexist. With Amazon Bedrock, the solution gained managed orchestration and built-in integration with AWS services such as Amazon S3, Amazon EC2, and OpenSearch Service, providing fast deployment, full control over data, and elastic scale.

“This helped me objectively gut-check my bullish thesis ahead of IC. Instead of wasting hours stuck in my own confirmation bias, I quickly surfaced credible pushbacks, making my pitch tighter and more balanced.”

— PM at Tiger Cub Hedge Fund

Conclusion

Devil’s Advocate is one of over 50 intelligent agents in LinqAlpha’s multi-agent research system, each designed to address a distinct step of the institutional investment workflow. Traditional processes often emphasize consensus building, but Devil’s Advocate extends research into the critical stage of structured dissent, challenging assumptions, surfacing blind spots, and providing auditable counterarguments linked directly to source materials.

By combining Claude Sonnet 3.7 (for document parsing with VLM support) and Claude Sonnet 4 (for reasoning and rebuttal generation) on Amazon Bedrock, the system delivers both document fidelity and analytical depth. Integration with Amazon S3, Amazon EC2, Amazon RDS, and OpenSearch Service enables more secure and scalable deployment within investor-controlled AWS environments.

For institutional clients, the impact is meaningful. By automating repetitive diligence tasks, the Devil’s Advocate agent frees analysts to spend more time on higher-order investment debates and judgment-driven analysis. IC memos and stock pitches can benefit from structured, source-grounded skepticism, supporting clearer reasoning and more disciplined decision-making.

LinqAlpha’s agentic architecture shows how multi-agent LLM systems on Amazon Bedrock can transform investment research from fragmented, manual processes into workflows that are scalable, auditable, and decision-grade, tailored specifically for the demands of research on public equities and other liquid securities.

To learn more about Devil’s Advocate and LinqAlpha, visit linqalpha.com.


About the authors

Suyeol Yun

Suyeol Yun is a Principal AI Engineer at LinqAlpha, where he designs the computing and contextualization infrastructure that powers multi-agent systems for institutional investors. He studied political science at MIT and mathematics at Seoul National University. His AI journey spans from computer vision for facial reenactment, through graph neural networks for the US lobbying industry and congressional stock trading, to building infrastructure for capable AI agents.

Jaeseon Ha

Jaeseon Ha is a Product Developer and AI Strategist at LinqAlpha, where she codifies complex analyst workflows into LLM-based agents. Her designs automate the extraction of critical signals from both structured and unstructured data, allowing institutional investors to delegate exhaustive data synthesis to multi-agent systems. Drawing on her experience as an equity analyst at Goldman Sachs and Hana Securities, Jaeseon ensures LinqAlpha’s products are built for high-conviction decision-making. She also contributes to the firm’s research on multi-agent systems, specifically focusing on the automated extraction and querying of financial KPIs and guidance at scale.

Subeen Pang

Subeen Pang, Ph.D. is a Co-founder of LinqAlpha, where he develops AI-driven research workflows for institutional investors. He specializes in building agentic systems that help analysts structure and interpret data from earnings calls, filings, and financial reports. He earned his Ph.D. from MIT in Computational Science and Engineering. With a background in mathematical optimization and computational optics, Subeen applies rigorous applied math to AI design. At LinqAlpha, he led the development of a finance-specific retrieval system using query augmentation and entity normalization to ensure high-precision search results for professional analysts.

Jacob (Chanyeol) Choi

Jacob (Chanyeol) Choi is the Co-founder and CEO of LinqAlpha, where he leads the development of domain-specialized, multi-agent AI systems that streamline institutional investment research and market intelligence workflows. He earned an M.S./Ph.D. in Electrical Engineering and Computer Science from MIT and a B.S. in Electrical and Electronic Engineering from Yonsei University. His research journey spans from AI hardware and neuromorphic computing to building reliable, finance-native agentic systems, including work on bias and responsible agent deployment in institutional settings. He was recognized on Forbes’ 2021 30 Under 30 (Science) list.

Joungwon Yoon

Joungwon Yoon is a Senior Venture Capital Manager at AWS, based in Seoul, South Korea. She partners with leading investors and founders to help startups scale on AWS, connecting high-potential companies with the technology, resources, and global networks they need to grow. She focuses on generative AI startups and supports Korean founders in expanding into the US and Japan.

Sungbae Park

Sungbae Park is a Senior Account Manager on the AWS Startup team, helping strategic AI startups grow and succeed with AWS. He previously worked as a Partner Development Manager, establishing partnerships with various MSP, SI, and ISV companies.

YongHwan Yoo

YongHwan Yoo is a GenAI Solutions Architect on the AWS Startup team. He helps customers effectively adopt generative AI and machine learning technologies into their businesses by providing architecture design and optimization support, focusing on infrastructure for large-scale model training. He is also an active member of the AI/ML Technical Field Community (TFC) at AWS.
