NLI-POWERED FACT VERIFICATION

Detect & Auto-Correct
Hallucinations in LLM Outputs

Supports all major document formats, including PDF, DOCX, HWP, and HWPX. Analyzes every claim in AI-generated text against source documents, automatically detecting and correcting hallucinations.

Hallucination Detection · Auto-Correction
verification-result.json

```json
[
  {
    "claim": "South Korea's GDP was approximately $1.7 trillion in 2023.",
    "verdict": "supported",
    "confidence": 0.94
  },
  {
    "claim": "Seoul's population is approximately 15 million.",
    "verdict": "contradicted",
    "confidence": 0.97,
    "correction": "Seoul's actual population is approximately 9.5 million."
  },
  {
    "claim": "Investing in this fund guarantees returns.",
    "verdict": "contradicted",
    "rule": "CG-002",
    "correction": "This fund does not guarantee returns and carries the risk of principal loss."
  }
]
```

// corrected_text generated — contradicted claims auto-corrected based on source evidence
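The corrected_text step can be reproduced in a few lines. A minimal sketch, assuming verification results shaped like the JSON above; the `apply_corrections` helper is illustrative, not the product API:

```python
import json

def apply_corrections(text: str, results: list[dict]) -> str:
    """Replace contradicted claims in `text` with their source-grounded
    corrections, leaving supported and neutral claims untouched."""
    for r in results:
        if r.get("verdict") == "contradicted" and "correction" in r:
            text = text.replace(r["claim"], r["correction"])
    return text

results = json.loads("""[
  {"claim": "Seoul's population is approximately 15 million.",
   "verdict": "contradicted", "confidence": 0.97,
   "correction": "Seoul's actual population is approximately 9.5 million."}
]""")

draft = "Seoul's population is approximately 15 million."
corrected = apply_corrections(draft, results)
```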

96.8% · Detection Rate (500 claims)
Auto · Auto-Correction
<2s · 3-Layer Max Latency
7+ · Document Formats
38 · Guardrail Rules
31 · Verification Categories

DOCUMENT SUPPORT

Every Document Format, Supported

Natively supports all major document formats including Korean Hangul (HWP/HWPX). Upload any source document for automatic analysis and verification.

Korean Gov't Standard
.hwp / .hwpx

Hangul (HWP/HWPX)

Native support for Hancom's HWP (OLE binary) and HWPX (ZIP/XML) formats. Accurately extracts tables, text, and formatting.

.pdf

PDF

Precisely extracts text, tables, and layout with PyMuPDF engine.

.docx

DOCX

Parses paragraphs, tables, and styles from Microsoft Word documents.

.txt / .md

TXT / Markdown

Directly analyzes plain text and markdown documents.

.html

HTML

Strips scripts and styles from web pages, extracting only the body content.

Automatic Preprocessing Pipeline

Uploaded documents are automatically processed: text extraction, semantic chunking, E5 vector embedding, and knowledge graph construction — all in one step.
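The chunking step of this pipeline can be sketched in a few lines. The product uses semantic chunking; this fixed-size word window with overlap is a simplified stand-in that only illustrates why overlapping chunk boundaries matter for evidence retrieval:

```python
def chunk_text(text: str, max_words: int = 100, overlap: int = 20) -> list[str]:
    """Split a document into overlapping word-window chunks so that
    evidence spanning a chunk boundary is never lost to retrieval."""
    words = text.split()
    if len(words) <= max_words:
        return [" ".join(words)]
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
        start += max_words - overlap  # slide window, keeping `overlap` words
    return chunks
```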

Upload → Parse → Chunk → Embed → Knowledge Graph

TECHNOLOGY

Built on State-of-the-Art NLI Research

Our verification pipeline combines semantic retrieval with neural natural language inference for claim-level accuracy.

Layer 1 — Guardrail Rule Engine

38 rules · latency <1ms

Compliance rules (CG-001~028), numerical cross-validation, and hallucination pattern matching for instant detection. Handles 73% of total detections within 1ms.
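A guardrail layer of this kind is essentially fast pattern matching with no model inference, which is what keeps it under a millisecond. A minimal sketch with hypothetical rule definitions; only CG-002 mirrors the example shown earlier, and DEMO-001 is invented for illustration:

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    rule_id: str
    pattern: re.Pattern
    hint: str

# Illustrative rules only — not the product's actual rule set.
RULES = [
    Rule("CG-002", re.compile(r"\bguarantee[sd]?\s+returns\b", re.I),
         "Investment products must not promise guaranteed returns."),
    Rule("DEMO-001", re.compile(r"\brisk[- ]free\b", re.I),
         "Investments must not be described as risk-free."),
]

def check_guardrails(claim: str) -> list[str]:
    """Return the IDs of every rule the claim violates."""
    return [r.rule_id for r in RULES if r.pattern.search(claim)]
```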

Layer 2 — NLI Semantic Verification

DeBERTa-v3 Cross-Encoder · latency ~50ms

Classifies claim-evidence pairs using cross-encoder NLI model. Leverages structured evidence from Knowledge Graph for improved accuracy.
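At its core, this layer maps a cross-encoder's three NLI logits onto a verdict. A minimal pure-Python sketch of that mapping; the label order below is an assumption and must be checked against the actual checkpoint's config, and low-confidence outputs fall through as neutral for the next layer:

```python
import math

# Assumed label order — DeBERTa NLI heads vary by checkpoint.
LABELS = ("contradicted", "supported", "neutral")

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def verdict(logits, threshold: float = 0.5):
    """Map raw (contradiction, entailment, neutral) logits to a verdict.
    Claims below the confidence threshold are returned as 'neutral'
    so they can be escalated to the LLM-as-Judge layer."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "neutral", probs[best]
    return LABELS[best], probs[best]
```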

Layer 3 — LLM-as-Judge

DeepSeek/Claude · latency ~2s

Uses an LLM with source evidence to re-verify claims the NLI layer left neutral. Achieves a final detection rate of 96.8%.

Layer 4 — Web Source Verification

DataForSEO SERP API · latency ~3s

Collects real-time web search results as additional evidence beyond uploaded documents. Enables fact-checking with web sources alone — no documents required. Provides source URLs and similarity scores per claim.

Auto-Correction Engine

LLM-powered · Source-grounded

Automatically generates accurate corrections for contradicted claims based on source document evidence. Preserves the original writing style while fixing only the facts, delivering ready-to-use corrected_text.

Detection Mechanism Contribution

Guardrails · 50% · <1ms
Numerical · 14% · <1ms
Pattern · 9% · <1ms
NLI · 2% · ~50ms
LLM Judge · 40% · ~2s

Detection Rate Evolution

5-round improvement on 500-claim benchmark

[Chart: Detection Rate and False Positive Rate across 5 rounds]

TruthAnchor v3.2.0 · 500 claims verified · 31 categories

Category Performance

100% detected · 23 categories
90%+ detected · 4 categories
Total Claims Verified · 500
Guardrail Rules · 38

TRY IT

See Hallucination Detection in Action

Paste any LLM-generated text to see how TruthAnchor's 3-layer verification works.

* This is a simulation demo. Sign in for real verification.

HOW IT WORKS

6-Step Verify & Correct Pipeline

1

Upload Source

Upload source documents as verification references. Supports PDF, DOCX, HWP, HWPX, TXT, MD, and HTML. Documents are automatically chunked and vector-embedded.

2

Extract Claims

Automatically extract individual factual claims from LLM output.

3

Match Evidence

Semantic search for relevant evidence using E5-large embedding model.
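Evidence matching reduces to ranking chunk embeddings by cosine similarity to the claim embedding. A toy sketch with hand-made vectors; real E5 models produce the vectors (and expect "query: " / "passage: " prefixes on their inputs), so the numbers below are placeholders:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k_evidence(claim_vec, chunk_vecs, k=3):
    """Return the indices of the k chunks most similar to the claim."""
    scored = sorted(enumerate(chunk_vecs),
                    key=lambda iv: cosine(claim_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in scored[:k]]
```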

4

NLI Judgment

Classify claims as supported, contradicted, or neutral using DeBERTa-v3 cross-encoder.

5

Web Evidence

Search the web in real-time to collect external sources, combining them with uploaded documents as hybrid evidence.

6

Auto-Correct

Automatically correct contradicted claims based on source evidence, generating ready-to-use corrected text.

PRICING

Fair Pricing, Powerful Verification

Start free. No credit card required.

Monthly Billing

Service period: 1 month from the date of payment

Free

Free
  • 20 verifications/mo
  • 1 project
  • 50MB per file
  • 100MB storage
  • Unlimited free scanner
Get Started

Recommended

Pro

₩49,000/mo
  • 500 verifications/mo
  • 10 projects
  • 50MB per file
  • 2GB storage
  • Web source verification
  • Academic paper verification
  • API Access
  • Email support
Get Started

Business

₩179,000/mo
  • 2,000 verifications/mo
  • 50 projects
  • 200MB per file
  • 10GB storage
  • All Pro features
  • 20 team members
  • Custom domain guardrails
  • Priority support
Get Started

Enterprise

₩829,000/mo
  • Unlimited verify & generate
  • 500MB per file
  • 100GB storage
  • SSO/SAML authentication
  • 99.9% SLA Guarantee
  • Dedicated support
  • On-premise deployment
  • Custom guardrail design
Contact Sales

Start verifying LLM outputs today.

Free plan includes 20 verifications per month. No credit card required.

Get Started Free