
Digital Lending with Agentic AI

  • Writer: Anand Nerurkar
  • Dec 2
  • 25 min read

“AI-First Automation in Digital Lending” (Interview Script)

“In our digital lending platform, we adopted an AI-first automation approach, where AI is not an add-on but is embedded into every decision-making and customer-interaction step. We used traditional ML for deterministic risk decisions, and GenAI for cognitive automation, explanation, and user interaction. The outcomes were:

  • 90% straight-through processing

  • 70–80% reduction in manual underwriting

  • Loan TAT reduced from days to minutes

  • 30–40% operational cost reduction”

Then you break it into three AI layers:

✅ 1. Decision AI (ML Models)

Used for automated decisions:

  • Document AI (OCR + classification)

  • Credit Risk Model

  • Fraud Risk Model

  • Income Stability Model

  • AML & Sanctions Model

These produce scores, not conversations.

✅ 2. GenAI (Cognitive & Conversational Layer)

Used for:

  • Borrower Assistant (customer side)

  • Underwriting Copilot (bank side)

  • Agreement explanation & clause summarization

  • Policy & SOP interpretation using RAG

✅ 3. Multi-Agent / Agentic AI (Orchestration & Specialization)

Used for:

  • Specialized autonomous execution

  • Parallel intelligence

  • Cognitive workflow automation

✅ 1. RAM – END-TO-END DIGITAL LENDING JOURNEY (EVENT + AGENT FLOW)

Stage 0 – Pre-Login (Prospect Journey)

Ram, not yet a customer, visits the ABC Bank website.

Borrower GenAI Assistant (Public Mode) helps:

  • Product discovery

  • EMI simulation

  • Eligibility estimation (no identity yet, no Azure AD required)

Ram clicks “Proceed” ➝ Mobile OTP Verification (CIAM – Azure AD B2C). After OTP, a lightweight digital identity is created.

✅ This is NOT a net banking login
✅ This is a Prospect CIAM Identity

Stage 1 – Loan Application Creation

Ram enters:

  • Loan Amount

  • Tenure

  • Consent

  • Uploads PAN, Aadhaar, Salary Slip

System Actions

  • Metadata saved in:

    • loan_application

    • customer_profile

  • Documents stored in:

    • Object Storage (Blob/S3)

  • Event Fired:

LOAN.APPLICATION.SUBMITTED

✅ Borrower Assistant now switches to Authenticated Mode
✅ Shows: “Your Application Ref: RRRRR is under process.”

Stage 2 – OCR + Document AI

OCR Service consumes:

LOAN.APPLICATION.SUBMITTED

Actions

  • Extract PAN, Aadhaar, Salary details

  • Store structured metadata in:

    • kyc_raw_data

    • income_raw_data

Event Fired

DOCUMENT.OCR.EXTRACTED

✅ 2. TRUE MULTI-AGENT SYSTEM (NOT MICROSERVICES)

These are LLM-controlled Agents with Tools + Memory + Autonomy

  • KYC Agent: identity verification

  • Credit Risk Agent: bureau pull & scoring

  • Fraud Agent: device/IP & behavioral fraud

  • Income Stability Agent: salary reliability

  • Orchestrator Agent: decision making

  • Borrower Assistant: customer interaction

  • Underwriter Copilot: human-in-the-loop review

Each agent has:

  • Instruction prompt

  • Tool registry

  • Event triggers

  • Memory store

✅ 3. KYC AGENT – FULL WORKING DESIGN

🔹 Trigger

Consumes:

DOCUMENT.OCR.EXTRACTED

🔹 KYC AGENT – SYSTEM PROMPT (Instruction-Based)

You are the KYC Verification Agent for the Digital Lending Platform.

Your responsibility:
1. Fetch PAN and Aadhaar extracted data from PostgreSQL.
2. Validate PAN using the PAN Verification API.
3. If PAN fails, validate Aadhaar using Aadhaar OTP API.
4. Compare OCR data vs API response.
5. Detect name/DOB mismatch.
6. Mark KYC status as:
   - PASS
   - FAIL
   - MANUAL_REVIEW
7. Persist result to PostgreSQL.
8. Publish result to Kafka Topic: KYC.VERIFIED.

Tools Available:
- db.read(table, query)
- db.write(table, data)
- api.call(url, payload)
- kafka.publish(topic, event)

Decision Rules:
- PAN success + OCR match → PASS
- PAN fail + Aadhaar fail → FAIL
- Any field mismatch > 20% → MANUAL_REVIEW

Never perform Credit, Fraud, or Income checks.
Return only structured status.

🔹 KYC AGENT – EXECUTION FLOW

  1. db.read(kyc_raw_data)

  2. api.call(PAN_API)

  3. If PAN fails → api.call(AADHAAR_API)

  4. Compare OCR vs API

  5. db.write(kyc_status)

  6. kafka.publish(KYC.VERIFIED)

✅ Fully autonomous
✅ No human needed unless MANUAL_REVIEW
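The decision rules in the KYC Agent prompt are deterministic and can be sketched as a small function. This is an illustrative sketch only; the argument names (`pan_ok`, `aadhaar_ok`, `mismatch_ratio`) are assumptions, not the platform's actual schema:

```python
def kyc_decision(pan_ok: bool, aadhaar_ok: bool, mismatch_ratio: float) -> str:
    """Apply the KYC Agent's decision rules.

    pan_ok / aadhaar_ok: outcomes of the verification APIs.
    mismatch_ratio: fraction of OCR fields disagreeing with the API response.
    """
    if not pan_ok and not aadhaar_ok:
        return "FAIL"              # PAN fail + Aadhaar fail -> FAIL
    if mismatch_ratio > 0.20:
        return "MANUAL_REVIEW"     # any field mismatch > 20% -> human review
    if pan_ok:
        return "PASS"              # PAN success + OCR match -> PASS
    return "MANUAL_REVIEW"         # Aadhaar-only path: route to review (assumed)
```

The PAN-fail/Aadhaar-pass branch is unspecified in the prompt, so it is routed to MANUAL_REVIEW here as a conservative assumption.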

✅ 4. CREDIT RISK AGENT

🔹 Trigger

KYC.VERIFIED (status = PASS)

🔹 Prompt (Instruction-Based)

You are the Credit Risk Assessment Agent.

Steps:
1. Fetch applicant PAN from DB.
2. Call CIBIL Bureau API.
3. If CIBIL unavailable, call Experian as fallback.
4. Fetch internal bank risk model score.
5. Normalize all scores (0–1).
6. Write risk_score and rating to DB.
7. Publish event CREDIT.RISK.COMPLETED.

Do not perform Fraud or Income checks.
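Step 5 (“normalize all scores (0–1)”) is typically a min-max scaling over the bureau band. A minimal sketch, assuming the common 300–900 CIBIL/Experian range (the range and function name are assumptions):

```python
def normalize_bureau_score(score: float, lo: float = 300.0, hi: float = 900.0) -> float:
    """Min-max normalize a bureau score (assumed 300-900 band) to 0-1, clamped."""
    return max(0.0, min(1.0, (score - lo) / (hi - lo)))
```

A score of 750 normalizes to 0.75, which lines up with the orchestrator threshold `Credit > 0.75` used later.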

✅ 5. FRAUD RISK AGENT

🔹 Trigger

KYC.VERIFIED (PASS)

🔹 Actions

  • Call Hunter / ThreatMetrix

  • Check:

    • Device fingerprint

    • IP velocity

    • SIM swap risk

    • Bot score

Publishes:

FRAUD.RISK.COMPLETED

✅ 6. INCOME STABILITY AGENT

🔹 Trigger

DOCUMENT.OCR.EXTRACTED

🔹 Actions

  • Use salary slip OCR

  • Call internal ML Model Endpoint:

POST /ml/income/stability

Outputs:

  • Stability score

  • Employer credibility

  • Salary volatility

Publishes:

INCOME.RISK.COMPLETED

✅ 7. ORCHESTRATOR AGENT (THE BRAIN)

🔹 Trigger

Listens to:

KYC.VERIFIED
CREDIT.RISK.COMPLETED
FRAUD.RISK.COMPLETED
INCOME.RISK.COMPLETED

🔹 Decision Prompt

You are the Lending Risk Orchestrator Agent.

Collect results from:
- KYC
- Credit Risk
- Fraud Risk
- Income Stability

Apply Bank Policy Rules:
- If any FAIL → REJECT
- If Credit > 0.75 AND Fraud < 0.3 AND Income > 0.6 → AUTO_APPROVE
- Else → MANUAL_REVIEW

Persist final decision.
Publish:
- LOAN.AUTO.APPROVED OR
- LOAN.MANUAL.REVIEW OR
- LOAN.REJECTED

✅ This is true Agentic Decision Intelligence
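The policy rules in the Orchestrator prompt map directly to code. A minimal sketch (the signature is illustrative; the “any FAIL” rule is represented here by the KYC status only, and all scores are assumed normalized 0–1):

```python
def lending_decision(kyc_status: str, credit: float, fraud: float, income: float) -> str:
    """Bank policy rules from the Orchestrator prompt."""
    if kyc_status == "FAIL":
        return "LOAN.REJECTED"          # any hard FAIL rejects outright
    if credit > 0.75 and fraud < 0.3 and income > 0.6:
        return "LOAN.AUTO.APPROVED"     # all thresholds met
    return "LOAN.MANUAL.REVIEW"         # everything else goes to a human
```

Keeping this logic in a deterministic function (or a DMN rules engine) rather than in the LLM is what makes the final decision auditable.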

✅ 8. UNDERWRITER COPILOT (HUMAN-IN-LOOP)

Triggered only when the Orchestrator publishes LOAN.MANUAL.REVIEW.

Copilot:

  • Summarizes all risks

  • Shows:

    • Bureau score

    • Fraud heatmap

    • Income volatility

  • Suggests:

    • Approve with lower amount

    • Reject

    • Ask more docs

Human decision → fed back to Orchestrator.

✅ 9. GENAI LOAN AGREEMENT EXPLANATION (NOT GENERATION)

A key clarification:

  • Agreement is created by template system

  • NOT by GenAI

Then:

Step 1 – Agreement Generated

AGREEMENT.GENERATED

Step 2 – Document AI + Temporary Vector Index

  • Text extracted

  • Chunked

  • Embedded

  • Stored in:

    • Ephemeral Vector Index (TTL 48 hrs)

Step 3 – GenAI Reads via RAG

Borrower asks: “What is my foreclosure charge?”

GenAI retrieves from:

Ephemeral Vector Store + Policy Knowledge Hub

✅ This is called Transient RAG
✅ Used only for explanation, NOT training
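The ephemeral-index idea can be sketched with a toy in-memory store. This is a minimal illustration only: real deployments would use Redis or pgvector with external expiry, and the vectors would come from an embedding model rather than being passed in directly:

```python
import time

def dot(a, b):
    """Toy similarity: dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

class EphemeralIndex:
    """Toy ephemeral vector index with TTL, mirroring the 'Transient RAG' idea."""

    def __init__(self, ttl_seconds: float = 48 * 3600):
        self.ttl = ttl_seconds
        self.entries = []                       # (expiry_time, chunk, vector)

    def add(self, chunk: str, vector: list):
        self.entries.append((time.time() + self.ttl, chunk, vector))

    def search(self, query_vector: list, top_k: int = 3):
        now = time.time()
        live = [(c, v) for exp, c, v in self.entries if exp > now]  # drop expired
        scored = sorted(live, key=lambda cv: -dot(cv[1], query_vector))
        return [c for c, _ in scored[:top_k]]
```

Once the 48-hour TTL lapses, the clauses simply stop being retrievable, which is the data-minimization property the design relies on.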

✅ 10. BORROWER ASSISTANT VS UNDERWRITER COPILOT

Borrower Assistant:

  • User: Customer (Ram)

  • Purpose: Guide, explain, calculate, status

  • Uses: RAG + Orchestration APIs

  • Auth: Azure AD B2C

Underwriter Copilot:

  • User: Bank Officer

  • Purpose: Risk analysis, decision support

  • Uses: Internal ML + RAG

  • Auth: Azure AD (Employee)

✅ 11. WHY THIS IS AI-FIRST AUTOMATION

You can confidently say:

“This platform is AI-first because every core lending decision is automated and driven by ML or GenAI before human intervention. Humans only intervene in exception cases.”

  • OCR: Document AI

  • KYC: Rule + API AI Agent

  • Credit: ML Model

  • Fraud: ML Risk Engine

  • Income: ML Stability Model

  • Decision: Agentic AI

  • Customer Support: GenAI

  • Underwriting: GenAI Copilot

✅ 12. WHAT TO SAY IN INTERVIEW (HEAD OF DIGITAL ARCHITECTURE LENS)

You should say:

“We didn’t treat AI as an add-on. We designed the entire digital lending platform in an AI-first, event-driven, and agent-based architecture. Every risk decision—KYC, credit, fraud, income stability—is executed by autonomous AI agents coordinated by an AI orchestrator. GenAI is used for borrower experience, legal explainability, underwriting assistance, and regulatory transparency via RAG over governed knowledge hubs. Human intervention is only for exception handling. This reduced our approval TAT from days to minutes and manual underwriting by over 70%.”

✅ 13. COMMON FOLLOW-UP QUESTIONS

✅ Is PAN-only KYC possible?

Yes. Aadhaar is the fallback.

✅ How does an agent know which APIs, DB tables, and Kafka topics to use?

Via:

  • Tool registry

  • Agent configuration manifests

  • Prompt instructions

✅ Is the temporary vector store different from the permanent one?

Yes.

Permanent Store:

  • Content: policies, RBI docs

  • Technology: PgVector, Pinecone

  • Lifetime: long term

Ephemeral Store:

  • Content: loan agreement

  • Technology: in-memory / Redis

  • Lifetime: TTL based

✅ Why chunk/embed even for temporary?

Because:

  • Clause-level retrieval

  • Accurate GenAI explanations

  • No hallucination

✅ Is this a real multi-agent system?

Yes — because:

  • Each agent has autonomy

  • Own memory

  • Tool reasoning

  • Event-based collaboration

  • Not simple API chaining


1 — Customer (Ram) journey: end-to-end summary

Ram (prospect) visits the bank, verifies mobile, applies for loan, uploads PAN/Aadhaar/salary docs → event-driven pipeline triggers Document-AI → KYC Agent → parallel Risk Agents (credit/fraud/income/AML) → Orchestrator aggregates results + rule engine → Decision (Auto-Approve / Auto-Reject / Manual Review). GenAI Borrower Assistant and Underwriter Copilot provide explanations and evidence (via RAG and Context API) while all interactions are auditable and PII-masked.

2 — High-level components (what each does)

  • Client / Borrower UI (web/mobile) — collects inputs, shows borrower assistant chat, pushes initial event.

  • API Gateway / Auth — verifies mobile OTP (CIAM / Azure AD B2C-like flows).

  • Object Store / ADLS / Blob — stores original documents (encrypted).

  • Postgres — transactional metadata (masked PII pointers, application table).

  • Cosmos DB (Event/Context store) — timeline & low-latency context for Context API (masked).

  • Kafka / Azure Event Hubs — event bus (topics for each stage).

  • Redis — caching (session, eligibility quick results).

  • Document-AI (OCR + NER) — extracts structured fields, saves JSON to ADLS + Postgres, emits ocr.extracted.

  • Feature Store / Azure ML — model features + MLOps endpoints.

  • pgvector / Vector DB — KnowledgeHub (policy/SOP embeddings) + ephemeral index (temporary doc embeddings).

  • MLOps model endpoints — credit/fraud/income models served securely.

  • LLMOps / GenAI Gateway — orchestrates RAG, prompt templates, prompt/versioning, guardrails.

  • Orchestrator Agent (Temporal/Conductor or Spring AI orchestration) — coordinates agents, retries, timeline, decision aggregation.

  • Agents — small, instruction-driven services (Spring AI components calling tools).

  • Decision Engine (DMN/Rules) — final deterministic decision logic.

  • Underwriter UI + Copilot — aides manual review.

  • Loan Agreement Service + DocGen + eSign + CBS — post-approval automation.

  • Audit store (append-only) — store all prompts, retrieved doc ids, model outputs, decisions.

3 — Event topics (Kafka)

Use clear, versioned topics with schema registry:

  • application.created.v1

  • docs.uploaded.v1

  • ocr.extracted.v1

  • kyc.requested.v1

  • kyc.completed.v1

  • aml.requested.v1

  • aml.completed.v1

  • credit.requested.v1

  • credit.completed.v1

  • fraud.requested.v1

  • fraud.completed.v1

  • income.requested.v1

  • income.completed.v1

  • decision.requested.v1

  • decision.made.v1

  • agreement.generated.v1

  • esign.completed.v1

  • loan.account.created.v1

Each event includes: applicationId, traceId, producer, timestamp, payloadRef (ADLS path or feature ref), auditPointer.
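The envelope fields listed above can be captured in a small dataclass. A sketch only: `EventEnvelope` and `make_event` are illustrative names, not the platform's actual types:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json
import uuid

@dataclass
class EventEnvelope:
    """Common envelope carried on every topic (fields from the list above)."""
    applicationId: str
    traceId: str
    producer: str
    timestamp: str
    payloadRef: str       # ADLS path or feature-store reference, never raw PII
    auditPointer: str

def make_event(app_id: str, producer: str, payload_ref: str) -> str:
    """Serialize a new envelope as the JSON body published to Kafka."""
    env = EventEnvelope(
        applicationId=app_id,
        traceId=str(uuid.uuid4()),
        producer=producer,
        timestamp=datetime.now(timezone.utc).isoformat(),
        payloadRef=payload_ref,
        auditPointer=f"audit://{app_id}/{producer}",
    )
    return json.dumps(asdict(env))
```

Keeping only a `payloadRef` (not the payload itself) in the event is what lets the bus stay PII-free.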

4 — DB & stores — where to write/read

  • Postgres: applications, documents, kyc_status, agent_results (store masked PII/IDs), model_versions.

    • Example tables: applications(applicationId, applicantMobileMasked, status, createdAt, updatedAt, refId), documents(docId, applicationId, blobPath, hash, parsedJsonPath).

  • ADLS Blob: original PDFs/images; parsed OCR JSON files (curated).

  • Cosmos DB: application_timeline (append-only low-latency timeline for Context API). Stores masked reason codes and pointers, not raw PII.

  • pgvector/Pinecone:

    • KnowledgeHub: SOPs, RBI rules, credit policy embeddings (permanent).

    • Ephemeral Index: freshly generated loan agreements or uploaded docs that should be queryable for a short TTL.

  • Feature Store: online features used by model endpoints (Redis-backed online store).

  • Audit Append-Only Store: stores full prompts, retrieved chunk ids, model outputs, decision record snapshots.

5 — Orchestrator Agent behavior (central coordinator)

Role: Plan, manage, coordinate, execute tasks; track timeline; rehydrate context; invoke agents by publishing events or calling agent endpoints; aggregate outputs and call Decision Engine.

Key responsibilities:

  • On application.created, create Postgres applications row, put initial timeline in Cosmos, publish docs.uploaded when documents stored.

  • Subscribe to ocr.extracted → publish kyc.requested (trigger KYC agent).

  • On kyc.completed(status), decide next steps:

    • If FAIL_DEFINITE → publish decision.requested (auto-reject).

    • If PASS → publish credit.requested, fraud.requested, income.requested, aml.requested in parallel.

  • Wait for parallel *.completed events (with timeout/fallback policies). Then call Decision Engine with aggregated payload.

  • If MANUAL_REVIEW, create underwriter task and notify underwriter UI; also publish underwriter.task.created.

  • All steps logged to audit store.

Implementation: Spring Boot + Spring AI orchestration (or Temporal worker). Uses Kafka for events, Postgres for transactional state, Cosmos for context timeline.
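The coordination logic above (fan-out, wait for parallel completions, fall back on timeout) can be sketched as a pure state function, assuming results are keyed by topic name. `aggregate` and its return shape are illustrative, not the actual orchestrator API:

```python
REQUIRED = {"credit.completed.v1", "fraud.completed.v1",
            "income.completed.v1", "aml.completed.v1"}

def aggregate(events: dict, timeout_expired: bool):
    """Decide the orchestrator's next step from the *.completed events seen so far.

    events maps topic name -> result payload. A real implementation keeps this
    state per applicationId (e.g. a Temporal workflow or a Postgres row).
    """
    if timeout_expired and not REQUIRED.issubset(events):
        # fallback policy: partial results after timeout go to a human
        return "decision.requested", {"mode": "MANUAL_REVIEW",
                                      "reason": "TIMEOUT_PARTIAL_RESULTS"}
    if REQUIRED.issubset(events):
        # all parallel agents reported: hand off to the Decision Engine
        return "decision.requested", {"mode": "EVALUATE", "payload": events}
    return "wait", None   # still collecting completions
```

Because the function is pure, the same orchestration rule can be unit-tested without Kafka or a workflow engine.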

6 — Agent specification pattern

Each agent is a small Spring Boot service with a single instruction prompt (system/instruction text) that tells it what tools/APIs to call and how to behave. Agents are idempotent, authenticated via mTLS/service principal, and expose an HTTP endpoint for orchestrator or subscribe to Kafka topics. They write results to Postgres and publish *.completed events.

Below I give the detailed instruction prompts plus tool registry per agent.

Agent A — Document-AI (runs before the other agents; included for completeness)

Trigger: docs.uploaded.v1
Action: OCR + NER + field extraction → store parsed JSON to ADLS, write a documents table entry, publish ocr.extracted.v1.

Tool registry:

  • ADLS Blob SDK (put file)

  • Document AI model (VT/LayoutLM) endpoint

  • Postgres JDBC

  • Kafka producer

Output event example: ocr.extracted.v1 payload includes docId, applicationId, ocrJsonPath, extractedFields (name, panHash, aadhaarHash masked, incomeSummary, confidenceScores).

Agent 1 — KYC Agent (instruction-driven)

Trigger: kyc.requested.v1 (published by the Orchestrator after ocr.extracted)
Primary task: Verify identity via the PAN API (primary) and optionally the Aadhaar API (fallback). Set kycStatus = PASS / FAIL / MANUAL_REVIEW and publish kyc.completed.v1.

Instruction (System Prompt) — KYC Agent

SYSTEM: You are the KYC Agent. Your mission: given an applicationId and parsed document reference, determine KYC result. Follow these rules strictly:
1. Read the provided parsed JSON from ADLS path (documentRef). Use only masked PII for any external work.
2. Primary verification: call PAN_Verification_API (Tool: panApi.verifyPan) with panHash and maskedPanLast4. If PAN returns VALID & nameMatches → mark PAN_OK.
3. Fallback: if PAN service times out or returns NEEDS_ADDITIONAL, call Aadhaar_Verification_API (Tool: aadhaarApi.verify) with aadhaarHash (masked), then reconcile name/dob.
4. If any API returns direct mismatch → kycStatus = FAIL_DEFINITE.
5. If API returns ambiguous/fuzzy match or confidence < 0.7 → kycStatus = MANUAL_REVIEW and produce evidence summary.
6. On PASS, record kycStatus=PASS and include evidence pointers (apiTransactionId, confidence).
7. Write result to Postgres kyc_status table (with masked pointers only) and publish Kafka event kyc.completed.v1 with result, reasonCode, evidencePointer.
8. Log all calls, responses, and final prompt in audit store.
9. NEVER send raw PII to LLM or vector DB; only masked values allowed in any prompt.

Tool registry for KYC Agent

  • panApi.verify(panMaskedLast4, name, dob) → returns {status: VALID/INVALID/NEED_MORE, confidence, transactionId}

  • aadhaarApi.verify(aadhaarMaskedLast4, name, dob) → same pattern

  • Postgres JDBC (write to kyc_status)

  • Kafka producer (kyc.completed.v1)

  • Cosmos DB timeline append

  • Audit store writer (append-only)

Inputs & Outputs

  • Input: applicationId, documentRef (ADLS path), traceId.

  • Output event: kyc.completed.v1:

{
 "applicationId":"APP-123",
 "kycStatus":"PASS",
 "reasonCode":"KYC_OK_PAN",
 "evidence":{"panTx":"PAN-TRX-xx","confidence":0.92},
 "traceId":"T-111"
}
  • DB writes:

    • kyc_status(applicationId,status,reasonCode,evidenceRef,updatedAt)

Agent 2 — AML & Sanctions Agent

Trigger: aml.requested.v1 (can run after kyc.completed or in parallel)
Primary task: Check sanctions/PEP/adverse media. Output aml.completed.v1 with status CLEAR / POTENTIAL_HIT / HIGH_HIT.

Instruction (System Prompt) — AML Agent

SYSTEM: You are the AML Agent. Given applicationId and masked identity fields, run sanctioned-entity & PEP screening:
1. Load normalized name tokens from parsed JSON.
2. Perform exact-match check via internal sanctionListStore (Tool: sanctionApi.exactMatch).
3. Perform fuzzy-match (Levenshtein / phonetic) via sanctionApi.fuzzyMatch with threshold > 0.85.
4. Query vendor WorldCheck/Refinitiv (Tool: vendorWorldCheck.search) for additional hits. Capture source ids.
5. Run adverse media search via adverseMediaApi.search (return count and confidence).
6. Calculate composite matchScore = weighted(exact,fuzzy,vendor,adverseMedia).
7. If matchScore > 0.95 or vendorExactHit → status = HIGH_HIT (auto-reject).
8. If matchScore between 0.6 and 0.95 → POTENTIAL_HIT → send to EDD / MANUAL_REVIEW.
9. Append aml result to Cosmos timeline and publish `aml.completed.v1`.
10. Persist meta (matchScore, matchedLists) to Postgres with masked references. Log audit.

Tools

  • sanctionApi.exactMatch / sanctionApi.fuzzyMatch

  • vendorWorldCheck.search

  • adverseMediaApi.search

  • Postgres, Kafka, Cosmos, Audit store

Output example:

aml.completed.v1 includes status, matchScore, matchedListIds, recommendation.
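The composite score and thresholds from the AML prompt can be sketched as follows. The weights here are illustrative only; the prompt says “weighted” but does not fix the values:

```python
def aml_status(exact: float, fuzzy: float, vendor: float, adverse: float,
               vendor_exact_hit: bool = False,
               weights=(0.4, 0.2, 0.3, 0.1)):
    """Composite AML matchScore per the prompt's rules 6-8 (weights assumed)."""
    score = sum(w * s for w, s in zip(weights, (exact, fuzzy, vendor, adverse)))
    if vendor_exact_hit or score > 0.95:
        return score, "HIGH_HIT"        # auto-reject
    if 0.6 <= score <= 0.95:
        return score, "POTENTIAL_HIT"   # route to EDD / manual review
    return score, "CLEAR"
```

The band between 0.6 and 0.95 is deliberately wide: that is where the human EDD step absorbs the false-positive risk.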

Agent 3 — Credit Risk Agent

Trigger: credit.requested.v1 (after KYC PASS)
Primary task: Call the external bureau (CIBIL) and the internal PD model; store results; publish credit.completed.v1.

Instruction — Credit Agent

SYSTEM: You are the Credit Agent. Goal: compute credit PD and bureau score for applicationId.
1. Use masked applicant identifiers to call CIBIL API via bureauApi.getScore (Tool).
2. In parallel, extract features from FeatureStore (Tool: featureStore.getOnlineFeatures(applicationId)).
3. Call internal creditModel endpoint (Tool: ml.creditModel.predict) with features; return pdScore and SHAP top-5 features.
4. If bureauApi fails, fallback to alternate vendor Experian (Tool: bureauAltApi).
5. Merge results into a single creditResult payload {bureauScore, pdScore, modelVersion, shapTop}.
6. Write to Postgres `agent_results` and to FeatureStore latest snapshot; publish `credit.completed.v1`.
7. Log modelVersion and SHAP into audit store.

Tools

  • bureauApi.getScore(maskedIdentifiers)

  • ml.creditModel.predict(features)

  • featureStore.getOnlineFeatures(appId)

  • Postgres, Kafka, Cosmos, Audit store.

Agent 4 — Fraud Agent

Trigger: fraud.requested.v1
Primary task: Vendor fraud checks + internal anomaly model.

Instruction — Fraud Agent

SYSTEM: You are the Fraud Agent.
1. Call Hunter/Hunter-like vendor (Tool: fraudVendor.check) with transaction/session metadata (device fingerprint, IP, geo)—use masked user id only.
2. Pull behavioral features from FeatureStore (Tool).
3. Call ml.fraudModel.predict to get fraudScore and anomaly flags.
4. Combine vendorScore + modelScore → fraudComposite.
5. Persist to Postgres; publish `fraud.completed.v1` with fraudScore, riskFlags, modelVersion.
6. If fraudComposite > REJECT_THRESHOLD → mark for immediate reject and notify orchestrator.

Tools

  • fraudVendor.check(sessionData)

  • ml.fraudModel.predict(features)

  • FeatureStore, Postgres, Kafka, Cosmos.

Agent 5 — Income Stability Agent

Trigger: income.requested.v1
Primary task: Parse bank-statement features; calculate DTI, stability score, and affordability.

Instruction — Income Agent

SYSTEM: You are Income Stability Agent.
1. Read parsed bank statement JSON from ADLS (Tool).
2. Extract monthly inflows, salary credits, large irregular credits; compute DTI and EMI affordability formula.
3. Optionally call ml.incomeModel.predict for stabilityScore (Tool).
4. Write results to Postgres and FeatureStore; publish `income.completed.v1`.
5. If affordability < requested EMI → set affordabilityFlag.

Tools

  • ADLS read, ml.incomeModel.predict, Postgres, Kafka.
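Steps 2 and 5 of the Income Agent can be sketched numerically. A minimal illustration; the 0.5 DTI ceiling and the field names are assumptions, not the bank's actual policy:

```python
def income_assessment(monthly_inflows, existing_emi: float, requested_emi: float,
                      max_dti: float = 0.5):
    """DTI, salary volatility, and affordability flag from parsed statement data.

    monthly_inflows: list of monthly salary credits parsed from the statement.
    """
    avg_income = sum(monthly_inflows) / len(monthly_inflows)
    dti = (existing_emi + requested_emi) / avg_income       # debt-to-income ratio
    variance = sum((x - avg_income) ** 2 for x in monthly_inflows) / len(monthly_inflows)
    volatility = (variance ** 0.5) / avg_income             # coefficient of variation
    return {
        "dti": round(dti, 2),
        "volatility": round(volatility, 2),
        "affordabilityFlag": dti > max_dti,                 # True means unaffordable
    }
```

A steady ₹60k salary with ₹30k of total EMIs lands exactly at a 0.5 DTI with zero volatility.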

Agent 6 — Orchestrator Agent (detailed)

Trigger: application.created (and subsequent events)
Detailed behavior:

  1. Accept application.created → create application row, push docs.uploaded after file sync.

  2. Wait ocr.extracted.v1. On receive publish kyc.requested.v1.

  3. On kyc.completed:

    • If FAIL_DEFINITE → create decision.made.v1 {AUTO_REJECT, reason}.

    • If PASS → publish in parallel credit.requested.v1, fraud.requested.v1, income.requested.v1, aml.requested.v1.

  4. Track partial results. Wait up to configured timeout (e.g., 30s normal, 5 mins extended). Use fallback endpoints or shadow models when needed.

  5. On receiving all *.completed events, call Decision Engine (Tool: rulesEngine.evaluate) with payload: {kyc, aml, credit, fraud, income, applicationMeta}. Decision Engine returns AUTO_APPROVE, AUTO_REJECT, or MANUAL_REVIEW and ruleVersion.

  6. Persist decision.made to Postgres + audit; publish decision.made.v1.

  7. If MANUAL_REVIEW:

    • Create underwriter task in underwriter.queue (Cosmos) with underwriter.brief.created.

    • Call LLMOps to create UnderwriterBrief (Tool: llmops.generateUnderwriterBrief) using masked context & RAG policy snippets.

  8. On AUTO_APPROVE: trigger agreement.generated (DocGen) -> esign -> loan.account.created.

Tools: Kafka, Postgres, Cosmos, rulesEngine, llmops, docGen, eSign provider, audit store.

Agent 7 — Borrower Assistant (GenAI chat window)

Trigger: UI events (user asks) or system prompts at stages (auto-popup after application submission).
Role: Customer-facing GenAI that answers product questions, eligibility, status, document checklist, and EMI calculations, always using the Context API + RAG.

Instruction — Borrower Assistant

SYSTEM: You are the Borrower Assistant. Use only the following data sources: (1) Context API (GET /context/{applicationId}) that returns masked timeline & status; (2) RAG KnowledgeHub for bank policy and product info. Never request or expose raw PII. For calculations call Eligibility API (Tool) or EMI microservice. For status questions, call Context API first. For policy explanations cite policy IDs. For documents, show checklist and next steps. Use simple, customer-friendly language and provide buttons for actions (Proceed / Save for later / Request call).

Tools

  • contextApi.get(applicationId)

  • eligibilityApi.compute(mobile/otp/session)

  • emiService.calculate(amount,tenure,roi)

  • RAG via llmops.retrieve(policies)

  • Kafka for logging messages and audit.

Example flow for Ram:

  • Ram asks: “Based on 60k salary, how much can I get?”

    • Borrower Assistant calls eligibilityApi (tool) with income feature (if logged/verified) or uses an on-the-fly calculator; calls RAG for product rules; returns: “Estimated ₹12–15L; EMI ₹28,400 for 5 yrs. Proceed?”

    • Buttons shown: [Yes → leads to pre-fill loan form], [No → exploratory options].

    • If Ram clicks Yes, UI navigates to application form and orchestrator starts application flow (or creates a provisional session and requests OTP verification).
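The EMI figure quoted to Ram comes from the standard reducing-balance formula EMI = P·r·(1+r)^n / ((1+r)^n - 1), where r is the monthly rate. A sketch; the ₹13L / 11% / 60-month inputs below are illustrative, not the bank's actual pricing:

```python
def emi(principal: float, annual_rate_pct: float, tenure_months: int) -> float:
    """Reducing-balance EMI: P * r * (1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate_pct / 12.0 / 100.0      # monthly interest rate
    if r == 0:
        return principal / tenure_months    # interest-free edge case
    f = (1 + r) ** tenure_months
    return principal * r * f / (f - 1)

# Illustrative inputs: ₹13,00,000 at 11% p.a. over 60 months
# lands a little above ₹28,000/month, matching the range quoted to Ram.
monthly = emi(1_300_000, 11.0, 60)
```

This is the kind of deterministic calculation the assistant delegates to the `emiService` tool rather than letting the LLM do arithmetic.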

Agent 8 — Underwriter Copilot (GenAI for reviewer)

Trigger: underwriter.task.created (when manual review is required)
Role: Provide an evidence-backed brief (policy citations), a recommended action, and explanations of model flags & top SHAP features.

Instruction — Underwriter Copilot

SYSTEM: You are the Underwriter Copilot. Given masked application context, model outputs and policy references, produce:
1. One-paragraph summary (applicant profile, requested amount, key risks).
2. Top-3 reasons the system flagged manual review (model SHAP & vendor hits).
3. Quoted policy clauses (via RAG) that correspond to the deviation.
4. Suggested actions (collect X doc, escalate Y, or reject).
5. A fillable checklist for the underwriter and buttons: Approve / Reject / Request More Docs.
6. Include exact evidence pointers (docId, API tx ids) for audit.
7. Log prompt & returned snippets in audit.

Tools: contextApi, llmops.retrieve, llmops.generate, Postgres for underwriter decisions, Kafka for events, Audit store.

Agent 9 — Loan Agreement Reviewer Agent (post-DocGen)

Trigger: agreement.generated.v1 (DocGen done)
Role: Chunk, embed, and index the agreement into the ephemeral vector store; extract clauses, risk flags, and a summary; publish agreement.indexed.v1.

Instruction — Agreement Agent

SYSTEM: You are Loan Agreement Reviewer Agent.
1. Retrieve generated PDF from ADLS (agreement path).
2. Perform OCR/clean-text extraction if needed.
3. Chunk text into clause-aware chunks; embed using approved embedding model.
4. Store embeddings into ephemeral vector index (pgvector with TTL 48hrs).
5. Run clause-extraction heuristics (prepay penalty, foreclosure clause, ROI, fees) and produce key-value summary.
6. Persist summary to Postgres (agreement_summary table) and publish `agreement.indexed.v1`.
7. Make indexId available for RAG retrieval for Borrower Assistant/Underwriter (ephemeral).
8. If ephemeral TTL expires, remove embeddings and keep only the agreementSummary in ADLS/DB.

Tools: ADLS, embedding model, pgvector, Postgres, Kafka, Audit store.

7 — Sample end-to-end sequence (Ram’s flow) — events + agent calls

  1. Ram (prospect) visits bank site → clicks “Explore Loans” → Borrower Assistant pops up (pre-login).

    • No application yet. He asks eligibility; assistant calls eligibilityApi (stateless) → response.

  2. Ram clicks Proceed → system requests mobile OTP → verifies via Azure AD B2C (OTP flow) → Orchestrator creates applicationId and returns application.created.v1. Postgres row created.

  3. Ram fills form, accepts consent → uploads PAN/Aadhaar/salary slip → UI stores files to ADLS (encrypt) and posts docs.uploaded.v1 to Kafka. Postgres documents rows added.

  4. Document-AI agent consumes docs.uploaded.v1, runs OCR (LayoutLM), writes parsed JSON to ADLS Curated, writes documents.parsed in Postgres, publishes ocr.extracted.v1.

  5. Orchestrator picks ocr.extracted → publishes kyc.requested.v1.

  6. KYC Agent (consumes kyc.requested) reads parsed JSON, calls panApi.verify → returns VALID → writes kyc.completed.v1 {PASS} to Kafka + Postgres & updates timeline in Cosmos.

  7. Orchestrator on KYC PASS publishes in parallel aml.requested.v1, credit.requested.v1, fraud.requested.v1, income.requested.v1.

  8. AML Agent runs sanction/PEP checks → aml.completed.v1 (CLEAR) published.

  9. Credit Agent calls bureauApi → gets bureauScore 650; calls ml.creditModel.predict → pdScore 0.08; publishes credit.completed.

  10. Fraud Agent runs vendor + model, returns fraudScore 0.12 → fraud.completed.

  11. Income Agent computes DTI 0.42 and stabilityScore 0.7 → income.completed.

  12. Orchestrator collects all *.completed, calls rulesEngine.evaluate:

    • If rules meet AUTO_APPROVE → publish decision.made (AUTO_APPROVE).

    • Else if MANUAL_REVIEW → create underwriter task.

  13. If AUTO_APPROVE → Orchestrator triggers agreement.generated (DocGen) → Agreement Agent indexes agreement ephemeral → Borrower Assistant notified with summary; DocGen triggers eSign → esign.completed → LoanBooking calls CBS → loan.account.created → Disbursement and notification.

  14. If MANUAL_REVIEW → Underwriter Copilot is created; underwriter logs in, sees the brief and can Approve/Reject/Request documents. Underwriter action triggers decision.made.

  15. Throughout: Borrower Assistant can be used by Ram to ask status: it calls contextApi.get(applicationId) (cosmos) & RAG for any policy reference. GenAI returns human-friendly response and logs to audit.

8 — Prompt templates

KYC Agent system instruction (short version for interview)

“KYC Agent: read parsed JSON only, call PAN verification API; fallback Aadhaar if needed. Return PASS / FAIL / MANUAL_REVIEW with evidencePointer. Write masked results to Postgres and publish kyc.completed. Never include raw PII in logs or LLM prompts.”

Borrower Assistant (system)

“Borrower Assistant: For any user query, call Context API first. Use RAG for policies. Provide friendly answers, buttons for actions, and never display raw PII. Always cite policy IDs when explaining rules.”

Underwriter Copilot

“Underwriter Copilot: Generate an evidence-backed brief using (masked) model outputs, SHAP features and RAG policy snippets. Provide suggested actions and a one-line recommended decision. Store prompt + retrieved clause ids in audit.”

9 — Tool registry summary (what agent can call)

  • External vendor APIs: panApi, aadhaarApi, bureauApi (CIBIL), bureauAltApi (Experian), fraudVendorApi (Hunter), worldCheckApi, adverseMediaApi, eSign provider.

  • Internal services: featureStore.getOnlineFeatures, ml.creditModel.predict, ml.fraudModel.predict, ml.incomeModel.predict, docGenService, emiService, eligibilityApi.

  • Persistence: Postgres (transactional), ADLS (raw/curated), Cosmos (timeline / context), Redis cache, pgvector (KnowledgeHub/ephemeral).

  • Messaging & Orchestration: Kafka topics, Orchestrator (Temporal/Spring AI), rulesEngine (DMN).

  • LLMOps: llmops.retrieve (RAG), llmops.generate (LLM), promptRepo, promptVersioning.

  • Audit & Governance: auditStore (append-only), modelRegistry (MLflow), policy registry.

10 — Ephemeral vs Permanent Vector Index

  • Permanent KnowledgeHub index: store SOPs, credit policy, RBI rules — long lived; embeddings persist in pgvector/Pinecone with governance and versioning.

  • Ephemeral Vector Index: temporary embeddings for newly generated loan agreements or uploaded docs that must be searchable immediately (e.g., for the next 48 hours). Purpose: immediate RAG retrieval for the borrower or underwriter. After the TTL expires, the embeddings are removed; the canonical doc stays in ADLS and the summarized clause metadata in Postgres.

You do chunk/embed/index ephemeral docs because you want immediate semantic search (clause extraction, Q&A) without waiting for a slow full Ingest workflow.

11 — Operational & safety guardrails you must mention

  • PII Masking & Tokenization: never pass raw PAN/Aadhaar to LLM or vector DB; store only masked/pseudonymized values; use token vault for reversible access with strong RBAC.

  • Context API: LLMs always call Context API (masked) instead of DB. Context API enforces purpose & consent.

  • Prompt & Retrieval Audit: store prompts, retrieved doc ids, LLM responses, and red-team test results in audit store.

  • Red-Team: run prompt-injection, hallucination, and leakage tests regularly on LLMOps.

  • Model Versioning & Explainability: store modelVersion and SHAP outputs in audit for every decision.

  • Idempotency & Exactly-once semantics: Orchestrator should handle duplicates and retries idempotently.

  • SLA & Timeouts: set strict timeouts and fallback flows (e.g., if bureau times out, use alternate vendor or mark for manual review).
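The idempotency guardrail can be sketched as a dedupe wrapper keyed on (topic, applicationId, traceId). This in-memory version only illustrates the contract; a production consumer would persist the keys in Postgres under a unique constraint so retries and redeliveries become no-ops:

```python
class IdempotentConsumer:
    """Process each (topic, applicationId, traceId) key at most once."""

    def __init__(self, handler):
        self.handler = handler     # the real event-processing callback
        self.seen = set()          # in prod: Postgres table with unique constraint

    def consume(self, topic: str, event: dict) -> str:
        key = (topic, event["applicationId"], event["traceId"])
        if key in self.seen:
            return "SKIPPED_DUPLICATE"   # redelivery or retry: do nothing
        self.seen.add(key)
        self.handler(topic, event)
        return "PROCESSED"
```

Combined with Kafka's at-least-once delivery, this gives effectively-once processing without relying on broker-level exactly-once semantics.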

12 — Why this is true multi-agent

  • Agents are instruction-based LLM-enabled workers with a specific autonomy and domain (KYC, Credit, Fraud, Income, AML, Agreement-Indexer, Borrower Assistant, Underwriter Copilot).

  • They communicate via events, call tool APIs, write to common stores, and use shared memory (Cosmos timeline + vector store).

  • Each agent can evolve independently (specialized models, prompts, toolset) and scale independently.

  • The Orchestrator is the planner that composes agent capabilities into end-to-end behavior (agent coordination resembles a conductor / Multi-Agent System).

13 — How to narrate this in an interview

  • Start: “We implemented an event-driven multi-agent platform where each agent is an instruction-driven service (Spring AI) responsible for a domain: KYC, AML, Credit, Fraud, Income, etc.”

  • Walk step-by-step through Ram’s journey (use the numbered sequence above).

  • Emphasize PII safety: Context API + masking + no direct DB for LLM.

  • Highlight business wins: faster TAT, fewer manual reviews, explainability via GenAI, better audit trail.

  • End: “This design is scalable, auditable, and allows independent teams (MLOps, LLMOps, App, DevOps, Risk) to own their domains while meeting regulatory and compliance needs.”

14 — Likely follow-up questions and answers

  • Q: Who builds the agents? A: Small cross-functional squads — MLOps builds predictive endpoints, LLMOps owns prompt & RAG, App team owns agent orchestration and UI integration.

  • Q: Why ephemeral embedding? A: For immediate clause-aware Q&A on newly-generated agreements without contaminating permanent KB; TTL enforces data minimization.

  • Q: Does GenAI decide? A: No — GenAI explains and recommends. Decisions remain with the deterministic rules engine; final decisions are logged and auditable.

  • Q: How to reduce false positives in AML? A: Multi-algorithm scoring + human EDD + ML-backed false-positive classifier; red-team & continuous tuning.

15 — Example small JSON events and flows you can quote (concise)

application.created.v1

{"applicationId":"APP-0001","mobileMasked":"9XXXX12345","requestedAmount":1000000,"tenureMonths":60,"traceId":"T-1","createdAt":"2025-11-30T09:00:00Z"}

ocr.extracted.v1

{"applicationId":"APP-0001","docId":"DOC-100","ocrJsonPath":"adls://curated/APP-0001/doc-100.json","extracted":{"panLast4":"1234","nameHash":"hxx...","incomeSummary":"gross=60000"},"traceId":"T-1"}

kyc.completed.v1

{"applicationId":"APP-0001","kycStatus":"PASS","reasonCode":"KYC_OK_PAN","evidence":{"panTx":"PAN-TX-435","confidence":0.92},"traceId":"T-1"}

decision.made.v1

{"applicationId":"APP-0001","decision":"AUTO_APPROVE","ruleVersion":"v1.4","modelVersions":{"credit":"m_v3","fraud":"f_v2"},"evidencePointers":["doc-100","shap-xx"],"traceId":"T-1"}

16 — Highlights

  • Event-first, agent-coordinated architecture with Spring AI / Orchestrator.

  • Context API + Cosmos low-latency timeline for GenAI & UI.

  • Postgres for transactional state; ADLS for documents; FeatureStore + MLOps endpoints.

  • KnowledgeHub (pgvector) for SOPs + ephemeral vector index for temporary docs.

  • Strict PII masking, prompt/audit logging, model versioning, and red-team LLM safety.

  • Deterministic Decision Engine (DMN) — GenAI explains & assists, never authorizes decisions alone.


✅ 1. Event-Driven Digital Lending – End-to-End Flow (Ram Journey – System View)

Step 0: Prospect Journey (Pre-Login)

  1. Ram visits ABC Bank website

  2. Borrower Assistant (GenAI) chat widget appears.

  3. Ram asks:

    “My salary is ₹60k, what loan can I get for 5 years?”

  4. Borrower Assistant:

    • Calls Eligibility Rules API

    • Calls RAG → Lending Policy Knowledge Hub

    • Responds with EMI + eligibility.

  5. Ram clicks “Proceed”

  6. System triggers Mobile OTP verification (Azure AD B2C CIAM – Passwordless) → Ram is now a verified digital prospect (not a net-banking user)
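The EMI the assistant quotes comes from the standard reducing-balance formula EMI = P × r × (1+r)^n / ((1+r)^n − 1), where r is the monthly rate and n the tenure in months. A minimal sketch, with the 10% annual rate as an assumed example rather than the bank's actual ROI:

```java
class EmiCalculator {
    // Standard reducing-balance EMI: principal P, annualRate as a fraction
    // (e.g., 0.10 for 10% p.a.), tenure in months.
    static double emi(double principal, double annualRate, int months) {
        double r = annualRate / 12.0;            // monthly interest rate
        double factor = Math.pow(1 + r, months); // (1+r)^n
        return principal * r * factor / (factor - 1);
    }

    public static void main(String[] args) {
        // e.g., ₹12,00,000 over 60 months at an assumed 10% p.a. ≈ ₹25,497/month
        System.out.printf("EMI = %.0f%n", emi(1_200_000, 0.10, 60));
    }
}
```

The eligibility estimate then works backwards: given the bank's FOIR cap on Ram's ₹60k salary, solve the same formula for the largest principal whose EMI stays under the cap.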

Step 1: Loan Application Submission

Ram enters:

  • Loan amount & tenure

  • Accepts consent for:

    • KYC

    • Credit

    • Fraud

    • Income verification

  • Uploads:

    • PAN

    • Aadhaar

    • Salary Slip / Form-16

System actions:

  • Store raw docs in Blob Storage

  • Create record in Postgres

  • Create Application ID: LN-2025-000123

  • Publish Kafka Event:

LoanApplication.Submitted

Ram sees UI message:

“Your loan application LN-2025-000123 is under process.”

SMS + Email triggered.

Step 2: OCR & Document AI

Document AI extracts:

  • PAN Number

  • Aadhaar Number

  • Name, DOB

  • Employer & Income

Stores metadata in Postgres. Publishes Kafka Event:

OCR.Extracted

Step 3: Lending Orchestrator Agent Starts

The Orchestrator Agent (Agentic AI) consumes OCR.Extracted and:

✅ Triggers in parallel:

  • KYC Agent

  • Credit Risk Agent

  • Fraud Risk Agent

  • Income Stability Agent

✅ 2. True Multi-Agent Execution (Event Driven)

🧠 Agent 1: KYC Agent

Instruction Prompt

You are a KYC Verification Agent.
Your job is to:
1. Validate PAN via PAN API.
2. If PAN fails → fallback to Aadhaar XML API.
3. Match OCR name + DOB + document data.
4. Write KYC status to Postgres.
5. Publish result to Kafka topic: kyc.done
Never take credit decisions.
Return only PASS, FAIL or MANUAL_REVIEW.

Tools Registered

  • PAN Verification API

  • Aadhaar XML API

  • Postgres DB

  • Kafka Producer

Spring AI Pseudo-Code

@Bean
public Function<OCRExtractedEvent, KycResultEvent> kycAgent() {
   return event -> {
      String status = "PASS";
      PanResult pan = panApi.verify(event.getPan());
      if (!pan.isValid()) {
         // PAN failed → fall back to Aadhaar before failing outright
         AadhaarResult aadhaar = aadhaarApi.verify(event.getAadhaar());
         if (!aadhaar.isValid()) status = "FAIL";
      }
      KycResultEvent resultEvent = new KycResultEvent(event.getAppId(), status);
      postgres.saveKycStatus(event.getAppId(), status);
      kafka.send("kyc.done", resultEvent);
      return resultEvent;
   };
}

💳 Agent 2: Credit Risk Agent

Instruction Prompt

Call CIBIL API using PAN.
If CIBIL fails → fallback to Experian.
Compute score.
Store result.
Publish credit.done.

Tools:

  • CIBIL API

  • Experian API

  • Postgres

  • Kafka

🕵️ Agent 3: Fraud Risk Agent

Call Hunter Fraud API
Check device, IP, velocity, mismatch
Publish fraud.done

💰 Agent 4: Income Stability Agent

Call internal ML endpoint
Evaluate income consistency over 12 months
Publish income.done

🧭 Agent 5: Lending Orchestrator (Master Agent)

Consumes:

  • kyc.done

  • credit.done

  • fraud.done

  • income.done

Decision Rule Engine

IF KYC = FAIL → REJECT
IF CreditScore < 650 → MANUAL
IF Fraud = HIGH → MANUAL
IF IncomeStable = LOW → MANUAL
Else → AUTO_APPROVE
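Because these rules are deterministic, they can be expressed as plain ordered checks; a sketch using the thresholds above (method and field names are illustrative):

```java
class DecisionEngine {
    // Rules are evaluated in order, so a KYC failure rejects
    // regardless of credit score. Thresholds mirror the rule list above.
    static String decide(String kyc, int creditScore, String fraud, String incomeStable) {
        if ("FAIL".equals(kyc)) return "REJECT";
        if (creditScore < 650) return "MANUAL";
        if ("HIGH".equals(fraud)) return "MANUAL";
        if ("LOW".equals(incomeStable)) return "MANUAL";
        return "AUTO_APPROVE";
    }

    public static void main(String[] args) {
        System.out.println(decide("PASS", 720, "LOW", "HIGH")); // AUTO_APPROVE
        System.out.println(decide("PASS", 640, "LOW", "HIGH")); // MANUAL
        System.out.println(decide("FAIL", 800, "LOW", "HIGH")); // REJECT
    }
}
```

Keeping this logic in a rules engine (DMN) rather than in the LLM is what makes every decision reproducible and auditable.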

Publishes the final decision event (auto-approve, reject, or manual review) to Kafka for the downstream stages.

✅ 3. Underwriter Copilot (Human-in-Loop)

If MANUAL_REVIEW:

  • Underwriter logs in.

  • Underwriter Copilot (GenAI):

    • Summarizes:

      • Credit report

      • Fraud flags

      • Income anomalies

    • Recommends:

      • Approve with conditions

      • Reject

      • Ask additional docs

Underwriter decision → stored → Kafka:

loan.manual.decision

✅ 4. Loan Agreement + GenAI Explanation

Agreement Generation

  • Template-based

  • Filled using:

    • Approved amount

    • Tenure

    • ROI

  • Stored in Blob Storage

  • Kafka Event:

loan.agreement.generated

Temporary Vectorization for GenAI Explanation

Agreement is NOT permanently stored in Knowledge Hub.

Instead:

  1. OCR → text extraction

  2. Chunking

  3. Embedding

  4. Stored in Ephemeral Vector Index (TTL 48 hours) → for private, temporary RAG only

Now Borrower Assistant can:

  • Summarize agreement

  • Extract clauses

  • Explain risks

  • Highlight EMI, penalties

After 48 hours → agreement vectors auto-deleted.
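A minimal sketch of such a TTL-based ephemeral index, assuming lazy eviction on read (production systems would typically use Redis key expiry or a vector store's native TTL; the float[] stands in for the embedding vector):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: expiry-stamped store keyed by docId. Eviction is lazy (on read).
class EphemeralIndex {
    private static final long TTL_MILLIS = 48L * 60 * 60 * 1000; // 48 hours

    private record Entry(float[] embedding, long storedAt) {}
    private final Map<String, Entry> index = new HashMap<>();

    void put(String docId, float[] embedding, long now) {
        index.put(docId, new Entry(embedding, now));
    }

    // Returns null once the 48h TTL has elapsed, and drops the entry.
    float[] get(String docId, long now) {
        Entry e = index.get(docId);
        if (e == null || now - e.storedAt() > TTL_MILLIS) {
            index.remove(docId);
            return null;
        }
        return e.embedding();
    }

    public static void main(String[] args) {
        EphemeralIndex idx = new EphemeralIndex();
        idx.put("AGR-LN12345", new float[]{0.1f, 0.2f}, 0L);
        System.out.println(idx.get("AGR-LN12345", 1000L) != null);            // true
        System.out.println(idx.get("AGR-LN12345", 49L * 3600_000) == null);   // true: expired
    }
}
```

Passing `now` as a parameter (instead of calling the clock inside) keeps the TTL behavior testable.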

✅ 5. Borrower Assistant – Live During Entire Journey

Capabilities:

  • Product discovery

  • EMI explanation

  • Document clarification

  • Status queries:

    “Your application is in Credit Verification stage”

  • Agreement summary

  • Consent explanation

It accesses:

  • Postgres (status)

  • Cosmos DB (events)

  • Permanent Knowledge Hub (policies)

  • Ephemeral Vector Store (agreement)

✅ 6. Kafka Event Schemas (Sample)

LoanApplication.Submitted

{
  "appId": "LN-2025-000123",
  "mobile": "98XXXXXX23",
  "loanAmount": 1200000,
  "tenure": 60,
  "timestamp": "2025-12-01T10:30:21Z"
}

kyc.done

{
  "appId": "LN-2025-000123",
  "status": "PASS",
  "timestamp": "2025-12-01T10:32:10Z"
}

loan.approved

{
  "appId": "LN-2025-000123",
  "approvedAmount": 1150000,
  "emi": 28400
}

✅ 7. Why This Is True Multi-Agent AI (Not Microservices)

Microservices offer stateless logic; Multi-Agent AI adds:

  • Prompt-driven reasoning

  • Goal-based execution

  • Tool selection by LLM

  • Autonomous coordination

  • Memory & reflection

Here:

  • Each agent has:

    • Memory

    • Prompt

    • Tool registry

    • Autonomy

  • Orchestrator is Agentic AI (planner + executor)

✅ 8. MLOps vs LLMOps Pipeline Summary

Independent ML Pipelines

  • Credit Risk Model

  • Fraud Model

  • Income Stability Model

Each has:

  • Feature Store

  • Training pipeline

  • Model registry

  • AKS deployment

LLMOps Pipeline

  • OCR → Chunk → Embed → Index

  • Vector Store (pgvector / ephemeral)

  • RAG layer

  • Prompt versioning

  • Guardrails + Red-Team testing
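The Chunk step in this pipeline can be sketched as fixed-size chunking with overlap, so clause boundaries are not lost between chunks. Character-based sizes here are illustrative; production pipelines usually chunk by token count:

```java
import java.util.ArrayList;
import java.util.List;

class Chunker {
    // Split text into chunks of at most `size` chars, each overlapping the
    // previous chunk by `overlap` chars so no clause is cut without context.
    static List<String> chunk(String text, int size, int overlap) {
        List<String> chunks = new ArrayList<>();
        int step = size - overlap; // how far each chunk's start advances
        for (int start = 0; start < text.length(); start += step) {
            chunks.add(text.substring(start, Math.min(start + size, text.length())));
            if (start + size >= text.length()) break; // last chunk reached the end
        }
        return chunks;
    }

    public static void main(String[] args) {
        // 1000 chars, 400-char chunks, 100-char overlap → [0,400) [300,700) [600,1000)
        System.out.println(chunk("a".repeat(1000), 400, 100).size()); // 3
    }
}
```

Each chunk is then embedded and written to pgvector (permanent KB) or the ephemeral index (agreements), per the split described in section 7️⃣ below.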

✅ 9. Summary

“Our digital lending platform is built on an AI-first, event-driven, multi-agent architecture. When a customer submits a loan request, OCR and Document AI extract data and raise events. This triggers multiple autonomous AI agents – KYC, Credit Risk, Fraud, and Income Stability – each with its own instruction prompt and tool registry. These agents run in parallel, publish their decisions through Kafka, and a Lending Orchestrator Agent consolidates them using policy rules to auto-approve, reject, or route to manual underwriting. For customer experience, we use a Borrower Assistant powered by RAG and temporary vector indexing to explain EMI, loan status, and even summarize the generated loan agreement in real time. On the bank side, Underwriter Copilot supports human decisioning in high-risk cases. This architecture gives us straight-through processing, regulatory compliance, real-time observability, and true AI-driven automation across the entire lending lifecycle.”

✅ 10. Highlights

  • “AI is not an add-on — it is the primary execution engine”

  • “Business workflows are event-triggered and agent-executed”

  • “GenAI handles interaction, reasoning, summarization, and explanation”

  • “ML handles risk, fraud, income, and AML scoring”

  • “Humans intervene only where AI confidence is low”

  • “This reduced TAT from weeks to 1 day and manual ops by ~70%”


Configuration Driven

====

1️⃣ Production-Grade Spring AI Tool-Calling Configuration (KYC Agent Example)

✅ Tool Registry (Enterprise Standard)

spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
      chat:
        model: gpt-4.1
    tools:
      registry:
        - name: panVerificationTool
          type: rest
          url: https://api.nsdl.com/pan/verify
          method: POST
        - name: aadhaarVerificationTool
          type: rest
          url: https://uidai.gov.in/api/verify
          method: POST
        - name: kycStatusWriter
          type: postgres
          table: kyc_status
        - name: kafkaPublisher
          type: kafka
          topic: kyc.completed

✅ Spring AI KYC Agent Configuration

@Bean
public AiAgent kycAgent(ChatClient chatClient, ToolRegistry toolRegistry) {
   return AiAgent.builder(chatClient)
        .systemPrompt(KYC_SYSTEM_PROMPT)
        .toolRegistry(toolRegistry)
        .build();
}

✅ KYC Instruction Prompt (Production-Grade)

You are a regulated BFSI KYC Agent.

Your responsibilities:
1. Read OCR_EXTRACTED event.
2. Call PAN verification API using panVerificationTool.
3. If PAN fails → call Aadhaar verification API.
4. Store verification result in PostgreSQL using kycStatusWriter.
5. Publish KYC_COMPLETED event to Kafka.
6. Never expose API secrets.
7. If ambiguity > 20% → flag MANUAL_REVIEW.

Output JSON strictly in schema:
{
  "applicationId": "",
  "kycStatus": "PASS | FAIL | MANUAL",
  "reason": "",
  "confidenceScore": 0.0
}

2️⃣ Kafka Event Schemas (Canonical Banking Standard)

📌 OCR Extracted Event

{
  "eventType": "OCR_EXTRACTED",
  "applicationId": "LN12345",
  "pan": "ABCDE1234F",
  "aadhaar": "XXXXXXXX9012",
  "name": "RAM SHARMA",
  "dob": "1992-04-12",
  "salary": 60000,
  "timestamp": "2025-01-15T10:22:11Z"
}

📌 KYC Completed Event

{
  "eventType": "KYC_COMPLETED",
  "applicationId": "LN12345",
  "status": "PASS",
  "confidence": 0.97,
  "timestamp": "2025-01-15T10:22:45Z"
}

📌 Risk Aggregation Event

{
  "eventType": "RISK_AGGREGATION_READY",
  "applicationId": "LN12345",
  "creditScore": 780,
  "fraudRisk": "LOW",
  "incomeStability": "STABLE"
}

3️⃣ Lending Orchestrator – Pseudo State Machine

STATE: APPLICATION_SUBMITTED
 → OCR_PENDING
 → OCR_COMPLETED
 → KYC_IN_PROGRESS
 → KYC_COMPLETED
 → PARALLEL_RISK_EVALUATION
     → CREDIT_AGENT
     → FRAUD_AGENT
     → INCOME_AGENT
 → RISK_AGGREGATION
 → DECISION_ENGINE
     → AUTO_APPROVE
     → AUTO_REJECT
     → MANUAL_REVIEW
 → LOAN_AGREEMENT_STAGE
 → CUSTOMER_ACCEPTANCE
 → DISBURSEMENT
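One way to keep this state machine honest is an explicit transition whitelist, so invalid jumps (e.g., straight from OCR to disbursement) are rejected. A sketch covering a subset of the states above:

```java
import java.util.Map;
import java.util.Set;

class LoanStateMachine {
    // Allowed next states per state (simplified subset of the flow above).
    private static final Map<String, Set<String>> NEXT = Map.of(
        "APPLICATION_SUBMITTED",    Set.of("OCR_PENDING"),
        "OCR_PENDING",              Set.of("OCR_COMPLETED"),
        "OCR_COMPLETED",            Set.of("KYC_IN_PROGRESS"),
        "KYC_IN_PROGRESS",          Set.of("KYC_COMPLETED"),
        "KYC_COMPLETED",            Set.of("PARALLEL_RISK_EVALUATION"),
        "PARALLEL_RISK_EVALUATION", Set.of("RISK_AGGREGATION"),
        "RISK_AGGREGATION",         Set.of("DECISION_ENGINE"),
        "DECISION_ENGINE",          Set.of("AUTO_APPROVE", "AUTO_REJECT", "MANUAL_REVIEW")
    );

    static boolean canMove(String from, String to) {
        return NEXT.getOrDefault(from, Set.of()).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canMove("OCR_COMPLETED", "KYC_IN_PROGRESS")); // true
        System.out.println(canMove("OCR_COMPLETED", "DISBURSEMENT"));    // false
    }
}
```

The orchestrator checks `canMove` before persisting any state change, which also makes out-of-order Kafka deliveries harmless.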

✅ Orchestrator Decision Rules

IF KYC = FAIL → AUTO_REJECT  
IF CreditScore < 600 → REJECT  
IF FraudRisk = HIGH → MANUAL_REVIEW  
IF IncomeStability = UNSTABLE → MANUAL_REVIEW  
IF All Pass → AUTO_APPROVE  

Decision written to:

  • PostgreSQL (transactional)

  • Cosmos DB (audit/event store)

  • Kafka (next workflow trigger)

4️⃣ Borrower Assistant – Prompt Pack (Customer Side GenAI)

You are ABC Bank Digital Lending Assistant.

Rules:
- Never give final approval decisions.
- Only explain status from RAG + Orchestrator Events.
- Always cite bank policy using RAG.
- Offer next actionable step.

Tools Allowed:
- loanStatusAPI
- eligibilityAPI
- documentChecklistAPI
- emiCalculator
- RAG(KnowledgeHub)

Tone: Professional, reassuring, compliant.

✅ Example Interaction

Ram: I earn 60k. How much loan can I get?

Assistant:
→ Calls eligibilityAPI
→ Calls emiCalculator
→ Uses RAG (loan policy)

Response:
You are eligible for ₹12–15 Lakhs. Your EMI for 5 years is approx ₹28,400.
Would you like to proceed? [YES] [NO]
  • If YES → triggers APPLICATION_INITIATED event

  • If NO → session closed (no data persistence)

5️⃣ Underwriter Copilot – Prompt Pack (Internal GenAI)

You are a Regulated Credit Underwriter Copilot.

Tasks:
- Summarize all risk agent outputs.
- Highlight red flags.
- Recommend APPROVE / REJECT / MANUAL.
- Never take final decision.
- Always show evidence.

Tools:
- creditRiskAPI
- fraudAPI
- incomeStabilityModel
- RAG(credit policy)
- documentViewer

6️⃣ Agent Instruction Prompts (All Core Agents)

🔹 Credit Risk Agent

Task:
- Call CIBIL API.
- If timeout → Call Experian fallback.
- Normalize score to bank scale.
- Publish CREDIT_COMPLETED event.

🔹 Fraud Risk Agent

Task:
- Call Hunter Fraud API.
- Check geo-IP mismatch.
- Detect velocity anomalies.
- Publish FRAUD_COMPLETED event.

🔹 Income Stability Agent

Task:
- Call internal ML model endpoint.
- Validate 6–12 month consistency.
- Publish INCOME_STABILITY_COMPLETED event.

7️⃣ Temporary vs Permanent Vector Index (Your Agreement Question)

  • Permanent Knowledge Hub: RBI norms, lending policy, SOPs; stored in pgvector / Pinecone

  • Ephemeral Vector Index: loan agreements, one-time docs; stored in-memory / Redis with TTL

✅ Loan agreement is NOT permanently stored
✅ It is:

  1. OCR scanned

  2. Chunked

  3. Embedded

  4. Stored in TTL-based vector index

  5. Used by GenAI for 24–48 hours

  6. Auto-expired for compliance

8️⃣ Does GenAI Generate Loan Agreement? (Correct Answer)

❌ Loan agreement is NOT GenAI-generated
✅ It is template + rules-based
✅ GenAI is used only for:

  • Clause explanation

  • Risk summarization

  • Customer-friendly translation

  • Compliance Q&A

9️⃣ True Multi-Agent vs Microservices

Microservices → Multi-Agent:

  • Hardcoded logic → LLM-driven reasoning

  • Static workflows → Autonomous task planning

  • No learning → Self-improving

  • Tech orchestration → Cognitive orchestration

Your system is TRUE Agentic AI because:

✅ Agents think

✅ Agents decide which tools to call

✅ Agents coordinate via events

✅ Agents reason using RAG + rules

🔟 Ram’s End-to-End Event-Driven Journey (Interview-Ready)

Stage 1: Prospect Phase (Pre-Login)

Ram visits ABC Bank → Talks to Borrower Assistant → Gets eligibility & EMI → Clicks Proceed

Stage 2: Minimal Identity (CIAM / OTP)

Mobile OTP verified → Prospect ID created → Not Net-Banking login

Stage 3: Application Submission

Ram enters:

  • Loan amount & tenure

  • PAN, Aadhaar, Salary Slip, Form-16

  • Gives digital consent

System shows: ✅ “Your application LN12345 is under process.”

Kafka Event:

APPLICATION_SUBMITTED

Stage 4: Event-Driven AI Orchestration (Behind the Scenes)

  1. OCR Agent → extracts data → fires OCR_EXTRACTED

  2. Orchestrator → triggers:

    • KYC Agent

    • Credit Agent

    • Fraud Agent

    • Income Agent

  3. All agents run in parallel

  4. Each pushes result to Kafka

  5. Orchestrator aggregates → runs decision rules

Stage 5: Decision

  • If LOW risk → AUTO APPROVE

  • If MEDIUM → UNDERWRITER COPILOT

  • If HIGH → AUTO REJECT

Stage 6: Loan Agreement & GenAI Review

  • Agreement generated via template engine

  • GenAI:

    • Summarizes

    • Highlights risks

    • Explains EMI clauses

  • Stored in temporary vector index

  • Customer signs digitally

Stage 7: Disbursement

  • Core Banking API

  • NBFC Ledger Update

  • Customer gets SMS + Email

  • Loan status updated in Borrower Assistant

🎯 Interview Summary

“We built a fully AI-first, event-driven digital lending platform where GenAI and ML work together. The borrower interacts with a GenAI Borrower Assistant for eligibility, EMI simulation, and journey guidance. Once the application is submitted, an event-driven orchestration triggers multiple autonomous AI agents in parallel—KYC, Credit Risk, Fraud Detection, and Income Stability. Each agent reasons independently, calls external and internal tools, writes to enterprise databases, and publishes its own Kafka events. A central Orchestrator Agent aggregates these outputs using bank risk rules and automatically decides approval, rejection, or manual underwriting. For manual cases, an Underwriter Copilot powered by GenAI supports human decisioning. All contracts are rule-based but explained using GenAI through temporary vector indexing for compliance. This architecture gives us straight-through processing, regulatory auditability, and true AI-first automation.”

✅ Highlights

✅ ML + GenAI separation

✅ True Multi-Agent orchestration

✅ Event-driven autonomy

✅ CIAM vs Netbanking identity

✅ KnowledgeHub vs Ephemeral vector index

✅ Regulatory-grade controls

✅ Human-in-the-loop governance


1️⃣ Production-Grade Spring AI Tool-Calling Configuration (Text)

🔹 Tool Registry Concept

Each Agent has:

  • LLM

  • Tool Registry (APIs, DB, Kafka)

  • Instruction Prompt

  • State Handler

🔹 Common Tool Interfaces (Spring AI Style – Conceptual)

@Tool(name="kycPanVerify")
String verifyPan(@Param("pan") String pan);

@Tool(name="kycAadharVerify")
String verifyAadhar(@Param("aadhaar") String aadhaar);

@Tool(name="publishEvent")
void publishKafka(@Param("topic") String topic, @Param("payload") String json);

@Tool(name="saveStatus")
void savePostgres(@Param("table") String table, @Param("json") String json);

Each agent only sees its own tools (Zero-Trust for Agents).

2️⃣ Event-Driven Orchestrator – Pseudo State Machine

🔹 States

RECEIVED
 → OCR_DONE
 → KYC_IN_PROGRESS
 → KYC_DONE
 → CREDIT_IN_PROGRESS
 → FRAUD_IN_PROGRESS
 → INCOME_IN_PROGRESS
 → RISK_AGGREGATION
 → AUTO_APPROVED | MANUAL_REVIEW | REJECTED
 → AGREEMENT_GENERATED
 → ESIGNED
 → DISBURSED

🔹 Orchestrator Rules (Decision Logic)

  • KYC = FAIL → REJECT

  • CreditScore < 650 → MANUAL_REVIEW

  • Fraud = HIGH → REJECT

  • IncomeStability < 0.6 → MANUAL_REVIEW

  • All green → AUTO_APPROVE

3️⃣ Kafka Event Schemas (Simplified, Production-Realistic)

✅ Loan Submitted

{
  "eventType": "LOAN_SUBMITTED",
  "loanRefId": "LN12345",
  "customerId": "RAM987",
  "amount": 1500000,
  "tenure": 60,
  "timestamp": "2025-12-02T10:15:30Z"
}

✅ OCR Extracted

{
  "eventType": "OCR_EXTRACTED",
  "loanRefId": "LN12345",
  "pan": "ABCDE1234F",
  "aadhaar": "234523452345",
  "monthlyIncome": 60000,
  "docStorePath": "blob://loan/LN12345/docs/"
}

✅ KYC Done

{
  "eventType": "KYC_DONE",
  "loanRefId": "LN12345",
  "status": "PASS",
  "confidence": 0.97
}

Same pattern for:

  • CREDIT_DONE

  • FRAUD_DONE

  • INCOME_DONE

  • DECISION_DONE

4️⃣ True Multi-Agent Instruction Prompts (Production-Style)

🔹 KYC Agent – Instruction Prompt

You are the KYC Agent for ABC Bank Digital Lending.
Your responsibility is identity verification only.

Steps:
1. Read PAN and Aadhaar from OCR_EXTRACTED event.
2. Call PAN API using tool kycPanVerify.
3. If PAN API fails, fallback to Aadhaar API.
4. If both fail → mark KYC = FAIL.
5. Write result to Postgres table: loan_kyc_status.
6. Publish KYC_DONE event to Kafka.

Constraints:
- Do NOT call credit or fraud APIs.
- Do NOT take approval decisions.
- If confidence < 0.85 → mark MANUAL_REVIEW.

🔹 Credit Risk Agent Prompt

You are Credit Risk Agent.
Tasks:
1. Call CIBIL API.
2. If timeout, fallback to Experian.
3. Combine bureau score with internal ML endpoint.
4. Store probability_of_default in Postgres.
5. Publish CREDIT_DONE event.

🔹 Fraud Agent Prompt

You are Fraud Detection Agent.
1. Call Hunter API.
2. Call internal transaction anomaly ML model.
3. If fraud_score > 0.8 → HIGH.
4. Publish FRAUD_DONE.

🔹 Income Stability Agent Prompt

You are Income Stability Agent.
1. Call salary verification API.
2. Call internal cashflow ML model.
3. Generate stability index (0–1).
4. Publish INCOME_DONE.

🔹 Lending Orchestrator Agent Prompt

You are the Master Orchestrator.
- Subscribe to all Kafka topics.
- Wait for KYC_DONE, CREDIT_DONE, FRAUD_DONE, INCOME_DONE.
- Apply bank decision rules.
- Persist final decision.
- Trigger agreement generation or manual review.
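The orchestrator's wait-for-all-four behavior can be sketched as a per-application completion set (event names are from the prompt above; class names are illustrative):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class RiskAggregator {
    private static final Set<String> REQUIRED =
        Set.of("KYC_DONE", "CREDIT_DONE", "FRAUD_DONE", "INCOME_DONE");

    // applicationId → set of agent events received so far
    private final Map<String, Set<String>> received = new HashMap<>();

    // Record one agent result; returns true once the application is ready
    // for the decision rules (all four events have arrived).
    boolean onEvent(String applicationId, String eventType) {
        received.computeIfAbsent(applicationId, k -> new HashSet<>()).add(eventType);
        return received.get(applicationId).containsAll(REQUIRED);
    }

    public static void main(String[] args) {
        RiskAggregator agg = new RiskAggregator();
        System.out.println(agg.onEvent("LN12345", "KYC_DONE"));    // false: 3 pending
        agg.onEvent("LN12345", "CREDIT_DONE");
        agg.onEvent("LN12345", "FRAUD_DONE");
        System.out.println(agg.onEvent("LN12345", "INCOME_DONE")); // true: run rules
    }
}
```

Because the set is idempotent, duplicate deliveries of the same event type cannot trigger the decision rules twice.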

5️⃣ Borrower Assistant – Prompt Pack (CX View)

You are ABC Bank Borrower Assistant.
You are NOT an approval system.
You explain:
- Eligibility
- EMI
- Document checklist
- Application status
- Reasons for rejection (if allowed)

Tools you can use:
- EligibilityRulesAPI
- LoanStatusAPI
- RAG (lending policies)

Never:
- Predict final approval
- Override underwriting

6️⃣ Underwriter Copilot – Prompt Pack

You assist bank underwriters.
You summarize:
- Credit report
- Fraud signals
- Income stability
- Bank policy deviations

You must:
- Highlight top 3 risks
- Recommend approve/reject with reasoning
- Never auto-approve

7️⃣ Architecture Governance Model for AI Agents (EA Lens)

🔹 Layers of Governance

  • Model: MRM, bias testing, drift monitoring

  • Prompt: prompt versioning and approval

  • Tool: API whitelisting

  • Data: PII masking, RBAC

  • Events: schema registry

  • Security: Zero Trust, token-scoped access

🔹 Architecture Review Checklist

  • Agent isolation ✅

  • Tool boundaries ✅

  • Explainability ✅

  • Human-in-loop ✅

  • Audit trail ✅

8️⃣ Model Risk Management (MRM) Framework

  • Design: model validation

  • Build: bias testing

  • Deploy: canary release

  • Operate: drift monitoring

  • Audit: SR 11-7-style governance (adapted for RBI supervision)

Metrics:

  • Accuracy

  • Bias score

  • Explainability index

  • Override rate

  • Risk leakage %


1️⃣ Summary

“Our digital lending platform is built as an event-driven, AI-first architecture. Every customer action produces an event. OCR, KYC, credit risk, fraud and income stability are not microservices pretending to be AI — they are true autonomous agents with instruction-based prompts and tool boundaries. Each agent independently verifies risk and publishes decisions to Kafka. A central orchestrator agent aggregates these decisions using bank policies and compliance rules. GenAI does NOT replace underwriting — it augments it via Borrower Assistant and Underwriter Copilot. This reduced TAT from days to minutes, improved fraud detection by 25%, and cut cost to serve by 35%. That is what AI-first automation truly means.”

✅ Trade-Offs

“How did you use AI/ML & GenAI in Digital Lending, Banking & MF?”


  • ML for:

    • Credit risk

    • Fraud detection

    • Income stability

    • AML screening

  • GenAI for:

    • Borrower assistance

    • Underwriter copilot

    • Agreement explanation

    • Policy reasoning via RAG

  • Multi-Agent for:

    • Autonomous verification

    • Parallel risk execution

    • Independent scalability

  • Event-Driven Backbone:

    • Kafka + Cosmos DB event store

  • Human-in-Loop:

    • Only for high-risk cases

  • Governed by MRM + AI Architecture Board

 
 
 
