POC-Agentic AI Solution
- Anand Nerurkar
- Jul 20
- 3 min read
🧠 Use Case Overview: Agentic AI for Mutual Fund Platform
Objective: Enhance decision-making in a Mutual Fund platform with GenAI Agents for:
Investor onboarding & profiling
Portfolio recommendation
KYC & compliance checks
Dispute management
Fraud detection
Investment advisory
All while enforcing guardrails for safety and explainability (XAI) for trust and regulatory compliance.
✅ Step-by-Step Solution Blueprint
1. Define Roles of Autonomous AI Agents (Agentic AI)
| Agent | Responsibilities |
| --- | --- |
| Orchestrator Agent | Coordinates multi-agent collaboration, tracks context and decision lineage |
| KYC Compliance Agent | Performs document parsing, face match, PAN/Aadhaar validation |
| Risk & Suitability Agent | Evaluates risk profile using survey + behavior |
| Portfolio Advisor Agent | Recommends funds based on goals, suitability, market trends |
| Fraud Detection Agent | Analyzes anomalies in transactions using ML |
| Dispute Resolution Agent | Auto-summarizes issues and proposes resolutions with escalation triggers |
2. Architecture: Agentic AI with Guardrails + Explainability
🧩 High-Level Flow:
```
User ➜ Gateway API ➜ Orchestrator Agent
 ├─ KYC Agent ➜ Azure OCR / Face API
 ├─ Risk Agent ➜ Azure ML (Rules + Models)
 ├─ Advisory Agent ➜ LLM + Fund Data
 ├─ Fraud Agent ➜ Kafka ➜ Azure ML Model ➜ SHAP
 └─ Guardrails AI + SHAP ➜ Validate + Explain
      ⬑ Orchestrator logs decision path
```
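The flow above can be sketched as a minimal orchestrator loop. This is a plain-Python stand-in for what LangGraph would manage as graph state; the agent functions, thresholds, and the shared `ctx` dict are all illustrative, not the actual services.

```python
# Each agent is a function that reads and enriches a shared context dict;
# the orchestrator records the decision path for auditability (the lineage
# LangGraph would track as state).

def kyc_agent(ctx):
    ctx["kyc_verified"] = ctx.get("pan") is not None and ctx.get("face_match", 0.0) > 0.8
    return ctx

def risk_agent(ctx):
    # Toy suitability score: average of 1-5 survey answers (illustrative only)
    survey = ctx.get("survey", [])
    ctx["risk_score"] = sum(survey) / max(len(survey), 1)
    return ctx

def advisory_agent(ctx):
    ctx["recommendation"] = "equity_fund" if ctx["risk_score"] > 3 else "debt_fund"
    return ctx

def orchestrate(ctx):
    path = []
    for name, agent in [("kyc", kyc_agent), ("risk", risk_agent), ("advisory", advisory_agent)]:
        ctx = agent(ctx)
        path.append(name)          # decision lineage, as the Orchestrator Agent logs it
        if name == "kyc" and not ctx["kyc_verified"]:
            break                  # stop the pipeline on failed KYC
    ctx["decision_path"] = path
    return ctx

result = orchestrate({"pan": "ABCDE1234F", "face_match": 0.93, "survey": [4, 5, 3]})
```

The key design point mirrored here: agents never call each other directly; only the orchestrator sequences them and owns the audit trail.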
3. Key Technologies
| Component | Tech Stack |
| --- | --- |
| LLM Coordination | LangGraph + LangChain |
| Spring Microservices | Spring Boot + Spring Cloud Gateway |
| Guardrails | Guardrails AI (Guardrails Hub / Python) |
| Explainability | SHAP, LIME, Captum (PyTorch) |
| Model Serving | Azure ML, FastAPI / Flask |
| Deployment | Azure AKS, Azure Monitor, Log Analytics, App Gateway |
| Message Bus | Kafka for async events (KYC completed, fraud alerts) |
| Agent Infra | Vector DB (Weaviate / FAISS), Redis for session memory |
4. Guardrails AI – Where and How Applied
| Location | Purpose |
| --- | --- |
| 🛡️ Input Guarding | Enforce input schema (e.g., no profanity, max token limits) |
| 🧠 Output Guarding | Ensure generated advice complies with SEBI, no hallucinations |
| ✅ Validation Checkpoints | All recommendations validated via regex, type, tone |
| 🔐 Audit Logs | Orchestrator logs decisions + validation results for compliance |
Framework Used: Guardrails AI (Python SDK)
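A hand-rolled sketch of the input/output checkpoints above, to make the pattern concrete. Note this is NOT the Guardrails AI SDK; in the real service its validators would replace these checks, and the banned-phrase regex and disclosure text are illustrative assumptions.

```python
import re

MAX_TOKENS = 512  # illustrative input limit
BANNED = re.compile(r"\b(guaranteed returns|risk[- ]free)\b", re.IGNORECASE)

def guard_input(text: str) -> str:
    # Input guarding: enforce a max token (word) limit before it reaches the LLM
    if len(text.split()) > MAX_TOKENS:
        raise ValueError("input exceeds max token limit")
    return text

def guard_output(advice: str) -> str:
    # Output guarding: block SEBI-non-compliant promises outright
    if BANNED.search(advice):
        raise ValueError("non-compliant advice blocked")
    # Tone/type checkpoint: ensure the mandatory risk disclosure is present
    if "subject to market risks" not in advice.lower():
        advice += " Mutual fund investments are subject to market risks."
    return advice
```

Blocked outputs raise rather than silently pass through, so the orchestrator can log the failed validation for the audit trail.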
5. Explainability (XAI) – Where and How Applied
| Agent | Explainability Applied |
| --- | --- |
| Risk Agent | SHAP used to show the top 5 features influencing suitability |
| Fraud Agent | SHAP force plots to justify why a transaction was flagged as fraud |
| Advisory Agent | Textual explanation of fund recommendation logic (goal, risk, sector trend) |
| Dispute Agent | Extractive summaries using LLM + RAG citations |
Tools Used: SHAP, LIME, Captum (PyTorch), and RAG-based citations.
6. Spring Boot Microservices Integration
| Microservice | Responsibilities |
| --- | --- |
| kyc-service | PAN OCR, Aadhaar match, face comparison (Azure Cognitive Services) |
| risk-profile-service | Stores user risk scores, interacts with ML |
| portfolio-service | Calls LLM (LangChain) with vector DB |
| fraud-detector-service | Kafka consumer ➜ model scoring ➜ SHAP explanation |
| explainability-service | Serves SHAP plots + textual reasons |
| guardrails-service | Python FastAPI wrapper for Guardrails validation |
| orchestrator-service | Coordinates all via LangGraph + HTTP APIs |
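The fraud-detector-service pipeline (consume ➜ score ➜ explain) can be sketched as below. The in-memory list stands in for the Kafka consumer, and the z-score rule stands in for the Azure ML model; the transaction history and thresholds are made-up illustration data.

```python
from statistics import mean, stdev

HISTORY = [1200.0, 900.0, 1100.0, 1000.0, 950.0]  # past txn amounts (illustrative)

def score_txn(event):
    # Score one transaction event and attach human-readable reasons,
    # mirroring the Kafka consumer -> model scoring -> explanation flow
    mu, sigma = mean(HISTORY), stdev(HISTORY)
    z = (event["amount"] - mu) / sigma
    location_mismatch = event.get("location") != event.get("usual_location")
    reasons = []
    if z > 3:
        reasons.append(f"amount is {z:.1f} std devs above average")
    if location_mismatch:
        reasons.append("location mismatch")
    return {"txn_id": event["txn_id"], "flagged": z > 3 or location_mismatch,
            "reasons": reasons}

queue = [{"txn_id": "T1", "amount": 9000.0, "location": "RU", "usual_location": "IN"}]
results = [score_txn(e) for e in queue]
```

The design point carried over from the table: scoring and explanation travel together in one event payload, so the explainability-service never has to re-derive why a transaction was flagged.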
7. DevOps and Azure Setup
| Layer | Azure Services |
| --- | --- |
| Deployment | Azure AKS, Azure Container Registry |
| ML Models | Azure ML Endpoints |
| Logging & Monitoring | Azure Monitor, App Insights, Log Analytics |
| Security | Azure Key Vault, App Gateway + WAF, MS Entra ID (AAD) |
| Data Store | Azure SQL, Cosmos DB, Blob Storage, Redis Cache |
🧪 Optional POC / Live Demo Flow
Run a mock model for fraud detection (FastAPI + SHAP)
Deploy on Azure ML or locally
Return SHAP force plot to frontend
Call Guardrails AI wrapper before sending response to user
✅ Sample Output to User (Post-Guardrails + XAI)
“⚠️ This transaction appears unusual and is flagged for manual review.
Top Factors:
- Amount exceeds 3x average (45%)
- Location mismatch from last 5 txns (33%)
- IP address flagged (22%)
Explanation score chart available here.”
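A message like the one above can be assembled directly from the raw (factor, weight) attributions. This formatter is a minimal sketch; the factor names and weights are the illustrative values used throughout this post, normalized into the percentages shown.

```python
def format_alert(factors):
    # factors: list of (name, raw_weight); weights are normalized to percentages
    total = sum(w for _, w in factors)
    lines = ["⚠️ This transaction appears unusual and is flagged for manual review.",
             "Top Factors:"]
    for name, w in sorted(factors, key=lambda p: -p[1]):  # largest factor first
        lines.append(f"- {name} ({w / total:.0%})")
    return "\n".join(lines)

msg = format_alert([("Amount exceeds 3x average", 0.45),
                    ("Location mismatch from last 5 txns", 0.33),
                    ("IP address flagged", 0.22)])
```

Keeping this formatting in one place means the guardrails-service can validate a single, predictable output shape before anything reaches the user.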
📌 Conclusion
With this architecture:
✅ LLMs are controlled and explainable
✅ LangGraph enables deterministic multi-agent orchestration
✅ Guardrails AI ensures safety and compliance
✅ SHAP/LIME explain critical decisions (risk, fraud, advice)
✅ Azure stack offers production-grade scalability and monitoring