
Screening Service Flow

  • Writer: Anand Nerurkar
  • Sep 18
  • 11 min read

End-to-End Screening Service Flow (Enterprise Context)

1. Triggering the Screening

Screening is invoked at multiple customer or transaction touchpoints:

  • Customer Onboarding (KYC/CDD) – new customer Amit R applies for an account.

  • Ongoing Due Diligence – periodic refresh (every 1/2/5 years depending on risk category).

  • Transaction Screening – Amit R sends an international wire transfer → transaction is screened before execution.

2. Data Extraction & Normalization

  • Customer Data: Name, Date of Birth, Nationality, PAN, Passport, Address are extracted from the onboarding/transaction payload.

  • Normalization: Data is standardized → e.g., “Amit R.” vs “R, Amit” → converted to canonical form (uppercase, stripped of punctuation).

  • Enrichment: Additional identifiers (customer ID, account ID, correlation ID) are tagged for traceability.
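The canonical-form step above can be sketched in a few lines of Python. This is a minimal illustration, not the bank's actual logic; the function name and the token-sorting rule are assumptions:

```python
import re
import unicodedata

def normalize_name(raw):
    """Canonical form: accents folded, uppercased, punctuation stripped,
    tokens sorted so 'Amit R.' and 'R, Amit' compare equal."""
    # Fold accented characters to their ASCII base (e.g., 'é' -> 'e')
    folded = unicodedata.normalize("NFKD", raw).encode("ascii", "ignore").decode()
    # Uppercase and replace everything except letters and spaces
    cleaned = re.sub(r"[^A-Z ]", " ", folded.upper())
    # Collapse whitespace and sort tokens for order-insensitive matching
    return " ".join(sorted(cleaned.split()))

normalize_name("Amit R.")   # -> "AMIT R"
normalize_name("R, Amit")   # -> "AMIT R"
```

Both variants collapse to the same canonical string, which is what lets the matching engine treat them as one identity.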

3. Screening Service (Core Engine)

The Screening Service is typically a microservice or enterprise utility deployed centrally. It:

  • Receives Input via API or message queue (Kafka/ESB).

  • Performs Matching against multiple lists:

    • Sanctions lists (OFAC, UN, EU, RBI, HKMA, etc.)

    • PEP lists (Politically Exposed Persons)

    • Adverse Media / Negative News feeds

    • Internal Blacklists/Watchlists

  • Matching Logic includes:

    • Exact match (e.g., passport number in UN list)

    • Fuzzy matching / phonetic matching (e.g., “Mohammed / Mohamed”) using algorithms like Soundex, Levenshtein, or AI-based NLP.

    • Threshold scoring – similarity score (e.g., 85%) above threshold is flagged.
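A toy version of threshold-scored fuzzy matching, using Python's stdlib `difflib.SequenceMatcher` as a stand-in for the Soundex/Levenshtein or NLP models named above; the 85 threshold mirrors the example in the text:

```python
from difflib import SequenceMatcher

THRESHOLD = 85  # similarity score at or above which a name is flagged

def similarity(a, b):
    """Return a 0-100 similarity score between two names."""
    return round(SequenceMatcher(None, a.upper(), b.upper()).ratio() * 100)

def screen_name(candidate, watchlist):
    """Return (entry, score) pairs whose score meets the threshold."""
    scored = ((entry, similarity(candidate, entry)) for entry in watchlist)
    return [(entry, score) for entry, score in scored if score >= THRESHOLD]

screen_name("MOHAMMED ALI", ["MOHAMED ALI", "JOHN DOE"])
# flags "MOHAMED ALI" as a potential hit; "JOHN DOE" scores well below 85
```

Production engines combine several such scorers; the single-ratio version here is only meant to show where the threshold sits in the flow.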

4. Result Classification

The engine classifies the outcome:

  • Clear / No Hit (Pass) → No match, transaction/account proceeds.

  • True Positive → Actual regulatory match (e.g., Amit’s PAN = blacklisted ID). Must be blocked/reported.

  • False Positive → Name similar but not same (e.g., “Amit R” vs “Amit Rahman” in watchlist). Needs human review.

  • Pending / Escalated → Sent to Level-1 Compliance Analyst for manual review.
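The classification above can be sketched as one decision function. The status names and the 85 threshold are illustrative assumptions, not a product's actual enum:

```python
from typing import Optional

THRESHOLD = 85  # fuzzy-match score at or above which a hit is raised

def classify(match_score, identifiers_match: Optional[bool]):
    """Map a screening result onto the outcome categories.
    identifiers_match stays None until an analyst has compared
    hard identifiers (PAN, DOB, passport)."""
    if match_score < THRESHOLD:
        return "CLEAR"            # pass: proceeds automatically
    if identifiers_match is None:
        return "PENDING_REVIEW"   # escalated to an L1 analyst
    return "TRUE_POSITIVE" if identifiers_match else "FALSE_POSITIVE"

classify(60, None)    # -> "CLEAR"
classify(87, None)    # -> "PENDING_REVIEW"
classify(87, False)   # -> "FALSE_POSITIVE"
```

Note that a true/false positive can only be assigned after human review; the engine alone can only produce CLEAR or PENDING_REVIEW.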

5. Case Management & Review

  • Screening hits are logged in a Case Management System (CMS) (e.g., Actimize, Fircosoft, Oracle FCCM, SAS).

  • L1 Analyst reviews supporting documents (passport, PAN, date of birth).

  • If unclear → escalates to L2/L3 compliance officer.

  • Decision outcome (Clear/Block/Report) is recorded.

6. Action Execution

Based on review outcome:

  • Clear → Customer/transaction released automatically.

  • Block/Reject → Transaction stopped, account onboarding denied.

  • Regulatory Report → Suspicious Transaction Report (STR) / Suspicious Activity Report (SAR) filed to FIU-IND or equivalent regulator.

7. Audit, Logging & Regulator Readiness

  • Every screening attempt (input, list version, match score, reviewer decision) is logged immutably in DB + audit log.

  • Reports generated for:

    • Internal Audit (prove no bypass of screening).

    • Regulator Inspections (FIU, RBI, SEBI).

  • Versioning: Banks maintain the list version used at screening time → so if later challenged, they can show “On date X, OFAC list v5.2 was used.”
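One common way to make such logs tamper-evident is hash chaining, sketched below. This is an illustrative pattern (field names are assumptions), not a specific product's implementation:

```python
import hashlib
import json

def append_audit(chain, event):
    """Append a tamper-evident audit entry: each record hashes the previous
    record's hash together with its own payload, so any later edit to an
    entry breaks every subsequent hash in the chain."""
    prev_hash = chain[-1]["hash"] if chain else "GENESIS"
    payload = json.dumps(event, sort_keys=True)
    entry = {
        "event": event,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
    }
    chain.append(entry)
    return entry

chain = []
append_audit(chain, {"customerId": "CUST12345", "listVersion": "OFAC v5.2",
                     "matchScore": 87, "reviewer": None})
append_audit(chain, {"customerId": "CUST12345", "decision": "FALSE_POSITIVE",
                     "reviewer": "L1 Analyst 123"})
```

Storing the list version inside each event is what lets the bank later prove which watchlist release was in force at screening time.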

8. Continuous Improvement

  • False Positive Tuning: Adjust thresholds to balance compliance vs efficiency.

  • ML/AI Integration: Train models on past hits to reduce unnecessary escalations.

  • Dynamic Watchlists: Auto-ingestion of regulator updates daily/hourly.

  • Feedback Loop: Analyst decisions feed back into engine to improve matching.

🔹 Typical Enterprise Architecture Components for Screening

  • Screening Microservice – REST/GraphQL APIs, Kafka consumers.

  • Watchlist Data Store – refreshed daily from regulator feeds.

  • Matching Engine – rules + ML + fuzzy matching.

  • Case Management – for investigations, workflow, and reporting.

  • Audit & Compliance Reporting Layer – regulator-ready evidence.

In short: the Screening Service acts as a gatekeeper at every entry and transaction point. It ensures that customers like Amit R are checked against sanctions/PEP/adverse media lists, that hits are investigated, that regulators can audit the process, and that the business can prove compliance without generating excessive false positives.


Let’s now extend the Screening Service walkthrough with Amit R, and also cover real-world APIs like Refinitiv World-Check, Dow Jones Risk & Compliance, and how banks integrate them.

📝 End-to-End Walkthrough – Screening Service with Amit R

Step 1. Event Trigger

  • Scenario: Amit R applies for an account → onboarding workflow calls the Screening Service API.

  • Payload sent:

    {
      "customerId": "CUST12345",
      "fullName": "Amit R",
      "dob": "1988-04-05",
      "nationality": "IN",
      "documentType": "PAN",
      "documentId": "ABCDE1234F",
      "address": "Mumbai, India",
      "transactionId": null,
      "screeningType": "CUSTOMER_ONBOARDING"
    }

  • Transport: JSON over HTTPS (TLS 1.2+).
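A minimal sketch of assembling and validating this payload before the HTTPS POST. The helper name and the required-field set are assumptions; the network call itself is out of scope here:

```python
import json

REQUIRED_FIELDS = {"customerId", "fullName", "dob", "nationality",
                   "documentType", "documentId", "screeningType"}

def build_screening_request(customer):
    """Reject incomplete payloads early, then serialize for the POST body.
    (The actual transport is JSON over HTTPS with TLS 1.2+.)"""
    missing = REQUIRED_FIELDS - customer.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return json.dumps(customer, sort_keys=True)

body = build_screening_request({
    "customerId": "CUST12345", "fullName": "Amit R", "dob": "1988-04-05",
    "nationality": "IN", "documentType": "PAN", "documentId": "ABCDE1234F",
    "address": "Mumbai, India", "transactionId": None,
    "screeningType": "CUSTOMER_ONBOARDING",
})
```

Failing fast on missing identifiers matters because an incomplete payload degrades match quality downstream.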

Step 2. Integration with Screening Providers

Banks don’t usually maintain sanctions lists manually — they subscribe to data providers (Refinitiv, Dow Jones, LexisNexis, Accuity).

  • Option A: Local Screening Engine

    • The bank downloads watchlist feeds daily from provider SFTP/API (sanctions, PEP, adverse media).

    • These are ingested into an internal DB + matched by in-house engine.

  • Option B: API Call to Provider (common in modern setups):

    • Example: Refinitiv World-Check One API

      • POST /v1/screening/requests

      • Payload: Amit’s details (name, DOB, nationality, identifiers).

      • The API returns possible matches with risk scores, match type, and list source.

    • Example: Dow Jones Risk & Compliance API

      • Similar endpoint, also supports adverse media search in real time.
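Option A's feed ingestion can be sketched with stdlib `sqlite3` standing in for the internal watchlist store. The table layout and column names are assumptions for illustration:

```python
import sqlite3

def ingest_watchlist(conn, list_source, list_version, entries):
    """Load one day's feed into the local watchlist store, tagging every
    row with source and version so 'on date X, OFAC list v5.2 was used'
    can be proven later."""
    conn.execute("""CREATE TABLE IF NOT EXISTS watchlist (
        list_source TEXT, list_version TEXT, name TEXT, dob TEXT)""")
    conn.executemany(
        "INSERT INTO watchlist VALUES (?, ?, ?, ?)",
        [(list_source, list_version, name, dob) for name, dob in entries])
    conn.commit()

conn = sqlite3.connect(":memory:")  # stand-in for the bank's watchlist DB
ingest_watchlist(conn, "OFAC", "v5.2", [("AMIT RAHMAN", "1975-03-12"),
                                        ("JOHN DOE", "1960-01-01")])
```

In a real deployment the ingest job would run on each provider refresh and retire superseded list versions rather than a single in-memory table.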

Step 3. Screening Engine Matching

  • The Screening Service either:

    • Queries external API (Refinitiv/Dow Jones) → receives raw matches.

    • Or runs local matching using their feeds → applies fuzzy matching, AI scoring.

  • Output example:

    {
      "customerId": "CUST12345",
      "status": "POTENTIAL_HIT",
      "matches": [
        {
          "listSource": "OFAC",
          "nameMatched": "Amit Rahman",
          "matchScore": 87,
          "dobMatched": false,
          "nationalityMatched": false
        }
      ]
    }
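Routing on such a response might look like the following sketch; the status values and routing labels are assumptions, not a provider's actual schema:

```python
import json

def route_result(response_text):
    """Decide the next workflow step from a screening response."""
    result = json.loads(response_text)
    if result["status"] == "CLEAR" or not result.get("matches"):
        return "AUTO_APPROVE"
    return "CREATE_CMS_CASE"  # any potential hit goes to an L1 analyst

sample = json.dumps({
    "customerId": "CUST12345",
    "status": "POTENTIAL_HIT",
    "matches": [{"listSource": "OFAC", "nameMatched": "Amit Rahman",
                 "matchScore": 87, "dobMatched": False}],
})
route_result(sample)  # -> "CREATE_CMS_CASE"
```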

Step 4. Case Management

  • Since Amit R is a potential hit (87% score) → the record is logged into Case Management System (CMS) (e.g., Actimize, Fircosoft, Oracle FCCM).

  • Analyst Workflow:

    • L1 Analyst compares Amit’s PAN, DOB vs. watchlist entry.

    • Finds mismatch (DOB, ID don’t match).

    • Marks as False Positive.

  • Decision is stored, audit log updated:

    { "decision": "False Positive", "reviewer": "L1 Analyst 123", "timestamp": "2025-09-15T12:05Z" }

Step 5. Outcome Execution

  • CMS sends outcome back to Screening Service → updates onboarding system.

  • Amit’s account is approved and onboarding continues.

  • Audit trail ensures that if regulator asks later, bank can show:

    • What list version was used,

    • What match was found,

    • Who reviewed,

    • Final decision.

Step 6. Transaction Screening (Future)

  • Later Amit does an international wire transfer → Payment system triggers Transaction Screening API.

  • Payload example:

    {
      "transactionId": "TXN98765",
      "senderName": "Amit R",
      "senderAccount": "1234567890",
      "receiverName": "John Doe",
      "receiverCountry": "US",
      "amount": 1500,
      "currency": "USD",
      "purpose": "Education"
    }

  • Screening Service checks:

    • Sender/Receiver names vs. lists.

    • Country (if sanctioned e.g., Iran, North Korea).

    • Purpose code vs. high-risk categories.

  • If clean → pass. If hit → block transaction + escalate case.
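The checks above can be sketched as follows. The country codes, purpose codes, and outcome labels are illustrative assumptions, not a real rule set:

```python
SANCTIONED_COUNTRIES = {"IR", "KP"}          # e.g., Iran, North Korea
HIGH_RISK_PURPOSES = {"GAMBLING", "CRYPTO"}  # illustrative purpose codes

def screen_transaction(txn):
    """Apply country and purpose-code checks; sender/receiver name
    screening would reuse the same matching engine as onboarding."""
    if txn["receiverCountry"] in SANCTIONED_COUNTRIES:
        return "BLOCK"     # sanctioned destination: stop and escalate
    if txn["purpose"].upper() in HIGH_RISK_PURPOSES:
        return "ESCALATE"  # high-risk purpose code: manual review
    return "PASS"

screen_transaction({"transactionId": "TXN98765", "senderName": "Amit R",
                    "receiverCountry": "US", "amount": 1500,
                    "currency": "USD", "purpose": "Education"})  # -> "PASS"
```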

Step 7. Regulatory Reporting

  • If a True Positive was found, the bank must:

    • Block account/transaction.

    • File STR (Suspicious Transaction Report) or CTR (Cash Transaction Report) to FIU-IND.

    • Report is generated automatically from CMS with case evidence.

🔹 How Integration is Done (Technical Flow)

1. Bank Systems → Screening Service

  • Systems (CBS, Mobile Banking, Payments, Onboarding) call internal Screening Microservice (REST API / Kafka event).

2. Screening Service → External Providers

  • Options:

    • Synchronous API calls → Refinitiv/Dow Jones.

    • Local DB Matching → watchlists refreshed daily from provider feeds.

3. Data Passed

  • For customers: Name, DOB, ID type + number, nationality, address.

  • For transactions: Sender/receiver details, country, amount, purpose code.

4. Security & Compliance

  • All API calls are TLS encrypted.

  • PII data masked where possible.

  • Audit logs stored with immutable ID + timestamp.
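A simple masking helper of the kind used for PII in logs; the last-four convention is an assumption, and banks vary in how much of each identifier they retain:

```python
def mask_pii(value, visible=4):
    """Mask all but the last `visible` characters of an identifier,
    so logs stay useful for reconciliation without exposing the full PII."""
    if len(value) <= visible:
        return "*" * len(value)
    return "*" * (len(value) - visible) + value[-visible:]

mask_pii("ABCDE1234F")   # PAN     -> "******234F"
mask_pii("1234567890")   # account -> "******7890"
```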

In Summary

  • Trigger: Onboarding or transaction event.

  • Data passed: Customer identifiers or transaction metadata.

  • Integration: Internal Screening Service → external providers (Refinitiv/Dow Jones) or in-house engine.

  • Matching: Exact/fuzzy/ML scoring.

  • Outcome: Pass / False Positive / True Positive.

  • Case Management: Analysts review borderline cases.

  • Regulator-ready audit: Logs, list version, reviewer decision.


Next, let's clarify the decision flow for screening in enterprises.

🔹 Screening Result Flow

  1. Pass (Clear = no hit)

    • ✅ Automatically Approved

    • No human review needed

    • Onboarding/transaction continues

  2. Potential Hit (Name/DOB similarity, but not exact)

    • ⚠️ Goes for Review by L1 compliance analyst

    • Analyst compares additional identifiers (PAN, Passport, DOB, Nationality)

    • Outcome after review:

      • If mismatch → mark as False Positive → Approved

      • If match → mark as True Positive → Blocked/Reported

  3. True Positive (Confirmed match with sanctions/PEP/adverse media)

    • Rejected/Blocked

    • Customer cannot be onboarded OR transaction stopped

    • Case escalated + STR/SAR filed to FIU/regulator

So in short:

  • Pass → Directly Approved (no review).

  • False Positive → Initially flagged, but after analyst review → Approved.

  • True Positive → Block/Report.


🔹 Screening Decision Outcomes

| Result | Meaning | Next Step | Final Status |
| --- | --- | --- | --- |
| Pass | No match found in any list (clean record). | Auto-approve, no analyst review needed. | ✅ Approved |
| Potential Hit | Name/identifier similarity, but not conclusive (system flags as “suspect”). | Sent to analyst for manual review. | Pending → Reviewed |
| False Positive | After analyst review → mismatch (e.g., name similar but PAN/DOB don’t match). | Case closed, proceed with onboarding/transaction. | ✅ Approved |
| True Positive | Analyst confirms match with sanctions/PEP list. | Block onboarding/transaction + file STR/SAR. | ❌ Rejected / Reported |

✅ So yes → if the identifiers mismatch after human review, the case is marked as a False Positive → Approved.
❌ Only True Positives result in rejection/blocking.


🔹 Correct Regulatory Practice in Enterprises

  • Pass (Clear) → ✅ Onboard / allow transaction (no issues).

  • Potential Hit → ⚠️ Needs manual review by analyst.

  • After review:

    • False Positive → This means the system thought there was a match, but after checking identifiers (DOB, PAN, Passport, etc.), the analyst finds it’s NOT the same person/entity.

      • Customer can be onboarded / transaction can go through.

      • Example: “Amit R” flagged against “Amit Rahman” on UN list → but PAN/DOB mismatch proves it’s different.

    • True Positive → Analyst confirms it’s the same as sanctions/PEP entry.

      • Block onboarding / transaction.

      • Report to FIU/regulator.

🔹 Why False Positive = Approved

If every false positive was blocked, banks would:

  • Wrongly deny thousands of genuine customers.

  • Fail financial inclusion obligations.

  • Draw regulator criticism for being over-conservative.

That’s why manual review exists → to distinguish genuine customers from true risks.

So, final clarification:

  • False Positive = Approved (after review).

  • True Positive = Blocked/Reported.


Citing regulators in your interview will show that you understand not only enterprise practice but also compliance law.

🔹 Key Regulator References on False Positives

  1. FATF (Financial Action Task Force) – International Standard

    • FATF explicitly states that screening tools will generate false positives, and these should be resolved through manual review.

    • Only true matches (confirmed hits) should lead to blocking and reporting.

    • Ref: FATF Guidance on AML/CFT Risk-Based Approach, 2021.

  2. RBI (India) – Master Direction on KYC (2023)

    • Section 38: Regulated entities must screen customers against UN Sanctions/PEP lists.

    • Section 39: If a match is found, banks must verify identifiers and escalate for review.

    • Only confirmed matches are treated as reportable; false positives can be onboarded.

  3. OFAC (US Treasury, for global best practice)

    • OFAC FAQ #41: "A potential match does not mean you are dealing with an SDN. You must determine if the information truly matches."

    • If not, it is a false positive and business may continue.

🔹 How to Say It in the Interview

"Our screening service integrates with external providers like Refinitiv World-Check or Dow Jones. Any name similarity triggers a potential hit. As per FATF, RBI, and OFAC guidelines, potential hits undergo manual review. If it’s a false positive, the customer is cleared and onboarding proceeds. If it’s a true positive, onboarding is blocked and the case is reported to FIU-IND or the relevant regulator."


Now let's do a sample walkthrough with Amit R being flagged. This is the kind of scenario-based storytelling that wins interviews because it shows both technical solutioning and compliance awareness.

🔹 Case Study: Screening Flag for Amit R

Step 1: Onboarding Trigger

  • Customer Amit R applies for account opening.

  • KYC/CDD service collects identifiers:

    • Full Name: Amit R

    • DOB: 15/07/1988

    • PAN: ABCDE1234F

    • Address: Mumbai, India

Step 2: Screening Service

  • Screening microservice sends request to provider (e.g., Refinitiv World-Check API).

  • Payload: { name, DOB, PAN, nationality, address }.

  • Response: Match found → “Amit Rahman” on UN Consolidated List.

Step 3: System Action

  • System categorizes as Potential Hit.

  • Customer onboarding workflow is paused.

  • Case is routed to Level-1 Compliance Analyst via Case Management system.

  • Audit log created: CaseID #12345, Reason: Name Match, Source: UN List.

Step 4: Analyst Manual Review

The analyst compares:

| Field | Customer (Amit R) | Watchlist (Amit Rahman) | Match? |
| --- | --- | --- | --- |
| Full Name | Amit R | Amit Rahman | Partial |
| DOB | 15/07/1988 | 12/03/1975 | No |
| PAN | ABCDE1234F | N/A | No |
| Nationality | Indian | Bangladeshi | No |
  • Conclusion: Identifiers do not match.

  • Determination: False Positive.

Step 5: Decision Documentation

  • Analyst updates case system:

    • “Reviewed against UN list entry. Name similarity only. DOB, PAN, nationality differ. Determined False Positive.”

  • Audit trail locked for RBI/FIU inspection.

Step 6: System Action

  • Case status → False Positive – Cleared.

  • Customer onboarding resumes.

  • Notification sent to Ops: “Customer approved – false positive case.”

🔹 Interview-Friendly Soundbite

"Let me give you a real-world example. Suppose a customer named Amit R is flagged by Refinitiv because of a partial name match to a UN-listed individual Amit Rahman. Our screening system pauses onboarding and creates a compliance case. The analyst compares identifiers like DOB, PAN, and nationality, finds clear mismatches, and records the decision as a False Positive. This decision is logged for audit, and onboarding continues. If it had been a True Positive, onboarding would be blocked and escalated to FIU-IND. This shows how we balance regulatory compliance with customer experience."


👉 Next is the True Positive walkthrough: the same flow, but ending with a block and an STR filing to FIU-IND, so you have both cases ready for the panel.


🔹 Case Study: Screening Flag – True Positive

Step 1: Onboarding Trigger

  • Customer Amit R applies for account opening.

  • KYC/CDD service collects identifiers:

    • Full Name: Amit R

    • DOB: 12/03/1975

    • Passport: Z1234567

    • Nationality: Bangladeshi

Step 2: Screening Service

  • Screening microservice calls Dow Jones Risk & Compliance API.

  • Payload: { name, DOB, passport, nationality }.

  • Response: Match found → UN Sanctions List – Individual “Amit Rahman”.

Step 3: System Action

  • Match risk score = High (95%).

  • Customer onboarding workflow paused immediately.

  • Case created in Case Management System (CMS) → CaseID #98765.

  • Audit log created: “Potential Sanctions Hit – UN List.”

Step 4: Analyst Manual Review

The analyst compares:

| Field | Customer (Amit R) | Watchlist (Amit Rahman) | Match? |
| --- | --- | --- | --- |
| Full Name | Amit R | Amit Rahman | Partial |
| DOB | 12/03/1975 | 12/03/1975 | Yes |
| Passport | Z1234567 | Z1234567 | Yes |
| Nationality | Bangladeshi | Bangladeshi | Yes |
  • Conclusion: Identifiers fully match.

  • Determination: True Positive.

Step 5: Decision Documentation

  • Analyst records decision in CMS:

    • “Confirmed match against UN Sanctions list. All identifiers align (DOB, Passport, Nationality). This is a True Positive.”

  • Case escalated to Level-2 Senior Compliance Officer for approval.

Step 6: Final Action

  • Case approved as True Positive.

  • Customer blocked from onboarding.

  • System triggers:

    1. Reject Application → notify Ops team.

    2. Generate Suspicious Transaction Report (STR) → automatically prepared.

    3. File STR with FIU-IND (India) or local regulator.

    4. Retain audit trail for regulatory inspection.

🔹 Interview-Friendly Soundbite

"Let’s take a True Positive case. Suppose Amit R applies for onboarding, and the screening service finds a direct match on the UN Sanctions List — including exact name, DOB, and passport. The onboarding is stopped immediately. Compliance analysts validate the identifiers, confirm it’s a True Positive, and record the decision. The customer is rejected, an STR is filed with FIU-IND, and the full audit trail is preserved for regulators. This ensures the bank does not onboard sanctioned individuals while demonstrating end-to-end compliance."

✅ Now you have both narratives:

  • False Positive → Onboard (after analyst review)

  • True Positive → Block & Report


👉 Finally, consider the “gray zone” scenario: a partial but inconclusive match that escalates to Enhanced Due Diligence (EDD). Interviewers sometimes test this middle case, and having a Gray Zone / EDD scenario ready shows you can handle real-world complexity, not just binary pass/fail.

🔹 Case Study: Screening Flag – Gray Zone (EDD Required)

Step 1: Onboarding Trigger

  • Customer Amit R applies for a high-value corporate account.

  • KYC/CDD service collects:

    • Full Name: Amit R

    • DOB: 15/07/1978

    • Passport: Z9876543

    • Nationality: Indian

Step 2: Screening Service

  • Screening microservice calls Refinitiv World-Check API.

  • Payload: { name, DOB, passport, nationality }.

  • Response: Potential match → Politically Exposed Person (PEP): “Amit R Sharma”, Senior Govt Official in India.

Step 3: System Action

  • Match risk score = Medium (65%).

  • Onboarding workflow paused.

  • Case created in Case Management System (CMS) → CaseID #45678.

  • Audit log entry: “PEP Potential Match – World-Check.”

Step 4: Analyst Review

Comparison:

| Field | Customer (Amit R) | Watchlist (Amit R Sharma) | Match? |
| --- | --- | --- | --- |
| Full Name | Amit R | Amit R Sharma | Partial |
| DOB | 15/07/1978 | 15/07/1979 | Close (±1 yr) |
| Passport | Z9876543 | Not Available | N/A |
| Nationality | Indian | Indian | Yes |
| Occupation | Private Sector Employee | Senior Politician | No |
  • Conclusion: Not a clear True Positive.

  • But risk indicators present → Possible PEP confusion.

Step 5: EDD Trigger

  • Analyst marks as “Inconclusive – Escalate to Enhanced Due Diligence.”

  • Additional steps initiated:

    1. Request extra documents (Employment proof, Address proof, PAN, Aadhaar).

    2. Perform media screening (negative news search).

    3. Check with independent vendor (e.g., Dow Jones PEP database).

    4. Senior Compliance Officer reviews case.

Step 6: Outcome

  • After EDD:

    • Media search shows no adverse news.

    • PAN/Aadhaar verified, employer confirmed → private sector.

    • Dow Jones confirms not the same individual.

  • Final decision: False Positive (after EDD) → ✅ Customer approved.

🔹 Interview-Friendly Soundbite

"Not all matches are clear-cut. For example, suppose Amit R is flagged as a possible PEP because of a partial match to ‘Amit R Sharma’, a senior politician. The system pauses onboarding, and compliance performs Enhanced Due Diligence. This includes extra document checks, media screening, and sometimes secondary vendor validation. If EDD clears the case, the customer can be onboarded. If doubts remain, the case is escalated further. This gray zone handling is critical in enterprise compliance — it balances regulatory expectations with customer experience."

✅ Now you have all three cases ready:

  1. False Positive → Approve

  2. True Positive → Block + STR

  3. Gray Zone → EDD → Final decision


 
 
 
