Which Cloud? How to Deploy?

  • Writer: Anand Nerurkar
  • Oct 24
  • 7 min read

Excellent — this is exactly the kind of scenario-based, architecture governance question you’ll face in your interview for the Enterprise Architect – Banking Cloud Platforms role.

Let’s go step by step with a structured, CTO-level answer — including how to decide placement, access, and deployment strategy.

🎯 Question:

How will you decide which components will be on-prem, which will be on cloud, how will you access on-prem from cloud or cloud from on-prem, and how will you manage deployment across both environments?

🧩 Step 1: Establish Decision Framework

Answer:

As an Enterprise Architect, I start with a structured decision framework based on business, regulatory, technical, and operational drivers.

📊 Criteria for Component Placement:

  • Regulatory / Data Residency: whether data can be stored or processed outside a specific geography (RBI, SEBI, GDPR). Example: PII, KYC data → on-prem or India cloud region.

  • Latency & Performance: components requiring sub-millisecond latency to core banking systems. Example: core transaction engine → on-prem.

  • Elasticity / Compute Bursts: components needing scale-up/scale-down elasticity. Example: AI/ML scoring, analytics → public cloud (AWS/GCP).

  • Integration Complexity: systems deeply coupled with legacy mainframes or hardware HSMs. Example: payment switch, HSM → on-prem.

  • Security Posture & Controls: ability to enforce zero trust, encryption, and key management. Example: tokenization service → hybrid.

  • Cost & TCO Optimization: trade-off between CapEx and OpEx. Example: batch jobs → cloud (spot instances).

  • Modernization Roadmap: whether the system is being re-architected or remains legacy. Example: stepwise migration from on-prem → cloud-native.

Outcome: We categorize services into three buckets:

  • Stay on-prem (core banking, data vaults)

  • Move to cloud (digital channels, analytics, AI)

  • Hybrid connectivity (API gateway, integration layer, data replication)
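The three-bucket outcome above can be pictured as a simple rules helper. This is a minimal sketch: the trait names and classification rules are illustrative assumptions, not a real placement tool.

```python
# Illustrative three-bucket placement decision (rules are hypothetical).

def classify_component(traits):
    """Return a placement bucket for a component based on its traits."""
    if traits & {"pii", "data-residency", "hsm", "core-banking"}:
        return "on-prem"      # regulatory constraints or tight legacy coupling
    if traits & {"elastic", "gpu", "analytics"}:
        return "cloud"        # burst compute, managed AI services
    return "hybrid"           # gateways, integration layer, data replication

components = {
    "Core Loan Engine": {"core-banking", "data-residency"},
    "ML Scoring Service": {"elastic", "gpu"},
    "API Gateway": {"integration"},
}

placement = {name: classify_component(t) for name, t in components.items()}
print(placement)
# {'Core Loan Engine': 'on-prem', 'ML Scoring Service': 'cloud', 'API Gateway': 'hybrid'}
```

In practice the decision matrix has many more factors and weights; the point is that the bucketing is rule-driven and auditable, not ad hoc.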

🧭 Step 2: Define the Connectivity and Access Patterns

Answer:

Once component placement is decided, I design secure hybrid connectivity ensuring seamless data and API access across both environments.

🔐 Secure Access Patterns

  1. Hybrid Connectivity Setup

    • Use Azure ExpressRoute / AWS Direct Connect / GCP Interconnect for private low-latency connection between on-prem DC and cloud VPC.

    • No traffic over public internet.

  2. Identity & Access

    • Federate identity with Azure AD / Okta for both environments using SSO and conditional access.

    • Enforce Zero Trust Network Access (ZTNA) — “never trust, always verify”.

  3. Service-to-Service Access

    • Use Private Link / VPC Peering to connect services privately.

    • For APIs → expose via API Gateway deployed on both sides with mutual TLS and JWT validation.

  4. Data Access

    • Replicate operational data to cloud via CDC tools (Debezium / GoldenGate) for analytics without breaching residency.
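The JWT validation mentioned in point 3 can be illustrated with a minimal HS256 sketch using only the standard library. A production gateway would use a vetted JWT library with RS256 and keys fetched from the identity provider's JWKS endpoint; the helper names and claims below are assumptions for illustration.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def make_jwt_hs256(claims: dict, secret: bytes) -> str:
    """Issue a token (in real life the identity provider does this)."""
    head = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{head}.{body}".encode(), hashlib.sha256).digest()
    return f"{head}.{body}.{b64url(sig)}"

def verify_jwt_hs256(token: str, secret: bytes) -> dict:
    """Gateway-side check: signature first, then expiry; returns the claims."""
    head, body, sig = token.split(".")
    expected = hmac.new(secret, f"{head}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        raise ValueError("invalid signature")
    claims = json.loads(b64url_decode(body))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims

token = make_jwt_hs256({"sub": "loan-service", "exp": time.time() + 300}, b"shared-secret")
print(verify_jwt_hs256(token, b"shared-secret")["sub"])  # loan-service
```

Mutual TLS handles transport-level trust between the environments; the JWT check above is the application-level layer on top of it.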

🚀 Step 3: Deployment & CI/CD Strategy

Answer:

Deployment in a hybrid setup is managed using a single DevOps pipeline but with environment-specific stages and agents.

🧱 Deployment Pattern

A. Unified Pipeline (Azure DevOps / Jenkins / GitHub Actions)

  • Stage 1: Build artifacts once → store in central artifact repo (Nexus/ACR).

  • Stage 2: Deploy to on-prem Kubernetes (OpenShift/VMs) using on-prem agents.

  • Stage 3: Deploy to cloud AKS/EKS/GKE using cloud agents.
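The "build artifacts once" principle in Stage 1 is typically enforced by content-addressing the artifact so that Stages 2 and 3 both deploy the identical, immutable build. A minimal sketch; the registry name and version are made up.

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """Content-address the build output so every stage deploys the same artifact."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

artifact = b"loan-svc build 1.4.2"   # stand-in for the container image bytes
digest = artifact_digest(artifact)

# Both deployment stages reference the immutable digest, never a mutable tag.
plan = [
    {"target": "onprem-openshift", "image": f"registry.example.internal/loan-svc@{digest}"},
    {"target": "cloud-aks",        "image": f"registry.example.internal/loan-svc@{digest}"},
]
assert plan[0]["image"] == plan[1]["image"]   # same bits on-prem and in cloud
print(plan[0]["image"])
```

Pinning by digest rather than by tag (such as `:latest`) is what makes "what is running where" auditable across both environments.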

B. Configuration Management

  • Use Helm + Terraform for IaC.

  • Parameterize environment-specific configurations (network, secrets, endpoints).

  • Store secrets in Vault / KeyVault / Secrets Manager.

C. Deployment Governance

  • Enforce change approvals, vulnerability scans, and compliance checks in the pipeline.

  • Integrate Veracode / Snyk / Prisma Cloud for DevSecOps.

D. Observability

  • Unified monitoring via Azure Monitor / Prometheus / Grafana, logging via ELK / Splunk.

  • Cross-environment correlation using trace IDs for distributed transactions.
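Trace-ID correlation works by minting an ID at the first hop and forwarding it unchanged on every subsequent call, so on-prem and cloud log entries share a join key. The sketch below uses the W3C Trace Context header name; the propagation logic is illustrative.

```python
import uuid

TRACE_HEADER = "traceparent"   # W3C Trace Context header name

def ensure_trace_id(headers: dict) -> dict:
    """Mint a trace ID at the first hop; later hops forward it unchanged."""
    if TRACE_HEADER not in headers:
        trace_id = uuid.uuid4().hex        # 32-hex trace-id
        span_id = uuid.uuid4().hex[:16]    # 16-hex parent span-id
        headers = {**headers, TRACE_HEADER: f"00-{trace_id}-{span_id}-01"}
    return headers

edge = ensure_trace_id({})           # on-prem edge gateway mints the ID
cloud_hop = ensure_trace_id(edge)    # cloud service reuses it for correlation
assert edge[TRACE_HEADER] == cloud_hop[TRACE_HEADER]
print("correlate logs on:", edge[TRACE_HEADER])
```

With the same ID in both Splunk/ELK on-prem and Azure Monitor in the cloud, a distributed transaction can be stitched together end to end.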

⚙️ Step 4: Example Architecture Flow

Example Use Case: Digital Lending Platform (Hybrid)

  • Core Loan Engine → On-prem: tight integration with CBS, data residency.

  • API Gateway → Hybrid: cloud-facing APIs + internal routing.

  • Digital Onboarding UI → Cloud (AKS): elastic demand, global availability.

  • KYC Service → On-prem: PII compliance.

  • ML Scoring Service → Cloud (GCP): GPU compute elasticity.

  • Data Lake → Cloud (Azure / GCP): analytics at scale.

  • Security / IAM → Hybrid (AD + Azure AD): centralized identity federation.

Data moves securely via ExpressRoute, APIs exposed via API Gateway, and deployments handled through unified DevOps pipelines.

🧠 Step 5: Close with Governance and Risk Mitigation

Answer:

To ensure architecture consistency and compliance:
  • I define a placement decision matrix (as above).

  • Conduct Architecture Review Boards to approve movement of workloads.

  • Maintain architecture registry in tools like LeanIX or ServiceNow CMDB.

  • Apply continuous compliance checks for RBI/SEBI mandates.

  • Plan for failover and DR (Active-Active or Active-Passive) across on-prem and cloud.

✅ Summary

I follow a structured hybrid architecture strategy. First, I classify workloads based on regulatory, performance, and modernization factors. Sensitive and tightly coupled systems stay on-prem, while elastic or AI workloads move to the cloud. Connectivity is through private channels like ExpressRoute, with unified identity federation. I maintain one CI/CD pipeline with environment-specific deployments, using IaC, Helm, Terraform, and DevSecOps controls. Finally, I ensure consistency and compliance through governance boards, standards, and continuous monitoring. This approach gives us scalability and innovation from the cloud while preserving control and compliance on-prem.

In our digital lending modernization initiative — the Amit use case — we adopted a full cloud-native approach because the enterprise had already completed its cloud compliance assessment with RBI and enabled data residency controls on Azure India region.


However, in a typical banking environment, not every system can be moved at once. For example, core banking or payment systems may remain on-prem due to latency, integration, or vendor lock-in.


In such cases, I follow a hybrid design — keeping sensitive systems on-prem, enabling secure connectivity (ExpressRoute or Direct Connect), and gradually migrating non-critical workloads to the cloud following a phased modernization roadmap.


So, the architectural approach depends on the organization’s current maturity and compliance posture — whether they are cloud-first or still hybrid.


🧩 In Short — When to Use Each Approach

  • Greenfield modernization (like Amit) → Cloud-native: all services on Azure (India region).

  • Brownfield transformation (typical bank) → Hybrid: core on-prem, digital & AI on cloud.

  • Regulatory sandbox / test environment → Cloud (isolated tenant): separate VNet & IAM.

  • Gradual modernization roadmap → Phased hybrid-to-cloud: start with digital, end with core.

This is one of the most powerful and realistic Enterprise Architect questions you can face in an interview.

Let’s walk through it step-by-step, using a realistic BFSI hybrid use case, including:

  • Business need

  • Architecture decision (what stays on-prem, what moves to cloud)

  • Connectivity pattern (how cloud ↔ on-prem are linked)

  • Access, security, and deployment setup

🏦 Use Case: Fraud Detection & Transaction Monitoring in a Bank

🎯 Business Context

A Tier-1 bank wants to modernize its fraud detection system to enable real-time anomaly detection on transactions across channels (UPI, NEFT, internet banking).

However:

  • Core banking, payments, and customer master data must stay on-prem (RBI data residency + latency + vendor contracts).

  • The ML-based fraud detection engine and analytics layer are hosted on Azure Cloud to leverage scalable compute, GPUs, and managed AI services.

So we end up with a hybrid architecture — some systems on-prem, some on cloud.

🧩 Step 1: Component Placement Decision

  • Core Banking System (CBS) → On-prem: legacy vendor-managed, high security and low latency.

  • Payments Switch (RTGS, UPI, NEFT) → On-prem: integrates with NPCI systems, strict RBI controls.

  • Customer Master Data / PII Store → On-prem: data residency and masking requirements.

  • Event Streaming (Kafka) → On-prem + cloud mirror: on-prem Kafka cluster replicates selective topics to cloud.

  • Fraud Detection Microservices (Spring Boot + ML model) → Azure (AKS): elastic compute, AI scalability.

  • Feature Store + ML Model Training → Azure ML / Databricks: GPU compute, parallel model training.

  • Dashboard & Reporting (Power BI / Grafana) → Azure Cloud: visualization, secure access via RBAC.

  • Security / IAM → Hybrid (Azure AD + AD Federation): unified identity + conditional access.
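The "replicates selective topics" placement for Kafka behaves like an allowlist filter, similar in spirit to MirrorMaker's topic-selection regex. The patterns below are illustrative assumptions.

```python
import re

# Only non-sensitive event streams are mirrored to the cloud cluster
# (patterns are illustrative; MirrorMaker is configured with a similar topic regex).
MIRROR_ALLOWLIST = [r"^txn\.", r"^fraud\."]

def should_mirror(topic: str) -> bool:
    """True if the topic may be replicated from the on-prem cluster to cloud."""
    return any(re.match(p, topic) for p in MIRROR_ALLOWLIST)

topics = ["txn.initiated", "customer.pii.updated", "fraud.txn.initiated"]
print([t for t in topics if should_mirror(t)])
# ['txn.initiated', 'fraud.txn.initiated']
```

Keeping the allowlist explicit means PII-bearing topics stay on-prem by default, and adding a topic to the mirror is a reviewable change.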

🔄 Step 2: Why Connectivity Was Needed

We needed bidirectional connectivity because:

  1. From On-prem → Cloud

    • Real-time transaction events from CBS and payments needed to be streamed to cloud for ML scoring.

    • Fraud engine API hosted on Azure needs to be called synchronously or asynchronously.

  2. From Cloud → On-prem

    • Once the ML engine flags a suspicious transaction, a response event must go back to CBS or AML system to block or mark the transaction.

So — a low-latency, secure, private connection between on-prem and cloud was essential.

☁️ Step 3: Connectivity Design (Hybrid Secure Access)

🔐 Architecture Setup

  1. Private Connectivity

    • Configured Azure ExpressRoute between bank’s on-prem DC and Azure VNet.

    • Provides private IP-based routing, <10ms latency.

    • No public internet exposure.

  2. Network Segmentation

    • Created separate VNets and subnets for fraud workloads.

    • Used NSGs + Azure Firewall to control ingress/egress.

  3. Service Access

    • APIs on cloud exposed via Azure API Management (APIM).

    • On-prem systems accessed cloud APIs via private endpoint in ExpressRoute circuit.

    • TLS 1.2 + mutual certificate authentication enabled.

  4. Data Access

    • The on-prem Kafka cluster mirrored selected topics (transaction events) to the cloud cluster via Kafka MirrorMaker running on Azure AKS.

    • PII data tokenized using Vault tokenization service before transmission.

  5. Identity and Security

    • Active Directory Federation Services (ADFS) integrated with Azure AD.

    • Conditional access policies enforced based on device, IP, and MFA.

    • Secrets and certificates stored in Azure Key Vault and synced via HashiCorp Vault on-prem.
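The tokenization step in point 4 swaps PII for an opaque token before the event leaves the data center, with the token-to-value map held only on-prem. A minimal sketch of the idea, not the actual Vault API.

```python
import secrets

class TokenVault:
    """On-prem vault sketch: the token-to-value map never leaves the data center."""
    def __init__(self):
        self._store = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)   # opaque; irreversible without the vault
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = TokenVault()
event = {"txn_id": "T1001", "pan": vault.tokenize("4111111111111111")}
# `event` can now be mirrored to the cloud; only the on-prem vault can reverse it.
print(event["pan"].startswith("tok_"))  # True
```

The cloud-side fraud model scores on the token (or on derived features), so raw PII never transits the ExpressRoute link.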

🧱 Step 4: Deployment and DevOps Setup

CI/CD Flow

  • Single Azure DevOps pipeline with:

    • Build → Unit test → Container image → Push to Azure Container Registry (ACR)

    • Deploy to AKS (cloud) using Helm.

    • For on-prem connectors or Kafka consumers, pipeline triggers Jenkins on-prem agent via self-hosted runner.

Configuration Management

  • Infrastructure as Code via Terraform.

  • Environment variables parameterized (endpoints, keys, etc.).

  • Secure deployment approvals using change gates and RBAC policies.

📊 Step 5: Example Data Flow

Sequence Flow Example:

  1. Customer initiates a transaction via Internet Banking → hits Core Banking System (on-prem).

  2. CBS publishes event → On-prem Kafka topic: txn.initiated.

  3. Kafka MirrorMaker streams this event securely to Azure AKS topic fraud.txn.initiated.

  4. Cloud-based Fraud Detection Service (Spring Boot + ML) consumes this event, runs model in Azure ML.

  5. If anomaly detected → publishes response event txn.suspicious.

  6. MirrorMaker syncs the response back to on-prem Kafka → CBS consumes it and flags or blocks the transaction.

  7. Results also go to Power BI Dashboard on Azure for fraud monitoring team.
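The seven-step sequence above condenses into this in-memory sketch. The deques stand in for Kafka topics, and the scoring rule is a placeholder for the Azure ML model.

```python
from collections import deque

onprem_txn = deque()        # stand-in for on-prem topic: txn.initiated
cloud_suspicious = deque()  # stand-in for cloud topic: txn.suspicious (mirrored back)

def score(txn):
    """Placeholder for the Azure ML model: flag unusually large transfers."""
    return 0.95 if txn["amount"] > 1_000_000 else 0.1

def fraud_service():
    """Cloud consumer: read mirrored events, publish suspicious ones back."""
    while onprem_txn:
        txn = onprem_txn.popleft()
        if score(txn) > 0.8:
            cloud_suspicious.append({"txn_id": txn["txn_id"], "action": "block"})

onprem_txn.append({"txn_id": "T1", "amount": 5_000_000})
onprem_txn.append({"txn_id": "T2", "amount": 250})
fraud_service()
print(list(cloud_suspicious))  # [{'txn_id': 'T1', 'action': 'block'}]
```

The real flow is asynchronous on both legs, which is why the response travels back as an event rather than a blocking API call.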

🧠 Step 6: Governance and Risk Controls

  • Data Residency: sensitive data never leaves the India region; tokenization before transit.

  • Security: TLS 1.2, mutual certificates, Private Link.

  • Access Control: AD + Azure AD SSO; just-in-time access for ops.

  • Compliance: audited against the RBI Cybersecurity Framework and ISO 27001.

  • Monitoring: centralized logs in Azure Log Analytics; alerting via Sentinel.

  • Disaster Recovery: secondary Azure region (Central India) + on-prem DR site.

🧩 Step 7: Summary

“In one of our hybrid BFSI programs, we modernized the bank’s fraud detection platform while keeping the core banking and payment systems on-prem due to latency and RBI data residency requirements. We deployed the fraud detection microservices and ML scoring engine on Azure AKS to leverage GPU scalability and ML services. For real-time data exchange, we set up Azure ExpressRoute between on-prem and Azure, and mirrored Kafka topics securely using private endpoints. Data was tokenized before leaving the data center. Identity federation was achieved using AD + Azure AD with conditional access. This hybrid design allowed us to achieve real-time scoring and analytics while maintaining regulatory compliance and low latency. The setup was fully automated via Azure DevOps pipelines and monitored using Azure Monitor and Sentinel.”

 
 
 
