
Steps for cloud migration for a portfolio of 200 applications

  • Writer: Anand Nerurkar
  • Sep 29
  • 12 min read

Updated: Oct 1

This playbook walks you through exactly how a real enterprise runs a 200-app cloud migration, step by step: the people you engage, the surveys you run, the tools you use (what those tools do and what they don’t), the scoring method, how TIME + 6R are applied, how waves get chosen, the runbooks/playbooks, governance, KPIs, and a worked numeric example so you can show the math in an interview.


Read it as an “operations playbook” you can present to a CXO or use during an EA interview.

1) High-level phases (what actually happens, in order)

  1. Initiation & Governance setup — Steering committee, EA Review Board, Cloud CoE.

  2. Discovery & Inventory — automated scans + human surveys + CMDB enrichment.

  3. Business capability mapping — workshops with BU heads to map apps → capabilities.

  4. Technical assessment & scoring — CAST / vFunction / APM / dependency analysis.

  5. TIME quadrant + 6R decisioning — assign each app a migration strategy.

  6. Wave planning & pilot selection — sequence apps into waves (dependencies, risk, value).

  7. Pilot / Proof of Value (PoV) — migrate 3–5 pilot apps end-to-end.

  8. Execute waves — run migration playbooks per 6R type, CI/CD, tests, cutover.

  9. Stabilize & Operate — SRE, FinOps, runbooks, compliance evidence, monitoring.

  10. Decommission & Closure — retire legacy, archive, lessons learned.

Each phase has its own outputs, stakeholders, tools, KPIs and artifacts (listed below).

2) Phase-by-phase detailed steps, stakeholders, artifacts & tools

Phase 0 — Initiation & Governance (1–3 weeks)

Goal: create decision structure and baseline plan.

Who: CIO, CFO, Line-of-Business (LOB) Heads, EA, Security, Legal, Procurement, Program Manager.

Artifacts/Decisions:

  • Program charter, timelines, budget ask, risk appetite.

  • Steering Committee + EA Review Board + Cloud CoE charter.

  • Tooling procurement list (CAST, vFunction, LeanIX, APM).

Tools: Confluence (store charter), PowerPoint for exec ask, ServiceNow for approvals.

KPIs: Steering approvals, budget signoff, baseline inventory target.

Phase 1 — Discovery & Inventory (4–8 weeks)

Goal: collect a trusted dataset for all 200 apps (automated + manual).

How (steps):

  1. Automated scans

    • CAST Highlight/AIP: code metrics (size, complexity, modularity), tech debt hotspots, cloud readiness indicators (OS, native libraries, DB bindings).

    • vFunction (for large monoliths): call graph + candidate service boundaries (suggests “how to slice”).

    • APM tools (AppDynamics / Dynatrace / NewRelic): runtime telemetry, transactions per second, slow endpoints, error rates.

    • Network / Infra discovery (Azure Migrate, CMDB connectors) to capture where apps run, infra, storage, batch jobs, SFTP endpoints.

  2. CMDB / ServiceNow import & reconcile

    • Pull owner/contact, SLAs, contract/vendor, current infra costs, DB servers, support windows.

  3. Business metadata survey & interviews

    • A short survey + 45–60 minute interview per LOB and per app owner to capture business criticality, peak windows, regulatory scope, dependency knowledge.

  4. Dependency mapping

    • Use dependency maps from CAST/APM + manual validation from app owners to build a graph of upstream/downstream dependencies.

Survey + Metadata fields to collect (example):

  • App ID, Name, Short description

  • Business owner (name, role), Technical owner

  • Primary Business Capability (e.g., Loan Origination, Claims)

  • SLA (RTO, RPO), Peak TPS, Transactions/day

  • Users (internal/external), revenue impact

  • Regulatory impact (PII? PCI? AML?)

  • Technology stack (language, app server, DB)

  • Integrations (APIs, MQ, SFTP, partners)

  • Vendor & contract expiry

  • Current infra cost (monthly)

  • EOL / maintenance window

  • Test coverage & automation level

  • Current runbook & support model

  • CAST score (technical debt), vFunction decomposition hints
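
As a concrete illustration, these fields can be held as a typed record before loading into LeanIX or the EA repository. A minimal Python sketch — the field names are illustrative, not a LeanIX schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AppInventoryRecord:
    """One row of the Master Inventory (field names are illustrative)."""
    app_id: str
    name: str
    description: str
    business_owner: str
    technical_owner: str
    capability: str                  # e.g. "Loan Origination", "Claims"
    rto_minutes: int                 # SLA: recovery time objective
    rpo_minutes: int                 # SLA: recovery point objective
    peak_tps: float
    regulatory_scope: List[str] = field(default_factory=list)  # ["PII", "PCI", "AML"]
    tech_stack: List[str] = field(default_factory=list)        # language, app server, DB
    integrations: List[str] = field(default_factory=list)      # APIs, MQ, SFTP, partners
    monthly_infra_cost: float = 0.0
    contract_expiry: Optional[str] = None
    test_automation_pct: float = 0.0
    cast_tech_debt_score: Optional[float] = None  # from CAST Highlight/AIP
    vfunction_candidates: int = 0                 # decomposition hints from vFunction
```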

Tools: CAST Highlight/AIP, vFunction, AppDynamics/Dynatrace, Azure Migrate, ServiceNow CMDB, Excel export, Miro for dependency diagrams.

Outputs / Artifacts:

  • Master Inventory (200 rows) — canonical single source CSV/Excel + loaded into LeanIX or EA repo.

  • Dependency graph (visual).

  • Initial scoring fields filled.

Interview tip: emphasize you never rely only on CAST/vFunction for business mapping — those tools give code and dependency insight but business capability mapping requires workshops and app owner interviews.

Phase 2 — Business Capability Mapping & Workshops (2–4 weeks)

Goal: align IT inventory to business capabilities and owners.

Steps:

  1. Create a canonical capability catalog (Retail Banking, Payments, Lending, Claims, Underwriting, AML, Treasury, CRM, etc.).

  2. Run capability mapping workshops (1–2 hrs per LOB) with BU heads and product owners — walk app list and confirm mapping.

  3. Resolve duplicates (two apps mapped to same capability) and identify capability gaps (no app currently provides capability).

  4. Capture capability-level KPIs and owners (so you can later align migration to business impact).

Artifacts:

  • Capability → Application mapping.

  • RACI table showing business owner / app owner / EA / infra owner.

Tools: LeanIX (capability maps), Miro, Confluence.

Phase 3 — Technical Assessment, scoring & TIME + 6R (2–4 weeks)

Goal: make a reproducible decision for each app: TIME quadrant + 6R.

Steps

  1. Create scoring model (explained with numbers below). Typical scoring dimensions:

    • Business value (BV) — revenue/customer impact (40% weight)

    • Technical complexity (TC) — code complexity, dependencies (20%) (invert for priority)

    • Cloud readiness (CR) — OS/stack support, containerizable (15%)

    • Compliance criticality (CC) — PII/AML/PCI (15%)

    • Integration complexity (IC) — # of interfaces (10%) (invert)

  2. Calculate composite score for each app (automated spreadsheet). Use CAST, vFunction and APM for inputs plus survey fields for BV and compliance ratings.

  3. TIME Quadrant mapping

    • Use composite scores and business owner input to classify: Tolerate / Invest / Migrate / Eliminate.

  4. 6R assignment (deterministic rules; codified in the sketch below):

    • If TIME=Eliminate → Retire.

    • If TIME=Invest and complexity low → Replatform or Refactor.

    • If TIME=Migrate and complexity low → Rehost.

    • If TIME=Invest and complexity high & strategic → Rearchitect (strangler).

    • If external SaaS exists with better fit → Replace.

    • Retain for apps with regulatory constraints or long lead-time.

Tools: CAST outputs, vFunction microservice candidate list, LeanIX, Excel / PowerBI for dashboards.

Artifact: Master decision table: App | Score | TIME | 6R | WaveCandidate | Rationale | Owner.
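
The deterministic rules above are simple enough to codify, which keeps the decision table reproducible across 200 apps. A minimal Python sketch — the 50-point complexity cut-off and the rule precedence are illustrative assumptions to validate with the EA Review Board:

```python
def assign_6r(time_quadrant: str, complexity: int, strategic: bool,
              saas_better_fit: bool, regulatory_hold: bool) -> str:
    """Deterministic 6R assignment mirroring the rules above.

    complexity is the 0-100 technical-complexity score; 50 is an
    illustrative low/high cut-off. Rule precedence (overrides first)
    is itself a tuning decision.
    """
    if regulatory_hold:
        return "Retain"           # regulatory constraints or long lead time
    if saas_better_fit:
        return "Replace"          # external SaaS with better fit exists
    if time_quadrant == "Eliminate":
        return "Retire"
    if time_quadrant == "Invest":
        if complexity >= 50 and strategic:
            return "Rearchitect"  # strangler pattern
        return "Replatform/Refactor"
    if time_quadrant == "Migrate" and complexity < 50:
        return "Rehost"
    return "Manual review"        # Tolerate, or Migrate with high complexity
```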

Example: numeric scoring (step-by-step, so you can show an interviewer the math is right)

We use the weights: BV 40%, TC 20% (inverted), CR 15%, CC 15%, IC 10% (inverted).

Assume App LoanBatchX metrics captured:

  • BusinessValue (BV) = 80

  • TechnicalComplexity (TC) = 70  → invert = (100 − 70) = 30

  • CloudReadiness (CR) = 40

  • ComplianceCriticality (CC) = 90

  • IntegrationComplexity (IC) = 60 → invert = (100 − 60) = 40

Compute weighted parts step-by-step:

  1. BV term = 40% of 80 → 0.40 × 80 = 32.00

  2. TC term (inverted) = 20% of 30 → 0.20 × 30 = 6.00

  3. CR term = 15% of 40 → 0.15 × 40 = 6.00

  4. CC term = 15% of 90 → 0.15 × 90 = 13.50

  5. IC term (inverted) = 10% of 40 → 0.10 × 40 = 4.00

Sum them:

  • 32.00 + 6.00 = 38.00

  • 38.00 + 6.00 = 44.00

  • 44.00 + 13.50 = 57.50

  • 57.50 + 4.00 = 61.50

Composite score = 61.50.

Interpretation (example thresholds):

  • > 75 → High priority (Wave 1 / Replatform / Rearchitect)

  • 50–75 → Medium (Wave 2 / Refactor / Replatform)

  • <50 → Low (Wave 3 / Rehost / Retire)

So LoanBatchX = 61.5 → Medium priority → candidate for Wave 2 refactor/replatform.

(In interviews, say thresholds are tuning parameters you validate with business sponsors.)
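
The same arithmetic as a small Python function, so the spreadsheet logic stays reproducible; the assert reproduces the LoanBatchX hand calculation above:

```python
WEIGHTS = {"BV": 0.40, "TC": 0.20, "CR": 0.15, "CC": 0.15, "IC": 0.10}
INVERTED = {"TC", "IC"}  # high complexity should lower the score

def composite_score(metrics: dict) -> float:
    """Weighted composite on a 0-100 scale, inverting complexity dimensions."""
    total = 0.0
    for dim, weight in WEIGHTS.items():
        value = metrics[dim]
        if dim in INVERTED:
            value = 100 - value
        total += weight * value
    return round(total, 2)

# Reproduces the hand calculation above.
loan_batch_x = {"BV": 80, "TC": 70, "CR": 40, "CC": 90, "IC": 60}
assert composite_score(loan_batch_x) == 61.50
```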

Phase 4 — Wave planning & dependency sequencing (2–4 weeks)

Goal: create executable waves, respecting upstream/downstream dependencies and risk.

Steps:

  1. Build a dependency DAG (directed acyclic graph) from the discovery step (see the sketch after this list).

  2. For each capability, select a pilot app (low risk but high value) for Wave 1 to show value fast.

  3. Apply constraints:

    • Platform services (API Gateway, AuthN, Logging) must be in Wave 0/Foundation.

    • You cannot modernize a consumer until its upstream producer is available (unless you add an adapter).

  4. Allocate resources & estimate duration per app type:

    • Rehost: 2–6 weeks each (small teams)

    • Replatform: 4–12 weeks

    • Refactor: 8–20 weeks per app (longer)

    • Rearchitect: 6–18 months (for big core systems)

  5. Balance waves with capacity (e.g., 3–5 refactor tracks in parallel).
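
A minimal sketch of dependency-aware sequencing using Python’s standard-library graphlib; the app names and edges are illustrative:

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Each app maps to the upstream producers it depends on (illustrative names).
# Wave 0 / foundation platform services have no predecessors.
dependencies = {
    "APIGateway": set(),
    "AuthService": set(),
    "LoggingPlatform": set(),
    "LoanOrigApp": {"APIGateway", "AuthService"},
    "LoanBatchX": {"LoanOrigApp"},
    "ReportingSvc": {"LoanBatchX", "LoggingPlatform"},
}

# static_order() yields apps so every producer precedes its consumers;
# a CycleError here means you must break the cycle with an adapter.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
# e.g. ['APIGateway', 'AuthService', 'LoggingPlatform',
#       'LoanOrigApp', 'LoanBatchX', 'ReportingSvc']
```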

Outputs:

  • Wave plan (dates, owners, dependencies)

  • Resource plan (teams, vendors)

  • Pilot selection and success criteria

Phase 5 — Pilot / Proof of Value (6–12 weeks)

Goal: prove the approach with 3–5 apps end-to-end (including data migration, partner integration, security, and compliance proofs).

Steps:

  • Choose 1 rehost, 1 replatform, 1 refactor candidate.

  • Execute full runbook: infra as code, CI/CD, security scans, performance tests, cutover to production in a controlled manner.

  • Measure pilot KPIs (latency, error rates, business TAT, reconciliation accuracy).

  • Adjust runbooks.

Tools: Terraform/Bicep, Azure DevOps/GitHub Actions, SonarQube, Snyk, OWASP ZAP, Postman for API tests.

Artifact: Pilot retrospective, updated playbooks.

Phase 6 — Execute waves (rolling, months → years)

Goal: scale pilots into program-level migrations.

For each app in a wave:

  1. Pre-migration stabilisation

    • Finalize code changes, contract tests, mocks for downstream partners.

  2. Environment provisioning

    • IaC + landing zone prerequisites.

  3. CI/CD pipeline connect

    • Build, unit tests, SAST, containerization, image vulnerability scan.

  4. Integration testing

    • Contract tests with consumers, synthetic transactions, security tests (DAST), performance tests.

  5. Data migration / cutover

    • CDC / dual-write / outbox pattern / reconciliation (see the reconciliation sketch after this list).

  6. Canary / blue-green deploy

    • Kill-switch plan & rollback.

  7. Production verification & monitoring

    • SLOs/SLA monitoring, synthetic tests.

  8. Handover to SRE / runbooks

    • Knowledge transfer, runbooks, on-call rotation.

  9. Decommission legacy

    • Archive data, switch off hardware, update CMDB.
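
For the data-migration step, the cutover gate is usually a daily reconciliation report comparing the legacy system of record with the migrated store. A minimal sketch, assuming rows are keyed by a business ID and carry an amount field:

```python
def reconcile(legacy: dict, cloud: dict, tolerance: float = 0.01) -> dict:
    """Compare legacy (system of record) vs migrated rows keyed by business ID.

    Each value is assumed to be a record with an 'amount' field. The cutover
    gate typically requires success_rate to hold at 100% across daily runs.
    """
    missing = sorted(k for k in legacy if k not in cloud)
    extra = sorted(k for k in cloud if k not in legacy)
    drift = sorted(
        k for k in legacy.keys() & cloud.keys()
        if abs(legacy[k]["amount"] - cloud[k]["amount"]) > tolerance
    )
    bad = len(missing) + len(extra) + len(drift)
    return {
        "missing": missing,
        "extra": extra,
        "drift": drift,
        "success_rate": 1 - bad / max(len(legacy), 1),
    }
```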

Playbook templates exist for each 6R type and must be updated by teams after pilots.

Tools: ArgoCD/Flux (GitOps), Helm, Kubernetes, Kafka, Debezium for CDC, Data Factory for ETL, AppDynamics for APM, Prometheus/Grafana, ELK, Azure Sentinel.

Phase 7 — Stabilize, Optimize & Run (continuous)

  • Monitor KPIs (uptime, latency, reconciliation success).

  • FinOps cost optimization (reserved instances, rightsizing).

  • Security posture management (patching, pentest cycles).

  • Model governance for ML models used in business logic (XAI, SHAP/LIME).

  • Regular compliance audits & evidence packaging.

3) Who you must collaborate with (practical stakeholder plan)

  • CIO / CFO (Steering) — budget & business alignment.

  • LOB Heads / Product Owners — business value mapping, prioritization.

  • Application Owners / Dev Leads — technical detail, acceptance.

  • Infrastructure & Cloud Architects — landing zone, network.

  • Security / Compliance — regulatory controls & audit evidence.

  • Data Office / CDO — canonical model, data migration.

  • Vendor / Partner managers — for Finacle/TCS, Actimize, Fenergo.

  • Change & HR — skills & adoption.

  • SRE / Ops — runbooks, incident handling.

Governance cadence

  • Weekly migration scrum for execution.

  • Biweekly EA Review Board for architectural exceptions.

  • Monthly steering committee for program status.

  • Quarterly business KPI reviews.

4) Surveys you run and how you structure them (concrete)

Purpose: capture business metadata that automated tools cannot.

Survey fields (short form):

  • App short description

  • Business owner name & contact

  • Criticality (1–5)

  • SLA (uptime %, RTO, RPO)

  • Business windows & peak times

  • Monthly transaction volume / peak TPS

  • Regulatory scope (PII/PCI/FATCA/AML)

  • Vendor & contract expiry

  • Test automation % (unit, integration, e2e)

  • Dev team size & skills

  • Known dependencies (upstream/downstream)

  • Existing runbooks & DR docs

  • Expected business downtime tolerance (yes/no)

How: email + follow-up interview. Keep the survey short; use interviews to validate and enrich.
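
Once responses come back, they are merged into the master inventory keyed by App ID. A minimal sketch, assuming the survey tool exports CSV with illustrative column names:

```python
import csv

def merge_survey(inventory: dict, survey_csv: str) -> list:
    """Enrich the master inventory (dict keyed by App ID) with survey answers.

    Returns the App IDs that never answered, for interview follow-up.
    Column names are illustrative, not a fixed schema.
    """
    unanswered = set(inventory)
    with open(survey_csv, newline="") as f:
        for row in csv.DictReader(f):
            app = inventory.get(row["app_id"])
            if app is not None:
                app["criticality"] = int(row["criticality"])       # 1-5
                app["rto"], app["rpo"] = row["rto"], row["rpo"]
                app["regulatory_scope"] = row["regulatory_scope"]  # "PII;AML"
                unanswered.discard(row["app_id"])
    return sorted(unanswered)
```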

5) How CAST and vFunction are actually used (and what they don’t do)

CAST Highlight / AIP

  • Scans codebases and returns objective metrics: technical debt, complexity hotspots, cloud readiness indicators, open-source license risks.

  • Used for: prioritizing refactor candidates, understanding where effort will be concentrated.

  • Limitations: does not tell you business value, does not map to capabilities by itself, does not make 6R decisions alone.

vFunction

  • Analyzes monolithic Java/.NET applications and suggests candidate microservice boundaries (based on call graphs). It outputs recommended service groupings and a complexity estimate for extraction.

  • Used for: accelerating refactor plans for large monoliths (gives candidate service boundaries and estimated effort).

  • Limitations: these are suggestions — you must validate with domain experts, data access patterns, and business boundaries.

Bottom line: CAST + vFunction feed the technical dimension of your scoring matrix — you still need human business input and dependency validation.

6) Governance artifacts you must produce (and show in interviews)

  • Master Inventory (spreadsheet or LeanIX): 200 apps with fields captured above.

  • Decision table: App | Score | TIME | 6R | Wave | Owner | Rationale.

  • Wave plan (Gantt) with dependencies and resource plan.

  • Pilot runbook & technical cutover checklist.

  • ADRs (Architecture Decision Records) — decision, options considered, justification.

  • Security & compliance evidence packs for regulators.

  • Migration runbooks per app & per 6R pattern.

  • Risk register (with owners, RAG, due dates).

  • KPI dashboards (PowerBI or Grafana) for execs.

7) RACI Matrix – Portfolio Modernization (200+ Applications Cloud Migration)

(R = Responsible, A = Accountable, C = Consulted, I = Informed)

| Decision Area | CIO | CTO | Enterprise Architect | Solution Architect | BU Head / Product Owner | Application Owner | Data Architect / CDO | CISO | Cloud CoE / Ops | Vendor Partner (Infosys, TCS, Accenture, etc.) | Governance Board (ARB, Steering Committee) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Define Cloud Migration Vision & Strategy | A | C | R | C | C | I | C | I | C | C | I |
| Application Portfolio Discovery & Assessment (CAST, vFunction, Surveys) | I | C | A | R | C | R | C | I | I | C | I |
| Business Capability Mapping & Prioritization | A | I | R | C | R | C | C | I | I | I | C |
| TIME Quadrant (Tolerate, Invest, Migrate, Eliminate) Classification | I | C | A | R | C | C | C | I | I | I | I |
| 6R Strategy (Rehost, Refactor, Rearchitect, Replace, Retire, Retain) | I | C | A | R | C | C | C | I | C | R | I |
| Cloud Provider & Technology Selection | C | A | R | C | I | I | C | C | R | C | I |
| Target State Reference Architecture Definition | I | A | R | R | I | C | C | C | C | C | A |
| Security & Compliance (SABSA, NIST, Zero Trust) | I | I | C | C | I | I | I | A | R | C | A |
| Operational Model (DevOps/DevSecOps, Monitoring, DR) | I | C | R | R | I | I | C | C | A | C | A |
| Migration Wave Planning & Roadmap | I | C | A | R | C | R | C | I | R | C | A |
| Execution of Migration Waves | I | I | C | R | C | R | C | I | A | R | I |
| Risk Management & Mitigation (Enterprise Risk Register) | C | C | A | C | I | I | C | C | I | C | A |
| Final Go/No-Go for Migration Waves | A | C | C | C | C | C | C | C | C | C | A |

🔑 Key Observations

  • CIO is Accountable for the overall business-aligned cloud transformation vision.

  • CTO is Accountable for technology selection, reference architecture, and operational models.

  • Enterprise Architect is Responsible for capability mapping, TIME analysis, 6R strategy, blueprinting, governance, and roadmap.

  • Solution Architects + Application Owners are Responsible for application-level details.

  • CISO is Accountable for compliance, threat modeling, and security alignment.

  • Cloud CoE/Ops ensures operationalization (DevOps, monitoring, DR).

  • Vendors (Infosys, TCS, Accenture) provide execution support but are never the final Accountable party.

  • Governance Board signs off on standards, patterns, risk mitigation, and wave plans.


Note "For the 200+ application portfolio, I led the enterprise assessment using CAST and vFunction, mapped applications to business capabilities, applied TIME quadrant and 6R strategy, and defined a wave-based cloud migration roadmap. Accountability was distributed — CIO for strategy, CTO for tech, CISO for security — but I was responsible for the enterprise blueprint, governance, and ensuring business-IT alignment. This governance via RACI ensured transparency and reduced migration risks."

8) Example single-app walkthrough (complete lifecycle) — “LoanOrigApp” (EJB + PL/SQL)

  1. Discovery: CAST finds EJB monolith with 350k LOC, heavy DB stored procs; vFunction shows 7 candidate service clusters.

  2. Business workshop: BU says this app supports Loan Origination capability, 60% of loan volume, SLA 99.9, regulatory reports depend on it.

  3. Score: BV 90, TC 85 (invert 15), CR 30, CC 95, IC 70 (invert 30) → compute composite (similar arithmetic as earlier) → ~ (0.4×90)+(0.2×15)+(0.15×30)+(0.15×95)+(0.10×30) = 36 + 3 + 4.5 + 14.25 + 3 = 60.75 → Medium/High.

  4. Decide: TIME=Invest; 6R = Rearchitect (strangler pattern). Assign Wave 3.

  5. Pilot microservice: pick a low-risk candidate cluster (e.g., Document Validation service) to extract first.

  6. Implement: design the API contract, build the microservice (Spring Boot), implement the Outbox pattern for reliable events (sketched below), create CDC to push the necessary DB changes, set up CI/CD, run security scans.

  7. Integration tests: contract tests with downstream settlement, simulate load, run compliance reporting tests.

  8. Parallel run: run new service in front of a small % of traffic (canary) while legacy remains system of record. Run daily reconciliation.

  9. Cutover: when reconciliation passes and business validates, move more traffic, deprecate legacy module, finally decommission EJB modules when all services extracted.

  10. Handover: SRE runbooks, incident playbooks, cost optimisation. Update CMDB.
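
Step 6 mentions the Outbox pattern; the core idea is that the business write and the event row commit in one local transaction, and a relay (or Debezium via CDC) publishes the outbox rows to Kafka afterwards. A minimal sketch using SQLite for illustration — table, topic, and column names are assumptions:

```python
import json
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")  # stand-in for the service's own database
conn.execute("CREATE TABLE loan_documents (loan_id TEXT PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE outbox (id TEXT PRIMARY KEY, topic TEXT, payload TEXT)")
conn.execute("INSERT INTO loan_documents VALUES ('L-1001', 'PENDING')")

def record_validation(loan_id: str, status: str) -> None:
    """Outbox pattern: the business write and the event row share ONE
    local transaction, so an event is never lost or phantom-published."""
    with conn:  # atomic: both statements commit or neither does
        conn.execute(
            "UPDATE loan_documents SET status = ? WHERE loan_id = ?",
            (status, loan_id),
        )
        conn.execute(
            "INSERT INTO outbox VALUES (?, ?, ?)",
            (str(uuid.uuid4()), "doc-validation-events",
             json.dumps({"loan_id": loan_id, "status": status})),
        )

record_validation("L-1001", "VALIDATED")
# A separate relay (or Debezium tailing the outbox table via CDC) now reads
# the outbox rows, publishes them to Kafka, and marks/deletes them as sent.
```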

9) KPIs to show CXO (what they actually ask)

  • % portfolio assessed (target 100% within X weeks)

  • % apps migrated (by wave) vs plan

  • Time-to-first-value (pilot → business benefit measured)

  • Mean time to recover (MTTR) pre/post migration

  • Monthly cloud run cost vs on-prem baseline

  • % of apps with automated CI/CD & security gates

  • Reconciliation success rate (for dual-write phases)

  • Compliance report acceptance rate (FIU/RBI/etc.)

10) Practical interview tips (how to answer clearly)

  • Start with governance: say “I start with governance, inventory and a steering committee — without that nothing scales.”

  • Distinguish tool outputs vs business input: CAST/vFunction give technical signals — you must run workshops to get business criticality and capability mapping. Interviewers want to hear both automation + human alignment.

  • Show a repeatable scoring method (give the formula and one numeric example — we provided that). Be able to explain weight choices.

  • Describe pilots: pick 3 pilot apps with different 6R types and say what success criteria you used (latency improvement, reconciliation success, regulatory acceptance).

  • Talk about people: explain change management (training, SME pairing, vendor augmentation).

  • Show artifacts: say you produced Master Inventory, Wave Gantt, ADRs, Runbooks, and a Risk Register — these are tangible deliverables execs want.

  • Quantify outcomes where possible (time saved, cost reduction, TAT improvement).

11) Quick checklist / cheat-sheet you can memorize before interview

  • Phase names: Initiation → Discovery → Capability mapping → Scoring/TIME+6R → Waves → Pilot → Execute → Stabilize → Decommission.

  • Tools: CAST, vFunction, AppDynamics, LeanIX, ServiceNow, Terraform, AKS, Kafka, Debezium, SonarQube, Snyk, Prometheus/Grafana.

  • 6R quick meaning: Rehost / Replatform / Refactor / Rearchitect / Replace / Retire.

  • Key artifacts: Master Inventory, Wave Plan, Runbooks, ADRs, Risk Register.

  • One numeric example ready (use LoanBatchX or LoanOrigApp above) so you can show you can do the math.


12) 🔑 Decision & Approval Governance

1. Business Layer

  • Decision Authority:

    • Head of Retail Lending / Business Unit Owner → decides KPIs like Customer Onboarding Time (TAT).

    • Chief Risk Officer (CRO) → ensures compliance with AML/KYC.

  • Approval Needed:

    • Business KPIs and Risk appetite are approved at Executive Committee / CXO Steering Committee.

  • Example: Approving the target onboarding SLA (e.g., 15 minutes).

2. Application Layer

  • Decision Authority:

    • Enterprise Architect / Solution Architect → designs LOS/KYC/AML integration flow.

    • Product Owner / Application Owner → validates functional design.

  • Approval Needed:

    • Architecture Review Board (ARB) → signs off solution design and integrations.

  • Example: Choosing API-first onboarding vs batch-based onboarding.

3. Data Layer

  • Decision Authority:

    • Data Architect / Chief Data Officer (CDO) → defines canonical data model, mappings.

  • Approval Needed:

    • Data Governance Council → approves schema alignment, data quality rules, compliance with DPDP/GDPR.

  • Example: Approving ETL transformations between new modernized DB and legacy LOS/LMS DB.

4. Technology Layer

  • Decision Authority:

    • Cloud Architect / Infrastructure Head → chooses event-driven vs batch, Kafka vs MQ, AKS vs VM-based deployment.

  • Approval Needed:

    • Technology Standards Board / CTO → approves adherence to Cloud-First, Secure-by-Design, Compliance-by-Design patterns.

  • Example: Approving containerized deployment on Azure AKS with Istio service mesh.

5. Security, Compliance & Governance Layer

  • Decision Authority:

    • CISO (Chief Information Security Officer) → defines threat models, encryption standards.

    • Compliance Officer → ensures RBI/FIU-IND/SEBI guidelines are met.

  • Approval Needed:

    • Security Governance Council + Regulatory Audit Teams.

  • Example: Approving AML integration with Actimize, FIU-IND reporting workflows.

⚙️ Enterprise Decision-Making Process (Typical Flow)

  1. EA/Tech Teams → Propose design (solution options, trade-offs).

  2. Working Group (Application Owners, SMEs, Architects) → Review and refine.

  3. Architecture Review Board (ARB) → Approve technical architecture.

  4. Data Governance Council → Approve data models, compliance alignment.

  5. Security Council (CISO team) → Approve threat model, security patterns.

  6. Business Steering Committee / CXOs → Approve final rollout, budget, and KPIs.


 
 
 
