
Analysis v2 — Legacy Platform Modernization (AI-Heavy Setup)

Key difference vs v1: Stronger AI setup, deeper investment in Month 1 → 2x multiplier instead of 1.4x. This changes the entire capacity, feasibility, phase plan, and achievable scope within 9 months.


0. AI-Heavy Strategy — The 2x Multiplier

0.1 Why 2x is achievable — what's different from 1.4x?

v1 uses AI at the assistant level (Copilot autocomplete + CodeRabbit review). That's the minimum tier.

v2 pushes to the agentic AI level — AI doesn't just suggest, it runs workflows end-to-end under engineer supervision:

Level                 Description                                                     Multiplier  v1  v2
L1: Autocomplete      AI suggests code line by line                                   1.2x
L2: Chat Assistant    Ask AI → receive code block → paste                             1.4x        ✓
L3: Inline Agent      AI edits directly in IDE, multi-file                            1.7x
L4: Agentic Workflow  AI receives task → plans → codes → tests → human review         2x+             ✓
L5: Custom Agents     AI agents trained on legacy codebase, self-migrate each module  2.5x+           Partial

v2 target: L4 with elements of L5.

0.2 AI-Heavy Toolchain (Month 1 — Deep Investment)

Week 1: Core AI Development Environment
├── IDE: Cursor Pro (entire team) — built-in agentic mode
│   ├── Agent mode: AI reads codebase → plan → implement → test
│   ├── Multi-file editing: refactor across 20+ files in one prompt
│   └── @ codebase: query entire legacy code via natural language
├── CLI Agent: Claude Code / Aider — terminal-based AI coding agent
│   ├── Run migration scripts automatically
│   ├── AI creates branch → commit → push → PR
│   └── Batch processing: apply same pattern across 50 files
├── Code Review: CodeRabbit + Copilot PR Review (double layer)
└── Architecture: Claude/ChatGPT for design decisions + ADR drafting

Week 2: Legacy Codebase AI Ingestion
├── Codebase Indexing: Feed entire legacy .NET code into AI context
│   ├── Cursor @codebase index (local, fast)
│   ├── Custom embeddings for complex business logic
│   └── Dependency graph generation (AI-analyzed)
├── Legacy Documentation: AI auto-generate docs from code
│   ├── Module-by-module business logic summary
│   ├── API contract extraction from controllers
│   ├── Database schema → ER diagram (auto)
│   └── Hidden coupling detection (AI finds what humans miss)
└── Migration Templates: AI-generated .NET Framework → .NET 8 patterns
    ├── Controller migration template
    ├── EF6 → EF Core migration template
    ├── DI registration migration template
    └── Config migration template (web.config → appsettings.json)
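
As a flavor of the last template, the web.config → appsettings.json step is mechanical enough to script. A minimal sketch using only the Python standard library (real web.config files also carry connection strings and custom sections that need per-case mapping; the keys below are invented examples):

```python
import json
import xml.etree.ElementTree as ET

# Invented sample web.config fragment for illustration.
WEB_CONFIG = """<configuration>
  <appSettings>
    <add key="SmtpHost" value="mail.internal" />
    <add key="RetryCount" value="3" />
  </appSettings>
</configuration>"""

root = ET.fromstring(WEB_CONFIG)
# Each <add key="..." value="..."/> becomes one JSON setting.
settings = {el.get("key"): el.get("value")
            for el in root.findall("./appSettings/add")}

# Emit the equivalent appsettings.json fragment.
print(json.dumps(settings, indent=2))
```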

Week 3: Agentic Workflow Setup
├── AI Migration Pipeline:
│   ├── Input: legacy .NET module
│   ├── Step 1: AI analyze → identify boundaries + dependencies
│   ├── Step 2: AI generate → .NET 8 service scaffold
│   ├── Step 3: AI translate → business logic migration
│   ├── Step 4: AI test → generate contract + unit tests
│   ├── Step 5: AI review → self-review + human final gate
│   └── Output: PR ready for human review
├── Prompt Library (Production-grade):
│   ├── System prompts for each migration task type
│   ├── Few-shot examples from first manual migration
│   ├── Validation checklist prompts
│   └── Versioned in git (prompt-as-code)
└── MCP Integration (Model Context Protocol):
    ├── Connect AI agents to: DB schema, API docs, CI pipeline
    ├── AI can query live DB schema when generating migrations
    └── AI can trigger test runs in CI

Week 4: Validation & Calibration
├── Pilot: Migrate Communications module (simplest) entirely via AI
│   ├── Measure: time, quality, bugs found in review
│   ├── Calibrate: adjust prompts, templates, review gates
│   └── Baseline: compare AI vs manual migration speed
├── Team Training: Pair programming with AI agents
│   ├── Every engineer proficient in agentic workflow
│   ├── "AI driving, human navigating" mindset
│   └── Practice: when to override AI, when to trust
└── Metrics Dashboard:
    ├── Lines of AI-generated code vs human-written
    ├── AI-generated code bug rate vs human bug rate
    ├── Time per module migration (tracking improvement)
    └── Review rejection rate (% of AI PRs rejected)
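
The dashboard metrics above reduce to simple ratios over PR data. A sketch with fabricated sample records (field names are illustrative, not an agreed schema):

```python
# Fabricated PR records for illustration only.
prs = [
    {"ai_generated": True,  "rejected": False, "bugs_found": 1},
    {"ai_generated": True,  "rejected": True,  "bugs_found": 3},
    {"ai_generated": False, "rejected": False, "bugs_found": 2},
    {"ai_generated": True,  "rejected": False, "bugs_found": 0},
]

ai_prs = [p for p in prs if p["ai_generated"]]

# Review rejection rate: % of AI PRs rejected.
rejection_rate = sum(p["rejected"] for p in ai_prs) / len(ai_prs)
# Bug rate: average bugs found in review per AI-generated PR.
ai_bug_rate = sum(p["bugs_found"] for p in ai_prs) / len(ai_prs)

print(f"AI PR rejection rate: {rejection_rate:.0%}")  # 33%
print(f"Avg bugs per AI PR:   {ai_bug_rate:.2f}")
```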

0.3 Capacity Recalculation — 2x Multiplier

Explaining the 2x multiplier (weighted average):

Task Category                   % of Work  AI Speed Boost                                Effective Multiplier
Boilerplate/CRUD/scaffolding    25%        5x (AI generates almost entirely)             5.0x
Business logic migration        25%        1.5x (AI translates, human validates logic)   1.5x
Test writing                    15%        3x (AI generates from specs + contract)       3.0x
Data migration scripts          10%        4x (AI reads schema → generates CDC/ETL)      4.0x
Code review                     10%        2x (AI first pass, less human workload)       2.0x
Architecture/complex decisions  10%        1.1x (AI assists brainstorm, human decides)   1.1x
Documentation                    5%        4x (AI generates from code)                   4.0x
Weighted average                100%                                                     ~2.6x raw → 2.0x conservative

Raw calculation = 2.6x but we apply a conservative discount for:

  • Context-switching overhead when using AI tools
  • AI hallucination → rework time (~10%)
  • Learning curve in the first month
  • Prompt iteration (sometimes takes 2-3 attempts)

2x is conservative, realistic, and defensible.
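
The raw figure is worth a quick sanity check, because the aggregation method matters: a plain weighted average of the multipliers gives ~3.0x, while a time-weighted (Amdahl-style) calculation — the correct one if the percentages are shares of total working time — gives ~2.2x. The raw figure sits between those bounds, and 2.0x stays conservative under either reading. A minimal sketch:

```python
# Sanity-check the raw AI multiplier from the task breakdown above.
# Each entry: (share of total work, AI speedup for that task type).
tasks = {
    "boilerplate":    (0.25, 5.0),
    "business_logic": (0.25, 1.5),
    "tests":          (0.15, 3.0),
    "data_migration": (0.10, 4.0),
    "code_review":    (0.10, 2.0),
    "architecture":   (0.10, 1.1),
    "documentation":  (0.05, 4.0),
}

# Naive weighted average of the multipliers (optimistic upper bound).
arithmetic = sum(share * boost for share, boost in tasks.values())

# Time-weighted speedup: total time shrinks to sum(share / boost).
harmonic = 1.0 / sum(share / boost for share, boost in tasks.values())

print(f"weighted average ≈ {arithmetic:.2f}x, time-weighted ≈ {harmonic:.2f}x")
```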

Capacity (phase-by-phase, variable AI multiplier):

Total capacity:  5 engineers × 9 months              = 45 engineer-months
Subtract overhead (~40%):                            = -18 engineer-months

Phase-by-phase with variable multiplier:
  P0 (M1):    5.0 raw - 3.0 overhead = 2.0 net × 1.0 =  2.0
  P1 (M2-4): 15.0 raw - 6.0 overhead = 9.0 net × 2.0 = 18.0
  P2 (M5-7): 15.0 raw - 5.5 overhead = 9.5 net × 2.0 = 19.0
  P3 (M8-9): 10.0 raw - 3.5 overhead = 6.5 net × 1.0 =  6.5
────────────────────────────────────────────────────────────────────────────
Effective capacity:                                  ≈ 44 engineer-months
                                          (45.5, rounded conservatively)

Comparison of 3 scenarios:

                    Traditional    v1 (AI 1.4x)    v2 (AI 2x, variable)
                    ───────────    ────────────     ────────────────────
Base available:     27             27               27
AI investment:      0              -3               -5 (Phase 0)
After investment:   27             24               22
Multiplier:         1.0x           1.4x             ×1.0 (P0,P3) / ×2.0 (P1,P2)
Effective:          27             33.6             45.5 ≈ 44 (conservative)
                    ───────────    ────────────     ────────────────────
Gain vs trad:       —              +6.6 (+24%)      +17 (+63%)
Modules possible:   2-3            3-4              5 (all except Payment)

44 engineer-months — 63% more than the traditional baseline. Enough to extract all modules except Payment, with buffer for stabilization, and to lay an AI foundation in the product.
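
The phase-by-phase arithmetic above can be reproduced in a few lines (numbers taken directly from the capacity table):

```python
# Per phase: (raw eng-months, overhead eng-months, AI multiplier).
phases = {
    "P0 (M1)":   (5.0,  3.0, 1.0),
    "P1 (M2-4)": (15.0, 6.0, 2.0),
    "P2 (M5-7)": (15.0, 5.5, 2.0),
    "P3 (M8-9)": (10.0, 3.5, 1.0),
}

# Net capacity per phase = (raw - overhead) × multiplier, then summed.
effective = sum((raw - overhead) * mult for raw, overhead, mult in phases.values())
print(f"effective capacity = {effective} engineer-months")  # 45.5, quoted as ≈44
```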

0.4 AI Application Per Phase — Deep Integration

  • Phase 0 — AI Setup (AI is the subject): toolchain, ingestion, calibration, pilot migration. Impact: -5 eng-months upfront, compound returns.
  • Phase 1 — Foundation (AI generates infra): AI generates IaC (Bicep), CI/CD pipelines, API Gateway config, Dockerfiles; AI analyzes the legacy code into a dependency map and suggests bounded contexts. Impact: ~60% of infra code AI-generated.
  • Phase 2 — First Extract (AI migrates code): AI agent receives a module → generates the .NET 8 service → tests → PR; humans review and adjust business logic. Impact: ~70% of migration code AI-generated.
  • Phase 3 — Scale Extract (AI batch-migrates): the pattern proven in Phase 2 is applied by AI to Event and Workforce, faster because templates are already calibrated. Impact: ~75% AI-generated, faster per module.
  • Phase 4 — Harden (AI monitors + tests): AI-generated load tests, chaos engineering scenarios, and anomaly detection rules; AI drafts the Payment migration plan. Impact: AI as QA + ops partner.

0.5 AI in Product Architecture — Deeper than v1

┌──────────────────────────────────────────────────────────────┐
│                     AI Intelligence Layer                      │
├────────────┬─────────────┬──────────────┬────────────────────┤
│ Smart      │ AI-powered  │ Predictive   │ Semantic Search    │
│ Routing    │ Anomaly     │ Analytics    │ (future: RAG       │
│ (API GW)   │ Detection   │ (Reporting)  │  over domain data) │
├────────────┴─────────────┴──────────────┴────────────────────┤
│                     AI-Ready Data Layer                        │
├────────────┬─────────────┬──────────────┬────────────────────┤
│ Event Store│ Feature     │ Vector Store │ Data Lake          │
│ (all domain│ Store       │ (embeddings  │ (raw events for    │
│  events)   │ (aggregated │  for search) │  ML training)      │
│            │  features)  │              │                    │
├────────────┴─────────────┴──────────────┴────────────────────┤
│              Unified Event Bus (Azure Service Bus)            │
│  ┌─────────────────────────────────────────────────────┐      │
│  │ Every domain event → captured → stored → queryable  │      │
│  │ Schema versioned → backward compatible → ML-ready   │      │
│  └─────────────────────────────────────────────────────┘      │
├──────────────────────────────────────────────────────────────┤
│                    Microservices Layer                         │
│  Travel │ Event │ Comms │ Workforce │ Reporting │ [Payment]   │
└──────────────────────────────────────────────────────────────┘

What's different from v1:

  • Added AI Intelligence Layer on top — not just “AI-ready” but actual AI features running
  • Smart routing at API Gateway (AI-based traffic management)
  • Anomaly detection from Day 1 (observability → AI monitoring)
  • Data Lake for raw events — foundation for future ML/AI features
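
As a sketch of the "schema versioned → backward compatible" rule on the event bus, a minimal envelope might carry an explicit schema version alongside the payload. Field names here are illustrative, not an agreed contract:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from uuid import uuid4

@dataclass
class DomainEvent:
    """Minimal versioned event envelope (illustrative field names)."""
    event_type: str      # e.g. "Travel.Booked"
    schema_version: int  # bumped on breaking payload changes
    payload: dict
    event_id: str = field(default_factory=lambda: str(uuid4()))
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        return json.dumps(asdict(self))

evt = DomainEvent("Travel.Booked", schema_version=1,
                  payload={"booking_id": "B-1001", "amount": 250.0})
print(evt.to_json())
```

Consumers that only read fields they know about keep working when new optional fields appear; a `schema_version` bump signals a breaking change that needs a migration path.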

0.6 AI Governance — Stronger Setup Needs Stronger Gates

When AI generates 60-75% of code, governance must be proportional:

  • AI Code Generation — all AI output goes through the mandatory CI pipeline (lint, test, SAST, DAST); zero bypass. Why: when 75% of the code is AI-generated, skipping checks is a major security risk.
  • Business Logic — AI translates legacy logic; mandatory human validation per business rule, with a checklist tracing requirement → implementation. Why: AI can miss edge cases in domain logic.
  • Architecture — AI drafts; Tech Lead review + ADR sign-off. AI does NOT make architecture decisions autonomously. Why: prevent AI hallucination at the system design level.
  • Code Review — 2 gates: AI review (auto) → human review (mandatory). The AI reviewer flags suspicious patterns so the human focuses there. Why: AI as filter → human more effective.
  • Data Migration — AI generates CDC/migration scripts; a dry-run on staging is mandatory, and a human verifies data integrity. Why: data loss is irreversible.
  • Security — payment-related code: 100% human review, zero AI-only merges. Other services: AI review + spot-checks. Why: Payment = highest risk, zero tolerance.
  • Knowledge Transfer — weekly "AI code walkthrough": the team explains AI-generated code to each other. Why: prevents "nobody understands the codebase" syndrome.
  • Prompt Versioning — prompts stored in git, with changes reviewed like code and tested on a sample before deploy. Why: prompt drift → quality drift.
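
The two review gates (plus the stricter payment rule) can be expressed as a simple merge-policy check. A sketch of the rule only — the field names are hypothetical, not a real CI integration:

```python
def can_merge(pr: dict) -> bool:
    """Two-gate policy: AI review AND human review are both mandatory.
    PRs touching payment code additionally need a human security review."""
    if not (pr["ai_review_passed"] and pr["human_review_passed"]):
        return False
    if pr["touches_payment"] and not pr["security_review_passed"]:
        return False
    return True

# Both gates passed, non-payment PR: mergeable.
assert can_merge({"ai_review_passed": True, "human_review_passed": True,
                  "touches_payment": False, "security_review_passed": False})
# Missing the human gate: blocked, no matter what the AI reviewer said.
assert not can_merge({"ai_review_passed": True, "human_review_passed": False,
                      "touches_payment": False, "security_review_passed": False})
```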

1. Domain Decomposition Analysis

1.1 Identified Bounded Contexts

#  Bounded Context        Core Responsibility                               Complexity  AI Migration Difficulty
1  Travel Booking         Search, booking, itinerary, supplier integration  High        Medium — heavy CRUD, AI handles well
2  Event Management       Event creation, scheduling, venue, attendee mgmt  High        Medium — similar to Travel
3  Payment & Billing      Payment processing, invoicing, reconciliation     Critical    High — security + compliance, AI needs heavy review
4  Workforce Management   Staff allocation, scheduling, availability        Medium      Low — algorithms + CRUD, AI excels here
5  Communications         Notifications, emails, in-app messaging           Low         Very Low — perfect AI quick win
6  Reporting & Analytics  Operational reports, dashboards, data export      Medium      Low — read-only CQRS, AI generates query models

1.2 Domain Relationship Map

Travel Booking ──────► Payment & Billing ◄────── Event Management
      │                       ▲                         │
      │                       │                         │
      ▼                       │                         ▼
Workforce Mgmt ───────────────┘                  Communications
      │                                                 ▲
      └─────────────► Reporting & Analytics ────────────┘

1.3 Coupling Analysis

Relationship            Coupling Level  AI Impact
Travel → Payment        Tight           ACL generation = AI excels, but payment logic review stays human
Event → Payment         Tight           Same pattern as Travel → ACL; reuse the AI template
Travel → Workforce      Medium          AI auto-detects and decouples via event bus
Event → Communications  Medium          AI generates event handlers easily
All → Reporting         Loose           CQRS read models = AI sweet spot
Travel ↔ Event          Ambiguous       AI analyzes shared concepts → suggests boundary; human decides

AI Key Insight: With agentic AI, coupling analysis is not guesswork — AI scans the entire codebase, finds every reference, shared table, shared model → outputs an accurate dependency graph. Reduces risk of incorrect boundaries.


2. Constraint Deep-Dive

2.1 "Zero Downtime" — Unchanged from v1

  • Strangler Fig Pattern mandatory
  • Parallel run legacy + new
  • API Gateway gradual routing transition
  • No big bang cutover
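
The gradual routing transition can be sketched as a deterministic percentage split at the gateway: hash a stable key (e.g. a user id) so each user consistently lands on the same implementation while the rollout dial moves from 5% toward 100%. Illustrative Python, not actual YARP configuration:

```python
import hashlib

def route(user_id: str, new_service_pct: int) -> str:
    """Deterministic canary split: the same user always lands on the same side,
    so sessions don't flip-flop between legacy and new mid-rollout."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "new-service" if bucket < new_service_pct else "legacy-monolith"

# At 0% everyone stays on legacy; at 100% everyone is on the new service.
assert all(route(f"user-{i}", 0) == "legacy-monolith" for i in range(50))
assert all(route(f"user-{i}", 100) == "new-service" for i in range(50))
```

Dialing `new_service_pct` up in steps (5 → 25 → 50 → 100) gives the Strangler Fig cutover without a big bang, and dialing it back down is the instant rollback path.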

2.2 "5 Engineers, 9 Months" — Capacity (2x AI)

                    Traditional    v2 (AI, variable multiplier)
                    ───────────    ────────────────────────────
Gross capacity:     45             45
Overhead:           -18            -18
Base available:     27             27
AI investment:      0              -5 (Phase 0)
Multiplier:         1.0x           ×1.0 (P0,P3) / ×2.0 (P1,P2)
Effective:          27             45.5 ≈ 44 (conservative)
────────────────    ───────────    ────────────────────────────
Gain:               baseline       +63%

44 engineer-months implications:

  • Enough to extract 5/6 modules (all except Payment)
  • ~6 engineer-months buffer for stabilization + unexpected issues
  • Room for React 18 rewrite on 3-4 modules
  • Capacity for AI foundation in the product (not just engineering)

2.3 "Payment Flow Cannot Change in Phase 1" — Same as v1

Interpretation A (adopted): Payment stays in the monolith throughout Phase 1. New services call legacy Payment via an ACL.


3. Risk & Feasibility Matrix (2x Adjusted)

3.1 Feasibility Assessment

Deliverable                Traditional (27)   v1 AI (33.6)        v2 AI (44)                         Note
Extract Travel Booking     ✅ Feasible         ✅ Feasible          Easy                               AI agent handles bulk migration
Extract Event Management   ⚠️ Partial          ✅ Feasible          Feasible                           Pattern from Travel reused
Extract Payment            ❌ No               ❌ No                No (constraint)                    Frozen in Phase 1 — not a capacity issue
Extract Workforce          ⚠️ Partial          ⚠️ Partial           Feasible                           Newly achievable with 2x
Extract Communications     ✅ Easy             ✅ Easy              Trivial                            AI pilot migration in Week 4
Extract Reporting          ✅ Easy             ✅ Easy              Easy                               CQRS = AI sweet spot
React 18 Frontend          ⚠️ 1-2 modules      ⚠️ 2-3 modules       3-4 modules                        AI component generation
CI/CD + IaC                ✅ Must-have        ✅ Must-have         AI-generated                       Faster setup
Event-driven architecture  ✅ Feasible         ✅ Feasible          Full adoption                      All new services event-driven
AI product foundation      ❌ No               ⚠️ Foundation only   Active monitoring + smart routing  Newly achievable

3.2 What's Achievable in 9 Months (2x)

✅ CAN DO:
  - AI engineering foundation (Month 1) — agentic level
  - CI/CD + IaC foundation (AI-generated)
  - API Gateway + Strangler Fig routing
  - 5 services extracted: Communications, Travel, Event, Workforce, Reporting
  - React 18 for 3-4 modules
  - Full event-driven architecture for all new services
  - Observability + AI-powered anomaly detection (live)
  - AI-ready data layer (event store + initial feature store)
  - Payment migration PLAN (ready for Phase 2 post-9-months)

⚠️ STRETCH (if buffer allows):
  - Smart routing at API Gateway (AI-based)
  - Basic predictive analytics in Reporting module
  - Full database decomposition (per-service DBs for all)

❌ CANNOT DO (defer):
  - Payment modernization (constraint, not capacity)
  - Full React 18 for Payment module
  - ML/AI product features beyond monitoring + routing
  - Performance optimization at scale (needs production data)

vs v1: +2 service extractions, +1 React module, AI product features go from "foundation only" to "live monitoring + smart routing"


4. Migration Pattern Analysis

4.1 Pattern Selection — Same conclusion, different execution

Pattern                       Fit                 AI Enhancement
Strangler Fig                 ✅ Best fit          AI-powered route analysis: which endpoints to migrate first, based on traffic patterns
Branch by Abstraction         ✅ Now viable        With 2x capacity, internal refactoring before extraction becomes affordable
Parallel Run                  ✅ For Payment prep  AI-generated comparison tests between legacy and new
Automated Migration Pipeline  NEW in v2           AI agent pipeline: analyze → scaffold → migrate → test → PR

4.2 Architecture (Expanded for 5 services)

                         ┌──────────────────┐
          Users ────────►│   API Gateway     │
                         │  (YARP + Smart    │
                         │   Routing/AI)     │
                         └────────┬─────────┘
                                  │
         ┌──────────┬────────────┬┴──────────┬──────────┬──────────┐
         ▼          ▼            ▼           ▼          ▼          ▼
    ┌─────────┐┌─────────┐┌──────────┐┌──────────┐┌─────────┐┌─────────┐
    │ Travel  ││ Event   ││Workforce ││  Comms   ││Reporting││ Legacy  │
    │ Booking ││  Mgmt   ││  Mgmt    ││ Service  ││ (CQRS)  ││Monolith │
    │ .NET 8  ││ .NET 8  ││ .NET 8   ││ .NET 8   ││ .NET 8  ││(Payment)│
    └────┬────┘└────┬────┘└────┬─────┘└────┬─────┘└────┬────┘└────┬────┘
         │          │          │           │          │          │
    ┌────┴────┐┌────┴────┐┌────┴─────┐┌────┴─────┐    │     ┌────┴────┐
    │Travel DB││Event DB ││Workforce ││ Comms DB │    │     │Monolith │
    │         ││         ││   DB     ││          │    │     │   DB    │
    └─────────┘└─────────┘└──────────┘└──────────┘    │     └────┬────┘
                                                      │          │
                                                ┌─────┴──────────┘
                                                │ Reporting DB
                                                │ (Read replicas
                                                │  + CDC feeds)
                                                └────────────────
         ┌──────────────────────────────────────────────────────┐
         │          Azure Service Bus (Event-Driven)            │
         │  Travel.Booked → Event.Created → Payment.Requested  │
         │  Comms.Notify → Report.Updated → Workforce.Assigned │
         └──────────────────────────────────────────────────────┘
         ┌──────────────────────────────────────────────────────┐
         │          AI Intelligence Layer                       │
         │  Anomaly Detection │ Smart Routing │ Event Analytics │
         └──────────────────────────────────────────────────────┘

4.3 Anti-Corruption Layer Strategy

New Service ──► ACL ──► Legacy Monolith (Payment)
                │
                ├── ACL translates new service contracts → legacy API
                ├── ACL handles data format differences
                ├── ACL is THE ONLY coupling point between new and legacy
                └── When Payment is modernized → swap ACL target, zero changes to consumers

AI Role: AI auto-generates ACL from legacy API analysis.
         AI maintains ACL tests to ensure backward compat.
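
In code, the ACL is just an adapter that owns the translation in both directions, so consumers never see legacy shapes. A hypothetical sketch — the legacy client, field names, and amounts are invented for illustration:

```python
class LegacyPaymentClient:
    """Stand-in for the monolith's payment API (invented shape)."""
    def submit(self, legacy_request: dict) -> dict:
        return {"Status": "OK", "RefNo": "LG-42"}

class PaymentAcl:
    """THE ONLY coupling point: translates new contracts <-> legacy formats.
    When Payment is modernized, only the target behind this class changes."""
    def __init__(self, legacy: LegacyPaymentClient):
        self._legacy = legacy

    def request_payment(self, booking_id: str, amount_cents: int) -> dict:
        legacy_req = {                       # new contract -> legacy format
            "BookingRef": booking_id,
            "Amount": amount_cents / 100.0,  # legacy expects decimal currency
        }
        resp = self._legacy.submit(legacy_req)
        return {                             # legacy format -> new contract
            "succeeded": resp["Status"] == "OK",
            "payment_reference": resp["RefNo"],
        }

acl = PaymentAcl(LegacyPaymentClient())
print(acl.request_payment("B-1001", 25000))
```

Swapping the ACL target later means replacing `LegacyPaymentClient` with the new service's client; `request_payment` and its callers stay untouched.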

5. Phase Plan (AI-Heavy, 2x)

5.1 Scoring Matrix

Module          Business Value  Extraction Effort (with AI)  Risk if Delayed    Priority
AI Foundation   Multiplier      Medium (1 month)             Blocks 2x gains    P0
CI/CD + Infra   Unlocks all     Low (AI-generated)           Blocks everything  P0
Communications  Medium          Very Low (AI pilot)          Low                P1 (pilot + quick win)
Travel Booking  High            Medium                       High               P1
Event Mgmt      High            Medium                       Medium             P2
Reporting       Medium          Low                          Low                P2
Workforce       Medium          Low-Medium                   Low                P3
Payment         Critical        N/A (frozen)                 Frozen             Future

5.2 Phase Sequence

Note: Analysis v2 uses a 5-phase model (Phase 0–4) for detailed analysis.
Planning.md consolidates into 4 phases (0–3) with month-by-month dev assignments.
Service go-live order (canonical, from Planning.md):
Travel (M3) → Event (M4) → Workforce (M6) → Comms + Reporting (M7)
Communications is piloted in Phase 0 Week 4 to validate the AI pipeline,
but goes live in production in Phase 2 (Month 7).

Month:  1        2        3        4        5        6        7        8        9
        ├────────┼────────┼────────┼────────┼────────┼────────┼────────┼────────┤

Phase 0 │████████│                                                               
AI +    │ AI toolchain (Cursor, Claude Code, CodeRabbit)                         
Infra   │ Legacy codebase ingestion + AI analysis                                
(M1)    │ Agentic workflow pipeline setup                                        
        │ CI/CD, IaC (Bicep), API Gateway (YARP), Docker                         
        │ Pilot: Communications module migration by AI (staging only)            
        │ Team training on AI-driven development                                 
        │ Prompt library v1 + migration templates                                
        │        │
Phase 1 │        │████████████████████████│                                       
Core    │        │ Travel Booking service extraction                              
(M2-4)  │        │   AI agent: analyze → scaffold → migrate → test               
        │        │   Travel go-live Month 3 (Strangler Fig: 5% → 100%)           
        │        │   React 18 pages for Travel                                   
        │        │ Event Management service extraction                            
        │        │   Reuse AI templates from Travel (faster)                      
        │        │   Event go-live Month 4                                        
        │        │   React 18 pages for Events                                   
        │        │   ACL → Legacy Payment for both services                       
        │                        │
Phase 2 │                        │████████████████████████│                       
Scale   │                        │ Workforce Management extraction               
(M5-7)  │                        │   AI handles allocation algorithms            
        │                        │   Workforce go-live Month 6                   
        │                        │ Communications service (full production)      
        │                        │ Reporting service (CQRS)                      
        │                        │   AI-generated read models from legacy SQL    
        │                        │   React 18 dashboard                          
        │                        │   Comms + Reporting go-live Month 7           
        │                                                │
Phase 3 │                                                │████████████████████│  
Harden  │                                                │ Stabilization       │  
(M8-9)  │                                                │ Performance tuning  │  
        │                                                │ Security audit      │  
        │                                                │ AI monitoring live  │  
        │                                                │ Payment migration   │  
        │                                                │   PLAN (not build)  │  
        │                                                │ Handover docs       │  
        │                                                │ AI feature roadmap  │  

5.3 Phase 0 Detail — AI Setup (The Investment That Changes Everything)

  • W1 — Cursor Pro + Claude Code setup, team licenses. Owner: Tech Lead. Output: all 5 engineers using agentic AI.
  • W1 — CodeRabbit integration into GitHub/Azure DevOps. Owner: Tech Lead. Output: auto-review on every PR.
  • W2 — Legacy codebase ingestion: AI indexes the entire codebase. Owner: 2 engineers. Output: @codebase queries work, dependency graph generated.
  • W2 — AI-generated legacy documentation. Owner: AI (automated). Output: module summaries, API contracts, DB schema docs.
  • W3 — Build migration pipeline prompts + templates. Owner: Tech Lead + 1 engineer. Output: prompt library v1, .NET Framework → .NET 8 templates.
  • W3 — Set up MCP connections (DB, CI, docs). Owner: 1 engineer. Output: AI agents can query the live schema and trigger tests.
  • W4 — Pilot migration: Communications module. Owner: 2 engineers + AI. Output: working .NET 8 Communications service + tests.
  • W4 — Measure & calibrate: speed, quality, gaps. Owner: Tech Lead. Output: baseline metrics, adjusted prompts, known limitations.

Phase 0 exit criteria: Communications service deployed to staging, AI pipeline proven, team confident with the agentic workflow.


6. Key Technical Decisions

#   Decision           Recommendation                                          AI Impact
1   Database           Phased: shared DB view → per-service DB                 AI generates CDC scripts, schema migrations
2   API Gateway        YARP (.NET-based)                                       AI generates routing configs from legacy URL mapping
3   Communication      REST (sync) + Service Bus (async)                       AI generates event handlers + message contracts
4   Frontend           React 18 incremental (3-4 modules)                      AI generates React components from legacy UI screenshots + API contracts
5   Data migration     CDC (Change Data Capture)                               AI generates CDC connectors from schema analysis
6   Testing            Contract testing (Pact) + AI-generated unit tests       80% of tests AI-generated; human validates business rules
7   AI tooling         Cursor Pro + Claude Code + CodeRabbit + MCP             Full agentic stack, not just autocomplete
8   AI governance      2-gate system: AI auto-review → mandatory human review  Stronger gates needed when most code is AI-generated
9   Prompt management  Git-versioned prompt library                            Prompts = code: review, version, test
10  AI monitoring      Azure Monitor + AI anomaly detection (Day 1)            Monitoring from the first deployment, not waiting for prod

7. Risk Analysis — New Risks with AI-Heavy Approach

7.1 AI-Specific Risks

  • AI hallucination in business logic — Likelihood: High. Impact: High. Mitigation: mandatory human review gate for all business logic; contract tests verify behavior, not just code.
  • Team over-reliance on AI — Likelihood: Medium. Impact: High. Mitigation: weekly "explain this code" sessions; rotate who reviews vs who prompts; mandatory knowledge sharing.
  • AI-generated security vulnerabilities — Likelihood: Medium. Impact: Critical. Mitigation: SAST/DAST mandatory in CI; Payment code gets 100% human review; security audit in Phase 3.
  • Prompt drift (prompt changes → quality drops) — Likelihood: Medium. Impact: Medium. Mitigation: prompts versioned in git; changes require PR review and are tested on a sample before applying.
  • AI cost overrun (API costs, licenses) — Likelihood: Low. Impact: Low. Mitigation: budget ~$500/engineer/month, offset by the time saved.
  • AI tool unavailability (outage) — Likelihood: Low. Impact: Medium. Mitigation: multi-provider strategy (Cursor + Claude Code as backup); local models for non-critical tasks.

7.2 Standard Migration Risks (unchanged)

  • Data inconsistency during migration — Likelihood: Medium. Impact: High. Mitigation: CDC + dual-read verification; parallel run for high-risk modules.
  • Payment flow disruption — Likelihood: Low (frozen). Impact: Critical. Mitigation: ACL isolation; zero changes to payment in Phase 1.
  • Team burnout (9 months, 5 engineers) — Likelihood: Medium. Impact: High. Mitigation: AI reduces grunt work so engineers focus on interesting problems, but monitor morale closely.
  • Scope creep — Likelihood: High. Impact: Medium. Mitigation: strict phase gates; what's not in the plan does NOT get done.
  • Legacy undocumented behavior — Likelihood: High. Impact: Medium. Mitigation: AI-generated legacy docs (Phase 0), though hidden logic remains a risk.

8. What Assessors Really Want — v2 Advantage

  • AI-first mindset — v1: "We'll use Copilot." v2: "We'll build an agentic migration pipeline — AI does 70% of code generation, humans validate business logic and architecture."
  • Realistic scope — v1: "2-3 services in 9 months." v2: "5 services in 9 months because AI doubles our capacity — here's the math."
  • Risk awareness — v1: standard migration risks. v2: standard risks plus AI-specific risks with mitigations (shows deep AI understanding).
  • Leadership clarity — v1: "We need to prioritize." v2: "Month 1 is AI investment. It delays first delivery by 2 weeks but compounds into 63% more capacity."
  • Trade-off logic — v1: "Payment deferred." v2: "Payment deferred by CONSTRAINT, not capacity. With 2x we COULD do it, but the constraint says no — this is the right call."

The meta-signal:

PhoenixDX is an AI-first hub. They're hiring a Tech Lead who builds teams that leverage AI as a force multiplier, not just a convenience tool. The candidate who shows they can design an AI-augmented engineering org — with proper governance, metrics, and realistic multipliers — wins.


Appendix: v1 vs v2 Summary

Dimension           v1 (AI 1.4x)            v2 (AI 2x)
AI Level            L2: Chat assistant      L4: Agentic workflow
AI Setup            3 eng-months            5 eng-months
Effective Capacity  33.6 eng-months         44 eng-months
Services Extracted  3-4                     5 (all except Payment)
React Modules       2-3                     3-4
AI in Product       Foundation only         Live monitoring + smart routing
AI Governance       Basic review gates      2-gate system + prompt versioning + weekly knowledge transfer
Key Risk            Under-deliver on scope  AI hallucination in business logic
Key Advantage       Safe, proven approach   Bold, higher ROI, aligned with company DNA