# Problem Brief — AI-First Engineering Team

*The original assessment brief that shaped this case study.*
## The Problem
- **Company:** PhoenixDX — AI-first engineering hub, Ho Chi Minh City
- **Mission:** Re-architect mission-critical enterprise systems using AI-augmented engineering
An enterprise platform serving ~40,000 global users needs to be modernized.
The platform covers: travel booking, event management, workforce allocation, digital payments, communications, and reporting.
- **Current state:** Legacy .NET monolith — single database, single deployment, no events, no tests
- **Target state:** .NET 8 microservices, event-driven, per-service databases, React 18 frontend
## The Constraints
These are real constraints, not artificial difficulty. They force real trade-offs.
| Constraint | Why it matters |
|---|---|
| 5 engineers | Cannot grow the team — must do more with the same headcount |
| 9 months | Hard deadline — scope must fit or be explicitly deferred |
| Zero downtime | 40K users across timezones — no maintenance window exists |
| Payment frozen (Phase 1) | PCI compliance risk — Payment module cannot be touched initially |
## The Central Question
Can a 5-person team execute a 9-month platform modernization that would normally require 10+ engineers?
The hypothesis: yes — but only if AI is treated as engineering infrastructure, not a tool individuals use ad hoc.
This means:
- Investing Month 1 in AI toolchain setup before writing any product code
- Applying explicit AI multipliers in capacity planning (not guessing)
- Building governance for what AI can and cannot own
- Designing architecture to be AI-legible (ADRs, contracts, clear boundaries)
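The capacity-planning point above can be sketched as arithmetic. A minimal model in Python, where the per-activity effort shares and AI speed-ups are purely illustrative assumptions (none of these numbers come from the brief):

```python
# Hypothetical capacity model: 5 engineers, 9 months, per-activity AI multipliers.
# All shares and multiplier values below are illustrative assumptions, not data.

ENGINEERS = 5
MONTHS = 9
BASELINE_CAPACITY = ENGINEERS * MONTHS  # 45 person-months without AI

activities = {
    # activity name           (share of effort, assumed AI multiplier)
    "boilerplate/scaffolding": (0.20, 3.0),
    "feature code":            (0.35, 2.0),
    "tests":                   (0.20, 2.5),
    "architecture/review":     (0.15, 1.2),  # humans stay in the loop
    "ops/debugging":           (0.10, 1.5),
}

def effective_capacity(baseline: float, acts: dict) -> float:
    """Person-months of pre-AI work the team can now absorb."""
    # If activity i is share s_i of the work and runs m_i times faster,
    # total wall time shrinks to sum(s_i / m_i); capacity scales by the inverse.
    time_fraction = sum(share / mult for share, mult in acts.values())
    return baseline / time_fraction

cap = effective_capacity(BASELINE_CAPACITY, activities)
print(f"Effective capacity: {cap:.1f} person-months "
      f"(~{cap / BASELINE_CAPACITY:.2f}x multiplier)")
```

Under these assumed numbers the aggregate multiplier lands near 2×, which is the kind of estimate the plan treats as an input rather than a guess.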
## What Needed to Be Produced
Six formal outputs — the backbone of this case study:
| Output | Core question answered |
|---|---|
| 4.1 Target Architecture | What does the target system look like, and why? |
| 4.2 Migration Strategy | How do we get there without breaking anything? |
| 4.3 Failure Modeling | What can go wrong, and what's the recovery plan? |
| 4.4 Trade-Off Log | What are we deliberately not doing, and why? |
| 4.5 Assumptions | What must be true for this plan to hold? |
| 4.6 AI Usage Declaration | What did AI produce, and what did humans validate? |
See `docs/03-deliverables/` for each output.
## Target Tech Stack
| Layer | Technology | Key reason |
|---|---|---|
| Backend | .NET 8 Microservices | Migration path from existing .NET monolith |
| Frontend | React 18 + Shared Design System | Incremental adoption, vast ecosystem |
| Gateway | YARP | .NET-native, Strangler Fig routing built-in |
| Messaging | Azure Service Bus | Managed, zero ops overhead |
| Database | Per-service Azure SQL | One technology, team already knows SQL Server |
| IaC | Bicep | Azure-native, no state file management |
| Containers | Azure Container Apps | Zero Kubernetes ops for 5 engineers |
| Observability | OpenTelemetry + Serilog | Vendor-neutral, .NET-native |
| AI Tooling | Cursor Pro + Claude Code + CodeRabbit | The 2× multiplier stack |
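To illustrate the Strangler Fig routing the gateway row refers to: a toy sketch of the routing rule, in Python for brevity. In the real stack YARP does this through its route configuration; the path prefixes and service names here are hypothetical.

```python
# Strangler Fig routing rule, sketched in Python for illustration only.
# The real gateway is YARP; these path prefixes and hosts are assumptions.

MONOLITH = "http://legacy-monolith"
EXTRACTED = {
    # path prefix   -> new microservice that has taken over the route
    "/api/travel":  "http://travel-booking-svc",
    "/api/events":  "http://event-mgmt-svc",
}

def route(path: str) -> str:
    """Send extracted prefixes to the new service; everything else
    (including /api/payments, frozen in Phase 1) stays on the monolith."""
    for prefix, target in EXTRACTED.items():
        if path.startswith(prefix):
            return target
    return MONOLITH

print(route("/api/travel/bookings/42"))  # http://travel-booking-svc
print(route("/api/payments/charge"))     # http://legacy-monolith
```

As each module is extracted, its prefix is added to the table and traffic drains from the monolith route by route, which is what makes zero-downtime cutover possible.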
## Domain Modules (Legacy → Microservices)
```
LEGACY MONOLITH                      TARGET (9 months)
────────────────────                 ─────────────────────────────
┌──────────────────┐                 ✈  Travel Booking      (M3)
│ Travel Booking   │ ──extract──►    📅 Event Management    (M4)
│ Event Mgmt       │                 👥 Workforce           (M6)
│ Workforce        │                 📧 Communications      (M7)
│ Communications   │                 📊 Reporting CQRS      (M7)
│ Reporting        │
│ ───────────────  │                 💳 Payment — stays in monolith
│ Payment          │ ──ACL──────►       (Anti-Corruption Layer bridge)
└──────────────────┘
        │
  Single SQL Server
  ~200+ tables
  Cross-module FKs
```
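The ACL bridge in the diagram is a translation layer: new services never see the monolith's shapes directly. A minimal Python illustration (the real bridge would be .NET); the legacy column names and status-code mapping are assumptions, not the monolith's actual schema:

```python
# Anti-Corruption Layer sketch. The legacy row shape, column names, and
# status-code mapping are hypothetical stand-ins for the monolith's schema.
from dataclasses import dataclass

# A payment row as the monolith's shared SQL database might return it.
legacy_row = {"PmtID": 9913, "AmtCents": 12550, "Curr": "USD", "StatCode": 2}

_STATUS = {0: "pending", 1: "authorized", 2: "settled", 3: "failed"}

@dataclass(frozen=True)
class Payment:
    """Clean domain model that the new services consume."""
    payment_id: int
    amount_cents: int
    currency: str
    status: str

def to_domain(row: dict) -> Payment:
    """Translate legacy shape -> domain shape at the boundary, so legacy
    naming and status codes never leak into the new services."""
    return Payment(
        payment_id=row["PmtID"],
        amount_cents=row["AmtCents"],
        currency=row["Curr"],
        status=_STATUS[row["StatCode"]],
    )

print(to_domain(legacy_row))
# Payment(payment_id=9913, amount_cents=12550, currency='USD', status='settled')
```

Because Payment is frozen in Phase 1 for PCI reasons, this one-way translation is the only contact point the new services have with it.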