System Architecture
Complete architecture for modernizing the legacy .NET monolith into .NET 8 microservices: Strangler Fig pattern, event-driven integration, per-service databases, zero-downtime migration.
1. Current State vs Target State
Current State — Legacy Monolith
Target State — End of 9 Months
2. Service Boundaries (Bounded Contexts)
| # | Service | Context | Type | Data Owned | APIs | Go-Live |
|---|---|---|---|---|---|---|
| 1 | Travel Booking | Travel | Core | bookings, itineraries, suppliers, pricing rules | /api/travel/* | Month 3 |
| 2 | Event Management | Events | Core | events, venues, schedules, attendees | /api/events/* | Month 4 |
| 3 | Workforce + Allocation | Workforce | Supporting | staff profiles, allocations, shifts, skills | /api/staff/* | Month 6 |
| 4 | Communications | Comms | Generic | notifications, templates, delivery logs | /api/comms/* | Month 7 |
| 5 | Reporting (CQRS) | Reporting | Supporting | report definitions, read models, dashboards | /api/reports/* | Month 7 |
| 6 | Payment (Legacy) | Payment | Core (Frozen) | payments, invoices, reconciliation | /api/payments/* (via ACL bridge) | — |
Boundary Decision Rationale
| Decision | Chose | Over | Because |
|---|---|---|---|
| Travel & Event = separate | 2 services | 1 "Operations" service | Different lifecycle, different scaling (Travel = high-freq CRUD, Event = complex scheduling) |
| Allocation → Workforce | 1 service, 2 modules | 2 separate services | Same domain (people + allocation), same data. 2 services too small for 5 eng |
| Communications = separate | Cross-cutting service | Embed in each service | Avoid duplicate logic (email/SMS/push). Shared across all modules |
| Reporting = CQRS read service | Dedicated service | Each service serves own reports | Cross-module reports need aggregated data. Read-only = scales differently |
| Payment = stay in legacy | ACL bridge | Migrate early | Constraint: frozen Phase 1. Highest risk (PCI, financial). ACL isolates safely |
3. Communication Model
→ Sync (REST)
Client → Gateway → Service. Service → ACL → Legacy Payment. Immediate response.
⇶ Async (Service Bus)
BookingCreated → Comms sends email. Events → Reporting updates read model.
◉ CDC (Data Sync)
Legacy DB → CDC stream → Reporting DB. No dual-write needed.
YARP Routing Configuration
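A minimal YARP configuration for Strangler Fig routing might look like the following sketch. Route names, internal addresses, and the explicit `Order` on the fallback are illustrative assumptions, not the project's actual config:

```json
{
  "ReverseProxy": {
    "Routes": {
      "travel-route": {
        "ClusterId": "travel-service",
        "Match": { "Path": "/api/travel/{**catch-all}" }
      },
      "legacy-fallback": {
        "ClusterId": "legacy-monolith",
        "Match": { "Path": "/{**catch-all}" },
        "Order": 100
      }
    },
    "Clusters": {
      "travel-service": {
        "Destinations": { "d1": { "Address": "https://travel-booking.internal/" } }
      },
      "legacy-monolith": {
        "Destinations": { "d1": { "Address": "https://legacy.internal/" } }
      }
    }
  }
}
```

As each service goes live, a new high-priority route is added in front of the catch-all; anything unmatched still falls through to the monolith, which is what makes incremental cutover safe.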
Strangler Fig Migration Flow
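The phased cutover can be sketched as follows (the canary percentages are illustrative, not committed targets):

```text
Phase 0:  Client → YARP Gateway → 100% Legacy Monolith
Phase 1:  Client → YARP Gateway → /api/travel/*     → Travel Booking Service
                                 → everything else  → Legacy Monolith
Phase 2+: More routes flip as services go live; per-route canary
          (e.g. 10% → 50% → 100%) before full cutover
Phase 3:  Legacy Monolith serves only Payment (behind the ACL)
```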
4. Event Schema Standard
```json
{
  "eventId": "uuid-v4",
  "eventType": "travel.booking.created",
  "source": "travel-booking-service",
  "timestamp": "2026-03-14T10:30:00Z",
  "version": "1.0",
  "correlationId": "uuid-v4",
  "data": {
    "bookingId": "BK-12345",
    "userId": "USR-67890",
    "destination": "Tokyo",
    "totalAmount": 2500.00
  },
  "metadata": {
    "tenantId": "tenant-001",
    "region": "APAC"
  }
}
```
Event Schema Rules
5. Anti-Corruption Layer (ACL) — Legacy Payment
ACL Interface
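A sketch of what the ACL contract could look like. The type and member names here (`IPaymentGateway`, `PaymentRequest`, etc.) are illustrative assumptions, not the real legacy contracts:

```csharp
// Anti-corruption layer: new services depend on this contract only;
// the implementation translates to/from the legacy payment API.
public interface IPaymentGateway
{
    Task<PaymentResult> AuthorizeAsync(PaymentRequest request, CancellationToken ct = default);
    Task<PaymentResult> CaptureAsync(string paymentId, CancellationToken ct = default);
    Task<RefundResult> RefundAsync(string paymentId, decimal amount, CancellationToken ct = default);
}

// New-world contracts -- legacy formats never leak past the ACL.
public sealed record PaymentRequest(string BookingId, decimal Amount, string Currency);
public sealed record PaymentResult(string PaymentId, PaymentStatus Status, string? FailureReason);
public sealed record RefundResult(string RefundId, PaymentStatus Status);

public enum PaymentStatus { Authorized, Captured, Refunded, Failed }
```

The implementation behind this interface is where the responsibilities below (translation, error mapping, resilience, logging) live.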
ACL Responsibilities
- Contract translation — new contracts ↔ legacy API formats
- Error mapping — legacy error codes → standardized responses
- Resilience — circuit breaker, retry with backoff, 5s timeout
- Logging — every ACL call logged for audit trail
- Testing — contract tests verify ACL ↔ legacy compatibility
6. Service Internal Architecture (Clean Architecture)
Project Structure
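A plausible Clean Architecture layout for one service, using the Travel Booking service as the example (folder names are illustrative):

```text
src/
  TravelBooking.Domain/          # entities, value objects, domain events -- no dependencies
  TravelBooking.Application/     # use cases, ports (interfaces), validators
  TravelBooking.Infrastructure/  # EF Core, Service Bus publisher, ACL clients
  TravelBooking.Api/             # ASP.NET Core endpoints, DI composition root
tests/
  TravelBooking.UnitTests/
  TravelBooking.ContractTests/   # Pact consumer/provider tests (ADR-007)
```

Dependencies point inward only: Api → Infrastructure → Application → Domain.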
SharedKernel NuGet Package
Cross-cutting concerns shared across all services as internal NuGet package:
- Structured logging (Serilog)
- OpenTelemetry tracing
- Health check endpoints
- Exception handling middleware
- Correlation ID propagation
- Event publishing abstractions
- Base entity classes
- Common value objects
Version-controlled. Breaking changes = major version bump.
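As an illustration, the event publishing abstractions in SharedKernel could be sketched as below. All names here are assumptions about package shape, not its actual API:

```csharp
// SharedKernel sketch: every service publishes through this abstraction,
// so the Service Bus dependency stays in one place.
public interface IEventPublisher
{
    Task PublishAsync<TEvent>(TEvent @event, CancellationToken ct = default)
        where TEvent : IDomainEvent;
}

// Mirrors the event schema standard (section 4).
public interface IDomainEvent
{
    Guid EventId { get; }
    string EventType { get; }     // e.g. "travel.booking.created"
    string Version { get; }       // versioned per ADR-010
    Guid CorrelationId { get; }
}
```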
7. Database Architecture
Per-Service Database Topology
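Derived from the service boundary table in section 2, the topology looks roughly like this (database names are illustrative):

```text
Travel Booking Svc   ── Azure SQL: traveldb     (bookings, itineraries, suppliers, pricing)
Event Mgmt Svc       ── Azure SQL: eventsdb     (events, venues, schedules, attendees)
Workforce Svc        ── Azure SQL: workforcedb  (staff, allocations, shifts, skills)
Communications Svc   ── Azure SQL: commsdb      (notifications, templates, delivery logs)
Reporting Svc        ── Azure SQL: reportingdb  (read models; fed by events + CDC)
Legacy Monolith      ── Legacy SQL DB           (payments; shrinks as modules extract)
```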
Data Sync Strategies
| Scenario | Strategy | How |
|---|---|---|
| New service → new service (read) | Events | Subscribe to domain events, maintain local projection |
| New service → legacy payment (write) | ACL (sync call) | REST through ACL → legacy API |
| Legacy DB → Reporting (read) | CDC (Change Data Capture) | Debezium/Azure CDC → Reporting DB read models |
| Data migration (one-time) | ETL + CDC | Initial bulk load (ETL) then continuous sync (CDC) |
| Cross-service queries | Reporting Service | No cross-DB joins. Reporting for aggregated views |
Database Migration Path
Phase 0–1: services read from the monolith through read-only DB views. Phase 2–3: data is migrated via CDC and per-service databases become authoritative. The legacy DB shrinks as modules are extracted.
8. Frontend Architecture
Strategy
- App Shell: React 18, nav + auth + layout. Deployed Phase 0–1
- Module-by-module: Each migrates when backend service goes live
- Legacy iframe: Payment UI embedded via iframe (Phase 1–2)
- Code splitting: Each module lazy-loaded on navigation
Shared Design System (Storybook)
Built upfront, shared across all modules. Ensures consistent UX during phased migration.
9. Infrastructure & Deployment
Compute
- Azure Container Apps
- Auto-scale per service
- No K8s ops overhead
Data
- Azure SQL (per-service)
- Azure Service Bus (Standard)
- Azure Key Vault (secrets)
DevOps
- Bicep IaC (Azure-native)
- GitHub Actions CI/CD
- Azure Container Registry
10. CI/CD Pipeline
Branch Strategy
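The branch strategy is not detailed here; assuming trunk-based development (short-lived feature branches merged to `main`), a per-service GitHub Actions pipeline might look like the sketch below. Workflow, job, and step names are illustrative:

```yaml
name: travel-booking-ci
on:
  push: { branches: [main] }
  pull_request: { branches: [main] }
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with: { dotnet-version: '8.0.x' }
      - run: dotnet build --configuration Release
      - run: dotnet test --no-build --configuration Release
      # Same gates for AI-generated and human code (ADR-012):
      # CodeQL SAST, dependency scanning, and container image
      # scanning run as additional jobs before deploy.
```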
11. Security Architecture (4 Layers)
Layer 1: Edge
- TLS termination (HTTPS only)
- JWT validation (Azure AD)
- Rate limiting (per-user, per-IP)
- CORS policy enforcement
- Request size limits
Layer 2: Service-to-Service
- Managed identities (Azure AD)
- mTLS between services
- No direct DB access from other services
- Internal network only
Layer 3: Data
- Encryption at rest (Azure SQL TDE)
- Encryption in transit (TLS 1.3)
- Secrets in Azure Key Vault
- PII data masked in logs
- Payment data: legacy only (Phase 1)
Layer 4: Pipeline
- SAST (CodeQL) in CI
- Dependency scanning (Dependabot/Snyk)
- Container image scanning
- AI code: same security gates as human code
12. Observability (3 Pillars + AI)
Health Check Endpoints
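A minimal sketch of the liveness/readiness split, assuming standard ASP.NET Core health checks plus the community `AspNetCore.HealthChecks.SqlServer` package; endpoint paths and tag names are conventions, not the project's confirmed values:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHealthChecks()
    // Readiness-only dependency check; a Service Bus check could be
    // added similarly via AspNetCore.HealthChecks.AzureServiceBus.
    .AddSqlServer(builder.Configuration.GetConnectionString("Default")!,
                  name: "sql", tags: new[] { "ready" });

var app = builder.Build();

// Liveness: process is up; skips all dependency checks.
app.MapHealthChecks("/health/live", new HealthCheckOptions
{
    Predicate = _ => false
});

// Readiness: dependencies (SQL, Service Bus) are reachable.
app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("ready")
});

app.Run();
```

Container Apps probes would point liveness at `/health/live` and readiness at `/health/ready`, so a slow dependency degrades traffic routing without restarting the container.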
AI-Enhanced Monitoring
- Anomaly detection on metrics (Azure Monitor AI)
- Log pattern analysis (clustering errors)
- Predictive alerting (trend-based, not threshold)
- Smart routing adjustment based on error rates
13. Architecture Decision Records (ADRs)
| # | Decision | Over | Rationale |
|---|---|---|---|
| ADR-001 | Strangler Fig Pattern | Big bang rewrite | Zero downtime required, 40K users |
| ADR-002 | YARP as API Gateway | Azure APIM / Ocelot | .NET-native, Strangler Fig routing, lightweight |
| ADR-003 | Azure Service Bus | Kafka / RabbitMQ | Managed (5 eng can't ops Kafka), enterprise SLA |
| ADR-004 | Per-service Azure SQL DBs | Shared database | Service autonomy, independent deploy |
| ADR-005 | Payment stays in monolith | Early modernize | Constraint: frozen Phase 1. Highest risk (PCI) |
| ADR-006 | Clean Architecture per service | Minimal/simple layers | Testability, separation, consistent team patterns |
| ADR-007 | Contract testing (Pact) | Full E2E only | Verify service compat without full environment |
| ADR-008 | CDC for legacy→reporting sync | Dual-write / ETL only | Non-invasive, real-time, no legacy code changes |
| ADR-009 | Shared NuGet package (SharedKernel) | Copy-paste across services | DRY logging, tracing, health. Versioned independently |
| ADR-010 | Event schema versioning | Unversioned events | Backward compatibility, independent evolution |
| ADR-011 | AI-first engineering (2× multiplier) | Traditional development | 5 eng/9 months requires force multiplication |
| ADR-012 | AI code = same CI gates as human | Relaxed AI code review | 60-75% code is AI-generated → same quality bar |