Submission Strategy — Approach & Methodology
This is a "meta-document" — how we THINK about the assessment, not the submission itself
1. What Does the Assignment Actually Ask For?
1.1 Deliverables Checklist
| # | Deliverable | Weight | Difficulty |
|---|-------------|--------|------------|
| 4.1 | Target Architecture Overview | ★★★★★ | Hard — this is the core section |
| 4.2 | Migration Strategy | ★★★★★ | Hard — 4+ phases, zero downtime |
| 4.3 | Failure Modeling | ★★★☆☆ | Medium — 5 scenarios + matrix |
| 4.4 | Trade-Off Log | ★★★★☆ | Medium — but shows maturity |
| 4.5 | Assumptions | ★★☆☆☆ | Easy — 8+ items |
| 4.6 | AI Usage Declaration | ★★★★☆ | Easy to write, HIGH strategic value |
1.2 Hard Constraints of the Submission
Format: PDF
Length: UNDER 6 PAGES
AI Declaration: MANDATORY
6 pages = ~3000-3600 words (depending on font, diagrams)
6 deliverables in 6 pages = ~1 page per section
→ MUST be extremely concise. Tables + diagrams > prose.
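The page-budget arithmetic above can be sanity-checked with a quick sketch (the 3300-word total is an assumed midpoint of the 3000-3600 range; the equal split across sections is a simplification):

```python
# Word-budget sketch for the 6-page / 6-deliverable constraint.
# Assumptions: ~3300 words total (midpoint of 3000-3600),
# sections share the space roughly equally.
TOTAL_WORDS = 3300
PAGES = 6
SECTIONS = 6

words_per_page = TOTAL_WORDS // PAGES        # budget to cut prose against
words_per_section = TOTAL_WORDS // SECTIONS  # ~1 page per deliverable
```

In practice some sections (4.1, 4.2) will overspend this budget and others (4.5) underspend it, but the per-section number is a useful cutting target during the compress pass.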
2. What Is the Assessor Actually Testing?
2.1 Skills Being Evaluated
This assessment does NOT test:
✗ Coding ability
✗ Quiz knowledge ("What are microservices?")
✗ Strict adherence to a specific methodology
This assessment DOES test:
✓ JUDGMENT — making the right calls under constraint pressure
✓ COMMUNICATION — articulating architecture clearly and concisely
✓ TRADE-OFF THINKING — knowing what to sacrifice, what to keep, and why
✓ AI FLUENCY — using AI like a Tech Lead, not like a junior
✓ PRAGMATISM — solutions that are feasible, not theoretical
✓ RISK AWARENESS — identifying problems before they happen
2.2 Assessor Profile (Inferred)
The person who wrote this assessment is likely:
• Senior/Staff Engineer or Engineering Manager at PhoenixDX
• Deep understanding of DDD, microservices, .NET ecosystem
• Cares about AI-first (this is the company's identity)
• Has seen many submissions → knows the pattern of "AI wrote it all" sameness
They will:
• Skim for 2 minutes first → first impression matters
• Look at diagrams first → must be clear, self-explanatory
• Scrutinize trade-offs → this is what separates junior from senior
• Examine AI declaration → transparency > perfection
• Cross-check assumptions vs. the rest → are assumptions consistent?
3. Strategy Composition — 6-Page Layout
3.1 Page Budget
Page 1: Architecture Overview (diagram + service boundaries)
Page 2: Architecture Details (communication model, data strategy)
Page 3: Migration Strategy (4 phases, timeline, zero-downtime approach)
Page 4: Failure Modeling (5 risks table) + Trade-Off Log
Page 5: Assumptions + AI Declaration
Page 6: Appendix — glossary, references (if needed, or use for overflow)
Alternative layout (denser):
Page 1: Executive Summary + Architecture Diagram
Page 2: Service Boundaries + Communication Model
Page 3: Migration Phases (timeline visual)
Page 4: Zero-Downtime Strategy + Failure Modeling
Page 5: Trade-Offs + Assumptions
Page 6: AI Declaration + Key Decisions Summary
3.2 Writing Principles
| Rule | Why |
|------|-----|
| Diagrams first, text second | Assessor will look at diagrams first. 1 diagram = 500 words |
| Tables over paragraphs | Scannable, dense, professional |
| Bold keywords | Assessor skims → bold = attention anchor |
| No fluff | "In today's rapidly evolving landscape..." → DELETE |
| Show judgment | "We chose X over Y because Z" > "We use X" |
| Explicit trade-offs | Every decision = benefit + cost stated |
| Number everything | Phases 1-4. Risks 1-5. Assumptions 1-8. Easy to reference |
4. Tactics Per Deliverable
4.1 Target Architecture Overview (Pages 1-2)
Tactic: Lead with diagram
Content:
1. ASCII/clean diagram — full system view
• Frontend → API Gateway → Services → DBs
• Event bus connecting services
• ACL → Legacy monolith
2. Service boundaries table
| Service | Bounded Context | Owner | Phase |
3. Communication model
• Sync: REST via YARP gateway
• Async: Azure Service Bus domain events
• Pattern: CQRS + Event-driven
Wow factor: Diagram must be clean + complete. If the reader only looks at one thing → it's the diagram.
4.2 Migration Strategy (Page 3)
Tactic: Timeline visual + phase table
Content:
1. Phased timeline (gantt-style ASCII)
Phase 0 (M1): Foundation
Phase 1 (M2-4): Core extraction
Phase 2 (M5-7): Remaining modules
Phase 3 (M8-9): Stabilize + optimize
2. Per-phase table
| Phase | Months | Extract | Key Milestone |
3. Zero-downtime approach (brief)
• Strangler Fig via YARP routing
• Canary → gradual traffic shift
• Instant rollback capability
• Feature flags per module
Wow factor: Crystal clear WHAT happens WHEN. Assessor sees the timeline and immediately knows you have a feasible plan.
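The Strangler Fig + canary routing described above can be sketched as follows. This is Python pseudocode standing in for YARP route configuration; the path prefixes, service names, and canary percentages are illustrative assumptions, not part of the plan:

```python
import random

# Modules already extracted to services, with their canary traffic share.
# (Illustrative values; in the real system this state lives in YARP route
# config / feature flags, not in application code.)
MIGRATED = {
    "/orders": 0.10,   # early canary: 10% of traffic to the new service
    "/catalog": 1.00,  # fully cut over
}

def route(path: str) -> str:
    """Strangler Fig decision: migrated paths go to the new service with
    their canary weight; everything else stays on the legacy monolith."""
    for prefix, share in MIGRATED.items():
        if path.startswith(prefix):
            return "new-service" if random.random() < share else "legacy-monolith"
    return "legacy-monolith"

def rollback(prefix: str) -> None:
    """Instant rollback = drop the canary share back to zero, so all
    traffic for that module returns to the monolith on the next request."""
    MIGRATED[prefix] = 0.0
```

The point the sketch makes: gradual traffic shift and instant rollback are both just edits to one routing table, which is why the Strangler Fig approach supports the zero-downtime claim.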
4.3 Failure Modeling (≈ 0.5 page)
Tactic: Dense table, 5 risks
| # | Scenario | Likelihood | Impact | Mitigation |
|---|----------|-----------|--------|------------|
| 1 | Service cascading failure | Medium | Critical | Circuit breaker, bulkhead |
| 2 | Data inconsistency during CDC | High | High | Dual-write validation, reconciliation |
| 3 | Legacy Payment ACL failure | Low | Critical | Retry + DLQ + fallback to direct call |
| 4 | Team key-person dependency | Medium | High | Cross-training, shared ownership |
| 5 | AI-generated code defect in prod | Medium | Medium | Mandatory review gate, contract tests |
Wow factor: Risk #5 = AI-aware risk. Assessor at an AI-first company will appreciate this.
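The circuit-breaker mitigation for Risk 1 can be sketched minimally. In the actual .NET services this would typically be a library policy (e.g. Polly) rather than hand-rolled code; the thresholds here are illustrative assumptions:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after N consecutive failures so a
    struggling downstream service fails fast instead of cascading, then
    allow a trial call (half-open) after a cooldown."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit
        return result
```

Bulkheads (the other Risk 1 mitigation) would complement this by isolating thread pools or connection pools per downstream dependency.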
4.4 Trade-Off Log (≈ 0.5 page)
Tactic: 3 columns — decision / what we chose (with its cost) / what we rejected (with why)
| Decision | Chose (and trade-off) | Over (alternative) |
|----------|---------------------|---------------------|
| Hosting | Container Apps (less control) | AKS (too much ops for 5 eng) |
| DB | Azure SQL everywhere (not optimized) | Polyglot (too much expertise) |
| Payment | Frozen Phase 1 (tech debt: ACL) | Migrate early (too risky) |
| Testing | Contract tests (not E2E heavy) | E2E (slow, flaky, expensive) |
| Frontend | Single SPA (not micro-FE) | Micro-frontends (overkill) |
Wow factor: Explicitly naming WHAT you sacrifice. Shows maturity.
4.5 Assumptions (≈ 0.3 page)
Tactic: Numbered list, bold assumption, brief justification
1. **Legacy codebase has minimal test coverage** → AI generates characterization tests
2. **Team has .NET 8 experience** → no major re-skilling needed
3. **Azure cloud environment available** → no cloud migration overhead
4. **Legacy DB is SQL Server** → CDC tools available (Debezium, built-in CDC)
5. **No regulatory/compliance changes during migration** → scope stable
6. **Product owner available for domain clarification** → no requirement gaps
7. **Legacy monolith APIs are documented or discoverable** → AI can analyze
8. **Budget approved for AI tooling licenses** → Cursor Pro, CodeRabbit, etc.
9. **No mobile app in scope** → web-only modernization
10. **SLA target: 99.9% availability** → standard enterprise, not five-9s
Wow factor: 10 assumptions (exceeds the 8 requirement). Practical, not theoretical.
4.6 AI Usage Declaration (≈ 0.5 page)
Tactic: Transparent + strategic. This is the DIFFERENTIATING section.
Structure:
1. AI Tools Used
• Claude (analysis, document generation)
• Cursor Pro (architecture exploration)
2. AI-Assisted Sections
• All sections used AI for initial drafts
• Architecture diagram: AI-generated, manually refined
• Risk matrix: AI-suggested, manually prioritized
3. Manual Validation
• All trade-offs are personal engineering judgment
• Phase timeline validated against team capacity math
• Service boundaries based on domain analysis, not AI default
4. AI Governance for Backend Team
• Review gate: all AI code must pass peer review
• Contract tests: verify service boundaries (Pact)
• Prompt versioning: track what prompts generate what code
• Weekly AI effectiveness retro: measure actual vs expected velocity
Wow factor: Section 4 (governance) = you don't just USE AI, you know how to GOVERN AI usage across the team.
This is Tech Lead thinking, not developer thinking.
5. Mistakes to Avoid
| # | Mistake | Consequence | How to Avoid |
|---|---------|-------------|--------------|
| 1 | Exceeding 6 pages | Auto-fail or bad impression | Outline first, strict page budget |
| 2 | Overly complex diagrams | Assessor skips, misses main point | 1 main diagram, max 2 detail diagrams |
| 3 | Listing tech stack without justification | Looks like copy-paste | EVERY choice must have a "because" |
| 4 | Over-promising | Assessor knows 5 eng + 9 months limits | Explicit defer list, realistic scope |
| 5 | AI declaration says "AI only assisted" | Lacks transparency | State clearly which parts are AI-heavy, which are manual judgment |
| 6 | Ignoring Payment frozen constraint | Misses key constraint | ACL + frozen until Phase 3 stated explicitly |
| 7 | No rollback plan | Zero downtime becomes lip service | Every phase must have a rollback mechanism |
| 8 | Generic microservices advice | Not specific to the assignment | Reference 40K users, 5 eng, 9 months specifically |
6. Execution Timeline
Step 1: Domain Analysis (done — Business Domain.md)
└── Understand bounded contexts, coupling points
Step 2: Architecture Decision (done — Architect.md, Analysis v2.md)
└── Service boundaries, communication model, data strategy
Step 3: Migration Plan (done — Planning.md)
└── 4 phases, timeline, team allocation
Step 4: Risk + Trade-offs (done — scattered across docs)
└── Consolidate into Failure Modeling + Trade-Off Log
Step 5: Draft Submission (Submission.md)
└── Compose 6 pages from existing materials
Step 6: Review + Compress
└── Cut to exactly ≤6 pages
└── Ensure every page earns its space
└── Check consistency (numbers, phases, names)
Step 7: Export PDF
└── Format check: diagrams render, tables align
└── Final proofread
7. Competitive Edge — What Makes This Submission Stand Out?
Level 1 (Average): Migration plan + architecture diagram
Level 2 (Good): + Clear trade-offs + realistic timeline
Level 3 (Excellent): + AI as strategy (not tool)
+ Capacity math (44 effective MM)
+ Constraint interaction analysis
+ Governance framework for AI in team
+ Explicit defer list (maturity signal)
+ Domain-driven decomposition (not module-mapping)
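The "44 effective MM" capacity figure is simple arithmetic worth showing in the submission itself. The deduction below (about one man-month of leave/onboarding overhead) is an assumption about how the figure is derived, not a quote from the plan:

```python
# Capacity math sketch (assumed derivation of the 44 effective
# man-month figure from the assignment's hard constraints).
engineers = 5
months = 9
overhead_mm = 1  # assumption: leave, onboarding, context switching

raw_mm = engineers * months          # 45 raw man-months
effective_mm = raw_mm - overhead_mm  # effective capacity to plan against
```

Whatever the exact deduction, showing the arithmetic signals that the phase plan was sized against real capacity rather than wishful thinking.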
Target: Level 3.
Key differentiator: AI is not just a tool you use —
it is a STRATEGY you design for the team.