Stride: From 5 Months Behind to 3x Productivity
An 80-person product team was 5 months past its launch date with an 80% defect rate and 20% engineering efficiency. After a 2-week Delivery Diagnostic and 90 days of implementation, engineering throughput tripled, cycle time dropped from 4-6 weeks to 5-7 days, and the team shipped its first major release.
What Was Broken
Stride's Career-as-a-Platform team had a meaningful mission – helping high school students who weren't heading to college find pathways to good jobs. But eight months in, the team hadn't shipped a single feature to users.
The numbers told the story:
- 80% defect rate – only 1 in 5 features actually worked
- 20% engineering efficiency – 60% of time went to rework and bug fixes
- 4-6 week cycle time per ticket, despite only 3-4 days of actual work in each
- 50% sprint commitment success – half their commitments missed every sprint
- 8-hour daily Zoom meetings – the entire team working synchronously all day, every day
- 5 months behind the original launch date with nothing shipped
The business analyst was working 12-14 hour days. Designers were creating mockups in real-time during meetings instead of thinking through customer journeys. Engineers were coding before anyone understood the requirements.
Leadership had already tried everything they could think of. They hired a product manager who quit after two months. They cycled through three project managers. They brought in consultants at $40,000 per month. They expanded the offshore engineering contract to 60 people. Every attempt made things slower and more expensive.
What the Diagnostic Revealed
Over two weeks, I combined five different data sources to build a complete picture of what was broken.
The meeting observation showed the pattern on day one. Sprint planning started at 8 AM and was still going at 6 PM. This wasn't an anomaly – the team was working synchronously every day, with product, design, and engineering all trying to figure things out simultaneously in real-time.
The work management analysis revealed that tickets averaged 4-6 weeks of cycle time but contained only 3-4 days of actual work. 90% of time was spent waiting, context-switching, or doing rework.
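That ratio of touch time to cycle time is what lean practitioners call flow efficiency, and it can be computed directly from ticket data. A minimal sketch of the calculation, using illustrative figures that mirror the averages above (not Stride's actual ticket data):

```python
# Flow efficiency = touch time / cycle time: the fraction of a ticket's
# lifespan spent on real work, versus waiting, context-switching, or rework.

def flow_efficiency(touch_days: float, cycle_days: float) -> float:
    """Fraction of cycle time spent actually working on the ticket."""
    return touch_days / cycle_days

# A typical ticket in this case: 3-4 days of work inside a 4-6 week cycle.
touch = 3.5            # days of hands-on work
cycle = 5 * 7          # a 5-week cycle, in days

print(f"Flow efficiency: {flow_efficiency(touch, cycle):.0%}")  # → 10%
```

Anything under roughly 15% flow efficiency means the ticket spends almost all of its life in a queue, which is exactly what the 90% waste figure reflects.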
The code repository inspection uncovered something leadership had no visibility into: out of 60 offshore engineers, only a handful were actually contributing meaningful code. Most commits came from the same few developers. Code reviews were even worse – just two people doing the bulk of the review work.
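Commit concentration like this can be measured straight from repository history. A hedged sketch of that kind of analysis; the author counts below are hypothetical stand-ins, and in practice the input would come from something like `git shortlog -sn`:

```python
from collections import Counter

# Hypothetical per-author commit counts for a 60-person team.
# On a healthy team these would be far less concentrated.
commits = Counter({
    "dev_a": 412, "dev_b": 367, "dev_c": 298,   # the "same few developers"
    **{f"dev_{i}": 5 for i in range(3, 60)},     # 57 near-silent engineers
})

total = sum(commits.values())
top3 = sum(count for _, count in commits.most_common(3))
print(f"Top 3 of {len(commits)} authors wrote {top3 / total:.0%} of commits")
```

When three authors out of sixty account for the bulk of the commits, the headcount on the contract and the capacity actually delivered are two very different numbers.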
The communication analysis confirmed what the data showed. Slack channels were full of engineers saying "We're coding before we understand what we're building," designers saying "I can't be creative in 30-minute Zoom windows," and product people saying "We're deciding without research."
The team interviews revealed the same underlying truth across every conversation: everyone knew they were failing, but no one could articulate why.
The root cause wasn't what leadership expected. It wasn't a talent problem – the business analyst was talented, the junior PM showed promise, the designers were brilliant. It was synchronous chaos preventing anyone from thinking. Eight hours of meetings felt productive because everyone was doing something. But the team was confusing motion with progress.
How We Fixed It
The implementation blueprint addressed root causes through six systematic changes over 90 days.
Rebuilt the "why." I started interrogating every piece of work using the 5 Whys approach. The vast majority of the time, features had no user-centric justification – just "we thought we should" or "an executive told us to." We eliminated work that didn't solve real customer problems.
Trained the team. I conducted a 4-day intensive workshop with the BA, junior PM, head of design, and key executives covering strategy development, measuring success, conflict resolution, and proper execution cadence. Then I modeled it for them sprint after sprint until it stuck.
Restructured the work cadence. Each function – product, design, engineering – got protected time to think and work effectively. Async-first communication with intentional synchronous touchpoints. Clear handoffs. Protected focus time for deep work.
Enforced boundaries. Every meeting got time-boxed with clear agendas. Requirements had to be defined before work started. No more real-time decision-making during build sessions.
Established a true definition of done. The team implemented automated testing – both front-end and unit tests. I trained them on user acceptance testing. Quality standards had to be met before work could be called complete.
Created proper planning. A now/next/later roadmap with realistic estimates. The team gained visibility into current work and future preparation. No more surprises. No more random task-switching.
The Results
Within 90 days, every metric moved:
| Metric | Before | After | Change |
|---|---|---|---|
| Engineering throughput | Baseline | 3x baseline | 3x improvement |
| Cycle time | 4-6 weeks | 5-7 days | Up to 8x faster |
| Defect rate | 80% | 30% | 50 point drop |
| Rework | 60% of capacity | 15% of capacity | 75% reduction |
| Sprint commitment | 50% success | 100% success | Consistent delivery |
| Meeting time | 8+ hours/day | ~20% of schedule | ~80% reduction |
Beyond the metrics: the team that spent 8 months shipping nothing delivered its first major release at the end of those 90 days, with two more releases planned and in progress. Product had time to plan properly. Design had space to think. Engineering could focus on building instead of fixing.
The fix wasn't "work harder" or "hire more people." It was identifying the root causes through comprehensive analysis and systematically eliminating them.
What This Means For You
Your situation won't be identical to Stride's. You might not have 60 offshore engineers or 8-hour Zoom marathons. But if your team is consistently behind schedule, your defect rate is climbing, and you can't figure out why – the diagnostic process is the same.
I'll observe your specific patterns, analyze your specific data, interview your specific people, and find what's invisible from inside your specific organization. The root causes are always there. They're just hard to see when you're living in them.
Frequently Asked Questions
How long did the full engagement take?
The diagnostic took 2 weeks. Implementation took 90 days. The diagnostic can stand alone – you get the complete findings and blueprint regardless of whether you engage for implementation.
Did Stride have to lay anyone off?
No. The changes were about how the team worked, not who was on the team. The talented people were already there – they just needed a system that let them do their jobs.
Was the offshore engineering team the main problem?
It was a contributing factor – the repository analysis showed most engineers weren't contributing meaningful code. But the core problem was systemic: synchronous chaos, no planning discipline, no definition of done. Even the productive engineers couldn't be effective in that environment.
What happened after the 90 days?
The team continued executing with the new processes. Over the following months, Stride evaluated and acquired a company called Tallo, and I led the technical and feature discovery process for the merger. Three years later, Tallo brought me back for a Strategy Diagnostic – a testament to the trust built during this original engagement.
More Case Studies
- Tallo: Strategic Clarity in 2 Weeks – Strategy Diagnostic for Stride's child company, three years later
- CoinDesk: 9-Month Delays to 3x Productivity – Delivery Diagnostic for a cryptocurrency media company
Ready to Find What's Broken?
The Delivery Diagnostic Sprint is $15,000 for a complete 2-week diagnostic with 30 days of follow-up support.
Email matthew@fieldway.org to schedule a discovery call.