They Almost Wasted $1.8M on a Protocol That Would Have Failed FDA Review
Service: Protocol Review + FDA Pre-Sub Prep

Client: Funded, Pre-Trial

Duration: 8 weeks

"Working with them gave us back valuable time, reduced expenses, and simplified everything."

Name Withheld (NDA)
CEO, Digital Health Startup
*Client confidentiality maintained

The Challenge

CONFIDENTIALITY NOTE:

This case study represents a composite of multiple client engagements with AI-enabled medical devices. Details have been changed to protect client confidentiality. The regulatory challenges described reflect real 2025 FDA requirements.

A Professional-Looking Protocol That Would Have Failed

The founders had everything going for them.

One had a PhD in machine learning from a top university. The other was a practicing physician with years of clinical experience. Together, they'd built an AI algorithm that could predict serious medical events hours before they became obvious to clinicians.

They'd raised good money. They'd spent over a year refining the algorithm. The performance was impressive—better than standard clinical judgment.

Their investors wanted them moving fast. They'd talked to several CROs and were weeks away from signing a contract for nearly $2M.

But something made them pause.

"Can you just take a quick look at this protocol?" they asked. "Make sure we're not missing anything obvious?"

I'm glad they asked.

Week 1: I Found Five Problems That Would Sink Them

The protocol looked good at first glance. 85 pages. Professional formatting. Detailed statistical plan. Clear endpoints.

The CRO had done a nice job making it look legitimate.

But I'd seen this movie before. I knew what FDA was looking for in 2025 for AI-enabled devices. This protocol was missing critical pieces.

By the end of week 1, I'd found five major problems. Any one of them could have led to an FDA rejection letter 18 months and $2M later.

Problem #1: They Had No Plan for Algorithm Updates

Here's what most founders don't know about AI medical devices in 2025:

FDA now expects you to have a plan for how you'll update your algorithm AFTER you get cleared. It's called a Predetermined Change Control Plan (PCCP).

Without it, you have two bad options:

  1. Never update your algorithm (and watch competitors pass you)

  2. Submit a brand new FDA application every time you want to make a change (18+ months per update)

Their protocol? Zero mention of PCCP.

If they'd run this trial and gotten cleared, they would have been stuck with a frozen algorithm or facing years of regulatory delays for every improvement.
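To make the concept concrete: a PCCP is a prose document submitted to FDA, not code, but here is a minimal sketch of how a team might track pre-authorized modification categories internally. Every field name and value below is invented for illustration; it is not the client's actual plan.

```python
# Hypothetical internal tracker mirroring a PCCP's structure: each
# pre-authorized change pairs a modification with its validation
# method and acceptance criteria. All values are invented.
PCCP_MODIFICATIONS = [
    {
        "modification": "Retrain on newly collected site data",
        "validation": "Re-run locked hold-out test set, stratified by site",
        "acceptance": "AUROC >= 0.85 overall and >= 0.80 in every stratum",
        "communication": "Updated labeling and user notice before rollout",
    },
    {
        "modification": "Recalibrate risk-score thresholds",
        "validation": "Calibration analysis on the most recent 12 months of data",
        "acceptance": "Expected calibration error <= 0.05",
        "communication": "Release notes to clinical sites",
    },
]

def is_preauthorized(change_description: str) -> bool:
    """Check whether a proposed change matches a pre-authorized category."""
    return any(m["modification"] == change_description for m in PCCP_MODIFICATIONS)
```

The point of structuring changes this way: if a planned update falls inside a pre-authorized category, it can ship under the PCCP; anything outside it goes back through FDA.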

Problem #2: They Were Going to Test an Outdated Algorithm

The CRO's plan said: "Lock the algorithm 6 months before the trial starts."

That's how traditional medical devices work. But AI algorithms aren't traditional devices.

These founders had been improving their algorithm every few months as they got new data. That's normal for AI development.

If they'd locked their algorithm 6 months early, they'd be testing an outdated version by the time patients enrolled.

Worse: FDA's 2025 guidance says you need to describe how you manage algorithm changes during development. Their protocol said nothing about this.

Problem #3: The Endpoint Was Wrong for Their Device Type

The protocol focused on proving their algorithm was accurate at making predictions.

Sounds reasonable, right?

Wrong.

Their device was a clinical decision support tool—it helps doctors make better decisions. It's not a diagnostic that makes the decision for them.

FDA regulates these differently.

For a diagnostic, you need to prove your device is as good as the current standard.

For clinical decision support, you need to prove that doctors using your tool make better decisions than doctors without it.

Their protocol was designed for the wrong regulatory category. If FDA had reviewed it, they would have asked for a completely different study.

Problem #4: No Plan to Monitor If the Algorithm Still Works

AI algorithms can drift over time. What works on 2023 hospital data might not work as well on 2025 patients.

FDA knows this. They expect you to monitor for it.

The protocol had no monitoring plan. Not during the trial. Not after clearance.

If their algorithm's performance had degraded during the trial, they wouldn't have known until it was too late.
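A monitoring plan doesn't have to be elaborate. Here's a minimal sketch of the idea, assuming the device emits a risk score and adjudicated outcomes arrive later. The window size and alert threshold are invented for illustration, not taken from the client's plan.

```python
# Minimal sketch of performance-drift monitoring: keep a rolling
# window of adjudicated cases and alert when discrimination drops.
from collections import deque

import numpy as np
from sklearn.metrics import roc_auc_score

WINDOW = 500          # most recent adjudicated cases to evaluate (invented)
ALERT_AUROC = 0.80    # hypothetical floor from the monitoring plan

scores = deque(maxlen=WINDOW)   # model risk scores
labels = deque(maxlen=WINDOW)   # adjudicated outcomes (0/1)

def record_case(score: float, outcome: int) -> None:
    """Add one adjudicated case and alert if rolling AUROC degrades."""
    scores.append(score)
    labels.append(outcome)
    # AUROC is only defined once both outcome classes are present.
    if len(labels) == WINDOW and len(set(labels)) > 1:
        auroc = roc_auc_score(np.array(labels), np.array(scores))
        if auroc < ALERT_AUROC:
            print(f"ALERT: rolling AUROC {auroc:.3f} below floor {ALERT_AUROC}")
```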

Problem #5: Not Enough Patients to Show It Works for Everyone

The study was designed for a relatively small number of patients.

For a traditional device, that might be fine.

But for an AI algorithm that would be used across different hospitals, different patient populations, different demographics? FDA wants to see that it works for everyone, not just one specific group.

Their sample size was too small to prove that.
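To see why subgroup claims drive sample size up, consider a back-of-the-envelope power calculation. This is a sketch only: the decision-accuracy rates, significance level, and stratum count are invented, and a real protocol would follow its statistical analysis plan.

```python
# Rough sketch: a two-proportion power calculation, then a crude
# upper bound for powering each stratum independently. All numbers
# are invented for illustration.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical: 75% correct decisions with the tool vs 65% without.
effect = proportion_effectsize(0.75, 0.65)

n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)

n_strata = 4  # e.g., strata by site or demographic group (invented)
print(f"Per arm, pooled analysis: {n_per_arm:.0f} patients")
print(f"Per arm, powered within each of {n_strata} strata: "
      f"{n_per_arm * n_strata:.0f} patients")
```

Powering every stratum separately roughly multiplies the sample size by the number of strata, which is why "works for everyone" is a much bigger study than "works on average."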

Our Approach

Week 1-2: Full Protocol Audit

I read the entire protocol multiple times. Then I cross-referenced it against:

  • FDA's 2025 final guidance on PCCPs for AI devices

  • FDA's 2025 draft guidance on AI-enabled device software functions

  • FDA's clinical decision support guidance

  • Recent 510(k) clearances for similar AI-enabled devices

I also consulted with former FDA reviewers (now in private practice) and walked them through the protocol anonymously. Their feedback: "This would likely get a major deficiency letter."

By end of week 2, I had a detailed memo for the founders outlining the five major problems and recommended fixes.

Week 3-4: FDA Pre-Submission Strategy

We decided to run an FDA Pre-Sub meeting before finalizing the protocol.

FDA encourages early engagement through the Q-Submission program, especially for AI-enabled devices.

I helped them prepare:

  • Specific questions about PCCP requirements

  • Questions about appropriate endpoints for their device classification

  • Questions about acceptable validation approaches for AI algorithms

We submitted the Pre-Sub package in Week 4.

Week 5-8: Protocol Rewrite + Pre-Sub Prep

While waiting for the FDA meeting (which got scheduled for several weeks out), we rewrote the protocol.

Key changes:

  • Added PCCP framework and data collection plan

  • Changed primary endpoint to align with device classification

  • Increased sample size with stratified enrollment

  • Added performance monitoring plan

  • Added version control and validation procedures (a minimal sketch follows after this list)

  • Built in post-market surveillance plan
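On the version-control item, here's a minimal sketch of one common pattern, assuming a serialized model artifact: hash the build and record a manifest so every trial prediction is traceable to an exact, validated model version. The manifest fields are illustrative, not the client's actual procedure.

```python
# Minimal sketch of locking a model version for the trial.
import hashlib
import json
from datetime import datetime, timezone

def lock_model_version(artifact_path: str, training_data_ref: str) -> dict:
    """Hash the model artifact and write a traceability manifest."""
    with open(artifact_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    manifest = {
        "model_sha256": digest,
        "training_data_ref": training_data_ref,
        "locked_at": datetime.now(timezone.utc).isoformat(),
        "validation_report": None,  # attach before use in the trial
    }
    with open(artifact_path + ".manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```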

We also prepared the FDA Pre-Sub meeting presentation and Q&A materials.

Total time from "we need your help" to "new protocol is ready": 8 weeks.

The Results

FDA Pre-Sub Meeting:
FDA appreciated that we came in early. They confirmed our approach on all five major issues. They also caught a few minor things we'd missed (labeling requirements, cybersecurity documentation).

Written feedback from FDA: Confirmation that the proposed study design was appropriate to support the intended use.

Trial Timeline:

  • Original plan: Sign CRO contract immediately, start trial many months later

  • Actual timeline: FDA Pre-Sub completed, revised protocol finalized, started trial earlier than originally projected

We actually got them started faster because the FDA meeting gave them clarity and confidence.

Cost Impact:

  • Original CRO quote (initial protocol): Substantial

  • Revised CRO quote (improved protocol, larger study): Higher upfront but appropriate

  • My engagement: [Redacted per NDA]

But here's the real savings:

If they'd started the trial with the original protocol and FDA had rejected their 510(k) submission 18+ months later, they would have had to:

  • Re-run parts of the trial (significant additional cost)

  • Delay clearance by 12-18 months (substantial operational burn)

  • Potentially re-raise funding on less favorable terms

Conservative estimate: 6-12 months saved on the path to clearance, plus significant cost avoided.

Business Outcome

The trial is currently underway. Enrollment is proceeding well. The algorithm is performing as expected.

The founders told their board: "The upfront investment in getting this right saved us from a much more expensive mistake down the road."

WHAT MADE THE DIFFERENCE
Regulatory expertise specific to 2025
The PCCP guidance was finalized in 2025. Many service providers haven't updated their approaches to reflect this yet. I stay current with FDA guidance.

Pattern recognition from recent clearances
I'd seen multiple AI-enabled devices go through FDA review recently. I knew what FDA was asking for in 2025.

Willingness to increase scope when necessary
Expanding the study increased costs. But it was the right call. The founders agreed because I showed them the risk/reward analysis.

FDA engagement before spending money
The Pre-Sub meeting gave them written confirmation they were on the right track before spending substantial budget on the trial.

Lessons Learned

1. PROTOCOL PROBLEMS ARE EXPENSIVE TO FIX LATER

The issues we found would have cost 12-18 months and significant budget to fix if discovered after the trial started or during FDA review. Catching them in week 1 meant they could be fixed in 8 weeks at a fraction of the cost.

The lesson: An ounce of prevention is worth a pound of cure. Spend time getting the protocol right before you spend money executing it.

2. FDA WANTS TO HELP (IF YOU ASK EARLY)

FDA encourages early engagement through the Q-Submission program. They'd rather give you guidance upfront than reject your submission 18 months later.

The Pre-Sub meeting gave these founders written confirmation they were on the right track. That's worth more than any consultant opinion.

The lesson: Don't be afraid of FDA. Most reviewers are helpful if you engage early with thoughtful questions.

3. MOST CROs DON'T STAY CURRENT ON AI GUIDANCE

The PCCP guidance was finalized in 2025. This CRO was still using 2023-2024 protocol templates. They weren't trying to screw the client; they just hadn't updated their approach.

If you're building an AI device, you need someone who's tracking current FDA thinking, not someone recycling old templates.

The lesson: Assume your CRO is 12-18 months behind on regulatory guidance for emerging technologies. Verify everything.


Protocol Review

Don't Sign That CRO Contract Yet.