Florida Universities Cut Low-Enrollment Degree Programs

In a candid conversation, Ethan Blaine speaks with Camille Faivre, an education management expert who helps universities redesign open and e-learning programs. Drawing on a decade of systemwide program reviews and hands-on turnaround work after the pandemic, she explains how Florida’s public universities reached decisions to terminate 18 programs and suspend eight, what the numbers really mean, and how institutions can protect academic breadth while meeting workforce needs. The discussion ranges from thresholds (30/20/10 graduates over three years) and cohort analytics to playbooks for teach-outs, faculty transitions, and curriculum pilots—connecting policy to practice with clear guardrails and pragmatic compassion.

When you told lawmakers that 18 programs will be terminated and eight suspended, how did you decide which went into each bucket, and what timeline are you working with? Walk me through a recent example, step by step, including who was in the room and what data swayed them.

We separated terminations from suspensions by asking two questions: does the program sit below the three-year graduate thresholds, and is there credible evidence that a curriculum or delivery update could move it above them soon? For termination, the program was below the threshold with no credible path back above it; for suspension, the potential was there but not yet proven. In a recent case, a master’s program fell below even the seven-to-10 graduate norms used in other systems, let alone our 20-in-three-years threshold, so we convened academic affairs, institutional research, advising leads, and the dean. The institutional research team showed a three-year graduate count below 20 and stagnant applicant yield, while advising flagged a thin pipeline. Because the curriculum was outdated and the pipeline weak, that one went to termination; a sister program with a clear online redesign plan went to suspension with a set review window.
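
To make that two-question sort concrete, here is a minimal sketch in Python. The 30/20/10 three-year thresholds come from the review; the record fields (level, grads_3yr, credible_redesign_plan) and the example numbers are hypothetical illustrations, not the system’s actual data model.

```python
# Hypothetical sketch of the two-question sort described above.
# Thresholds (30/20/10 graduates over three years) come from the interview;
# field names and example data are illustrative only.

THREE_YEAR_THRESHOLDS = {"bachelors": 30, "masters": 20, "doctorate": 10}

def classify(program):
    """Return 'continue', 'suspend', or 'terminate' for one program record."""
    threshold = THREE_YEAR_THRESHOLDS[program["level"]]
    if program["grads_3yr"] >= threshold:
        return "continue"
    # Suspension requires credible evidence that a curriculum or delivery
    # update could move the program above the threshold soon.
    if program["credible_redesign_plan"]:
        return "suspend"
    return "terminate"

# Example: a master's program with an outdated curriculum and a thin pipeline
# lands in the terminate bucket.
example = {"level": "masters", "grads_3yr": 9, "credible_redesign_plan": False}
print(classify(example))  # -> terminate
```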

The review flagged 214 underperforming programs in the past three years. Which metrics were most decisive in that list, and how were outliers handled? Share a case where the numbers looked weak at first but changed after deeper analysis, with the specific data points.

The decisive metric was degrees awarded over the last three years: fewer than 30 for bachelor’s, fewer than 20 for master’s, and fewer than 10 for doctorates. We then layered in cohort flow—applicant yield, credit hour production, and time-to-degree—so outliers with a small base but growing momentum weren’t unfairly punished. One program initially fell under the bachelor’s 30-in-three-years bar, but first-year to second-year persistence was rising and upper-division credit hours had ticked up. That meant the weak graduate count masked an improving pipeline. We flagged it for continuation with monitoring rather than suspension, because it was trending toward the 30 threshold.
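
A small illustration of layering cohort-flow trends on top of the raw graduate count. The bachelor’s bar of 30 is the review’s; the “strictly rising” trend test and the sample persistence and credit-hour figures are placeholders, not the actual formula used.

```python
# Illustrative sketch of layering cohort-flow trends on top of the raw
# three-year graduate count. The trend test and sample numbers are
# hypothetical, not the review's actual formula.

def trending_up(series):
    """True if each value in a short time series improves on the previous one."""
    return all(later > earlier for earlier, later in zip(series, series[1:]))

def review_bachelors(grads_3yr, persistence_by_year, upper_div_hours_by_year):
    if grads_3yr >= 30:
        return "continue"
    if trending_up(persistence_by_year) and trending_up(upper_div_hours_by_year):
        # Weak graduate count but an improving pipeline: keep, but watch it.
        return "continue with monitoring"
    return "flag for suspension or termination review"

# The outlier case from the interview: under the 30 bar, but rising
# persistence and upper-division credit hours (figures are made up).
print(review_bachelors(24, [0.71, 0.75, 0.79], [5200, 5600, 6100]))
# -> continue with monitoring
```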

Your thresholds were 30 bachelor’s graduates, 20 master’s, and 10 doctorates over three years. Why those cutoffs, and how do you account for programs with high impact but low headcount? Tell a story about a borderline program and how you resolved it.

The thresholds reflect the scale needed to sustain course rotations, balance faculty loads, and keep cost per degree reasonable across 12 institutions. They’re not ceilings; they’re tripwires for deeper review. For high-impact, low-headcount fields, we look at mission fit and downstream workforce needs, especially licensure fields and specialized sciences. A borderline doctorate just under the 10-graduate bar had outsized research impact and strong placements, so we protected it. The trade-off was consolidating electives and sharing a seminar with a related program so it could meet the bar without losing its identity.

You said 68% of the underperformers are in liberal arts, education, and sciences. What factors drove that concentration, and what protections, if any, exist for core disciplines? Give concrete examples of two programs on the bubble and the distinct rationales behind their outcomes.

The concentration reflects national enrollment shifts, fewer majors in some humanities, and students gravitating toward applied tracks. Still, core disciplines remain foundational, so we examine cross-campus credit hour contributions and general education roles. One foreign language program on the bubble was retained because it anchors general education and feeds teacher preparation, even though its majors fell below the 30 threshold. A niche cultural studies program, also below threshold, was consolidated with a broader social science track to preserve content while stabilizing delivery. In both cases, we protected learning outcomes but varied the structure.

Master’s programs made up 55% of the underperformers, partly because of terminal master’s awards for doctoral non-completers. How do you separate “stopout safety nets” from genuine demand signals? Share the metrics you track across cohorts, with at least one cohort-level anecdote.

We disaggregate degrees awarded by initial intent: terminal master’s seekers versus doctoral candidates receiving a master’s as a stopout. Then we compare yield, time-to-degree, and post-completion outcomes. In one cohort, the master’s tally looked healthy, but most awards were stopouts, not primary master’s demand. Applicant yield for the master’s track was flat while doctoral attrition created the numbers. That program moved to suspension for a redesign because the “demand” was backfilled attrition, not a true market.
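
Here is a hedged sketch of that disaggregation. Only the distinction between terminal master’s seekers and doctoral stopouts comes from the review; the intent labels, the 50% rule of thumb, and the cohort numbers are illustrative.

```python
# Hypothetical sketch of disaggregating master's awards by initial intent:
# terminal master's seekers versus doctoral candidates who stop out with a
# master's. Labels, cohort data, and the 50% cutoff are illustrative only.

from collections import Counter

def stopout_share(intent_labels):
    """intent_labels: 'terminal' (primary master's seeker) or 'stopout' (doctoral non-completer)."""
    counts = Counter(intent_labels)
    total = sum(counts.values())
    return counts["stopout"] / total if total else 0.0

# Hypothetical cohort: the headline count looks healthy, but most awards
# are doctoral stopouts rather than primary master's demand.
cohort = ["stopout"] * 14 + ["terminal"] * 7
share = stopout_share(cohort)
print(f"{share:.0%} of awards are doctoral stopouts")  # -> 67% of awards are doctoral stopouts

if share > 0.5:
    print("Demand is mostly backfilled attrition; route to suspension and redesign review")
```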

For the eight suspended programs, you’ll stop enrolling and “take a hard look.” What does that audit look like week by week, and what evidence would trigger a relaunch versus a wind-down? Describe one curriculum update you’d pilot and the KPIs you’d use.

Weeks 1–2, we audit curriculum maps, syllabi, and faculty load; weeks 3–4, we analyze three-year graduate counts against 30/20/10 and map bottleneck courses; weeks 5–6, we test market interest via outreach and transfer pathways; weeks 7–8, we finalize a go/no-go with clear targets. Relaunch requires evidence of applicant yield growth and a pathway to exceed the threshold within the next three-year cycle. A pilot we favor is modularizing a sequence into stackable credentials with an online track. KPIs include applicant yield, first-term credit accumulation, and movement above the three-year graduate thresholds.
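
A rough sketch of the go/no-go check at the end of that eight-week audit, assuming hypothetical yield figures and a simple projection. The requirement itself, growing applicant yield plus a path above the threshold within the next three-year cycle, is the one described above.

```python
# Hypothetical go/no-go check after the eight-week audit. The yield figures
# and projection are placeholders; the decision rule (yield growth plus a
# path above the threshold in the next three-year cycle) is from the interview.

def relaunch_decision(applicant_yield_by_term, projected_grads_next_3yr, threshold):
    yield_growing = applicant_yield_by_term[-1] > applicant_yield_by_term[0]
    clears_threshold = projected_grads_next_3yr >= threshold
    return "relaunch" if (yield_growing and clears_threshold) else "wind down"

# Hypothetical suspended master's program (threshold: 20 graduates in 3 years).
print(relaunch_decision(applicant_yield_by_term=[0.18, 0.22, 0.27],
                        projected_grads_next_3yr=23,
                        threshold=20))  # -> relaunch
```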

You plan to continue at least 150 underperforming programs and consolidate 30 others. What tipped those decisions, and how do you measure success post-consolidation? Offer a concrete before-and-after example with enrollment, graduation, and cost data.

Continuation was tipped by upward trends in credit hours and persistence, even if the three-year totals lagged. Consolidation was chosen when curriculum overlap was high and faculty expertise could be shared. One merged pair moved from below the 30-in-three-years bachelor’s bar to clearing it after combining advising and core courses. Graduation counts climbed above the threshold while cost per degree dropped due to shared sections. We track those changes over the next review cycle to confirm they stick.

Reviews happen every three to four years and have axed over 100 programs since the first big sweep in 2011, when 492 were flagged and 73 cut. What did you learn from 2011 that changed today’s process? Share a then-and-now comparison with specific metrics.

In 2011, a large sweep flagged 492 programs and cut 73, which taught us to balance speed with clarity. Now we sequence reviews every three to four years, pair the 30/20/10 rule with cohort indicators, and add suspension as a rehabilitation lane. The data cadence is tighter, and faculty receive earlier signals when programs slip below thresholds. As a result, we’ve cut more than 100 programs over the years, but with better teach-outs and fewer surprises. The aim is continuous improvement, not one-time purges.

How do you weigh student pipeline and workforce needs against graduate counts, especially for niche fields like foreign languages or ethnic studies? Give an example where labor market data altered an initial verdict, including sources and thresholds.

We start with the three-year graduate counts, then test against regional demand and pipeline indicators. For niche fields, strong general education contribution or clear workforce pipelines can outweigh small graduating classes. In one case, labor market signals aligned with teacher shortages, so a language program under the 30 threshold was continued. The deciding factor was its role in preparing educators and its credit hour footprint. That’s how we balance counts with mission and need.

What happens to current students in the 18 programs slated for termination—teach-out plans, course access, and advising? Walk me through the full playbook, including timelines, stopgaps for required classes, and one real case where the plan had to pivot.

We freeze new admissions, publish teach-out maps, and guarantee required courses on a defined schedule through completion. Advising conducts one-on-one audits, and departments stage key classes at least once more. When a low-enrolled capstone risked cancellation, we cross-listed it with a related program to preserve learning outcomes and let students finish. The timeline is aligned to typical degree pacing so no one is stranded. Transparency and predictable offerings are non-negotiable.

Faculty worry about job security when programs close. How are positions reassigned or retrained, and what criteria guide those calls? Share a concrete scenario that shows how you handled tenure, workload, and retooling, with outcomes six and twelve months later.

We prioritize reassignment into consolidated programs where curriculum overlap exists, then fund retooling for online or interdisciplinary teaching. Tenure protections are honored; workload is balanced by shifting service and advising where courses contract. In one closure, faculty moved into a broader social science program and completed online teaching training. Six months later, they were delivering shared core courses; at twelve months, sections were stable and student evaluations improved. The key is matching expertise with the receiving curriculum.

Florida’s approach contrasts with Indiana’s quotas (15 bachelor’s, 10 associate, 7 master’s, 3 doctoral) that led to 75 eliminations and 101 suspensions, plus confusion. What guardrails are you using to avoid that chaos? Give examples of communications and decision checkpoints.

Our guardrails are cadence, transparency, and staged decisions. We use public thresholds of 30/20/10, share preliminary lists, and hold checkpoint meetings before final calls. Communications include FAQ sheets for departments and dated timelines so faculty know when decisions lock. We also distinguish termination from suspension and consolidation. That prevents the sudden pivots that caused confusion elsewhere.

Ohio’s SB 1 has sparked dozens of proposed cuts. What lessons are you borrowing—or rejecting—from Ohio and Indiana? Describe one policy you’d adopt tomorrow and one you’d avoid, with the evidence behind each choice.

We’d adopt the clarity of simple numeric triggers—stakeholders understand thresholds quickly. But we avoid blanket quotas that don’t account for mission or pipeline, because they can produce rushed cuts and confusion. Florida’s 30/20/10 plus multi-factor review keeps rigor without blunt-force outcomes. Evidence from Indiana’s rapid eliminations and suspensions shows how speed without staging can unsettle campuses. Our path is firm numbers, phased judgment.

Beyond graduate counts, what leading indicators do you track—applicant yield, credit hour production, job placement, licensure pass rates? Share a case where a leading indicator predicted a program’s rebound, including the timeline and exact metrics.

We track applicant yield, first-year credit accumulation, and upper-division credit hours alongside graduate thresholds. In one bachelor’s program, upper-division credit hours rose even while three-year graduates were under 30. That early signal justified continuation instead of suspension. Over the next cycle, the program cleared the threshold, confirming the indicator’s value. Leading metrics give us a head start on recovery.

How do you evaluate consolidation candidates—curriculum overlap, cost per degree, or faculty expertise—and prevent losing unique strengths? Tell a story about merging two programs, the decision map you used, and the post-merge student outcomes.

We map course overlap, align learning outcomes, and check faculty depth to ensure continuity. If we can merge without erasing signature content, consolidation makes sense. Two small programs combined shared methods and seminars, then retained distinct capstones. Students gained more course availability and the merged unit surpassed the three-year graduate bar. Unique strengths were kept through dedicated electives.

What are the budget and tuition impacts of these changes over the next two cycles, and how will you report them publicly? Walk through your model’s key assumptions, sensitivity tests you’ve run, and one scenario that would force a course correction.

The model assumes stable tuition policy, course-sharing efficiencies from 30 consolidations, and steady reviews every three to four years. We test sensitivities for enrollment dips and delivery shifts to online. Public reporting follows each review with program lists—terminate, suspend, continue, consolidate—and rationale tied to 30/20/10. A scenario that would force a course correction is a sudden labor market swing that elevates demand in an area we paused. In that case, we’d pivot a suspension to relaunch.
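
To show the shape of that sensitivity test, here is a deliberately simplified sweep over enrollment-dip scenarios. Every figure (baseline credit hours, revenue per hour, savings per consolidation) is a placeholder; only the 30 consolidations and the scenario framing come from the interview.

```python
# A minimal, hypothetical sensitivity sweep over enrollment dips, in the
# spirit of the model described above. All dollar and credit-hour figures
# are placeholders, not actual budget data.

def net_revenue(credit_hours, rate_per_hour, consolidations, savings_each):
    return credit_hours * rate_per_hour + consolidations * savings_each

baseline_hours = 1_000_000     # hypothetical systemwide credit hours
rate = 200.0                   # hypothetical revenue per credit hour
consolidations = 30            # count from the interview
savings_each = 150_000.0       # hypothetical efficiency per consolidation

for dip in (0.00, 0.02, 0.05, 0.10):   # enrollment-dip scenarios
    hours = baseline_hours * (1 - dip)
    print(f"enrollment dip {dip:.0%}: net ${net_revenue(hours, rate, consolidations, savings_each):,.0f}")
```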

For programs you’ll try to revive, what interventions work best—industry partnerships, stackable credentials, or online delivery? Give a play-by-play of one turnaround plan with milestones, partner roles, and the enrollment or completion targets you set.

Stackable credentials and online delivery are powerful when paired with advisory input from employers. We map a one-year pilot: redesign core courses, launch a stackable sequence, and open an online track. Milestones include a measurable bump in applicant yield and credit accumulation by term, on a path to exceed the three-year thresholds. Employer partners validate assignments and offer projects that lift placement. If we don’t hit those markers, we reconsider the path.

Looking ahead to the next 3–4-year review, what would count as success across the system? Define your scoreboard—by degree level and field—and share an example of how you’ll celebrate wins and respond to misses with specific next steps.

Success is hitting or surpassing 30 graduates for bachelor’s, 20 for master’s, and 10 for doctorates across previously underperforming programs, with balanced representation in liberal arts, education, and sciences. We’ll also aim to reduce the share of master’s on the underperforming list, now 55%, by fixing the stopout distortion. Wins are public—profiles of programs that climbed above thresholds through smart redesign. Misses trigger either suspension for a hard look or consolidation to protect outcomes. The scoreboard is simple, visible, and tied to action.
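
As a closing illustration, a minimal scoreboard calculation under the same assumptions as the earlier sketches: hypothetical program records scored against the 30/20/10 thresholds, plus the master’s share of whatever remains below the bar.

```python
# Hypothetical scoreboard: how many previously underperforming programs now
# clear their threshold, and what share of the remaining underperformers are
# master's programs. Records are illustrative only.

THRESHOLDS = {"bachelors": 30, "masters": 20, "doctorate": 10}

def scoreboard(programs):
    cleared = [p for p in programs if p["grads_3yr"] >= THRESHOLDS[p["level"]]]
    remaining = [p for p in programs if p not in cleared]
    masters_share = (sum(p["level"] == "masters" for p in remaining) / len(remaining)
                     if remaining else 0.0)
    return len(cleared), len(remaining), masters_share

# Hypothetical mini-portfolio of previously flagged programs.
portfolio = [
    {"level": "bachelors", "grads_3yr": 34},
    {"level": "masters", "grads_3yr": 16},
    {"level": "masters", "grads_3yr": 22},
    {"level": "doctorate", "grads_3yr": 8},
]
cleared, remaining, masters_share = scoreboard(portfolio)
print(cleared, remaining, f"{masters_share:.0%}")  # -> 2 2 50%
```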

Do you have any advice for our readers?

Pair clear numbers with humane processes. Use thresholds like 30/20/10 to focus attention, then let leading indicators and mission clarify the path. Treat suspension as a laboratory, not a punishment, and make teach-outs a promise, not a footnote. If you communicate early and often, you’ll avoid panic, protect core disciplines, and give struggling programs a real chance to improve.
