Padel AI MVP — capture, insight, practice loop
How a padel coaching app earns the right to exist
A padel coaching app has to compound three things into one product, not three: capture that succeeds without club hardware, an insight a player can act on in their next session, and a coach who signs off on the result. Skip any one and the loop stalls — capture without insight is a video archive, insight without coach buy-in is a newsletter, coach buy-in without capture is a workshop business. The diagram below is the smallest shape that closes the loop.
The strongest part of the loop is step three, the coach's sign-off. It is the only place a video stream becomes a labelled training signal: which drill was assigned, which losing pattern it targeted, and whether the player improved. Without that closing tag, the model never gets better than commodity tagging.
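The closing tag can be pictured as a small record. A minimal sketch follows; the field names and the example values are assumptions for illustration, not a fixed schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class OutcomeTag:
    """One coach sign-off: the label that turns a match video into a training signal.

    Field names are illustrative assumptions, not a committed schema.
    """
    match_id: str             # which processed match this tag closes
    drill: str                # which drill the coach assigned
    losing_pattern: str       # which losing pattern the drill targeted
    improved: Optional[bool]  # did the player improve? None = coach could not judge
    tagged_on: date           # when the coach filed the tag

# Hypothetical example: a coach closes the loop on one match.
tag = OutcomeTag(
    match_id="m-0412",
    drill="cross-court bandeja under pressure",
    losing_pattern="lob retreat loses the net",
    improved=True,
    tagged_on=date(2024, 5, 3),
)
```

The point of the record is the `improved` field: everything upstream of it is commodity video tagging, and only the coach's judgment turns it into supervision.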
Three pilot tracks compared
Before any of the three pilots starts, decide which question matters more: how fast the loop can ship, or how deeply it can be measured. The table below tests the same loop against three pilot shapes. None is wrong; the choice depends on what is still unknown.
| Track | Who runs it | What it tests | What it learns | What it cannot tell you |
|---|---|---|---|---|
| A · Single club, one academy | One academy lead, three to five coaches, twenty to thirty regular players over four to six weeks | Whether players read the recap and act on it inside one social context | Whether the loop has any traction at all and where capture friction shows up | Whether the loop generalises across clubs or coaching styles |
| B · Two clubs, paired | Two academies in the same city, four to six coaches each, two cohorts of players running in parallel | Whether the loop survives a second context with different coach habits | Whether ratings drift across clubs and whether the recap template needs club-specific tuning | Whether the loop works in markets with different padel cultures (Iberia vs the Gulf vs the Russian-speaking world) |
| C · Open beta, narrow geography | A landing page in one city or country, anyone can sign up, no academy partnership | Whether the loop works for self-served players without a coach in the loop | What happens to retention when there is no human reinforcement at step three | Whether the loop works at all — open beta tests scale, it does not test value |
The honest order is A → B → C. Track A answers the value question with the smallest possible group of people. Track B tests whether what worked in one academy survives in another. Track C only makes sense after A and B return strong signals.
What to build first
The diagram below shows the build order under three different early signals. Each branch picks the smallest next thing that earns the right to keep going.
Risk register
Five things can go wrong, ordered by how likely each is to break the pilot. Each row names the symptom, where to spot it early enough to react, and a response that does not require rebuilding from scratch.
| Risk | What it looks like | Where to spot it | What to do |
|---|---|---|---|
| Players never read the recap | The PDF is opened by fewer than three in ten players within three days of receiving it | Email or chat read receipts during weeks two and three | Move the recap from a downloadable file to a short message inside the chat the player already uses with the coach |
| Coaches refuse the tool | Two or more coaches stop filing the outcome tag during a single week despite reminders | Weekly tagging report shared with the academy lead | Switch to coach-only delivery: the recap goes to the coach first; the player only sees what the coach decides to share |
| The recap is too generic | Players say "I already knew that" in week-three interviews | Two short interviews per coach panel, recorded for replay | Re-curate the drill list with the coaches and shorten the recap to one losing pattern instead of three |
| Phone capture fails outdoors or indoors | More than one in four matches cannot be processed because of light, occlusion, or angle | Ingestion log shows rejection rate by week | Publish a one-page setup card with where to mount the phone, what to point it at, and how high; offer a club-camera fallback if the partner club has one |
| Ratings drift between clubs | A player's rating changes by more than five percent across two clubs in two weeks | Cross-club rating comparison once a second club is added in track B | Lock the shot taxonomy to a published schema before any second-club expansion; treat the schema as a contract, not an evolving guess |
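Four of the five risks above carry a numeric threshold (the "too generic" risk is interview-based and has no number). Those thresholds can be turned into a weekly health check; the sketch below uses the values straight from the table, while the metric names themselves are assumptions about what the pilot's logs would expose.

```python
# A minimal sketch of the weekly pilot health check implied by the risk register.
# Threshold values come from the table above; metric key names are assumptions.

RISK_CHECKS = {
    # risk name: (metric key, condition that trips the risk)
    "recap unread":        ("recap_open_rate",     lambda v: v < 0.30),  # fewer than 3 in 10 open
    "coach tagging lapse": ("coaches_lapsed_week", lambda v: v >= 2),    # 2+ coaches stop tagging
    "capture failure":     ("ingest_reject_rate",  lambda v: v > 0.25),  # more than 1 in 4 rejected
    "rating drift":        ("cross_club_drift",    lambda v: v > 0.05),  # over 5% across clubs
}

def tripped_risks(metrics: dict) -> list:
    """Return the names of risks whose weekly metric crosses its threshold."""
    return [
        name
        for name, (key, trips) in RISK_CHECKS.items()
        if key in metrics and trips(metrics[key])
    ]

# Hypothetical week: capture and tagging are healthy, but recaps go unread
# and the second club's ratings have drifted past the 5% bound.
week = {
    "recap_open_rate": 0.22,
    "coaches_lapsed_week": 0,
    "ingest_reject_rate": 0.10,
    "cross_club_drift": 0.07,
}
# tripped_risks(week) flags "recap unread" and "rating drift".
```

Each tripped name maps back to one "What to do" cell in the table, so the check is a prompt to act, not a dashboard for its own sake.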
What this loop is not
The loop is not a video editor. It is not a club-management tool. It is not a tournament organiser. It is not a marketplace for coaches. Each of those would compete with companies that have years of work in those categories. The loop earns its right to exist only on one job: turning the next match into a measurably better next session, with a coach in the room.
How to read this against the rest of the research
This page describes the loop. The strategic brief explains why it is the loop worth building. The competitor landscape shows which adjacent products already own pieces of it. The subscription economics page works out what the loop has to be worth per month for the unit economics to land. The ninety-day plan is what the first quarter looks like if track A starts on day one.