
Learning science

Kids learn by building, not by sitting through lessons. Pedagogy is embedded in the loop itself. Every checkpoint is a teaching moment. Every error translation is a teaching moment. Every concept detected is a teaching moment the parent can see.

LAST UPDATED 2026-04-22

Tekku is not a curriculum. There is no lesson plan, no quiz, no badge for sitting still. The pedagogy is embedded inside the build loop itself. A kid has an intent, a kid tries, a kid gets stuck, the AI scaffolds, the kid ships, the kid reflects, and a new intent forms. The concept coverage falls out of the code that the kid wrote, which is the only honest evidence of learning a parent can hold.

This page walks the pedagogical lineage, the scaffolding model, the Stage 1 concept set, the outcomes-mapping shape, and the parent-facing claim posture. Where a claim leans on academic work we have not yet cited, the claim is marked TODO(learning) until the advisor review lands and the citation is firm.

The pedagogy loop

flowchart LR
  INT[Kid has intent<br/>"make it bounce"] --> TRY[Kid tries<br/>types prompt]
  TRY --> STK{Kid stuck?}
  STK -->|no, works| SHP[Kid ships<br/>real URL]
  STK -->|yes, broken| SCA[AI scaffolds<br/>checkpoint or explain]
  SCA --> TRY
  SHP --> REF[Kid reflects<br/>texts URL, sees response]
  REF --> NEW[New intent<br/>"now add sound"]
  NEW --> INT
Kids learn by building. The AI scaffolds when the kid gets stuck. The ship event is the reflection moment. The next intent forms from what the kid shipped, not from what a lesson told them to try.
Pedagogical framework

Constructionist in lineage. Build-first, label-second. The artifact is the evidence.

The pedagogical lineage is constructionist, which is the same theoretical frame that Scratch was built on: kids learn by building artifacts they care about, making them real to someone else, and discovering concepts in the process rather than being taught them in the abstract. Seymour Papert and Yasmin Kafai are the primary citations. TODO(learning): advisor review to confirm the citation shape and tighten the language we use in parent-facing surfaces.

Tekku's specific twist is that the AI is the scaffolding partner rather than the teacher. The AI does not own the curriculum. The kid owns the project. The AI answers when the kid asks, explains when the kid is stuck, and proposes a patch when the kid cannot write the next line. The kid ships. The AI labels what was learned. The parent reads the label in plain English and trusts the claim because the evidence line is a real snippet from a real session.

What this framework is not: a direct-instruction curriculum, a problem-set walker, a tutor, a homework helper. Tekku is the studio a kid enters to make something of their own. Khanmigo is the tutor. Duolingo is the daily habit. Tekku is the workshop. The three can coexist on a household budget.

Scaffolding model

Three active layers. Checkpoint questions, patch explanations, error translations. All tuned to kid reading level.

The scaffolding model draws on the zone of proximal development framing (Vygotsky): the AI helps the kid do what the kid is close to being able to do alone. TODO(learning): cite the specific Vygotskian synthesis we are leaning on, and confirm with advisor that the framing matches what an educator would expect.

Three scaffolding surfaces are live today. First, checkpoint questions. When the kid's idea is fuzzy, the checkpoint_question tool (lib/ai/tools.ts) pauses the build and offers exactly three short choices. The constraint is exact on purpose: three is enough to disambiguate intent, and few enough that the kid never has to read a wall of options. Second, patch explanations. The submit_patch tool requires a one-sentence explanation in grade-5 kid language alongside every code change. The kid sees what changed and why before the change applies. Third, error translations. The error-explanation prompt in lib/prompts/error-explanation.md translates every error class ("unexpected token", "cannot read property of undefined") into a kid-language action ("something in the code is typed a little off").
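A minimal sketch of the three surfaces, assuming shapes like the ones described above. The real schemas live in lib/ai/tools.ts and lib/prompts/error-explanation.md; the type names, the second translation string, and the fallback line here are illustrative, not the shipped implementation.

```typescript
// Checkpoint question: exactly three choices, enforced at the type level.
type CheckpointQuestion = {
  question: string;                   // short, kid-readable
  choices: [string, string, string];  // tuple type makes "exactly three" a compile-time rule
};

// Patch explanation: every code change carries one kid-language sentence.
type SubmitPatch = {
  file: string;
  diff: string;
  kidExplanation: string;             // one sentence, grade-5 reading level
};

// Error translation: map an error class to a kid-language action.
// Only the first mapping comes from the document; the rest is assumed.
const ERROR_TRANSLATIONS: Record<string, string> = {
  "unexpected token": "something in the code is typed a little off",
  "cannot read property of undefined":
    "the code is looking for something that isn't there yet",
};

function translateError(message: string): string {
  const lowered = message.toLowerCase();
  const key = Object.keys(ERROR_TRANSLATIONS).find((k) => lowered.includes(k));
  // Fallback keeps the no-shame posture: never red, never "wrong".
  return key ? ERROR_TRANSLATIONS[key] : "the code hit a snag, let's look at it together";
}
```

The tuple type on choices is the design point: the "exactly three" constraint is structural, not a convention a prompt can drift away from.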

What each layer does not do: it does not grade the kid, it does not expose a rubric, it does not tell the kid they got it wrong. Errors are never red. Recovery never shames. The scaffolding is what a patient adult in the same room would do, not what a worksheet does.

Outcomes mapping: how we track learning without testing kids

Concept coverage per kid, artifact-to-concept linking, first-to-confident progression. The weekly parent email is the output.

The concept detector in lib/concepts/detector.ts walks the kid's saved code after every turn and tags the concepts it finds. Stage 1 is the five-concept set in lib/concepts/catalog.ts: state, effects, events, lists, conditionals. Each concept has a kid-facing phrase and a parent-facing phrase. The detector returns each tagged concept with an evidence snippet (a slice of code around the pattern) and the file the pattern came from. That evidence line is what lands in the weekly parent email and the parent dashboard.
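The Stage 1 detector can be sketched in a few lines. This is a hedged approximation of what lib/concepts/detector.ts does, assuming a regex-per-concept catalog; the patterns, window size, and return shape here are illustrative, not the shipped code.

```typescript
// One detected concept: what the kid used, where, and the code slice that proves it.
type ConceptHit = {
  concept: string;
  file: string;
  evidence: string; // slice of code around the matched pattern, for the parent email
};

// Illustrative Stage 1 patterns for the five-concept catalog.
const STAGE1_PATTERNS: Record<string, RegExp> = {
  state: /useState\s*\(/,
  effects: /useEffect\s*\(/,
  events: /on(Click|Change|KeyDown|Submit)\s*=/,
  lists: /\.map\s*\(/,
  conditionals: /\?[^]*?:|&&/,
};

function detectConcepts(file: string, code: string): ConceptHit[] {
  const hits: ConceptHit[] = [];
  for (const [concept, pattern] of Object.entries(STAGE1_PATTERNS)) {
    const match = pattern.exec(code);
    if (match) {
      // Evidence line: a short window of code around the match.
      const start = Math.max(0, match.index - 20);
      hits.push({
        concept,
        file,
        evidence: code.slice(start, match.index + 40).trim(),
      });
    }
  }
  return hits;
}
```

The point of the shape is that the claim and its proof travel together: a ConceptHit is never just a label, it always carries the snippet the parent can read.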

Coverage accumulates per kid across sessions. The first time a kid uses a concept (regardless of project) triggers the parent-facing "Maya used state for the first time" claim. Subsequent uses feed a repeat-use count per concept. The confidence gate promotes a concept from "new" to "confident" in the parent view. TODO(learning): the first-to-confident threshold is not yet sourced in published pedagogy work. The current plan is four independent uses across at least two projects, pending advisor review.
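The planned gate is small enough to state as code. A sketch under the current plan (four independent uses across at least two projects), which is explicitly pending advisor review; the names here are hypothetical.

```typescript
// One detected use of a concept, tagged with the project it appeared in.
type ConceptUse = { projectId: string };

type ConceptStatus = "new" | "confident";

// Current plan, not a sourced standard: four uses spread over two or more projects.
function conceptStatus(uses: ConceptUse[]): ConceptStatus {
  const projects = new Set(uses.map((u) => u.projectId));
  return uses.length >= 4 && projects.size >= 2 ? "confident" : "new";
}

// First use ever (regardless of project) triggers the
// "Maya used state for the first time" parent-facing claim.
function isFirstUse(priorUses: ConceptUse[]): boolean {
  return priorUses.length === 0;
}
```

Note the cross-project requirement: four uses inside one project could be one pattern copied four times, so the gate asks for transfer, not repetition.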

Stage 2 replaces the regex detector with a Claude Haiku classifier (TODO-002). Richer concept set, confidence scores, subtler concept detection (derived state, useEffect dependencies, event propagation). The mapping shape stays the same: evidence line, first-use flag, confidence progression. The upgrade path is contained to one file.
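The contained upgrade path can be made concrete as a type. A hypothetical Stage 2 hit shape that keeps the Stage 1 mapping fields and adds what the classifier contributes; the field names are illustrative.

```typescript
// Stage 2 keeps the Stage 1 contract (evidence line, first-use flag) and adds
// a classifier confidence score. Field names here are an assumption.
type Stage2ConceptHit = {
  concept: string;    // expanded set: derived state, data flow, composition, async fetching
  file: string;
  evidence: string;   // same evidence-line contract as Stage 1
  firstUse: boolean;  // drives the "used X for the first time" parent claim
  confidence: number; // classifier confidence, 0 to 1
};
```

Because the consumers (parent email, dashboard) read only this shape, swapping the regex detector for the Haiku classifier stays a one-file change.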

Parent-facing pedagogical claims: what we say, what we do not

Evidence-anchored claims only. No adjective-driven progress reports. No standards-alignment claims that are not audited.

What we do say: the kid used a specific concept, the evidence line is a real code snippet, the session was a real session, the shipped URL is a real URL. Every claim in the weekly parent email is auditable by clicking through to the session snapshot. The claim lives next to the evidence, not across the page from it.

What we do not say: we do not say the kid "mastered" a concept. We do not say the kid is "ahead of grade level" without an external assessment to anchor the comparison. We do not claim standards alignment (Common Core, state K-12 CS standards) that we have not mapped and published. TODO(learning): the standards-alignment build is a Stage 2 artifact that lands alongside the Workshop tier. It requires a specific mapping from Tekku concepts to state-level standards and is not something we hand-wave.

The reason for the caution is commercial. A parent who catches one bad claim in the weekly email cancels the subscription. The weekly email is a bank account. Every accurate claim is a deposit. Every overstated claim is a withdrawal. We run the bank.

Academic advisor strategy

No advisors retained today. The strategy for getting them is named and calendarized.

TODO(team): no academic advisors retained as of 2026-04-22. The plan is to close one advisor from the constructionist lineage (Scratch / MIT Media Lab orbit) and one advisor from the AI-in-education policy space (Stanford HAI, CMU HCII, or a peer institution) before the Stage 2 public pilot launches. Advisors carry cash and equity at a modest line; the primary currency is publication and standards-mapping credibility for parent and school audiences.

Advisor work product for the first year is a concept-to-standards mapping (Stage 1 five concepts to Common Core CS and CSTA K-12), a published white paper on the "kids who build with AI" category framing, and a public endorsement of the safety posture. Each is a concrete deliverable that shows up on the public research page.

The reason to close advisors on a calendar rather than opportunistically is the parent-facing claim posture. We will not make a pedagogical claim we cannot defend in front of an academic reviewer. Advisors are the check.

Stage 1 concept coverage

The five-concept set shipping today. Each row shows how we detect the concept now, how we will detect it in Stage 2, and what the parent reads in the weekly email.

| Concept | Stage 1 detection | Stage 2 detection | Parent-facing claim |
| --- | --- | --- | --- |
| State | Regex on useState | LLM classifier (Haiku) with confidence | Maya's apps can now remember things between clicks. |
| Effects | Regex on useEffect | LLM classifier, with dependency-array error detection | Maya's apps can now do things over time. |
| Events | Regex on onClick, onChange, onKeyDown, onSubmit and siblings | LLM classifier with event-propagation detection | Maya's apps now listen for clicks and keys. |
| Lists | Regex on Array.map inside JSX | LLM classifier with key-prop detection | Maya's apps now show many things at once. |
| Conditionals | Regex on ternary or short-circuit inside JSX | LLM classifier, handles if/else blocks above JSX return | Maya's apps now show different things in different moments. |

The Stage 2 concept set expands past the five shown here to include derived state, data flow (props and lifting), component composition, and async data fetching. The expanded set is gated on TODO-002 (LLM classifier).
