Answer to Question 1 (dependence on which kind of phonics is being stopped / replaced)
You’re right: my earlier plan implicitly treated “explicit phonics” as a single, uniform dose, and “replacement” as categorically non-phonics. If the optimal stop point depends on what kind of phonics you stop—and whether the replacement is actually another form of code instruction—then the methodology has to shift from a simple dose reduction study to a component-and-match study.
How I’d change the investigation
1) Build an “instruction taxonomy” and measure fidelity
Instead of logging only minutes/day, I’d code instruction along dimensions such as:
- Type: synthetic (grapheme→phoneme blending), analytic (pattern discovery), analogy/onset-rime, word sorts, etc.
- Unit of instruction: phoneme-grapheme, rime units, syllable types, morphology (prefix/suffix/root), etymology/orthographic conventions.
- Teaching moves: explicit modeling, guided practice, error correction style, cumulative review, mixed practice.
- Content match: whether instruction targets the learner’s current error patterns (e.g., vowel confusions vs multisyllable segmentation).
Data needed: structured observation rubrics + audio/video samples for a subset to quantify quality and consistency (inter-rater reliability).
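To make the taxonomy and reliability check concrete, here is a minimal sketch in Python. The record fields and category labels are illustrative (not a validated rubric), and agreement is computed with Cohen's kappa, a standard chance-corrected statistic for two raters' categorical codes:

```python
from dataclasses import dataclass, field
from collections import Counter

# Hypothetical observation record for one lesson segment; field names are illustrative.
@dataclass
class InstructionSegment:
    minutes: float
    phonics_type: str                    # "synthetic", "analytic", "analogy", "word_sorts", ...
    unit: str                            # "gpc", "rime", "syllable", "morphology", ...
    teaching_moves: list = field(default_factory=list)
    targets_error_pattern: bool = False  # content match to the learner's current errors

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical codes."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / n**2
    return (observed - expected) / (1 - expected)

# Two raters coding the same five segments (toy data)
a = ["synthetic", "synthetic", "analytic", "analogy", "synthetic"]
b = ["synthetic", "analytic",  "analytic", "analogy", "synthetic"]
print(round(cohens_kappa(a, b), 3))  # → 0.688
```

In practice you would code a stratified subsample of the audio/video and pre-register the kappa threshold (e.g., ≥ .80) required before the fidelity data enter the analysis.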
2) Redefine “stopping” as switching modes, not just reallocating time
Rather than STOP vs CONTINUE, the decision becomes: WHEN to transition from Mode A → Mode B (and which B).
Example modes:
- A: basic GPC blending + controlled decodables
- B1: advanced code (vowel teams, multisyllable routines)
- B2: morphology-focused decoding/spelling (still code-based, but different grain size)
- B3: fluency/comprehension emphasis with minimal code review
So the question becomes: Which transition, at which mastery point, yields the best outcomes for which learners?
3) Use a multi-arm design (or a SMART) rather than a single threshold RCT
At the “ready” moment, randomize among different transitions, not just stop vs continue. For example:
- Arm 1: continue same phonics approach (synthetic)
- Arm 2: switch to word sorts / variability training emphasizing flexibility
- Arm 3: switch to morphology + syllable division routines
- Arm 4: reduce code instruction and increase fluency/comprehension
A SMART design (Sequential Multiple Assignment Randomized Trial) fits well because kids who don’t respond after switching can be re-randomized to a different approach.
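The two-stage assignment logic can be sketched as follows. Arm names, the response criterion, and the seeding scheme are all placeholders, not a real trial engine; a production trial would use a proper randomization service with allocation concealment:

```python
import random

# Hypothetical stage-1 transition arms (labels are illustrative)
ARMS_STAGE1 = ["continue_synthetic", "word_sorts", "morphology", "fluency_comprehension"]

def smart_assign(student_id, responded=None, stage1_arm=None, seed=0):
    """Sketch of a two-stage SMART assignment.
    Stage 1: randomize each 'ready' student across transition arms.
    Stage 2: responders stay the course; non-responders are re-randomized
    among the arms they have not yet tried."""
    rng = random.Random(f"{student_id}-{seed}")  # deterministic per student
    if stage1_arm is None:
        return rng.choice(ARMS_STAGE1)
    if responded:
        return stage1_arm
    remaining = [arm for arm in ARMS_STAGE1 if arm != stage1_arm]
    return rng.choice(remaining)

first = smart_assign(7)
print(first)                                          # stage-1 arm
print(smart_assign(7, responded=False, stage1_arm=first))  # stage-2 re-randomization
```

The key analytic payoff is that embedded adaptive strategies ("start with B2; if no response, switch to B1") can be compared directly, which a single-threshold stop/continue RCT cannot do.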
4) Treat “match” as a hypothesis: learner profile × instruction type
I’d pre-specify moderators to test interaction effects, e.g.:
- baseline phonemic awareness
- error profile (vowel errors, guessing, slow blending, multisyllable breakdown)
- working memory/attention indicators (if available)
- dyslexia risk markers
- language background
Outcome comparisons would then answer the more realistic question: who should stop which kind of phonics, and what should replace it?
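The simplest version of such an interaction test is a difference-in-differences contrast: does the benefit of one arm over another differ between learners high vs low on a pre-specified moderator? A stdlib-only sketch, with a hypothetical record schema and toy numbers (in a real analysis this would be an interaction term in a mixed model with clustering by classroom):

```python
from statistics import mean

def interaction_contrast(records, arm_a, arm_b, moderator):
    """Difference-in-differences sketch for a profile x instruction interaction.
    records: dicts with 'arm', 'outcome', and a boolean moderator flag
    (hypothetical schema)."""
    def avg(arm, flag):
        return mean(r["outcome"] for r in records
                    if r["arm"] == arm and r[moderator] == flag)
    effect_when_true = avg(arm_a, True) - avg(arm_b, True)
    effect_when_false = avg(arm_a, False) - avg(arm_b, False)
    # Nonzero contrast suggests the arm's benefit depends on the learner profile
    return effect_when_true - effect_when_false

# Toy data: morphology helps more when baseline PA is low (illustrative numbers only)
data = [
    {"arm": "morphology", "low_pa": True,  "outcome": 8},
    {"arm": "morphology", "low_pa": False, "outcome": 6},
    {"arm": "continue",   "low_pa": True,  "outcome": 4},
    {"arm": "continue",   "low_pa": False, "outcome": 6},
]
print(interaction_contrast(data, "morphology", "continue", "low_pa"))  # → 4
```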
5) Expand outcomes beyond “reading got better” to include mechanism-level changes
If the replacement is “morphology,” I’d track:
- multisyllabic decoding accuracy/latency
- morphological spelling/meaning transfer
- decoding of derived forms (e.g., heal→health, sign→signal)
That prevents falsely concluding “phonics can stop” when the student is actually benefiting from code instruction at a different grain size.
Answer to Question 2 (measuring the “self-teaching mechanism” / orthographic mapping efficiency)
Share’s Self-Teaching Hypothesis implies that exit criteria shouldn’t just reflect current decoding accuracy, but whether the child can use decoding to teach themselves new words through independent encounters. So I’d add direct measures of orthographic learning rate and retention from context.
Share’s Self-Teaching Hypothesis implies that exit criteria should reflect not just current decoding accuracy but also whether the child can use decoding to teach themselves new words through independent encounters. So I’d add direct measures of orthographic learning rate and retention from context.
How I’d modify the investigation
1) Add a “novel word self-teaching” task (experimental orthographic learning paradigm)
Design:
- Create short passages/stories containing novel words (pseudowords or rare real words) that are decodable but not previously taught.
- Ensure the meaning is inferable from context (so it mimics real reading).
- Control exposures: each target appears, say, 2, 4, 6, 8 times across texts.
Measures (immediate + delayed, e.g., 24 hours / 1 week):
- Reading accuracy + speed for those novel words in isolation
- Spelling of the novel words (strong test of orthographic mapping)
- Orthographic choice (pick correct spelling among foils: dain/dane/dein)
- Recognition/lexical decision (is this the word you saw?)
- Optional: meaning recall (to ensure it’s not rote visual memory only)
Key outcome: exposures-to-criterion (how many encounters needed to become accurate/automatic and retain).
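Scoring that outcome from probe data can be sketched like this. The criterion rule and the latency cutoff are placeholder assumptions (a real study would set them from normative data and require the criterion to hold at the delayed retest as well):

```python
def exposures_to_criterion(trials, accuracy_criterion=1.0, max_latency_ms=1500):
    """For one novel word: the first exposure count at which the child reads
    the word accurately AND within a latency cutoff (illustrative rule).
    trials: list of (exposures_so_far, correct_0_or_1, latency_ms) per probe.
    Thresholds are placeholders, not validated values."""
    for exposures, correct, latency in sorted(trials):
        if correct >= accuracy_criterion and latency <= max_latency_ms:
            return exposures
    return None  # criterion not reached within the exposure schedule

# Probes after 2, 4, 6, 8 exposures for one target pseudoword (toy data)
probes = [(2, 0, 2400), (4, 1, 1900), (6, 1, 1300), (8, 1, 1100)]
print(exposures_to_criterion(probes))  # → 6
```

Averaging this value over targets (and separating immediate from delayed probes) gives a per-child orthographic learning rate that can be compared across arms.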
2) Measure transfer to new complexity, not just mastered patterns
Include targets that require:
- multi-syllable decoding routines
- variable vowel pronunciations / “flexibility” demands
- morphologically complex forms (e.g., a novel root + familiar affix)
This tests whether the self-teaching mechanism works when the spelling system is less one-to-one.
3) Track “independent orthographic learning” in a semi-naturalistic way
For a week or two:
- Give the child short independent reading with embedded novel targets.
- Do no pre-teaching of those targets.
- Then probe whether the words were learned (reading/spelling/recognition).
This is closer to the real exit question: Will the child keep building word knowledge without explicit code lessons?
4) Integrate these measures into the stop/transition experiment
Instead of only asking, “Did decoding scores stay high after stopping?”, I’d ask:
Instead of asking only whether decoding scores stay high after stopping, I’d ask:
- Does the STOP/SHIFT condition preserve or improve orthographic learning rate compared to CONTINUE?
- Do some transitions (e.g., morphology-focused) increase self-teaching efficiency for complex words?
5) Use self-teaching efficiency as part of the exit rule
An evidence-based exit criterion might become something like:
- adequate decoding accuracy/fluency plus
- adequate orthographic learning rate (e.g., learns and retains novel decodable words after ≤X exposures)
That directly operationalizes Share’s mechanism.
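A compound exit rule of that kind could be operationalized roughly as below. Every threshold here (accuracy cutoff, words-correct-per-minute, exposure budget) is a placeholder standing in for the "≤X" values the study itself would estimate; nothing is empirically validated:

```python
def exit_decision(decoding_acc, wcpm, exposures_needed, retained_after_week,
                  acc_cut=0.95, wcpm_cut=60, exposure_cut=4):
    """Sketch of a two-part exit rule (all thresholds are placeholders):
    exit basic phonics only if decoding is strong AND self-teaching is
    efficient; otherwise recommend a transition, not a stop."""
    decoding_ok = decoding_acc >= acc_cut and wcpm >= wcpm_cut
    self_teaching_ok = (exposures_needed is not None
                        and exposures_needed <= exposure_cut
                        and retained_after_week)
    if decoding_ok and self_teaching_ok:
        return "exit_basic_phonics"
    if decoding_ok and not self_teaching_ok:
        return "transition_to_advanced_code"  # e.g., morphology / multisyllable routines
    return "continue_basic_phonics"

print(exit_decision(0.97, 72, 3, True))   # strong decoding + efficient self-teaching
print(exit_decision(0.97, 72, 9, True))   # strong decoding, weak self-teaching
```

Note the middle branch: strong decoding with weak self-teaching routes to a different code manifestation rather than to stopping, which is exactly the distinction the single-threshold framing misses.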
What this does to the original question
These changes would likely shift the “answer” away from a single stop point and toward a profiled transition rule:
- Stop/reduce basic phoneme-level phonics when decoding is accurate/automatic and the learner shows efficient self-teaching on novel words;
- but continue/switch to a different code manifestation (multisyllable routines, morphology, flexibility training) if self-teaching is weak or breaks at higher complexity.
If you tell me the age/grade band (K–1 vs 2–3 vs older intervention), I can outline a concrete novel-word task that’s developmentally appropriate and doesn’t take more than ~10–15 minutes to administer.