
Field Discovery for Digital Fluency

Fieldwork — provider visits and interview plan

Why this exists

The literature review (see pedagogy.md and research/summaries/) made one thing clear: the empirical evidence base on adult digital-fluency transfer is thin. There are no RCTs, no studies of how adults construct mental models of digital systems, and only mixed results on whether LLM tutors improve metacognition. What does exist — Urban Institute 2019, RAND 2024 pilot, ProLiteracy syntheses — leans heavily on provider interviews and program-level observations, not controlled trials.

That is a feature, not a bug. The people who actually know what works with low-fluency adult learners are the librarians, ABE instructors, workforce counselors, and senior-program coordinators who have been teaching digital skills for 20+ years. Their knowledge is largely undocumented in the academic literature. Touring those programs is the highest-leverage research move available to us before we lock the spec.

This document defines what we're trying to learn, who we should learn it from, what to ask, what to bring back, and how to convert the findings into product decisions.


The five questions we need to answer

Each question maps to a specific design choice that's currently underspecified in our docs. Field research must produce evidence that lets us make these choices, not just stories.

Q1. Where do adult learners actually fail?

The Urban Institute names "multi-step workflows" and "cross-application navigation" as failure modes, but at a level of generality too high to inform curriculum design. At the level of specific micro-tasks, where do real adult learners get stuck?

Examples of what we want to surface:

This is the content design question. We can't pick the right tasks for our curriculum without it.

Q2. What does the AI co-pilot actually need to do — and what should it never do?

Our pedagogy doc commits to specific moves (refuse-until-effort, explicit pattern naming, metacognitive debrief, contrasting cases). Provider experience is the reality check on these.

This is the AI co-pilot design question. The technical-approach doc commits to a specific intervention pattern that the field could either validate or reshape.

Q3. What does engagement look like over time, and what kills it?

Bastani et al. found that engagement (not problem volume) mediates learning gains, but their population was high-school students in a structured course. Adult learners self-select in and out continuously, which makes sustained engagement harder.

This is the engagement architecture question. It probably reshapes whether our product is purely individual or has a community/cohort layer.

Q4. Who actually shows up — and who doesn't?

The PIAAC statistics put 32M Americans at zero digital skills, but the population that actually walks into a library digital-skills class is a self-selected subset. Understanding the gap matters for distribution.

This is the distribution / market question. It directly informs the pitch's "first-cohort plan" gap.

Q5. What do good outcomes actually look like — and how do you know?

Providers have been doing this work for decades. They have an intuitive sense of which graduates "got it" and which didn't. We need to surface that tacit assessment.

This is the assessment design question. Our spec commits to a near/far transfer distinction; the field probably has practical, lower-cost proxies we should adopt.


Who to talk to

Tier the outreach. Start with the people closest to the work.

Tier 1 — Public libraries (the most universal venue)

Libraries are the dominant US infrastructure for free adult digital-skills training. Per the Urban Institute brief, "libraries have been engaged in computer trainings since the 1990s." This is the single most important provider type to study.

Specifically ask for:

Tier 2 — Adult Basic Education (ABE) and ESOL providers

The federal AEFLA-funded ABE network reaches adult learners who lack high school equivalency. Many integrate digital skills into literacy/numeracy work.

Tier 3 — Workforce development and senior programs

Tier 4 — Academic and research

Talk to the researchers who study this only after you've done the field interviews. Their input lands differently once you've seen the work yourself.


What to do at each visit

Before the visit

  1. Read their published materials. Most library systems publish their digital-literacy curriculum (it's often Northstar-aligned). Read it. Don't ask them basic questions about it during the visit.
  2. Send a short email. "I'm researching how to design a better digital-fluency learning platform. Would you have 30–60 minutes to talk about what you've learned teaching this for X years? I'm not selling anything, I just want to understand what works." Identify yourself plainly. Do not pitch the product.
  3. Bring a one-pager — what you're working on, in non-product-pitch language. They'll ask. Have it ready, but don't lead with it.

During the visit — the interview

Use a semi-structured interview approach. Have the five questions above as the spine, but follow what's interesting.

The most valuable question is "tell me about a learner who failed." Specific stories surface mechanism in a way general questions never do. Ask for two or three of these. Take notes.

The second most valuable question is "show me the materials you actually use." Often what providers say they do and what they actually do diverge. Worksheets, cheat sheets, screenshots they print out — these are gold.

The third most valuable observation is the room itself. What's on the walls? What's on the whiteboard? Is there a 1:1 tech-help desk, and what does the queue look like? Are learners using their own laptops or library desktops? What hardware and OS — Windows desktops, Chromebooks, iPads? Each of those choices encodes years of experience.

If they offer to let you sit in on a session, say yes. One observation session is worth five interviews. Take field notes during; don't participate.

After the visit

Write up notes within 24 hours. Format: see the template below. Do not skip this — memory degrades fast, and you'll be doing six or more of these visits.


Note-taking template

Save each visit as research/field-notes/YYYY-MM-DD-[provider-name].md.

# {{Provider name}} — {{date}}

**Where:** {{city, organization, branch}}
**Who:** {{names, roles, years in role}}
**Format:** {{interview / observation / both, duration}}
**Pre-read:** {{materials I read before going}}

## Top 3 takeaways
1. {{single most important thing I learned}}
2. {{...}}
3. {{...}}

## Failure stories
- {{Specific story of a learner who failed, why, what they tried}}
- {{...}}

## What works
- {{Practices the provider says actually work, with specifics}}
- {{...}}

## Surprises / counter-evidence
- {{Things that contradicted my assumptions before the visit}}
- {{...}}

## Quotes worth keeping
> "..."

## Materials they shared
- {{worksheets, screenshots, link to curriculum}}

## Implications for our design
- {{Specific product/spec/pedagogy implications — one bullet each}}

## Followups
- [ ] {{People they referred me to}}
- [ ] {{Questions I forgot to ask}}
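
The naming convention and template above lend themselves to a tiny scaffolding script, so every write-up starts from the same skeleton. Below is a minimal sketch in Python; it assumes the notes live in a repo with research/field-notes/ at its root, and the script name, the slugify helper, and the trimmed-down template string are illustrative rather than existing tooling.

```python
#!/usr/bin/env python3
"""Scaffold a field-notes file using the research/field-notes/YYYY-MM-DD-provider-name.md convention.

Usage: python new_field_note.py "Anytown Public Library"
(The script name and the example provider are hypothetical; adapt to your own repo layout.)
"""
import re
import sys
from datetime import date
from pathlib import Path

# Trimmed-down version of the note-taking template above.
TEMPLATE = """# {provider} — {today}

**Where:**
**Who:**
**Format:**
**Pre-read:**

## Top 3 takeaways
1.
2.
3.

## Failure stories
-

## What works
-

## Surprises / counter-evidence
-

## Quotes worth keeping
>

## Materials they shared
-

## Implications for our design
-

## Followups
- [ ]
"""


def slugify(name: str) -> str:
    """Lowercase the provider name and collapse non-alphanumeric runs into hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")


def main() -> None:
    if len(sys.argv) < 2:
        sys.exit("usage: new_field_note.py <provider name>")
    provider = sys.argv[1]
    today = date.today().isoformat()
    notes_dir = Path("research/field-notes")
    notes_dir.mkdir(parents=True, exist_ok=True)
    path = notes_dir / f"{today}-{slugify(provider)}.md"
    if path.exists():
        sys.exit(f"{path} already exists")  # never clobber notes you already wrote
    path.write_text(TEMPLATE.format(provider=provider, today=today), encoding="utf-8")
    print(f"created {path}")


if __name__ == "__main__":
    main()
```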

Sequencing — what order to do this in

You probably can't visit 15 providers, and you don't need to; aim for 6–8. Here's how to sequence them.

Phase 1 (weeks 1–2): Local immersion.

Goal: develop intuition. After this phase you should be able to predict what providers will say. That intuition is what makes the next phase efficient.

Phase 2 (weeks 3–5): Targeted depth.

Goal: pressure-test the patterns from Phase 1 against the most sophisticated providers. If your Phase 1 hypotheses survive contact with the people who have run the best programs, they're load-bearing.

Phase 3 (weeks 6–7): Researchers and synthesis.

Goal: convert field findings into concrete product changes. Researchers help you generalize what you saw without overgeneralizing.


What to bring back

Each visit should produce a field-notes/ markdown. The synthesis at the end of Phase 3 should produce:

  1. A revised list of failure modes — what real learners actually fail at, prioritized. This rewrites the spec's curriculum and assessment sections.
  2. A revised co-pilot intervention spec — what the AI should do, what it should never do, drawn from observed instructor behavior. This rewrites technical-approach.md's co-pilot section.
  3. An engagement-architecture decision — purely individual, cohort-based, or hybrid. This is a big spec change if it goes anywhere other than individual.
  4. A distribution-channel hypothesis — D2C / library / workforce / government, with the specific reasoning. This fills the scaffolded TODO in the pitch.
  5. A list of partner-provider candidates — 2–3 organizations that would plausibly host an MVP pilot. This is what makes the pitch's "first-cohort plan" concrete.

Budget and time

Realistic estimate: 6–8 weeks of part-time effort, ~30–60 hours total. Travel costs are minimal if you stay regional (most major cities have all three provider tiers represented). The single largest hidden cost is writing up notes within 24 hours of each visit — block calendar time for that.


What to skip


Success criteria for the program

You're done when:

  1. You can predict what a new provider will say in the first 10 minutes of an interview, with ~70% accuracy. (You've absorbed the field's tacit knowledge.)
  2. You have 2–3 concrete spec changes that you would not have made without the field research. (The research actually changed your design, rather than confirming it.)
  3. You have a credible sentence to put in the pitch that begins "We have visited and learned from N digital-skills providers including [names]." (Distribution credibility.)
  4. You have 2–3 named providers who would discuss hosting an MVP pilot. (First-cohort path.)

If you finish the program and none of those four are true, the field research has not done what it needed to do; extend it or redirect it.


How this connects back

Field findings update three docs: the spec (failure modes, curriculum, assessment), technical-approach.md (the co-pilot intervention section), and the pitch (distribution channel and first-cohort plan):

The plan we already have (pedagogy → technical → trim pitch & spec) assumes field research happens between drafting pedagogy and drafting technical-approach. That's the right insertion point. The pedagogy doc is grounded in the research literature; the technical-approach doc and the spec revisions should be grounded in the field.