Prompt-based, agentic workflows for scalable content production

Compliance-heavy agentic AI workflows

2024

system design / product design

CHATGPT / CLAUDE / FIGMA / NOTION


TL;DR

In a compliance-constrained tech education context, I architected a structured AI-assisted curriculum production system that used chained prompts, guardrails, and human approval stages to simulate agentic workflows before agent tooling was widely available. The system enabled scalable content production while preserving quality and accountability, resulting in ~70% faster course development and a 50% reduction in human effort.

Curious? Scroll down for the details…

context & user understanding

the pressure to ship vs constraints

A growing education company (Future Group/WildCodeSchool) faced increasing pressure
to ship high-quality, self-paced learning content under tight time and resource constraints.

Content designers were expected to:

  • Translate academic material into structured curricula

  • Adhere to strict pedagogical frameworks and learning outcomes

  • Stay within hard constraints (e.g. 50-hour content caps)

  • Produce exercises, quizzes, and materials at “real-world” quality

This resulted in high cognitive load, repeated manual work, and a growing risk of inconsistency and burnout — especially for designers juggling multiple roles.

The challenge was not UI, but how to scale expert decision-making without sacrificing quality.

Design Intent

keep the human expert in the loop

Instead of automating decisions blindly, the goal was to design an AI-assisted system where:

  • Human designers remained the subject-matter experts

  • AI accelerated processing, structuring, and iteration

  • Quality, compliance, and intent were preserved through explicit guardrails

"

The system needed to behave less like a generator and more like a guided assistant embedded in a workflow.

solution

The Agentic Flow

The solution took the form of a prompt-based, multi-step workflow, where AI acted as a constrained agent within a clearly defined process.

Contextual AI processing

Designers selected a prompt template based on their stage in the content journey (e.g. curriculum planning, lesson drafting, exercise creation).

The AI was used to:

  • Break down high-level learning goals into structured curricula

  • Distill large academic texts based on explicit criteria

  • Generate synthetic datasets for exercises

  • Produce quizzes aligned with learning material

  • Rewrite and format content to match tone, language, and pedagogical standards

"

The AI never moved the system forward on its own.

Sequential validation & refinement

Rather than one large prompt, the system used chained prompts:

  • Each step required human review and approval

  • Outputs only progressed once validated

  • Later steps (tone, language polish) were intentionally separated from structural decisions

Context persisted through human-controlled progression, not automated state.


Human-led framing

Designers first:

  • Collected learning design guidelines

  • Defined structural and quantitative constraints

  • Selected and verified academic sources

This ensured intent and authority stayed human-owned.
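The approval-gated, chained flow described above can be sketched in code. This is an illustrative sketch only, assuming a hypothetical `run_model` stand-in for a ChatGPT or Claude call and a human `approve` callback; it is not the actual tooling used in the project.

```python
# Minimal sketch of a chained-prompt pipeline with mandatory human approval
# gates. All names here (Step, run_model, approve) are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str              # e.g. "curriculum planning", "lesson drafting"
    prompt_template: str   # template scoped to this stage only
    approved: bool = False

def run_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to ChatGPT or Claude."""
    return f"[draft for: {prompt}]"

def run_pipeline(steps: list[Step], context: str,
                 approve: Callable[[str, str], bool]) -> list[str]:
    """Run chained steps; each output advances only after human approval."""
    outputs = []
    for step in steps:
        draft = run_model(step.prompt_template.format(context=context))
        # Guardrail: the AI never moves the system forward on its own.
        if not approve(step.name, draft):
            raise RuntimeError(f"Step '{step.name}' rejected; pipeline halted")
        step.approved = True
        outputs.append(draft)
        context = draft  # context persists via human-controlled progression
    return outputs
```

The key design property is that context is threaded through approved outputs only, so a rejected step halts everything downstream instead of silently propagating a bad draft.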


development & iteration

Prompt & Interaction Design

Through testing, the workflow evolved from single, multi-layered prompts (hard to control, error-prone) to modular, chained prompts aligned with specific decisions.

This significantly improved reliability and traceability.

"

Context persisted through human-controlled progression, not automated state.

Before: single, multi-layered prompts (hard to control, error-prone)

After: modular, chained prompts (aligned with specific decisions)

key design decisions

  • Explicit role definition for the AI

  • Clear audience and pedagogical framing

  • Batched constraints where interdependencies mattered (e.g. lesson duration + learning outcomes)

  • Prompt templates informed by NNG, Google, and Perplexity frameworks — adapted to real user needs
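One way to encode those design decisions is a reusable template that fixes the AI's role and audience up front and batches interdependent constraints together. The wording and parameters below are hypothetical, offered as a sketch rather than the project's actual templates.

```python
# Illustrative prompt template: explicit role, clear audience framing, and
# batched interdependent constraints (duration + learning outcomes).
# All wording and parameter names are hypothetical.
LESSON_PROMPT = """\
Role: You are an instructional designer for a vocational tech school.
Audience: Career-changing adult learners with no prior coding experience.

Task: Draft a lesson outline for the topic below.
Topic: {topic}

Constraints (apply all together, since they interact):
- Lesson duration: at most {max_minutes} minutes.
- Learning outcomes: exactly {n_outcomes}, each observable and assessable.
- The outcomes must be achievable within the stated duration.
"""

def build_lesson_prompt(topic: str, max_minutes: int, n_outcomes: int) -> str:
    # Duration and outcome count are filled in one prompt rather than split
    # across steps, because they constrain each other.
    return LESSON_PROMPT.format(
        topic=topic, max_minutes=max_minutes, n_outcomes=n_outcomes)
```

Keeping role and audience in the template, and only the topic and numeric constraints as parameters, is what makes the output predictable across many designers and many lessons.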


Guardrails & Failure Handling

Predictability Over Novelty

Several risks were actively designed for, so that the system favoured predictability over novelty.

Hallucinations

  • Discovered via testing that synthetic data generation failed beyond ~50 entries

  • Introduced hard constraints into prompt workflows

Overconfidence (human + AI)

  • Mandatory human approval gates

  • Explicit workflow protocols to prevent blind trust

Model drift

  • Regular control tests to ensure prompts continued to behave predictably

  • One conversation (chat) per goal (prompt)
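Two of these guardrails lend themselves to a code sketch: the hard batch cap discovered in testing (synthetic data degraded beyond roughly 50 entries) and a crude control test for drift. The cap value and check logic below are assumptions based on those findings, not the project's actual tooling.

```python
# Illustrative guardrails. MAX_SYNTHETIC_ROWS reflects the ~50-entry limit
# found in testing; the drift check is a deliberately simple sketch.
MAX_SYNTHETIC_ROWS = 50  # beyond ~50 entries, generation became unreliable

def plan_batches(total_rows: int) -> list[int]:
    """Split a synthetic-data request into batches under the hard cap."""
    if total_rows <= 0:
        raise ValueError("total_rows must be positive")
    batches = []
    remaining = total_rows
    while remaining > 0:
        batches.append(min(remaining, MAX_SYNTHETIC_ROWS))
        remaining -= batches[-1]
    return batches

def control_test(current_output: str, reference_output: str,
                 required_phrases: list[str]) -> bool:
    """Crude drift check: the prompt should keep producing outputs that
    contain the same required phrases as a saved reference run."""
    return all(p in current_output and p in reference_output
               for p in required_phrases)
```

In practice the cap belongs in the workflow protocol, not just the prompt text, so that no single request can exceed the limit even if a designer forgets the constraint.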

outcomes

Better, Faster, Stronger:

Amplifying Expertise

The AI did not replace expertise — it amplified it.

  • ~70% faster course development

  • ~50% fewer human resources required

  • Quality maintained through SME reviews

  • Significantly reduced cognitive load for designers

key learnings

Discipline > enthusiasm

  • Agentic systems require discipline, not automation enthusiasm

  • Chained workflows are easier to debug, trust, and scale than monolithic prompts

  • Designing AI interactions sharpens thinking around constraints, responsibility, and clear communication

  • Human systems are inherently non-deterministic — check-ins and collaboration matter more than perfect documentation


2026 all rights reserved

Made with love by yours truly
