Instructional Design Writing: When to Use AI, When to Resist

February 25, 2026
IDIODC #2164 with Connie Malamed

Your team is probably using AI to write learning content. But does it actually make the content better? AI hallucinations and telltale text patterns can be spotted by anyone paying attention. Connie Malamed joins IDIODC to share what she's learned from experimenting with AI writing tools.

We discuss the fingerprints AI leaves in your content: overused sentence structures and paragraphs that sound impressive but say nothing. You'll learn about specialized AI research tools that beat ChatGPT at finding credible sources, and how to use NotebookLM to prepare for SME interviews. Connie shares when to use AI and when to fight through your own messy first draft, because sometimes the struggle is what keeps your writing sharp and your content authentic.

Transcript: Instructional Design Writing with AI Assistance

Welcome and Introductions

Chris Van Wingerden welcomes viewers to Instructional Designers in Offices Drinking Coffee and notes that the show is brought to them by dominKnow, which helps L&D teams develop, manage, and deliver impactful learning content at scale. He introduces guest Connie Malamed and asks her to share a bit about her background and experience in the learning field.

Connie explains that she has been in the field of learning and instructional design for many years, writes The eLearning Coach blog and podcast, and runs the Mastering Instructional Design community. She mentions her client work history, a brief stint in management, and her strong interest in visual design and writing as central parts of her professional focus.

Why Talk About AI and Writing

Chris explains that the episode will focus on how Connie has been using AI to support different writing tasks in the instructional design space. He asks when she started experimenting with AI for writing and what her early use cases looked like.

Connie shares that she initially used AI tools for simple tasks like clarifying awkward sentences, rephrasing complex wording, and using tools such as Grammarly for proofreading grammar and punctuation. For this session and her more recent work, she became curious about the cognitive activities involved in writing and wanted to understand where AI can genuinely support those activities and where human discernment remains essential.

The Cognitive Model of Writing

Connie introduces the Flower and Hayes cognitive model of writing from 1981 as a useful framework for understanding how writers actually think while composing text. She notes that the model shows writing is not a simple linear sequence of planning, writing, and revising but a recursive set of thinking processes where writers constantly move back and forth between planning, translating, and reviewing.

She explains that the model describes writing as a set of distinctive thinking processes that writers orchestrate during composition. The planning process involves forming an internal representation of ideas before using language, which she finds fascinating because it suggests a kind of thought without explicit language. Translating involves transforming those abstract ideas into linear written syntax, and reviewing focuses on evaluating and improving the draft. A monitoring process oversees when writers switch between these phases.

Chris and Paul respond that this model resonates with their own experiences of drafting and revising long academic or professional texts, where first drafts are often rough and later iterations refine structure and clarity. Connie adds that when she writes research-based articles she often moves repeatedly between planning and rewriting because understanding what she truly wants to say is one of the hardest parts of writing.

Planning: Generating, Organizing, and Goal Setting

Connie drills into the planning component and notes three key subprocesses: generating ideas by retrieving relevant information from long-term memory, organizing and structuring those ideas, and setting goals and sub-goals for the writing task. She observes that in her work she not only retrieves ideas from memory but also generates ideas from research, which is common for many instructional designers.

She clarifies that during planning the writer is not yet working directly with language, but rather shaping concepts and intentions. Under translating, related cognitive activities include managing simultaneous constraints, relying on automaticity so that grammar and syntax flow without conscious effort, and transforming or reorganizing knowledge. During reviewing, writers judge the quality of the text against their goals and modify the content based on that evaluation.

Paul comments that different phases of the model match his own habit of jotting down outlines and fragments, then later focusing on how to articulate and refine them for clarity. He describes needing multiple passes before a piece becomes good, while the first draft may only make sense to him. Chris notes that the model captures the reality that writers are always revisiting earlier decisions and that planning, translating, and reviewing overlap in practice.

Where AI Can Support the Writing Process

Connie then turns to the question of where AI can support specific subprocesses in planning, translating, and reviewing. She explains that AI can serve as an external long-term memory by retrieving facts, summarizing research, and suggesting related concepts when given a prompt. For planning, AI can help generate ideas, propose topic angles, and surface connections that may not be obvious from memory alone.

In organizing, she notes that AI can generate structured outlines, cluster disorganized notes, and suggest a logical order for presenting content. Goal setting is another area where AI can assist, by analyzing a writing prompt and proposing specific sub-goals such as addressing a beginner audience or emphasizing a particular outcome. She emphasizes that any AI support must be used with discernment, since the quality of suggestions varies and the AI does not truly understand audience or context.

Paul shares an example of a colleague who used AI to better understand a technical topic before meeting with a subject matter expert. By having AI summarize concepts and propose outlines, he could enter the SME conversation better prepared, keep the meeting focused, and avoid the common situation where an SME tries to include everything. Connie agrees that AI can effectively support both long-term and working memory, as long as the designer checks for hallucinations and verifies that facts actually appear in the referenced sources.

Using AI for Structuring and Outlining

Connie and Paul discuss how AI can help instructional designers structure ideas by generating outlines or reorganizing notes into a more coherent flow. Paul notes that people often default to speed and accept AI output without sufficient checking, but using AI in discrete steps, such as generating an outline first, encourages critical review and collaboration with the tool instead of blind reliance.

They highlight that asking AI to provide a proposed structure can prompt designers to ask whether anything important is missing, whether the sequence suits their audience, and how to adapt the structure to their needs. Chris relays a comment from a viewer who starts with AI drafted text and then iteratively critiques and prompts revisions, finding that this process deepens understanding of the material even though it can feel tedious.

Connie observes that AI is best treated as a catalyst rather than a final author. It can get you started and surface possibilities, but every sentence still requires human discernment since AI does not possess experience, emotions, or a nuanced understanding of learners. She connects this back to the Flower and Hayes model, suggesting that using AI effectively means aligning it to different phases rather than collapsing the entire process into a single prompt and paste action.

Concerns about Agency and Writing Skill

When the discussion turns to translating, Connie shows a caution symbol and explains that she personally avoids having AI translate her abstract thoughts directly into full paragraphs. She worries about loss of human agency and erosion of writing skills if designers outsource too much of this core thinking process to AI. Writing skills are critical in instructional design, and she prefers to maintain control over how ideas are expressed.

Paul shares concerns from professors he knows who receive student work clearly written by AI, where students cannot explain the ideas or reasoning in their own assignments. He notes that when people rely entirely on AI to handle translating thoughts into text, they are no longer practicing the cognitive work that builds skill. He compares this to getting out of shape after not writing for a while, when it becomes harder to get back into strong writing form.

Chris points out that in L&D contexts, discernment is multi-layered. Designers must review AI-supported content, but SMEs and other stakeholders also need to apply their own judgment to ensure accuracy and avoid risks. Connie agrees that spelling and grammar tools are helpful, but she sees too many online pieces where words are spelled correctly yet are the wrong words in context, indicating that no one truly proofread the content. This reflects a mindset that AI always knows better, instead of treating it as a tool under human control.

AI Fingerprints in Writing

Connie introduces the idea of AI fingerprints, patterns that reveal AI involvement in text. She mentions the noticeable surge in em dash usage in AI generated writing and jokes that this makes her sad because she loves em dashes herself. Another strong fingerprint she dislikes is the sentence structure that contrasts two statements in the form “it is not X, it is Y,” which she now sees everywhere in AI influenced content.

She explains that this device can be useful occasionally but becomes distracting and artificial when overused. Paul adds that many AI generated articles feel substantial on the surface yet leave readers with no real learning or insight, like consuming something that has appealing flavor but no nutritional value. Chris describes reading a piece that used ornate vocabulary and a flattened emotional tone, which made him suspect heavy AI use and caused him to question the authenticity of the author’s voice.

Connie shares an example of a travel video about Portugal where the visuals and voiceover were generated by AI. The script sounded polished but meaningless, filled with generic positive statements that became unintentional comedy for her and her husband. She notes that no one seemed to have edited the script, even though the video drew many views. This leads Paul back to the importance of discernment across the population, asking whether people are critically evaluating such content or simply consuming superficial material that does not require thought.

Beyond ChatGPT: Academic Research Tools

Paul asks Connie about AI tools beyond the well-known chatbots that she finds useful for research and writing. Connie highlights several academic research assistants that focus on scholarly sources rather than general web content. She mentions tools like Consensus, Elicit, and Undermind, which retrieve research studies, summarize findings, and sometimes prompt users to clarify their research question as if forming a hypothesis.

She explains that these tools rely on open-access research or abstracts and can surface more rigorous evidence than generic language models, which often respond with popular articles from outlets such as Forbes. Connie notes that you can ask these tools what the research says about topics like cognitive activities in writing and receive responses grounded in academic literature. She also shares practical workarounds designers can use when articles are behind paywalls, such as emailing authors, searching for pre-publication versions, or using Google Scholar's "all versions" feature.

Paul connects these tools to the idea of governed content, where AI works over curated, high quality corpora instead of the entire internet. He notes that similar patterns appear in health focused AI tools that rely on peer reviewed research for nutrition or medical information. Connie cautions, however, that even academic research demands discernment, since not all studies are strong and some may rely on tiny samples or weak designs. She advises designers to go beyond abstracts when possible and to evaluate whether a study’s methods and context support the conclusions.

Using NotebookLM for Learning and SME Prep

Chris then asks Connie about NotebookLM, a lesser known Google tool she has referenced. Connie explains that NotebookLM is designed as a learning assistant where you upload sources such as research papers or chapters, and the tool generates infographics, quizzes, flashcards, and even podcast style audio conversations that discuss the material. She recounts an experiment where she uploaded a chapter from one of her books and a medical article from the Mayo Clinic.

In her test, NotebookLM handled the medical content accurately but struggled with nuance when summarizing her own chapter, producing a podcast that misunderstood some of her points. She notes that the tool has likely improved since that early test, but the story illustrates why designers must still verify nuance and meaning. One feature she finds helpful is the ability to click on findings in a summary and see the highlighted portion of the original document, which supports transparency and verification.

Connie describes how designers can select specific sources within NotebookLM when generating reports or summaries, letting them control which documents influence the output. She notes that she tried to use NotebookLM to recreate a cleaner version of the Flower and Hayes diagram from her slide deck, but the tool kept altering the structure. Pressed for time, she ultimately used a screen capture of the original. Paul responds by noting dominKnow has an upcoming webinar on graphic design and creating visuals intentionally rather than relying on AI to guess.

Working with AI Stepwise Instead of All at Once

Paul and Connie reflect on the importance of breaking AI use into steps that align with the cognitive model rather than asking for a complete final draft from a single prompt. Paul notes that when people dump everything into AI and expect a finished piece, they often end up with something that looks good but fails to meet deeper goals. A stepwise approach, such as using AI first for idea generation, then for structuring, then for limited phrasing support, encourages collaboration and human oversight.

Chris reads a viewer comment about using AI to practice job applications by feeding in resumes and job descriptions and then iterating on suggestions. Paul observes that such workflows can boost confidence by providing language and structure to react to, while still requiring the human to make final decisions. Connie agrees that this aligns with the notion of discernment, where AI sparks thinking but the human remains the decision maker.

They reiterate that using AI as a partner in planning, translating, and reviewing can save time and improve quality as long as designers avoid turning the tool into the sole author. Connie notes that even if AI support does not save time in every case, it can still be valuable if it helps produce better writing while keeping the designer actively engaged in the cognitive work.

Discernment, Research Quality, and Evidence Use

The conversation returns to the theme of discernment in handling research. Connie emphasizes that research tools providing academic studies do not remove the need for critical thinking. Designers must still ask whether sample sizes are sufficient, whether methods are sound, and whether conclusions logically follow from the data. She warns that relying on abstracts alone is risky, since abstracts may not expose limitations or confounds.

Paul recalls that in academic publishing, some journals have stronger standards than others, and not all publications undergo the same level of rigorous peer review. He notes that this variability is another reason designers must read beyond surface summaries when evidence informs high stakes decisions in learning programs. Connie underscores that instructional designers who work with research supported approaches need to understand these nuances to avoid misrepresenting what evidence actually shows.

Closing Thoughts and Call to Action

As the episode winds down, Chris thanks Connie for the rich conversation and notes that the discussion has provided many practical nuggets for instructional designers learning to work with AI more thoughtfully. He reminds viewers that IDIODC is brought to them by dominKnow, which supports L&D teams in creating and delivering impactful learning at scale, and that episodes are available live every second Wednesday as well as on video and audio podcast platforms.

He jokes that the show is hosted by three living people rather than an AI-generated panel and encourages the audience to continue exploring past episodes in the back catalog. Connie thanks the hosts and audience, and the group signs off with a light-hearted remark about dancing out of the episode, in keeping with the show's informal, coffee-break tone.

Also available on Apple Podcasts and Spotify.


--------------

Connie Malamed is a learning experience design consultant, author, and the creator of The eLearning Coach. She helps learning teams design evidence-informed, visually clear learning experiences that people actually understand and remember.

She has written two influential books on visual design for learning and runs Mastering Instructional Design, a community for learning professionals who want to deepen their craft and stay current in a changing L&D landscape.