Responsible Learning Design with AI

As AI tools advance, so do our responsibilities in using them. AI can help us design impactful learning experiences more efficiently, but it can also mislead and marginalise learners. This post explores important ethical considerations for designing learning with AI.



Check and judge AI-generated content


You have a responsibility to your learners to design relevant and accurate learning experiences. So, you should always check and judge the quality of AI-generated content.


Imagine if you saw a colleague search Google, go to the first page and copy and paste the text into a handout for learners without even reading it. Asking an AI tool to generate text for learners without checking and editing it is essentially the same. 


Generative AI tools don’t ‘understand’ the world; instead, they generate content based on their training data, which is largely from the internet. They also don’t ‘know’ the correct response to your request. These tools work by paying attention to key parts of your prompt and then predicting the best response based on what they have been trained on. In essence, they're giving you their “best guess”.


Generative AI tools commonly ‘hallucinate’, responding with an answer that appears correct but isn’t actually true. So, it’s important to verify all AI-generated content, especially facts, figures, quotes and citations. You can ask the tool to include citations and/or links in its answer, which you can then check.


The number one rule: Always check and judge AI content.


Reduce bias and promote inclusion


AI systems are designed by people and trained on data that may be inherently biased. Humans choose the data and design the algorithms used to train AI systems. Therefore, AI can be influenced by the biases of its creators. The data used to train generative AI models, like ChatGPT, is largely from the internet and Western culture, potentially promoting the worldviews and beliefs of certain populations. AI tools may reproduce these biases in their outputs, reinforcing stereotypes and narrowing our access to diverse perspectives.


For example, image generators have been shown to generate more images of Caucasians or create characters that represent stereotypes. Research has also demonstrated that ChatGPT replicates gender bias when crafting recommendation letters. When such biases permeate our learning resources, we could unknowingly influence our learners by presenting a skewed view of the world.


There are ways you can reduce this bias and promote inclusion when using AI. Firstly, provide context and information about your learners (e.g. their demographic, needs and challenges) to generate more relevant and representative content. During scoping and research, try to use a variety of sources. You can prompt AI to do this, but you may want to complement any research done by AI with your own. When generating content, check that it includes a diverse range of examples.


As well as considering bias inherent in AI tools, reflect on your own biases, and those of your project teams, SMEs and even learners. You should also seek others’ opinions, especially your learners — don’t just rely on AI. 


5 tips to be more inclusive with Generative AI


Chatbots/AI text generators 


  • Give it the project context, outcomes and learner info at the start.

  • Generate examples/scenarios aligned with your outcomes and learners. 

  • Ask it to take a learner’s perspective to review your content.


Image generators


  • Create images that include a diverse array of people.


Video/audio generators


  • Use diverse characters and voices that are representative of your learners.


I’ve included some links to resources about inclusive learning design at the bottom of the page.


Be mindful of data privacy


Suppose you've been using a public AI tool, like ChatGPT, to draft emails. By feeding it personal data, you’re unknowingly creating a personal profile that others could get access to. Or, maybe you’ve uploaded a client’s internal materials into an AI tool without their permission. These AI tools may use this data to train their models, or the data could fall into the wrong hands if there’s a data breach.


Avoid putting private or sensitive information into public AI tools, such as personal information, internal client training materials and private company communications, as you don’t have control over how the data is stored and used — check a tool’s Privacy Policy to find out what happens with your data.


Some AI tools, such as ChatGPT Plus, allow you to disable model training, and some offer enterprise versions with enhanced data control and protection, such as ChatGPT Enterprise or Microsoft Copilot.


There are instances where data privacy is less of a concern. For example, some information may already be publicly available, like the history or values of an organisation. Or you can give general information about learners, such as their goals and challenges, without providing personal information, like names or contact details.


When using public AI tools with learner or client data, ask yourself: ‘Would they be OK with me putting this information into the AI tool?’ 


Minimise plagiarism


When designing resources yourself you probably know to cite others’ work, but what about when generating content with AI?


Generally the risk of plagiarism is low, especially for common, broad topics. AI text generators produce answers one token at a time (tokens are pieces of words) from a vast interconnected network of data, so most outputs will be a blend of knowledge.


But, the likelihood of plagiarism increases when you ask for specific information, such as theories, quotes and figures, or highly unique topics where responses might closely resemble specific sources it's been trained on.


Putting others’ work into an AI tool and using it to generate new content could be seen as plagiarism, unless you get permission or give appropriate credit. Would it be fair for someone to take your work and use it to train an AI model to produce something similar? Although the lines of what is and isn’t ethical may not yet be well-defined, you can still do your part to promote fair AI use.


To reduce the risk of plagiarism when using AI: 


1. Always tweak and edit large pieces of text that the AI generates.

2. Ask the AI tool for a citation (and link if possible) when generating text about theories, models, figures, etc., and then check the citation. 

3. Avoid uploading others’ work to AI tools and asking them to generate content with the same voice, style or ideas.

4. Use ethically-trained AI tools, such as Adobe Firefly. Adobe only trains their AI on images they have the rights to, ensuring the tool is safe for commercial use. 



TikTok has introduced AI-generated content labels (source: TikTok)

Label AI-generated content


There is a growing conversation around labelling AI-generated content as AI tools become more advanced and widely used. There is a concern that, because AI systems can easily master search engine algorithms, AI-generated content will be promoted over human-created content. AI tools can also produce highly realistic content that may be used to misinform readers.


Some online platforms have already started to label AI-generated content, such as TikTok, and governments are introducing legislation which may make labelling more widespread. For example, in October 2023, US President Biden issued a landmark executive order, including the intention to develop guidance for content authentication and watermarking to clearly label AI-generated content. The EU Artificial Intelligence Act will require AI systems to disclose when content is AI-generated, so users can make informed decisions.


It’s not yet clear how this will affect users who publish AI-generated content. However, as learning designers, we should consider how AI-generated content could impact our credibility and trust with clients and learners.


Labelling AI-generated content promotes transparency, letting the audience know that content they are interacting with was generated by AI. This is important in academic and educational settings to maintain academic integrity and ensure that clients, learners and researchers understand the sources and methods used. Not labelling content may also mislead the audience about the human labour and creativity involved (this paragraph was mostly generated by AI).


So, how and when should you label AI-generated content?


It would be transparent to label AI-generated content when all, or the majority, of the content was created by AI, without major editing. Almost all the paragraph above was written by AI, hence the label. 


But what about AI-generated content that you've edited? We’ve used AI in technology for years to help us improve our writing, such as the Spelling and Grammar check in Word, Grammarly or thesauruses. So, using ChatGPT to help you edit, rather than generate, parts of text is similar to existing practices.


When writing my blog posts, I use notes and ideas that I've compiled myself and write the majority of the content. I use ChatGPT to help me come up with ideas, e.g., the pros and cons of using a certain type of tool, and to help me with parts of text that I’m struggling with. I could generate the majority of the content using ChatGPT to save time, but I’m writing these posts for my own personal and professional development. 


David Hopkins’ blog has good examples of labelling content that has been written with the help of ChatGPT. This could be an informative aid to readers who are learning about AI. The following text is taken from the start of his blog post ‘Prompts // 6 (Conscientious Commands)’:


‘Note: As before, this post has been (mostly) crafted using ChatGPT (v4). I have modified and tweaked aspects of the prompt and output so (a) I understand it and the process better, and (b) it reads a little bit more like something I would have written, but it is mostly LLM-created.’

AI-generated images and talking-head videos are generally easier to spot than AI-generated text, but you should still label them. I label AI-generated images using the caption ‘This image is AI generated’. AI-generated voice is increasingly lifelike, so you should consider labelling it, too.


Ultimately, whether to label or not is up to you and/or your organisation. Consider whether your audience would benefit from knowing that something was generated using AI, or whether your client would appreciate knowing, especially if they're promoting the content as their own.



Designing learning responsibly with AI


The AI world has been moving at breakneck speed, but rather than merely jumping on the bandwagon, you can help steer it in the right direction. While organisations and governments are catching up to recent advancements by introducing guidelines and regulations, there are still things you can do to design learning responsibly and ethically with AI.


I’ll leave you with some strategies you can implement in your practice, as well as some useful resources to learn more about AI Ethics. 


Six strategies for responsible learning design with AI


1. Edit AI-generated content, checking its accuracy, relevance and appropriateness.

2. Provide AI with context about your learners, and research a variety of sources. 

3. Ensure AI-generated scenarios, examples and images represent diverse populations.

4. Avoid sharing private or sensitive information with public AI tools.

5. Minimise plagiarism by checking sources and citing others’ work.

6. Clearly label content that has been generated by AI.



Want to learn more? 


Ravit Dotan regularly posts on LinkedIn about AI ethics news, research and resources. She has a collection of useful resources on her website.


David Hopkins has examples of ethical considerations when prompting in his blog post, Prompts // 6 (Conscientious Commands).


Below are some useful articles about addressing bias and promoting inclusivity in your learning design:



The INCLUSIVE ADDIE Model (an action-orientated framework to follow for inclusive educational practices)  



Want to learn more about learning design with AI? Don't forget to check out my other posts and subscribe using the button in the menu or footer.

