Understanding AI is about more than technology: it’s about learning a new language – the Language of AI
“Literacy is a bridge from misery to hope. It is a tool for daily life in modern society … the road to human progress and the means through which every man, woman, and child can realise their full potential.” – Kofi Annan.
When Kofi Annan spoke about literacy, he was referring to the ability to read and write, the foundation of human progress for centuries. Today, as we enter the age of artificial intelligence, we find ourselves facing a new form of literacy. AI literacy is quickly becoming the bridge between opportunity and obsolescence, between organisations that thrive and those that fall behind. Just as traditional literacy unlocked participation in the modern world, AI literacy will unlock participation in the AI-powered workplace.
For professionals in change management, this is more than just a technical concern. It is about equipping people to understand, adapt to, and make informed choices about the technologies that are reshaping work and society, and it will determine how well organisations navigate the transformations brought about by intelligent systems.
Why AI Literacy Matters Now
Artificial intelligence is no longer a distant innovation on the horizon. It is here, shaping how organisations operate, how people work, and how decisions are made. Across the globe, governments are introducing legislation that requires organisations to manage and mitigate the risks of AI.
In Europe, the EU AI Act is establishing a legal framework that classifies AI systems by risk and sets out obligations for transparency, accountability, and oversight. Similar initiatives are underway in the US, UK, and Asia. What all of this means is simple: AI literacy is no longer optional. For organisations, and for the professionals who guide them through change, it is becoming a mandated competency.
So, how do we build AI literacy? One helpful way is to think of it as learning a new language. When learning a language, the first step is building a vocabulary. You don’t need to be fluent overnight, but you do need to understand the key words and phrases that allow you to make sense of what is happening around you.
In the same way, AI literacy begins with acquiring the proper vocabulary of AI, the set of core concepts and terms that everyone working with, or impacted by, AI should be familiar with.
- Core Concepts & Foundations
To begin, let’s examine the building blocks of modern AI.
AGI (Artificial General Intelligence): Often described as the “holy grail” of AI, AGI refers to machines that can perform any intellectual task a human can. We’re not there yet: today’s AI is powerful but narrow, designed for specific tasks. Still, the term is worth knowing because it frames the debate about the long-term trajectory of AI.
ANN (Artificial Neural Network): Inspired by the human brain, ANNs are the computational frameworks that underpin much of AI. They consist of layers of interconnected “nodes” or “neurons” that process information.
Backpropagation: A technical term for the algorithm that allows neural networks to learn. By adjusting weights in the network to minimise error, backpropagation drives the learning process. I like to think of backpropagation as a teacher who helps a student learn from their mistakes. The teacher (backpropagation) examines the student’s work (the model’s predictions) and points out what went wrong (the difference between the predicted output and the desired output), then gives feedback on how to improve (adjusting the weights and biases in the neural network). The student tries again, and their work improves with each round of feedback.
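The teacher-and-student loop can be sketched in a few lines of code. This is a deliberately minimal illustration with a single made-up weight and training example (real networks repeat the same predict-measure-adjust cycle across billions of parameters):

```python
# Minimal sketch of the learning loop that backpropagation drives:
# predict, measure the error, adjust the weight, repeat.

weight = 0.0          # the model's single "setting", starting from scratch
x, target = 2.0, 8.0  # one training example: input 2.0 should produce 8.0

learning_rate = 0.1
for step in range(50):
    prediction = weight * x             # the student's attempt
    error = prediction - target        # the teacher's feedback
    gradient = error * x               # how much the weight caused the error
    weight -= learning_rate * gradient # nudge the weight to reduce the error

print(round(weight, 3))  # converges to 4.0, since 4.0 * 2.0 == 8.0
```

Each pass through the loop is one round of “marking” and adjustment; after enough rounds, the weight settles on a value that makes the prediction match the target.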
Deep Learning: A subset of machine learning that uses vast, multi-layered neural networks to solve complex tasks like recognising images, translating languages, or playing games.
Transformer: Transformers are the type of AI architecture that powers most modern language models, including ChatGPT. What makes them special is their ability to understand context, not just single words, but how words relate to each other across whole sentences or documents. This is why they can summarise long reports, answer questions, or hold conversations in a way that feels coherent. For change managers, understanding transformers is key because they are the engine behind the AI tools that are already reshaping communication, knowledge work, and decision-making in organisations.
Parameters: Think of parameters as the “settings” or “knobs” of an AI model. A large language model (LLM) may have billions of parameters, each of which is adjusted during training to shape how it generates output. When we train an LLM on a text dataset, the system learns the relationships between patterns and words; these relationships are stored in the parameters. You will often see the term parameters in discussions about ChatGPT and other LLMs, so it is worth knowing what it means. You can also think of a parameter as a Lego building block: the more Lego blocks you have, the bigger and more complex the model you can build. It works the same way for LLMs.
Tokenisation: Tokenisation is the process of breaking text into smaller units, called tokens, that a model can process. Tokens might be whole words, parts of words, or even single characters, depending on the system. For example, the word “running” might be split into two tokens: “run” and “-ing.” Tokenisation is essentially about chopping language into manageable chunks.
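To make the “running” example concrete, here is a toy tokeniser. Real systems learn their subword vocabularies from data (for example with Byte Pair Encoding); this sketch simply peels off a few hand-picked suffixes to show the idea:

```python
# Toy illustration of tokenisation. Real tokenisers use learned subword
# vocabularies; this simplified version splits on a few common suffixes.

def toy_tokenise(text):
    """Split text into words, then peel off a few common suffixes."""
    suffixes = ["ing", "ed", "ly"]
    tokens = []
    for word in text.lower().split():
        for suffix in suffixes:
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                tokens.append(word[:-len(suffix)])  # the word stem
                tokens.append("-" + suffix)         # the suffix token
                break
        else:
            tokens.append(word)  # no suffix found: keep the whole word
    return tokens

print(toy_tokenise("The runner was running quickly"))
# ['the', 'runner', 'was', 'runn', '-ing', 'quick', '-ly']
```

The point is not the exact splits but the principle: before a model sees any text, the text is chopped into a sequence of manageable chunks.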
Word Embeddings: Word embeddings represent tokens as numbers in a mathematical space. Think of a mathematical space as an imaginary map where positions are defined by numbers, allowing relationships such as closeness or distance to be measured and compared. Crucially, embeddings help capture meaning: words with similar meanings are placed close to each other. For example, “cat” and “dog” would be represented with vectors that are near each other, while “cat” and “apple” would be far apart. Embeddings allow AI models to detect relationships, analogies, and context, the foundation for making language “understandable” to machines.
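The “closeness” of embeddings can be measured directly. In this sketch the three-dimensional vectors are invented purely for illustration (real models learn vectors with hundreds or thousands of dimensions), but the distance measure, cosine similarity, is the one widely used in practice:

```python
# Hand-crafted toy embeddings: the numbers are made up for illustration,
# but cosine similarity is a standard way to compare real embeddings.
import math

embeddings = {
    "cat":   [0.90, 0.80, 0.10],
    "dog":   [0.85, 0.75, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    """1.0 means the vectors point the same way; near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # close to 1.0
print(cosine_similarity(embeddings["cat"], embeddings["apple"]))  # much lower
```

“Cat” and “dog” sit close together on the map, “cat” and “apple” far apart, and the numbers make that intuition measurable.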
For change managers, knowing these terms helps demystify AI. It’s less about coding and more about recognising the ingredients that make modern AI possible.
- How AI Learns & Adapts
Next, let’s examine how AI systems are trained and adapted to meet specific needs.
Fine-tuning: Once a base model is trained, it can be fine-tuned for a specific task, such as analysing customer feedback in a particular industry. Fine-tuning makes AI outputs more relevant.
Prompt Engineering: As AI systems become more widely used, the ability to ask the right question or design the right input prompt is becoming a professional skill. Prompt engineering can make the difference between a vague, unhelpful answer and an insightful, actionable one.
Retrieval-Augmented Generation (RAG): RAG is a way of combining an AI model with an external knowledge source, such as a company’s document library or database. Instead of relying only on what it was trained on, the model “retrieves” relevant information and then uses it to generate an answer. This approach helps improve accuracy and reduces the risk of the model making things up (hallucinating), which is especially valuable in organisational settings where reliable information matters.
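The retrieve-then-generate pattern can be sketched without any model at all. In this toy version, the document library is two invented policy snippets and “retrieval” is a simple word-overlap score; real RAG systems use vector search over embeddings and send the assembled prompt to a live model:

```python
# Toy sketch of the RAG pattern: retrieve the most relevant document,
# then place it in the prompt so the model answers from it.
# (Real systems use vector search; this uses simple word overlap.)

documents = [
    "Remote working policy: staff may work from home two days per week.",
    "Expenses policy: claims must be submitted within 30 days of purchase.",
]

def retrieve(question):
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many days can staff work from home?"))
```

Because the answer is grounded in a retrieved document rather than the model’s memory alone, the output is easier to verify and less likely to be invented.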
Few-shot Learning: Modern AI models can learn a new task after being shown only a handful of examples, typically placed directly in the prompt. For example, if you provide the model with three examples of how to convert meeting notes into bullet points, it can often continue the pattern with new notes. This ability is part of what’s known as in-context learning: the model learns from the examples you provide “in the moment,” without retraining. For change managers, this explains why AI tools can be so quickly adapted to different organisational needs without months of technical work.
Zero-shot learning: When an AI model can perform a task without being given any examples at all. Instead, it relies on the knowledge it has already absorbed during training. For example, if you ask the model to “summarise this report,” it can usually do so immediately, even if you haven’t shown it a single sample summary. For change managers, this highlights the versatility of modern AI tools: they can handle many new tasks straight “out of the box,” making adoption faster and easier.
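The difference between few-shot and zero-shot is easiest to see in the prompts themselves. These are illustrative prompt layouts only (the meeting-note examples are invented, and no model is being called):

```python
# Illustrative prompt layouts: few-shot includes worked examples in the
# prompt; zero-shot simply states the task. No model call is made here.

few_shot_prompt = """Convert meeting notes into bullet points.

Notes: Budget approved. Launch moved to May.
Bullets:
- Budget approved
- Launch moved to May

Notes: Hiring freeze lifted. Two new roles open.
Bullets:
- Hiring freeze lifted
- Two new roles open

Notes: Vendor contract signed. Kick-off next week.
Bullets:"""

zero_shot_prompt = "Summarise this report in three sentences:\n\n<report text here>"
```

In the few-shot prompt, the two worked examples teach the pattern “in the moment,” and the model is expected to complete the third. The zero-shot prompt relies entirely on what the model already knows.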
RLHF (Reinforcement Learning from Human Feedback): This method improves AI models by incorporating human judgment. People rate outputs, and the model learns what is considered “good” or “bad” in a given context. RLHF is one reason today’s chatbots feel more aligned with human values. But spare a thought for the low-paid workers tasked with removing offensive and obscene material from the datasets used to train ChatGPT. Time magazine has reported that this task was outsourced to workers in Kenya earning less than $2 per hour, requiring them to read hate speech and descriptions of violence and abuse.
For change managers, these methods are not just technical details. They show how flexible AI can be and why adoption requires ongoing human oversight.
- Capabilities & Behaviours
This group of terms describes what AI can do, and sometimes, what goes wrong.
Generative AI: Perhaps the most talked-about capability today. Generative AI refers to systems that create new content, including text, images, audio, and code, all based on patterns learned from data.
Emergent: Large models sometimes display unexpected abilities, such as solving problems they weren’t explicitly trained on. These emergent behaviours are exciting but also raise challenges in predictability.
Hallucination: When an AI confidently outputs false or fabricated information. Hallucinations pose a significant risk in professional contexts where accuracy is crucial.
Jailbreak: Users sometimes attempt to trick AI systems into bypassing their safety guardrails, for example, by creatively rephrasing a prompt. Jailbreaking highlights the tension between usefulness and safety.
Stochastic Parrots: A metaphor coined by researchers to remind us that language models don’t “understand” meaning. They generate plausible text by predicting what comes next, based on training data. For change managers, this is a helpful reality check: AI isn’t a colleague with expertise; it’s a tool that mimics expertise.
For change managers, recognising these behaviours, from the creativity of generative AI to the risks of hallucinations or jailbreaks, is essential for setting realistic expectations, addressing concerns, and guiding responsible adoption.
- Risks, Ethics & Governance
Finally, we turn to the vocabulary that connects most directly with legislation and organisational accountability.
Bias: AI systems can reproduce and even amplify biases present in their training data. This has implications for fairness, hiring, lending, healthcare, and other areas. Recognising bias is essential for responsible adoption.
Explainability (XAI): Regulators, including the EU AI Act, are demanding that AI systems be explainable, meaning users should be able to understand how decisions are made. For organisations, this is about building trust as well as meeting compliance requirements.
Model Governance: Just as finance and data protection require governance structures, processes, and audits to ensure responsible use, so does AI. Model governance is where compliance, ethics, and risk management meet.
For change managers, this is the most immediate arena of action. Organisations will need support in building AI literacy, embedding responsible practices, and navigating new regulations.
Why This Vocabulary Matters for Change Managers
At first glance, some of these terms may seem abstract or overly technical. However, for professionals in change management, this vocabulary is an essential tool. When you understand the vocabulary of AI, you are better equipped to:
- Translate between technical teams and business leaders.
- Anticipate how the adoption of AI will reshape workflows, roles, and skills.
- Guide conversations about risks, ethics, and compliance with confidence and expertise.
- Build trust with stakeholders who are uncertain or skeptical about AI.
In short, AI literacy is not about becoming a data scientist. It’s about being able to speak the language of change in an AI-enabled world.
Call to Action: Become AI-Literate, Become Future-Ready
AI is not a passing trend. It is a structural force reshaping industries, professions, and societies. And just as digital literacy became a core competency of the last generation, AI literacy will be a core competency of the next.
For change management professionals, the challenge and the opportunity are clear. By investing in AI literacy now, you can position yourself as a bridge between technology and people, between innovation and adoption, between compliance and culture.
The EU AI Act and similar legislation are not obstacles; they are signposts, reminding us that AI literacy is no longer optional. It is the new baseline for professional competence.
So, here’s the challenge: start learning the vocabulary of AI. Use it in your conversations. Test it against your projects. Share it with your teams. Build it into your change frameworks.
Because the future of AI in organisations will be defined not by technology alone, but by the change leaders who guide its adoption with clarity, accountability, and humanity.
This article was commissioned by the Change Management Institute and authored by Declan Foster. Declan is the Founder & CEO of Project Pal AI, Thought Leader and Author. He is an industry leader in change management and project delivery and provides consulting services to clients globally.