Prompts that Influence Behaviour
We are joined by Edwina Pike (Ed), an applied behavioural specialist and founder of Rational Change and BSCI AI Labs. Ed works in a practical and energising way, helping leaders and teams think differently and try new approaches with confidence. Her work focuses on real, sustainable changes in behaviour, grounded in evidence and shaped by what is working in organisations.
Today, Edwina guides us through behavioural engineering for GenAI, the role of prompts in influencing behaviour, and how change managers can use AI more effectively, ethically and intentionally.
If you do the same things, you get the same outcomes
Ed’s opening message is simple: if you do the same things, you get the same outcomes. Her challenge is to think differently and, possibly, to do a little differently.
Many people are already using AI tools such as ChatGPT, Claude, Copilot and Perplexity. These tools sit within what Ed describes as a hierarchy of AI. At the base is the large language model, a coded way of processing language. Underneath it sit the logic and the training data, the information the model has learned from.
Ed describes many large language models as having learned from the internet in a way that is “pretty much feral”. The models have amazing capability, but their training data can also be a limitation.
That is where context becomes important. The context layer is what users can train the model on. In Ed’s field, behaviour and organisational change, there is not always enough strong data available, so her team has had to build their own. You can build your own context so the model thinks and works in a way that is more aligned to your practice.
Above the context layer are prompts, the instructions entered into an AI tool. Above prompts sit agents. Ed’s focus is on prompts because they are easy to copy, paste and use immediately.
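To make the hierarchy concrete, here is a minimal sketch of how the context and prompt layers map onto an API call, assuming the OpenAI Python client; the model name and all of the wording are illustrative placeholders, not Ed’s material:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Context layer: steers the model toward your practice before any prompt runs.
context = (
    "You are supporting an organisational change manager. Ground every "
    "answer in evidence-based behavioural practice, and say plainly when "
    "the evidence is thin."
)

# Prompt layer: the specific instruction for this interaction.
prompt = (
    "List three open-ended questions I could ask a team that is "
    "resisting a new rostering system."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model your licence covers
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)
```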
Garbage in, garbage out
For Change Managers using AI, one of the most important reminders is that the quality of the outcome depends on the quality of what goes in.
If the model is poorly built, trained on weak data or prompted badly, it is more likely to return hallucinations or weak outputs. If users rely on that output to make decisions, they need to recognise the risks.
- Garbage data gives garbage output
- Garbage prompts give garbage output
- Better knowledge and better prompts create better outcomes
- The quality of the prompt influences the behaviour at the back end (see the contrast sketched below)
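As a hypothetical illustration of that last point, compare two prompts for the same task; the wording is ours, not Ed’s:

```python
# A vague prompt invites generic, hallucination-prone output.
weak_prompt = "Write a change plan."

# A precise prompt supplies role, context, constraints and the outcome wanted.
strong_prompt = (
    "Act as a change manager preparing a 90-day adoption plan for a new "
    "expense system in a 200-person finance team. List the five biggest "
    "behavioural risks, and for each one suggest a measurable early signal. "
    "If you are unsure about anything, say so rather than guessing."
)
```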
Many language models are sycophantic. They are trained to please humans because human feedback helped shape their training data. Those thumbs up and thumbs down signals have rewarded answers that humans liked. That means the model may try to satisfy the user, not necessarily challenge them.
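One practical counter is to build the challenge into the prompt itself, so that satisfying the user means disagreeing well. The sketch below uses illustrative wording of our own, not a prompt from the webinar:

```python
plan_text = "..."  # paste the plan you want critiqued here

# Illustrative instruction that pushes back against sycophancy.
challenge_prompt = (
    "Review the stakeholder engagement plan below. Do not tell me what is "
    "good about it. Identify the three weakest assumptions, explain why "
    "each could fail, and argue the strongest case against my approach.\n\n"
    + plan_text
)
```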
- AI is biased, just like humans
- From knowing to doing
- The power of self-discovery – discovering the answer for yourself starts rebuilding the mental model
- Agency, control and trust – the person designing the prompt or agent remains responsible for the ethics of how it is used
- Ask better questions – open-ended questions are powerful because they support self-discovery
- Start with the outcome – what do you want people to know, do, believe or feel differently?
- The change check-in – a recurring prompt catches issues early and shows what has changed over time (see the sketch after this list)
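A change check-in prompt can be kept as a reusable template and run at a regular cadence. The sketch below is a hypothetical template of our own design; the questions, placeholders and names are illustrative only:

```python
# Hypothetical reusable template for a recurring change check-in.
CHECK_IN_TEMPLATE = """You are helping a change manager check in on the {initiative} change.
Ask me the following open-ended questions one at a time. After my answers,
summarise what has shifted since the last check-in {last_checkin} and
flag any early warning signs.
1. What feels different about how the team is working this week?
2. Where are people quietly working around the new process, and why?
3. What would you stop, start or continue if the decision were yours?"""

# Fill the placeholders for this round and paste the result into your AI tool.
prompt = CHECK_IN_TEMPLATE.format(
    initiative="the new CRM rollout",
    last_checkin="two weeks ago",
)
print(prompt)
```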
What Change Managers can take away
Every word and every phrase in a prompt matters. Just as strong communication depends on clarity, strong prompts depend on precise language.
AI can be a useful stakeholder, another view or another conversation partner. It does not remove the human element. It can ask questions consistently, help surface blind spots and support sense-making. But the responsibility remains with the change agent.
🎬 Members can read the full article and watch the webinar on the MEMBER HUB
🤔 Not a member yet? Now is a great time to JOIN HERE NOW
