Taking the Temperature of AI


A lot has been said about AI recently. A lot. ChatGPT hit the headlines late last year and suddenly everyone was talking about AI. At every turn, it seems to be in front of me. Maybe it really is being discussed everywhere, or perhaps it is just the Baader-Meinhof phenomenon: once you have noticed something, you start seeing it everywhere.

Curious about what people have been asking, I put the question to ChatGPT: ‘what are the top queries people have recently had?’

Interesting. I then decided to ask a very simple question: describe mallowstreet in a paragraph:

Not bad. I then clicked the ‘regenerate response’ button and got a slightly different answer.

I was puzzled, because ChatGPT chose to focus on different elements of what we do.

Why is this? Models, probability, and temperature


In a nutshell, ChatGPT has been trained on a vast corpus of written text drawn from across the internet, up to a cut-off in 2021. When answering a question, it works step by step to determine the next ‘best’ word given the text it has already generated. The choice of next word is made from a list of candidate words, each with its own probability. Makes sense, but this didn’t answer my question about getting different responses to the same question.
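
To make that concrete, here is a toy sketch of picking a next word in proportion to its probability. The candidate words and probabilities are made up for illustration; they are not the model’s real vocabulary or numbers.

```python
import random

# Toy illustration only: a handful of made-up candidate next words and the
# probabilities a model might assign to each, given the text so far.
candidates = {
    "pensions": 0.45,
    "investment": 0.30,
    "community": 0.15,
    "network": 0.10,
}

# Pick the next word in proportion to its probability.
next_word = random.choices(
    population=list(candidates.keys()),
    weights=list(candidates.values()),
    k=1,
)[0]
print(next_word)  # usually "pensions", but not every time
```

Run it a few times and you will get different words on different runs, which is exactly the behaviour I had just seen with the ‘regenerate response’ button.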

This is why ‘temperature’ matters. Temperature is an adjustable parameter that determines how ‘creative’ ChatGPT can be when forming an answer: it controls how widely the model samples from its list of candidate words. A low temperature (e.g. 0) makes the model always pick the most probable word, giving you predictable and often boring text. A higher temperature of, say, 1 lets the model pick lower-probability words, which can make the output more varied and creative, but also less reliable. In practice, a temperature of around 0.8 seems to strike a good balance; we experimented with it in our own models and found much the same. The full details are too technical for this blog post, but what matters is that there is no single setting for AI to produce results, something I will revisit in the future.
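
Again as a rough sketch rather than the real thing, here is how temperature reshapes those probabilities before a word is picked. The words and scores are invented for illustration, but the mechanics are the same as in the real models.

```python
import math
import random

def sample_with_temperature(scores, temperature):
    """Pick a candidate index after rescaling the model's scores by temperature.

    Simplified sketch: real models work over enormous vocabularies, but the
    effect of temperature on the sampling is the same.
    """
    if temperature == 0:
        # Temperature 0: always take the single highest-scoring word.
        return max(range(len(scores)), key=lambda i: scores[i])
    # Divide the scores by the temperature, then turn them into probabilities.
    scaled = [s / temperature for s in scores]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]
    probs = [e / sum(exps) for e in exps]
    return random.choices(range(len(scores)), weights=probs, k=1)[0]

words = ["pensions", "investment", "community", "network"]
scores = [2.0, 1.5, 0.5, 0.1]  # made-up model scores for illustration

print(words[sample_with_temperature(scores, 0)])    # always "pensions"
print(words[sample_with_temperature(scores, 0.8)])  # mostly "pensions", sometimes not
print(words[sample_with_temperature(scores, 2.0)])  # noticeably more varied
```

At temperature 0 the output never changes; as the temperature rises, the less likely words get a bigger share of the draws, which is why the same prompt can produce different answers.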

Implications for business


OK, so you need to calibrate your models. And experimenting with AI is essential, because it shows what is actually possible. Forcing it into a business without solving a real problem is pointless and will almost certainly end in failure.

So how will this change the way we do business? Will all our jobs be replaced? I don’t think so. Rather, I think AI has the ability to transform what we do in our respective jobs. The nirvana is to outsource the mundane things that humans are largely bad at (being objective, consistent, thorough) and focus on the things humans are great at (building interpersonal relationships, reading the room, picking up on energy).

What I love most about all of this is that it truly levels the playing field. Find a way to embed these tools in your business to solve a real problem, and real results will follow quickly. What if I had a tool that helped me get 1% better each day at a specific task? Tomorrow I would only be marginally better, but over a year the compounding would be transformational, as the rough calculation below shows.
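
A back-of-the-envelope illustration of that compounding claim, assuming the 1% gains really do multiply day after day:

```python
# Back-of-the-envelope compounding: 1% better each day, every day, for a year.
daily_gain = 1.01
days = 365
print(f"{daily_gain ** days:.1f}x better after a year")  # roughly 37.8x
```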

I asked ChatGPT what the future holds for AI over the next five years. It gave a considered response:

What’s your experience with using AI, and how are you integrating it into your business? What’s worked, and what hasn’t?
