It’s a curious thing to think.
Can AI think?
I asked the new OpenAI model o1-mini to explain thinking in a concise, universally understood statement. It provided: ‘Thinking is the silent architect of our realities, shaping understanding and guiding actions.’
Pretty deep for an AI.
This marks a step into the next wave of AI advancements. Today’s LLMs all work in much the same way: they output the most likely next word based on the words that came before it.
It’s a bit more complicated than that, and the output can be steered with settings like temperature and techniques like fine-tuning, but OpenAI’s latest models, o1-preview and o1-mini, have been trained to ‘think’ before providing their output.
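Here is a minimal sketch of that next-word mechanic, using the open-source Hugging Face transformers library and the small GPT-2 model (chosen purely for illustration; OpenAI’s o1 models aren’t open in this way):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small open model purely to illustrate next-word prediction.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Score every candidate next token given the text so far.
inputs = tokenizer("The capital of France is", return_tensors="pt")
logits = model(**inputs).logits

# Pick the single most likely next token and decode it back to text.
next_token_id = logits[0, -1].argmax()
print(tokenizer.decode(next_token_id))  # most likely continuation, e.g. ' Paris'
```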
This process finally solves the viral problem that showed LLMs could not count how many r’s are in the word ‘strawberry’.
These new models can.
Counting r’s in “strawberry” may not sound impressive, but it’s the start of AI being able to reason and genuinely think through a task before outputting an answer. These models are better at maths, data analysis, and multi-step processes.
But the big question is: can we bring this ‘thinking’ technique to other LLMs?
The answer is yes. Kind of.
A prompt shared alongside a new model, Reflection 70B, asked the model to ‘think’ through the process, output an answer, and then ‘reflect’ on its own thinking and output to identify any mistakes.
This is a technique called Chain of Thought prompting, which has variations such as ‘think through this process step by step before answering’. Techniques like this consistently produce better outputs, so it’s exciting to see AI companies embedding these thought processes into their models natively. It will give users a better, albeit slower, experience.
Thinking takes time.
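If you want to try Chain of Thought prompting yourself, here is a minimal sketch using the OpenAI Python client. The model name and the maths question are just placeholders; the only essential part is the ‘step by step’ instruction in the prompt:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: use whichever chat model you have access to
    messages=[
        {
            "role": "user",
            # The Chain of Thought instruction is prepended to the actual question.
            "content": (
                "Think through this process step by step before answering: "
                "a train leaves at 9:40 and the trip takes 2 hours 35 minutes. "
                "What time does it arrive?"
            ),
        },
    ],
)
print(response.choices[0].message.content)
```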
Experiment and have fun with these new models and the Reflection prompt below.
It’s a quick win.
Reflection prompt:
You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.
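As a rough sketch of how you might use it, here is the prompt wired into the OpenAI Python client as a system message, with the final answer pulled out of the <output> tags (the model name and the question are placeholders):

```python
import re
from openai import OpenAI

# The Reflection prompt from above, used as the system message.
REFLECTION_PROMPT = (
    "You are a world-class AI system, capable of complex reasoning and "
    "reflection. Reason through the query inside <thinking> tags, and then "
    "provide your final response inside <output> tags. If you detect that "
    "you made a mistake in your reasoning at any point, correct yourself "
    "inside <reflection> tags."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: any capable chat model
    messages=[
        {"role": "system", "content": REFLECTION_PROMPT},
        {"role": "user", "content": "How many r's are in the word strawberry?"},
    ],
)

full_text = response.choices[0].message.content
# Show only the final answer, hiding the <thinking>/<reflection> scratch work.
match = re.search(r"<output>(.*?)</output>", full_text, re.DOTALL)
print(match.group(1).strip() if match else full_text)
```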

