I once again found myself staring at my computer screen, trying to figure out a Google Sheets formula to transfer data between sheets. While I'm comfortable with basic formulas, I'm far from an expert. Naturally, I turned to my "friend" ChatGPT for assistance. Perhaps "friend" is an overstatement; it wasn't as helpful as I had hoped.
After several very frustrating minutes, we finally got the formula to work:
=ARRAYFORMULA(IF(B2:B = "", "", IFERROR(VLOOKUP(B2:B, 'Purchase Orders'!B$2:M, 12, FALSE), "Not Found")))
I have a general understanding of what's happening here, and to ChatGPT's credit, it explained each component. However, I admit I didn't read the explanation. I simply wanted a working formula and was quick to blame the system for not delivering the first time.
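For anyone who, unlike me, does want the breakdown: here's a rough Python sketch of what the formula is doing, using a hypothetical purchase-order table (the sheet names and sample values are invented for illustration).

```python
# Model 'Purchase Orders'!B$2:M as rows keyed by column B (index 0);
# column M is the 12th column of that range (index 11).
purchase_orders = [
    # [B, C, ..., M] — only B and M are filled in for brevity
    ["PO-1001", *[""] * 10, "Delivered"],
    ["PO-1002", *[""] * 10, "Pending"],
]

def lookup_status(order_id: str) -> str:
    """Mimics IF(B="","", IFERROR(VLOOKUP(B, range, 12, FALSE), "Not Found"))."""
    if order_id == "":
        return ""  # the outer IF: blank input stays blank
    for row in purchase_orders:
        if row[0] == order_id:  # exact match, like VLOOKUP(..., FALSE)
            return row[11]      # return the 12th column of the range
    return "Not Found"          # the IFERROR fallback when VLOOKUP misses

# ARRAYFORMULA applies the lookup down the whole B2:B column at once:
results = [lookup_status(v) for v in ["PO-1001", "", "PO-9999"]]
# → ["Delivered", "", "Not Found"]
```

The moving parts, in Sheets terms: the outer IF skips blank rows, VLOOKUP matches against column B of the other sheet and pulls the 12th column (M), and IFERROR turns a failed lookup into "Not Found" instead of an error.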
After a year of using these tools, I've gained a better understanding of how they work. I know when to abandon an unproductive thread and start fresh, which model suits different queries, the importance of context, and techniques like few-shot prompting and chain-of-thought reasoning. Yet sometimes, I still feel like a novice.
This highlights the challenge with Large Language Models (LLMs). I frequently hear stories about AI's limitations, often based on a single unsuccessful attempt. ChatGPT 3.5's shortcomings may have contributed to this perception, which could explain why OpenAI now offers GPT-4o for free, ensuring users' first impressions aren't disappointing.
LLMs are a unique technology. We communicate with them using natural language, as we would with a friend or colleague. However, to get optimal results, we need to understand much more than just plain language.
Experimentation is crucial, but it must be built on a foundation of fundamental knowledge.
This realisation led me to shift my focus with non-profits towards introductory workshops. These sessions aim to level the playing field by teaching prompting techniques, exploring different models and AI tools, and having fun with applications like Suno for music creation. The goal is to inspire, educate, and encourage daily use.
Next time someone claims "AI isn't good for that," ask why and see if you can help them discover better ways to improve outputs.
Let's not judge AI on first impressions.
Instead, let's embrace the learning curve and unlock AI's true potential through persistence, experimentation, and shared knowledge.
Me, myself, Claude Sonnet 3.5, Lex.page and DALL·E.
Excellent read, Kyle. I expect more improvement in single interfaces to multiple LLMs so that users can ask for what they want and choose from the best responses. Ideally, you could maintain state across numerous LLMs and bounce between each as you pleased.