Excellent read, Kyle. I expect more improvement in single interfaces to multiple LLMs so that users can ask for what they want and choose from the best responses. Ideally, you could maintain state across numerous LLMs and bounce between each as you pleased.
Thanks Josh. I agree, there are already some interesting tools that let you switch models mid-conversation, fork a thread, or test the same prompt on different models, and it's fascinating to see how different the responses to the same request can be. It's a tough challenge for newcomers to the space, though.