Just seeking clarification: does the toggle simply switch between 2 models the user considers fast or smart, or does it change how it interacts with the models?
In other words, would the same model set for both respond differently based on which one was calling it?
Basically Pinegrow allows you to pre-define a model for each, and you can quickly switch between them, depending on how you are leveraging the Assistant.
Thanks Pete. From the sounds of it, it’s simply a toggle to switch between 2 models, with nothing else special happening.
What I was curious about was whether additional instructions were sent to the “smart” model, telling it to use more reasoning or something. I’ll take it that this isn’t the case.
I was asking because I’m experimenting with a bunch of local models, and wasn’t sure whether I would get different results depending on which toggle state they were assigned to. The answer to that, then, is “no”.
No. As mentioned in the video, the feature got its start with ChatGPT, but now that Mr Pinecone supports many providers, it lets you easily choose, per task, between a fast, budget-friendly model and a slower but more capable one. It’s pretty handy, and it helps you manage your costs.
You should do a follow-up post about the local models and what your experience and outcomes have been like. What hardware are you using? I’m sure people would be interested.
I think you could get away with it. While they [LLMs] normally run on the GPU (which will be a problem with 8GB), some interfaces let you choose to run on the CPU instead. You’ve got plenty of memory available there, which might compensate for the reduced core count. That said, your memory clock speed is on the slow side.
My prediction (coming from someone who’s only just getting started with this myself) is that you could run some decent models. You could even have multiple models loaded at once – but the responses will be slow in coming.
Interested in hearing how you get on. No harm in trying, after all.
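On the CPU point above: some local runners let you disable GPU offload explicitly. As a minimal sketch, assuming a llama.cpp build and a GGUF model file (the model path, thread count, and prompt here are just placeholders, not anything from this thread):

```shell
# Sketch: run llama.cpp's CLI entirely on the CPU (model path is a placeholder).
# -ngl 0 : offload zero layers to the GPU, keeping inference on the CPU
# -t 8   : number of CPU threads to use (tune to your core count)
./llama-cli -m ./models/your-model.gguf -ngl 0 -t 8 -p "Hello"
```

With all layers kept on the CPU, your 8GB of VRAM stops being the bottleneck – system RAM holds the model instead, at the cost of slower token generation.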
Here’s a crazy idea: I wonder what would happen if Pinegrow invested in some machines to run local models on their own servers, and either offered it as a paid service or bundled it into the purchase price?
That was just a hypothetical suggestion. It wouldn’t be easy to find open-source models that could compete with the professional providers at nominal cost.
Still, it would be an interesting point of differentiation compared to other editors. Espresso has offered a model built into the software itself, though I’m not sure how well it’s selling. The business has changed hands several times since the previous release.