I’ve decided to experiment with using the AI assistant in Pinegrow on Mac with a local model. After a bit of tinkering, I’ve managed to install Ollama (though I intend to switch to Swama eventually).
So, yesterday, I got Ollama installed and downloaded 2 models (Gemma3 & Devstral). I was impressed with the command line interactions. I pointed Xcode and Pinegrow to them, and everything worked as I’d hoped.
Then this morning I opened Pinegrow, and all it says under the AI Assistant tab is: “Loading AI assistant from the server. Please wait…”
This happens in every existing project and in new projects. I can’t change the settings in the tab because the buttons never appear.
I’ve tried removing the Pinegrow folder in ~/Library/Application Support. That does reset Pinegrow back to a first-run state, but it then asks for the activation code again, and entering it produces an error, so that route is blocked.
Any suggestions how I can get Mr Pine Cone talking to me again?
Almost forgot to mention: the command line and Xcode interfaces to Ollama are still functioning, so it’s unlikely to be an issue with the models.
Hello @reflex,
This seems like a UI initialization bug in Pinegrow’s AI Assistant, possibly caused by a mismatch between the local model setup and the API response Pinegrow expects. Since Ollama works via CLI and Xcode, the issue likely lies in Pinegrow’s integration layer. A few things to try:

- Launch Pinegrow with internet access temporarily; some versions check license or model endpoints on startup.
- Ensure your local model endpoint is reachable at http://localhost:11434 and not blocked by a firewall or a port conflict.
- If the issue persists, this Pinegrow forum thread discusses similar symptoms and may offer a workaround.
- Reinstalling Pinegrow after backing up your license might help reset the integration cleanly.

Let me know if you want help testing the endpoint.
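In the meantime, here’s a minimal sketch for checking that the Ollama server is actually reachable and listing the models it has installed. It’s not anything Pinegrow-specific, just a plain check against Ollama’s REST API: it assumes the default host/port of localhost:11434 and the stock /api/tags endpoint, so adjust the URL if you run Ollama elsewhere.

```python
# Quick reachability check for a local Ollama server.
# Assumes the default host/port (localhost:11434) and Ollama's
# /api/tags endpoint, which lists locally installed models.
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # change if Ollama runs on another host/port


def check_ollama(base_url: str = OLLAMA_URL) -> None:
    try:
        # /api/tags returns {"models": [{"name": ...}, ...]} on a healthy server
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
    except (urllib.error.URLError, TimeoutError) as exc:
        print(f"Ollama is NOT reachable at {base_url}: {exc}")
        return

    models = [m.get("name", "?") for m in data.get("models", [])]
    print(f"Ollama is reachable at {base_url}")
    print("Installed models:", ", ".join(models) or "(none)")


if __name__ == "__main__":
    check_ollama()
```

If that prints Gemma3 and Devstral but Pinegrow still sits on the loading screen, that points at Pinegrow’s AI Assistant UI rather than anything on the Ollama side.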