Running Pinegrow AI with Locally Installed LLM AI Assistants

RABBIT HOLE!!! RABBIT HOLE ALERT!! An A-LLM-ert…

Easy?

Not so easy to set up; the config didn't work for a while, but then you have a local, chatty code assistant in VS Code.

Mind-bending unless you're a developer.
It took me two days… after a few days of repeating the video I posted previously. Awesome. Cheers, Alex!

Incredibly easy once you've set up Ollama.
Having said that, I had a problem where I had to uninstall and reinstall twice… sigh.
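
If you hit the same snag, it's worth checking that Ollama itself is actually up before blaming the apps sitting on top of it. A minimal sketch, assuming Ollama's default local endpoint (port 11434):

```python
# List whatever models the local Ollama server has pulled.
# Assumes the default endpoint, http://localhost:11434.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = json.load(resp)["models"]

for m in models:
    print(m["name"])  # e.g. "llama3:latest" -- whatever you've pulled
```

If that prints your models, Ollama is fine and the problem is the app on top.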

You can use it for RAG (feeding it your own documents, so it's your very own specialised LLM, with your own documentation to peruse… the same as the Open WebUI above).
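
For the curious, the trick under the hood of these RAG apps is roughly the same everywhere: embed your documents, find the chunk closest to your question, and stuff it into the prompt. A minimal sketch against Ollama's API (the model name and documents here are placeholder assumptions; a real app chunks files and uses a vector store):

```python
# Bare-bones RAG against a local Ollama server: embed, retrieve, generate.
import json
import urllib.request

OLLAMA = "http://localhost:11434"
MODEL = "llama3"  # assumption: substitute whatever model you've pulled

def post(path, payload):
    req = urllib.request.Request(
        OLLAMA + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def embed(text):
    # Ollama returns a plain embedding vector for the given text.
    return post("/api/embeddings", {"model": MODEL, "prompt": text})["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

# Your "own documents" -- in a real app these come from chunked files.
docs = [
    "Pinegrow's AI Assistant can point at a custom local endpoint.",
    "Ollama serves models on localhost port 11434 by default.",
]
doc_vectors = [embed(d) for d in docs]

question = "What port does Ollama use?"
q_vec = embed(question)

# Retrieve the closest chunk and stuff it into the prompt as context.
best = max(range(len(docs)), key=lambda i: cosine(q_vec, doc_vectors[i]))
prompt = f"Answer using this context:\n{docs[best]}\n\nQuestion: {question}"
answer = post("/api/generate", {"model": MODEL, "prompt": prompt, "stream": False})
print(answer["response"])
```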

And of course… Pinegrow, with AI… Mr PineCone. But…

This page needs updating… seriously!

It doesn't cover how to do it with your own local LLM instance, as it omits the custom setup process.


So without that one…ONE…custom setting in Pinegrow,
I would have had zero interest and NEVER gone down this…
Rabbit hole.

Have I mentioned Rabbit holes enough?
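
For anyone wondering what that custom setting actually points at: Ollama speaks an OpenAI-style API on localhost, so an app's "custom provider" setting just needs the local URL and a model name. I won't reproduce Pinegrow's exact settings screen here, but this is the kind of endpoint it ends up talking to (model name is an assumption; use whatever you've pulled):

```python
# Talk to the local Ollama server over its OpenAI-compatible endpoint.
import json
import urllib.request

payload = {
    "model": "llama3",  # assumption: substitute your own pulled model
    "messages": [{"role": "user", "content": "Write a CSS reset."}],
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```

No API key, no internet; the same request shape the paid providers use, just pointed at your own machine.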

So, seriously:
I am running an M1 Mac (Apple silicon) on an outdated OS (I'm on macOS 12… Macs may be on 15–16 by now, I don't know) and had various warnings.

I had to learn about Node… Python… NVM (for managing various different Node version environments) and Anaconda (I settled on Miniforge for maintaining a sane Python version environment) to follow along with the awesome video by Alex about setting all this up, the developer way…

And… then I found the AnythingLLM app… from this video.

And am now also exploring uncensored LLMs.

So: that app plus Ollama. It's running.
It all starts with Ollama (or LM Studio),

and you can fire up Pinegrow with your very own assistant; no fees/payments/internet/security madness.

It really should be in the PG Docs.

Cheers, @matjaz,
this has actually piqued my interest and got me tinkering again for the first time in years, if I'm honest, and now… I've got a coding assistant and PG to get me back into it.

BRILLIANT!

I've not enthused about something for a very… very… long time.

Now, like I said, I've got an M1 Mac, 16-inch version, 2021, 16 GB RAM, 1 TB hard drive (SSD), and about a GAZILLION tabs open (hundreds) in… jesus, just counted them, 68 windows!
PG running… AnythingLLM… Bean editor, Zed editor, VS Code (running the Open WebUI instance), piles of text editor pages open… the new shiny Warp terminal
(experimenting with it, as it also interfaces with local AI)

(Yeah, I forgot to add that in my picture; ran out of screen real estate.)

Several other things, Activity Monitor and…

It's smashing it.
Yes, it's a bit laggy, but totally usable (I used to use Windows 3.1, so yeah!).
Everything works, and it's not been rebooted for I don't know how long… weeks.

So try it; it's a whole new way of getting info (yeah, looking at you, condescending Stack Overflow nerds…),
and it really comes alive in coding apps like PG and the others above. So come on, PG team,
update your docs and get people to actually fire up and use your AI assistant!
And… thanks for actually creating it and then not jailing it to online paid providers.

Like you say, just part of your toolset, no lock-in.
Not with code, and now
not with online LLM AI providers with prohibitive ongoing charges that necessitate a permanent internet connection.
Internet and money: not always available.

So, I now have a new-found respect for the PG app and its development.
AI ain't gimmicky after all.
Cheers.

Oh yes, and I have to say, all these different LLM portals seem to have different lags and latencies in their responses.
The terminal I found to be the quickest,
but then Open WebUI… which surprised me, as the backend setup was immense (there is also a one-line Docker install! I didn't go that route; that will be next. Never used Docker before).
So when the dust settles I'll reboot with just the relevant apps open and see how PG compares to the others for speed etc.
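
When I do that comparison, a crude stopwatch against the local endpoint is probably all it needs; not a proper benchmark, just a round-trip feel (assumes Ollama and a pulled model, names are placeholders):

```python
# Time one short completion against the local Ollama server.
import json
import time
import urllib.request

payload = json.dumps({
    "model": "llama3",  # assumption: your own model name here
    "prompt": "Say hi.",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

start = time.perf_counter()
with urllib.request.urlopen(req) as resp:
    json.load(resp)
print(f"Round trip: {time.perf_counter() - start:.2f}s")
```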

PS… I last posted about all this Node, Python, package-managing stuff back in 2021.

Well, I think I've finally done it sanely on this machine.
It's been bugging me for years.

Right, now to go to the horses!
TTFN