Running Pinegrow AI with Locally Installed LLM AI Assistants

Hi,
Well, since AI is something that PG seems to be going down the road of, but in a way that is limiting in terms of third-party source connections, security, connectivity, paid subscriptions, the two recommended models, etc., how about integrating it with locally installed LLM AI assistants?

Until this is made possible I won't be using the AI assistant.
However, I would if it were locally installed.

And as well as that, you can then use your own locally installed AI instance for whatever you want, in whatever area of speciality that LLM was trained on.

So with that said, this is a pretty darn good example of setting it all up (on a Mac, sorry; lucky for me) with this

As @Emmanuel said in the post on centring an image (which went off piste), AI was suggested to someone who doesn't want to code.

So if an LLM were installed via the method in the posted video, and a model selected that uses an OpenAI-compatible key, then it should be possible to integrate the two, get PG running with a local LLM from whatever source, and check out the results, however amazing or dubious.

Let me know if anyone gets this fired up and succeeds; if so, with which model, and how did the integration process with PG go, problems and all?

Oh, and a quick Google search yields this result with regard to API compatibility… leading off with the AI result… wouldn't you know…

which says…

Local LLM Models with OpenAI API

Based on the provided search results, the following local LLM models have an API that’s compatible with the OpenAI API:

  1. Modelz LLM: Offers an OpenAI-compatible API for various LLMs, including:
  • FastChat T5
  • Vicuna 7B Delta V1.1
  • LLaMA 7B
  • ChatGLM 6B INT4 and 6B
  • Bloomz 560M

You can deploy Modelz LLM on local or cloud-based environments and use the OpenAI-compatible API to interact with these models.

  2. LocalAI: Provides an OpenAI-compatible API for running local LLMs, including:
  • GPT-4o
  • Claude 3 Opus
  • Gemini 1.5

LocalAI allows you to replace OpenAI models with any open-source AI model and offers a model gallery for easy model selection.

  3. TensorChord’s modelz-llm: Offers an OpenAI-compatible API for various LLMs, including:
  • FastChat T5
  • Vicuna 7B Delta V1.1
  • LLaMA 7B
  • ChatGLM 6B INT4 and 6B

This library provides Docker images for easy deployment and supports various LLM models.

These local LLM models and APIs are designed to mimic the OpenAI API, allowing you to integrate them seamlessly into your applications and workflows. However, please note that some models might have slightly different usage or configuration requirements. Be sure to check the documentation for each model and API for specific details.

End AI quote

Now, the above lists LLaMA 7B as the latest and greatest, whereas the current local installs are hovering around the Llama 3 versions, so… I'm not too sure about this. We shall have to see if it can get rolling.
I'm currently frozen in and about to feed my horses (not a euphemism), but any input on this and getting it up and running would be good.

I'd run with the AI assistant then, to check out the results it gave and see if it could be integrated into PG.

OH AND FINALLY,
Reddit users are on it too…

https://www.reddit.com/r/LocalLLaMA/comments/1cdps4s/create_openai_like_api_for_llama3_deployed/

So it looks good for Llama 3


Running a large language model (LLM) locally (with the goal of using it with Pinegrow) isn't something everyone can pull off. You need a strong setup for it to work acceptably well without hogging the resources of the computer where the LLM is installed (especially if it's the same computer running Pinegrow), which means a lot of CPU power or, ideally, a GPU. On top of that, besides making sure it's compatible with the API, you have to find a model that's as advanced as Claude 3.5 for the specific interactions we're providing with Mr. Pine Cone.

We view AI as a genuine opportunity for web development and for integrating new features into Pinegrow. However, the numerous tests we’ve run with other models (via online services that offer them) have all been disappointing for the use cases we’re looking to implement. On the other hand, Claude 3.5 and, to a lesser degree, OpenAI have met our needs quite well.

That said, we’re definitely curious and interested in experimenting, and we’ll be following your progress with great interest.


Done it!

now running Llama 3.2 locally,
via Ollama app on the Mac

AND… after fumbling around for some time,
I worked out the API endpoint
and the API key…

they are

API Key : Ollama
End point: http://localhost:11434/v1/chat/completions
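
For anyone wiring this up outside Pinegrow too, those two settings are all a standard OpenAI-style client needs. Here is a minimal stdlib-only sketch of my own (not from PG's docs; the model name assumes you have pulled llama3.2 in Ollama). Ollama doesn't actually validate the key, which is why any placeholder like "Ollama" works:

```python
import json
from urllib import request

# Settings from above: Ollama's OpenAI-compatible endpoint on its default
# port. The API key value is ignored by the local server, so any
# non-empty string (like "Ollama") is fine.
ENDPOINT = "http://localhost:11434/v1/chat/completions"
API_KEY = "Ollama"

def build_chat_request(model: str, prompt: str) -> request.Request:
    """Build a standard OpenAI-style chat-completion request for Ollama."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )

req = build_chat_request("llama3.2", "Create a responsive 3-column web page.")
# With Ollama running locally, uncomment to actually send it:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any client that speaks the OpenAI chat-completions format (Pinegrow included) is just sending a request shaped like this.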

This was after fooling around with the command line and checking out the text answers etc.

I have to admit, I'm impressed. Really, I am.
But it did then make a few mistakes, like labelling media queries as a JavaScript file…
And when I asked it why… it admitted it made a mistake and explained!

All on my local machine.

No data shared, no connection, no Internet required, and it codes far better than me.
I just asked it to create a 3-column web page.
Bang!
It used CSS grid or flexbox, whichever I preferred.

OK, now this could be a game changer.
A personal coding tutor.

This may get me back into it for the winter…

(Video: Llama3.2_Ollama_Pinegrow_2_Dec-09-2024)


Hi @schpengle
Congratulations. It is in this spirit that I recently subscribed to Pinegrow. While waiting for the integration of Apple’s AI natively, it is really a very good thing to be able to run your assistant locally: confidentiality, security and speed. Yes, it is not always up to date with the latest news, but in programming I have not seen much revolutionize the world in recent weeks, so…
I still have to “absorb” the Pinegrow interface (I am brand new here) to embark on this adventure that I hope to start in two or three weeks.
A question: what is the GPU usage? I am on a Mac M2 ultra so it should work fine, but I am curious to know in advance. Congratulations again for your configuration.

Hi there @Blep .

Well, in the image posted, the blue item above the Ollama app icon is the GPU window of Apple's native Activity Monitor application.

Activity Monitor > Window > GPU History.
That is a pop-out, so you can watch the GPU history as you're running it.

I am on a 16 GB M1 Pro with 10 cores (for this; I run older and MUCH older Macs too).
Here is the spec.

https://support.apple.com/en-us/111901

I had an absolutely huge load of browser windows and tabs open, plus other apps at the time, just as usual, and… I hardly noticed any lag or whatnot in the other apps.
There is a processing delay when making AI requests but, hey! I have that myself when talking to people :slight_smile:
But then, I have not used any online AI, so I don't have any comparable benchmarks to measure it against. I don't care, though; I am not prepared to give up all my stuff to some online portal in pursuit of curiosity.

WHY does OpenAI… require… my… phone number?
Not happening.

WHY does… Siri… send all my voice commands back to Apple to better improve heuristics?
Not happening.

I'm not into the whole AI HYPE thing. Bad juju may come of this because of the pioneers' lack of foresight, and the whole connectivity, Internet of Things, autonomous systems… thing.

However, tinkering locally with LLMs… to achieve end goals with coding (something I can't do very well), offering guidance when I want it, in an application and system I can TURN OFF;
well, yes, this is tolerable, and it has actually renewed my interest and enthusiasm for coding, PG, and whatever AI currently is.

So, there, overly cautious mini rant over.
I'm pretty sure your M2 will be great at this! It's a generation on from mine.
My boss has an M3… I'm tempted to set him up with it locally when we next see each other.

Oh, btw, I also asked the local Llama if it spoke German.
It explained that it had been trained on a large volume of texts, including German, so I greeted it in German and… it complimented me on my use of local idioms!

Now THAT surprised me.

In Pinegrow, it sometimes just seems to return the word

[code]

in the window, with no discernible effect on the page layout. But then, I'd not read any instructions from the PG team… as usual (I'm a bloke, after all :slight_smile:); I just wanted to get this done.

So now I'll go back and check out the instructional videos etc. by @matjaz to see what I probably SHOULD have done, how to implement it correctly, and how to fine-tune it with those modal drop-down menu options.

Oh, also, btw, during my brief exploration, following along with a video by that same guy above,
I discovered

FITTEN

This is a VS Code AI plugin which, if you're happy with all your information and code being shipped off to a nefarious Chinese server to get your AI fix and then shipped back to you, then yeah.
I'm not, and lots of people posting in the video's comments said the same, but…

They also have a web interface, which you can use for free.

https://code.fittentech.com/playground

Now THIS was interesting!
…and it's where I realised that my days as an ex-wannabe coder were basically over.
I'm redundant. So, back to feeding my horses, I guess, as any hopes of ever doing this stuff professionally have been dashed by a new technology.

But it will, however, increase my hobbyist abilities when I need assistance, so I may return to this whole web thing now that I have assistance… artificial AI assistance… and so I'll return to PG tinkering again.

Now, I wonder if the AI can help me remember and figure out the PG UI :smiley:
as it's evolved a lot over the years since I've been using it, and I've forgotten/lost most of it.

Let me know if you go ahead with the Ollama / Llama 3.2 / LLM-of-choice install and how it goes.
It really was painless, with no setup hassles.

PS: having said that,
I omitted to follow through with the whole UI / web UI, chat client, shiny front-end part of the video tutorial, as this involved Node, NVM, Python, Conda… and/or Docker.
I wasn't interested in that rabbit hole, just in getting this Mr Pinecone Assistant thing running, as I hate redundancy in applications: upgrade / sign up to / etc. etc.
If it's there, it should work.

So, now it does.
Job done :slight_smile:

Just a matter of playing with different LLM’s etc now and seeing what is what.
But life is short and I have horses to look after.
:slight_smile:
But hopefully this will be inspiring for some and free them from the dubious chains of the whole AI token / calling-back-to-base / training-on / stealing-your-code-writing-images mentality.
Sure, you lose the whole mega-cluster, quantum-computing type AI generation capability, but…
At what cost, and who cares?

I just want pretty web pages about my horses, and this may be fun.


Oh, and this native Mac app will run utilising the locally hosted LLM server via Ollama and whatever model you are running through it… buuuut,
I can't, as my macOS is too old (12.x Monterey), whereas this Mac app requires 14+.

Just start up the Ollama server and… you should be good to go with it on a compatible macOS.
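
Before pointing any app (or Pinegrow) at the server, it can be worth checking that Ollama is actually listening. Here is a small stdlib-only probe of my own (an assumption-laden sketch, not from any app's docs); /v1/models is Ollama's OpenAI-compatible model-listing endpoint on the default port:

```python
import json
from urllib import request, error

# Default address of a locally running Ollama server.
MODELS_URL = "http://localhost:11434/v1/models"

def ollama_is_running(url: str = MODELS_URL, timeout: float = 2.0) -> bool:
    """Return True if an OpenAI-compatible server answers at `url`."""
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
            # OpenAI-style listings wrap the available models in a "data" list.
            return isinstance(data, dict) and isinstance(data.get("data"), list)
    except (error.URLError, OSError, ValueError):
        # Connection refused, timeout, or a non-JSON reply: not usable.
        return False

print("Ollama reachable:", ollama_is_running())
```

If this prints False, start the server first (the Ollama Mac app does this for you when it launches).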

It also runs on iOS… running Ollama… wow!
I've no idea.


Hi @schpengle
Thank you very much for your very detailed answer. I will keep you informed of the progress of my tests; I think I will start in 2 to 3 weeks, as the end-of-year holidays are an absolute family priority.
My initial project is to use Homebrew, Python, Bolt, Llama 3, Stable Diffusion and Flask locally, with Pinegrow as a website builder. The AI will thus assist me throughout the production chain, from the construction of the site to the generation of its content. I have recently retired (retired psychologist, it sounds good :rofl:) and I admit that I am delighted with the arrival of AI as a territory of exploration. Obviously this is a transitional phase, because the "at home" AI assistants will bring about many other effects, most of which are already at work now, but hey, I prefer to keep that for another time; after having attracted the wrath of those who cannot stand AI (and especially its inevitable consequences, already here) on the Realmac and Weaverspace forums, I will wait a little here… :grin:

Yes, well, keep us all informed.
That sounds like quite… some… dev environment you are setting up there.
I've no idea what your plans are;
maybe you could fire up a topic in Random about your goals?
I used to post quite a lot here (and in that… channel?) but too many posts got deleted after a lot of work, so I basically quit posting on the forum.

I just thought I'd add to this topic, though,
fired up only by a potential new PG user who didn't want to code,
which I get :smiley:

But then came an answer: use AI.
And I thought, WHAT!?

Oh well, careful what you wish for :smiley:

PS… a retired psychologist?
Yeah, right!
There is no such thing… a bit like an off-duty policeman… don't drop litter near one
:wink:

And welcome aboard. I'd noticed you posting, helping people with some basic questions (complicated enough to confuse me, though). And thank you for the congratulations on my local LLM integration with PG.

Here's another little vid for you.

(Video: llama3.2_PG_3_col_page_code_creationDec-10-2024)

I just asked Llama 3.2, via Ollama, in Pinegrow, in "just chatting" mode:

create a new responsive web page with navigation and 3 columns

and that's what it spat out.

The first time, it also included JavaScript for nav interaction,
but it dropped the JS on the 2nd try and said it wasn't necessary… on its own.
So there are different results with the same queries.

Interesting.

Right, off to feed my horses.
I'm late.


Well, before I went, I just had to have one last attempt and… wow.
OK,
AI in Pinegrow.

I thought it was a gimmick… and with all the external data calls, to me it was.
But now… locally? OK, I'm utterly sold on this!

(Video: llama3.2_PG_3_col_page_Horses_Dec-10-2024)

Oh yeah, reading the PG instructions (well, the video tutorials) and selecting the correct fine-tuning modalities helps!
i.e., "generate the whole page".

This is great!
Right, real-world horses now.


Hi again @schpengle
Thanks again for your message. The idea I have in mind is the creation of a personal station of specialized AI assistants (first on one computer, then on dedicated computers, ideally under Linux, hence my question about GPU consumption); the website-creation assistance project is a fun first step. The rest is much broader. Like you, I want my data and queries to stay at home and not with a third-party service, even if I appreciate OpenAI and Anthropic. I am impatiently waiting for local AI.
However, in order not to waste too much time, I will start playing with Claude AI and Pinegrow (I've already had fun with OpenAI and Pinegrow, but the copy-paste way), while waiting for the configuration that I indicated in my previous message. The goal is not only to assist in the creation of the website with Pinegrow, but also to generate the necessary content: texts, images, videos (hence the GPU), sounds (for voice dubbing), … It's ambitious, but at the speed of AI, I think it's not so ambitious anymore (and only two months have passed since I had the idea…). BTW, my wife loves horses (normal for a former rider).


Hello @matjaz or other dev types,
this modal pop-up message appears when making requests.

Any ideas about it?
How to fix it, etc.?

Well, that sounds like a plan.
…to take over your entire… bedroom!
Are you going to have your AI agents querying each other, in-house, until the GPU takes flight under its own fan?

Or interface with the outside world, with a load of psychology-trained AIs resolving people's problems?

The NHS (the UK's National Health Service) would love that idea.

NGROK

btw :slight_smile:


I quite like the idea of a GPU drone unit that would not only produce content but also dissipate its heat throughout the house thanks to its flight. Admittedly, we are moving somewhat away from the initial objective, but it seems to me an idea worth exploring. I note immediately that I will need an AI assistance unit to pilot the fleet of flying radiators in an urban interior :crazy_face:


Add |tools after the model name in the provider settings. Of course, the model has to support tools for this to work.
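
If it helps anyone else hunting for the spot: my reading (an assumption, since the field labels can vary between Pinegrow versions) is that the suffix is typed straight into the model-name field of the custom provider settings, e.g.:

```
llama3.2|tools
```
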


Thanks for that… and… while I didn't quite understand WHERE the |tools info should be appended in the settings… I did go off and check

…which is a pretty site, with nice images; came back, broke Pinegrow a few times by changing things… spent some time sorting it out and putting everything back as it was, and… it no longer throws that modal pop-up window anymore, after a few restarts, without having |tools appended to anything.

SO that was nice.

PG also did that infuriating "cannot edit an element created at run time" thing for a while… on an empty page… with no code in it at all. Just an orange pop-up modal and an orange border.
That is always a pain.

I shut down and restarted PG, and the empty page was OK to click on again.
This was after the first AI / Pine Cone attempt.

Also, I notice that PG can only use Mr Pinecone / the local Llama instance in ONE PG window at a time.

If you open a second window, it appears that only the most recently created PG window gets access to Mr PC, and the other throws orange modals at you.