Running Pinegrow AI with Locally Installed LLM AI Assistants

Hi,
Well, since AI is clearly the road PG is going down, but one that is limiting in terms of third-party source connections, security, connectivity, paid subscriptions, the two recommended models, etc., how about integrating it with locally installed LLM AI assistants?

Until this is made possible I won't be using the AI assistant.
However, I would if it were locally installed.

And as well as that, you could then use your own locally installed AI instance for whatever you want, in whatever area of speciality that LLM's training covers.

So with that said, this is a pretty darn good example of setting it all up (on a Mac, sorry; lucky for me) with this

As @Emmanuel said in the post on centring an image (which went off piste), AI was suggested to someone who doesn't want to code.

So if an LLM were installed via the method in the posted video, and a model selected that uses an OpenAI-compatible key, then it should be possible to integrate the two, get PG running with a local LLM from whatever source, and check out the results, however amazing or dubious.
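To illustrate what "OpenAI-compatible" means in practice: such servers accept the same `/v1/chat/completions` request shape as OpenAI, so a client only needs a different base URL. Here is a minimal sketch using only the Python standard library; the localhost URL, port, and model name are assumptions for a hypothetical local server, not anything PG ships:

```python
import json
import urllib.request

# Hypothetical local server exposing an OpenAI-compatible API; the
# host, port, and model name below are assumptions -- adjust to your setup.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(prompt, model="llama-3-8b-instruct"):
    """Build the URL and JSON payload in the OpenAI chat-completions shape."""
    url = f"{BASE_URL}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

def chat(prompt, model="llama-3-8b-instruct", api_key="not-needed-locally"):
    """Send the request; many local servers accept any API key or none."""
    url, payload = build_chat_request(prompt, model)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Responses mirror OpenAI's: the reply text is in choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

If a tool only lets you change the API endpoint and key, this request/response symmetry is what makes the swap possible at all.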

Let me know if anyone gets this fired up and succeeds, and if so, with which model and how the integration with PG went (process, problems, etc.).

Oh, and a quick Google search yields this result regarding API compatibility, leading off with the AI answer, wouldn't you know…

which says…

Local LLM Models with OpenAI API

Based on the provided search results, the following local LLM models have an API that’s compatible with the OpenAI API:

  1. Modelz LLM: Offers an OpenAI-compatible API for various LLMs, including:
  • FastChat T5
  • Vicuna 7B Delta V1.1
  • LLaMA 7B
  • ChatGLM 6B INT4 and 6B
  • Bloomz 560M

You can deploy Modelz LLM on local or cloud-based environments and use the OpenAI-compatible API to interact with these models.

  2. LocalAI: Provides an OpenAI-compatible API for running local LLMs, including:
  • GPT-4o
  • Claude 3 Opus
  • Gemini 1.5

LocalAI allows you to replace OpenAI models with any open-source AI model and offers a model gallery for easy model selection.

  3. TensorChord’s modelz-llm: Offers an OpenAI-compatible API for various LLMs, including:
  • FastChat T5
  • Vicuna 7B Delta V1.1
  • LLaMA 7B
  • ChatGLM 6B INT4 and 6B

This library provides Docker images for easy deployment and supports various LLM models.

These local LLM models and APIs are designed to mimic the OpenAI API, allowing you to integrate them seamlessly into your applications and workflows. However, please note that some models might have slightly different usage or configuration requirements. Be sure to check the documentation for each model and API for specific details.

End AI quote
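One practical consequence of that quoted claim (whatever its accuracy) is that OpenAI-compatible servers also tend to implement the discovery endpoint `GET /v1/models`, so you can check what a local server actually serves before pointing anything at it. A small sketch, again with the stdlib only; the port is an assumption for a hypothetical local server:

```python
import json
import urllib.request

# Hypothetical local endpoint; servers such as LocalAI typically implement
# OpenAI's GET /v1/models for discovery -- the port here is an assumption.
MODELS_URL = "http://localhost:8000/v1/models"

def model_ids(raw_json):
    """Extract model ids from an OpenAI-style /v1/models response body."""
    return [entry["id"] for entry in json.loads(raw_json).get("data", [])]

def list_local_models(url=MODELS_URL):
    """Query the server and return the ids of the models it serves."""
    with urllib.request.urlopen(url) as resp:
        return model_ids(resp.read().decode("utf-8"))
```

Whatever id this returns is the string you would then pass as the "model" field in chat requests.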

Now, the above lists LLaMA 7B as if it were the latest and greatest, whereas the local installs are hovering around the Llama 3 versions, so I'm not too sure about this. We shall have to see if it can get rolling.
I'm currently frozen in and about to feed my horses (not a euphemism), but any input on this and getting it up and running would be good.

I'd run with the AI assistant then, to check out the results it gave and see if it could be integrated into PG.

OH AND FINALLY,
Reddit users are on it too…

https://www.reddit.com/r/LocalLLaMA/comments/1cdps4s/create_openai_like_api_for_llama3_deployed/

So it looks good for Llama 3


Running a large language model (LLM) locally (with the goal of using it with Pinegrow) isn't something everyone can pull off. You need a strong setup for it to perform acceptably without hogging the resources of the computer where the LLM is installed (especially if it's the same computer running Pinegrow), which means a lot of CPU power or, ideally, a GPU. On top of that, beyond API compatibility, you have to find a model that's as advanced as Claude 3.5 for the specific interactions we're providing with Mr. Pine Cone.

We view AI as a genuine opportunity for web development and for integrating new features into Pinegrow. However, the numerous tests we’ve run with other models (via online services that offer them) have all been disappointing for the use cases we’re looking to implement. On the other hand, Claude 3.5 and, to a lesser degree, OpenAI have met our needs quite well.

That said, we’re definitely curious and interested in experimenting, and we’ll be following your progress with great interest.
