Hi,
Well, since AI is something that PG seems to be going down the road of, but one that is limiting in terms of third-party source connections, security, connectivity, paid subscriptions, the two recommended models, etc., how about integrating it with locally installed LLM AI assistants?
Until this is made possible I won’t be using the AI Assistant. However, I would if it were locally installed.
And as well as that, well, you could then use your own locally installed instance of AI for whatever you want, in whatever area of speciality that LLM was trained on.
So with that said, here is a pretty darn good example of setting it all up (on a Mac, sorry; lucky for me):
This ties in with what @Emmanuel said in the post on centring an image (which went off piste), during which AI was suggested to someone who doesn’t want to code.
So if an LLM were installed via the methodology in the posted video, and a model selected that uses an OpenAI-compatible API key, then it should be possible to integrate the two, get PG running with a local LLM from whatever source, and check out the results, however amazing or dubious.
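To make that concrete, here’s a minimal sketch of what “OpenAI-compatible” means in practice, assuming a local runner (LM Studio, Ollama, LocalAI, etc.) is already serving on your machine; the base URL, port and model name below are placeholders for whatever you actually run:

```python
# Minimal sketch: talking to a locally hosted, OpenAI-compatible LLM server.
# Assumes the `openai` Python package (pip install openai) and a local runner
# already serving at base_url. Port 1234 is LM Studio's default; use
# http://localhost:11434/v1 for Ollama. Adjust base_url and model to your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",   # your local server, not api.openai.com
    api_key="not-needed-locally",          # local servers usually ignore the key,
)                                          # but the client requires one to be set

response = client.chat.completions.create(
    model="llama3",  # whatever model your local server has loaded
    messages=[
        {"role": "system", "content": "You are a helpful web development assistant."},
        {"role": "user", "content": "How do I centre an image with CSS?"},
    ],
)
print(response.choices[0].message.content)
```

If that round-trips, then any tool that lets you override the OpenAI base URL and API key (which is exactly the question for PG) should in principle be able to talk to the same local server.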
Let me know if anyone gets this fired up and succeeds, and if so, which model you used and how the integration with PG went (process, problems, etc.).
Oh, and a quick Google search yields this result with regards to API compatibility… leading off with the AI result… wouldn’t you know… which says:
Local LLM Models with OpenAI API
Based on the provided search results, the following local LLM models have an API that’s compatible with the OpenAI API:
- Modelz LLM: Offers an OpenAI-compatible API for various LLMs, including:
  - FastChat T5
  - Vicuna 7B Delta V1.1
  - LLaMA 7B
  - ChatGLM 6B INT4 and 6B
  - Bloomz 560M
You can deploy Modelz LLM on local or cloud-based environments and use the OpenAI-compatible API to interact with these models.
- LocalAI: Provides an OpenAI-compatible API for running local LLMs, including:
  - GPT-4o
  - Claude 3 Opus
  - Gemini 1.5
LocalAI allows you to replace OpenAI models with any open-source AI model and offers a model gallery for easy model selection.
- TensorChord’s modelz-llm: Offers an OpenAI-compatible API for various LLMs, including:
  - FastChat T5
  - Vicuna 7B Delta V1.1
  - LLaMA 7B
  - ChatGLM 6B INT4 and 6B
This library provides Docker images for easy deployment and supports various LLM models.
These local LLM models and APIs are designed to mimic the OpenAI API, allowing you to integrate them seamlessly into your applications and workflows. However, please note that some models might have slightly different usage or configuration requirements. Be sure to check the documentation for each model and API for specific details.
End AI quote
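Before wiring any of these into PG, it’s worth a quick smoke test that the server really does speak the OpenAI API. A dependency-free sketch; the port here is an assumption (1234 is LM Studio’s default, Ollama serves at 11434 under /v1, so adjust to whatever you’re running):

```python
# Quick smoke test: does a local server actually speak the OpenAI API?
# Pure standard library, no third-party packages needed.
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # change to match your local runner

with urllib.request.urlopen(f"{BASE_URL}/models") as resp:
    data = json.load(resp)

# An OpenAI-compatible server answers GET /v1/models with
# {"object": "list", "data": [{"id": ...}, ...]}
for model in data.get("data", []):
    print(model["id"])
```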
Now, the above lists the “latest and greatest”, i.e. LLaMA 7B, whereas the local installs are hovering around the Llama 3 versions, so… I’m not too sure about this. We shall have to see if it can get rolling.
I’m currently frozen in and about to feed my horses (not a euphemism), but any input on this and getting it up and running would be good.
I’d run with the AI Assistant then, to check out the results it gave and see if it could be integrated into PG.
OH AND FINALLY,
Reddit users are on it too…
https://www.reddit.com/r/LocalLLaMA/comments/1cdps4s/create_openai_like_api_for_llama3_deployed/
So it looks good for Llama 3.
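For anyone wanting to try that route, here’s a hedged sketch using Ollama, which exposes an OpenAI-compatible endpoint out of the box; it assumes you’ve already fetched the model with `ollama pull llama3` and the server is running:

```python
# Hedged sketch: streaming a Llama 3 reply from Ollama's OpenAI-compatible API.
# Assumes Ollama is installed and running, and the model was fetched first
# with `ollama pull llama3`. Uses the same `openai` client as any hosted model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

stream = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Summarise what Pinegrow is in one sentence."}],
    stream=True,  # stream tokens as they arrive, like the hosted API does
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

Same client, same call shape as the hosted models, which is the whole point: if PG lets us set a custom base URL, a local Llama 3 should just slot in.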