upcarta
Collection
nizata @ata · Dec 19, 2023
Self Hosting LLMs

Exploring options for hosting LLMs locally.

7 curations
nizata @ata · Dec 19, 2023
  • Curated in Self Hosting LLMs
LLaMA and Llama-2 Hardware Requirements for Local Use (GPU, CPU, RAM)
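As a rough illustration of the sizing question the link above addresses, the memory needed for a model's weights can be approximated as parameter count × bytes per parameter. The figures below are back-of-envelope assumptions for illustration, not numbers taken from the linked post:

```python
def weight_memory_gb(n_params_billion: float, bits_per_param: float) -> float:
    """Approximate GB needed just for model weights
    (excludes KV cache and runtime overhead)."""
    bytes_total = n_params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A 7B-parameter model in fp16 vs. 4-bit quantization:
print(weight_memory_gb(7, 16))  # 14.0 GB
print(weight_memory_gb(7, 4))   # 3.5 GB
```

This is why 4-bit quantized 7B models fit comfortably on consumer laptops while fp16 weights of the same model may not.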
nizata @ata · Dec 19, 2023
  • Curated in Self Hosting LLMs
How is LLaMa.cpp possible?
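A commonly cited back-of-envelope answer to the question in the title above is that single-stream LLM decoding is memory-bandwidth bound: every generated token must stream all the weights from memory once, so bandwidth divided by model size bounds decode speed. Whether this matches the linked post's exact argument is an assumption; the numbers below are illustrative:

```python
def tokens_per_sec_upper_bound(weight_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decode speed: each token requires one
    full pass over the weights, so throughput <= bandwidth / model size."""
    return bandwidth_gb_s / weight_gb

# A 3.5 GB (4-bit, 7B) model on ~50 GB/s laptop DRAM (assumed figures):
print(tokens_per_sec_upper_bound(3.5, 50))  # ~14.3 tokens/sec
```

Quantization helps twice here: smaller weights both fit in less RAM and take less bandwidth per token.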
nizata @ata · Dec 19, 2023
  • Curated in Self Hosting LLMs
GitHub - jmorganca/ollama: Get up and running with Llama 2 and other large language models locally
nizata @ata · Dec 19, 2023
  • Curated in Self Hosting LLMs
GitHub - oobabooga/text-generation-webui: A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
nizata @ata · Dec 19, 2023
  • Subcollection in Self Hosting LLMs
llm prompting
4 curations
nizata @ata · Dec 20, 2023
  • Curated in Self Hosting LLMs
Uncensored Models
nizata @ata · Dec 23, 2023
  • Curated in Self Hosting LLMs
How to make LLMs go fast