Knowledge base vs fine-tune LLMs // pros & cons

2 min read · Jul 9, 2023

Fine-tuning (e.g., with Hugging Face or OpenAI) requires more resources (time, data, compute) and money, and is not suitable for constant real-time updates. It is a good fit for large amounts of fundamental, long-term knowledge.

https://platform.openai.com/docs/guides/fine-tuning
https://huggingface.co/docs/transformers/training

Training data quality for fine-tuning determines the final result. Other models, ChatGPT for example, can be used as well.
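As a concrete illustration, OpenAI's fine-tuning guide (linked above) describes training data as JSONL in chat format, one example per line. A minimal sketch of preparing such a file; the example conversation content here is invented:

```python
import json

# Each training example is one JSON object per line, in chat format.
# The actual messages below are illustrative placeholders.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for Acme Corp."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and click 'Reset password'."},
        ]
    },
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```

Curating and validating each line of this file (correct roles, factual answers, consistent tone) is where the training-data quality mentioned above is won or lost.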

A knowledge base is a great way to organize constantly updated data, but its usage is limited by the LLM's maximum prompt length (context window).
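A common way to work within that prompt limit is to retrieve only the most relevant knowledge-base chunks and stop once a token budget is reached. A minimal sketch using a toy word-overlap relevance score; a real system would use embeddings, a vector database, and the model's actual tokenizer, and all names here are illustrative:

```python
def score(query: str, chunk: str) -> float:
    """Toy relevance: fraction of query words that appear in the chunk."""
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q) if q else 0.0

def select_context(query: str, chunks: list[str], max_tokens: int) -> list[str]:
    """Pick the best-scoring chunks that fit the prompt budget.

    Token count is approximated by whitespace-separated words here;
    real code would count tokens with the model's tokenizer.
    """
    ranked = sorted(chunks, key=lambda c: score(query, c), reverse=True)
    picked, used = [], 0
    for chunk in ranked:
        n = len(chunk.split())
        if used + n <= max_tokens:
            picked.append(chunk)
            used += n
    return picked
```

For example, `select_context("how to reset password", chunks, max_tokens=10)` keeps the password-related chunks and drops unrelated ones once the budget is spent.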

A knowledge base and fine-tuning can be combined for the best results. Semantic (vector) databases, knowledge graphs, etc. are themselves well suited to collecting and processing knowledge.
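Combining the two then amounts to sending freshly retrieved knowledge-base context to a fine-tuned model: long-term knowledge lives in the weights, current facts ride along in the prompt. A sketch of assembling such a prompt; the message templates are placeholders, and the final API call is provider-specific and omitted:

```python
def build_prompt(question: str, context_chunks: list[str]) -> list[dict]:
    """Assemble chat messages that carry knowledge-base context.

    The resulting messages would be sent to a (hypothetical)
    fine-tuned chat model via the provider's API.
    """
    context = "\n".join(f"- {c}" for c in context_chunks)
    return [
        {
            "role": "system",
            "content": "Answer using the provided context when relevant.\n"
                       f"Context:\n{context}",
        },
        {"role": "user", "content": question},
    ]
```

The design point: updating the knowledge base is cheap and instant, while re-running fine-tuning is slow and expensive, so each mechanism handles the kind of knowledge it is best at.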

https://www.linkedin.com/posts/ahmedssoliman_llms-llama2-finetuning-activity-7094247865748717570-TwyF/

Notebooks for finetuning and inference of Llama-2 and LLMs

- bnb-4bit-training-with-inference
https://lnkd.in/dtiky_k8

- Llama 2 Fine-Tuning using QLora
https://lnkd.in/dEd4xTAR

- Fine-tune Llama 2 in Google Colab
https://lnkd.in/dHi2JHcf

- llama-2-70b-chat-agent
https://lnkd.in/dWjSdqRB

- text-generation-inference
https://lnkd.in/dvmMGhD6

- text-generation-webui
https://lnkd.in/dQ_NVNkP

- llama2-webui
https://lnkd.in/dkY3c_Pc

- llama_cpu_interface
https://lnkd.in/dyq2ft-q

https://huggingface.co/tiiuae
https://falconllm.tii.ae/proposal.php
https://github.com/mshumer/gpt-llm-trainer

Written by noailabs

Tech/biz consulting, analytics, research for founders, startups, corps and govs.