Bits and Bytes

Shreyas Srivastava

13 June 2024

Comparing techniques in LLM application development

When it comes to LLM application development, how do you evaluate the various strategies and choose the best approach for your problem? Below we examine the pros and cons associated with each approach.

Note that these techniques are not mutually exclusive and can be used in combination with each other.

Prompt engineering

Advantages:
1. Give the model few-shot examples and clear, detailed instructions.
2. Usually the first technique to try; great for building intuition about the problem.
3. Low engineering cost and fast iteration loops.
4. Can break the problem down into steps, e.g. chain-of-thought prompting or the ReAct framework.

Disadvantages:
1. Prompt tokens are expensive; feeding more into the context increases cost and may not make sense.
2. Not effective at teaching the model new output structures, formatting, programming languages, etc.
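As a sketch of the prompt-engineering approach, the few-shot plus chain-of-thought structure can be assembled as a plain string. The layout and the "Let's think step by step" cue below are illustrative conventions, not a required format:

```python
# Minimal sketch of assembling a few-shot, chain-of-thought prompt.
# The prompt layout here is illustrative, not a prescribed format.

def build_prompt(instructions: str, examples: list[tuple[str, str]], query: str) -> str:
    """Combine clear instructions, few-shot examples, and the user query."""
    parts = [instructions.strip(), ""]
    for question, answer in examples:
        parts.append(f"Q: {question}")
        parts.append(f"A: Let's think step by step. {answer}")  # chain-of-thought cue
        parts.append("")
    parts.append(f"Q: {query}")
    parts.append("A: Let's think step by step.")  # prime the model to reason before answering
    return "\n".join(parts)

prompt = build_prompt(
    "Answer arithmetic word problems. Show your reasoning, then state the result.",
    [("I had 3 apples and bought 2 more. How many now?",
      "3 apples plus 2 apples is 5 apples. The answer is 5.")],
    "A train has 4 cars with 10 seats each. How many seats?",
)
print(prompt)
```

Because the whole technique lives in a string, iteration is just editing text and re-running, which is what makes the iteration loop so fast.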
Retrieval-augmented generation (RAG)

Advantages:
1. Currently the only scalable way to ingest new knowledge from a collection of documents given a pretrained model.
2. Supplies the context, i.e. "what the model needs to know."
3. Interpretability: it is straightforward to trace answers back to the retrieved context, since the model answers based on it.
4. Decomposing the problem into retrieval and generation allows the two to be iterated on independently.

Disadvantages:
1. Medium engineering cost, i.e. requires tuning.
2. The RAG pipeline can be expensive to maintain.
3. Multi-step evaluation is needed (generation and retrieval/ranking).
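To illustrate the retrieve-then-generate decomposition, here is a toy sketch that scores documents by word overlap instead of a real embedding index; a production RAG system would use embeddings and a vector store, and the documents and function names here are hypothetical:

```python
# Toy RAG sketch: retrieval via word-overlap scoring instead of a real
# vector index, just to show the retrieve-then-generate decomposition.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many query words they share; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Assemble the retrieved context and the question into one prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "The warranty covers manufacturing defects for 12 months.",
    "Returns are accepted within 30 days with a receipt.",
    "Our headquarters are located in Berlin.",
]
print(build_rag_prompt("How long is the warranty period?", docs))
```

Note how `retrieve` and `build_rag_prompt` can be evaluated and tuned separately, which is exactly the independent-iteration advantage listed above.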
Fine-tuning

Advantages:
1. Good for specifying how the model should behave, i.e. domain style, formatting, etc.
2. Effective at teaching the model output structure, a new programming language, syntax, etc.
3. More efficient at inference since the model follows instructions and style better (fewer context tokens needed).
4. Reduced reliance on extensive prompt tuning.

Disadvantages:
1. Does not teach the model new knowledge; it only emphasizes knowledge already present from pretraining.
2. Supervised dataset creation can be expensive.
3. Infrastructure cost and hyperparameter-tuning cost.
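A minimal sketch of the supervised-dataset step, assuming the JSONL chat-message format used by several fine-tuning APIs; the exact schema any given provider expects is an assumption to verify against its documentation:

```python
# Sketch of packaging a supervised fine-tuning dataset as JSONL in a
# chat-message format. The {"messages": [...]} schema is an assumption
# modeled on common fine-tuning APIs; check your provider's docs.
import json

def to_jsonl(pairs: list[tuple[str, str]], system: str) -> str:
    """Turn (user, assistant) pairs into one JSON record per line."""
    lines = []
    for user_msg, assistant_msg in pairs:
        record = {"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": assistant_msg},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

pairs = [("Convert 'hello' to SCREAMING_SNAKE_CASE.", "HELLO")]
dataset = to_jsonl(pairs, "You answer in the team's house style: terse, uppercase identifiers.")
print(dataset)
```

The expensive part is not this packaging step but curating enough high-quality (user, assistant) pairs, which is the dataset-creation cost flagged above.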
Agents

Advantages:
1. Can solve more complex problems that require long-range context and real-world interaction.
2. Plan and break down complex tasks into simpler ones.
3. Combine complementary LLM strengths, e.g. a code-generation LLM + a general-purpose LLM + an LLM fine-tuned for tool use.
4. Incorporate multi-turn human feedback into the loop and switch between human dialogue and tool use.

Disadvantages:
1. Reliability with the current generation of LLMs is still an issue.
2. Context management and context pollution remain hard, unsolved problems.
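The plan-then-act loop can be sketched with a hard-coded stand-in for the model's planning step. In a real agent an LLM would emit the action; here `plan`, the calculator tool, and the loop shape are all illustrative assumptions:

```python
# Minimal agent-loop sketch: a (stubbed) planner decides between calling
# a tool and answering directly. A real system would have an LLM produce
# the action; plan() below is a hard-coded stand-in.

def calculator(expression: str) -> str:
    """Hypothetical tool: evaluate simple arithmetic."""
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return str(eval(expression))  # acceptable only in this toy sketch

TOOLS = {"calculator": calculator}

def plan(task: str) -> tuple[str, str]:
    """Stand-in for an LLM planning step: pick an action and its input."""
    if any(ch.isdigit() for ch in task):
        expr = "".join(ch for ch in task if ch in "0123456789+-*/(). ")
        return "calculator", expr
    return "answer", task

def run_agent(task: str, max_steps: int = 3) -> str:
    """Loop: plan, act, observe -- here collapsed to a single tool call."""
    for _ in range(max_steps):
        action, arg = plan(task)
        if action == "answer":
            return arg
        observation = TOOLS[action](arg)
        return f"The result is {observation}."
    return "gave up"

print(run_agent("What is 12 * (3 + 4)?"))
```

The reliability caveat above shows up precisely in `plan`: when the model-driven planner picks the wrong tool or malformed input, the whole loop fails, which is why agent reliability remains an open problem.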

References:
- A Survey of Techniques for Maximizing LLM Performance
- Mastering LLMs: A Conference For Developers & Data Scientists
