Getting Started with Python LLM Programming
1. Introduction
Python LLM Programming is an essential skill for both aspiring and seasoned data science professionals. In this post, we will cover the foundational concepts of LLM programming to get you started. In future installments of this series, we will dive deeper into more advanced techniques and frameworks.
2. What is an LLM?
An LLM (Large Language Model) is a computer program that:
- Reads text
- Learns patterns from lots of text
- Predicts the next word in a sentence
It does not think or understand, but it can generate human-like responses. Think of it as a super smart autocomplete.
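The "super smart autocomplete" idea can be illustrated with a toy word-frequency model in plain Python. This is a vast simplification (a real LLM uses a neural network, not word counts), but it captures the core idea of learning patterns from text and predicting the next word:

```python
from collections import Counter, defaultdict

# "Training": count which word follows each word in a tiny text.
text = "the cat sat on the mat and the cat slept"
words = text.split()

following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in our text
```

A real LLM does the same kind of next-word prediction, just over billions of words and with far richer patterns than simple counts.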
To learn more about LLMs, we strongly recommend reading our post Fundamentals of LLMs.
3. How Does LLM Programming Work?
The process is simple:
- You write a prompt (question or instruction)
- The model generates text
- Python displays the output on your screen
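The three steps above can be sketched with a stand-in "model" (a plain Python function here, since the real model call depends on which library you choose in section 5):

```python
def generate(prompt):
    # Stand-in for a real model call; a real LLM would
    # produce text conditioned on the prompt.
    return f"(model reply to: {prompt})"

prompt = "Explain what an LLM is in one sentence."  # 1. write a prompt
reply = generate(prompt)                            # 2. the model generates text
print(reply)                                        # 3. Python displays the output
```

Whichever library you use, your program follows this same shape: build a prompt string, call the model, display the result.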
To learn more about Prompt Engineering, please read our post Complete Prompt Engineering Guide
Before building our first LLM application, we will review essential Python commands to ensure those new to the language have a solid foundation for LLM programming.
4. Basic Python Concepts
Python is a programming language that lets you give instructions to your computer. Here is a simple Python program that prints a message on the screen:
4.1 Simple Output
Code
print("Hello, world!")
Output
Hello, world!
4.2 Variables (to store data)
Variables
age = 15
name = "Alice"
print(name, "is", age, "years old.")
Output
Alice is 15 years old.
4.3 Read Input
Get Input
name = input("Enter your name: ")
print("Hello,", name)
Output
Enter your name: Gen-Z
Hello, Gen-Z
4.4 Loop
Loop
count = 0
while count < 3:
    print("Count is", count)
    count += 1
Output
Count is 0
Count is 1
Count is 2
4.5 Conditional Statements (If / Else)
If...Else
age = 15
if age >= 18:
    print("You are an adult.")
else:
    print("You are a minor.")
Output
You are a minor.
You can find a lot of free resources available on the web to level up your Python skills.
5. Libraries You Can Use for LLMs
| Library | Runs Locally? | Notes |
|---|---|---|
| textgen | ✅ | Tiny model, beginner-friendly |
| llama-cpp-python | ✅ | Medium model, CPU-friendly, needs model file |
| transformers | ✅ | Torch/TensorFlow needed for big models |
| openai | ❌ | Cloud API, very easy, requires API key |
6. Simple Q & A LLM Example using Hugging Face Library
Code
from transformers import pipeline
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
context = """ Singapore is a country in Southeast Asia. Its Prime Minister is Lawrence Wong. """
question = "Who is the Prime Minister of Singapore?"
answer = qa(question=question, context=context)
print("Prime Minister of Singapore is", answer["answer"])
Output
Prime Minister of Singapore is Lawrence Wong
6.1 Program Flow Details:
from transformers import pipeline
Imports Hugging Face's pipeline utility. pipeline hides complex steps like:
- loading models
- tokenization
- inference
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
Creates a Question-Answering pipeline. "question-answering" tells Transformers we want a Q & A system. distilbert-base-cased-distilled-squad is:
- A lightweight BERT model
- Trained specifically on question-answer datasets (SQuAD)
- Works well on CPU
context = """Singapore is a country in Southeast Asia. Its Prime Minister is Lawrence Wong."""
Defines the context. The context is a knowledge source: the model refers only to this source to find a suitable answer to our question.
question = "Who is the Prime Minister of Singapore?"
Defines the question. Craft a question that is specific and clear.
answer = qa(question=question, context=context)
Asks the model the question. We pass the context and question to the model through the pipeline, and the model reads both to find the most suitable answer.
print("Prime Minister of Singapore is", answer["answer"])
Prints the answer.
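One detail worth knowing: the question-answering pipeline returns a dictionary, not a plain string. Besides the "answer" text it includes a confidence "score" and the character offsets of the answer in the context. Here is a sketch using a stand-in result dict (the values are illustrative, not from a real model run), so no model download is needed:

```python
# Shape of the dict returned by a question-answering pipeline.
# These values are illustrative placeholders, not real model output.
answer = {"score": 0.98, "start": 62, "end": 75, "answer": "Lawrence Wong"}

print("Answer:", answer["answer"])
if answer["score"] < 0.5:  # low confidence: treat the answer with caution
    print("Warning: the model is not confident about this answer.")
```

Checking the score is a simple way to catch cases where the model is guessing.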
7. Tips for Beginners
- Keep questions simple and specific
- Don’t expect the model to always be correct
- Try different prompts, compare the outputs, and refine your prompt accordingly
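To make "try different prompts" concrete, here is a pure-Python sketch that runs several phrasings of the same question through a stand-in qa function and prints the results side by side. Swap in the real pipeline from section 6 to try it for real:

```python
def qa_stub(question, context):
    # Stand-in for the Hugging Face pipeline call in section 6;
    # the returned dict mirrors its shape, with placeholder values.
    return {"answer": "Lawrence Wong", "score": 0.9}

context = "Singapore is a country in Southeast Asia. Its Prime Minister is Lawrence Wong."
variants = [
    "Who is the Prime Minister of Singapore?",
    "Who leads Singapore's government?",
    "Name Singapore's Prime Minister.",
]
for q in variants:
    result = qa_stub(question=q, context=context)
    print(f"{q} -> {result['answer']} (score {result['score']})")
```

With a real model, the answers and scores will differ across phrasings; keeping the variant with the clearest question and highest score is a simple form of prompt tuning.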
--Infinite Ripples | HK