Getting Started with Python LLM Programming

1. Introduction

Python LLM Programming is an essential skill for both aspiring and seasoned data science professionals. In this post, we will cover the foundational concepts of LLM programming to get you started. In future installments of this series, we will dive deeper into more advanced techniques and frameworks.

2. What is an LLM?

An LLM (Large Language Model) is a computer program that:

  • Reads text
  • Learns patterns from lots of text
  • Predicts the next word in a sentence

It does not think or understand, but it can generate human-like responses. Think of it as a super smart autocomplete.
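To make "predicts the next word" concrete, here is a toy sketch (not a real LLM) that simply counts which word follows which in a tiny sample text and then picks the most common follower. The sample text and the predict_next name are purely illustrative:

```python
from collections import Counter, defaultdict

# Count, for each word, which words follow it in the sample text
text = "the cat sat on the mat and the cat slept"
words = text.split()
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample text."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

A real LLM works with far more data and far richer statistics, but the core idea, learning patterns from text and predicting what comes next, is the same.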

To learn more about LLMs, I strongly recommend reading our post Fundamentals of LLMs.

3. How Does LLM Programming Work?

The process is simple:

  • You write a prompt (question or instruction)
  • The model generates text
  • Python displays the output on your screen
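The three steps above can be sketched with a stand-in for the model. The fake_model function below is purely illustrative; it returns a canned string instead of calling a real LLM:

```python
def fake_model(prompt):
    # Stand-in for a real LLM: in practice this would call a model
    return "Paris is the capital of France."

prompt = "What is the capital of France?"   # 1. you write a prompt
response = fake_model(prompt)               # 2. the model generates text
print(response)                             # 3. Python displays the output
```

Later in this post, fake_model is replaced by a real model from the Hugging Face library; the surrounding flow stays the same.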

To learn more about prompt engineering, please read our post Complete Prompt Engineering Guide.

Before building our first LLM application, we will review essential Python commands to ensure those new to the language have a solid foundation for LLM programming.

4. Basic Python Concepts

Python is a programming language that lets you give instructions to your computer. Here is a simple Python program that prints a message on the screen:

4.1 Simple Output

Code

print("Hello, world!")

Output

Hello, world!

4.2 Variables (to store data)

Variables

age = 15
name = "Alice"
print(name, "is", age, "years old.")

Output

Alice is 15 years old.

4.3 Read Input

Get Input

name = input("Enter your name: ")
print("Hello,", name)

Output

Enter your name: Gen-Z
Hello, Gen-Z

4.4 Loop

Loop

count = 0
while count < 3:
    print("Count is", count)
    count += 1

Output

Count is 0
Count is 1
Count is 2

A loop repeats actions multiple times. In chat programs, while True: keeps the chat running.
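As a sketch of that idea, here is a minimal chat loop. A real program would read from input() inside while True:, but this version injects canned messages so it runs without typing; the chat function and the canned messages are assumptions for illustration:

```python
def chat(get_input, reply):
    # Minimal chat loop: keeps running until the user types "quit"
    while True:
        user = get_input()
        if user.strip().lower() == "quit":
            break
        print(reply(user))

# Simulate a short conversation with canned inputs instead of input()
messages = iter(["hello", "quit"])
chat(lambda: next(messages), lambda text: "You said: " + text)
```

To turn this into an interactive program, pass input as get_input and a real model call as reply.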

4.5 Conditional Statements (If / Else)

If...Else

age = 15
if age >= 18:
    print("You are an adult.")
else:
    print("You are a minor.")

Output

You are a minor.

Conditional statements let a program make decisions based on criteria.

You can find many free resources available on the web to level up your Python skills.

5. Libraries You Can Use for LLMs

Library            Runs Locally?   Notes
textgen            Yes             Tiny models, beginner-friendly
llama-cpp-python   Yes             Medium models, CPU-friendly, needs a model file
transformers       Yes             Torch/TensorFlow needed for big models
openai             No              Cloud API, very easy, requires an API key

6. Simple Q & A LLM Example using Hugging Face Library

Note: first install the dependencies with pip install transformers torch (use !pip install in a notebook) before running the program.

Code

from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = """Singapore is a country in Southeast Asia. Its Prime Minister is Lawrence Wong."""
question = "Who is the Prime Minister of Singapore?"

answer = qa(question=question, context=context)
print("Prime Minister of Singapore is", answer["answer"])

Output

Prime Minister of Singapore is Lawrence Wong

6.1 Program Flow Details:

  1. from transformers import pipeline

     Imports Hugging Face's pipeline utility, which hides complex steps like loading models, tokenization, and inference.

  2. qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

     Creates a question-answering pipeline. "question-answering" tells Transformers we want a Q & A system. distilbert-base-cased-distilled-squad is a lightweight BERT model trained specifically on question-answer datasets (SQuAD), and it works well on CPU.

  3. context = """Singapore is a country in Southeast Asia. Its Prime Minister is Lawrence Wong."""

     Defines the context. The context is the knowledge source: the model refers only to this text when looking for an answer to our question.

  4. question = "Who is the Prime Minister of Singapore?"

     Defines the question. Craft questions that are specific and clear.

  5. answer = qa(question=question, context=context)

     Passes the question and context to the model through the pipeline. The model reads both to find the most suitable answer.

  6. print("Prime Minister of Singapore is", answer["answer"])

     Prints the answer.

7. Tips for Beginners

  • Keep questions simple and specific
  • Don’t expect the model to always be correct
  • Experiment with different prompts, check the output, and fine-tune them

--Infinite Ripples | HK

Next Topic
Exploring Different Types of Agent AI
