
That's a fantastic project! Extracting a number plate from a moving car and getting the result as data (text) requires Optical Character Recognition (OCR) specifically tuned for license plates, known as Automatic License Plate Recognition (ALPR) or Automatic Number Plate Recognition (ANPR).

The most popular and effective open-source library for this task, which works great on the Raspberry Pi, is OpenALPR.

Here is a conceptual outline and a basic Python script structure using the openalpr library.


🛠️ Prerequisites and Installation

Before running the code, you'll need to set up your Raspberry Pi 5 with the necessary software.

  1. OpenCV: ALPR libraries often rely on OpenCV for image processing.

    sudo apt update
    sudo apt install libopencv-dev python3-opencv
    
  2. OpenALPR: The core library for license plate recognition.

    # Install dependencies
    sudo apt install libtesseract-dev libleptonica-dev
    sudo apt install libconfig++-dev libboost-all-dev libltdl-dev
    sudo apt install cmake
    # Clone and build OpenALPR (this can take a while on the Pi)
    git clone https://github.com/openalpr/openalpr.git
    cd openalpr/src
    mkdir build && cd build
    cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_INSTALL_SYSCONFDIR:PATH=/etc ..
    make -j4
    sudo make install
    
  3. Python Bindings: Install the Python wrapper for OpenALPR (a quick sanity-check script follows this list).

    sudo pip3 install openalpr
    
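Before moving on to live video, it's worth a quick sanity check that both Python bindings import and that OpenALPR can read a still photo. This is a minimal sketch assuming the default install paths used above (/etc/openalpr/openalpr.conf and /usr/share/openalpr/runtime_data) and a placeholder test image named test_plate.jpg:

    # sanity_check.py - verify the OpenCV and OpenALPR Python bindings
    import cv2
    from openalpr import Alpr

    print("OpenCV version:", cv2.__version__)

    # Paths are the usual defaults after 'sudo make install'; adjust if yours differ.
    alpr = Alpr("eu", "/etc/openalpr/openalpr.conf", "/usr/share/openalpr/runtime_data")
    if not alpr.is_loaded():
        raise SystemExit("OpenALPR failed to load - check the config and runtime_data paths.")

    # 'test_plate.jpg' is a placeholder name; use any photo with a readable plate.
    results = alpr.recognize_file("test_plate.jpg")
    for plate in results["results"]:
        print(plate["plate"], plate["confidence"])

    alpr.unload()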

🐍 Python Code for Live Plate Recognition

This script uses opencv-python to access the webcam and the openalpr library to process the captured frames.

import cv2
import time
from openalpr import Alpr

# --- Configuration ---
# 0 is usually the default camera. Change if you have multiple webcams.
CAMERA_INDEX = 0 
# Set your country/region for better accuracy (e.g., 'eu', 'us', 'gb').
COUNTRY = "eu"
# Minimum confidence (0-100) a reading must reach before it is accepted.
MIN_CONFIDENCE = 85

# Initialize the OpenALPR object.
# Arguments: country code, path to openalpr.conf, and path to the runtime_data
# directory (adjust these paths if your install differs).
alpr = Alpr(COUNTRY, "/etc/openalpr/openalpr.conf", "/usr/share/openalpr/runtime_data")

if not alpr.is_loaded():
    print("Error loading OpenALPR. Please check your installation and configuration.")
    exit(1)
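
# Optional: cap how many OCR candidates OpenALPR keeps per plate.
# set_top_n() is provided by the OpenALPR Python binding; a small value keeps processing fast.
alpr.set_top_n(5)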

# Initialize the camera
cap = cv2.VideoCapture(CAMERA_INDEX)
if not cap.isOpened():
    print(f"Error: Could not open video stream or camera at index {CAMERA_INDEX}")
    exit(1)

print("Camera initialized. Press 'q' to exit.")

# Main loop to capture and process frames
while True:
    # 1. Capture frame-by-frame
    ret, frame = cap.read()
    if not ret:
        print("Error: Could not read frame.")
        break
    
    # 2. Encode the frame as image bytes for OpenALPR.
    # cv2.imencode is the easiest way to get the byte data; BMP avoids
    # compression artifacts, but ".jpg" is faster if encoding becomes a bottleneck.
    ret, enc = cv2.imencode(".bmp", frame)
    if not ret:
        continue
    
    # 3. Recognize license plates from the encoded image bytes
    results = alpr.recognize_array(enc.tobytes())
    
    # 4. Process and display results
    if results and results.get('results'):
        for plate in results['results']:
            best_candidate = plate['candidates'][0]
            
            # Check if the confidence is high enough
            if best_candidate['confidence'] >= MIN_CONFIDENCE:
                plate_text = best_candidate['plate']
                confidence = best_candidate['confidence']
                
                # --- The extracted data is here! ---
                print(f"✅ FOUND PLATE: {plate_text} (Confidence: {confidence:.2f}%)")
                
                # OPTIONAL: Draw a bounding box and text on the video feed
                try:
                    # OpenALPR returns the four plate corners under 'coordinates'
                    xs = [pt['x'] for pt in plate['coordinates']]
                    ys = [pt['y'] for pt in plate['coordinates']]
                    x_min, y_min = min(xs), min(ys)
                    x_max, y_max = max(xs), max(ys)
                    # Draw rectangle around the plate
                    cv2.rectangle(frame, (x_min, y_min), (x_max, y_max), (0, 255, 0), 2)
                    # Put the recognized text above the box
                    cv2.putText(frame, plate_text, (x_min, y_min - 10),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
                except (KeyError, IndexError, ValueError):
                    # Skip drawing if coordinate data is missing or malformed
                    pass

    # 5. Display the resulting frame
    cv2.imshow('Live ALPR Feed', frame)
    
    # 6. Break the loop on 'q' key press
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# --- Cleanup ---
alpr.unload()
cap.release()
cv2.destroyAllWindows()
print("ALPR system shut down.")

💡 Key Considerations for a Moving Car

For reliably extracting the number plate of a moving car, you need to address the following:

  • Shutter Speed (Motion Blur): A moving car causes motion blur, which severely degrades OCR accuracy. You may need a webcam or security camera that allows you to manually set a fast shutter speed (e.g., 1/500 or 1/1000 of a second) to freeze the motion.
  • Resolution and Field of View: The license plate must occupy a sufficient number of pixels in the frame for accurate reading (a good rule of thumb is at least 150-200 pixels wide). Ensure your camera is mounted and focused correctly.
  • Lighting: Good, consistent lighting is crucial. ALPR systems work best when the plate is well-lit and not overexposed or in shadow. Infrared (IR) lighting is often used in professional systems for consistent night-time reading.
  • Processing Power: The Raspberry Pi 5 is powerful, but running ALPR on every frame of a high-resolution video feed can be taxing. If you experience slowdowns:
    1. Reduce the input resolution of the video stream (cap.set(cv2.CAP_PROP_FRAME_WIDTH, ...)).
    2. Process fewer frames (e.g., only run ALPR on every 5th frame); a tuning sketch covering both ideas follows this list.
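
The sketch below pulls those two speed tips together with a manual-exposure request from the motion-blur point. Whether the CAP_PROP_EXPOSURE and CAP_PROP_AUTO_EXPOSURE settings actually take effect depends on your webcam and its driver, so treat the values as starting points to experiment with:

    # Tuning sketch: lower resolution, shorter exposure, and frame skipping.
    import cv2

    cap = cv2.VideoCapture(0)

    # 1. Reduce the input resolution to cut the per-frame ALPR cost.
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

    # 2. Ask the driver for manual exposure with a short exposure time.
    #    Value conventions vary by backend (with V4L2, 0.25 commonly means "manual").
    cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)
    cap.set(cv2.CAP_PROP_EXPOSURE, -7)  # driver-specific units; experiment

    # 3. Only run ALPR on every Nth frame.
    PROCESS_EVERY_N = 5
    frame_count = 0
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        frame_count += 1
        if frame_count % PROCESS_EVERY_N != 0:
            continue
        # ... pass 'frame' to alpr.recognize_array() exactly as in the main script ...

    cap.release()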

If you later build a web interface to display or manage the extracted license plate data, a front-end framework such as Bootstrap can make that part quicker to put together.
