Implementing Real-Time Object Detection on Raspberry Pi 4 Using TensorFlow Lite and USB Webcam Stream
In this tutorial, you’ll learn how to implement real-time object detection on a Raspberry Pi 4 using TensorFlow Lite with a USB webcam. This setup is ideal for projects involving computer vision, robotics, and IoT applications.
Prerequisites
- Raspberry Pi 4 with Raspberry Pi OS (formerly Raspbian) installed
- USB webcam compatible with Raspberry Pi
- Internet connection for installation
- Basic knowledge of Python programming
- Access to a terminal or SSH client
Parts/Tools
- Raspberry Pi 4
- USB webcam
- Micro SD card (at least 16GB recommended)
- Power supply for Raspberry Pi
- Keyboard and monitor (or SSH access)
- Python 3 installed
- TensorFlow Lite and other necessary libraries
Steps
- Set Up Your Raspberry Pi
- Ensure your Raspberry Pi is powered on and connected to the internet.
- Open a terminal window.
- Update your package list:
- Upgrade installed packages:
sudo apt update
sudo apt upgrade
- Install Required Libraries
- Install OpenCV for handling video streams:
- Install TensorFlow, which bundles the TFLite interpreter, along with NumPy (on a Raspberry Pi, the lighter tflite-runtime package is an alternative):
sudo apt install python3-opencv
pip3 install tensorflow numpy
- Connect the USB Webcam
- Plug the USB webcam into one of the USB ports on the Raspberry Pi.
- Verify the webcam is recognized by the system:
ls /dev/video*
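The same check can be done from Python; a minimal sketch using only the standard library (no OpenCV required, and the helper name is our own):

```python
import glob

def find_video_devices(pattern="/dev/video*"):
    """Return the device nodes the kernel exposes for attached cameras."""
    return sorted(glob.glob(pattern))

if __name__ == "__main__":
    devices = find_video_devices()
    if devices:
        print("Found video devices:", ", ".join(devices))
    else:
        print("No video devices found - check the USB connection.")
```

Note that many webcams expose two nodes (e.g. /dev/video0 for the image stream and /dev/video1 for metadata); /dev/video0 is usually the one you want for capture.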
- Download TensorFlow Lite Model
- Choose a pre-trained model, such as the SSD MobileNet v2 model.
- Download the model files:
wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/gpu/lite-models/ssd_mobilenet_v2/1/default/1.tflite
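The model outputs numeric class indices, so to display human-readable names you also need a label file for the dataset it was trained on (COCO for SSD MobileNet). Assuming a hypothetical labelmap.txt with one class name per line, a small loader might look like:

```python
def load_labels(path):
    """Map class index -> class name, reading one label per line."""
    with open(path, encoding="utf-8") as f:
        return {i: line.strip() for i, line in enumerate(f) if line.strip()}
```

Some label files reserve index 0 for a background class; if names look off by one, check the first entry against a known detection and offset the indices accordingly.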
- Create the Object Detection Script
- Create a new Python file for your script:
- Copy and paste the following code snippet into the file:
- Save the file and exit the editor.
nano object_detection.py
import cv2
import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="1.tflite")
interpreter.allocate_tensors()

# Get input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height, width = input_details[0]['shape'][1:3]

# Open the webcam stream.
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Preprocess the frame: convert BGR (OpenCV's order) to RGB, resize to
    # the model's input size, and add a batch dimension.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    input_data = cv2.resize(rgb, (width, height))
    input_data = np.expand_dims(input_data, axis=0)
    if input_details[0]['dtype'] == np.float32:
        # Float models expect normalized input; quantized models take uint8.
        input_data = (np.float32(input_data) - 127.5) / 127.5

    # Run inference.
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()

    # Process the output. SSD models return batched tensors, so strip the
    # batch dimension. The ordering of the output tensors (boxes, classes,
    # scores) can vary between models; inspect output_details if the
    # results look wrong.
    boxes = interpreter.get_tensor(output_details[0]['index'])[0]
    classes = interpreter.get_tensor(output_details[1]['index'])[0]
    scores = interpreter.get_tensor(output_details[2]['index'])[0]

    # Draw a box for each detection above the confidence threshold.
    for i in range(len(scores)):
        if scores[i] > 0.5:
            ymin, xmin, ymax, xmax = boxes[i]
            cv2.rectangle(frame,
                          (int(xmin * frame.shape[1]), int(ymin * frame.shape[0])),
                          (int(xmax * frame.shape[1]), int(ymax * frame.shape[0])),
                          (0, 255, 0), 2)

    cv2.imshow('Object Detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
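The rectangle drawing in the script scales the model's normalized [ymin, xmin, ymax, xmax] coordinates inline; factoring that conversion into a helper makes it easier to test and reuse. A sketch using the same coordinate convention (the function name is our own):

```python
def to_pixel_box(box, frame_width, frame_height):
    """Convert a normalized [ymin, xmin, ymax, xmax] box to pixel corners.

    Returns ((x1, y1), (x2, y2)), the two corner points cv2.rectangle takes.
    """
    ymin, xmin, ymax, xmax = box
    return ((int(xmin * frame_width), int(ymin * frame_height)),
            (int(xmax * frame_width), int(ymax * frame_height)))
```

In the detection loop you would then call cv2.rectangle(frame, *to_pixel_box(boxes[i], frame.shape[1], frame.shape[0]), (0, 255, 0), 2).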
- Run the Object Detection Script
- Execute the script in the terminal:
- Make sure the webcam is streaming video and the detected objects appear in the window.
python3 object_detection.py
Troubleshooting
- Webcam Not Detected: Ensure the webcam is properly connected, and check the output of
ls /dev/video*
to confirm the device node exists.
- Model Not Found: Verify the model file path is correct and that the file exists in your working directory.
- Low Frame Rate: Reduce the resolution of the video feed or optimize the model for better performance.
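To know whether a tweak actually helped, measure the frame rate rather than eyeballing it. A minimal sketch of a sliding-window FPS counter (the class is our own; the clock parameter is injectable purely to make it testable):

```python
import time

class FPSMeter:
    """Running frames-per-second estimate over a sliding window of frames."""

    def __init__(self, window=30, clock=time.monotonic):
        self.window = window
        self.clock = clock
        self.stamps = []

    def tick(self):
        """Record one frame; call once per loop iteration."""
        self.stamps.append(self.clock())
        if len(self.stamps) > self.window:
            self.stamps.pop(0)

    @property
    def fps(self):
        if len(self.stamps) < 2:
            return 0.0
        return (len(self.stamps) - 1) / (self.stamps[-1] - self.stamps[0])
```

Overlay the value on the frame with cv2.putText to watch it live. For the resolution reduction itself, cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640) and cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480) right after opening the capture is often the quickest win.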
- Errors in Script Execution: Check for syntax errors or missing libraries. Ensure Python 3 is installed and the necessary packages are present.
Conclusion
You’ve successfully set up real-time object detection on your Raspberry Pi 4 using TensorFlow Lite and a USB webcam. This project can be expanded with additional features like saving detected objects or integrating with other IoT devices. Happy coding!