It has been a while since I last wrote about the Pynq, covering it in chronicles 155 to 161 just after its release. In this chronicle we return to it, starting with the computer vision overlay.
One of the first things I developed for my Pynq after receiving it was a simple object-tracking application using OpenCV. This used a USB web camera and a simple OpenCV algorithm that detected the difference between a reference frame and the frames received from the web camera.
Differences between the captured frame and reference frame above a certain threshold would then be identified on the captured frame as it was output over HDMI.
The original webcam-based algorithm itself is simple, performing the following steps:
- Capture a frame from the web cam and convert it to grey scale (cv2.cvtColor)
- Perform a Gaussian Blur on the frame (cv2.GaussianBlur)
- Calculate the difference between the blurred frame and the reference frame (cv2.absdiff)
- Create a binary image using a threshold operation (cv2.threshold)
- Dilate the differences on the binary image to make them more noticeable (cv2.dilate)
- Find the contours within the binary image (cv2.findContours)
- Ignore any contours with a small area (cv2.contourArea)
- Draw a box around each contour that is large enough (cv2.boundingRect & cv2.rectangle)
- Show the captured frame with the boxes drawn (cv2.imshow)
While the algorithm has fewer than 10 steps, each step requires several operations on an image array. As a result, the frame rate when running purely in software was very low. Knowing that programmable logic is ideal for implementing these functions and would provide significant acceleration, I intended to go back and accelerate the algorithm.
Sadly, I never found the time to do this. However, looking at the new computer vision overlay, which uses elements of the reVISION stack, I realised that I could probably accelerate this algorithm very quickly.
The new computer vision overlay provides the following image-processing functions, accelerated within the programmable logic:
- 2D filter, 3×3, with a configurable kernel allowing Gaussian blurs, Sobel (vertical and horizontal), etc.
- Remapping
Within the Pynq PL these are implemented as shown in the diagram below.
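To give a feel for what the configurable 3×3 kernel enables, here are the standard textbook coefficient sets for a Gaussian blur and the two Sobel gradients. These are illustrative values, not kernels taken from the overlay's documentation:

```python
import numpy as np

# Standard 3x3 kernels of the kind the overlay's configurable
# 2D filter can be loaded with.

# Gaussian blur: coefficients sum to 1, so brightness is preserved
gaussian_3x3 = np.array([[1, 2, 1],
                         [2, 4, 2],
                         [1, 2, 1]], dtype=np.float32) / 16.0

# Sobel horizontal-edge kernel: coefficients sum to 0, so flat
# regions produce no response
sobel_h = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.float32)

# Sobel vertical-edge kernel (transpose of the horizontal one)
sobel_v = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)
```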
To install the computer vision overlay, we use a PuTTY terminal connected to the Pynq to download and install the packages from GitHub.
In the PuTTY terminal, use the following commands:
$ sudo -H pip3.6 install --upgrade git+https://github.com/Xilinx/PYNQ-ComputerVision.git
$ sudo reboot now
Once installed, we can proceed to updating the algorithm.
The computer vision overlay ideally uses the HDMI input and output for the best performance. To provide an accurate comparison against the previous OpenCV based example, my first step was to update that design to capture images using the HDMI input in place of the web camera.
I also modified the code to run for 200 frames such that I could time the execution and calculate the frames per second of both solutions.
This updated OpenCV design resulted in a frame rate of 4 frames per second when I ran it on my Pynq.
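A minimal timing harness of the kind described above might look like this. The function names are placeholders standing in for whatever frame source and processing pipeline is being benchmarked, not the notebook's actual code:

```python
import time

def measure_fps(capture_frame, process_frame, n_frames=200):
    """Run the pipeline for a fixed number of frames and return frames/sec.

    `capture_frame` returns one frame (e.g. from HDMI input);
    `process_frame` runs the detection algorithm on it.
    """
    start = time.perf_counter()
    for _ in range(n_frames):
        process_frame(capture_frame())
    elapsed = time.perf_counter() - start
    return n_frames / elapsed
```

Timing a fixed frame count like this gives a like-for-like comparison between the pure-software pipeline and the accelerated one.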
The next step was to update the algorithm to use the computer vision overlay. To do this, I used the 2D filter to perform the Gaussian blurring and the dilate operations.
Switching in these functions resulted in a significant increase in the frame rate, making the application usable.
Hints & Tips
- You can increase the performance further by raising the minimum contour area that is processed.
- Previous blogs on the Pynq are available here: P155, P156, P157, P158, P159, P160 & P161
- The Jupyter notebook is available on my GitHub
MicroZed Chronicles on GitHub
Want a Book