AI-driven Plastic Surface Defect Detection via UV-exposure

January 5, 2026

Experimenting with different UV wavelengths and camera types to develop a feature-rich industrial anomaly detection mechanism with edge AI.

Hardware
  • 1 × ELECROW Regular PCB (4-layer)
  • 1 × Raspberry Pi 4 Model B
  • 1 × Raspberry Pi 5
  • 1 × Raspberry Pi Camera Module 3 Wide
  • 1 × Raspberry Pi Camera Module 3 NoIR Wide
  • 1 × Raspberry Pi Camera FFC Connector Cable (150mm)
  • 1 × Raspberry Pi Camera FPC Connector Cable (300mm)
  • 1 × Raspberry Pi Camera FPC Connector Cable (500mm)
  • 1 × UV Bandpass Filter (25mm Glass ZWB ZB2)
  • 1 × Godox Color Gel Filters
  • 1 × DFRobot UVC Ultraviolet Germicidal Lamp Strip (275 nm)
  • 1 × DARKBEAM UV Flashlight (395 nm)
  • 1 × DARKBEAM UV Flashlight (365 nm)
  • 1 × ATmega328P-PU
  • 1 × 16.000 MHz Crystal
  • 1 × 10K Resistor
  • 2 × 22pF Ceramic Disc Capacitor
  • 1 × 100nF Ceramic Disc Capacitor
  • 1 × 10uF 250V Electrolytic Capacitor
  • 2 × Nema 17 (17HS3401) Stepper Motor
  • 2 × A4988 Driver Module
  • 1 × Logic Level Converter (Bi-Directional)
  • 1 × SSD1306 OLED Display (128x64) Blue-Yellow
  • 2 × Magnetic Hall-effect Sensor Module (KY-003)
  • 2 × Long-shaft Potentiometer (B4K7)
  • 5 × Button (6x6)
  • 2 × DC Barrel Female Power Jack
  • 3 × DC Barrel to Wire Jack (Male)
  • 2 × DC Barrel to Wire Jack (Female)
  • 1 × Arduino Uno
  • 1 × FTDI Adapter (Programming Board)
  • 1 × 5 mm Steel Balls (Beads) for Bearings
  • 1 × M3 Screws, Nuts, and Washers
  • 1 × M3 Brass Threaded Inserts
  • 1 × M2 Screws, Nuts, and Washers
  • 1 × ATX Power Supply Unit (PSU)
  • 1 × XH-M229 ATX Power Supply Adapter Board (Breakout)
  • 1 × Xiaomi 20000 mAh 3 Pro Type-C Powerbank
  • 1 × USB Buck-Boost Converter Board
  • 1 × Jumper Wires
  • 1 × Bambu Lab A1 Combo

Description

As I was reading about the applications of UV (ultraviolet) radiation in industrial operations, especially for anomaly detection, I became fascinated by the possibility of developing a proof-of-concept AI-driven industrial automation mechanism, as a research project, for detecting plastic surface anomalies. Thanks to its shorter wavelength, ultraviolet radiation can be employed in industrial machine vision systems to detect extremely small cracks, fissures, or gaps: UV exposure can reveal imperfections off which visible light simply bounces, catching production line mistakes overlooked by the human eye or by visible light-oriented camera sensors.

In the spirit of developing a proof-of-concept research project, I wanted to build an easily accessible, repeatable, and feature-rich AI-based mechanism showcasing as many different experiment parameters as I could. Nonetheless, I quickly realized that high-grade or even semi-professional UV-sensitive camera sensors were too expensive, complicated to implement, or somewhat restrictive for the features I envisioned. Even UV-only high-precision bandpass filters were too complex to utilize, since they are specifically designed for a handful of high-end full-spectrum digital camera architectures. Therefore, I started to scrutinize the documentation of various commercially available camera sensors to find a suitable candidate to produce results for my plastic surface anomaly detection mechanism by the direct application of UV (ultraviolet) radiation to plastic object surfaces. After my research, the Raspberry Pi camera module 3 stood out as a promising, cost-effective option since it is based on the CMOS 12-megapixel Sony IMX708 image sensor, which provides more than 40% blue-channel responsiveness at 400 nm. Although I knew the camera module 3 could not produce fully accurate UV-induced photography without heavily modifying the Bayer layer and the integrated camera filters, I decided to purchase one and experiment to see whether, by utilizing external camera filters, I could generate image samples accurate enough to expose a sufficient discrepancy between plastic surfaces with different defect stages under UV lighting.

In this regard, I started to inspect various blocking camera filters to pinpoint the wavelength range I required — 100–400 nm — by absorbing the visible light spectrum. After my research, I decided to utilize two different filter types separately to increase the breadth of UV-applied plastic surface image samples — a glass UV bandpass filter (ZWB ZB2) and color gel filters (with different light transmission levels: low, medium, and high).

Since I did not want to constrain my experiments to only one quality control condition by UV-exposure, I decided to employ three different UV light sources providing different wavelengths of ultraviolet radiation — 275 nm, 365 nm, and 395 nm.

✅ DFRobot UVC Ultraviolet Germicidal Lamp Strip (275 nm)

✅ DARKBEAM UV Flashlight (395 nm)

✅ DARKBEAM UV Flashlight (365 nm)

After conceptualizing my initial prototype with the mentioned components, I needed to find an applicable and repeatable method to produce plastic objects with varying stages of surface defects (none, high, and extreme), composed of different plastic materials. After thinking about different production methods, I decided to design a simple cube on Fusion 360 and alter the slicer settings to engender artificial but controlled surface defects (top layer bonding issues). In this regard, I was able to produce plastic objects (3D-printed) with a great deal of variation thanks to commercially available filament types, including UV-sensitive and reflective ones, resulting in an extensive image dataset of UV-applied plastic surfaces.

✅ Matte White

✅ Matte Khaki

✅ Shiny (Silk) White

✅ UV-reactive White (Fluorescent Blue)

✅ UV-reactive White (Fluorescent Green)

Before proceeding with developing my industrial-grade proof-of-concept device, I needed to ensure that all components, camera filters, UV light sources, and plastic materials (filaments) I chose were compatible and sufficient to generate the UV-applied plastic surface image samples with enough discrepancy (contrast), in accordance with the surface defect stages, to train a visual anomaly detection model. Therefore, I decided to build a simple data collection rig based on Raspberry Pi 4 to construct my dataset and review its validity. As I decided to utilize the Raspberry Pi camera module 3 Wide to cover more of the surface area of the target plastic objects, I designed unique multi-part camera lenses according to its 120° ultra-wide angle of view (AOV) to make the camera module 3 compatible with the glass UV bandpass filter and the color gel filters. Then, I designed two different rig bases (stands) compatible with UV light sources in the flashlight form and the strip form, enabling height adjustment while attaching the camera module case mounts (carrying lenses) to change the distance between the camera (image sensor) focal point and the target plastic object surface.

After building my simple data collection rig, I was able to:

✅ utilize two different types of camera filters — a glass UV bandpass filter (ZWB ZB2) and color gel filters (with different light transmission levels),

✅ adjust the distance between the camera (image sensor) focal point and the plastic object surfaces,

✅ apply three different UV wavelengths — 395 nm, 365 nm, and 275 nm — to the plastic object surfaces,

✅ and capture image samples of various plastic materials showcasing three different stages of surface defects — none, high, and extreme — while recording the concurrent experiment parameters.

After collecting UV-applied plastic surface images with all possible combinations of the mentioned experiment parameters, I managed to construct my extensive dataset and achieve a reliable discrepancy between the different surface defect stages to train a visual anomaly detection model. In this regard, I confirmed that the camera module 3 Wide produced sufficient UV-exposed image samples to continue developing my proof-of-concept mechanism.

After training and building my FOMO-AD (visual anomaly detection model) on Edge Impulse Studio successfully, I decided not to continue developing my mechanism with the Raspberry Pi 4 and migrated my project to the Raspberry Pi 5 since I wanted to capitalize on the Pi 5’s dual-CSI ports, which allowed me to utilize two different types of camera modules (regular Wide and NoIR Wide) simultaneously. I decided to add the secondary camera module 3 NoIR Wide, which is based on the same IMX708 image sensor but has no IR filter, to review the visual anomaly model behaviour with a regular camera and a night-vision camera simultaneously to develop a feature-rich industrial-grade surface defect detection mechanism.

After configuring my dual camera set-up and visual anomaly detection model (FOMO-AD) on Raspberry Pi 5, I started to work on designing a complex circular conveyor mechanism based on my previous data collection rig, letting me place plastic objects under two cameras (regular Wide and NoIR Wide) automatically and run inferences with the images produced by them simultaneously.

Since I wanted to develop a sprocket-chain circular conveyor mechanism rather than a belt-driven one, I needed to design a lot of custom mechanical components to achieve my objectives and conduct fruitful experiments. Since I wanted to apply a different approach rather than limit switches to align plastic objects under the focal points of the cameras, I decided to utilize neodymium magnets and two magnetic Hall-effect sensor modules. While building these complex parts, I encountered various issues and needed to go through different iterations to complete my conveyor mechanism until I was able to demonstrate the features I planned. I documented my design mistakes and adjustments below to explain my development process thoroughly for this research study :)

As I was starting to design the mechanical components, I decided to develop a unique controller board (PCB) as the primary interface of the sprocket-chain circular conveyor. To reduce the footprint of the controller board, I decided to utilize an ATmega328P and design the controller board (4-layer PCB) as a custom Raspberry Pi 5 shield (hat).

Finally, since I wanted to simulate the experience of operating an industrial-grade automation system, I developed an authentic web dashboard for the circular conveyor, which lets the user:

✅ review real-time inference results with timestamps,

✅ sort the inference results by camera type (regular or NoIR),

✅ and enable the Twilio integration to get the latest surface anomaly detection notifications as SMS.

By referring to the following tutorial, you can inspect the in-depth feature, design, and code explanations, along with the challenges I faced during the overall development process.

🎁📢 Huge thanks to ELECROW for sponsoring this project by providing their high-quality PCB manufacturing service:

Elecrow 4-layer PCB Service

[project images 1–63]

Development process, different prototype versions, design failures, and final results

As I was developing this research project, I encountered lots of problems due to the complex mechanical component designs, especially those related to the sprocket-chain mechanism, leading me to go through five different iterations. I thoroughly documented the overall development process for the final mechanism in the following written tutorial and showcased the features of the final version in the project demonstration videos.

Every feature of the final version of this proof-of-concept automation mechanism worked as planned and anticipated after my adjustments, with one exception: the stepper motors (Nema 17) around which I designed the primary internal gears could not handle the extra torque applied to my custom-designed ball bearings (with 5 mm steel beads) after I recalibrated the chain tension with additional tension pins. I explain the reasons for the tension recalibration thoroughly in the following steps. Consequently, for the demonstration videos, I needed to record some features related to sprocket movements (affixed to the outer gears pivoted on the ball bearings) with the chain removed or loosened.



Data Collection Rig - Step 1: Defining the parameters for this research study, planning experiments, and outlining the research roadmap

As I briefly talked about my thought process for deciding the experiment parameters and sourcing components in the introduction, I will thoroughly cover the progress of building the UV-applied plastic surface image sample (data) collection rig in this section.

The simple data collection rig is the first version of this research project, which helped me to ensure that all components, camera filters, UV light sources, and plastic materials (filaments) I chose were compatible and sufficient to produce an extensive UV-applied plastic surface image dataset with enough discrepancy (contrast) to train a visual anomaly detection model.

As mentioned, after meticulously inspecting the documentation of various commercially available camera sensors, I decided to employ the Raspberry Pi camera module 3 Wide (120°) to capture images of plastic surfaces, showcasing different surface defect stages, under varying UV wavelengths. I studied the spectral sensitivity of the CMOS 12-megapixel Sony IMX708 image sensor and other available Raspberry Pi camera modules on the official Raspberry Pi camera documentation.

Since I decided to benefit from external camera filters to capture UV-oriented image samples with enough discrepancy (contrast) in accordance with the inherent surface defects, instead of heavily modifying the Bayer layer and the integrated camera filters, I sourced nearly full-spectrum color gel filters with different light transmission levels for blocking visible light. By stacking up these color gel filters, I managed to capture accurate UV-induced plastic surface images in the dark.

  • Godox color gel filters with low light transmission
  • Godox color gel filters with medium light transmission
  • Godox color gel filters with high light transmission
project_image_64
project_image_65

Of course, only utilizing visible light-blocking color gel filters was not enough, considering the extent of this research study. In this regard, I also sourced a precise glass UV bandpass filter absorbing the visible light spectrum. Although I inspected the glass bandpass filter specifications from a different brand's documentation, I was only able to purchase one from AliExpress.

  • UV bandpass filter (25 mm glass ZWB ZB2)
project_image_66
project_image_67

As I did not want to constrain this research project to showcase only one UV light source type while experimenting with quality control conditions by the direct application of UV (ultraviolet radiation) to plastic object surfaces, I decided to purchase three different UV light sources providing different UV wavelength ranges.

  • DFRobot UVC Ultraviolet Germicidal Lamp Strip (275 nm)
  • DARKBEAM UV Flashlight (395 nm)
  • DARKBEAM UV Flashlight (365 nm)
project_image_68
project_image_69
project_image_70

Since I decided to manufacture plastic objects myself to control experiment parameters to develop a valid research project, I needed to find an applicable and repeatable method to produce plastic objects with varying stages of surface defects (none, high, and extreme) and source different plastic materials to produce a wide selection of plastic objects. After mulling over different production methods, I decided to produce my plastic objects with 3D printing and modify slicer settings to inflict artificial but controllable surface defects. Thanks to commercially available filament types, including UV-sensitive and reflective ones, I was able to source a great variety of materials to construct an extensive image dataset of UV-applied plastic surfaces.

  • ePLA-Matte Milky White
  • ePLA-Matte Light Khaki
  • eSilk-PLA White (Shiny)
  • PLA+ Luminous Green (UV-reactive - Fluorescent)
  • PLA+ Luminous Blue (UV-reactive - Fluorescent)

#️⃣ First, I designed a simple cube on Autodesk Fusion 360 with dimensions of 40.00 mm x 40.00 mm x 40.00 mm.

project_image_71

#️⃣ I exported the cube as an STL file and uploaded the exported STL file to Bambu Studio.

#️⃣ Then, I modified the slicer (Bambu Studio) settings to implement artificial surface defects, in other words, to inflict top-layer bonding issues.

#️⃣ Since I wanted to showcase three different surface defect stages — none, high, and extreme — I copied the cube three times on the slicer.

#️⃣ For all three cubes, I selected the sparse infill density as 10% to outline the inflicted surface defects.

#️⃣ I utilized the standard slicer settings for the first cube, depicting the none surface defect stage.

project_image_72

#️⃣ For the second cube, I reduced the top shell layer number to 0 and selected the top surface pattern as the monotonic line, representing the extreme surface defect stage.

project_image_73

#️⃣ For the third cube, I lowered the top shell layer number to 1 and selected the top surface pattern as the Hilbert curve, representing the high surface defect stage.

project_image_74
project_image_75

#️⃣ However, as shown in the print preview, only reducing the top shell layer number would not lead to the protruding high defect stage I had hoped for. Thus, I also reduced the top shell thickness to 0 to get the results I anticipated.

project_image_76
project_image_77
project_image_78
project_image_79

#️⃣ Since I decided to add the matte light khaki filament last, I sliced three khaki cubes with 15% sparse infill density to expand my plastic object sample size.

project_image_80
project_image_81

After meticulously printing the three cubes showcasing different surface defect stages with each filament, I produced all plastic objects (15 in total) required to construct an extensive dataset to train a visual anomaly detection model and develop my industrial-grade proof-of-concept surface defect detection mechanism.

[project images 82–92]

Data Collection Rig - Step 2: Designing unique camera lenses compatible with UV bandpass filter and color gel filters

Since I wanted to utilize external filters not compatible with the camera module 3 Wide, I needed to design unique camera lenses housing the color gel filters and the glass UV bandpass filter. In the case of the gel filters, I had to design the camera lens to make the color gel filters hot swappable while experimenting with different light transmission levels: low, medium, and high. Conversely, in the case of the glass bandpass filter, I had to design the camera lens to be as rigid as possible to avoid any light reaching the image sensor without passing through the bandpass filter. On top of all of these lens requirements, I also had to make sure that the color gel and UV bandpass filter lenses were easily changeable during my experiments.

After sketching different lens arrangements, I decided to design a unique multi-part case for the camera module 3, which gave me the freedom to design lenses with minimal alterations to the base of the camera case and mount.

As I was working on these components, I leveraged some open-source CAD files to obtain accurate measurements:

✒️ Raspberry Pi Camera Module v3 (Step) | Inspect

✒️ Raspberry Pi 4 Model B (Step) | Inspect

#️⃣ First, I designed the camera module case, mount, and lens for the color gel filters on Fusion 360.

#️⃣ Since the camera module 3 Wide has a 120° ultra-wide angle of view (AOV), I aligned the focal point of the image sensor and the horizontal borders of the lens accordingly.
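
As a side note, the 120° AOV also determines how much of the surface the sensor can see at a given lens height. Below is a minimal sketch (not from the original design files) estimating the covered surface width, under the simplifying assumption that the quoted 120° applies to the horizontal axis:

import math

# A rough estimate of the surface width visible at a given distance,
# treating the quoted 120° as the horizontal angle of view (a simplification).
def covered_width_mm(distance_mm, aov_deg=120):
    return 2 * distance_mm * math.tan(math.radians(aov_deg / 2))

for d in (30, 50):  # the two rack height levels used later (3 cm and 5 cm)
    print(f"{d} mm -> ~{covered_width_mm(d):.0f} mm of covered surface width")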

project_image_93
project_image_94
project_image_95

#️⃣ After completing the focal point alignment, I designed the glass UV bandpass filter lens by altering the color gel filter lens to protect the 120-degree horizontal border placement.

#️⃣ Since the camera module case is composed of stackable parts, it can be utilized without adding the filter lenses as a stand-alone case.

[project images 96–104]

After completing the camera module case, mount, and lens designs, I started to work on the placement of the camera module in relation to the target plastic object surface and the applied UV light source. To capture a precise UV-exposed image highlighting as many plastic surface defects as possible, I needed to make sure that the camera module's image sensor (IMX708) would catch the reflected ultraviolet radiation as optimally as possible during my experiments.

In this regard, I needed to align the focal point of the camera sensor and the focal point of the applied UV light source on perpendicular axes, intersecting at the center of the UV-applied plastic surface. Since I selected UV light sources in the flashlight and strip formats, the most efficient way to place my light sources was to calculate the arc angle required to create a concave (converging) shape to focus the ultraviolet radiation emitted by the UVC strip (275 nm) directly to the center of the target plastic surface. By knowing the center (focal point) of the calculated arc, I could easily place the remaining UV flashlights directly pointed at the center of the target plastic object.

#️⃣ As I decided to place UV light sources ten centimeters (100 mm) away from plastic objects and knew the length of the UVC strip, I was able to calculate the required arc angle effortlessly via this formula:

S = r * θ

S ➡ Arc length [length of the UVC strip]

r ➡ Radius [distance between the center of the plastic object surface and the arc center (focal point)]

θ ➡ Central angle [in radians]

Arc_angle [rad] = Arc_length / Radius
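
As a quick sanity check, here is the same calculation as a minimal Python sketch, assuming a hypothetical 160 mm strip length (substitute the measured length of your UVC strip):

import math

# Hypothetical values: a 160 mm UVC strip focused from 100 mm away.
arc_length_mm = 160.0  # S: length of the UVC strip (assumed value)
radius_mm = 100.0      # r: distance from the surface center to the arc center

theta_rad = arc_length_mm / radius_mm  # θ = S / r (central angle in radians)
print(f"Arc angle: {theta_rad:.3f} rad = {math.degrees(theta_rad):.1f}°")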

[project images 105–109]

Data Collection Rig - Step 3: Designing the rig bases (stands) compatible with UV flashlights and strips

After calculating the arc angle, I continued to design the rig base compatible with the UVC strip, providing the concave (converging) shape to focus the emitted ultraviolet radiation directly onto the target plastic object surface. I added a rack to the rig base that accepts the camera module mount (the rear part of the camera module case) at different levels (height adjustment). In this regard, the rig base lets the user change the distance between the camera (image sensor) focal point and the target plastic object surface effortlessly.

I also added a simple holder to place the Raspberry Pi 4 on the top of the rig base easily while positioning the camera module case, providing hex-shaped snug-fit peg joints for easy installation.

As mentioned earlier, by knowing the center (focal point) of the calculated arc, I was able to modify the concave shape of the rig base for the UVC strip to design a subsequent rig base compatible with the remaining UV light sources in the flashlight format.

[project images 110–124]

#️⃣ After completing the overall rig design, I exported all parts as STL files and uploaded them to Bambu Studio.

#️⃣ To boost the rigidity of the camera module case parts to produce a sturdy frame while experimenting with the external camera filters, I increased the wall loop (perimeter) number to 3.

#️⃣ To precisely place tree supports, I utilized support blockers while slicing the flashlight rig base.

[project images 125–140]

Data Collection Rig - Step 4: Assembling the data collection rig and the custom camera filter lenses

I printed all of the data collection rig parts with my Bambu Lab A1 Combo, which also helped me a lot while printing the plastic objects with different filaments thanks to its integrated AMS lite.

#️⃣ First, I started to assemble the multi-part camera module case. Since I designed all case parts stackable to swap external camera filters without changing the case frame, I was able to assemble the whole case with four M3 screw-nut pairs.

#️⃣ Since I specifically designed the external color gel filter camera lens to make gel filters hot swappable while experimenting with different light transmission levels — low, medium, and high — I was able to affix the gel camera lens directly to the case frame.

[project images 141–150]

#️⃣ After completing the assembly of the camera module case with the external gel filter lens, I connected the camera module 3 to the Raspberry Pi 4 via an FFC cable (150 mm) to test the fidelity of the captured images.

project_image_151
project_image_152
project_image_153
project_image_154

#️⃣ On the other hand, as discussed, I designed the external UV bandpass filter camera lens as rigid as possible to avoid any light reaching the image sensor without passing through the glass UV bandpass filter. Therefore, I diligently applied instant glue (super glue) to permanently affix the glass bandpass filter to its unique camera lens.

[project images 155–160]

#️⃣ After installing M3 brass threaded inserts with my TS100 soldering iron to strengthen the connection between the rig bases and the Raspberry Pi 4 holder, I continued to attach the UV light sources to their respective rig bases.

#️⃣ As the 275 nm UVC strip (FPC circuit board) came with an adhesive tape side, I was able to fasten the UVC strip to the dedicated concave shape of the rig base effortlessly.

#️⃣ As I specifically designed the subsequent rig base considering the measurements of my UV light sources in the flashlight format (395 nm and 365 nm), the installation of UV flashlights was as easy as sliding them into their dedicated slot.

[project images 161–183]

#️⃣ After installing the UV light sources into their respective rig bases successfully, to initiate my preliminary experiments, I attached the camera module case to the rack of the UV flashlight-compatible rig base by utilizing four M3 screw-nut pairs.

#️⃣ Then, I attached the Raspberry Pi 4 holder to the top of the rig base via M3 screws through the peg joints and placed the Raspberry Pi 4 onto its holder.

[project images 184–195]

Data Collection Rig - Step 5: Setting up and programming Raspberry Pi 4 to capture images with the camera module 3 while logging the applied experiment parameters

As you might have noticed, I have always explained setting up the Raspberry Pi OS in my previous tutorials. Nonetheless, the latest version of the Raspberry Pi Imager is very straightforward to the point of letting the user configure the SSH authentication method and the Wi-Fi credentials. You can inspect the official Raspberry Pi Imager documentation here.

#️⃣ After setting up Raspberry Pi OS successfully, I installed the required Python modules (libraries) to continue developing.

sudo apt-get update

sudo apt-get install python3-opencv
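
Note: the picamera2 library imported by the following script ships preinstalled with recent Raspberry Pi OS releases; if your image lacks it, it can be installed with the matching apt package:

sudo apt-get install python3-picamera2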

project_image_196

#️⃣ After updating the system and installing the required libraries, I started to work on the Python script to capture UV-applied plastic surface image samples and allow the user to record the concurrent experiment parameters to the image file names by entering user inputs.

📁 uv_defect_detection_collect_data_w_rasp_4_camera_mod_wide.py

⭐ Include the required system and third-party libraries.

⭐ Uncomment to modify the libcamera log level to bypass the libcamera warnings if you want clean shell messages while entering user inputs.

import cv2
from picamera2 import Picamera2, Preview
from time import sleep
from threading import Thread

# Uncomment to disable libcamera warnings while collecting data.
#import os
#os.environ["LIBCAMERA_LOG_LEVELS"] = "4"

#️⃣ To bundle all the functions to write a more concise script, I used a Python class.

⭐ In the __init__ function:

⭐ Define a picamera2 object for the Raspberry Pi camera module 3 Wide.

⭐ Define the output format and size (resolution) of the captured images to obtain an OpenCV-compatible buffer — RGB888. Then, configure the picamera2 object accordingly.

⭐ Initialize the video stream (feed) produced by the camera module 3.

⭐ Describe all possible experiment parameters in a Python dictionary for easy access.

class uv_defect_detection():
    def __init__(self):
        # Define the Raspberry Pi camera module 3 object.
        self.picam2 = Picamera2()
        # Define the camera module output format and size, considering OpenCV frame compatibility.
        capture_config = self.picam2.create_preview_configuration(raw={}, main={"format":"RGB888", "size":(640,640)})
        self.picam2.configure(capture_config)
        # Initialize the camera module video stream (feed).
        self.picam2.start()
        sleep(2)
        # Describe the UV-based surface anomaly detection parameters, including the object materials and the applied camera filter types.
        self.uv_params = {
            "cam_focal_surface_distance": ["3cm", "5cm"],
            "uv_source_wavelength": ["275nm", "365nm", "395nm"],
            "material": ["matte_white", "matte_khaki", "shiny_white", "fluorescent_blue", "fluorescent_green"],
            "filter_type": ["gel_low_tr", "gel_medium_tr", "gel_high_tr", "uv_bandpass"],
            "surface_defect": ["none", "high", "extreme"]           
        }
        self.total_captured_sample_num = 0
		
		...

⭐ In the display_camera_feed function:

⭐ Obtain the latest frame generated by the camera module 3.

⭐ Then, show the obtained frame on the screen via the built-in OpenCV tools.

⭐ Stop the camera feed and terminate the OpenCV windows once requested.

    def display_camera_feed(self):
        # Display the real-time video stream (feed) produced by the camera module 3.
        self.latest_frame = self.picam2.capture_array()
        cv2.imshow("UV-based Surface Defect Detection Preview", self.latest_frame)
        # Stop the camera feed once requested.
        if cv2.waitKey(1) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            self.picam2.stop()
            self.picam2.close()
            print("\nCamera Feed Stopped!")

⭐ In the camera_feed function, initiate the loop to show the latest frames consecutively to observe the real-time video stream (feed).

    def camera_feed(self):
        # Start the camera video stream (feed) loop.
        while True:
            self.display_camera_feed()

⭐ In the save_uv_img_samples function:

⭐ Define the file name and path of the current image sample by applying the passed experiment parameters.

⭐ Up to the passed batch number, save the latest successive frames with the given file name and path, differentiated by the sample number.

⭐ Wait half a second before obtaining the next available frame.

    def save_uv_img_samples(self, params, batch):
        # Based on the provided UV parameters, create the image sample name.
        img_file = "uv_samples/{}/{}_{}_{}_{}_{}".format(self.uv_params["surface_defect"][int(params[4])],
                                                         self.uv_params["cam_focal_surface_distance"][int(params[0])],
                                                         self.uv_params["uv_source_wavelength"][int(params[1])],
                                                         self.uv_params["material"][int(params[2])],
                                                         self.uv_params["filter_type"][int(params[3])],
                                                         self.uv_params["surface_defect"][int(params[4])]
                                                         )
        # Save the latest frames captured by the camera module consecutively according to the passed batch number.
        for i in range(batch):
            self.total_captured_sample_num += 1
            if (self.total_captured_sample_num > 30): self.total_captured_sample_num = 1
            _img_file = img_file + "_{}.jpg".format(self.total_captured_sample_num)
            cv2.imwrite(_img_file, self.latest_frame)
            # Wait before getting the next available frame.
            sleep(0.5)
            print("UV-exposed Surface Image Sample Saved: " + _img_file)

⭐ In the obtain_and_decode_input function:

⭐ Initiate the loop to obtain user inputs continuously.

⭐ Once the user input is fetched, decode the retrieved string to obtain the given experiment parameters as an array. Then, check the number of the extracted experiment parameters.

⭐ If matched, capture image samples up to the given batch number (10) and record the given experiment parameters to the sample file names.

    def obtain_and_decode_input(self):
        # Initiate the user input prompt to obtain the current UV parameters to capture image samples.
        while True:
            passed_params = input("Please enter the current UV parameters:")
            # Decode the passed string to extract the provided UV parameters.
            decoded_params = passed_params.split(",")
            # Check the number of the given parameters.
            if (len(decoded_params) == 5):
                # If matched, capture image samples according to the passed batch number.
                self.save_uv_img_samples(decoded_params, 10)
            else:
                print("Wrong parameters!")

#️⃣ As the built-in Python input function needs to check for new user input without interruptions, it cannot run with the real-time video stream generated by OpenCV in the same operation (runtime), which processes the latest frames produced by the camera module 3 continuously. Therefore, I utilized the built-in Python threading module to run multiple operations concurrently and synchronize them.

⭐ Define the uv_defect_detection class object.

⭐ Declare and initialize a Python thread for running the real-time video stream (feed).

⭐ Outside of the video stream operation (thread), check new user inputs continuously to obtain the provided experiment parameters.

uv_defect_detection_obj = uv_defect_detection()

# Declare and initialize Python thread for the camera module video stream (feed).
Thread(target=uv_defect_detection_obj.camera_feed).start()

# Obtain the provided UV parameters as user input continuously.
uv_defect_detection_obj.obtain_and_decode_input()
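
For clarity, here is how an illustrative user-input string maps to the stored samples, following the parameter dictionary defined in the __init__ function (the input values below are only an example):

# User input: 0,2,0,0,1
#   cam_focal_surface_distance[0] -> "3cm"
#   uv_source_wavelength[2]       -> "395nm"
#   material[0]                   -> "matte_white"
#   filter_type[0]                -> "gel_low_tr"
#   surface_defect[1]             -> "high"
# Saves ten consecutive frames named:
#   uv_samples/high/3cm_395nm_matte_white_gel_low_tr_high_1.jpg ... _10.jpg
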
project_image_197
project_image_198

Data Collection Rig - Step 6: Constructing an extensive image dataset of surfaces of various plastic materials with different defect states under 395 nm, 365 nm, and 275 nm UV wavelengths

After finishing programming the Raspberry Pi 4, I proceeded to capture UV-applied plastic surface image samples showcasing all combinations of the experiment parameters to construct my extensive dataset.

I would like to reiterate all experiment parameters to elucidate the extent of the completed image dataset.

#️⃣ I utilized three different UV light (radiation) sources, providing varying wavelength ranges.

  • 275 nm
  • 365 nm
  • 395 nm

#️⃣ I designed three cubes showcasing different surface defect stages.

  • none
  • high
  • extreme

#️⃣ I printed these three cubes with five different plastic materials (filaments) to increase my sample size.

  • Matte White
  • Matte Khaki
  • Shiny (Silk) White
  • UV-reactive White (Fluorescent Blue)
  • UV-reactive White (Fluorescent Green)

#️⃣ I applied two different types of external camera filters, making it four different filter options due to the gel filters' light transmission levels.

  • UV bandpass filter (glass)
  • Gel filters with low light transmission
  • Gel filters with medium light transmission
  • Gel filters with high light transmission

#️⃣ I stacked up four different primary colors provided by my gel filter set to pass the required blue-oriented wavelength range and block the rest of the visible light spectrum.

#️⃣ Since my color gel filter set included three gel filters for each primary color with varying light transmission levels, I decided to use low, medium, and high color gel filter groups, sets of four primary colors, during my experiments.

project_image_199
project_image_200

#️⃣ Since I specifically designed the rig base racks to be able to attach the camera module case mounts (carrying external lenses) at different height levels, I was able to adjust the distance between the camera (image sensor) focal point and the target plastic object surface. In this regard, I collected image samples at two different height levels to acquire samples with different zoom percentages.

  • 3 cm
  • 5 cm
project_image_201
project_image_202
project_image_203
project_image_204

#️⃣ Considering all of the mentioned experiment parameters, I painstakingly collected UV-applied plastic surface image samples with every possible combination and constructed my extensive dataset successfully.

  • none / 3 cm / 395 nm / Gel (low transmission)
  • high / 3 cm / 395 nm / Gel (low transmission)
  • extreme / 3 cm / 395 nm / Gel (low transmission)
  • none / 3 cm / 395 nm / Gel (medium transmission)
  • high / 3 cm / 395 nm / Gel (medium transmission)
  • extreme / 3 cm / 395 nm / Gel (medium transmission)
  • none / 3 cm / 395 nm / Gel (high transmission)
  • high / 3 cm / 395 nm / Gel (high transmission)
  • extreme / 3 cm / 395 nm / Gel (high transmission)
  • none / 3 cm / 395 nm / UV bandpass
  • high / 3 cm / 395 nm / UV bandpass
  • extreme / 3 cm / 395 nm / UV bandpass
  • none / 3 cm / 365 nm / Gel (low transmission)
  • high / 3 cm / 365 nm / Gel (low transmission)
  • extreme / 3 cm / 365 nm / Gel (low transmission)
  • none / 3 cm / 365 nm / Gel (medium transmission)
  • high / 3 cm / 365 nm / Gel (medium transmission)
  • extreme / 3 cm / 365 nm / Gel (medium transmission)
  • none / 3 cm / 365 nm / Gel (high transmission)
  • high / 3 cm / 365 nm / Gel (high transmission)
  • extreme / 3 cm / 365 nm / Gel (high transmission)
  • none / 3 cm / 365 nm / UV bandpass
  • high / 3 cm / 365 nm / UV bandpass
  • extreme / 3 cm / 365 nm / UV bandpass
  • none / 3 cm / 275 nm / Gel (low transmission)
  • high / 3 cm / 275 nm / Gel (low transmission)
  • extreme / 3 cm / 275 nm / Gel (low transmission)
  • none / 3 cm / 275 nm / Gel (medium transmission)
  • high / 3 cm / 275 nm / Gel (medium transmission)
  • extreme / 3 cm / 275 nm / Gel (medium transmission)
  • none / 3 cm / 275 nm / Gel (high transmission)
  • high / 3 cm / 275 nm / Gel (high transmission)
  • extreme / 3 cm / 275 nm / Gel (high transmission)
  • none / 3 cm / 275 nm / UV bandpass
  • high / 3 cm / 275 nm / UV bandpass
  • extreme / 3 cm / 275 nm / UV bandpass
  • none / 5 cm / 395 nm / Gel (low transmission)
  • high / 5 cm / 395 nm / Gel (low transmission)
  • extreme / 5 cm / 395 nm / Gel (low transmission)
  • none / 5 cm / 395 nm / Gel (medium transmission)
  • high / 5 cm / 395 nm / Gel (medium transmission)
  • extreme / 5 cm / 395 nm / Gel (medium transmission)
  • none / 5 cm / 395 nm / Gel (high transmission)
  • high / 5 cm / 395 nm / Gel (high transmission)
  • extreme / 5 cm / 395 nm / Gel (high transmission)
  • none / 5 cm / 395 nm / UV bandpass
  • high / 5 cm / 395 nm / UV bandpass
  • extreme / 5 cm / 395 nm / UV bandpass
  • none / 5 cm / 365 nm / Gel (low transmission)
  • high / 5 cm / 365 nm / Gel (low transmission)
  • extreme / 5 cm / 365 nm / Gel (low transmission)
  • none / 5 cm / 365 nm / Gel (medium transmission)
  • high / 5 cm / 365 nm / Gel (medium transmission)
  • extreme / 5 cm / 365 nm / Gel (medium transmission)
  • none / 5 cm / 365 nm / Gel (high transmission)
  • high / 5 cm / 365 nm / Gel (high transmission)
  • extreme / 5 cm / 365 nm / Gel (high transmission)
  • none / 5 cm / 365 nm / UV bandpass
  • high / 5 cm / 365 nm / UV bandpass
  • extreme / 5 cm / 365 nm / UV bandpass
  • none / 5 cm / 275 nm / Gel (low transmission)
  • high / 5 cm / 275 nm / Gel (low transmission)
  • extreme / 5 cm / 275 nm / Gel (low transmission)
  • none / 5 cm / 275 nm / Gel (medium transmission)
  • high / 5 cm / 275 nm / Gel (medium transmission)
  • extreme / 5 cm / 275 nm / Gel (medium transmission)
  • none / 5 cm / 275 nm / Gel (high transmission)
  • high / 5 cm / 275 nm / Gel (high transmission)
  • extreme / 5 cm / 275 nm / Gel (high transmission)
  • none / 5 cm / 275 nm / UV bandpass
  • high / 5 cm / 275 nm / UV bandpass
  • extreme / 5 cm / 275 nm / UV bandpass

#️⃣ As shown in the Python script documentation, I generated separate folders for each defect stage and recorded the applied experiment parameters to the image file names to produce a self-explanatory dataset for training a valid visual anomaly detection model.

  • /none (3600 samples)
  • /high (3600 samples)
  • /extreme (3600 samples)
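
As a quick sanity check, these folder counts follow directly from the experiment parameters. A minimal sketch (the 30 samples per combination matches the sample counter reset in the Python script):

# 2 distances x 3 wavelengths x 4 filter options x 5 materials x 30 samples each
print(2 * 3 * 4 * 5 * 30)  # 3600 image samples per surface defect stage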

Since I thought this dataset might be beneficial for different materials science projects, I wanted to make it open-source for anyone interested in training a neural network model with my samples or adding them to their existing project. Please refer to the project GitHub repository to examine the UV-applied plastic surface image dataset.

📌 Inspecting gel filters

[project images 205–218]

🔎 3 cm / 395 nm / Gel (low transmission)

[project images 219–238]

🔎 3 cm / 395 nm / Gel (medium transmission)

project_image_239
project_image_240
project_image_241

🔎 3 cm / 395 nm / Gel (high transmission)

project_image_242
project_image_243
project_image_244

🔎 3 cm / 365 nm / Gel (low transmission)

[project images 245–249]

🔎 5 cm / 395 nm / Gel (low transmission)

project_image_250
project_image_251
project_image_252
project_image_253

🔎 5 cm / 365 nm / Gel (low transmission)

project_image_254
project_image_255
project_image_256

🔎 3 cm / 275 nm / Gel (low transmission)

[project images 257–264]

🔎 5 cm / 275 nm / Gel (low transmission)

project_image_265
project_image_266
project_image_267

🔎 3 cm / 395 nm / UV bandpass

[project images 268–274]

🔎 3 cm / 365 nm / UV bandpass

project_image_275
project_image_276
project_image_277
project_image_278

🔎 5 cm / 395 nm / UV bandpass

project_image_279
project_image_280
project_image_281

🔎 5 cm / 365 nm / UV bandpass

project_image_282
project_image_283
project_image_284

🔎 3 cm / 275 nm / UV bandpass

project_image_285
project_image_286
project_image_287

🔎 5 cm / 275 nm / UV bandpass

project_image_288
project_image_289
project_image_290

🖥️ Real-time video stream on Raspberry Pi 4 while collecting image samples

[project images 291–335]

Circular Conveyor - Step 0: Migrating project from Raspberry Pi 4 to Raspberry Pi 5 to utilize two different camera module 3 versions (regular Wide and NoIR Wide) simultaneously

After successfully concluding my experiments with the data collection rig and constructing the UV-applied plastic surface image dataset with enough discrepancy (contrast) to train a visual anomaly detection model, I started to work on developing the industrial-grade proof-of-concept circular conveyor mechanism to explore different aspects of utilizing the substantial data I was collecting in a real-world manufacturing setting.

After training and building my FOMO-AD (visual anomaly detection) model on Edge Impulse Studio successfully — the training process is explained in the following step — I came to the conclusion that utilizing only the camera module with which I constructed my dataset was not applicable to a real-world scenario, since camera types and attributes differ across manufacturing settings. Thus, to review my visual anomaly detection model's behaviour with image samples generated by a different camera type, I decided to add a secondary camera to my mechanism. As the secondary camera, I selected the NoIR version of the Raspberry Pi camera module 3, which is based on the same IMX708 image sensor but has no integrated IR filter, producing distinctly different UV-induced image samples from the regular Wide module while following the same procedure.

In this regard, I decided to migrate my project from the Raspberry Pi 4 to the Raspberry Pi 5 since I wanted to capitalize on the Pi 5’s dual-CSI ports, which allowed me to utilize two different types of camera modules (regular Wide and NoIR Wide) simultaneously and develop a feature-rich industrial-grade surface defect detection mechanism employing a regular camera and a night-vision camera.

#️⃣ Similar to the Raspberry Pi 4, after setting up the Raspberry Pi OS on the Raspberry Pi 5 via the Raspberry Pi Imager, I installed the required Python modules (libraries) to continue developing.

sudo apt-get update

sudo apt-get install python3-opencv

project_image_336

#️⃣ Unlike those of the Raspberry Pi 4, the dual CSI ports of the Raspberry Pi 5 are not compatible with FFC cables. Thus, I purchased official FPC connector cables (300 mm and 500 mm) to attach the regular Wide and the NoIR Wide camera modules to the respective CSI ports.

project_image_337
project_image_338
project_image_339
project_image_340

#️⃣ Before proceeding with developing my circular conveyor mechanism with the dual camera setup, I needed to establish the workflow for running both cameras simultaneously. Thus, I decided to modify my previous Python script for capturing UV-applied plastic surface image samples with the camera module 3 Wide.

Even though I programmed the Raspberry Pi 5 to capture image samples produced by two different camera modules simultaneously, I did not expand my dataset or retrain the model with images generated by the camera module 3 NoIR Wide, as I wanted to study my model's behaviour while running inferences in a different manufacturing setting.

📁 uv_defect_detection_collect_data_w_rasp_5_camera_mod_wide_and_noir.py

⭐ Include the required system and third-party libraries.

⭐ Uncomment to modify the libcamera log level to bypass the libcamera warnings if you want clean shell messages while entering user inputs.

import cv2
from picamera2 import Picamera2, Preview
from time import sleep
from threading import Thread

# Uncomment to disable libcamera warnings while collecting data.
#import os
#os.environ["LIBCAMERA_LOG_LEVELS"] = "4"

#️⃣ To bundle all the functions to write a more concise script, I used a Python class.

⭐ In the __init__ function:

⭐ Define a picamera2 object addressing the CSI port of the Raspberry Pi camera module 3 Wide.

⭐ Define the output format and size (resolution) of the images captured by the regular camera module 3 to obtain an OpenCV-compatible buffer — RGB888. Then, configure the picamera2 object accordingly.

⭐ Initialize the video stream (feed) produced by the regular camera module 3.

⭐ Define a secondary picamera2 object addressing the CSI port of the Raspberry Pi camera module 3 NoIR Wide.

⭐ Define the output format and size (resolution) of the images captured by the camera module 3 NoIR to obtain an OpenCV-compatible buffer — RGB888. Then, configure the picamera2 object accordingly.

⭐ Initialize the video stream (feed) produced by the camera module 3 NoIR.

⭐ Describe all possible experiment parameters in a Python dictionary for easy access.

⭐ Define the camera attributes and respective total sample numbers for the concurrent data collection process.

class uv_defect_detection():
    def __init__(self):
        # Define the Picamera2 object for communicating with the Raspberry Pi camera module 3 Wide.
        self.cam_wide = Picamera2(0)
        # Define the camera module frame output format and size, considering OpenCV frame compatibility.
        capture_config = self.cam_wide.create_preview_configuration(raw={}, main={"format":"RGB888", "size":(640,640)})
        self.cam_wide.configure(capture_config)
        # Initialize the camera module continuous video stream (feed).
        self.cam_wide.start()
        sleep(2)
        
        # Define the Picamera2 object for communicating with the Raspberry Pi camera module 3 NoIR Wide.
        self.cam_noir_wide = Picamera2(1)
        # Define the camera module NoIR frame output format and size, considering OpenCV frame compatibility.
        capture_config_noir = self.cam_noir_wide.create_preview_configuration(raw={}, main={"format":"RGB888", "size":(640,640)})
        self.cam_noir_wide.configure(capture_config_noir)
        # Initialize the camera module NoIR continuous video stream (feed).
        self.cam_noir_wide.start()
        sleep(2)
        
        # Describe the surface anomaly detection conditions based on UV-exposure, including plastic material types, applied UV wavelengths, and the employed camera filter categories.
        self.uv_params = {
            "cam_focal_surface_distance": ["3cm", "5cm"],
            "uv_source_wavelength": ["275nm", "365nm", "395nm"],
            "material": ["matte_white", "matte_khaki", "shiny_white", "fluorescent_blue", "fluorescent_green"],
            "filter_type": ["gel_low_tr", "gel_medium_tr", "gel_high_tr", "uv_bandpass"],
            "surface_defect": ["none", "high", "extreme"]           
        }
        
        # Define the required camera information for the data collection process.
        self.active_cam_info = [{"name": "wide", "total_captured_sample_num": 0}, {"name": "wide_noir", "total_captured_sample_num": 0}]
        
		...

⭐ In the display_camera_feeds function:

⭐ Obtain the latest frame generated by the regular camera module 3.

⭐ Show the obtained frame on the screen via the built-in OpenCV tools.

⭐ Then, obtain the latest frame produced by the camera module 3 NoIR and show the retrieved frame in a separate window on the screen via the built-in OpenCV tools.

⭐ Stop both camera feeds (regular Wide and NoIR Wide) and terminate individual OpenCV windows once requested.

    def display_camera_feeds(self):
        # Display the real-time video stream (feed) produced by the camera module 3 Wide.
        self.latest_frame_wide = self.cam_wide.capture_array()
        cv2.imshow("UV-based Surface Defect Detection [Wide Preview]", self.latest_frame_wide)
        # Display the real-time video stream (feed) produced by the camera module 3 NoIR Wide.
        self.latest_frame_noir = self.cam_noir_wide.capture_array()
        cv2.imshow("UV-based Surface Defect Detection [NoIR Preview]", self.latest_frame_noir)            
        # Stop all camera feeds once requested.
        if cv2.waitKey(1) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            self.cam_wide.stop()
            self.cam_wide.close()
            print("\nWide Camera Feed Stopped\n")
            self.cam_noir_wide.stop()
            self.cam_noir_wide.close()
            print("\nWide NoIR Camera Feed Stopped!\n")

⭐ In the camera_feeds function, initiate the loop to show the latest frames produced by the regular Wide and NoIR Wide camera modules consecutively to observe the real-time video streams (feeds) simultaneously.

    def camera_feeds(self):
        # Start the camera video streams (feeds) in a loop.
        while True:
            self.display_camera_feeds()

⭐ In the save_uv_img_samples function:

⭐ Define the file name and path of the current image sample by applying the passed experiment parameters.

⭐ The given parameters also determine whether the latest frame should be obtained from the regular camera module or the NoIR camera module.

⭐ Up to the passed batch number, save the latest successive frames generated by the selected camera module (regular or NoIR) with the given file name and path, differentiated by the sample number.

⭐ Wait half a second before obtaining the next available frame.

    def save_uv_img_samples(self, params, batch):
        # Based on the provided UV-based anomaly detection conditions and the selected camera type, generate the given image sample path and partial file name.
        selected_cam = self.active_cam_info[int(params[5])]["name"]
        img_file = "uv_samples/{}/{}/{}_{}_{}_{}_{}".format(
                                                             selected_cam,
                                                             self.uv_params["surface_defect"][int(params[4])],
                                                             self.uv_params["cam_focal_surface_distance"][int(params[0])],
                                                             self.uv_params["uv_source_wavelength"][int(params[1])],
                                                             self.uv_params["material"][int(params[2])],
                                                             self.uv_params["filter_type"][int(params[3])],
                                                             self.uv_params["surface_defect"][int(params[4])]
                                                         )
        
        # Save the latest frames captured by the selected camera type — the camera module 3 Wide or the camera module 3 NoIR Wide — consecutively according to the passed batch number.
        for i in range(batch):
            self.active_cam_info[int(params[5])]["total_captured_sample_num"] += 1
            if (self.active_cam_info[int(params[5])]["total_captured_sample_num"] > 30): self.active_cam_info[int(params[5])]["total_captured_sample_num"] = 1
            _img_file = img_file + "_{}.jpg".format(self.active_cam_info[int(params[5])]["total_captured_sample_num"])
            if(selected_cam == "wide"):
                cv2.imwrite(_img_file, self.latest_frame_wide)
            elif(selected_cam == "wide_noir"):
                cv2.imwrite(_img_file, self.latest_frame_noir)                
            # Wait before getting the next available frame.
            sleep(0.5)
            print("UV-exposed Surface Image Sample Saved [" + selected_cam + "]: " + _img_file)

⭐ In the obtain_and_decode_input function:

⭐ Initiate the loop to obtain user inputs continuously.

⭐ Once the user input is fetched, decode the retrieved string to obtain the given experiment parameters as an array. Then, check the number of the extracted experiment parameters.

⭐ If matched, capture image samples up to the given batch number (10) with the selected camera module and record the given experiment parameters to the sample file names.

    def obtain_and_decode_input(self):
        # Initiate the user input prompt to obtain the given UV-exposure conditions for the data collection process.
        while True:
            passed_params = input("Please enter the current UV-exposure conditions:")
            # Decode the passed string to extract the provided parameters.
            decoded_params = passed_params.split(",")
            # Check the number of the extracted parameters.
            if (len(decoded_params) == 6):
                # If matched, capture image samples according to the passed batch number — 10.
                self.save_uv_img_samples(decoded_params, 10)
            else:
                print("Incorrect parameter number!")

#️⃣ As the built-in Python input function needs to check for new user input without interruptions, it cannot run with the real-time video streams generated by OpenCV in the same operation (runtime), which processes the latest frames produced by the regular Wide and NoIR Wide camera modules continuously. Therefore, I utilized the built-in Python threading module to run multiple operations concurrently and synchronize them.

⭐ Define the uv_defect_detection class object.

⭐ Declare and initialize a Python thread for running the real-time video streams (feeds) produced by the regular camera module 3 and the camera module 3 NoIR.

⭐ Outside of the video streams operation (thread), check new user inputs continuously to obtain the provided experiment parameters.

uv_defect_detection_obj = uv_defect_detection()

# Declare and initialize a Python thread for the camera module 3 Wide and the camera module 3 NoIR Wide video streams (feeds).
Thread(target=uv_defect_detection_obj.camera_feeds).start()

# Obtain the provided UV-exposure conditions as user input continuously.
uv_defect_detection_obj.obtain_and_decode_input()
project_image_341
project_image_342
project_image_343

Circular Conveyor - Step 1: Building a visual anomaly detection model (FOMO-AD) w/ Edge Impulse Enterprise

Since Edge Impulse provides developer-friendly tools for advanced AI applications and supports almost every development board due to its model deployment options, I decided to utilize Edge Impulse Enterprise to build my visual anomaly detection model. Also, Edge Impulse Enterprise incorporates elaborate model architectures for advanced computer vision applications and optimizes the state-of-the-art vision models for edge devices and single-board computers such as the Raspberry Pi 5.

Among the diverse machine learning algorithms provided by Edge Impulse, I decided to employ FOMO-AD (visual anomaly detection), which is specifically developed for handling unseen data, like defects in a product during manufacturing.

While labeling the UV-applied plastic surface image samples, I needed to utilize the default classes required by Edge Impulse to enable the F1 score calculation:

  • no anomaly
  • anomaly

Conveniently, Edge Impulse Enterprise provides developers with advanced tools to build, optimize, and deploy each available machine learning algorithm as supported firmware for nearly any device you can think of. Therefore, after training and validating, I was able to deploy my FOMO-AD model as an EIM binary for Linux (AARCH64), compatible with Raspberry Pi 5.

To utilize the advanced AI tools provided by Edge Impulse, you can register here.

Furthermore, you can inspect this FOMO-AD visual anomaly detection model on Edge Impulse as a public project.

Circular Conveyor - Step 1.1: Uploading and labeling the UV-applied plastic surface image samples

#️⃣ First, I created a new project on my Edge Impulse Enterprise account.

project_image_344

#️⃣ To label image samples manually for FOMO-AD visual anomaly detection models, go to Dashboard ➡ Project info ➡ Labeling method and select One label per data item.

#️⃣ To upload training and testing UV-applied plastic surface image samples as individual files, I opened the Data acquisition section and clicked the Upload data icon.

project_image_345

#️⃣ I utilized default Edge Impulse configurations to distinguish training and testing image samples to enable the F1 score calculation.

#️⃣ For training samples, I selected the Training category and entered no anomaly as their shared label.

#️⃣ For testing samples, I selected the Testing category and entered anomaly as their shared label.

As I wanted this visual anomaly detection model to represent all of my experiments, I uploaded all image samples with the none surface defect stage as the training samples and all image samples with the extreme surface defect stage as the testing samples.

  • /none (3600 samples)
  • /extreme (3600 samples)
project_image_346
project_image_347
project_image_348
project_image_349
project_image_350
project_image_351
project_image_352
project_image_353
project_image_354
project_image_355
project_image_356
project_image_357

Circular Conveyor - Step 1.2: Training the FOMO-AD (visual anomaly detection) model

An impulse (an application developed and optimized by Edge Impulse) takes raw data, applies signal processing to extract features, and then utilizes a learning block to classify new data.

For my application, I created the impulse by employing the Image processing block and the Visual Anomaly Detection - FOMO-AD learning block.

The Image processing block converts the passed raw image input to grayscale or RGB (optional) to produce a reliable features array.

The FOMO-AD learning block implements the officially supported visual anomaly detection algorithms, combining a selectable backbone for feature extraction with a scoring function (PatchCore or GMM anomaly detection).

#️⃣ First, I opened the Impulse design ➡ Create impulse section, set the model image resolution to 320 x 320, and selected the Fit shortest axis resize mode so as to scale (resize) the given image samples precisely. To complete the impulse creation, I clicked Save Impulse.

project_image_358
project_image_359

#️⃣ To convert the raw image features to the applicable format, I navigated to the Impulse design ➡ Image section, set the Color depth parameter as RGB, and clicked Save parameters.

project_image_360

#️⃣ Then, I proceeded to click Generate features to extract the required features for training by applying the Image processing block.

project_image_361
project_image_362
project_image_363
project_image_364
project_image_365

#️⃣ After extracting features successfully, I navigated to the Impulse design ➡ Visual Anomaly Detection section and modified the neural network settings and architecture to achieve reliable accuracy and validity.

#️⃣ First, I selected the Training processor as GPU since I uploaded an extensive dataset providing more than 3000 training image samples.

#️⃣ According to my prolonged experiments, I assigned the final model settings as follows.

📌 Training settings:

  • Training processor ➡ GPU
  • Capacity ➡ High

📌 Neural network architecture:

  • MobileNetV2 0.35
  • Gaussian Mixture Model (GMM)

#️⃣ Setting the Capacity parameter higher increases the number of (Gaussian) components, making the visual anomaly detection model more closely adapted to the original distribution of the training data.
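
#️⃣ To illustrate the underlying idea of the GMM scoring function (a minimal sketch of the general technique, not Edge Impulse's internal implementation), a Gaussian Mixture Model can be fitted to feature vectors extracted from defect-free samples; the negative log-likelihood of a new feature vector then serves as its anomaly score:

from sklearn.mixture import GaussianMixture
import numpy as np

# Hypothetical stand-ins for MobileNetV2 patch embeddings of defect-free (no anomaly) samples.
normal_features = np.random.rand(3600, 64)

# A higher Capacity setting corresponds to fitting more Gaussian components.
gmm = GaussianMixture(n_components=8, random_state=1).fit(normal_features)

# Score a new patch embedding: a low log-likelihood yields a high anomaly score.
new_patch = np.random.rand(1, 64)
anomaly_score = -gmm.score_samples(new_patch)[0]
print("Anomaly score:", anomaly_score)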

#️⃣ After training the model with the final configurations, Edge Impulse did not report an F1 score (accuracy), since visual anomaly detection models are trained only on non-anomalous (no anomaly) samples.

project_image_366
project_image_367
project_image_368
project_image_369
project_image_370

Circular Conveyor - Step 1.3: Evaluating the model accuracy and deploying the validated model

Testing FOMO-AD visual anomaly detection models is essential for getting precise results while running inferences on the device. In addition to evaluating the F1 precision score (accuracy), Edge Impulse allows the user to tweak the learning block sensitivity by adjusting the anomaly (confidence) threshold, resulting in a much more adaptable model for real-world operations.

#️⃣ First, to obtain the validation score of the trained model based on the provided testing samples, I navigated to the Impulse design ➡ Model testing section and clicked Classify all.

project_image_371

#️⃣ Based on the initial F1 score, I started to rigorously experiment with different model variants and anomaly (confidence) thresholds to pinpoint the optimum settings for the real-world conditions.

#️⃣ Although Edge Impulse suggested 7.3 as the confidence threshold based on the top anomaly scores in the training dataset, it performed poorly for the Unoptimized (float32) model variant. According to my experiments, I found out that a 2.12 confidence threshold is the sweet spot for the unoptimized version, leading to an 83.74% F1 score (accuracy).

project_image_372
project_image_373
project_image_374
project_image_375
project_image_376
project_image_377
project_image_378

#️⃣ On the other hand, the Quantized (int8) model variant performed best with an 8 confidence threshold, leading to a 100% F1 score (accuracy).

project_image_379
project_image_380
project_image_381
project_image_382

#️⃣ To deploy the validated model optimized for my hardware, I navigated to the Impulse design ➡ Deployment section and searched for Linux (AARCH64).

#️⃣ I chose the Quantized (int8) model variant (optimization) to achieve the optimal performance while running the deployed model.

#️⃣ Finally, I clicked Build to download the produced EIM binary, containing the trained visual anomaly detection model.

project_image_383
project_image_384
project_image_385
project_image_386

Circular Conveyor - Step 2: Setting up Apache web server with MariaDB database and Edge Impulse Linux Python SDK on Raspberry Pi 5

As mentioned earlier, I decided to develop a web dashboard for the circular conveyor mechanism and host it locally on the Raspberry Pi 5. Thus, I decided to utilize Apache as the local server for my web dashboard, providing all necessary tools to build a full-fledged PHP-based application.

To easily access and run my FOMO-AD visual anomaly detection model (EIM binary) via a Python script, I also installed the Edge Impulse Linux Python SDK on the Raspberry Pi 5.

#️⃣ First, I installed the Apache web server with a MariaDB database, the PHP MySQL package, and the PHP cURL package via the terminal.

sudo apt-get install apache2 php mariadb-server php-mysql php-curl -y

project_image_387
project_image_388

#️⃣ To utilize the MariaDB database, I set the root user by strictly following the secure installation prompt.

sudo mysql_secure_installation

project_image_389
project_image_390

#️⃣ After setting up the Apache server, I proceeded to install the official Edge Impulse Python SDK with all dependencies.

sudo apt-get install libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev python3-pip

sudo pip3 install pyaudio edge_impulse_linux --break-system-packages

#️⃣ Since I did not create a virtual environment, I needed to pass the --break-system-packages command-line argument to bypass the system-wide package installation error.

project_image_391
project_image_392
project_image_393
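
#️⃣ As a quick sanity check of the installation (a minimal sketch based on the Edge Impulse Linux Python SDK, with a hypothetical model path and image file), the downloaded EIM binary can be loaded and queried directly:

import cv2
from edge_impulse_linux.image import ImageImpulseRunner

# Hypothetical path to the deployed FOMO-AD EIM binary.
model_path = "/home/pi/model.eim"

with ImageImpulseRunner(model_path) as runner:
    # Initialize the model and print its project information.
    model_info = runner.init()
    print("Loaded model:", model_info["project"]["name"])
    # Run a single inference on a saved UV-exposed image sample (hypothetical file).
    img = cv2.cvtColor(cv2.imread("sample.jpg"), cv2.COLOR_BGR2RGB)
    features, cropped = runner.get_features_from_image(img)
    result = runner.classify(features)
    # For visual anomaly detection models, the result should include per-cell anomaly scores.
    print(result["result"].get("visual_anomaly_grid"))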

As discussed earlier, I decided to design a unique controller board (PCB) for the circular conveyor mechanism in the form of a Raspberry Pi 5 shield (hat). Since the controller board would be based on an ATmega328P, I decided to establish the data transfer via serial communication. In this regard, before prototyping the circular conveyor interface, I needed to enable the UART serial communication protocol on the Raspberry Pi 5.

#️⃣ To activate the UART serial communication via GPIO pins, I enabled the Serial Port interface on Raspberry Pi Configuration. Then, I rebooted the Pi 5.

project_image_394
project_image_395
project_image_396
project_image_397

Circular Conveyor - Step 3: Prototyping and initial programming of the circular conveyor interface with Arduino Uno

Before proceeding with developing the mechanical parts and the controller board (interface) of the circular conveyor mechanism, I needed to ensure that every sensor and component was operating as anticipated. In this regard, I decided to utilize an Arduino Uno to prototype the circular conveyor interface. Since I had an original Arduino Uno, which is based on the ATmega328P, I was able to test and run my initial programming of the conveyor interface effortlessly.

#️⃣ As I decided to design two conveyor drivers sharing the load while rotating the conveyor chain, I utilized two Nema 17 (17HS3401) stepper motors controlled by two separate A4988 driver modules.

#️⃣ Since I wanted to utilize neodymium magnets to align the center of the plastic object surfaces held by the plastic object carriers of the circular conveyor with the focal points of both camera modules (regular Wide and NoIR Wide), I used two magnetic Hall-effect sensor modules (KY-003).

#️⃣ To provide the user with a feature-rich interface, I connected an SSD1306 OLED display and four control buttons.

#️⃣ To enable the user to adjust the conveyor attributes manually, I added two long-shaft potentiometers.

#️⃣ Since I needed to supply power to numerous current-demanding electronic components with different operating voltages, I decided to convert my old ATX power supply unit (PSU) into a simple bench power supply by utilizing an ATX adapter board (XH-M229) providing stable 3.3V, 5V, and 12V outputs. For each power output of the adapter board, I soldered wires to attach a DC-barrel-to-wire jack (male) in order to create a production-ready bench power supply.

#️⃣ Furthermore, as a part of my initial programming experiments, I reviewed the data transmission between the software serial port of the Arduino Uno and the hardware UART serial port (GPIO) of the Raspberry Pi 5.

#️⃣ Since Arduino Uno and ATmega328P operate at 5V while Raspberry Pi 5 requires 3.3V logic level voltage, their GPIO pins cannot be connected directly, even for serial communication. Therefore, I utilized a bi-directional logic level converter to shift the voltage between the respective pin connections.

project_image_398
project_image_399
project_image_400
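
#️⃣ As a minimal Pi-side sketch of this review (assuming the hardware UART is exposed as /dev/serial0 and matching the 9600 baud rate used by the conveyor interface), incoming data packets can be echoed and acknowledged with a single-character response:

import serial

# Open the Raspberry Pi 5 hardware UART (GPIO 14/15) at the conveyor interface's baud rate.
pi_serial = serial.Serial("/dev/serial0", baudrate=9600, timeout=1)

while True:
    # Read a newline-terminated data packet transmitted by the ATmega328P software serial port.
    packet = pi_serial.readline().decode("utf-8", errors="ignore").strip()
    if packet:
        print("Received data packet:", packet)
        # Acknowledge the packet with a single-character response.
        pi_serial.write(b"s")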

Circular Conveyor - Step 3.1: Setting up and configuring ATMEGA328P-PU as an Arduino Uno

After completing my initial Arduino Uno prototyping and programming, I started to set up my ATmega328P-PU so that I could move the electrical components from the Arduino Uno to its corresponding pins and continue developing the circular conveyor interface.

#️⃣ First, based on the ATmega328P datasheet, I built the required circuitry to drive the ATmega328P single-chip microcontroller, consisting of these electrical components:

  • 16.000 MHz crystal [1]
  • 10K resistor [1]
  • 22pF ceramic disc capacitor [2]
  • 10uF 250v electrolytic capacitor [1]
project_image_401
project_image_402

#️⃣ Since I did not want to add an onboard USB port to my PCB, I decided to upload code files to the ATmega328P via an external FTDI adapter (programming board), which requires an additional 100nF ceramic disc capacitor while connecting its DTR/RTS pin to the reset pin of the ATmega328P.

project_image_403
project_image_404

📌 DroneBot Workshop provided in-depth written and video tutorials regarding utilizing the ATmega328P and the FTDI adapter, from which I got the connection schematics above. So, please refer to DroneBot Workshop's tutorial to get more information about the ATmega328P microcontroller.

Since I wanted to program the ATmega328P as an Arduino Uno via the Arduino IDE, I purchased ATmega328P-PU chips, which come with the preloaded (burned) Arduino bootloader in their EEPROM. Nonetheless, none of my ATmega328P-PU chips were recognized by the latest version of the Arduino IDE — 2.3.6.

Therefore, I needed to burn the required bootloader manually to my ATmega328P-PU by employing a different Arduino Uno (other than the one I used to prototype the conveyor interface) as an in-system programmer (ISP), as depicted in this official Arduino guideline.

#️⃣ First, I connected the Arduino Uno to the computer and selected its COM port on the Arduino IDE.

project_image_405

#️⃣ Then, I navigated to File ➡ Examples ➡ ArduinoISP and uploaded the ArduinoISP example to the Arduino UNO.

project_image_406
project_image_407

#️⃣ Since the ISP example uses the SPI protocol to burn the bootloader, I connected the hardware SPI pins (MISO, MOSI, and SCK) of the Arduino Uno to the corresponding SPI pins of the ATmega328P.

#️⃣ I also connected pin 10 to the ATmega328P reset pin since the ISP example uses D10 to reset the target microcontroller, rather than the SS pin.

  • MOSI (D11) ➡ 17
  • MISO (D12) ➡ 18
  • SCK (D13) ➡ 19
  • D10 ➡ 1
project_image_408
project_image_409

#️⃣ After connecting the Arduino Uno SPI pins to the ATmega328P SPI pins, I selected Tools ➡ Programmer ➡ Arduino as ISP. Then, I selected the board as Arduino Uno since I wanted to burn the Arduino Uno bootloader to the target ATmega328P chip.

project_image_410

#️⃣ After configuring bootloader settings, I clicked Tools ➡ Burn Bootloader to initiate the bootloader burning procedure.

project_image_411
project_image_412

#️⃣ After burning the Arduino Uno bootloader to my ATmega328P chip successfully, I uploaded a simple program via the external FTDI adapter to test whether the ATmega328P chip behaves as an Arduino Uno.

project_image_413
project_image_414
project_image_415

#️⃣ Once I confirmed the ATmega328P worked as an Arduino Uno, I connected a button to its reset pin and GND in order to restart my program effortlessly in case of logic errors.

project_image_416

#️⃣ Finally, I migrated all of the electrical components to the ATmega328P, considering its pin names equivalent to the Arduino Uno's.

// Connections
// ATMEGA328P-PU :  
//                                Nema 17 (17HS3401) Stepper Motor w/ A4988 Driver Module [Motor 1]
// 5V      ------------------------ VDD
// GND     ------------------------ GND
// D2      ------------------------ DIR
// D3      ------------------------ STEP
//                                Nema 17 (17HS3401) Stepper Motor w/ A4988 Driver Module [Motor 2]
// 5V      ------------------------ VDD
// GND     ------------------------ GND
// D4      ------------------------ DIR
// D5      ------------------------ STEP
//                                SSD1306 OLED Display (128x64)
// 5V      ------------------------ VCC
// GND     ------------------------ GND
// A4      ------------------------ SDA
// A5      ------------------------ SCL
//                                Raspberry Pi 5 
// D6 (RX) ------------------------ GPIO 14 (TXD)
// D7 (TX) ------------------------ GPIO 15 (RXD)
//                                Magnetic Hall Effect Sensor Module (KY-003) [First] 
// GND     ------------------------ -
// 5V      ------------------------ +
// A0      ------------------------ S
//                                Magnetic Hall Effect Sensor Module (KY-003) [Second] 
// GND     ------------------------ -
// 5V      ------------------------ +
// A1      ------------------------ S
//                                Long-shaft B4K7 Potentiometer (Speed) 
// A2      ------------------------ Signal
//                                Long-shaft B4K7 Potentiometer (Station) 
// A3      ------------------------ Signal
//                                Control Button (A)
// D8      ------------------------ +
//                                Control Button (B)
// D9      ------------------------ +
//                                Control Button (C)
// D10     ------------------------ +
//                                Control Button (D)
// D11     ------------------------ +
project_image_417
project_image_418
project_image_419
project_image_420

Circular Conveyor - Step 4: Programming ATMEGA328P-PU as the circular conveyor interface

To prepare monochromatic images in order to display custom logos on the SSD1306 OLED screen, I followed this process.

#️⃣ First, I converted monochromatic bitmaps to compatible C data arrays by utilizing LCD Assistant.

#️⃣ Based on the SSD1306 screen type, I selected the Horizontal byte orientation.

#️⃣ After converting all logos successfully, I created a header file — logo.h — to store them.
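
#️⃣ Each converted logo is stored in logo.h as a constant byte array accompanied by its width and height constants, for example, the home_bits, home_w, and home_h identifiers referenced later by the show_screen function.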

project_image_421
project_image_422
project_image_423

#️⃣ I installed the libraries required to control the attached electronic components:

📚 SoftwareSerial (built-in) | Inspect

📚 Adafruit_SSD1306 | Download

📚 Adafruit-GFX-Library | Download

📁 ai_driven_surface_defect_detection_circular_sprocket_conveyor.ino

⭐ Include the required libraries.

#include <SoftwareSerial.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

⭐ Import custom logos (C data arrays).

#include "logo.h"

⭐ Declare a software serial port to communicate with Raspberry Pi 5.

SoftwareSerial rasp_pi_5 (6, 7); // RX, TX

⭐ Define the SSD1306 display configurations and declare the SSD1306 class instance.

#define SCREEN_WIDTH 128 // OLED display width, in pixels
#define SCREEN_HEIGHT 64 // OLED display height, in pixels
#define OLED_RESET    -1 // Reset pin # (or -1 if sharing Arduino reset pin)
Adafruit_SSD1306 display(SCREEN_WIDTH, SCREEN_HEIGHT, &Wire, OLED_RESET);

⭐ Define the analog pins for the Hall-effect sensor modules (KY-003).

#define first_hall_effect_sensor    A0
#define second_hall_effect_sensor   A1

⭐ Define the digital pins for the control buttons.

#define control_button_A   8
#define control_button_B   9
#define control_button_C   10
#define control_button_D   11

⭐ Declare all of the variables required by the circular conveyor drivers by creating a struct.

struct stepper_config{
  #define m_num 2
  int _pins[m_num][2] = {{2, 3}, {4, 5}}; // (DIR, STEP)
  // Assign the required revolution and initial speed variables based on drive sprocket conditions.
  int stepsPerRevolution = 200;
  int sprocket_speed = 12000;
  // Assign stepper motor tasks based on the associated part.
  int sprocket_1 = 0, sprocket_2 = 1;  
  // Declare the circular conveyor station pending time for each inference session.
  int station_pending_time = 5000;
  // Define the necessary potentiometer configurations for adjusting the sprocket speed and the station pending time.
  int pot_speed_pin = A2, pot_speed_min = 8000, pot_speed_max = 25000;
  int pot_pending_pin = A3, pot_pending_min = 3000, pot_pending_max = 30000;
} stepper_config; // Declare the struct instance referenced throughout the sketch (e.g., stepper_config.sprocket_speed).

⭐ Initiate the declared software serial port with its assigned RX and TX pins to start the data transmission process with the Raspberry Pi 5.

rasp_pi_5.begin(9600);

⭐ Activate the assigned DIR and STEP pins connected to the A4988 driver modules, controlling the Nema 17 stepper motors.

for(int i = 0; i < m_num; i++){ pinMode(stepper_config._pins[i][0], OUTPUT); pinMode(stepper_config._pins[i][1], OUTPUT); } 

⭐ Initialize the SSD1306 class instance.

  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);
  display.display();
  delay(1000);

⭐ In the show_screen function, program different screen layouts (interfaces) based on the ongoing conveyor operation, the given user commands, and the real-time sensor readings.

void show_screen(char _type, int _opt){
  // According to the given parameters, show the requested screen type on the SSD1306 OLED screen.
  int str_x = 5, str_y = 5;
  int l_h = 8, l_sp = 5;
  if(_type == 'h'){
    display.clearDisplay();
    switch(_opt){
      case 0: display.drawBitmap(str_x, str_y, home_bits, home_w, home_h, SSD1306_WHITE); break;
      case 1: display.drawBitmap(str_x, str_y, adjust_bits, adjust_w, adjust_h, SSD1306_WHITE); break;
      case 2: display.drawBitmap(str_x, str_y, check_bits, check_w, check_h, SSD1306_WHITE); break;
      case 3: display.drawBitmap(str_x, str_y, serial_bits, serial_w, serial_h, SSD1306_WHITE); break;
      case 4: display.drawBitmap(str_x, str_y, activate_bits, activate_w, activate_h, SSD1306_WHITE); break;
    }
    display.setTextSize(1);
    (_opt == 1) ? display.setTextColor(SSD1306_BLACK, SSD1306_WHITE) : display.setTextColor(SSD1306_WHITE);
    display.setCursor((SCREEN_WIDTH/2)-str_x, str_y);
    display.print("1. Adjust");
    str_y += 2*l_h;
    (_opt == 2) ? display.setTextColor(SSD1306_BLACK, SSD1306_WHITE) : display.setTextColor(SSD1306_WHITE);
    display.setCursor((SCREEN_WIDTH/2)-str_x, str_y);
    display.print("2. Check");
    str_y += 2*l_h;
    (_opt == 3) ? display.setTextColor(SSD1306_BLACK, SSD1306_WHITE) : display.setTextColor(SSD1306_WHITE);
    display.setCursor((SCREEN_WIDTH/2)-str_x, str_y);
    display.print("3. Serial");
    str_y += 2*l_h;
    (_opt == 4) ? display.setTextColor(SSD1306_BLACK, SSD1306_WHITE) : display.setTextColor(SSD1306_WHITE);
    display.setCursor((SCREEN_WIDTH/2)-str_x, str_y);
    display.print("4. Activate");
    display.display();
    delay(500);
  }
  if(_type == 'a'){
    int rect_w = l_h, rect_h = l_h;
    display.clearDisplay();
    display.drawBitmap(str_x, str_y, adjust_bits, adjust_w, adjust_h, SSD1306_WHITE);
    display.setTextSize(1);
    display.setTextColor(SSD1306_WHITE);
    str_x = (SCREEN_WIDTH/2);
    display.fillRect(str_x-rect_w-l_sp, str_y+(l_h/2)-(rect_h/2), rect_w, rect_h, SSD1306_WHITE);
    display.setCursor(str_x, str_y);
    display.print("Speed:");
    str_x += 5*l_sp;
    str_y += l_h;
    display.setCursor(str_x, str_y);
    display.print(current_pot_speed_value);
    str_y += l_h;
    display.setCursor(str_x, str_y);
    display.setTextColor(SSD1306_BLACK, SSD1306_WHITE);
    display.print(stepper_config.sprocket_speed);
    str_x -= 5*l_sp;
    str_y += 2*l_h;
    display.setTextColor(SSD1306_WHITE);
    display.setCursor(str_x, str_y);
    display.fillRect(str_x-rect_w-l_sp, str_y+(l_h/2)-(rect_h/2), rect_w, rect_h, SSD1306_WHITE);
    display.print("Pending:");
    str_x += 5*l_sp;
    str_y += l_h;
    display.setCursor(str_x, str_y);
    display.print(current_pot_pending_value);
    str_y += l_h;
    display.setCursor(str_x, str_y);
    display.setTextColor(SSD1306_BLACK, SSD1306_WHITE);    
    display.print(stepper_config.station_pending_time);
    display.display();
  }
  if(_type == 'c'){
    int c_r = l_h;
    display.clearDisplay();
    display.drawBitmap(str_x, str_y, check_bits, check_w, check_h, SSD1306_WHITE);
    display.setTextSize(1);
    display.setTextColor(SSD1306_WHITE);
    str_x = (SCREEN_WIDTH-check_w-(4*c_r))/3;
    str_x = check_w + str_x + c_r + l_sp;
    str_y += 2*l_h;
    (!digitalRead(control_button_A)) ? display.fillCircle(str_x, str_y, c_r, SSD1306_WHITE) : display.drawCircle(str_x, str_y, c_r, SSD1306_WHITE);
    display.setCursor(str_x-(l_h/2)-1, l_sp/2);
    display.print("CW");
    str_x = SCREEN_WIDTH - c_r - (2*l_sp);
    (!digitalRead(control_button_C)) ? display.fillCircle(str_x, str_y, c_r, SSD1306_WHITE) : display.drawCircle(str_x, str_y, c_r, SSD1306_WHITE);
    display.setCursor(str_x-(2*l_h/3)-2, l_sp/2);
    display.print("CCW");
    str_x = (2*l_sp/3) + check_w;
    str_y += c_r + (3*l_sp);
    display.setCursor(str_x, str_y);
    display.print("First_H: "); display.print(analogRead(first_hall_effect_sensor));
    str_y += 2*l_sp;
    display.setCursor(str_x, str_y);
    display.print("Second_H: "); display.print(analogRead(second_hall_effect_sensor));
    display.display();     
  }
  if(_type == 's'){
    display.clearDisplay();
    display.drawBitmap(str_x, str_y, serial_bits, serial_w, serial_h, SSD1306_WHITE);
    display.setTextSize(1);
    display.setTextColor(SSD1306_WHITE);
    str_x += serial_w + 3*l_sp;
    display.setCursor(str_x, str_y);
    display.print("Serial");
    str_y += l_h;
    display.setCursor(str_x, str_y);
    display.print("Initiated!");
    str_y += 3*l_h;
    display.setCursor(str_x, str_y);
    display.print("Response: "); display.print(rasp_pi_5_res);
    display.display();
  }
  if(_type == 'r'){
    display.clearDisplay();
    str_x = (SCREEN_WIDTH-activate_w)/2;
    str_y = (SCREEN_HEIGHT-activate_h)/2;
    display.drawBitmap(str_x, str_y, activate_bits, activate_w, activate_h, SSD1306_WHITE);
    display.display();
  }
}

⭐ In the rasp_pi_5_response function, wait until Raspberry Pi 5 successfully sends a response to the transmitted data packet via serial communication.

⭐ Once the retrieved data packet is processed, halt the loop checking for the response data packets.

⭐ If Raspberry Pi 5 does not send a response in the given timeframe (station pending time), terminate the loop as well.

⭐ Finally, return the fetched response.

char rasp_pi_5_response(){
  char rasp_pi_response = 'n';
  int port_wait = 0;
  // Wait until Raspberry Pi 5 successfully sends a response to the transmitted data packet via serial communication.
  while(rasp_pi_5_ongoing_transmission){
    // Each loop iteration takes roughly 500 ms, so accumulate the elapsed time in milliseconds.
    port_wait += 500;
    while(rasp_pi_5.available() > 0){
      rasp_pi_response = rasp_pi_5.read();
    }
    delay(500);
    // Halt the loop once Raspberry Pi 5 returns a data packet (response) or the elapsed time exceeds the given timeframe (station pending time).
    if(rasp_pi_response != 'n' || port_wait > stepper_config.station_pending_time){
      rasp_pi_5_ongoing_transmission = false;
    }
  }
  // Then, return the retrieved response.
  delay(500);
  return rasp_pi_response;  
}

⭐ In the send_data_packet_to_rasp_pi_5 function, transfer the passed data packet to Raspberry Pi 5 via serial communication.

⭐ Suspend code flow until acquiring a response from Raspberry Pi 5.

void send_data_packet_to_rasp_pi_5(String _data){
  rasp_pi_5_res = 'o';
  // Send the passed data packet to Raspberry Pi 5 via serial communication.
  rasp_pi_5.println(_data);
  // Suspend code flow until getting a response from Raspberry Pi 5.
  rasp_pi_5_ongoing_transmission = true; rasp_pi_5_res = rasp_pi_5_response();
  delay(1000);
}

⭐ In the conveyor_move function, based on the passed direction and step number, rotate two stepper motors driving the sprockets simultaneously to move the conveyor chain precisely.

  • Clockwise [CW]: rotate stepper motors in the same direction (right) at the same velocity.
  • Counterclockwise [CCW]: rotate stepper motors in the same direction (left) at the same velocity.
void conveyor_move(int step_number, int acc, String _dir){
  /*
      Move the sprocket-driven circular conveyor stations by controlling the rotation of the associated stepper motors.
      Clockwise [CW]: rotate stepper motors in the same direction (right) at the same velocity.
      Counterclockwise [CCW]: rotate stepper motors in the same direction (left) at the same velocity.
  */

  if(_dir == "CW"){
    digitalWrite(stepper_config._pins[stepper_config.sprocket_1][0], HIGH);
    digitalWrite(stepper_config._pins[stepper_config.sprocket_2][0], HIGH);
  }
  if(_dir == "CCW"){
    digitalWrite(stepper_config._pins[stepper_config.sprocket_1][0], LOW);
    digitalWrite(stepper_config._pins[stepper_config.sprocket_2][0], LOW);
  }

  for(int i = 0; i < step_number; i++){
    digitalWrite(stepper_config._pins[stepper_config.sprocket_1][1], HIGH);
    digitalWrite(stepper_config._pins[stepper_config.sprocket_2][1], HIGH);
    delayMicroseconds(stepper_config.sprocket_speed/acc);
    digitalWrite(stepper_config._pins[stepper_config.sprocket_1][1], LOW);
    digitalWrite(stepper_config._pins[stepper_config.sprocket_2][1], LOW);
    delayMicroseconds(stepper_config.sprocket_speed/acc);
  }  
}
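
#️⃣ As a worked example with the default settings: sprocket_speed is 12000 µs, so with an acc divisor of 10, each STEP pulse is held HIGH and then LOW for 1200 µs each. One full 200-step revolution therefore takes roughly 200 × 2 × 1.2 ms ≈ 0.48 s.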

⭐ On the home screen, update the highlighted interface option once the control button A or C is pressed. In other words, move the cursor between interface options.

  • [A] ➡ Down
  • [C] ➡ Up

⭐ Activate the highlighted interface option once the control button B is pressed.

  • [B] ➡ Activate (Select)
  show_screen('h', highlighted_menu_opt);

  // Update the highlighted interface option if the control button A or the control button C is pressed.
  if(!digitalRead(control_button_A)){
    highlighted_menu_opt++;
    if(highlighted_menu_opt > 4) highlighted_menu_opt = 0;
    delay(1000);
  }

  if(!digitalRead(control_button_C)){
    highlighted_menu_opt--;
    if(highlighted_menu_opt < 0) highlighted_menu_opt = 4;
    delay(1000);
  }

  // Select the highlighted interface option if the control button B is pressed.
  if(!digitalRead(control_button_B) && highlighted_menu_opt > 0){
    active_menu_opt[highlighted_menu_opt-1] = true;
    delay(250);
  }

⭐ Once the Adjust interface option is activated:

⭐ Obtain the latest potentiometer values and map the retrieved values according to the given thresholds.

⭐ Once the control button A is pressed, declare the associated potentiometer value (mapped) as the speed parameter for controlling the speed of the stepper motors driving the sprockets while rotating them.

⭐ Once the control button C is pressed, declare the associated potentiometer value (mapped) as the station pending time parameter, which is the intermission to give camera modules time to focus before running an inference.

⭐ Inform the user of the real-time parameter adjustments (declarations) on the screen.

⭐ Return to the home screen if the control button D is pressed.

  if(active_menu_opt[0]){
    show_screen('a', highlighted_menu_opt);
    while(active_menu_opt[0]){
      // Obtain the latest potentiometer values and map the retrieved values according to the given thresholds.
      current_pot_speed_value = constrain(map(analogRead(stepper_config.pot_speed_pin), 50, 850, stepper_config.pot_speed_min, stepper_config.pot_speed_max), stepper_config.pot_speed_min, stepper_config.pot_speed_max);
      current_pot_speed_value /= 1000; current_pot_speed_value *= 1000;
      current_pot_pending_value = constrain(map(analogRead(stepper_config.pot_pending_pin), 50, 850, stepper_config.pot_pending_min, stepper_config.pot_pending_max), stepper_config.pot_pending_min, stepper_config.pot_pending_max);
      current_pot_pending_value /= 1000; current_pot_pending_value *= 1000;
      // Once the control button A is pressed, declare the associated potentiometer value (mapped) as the new conveyor sprocket speed parameter.
      if(!digitalRead(control_button_A)){ stepper_config.sprocket_speed =  current_pot_speed_value; }
      // Once the control button C is pressed, declare the associated potentiometer value (mapped) as the new conveyor station pending time parameter.
      if(!digitalRead(control_button_C)){ stepper_config.station_pending_time =  current_pot_pending_value; }
      // Inform the user of the latest adjustments on the screen.
      show_screen('a', highlighted_menu_opt);
      // Return to the home screen if the control button D is pressed.
      if(!digitalRead(control_button_D)){ active_menu_opt[0] = false; delay(500); }
    }
  }
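
#️⃣ As a worked example of the mapping: a raw speed potentiometer reading of 450 maps from the 50–850 input range to 16500 within the 8000–25000 threshold range, and the integer division and multiplication by 1000 then rounds it down to 16000, keeping the adjustable values in clean 1000-unit increments.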

⭐ Once the Check interface option is activated:

⭐ Once the control button A is pressed, rotate the stepper motors driving sprockets one step clockwise simultaneously.

⭐ Once the control button C is pressed, rotate the stepper motors driving sprockets one step counterclockwise simultaneously.

⭐ Obtain the real-time magnetic Hall-effect sensor raw readings.

⭐ Inform the user of the ongoing stepper motor movement and the real-time sensor readings on the screen.

⭐ Return to the home screen if the control button D is pressed.

   if(active_menu_opt[1]){
    show_screen('c', highlighted_menu_opt);
    while(active_menu_opt[1]){     
      // Once the control button A is pressed, rotate the drive sprockets one step clockwise.
      if(!digitalRead(control_button_A)){ conveyor_move(stepper_config.stepsPerRevolution, 10, "CW"); }
      // Once the control button C is pressed, rotate the drive sprockets one step counterclockwise.
      if(!digitalRead(control_button_C)){ conveyor_move(stepper_config.stepsPerRevolution, 10, "CCW"); }
      // Inform the user of the given sprocket direction and the latest magnetic Hall effect sensor readings on the screen immediately.
      show_screen('c', highlighted_menu_opt);       
      // Return to the home screen if the control button D is pressed.
      if(!digitalRead(control_button_D)){ active_menu_opt[1] = false; delay(500); }
    }
  }

⭐ Once the Serial interface option is activated:

⭐ Once the control button A is pressed, send the test command to Raspberry Pi 5 via serial communication to check the two-way data transmission status.

⭐ Once the control button C is pressed, send the run command to Raspberry Pi 5 via serial communication to manually run consecutive inferences (regular Wide and NoIR Wide) with the FOMO-AD visual anomaly detection model.

⭐ Inform the user of the received response (data packet) from Raspberry Pi 5.

⭐ Return to the home screen if the control button D is pressed.

   if(active_menu_opt[2]){
    show_screen('s', highlighted_menu_opt);
    while(active_menu_opt[2]){
      // Once the control button A is pressed, send the test command to Raspberry Pi 5 via serial communication to check the connection status.
      if(!digitalRead(control_button_A)){ send_data_packet_to_rasp_pi_5("test"); }
      // Once the control button C is pressed, send the run command to Raspberry Pi 5 via serial communication to manually run an inference.
      if(!digitalRead(control_button_C)){ send_data_packet_to_rasp_pi_5("run"); }
      // Inform the user of the latest received data packets from Raspberry Pi 5.
      show_screen('s', highlighted_menu_opt); 
      // Return to the home screen if the control button D is pressed.
      if(!digitalRead(control_button_D)){ active_menu_opt[2] = false; rasp_pi_5_res = 'o'; delay(500); }
    }
  } 

⭐ Once the Activate interface option is activated:

⭐ Initiate the stepper motors to rotate the sprockets simultaneously to move the chain of the circular conveyor continuously but steadily.

⭐ Once both of the magnetic Hall-effect sensors detect neodymium magnets attached to the bottom of the plastic object carriers simultaneously, stop the circular conveyor motion immediately.

⭐ Wait until the given intermission (station pending time) passes to give the camera modules time to focus on the plastic object surfaces.

⭐ Then, send the run command to Raspberry Pi 5 via serial communication to initiate an inference session automatically.

⭐ Once Raspberry Pi 5 sends the response denoting that the inference session with the provided Edge Impulse FOMO-AD visual anomaly detection model was successful, resume the circular conveyor motion.

⭐ After concluding the inference session, move the chain of the circular conveyor further to prevent Hall-effect sensors from detecting the same neodymium magnets, which would lead to running inferences with the same plastic objects.

⭐ Terminate the automatic conveyor operations and return to the home screen once the control button D is pressed.

   if(active_menu_opt[3]){
    show_screen('r', highlighted_menu_opt);
    while(active_menu_opt[3]){
      // Initiate the circular conveyor to move the conveyor stations continuously but steadily.
      conveyor_move(stepper_config.stepsPerRevolution/2, 10, "CW");
      // Via the neodymium magnets attached to the bottom of the conveyor stations, detect when stations are passing above the associated magnetic Hall effect sensors.
      if(analogRead(first_hall_effect_sensor) < 150 || analogRead(second_hall_effect_sensor) < 150){
        // Then, stop the circular conveyor motion immediately.
        circular_conveyor_station_stop = true;
        while(circular_conveyor_station_stop){
          // To give the cameras attached to Raspberry Pi 5 time to focus, wait until the given station pending time passes.
          delay(stepper_config.station_pending_time);
          // Then, send the run command to Raspberry Pi 5 via serial communication to initiate the inference session.
          send_data_packet_to_rasp_pi_5("run");
          // Once Raspberry Pi 5 runs an inference successfully with the provided Edge Impulse FOMO-AD model, resume the circular conveyor motion.
          if(rasp_pi_5_res == 's'){
            circular_conveyor_station_stop = false;
            station_magnet_detected = true;
            rasp_pi_5_res = 'o';
          }
        }
      }
      // After successfully completing the inference session and continuing the conveyor motion, rotate the drive sprockets additionally to prevent detecting the same station magnets consecutively.
      if(station_magnet_detected){
        conveyor_move(5*stepper_config.stepsPerRevolution, 10, "CW");
        station_magnet_detected = false;
      }
      // Return to the home screen if the control button D is pressed.
      if(!digitalRead(control_button_D)){ active_menu_opt[3] = false; delay(500); }
    }
  } 
project_image_424
project_image_425
project_image_426
project_image_427
project_image_428

Circular Conveyor - Step 5: Designing the circular conveyor controller PCB (4-layer) as a Raspberry Pi 5 shield (hat)

After programming the ATmega328P and ensuring all electronic components performed as expected, I started to work on designing the circular conveyor controller PCB layout. After developing distinct PCBs for my proof-of-concept projects, I came to the conclusion that designing PCB outlines and structures (silkscreen, copper layers, etc.) directly on Autodesk Fusion 360 works best for my development process. Creating PCB digital twins allows me to simulate complex 3D mechanical systems compatible with the PCB part placement and outline before sending my PCB designs for manufacturing. In this case, designing the layout on Fusion 360 was a necessity rather than a choice since I wanted to design the conveyor controller PCB as a unique Raspberry Pi 5 shield (hat), reducing the board footprint as much as possible.

As I was working on the conveyor PCB layout, I leveraged the open-source CAD file of Raspberry Pi 5 to obtain accurate measurements:

✒️ Raspberry Pi 5 (Step) | Inspect

#️⃣ First, I drew the PCB outline to make sure I left enough clearance for connecting the FPC camera connection cables to the dual-CSI ports.

#️⃣ Then, I added a circular opening (hole) for a cooling fan (40 mm x 40 mm) and ensured that the outline had enough clearance to attach its cable to the Pi 5 fan header.

❗ While conducting my experiments, I noticed that my Raspberry Pi 5's temperature increased to the point of a potential bottleneck, especially in the case of processing real-time image buffers produced by two different camera modules (regular Wide and NoIR Wide) simultaneously. Thus, to design a feature-rich shield (hat), I decided to add a built-in cooling fan on the top of the PCB, supporting the heatsinks affixed to the Raspberry Pi 5.

#️⃣ Finally, I thoroughly measured the areas of the electrical components with my caliper and placed them diligently within the borders of the PCB outline, including the 40-pin female pin header, which would be on the back of the PCB for attaching the shield onto the Raspberry Pi 5.

#️⃣ In the spirit of designing an authentic shield, I wanted to add Pikachu as a part of the PCB outline, emphasizing the power connectors :)

project_image_429
project_image_430
project_image_431

After designing the PCB outline and structure, I imported my outline graphic to KiCad 9.0 in the DXF format and created the necessary circuit connections to complete the circular conveyor PCB layout.

As I had already tested all electrical components on the breadboard, I was able to create the circuit schematic effortlessly in KiCad by following the prototype connections.

project_image_432
project_image_433
project_image_434
project_image_435
project_image_436
project_image_437

Before drawing connection lines to design the overall PCB layout, I noticed that a 2-layer PCB layout would be too restrictive for my compact part placement and unique PCB shape. In this regard, I decided to design a 4-layer PCB layout, which allowed me to create ground and power-oriented planes.

#️⃣ To increase the layer number on the KiCad PCB Editor, I navigated to File ➡ Board Setup ➡ Board Stackup ➡ Physical Stackup and selected the number of copper layers as 4.

project_image_438
project_image_439
project_image_440

After configuring the 4-layer PCB layout settings, I completed the circular conveyor controller PCB layout design layer-by-layer.

project_image_441
project_image_442
project_image_443
project_image_444
project_image_445
project_image_446
project_image_447
project_image_448
project_image_449
project_image_450
project_image_451

Circular Conveyor - Step 5.1: Soldering and assembling the circular conveyor controller PCB

After completing the circular conveyor controller PCB layout, I utilized ELECROW's high-quality regular PCB manufacturing service to fabricate my PCB design. For further inspection, I provided the fabrication files on the project GitHub repository. To replicate this device, you can order this PCB directly from my ELECROW community page.

#️⃣ After receiving my PCBs, I soldered electronic components and pin headers via my TS100 soldering iron to place all parts according to my PCB layout.

📌 Component assignments on the circular conveyor controller PCB:

U1 (ATmega328P-PU)

Y1 (16.000 MHz Crystal)

C1, C2 (22 pF Ceramic Capacitor)

C3 (100nF Ceramic Capacitor)

C4 (10uF 250V Electrolytic Capacitor)

R1 (10K Resistor)

DR1, DR2 (Headers for A4988 Stepper Motor Driver)

Motor1, Motor2 (Headers for Nema 17 [17HS3401] Stepper Motor)

Mg1, Mg2 (Headers for Magnetic Hall-effect Sensor Module [KY-003])

RV1, RV2 (Long-shaft Potentiometer [B4K7])

B1 (Headers for Logic Level Converter)

C_B1, C_B2, C_B3, C_B4, Reset1 (6x6 Pushbutton)

SSD1306 (Headers for SSD1306 OLED Display)

FT232RL1 (Headers for FTDI Adapter)

J1 (40-pin Female Header for Raspberry Pi 5)

J_5V_1, J_12V_1 (DC Barrel Female Power Jack)

J_5V_2, J_12V_2 (Headers for Power Supply)

project_image_452
project_image_453
project_image_454
project_image_455
project_image_456
project_image_457
project_image_458
project_image_459

#️⃣ I soldered the 40-pin female header (20x2) to the back of the conveyor controller PCB since I designed the PCB as a Raspberry Pi 5 shield (hat).

project_image_460
project_image_461

#️⃣ After soldering all components, I attached the cooling fan to the top of the PCB via its integrated M3 screw-nut pairs.

project_image_462
project_image_463
project_image_464
project_image_465
project_image_466
project_image_467

#️⃣ Then, I attached the remaining sensors and modules via their associated headers. I also affixed knobs to the long-shaft potentiometers to provide a more intuitive controller interface.

project_image_468
project_image_469

#️⃣ Even though I did not add a dedicated USB port to the PCB to minimize the shield footprint as much as possible, it is still possible to upload code files to the onboard ATmega328P chip by attaching the FTDI adapter to the PCB.

project_image_470
project_image_471

#️⃣ After ensuring the conveyor controller PCB operated as intended, I fastened it onto the Raspberry Pi 5 via the 40-pin header. Then, I attached the cooling fan cable to the Pi 5's dedicated fan header.

DISCLAIMER: As I was developing the controller PCB, I utilized a white SSD1306 display directly attachable to the dedicated screen header. Nonetheless, while designing mechanical components, I decided to use a blue-yellow SSD1306 display instead of the white display version. Since the blue-yellow version has VCC and GND pins swapped, it must not be connected to the dedicated header directly. Hence, I connected the blue-yellow SSD1306 display via jumper wires to the PCB.

❗ If you want to replicate this project and PCB, a directly connectable SSD1306 display must have this pinout: VCC - GND - SCL - SDA

project_image_472
project_image_473
project_image_474
project_image_475

Circular Conveyor - Step 6: Developing custom mechanical components and parts to build a full-fledged circular sprocket-chain conveyor mechanism utilizing Hall-effect sensors for accurate positioning

In the spirit of developing a proof-of-concept research project, I wanted to showcase my concept of detecting plastic surface anomalies via the direct application of UV (ultraviolet) radiation in an industrial-grade setting. Therefore, I designed this circular sprocket-chain conveyor from the ground up, including custom ball bearings and a multi-part chain.

Developing the complex mechanical parts of this circular conveyor was a strenuous process since I needed to go through five different iterations, not counting minor clearance corrections. After my adjustments, every feature of the final version of the automation mechanism worked as planned and anticipated, with one exception: after I recalibrated the chain tension with additional tension pins, the stepper motors (Nema 17) around which I designed the primary internal gears could not handle the extra torque applied to my custom-designed ball bearings (with 5 mm steel beads). Consequently, I had to record some features with the chain removed or loosened for the demonstration videos.

I heavily modified my previous data collection rig to design the dual camera stands, elongated camera lens mounts, UV light source mounts, and the plastic object carriers.

As I was working on the circular conveyor mechanism, I leveraged some open-source CAD files to obtain accurate measurements:

✒️ Nema 17 (17HS3401) Stepper Motor (Step) | Inspect

✒️ Raspberry Pi Camera Module v3 (Step) | Inspect

✒️ Raspberry Pi 5 (Step) | Inspect

The pictures below show the final version of the circular conveyor mechanism on Fusion 360. I will explain all of my design choices and assembly process thoroughly in the following steps.

project_image_476
project_image_477
project_image_478
project_image_479
project_image_480
project_image_481
project_image_482
project_image_483
project_image_484
project_image_485

As a frame of reference for those who aim to develop a similar research project, I shared the design files (STL) of each mechanical component of this circular conveyor as open-source on the project GitHub repository.

🎨 As mentioned earlier, I sliced all the exported STL files in Bambu Studio and printed them using my Bambu Lab A1 Combo. In accordance with my color theme, I utilized these PLA filaments while printing 3D parts of the circular conveyor:

  • PLA+ Peak Green
  • PLA+ Very Peri
  • Hyper Speed Orange
  • Hyper Speed Yellow
  • Hyper Speed Blue

The pictures below demonstrate the overview of the individual 3D parts of the earliest version of the circular conveyor mechanism during my initial ball bearing clearance review process. In later development stages, while going through different iterations, I modified some component designs and added chain tensioning parts as explained in the following steps.

project_image_486
project_image_487
project_image_488
project_image_489
project_image_490
project_image_491
project_image_492
project_image_493

Circular Conveyor - Step 6.a: Designing the circular conveyor sprocket driver mechanism with a custom internal gear and ball bearing

#️⃣ First, I calculated the inner and outer gear radii of the internal gear mechanism.

#️⃣ The inner circle and the outer circle must be tangent circles, intersecting at a single point.

#️⃣ Then, I utilized the built-in SpurGear script to generate gears based on the inner and outer circles.
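
#️⃣ For internally tangent pitch circles, the center distance between the pinion and the ring gear equals the difference of their pitch radii. For example (with hypothetical radii), a 30 mm ring gear paired with a 10 mm pinion places the motor shaft 20 mm from the bearing axis.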

project_image_494
project_image_495
project_image_496
project_image_497

#️⃣ By modifying the inner gear, I created the primary driver gear attachable to the shaft of the Nema 17 stepper motor.

#️⃣ Around the Nema 17 stepper motor, I designed the base of the conveyor chain driver.

#️⃣ To make the driver base easily modifiable, I designed the base shaft carrying the custom ball bearing as a separate component.

#️⃣ Based on 5 mm steel balls (beads), I designed the custom ball bearing in three parts, making adjusting bearing pressure and stress effortless.

  • Inner ring
  • Outer ring [Top]
  • Outer ring [Bottom]

#️⃣ By modifying the outer gear of the internal gear mechanism, I created the outer gear of the conveyor driver, pivoted by the custom ball bearing.

#️⃣ Finally, by using the SpurGear script, I designed the sprocket that moves the conveyor chain. As I was going through different design iterations, I heavily modified the usual spur gear layout to get the optimal results while moving the conveyor chain.

#️⃣ As discussed, I developed the circular conveyor mechanism to drive the conveyor chain via two drivers simultaneously to create a stable system. Thus, I mirrored the first conveyor driver to create the second conveyor driver.

#️⃣ Nonetheless, due to the produced angular momentum, it would not be wise to attach the conveyor chain to separated and unsupported drivers. In this regard, I designed the driver guide rails with triangular mortise and tenon joints.

project_image_498
project_image_499
project_image_500
project_image_501
project_image_502
project_image_503
project_image_504
project_image_505
project_image_506
project_image_507
project_image_508
project_image_509
project_image_510
project_image_511
project_image_512
project_image_513
project_image_514
project_image_515
project_image_516

Circular Conveyor - Step 6.a.1: Printing and assembling the circular conveyor sprocket driver mechanism

#️⃣ First, on Autodesk Fusion 360, I exported all conveyor driver components as individual STL files.

#️⃣ Then, I sliced the exported parts in Bambu Studio, providing an intuitive user interface for adjusting slicer settings even for complex structures.

#️⃣ Since the driver base shaft carries the custom ball bearing, pivoting the sprocket, I utilized the built-in height range modifiers to increase the wall loop (perimeter) number of potential weak points to 3.

#️⃣ I also increased the wall loop (perimeter) number to 3 for the custom bearing parts and the primary driver (inner) gear.

#️⃣ For the remaining components, I selected the sparse infill density as 10% instead of 15%.

project_image_517
project_image_518
project_image_519
project_image_520
project_image_521
project_image_522
project_image_523
project_image_524
project_image_525
project_image_526
project_image_527
project_image_528
project_image_529
project_image_530
project_image_531
project_image_532

#️⃣ Since threaded inserts bond by melting the surrounding plastic with heat, they reinforce M3 screw connections far better than threading screws directly into printed plastic. Hence, I utilized my TS100 soldering iron with its special heat set tip kit to install M3 brass threaded inserts into the conveyor driver base shaft to strengthen its connection with the driver base and the custom ball bearing.

project_image_533
project_image_534

#️⃣ Then, I assembled the custom ball bearing by utilizing 5 mm steel balls (beads), M3 washers, screws, and nuts. I chose not to permanently fasten the outer rings of the ball bearing since I wanted to adjust the bearing stress while building the conveyor mechanism.

project_image_535
project_image_536
project_image_537
project_image_538
project_image_539
project_image_540
project_image_541

#️⃣ I placed the Nema 17 stepper motor into its slot and installed M3 inserts to attach the stepper motor lid to the driver base via M3 threaded bolts.

#️⃣ Then, I fastened the primary driver gear (inner) to the stepper motor shaft.

project_image_542
project_image_543
project_image_544
project_image_545
project_image_546
project_image_547
project_image_548

#️⃣ After affixing the conveyor driver base shaft to the driver base via M3 screws successfully, I attached the custom ball bearing to the top of the base shaft via M3 screws through the installed M3 inserts.

project_image_549
project_image_550
project_image_551
project_image_552
project_image_553
project_image_554
project_image_555

#️⃣ After installing M3 inserts into the outer gear of the conveyor driver to strengthen (if necessary) the connections of the sprocket moving the conveyor chain, I attached the outer gear to the custom ball bearing by employing the M3 screws already tensioning the outer rings of the bearing. While attaching the outer gear, I swapped the tensioning M3 screws for longer ones to gain more clearance and added M3 washers between the ball bearing and the driver outer gear to reduce friction.

project_image_556
project_image_557
project_image_558
project_image_559
project_image_560
project_image_561
project_image_562
project_image_563
project_image_564

#️⃣ After completing the assembly of the first conveyor driver successfully, I assembled the second conveyor driver by following the exact same steps above.

project_image_565
project_image_566
project_image_567
project_image_568
project_image_569
project_image_570
project_image_571
project_image_572
project_image_573
project_image_574
project_image_575
project_image_576
project_image_577
project_image_578
project_image_579

Circular Conveyor - Step 6.b: Designing the circular conveyor controller PCB mount and camera module 3 stations (regular Wide and NoIR Wide) based on the previous data collection rig

#️⃣ As mentioned earlier, I designed the dual camera stands, UV light source holders, and the camera lens mounts by heavily modifying my previous data collection rig.

#️⃣ First, I divided the rig bases to create separate UV strip and flashlight-compatible holders, letting me change UV light sources without disturbing the camera module cases or the plastic object carriers.

#️⃣ I utilized the same camera case and filter lens designs for the color gel and the UV bandpass filters. Nevertheless, I elongated the camera case mount to ensure the focal points of the camera modules (regular Wide and NoIR Wide) aligned with the center of the plastic object surfaces carried by the plastic object carriers.

#️⃣ Based on the conveyor controller PCB outline, I designed a custom PCB case bridging the two camera stand racks while encapsulating the Raspberry Pi 5.

project_image_580
project_image_581
project_image_582
project_image_583
project_image_584
project_image_585
project_image_586
project_image_587
project_image_588
project_image_589
project_image_590

Circular Conveyor - Step 6.b.1: Printing and assembling the circular conveyor PCB mount and camera stations

#️⃣ While slicing the UV light source holders and the camera stands, I set the sparse infill density to 5% and applied the gyroid sparse infill pattern to print components that are lightweight yet as strong as possible.

#️⃣ For the remaining parts, I used the usual slicer settings.

project_image_591
project_image_592
project_image_593
project_image_594
project_image_595
project_image_596

#️⃣ To reinforce the connection between the PCB case and the camera stand racks, I installed M3 inserts and employed M3 threaded bolts.

#️⃣ Then, I attached the dedicated Hall-effect sensor mounts directly to the camera stand racks via M3 screws.

project_image_597
project_image_598
project_image_599
project_image_600
project_image_601
project_image_602
project_image_603
project_image_604
project_image_605

Circular Conveyor - Step 6.c: Affixing sprockets to the conveyor drivers and attaching the guide rails

#️⃣ Similar to the camera stands, I set the sparse infill density to 5% and applied the gyroid sparse infill pattern to make the guide rails lightweight but robust.

project_image_606
project_image_607
project_image_608
project_image_609

#️⃣ First, I attached the sprockets to the outer gears of the conveyor drivers via the pre-installed M3 screws, already tensioning the outer gears and the outer rings of the custom ball bearings.

❗ The pictures below demonstrate the very first iteration of the sprockets. While developing the conveyor mechanism, I heavily modified the sprocket design to move the conveyor chain most efficiently.

project_image_610
project_image_611
project_image_612
project_image_613
project_image_614
project_image_615
project_image_616
project_image_617

#️⃣ Then, I connected the two separate conveyor driver bases via the guide rails. Although I added holes to tighten the rail connections via M3 screws, the integrated triangular mortise and tenon joints were more than enough to move the conveyor chain stably.

project_image_618
project_image_619
project_image_620
project_image_621
project_image_622
project_image_623

Circular Conveyor - Step 6.d: Designing the chain outer and inner plates with annular snap fit joints (roller-pin connection)

#️⃣ Since I wanted to design a unique multi-part conveyor chain instead of purchasing a commercial conveyor chain, I scrutinized various production line system documentation to decide the best chain type for my use case.

#️⃣ After my research, I decided to design my chain composed of these interlocking parts:

  • Outer plate
  • Outer plate with pins
  • Inner plate
  • Inner plate with roller

#️⃣ I designed the outer plate pins as annular snap fit joints, which are suitable for high-stress applications and distribute stress uniformly. In this regard, it is possible to assemble or disassemble the chain at any length without additional tools or parts.

#️⃣ Based on the length of one chain link (the chain pitch), I calculated the number of chain links required to wrap the driver sprockets and span the distance between them, as sketched below.
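#️⃣ For reference, the link count can be approximated with the standard roller chain length formula. The Python sketch below uses hypothetical pitch, tooth count, and center distance values purely for illustration; the actual numbers come from the Fusion 360 model, not from this snippet.

import math

# Hypothetical values for illustration only — the real pitch, tooth
# counts, and center distance are defined in the Fusion 360 design.
pitch = 20.0             # length of one chain link (mm), assumed
teeth_1 = teeth_2 = 12   # tooth counts of the two identical driver sprockets, assumed
center_distance = 400.0  # distance between the sprocket axes (mm), assumed

# Chain length in pitches: 2C/p + (N1 + N2)/2 + p * (N2 - N1)^2 / (4 * pi^2 * C)
links = (2 * center_distance / pitch
         + (teeth_1 + teeth_2) / 2
         + pitch * (teeth_2 - teeth_1) ** 2 / (4 * math.pi ** 2 * center_distance))

# Round up to an even number of links so the chain closes with
# alternating inner and outer plates.
links = math.ceil(links / 2) * 2
print(f"Required chain links: {links}")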

project_image_624
project_image_625
project_image_626
project_image_627
project_image_628
project_image_629
project_image_630

Circular Conveyor - Step 6.d.1: Designing custom plastic object carriers compatible with neodymium magnets and aligning Hall-effect sensor mounts

#️⃣ After estimating the chain length and simulating the fully wrapped chain on Fusion 360, I was able to design the plastic object carrier structure precisely.

#️⃣ Since I had already aligned the focal points of the camera modules and the center of the target plastic object surfaces, I derived the base of the plastic object carrier by directly encasing the target objects.

#️⃣ By knowing the distance between the bottom of the plastic object carrier and the top of the conveyor chain, I designed two separate pins to attach the object carrier to an outer chain link.

#️⃣ To the back of the carrier base, I added slots for one circular neodymium magnet (8 mm) and two rectangular neodymium magnets (10 mm x 5 mm). Since I did not want to use fasteners, I specifically designed snap-fit slots that retain the magnets through the strain of the flexed plastic.

#️⃣ To design an accurate Hall-effect sensor mount attachable to the camera stands, I took the precise measurements of the module via my caliper. Then, I made sure the center of the Hall-effect sensor (placed at the front of the module) aligned with the center of the circular neodymium magnet under the plastic object carrier, leading to optimal sensor readings while moving the conveyor chain.

#️⃣ After successfully designing the first plastic object carrier based on the target object, I copied a plastic carrier onto every second outer chain link to simulate the final state of the conveyor chain.

project_image_631
project_image_632
project_image_633
project_image_634
project_image_635
project_image_636
project_image_637
project_image_638
project_image_639
project_image_640
project_image_641
project_image_642
project_image_643
project_image_644
project_image_645
project_image_646

Circular Conveyor - Step 6.d.2: Printing and assembling the conveyor chain links and plastic carriers

#️⃣ Since the outer plate pins carry the most load and must distribute stress while the conveyor chain moves, I decided to boost outer plate strength by printing their pins as solid plastic. In this regard, I increased the wall loop (perimeter) count to 4 while slicing them in Bambu Studio, as the quick check below illustrates.
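#️⃣ As a rough sanity check (the exact pin diameter is defined in the Fusion 360 model, and the 0.42 mm line width is only the typical Bambu Studio default for a 0.4 mm nozzle), four wall loops are enough to fill a small pin cross-section entirely with perimeters:

# Assumed values for illustration only.
line_width = 0.42   # extrusion line width (mm), assumed slicer default
wall_loops = 4

# Walls grow inward from both sides of the perimeter, so any pin whose
# diameter is at most 2 * wall_loops * line_width contains no infill.
max_solid_diameter = 2 * wall_loops * line_width
print(f"Pins up to {max_solid_diameter:.2f} mm across print as solid plastic")  # ~3.36 mm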

#️⃣ For the remaining chain parts, I utilized the usual slicer settings.

project_image_647
project_image_648
project_image_649
project_image_650
project_image_651
project_image_652
project_image_653
project_image_654

#️⃣ To reduce the total chain weight while preserving its rigidity, I also sliced all plastic object carrier components with the usual settings.

project_image_655
project_image_656
project_image_657
project_image_658

#️⃣ I started to assemble the conveyor chain with two inner chain links and one outer chain link.

#️⃣ After testing the flexibility of the first connected chain links, I proceeded to complete the assembly of the whole conveyor chain.

project_image_659
project_image_660
project_image_661
project_image_662
project_image_663
project_image_664
project_image_665

#️⃣ Then, I attached neodymium magnets to each plastic object carrier base via their dedicated snap fit slots.

  • 1 x Circular neodymium magnet [8 mm]
  • 2 x Rectangular neodymium magnets [10 mm x 5 mm]
project_image_666
project_image_667
project_image_668
project_image_669
project_image_670
project_image_671

#️⃣ Finally, I connected the plastic object carrier bases to every second outer chain link via the carrier pins by using M3 screws.

project_image_672
project_image_673
project_image_674
project_image_675
project_image_676

Circular Conveyor - Step 6.f: Overhauling my component design mistakes and recalibrating the chain tension by adding tensioning pins and a tensioning clip to fix chain sag

As discussed earlier, I needed to go through different iterations to overhaul some of my design mistakes. I omitted the minor iterations caused by clearance issues, as they did not impact the final version of the conveyor mechanism. Nevertheless, I needed to heavily modify some mechanical components and change the final mechanism to incorporate the major design iterations outlined below.

All of the major issues stemmed from my faulty simulations of mechanical component attributes and interactions.

#️⃣ First, there was too much friction between the chain links and the sprockets due to the length of the gear teeth, even though my Fusion 360 assessment suggested the sprockets would move the conveyor chain perfectly. In hindsight, such friction issues were plausible, since even the quality of the sprocket surface finish can cause extra friction or clearance problems.

#️⃣ In this regard, I iterated on the sprocket design until achieving optimal results while moving the conveyor chain.

project_image_677
project_image_678
project_image_679
project_image_680
project_image_681
project_image_682

#️⃣ After solving the friction issues, the conveyor chain was moving smoothly. However, there was an even bigger problem: the conveyor chain sagged more than I had calculated, leaving the neodymium magnets under the plastic object carrier bases misaligned with the Hall-effect sensor modules.

project_image_683
project_image_684
project_image_685

#️⃣ After studying my previous simulations, I concluded that I had missed the weight of the additional perimeters of the outer plates and the tilt of my floor while estimating the chain sag.

#️⃣ Therefore, I needed to recalibrate the chain tension to ensure the carriers aligned with the magnetic sensors. After mulling over countless solutions, I decided to add tensioning pins to the conveyor chain.
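#️⃣ For context, a first-order way to relate chain sag and tension is the parabolic approximation s ≈ wL²/(8T), where w is the chain weight per unit length, L the free span, and T the tension. The Python sketch below rearranges it for the tension needed to stay within a sag budget; every value is assumed for illustration, not measured from my conveyor or taken from my Fusion 360 simulations.

# Assumed values for illustration only.
span = 0.40          # free chain span between the sprockets (m), assumed
weight_per_m = 3.0   # chain weight per unit length (N/m), assumed
max_sag = 0.005      # largest sag that keeps the magnets aligned (m), assumed

# Midspan sag of a lightly tensioned chain: s = w * L^2 / (8 * T).
# Rearranged for the tension required to stay within the sag budget:
required_tension = weight_per_m * span ** 2 / (8 * max_sag)
print(f"Required chain tension: {required_tension:.1f} N")  # 12.0 N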

#️⃣ Since I added tensioning pins to every second outer chain link, I was able to disassemble the conveyor chain at different points to estimate the force required to tension the chain enough for realignment. Basically, I utilized twine to reconnect the separated chain links and counted how many times I wrapped it until obtaining the required tension.

#️⃣ After estimating the required force, I realized that I could tension the conveyor chain by removing two inner chain links with one outer chain link and adding a tensioning clip instead.

#️⃣ Since every second outer chain link was connected to a plastic object carrier, I would have had to discard one object carrier while tensioning the conveyor chain via the tensioning clip. Therefore, I designed the tensioning clip to hold the displaced carrier at the same level as the outer chain links.

#️⃣ Of course, adding this much unplanned tension to the conveyor chain meant the Nema 17 stepper motors could not handle the extra torque applied to my custom-designed ball bearings (with 5 mm steel beads). Thus, as mentioned earlier, I needed to record some features related to sprocket movements (affixed to the outer gears pivoted on the ball bearings) with the chain removed or loosened for the demonstration videos.

project_image_686
project_image_687
project_image_688
project_image_689
project_image_690
project_image_691
project_image_692
project_image_693
project_image_694
project_image_695
project_image_696
project_image_697
project_image_698
project_image_699
project_image_700
project_image_701
project_image_702
project_image_703
project_image_704
project_image_705
project_image_706
project_image_707
project_image_708
project_image_709
project_image_710

Circular Conveyor - Step 6.g: Assembling unique camera filter lenses and attaching the Raspberry Pi 5 with the circular conveyor controller PCB (shield) to the camera stations

#️⃣ Due to the elongated camera case mount, I needed to strengthen the camera module case parts to ensure the camera modules would not shake while moving the plastic object carriers. Thus, I increased the wall loop (perimeter) number to 4 while slicing them on Bambu Studio.

#️⃣ For the same reason, I also sliced the Hall-effect sensor module mounts with extra perimeters.

project_image_711
project_image_712
project_image_713
project_image_714

#️⃣ The assembly of the multi-part camera module cases was the same as the camera case of the data collection rig because I only modified the length of the camera case mount.

#️⃣ Even though I printed new camera case parts with increased perimeters, I decided to utilize the previously printed camera filter lenses, as I had already permanently affixed the glass UV bandpass filter to its dedicated camera lens.

project_image_715
project_image_716
project_image_717
project_image_718
project_image_719
project_image_720
project_image_721

#️⃣ After assembling the camera module cases of regular Wide and NoIR Wide camera modules, I fastened the Raspberry Pi 5 to the PCB case via M2 screws, nuts, and washers. To raise the Raspberry Pi 5 from the PCB case surface, I utilized extra M2 nuts.

#️⃣ Then, I attached the camera module case mounts to the racks of the camera stands via eight M3 screw-nut pairs. Although separated, the camera stand racks are identical to those of the previous data collection rig bases, leading to consistent results while collecting new UV-applied plastic surface images.

#️⃣ After connecting the power supply to the Raspberry Pi 5, I attached the conveyor controller PCB to the Raspberry Pi 5 via its 40-pin female header. Since I specifically designed the PCB case edges to support the heavy side of the conveyor PCB to reduce the load on the pin header, I did not encounter any connection problems.

project_image_722
project_image_723
project_image_724
project_image_725
project_image_726
project_image_727
project_image_728

#️⃣ Finally, I fastened the Hall-effect sensor modules to their dedicated mounts on the camera stands via a hot glue gun.

project_image_729

#️⃣ After making sure the FPC camera connection cables were firmly attached to the camera cases using zip ties, I concluded the PCB case assembly.

project_image_730
project_image_731
project_image_732
project_image_733
project_image_734

Circular Conveyor - Step 6.h: Positioning Hall-effect sensors, plastic objects, and UV light sources

❗ As I was completing the assembly of the final version of the circular conveyor mechanism, I decided to swap the white SSD1306 display for the blue-yellow SSD1306 display. As mentioned earlier, I needed to connect the blue-yellow version via jumper wires since its VCC and GND pins are swapped. Therefore, I decided to fasten the blue-yellow SSD1306 screen to the front of the first camera stand rack via the hot glue gun.

⚙️ Positioning the camera stand racks bridged by the conveyor controller PCB case between the guide rails and aligning the Hall-effect sensors with the neodymium magnets:

project_image_735
project_image_736
project_image_737
project_image_738
project_image_739

⚙️ Installing UV light sources into their respective holders in the flashlight format and the strip format:

project_image_740
project_image_741
project_image_742
project_image_743

⚙️ Placing the color gel filters into the dedicated camera filter lens:

project_image_744
project_image_745
project_image_746
project_image_747
project_image_748

⚙️ Putting the plastic objects into their dedicated carriers connected to the conveyor chain:

project_image_749
project_image_750
project_image_751
project_image_752
project_image_753
project_image_754
project_image_755
project_image_756
project_image_757

⚙️ Testing camera modules (regular Wide and NoIR Wide), UV light sources (275 nm, 365 nm, and 395 nm), and the Hall-effect sensors:

project_image_758
project_image_759
project_image_760
project_image_761
project_image_762
project_image_763
project_image_764
project_image_765
project_image_766
project_image_767
project_image_768
project_image_769
project_image_770
project_image_771
project_image_772

Circular Conveyor - Step 7: Creating an account to utilize Twilio's SMS API

Even before starting to develop the web dashboard of the circular conveyor, I knew that I wanted to enable the web dashboard to inform the user of the detected plastic surface anomalies via SMS. Thus, I decided to utilize Twilio's SMS API since Twilio provides a trial text messaging service to transfer an SMS from a virtual phone number to a verified phone number internationally. Furthermore, there are official Twilio helper libraries for different programming languages, including PHP, which simplify working with its suite of APIs.

#️⃣ First, to be able to access trial services, I navigated to the Account section and created a new account, which is a container for Twilio applications.

project_image_773
project_image_774
project_image_775

#️⃣ After verifying my phone number for the newly created account (container), I configured the initial account settings for implementing the Twilio SMS API in PHP.

project_image_776
project_image_777
project_image_778
project_image_779
project_image_780

#️⃣ To enable the SMS service, I navigated to Messaging ➡ Send an SMS and obtained a free 10DLC virtual phone number.

project_image_781
project_image_782

#️⃣ Then, I tested the trial SMS service by sending a message to my verified phone number via the Twilio web interface.

project_image_783
project_image_784

#️⃣ If the Twilio console throws a permission error, you might need to go to the Geo permissions section to add your country to the allowed recipients.

project_image_785
project_image_786

#️⃣ After adjusting allowed recipients, I was able to send the test message from the console without a problem.

project_image_787

#️⃣ After making sure the Twilio SMS service worked as anticipated, I navigated to the Account Info section to obtain the required account credentials (SID and auth token).

#️⃣ Finally, I installed the Twilio PHP helper library to enable the web dashboard to access the SMS API locally for transferring notification messages.

project_image_788

Circular Conveyor - Step 8: Developing a feature-rich circular conveyor web dashboard to observe and sort the latest inference results on Raspberry Pi 5

As discussed earlier, I decided to develop a web dashboard for the circular conveyor mechanism to allow the user to observe the latest inference results in real-time and sort them by the camera module type — regular Wide or NoIR Wide, leading to pinpointing plastic surface anomalies by object more easily. As mentioned in the previous step, I also enabled the web dashboard to employ Twilio's SMS API to inform the user of the latest detected plastic surface anomalies. To ensure the web dashboard could access the latest inference results without any issues, I developed the web dashboard as I was setting up the FOMO-AD visual anomaly detection model on Raspberry Pi 5. I explained the web dashboard code files thoroughly in the following steps. Nonetheless, you can refer to the project GitHub repository if you want to download or inspect the code files directly.

The directory structure (alphabetically) of the circular conveyor web dashboard is as follows, under surface_defect_detection_dashboard as the application root folder:

  • /anomaly_detection
    • /fomo_ad_model
      • ai-driven-plastic-surface-defect-detection-via-uv-exposure-linux-aarch64-v1.eim
    • /inference_results
    • uv_defect_detection_run_inference_w_rasp_5_camera_mod_wide_and_noir.py
  • /assets
    • /img
    • /script
      • dashboard_update.js
      • index.js
    • /style
      • index.css
      • root_variables.css
    • /twilio-php-main
    • anomaly_update.php
    • class.php
    • create_necessary_database_tables.sql
    • database_secrets.php
    • settings_update.php
  • index.php
project_image_789
project_image_790
project_image_791
project_image_792
project_image_793
project_image_794
project_image_795
project_image_796
project_image_797
project_image_798

Circular Conveyor - Step 8.1: Constructing the necessary database tables on MariaDB

Since I had already set up the Apache server with the MariaDB database to develop the web dashboard on the Raspberry Pi 5, I was able to configure the required database settings on the terminal effortlessly.

#️⃣ First, I created a new MariaDB database named surface_detection by utilizing the integrated terminal prompt.

sudo mysql -uroot -p

create database surface_detection;

GRANT ALL PRIVILEGES ON surface_detection.* TO 'root'@'localhost' IDENTIFIED BY '';

FLUSH PRIVILEGES;

#️⃣ Then, by running these SQL commands on the terminal, I created two new database tables with the necessary data fields and inserted the initial dashboard states into the associated database table.

use surface_detection;

CREATE TABLE `notification_settings`(id int AUTO_INCREMENT PRIMARY KEY NOT NULL, cam_regular varchar(255), cam_noir varchar(255), sms_twilio varchar(255) );

INSERT INTO `notification_settings` (`cam_regular`, `cam_noir`, `sms_twilio`) VALUES ("activated", "activated", "activated");

CREATE TABLE `anomaly_results`( id int AUTO_INCREMENT PRIMARY KEY NOT NULL, cam_type varchar(255), detection varchar(255), img_file varchar(255), station_num varchar(255), detection_time varchar(255), server_time varchar(255) );

project_image_799
project_image_800
project_image_801
project_image_802
project_image_803
project_image_804
project_image_805
project_image_806

#️⃣ As mentioned, I developed the web dashboard and configured the FOMO-AD visual anomaly detection model simultaneously. In this regard, I needed to clear the inference results from the associated database table a few times during my experiments until the web dashboard showed the inference results as intended. To achieve this, I dropped and recreated the associated database table by running these SQL commands.

DROP TABLE `anomaly_results`;

CREATE TABLE `anomaly_results`( id int AUTO_INCREMENT PRIMARY KEY NOT NULL, cam_type varchar(255), detection varchar(255), img_file varchar(255), station_num varchar(255), detection_time varchar(255), server_time varchar(255) );

project_image_807

Circular Conveyor - Step 8.2: Setting up the FOMO-AD (visual anomaly detection) model on Raspberry Pi 5

After installing my FOMO-AD visual anomaly detection model as an EIM binary for Linux (AARCH64) on the Raspberry Pi 5, I needed to configure some permission settings to integrate the FOMO-AD model into the web dashboard successfully.

#️⃣ Since the child directories and files under the root folder of the Apache server are restricted, I changed permissions to enable file creation and modification while running the web dashboard.

sudo chmod 777 /var/www/html

project_image_808

#️⃣ Since I copied the EIM binary — Linux (AARCH64) — after changing the root folder permissions, I changed the file permissions of the binary specifically to make it executable.

sudo chmod 777 /var/www/html/surface_defect_detection_dashboard/anomaly_detection/fomo_ad_model/ai-driven-plastic-surface-defect-detection-via-uv-exposure-linux-aarch64-v1.eim

project_image_809

Circular Conveyor - Step 8.3: Thorough file-by-file code documentation of the conveyor web dashboard

📁 create_necessary_database_tables.sql

⭐ Necessary SQL commands to create the required database tables with the initial states in the MariaDB database.

CREATE TABLE `notification_settings`(id int AUTO_INCREMENT PRIMARY KEY NOT NULL, cam_regular varchar(255), cam_noir varchar(255), sms_twilio varchar(255) );

INSERT INTO `notification_settings` (`cam_regular`, `cam_noir`, `sms_twilio`) VALUES ("activated", "activated", "activated");

CREATE TABLE `anomaly_results`( id int AUTO_INCREMENT PRIMARY KEY NOT NULL, cam_type varchar(255), detection varchar(255), img_file varchar(255), station_num varchar(255), detection_time varchar(255), server_time varchar(255) );

DROP TABLE `anomaly_results`;
project_image_810

📁 database_secrets.php

⭐ Enable the PHP-based MariaDB database connection via the integrated MySQLi extension.

// Database info.
$server = array(
	"server" => "localhost",
	"username" => "root",
	"password" => "",
	"database_name" => "surface_detection"
);

// Database connection credentials.
$_db_conn = mysqli_connect($server["server"], $server["username"], $server["password"], $server["database_name"]);
project_image_811

📁 class.php

#️⃣ To bundle all the functions and write a more concise script, I used a PHP class.

⭐ Import the required MariaDB database connection settings.

⭐ Include the Twilio PHP helper library and its required modules.

include_once "database_secrets.php";

// Include the Twilio PHP Helper Library. 
require_once 'twilio-php-main/src/Twilio/autoload.php';
use Twilio\Rest\Client;

⭐ Declare the necessary Twilio account (container) and phone number (trial and registered) information.

	private $twilio_info = array(
									"sid" => "<__SID__>",
									"token" => "<__TOKEN__>",
									"to_phone" => "+16__________",
									"from_phone" => "+16_________"
	                            );	

⭐ In the __init__ function:

⭐ Integrate the previously declared MySQL object with the passed database credentials into this PHP class.

⭐ Declare a new Twilio client instance (object).

	public function __init__($_db_conn){
		// Init the MySQL object with the passed database credentials.
		$this->db_conn = $_db_conn;
		// Declare a new Twilio client instance (object). 
		$this->twilio = new Client($this->twilio_info["sid"], $this->twilio_info["token"]);		
	}

⭐ In the send_sms function, transfer the given text message as an SMS to the registered phone number through the Twilio SMS API.

	protected function send_sms($message){
		$message = $this->twilio->messages
								->create($this->twilio_info["to_phone"], // to
										array(
												"from" => $this->twilio_info["from_phone"],
												"body" => $message
											 )
										);
		echo "Sent SMS SID: ".$message->sid;
	}

⭐ In the obtain_not_settings function, obtain the latest dashboard status states from the associated MariaDB database table.

	public function obtain_not_settings(){
		$sql = "SELECT * FROM `$this->not_set_table` WHERE `id` = 1";
		$result = mysqli_query($this->db_conn, $sql);
		$check = mysqli_num_rows($result);
		if($check > 0){
			// If found successfully, return the registered notification settings.
			if($row = mysqli_fetch_assoc($result)){
				return $row;
			}else{
				return false;
			} 
		}else{
			return false;
		}		
	}

⭐ In the update_not_setting function, update the given dashboard status state with the passed value.

    public function update_not_setting($setting, $value){
		$sql = "UPDATE `$this->not_set_table` SET `$setting` = '$value' WHERE `id` = 1;";			
		// Return the query result.
		return (mysqli_query($this->db_conn, $sql)) ? true : false;		
	}

⭐ In the fetch_anomaly_results function:

⭐ First, obtain the latest dashboard status states from the associated database table.

⭐ According to the fetched dashboard status states of the regular Wide and NoIR Wide camera modules, obtain the surface anomaly detection logs (results) from the associated MariaDB database table so that the anomaly results are sorted by the user's choices.

⭐ After getting the surface anomaly detection logs (results), generate a section HTML element for each retrieved entry while appending the produced HTML elements to the main HTML content string.

⭐ If there are no detection logs, create the main HTML content string accordingly.

⭐ After processing the fetched anomaly detection information successfully, return the main HTML content string.

	public function fetch_anomaly_results(){
		// Obtain the latest notification setting values.
		$notification_vals = $this->obtain_not_settings();
        // Based on the given notification settings, obtain surface anomaly detection results from the associated MariaDB database table.
        $sql = "";
		$html_content = '';
        if($notification_vals["cam_regular"] == "activated" && $notification_vals["cam_noir"] == "activated"){ $sql = "SELECT * FROM `$this->result_table` ORDER BY `id` DESC"; }
		else if($notification_vals["cam_regular"] == "activated"){ $sql = "SELECT * FROM `$this->result_table` WHERE `cam_type` = 'regular' ORDER BY `id` DESC"; }
		else if($notification_vals["cam_noir"] == "activated"){ $sql = "SELECT * FROM `$this->result_table` WHERE `cam_type` = 'noir' ORDER BY `id` DESC"; }
		$result = mysqli_query($this->db_conn, $sql);
		$check = mysqli_num_rows($result);
		if($check > 0){
			while($row = mysqli_fetch_assoc($result)){
				// If there are surface anomaly detection logs (entries), generate HTML elements from each retrieved entry. 
				$html_element = '<section class="'.$row["cam_type"].' '.$row["detection"].'">
								 <span>'.$row["station_num"].'</span>
								 <img src="anomaly_detection/'.$row["img_file"].'" />
								 <h2>'.ucfirst($row["detection"]).'</h2>
								 <p>'.$row["detection_time"].'</p>
								 </section>';

				// Then, add the produced HTML element to the main HTML content.
                $html_content .= $html_element;			
			} 
		}else{
			$html_content = '<section>
			                 <span>💾</span>
							 <img src="assets/img/raspberrry_pi_logo.png" />
							 <h2>No Entry!</h2>
							 <p>MariaDB</p>
							 </section>';
		}
		// After processing the fetched anomaly detection information successfully, return the main HTML content.
		return $html_content;
	}

⭐ In the insert_anomaly_log_and_inform_via_SMS function:

⭐ First, obtain the latest dashboard status states from the associated database table.

⭐ Get the current date & time (server).

⭐ Insert the passed surface anomaly detection log (result) into the associated MariaDB database table.

⭐ If the dashboard status state of the Twilio integration is enabled, inform the user of the given anomaly detection log by sending an SMS via the Twilio SMS API.

❗ I noticed that, after a while, the Twilio SMS API stopped transferring SMS messages with more than two message segments (140-byte chunks) on my trial account. Thus, I needed to shorten my notification text messages. For paid accounts, you can apply (uncomment) the longer version with multiple segments.

	public function insert_anomaly_log_and_inform_via_SMS($log){
		// Obtain the latest notification setting values.
		$notification_vals = $this->obtain_not_settings();		
		// Get the current date & time (server).
		$date = date("Y_m_d_h_i_s");
		// Insert the passed log to the associated MariaDB database table.
		$sql = "INSERT INTO `$this->result_table` (`cam_type`, `detection`, `img_file`, `station_num`, `detection_time`, `server_time`)
				VALUES ('".$log["cam_type"]."', '".$log["detection"]."', '".$log["img_file"]."', '".$log["station_num"]."', '".$log["detection_time"]."', '$date');";	
        /*
			Once the new anomaly log is registered successfully to the database table, inform the user of the latest detection results
			by sending an SMS via Twilio if the associated notification settings are enabled.
		*/
		if(mysqli_query($this->db_conn, $sql)){
			echo "Registered successfully!<br><br>";
			if($log["detection"] == "anomaly" && $notification_vals["sms_twilio"] == "activated"){
				$message =  "192.168.1.23/surface_defect_detection_dashboard/anomaly_detection/".$log["img_file"];
				// Uncomment for paid accounts with more SMS segments.
				//$message =  "⚠️ Surface Anomaly Detected \n\r\n\r📸 ".ucfirst($log["cam_type"])."\n\r\n\r#️⃣ ".$log["station_num"]."\n\r\n\r🖼️".$log["img_file"]."\n\r\n\r⏱️ ".$log["detection_time"]."\n\r\n\r⏰ ".$date;
				$this->send_sms($message);				
			}
		}else{
			echo "Database error [Insert]!";
		}			
	}
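#️⃣ To make the segment limit above more concrete: emojis push an SMS into the UCS-2 encoding, which cuts each 140-byte segment from 153 down to 67 characters, so the emoji-laden template fills segments quickly. The Python sketch below is a simplified estimate (it ignores GSM-7 extension characters), not Twilio's exact accounting.

# Simplified SMS segment estimate — illustration only, not Twilio's billing logic.
GSM7_BASIC = set(
    "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
    "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑÜ§¿abcdefghijklmnopqrstuvwxyzäöñüà"
)

def estimate_segments(message: str) -> int:
    if all(ch in GSM7_BASIC for ch in message):
        # GSM-7: 160 characters in one segment, 153 per segment when concatenated.
        return 1 if len(message) <= 160 else -(-len(message) // 153)
    # UCS-2 (required for emojis): 70 characters single, 67 per concatenated segment.
    return 1 if len(message) <= 70 else -(-len(message) // 67)

print(estimate_segments("192.168.1.23/surface_defect_detection_dashboard/..."))  # 1 segment
print(estimate_segments("⚠️ Surface Anomaly Detected " + "x" * 150))             # 3 segments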
project_image_812
project_image_813
project_image_814

📁 dashboard_update.js

⭐ Every 2 seconds, make an HTTP POST request (jQuery Ajax) to the associated PHP file to obtain the latest surface anomaly detection logs (results). After obtaining the HTML content string derived from the anomaly detection logs, update the target HTML element's content accordingly.

setInterval(() => {
	// Obtain the required updates from the database.
	$.ajax({
		url: "assets/anomaly_update.php",
		type: "POST",
		data: {"get_html_content": "OK"},
		success: (response) => {
				// After getting the produced anomaly logs, update the associated HTML element's content accordingly.
                $(".container").html(response);				
			}
		});	
}, 2000);
project_image_815

📁 index.js

⭐ In the update_not_setting function, make an HTTP GET request (jQuery Ajax) to the associated PHP file to update the given dashboard status state with the provided value.

function update_not_setting(setting, value){
	$.ajax({
		url: "assets/settings_update.php?setting=" + setting + "&value=" + value,
		type: "GET",
		success: (response) => {
			console.log("Notification Setting [" + setting + "] updated to: " + value);
		}
	});	
}

⭐ Once a dashboard status button (toggle switch) is clicked, toggle its last position by assigning the associated animation class (style) and update the corresponding status state value in the associated database table accordingly.

$("#cam_regular").on("click", function(event){
	let toggle = $(this).find("span");
	if(toggle.hasClass("anim_setting_activated") || $(this).hasClass("activated")){
		toggle.removeClass("anim_setting_activated");
		toggle.addClass("anim_setting_disabled");
		// Update the setting value accordingly.
		update_not_setting("cam_regular", "disabled");
	}else{
		if(toggle.hasClass("anim_setting_disabled")) toggle.removeClass("anim_setting_disabled");
		toggle.addClass("anim_setting_activated");
		// Update the setting value accordingly.
		update_not_setting("cam_regular", "activated");		
	}
});
$("#cam_noir").on("click", function(event){
	let toggle = $(this).find("span");
	if(toggle.hasClass("anim_setting_activated") || $(this).hasClass("activated")){
		toggle.removeClass("anim_setting_activated");
		toggle.addClass("anim_setting_disabled");
		// Update the setting value accordingly.
		update_not_setting("cam_noir", "disabled");
	}else{
		if(toggle.hasClass("anim_setting_disabled")) toggle.removeClass("anim_setting_disabled");
		toggle.addClass("anim_setting_activated");
		// Update the setting value accordingly.
		update_not_setting("cam_noir", "activated");		
	}
});
$("#sms_twilio").on("click", function(event){
	let toggle = $(this).find("span");
	if(toggle.hasClass("anim_setting_activated") || $(this).hasClass("activated")){
		toggle.removeClass("anim_setting_activated");
		toggle.addClass("anim_setting_disabled");
		// Update the setting value accordingly.
		update_not_setting("sms_twilio", "disabled");
	}else{
		if(toggle.hasClass("anim_setting_disabled")) toggle.removeClass("anim_setting_disabled");
		toggle.addClass("anim_setting_activated");
		// Update the setting value accordingly.
		update_not_setting("sms_twilio", "activated");		
	}
});

⭐ After the assigned animation is completed, modify the appearance of the target toggle switch accordingly.

⭐ In the case of disabling a camera type dashboard status (regular or NoIR), suspend the remaining camera type switch to avoid data omission while sorting surface anomaly detection results.

$(".header > section > article > span").on("animationend", function(event){
	let notf_set_button = $(this).parent();
	if($(this).hasClass("anim_setting_activated")){
		if(!notf_set_button.hasClass("activated")) notf_set_button.addClass("activated");
	}
	if($(this).hasClass("anim_setting_disabled")){
		if(notf_set_button.hasClass("activated")) notf_set_button.removeClass("activated");
	}
    // Once a camera notification setting is disabled, suspend the corresponding camera setting to avoid data omission.
	let target = $(this).parent().attr("id");
    if(target == "cam_regular"){
		if($("#cam_noir").hasClass("suspended")){ $("#cam_noir").removeClass("suspended"); }
		else{ $("#cam_noir").addClass("suspended"); }
	}
    if(target == "cam_noir"){
		if($("#cam_regular").hasClass("suspended")){ $("#cam_regular").removeClass("suspended"); }
		else{ $("#cam_regular").addClass("suspended"); }
	}	
	
});
project_image_816
project_image_817

📁 index.php

⭐ Include the class.php file to integrate the required functions and define the anomaly_result class object.

require "assets/class.php";

// Define the anomaly_result_obj class object.
$anomaly_result_obj = new anomaly_result(); 
$anomaly_result_obj->__init__($_db_conn);

⭐ Then, obtain the latest dashboard status states from the associated MariaDB database table.

$notification_vals = $anomaly_result_obj->obtain_not_settings();

⭐ According to the retrieved status states, modify the appearances of dashboard status buttons (toggle switches) by applying the associated CSS classes.

<section>
<article id="cam_regular" class="<?php echo (($notification_vals["cam_regular"] == "disabled") ? "disabled" : (($notification_vals["cam_noir"] == "disabled") ? "activated suspended" : "activated")) ?> ">
<span></span>
</article>
<article id="sms_twilio" class="<?php echo ($notification_vals["sms_twilio"] == "activated") ? "activated" : ""; ?> ">
<span></span>
</article>
<article id="cam_noir" class="<?php echo (($notification_vals["cam_noir"] == "disabled") ? "disabled" : (($notification_vals["cam_regular"] == "disabled") ? "activated suspended" : "activated")) ?> ">
<span></span>
</article>
</section>
project_image_818
project_image_819

📁 settings_update.php

⭐ Include the class.php file to integrate the required functions and define the anomaly_result class object.

require "class.php";

// Define the anomaly_result_obj class object.
$anomaly_result_obj = new anomaly_result(); 
$anomaly_result_obj->__init__($_db_conn);

⭐ Once requested, update the given dashboard status state in the associated MariaDB database table with the provided value.

if(isset($_GET["setting"]) && isset($_GET["value"])){
	$anomaly_result_obj->update_not_setting($_GET["setting"], $_GET["value"]);
}
project_image_820

📁 anomaly_update.php

⭐ Include the class.php file to integrate the required functions and define the anomaly_result class object.

// Include the required class functions.
require "class.php";

// Define the anomaly_result_obj class object.
$anomaly_result_obj = new anomaly_result(); 
$anomaly_result_obj->__init__($_db_conn);

⭐ Once requested, produce the main HTML content string by processing the surface anomaly detection logs (results).

if(isset($_POST["get_html_content"])){
	echo $anomaly_result_obj->fetch_anomaly_results();
}

⭐ Once requested via an HTTP GET request in the form of a query (URL) parameter array, insert the provided surface anomaly detection log (result) information into the associated MariaDB database table.

../anomaly_update.php?anomaly_log[cam_type]=noir&anomaly_log[detection]=normal&anomaly_log[img_file]=normal_10_17_2025_04_17_23.jpg&anomaly_log[station_num]=11&anomaly_log[detection_time]=10_17_2025_04_17_23

if(isset($_GET["anomaly_log"])){
	$anomaly_result_obj->insert_anomaly_log_and_inform_via_SMS($_GET["anomaly_log"]);
}

⭐ Once requested via an HTTP POST request in the form of a JSON object literal, insert the provided surface anomaly detection log (result) information into the associated MariaDB database table.

data: {"anomaly_log": {"cam_type": "regular", "detection": "anomaly", "img_file": "anomaly_10_18_2025_07_07_30.jpg", "station_num": 12, "detection_time": "10_18_2025_07_07_30"}}

if(isset($_POST["anomaly_log"])){
	$anomaly_result_obj->insert_anomaly_log_and_inform_via_SMS($_POST["anomaly_log"]);
}

#️⃣ I decided to make this webhook accept both HTTP GET and POST requests while registering new anomaly detection logs, providing a more flexible API for the web dashboard.
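#️⃣ As a brief usage sketch (with placeholder field values), both request styles hit the same endpoint from Python via the requests library, mirroring the register_inference_info function of the inference script documented later in this step:

import requests

# Placeholder log values for illustration only.
url = "http://localhost/surface_defect_detection_dashboard/assets/anomaly_update.php"
log = {
    "anomaly_log[cam_type]": "noir",
    "anomaly_log[detection]": "anomaly",
    "anomaly_log[img_file]": "inference_results/anomaly_noir_3__2025_10_18_07_07_30.jpg",
    "anomaly_log[station_num]": 3,
    "anomaly_log[detection_time]": "10_18_2025_07_07_30",
}

# HTTP GET: the anomaly_log array travels as query (URL) parameters.
print(requests.get(url, params=log).text)

# HTTP POST: the same array travels in the request body.
print(requests.post(url, data=log).text)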

project_image_821

📁 index.css and root_variables.css

⭐ Please refer to the project GitHub repository to review the circular conveyor web dashboard design (styling) files.

project_image_822
project_image_823
project_image_824

📁 uv_defect_detection_run_inference_w_rasp_5_camera_mod_wide_and_noir.py

⭐ Include the required system and third-party libraries.

⭐ Uncomment to modify the libcamera log level to bypass the libcamera warnings if you want clean shell messages while running inferences.

import serial
import cv2
from picamera2 import Picamera2, Preview
from time import sleep
from threading import Thread
from edge_impulse_linux.image import ImageImpulseRunner
import os
import datetime
import requests
import json

# Uncomment to disable libcamera warnings while collecting data.
#os.environ["LIBCAMERA_LOG_LEVELS"] = "4"

#️⃣ To bundle all the functions and write a more concise script, I used a Python class.

⭐ In the __init__ function:

⭐ Define a picamera2 object addressing the CSI port of the Raspberry Pi camera module 3 Wide.

⭐ Define the output format and size (resolution) of the images captured by the regular camera module 3 to obtain an OpenCV-compatible buffer — RGB888. Then, configure the picamera2 object accordingly.

⭐ Initialize the video stream (feed) produced by the regular camera module 3.

⭐ Define a secondary picamera2 object addressing the CSI port of the Raspberry Pi camera module 3 NoIR Wide.

⭐ Define the output format and size (resolution) of the images captured by the camera module 3 NoIR to obtain an OpenCV-compatible buffer — RGB888. Then, configure the picamera2 object accordingly.

⭐ Initialize the video stream (feed) produced by the camera module 3 NoIR.

⭐ Declare the directory path to access the Edge Impulse FOMO-AD (visual anomaly detection) model.

⭐ Then, based on the previous experiments, define the anomaly (confidence) threshold.

⭐ Declare the circular conveyor plastic carrier (station) number parameters to enable the web dashboard to track plastic objects by carriers while transferring the inference results to it.

⭐ Initialize serial communication between the ATmega328P chip and the Raspberry Pi 5 through the built-in UART GPIO pins.

class uv_defect_detection():
    def __init__(self, model_file):
        # Define the Picamera2 object for communicating with the Raspberry Pi camera module 3 Wide.
        self.cam_wide = Picamera2(0)
        # Define the camera module frame output format and size, considering OpenCV frame compatibility.
        capture_config = self.cam_wide.create_preview_configuration(raw={}, main={"format":"RGB888", "size":(640,640)})
        self.cam_wide.configure(capture_config)
        # Initialize the camera module continuous video stream (feed).
        self.cam_wide.start()
        sleep(2)
        
        # Define the Picamera2 object for communicating with the Raspberry Pi camera module 3 NoIR Wide.
        self.cam_noir_wide = Picamera2(1)
        # Define the camera module NoIR frame output format and size, considering OpenCV frame compatibility.
        capture_config_noir = self.cam_noir_wide.create_preview_configuration(raw={}, main={"format":"RGB888", "size":(640,640)})
        self.cam_noir_wide.configure(capture_config_noir)
        # Initialize the camera module NoIR continuous video stream (feed).
        self.cam_noir_wide.start()
        sleep(2)
        
        # Define the required configurations to run the provided Edge Impulse FOMO-AD (visual anomaly detection) model.
        self.dir_path = os.path.dirname(os.path.realpath(__file__))
        self.model_file = os.path.join(self.dir_path, model_file)
        self.anomaly_threshold = 8
        
        # Declare the circular conveyor station number to track plastic objects after running inferences.
        self.station_num = 0
        self.total_station_num = 11
        
        # Initialize serial communication between ATMEGA328P and Raspberry Pi 5 through the built-in UART GPIO pins.
        self.ATMEGA328 = serial.Serial("/dev/ttyAMA0", 9600, timeout=1000)
        sleep(3)
		
		...

⭐ In the display_camera_feeds function:

⭐ Obtain the latest frame generated by the regular camera module 3.

⭐ Show the obtained frame on the screen via the built-in OpenCV tools.

⭐ Then, obtain the latest frame produced by the camera module 3 NoIR and show the retrieved frame in a separate window on the screen via the built-in OpenCV tools.

⭐ Stop both camera feeds (regular Wide and NoIR Wide) and terminate individual OpenCV windows once requested.

    def display_camera_feeds(self):
        # Display the real-time video stream (feed) produced by the camera module 3 Wide.
        self.latest_frame_wide = self.cam_wide.capture_array()
        cv2.imshow("UV-based Surface Defect Detection [Wide Preview]", self.latest_frame_wide)
        # Display the real-time video stream (feed) produced by the camera module 3 NoIR Wide.
        self.latest_frame_noir = self.cam_noir_wide.capture_array()
        cv2.imshow("UV-based Surface Defect Detection [NoIR Preview]", self.latest_frame_noir)            
        # Stop all camera feeds once requested.
        if cv2.waitKey(1) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            self.cam_wide.stop()
            self.cam_wide.close()
            print("\nWide Camera Feed Stopped\n")
            self.cam_noir_wide.stop()
            self.cam_noir_wide.close()
            print("\nWide NoIR Camera Feed Stopped!\n")

⭐ In the camera_feeds function, initiate the loop to show the latest frames produced by the regular Wide and NoIR Wide camera modules consecutively to observe the real-time video streams (feeds) simultaneously.

    def camera_feeds(self):
        # Start the camera video streams (feeds) in a loop.
        while True:
            self.display_camera_feeds()

⭐ In the run_inference function:

⭐ Initiate the integrated Edge Impulse ImageImpulseRunner to utilize the provided Edge Impulse FOMO-AD visual anomaly detection model converted to an EIM binary for Linux (AARCH64).

⭐ If requested, print the detailed model information.

⭐ According to the passed camera type, obtain the latest camera frame generated by the camera module 3 Wide or the camera module 3 NoIR Wide for running an inference.

⭐ After obtaining the latest frame, generate the required features from the retrieved frame based on the provided model information.

#️⃣ Since the Edge Impulse FOMO-AD (visual anomaly detection) models categorize given image samples by producing individual cells (grids) according to the dichotomy between the passed features and the normal image sample features with which the model was trained, there can only be two classes in relation to the declared anomaly threshold: anomaly and no anomaly.

#️⃣ To identify the plastic surface anomalies, I compared the produced mean visual anomaly values with the anomaly threshold score pinpointed by running the model on the testing samples repeatedly via the Edge Impulse Studio.

⭐ First, after running the inference, obtain the individual cells (grids) with their assigned labels and anomaly scores.

⭐ For each cell with the anomaly label, check whether its anomaly score is greater than the given threshold.

⭐ If so, in relation to the provided anomaly range, draw cells on the inference image in three different colors (BGR) to showcase the extent of defective and damaged surface areas.

⭐ After processing the anomaly score information successfully, update the circular conveyor plastic carrier (station) number and save the processed and modified inference image to the inference_results folder.

⭐ Finally, transfer the generated inference information to the circular conveyor web dashboard, which registers the transferred information into the associated MariaDB database table.

    def run_inference(self, cam_type, __debug):
        # Run an inference with the provided FOMO-AD model to detect plastic surface defects via visual anomaly detection based on UV-exposure.
        with ImageImpulseRunner(self.model_file) as runner:
            try:
                detected_class = ""
                # If requested, print the information of the Edge Impulse FOMO-AD model converted to a Linux (AARCH64) application (.eim).
                model_info = runner.init()
                if(__debug): print('\nLoaded runner for "' + model_info['project']['owner'] + ' / ' + model_info['project']['name'] + '"')
                labels = model_info['model_parameters']['labels']
                # According to the passed camera type, obtain the latest camera frame generated by the camera module 3 Wide or the camera module 3 NoIR Wide for running an inference.
                latest_frame = self.latest_frame_wide if (cam_type == "regular") else self.latest_frame_noir
                # After obtaining the latest frame, modify the retrieved frame based on the provided model requirements in order to generate accurate features.
                features, cropped = runner.get_features_from_image(latest_frame)
                res = runner.classify(features)
                # Since the Edge Impulse FOMO-AD (visual anomaly detection) models categorize given image samples by individual cells (grids)
                # according to the dichotomy between the normal image samples with which the model was trained and the passed image sample, there can only be two different classes: anomaly and no anomaly.
                # To identify the plastic surface anomalies, I compared the produced mean visual anomaly values with the anomaly threshold score pinpointed by running the model on the testing samples repeatedly via the Edge Impulse Studio.
                if res["result"]["visual_anomaly_mean"] >= self.anomaly_threshold:
                    detected_class = "anomaly"
                    # Obtain the cells with their assigned labels and anomaly scores evaluated by the FOMO-AD (visual anomaly detection) model.
                    intensity = ""
                    anomaly_range = 3
                    for cell in res["result"]["visual_anomaly_grid"]:
                        # Draw each cell assigned with an anomaly score greater than the given anomaly threshold on the inference image.
                        if cell["label"] == "anomaly" and cell["value"] >= self.anomaly_threshold:
                            # Utilize different colors (BGR) for the cells to showcase the extent of defective and damaged surface areas.
                            cell_c = (255, 26, 255)
                            if(cell["value"] >= self.anomaly_threshold+anomaly_range and cell["value"] < self.anomaly_threshold+(2*anomaly_range)): cell_c = (26, 163, 255)
                            elif(cell["value"] >= self.anomaly_threshold+(2*anomaly_range)): cell_c = (0, 0, 255)
                            # Draw the cell.
                            cv2.rectangle(cropped, (cell["x"], cell["y"]), (cell["x"]+cell["width"], cell["y"]+cell["height"]), cell_c, 2)
                else:
                    detected_class = "normal"
                # After running the provided FOMO-AD model successfully:
                if detected_class != "":
                    if(__debug): print("\nFOMO-AD Model Detection Result => " + detected_class + "\n")
                    # Update the circular conveyor station number accordingly.
                    self.station_num += 1
                    if(self.station_num > self.total_station_num): self.station_num = 1
                    # Save the produced and modified inference image to the inference_results folder.
                    file_name, date = self.save_inference_result_img(cam_type, detected_class, cropped, __debug)
                    # Register the given inference information to the surface defect detection web dashboard.
                    self.register_inference_info(cam_type, detected_class, file_name, date, __debug)
            # Stop the running inference.    
            finally:
                if(runner):
                    runner.stop()

⭐ In the save_inference_result_img function:

⭐ Define the file name and path of the provided inference image by applying the passed inference parameters.

⭐ Then, save the passed inference image to the inference_results folder.

⭐ Return the produced file name and file creation time for further usage.

    def save_inference_result_img(self, cam_type, detected_class, passed_image, __debug):
        # According to the provided image information, save the passed inference image to the inference_results folder.
        date = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
        file_name = "inference_results/{}_{}_{}__{}.jpg".format(detected_class, cam_type, self.station_num, date)
        cv2.imwrite(file_name, passed_image)
        if(__debug): print("Inference image successfully saved: " + file_name)
        return file_name, date

⭐ In the register_inference_info function:

⭐ By making an HTTP POST request in the form of a JSON object literal, transfer the passed inference information to the circular conveyor web dashboard.

    def register_inference_info(self, cam_type, detected_class, file_name, date, __debug):
        # Register the passed inference information to the surface defect detection web dashboard.
        url = "http://localhost/surface_defect_detection_dashboard/assets/anomaly_update.php"
        data = {"anomaly_log[cam_type]": cam_type, "anomaly_log[detection]": detected_class, "anomaly_log[img_file]": file_name, "anomaly_log[station_num]": self.station_num, "anomaly_log[detection_time]": date}
        r = requests.post(url, data=data)
        if(__debug): print("Inference information successfully registered to the web dashboard! Server response: " + r.text)

⭐ In the consecutive_inferences function, run inferences with the latest frames produced by the regular Wide camera module and the NoIR Wide camera module consecutively.

    def consecutive_inferences(self):
        self.run_inference("regular", True)
        sleep(1)
        self.run_inference("noir", True)

⭐ In the obtain_ATMEGA328_data_packets function:

⭐ Obtain the data packets transferred by the ATmega328P chip via serial communication (UART) continuously.

⭐ If the run command is received, run inferences with both camera modules (regular and NoIR) consecutively.

⭐ Then, inform the ATmega328P chip once the inferences are completed by sending the associated data packet (char).

⭐ If the test command is received, send the associated data packet (char) to ensure that the two-way serial data transmission is working as anticipated.

    def obtain_ATMEGA328_data_packets(self):
        # Obtain the data packets transferred by ATMEGA328P via serial communication continuously.
        while True:
            sleep(.5)
            if self.ATMEGA328.in_waiting > 0:
                data_packet = self.ATMEGA328.readline().decode("utf-8", "ignore").rstrip()
                print("Received data packet [ATMEGA328P]: " + data_packet)
                if(data_packet.find("run") >= 0):
                    # Run inferences with the camera module 3 Wide (regular) and the camera module 3 NoIR Wide (noir) consecutively.
                    self.consecutive_inferences()
                    # Then, inform ATMEGA328P of the completed inferences.
                    self.ATMEGA328.write("s".encode("utf-8"))
                if(data_packet.find("test") >= 0):
                    # Testing serial connection status.
                    self.ATMEGA328.write("w".encode("utf-8"))
                    print("Serial connection test...\n")

#️⃣ As the program needs to check for data packets transferred by the ATmega328P chip via serial communication (UART) without interruptions, it would not be feasible to check for data packets while running the real-time video streams generated by OpenCV in the same operation (runtime), which processes the latest frames produced by the regular Wide and NoIR Wide camera modules continuously. Therefore, I utilized the built-in Python threading module to run multiple operations concurrently and synchronize them.

⭐ Define the uv_defect_detection class object.

⭐ Declare and initialize a Python thread for running the real-time video streams (feeds) produced by the regular camera module 3 and the camera module 3 NoIR.

⭐ Outside of the video streams operation (thread), check for the latest data packets transferred by the ATmega328P chip via serial communication (UART).

uv_defect_detection_obj = uv_defect_detection("fomo_ad_model/ai-driven-plastic-surface-defect-detection-via-uv-exposure-linux-aarch64-v1.eim")

# Declare and initialize a Python thread for the camera module 3 Wide and the camera module 3 NoIR Wide video streams (feeds).
Thread(target=uv_defect_detection_obj.camera_feeds).start()

# Declare and initialize a Python thread for continuous communication with ATMEGA328P via serial communication.
uv_defect_detection_obj.obtain_ATMEGA328_data_packets()

project_image_825
project_image_826
project_image_827

Circular Conveyor - Step 9: Configuring Raspberry Pi Connect and preparing the circular conveyor mechanism for final experiments

Before completing the assembly of the circular conveyor mechanism, I ensured all of the circular conveyor web dashboard functions were working as expected. Even though I explained the conveyor part assembly before the web dashboard development process to keep this tutorial concise, completing the circular conveyor was not a linear process: I needed to work on component assembly, part redesigns, and dashboard development simultaneously to build its final version.

project_image_828
project_image_829

After completing the final version of the circular conveyor, I could easily connect to the Raspberry Pi 5 without a screen via the Secure Shell (SSH) protocol to access the Python program running inferences with the FOMO-AD visual anomaly detection model and showing the real-time camera feeds produced by the regular camera module 3 Wide and the camera module 3 NoIR Wide. Nonetheless, an SSH connection was not sufficient for documenting the features of the final version of the circular conveyor mechanism, since I wanted to screen record the real-time camera feeds for the demonstration videos. In this regard, I decided to employ Raspberry Pi Connect to access my Raspberry Pi desktop and command line directly from any browser. Since Raspberry Pi Connect is officially developed by Raspberry Pi and integrated into Raspberry Pi OS, it is a highly secure and simple remote access solution.

#️⃣ Even though the rpi-connect package should be installed by default in Raspberry Pi OS (Bookworm or later), I tried to reinstall it to see if there were any dependency issues.

sudo apt install rpi-connect

project_image_830

#️⃣ Then, I initiated Raspberry Pi Connect via the terminal (command line).

rpi-connect on

project_image_831

#️⃣ After starting Raspberry Pi Connect, I needed to associate my Raspberry Pi 5 with my Connect account. Thus, I started the Connect account sign-in procedure via the terminal.

rpi-connect signin

project_image_832

#️⃣ Then, I navigated to the Connect sign-in page on the browser and created a new account.

project_image_833

#️⃣ After creating my Connect account successfully, I opened the verification link generated by the rpi-connect package on the terminal to verify my device.

project_image_834

#️⃣ After naming and verifying my Raspberry Pi 5, it was signed in to the Connect service without any problems.

project_image_835
project_image_836

After configuring Raspberry Pi Connect, the circular conveyor mechanism was ready for me to conduct final experiments to showcase all its features.

Thanks to the separate UV light source mounts, I was able to switch the UV light sources for both camera modules (regular Wide and NoIR Wide) effortlessly while conducting final experiments.

Nevertheless, I decided to only utilize the color gel filters with the camera module 3 NoIR Wide and the glass UV bandpass filter with the regular camera module 3 Wide. Since the UV bandpass filter blocks the IR (infrared) spectrum while passing the UV spectrum, it renders the defining hardware characteristic of the NoIR variant, the absent infrared filter, ineffectual.

project_image_837
project_image_838
project_image_839
project_image_840
project_image_841

Circular Conveyor Features: Capturing UV-applied plastic surface images with both camera module 3 versions (regular Wide and NoIR Wide) while logging the applied experiment parameters

⛓️ ⚙️ 🔦 🟣 The circular conveyor mechanism lets the user capture UV-applied plastic surface images with both camera modules (regular Wide and NoIR Wide) and record the applied experiment parameters in the file names by simply entering Python inputs in this format (a parsing sketch follows the directory tree further below):

0,0,2,3,2,0

[cam_focal_surface_distance], [uv_source_wavelength], [material], [filter_type], [surface_defect], [camera_type]

cam_focal_surface_distance: the distance between the camera focal point and the center of the target plastic object

  • 0: 3cm
  • 1: 5cm

uv_source_wavelength: the wavelength of the UV light source applied to the plastic object surface

  • 0: 275nm
  • 1: 365nm
  • 2: 395nm

material: the filament (material) type of the target plastic object

  • 0: matte_white
  • 1: matte_khaki
  • 2: shiny_white
  • 3: fluorescent_blue
  • 4: fluorescent_green

filter_type: the filter type attached to the selected camera's external filter lens

  • 0: gel_low_tr
  • 1: gel_medium_tr
  • 2: gel_high_tr
  • 3: uv_bandpass

surface_defect: the surface defect stage of the target plastic object

  • 0: none
  • 1: high
  • 2: extreme

camera_type: the selected camera to capture a new UV-applied plastic surface image

  • 0: wide
  • 1: wide_noir
project_image_842
project_image_843
project_image_844

⛓️ ⚙️ 🔦 🟣 Since the circular conveyor mechanism shows the real-time camera feeds produced by the regular camera module 3 Wide and the camera module 3 NoIR Wide, it enables the user to capture precise and high-quality UV-applied images.

⛓️ ⚙️ 🔦 🟣 It also saves the collected images by encoding the given experiment parameters in their file names under this directory tree, making it effortless to sort images for further model training or testing (see the sketch after the tree):

  • wide
    • extreme
    • high
    • none
  • wide_noir
    • extreme
    • high
    • none
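
#️⃣ To illustrate how an input like 0,0,2,3,2,0 could be decoded into labels, a save path, and a file name, here is a minimal Python sketch. The lookup tables mirror the parameter lists above, whereas the helper name, folder root, and exact file name layout are my assumptions rather than the project's verbatim code.

from datetime import datetime
from pathlib import Path

# Lookup tables mirroring the experiment parameter lists above.
DISTANCES = ["3cm", "5cm"]
WAVELENGTHS = ["275nm", "365nm", "395nm"]
MATERIALS = ["matte_white", "matte_khaki", "shiny_white", "fluorescent_blue", "fluorescent_green"]
FILTERS = ["gel_low_tr", "gel_medium_tr", "gel_high_tr", "uv_bandpass"]
DEFECTS = ["none", "high", "extreme"]
CAMERAS = ["wide", "wide_noir"]

def parse_experiment_params(user_input):
    # Convert "0,0,2,3,2,0" into the labeled experiment parameters.
    d, w, m, f, s, c = (int(i) for i in user_input.split(","))
    return {"distance": DISTANCES[d], "wavelength": WAVELENGTHS[w],
            "material": MATERIALS[m], "filter": FILTERS[f],
            "defect": DEFECTS[s], "camera": CAMERAS[c]}

params = parse_experiment_params("0,0,2,3,2,0")
# Sort the sample into the <camera_type>/<surface_defect> directory tree.
folder = Path("samples") / params["camera"] / params["defect"]
folder.mkdir(parents=True, exist_ok=True)
date = datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
file_name = folder / "{}_{}_{}_{}_{}_{}__{}.jpg".format(
    params["distance"], params["wavelength"], params["material"],
    params["filter"], params["defect"], params["camera"], date)
print(file_name)  # e.g., samples/wide/extreme/3cm_275nm_shiny_white_uv_bandpass_extreme_wide__<date>.jpg
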
project_image_845
project_image_846
project_image_847
project_image_848
project_image_849
project_image_850
project_image_851
project_image_852

⛓️ ⚙️ 🔦 🟣 Even though I had already trained my FOMO-AD visual anomaly detection model with the image samples collected via the data collection rig based on Raspberry Pi 4, it was crucial to experiment with capturing samples with Raspberry Pi 5, utilizing a dual-camera setup, to ensure the FOMO-AD model would produce consistent anomaly results.

project_image_853
project_image_854
project_image_855
project_image_856
project_image_857
project_image_858

Circular Conveyor Features: Adjusting circular conveyor attributes and analyzing the behavior of system components

⛓️ ⚙️ 🔦 🟣 On the interface of the circular conveyor mechanism, the user can change the highlighted interface option by pressing the control button A and the control button C.

⛓️ ⚙️ 🔦 🟣 Once an interface option is highlighted, in other words holding the current cursor position, the user can activate (initiate) it by pressing the control button B.

⛓️ ⚙️ 🔦 🟣 After activating an interface option, the user can terminate the ongoing task and return to the home screen by pressing the control button D.

  • [A] ➡ Down
  • [C] ➡ Up
  • [B] ➡ Activate (Select)
  • [D] ➡ Exit (Terminate)
project_image_859

⛓️ ⚙️ 🔦 🟣 Once the Adjust interface option is activated, the user can adjust the two potentiometer values mapped according to the associated conveyor configurations.

⛓️ ⚙️ 🔦 🟣 The first potentiometer value (mapped) denotes the speed parameter, managing how fast the stepper motors rotate the sprockets. Once the user presses the control button A, the latest value of the first potentiometer becomes the speed parameter.

⛓️ ⚙️ 🔦 🟣 The second potentiometer value (mapped) denotes the station pending time parameter, which is the intermission giving the camera modules time to focus before running the successive inference. Once the user presses the control button C, the latest value of the second potentiometer becomes the station pending time parameter.
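
#️⃣ The conveyor interface firmware is an Arduino sketch, but the underlying mapping is straightforward; as a language-neutral illustration in Python, assuming a 10-bit ADC reading (0-1023) and hypothetical output ranges, the logic resembles:

def map_value(x, in_min, in_max, out_min, out_max):
    # Linear re-scaling, equivalent to Arduino's built-in map() function.
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

# Hypothetical ranges; the real firmware may use different bounds and units.
raw_speed = 512     # latest reading of the first potentiometer
raw_pending = 256   # latest reading of the second potentiometer
speed = map_value(raw_speed, 0, 1023, 1, 20)           # e.g., stepper step delay (ms)
pending_time = map_value(raw_pending, 0, 1023, 1, 10)  # e.g., intermission (s)
print(speed, pending_time)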

project_image_860
project_image_861
project_image_862
project_image_863
project_image_864

⛓️ ⚙️ 🔦 🟣 Once the Check interface option is activated, the user can rotate the stepper motors driving the sprockets simultaneously to review the circular conveyor movement and the chain tension.

  • [A] ➡ One step clockwise
  • [C] ➡ One step counterclockwise

⛓️ ⚙️ 🔦 🟣 Furthermore, the user can inspect the real-time raw readings yielded by the two magnetic Hall-effect sensor modules to review whether the neodymium magnets attached to the bottom of the plastic object carriers are precisely aligned with each sensor's center point.

project_image_865
project_image_866
project_image_867
project_image_868
project_image_869
project_image_870

⛓️ ⚙️ 🔦 🟣 Once the Serial interface option is activated, the conveyor interface shows the response (latest received data packet) as 'o' (ok), meaning the system is ready.

⛓️ ⚙️ 🔦 🟣 Then, the user can transfer specific commands from the conveyor interface (ATmega328P) to the Raspberry Pi 5 via serial communication.

project_image_871
project_image_872

⛓️ ⚙️ 🔦 🟣 Once the user presses the control button A, the interface transfers the test command to the Raspberry Pi 5 and waits for the response, showing it on the screen to indicate whether the two-way data transmission succeeded: 'w' (working) or 'n' (none).

project_image_873
project_image_874
project_image_875

⛓️ ⚙️ 🔦 🟣 Once the user presses the control button C, the interface transfers the run command, leading the Raspberry Pi 5 to run consecutive inferences with the provided FOMO-AD visual anomaly detection model by utilizing images captured by both camera modules (regular Wide and NoIR Wide).

⛓️ ⚙️ 🔦 🟣 Then, the Raspberry Pi 5 modifies the inference images to draw heatmaps and transfers the anomaly detection results to the circular conveyor web dashboard.

⛓️ ⚙️ 🔦 🟣 After running the inferences successfully, the Raspberry Pi 5 informs the conveyor interface (ATmega328P) by sending the associated data packet (char) — 's' (success).

#️⃣ Since the Raspberry Pi 5 and the web dashboard perform the same processes while running inferences manually and automatically, I did not cover them in this step to avoid repetition. Thus, please refer to the following step to review the related Pi 5 and dashboard features.

project_image_876
project_image_877
project_image_878

⛓️ ⚙️ 🔦 🟣 When the user terminates the Serial interface option, the conveyor interface clears the latest received data packet to restart the manual data transmission procedure.

project_image_879

Circular Conveyor Features: Detecting plastic surface anomalies automatically, observing the latest inference results (including heatmaps by grids) via the Twilio-enabled web dashboard, and sorting them by camera type

⛓️ ⚙️ 🔦 🟣 Once the Activate interface option is activated, the circular conveyor mechanism initiates the automatic plastic surface anomaly detection procedure via UV-exposure.

project_image_880
project_image_881

⛓️ ⚙️ 🔦 🟣 First, the conveyor interface rotates the stepper motors driving the sprockets simultaneously to move the circular conveyor chain continuously but steadily.

⛓️ ⚙️ 🔦 🟣 When both magnetic Hall-effect sensor modules simultaneously detect the neodymium magnets attached to the bottom of two successive plastic object carriers, the conveyor interface stops the circular conveyor motion immediately, aligning the focal points of the camera modules with the centers of the target plastic object surfaces. Then, the interface stays idle until the given intermission (station pending time) passes, giving both camera modules (regular Wide and NoIR Wide) time to focus on the plastic object surfaces.

project_image_882
project_image_883

⛓️ ⚙️ 🔦 🟣 After the intermission, the conveyor interface transfers the run command to the Raspberry Pi 5 via serial communication, leading the Raspberry Pi 5 to run consecutive inferences with the provided FOMO-AD visual anomaly detection model by utilizing images produced by the regular camera module 3 Wide and the camera module 3 NoIR Wide.

⛓️ ⚙️ 🔦 🟣 Since the Edge Impulse FOMO-AD (visual anomaly detection) models categorize given image samples by producing individual cells (grids) with assigned labels and anomaly scores, the Raspberry Pi 5 modifies each inference image by drawing every cell whose anomaly score exceeds the given confidence threshold in one of three colors, depending on the provided anomaly range, to emphasize the extent of defective and damaged surface areas (a minimal sketch of this cell-coloring step follows the color list below).

  • Pink ➡ Scratched
  • Orange ➡ Dented
  • Red ➡ Highly damaged
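
#️⃣ As a minimal sketch of this cell-coloring step, assuming the visual anomaly grid arrives as a list of cells with x, y, width, height, and value fields (as the Edge Impulse Linux SDK returns them) and hypothetical severity thresholds derived from the confidence threshold:

import cv2

CONFIDENCE_THRESHOLD = 5.0  # hypothetical; tune to the trained model
COLORS_BGR = {"scratched": (180, 105, 255),      # pink
              "dented": (0, 165, 255),           # orange
              "highly_damaged": (0, 0, 255)}     # red

def draw_anomaly_heatmap(image, grid, threshold=CONFIDENCE_THRESHOLD):
    # Draw every cell whose anomaly score exceeds the confidence threshold,
    # colored by how far the score sits above it (assumed severity bands).
    for cell in grid:
        score = cell["value"]
        if score <= threshold:
            continue
        if score <= threshold * 2:
            color = COLORS_BGR["scratched"]
        elif score <= threshold * 4:
            color = COLORS_BGR["dented"]
        else:
            color = COLORS_BGR["highly_damaged"]
        top_left = (cell["x"], cell["y"])
        bottom_right = (cell["x"] + cell["width"], cell["y"] + cell["height"])
        cv2.rectangle(image, top_left, bottom_right, color, 2)
    return image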

⛓️ ⚙️ 🔦 🟣 After running inferences and modifying the inference images with anomaly scores higher than the given confidence threshold to draw heatmaps, the Raspberry Pi 5 transfers the anomaly detection results to the circular conveyor web dashboard.

project_image_884
project_image_885

⛓️ ⚙️ 🔦 🟣 While the conveyor chain moves the plastic object carriers automatically, the user can switch UV light sources for both camera modules effortlessly, thanks to the separate UV light source mounts.

  • 275 nm
  • 365 nm
  • 395 nm
project_image_886
project_image_887
project_image_888
project_image_889
project_image_890
project_image_891
project_image_892
project_image_893

⛓️ ⚙️ 🔦 🟣 Once the user navigates to the conveyor web dashboard, it retrieves the surface anomaly detection logs (results) from the associated database table. If there are no anomaly detection results yet, the web dashboard informs the user accordingly.

project_image_894
project_image_895

⛓️ ⚙️ 🔦 🟣 Otherwise, the web dashboard generates an HTML card for each surface anomaly result, including the inference date, the inference image, the detected class, and the number of the plastic carrier carrying the target plastic object. Then, the dashboard shows the retrieved anomaly results as HTML cards emphasizing the inference images. Since the web dashboard checks for anomaly results automatically every 2 seconds, the user can review the latest surface anomaly results immediately.

project_image_896
project_image_897
project_image_898

⛓️ ⚙️ 🔦 🟣 The web dashboard allows the user to sort plastic surface anomaly detection results by camera type (regular Wide or NoIR Wide) while obtaining the latest logs from the database table automatically, letting the user easily track real-time surface anomaly detection results produced by the selected camera module.

⛓️ ⚙️ 🔦 🟣 Once a camera type is disabled, the web dashboard locks the remaining camera type's toggle switch so that at least one camera type stays selected, avoiding data omission while sorting surface anomaly detection results.

project_image_899
project_image_900

⛓️ ⚙️ 🔦 🟣 As the user enables the Twilio integration, the web dashboard sends an SMS message for each detected plastic surface anomaly via the Twilio SMS API to inform the user.

⛓️ ⚙️ 🔦 🟣 Since the transferred SMS messages include links to the inference images with heatmaps, the user can review the degree of the latest detected plastic surface anomalies effortlessly.
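
#️⃣ The dashboard sends these messages from its PHP code; purely for illustration, the equivalent call with Twilio's official Python helper library would look like the sketch below, where the credentials, phone numbers, and image link are placeholders:

from twilio.rest import Client

# Placeholder credentials; the real dashboard keeps its Twilio settings on the PHP side.
account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
auth_token = "your_auth_token"
client = Client(account_sid, auth_token)

# Notify the user of a detected anomaly, linking the inference image with heatmaps.
message = client.messages.create(
    body="Plastic surface anomaly detected (noir)! Inference image: http://example.com/detections/anomaly_noir_8.jpg",
    from_="+15550000000",  # Twilio phone number (placeholder)
    to="+15551111111"      # recipient phone number (placeholder)
)
print(message.sid)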

project_image_901
project_image_902
project_image_903
project_image_904
project_image_905
project_image_906
project_image_907

⛓️ ⚙️ 🔦 🟣 Furthermore, the user can review the latest plastic surface anomaly detection results by directly inspecting the inference images since the file names include the exact same information as the HTML cards generated by the web dashboard.

project_image_908

📌 normal_noir_5__2025_11_19_13_14_49.jpg

project_image_909
project_image_910

📌 anomaly_noir_8__2025_11_19_12_25_43.jpg

project_image_911
project_image_912
project_image_913
project_image_914
project_image_915
project_image_916
project_image_917

Project GitHub Repository

The project's GitHub repository provides:

  • The extensive UV-applied plastic surface image dataset
  • Code files
  • PCB manufacturing files
  • Mechanical part and component design files (STL)
  • Edge Impulse FOMO-AD visual anomaly detection model (EIM binary for Linux AARCH64)

Schematics

project_image_918
project_image_919
project_image_920
project_image_921
project_image_922
project_image_923

Code


  • ai_driven_surface_defect_detection_circular_sprocket_conveyor.ino
  • logo.h
  • uv_defect_detection_collect_data_w_rasp_4_camera_mod_wide.py
  • uv_defect_detection_collect_data_w_rasp_5_camera_mod_wide_and_noir.py
  • uv_defect_detection_run_inference_w_rasp_5_camera_mod_wide_and_noir.py
  • create_necessary_database_tables.sql
  • database_secrets.php
  • class.php
  • dashboard_update.js
  • index.js
  • index.php
  • anomaly_update.php
  • settings_update.php
  • index.css
  • root_variables.css
