Synopsys Science Fair 2026
A machine learning pipeline that detects electronic components on circuit boards using YOLO11 instance segmentation, maps camera pixels to printer coordinates via polynomial transforms, and physically picks and sorts components with a modified 3D printer.
From raw camera frame to physically sorted component — every step is automated.
4K camera with 3× digital zoom, CLAHE contrast enhancement, Laplacian sharpness selection (best of 30 frames)
YOLO11 Large Segmentation identifies components with pixel-level masks; centroid computed via image moments
Degree-3 polynomial maps pixel coordinates to printer millimeters, with phase-correlation drift compensation
Two-speed BLTouch probing measures the Z height of each component (1mm fast pass, then 0.1mm precision pass)
Vacuum nozzle picks up component at (X, Y, Z), travels to the correct drop bin, releases and scrapes
380 layers, 132.7 GFLOPs, trained on an NVIDIA A100 GPU
Layers 0–9
Extracts features at multiple scales from the raw 1024×1024 input image.
Layers 10–22
Fuses features across scales so the model can detect both tiny resistors and large ICs.
Layer 23
Predicts bounding boxes, class probabilities, and instance segmentation masks simultaneously.
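The ProtoNet-style mask head combines per-detection coefficients with shared prototype masks, and the pipeline then takes each mask's centroid for the pickup point. A minimal numpy sketch (shapes and function names are illustrative, not the model's actual code):

```python
import numpy as np

def assemble_masks(coeffs, prototypes):
    """coeffs: (N, K) per-detection coefficients; prototypes: (K, H, W) shared masks.
    Returns (N, H, W) instance masks via sigmoid(coeffs · prototypes)."""
    n = coeffs.shape[0]
    k, h, w = prototypes.shape
    logits = coeffs @ prototypes.reshape(k, h * w)   # linear combination per detection
    probs = 1.0 / (1.0 + np.exp(-logits))            # sigmoid squashes to [0, 1]
    return probs.reshape(n, h, w)

def centroid(mask, thresh=0.5):
    """Centroid of a binary mask from image moments: (m10/m00, m01/m00)."""
    ys, xs = np.nonzero(mask > thresh)
    return xs.mean(), ys.mean()
```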
sigmoid(coeffs × prototypes) — ProtoNet architecture
4 datasets, 48+ raw class labels unified into 20 classes
6,260 images
Bitmap + polygon masks, 25 raw classes
1,299 images
Polygon annotations, 9 raw classes
47 boards
XML bounding boxes, 19 raw classes
+additional
Bounding boxes → converted to 4-corner polygons
48+ raw labels merged into 20 unified classes. Examples:
C, cap1–cap4, capacitor → SMD Capacitor
R, CR, resistor, resestor → Resistor
U, IC, ic → IC
Q, QA, transistor → Transistor
FB, ferrite, emi → Ferrite Bead
mov, Mov, MOV, V → Varistor
Box loss — Complete Intersection over Union (CIoU). Penalizes center distance, overlap, and aspect-ratio mismatch. Highest weight because pickup accuracy depends directly on box precision.
Segmentation loss — Binary Cross-Entropy + Dice loss on predicted masks vs. ground-truth masks. Dice directly optimizes global overlap to handle background-pixel imbalance.
Classification loss — per-class binary cross-entropy. Lowest weight because classification is the easiest sub-task once localization is learned.
Distribution Focal Loss (DFL) — predicts a probability distribution over box coordinates instead of a single value. Handles ambiguous component boundaries gracefully.
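The BCE + Dice segmentation objective can be sketched in a few lines of numpy. This is a conceptual illustration of the two terms, not the training code (which operates on tensors with autograd):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Dice loss = 1 - 2|P∩T| / (|P| + |T|). Because it measures global overlap,
    the huge number of background pixels cannot swamp a small component mask."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over pixels."""
    p = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()
```

A perfect prediction drives both terms to zero; a half-missed mask leaves Dice at 1/3 regardless of how many background pixels surround it, which is exactly why it complements per-pixel BCE.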
| Setting | Value | Rationale |
|---|---|---|
| Image Size | 1024 × 1024 | High resolution for small SMD components |
| Batch Size | 10 (auto) | Fills 60% of A100 80GB VRAM |
| Optimizer | AdamW | Adaptive LR with decoupled weight decay |
| Learning Rate | 0.001 → 0.00001 | Cosine annealing schedule |
| Warmup | 3 epochs | Prevents gradient explosion at start |
| Early Stopping | patience=20 | Stops if no improvement for 20 epochs |
| Mosaic | 1.0 | 4 images stitched per sample; off last 10 epochs |
| Transfer Learning | COCO pretrained | 1071/1077 layers transferred |
| Oversampling | 2×–5× | Classes below 1000 instances duplicated |
| Mixed Precision | AMP (FP16) | ~2× training speed on A100 |
Degree-3 polynomial fit mapping pixels to millimeters
The camera sees pixels. The printer moves in millimeters. The camera is mounted at an angle, and the wide-angle lens introduces barrel distortion. A simple linear scaling fails.
A degree-3 polynomial with 10 feature terms per axis, fit via least-squares regression on 10+ manually calibrated reference points.
Feature vector for pixel (x, y):
[1, x, y, x², xy, y², x³, x²y, xy², y³]
Transform:
printer_X = c⋅features
printer_Y = d⋅features
Solved with np.linalg.lstsq(A, dst)
A 3×3 homography assumes a perfect pinhole camera and a single plane. It cannot model the nonlinear barrel distortion from the wide-angle lens. The cubic polynomial terms (x³, y³, x²y, xy²) capture these distortions.
Place visible mark on printer bed
Jog nozzle over mark, record printer X,Y
Click mark in camera image, record pixel X,Y
Repeat 10+ times across entire work area
Fit polynomial via least squares regression
Phase correlation in the frequency domain (FFT) detects sub-pixel camera drift between the calibration reference frame and the current frame. The offset is subtracted before the polynomial transform.
(dx, dy), confidence = cv2.phaseCorrelate(ref, cur)
Modified 3D printer + Arduino + BLTouch + vacuum gripper
A standard 3D printer has a hotend nozzle and a heated bed — neither of which is useful for picking up components. The key modification is a custom toolhead mounted alongside the original hotend on a stepper-driven rail:
The heated bed stays off — the PCB sits flat on the unheated bed surface, held in place by its own weight. The bed provides a flat, known reference plane for the coordinate system.
WebSocket / Moonraker API
printer.objects.query
Arduino relay + vacuum pump
Serial 115200 baud
BLTouch + Klipper coordinated
Average precision across all 20 component classes at 50% IoU threshold
Instance segmentation accuracy — masks track closely with box accuracy
Strict metric averaged over IoU thresholds from 50% to 95%
Sub-millimeter coordinate transform + 0.1mm Z probing resolution
| Epoch | Box Loss | Seg Loss | Cls Loss | Box mAP50 | Mask mAP50 |
|---|---|---|---|---|---|
| 1 | 1.302 | 1.604 | 1.747 | 44.2% | 43.8% |
| 5 | 1.061 | 1.216 | 0.990 | 65.8% | 65.6% |
| 10 | 0.936 | 1.074 | 0.739 | 76.2% | 76.0% |
| 15 | 0.877 | 1.008 | 0.632 | 81.4% | 80.9% |
| 20 | 0.843 | 0.963 | 0.575 | 83.5% | 82.9% |
| 24 | 0.811 | 0.941 | 0.529 | 84.5% | 84.0% |
Actual YOLO11 inference on a PCB image at conf=0.25 — 116 detections across 19 classes