Visual quality inspection: welds, cracks, spatter, surface defects.
Hosted computer-vision pipelines built specifically for manufacturing QC. Weld-bead geometry, crack tracing, spatter and undercut detection, dark-blob defect detection, surface-texture anomaly scoring — all behind a single REST API. No PhDs, no GPUs, no “is this an object-detection problem?” meetings.
- Built-in tools: 8+ defect pipelines
- Inputs: JPG/PNG from line cameras
- Outputs: verdict · measurements · overlay
- Round-trip: ~100-150 ms p50
- Free quota: 300 calls / month
Most computer-vision platforms are general purpose: they hand you object-detection and segmentation models and let you figure out the manufacturing-specific workflow on your own. mSightFlow ships domain-tuned pipelines for the defect classes that actually show up on production lines — weld beads, cracks, spatter, undercut, surface blobs, texture anomalies — so you can be running a real inspection by the end of the day, not by the end of next quarter.
For defect types we don't ship out of the box, the same platform handles auto-labelling, active learning, and COCO/YOLO export so you can fine-tune a model on your own line data — typically 100-500 images per class is enough to start.
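A rough sketch of that loop is below. The endpoint names are illustrative placeholders, not documented paths; check the API reference for the real ones.
import os, requests

BASE = "https://api.msightflow.ai/v1"
hdr = {"Authorization": f"Bearer {os.environ['MSF_API_KEY']}"}

# 1. Create a dataset for raw line images (hypothetical endpoint).
ds = requests.post(f"{BASE}/datasets", headers=hdr,
                   json={"name": "porosity-line-3"}).json()

# 2. Auto-label, then spend review time only where the model is unsure
#    (hypothetical endpoint).
requests.post(f"{BASE}/datasets/{ds['dataset_id']}/autolabel", headers=hdr,
              json={"classes": ["porosity"]})

# 3. Export COCO/YOLO annotations for fine-tuning (hypothetical endpoint).
requests.post(f"{BASE}/datasets/{ds['dataset_id']}/export", headers=hdr,
              json={"format": "coco"})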
Welding-specific defect detection
Six classical-CV pipelines tuned for weld inspection. Zero training needed: they work out of the box under representative imaging conditions.
Weld bead segmentation
CLAHE + Sobel + Otsu + morphology pipeline. Returns a binary bead mask plus geometry metrics: area, length, width, angle, centerline curvature.
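For intuition, here is a minimal OpenCV sketch of what this kind of classical pipeline looks like; it is illustrative only, not the hosted implementation, and every parameter is a placeholder.
import cv2

gray = cv2.imread("weld_001.jpg", cv2.IMREAD_GRAYSCALE)
# CLAHE normalises uneven weld-arc lighting
eq = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
# Sobel gradient magnitude highlights the bead edges
gx = cv2.Sobel(eq, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(eq, cv2.CV_32F, 0, 1)
mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
# Otsu threshold + morphological closing give a binary bead mask
_, mask = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
# Geometry of the largest component via a rotated bounding box
cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
bead = max(cnts, key=cv2.contourArea)
(cx, cy), (w, h), angle = cv2.minAreaRect(bead)
print(f"area={cv2.contourArea(bead):.0f}px length={max(w, h):.0f}px angle={angle:.1f}deg")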
/v1/cv-tools/weld_bead_segment
Weld seam profile
Intensity profile along the weld centerline — diagnostics for porosity, lack-of-fusion, and inconsistent penetration without destructive testing.
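The idea in a few lines of numpy, reusing `mask` and `gray` from the sketch above (the dip threshold is illustrative):
import numpy as np
from skimage.morphology import skeletonize

ys, xs = np.nonzero(skeletonize(mask > 0))   # centerline pixels of the bead
order = np.argsort(xs)                       # crude left-to-right ordering
profile = gray[ys[order], xs[order]].astype(float)
# sharp dips along the centerline flag candidate porosity / lack-of-fusion
suspects = np.nonzero(profile < profile.mean() - 2 * profile.std())[0]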
/v1/cv-tools/weld_seam_profile
Crack tracing
Sobel-edge + thinning pipeline that extracts crack centerlines from greyscale images. Reports length, branch count, and an overlay PNG for review.
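A rough sketch of the approach using OpenCV plus skimage's skeletonize (not the hosted pipeline):
import cv2, numpy as np
from skimage.morphology import skeletonize

gray = cv2.imread("part_017.jpg", cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
skeleton = skeletonize(binary > 0)           # one-pixel-wide crack centerlines
length_px = int(skeleton.sum())              # crude total-length estimate
# branch points: skeleton pixels with 3+ skeleton neighbours
neigh = cv2.filter2D(skeleton.astype(np.uint8), -1, np.ones((3, 3), np.uint8))
branch_count = int(((neigh >= 4) & skeleton).sum())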
/v1/cv-tools/crack_trace
Spatter detection
Otsu-thresholded bright-pixel pipeline. Counts spatter spots per unit area and flags weld passes above a configurable spatter budget.
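The core idea in a few lines of OpenCV; the brightness threshold and minimum spot size are placeholders:
import cv2

gray = cv2.imread("pass_042.jpg", cv2.IMREAD_GRAYSCALE)
_, bright = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
n, _, stats, _ = cv2.connectedComponentsWithStats(bright)
spots = [s for s in stats[1:] if s[cv2.CC_STAT_AREA] >= 3]   # skip background + specks
per_mpx = len(spots) / (gray.size / 1e6)
print(f"{len(spots)} spatter spots ({per_mpx:.1f} per megapixel)")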
/v1/cv-tools/spatter_detect
Undercut detection
Edge-concavity detector at the weld toe. Surfaces undercut early: the failure mode that visually trained inspectors miss most often.
/v1/cv-tools/undercut_detect
Dark-blob defect detection
Connected-component analysis on dark regions. Catches porosity, slag inclusions, voids — anything that registers as a dark blob against a bright surface.
/v1/cv-tools/dark_defect_blobs
Beyond welding — general surface-defect tools
Three more domain-agnostic pipelines for sheet metal, fabric, castings, electronics, and additive manufacturing.
Elongated region segmentation
Generalised version of weld-bead segmentation for any elongated defect: scratches, drips, drool, edge chips. Sobel + morphology + curvature filtering.
/v1/cv-tools/elongated_region_segment
Texture anomaly scoring
Sliding-window local-entropy anomaly score. Detects unusual texture patches without training — useful for fabric, sheet metal, castings, raw materials.
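A minimal skimage sketch of the technique; window radius and z-score cutoff are illustrative:
from skimage import io, img_as_ubyte
from skimage.filters.rank import entropy
from skimage.morphology import disk

gray = img_as_ubyte(io.imread("sheet_009.png", as_gray=True))
ent = entropy(gray, disk(9))            # local entropy in a sliding window
z = (ent - ent.mean()) / ent.std()      # z-score each patch against the frame
print(f"anomalous area: {100 * (z > 3).mean():.2f}% of frame")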
/v1/cv-tools/texture_anomaly
Annotation QA + duplicate detection
Tiny / zero-area / out-of-bounds bbox detection plus dhash-based image duplicate detection. Quality-gate your training set before it costs you a retrain.
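Both checks are easy to reason about. A sketch using the third-party imagehash package (the bbox fields and distance cutoff are illustrative):
import imagehash
from PIL import Image

def bad_bbox(x, y, w, h, img_w, img_h):
    # tiny, zero-area, or out-of-bounds boxes poison a training run
    return w <= 1 or h <= 1 or x < 0 or y < 0 or x + w > img_w or y + h > img_h

# dhash: near-identical frames hash to near-identical values,
# and '-' is overloaded as Hamming distance
h1 = imagehash.dhash(Image.open("frame_001.jpg"))
h2 = imagehash.dhash(Image.open("frame_002.jpg"))
if h1 - h2 <= 4:
    print("likely duplicates; keep one")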
/v1/cv-tools/{annotation_qa, perceptual_dedup}
Where this lands today
Customer-validated workflows across six manufacturing segments.
Welding & metal fabrication
Bead geometry, undercut, spatter, porosity, lack-of-fusion. Robot-welding lines, pressure-vessel QC, structural fabrication.
Automotive & aerospace
Surface defects on body panels, casting porosity, paint imperfections, fastener verification. Tier-1 and OEM assembly lines.
Additive manufacturing
Layer-line monitoring, porosity, delamination, support-removal artefacts. Per-layer image streams from build plates.
Sheet metal & coatings
Scratches, drips, drool, edge chips, paint runs, orange peel. Continuous coil inspection and finished-part audit.
Electronics & PCB
Solder-joint quality, component placement, missing-part detection, conformal-coating coverage. Combine with OCR for label / barcode verification.
Castings & forgings
Surface cracks, dimensional drift, draft-angle compliance, mould-induced defects. Pairs with depth + measurement workflows.
From camera to PLC, in three patterns
mSightFlow is REST-only — you bring the camera and the controller, we bring the inspection logic.
- Inline inspection (most common). An edge gateway near your line camera triggers on each new frame, POSTs to the relevant cv_tools endpoint, parses the verdict, and writes to your PLC over OPC-UA / Modbus / MQTT. Round-trip ~100-150 ms p50 — fast enough for typical inspection takts.
- Batch audit (overnight QA). Capture a full shift of frames to cloud storage; submit as a dataset-batch run; receive a zipped report with per-image verdicts, overlays, and a CSV summary you can keep for your own internal records.
- Hybrid (inline pass-rate, batch audit). Inline runs flag suspect frames in real time; batch audit re-runs the full set with stricter thresholds plus a human-review queue from the quality dashboard.
Code — inspect, batch, integrate
import os, requests

api = "https://api.msightflow.ai/v1/cv-tools/weld_bead_segment"
hdr = {"Authorization": f"Bearer {os.environ['MSF_API_KEY']}"}

# Single-image inspection — the same endpoint shape on every tier.
resp = requests.post(api + "/run-single", headers=hdr,
                     files={"image": open("weld_001.jpg", "rb")},
                     data={"min_bead_length_px": 80}).json()
# resp = {
#   "verdict": "pass" | "fail",
#   "bead": {"area_px": 4123, "length_px": 318, "width_px": 14.2,
#            "angle_deg": 86.7, "centerline_curvature": 0.012},
#   "overlay_png": "<base64>",  # optional visualisation
# }
if resp["verdict"] == "fail":
    print(f"FAIL: bead width {resp['bead']['width_px']:.1f}px outside spec")
# Batch a folder of images via cv_tools dataset-batch run.
# Returns a run_id; poll /runs/{run_id} until status=done; download artefact zip.
import time

run = requests.post(
    "https://api.msightflow.ai/v1/cv-tools/spatter_detect/run",
    headers=hdr,
    json={"dataset_id": "DATASET_ID", "params": {"brightness_threshold": 220}},
).json()
run_id = run["run_id"]

while True:
    status = requests.get(f"https://api.msightflow.ai/v1/cv-tools/runs/{run_id}",
                          headers=hdr).json()
    if status["status"] == "done":
        break
    time.sleep(2)
# Download the artefact (per-image JSON + overlays + summary CSV in a zip)
zip_bytes = requests.get(
    f"https://api.msightflow.ai/v1/cv-tools/runs/{run_id}/download",
    headers=hdr,
).content
with open("spatter_report.zip", "wb") as f:
    f.write(zip_bytes)
# Pipeline: camera frame -> mSightFlow -> PLC.
# Run this on an edge gateway near your camera. ~100 ms per cycle on a fast line.
import requests
from your_plc import write_register  # OPC-UA / Modbus / MQTT — your stack

API = "https://api.msightflow.ai/v1/cv-tools/weld_bead_segment/run-single"
HDR = {"Authorization": "Bearer YOUR_API_KEY"}

def inspect_and_alert(frame_path):
    r = requests.post(API, headers=HDR, files={"image": open(frame_path, "rb")},
                      data={"min_bead_length_px": "80"}).json()
    write_register("WELD_OK", 1 if r["verdict"] == "pass" else 0)
    write_register("WELD_BEAD_WIDTH", int(r["bead"]["width_px"] * 100))
    return r
# Trigger on each new image from your line camera (file-watcher, MQTT topic,
# RTSP stream sampler — your line's source).
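One possible trigger, sketched with the third-party watchdog package; the drop-folder path is a placeholder for your camera's output directory.
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class FrameHandler(FileSystemEventHandler):
    def on_created(self, event):
        if event.src_path.endswith(".jpg"):
            inspect_and_alert(event.src_path)

observer = Observer()
observer.schedule(FrameHandler(), "/mnt/camera/frames")   # placeholder path
observer.start()
observer.join()   # block; frames are inspected as they land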
Why a domain-tuned pipeline beats a general API
| | mSightFlow welding toolkit | Generic detection API |
|---|---|---|
| Time to first inspection | Day 1 (no training) | 1-4 weeks (labelling + training) |
| Returns geometry metrics | ✅ width, length, angle, curvature | ❌ bbox only — measurements DIY |
| Tuned to weld imaging conditions | ✅ CLAHE + Otsu defaults | ❌ generic model — needs domain images |
| Pass / fail verdict built in | ✅ configurable thresholds | ❌ post-process detections yourself |
| Audit-trail logging | ✅ timestamp + image hash + model version + decision per call | ◐ generic API logs only, no manufacturing context |
The honest framing: the built-in tools are a fast start for known defect classes. For everything else, mSightFlow is also the general detection / labelling / training platform — you don't outgrow it.
Pricing — same as every other endpoint
Manufacturing inspection isn't a separate plan. Same calls, same exports, same pricing — just better-tuned defaults.
Standard
$10/mo
- 5,400 API calls / month
- Batch up to 10 images / call
- CSV / JSON log export
Features that pair with manufacturing inspection
Auto-labelling
For novel defect classes — auto-label your first 500 images in hours, then verify.
Active learning
3-5× fewer labels for the same accuracy. Spend annotator time only on uncertain frames.
Annotation quality + IAA
Inter-annotator agreement + class-imbalance alerts — audit-ready quality controls.
FAQ for manufacturing teams
What's the smallest dataset I need to start?
For domain-specific tools (weld_bead_segment, crack_trace, spatter_detect, etc.), zero — they're classical-CV pipelines that work out of the box on representative images. For a custom-trained detector on a new defect class, 100-500 images per class is the working minimum; with mSightFlow's auto-labelling + active-learning combo, you can typically get there in 2-3 days.
Can mSightFlow integrate with my PLC / MES?
Yes — via REST. mSightFlow doesn't ship a PLC driver, but every endpoint returns structured JSON your supervisor application can forward to OPC-UA, Modbus, MQTT, or any HTTP-capable controller. For low-latency lines, run a thin edge service near the camera that calls our API and writes to your PLC in the same loop.
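As one example, a sketch of forwarding a verdict over MQTT with the third-party paho-mqtt package (1.x constructor shown; broker host and topic are placeholders for your plant setup):
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()                       # paho-mqtt 1.x constructor
client.connect("plant-broker.local", 1883)   # placeholder broker
client.publish("line3/weld/verdict",
               json.dumps({"verdict": "fail", "width_px": 14.2}))
client.disconnect()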
How accurate are the classical-CV tools (weld_bead_segment, crack_trace, …) without training?
Honest answer: they're tuned for representative imaging conditions (good lighting, ~1024 px, consistent camera angle). On well-controlled inspection rigs they typically score 90-95% recall with low false-positive rates. Outside those conditions — varying ambient light, mixed product geometries — accuracy degrades and a fine-tuned model on your own data is the better path.
Cloud-only, or can this run on the factory floor?
Cloud-only. The REST API works from any internet-connected device — including a small edge gateway near your camera. Round-trip latency from a factory floor in Europe to our EU cluster is typically 80-150 ms, which is fine for most inspection takts. On-prem and air-gapped deployments are not available today.
What about audit trail requirements?
Every API call is logged with timestamp, image hash, model version, and decision. Logs can be exported as CSV / JSON via the API for your own internal record-keeping. We host project data on AWS in EU regions and processing is GDPR-aligned. mSightFlow is not a certified medical, automotive, or quality-management system — we don't claim compliance with named frameworks (ISO, IATF, FDA, HIPAA, SOC 2).
From line camera to pass/fail, today.
Eight built-in defect pipelines. REST API. 300 free calls / month. Welds, cracks, spatter, surface defects — all running on day one.