3D printing could revolutionize manufacturing through local, on-demand production while enabling uniquely complex and custom products. However, 3D printing's propensity for production errors prevents the autonomous operation and quality assurance necessary to realize this vision. Human operators cannot continuously monitor or correct errors in real time, while automated approaches predominantly only detect errors. Newer methodologies correct parameters either offline or with slow response times and poor prediction granularity, limiting their utility. Here, commonly available 3D printing process metadata is harnessed, alongside video of the printing process, to build a unique image dataset. Regression models are trained to precisely predict how printing material flow should be altered to correct errors, and these predictions are used to build a fast control loop capable of 3D printing parameter discovery and few-shot correction. Demonstrations show that the system can learn optimal parameters for unseen complex materials and achieve rapid error correction on new parts. Similar metadata exists in many manufacturing processes, so this approach could enable the wider adoption of fast data-driven control systems in manufacturing.
In line with Matta's grant mission to build AI for manufacturing the impossible, the possible must first be manufactured well (and this can still be done with AI). We worked with the Institute for Manufacturing in the Department of Engineering at the University of Cambridge to explore how AI can detect and correct errors in the 3D printing process - demonstrating, we hope, the potential machine learning holds for traditional manufacturing and control problems that remain unsolved.
Introduction
In this work, commonly available 3D printing process metadata from printer firmware was harnessed, alongside real-time video of the printing process, to build a new dataset of over 250,000 automatically labeled images. With this dataset, vision regression models were trained to precisely predict material flow as a continuous variable, in turn enabling true proportional correction. We coupled this real-time video monitoring with a new feedback loop capable of 3D printing parameter discovery and few-shot correction. This combined control system ran at sampling rates of nearly 15 Hz, over four times faster than the previous state of the art. Finally, response times were further reduced through toolpath splitting and optimized prediction filtering. Experiments showed that the system can learn the optimal flow rate for unseen complex materials and achieve rapid few-shot error correction on new parts.
Our method improves on prior work in 3D printing error detection, correction, and flow rate control by using models that predict a single parameter continuously, rather than the discrete predictions of previous approaches. A lightweight model and efficient sampling enable significantly faster feedback. The required data can be rapidly collected and the models quickly trained, with the whole process from no data to deployed control system taking only 8 hours in total. Similar metadata exists in many manufacturing processes, and this easy-to-deploy approach could enable the wider adoption of fast data-driven control systems in manufacturing.
Results
Autolabeled Dataset Generation
A unique data acquisition system was developed to capture high-quality images of the material deposition process in extrusion additive manufacturing (AM) and to label each image with metadata, specifically the current material flow rate. An endoscope camera was attached to a low-cost 3D printer to capture images of the extrusion process during printing (see Figure 1A). An overview of the steps in this data acquisition system is shown in Figure 1B. This pipeline enabled the creation of an entirely new autolabeled dataset of 253,405 images in less than 5 hours. Sample images from this dataset alongside their labels are shown in Figure 1C. Uniquely, the approach is scalable to any number of printers, in the future enabling the creation of large datasets from fleets of machines.

A range of 19 flow rate levels was used. The resolution and spacing of these levels were determined qualitatively by printing samples at a range of flow rates centered around 100% and finding the smallest relative change with a noticeable impact on part quality. For under-extrusion, this was the point at which gaps first appeared between extruded paths; for over-extrusion, the point at which extruded paths overlapped enough to increase surface roughness. The optimal range produced completely uniform extrusion over the build surface.
In total, 24 prints were completed. These prints all consisted of single-layer circular geometries with a layer height of 0.2 mm and a diameter of 150 mm. Twelve different infill types were used, with two samples printed of each. Six types used 100% linear infill at 30° increments from 0° to 150°. Two used concentric infill at 25% and 100% densities. The remaining four used cross, grid, gyroid, and triangle infill patterns, all at 25% density. This range of infill types was chosen to allow the vision models to generalize across different geometries.
For each print, a flow rate value was randomly sampled without replacement from a uniform distribution over 19 possible values in log space. This value was converted back to a relative percentage and sent to the printer to update the flow rate. Upon execution, 1200 images were collected and labeled for that level. Another flow rate was then sampled without replacement and a further 1200 images captured, repeating until no levels remained, at which point a new complete set of the 19 flow rates was generated to start the process again. With this method, a total of 253,405 labeled images were collected in only 5 hours of printing time on a single machine. The mean sampling rate across the dataset was 14.37 Hz. Overheads involved in retrieving information from the printer's firmware, capturing the snapshot, sending it over the network, and labeling the image reduced the average sampling rate from the endoscope's 30 Hz, although this maximum rate was still occasionally reached. The developed controller therefore had to be capable of running faster than this maximum sampling frequency.
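As an illustration, this acquisition loop can be sketched in a few lines of Python. The `printer` and `camera` objects, their methods, and the flow rate bounds are hypothetical stand-ins for the paper's setup; only the 19 log-spaced levels, the sampling without replacement, and the 1200 images per level come from the text above.

```python
import random

import numpy as np

# 19 flow rate levels, evenly spaced in log space and centered on 100%.
# The bounds below are illustrative; the paper chose them qualitatively
# from under-/over-extrusion tests.
LOG_LEVELS = np.linspace(np.log(0.4), np.log(2.5), num=19)


def collect_dataset(printer, camera, images_per_level=1200):
    """Label every captured frame with the flow rate set in firmware."""
    dataset = []
    while not printer.print_finished():
        levels = list(LOG_LEVELS)
        random.shuffle(levels)                   # sample without replacement
        for log_flow in levels:
            flow_pct = 100.0 * np.exp(log_flow)  # back to a relative %
            printer.set_flow_rate(flow_pct)      # e.g. G-code M221 S<pct>
            for _ in range(images_per_level):
                image = camera.snapshot()
                dataset.append((image, log_flow))  # autolabel with metadata
    return dataset
```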
From the complete dataset, an equal number of samples was then drawn for each of the 19 flow rate levels - this number being the sample count of the level with the fewest samples. This resulted in a final dataset of 125,077 labeled images, which was split through random sampling into train, validation, and test sets with a ratio of 80:10:10. Finally, all images were pre-cropped to a 256x256 pixel region centered around the nozzle tip using the coordinates stored during collection, significantly speeding up training. See Figure 1C for examples.
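A minimal sketch of this balancing and splitting step, assuming the dataset is a simple list of (image path, label) pairs; the function and variable names are illustrative rather than taken from the paper's code:

```python
import random
from collections import defaultdict


def balance_and_split(samples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Downsample every flow rate level to the size of the rarest level,
    then split randomly into train/validation/test sets."""
    rng = random.Random(seed)
    by_level = defaultdict(list)
    for image_path, label in samples:
        by_level[label].append((image_path, label))

    n = min(len(group) for group in by_level.values())  # rarest level
    balanced = [s for group in by_level.values() for s in rng.sample(group, n)]
    rng.shuffle(balanced)

    n_train = int(ratios[0] * len(balanced))
    n_val = int(ratios[1] * len(balanced))
    return (balanced[:n_train],
            balanced[n_train:n_train + n_val],
            balanced[n_train + n_val:])
```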
Fast Flow Rate Prediction Model
For precise prediction of flow rate, models of various sizes were trained, each with a RegNet backbone and a single-output fully connected linear layer. The backbone weights were pre-trained on the ImageNet1k dataset. The models used the log space flow rates as targets - using this normalized, linear, and evenly spaced data was vital for achieving good performance.
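In PyTorch, such a model takes only a few lines. The specific RegNet variant below (torchvision's `regnet_y_400mf`) is an assumption standing in for whichever sizes the paper actually trained:

```python
import torch.nn as nn
from torchvision import models


def build_flow_regressor():
    """RegNet backbone with a single-output head for flow rate regression."""
    weights = models.RegNet_Y_400MF_Weights.IMAGENET1K_V1  # ImageNet1k pre-training
    backbone = models.regnet_y_400mf(weights=weights)
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)    # one continuous output
    return backbone
```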
During training, pixel values across the red, green, and blue (RGB) channels were normalized. Additionally, several common data augmentation techniques were used, as popularized on standard datasets such as ImageNet (see Figure 2). The pre-cropped and normalized 256x256 image centered on the nozzle tip was randomly cropped to a 224x224 square. The resulting crop was then horizontally flipped with a probability of 0.5. After these geometric augmentations, principal component analysis (PCA) color augmentation, as popularized by AlexNet, was applied to each image. These augmentations were only applied during training; for validation, the input 256x256 pixel images were center cropped to the required 224x224 input shape.
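A possible torchvision version of this pipeline is sketched below. The ImageNet channel statistics and the PCA eigenvalues/eigenvectors are the widely used AlexNet-recipe values, assumed here rather than taken from the paper:

```python
import torch
from torchvision import transforms

# ImageNet channel statistics and AlexNet-recipe PCA lighting parameters.
MEAN, STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]
EIGVAL = torch.tensor([0.2175, 0.0188, 0.0045])
EIGVEC = torch.tensor([[-0.5675,  0.7192,  0.4009],
                       [-0.5808, -0.0045, -0.8140],
                       [-0.5836, -0.6948,  0.4203]])


class PCAColorAugment:
    """AlexNet-style PCA color augmentation on a CHW tensor in [0, 1]."""

    def __init__(self, alpha_std=0.1):
        self.alpha_std = alpha_std

    def __call__(self, img):
        alpha = torch.randn(3) * self.alpha_std
        shift = EIGVEC @ (alpha * EIGVAL)      # per-channel RGB offset
        return img + shift.view(3, 1, 1)


train_tf = transforms.Compose([
    transforms.RandomCrop(224),                # 256x256 -> random 224x224
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
    PCAColorAugment(),
    transforms.Normalize(MEAN, STD),
])

val_tf = transforms.Compose([
    transforms.CenterCrop(224),                # deterministic for validation
    transforms.ToTensor(),
    transforms.Normalize(MEAN, STD),
])
```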

The mean squared error (MSE) loss was minimized using stochastic gradient descent, with the learning rate updated during training for all models by a cosine annealing schedule.
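A bare-bones training loop under these choices might look as follows; the learning rate, momentum, and weight decay values are illustrative, as the text only specifies SGD with an MSE loss and cosine annealing:

```python
import torch


def train(model, train_loader, epochs, device="cuda"):
    """Minimize MSE between predictions and log-space flow rate targets."""
    model.to(device)
    criterion = torch.nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

    for _ in range(epochs):
        for images, targets in train_loader:
            images = images.to(device)
            targets = targets.to(device).float().unsqueeze(1)
            loss = criterion(model(images), targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()  # cosine-anneal the learning rate once per epoch
```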
At test time, to improve predictive performance, each 256x256 input image was cropped into five 224x224 windows: top left, top right, bottom left, bottom right, and center. Each of these five images was then horizontally flipped, producing 10 images from the single input (see Figure 3B). These 10 images were stacked together and passed through the network in a single forward pass, and the 10 output predictions were averaged. This multi-crop approach led to a 14.6% boost in performance at test time. We also tested each model on an out-of-distribution print and observed a drop in performance; however, the predictions remained sufficiently accurate to enable real-time control.
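This crop scheme matches torchvision's `TenCrop` transform (four corners plus center, each with its horizontal flip), so the test-time procedure can be sketched compactly; the function below assumes an already normalized 3x256x256 input tensor:

```python
import torch
from torchvision import transforms

ten_crop = transforms.TenCrop(224)  # 4 corners + center, each also h-flipped


@torch.no_grad()
def predict_flow(model, image):
    """Average the model's output over ten crops of one 256x256 image."""
    crops = torch.stack(ten_crop(image))  # (10, 3, 224, 224) in one batch
    preds = model(crops)                  # single forward pass, shape (10, 1)
    return preds.mean().item()            # log-space flow rate estimate
```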
The results of the trained model on the held-back test set can be seen in Figure 3A. The trained network accurately predicted flow rate, with the light green shaded region showing the input resolution used in training for the 19 levels of flow rate. In addition, 95% and 99% intervals are shown, with the network producing relatively few outliers - these can be easily removed with suitably chosen filtering procedures. The residuals of this test data are plotted along with a fitted Gaussian distribution, illustrating that the predictions are well centered. The smallest model with test augmentation achieved an inference rate of 100 Hz on a single GPU - significantly higher than the endoscope's 30 Hz sampling rate or the 14.37 Hz seen during data collection. As such, there is considerable scope for running multiple printers in parallel or an ensemble of models to improve predictive performance and enable uncertainty estimation.

Details on the filtering and smoothing of network predictions can be found in the full paper. These procedures were especially important for handling out-of-distribution samples.
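As one plausible example of such a procedure (the paper's actual filter may differ), a rolling median smoother with outlier rejection could look like this:

```python
from collections import deque
from statistics import median


class PredictionFilter:
    """Rolling median smoother that rejects isolated outliers."""

    def __init__(self, window=15, outlier_tol=0.5):
        self.buffer = deque(maxlen=window)
        self.outlier_tol = outlier_tol  # max deviation from rolling median

    def update(self, prediction):
        if self.buffer:
            center = median(self.buffer)
            if abs(prediction - center) > self.outlier_tol:
                return center           # discard the outlier, keep smoothed value
        self.buffer.append(prediction)
        return median(self.buffer)
```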
Learning Flow for Novel Materials
New materials with unknown printing parameters are continually being developed. These include foaming materials, which expand upon extrusion and can be used, for example, to reduce weight or provide insulation. The level of foaming achieved is directly coupled to the temperature during material deposition, with higher temperatures generally leading to increased foaming. More foaming means greater volume; thus, to achieve uniform and dimensionally accurate prints with a good surface finish, the amount of material extruded must be proportionally updated to account for the expansion at different temperatures. The relationship between temperature, foaming, volume expansion, and therefore flow rate is unknown to a new user, and currently these relationships must be found through intensive manual experimentation.
The ability to discover the correct flow rates at given temperatures for these new materials demonstrates the effectiveness of the trained model. Figure 4A shows a test part printed from foaming PLA. Over the course of this part, the temperature was increased from 195 °C to 240 °C at evenly spaced intervals 20 mm apart. There were noticeable changes in extrusion, with vertical bands appearing in the print. At lower temperatures the part suffered from severe under-extrusion; at higher temperatures, there was significant over-extrusion. Plots showing the increase in temperature can be seen in Figure 4B along with the model's rolling mean flow rate predictions. The predictions for each target temperature were then averaged and plotted with their standard deviations in Figure 4C. This plot highlights that the model generalized to an unseen material, successfully predicting under-extrusion at low temperatures, good extrusion at 215 °C, and over-extrusion at higher temperatures. The value of such an automated system for determining the couplings and relationships between parameters is clear: users could autonomously predict optimal parameter levels for new unseen materials from a single sample print, and subsequently use these levels on future parts.
We go further and show that human operators are not even required to apply the model's predictions manually. The model enables rapid real-time closed-loop control, allowing the printer to self-correct and autonomously learn the optimal combination of parameters (see Figure 4D). A part was printed in the foaming PLA at 235 °C with the same settings as described previously (see Figure 4E).
During the printing process, images were captured and sent over the network for inference with test-time data augmentation. By taking the inverse of the predicted relative flow rate, updates were generated to proportionally correct towards the optimal level and then sent to the printer over the network. To increase the likelihood that the steady-state prediction for this unseen material was correct, and to reduce the chance of overshooting, slower updates were made, with 150 predictions averaged before sending a correction. Prior to printing, the G-code toolpath of the object was split into subpaths with a maximum length of 1 mm to reduce the firmware response time for updates; G-code commands are executed sequentially, so without splitting, long moves result in significant correction delays. The plots in Figure 4D show the predicted flow rates and the updates made to the actual flow rate until steady state was achieved. Figure 4E shows the part in its entirety - changes in extrusion level are visible, especially between the 1 and 2 min markers. During the main printing sequence from 2.5 to 5 min, the closed-loop system was remarkably stable and kept the flow rate within a tight range. The very beginning and very end of the predictions in Figure 4D show detection of under-extrusion. During the initial lines of printing, predicting the level of extrusion is challenging due to the lack of interaction with adjacent paths. This is compounded by a bias in the current training dataset, which contains more images of dense infill than of initial paths. Nozzle images at the end of printing appear as under-extrusion because the bed is visible between deposited paths - the primary feature of under-extrusion. More training data of these final lines would be required for accurate predictions; however, this limitation was acceptable, as only the final seconds of printing were affected, causing minimal impact. Figure 4F shows close-up images at the start of the print with a 100% flow rate at 235 °C and at the end with an optimal steady-state flow rate of 48%. These images show a clear improvement in print quality with minimal overshooting and oscillation.
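The core of this proportional update is simple enough to sketch. Here `printer.set_flow_rate` is a hypothetical stand-in for the networked firmware command (e.g. Marlin's M221); only the inverse-proportional update rule and the 150-prediction averaging come from the text:

```python
class FlowController:
    """Average N predictions, then apply an inverse-proportional correction."""

    def __init__(self, printer, n_predictions=150):
        self.printer = printer
        self.n = n_predictions
        self.buffer = []
        self.flow_pct = 100.0  # currently commanded relative flow rate

    def on_prediction(self, predicted_flow_pct):
        self.buffer.append(predicted_flow_pct)
        if len(self.buffer) < self.n:
            return
        mean_pred = sum(self.buffer) / len(self.buffer)
        self.buffer.clear()
        # If the model sees e.g. 50% flow, scale the commanded flow by
        # 100/50 = 2x to jump proportionally toward the optimum.
        self.flow_pct *= 100.0 / mean_pred
        self.printer.set_flow_rate(self.flow_pct)  # e.g. M221 over the network
```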

Rapid One-Shot Correction
For the production of end-use parts, control systems must correct errors rapidly to minimize their impact on dimensional accuracy and mechanical performance. Being able to precisely predict the exact distance from optimal extrusion allows the developed system to correct errors in one or very few actions. Due to the sampling rate and split toolpath, this update can be applied rapidly after an error occurs.
In Figure 5, the first layer of a spanner geometry is shown, with the overall printing direction indicated by the black arrow. An error was introduced, reducing the flow rate to 50% of optimal. With no feedback or automated correction applied, significant under-extrusion is visible for the remainder of the print (Figure 5A). The print settings were the same as those used for previous prints in this work. A second print was then run with identical settings but with error correction enabled. Figure 5B shows that the system was able to accurately predict and correct the flow rate rapidly. Unlike previous work, the controller does not iteratively approach the optimal flow but jumps to the correct level in one or a few shots. The controller showed good steady-state behavior with no large incorrect updates applied; however, minor oscillations did sometimes occur. To mitigate this, the response time was adapted to the flow rate prediction made. Predictions near 100% are less time critical and demand greater accuracy; therefore, for FIFO-buffered predictions between 89% and 113%, the average was taken across 20 buffers before an update was made. Additionally, to stop overshooting caused by the mechanical and software response time of the system, a delay of 150 images was applied after very large updates, ensuring new predictions were only made after the previous update had been realized. This delay was deemed acceptable because, given the one-shot nature of our control algorithm, a single update was often all that was required.
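This adaptive schedule can be sketched as follows, assuming `prediction_stream` yields buffer-averaged flow rate predictions in percent and `controller.apply` is a hypothetical hook for sending the correction:

```python
def run_adaptive_correction(controller, prediction_stream):
    """Apply updates with accuracy/speed trade-offs based on prediction size."""
    near_optimal = []
    skip = 0
    for pred in prediction_stream:  # buffer-averaged flow predictions, in %
        if skip > 0:
            skip -= 1               # wait for the last update to be realized
            continue
        if 89.0 <= pred <= 113.0:
            # Near-optimal flow: not time critical, so average 20 buffers
            # for extra accuracy before committing to an update.
            near_optimal.append(pred)
            if len(near_optimal) < 20:
                continue
            pred = sum(near_optimal) / len(near_optimal)
            near_optimal.clear()
        else:
            near_optimal.clear()
            skip = 150              # large correction: settle before re-predicting
        controller.apply(pred)
```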

Conclusion
Here we report a real-time closed-loop control system for 3D printing that predicts the precise level of material flow rate, enabling parameter discovery for unseen materials and rapid few-shot correction. Commonly available 3D printing parameter information was harnessed, alongside video of the printing process, to autonomously create a new dataset of over 250,000 labeled images. With this dataset, multiple regression models were trained to precisely predict the level of relative material flow as a continuous variable from a single input image. The temporal nature of extrusion printing was exploited by combining multiple predictions to further improve the accuracy of the system. The trained model was then used on complex materials to predict and learn the relationship between printing temperature and material flow rate in both an offline and online fashion, enabling autonomous parameter discovery for unknown materials - a previously labor-intensive and tedious process. The same model was also used as a few-shot controller to rapidly correct flow rate and recover parts after severe errors were introduced. Thanks to our unique data generation and labeling procedure of combining readily available manufacturing metadata with video, the system was even capable of correcting errors in a single action. Importantly, all of this was achieved in a short time, ~8 hours, with data collection taking only 5 hours on a single printer and the lightweight model taking 3 hours to train. This speed of deployment to new settings, alongside the use of metadata that is available for many manufacturing processes, may aid industry uptake and the future application of data-driven control.
Further Reading
- Read the paper
Authors
Douglas A. J. Brion
Sebastian W. Pattinson
Acknowledgements
This work has been funded by the Engineering and Physical Sciences Research Council (EPSRC) Ph.D. Studentship EP/N509620/1 to Douglas Brion, Royal Society award RGS/R2/192433 to Sebastian Pattinson, Academy of Medical Sciences award SBF005/1014 to Sebastian Pattinson, Engineering and Physical Sciences Research Council award EP/V062123/1 to Sebastian Pattinson, and an Isaac Newton Trust award to Sebastian Pattinson. For the purpose of open access, the author has applied a Creative Commons Attribution (CC-BY) license to any Author Accepted Manuscript version arising.