We are pleased to announce a workshop for the DXCT society on 2 December 2020. The workshop is supported by the AdvanCT project, which is funded by the European Metrology Programme for Innovation and Research (EMPIR). Due to the impact of COVID-19, the event will be held online.
Advanced X-ray computed tomography for dimensional metrology – reconstruction algorithms
Workshop time: 9:00 – 11:30 GMT
Workshop Chair: Wenjuan Sun
Introduction to the event and the EMPIR AdvanCT project – Fast CT work package
Wenjuan Sun, National Physical Laboratory
Fundamentals of iterative reconstruction and parallel processing in computed tomography
While the direct reconstruction method filtered backprojection (and its variations) is overwhelmingly the most widely used algorithm in CT, it suffers heavily from artefacts when the data are noisy or not fully sampled. The alternative is iterative reconstruction. This family of algorithms models the X-ray system and iteratively minimises an objective function (often the difference between the model and the data) in order to obtain a better estimate of the reconstructed image. Iterative algorithms are powerful tools with a high computational cost, but they can incorporate prior information about the sample and can be very robust against CT errors. This talk will introduce iterative algorithms, address the issues that may arise from using them, and showcase algorithms and examples where they deliver a significant improvement in image quality.
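As a minimal, hypothetical sketch (not the speaker's implementation), the iterative idea described above can be illustrated with a Landweber iteration, i.e. gradient descent on the least-squares objective ½‖Ax − b‖², using a small random matrix as a stand-in for the real projection operator:

```python
import numpy as np

# Toy stand-in for the projection operator; in real XCT, A encodes the
# scanner geometry and is far too large to store as a dense matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = rng.random(10)           # the unknown "image"
b = A @ x_true                    # noiseless measurements

# Landweber iteration: x <- x + step * A^T (b - A x),
# i.e. gradient descent on 0.5 * ||A x - b||^2.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(10)
for _ in range(10_000):
    x += step * A.T @ (b - A @ x)

# x now approximates x_true; with noisy or undersampled b, this loop is
# where prior information (regularisation) would be incorporated.
```

With noisy data the iteration would typically be stopped early or augmented with a penalty term, which is exactly where the prior information mentioned in the abstract enters.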
The interplay between iterative reconstruction and fast X-ray computed tomography acquisition and their effect on dimensional measurement uncertainty
Thomas Blumensath, University of Southampton
In many in-line manufacturing settings, object dimensions need to be monitored. For objects with complex external and internal structures, X-ray computed tomography (XCT) is often the only viable inspection technique. However, XCT is a time-consuming process, which can limit its use in production-line settings. In this study, we compare the dimensional measurements extracted from XCT data when using two different approaches to speed up X-ray acquisition: 1) using a shorter X-ray exposure time, which leads to increased noise, and 2) using fewer X-ray projections, which leads to image artefacts. These data acquisition methods are compared to a slower, low-noise and low-artefact scan. Furthermore, the influence that different tomographic reconstruction algorithms have on dimensional measurement variability is analysed. Here we compare the standard Feldkamp-Davis-Kress (FDK) algorithm to two state-of-the-art iterative algorithms, the conjugate gradient least squares (CGLS) method and the fast iterative shrinkage-thresholding algorithm (FISTA), the latter also enforcing a total variation (TV) constraint. We observed that the performance of the selected fast acquisition strategy is linked to the reconstruction method used. Using limited projections did not always lead to viable surface estimates with the standard FDK algorithm, while both iterative algorithms were found to lead to biased results. Furthermore, the use of the total variation constraint was found to introduce additional variability and bias in dimensional measurements.
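For context, the CGLS method mentioned above applies conjugate gradients to the normal equations AᵀAx = Aᵀb using only products with A and Aᵀ, without ever forming AᵀA. A minimal sketch on a toy system follows; the random matrix is a hypothetical stand-in for a projection operator, and this is not the study's code:

```python
import numpy as np

def cgls(A, b, n_iter=None, tol=1e-14):
    """Minimal CGLS: conjugate gradients on the normal equations
    A^T A x = A^T b, using only products with A and A^T."""
    if n_iter is None:
        n_iter = A.shape[1]   # exact termination in exact arithmetic
    x = np.zeros(A.shape[1])
    r = b.copy()              # residual b - A x (x starts at zero)
    s = A.T @ r               # negative gradient of 0.5 * ||A x - b||^2
    p = s.copy()
    gamma = gamma0 = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol * gamma0:   # stop once the gradient has vanished
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Toy demo on a small overdetermined system.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))
x_true = rng.standard_normal(12)
x_rec = cgls(A, A @ x_true)
```

In XCT practice the matrix-vector products would be replaced by forward projection and backprojection routines, so the full system matrix is never stored.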
Deep learning image reconstruction for underdetermined inverse solution in industrial XCT with limited data
Manuchehr Soleimani, University of Bath
Recently, with significant developments in deep learning techniques, solving underdetermined inverse problems has become a major focus in many imaging domains. A good example is under-sampled and sparse-view X-ray computed tomography, where deep learning techniques can achieve remarkable performance. Deep learning methods appear to overcome the limitations of existing mathematical methods when handling various underdetermined problems, in particular in medical CT. Their application to very large-scale XCT, such as that used in metrology, is at an early stage. This study focuses on learning the relationship between the structure of the training data suitable for deep learning and the solution of highly underdetermined inverse problems in limited-data XCT. Comparisons are made with the medical CT domain, both in terms of the size of the imaging domain and of prior knowledge about the domain under imaging.
Efficient perturbation methods for uncertainty quantification in XCT reconstruction
National Physical Laboratory & University of Lyon
Many different methods can be applied to XCT reconstruction, each of them leveraging specific prior information about the nature of the object to be reconstructed. Such prior information is crucial when the number of projections is small compared to the size of the object under measurement. This prior information can be incorporated in the form of a penalisation term in a least squares approach, or by providing a tailored training data set from which estimators based on neural networks can perform accurate reconstruction. Understanding the impact of uncertainty in these reconstruction methods is a major challenge for modern data science and engineering, one that requires rethinking the statistical aspects of these problems. In particular, many standard approaches to uncertainty quantification are based on Bayesian methodologies, which may not be fully suitable in the XCT reconstruction context: they may not preserve the features expected in the original object, e.g. sparsity, as they may depart too far from the prior information when the number of measurements is too small. Bayesian approaches also suffer from the curse of dimensionality for large image sizes. In this work, we present a projection methodology, originally devised in the statistical literature, which is able to preserve the prescribed features at a very low computational cost. We will present the theory underlying the perturbation method and illustrate its advantages via numerical experiments.
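The simplest instance of the penalised least squares idea mentioned above is Tikhonov (ridge) regularisation, where the penalty encodes the prior that the solution should have small norm. The following is a toy sketch of that general idea only; the talk's projection methodology itself is not shown here:

```python
import numpy as np

# Underdetermined toy problem: 8 measurements, 20 unknowns, so the data
# alone cannot determine x; the penalty supplies the missing information.
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 20))
x_true = np.zeros(20)
x_true[[3, 7]] = 1.0              # a sparse "object"
b = A @ x_true

# Minimise ||A x - b||^2 + lam * ||x||^2, which has the closed form
# x = (A^T A + lam I)^{-1} A^T b. A sparsity prior would use an L1
# penalty instead, which has no closed form and needs iterative solvers.
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ b)
```

A sparsity-preserving method, as discussed in the talk, would replace the quadratic penalty with one that keeps features such as the two non-zero entries of the toy object intact.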
Evaluation of XCT reconstruction algorithms for dimensional metrology
Wenjuan Sun, National Physical Laboratory
X-ray computed tomography (XCT) is a powerful non-destructive evaluation technology. It has the potential to be used as a metrology tool to evaluate advanced manufactured parts with non-line-of-sight features. Reconstruction plays a vital role in XCT technology, as it transforms projection images into volumetric data. Reconstruction is also a source of error and contributes to measurement uncertainty; however, this impact is not well understood. The presentation introduces several testing metrics for the evaluation of reconstruction algorithms. Both analytical and iterative reconstruction algorithms are considered, and their impact on both image quality and dimensional metrology is investigated.
State-of-the-art X-ray computed tomography for dimensional metrology
Benjamin A Bircher, Alain Küng, Felix Meli
Federal Institute of Metrology METAS, Laboratory for Length, Nano- and Microtechnology, Bern-Wabern, Switzerland
We will discuss how X-ray computed tomography (XCT) can be applied to dimensional metrology. To reach the highest accuracy with XCT, a large number of influence factors must be taken into account. Measures to correct for these factors have been implemented on our home-built XCT system, METAS-CT, and will be presented. We will conclude with a number of use cases involving small workpieces.
The event is free, but registration is essential. Links and instructions to join the workshop will be sent to you prior to the event.
Sponsorship and Exhibitions
Opportunities are available to sponsor or exhibit at the event. For more information or to discuss sponsorship opportunities, please contact Dr Wenjuan Sun.
To receive information about future events, please join the DXCT LinkedIn group.