Fully automatic segmentation of the mandible based on convolutional neural networks (CNNs)

Lo Giudice A. (first author; Writing – Original Draft Preparation); Ronsivalle V.; Spampinato C.
2021-01-01

Abstract

Objectives: To evaluate the accuracy of an automatic deep learning-based method for fully automatic segmentation of the mandible from CBCT scans.

Setting and sample population: CBCT-derived fully automatic mandible segmentation.

Methods: Forty CBCT scans from healthy patients (20 females and 20 males, mean age 23.37 ± 3.34 years) were collected, and manual mandible segmentation was carried out using Mimics software. Twenty CBCT scans were randomly selected and used to train the artificial intelligence model. The remaining 20 CBCT segmentation masks were used to test the accuracy of the automatic CNN-based method by comparing the volumes of the 3D models obtained with automatic and manual segmentation. The accuracy of the CNN-based method was also assessed using the Dice similarity coefficient (DSC) and a surface-to-surface matching technique. The intraclass correlation coefficient (ICC) and Dahlberg's formula were used to test intra-observer reliability and method error, respectively. An independent Student's t test was used for the between-groups volumetric comparison.

Results: Measurements were highly correlated, with an ICC value of 0.937, while the method error was 0.24 mm³. A difference of 0.71 (±0.49) cm³ was found between the methodologies, but it was not statistically significant (P > .05). The matching percentage was 90.35% (±1.88%) at a 0.5 mm tolerance and 96.32% (±1.97%) at a 1.0 mm tolerance. The differences between the two methods, expressed as DSC percentages, were 2.8% and 3.1%, respectively.

Conclusion: The tested deep learning CNN-based technology is accurate and performs as well as an experienced image reader, but at much higher speed, which is of significant clinical relevance.
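For readers less familiar with the evaluation metrics named in the abstract, the sketch below shows how a Dice similarity coefficient, a tolerance-based surface-to-surface matching percentage, and Dahlberg's method error can be computed from two binary segmentation masks. This is not the authors' code: the use of NumPy/SciPy, the function names, the default voxel spacing, and the one-directional matching variant are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def dice_score(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return float(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))


def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Boolean map of the voxels lying on the boundary of a binary mask."""
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)


def surface_match_percentage(auto: np.ndarray, manual: np.ndarray,
                             spacing=(0.3, 0.3, 0.3), tol_mm=0.5) -> float:
    """Percentage of automatic-surface voxels lying within tol_mm of the
    manual surface; `spacing` is the (assumed) CBCT voxel size in mm."""
    manual_surf = surface_voxels(manual)
    # Distance (in mm) from every voxel to the nearest manual-surface voxel.
    dist_mm = ndimage.distance_transform_edt(~manual_surf, sampling=spacing)
    return float(100.0 * (dist_mm[surface_voxels(auto)] <= tol_mm).mean())


def dahlberg_error(first: np.ndarray, second: np.ndarray) -> float:
    """Dahlberg's method error, sqrt(sum(d_i^2) / 2n), over paired repeated
    measurements (e.g., two segmentation sessions by the same observer)."""
    d = np.asarray(first, float) - np.asarray(second, float)
    return float(np.sqrt((d ** 2).sum() / (2 * d.size)))
```

With hypothetical masks `auto` and `manual` loaded from the two segmentations, calling `surface_match_percentage(auto, manual, tol_mm=0.5)` and with `tol_mm=1.0` would correspond to the two tolerances reported in the results. Whether the published matching percentages were computed one-directionally or symmetrically is not stated in this record, so the sketch should be read only as an illustration of the general technique.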
Publication year: 2021
Keywords: 3D rendering; CBCT; artificial intelligence; mandible
Files in this product:
File: ocr.Fully atutomatic mandible pdf.pdf (restricted to archive administrators)
Type: Published version (PDF)
License: NOT PUBLIC - Private/restricted access
Size: 860.69 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/543661
Citations:
  • PMC: 18
  • Scopus: 33
  • Web of Science (ISI): 31