
Convolutional Generative Model for Pixel–Wise Colour Specification for Cultural Heritage

Giuseppe Furnari; Anna Maria Gueli; Filippo Stanco; Dario Allegra
2024-01-01

Abstract

Colour specification can be carried out using different instruments or tools. The main limitation of these existing instruments is the region in which they can be applied: they only work locally, on small regions of the surface of the object under examination. This implies a slow process, errors when the procedure is repeated, and sometimes the impossibility of measuring the colour at all, depending on the object's surface. We present a new way to perform colour specification in the CIELab colour space from RGB images, using a convolutional generative model that removes the shading effects from the image and produces an albedo image, from which the CIELab value of each pixel is estimated. In this work, we examine two different models: one based on an autoencoder and another based on GANs. To train and validate our models, we also present a dataset of synthetic images acquired with a Blender-based tool. The results obtained on the generated dataset demonstrate the performance of the method, which achieves a low average colour error (ΔE00) on both the validation and test sets. Finally, a real-scenario test is conducted on the head of the god Hades and on a half-bust depicting the goddess Persephone, both from the Archaeological Museum of Aidone (Italy).
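As an illustration of the colour spaces mentioned in the abstract (not the paper's own pipeline), sRGB pixel values can be mapped to CIELab via the standard D65 conversion. The sketch below, with function names of our choosing, implements that conversion in NumPy and the simple CIE76 colour difference; note the paper reports the more elaborate CIEDE2000 (ΔE00) metric.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] (shape (..., 3)) to CIELab (D65 white)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to obtain linear RGB.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ using the standard sRGB/D65 matrix.
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = linear @ M.T
    # Normalise by the D65 reference white.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # CIELab non-linearity.
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in Lab space.
    (The paper evaluates with CIEDE2000, a perceptually refined variant.)"""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

# A pure white pixel maps to approximately L* = 100, a* = b* = 0.
white = srgb_to_lab([1.0, 1.0, 1.0])
black = srgb_to_lab([0.0, 0.0, 0.0])
```

The same `srgb_to_lab` call works per-pixel on an entire H×W×3 image array thanks to NumPy broadcasting, which is how a per-pixel CIELab estimate over an albedo image would be obtained.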
2024
978-3-031-51025-0
978-3-031-51026-7
Color measurement · Color specification · Autoencoder · GANs
Files in this product:
File: ICIAP 2023_2.pdf
Type: Published version (PDF)
Licence: NOT PUBLIC - Private/restricted access
Format: Adobe PDF
Size: 1.94 MB
Access: archive administrators only

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/603709
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: n/a