Exploiting structured high-level knowledge for domain-specific visual classification

Palazzo S.; Murabito F.; Rundo F.; Giordano D.; Spampinato C.
2021-01-01

Abstract

In the last decade, deep learning models have yielded impressive performance on visual object recognition and image classification. However, these methods still rely on learning visual data distributions and struggle in complex scenarios where visual appearance alone is not enough. This is the case, for instance, of fine-grained image classification in domain-specific applications, where the lack of large amounts of samples makes purely data-driven models very hard to employ and which, instead, can be solved by resorting to specialized human knowledge. However, encoding this specialized knowledge and injecting it into deep models is not trivial. In this paper, we address this problem by: a) employing computational ontologies to model specialized knowledge in a structured representation and b) building a hybrid visual-semantic classification framework. The classification method performs inference over a Bayesian network graph, whose structure depends on the knowledge encoded in an ontology and whose evidence is built from the outputs of deep networks. We test our approach on a fine-grained classification task, employing an extremely challenging dataset containing images from several fruit varieties together with visual and semantic annotations. Since classification is done at the variety level (e.g., discriminating between different cherry varieties), appearance varies only slightly across classes, and expert domain knowledge, which makes use of contextual information, is required to classify accurately. Experimental results show that our approach significantly outperforms standard deep learning-based classification methods on the considered scenario, as well as existing methods that leverage semantic information for classification. These results demonstrate, on one hand, the difficulty of purely visual deep methods in tackling small and highly specialized datasets and, on the other hand, the capability of our approach to effectively encode and use semantic knowledge for enhanced accuracy.
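
To illustrate the kind of hybrid inference the abstract describes, below is a minimal, self-contained sketch (not the authors' implementation): a two-node Bayesian network whose variety-to-visual-attribute edge stands in for an ontology relation, with a CNN's softmax scores over the attribute classes used as soft (virtual) evidence. All variety names, attribute classes, and probability values are hypothetical.

```python
# Minimal sketch: posterior over fruit varieties given soft evidence from a CNN.
# Structure mirrors an assumed ontology relation "variety -> visual attribute".
# All names and numbers below are illustrative, not taken from the paper.

# Prior over varieties (e.g., from ontology/dataset statistics).
prior = {"cherry_ferrovia": 0.40, "cherry_bigarreau": 0.35, "cherry_giorgia": 0.25}

# Conditional probability table P(attribute | variety) over a discretised
# visual attribute (here, a fruit colour class).
cpt = {
    "cherry_ferrovia":  {"dark_red": 0.7, "bright_red": 0.2, "yellow_red": 0.1},
    "cherry_bigarreau": {"dark_red": 0.1, "bright_red": 0.3, "yellow_red": 0.6},
    "cherry_giorgia":   {"dark_red": 0.2, "bright_red": 0.6, "yellow_red": 0.2},
}

# Soft evidence: the CNN's softmax output over attribute classes for one image.
cnn_scores = {"dark_red": 0.15, "bright_red": 0.75, "yellow_red": 0.10}

def posterior_over_varieties(prior, cpt, soft_evidence):
    """P(variety | image) proportional to P(variety) * sum_a P(a | variety) * P_cnn(a)."""
    unnorm = {
        v: prior[v] * sum(cpt[v][a] * soft_evidence[a] for a in soft_evidence)
        for v in prior
    }
    z = sum(unnorm.values())
    return {v: p / z for v, p in unnorm.items()}

if __name__ == "__main__":
    post = posterior_over_varieties(prior, cpt, cnn_scores)
    for variety, p in sorted(post.items(), key=lambda kv: -kv[1]):
        print(f"{variety}: {p:.3f}")
```

In the paper's setting the network would have more nodes (one per ontology concept) and more attribute classifiers, but the principle sketched here is the same: the ontology fixes the graph structure and conditional dependencies, while deep network outputs enter only as evidence at the leaf nodes.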
2021
Belief networks
Computational ontologies
Fine-grained visual classification


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/523968
Citations
  • PubMed Central: N/A
  • Scopus: 6
  • Web of Science (ISI): 3