Understanding food images can enable technologies aimed at improving quality of life in society. We focus on the problem of analyzing food images to recognize the utensils needed to consume the meal depicted in the image. The proposed investigation has both practical and theoretical relevance, since (1) it can contribute to the design of intelligent systems able to assist people with mental disabilities, and (2) it allows us to assess whether high-level concepts related to food (e.g., how to eat it) can be inferred from visual analysis. We augment the FD1200 dataset with utensil labels and perform experiments considering AlexNet features coupled with a multiclass SVM classifier. Results show that even such a simple classification pipeline can achieve promising results.
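The pipeline described above (deep features fed to a multiclass SVM) can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the random 4096-dimensional vectors stand in for AlexNet fc7 activations, the three-class label set (an assumed stand-in for utensil categories) and all parameter choices are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, feat_dim, n_classes = 300, 4096, 3  # 4096 = AlexNet fc7 size

# Placeholder features; in the paper's pipeline these would be AlexNet
# activations extracted from the food images.
X = rng.normal(size=(n_samples, feat_dim))
y = rng.integers(0, n_classes, size=n_samples)
# Inject a simple class-dependent signal so the toy data is separable.
X[np.arange(n_samples), y] += 5.0

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Linear multiclass SVM (one-vs-rest) on the feature vectors.
clf = LinearSVC(C=1.0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.2f}")
```

A linear kernel is a common choice on top of high-dimensional CNN features, since the features are already quite discriminative and a linear decision boundary keeps training cheap.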
|Title:||Understanding Food Images to Recommend Utensils During Meals|
|Publication date:||2017|
|Appears in type:||4.1 Contribution in Conference Proceedings|