Accuracy of food portion size estimation from digital pictures acquired by a chest-worn camera

2014 
Self-reporting (e.g. electronic or paper-and-pencil food diary, 24 h dietary recall, FFQ) is the most common method of dietary assessment(1–5). Although this approach is used widely in large cohort studies, its accuracy is limited by its dependence on the willingness of the participant to report and his/her ability to estimate accurately the amount of food consumed(6–8). To improve assessment accuracy, various portion size measurement aids are employed, including pictures (two dimensions) or realistic models (three dimensions) of objects of known sizes (e.g. a life-size picture of a tennis ball or a real tennis ball)(9–12). With the help of portion size measurement aids, an individual’s ability to estimate portion size can be improved significantly, especially after training(13–16). However, the ability of portion size measurement aids to improve accuracy varies with food models, training methods, food type and study population(13–25). For example, Lanerolle and co-workers developed models specifically for Asian foods (e.g. rice, noodles)(21,22). Yuhas et al. compared estimation accuracies among solid foods, liquids and amorphous foods using portion size measurement aids. They concluded that errors were smallest in solid foods and largest in amorphous foods(23). Foster et al. showed the importance of using age-appropriate food photographs for studies in children(24,25). Regardless of these findings, the accuracy of dietary assessment methods still depends highly on the individual’s ability to estimate portion size accurately.

Recently a picture-based method for dietary analysis has been reported that uses camera-enabled mobile phones or tablet computers to record pictures of consumed foods and beverages. Pictures are acquired by the participant before and after meal and snack consumption. Food volume is then estimated from the pictures, and converted to energy and nutrient values using a nutritional database(5,26–31).
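The final step of the picture-based workflow, converting an estimated portion volume to energy via food density and a nutritional database, can be sketched as below. The database entries, density figures and function names here are illustrative assumptions for the sketch, not values or software from the study.

```python
# Minimal sketch: volume -> mass -> energy, using assumed density and
# energy-density values in place of a real nutritional database.
NUTRITION_DB = {
    # food: (density in g/cm^3, energy in kcal per 100 g) -- assumed values
    "rice, cooked": (0.80, 130.0),
    "orange juice": (1.04, 45.0),
}

def volume_to_energy(food: str, volume_cm3: float) -> float:
    """Convert an estimated portion volume (cm^3) to energy (kcal)."""
    density, kcal_per_100g = NUTRITION_DB[food]
    grams = volume_cm3 * density
    return grams * kcal_per_100g / 100.0

def consumed_energy(food: str, vol_before_cm3: float, vol_after_cm3: float) -> float:
    """Energy consumed = energy(pre-meal volume) - energy(leftover volume),
    mirroring the before/after picture pairs described above."""
    return volume_to_energy(food, vol_before_cm3) - volume_to_energy(food, vol_after_cm3)
```

For example, 200 cm^3 of cooked rice before the meal and 50 cm^3 left over would, under these assumed figures, correspond to roughly 156 kcal consumed.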
Compared with the method of employing portion size measurement aids, the picture-based method provides a more objective estimate of portion size. However, it requires the willingness of the participant to take pictures at each eating event. Hence, the food intake record may be incomplete if the participant forgets or ignores picture taking, especially when a meal involves multiple courses of foods and when picture taking disrupts his/her normal social interaction during eating. To resolve this issue, we developed a wearable device (‘eButton’) that automatically takes pictures at a pre-set rate without interrupting the participant’s eating behaviour. eButton is convenient to use, since the wearer only needs to turn it on and off. However, an important question is whether eButton pictures (which are two-dimensional) can provide accurate food volume (i.e. three-dimensional (3D)) estimates. In the present study we therefore compared food volumes estimated from eButton pictures with actual volumes measured using a seed displacement method(32,33). A few picture-based studies have attempted to analyse volume measurement error, but the food samples used in these studies were limited to those with standard volumes or volumes that could easily be measured by water displacement (e.g. solid fruits)(31,34). In this experiment, we studied real foods prepared or purchased by study participants and consumed at lunch break in the lab (see Fig. 1). The volume of each food item was first measured using the seed displacement method (see ‘Experimental methods’ section and online supplementary material) and then calculated using a software program from eButton images acquired during lunch. Unlike water displacement, seed displacement involves no liquids and thus permits volume measurements of a wide variety of foods.
For example, measuring a hamburger by water displacement requires an airtight, waterproof enclosure, yet it is difficult to control the amount of air sealed inside such an enclosure. To validate further the accuracy of our software for volume estimation, we recruited three human raters to estimate the volume of each food sample by observing the same eButton-acquired pictures.

Fig. 1 (colour online) (a) eButton prototype; (b) a person wearing an eButton during eating
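The comparison described above can be sketched as follows. The arithmetic assumes a standard displacement protocol (seed volume needed to fill the empty container versus the container with the food in place); the function names and signed-percentage error metric are illustrative assumptions, not the study's actual procedure, which is detailed in the ‘Experimental methods’ section and online supplementary material.

```python
def seed_displacement_volume(seeds_without_food_ml: float,
                             seeds_with_food_ml: float) -> float:
    """Food volume = volume of seeds filling the empty container
    minus volume of seeds needed once the food occupies part of it."""
    return seeds_without_food_ml - seeds_with_food_ml

def relative_error_pct(estimated_ml: float, reference_ml: float) -> float:
    """Signed percentage error of a picture-based volume estimate
    relative to the seed-displacement reference measurement."""
    return 100.0 * (estimated_ml - reference_ml) / reference_ml
```

For instance, if the empty container holds 1000 ml of seeds but only 780 ml with a food item inside, the item's reference volume is 220 ml; a picture-based estimate of 250 ml would then carry an error of about +13.6 %.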