
CBC Digi-Hub Blog


Lifelogging cameras for passive and rich eating behaviour tracking

By Artur Direito, on 4 January 2018

By Dr. Rami Albatal, Big Data Scientist, Computer Vision Expert and Lifelogger; Lead Data Scientist at HeyStaks Ltd.

Lifelogging represents a phenomenon whereby people can digitally record their daily lives in varying amounts of detail, for a variety of purposes. In a sense it represents a comprehensive “black box” of a human’s life activities and may offer the potential to mine or infer knowledge about how we live our lives.

Besides activity and location trackers, wearable cameras were among the first and probably the richest Lifelogging devices (check out SenseCam, Narrative Clip and the recent Google Clips cameras). These cameras are designed to record the life experience of the wearer by passively capturing images based on movement, time, location or contextual triggers (e.g. a smiling person).

With over 1,000 images captured per day, Lifelogging images are the holy grail for personal behavioural and lifestyle analysis. This analysis requires advanced skills in Computer Vision, Machine Learning and visualisation; these are the core skills of the Lifelogging research team at Dublin City University, where dozens of Lifelogging research projects are undertaken.

While I was working with the Lifelogging team at DCU, I had high cholesterol, even though I am slim and have no particularly poor eating habits (that I am aware of :)). My GP insisted that something in my diet was not right and that I should monitor my food intake. I started keeping a food diary, and soon noticed that maintaining one manually is a cumbersome task: it is time consuming and can easily lack objectivity. I then tried mobile apps such as MyFitnessPal and My Food Diary; despite their nice visualisations, they still require manual effort in entering food types, not to mention the explicit act of taking pictures, which is easily forgotten.

From here, as a Lifelogger and Computer Vision researcher, I had the idea of using Computer Vision and Machine Learning algorithms to automatically extract food images from my Lifelog. Food detection in digital images is not a new topic, but its application to Lifelogging was novel at the time. It not only reduced the time and effort of completing the food diary, but also gave me valuable insights about the timing of my meals and snacks, the speed of meal consumption, and the social context of my meals (alone or with people, indoor or outdoor, at a table or holding the plate in my hand, etc.). Visualising the food I consume made me realise my bad snacking habits (crisps/chips) between meals. As I often had these snacks while working at my desk or watching movies at home, I often didn’t “remember” to write them in my diary (or capture them with the apps). Fortunately, Lifelogging cameras don’t cheat: they captured every detail of my food consumption, and I was able to reduce my cholesterol level by being more aware and by having tools that maximise objectivity and allow easy visualisation.

It is now possible to classify food images into high-level categories such as soup, salad, pizza, fast food, rice meals, bread, etc., and with advanced face detection algorithms we can protect other people’s privacy by hiding their faces automatically. This combination of hardware and software technologies may help Lifelogging cameras penetrate the market of food consumption monitoring and other health-related applications.

Here are some automatically detected food images (the faces in the images are automatically detected and blurred for privacy protection):

Ah come-on, it is just a muffin with butter!

Pizza time, does this count?

Each image is timestamped, so we can tell how quickly the meal was eaten.
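The duration estimate works like this: consecutive food-image captures are grouped into one meal episode whenever they fall within a chosen gap, and each episode’s duration is the span from first to last shot. A minimal sketch, with hypothetical timestamps and a `meal_durations` helper of my own invention:

```python
from datetime import datetime, timedelta

def meal_durations(timestamps, max_gap=timedelta(minutes=10)):
    """Group food-image timestamps into meal episodes and return the
    duration of each episode (first capture to last capture)."""
    episodes = []
    for t in sorted(timestamps):
        if episodes and t - episodes[-1][-1] <= max_gap:
            episodes[-1].append(t)  # same meal: within max_gap of the last shot
        else:
            episodes.append([t])    # gap too large: start a new meal episode
    return [ep[-1] - ep[0] for ep in episodes]

# Hypothetical captures from a wearable camera: a lunch and a dinner
shots = [datetime(2018, 1, 4, 13, 0), datetime(2018, 1, 4, 13, 4),
         datetime(2018, 1, 4, 13, 9),   # lunch spans 9 minutes
         datetime(2018, 1, 4, 19, 30),
         datetime(2018, 1, 4, 19, 33)]  # dinner spans 3 minutes
print(meal_durations(shots))
```

The choice of `max_gap` decides whether a snack shortly after a meal counts as the same episode; in practice it would be tuned per wearer.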

After this overview of my personal experience in eating behaviour tracking, new questions keep arising and offering new opportunities to use Lifelogging cameras. Here are a couple of simple questions that led to projects at the universities of Auckland and Wellington in New Zealand:

  • As in-house nursing in some rehabilitation processes (as in the case of heart failure rehabilitation) is expensive, time consuming and requires the nurse to travel to the patient’s house, how can we use Lifelogging cameras to automatically generate an accurate and objective report about the patient’s daily activities and health status?
  • Can we fight obesity by measuring kids’ exposure to fast food advertising around their schools, so that green zones can be established around schools to reduce kids’ fast food consumption?

Please feel free to contribute by proposing new ideas!

For a more futuristic, interesting, but also cynical point of view on Lifelogging, feel free to watch the Black Mirror episode “The Entire History of You”.

Bio:

Dr. Rami Albatal

Ph.D. Big Data Scientist, Computer Vision Expert and Lifelogger; Lead Data Scientist at HeyStaks Ltd. My research and development interests span the areas of Machine Learning, Lifelogging, Computer Vision and Information Retrieval. Currently working on Machine Learning solutions for user behavioural modelling, predictive analysis and real-time ad targeting. Previously a Lifelogging researcher at Dublin City University; an active long-term Lifelogger with advanced Computer Vision skills; co-organiser of the Lifeloggin@Dublin meetup group and of the international Lifelogging evaluation campaign NTCIR-Lifelog.
