AI in Healthcare Virtual Summit Session Recordings
Ambient and Wearable Devices Health Monitoring
Video Transcription
Well, welcome everyone. I greatly appreciate the invitation to talk at this summit. My name is Eduard Sazonov. I'm a computer engineer at the University of Alabama; there are several Universities of Alabama, and this is the one with the football team. Today I'm going to give an overview of the field and talk about some of our own work on wearable sensors, AI, and their application to precision nutrition.

Precision nutrition has gained significant interest in the past few years. Its goal is to provide personalized, individually tailored recommendations about diet, about specific foods, and about the effect of those foods on individual health. It involves multiple cross-sections between genetics, the microbiome, metabolomics, and ordinary lifestyle factors, and that is a lot of data to process together. Part of what I'm going to cover is the use of wearables and artificial intelligence to gather and process dietary data. The importance of precision nutrition is highlighted by a recent project started by NIH as part of the All of Us program. This ambitious project, Nutrition for Precision Health, aims to predict individual responses to food and dietary patterns. The study involves up to 10,000 individuals, which is a lot of people, and uses this diverse data set to develop models that predict these individualized responses.

With that in mind, how do we characterize diet, nutrition, and eating behavior? We can ask a few questions here. One important question is determining when we eat, that is, the timing and duration of eating, because to see, say, the glucose response to eating, you need to know when it happened. Then you need to know what was consumed and how much of that food was consumed. Other questions include how we eat: the microstructure of eating, eating behavior, speed of eating, and whether the food was consumed alone or in a social situation. We can extend this by asking why we eat: is it necessary nutrition, or is it some sort of psychological reward? And finally, we could think about using all of these measured quantities to change eating behavior. Today I'll focus on the first three questions: timing, what we eat, and the quantity of consumption.

This is a challenging problem. Trying to identify when somebody is eating is difficult, so people have historically explored numerous indirect indicators of food intake, and here we're interested in how those indicators apply over the lifespan. Newborns are very limited in their choice of foods, either milk or formula. They consume it by sucking and ingest it by swallowing, and researchers have developed a number of sensor modalities that look at this process of sucking and swallowing and try to identify and quantify the amount of eating. As kids grow, two things happen. First, they develop dentition, so now they can chew solid foods. Second, they develop fine motor skills, so now they can bring food to their mouth in a hand-to-mouth gesture. Both of those modalities were also explored as proxies for detecting when someone is eating.

So let's start with a sucking monitor. This is an example from our work, a very crude prototype. The monitor tracks activation of the jaw muscles through a small piezoelectric sensor. You can see all of the individual sucks and the pauses in sucking, and you can quantify timing, duration, rate, and other aspects.
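As a concrete illustration of that last quantification step, here is a minimal sketch of how suck (or chew) events might be counted and rated from a one-dimensional piezoelectric trace. This is not the device firmware; the sampling rate, prominence threshold, and minimum inter-event spacing are assumptions chosen only for the example.

```python
# Minimal sketch: count sucking events and estimate sucking rate from a
# 1-D piezoelectric jaw-muscle trace. All thresholds are illustrative.
import numpy as np
from scipy.signal import find_peaks

def summarize_sucking(signal: np.ndarray, fs: float = 100.0) -> dict:
    """Quantify timing, count, and rate of sucks in a piezo trace."""
    # Detect individual sucks as prominent peaks; a real system would
    # likely use a trained model rather than a fixed threshold.
    peaks, _ = find_peaks(signal,
                          prominence=0.5 * np.std(signal),
                          distance=int(0.4 * fs))   # at least 0.4 s apart
    duration_s = len(signal) / fs
    return {
        "n_sucks": len(peaks),
        "rate_per_min": 60.0 * len(peaks) / duration_s,
        "suck_times_s": peaks / fs,                  # usable for pause analysis
    }

# Example with synthetic data standing in for a real recording
rng = np.random.default_rng(0)
t = np.arange(0, 30, 0.01)                           # 30 s at 100 Hz
trace = np.sin(2 * np.pi * 1.2 * t) + 0.2 * rng.standard_normal(t.size)
print(summarize_sucking(trace))
```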
Moving on to hand-to-mouth monitors: these are somewhat popular because they're very socially acceptable; it's just a wristband. A lot of this work was done by Adam Hoover at Clemson. They have extremely high social acceptance, but they also have downsides, because the hand-to-mouth gesture, grabbing something and bringing it to your mouth, is not unique to food. It could be something unrelated to eating, such as using a napkin, and smokers make a lot of similar gestures when bringing cigarettes or vapes to the mouth. Obviously, this type of sensor may be used to detect when you're eating, but it does not give us much insight into what food is being consumed.

Moving on to swallowing monitors: this is a very reliable way to detect when someone is eating, and the reason is simple. Regardless of whether you're eating solid or liquid food, the rate of swallowing shoots up compared to baseline spontaneous swallowing. This increase in swallowing rate is very easy to detect and is a very good indicator that someone is eating. But this sort of sensor ran into a social-acceptance problem. On the left, for example, we have a sensor that we experimented with, and people are not willing to accept it as an everyday measure. And obviously, it also does not give us insight into the type of food being consumed.

Next, we have a lot of skeletal muscles that control ingestion of food. The most prominent are the masseter muscle, which moves the jaw, and the temporalis muscle, which resides on the temple. Because we chew most of the foods we consume, this is also a very reliable method of detecting when somebody is eating. But it may produce false negatives for things like sipping a beverage, and it may generate false positives when somebody is, say, chewing gum. This sort of technology has reasonable social acceptance, but it also does not provide insight into what is being consumed.

So what can we do with wearables? Here is an example. First of all, you can establish the timing and duration of eating events and even look at things such as probability patterns of eating. You can look at daily patterns, weekly patterns, and so on. Ultimately, even though they don't tell us what is being consumed, they give very good insight into eating patterns and behavior. There is another side to this, because you can imagine that the more you chew, the more you swallow, and the more hand gestures you make, the more food you are consuming. Studies have been conducted using this information to quantify the amount eaten, and as you can see from the graph, which is from one of the published studies, the correlation is there, but it's not very strong. That leads us to conclude that wearables are really helpful in detecting eating and looking at the patterns, but they are limited in what they can tell you about the food being consumed.

So there has got to be another way, right? Since the invention of photography, people have tried to use it to quantify what is being eaten. One of the oldest studies that I'm aware of is from 1985. Participants carried a film camera and took pictures of everything they consumed; the film was then developed, and the photographs were printed and used to analyze consumption.
Obviously, with the invention of the digital camera, this situation changed dramatically. Digital cameras are now everywhere, and they are widely used to quantify the type of food consumed. This involves stationary cameras that may be installed in a cafeteria, portable cameras (some of the earlier attempts), and now smartphones, which everybody uses. On the bottom row, you see some representative wearables where a camera is integrated into a wearable device, whether it's worn over the ear, pinned in front to the shirt, or mounted on eyeglasses as shown here. These modalities allow us to capture what is being eaten. The capture may be what is called active, meaning the participant has to take a picture like the one shown on this slide: the participant has to place all of the foods in front of them and take a picture. Or it may be passive, as with a wearable sensor where the camera does this automatically.

But once you capture the image, what do you do with it? The very first attempts used a nutritionist's analysis of the captured image: you could look at what type of food is in the image and in what quantity, enter this into a database, and obtain the nutrition information. That looks promising; however, the error in this estimation is fairly high. From our experience, a lot of the error comes from portion-size estimation, and I'm going to talk about that specifically, because objects look different on camera than in real life.

Since then, there has been a great explosion in artificial intelligence and machine learning. Ever since the 1990s and early 2000s, researchers have been trying to use computer vision techniques to understand the content of food images and perform tasks such as segmentation of foods, as shown here. Once we capture an image, we can automatically isolate food items from the background. After we isolate food items from the background, we may want to recognize what type of food is being consumed. Then, going back to segmentation, we can segment out the space occupied by the food and try to estimate the volume of each individual food item. All of this information can then be brought together to look up nutrient information from a standard database such as FNDDS, USDA SR, or the branded food products available from USDA. As you can see, that is a lot of analysis steps, and errors may occur at every step: you may misrecognize the food item, you may misestimate the portion size, and the database information itself may not be completely accurate, only an approximation of the real food item.

So let's take a look at some of these steps. The very first step: if we have an image, we want to process it, and the very first question is, is there any food in the image at all? We can take the whole image and try to recognize whether it contains a food item, because if it doesn't, we don't need to analyze it. We use this sort of processing on our wearable data just to sift through the images and keep only those that contain food. The next question is, where in the image is the food item? We want to concentrate our analysis on that portion of the image. And the third question is food recognition: identifying the type of food.
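To make the first of those three questions concrete, here is a minimal sketch of an image-level food / no-food filter of the kind used to sift wearable images. It assumes a binary head on an ImageNet-pretrained ResNet-18; the fine-tuned weights file (food_filter.pt) is hypothetical and would have to be trained on labeled wearable images, so the sketch only shows the shape of the filtering step, not the speaker's actual model.

```python
# Minimal sketch: keep only images that likely contain food.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)            # food / no-food head
# model.load_state_dict(torch.load("food_filter.pt"))    # hypothetical trained weights
model.eval()

def contains_food(path: str, threshold: float = 0.5) -> bool:
    """Return True if the image likely contains food and should be kept."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        prob_food = torch.softmax(model(x), dim=1)[0, 1].item()
    return prob_food >= threshold

# Usage: kept = [p for p in image_paths if contains_food(p)]
```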
Food recognition is what this image illustrates: we perform object detection, try to locate each of the food items, and try to identify the type of each food item in the image. Since this is an AI summit: historically, this sort of analysis has been done with object detectors based on convolutional networks. But since then, folks have developed transformers (probably the most famous transformer-based system is ChatGPT) and started asking: can we use the transformer architecture not only to identify the box where the food is, and maybe classify what the food is, but to describe the whole food scene? This is an illustration from one of the papers based on a study funded by the Gates Foundation and conducted in Africa by a consortium I was part of. An image is captured, individual items in the image are identified and classified to recognize the food type, and then it goes through a transformer architecture that can produce something like "a person is eating cassava and okra stew," that is, a text description of what is happening in the image.

Another very interesting paper looks at recipes, because a lot of the foods we consume are mixed, and it is very hard to identify a single ingredient. A smoothie is a really good example: it's all blended together, so how would you know what goes into it? That paper ran a very interesting experiment where they tried to match a query image (they want to know the recipe for this image) and developed an artificial-intelligence method to look up a similar image and approximate the recipe. In the left column you see the true ingredients; in the right column you see the retrieved ingredients. They're close, but definitely not perfect. Overall it is a very interesting study that highlighted the importance of understanding what is in our food.

The next step, as I mentioned, is segmentation, because segmentation needs to precede the estimation of portion size. Segmentation methods historically originated in classical computer vision and have now progressed to deep-learning segmentation networks. Here you see an illustration from a paper by some of my colleagues comparing different methods on a range of foods, from progressively simple foods such as an apple to very complex foods such as a mixed salad. As you can see, it is a challenging problem, and the challenge grows as the foods become more and more mixed together, because you need to estimate each of those items accurately or develop a technique that combines all of this information. We need this to estimate portion size, because that is the most critical value we can infer when quantifying caloric intake, or energy intake. There is certainly variability in energy density across foods, but portion size is one of the most decisive parameters determining the total energy consumed in a given eating event.
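As a concrete stand-in for the segmentation step just described, the sketch below reuses a Mask R-CNN pretrained on COCO, which happens to include a handful of food classes (pizza, banana, sandwich, and so on). A real dietary pipeline would use a network trained on a dedicated food-image dataset; the pixel areas returned here are only a crude precursor to the volume estimation discussed next.

```python
# Minimal sketch: segment food items in an image and report pixel areas.
import torch
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights)
from torchvision.transforms.functional import to_tensor
from PIL import Image

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]
FOOD_CLASSES = {"banana", "apple", "sandwich", "orange", "broccoli",
                "carrot", "hot dog", "pizza", "donut", "cake"}

def segment_food(path: str, score_thr: float = 0.7):
    """Return (label, pixel_area, score) for each detected food item."""
    img = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]
    items = []
    for label, score, mask in zip(out["labels"], out["scores"], out["masks"]):
        name = categories[label]
        if score >= score_thr and name in FOOD_CLASSES:
            # Pixel area is only a crude stand-in for a later volume estimate.
            area = int((mask[0] > 0.5).sum())
            items.append((name, area, float(score)))
    return items
```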
Analysis of images is challenging because of perspective. First of all, cameras have different view angles, and on the right-hand side you see an example of what is called forced perspective: larger objects appear smaller when they're farther away, and smaller objects appear larger when they're closer. For us to estimate the true size of an object, we really need to know a few things, such as the distance to the object and the view angle of the camera. Historically this has been handled with fiducial markers, like the ones you see in the images here. This is a colored square: the square defines the distance to the scene and the viewing angle, and it also carries a color pattern that you can use to accurately reproduce colors under different lighting. Obviously this creates some participant burden, because whoever wants to estimate the amount and type of food has to carry the marker around and place it in the image every time, but it is nevertheless a very popular technique. And it is not the only possible technique: a few papers have appeared where different types of sensor technology are used. Here, for example, from one of our papers, we use a time-of-flight sensor. Essentially, this sensor shoots a laser beam at a point in the scene and measures the distance with accuracy down to a millimeter; you can then use this information to perform a variety of geometric transformations and reconstruct the actual size of the food item.

Now it's time to switch gears from the overview. I want to talk a little bit about our sensor, the Automatic Ingestion Monitor, and how it combines sensors, the Internet of Things, and machine learning and artificial intelligence. AIM is short for Automatic Ingestion Monitor; it is a small camera that you wear on eyeglasses. It has an eating sensor, an optical sensor, that looks at the temporalis muscle, which is on your temple. The best thing about this muscle is that it is strongly activated during chewing and not very much activated when people talk, so it gives a very good proxy for detecting chewing and sucking. The device was developed with the idea of being passive: the participant only wears it and does nothing else, because the device automatically detects eating and captures eating events, which may happen in many different environments. If you think about image-based methods, if someone is driving a car, and we see that a lot of people eat while driving, it is probably not a good idea to use your smartphone to capture that meal. Another benefit of this sensor is that we are measuring muscle activation, so we can look at very detailed microstructure metrics of food consumption. And another capability of this device is measuring whether people are actually using it, because any sensor you develop is only good if people are using it, so we can quantify this and remind participants to wear it if needed.
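Returning for a moment to the forced-perspective point above: under a simple pinhole-camera assumption, a distance measurement (from a time-of-flight sensor or a fiducial marker of known size) is exactly what converts pixel measurements into physical size. The focal length and distances below are made-up numbers for illustration, not values from any particular device.

```python
# Minimal sketch: with the pinhole model, real size = pixels * distance / focal length.
def pixel_to_real_length(pixels: float, distance_mm: float,
                         focal_px: float) -> float:
    """Real-world length (mm) of a segment that spans `pixels` on the sensor."""
    return pixels * distance_mm / focal_px

def real_area_mm2(pixel_area: float, distance_mm: float,
                  focal_px: float) -> float:
    """Approximate real area of a roughly planar, fronto-parallel region."""
    scale = distance_mm / focal_px          # mm per pixel at that distance
    return pixel_area * scale ** 2

# The same plate photographed closer occupies more pixels, but maps to the
# same physical size once distance is taken into account:
print(pixel_to_real_length(600, 300, 900))   # 200 mm when 30 cm away
print(pixel_to_real_length(900, 200, 900))   # 200 mm when 20 cm away
```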
Just to give you some idea of what the AIM data looks like: the optical sensor looks at the temporalis muscle, and this is the signal the sensor produces during eating. You see the signal at the bottom and the images captured by the camera, and they are time-aligned; the time scale here is a few minutes. You can see how the images line up: say, drinking appears on the camera, then we see a pause in eating, then we see some chewing, so we can see one ingestion bout here and another ingestion bout here, followed by a short pause in eating. That is the kind of information we are getting from the wearable.

Then the interesting question is how to process this, how to use machine learning and AI on this data. This is the data flow for processing. Our wearable can be used autonomously, not connected to an app or to the internet; it can be used simply as a monitoring tool, because it has internal storage, so we can store data on the device and then use a direct upload to process it. The device also has Bluetooth connectivity and can go through the user's phone, where we can have a real-time data stream to our server, process all the data in real time, and even generate feedback to the participant in real time. I put a neural-network mark on all of the places where we use machine learning and artificial intelligence: we use it on the device, and we use it in every step of the processing.

I want to highlight the different roles machine learning and artificial intelligence play in processing this data. The first is on the device itself. You see the electronic board; it is tiny. We have very limited bandwidth to send data over Bluetooth and very limited battery capacity. It is nothing like what is available on a smartphone; it is much, much smaller, because it has to be lightweight, but it also has to last a very long time. So we use machine learning algorithms to control image capture on the device. We implement a tiny model; the code I'm showing on the slide is not ChatGPT, it is just a few lines of code, but it is machine learning code. This code monitors the sensor signal in real time, and when the model decides that the person is eating, it triggers image capture by the camera. This particular model is biased toward sensitivity, so we don't miss any eating events. The benefit is that we don't have to send that much information over Bluetooth and we don't have to capture that many images. The camera is the most power-hungry element of any device; if you run the camera on your phone for a long time, it sometimes overheats, that's how power-hungry it is. So this saves us a lot of energy.

Then the sensor signals are processed differently from the images, and we can do this on a CPU. You're probably aware of the difference between a CPU and a GPU: the CPU is much cheaper and widely available on any personal computer. Here we utilize machine learning models such as gradient-boosted trees, support vector machines, and random forests. They are much larger models than what we run on the device, but they are also more accurate; they allow us to quantify the values we want with much greater accuracy, simply because they are more computationally intensive and can run more sophisticated processing. Normally we do this in Python, which is today's go-to platform for machine learning and artificial intelligence.
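A minimal sketch of the two roles of machine learning just described, with made-up coefficients and stand-in data; the actual AIM trigger model and the trained CPU-side models are not shown in the talk, so everything numeric here is an assumption.

```python
import numpy as np

# (1) On-device trigger: a tiny linear model over two window features,
# with a low decision threshold so it is biased toward sensitivity
# (few missed eating events, at the cost of extra captured images).
W, B, THRESH = np.array([2.1, 1.4]), -1.0, 0.3      # placeholder values

def should_capture(window: np.ndarray) -> bool:
    """Decide from one sensor-signal window whether to trigger the camera."""
    feats = np.array([window.std(), np.abs(np.diff(window)).mean()])
    p = 1.0 / (1.0 + np.exp(-(W @ feats + B)))      # logistic score
    return p > THRESH                               # deliberately low threshold

# (2) Off-device, CPU-side model: a larger gradient-boosted classifier over
# richer features extracted from whole sensor records.
from sklearn.ensemble import GradientBoostingClassifier
X_train = np.random.rand(200, 12)                   # stand-in feature matrix
y_train = np.random.randint(0, 2, 200)              # stand-in eating labels
clf = GradientBoostingClassifier().fit(X_train, y_train)
# eating_probability = clf.predict_proba(X_new)[:, 1]
```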
Then the images are processed on a GPU, which is significantly more complicated processing. Our images go through privacy protection: we remove any non-food images, we blur screens, we blur people's faces, and we focus only on the food in the image; I will show you a few examples. We are also looking at food and beverage detection and working on processing the images in real time. This requires very high computational resources; we use machine learning frameworks such as PyTorch and Keras, we use big GPUs to crank this out, and it is also done in Python.

Just to give you some idea of what happens with these images, the very first step of processing is privacy protection. Privacy protection of any image information is critical, and it will probably gain even more attention with the advent of augmented-reality devices; all of the big tech companies are trying to put together augmented-reality glasses and beat everybody else to that market. Augmented-reality glasses are not possible without forward-looking cameras, which is somewhat similar to what we have here, so we had to come up with a privacy-protection framework that eliminates people and screens and follows the ethical guidelines for wearable camera research (a published paper; the link is below) in processing our data. That is a very important issue to address.

Something else we do, as I mentioned, is determine compliance: are our participants wearing the device, or has it just been sitting in a pocket or on a desk? We have a small accelerometer on the device and our optical sensor, and we use a combination of these sensor signals to estimate compliance. You're looking at a compliance report from an ongoing study. Any white area here is where there was no device wear, so it is non-compliant; green is charging, blue is compliant wear, and meals are shown in red. This is almost a week of data: you see all the eating events, and you see that the participant is mostly compliant. We do ask them to remove the camera for any activity they consider private, so it is normal for participants to take the device off when they don't want it on. You can see the eating patterns, and one interesting observation here is that Thursday rolls over into Friday with eating, which gives us insight into what is happening with this person's eating behavior.

Then, obviously, we want to get nutrient and dietary data out of the device. We have a semi-automatic system (it's not completely AI yet, but we are moving in that direction) where images captured by the device are semi-automatically annotated from different databases. In this particular example, you can see that the Chick-fil-A sandwich came from USDA SR, whereas the rest of the food items came from FNDDS. We can then generate a report that combines all of the compliance information, the nutrient information, and the meal-timing information, and gives a comprehensive picture of when the person was eating, what they were eating, and other metrics such as chewing rate. Here is a slide illustrating what you might see on this report, somewhat similar to what I showed before. You see a weekly pattern, with different days shown in different colors; each bar represents an eating event, and we observe anywhere from very few to quite a few eating events during the day.
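Going back to the privacy-protection step mentioned at the start of this part of the pipeline, here is a minimal face-blurring sketch using OpenCV's bundled Haar cascade. The published framework is more elaborate (screens are blurred as well, and stronger detectors are used); this only illustrates the idea.

```python
# Minimal sketch: blur detected faces in a captured image before any
# further processing or storage. Detector choice and kernel size are
# illustrative assumptions, not the published pipeline's settings.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(in_path: str, out_path: str) -> int:
    """Blur every detected face and return how many faces were blurred."""
    img = cv2.imread(in_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        img[y:y + h, x:x + w] = cv2.GaussianBlur(
            img[y:y + h, x:x + w], (51, 51), 0)
    cv2.imwrite(out_path, img)
    return len(faces)
```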
We can look again at the probability of eating during the day, we can look at cumulative chewing curves, and we can look at microstructure metrics such as, in this case, chewing rate. We can also compute eating rate, which is inclusive of the pauses in eating, to characterize whether it was a really quick meal or a slow and relaxed one. That is the kind of information we can obtain with a wearable device.

But is it all solved? Probably not. There are a few remaining issues in dietary assessment. First of all, the complexity of real-life behavior. Frequently, when you look at food analysis and dietary analysis papers, you see images like this: nicely staged, nicely put together, very well distinguishable. But in real life, food is not staged. As I mentioned, we see a lot of eating in cars; this may be a cultural effect specific to the United States, but when we run a study in the United States, that is what we see. People may eat on the go; here is an example of a person holding a bowl in their hand and eating out of it. And there is a lot of shared-style eating, internationally and even in the United States; think about going to a restaurant where appetizers are shared. All of this creates a very complicated environment for the analysis.

And then we rely on images, but is an image a perfect means of conveying nutrient information? Here we have a glass with a white liquid. You would say it's milk, yes, but is it skim milk or whole milk, or maybe one of those plant-based milks? How do we know? We either have to ask the person eating or have some other way of determining which type of milk it was. There is a lot of interest and research in this area. People are experimenting, for example, with multispectral imaging, where you look not at the full spectrum but at isolated individual wavelengths and try to identify the response of objects at those wavelengths. You can see the difference between diet and regular soda that way, for example, or whether certain fats are present. It's a very interesting idea, but the limitation is that it requires illumination at these different wavelengths, which makes it somewhat challenging for portable applications. Folks have also published on intraoral sensing; on the right, this is from Tufts. They developed a very interesting sensor that resides on a tooth and tries to detect certain macronutrients, and they looked at alcohol. Very interesting, although the response time was probably too slow for practical application; maybe that is something that will see further development and become practical at some point.

So I'm near the end, and I want to conclude. I started with precision nutrition, and I mentioned that for precision nutrition we need to know when someone is eating, what they're eating, and potentially the manner in which they're eating. That requires multimodal measurements: we want a device that captures accurate timing, the content of consumption, and the amount of consumption, and that is a challenging problem. Based on my presentation and the history of wearables and of using machine learning and AI in dietary analysis, you can see that it is still an open problem, an open field of research with great progress.
And I hope this progress continues, but there are still issues to address, especially as they relate to real-life consumption behaviors. Another point that you probably noticed is that all of these methods rely on some sort of machine learning or artificial intelligence for analysis, because sensor-signal analysis and image analysis are impossible without these methods. And given the great advances of the past few years, we should expect to see great advances in the application of those methods to dietary analysis. Finally, my last point is that, along with everything else, ethical and privacy issues need to be addressed together with the technical aspects, because we're dealing with imaging technologies that may capture a lot of unrelated information. These issues are not only relevant to wearables used in dietary analysis; there is a great push, as I mentioned, toward augmented reality, and these issues need to be addressed as a whole. So thank you very much for your attention. I would like to acknowledge all of my collaborators; unfortunately, the list is so long that it will not fit on this slide, but I greatly appreciate all of the people who are working with me on this project. I also want to acknowledge the support that NIH, NSF, and the Gates Foundation have put into the development of the technologies described here. Thank you very much.

Thank you for that excellent presentation, Dr. Sazonov. I think it was really insightful and certainly very thought-provoking, and I think it's very relevant to the endocrine community, especially as they take care of patients who have a range of dietary-related metabolic disorders. I will start by taking a few questions from the chat. Someone is asking: what are the real-world challenges with each modality of assessment? They also asked about the cost-effectiveness of automatic ingestion monitors. I think you touched upon these two things, but certainly feel free to elaborate.

Yeah, absolutely, and this is a very important question. When you think about modalities of assessment, which I didn't cover much here, you can think about self-report: a diet diary, 24-hour recall, ASA24, and similar methods. They do work, absolutely, but they are subject to recall bias, that is, remembering what you ate, when you ate, how much you ate, and so on, so there are limitations there. Then you can think about the new generation of smartphone applications used for dietary assessment; some use AI and some don't. The limitation here is that some of them may require entering what you ate, and some may rely on artificial intelligence to recognize it. But as I tried to show, not all food consumption is conducive to self-reporting through an app; and again, this is mostly in the United States, because we don't see it, for example, in our studies in Africa. Driving is a perfect example, but there are other examples as well. And wearables also have their own challenges. As I mentioned, some wearables may not tell you what you're eating, and then you have wearables, like our Automatic Ingestion Monitor, that do tell you what you're eating, but then you have to deal with privacy and ethical issues. So it's always a balance.
Who would have imagined that nowadays everybody would have one or two cameras on them at all times, right? Obviously, we are moving toward much greater acceptance of everyday imaging, so I think in a few years we'll see much greater acceptance of imaging-based food assessment technologies.

Excellent, thank you. I have one question which I will pose to you at this point. I was particularly intrigued by the various types of sensing technologies available. At the beginning of the presentation, you talked about different aspects of precision nutrition, whether the microbiome, metabolic responses, or lifestyle factors. I wanted to get your thoughts on how you would envision integrating other lifestyle-related factors that can be captured by wearable devices, for example sleep, exercise, and many other behaviors, all of which act together with eating patterns to affect weight change or other outcomes of interest to clinicians and healthcare researchers. What are your thoughts on that?

Well, you're absolutely right; these are very important factors. My lab is part of the Dietary Assessment Center for the precision nutrition study, so I'm very familiar with the type of information that is being collected, and you're absolutely right: the goal is to collect a very comprehensive set of data, which includes lifestyle data. It includes ActiGraph data to measure activity (though I have to say we also do some experiments using our AIM to measure activity, because it has an accelerometer), so ActiGraph data for activity and sleep, and then a whole comprehensive set of assessments related to genetics, the microbiome, and essentially everything. This data set is eventually going to be available through the Researcher Workbench, available to all of us, and available for public analysis, and I'm pretty sure all of these factors will be thoroughly analyzed and the impact of each individual factor quantified in terms of how it affects health outcomes.

Thank you, excellent. There is another question in the chat. Someone is asking: the challenges are personal bias and unwillingness to change eating habits despite them being unhealthy; how can we modify that with all the data that is collected?

Well, that is a challenging issue. I'm frequently asked why a person would wear the device, and my answer is: look, some people wear fitness monitors and some don't. Some people want to know what they eat, how much they eat, and what effect it has on their health, and some people don't. So I think we have to distinguish these individual preferences and individual lifestyle goals from our ability to monitor. For some of the variables, I would say it's more cultural, where you would want a drive toward healthier foods. But obviously, through the use of sensor technology and the analysis that we do, we can highlight the impact of food. That is, again, the goal of Nutrition for Precision Health: if we can really narrow down the effects of food and say, okay, this food may have a therapeutic effect, for example, then that will be great progress toward that goal.

And if I can just share a couple of my own thoughts.
I think that, as you rightly mentioned, those who do enroll in such programs obviously have some inherent acceptance of the data that will come in and perhaps of acting upon it. So I do think there is going to be some selection bias toward the individuals who end up participating. But certainly, if somebody is unwilling, I don't think any technology can really move the needle. I did want to touch upon one more interesting subject. You talked a little bit about how the nutrition initiative, as part of the All of Us study, is at the front and center of some of these research areas. I wanted to ask your opinion on how much of this technology has looked at non-traditional Western diets, or accounts for diversity in food choices. For example, I would imagine foods in Asia may be very different, and foods in Africa may be very different, and I'm not sure how much they contribute to the data in your algorithms. So, for example, if somebody is eating noodles in a little bowl that you can't even see inside, or eating soup, what is the diversity of data in such algorithms and wearable devices? And are there plans to improve the socio-cultural diversity of such technologies?

Oh, that is very important, yes, absolutely. There are a few sides to this. First of all, this research into quantifying diet and nutrition is really international. One of the biggest AI models is actually built on Asian foods; it is one of the largest databases in existence, and it is purely Asian, with literally no American foods in it, so that is another kind of limitation. Second, we encountered this when we conducted the study in Africa, because we had to have a database of local foods, which are not necessarily present in the USDA databases. There are localized efforts to develop such databases where things like the cassava you saw, and other local staple foods, are well represented. And there is actually an international consortium that is trying to integrate all of these international databases into one comprehensive database, with a uniform structure and uniform information available across databases. So there is a significant effort going in that direction. Unfortunately, just when I need the name of that consortium, it escapes me, but it is out there; I can find it if necessary. And finally, yes, it all will need to be integrated together. In today's world the variety of food is great, and what we have in the USDA databases is not necessarily 100% representative of what is being consumed.

Yes, yes, thank you. I'm just looking in the chat to see if there are additional questions, and I don't see any. So let me just ask you one follow-up, and then maybe we can wrap up if there are no more questions. In your view, what is really the future of this field? You touched upon it, from wearable to implantable technologies, but what would be your vision, if you had a crystal ball, for using this type of technology to actually help human health?

Well, I think we'll see a lot of development on the AI side of things, which will probably originate from image analysis. We'll see development in terms of trying to make this real-time; to be honest, I'm working on this, and I showed some of it.
We already have a real-time loop with our device, where we can see what a participant is eating and provide feedback a few seconds later. And if big tech gets it right with augmented reality, you may see augmented-reality applications that guide your nutritional choices right before you make them. Again, not possible in all situations, but that is definitely within the realm of reality. We will also see some sensor technology development. Just to give you an example, the sensor we use to measure distance to the food is called a time-of-flight sensor, and Apple is now placing a time-of-flight camera on some of their phones, so they already have the capability of measuring distances, and not just to one point but to multiple points, which would be great for things such as portion-size estimation. So a combination of all of these directions should hopefully improve our ability to estimate what people are eating.

Great. Again, thank you so much for your great thoughts and a great presentation. I think we can adjourn at this point, but thank you again for your insightful talk.

Thank you very much for the...
Video Summary
Dr. Eduard Sazonov's presentation focused on the intersection of wearable sensors, artificial intelligence, and precision nutrition. Precision nutrition aims to provide personalized dietary recommendations by integrating data from genetics, microbiomes, metabolomics, and lifestyle factors. Dr. Sazonov explored the challenges and advancements in detecting dietary behaviors using wearable devices, like automatic ingestion monitors, and discussed various modalities that measure eating habits, such as swallowing, chewing, and hand-to-mouth movements.

His work, particularly with the Automatic Ingestion Monitor (AIM), highlights how sensors and AI help in identifying eating patterns and estimating consumption without relying heavily on self-reporting. Sazonov emphasized the technological advancements in real-time data processing and the integration of dietary data with other lifestyle factors, like physical activity and sleep, to improve health outcomes.

He also addressed challenges like participant compliance, the diversity of global diets, and privacy concerns associated with wearable technologies. The future of this field, as envisioned by Sazonov, hinges on further AI advancements and potential augmented reality applications to guide nutritional choices actively. This integrated approach to dietary analysis can significantly impact healthcare and patient management, particularly for metabolic disorders.
Asset Subtitle
Edward Sazonov, PhD
University of Alabama, Computer Laboratory of Ambient and Wearable Systems
Keywords
wearable sensors
artificial intelligence
precision nutrition
dietary behaviors
Automatic Ingestion Monitor
real-time data processing
eating patterns
augmented reality
metabolic disorders