{"title":"2018 IEEE International Conference on Multimedia and Expo, ICME 2018, San Diego, CA, USA, July 23-27, 2018","authors":"","doi":"10.1109/icme41493.2018","DOIUrl":"https://doi.org/10.1109/icme41493.2018","url":null,"abstract":"","PeriodicalId":90694,"journal":{"name":"Proceedings. IEEE International Conference on Multimedia and Expo","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46966850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FOOD IMAGE ANALYSIS: SEGMENTATION, IDENTIFICATION AND WEIGHT ESTIMATION.","authors":"Ye He, Chang Xu, Nitin Khanna, Carol J Boushey, Edward J Delp","doi":"10.1109/ICME.2013.6607548","DOIUrl":"10.1109/ICME.2013.6607548","url":null,"abstract":"<p>We are developing a dietary assessment system that records daily food intake through the use of food images taken at a meal. The food images are then analyzed to extract the nutrient content in the food. In this paper, we describe the image analysis tools to determine the regions where a particular food is located (image segmentation), identify the food type (feature classification) and estimate the weight of the food item (weight estimation). An image segmentation and classification system is proposed to improve the food segmentation and identification accuracy. We then estimate the weight of food to extract the nutrient content from a single image using a shape template for foods with regular shapes and area-based weight estimation for foods with irregular shapes.</p>","PeriodicalId":90694,"journal":{"name":"Proceedings. IEEE International Conference on Multimedia and Expo","volume":"2013 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2013-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5448794/pdf/nihms823616.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35054345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"QUANTIFYING ATYPICALITY IN AFFECTIVE FACIAL EXPRESSIONS OF CHILDREN WITH AUTISM SPECTRUM DISORDERS.","authors":"Angeliki Metallinou, Ruth B Grossman, Shrikanth Narayanan","doi":"10.1109/ICME.2013.6607640","DOIUrl":"https://doi.org/10.1109/ICME.2013.6607640","url":null,"abstract":"<p>We focus on the analysis, quantification and visualization of atypicality in affective facial expressions of children with High Functioning Autism (HFA). We examine facial Motion Capture data from typically developing (TD) children and children with HFA, using various statistical methods, including Functional Data Analysis, in order to quantify atypical expression characteristics and uncover patterns of expression evolution in the two populations. Our results show that children with HFA display higher asynchrony of motion between facial regions, rougher facial and head motion, and a larger range of facial region motion. Overall, subjects with HFA consistently display a wider variability in the expressive facial gestures that they employ. Our analysis demonstrates the utility of computational approaches for understanding behavioral data and brings new insights into the autism domain regarding the atypicality that is often associated with facial expressions of subjects with HFA.</p>","PeriodicalId":90694,"journal":{"name":"Proceedings. IEEE International Conference on Multimedia and Expo","volume":"2013 ","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICME.2013.6607640","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32736407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"INTEGRATED DATABASE SYSTEM FOR MOBILE DIETARY ASSESSMENT AND ANALYSIS.","authors":"Marc Bosch, TusaRebecca Schap, Fengqing Zhu, Nitin Khanna, Carol J Boushey, Edward J Delp","doi":"10.1109/ICME.2011.6012202","DOIUrl":"https://doi.org/10.1109/ICME.2011.6012202","url":null,"abstract":"<p>Of the 10 leading causes of death in the US, 6 are related to diet. Unfortunately, methods for real-time assessment and proactive health management of diet do not currently exist. There are only minimally successful tools for historical analysis of diet and food consumption available. In this paper, we present an integrated database system that provides a unique perspective on how dietary assessment can be accomplished. We have designed three interconnected databases: an image database that contains data generated by food images, an experiments database that contains data related to nutritional studies and results from the image analysis, and finally an enhanced version of a nutritional database by including both nutritional and visual descriptions of each food. We believe that these databases provide tools to the healthcare community and can be used for data mining to extract diet patterns of individuals and/or entire social groups.</p>","PeriodicalId":90694,"journal":{"name":"Proceedings. IEEE International Conference on Multimedia and Expo","volume":"2011 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICME.2011.6012202","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72215786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cache on demand","authors":"S. Ahuja, Tao Wu, S. Dixit","doi":"10.1109/ICME.2002.1035390","DOIUrl":"https://doi.org/10.1109/ICME.2002.1035390","url":null,"abstract":"Web caching is becoming increasingly important as rich content such as streaming media gains popularity on the Web. However, conventional Web caching lacks two essential functionalities for many services. First, Web caching is largely a \"best effort\" service, lacking the capability of guaranteeing application-level QoS such as content storage that many services (including streaming media) desire. Second, standard Web caching does not ensure strong content consistency. In this paper, we develop a cache on demand (CoD) system that addresses both problems. The key components of the CoD system are an admission control mechanism that guarantees the content storage, and the CoD protocol that ensures content consistency between the origin server and the cache. CoD also allows QoS to be switched between \"guaranteed\" and \"best effort\" as needed. CoD is a flexible and effective solution for providing application layer QoS and strong consistency, and provides a new model for revenue-generating services.","PeriodicalId":90694,"journal":{"name":"Proceedings. IEEE International Conference on Multimedia and Expo","volume":"28 1","pages":"41-44 vol.2"},"PeriodicalIF":0.0,"publicationDate":"2002-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89896304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using indexing structures for resource descriptors extraction from distributed image repositories","authors":"S. Berretti, A. Bimbo, P. Pala","doi":"10.1109/ICME.2002.1035547","DOIUrl":"https://doi.org/10.1109/ICME.2002.1035547","url":null,"abstract":"Content based retrieval from distributed libraries raises new and challenging issues with respect to retrieval from a single repository. In particular, an effective management of distributed libraries develops upon three main processes: resource description (extraction of descriptors that qualify the content of a given archive), resource selection (given a user query, analyze resource descriptions and select the resources that contain relevant documents) and results merging (organize and present items returned by individual libraries). So far, these issues have been mainly addressed for text archives. We present a solution to resource descriptors extraction, developing on the use of techniques for multidimensional data indexing. In particular, we implement and compare the extraction of resource descriptors computed through two different indexing approaches; namely m-tree indexing and fuzzy clustering. Comparative results are presented for a test database of about 1000 images.","PeriodicalId":90694,"journal":{"name":"Proceedings. IEEE International Conference on Multimedia and Expo","volume":"87 1","pages":"197-200 vol.2"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74486811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Web information extraction for content augmentation","authors":"A. Janevski, N. Dimitrova","doi":"10.1109/ICME.2002.1035617","DOIUrl":"https://doi.org/10.1109/ICME.2002.1035617","url":null,"abstract":"Today, users have to cope with an overwhelming number of TV channels and Web content sources. We introduce automatic content augmentation as a novel approach to contextual information extraction on behalf of the user where the context is provided by the primary content source (i.e. TV channel) and tailored by the user's preferences. A key aspect of this approach is Web information extraction (WebIE) which automatically derives structured information from unstructured Web documents. Our system executes WebIE tasks, each an instantiation of WebIE rules, our generic document processors. We present two WebIE approaches: diffusion WebIE that crawls a wide set of Web pages and extracts information from a subset of the pertinent pages; and laser WebIE that accesses a select set of Web pages and extracts narrowly defined information. We describe the architecture and the implementation details of the system and provide detailed laser WebIE examples.","PeriodicalId":90694,"journal":{"name":"Proceedings. IEEE International Conference on Multimedia and Expo","volume":"27 1","pages":"389-392 vol.2"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74651421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Biometric applications based on handwriting","authors":"Falko Ramann, C. Vielhauer, R. Steinmetz","doi":"10.1109/ICME.2002.1035683","DOIUrl":"https://doi.org/10.1109/ICME.2002.1035683","url":null,"abstract":"A wide variety of biometric based techniques have been proposed but it is quite difficult to classify the approaches according to their application domains and to measure their functionality. Our intention is to classify today's applications in detail for one particular biometric scheme, handwriting. To give individual users with a specific application in mind orientation and a decision tool, we have built a new classification scheme and furthermore define major characteristics for each of the application classes as an evaluation matrix.","PeriodicalId":90694,"journal":{"name":"Proceedings. IEEE International Conference on Multimedia and Expo","volume":"43 1","pages":"573-576 vol.2"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74866117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Event clustering of consumer pictures using foreground/background segmentation","authors":"A. Loui, Matthieu Jeanson","doi":"10.1109/ICME.2002.1035810","DOIUrl":"https://doi.org/10.1109/ICME.2002.1035810","url":null,"abstract":"This paper describes a new algorithm to classify consumer photographs into different events when date and time information is not available. Without any information about the context of the pictures, we have to rely on the image content. Our approach involves using an efficient segmentation scheme and extraction of low-level features to detect event boundaries. Specifically, we have developed a foreground/background segmentation algorithm based on block-based clustering. This block segmentation provides less precision, but still gives good results with low computation cost. A third-party ground truth database has been created with the help of the Human Factors Laboratory at Kodak, to benchmark our approaches. Based on these results, we concluded that a simple block-based segmentation scheme performed better than the original block-based event clustering algorithm without segmentation. We believe that many improvements, especially on segmentation and feature extraction, should lead to better results in the future.","PeriodicalId":90694,"journal":{"name":"Proceedings. IEEE International Conference on Multimedia and Expo","volume":"83 1","pages":"429-432 vol.1"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73407803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User-following displays","authors":"G. Pingali, Claudio S. Pinhanez, A. Levas, R. Kjeldsen, Mark Podlaseck","doi":"10.1109/ICME.2002.1035914","DOIUrl":"https://doi.org/10.1109/ICME.2002.1035914","url":null,"abstract":"Traditionally, a user has positioned himself/herself to be in front of a display in order to access information from it. In this information age, life at work and even at home is often confined to be in front of a display device that is the source of information or entertainment. The paper introduces another paradigm where the display follows the user rather than the user being tied to the display. We demonstrate how steerable projection and people tracking can be combined to achieve a display that automatically follows the user.","PeriodicalId":90694,"journal":{"name":"Proceedings. IEEE International Conference on Multimedia and Expo","volume":"11 1","pages":"845-848 vol.1"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75604054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}