11th International Multimedia Modelling Conference: Latest Publications

Interoperability and Multimedia Archives
11th International Multimedia Modelling Conference Pub Date : 2005-01-12 DOI: 10.1109/MMMC.2005.51
Dalen Kambur, Damir Becarevic, M. Roantree
Abstract: In distributed computing systems, it is unwise to move data to the point of program code; instead, data should be processed at the point of storage. This concept is even more appropriate to multimedia repositories, where large files must be processed for each retrieval operation. Two problems with multimedia databases are that they require large volumes of disk storage and processing power, and that it is generally difficult to query video content or optimise the retrieval process. The EGTV project addresses both issues. In the first case, the data server is distributed across multiple autonomous sites, where users are permitted to modify schemas and, in some cases, the data model. In the second case, a mechanism for storing behaviour has been devised that allows functions such as retrieve_frame, retrieve_segment and retrieve_context to be implemented and stored at individual servers. This provides selective retrieval, so large data items can be processed locally to filter unwanted data before the transfer begins. In this way, the storage of operations forms the basis for optimising retrieved content and volumes.
Citations: 0
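The stored-behaviour idea above can be sketched in a few lines: operations such as retrieve_frame and retrieve_segment (names taken from the abstract) execute at the server holding the data, so only the requested portion of a large media object crosses the network. The storage layout and sizes below are hypothetical, not the EGTV implementation.

```python
# Sketch of server-side stored retrieval operations: slice locally,
# transfer only what the client asked for.

class MediaServer:
    def __init__(self):
        # A "video" is modelled here as a list of frame byte-strings.
        self.store = {}

    def put(self, video_id, frames):
        self.store[video_id] = frames

    def retrieve_frame(self, video_id, n):
        # Behaviour stored at the server: ship exactly one frame.
        return self.store[video_id][n]

    def retrieve_segment(self, video_id, start, end):
        # Ship only the requested frame range, never the whole file.
        return self.store[video_id][start:end]

server = MediaServer()
server.put("clip1", [b"frame%d" % i for i in range(1000)])

one = server.retrieve_frame("clip1", 42)        # a single frame
seg = server.retrieve_segment("clip1", 10, 20)  # ten frames
print(len(seg), "frames transferred out of", len(server.store["clip1"]))
```

The filtering happens before transfer, which is exactly the optimisation the abstract describes: unwanted data never leaves the storage site.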
Database Support for Haptic Exploration in Very Large Virtual Environments
11th International Multimedia Modelling Conference Pub Date : 2005-01-12 DOI: 10.1109/MMMC.2005.33
H. Kriegel, Peter Kunath, M. Pfeifle, M. Renz
Abstract: The efficient management of complex objects has become an enabling technology for modern multimedia information systems as well as for many novel database applications. Unfortunately, the integration of modern database systems into human-centered virtual reality applications, including multimodal simulations, fails to achieve the indispensable interactive response times. In this paper, we present an approach that achieves efficient query processing along with industrial-strength database support for real-time haptic rendering systems that compute force feedback (haptic display). Our approach externalizes and accelerates the proven main-memory Voxmap-PointShell™ (VPS) approach. We group numerous independent database queries together according to a cost model that takes into account statistical information reflecting the actual data distribution. The performance of our approach is experimentally evaluated using a realistic data set, CAR, provided by our industrial partner, a German car manufacturer. Our results show that we can achieve satisfactory rendering frame rates using the presented access techniques.
Citations: 1
A Metadata Model Supporting Scalable Interactive TV Services
11th International Multimedia Modelling Conference Pub Date : 2005-01-12 DOI: 10.1109/MMMC.2005.11
G. Durand, G. Kazai, M. Lalmas, U. Rauschenbach, P. Wolf
Abstract: In this paper, we introduce a novel metadata model for describing scalable and interactive TV services that can be enriched with supplemental multimedia information. The model allows users to access such TV services not only via their traditional TV sets, but also via additional mobile devices such as Tablet PCs or PDAs. To achieve this, we segment the traditional linear program into sub-components, while separating device-independent and device-specific metadata. A realization of this model builds on the existing standards TV-Anytime, MPEG-7 and MPEG-21. The model is a step towards the "Connected World" vision of the second specification phase of the TV-Anytime Forum.
Citations: 21
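The split between device-independent and device-specific metadata described in the abstract can be illustrated with a toy record for one programme segment. All field names and values here are hypothetical; the actual model builds on TV-Anytime, MPEG-7 and MPEG-21 descriptors, not Python dicts.

```python
# One programme segment: shared description plus per-device variants.

segment = {
    "independent": {            # valid for every device
        "title": "Evening News: Weather",
        "start": "00:12:30",
        "duration": "00:03:10",
    },
    "device_specific": {        # one variant per target device class
        "tv":     {"stream": "mpeg2_hd",    "layout": "fullscreen"},
        "pda":    {"stream": "low_bitrate", "layout": "thumbnail+text"},
        "tablet": {"stream": "mid_bitrate", "layout": "split"},
    },
}

def view_for(seg, device):
    # Merge the shared description with the variant for this device.
    return {**seg["independent"], **seg["device_specific"][device]}

print(view_for(segment, "pda"))
```

Keeping the shared part in one place is what makes the service scalable: adding a new device class means adding one variant, not duplicating the whole programme description.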
Using Games As a Means for Collaboration
11th International Multimedia Modelling Conference Pub Date : 2005-01-12 DOI: 10.1109/MMMC.2005.70
Keiran Bartlett, M. Simpson
Abstract: The availability of a good interface for online user collaboration has been a sore point for most collaboration applications to date. While MUDs, MOOs, IRC and other chat applications are well suited to impersonal communication, the meaning of a single message can often be misconstrued or misunderstood, and the effort required to learn a new application while mastering navigation in a virtual world can be difficult to overcome. The Nexus promises to aid the intuitive acts of communication, interaction and movement and, in the process, to enhance the collaboration experience for the user through the use of a game engine.
Citations: 4
Semantic Virtual Environments with Adaptive Multimodal Interfaces
11th International Multimedia Modelling Conference Pub Date : 2005-01-12 DOI: 10.1109/MMMC.2005.65
M. Gutiérrez, D. Thalmann, F. Vexo
Abstract: We present a system for real-time configuration of multimodal interfaces to Virtual Environments (VE). The flexibility of our tool is supported by a semantics-based representation of VEs. Semantic descriptors are used to define interaction devices and the virtual entities under control. We use portable (XML) descriptors to define the I/O channels of a variety of interaction devices. Semantic description of virtual objects turns them into reactive entities with which the user can communicate in multiple ways. This article gives details of the semantics-based representation and presents some examples of multimodal interfaces created with our system, including gesture-based and PDA-based interfaces, amongst others.
Citations: 36
Direct Fingerprinting on Multicasting Compressed Video
11th International Multimedia Modelling Conference Pub Date : 2005-01-12 DOI: 10.1109/MMMC.2005.35
Zheng Liu, Xue Li, Zhao Yang Dong
Abstract: A video fingerprint is a kind of digital watermark used in digital video for tracking pirate copies in a multi-user environment. Different users receive the same video with different watermarks designed to uniquely identify the designated user. As a value-added business service, fingerprinting is independent of video compression. In current fingerprinting schemes, a compressed video has to be decoded once for every user in order to add the individual fingerprint, and the video is then encoded again before dispatch. These multiple decode/re-encode operations can result in poor system performance. In this paper, we propose a new integrated fingerprinting algorithm that can be applied directly to the compressed video without decoding/re-encoding. Our experiments show that fingerprinting performance is improved with no compromise to robustness.
Citations: 1
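The gain the abstract claims comes from embedding each user's mark into the compressed representation itself, so the per-user decode/re-encode cycle disappears. A minimal sketch of that idea, assuming LSB modulation of quantized coefficients — the coefficient positions and embedding rule here are illustrative, not the paper's actual algorithm:

```python
# Embed a user ID directly into a block of quantized coefficients by
# forcing the parity of selected mid-frequency positions to the ID's bits.

def embed_fingerprint(coeffs, user_id, n_bits=16):
    out = list(coeffs)
    for i in range(n_bits):
        bit = (user_id >> i) & 1
        pos = 4 + i  # skip DC and lowest AC positions (hypothetical choice)
        if (out[pos] & 1) != bit:
            out[pos] ^= 1  # flip parity; changes the value by at most 1
    return out

def extract_fingerprint(coeffs, n_bits=16):
    uid = 0
    for i in range(n_bits):
        uid |= (coeffs[4 + i] & 1) << i
    return uid

block = list(range(64))  # one 8x8 block of non-negative quantized coefficients
marked = embed_fingerprint(block, user_id=0xBEEF)
print(hex(extract_fingerprint(marked)))  # 0xbeef
```

Because the change is made on already-quantized values, the server can multicast one stream and personalize it per user at a fraction of the cost of a full transcode, which is the performance point the paper is making.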
A Structured Document Model for Authoring Video-Based Hypermedia
11th International Multimedia Modelling Conference Pub Date : 2005-01-12 DOI: 10.1109/MMMC.2005.15
Tina T. Zhou, Jesse S. Jin
Abstract: This paper describes a new method for authoring video-based hypermedia. It defines an XML-based (Extensible Markup Language) data modeling language called HyperVideo Authoring Language (HyVAL) for constructing structured documents in which the composition of internal video objects (segments, scenes, shots, frames and visual objects in frames) and external media objects (text, audio, images, HTML pages, etc.) is specified. The model allows an author to interactively specify various internal objects and the relationships among internal and external objects. Through the language, a flexible presentation of video-based hypermedia is achieved and comprehensive viewer interactions with video objects are supported. An experiment in structuring video-based hypermedia using HyVAL in our authoring and presentation environment is also described in this paper.
Citations: 4
A Framework for Sub-Window Shot Detection
11th International Multimedia Modelling Conference Pub Date : 2005-01-12 DOI: 10.1109/MMMC.2005.7
Chuohao Yeo, Yongwei Zhu, Qibin Sun, Shih-Fu Chang
Abstract: Browsing a digital video library can be very tedious, especially with an ever-expanding collection of multimedia material. We present a novel framework for extracting sub-window shots from MPEG-encoded news video, with the expectation that this will be another tool usable by retrieval systems. Sub-window shots are also useful for tying in relevant material from multiple video sources. The system uses macroblock parameters to extract visual features, which are then combined to identify possible sub-windows in individual frames. The identified sub-windows are then filtered by a non-linear spatial-temporal filter to produce sub-window shots. By working only on compressed-domain information, the system avoids full-frame decoding of MPEG sequences and hence achieves high speeds of up to 11 times real time.
Citations: 22
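The last stage of the pipeline — turning per-frame candidate rectangles into sub-window *shots* — can be sketched with a simple temporal persistence rule. This stands in for the paper's non-linear spatial-temporal filter, whose exact form is not given in the abstract; the minimum-run threshold and rectangle encoding below are assumptions.

```python
# Keep a candidate sub-window only if the same rectangle persists for
# at least `min_run` consecutive frames; emit it as a (start, end, rect) shot.

def temporal_filter(detections, min_run=3):
    """detections: per-frame candidate rectangle (x, y, w, h) or None."""
    shots = []
    run_start, current = None, None
    for t, rect in enumerate(detections + [None]):  # sentinel closes last run
        if rect == current and rect is not None:
            continue  # run of the same rectangle keeps going
        if current is not None and t - run_start >= min_run:
            shots.append((run_start, t - 1, current))
        run_start, current = t, rect
    return shots

r = (100, 50, 320, 240)  # x, y, width, height of a candidate sub-window
frames = [None, r, r, r, r, None, r, None]
print(temporal_filter(frames))  # [(1, 4, (100, 50, 320, 240))]
```

The lone detection at frame 6 is discarded as noise — the temporal constraint is what separates a genuine embedded sub-window from a spurious single-frame match.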
Dissemination of Cultural Heritage Content through Virtual Reality and Multimedia Techniques: A Case Study
11th International Multimedia Modelling Conference Pub Date : 2005-01-12 DOI: 10.1109/MMMC.2005.36
S. Valtolina, S. Franzoni, P. Mazzoleni, E. Bertino
Abstract: This paper presents a case study of interactive digital narrative and real-time visualization of an Italian theatre of the 19th century. The case study illustrates how to integrate the traditional concepts of cultural heritage with Virtual Reality (VR) technologies, lifting virtual reconstructions of cultural sites to an exciting new edutainment level. Novel multimedia interaction devices and digital narrative representations, combined with a historically and architecturally certified environment, offer users a real-time immersive visualization in which to relive experiences of the past. Starting from studies of several projects demonstrating the great benefits of VR technologies in the cultural field, the paper describes the motivations that triggered a collaboration between the Department of Computer Science [1] and the Department of Performing Arts of the University of Milano [2] to develop this educational and entertaining system.
Citations: 27
ASAP: A Synchronous Approach for Photo Sharing across Multiple Devices
11th International Multimedia Modelling Conference Pub Date : 2005-01-12 DOI: 10.1109/MMMC.2005.21
Zhigang Hua, Xing Xie, Hanqing Lu, Wei-Ying Ma
Abstract: Digital photos have become increasingly common and popular in mobile communications. However, because these photos are captured on and distributed across various devices, new technologies are needed to facilitate sharing large image collections across devices. In this paper, we propose A Synchronous Approach for Photo sharing across multiple devices (ASAP). ASAP provides a hierarchical two-level synchronization scheme, at the image level and the region level. In ASAP, a user's interaction with any one device automatically leads to a series of synchronous updates on the other devices. ASAP thus simultaneously presents similar images across devices, allowing automatic synchronization of images based on user interactions. Experimental evaluations indicate that it effectively improves the cross-device image browsing experience for users.
Citations: 2
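The core mechanism — one interaction fanning out as synchronous updates to every other device — is a broadcast/observer pattern. A minimal sketch, assuming hypothetical class and method names; the paper's image- and region-level similarity computation is replaced here by simply forwarding the selected image:

```python
# One user action on any device is broadcast so the other devices update
# their views in step with it.

class SyncHub:
    def __init__(self):
        self.devices = []

    def register(self, device):
        self.devices.append(device)

    def broadcast(self, source, image):
        for d in self.devices:
            if d is not source:
                d.on_sync(image)

class Device:
    def __init__(self, name, hub):
        self.name, self.hub = name, hub
        self.current = None
        hub.register(self)

    def select(self, image):
        # Image-level interaction: triggers synchronous updates elsewhere.
        self.current = image
        self.hub.broadcast(self, image)

    def on_sync(self, image):
        # A real system would instead display the most similar image
        # from this device's own collection.
        self.current = image

hub = SyncHub()
pda, pc = Device("PDA", hub), Device("PC", hub)
pda.select("beach.jpg")
print(pc.current)  # beach.jpg
```

The hierarchical part of ASAP would refine this: an image-level selection syncs whole images, while a region-level interaction (zooming into part of a photo) would broadcast the region of interest instead.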