{"title":"Fast IDCT implementation on hardware accelerator devices","authors":"A. Silva, O. Nunes, C. Aragao, A. Navarro","doi":"10.1109/ISCE.2004.1375992","DOIUrl":"https://doi.org/10.1109/ISCE.2004.1375992","url":null,"abstract":"Most of hyhrid morion compensated video codiiig staudards rise a well known discrete cosirie traiisform (DCT) at the encoder to remove reduridancy from video raudom processes. An inverse operurion takes place at the decoder. As all crrlculutioris ore done in floating point. some carefully design is nerded when calculations are implemented in j k e d p i n / circrrirs. This paper proposes a hish peformunce IDCT algorithm and its implementation usiiig a FPGA. IDCT is one of the most compuration-iiitensive part.s .f the video coding process. For this reason. a fus t hardware based IDCT iriiplementution is crucial to speed-up video processing. I . Index Terms IDCT, Fixed-point Processing, Hardware Accelerators, FPGA","PeriodicalId":169376,"journal":{"name":"IEEE International Symposium on Consumer Electronics, 2004","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117153495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hard disk drive enhancements for consumer electronics products","authors":"D. Singh, V. lyer","doi":"10.1109/ISCE.2004.1376015","DOIUrl":"https://doi.org/10.1109/ISCE.2004.1376015","url":null,"abstract":"The use of hard disk drives for storing video and gaming content is an emerging,field. There is a lack qfjield data regarding the drive reliability in this new application. The drive industry is experienced in designing drives for the desktop and server arena where error,fiee data is of paramount importance. The usage profile and associated error recovety is signipcantlv diferent ,for the consumer electronics industW. Factors affecting reliability and customer sati.Tfaction are the acoustics noise, reliabiliy at elevated temperattire and ability to prorect against shock and vibration while delivering interrupt ,pee content. The @ects of acoustics, thermak and mechanical design is reviewed. Methods OJ measuring sound pouer and judging sound quality is described. The changes in drive design,for the consumer electronics environment are also described. ATA: A T Attachment, DE: Digital Entertainment. ECC: Error Correction Code, DUT: Device Under Test, FDB: Fluid Dynamic Beuring, GB: Gigubyte, HDD: Hard Disk Drive. STB: Set TOQ B m . Index Terms HDDs far AV applications, AV performance, HDD reliability, Performance Evaluation.","PeriodicalId":169376,"journal":{"name":"IEEE International Symposium on Consumer Electronics, 2004","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131750223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Chroma error analysis and compensation for heterogeneous video transcoding","authors":"Yu Liu","doi":"10.1109/ISCE.2004.1375962","DOIUrl":"https://doi.org/10.1109/ISCE.2004.1375962","url":null,"abstract":"D@rent video coding standards implement motion cawpensation (MC) algorithm with fine d@zrcnce. This niiaiice may introdrice serioiis chroma signal error. in heterogeneoia video tran.scoding. I n t1ri.s paper. the Chroma Error Drift is tested and analwed. We also proposed a transcoding architectiiw to currect this error. According to o w 1e.V resrdt. this algorithai can settle the Chroma Ewor D f q i witk high qnality and ejficiency'. Chroma. I n d e x Terms -Transcoding, Motion Compensation,","PeriodicalId":169376,"journal":{"name":"IEEE International Symposium on Consumer Electronics, 2004","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132819366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new bit estimation scheme for H.264 rate control","authors":"Hongtao Yu, F. Pan, Zhiping Lin","doi":"10.1109/ISCE.2004.1375976","DOIUrl":"https://doi.org/10.1109/ISCE.2004.1375976","url":null,"abstract":"~ Rate control is a critical isstre in ff.264 video coding standard. This paper aims at improving video qualit?; at .scene changes and high motions hv acccrrately estimating the target hits in H.264 rate control. We define a neu, measure. nameLv motion comple.rit?;, to represent the amount oJ' motion content.s between two consecutive frames. Mulion complexit?; i s closely correlated to the bits !ha/ have been allocated to the p?evious/v encoded frames. Based on motion comple.riv. we propose a new and simple scheme to estimate the target bits in rate control. Experimental results show that our bit estimation scheme can effectively reduce /he sharp drops of peak signal-to-noi.se ratio (PSNR) ai scene changes and high motions as compared with the H.264 proposal'. alphabetical order, separated by commas. Index Terms Ahnut four key words or phrases in","PeriodicalId":169376,"journal":{"name":"IEEE International Symposium on Consumer Electronics, 2004","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133881884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel data fusion technique for imaging devices","authors":"A. Castorina, A. Capra, A. Bruna, S. Battiato","doi":"10.1109/ISCE.2004.1375924","DOIUrl":"https://doi.org/10.1109/ISCE.2004.1375924","url":null,"abstract":"1 A. Bruna, A. Capra and A. Castorina work at STMicroelectronicsAST Catania Lab Catania, Italy (email name.surname@st.com) 2 Sebastiano Battiato works at Universita di Catania, Dipartimento di Matematica ed Informatica Catania, Italy, (email: battiato@dmi.unict.it) Abstract — The paper presents a complete system for building an improved picture with greater high dynamic range by using different pictures of the same scene acquired under different exposure settings. The image data fusion is achieved by merging the original data weighting each single contribute on pixel basis by suitable data function. Experiments confirm the effectiveness of such approach.","PeriodicalId":169376,"journal":{"name":"IEEE International Symposium on Consumer Electronics, 2004","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133268862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Speech interactive agent system for car navigation using embedded ASR/TTS and DSR","authors":"Heun-Ji Lee, O. Kwon, Hanseok Ko","doi":"10.1109/ISCE.2004.1376022","DOIUrl":"https://doi.org/10.1109/ISCE.2004.1376022","url":null,"abstract":"This paper presents an efficient speech interactive agent rendering smooth car navigation and Telematics scrvices. by employing embedded automatic speech recognition (eASR), distributed speech recognition (DSR) and cmbeddcd text-to-speech (eTTS) modulcs, all while enabling safe driving. A speech interactive agcnt is essentially a conversational tool providing command and control functions to drivers such as enabling navigation task, audiolvideo manipulation, and E-commerce services through natural voicclresponse interactions between user and interface. To cope with the multiplc random inputs from mute buttons, hands-free buttons, push-to-talk buttons and events occurred by service applications on car navigation system, this provides resource ncgotiation rulcs using priority control based on inter-process communication, speech intcractivc helper function, multi-thread process and cxception handling. In addition, involved hardware resources are often limitcd and intemal comniunication protocols are complex to achieve real time responses. Thus, the hardware dependent architectural and algorithmic code optimization is applied to improve the perfomlance. The proposed system is tested and optimized on real car environments '. 
Index Terms About four key words o r phrases in alphabetical order, separated by commas.","PeriodicalId":169376,"journal":{"name":"IEEE International Symposium on Consumer Electronics, 2004","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116290379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Description of audiovisual virtual 3D scenes: MPEG-4 perceptual parameters in the auditory domain","authors":"A. Dantele, U. Reiter","doi":"10.1109/ISCE.2004.1375910","DOIUrl":"https://doi.org/10.1109/ISCE.2004.1375910","url":null,"abstract":"A high level of immei-siwi cart he provided f o r the user of virtrial irudiovis~rul erivirorimeiits when sorrnd and visuirl irnlwessiori get coordinated on 11 high quality level. Tlierefore. a coniprehensive scene rlescription Iangrruge is rieeiieil for both. the auditory and the visital purt. The mrrltiriredia sr(rfiilard MPEG-4 provides a powerfiil tool-set f o r the sceiie decription of 2D ond SD virtiiiil environments. fiir the undio part, apart from a coriventiorial yhysicul description, u novel approach is available which is based on perceptual pnrameters which hove been derived from psycho-acoustic e.rperirnents. The practical qualijcarion of this method is discussed when applied to auditory und audiovisual 30 scenes. Enhancements of-e proposed to an cxample application of the perceptrial upproacli which is included in the MPEG-4 stanrlurd arid an implementution f o r 30 rrrrdio rendering is introduced. Index Terms Auditory Scene Description, MPEG-4, Perceptual Parameters, Virtual Acoustics 1. AUDIOVISUAL SCENE DESCRIPTION I MPEG-4 Moving Picture Expens Group T E P E G ) has established novel approaches for the coding of multimedia content in the international standard MPEG-4. Auditory, visual and other content is subdivided into media objects which together build a 2D or 3D scene. Thus the most efficient coding scheme for each object can be chosen according to its type of media, e.g. video, audio, graphics, etc. [I]. For the combination of the objects MPEG-4 provides a powerful tool-set for scene description, the so-called BIFS (Binary Format for Scene Description) [ 2 ] . Here all the elements describing media objects and their properties are put together as nodes in a scene graph. 
The resulting structure reflects the mutual dependency of the single objects. This concept is based on the scene graph of the Virtual Reality Modeling Language (VRML) standard [3]. The audio part of this scene description (AudioBIFSj allows to specify the behavior of sound emitting objects in the scene (e.g. their position, level, directivity). These basic fuuctionalities have been extended in version 2 of AudioBIFS where new nodes. mainly for virtual acoustics in a 3D ‘This work was conductcd in the research group IAVAS (Intcmztivu AudioVirual Application Systmmr) which i s funded by lhr Thuringim Minisuy 01 Scicnce. Resrmh and thc Ans. Erlun. Germany. Andrcns Dnnlele and Ulnch Rcitcr are with 1hr Institute of Media Technology at 1he Technischc Univcrsicil Ilmcnau. 0.98614 Ilmenau. Germany (e-mail: andruas.dantrIr@lu-iImennu.dc. uhch.ruitcr@tuilnlmnu.duJ. environment, have been added [4]. These are often referred to as Advanced AudioBlFS (AABIFS) and are of main interest for the work described here. In general, the auralization of virtual scenes not only has to reproduce sound sources which are placed in the scenery but also to add ambient sound effects like reverberation. Thus the user can feel the surrounding virtual space by listening to the acoustic cues","PeriodicalId":169376,"journal":{"name":"IEEE International Symposium on Consumer Electronics, 2004","volume":"30 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124616184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extension of the depth of field using multi-focus input images","authors":"A. Castorina, A. Capra, S. Curti, M. La Cascia, V. Lo Verde","doi":"10.1109/ISCE.2004.1375923","DOIUrl":"https://doi.org/10.1109/ISCE.2004.1375923","url":null,"abstract":"","PeriodicalId":169376,"journal":{"name":"IEEE International Symposium on Consumer Electronics, 2004","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115516267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A light-weight service discovery framework for home network","authors":"Wei Dong, Shiyuan Yang","doi":"10.1109/ISCE.2004.1375996","DOIUrl":"https://doi.org/10.1109/ISCE.2004.1375996","url":null,"abstract":"In this paper. a light-weight Service Di.scovery Framework (L WSDF) is proposed f i r Control Nehwrk in home to realize r e d Network Plug-and-Pla.v. The supporting oivironmmt. address allocution scheme. Device Object Model and service di.scuvery mechanism of that f i amewrk are descrihd in details. A simple exemplaiy implementation show that this technolim can great/v simplifi ranfi,nnratiiii and maintenance work for light-weight Control Nehvork.y.' Index Terms -Home Network, Service Discovery Framework, Jini, UPnP, Plug-and-Play","PeriodicalId":169376,"journal":{"name":"IEEE International Symposium on Consumer Electronics, 2004","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121939873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MV-based adaptive transcoding technique for reduced spatial resolution","authors":"Yan Xu, Qinggang Meng, Yu Liu, Ying Guo, Guiling Li","doi":"10.1109/ISCE.2004.1375961","DOIUrl":"https://doi.org/10.1109/ISCE.2004.1375961","url":null,"abstract":"Open/oop transcoder has been known as the fu.ste.st onefiir rediicedspatial resolution. but the qualify of its oiitpiit stream i.s pour. To improve the gnality, motion vector refinement (MVR) and intra-refi.c~h (IR) techniqne are piit ,forward. In this paper, we introdnce a new scheme in which each gronp IJ/ inacrohlr~cks (COMB, the soiirce neighboring macrohlncb fur dow-.sampling) con choose one strirctio-e for transcoding: openloop, MVR. or IR according to their motion vectors' condition. E-rperimental r.es~rlts how that this scheme is more eflicient than the architectirre onlv with IR or MVR'. lndex Terms down-sampling, Euclidean distance, motion vector, transcoding","PeriodicalId":169376,"journal":{"name":"IEEE International Symposium on Consumer Electronics, 2004","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122009787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}