<p><b>Front Matter</b></p>
<p>The European Association for Computer Graphics 46<sup>th</sup> Annual Conference</p>
<p>London, UK</p>
<p><b>Full Papers Chairs</b></p>
<p>Angela Dai (Technical University of Munich, Germany)</p>
<p>Adrien Bousseau (INRIA, Université Côte d'Azur, France)</p>
<p><b>Conference Chairs</b></p>
<p>Niloy Mitra (University College London and Adobe Research, UK)</p>
<p>Tobias Ritschel (University College London, UK)</p>
<p>Published by</p>
<p><i>The Eurographics Association and John Wiley & Sons Ltd.</i> in <i>Computer Graphics Forum</i>, Volume 44 (2025), Issue 2, ISSN 1467-8659</p>
<p><b>STARs Chairs</b></p>
<p>Yulia Gryaditskaya (Adobe Research, UK)</p>
<p>Pooran Memari (CNRS, LIX, Ecole Polytechnique, Inria, IP Paris, France)</p>
<p><b>Tutorials Chairs</b></p>
<p>Rafał Mantiuk (University of Cambridge, UK)</p>
<p>Klaus Hildebrandt (TU Delft, The Netherlands)</p>
<p><b>Short Papers Chairs</b></p>
<p>Duygu Ceylan (Adobe Research, UK)</p>
<p>Tzu-Mao Li (University of California, San Diego, US)</p>
<p><b>Education Papers Chairs</b></p>
<p>Rafael Kuffner dos Anjos (University of Leeds, UK)</p>
<p>Karina Rodriguez Echavarria (University of Brighton, UK)</p>
<p><b>Posters Chairs</b></p>
<p>Tobias Günther (Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany)</p>
<p>Zahra Montazeri (University of Manchester, UK)</p>
<p><b>Doctoral Consortium Chairs</b></p>
<p>Lingjie Liu (University of Pennsylvania, US)</p>
<p>Gurprit Singh (MPI Informatik, Germany)</p>
<p><b>Diversity Chairs</b></p>
<p>Noura Faraj (Université de Montpellier, France)</p>
<p><b>Sustainability</b></p>
<p>Dennis Bukenberger (Technical University of Munich, Germany)</p>
<p>This issue of Computer Graphics Forum contains the technical full papers program of the Eurographics Association's 46th annual conference, held in London, England, from 12 to 16 May 2025. The annual Eurographics conference presents a unique opportunity to showcase outstanding technical contributions in computer graphics. The full papers selected for publication in the Computer Graphics Forum journal are arguably the most prestigious feature of the conference.</p>
<p>The technical paper selection process involved a group of 98 experts forming the International Program Committee (IPC). We invited experts who had not served on the IPC for more than two consecutive years, so that the committee is regularly renewed. The IPC members covered a diverse range of research subareas in computer graphics.</p>
<p>We received a total of 211 full submissions, six of which were desk-rejected because they were out of scope or because of plagiarism or double-submission issues. A sorting committee, consisting of the two Chairs and six additional members, then assigned each paper to two IPC members as primary and secondary reviewers, with at most five papers per member, taking into account the IPC members' preferences, expertise, and conflicts, as well as automatically computed matching scores between IPC members and submitted papers. The primary and secondary reviewers in turn invited three additional tertiary reviewers for each submission.</p>
<p>After the initial five reviews per submission were collected, the authors had five days to consult these reviews and write a 1000-word rebuttal, addressing key questions and potential misinterpretations. Fourteen submissions were withdrawn by their authors, who decided to forgo the rebuttal.
Finally, all reviewers assigned to a paper read the rebuttal and all reviews, and together reached an initial decision.</p>
<p>This year, following an established tradition that started in 2012 and has been improved continuously through the years, all IPC members participated in a one-week virtual asynchronous meeting, where the discussions among IPC members leading to the final decisions were conducted via a bulletin board and other means of personal communication. New this year, the six members of the sorting committee also acted as moderators of the discussions. This process led to extensive discussions where papers and reviews were debated, involving other IPC members as extra readers when needed. Each paper had a public discussion board, and every IPC member contributed to the discussions where they felt competent.</p>
<p>All papers conditionally accepted with minor revisions went through a short second review cycle, with evaluations from the primary reviewer, and sometimes the secondary reviewer, before final acceptance.</p>
<p>In the end, 75 of the 205 valid submissions were accepted with minor revisions, for a 36.6% acceptance rate, while 9 were recommended for a fast-track review process with major revisions, to be considered for publication in a future issue of Computer Graphics Forum. This year we had papers on a diverse range of topics, including generative modeling of images, videos and 3D content, machine learning, image and video editing, geometry processing, physically-based and non-photorealistic rendering, neural rendering, material appearance and texture, character animation, digital avatars, motion reconstruction, physical simulation, visualization, virtual reality, and digital fabrication.</p>
<p>All accepted full papers are published in the Computer Graphics Forum journal. It is worth noting that, for all submissions, conflicts of interest were managed at all levels, from reviewers, committee members, the advisory board, and the best paper committee up to the chairs. The review process was double-blind for tertiary reviewers and single-blind for primary and secondary IPC members; whenever the original set of reviewers could not reach a decision, additional reviewers were invited to perform a full review and assist the decision process. Best papers were selected by a dedicated awards committee, which chose among the top 19 papers based on overall review scores.</p>
<p>We would like to thank everyone who made this possible. First and foremost, we are grateful to all the members of the IPC, who dedicated a remarkable amount of their time to finding tertiary reviewers, reviewing and discussing papers, and subsequently shepherding the accepted papers through the minor revision cycle. We wish to thank all the reviewers, who provided more than 1000 high-quality and thoughtful reviews, and, of course, all the authors for their efforts in preparing and revising the submitted papers. We are especially grateful to Michael Wimmer, who shared with us insights from previous years and was indefatigable in his help and assistance. We would like to express our strong appreciation to the sorting committee for their help in assigning the papers and monitoring the discussions, and to the advisory board for their guidance on the overall reviewing process.
Last but not least, we would like to thank Stefanie Behnke from Eurographics Publishing for her outstanding support with SRM functionality, and for her constant responsiveness, which was key to the successful outcome of the paper selection process.</p>
<p>We are very happy to present the full paper proceedings of Eurographics 2025. We believe that these papers reflect the extraordinary variety of computer graphics research and its best contributions. It was both an honor and a pleasure for us to lead this selection process, and we hope that you will find both the papers and the entire conference thought-provoking and inspiring for your future endeavors.</p>
<p>EG 25 Full Papers Co-Chairs</p>
<p><b>Alexa, Marc</b></p><p>TU Berlin</p><p><b>Drettakis, George</b></p><p>Inria</p><p><b>Sorkine-Hornung, Olga</b></p><p>ETH Zürich</p><p><b>Theobalt, Christian</b></p><p>Max Planck Institute for Informatics</p><p><b>Wimmer, Michael</b></p><p>TU Wien</p><p><b>Beeler, Thabo</b></p><p>Google</p><p><b>Bender, Jan</b></p><p>RWTH Aachen University</p><p><b>Bommes, David</b></p><p>University of Bern</p><p><b>Jarabo, Adrian</b></p><p>Meta Reality Labs Research</p><p><b>Kim, Min H.</b></p><p>KAIST</p><p><b>Thies, Justus</b></p><p>TU Darmstadt</p><p><b>Alghofaili, Rawan</b></p><p>University of Texas at Dallas</p><p><b>Babaei, Vahid</b></p><p>MPI</p><p><b>Baek, Seung-Hwan</b></p><p>POSTECH</p><p><b>Bærentzen, Jakob Andreas</b></p><p>TU Denmark</p><p><b>Barla, Pascal</b></p><p>Inria</p><p><b>Bi, Sai</b></p><p>Adobe</p><p><b>Billeter, Markus</b></p><p>University of Leeds</p><p><b>Botsch, Mario</b></p><p>TU Dortmund</p><p><b>Bruckner, Stefan</b></p><p>University of Bergen</p><p><b>Campen, Marcel</b></p><p>Osnabrück University</p><p><b>Casas, Dan</b></p><p>Universidad Rey Juan Carlos</p><p><b>Castellani, Umberto</b></p><p>University of Verona</p><p><b>Chaine, Raphaelle</b></p><p>Université Claude Bernard Lyon 1</p><p><b>Chandran, Prashanth</b></p><p>Disney Research</p><p><b>Chapiro, Alexandre</b></p><p>Meta</p><p><b>Chu, Mengyu</b></p><p>Peking University</p><p><b>Cordonnier, Guillaume</b></p><p>Inria, Université Côte d'Azur</p><p><b>Daviet, Gilles</b></p><p>NVIDIA</p><p><b>Deng, Zhigang</b></p><p>University of Houston</p><p><b>Didyk, Piotr</b></p><p>University of Lugano</p><p><b>Doggett, Michael</b></p><p>Lund University</p><p><b>Faraj, Noura</b></p><p>Université de Montpellier – LIRMM</p><p><b>Ferguson, Zachary</b></p><p>CLO Virtual Fashion</p><p><b>Fu, Hongbo</b></p><p>The Hong Kong University of Science and Technology</p><p><b>Gain, James</b></p><p>University of Cape Town</p><p><b>Garces, Elena</b></p><p>Adobe</p><p><b>Gingold, Yotam</b></p><p>George Mason University</p><p><b>Gobbetti, Enrico</b></p><p>CRS4</p><p><b>Golyanik, Vladislav</b></p><p>MPI for Informatics</p><p><b>Groueix, Thibault</b></p><p>Adobe</p><p><b>Günther, Tobias</b></p><p>FAU Erlangen-Nuremberg</p><p><b>Heide, Felix</b></p><p>Princeton University</p><p><b>Henzler, Philipp</b></p><p>Google</p><p><b>Hu, Shi-Min</b></p><p>Tsinghua University</p><p><b>Huang, Qixing</b></p><p>UT Austin</p><p><b>Ju, Tao</b></p><p>Washington University in St.
Louis</p><p><b>Mo, Kaichun</b></p><p>NVIDIA</p><p><b>Leake, Mackenzie</b></p><p>Adobe</p><p><b>Lee, Seungyong</b></p><p>POSTECH</p><p><b>Lefebvre, Sylvain</b></p><p>Inria</p><p><b>Leimkühler, Thomas</b></p><p>MPI Informatik</p><p><b>Lensch, Hendrik</b></p><p>University of Tübingen</p><p><b>Li, Changjian</b></p><p>University of Edinburgh</p><p><b>Li, Dingzeyu</b></p><p>Adobe</p><p><b>Li, Lei</b></p><p>Technical University of Munich</p><p><b>Li, Minchen</b></p><p>Carnegie Mellon University</p><p><b>Livesu, Marco</b></p><p>IMATI CNR</p><p><b>Martín, Daniel</b></p><p>Universidad de Zaragoza</p><p><b>Mellado, Nicolas</b></p><p>CNRS, IRIT, Université de Toulouse, France</p><p><b>Musialski, Przemyslaw</b></p><p>New Jersey Institute of Technology</p><p><b>Oliveira, Manuel M.</b></p><p>UFRGS</p><p><b>Pajarola, Renato</b></p><p>University of Zurich</p><p><b>Parakkat, Amal Dev</b></p><p>Institut Polytechnique de Paris</p><p><b>Paschalidou, Despoina</b></p><p>Stanford University</p><p><b>Peers, Pieter</b></p><p>College of William & Mary</p><p><b>Pelechano, Nuria</b></p><p>Universitat Politècnica de Catalunya</p><p><b>Philip, Julien</b></p><p>Netflix Eyeline Studios</p><p><b>Pirk, Sören</b></p><p>Google</p><p><b>Qi, Anran</b></p><p>Inria, Université Côte d'Azur</p><p><b>Ren, Jing</b></p><p>ETH Zurich</p><p><b>Rushmeier, Holly</b></p><p>Yale</p><p><b>Sawhney, Rohan</b></p><p>NVIDIA</p><p><b>Schreck, Camille</b></p><p>Inria Nancy</p><p><b>Sellán, Silvia</b></p><p>University of Toronto</p><p><b>Sharf, Andrei</b></p><p>Ben Gurion University</p><p><b>Sharp, Nicholas</b></p><p>NVIDIA</p><p><b>Sintorn, Erik</b></p><p>Chalmers University</p><p><b>Skouras, Melina</b></p><p>INRIA</p><p><b>Smirnov, Dmitry</b></p><p>Netflix</p><p><b>Stamminger, Marc</b></p><p>Friedrich-Alexander-Universität</p><p><b>Stein, Oded</b></p><p>University of Southern California</p><p><b>Steinberger, Markus</b></p><p>Graz University of Technology, Huawei Technologies</p><p><b>Sueda, Shinjiro</b></p><p>Texas A&M University</p><p><b>Sung, Minhyuk</b></p><p>KAIST</p><p><b>Tan, Ping</b></p><p>The Hong Kong University of Science and Technology</p><p><b>Teschner, Matthias</b></p><p>University of Freiburg</p><p><b>Tong, Xin</b></p><p>Microsoft Research Asia</p><p><b>Uy, Mikaela Angelina</b></p><p>Stanford University</p><p><b>Vaxman, Amir</b></p><p>The University of Edinburgh</p><p><b>Wang, Beibei</b></p><p>Nanjing University</p><p><b>Wang, Charlie C. 
L.</b></p><p>The University of Manchester</p><p><b>Wang, Peng-Shuai</b></p><p>Peking University</p><p><b>Wang, Tuanfeng Y.</b></p><p>Adobe</p><p><b>Wang, Wenping</b></p><p>Texas A&M</p><p><b>Wang, Zeyu</b></p><p>The Hong Kong University of Science and Technology, Guangzhou</p><p><b>Weber, Ofir</b></p><p>Bar-Ilan University</p><p><b>Wei, Li-Yi</b></p><p>Adobe</p><p><b>Weyrich, Tim</b></p><p>Friedrich-Alexander-Universität Erlangen-Nürnberg</p><p><b>Wu, Kui</b></p><p>LightSpeed Studios</p><p><b>Wyman, Chris</b></p><p>NVIDIA</p><p><b>Xu, Kai</b></p><p>National University of Defense Technology</p><p><b>Yan, Ling-Qi</b></p><p>UC Santa Barbara</p><p><b>Yang, Yin</b></p><p>The University of Utah</p><p><b>Zhang, Biao</b></p><p>KAUST</p><p><b>Zhou, Yang</b></p><p>Adobe</p><p><b>Zhu, Bo</b></p><p>Dartmouth College</p><p><b>Zhu, Junqiu</b></p><p>UC Santa Barbara</p><p><b>Zint, Daniel</b></p><p>New York University</p><p>Agus, Marco</p><p>Aksoy, Yagiz</p><p>Alzayer, Hadi</p><p>Amenta, Annamaria</p><p>Ando, Ryoichi</p><p>Aristidou, Andreas</p><p>Ashraf, Maliha</p><p>Assarsson, Ulf</p><p>Attene, Marco</p><p>Bächer, Moritz</p><p>Bahat, Yuval</p><p>Bahmani, Sherwin</p><p>Bang, Seungbae</p><p>Bangaru, Sai</p><p>Banterle, Francesco</p><p>Barczak, Joshua</p><p>Barrera-Machuca, Mayra</p><p>Barthe, Loïc</p><p>Basri, Ronen</p><p>Basset, Jean</p><p>Batty, Christopher</p><p>Bauer, Frank</p><p>Belyaev, Alexander</p><p>Bemana, Mojtaba</p><p>Ben-Chen, Mirela</p><p>Benes, Bedrich</p><p>Benjamin, Juanita</p><p>Bermano, Amit Haim</p><p>Bernard, Florian</p><p>Bharadwaj, Shrisha</p><p>Bian, Wenjing</p><p>Birsak, Michael</p><p>Bittner, Jiří</p><p>Boscaini, Davide</p><p>Bressa, Nathalie</p><p>Bruneton, Eric</p><p>Burley, Brent</p><p>Cabiddu, Daniela</p><p>Cao, Dongliang</p><p>Capouellez, Ryan</p><p>Cardoso, Joao</p><p>Celen, Ata</p><p>Ceylan, Duygu</p><p>Chandran, Prashanth</p><p>Chang, Pascal</p><p>Chang, Yue</p><p>Chen, Chen</p><p>Chen, He</p><p>Chen, Honglin</p><p>Chen, Jianchun</p><p>Chen, Jiong</p><p>Chen, Kenneth</p><p>Chen, Peter Yichen</p><p>Chen, Qiang</p><p>Chen, Qimin</p><p>Chen, Renjie</p><p>Chen, Wei-Yu</p><p>Chen, Wenzheng</p><p>Chen, Xin</p><p>Chen, Xuelin</p><p>Chen, Yingcong</p><p>Chen, Yun-Chun</p><p>Chen, Zhen</p><p>Cheng, Zhanglin</p><p>Choi, Myung Geol</p><p>Choi, Suyeon</p><p>Chrysanthou, Yiorgos</p><p>Chugunov, Ilya</p><p>Chung, Jiwoo</p><p>Cibulski, Lena</p><p>Ciccone, Loïc</p><p>Cieslak, Mikolaj</p><p>Clarberg, Petrik</p><p>Čmolík, Ladislav</p><p>Coiffier, Guillaume</p><p>Corman, Etienne</p><p>Corpetti, Thomas</p><p>Corsini, Massimiliano</p><p>Cosmo, Luca</p><p>Dachsbacher, Carsten</p><p>Daněček, Radek</p><p>Das, Devikalyan</p><p>Datta, Sayantan</p><p>Davis, Abe</p><p>Deng, Bailin</p><p>Deng, Qixin</p><p>Deng, Xi</p><p>Deng, Yitong</p><p>Deng, Yu</p><p>Deng, Zhigang</p><p>Diehl, Alexandra</p><p>Digne, Julie</p><p>Dischler, Jean-Michel</p><p>Dittebrandt, Addis</p><p>Dodik, Ana</p><p>Dong, Weiming</p><p>Dong, Yue</p><p>Dou, Zhiyang</p><p>Douthe, Cyril</p><p>Du, Zheng-Jun</p><p>Eboli, Thomas</p><p>Echevarria, Jose</p><p>Eisert, Peter</p><p>Fan, Deng-Ping</p><p>Fan, Zhimin</p><p>Fang, Bryant Shaoheng</p><p>Fang, Guoxin</p><p>Fang, Hao-Shu</p><p>Fei, Raymond Yun</p><p>Feng, Nicole</p><p>Feng, Weixi</p><p>Feng, Yao</p><p>Finnendahl, Ugo</p><p>Fischer, Michael</p><p>Fisher, Matthew</p><p>Fu, Qiang</p><p>Fu, Rao</p><p>Fuchs, Martin</p><p>Fudos, Ioannis</p><p>Fujiwara, Haruo</p><p>Fukusato, Tsukasa</p><p>Gal, Rinon</p><p>Ganeshan, Aditya</p><p>Gao, Lin</p><p>Gao, Maolin</p><p>Gao, Quankai</p><p>Garrido, 
Pablo</p><p>Gavriil, Konstantinos</p><p>Gavryushin, Alexey</p><p>Ghosh, Anindita</p><p>Giebenhain, Simon</p><p>Gong, Bingchen</p><p>Goswami, Prashant</p><p>Gotsman, Craig</p><p>Gousseau, Yann</p><p>Grigorev, Artur</p><p>Grittmann, Pascal</p><p>Groth, Colin</p><p>Gruson, Adrien</p><p>Gryaditskaya, Yulia</p><p>Gu, Xiaodong</p><p>Guan, Phillip</p><p>Guan, Yanran</p><p>Guehl, Pascal</p><p>Guemeli, Can</p><p>Guerrero, Paul</p><p>Guo, Chuan</p><p>Guo, Xiaohu</p><p>Guo, Yingchun</p><p>Guo, Yu-Xiao</p><p>Guthe, Michael</p><p>Habermann, Marc</p><p>Hadwiger, Markus</p><p>Hahn, David</p><p>Hähnlein, Felix</p><p>Hall, Peter</p><p>Han, Jihae</p><p>Hanika, Johannes</p><p>Hanji, Param</p><p>Hanocka, Rana</p><p>Hao, Jiang</p><p>He, Ying</p><p>Hedman, Peter</p><p>Hedstrom, Trevor</p><p>Henz, Bernardo</p><p>Herholz, Philipp</p><p>Hertz, Amir</p><p>Hertzmann, Aaron</p><p>Holdenried-Krafft, Simon</p><p>Holzschuch, Nicolas</p><p>Hou, Fei</p><p>Hou, Junhui</p><p>Hsu, Jerry</p><p>Hu, Yixin</p><p>Huang, Chun-Hao</p><p>Huang, Jin</p><p>Huang, Kemeng</p><p>Huang, Ruqi</p><p>Huang, Tianxin</p><p>Huang, Xiaolei</p><p>Hwang, Jaepyung</p><p>Ibrahim, Muhammad Twaha</p><p>Iglesias-Guitian, Jose A.</p><p>Iser, Tomáš</p><p>Ishida, Sadashige</p><p>Isogawa, Mariko</p><p>Iwai, Daisuke</p><p>Jacobson, Alec</p><p>Jaspe, Alberto</p><p>Je, Jihyeon</p><p>Jebe, Lars</p><p>Jeong, Hyeonho</p><p>Ji, Xinya</p><p>Jiang, Lihan</p><p>Jiang, Yifeng</p><p>Jiang, Ying</p><p>Jiang, Zhongshi</p><p>Jin, Xiaogang</p><p>Jin, Yuduo</p><p>Jindal, Akshay</p><p>Jones, Ben</p><p>Jones, R. Kenny</p><p>Jönsson, Daniel</p><p>Jung, Seung-Won</p><p>Kaiser, Adrien</p><p>Kalischek, Nikolai</p><p>Karunratanakul, Korrawe</p><p>Kaufmann, Manuel</p><p>Kavaklı, Koray</p><p>Keller, Marilyn</p><p>Kelley, Brendan</p><p>Kelly, Tom</p><p>Kerbl, Bernhard</p><p>Khattar, Apoorv</p><p>Kim, Dongyeon</p><p>Kim, Doyub</p><p>Kim, Seung Wook</p><p>Kim, Suzi</p><p>Klein, Jonathan</p><p>Kodnongbua, Milin</p><p>Koo, Juil</p><p>Kopanas, George</p><p>Kosinka, Jiri</p><p>Kovalsky, Shahar</p><p>Kuth, Bastian</p><p>Kwon, Mingi</p><p>Kwon, Taesoo</p><p>Lagunas, Manuel</p><p>Lai, Yu-Kun</p><p>Lalonde, Jean-François</p><p>Lan, Lei</p><p>Lanza, Dario</p><p>Larboulette, Caroline</p><p>Lavoue, Guillaume</p><p>Le, Binh</p><p>Leake, Mackenzie</p><p>Lee, Joo Ho</p><p>Lee, Sunmin</p><p>Lee, Yoonsang</p><p>Lei, Jiahui</p><p>Leimkuehler, Thomas</p><p>Lejemble, Thibault</p><p>Levi, Zohar</p><p>Levin, David</p><p>Li, Bo</p><p>Li, Manyi</p><p>Li, Tzu-Mao</p><p>Li, Xuan</p><p>Li, Yidi</p><p>Li, Yushi</p><p>Li, Zhe</p><p>Li, Zhengqin</p><p>Liang, Yiqing</p><p>Liao, Rongfan</p><p>Liao, Zhouyingcheng</p><p>Lin, Daqi</p><p>Lin, Kai-En</p><p>Lindell, David</p><p>Ling, Ben</p><p>Litalien, Joey</p><p>Liu, Chenxi</p><p>Liu, Haiyang</p><p>Liu, Haolin</p><p>Liu, Hsueh-Ti Derek</p><p>Liu, Libin</p><p>Liu, Tiantian</p><p>Liu, Yuan</p><p>Liu, Yuan</p><p>Liu, Zheng</p><p>Long, Xiaoxiao</p><p>Lu, Jiaxin</p><p>Lukac, Mike</p><p>Ly, Mickaël</p><p>Lyu, Weijie</p><p>Ma, Qianli</p><p>Ma, Xiaohe</p><p>Machado, Gustavo</p><p>Maesumi, Arman</p><p>Maggioli, Filippo</p><p>Magnet, Robin</p><p>Majercik, Alexander</p><p>Malpica, Sandra</p><p>Mancinelli, Claudio</p><p>Mao, Tianlu</p><p>Marais, Patrick</p><p>Mendiratta, Mohit</p><p>Meng, Johannes</p><p>Mercier-Aubin, Alexandre</p><p>Meric, Adil</p><p>Meyer, Mark</p><p>Michel, Élie</p><p>Miller, Bailey</p><p>Millerdurai, Christen</p><p>Min, Sehee</p><p>Mo, Haoran</p><p>Monzon, Nestor</p><p>Moon, Gyeongsik</p><p>Morrical, Nathan</p><p>Mould, David</p><p>Mousas, 
Christos</p><p>Müller, Thomas</p><p>Multon, Franck</p><p>Munkberg, Jacob</p><p>Muthuganapathy, Ramanathan</p><p>Myszkowski, Karol</p><p>Nader, Georges</p><p>Nah, Jae-Ho</p><p>Nehvi, Jalees</p><p>Nie, Yongwei</p><p>Nivoliers, Vincent</p><p>Noh,Junyong</p><p>Nöllenburg, Martin</p><p>Novak, Jan</p><p>Novello, Tiago</p><p>Nowrouzezahrai, Derek</p><p>Nuria, Pelechano</p><p>Ohrhallinger, Stefan</p><p>Olajos, Rikard</p><p>Osman, Ahmed</p><p>Ost, Julian</p><p>Otaduy, Miguel A.</p><p>Pajarola, Renato</p><p>Pajouheshgar, Ehsan</p><p>Pan, Hao</p><p>Pandey, Rohit</p><p>Panetta, Julian</p><p>Panozzo, Daniele</p><p>Papaioannou, Georgios</p><p>Park, Geon Yeong</p><p>Patashnik, Or</p><p>Patney, Anjul</p><p>Peng, Jason</p><p>Peng, Shichong</p><p>Peng, Sida</p><p>Peng, Ziqiao</p><p>Peters, Christoph</p><p>Peters, Jorg</p><p>Petrov, Dmitrii</p><p>Petrovich, Mathis</p><p>Pierson, Emery</p><p>Pietroni, Nico</p><p>Pintore, Giovanni</p><p>Pintus, Ruggero</p><p>Po, Ryan</p><p>Qian, Shenhan</p><p>Qin, Dafei</p><p>Raab, Sigal</p><p>Radl, Lukas</p><p>Raistrick, Alexander</p><p>Raj, Amit</p><p>Rakotosaona, Marie-Julie</p><p>Rao, Anyi</p><p>Rath, Alexander</p><p>Rautek, Peter</p><p>Ray, Nicolas</p><p>Reddy, Pradyumna</p><p>Reiser, Christian</p><p>Rekik Dit Nekhili, Rim</p><p>Rempe, Davis</p><p>Ren, Bo</p><p>Ren, Yingying</p><p>Ren, Yixuan</p><p>Rist, Florian</p><p>Rohmer, Damien</p><p>Roitberg, Alina</p><p>Salvati, Marc</p><p>Salvi, Marco</p><p>Sartor, Sam</p><p>Schaefer, Scott</p><p>Schmalstieg, Dieter</p><p>Schreck, Tobias</p><p>Schroeder, Craig</p><p>Schüßler, Vincent</p><p>Schweickart, Eston</p><p>Sebastien, Hillaire</p><p>Selgrad, Kai</p><p>Serifi, Agon</p><p>Serrano, Ana</p><p>Seyb, Dario</p><p>Shamir, Ariel</p><p>Shao, Tianjia</p><p>Sharma, Adwait</p><p>Sheffer, Alla</p><p>Shekhar, Sumit</p><p>Shi, Mingyi</p><p>Shi, Yujun</p><p>Shin, Joonghyuk</p><p>Shirley, Peter</p><p>Shugrina, Maria</p><p>Skarbez, Richard</p><p>Smith, Jesse</p><p>Song, Sicheng</p><p>Spurek, Przemyslaw</p><p>Stearns, Colton</p><p>Sugimoto, Ryusuke</p><p>Sun, Caroline</p><p>Sun, Qi</p><p>Sun, Weiwei</p><p>Szymanowicz, Stan</p><p>Takikawa, Towaki</p><p>Tang, Min</p><p>Tang, Yansong</p><p>Tanveer, Maham</p><p>Tatzgern, Markus</p><p>Tewari, Ayush</p><p>Theobalt, Christian</p><p>Thiery, Jean-Marc</p><p>Tian, Yapeng</p><p>Tricard, Thibault</p><p>Tseng, Ethan</p><p>Tu, Peihan</p><p>Tursun, Cara</p><p>Unterguggenberger, Johannes</p><p>Valkanas, Antonios</p><p>Villeneuve, Keven</p><p>Vouga, Etienne</p><p>W. 
Sumner, Robert</p><p>Wallner, Johannes</p><p>Wang, Arran</p><p>Wang, Bin</p><p>Wang, Bing</p><p>Wang, Chen</p><p>Wang, Hai</p><p>Wang, Jiepeng</p><p>Wang, Lu</p><p>Wang, Xiaogang</p><p>Wang, Xinpeng</p><p>Wang, Zhendong</p><p>Wang, Zirui</p><p>Warner, Jeremy</p><p>Wei, Kaixuan</p><p>Weiss, Kenneth</p><p>Weiss, Sebastian</p><p>Weiss, Tomer</p><p>Weng, Chung-Yi</p><p>Westermann, Rüdiger</p><p>Westhofen, Lukas</p><p>Williams, Niall</p><p>Wolski, Krzysztof</p><p>Wronski, Bartlomiej</p><p>Wu, Haomiao</p><p>Wu, Lifan</p><p>Wu, Rundi</p><p>Wu, Songyin</p><p>Wu, Xiaoloong</p><p>Xia, Mengqi</p><p>Xian, Liu</p><p>Xiao, Qinjie</p><p>Xie, Desai</p><p>Xie, Haoran</p><p>Xie, Haozhe</p><p>Xie, Tianyi</p><p>Xie, Zhaoming</p><p>Xing, Jiankai</p><p>Xu, Bing</p><p>Xu, Jie</p><p>Xu, Jingyi</p><p>Xu, Pei</p><p>Xu, Xiang</p><p>Xu, Xiaogang</p><p>Xu, Zexiang</p><p>Xu, Zhan</p><p>Xu, Zilin</p><p>Yan, Chuan</p><p>Yan, Kai</p><p>Yan, Siming</p><p>Yang, Guandao</p><p>Yang, Haitao</p><p>Yang, Josh</p><p>Yi, Hongwei</p><p>Yi, Li</p><p>Yi, Renjiao</p><p>Yi, Xinyu</p><p>Yoo, Seungwoo</p><p>Yoon, Jae Shin</p><p>Yu, Borou</p><p>Yu, Difeng</p><p>Yu, Emilie</p><p>Yu, Fenggen</p><p>Yu, Hongchuan</p><p>Yu, Mulin</p><p>Yu, Tao</p><p>Yuan, Yuhui</p><p>Yuchi, Huo</p><p>Yue, Yonghao</p><p>Zellmann, Stefan</p><p>Zeng, Ailing</p><p>Zeng, Chong</p><p>Zeng, Yanhong</p><p>Zeng, Zheng</p><p>Zhang, Cheng</p><p>Zhang, Chuyan</p><p>Zhang, Congyi</p><p>Zhang, Haotian</p><p>Zhang, Hongwen</p><p>Zhang, Jason Y.</p><p>Zhang, Paul</p><p>Zhang, Qing</p><p>Zhang, W.</p><p>Zhang, Xiuming</p><p>Zhang, Yuxin</p><p>Zhao, Hang</p><p>Zhao, Mingyang</p><p>Zhao, Shuang</p><p>Zheng, Shaokun</p><p>Zheng, Xinyang</p><p>Zhou, Junwei</p><p>Zhou, Kailai</p><p>Zhou, Tongyu</p><p>Zhou, Xilong</p><p>Zhou, Yang</p><p>Zhou, Yi</p><p>Zhou, Zhiqian</p><p>Zhu, Lifeng</p><p>Zibrek, Katja</p><p>Zuffi, Silvia</p>
<p>Ariel Shamir is a professor and the former Dean of the Efi Arazi School of Computer Science at Reichman University in Israel (formerly the Interdisciplinary Center). Before joining the university, he spent two years as a postdoctoral fellow at the Computational Visualization Center at the University of Texas at Austin. Over the years he has held visiting research positions at Mitsubishi Electric Research Labs (Cambridge, MA), Disney Research, MIT, and Google.</p>
<p>Ariel Shamir has been one of the most prolific authors in computer graphics in the last decade, making several pioneering contributions across a wide array of topics, including image and video processing, shape analysis, 3D modeling, fabrication and animation. Many of his algorithms integrate and are guided by human perception models, mixing art and science and helping develop ready-to-use tools. He was the senior author of the original seam carving paper (and others that followed), which has been one of the most impactful papers in image editing in the last fifteen years and quickly established a line of research on the deceptively simple problem of scaling images while adapting their content accordingly.</p>
<p>Ariel is well known for many other works such as sketch2photo, a system that allowed users to compose realistic images from simple handmade annotated sketches (roughly a decade before deep learning took off), algorithms to extract full 3D shapes from images, mesh segmentation, automatic video editing, stylization and abstraction, to name just a few.
His recent work also advances machine learning techniques.</p>
<p>Ariel is a very active member of the community, regularly serving on major program committees and on the editorial boards of many leading journals. He was Chair of the SIGGRAPH Asia Technical Papers Programme in 2024. He has received many international awards, including induction into the ACM SIGGRAPH Academy in 2024. He also maintains collaborations with several high-tech companies, both large and small, which highlights the practical angle that guides his research.</p>
<p>In summary, Ariel Shamir's exceptional contributions to Computer Graphics and Human-Computer Interaction have left an indelible mark. His innovative research, numerous accolades, and leadership in academia exemplify his dedication to advancing research, technology and education.</p>
<p>EUROGRAPHICS is extremely pleased to recognize Ariel Shamir with the 2025 Outstanding Technical Contributions Award.</p>
<p>Valentin Deschaintre receives the EUROGRAPHICS Young Researcher Award 2025. Valentin's research focuses on inverse rendering and on appearance generation, acquisition, authoring and representations for virtual environments and scene understanding. His work includes many major contributions, among them his seminal paper at SIGGRAPH 2018 on lightweight SVBRDF capture, which combined differentiable rendering with synthetic training data; the latter is now a standard for training and benchmarking.</p>
<p>Valentin worked on his PhD in Computer Science at INRIA Sophia-Antipolis, in collaboration with the Ansys affiliate Optis. His thesis received the French Computer Graphics Thesis Award and the UCA Academic Excellence Thesis Award. He continued his research at Imperial College London in 2020 before joining Adobe Research in 2021. In 2024, he was elected a EUROGRAPHICS Junior Fellow.</p>
<p>Valentin has made several important contributions to data-driven appearance acquisition and authoring, published in top venues and journals of Computer Graphics and Vision: acquisition of large surfaces (EGSR 2020), polarization-based acquisition (CVPR 2021), procedural material model creation (SIGGRAPH 2022, 2023 and 2024), material authoring and generation (EGSR 2022, SIGGRAPH Asia 2022, SIGGRAPH 2023 and 2024), material perception (SIGGRAPH 2023), and scene understanding (SIGGRAPH 2023 and 2024, SIGGRAPH Asia 2023). In recent years, he has published a series of papers contributing towards a complete pipeline for materials, from acquisition, generation and description to selection, segmentation, editing and retrieval for textures, images and 3D assets.</p>
<p>Much of his work appeared in top venues and journals of Computer Graphics and Vision, and many of his papers are highly cited. This shows the strong impact that his findings have had on the community, in which Valentin plays an active role: he has served on program committees (EGSR 2021-2023, EUROGRAPHICS 2023, SIGGRAPH Asia 2023 and 2024) and chaired events (the SIGGRAPH Thesis FF and the EG Doctoral Consortium). He has also successfully mentored and collaborated with various international PhD students.</p>
<p>EUROGRAPHICS is extremely pleased to recognize Valentin Deschaintre with the 2025 Young Researcher Award in recognition of his outstanding contributions to Computer Graphics/Computer Vision in the area of data-driven material authoring and understanding.</p>
<p>Sebastian Starke receives the EUROGRAPHICS Young Researcher Award 2025.
Sebastian obtained his PhD from the University of Edinburgh under the supervision of Taku Komura. He is now a research scientist at Meta Reality Labs.</p>
<p>Sebastian has made significant contributions to motion synthesis and character animation methods using deep learning techniques. His research in character animation fuses motion control with deep learning to create responsive and lifelike digital characters.</p>
<p>In his research, Sebastian extends the phase concept to complex human-scene interactions, such as basketball playing, boxing, and the motion of our four-legged friends. His DeepPhase framework introduces an end-to-end neural architecture that learns a compact, representative phase space directly from raw motion capture data. This approach not only unifies existing phase-based representations, but also elegantly handles the nuances of diverse motion patterns, ensuring natural and fluid animation synthesis. Separately, his codebook matching algorithm addresses the inherent ambiguities of control signals, such as those from VR devices, by aligning and matching latent categorical probability distributions. By explicitly sampling from the distribution, the technique yields high-fidelity and responsive control systems that are pivotal for immersive embodiment applications in the metaverse and beyond.</p>
<p>Sebastian Starke's work has been published in the top-tier conferences and journals of computer graphics and is widely cited. It has received several honors, such as best paper awards at SIGGRAPH and Pacific Graphics, as well as the Symposium on Computer Animation (SCA) Best PhD Dissertation award (2023). Sebastian's innovative contributions to the field of character animation significantly advance interactive applications such as gaming, virtual reality and robotics.</p>
<p>EUROGRAPHICS is extremely pleased to recognize Sebastian Starke with the 2025 Young Researcher Award in recognition of his outstanding contributions to Computer Graphics in the area of character animation and motion synthesis.</p>
<p>Justin Solomon</p><p>MIT</p><p>Alexei Efros</p><p>UC Berkeley</p><p>Karen Liu</p><p>Stanford University</p><p>Michael Black</p><p>Max Planck Institute for Intelligent Systems</p>
Dennis Bukenberger (Technical University of Munich, Germany)
This issue of the Computer Graphics Forum contains the technical full papers program of the Eurographics Association 46th annual conference, held in London, England from 12-16 May 2025. The Eurographics annual venue presents a unique opportunity to present outstanding technical contributions in computer graphics. The full papers selected for publication in the Computer Graphics Forum journal are arguably the most prestigious feature of the conference.
The technical paper selection process involved a group of 98 experts forming the International Program Committee (IPC). We invited experts without more than two consecutive years of participation in the IPC so that the committee can be regularly renewed. The IPC members covered a diverse range of research subareas in computer graphics.
We received a total of 211 full submissions, six of which were desk-rejected because they were out of scope or because of plagiarism or double-submission issues. A sorting committee, consisting of the two Chairs and six sorting committee members, subsequently assigned each paper to two IPC members, as either primary or secondary reviewer, up to five papers, respecting to their preferences, expertise, conflicts, and automatically computed matching scores between IPC members and submitted papers. The primary and secondary reviewers in turn invited three additional tertiary reviewers on each submission.
After the initial five reviews per submission were collected, the authors had five days to consult these reviews and write a 1000-word rebuttal, addressing key questions and potential misinterpretations. Fourteen submissions were withdrawn by their authors who decided to forgo the rebuttal. Finally, all reviewers assigned to a paper read the rebuttal and all reviews and together reached an initial decision.
This year, following an established tradition that started in 2012 and improved continuously through the years, all IPC members participated in a one-week virtual asynchronous meeting, where the discussions between the IPC members leading to the final decisions were performed off-line by a bulletin board and other means of personal communication. New to this year, the six members of the sorting committee also acted as moderators of the discussions. This process led to extensive discussions where papers and reviews were debated, involving other IPC members as extra readers when needed. Each paper had a public discussion board, and each and every IPC member contributed to discussions where they felt competent.
All papers conditionally accepted with minor revisions went through a short second review cycle, with evaluations from the primary reviewer, and sometimes the secondary reviewer, before being finally accepted.
In the end, 75 papers out of the 205 valid submissions were accepted with minor revisions for a 36.6% acceptance rate, while 9 were recommended to a fast-track review process with major revisions to be considered for publication in a future issue of Computer Graphics Forum. This year we had papers on a diverse range of topics including generative modeling of images, videos and 3D content, machine learning, image and video editing, geometry processing, physically-based and non-photorealistic rendering, neural rendering, material appearance and texture, character animation, digital avatars, motion reconstruction, physical simulation, visualization, virtual reality, digital fabrication.
All accepted full papers are published in the Computer Graphics Forum journal. It is worth noting that for all submissions conflict-of-interest was managed on all levels, from reviewers, committee, advisory board, best paper committee, up to the chairs. The review process was double-blind for tertiary reviewers and single-blind for primary and secondary IPC members, and in case the original set of reviewers did not conclude with a decision, additional reviewers were invited to perform a full review and assist the decision process. Best papers were selected by a dedicated awards committee who selected among the top 19 papers based on overall review scores.
Last but not least, we would like to thank Stefanie Behnke from Eurographics Publishing for her outstanding support with SRM functionality, and for her constant responsiveness, which was key to the successful outcome of the paper selection process.
We are very happy to present the full paper proceedings of Eurographics 2025. We believe that these papers reflect the extraordinary variety of computer graphics research and its best contributions. It was both an honor and a pleasure for us to lead this selection process, and we hope that you will find both the papers and the entire conference thought-provoking and inspiring for your future endeavors.
EG 25 Full Papers Co-Chairs
Alexa, Marc
TU Berlin
Drettakis, George
Inria
Sorkine-Hornung, Olga
ETH Zürich
Theobalt, Christian
Max Planck Institute for Informatics
Wimmer, Michael
TU Wien
Beeler, Thabo
Google
Bender, Jan
RWTH Aachen University
Bommes, David
University of Bern
Jarabo, Adrian
Meta Reality Labs Research
Kim, Min H.
KAIST
Thies, Justus
TU Darmstadt
Alghofaili, Rawan
University of Texas at Dallas
Babaei, Vahid
MPI
Baek, Seung-Hwan
POSTECH
Bærentzen, Jakob Andreas
TU Denmark
Barla, Pascal
Inria
Bi, Sai
Adobe
Billeter, Markus
University of Leeds
Botsch, Mario
TU Dortmund
Bruckner, Stefan
University of Bergen
Campen, Marcel
Osnabrück University
Casas, Dan
Universidad Rey Juan Carlos
Castellani, Umberto
University of Verona
Chaine, Raphaelle
Université Claude Bernard Lyon 1
Chandran, Prashanth
Disney Research
Chapiro, Alexandre
Meta
Chu, Mengyu
Peking University
Cordonnier, Guillaume
Inria, Université Côte d'Azur
Daviet, Gilles
NVIDIA
Deng, Zhigang
University of Houston
Didyk, Piotr
University of Lugano
Doggett, Michael
Lund University
Faraj, Noura
Université de Montpellier – LIRMM
Ferguson, Zachary
CLO Virtual Fashion
Fu, Hongbo
The Hong Kong University of Science and Technology
Gain, James
University of Cape Town
Garces, Elena
Adobe
Gingold, Yotam
George Mason University
Gobbetti, Enrico
CRS4
Golyanik, Vladislav
MPI for Informatics
Groueix, Thibault
Adobe
Günther, Tobias
FAU Erlangen-Nuremberg
Heide, Felix
Princeton University
Henzler, Philipp
Google
Hu, Shi-Min
Tsinghua University
Huang, Qixing
UT Austin
Ju, Tao
Washington University in St. Louis
Mo, Kaichun
NVIDIA
Leake, Mackenzie
Adobe
Lee, Seungyong
POSTECH
Lefebvre, Sylvain
Inria
Leimkühler, Thomas
MPI Informatik
Lensch, Hendrik
University of Tübingen
Li, Changjian
University of Edinburgh
Li, Dingzeyu
Adobe
Li, Lei
Technical University of Munich
Li, Minchen
Carnegie Mellon University
Livesu, Marco
IMATI CNR
Martín, Daniel
Universidad de Zaragoza
Mellado, Nicolas
CNRS, IRIT, Université de Toulouse, France
Musialski, Przemyslaw
New Jersey Institute of Technology
Oliveira, Manuel M.
UFRGS
Pajarola, Renato
University of Zurich
Parakkat, Amal Dev
Institut Polytechnique de Paris
Paschalidou, Despoina
Stanford University
Peers, Pieter
College of William & Mary
Pelechano, Nuria
Universitat Politècnica de Catalunya
Philip, Julien
Netflix Eyeline Studios
Pirk, Sören
Google
Qi, Anran
Inria, Université Côte d'Azur
Ren, Jing
ETH Zurich
Rushmeier, Holly
Yale
Sawhney, Rohan
NVIDIA
Schreck, Camille
Inria Nancy
Sellán, Silvia
University of Toronto
Sharf, Andrei
Ben Gurion University
Sharp, Nicholas
NVIDIA
Sintorn, Erik
Chalmers University
Skouras, Melina
INRIA
Smirnov, Dmitry
Netflix
Stamminger, Marc
Friedrich-Alexander-Universität Erlangen-Nürnberg
Stein, Oded
University of Southern California
Steinberger, Markus
Graz University of Technology, Huawei Technologies
Sueda, Shinjiro
Texas A&M University
Sung, Minhyuk
KAIST
Tan, Ping
The Hong Kong University of Science and Technology
Teschner, Matthias
University of Freiburg
Tong, Xin
Microsoft Research Asia
Uy, Mikaela Angelina
Stanford University
Vaxman, Amir
The University of Edinburgh
Wang, Beibei
Nanjing University
Wang, Charlie C. L.
The University of Manchester
Wang, Peng-Shuai
Peking University
Wang, Tuanfeng Y.
Adobe
Wang, Wenping
Texas A&M
Wang, Zeyu
The Hong Kong University of Science and Technology, Guangzhou
Weber, Ofir
Bar-Ilan University
Wei, Li-Yi
Adobe
Weyrich, Tim
Friedrich-Alexander-Universität Erlangen-Nürnberg
Wu, Kui
LightSpeed Studios
Wyman, Chris
NVIDIA
Xu, Kai
National University of Defense Technology
Yan, Ling-Qi
UC Santa Barbara
Yang, Yin
The University of Utah
Zhang, Biao
KAUST
Zhou, Yang
Adobe
Zhu, Bo
Dartmouth College
Zhu, Junqiu
UC Santa Barbara
Zint, Daniel
New York University
Agus, Marco
Aksoy, Yagiz
Alzayer, Hadi
Amenta, Annamaria
Ando, Ryoichi
Aristidou, Andreas
Ashraf, Maliha
Assarsson, Ulf
Attene, Marco
Bächer, Moritz
Bahat, Yuval
Bahmani, Sherwin
Bang, Seungbae
Bangaru, Sai
Banterle, Francesco
Barczak, Joshua
Barrera-Machuca, Mayra
Barthe, Loïc
Basri, Ronen
Basset, Jean
Batty, Christopher
Bauer, Frank
Belyaev, Alexander
Bemana, Mojtaba
Ben-Chen, Mirela
Benes, Bedrich
Benjamin, Juanita
Bermano, Amit Haim
Bernard, Florian
Bharadwaj, Shrisha
Bian, Wenjing
Birsak, Michael
Bittner, Jiří
Boscaini, Davide
Bressa, Nathalie
Bruneton, Eric
Burley, Brent
Cabiddu, Daniela
Cao, Dongliang
Capouellez, Ryan
Cardoso, Joao
Celen, Ata
Ceylan, Duygu
Chandran, Prashanth
Chang, Pascal
Chang, Yue
Chen, Chen
Chen, He
Chen, Honglin
Chen, Jianchun
Chen, Jiong
Chen, Kenneth
Chen, Peter Yichen
Chen, Qiang
Chen, Qimin
Chen, Renjie
Chen, Wei-Yu
Chen, Wenzheng
Chen, Xin
Chen, Xuelin
Chen, Yingcong
Chen, Yun-Chun
Chen, Zhen
Cheng, Zhanglin
Choi, Myung Geol
Choi, Suyeon
Chrysanthou, Yiorgos
Chugunov, Ilya
Chung, Jiwoo
Cibulski, Lena
Ciccone, Loïc
Cieslak, Mikolaj
Clarberg, Petrik
Čmolík, Ladislav
Coiffier, Guillaume
Corman, Etienne
Corpetti, Thomas
Corsini, Massimiliano
Cosmo, Luca
Dachsbacher, Carsten
Daněček, Radek
Das, Devikalyan
Datta, Sayantan
Davis, Abe
Deng, Bailin
Deng, Qixin
Deng, Xi
Deng, Yitong
Deng, Yu
Deng, Zhigang
Diehl, Alexandra
Digne, Julie
Dischler, Jean-Michel
Dittebrandt, Addis
Dodik, Ana
Dong, Weiming
Dong, Yue
Dou, Zhiyang
Douthe, Cyril
Du, Zheng-Jun
Eboli, Thomas
Echevarria, Jose
Eisert, Peter
Fan, Deng-Ping
Fan, Zhimin
Fang, Bryant Shaoheng
Fang, Guoxin
Fang, Hao-Shu
Fei, Raymond Yun
Feng, Nicole
Feng, Weixi
Feng, Yao
Finnendahl, Ugo
Fischer, Michael
Fisher, Matthew
Fu, Qiang
Fu, Rao
Fuchs, Martin
Fudos, Ioannis
Fujiwara, Haruo
Fukusato, Tsukasa
Gal, Rinon
Ganeshan, Aditya
Gao, Lin
Gao, Maolin
Gao, Quankai
Garrido, Pablo
Gavriil, Konstantinos
Gavryushin, Alexey
Ghosh, Anindita
Giebenhain, Simon
Gong, Bingchen
Goswami, Prashant
Gotsman, Craig
Gousseau, Yann
Grigorev, Artur
Grittmann, Pascal
Groth, Colin
Gruson, Adrien
Gryaditskaya, Yulia
Gu, Xiaodong
Guan, Phillip
Guan, Yanran
Guehl, Pascal
Guemeli, Can
Guerrero, Paul
Guo, Chuan
Guo, Xiaohu
Guo, Yingchun
Guo, Yu-Xiao
Guthe, Michael
Habermann, Marc
Hadwiger, Markus
Hahn, David
Hähnlein, Felix
Hall, Peter
Han, Jihae
Hanika, Johannes
Hanji, Param
Hanocka, Rana
Hao, Jiang
He, Ying
Hedman, Peter
Hedstrom, Trevor
Henz, Bernardo
Herholz, Philipp
Hertz, Amir
Hertzmann, Aaron
Holdenried-Krafft, Simon
Holzschuch, Nicolas
Hou, Fei
Hou, Junhui
Hsu, Jerry
Hu, Yixin
Huang, Chun-Hao
Huang, Jin
Huang, Kemeng
Huang, Ruqi
Huang, Tianxin
Huang, Xiaolei
Hwang, Jaepyung
Ibrahim, Muhammad Twaha
Iglesias-Guitian, Jose A.
Iser, Tomáš
Ishida, Sadashige
Isogawa, Mariko
Iwai, Daisuke
Jacobson, Alec
Jaspe, Alberto
Je, Jihyeon
Jebe, Lars
Jeong, Hyeonho
Ji, Xinya
Jiang, Lihan
Jiang, Yifeng
Jiang, Ying
Jiang, Zhongshi
Jin, Xiaogang
Jin, Yuduo
Jindal, Akshay
Jones, Ben
Jones, R. Kenny
Jönsson, Daniel
Jung, Seung-Won
Kaiser, Adrien
Kalischek, Nikolai
Karunratanakul, Korrawe
Kaufmann, Manuel
Kavaklı, Koray
Keller, Marilyn
Kelley, Brendan
Kelly, Tom
Kerbl, Bernhard
Khattar, Apoorv
Kim, Dongyeon
Kim, Doyub
Kim, Seung Wook
Kim, Suzi
Klein, Jonathan
Kodnongbua, Milin
Koo, Juil
Kopanas, George
Kosinka, Jiri
Kovalsky, Shahar
Kuth, Bastian
Kwon, Mingi
Kwon, Taesoo
Lagunas, Manuel
Lai, Yu-Kun
Lalonde, Jean-François
Lan, Lei
Lanza, Dario
Larboulette, Caroline
Lavoue, Guillaume
Le, Binh
Leake, Mackenzie
Lee, Joo Ho
Lee, Sunmin
Lee, Yoonsang
Lei, Jiahui
Leimkuehler, Thomas
Lejemble, Thibault
Levi, Zohar
Levin, David
Li, Bo
Li, Manyi
Li, Tzu-Mao
Li, Xuan
Li, Yidi
Li, Yushi
Li, Zhe
Li, Zhengqin
Liang, Yiqing
Liao, Rongfan
Liao, Zhouyingcheng
Lin, Daqi
Lin, Kai-En
Lindell, David
Ling, Ben
Litalien, Joey
Liu, Chenxi
Liu, Haiyang
Liu, Haolin
Liu, Hsueh-Ti Derek
Liu, Libin
Liu, Tiantian
Liu, Yuan
Liu, Yuan
Liu, Zheng
Long, Xiaoxiao
Lu, Jiaxin
Lukac, Mike
Ly, Mickaël
Lyu, Weijie
Ma, Qianli
Ma, Xiaohe
Machado, Gustavo
Maesumi, Arman
Maggioli, Filippo
Magnet, Robin
Majercik, Alexander
Malpica, Sandra
Mancinelli, Claudio
Mao, Tianlu
Marais, Patrick
Mendiratta, Mohit
Meng, Johannes
Mercier-Aubin, Alexandre
Meric, Adil
Meyer, Mark
Michel, Élie
Miller, Bailey
Millerdurai, Christen
Min, Sehee
Mo, Haoran
Monzon, Nestor
Moon, Gyeongsik
Morrical, Nathan
Mould, David
Mousas, Christos
Müller, Thomas
Multon, Franck
Munkberg, Jacob
Muthuganapathy, Ramanathan
Myszkowski, Karol
Nader, Georges
Nah, Jae-Ho
Nehvi, Jalees
Nie, Yongwei
Nivoliers, Vincent
Noh, Junyong
Nöllenburg, Martin
Novak, Jan
Novello, Tiago
Nowrouzezahrai, Derek
Pelechano, Nuria
Ohrhallinger, Stefan
Olajos, Rikard
Osman, Ahmed
Ost, Julian
Otaduy, Miguel A.
Pajarola, Renato
Pajouheshgar, Ehsan
Pan, Hao
Pandey, Rohit
Panetta, Julian
Panozzo, Daniele
Papaioannou, Georgios
Park, Geon Yeong
Patashnik, Or
Patney, Anjul
Peng, Jason
Peng, Shichong
Peng, Sida
Peng, Ziqiao
Peters, Christoph
Peters, Jorg
Petrov, Dmitrii
Petrovich, Mathis
Pierson, Emery
Pietroni, Nico
Pintore, Giovanni
Pintus, Ruggero
Po, Ryan
Qian, Shenhan
Qin, Dafei
Raab, Sigal
Radl, Lukas
Raistrick, Alexander
Raj, Amit
Rakotosaona, Marie-Julie
Rao, Anyi
Rath, Alexander
Rautek, Peter
Ray, Nicolas
Reddy, Pradyumna
Reiser, Christian
Rekik Dit Nekhili, Rim
Rempe, Davis
Ren, Bo
Ren, Yingying
Ren, Yixuan
Rist, Florian
Rohmer, Damien
Roitberg, Alina
Salvati, Marc
Salvi, Marco
Sartor, Sam
Schaefer, Scott
Schmalstieg, Dieter
Schreck, Tobias
Schroeder, Craig
Schüßler, Vincent
Schweickart, Eston
Hillaire, Sebastien
Selgrad, Kai
Serifi, Agon
Serrano, Ana
Seyb, Dario
Shamir, Ariel
Shao, Tianjia
Sharma, Adwait
Sheffer, Alla
Shekhar, Sumit
Shi, Mingyi
Shi, Yujun
Shin, Joonghyuk
Shirley, Peter
Shugrina, Maria
Skarbez, Richard
Smith, Jesse
Song, Sicheng
Spurek, Przemyslaw
Stearns, Colton
Sugimoto, Ryusuke
Sun, Caroline
Sun, Qi
Sun, Weiwei
Szymanowicz, Stan
Takikawa, Towaki
Tang, Min
Tang, Yansong
Tanveer, Maham
Tatzgern, Markus
Tewari, Ayush
Theobalt, Christian
Thiery, Jean-Marc
Tian, Yapeng
Tricard, Thibault
Tseng, Ethan
Tu, Peihan
Tursun, Cara
Unterguggenberger, Johannes
Valkanas, Antonios
Villeneuve, Keven
Vouga, Etienne
Sumner, Robert W.
Wallner, Johannes
Wang, Arran
Wang, Bin
Wang, Bing
Wang, Chen
Wang, Hai
Wang, Jiepeng
Wang, Lu
Wang, Xiaogang
Wang, Xinpeng
Wang, Zhendong
Wang, Zirui
Warner, Jeremy
Wei, Kaixuan
Weiss, Kenneth
Weiss, Sebastian
Weiss, Tomer
Weng, Chung-Yi
Westermann, Rüdiger
Westhofen, Lukas
Williams, Niall
Wolski, Krzysztof
Wronski, Bartlomiej
Wu, Haomiao
Wu, Lifan
Wu, Rundi
Wu, Songyin
Wu, Xiaoloong
Xia, Mengqi
Xian, Liu
Xiao, Qinjie
Xie, Desai
Xie, Haoran
Xie, Haozhe
Xie, Tianyi
Xie, Zhaoming
Xing, Jiankai
Xu, Bing
Xu, Jie
Xu, Jingyi
Xu, Pei
Xu, Xiang
Xu, Xiaogang
Xu, Zexiang
Xu, Zhan
Xu, Zilin
Yan, Chuan
Yan, Kai
Yan, Siming
Yang, Guandao
Yang, Haitao
Yang, Josh
Yi, Hongwei
Yi, Li
Yi, Renjiao
Yi, Xinyu
Yoo, Seungwoo
Yoon, Jae Shin
Yu, Borou
Yu, Difeng
Yu, Emilie
Yu, Fenggen
Yu, Hongchuan
Yu, Mulin
Yu, Tao
Yuan, Yuhui
Yuchi, Huo
Yue, Yonghao
Zellmann, Stefan
Zeng, Ailing
Zeng, Chong
Zeng, Yanhong
Zeng, Zheng
Zhang, Cheng
Zhang, Chuyan
Zhang, Congyi
Zhang, Haotian
Zhang, Hongwen
Zhang, Jason Y.
Zhang, Paul
Zhang, Qing
Zhang, W.
Zhang, Xiuming
Zhang, Yuxin
Zhao, Hang
Zhao, Mingyang
Zhao, Shuang
Zheng, Shaokun
Zheng, Xinyang
Zhou, Junwei
Zhou, Kailai
Zhou, Tongyu
Zhou, Xilong
Zhou, Yang
Zhou, Yi
Zhou, Zhiqian
Zhu, Lifeng
Zibrek, Katja
Zuffi, Silvia
Ariel Shamir is a professor and the former Dean of the Efi Arazi School of Computer Science at Reichman University in Israel (formerly the Interdisciplinary Center). Before joining the university, he spent two years as a postdoctoral fellow at the Computational Visualization Center at the University of Texas at Austin. Over the years he has held visiting research positions at Mitsubishi Electric Research Labs (Cambridge, MA), Disney Research, MIT, and Google.
Ariel Shamir has been one of the most prolific authors in computer graphics in the last decade, making several pioneering contributions across a wide array of topics, including image and video processing, shape analysis, 3D modeling, fabrication and animation. Many of his algorithms integrate, and are guided by, models of human perception, mixing art and science and helping to develop ready-to-use tools. He was the senior author of the original seam carving paper (and of others that followed), which has been one of the most impactful papers in image editing in the last fifteen years and which quickly established a line of research on the deceptively simple problem of scaling images while adapting their content accordingly.
Ariel is well known for many other works, such as sketch2photo, a system that allowed users to compose realistic images from simple hand-drawn annotated sketches (roughly a decade before deep learning took off), as well as algorithms to extract full 3D shapes from images, mesh segmentation, automatic video editing, and stylization and abstraction, to name just a few. His recent work also advances machine learning techniques.
Ariel is a very active member of the community, regularly serving on major program committees and on the editorial boards of many leading journals. He was Chair of the SIGGRAPH Asia Technical Papers Programme in 2024. He has received many international awards, including induction into the ACM SIGGRAPH Academy in 2024. He also maintains collaborations with several high-tech companies, both large and small, which highlights the practical angle that guides his research.
In summary, Ariel Shamir's exceptional contributions to Computer Graphics and Human-Computer Interaction have left an indelible mark. His innovative research, numerous accolades, and leadership in academia exemplify his dedication to advancing research, technology and education.
EUROGRAPHICS is extremely pleased to recognize Ariel Shamir with the 2025 Outstanding Technical Contributions Award.
Valentin Deschaintre receives the EUROGRAPHICS Young Researcher Award 2025. Valentin's research focuses on inverse rendering and on appearance generation, acquisition, authoring and representations for virtual environments and scene understanding. His work includes many major contributions, among them his seminal SIGGRAPH 2018 paper on lightweight SVBRDF capture, which combined differentiable rendering with synthetic training data; the latter has since become a standard for training and benchmarking.
Valentin worked on his PhD in Computer Science at INRIA Sophia-Antipolis, in collaboration with the Ansys affiliate Optis. His thesis received the French Computer Graphics Thesis Award and the UCA Academic Excellence Thesis Award. He continued his research at Imperial College London in 2020 before joining Adobe Research in 2021. In 2024, he was elected a EUROGRAPHICS Junior Fellow.
Valentin has developed several important contributions to data-driven appearance acquisition and authoring, published in top venues and journals of Computer Graphics and Vision: acquisition of large surfaces (EGSR 2020), polarization-based acquisition (CVPR 2021), procedural material model creation (SIGGRAPH 2022, 2023 and 2024), material authoring and generation (EGSR 2022, SIGGRAPH Asia 2022, SIGGRAPH 2023 and 2024), material perception (SIGGRAPH 2023), and scene understanding (SIGGRAPH 2023 and 2024, SIGGRAPH Asia 2023). In recent years, he has published a series of papers contributing towards a complete material pipeline, spanning acquisition, generation and description through to selection, segmentation, editing and retrieval for textures, images and 3D assets.
Much of his work has appeared in top venues and journals of Computer Graphics and Vision, and many of his papers are highly cited. This shows the strong impact that his findings have had on the community, in which Valentin plays an active role: he has served on program committees (EGSR 2021-2023, EUROGRAPHICS 2023, SIGGRAPH Asia 2023 and 2024) and has chaired events (the SIGGRAPH Thesis Fast Forward and the EG Doctoral Consortium). He has also successfully mentored and collaborated with numerous international PhD students.
EUROGRAPHICS is extremely pleased to recognize Valentin Deschaintre with the 2025 Young Researcher Award in recognition of his outstanding contributions to Computer Graphics/Computer Vision in the area of data-driven material authoring and understanding.
Sebastian Starke receives the EUROGRAPHICS Young Researcher Award 2025. Sebastian obtained his PhD from the University of Edinburgh under the supervision of Taku Komura. He is now a research scientist at Meta Reality Labs.
Sebastian has made significant contributions to motion synthesis and character animation methods using deep learning techniques. His research in character animation fuses motion control with deep learning to create responsive and lifelike digital characters.
In his research, Sebastian extends the phase concept to complex human-scene interactions, such as basketball playing, boxing and the motion of our four-legged friends. His DeepPhase framework introduces an end-to-end neural architecture that learns a compact, representative phase space directly from raw motion capture data. This approach not only unifies existing phase-based representations, but also elegantly handles the nuances of diverse motion patterns, ensuring natural and fluid animation synthesis. Separately, his codebook matching algorithm addresses the inherent ambiguities of control signals (such as those from VR devices) by aligning and matching latent categorical probability distributions. By explicitly sampling from these distributions, the technique yields high-fidelity, responsive control systems that are pivotal for immersive, embodied applications in the metaverse and beyond.
Sebastian Starke's work is published in the top-tier conferences and journals of computer graphics and has been widely cited. It has received several honors, including best paper awards at SIGGRAPH and Pacific Graphics, as well as the Symposium on Computer Animation (SCA) Best PhD Dissertation Award (2023). Sebastian's innovative contributions to the field of character animation significantly advance interactive applications such as gaming, virtual reality and robotics.
EUROGRAPHICS is extremely pleased to recognize Sebastian Starke with the 2025 Young Researcher Award in recognition of his outstanding contributions to Computer Graphics in the area of character animation and motion synthesis.
Journal Introduction:
Computer Graphics Forum is the official journal of Eurographics, published in cooperation with Wiley-Blackwell, and is a unique, international source of information for computer graphics professionals interested in graphics developments worldwide. It is now one of the leading journals for researchers, developers and users of computer graphics in both commercial and academic environments. The journal reports on the latest developments in the field throughout the world and covers all aspects of the theory, practice and application of computer graphics.