Keynote Speakers


keynote
Andrés Navarro

ICESI University

Andrés Navarro (M’95–SM’11) received his degree in Electronic Engineering (1993) and his Master's in Technology Management (1999), both from Universidad Pontificia Bolivariana in Medellín, and his PhD in Telecommunications from Universitat Politècnica de València (2003). He is an IEEE Senior Member and a former advisor to the National Innovation Program on Electronics, Telecommunications and Informatics of the Colombian Research, Development and Future Projects system. He is also an advisor to the Spectrum Management Committee of the Colombian Spectrum Agency. Since 1999, he has served as Director of the i2t research group at Universidad Icesi. His research interests are spectrum management, radio propagation, and m-health. He is currently the Chairman of the Colombian chapter of the IEEE Communications Society.


Topic: From Spectrum Visualization to Urban Computing in Cali, Colombia


Abstract: For several years, a group of Colombian universities in Cali has been working on mobile computing projects in cooperation through the i2ComM initiative. More recently, this cooperation was extended to China, with student exchanges and professor mobility, as well as other joint activities that have resulted in co-authored publications. As part of this cooperation, several students have traveled to China, and we intend to take the collaboration well beyond its current scope to expand our activities and achievements. In this presentation we will discuss some of the activities and projects we are carrying out in Colombia as part of the consortium, including the use of game engines (jMonkey and Unity) and virtual reality for radio channel simulation for 5G, as well as telecommunications training and spectrum management learning using VR tools. We will then present some urban computing initiatives developed jointly by Colombian and Chinese universities, including the use of ad-hoc large-scale sensor networks and big data, which aim to collect a wide range of urban and social data and make it accessible through visualization tools.


keynote
Liang Lin

SenseTime, SYSU

Liang Lin is the Executive Director of SenseTime Research and a full Professor at Sun Yat-sen University. He currently leads the SenseTime R&D teams developing cutting-edge, deliverable solutions in computer vision, data analysis and mining, and intelligent robotic systems. He has authored and co-authored more than 100 papers in top-tier academic journals and conferences (e.g., 15 papers in TPAMI/IJCV). He serves as an Associate Editor of IEEE Transactions on Human-Machine Systems and has served as an Area Chair for numerous conferences, including CVPR, ICME, ACCV, and ICMR. He received the Best Paper Diamond Award at IEEE ICME 2017, the Best Paper Runner-Up Award at ACM NPAR 2010, a Google Faculty Award in 2012, the Best Student Paper Award at IEEE ICME 2014, and the Hong Kong Scholars Award in 2014. He is a Fellow of IET.


Topic: Depth Learning---When Depth Estimation Meets Deep Learning


Abstract: Depth data is indispensable for reconstructing or understanding 3D scenes. It serves as a key ingredient for applications such as synthetic defocus, autonomous driving, and augmented reality. Although active 3D sensors (e.g., Lidar, ToF, and structured-light 3D scanners) can be employed, retrieving depth from monocular or stereo cameras is typically a more cost-effective approach. However, estimating depth from images is inherently under-determined; to regularize the problem, one typically needs handcrafted models characterizing the properties of depth data or scene geometry. With recent advances in deep learning, depth estimation can be cast as a learning task, leading to state-of-the-art performance. In this talk, I will present our new progress on depth estimation with convolutional neural networks (CNNs). In particular, I will first introduce cascade residual learning (CRL), our two-stage deep architecture for stereo matching that produces high-quality disparity estimates. Observations with CRL inspired us to propose a domain-adaptation approach, zoom and learn (ZOLE), for training a deep stereo matching algorithm without ground-truth data from the target domain. By combining a view synthesis network with the first stage of CRL, we propose single view stereo matching (SVS) for single-image depth estimation, with performance superior to the classic stereo block matching method that takes two images as input.


keynote
Ming C. Lin

University of Maryland

Ming C. Lin is currently the Elizabeth Stevinson Iribe Chair of Computer Science at the University of Maryland College Park and John R. & Louise S. Parker Distinguished Professor Emerita of Computer Science at the University of North Carolina (UNC), Chapel Hill. She is also an honorary Chair Professor (Yangtze Scholar) at Tsinghua University in China. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She received several honors and awards, including the NSF Young Faculty Career Award in 1995, Honda Research Initiation Award in 1997, UNC/IBM Junior Faculty Development Award in 1999, UNC Hettleman Award for Scholarly Achievements in 2003, Beverly W. Long Distinguished Professorship 2007-2010, Carolina Women’s Center Faculty Scholar in 2008, UNC WOWS Scholar 2009-2011, IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and many best paper awards at international conferences. She is a Fellow of ACM, IEEE, and Eurographics.

Her research interests include computational robotics, haptics, physically-based modeling, virtual reality, sound rendering, and geometric computing. She has (co-)authored more than 300 refereed publications in these areas and co-edited/authored four books. She has served on hundreds of program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is currently a member of Computing Research Association-Women (CRA-W) Board of Directors, Chair of IEEE Computer Society (CS) Fellows Committee, Chair of IEEE CS Computer Pioneer Award, and Chair of ACM SIGGRAPH Outstanding Doctoral Dissertation Award. She is a former member of IEEE CS Board of Governors, a former Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics (2011-2014), a former Chair of IEEE CS Transactions Operations Committee, and a member of several editorial boards. She also has served on several steering committees and advisory boards of international conferences, as well as government and industrial technical advisory committees.


Topic: Reconstructing Reality: From Physical World to Virtual Environments


Abstract: With the increasing availability of data in forms ranging from images, audio, video, 3D models, motion capture, and simulation results to satellite imagery, representative samples of the phenomena constituting the world around us bring new opportunities and research challenges. Such availability of data has led to recent advances in data-driven modeling. However, most existing example-based synthesis methods offer empirical models and data reconstruction that may not provide an insightful understanding of the underlying process, or may be limited to a subset of observations.

In this talk, I present recent advances that integrate classical model-based methods and statistical learning techniques to tackle challenging problems that have not been previously addressed. These include flow reconstruction for traffic visualization, learning heterogeneous crowd behaviors from video, simultaneous estimation of deformation and elasticity parameters from images and video, and example-based multimodal display for VR systems. These approaches offer new insights for understanding complex collective behaviors, developing better models for complex dynamical systems from captured data, delivering more effective medical diagnosis and treatment, as well as cyber-manufacturing of customized apparel. I conclude by discussing some possible future directions and challenges.


keynote
Marc Christie

University of Rennes 1

Marc Christie is an associate professor at University of Rennes 1. His research focuses on virtual cinematography, the application of real cinematography techniques to virtual 3D environments. This research covers a wide range of challenges, such as extracting data from real movies, learning elements of film style (types of transitions, continuity between shots, editing patterns), proposing models and techniques to re-apply the learned elements to virtual content, and computing camera angles, trajectories, and optimal edits. Recently, Marc has focused his research on how these models and techniques can be transferred to drones, opening up the topic of cinematographic drones. He has co-authored more than 40 conference papers on these topics and has led courses at Eurographics and SIGGRAPH Asia.


Topic: VR content creation for movie previsualisation


Abstract: Creatives in animation and film productions have long explored new means to visually design filmic sequences before realizing them in studios, through a range of techniques: hand-drawn storyboards, physical mockups, and more recently virtual 3D environments (called previsualisation). A central issue in using virtual 3D environments to rehearse a sequence is the complexity of content creation tools, which are not accessible to creatives such as film directors, directors of photography, or lighting designers. In this talk, we take the path of using VR not as an experiential exploration tool in virtual environments, but as an authoring system that enables the crafting of filmic sequences even for creative people who are not experts with 3D tools. The proposed system is designed to reflect the traditional creative process through (i) the creation of storyboards using VR, and (ii) the creation of animated filmic sequences using VR (designing the scene, placing the cameras, and performing a montage between the cameras). As a benefit, the proposed approach enables a novel and seamless back-and-forth between all stages of the process. A user evaluation with students from film schools, both experts and non-experts, reports the benefits of such a system over traditional tools for prototyping animated sequences for movie storyboarding and rehearsal, and demonstrates strengths such as (i) ease of use, (ii) spatialization that reduces manipulations, and (iii) seamless back-and-forth between stages. The tool is currently under evaluation in film schools, previsualisation companies, and feature animation film companies.


keynote
Neil Trevett

NVIDIA

Neil Trevett is Vice President of Developer Ecosystems at NVIDIA and President of the Khronos Group. At NVIDIA, Neil works to enable applications to leverage advanced silicon acceleration. As the elected President of the Khronos Group, he has helped initiate and evolve APIs and formats such as Vulkan, OpenXR, OpenGL ES, WebGL, glTF, OpenCL, OpenVX, and NNEF.


Topic: Open Standards for Building Virtual and Augmented Realities


Abstract: For VR and AR to become truly pervasive, native applications and the Web need to be enabled with portable and open standards for 3D, vision, and inferencing acceleration, efficient formats for delivering 3D assets, and cross-platform APIs for user interaction and scene analysis. The Khronos Group is working alongside other international standards organizations to create the building blocks for XR-enabled browsers and applications. This presentation will provide an update on the very latest developments in Khronos standards and how they fit within the larger industry XR ecosystem.


keynote
Qinping Zhao

Beihang University

Professor at Beihang University (BUAA), Member of the Chinese Academy of Engineering (CAE), Chief Scientist of the State Key Laboratory of Virtual Reality Technology and Systems, and President of the China Simulation Federation (CSF).

Professor Zhao has conducted research on virtual reality and artificial intelligence for many years and has completed more than 20 national science and technology programs, including projects under the National Natural Science Foundation, the National High-tech R&D Program, and the National Basic Research Program of China. As the principal contributor, he was awarded the National Prize for Progress in Science and Technology (First Class once and Second Class twice) and the National Prize for Technical Innovation (Second Class once). To date, he has published 3 academic monographs and more than 180 papers, and holds 60 national authorized patents.


Topic: Promoting the IQ of Computer Systems


Abstract: The IQ of a computer system refers to the humanlike intelligence of its hardware and software, and to the humanlike thinking contributed by its designers and producers. Based on human logical thinking, we analyze and grade the humanlike thinking ability of computer systems and explore further efforts to promote it.


keynote
Wenping Wang

The University of Hong Kong 

Wenping Wang is Chair Professor of Computer Science at the University of Hong Kong. His research interests cover computer graphics, visualization, computer vision, robotics, medical image processing, and geometric computing, and he has published over 140 journal papers in these fields. He is an associate editor of several international journals, including Computer Aided Geometric Design (CAGD), Computer Graphics Forum (CGF), IEEE Transactions on Computers, and IEEE Computer Graphics and Applications, and has chaired a number of international conferences, including Pacific Graphics 2012, the ACM Symposium on Solid and Physical Modeling (SPM) 2013, and SIGGRAPH Asia 2013. Prof. Wang received the John Gregory Memorial Award for his contributions to geometric modeling. He is an IEEE Fellow.


Topic: On Reconstructing 3D Wire Objects


Abstract: 3D shape reconstruction has widespread applications in computer graphics, computer vision, robotics and virtual reality. However, the reconstruction of 3D wire objects has received relatively little research attention, despite the ubiquity of these thin objects, such as ropes, cables, tree branches, wire arts and wire-frame furniture. In this talk I will present our recent works on an image-based reconstruction method and on using a hand-held commodity RGBD sensor for scanning and reconstructing wire objects with a skeleton-based fusion approach. I will also discuss a range of outstanding challenges that need to be addressed in order to achieve reliable and real-time reconstruction of wire objects in the wild.


keynote
Xiangshi Ren

Kochi University of Technology

Xiangshi Ren is a professor in the School of Information and director of the Center for Human-Engaged Computing (CHEC) at Kochi University of Technology. He is founding president and honorary lifetime president of the International Chinese Association of Computer-Human Interaction (ICACHI). He was named one of the Asian Human-Computer Interaction Heroes at ACM CHI 2015. He has been a visiting professor at the University of Toronto, a visiting faculty researcher at IBM Research (Almaden), and a visiting/guest/chair professor at several universities in China. Currently, he is an adjunct professor at Jilin University and Beijing Normal University. He is a Senior Member of the ACM and of the IEEE.

Prof. Ren has been working on fundamental studies in the field of Human-Computer Interaction (HCI) for over twenty-five years. His research interests include all aspects of human-computer interaction, particularly human performance models, pen-based interaction, multi-touch interaction, eye-based interaction, haptic interaction, gesture input, game interaction, user interfaces for older users and for blind users. He and his colleagues have established a unique research framework based on information technology, incorporating methodologies such as human performance modeling, developing new algorithms, conducting user studies, and systematically testing and applying HCI theory to applications.


Topic: From Human-Computer Interaction to Human-Engaged Computing


Abstract: This talk is in three parts: 1) First, I will review the history of Human-Computer Interaction (HCI), discuss the future relationship between humans and computers, and describe a new overarching perspective for development, Human-Engaged Computing (HEC), for the next generation of human-computer interaction. 2) Second, I will give a summary of my HCI studies from the past 25 years. 3) Finally, I will share some valuable principles that I have learned through my experience in HCI research.


keynote
Jinxiang Chai

Texas A&M University

Jinxiang Chai is currently the founder and CEO of Xmov.ai, which develops the world's first scalable end-to-end solution for high-fidelity, performance-based animation of human characters. He is also a tenured professor in the Department of Computer Science and Engineering at Texas A&M University. He received his Ph.D. in Robotics from the School of Computer Science, Carnegie Mellon University, in 2006. His primary research is in the area of computer graphics and vision, with a focus on human motion capture, analysis, synthesis, simulation, and control. He is particularly interested in developing real-time human motion capture technologies for animation 2.0 and natural user interfaces for next-generation computing platforms such as smart TVs, AR/VR, and service robots. He has published 20 SIGGRAPH/TOG papers on human motion analysis, synthesis, capture, and control. He received an NSF CAREER award for his work on the theory and practice of Bayesian human motion synthesis.


Topic: Human Motion Capture: Applications, Challenges and Progress


Abstract: Motion capture technologies have made revolutionary progress in computer animation in the past decade. With the detailed motion data and editing algorithms, we can directly transfer expressive performance of a real person to a virtual character, interpolate existing data to produce new sequences, or compose simple motion clips to create a rich repertoire of motor skills. In addition to computer animation applications, motion capture technologies have enabled natural user interactions for computers, smart phones, game consoles, smart TV, VR/AR and service robots, as well as human motion recognition for video analysis and intelligent security monitoring.
Current motion capture technologies are often restrictive, cumbersome, and expensive. Video-based motion capture offers an appealing alternative because it requires no markers, sensors, or special suits and thereby does not impede the subject's ability to perform the motion. Graphics and vision researchers have been actively exploring video-based motion capture for many years and have made great advances. However, the results are often vulnerable to ambiguities in video data (e.g., occlusions), degeneracies in camera motion, and a lack of discernible features on a human body or hand.
In this talk, I will describe our recent efforts on acquiring human motion using RGB/RGBD cameras. Notable examples include full-body motion capture using a single depth camera, real-time and automatic 3D facial performance capture with eye gaze using a single RGB camera, real-time hand gesture capture using a single depth camera, and acquiring physically realistic hand grasping and manipulation data as well as physically accurate human motion using multiple cameras. I will also talk about applications of human motion capture in natural user interaction and character animation.



Conference Program

Time (时间) | Events (内容) | Place (会场)

Saturday, 20th October, 2018 (2018年10月20日,周六)

08:30-18:00 | Competition Registration (竞赛评委、工作人员、参赛团队注册) | Building 5, Beihang Goertek Institute (北航青岛研究院5号楼)
15:00-17:00 | Competition Meeting (竞赛工作会议) | #501, Building 5, Beihang Goertek Institute (北航青岛研究院5号楼501)

Sunday, 21st October, 2018 (10月21日,周日)

09:00-12:00, 14:00-17:30 | Competition Semifinals: College Group A (竞赛高校组A组半决赛) | #303, Building 5, Beihang Goertek Institute (北航青岛研究院5号楼303)
09:00-12:00, 14:00-17:30 | Competition Semifinals: College Group B (竞赛高校组B组半决赛) | #401, Building 5, Beihang Goertek Institute (北航青岛研究院5号楼401)
09:00-12:00, 14:00-17:30 | Competition Semifinals: College Group C (竞赛高校组C组半决赛) | #501, Building 5, Beihang Goertek Institute (北航青岛研究院5号楼501)
14:00-22:00 | Conference Advance Registration (会议提前注册) | Lobby, Holiday Inn (世园假日一层大堂)

Monday, 22nd October, 2018 ([Day 0] 10月22日,周一)

08:00-22:00 | Conference Registration (会议注册) | Lobby, Holiday Inn (世园假日一层大堂)
09:00-12:00, 14:00-17:30 | Competition Final: Corporate Group (企业组竞赛决赛) | #303, Building 5, Beihang Goertek Institute (北航青岛研究院5号楼303)
09:00-12:00 | VR Courses (1) (VR专题课程与报告(上)) | #306, Building 5, Beihang Goertek Institute (北航青岛研究院5号楼306)
14:00-17:00 | VR Courses (2) (VR专题课程与报告(下)) | #306, Building 5, Beihang Goertek Institute (北航青岛研究院5号楼306)
14:00-17:30 | Competition Final: College Group (高校组竞赛决赛) | #501, Building 5, Beihang Goertek Institute (北航青岛研究院5号楼501)
14:00-16:00 | CSIG TC-VR Meeting (中国图象图形学学会虚拟现实专委会会议) | #305, Building 5, Beihang Goertek Institute (北航青岛研究院5号楼305)
14:30-17:30 | CVRVT Working Meeting (中国虚拟现实与可视化产业技术创新战略联盟工作会议) | #501, Building 5, Beihang Goertek Institute (北航青岛研究院5号楼501)
16:00-18:00 | CCF TC-VRV Meeting (中国计算机学会虚拟现实与可视化技术专委会会议) | #305, Building 5, Beihang Goertek Institute (北航青岛研究院5号楼305)

Tuesday, 23rd October, 2018 ([Day 1] 10月23日,周二)

07:00-21:00 | Conference Registration (会议注册) | Lobby, Holiday Inn (世园假日一层大堂)
07:30-17:30 | VR Exhibition (虚拟现实关键技术及市场示范应用展) | 1st Floor Corridor, Holiday Inn (世园假日一层走廊)
08:30-09:00 | Opening Ceremony (大会开幕式) | Grand Ballroom, Holiday Inn (世园假日千祥云集厅)
09:00-09:40 | Keynote Speech 1: Prof. Qinping Zhao (Beihang University) (大会特邀报告一:赵沁平教授,北京航空航天大学) | Grand Ballroom, Holiday Inn (世园假日千祥云集厅)
09:40-10:00 | Photo & Tea Break (合影、巡展、茶歇) | 1st Floor Corridor, Holiday Inn (世园假日一层走廊)
10:00-10:40 | Keynote Speech 2: Prof. Wenping Wang (The University of Hong Kong) (大会特邀报告二:王文平教授,香港大学) | Grand Ballroom, Holiday Inn (世园假日千祥云集厅)
10:40-11:20 | Keynote Speech 3: Prof. Xiangshi Ren (Kochi University of Technology) (大会特邀报告三:Prof. Xiangshi Ren,日本高知工科大学) | Grand Ballroom, Holiday Inn (世园假日千祥云集厅)
11:20-12:00 | Keynote Speech 4: Prof. Andres Navarro (ICESI University) (大会特邀报告四:Prof. Andres Navarro,哥伦比亚伊塞斯大学) | Grand Ballroom, Holiday Inn (世园假日千祥云集厅)
12:00-14:00 | Buffet Lunch (自助午餐) | Youshan Restaurant, Holiday Inn (世园假日优山创意餐厅)
14:00-16:00 | ICVRV Paper Presentation 1 (ICVRV论文宣讲1) | Meeting Room Ⅳ, Holiday Inn (世园假日天权厅)
14:00-16:00 | ChinaVR Paper Presentation 1 (ChinaVR论文宣讲1) | Meeting Room Ⅱ, Holiday Inn (世园假日天璇厅)
14:00-17:30 | VR Workshop: Interaction Design and Ageing (前沿技术论坛:交互设计与老龄化) | Meeting Room Ⅰ, Holiday Inn (世园假日天枢厅)
14:00-17:30 | VR Workshop: Innovation and Development of the VR Industry (VR产学研专题论坛一:虚拟现实产业创新发展) | Grand Ballroom Ⅰ, Holiday Inn (世园假日千祥云集I厅)
14:00-17:30 | VR Workshop: VR Industry Demonstration Applications (VR产学研专题论坛二:虚拟现实产业示范应用) | Grand Ballroom Ⅱ, Holiday Inn (世园假日万象更新厅)
14:00-17:30 | VR Workshop: VR Military-Civilian Integration (VR产学研专题论坛三:虚拟现实军民融合) | Function Room, Holiday Inn (世园假日千祥云集II厅)
16:00-16:30 | Tea Break (茶歇) | 1st Floor Corridor, Holiday Inn (世园假日一层走廊)
16:30-18:30 | ICVRV Paper Presentation 2 (ICVRV论文宣讲2) | Meeting Room Ⅳ, Holiday Inn (世园假日天权厅)
16:30-18:30 | ChinaVR Paper Presentation 2 (ChinaVR论文宣讲2) | Meeting Room Ⅱ, Holiday Inn (世园假日天璇厅)

Wednesday, 24th October, 2018 ([Day 2] 10月24日,周三)

08:00-12:00 | Conference Registration (会议注册) | Lobby, Holiday Inn (世园假日一层大堂)
08:30-09:10 | Keynote Speech 5: Prof. Ming C. Lin (University of Maryland) (大会特邀报告五:Prof. Ming C. Lin,美国马里兰大学帕克分校) | Grand Ballroom, Holiday Inn (世园假日千祥云集厅)
09:10-09:50 | Keynote Speech 6: Prof. Marc Christie (University of Rennes 1) (大会特邀报告六:Prof. Marc Christie,法国雷恩第一大学) | Grand Ballroom, Holiday Inn (世园假日千祥云集厅)
09:50-10:30 | Keynote Speech 7: Prof. Jinxiang Chai (Texas A&M University) (大会特邀报告七:Prof. Jinxiang Chai,美国德克萨斯农机大学) | Grand Ballroom, Holiday Inn (世园假日千祥云集厅)
10:30-10:40 | Break (茶歇) | 1st Floor Corridor, Holiday Inn (世园假日一层走廊)
10:40-11:20 | Keynote Speech 8: Neil Trevett (NVIDIA) (大会特邀报告八:Neil Trevett,美国英伟达公司) | Grand Ballroom, Holiday Inn (世园假日千祥云集厅)
11:00-11:30 | Keynote Speech 9: Prof. Liang Lin (SenseTime) (大会特邀报告九:林倞教授,商汤科技) | Grand Ballroom, Holiday Inn (世园假日千祥云集厅)
12:00-14:00 | Buffet Lunch (自助午餐) | Youshan Restaurant, Holiday Inn (世园假日优山创意餐厅)
14:00-16:00 | ICVRV Paper Presentation 3 (ICVRV论文宣讲3) | Meeting Room Ⅳ, Holiday Inn (世园假日天权厅)
14:00-16:00 | ChinaVR Paper Presentation 3 (ChinaVR论文宣讲3) | Meeting Room Ⅱ, Holiday Inn (世园假日天璇厅)
14:30-17:30 | ICVRV & ChinaVR Graduate Academic Forum (研究生学术论坛) | Meeting Room Ⅰ, Holiday Inn (世园假日天枢厅)
14:00-17:30 | VR Workshop: VR Industry Promotion and Investment Development (VR产学研专题论坛四:虚拟现实产业促进与投资发展) | Grand Ballroom Ⅰ, Holiday Inn (世园假日千祥云集I厅)
14:00-17:30 | VR Workshop: Key Technology Trends in the VR Industry (VR产学研专题论坛五:虚拟现实产业关键技术趋势) | Grand Ballroom Ⅱ, Holiday Inn (世园假日千祥云集II厅)
16:00-16:30 | Tea Break (茶歇) | 1st Floor Corridor, Holiday Inn (世园假日一层走廊)
16:30-18:30 | ICVRV Paper Presentation 4 (ICVRV论文宣讲4) | Meeting Room Ⅳ, Holiday Inn (世园假日天权厅)
16:30-18:30 | ChinaVR Paper Presentation 4 (ChinaVR论文宣讲4) | Meeting Room Ⅱ, Holiday Inn (世园假日天璇厅)
18:30 | Closing Ceremony (会议闭幕)