Virtual Reality and Light Field Immersive Video Technologies for Real-World Applications
- Length: 400 pages
- Edition: 1
- Language: English
- Publisher: The Institution of Engineering and Technology
- Publication Date: 2022-02-09
- ISBN-10: 1785615785
- ISBN-13: 9781785615788
Virtual reality (VR) refers to technologies that use headsets to generate realistic images, sounds and other sensations that replicate a real-world environment or create an imaginary setting. VR also simulates a user's physical presence in this environment. With six degrees of freedom (6DoF), users can not only look around but also move through the virtual world and view objects from above, below or behind. To deliver a true VR experience, the hardware must provide all six degrees of freedom, combining orientation tracking (rotation) with positional tracking (translation).
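To make the two tracking components concrete, here is a minimal sketch (not taken from the book; the names Pose6DoF and quat_to_matrix and the sample numbers are illustrative, using only numpy) that models a 6DoF head pose as a unit quaternion for orientation plus a translation vector for position:

```python
import numpy as np

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

class Pose6DoF:
    """Head pose: 3 rotational DoF (orientation tracking) +
    3 translational DoF (positional tracking)."""
    def __init__(self, quaternion, translation):
        self.R = quat_to_matrix(np.asarray(quaternion, dtype=float))
        self.t = np.asarray(translation, dtype=float)

    def world_to_head(self, p_world):
        """Express a world-space point in the head's coordinate frame."""
        return self.R.T @ (np.asarray(p_world, dtype=float) - self.t)

# Hypothetical example: the user has stepped 0.2 m along +x and turned
# 90 degrees about the vertical (+y) axis.
half = np.sqrt(0.5)
pose = Pose6DoF(quaternion=(half, 0.0, half, 0.0), translation=(0.2, 0.0, 0.0))
print(pose.world_to_head((1.0, 0.0, 0.0)))  # -> [0. 0. 0.8]
```

In an actual HMD runtime the quaternion would come from the headset's inertial/optical orientation tracker and the translation from inside-out or outside-in positional tracking, but the pose algebra is the same.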
This book is addressed to video experts who want to understand the basics of 3D representations and multi-camera video processing in order to target new immersive media applications. Unlike single-camera video coding, future VR technologies must address challenges beyond compression alone, including pre- and post-processing steps such as depth acquisition and 3D rendering. The book is inspired by the MPEG-I (immersive media) and JPEG Pleno (plenoptic media) standardization activities, and offers a glimpse of their underlying technologies.
Table of contents:

Cover; Title; Copyright; Contents; About the authors

Chapter 1 Immersive video introduction
  References
Chapter 2 Virtual reality
  2.1 Introduction/history
  2.2 The challenge of three to six degrees of freedom
  2.3 The challenge of stereoscopic to holographic vision
  References
Chapter 3 3D gaming and VR
  3.1 OpenGL in VR
  3.2 3D data representations
    3.2.1 Triangular meshes
    3.2.2 Subdivision surfaces and Bézier curves
    3.2.3 Textures and cubemaps
  3.3 OpenGL pipeline
  References
Chapter 4 Camera and projection models
  4.1 Mathematical preliminaries
  4.2 The pinhole camera model
  4.3 Intrinsics of the pinhole camera
  4.4 Projection matrices
    4.4.1 Mathematical derivation of projection matrices
    4.4.2 Characteristics of the projection matrices
  References
Chapter 5 Light equations
  5.1 Light contributions
    5.1.1 Emissive light source
    5.1.2 Ambient light
    5.1.3 Diffuse light
    5.1.4 Specular light
  5.2 Physically correct light models
  5.3 Light models for transparent materials
  5.4 Shadows rendering
  5.5 Mesh-based 3D rendering with light equations
    5.5.1 Gouraud shading
    5.5.2 Phong shading
    5.5.3 Bump mapping
    5.5.4 3D file formats
  References
Chapter 6 Kinematics
  6.1 Rigid body animations
    6.1.1 Rotations with Euler angles
    6.1.2 Rotations around an arbitrary axis
    6.1.3 ModelView transformation
  6.2 Quaternions
    6.2.1 Spherical linear interpolation
  6.3 Deformable body animations
    6.3.1 Keyframes and inverse kinematics
    6.3.2 Clothes animation
    6.3.3 Particle systems
  6.4 Collisions in the physics engine
    6.4.1 Collision of a triangle with a plane
    6.4.2 Collision between two spheres, only one moving
    6.4.3 Collision of two moving spheres
    6.4.4 Collision of a sphere with a plane
    6.4.5 Collision of a sphere with a cube
    6.4.6 Separating axes theorem and bounding boxes
  References
Chapter 7 Raytracing
  7.1 Raytracing complexity
  7.2 Raytracing with analytical objects
  7.3 VR challenges
  References
Chapter 8 2D transforms for VR with natural content
  8.1 The affine transform
  8.2 The homography
  8.3 Homography estimation
  8.4 Feature points and RANSAC outliers for panoramic stitching
  8.5 Homography and affine transform revisited
  8.6 Pose estimation for AR
  References
Chapter 9 3DoF VR with natural content
  9.1 Stereoscopic viewing
  9.2 360 panoramas
    9.2.1 360 panoramas with planar reprojections
    9.2.2 Cylindrical and spherical 360 panoramas
    9.2.3 360 panoramas with equirectangular projection images
  References
Chapter 10 VR goggles
  10.1 Wide angle lens distortion
    10.1.1 Wide angle lens model
    10.1.2 Radial distortion model
    10.1.3 VR goggles pre-distortion
  10.2 Asynchronous high frame rate rendering
  10.3 Stereoscopic time warping
  10.4 Advanced HMD rendering
    10.4.1 Optical systems
    10.4.2 Eye accommodation
  References
Chapter 11 6DoF navigation
  11.1 6DoF with point clouds
  11.2 Active depth sensing
  11.3 Time of flight
    11.3.1 Phase from a modulated light source
    11.3.2 Structured light
    11.3.3 Phase from interferometry
  11.4 Point cloud registration and densification
    11.4.1 Photogrammetry
    11.4.2 SLAM navigational applications
  11.5 3D rendering of point clouds
    11.5.1 Poisson reconstruction
    11.5.2 Splatting
  References
Chapter 12 Towards 6DoF with image-based rendering
  12.1 Introduction
  12.2 Finding relative camera positions
    12.2.1 Epipolar geometry
    12.2.2 Rotation and translation from the essential and fundamental matrices
    12.2.3 Epipolar line equation
    12.2.4 Extrinsics with checkerboard calibration
    12.2.5 Extrinsics with sparse bundle adjustment
    12.2.6 Depth estimation
    12.2.7 Stereo matching
    12.2.8 Depth quantization
    12.2.9 Stereo matching and cost volumes
    12.2.10 Occlusions
    12.2.11 Stereo matching with adaptive windows around depth discontinuities
    12.2.12 Stereo matching with priors
    12.2.13 Uniform texture regions
    12.2.14 Epipolar plane image with multiple images
    12.2.15 Plane sweeping
  12.3 Graph cut
    12.3.1 The binary graph cut
  12.4 MPEG reference depth estimation
  12.5 Depth estimation challenges
  12.6 6DoF view synthesis with depth image-based rendering
    12.6.1 Morphing without depth
    12.6.2 Nyquist–Whittaker–Shannon and Petersen–Middleton in DIBR view synthesis
    12.6.3 Depth-based 2D pixel to 3D point reprojections
    12.6.4 Splatting and hole filling
    12.6.5 Super-pixels and hole filling
    12.6.6 Depth reliability in view synthesis
    12.6.7 MPEG-I view synthesis with estimated depth maps
    12.6.8 MPEG-I view synthesis with sensed depth maps
    12.6.9 Depth layered images – Google
  12.7 Use case I: view synthesis in holographic stereograms
  12.8 Use case II: view synthesis in integral photography
  12.9 Difference between PCC and DIBR
  References
Chapter 13 Multi-camera acquisition systems
  13.1 Stereo vision
  13.2 Multiview vision
    13.2.1 Geometry correction for camera array
    13.2.2 Colour correction for camera array
  13.3 Plenoptic imaging
    13.3.1 Processing tools for plenoptic camera
    13.3.2 Conversion from lenslet to multiview images for plenoptic camera 1.0
  References
Chapter 14 3D light field displays
  14.1 3D TV
  14.2 Eye vision
  14.3 Surface light field system
  14.4 1D-II 3D display system
  14.5 Integral photography
  14.6 Real-time free viewpoint television
  14.7 SMV256
  14.8 Light field video camera system
  14.9 Multipoint camera and microphone system
  14.10 Walk-through system
  14.11 Ray emergent imaging (REI)
  14.12 Holografika
  14.13 Light field 3D display
  14.14 Aktina Vision
  14.15 IP by 3D VIVANT
  14.16 Projection type IP
  14.17 Tensor display
  14.18 Multi-, plenoptic-, coded-aperture-, multi-focus-camera to tensor display system
  14.19 360° light field display
  14.20 360° mirror scan
  14.21 Seelinder
  14.22 Holo Table
  14.23 fVisiOn
  14.24 Use cases of virtual reality systems
    14.24.1 Public use cases
    14.24.2 Professional use cases
    14.24.3 Scientific use cases
  References
Chapter 15 Visual media compression
  15.1 3D video compression
    15.1.1 Image and video compression
  15.2 MPEG standardization and compression with 2D video codecs
    15.2.1 Cubemap video
    15.2.2 Multiview video and depth compression (3D-HEVC)
    15.2.3 Dense light field compression
  15.3 Future challenges in 2D video compression
  15.4 MPEG codecs for 3D immersion
    15.4.1 Point cloud coding with 2D video codecs
    15.4.2 MPEG immersive video compression
    15.4.3 Visual volumetric video coding
    15.4.4 Compression for light field displays
  References
Chapter 16 Conclusion and future perspectives
Index