Introduction to Visual Effects: A Computational Approach
- Length: 228 pages
- Edition: 1
- Language: English
- Publisher: CRC Press
- Publication Date: 2022-12-06
- ISBN-10: 103207230X
- ISBN-13: 9781032072302
Introduction to Visual Effects: A Computational Approach is the first unified introduction to the computational and mathematical aspects of visual effects, incorporating both computer vision and computer graphics. The book also provides the reader with the source code for a supporting library, enabling them to follow the chapters directly and build a complete visual effects platform. It covers the basic approaches to camera pose estimation, global illumination, and image-based lighting, and includes chapters on the virtual camera, optimization and computer vision, path tracing, and more.
Key features include:
- An introduction to projective geometry, Image-Based Lighting (IBL), and global illumination solved by the Monte Carlo method (path tracing); an explanation of a set of optimization methods; and the techniques used for calibrating one, two, and many cameras, including how to use the RANSAC algorithm to make the process robust, with accompanying code built on the GNU Scientific Library.
- C/C++ code using the OpenCV library for tracking points across the frames of a movie (an important step in the matchmove process) and for building modeling tools for visual effects.
- A simple model of the Bidirectional Reflectance Distribution Function (BRDF) of surfaces and the differential rendering method, allowing the reader to generate consistent shadows, supported by code that can be used in combination with software such as Luminance HDR.
Cover Page
Half-Title Page
Title Page
Copyright Page
Dedication Page
Contents
Preface
Chapter 1 ◾ Introduction
  1.1 Camera Calibration
  1.2 Historical Overview of Tracking
  1.3 Global Illumination
  1.4 Image-Based Lighting
  1.5 Mathematical Notations
  1.6 Projective Geometry Concepts
    1.6.1 Projective Space
    1.6.2 Projective Transforms
    1.6.3 Projective Geometry in This Book
    1.6.4 Parallelism and Ideal Points
  1.7 About the Code
Chapter 2 ◾ Virtual Camera
  2.1 Basic Model
    2.1.1 Camera in the Origin
    2.1.2 Camera in Generic Position
    2.1.3 Digital Camera
    2.1.4 Intrinsic Parameters
    2.1.5 Dimension of the Space of Virtual Cameras
  2.2 Camera for Image Synthesis
    2.2.1 Terminologies
    2.2.2 Clipping and Visibility
  2.3 Transformation of Visualization
    2.3.1 Positioning the Camera
    2.3.2 Transformation of Normalization
    2.3.3 Perspective Projection
    2.3.4 Device Coordinates
  2.4 Comparison with the Basic Model
    2.4.1 Intrinsic Parameters
    2.4.2 Dimension
    2.4.3 Advantages over the Basic Model
  2.5 Camera for Path Tracing
  2.6 Visibility and Ray Casting
  2.7 Cameras for Calibration
    2.7.1 Projective Model
    2.7.2 Projective Notation for Cameras
    2.7.3 Generic Projective Camera
  2.8 Mapping a Calibrated Camera into the S3D Library
    2.8.1 Specification of Extrinsic Parameters
    2.8.2 Specification of Intrinsic Parameters
  2.9 API
    2.9.1 MatchMove Software Functions
    2.9.2 Render Software Functions
  2.10 Code
    2.10.1 Code in the MatchMove Software
    2.10.2 Code in the Render Software
Chapter 3 ◾ Optimization Tools
  3.1 Minimizing a Function Defined on an Interval
  3.2 Least Squares
  3.3 Non-Linear Least Squares
    3.3.1 Gauss-Newton Method
    3.3.2 Levenberg-Marquardt Algorithm
  3.4 Minimizing the Norm of a Linear Function over a Sphere
  3.5 Two-Stage Optimization
  3.6 Robust Model Estimation
    3.6.1 RANSAC Algorithm
    3.6.2 Example of Using the RANSAC Algorithm
Chapter 4 ◾ Estimating One Camera
  4.1 Calibration in Relation to a Set of 3D Points
    4.1.1 Calibration Using Six Matches
    4.1.2 Calibration Using More Than Six Matches
  4.2 Normalization of the Points
  4.3 Isolation of Camera Parameters
  4.4 Camera for Image Synthesis
  4.5 Calibration by Restricted Optimization
    4.5.1 Adjusting the Levenberg-Marquardt to the Problem
    4.5.2 Parameterization of Rotations
    4.5.3 Parameterization of the Camera Space
  4.6 Problem Points of Parameterization
  4.7 Finding the Intrinsic Parameters
  4.8 Calibration Using a Planar Pattern
  4.9 API
  4.10 Code
  4.11 Single Camera Calibration Program
  4.12 Six-Point Single Camera Calibration Program
Chapter 5 ◾ Estimating Two Cameras
  5.1 Representation of Relative Positioning
  5.2 Rigid Movement
  5.3 Other Projective Model
  5.4 Epipolar Geometry
    5.4.1 Essential Matrix
  5.5 Fundamental Matrix
  5.6 The 8-Point Algorithm
    5.6.1 Calculation of F
    5.6.2 Using More Than 8 Points
    5.6.3 Calculation of F̃
  5.7 Normalized 8-Point Algorithm
  5.8 Finding the Extrinsic Parameters
    5.8.1 Adding Clipping to the Model
    5.8.2 Three-Dimensional Reconstruction
  5.9 API
  5.10 Code
Chapter 6 ◾ Feature Tracking
  6.1 Definitions
  6.2 Kanade-Lucas-Tomasi Algorithm
  6.3 Following Windows
  6.4 Choosing the Windows
  6.5 Disposal of Windows
  6.6 Problems Using KLT
  6.7 Code
Chapter 7 ◾ Estimating Many Cameras
  7.1 Definitions
  7.2 Calibrating in Pairs
  7.3 Calibration in Three Steps
  7.4 Three-Step Calibration Problems
  7.5 Making the Calibration of Small Sequences Robust
    7.5.1 Solution to the Problem of Step 1
    7.5.2 Solution to the Problem of Step 2
    7.5.3 Solution to the Problem in Step 3
  7.6 Choice of Base Columns
  7.7 Bundle Adjustment
  7.8 Representation of a Configuration
  7.9 Refinement Cycles
  7.10 Example
  7.11 Decomposition of the Video into Fragments
  7.12 Junction of Fragments
    7.12.1 Alignment of Fragments
    7.12.2 Compatibility of Scales
    7.12.3 Robust Scale Compatibility
  7.13 Off-Line Augmented Reality
  7.14 Global Optimization by Relaxation
  7.15 Code Modules
    7.15.1 Bundle Adjustment API
    7.15.2 Bundle Adjustment Code
    7.15.3 RANSAC API
    7.15.4 RANSAC Code
    7.15.5 Features List API
    7.15.6 Features List Code
    7.15.7 Sequence of Frames API
    7.15.8 Sequence of Frames Code
    7.15.9 Relaxation API
    7.15.10 Relaxation Code
  7.16 MatchMove Program
  7.17 Relaxation Program
Chapter 8 ◾ Modeling Tools
  8.1 API
  8.2 Code
  8.3 Point Cloud Definer Program
  8.4 Point Cloud Calib Program
Chapter 9 ◾ Light Transport and Monte Carlo
  9.1 Radiance
  9.2 The Invariance of the Radiance
  9.3 The BRDF and the Rendering Equation
  9.4 Other Definition for the Rendering Equation
  9.5 Examples of BRDF
    9.5.1 The Perfect Lambertian Surface BRDF
    9.5.2 The Perfect Mirror BRDF
    9.5.3 The Modified Blinn-Phong BRDF
  9.6 Numerical Approximation
  9.7 Monte Carlo Integration Method
  9.8 Path Tracing
  9.9 Uniform Sampling over a Hemisphere
  9.10 Splitting the Direct and Indirect Illumination
  9.11 Polygonal Luminaires
  9.12 Code Modules
    9.12.1 Path Tracing API
    9.12.2 Path Tracing Code
    9.12.3 Poly Light API
    9.12.4 Poly Light Code
  9.13 Rendering Program
  9.14 Result
Chapter 10 ◾ Image-Based Lighting
  10.1 HDR Pictures
  10.2 Reconstructing the HDR Radiance Map
  10.3 Colored Pictures
  10.4 Recovering an HDR Radiance Map
  10.5 The PFM File Format
  10.6 Conversions between LDR and HDR Images
  10.7 From HDR Pictures to Equirectangular Projections
  10.8 Orienting the Radiance Dome
  10.9 Rendering Using a Radiance Map
  10.10 Interaction between the Real and Virtual Scenes
    10.10.1 Modeling the BRDF of the Local Scene
    10.10.2 Differential Rendering
  10.11 Code Modules
    10.11.1 HDR Image API
    10.11.2 HDR Image Code
    10.11.3 Image-Based Light API
    10.11.4 Image-Based Light Code
    10.11.5 HDR Scene API
    10.11.6 HDR Scene Code
    10.11.7 Dome Path Tracing API
    10.11.8 Dome Path Tracing Code
  10.12 PolyShadow Color Adjust Program
  10.13 Visual Effects Program
  10.14 Results
Bibliography
Index