Given a set of images depicting a number of 3D points from different viewpoints, bundle adjustment can be defined as the problem of simultaneously refining the 3D coordinates describing the scene geometry, the parameters of the relative motion, and the optical characteristics of the camera(s) employed to acquire the images, according to an optimality criterion involving the corresponding image projections of all points.

Bundle adjustment is almost always used as the last step of every feature-based 3D reconstruction algorithm. It amounts to an optimization problem on the 3D structure and viewing parameters (i.e., camera pose and possibly intrinsic calibration and radial distortion), to obtain a reconstruction which is optimal under certain assumptions regarding the noise pertaining to the observed image features: if the image error is zero-mean Gaussian, then bundle adjustment is the maximum likelihood estimator. Its name refers to the bundles of light rays originating from each 3D feature and converging on each camera's optical center, which are adjusted optimally with respect to both the structure and viewing parameters (the similarity in meaning to the categorical bundle appears to be pure coincidence).
Bundle adjustment was originally conceived in the field of photogrammetry during the 1950s and has increasingly been used by computer vision researchers during recent years.

Bundle adjustment boils down to minimizing the reprojection error between the image locations of observed and predicted image points, which is expressed as the sum of squares of a large number of nonlinear, real-valued functions. Thus, the minimization is achieved using nonlinear least-squares algorithms. Of these, Levenberg–Marquardt has proven to be one of the most successful, due to its ease of implementation and its use of an effective damping strategy that lends it the ability to converge quickly from a wide range of initial guesses. By iteratively linearizing the function to be minimized in the neighborhood of the current estimate, the Levenberg–Marquardt algorithm involves the solution of linear systems termed the normal equations. When solving the minimization problems arising in the framework of bundle adjustment, the normal equations have a sparse block structure owing to the lack of interaction among parameters for different 3D points and cameras. This can be exploited to gain tremendous computational benefits by employing a sparse variant of the Levenberg–Marquardt algorithm which explicitly takes advantage of the normal equations' pattern of zeros, avoiding storing and operating on zero elements.

Bundle adjustment amounts to jointly refining a set of initial camera and structure parameter estimates for finding the set of parameters that most accurately predict the locations of the observed points in the set of available images. More formally, assume that $n$ 3D points are seen in $m$ views and let $\mathbf{x}_{ij}$ be the projection of the $i$-th point on image $j$.
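The damped linearize-and-solve iteration described above can be sketched as a minimal dense Levenberg–Marquardt loop. This is an illustrative NumPy sketch, not the sparse variant used by real bundle adjusters; the toy exponential-fit problem and the function names are assumptions chosen purely for demonstration:

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop: at each step solve the damped
    normal equations (J^T J + lam*I) delta = -J^T r and adapt lam."""
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * np.sum(residual(x) ** 2)
    for _ in range(n_iter):
        r = residual(x)
        J = jac(x)
        g = J.T @ r                       # gradient of the cost
        JTJ = J.T @ J                     # Gauss-Newton Hessian approximation
        delta = np.linalg.solve(JTJ + lam * np.eye(JTJ.shape[0]), -g)
        new_cost = 0.5 * np.sum(residual(x + delta) ** 2)
        if new_cost < cost:               # accept the step, relax damping
            x, cost, lam = x + delta, new_cost, lam / 10
        else:                             # reject the step, increase damping
            lam *= 10
    return x

# Toy problem: fit y = a*exp(b*t), a small stand-in for the nonlinear
# least-squares problems arising in bundle adjustment.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p_hat = levenberg_marquardt(res, jac, np.array([1.0, 1.0]))
```

In a real bundle adjuster the dense `np.linalg.solve` call is replaced by a sparse solver (or the Schur complement trick) that exploits the block structure of the normal equations.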
Let $v_{ij}$ denote the binary variables that equal 1 if point $i$ is visible in image $j$ and 0 otherwise. Assume also that each camera $j$ is parameterized by a vector $\mathbf{a}_j$ and each 3D point $i$ by a vector $\mathbf{b}_i$. Bundle adjustment minimizes the total reprojection error with respect to all 3D point and camera parameters, specifically

$$\min_{\mathbf{a}_j,\,\mathbf{b}_i} \sum_{i=1}^{n} \sum_{j=1}^{m} v_{ij}\, d\left(\mathbf{Q}(\mathbf{a}_j,\,\mathbf{b}_i),\,\mathbf{x}_{ij}\right)^2,$$

where $\mathbf{Q}(\mathbf{a}_j,\,\mathbf{b}_i)$ is the predicted projection of point $i$ on image $j$ and $d(\mathbf{x},\,\mathbf{y})$ denotes the Euclidean distance between the image points represented by vectors $\mathbf{x}$ and $\mathbf{y}$. Clearly, bundle adjustment is by definition tolerant to missing image projections and minimizes a physically meaningful criterion.
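The objective above translates directly into code. The sketch below evaluates the total reprojection error with a visibility mask; the simple pinhole model `pinhole` (parameterized only by focal length and principal point, with no rotation or distortion) is a deliberately minimal assumption used to keep the example short:

```python
import numpy as np

def reprojection_error(cam_params, points_3d, observations, visibility, project):
    """Total squared reprojection error:
        sum_i sum_j v_ij * d(Q(a_j, b_i), x_ij)^2
    cam_params:   (m, p) array, one parameter vector a_j per camera
    points_3d:    (n, 3) array, one vector b_i per 3D point
    observations: (n, m, 2) array of measured image points x_ij
    visibility:   (n, m) binary array v_ij
    project:      callable Q(a_j, b_i) -> predicted 2D image point
    """
    total = 0.0
    for i, b_i in enumerate(points_3d):
        for j, a_j in enumerate(cam_params):
            if visibility[i, j]:
                pred = project(a_j, b_i)
                total += np.sum((pred - observations[i, j]) ** 2)
    return total

# Hypothetical pinhole camera: a_j = (f, cx, cy) holds focal length and
# principal point; the camera sits at the origin looking down +z.
def pinhole(a, b):
    f, cx, cy = a
    return np.array([f * b[0] / b[2] + cx, f * b[1] / b[2] + cy])

pts = np.array([[0.0, 0.0, 2.0], [0.5, -0.5, 4.0]])   # two 3D points b_i
cams = np.array([[500.0, 320.0, 240.0]])              # one camera a_j
obs = np.array([[pinhole(cams[0], p)] for p in pts])  # noise-free x_ij
vis = np.ones((2, 1), dtype=int)                      # all points visible
err = reprojection_error(cams, pts, obs, vis, pinhole)
```

Since the observations here are generated by the same projection model, `err` is zero; perturbing `pts` or `cams` makes it positive, and a bundle adjuster minimizes exactly this quantity over both sets of parameters.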
