
Photogrammetry

Photogrammetry falls under the broader category of Geomatics and, according to the American Society for Photogrammetry and Remote Sensing (ASPRS), is defined as 'the art, science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena'. A simplified definition is the extraction of three-dimensional measurements from two-dimensional data (i.e. images). Close-range photogrammetry refers to the collection of photography from a lesser distance than traditional aerial (or orbital) photogrammetry, and 'digital' is also an important part of the name, as it implies the modern digital techniques discussed in this guide.

Photogrammetry is as old as modern photography, dating to the mid-19th century. In the simplest example, the distance between two points that lie on a plane parallel to the photographic image plane can be determined by measuring their distance on the image, provided the scale (s) of the image is known.
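As a minimal illustration of this simplest case, the object-space distance is just the measured image distance divided by the image scale; the 1:500 scale and the 24 mm measurement below are invented purely for illustration.

    def object_distance(image_distance, s):
        # Two points on a plane parallel to the image plane: their real-world
        # separation is the measured image separation divided by the scale s.
        return image_distance / s

    # Invented example: a 1:500 image scale and 24 mm measured on the image
    # correspond to 24 / (1/500) = 12 000 mm = 12 m in object space.
    print(object_distance(24.0, 1 / 500))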
Photogrammetric analysis may be applied to a single photograph, or may use high-speed photography and remote sensing to detect, measure and record complex 2D and 3D motion fields by feeding measurements and imagery analysis into computational models in an attempt to successively estimate, with increasing accuracy, the actual 3D relative motions. From its beginning with the stereoplotters used to plot contour lines on topographic maps, photogrammetry now has a very wide range of uses, and its methods are also applied to data from sonar, radar, and lidar. It draws on methods from many disciplines, including optics and projective geometry.

Digital image capturing and photogrammetric processing include several well-defined stages, which allow the generation of 2D or 3D digital models of the object as an end product. Four main kinds of information can go into and come out of photogrammetric methods. The 3D coordinates define the locations of object points in 3D space. The image coordinates define the locations of the object points' images on the film or on an electronic imaging device. The exterior orientation of a camera defines its location in space and its view direction. The inner orientation defines the geometric parameters of the imaging process; this is primarily the focal length of the lens, but can also include a description of lens distortions. Additional observations also play an important role: scale bars (essentially a known distance between two points in space) or known fixed points establish the connection to the basic measuring units. Each of the four main variables can be an input or an output of a photogrammetric method.

Algorithms for photogrammetry typically attempt to minimize the sum of the squares of errors over the coordinates and relative displacements of the reference points. This minimization is known as bundle adjustment and is often performed using the Levenberg–Marquardt algorithm (a toy sketch appears at the end of this section). A special case, called stereophotogrammetry, involves estimating the three-dimensional coordinates of points on an object from measurements made in two or more photographic images taken from different positions (see stereoscopy). Common points are identified on each image, and a line of sight (or ray) can be constructed from each camera location to the point on the object. It is the intersection of these rays (triangulation) that determines the three-dimensional location of the point, as sketched below. More sophisticated algorithms can exploit other information about the scene that is known a priori, for example symmetries, in some cases allowing reconstruction of 3D coordinates from only one camera position. Stereophotogrammetry is emerging as a robust non-contacting measurement technique to determine dynamic characteristics and mode shapes of non-rotating and rotating structures.
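As a minimal sketch of the ray-intersection step, the snippet below triangulates a point as the midpoint of the shortest segment between two rays. The camera centres, viewing directions and the 'true' point are invented purely for illustration, and the cameras are assumed to be ideal pinholes with known exterior orientation.

    import numpy as np

    def triangulate_midpoint(c1, d1, c2, d2):
        # Midpoint of the shortest segment between two rays, each given by a
        # camera centre c and a viewing direction d (the classic two-ray case).
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        # Least-squares solve for the ray parameters t1, t2 that bring the rays closest.
        A = np.array([[d1 @ d1, -(d1 @ d2)],
                      [d1 @ d2, -(d2 @ d2)]])
        b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
        t1, t2 = np.linalg.solve(A, b)
        return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

    # Invented example: two cameras 1 m apart along the x-axis observing a point
    # about 5 m away; with exact rays the intersection recovers the point exactly.
    c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
    point = np.array([0.4, 0.2, 5.0])
    print(triangulate_midpoint(c1, point - c1, c2, point - c2))  # ~[0.4, 0.2, 5.0]

In practice the rays come from matched image coordinates mapped through each camera's interior and exterior orientation, and they rarely intersect exactly, which is why a midpoint or least-squares solution is used.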

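In the same spirit, the following toy refinement illustrates the least-squares idea behind bundle adjustment, using SciPy's Levenberg-Marquardt solver. It refines only a single 3D point against noisy reprojections from two assumed pinhole cameras with fixed poses; a real bundle adjustment would simultaneously refine many points together with the cameras' exterior (and often interior) orientation. All values are invented for illustration.

    import numpy as np
    from scipy.optimize import least_squares

    def project(X, centre, focal=1.0):
        # Idealized pinhole camera at 'centre', looking straight down the +z axis.
        x, y, z = X - centre
        return focal * np.array([x / z, y / z])

    centres = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
    true_point = np.array([0.4, 0.2, 5.0])
    rng = np.random.default_rng(0)
    observed = [project(true_point, c) + rng.normal(0.0, 1e-3, 2) for c in centres]

    def residuals(X):
        # Reprojection errors stacked over all cameras: the quantity whose
        # squared sum the adjustment minimizes.
        return np.concatenate([project(X, c) - z for c, z in zip(centres, observed)])

    # method='lm' selects the Levenberg-Marquardt algorithm mentioned above;
    # the triangulated estimate from the previous sketch is a natural starting value.
    solution = least_squares(residuals, x0=np.array([0.3, 0.1, 4.0]), method='lm')
    print(solution.x)  # close to [0.4, 0.2, 5.0]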