The vast majority of localisation methods considered thus far are subject to a number of crucial assumptions and constraints. First, the robot is constrained to move over a planar surface, in an environment composed exclusively of rectilinear structures, and its sensors must meet strict pose constraints. Second, the robot relies on the robust extraction of features, which are often based on assumptions about the characteristics of the environment. Finally, many of the methods depend on an accurate a priori map.
A number of researchers have developed methods which avoid the use of explicit features or maps. These methods express the sensor data as a function of the pose of the robot and attempt to invert this function; in other words, they perform sensor inversion. Principal components analysis (PCA), sometimes known as eigenspace analysis, is a general pattern classification technique which has enjoyed successful application in the domain of face and object recognition, and has recently seen some success in the problem of position estimation [44, 6, 19, 16]. PCA treats dense sensor data (such as that from a camera), or an extracted feature vector, as a vector in a high-dimensional space, and classifies the input data based on a projection of that vector into a subspace that maximises its discrimination from other samples. Nayar et al. have developed a method for correcting the pose of a camera mounted on an end effector by employing a principal components representation of the space of possible camera views , and Jepson and Black have used eigenspace techniques for tracking objects which undergo changes in pose . These methods are similar to the Kalman filter in that they rely on a linear approximation to the underlying behaviour of the data. They differ, however, in that they do not rely on explicitly interpreted features, but instead linearise the statistical variation of the data in order to choose maximally discriminating features, which are unlikely to hold any explicit semantic value.
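As a concrete illustration of the eigenspace approach, the following sketch builds a principal components basis from training views taken at known poses, projects each view into the subspace, and estimates the pose of a query view by nearest neighbour in that subspace. The synthetic data, dimensions, and nearest-neighbour classification step are illustrative assumptions, not details of the cited systems.

```python
import numpy as np

def build_eigenspace(views, k):
    """views: (n_samples, n_pixels) array of flattened sensor readings.
    Returns the sample mean and the top-k principal components."""
    mean = views.mean(axis=0)
    # SVD of the mean-centred data yields the principal directions.
    _, _, vt = np.linalg.svd(views - mean, full_matrices=False)
    return mean, vt[:k]                   # basis: (k, n_pixels)

def project(basis, mean, view):
    # Coordinates of one view in the k-dimensional subspace.
    return basis @ (view - mean)

def estimate_pose(basis, mean, train_coords, train_poses, view):
    """Classify a query view by nearest neighbour in the subspace."""
    q = project(basis, mean, view)
    d = np.linalg.norm(train_coords - q, axis=1)
    return train_poses[np.argmin(d)]

# Illustrative use with synthetic data (20 views, 100 "pixels" each):
rng = np.random.default_rng(0)
poses = np.linspace(0.0, 1.0, 20)                    # known training poses
views = rng.normal(size=(20, 100)) + poses[:, None]  # fake sensor vectors
mean, basis = build_eigenspace(views, k=5)
coords = np.array([project(basis, mean, v) for v in views])
pose = estimate_pose(basis, mean, coords, poses, views[7])
```

Because the discriminating directions are chosen purely from the statistics of the training set, the subspace coordinates carry no explicit semantic meaning, which is precisely the property noted above.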
Dudek and Zhang have also employed the notion of sensor inversion in their implementation of image-based position estimation . In that work, a neural network was employed to invert the edge statistics of an image as a function of position. In similar work, Oore, Hinton and Dudek have implemented a position estimator as a neural network which processes sonar data . While neural networks have been shown to give good results for highly nonlinear or complex inputs, they can be difficult to tune, a difficulty compounded by the fact that retraining is usually required after changes to the environment. In addition, the behaviour of a particular implementation can be difficult to evaluate, and may be inconsistent with that of the same implementation under different environmental conditions. A further difficulty posed by the use of neural networks is that the solutions often depend on global features. That is, such methods tend to fail completely in the presence of outliers, such as when part of the image becomes obscured (perhaps by another robot or person passing through the field of view of the camera), or when the camera fails to meet its pose constraints.
A significant problem associated with sensor inversion in general is that the function to be inverted may not be one-to-one, a situation which may not be easily detected a priori. Dudek and Zhang address this difficulty by implementing a consistency measure which incorporates multiple measurements taken under different viewing conditions in order to achieve optimal consistency in the resulting pose estimate .
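The role of such a consistency measure can be sketched with a deliberately simple example. Here the inverse sensor model is two-to-one (each reading is explained by two candidate positions), and the ambiguity is resolved by taking further readings after known motions and keeping the start-pose hypothesis on which all measurements agree. The inverse model, the scalar pose, and the scoring rule are illustrative assumptions, not the cited authors' formulation.

```python
import numpy as np

def invert_measurement(z):
    """Hypothetical inverse sensor model: each reading z = x**2 is
    consistent with two candidate positions, so inversion is ambiguous."""
    return [np.sqrt(z), -np.sqrt(z)]

def consistent_pose(readings, offsets):
    """readings[i] was taken after moving offsets[i] from the start pose.
    Return the start-pose hypothesis on which the measurements agree."""
    # Translate every candidate back to a start-pose hypothesis.
    hypothesis_sets = [
        [c - off for c in invert_measurement(z)]
        for z, off in zip(readings, offsets)
    ]
    # Score each hypothesis from the first set by its worst disagreement
    # with the closest candidate from every other measurement.
    best, best_score = None, np.inf
    for h in hypothesis_sets[0]:
        score = max(min(abs(h - c) for c in s) for s in hypothesis_sets[1:])
        if score < best_score:
            best, best_score = h, score
    return best

# True start pose x = 2; each reading z = (x + offset)**2 is ambiguous
# on its own, but only x = 2 explains all three readings together.
offsets = [0.0, 1.0, 3.0]
readings = [(2.0 + d) ** 2 for d in offsets]
print(consistent_pose(readings, offsets))   # → 2.0
```

A single reading here would leave two equally valid poses; it is only the agreement across viewpoints that selects the correct one, which is the essence of the consistency argument.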