Near point problems arise throughout computational geometry as well as in a variety of other scientific fields. This work examines four common near point problems and presents original algorithms that solve them.

Planar nearest neighbor searching is strongly motivated by geographic information system and sensor network problems. Efficient data structures for near neighbor queries in the plane can exploit the extremely low dimension for fast results. To this end, DelaunayNN is an algorithm that uses Delaunay graphs and Voronoi cells to answer queries in O(log n) time, faster in practice than other common state-of-the-art algorithms.

k-Nearest neighbor graph construction arises in computer graphics, in areas such as normal estimation and surface simplification. This work presents knng, an efficient algorithm that uses Morton ordering to solve the problem. The knng algorithm exploits cache coherence, requires little storage space, and is highly amenable to optimization for parallel processors.

The GeoFilterKruskal algorithm computes geometric minimum spanning trees. A common tool in clustering problems, GMSTs extend the classical minimum spanning tree problem to the complete graph of a point set. By using well-separated pair decomposition, bichromatic closest pair computation, and partitioning and filtering techniques, GeoFilterKruskal greatly reduces the total computation required. It is also one of the few algorithms that computes GMSTs in a manner that lends itself to parallel computation, a major advantage over its competitors.

High dimensional nearest neighbor searching is an expensive operation, owing to the exponential dependence on dimension of many lower dimensional solutions. Modern techniques for this problem often project the data points into a large number of lower dimensional subspaces. PCANN explores the idea of picking one particularly relevant subspace for projection.
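The Morton ordering underlying knng can be illustrated with a short sketch. The abstract does not give the algorithm's details, so the code below only demonstrates the general idea: interleaving the bits of integer coordinates produces a Z-order key, and sorting points by that key keeps spatially nearby points close together in memory, which is the cache-coherence property the abstract mentions. The function name `morton_key` and the 2D, 16-bit setup are illustrative assumptions, not the thesis's implementation.

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of nonnegative integers x and y into a
    Morton (Z-order) key. Bit i of x lands at position 2*i, and
    bit i of y at position 2*i + 1."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

# Sorting by Morton key gives a 1D ordering that roughly preserves
# 2D locality; nearby keys tend to be nearby points.
points = [(3, 5), (10, 2), (4, 4), (11, 3)]
points.sort(key=lambda p: morton_key(p[0], p[1]))
```

A k-nearest neighbor graph builder can then scan a window of the sorted sequence around each point instead of searching the whole set, which is also easy to split across parallel workers.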
When applied to SIFT data, principal component analysis allows for a greatly reduced dimension with no need for multiple projections. Additionally, this algorithm is well suited to exploiting parallel computing power.
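The projection step described above can be sketched as follows. This is not the thesis's PCANN algorithm, only a minimal illustration of projecting high-dimensional descriptors onto a single PCA subspace; the random stand-in data, the subspace dimension k = 16, and the variable names are all assumptions for the example.

```python
import numpy as np

# Toy stand-in for a set of SIFT descriptors: n points in d = 128 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))

# Center the data and take the top-k principal directions via SVD.
k = 16
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:k]                      # k x 128 projection matrix

# Database points (and, later, queries) are projected into the same
# k-dimensional subspace, where a standard low-dimensional nearest
# neighbor search can be run. The projections are independent per
# point, so this step parallelizes trivially.
X_low = (X - mu) @ P.T          # shape (1000, k)
```

Because every point uses the one subspace chosen by PCA, there is no need to maintain multiple projections of the data set, in contrast to multi-subspace projection schemes.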