Abstract
The size of large, geo-located datasets has reached scales where visualizing all data points is inefficient. Random sampling is a common way to reduce a dataset's size, yet it can introduce unwanted error. We describe a subsampling method for spatial data that is suitable for creating kernel density estimates from very large data, and demonstrate that it results in less error than random sampling. We also introduce a method ensuring that thresholding of low values based on the sampled data does not omit any region whose true value lies above the desired threshold. We demonstrate the effectiveness of our approach on both artificial and real-world large geospatial datasets.
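The paper's contribution is a coreset construction with a uniform error guarantee on the kernel density estimate; that construction is not reproduced here. The following is a minimal illustrative sketch in Python (all names, sizes, and parameters are our own hypothetical choices, not the paper's code) showing (a) how a subsample's KDE can be compared against the full-data KDE, and (b) how a uniform error bound eps enables conservative thresholding so that no region above the threshold is missed:

import numpy as np

def kde(points, queries, bandwidth=1.0):
    # Gaussian kernel density estimate of `points` evaluated at `queries`.
    d2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)

rng = np.random.default_rng(0)
data = rng.normal(size=(20_000, 2))        # stand-in for a large geo dataset
grid = rng.uniform(-3, 3, size=(200, 2))   # query locations

full = kde(data, grid)
sample = data[rng.choice(len(data), 500, replace=False)]  # plain random sample
approx = kde(sample, grid)
print("max |error| of random sample:", np.abs(full - approx).max())

# Conservative thresholding: if the sample's KDE is known to be within eps of
# the full KDE everywhere (the guarantee a coreset would supply), lowering the
# threshold tau by eps guarantees no region whose true density exceeds tau is
# omitted, at the cost of some false positives.
tau = 0.05                                 # hypothetical display threshold
eps = np.abs(full - approx).max()          # in practice, eps comes from the method
kept = approx >= tau - eps                 # never misses a true region
assert np.all(kept[full >= tau])

In this sketch eps is measured empirically against the full data; the point of a coreset guarantee is that eps is known in advance without evaluating the full-data KDE.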
Citation
Yan Zheng, Yi Ou, Alexander Lex, Jeff M. Phillips
Visualization of Big Spatial Data using Coresets for Kernel Density Estimates
IEEE Transactions on Big Data, 7(3): 524-534, doi:10.1109/TBDATA.2019.2913655, 2019.
BibTeX
@article{2019_tbd_coresets,
  title   = {Visualization of Big Spatial Data using Coresets for Kernel Density Estimates},
  author  = {Yan Zheng and Yi Ou and Alexander Lex and Jeff M. Phillips},
  journal = {IEEE Transactions on Big Data},
  doi     = {10.1109/TBDATA.2019.2913655},
  volume  = {7},
  number  = {3},
  pages   = {524--534},
  year    = {2019}
}
Note
This is the journal version of a previously published conference paper.
Acknowledgements
We gratefully acknowledge support by NSF awards CCF-1350888, IIS-1251019, ACI-1443046, CNS-1514520, and CNS-1564287, and NIH award U01 CA198935.