The construction and study of systems that can extract useful knowledge from massively growing data have become extremely challenging. Conventional knowledge-extraction tools, which handle computations on matrices/graphs, typically do not scale to extremely large data sizes. Major difficulties arise in particular when the data correlations are dense: the underlying matrices/graphs cannot fit into a single machine, nor can they be effectively partitioned, parallelized, and communicated among multiple processing units (i.e., the system's bandwidth limitations).
Our research focuses on the development of a novel data/platform-aware framework for massive analysis and knowledge extraction applications on structured dense matrices. Such matrices commonly arise in a broad range of engineering applications, including signal processing, data mining, boundary element methods, computer vision, and N-body problems. Our approach is based upon the intuition that despite the apparent dimensionality of massive data, a structured dense matrix occupies a much smaller subset of the space or admits a low-dimensional signal model. Since the major bottleneck in partitioning and parallelization is the system's bandwidth limitation, we create scalable transformations that automatically adapt to the specific structure of the data and to the capabilities/constraints of the underlying computing machinery, ranging from a single device to multi-machine systems with energy/delay/bandwidth/memory constraints. Our present results demonstrate more than two orders of magnitude improvement (compared with the best known systems) for computing on dense datasets with billions of nonzeros. We also discuss our progress and results for real-time applications and reconfigurable platforms.
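A minimal sketch of the low-dimensional-structure intuition the abstract alludes to (this is an illustrative toy example, not the speaker's actual framework): a dense matrix generated by a rank-r signal model can be stored and applied in O(nr) rather than O(n^2) cost, which is what makes partitioning and communication tractable under bandwidth constraints.

```python
import numpy as np

# Hypothetical illustration: a dense n x n matrix that secretly follows a
# rank-r signal model. It looks like n^2 numbers, but a rank-r factorization
# captures it exactly with only O(n*r) storage and matvec work.
rng = np.random.default_rng(0)
n, r = 500, 10
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # dense, but rank r

# Truncated SVD exposes the compact representation.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]

x = rng.standard_normal(n)
y_dense = A @ x                        # O(n^2) work on the full matrix
y_lowrank = U_r @ (s_r * (Vt_r @ x))   # O(n*r) work on the factors

# The factorized product matches the dense one (up to numerical precision),
# while the factors use a small fraction of the memory.
print(np.allclose(y_dense, y_lowrank))
print((U_r.size + s_r.size + Vt_r.size) / A.size)
```

The same principle underlies structured-dense-matrix methods more broadly: once the low-dimensional model is identified, only the compact factors need to be moved between processing units, sidestepping the bandwidth bottleneck that a naive dense partitioning would hit.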
Wednesday, April 9, 2014
10:15am - McMurtry Auditorium, Duncan Hall
2014 ECE Affiliates Meeting
3 Ships - Leadership, Internship, and Entrepreneurship
Associate Professor, Department of Electrical and Computer Engineering.