MatrixMap: Programming abstraction and implementation of matrix computation for big data analytics

  • Published: 01 October 2016
  • The computation core of many big data applications can be expressed as general matrix computations, including both linear algebra operations and irregular matrix operations. However, existing parallel programming systems such as Spark provide neither a programming abstraction nor an efficient implementation for general matrix computations. In this paper, we present MatrixMap, a unified and efficient data-parallel programming framework for general matrix computations. MatrixMap provides a powerful yet simple abstraction, consisting of a distributed in-memory data structure called the bulk key matrix and a programming interface defined by matrix patterns. Users can easily load data into bulk key matrices and express algorithms as parallel matrix patterns. MatrixMap outperforms current state-of-the-art systems through three key techniques: matrix patterns with lambda functions for irregular and linear algebra matrix operations, an asynchronous computation pipeline with context-aware data-shuffling strategies for specific matrix patterns, and an in-memory data structure that reuses data across iterations. Moreover, it automatically handles the parallelization and distributed execution of programs on a large cluster. Experimental results show that MatrixMap is up to 12 times faster than Spark.
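    The abstract names the core abstractions (a bulk key matrix, plus matrix patterns driven by user lambda functions) but this page shows no code. Below is a minimal, single-process C++ sketch of the idea only: `BulkKeyMatrix`, `MapEntries`, and `Multiply` are hypothetical names invented for illustration, not the published MatrixMap API, and the distributed storage, asynchronous pipeline, and shuffling machinery the paper describes are omitted. The toy workload is a few PageRank iterations, a standard example of iterative matrix computation.

```cpp
// Hypothetical single-process sketch of the "bulk key matrix" idea and of
// matrix patterns parameterized by user lambdas. All names are illustrative,
// not the actual MatrixMap API.
#include <cstdio>
#include <functional>
#include <map>
#include <utility>
#include <vector>

// A "bulk key matrix": rows addressed by key, each row stored sparsely as
// (column key, value) pairs. In MatrixMap this structure is distributed and
// kept in memory across iterations; here it is a local stand-in.
using SparseRow     = std::vector<std::pair<int, double>>;
using BulkKeyMatrix = std::map<int, SparseRow>;

// An irregular matrix pattern: apply a user lambda to every entry in place.
void MapEntries(BulkKeyMatrix& m,
                const std::function<double(int, int, double)>& f) {
    for (auto& [row, entries] : m)
        for (auto& [col, val] : entries)
            val = f(row, col, val);
}

// A linear-algebra pattern: sparse matrix-vector multiply over keyed rows.
std::map<int, double> Multiply(const BulkKeyMatrix& m,
                               const std::map<int, double>& v) {
    std::map<int, double> out;
    for (const auto& [row, entries] : m)
        for (const auto& [col, val] : entries) {
            auto it = v.find(col);
            if (it != v.end()) out[row] += val * it->second;
        }
    return out;
}

int main() {
    // Toy 3-page transition matrix, pre-transposed and normalized:
    // row = target page, entry = (source page, 1 / outdegree(source)).
    BulkKeyMatrix m = {
        {0, {{2, 1.0}}},
        {1, {{0, 0.5}}},
        {2, {{0, 0.5}, {1, 1.0}}},
    };
    std::map<int, double> rank = {{0, 1.0 / 3}, {1, 1.0 / 3}, {2, 1.0 / 3}};
    const double d = 0.85, n = 3.0;

    // Fold the damping factor into the matrix once, via the entry pattern.
    MapEntries(m, [d](int, int, double v) { return d * v; });

    // Iterative computation: the matrix is reused across iterations, which
    // is the access pattern the bulk key matrix is designed to serve.
    for (int iter = 0; iter < 20; ++iter) {
        std::map<int, double> contrib = Multiply(m, rank);
        for (auto& [page, r] : rank)
            r = (1.0 - d) / n + contrib[page];
    }
    for (const auto& [page, r] : rank)
        std::printf("page %d rank %.4f\n", page, r);
    return 0;
}
```

    The point of the sketch is the division of labor the abstract attributes to MatrixMap: the framework owns the matrix patterns (here `MapEntries` and `Multiply`), while the user supplies only short per-entry lambdas, so the same patterns serve both linear algebra and irregular matrix operations.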

    Citation: Yaguang Huangfu, Guanqing Liang, Jiannong Cao. MatrixMap: Programming abstraction and implementation of matrix computation for big data analytics[J]. Big Data and Information Analytics, 2016, 1(4): 349-376. doi: 10.3934/bdia.2016015

  • © 2016 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)