An Efficient Data Extracting Method Based on Hadoop

2014 
As an open-source big data solution, the Hadoop ecosystem has been widely accepted and applied. However, importing large amounts of data from traditional relational databases into Hadoop in a short time has become a major challenge in the ETL (Extract-Transform-Load) stage of big data processing. This paper presents an efficient parallel data extraction method based on Hadoop, which uses the MapReduce computation engine to call the JDBC (Java Database Connectivity) interface for data extraction. To address the problem of splitting the input across multiple Map tasks, the paper proposes a dynamic segmentation algorithm based on range partitioning, which effectively avoids data skew and distributes the input data more uniformly across the Map tasks. Experimental results show that, compared with the ETL tool Sqoop, which uses the same MapReduce computation engine, the proposed method divides the input data more uniformly and takes less time to extract the same data.
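To make the range-partitioning idea concrete, the sketch below divides a numeric key range into contiguous sub-ranges, one per Map task, so that each mapper can issue a bounded JDBC query such as `WHERE id >= lo AND id < hi`. This is a minimal illustration of range-based split computation in the spirit of the abstract, not the paper's actual dynamic segmentation algorithm; the class and method names (`RangeSplitter`, `computeSplits`) are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not the paper's algorithm): partition a numeric
 * key range [min, max] into half-open sub-ranges, one per Map task.
 * Each mapper would then extract its range over JDBC, e.g.
 * "SELECT ... WHERE id >= lo AND id < hi".
 */
public class RangeSplitter {

    /** A half-open key range [lo, hi) handled by one Map task. */
    public static final class Split {
        public final long lo;
        public final long hi;
        Split(long lo, long hi) { this.lo = lo; this.hi = hi; }
        @Override public String toString() { return "[" + lo + ", " + hi + ")"; }
    }

    /**
     * Evenly partition the inclusive key range [min, max] into numSplits
     * half-open ranges, spreading any remainder over the first splits so
     * that no mapper receives more than one extra key.
     */
    public static List<Split> computeSplits(long min, long max, int numSplits) {
        long total = max - min + 1;                  // keys inclusive on both ends
        long base = total / numSplits;
        long remainder = total % numSplits;
        List<Split> splits = new ArrayList<>();
        long lo = min;
        for (int i = 0; i < numSplits; i++) {
            long size = base + (i < remainder ? 1 : 0);
            splits.add(new Split(lo, lo + size));    // hi is exclusive
            lo += size;
        }
        return splits;
    }

    public static void main(String[] args) {
        // Example: keys 1..1000 split across 4 mappers, 250 keys each.
        for (Split s : computeSplits(1, 1000, 4)) {
            System.out.println(s);
        }
    }
}
```

Note that a purely static split like this is exactly where skew can arise when keys are unevenly distributed; the paper's contribution is a dynamic scheme that adjusts the partitioning so each Map task receives a more uniform share of the actual data.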