ETL is the most common tool in building an enterprise data warehouse (EDW), and of course the first step in data integration. As big data emerged, more and more customers started using Hadoop and Spark. Personally, I agree with the idea that Spark will replace most ETL tools. Several industry shifts are driving this:
- Business intelligence -> big data
- Data warehouses -> data lakes
- Applications -> microservices
Problems with traditional ETL:
- Data gets out of sync; each copy is a risk.
- Performance issues and wasted server resources (sized for peak load), even though ETL tools can do limited parallel work.
- Plain-text code hidden in stages (typically VB or Java).
- CSV files are not type safe.
- All-or-nothing approach in batch jobs.
- Legacy code.
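To illustrate the CSV point above: a CSV file carries no schema, so every field comes back as a string and type errors only surface downstream, at processing time. A minimal sketch using Python's standard `csv` module (the sample payload here is invented for illustration):

```python
import csv
import io

# A CSV payload with a "numeric" quantity column; CSV itself carries no types.
payload = "item,quantity\nwidget,5\ngadget,oops\n"

rows = list(csv.DictReader(io.StringIO(payload)))

# Every value is a plain string -- even the quantity column.
print(type(rows[0]["quantity"]).__name__)  # str

# Bad data is only caught when something finally tries to convert it.
try:
    total = sum(int(r["quantity"]) for r in rows)
except ValueError as e:
    print("type error surfaces at processing time:", e)
```

A typed format such as Parquet (used in the load example further down) carries the schema with the data, so this class of error is caught at write time instead.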
Spark for ETL
- Parallel processing is built in.
- Streaming can be used to parallelize ETL.
- Hadoop is the data source, so we don't need to copy data, which reduces risk.
- Just one codebase (Scala or Python).
- Machine learning is included.
- Security, unit testing, performance measurement, exception handling, and monitoring.
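On the unit-testing point: if each transformation is written as an ordinary function, it can be tested against a tiny in-memory fixture with no cluster and no files. A hedged sketch in plain Python; the filter-then-aggregate transform and the sample records are invented for illustration:

```python
# A transform written as a pure function: easy to unit test in isolation.
def transform(records, min_amount=10):
    """Keep orders at or above min_amount and total them per customer."""
    totals = {}
    for rec in records:
        if rec["amount"] >= min_amount:
            totals[rec["customer"]] = totals.get(rec["customer"], 0) + rec["amount"]
    return totals

# Unit test with a small in-memory fixture.
sample = [
    {"customer": "a", "amount": 5},
    {"customer": "a", "amount": 20},
    {"customer": "b", "amount": 15},
]
assert transform(sample) == {"a": 20, "b": 15}
```

The same idea applies to Spark: keep the DataFrame-to-DataFrame logic in a named function and feed it a small local DataFrame in tests.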
A simple example:
```python
(spark.read.json("/sourcepath")   # Extract
    .filter(...)                  # Transform (this line and below)
    .agg(...)
    .write.mode("append")         # Load
    .parquet("/outputpath"))
```
```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# @param1: master  @param2: app name
sc = SparkContext("local", "NetworkWordCount")

# @param1: Spark context  @param2: batch interval in seconds
ssc = StreamingContext(sc, 1)

stream = ssc.textFileStream("path")
# do transform
# do load

ssc.start()
ssc.awaitTermination()
```
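The streaming skeleton above also addresses the all-or-nothing problem noted earlier: data is processed in small independent batches, so one bad batch does not void the whole run. The micro-batch idea can be sketched in plain Python (the batch size, records, and transform are invented for illustration):

```python
# Micro-batch processing sketch: each small batch is extracted,
# transformed, and loaded independently, unlike an all-or-nothing batch job.
def micro_batches(records, batch_size):
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

records = list(range(10)) + ["bad"] + list(range(5))
loaded = []
failed_batches = 0

for batch in micro_batches(records, batch_size=4):
    try:
        transformed = [x + 2 for x in batch]  # transform step
        loaded.extend(transformed)            # load step
    except TypeError:
        failed_batches += 1                   # only this batch is lost

print(len(loaded), failed_batches)  # 12 1
```

Out of 16 records, the one malformed record costs only its own batch of 4; the other 12 records load successfully.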