- 創愿(上海)信息技術有限公司 is hiring: Junior Data Engineer
- Location: Jing'an (Shanghai) · No prior work experience required · Bachelor's degree
- Benefits: social insurance and housing fund (五險一金), year-end double salary, transportation allowance, overtime pay, meal allowance
Job Responsibilities:
1. Create and maintain ETL pipelines using Airflow/Python
2. Build and update big data pipelines to maintain Kargo’s data lake
3. Ensure that users have the right level of access to our data assets
4. Coordinate with the insights team to extract data for ad hoc reporting
5. Design and create impactful visualizations using a variety of BI tools
6. Detect and clean anomalies in the data
7. Communicate technical and business topics through written, verbal, and presentation materials as appropriate
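For illustration, the pipeline duties above (building ETL pipelines and cleaning deviations in the data) can be sketched as a minimal extract-transform-load flow in plain Python. The function names, field names, and sample records are hypothetical; a production pipeline at this level would typically run these steps as tasks in an Airflow DAG.

```python
# Minimal ETL sketch: extract raw records, clean deviations, load the result.
# All names and data here are illustrative assumptions, not Kargo's actual code.

def extract(rows):
    """Extract: yield raw records (here, from an in-memory source)."""
    yield from rows

def transform(records):
    """Transform: drop records with missing values and normalize types."""
    for rec in records:
        if rec.get("revenue") is None:
            continue  # "clean anomalies in the data"
        yield {
            "campaign": rec["campaign"].strip(),
            "revenue": float(rec["revenue"]),
        }

def load(records):
    """Load: materialize into the target store (a list stands in for a table)."""
    return list(records)

if __name__ == "__main__":
    raw = [
        {"campaign": " spring_sale ", "revenue": "120.5"},
        {"campaign": "bad_row", "revenue": None},  # deviation to be cleaned
    ]
    print(load(transform(extract(raw))))
```

In an Airflow deployment, each of the three functions would map to its own task so that failures can be retried per stage.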
Job Requirements:
1. Experience with big data processing in the Apache Hadoop/Spark ecosystem (e.g. Hive, Kafka, HDFS) preferred
2. Strong experience in data warehouse ETL design and development, including methodologies, tools, processes, and best practices
3. Strong experience creating polished dashboards and reports for C-level executives
4. A passion for data, with demonstrable experience working with truly large data sets
5. Strong experience in Cloud Technologies like AWS, Azure or Aliyun
6. Strong experience in creating data pipelines using Python, Airflow or similar ETL tools
7. Strong experience in Data Modelling
8. Expert SQL skills
9. Good command of Linux
10. Strong learning ability: able to research a new subject independently and build an appreciable level of expertise in it.
11. A Bachelor's degree is required unless you have substantial work or practical experience.
12. Good communication skills in English and Chinese.