Role Summary

We are looking for an experienced Data Engineering Manager with a strong background in the Hadoop ecosystem and big data technologies to lead our data engineering team. This role involves designing and managing scalable big data solutions, building robust data pipelines, and ensuring the efficient operation of our data infrastructure. The ideal candidate will combine technical expertise with leadership skills to drive innovation and deliver data solutions that empower business decision-making.

Core Responsibilities

  • Team Leadership & Management:
    Lead, mentor, and manage a team of data engineers, fostering a collaborative and innovative work environment.
  • Big Data Infrastructure & Pipeline Development:
    Design, develop, and maintain scalable data pipelines using Hadoop technologies (HDFS, MapReduce, Hive, Pig, etc.); a brief illustrative sketch follows this list.
  • Collaboration & Stakeholder Engagement:
    Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver tailored solutions.
  • Data Governance & Security:
    Establish and enforce data governance policies to ensure data quality, security, and compliance with regulations (e.g., GDPR, CCPA).
  • Strategic Planning & Innovation:
    Define and execute the big data engineering roadmap in alignment with the organization's data strategy.
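
For illustration only (not an additional responsibility), the following is a minimal sketch of the kind of pipeline building block this role oversees: a count-by-key Hadoop Streaming job written in Python. The event-log format and field layout are hypothetical.

    #!/usr/bin/env python3
    # mapper.py -- hypothetical Hadoop Streaming mapper.
    # Assumes tab-separated input whose first field is an event type.
    import sys

    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if fields and fields[0]:
            print(f"{fields[0]}\t1")  # emit (event_type, 1)

    #!/usr/bin/env python3
    # reducer.py -- sums the counts emitted by mapper.py.
    # Hadoop Streaming delivers mapper output sorted by key.
    import sys

    current_key, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current_key:
            if current_key is not None:
                print(f"{current_key}\t{count}")
            current_key, count = key, 0
        count += int(value)
    if current_key is not None:
        print(f"{current_key}\t{count}")

Such a job is typically submitted with the hadoop-streaming JAR (its exact path varies by distribution), passing mapper.py and reducer.py via the -mapper and -reducer options.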

Required Skills

  • Experience:
    Proven experience (7+ years) in data engineering, with at least 1 year in a leadership or managerial role.
  • Hadoop Ecosystem:
    Expertise in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Pig, and YARN.
  • Data Modeling:
    Knowledge of data modeling, schema design, and performance optimization in big data environments (see the partitioning sketch after this list).
  • Soft Skills:
    Strong problem-solving skills and the ability to work in a fast-paced, dynamic environment.
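
As a concrete, purely illustrative example of the performance side of data modeling, the sketch below writes a date-partitioned Parquet dataset with the pyarrow library; the table and column names are made up. Partition layouts of this kind are what let engines such as Hive or Spark prune partitions and scan only the dates a query touches.

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Hypothetical fact table: one row per page view.
    events = pa.table({
        "event_date": ["2024-01-01", "2024-01-01", "2024-01-02"],
        "user_id": [101, 102, 101],
        "url": ["/home", "/pricing", "/home"],
    })

    # Partitioning on event_date produces events/event_date=.../ directories,
    # so queries filtered by date read only the matching partitions.
    pq.write_to_dataset(events, root_path="events", partition_cols=["event_date"])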

Bonus/Preferred Experience

  • Azure Cloud:
    Familiarity with Azure-based big data solutions (e.g., Azure Data Lake, Azure Synapse Analytics, Azure HDInsight).
  • Data DevOps:
    Experience with DevOps practices, including CI/CD pipelines, containerization (e.g., Docker), and orchestration tools (e.g., Kubernetes).
  • Monitoring:
    Proficiency in monitoring and alerting tools (e.g., Prometheus, Grafana, Splunk, Nagios, or Datadog) to ensure system reliability and performance (a minimal exporter sketch follows).
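
To make the monitoring point concrete, here is a minimal, purely illustrative Prometheus exporter using the Python prometheus_client library; the metric name and the lag values are placeholders, not a description of any actual system.

    import random
    import time

    from prometheus_client import Gauge, start_http_server

    # Hypothetical pipeline-health metric; a real exporter would compute
    # lag from the pipeline's own bookkeeping, not random numbers.
    PIPELINE_LAG = Gauge(
        "pipeline_lag_seconds",
        "Seconds between event time and processing time",
    )

    if __name__ == "__main__":
        start_http_server(8000)  # expose /metrics for Prometheus to scrape
        while True:
            PIPELINE_LAG.set(random.uniform(0, 30))  # placeholder reading
            time.sleep(15)

An alerting rule would then fire when pipeline_lag_seconds stays above an agreed threshold.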

Candidate Profile

The ideal candidate is a results-driven leader with a passion for big data technologies and a proven ability to manage and grow high-performing teams. They are technically proficient in the Hadoop ecosystem, possess strong problem-solving skills, and excel at collaborating with cross-functional teams. A proactive mindset, excellent communication skills, and a commitment to delivering high-quality data solutions are essential for success in this role. Familiarity with Azure-based big data tools, DevOps practices, and monitoring tools is a strong advantage.