Job Description

Who we are:

Born digital, UST transforms lives through the power of technology. We walk alongside our clients and partners, embedding innovation and agility into everything they do. We help them create transformative experiences and human-centered solutions for a better world.

UST is a mission-driven group of more than 39,000 practical problem solvers and creative thinkers in over 30 countries. Our entrepreneurial teams are empowered to innovate, act nimbly, and create a lasting and sustainable impact for our clients, their customers, and the communities in which we live.

With us, you’ll create a boundless impact that transforms your career—and the lives of people across the world.

Visit us at UST.com.

You Are

We are seeking an experienced Senior Big Data Engineer with a minimum of 6 years of hands-on experience designing, implementing, and maintaining Big Data solutions using Apache Spark, Hive, and YARN. The ideal candidate will have a strong background in data processing, optimization, and integration, and a proven track record of delivering high-quality, scalable solutions in a fast-paced environment.

The Opportunity

 

  • Design and implement robust, scalable Big Data architectures using Apache Spark, Hive, and YARN.
  • Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications.
  • Develop and optimize data processing pipelines using Spark for efficient data transformation and analysis.
  • Implement Hive queries and data warehouse solutions for structured and semi-structured data.
  • Manage and optimize Apache YARN clusters for efficient resource allocation and utilization.
  • Troubleshoot and resolve performance issues related to cluster and job execution.
  • Implement ETL processes to ingest, transform, and load large volumes of data from diverse sources into the Big Data ecosystem.
  • Ensure data quality and integrity throughout the ETL process.
  • Conduct performance tuning and optimization of Spark and Hive jobs to enhance overall system efficiency.
  • Monitor and analyze job execution metrics to identify and address bottlenecks.
  • Collaborate with data scientists, analysts, and other stakeholders to understand their data processing needs.
  • Document and communicate technical solutions, best practices, and guidelines for the team. 

     

What You Need

 

  • Bachelor's degree in Computer Science, Engineering, or related field.
  • Minimum of 6 years of hands-on experience in Big Data technologies, with a focus on Apache Spark, Hive, and YARN.
  • Proficiency in programming languages such as Java, Scala, or Python.
  • Strong understanding of distributed computing principles and data processing frameworks.
  • Experience in designing and implementing ETL processes for large-scale data sets.
  • Proven ability to troubleshoot and optimize performance in Big Data environments.
  • Excellent communication and collaboration skills.
  • Ability to work in a fast-paced, dynamic environment.

Key Skills

Apache Spark, Hive, YARN, Java, Scala, Python, ETL, Big Data, Sqoop

Education

Any Graduate

  • Posted On: A few days ago
  • Experience: 6+ years
  • Availability: Remote
  • Openings: 1
  • Category: Big Data Engineer
  • Tenure: Any