Data Engineer

  • Engineering
  • Singapore, Singapore

Job description


Backed by leading global venture capital firms Sequoia and Wavemaker, Nugit is expanding its team to help build the future of data storytelling. Our AI technology makes building and sharing data stories easy, so everyone can be a great data storyteller. Companies such as Johnson & Johnson, Audi, Fave, Kellogg’s, and PropertyGuru use Nugit’s platform to give everyone access to the data they need to do their jobs well.


We are hiring Data Engineers to join our team and build the next generation of Nugit’s ETL pipeline.

This is your chance to work at the forefront of technology. You will have the freedom to design and implement scalable systems using modern frameworks, all while applying software engineering best practices and working with a diverse, talented team of engineers.


Nugit’s team comes from more than 12 countries and specialises in a broad range of technologies and areas, including Design, Data Engineering, DevOps, Visualisation and SaaS applications. Everyone contributes their unique skills and experience to create an environment that is ripe for learning and challenging oneself while delivering value to the business.


Working closely with members of the Product, API and Frontend Visualisation teams, you will be responsible for implementing Nugit’s ETL pipeline and will play a vital role in converting customers’ data into valuable business insights.


Core Responsibilities

  • Implement Extract, Transform and Load processes within the Nugit data pipeline

  • Assist in improving app architectures: explore existing systems and identify opportunities to improve maintainability, scalability and extensibility

  • Work with different data providers to integrate their data into Nugit’s pipeline

  • Contribute to continual improvement by identifying and incorporating new and emerging ETL technology

  • Build reusable components across different data providers


Requirements


For this role, we are seeking talented individuals who match the profile below:

  • Demonstrable experience in writing and maintaining ETL pipelines

  • Experience with Python

  • Experience with SQL/NoSQL databases

  • Experience with GNU/Linux or Unix systems

  • Strong focus on writing clean, reliable and maintainable code

  • Strong focus on test coverage and documentation

  • Fluency with Git and GitHub

  • Familiarity with Spark & Scala is a plus

  • Familiarity with Airflow is a plus

  • Interest or experience with functional programming languages is a plus

General Skills

  • You are entrepreneurial - you are an independent thinker and a problem solver

  • You are effective - you use version control with Git, you work fast, you ship product and you get stuff done

  • You are a team player - you want to work with engineers, designers and data scientists from all over the world

  • You are adaptable - you embrace the fast-paced and ever-changing nature of a startup

  • You are remarkable - you can show us real-world examples of the amazing things you have built in the past

  • You are curious and a fast learner