Careers

July 12, 2024

Work with Us

Data Science is a field that is here to stay and grow. Are you an experienced data professional looking to work on enterprise-scale projects, or ready to start your career in this exciting industry? Datalens is the perfect place for you.

Data Engineering - Databricks

JOB DESCRIPTION:

  • Employment Type: Full Time, Permanent
  • Location: Hyderabad, Singapore
  • No. of Positions: 1
  • Joining: Immediate

Key Responsibilities & Duties:

  • Work as part of a team to develop cloud data and analytics solutions.
  • Participate in the development of cloud data warehouses, data-as-a-service, and business intelligence solutions.
  • Provide forward-thinking solutions in data and analytics.
  • Develop modern data warehouse solutions.
  • Design, develop, and build data integration pipeline architecture, ensuring successful creation of the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, SQL, and other technologies.
  • Write complex, highly optimized queries across large datasets.
  • Handle code deployments and code reviews.
  • Develop and maintain design documentation, test cases, and performance monitoring and evaluation using Git, Confluence, and Cluster Manager.

Key Skills & Requirements:

  • 4–5 years of diversified IT experience, with a minimum of 3 years of relevant experience in Databricks.
  • Hands-on experience in Python/PySpark/Scala/Hive programming to develop ETL pipelines.
  • Working experience in ETL testing, including Master Data Management, Data Completeness, Data Cataloging, Data Lineage, and Data Quality for various data feeds coming from source systems.
  • Hands-on experience in creating stored procedures, functions, tables, cursors, and hash tables, and in handling large data types.
  • Experience in database testing, data comparison, and data transformation scripting.
  • Good knowledge of data warehousing concepts and data modelling.
  • Experience working on complex data architectures.
  • Ability to communicate progress and other relevant information to project stakeholders.
  • Proficiency with a source code control system such as Git.
  • Experience building CI/CD pipelines in data environments.
  • Excellent communication and documentation skills.
  • Prior experience working in an onsite/offshore model.

Add-Ons: Any AWS/Azure and Databricks certifications are a plus.

Kindly share your CV at talent.datalensai.com.

Sr. Data Engineering - Databricks

JOB DESCRIPTION:

  • Employment Type: Full Time, Permanent
  • Location: Hyderabad, Singapore
  • No. of Positions: 1
  • Joining: Immediate

Key Responsibilities & Duties:

  • Work as part of a team to develop cloud data and analytics solutions.
  • Participate in the development of cloud data warehouses, data-as-a-service, and business intelligence solutions.
  • Provide forward-thinking solutions in data and analytics.
  • Develop modern data warehouse solutions.
  • Design, develop, and build data integration pipeline architecture, ensuring successful creation of the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, SQL, and other technologies.
  • Write complex, highly optimized queries across large datasets.
  • Handle code deployments and code reviews.
  • Develop and maintain design documentation, test cases, and performance monitoring and evaluation using Git, Confluence, and Cluster Manager.

Key Skills & Requirements:

  • 4–5 years of diversified IT experience, with a minimum of 3 years of relevant experience in Databricks.
  • Hands-on experience in Python/PySpark/Scala/Hive programming to develop ETL pipelines.
  • Working experience in ETL testing, including Master Data Management, Data Completeness, Data Cataloging, Data Lineage, and Data Quality for various data feeds coming from source systems.
  • Hands-on experience in creating stored procedures, functions, tables, cursors, and hash tables, and in handling large data types.
  • Experience in database testing, data comparison, and data transformation scripting.
  • Good knowledge of data warehousing concepts and data modelling.
  • Experience working on complex data architectures.
  • Ability to communicate progress and other relevant information to project stakeholders.
  • Proficiency with a source code control system such as Git.
  • Experience building CI/CD pipelines in data environments.
  • Excellent communication and documentation skills.
  • Prior experience working in an onsite/offshore model.

Add-Ons: Any AWS/Azure and Databricks certifications are a plus.

Kindly share your CV at talent.datalensai.com.

Data Lead

JOB DESCRIPTION:

  • Employment Type: Full Time, Permanent
  • Location: Hyderabad, Singapore
  • No. of Positions: 1
  • Joining: Immediate

Key Responsibilities & Duties:

  • Work as part of a team to develop cloud data and analytics solutions.
  • Participate in the development of cloud data warehouses, data-as-a-service, and business intelligence solutions.
  • Provide forward-thinking solutions in data and analytics.
  • Develop modern data warehouse solutions.
  • Design, develop, and build data integration pipeline architecture, ensuring successful creation of the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, SQL, and other technologies.
  • Write complex, highly optimized queries across large datasets.
  • Handle code deployments and code reviews.
  • Develop and maintain design documentation, test cases, and performance monitoring and evaluation using Git, Confluence, and Cluster Manager.

Key Skills & Requirements:

  • 4–5 years of diversified IT experience, with a minimum of 3 years of relevant experience in Databricks.
  • Hands-on experience in Python/PySpark/Scala/Hive programming to develop ETL pipelines.
  • Working experience in ETL testing, including Master Data Management, Data Completeness, Data Cataloging, Data Lineage, and Data Quality for various data feeds coming from source systems.
  • Hands-on experience in creating stored procedures, functions, tables, cursors, and hash tables, and in handling large data types.
  • Experience in database testing, data comparison, and data transformation scripting.
  • Good knowledge of data warehousing concepts and data modelling.
  • Experience working on complex data architectures.
  • Ability to communicate progress and other relevant information to project stakeholders.
  • Proficiency with a source code control system such as Git.
  • Experience building CI/CD pipelines in data environments.
  • Excellent communication and documentation skills.
  • Prior experience working in an onsite/offshore model.

Add-Ons: Any AWS/Azure and Databricks certifications are a plus.

Kindly share your CV at talent.datalensai.com.