Data Engineer - India

Job Responsibilities

You will be responsible for maintaining large-scale data processing systems, data warehouses and data lakes to help manage the ever-growing information needs of our clients.

  • Your technical challenge will be to test and optimize systems that ingest, aggregate, and visualize terabytes of data to solve business-relevant problems for our customers.
  • Work with business users to refine analytical requirements for quantitative data (view-through, clickstream, acquisition, product usage, transactions), qualitative data (surveys, market research), and unstructured data (blogs, social networks).
  • Design and develop schema definitions and support data warehouses/marts to integrate disparate data sources from within and outside the client environment, aggregate the data, and make it available for analysis.
  • As a key member of the team, drive the adoption of new technologies, tools, and process improvements to build world-class analytical capabilities for web analytics, optimization, experimentation, and personalization.
  • Develop high-performance, scalable implementations of the statistical/machine learning models developed by our Data Scientists.

Qualifications

  • BS/MS in computer science or equivalent work experience.
  • 4 to 6 years of experience developing data models and DB schemas and creating ETLs, with familiarity with the Hadoop ecosystem.
  • 2+ years of experience with batch and streaming data ingestion using open-source or public tools such as Kafka, Airflow, and Azure Data Factory.
  • Experience with both RDBMS and NoSQL databases (e.g., Vertica, Netezza, or Oracle, and AWS data services). Thorough understanding of SQL (any variant).
  • Good understanding of data warehousing methodologies.
  • Hands-on experience in at least one programming or scripting language (shell scripting, Python, Scala, Java, etc.).

Technical Skills

  • Knowledge of the Big Data ecosystem (e.g., Hadoop MapReduce, Pig, and Hive) is a strong plus.
  • Understanding of in-memory distributed computing frameworks such as Spark (and/or Databricks), including parameter tuning and writing optimized Spark queries.
  • Scheduling and monitoring of Hadoop and Spark jobs.
  • Good understanding of a reporting tool such as Tableau, Pentaho, or Jasper is a big plus.
  • Experience in the design, development, and deployment of one or more tools - ETL (Informatica, OWB, ODI), reporting (Business Objects, QlikView, Tableau).

TheMathCompany provides an ecosystem to learn and grow in your professional journey, offering guidance to help you succeed. We are also a fun bunch and will help you make this journey memorable.

So, based on what you have read, do you believe you have what it takes to help build TheMathCompany and analytics capabilities for Fortune 500 organizations?

Have some questions or suggestions? Unclear about certain opportunities? Feel free to reach out to us anytime for a friendly chat.

Apply Here