Job Requirements:
- Experience building production ML pipelines for model training/prediction in Python and R (Must have)
- Experience working with open source ML libraries such as scikit-learn, pandas, and TensorFlow (Must have)
- Experience creating and maintaining ML model training/prediction pipelines in production
- Experience creating re-usable tools and frameworks for ML model deployment and monitoring
- Experience working with large data sets coming from varied sources (Must have)
- Experience with both object-oriented and functional programming concepts (Must have)
- Good knowledge of Microsoft productivity tools (Must have)
- Working knowledge of visualization tools (e.g., Tableau, Power BI) (Good to have)
- Experience working with ML model training/deployment tools (such as Airflow, Kubeflow, or Seldon) (Good to have)
- Familiarity with data engineering tools (e.g., Spark, Kafka) (Good to have)
Key Responsibilities:
- Interpret and analyze (structured and unstructured) data using exploratory statistical and mathematical techniques to identify trends and anomalies and to quantify business results.
- Perform data discovery tasks and work with large data sets.
- Apply data mining/data analysis methods using a variety of data tools, building and implementing models using algorithms, and creating/running simulations to drive optimisation and improvement across business functions.
- Work with data scientists to refine machine learning (ML) models and scale them up.
- Effectively communicate to business partners the analytics approach and how it will meet their objectives.
- Work collaboratively with cross-functional teams to improve the outcomes of research projects, BD activities, product development, and project execution.
Educational Qualification:
- A Bachelor’s or Master’s degree in computer science or a related discipline (including Math/Statistics), with at least 2 years of relevant work experience (Must have)