USACares Jobs

Job Information

Kyndryl, Inc. Data Engineer – API Developer in Bangalore, India

Your Role and Responsibilities

As a Data Engineer/API Developer, you are expected to be functionally knowledgeable in deploying and managing AI/ML models and APIs, with a strong emphasis on API development and maintenance, model deployment, and governance using cloud-native services as well as third-party DSML platforms. In this role on our Data and AI team, you will provide support for one or more projects, assist in defining the scope and sizing of work, and work on Proof of Concept development. You will support the team in providing data engineering, model deployment, and model management solutions based on the business problem, integrating with third-party services, and designing and developing complex model pipelines for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative, high-quality solutions, and participate in pre-sales and various pursuits focused on our clients' business needs.

You will also contribute in a variety of roles: thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will sharpen your leading-edge solution, consultative, and business skills through the diversity of work in multiple industry domains.


• Design and build scalable machine learning services and data platforms

• Develop model pipelines (DevOps) for model reusability and version control

• Serve models in production leveraging serving engines such as TensorFlow Serving, TorchServe, Seldon, etc.

• Analyze, design, develop, code, and implement programs in one or more programming languages for web and rich internet applications, cloud-native applications, and third-party applications

• Support applications with an understanding of system integration, test planning, scripting, and troubleshooting

• Define specifications, develop and modify programs, prepare test data, and prepare functional specifications

• Utilize benchmarks, metrics, and monitoring to measure and improve models

• Develop integrations with monitoring tools (Prometheus, the Grafana stack, cloud-native monitoring stacks) to detect model drift and raise alerts

• Research, design, implement and validate cutting-edge deployment methods across hybrid cloud scenarios

• Work with data scientists to implement ML, AI, and NLP techniques for article analysis and attribution

• Support the build of complex AI/ML models and help deploy them either on the cloud or on third-party DSML platforms

• Containerize the models developed by Data Scientists and deploy them in Kubernetes/container environments

• Develop and maintain documentation of model flows, integrations, pipelines, etc.

• Support the teams in providing technical solutions from a model deployment and architecture perspective, ensure the right direction, and propose resolutions to potential model pipeline and deployment-related problems

• Develop Proofs of Concept (PoCs) of key technology components for project stakeholders

• Collaborate with other members of the project team (Architects, Data Engineers, Data Scientists) to support delivery of additional project components

• Evaluate and create Points of View (PoVs) around the performance aspects of DSML platforms and tools in the market against customer requirements

• Work within an Agile delivery/DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints

• Assist in driving improvements to the Enterprise AI technology stack, with a focus on the digital experience for the user as well as model performance and security, to meet the needs of the business and customers now and in the future

• Support technical investigations and proofs of concept, both individually and as part of a team, including being hands-on with code, to make technical recommendations

• Create documentation for architecture principles, design patterns and examples, technology roadmaps, and future planning
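Several of the responsibilities above (serving models in production, containerizing them for Kubernetes, exposing them through APIs) come down to wrapping a trained model in a small HTTP prediction service. The following is a minimal, standard-library-only sketch: the linear "model" and the `/predict` route are illustrative stand-ins for a real serialized model and a serving framework such as TensorFlow Serving or Seldon.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "model": a toy linear scorer standing in for a real
# serialized ML model (e.g. one loaded from a pickle or TorchScript file).
WEIGHTS = {"w": [0.5, -0.25], "b": 1.0}

def predict(features):
    """Score a feature vector with the toy linear model."""
    return sum(w * x for w, x in zip(WEIGHTS["w"], features)) + WEIGHTS["b"]

class PredictHandler(BaseHTTPRequestHandler):
    """Exposes the model as a JSON-over-HTTP prediction endpoint."""

    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request console logging for the sketch.
        pass

def serve(port=0):
    """Start the service on a background thread; returns (server, bound port)."""
    server = HTTPServer(("127.0.0.1", port), PredictHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

In a real deployment this handler would typically be replaced by a framework such as FastAPI or Flask behind a production server, packaged with a Dockerfile, and given health/readiness endpoints so Kubernetes can manage it.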

Required Technical and Professional Expertise

Python, machine learning engineering, and API development, with 3-5 years of experience and the following skills:

• Strong DevOps, data engineering, and ML background with AWS, GCP, or Azure cloud

• Experience with design and development of REST API platforms using Apigee/APIM, converting web services from SOAP to REST or vice versa

• Experience with security frameworks (e.g., JWT, OAuth2)

• Experience with API-layer concerns such as security, custom analytics, throttling, caching, logging, monetization, and request/response modification using Apigee

• Proficient in SQL and stored procedures (e.g., Oracle, MySQL)

• Experience with Unix/Linux operating systems

• Experience with Scrum and other Agile processes.

• Knowledge of Jira, Git/SVN, Jenkins

• Experience creating REST API documentation using Swagger/OpenAPI and YAML or similar tools desirable

• Experience with Integration frameworks (e.g., Mule, Camel) desirable

• Experience with one or more MLOps tools: ModelDB, Kubeflow, Pachyderm, Data Version Control (DVC), etc.

• Experience in distributed computing, data pipelines, and AI/ML

• Experience setting up and optimizing databases for production usage in an ML app context

• Experience in Docker, Kubernetes (OpenShift, EKS, AKS, GKE, vanilla K8s), Jenkins, or any CI/CD tool

• Experience in Spark, Kafka, HDFS, Cassandra

• Strong hands-on knowledge of Python, Apache Spark, PySpark, and Kubernetes

• Hands-on expertise in at least one data science project: model training and deployment on hyperscalers (AWS, Azure, GCP)

• Experience in any of the following solutions: AWS SageMaker, Azure ML, or GCP Vertex AI, or third-party solutions such as DataRobot
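The JWT/OAuth2 requirement above centers on issuing and verifying signed bearer tokens at the API layer. Below is a minimal sketch of HS256-style JWT signing and verification using only the Python standard library; in practice a maintained library such as PyJWT would also handle expiry claims, algorithm negotiation, and key management.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64 for each segment.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    """Build an HS256 JWT (header.payload.signature) from a claims dict."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes):
    """Return the claims dict if the signature checks out, else None."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(expected, sig):
        return None
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

An API gateway such as Apigee would typically perform this verification in a policy step before the request ever reaches the backend model service.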

Preferred Technical and Professional Experience

• Python programmer

• DevOps: CI/CD implementations

• Data science skills: model development and training

• API development

• Strong knowledge of web services (WSDL/SOAP, RESTful)

• Strong knowledge of Java/Python frameworks (Spring MVC, Spring Security, etc.)

Required Education Bachelor's Degree

Preferred Education Master's Degree

Country/Region India

State / Province KARNATAKA

City / Township / Village Bangalore

Being You @ Kyndryl

Kyndryl is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, pregnancy, disability, age, veteran status, or other characteristics. Kyndryl is also committed to compliance with all fair employment practices regarding citizenship and immigration status.

Other things to know

When applying to jobs of interest, we recommend that you do so for roles that match your experience and expertise. Our recruiters advise that you apply to no more than 3 roles in a year for the best candidate experience.

For additional information about location requirements, please discuss with the recruiter following submission of your application.

Primary job category Data Science

Role ( Job Role ) Data Scientist

Employment Type Full-Time

Contract type Regular

Position Type Early Professional

Travel Required No Travel

Company (Y030) Kyndryl Solutions Private Limited

Is this role a commissionable/sales incentive based position? No