Data Engineer
TASKS AND RESPONSIBILITIES
- Build and optimize ETL/ELT pipelines using AWS Glue, Spark, Athena, Lake Formation, S3, Step Functions, Lambda.
- Contribute to the design and implementation of the AWS Data Lake, including orchestration and security guardrails.
- Collaborate with DevSecOps engineers on infrastructure automation and CI/CD pipelines using tools such as GitHub Actions and Terraform.
- Automate deployment and management of data infrastructure and workflows to ensure reliability and scalability.
- Define and enforce data standards, including metadata, lineage and governance.
- Ensure platform scalability, performance and cost efficiency (e.g., partitioning, caching strategies).
- Coordinate integration needs, including feature engineering and data validation.
- Participate in validation and documentation efforts aligned with GxP.
- Engage with stakeholders to align technical design with business needs.
- Manage and reduce technical debt in data pipelines and infrastructure.
- Collaborate with teams that are based across different locations and time zones.
- Deliver and operate solutions for sites across the globe.
WHO YOU ARE
- 4+ years of experience in data engineering on AWS.
- Strong knowledge of AWS data stack: Glue, Athena, Lake Formation, S3, Step Functions, Lambda.
- Proficient in Python for scripting, automation and data manipulation tasks.
- Experience with PySpark for building scalable, distributed ETL/ELT pipelines using Apache Spark (especially within AWS Glue).
- Experience in structuring and modelling data in both relational and non-relational forms.
- Ability to evaluate and propose relational or non-relational approaches.
- Knowledge of databases, including query optimization, relational and non-relational schema design.
- Expertise in data storage technologies, including files, relational databases, NoSQL, and various data types (structured, unstructured, metrics, logs).
- Awareness of DevSecOps practices, including automation of CI/CD pipelines (GitHub Actions or similar), infrastructure as code (Terraform), IAM roles, encryption, Static Application Security Testing (SAST), Software Composition Analysis (SCA) tools, auditability.
- Advanced English level (written and spoken).
- Relevant certifications (e.g., AWS Certified, GCP Certified, Azure Certified) are preferred.
- Hands-on experience configuring and developing dashboards in tools such as Power BI, Grafana or Tableau.
- Strong analytical and communication skills.
- Ability to work collaboratively in a team environment.
- Experience in regulated environments (GxP, pharma, financial or healthcare).
- Experience with AI/ML data pipelines, embedding management or feature stores.
- High level of accuracy and attention to detail.
At Bayer we believe in diversity, equity and inclusion. We aim to create an environment in which everybody can feel authentic, respected and equally valued. Every day we strive to reflect our values through our unique capabilities, experiences, and aspirations. We intentionally seek diversity to enable our people to bring out their fullest potential and to encourage others to do likewise. Our company wins when we leverage our capabilities to lead the cultural transformation in our business, positively impacting society.
Candidates who meet the requirements based on the job profile will be considered for employment regardless of physical disability, race, color, religion, sex, age, sexual orientation, gender identity and will not be at a disadvantage if unemployed.
Application Period: 05/04/2026 - 05/18/2026
Reference Code: 867669
Division: Enabling Functions
Location: Mexico : Ciudad de México : Ciudad de México
Functional Area: Information Technology
Work Time: Full Time
Employment Type: Regular
Contact Us
Address: Ciudad de México, México
Job Segment: Testing, Engineer, Database, Equity, Technology, Engineering, Finance
