Data Engineer
Job Description
Job Title: Data Engineer
Experience Required: 3–5 years
Location: Bangalore
Summary
entomo is an Equal Opportunity Employer. The company promotes and supports a diverse workforce at all levels across the organization. We ensure that our associates, potential hires, third-party support staff, and suppliers are not discriminated against—directly or indirectly—based on color, creed, caste, race, nationality, ethnicity, national origin, marital status, pregnancy, age, disability, religion or similar philosophical belief, sexual orientation, gender, or gender reassignment.
We are looking for a skilled Data Engineer with 3 to 5 years of experience to design, build, and optimize scalable data pipelines and infrastructure. The ideal candidate will work closely with data scientists, analysts, and software engineers to ensure reliable and efficient data delivery across our data ecosystem.
Key Responsibilities
Design, implement, and maintain robust data pipelines using ETL/ELT frameworks
Build and manage data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
Optimize data systems for performance, scalability, and cost-efficiency
Ensure data quality, consistency, and integrity across various sources
Collaborate with cross-functional teams to integrate data from multiple business systems
Implement data governance, privacy, and security best practices
Monitor and troubleshoot data workflows and perform root cause analysis on data issues
Automate data integration and validation using scripting and query languages (e.g., Python, SQL)
Work with DevOps teams to deploy data solutions using CI/CD pipelines
Required Skills & Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field
3–5 years of experience in data engineering or a similar role
Strong proficiency in SQL and at least one programming language (Python, Java, or Scala)
Experience with cloud platforms (AWS, Azure, or GCP)
Hands-on experience with data pipeline tools (e.g., Apache Airflow, Luigi, dbt)
Proficiency with relational and NoSQL databases
Familiarity with big data tools (e.g., Spark, Hadoop)
Good understanding of data architecture, modeling, and warehousing principles
Excellent problem-solving and communication skills
Preferred Qualifications
Certifications in cloud platforms or data engineering tools
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes)
Knowledge of real-time data processing tools (e.g., Kafka, Flink)
Exposure to data privacy regulations (e.g., GDPR, HIPAA)