Data Engineer
We are embarking on a new strategic initiative for one of our US-based clients, a leader in the auto insurance industry, and we are seeking a Data Engineer to become a pivotal member of the team. As a Data Engineer, you will develop robust, scalable data engineering systems and utilities that ingest data from internal and external source systems and make that data available to our stakeholders in a reliable, easily discoverable way, supporting data-driven business decisions.
You might be our missing piece if you have:
- A degree in a technical field (e.g., Computer Science, Engineering) or equivalent professional experience.
- At least 2 years of experience as a Data Engineer or in a similar technical role.
- Strong understanding of data engineering principles, including database design patterns and ETL processes.
- Hands-on experience with Apache Airflow for orchestrating and managing data pipelines.
- Experience working with large, diverse datasets and building scalable data solutions.
- Proficiency in SQL and familiarity with both relational and NoSQL database technologies.
- Experience with AWS cloud services, using both the console and CLI.
- Knowledge of infrastructure as code (e.g., Terraform, AWS CloudFormation).
- Familiarity with DevOps practices, version control, and CI/CD pipelines.
- Strong analytical and problem-solving skills, with a keen eye for detail.
- A collaborative mindset and enthusiasm for mentoring and cross-team engagement.
We would be thrilled if you have:
- Experience implementing data governance and quality frameworks.
- Exposure to event-driven or real-time data architectures (e.g., Kafka, Kinesis).
- Familiarity with machine learning pipelines, predictive modeling, or NLP concepts.
- Experience with data visualization tools such as Tableau or D3.js.
- Knowledge of modern data stack tools (e.g., dbt, Snowflake, or Redshift).
- A passion for continuous improvement, innovation, and driving best practices in data engineering.
We will be working together on:
- Designing, developing, and maintaining data pipelines that ensure reliability, scalability, and discoverability.
- Replicating existing data pipeline patterns to support new initiatives and optimize performance.
- Writing reusable data utilities using common data-centric libraries and frameworks.
- Building and maintaining ETL workflows (Airflow) and troubleshooting data pipeline issues.
- Executing database operations manually and through Data Definition Language (DDL) scripts.
- Writing optimized SQL queries and advising on appropriate database solutions.
- Operating within AWS environments, managing and integrating cloud services effectively.
- Developing infrastructure-as-code scripts for consistent, scalable data environments.
- Implementing automated testing (unit, integration, end-to-end, contract) to maintain data reliability.
- Collaborating with cross-functional teams to design and deliver solutions that meet both business and technical needs.
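To give candidates a flavor of the DDL-script and SQL-query work described above, here is a minimal, illustrative sketch. It uses Python's standard-library sqlite3 purely as a stand-in for a production database, and the table and column names (policies, state, premium) are hypothetical:

```python
import sqlite3

# Hypothetical DDL script, as might be version-controlled alongside pipeline code.
# IF NOT EXISTS guards make it safe to re-run (idempotent).
DDL = """
CREATE TABLE IF NOT EXISTS policies (
    policy_id   INTEGER PRIMARY KEY,
    state       TEXT    NOT NULL,
    premium     REAL    NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_policies_state ON policies (state);
"""

def apply_ddl(conn: sqlite3.Connection, script: str) -> None:
    """Apply a multi-statement DDL script."""
    conn.executescript(script)

def avg_premium_by_state(conn: sqlite3.Connection) -> dict:
    """Aggregate query that can take advantage of the index on `state`."""
    rows = conn.execute(
        "SELECT state, AVG(premium) FROM policies GROUP BY state ORDER BY state"
    )
    return {state: avg for state, avg in rows}

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    apply_ddl(conn, DDL)
    conn.executemany(
        "INSERT INTO policies (state, premium) VALUES (?, ?)",
        [("CA", 1200.0), ("CA", 800.0), ("TX", 900.0)],
    )
    print(avg_premium_by_state(conn))  # {'CA': 1000.0, 'TX': 900.0}
```

In production the same patterns apply against the client's actual database, with DDL changes applied through reviewed, re-runnable scripts rather than ad hoc console edits.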
Department: AI & Data
Role: Data Engineer
Locations: Cluj-Napoca, Brasov, Oradea
Remote status: Hybrid
About RebelDot
At RebelDot we enable organizations in more than 15 industries to make an asset out of custom software. From consulting to web and mobile apps, UX/UI design, and QA, we help our clients achieve more through technology. Our goal is to make software development effective and hassle-free for small and medium enterprises.
Helping our clients get the most value from their investment in technology is what drives us. Increasingly, this means working with them as a full technical partner, starting with an initial consulting stage in which we understand their needs and propose the optimal approach, or "the line", as we call it. Because of our 'rebel' approach to software development, our solutions often differ from those of our peers, standing out through innovation. From there on, we partner up and lead the line for our clients.