Work Schedule
Other
Environmental Conditions
Office
Job Description
Summarized Purpose:
We are offering an exceptional opportunity to join Thermo Fisher Scientific as a Systems Developer (Data Warehouse). In this role, you will focus on designing, building, and optimizing database-centric solutions that power our analytical and operational data platforms. You will play a key role in developing robust, scalable data pipelines and warehouse structures, primarily on AWS Redshift, supporting data-intensive workloads and downstream reporting applications.
Education/Experience:
- Bachelor’s degree or equivalent experience in Computer Science, Information Systems, or a similar field
- 3+ years of experience in database development, data engineering, or a related field
- Equivalent combinations of education, training, and experience will also be considered
Major Job Responsibilities:
- Design, build, and tune data warehouse schemas (e.g., star/snowflake models; sketched after this list) and data pipelines in AWS Redshift or similar RDBMS platforms.
- Develop and maintain SQL-based ETL/ELT processes for large-scale data ingestion, transformation, and integration from diverse sources.
- Perform database performance tuning, query optimization, and capacity planning to ensure efficient data processing.
- Work with collaborators to translate data needs into scalable and maintainable data models.
- Support data quality, validation, and reconciliation processes to ensure accurate and reliable datasets.
- Collaborate on data workflow automation using AWS services like Lambda, Step Functions, and S3.
- Participate in development reviews, documentation, and testing activities to ensure adherence to quality and compliance standards.
- Collaborate with Operations and DevOps teams to deploy and monitor data workflows using CI/CD pipelines where applicable.
- Troubleshoot production issues, analyze root causes, and propose balanced solutions.
- Leverage AI-assisted development tools to improve query efficiency, code refactoring, and documentation.
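To give a concrete flavor of the schema work described above, here is a minimal sketch of a Redshift star schema. All table, column, and key names are hypothetical, and the distribution and sort choices are illustrative rather than prescriptive:

```sql
-- Hypothetical dimension table with type-2 SCD bookkeeping columns
CREATE TABLE dim_customer (
    customer_key   BIGINT IDENTITY(1,1),
    customer_id    VARCHAR(32)  NOT NULL,
    customer_name  VARCHAR(256),
    region         VARCHAR(64),
    effective_from DATE         NOT NULL,
    effective_to   DATE,
    is_current     BOOLEAN      DEFAULT TRUE
)
DISTSTYLE ALL;  -- small dimension: replicate to every node for cheap joins

-- Hypothetical fact table keyed to the dimension above
CREATE TABLE fact_orders (
    order_key     BIGINT        NOT NULL,
    customer_key  BIGINT        NOT NULL,
    order_date    DATE          NOT NULL,
    order_amount  DECIMAL(18,2)
)
DISTKEY (customer_key)  -- co-locate fact rows with the join key
SORTKEY (order_date);   -- supports range-restricted scans on date filters
```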
Knowledge, Skills and Abilities:
- Strong hands-on SQL development skills, including complex queries, window functions, joins, and analytical operations (see the example query after this list).
- Proficiency in data modeling and a solid grasp of data warehousing concepts (ETL/ELT, dimensional modeling, slowly changing dimensions).
- Experience working with large relational databases (e.g., Redshift, PostgreSQL, SQL Server, MySQL, Oracle).
- Experience with AWS cloud services, especially S3, Lambda, Redshift, and Step Functions.
- Familiarity with Python or Node.js for scripting, automation, or Lambda-based data workflows (required).
- Excellent analytical and problem-solving skills with attention to detail.
- Strong communication and collaboration skills in a team-oriented environment.
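As an illustration of the window-function skills listed above, a query like the following, written against the hypothetical fact_orders table sketched earlier, ranks each customer's orders by recency and computes a running revenue total. Note that Redshift requires an explicit frame clause when an aggregate window function has an ORDER BY:

```sql
SELECT
    customer_key,
    order_date,
    order_amount,
    ROW_NUMBER() OVER (PARTITION BY customer_key
                       ORDER BY order_date DESC) AS recency_rank,
    SUM(order_amount) OVER (PARTITION BY customer_key
                            ORDER BY order_date
                            ROWS UNBOUNDED PRECEDING) AS running_revenue
FROM fact_orders;
```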
Must Have skills:
- Strong background in SQL and RDBMS – Proven experience developing and tuning queries, stored procedures, and data transformations for high-volume data operations.
- Data warehousing and ETL/ELT – Hands-on experience crafting and maintaining data warehouse environments and data pipelines.
- AWS familiarity – Hands-on experience with AWS data tools such as Redshift, S3, Lambda, and Step Functions (a sample load step follows this list).
- Proficiency in Python or Node.js programming – Ability to use a programming language for automation or data integration purposes.
- Data modeling and schema construction – Experience developing normalized and dimensional schemas for analytics and reporting.
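For the AWS-facing skills above, a typical ingestion step is a Redshift COPY from staged files in S3. The sketch below is illustrative only; the target table, bucket path, and IAM role ARN are placeholders:

```sql
-- Hypothetical bulk-load step: ingest staged CSV files from S3 into Redshift
COPY staging_orders_raw
FROM 's3://example-bucket/orders/2024/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
FORMAT AS CSV
IGNOREHEADER 1;
```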
Good to have skills:
- Exposure to data lakehouse or big data environments (Databricks, Snowflake, or similar).
- Knowledge of AI-assisted or modern query optimization tools and practices.
