2 Apache Spark Jobs in Südtirol
YOUR RESPONSIBILITIES
- Design and maintain scalable data pipelines and ETL processes
- Develop APIs using Python (e.g. FastAPI) for data access and integration (see the sketch after this list)
- Ensure data quality, performance optimisation and security
- Apply knowledge of NoSQL databases
- Collaborate with data scientists and analysts to enable data-driven decisions
- Maintain and document data architecture and best practices
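
To illustrate the kind of FastAPI-based data-access endpoint mentioned above, here is a minimal sketch. The /records route, the records table, and the SQLite file example.db are hypothetical placeholders chosen for the example, not details from the listing.

# Minimal FastAPI data-access sketch; endpoint, table, and database file are hypothetical.
import sqlite3
from fastapi import FastAPI, HTTPException

app = FastAPI()

@app.get("/records/{record_id}")
def read_record(record_id: int):
    # Open a connection per request; a production service would use a pooled engine.
    conn = sqlite3.connect("example.db")
    try:
        row = conn.execute(
            "SELECT id, payload FROM records WHERE id = ?", (record_id,)
        ).fetchone()
    finally:
        conn.close()
    if row is None:
        raise HTTPException(status_code=404, detail="record not found")
    return {"id": row[0], "payload": row[1]}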
YOUR RESPONSIBILITIES
- Support the development of our data warehouse / lakehouse solution
- Design, build, and maintain scalable data pipelines and ETL/ELT processes for structured and unstructured data
- Ensure data quality, performance optimization, and security across all data systems
- Contribute to the development of best practices, documentation, and data cataloguing
- Contribute to streaming data solutions (see the sketch after this list)
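
As an illustration of the streaming ETL work described above, here is a minimal PySpark Structured Streaming sketch. The input path, the schema, the output path, and the checkpoint location are hypothetical placeholders for the example, not details from the listing.

# Minimal PySpark Structured Streaming sketch; all paths and the schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("streaming-etl-sketch").getOrCreate()

# Read newline-delimited JSON files as they land in an input directory.
events = (
    spark.readStream
    .schema("event_id STRING, amount DOUBLE, ts TIMESTAMP")
    .json("/data/incoming/")
)

# A small transformation step: keep valid rows and derive a column.
cleaned = events.where(col("amount") > 0).withColumn("amount_cents", col("amount") * 100)

# Write the stream out as Parquet, with a checkpoint so the file sink can recover.
query = (
    cleaned.writeStream
    .format("parquet")
    .option("path", "/data/curated/")
    .option("checkpointLocation", "/data/checkpoints/etl-sketch/")
    .outputMode("append")
    .start()
)
query.awaitTermination()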