The role includes
- Close collaboration with internal stakeholders to understand requirements and develop data-driven solutions
- Ensuring and improving data quality
- Performing data analyses to support business units
- Contributing to the structuring and documentation of data sources
- Supporting the automation of data transformation processes
- Building, maintaining, and further developing Power BI reports and dashboards
|
Full-time
Bolzano, Milan
09.03.2026
Bolzano, Milan
Brands of Oberalp-Group
- Design and implement data pipelines to extract, transform, and load data from various sources, including databases, cloud storage, and APIs
- Maintain and improve existing data pipelines and models, ensuring they are efficient and reliable
- Support IT and business colleagues in Business Intelligence and Advanced Analytics modelling
- Collaborate with data scientists and analysts to support data-driven decision making
- Identify opportunities to optimize data pipelines and suggest new approaches to improve efficiency and reliability
- Write and maintain documentation for data pipelines and processes
- Implement CI/CD processes for data pipelines and work with DevOps teams to ensure smooth deployment and operation (DataOps)
- Drive system problem resolution and root cause analysis
- Ensure and maintain the integrity of the codebase during concurrent development cycles
- Collaborate with experts in a variety of technologies to arrive at the best overall solutions
- Collaborate with cross-functional teams to resolve data quality and operational issues
- Identify opportunities for team standardization in coding, deployments, documentation, and related areas, and establish those standards
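As a rough illustration of the extract-transform-load work described in the list above, here is a minimal sketch of a pipeline with a simple data-quality gate. All table names, fields, and the fixed currency rates are hypothetical, and an in-memory SQLite database stands in for the real sources:

```python
import sqlite3

def extract(conn):
    # Pull raw order rows from the (hypothetical) source table
    return conn.execute("SELECT id, amount, currency FROM raw_orders").fetchall()

def transform(rows):
    # Normalize amounts to EUR (assumed fixed rates) and drop invalid rows
    rates = {"EUR": 1.0, "USD": 0.9}
    out = []
    for oid, amount, currency in rows:
        if currency not in rates or amount is None:
            continue  # simple data-quality gate: skip unusable records
        out.append((oid, round(amount * rates[currency], 2)))
    return out

def load(conn, rows):
    conn.execute("CREATE TABLE IF NOT EXISTS orders_eur (id INTEGER, amount_eur REAL)")
    conn.executemany("INSERT INTO orders_eur VALUES (?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    load(conn, transform(extract(conn)))

# Set up a toy source and run the pipeline end to end
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, currency TEXT)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                 [(1, 100.0, "USD"), (2, 50.0, "EUR"), (3, None, "EUR")])
run_pipeline(conn)
print(conn.execute("SELECT * FROM orders_eur").fetchall())  # → [(1, 90.0), (2, 50.0)]
```

Real pipelines would replace the in-memory database with the actual sources and an orchestrator, but the extract/transform/load separation stays the same.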
|
YOUR RESPONSIBILITIES
- Support the development of our data warehouse / lakehouse solution
- Design, build, and maintain scalable data pipelines and ETL/ELT processes for structured and unstructured data
- Ensure data quality, performance optimization, and security across all data systems
- Contribute to the development of best practices, documentation and data cataloguing
- Contribute to streaming data solutions
|
Your tasks
- Gather BI requirements and translate them into clear solution concepts
- Close collaboration with IT, BI development, and business units
- End-to-end coordination of BI solutions with regard to quality and value
- Ad-hoc analyses and deep dives to identify trends, root causes, and optimization potential
- Ownership of assigned BI solutions and their continuous further development
|
Position overview
- Development of neural 3D systems
- Creation and optimization of neural inverse rendering models for geometry and material estimation
- Development of pipelines for relighting and appearance decomposition using differentiable rendering techniques
- Research into generative reconstruction approaches: diffusion-based 3D, feed-forward reconstruction networks, neural scene representations
- Handling challenging scenarios: specular and transparent surfaces, thin structures, complex BRDFs
- Writing clean, production-ready code and improving model quality and inference speed
- Collaboration & exploration
- Collaboration with rendering specialists and systems engineers to integrate learned components into the pipeline
- Collaboration with enterprise customers to gather requirements and evolve capabilities
- Staying up to date with CVPR, SIGGRAPH, and related research; identifying methods to bring into production
|
Full-time
Brixen, remote
28.02.2026
Brixen, remote
What we care about
- making better decisions based on data
- making performance visible
- guiding hotels step by step toward a professional revenue approach
|
YOUR RESPONSIBILITIES
- Ensure proper setup, maintenance, and governance of master data records (customer, supplier, material, product, and further domains as defined)
- Define and enforce master data quality standards and governance processes to ensure consistency, accuracy, and reliability
- Monitor and report on master data quality KPIs; drive corrective and preventive actions
- Design, implement, and optimize business processes directly related to master data, ensuring end-to-end efficiency and compliance with internal policies and external regulations
- Ensure cross-application process alignment across ERP, CRM, PLM, and other platforms
- Collaborate with business units and IT to align master data and process requirements with organizational needs
- Support strategic program planning (e.g., ERP, CRM), ensuring data and process readiness, including master data quality readiness for the ongoing SAP implementation project
- Drive continuous improvement in master data management and data-driven processes through automation, analytics, and best practices
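Monitoring master data quality KPIs, as described above, often starts with simple automated checks. The sketch below computes two illustrative KPIs (completeness and duplicate rate) over master data records; the field names and record shape are assumptions, not a specific system's schema:

```python
def master_data_kpis(records, required_fields):
    """Compute two simple master-data quality KPIs: completeness and duplicate rate.
    Illustrative only; field names and record shape are assumptions."""
    total = len(records)
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    ids = [r.get("id") for r in records]
    duplicates = len(ids) - len(set(ids))  # records sharing an id with another record
    return {
        "completeness": complete / total if total else 1.0,
        "duplicate_rate": duplicates / total if total else 0.0,
    }

# Toy supplier master data: one incomplete record, one duplicate id
records = [
    {"id": "S1", "name": "Supplier A", "country": "IT"},
    {"id": "S2", "name": "", "country": "DE"},            # incomplete: empty name
    {"id": "S1", "name": "Supplier A", "country": "IT"},  # duplicate id
]
print(master_data_kpis(records, ["name", "country"]))
```

In practice such checks would run on a schedule against the ERP/CRM extracts and feed the KPI reporting and corrective-action workflows mentioned above.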
|
Your contribution to ACS's success:
- Independently managing your own projects
- Installation, configuration, and maintenance of the IT infrastructure of our ACS customers
- Supporting your assigned customers
- Collaboration with the ACS Service Desk
|
YOUR RESPONSIBILITIES
- Design and implement backend services for an incident‑automation platform, including telemetry processing, orchestration, workflow execution, case/ticket integrations, and platform APIs
- Evolve the current architecture from tightly coupled workflows and database‑driven logic to a modular, event‑driven, and scalable backend platform
- Build and maintain reliable integrations with charger telemetry, ticketing/case systems, service workflows, and customer‑facing product components
- Improve workflow reliability, idempotency, retry mechanisms, state management, error handling, and auditability across automated playbooks
- Collaborate with product, service, R&D, and data engineering teams to translate operational playbooks into maintainable backend capabilities
- Contribute to technical decisions on rules/policy handling, workflow orchestration, context modeling, observability, and system scalability
- Support the implementation of secure, governed, and monitorable AI/agent‑based backend capabilities where appropriate
- Help define engineering standards, testing strategies, deployment patterns, and the long‑term backend architecture
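The reliability concerns listed above (idempotency, retries, auditability) can be illustrated with a small sketch: a workflow step executor that retries transient failures with exponential backoff and deduplicates by idempotency key. All class and function names are hypothetical, not part of any real platform API:

```python
import time

class IdempotentExecutor:
    """Sketch of retry + idempotency handling for automated workflow steps.
    Hypothetical names; a real system would persist results, not keep them in memory."""

    def __init__(self, max_retries=3, backoff=0.01):
        self.max_retries = max_retries
        self.backoff = backoff
        self._results = {}  # idempotency key -> cached result (also an audit record)

    def run(self, key, step):
        if key in self._results:
            return self._results[key]  # already executed: do not re-run the side effect
        last_err = None
        for attempt in range(self.max_retries):
            try:
                result = step()
                self._results[key] = result
                return result
            except Exception as err:
                last_err = err
                time.sleep(self.backoff * (2 ** attempt))  # exponential backoff
        raise last_err

# A step that fails once, then succeeds (simulating a transient integration error)
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient failure")
    return "ticket-created"

ex = IdempotentExecutor()
print(ex.run("incident-42", flaky_step))  # retried once, then succeeds
print(ex.run("incident-42", flaky_step))  # cached: the step is not executed again
```

The same pattern, backed by durable storage, is what keeps automated playbooks safe to re-run after partial failures.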
|
YOUR RESPONSIBILITIES
- Generation, verification and consolidation of data/information for operative and strategic decision-making in Procurement department
- Monitoring of price and market developments
- Creation of reports and monitoring of non-financial targets to be met by the procurement organization
- Continuous development of models and approaches to determine target costs for projects, sourcing categories, components and KPIs
- Verification, safeguarding and improving data quality by adjusting spend data and conducting plausibility checks
- Capturing and consolidating relevant data and information for procurement risk management, in collaboration with Controlling department
- Ensure items are classified correctly for reporting, compliance, and operational efficiency
- Work closely with Commodity Management to update Supplier-Item Relations in time
- Perform periodic audits of item master data to identify and resolve discrepancies
|
Position overview
- Design and management of data storage systems for large datasets (multi-TB image data, 3D assets, training data)
- Development of efficient data access patterns and data movement strategies for distributed training and experimentation
- Implementation of dataset versioning and lineage tracking for reproducibility
- Setup and maintenance of experiment tracking and model registry infrastructure (MLflow, Weights & Biases)
- Building ML pipelines for data preprocessing, training, validation, and model registration (Kubeflow, Airflow, Prefect)
- Supporting distributed training workflows across multi-GPU clusters (PyTorch Distributed, Horovod, Ray)
- Profiling and optimizing training pipelines: data-loading bottlenecks, batch sizing, GPU memory utilization
- Ensuring experiment reproducibility: environment pinning, data versioning, artifact management
- Management of artifact storage and distribution (Docker registries, model registries, package repositories)
- Development of tooling to improve developer productivity for ML workflows
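Dataset versioning for reproducibility, as mentioned in the list above, can be sketched with content addressing: derive the version id from a hash of the dataset's canonical content, so identical data always yields the same id and any change yields a new one. The record fields below are purely illustrative:

```python
import hashlib
import json

def dataset_version(records):
    """Derive a deterministic version id from dataset content.
    A simple sketch of content-addressed dataset versioning; real systems
    (e.g. DVC-style tools) hash files rather than in-memory records."""
    canonical = json.dumps(records, sort_keys=True).encode()  # stable serialization
    return hashlib.sha256(canonical).hexdigest()[:12]

v1 = dataset_version([{"img": "a.png", "label": 0}, {"img": "b.png", "label": 1}])
v2 = dataset_version([{"img": "a.png", "label": 0}, {"img": "b.png", "label": 1}])
v3 = dataset_version([{"img": "a.png", "label": 0}])
print(v1 == v2, v1 == v3)  # identical content -> same version; changed content -> new version
```

Storing this id alongside the experiment record (together with pinned environments and model artifacts) is what makes a training run reproducible.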
|
YOUR RESPONSIBILITIES
- Design and implement agentic workflows, tool/skill integrations and orchestration logic for operational use cases
- Develop safe and reliable execution patterns for agentic systems, including guardrails, fallbacks, retries and auditability
- Integrate agentic capabilities with internal systems, such as case/ticket workflows, automation engines and digital product components
- Contribute to the design of structured context models, typed tool interfaces and decision flows
- Evaluate and improve agent performance through testing, evaluations, feedback loops and production monitoring
- Support internal/admin tools for the configuration, observability and review of agent behaviour
- Collaborate closely with product managers, engineers and domain experts to translate process knowledge into reliable agentic software
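The "typed tool interfaces" and "guardrails" mentioned in the list above can be sketched as a tool registry that rejects unknown tools and badly typed arguments before anything executes. The names here are hypothetical and do not correspond to any specific agent framework's API:

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    args: dict

class ToolRegistry:
    """Minimal sketch of a typed tool interface with guardrails:
    unknown tools and wrongly typed arguments are rejected before execution."""

    def __init__(self):
        self._tools = {}

    def register(self, name, fn, schema):
        # schema maps argument name -> expected Python type
        self._tools[name] = (fn, schema)

    def execute(self, call: ToolCall):
        if call.name not in self._tools:
            raise ValueError(f"unknown tool: {call.name}")  # guardrail 1: allow-list
        fn, schema = self._tools[call.name]
        for arg, typ in schema.items():
            if not isinstance(call.args.get(arg), typ):      # guardrail 2: typed args
                raise TypeError(f"argument {arg!r} must be {typ.__name__}")
        return fn(**call.args)

reg = ToolRegistry()
reg.register("close_ticket", lambda ticket_id: f"closed {ticket_id}", {"ticket_id": str})
print(reg.execute(ToolCall("close_ticket", {"ticket_id": "T-17"})))  # closed T-17
```

In a production agent, the same validation layer is also where audit logging and fallback behaviour would hook in.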
|
Full-time
Bozen, Brixen, Venezia
19.02.2026
Bozen, Brixen, Venezia
- Microsoft Windows Server and Client
- VMware ESXi
- HP PCs & Notebooks
- HPE Proliant Server
- Office 365
- Veeam Backup
- QNAP and Synology NAS
- IT System Integrator
|
YOUR RESPONSIBILITIES
- Define & modify BOMs, Routings, Machine & Capacity groups
- Create configuration logic and its registration in the ERP system
- Define & modify Work instructions, JSON and Lifecycle Interface including time studies and their recording in the ERP system
- Perform time studies, continuous improvement & MUDA walk
- Coordinate and implement ECR and process changes related to production master data and production documents
- Support in setting up workstations
|
YOUR RESPONSIBILITIES
- Monitor charger performance using collected field data to proactively identify and resolve issues, particularly during product launch phases
- Apply systematic troubleshooting techniques to identify and resolve technical issues with chargers in customer applications, such as:
- Data mining using corporate tools
- Analysis of system logs and specific data collection
- Performance of tests to reproduce, isolate and address the issues
- Work with all the development teams to resolve complex issues quickly and effectively in a multi-disciplinary team
- Provide guidance to data analysts and scientists on the strategic direction of their activities
|
- Drive end-to-end process transparency across Global Service
- Identify structural improvement potentials
- Design and shape future service processes in alignment with our stakeholders
- Enable Operational Excellence through implementation and governance
- Establish performance frameworks and KPIs to measure the impact
|
YOUR RESPONSIBILITIES
- Monitor and troubleshoot software performance and functionality on test systems
- Investigate, diagnose, and independently resolve smaller software-related problems
- Act as the first point of contact for test system software users
- Support users with guidance and documentation on system usage and known issues
- Analyze test system data (e.g., error logs, test cycle times, performance KPIs) to identify trends and root causes
- Identify bottlenecks, inefficiencies, or software anomalies from operational data
- Create structured and detailed change requests for larger software enhancements or bug fixes
- Maintain clear documentation on known issues, applied fixes, software versions, and best practices
- Ensure adherence to internal standards and procedures for software changes and updates
|
YOUR RESPONSIBILITIES
- Design, develop and implement large-scale projects for cloud-based systems
- Plan, define requirements, develop, test, and assure quality across the software development lifecycle for AWS solutions
- Identify, analyse, and resolve infrastructure incidents and application deployment issues, and implement preventative measures
- Provide guidance on the implementation of new cloud-based initiatives through appropriate training
- Modernise and consolidate IT infrastructure and implement the best cloud-based solutions for the company, to ensure continued impact on growth
- Customise AWS applications to make the business more secure and efficient
- Monitor the migration of new applications to the cloud so that it is seamless and in line with the organisation's operations
- Define and document best practices and strategies for application deployment and infrastructure maintenance
- Ensure application and cloud environment performance, maintaining high standards and complying with company security policies
|
Your tasks:
- Collecting, analyzing, and evaluating business figures and statistics to maintain an overall view of the company.
- Preparing business analyses, reviewing annual financial statements, and calculating and managing KPIs for performance evaluation.
- Monitoring and analyzing company processes, identifying problems, and developing proposals to optimize workflows.
- Responsibility for budget planning, forecasts, and cost control, as well as preparing cost plans and projections.
- Advising and supporting management in strategic decisions through well-founded data analyses and reports.
|