What makes you stand out You hold a degree in Computer Science, Business Informatics, or a comparable qualification. You have strong expertise in SQL databases, including data modeling, table design, and data querying, ideally with experience in Snowflake. You bring experience in developing and integrating data products and are familiar with Azure Data Factory, Python, and modern cloud technologies (preferably Azure). You have hands-on experience creating meaningful Power BI reports; knowledge of CI/CD or container technologies (e.g., Docker, Kubernetes) is a plus. You work analytically, communicate effectively, collaborate well in teams, and demonstrate a strong hands-on mentality. You are fluent in English and have very good German skills; you also have a passion for financial databases and enjoy exploring new topics.
Principal Accountabilities: Collaboration in projects of the European Data Science & Advanced Analytics Team. Concept, design, development, and execution of complex innovative AI/Machine Learning solutions, as well as execution and implementation of concept studies using advanced statistical methods. Development of deep learning models for structured medical concept extraction from unstructured data. Productionization of machine learning algorithms on Big Data platforms. Application of modern data mining and machine learning techniques in connection with Healthcare Big Data to identify complex relationships and link heterogeneous data sources. Advanced usage of Large Language Models for summarization, chatbots, entity extraction, etc. Development of foundational Deep Learning Models for assets and patients. Building and training of new production-grade algorithms that can learn from complex, high-dimensional data to uncover patterns from which machine learning models and applications can be developed.
For a perfect match, you need: Very good command of Microsoft Purview In-depth knowledge of data protection concepts: classification, DLP, information protection, retention/records, encryption, access governance Very good understanding of data flows in modern collaboration/cloud environments and typical leakage scenarios Experience with policy design, rollout strategies, exception processes, and effectiveness measurement Knowledge of relevant standards/frameworks and governance requirements (e.g., data protection, auditability, control evidence) Analytical skills for evaluating alerts/findings, trend analyses, KPI reporting In your job: Own and develop data security policies and controls (classification, DLP, retention, encryption) Design and optimize DLP and information protection measures (email, endpoints, cloud, collaboration tools) Identify sensitive data and assess exposure risks Monitor and improve the effectiveness of data protection measures Define retention and lifecycle requirements in cooperation with Legal & Compliance Create guidelines, training materials, and incident runbooks Collaborate closely with IT, Privacy, Legal, Cloud, and business teams
What you will do Gather, analyze, and interpret data to fulfill reporting requests from business stakeholders Translate complex data into clear, actionable insights that support business strategy Support ad hoc analysis and contribute to data-driven decision-making Automate routine reporting processes and maintain existing reports to ensure reliability and improve data workflows Maintain, design, and implement advanced dashboards using tools like Google Looker Studio, enabling self-service analytics across the organization Collaborate with data engineers, data scientists, and stakeholders across the organization to ensure data quality and consistency while delivering data-driven insights that support supply-related business decisions Communicate findings effectively to technical and non-technical audiences Foster a data-driven culture within the organization, promoting the use of analytics in decision-making processes Who you are You have at least 1-2 years of experience as a Data Analyst Proficiency in SQL for complex data analysis, reporting, and querying large datasets Experience with Google BigQuery Experience with data visualization tools (e.g., Looker Studio, Excel, or similar) to create compelling dashboards and reports Excellent communication skills in English Ability to translate fuzzy business requirements from diverse stakeholders into analytical requirements.
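As a hedged illustration of the reporting automation described in the entry above (not part of the posting itself), the following minimal Python sketch queries BigQuery and exports the result for a dashboard; the project, dataset, table, and column names are invented, and it assumes the google-cloud-bigquery and pandas packages are installed.

from google.cloud import bigquery
import pandas as pd

def weekly_supply_report(project_id: str = "example-project") -> pd.DataFrame:
    # Run an aggregate query against a hypothetical orders table.
    client = bigquery.Client(project=project_id)
    query = """
        SELECT supplier_id,
               DATE_TRUNC(order_date, WEEK) AS week,
               COUNT(*) AS orders,
               SUM(order_value) AS total_value
        FROM `example-project.supply.orders`
        WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
        GROUP BY supplier_id, week
        ORDER BY week
    """
    df = client.query(query).to_dataframe()
    # Persist as CSV so a Looker Studio (or other BI) data source can pick it up.
    df.to_csv("weekly_supply_report.csv", index=False)
    return df

if __name__ == "__main__":
    print(weekly_supply_report().head())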
Location: Mobile work (Germany-wide) What you’ll do Empower customers: Identify, design, and implement data‑ and AI‑driven use cases for external clients across diverse industries. Build the backbone: Collaborate with other Data Engineers to enhance and integrate solutions into existing data and platform architectures.
MID – Driving Continuous Transformation For exciting project assignments at our clients, we are looking for experienced consultants with BI expertise. In your position as Senior Consultant Data Warehouse / Senior Data Analyst, varied and demanding tasks await you along the entire software development process during project assignments at our clients. You independently align the various requirement specifications with our clients and business departments, and you analyze and design the data preparation processes from the interface through to the final KPI. You work on developing and building relational and multidimensional databases with the goal of a unified data foundation. You optimize existing system solutions in the data warehouse context. You shape the architecture and design of the data warehouse and develop modern solutions for the data model. You have completed a degree in (business) informatics or another program with an IT and/or computer science focus, or you have a comparable qualification with a corresponding affinity for IT, ideally with a connection to controlling. You bring at least 5 years of professional experience in agile business intelligence or data warehouse projects and have been involved in implementing GDPR-compliant DWH solutions. You have professional experience in designing ETL processes and data structures, in particular when handling very large data volumes, and you are confident with SQL. Several years of experience in designing ETL/ELT processes as well as an awareness of DWH architectures set you apart. You bring experience in designing interfaces to the DWH and have ideally already designed event-based interfaces with Apache Kafka. You convince us with conceptual skills and a process-oriented, analytical way of thinking, and you have business-fluent German skills.
You are responsible for the conceptual, logical, and structural integrity of our Core Data Model as well as the Gold Layer across Azure, Snowflake, and dbt. You ensure that fragmented data sources are transformed into consistent, reusable, and decision-relevant data products, actively preventing the platform from drifting into team-specific, incompatible models. You define and maintain central business objects, canonical dimensions, shared metrics, and facts, ensuring that the Core Data Model serves as a stable, business-oriented foundation across all domains. You develop modeling standards, naming conventions, layering concepts (Staging → Intermediate → Gold), reuse patterns, and dbt design guidelines, and you ensure their consistent implementation across all teams. You safeguard the semantic consistency of the entire data model, resolve domain conflicts, ensure that identical business terms are modeled only once, and review changes affecting core layers. You act as the technical design authority for model changes in Snowflake/dbt, balancing local requirements with long-term model coherence, and ensuring that all models remain performant, scalable, maintainable, and of high quality.
As an IT Data Engineer, you will contribute to discovering insights about our customers and internal operations by designing and implementing data pipelines and models, as well as maintaining and improving existing ones, so that you and your team can accelerate business experimentation and influence data-driven, augmented decision making. Tasks & Responsibilities Design and implement data pipelines to extract, transform, and load data from various sources, including databases, cloud storage, and APIs.
Manage and refine business and technical requirements in collaboration with stakeholders Coordinate data integration activities with various source systems Design and model data structures within a Data Warehouse environment, with a strong focus on Data Vault methodology Develop and optimize data pipelines using SQL and Python Work with tools like Databricks and dbt to build scalable data transformation workflows Ensure data quality, consistency, and compliance, especially within banking-related use cases Experience in requirements management Experience in coordination with source systems Experience with data modeling in a Data Warehouse environment: Focus on Data Vault Good German and English language skills Databricks experience is nice to have Experience with dbt (data build tool) is an advantage Experience with SQL (as a query language) and Python is an advantage Banking experience is an advantage Renowned client Remote work Your contact: Florian Pracher, reference number 863466/1, e-mail: florian.pracher@hays.at. Employment type: freelance for a project.
Give it a try and learn what the market has to offer – our services are free of charge, non-binding and discreet! We look forward to hearing from you. Design, build, and optimize batch data pipelines for internal tool use cases Develop efficient Spark SQL transformations for large-scale datasets Use Python for data processing, orchestration, and automation Create and maintain data models (facts, dimensions, aggregates) with clear grain and metric definitions Ensure data quality and correctness, including handling late data, duplicates, and adjustments Implement validation, data quality checks, and reconciliation logic Work with business stakeholders to gather requirements, define metrics, and translate needs into pipelines Collaborate with infrastructure teams on standards, performance tuning, and best practices Bachelor or Master degree in a technical field or an equivalent qualification Experience in data engineering or a related field Strong proficiency in Spark SQL for large-scale data transformations Solid Python skills for data processing and pipeline development Strong understanding of data modeling (fact tables, dimensions, grain, SCDs) Hands-on experience building and maintaining batch pipelines in production High attention to detail with a strong focus on data quality and metric integrity Ability to communicate clearly with non-technical stakeholders and translate business needs into data solutions Remuneration under the most attractive collective agreement in the industry Annual leave entitlement of 30 days Generous working time account with the possibility of having overtime paid out Subsidization of direct insurance (as a company pension scheme) Your contact: Kristina Meng, reference number 863942/1, e-mail: kristina.meng@hays.de. Employment type: employment with Hays Professional Solutions GmbH.
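To illustrate the batch transformation and data-quality duties listed in the entry above, here is a minimal, hedged PySpark sketch (not taken from the posting); the table and column names are hypothetical, and the deduplication keeps only the latest record per key to absorb late corrections.

from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("orders_batch").getOrCreate()

raw = spark.table("staging.orders_raw")  # hypothetical source table

# Keep only the latest version of each order (handles duplicates and late corrections).
latest = (
    raw.withColumn(
        "rn",
        F.row_number().over(
            Window.partitionBy("order_id").orderBy(F.col("ingested_at").desc())
        ),
    )
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Basic validation: no null keys, and reconcile distinct keys in vs. rows out.
assert latest.filter(F.col("order_id").isNull()).count() == 0, "null order_id found"
assert latest.count() == raw.select("order_id").distinct().count(), "reconciliation mismatch"

latest.write.mode("overwrite").saveAsTable("curated.orders")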
At the earliest possible date, we are looking for a Data Analyst (m/f/d) in Weeze for our IT department. Your tasks: You analyze, design, and implement innovative business intelligence (BI) solutions. You capture and specify functional requirements. You analyze processes and identify optimization potential. You create reports and dashboards. You develop ideas for improving and evolving our BI systems and database structures. You support our business departments with BI questions. Your profile - Basic: You have completed vocational training or a degree in computer science, mathematics, business, natural sciences, or engineering.
As part of our team, you will take on the following responsibilities: You work in an innovative and rapidly growing environment, using your strong communication skills to generate business value from data. You collaborate with your team and business partners in an agile setup to develop data-driven products that address business challenges, including building data pipelines as well as work related to product, reporting, and analytics. You help design and maintain a scalable, cloud-based data landscape that creates a new foundation for how DKV handles data. You prepare and present your results in a clear and engaging way for your business partners. You are open to new approaches and continuously refine your solutions to achieve better performance, quality, and cost efficiency.
You support interfaces end to end: from requirements gathering through design and implementation to testing, rollout, and operation, together with your colleagues and external partners. You work closely with our business departments, prioritize pragmatically by impact, and develop solutions consistently driven by the business.
Sound programming skills (preferably Python), database know-how (SQL as well as NoSQL), and experience in API design (REST as well as GraphQL), system integration, and connecting diverse data sources to AI models are an advantage. You have a strong architectural understanding of modern data platforms and ETL/ELT pipelines, ideally including experience with high-frequency telemetry and IoT data.
With our innovative and stylish products from the Dining & Lifestyle and Bath & Wellness segments, we have been creating moments and rooms to feel good in since 1748. Our success is based on the passion, design expertise and innovative strength of our more than 13,000 employees in 42 countries. Want to become part of us? #shapeandcreate Your tasks: Master data management in SAP P51 and P30: You are responsible for creating and maintaining all relevant article, logistics, planning, and production master data in SAP P51 and P30, including checking for completeness and ensuring correct release processes.
As Senior Data Engineer (m/w/d), you design and operate scalable data pipelines and architectures that support Nordex analytics, reporting and machine learning solutions. You work closely with data scientists and engineering teams to deliver robust, high‑quality datasets.
Java, C#, Python, JavaScript, Spring, Spring Boot... Initial basic knowledge of Angular, version control, API design/REST, web services, and IT systems Initial knowledge of machine learning and AI systems (especially Large Language Models and GenAI agents) Basic knowledge of Semantic Web, Knowledge Graphs, Graph Data Science, Data Mesh, and Data Products is an advantage Interest in IT topics in banking/finance, such as
MID – Driving Continuous Transformation With the growing importance of data processing in the cloud, we are looking for an experienced Data Vault consultant to support us in implementing and optimizing Data Vault solutions in a cloud environment. You design and implement scalable Data Vault 2.1 models. You design, model, and develop data integration solutions based on Data Vault best practices. To ensure a smooth data pipeline, you work closely with the data engineering and BI teams. You take part in data quality checks, validations, and governance practices. Analyzing customer requirements and translating them into scalable and robust architectures is also part of your role. You support the migration of existing solutions to the cloud. You support business reporting and analysis through cleanly versioned data marts. You also train and coach teams in Data Vault methodologies and best practices in the respective cloud environments. You have completed a degree with a focus on computer science / business informatics, mathematics, a STEM subject, or a comparable qualification. You have gained at least 5 years of experience in modeling and implementing Data Vault solutions in production environments and bring sound knowledge of working with cloud data platforms (e.g.
Are you passionate about Data Transformation as Code and modern cloud architectures? Then shape the data-driven future with us. Architecture design: You design and implement modern data platform architectures in the cloud on AWS, Azure, or GCP with a focus on Snowflake. Data engineering: You develop modular and testable transformation pipelines with dbt and work according to best practices such as versioning, CI/CD, and automated tests. Data modeling: You create data models based on Data Vault 2.0, star schema, or comparable modeling techniques to map business requirements in a structured way. End-to-end orchestration: You integrate ingestion tools such as Fivetran or Airbyte and ensure workflow orchestration with Airflow or Dagster. Consulting and coaching: You advise customers on selecting suitable components and support the migration of legacy systems to the cloud. Quality assurance: You ensure a high level of data reliability through automated data quality checks and clear governance structures.
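As a hedged sketch of the workflow orchestration mentioned in the entry above (an assumption, not part of the posting), the following Airflow DAG schedules a daily dbt run followed by dbt tests; it assumes Airflow 2.4 or later and a dbt project at a made-up path.

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_transformations",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",      # requires Airflow >= 2.4; earlier versions use schedule_interval
    catchup=False,
) as dag:
    # Run the dbt models, then the dbt tests; the project path is hypothetical.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/project && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/project && dbt test --target prod",
    )
    dbt_run >> dbt_test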
measurement data, operations management, GIS, asset management) into a unified data basis for project-related analysis and usage scenarios. Design, implementation, and further development of ETL/ELT processes based on modern Azure and Databricks components Analysis of source structures as well as building and maintaining project-wide data models Support in setting up, configuring, and monitoring the data platform and the associated interface processes. Preparation and provision of data for different technical and analytical usage scenarios in the project Development and support of project-related reporting solutions (including
What makes you stand out You hold a Bachelor's or Master's degree in Business Administration, Industrial Engineering, Analytics, or Statistics. Ideally, you have additional qualifications in Marketing Analytics or CRM. You bring 3–5 years of professional experience in Campaign/CRM Analytics and Sales Operations. You have hands-on experience in end-to-end campaign tracking, KPI development, and European process harmonization. You are proficient in Power BI (including DAX and data modeling) and Power Query (ETL). You are familiar with Dynamics 365 and HubSpot (workflows/tracking). You have experience with SQL (Snowflake) and SAP data integration. You ensure data quality and consistency across systems. You communicate clearly and effectively; stakeholder management motivates you and drives you to create solutions collaboratively. You design processes, promote enablement, and share best practices. You have excellent German and English skills, both written and spoken. We are looking forward to your application and to applicants who enrich our diverse culture!
You design and take ownership of data governance and operating models (e.g., roles, responsibilities, domains, processes). You design, implement, and optimize the Collibra Data Governance platform (Data Catalog, asset models, workflows, communities, responsibilities).
MySQL Python Google BigQuery Gitlab What you will do Conduct in-depth analyses on shop user behavior to uncover actionable insights Drive shop optimization and growth by providing analytical insights to Business and Product Owners and influencing product enhancements Shape the success metrics for our shop, developing and monitoring KPIs that directly impact millions of users’ shopping experiences Provide full-cycle A/B test analytical support to optimize shop performance Define tracking requirements and support QA of tracking implementations to ensure data accuracy and reliability Design and implement advanced dashboards, enabling self-service analytics across the organization Collaborate with multiple stakeholders across the organization to answer shop-related questions with analytical insights, supporting business decision-making Who you are Advanced proficiency in SQL for complex data analysis and querying large datasets Expertise in e-commerce KPIs and funnel analysis Solid understanding of web analytics and experience working with frontend tracking data (GA4 experience is a plus) Experience with data visualization tools (e.g., Looker Studio, Tableau, or similar) to create compelling dashboards and reports Advanced knowledge of A/B testing methodology and statistical analysis for experiment design and interpretation Strong understanding of customer segmentation and cohort analysis in the e-commerce context Proactive and self-driven, with the ability to work independently and drive projects from conception to completion Collaborative mindset, adept at working with cross-functional teams and building strong relationships with stakeholders Pragmatic approach to problem-solving, consistently delivering efficient, data-driven solutions.
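As a hedged illustration of the A/B-test analysis mentioned in the entry above (not part of the posting), here is a minimal two-proportion z-test in Python on invented conversion counts; it assumes statsmodels is available.

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/B test readout: conversions and visitors per variant.
conversions = [1320, 1405]   # variant A, variant B
visitors = [25000, 24800]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
lift = conversions[1] / visitors[1] - conversions[0] / visitors[0]
print(f"absolute lift: {lift:.4%}, z = {z_stat:.2f}, p = {p_value:.4f}")
# A p-value below the pre-registered alpha (e.g. 0.05) would support rolling out variant B.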
mechanical/plant engineering) an advantage Very good knowledge of ETL, data modeling, and reporting Knowledge of data structures in SAP ERP and S/4HANA desirable Knowledge of object-oriented design and software quality Very good analytical skills and a strong sense for business processes and business priorities An independent, structured, and solution-oriented way of working Very good communication and interpersonal skills Confident communication in German and English Willingness to travel (approx. 15%) Have we sparked your interest?
Building a log ingestion and parsing pipeline for multi-gigabyte archives (ZIP/XML/JSON/Windows event logs) to produce normalized event data Event correlation and data enrichment (temporal ordering, correlation of events from different sources, fuzzy matching against a fault/FMEA knowledge base) Development of an embedding and hybrid retrieval pipeline (Azure OpenAI embeddings + keyword search + vector search) with clearly defined latency and throughput targets Execution of data quality checks (schema validation, encoding checks, duplicate detection) as well as creation of precise technical handover documentation Experience implementing ETL/ELT processes in Python for processing large and heterogeneous volumes of log data Knowledge of designing schemas and data models for normalized events and knowledge-base documents (JSON/JSONL + SQL) Building/optimizing vector index collections and relevance scoring (BM25/TF-IDF + cosine similarity) Performance optimization (batching, caching) and delivery of maintainable code incl.
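To make the hybrid retrieval idea in the entry above concrete, here is a hedged Python sketch (an illustration, not project code) that blends a TF-IDF/cosine score with a dense-embedding score; the documents, fusion weights, and the embed() stub are placeholders for e.g. Azure OpenAI embeddings.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "gearbox temperature sensor fault, error code E117",
    "windows event log shows repeated service restart",
    "FMEA entry: vibration exceeds threshold on main bearing",
]
query = "temperature fault E117"

# Sparse side: TF-IDF plus cosine similarity.
vectorizer = TfidfVectorizer()
doc_tfidf = vectorizer.fit_transform(docs)
sparse_scores = cosine_similarity(vectorizer.transform([query]), doc_tfidf).ravel()

def embed(texts):
    # Placeholder for a real embedding call; here: random but deterministic vectors.
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.normal(size=(len(texts), 8))

doc_vecs, query_vec = embed(docs), embed([query])
dense_scores = cosine_similarity(query_vec, doc_vecs).ravel()

# Simple weighted fusion; production systems often use reciprocal rank fusion instead.
hybrid = 0.5 * sparse_scores + 0.5 * dense_scores
print(sorted(zip(hybrid, docs), reverse=True)[0])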
Place of work: Böblingen Your tasks Cross-functional coordination for developing a data ecosystem for the entire supply chain Support and advice on the use of innovative digitalization technologies (data analytics, cloud, ...) Managing the complex implementation process through to realization and securing the success of the digitalization solutions developed Applying agile methods such as Design Thinking or Scrum in projects/workshops Your profile Completed degree in business informatics, industrial engineering, economics, mathematics, computer science, engineering, or comparable Ideally initial practical experience in digitalization projects, data analytics, and cloud technologies We offer The security of a large, globally active company with high standards for occupational safety Attractive remuneration in line with your qualifications Permanent employment An interesting and varied range of tasks Individual training and development programs as well as a wide range of career opportunities What happens next Please apply directly online by clicking the "APPLY FOR THIS JOB NOW" button.
YOUR TASKS: Design, develop and deploy digital solutions ensuring the software development life cycle in an agile setup Develop solutions on a leading-edge cloud based platform for managing and analyzing large datasets Create technical documentation Analyze and decompose business requirements into technical functionalities Produce clean and efficient code based on business requirements and specifications Create Notebooks, pipelines and workflows in SCALA or Python to ingest, process and serve data in our platform Be a technical lead for junior and external developers Be a part of the continuous improvement of Nordex’ development processes by participating in retrospectives and proposing optimizations YOUR PROFILE: Technical degree in Computer Science, Software Engineering or comparable Experience or certification in Databricks Fluent English At least 3 years of proven experience Availability to travel YOUR BENEFITS: In addition to the opportunity to make our world a little more sustainable, we offer you: *Some offers may vary by location. ** Hybrid working in accordance with the company's internal policy.
What you can expect You take on the functional and disciplinary responsibility for all FTEs in the Sales Data Hub within the federated data setup, including Data Engineers, Data Scientists, Data Governance roles and Product Owners Data. You define, design, develop, and operate cloud-based data products for the Sales, Marketing, and Customer Service business units. You are responsible for the methodological integration of data assets and data products. You manage the Sales Data Hub operationally and further develop it as a specialized unit for data-driven solutions in Sales, Marketing, and Customer Service. You are responsible for the further development and ongoing maintenance of all sales-related models based on feedback and requirements from the sales organization. You lead projects related to planning, expanding, and organizing new and existing products in collaboration with the relevant business units and external partners. You assume technical responsibility for data products developed by or for the Sales, Marketing, and Customer Service areas within the Data Intelligence & Analytics team. You drive the continuous expansion, professionalization, and organizational development of the Sales Data Hub within the existing governance and organizational framework.
Develop, maintain, and optimize data pipelines and ETL/ELT processes using Databricks Implement version control workflows and collaborate using GitHub Build and maintain CI/CD pipelines with GitHub Actions Design and implement scalable data transformations using Python/PySpark Write efficient and reliable SQL queries for data processing and analytics Strong hands-on experience with Databricks and strong SQL skills Proficiency with GitHub for version control and collaboration Experience building CI/CD pipelines, ideally with GitHub Actions, and solid knowledge of Python/PySpark Experience with Microsoft Azure and knowledge of Data Vault data modeling is an advantage Experience with Kafka or other streaming technologies is an advantage Understanding of Unity Catalog for data governance is an advantage Experience with Splunk for monitoring and troubleshooting is an advantage Renowned client Remote work possible Your contact: Florian Pracher, reference number 863468/1, e-mail: florian.pracher@hays.at. Employment type: freelance for a project.
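As a hedged example of how CI/CD for such pipelines often looks in practice (an assumption, not from the posting), the following pytest module tests a small PySpark transformation; a GitHub Actions workflow would simply run pytest on each push. The transformation and column names are hypothetical.

import pytest
from pyspark.sql import SparkSession, functions as F

def add_net_amount(df):
    """Example transformation under test: gross amount minus tax."""
    return df.withColumn("net_amount", F.col("gross_amount") - F.col("tax_amount"))

@pytest.fixture(scope="module")
def spark():
    # Small local session so the test can run inside a CI job.
    return SparkSession.builder.master("local[1]").appName("ci-tests").getOrCreate()

def test_add_net_amount(spark):
    df = spark.createDataFrame([(100.0, 19.0)], ["gross_amount", "tax_amount"])
    result = add_net_amount(df).collect()[0]
    assert result["net_amount"] == pytest.approx(81.0)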
Work with consolidation units, FS items, subitems, versions, and the ACDOCU table structure Support both periodic and year-to-date reporting activities Create and configure GRDC forms using the Manage Forms application Maintain and optimize forms to ensure accurate and efficient data collection Create and manage packages via the Manage Package app Define package steps, assign forms or folders, and configure data-entry context for users Design and maintain validation rules, ensuring proper behavior in the Data Monitor Use the Reported Data Validation task to resolve reported-data issues Manage visual and backend controls to ensure consistent data quality Understand GRDC data integration with ACDOCU and Group Reporting Work with Data Monitor task sequences such as Calculate Net Income Strong understanding of Group Reporting concepts and ACDOCU structure Experience with periodic vs. YTD reporting Hands-on experience in GRDC form design (Manage Forms) Experience with Package Management (Manage Package app) Ability to design validation rules and manage controls in the Data Monitor Solid understanding of GRDC
Benefits Senior role in the SAP S4HANA and PLM environment Focus on master data governance and process integration Location: Mülheim an der Ruhr Tasks You translate business and engineering requirements into SAP S4HANA MDG M and PLM solutions You define functional specifications and accompany testing You design and monitor data governance and data quality rules You analyze and clean up master data issues You manage interfaces between SAP, Teamcenter, and connected systems You create documentation and training materials for end users Profile Completed degree in computer science, business informatics, mechanical engineering, or comparable Several years of experience in SAP S4HANA MDG M, SAP MM, and SAP PP Experience in the PLM environment, ideally Teamcenter Very good knowledge of requirements engineering and functional design Experience in implementation projects as well as in IT project management Very good English skills Unsure whether the role is a fit for you?
Working in an interdisciplinary team of engineers to develop and improve designs and manufacturing processes for thick film sensors Improve and maintain the data infrastructure and pipeline for production and process control data from various sources and ensure timely data availability Act as a technical interface between R&D and Production and between various R&D departments to harmonize data handling and standards Improve and maintain data visualization tools (dashboards, interactive charts) and support routine data analysis Support in defining and improving image analysis methods and tools to derive quantitative feature values from images Extend the data infrastructure with additional information, e.g. from sensor performance characterization Data-driven improvements of manufacturing processes Completed technical training in process engineering, data science, bioinformatics, or similar professional education Professional experience in an industrial R&D or manufacturing environment, ideally in the medical device industry or a comparable regulated environment Experience in building and maintaining data pipelines (ETL processes) from diverse sources such as SQL databases, CSV, and machine log files Ability to create interactive dashboards and visualization tools with a solid understanding of applied statistics (e.g. correlation analysis, cluster analysis) to support the development teams Skills in digital image processing, object-oriented programming (OOP) in Python, and knowledge of SQL are a strong advantage, adding significant value to this opportunity Good communication skills in a multicultural and multidisciplinary environment A thorough way of working and documentation Motivated team player with a passion for promoting and driving fast-paced and ambitious projects Aptitude to understand and improve the underlying technical processes Proficiency in both English and German Unlimited project contract Fascinating, innovative environment in an international atmosphere Your contact: Jannik Fabio Eichin, reference number 865639/1, e-mail: jannik.eichin@hays.ch. Employment type: freelance for a project.
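As a hedged sketch of the ETL work described in the entry above (not part of the posting), the following Python step combines a CSV export and a SQL table into one per-batch summary for dashboarding; the connection string, file, table, and column names are invented.

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:pass@dbhost/production")  # hypothetical

# Extract: machine measurements from CSV, batch metadata from SQL.
measurements = pd.read_csv("print_thickness.csv", parse_dates=["measured_at"])
batches = pd.read_sql("SELECT batch_id, paste_lot, oven_profile FROM batches", engine)

# Transform: join, then aggregate per batch for dashboarding.
merged = measurements.merge(batches, on="batch_id", how="left")
per_batch = (
    merged.groupby(["batch_id", "paste_lot"], as_index=False)
          .agg(mean_thickness=("thickness_um", "mean"),
               std_thickness=("thickness_um", "std"),
               n_samples=("thickness_um", "size"))
)

# Load: write back to a reporting table the dashboards read from.
per_batch.to_sql("batch_thickness_summary", engine, if_exists="replace", index=False)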
You take on end-to-end responsibility from the requirement through to the productive pipeline. YOUR TASKS Design, development, and operation of ETL/ELT pipelines with SAP Data Services and SAP Datasphere, including job orchestration, scheduling, and monitoring Migration of existing ETL pipelines from SAP BI/BO to SAP Datasphere, incl.
Give it a try and learn what the market has to offer – our services are free of charge, non-binding and discreet! We look forward to hearing from you. Design, build and optimize enterprise Tableau dashboards Develop reporting-friendly data models on Cloudera Data Platform (Hadoop) and Azure Databricks (Delta Lake, SQL Warehouses) Implement and tune SQL queries (Hive, Impala, Spark SQL, Databricks SQL) for performance, cost efficiency and concurrency Apply Tableau performance optimization strategies (extract vs live, push-down optimization, query tuning) Design and implement secure enterprise Tableau configurations, including row-level security aligned with role concepts Ensure compliance with IT security, data governance and regulatory requirements Collaborate with data platform teams, DataOps, IT security/compliance and controlling solutions Produce professional documentation: data models, dashboard specifications, data-source definitions, security concepts, test cases Conduct testing for accuracy, performance, access control and stability of dashboards and data models Provide knowledge transfer, coaching and structured handover to internal teams Strong hands-on experience with Tableau Desktop and Tableau Server/Cloud in enterprise environments Proven ability to build management-ready dashboards for finance/controlling or other regulated industries Practical experience with Cloudera Data Platform, Hadoop ecosystems, and Azure Databricks integrations Advanced SQL skills across Hive, Impala, Spark SQL, Databricks SQL Solid understanding of Delta Lake, parquet/ORC formats, and BI-oriented data modeling principles Experience implementing row-level security and with enterprise BI solutions Strong knowledge of performance optimization in Tableau, Hadoop and Databricks environments Ability to operate in regulated financial environments with security, compliance and data governance constraints Excellent communication and documentation skills in English International client Remote option Your contact: Eliška Stejskalová, reference number 864486/1, e-mail: eliska.stejskalova@hays.at. Employment type: freelance for a project.
elasticsearch AWS Python Google BigQuery Google Cloud Platform Numpy Pandas Gitlab What you will do Design and develop innovative algorithms to power a personalized shopping experience, leveraging cutting-edge machine learning techniques Deploy your solutions into production, taking full ownership and ensuring high performance and scalability Combine your data science expertise with a pragmatic, agile approach to find innovative solutions and drive measurable results within a fast-paced environment Challenge the status quo by identifying areas for improvement in existing retrieval and reranking systems, particularly those relying heavily on business logic, and propose data-driven solutions Thrive in a dynamic, fast-paced environment with a flat hierarchy, where your ideas and contributions can make a real difference Who you are Proficiency in Python or experience with at least one scientific computing language (e.g., MATLAB, R, Julia, C++) Strong SQL skills with experience in analytical or transactional database environments Theoretical understanding of machine learning principles, coupled with a hands-on approach to building and iterating on models Proven experience in building and deploying machine learning solutions that deliver tangible business value Strong understanding of data structures, algorithms, and tools for efficiently handling large datasets (e.g. pandas, numpy, dask, arrow, polars, …) Experience designing, building, and managing data pipelines Familiarity with cloud-based model training and serving platforms (e.g., GCP Vertex AI, Amazon SageMaker) Solid understanding of statistical methods for model evaluation Big Data: Experience analyzing large datasets using statistical and machine learning techniques DevOps: Familiarity with CI/CD tools (e.g., GitLab CI/CD, Hashicorp Terraform) is a plus Generative AI: Experience with generative AI and agentic frameworks (e.g., LangChain, ADK, CrewAI, Pydantic AI, …) is a plus Understanding of recommendation, retrieval and reranking systems in e-commerce and retail is a plus Excellent written and verbal communication skills in English Ability to effectively communicate complex machine learning concepts to both technical and non-technical stakeholders Proven ability to collaborate effectively within a team to establish standards and best practices for deploying machine learning models A proactive approach to knowledge sharing and fostering a quick development environment Nice to have Experience with BigQuery Knowledge of time series and (graph) neural network models Familiarity with statistical testing and Gaussian Processes Strong Knowledge of Computer Vision libraries, (e.g.
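As a hedged illustration of moving from business-rule-heavy reranking toward a data-driven approach, as mentioned in the entry above (an assumption, not the team's actual method), here is a toy Python reranker that blends a retrieval score with a model-predicted purchase probability; all features, items, and weights are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [retrieval_score, price_rank, past_ctr] -> purchased (0/1).
X_train = np.array([[0.9, 1, 0.12], [0.4, 5, 0.02], [0.7, 2, 0.08], [0.2, 8, 0.01]])
y_train = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X_train, y_train)

def rerank(candidates: dict[str, np.ndarray], retrieval_weight: float = 0.3) -> list[str]:
    """Order candidate items by a blend of retrieval score and model score."""
    items = list(candidates)
    feats = np.vstack([candidates[i] for i in items])
    model_scores = model.predict_proba(feats)[:, 1]
    blended = retrieval_weight * feats[:, 0] + (1 - retrieval_weight) * model_scores
    return [item for _, item in sorted(zip(blended, items), reverse=True)]

print(rerank({"sku_a": np.array([0.8, 2, 0.09]), "sku_b": np.array([0.5, 1, 0.11])}))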
YOUR TASKS: Develop solutions on a leading-edge cloud based platform for managing and analyzing large datasets. Design, develop and deploy digital solutions ensuring the software development life cycle in an agile setup. Create technical documentation.
We are seeking a talented Enterprise Architect – Tools and Monitoring to join our team. You are expected to deliver technical and architecture design and formulate standards related to the IT monitoring landscape (Infrastructure, Application and Business Process). You will act as the SME (Subject Matter Expert) and 3rd level support and also provide technical consultancy to support the design and implementation planning of new infrastructure technologies.
Contributes to creating a coherent and reliable data flow supporting the various stages of product development. Requirements: Degree in Product Design or Materials Science and Technology, or a technical diploma or ITS program in footwear / materials / prototyping / production processes Basic knowledge of product components (footwear or technical equipment) and of the main material categories Good command of written and spoken English Familiarity with information systems and data management; knowledge of PLM or ERP is a plus Excellent command of Excel Precision, attention to detail, and a team-oriented attitude Interest in product development and industrialization processes Place of work: Montebelluna (TV) If you are interested in working in a stimulating, international, and dynamic environment, apply now!
Give it a try and learn what the market has to offer – our services are free of charge, non-binding and discreet! We look forward to hearing from you. Design and implement a SQL-based landing zone for regulatory data Develop stored procedures for transformation, enrichment, and aggregation Build and operate high-volume batch processing chains for monthly/quarterly cycles Implement SSIS-based ingestion flows and job orchestration Ensure data quality, technical lineage, and full traceability across layers Define and document integration patterns and mapping logic between landing-zone datasets and Tagetik-based reporting templates Perform operational monitoring, troubleshooting, and performance optimization Strong expertise in Microsoft SQL Server and T-SQL Hands-on experience with stored-procedure-driven ETL and complex data models Solid SSIS skills for orchestration and control of processing chains Experience with batch processing, logging, restartability, and performance tuning Knowledge of data lineage, reconciliation, and regulatory processing needs Experience with reporting platforms such as Tagetik is a plus Familiarity with Oracle source systems is advantageous Renowned Client Remote Option Your contact: Eliška Stejskalová, reference number 862801/1, e-mail: eliska.stejskalova@hays.at. Employment type: freelance for a project.