
  • Posted: Oct 19, 2023
    Deadline: Not specified

    Glencore is one of the world’s largest global diversified natural resource companies. As a leading integrated producer and marketer of commodities with a well-balanced portfolio of diverse industrial assets, we are strongly positioned to capture value at every stage of the supply chain, from sourcing materials deep underground to delivering products to...

     

    Data Engineer - JHB

    Accountability:

    • The role develops, manages, and optimises complex data architectures, pipelines, and ETL processes, which are central to efficient data handling and the derivation of actionable insights.
    • By introducing industry best practices into the organisation's digital projects, the position ensures a maximised constructability profile.
    • The role also stands as a guardian of data, ensuring its security, separation, and compliance across Glencore's international boundaries, while satisfying the multifaceted data infrastructure needs of the business.
    • The role supports the business plan through the creation and meticulous maintenance of optimal data models and pipelines, bolstering the business's vision of data-driven decision making through optimal systems and processes.
    • This position enhances efficiencies for different teams by promoting automated processes that replace manual labour and by enhancing data delivery systems, allowing team members to focus on tasks that directly add value.

    Key Relationships

    • Lead of Data Engineering
    • Business Analyst, Software Developers
    • Global Digital & Analytics Manager
    • Regional Technology Transformation Manager
    • Operational Excellence Managers
    • DevOps Architect
    • Data Scientist

    Qualifications & Skill Requirements:

    • An undergraduate qualification (Bachelors / Honours degree or equivalent) in a relevant IT, software, computer science or engineering discipline.
    • MBA/MIS or tertiary qualification desirable
    • Agile certification desirable
    • Lean Six Sigma certification desirable
    • Project Management certification desirable

    Competencies:

    • Ensures Accountability
    • Develops Talent
    • Drives Engagement
    • Instils Trust
    • Collaborates
    • Customer Focus
    • Plans and Aligns
    • Strategic mindset
    • Communicates effectively
    • Decision Quality
    • Courage
    • Being Resilient

    Work Experience:

    • Direct experience and a very good understanding of the need for formal project disciplines.
    • 8+ years of work experience
    • 5+ years' experience as a Data Engineer
    • Experience in industry projects, mining or similar is desirable
    • Experience building and optimising ‘big data’ data pipelines, ETLs, architectures and data sets
    • Experience managing, maintaining, and improving quality in data pipelines
    • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
    • Experience manipulating, processing, and extracting value from large, disconnected datasets
    • Strong experience in SQL applied on different platforms
    • Experience in Python, PySpark, RedPanda, Dagster, Kubernetes (developer), and Docker.
    • Experience with big data and cloud development tools such as Databricks, Data Factory, Kafka, Spark, Hadoop, and microservices
    • Experience building reports using Power BI or similar
    • Experience using Application Lifecycle Management (ALM) platforms such as JIRA, Azure DevOps, or similar
    • Multi-tasking and excellent management of personal time and priorities is essential due to involvement in different time zones.
    • Strong verbal and written communication skills that work effectively with technical and non-technical audiences.
    • Demonstrated, well-developed judgment and problem-solving skills.
    • Strong analytic skills related to working with unstructured datasets.
    • Proficient in developing with Python, R, Java, C, and/or C++
    • Proficient in developing API integrations
    • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
    • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
    • Experience supporting and working with cross-functional teams in a dynamic environment.

    General Accountability:

    • Data Architecture and Database Management:
      - Design, construct, install, and maintain large-scale processing systems and other infrastructure.
      - Ensure the architectural integrity and efficiency of databases.
    • ETL Processes:
      - Develop and manage ETL (Extract, Transform, Load) processes to move data among systems.
      - Monitor ETL processes to ensure accuracy and performance.
    • Data Pipeline Construction:
      - Build and maintain the infrastructure required for optimal extraction, transformation, and loading of data from various sources.
    • Data Quality and Compliance:
      - Ensure the quality, consistency, and security of data.
      - Implement measures to maintain data integrity across all platforms and ensure data privacy and compliance with relevant regulations.
    • Collaboration:
      - Work closely with data scientists, analysts, and other stakeholders to ensure that the data infrastructure aligns well with organizational needs.
      - Collaborate with system architects to ensure alignment between data and system architectures.
    • Performance Optimization:
      - Continuously monitor, refine, and report on the performance of data management systems.
      - Identify, design, and implement internal process improvements.
    • Scalability and Storage Solutions:
      - Design and implement scalable systems, ensuring that data solutions can handle growth.
      - Manage and optimize data storage solutions, including databases and cloud storage.
    • Liaise with business and other stakeholders to elicit and clarify data requirements, including assessing implementation feasibility and delivery.
    • Perform tasks in accordance with quality procedures or architecture requirements as applicable.
    • Apply innovative data techniques to identify and solve the most pressing problems within the copper business
    • Build and maintain advanced analytics platforms, tools, and products
    • Build and maintain data pipelines to capture, clean, and democratise information
    • Lead, design, and build tactical data architectures and databases to support internal product development
    • Lead, design, and build highly complex solutions in collaboration with data scientists, full-stack developers, and subject matter experts
    • Produce high-quality code using internal guidelines, frameworks, and documentation to scale and maintain the internal technology stack
    • Follow internal Continuous Integration and Continuous Deployment practices
    • Ensure adherence to, and the performance of, advisory systems and products.

    Method of Application

    Interested and qualified? Go to Glencore on glencorejobs.nga.net.au to apply

