
  • Posted: Mar 23, 2022
    Deadline: Not specified

    Absa Group Limited (Absa) has forged a new way of getting things done, driven by bravery and passion, with the readiness to realise the possibilities on our continent and beyond.

     

    Big Data Services Lead

    Job Summary

    Leveraging a thorough understanding of business data requirements and deep data subject-matter expertise (staying ahead of trends), set the data architecture direction and lead the agile solution design, deployment, implementation and ongoing optimisation of enterprise-wide data retrieval, storage and distribution across the estate.
    Job Description

    Team Context

    Data Engineering is responsible for the central data platform that receives and distributes data across the bank. This is a multi-platform environment and leverages a blend of custom, commercial and open-source tools to manage and support thousands of critical data-related jobs. These jobs are supported and updated in line with changes across the landscape to avoid disruption to downstream data consumers.

    Role Description

    The Big Data Services Lead role is responsible for driving optimization of the end-to-end environment. This includes the Hadoop platform, scheduling, automation, monitoring and the various aspects of supportability and efficiency required to manage a large, complex environment.

    A Big Data Services Lead is a professional responsible for developing Hadoop applications, who knows the components of the Hadoop ecosystem, understands how those components fit together, and can decide which component best suits a specific task. In this role, you will be part of the Data Operations team, which supports all applications on the Hadoop ecosystem. The role also extends to maintaining changes to datasets and driving optimisation activities across all applications, including new development. It therefore requires a solid grounding in programming in order to manage Big Data and move data into Hadoop.

    Education: Bachelor’s degree in Computer Science, Information Systems or related field.

    Responsibilities 

    • Manage an assigned team through day-to-day support tasks
    • Oversee enhancements and new developments, providing guidance and peer review
    • Build and deploy new data pipelines
    • Oversee development plans for the team and provide mentorship
    • Identify and drive optimisation opportunities across the environment
    • Drive improvements to environment supportability and maintain effective cross-team relationships
    • Test changes to internal tools and approve releases, with a focus on optimising test execution
    • Manage the handover of new applications, ensuring that required standards and practices are met
    • Advise users on best practices and conduct training sessions where required
    • Define and continuously improve the team's best-practice guides and standards
    • Translate complex functional and technical requirements into detailed designs for the team to build

    Job Experience & Skills Required:

    • 4 years' experience working in a Big Data environment, optimising and building big data pipelines, architectures and datasets with technologies such as Java, Scala, Python, Hadoop, Apache Spark and Kafka
    • Cross-domain knowledge
    • Experience designing and building BI systems and complex data ecosystems
    • Minimum of one year's experience with the Scala programming language
    • Familiarity with the Hadoop ecosystem and its components
    • Solid working experience in Big Data development using SQL or Python
    • Experience in Big Data development using Spark
    • Experience in Hadoop, HDFS and MapReduce
    • Experience in database design, development and data modelling

    The following additional knowledge, skills and attributes are preferred:

    • Good knowledge of back-end programming, specifically Java
    • Understanding of Cloud technologies and migration techniques
    • Understanding of data streaming and the intersection of batch and real time data
    • Experience with development in a Linux environment and its basic commands
    • Ability to write reliable, manageable, and high-performance code
    • Basic knowledge of SQL, database structures, principles and theories
    • Knowledge of workflow/schedulers
    • Strong collaboration and communication skills
    • Strong analytical and problem-solving skills
    • Knowledge of data architecture and security
    • Knowledge of data management

    Closing Date: 4 April 2022

    Method of Application

    Interested and qualified? Go to Absa Group Limited (Absa) on absa.wd3.myworkdayjobs.com to apply
