
Data Engineer

About the organization

ECOSOC (United Nations Economic and Social Council)

Organization type: Multilateral Organization

In A Nutshell

Location: Hybrid (Beirut, Lebanon)
Job Type: Full-time
Experience Level: Mid-level
Deadline to apply: July 27, 2024

Support efforts to modernize, transform, and innovate ICT across the UN, while strengthening governance and optimizing resources.

Responsibilities

  • Manage individual projects regarding the optimal extraction, transformation, and loading of data from a wide variety of sources into data pipelines, as well as the creation and maintenance of data catalogues.
  • Design and develop solutions for data-related technical problems and data infrastructure needs surfaced by Executive, Analytics, and Design teams.
  • Manage the identification, design, and implementation of internal process improvements: automating manual processes, optimizing data delivery, and re-designing architecture for greater scalability.
  • Deploy resources to extract data features from complex datasets for data scientists and data analysts.
  • Manage the implementation of changes to data systems to ensure compliance with data governance, protection, privacy, and security requirements.
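To give a concrete feel for the extraction, transformation, and loading work described above, here is a minimal, hypothetical ETL sketch in Python. All names, fields, and sample values are invented for illustration; a real pipeline at this scale would use dedicated orchestration and warehouse tooling rather than in-memory SQLite.

```python
import csv
import io
import sqlite3

def extract(csv_text):
    # Extract: parse raw rows from a source (here, an in-memory CSV string
    # standing in for a flat file, API response, or message payload).
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    # Transform: normalize field values and types before loading.
    return [(r["indicator"].strip().lower(), int(r["value"])) for r in rows]

def load(records, conn):
    # Load: write the cleaned records into the target table.
    conn.execute("CREATE TABLE IF NOT EXISTS stats (indicator TEXT, value INTEGER)")
    conn.executemany("INSERT INTO stats VALUES (?, ?)", records)
    conn.commit()

raw = "indicator,value\n Literacy ,95\n Enrolment ,88\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
print(conn.execute("SELECT indicator, value FROM stats").fetchall())
# -> [('literacy', 95), ('enrolment', 88)]
```

The same three-stage shape scales up: swap the in-memory string for a queue consumer or database connection on the extract side, and a warehouse writer on the load side.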

Skillset

  • A minimum of five years of progressively responsible experience in data management, integration, modeling, optimization, and other relevant areas is required.
  • An advanced university degree (Master’s degree or equivalent) in computer science, statistics, applied mathematics, data management, information systems, information science, or a related quantitative field is required.
  • Experience supporting cross-functional teams and collaborating with stakeholders in support of analytics initiatives is required.
  • Experience in programming languages such as Python or R is required.
  • Experience with database programming languages (e.g., SQL) is required.
  • Experience in designing data integration and pipeline architectures, including ingesting data through methods such as message queues, database connections, flat files, REST, or specific APIs, is required.
  • Experience with DevOps toolchains built on tools such as Git and Jenkins or Bamboo (or equivalent tools), and with the deployment of data pipelines, is required.
  • Experience delivering big data use cases, including projects using technologies such as Apache Spark or Hadoop, is desirable.
  • Experience in Artificial Intelligence, particularly Machine Learning techniques for mining and extracting large amounts of data, is desirable; familiarity with concepts of bias and fairness (i.e., how to identify, assess, and minimize harmful biases, discrimination, and unfair outcomes in algorithmic systems and data) is especially valued.
  • Experience working with self-service analytics applications such as Microsoft Power BI, Tableau, Qlik, and others for data discovery is desirable.
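On the bias and fairness point above, one simple, widely used check is the demographic parity gap: the difference in positive-prediction rates between groups. The sketch below is a hypothetical illustration with invented data; it is one of many fairness metrics, not a complete audit.

```python
def demographic_parity_gap(predictions, groups):
    # Positive-prediction rate per group, then the spread between the
    # highest- and lowest-rate groups. A large gap flags potential bias.
    rates = {}
    for g in set(groups):
        picks = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 0, 1, 0]           # binary model outputs (invented)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # group membership (invented)
print(demographic_parity_gap(preds, groups))
# -> 0.5
```

A gap of 0.5 here means group "a" receives positive predictions at a rate 50 percentage points higher than group "b", the kind of disparity the role would be expected to surface and investigate.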
