Description Summary:
A leading hedge fund is seeking a skilled Data Engineer to join its Credit Technology team. The team is responsible for building, owning, and supporting a world-class data platform for portfolio managers and their teams, and is developing the suite of core components that will underpin the team's offering for years to come. We are looking for an experienced, eager engineer to join the team in support of this mandate.
Role Overview:
- Design, build, and grow a modern data platform and data-intensive applications, from ingestion through ETL, data quality, storage, and consumption/APIs.
- Work closely with quantitative engineers and researchers.
- Collaborate in a global team environment to understand, engineer, and deliver on business requirements.
- Balance feasibility, stability, scalability, and time-to-market when delivering solutions.
Qualifications & Requirements:
- 5+ years of work experience in a data engineering or similar data-intensive capacity.
- Demonstrable expertise in SQL and relational databases.
- Strong skills in Python and at least one data-manipulation library/framework (e.g., Pandas, Polars, Dask, Vaex, PySpark).
- Strong debugging skills at all levels of the application stack and proven problem-solving ability.
- Strong knowledge of the data components used in distributed applications (e.g., Kafka, Redis, or other messaging/caching tools).
- Experience architecting and building data platforms and ETL pipelines (ideally both batch and streaming), including data lake, warehouse, and lakehouse patterns.
- Experience with column-oriented data storage and serialization formats such as Parquet/Arrow.
- Experience with code optimization and performance tuning.
- Excellent communication skills.
Additional experience in the following areas is a plus:
- Experience building application-level code (e.g., REST APIs to expose business logic).
- Prior use of tooling such as Prometheus, Grafana, or Sentry for monitoring, metrics, and distributed tracing.
- Experience with distributed stateful stream processing (e.g., Kafka Streams, Flink, Arroyo).
- Experience with financial instruments or financial software in areas such as research, risk management, portfolio management, reconciliation, or order management.
- Prior experience with ClickHouse, Snowflake, or KDB.
Compensation: up to $350,000 total compensation.