Tech intern projects at Optiver Amsterdam

This summer, Optiver’s Amsterdam office hosted a group of tech interns eager to tackle the challenges of market making. Beyond just theory, they worked hands-on with our core trading technologies, directly engaging with some of the most interesting technical challenges in the financial industry.

In this blog post, our Software Engineering interns delve into their individual projects. Through their first-hand accounts, gain insight into the challenges they faced, the knowledge they acquired and the innovative solutions they devised over just six weeks.

Challenge

Working at Optiver, software engineers encounter multiple interesting problems daily. In the Infra Enterprise team, we are responsible for providing reliable infrastructure for software engineers, traders, researchers, and other teams.

During my internship, I was tasked with improving an existing infrastructure setup for an external-facing application requiring high availability. The goal was to free developers from concerns about the application’s connectivity, security, monitoring, and reliability.

Approach

My project involved using Infrastructure as Code, with Terraform, to provision the application platform infrastructure in AWS. The first step was creating the base infrastructure, starting with a VPC (Virtual Private Cloud). Within this VPC, I created public and private subnets in different availability zones; launching nodes across multiple availability zones maximises fault tolerance. Applications were to be deployed within these subnets. For enhanced functionality and security, I integrated NAT Gateways, route tables, NACLs, and WebACLs.
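To make this concrete, here is a minimal Terraform sketch of that kind of base networking. The region, CIDR ranges and resource names are illustrative placeholders, not the actual setup.

```hcl
# Sketch: a VPC with a public and a private subnet in different
# availability zones, and a NAT Gateway for outbound traffic
# from the private subnet.
resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16" # placeholder range
}

resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.app.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "eu-west-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private_b" {
  vpc_id            = aws_vpc.app.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "eu-west-1b"
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "egress" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id
}

# Route outbound traffic from the private subnet through the NAT Gateway.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.app.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.egress.id
  }
}

resource "aws_route_table_association" "private_b" {
  subnet_id      = aws_subnet.private_b.id
  route_table_id = aws_route_table.private.id
}
```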

Next, I focused on the application infrastructure. For each application, I implemented code to provision an autoscaling group: a collection of EC2 instances that hosts the app and can scale up or down based on load. Scaling rules trigger when CPU usage exceeds a certain threshold, launching an additional server. These instances are connected to load balancers that distribute traffic amongst them using a sticky round-robin algorithm, ensuring that end users can access the application continuously. This is further supported by a rolling deployment strategy for all servers.
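Building on the networking sketch above, the pattern for a single application might look roughly like the following: an autoscaling group with a CPU-based scaling policy and a rolling instance refresh, attached to a load balancer target group configured for sticky round-robin routing. All names, sizes and thresholds here are placeholders rather than the production values.

```hcl
# Placeholder launch template describing the EC2 instances to run.
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"
}

# Target group with sticky round-robin routing.
resource "aws_lb_target_group" "app" {
  name     = "app-tg"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = aws_vpc.app.id

  load_balancing_algorithm_type = "round_robin"

  stickiness {
    type    = "lb_cookie" # keep a given user on the same instance
    enabled = true
  }
}

resource "aws_autoscaling_group" "app" {
  min_size            = 2
  max_size            = 6
  vpc_zone_identifier = [aws_subnet.private_b.id]
  target_group_arns   = [aws_lb_target_group.app.arn]

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }

  # Replace instances gradually so the application stays available
  # during deployments.
  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 50
    }
  }
}

# Scale out when average CPU utilisation exceeds the target value.
resource "aws_autoscaling_policy" "cpu" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.app.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60
  }
}
```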

This approach was notably different from the previous one, where Elastic Beanstalk, an existing AWS service, was used. Instead, I opted for a method that offers more configurability and flexibility. I created Terraform modules that could be reused for other product infrastructures in the future. Additionally, I implemented a CI/CD pipeline to automate the provisioning of infrastructure to the cloud.
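With the resources wrapped into a reusable module, provisioning the platform for another product then reduces to instantiating that module with different inputs. The module path and variable names below are hypothetical, purely to illustrate the idea.

```hcl
# Hypothetical reuse of a shared platform module for another application.
module "pricing_api_platform" {
  source = "./modules/app-platform"

  app_name           = "pricing-api"
  instance_type      = "t3.small"
  min_size           = 2
  max_size           = 8
  availability_zones = ["eu-west-1a", "eu-west-1b"]
}
```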

Results

After implementing the deployment pipeline, the system was ready for migration to production. This project highlighted the importance of understanding infrastructure challenges. It reinforced the idea that infrastructure serves as a foundational tool for other teams and that the reliability of this system has a direct impact on their work.

Challenge

Configuring our autotrader components in production is always challenging, due to the many upstream dependencies and performance requirements. Moreover, more often than not, manual work is needed, making the whole process time-consuming and error-prone. In our trading operations team, we have built an automation tool to generate all the configuration files required for a specific autotrader in a reliable way.

In my project, I was tasked with designing a solution for automated configuration generation that can be extended to many other autotraders.

Approach

The design stage was fundamental and was grounded in extensive research: understanding the differences between components, the limitations of the current configuration structure, and how my tool could compensate for the lack of standardisation between components. It was enlightening to talk to different developers and understand the reasoning behind their decisions and how these align with Optiver’s goals.

As part of my project, I also had to contribute to an existing library used by many other tools. The impact of my changes wasn’t limited to my project, but could potentially touch other tools and components. Seeing how meaningful my work was to the company was an amazing experience, and I felt confident taking ownership and steering the project in the direction I thought it should go.

Implementing the tool itself proved to be a continuous improvement process, where feedback was integrated to enhance the quality of the results. With the support from my mentor and other team members, I was able to quickly execute and deliver multiple prototypes that guided further development.

Results

The end product is a command line tool that configures autotraders automatically, generating all the necessary files in the production configuration repo. Its standout feature is its adaptability: the tool isn’t just limited to one specific type of autotrader but can be extended to many others. It achieves this by decoupling the format structure of a component from the process of collecting the requisite information for the configuration files.

The technical challenges made the project truly interesting: learning a completely new language generator, using my creativity and critical thinking to determine the best design for the tool, and deep diving into existing code to make a meaningful contribution, all while learning about Optiver’s infrastructure. One key takeaway is that decisions should be made with the trading problem in mind, establishing the main priorities and objectives. And, in the end, with the right mix of these ingredients, even making automation scalable is possible.

Are you our next intern?

Apply now for our 2024 Amsterdam internship program and get ready for an 8-week immersive learning experience.
