“Speed isn’t a bonus, it’s the requirement.” Petr Filipský on performance, latency, and data at scale
Nov 26, 2025
You recently gave a talk at Matfyz in Prague about what happens “under the hood” of a limit order book, how implementation choices affect latency, and why real-world data access patterns and profiling matter. But latency wasn’t your original topic – you started out as a computer graphics student. How did the path from late-90s Matfyz lead you to heading a C++ development team at an algorithmic trading firm?
I might disappoint you, but it’s not some dramatic legend about how I’d known since childhood that I would be a programmer. I had an older brother, and wherever he went, I went. He’d been obsessed with programming since he was little and grew up with it back when things worked very differently from today.
When did it click for you?
When my brother was in his first year at Matfyz, he spent a lot of time in the computer lab in Troja – and I, still a high-school student at the time, went there with him. We didn’t have a computer at home then, so this was my first systematic contact with PCs and with the idea that you can build something yourself. I’d go there to program, try things out, make games, play around with graphics.
So by the time you applied to Matfyz, it was a natural next step?
I studied there from 1994 to 2000, and that was a period when technology was changing incredibly fast. When people say “the rise of IT” today, it may sound abstract – but for us it was literally happening in our hands. It was the dot-com era, when computers were starting to connect to the internet en masse, new companies, fields, and technologies were emerging, and every year brought a major shift. The atmosphere was full of expectations and new possibilities – a bit like today with artificial intelligence. Back then, too, completely new paths were opening up and no one really knew where it would lead, but everyone could feel that something fundamental was changing.
You chose computer graphics as your field.
I was fascinated by what happens between data and image. A key person for me was Docent Pelikán – a brilliant teacher who got me genuinely excited about graphics and working with complex hardware. All of a sudden it wasn’t just ideas, but a real world of performance: volume data visualization, OpenGL, the first specialized graphics cards that started taking part of the workload off the CPU, and on top of that Silicon Graphics workstations – the cutting edge of what was available at the time.
How did technology jump forward while you were studying?
It was fascinating to watch the contrast: at home I had a 386 without a floating-point coprocessor, where every calculation took forever, and right next to that the university had extremely powerful machines optimized precisely for graphics operations.
But over just a few years that gap started to narrow dramatically. When my brother and I managed to get a 3dfx Voodoo Banshee card with hardware OpenGL acceleration for our PC, it suddenly became clear that even a home computer could compete with those million-crown graphics workstations in some tasks – and even outperform them in others. That was astonishing. Technology was evolving so fast then that practically every year we had to upgrade or replace the whole computer.
What from your studies stayed with you in the years that followed?
Through my thesis – volume data visualization in OpenGL – I started to care about where performance comes from, where it gets lost, how data is transformed into an image, and how the whole performance “stack” works from hardware all the way up to the application. Matfyz taught me to think systemically: not just to write something that works, but to understand the complexity of a solution and translate it into real, practical efficiency in the resulting code. Ever since then I’ve thought of myself as a systems programmer – I’m interested in how software runs on the metal.
After graduation I joined a company that was developing a new product from scratch, and that’s where I first encountered data at real scale. It wasn’t enough for the system to do the right thing; it had to handle a huge amount of work in a short time. That’s what I enjoyed, because performance wasn’t some nice extra — it was part of the definition. Until it was fast enough, it wasn’t done.
Speed is part of the definition in algorithmic trading too. Did you know anything about financial markets and automated trading when you joined Qminers?
I came to Qminers at a time when the company was much smaller than it is today and had a very startup-like atmosphere. I knew practically nothing about trading back then, but I liked how sensible the codebase looked, how meaningfully things were done, and above all the people and the culture. What excited me was the approach to development – as soon as something was fixed or improved, it showed up in production right away. For a programmer who likes simplicity, well-run code, and a visible impact of their work, that’s genuinely attractive.
How quickly did you get to the thing you focus on so intensely now – latency?
More or less right away. Before, I dealt mainly with throughput; here you deal with both. In backtests you want to push a large volume of data through the system as fast as possible, but in production what matters most is how quickly you react to a market impulse. Sometimes those two go hand in hand, and sometimes they pull against each other. It’s often a very delicate balancing act, because you can’t simply calculate how many extra percentage points of return lower latency will bring in practice, or what it “costs” you – for example in code complexity. You’re looking for a sweet spot between fast backtests and fast reaction to the market. It’s a mix of technical expertise, experience, and intuition – and of course also hard measurement data and how you interpret it.
Your team programs in C++. Why that language? What’s the advantage?
If you care about how software really runs on hardware – how efficiently it uses CPU, memory, and time – C++ is still an excellent tool. Today’s compilers generate extremely efficient machine code while still giving you a lot of control over performance.
And Rust? A lot of people say that’s the future. How do you see it?
Rust is very popular today, and I see it as a genuinely interesting successor. It still gives you enough control over what happens under the hood while bringing greater safety and eliminating whole classes of bugs at compile time. The borrow checker gives you much stronger guarantees that your code doesn’t contain “sleeping” undefined-behavior bugs. For a new project, Rust can make a lot of sense. At the same time, the reality is that C++ will be around for a very long time, because the world is full of enormous C++ codebases that no one is going to rewrite. So even today, the advice for a young developer is: learn both.
Since we’re talking about young people – what would you say to today’s Matfyz students?
I feel like students today don’t have it easy at all. When I was studying, things were clearer. Today there are more options, more noise, more uncertainty – and on top of that AI. Everywhere you look people are debating who AI will replace and whether it even makes sense to learn programming when “a model can write it for me.”
I think the quality of the output will still depend on the quality of the prompt and on a person’s know-how for a long time. If a programmer doesn’t know what’s good and what’s bad, they can’t guide AI effectively. AI is a great tool and a huge efficiency booster — but only if you know how to ask questions and how to judge the result. You own the code, not the model. And solid foundations – algorithms, data structures, system properties, performance – matter more than ever. If you understand what’s happening under the hood, you’ll be able to use AI to the fullest.