
From visible losses and invisible risks to new skills and opportunities
Machine learning is making inroads into every aspect of business life, and asset management is no exception. Here are six ways in which machine learning has transformed the field, from the feel of the trading floor to the ideal skillset.
Most flow trading done by banks has already been fully automated. Whereas 20 years ago products such as cash equities or foreign exchange were mostly traded by humans, often with hundreds of traders on the trading floor shouting "buy" or "sell" orders, today most market makers rely on algorithmic execution and automated inventory management. In fact, many institutional orders are not executed by hand either; they are routinely routed to execution algorithms designed to minimise market impact or trading costs. This shift has resulted in substantial changes to the industry, with banks investing heavily in trading platforms and many traditional flow traders losing their jobs.
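To make "execution algorithms that minimise market impact" concrete, here is a minimal sketch of one of the simplest such strategies: a TWAP (time-weighted average price) schedule that slices a large parent order into small child orders spread evenly over the trading day. The quantities, horizon and data structures below are illustrative assumptions, not any bank's actual system.

```python
from dataclasses import dataclass

@dataclass
class ChildOrder:
    side: str       # "buy" or "sell"
    quantity: int   # shares in this slice
    minute: int     # minutes after the start of the schedule

def twap_schedule(side: str, total_qty: int, horizon_min: int, slices: int) -> list[ChildOrder]:
    """Split a large parent order into equal child orders spread evenly
    over the trading horizon, so no single order moves the market much."""
    base, remainder = divmod(total_qty, slices)
    interval = horizon_min / slices
    orders = []
    for i in range(slices):
        qty = base + (1 if i < remainder else 0)  # spread the remainder over early slices
        orders.append(ChildOrder(side, qty, round(i * interval)))
    return orders

# Example: buy 100,000 shares over a 6.5-hour session in 26 slices.
for child in twap_schedule("buy", 100_000, horizon_min=390, slices=26)[:3]:
    print(child)
```

Real execution algorithms are far more elaborate (they react to volume, spreads and order-book depth), but the core idea of trading small and steadily to hide size is the same.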
Another change to the playing field came with the new regulations that followed the global financial crisis, under which more complex products (such as structured derivatives) now require much higher capital allocation. Increased capital requirements, together with limitations on proprietary trading (driven by the Volcker rule), changed the profitability of many traditional financial products and made some business lines simply uneconomic. As a result, the focus of trading desks shifted to optimal capital allocation rather than taking directional bets or searching for ever more sophisticated models.
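A back-of-the-envelope sketch of why higher capital requirements can kill a business line: a desk compares a product's expected annual P&L against the capital it ties up, and drops the product when the return on capital falls below the bank's hurdle rate. The figures and the hurdle rate below are illustrative assumptions, not actual regulatory numbers.

```python
def return_on_capital(expected_pnl: float, allocated_capital: float) -> float:
    """Expected annual P&L as a fraction of the capital the product consumes."""
    return expected_pnl / allocated_capital

HURDLE = 0.10  # illustrative 10% hurdle rate

# The same product, before and after an (assumed) rise in its capital charge.
for label, capital in [("pre-crisis capital", 20e6), ("post-crisis capital", 60e6)]:
    roc = return_on_capital(expected_pnl=4e6, allocated_capital=capital)
    verdict = "viable" if roc >= HURDLE else "uneconomic"
    print(f"{label}: RoC = {roc:.1%} -> {verdict}")
```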
As execution systems become more complex, it is difficult to achieve high trading volume without facing major operational risks. Making matters worse, there is almost no standard quantitative way to measure and manage most of this exposure, so banks are forced to develop their own internal operational risk frameworks. At the same time, the downside can be very large, and it manifests itself not only in direct losses to the bank following a wrong line of code or another glitch in the system. Since the frequency and severity of operational incidents are directly tied to the capital that banks must allocate against operational losses, the total cost of running this risk is actually much larger. As a result, banks are incentivised to invest in reducing operational risk even when glitches do not lead to immediate losses.
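One common way such internal frameworks quantify the link between incidents and capital is the loss distribution approach: model incident frequency and severity separately, simulate the aggregate annual loss, and read capital off a high quantile. The distributions and parameters below are illustrative assumptions, not any bank's calibration.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def simulate_annual_losses(n_years: int, freq_lambda: float,
                           sev_mu: float, sev_sigma: float) -> np.ndarray:
    """Loss distribution approach: Poisson incident counts per year,
    lognormal severity per incident, summed into an annual loss."""
    counts = rng.poisson(freq_lambda, size=n_years)
    return np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in counts])

losses = simulate_annual_losses(n_years=100_000, freq_lambda=5.0,
                                sev_mu=12.0, sev_sigma=1.5)

# Capital is often set at a high quantile of the annual loss distribution,
# e.g. 99.9%, the level used in Basel-style operational risk models.
print(f"Expected annual loss: {losses.mean():,.0f}")
print(f"99.9% quantile (op-risk capital proxy): {np.quantile(losses, 0.999):,.0f}")
```

The gap between the mean loss and the 99.9% quantile is exactly why reducing incident frequency pays off even when individual glitches cause no immediate loss: fewer incidents shrink the tail, and the tail drives the capital charge.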
Whether a backtested algorithm can be trusted is a really tough question. In general, the past does not predict the future, so even a carefully backtested algorithm can backfire if the market changes. Furthermore, extensive backtesting can easily lead to so-called "p-hacking", where a strategy looks good simply by chance (and through carefully chosen parameters). Also, most of the really profitable algorithms have a very limited lifetime, because market participants tend to discover and arbitrage away emerging opportunities. No matter what people tell you, there is no (regular) free lunch in the market without a private advantage in information or speed, and therefore firms will always take into account the potential losses of such algorithms. As a result, the exposure and allocated capital that a single algorithm may use should always be limited, no matter how good it seems.
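The p-hacking effect is easy to demonstrate: backtest many strategies that are pure noise, keep the best-looking one, and its in-sample Sharpe ratio looks impressive even though its true edge is zero. This is a self-contained illustration on simulated data, not a claim about any real strategy.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

N_STRATEGIES = 1_000   # candidate strategies tried in the "backtest"
N_DAYS = 252           # one year of daily returns

# Every strategy is pure noise: zero true mean, so the real Sharpe ratio is 0.
returns = rng.normal(loc=0.0, scale=0.01, size=(N_STRATEGIES, N_DAYS))

# Annualised in-sample Sharpe ratio of each strategy.
sharpe = returns.mean(axis=1) / returns.std(axis=1) * np.sqrt(252)

best = sharpe.argmax()
print(f"Best of {N_STRATEGIES} zero-edge strategies: Sharpe = {sharpe[best]:.2f}")
# Typically prints a Sharpe well above 2: a "great" backtest found by chance alone.
```

Selecting the winner out of a thousand random tries manufactures an apparent edge, which is precisely why exposure limits per algorithm remain prudent no matter how good the backtest looks.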