Machine learning is making inroads into every aspect of business life and asset management is no exception. Here are six ways in which machine learning has transformed the field – from the feel of the trading floor to the ideal skillset.
Most flow trading done by banks has already been fully automated. While 20 years ago products such as cash equities or foreign exchange were mostly traded by humans, often with hundreds of traders occupying the trading floors, shouting “buy” or “sell” orders, today most market makers rely on algorithmic execution and automated inventory management. In fact, many institutional orders are not executed by hand either; they are routinely routed to execution algorithms designed to minimize market impact and trading costs. Obviously, this shift has resulted in substantial changes to the industry, with banks investing heavily in trading platforms, and many traditional flow traders losing their jobs.
Another change to the playing field came with the new regulations following the recent global financial crisis, with more complex products (such as structured derivatives) now requiring much higher capital allocation. Increased capital requirements, together with the limitations on proprietary trading (driven by the Volcker rule), changed the profitability of many traditional financial products and made some business lines simply not viable from an economic point of view. As a result, the focus of trading desks shifted to optimal capital allocation rather than taking directional bets or looking for more sophisticated models.
As execution systems become more complex, it’s difficult to achieve high trading volume without facing major operational risks. Making matters worse, there is almost no quantitative way to measure and manage most of this exposure, so banks are forced to develop their own internal operational risk frameworks. At the same time, the downside can be really large and manifests itself not only in the direct loss to the bank following a wrong line of code or another glitch in the system. Since the frequency and severity of operational incidents is directly tied to the capital that banks need to allocate against operational losses, the total cost of running this risk is actually much larger. As a result, banks are incentivised to invest in the reduction of operational risks even if the glitches do not lead to immediate losses.
This is a really tough question. Generally, the past doesn’t predict the future, so even a carefully back-tested algorithm can backfire if the market changes. Furthermore, extensive back-testing can easily lead to so-called “p-hacking”, where the strategy looks really good simply due to chance (and carefully chosen parameters). Also, most of the really profitable algorithms have a very limited lifetime, because market participants tend to discover and arbitrage away emerging opportunities. No matter what people tell you, there is no (regular) free lunch in the market without a private advantage in information or speed, and therefore firms will always take into account the potential losses of such algorithms. As a result, the amount of exposure and allocated capital that a single algorithm may utilise should always be limited, no matter how good it seems.
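The p-hacking effect is easy to demonstrate with a toy simulation (a minimal sketch, not a real back-testing framework; the strategy count, horizon, and return distribution below are illustrative assumptions). Each “strategy” here is pure noise with zero true edge, yet if you try enough of them and keep only the best in-sample Sharpe ratio, the winner looks like a genuinely profitable algorithm:

```python
import random

random.seed(0)

N_STRATEGIES = 200   # number of candidate strategies tried (assumed)
N_DAYS = 252         # one year of daily returns

def sharpe(returns):
    """Annualised Sharpe ratio of a list of daily returns."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return mean / var ** 0.5 * 252 ** 0.5

# Every "strategy" is pure noise: daily returns drawn from a
# zero-mean distribution, so the true edge of each one is zero.
sharpes = []
for _ in range(N_STRATEGIES):
    rets = [random.gauss(0.0, 0.01) for _ in range(N_DAYS)]
    sharpes.append(sharpe(rets))

best = max(sharpes)
avg = sum(sharpes) / len(sharpes)

# The average strategy correctly looks worthless, but the best of
# 200 lottery tickets looks impressive purely by selection.
print(f"average in-sample Sharpe: {avg:.2f}")
print(f"best    in-sample Sharpe: {best:.2f}")
```

The selected “best” strategy would, of course, revert to zero edge out of sample, which is exactly why back-tested performance alone should never determine how much capital an algorithm gets.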
“Generally, the past doesn’t predict the future, so even a carefully back-tested algorithm can backfire if the market changes”
Absolutely, and the importance of data has increased tremendously over the past years. There are so many applications and innovations we never thought possible before. However, the nature of this dependence varies across areas of finance. For example, some would say that trading illiquid products, such as distressed debt, doesn’t rely much on publicly available data (which is usually scarce). At the same time, a market-making platform trading liquid products generally makes decisions based only on the available real-time data feeds. For them it is paramount to have the best information possible, which means processing a giant feed of data in a fraction of a second, and doing it in a smart way.
They are cheap! Many leading brokerages have reduced ETF trading commissions to almost zero even for their retail clients, who usually pay the most. But even more importantly, ETFs allow investors to obtain complex and at the same time liquid exposure to diverse portfolios, which would be far more expensive and time-consuming to construct otherwise. It’s a legitimate shortcut, really: simple to use and cheap to buy.
There are many fields, such as investment banking or private equity, that are mainly driven by relationships and qualitative analysis. At the moment, they do not require analysts to handle large data sets or to make automated decisions. In addition, a lot of the models for pricing complicated structured products, requiring both advanced stochastic calculus and fast numerical methods, have already been coded and are now part of the banks’ pricing systems. Understanding them, along with their assumptions, limitations, and possible implementation and interpretation by market participants – now that’s the skill that does not get old. With so many models and formulas already available at your fingertips, a real insight into them is essential for a good career in banking, be it sales, trading, or structuring new products.
But of course, there are many growing fields, such as algorithmic trading or some niche areas in derivatives, that employ complex IT systems for managing existing inventory, executing orders, or pricing portfolios of securities. Naturally, people working in these fields need to either develop the existing systems or conduct research involving the analysis of large data sets. There, excellent coding skills are essential.
Svetlana Bryzgalova is an Assistant Professor of Finance at London Business School.