New research argues that although similar, repeated patterns of information are detectable in both low- and high-frequency financial market data in South African markets, predictability does not easily imply profitability without large-scale asymmetric market access.

The research was conducted by the Statistical Science Department at the University of Cape Town (UCT) and the School of Computer Science and Applied Mathematics at the University of the Witwatersrand.

The work was completed by Fayyaaz Loonat (Deloitte) in conjunction with UCT Associate Professor Tim Gebbie.

Loonat and Gebbie considered how an investor should distribute wealth among several stocks and trading strategies on a given trading day, assuming that what has happened in the past is likely to happen again, and that a pragmatic investor cares only about maximising wealth, irrespective of risk.

The duo first independently replicated prior studies, then extended the work to a new data set with various algorithmic refinements.

The approach was broadly based on the work of information theorist Thomas Cover and his idea of universal portfolios, combined with the pattern-matching algorithm extensions developed by László Györfi, Frederic Udina and Harro Walk. These were implemented in an optimised manner and applied to new data to explore predictability.
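To illustrate the flavour of such pattern-matching strategies, here is a minimal Python sketch, not the authors' implementation: it matches the most recent window of price relatives against history with a nearest-neighbour search, then seeks an approximately log-optimal portfolio over the returns that followed the matched windows. The function name, window length, neighbour count and the crude gradient-based optimiser are all illustrative assumptions.

```python
import numpy as np

def pattern_match_portfolio(price_relatives, window=5, k=10):
    """Toy nearest-neighbour pattern-matching portfolio (Gyorfi-style sketch).

    price_relatives : (T, m) array of gross returns p_t / p_{t-1} for m stocks.
    window          : length of the recent pattern to match against history.
    k               : number of nearest historical neighbours to keep.
    Returns a long-only portfolio (weights summing to one) for the next day.
    """
    T, m = price_relatives.shape
    recent = price_relatives[-window:].ravel()

    # Compare the latest window with every earlier window of the same length.
    days, dists = [], []
    for t in range(window, T - 1):
        past = price_relatives[t - window:t].ravel()
        days.append(t)                       # the day that followed this window
        dists.append(np.linalg.norm(past - recent))

    nearest = [days[i] for i in np.argsort(dists)[:k]]
    followers = price_relatives[nearest]     # returns observed after similar patterns

    # Approximate the log-optimal portfolio over the matched sample with a few
    # steps of projected gradient ascent on the empirical mean log-wealth.
    b = np.full(m, 1.0 / m)
    for _ in range(200):
        grad = (followers / (followers @ b)[:, None]).mean(axis=0)
        b = np.clip(b + 0.05 * grad, 0.0, None)
        b /= b.sum()
    return b

# Usage on synthetic data (purely illustrative):
rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=0.01, size=(500, 4))  # synthetic price relatives
print(pattern_match_portfolio(x))
```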

In this approach, each trader or investment agent is represented by a strategy that defines similarities or patterns in the market across time, and the sole objective of each agent is to maximise its long-term wealth.
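One simple way to realise Cover-style wealth maximisation over a pool of such agents is to mix their portfolios in proportion to the wealth each agent would have accumulated so far. The sketch below is again an illustrative assumption rather than the published algorithm; the function name and array shapes are hypothetical.

```python
import numpy as np

def aggregate_experts(expert_portfolios, price_relatives):
    """Mix expert portfolios in proportion to their accumulated wealth.

    expert_portfolios : (T, n, m) portfolio chosen by each of n experts each day.
    price_relatives   : (T, m) realised gross returns for the m stocks.
    Returns the day-by-day aggregated portfolios and final wealth per expert.
    """
    T, n, m = expert_portfolios.shape
    wealth = np.ones(n)                  # every expert starts with unit wealth
    combined = np.empty((T, m))
    for t in range(T):
        weights = wealth / wealth.sum()  # performance-weighted mixture
        combined[t] = weights @ expert_portfolios[t]
        # Update each expert's wealth by the return of its own portfolio.
        wealth *= expert_portfolios[t] @ price_relatives[t]
    return combined, wealth
```

Under this weighting, agents whose patterns keep paying off come to dominate the aggregate portfolio, while persistently poor agents fade away without ever being explicitly removed.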

The research matters both as a case study in managing big-data pipelines and high-performance computing infrastructure, and as a demonstration that even when predictable patterns appear to exist, they are not easily profited from because of business-model and institutional constraints.

Gebbie comments: “We are far from being able to trust studies that claim profitability from stock market prediction because almost all are plagued with data-overfitting (the probability of back-test overfitting) but this makes online learning (real-time incremental adaptive learning) interesting as these algorithms have a more nuanced approach to dealing with generalisation errors.”

In addition, he points out that a key feature of modern financial markets is that institutions can shape the behaviours of agents, and that asymmetric access to capital and market infrastructure can lead to various apparent arbitrages that cannot be easily accessed without large-scale bulk trading.

Gebbie argues that industrialising research and development at the institutional level will become the norm in this space, as it is the only effective way to exploit much of the apparent predictability in financial market data, given both the infrastructure required and the need for highly integrated cross-market bulk trading.

The authors believe it is important to understand stock markets, and in particular which profitable strategies and market properties can be induced by top-down regulation and various top-down barriers to entry.

Such an understanding would not only allow simple computer-based learning agents to outperform human traders and investors but, more importantly, could teach us how institutions create (and destroy) predictability in financial markets.

The study, “Learning zero-cost portfolio selection with pattern matching”, was published in PLOS ONE, a peer-reviewed open-access journal.