Glimpses into the offices of modern financial institutions reveal dizzyingly intricate algorithmic and computationally driven investment strategies. Machine learning techniques and the methods of applied physics confound the layman and foster a reputation of unapproachable complexity around the realm of quantitative finance.

The intricate probabilistic methods of academic economics and their required mathematical erudition seem to bar entrance to those more intent on cultivating the techniques of causal realism. Yet such models ultimately rest upon a fundamental assumption concerning the nature of human action, the validity of which is hardly addressed within academia and whose ontological fallaciousness fatally undermines their applicability.

Assumption

Mainstream academia’s modus operandi in the field of economics is the development of contrived quantitative models constructed from unrealistic premises concerning human behavior. Just as macroeconomic theory has been divorced from the most basic principles of human action, financial modeling’s unrealistic postulates are justified as approximately true of observed behavior, and thus effectively tenable in the aggregate. So long as a model’s internal consistency is (ostensibly) maintained, financial modeling’s concentration on large datasets and broad groups of market participants excuses a disregard for the realism of its premises.

While said probability models are intended to inform individual financial decisions, little attention is given to whether the assumptions from which their conclusions are derived are valid from a single agent’s perspective. Yet the most fundamental assumption held within quantitative finance is the treatment of economic and financial data as generally homogeneous, both across time and between individuals.

Put simply, the financial activity of numerous individuals is compiled together, compared across time, and treated as the output of a single data-generating process. Again, the uniqueness of individual financial decisions is not denied; it is merely claimed that statistical techniques applicable to homogeneous datasets become informative when applied to their aggregated outcomes. Such methods permit the identification of empirical frequencies, the construction of probability distributions, and the assignment of probabilities to various future outcomes (in short, the bedrock tools of modern probability theory).
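
To make the assumption concrete, here is a minimal sketch (in Python, with purely synthetic data and illustrative parameters, not any institution’s actual model) of the procedure just described: outcomes generated by very different actors are pooled into one sample, an empirical frequency is read off, and a single distribution is fitted to it in order to assign probabilities to future outcomes.

```python
# Sketch of the homogeneity assumption in practice: returns produced by very
# different actors are pooled into one sample and treated as draws from a
# single data-generating process. All data are synthetic and the parameters
# are illustrative assumptions, not estimates from any real market.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Three hypothetical groups of market participants with different behavior.
group_a = rng.normal(loc=0.0004, scale=0.010, size=5_000)   # low-volatility holders
group_b = rng.normal(loc=0.0000, scale=0.025, size=5_000)   # high-volatility speculators
group_c = rng.standard_t(df=3, size=5_000) * 0.015          # fat-tailed traders

# Step 1: pool the heterogeneous outcomes into one sample.
pooled = np.concatenate([group_a, group_b, group_c])

# Step 2: read off an empirical frequency as if it were a stable probability.
threshold = -0.03
empirical_prob = np.mean(pooled < threshold)

# Step 3: fit a single distribution to the pool and assign a probability
# to the same future outcome.
mu, sigma = pooled.mean(), pooled.std(ddof=1)
fitted_prob = norm.cdf(threshold, loc=mu, scale=sigma)

print(f"Empirical frequency of a daily loss worse than {threshold:.0%}: {empirical_prob:.2%}")
print(f"Probability implied by a single fitted normal:              {fitted_prob:.2%}")
```

The two figures differ because the pooled sample is a mixture of dissimilar processes rather than draws from any one distribution, which is precisely the difficulty examined below.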

This homogenization of heterogeneous variables is analogous to procedures employed in the natural sciences, where fundamentally heterogeneous variables are successfully aggregated into homogeneous descriptions, as in the analysis of large collections of gas particles to predict temperature and pressure.

Relaxation

While many modeling techniques tentatively relax the assumption of homogeneity, they often do so through awkward categorization, such as the Bayesian/Gaussian standardization of returns by time period and market “regime,” or the partitioning of financial market participants into categories of risk aversion. A similar approach (used in ARCH/GARCH models) is the sleight-of-hand transposition of assumed homogeneity onto data normalized by conditional scaling factors (e.g., recent volatility levels): the standardized residuals, rather than the raw returns, are then assumed to be identically distributed.
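
The transposition can be made explicit with a small sketch. The following uses the simple EWMA (RiskMetrics-style) recursion as a stand-in for a fitted GARCH model, with synthetic returns and an assumed decay parameter; the point is only to show where the homogeneity assumption reappears.

```python
# Sketch of the "transposed homogeneity" move described above: raw returns are
# standardized by a recent-volatility estimate, and the standardized residuals
# are then treated as if they were identically distributed. The EWMA
# (RiskMetrics-style) recursion below is a stand-in for a fitted GARCH model;
# the returns are synthetic and the decay parameter is an assumption.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic returns with a volatility regime change halfway through.
returns = np.concatenate([
    rng.normal(0.0, 0.01, 1_000),   # calm regime
    rng.normal(0.0, 0.03, 1_000),   # turbulent regime
])

lam = 0.94                           # EWMA decay (the RiskMetrics convention)
var = np.empty_like(returns)
var[0] = returns[:50].var()          # crude initialization
for t in range(1, len(returns)):
    var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2

std_resid = returns / np.sqrt(var)

# The raw series is plainly non-homogeneous across the two halves; the
# standardized series looks homogeneous, and the i.i.d. assumption is
# quietly re-imposed on it.
print(f"raw std. dev., first/second half:          {returns[:1000].std():.4f} / {returns[1000:].std():.4f}")
print(f"standardized std. dev., first/second half: {std_resid[:1000].std():.4f} / {std_resid[1000:].std():.4f}")
```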

Other models (such as Fama-French) relocate the assumption of homogeneity away from the target variables they seek to predict (e.g., returns) onto common explanatory factors (e.g., profitability) which are presumed to be identically experienced and priced by all market participants.
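
A minimal sketch of that relocation, using synthetic stand-ins for the factor series rather than the actual Fama-French data, shows where the assumption now resides: in constant factor loadings and residuals presumed to be identically distributed across time.

```python
# Sketch of where the homogeneity assumption lands in a factor model: an
# asset's excess returns are regressed on common factors, and whatever is
# left over is presumed to be identically distributed noise, with constant
# loadings for every period. The four factor series are synthetic stand-ins
# (for, say, market, size, value, and profitability), not the actual
# Fama-French data, and every number below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(7)
n_days = 1_000

factors = rng.normal(0, 0.01, size=(n_days, 4))   # hypothetical common factors
true_betas = np.array([1.1, 0.4, -0.2, 0.3])
alpha = 0.0002

asset_excess = alpha + factors @ true_betas + rng.normal(0, 0.01, n_days)

# Ordinary least squares: homogeneity is now carried by the constant betas
# and the assumed i.i.d. residuals rather than by the raw returns themselves.
X = np.column_stack([np.ones(n_days), factors])
coef, *_ = np.linalg.lstsq(X, asset_excess, rcond=None)

print("estimated alpha:", round(coef[0], 5))
print("estimated betas:", np.round(coef[1:], 3))
```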

A number of risk-management-focused models embrace far more opacity and imprecision than their counterparts, yet even they introduce at best structured elements of uncertainty (e.g., a finite range of plausible outcomes in robust mean-variance optimizations, or MVOs).
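
As an illustration of that “finite range of plausible outcomes,” the following sketch performs a crude robust mean-variance optimization over a small, discrete set of hypothetical expected-return scenarios; all figures are invented for the example, and the coarse grid search merely stands in for a proper solver.

```python
# Sketch of a robust mean-variance optimization over a finite set of
# plausible expected-return scenarios: the portfolio is chosen to maximize
# the worst-case mean-variance score across the scenarios. All figures are
# invented for illustration.
import numpy as np
from itertools import product

# Three hypothetical expected-return scenarios for three assets (annualized).
scenarios = np.array([
    [0.06, 0.04, 0.02],
    [0.02, 0.05, 0.03],
    [0.00, 0.03, 0.05],
])
cov = np.array([                      # assumed covariance matrix
    [0.040, 0.006, 0.002],
    [0.006, 0.020, 0.004],
    [0.002, 0.004, 0.010],
])
risk_aversion = 3.0

def worst_case_score(w):
    """Mean-variance utility under the least favourable scenario."""
    return min(mu @ w for mu in scenarios) - risk_aversion * (w @ cov @ w)

best_w, best_score = None, -np.inf
grid = np.linspace(0.0, 1.0, 21)      # long-only weights in 5% steps
for w1, w2 in product(grid, grid):
    if w1 + w2 <= 1.0 + 1e-9:
        w = np.array([w1, w2, 1.0 - w1 - w2])
        score = worst_case_score(w)
        if score > best_score:
            best_w, best_score = w, score

print("robust weights:", np.round(best_w, 2))
print("worst-case score:", round(best_score, 4))
```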

Any fundamental skepticism as to the epistemological legitimacy of interpersonal and intertemporal economic comparison is ultimately absent within the field of quantitative finance, as such a discussion calls into question its integral raison d’être.

Heterogeneity

But as quantitative financial models themselves admit, human action, the very source of financial data, is undeniably heterogeneous both between distinct acting individuals and across time. The expectations, preferences, and levels of risk tolerance which inform financial decisions are ever-changing, as are the countless, never-constant conditions which influence them.

As a result, the outcomes of financial events do not tend towards stable values, and their variances are themselves variable and unbounded, invalidating any probabilities derived from distributions built upon historical financial data. Furthermore, data accumulated from multiple financial market participants is itself heterogeneous, as each decision to buy or sell assets in various quantities is unique for the very same reasons.

Analogous to the Keynesian trope that “a dollar of spending is a dollar of spending,” the amalgamated nature of financial markets misleads those intent on quantification into neglecting the nature of markets and the price mechanism. Far from being homogeneous events, market prices are simply the exchange ratios (in this case, between a monetary good and a financial asset) at which trade was maximized during a given period of time through the interplay of manual speculation, trading systems, automated order books, and so on.

Practically all of quantitative finance uncritically treats said exchange at market prices as the result of a single statistical process for the sake of convenience, despite its having taken place between countless individuals, each possessing distinct and constantly evolving motivations.

The derivation of probabilities from the empirical frequencies found within said aggregated data, and their employment in predicting future market outcomes in which motivations, conditions, and even participants will necessarily have changed, is simply the inevitable false step arising from the fallacious assumption of homogeneity.

How ironic that, as the undeniably heterogeneous nature of financial data foils any purely stochastic model’s chances at consistent success, the assumption of homogeneity is simply shifted onto statistical methods of data standardization and categorization, all but guaranteeing further failure for the very same reasons.

Even if homogeneity across time were fully abandoned (which alone would disarm the overwhelming majority of the tools of modern quantitative finance), heterogeneity across individual financial actors would still have to be embraced, completely dismantling the entire edifice of academic finance.

Implications

Academics, professional quants, and even laymen enamored by the complexities of probability theory and modern machine learning might initially scoff at such an admittedly skeptical line of reasoning. They could understandably highlight the tremendous returns attained by various probabilistic trading strategies as examples of the successful application of probability theory to the prediction of financial market outcomes. And yet, modern probability-based models’ ignorance of underlying supply and demand conditions can just as easily be said to misallocate capital and to contribute to the extreme dislocations observable across the modern economy.

Furthermore, the modern probabilistic framework of quantitative finance can equally be held responsible for aggravating several generational financial bubbles which annihilated trillions of dollars in wealth, such as the 2008 global financial crisis, in which widespread probabilistic structuring and value-at-risk (VaR) models played crucial roles. It must be emphasized that the broad implementation of probabilistic models across financial markets ultimately depends upon non-probabilistic methods for oversight, development, modification, guidance, and, most importantly, entrepreneurial foresight.
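
For readers unfamiliar with the mechanics, the following sketch shows a plain historical-simulation value-at-risk calculation on synthetic data; it is not any bank’s actual model, but it makes plain how directly such measures inherit the homogeneity assumption: the empirical distribution of past returns is simply taken to be the distribution of future returns.

```python
# Sketch of a plain historical-simulation value-at-risk (VaR) calculation,
# the kind of probabilistic risk measure discussed above. It inherits the
# homogeneity assumption directly: the empirical distribution of past
# returns is treated as the distribution of future returns. The portfolio
# returns are synthetic and every figure is illustrative.
import numpy as np

rng = np.random.default_rng(1)

calm = rng.normal(0.0003, 0.008, 500)       # roughly two calm "years"
crisis = rng.normal(-0.002, 0.035, 20)      # the first weeks of a crisis
portfolio_value = 1_000_000

def historical_var(returns, level=0.99):
    """One-day VaR: the loss not exceeded with probability `level`,
    read straight off the empirical distribution of past returns."""
    return -np.quantile(returns, 1.0 - level) * portfolio_value

# Measured on the calm window alone, the model reports a modest risk figure...
print(f"99% one-day VaR, calm window only:   ${historical_var(calm):>10,.0f}")
# ...which is overwhelmed as soon as the data-generating process changes.
print(f"99% one-day VaR, including crisis:   ${historical_var(np.concatenate([calm, crisis])):>10,.0f}")
print(f"worst realized one-day loss:         ${-crisis.min() * portfolio_value:>10,.0f}")
```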

As such, for as long as stochastic models require constant adjustment and assets are recognized as the property of individuals rather than of purely algorithmic entities, returns (especially consistent returns) must ultimately be attributed to the entrepreneurial abilities of those possessing equity in profitable investment ventures.

Conclusion

Criticism of the homogeneity assumption should not be misconstrued as a dismissal of statistical methods as a whole. Data of all kinds facilitate entrepreneurship and economic calculation, unequivocally increasing economic efficiency. Yet probabilities derived from data falsely assumed to be homogeneous merely feign empiricism and represent yet another example of “physics envy” within mainstream economic academia.


