The LIPNE Lab investigates a range of areas related to neuroeconomics.

Professor Peter Bossaerts discusses his research, Janeway Institute, University of Cambridge, 2024.

Probabilistic uncertainty

Probabilistic uncertainty refers to situations where learning can proceed by repeatedly trying actions. Beliefs converge to a common standard (“objective probabilities”) regardless of how sophisticated the belief updating is. This is also the environment where machine learning is successful. Probabilistic uncertainty is the foundation of much of the empirical work on decision-making under uncertainty. Economists study all decision-making under uncertainty as if this were the only type of uncertainty that exists.
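The convergence of beliefs under repeated trials can be sketched with Beta–Bernoulli updating (the numbers below are illustrative, not drawn from the Lab’s experiments): two agents who start with opposite priors and repeatedly try the same action end up with essentially the same beliefs, pinned down by the objective probability.

```python
import random

random.seed(0)

TRUE_P = 0.7   # the "objective" success probability of the repeated action

def posterior_mean(prior_a, prior_b, trials):
    """Beta-Bernoulli updating: return the posterior mean after `trials` tries."""
    a, b = prior_a, prior_b
    for _ in range(trials):
        if random.random() < TRUE_P:
            a += 1
        else:
            b += 1
    return a / (a + b)

# Two agents with opposite priors (prior means 0.9 and 0.1) converge on TRUE_P.
optimist = posterior_mean(9, 1, 10_000)
pessimist = posterior_mean(1, 9, 10_000)
print(round(optimist, 2), round(pessimist, 2))   # both close to 0.7
```

After enough trials, the data swamp the prior: this is what makes the probabilities “objective” and independent of the updating rule’s sophistication.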

Uncertainty because of complexity

Complexity permeates daily life. We study two types of complexity: (i) computational complexity, i.e., situations where it is hard to figure out what to do even if one has all the data (even the best electronic computers struggle with this type of complexity; unsurprisingly, it has been studied extensively in computer science); (ii) inference complexity, i.e., situations in which it is hard to infer, from observations, what caused those observations. When making decisions under either type of complexity, one cannot generally be confident of being right. We (and others) have recently proposed that the uncertainty generated by both types of complexity causes people not to try to maximise rewards/values, but to control surprise. Engineers have long used this idea as one way to ensure robust control.
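Computational complexity can be made concrete with the 0/1 knapsack problem, a standard computationally hard task (the item values and weights below are made up for illustration): all the data are in hand, yet brute force must examine 2^n candidate subsets, which explodes as the number of items grows.

```python
from itertools import combinations

# 0/1 knapsack: pick the subset of items with maximum total value whose
# total weight fits the capacity. Brute force checks all 2**n subsets.
values = [60, 100, 120, 70, 90]
weights = [10, 20, 30, 15, 25]
capacity = 50

best_value, best_subset = 0, ()
for r in range(len(values) + 1):
    for subset in combinations(range(len(values)), r):
        weight = sum(weights[i] for i in subset)
        value = sum(values[i] for i in subset)
        if weight <= capacity and value > best_value:
            best_value, best_subset = value, subset

print(best_subset, best_value)   # (0, 1, 3) 230
```

With five items this is instant; with fifty items the same search would need roughly 10^15 subset evaluations, which is why “having all the data” does not imply knowing what to do.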

Uncertainty in social interaction

Here we focus on strategic uncertainty: situations in which even formal theoretical modelling (“game theory”) cannot predict what one’s opponent/ally will do. Psychologists would argue that such situations require “theory of mind.”

Markets: equilibrium predictions

Virtually all economic models of social interaction, including interaction through markets, assume that outcomes reflect a system that is “in equilibrium.” Equilibrium restrictions are used to interpret historical data from the field and to advise policy makers such as central bankers. But absent an economic law of “entropy,” it is far from a foregone conclusion that economic systems equilibrate. Since the 1960s, controlled experiments have been run to evaluate under what conditions equilibrium really emerges and, if it does not, what the reasons could be.

Markets off equilibrium

Before they focused on equilibrium, economists (most prominently, Cambridge’s Alfred Marshall) actually explored market behaviour without assuming that it somehow converges to equilibrium. This endeavour was put on the back burner, however, for lack of tools (a mathematical framework and analysis; an experimental methodology). We have worked to address this situation, engaging in a dialogue between theory and (experimental) data in order to discover basic principles that govern the behaviour of markets off equilibrium.
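One classical way to study a market off equilibrium is to posit an explicit adjustment dynamic, such as tâtonnement-style price adjustment in proportion to excess demand. The linear demand and supply curves below are a textbook illustration, not a model the Lab endorses; the point is that convergence to the clearing price is a property of the dynamic, not an assumption.

```python
# Tatonnement sketch: the price moves in proportion to excess demand.
def excess_demand(p):
    demand = 100 - 2 * p
    supply = -20 + 4 * p
    return demand - supply

p = 5.0                             # start well below the clearing price
for _ in range(200):
    p += 0.05 * excess_demand(p)    # small, stable adjustment speed

print(round(p, 2))   # 20.0, the price at which demand equals supply
```

With a larger adjustment speed the same dynamic can overshoot and oscillate instead of settling down, which is exactly the kind of off-equilibrium behaviour the equilibrium view sweeps aside.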

Markets as mechanisms to re-allocate risk, collect information, and spread knowledge

The standard view of financial markets is that they serve an important role in re-allocating risk (across agents and over time). But they appear to be equally important as tools to gather information (prediction markets) and to spread knowledge — whether that knowledge consists of raw data or helps resolve computational complexity.
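How a prediction market turns trades into an information-revealing price can be sketched with Hanson’s logarithmic market scoring rule (LMSR), a widely used automated market maker for prediction markets (shown here as a generic illustration; the Lab’s markets need not use this mechanism).

```python
import math

B = 100.0  # liquidity parameter: higher B means prices move less per trade

def cost(q):
    """LMSR cost function over outstanding shares q."""
    return B * math.log(sum(math.exp(x / B) for x in q))

def price(q, i):
    """Current price of outcome i = the market's implied probability."""
    denom = sum(math.exp(x / B) for x in q)
    return math.exp(q[i] / B) / denom

q = [0.0, 0.0]                          # shares outstanding for "yes" / "no"
p_before = price(q, 0)                  # 0.5: the market starts uninformed
payment = cost([60.0, 0.0]) - cost(q)   # cost of buying 60 "yes" shares
q[0] += 60.0
p_after = price(q, 0)                   # the trade moves the implied probability
print(round(p_before, 2), round(p_after, 2))   # 0.5 0.65
```

The trader’s willingness to pay moves the price, so the price itself becomes a public summary of privately held information — the information-gathering role described above.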

Learning in financial markets (human and artificial agents) 

Financial markets generate a type of risk that is rather unusual, in that there are many extreme events (“outliers”): either the market is very calm or it is stormy; there does not seem to be a “normal” level of uncertainty. This raises the question of whether humans, or artificial agents (machine learning), are well prepared for it, and if not, what to do about it.
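The contrast between a “normal” level of uncertainty and an outlier-prone one can be made concrete by comparing thin and fat tails. The Student-t draw below is a generic stand-in for heavy-tailed market returns, not a fitted model of any particular market.

```python
import random

random.seed(1)

def student_t3():
    """A heavy-tailed draw: standard normal over sqrt(chi-squared(3) / 3)."""
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(3))
    return z / (chi2 / 3) ** 0.5

N = 100_000
normal_outliers = sum(abs(random.gauss(0, 1)) > 4 for _ in range(N))
heavy_outliers = sum(abs(student_t3()) > 4 for _ in range(N))
print(normal_outliers, heavy_outliers)   # far more outliers in the heavy tails
```

A learner (human or machine) whose methods are tuned to the thin-tailed case will badly underestimate how often 4-sigma events occur in the heavy-tailed one.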

Instrument building for financial markets research and teaching (including algorithmic trading)

Experimental research on markets requires software with which one can readily set up multiple simultaneous markets where human participants can exchange tailor-made goods and securities. The software has to be flexible enough that many relevant economic situations can be studied (and taught). Exchange mechanisms have to be realistic, yet amenable to control. Because robot traders are so prevalent in the field, the software should also be able to interface with algorithms that participants write to automate trade, or that are made available to participants so they can deploy them to enhance their interaction with markets. We do not limit attention to emulating successful field market mechanisms, but explore novel mechanisms, with the goal of resolving tough problems in the field, such as lack of liquidity.
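The core exchange mechanism such software has to implement is the continuous double auction, the canonical mechanism in market experiments. A minimal single-unit sketch (not the Lab’s actual platform) shows the matching logic an order book needs:

```python
import heapq

bids = []     # max-heap of resting buy prices (stored negated)
asks = []     # min-heap of resting sell prices
trades = []   # prices at which orders crossed

def submit(side, price):
    """Match a one-unit limit order against the book, or let it rest."""
    if side == "buy" and asks and asks[0] <= price:
        trades.append(heapq.heappop(asks))       # trade at the resting ask
    elif side == "sell" and bids and -bids[0] >= price:
        trades.append(-heapq.heappop(bids))      # trade at the resting bid
    elif side == "buy":
        heapq.heappush(bids, -price)
    else:
        heapq.heappush(asks, price)

submit("sell", 101)
submit("sell", 99)
submit("buy", 100)   # crosses the 99 ask, so a trade prints at 99
submit("buy", 98)    # no cross; the bid rests in the book
print(trades)        # [99]
```

A research platform wraps this kernel with multi-unit orders, multiple simultaneous books, human interfaces, and an API for participants’ trading algorithms — and, as noted above, can swap in novel mechanisms in place of the double auction.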