Dante Mendes Aldrighi

We present an adaptation of a new Keynesian model into an agent-based computational model, accounting for the importance of heterogeneity, interaction and bounded rationality in problems such as price setting and expectations formation. We evaluate the evolution of the distribution of price-setting strategy frequencies, a strategy that each agent may choose, in each period, among inflationary, neutral or deflationary, a process based on private and social features of the agents' utility functions. In addition, we consider the network's topological structure and find that the regular network model fails to replicate exactly the new Keynesian model's original results, while the complex network model presents results in accordance with the originals.


Price setting and expectations are subject to long-standing debates in macroeconomics. Most new Keynesian models assume that individuals formulate expectations independently and employ a representative agent to analyse them, assuming that individuals have similar preferences, access to the same information and the same reasoning prowess. It goes without saying that these assumptions are not to be taken literally, but they are the foundations on which canonical new Keynesian models are built.

However, these models have their critics.

Recent literature such as

This study’s proposed model is an ABM which takes network effects
into account, assuming a monopolistic competition economy based on

Therefore, this article’s main contribution to macroeconomic
research is assessing whether the conclusions put forward by

Thus, this article is structured as follows: the next section reviews the literature on topics such as price setting, social interaction and networks in economics; the third section lays out the methodology behind the model proposed here; the fourth section presents and discusses the results achieved; and the fifth and final section presents our concluding remarks.

This subsection assembles some findings regarding price setting and expectations that are of particular interest to us, given how they relate to the model developed below. To start with, the study published by

According to

In this article we chose an evolutionary approach when discussing
price setting.

Still on evolutionary dynamics,

Another model from

In an effort to bring theory closer to empirical data, research in economics has become more attentive to the importance of social interaction between economic agents.

If a firm decides to raise the price of a good it produces, it cannot set the price much higher than the neighbouring firms that produce close substitute goods, as doing so raises the chance of losing clients and therefore of failing to maximise profits.

In this context,

In macroeconomics, with the prevalence of the new neoclassical synthesis and DSGE models, the early studies in this field were rather timid but, more recently, models with social interaction have been getting more attention. The reason is that current mainstream models have come under fire since the subprime crisis, facing difficulties when trying to fit empirical findings into their framework, as

However, not all is lost for mainstream economics, as there is a recent debate between RANK (Representative Agent New Keynesian) and HANK (Heterogeneous Agent New Keynesian) models. The mainstream models pointed out in the previous paragraphs as suffering from a lack of realism are the RANK models. In similar contexts, HANK models performed better, as discussed by

For instance,

This subsection is a primer on network science, as some basic
concepts need to be laid out to better understand the computational
model used here.

Two nodes i and j are said to be directly connected if they are adjacent to each other. If they are not adjacent, but are part of a sequence of adjacent nodes linked amongst themselves, then i and j are indirectly connected. The neighbourhood of a given node is the set of nodes connected to it by edges. In complex networks, a node can connect with another one very far from its geographical location.

This means nodes can be linked even when their positions in the network are far apart, and nodes that are very distant from each other might be linked by a sequence with only a few steps (the small-world phenomenon). In order to represent complex networks,

Each of the nodes is linked to K neighbours clockwise and K neighbours anticlockwise. Having defined a ring network, we can eliminate an edge between the i-th node and a neighbour, and then create a new link between i and any other node chosen randomly with a probability p ∈ [0, 1] ⊂ R. This operation is called

A network is called regular when p = 0, and its main feature is that all nodes have the same number of edges. For any node i in a regular network, the connected edges are always the same. If p = 1, the network is called random.
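The rewiring operation just described can be sketched in pure Python; `ring_lattice` and `rewire` are illustrative names, not the authors' implementation, and the choice of which endpoint stays fixed is arbitrary in this sketch:

```python
import random

def ring_lattice(n, K):
    """Regular ring: each node i is linked to K neighbours clockwise
    (and, symmetrically, K anticlockwise). Edges are unordered pairs."""
    edges = set()
    for i in range(n):
        for step in range(1, K + 1):
            edges.add(frozenset((i, (i + step) % n)))
    return edges

def rewire(edges, n, p, rng=None):
    """With probability p, replace each edge {i, j} by {i, w},
    where w is drawn at random among nodes not yet linked to i."""
    rng = rng or random.Random()
    edges = set(edges)
    for edge in list(edges):
        if rng.random() < p:
            i, j = tuple(edge)  # which endpoint is kept fixed is arbitrary here
            candidates = [w for w in range(n)
                          if w != i and frozenset((i, w)) not in edges]
            if candidates:
                edges.remove(edge)
                edges.add(frozenset((i, rng.choice(candidates))))
    return edges
```

With p = 0 the lattice is returned unchanged (regular network); with p = 1 every edge is rewired (random network); the total number of edges is preserved throughout.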

We will now build the model, starting with an explanation of how agents make decisions. Consider an agent i that must choose between three mutually exclusive alternatives, denoted by −1, 0 and 1. Let σ_{i}

where _{i}

It is possible to rewrite


However, even if the observable utility of a given strategy
σ_{i}

where _{i}


with β > 0 being a real constant. The CDF corresponding to the Type-1 Gumbel PDF given by (5) is in turn F(ε) = exp(−e^{−βε}),

and by inserting

where σ_{i}

An interpretation of β relevant to analysing the results is
offered by


with α being a parametric constant that measures the weight of deterministic private utility relative to the social component summed over the neighbourhood n_{i}.

Inserting _{i}

and after this adaptation, employing the same reasoning used
when deriving _{i}

Lastly, the social deterministic utility
_{i}

After defining n_{i}

where J > 0 is a parametric constant that measures the degree of influence of the neighbourhood and δ_{σ_{i}σ_{j}} is the Kronecker delta, equal to 1 when σ_{i} = σ_{j} and 0 otherwise.

Substituting

In short, if α/J > 1, agents put more weight on private utility; otherwise they are more affected by network incentives. As for β, the lower it is, the more random the choice becomes. This introduces bounded rationality into the model: an agent may make bad choices from a deterministic-utility viewpoint, picking a worse strategy if he/she puts a low weight on observable incentives.
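The decision rule just summarised can be sketched as follows. We assume, as the Gumbel noise implies, a logit (softmax) form for the choice probabilities; the function names, the averaging of the Kronecker terms over the neighbourhood and the parameter values are our illustration, not the authors' code:

```python
import math
import random

STRATEGIES = (-1, 0, 1)

def choice_probabilities(private_u, neighbour_strategies, alpha, J, beta):
    """Logit probabilities over the three strategies.

    private_u: dict mapping each strategy to its private deterministic utility
    neighbour_strategies: list of the neighbours' current strategies
    """
    n = len(neighbour_strategies)
    det = {}
    for s in STRATEGIES:
        # Social term: share of neighbours with the same strategy (mean Kronecker delta)
        social = sum(1 for sj in neighbour_strategies if sj == s) / n
        det[s] = alpha * private_u[s] + J * social
    weights = {s: math.exp(beta * det[s]) for s in STRATEGIES}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

def draw_strategy(probs, rng=random):
    """Draw a strategy by comparing a uniform number with cumulative probabilities."""
    u, cum = rng.random(), 0.0
    for s in STRATEGIES:
        cum += probs[s]
        if u <= cum:
            return s
    return STRATEGIES[-1]
```

Note that with β = 0 the probabilities collapse to 1/3 each (pure randomness), while a large β makes the highest-utility strategy almost certain, which is exactly the bounded-rationality reading of the intensity of choice.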

This subsection briefly introduces the model conceived by

In the original model, money is just a medium of exchange to
make transactions, so it is possible to employ money supply, M, as
a proxy of aggregate nominal demand. Therefore, the producer’s
maximum utility as a function of the nominal aggregate demand, the
price of good i, P_{i}

Therefore, the i-th producer’s optimal price,
P

where

In the price-setting game described by
_{i}_{i}

Relaxing the hypothesis that states that producers immediately
identify the game’s symmetric Nash equilibrium, it is possible to
adapt the model here described to a network game. This means that
in period t, the i-th producer, when choosing the price
P_{i,t}

where

As mentioned beforehand, one of the main changes is abandoning the rational expectations paradigm, where

Now we will present the utility function’s three components.
Drawing on

Notice that for _{t}_{t−1}), it follows that

As for the deterministic social utility, for the i-th producer
on period t, it depends on the strategies of the producer himself
and also of the n_{i}

As outlined before, the stochastic component of the utility
function will be given by random variables independent amongst
themselves that follow the Type-1 Gumbel distribution, formally
defined in _{i,t}

The last step for model closure is defining how producers, given their states on period t, form their expectations for the general price level. If the agent is neutral, the expected price is P_{t−1}; if the agent is inflationary, it is drawn from [P_{t−1}, 1.2P_{t−1}]; and if the agent is deflationary, from [0.8P_{t−1}, P_{t−1}].
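The expectation-formation rule can be sketched directly; the function name and the uniform draws over the stated intervals are our illustration of the description above:

```python
import random

def expected_price(strategy, p_prev, rng=random):
    """Expected general price level for period t given last period's price.

    Inflationary agents draw from [P, 1.2P], deflationary agents from
    [0.8P, P], and neutral agents keep last period's price unchanged.
    """
    if strategy == 1:           # inflationary
        return rng.uniform(p_prev, 1.2 * p_prev)
    if strategy == -1:          # deflationary
        return rng.uniform(0.8 * p_prev, p_prev)
    return p_prev               # neutral
```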

In short, the above options can be described intuitively. Drawing
again on _{i,t}

Based on their expectations of the general price level, each
agent will define an individual price on t by using the optimal rule
given by

An outcome that could not happen in the original model by

Some variables of interest for the results will be the output level and price variance. The authors define the money supply M = PC, and as the model does not mention investment or government expenditure we can, for each period, isolate the mean consumption index and treat it as the aggregate output:

and lastly, the price variance on each step can be described by the standard formula Var(P_{t}) = (1/N) ∑_{i=1}^{N} (P_{i,t} − P̄_{t})², where P̄_{t} is the mean price across the N producers.
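These two measured variables can be sketched as below, with the caveat that the aggregate-output expression is our simplification (the mean of M/P_i, following M = PC); the consumption index in the original model is richer:

```python
import statistics

def aggregate_output(M, prices):
    """Mean consumption index used as a proxy for aggregate output Y.

    Assumes, as a sketch, each producer's consumption index is M / P_i,
    following the money-supply definition M = PC.
    """
    return statistics.mean(M / p for p in prices)

def price_variance(prices):
    """Population variance of individual prices on a given step."""
    return statistics.pvariance(prices)
```

When all prices settle at P = 1 with M = 1, the proxy gives Y = 1 and a variance of zero, consistent with the equilibrium the paper discusses later.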

This subsection provides details on the initial conditions setup,
the computational loop and how agents are disposed in the program.
To implement different network structures, we can do a rewiring
procedure as described by

For this procedure to be formally implemented to a matrix
structure, we can employ the shortcuts routine as described by

Agents only change strategies at the start of each step. As the strategy distribution carries a certain weight in producer decisions, the initial conditions are that at t = 0 each of the three strategies starts with 33.33% adherence, distributed randomly amongst the agents. Also, using a uniform probability distribution, prices are spread randomly inside the interval [0.8P_{t−1}, 1.2P_{t−1}] ⊂ R.
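These initial conditions can be sketched as follows; N and the initial price level P0 are illustrative values, and the strategy split is made exact before shuffling so each strategy starts with one third of the agents:

```python
import random

N = 10_000   # number of producers (illustrative; the tests below use 10^4)
P0 = 1.0     # initial price level (illustrative)

rng = random.Random(0)

# Exact thirds (up to rounding), then shuffled so adherence is random across agents
strategies = [-1] * (N // 3) + [0] * (N // 3) + [1] * (N - 2 * (N // 3))
rng.shuffle(strategies)

# Prices spread uniformly inside [0.8 * P0, 1.2 * P0]
prices = [rng.uniform(0.8 * P0, 1.2 * P0) for _ in range(N)]
```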

We have chosen 20% price increases at most due to evidence presented by authors such as

Having established the initial conditions, period t = 1 starts.
With the propensities toward strategies already defined, producers
form their expected prices (

After calculating utilities, the logistic CDF yields each strategy's choice probability, and σ_{i,t} is then drawn by comparing a uniform random number with the cumulative probabilities of the strategies.

This subsection provides an insight into the model's calibration process. We calibrate the model aiming to find parameter sets that create a synthetic time series similar to the empirical time series. As the original model is populated by producers, we chose the Producer Price Index from the United States, in particular the finished goods consumption index, as the model deals with their pricing. Monthly data were obtained from the Federal Reserve Bank of St. Louis for the period from 04/1947 (first entry in the series) until 12/2020, totalling 884 observations.

The calibration criterion used was minimising the sum of squared errors, which consists of finding the set of parameters that minimises the squared difference between empirical and simulated prices, formally given by SSE = ∑_{t=1}^{T} (P_{e,t} − P_{s,t})²,

with T being the number of periods, P_{e} the empirical price and P_{s} the simulated price.

Based on this criterion, an optimisation algorithm will be employed to find the best-fitting combination of parameters {ϕ, α, β, J, p} that minimises
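The calibration loop over these parameter ranges can be sketched as follows. The paper does not specify the optimiser, so we illustrate with a plain random search, and `simulate` is a hypothetical stand-in for the full agent-based model:

```python
import random

def simulate(params, T):
    """Hypothetical stand-in for the full ABM: returns a simulated
    price series for a given parameter set. The real model is far richer."""
    rng = random.Random(42)
    p, series = 1.0, []
    for _ in range(T):
        p *= 1 + params["phi"] * rng.gauss(0, 0.01)
        series.append(p)
    return series

def sse(empirical, simulated):
    """Sum of squared errors between empirical and simulated prices."""
    return sum((pe - ps) ** 2 for pe, ps in zip(empirical, simulated))

# Search ranges matching the intervals reported in the calibration table
RANGES = {"phi": (0, 1), "alpha": (0, 5), "beta": (0, 5), "J": (0, 10), "p": (0, 1)}

def random_search(empirical, n_draws=200, seed=0):
    """Draw parameter sets uniformly from RANGES and keep the lowest SSE."""
    rng = random.Random(seed)
    best, best_loss = None, float("inf")
    for _ in range(n_draws):
        params = {k: rng.uniform(*bounds) for k, bounds in RANGES.items()}
        loss = sse(empirical, simulate(params, len(empirical)))
        if loss < best_loss:
            best, best_loss = params, loss
    return best, best_loss
```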

In this section we will present the results. For the sake of
comparison, we will start with the most basic case, which is a
regular network model – thus, p = 0. We calibrate the model using
the procedure described in

Parameter   Calibrated value   Range of possible values
α           0.01245            [0, 5] ⊂ R
β           2.0954             [0, 5] ⊂ R
ϕ           0.7653             [0, 1] ⊂ R
J           6.9351             [0, 10] ⊂ R

Next is the strategy distribution evolution (

Now we will compare a complex network’s results with the
results previously shown. Running the optimisation algorithm, our
initial guess was the set from

The closest result to the initial guess is ϕ, whose value, much closer to one than to zero, shows that in choosing their optimal price, agents strongly value their expected price rather than changes in money supply. This suggests that the elasticities towards the money supply and the price level do not depend on the network structure. The β value, very close to the initial guess, shows that deterministic incentives carry a relatively high weight in agents' choices. Hence, each agent's individual decision will not be taken very randomly.

Parameter   Value
α           2.0288
β           2.0279
ϕ           0.7986
J           2.3246
p           0.5267

In turn, α is much higher than the initial guess and shows there is a significant incentive from private behaviour in setting prices, much stronger than in the benchmark model where p = 0. This means that in a complex network, the utility associated with setting the price in the same direction as the general price level is more important than in the original model.

J presents a moderate value, lower than in the regular network model. This is interesting considering that agents are more connected, but it is consistent with the higher value of α and might actually be an effect of the network structure, which is the most important change in the model. In turn, p goes well beyond the expectation of a small-world network (p ≈ 0.1).

As done beforehand,

Our first analysis (

The previous section highlighted some differences between the
regular network model and our main complex network model, most
interesting being the long-run survival of inflationary
expectations. Now we are faced with the most striking difference
so far, which is an emergence of the neutral strategy as the
dominant one, as shown in

The first test (

step, that is, σ_{i,1} = −1 for any i
= 1, 2, . . . , 10^{4}. We have already understood this
from previous results, but this is further evidence that the
deflationary strategy is indeed the weakest. On step 2 almost all
agents already move away from it, splitting between the neutral
and inflationary strategies. Afterwards, we see the neutral
strategy’s climb towards the same spot as shown by the simulation
with balanced starting conditions (

The second test (
_{i,1} = 0
for any i = 1, 2, . . . , 10^{4}. This one shows a very
surprising result, as in step 2 the neutral strategy – dominant in
all simulations so far – is switched out by a majority of agents.
Between step 200 and 400 it is already very close to extinction.
Then the deflationary strategy slowly dies out and by step 1,400
the inflationary strategy is already the choice of 99% of the
agents. With this result, the deflationary strategy is cemented as
the weakest one, as it is never the dominant strategy by the
end.

Ultimately, the third test (
_{i,1} = 1 for any i = 1, 2, . . . ,
10^{4}. Now it is the inflationary strategy that is
switched out by almost all agents in the second step, followed by
the fastest of the near-extinctions in all of the results, with
the neutral strategy establishing its firm dominance by step 500.
Looking at

Something common between the tests is that by step 2, almost all agents discard the starting strategy – similar tests on the regular network model show that when all agents start on a single strategy, they never change. This reinforces the strength of the neutral strategy, as the only test where it is not the winning strategy is the one where it starts as the only strategy, and we have seen that a strategy is doomed to die out when that happens. We believe this is caused by the larger value of α, which gives more weight to whether agents get the price change right relative to the global price level, a level that is quite volatile until settling near P = 1 (check

After all of them change to another strategy, there are big incentives to continue with it, as the strictly positive intensity of choice (β = 2.0279) prevents them from making many random changes, and the value of parameter J = 2.3246 prevents them from deviating strongly from their neighbours. Therefore, it makes sense that the starting strategy is brought to the brink of extinction very quickly.

Furthermore, we can also test how the model's parameters affect strategy distributions. To do so, we create vectors with 101 values for each of three parameters of interest: α/J, β and p. For each parameter, we take the calibrated value as the central value and generate 50 equidistant values on each side of it, with the lower bound being zero and the upper bound being double the central value. Thus, each of these tests comprises 101 runs of the model.

We run simulations with t = 1,000 and discard the first 100 observations, avoiding biases from the very high degree of randomness that prevails before agents settle into a distribution pattern representative of the overall trend for that specific parameter value. After doing that, we calculate the mean of the remaining 900 observations.

The first test evaluates how changing the α/J ratio impacts the frequency distribution of strategies across producers, as this ratio captures the relative weight of private incentives with respect to social incentives: the higher it is, the more importance agents give to guessing the prices right rather than following their neighbourhood. The simplest way to run the test is to keep J = 2.3246 as calibrated and manipulate the value of α. Therefore, the interval starts at zero and is incremented by 0.0174 each time, closing at the value α/J = 1.74.
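The construction of the sweep grid can be sketched as follows; `sweep_values` is our naming, and the grid reproduces the increment (≈ 0.0174) and endpoint (≈ 1.74) stated above for the α/J case:

```python
def sweep_values(center, n_side=50):
    """101 equidistant values from 0 up to twice the calibrated central value,
    with the calibrated value sitting exactly in the middle of the grid."""
    step = center / n_side
    return [i * step for i in range(2 * n_side + 1)]

# Sweep for the alpha/J ratio, holding J at its calibrated value
alpha_over_J = sweep_values(2.0288 / 2.3246)
```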

Next, we test for values of β, the parameter that determines
the intensity of choice, accounting for bounded rationality in the
model. If values of β are low, we expect the proportions to remain
near 33%

The last test is for values of rewiring probability, p, with
results shown on

Lately, ABMs have presented themselves as a versatile, bottom-up approach aided by the advances in complexity and network sciences. These tools can better explain and translate the increasing complexity of markets in general. As such, they are able to tackle issues that can be hard to include in mainstream models, e.g. network effects and heterogeneous expectations.

Our main objective was to assess how the relaxation of some
hypotheses affected the results of a model proposed by

Each agent could, in each period, adopt deflationary, neutral or inflationary expectations. The calibrated parameters placed a higher weight on private utility relative to network effects compared with the regular network model. In the same vein, they showed a larger weight on deterministic utility and a higher elasticity towards one's expected price level compared to the money supply. The rewiring probability presented a rather high value, showing that, when setting prices, agents may be highly influenced by distant nodes in the network.

While the regular network was unable to show the emergence of the neutral strategy as the dominant one, by adding the rewiring probability we saw that happening. Some tests were done to observe other emergent properties, with the first one being a change to the starting conditions. By making all agents start with each one of the three strategies, we have seen that by step two of the simulation all of them change strategies, with the starting one going extinct and the remaining two vying for dominance.

These tests regarding sensitivity to initial conditions show that the neutral strategy is the strongest, with the inflationary strategy in second place, as it was able to crowd out the deflationary strategy when all agents started as neutral (a starting condition that, for reasons not entirely clear to us, guaranteed the neutral strategy's own extinction).

We also stressed all parameters to check their effects on strategies. When α grows, the neutral strategy becomes more dominant, showing it is stronger when agents look only at their accuracy when comparing with the global price level. Likewise, when β grows, meaning agents become more rational, they gravitate towards the neutral strategy. However, changes in p do not show any particular trend when running the gamut from zero to one.

It is also most interesting that, even though we have relaxed
several hypotheses of the original new Keynesian model, the
conclusions are mostly unchanged: agents reach P = Y = 1, reinforcing
that heterogeneous expectations and the fear of getting the price
wrong compared to one’s peers (thus, losing out on profits) are
factors that can contribute to price stickiness. Still, as

In fact, our main takeaway from the comparison between regular and
complex network results is that some degree of randomness was needed
to perfectly replicate the symmetric Nash equilibrium from

As this is quite a new research agenda, our hand was forced in some decisions, such as the choice of price index and calibration method; still, we ran a large number of simulations and believe our results to be highly trustworthy, even if not fully optimised. Some suggestions for future research on the same theme would be: adding monetary policy; using a preferential attachment network instead of a small-world one; increasing neighbourhood size; using other arrangements instead of a square lattice; and changing the calibration method and/or price index.

The authors are grateful to Roberto Meurer, Eva Yamila da Silva Catela, João Basílio Pereima Neto and two anonymous referees for helpful comments and suggestions – any remaining errors are our own. This work was supported by CNPq (the Brazilian National Council of Scientific and Technological Development) [grant numbers 312432/2018-6 (JJS) and 313628/2021-1 (JJS)]. This study was also financed in part by CAPES (the Brazilian Coordination for the Improvement of Higher Education Personnel) – Finance Code 001.

JEL classification: C63, D80, E30.