Estudos Econômicos (São Paulo), Jan-Mar 2023. DOI: 10.1590/1980-53575311rhj

Research Article

Complex network effects on price setting: an agent-based computational approach ♦

Rafael Jasper Feltrin (rafafeltrin5@outlook.com), Graduate Program in Economics, Federal University of Santa Catarina (UFSC), Brazil
Helberte João França Almeida (helberte.almeida@ufsc.br), Professor, Department of Economics and International Relations, UFSC, Brazil
Jaylson Jair da Silveira (jaylson.silveira@ufsc.br), Professor, Department of Economics and International Relations, UFSC, Brazil
Received: 27 October 2021; accepted: 29 November 2022. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Abstract
We present an adaptation of a new Keynesian model into an agent-based computational model, accounting for the importance of heterogeneity, interaction and bounded rationality in problems such as price setting and expectations formation. We evaluate the evolution of the distribution of price-setting strategies, among which agents may choose in each period between inflationary, neutral and deflationary, a process based on private and social components of the agents' utility functions. In addition, we consider the network's topological structure and find that the regular network model fails to replicate exactly the original new Keynesian model's results, while the complex network model yields results in accordance with the originals.
Keywords: agent-based models; complex networks; price setting

♦ The authors are grateful to Roberto Meurer, Eva Yamila da Silva Catela, João Basílio Pereima Neto and two anonymous referees for helpful comments and suggestions; any remaining errors are our own. This work was supported by CNPq (the Brazilian National Council of Scientific and Technological Development) [grant numbers 312432/2018-6 (JJS) and 313628/2021-1 (JJS)]. This study was also financed in part by CAPES (the Brazilian Coordination for the Improvement of Higher Education Personnel), Finance Code 001.

Introduction
Price setting and expectations are subject to long-standing debates in macroeconomics. Most new Keynesian models assume that individuals formulate expectations independently and rely on a representative agent, on the premise that individuals have similar preferences, access to the same information and the same reasoning prowess. It goes without saying that these assumptions are not to be taken literally, but they are the foundations on which canonical new Keynesian models are built.
However, these models have their critics. Flieth and Foster (2002), for instance, remark that representative agent models struggle to reproduce patterns found empirically. With that in mind, we turn to agent-based models (ABM), a newer style of modelling that, according to Macal and North (2005), can be an important tool for evaluating social systems composed of different agents that interact amongst themselves, learn from past experiences, influence each other and adapt their behaviour in search of the best responses to their environment.
Recent literature such as Fainmesser and Galeotti (2016) and Fainmesser and Galeotti (2020) highlights that network effects can be very important in price setting, with firms and consumers looking at their neighbourhoods to gather information in order to make better decisions. The amount of influence these relations exert depends on various factors, such as the strength of network effects, network size and the behaviour of a few very influential vertices.
This study's proposed model is an ABM that takes network effects into account, assuming a monopolistic competition economy based on Ball and Romer (1989). In our setup, agents must choose a strategy in each round: keeping the same price, raising it or lowering it. Their choices depend on their preferences. These agents are arranged in a network, which starts out regular (where neighbourhoods are always the same) but eventually includes a degree of randomness, as proposed by Watts and Strogatz (1998).
Therefore, this article's main contribution to macroeconomic research is assessing whether the conclusions put forward by Ball and Romer (1989) still hold after relaxing certain hypotheses, namely representative agents, perfect rationality and a tendency towards symmetric equilibria. These hypotheses are set aside in favour of heterogeneous agents, bounded rationality and non-enforcement of equilibria.
This article is structured as follows: the next section reviews the literature on price setting, social interaction and networks in economics; the third section lays out the methodology behind the proposed model; the fourth section presents and discusses the results; and the fifth and final section offers our concluding remarks.
Prices, social interaction and networks

Some theory and evidence on price setting
This subsection assembles some findings regarding price setting and expectations that bear directly on the model we will run afterwards. As a starting point, the study by Levy et al. (1998) on the price adjustment process at multiproduct retail stores shows that this is a more complex task than previously thought; consequently, at some moments it is not rigorously planned, resulting in price stickiness. The authors remark that this is caused mainly by item pricing laws and agents' fear of making mistakes while setting prices.
According to Nakamura (2008), 16% of the variation in product prices is common across the whole market, while 65% is common only within a given retail chain and 17% is completely idiosyncratic.
Midrigan (2011) likewise finds that a particular product is more likely to have its price changed if a large fraction of the other prices within the same store are being changed, especially when the price changes involve similar goods. He also points out (although based on weaker evidence than the previous finding) that there is some level of synchronization across stores in a given city.
In this article we adopt an evolutionary approach to price setting. Bonomo et al. (2003), for instance, employ evolutionary games to evaluate the transition costs between two equilibria with the goal of reaching disinflation. Their findings suggest that agents with traits of bounded rationality suffer worse losses than more rational agents, and that strategies with below-average performance have their use reduced with the passage of time. Likewise, in the long run agents will eventually be able to learn which strategy leads to larger gains.
In the same vein, Saint-Paul (2005) seeks a motivation for price stickiness. The author aims to answer two main questions: (i) whether the economy converges to a rational expectations equilibrium; and (ii) if it does not, whether it is possible to identify a particular behaviour of sticky prices. He builds a model where economic agents are imperfectly rational and interact in a localized manner with others, and argues that, even though the rational expectations equilibrium is among the choices available to agents, the economy does not necessarily converge to it.
Still on evolutionary dynamics, Silveira and Lima (2008) elaborate a model that accounts for the emergence of monomorphic equilibria (where only one price-setting strategy, either perfect or bounded rationality, survives) and polymorphic equilibria (where both survive in the long run). A further difference is that money is neutral in monomorphic equilibria but may not be in polymorphic ones.
Another model, from Lima and Silveira (2015), allows firms to choose the Nash strategy (update their information and set optimal prices) by paying a cost, or the free bounded rationality strategy, which relies on lagged information. Evolutionary dynamics take the process to a long-run equilibrium where, even though most or all firms employ the bounded rationality strategy, the price level is the same as in the symmetric Nash equilibrium.
Social interaction in economics
In an effort to bring theory closer to empirical data, research in economics has paid growing attention to the importance of social interaction between economic agents. Hommes (2006) reminds us that, in a social interaction framework, the payoff an agent receives for his/her actions is directly related to the agents close to him/her (in a geographical, economic or social sense). Gains do not come only through market mechanisms, but also through imitation, learning and peer pressure.
Flieth and Foster (2002) note that the expectation formation process involves discussion amongst agents, as an individual has his/her expectations influenced by the opinions of friends, business partners or competitors.
Topa (2001) highlights that those interactions happen between social neighbours who share a certain socioeconomic, but not necessarily geographic, proximity: this could mean individuals who took the same classes at school, co-workers or attendants of the same social activities, even if they do not live close to each other.
If a firm decides to raise the price of a good it produces, it cannot raise it much above the prices of neighbouring firms producing close substitutes, as that raises the chance of losing clients and therefore of not maximising profits. Bernhardt (1993) points out that there is a cost to price adjustment, in the sense that an increase in price made without coordination with close firms might in fact not be profit maximising, even when it merely follows a nominal change in the money supply.
In this context, Hohnisch et al. (2005) propose an interactive expectations model to study the business environment amongst entrepreneurs. Each entrepreneur faces a ternary choice between negative, neutral and positive expectations, and changes his/her evaluation of the business climate based on the opinions of neighbours, with the results closely emulating some traits of the German business confidence index.
Fainmesser and Galeotti (2016) develop a model with a monopolistic firm that sells a network good and discriminates prices by using information about consumers' influence and their susceptibility to peer pressure. The monopoly maximises profit by offering discounts to influential consumers and charging premia to susceptible clients (who are more likely to go along with influential consumers).
Fainmesser and Galeotti (2020) make further progress by showing that this influence marketing leads to inefficient consumer-product matches, making firms invest in more information. This competition for information drives firms' profits towards zero but, at the same time, increases consumer surplus.
In macroeconomics, with the prevalence of the new neoclassical synthesis and DSGE models, the first studies in this field were rather timid but, more recently, models with social interaction have been getting more attention. The reason is that current mainstream models have come under fire since the subprime crisis, facing difficulties when trying to fit empirical findings into their framework, as Romer (2016) staunchly points out. ABMs, however, present promising tools to address these issues.
Dosi and Roventini (2019) note that the Great Recession was a natural experiment in macroeconomics, considering it exposed issues with the mainstream approach. It showed that price stickiness, for instance, can be better approached under the lens of agent-based modelling. The authors signal that price stickiness might arise from heterogeneity, imperfect information and coordination hurdles.
However, not all is lost for mainstream economics, as there is a recent debate between RANK (Representative Agent New Keynesian) and HANK (Heterogeneous Agent New Keynesian) models. The mainstream models criticised in the last paragraphs for their lack of realism are the RANK models. In similar contexts, HANK models performed better, as discussed by Kaplan et al. (2018) and Acharya et al. (2020).
For instance, Kaplan et al. (2018) suggest that, in the face of a Ricardian exchange failure, HANK models better capture the transmission mechanism of monetary policy to households. In the same vein, Acharya et al. (2020) stress that, in HANK models, optimal monetary policy differs considerably from the standard one (derived from RANK models), due to household inequalities. However, even though heterogeneity and sometimes bounded rationality are accounted for in these models, network effects are still mostly ignored.
Elements of network theory
This subsection is a primer on network science, as some basic concepts need to be laid out to better understand the computational model used here. Goyal (2012) explains that the networks of a given system might be represented mathematically through graph theory, according to which a network is represented by G = (N, B), where N is a non-empty, finite set of nodes and B is the set of edges, formed by unordered pairs of distinct nodes.
Two nodes i and j are directly connected if they are adjacent to each other. If they are not adjacent but are part of a sequence of adjacent nodes linked amongst themselves, then i and j are indirectly connected. The neighbourhood of any given node is the set of nodes connected to it by edges. In complex networks, a node can connect with another one very far from its geographical location.
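To make the notation concrete, the sketch below (ours, not from Goyal 2012) stores a small network G = (N, B) as adjacency sets and reads off a node's neighbourhood directly.

```python
# A minimal sketch of the G = (N, B) representation described above.
N = {1, 2, 3, 4, 5}                               # non-empty, finite set of nodes
B = {(1, 2), (2, 3), (3, 4), (4, 5), (1, 5)}      # unordered pairs of distinct nodes

adjacency = {i: set() for i in N}
for i, j in B:                                     # edges are undirected
    adjacency[i].add(j)
    adjacency[j].add(i)

print(adjacency[1])  # neighbourhood of node 1: {2, 5}
```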
Nodes can thus be linked even when their positions in the network are far apart, and very distant nodes might be connected through a sequence of only a few steps (the small-world phenomenon). To represent complex networks, Watts and Strogatz (1998) suggest the usage of ring networks (see Figure 1), where nodes are uniformly distributed along a circumference.
Figure 1 – Complex network types
Source: Watts and Strogatz (1998).
Each node is linked to K neighbours in the clockwise sense and K neighbours in the anticlockwise sense. Having defined a ring network, we can eliminate an edge between the i-th node and a neighbour, and then create a new link between i and any other node chosen randomly with a probability p ∈ [0, 1] ⊂ R. This operation is called rewiring.
A network is called regular when p = 0, and its main feature is that all nodes have the same number of edges. For any node i in a regular network, the connected edges are always the same. If p = 1, the network is called random.
Watts and Strogatz (1998) remark that, empirically, social networks tend to present high clustering but a low average distance between nodes. As the authors point out, these traits put many empirical networks halfway between the regular and the random cases; when p ≈ 0.1, we call the result a small-world network. It has an average distance between nodes comparable to that of a random network, but its clustering is strictly higher than that of a random network.
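As an illustration of the construction just described, the sketch below builds a ring lattice and applies the rewiring operation; it is a simplified rendering of the Watts and Strogatz (1998) procedure, not the exact routine used in our simulations.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice with n nodes, each linked to its k nearest neighbours on
    either side; every edge is then rewired with probability p."""
    rng = random.Random(seed)
    edges = [(i, (i + s) % n) for i in range(n) for s in range(1, k + 1)]
    network = set()
    for i, j in edges:
        if rng.random() < p:
            # sever i--j and reconnect i to a randomly chosen node,
            # avoiding self-loops and duplicate links
            new = rng.randrange(n)
            while new == i or (i, new) in network or (new, i) in network:
                new = rng.randrange(n)
            network.add((i, new))
        else:
            network.add((i, j))
    return network

# p = 0 keeps the regular ring, p = 1 yields a random network,
# and p around 0.1 gives the small-world regime discussed above.
g = watts_strogatz(n=100, k=2, p=0.1)
```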
A price-setting game on a complex network

A discrete choice model
We will now build the model, starting with an explanation of how agents make decisions. Consider an agent i that must choose between three mutually exclusive alternatives, denoted by −1, 0 and 1. Let σi ∈ {−1, 0, 1} be the i-th agent's choice in period t ∈ N. In any period t, each agent can choose between lowering his/her price (σi = −1), maintaining the same price (σi = 0) or raising it (σi = 1).
Train (2009) remarks that preferences (and, consequently, the utility function that represents them) depend on observable and non-observable motivations, the latter depending on idiosyncratic traits of an agent. Due to non-observable motivations, decision-making appears stochastic, not deterministic, to an outside observer. To account for that, the utility function features a deterministic component referring to observable traits and a stochastic one associated with non-observable traits:
U(\sigma_i) = U_d(\sigma_i) + \varepsilon(\sigma_i), \qquad (1)
where U_d(σi) is the deterministic component, associated with observable motivations, and ε(σi) the stochastic component, associated with non-observable motivations. After defining the utility function, the next step is evaluating the agent's optimal choice. Choosing a certain value σi ∈ {−1, 0, 1} is the agent's optimal choice if it fulfils the following condition:
U(\sigma_i) \geq U(\sigma_i'), \quad \forall \sigma_i' \in \{-1, 0, 1\}. \qquad (2)
It is possible to rewrite (2) using (1) as follows:

U_d(\sigma_i) - U_d(\sigma_i') \geq \varepsilon(\sigma_i') - \varepsilon(\sigma_i), \quad \forall \sigma_i' \in \{-1, 0, 1\}, \qquad (3)

which highlights that choosing a certain σi will be optimal for the i-th agent if the net gain from the observable component (left-hand side) is larger than the net gain from the stochastic component (right-hand side) associated with any alternative σi′.
However, even if the observable utility of a given strategy σi is larger than the others', that does not guarantee that σi will be chosen by the i-th agent: non-observable incentives from one of the other strategies might outweigh those of σi. Thus, it is only possible to define the probability of the i-th agent choosing σi ∈ {−1, 0, 1}, which from (2) and (3) is:

\mathrm{Prob}(\sigma_i) = \int I\left[U_d(\sigma_i) - U_d(\sigma_i') \geq \varepsilon(\sigma_i') - \varepsilon(\sigma_i),\ \forall \sigma_i'\right] f(\vec{\varepsilon}_i)\, d\vec{\varepsilon}_i, \qquad (4)
where f(ε⃗i) is the joint probability density function (PDF) of the vector of random variables ε⃗i = (ε(σi = −1), ε(σi = 0), ε(σi = 1)) and I[·] is an indicator function, equal to 1 if the inequality between brackets is true and zero otherwise. It follows that function (4) is a cumulative distribution function (CDF) of the random component of the utility function given by equation (1). This CDF indicates the i-th agent's propensity towards choosing a certain strategy σi ∈ {−1, 0, 1}. The propensity towards choosing σi rises with the difference between observable incentives, meaning idiosyncratic motivations lose importance as this differential grows.
Train (2009) highlights that diverse discrete choice models can be derived from distinct specifications of the PDF f(ε⃗i), the most common being the one that yields the logit model. To obtain it, suppose the random components of (1) are independent random variables, each following the same extreme value distribution. This means that for each ε(σi) the PDF is a Type-1 Gumbel, formally given by:
f(\varepsilon(\sigma_i)) = \beta e^{-\beta \varepsilon(\sigma_i)} e^{-e^{-\beta \varepsilon(\sigma_i)}}, \qquad (5)

with β > 0 a real constant. The CDF corresponding to the PDF given by (5) is in turn:

F(\varepsilon(\sigma_i)) = e^{-e^{-\beta \varepsilon(\sigma_i)}}, \qquad (6)
and by inserting (5) and (6) into (4) it is possible to obtain the logistic CDF, defined by:

\mathrm{Prob}(\sigma_i) = \frac{e^{\beta U_d(\sigma_i)}}{e^{\beta U_d(\sigma_i)} + e^{\beta U_d(\sigma_i')} + e^{\beta U_d(\sigma_i'')}} = \frac{1}{1 + e^{-\beta[U_d(\sigma_i) - U_d(\sigma_i')]} + e^{-\beta[U_d(\sigma_i) - U_d(\sigma_i'')]}}, \qquad (7)

where σi, σi′ and σi′′ are the three distinct choices.
An interpretation of β relevant to analysing the results is offered by Brock and Hommes (1997), who remark that, ceteris paribus, when β is small, non-observable incentives have a larger impact on the probability of choosing a certain strategy, so choices depend little on deterministic utility. If β is large, however, the chosen strategy will most likely be the one offering the largest deterministic utility.
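A small numerical sketch (with illustrative utility values of our own) makes this reading of β concrete: the logit probabilities in (7) flatten out for small β and concentrate on the best strategy for large β.

```python
import math

def choice_probabilities(u_det, beta):
    """Logit probabilities of eq. (7) for deterministic utilities
    u_det = {-1: ..., 0: ..., 1: ...} and intensity of choice beta."""
    m = max(u_det.values())  # subtract the max for numerical stability
    weights = {s: math.exp(beta * (u - m)) for s, u in u_det.items()}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

u = {-1: 0.10, 0: 0.50, 1: 0.30}            # illustrative utilities only
print(choice_probabilities(u, beta=0.1))    # low beta: close to uniform
print(choice_probabilities(u, beta=5.0))    # high beta: mass on strategy 0
```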
Brock and Durlauf (2001) propose including a third term in function (1) to represent network effects. In this logic, the i-th agent makes decisions influenced by his/her social neighbourhood, henceforth called n_i. Formally, n_i is the set of agents whose decisions are observed by the i-th agent and influence his/her decisions. Including network effects as an observable incentive causes the deterministic component of the utility function to be represented as follows:

U_d(\sigma_i) = \alpha U_p(\sigma_i) + U_s(\sigma_i, \vec{\sigma}_i^{\,e}), \qquad (8)
with α a parametric constant measuring the relative weight of deterministic private utility, U_p(·), which represents all observable incentives except network effects, and U_s(·) the deterministic social utility, which carries the network effects. It is worth highlighting that U_s(·) depends not only on the choice of the i-th agent but also on the choices of the n_i neighbourhood, represented by the vector σ⃗i^e ≡ (σj^e)_{j∈n_i}.
Inserting (8) into (1), we obtain a new equation for the utility of the i-th agent when he/she chooses σi:

U(\sigma_i) = \alpha U_p(\sigma_i) + U_s(\sigma_i, \vec{\sigma}_i^{\,e}) + \varepsilon(\sigma_i), \qquad (9)
and, after this adaptation, employing the same reasoning used to derive equation (7), we can obtain the probability of choosing alternative σi, but with U_d(σi) now defined by (8).
Lastly, we derive the deterministic social utility U_s(·). It depends on the network topology but, for demonstration purposes, a regular network in quadratic form (a square lattice), chosen for its ease of computational implementation, with N agents will be assumed. The social neighbourhood n_i of the i-th agent located at coordinates (ℓ, c) ∈ {1, ..., N} × {1, ..., N} is then defined by:

n_i = \{(m, n) \in \{1, 2, \ldots, N\}^2 : |\ell - m| + |c - n| = 1\}. \qquad (10)
Having defined n_i, it is possible to define the social utility of the i-th agent located at coordinates (ℓ, c) ∈ {1, ..., N} × {1, ..., N} as:

U_i^s(\vec{\sigma}_i) = \frac{J}{4} \sum_{j \in n_i} \delta_{\sigma_i \sigma_j}, \qquad (11)
where J > 0 is a parametric constant measuring the degree of influence of the neighbourhood and δ_{σiσj} is the Kronecker delta: if σi = σj, then δ_{σiσj} = 1; otherwise δ_{σiσj} = 0. For σi, σj ∈ {−1, 0, 1}, the Kronecker delta can be rewritten as:

\delta_{\sigma_i \sigma_j} = \frac{1}{2}\left[\sigma_i \sigma_j + 3\sigma_i^2 \sigma_j^2 - 2(\sigma_i^2 + \sigma_j^2) + 2\right]. \qquad (12)
Substituting (12) into (11), social utility can be redefined as:

U_i^s(\sigma_i) = \frac{J}{8} \sum_{j \in n_i} \left[\sigma_i \sigma_j + 3\sigma_i^2 \sigma_j^2 - 2(\sigma_i^2 + \sigma_j^2) + 2\right]. \qquad (13)
In short, if α/J > 1 then agents put more weight on private utility; otherwise they are more affected by network incentives. As for β, the lower it is, the more random the choice will be. This builds bounded rationality into the model, as an agent may make bad choices from a deterministic utility viewpoint, picking a worse strategy if he/she puts a low weight on observable incentives.
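The sketch below (our own illustration, with arbitrary neighbour strategies) verifies the polynomial identity (12) and assembles the deterministic utility (8) from a private component and the social component (11).

```python
# Sanity check of identity (12) for strategies in {-1, 0, 1}
for si in (-1, 0, 1):
    for sj in (-1, 0, 1):
        delta = 0.5 * (si * sj + 3 * si**2 * sj**2 - 2 * (si**2 + sj**2) + 2)
        assert delta == (1 if si == sj else 0)

def social_utility(sigma_i, neighbour_strategies, J):
    """Eq. (11): (J/4) times the number of neighbours sharing i's strategy."""
    return (J / 4) * sum(1 for s in neighbour_strategies if s == sigma_i)

def deterministic_utility(sigma_i, neighbour_strategies, private_u, alpha, J):
    """Eq. (8): alpha-weighted private utility plus social utility."""
    return alpha * private_u(sigma_i) + social_utility(sigma_i, neighbour_strategies, J)

# With alpha/J > 1 the private term dominates; below 1, conformity with
# the neighbourhood drives the choice.
u_d = deterministic_utility(1, [1, 1, 0, -1], lambda s: 0.2 * s, alpha=2.0, J=1.0)
```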
Adaptation of a new Keynesian price-setting model
This subsection briefly introduces the model conceived by Ball and Romer (1989) and adapts it into a network model. Consider N producers who fabricate distinct goods using their own labour (thus suppressing labour markets). The goods are subsequently sold and profits are used to purchase products from other producer-consumers; the goods are close substitutes amongst themselves. With this model as a basis, we can swap rational expectations for interactive expectations by arranging heterogeneous agents in the network.
In the original model, money is just a medium of exchange, so the money supply, M, can serve as a proxy for aggregate nominal demand. The producer's utility as a function of nominal aggregate demand, the price of good i, P_i, and the general price level, P, resulting from the simultaneous decisions of all producers, can be written as:

U_i = \frac{M}{P}\left(\frac{P_i}{P}\right)^{1-\epsilon} - \left(\frac{\epsilon - 1}{\gamma \epsilon}\right)\left(\frac{M}{P}\right)^{\gamma}\left(\frac{P_i}{P}\right)^{-\gamma\epsilon}. \qquad (14)
Therefore, the i-th producer's optimal price, P_i*, the one that maximises his/her utility, is given by:

P_i^{*} = P^{\phi} M^{1-\phi}, \qquad (15)

where

\phi = \frac{1 + (1-\epsilon)(1-\gamma)}{1 - \epsilon(1-\gamma)} \in (0, 1) \subset \mathbb{R}

represents the elasticity of the i-th producer's optimal price with respect to the general price level.
In the price-setting game described by Ball and Romer (1989), a symmetric Nash equilibrium occurs when P_i* = P for all i = 1, 2, ..., N. In that case, P_i* = P = M, C_i = C = 1 and Y_i = 1 for all i = 1, 2, ..., N.
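A quick numerical check of (15), with illustrative values of ϵ and γ chosen by us, confirms that ϕ lies in (0, 1) and that at the symmetric equilibrium P = M the optimal rule returns P_i* = P:

```python
def phi(epsilon, gamma):
    """Elasticity of the optimal price wrt the general price level, eq. (15)."""
    return (1 + (1 - epsilon) * (1 - gamma)) / (1 - epsilon * (1 - gamma))

def optimal_price(P, M, phi_val):
    """P_i* = P^phi * M^(1 - phi)."""
    return P ** phi_val * M ** (1 - phi_val)

eps, gam = 4.0, 1.5            # illustrative parameter values, eps > 1, gam > 1
f = phi(eps, gam)              # approximately 0.833, inside (0, 1)
assert abs(optimal_price(1.0, 1.0, f) - 1.0) < 1e-12   # P = M = 1 is a fixed point
```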
Relaxing the hypothesis that producers immediately identify the game's symmetric Nash equilibrium, the model can be adapted to a network game. This means that in period t, the i-th producer, when choosing the price P_{i,t}, must form an expectation of the general price level P_t in order to apply the optimal rule, which becomes:

P_{i,t} = (P_{i,t}^{e})^{\phi} (M_t)^{1-\phi}, \qquad (16)

where P_{i,t}^e is the general price level expected in period t by the i-th producer.
As mentioned beforehand, one of the main changes is abandoning the rational expectations paradigm, where P_{i,t}^e = P_t, in favour of bounded rationality, meaning that agents might not always make the best decision. They must choose their stance at each step: lowering the price (σ_{i,t} = −1), maintaining it (σ_{i,t} = 0) or raising it (σ_{i,t} = 1). When making this choice, an agent picks the strategy that grants the most overall utility.
We now present the utility function's three components. Drawing on Silva (2012), deterministic private utility is given by:

U_p(\sigma_{i,t}) = \left(\frac{P_t - P_{t-1}}{P_{t-1}}\right) \sigma_{i,t}. \qquad (17)
Notice that for U_p(σ_{i,t}) > 0, the two factors must have the same sign: if the agent reduces his/her price (σ_{i,t} = −1) and the general price level falls (P_t < P_{t−1}), then U_p(σ_{i,t}) > 0. Analogously, the same happens if the agent raises his/her price and the general price level goes up. If the agent is neutral (σ_{i,t} = 0), deterministic private utility is zero regardless of the sign of the price variation.
As for the deterministic social utility of the i-th producer in period t, it depends on the strategies of the producer himself and of the n_i neighbourhood. Following (13), it is given by:

U_s(\sigma_{i,t}) = \frac{J}{8} \sum_{j \in n_i} \left[\sigma_{i,t}\sigma_{j,t} + 3\sigma_{i,t}^2 \sigma_{j,t}^2 - 2(\sigma_{i,t}^2 + \sigma_{j,t}^2) + 2\right]. \qquad (18)
As outlined before, the stochastic component of the utility function is given by independent random variables following the Type-1 Gumbel distribution, formally defined in equation (5). Based on these assumptions, the propensity to choose a certain strategy σ_{i,t} can be expressed, as in (7), as:

\mathrm{Prob}(\sigma_{i,t}) = \frac{e^{\beta U_d(\sigma_{i,t})}}{\sum_{\sigma' \in \{-1, 0, 1\}} e^{\beta U_d(\sigma')}}, \qquad (19)

with U_d(\sigma_{i,t}) = \alpha U_p(\sigma_{i,t}) + U_s(\sigma_{i,t}).
The last step in closing the model is defining how producers, given their states in period t, form their expectations of the general price level, P_{i,t}^e. If the agent is neutral, P_{i,t}^e = P_{t−1}; if the agent is inflationary, P_{i,t}^e is drawn randomly from [P_{t−1}, 1.2P_{t−1}]; and if the agent is deflationary, P_{i,t}^e is drawn randomly from [0.8P_{t−1}, P_{t−1}].
Intuitively, and drawing again on Silva (2012), let P_{i,t}^e be a uniformly distributed random variable whose value is conditioned on the choice of σ_{i,t}. More precisely, each agent's expected price is chosen as:

\tilde{P}_{i,t} \in [0.8P_{t-1}, P_{t-1}] \text{ if } \sigma_{i,t} = -1; \quad \tilde{P}_{i,t} = P_{t-1} \text{ if } \sigma_{i,t} = 0; \quad \tilde{P}_{i,t} \in [P_{t-1}, 1.2P_{t-1}] \text{ if } \sigma_{i,t} = 1. \qquad (20)
Based on their expectations of the general price level, each agent defines an individual price in t using the optimal rule given by equation (16), whereas the general price level is calculated by:

P_t = \left(\frac{1}{N} \sum_{j=1}^{N} P_{j,t}^{1-\epsilon}\right)^{\frac{1}{1-\epsilon}}. \qquad (21)
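Putting equations (16), (20) and (21) together, a minimal sketch of one pricing round might look as follows (parameter values are illustrative, not the calibrated ones):

```python
import random

def expected_price(strategy, P_prev, rng):
    """Expectation rule (20), conditional on the chosen strategy."""
    if strategy == -1:                            # deflationary
        return rng.uniform(0.8 * P_prev, P_prev)
    if strategy == 0:                             # neutral
        return P_prev
    return rng.uniform(P_prev, 1.2 * P_prev)      # inflationary

def individual_price(P_expected, M, phi_val):
    """Optimal pricing rule under expectations, eq. (16)."""
    return P_expected ** phi_val * M ** (1 - phi_val)

def general_price_level(prices, epsilon):
    """Aggregate price level, eq. (21)."""
    n = len(prices)
    return (sum(p ** (1 - epsilon) for p in prices) / n) ** (1 / (1 - epsilon))

rng = random.Random(42)
strategies = [rng.choice([-1, 0, 1]) for _ in range(100)]
prices = [individual_price(expected_price(s, 1.0, rng), 1.0, 0.8) for s in strategies]
P_t = general_price_level(prices, epsilon=4.0)
```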
An outcome that could not happen in the original model by Ball and Romer (1989) can and will happen all the time in ours: at any given moment, there can be heterogeneity amongst producer prices and a general price level different from the symmetric Nash equilibrium level that would be reached under rational expectations.
Some variables of interest for the results are the output level and the price variance. The authors define the money supply as M = PC, and as the model features no investment or government expenditure we can, for each period, isolate the mean consumption index and treat it as aggregate output:

Y_t = \frac{M_t}{P_t}, \qquad (22)

and, lastly, the price variance at each step is given by the standard formula:

\mathrm{Var}_t = \frac{1}{N} \sum_{i=1}^{N} \left[P_{i,t} - P_t\right]^2. \qquad (23)
Computational implementation
This subsection provides details on the initial conditions, the computational loop and how agents are arranged in the program. To implement different network structures, we apply a rewiring procedure as described by Watts and Strogatz (1998), but on a matrix instead of a ring network, for ease of programming. This is not a problem, as agents in the first row and column can be linked to those in the last row and column, respectively, solving the boundary issue. We assume N = 10,000 agents for our simulations, i.e. a 100 × 100 matrix.
To implement this procedure formally on a matrix structure, we employ the shortcuts routine described by Taylor and Higham (2009). That is, there is a rewiring probability p ∈ [0, 1] ⊂ R with which a shortcut is generated between two nodes that are not immediate neighbours; the larger p is, the more random the network. A shortcut means that an agent's link to one of his/her neighbours is severed and the agent connects with another peer from anywhere in the matrix.
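A simplified sketch of this arrangement is given below: von Neumann neighbourhoods on a torus plus a shortcut step. It is our own stylised version, not the CONTEST routine itself, and treats links as directed for brevity.

```python
import random

def lattice_neighbours(row, col, n):
    """Von Neumann neighbourhood on an n x n grid with periodic boundaries:
    the first row/column wraps onto the last, as described above."""
    return [((row - 1) % n, col), ((row + 1) % n, col),
            (row, (col - 1) % n), (row, (col + 1) % n)]

def rewire(neighbourhoods, p, n, rng):
    """With probability p, replace one of each agent's links by a shortcut
    to a randomly chosen agent anywhere in the matrix."""
    for nbrs in neighbourhoods.values():
        if rng.random() < p:
            k = rng.randrange(len(nbrs))
            nbrs[k] = (rng.randrange(n), rng.randrange(n))  # distant peer
    return neighbourhoods

n = 100                       # a 100 x 100 matrix, N = 10,000 agents
rng = random.Random(0)
nbhd = {(r, c): lattice_neighbours(r, c, n) for r in range(n) for c in range(n)}
nbhd = rewire(nbhd, p=0.5, n=n, rng=rng)
```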
Agents only change strategies at the start of each step. As the distribution of strategies carries weight in producer decisions, the initial condition is that at t = 0 each strategy starts with 33.33% adherence, distributed randomly across agents. Also, using a uniform probability distribution, prices are spread randomly inside the interval [0.8P_{t−1}, 1.2P_{t−1}] ⊂ R. We chose price changes of at most 20% due to evidence presented by authors such as He et al. (2010) and Roberts and Supina (1996) that most goods prices move up or down by no more than 20%. After defining individual prices, the general price level is set for the t = 0 period using equation (21), and the money supply is fixed at 1.
Having established the initial conditions, period t = 1 starts. With the propensities towards strategies already defined, producers form their expected prices (P_{i,t}^e), as shown by equation (20). After expected prices are established, each agent's individual price is obtained from equation (16). When a period ends, firms learn the general price level, the output level and the variance of individual prices around the general price level. With this information, and observing strategies in their vicinity, we can calculate each agent's utility function, according to equation (8).
After calculating utilities, the logistic CDF in (7) is used to measure the propensity of choosing each of the three strategies. Afterwards, a number r_{i,t} in the interval [0, 1] ⊂ R is randomly generated. Let σ_{i,t} = 0, σ_{i,t}′ = 1 and σ_{i,t}′′ = −1. If r_{i,t} ≤ Prob(σ_{i,t} = 0), the producer adopts the neutral strategy; if Prob(σ_{i,t} = 0) < r_{i,t} ≤ Prob(σ_{i,t} = 0) + Prob(σ_{i,t} = 1), the producer adopts the inflationary strategy; finally, if r_{i,t} > Prob(σ_{i,t} = 0) + Prob(σ_{i,t} = 1), the producer adopts the deflationary strategy. A new period then starts.
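The roulette step just described can be sketched in a few lines (our own rendering; the probabilities are illustrative):

```python
import random

def draw_strategy(probs, rng):
    """Draw r in [0, 1] and pick the strategy whose cumulative probability
    band contains it, in the order neutral (0), inflationary (1),
    deflationary (-1) used above."""
    r = rng.random()
    if r <= probs[0]:
        return 0
    if r <= probs[0] + probs[1]:
        return 1
    return -1

rng = random.Random(7)
sigma = draw_strategy({0: 0.5, 1: 0.3, -1: 0.2}, rng)
```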
Model calibration
This subsection describes the model's calibration process. We calibrate the model aiming to find parameter sets that generate a synthetic time series similar to an empirical one. As the original model is populated by producers, we chose the Producer Price Index for the United States, in particular the finished goods consumption index, as the model deals with their pricing. Monthly data were obtained from the Federal Reserve Bank of St. Louis for the period from 04/1947 (the first entry in the series) to 12/2020, totalling 884 observations.
The calibration criterion is the minimisation of the sum of squared errors, i.e. finding the set of parameters that minimises the squared difference between empirical and simulated prices, formally given by:

\frac{1}{T} \sum_{t=1}^{T} \left(P_{e,t} - P_{s,t}\right)^2, \qquad (24)

with T the number of periods, P_e the price index from empirical data and P_s the general price index from simulated data.
Based on this criterion, an optimisation algorithm is employed to find the best-fitting combination of parameters {ϕ, α, β, J, p} minimising equation (24), using a variety of Newton-type methods to solve the optimisation problem. Given the initial guesses plus the lower and upper bounds of the parameters to be calibrated, a random set of parameters is chosen, simulated values are generated and compared to the real ones. If the set produces a smaller distance than before, the function stores the new parameters and discards the previous ones, repeating the process until finding parameters that minimise the error.
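The accept-if-better loop described above can be sketched as follows. This is a simplified random-search version of the procedure (the actual calibration used Newton-type methods), and `simulate` is a placeholder for a full run of the ABM returning a synthetic price series:

```python
import random

def sse(empirical, simulated):
    """Calibration criterion (24): mean squared distance between the
    empirical and simulated price series."""
    T = len(empirical)
    return sum((e - s) ** 2 for e, s in zip(empirical, simulated)) / T

def calibrate(empirical, simulate, bounds, n_trials=1000, seed=0):
    """Draw parameter sets within the given bounds and keep the set
    yielding the smallest error against the empirical series."""
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(n_trials):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        err = sse(empirical, simulate(params))
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err
```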
Results and emergent properties of the model

The benchmark case: a price-setting game on a regular network
In this section we present the results. For the sake of comparison, we start with the most basic case, a regular network model, thus p = 0. We calibrate the model using the procedure described in Subsection 3.4. To avoid relying on extreme results from a single run, all results are averaged over 30 simulations, except for the parameter tests of Subsection 4.3.
Table 1 – Calibrated values for the regular network model

Parameter | Calibrated value | Range of possible values
α | 0.01245 | [0, 5] ⊂ R
β | 2.0954 | [0, 5] ⊂ R
ϕ | 0.7653 | [0, 1] ⊂ R
J | 6.9351 | [0, 10] ⊂ R

Source: own elaboration.
Figure 2 – Difference between empirical and simulated price indexes, regular network
Source: own elaboration.
Figure 2 shows the difference between the empirical price index and the simulated price index generated with the parameters from Table 1. The percentage error is small, but it rises in both directions when the synthetic series overshoots or undershoots due to outliers. That is to be expected considering that the calibration algorithm is quite simple and outliers are a known problem in synthetic data generation, as pointed out by Tucker et al. (2020).
Figure 3 shows the evolution of the price level and output, which do not converge to a perfectly elegant P = Y = 1 as in Ball and Romer (1989): after step 500, the price level fluctuates between 1 and 1.01 and the output level between 0.99 and 1. As seen in Figure 4, price level variance is very low and converges to zero between steps 500 and 1,000, coinciding with the dying out of the deflationary strategy (see Figure 5). For variance plots we exclude the first 100 observations due to an initial overshooting while agents are still learning.
Figure 3 – Price and output level evolution, regular network
Source: own elaboration.
Figure 4 – Price level variance, regular network
Source: own elaboration.
Figure 5 – Strategy distribution evolution, regular network
Source: own elaboration.
Next is the strategy distribution evolution (Figure 5), which should be read as follows: below the d line is the proportion of deflationary agents at the current step, and below the d + n line is the proportion of deflationary plus neutral agents, so the gap between the d and d + n lines represents the proportion of neutral agents. Everything above d + n is the proportion of inflationary agents. Here there is no convergence towards a single strategy even as price level and output stabilise, with the inflationary strategy coexisting with the neutral strategy in the long run.
Calibration of the proposed ABM: a price-setting game on a complex network
We now compare a complex network's results with those shown previously. Running the optimisation algorithm, our initial guess was the set from Table 1 along with p ≈ 0.1, the standard rewiring probability for small-world networks. After roughly 176,000 steps, that is, 200 runs over our data set, the optimisation algorithm found a combination of parameters, shown in Table 2.
The result closest to the initial guess is ϕ, whose value, much closer to one than to zero, shows that in choosing their optimal prices, agents weigh their expected price much more heavily than changes in the money supply. The elasticities with respect to the money supply and the price level therefore do not seem to depend on the network structure. The value of β, also very close to the initial guess, shows that deterministic incentives carry a relatively high weight in agents' choices; hence each agent's individual decision is not taken very randomly.
Table 2 – Calibrated values for the main model

Parameter | Value
α | 2.0288
β | 2.0279
ϕ | 0.7986
J | 2.3246
p | 0.5267

Source: own elaboration.
In turn, α is much higher than the initial guess, showing a significant incentive from private behaviour in setting prices, much stronger than in the benchmark model where p = 0. This means that in a complex network, the utility associated with setting the price in the same direction as the general price level matters more than in the original model. J presents a moderate value, lower than in the regular network model. This is interesting considering that agents are more connected, but it is consistent with the higher value of α and may well be an effect of the network structure, which is the most important change in the model. In turn, p goes well beyond the small-world expectation (p ≈ 0.1).
Figure 6 – Difference between empirical and simulated price indexes, complex network
Source: own elaboration.
As before, Figure 6 shows the difference between the empirical price series and the synthetic series generated by the parameter set from Table 2. Just like Figure 2, it features some larger errors when trying to keep up with outliers but, as in the regular network version, errors are quite small on average. Thus, we can proceed with our analysis.
Figure 7 – Price and output level evolution, complex network
Source: own elaboration.
Our first analysis (Figure 7) concerns the P = Y = 1 equilibrium, which we had reproduced closely but not perfectly in the benchmark model. Now there is a perfect inverse correlation between P and Y and, between steps 3,500 and 4,000, the economy stabilises at P = Y = 1, with higher precision than in Figure 3. In turn, the variance (Figure 8) is larger than in the benchmark (Figure 4): it takes until around step 2,500 to match the benchmark's variance and does not get as close to zero afterwards. This was somewhat expected, considering there is now a greater degree of randomness.
Emergent properties
The previous section highlighted some differences between the regular network model and our main complex network model, the most interesting being the long-run survival of inflationary expectations. Now we face the most striking difference so far: the emergence of the neutral strategy as the dominant one, as shown in Figure 9. Due to the randomness, none of the three strategies ever truly dies out, but the deflationary and inflationary strategies together end up fluctuating around 1% as we reach P = Y = 1, which might as well be considered an extinction. We run several tests by stressing different parts of the model and checking what changes, if any. To better understand how starting conditions can create very different emergent properties, we run three tests assessing the effect of all agents starting with the same strategy, meaning all 10,000 of them start out either deflationary, neutral or inflationary.
Figure 8 – Price level variance, complex network
Source: own elaboration.
Figure 9 – Strategy distribution evolution, complex network
Source: own elaboration.
Figure 10 – Strategy distribution evolution, all agents starting as deflationary
Source: own elaboration.

The first test (Figure 10) shows what happens when all agents are deflationary at the first step, that is, σ_{i,1} = −1 for all i = 1, 2, ..., 10^4. We had already gathered this
from previous results, but it is further evidence that the deflationary strategy is indeed the weakest. At step 2 almost all agents already move away from it, splitting between the neutral and inflationary strategies. Afterwards, the neutral strategy climbs towards the same spot as in the simulation with balanced starting conditions (Figure 9). It crushes the inflationary strategy much faster, amassing a 98% share around step 600, roughly 3,000 steps earlier.
The second test (Figure 11) has all agents starting as neutral, that is, σ_{i,1} = 0 for all i = 1, 2, ..., 10^4. It shows a very surprising result: at step 2 the neutral strategy, dominant in all simulations so far, is switched out by a majority of agents. Between steps 200 and 400 it is already very close to extinction. Then the deflationary strategy slowly dies out and by step 1,400 the inflationary strategy is already the choice of 99% of the agents. With this result, the deflationary strategy is cemented as the weakest one, as it is never the dominant strategy by the end.
Ultimately, the third test (Figure 12) features all agents starting with the inflationary strategy, that is, σ_{i,1} = 1 for all i = 1, 2, ..., 10^4. Now it is the inflationary strategy that is switched out by almost all agents at the second step, followed by the fastest near-extinction in all of the results, with the neutral strategy establishing firm dominance by step 500. Figures 10 and 12 look like imperfect mirrors of each other, with the universal starting strategy approaching extinction during the first steps and the opposite strategy being dominated by the neutral one, becoming asymptotically extinct around step 600.
Something common to the tests is that by step 2, almost all agents discard the starting strategy; similar tests on the regular network model show that when all agents start on a single strategy, they never change. This reinforces the strength of the neutral strategy, as the only test it does not win is the one where it starts as the only strategy, which, as we have seen, dooms a strategy to die out. We believe this is caused by the larger value of α, which puts more weight on whether agents get the price change right relative to the general price level; as the price level is quite volatile until settling near P = 1 (see Figures 3 and 7), they will almost certainly choose the wrong strategy in one of the early steps.
Figure 11 – Strategy distribution evolution, all agents starting as neutral
Source: own elaboration.
Figure 12 – Strategy distribution evolution, all agents starting as inflationary
Source: own elaboration.
After all of them change to another strategy, there are strong incentives to stick with the new one: the strictly positive intensity of choice (β = 2.0279) keeps agents from making many random changes, and the value of J = 2.3246 keeps them from deviating strongly from their neighbours. Therefore, it makes sense that the starting strategy is brought to the brink of extinction very quickly.
Furthermore, we can also test how the model's parameters affect strategy distributions. To do so, we create vectors with 101 values for each of three parameters of interest: α/J, β and p. For each parameter, we take the calibrated value as the central value and generate 50 equidistant values to its left and right, with the lower bound at zero and the upper bound at double the central value. Each of these tests therefore comprises 101 runs of the model.
We run simulations with t = 1,000 and discard the first 100 observations, avoiding biases from the very high degree of randomness before agents settle into a distribution pattern representative of the overall trend for that specific parameter value. We then take the mean of the remaining 900 observations.
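A sketch of the grid construction (assuming exact doubling of the calibrated value; the increments reported in the text are rounded):

```python
def sweep_values(central, n_side=50):
    """101 equidistant values: the calibrated value at the centre, 50 points
    on each side, bounded below by zero and above by twice the centre."""
    step = central / n_side
    return [i * step for i in range(2 * n_side + 1)]

# e.g. the beta sweep: for each value we run the model for 1,000 steps,
# drop the first 100 observations and average the remaining 900
beta_grid = sweep_values(2.0279)
```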
The first test evaluates how changing the α/J ratio affects the frequency distribution of strategies across producers, as this ratio captures the relative weight of private incentives with respect to social incentives: the higher it is, the more importance agents give to guessing prices right instead of following their neighbourhood. The simplest way to run the test is keeping J = 2.3246 as calibrated and manipulating the value of α. The interval thus starts at zero and is incremented by 0.0174 each time, closing at α/J = 1.74.
Figure 13 shows the results: fewer agents choose deflationary and inflationary expectations as the α/J ratio gets larger. As the relative weight of deterministic private utility rises, a larger fraction of agents adopt the neutral strategy, with the average proportion choosing it reaching 95%. That means when network effects are relatively weak (i.e. when α/J is relatively high), the neutral strategy is the one that provides agents with the largest utility gains. However, for low values of α/J, which give a small relative weight to private incentives, the neutral strategy is more vulnerable: for several values of α/J < 1 the inflationary strategy holds a majority, and for some values around 0.1 the neutral strategy in fact heads towards extinction.
Next, we test values of β, the parameter that determines the intensity of choice, accounting for bounded rationality in the model. If β is low, we expect the proportions to remain near 33% ad infinitum, because choices are random. At the same time, larger values mean agents gravitate towards the strategy with the largest utility gains. We therefore expect that as agents become more rational, that is, as β rises, they will go with the strategy that has shown the best performance so far, which is neutrality. Again, the interval starts at zero and ends at β = 4.045 (double the calibrated value), with increments of 0.04005 each time.
Figure 13 – Average strategy distribution evolution as a function of α/J
Source: own elaboration.
Figure 14 – Average strategy distribution evolution as a function of β
Source: own elaboration.
Figure 14 presents our results: for low values of β the test reflects what was mentioned before, with β < 1 leading to the three strategies coexisting in roughly equal proportions in the long run, even though there is a clear distinction in the utility gains from picking between them. Nevertheless, as predicted by Hommes (1997), when β rises agents pick with higher probability the strategy with relatively better performance in utility gains, which is the neutral strategy; indeed, the value furthest from the origin shows the largest proportion of neutral agents. The path is not the smoothest, but it does show agents moving towards the neutral strategy as decisions become more deterministic.
The last test covers values of the rewiring probability, p, with results shown in Figure 15. If p = 0 we have a regular network with no shortcuts; if p = 1, a totally random network. We account for all values between 0 and 1 in increments of 0.01. Unlike the previous results, there is no clear trend associated with raising the rewiring probability, yet the neutral strategy is established as a clear favourite for most values of p. This reminds us again that the neutral strategy is, on average, much more likely to emerge as the clearly dominant one, although for some values of p it faces a decent challenge from the inflationary strategy.
Figure 15 – Average strategy distribution evolution as a function of p
Source: own elaboration.
Concluding remarks
Lately, ABMs have presented themselves as a versatile, bottom-up approach aided by advances in complexity and network sciences. These tools can better explain and translate the increasing complexity of markets in general. As such, they are able to tackle issues that are hard to include in mainstream models, e.g. network effects and heterogeneous expectations.
Our main objective was to assess how the relaxation of some hypotheses affects the results of the model proposed by Ball and Romer (1989) on price setting. To do so, we adapted a new Keynesian model into an agent-based computational model featuring heterogeneous agents with interactive expectations, bounded rationality and parameters calibrated by numerical methods. Using a regular network version as the benchmark, we also added a rewiring probability to check whether or not it improved the model.
Each agent could, in each period, adopt deflationary, neutral or inflationary expectations. Compared to the regular network model, the calibrated parameters placed greater emphasis on the relative weight of private utility with respect to network effects, a larger weight on deterministic utility and a higher elasticity of one's price with respect to the expected price level rather than the money supply. The rewiring probability presented a rather high value, showing that, when setting prices, agents may be highly influenced by distant nodes in the network.
While the regular network was unable to show the emergence of the neutral strategy as the dominant one, adding the rewiring probability made that happen. Some tests were run to observe other emergent properties, the first being a change to the starting conditions. Making all agents start with each of the three strategies in turn, we saw that by step two of the simulation all of them change strategies, with the starting one going extinct and the remaining two vying for dominance.
These tests of sensitivity to initial conditions show that the neutral strategy is the strongest, with the inflationary in second place, as it crowded out the deflationary strategy when all agents started as neutral (a start that guaranteed the neutral strategy would die out soon, for reasons not entirely clear to us).
We also stressed all parameters to check their effects on strategies. When α grows, the neutral strategy becomes more dominant, showing it is stronger when agents look mainly at their accuracy against the global price level. Likewise, when β grows, meaning agents become more rational, they gravitate towards the neutral strategy. Changes in p, however, show no particular trend when running the gamut from zero to one.
It is also most interesting that, even though we relaxed several hypotheses of the original new Keynesian model, the conclusions are mostly unchanged: agents reach P = Y = 1, reinforcing that heterogeneous expectations and the fear of getting the price wrong relative to one's peers (and thus losing out on profits) are factors that can contribute to price stickiness. Still, as Guerrero and Axtell (2011) point out, by stressing mainstream models and reaching similar conclusions, we can be more appreciative of the models that do hold up under heavy scrutiny.
In fact, our main takeaway from the comparison between regular and complex network results is that some degree of randomness was needed to perfectly replicate the symmetric Nash equilibrium of Ball and Romer (1989), which went hand in hand with the emergence of the neutral strategy as the strictly dominant one. Thus, complex networks might be better suited to represent price-setting phenomena, which makes sense as economic agents are not influenced only by geographical neighbours.
As this is a quite new research agenda, our hand was forced in some decisions, such as the choice of price index and calibration method; still, we ran a large number of simulations and believe our results to be highly trustworthy, even if not fully optimised. Some suggestions for future research on the same theme would be: adding monetary policy; using a preferential attachment network instead of a small-world one; increasing neighbourhood size; using arrangements other than the square lattice; and changing the calibration method and/or price index.
Bibliography

Acharya, Sushant, Edouard Challe, and Keshav Dogra. "Optimal monetary policy according to HANK," 2020.
Ball, Laurence, and David Romer. "Are prices too sticky?" The Quarterly Journal of Economics 104, no. 3 (1989): 507–524.
Bernhardt, Dan. "Costly price adjustment and strategic firm interaction." International Economic Review 34, no. 3 (1993): 525–548.
Bonomo, Marco, Vinícius Carrasco, and Humberto Moreira. "Aprendizado evolucionário, inércia inflacionária e recessão em desinflações monetárias." Revista Brasileira de Economia 57, no. 4 (2003): 663–681.
Brock, William A., and Steven N. Durlauf. "Discrete choice with social interactions." The Review of Economic Studies 68, no. 2 (2001): 235–260.
Brock, William A., and Cars H. Hommes. "A rational route to randomness." Econometrica 65, no. 5 (1997): 1059–1095.
Dosi, Giovanni, and Andrea Roventini. "More is different... and complex! The case for agent-based macroeconomics." Journal of Evolutionary Economics 29, no. 1 (2019): 1–37.
Fainmesser, Itay P., and Andrea Galeotti. "Pricing network effects." The Review of Economic Studies 83, no. 1 (2016): 165–198.
———. "Pricing network effects: Competition." American Economic Journal: Microeconomics 12, no. 3 (2020): 1–32.
Flieth, Burkhard, and John Foster. "Interactive expectations." Journal of Evolutionary Economics 12, no. 4 (2002): 375–395.
Goyal, Sanjeev. Connections: An Introduction to the Economics of Networks. Princeton University Press, 2012.
Guerrero, Omar A., and Robert L. Axtell. "Using agentization for exploring firm and labor dynamics." In Emergent Results of Artificial Economics, 139–150. Springer, 2011.
He, Yongxiu, S. L. Zhang, L. Y. Yang, Y. J. Wang, and J. Wang. "Economic analysis of coal price–electricity price adjustment in China based on the CGE model." Energy Policy 38, no. 11 (2010): 6629–6637.
Hohnisch, Martin, Sabine Pittnauer, Sorin Solomon, and Dietrich Stauffer. "Socioeconomic interaction and swings in business confidence indicators." Physica A: Statistical Mechanics and its Applications 345, no. 3 (2005): 646–656.
Hommes, Cars H. "Heterogeneous agent models in economics and finance." Handbook of Computational Economics 2 (2006): 1109–1186.
———. "Models of complexity in economics and finance." In System Dynamics in Economic and Financial Models, 3–41. Wiley Publishers, 1997.
Kaplan, Greg, Benjamin Moll, and Giovanni L. Violante. "Monetary policy according to HANK." American Economic Review 108, no. 3 (2018): 697–743.
Levy, Daniel, Shantanu Dutta, Mark Bergen, and Robert Venable. "Price adjustment at multiproduct retailers." Managerial and Decision Economics 19, no. 2 (1998): 81–120.
Lima, Gilberto Tadeu, and Jaylson Jair Silveira. "Monetary neutrality under evolutionary dominance of bounded rationality." Economic Inquiry 53, no. 2 (2015): 1108–1131.
Macal, Charles M., and Michael J. North. "Tutorial on agent-based modeling and simulation." In Proceedings of the Winter Simulation Conference. IEEE, 2005.
Midrigan, Virgiliu. "Menu costs, multiproduct firms, and aggregate fluctuations." Econometrica 79, no. 4 (2011): 1139–1180.
Nakamura, Emi. "Pass-through in retail and wholesale." American Economic Review 98, no. 2 (2008): 430–437.
Roberts, Mark J., and Dylan Supina. "Output price, markups, and producer size." European Economic Review 40, nos. 3–5 (1996): 909–921.
Romer, Paul. "The trouble with macroeconomics." The American Economist 20 (2016): 1–20.
Saint-Paul, Gilles. "Some evolutionary foundations for price level rigidity." American Economic Review 95, no. 3 (2005): 765–779.
Silva, Rebbeka Cristina Freire de Melo. "Análise do processo de ajustamento nominal em uma economia com concorrência monopolística: uma abordagem de jogos computacionais em redes." Master's thesis, Graduate Program in Economics, UFSC, 2012.
Silveira, Jaylson Jair da, and Gilberto Tadeu Lima. "Conhecimento imperfeito, custo de otimização e racionalidade limitada: uma dinâmica evolucionária de ajustamento nominal incompleto." Revista Brasileira de Economia 62, no. 1 (2008): 57–75.
Taylor, Alan, and Desmond J. Higham. "CONTEST: A controllable test matrix toolbox for MATLAB." ACM Transactions on Mathematical Software 35, no. 26 (2009).
Topa, Giorgio. "Social interactions, local spillovers and unemployment." The Review of Economic Studies 68, no. 2 (2001): 261–295.
Train, Kenneth E. Discrete Choice Methods with Simulation. Cambridge University Press, 2009.
Tucker, Allan, Zhenchen Wang, Ylenia Rotalinti, and Puja Myles. "Generating high-fidelity synthetic patient data for assessing machine learning healthcare software." NPJ Digital Medicine 3, no. 1 (2020): 1–13.
Watts, D. J., and S. H. Strogatz. "Collective dynamics of 'small-world' networks." Nature 393 (1998): 440–442.