Efficiency Properties of Markets: Public Good Thesis

Abstract

The main thrust of this dissertation is the efficiency properties of markets, with particular focus on public goods, economic theory, Pareto optimal allocations, and equilibrium mechanisms. Many thinkers have offered theories of how markets work, but none provides a complete answer. Adam Smith proposed the notion of the Invisible Hand in markets, yet the idea has largely remained a concept. The researcher's task is to gather these theories, ideas, and comments and to analyse them as carefully as possible, which is one of the most demanding tasks undertaken in this dissertation.

This paper defines and analyses various economic concepts surrounding public goods, including the Lindahl equilibrium, the Nash equilibrium, economic mechanisms, Pareto optimal allocation, Pareto efficiency, and a range of related theories relevant to public good provision. The methodology is a qualitative study drawing on articles and journals from online repositories such as EBSCO and ProQuest.

Introduction

The subject of public goods can be explained by example. In his doctoral dissertation, Van Essen (2010) gave the example of a public good, an insecticide, made available in a community. Since the insecticide is sprayed throughout the neighbourhood without exemption, everybody benefits from it even if not everyone pays for it. Providers of public goods cannot recover the benefits enjoyed by those who did not purchase the good. One solution is to impose a tax, namely a Lindahl tax computed according to each citizen's individual benefit and assessed "at the optimal level of the public good" (Van Essen, 2010, p. 17).

Drawing further from Van Essen's (2010) example, Lindahl taxation may look advantageous for the community and the government, but it is also criticised as unfair. Citizens who are taxed should know their own preferences, and what the government is doing with them and their money. In sum, the tax does not by itself solve the problem of distributing benefits fairly, and it creates further problems, so other incentive schemes are needed to deal with the benefits in the example. One of the remarkable features of the Lindahl allocation is that it is a Pareto optimal allocation, in which consumers receive fair and equitable benefits (Van Essen, 2010).

The Lindahl method of allocation imposes taxation to produce an equal, or Pareto optimal, allocation. Economists view the Lindahl equilibrium favourably and have tried to implement it with a number of mechanisms from different researchers, some dynamic and others static. The Lindahl equilibrium is a standard benchmark, regarded as a model in a "public good environment" (Van Essen, 2014, p. 309), despite some criticisms. Changes in economic conditions, such as preferences and costs, can push a Lindahl mechanism out of equilibrium for a considerable period before it returns to, or near to, an equilibrium. It is therefore important to determine how quickly alternative mechanisms can approximately reach an equilibrium (Van Essen, 2010).

The subject of public good is increasingly important. People are keen on holding the government to account on how it allocates the resources collected from the public in the form of tax. Attaining optimal allocation is the focus of mechanisms provided in this literature review. This essay will analyse several examples of public good with the use of economic concepts, and provide case studies of economies using previous studies of researchers and economists. Most of these studies are taken from online repositories, journals and articles from EBSCO and ProQuest “Dissertations and Theses”.

Historical background

This section delves into the background of economic theory and human behaviour, the latter seen as "the solution to an optimisation problem" (Börgers, 2001). Economic agents change their behaviour from time to time and, as they gain experience, they become "rational", to use the term in the economist's sense. The economic literature has made extensive use of "evolutionary models", which can be considered imitation models.

Economic theory

Economic theory plays an important role in the macro-economic and political policies that shape the lives of individuals and states in late modern capitalism. One view of economic theory holds that economics, as an intellectual technology and a practical form of knowledge, aims to help governments "to construct economies in specific ways, not to accurately represent the reality of economics" (Hayes, 2008, p. 16).

Knowledge acts on the economy, which then directs attention to policy action and results. Adam Smith believed in free and open markets and free trade, with limited government intervention. However, a strong debate about what he really advocated still continues among economists. Revisionist interpreters mock the traditional view of Smith, which caricatures him as a narrow follower of laissez-faire who believed that markets "brought perfect harmony without any flaws, dangers, or problems except those introduced by government policy itself" (Winch; Rasmussen as cited in Mueller, 2013).

George Stigler argued, in his article “Smith’s Travels on the Ship of State,” that Smith had support for laissez faire and did not push for government intervention since he believed his entire philosophical system was grounded on “a granite of self-interest” (Stigler as cited in Mueller, 2013).

Groups known as Left Smithians rejected Stigler's reading of Smith as a champion of laissez-faire. Smith was not a full supporter of unrestricted liberty; he still believed that the government should intervene in some ways, though economic freedom should not be ignored. However, whether Smith favoured laissez-faire or government intervention remains unclear to the economists who have studied him.

Smith wrote the "Wealth of Nations", a book about commercial freedom, which sets out his theory of liberty and limited government. Its key argument is the promotion of free trade and the claim that interference makes bad situations worse. Problems, corruption, and conflict can result from people's economic interests, even in the presence of democracy. Smith criticised government intervention because politicians generally have limited and incomplete knowledge of the impact of their policies on society (Mueller, 2013).

People must have liberty as long as their exercise of liberty does not harm others (Mill as cited in Mueller, 2013). Hirst (as cited in Mueller, 2013), on the other hand, suggests that the exceptions or limitations in Smith's theory on the subject of defensive tariffs were not commendable and could not improve wealth. Smith's exceptions were practical ones; they included determining whether it is better, and more politically feasible, to reduce tariffs gradually rather than abruptly.

Smith's "Wealth of Nations" takes an approach "that pursues liberty as a maxim and not as an axiom" (Clark, 2004, p. 21). His presumption of liberty is a method based on the idea that liberty works well in almost all instances, so it should be presumed the correct policy unless there is proof otherwise. Opponents of liberty bear the burden of proving why it does not work in particular instances. However, Smith did not support liberty unconditionally, allowing his critics to argue that he departed from liberty as an approach to public policy. Nevertheless, Smith was a careful writer and theorist.

His scholarly work has made his writing style and rhetoric a challenge to many. Smith presented his ideas in a way that also accommodated those who opposed him. Although his ideas were radical to his opponents, he was careful in his rhetoric because he wanted his ideas to be received favourably (Clark, 2004).

Smith's interventionist ideas retain the approach that liberty is best unless proved otherwise. Interventionist suggestions are in fact among the most common statements in Smith's writings; all matters of taxation, for example, are interventionist. Many authors who investigated Smith's interventionist approach do not rely on his analysis of tax revenues for their own analyses. In studies of Smith's interventionist writings, researchers catalogued the interventions using a coding system that ranks them, in some cases according to the certainty with which Smith supports the intervention. The range runs from clear support to acceptance "only as a concession" (Clark, 2004, p. 37).

In one such study, Smith is found to support nine interventions. At times Smith backs an intervention, but with a rather limited scope. Smith treats joint-stock companies as privileged companies: they have a legal status that can give them limited liability, and their structure and setup also allow them greater political privileges. These joint-stock companies, according to Smith, are established by royal charter or act of parliament. However, Smith limited his support for establishing joint-stock organisations to four public works: banks, insurance, canals, and water supply (Clark, 2004). In his view, the public application of joint-stock companies should be limited.

The rationality theory

The rationality theory discussed by Börgers (2001) provides a positive account of individual behaviour: it holds that economic agents' behaviour can be described as the solution to an optimisation problem, although agents display such behaviour only in some instances, not all of the time. However, the main concern of Börgers's (2001) article is not the rationality or irrationality problem but the issue of equilibrium choices in games. The term "rational" in the economist's usage differs from the common use of the term.

Being rational does not mean that behaviour results from careful analysis rather than tradition or routine; what matters is the optimality of the behaviour. Herbert Simon (as cited in Börgers, 2001) calls this "substantive rationality", as opposed to procedural rationality. The economist's treatment of actual behaviour as "rational behaviour" can be compared to the way the empirical sciences explain observed phenomena by regarding them as solutions to a particular optimisation problem.

The rationality theory is still controversial in economics, yet some empirical data have brought successes. For example, the division of consumption expenditure between different products is now relatively easy to rationalise, although total expenditure is harder to predict (Börgers, 2001). Consumers' decisions about how to invest their savings remain among the difficult cases.

Laboratory data collected by researchers on decision-theoretic problems are also difficult to rationalise. In strategic situations such as games, the rationality theory is not well supported. The hypothesis can also be tested with auctions, but in some instances experienced bidders do not follow its predictions. Börgers's (2001) conclusion is that much empirical evidence in economics can be explained by the rationality hypothesis, but there are instances in which economic behaviour is difficult to rationalise.

Theory of economic mechanism

The theory of economic mechanisms is a general theory on which theorists can base particular mechanisms. It covers not only the market mechanism and the planned-economy mechanism but also other mechanisms, such as mixed mechanisms combining the market and planning economies. The theory also provides concrete foundations from which significant results follow; for example, all results in the theory of general equilibrium can be viewed as consequences of the theory (Van Essen, 2010).

The theory of economic mechanisms consists of two parts: incentives and information (Tian, 1987). Although discussions of the role of private incentives have appeared in economics and political economy for over two hundred years, at least as far back as Adam Smith's "Wealth of Nations", the formal treatment of the subject is relatively recent. The basic model of implementation consists of four components: the economic environment (endowments, preferences, technologies, etc.); the allocation mechanism (a language and an outcome rule); a reduced-form description of self-interested behaviour (for example, Nash equilibrium); and a concept of a "good" allocation (such as Pareto efficiency or equity).
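As a purely illustrative sketch (the names and types below are my own and are not drawn from the sources), these four components can be grouped into a single structure:

```python
# Hypothetical grouping of the four components of the basic implementation
# model; names and types are illustrative only, not taken from the literature.

from dataclasses import dataclass
from typing import Callable, List, Tuple

Message = float                           # the mechanism's "language": one real number per agent
Outcome = Tuple[float, List[float]]       # (public good level, tax charged to each agent)

@dataclass
class ImplementationModel:
    endowments: List[float]                               # 1. economic environment: endowments ...
    preferences: List[Callable[[float, float], float]]    #    ... and utility functions u_i(x, y_i)
    outcome_rule: Callable[[List[Message]], Outcome]      # 2. allocation mechanism (outcome rule)
    solution_concept: str                                 # 3. behavioural assumption, e.g. "Nash equilibrium"
    is_good: Callable[[Outcome], bool]                    # 4. test for a "good" allocation, e.g. Pareto efficiency
```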

The first and last are familiar to economists since the components are from standard general equilibrium theory. The second and third are familiar to game theorists since most of these components come from standard n-person non-cooperative game theory (Van Essen, 2010).

In order to address issues such as those raised by Adam Smith and Samuelson concerning the performance of various ways of allocating resources in the face of self-interested behaviour, many self-interested behaviour assumptions have been used, such as: dominant strategy equilibrium, Nash equilibrium, Bayes equilibrium, manipulative equilibrium, maximum equilibrium, and so on (Nash, 2000).

Some ideal social choice correspondences that are currently used are:

  1. the Pareto-efficient allocations,
  2. the individually rational allocations,
  3. the core allocations,
  4. the Walrasian allocations,
  5. the Lindahl allocations,
  6. the Shapley-value allocations, and
  7. equitable allocations.

Two alternative definitions of Pareto efficiency are widely used: the regular and the weak. The two generally coincide as long as preferences are continuous and strictly monotonically increasing; however, this equivalence may fail in some economies with public goods while holding in others.
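Stated formally in the standard way (a sketch of the usual textbook definitions, not a quotation from Tian, 1987), for an allocation z and feasible alternatives z′:

```latex
\text{Pareto efficient: } \nexists\, z' \ \text{such that } u_i(z') \ge u_i(z)\ \forall i
\ \text{and}\ u_j(z') > u_j(z)\ \text{for some } j .
\qquad
\text{Weakly Pareto efficient: } \nexists\, z' \ \text{such that } u_i(z') > u_i(z)\ \forall i .
```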

In the implementation context, researchers would like social choice correspondences that produce Pareto-efficient and individually rational allocations. The Walrasian correspondence is Pareto efficient but is not monotone. Maskin (as cited in Tian, 1987) showed that a feasible correspondence satisfying no veto power can be implemented in Nash equilibrium only if it is monotone. The Walrasian correspondence therefore cannot be implemented, but the constrained Walrasian correspondence, because of its monotonicity, can be.

Bochet (2004) constructs a simple mechanism based on the constrained Walrasian correspondence for exchange economies with three or more agents, and then extends it with an enlarged message space. The author characterises the Bayesian equilibrium outcomes, which include the constrained Walrasian correspondence, when moving from complete to incomplete information. Bochet (2004) further shows that restriction to the constrained rational expectations equilibrium does not follow from agents' behaviour: there are simple economies in which the constrained rational expectations equilibrium is not among the Bayesian equilibrium outcomes.

Bochet (2004) indicates that when agents hold private information and are asked to report it, they act strategically. The author gives an example in which a planner wants to implement Walrasian allocations. Because some characteristics of the economy are unknown, the planner provides appropriate incentives so that the agents reveal their characteristics; given these characteristics, the equilibrium outcome is a Walrasian allocation (Bochet, 2004).

In his doctoral dissertation, Bochet (2004) used an implementation approach to find the connections "between Walrasian Equilibrium and Rational Expectations Equilibrium" (p. 9), henceforth WE and REE respectively. Several economists provided mechanisms that use these correspondences, among them Hurwicz, Schmeidler, and Postlewaite-Wettstein (as cited in Bochet, 2004). Under incomplete information, Palfrey and Srivastava (as cited in Bochet, 2004) show that in economies with non-exclusive information, in which endowments do not depend on the state, implementation in Bayesian equilibrium is possible. Wettstein (as cited in Bochet, 2004) focused on a continuous mechanism that implements constrained REE allocations in Bayesian equilibrium.

Wettstein's informational assumptions, however, are distinct from those of Bayesian implementation: the mechanism is presented at the ex-ante stage (Bochet, 2004). In Bayesian implementation it is standard for agents to receive their private information before the game is played, which is to say that within that framework they already know their type.

Bochet (2004) restricts attention to environments with non-exclusive information, in which the mechanism "takes place at the interim stage" (p. 10). Non-exclusive information means that the pooled information of any n − 1 agents is enough to determine the state of the world. Blume and Easley (as cited in Bochet, 2004) show that without non-exclusive information a planner can construct a robust example of an economy with an REE that is not incentive-compatible. Thus, Bochet (2004) limits attention to environments whose information is non-exclusive.

However, the Walrasian correspondence cannot be implemented in Nash equilibrium, since monotonicity fails for Walrasian allocations. Bochet (2004) therefore focused on "Constrained Walrasian and Constrained REE Equilibria" (p. 10).

Criteria for economic mechanisms

A good mechanism must provide a balanced equilibrium. If the outcome rule is not continuous in the messages, an agent's strategy may not deliver the intended allocation: a strategy choice very close to the equilibrium strategy may produce an outcome that is not close to the equilibrium allocation. This is a serious problem for a mechanism used as an iterative process, since terminal messages are expected to be close to equilibria, and any discontinuity in the outcome rule makes it difficult to approximate the final outcome. If the mechanism is continuous, then messages close to equilibria yield allocations close to equilibrium allocations (Van Essen, 2010).

Furthermore, because of errors in information transmission, continuous outcome functions are essential. In addition, if a mechanism is not balanced, the resources it allocates can exceed total endowments. It is therefore important to impose both a balance condition and an individual feasibility condition. Many existing mechanisms implementing social choice correspondences do not meet these requirements.

Dynamic mechanism design for public good

A dynamic mechanism design for public goods can be explained clearly through an example, or case study, provided by Candel-Sánchez (2003). The author focuses on a two-part public project carried out over two periods: in each period the planner decides whether to undertake the corresponding part of the project. For example, the project may consist of constructing a two-section highway. The planner first provides the first section, and only then does the second section become feasible. An interesting aspect of the project is that the first part has to be constructed before the second part can be provided, and valuations for the parts of the project evolve from stage one to stage two.

Candel-Sánchez (2003) considers an economy consisting of a pure public good and a private good (money). The planner provides the public good in two stages and should provide the good when the social welfare it creates exceeds the cost of provision. By introducing a dynamic mechanism, the planner can elicit valuations for the public good and achieve an efficient allocation.

The classical point of reference is the Clarke-Groves mechanism for public goods under "quasi-linear preferences" (Groves; Vickrey; & Clarke as cited in Candel-Sánchez, 2003). Within this context, using the Revelation Principle, Candel-Sánchez (2003) describes game forms that attain the social welfare optimum in "a (dominant strategy) equilibrium of the game" (p. 622). These efficient mechanisms share features with Groves mechanisms and are designed to be compatible both with the social welfare optimum and with each agent's individual maximisation (Candel-Sánchez, 2003).

The framework for this dynamic mechanism starts with an economy containing a public good and a private good, as in Van Essen's (2010) example. There is a finite number of agents, i = 1, …, N, together with a social planner, who plans to carry out a fixed project in two stages. Provision is decided at the start of each stage, and the provision of the good in period t = 1, 2 is denoted dt (Candel-Sánchez, 2003). Again, for the second stage to be feasible, the first one must be undertaken. The planner can make positive or negative transfers to the agents; the vector of transfers in period t is Tt = (Tt1, Tt2, …, TtN). The set of alternatives is presented as:

Equation

Wherein: x represents a social alternative or outcome.

Each agent cares about the public good allocation in both time periods and about the transfers received from the planner. The preferences of agent i over X are represented by the quasi-linear utility function:

Function

where θi = (θi1, θi2) represents agent i's valuations of the public good. In period 1 the agent learns his valuation of the first part of the public good, and only once stage one has been completed does he learn his valuation of stage two.

In summary, the two stages are linked: stage two cannot be undertaken without first completing stage one. The idea is that agent i will reveal the truth in the second stage, since the realisation of θi2 affects only stage two. The two critical characteristics of this dynamic mechanism design are that the second stage cannot be undertaken without first undertaking stage one, and that a suitably designed mechanism for public goods can implement the efficient outcome even though the planner does not initially know the participants' valuations of the two stages (Candel-Sánchez, 2003).
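As a sketch of this setup (the notation follows the text above, but the functional form is assumed rather than quoted from Candel-Sánchez, 2003), agent i's quasi-linear utility over an alternative x = (d1, d2, T1, T2) can be written as:

```latex
u_i(x;\theta_i) \;=\; \theta_{i1}\, d_1 \;+\; \theta_{i2}\, d_2 \;+\; T_{1i} \;+\; T_{2i},
\qquad d_t \in \{0,1\}, \quad d_2 \le d_1 ,
```

where the constraint d2 ≤ d1 captures the requirement that the second stage can only be undertaken after the first.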

Game theory

Mechanisms are called games because there are players, and before a game is played there has to be a theory. A particular example is supply chain management, which uses game theory with the different parties acting as players (Mesterton as cited in Dai, 2003). Suppliers, retailers, and end-users are players who have a "profit function" and whose responses reflect their best strategies. Through the Nash equilibrium the situation becomes a game in which the players' decisions are, in fact, strategies. Maximum profit is achieved if a single decision is made on the basis of all available information, also referred to as "the optimal case or first-best case", which corresponds to "centralized control" (Dai, 2003, p. 9).

This situation is not what is actually happening because the entities or units do not belong to one corporation – it further leads to conflict of decisions. This is referred to as “a decentralized control structure” (Dai, 2003, p. 9). Furthermore, each player commits his/her own strategy in order to attain the optimum profit. In a centralized control, it is the responsibility of the manager to construct an effective design to achieve the optimum performance of the entire supply chain structure.

A retailer can form a strategy in which he or she asks other players to change their payoffs; a kind of decentralized structure that attains the optimal profit can then be executed. The players participate in what is termed "channel coordination" (Dai, 2003).

Cachon and Zipkin (as cited in Dai, 2003) studied suppliers and retailers, with emphasis on a "serial supply chain". They compared stock policies and sought a recommendation for cost reduction; the study resulted in a change in payoffs close to the optimum. Other authors used price discounts to influence customers' decisions (Klastorin, Moinzadeh, & Son, 2002). The supplier offers a retailer a discount that coincides with the normal course of the business cycle; this can provide an effective supply chain initiative and lead to a clear method for determining the maximum price discount.

Other models with effective designs give further insight into price schemes that can produce optimal outcomes (Goyal & Gupta, 1989; Weng, 1995). Another work is that of Lariviere and Porteus (as cited in Dai, 2003), which describes a supply chain involving a supplier and a retailer facing a newsvendor problem; there, market size influences sales volume and profit. Dai's (2003) model uses a single product with a one-time distribution process, one supplier, and two retailers.

The formula and data are as follows. Customers who encounter an out-of-stock product (a stockout) at retailer i go to retailer j (j ≠ i) with probability aij; a matrix A represents the market structure, where "a11 = a22 = 0 and 0 ≤ aij ≤ 1 for i ≠ j", called the "market search matrix" (Dai, 2003, p. 18). In sum, retailer i faces the demand of his or her own customers (local demand) plus the customers who come from the other retailer j (distant demand) because of products that are out of stock at j (Dai, 2003).
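A small numerical sketch may make the structure concrete; the numbers and the function below are hypothetical and are not taken from Dai (2003). The sketch assumes that the unmet local demand at one retailer spills over to the other in proportion to the search probabilities:

```python
# Illustrative two-retailer market search example (hypothetical numbers, not
# from Dai, 2003). a[i][j] is the probability that a customer who finds
# retailer i out of stock searches at retailer j; a[0][0] = a[1][1] = 0.

a = [[0.0, 0.6],
     [0.4, 0.0]]               # market search matrix A

local_demand = [100.0, 80.0]   # each retailer's own customers
stock        = [70.0, 90.0]    # units each retailer ordered for the season

def effective_demand(i: int) -> float:
    """Local demand plus the distant demand spilling over from the other retailer."""
    j = 1 - i
    unmet_at_j = max(local_demand[j] - stock[j], 0.0)   # customers retailer j cannot serve
    return local_demand[i] + a[j][i] * unmet_at_j

print(effective_demand(0))   # 100.0: no spillover, because the other retailer has enough stock
print(effective_demand(1))   # 98.0 = 80 + 0.6 * 30 customers searching after a stockout
```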

During a particular season, the normal course of business is observed, in which each retailer orders the usual quantity of the product. There may be penalties for unsold stock or for stockouts. Each retailer therefore faces a problem like that of an ordinary newsvendor. The decision of one affects the other, which makes the situation a game through the ordering decisions. It is presumed that each retailer knows the ins and outs of the demand distribution and the market situation. It is further assumed that the players are rational, in that they know how to maximise their expected payoffs (Dai, 2003).

Example of an economy

An economy may comprise two consumers, Eric and Sarah, who have preferences over a private good (yi) and a public good (x) (Van Essen, 2010, p. 22). These preferences are represented by the following equations:

Equation

Consumers Eric and Sarah are the players in this economy. The government can produce the public good, such as a particular project, but it has to use a production technology given by F(z) = ¼z, so 4 is the real marginal cost of producing the good (Van Essen, 2010, p. 22). In this situation, we have to find the Pareto efficient allocations, that is, allocations from which neither Eric nor Sarah can be made better off without making the other worse off. This requires a computation using the "Samuelson Marginal condition" (Van Essen, 2010).

Sarah's marginal rate of substitution is MRSA = 6 − x, whereas Eric's is MRSB = 8 − x. The two rates are summed and set equal to the marginal cost of 4; accordingly, "14 − 2x = 4, and xPO = 5". In Van Essen's (2010) computation, 4 units of the private good are needed to produce one unit of the public good, so the total cost of producing the public good is 20: "xPO = 5, and YA + YB = 20" (Van Essen, 2010, p. 23).
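The computation can be checked in a few lines of code; the marginal rates of substitution are taken directly from the text, and the rest is arithmetic:

```python
# Check the Samuelson condition for the Eric-and-Sarah example:
# the sum of the marginal rates of substitution equals the marginal cost.

def mrs_sarah(x: float) -> float:     # MRS_A = 6 - x
    return 6 - x

def mrs_eric(x: float) -> float:      # MRS_B = 8 - x
    return 8 - x

MARGINAL_COST = 4.0                   # F(z) = z/4: four units of input per unit of output

# (6 - x) + (8 - x) = 4  =>  14 - 2x = 4  =>  x = 5
x_po = (14 - MARGINAL_COST) / 2
total_cost = MARGINAL_COST * x_po     # private good used up to produce x_po units

print(x_po)        # 5.0  (Pareto optimal level of the public good)
print(total_cost)  # 20.0 (so y_A + y_B must fall by 20 in total)
```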

Lindahl Equilibrium

Erik Lindahl (as cited in Van Essen, 2010) proposed a cost-sharing process for financing public goods in 1919 and argued that it would produce a Pareto efficient outcome. A number of mechanisms have been proposed to support Lindahl's procedure, such as those of Hurwicz and Walker (as cited in Van Essen, 2010), who suggested implementing it in Nash equilibrium through "incentive compatible mechanisms" (Van Essen, 2012). Mechanism designs have also been modified, for instance by instilling dynamic stability to address instability problems.

Hurwicz (as cited in Tian, 1987) presented a quasi-game that produces Pareto efficient and individually rational allocations, and whose Nash allocations are Lindahl allocations. However, two of its characteristics might be regarded as insufficient: the mechanism requires a non-player participant known as the auctioneer, and the outcome is not balanced.

Moreover, Hurwicz (as cited in Tian, 1987) provided another mechanism, for private goods economies, whose Nash allocations coincide with Walrasian allocations and which dispenses with the auctioneer. Hurwicz's mechanisms are not individually feasible, but at equilibrium the result is a feasible consumption pattern.

Hurwicz (as cited in Bochet, 2004) proved that the Walrasian mechanism is not strategy-proof. Agents have an incentive not to report their demand functions honestly, and the auctioneer cannot easily verify the information they provide. Bochet (2004) cites the example of a benevolent planner who wants to build a bridge to benefit a certain community: when the planner asks the agents in the community to report how much they are willing to pay for the bridge, the agents tend to under-report the amount. This situation is referred to as the free-rider problem (Bochet, 2004).

The question then arises of how to resolve the tension between private incentives and efficiency. Economists try to identify the situations in which the free-rider problem occurs and how to solve it. The "Theory of Implementation", according to Corchon (as cited in Bochet, 2004), was created to deal with the free-rider problem; the answer depends on the kind of game the agents are involved in, the solution concepts applied in the game, and the constraints the agents face.

Bochet (2004) further adds that the Implementation Theory is the study of “the relationship between the structure of the institution through which individuals interact and the outcome of that interaction” (p. 3). In other words, implementation is about interaction of different persons or players, with a game-theoretic exchange of messages and ideas, and this includes the design of the mechanism for proper implementation.

According to Jackson (as cited in Bochet, 2004), an important requirement of implementation theory is that all equilibria of the mechanism correspond to the desired choices for the individuals involved (Bochet, 2004, p. 3). Robust implementation theory focuses on mechanisms that enforce a social choice function, and static robust mechanisms depend on the agents' rationality and on beliefs about that rationality (Müller, 2010).

Walker (as cited in Tian, 1987) modified Hurwicz's mechanism: it retains the properties cited above but uses a somewhat smaller message space. This message space is of minimal dimension for Nash implementation of the Lindahl correspondence (Tian, 1987). Walker's mechanism, like Hurwicz's, is not individually feasible. Hurwicz, Maskin and Postlewaite (as cited in Tian, 1987) presented mechanisms implementing the Walrasian and Lindahl correspondences whose outcome functions are individually feasible and balanced, although these outcome functions were criticised as discontinuous (Tian, 1987).

Postlewaite and Wettstein (as cited in Tian, 2000) designed a mechanism for private goods economies that produces individually feasible, though not strongly balanced, outcomes and Nash-implements the constrained Walrasian correspondence. The mechanism is not balanced out of equilibrium, and its message space is larger than the standard one.

Nevertheless, following the Groves-Ledyard mechanism, Hurwicz and Walker (as cited in Van Essen, 2012) were the first to construct mechanisms whose equilibria produce Lindahl allocations. Their work on Lindahl mechanisms, particularly the Walker mechanism, provides a simple but significant description of how such mechanisms work; the Walker mechanism therefore deserves particular attention and study.

Van Essen (2012) describes how a mechanism works: a mechanism is a set of principles or rules that gives directions. Hurwicz and Marschak (2001) add that a mechanism is needed in an organisation so that members can cope with a changing environment. The organisation may be composed of traders who exchange goods for goods or money, or of division managers within a firm.

The members have knowledge of the organisation’s environment – this can be termed the member’s “individual characteristic or local environment” (Hurwicz & Marschak, 2001). Tian (1987) indicates that a mechanism can have characteristics of individual feasibility, balance and continuity, along with properties of Nash implementation and Pareto optimality.

The rules of the mechanism specify the choices, or messages, that economic players (e.g. retailers, suppliers) send to a planner, and the process that maps those choices into outcomes. When these rules are combined with individual preferences, they produce a game; in other words, a mechanism induces a game. A public good mechanism maps messages m into an allocation, which specifies the level of the public good x and the players' private good consumption after each player is charged a tax Ti (Van Essen, 2012).

Walker mechanism

Focusing on the Walker mechanism, each player announces a request designated mi. The mechanism uses the messages to determine the level of the public good, as given in the following equation:

Equation

The players’ requests can be designated in m = (m1, …, mN). The individual tax can be computed as follows:

Equation

where player indices are cyclic, so that index N + 1 refers to player 1 and index N + 2 to player 2 (Van Essen, 2012). Consumer i then has a personalised price of:

Equation

In the equation above, consumer i's personalised price does not depend on his own request, so he takes it as given. Moreover, the budget balances, because the personalised prices always sum to the marginal cost of the public good.
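A minimal code sketch may help. It assumes the form of the Walker outcome function commonly reported in the literature (the level equals the sum of the requests, and the personalised price is β/N plus the difference of the two following players' requests, with cyclic indices); Van Essen's (2012) exact notation may differ:

```python
# Hedged sketch of a Walker-type outcome function (standard textbook form,
# not necessarily Van Essen's exact notation).

from typing import List, Tuple

def walker_outcome(m: List[float], beta: float) -> Tuple[float, List[float], List[float]]:
    """Return the public good level, personalised prices, and individual taxes."""
    n = len(m)
    x = sum(m)                                            # public good level x(m)
    prices, taxes = [], []
    for i in range(n):
        p_i = beta / n + m[(i + 1) % n] - m[(i + 2) % n]  # personalised price (cyclic indices)
        prices.append(p_i)
        taxes.append(p_i * x)                             # individual tax T_i(m)
    return x, prices, taxes

# Example: three players whose requests happen to sum to 15, marginal cost beta = 4
x, prices, taxes = walker_outcome([7.0, 5.0, 3.0], beta=4.0)
print(x)                               # 15.0
print(round(sum(prices), 9))           # 4.0: prices sum to the marginal cost
print(round(sum(taxes), 9), 4.0 * x)   # 60.0 60.0: taxes exactly cover the cost
```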

The Walker mechanism can be illustrated further with Van Essen's (2012) example of an economy in which Joe, Mary, and Christine interact. Each of the three has preferences over two goods, "a private good (yi) and a public good (x)" (Van Essen, 2012, p. 40). The consumers' preferences are given by the following functions:

Functions

Joe, Mary, and Christine start with none of the public good, but they hold exactly 20 units of the private good. The government uses the technology F(z) = ¼z to produce the public good from the private good, so the real marginal cost of production is 4 (Van Essen, 2012). The Lindahl prices for the three consumers are (PxA, PxB, PxC) = (1, 3, 0), and the corresponding Pareto efficient allocation is (x, yA, yB, yC) (Van Essen, 2012, p. 41).

In the economy of Joe, Mary, and Christine, a Walker scheme can be applied, drawing on Van Essen's (2012) example. Each of the three consumers reports a number mi to the planner, who represents the government. The government is then responsible for producing the quantity of public good equal to the sum of the three messages: x(m) = mA + mB + mC.

The tax the government will impose for each consumer is the outcome of the following function:

Function

There are specific rules for this mechanism. Consumer A will solve the following problem:

Function

which has the following first-order condition:

Function

The consumers' personalised prices act like market prices because each is independent of the consumer's own message; moreover, mi can be any real number. Consumer i therefore selects the public good level that maximises his or her utility, so that each of the three consumers satisfies MRSi(m*i, m*-i) = Pi(m*). The consumers' equilibrium can be presented as follows:

Equilibrium

and the level of public good produced at the equilibrium of the mechanism is 15. The Samuelson Marginal condition and the Walker mechanism give the same result, and this is no coincidence according to Van Essen (2012): the consumers' utility-maximisation conditions and the Samuelson Marginal condition are almost the same, so that at equilibrium Σi MRSi = 4 = β.
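The reason the two coincide can be seen by summing the personalised prices. Assuming the standard Walker price rule Pi(m) = β/N + mi+1 − mi+2 with cyclic indices (the exact notation in Van Essen, 2012 may differ), the difference terms cancel around the cycle:

```latex
\sum_{i=1}^{N} P_i(m)
  \;=\; \sum_{i=1}^{N} \Bigl( \tfrac{\beta}{N} + m_{i+1} - m_{i+2} \Bigr)
  \;=\; \beta ,
\qquad \text{so at equilibrium} \qquad
\sum_{i=1}^{N} MRS_i \;=\; \sum_{i=1}^{N} P_i(m^{*}) \;=\; \beta \;=\; 4 ,
```

which is exactly the Samuelson condition for this economy.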

Mechanisms supporting Lindahl equilibrium

Other designs that have stood out in the literature include those by Vega-Redondo, de Trenqualye, Kim, and Chen (all cited in Van Essen, 2010), among others, some of which are discussed in this paper. The Chen and Kim mechanisms (as cited in Van Essen, 2012) achieve Lindahl allocations at their Nash equilibria, but they are distinct from each other.

Both models have precise equilibrium levels, but their outcomes are not conclusive. Clarke's mechanism (as cited in Van Essen, 2014) does not rely on knowledge of preferences and costs; instead it imposes on each agent a tax (the Clarke tax) that gives each consumer, relative to the base group, an incentive consistent with the efficient level of the public good. The mechanism can generate additional tax revenue, but it does not ensure a Pareto optimal allocation (Van Essen, 2014).

The Chen mechanism

The Chen mechanism works as follows. Every participant forwards a request ri, and the profile of requests determines the public good level in the same way as in the Walker mechanism, using the equation

Equation

wherein the request profile r determines the public good level x (Van Essen, 2010). The Chen mechanism additionally requires each participant to provide a second number, pi, interpreted as a prediction of the public good level. The tax is given by the equation:

Equation

wherein γ and δ are parameters specified by Chen. The tax rewards each participant's accurate prediction through the term (pj − x)². The Chen mechanism is regarded as a generalisation of Kim's mechanism; out of equilibrium, its budget is not balanced (Van Essen, 2010).

The Kim mechanism

The Kim mechanism is considered a special case of the Chen mechanism, with γ = 1 and δ = 0, and the Kim tax is given by the following equation:

Formula

The Kim mechanism is not stable even under reasonably adaptive behaviour by the participants. It differs from the simple Walker mechanism in that each participant must supplement his or her public-good request with further messages, and the unbalanced nature of the budget must also be considered (Van Essen, 2010).

Vega-Redondo

Vega-Redondo's model (as cited in Čábelka, 2001) is one of the equilibrium models in which players choose from among the best strategies present in the population, without considering past experience. In other models, such as those of Schlag, Eshel, Samuelson and Shaked (as cited in Čábelka, 2001), players tend to imitate on the basis of past experience when revising their strategies. The outcome of the Vega-Redondo model resembles the Walrasian outcome and applies to a class of games beyond Cournot games alone (Schipper, 2004). Vega-Redondo (as cited in Schipper, 2004) uses stochastic stability to analyse such games.

Economic agents commonly imitate one another. Robson and Vega-Redondo (as cited in Chen, Chow, & Wu, 2013) focused on players' long-run behaviour when they imitate and are randomly matched. In this model, participants are paired randomly and independently to play coordination games (CG), and the imitated actions are "the best average payoff actions and local interactions" (Chen et al., 2013, p. 1042).

Chen et al. (2013) extended the analyses of Eshel et al. and Vega-Redondo by constructing an evolutionary CG. Participants meet their two neighbours to play the CG; they are expected to imitate the actions producing the highest average payoffs and to gather information from their neighbours. In each time period, players are allowed to make mistakes or to experiment with actions. In this setting, Chen et al. (2013) find that the participants' long-run behaviour depends on the population structure and the number of players as the mutation rate tends to zero.

When the population increases, even accounting for the gains from the payoff-dominant strategy, the payoff-dominant equilibrium is selected with probability less than one, and the risk-dominant equilibrium can still survive in the long run. The prediction is that, in terms of average payoffs, the risk-dominant strategy loses strength.

For example, suppose player i and his left neighbour adopt the risk-dominant strategy, while the neighbour on his right selects the payoff-dominant strategy. If the gain is large, the left neighbour obtains a lower payoff than the right one, which pushes "the average payoff of the risk-dominant strategy" below that of the "pay-off dominant one" (Chen et al., 2013, p. 1043). In other words, player i will switch to the payoff-dominant strategy.

Chen et al. (2013) compare their work with Vega-Redondo's, noting that the latter imposes an extra condition on payoff structures that favours the emergence of the risk-dominant equilibrium. Vega-Redondo (as cited in Chen et al., 2013) assumes that participants meet each of their two neighbours across different time periods and mimic the actions that produce "the highest random average payoff" (Chen et al., 2013, p. 1043). The authors apply the Strong Law of Large Numbers, indicating that the players have equal opportunities to meet each of the two neighbours.

Supermodular mechanisms

In developing supermodular mechanism design, Mathevet (2010) constructed mechanisms that can deal with multiple-equilibrium problems and remain robust under bounded rationality; addressing such equilibrium problems is one of the primary objectives of supermodular mechanisms. Mathevet (2010) describes their defining condition: strategies are complementary, so that an agent wants to choose a higher strategy when the other agents do the same. Building on Milgrom and Roberts (as cited in Mathevet, 2010), supermodular mechanisms have "extremal equilibria", and the interval between these equilibria can create a number of problems; but by working with these intervals the problem can be reduced, measured, and at times solved (Mathevet, 2010).

Theories of implementation focus on the equilibria of mechanisms. When a mechanism is played repeatedly, play may drift away from the equilibria, and minor perturbations in beliefs or behaviour can lead to an undesirable equilibrium outcome and further problems. This matters because a mechanism is designed for practical purposes, e.g. incentives are designed to attain a desirable result in equilibrium (Mathevet, 2008).

Supermodular games are strongly related to the implementation framework, given that their mixed strategy equilibria are locally unstable under monotone adaptive dynamics, as in Cournot-style adjustment (Echenique & Edlin as cited in Mathevet, 2010). In practice, a static mechanism is often used many times before an outcome settles. For example, traffic authorities may plan a toll system aimed at minimising congestion and providing users with better roads: plans are provided by the manager and contracts are awarded so that agents pursue revenue maximisation over a certain period; a procurement department allocates the necessary jobs, and the contractors bid in auctions several times (Mathevet, 2008).

Mathevet (2008) developed a theory of supermodular Bayesian implementation to enhance mechanism design, particularly with respect to learning and stability. The mechanism and the rules of the game are as follows: the mechanism assigns feasible strategies to the agents and gives rules for mapping strategies into outcomes. Complementarity means, for example, that a worker wants to increase his effort when others exert more effort in their own jobs (Mathevet, 2008).

Supermodular implementation has attractive dynamic features. Participants are motivated to play best replies, and this enables boundedly rational agents to reach equilibrium, in the sense that most learning dynamics exhibit a monotonicity that leads them close to equilibria (Mathevet, 2008). The theory helps explain the literature surveyed by Jackson (as cited in Mathevet, 2008, p. 41) on mechanisms and the way players behave.

Groves-Ledyard: A supermodular mechanism

There is strong experimental evidence for using supermodularity as a stability criterion. According to Chen (2002), the mechanisms that converge to Nash equilibrium in experiments satisfy the requirements of a supermodular game. One example is the Groves-Ledyard mechanism. The mechanisms of Hurwicz, Walker and Kim (as cited in Chen, 2002) are not supermodular. Supermodular games have strong dynamic stability features (Milgrom & Roberts as cited in Chen, 2002).

The Groves-Ledyard (GL) mechanism is used to determine the amount of public good to be provided and the share of the cost each participant must bear (Swarthout & Walker, 2009). A benefit of GL is that it is a solution to the "free-rider" problem (Healy & Jain, 2016; Chen & Plott, 2001). GL is a decentralized system coupled with a taxation scheme, and the resulting allocation is Pareto optimal. According to Chen and Plott (2001), the allocation-taxation system gives consumers incentives to reveal their true preferences, since doing so yields the benefits they care about.

In an example economy provided by Swarthout and Walker (2009), the players have quasi-linear utility functions and the public good is produced at constant marginal cost. The mechanism has a Pareto optimal equilibrium when the action space is continuous; but when the actions are discrete, the result is multiple equilibria, in the terminology of economics research a supermodular non-cooperative game. Whether the outcome is Pareto optimal depends on whether the action spaces are continuous (Milgrom & Roberts as cited in Swarthout & Walker, 2009). There is no efficient outcome when the action spaces are coarse, which means that predictions might go wrong once a discrete implementation is used. An example of GL is provided below.

Two goods are denoted X and Y, with quantities x and y. X is a public good that can be produced using Y as an input: c units of Y are required to provide one unit of X. There are N participants, indexed by i, and each participant's preferences are represented by the equation:

ui(x, yi) = aix − bix² − yi,

wherein:

yi is the amount of the Y good contributed to the production of X by participant i.

To cover the cost cx, the values of x and y1, …, yN must satisfy the following condition:

Equation

The assumption here is that no consumer exceeds his prescribed endowment; that is, there is enough of the Y good for each person. Pareto efficiency is characterised by these two equations:

Equations

In this situation, the Groves-Ledyard mechanism tells us the levels of x and y1, …, yN: each participant submits a vote or message mi ∈ R, and the mechanism proceeds with

Equations

In the equations above, γ > 0 is an exogenous parameter; additionally, µi and σ²i are defined as:

Equations

The Groves-Ledyard mechanism has been proven to be Pareto efficient. In the economies just described, with "the quasi-linear utility functions and a linear cost function" (Swarthout & Walker, 2009), equations 3 and 4 imply equation 2, and for any value of the parameter γ there is a unique Nash equilibrium satisfying equation 1, which is Pareto optimal (Swarthout & Walker, 2009).
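The outcome rule can also be sketched in code. The following uses the Groves-Ledyard tax formula as it is usually stated in the literature, in which each tax combines an equal cost share with a quadratic penalty based on the mean and sample variance of the other participants' messages; Swarthout and Walker's (2009) exact parameterisation may differ:

```python
# Hedged sketch of the Groves-Ledyard outcome rule (standard textbook form,
# not necessarily Swarthout & Walker's exact notation). Requires N >= 3 so
# that the variance of the other participants' messages is defined.

from statistics import mean, variance
from typing import List, Tuple

def groves_ledyard(m: List[float], c: float, gamma: float) -> Tuple[float, List[float]]:
    """Return the public good level x(m) and the list of individual taxes."""
    n = len(m)
    x = sum(m)                           # public good level
    taxes = []
    for i in range(n):
        others = m[:i] + m[i + 1:]
        mu = mean(others)                # mean of the other participants' messages
        var = variance(others)           # sample variance of the other participants' messages
        tax = (c / n) * x + (gamma / 2) * (((n - 1) / n) * (m[i] - mu) ** 2 - var)
        taxes.append(tax)
    return x, taxes

# Example: four participants, cost c = 4 per unit of the public good, gamma = 1
x, taxes = groves_ledyard([3.0, 2.0, 1.0, 2.0], c=4.0, gamma=1.0)
print(x)                        # 8.0
print(round(sum(taxes), 6))     # 32.0 = c * x: the budget balances at any message profile
```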

Pareto efficiency

Pareto efficiency is a central social welfare criterion in general equilibrium theory and in the theory of economic mechanism design. Under the regular definition, an allocation is Pareto efficient if no feasible reallocation can make some agents better off without making anyone worse off; under the weak definition, an allocation is weakly Pareto efficient if no feasible reallocation can make every agent strictly better off (Tian, 1987). These two concepts of Pareto efficiency coincide only when preferences are continuous and strictly monotonically increasing. For some economies with public goods this equivalence may fail, whereas for economies without public goods it holds (Tian, 1987).

A probable reason is the following. Consider a feasible allocation in which one agent holds a nonzero amount of the private good while the other agents hold none. Because of his preferences, the agent holding the private good may be unable to gain or lose utility from small changes in his mix of private and public goods, so no reallocation makes every agent strictly better off and the allocation is weakly Pareto efficient. But if some private good is taken from this agent to produce more public good, the rest become better off than before because there is more public good, so the allocation is not Pareto efficient. Accordingly, a weakly Pareto efficient allocation need not be Pareto efficient in economies with public goods.

Moreover, since the Walrasian and Lindahl correspondences are not monotone when boundary solutions occur, they cannot be implemented in Nash equilibrium according to Maskin's results (as cited in Tian, 1987). Some economists therefore consider designs based on the Walrasian correspondence that aim at Pareto-efficient allocations. Hurwicz (as cited in Tian, 1987) regards the Lindahl correspondence as monotone.

Thomson and Tian (1987) showed that the constrained Walrasian correspondence and the constrained Lindahl correspondence are "not Pareto efficient even if preferences are locally non-satiated" (p. 23). However, the two correspondences are Pareto efficient when preferences are strictly monotonically increasing. Tian (1987) states that it is still safe to say that the two correspondences are weakly Pareto efficient in a wider context, and individually rational. For private goods economies, the Pareto-efficient and weakly Pareto-efficient correspondences coincide when preferences are strictly monotonically increasing, but they do not coincide for public goods economies even under the same condition (Tian, 1987).

An example bearing on Pareto-efficient allocation can be drawn from fisheries economics. Sever's study (as cited in Tian, 1987) recognises technical and allocative inefficiency in the harvesting sector, even when the highly perishable fish is stabilised into a finished or intermediate product form. Because fish are perishable, there is a race to process them into finished products. The choice of policy instrument used to internalise the externality should reflect its full scope if complete welfare effects are to be addressed; that is, economists are to advise policy makers of "… the full array of beneficial and harmful effects" (Bromley as cited in Tian, 1987) of switching to individual transferable fishing quota (ITQ) management.

Unlike other regulatory change that intentionally creates winners and losers (e.g. public utility regulation), there has never been any analysis that suggests the processing sector is responsible for open access efficiency losses in the harvesting sector, nor has there been a collective, political assessment that processors should suffer as a result of switching to ITQs. And there has never been any analysis to determine whether this wealth transfer is avoidable in achieving the public interest in efficient fisheries (Sever as cited in Tian, 1987).

The importance of this kind of wealth redistribution problem was recognised by the U.S. Congress. The Magnuson-Stevens Act (as cited in Tian, 1987) imposed a four-year moratorium on new ITQs, during which time the National Academy of Sciences was to prepare a report on ITQ management for Congress. Concerns about processing quotas and equity call for a broader discussion of commercial fisheries. For example, there can be interdependence between the two principal constituents who invest in open access fisheries, namely vessel owners and processors.

Equity considerations for hired skippers and crew members who work on a share basis arise through their claims on the prospective rents earned by the fishing vessels, which are separable from the pseudo-rents earned by processors (Sever as cited in Tian, 1987).

A harvester-only allocation can be reasonable, because it does not needlessly strand harvesting or processing assets or redistribute "status quo ante" wealth, if either of two simple but essential assumptions holds. First, all factors of production must be perfectly flexible, meaning that they are not only mobile but can earn an identical economic return elsewhere. This assumption guarantees that no factor of production can be stranded by switching property institutions from open access to ITQ management.

Next, even if productive inputs were not perfectly flexible, the conventional allocation of rights only to fishers would be equitable if a fishery consisted entirely of vertically integrated fishers and processors. It would then not matter whether rights were given to fishers or to processors, because they would be one and the same entity (Sever as cited in Tian, 1987). These assumptions appear in the economics literature.

Sever's paper (as cited in Tian, 1987) investigated the efficiency and distributional implications of alternative rights assignments intended to internalise the overcapitalisation externality affecting fishers and processors. There are two methods of designing policy instruments that avoid redistributing status quo ante wealth while pursuing efficiency in both sectors. One method is to ensure that affected parties are fully compensated. However, this method may not be politically feasible: the agents involved would have an opportunity to over-report their damage, and if compensation is to be paid out of the ITQ-created rent gains, the gainers will have an incentive to under-report any gains.

Even if a direct compensation policy were possible, the processing sector would have to believe that the policy would ensure prompt compensation. Another approach is to secure political agreement by avoiding needless wealth redistribution in the first place; this would require designing a Pareto-safe initial allocation that leaves no fisher or processor worse off while permitting efficiency gains from inter- or intra-sector quota trade (Tian, 1987).

Pareto Optimal

In a public good economy, a non-excludable good can be enjoyed by more than one consumer (Van Essen, 2010). In Van Essen's (2010) study, the following formulation characterises the Pareto optimum.

Formula

These data could be used for the constrained maximisation problem, and the Lagrangian is as follows:

Formula

Applying the Kuhn-Tucker conditions yields the Pareto efficiency condition:

Formula
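A sketch of the reasoning behind these formulas, written for a two-consumer economy with endowments ω1 and ω2 and a cost function c(x) (the exact functional forms in Van Essen, 2010 may differ), is as follows:

```latex
\max_{x,\,y_1,\,y_2} \; u_1(x, y_1)
\quad \text{s.t.} \quad u_2(x, y_2) \ge \bar{u}_2 ,
\qquad y_1 + y_2 + c(x) \le \omega_1 + \omega_2 .

\mathcal{L} \;=\; u_1(x, y_1) \;+\; \lambda \bigl[ u_2(x, y_2) - \bar{u}_2 \bigr]
            \;+\; \mu \bigl[ \omega_1 + \omega_2 - y_1 - y_2 - c(x) \bigr] .

\text{The Kuhn-Tucker conditions at an interior solution give} \quad
\frac{\partial u_1 / \partial x}{\partial u_1 / \partial y_1}
\;+\; \frac{\partial u_2 / \partial x}{\partial u_2 / \partial y_2}
\;=\; c'(x) ,
\qquad \text{i.e.} \quad \sum_i MRS_i = MC ,
```

which is the Samuelson marginal condition used in the earlier examples.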

Van Essen (2012) described the "incentive compatible mechanism". A mechanism can be a plan of action that specifies the consumers' actions in the form of messages and maps the messages into an outcome, or allocation (Van Essen, 2012). An example follows.

Two mechanisms are considered. Consumer i has a message space Mi = R² with generic element mi = (ri, si). Van Essen (2012) denotes the profile ((r1, s1), …, (rN, sN)) by (r, s), regarding it as an arbitrary strategy profile. Each mechanism chooses the public good level x with a particular outcome function and a tax (the Clarke tax) that depends on the function τi(r, s).

The outcome determines consumer i's consumption. Two outcome functions, the Chen (CH) and the "paired-statement" (PS) mechanisms, are examined in Van Essen (2012, p. 17), with the following formula for Chen:

Formula

The paired-statement (PS) mechanism has the following formula (or outcome functions) (Van Essen, 2012, p. 19):

Formula

The CH and PS mechanisms described above induce games. The actions, or messages, of one consumer affect the payoffs of the others, and each consumer i seeks to maximise his or her utility ui. Van Essen (2012) indicates that the equilibrium outcome of the induced game coincides with a Lindahl equilibrium allocation.

Free-rider problem

When agents under-report their valuations of public goods, the outcome is under-provision of those goods, and the free-rider problem arises (Samejima, 2004). Under-reporting is individually advantageous under Lindahl pricing, but it becomes a problem precisely because of this collective outcome. To address it, economists design mechanisms that implement Pareto efficient allocations.

Why does non-participation occur? Saijo and Yamato (as cited in Samejima, 2004) showed that, among the various Lindahl allocations, there exists a preference profile under which agents prefer not to participate. Most studies simply assume that agents take part in a proposed mechanism. This creates a problem for the mechanism, because agents may in fact decline to participate if they are free to do so; that is, they have the option to free-ride (Samejima, 2004).

Saijo and Yamato illustrated this with a two-stage game. In the first stage, the agents simultaneously decide whether to participate; in the second stage, a public good provision mechanism is played. The authors find that, for every attainable Pareto efficient allocation, there is a preference profile under which some agent prefers not to participate, and this non-participation incentive arises in the second-stage mechanism. Samejima's (2004) analysis departs from Saijo and Yamato's by proposing redesigned mechanisms that address the non-participation problem.

First, instead of having agents decide simultaneously whether to participate, Samejima (2004) considers a sequential participation procedure in which a player is given the chance to reconsider his/her decision not to participate. In other words, the procedure treats participation and non-participation asymmetrically: a non-participation decision is reversible, whereas a participation decision is not (Samejima, 2004, p. 32). The irreversibility makes the participants' commitments firm, while the sequential procedure entices the non-participating players to join.

Second, Samejima's (2004) mechanisms provide public goods only when agents commit to participate, whereas Saijo and Yamato's (as cited in Samejima, 2004) mechanisms admit players who prefer not to participate. Samejima's mechanisms do not allow participants to produce public goods while any agent remains a non-participant. This presumes that the mechanism can control the players' resources, which is standard practice given that the decision to participate is voluntary (Samejima, 2004).

Samejima (2004) further explains that his mechanisms differ from the unanimity mechanism described by Saijo and Yamato (as cited in Samejima, 2004), which prevents society from producing public goods whenever even one agent does not participate. That unanimity mechanism is feasible only if the planner can completely prevent the agents from producing public goods on their own, for example by controlling access to the production technology. Samejima's mechanisms, by contrast, only control the resources of the participants.

Samejima's (2004) mechanism has two phases. In the first phase there are initially no participants. Agents are asked whether they will participate, and those who do not respond are treated as non-participants. If nobody participates, the planner stops the inquiry. If some agents decide to participate, the planner restarts the inquiry and asks the non-participants to reconsider. The inquiry continues until all agents in a row choose to participate, or choose not to. Because the planner controls the production of public goods, he/she prohibits the participants from producing public goods while some players remain outside, and at that point the inquiry stops. If all agents participate, the second phase begins, and it achieves a Lindahl allocation (Samejima, 2004).
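A minimal procedural sketch of this first-phase inquiry is given below. The agents' answers are supplied as a hypothetical script purely to show how the loop terminates; in the mechanism itself they would follow from preferences and the subgame-perfect reasoning described above.

```python
# Sketch of a Samejima-style phase-1 inquiry (illustrative assumptions only).

def phase_one(scripted_answers, n_agents):
    """Ask non-participants in turn; 'yes' is irreversible, 'no' can be revisited.
    The inquiry stops once a full round passes with no change of decision."""
    participants = set()
    answers = iter(scripted_answers)  # hypothetical yes/no script
    unchanged = 0                     # consecutive turns without a new participant
    agent = 0
    while unchanged < n_agents:
        if agent in participants:     # participation cannot be withdrawn
            unchanged += 1
        elif next(answers):           # non-participant agrees to join
            participants.add(agent)
            unchanged = 0
        else:                         # non-participant declines, may be asked again
            unchanged += 1
        agent = (agent + 1) % n_agents
    return participants

# Example run: agent 2 commits first, after which agents 0 and 1 follow.
print(phase_one([False, False, True, True, True], n_agents=3))  # -> {0, 1, 2}
```

Only when the returned set contains every agent does the second phase, the Lindahl mechanism itself, begin; otherwise the participants are barred from producing the public good.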

Samejima's (2004) theorem states that the two-phase mechanism secures the participation of all agents "in all subgame-perfect equilibria" (p. 34), and the order in which agents are asked does not affect the outcome of the second phase. The mechanism implementing Lindahl allocations is then used in that second phase. The Lindahl allocation should be acceptable to all participants because it lies in the core (Foley as cited in Samejima, 2004), and so in equilibrium all agents choose to participate.

Samejima's three-agent example

To illustrate non-participation, Samejima (2004) provides an example together with a proposed solution, involving a "non-excludable indivisible public good" costing 60 (p. 35). Each agent values the public good at 40. According to the author, in the absence of a mechanism agents can usually cooperate without bargaining costs and produce the public good efficiently, but the agents differ in bargaining power. In this example, agents 1 and 2 have equal bargaining power outside the mechanism, whereas agent 3 has weaker bargaining power and has to pay more.

Under Pareto efficiency, the three agents participate and share the cost equally, each obtaining a payoff of 20. The problem arises when one of them does not participate: the non-participant obtains a payoff of 40, since he can free-ride on the good produced by the other two agents. Simultaneous participation by all three is therefore not an equilibrium. Participation of all agents can be an equilibrium only if they decide in a particular sequence and if the mechanism provides the public good only when all three choose to participate (Samejima, 2004). This sequential procedure is presented in figure 1.

The logic of the three-agent example is depicted in the diagram in figure 1. In phase 1, all agents begin as non-participants. The planner then asks the agents whether they will participate. As soon as one of them chooses to participate, the planner moves on to the next round, for example round 2, in which the remaining two agents are asked again whether they will participate.

If all three decide to participate, the mechanism is used to provide the public good. But if someone does not participate, the participants are not allowed to contribute to the public good through the mechanism, although the agents remain free to produce it outside the mechanism. In the example, agent 3 is presumed to have weak bargaining power and to pay more than the other two outside the mechanism. Considering his/her position, agent 2 also has to participate.

In the next round, the two remaining agents may both stay out, or at least one of them may decide to participate. If neither participates, each ends up with a payoff of 10, since the two must divide the cost of the good between them. If one of them participates, his/her cost is 20, because agent 2 will join in round 3. Either way, an equilibrium is available to them. Note that at the start of round 2 no agent has yet chosen to participate. As the figure shows, it is only when agent 3 is asked about participation that he/she decides to join; agent 3 reads the situation correctly and participates only when the others are also going to participate.

Figure 1. Diagram of the three-agent example (Samejima, 2004).

In round 2, only one agent is a participant, while agents 1 and 2 prefer not to participate. If neither of them joins, each receives a payoff of only 10, because, without agent 3, they must produce the public good outside the mechanism and share its cost between the two of them. But if agent 1 participates, he/she pays 20, given that agent 2 joins in round 3. One of the two therefore participates in equilibrium. Because the agents decide sequentially, the symmetric Pareto efficient allocation is achieved as a subgame-perfect outcome, since it lies in the core (Samejima, 2004).
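A quick payoff check of this example is sketched below, using only the figures given above (cost 60, valuation 40 per agent); the assumption that the two producers split the cost equally outside the mechanism follows the narrative rather than a formal bargaining model.

```python
# Payoff arithmetic for the three-agent example: indivisible public good,
# cost 60, each agent's valuation 40 (figures from the example above).

COST, VALUE, N = 60, 40, 3

# All three participate in the mechanism and split the cost equally.
payoff_all_in = VALUE - COST / N              # 40 - 20 = 20 each

# One agent free-rides; the other two produce the good and split the cost.
payoff_free_rider = VALUE                     # 40, pays nothing
payoff_producers = VALUE - COST / (N - 1)     # 40 - 30 = 10 each

print(f"All participate:   {payoff_all_in:.0f} each")
print(f"Free-rider:        {payoff_free_rider:.0f}")
print(f"Producing pair:    {payoff_producers:.0f} each")

# 40 > 20: each agent prefers to free-ride if the others will still produce,
# so simultaneous participation is not an equilibrium. In the sequential
# procedure, refusing while the others commit yields 10 < 20, so joining
# becomes the better reply, which is the point of the two-phase design.
```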

Local Lindahl equilibrium

Duncan Foley (as cited in Johnson, 2000) studied the existence of a Lindahl equilibrium in a pure public-goods economy by devising a general technique. He constructed an artificial, extended economy containing only private goods, treating each consumer's consumption of each public good as a distinct private good. Foley then applied Gerard Debreu's (as cited in Johnson, 2000) theorem to show the existence of an equilibrium in the expanded economy and proved that it corresponds to an equilibrium in the original economy. Johnson (2000), however, provides a local existence theorem different from Foley's, requiring that individual endowments be semi-positive and expanding the original economy in a significantly different way, in order to capture the local nature of the public goods.

Johnson (2000) proves existence as follows. He constructs a large private-goods economy from the original economy, establishes a one-to-one correspondence between feasible states in the two economies, and shows that every local equilibrium in the expanded economy yields a local Lindahl equilibrium in the original economy. A consumption for consumer i is extended in two steps. First, (J – 1)M terms are added that correspond to i's consumption of public goods in the jurisdictions where he does not live. In the original economy i cannot consume these goods at all; in the expanded economy i can consume them but derives no utility from them (Johnson, 2000).

Treating non-resident consumptions in this way ensures that no consumer is left in a minimum-wealth situation. Johnson then adds (I – 1)MJ zeros reflecting i's zero consumption of the other consumers' public goods in all J jurisdictions. A consumption for i in the expanded economy is thus an (NJ + MJI) vector, related to a consumption (xi, yi) in the original economy (Johnson, 2000); its private-goods component is xi1 = xi, an NJ vector. A local equilibrium L1, given a fixed consumer assignment, can be defined in E1 as a locally feasible allocation, and in the original economy the consumption of public goods corresponds to all the public goods produced for the community (Johnson, 2000).
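Schematically, and using notation introduced here rather than Johnson's own, the expanded consumption can be read as a block vector

\[ x_i^{1} = \big( x_i,\; y_i^{\text{res}},\; 0_{(J-1)M},\; 0_{(I-1)MJ} \big) \in \mathbb{R}^{NJ + MJI} , \]

where $x_i$ is the NJ vector of private goods, $y_i^{\text{res}}$ is i's consumption of the M public goods in his own jurisdiction, the next block holds the (J – 1)M non-resident public good entries that yield no utility, and the final block holds the (I – 1)MJ zeros for the other consumers' public goods.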

Robust virtual implementation

Robust implementation theory is not only concerned with mechanisms that enforce a social choice function; static robust mechanisms also rely on the agents' rationality, and in dynamic mechanisms agents commonly hold heterogeneous Bayesian beliefs. Bergemann and Morris (2009) show that a social choice function that is robustly implementable in static mechanisms must be "robustly monotone," provided the planner allows infinite static mechanisms, outcomes are lotteries over a countable set of pure outcomes, agents have von Neumann-Morgenstern utility functions, and a conditional no-total-indifference condition holds.

According to Müller (2010), robust monotonicity is not a requirement for robust implementation in dynamic mechanisms, even though it implies "a stronger version of ex-post incentive compatibility called semi-strict ex-post incentive compatibility" (p. 17). What is required is ex post incentive compatibility, under which each agent has an incentive to report his own type truthfully when the others report theirs truthfully. Ex post incentive compatibility delivers the preferred equilibria, but a further restriction on the social choice function is needed to rule out the remaining unwanted equilibria (Bergemann & Morris, 2009). Dynamic mechanisms can help to robustly implement monotone social choice functions even when lotteries are unavailable.
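For reference, the standard statement of ex post incentive compatibility, written here in common notation rather than necessarily that of Bergemann and Morris, requires truth-telling to be a best response at every type profile:

\[ u_i\big( f(\theta_i, \theta_{-i}), \theta \big) \;\ge\; u_i\big( f(\theta_i', \theta_{-i}), \theta \big) \quad \text{for all } i, \text{ all } \theta = (\theta_i, \theta_{-i}), \text{ and all } \theta_i' , \]

so that no agent gains from misreporting even if he knew the other agents' true types.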

Müller (2010) argues that Bergemann and Morris's (2009) condition fails in this situation, adding that in an environment without lotteries a researcher can "present a strictly monotone and robustly measurable social choice function that fails to satisfy this condition and therefore is not rs-implementable" (p. 17). This social choice function is nevertheless robustly implementable in a dynamic mechanism.

Bergemann and Morris's (2009) contribution is an analysis of which strategies can be easily distinguished. The authors identify a condition, robust monotonicity, that is critical to the understanding of robust implementation in static mechanisms.

A social choice function that qualifies for rs-implementation is "strictly robustly monotone" (Bergemann & Morris, 2009, p. 50). Rs-implementation can be attained through robust monotonicity, together with a conditional no-total-indifference condition, if the outcome space comprises lotteries over a set of pure outcomes and countable static mechanisms are admitted. Bergemann and Morris (2009) further show that a robustly monotone social choice function must be "semi-strict ex-post incentive compatible" (p. 51).

An example is presented in the following equation.

Equation

The social choice function f is "semi-strict ex-post incentive compatible" (Müller, 2010, p. 22). The example then presents a social choice function that is not semi-strict ex-post incentive compatible but can nonetheless be strongly rs-implemented, and it further shows that this function is not robustly monotone.

Conclusion

The robustness, efficiency, and validity of mechanisms were examined through the experiments and tests reported in this review of the public good literature. This paper presented example mechanisms that can be considered stable and Pareto optimal in different environments, both in and out of equilibrium. The study used a literature review as its methodology, but of an interpretive kind. By focusing on particular studies in the literature, the paper provided an analysis of specific aspects of economics and public goods. Specifically, it conducted an interpretive analysis in order to draw new meaning from past studies of public goods and the economics literature. First, we focused on Lindahl equilibrium allocations that can be used in public good consumption and economies. The different mechanisms, together with their robustness and efficiency, were also discussed.

The contractive mechanism is a concept of dynamic stability. Healy and Mathevet (2012) call a mechanism contractive if, in any environment, it induces a game whose best-response functions form a contraction mapping. In Nash implementation, stability has long been a problem (Healy & Mathevet, 2012). Groves-Ledyard is an optimal public good mechanism, but it becomes unstable if its punishment parameter is small, so its equilibrium is hard to attain (Healy & Mathevet, 2012).
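In standard terms, and as a reconstruction rather than Healy and Mathevet's exact formulation, a best-response mapping $\beta$ on the joint message space is a contraction if there is some $k < 1$ such that

\[ \lVert \beta(m) - \beta(m') \rVert \;\le\; k \,\lVert m - m' \rVert \quad \text{for all message profiles } m, m' . \]

By the Banach fixed-point theorem, iterated best responses then converge to the unique Nash equilibrium from any starting point, which is the sense in which contractive mechanisms are dynamically stable.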

On the other hand, the mechanism becomes supermodular in quasilinear environments when the punishment parameter is larger, although the strategy-space condition normally required for supermodularity results is not satisfied, which leaves its stability unclear. Notwithstanding its stability features, a primary disadvantage of the Groves-Ledyard mechanism is its lack of individual rationality: participants can end up worse off than at their original endowments (Healy & Mathevet, 2012). Hurwicz shows that Walrasian and Lindahl equilibrium mechanisms can yield Pareto optimal and individually rational outcomes in economic environments, even under only a mild continuity requirement (Van Essen, 2010; Tian, 2010).

In the context of the Nash-Lindahl mechanisms emphasised in the studies of Hurwicz and Walker, Healy and Mathevet (2012) criticise the mechanisms for poor stability properties that result in poor performance. Researchers such as Vega-Redondo, de Trenqualye, and Kim (as cited in Chen et al., 2013) designed Nash-Lindahl mechanisms that are stable, but they noted several restrictions on preferences. The designs of Kim and Jordan (as cited in Healy & Mathevet, 2012) also have stability results, again with restrictions.

Chen and Plott (2001) performed controlled laboratory tests of Nash-Lindahl allocations and found that supermodularity can be used to reach the Nash equilibrium. Chen (2002) provides a family of supermodular mechanisms implementing Lindahl allocations, but their strategy spaces are not continuous.

Van Essen (as cited in Healy & Mathevet, 2012) indicates that supermodularity with unbounded strategy spaces can be unstable. He introduced a new incentive compatible mechanism whose Nash equilibria yield Lindahl allocations. Van Essen (2010, p. 107) concedes that an "incentive compatible Lindahl mechanism" that induces a supermodular game does not automatically have a stable Nash equilibrium, and he recommends requiring the best-response mapping to be a contraction as a way to provide stability; inducing a supermodular game is not by itself enough to ensure that the mapping is a contraction.

Van Essen's (2010) study also suggests directions for future research. He studied two-dimensional stable Lindahl mechanisms that exist in quasi-linear preference environments, and he notes that, although the results for quasi-linear environments are significant, the maximal preference domain over which such stable mechanisms exist is unknown. Further research on Lindahl and Walrasian contractive mechanisms is therefore needed.

This subject is being explored by Healy and Mathevet (2012), who present contractive mechanisms that induce games in which a wide range of learning rules converge to equilibrium. More tests of implementation theory are also needed, as empirical studies and the literature on the subject remain limited. The studies reviewed in this dissertation drew on the experiments of established authors and researchers who have given us better ways to deal with mechanism characteristics.

References

Bergemann, D., & Morris, S. (2009). Robust virtual implementation. Theoretical Economics, 4(1), 45-88.

Bochet, O. (2004). Implementation and experimental approaches to efficiency and market equilibrium. Web.

Börgers, T. (2001). On the relevance of learning and evolution to economic theory. The Economic Journal, 106(438), 1374-1385.

Čábelka, S. (2001). Four essays on boundedly rational learning. Web.

Chen, H., Chow, Y., & Wu, L. (2013). Imitation, local interaction, and coordination. International Journal of Game Theory, 42(4), 1041-1057.

Chen, Y. (2002). A family of supermodular Nash mechanisms implementing Lindahl allocations. Economic Theory, 19(1), 773-790.

Chen, Y., & Plott, C. (2001). The Groves-Ledyard mechanism: An experimental study of institutional design. Journal of Public Economics, 59(1), 335-364.

Clark, M. (2004). The virtuous discourse of Adam Smith: The political economist's measured words on public policy. Web.

Dai, Y. (2003). Game theoretic approach to supply chain management. Web.

Goyal, S., & Gupta, Y. (1989). Integrated inventory models: The buyer-vendor coordination. European Journal of Operational Research, 41(1), 261-269. Web.

Hayes, M. (2008). Macroeconomics and the composition of modern economic government: towards a critical sociology of economics. Web.

Healy, P., & Jain, R. (2016). Generalized Groves-Ledyard mechanisms. Games and Economic Behavior, 101(1), 204-217.

Healy, P., & Mathevet, L. (2012). Designing stable mechanisms for economic environments. Theoretical Economics, 7(3).

Hurwicz, L., & Marschak, T. (2001). Comparing finite mechanisms. Economic Theory, 21(4), 783-842.

Johnson, M. (2000). A pure theory of local public goods. Web.

Klastorin, T., Moinzadeh, K., & Son, J. (2002). Coordinating orders in supply chains through price discounts. IIE Transactions, 34(1), 679-689. Web.

Mathevet, L. (2008). Selection, learning, and nomination: Essays on supermodular games, design, and political theory. Web.

Mathevet, L. (2010). Supermodular mechanism design. Theoretical Economics, 5(3), 403-443.

Mueller, P. (2013). Learning from Adam Smith: Propriety in individual choice, moral judgment, and politics. Web.

Müller, C. (2010). Robust implementation in dynamic mechanisms. Web.

Samejima, Y. (2004). Essays on social choice and mechanism design. Web.

Schipper, B. (2004). Submodularity and the evolution of Walrasian behaviour. International Journal of Game Theory, 32(4), 471-477.

Swarthout, J., & Walker, M. (2009). Discrete implementation of the Groves-Ledyard mechanism. Review of Economic Design, 13(1/2), 101-114.

Tian, G. (1987). Nash-implementation of social choice correspondences by completely feasible continuous outcome functions. Web.

Van Essen, M. (2010). Implementing Lindahl allocations – Incorporating experimental observations into mechanism design theory. Web.

Van Essen, M. (2012). Information complexity, punishment, and stability in two Nash efficient Lindahl mechanisms. Review of Economic Design, 16(1), 15-40.

Van Essen, M. (2014). A Clarke tax tâtonnement that converges to the Lindahl allocation. Social Choice & Welfare, 43(2), 309-327.
