ECO1C01


MICROECONOMICS:

THEORY AND APPLICATIONS-I


(ECO1C01)

STUDY MATERIAL
I SEMESTER
CORE COURSE
MA ECONOMICS
(2019 Admission onwards)

UNIVERSITY OF CALICUT
SCHOOL OF DISTANCE EDUCATION
CALICUT UNIVERSITY P.O.
MALAPPURAM - 673 635, KERALA

ECO1C01 - MICROECONOMICS: THEORY AND APPLICATIONS-I


SCHOOL OF DISTANCE EDUCATION


UNIVERSITY OF CALICUT

STUDY MATERIAL
FIRST SEMESTER

MA ECONOMICS (2019 ADMISSION ONWARDS)


CORE COURSE:
ECO1C01 : MICROECONOMICS:
THEORY AND APPLICATIONS-I

Prepared by:

Dr. SHIJI O
Assistant Professor on Contract (Economics)
School of Distance Education
University of Calicut

Scrutinized by:
Dr. SITARA V. ATTOKKARAN
Assistant Professor
Department of Economics
Vimala College, Thrissur


CONTENTS

Module I     Consumer Behaviour under Uncertainty and Risk      9
Module II    Market Demand for Commodities                     57
Module III   Theory of Production and Costs                    87
Module IV    Theory of Imperfect Markets                      151
Module V     Theory of Games                                  201


Syllabus

Module I : Consumer Behaviour under Uncertainty and Risk
Choice under uncertainty - Representing uncertainty by
Probability distributions - Expected Value and Variability -
Maximising expected utility - Fair gambles and expected
utility hypothesis - St. Petersburg paradox - Neumann-
Morgenstern utility index - Friedman Savage hypothesis -
Markowitz hypothesis - Utility functions and attitudes towards
risk - risk neutrality, risk aversion, risk preference, certainty
equivalent, demand for risky assets - reducing risks -
diversification, insurance, flexibility, information - The state
preference approach to choice under uncertainty.
Module II : Market Demand for Commodities
Deriving market demand - Network externalities - Bandwagon
effect, Snob effect and Veblen effect - Empirical estimation of
demand - Linear demand curve, Constant elasticity demand
function - Dynamic versions of demand functions - Nerlove,
Houthakker and Taylor - Linear expenditure system-
Characteristic approach to demand function.
Module III : Theory of Production and Costs
Short run and long run production function- returns to scale -
elasticity of substitution - Homogeneous production function -
Linear homogeneous production function - Fixed proportion
production function - Cobb Douglas production function and
CES production function - Technological progress and

production function - Cost function - Cost minimising input
choices - properties of cost functions - Economies of scope -
The Learning curve – Estimating and Predicting cost - Short
run and long run distinction.
Module IV : Theory of Imperfect Markets
Oligopoly - Characteristics - Collusive versus non-collusive
oligopoly - Non-collusive models - Cournot model - Bertrand
model - Chamberlin’s model - Kinked demand curve model of
Sweezy - Stackelberg’s model - Welfare properties of
duopolistic markets - Collusive models - Cartels and Price
leadership.
Module V : Theory of Games
Basic concepts - Cooperative versus non-cooperative game -
Zero sum versus non-zero sum game - Prisoner’s dilemma -
Dominant strategies - Nash equilibrium - Prisoner’s dilemma -
Pure strategies - Mixed strategies - repeated games -
Sequential games - Threats, commitments and credibility


MODULE I
CONSUMER BEHAVIOUR UNDER
UNCERTAINTY AND RISK

Choice under uncertainty


We start by assuming that the outcomes of any
random event can be categorized into a number of states of
the world. We cannot predict exactly what will happen, say,
tomorrow, but we assume that it is possible to categorize all of
the possible things that might happen into a fixed number of
well-defined states. For example, we might make the very
crude approximation of saying that the world will be in only
one of two possible states tomorrow: It will be either “good
times” or “bad times.” One could make a much finer
gradation of states of the world (involving even millions of
possible states), but most of the essentials of the theory can
be developed using only two states. A conceptual idea that
can be developed concurrently with the notion of states of
the world is that of contingent commodities. These are goods
delivered only if a particular state of the world occurs. As
an example, “Rs.1 in good times” is a contingent commodity
that promises the individual Rs.1 in good times but nothing
should tomorrow turn out to be bad times. It is even
possible, by stretching one’s intuitive ability somewhat, to
conceive of being able to purchase this commodity: I might
be able to buy from someone the promise of Rs.1 if

tomorrow turns out to be good times. Because tomorrow
could be bad, this good will probably sell for less than Rs.1.
If someone were also willing to sell me the contingent
commodity “Rs.1 in bad times,” then I could assure myself
of having Rs.1 tomorrow by buying the two contingent
commodities “Rs.1 in good times” and “Rs.1 in bad times.”
Utility analysis
Examining utility-maximizing choices among
contingent commodities proceeds formally in much the same
way we analyzed choices previously. The principal difference
is that, after the fact, a person will have obtained only one
contingent good (depending on whether it turns out to be good
or bad times). Before the uncertainty is resolved, however, the
individual has two contingent goods from which to choose
and will probably buy some of each because he or she does
not know which state will occur. We denote these two
contingent goods by Wg (wealth in good times) and Wb
(wealth in bad times). Assuming that utility is independent
of which state occurs and that this individual believes
that good times will occur with probability π, the expected
utility associated with these two contingent goods is
V (Wg, Wb) = πU (Wg)+(1- π) U (Wb).
This is the magnitude this individual seeks to maximize
given his or her initial wealth, W.
Prices of contingent commodities
Assuming that this person can purchase a rupee of

wealth in good times for Pg and a rupee of wealth in bad
times for Pb, his or her budget constraint is then
W=PgWg + PbWb
The price ratio Pg/Pb shows how this person can
trade rupees of wealth in good times for rupees in bad
times. If, for example, Pg = 0.80 and Pb = 0.20, the sacrifice of
Rs.1 of wealth in good times would permit this person to buy
contingent claims yielding Rs.4 of wealth should times turn
out to be bad. Whether such a trade would improve utility
will, of course, depend on the specifics of the situation. But
looking at problems involving uncertainty as situations in
which various contingent claims are traded is the key insight
offered by the state preference model.
Fair markets for contingent goods
If markets for contingent wealth claims are well
developed and there is general agreement about the
likelihood of good times (π), then prices for these claims will
be actuarially fair— that is, they will equal the underlying
probabilities:
Pg= π
Pb= (1- π)
Hence, the price ratio Pg/Pb will simply reflect the odds
in favour of good times:

Pg/Pb = π/(1- π)

In our previous example, if Pg = π = 0.8 and Pb = (1- π) = 0.2,
then Pg/Pb = 4. In this case the odds in favour of good times
would be stated as “4 to 1”. Fair markets for contingent claims
(such as insurance markets) will also reflect these odds. An
analogy is provided by the “odds” quoted in horse races. These
odds are “fair” when they reflect the true probabilities that
various horses will win.
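These fair-price relationships are easy to check numerically. A minimal Python sketch (illustrative only; the probability 0.8 is taken from the running example):

```python
# Actuarially fair prices of contingent claims equal the state
# probabilities, so the price ratio gives the odds in favour of good times.
pi = 0.8                 # probability of good times (from the example)
Pg, Pb = pi, 1 - pi      # fair prices of Rs.1 in good and bad times

odds = Pg / Pb           # odds in favour of good times: roughly 4, i.e. "4 to 1"
print(odds)

# Guaranteeing Rs.1 tomorrow means buying Rs.1 in each state;
# at fair prices this sure rupee costs exactly Rs.1 today.
cost = Pg * 1 + Pb * 1
print(cost)
```

Note that the cost of a sure rupee at fair prices is always Rs.1, since the prices sum to the total probability.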
Risk aversion
We are now in a position to show how risk aversion
is manifested in the state-preference model. Specifically, we
can show that, if contingent claims markets are fair, then a
utility maximizing individual will opt for a situation in
which Wg= Wb ; that is, he or she will arrange matters so
that the wealth ultimately obtained is the same no matter what
state occurs. Maximization of utility subject to a budget
constraint requires that this individual set the MRS of Wg for
Wb equal to the ratio of these “goods” prices:

MRS = [π U′(Wg)] / [(1- π) U′(Wb)] = Pg/Pb

In view of the assumption that markets for contingent
claims are fair (Pg = π, Pb = 1- π), this first-order condition
reduces to

U′(Wg) = U′(Wb), or Wg = Wb,

since marginal utility is strictly diminishing for a risk-averse
person.
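The result Wg = Wb can be verified numerically. The sketch below assumes, purely for illustration, logarithmic (risk-averse) utility, initial wealth W = 1, and π = 0.8; a grid search along the budget line finds the maximum where wealth is equalized across states:

```python
import math

# Numerical check of the risk-aversion result: at fair prices a
# risk-averse person equalizes wealth across states (Wg = Wb).
# Log utility and W = 1 are illustrative assumptions, not from the text.
pi, W = 0.8, 1.0
Pg, Pb = pi, 1 - pi                  # actuarially fair prices

def expected_utility(Wg):
    Wb = (W - Pg * Wg) / Pb          # bad-times wealth implied by the budget
    return pi * math.log(Wg) + (1 - pi) * math.log(Wb)

# Grid search over feasible Wg (the budget requires Wg < W/Pg = 1.25).
grid = [0.50 + 0.001 * k for k in range(1, 740)]
best_Wg = max(grid, key=expected_utility)
print(round(best_Wg, 3))             # the optimum sits at Wg = Wb = 1.0
```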

Uncertainty and Risk


So far, we have assumed that prices, incomes, and
other variables are known with certainty. However, many of
the choices that people make involve considerable
uncertainty. Most people, for example, borrow to finance
large purchases, such as a house or a college education, and
plan to pay for them out of future income. But for most of us,
future incomes are uncertain. Our earnings can go up or down;
we can be promoted or demoted, or even lose our jobs. And if
we delay buying a house or investing in a college education,
we face the risk of price increases that could make such
purchases less affordable. Therefore we must examine the ways
that people
can compare and choose among risky alternatives. We will do
this by taking the following steps:
1. In order to compare the riskiness of alternative choices, we
need to quantify risk.
2. We will examine people’s preferences toward risk. Most
people find risk undesirable, but some people find it more
undesirable than others.
3. We will see how people can sometimes reduce or
eliminate risk. Sometimes risk can be reduced by
diversification, by buying insurance, or by investing in
additional information.
4. In some situations, people must choose the amount of
risk they wish to bear. A good example is investing in
stocks or bonds. We will see that such investments involve
trade-offs between the monetary gain that one can expect
and the riskiness of that gain.

5. Sometimes demand for a good is driven partly or entirely
by speculation— people buy the good because they think
its price will rise. We will see how this can lead to a
bubble, where more and more people, convinced that the
price will keep rising, buy the good and push its price up
further—until eventually the bubble bursts and the price
plummets.
In a world of uncertainty, individual behaviour may
sometimes seem unpredictable, even irrational, and perhaps
contrary to the basic assumptions of consumer theory.
Describing Risk
To describe risk quantitatively, we begin by listing all
the possible outcomes of a particular action or event, as well
as the likelihood that each outcome will occur. Suppose, for
example, that you are considering investing in a company that
explores for offshore oil. If the exploration effort is
successful, the company’s stock will increase from Rs.30 to
Rs.40 per share; if not, the price will fall to Rs.20 per
share. Thus there are two possible future outcomes: a
Rs.40-per-share price and a Rs.20-per-share price.
Probability
Probability is the likelihood that a given outcome will
occur. In our example, the probability that the oil exploration
project will be successful might be ¼ and the probability
that it is unsuccessful 3/4. (Note that the probabilities for all
possible events must add up to 1.) Our interpretation of
probability can depend on the nature of the uncertain event, on
the beliefs of the people involved, or both. One objective
interpretation of probability relies on the frequency with
which certain events tend to occur. Suppose we know that
of the last 100 offshore oil explorations, 25 have succeeded
and 75 failed. In that case, the probability of success of 1/4
is objective because it is based directly on the frequency of
similar experiences. But when there are no similar past
experiences to help measure probability then, objective
measures of probability cannot be deduced and more
subjective measures are needed. Subjective probability is the
perception that an outcome will occur. This perception may
be based on a person’s judgment or experience, but not
necessarily on the frequency with which a particular outcome
has actually occurred in the past. When probabilities are
subjectively determined, different people may attach
different probabilities to different outcomes and thereby
make different choices. For example, if the search for oil were
to take place in an area where no previous searches had ever
occurred, I might attach a higher subjective probability than
you to the chance that the project will succeed: Perhaps I know
more about the project or I have a better understanding of the
oil business and can therefore make better use of our
common information. Either different information or
different abilities to process the same information can cause
subjective probabilities to vary among individuals.
Regardless of the interpretation of probability, it is
used in calculating two important measures that help us
describe and compare risky choices. One measure tells us the
expected value and the other the variability of the possible
outcomes.


Expected Value
The expected value associated with an uncertain
situation is a weighted average of the payoffs or values
associated with all possible outcomes. The probabilities of
each outcome are used as weights. Thus the expected value
measures the central tendency—the payoff or value that we
would expect on average.
Our offshore oil exploration example had two possible
outcomes: Success yields a payoff of Rs.40 per share, failure
a payoff of Rs.20 per share. Denoting “probability of” by
Pr, we express the expected value in this case as
Expected value = Pr (success)(Rs.40/share) + Pr (failure)
(Rs.20/share)
= (1/4)(Rs.40/share) + (3/4)(Rs.20/share)
= Rs.25/share
More generally, if there are two possible outcomes
having payoffs X1 and X2 and if the probabilities of each
outcome are given by Pr1 and Pr2, then the expected value is
E(X) = Pr1X1 + Pr2X2
When there are n possible outcomes, the expected
value becomes
E(X) = Pr1X1 + Pr2X2 +........+ PrnXn
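The weighted-average computation for the offshore-oil example can be written out directly; a minimal sketch:

```python
# Expected value = probability-weighted average of the payoffs.
# Numbers from the offshore-oil example: success 1/4, failure 3/4.
probs   = [0.25, 0.75]
payoffs = [40.0, 20.0]   # Rs. per share

expected_value = sum(p * x for p, x in zip(probs, payoffs))
print(expected_value)    # 25.0 (Rs. per share)
```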
Variability
Variability is the extent to which the possible
outcomes of an uncertain situation differ. Now we can discuss

why variability is important. Suppose you are choosing
between two part-time summer sales jobs that have the same
expected income (Rs.1500). The first job is based entirely
on commission—the income earned depends on how much
you sell. There are two equally likely payoffs for this job:
Rs.2000 for a successful sales effort and Rs.1000 for one that
is less successful. The second job is salaried. It is very likely
(.99 probability) that you will earn Rs.1510, but there is a .01
probability that the company will go out of business, in which
case you would earn only Rs.510 in severance pay. The
following table summarizes these possible outcomes, their
payoffs, and their probabilities.
Table 1.1: Income from Sales Jobs

                        Outcome 1              Outcome 2          Expected
                    Probability  Income    Probability  Income     Income
                                  (Rs.)                  (Rs.)      (Rs.)
Job 1: Commission       0.5       2000        0.5        1000       1500
Job 2: Fixed Salary     0.99      1510        0.01        510       1500

Note that these two jobs have the same expected income. For
Job 1, expected income is 0.5(Rs.2000) + 0.5(Rs.1000) =
Rs.1500; for Job 2, it is 0 .99 (Rs.1510) + 0.01(Rs.510) =
Rs.1500. However, the variability of the possible payoffs is
different. We measure variability by recognizing that large
differences between actual and expected pay offs (whether

positive or negative) imply greater risk. We call these
differences deviations. Table below shows the deviations of
the possible income from the expected income from each job.
Table 1.2: Deviations from Expected Income (Rs.)

          Outcome 1    Deviation    Outcome 2    Deviation
Job 1       2000          500         1000         -500
Job 2       1510           10          510         -990

By themselves, deviations do not provide a measure


of variability because they are sometimes positive and
sometimes negative. We can see from the table 1.2 that the
average of the probability weighted deviations is always 0. To
get around this problem, we square each deviation, yielding
numbers that are always positive. We then measure
variability by calculating the standard deviation: Square root
of the weighted average of the squares of the deviations of
the payoffs associated with each outcome from their
expected values. The following table shows the calculation of
the standard deviation for our example.
Table 1.3: Calculating Variance (Rs.)

        Outcome   Deviation    Outcome   Deviation    Weighted Average    Standard
           1       Squared        2       Squared     Deviation Squared   Deviation
Job 1    2000      250,000      1000      250,000          250,000           500
Job 2    1510          100       510      980,100            9,900          99.50

Note that the average of the squared deviations under Job 1 is
given by
0.5(Rs.250,000) + 0.5(Rs.250,000) = Rs.250,000
The standard deviation is therefore equal to the square
root of Rs.250,000, or Rs.500. Likewise, the probability-
weighted average of the squared deviations under Job 2 is
0.99(Rs.100) + 0.01(Rs.980,100) = Rs.9900
The standard deviation is the square root of Rs.9900,
or Rs.99.50. Thus the second job is much less risky than the
first; the standard deviation of the incomes is much lower.
The concept of standard deviation applies equally well
when there are many outcomes rather than just two. Suppose,
for example, that the first summer job yields incomes
ranging from Rs.1000 to Rs.2000 in increments of Rs.100 that
are all equally likely. The second job yields incomes from
Rs.1300 to Rs.1700 (again in increments of Rs.100) that are
all equally likely.
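The standard deviation calculations for the two jobs can be reproduced in a few lines (the figures match Tables 1.1 and 1.3):

```python
import math

# Standard deviation: square root of the probability-weighted
# average of squared deviations from the expected value.
def std_dev(probs, payoffs):
    mean = sum(p * x for p, x in zip(probs, payoffs))
    variance = sum(p * (x - mean) ** 2 for p, x in zip(probs, payoffs))
    return math.sqrt(variance)

job1 = std_dev([0.5, 0.5], [2000, 1000])    # commission job
job2 = std_dev([0.99, 0.01], [1510, 510])   # salaried job
print(round(job1, 2), round(job2, 2))       # 500.0 99.5
```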
Expected Utility
Consider an agent trying to evaluate the utility
associated with consuming a random variable.
A random variable y is a map from a state space Ω = {ω1, ω2}
to R, where ω1 occurs with probability p and ω2 with
probability 1- p. It takes values
y1 = y(ω1); y2 = y(ω2)
The expected value of y is:
py1 + (1-p) y2;


denoted Ey.
Let the utility function of the agent be u(c). His
utility when consuming y is: u(y1) with probability p; and
u(y2) with probability 1- p. We assume an agent evaluates the
utility of a random variable y by expected utility; that is,
pu (y1) + (1-p) u (y2)
In other words, we assume that, when facing
uncertainty, agents maximize expected utility. A lot of
experiments document failures of this assumption in various
circumstances. A lot of theoretical work addresses the failure,
postulating different (still optimizing) behavior on the part of
agents. Most economic theory still uses the assumption as
an approximation.
The Von Neumann –Morgenstern Theorem
In their book The Theory of Games and Economic
Behaviour, John von Neumann and Oskar Morgenstern
developed mathematical models for examining the economic
behaviour of individuals under conditions of uncertainty. To
understand these interactions, it was necessary first to
investigate the motives of the participants in such “games.”
Because the hypothesis that individuals make choices in
uncertain situations based on expected utility seemed
intuitively reasonable, the authors set out to show that this
hypothesis could be derived from more basic axioms of
“rational” behaviour. The axioms represent an attempt by the
authors to generalize the foundations of the theory of
individual choice to cover uncertain situations. Although

most of these axioms seem eminently reasonable at first
glance, many important questions about their tenability have
been raised.
The von Neumann–Morgenstern utility index
To begin, suppose that there are n possible prizes
that an individual might win by participating in a lottery. Let
these prizes be denoted by x1, x2,…, xn and assume that these
have been arranged in order of ascending desirability.
Therefore, x1 is the least preferred prize for the individual and
xn is the most preferred prize. Now assign arbitrary utility
numbers to these two extreme prizes. For example, it is
convenient to assign
U(x1) = 0,
U (xn) = 1,
but any other pair of numbers would do equally well. Using
these two values of utility, the point of the von Neumann–
Morgenstern theorem is to show that a reasonable way exists
to assign specific utility numbers to the other prizes available.
Suppose that we choose any other prize, say, xi. Consider the
following experiment. Ask the individual to state the
probability, say, πi, at which he or she would be indifferent
between xi with certainty, and a gamble offering prizes of
xn with probability πi and x1 with probability (1- πi). It seems
reasonable (although this is the most problematic assumption
in the von Neumann–Morgenstern approach) that such a
probability will exist: The individual will always be indifferent
between a gamble and a sure thing, provided that a high

enough probability of winning the best prize is offered. It also
seems likely that πi will be higher the more desirable xi is;
the better xi is, the better the chance of winning xn must be to
get the individual to gamble. The probability πi therefore
measures how desirable the prize xi is. In fact, the von
Neumann–Morgenstern technique is to define the utility of xi
as the expected utility of the gamble that the individual
considers equally desirable to xi:

U(xi) = πi . U(xn) + (1- πi). U(x1)


Because of our choice of scale in equation
U(x1) = 0, U (xn) = 1
We have
U(xi)= πi .1+(1 - πi).0= πi.
By judiciously choosing the utility numbers to be assigned to
the best and worst prizes, we have been able to devise a scale
under which the utility number attached to any other prize is
simply the probability of winning the top prize in a gamble
the individual regards as equivalent to the prize in question.
This choice of utility numbers is arbitrary. Any other two
numbers could have been used to construct this utility scale,
but our initial choice (Equation U(x1) = 0, U (xn) = 1) is a
particularly convenient one.
Expected utility maximization
In line with the choice of scale and origin
represented by Equation U(x1) = 0, U (xn) = 1, suppose that

probability πi has been assigned to represent the utility of
every prize xi. Notice in particular that π1=0, πn=1, and that
the other utility values range between these extremes. Using
these utility numbers, we can show that a “rational”
individual will choose among gambles based on their
expected “utilities” (that is, based on the expected value of
these von Neumann–Morgenstern utility index numbers).
As an example, consider two gambles. One gamble
offers x2, with probability q, and x3, with probability (1- q).
The other offers x5, with probability t , and x6, with
probability (1-t). We want to show that this person will
choose gamble 1 if and only if the expected utility of
gamble 1 exceeds that of gamble 2. Now for the gambles:
expected utility (1) = q.U(x2)+(1-q).U(x3)
expected utility(2) = t.U(x5)+(1-t).U(x6)
Substituting the utility index numbers (that is, π2 is the “utility”
of x2, and so forth) gives
expected utility (1) = q.π2 + (1- q).π3
expected utility (2) = t.π5 + (1- t).π6
We wish to show that the individual will prefer gamble 1 to
gamble 2 if and only if
q.π2 + (1- q).π3 > t.π5 + (1- t).π6
To show this, recall the definitions of the utility index.
The individual is indifferent between x2 and a gamble
promising x1 with probability (1- π2) and xn with probability

π2. We can use this fact to substitute gambles involving only
x1 and xn for all the utilities in the two expected-utility
expressions above (even though the individual is indifferent
between these, the assumption that this substitution can be
made implicitly assumes that people can see through complex
lottery combinations). After a bit of messy algebra, we can
conclude that gamble 1 is equivalent to a gamble promising xn
with probability q π2 + (1- q) π3, and gamble 2 is equivalent
to a gamble promising xn with probability t π5 + (1- t) π6.
The individual will presumably
prefer the gamble with the higher probability of winning the
best prize. Consequently, he or she will choose gamble 1 if
and only if
q π2 +(1-q) π3 >t π5 +(1-t) π6.
But this is precisely what we wanted to show.
Consequently, we have proved that an individual will
choose the gamble that provides the highest level of expected
(von Neumann–Morgenstern) utility. We now make
considerable use of this result, which can be summarized as
follows:
Expected utility maximization: If individuals obey the von
Neumann–Morgenstern axioms of behaviour in uncertain
situations, they will act as if they choose the option that
maximizes the expected value of their von Neumann–
Morgenstern utility index.
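The mechanics of ranking gambles by expected utility can be shown with made-up index numbers; the πi values and probabilities q and t below are assumptions chosen only for illustration:

```python
# Ranking gambles by expected von Neumann-Morgenstern utility.
# Each index value pi_i is the probability of winning the best prize
# that makes a gamble as good as the sure prize (assumed numbers).
utility_index = {"x2": 0.3, "x3": 0.5, "x5": 0.4, "x6": 0.9}

q, t = 0.6, 0.5   # assumed gamble probabilities
eu1 = q * utility_index["x2"] + (1 - q) * utility_index["x3"]  # offers x2 or x3
eu2 = t * utility_index["x5"] + (1 - t) * utility_index["x6"]  # offers x5 or x6

choice = "gamble 1" if eu1 > eu2 else "gamble 2"
print(choice)     # here gamble 2 wins, since 0.65 > 0.38
```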


Expected Utility Hypothesis


In economics, game theory, and decision theory, the
expected utility hypothesis—concerning people’s preferences
with regard to choices that have uncertain outcomes
(gambles)—states that the subjective value associated with an
individual's gamble is the statistical expectation of that
individual's valuations of the outcomes of that gamble,
where these valuations may differ from the Rupees value
of those outcomes. Daniel Bernoulli's 1738 treatment of the
St. Petersburg paradox is considered the beginning of the
hypothesis. This hypothesis has proven
useful to explain some popular choices that seem to
contradict the expected value criterion (which takes into
account only the sizes of the payouts and the probabilities of
occurrence), such as occur in the contexts of gambling and
insurance.
The von Neumann–Morgenstern utility theorem
provides necessary and sufficient conditions under which the
expected utility hypothesis holds. From relatively early on, it
was accepted that some of these conditions would be violated
by real decision-makers in practice but that the conditions
could be interpreted nonetheless as ‘axioms’ of rational choice.
Until the mid-twentieth century, the standard term
for the expected utility was the moral expectation, contrasted
with “mathematical expectation” for the expected value.
Bernoulli arrived at expected utility through the
St. Petersburg paradox. In this game, you flip a coin until
the first head appears; if the first head appears on the kth
flip, you receive 2^k rupees. The game highlighted the gap
between what people are willing to pay to play and the
game's expected payoff.
Formula for Expected Utility
When the entity whose value affects a person's utility
takes on one of a set of discrete values, the formula for
expected utility, which is assumed to be maximized, is
E[u(x)] = p1.u(x1) + p2.u(x2) + .....
where the left side is the subjective valuation of the gamble
as a whole, xi is the ith possible outcome, u(xi) is its
valuation, and pi is its probability. There could be either a
finite set of possible values xi in which case the right side of
this equation has a finite number of terms;
or there could be an infinite set of discrete values, in which
case the right side has an infinite number of terms. When x
can take on any of a continuous range of values, the
expected utility is given by

E[u(x)] = ∫ u(x) f(x) dx

where f(x) is the probability density function of x.


Expected value and choice under risk
In the presence of risky outcomes, a human
decision maker does not always choose the option with the
higher expected value. For example, suppose

there is a choice between a guaranteed payment of Rs.1.00,
and a gamble in which the probability of getting a Rs.100
payment is 1 in 80 and the alternative, far more likely
outcome (79 out of 80) is receiving Rs.0. The expected value
of the first alternative is Rs.1.00 and the expected value of the
second alternative is Rs.1.25. According to expected value
theory, people should choose the Rs.100-or-nothing gamble;
however, as stressed by expected utility theory, some people
are risk averse enough to prefer the sure thing, despite its
lower expected value. People with less risk aversion would
choose the riskier, higher-expected-value gamble. This
divergence between observed choices and expected values is
what motivates expected utility theory.
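The comparison can be made concrete with a small calculation; the square-root utility function here is an assumed example of risk-averse preferences, not something specified in the text:

```python
import math

# The Rs.1 sure thing vs a 1-in-80 chance of Rs.100.
p = 1 / 80
ev_gamble = p * 100 + (1 - p) * 0   # expected value of the gamble, about 1.25
print(ev_gamble)

# With concave (square-root) utility, the sure rupee wins despite
# its lower expected value: u(1) = 1 > (1/80) * u(100) = 0.125.
eu_sure   = math.sqrt(1.0)
eu_gamble = p * math.sqrt(100.0)
print(eu_sure > eu_gamble)          # True: the risk-averse agent takes Rs.1
```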
Bernoulli’s formulation
Nicolas Bernoulli described the St. Petersburg paradox
(involving infinite expected values) in 1713, prompting two
Swiss mathematicians to develop expected utility theory as a
solution. The theory can also more accurately describe more
realistic scenarios (where expected values are finite) than
expected value alone. In 1728, Gabriel Cramer, in a letter to
Nicolas Bernoulli, wrote, “the mathematicians estimate
money in proportion to its quantity, and men of good sense
in proportion to the usage that they may make of it.”
In 1738, Nicolas’ cousin Daniel Bernoulli published
the canonical 18th-century description of this solution in
Specimen theoriae novae de mensura sortis or Exposition of a
New Theory on the Measurement of Risk. Daniel Bernoulli
proposed that a nonlinear function of utility of an outcome

should be used instead of the expected value of an outcome,
accounting for risk aversion, where the risk premium is
higher for low-probability events than the difference
between the payout level of a particular outcome and its
expected value. Bernoulli further proposed that it was not the
goal of the gambler to maximize his expected gain but to
instead maximize the logarithm of his gain. Bernoulli’s paper
was the first formalization of marginal utility, which has
broad application in economics in addition to expected
utility theory. He used this concept to formalize the idea that
the same amount of additional money was less useful to an
already-wealthy person than it would be to a poor person.
St. Petersburg Paradox
The St. Petersburg paradox (named after the journal in
which Bernoulli’s paper was published) arises when there is
no upper bound on the potential rewards from very low
probability events. Because some probability distribution
functions have an infinite expected value , an expected-wealth
maximizing person would pay an arbitrarily large finite
amount to take this gamble. In real life, people do not do this.
Bernoulli proposed a solution to this paradox in his paper: the
utility function used in real life means that the expected
utility of the gamble is finite, even if its expected value is
infinite. (Thus he hypothesized diminishing marginal utility
of increasingly larger amounts of money.) It has also been
resolved differently by other economists by proposing that
very low probability events are neglected, by taking into

account the finite resources of the participants, or by noting
that one simply cannot buy that which is not sold (and that
sellers would not produce a lottery whose expected loss to
them were unacceptable).
The game:
Flip a fair coin until the first head appears
The payoff: If the first head appears on the kth flip, you get
Rs.2^k. Using an expected value rule, you should be willing to
pay up to the expected value of the payoff from playing the
game. What, then, is the expected payoff of the St. Petersburg
game?
Recall the expected payoff will be the probability weighted
sum of the possible outcomes. Note: The tosses are
independent; a tail on the previous toss does not influence the
outcome of the subsequent toss. Head has a ½ or 50% chance
of occurring on any single toss.

Outcomes = Head 1 2 3 … k
appears in toss
Probability head ½ ¼ 1/8 ..... 1/2k
occurs on given toss
Payoff = 2k 2 4 8 ..... 2k

Therefore,
Expected Payoff = ½ X 2 + ¼ X 4 + 1/8 X 8 + . . . + 1/2k X 2k
+ . . . = 1+ 1+ 1+ 1+ … = ∞
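The divergence of this sum, and Bernoulli’s log-utility resolution of it, can be checked numerically. A minimal Python sketch (the function names and the truncation at a finite number of terms are ours, for illustration):

```python
import math

def expected_payoff(terms):
    # Each term (1/2)**k * 2**k equals 1, so the partial sum grows without bound.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, terms + 1))

def expected_log_utility(terms):
    # Bernoulli's resolution: with U = Ln(wealth), the k-th term is
    # (1/2)**k * Ln(2**k) = k*Ln(2)/2**k, and the series converges to 2*Ln(2).
    return sum((0.5 ** k) * math.log(2 ** k) for k in range(1, terms + 1))

print(expected_payoff(10))     # 10.0
print(expected_payoff(100))    # 100.0
print(round(expected_log_utility(1000), 6))   # 1.386294, i.e. 2*Ln(2)
```

Each extra term adds Rs.1 to the expected payoff, so it diverges, while the expected log utility settles at 2 Ln 2 ≈ 1.386.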

In situations involving uncertainty (risk), individuals
act as if they choose on the basis of expected utility – the
expected utility of wealth, consumption, etc. – rather than
expected value. In our discussions we can think of individuals
choosing between different probability distributions of wealth.
The Friedman-Savage Hypothesis:
The Neumann-Morgenstern method is based on the
expected values of utilities and therefore, does not refer to
whether the marginal utility of money diminishes or
increases. In this respect, this method of measuring utility
is incomplete. When a person gets an insurance policy, he
pays to escape or avoid risk. But when he buys a lottery
ticket, he gets a small chance of a large gain. Thus he
assumes risk. Some people indulge both in buying insurance
and gambling and thus they both avoid and choose risks.
This has been explained by the Friedman-Savage
Hypothesis as an extension of the N-M method.
It states that marginal utility of money diminishes for
incomes below some level; it increases for incomes between
that level and some higher level of income, and again
diminishes for all incomes above that higher level. This is
illustrated in figure 1.1 in terms of the total utility curve TU
where utility is plotted on the vertical axis and income on the
horizontal axis.

Figure 1.1

Suppose a person buys insurance for his house against
the small chance of a heavy loss from fire and also buys a
lottery ticket which offers a small chance of a large win.
Such a conflicting behaviour of a person who buys insurance
and also gambles has been shown by Friedman and Savage
with a total utility curve. Such a curve first rises at a
diminishing rate so that the marginal utility of money declines
and then it rises at an increasing rate so that the marginal
utility of income increases.
The curve TU in figure 1.1 first rises facing
downward up to point F1 and then facing upward up to point
K1. Suppose the person’s income from his house is OF with
FF1 utility without a fire. Now he buys insurance to avoid risk
from a fire. If the house is burnt down by fire, his income is
reduced to OA with AA1 utility. By joining points A1 and F1,

we get utility points between these two uncertain income
situations. If the probability of no fire is P, then the expected
income of this person on the basis of the N-M utility index is
Y = P (OF) + (1 -P) (OA).
Let the expected income (Y) of the person be OE;
then its utility is EE1 on the dashed line A1F1. Now assume
that the cost of insurance (the insurance premium) is FD. Thus
the person’s assured income with insurance is OD (= OF -
FD), which gives him greater utility DD1 than EE1 from
expected income OE with probability of no fire. Therefore,
the person will buy insurance to avoid risk and have the
assured income OD by paying FD premium in case his house
is burnt down by fire.
With OD income left with the person after buying
insurance of the house against fire, he decides to purchase a
lottery ticket which costs DB. If he does not win, his income
would fall to OB with utility BB1. If he wins, his income
would increase to OK with utility KK1. Thus his expected
income with probability P' of not winning the lottery is
Y1 = P'(OB) + (1 - P')(OK)
Let the expected income (Y1) of the person be OC;
then its utility is CC1 on the dashed line B1K1, which gives
him greater utility (CC1) by purchasing the lottery ticket than
DD1 if he had not bought it. Thus the person will also buy
the ticket along with insurance for the house against fire.
Let us now take an expected income of OG in the rising

portion F1K1 of the TU curve when the marginal utility of
income is increasing. In this case, the utility of buying the
lottery ticket is GG1, which is greater than DD1 if he were not
to buy the lottery. Thus he will stake his money on the lottery.
In the last stage when the expected income of the
person is more than OK in the region K1T1 of the TU curve,
the marginal utility of income is declining and consequently,
he is not willing to undertake risks in buying lottery tickets
or in other risky investments except at favourable odds. This
region explains the St. Petersburg Paradox.
Friedman and Savage believe that the TU curve
describes the attitudes of people towards risks in different
socio-economic groups. However, they recognise many
differences between persons even in the same socio-
economic group. Some are habitual gamblers while others
avoid risks. Still, Friedman and Savage believe that the curve
describes the propensities of the main groups.
According to them, people in the middle income
group with increasing marginal utility of income are those
who are willing to take risks to improve their lot. If they
succeed in their efforts in having more money by taking
risks, they lift themselves up into the next higher socio-
economic group. They do not want just more consumer goods.
Rather, they want to rise in the social scale and to change
their patterns of life. That is why, the marginal utility of
income increases for them.

The Markowitz Hypothesis:


Prof. Markowitz found the Friedman-Savage
hypothesis contrary to common observations. According to
him, it is not correct to say that the poor and the rich are
unwilling to gamble and take risks except at favourable odds.
Rather, both purchase lotteries and gamble on horse races.
They also play the games at casinos and gamble alike in
the stock market. Thus Friedman and Savage failed to
observe the actual behaviour of the poor and the rich because
they assume that the marginal utility of income depends on
the absolute level of income. Markowitz has modified it by
relating the marginal utility of income to changes in the level
of present income.
According to Markowitz, when income increases by a
small increment, it leads to increasing marginal utility of
income. But large increases in income lead to diminishing
marginal utility of income. That is why at higher levels of
income people are reluctant to indulge in gambling even at
fair bets and people in slowly rising income groups indulge
in gambling to improve their position. On the other hand,
when there are small decreases in income, the marginal
utility of income rises. But large decreases in income lead to
diminishing marginal utility of income. That is why people
insure against small losses but indulge in gambling where
large losses are involved. This is called the Markowitz
hypothesis, which is explained in the following figure, where
Markowitz takes three inflexion points M, N and P in the
upper portion of the diagram, with present income at the
middle point N on the TU curve of income.

Figure 1.2

The marginal utility of income curve MU is derived
in the lower portion of the figure 1.2 where the present
income level is OB. With a small increase in the income of a
person from OB to OC, the marginal utility of income
increases from point S to T on the MU curve. But large
increases in income beyond OC lead to diminishing
marginal utility of income from point T onwards along the
MU curve. On the other hand, small decreases in income from
OB to OA lead to increasing marginal utility of income from
S to R on the MU curve. But large decreases in income to the

left of A lead to diminishing marginal utility of income from
point R towards O along the MU curve. The Markowitz
hypothesis is an improvement over the Friedman-Savage
hypothesis. Instead of the absolute level of income, it takes
the present level of income of a person. It suggests that a
person’s behaviour towards insurance and gambling is the
same whether he is poor or rich. The emphasis is on small or
large increases or decreases in the present income of a person
that determines his behaviour towards insurance and
gambling.
Critical Appraisal of Modern Utility Analysis:
In the modern utility analysis of risk or uncertainty,
the Neumann and Morgenstern hypothesis implies
measurable utility up to a linear transformation, thereby
reintroducing diminishing or increasing marginal utility. The
Friedman-Savage hypothesis contains an added element.
It attempts to explain the shape of the curve of total
utility of income. These hypotheses are thus attempts to
rehabilitate the measurement of utility. But the N-M theory of
risky choices, along with its variants like the Friedman-
Savage hypothesis and the Markowitz hypothesis, is still a
subject of controversy on two counts: firstly, from the
practical standpoint, and secondly, whether it is a cardinal or
an ordinal method.
Firstly, it is doubtful if risk is measurable. When
Neumann and Morgenstern assume that risk does not
possess any utility or disutility of its own, they ignore the

pleasures or pains of uncertainty-bearing.
Secondly, in the majority of individual choices the
element of uncertainty is very little. Thirdly, individual choices
are of an infinite variety. Granted that they are uncertain, is
it possible to measure them with the N-M method? Lastly,
it does not measure the ‘strength of feelings’ of individuals
towards goods and services under uncertain choices.
On the question whether the N-M method measures utility
cardinally or ordinally, there is great confusion among
economists. Robertson in his Utility and All That uses it in
the cardinal sense, while Profs. Baumol, Fellner and others
are of the view that the ranking of utility makes it ordinal.
According to Baumol, the N-M theory has nothing in common
with the neo- classical theory regarding cardinality.
In the neo-classical theory the word “cardinal” is
used to denote introspective absolute marginal measurement
of utility while in this theory it is used operationally. In
the N-M theory, utility numbers are assigned to lottery
tickets according to a person’s ranking of the prizes and the
prediction is made numerically as to which of the two tickets
will be chosen. Though the N-M formula is used to derive the
utility index, yet it says nothing about diminishing marginal
utility. Thus the N-M utility is not the neoclassical cardinal
utility.
The refinements made by Friedman-Savage and
Markowitz have tended to drop the neo-classical
assumption that the marginal utility of income diminishes for

all ranges of income. Thus the theory of measurement of
utility under risky choices is superior to the neo-classical
introspective cardinalism of certain choices.
Economists like Dorfman, Samuelson and Solow have
derived the Paretian indices of utility from the N-M formula.
And when the N-M index based on individual ranking is
constructed, it conveys information about his preferences.
Baumol uses further the N-M measurement in the
ordinal sense when he equates the N-M marginal utility with
the marginal rate of substitution. He writes: “The N-M
marginal utility of X ends up as no more than the marginal
rate of substitution between X and the probability of winning
the pre-specified prize (E) of the standard lottery ticket. This is
surely not cardinal measurement in the classical sense.”
Different Preferences toward Risk: risk neutrality, risk
aversion and risk preference
People differ in their willingness to bear risk. Some
are risk averse, some risk loving, and some risk neutral. An
individual who is risk averse prefers a certain given income
to a risky income with the same expected value. (Such a
person has a diminishing marginal utility of income.) Risk
aversion is the most common attitude toward risk. To see that
most people are risk averse most of the time, note that
most people not only buy life insurance, health insurance,
and car insurance, but also seek occupations with relatively
stable wages.

Figure 1.3 (income, in Rs.1000, on the horizontal axis)
The figure 1.3 shows a woman who is risk averse.
Suppose hypothetically that she can have either a certain
income of Rs.20,000, or a job yielding an income of
Rs.30,000 with probability .5 and an income of Rs.10,000
with probability .5 (so that the expected income is also
Rs.20,000). As we saw, the expected utility of the uncertain
income is 14—an average of the utility at point A (10) and the
utility at E (18)—and is shown by F. Now we can compare the
expected utility associated with the risky job to the utility
generated if Rs.20,000 were earned without risk. This latter
utility level, 16, is given by D in the figure. It is clearly
greater than the expected utility of 14 associated with the

risky job. For a risk-averse person, losses are more important
(in terms of the change in utility) than gains. A Rs.10,000
increase in income, from Rs. 20,000 to Rs.30,000, generates
an increase in utility of two units; a Rs.10,000 decrease in
income, from Rs.20,000 to Rs.10,000, creates a loss of
utility of six units.
A person who is risk neutral is indifferent between a
certain income and an uncertain income with the same
expected value. It is shown in the figure 1.4.
Figure 1.4 (income, in Rs.1000, on the horizontal axis)
Here the utility associated with a job generating an
income of either Rs.10,000 or Rs.30,000 with equal
probability is 12, as is the utility of receiving a certain
income of Rs.20,000. As you can see from the figure, the
marginal utility of income is constant for a risk-neutral
person. Thus, when people are risk neutral, the income they

earn can be used as an indicator of well-being. A government
policy that doubles incomes would then also double their
utility. At the same time, government policies that alter the
risks that people face, without changing their expected
incomes, would not affect their well-being. Risk neutrality
allows a person to avoid the complications that might be
associated with the effects of governmental actions on the
riskiness of outcomes.
Finally, an individual who is risk loving prefers an
uncertain income to a certain one, even if the expected value
of the uncertain income is less than that of the certain
income. The following figure shows this third possibility.
Figure 1.5 (income, in Rs.1000, on the horizontal axis)
In this case, the expected utility of an uncertain
income, which will be either Rs.10,000 with probability .5 or
Rs.30,000 with probability .5, is higher than the utility

associated with a certain income of Rs.20,000. Numerically,
E(u) = .5u(Rs.10,000) + .5u(Rs.30,000) = .5(3) + .5(18) = 10.5
u(Rs.20,000) = 8
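Jensen’s inequality ties the three attitudes to the curvature of utility. A Python sketch using the text’s income lottery (the particular utility functions, square root, linear and square, are illustrative assumptions, not the curves in the figures):

```python
import math

# The same two-outcome income lottery used in the text.
lottery = [(0.5, 10_000), (0.5, 30_000)]
expected_income = sum(p * x for p, x in lottery)

def expected_utility(u):
    return sum(p * u(x) for p, x in lottery)

# Illustrative utility functions (assumptions, not the book's exact curves):
u_averse = math.sqrt           # concave: diminishing marginal utility
u_neutral = lambda x: x        # linear: constant marginal utility
u_loving = lambda x: x ** 2    # convex: increasing marginal utility

print(expected_utility(u_averse) < u_averse(expected_income))     # True
print(expected_utility(u_neutral) == u_neutral(expected_income))  # True
print(expected_utility(u_loving) > u_loving(expected_income))     # True
```

A concave utility makes the sure income preferred, a linear one makes the agent indifferent, and a convex one makes the gamble preferred.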
Of course, some people may be averse to some risks
and act like risk lovers with respect to others. For example,
many people purchase life insurance and are conservative
with respect to their choice of jobs, but still enjoy gambling.
Some criminologists might describe criminals as risk lovers,
especially if they commit crimes despite a high prospect of
apprehension and punishment. Except for such special cases,
however, few people are risk loving, at least with respect to
major purchases or large amounts of income or wealth.
Risk Premium
The risk premium is the maximum amount of money
that a risk-averse person will pay to avoid taking a risk. In
general, the magnitude of the risk premium depends on the
risky alternatives that the person faces.
The Certainty Equivalent
The certainty equivalent is a guaranteed return that
someone would accept now, rather than taking a chance on a
higher, but uncertain, return in the future. Put another way, the
certainty equivalent is the guaranteed amount of cash that a
person would consider as having the same amount of
desirability as a risky asset.
Investments must pay a risk premium to compensate
investors for the possibility that they may not get their money
back; the higher the risk, the higher the premium an investor
expects over the average return. If an investor has a choice
between a government bond paying 3% interest and a
corporate bond paying 8% interest and he chooses the
government bond, the payoff differential is the certainty
equivalent. The corporation would need to offer this particular
investor a potential return of more than 8% on its bonds to
convince him to buy.
A company seeking investors can use the certainty
equivalent as a basis for determining how much more it needs
to pay to convince investors to consider the riskier option.
The certainty equivalent varies because each investor has a
unique risk tolerance. The term is also used in gambling, to
represent the amount of payoff someone would require to be
indifferent between it and a given gamble. This is called the
gamble's certainty equivalent.
• The certainty equivalent represents the amount of
guaranteed money an investor would accept now instead of
taking a risk of getting more money at a future date
• The certainty equivalent varies between investors based
on their risk tolerance, and a retiree would have a higher
certainty equivalent because he's less willing to risk his
retirement funds
• The certainty equivalent is closely related to the
concept of risk premium or the amount of additional
return an investor requires to choose a risky investment
over a safer investment

Demand for Risky Assets


1. Stock Demand
We assume a utility function over money:
U (W) = LnW (1)
The consumer has an opportunity to buy stocks. The total
amount of money depends upon the amount of stocks bought
and the realization of the stocks. With probability p, the stock
goes up by Rs.1 and with probability (1 - p) it goes down by
Rs.1. Therefore, with probability p, the amount of money, W,
that the consumer has is:
W + α
where α is the number of stocks purchased, and with
probability 1 - p, the amount of money the consumer has is:
W - α
Writing the expected utility function, we get:
EU = pU(W + α) + (1 - p)U(W - α)
With our particular choice of utility function (1):
EU = p Ln(W + α) + (1 - p) Ln(W - α)
To determine how many stocks α the consumer should
purchase (α is the endogenous variable; W and p are the
exogenous parameters), we take the first-order condition.
Then we get:
dEU/dα = p/(W + α) - (1 - p)/(W - α) = 0
Remembering from calculus that d Ln(x)/dx = 1/x, we can
rewrite the second equation as
p/(W + α) = (1 - p)/(W - α)
p(W - α) = (1 - p)(W + α)
pW - pα = (1 - p)W + (1 - p)α
(2p - 1)W = α
α* = (2p - 1)W
Note that the expected value of holding α stocks is
pα + (1 - p)(-α) = (2p - 1)α.
When p < 1/2, this expected value is negative. In that case,
the optimal amount of stock purchased would be negative as
well. If the expected value of the stock is zero (p = 1/2), then
the optimal amount of stock purchased is zero. If the expected
value is positive (p > 1/2), then the optimal amount of
stock purchased is positive.
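The closed-form solution α* = (2p - 1)W can be checked against a brute-force maximization of the expected utility. A Python sketch (the grid-search helper and the parameter values W = 100, p = 0.6 are our illustrative choices):

```python
import math

def expected_utility(alpha, W, p):
    # EU = p*Ln(W + alpha) + (1 - p)*Ln(W - alpha)
    return p * math.log(W + alpha) + (1 - p) * math.log(W - alpha)

def best_alpha(W, p, steps=20_000):
    # Brute-force search over feasible holdings -W < alpha < W.
    grid = (-W + 2 * W * i / steps for i in range(1, steps))
    return max(grid, key=lambda a: expected_utility(a, W, p))

W, p = 100.0, 0.6
closed_form = (2 * p - 1) * W         # alpha* = (2p - 1)W
print(round(closed_form, 2))          # 20.0
print(round(best_alpha(W, p), 2))     # 20.0 -- agrees with the closed form
```

With p = 0.6 the stock has a positive expected return, so the optimal holding is positive and equals one fifth of wealth.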
2. Insurance Demand
We assume a utility function over money:
U (W) = Ln W
In one state of the world, the consumer is healthy and
has wealth W1. This occurs with probability p. In a second
state of the world, the consumer is sick and must spend money
for health services, after which she is left with wealth W2 < W1.

There is an opportunity for the consumer to buy insurance. The
premium is one rupee per unit of insurance and pays off R > 1
Rupees if the consumer is sick.
If the consumer buys α units of insurance, the amount
of money the consumer has left if she is healthy is:
W1 - α
If the consumer buys α units of insurance, the amount of
money the consumer has left if she is sick is:
W2 - α + αR = W2 + (R - 1)α
Writing the expected utility function, we get:
EU = pU(W1 - α) + (1 - p)U(W2 + (R - 1)α)
To determine how much insurance α the consumer should
purchase (α is the endogenous variable; W1, W2, p and R are
the exogenous parameters), we take the first-order condition:

dEU/dα = -p/(W1 - α) + (R - 1)(1 - p)/(W2 + (R - 1)α) = 0
Again, remembering that d Ln(x)/dx = 1/x, we can rewrite:
-p/(W1 - α) = -(R - 1)(1 - p)/(W2 + (R - 1)α)
Dividing by -1 on both sides and cross-multiplying, we get:
p[W2 + α(R - 1)] = (1 - p)(R - 1)[W1 - α]
Bringing the endogenous variable α to one side and the terms
only involving parameters on the other, we get:
α(1 - p)(R - 1) + αp(R - 1) = (1 - p)(R - 1)W1 - pW2
α(R - 1) = (1 - p)(R - 1)W1 - pW2
Solving for α*, we finally get:
α* = (1 - p)W1 - pW2/(R - 1)
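This solution can also be verified numerically. A Python sketch (the grid search and the parameter values, chosen so that the optimal coverage is positive, are our illustrative assumptions):

```python
import math

def insurance_eu(alpha, W1, W2, p, R):
    # Healthy (prob p): wealth W1 - alpha; sick (prob 1 - p): W2 + (R - 1)*alpha.
    return p * math.log(W1 - alpha) + (1 - p) * math.log(W2 + (R - 1) * alpha)

def best_insurance(W1, W2, p, R, steps=20_000):
    # Brute-force search over 0 < alpha < W1.
    grid = (W1 * i / steps for i in range(1, steps))
    return max(grid, key=lambda a: insurance_eu(a, W1, W2, p, R))

# Illustrative parameters (assumptions): healthy wealth 100, sick wealth 50,
# probability of good health 0.9, payout of Rs.11 per Rs.1 of premium.
W1, W2, p, R = 100.0, 50.0, 0.9, 11.0
closed_form = (1 - p) * W1 - p * W2 / (R - 1)   # alpha* = (1-p)W1 - pW2/(R-1)
print(round(closed_form, 3))                    # 5.5
print(round(best_insurance(W1, W2, p, R), 3))
```

The grid search lands on the same coverage as the closed-form expression.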

Certainty Equivalent
In the decision analysis literature, a decision-maker
is called risk-neutral if he (or she) is willing to base his
decisions purely on the criterion of maximizing the expected
value of his monetary income. The criterion of maximizing
expected monetary value is so simple to work with that it is
often used as a convenient guide to decision-making even by
people who are not perfectly risk neutral. But in many
situations, people feel that comparing gambles only on the
basis of expected monetary values would take insufficient
account of their aversion to risks. For example, imagine that
you had a lottery ticket that would pay you either Rs.20,000 or
Rs.0, each with probability 1/2. If you are risk neutral, then
you should be unwilling to sell this ticket for any amount of
money less than its expected value, which is Rs.10,000. But
many risk averse people might be very glad to exchange
this risky lottery for a certain payment of Rs.9000. Given
any such lottery or gamble that promises to pay you an amount
of money that will be drawn randomly from some
probability distribution, a decision-maker’s certainty
equivalent of this gamble is the lowest amount of money-for-
certain that the decision-maker would be willing to accept
instead of the gamble. In other words, the amount of payoff that
an agent would have to receive to be indifferent between that
payoff and a given gamble is called that gamble’s ‘certainty
equivalent’. That is, saying that Rs.7000 is your certainty
equivalent of the lottery that would pay you either
Rs.20,000 or Rs.0, each with probability 1/2, means that
you would be just indifferent between having a ticket to
this lottery or having Rs.7000 cash in hand.
In these terms, a risk-neutral person is one whose
certainty equivalent of any gamble is just equal to its
expected monetary value. A person is risk averse if his or her
certainty equivalent of a gamble is less than the gamble’s
expected monetary value of a gamble and a risk-averse
decision-maker’s certainty equivalent of the gamble is called
the decision-maker’s risk premium for the gamble. So if a
lottery paying Rs.20,000 or Rs.0, each with probability 1/2, is
worth Rs.7000 to you, then your risk premium for this lottery
is Rs.10,000-Rs.7000 = Rs.3000. When you have a choice
among various gambles, you should choose the one for which
you have the highest certainty equivalent, because it is the
one that is worth the most to you. But when a gamble is
complicated, you may find it difficult to assess your
certainty equivalent for it. The great appeal of the risk-
neutrality assumption is that, by identifying your certainty
equivalent with the expected monetary value, it makes your

certainty equivalent something that is straightforward to
compute or estimate by simulation. So what we need now is
to find more general formulas that risk-averse decision-
makers can use to compute their certainty equivalents for
complex gambles and monetary risks.
A realistic way of calculating certainty equivalents
must include some way of taking account of a decision-
maker’s personal willingness to take risks. The full diversity
of formulas that a rational decision-maker might use to
calculate certainty equivalents is described by a branch of
economics called utility theory. Utility theory generalizes the
principle of expected value maximization in a simple but
very versatile way. Instead of assuming that people want to
maximize their expected monetary values, utility theory
instead assumes that each individual has a personal utility
function that assigns a utility value to every possible
monetary income level that the individual might receive, such
that the individual always wants to maximize the expected
value of his or her utility. For example, suppose that you have
to choose among two gambles, where the random variable X
denotes the amount of money that you would get from the first
gamble and the random variable Y denotes the amount of
money that you would get from the second gamble. A risk-
neutral decision-maker would prefer the first gamble if
E(X) ≥ E(Y). But according to utility theory, when U(x) denotes
your “utility” for getting any amount of money x, you should
prefer the first gamble if E(U(X)) ≥ E(U(Y)). Furthermore,
your certainty equivalent of the gamble that will pay the

random monetary amount X should be the amount of money
that gives you the same utility as the expected utility of the
gamble. Thus, we have the basic equation
U (CE) = E (U(X)).
Utility theory can account for risk aversion, but it
also is consistent with risk neutrality or even risk-seeking
behaviour, depending on the shape of the utility function.
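The basic equation U(CE) = E(U(X)) can be solved directly once a utility function is fixed. A Python sketch for the text’s Rs.20,000-or-nothing lottery, assuming an illustrative square-root utility (the text does not specify one):

```python
import math

# Gamble from the text: Rs.20,000 or Rs.0, each with probability 1/2.
lottery = [(0.5, 20_000.0), (0.5, 0.0)]

u = math.sqrt                  # illustrative risk-averse utility (assumption)
u_inverse = lambda v: v ** 2   # inverse of sqrt on non-negative values

expected_value = sum(p * x for p, x in lottery)
expected_utility = sum(p * u(x) for p, x in lottery)
certainty_equivalent = u_inverse(expected_utility)   # solves U(CE) = E(U(X))
risk_premium = expected_value - certainty_equivalent

print(expected_value)                  # 10000.0
print(round(certainty_equivalent, 6))  # 5000.0
print(round(risk_premium, 6))          # 5000.0
```

With this concave utility the certainty equivalent (Rs.5000) lies below the expected value (Rs.10,000), and the gap between them is the risk premium.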
Demand for Risky Assets
Demand for Risky Assets is also known as Portfolio
Choice. There are two assets: safe and risky. The safe asset
has a return of 1 Rupee per Rupee invested. The risky asset
has a random return of z ∈ [a, b] Rupees with distribution F.
Assume that E(z) > 1. An individual with utility function u(.)
has an initial amount of wealth w and invests an amount α
in the risky asset. The individual’s end-of-period wealth would
be:
(w - α)·1 + αz = w + α(z - 1)
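The end-of-period wealth identity can be tabulated for a simple return distribution. A Python sketch (the two-point distribution with E(z) = 1.1 is our assumption, standing in for the general distribution F):

```python
def end_wealth(w, alpha, z):
    # (w - alpha) in the safe asset (return 1) plus alpha in the risky asset.
    return (w - alpha) * 1 + alpha * z   # = w + alpha*(z - 1)

# Illustrative two-point return with E(z) = 1.1 > 1 (assumption, not the text's F).
returns = [(0.5, 0.8), (0.5, 1.4)]
w = 100.0

for alpha in (0.0, 25.0, 50.0):
    expected = sum(p * end_wealth(w, alpha, z) for p, z in returns)
    print(alpha, expected)   # expected wealth rises with alpha since E(z) > 1
```

Expected end-of-period wealth is w + α(E(z) - 1), so it increases with the amount placed in the risky asset, while the spread of outcomes increases too.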
Reducing Risks
As the recent growth in state lotteries shows, people
sometimes choose risky alternatives that suggest risk-loving
rather than risk-averse behaviour. Most people, however,
spend relatively small amounts on lottery tickets and casinos.
When more important decisions are involved, they are
generally risk averse. In this section, we describe three ways
by which both consumers and businesses commonly reduce
risks: diversification, insurance, and obtaining more
information about choices and payoffs.

Diversification
Recall the old saying, “Don’t put all your eggs in
one basket.” Ignoring this advice is unnecessarily risky: If
your basket turns out to be a bad bet, all will be lost. Instead,
you can reduce risk through diversification: allocating your
resources to a variety of activities whose outcomes are not
closely related. Suppose, for example, that you plan to take a
part-time job selling appliances on a commission basis. You
can decide to sell only air conditioners or only heaters, or you
can spend half your time selling each. Of course, you can’t be
sure how hot or cold the weather will be next year. Risk can
be minimized by diversification—by allocating your time so
that you sell two or more products (whose sales are not
closely related) rather than a single product. Suppose there is
a 0.5 probability that it will be a relatively hot year, and a 0.5
probability that it will be cold.
Table 1.4: Income from Sales (Rs.)

                          Hot weather    Cold weather
Air conditioner sales     30,000         12,000
Heater sales              12,000         30,000

If you sell only air conditioners or only heaters, your
actual income will be either Rs.12,000 or Rs.30,000, but
your expected income will be Rs.21,000 (.5[Rs.30,000] +
.5[Rs.12,000]). But suppose you diversify by dividing your
time evenly between the two products. In that case, your
income will certainly be Rs.21,000, regardless of the weather.

If the weather is hot, you will earn Rs.15,000 from air
conditioner sales and Rs.6000 from heater sales; if it is cold,
you will earn Rs.6000 from air conditioners and Rs.15,000
from heaters. In this instance, diversification eliminates all
risk.
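The diversification arithmetic can be reproduced from table 1.4. A minimal Python sketch (the function names are ours):

```python
# Commission incomes from table 1.4 (Rs.), with equally likely weather states.
sales = {
    "hot":  {"air": 30_000, "heater": 12_000},
    "cold": {"air": 12_000, "heater": 30_000},
}

def income_by_state(share_on_air):
    # share_on_air: fraction of selling time spent on air conditioners.
    return {state: share_on_air * s["air"] + (1 - share_on_air) * s["heater"]
            for state, s in sales.items()}

def expected(inc):
    return 0.5 * inc["hot"] + 0.5 * inc["cold"]

only_air = income_by_state(1.0)   # income swings between 12,000 and 30,000
split = income_by_state(0.5)      # Rs.21,000 in both weather states

print(expected(only_air))   # 21000.0, but with large variability
print(expected(split))      # 21000.0, with no variability at all
```

Both strategies have the same expected income, but the even split delivers Rs.21,000 in every state, eliminating the risk.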
Of course, diversification is not always this easy. In
our example, heater and air conditioner sales are negatively
correlated variables—they tend to move in opposite directions;
whenever sales of one are strong, sales of the other are weak.
But the principle of diversification is a general one: As long
as you can allocate your resources toward a variety of
activities whose outcomes are not closely related, you can
eliminate some risk.
Insurance
We have seen that risk-averse people are willing to
pay to avoid risk. In fact, if the cost of insurance is equal to
the expected loss (e.g., a policy with an expected loss of
Rs.1000 will cost Rs.1000), risk-averse people will buy
enough insurance to recover fully from any financial losses
they might suffer.
Why do they buy insurance? The answer is implicit in our
discussion of risk aversion. Buying insurance assures a person
of having the same income whether or not there is a loss.
Because the
insurance cost is equal to the expected loss, this certain
income is equal to the expected income from the risky
situation. For a risk-averse consumer, the guarantee of the
same income regardless of the outcome generates more
utility than would be the case if that person had a high income
when there was no loss and a low income when a loss
occurred.
To clarify this point, let’s suppose a homeowner faces
a 10-percent probability that his house will be burglarized and
he will suffer a Rs.10,000 loss. Let’s assume he has Rs.50,000
worth of property and can insure fully at a premium equal to
the expected loss, Rs.1000. Note that expected wealth is the
same (Rs.49,000) whether or not he buys insurance. The
variability, however, is quite different: with no insurance the
standard deviation of wealth is Rs.3000; with insurance, it is
0. If there is no burglary, the uninsured homeowner gains
Rs.1000 relative to the insured homeowner. But with a
burglary, the uninsured homeowner loses Rs.9000 relative to
the insured homeowner. Remember: for a risk-averse
individual, losses count more (in terms of changes in utility)
than gains. A risk- averse homeowner, therefore, will enjoy
higher utility by purchasing insurance.
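The homeowner’s two situations can be summarized numerically. A Python sketch (the Rs.1000 fair premium is the expected loss, 0.1 × Rs.10,000):

```python
import math

def mean_sd(dist):
    # dist: list of (probability, wealth) pairs.
    mean = sum(p * w for p, w in dist)
    var = sum(p * (w - mean) ** 2 for p, w in dist)
    return mean, math.sqrt(var)

# Uninsured: a 10% chance of a Rs.10,000 burglary loss on Rs.50,000 of property.
uninsured = [(0.9, 50_000), (0.1, 40_000)]
# Insured at the actuarially fair premium of Rs.1,000 (the expected loss).
insured = [(0.9, 49_000), (0.1, 49_000)]

print(mean_sd(uninsured))   # (49000.0, 3000.0)
print(mean_sd(insured))     # (49000.0, 0.0)
```

Expected wealth is Rs.49,000 either way, but insurance drives the standard deviation from Rs.3000 down to zero, which a risk-averse homeowner values.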
The Value of Information
People often make decisions based on limited
information. If more information were available, one could
make better predictions and reduce risk. Because information
is a valuable commodity, people will pay for it. The value of
complete information is the difference between the expected
value of a choice when there is complete information and the
expected value when information is incomplete. To see how
information can be valuable, suppose you manage a clothing
store and must decide how many suits to order for the fall
season. If you order 100 suits, your cost is Rs.180 per suit. If

you order only 50 suits, your cost increases to Rs.200. You
know that you will be selling suits for Rs.300 each, but you
are not sure how many you can sell. All suits not sold can be
returned, but for only half of what you paid for them.
Without additional information, you will act on your belief
that there is a .5 probability that you will sell 100 suits and a
.5 probability that you will sell 50. Table 5.7 gives the profit
that you would earn in each of these two cases. Without
additional information, you would choose to buy 100 suits if
you were risk neutral, taking the chance that your profit
might be either Rs.12,000 or Rs.1500. But if you were risk
averse, you might buy 50 suits: In that case, you would know
for sure that your profit would be Rs.5000. With complete
information, you can place the correct order regardless of
future sales. If sales were going to be 50 and you ordered 50
suits, your profits would be Rs.5000. If, on the other hand,
sales were going to be 100 and you ordered 100 suits,
your profits would be Rs.12,000. Because both outcomes
are equally likely, your expected profit with complete
information would be Rs.8500.
The value of information is computed as
Expected value with complete information: Rs.8500
Less: Expected value with uncertainty (buy 100 suits): -6750
Equals: Value of complete information Rs.1750
Thus it is worth paying up to Rs.1750 to obtain an
accurate prediction of sales. Even though forecasting is
inevitably imperfect, it may be worth investing in a
marketing study that provides a reasonable forecast of next
year’s sales.
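The profit and expected-value computations above can be sketched as follows, using the prices, costs, return terms, and probabilities given in the text:

```python
# Profits and expected values for the suit-ordering example.
def profit(ordered, sold, price=300):
    cost = 180 if ordered == 100 else 200  # Rs.180/suit for 100, Rs.200 for 50
    sold = min(sold, ordered)
    unsold = ordered - sold
    # unsold suits are returned for half of what was paid for them
    return sold * price + unsold * (cost / 2) - ordered * cost

p = 0.5  # probability that demand turns out to be 100 suits (else 50)

ev_order_100 = p * profit(100, 100) + (1 - p) * profit(100, 50)  # 6750.0
ev_order_50 = p * profit(50, 100) + (1 - p) * profit(50, 50)     # 5000.0

# with complete information the correct order is always placed
ev_complete = p * profit(100, 100) + (1 - p) * profit(50, 50)    # 8500.0

value_of_information = ev_complete - max(ev_order_100, ev_order_50)  # 1750.0
```

The last line reproduces the Rs.1750 value of complete information: the expected value with complete information less the best expected value attainable under uncertainty.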
Flexibility
The observation that reduced flexibility increases the demand
for insurance has led to the belief that reduced flexibility
always makes individuals more ‘risk averse’. However, it is
possible for reduced flexibility to lead to less risk-averse
behaviour in the standard asset demand sense.
The state preference approach to choice under uncertainty
The “state-preference” approach to uncertainty was
introduced by Kenneth J. Arrow (1953) and further detailed
by Gérard Debreu (1959: Ch.7). It was made famous in the
late 1960s, with the work of Jack Hirshleifer (1965, 1966) in
the theory of investment and was advanced even further in
the 1970s with developments of Roy Radner (1968, 1972)
and others in finance and general equilibrium.
The basic principle is that choice under uncertainty can be
reduced to a conventional choice problem by changing the
commodity structure appropriately. The state-preference
approach is thus distinct from the conventional
“microeconomic” treatment of choice under uncertainty, such
as that of Von Neumann and Morgenstern (1944), in that
preferences are not formed over “lotteries” directly but,
instead, preferences are formed over state-contingent
commodity bundles. In its reliance on states and choices of
actions which are effectively functions from states to
outcomes, it is much closer in spirit to Leonard Savage
(1954). It differs from Savage in not relying on the
assignment of subjective probabilities, although such a
derivation can, if desired, be made.
The basic proposition of the state-preference approach
to uncertainty is that commodities can be differentiated not
only by their physical properties and location in space and
time but also by their location in “state”. By this we
mean that “ice cream when it is raining” is a different
commodity than “ice cream when it is sunny”; the two are thus
treated differently by agents and can command different
prices. Thus, letting S be the set of mutually-exclusive
“states of nature” (e.g. S = {rainy, sunny}), then we can index
every commodity by the state of nature in which it is received
and thus construct a set of “state-contingent” markets.
The state preference approach analytically reduces
uncertainty to certainty. It poses, however, a dilemma. It is
often of interest to associate with a utility function u(c) over
plans a utility function v(c; s) over consumption bundles
conditional on the realization of the state of nature. This is
necessary, for instance, in order to answer the question
whether the individuals find it desirable to revise their
choices after the uncertainty has been resolved, and the
related question of ex-ante versus ex-post optimality.

MODULE II
MARKET DEMAND FOR COMMODITIES

Market Demand
A market demand curve shows how much of a good
consumers overall are willing to buy as its price changes. In
this section, we show how market demand curves can be
derived as the sum of the individual demand curves of all
consumers in a particular market.
From Individual to Market Demand
To keep things simple, let’s assume that only three
consumers (A, B, and C) are in the market for coffee. Table
2.1 tabulates several points on each consumer’s demand
curve. The market demand, column (5), is found by adding
columns (2), (3), and (4), representing our three consumers,
to determine the total quantity demanded at every price.
When the price is Rs.3, for example, the total quantity
demanded is 2 + 6 + 10, or 18.
Table 2.1 Determining the Market Demand Curve
Price Individual A Individual B Individual C Market
(Rs.) (units) (units) (units) (units)
1 6 10 16 32
2 4 8 13 25
3 2 6 10 18
4 0 4 7 11
5 0 2 4 6
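The horizontal summation of Table 2.1 can be reproduced directly; a minimal sketch:

```python
# Horizontal summation of the three individual demand schedules of Table 2.1
demand_A = {1: 6, 2: 4, 3: 2, 4: 0, 5: 0}
demand_B = {1: 10, 2: 8, 3: 6, 4: 4, 5: 2}
demand_C = {1: 16, 2: 13, 3: 10, 4: 7, 5: 4}

market_demand = {price: demand_A[price] + demand_B[price] + demand_C[price]
                 for price in demand_A}
market_demand  # {1: 32, 2: 25, 3: 18, 4: 11, 5: 6}, matching column (5)
```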

Figure 2.1 shows these same three consumers’
demand curves for coffee (labelled DA, DB, and DC). In the
graph, the market demand curve is the horizontal summation
of the demands of each consumer. We sum horizontally to find
the total amount that the three consumers will demand at any
given price. For example, when the price is Rs.4, the
quantity demanded by the market (11 units) is the sum of the
quantity demanded by A (no units), by B (4 units), and by C
(7 units). Because all of the individual demand curves
slope downward, the market demand curve will also slope
downward. However, even though each of the individual
demand curves is a straight line, the market demand curve
need not be.
Figure 2.1

In the above figure the market demand curve is kinked
because one consumer makes no purchases at prices that the
other consumers find acceptable (those above Rs.4). Two
points should be noted as a result of this analysis:

1. The market demand curve will shift to the right as more
consumers enter the market.
2. Factors that influence the demands of many consumers
will also affect market demand. Suppose, for example,
that most consumers in a particular market earn more
income and, as a result, increase their demands for coffee.
Because each consumer’s demand curve shifts to the right, so
does the market demand curve. The aggregation of
individual demands into market demands is not just a
theoretical exercise. It becomes important in practice
when market demands are built up from the demands of
different demographic groups or from consumers located
in different areas. For example, we might obtain
information about the demand for home computers by
adding independently obtained information about the
demands of the following groups:
• Households with children
• Households without children
• Single individuals
Consider another example: India’s aggregate demand
for wheat is calculated by adding domestic demand (i.e., by
Indian consumers) and export demand (i.e., by foreign
consumers). That is, the demand for Indian wheat
has two components: domestic demand (by Indian
consumers) and export demand (by foreign consumers). So
the total demand for wheat can be obtained by aggregating the
domestic and foreign demands.

Domestic demand for wheat is given by the equation
QDD = 1430 - 55P
Where, QDD is the quantity in kilograms demanded
domestically, and P is the price in rupees. Export demand is
given by
QDE = 1470 - 70P
Where, QDE is the quantity in kilograms demanded from
abroad. To obtain the total demand for wheat, we add both the
equations, obtaining
QDD + QDE = (1430 - 55P) + (1470 - 70P)
= 2900 - 125P
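A small sketch of this aggregation follows. The clipping of each component at zero is our addition, not part of the text's equations: the simple sum Q = 2900 - 125P applies only while both components are positive.

```python
# Aggregate demand for wheat: domestic plus export demand.
# Clipping each component at zero is an added refinement; the simple sum
# 2900 - 125P from the text holds while both components are positive.
def total_wheat_demand(P):
    Q_dd = max(0.0, 1430 - 55 * P)   # domestic demand, QDD
    Q_de = max(0.0, 1470 - 70 * P)   # export demand, QDE
    return Q_dd + Q_de

total_wheat_demand(10)  # 880 + 770 = 1650, equal to 2900 - 125*10
total_wheat_demand(22)  # only domestic demand remains at this price
```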
Network Externalities
So far, we have assumed that people’s demands for a
good are independent of one another. In other words, Tom’s
demand for coffee depends on Tom’s tastes and income, the
price of coffee, and perhaps the price of tea. But it does not
depend on Dick’s or Harry’s demand for coffee. This
assumption has enabled us to obtain the market demand curve
simply by summing individuals’ demands. For some goods,
however, one person’s demand also depends on the demands
of other people. In particular, a person’s demand may be
affected by the number of other people who have
purchased the good. If this is the case, there exists a
network externality. Network externalities can be positive or
negative. A positive network externality exists if the
quantity of a good demanded by a typical consumer

increases in response to the growth in purchases of other
consumers. If the quantity demanded decreases, there is a
negative network externality.
Positive Network Externalities
One example of a positive network externality is word
processing. Many students use Microsoft Word in part
because their friends and many of their professors do as
well. That allows them to send and receive drafts without the
need to convert from one program to another. The more people
use a particular product or participate in a particular activity,
the greater the intrinsic value of that activity or product to
each individual. Social network websites provide another
good example. If I am the only member of that site, it will
have no value to me. But the greater number of people who
join the site, the more valuable it will become. If one social
networking site has a small advantage in terms of market
share early on, the advantage will grow, because new
members will prefer to join the larger site. A similar story
holds for virtual worlds and for multiplayer online games.
Another example of a positive network externality is
the bandwagon effect. Bandwagon effect is the desire to be
in style, to possess a good because almost everyone else has
it, or to indulge a fad. The bandwagon effect often arises
with children’s toys (video games, for example). In fact,
exploiting this effect is a major objective in marketing and
advertising toys. Often it is the key to success in selling
clothing.
Positive network externalities are illustrated in Figure
2.2, in which the horizontal axis measures the sales of a
product in thousands per month. Suppose consumers think
that only 20000 people have purchased a certain product.
Because this is a small number relative to the total population,
consumers will have little incentive to buy the product.
Some consumers may still buy it (depending on price), but
only for its intrinsic value. In this case demand is given by
the curve D20. (This hypothetical demand curve assumes that
there are no externalities.) Suppose instead that consumers
think 40000 people have bought the product. Now they find it
more attractive and want to buy more. The demand curve is
D40, which is to the right of D20. Similarly, if consumers think
that 60000 people have bought the product, the demand curve
will be D60, and so on. The more people consumers believe to
have purchased the product, the farther to the right the demand
curve shifts.
Figure 2.2

Ultimately, consumers will get a good sense of how
many people have in fact purchased a product. This number
will depend, of course, on its price. In the figure, for
example, we see that if the price were $30, then 40000
people would buy the product. Thus the relevant demand
curve would be D40. If the price were $20, 80000 people
would buy the product and the relevant demand curve would
be D80. The market demand curve is therefore found by
joining the points on the curves D20, D40, D60, D80, and D100
that correspond to the quantities 20000, 40000, 60000, 80000
and 100000.
Compared with the curves D20, etc., the market demand
curve is relatively elastic. To see why the positive externality
leads to a more elastic demand curve, consider the effect of a
drop in price from $30 to $20, with a demand curve of D40. If
there were no externality, the quantity demanded would
increase from 40000 to only 48000. But as more people buy
the product, the positive network externality increases the
quantity demanded further, to 80000. Thus, the positive
network externality increases the response of demand to price
changes i.e., it makes demand more elastic. As we’ll see later,
this result has important implications for producers’ pricing
strategies.
Negative Network Externalities
Network externalities are sometimes negative.
Congestion offers one example. When skiing, people prefer
short lines at ski lifts and fewer skiers on the slopes. As a
result, the value of a lift ticket at a ski resort is lower the more
people who bought the tickets. Likewise, for entry to an
amusement park, skating rink, or beach. Another example of a
negative network externality is the snob effect. Snob effect is
the desire to own an exclusive or unique good. The quantity
demanded of a “snob good” is higher the fewer people who
own it. Rare works of art, specially designed sports cars, and
made-to-order clothing are snob goods. The value one gets
from a painting or a sports car is partly the prestige, status,
and exclusivity resulting from the fact that few other people
own one like it.
Figure 2.3

Figure 2.3 illustrates how a negative network
externality works. We will assume that the product in
question is a snob good, so people value exclusivity. In
the figure, D2 is the demand curve that would apply if
consumers believed that only 2000 people used the good. If
they believe that 4000 people use the good, it would be
less exclusive, and so its value decreases. The quantity
demanded will therefore be lower; curve D4 applies.
Similarly, if consumers believe that 6000 people use the
good, demand is even smaller and D6 applies.
Eventually, consumers learn how widely owned the
good actually is. Thus, the market demand curve is found by
joining the points on curves D2, D4, D6 etc., that actually
correspond to the quantities 2000, 4000, 6000, etc.
Note that the negative network externality makes
market demand less elastic. To see why, suppose the price
was initially $30000 with 2000 people using the good. If
there were no externality, the quantity purchased would
increase to 14000 (along curve D2) when the price is lowered
to $15000. But the value of the good is greatly reduced if more
people own it. The negative network externality dampens the
increase in the quantity demanded, cutting it by 8000 units;
the net increase in sales is only to 6000 units.
For a variety of goods, marketing and advertising are
geared to creating a snob effect. (Think of Rolex watches.)
The goal is a very inelastic demand -which makes it possible
for firms to charge very high prices. Negative network
externalities can arise for other reasons. Consider the effect of
congestion in queues. Because people prefer short lines and
fewer skiers on the slopes, the value obtained from a lift ticket at
a ski resort is lower the more people there are who have
bought tickets. Likewise, for entry to an amusement park,
skating rink, or beach. Network externalities have been
crucial drivers for many modern technologies over many
years. Telephones, fax machines, email, Craigslist, Second
Life, and Twitter are just a few examples.
Veblen effect
Veblen Goods are a class of goods that do not strictly
follow the law of demand, which states that there exists an
inverse relationship between the price of a good or service
and the quantity demanded of that good or service. Veblen
goods violate the law of demand after prices have risen
above a certain level. It is named after American economist
and sociologist Thorstein Veblen, who studied the
phenomenon of conspicuous consumption in the late 19th
century.
The Veblen Effect is the positive impact of the price
of a commodity on the quantity demanded of that commodity.
It is abnormal market behaviour in which consumers purchase
higher priced goods even though similar lower priced (but not
identical) substitutes are available. It is caused either by the
belief that higher price means higher quality, or by the
desire for conspicuous consumption (to be seen as buying an
expensive, prestige item).

Figure 2.4

Consider the demand curve shown in Figure 2.4. As
the price of the commodity rises from P-A to P-B, the quantity
demanded of the commodity falls from A to B. As the price
of the commodity rises from P-B to P-C, the quantity
demanded of the commodity falls from B to C. Between prices
P-A and P-C, the law of demand holds and there exists an
inverse relationship between the price of a commodity and
demand for that commodity. However, for prices beyond P-
C, the Veblen Effect dominates the law of demand. As the
price rises from P-C to P-D, demand increases from C to D.
For all the prices above, P-C, the law of demand does not
hold, and there exists a positive relationship between the
price of a commodity and demand for that commodity.

Reasons for the Veblen Effect
1. Perception of quality
In Veblen’s analysis of conspicuous consumption, the
economist noted that for certain luxury goods and services, a
higher price was often associated with the perception of
higher quality. Therefore, a price increase was seen as
evidence of the producer improving quality. For example,
the demand for a designer handbag rises with an increase in
its price. The price increase is viewed by consumers as
evidence that the producer of the designer handbag has
improved the quality of the handbag.
2. Positional goods
Veblen goods are often positional goods. The quantity
demanded of a positional good depends on how the good is
distributed in society. Veblen goods often exhibit a negative
positional effect, i.e., the quantity demanded of a Veblen good
increase with a reduction in the distribution of the good. It
occurs because the utility gained by a consumer from holding
such a good arises purely from the fact that few other
consumers hold it. For example, the utility gained by a
consumer from owning a diamond-encrusted handbag might
arise primarily from the fact that few other people in society
can afford to own such an object. Thus, for this consumer,
the diamond-encrusted handbag acts as a positional good.
Empirical Estimation of Demand
Now we can discuss how demand information is
used as input into a firm’s economic decision-making
process. Knowledge about demand is also important for public
policy decisions. Understanding the demand for oil, for
instance, can help the ministry to decide whether to pass an
oil import tax or not. Firms often rely on market
information based on actual studies of demand. Properly
applied, the statistical approach to demand estimation can help
researchers sort out the effects of variables, such as income
and the prices of other products, on the quantity of a product
demanded.
Table 2.2
Year Quantity (Q) Price (P) Income (I )
2010 4 24 10
2011 7 20 10
2012 8 17 10
2013 13 17 17
2014 16 10 27
2015 15 15 27
2016 19 12 20
2017 20 9 20
2018 22 5 20

For example, Table 2.2 shows the quantity of
apples sold in a market each year. Information about the
market demand for apples would be valuable to an
organization representing growers because it would allow
them to predict sales on the basis of their own estimates of
price and other demand-determining variables. Let’s suppose
that, focusing on demand, researchers find that the quantity
of apple produced is sensitive to weather conditions but not
to the current market price (because farmers make their
planting decisions based on last year’s price).
The price and quantity data from Table 2.2 are graphed
in Figure 2.5. If we believe that price alone determines
demand, it would be plausible to describe the demand for
the product by drawing a straight line (or other appropriate
curve),
Q = a - bp,
which “fit” the points as shown by demand curve D.
Figure 2.5

The curve D really represents the demand for the
product only if no important factors other than price affect
demand. In Table 2.2, however, we have included data for one
other variable: the average income of purchasers of the
product. Note that income (I) has increased twice during the
study, suggesting that the demand curve has shifted twice.
Thus demand curves d1, d2, and d3 in Figure 2.5 give a more
likely description of demand. This linear demand curve would
be described algebraically as
Q = a - bP + cI
The income term in the demand equation allows the
demand curve to shift in a parallel fashion as income
changes. The above discussed demand relationships are
straight lines because the effect of a change in price on
quantity demanded is constant.
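A least-squares fit of this demand equation to the Table 2.2 data can be sketched with the standard library alone. This is a hypothetical illustration: the coefficients below are whatever ordinary least squares yields for these nine observations, not estimates quoted in the text.

```python
# Hypothetical least-squares fit of Q = a + b*P + c*I to the Table 2.2 data,
# solving the normal equations (X'X)beta = X'y by Gauss-Jordan elimination.
Q = [4, 7, 8, 13, 16, 15, 19, 20, 22]
P = [24, 20, 17, 17, 10, 15, 12, 9, 5]
I = [10, 10, 10, 17, 27, 27, 20, 20, 20]

X = [[1.0, p, i] for p, i in zip(P, I)]  # columns: intercept, price, income

def least_squares(A, y):
    """Solve (A'A) beta = A'y by Gauss-Jordan elimination."""
    n = len(A[0])
    rows = range(len(A))
    M = [[sum(A[r][i] * A[r][j] for r in rows) for j in range(n)]
         + [sum(A[r][i] * y[r] for r in rows)] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

a, b_price, c_income = least_squares(X, Q)
# b_price comes out negative (demand slopes down) and c_income positive
# (higher income shifts the demand curve to the right)
```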
Linear Demand Curve
A linear demand curve is the graphical representation
of the relationship between the price of a good and the
quantity of that good that consumers are willing to buy at
each price at a point in time. The slope, or the rate at which
the line rises or falls, is equal to the difference between two
quantities of the product (usually represented on the
horizontal axis of the graph) divided by the difference
between the corresponding prices (usually on the vertical
axis). That is, a linear demand curve simply refers to a
demand curve whose shape is linear. By the law of demand,
when the price goes up, quantity demanded decreases; when
the price decreases, quantity demanded increases. Hence the
demand curve has a negative gradient while being linear. In
short, a linear demand curve is a straight line that represents
demand in a market.
Figure 2.6

Demand is often represented by a function Q(P),
where the input P is the price of a product, and Q is the
quantity demanded. In this example diagram, the equation
for demand is Q = 40-P.
Features
1. Linear curves rarely exist in the real world because demand
depends in large part on elasticity of demand, or how
consumers react to a change in price.
2. Also, the relationship between demand and price is not
always constant. Some products are in demand regardless
of price. For instance, customers probably use about the
same amount of electricity regardless of price because it is
essential to living. On the other hand, televisions are a
luxury, so consumers usually become exponentially more
willing to buy a unit as the price drops.
The price elasticity of demand is not the same along
a downward sloping linear demand curve. On a linear
demand curve, the price elasticity of demand varies
depending on the interval over which we are measuring it.
For any linear demand curve, the absolute value of the price
elasticity of demand will fall as we move down and to the
right along the curve. For a linear demand curve sloping
downward from left to right, the price elasticity of demand
starts from infinity (∞) at the point where the demand curve
cuts the price axis and falls as we move downward to the right
along the curve to zero at the point where the demand curve
cuts the quantity axis.
Figure 2.7

At the mid-point of the demand curve, demand is
unit elastic. Between the price intercept and mid-point of the
demand curve, demand is elastic while between the mid-
point and the quantity intercept, demand is inelastic.
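This variation of elasticity along a straight-line demand curve is easy to verify numerically; a sketch using the curve Q = 40 - P of Figure 2.6:

```python
# Point elasticity along a linear demand curve Q = a - b*P (here the
# Figure 2.6 curve Q = 40 - P): e = (dQ/dP) * (P/Q) = -b*P / (a - b*P).
def point_elasticity(P, a=40, b=1):
    Q = a - b * P
    return -b * P / Q

point_elasticity(20)  # -1.0: unit elastic at the mid-point
point_elasticity(30)  # -3.0: elastic on the upper half of the curve
point_elasticity(10)  # about -0.33: inelastic on the lower half
```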

Constant Elasticity Demand Function
Constant elasticity function is a function that exhibits
a constant elasticity, i.e. has a constant elasticity coefficient.
The elasticity is the ratio of the percentage change in the
dependent variable to the percentage causative change in the
independent variable, in the limit as the changes approach
zero in magnitude. The most commonly used form of demand
function in applied research has been the ‘constant-elasticity’
type
Qx = b0·Px^b1·P0^b2·Y^b3·e^(b4t)
Where,
Qx = quantity demanded of commodity x
Px = price of x
P0 = prices of other commodities
Y = consumers’ aggregate income
e^(b4t) = a trend factor for tastes (e = base of natural logarithms)
b1 = price elasticity of demand
b2 = cross-elasticity of demand
b3 = income elasticity of demand
The term constant elasticity demand function is due to
the fact that in this form the coefficients b1, b2, b3 are
elasticities of demand which are assumed to remain constant.
For a log-linear demand curve Q = A·P^(-B), the price elasticity
equals -B everywhere on the demand curve. Hence, another name for
constant-elasticity demand curves is log-linear (or power)
demand curves. We now see that the constant elasticity
demand curve is linear in logarithms. Furthermore, β1, the
slope of this linear in logarithms demand curve, equals εd,
the price elasticity of demand. As εd equals the slope of a

74
ECO1C01 - MICROECONOMICS: THEORY AND APPLICATIONS-I

linear curve, which is a constant, it follows that the price


elasticity of demand is constant; hence the name of this
demand curve. Again, given one point on the constant
elasticity demand curve and an estimate of its elasticity, the
whole curve can be plotted straightforwardly.
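A quick numerical check that such a function has the same elasticity at every price; the values of A and B are arbitrary illustrations:

```python
# Numerical check that Q = A * P**(-B) has constant elasticity -B.
# A and B are arbitrary illustrative values.
A, B = 100.0, 1.5

def Q(P):
    return A * P ** (-B)

def elasticity(P, dP=1e-6):
    # finite-difference approximation to (dQ/dP) * (P/Q)
    return ((Q(P + dP) - Q(P)) / dP) * (P / Q(P))

[round(elasticity(P), 3) for P in (1.0, 5.0, 20.0)]  # each close to -1.5
```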
Note that the area under a constant elasticity demand
curve from quantity q0 to quantity q1 is given exactly by:
Area = (p1·q1 - p0·q0)/ρ
Where, ρ = [1 + (1/β1)] and p0 and p1 are the prices at which
q0 and q1 are demanded.
Linear Demand versus Constant-Elasticity Demand
Linear demand functions and constant-elasticity (log-
linear) demand functions are the simplest classes of demand
functions. Neither is a true representation of real world
demand functions (which are too complicated to work with or
measure exactly), but each is a useful approximation. One
might say that linear functions are too straight and constant
elasticity functions are too curved, with the curvature of
real-world demand functions lying between the two.
A linear demand curve is easy to draw and work
with. Furthermore, it has the property that demand becomes
more elastic moving up the demand curve, which holds in
the real world and is important for certain qualitative
conclusions. One cannot accurately estimate an entire demand
curve; with constant-elasticity demand, when we estimate the
linear regression equation log(Q) = log(A) - B·log(P), the
coefficient B is simply the elasticity. Any such estimation is an
approximation, because (a) there are other variables that affect
demand and (b) the relationship between the included variables
and demand is not perfectly linear or log-linear.
Dynamic Versions of Demand Functions: Distributed-lag
models of demand
A recent development in demand studies is the
expression of demand functions in dynamic form. Dynamic
demand functions include lagged values of the quantity
demanded and of income as separate variables influencing
the demand in any particular period. Dynamisation of the
demand functions expresses the generally accepted idea that
current purchasing decisions are influenced by past
behaviour. Models (functions), including lagged values of
demand, of income (or of other variables) are called
‘distributed-lag models’. In general form a distributed-lag
model may be expressed as
Qx(t) = f{Px(t), Px(t-1), ..., Qx(t-1), Qx(t-2), ..., Y(t), Y(t-1), ...}
The number of lags depends on the particular
relationship being studied. The necessity of a dynamic
approach has long been recognised for the study of the
demand of certain commodities (consumer durables). R.
Stone extended the dynamic formulation to a wider range of
commodities. Houthakker and Taylor generalised the
dynamisation of demand functions.
A widely used model, both in demand functions and
in investment functions, is the model based on the ‘stock-
adjustment principle’, which has been developed by Nerlove.
This model has initially been applied to the study of
demand functions for consumer durables. Recently
Houthakker and Taylor have extended the ‘stock-
adjustment principle’ to non- durables, giving it the name
‘habit-creation principle’.
Nerlove’s ‘Stock-Adjustment Principle’
The model as applied to consumer durables results in a
demand function of the form
Q(t) = a1 Y(t) + a2 Q(t-1)
The model is derived as follows. There is a desired
level of durables Q*(t) which is determined by the current
level of income:
Q*(t) = b Y(t)
However, the consumer cannot immediately acquire
the desired level of durables due to limited income, credit
limitations, etc. Thus in each period the consumer acquires
only a part of the desired level. In other words, the acquisition
of the desired level of durables is gradual; in each period we
come closer to Q*(t).
In each period we purchase a certain quantity Q(t).
There is an actual change from the quantity bought in the
previous period denoted by the difference Q(t) –Q(t-1). This
change in actual purchases is only a fraction k of the desired
change, Q*(t) –Q (t-1). Thus

[Q(t) - Q(t-1)] = k [Q*(t) - Q(t-1)]
 (actual change)     (desired change)
Where, k is the coefficient of stock adjustment. The value of k
lies between zero and one. If in this expression of stock
adjustment we substitute for Q*(t) we obtain
Q(t) - Q(t-1) = k (b Y(t) - Q(t-1))
Rearranging we have
Q(t) = (kb) Y(t) + (1 - k) Q(t-1)
Setting kb = a1 and (1 - k) = a2 we obtain the final form of the
stock-adjustment model
Q(t) = a1 Y(t) + a2 Q(t-1)
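The adjustment process can be simulated in a few lines. All parameter values below are made up for the sketch:

```python
# Illustrative simulation of Nerlove's stock-adjustment process: each
# period's purchases close a fraction k of the gap between the desired
# level Q*(t) = b*Y(t) and last period's level. Parameters are made up.
b, k = 0.8, 0.4
income = [100, 100, 120, 120, 120]

Q_prev = 0.0
path = []
for Y in income:
    Q_star = b * Y                       # desired level, Q*(t) = b*Y(t)
    Q = Q_prev + k * (Q_star - Q_prev)   # Q(t) - Q(t-1) = k*(Q*(t) - Q(t-1))
    # equivalently Q(t) = (k*b)*Y(t) + (1-k)*Q(t-1), the estimated form
    path.append(round(Q, 2))
    Q_prev = Q

path  # purchases rise gradually toward the desired level b*Y
```

The simulated path climbs toward b·Y rather than jumping to it, which is exactly the gradual acquisition of the desired stock described above.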
Houthakker’s and Taylor’s dynamic model
Their model is based on Nerlove’s formulation. They
extended the idea of stock adjustment to non-durables. The
current demand for durables depends on, among other things,
the stock of such commodities (stock adjustment process).
The current demand for non-durables depends on, among
other things, the purchases of the commodities in the past,
because by consuming a certain commodity we get
accustomed to it (habit-formation process). The demand
function is of the form
Qt = a0 + a1Pt + a2∆Pt + a3Yt + a4∆Yt + a5Qt-1
Where, ∆Yt is the change in income and ∆Pt is the change in
price between period t and t - 1. The demand function is
derived as follows.
Demand in any particular period depends on price, on
stocks of the commodity and on the current level of income
Qt = b0+ b1Pt+ b2 St+ b3Yt
where,
St = stocks of durables, if the function refers to such goods
St, = ‘stocks of habits’, if the function refers to non-durables
The sign of the coefficient of S will be negative for
durables: the more we have of furniture, electrical appliances,
etc., the less our demand for such commodities will be. The
sign of the coefficient of S will be positive for non-durables:
the higher our purchases of non-durables the stronger our
habit becomes.
Stocks S, however, cannot be measured: (i) The
stock of durables is composed of heterogeneous items of
various ages- the electrical equipment we have is not of the
same age, some items may be very old and need scrapping
and replacing, some others are new. Their heterogeneity also
makes direct measurement difficult. What we ideally want for
stocks is the sum of depreciated inventories of durables; but
the appropriate depreciation rates are not known. (ii) The
‘stock of habits’ is a psychological variable and cannot be
quantified.
However, we can eliminate algebraically stocks, S,
from the demand function and replace it with other
measurable variables by making some ‘reasonable’
assumptions.
For durables the elimination process may be outlined as
follows:
(1) The net change in stocks realised in any period (St -
St-1) is equal to our purchases in that period minus the
depreciation of our old possessions:
St - St-1 = Qt - depreciation
(2) Assume that depreciation is equal in all the periods of
the life of the durable, i.e.
Depreciation = δ St
Where, δ is a constant depreciation rate (for example, if
the life of the durable is ten years, we assume that the yearly
depreciation is 10 per cent of the value of the durable). Thus
(St - St-1) = Qt - δ St
(3) From the demand function
Qt = b0+ b1Pt+ b2 St+ b3 Yt
Solving for St we obtain

St = (1/b2)(Qt - b0 - b1Pt - b3Yt)

Substituting this value in the right hand side of the equation (St
- St-1) = Qt - δ St we have

(St - St-1) = Qt - (δ/b2)(Qt - b0 - b1Pt - b3Yt)

(4) Since the relation Qt = b0+ b1Pt+ b2 St+ b3Yt holds for
period t, the relationship


Qt-1 = b0+ b1Pt-1+ b2 St-1+ b3Yt-1


will hold for period t - 1.
Subtracting these two equations we have
Qt - Qt-1 = b1 (Pt -Pt-1) + b2 (St -St-1) + b3 (Yt - Yt-1)

Substituting the expression (St - St-1) = Qt - (δ/b2)(Qt - b0 - b1Pt - b3Yt) into the equation Qt - Qt-1 = b1(Pt - Pt-1) + b2(St - St-1) + b3(Yt - Yt-1), rearranging, and writing a0, a1, a2, a3, a4 and a5 for the coefficients of the resulting equation, we arrive at the final form of Houthakker and Taylor’s formulation
Qt = a0 + a1Pt + a2(∆Pt) + a3Yt + a4(∆Yt) + a5Qt-1
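The dynamics of this final form can be illustrated with a short numerical sketch. All coefficient values, prices and incomes below are hypothetical, chosen only to show how the lagged term a5Qt-1 carries habit (or stock) effects forward.

```python
# Simulating the Houthakker-Taylor demand function
#   Qt = a0 + a1*Pt + a2*dPt + a3*Yt + a4*dYt + a5*Q(t-1)
# with hypothetical coefficients; a5 > 0 plays the role of the
# habit term for non-durables.

def demand_path(a, prices, incomes, Q0):
    """Return demand in periods 1..T-1 given series for t = 0..T-1."""
    a0, a1, a2, a3, a4, a5 = a
    Q_prev = Q0
    path = []
    for t in range(1, len(prices)):
        dP = prices[t] - prices[t - 1]    # price change between t and t-1
        dY = incomes[t] - incomes[t - 1]  # income change between t and t-1
        Qt = (a0 + a1 * prices[t] + a2 * dP
              + a3 * incomes[t] + a4 * dY + a5 * Q_prev)
        path.append(Qt)
        Q_prev = Qt                       # last period's demand feeds forward
    return path

coeffs = (10.0, -0.5, -0.2, 0.3, 0.1, 0.6)   # hypothetical a0..a5
prices = [20, 20, 18, 18, 18]
incomes = [100, 100, 100, 110, 110]
path = demand_path(coeffs, prices, incomes, Q0=70.0)
print(path)
```

With these invented numbers, the price cut in period 2 and the income rise in period 3 raise demand not only at once but in every later period, because last period’s quantity re-enters through the lagged term.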
Linear Expenditure System
These are models which deal with groups of
commodities rather than individual commodities. Such groups,
when added, yield total consumer expenditure. Linear
expenditure systems are thus of great interest in aggregate
econometric models, where they provide desirable
disaggregation of the consumption function. One of the
earliest linear expenditure models was suggested by R. Stone
(Economic Journal, 1954). The linear expenditure systems
(LES) are usually formulated on the basis of a utility
function, from which demand functions are derived in the
normal way (by maximisation of the utility function
subject to a budget constraint). In this respect the approach
of LES is the same as that of models based on
indifference curves. However, LES differ in that they are
applied to ‘groups of commodities’ between which no


substitution is possible, while the indifference curves approach


is basically designed for handling commodities which are
substitutes. The very notion of an indifference curve is the
substitutability of the commodities concerned. Actually the
indifference map of a LES would appear as in figure 2.8,
implying the non-substitutability of groups of
commodities.
Figure 2.8
Indifference Map for Complementary Goods

The utility function is additive, that is, total utility (U)


is the sum of the utilities derived from the various groups of
commodities. For example, assume that all the commodities
bought by the consumers are grouped in five categories:
A - Food and beverages
B - Clothing
C - Consumers’ durables
D - Household-operation expenses
E - Services (transport, entertainment, etc.).
The total utility is


U = ΣUi
or
U=U(A)+U(B)+U(C)+U(D)+U(E)
Additivity implies that the utilities of the various groups are
independent, that is, there is no possibility of substitution (or
complementarity) between the groups A, B, C, D and E.
In linear expenditure systems the commodities bought
by the consumers are grouped in broad categories, so as to be
compatible with the additivity postulate of the utility function.
Thus each group must include all substitutes and
complements. In this way substitution between groups is
ruled out, but substitution can occur within each group.
The consumers buy some minimum quantity from
each group, irrespective of prices. The minimum quantities
are called ‘subsistence quantities’ because they are the
minimum requirements for keeping the consumer alive. The
income left (after the expenditure on the minimum quantities
is covered) is allocated among the various groups on the basis
of prices.
The income of the consumer is, therefore, split into
two parts: the ‘subsistence income’, which is spent for the
acquisition of the minimum quantities of the various
commodities, and the ‘supernumerary income’, the income
left after the minimum expenditures are covered.
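This two-step allocation, subsistence quantities first and then fixed shares of the supernumerary income, can be sketched in code; the prices, subsistence quantities and shares below are hypothetical.

```python
# A minimal sketch of a linear expenditure system (Stone-Geary):
# each group i first gets its subsistence quantity gamma_i, and the
# supernumerary income is split across groups in fixed shares beta_i.

def les_demand(prices, gammas, betas, income):
    assert abs(sum(betas) - 1.0) < 1e-9, "shares must sum to one"
    subsistence_cost = sum(p * g for p, g in zip(prices, gammas))
    supernumerary = income - subsistence_cost  # income left after the minima
    return [g + b * supernumerary / p
            for p, g, b in zip(prices, gammas, betas)]

prices = [2.0, 4.0, 10.0]   # e.g. food, clothing, durables (hypothetical)
gammas = [5.0, 2.0, 0.5]    # subsistence quantities
betas = [0.5, 0.3, 0.2]     # shares of supernumerary income
q = les_demand(prices, gammas, betas, income=100.0)
print(q)
print(sum(p * qi for p, qi in zip(prices, q)))  # spending exhausts income
```

Subsistence spending here is 23, so 77 of supernumerary income remains to be allocated in the proportions 0.5 : 0.3 : 0.2; total spending adds back up to the full income of 100.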
Characteristic Approach to Demand Function
Characteristics demand theory states that consumers

derive utility not from the actual contents of the basket but
from the characteristics of the goods in it. This theory was
developed by Kelvin Lancaster in 1966 in his working paper
“A New Approach to Consumer Theory”.
This approach allows us to predict how preferences
will change when we change the options or baskets presented
to consumers by studying how these vary according to the
change in the characteristics that make them up. With
conventional theory, the introduction of a new option meant
that we could not reliably predict how this would slot into the
consumer’s preference map. However, by relying on a study
of the characteristics rather than the goods or service
involved, we can predict how changes will affect a
consumer’s behaviour without needing to start afresh empirically.
This allows us to calculate ‘shadow prices’ for
different attributes without having a price for the good itself
by associating utility to the characteristics that make up the
good rather than the good itself. With these ‘shadow prices’,
we can solve utility maximisation problems for baskets or
options for which we do not have empirical evidence, as
Lancaster demand also lends itself to building utility
functions (based on the amount of each type of characteristic
rather than the amount of each type of good in a particular
basket).
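A toy version of this idea can be written down directly: define utility over the characteristics a basket delivers rather than over the goods themselves. The characteristics matrix and the Cobb-Douglas form below are purely hypothetical.

```python
# Lancaster-style sketch: two brands are bundles of two
# characteristics (say durability and style); utility depends on
# the characteristics totals, not on the goods directly.

# characteristics delivered per unit of brand 1 and brand 2
B = [[3.0, 1.0],   # durability per unit
     [1.0, 3.0]]   # style per unit

def characteristics(basket):
    """Total characteristics (z1, z2) delivered by basket (q1, q2)."""
    q1, q2 = basket
    return (B[0][0] * q1 + B[0][1] * q2,
            B[1][0] * q1 + B[1][1] * q2)

def utility(basket):
    """Cobb-Douglas utility defined over characteristics."""
    z1, z2 = characteristics(basket)
    return z1 ** 0.5 * z2 ** 0.5

# A mixed basket gives a balanced characteristics bundle, so with
# these numbers it beats spending the whole budget on one brand.
print(utility((2.0, 2.0)), utility((4.0, 0.0)), utility((0.0, 4.0)))
```

This is the sense in which Lancaster demand supports a preference for variety: the mixed basket (2, 2) dominates either all-brand-1 or all-brand-2 because it delivers a better mix of characteristics.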
Characteristic demand theory also helps justify the
existence of brands. Luxury brands are able to charge a price premium for their products by differentiating themselves from competitors that sell similar goods. In the first diagram

of figure 2.9, if we suppose that both brands have the same


characteristics and are perfect competitors, then we will
choose the basket that maximises our total consumption.
This means we will tend to opt for the cheaper brand, which
allows us to reach the highest utility curve: for a given
amount of money, we are able to buy either a certain amount
of brand 1 (point B) or a certain amount of brand 2 (point
A). We choose A since it’s on a higher indifference curve.
Point C represents a higher utility curve achieved by a drop
in the price of brand 1. However, even though brand 1 got
cheaper, we’ll still consume A, since it remains on a higher
indifference curve.
Figure 2.9


In the second diagram, if we look at Lancaster demand, our


utility functions will be based on the characteristics that each
basket contains rather than on the amount of each type of
good. Here, it is no longer ‘all or nothing’: we can allow for convex indifference curves that represent our preference for variety in consumption, as at point C. This time, if the price of one brand drops, we will change our outcome: we can opt for point D.


MODULE III
THEORY OF PRODUCTION AND COSTS

Production Function
The production function is a purely technical relation which connects factor inputs and outputs. It describes the laws of proportion, that is, the transformation of factor inputs into products (outputs) at any particular time period. The production function represents the technology of a firm, of an industry, or of the economy as a whole. The production function includes all the technically efficient methods of production.
A method of production (process, activity) is a
combination of factor inputs required for the production of
one unit of output. Usually a commodity may be produced by
various methods of production. The theory of production
describes the laws of production. The choice of any particular
technique (among the set of technically efficient processes)
is an economic one, based on prices, and not a technical one.
We note here that a technically efficient method is not
necessarily economically efficient. There is a difference
between technical and economic efficiency. An isoquant
includes (is the locus of) all the technically efficient methods
(or all the combinations of factors of production) for
producing a given level of output. The production isoquant
may assume various shapes depending on the degree of
substitutability of factors.


Production function shows the relationship between


the output and inputs. In other words it shows that output is a
function of inputs (land, labour, capital and entrepreneur) and
how the change in inputs affects the change in the output.
The general mathematical form of the production function is
Y = f(L, K, R, S, V, γ)
Where, Y = output
L = labour input
K = capital input
R = raw materials
S = land input
V = returns to scale
γ = efficiency parameter.
All variables are flows, that is, they are measured per
unit of time. In its general form the production function is a
purely technological relationship between quantities of
inputs and quantities of output. Prices of factors or of the
product do not enter into the production function. They are
used only for the production decision of the firm or other
economic entities. However, in practice it has been observed
that raw materials bear a constant relation to output at all
levels of production. For example, the number of bricks for a
given type of house is constant, irrespective of the number of
houses built; similarly the metal required for a certain type of
car is constant, irrespective of the number of cars produced.
This allows the subtraction of the value of raw materials
from the value of output, and the measurement of output in
terms of value added.

The input of land, S, is constant for the economy as a


whole, and hence does not enter into an aggregate production
function. However, S is not constant for individual sectors or
for individual firms. In these cases land-inputs are lumped
together with machinery and equipment, in the factor K.
Thus the production function in traditional economic
theory assumes the form
X = f(L, K, V, γ)
The factor V, ‘returns to scale’, refers to the long-run
analysis of the laws of production, since it assumes change in
the plant. It will be discussed in detail in a subsequent
section. The efficiency parameter, γ, refers to the
entrepreneurial-organisational aspects of production. Two
firms with identical factor inputs (and the same returns to
scale) may have different levels of output due to differences
in their entrepreneurial and organisational efficiency.
Laws of Production
The laws of production describe the technically
possible ways of increasing the level of production. Output
may increase in various ways. Output can be increased by
changing all factors of production. Clearly this is possible
only in the long run. Thus the laws of returns to scale refer to
the long-run analysis of production. In the short run output
may be increased by using more of the variable factor(s), while
capital (and possibly other factors as well) are kept constant.
The marginal product of the variable factor(s) will decline
eventually as more and more quantities of this factor are


combined with the other, constant factors. The short-run production function is therefore also known as the single-variable production function and is written as Q = f(K̄, L), where the bar on K indicates that capital is fixed and the producer cannot add more capital in the short run, whereas labour, L, is variable. Hence, the law of production under short-run conditions is called ‘the law of variable proportions’, the ‘law of returns to a variable input’ or the ‘law of diminishing marginal returns’; in the long run it is known as the ‘law of returns to scale’. We will first examine the long-run laws of returns to scale.
Laws of Returns to Scale: Long-Run Analysis of
Production
In the long run expansion of output may be achieved by
varying all factors. In the long run all factors are variable. The
laws of returns to scale refer to the effects of scale
relationships. In the long run output may be increased by
changing all factors by the same proportion, or by different
proportions. Traditional theory of production concentrates on
the first case, that is, the study of output as all inputs change
by the same proportion. The term ‘returns to scale’ refers to
the changes in output as all factors change by the same
proportion. That is the rate at which output increases as
inputs are increased proportionately. Suppose we start from
an initial level of inputs and output
X0 = f(L, K)


and we increase all the factors by the same proportion k. We


will clearly obtain a new level of output X*, higher than the
original level X0,
X* = f (kL, kK)
If X* increases by the same proportion k as the inputs,
we say that there are constant returns to scale.
If X* increases less than proportionally with the
increase in the factors, we have decreasing returns to scale.
If X* increases more than proportionally with the
increase in the factors, we have increasing returns to scale.
We will examine three different cases: increasing,
constant, and decreasing returns to scale.
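These three cases can be checked numerically for any production function f(L, K): scale both inputs by k and compare f(kL, kK) with k·f(L, K). The sample functions below are hypothetical Cobb-Douglas forms with exponent sums below, equal to and above one.

```python
# A small numerical check of returns to scale: scale both inputs
# by k and compare f(kL, kK) with k * f(L, K).

def returns_to_scale(f, L, K, k=2.0, tol=1e-9):
    scaled = f(k * L, k * K)          # output when inputs scale by k
    proportional = k * f(L, K)        # output if returns were constant
    if abs(scaled - proportional) < tol:
        return "constant"
    return "increasing" if scaled > proportional else "decreasing"

# Hypothetical examples (exponent sums 0.6, 1.0 and 1.6):
print(returns_to_scale(lambda L, K: L**0.3 * K**0.3, 4, 9))  # decreasing
print(returns_to_scale(lambda L, K: L**0.5 * K**0.5, 4, 9))  # constant
print(returns_to_scale(lambda L, K: L**0.8 * K**0.8, 4, 9))  # increasing
```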
Increasing Returns to Scale
If output more than doubles when inputs are doubled,
there are increasing returns to scale. This might arise
because the larger scale of operation allows managers and
workers to specialize in their tasks and to make use of
more sophisticated, large-scale factories and equipment. The
automobile assembly line is a famous example of increasing
returns. The prospect of increasing returns to scale is an
important issue from a public policy perspective. If there are
increasing returns, then it is economically advantageous to
have one large firm producing (at relatively low cost) rather
than to have many small firms (at relatively high cost).
Because this large firm can control the price that it sets, it
may need to be regulated. For example, increasing returns in


the provision of electricity is one reason why we have


large, regulated power companies.
Figure 3.1

In the figure (3.1), the firm’s production function


exhibits increasing returns to scale. Now the isoquants come
closer together as we move away from the origin along 0A.
As a result, less than twice the amount of both inputs is
needed to increase production from 10 units to 20;
substantially less than three times the inputs are needed to
produce 30 units.
Constant Returns to Scale
A second possibility with respect to the scale of
production is that output may double when inputs are
doubled. In this case, we say there are constant returns to
scale. With constant returns to scale, the size of the firm’s
operation does not affect the productivity of its factors:
Because one plant using a particular production process can


easily be replicated, two plants produce twice as much


output. For example, a large travel agency might provide the
same service per client and use the same ratio of capital (office
space) and labor (travel agents) as a small agency that services
fewer clients.
Figure 3.2

In the figure (3.2), the firm’s production function


exhibits constant returns to scale. When 5 hours of labour
and 2 hours of machine time are used, an output of 10
units is produced.
When both inputs double, output doubles from 10 to
20 units; when both inputs triple, output triples, from 10 to 30
units. Put differently, twice as much of both inputs is needed to
produce 20 units, and three times as much is needed to produce
30 units.


Decreasing Returns to Scale


Finally, output may less than double when all inputs
double. This case of decreasing returns to scale applies to
some firms with large-scale operations. Eventually,
difficulties in organizing and running a large-scale operation
may lead to decreased productivity of both labour and
capital. Communication between workers and managers can
become difficult to monitor as the workplace becomes more
impersonal. Thus, the decreasing-returns case is likely to be
associated with the problems of coordinating tasks and
maintaining a useful line of communication between
management and workers.
Figure 3.3

In the figure (3.3), the firm’s production function


exhibits decreasing returns to scale. Now the isoquants grow farther apart as we move away from the origin along 0R. That is, with decreasing returns, the isoquants are increasingly distant


from one another as output levels increase proportionally.


Returns to scale vary considerably across firms and
industries. Other things being equal, the greater the returns
to scale, the larger the firms in an industry are likely to
be. Because manufacturing involves large investments in
capital equipment, manufacturing industries are more likely to
have increasing returns to scale than service-oriented
industries. Services are more labor-intensive and can usually
be provided as efficiently in small quantities as they can on a
large scale.
Law of Returns to Variable Proportions (Short-Run Law of Production)
Assumptions of law of variable proportions:
• Technology is given
• Homogeneous labour
• Capital is fixed/constant

The law of returns to a variable proportion states that when output is increased by using more of only one variable input, all other inputs being fixed, output initially increases at an increasing rate, then at a constant rate, and finally at a diminishing rate. In other words, if more and more labour is used, output first increases at an increasing rate, then at a constant rate, and then, as labour is increased further, at a diminishing rate. The underlying law is that the marginal increase in total output eventually


decreases when additional units of a variable factor are


applied to a given quantity of fixed factors. Accordingly,
there are three laws of returns to variable inputs (i) the law of
increasing returns (ii) the law of constant returns and (iii) the
law of diminishing returns.
In order to understand this law it is important to
understand few concepts first. Total product (TPL) is defined
as the total output produced with the help of labour. Average
product (APL) is defined as total product divided by the
number of labour used in the production process. Marginal
product (MPL) is defined as the additional output produced
by employing one more unit of labour. That is, MPL = TPL - TPL-1. Hence, given the number of workers and
total product, average product and marginal product can
be derived in the following way:
Table 3.1

Amount of    Amount of     Total        Average         Marginal Product
Labour (L)   Capital (K)   Output (q)   Product (q/L)   (MPL) (∆q/∆L)
 1           10             10          10               10
 2           10             30          15               20
 3           10             60          20               30
 4           10             80          20               20
 5           10             95          19               15
 6           10            108          18               13
 7           10            112          16                4
 8           10            112          14                0
 9           10            108          12               -4
10           10            100          10               -8

The contribution that labour makes to the production


process can be described on both an average and a
marginal (i.e., incremental) basis. The fourth column
shows the average product of labour (APL), which is the
output per unit of labour input. The average product is
calculated by dividing the total output q by the total input of
labour L. The average product of labour measures the
productivity of the firm’s workforce in terms of how much
output each worker produces on average. In our example, the
average product increases initially but falls when the labour
input becomes greater than four. The marginal product of
labour (MPL) is the additional output produced as the labour
input is increased by 1 unit. For example, with capital fixed
at 10 units, when the labour input increases from 2 to 3, total
output increases from 30 to 60, creating an additional output
of 30 (i.e., 60–30) units. The marginal product of labour can
be written as ∆q/∆L -in other words, the change in output ∆q
resulting from a 1- unit increase in labour input ∆L.
Remember that the marginal product of labour depends on
the amount of capital used. If the capital input increased from
10 to 20, the marginal product of labour most likely would


increase, because additional workers are likely to be more productive if they have more capital to use. Like the average
product, the marginal product first increases then falls—in this
case, after the third unit of labour.
The upper portion of figure 3.4 shows that
as labour is increased, output increases until it reaches the
maximum output of 112; thereafter, it falls. The portion of the
total output curve that is declining is drawn with a dashed
line to denote that producing with more than eight workers is
not economically rational; it can never be profitable to use
additional amounts of a costly input to produce less output.
The lower portion of figure 3.4 shows the average
and marginal product curves. (The units on the vertical axis
have changed from output per month to output per worker
per month.) Note that the marginal product is positive as
long as output is increasing, but becomes negative when
output is decreasing. It is no coincidence that the marginal
product curve crosses the horizontal axis of the graph at the
point of maximum total product. This happens because
adding a worker in a manner that slows production and
decreases total output implies a negative marginal product
for that worker.
The average product and marginal product curves
are closely related. When the marginal product is greater
than the average product, the average product is increasing. If
the output of an additional worker is greater than the
average output of each existing worker (i.e., the marginal
product is greater than the average product), then adding the


worker causes average output to rise. In the table, two workers


produce 30 units of output, for an average product of 15 units
per worker. Adding a third worker increases output by 30 units
(to 60), which raises the average product from 15 to 20.
Similarly, when the marginal product is less than the average
product, the average product is decreasing. This is the case
when the labour input is greater than 4 in the figure.
Figure 3.4


We have seen that the marginal product is above


the average product when the average product is increasing
and below the average product when the average product is
decreasing. It follows, therefore, that the marginal product
must equal the average product when the average product
reaches its maximum. This happens at point E in the above
figure (3.4)
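The AP/MP relationships discussed above can be reproduced from the total-product schedule of Table 3.1 with a few lines of code:

```python
# Average and marginal product computed from the total-product
# schedule of Table 3.1 (capital fixed at 10 units).

total_product = [10, 30, 60, 80, 95, 108, 112, 112, 108, 100]

rows = []
prev_q = 0
for workers, q in enumerate(total_product, start=1):
    ap = q / workers          # average product, q/L
    mp = q - prev_q           # marginal product for a 1-unit rise in L
    rows.append((workers, q, ap, mp))
    prev_q = q

for workers, q, ap, mp in rows:
    print(workers, q, round(ap, 2), mp)

# AP rises while MP > AP and falls once MP < AP, so AP is at its
# maximum where the two are equal (L = 3 to 4 in this schedule),
# and MP turns negative exactly where total output starts to fall.
```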
Elasticity of Substitution
In this section we analyze how changes in factor prices affect the shares of the factors in output and the distribution of factor incomes. When input prices change, the firm will employ more of the cheaper input in place of the relatively costly one. For example, if the cost of labour increases relative to the cost of capital, the firm will substitute capital for labour. This follows from profit maximization: to maximize profit the firm has to change the capital-labour ratio (K/L) in response to the change in factor prices, and this changes the relative shares of the factors. How much a change in factor prices matters depends on the responsiveness of the K/L ratio to the change in factor prices. This responsiveness is measured by the elasticity of substitution, which is defined as the percentage change in the K/L ratio divided by the percentage change in the marginal rate of technical substitution of labour for capital (MRTS). That is,

σ = [d(K/L)/(K/L)] / [d(MRTS)/(MRTS)]


In a perfectly competitive factor market the profit-maximization condition requires that the marginal rate of technical substitution of the factors equals the ratio of factor prices, that is:

MRTS = w/r

We can therefore rewrite the elasticity of substitution as

σ = [d(K/L)/(K/L)] / [d(w/r)/(w/r)]

The sign of σ will always be positive (except when σ = 0), as the numerator and denominator change in the same direction. This implies that if w/r increases, labour becomes relatively costlier than capital and the firm therefore substitutes capital for labour, leading to an increase in K/L. The value of σ lies in the range from 0 to ∞. When σ = 0, there is no substitution of one factor for another; this is the case of the fixed-proportions production function. The factors are perfect substitutes for each other when σ = ∞; in this situation the isoquant is a negatively sloped straight line. The factors are substitutable for each other to a certain extent when 0 < σ < ∞, and the isoquant is then convex to the origin. For a Cobb-Douglas production function the elasticity of substitution equals unity. We can compare degrees of substitutability by considering the following three cases:
(i) σ < 1 (inelastic substitutability)
(ii) σ = 1 (unitary elastic substitutability)
(iii) σ > 1 (elastic substitutability)
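As a numerical illustration (not from the text), the sketch below checks that a Cobb-Douglas technology has unit elasticity of substitution: under cost minimisation MRTS = w/r, and for Y = AK^αL^(1-α) the optimal ratio is K/L = (α/(1-α))(w/r), so the log-derivative of K/L with respect to w/r should be one. The value of α is hypothetical.

```python
import math

# Finite-difference check that sigma = d ln(K/L) / d ln(w/r) = 1
# for a Cobb-Douglas technology, where the cost-minimising input
# ratio is K/L = (alpha / (1 - alpha)) * (w/r).

alpha = 0.3   # hypothetical capital exponent

def optimal_k_over_l(w_over_r):
    return alpha / (1 - alpha) * w_over_r

def sigma(w_over_r, dw=1e-6):
    """Elasticity of substitution by a finite difference in logs."""
    k1 = optimal_k_over_l(w_over_r)
    k2 = optimal_k_over_l(w_over_r + dw)
    return ((math.log(k2) - math.log(k1))
            / (math.log(w_over_r + dw) - math.log(w_over_r)))

print(sigma(2.0))   # close to 1 for any wage-rental ratio
print(sigma(0.5))
```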


How the distribution of factor shares is related to different values of the elasticity of substitution is analyzed below. The shares of labour and capital can be written as:

Share of labour = wL/X and share of capital = rK/X

Hence, the relative factor share is

(wL)/(rK) = (w/r)(L/K) = (w/r)/(K/L)

This equation traces the relationship between changes in the w/r ratio and the relative shares of labour and capital. Now when σ < 1, a given percentage change in the w/r ratio induces a smaller percentage change in the K/L ratio; this results in an increase in the relative share of labour. σ = 1 implies that there is an equal percentage change in K/L for a given percentage change in w/r, and consequently the relative factor shares remain unchanged. However, when σ > 1, the relative share of labour declines, as a percentage increase in the w/r ratio causes a relatively larger percentage increase in the K/L ratio. It is important to note that there is a two-way causation between the K/L ratio and w/r: a change in the K/L ratio induces a change in the ratio of factor prices, which in turn changes the shares of the factors in output.
Homogeneous Production Function
Specifically, a function f (x1, x2,…, xn) is said to be
homogeneous of degree k if

f(tx1, tx2,…, txn) = t^k f(x1, x2,…, xn).


The most important examples of homogeneous
functions are those for which k = 1 or k = 0. In words, when
a function is homogeneous of degree one, a doubling of all of
its arguments doubles the value of the function itself. For
functions that are homogeneous of degree 0, a doubling of
all of its arguments leaves the value of the function
unchanged. Functions may also be homogeneous for changes
in only certain subsets of their arguments, that is, a doubling
of some of the x’s may double the value of the function if the
other arguments of the function are held constant. Usually,
however, homogeneity applies to changes in all of the
arguments in a function.
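The degree of homogeneity can be recovered numerically from the definition f(tx) = t^k f(x); the sample functions below are hypothetical.

```python
import math

# Numerical check of homogeneity of degree k: f(t*x1, t*x2) should
# equal t**k * f(x1, x2), so k = log(f(tx)/f(x)) / log(t).

def degree_of_homogeneity(f, x1, x2, t=2.0):
    """Recover k from f(t*x) = t**k * f(x), assuming f(x) > 0."""
    return math.log(f(t * x1, t * x2) / f(x1, x2)) / math.log(t)

print(degree_of_homogeneity(lambda a, b: a ** 0.3 * b ** 0.7, 4, 9))  # ~1
print(degree_of_homogeneity(lambda a, b: a / b, 4, 9))                # ~0
print(degree_of_homogeneity(lambda a, b: a * b, 4, 9))                # ~2
```

The three sample functions illustrate the two most important cases named in the text (degree one and degree zero) plus a degree-two case.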
Euler’s theorem
Another useful feature of homogeneous functions can
be shown by differentiating the definition for homogeneity
with respect to the proportionality factor, t. In this case,
we differentiate the right side of the equation f(tx1, tx2,…, txn) = t^k f(x1, x2,…, xn) first:
k t^(k-1) f(x1, x2,…, xn) = x1f1(tx1, tx2,…, txn) + ... + xnfn(tx1, tx2,…, txn).
If we let t = 1, this equation becomes
kf(x1, x2,…, xn) = x1f1(x1, x2,…, xn) + ... + xnfn(x1, x2,…, xn).
This equation is termed Euler’s theorem for
homogeneous functions. It shows that, for a homogeneous
function, there is a definite relationship between the values of


the function and the values of its partial derivatives. Several


important economic relationships among functions are based
on this observation.
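Euler's theorem can be verified numerically for a hypothetical function that is homogeneous of degree one, using finite-difference partial derivatives:

```python
# Numerical verification of Euler's theorem for a degree-one
# homogeneous function f(K, L) = K**0.3 * L**0.7, for which the
# theorem says f = K * f_K + L * f_L.

def f(K, L):
    return K ** 0.3 * L ** 0.7

def partial(g, x, y, which, h=1e-6):
    """Central-difference partial derivative of g at (x, y)."""
    if which == 0:
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

K, L = 4.0, 9.0
euler_sum = K * partial(f, K, L, 0) + L * partial(f, K, L, 1)
print(f(K, L), euler_sum)   # the two agree up to numerical error
```

In factor-market terms this is the familiar adding-up result: with constant returns to scale, paying each factor its marginal product exactly exhausts the product.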
Linear Homogeneous Production Function
The linear homogeneous production function implies that with a proportionate change in all the factors of production, output also increases in the same proportion: if the input factors are doubled, output also doubles. This is also known as constant returns to scale. A production function is said to be linearly homogeneous when it is homogeneous of degree one. The linear homogeneous production function is convenient in empirical studies because it is easy to handle; that is why it is widely used in linear programming and input-output analysis. This production function can be shown symbolically
kP = f(kx1, kx2)
where,
k = number of times
P = output
x1,x2= input
Thus when the inputs x1, x2 are increased k times, output also increases in the same proportion. The concept
of linear homogeneous production function can be further
comprehended through the figure 3.5.


Figure 3.5

In the case of a linear homogeneous production


function, the expansion path is always a straight line through the
origin. This means that the proportions between the factors
used will always be the same irrespective of the output levels,
provided the factor prices remain constant.
Fixed Proportion Production Function
The fixed-proportions production function is sometimes called a Leontief production function. In this case, it is
impossible to make any substitution among inputs. Each
level of output requires a specific combination of labour and
capital: Additional output cannot be obtained unless more
capital and labour are added in specific proportions. As a
result, the isoquants are L-shaped, just as indifference curves
are L-shaped when two goods are perfect complements.


Figure 3.6

In the figure (3.6) points A, B, and C represent


technically efficient combinations of inputs. For example, to produce output q1, a quantity of labour L1 and capital K1 can be used, as at point A. If capital stays fixed at K1, adding more labour
does not change output. Nor does adding capital with labour
fixed at L1. Thus, on the vertical and the horizontal
segments of the L-shaped isoquants, either the marginal
product of capital or the marginal product of labour is zero.
Higher output results only when both labour and capital are
added, as in the move from input combination A to input
combination B. The fixed-proportions production function
describes situations in which methods of production are
limited. For example, the production of a television show
might involve a certain mix of capital (camera and sound
equipment, etc.) and labour (producer, director, actors, etc.).
To make more television shows, all inputs to production
must be increased proportionally. In particular, it would be
difficult to increase capital inputs at the expense of labour,


because actors are necessary inputs to production (except


perhaps for animated films). Likewise, it would be difficult
to substitute labour for capital, because filmmaking today
requires sophisticated film equipment.
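A minimal sketch of such a technology, with hypothetical input requirements of 2 units of labour and 1 unit of capital per unit of output:

```python
# A Leontief (fixed-proportions) technology: q = min(L/a, K/b),
# with hypothetical requirements a = 2 units of labour and
# b = 1 unit of capital per unit of output.

def leontief(L, K, a=2.0, b=1.0):
    return min(L / a, K / b)

print(leontief(4, 2))   # 2 units of output at the efficient corner
print(leontief(10, 2))  # extra labour alone adds nothing: still 2
print(leontief(4, 9))   # extra capital alone adds nothing: still 2
print(leontief(8, 4))   # doubling both inputs doubles output: 4
```

The second and third calls show the zero marginal products along the vertical and horizontal segments of the L-shaped isoquant; the last shows that output rises only when both inputs increase in the required proportions.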
Cobb Douglas Production Function
In Economics, the Cobb Douglas production function
is widely used to represent the relationship between inputs
and output in an economy. The two most important
neoclassical production functions are the Constant Elasticity of
Substitution (CES) and the Cobb Douglas.
The Cobb Douglas production function was created
by Charles Cobb (Mathematician) and Paul Douglas
(Economist) in 1927. But its functional form was proposed
by Knut Wicksell (Economist) in the 19th Century. The Cobb
Douglas production function lies between linear and fixed
proportion production function with elasticity of substitution
equal to one. It is very popular among economists because of its flexibility and ease of use.
Mathematical Form: The mathematical form of the Cobb Douglas production function for a single output with two factors can be written as
Y = f(K, L, A) = A K^α L^(1−α)
Where,
Y: Output
K: Capital input
L: Labour input
A: Level of technology or total factor productivity (A>0)
α: Constant between 0 and 1 (0 < α < 1)
The Cobb-Douglas production function exhibits
constant returns to scale: output increases in the same proportion as the increase in inputs. Under constant returns to scale the sum of the two exponents for capital and labour is one, i.e. α + (1 − α) = 1. Returns to scale is a long-run concept, relevant when all the factors of production are variable. In the long run, output can be increased by increasing all the factors of production. An increase in scale means that all factors are increased in the same proportion; output will increase, but the increase may be at an increasing rate, a constant rate, or a decreasing rate. Constant returns to scale occur when output increases in the same proportion as the inputs: if all factors are increased by 20%, output also increases by 20%, and a doubling of all factors causes a doubling of output. A production function with constant returns to scale is also called linearly homogeneous. It is shown in Figure 3.7.
Figure 3.7

Labour and capital are shown on the X-axis and Y-axis. There are three isoquant curves showing different
levels of output. Under constant returns to scale the distance between successive isoquants remains the same as we expand output from 100 to 200 to 300 units. On the straight line OR starting from the origin, the distances OD, DE and EF are all equal. If the sum of the two exponents for capital and labour is greater than one, the function exhibits increasing returns to scale; if the sum is less than one, the function exhibits decreasing returns to scale. Isoquants are Convex to the Origin: under the Cobb-Douglas production function, isoquants are convex to the origin.
Properties of Cobb Douglas Production Function
i. Constant Returns to Scale: The Cobb Douglas
production function exhibits constant returns to
scale. If the inputs capital and labour are increased
by a positive constant λ, then output also increases
by the same proportion, i.e. f(λK, λL, A) = λf(K, L, A) for all λ > 0.
Y = f(K, L, A) = A K^α L^(1−α)
f(λK, λL, A) = A (λK)^α (λL)^(1−α)
= λ^α λ^(1−α) A K^α L^(1−α)
= λ A K^α L^(1−α)
= λY
If the function exhibits decreasing returns to scale, then f(λK, λL, A) < λf(K, L, A) for λ > 1.
If the function exhibits increasing returns to scale, then f(λK, λL, A) > λf(K, L, A) for λ > 1.
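The constant-returns property derived above can be checked numerically. A small sketch, with illustrative parameter values A = 1.2 and α = 0.3 (any admissible values behave the same way):

```python
def cobb_douglas(K, L, A=1.2, alpha=0.3):
    """Cobb Douglas output: Y = A * K**alpha * L**(1 - alpha)."""
    return A * K ** alpha * L ** (1 - alpha)

# Constant returns to scale: scaling both inputs by lam scales output by lam.
Y = cobb_douglas(16.0, 81.0)
lam = 2.0
Y_scaled = cobb_douglas(lam * 16.0, lam * 81.0)
assert abs(Y_scaled - lam * Y) < 1e-9
```

The same function with exponents summing above (below) one would return more (less) than λY, matching the increasing and decreasing returns cases.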
ii. Positive and Diminishing Returns to Inputs: The
Cobb Douglas production function is increasing in
labour and capital, i.e. it has positive marginal products.
(i) ∂Y/∂K > 0 and ∂Y/∂L > 0
Y = f(K, L, A) = A K^α L^(1−α)
MPK = ∂Y/∂K = α A K^(α−1) L^(1−α)
MPL = ∂Y/∂L = (1 − α) A K^α L^(−α)
Assuming A, L and K are all positive and 0 < α < 1, the marginal products are positive.
(ii) Diminishing Marginal Products with respect to
each Input:
∂²Y/∂K² < 0 and ∂²Y/∂L² < 0
∂²Y/∂K² = α(α − 1) A K^(α−2) L^(1−α) < 0 if α < 1
Here, any small increase in capital will lead to a decrease in the marginal product of capital: a small increase in capital causes output to rise, but at a diminishing rate. The same is true for labour.
iii. Inada Conditions
(i) The marginal product of capital (labour) approaches infinity as capital (labour) goes to zero:
lim K→0 MPK = lim L→0 MPL = ∞
(ii) The marginal product of capital (labour) approaches zero as capital (labour) goes to infinity:
lim K→∞ MPK = lim L→∞ MPL = 0

iv. The Cobb Douglas production function has an elasticity of substitution equal to unity. Let k = K/L. Then
MRTS = MPL/MPK = [(1 − α)/α] k
σ = (dk/dMRTS)(MRTS/k) = [α/(1 − α)] × [(1 − α)/α] = 1

CES Production Function


Another most popular neo classical production
function is constant elasticity of substitution (CES)
production function. The CES production function was
developed by Arrow, Chenery, Minhas and Solow as a
generalisation of the Cobb Douglas production function that
allows for non-negative and constant elasticity of substitution.

Functional Form: The standard CES production function can be written as:
Y = A [δK^(−ρ) + (1 − δ)L^(−ρ)]^(−1/ρ)
where Y is output, K and L are capital and labour inputs, and
A, δ, ρ are the parameters.
• A, the technology parameter, determines productivity, with A ∈ [0, ∞).
• δ determines the optimal distribution of inputs, with δ ∈ [0, 1].
• ρ determines the elasticity of substitution (σ), where ρ ∈ [−1, 0) ∪ (0, ∞) and σ = 1/(1 + ρ)
Special Cases under CES Production Function
(i) For ρ → 0, σ approaches 1: the Cobb Douglas production function.
(ii) For ρ → +∞, σ approaches 0: the Leontief production function.
(iii) For ρ → −1, σ approaches ∞: perfect substitutes.
The CES production function is linearly homogeneous and
therefore exhibits constant returns to scale. It is non-linear in parameters, so it cannot be estimated using the least squares method.
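The limiting cases above can be checked numerically: for ρ near 0 the CES function approaches the Cobb Douglas form with exponent δ, and for large ρ it approaches the Leontief minimum. A sketch with illustrative parameter values (A = 1, δ = 0.4):

```python
def ces(K, L, A=1.0, delta=0.4, rho=0.5):
    """CES output: Y = A * (delta*K**(-rho) + (1 - delta)*L**(-rho))**(-1/rho)."""
    return A * (delta * K ** (-rho) + (1 - delta) * L ** (-rho)) ** (-1.0 / rho)

K, L = 4.0, 9.0
# As rho -> 0, CES approaches the Cobb Douglas form A * K**delta * L**(1-delta):
cobb_douglas_value = K ** 0.4 * L ** 0.6
assert abs(ces(K, L, rho=1e-6) - cobb_douglas_value) < 1e-3
# As rho -> +infinity, CES approaches the Leontief form min(K, L):
assert abs(ces(K, L, rho=200.0) - min(K, L)) < 0.05
```

The tolerances are loose because the limits are only approached, not reached, at finite ρ.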
Properties of CES Production Function
i. Constant Returns to Scale: The CES
production function exhibits constant returns to
scale.
Y = A [δK^(−ρ) + (1 − δ)L^(−ρ)]^(−1/ρ)
f(λK, λL, A) = A [δ(λK)^(−ρ) + (1 − δ)(λL)^(−ρ)]^(−1/ρ)
= λ A [δK^(−ρ) + (1 − δ)L^(−ρ)]^(−1/ρ)
= λY
ii. Positive and Diminishing Returns to Inputs:
The marginal products of the inputs are
MPK = ∂Y/∂K = δ A^(−ρ) (Y/K)^(ρ+1)
MPL = ∂Y/∂L = (1 − δ) A^(−ρ) (Y/L)^(ρ+1)
Both are positive for K, L > 0. Any small increase in capital or labour increases output, but at a diminishing rate.
iii. Inada Conditions
(i) The marginal product of capital (labour) approaches infinity as capital (labour) goes to zero:
lim K→0 MPK = lim L→0 MPL = ∞
(ii) The marginal product of capital (labour) approaches zero as capital (labour) goes to infinity:
lim K→∞ MPK = lim L→∞ MPL = 0

iv. The Elasticity of Substitution is σ = 1/(1 + ρ).
The elasticity of substitution is calculated using the formula:
σ = d ln(K/L) / d ln(MRTS)
where MRTS = MPL/MPK = [(1 − δ)/δ] (K/L)^(1+ρ)
so that ln(MRTS) = ln[(1 − δ)/δ] + (1 + ρ) ln(K/L), and hence
σ = 1/(1 + ρ)

Technological Progress and the Production Function


Technology changes, as knowledge of new and more
efficient methods of production becomes available.
Furthermore new inventions may result in the increase of the
efficiency of all methods of production. At the same time
some techniques may become inefficient and drop out from
the production function. These changes in technology
constitute technological progress. Graphically the effect of
innovation in processes is shown with an upward shift of the
production function, or a downward movement of the
production isoquants (figure 3.8). This shift shows that the
same output may be produced by less factor inputs, or more
output may be obtained with the same inputs.
Figure 3.8

Technical progress may also change the shape (as
well as produce a shift) of the isoquant. Hicks has
distinguished three types of technical progress, depending on
its effect on the rate of substitution of the factors of
production.
Capital-Deepening Technical Progress
Technical progress is capital-deepening (or capital-using) if, along a line on which the K/L ratio is constant, the MRSL,K declines. This implies that technical progress increases the marginal product of capital by more than the marginal product of labour. The ratio of the marginal products (which is the MRSL,K) decreases in absolute value; but taking into account that the slope of the isoquant is negative, this sort of technical progress increases the MRSL,K.
The slope of the shifting isoquant becomes less steep
along any given radius. The capital- deepening technical
progress is shown in figure 3.9.
Figure 3.9

Labour-Deepening Technical Progress


Technical progress is labour-deepening if, along a radius through the origin (with constant K/L ratio), the MRSL,K increases. This implies that the technical progress increases the MPL faster than the MPK. Thus the MRSL,K, being the ratio of the marginal products [(∂X/∂L)/(∂X/∂K)], increases in absolute value (but decreases if the minus sign is taken into account). The downwards-shifting isoquant becomes steeper along any given radius through the origin. This is shown in figure 3.10.
Figure 3.10

Neutral-Technical Progress
Technical progress is neutral if it increases the
marginal product of both factors by the same percentage, so
that the MRSL, K (along any radius) remains constant. The
isoquant shifts downwards parallel to itself. This is shown in
figure 3.11.

Figure 3.11

Cost Function
Cost functions are derived functions. They are
derived from the production function, which describes the
available efficient methods of production at any one time.
Economic theory distinguishes between short-run costs and
long-run costs. Short-run costs are the costs over a period
during which some factors of production (usually capital
equipment and management) are fixed. The long-run costs are
the costs over a period long enough to permit the change of all
factors of production. In the long run all factors become
variable. Both in the short run and in the long run, total cost is a multivariable function, that is, total cost is determined by many factors. Symbolically we may write the long-run cost
function as
C = f (X, T, Pf)
and the short-run cost function as

C = f(X, T, Pf, K)
Where,
C = total cost
X= output
T = technology
Pf = prices of factors
K = fixed factor(s)

Graphically, costs are shown on two-dimensional diagrams. Such curves imply that cost is a function of output,
C = f(X), ceteris paribus. The clause ceteris paribus implies
that all other factors which determine costs are constant. If
these factors do change, their effect on costs is shown
graphically by a shift of the cost curve. This is the reason
why determinants of cost, other than output, are called shift
factors. Mathematically there is no difference between the
various determinants of costs. The distinction between
movements along the cost curve (when output changes)
and shifts of the curve (when the other determinants
change) is convenient only pedagogically, because it allows
the use of two-dimensional diagrams. But it can be misleading
when studying the determinants of costs. It is important to
remember that if the cost curve shifts, this does not imply that
the cost function is indeterminate.
The factor ‘technology’ is itself a multidimensional
factor, determined by the physical quantities of factor inputs,
the quality of the factor inputs, the efficiency of the
entrepreneur, both in organising the physical side of the

production (technical efficiency of the entrepreneur), and in
making the correct economic choice of techniques (economic
efficiency of the entrepreneur). Thus, any change in these
determinants (e.g., the introduction of a better method of
organisation of production, the application of an educational
programme to the existing labour) will shift the production
function, and hence will result in a shift of the cost curve.
Similarly the improvement of raw materials, or the
improvement in the use of the same raw materials will lead to
a shift downwards of the cost function.
The short-run costs are the costs at which the firm
operates in any one period. The long-run costs are planning
costs or ex ante costs, in that they present the optimal
possibilities for expansion of the output and thus help the
entrepreneur plan his future activities. Before an investment
is decided the entrepreneur is in a long-run situation, in the
sense that he can choose any one of a wide range of
alternative investments, defined by the state of technology.
After the investment decision is taken and funds are tied up
in fixed-capital equipment, the entrepreneur operates under
short-run conditions.
A distinction is necessary between internal (to the
firm) economies of scale and external economies. The
internal economies are built into the shape of the long-run cost
curve, because they accrue to the firm from its own action as
it expands the level of its output. The external economies
arise outside the firm, from improvement (or deterioration) of
the environment in which the firm operates. Such economies

external to the firm may be realised from actions of other
firms in the same or in another industry. The important
characteristic of such economies is that they are independent
of the actions of the firm, they are external to it. Their effect
is a change in the prices of the factors employed by the firm
(or in a reduction in the amount of inputs per unit of output),
and thus cause a shift of the cost curves, both the short-run
and the long-run.
In summary, while the internal economies of scale
relate only to the long-run and are built into the shape of the
long-run cost curve, the external economies affect the
position of the cost curves: both the short-run and the long-
run cost curves will shift if external economies affect the
prices of the factors and/or the production function. Any
point on a cost curve shows the minimum cost at which a
certain level of output may be produced. This is the
optimality implied by the points of a cost curve. Usually the
above optimality is associated with the long-run cost curve.
However, a similar concept may be applied to the short-run,
given the plant of the firm in any one period.
The Cost-Minimizing Input Choice
The amount of labour and capital that the firm uses
will depend, of course, on the prices of these inputs. We will
assume that because there are competitive markets for both
inputs, their prices are unaffected by what the firm does. In
this case, the price of labour is simply the wage rate, w. In
the long run, the firm can adjust the amount of capital it
uses. Even if the capital includes specialized machinery that
has no alternative use, expenditures on this machinery are not
yet sunk and must be taken into account; the firm is deciding
prospectively how much capital to obtain. Unlike labour
expenditures, however, large initial expenditures on capital
are necessary. In order to compare the firm’s expenditure
on capital with its ongoing cost of labour, we want to
express this capital expenditure as a flow - e.g., in dollars per
year. To do this, we must amortize the expenditure by
spreading it over the lifetime of the capital, and we must also
account for the forgone interest that the firm could have
earned by investing the money elsewhere. As we have just
seen, this is exactly what we do when we calculate the user
cost of capital. As above, the price of capital is its user cost,
given by r = Depreciation rate + Interest rate.
The Rental Rate of Capital
Capital is often rented rather than purchased. If the
capital market is competitive (as we have assumed it is), the
rental rate should be equal to the user cost, r. Because in a
competitive market, firms that own capital (e.g., the owner of
the large office building) expect to earn a competitive return
when they rent it—namely, the rate of return that they could
have earned by investing their money elsewhere, plus an
amount to compensate for the depreciation of the capital. This
competitive return is the user cost of capital. Capital that is
purchased can be treated as though it were rented at a rental
rate equal to the user cost of capital. We assume a firm rents
all of its capital at a rental rate, or “price,” r, just as it hires
labour at a wage rate, or “price,” w. We will also assume that

firms treat any sunk cost of capital as a fixed cost that is
spread out over time. We need not, therefore, concern
ourselves with sunk costs. Rather, we can now focus on how
a firm takes these prices into account when determining how
much capital and labour to utilize.
Economies of Scope
In general, economies of scope are present when the
joint output of a single firm is greater than the output that
could be achieved by two different firms each producing a
single product (with equivalent production inputs allocated
between them). If a firm’s joint output is less than that
which could be achieved by separate firms, then its
production process involves diseconomies of scope. This
possibility could occur if the production of one product
somehow conflicted with the production of the second. There
is no direct relationship between economies of scale and
economies of scope. A two-output firm can enjoy economies
of scope even if its production process involves diseconomies
of scale. Suppose, for example, that manufacturing flutes and
piccolos jointly is cheaper than producing both separately.
Yet the production process involves highly skilled labour and
is most effective if undertaken on a small scale. Likewise, a
joint-product firm can have economies of scale for each
individual product yet not enjoy economies of scope.
Imagine, for example, a large conglomerate that owns several
firms that produce efficiently on a large scale but that do not
take advantage of economies of scope because they are
administered separately.

The Degree of Economies of Scope


The extent to which there are economies of scope can
also be determined by studying a firm’s costs. If a
combination of inputs used by one firm generates more
output than two independent firms would produce, then it
costs less for a single firm to produce both products than it
would cost the independent firms. To measure the degree to
which there are economies of scope, we should ask what
percentage of the cost of production is saved when two (or
more) products are produced jointly rather than individually.
The following equation gives the degree of economies of
scope (SC) that measures this savings in cost:

SC = [C(q1) + C(q2) − C(q1, q2)] / C(q1, q2)

C(q1) represents the cost of producing only output q1, C(q2)


represents the cost of producing only output q2, and C(q1, q2)
the joint cost of producing both outputs. When the physical
units of output can be added, the expression becomes C (q1 +
q2). With economies of scope, the joint cost is less than the
sum of the individual costs. Thus, SC is greater than 0.
With diseconomies of scope, SC is negative. In general the
larger the value of SC the greater the economies of scope.
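The degree of economies of scope can be computed directly from cost figures. A sketch with hypothetical costs (the numbers are illustrative, not from the text):

```python
def degree_of_scope(c1, c2, c_joint):
    """SC = [C(q1) + C(q2) - C(q1, q2)] / C(q1, q2):
    the share of cost saved by producing the two outputs jointly."""
    return (c1 + c2 - c_joint) / c_joint

# Hypothetical figures: producing the goods separately costs 60 and 90,
# while producing both jointly costs 120.
sc = degree_of_scope(60.0, 90.0, 120.0)   # (60 + 90 - 120) / 120 = 0.25
# SC > 0: 25% of cost is saved, so economies of scope are present.
assert sc > 0
```

A joint cost above 150 in this example would make SC negative, signalling diseconomies of scope.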
The Learning Curve
Firms that enjoy lower average cost over time are
growing firms with increasing returns to scale. But this need
not be true. In some firms, long-run average cost may decline
over time because workers and managers absorb new

technological information as they become more experienced
at their jobs. As management and labour gain experience
with production, the firm’s marginal and average costs of
producing a given level of output fall for four reasons:
1. Workers often take longer to accomplish a given task the
first few times they do it. As they become more adept,
their speed increases.
2. Managers learn to schedule the production process more
effectively, from the flow of materials to the organization
of the manufacturing itself.
3. Engineers who are initially cautious in their product
designs may gain enough experience to be able to allow
for tolerances in design that save costs without increasing
defects. Better and more specialized tools and plant
organization may also lower cost.
4. Suppliers may learn how to process required materials
more effectively and pass on some of this advantage in the
form of lower costs.
As a consequence, a firm “learns” over time as
cumulative output increases. Managers can use this learning
process to help plan production and forecast future costs.
Figure 3.12 shows a learning curve for the production of
machine tools. The horizontal axis measures the cumulative
number of lots of machine tools (groups of approximately
40) that the firm has produced. The vertical axis shows the
number of hours of labor needed to produce each lot. Labor
input per unit of output directly affects the production cost

because the fewer the hours of labour needed, the lower the
marginal and average cost of production.
Figure 3.12

The learning curve in Figure 3.12 is based on the relationship
L = A + B N^(−β)
Where, N is the cumulative units of output produced and L the
labour input per unit of output. A, B, and β are constants, with
A and B positive, and β between 0 and 1. When N is equal to
1, L is equal to A + B, so that A+ B measures the labour
input required to produce the first unit of output. When β
equals 0, labour input per unit of output remains the same
as the cumulative level of output increases; there is no
learning. When β is positive and N gets larger and larger, L
becomes arbitrarily close to A. A, therefore, represents the
minimum labour input per unit of output after all learning

has taken place. The larger β is, the more important the
learning effect. With β equal to 0.5, for example, the labour
input per unit of output falls proportionately to the square
root of the cumulative output. This degree of learning can
substantially reduce production costs as a firm becomes
more experienced. In this machine tool example, the value
of β is 0.31. For this particular learning curve, every
doubling in cumulative output causes the input requirement
(less the minimum attainable input requirement) to fall by
about 20 percent.
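The doubling arithmetic can be verified directly. A sketch using the machine-tool value β = 0.31 from the text (A and B are illustrative constants):

```python
def labour_per_lot(N, A=10.0, B=40.0, beta=0.31):
    """Learning curve L = A + B * N**(-beta): labour hours per lot
    after N cumulative lots have been produced."""
    return A + B * N ** (-beta)

first_lot = labour_per_lot(1)   # A + B = 50 hours for the first lot
# Every doubling of cumulative output scales the input requirement above
# the minimum A by 2**(-0.31), roughly 0.81, i.e. a fall of about 20 percent:
ratio = (labour_per_lot(2) - 10.0) / (labour_per_lot(1) - 10.0)
assert abs(ratio - 2 ** (-0.31)) < 1e-9
```

As N grows large, labour_per_lot(N) approaches A, the minimum input per lot after all learning has taken place.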
Estimating and Predicting Cost
A business that is expanding or contracting its
operation must predict how costs will change as output
changes. Estimates of future costs can be obtained from a
cost function, which relates the cost of production to the
level of output and other variables that the firm can control.
Suppose we wanted to characterize the short-run cost of
production in the automobile industry. We could obtain data
on the number of automobiles Q produced by each car
company and relate this information to the company’s
variable cost of production VC. The use of variable cost,
rather than total cost, avoids the problem of trying to allocate
the fixed cost of a multiproduct firm’s production process to
the particular product being studied.

Figure 3.13

Figure 3.13 shows a typical pattern of cost and output data. Each point on the graph relates the output of an auto
company to that company’s variable cost of production. To
predict cost accurately, we must determine the underlying
relationship between variable cost and output. Then, if a
company expands its production, we can calculate what the
associated cost is likely to be. The curve in the figure is drawn
with this in mind—it provides a reasonably close fit to the cost
data. Here is one cost function that we might choose:
VC = βq
Although easy to use, this linear relationship between cost
and output is applicable only if marginal cost is constant.
For every unit increase in output, variable cost increases by
β; marginal cost is thus constant and equal to β. If we wish to
allow for a U-shaped average cost curve and a marginal cost
that is not constant, we must use a more complex cost

function. One possibility is the quadratic cost function, which
relates variable cost to output and output squared:
VC = βq + γq²
This function implies a straight-line marginal cost
curve of the form MC = β + 2γq. Marginal cost increases with
output if γ is positive and decreases with output if γ is
negative. If the marginal cost curve is not linear, we might use
a cubic cost function:
VC = βq + γq² + δq³
Figure 3.14 shows this cubic cost function.
Figure 3.14

It implies U-shaped marginal as well as average cost curves. Cost functions can be difficult to measure for several
reasons. First, output data often represent an aggregate of
different types of products. The automobiles produced by
General Motors, for example, involve different models of

cars. Second, cost data are often obtained directly from
accounting information that fails to reflect opportunity costs.
Third, allocating maintenance and other plant costs to a
particular product is difficult when the firm is a
conglomerate that produces more than one product line.
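The three specifications can be compared directly. A sketch with illustrative coefficients β = 5 and γ = 0.1 (hypothetical values, not estimates):

```python
def vc_linear(q, beta=5.0):
    """Linear specification VC = beta*q: marginal cost is constant at beta."""
    return beta * q

def vc_quadratic(q, beta=5.0, gamma=0.1):
    """Quadratic specification VC = beta*q + gamma*q**2."""
    return beta * q + gamma * q ** 2

def mc_quadratic(q, beta=5.0, gamma=0.1):
    """Straight-line marginal cost implied by the quadratic form: MC = beta + 2*gamma*q."""
    return beta + 2 * gamma * q

# Under the linear form, average variable (and marginal) cost is constant:
assert vc_linear(10) / 10 == vc_linear(20) / 20
# With gamma > 0, marginal cost rises with output:
assert mc_quadratic(10) < mc_quadratic(20)
# Average variable cost of the quadratic form is beta + gamma*q,
# which rises monotonically; a U-shaped AVC needs the cubic term.
assert abs(vc_quadratic(10) / 10 - (5.0 + 0.1 * 10)) < 1e-9
```

Negative γ would make marginal cost fall with output, as the text notes.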
Short run and Long run Distinction
1. Cost in Short Run:

It may be noted at the outset that, in cost accounting, we adopt functional classification of cost. But in economics
we adopt a different type of classification, viz., behavioural
classification-cost behaviour is related to output changes. In
the short run the levels of usage of some input are fixed and
costs associated with these fixed inputs must be incurred
regardless of the level of output produced. Other costs do
vary with the level of output produced by the firm during
that time period. The sum total of all such costs, fixed and variable, explicit and implicit, is short-run total cost. It is also possible to speak of semi-fixed or semi-variable costs, such as wages and compensation of foremen and electricity bills. For the sake of simplicity we assume that all short-run costs fall into one of two categories, fixed or variable.
Short-Run Total Cost:
A typical short-run total cost curve (STC) is shown in
figure 3.15. This curve indicates the firm’s total cost of
production for each level of output when the usage of one or
more of the firm’s resources remains fixed. When output is
zero, cost is positive because fixed cost has to be incurred

regardless of output. Examples of such costs are rent of land,
depreciation charges, license fee, interest on loan, etc. They
are called unavoidable contractual costs. Such costs remain
contractually fixed and so cannot be avoided in the short run.
The only way to avoid such costs is by going into
liquidation. The total fixed cost (TFC) curve is a horizontal
straight line. Total variable is the difference between total cost
and fixed cost. The total variable cost curve (TVC) starts from
the origin, because such cost varies with the level of output
and hence are avoidable. Examples are electricity tariff, wages
and compensation of casual workers, cost of raw materials etc.
Figure 3.15

In the figure, the total cost OC of producing Q units of output is total fixed cost OF plus total variable cost FC.
Clearly, variable cost and, therefore, total cost must
increase with an increase in output. We also see that variable
cost first increase at a decreasing rate (the slope of STC
decreases) then increase at an increasing rate (the slope of

STC increases). This cost structure is accounted for by the law
of Variable Proportions.
Average and Marginal Cost:
One can gain a better insight into the firm’s cost
structure by analysing the behaviour of short-run average
and marginal costs. We may first consider average fixed cost
(AFC).
Average fixed cost is total fixed cost divided by output,
i.e., AFC = TFC /Q
Since total fixed cost does not vary with output average
fixed cost is a constant amount divided by output. Average
fixed cost is relatively high at very low output levels.
However, with gradual increase in output, AFC continues to
fall as output increases, approaching zero as output becomes
very large. In figure 3.16, we observe that the AFC curve
takes the shape of a rectangular hyperbola.
Figure 3.16

We now consider average variable cost (AVC), which
is arrived at by dividing total variable cost by output, i.e.,
AVC= TVC/Q. In figure 3.16, AVC is a typical average
variable cost curve. Average variable cost first falls, reaches
a minimum point (at output level Q2) and subsequently
increases. The next important concept is one of average total
cost (ATC). It is calculated by dividing total cost by output, i.e., ATC = TC/Q. It is, therefore, the sum of average fixed cost and average variable cost. The ATC curve, illustrated in figure 3.16, is U-shaped because the AVC curve is U-shaped. This is accounted for by the Law of Variable
Proportions. It first declines, reaches a minimum (at Q3 units
of output) and subsequently rises. The minimum point on
ATC is reached at a larger output than at which AVC attains
its minimum. This point can easily be proved.
We know that ATC = AFC + AVC and that average fixed cost continuously falls over the whole range of output. Thus, ATC
declines at first because both AFC and AVC are falling. Even
when AVC begins to rise after Q2, the decrease in AFC
continues to drive down ATC as output increases. However, an

output of Q3 is finally reached, at which the increase in AVC
overcomes the decrease in AFC, and ATC starts rising. Since
ATC = AFC + AVC, the vertical distance between average
total cost and average variable cost measures average fixed
cost. AFC declines over the entire range of output. AVC
becomes closer and closer to ATC as output increases. We
may finally consider short-run marginal cost (SMC).
Marginal cost is the change in short-run total cost attributable to an extra unit of output: SMC = ΔSTC/ΔQ, or dSTC/dQ.

Short-run marginal cost refers to the change in cost that results from a change in output when the usage of the variable
factor changes. As figure 3.16 shows, marginal cost first
declines, reaches a minimum at Q1 (note that minimum
marginal cost is attained at a level of output less than that at
which AVC and ATC attain their minimum) and rises
thereafter. The marginal cost curve intersects AVC and ATC
at their respective minimum points. This result follows from
the definitions of the cost curves. If marginal cost curve lies
below average variable cost curve the implication is clear:
each additional unit of output adds less to total cost than the

average variable cost. Thus average variable cost has to fall.
So long as MC is above AVC, each additional unit of output
adds more to total cost than AVC. Thus, in this case, AVC
must rise.
Thus when MC is less than AVC, average variable
cost is falling. When MC is greater than AVC, average
variable cost is rising. Thus MC must equal AVC at the
minimum point of AVC. Exactly the same reasoning would
apply to show MC crosses ATC at the minimum point of the
latter curve.
a. Short-run Cost Functions:
Summary of the Main Points All the important short-
run cost relations may now be summed up:
The total cost function may be expressed as:
TC = k + ƒ(Q)
Where, k is total fixed cost which is a constant, and ƒ(Q) is
total variable cost which is a function of output.
ATC = k/Q + ƒ(Q)/Q = AFC + AVC
Since k is a constant and Q gradually increases, the ratio k/Q
falls. Hence the AFC curve is a rectangular hyperbola. Here
MC = dTC/dQ = ƒ'(Q)
where ƒ'(Q) is the change in TVC and may be called
marginal variable cost (MVC). Thus, it is clear that MC refers
to MVC and has no relation to fixed cost. Since business
decisions are largely governed by marginal cost, and
marginal costs have no relation to fixed cost, it logically
follows costs do not affect business decisions.
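This independence of MC from fixed cost can be seen in a minimal check: changing the constant k leaves marginal cost untouched. The variable-cost part f(Q) = 2Q² + 10Q below is assumed purely for illustration:

```python
# Marginal cost depends only on variable cost. Changing the fixed cost
# k leaves MC unchanged (the variable-cost part f(Q) = 2Q^2 + 10Q is
# assumed for illustration).

def make_tc(k):
    return lambda q: k + 2 * q**2 + 10 * q     # TC = k + f(Q)

def mc(total_cost, q, dq=1.0):
    return total_cost(q + dq) - total_cost(q)  # discrete MC per unit

small_k = make_tc(100)
large_k = make_tc(10_000)

for q in (1, 5, 20):
    assert mc(small_k, q) == mc(large_k, q)    # k cancels out of MC
print("MC at Q = 5:", mc(small_k, 5))          # same under either fixed cost
```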
b. Relation between MC and AC:
There is a close relation between MC and AC. When
AC is falling, MC is less than AC. This can be proved as
follows:
When AC is falling, d(AC)/dQ = d(TC/Q)/dQ = (Q·MC − TC)/Q² < 0, which requires Q·MC < TC, i.e. MC < TC/Q = AC.
c. Cost Elasticity:
On the basis of the relation between MC and AC we
can develop a new concept, viz., the concept of cost
elasticity. It measures the responsiveness of total cost to a
small change in the level of output.
It can be expressed as:

e = (ΔTC/TC) ÷ (ΔQ/Q) = (ΔTC/ΔQ) ÷ (TC/Q) = MC/AC

So it is the ratio of MC to AC. The properties of the average and marginal cost curves and their relationship to each other are as described in figure 3.16. From the diagram the following relationships can be discovered.
(1) AFC declines continuously, approaching both axes asymptotically (as shown by the decreasing distance between ATC and AVC), and is a rectangular hyperbola.
(2) AVC first declines, reaches a minimum at Q2 and rises
thereafter. When AVC is at its minimum, MC equals AVC.
(3) ATC first declines, reaches a minimum at Q3, and
rises thereafter. When ATC is at its minimum, MC equals
ATC.
(4) MC first declines, reaches a minimum at Q1, and rises
thereafter. MC equals both AVC and ATC when these curves
are at their minimum values.
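These relationships, together with the cost-elasticity result e = MC/AC, can be verified numerically. The cost function TC = 50 + Q² below is assumed only for illustration:

```python
# Cost elasticity e = (dTC/TC) / (dQ/Q) = MC/AC, checked on an assumed
# cost function TC(Q) = 50 + Q^2 (for illustration only).

def tc(q):
    return 50 + q**2

def cost_elasticity(q, dq=1e-6):
    mc = (tc(q + dq) - tc(q)) / dq   # marginal cost (numerical)
    ac = tc(q) / q                   # average cost
    return mc / ac

# Analytically MC = 2Q and AC = 50/Q + Q, so e = 2Q^2 / (50 + Q^2);
# e = 1 exactly where MC = AC, i.e. at the minimum of AC (Q = sqrt(50)).
for q in (2, 5, 10):
    assert abs(cost_elasticity(q) - 2 * q**2 / (50 + q**2)) < 1e-4
print(round(cost_elasticity(50 ** 0.5), 4))   # elasticity = 1 at min AC
```

When e < 1 (MC below AC), total cost rises less than proportionately with output and AC is falling; when e > 1, AC is rising.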
Table 3.2: Short run Cost Schedule of a Hypothetical firm (in Rs)

Output   Total cost   Fixed cost   Variable cost   AFC     AVC     ATC     MC (per unit)
100        6000         4000          2000         40      20      60       20
200        7000         4000          3000         20      15      35       10
300        7500         4000          3500         13.33   11.67   25        5
400        9000         4000          5000         10      12.5    22.5     15
500       11000         4000          7000          8      14      22       20
600       14000         4000         10000          6.67   16.67   23.33    30
700       18000         4000         14000          5.71   20      25.71    40
800       24000         4000         20000          5      25      30       60
900       34000         4000         30000          4.44   33.33   37.78   100
1000      50000         4000         46000          4      46      50      160

The lowest point of the AVC curve is called the shut-down (close-down) point and that of the ATC curve the break-even point. These two concepts will be discussed in the context of market structure and pricing. Finally, we see that MC lies below both AVC and ATC over the range in which these curves decline; conversely, MC lies above them when they are rising.
Table 3.2 numerically illustrates the characteristics of
all the cost curves. Column (5) shows that average fixed cost
decreases over the entire range of output. Columns (6) and (7)
depict that both average variable and average total cost first
decrease, then increase, with average variable cost attaining
a minimum at a lower output than that at which average
total cost reaches its minimum. Column (8) shows that marginal cost (per unit) is the incremental increase in total cost (equivalently, in variable cost) per 100-unit change in output. If we compare columns (6) and (8) we see that marginal cost (per unit) is below average variable and average total cost when each is falling and is greater than each when AVC and ATC are rising.
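The per-unit columns of Table 3.2 can be recomputed from the Output, Total cost and Fixed cost columns alone (the figures below use Rs. 3,500 for variable cost at 300 units and Rs. 50,000 for total cost at 1,000 units, correcting two apparent misprints in the table as printed):

```python
# Recompute the per-unit columns of Table 3.2 from output, total cost
# and fixed cost. At zero output total cost equals fixed cost (4000),
# which is why the first marginal-cost entry is (6000 - 4000)/100 = 20.

rows = [  # (output, total cost, fixed cost)
    (100, 6000, 4000), (200, 7000, 4000), (300, 7500, 4000),
    (400, 9000, 4000), (500, 11000, 4000), (600, 14000, 4000),
    (700, 18000, 4000), (800, 24000, 4000), (900, 34000, 4000),
    (1000, 50000, 4000),
]

prev_tc = 4000  # total cost at zero output = fixed cost
for q, total, fixed in rows:
    variable = total - fixed
    afc, avc, atc = fixed / q, variable / q, total / q
    mc = (total - prev_tc) / 100          # per-unit MC over a 100-unit step
    prev_tc = total
    print(q, round(afc, 2), round(avc, 2), round(atc, 2), mc)
```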
2) Long-Run Costs: The Planning Horizon:
The long run does not refer to ‘some date in the
future’. Instead, the long run simply refers to a period of time
during which all inputs can be varied. Therefore, a decision
has to be made by the owner and/or manager of the firm
about the scale of operation, that is, the size of the firm. In
order to be able to make this decision the manager must have
knowledge about the cost of producing each relevant level
of output. We shall now discover how to determine these
long-run costs.
Derivation of Cost Schedules from a Production Function:


For the sake of analysis, we may assume that the
firm’s level of usage of the inputs does not affect the input
(factor) prices. We also assume that the firm’s manager has
already evaluated the production function for each level of
output in the feasible range and has derived an expansion
path. For the sake of analytical simplicity, we may assume
that the firm uses only two variable factors, labour and
capital, that cost Rs. 5 and Rs. 10 per unit, respectively. The
characteristics of a derived expansion path are shown in Columns 1, 2 and 3 of Table 3.3. In column (1) we see
seven output levels and in Columns (2) and (3) we see
the optimal combinations of labour and capital respectively
for each level of output, at the existing factor prices. These
combinations enable us to locate seven points on the expansion
path.
Column (4) shows the total cost of producing each
level of output at the lowest possible cost. For example, for
producing 300 units of output, the least cost combination of
inputs is 20 units of labour and 10 of capital. At existing
factor prices, the total cost is Rs. 200. Here, Column (4) is a
least-cost schedule for various levels of production.
In Column (5), we show average cost which is
obtained by dividing total cost figures of Column (4) by the
corresponding output figures of Column (1). Thus, when
output is 100, average cost is Rs. 120/100 = Rs. 1.20. All
other figures of Column (5) are derived in a similar way.
From column (5) we derive an important characteristic of long-run average cost: average cost first declines, reaches a minimum, then rises, as in the short-run. In Column (6) we
show long-run marginal cost figures.
Each such figure is arrived at by dividing change in total cost by change in output. For example, when output increases from 100 to 200 units, the total cost increases from Rs. 120 to Rs. 140. Therefore, marginal cost (per unit) is Rs. 20/100 = Re. 0.20. Similarly, when output increases from 600 to 700 units, MC per unit is (720 − 560)/100 = 160/100 = Rs. 1.60.
Column (6) depicts the behaviour of per unit MC:
marginal cost first decreases then increases, as in the short run.
We may now show the relationship between the expansion
path and long-run cost graphically. In figure 3.17 two inputs,
K and L, are measured along the two axes. The fixed factor
price ratio is represented by the slope of the isocost lines
I1I’1, I2I’2 and so on. Finally, the known production function
gives us the isoquant map, represented by Q1, Q2 and so forth.
Figure 3.17

From our earlier discussion of long-run production function we know that, when all inputs are variable (that is,
in long-run), the manager will choose the least cost
combinations of producing each level of output. In figure
3.17, we see that the locus of all such combinations is
expansion path OP’B’R’S’. Given the factor-price ratio and the production function (which is determined by the state of technology), the expansion path shows the combinations of inputs that enable the firm to produce each level of output at the lowest cost.
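The least-cost choice along the expansion path can be sketched with a simple grid search. The production function Q = 10·√(LK) below is an assumption for illustration; only the factor prices (Rs. 5 for labour, Rs. 10 for capital) come from the text:

```python
# Grid-search sketch of the least-cost principle behind the expansion
# path: for a target output, choose the (L, K) mix on the isoquant that
# minimises wL + rK. The production function Q = 10 * sqrt(L*K) is an
# assumption for illustration; only w = 5 and r = 10 come from the text.

W, R = 5.0, 10.0

def least_cost(target_q, step=0.01, l_max=100):
    best = None
    for i in range(1, int(l_max / step) + 1):
        l = i * step
        # capital needed to stay on the Q = target_q isoquant:
        # 10 * sqrt(l * k) = target_q  =>  k = (target_q / 10)**2 / l
        k = (target_q / 10) ** 2 / l
        cost = W * l + R * k
        if best is None or cost < best[0]:
            best = (cost, l, k)
    return best

cost, l, k = least_cost(100)
print(round(cost, 2), round(l, 2), round(k, 2))
```

The search reproduces the tangency condition of the expansion path: at the chosen bundle the isoquant's slope equals the factor-price ratio w/r, so no reshuffling of labour and capital can produce the target output more cheaply.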
Table 3.3: Derivation of Long run cost Schedules

Output    Labour    Least-cost usage   Total cost at Rs. 5 per unit of      Average   Marginal cost
(units)   (units)   of capital         labour, Rs. 10 per unit of capital   cost      (per unit)
100         10          7                120                                 1.20        1.20
200         12          8                140                                 0.70        0.20
300         20         10                200                                 0.67        0.60
400         30         15                300                                 0.75        1.00
500         40         22                420                                 0.84        1.20
600         52         30                560                                 0.93        1.40
700         60         42                720                                 1.03        1.60
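The internal consistency of Table 3.3 can be checked by recomputing total, average and marginal cost from the input columns (the first row's labour figure is taken as 10 units, which matches the total of Rs. 120 used in the text):

```python
# Check Table 3.3: total cost of each expansion-path point at w = Rs. 5
# per unit of labour and r = Rs. 10 per unit of capital, then AC and MC.

W, R = 5, 10
rows = [  # (output, labour, capital) along the expansion path
    (100, 10, 7), (200, 12, 8), (300, 20, 10), (400, 30, 15),
    (500, 40, 22), (600, 52, 30), (700, 60, 42),
]

prev = None
for q, labour, capital in rows:
    total = W * labour + R * capital      # least cost of producing q
    ac = total / q
    mc = None if prev is None else (total - prev) / 100
    prev = total
    print(q, total, round(ac, 2), mc)
```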

We may now relate this expansion path to a long-run total cost (LRTC) curve. Figure 3.18 shows the ‘least cost curve’ associated with the expansion path in figure 3.17. This least cost curve is the long-run total cost curve. Points P, B, R
and S are associated with points P’, B’, R’ and S’ on the expansion path. For example, in figure 3.17 the least cost combination of inputs that can produce Q1 is K1 units of capital and L1 units of labour.
Thus, in figure 3.18, the minimum possible cost of producing Q1 units of output is TC1, which is rK1 + wL1, i.e., the price of capital (or the rate of interest, r) times K1, plus the price of labour (or the wage rate, w) times L1. Every other point on LRTC is derived in a similar way.
Figure 3.18

Since the long run permits capital-labour substitution, the firm may choose different combinations of these two inputs to produce different levels of output. Thus, totally different production processes may be used to produce (say) Q1 and Q2 units of output at the lowest attainable cost. On the basis of this diagram we may suggest a definition of the long run total cost. The time period during which everything (except factor prices and the state of technology or art of production) is variable is called the long run, and the associated curve that shows the minimum cost of producing each level of output is called the long-run total cost curve.
The shape of the long-run total cost (LRTC) curve
depends on two factors: the production function and the
existing factor prices. Table 3.3 and figure 3.18 reflect two of
the commonly assumed characteristics of long-run total costs.
First, costs and output are directly related; that is, the LRTC
curve has a positive slope. But, since there is no fixed cost in
the long run, the long run total cost curve starts from the
origin. Another characteristic of LRTC is that costs first increase at a decreasing rate (until point B in figure 3.18), and at an increasing rate thereafter. Since the slope of
the total cost curve measures marginal cost, the implication
is that long-run marginal cost first decreases and then
increases. It may be added that all implicit costs of
production are included in the LRTC curve.
Long-Run Average and Marginal Costs:
We turn now to distinguish between long run average
and marginal costs. Long-run average cost is arrived at by
dividing the total cost of producing a particular output by the
number of units produced:
LRAC = LRTC/Q
Long-run marginal cost is the extra total cost of producing an
additional unit of output when all inputs are optimally
adjusted:
LRMC = ∆LRTC/∆Q
It, therefore, measures the change in total cost per unit of
output as the firm moves along the long run total cost curve
(or the expansion path). Figure 3.19 illustrates typical long-
run average and marginal cost curves. They have essentially
the same shape and relation to each other as in the short run.
Long-run average cost first declines, reaches a minimum (at Q2 in figure 3.19), then increases. Long-run marginal cost first
declines, reaches minimum at a lower output than that
associated with minimum average cost (Q1 in Figure 3.19),
and increases thereafter. The marginal cost intersects the
average cost curve at its lowest point (L in Figure 3.19) as in
the short-run. The reason is also the same. The reason has
been aptly summarized by Maurice and Smithson thus:
“When marginal cost is less than average cost, each
additional unit produced adds less than average cost to
total cost; so average cost must decrease. When marginal
cost is greater than average cost, each additional unit of the
good produced adds more than average cost to total cost; so
average cost must be increasing over this range of output.
Thus marginal cost must be equal to average cost when
average cost is at its minimum.
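This crossing can be checked numerically on an assumed cubic LRTC (with no fixed cost, so the curve starts at the origin as the text requires); the function below is chosen only for illustration:

```python
# LRMC crosses LRAC at the minimum of LRAC. Assumed cubic long-run
# total cost (no fixed cost, so the curve starts at the origin):
# LRTC(Q) = Q^3 - 6Q^2 + 15Q.

def lrtc(q):
    return q**3 - 6 * q**2 + 15 * q

def lrac(q):
    return lrtc(q) / q                    # = Q^2 - 6Q + 15

def lrmc(q, dq=1e-6):
    return (lrtc(q + dq) - lrtc(q)) / dq  # numerical marginal cost

grid = [i / 1000 for i in range(1, 10001)]
q_min_ac = min(grid, key=lrac)            # output where LRAC is lowest
q_min_mc = min(grid, key=lrmc)            # output where LRMC is lowest

print(q_min_mc, q_min_ac)                         # LRMC bottoms out first
print(round(lrmc(q_min_ac) - lrac(q_min_ac), 3))  # = 0 at min LRAC
```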

Figure 3.19

The Shape of the LAC: Economies and Diseconomies of Scale:
The shape of the long-run average cost depends on
certain advantages and disadvantages associated with large
scale production. These are known as economies and
diseconomies of scale.
Economies of Scale:
Various factors may give rise to economies of scale,
that is, to decreasing long-run average costs of production.
Greater Specialization of Resources:
With an expansion of a firm’s scale of operation, its
opportunities for specialization—whether performed by men
or by machines—are greatly enhanced. It is because a large-
scale firm can often divide the tasks and work to be done more
readily than a small-scale firm.

More Efficient Utilization of Equipment:


In some industries, the technology of production is
such that a large unit of costly equipment has to be used.
The production of automobiles, steel and refined petroleum
are obvious examples.
In such industries, companies must be able to afford
whatever equipment is necessary and must be able to use it
efficiently by spreading the cost per unit over a sufficiently
large volume of output. A small-scale firm cannot ordinarily
do these things.
Reduced Unit Costs of Inputs:
A large-scale firm can often buy its inputs-such as its
raw materials-at a cheaper price per unit and thus gets
discounts on bulk purchases. Moreover, for certain types of equipment, the price per unit of capacity is often much lower for larger sizes purchased. For instance, the construction
cost per square foot for a large factory is usually less than
that for a small one. Again, the price per horsepower of
various electric motors varies inversely with the amount of
horsepower.
Utilization of by-products:
In certain industries, larger-scale firms can make
effective use of many by-products that would go waste in a
small firm. A typical example is the sugar industry, where
by-products like molasses and bagasse are made use of.
Growth of Auxiliary Facilities:
In certain places, an expanding firm often benefits from, or encourages other firms to develop, ancillary facilities, such as warehousing, marketing, and transportation systems, thus saving the growing firm considerable costs. For example, commercial and industrial establishments often benefit from improved transportation and warehousing facilities.
Diseconomies of Scale:
With continuous expansion of the scale of operation
of a firm, a point may ultimately be reached when
diseconomies of scale begin to exercise a more than
offsetting effect on the firm’s cost curve. As a result, the long-
run average cost curve starts to rise.
This is attributable to the following two main reasons:
Decision-Making Role of Management:
As a firm becomes larger, heavier burdens are placed
on the management so that eventually this resource input is
overworked relative to others and ‘diminishing returns’ to
management set in. In fact, management is an indivisible input
which is not capable of continuous variation. With increase in
the size of organisation there occurs delay in decision-making.
Competition for Resources:
Rising long-run average costs can occur as a growing
firm increasingly bids labour or other resources away from
other industries. In the real world, it is very difficult, if not
virtually impossible, to determine just when diseconomies of
scale are encountered and when they become strong enough
to outweigh the economies of scale.
In business where economies of scale are negligible, diseconomies may soon assume paramount significance causing LAC to turn up at a relatively small volume of
output. Panel A of figure 3.20 shows a long run average cost
curve for a firm of this type. In other cases, economies of
scale assume strategic significance. Even after the efficiency
of management starts declining, technological economies of
scale may offset the diseconomies over a wide range of
output. Thus, the LAC curve may not slope upward until a
very large volume of output is produced. This case (typified
by the so-called natural monopolies) is illustrated in Panel B
of figure 3.20.
Figure 3.20

In many actual situations, however, neither of these extremes describes the behaviour of LAC. Economies of scale may be exhausted at a very modest scale of operation, while diseconomies may not set in until a very large volume of output is produced. In such a situation, LAC would have a long horizontal section as shown in Panel C of figure 3.20. It
is widely agreed by economists and business executives that
this type of LAC curve describes many production processes
in the real commercial world. For theoretical analysis, however, we continue to assume a “representative” LAC,
such as that illustrated earlier in figure 3.20.
Average Cost in the Long Run: Smooth Envelope Case:
We know that in the short-run the firm has a fixed
plant and it has a short run U-shaped cost curve SAC. If a
new and larger plant is built, the new SAC will be drawn
further to the right. We assume that the firm is still in the
planning stage and yet to undertake any fixed commitment. It
can now draw all possible different U-shaped SAC curves,
from which to choose one SAC for each specified level of
output that promises the lowest cost. As output increases, the
firm moves to a new SAC curve.
In the long run, the firm can change the size of the
plant. Starting from zero output level, successively larger
plants typically have lower and lower ATC up to some
output level and then successively higher ATC curves beyond.
The three representative ATC curves associated with the three
successively larger plants are shown in figure 3.21.
Figure 3.21

Plant I is the best plant for output levels less than 900
units because its AC curve is the lowest to the left of point a.
Plant II is the best plant size for output levels between 900 to
2,000 units, because its AC curve is the lowest between point
a and b. Plant III is the best plant size for output levels
greater than 2,000 units, since its AC curve is the lowest
beyond point b. If these are only three possible plant sizes,
the long run ATC curve will consist of the segments of Plant
I’s AC curve up to point a, the segment of plant II’s AC curve
between points a and b, and the segment of Plant Ill’s AC
curve from point of b and so on. The thick LAC is
composed of the three lowest branches of SACs. This is why
the LAC is called the envelope curve.
Figure 3.22 is the smooth envelope case. Writes
Samuelson: “In the long run, a firm can choose its best plant
sizes and its lower envelope curve.” Since there is an infinite
number of choices, we get LAC as a smooth envelope. And,
as in the short-run, we can derive LMC from LAC, and
LMC emerges from the minimum point of LAC with a
smoother slope than the SMC curve.
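The envelope idea can be sketched directly: at every output, long-run average cost is the lowest short-run average cost over all plant sizes. The SAC family below is an assumed parametrisation (a plant of size s has a U-shaped AC bottoming near q = s), not taken from the text:

```python
# Envelope sketch: long-run AC at each output is the lowest short-run AC
# over all available plant sizes. The SAC family below is an assumed
# parametrisation (plant of size s has a U-shaped AC bottoming near q = s).

def sac(q, s):
    return (q - s) ** 2 / s + 10 + s / 10

def lac(q):
    sizes = [i / 10 for i in range(10, 1001)]   # plant sizes 1.0 .. 100.0
    return min(sac(q, s) for s in sizes)

for q in (5, 20, 60):
    # the envelope never lies above the SAC of any particular plant
    assert lac(q) <= sac(q, 20)
print(round(lac(20), 2), round(sac(20, 20), 2))  # envelope is below SAC(20)
```

Note that the envelope is typically tangent to each SAC away from that SAC's own minimum point; only at the minimum of LAC does the tangency occur at the minimum of the corresponding SAC.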
Figure 3.22

Differences
The main difference between long run and short run
costs is that there are no fixed factors in the long run; there
are both fixed and variable factors in the short run. In the
long run the general price level, contractual wages, and
expectations adjust fully to the state of the economy. In the
short run these variables do not always adjust due to the
condensed time period. In order to be successful a firm must set realistic long run cost expectations. How the short run costs are handled determines whether the firm will succeed in the future.

MODULE IV
THEORY OF IMPERFECT MARKETS

Oligopoly
Oligopoly is a market situation in which there are
a few firms selling homogeneous or differentiated goods.
Though it is very difficult to specify the exact number of sellers, because there are only a few, the actions and decisions of one seller influence the others.
firms producing homogeneous products are known as pure or
perfect oligopoly and firms selling the differentiated goods
are known as imperfect or differentiated oligopoly. For
instance, pure oligopoly is found among the producers of
industrial goods like aluminum, zinc, copper, cement, steel,
crude oil etc, and imperfect oligopoly is found among the
producers of consumer goods like T.V., automobiles,
typewriters, refrigerators etc.
In some oligopolistic markets, some or all firms earn
substantial profits over the long run because barriers to entry
make it difficult or impossible for new firms to enter.
Oligopoly is a prevalent form of market structure. Managing
an oligopolistic firm is complicated because pricing, output,
advertising, and investment decisions involve important
strategic considerations. Because only a few firms are
competing, each firm must carefully consider how its actions
will affect its rivals, and how its rivals are likely to react.
When making decisions, each firm must weigh its

competitors’ reactions, knowing that these competitors will also weigh its reactions to their decisions. Furthermore,
decisions, reactions, reactions to reactions, and so forth are
dynamic, evolving over time. When the managers of a
firm evaluate the potential consequences of their decisions,
they must assume that their competitors are as rational and
intelligent as they are. Then, they must put themselves in their
competitors’ place and consider how they would react.
In perfectly competitive or monopolistic markets, each firm could take price or market demand as given and largely ignore its competitors. In an oligopolistic market, however, a firm sets price or output
based partly on strategic considerations regarding the
behaviour of its competitors. At the same time, competitors’
decisions depend on the first firm’s decision.
Characteristics
In addition to fewness of sellers, following are the
common characteristics of oligopolistic industries:
• Interdependence: There is complete interdependence
among sellers in this market. Since, there are few firms
producing a considerable fraction of the total output of the
industry, so the actions taken by one seller affect the
others. By reducing or increasing the price for the whole
oligopolistic market, one seller can sell more or less
quantity and can affect the profits of the other sellers.
This also implies that in this type of market, each seller is conscious of the price moves of the other sellers and is aware of their impact on his profit. In addition to this, he also anticipates the reactions of his rivals to his own price moves. Thus, there is full interdependence among the sellers in this market with respect to their price-output policies/decisions.
• Advertisement: Due to the interdependence of sellers in this market, it becomes very important for each individual seller to highlight his product and tell the consumers about its different features. Thus it becomes necessary for the firms in an oligopolistic market to spend much on advertisements and customer services, so that each can attract a larger market for his product and give tough competition to his rivals.
• Competition: Since under oligopoly there are a few sellers, a move by one seller immediately affects the rivals and is followed by their counter-moves. Thus we can say that there exists tough competition among all the sellers in an oligopolistic market.
• Barriers to entry of firms: Due to the intense competition in an oligopolistic market, there are no barriers to entry into the market or exit from it in the short run. However, in the long run, the types of barriers to entry which tend to restrain new firms from entering the industry are economies of scale, high capital requirements, exclusive patents and licenses, etc. Thus, when entry is restricted or blocked by such natural and artificial barriers, the oligopolistic industry can earn long-run supernormal profits.

• Lack of uniformity: There exists a lack of uniformity in the size of oligopolistic firms; some may be small and others very large. Such a situation is known as asymmetrical oligopoly; symmetrical oligopoly, with firms of a uniform size, is rare.
• Demand curve: It is not easy to sketch the demand curve for the product of an oligopolistic seller because, unless the exact behaviour pattern of a producer can be ascertained with certainty, his demand curve cannot be drawn accurately or with definiteness. And since an oligopolistic seller does not show a unique pricing pattern/behaviour, it is difficult to trace the demand curve for an oligopolistic seller. However, some economists have sketched demand curves based on certain assumptions, which are explained in the following sections.
• No unique pattern of pricing behaviour: Due to the rivalry arising from interdependence, each oligopolist wants to act independently and to earn the maximum possible profits. In pursuing this motive, sellers act and react to the price-output moves of one another in an atmosphere of uncertainty. To reduce or eliminate this uncertainty, rivals may be ready to cooperate: they may form a kind of formal agreement with regard to their price-output changes, which in turn leads to a kind of monopoly within oligopoly. They may even accept one seller as a leader at whose initiative all the other sellers raise or lower their prices. Hence, owing to these conflicting attitudes, it is not possible to predict any unique pattern of pricing behaviour in oligopolistic markets.
Demand curve and pricing in oligopoly market
If an industry is composed of few firms each selling
identical or homogenous products and having powerful
influence on the total market, the price and output policy of
each is likely to affect the other appreciably, therefore they
will try to promote collusion. In case there is product
differentiation, an oligopolist can raise or lower his price
without any fear of losing customers or of immediate reactions
from his rivals. However, keen rivalry among them may create
condition of monopolistic competition. There is no single
theory which satisfactorily explains oligopoly behaviour regarding price and output in the market. There is a set of theories, such as the Cournot Duopoly Model, the Bertrand Duopoly Model, the Chamberlin Model, the Kinked Demand Curve Model, the Centralised Cartel Model, the Price Leadership Model, etc., each developed on a particular set of assumptions about the reaction of other firms to the action of the firm under study. Price determination under oligopoly can be analysed under two models:
1) Non-collusive oligopoly models
2) Collusive oligopoly models
Non-collusive models
The common characteristic of these models is that they
assume a certain pattern of reaction of competitors in each

period and despite the fact that the ‘expected’ reaction does
not in fact materialise, the firms continue to assume that the
initial assumption holds. In other words, firms are assumed
never to learn from past experience, which makes their
behaviour at least naive (if not stupid).
Cournot model
The earliest duopoly model was developed in 1838 by
the French economist Augustin Cournot. Actually Cournot
illustrated his model with the example of two firms each
owning a spring of mineral water, which is produced at zero
costs. They sell their output in a market with a straight-line
demand curve. Each firm acts on the assumption that its
competitor will not change its output, and decides its own
output so as to maximise profit. Assume that firm A is the
first to start producing and selling mineral water. It will
produce quantity OA, at price P, where profits are at a maximum (figure 4.1), because at this point MC = MR = 0. The elasticity of market demand at this level of output is
equal to unity and the total revenue of the firm is a
maximum. With zero costs, maximum R implies maximum
profits, π. Now firm B assumes that A will keep its output
fixed (at OA), and hence considers that its own demand curve
is CD'. Clearly firm B will produce half the quantity AD',
because (under the Cournot assumption of fixed output of
the rival) at this level (AB) of output (and at price P') its
revenue and profit are at a maximum. B produces half of the market which has not been supplied by A, that is, B’s output is (1/2)(1/2) = 1/4 of the total market.

Firm A, faced with this situation, assumes that B will retain his quantity constant in the next period. So he will produce one-half of the market which is not supplied by B. Since B covers one-quarter of the market, A will, in the next period, produce (1/2)(1 − 1/4) = 3/8 of the total market.

Figure 4.1

Firm B reacts on the Cournot assumption, and will produce one-half of the unsupplied section of the market, i.e. (1/2)(1 − 3/8) = 5/16.

In the third period firm A will continue to assume that B will not change its quantity, and thus will produce one-half of the remainder of the market, i.e. (1/2)(1 − 5/16) = 11/32.

This action-reaction pattern continues, since firms have the naive behaviour of never learning from past patterns of reaction of their rival. However, eventually an equilibrium will

be reached in which each firm produces one-third of the total market. Together they cover two-thirds of the total market.
Each firm maximises its profit in each period, but the industry
profits are not maximised. That is, the firms would have higher
joint profits if they recognised their interdependence, after
their failure in forecasting the correct reaction of their rival.
Recognition of their interdependence (or open collusion)
would lead them to act as ‘a monopolist’, producing one-half
of the total market output, selling it at the profit-maximising
price P, and sharing the market equally, that is, each producing
one-quarter of the total market (instead of one-third).
The equilibrium of the Cournot firms may be obtained
as follows:
1. The product of firm A in successive periods is

period 1: 1/2
period 2: 1/2 − 1/8 = 3/8
period 3: 1/2 − (1/8 + 1/32) = 11/32
period 4: 1/2 − (1/8 + 1/32 + 1/128) = 43/128

We observe that the output of A declines gradually. We may rewrite this expression as follows:

Product of A in equilibrium = 1/2 − (1/8)[1/(1 − 1/4)] = 1/2 − 1/6 = 1/3

2. The product of firm B in successive periods is

period 2: 1/4
period 3: 1/4 + 1/16
period 4: 1/4 + 1/16 + 1/64
period 5: 1/4 + 1/16 + 1/64 + 1/256

We observe that B’s output increases, but at a declining rate. We may write

Product of B in equilibrium = 1/4 + 1/16 + 1/64 + …

Applying the expression for the summation of a declining geometric series we find

Product of B in equilibrium = (1/4)/(1 − 1/4) = 1/3

Thus the Cournot solution is stable. Each firm supplies 1/3 of the market, at a common price which is lower than the monopoly price, but above the pure competitive price (which is zero in the Cournot example of costless production). It can be shown that if there are three firms in the industry, each will produce one-quarter of the market and all of them together will supply 3/4 of the entire market OD'. And, in general, if there are n firms in the industry each will provide 1/(n + 1) of the market, and the industry output will be n/(n + 1).
Clearly as more firms are assumed to exist in the industry, the
higher the total quantity supplied and hence the lower the

price. The larger the number of firms the closer is output and
price to the competitive level.
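The action-reaction process described above can be simulated directly; the outputs below are market shares (total market = 1, costless production as in the text):

```python
# Cournot action-reaction: each period the mover supplies half of the
# market left unsupplied by its rival (shares of a total market of 1,
# costless production as in the text).

def cournot_shares(periods=60):
    a = b = 0.0
    for t in range(periods):
        if t % 2 == 0:
            a = 0.5 * (1 - b)   # A's naive best response to B's output
        else:
            b = 0.5 * (1 - a)   # B's naive best response to A's output
    return a, b

a, b = cournot_shares()
print(round(a, 4), round(b, 4))     # both shares converge to 1/3

# n-firm generalisation: each firm supplies 1/(n+1) of the market,
# so the industry as a whole supplies n/(n+1).
for n in (2, 3, 10):
    print(n, round(1 / (n + 1), 3), round(n / (n + 1), 3))
```

Running the first few periods reproduces the text's sequence exactly: A's shares 1/2, 3/8, 11/32, … and B's shares 1/4, 5/16, ….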
Bertrand’s Model
Bertrand developed his duopoly model in 1883. His
model differs from Cournot’s in that he assumes that each firm
expects that the rival will keep its price constant, irrespective
of its own decision about pricing. Thus each firm is faced by
the same market demand, and aims at the maximisation of its
own profit on the assumption that the price of the competitor
will remain constant.
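The logic of the Bertrand assumption can be sketched as a best-response process: in the classic homogeneous-product version with a constant unit cost, each firm, taking the rival's price as fixed, slightly undercuts it whenever that price is above unit cost. The numbers below (unit cost 10, starting prices, tick size) are assumptions for illustration:

```python
# Bertrand best-response sketch (classic homogeneous-product version
# with constant unit cost): taking the rival's price as fixed, each firm
# undercuts it by one tick whenever it is above unit cost. All numbers
# here are assumptions for illustration.

COST, TICK = 10.0, 0.5      # unit cost and smallest feasible price cut

def best_response(rival_price):
    # undercut while profitable; never price below unit cost
    return max(COST, rival_price - TICK)

pa, pb = 30.0, 28.0          # arbitrary starting prices
for _ in range(100):
    pa = best_response(pb)
    pb = best_response(pa)

print(pa, pb)                # the price war drives both down to unit cost
```

This price war to unit cost is the standard Bertrand conclusion; the isoprofit-curve apparatus developed next formalises the same reaction logic graphically.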
The model may be presented with the analytical tools
of the reaction functions of the duopolists. In Bertrand’s model
the reaction curves are derived from isoprofit maps which are
convex to the axes, on which we now measure the prices of the
duopolists. Each isoprofit curve for firm A shows the same
level of profit which would accrue to A from various levels of
prices charged by this firm and its rival. The isoprofit curve for
A is convex to its price axis (PA). This shape shows the fact that firm A must lower its price up to a certain level (point e in figure 4.2) to meet the price cutting of its competitor, in order to maintain the level of its profits at πA2. However, after that price level has been reached, and if B continues to cut its price, firm A will be unable to retain its profits, even if it keeps its own price unchanged (at PAe). If, for example, firm B cuts its price to PB, firm A will find itself on a lower isoprofit curve (πA1) which shows lower profits. The reduction of profits of A is due to the fall in price, and the increase in output beyond the optimal level of utilisation of the plant with the consequent
160
ECO1C01 - MICROECONOMICS: THEORY AND APPLICATIONS-I

increase in costs. Clearly the lower the isoprofit curve, the


lower the level of profits.
Figure 4.2

Similarly, an isoprofit curve for firm B is the locus of


points of different levels of output of the two competitors
which yield to B the same level of profit (figure 4.3).
Figure 4.3


From the above definitions it should be clear that the


isoprofit curves are a type of indifference curves. There is a
whole family of isoprofit curves for each firm which have the
following properties.
1. Isoprofit curves for substitute commodities are
concave to the axes along which we measure the output of
the rival firms. For example, an isoprofit curve of firm A is
concave to the horizontal axis QA. This shape shows how A
can react to B's output decisions so as to retain a given level
of profit. For example, consider the isoprofit curve ΠA1 in
figure 4.4. Suppose that firm B decides to produce the level of
output B1. A line parallel to the horizontal axis through B1
intersects the isoprofit curve ΠA1 at points h and g. This shows
that given the output that B decides to produce, firm A will
realise the profit ΠA1 if it produces either of the two levels of
output corresponding to points h and g, that is, either Ah or
Ag.
Figure 4.4


Assume that firm A decides to react by producing the higher


level Ag. If now firm B increases its output (say at the level
B2), firm A must decrease its output (at A1) if it wants to
retain its profit at the same level (ΠA1). If firm A continued to
produce Ag while B increased its production, the total quantity
supplied in the market would depress the price, and hence the
profit of firm A would decline. Up to a certain point (e in
figure 4.4) firm A must react to increases in B's output by
reducing its own production, otherwise the market price
would fall and A's profit would decrease. As firm A reduces
its output, its costs also change, but the net profit (Π = R - C)
remains at the same level (ΠA1) because of market elasticity
and/or decreasing costs arising from a better utilisation of A's
plant.
Consider now point h. If firm A reacts to B's initial
decision by producing the lower output Ah, it will clearly
earn the same profit ΠA1. If firm B decides to increase its
output (at the levels B2 , B3 and so on, up to B.), firm A will
react by increasing its output as well: A’s profit will remain
the same despite the resulting fall in the market price, because
of market elasticity and/or decrease in its costs due to a better
utilisation of its plant.
2. The farther the isoprofit curves (for substitute
commodities) lie from the axes, the lower is the profit. And
vice versa, the closer to the quantity-axis an isoprofit curve
lies, the higher the profitability of the firm is. Consider figure
4.5.


Figure 4.5

If firm B were to increase its output beyond Be, firm A


would not be able to retain its level of profit. Suppose that
firm B decides to produce B4. Firm A can react in three
ways: increase, decrease or retain its output constant (at Ae). If
A retains its output constant while B increases its production,
the ensuing fall in the market price will result in a reduction in
the revenue and in the profits of A, given its costs. If firm A
were to increase output beyond Ae, its profit would fall
because of inelasticity of demand and/or increasing costs. If
firm A were to reduce output below Ae, its profit would fall
because of elasticity of demand and/or increasing costs. Thus
firm A will earn a lower level of profit, no matter what its
reaction, if B increased its output beyond Be. A line through
B4 parallel to the QA-axis lies above ΠA1 and will intersect (or
be tangent to) an isoprofit curve which represents a lower
profit for firm A. In figure 4.5 the isoprofit curve ΠA2
represents a lower profit than ΠA1.
To summarise: for any given output that firm B may produce,
there will be a unique level of output for firm A which

maximises the latter's profit. This unique profit-maximising


level of output will be determined by the point of tangency
of the line through the given output of firm B and the lowest
attainable isoprofit curve of firm A. In other words, the profit-
maximising output of A (for any given quantity of B) is
established at the highest point on the lowest attainable
isoprofit curve of A.
3. For firm A, the highest points of successive isoprofit
curves lie to the left of each other. If we join the highest
points of the isoprofit curves we obtain firm A’s reaction
curve. Thus, the reaction curve of firm A is the locus of
points of highest profits that firm A can attain, given the level
of output of rival B. It is called ‘reaction curve’ because it
shows how firm A will determine its output as a reaction to
B’s decision to produce a certain level of output. A's
reaction curve is shown in figure 4.6.
Figure 4.6


B’s isoprofit curves are concave to the QB axis. Their shape


and position are determined by the same factors as the ones
underlying firm A’s isoprofit curves. The highest points of
the isoprofit curves of B lie to the right of each other as we
move to curves further away from the QB axis. If we join
these highest points we obtain B’s reaction function (figure
4.7). Each point of the reaction curve shows how much
output B must produce in order to maximise its own profit,
given the level of output of its rival.
Figure 4.7
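A reaction curve can also be traced numerically. The sketch below (a hypothetical grid search, again using the normalised costless demand P = 1 − (qA + qB); the helper name is my own) recovers firm A's best reply (1 − qB)/2 for several outputs of B.

```python
# Sketch: tracing firm A's reaction curve for the costless example
# with demand normalised to P = 1 - (qA + qB).
# For each output of the rival, A's profit qA*(1 - qA - qB) is
# maximised at qA = (1 - qB)/2 -- the reaction curve.

def best_reply(q_rival, grid=100000):
    qs = [i / grid for i in range(grid + 1)]
    return max(qs, key=lambda q: q * (1.0 - q - q_rival))

for qB in (0.0, 0.2, 0.5):
    qA = best_reply(qB)
    # grid-search answer matches the closed-form reaction curve
    assert abs(qA - (1.0 - qB) / 2.0) < 1e-3
    print(qB, round(qA, 3))   # -> (0.0, 0.5), (0.2, 0.4), (0.5, 0.25)
```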

Cournot’s equilibrium is determined by the


intersection of the two reaction curves. It is a stable
equilibrium, provided that A’s reaction curve is steeper than
B’s reaction curve. (This condition is satisfied by the
assumption we made that the highest points of successive
isoprofit curves of A lie to the left of one another, while the
highest points of B’s isoprofit curves lie to the right of each

other.) To see that, let us examine the situation arising from


A’s decision to produce quantity A1, lower than the equilibrium
quantity Ae (figure 4.8).
Figure 4.8

Firm B will react by producing B1 given the Cournot


assumption that firm A will keep its quantity fixed at A1.
However, A reacts by producing a higher quantity, of A2,
on the assumption that B will stay at the level B1. Now firm
B reacts by reducing its quantity to B2. This adjustment will
continue until point e is reached. The same equilibrium
would be reached if we started from a point to the right of e.
Thus e is a stable equilibrium. Note that at point e each firm
maximises its own profit, but the industry (joint profit) is not
maximised (figure 4.9).


Figure 4.9

This is easily seen by a curve similar to Edgeworth’s


contract curve which traces points of tangency of the two
firms’ isoprofit curves. Points on the contract curve are
optimal in the sense that points off this curve imply a lower
profit for one or both firms, that is, less industry profits as
compared to points on the curve. Point e is a suboptimal
point, and total industry profits would be higher if firms
moved away from it on any point between a and b on the
contract curve: at point a firm A would continue to have the
same profit ΠA3 while firm B would have a higher profit (ΠB2 >
ΠB3). At point b firm B would remain on the same isoprofit
curve ΠB3 while firm A would move to a higher isoprofit
curve (ΠA2 > ΠA3). Finally at any intermediate point between a
and b, e.g. at c, both firms would realise higher profits. The
question arises of why the firms choose the suboptimal
equilibrium e. The answer is that the Cournot pattern of


behaviour implies that the firms do not learn from past


experience, each expecting the other to remain at a given
position. Each firm acts independently, in that it does not know
that the other behaves on the same assumption (behavioural
pattern).
The graphical solution of Cournot’s model is found
by the intersection of the two reaction curves which are
plotted in figure 4.10.
Figure 4.10

Bertrand’s Model
Bertrand developed his duopoly model in 1883. His
model differs from Cournot’s in that he assumes that each
firm expects that the rival will keep its price constant,
irrespective of its own decision about pricing. Thus each firm
is faced by the same market demand, and aims at the
maximisation of its own profit on the assumption that the
price of the competitor will remain constant.
The model may be presented with the analytical


tools of the reaction functions of the duopolists. In


Bertrand’s model the reaction curves are derived from isoprofit
maps which are convex to the axes, on which we now
measure the prices of the duopolists. Each isoprofit curve for
firm A shows the same level of profit which would accrue to
A from various levels of prices charged by this firm and its
rival. The isoprofit curve for A is convex to its price axis
(PA). This shape shows the fact that firm A must lower its
price up to a certain level (point e in figure 4.11) to meet the
cutting of price of its competitor, in order to maintain the
level of its profits at πA2. However, after that price level has
been reached and if B continues to cut its price, firm A will
be unable to retain its profits, even if it keeps its own
price unchanged (at PAe). If, for example, firm B cuts its
price to PB, firm A will find itself at a lower isoprofit curve
(πA1) which shows lower profits.
Figure 4.11


The reduction of profits of A is due to the fall in price, and the


increase in output beyond the optimal level of utilisation of
the plant with the consequent increase in costs. Clearly the
lower the isoprofit curve, the lower the level of profits.
To summarise: for any price charged by firm B there will be
a unique price of firm A which maximises the latter’s profit.
This unique profit-maximising price is determined at the
lowest point on the highest attainable isoprofit curve of A.
The minimum points of the isoprofit curves lie to the right
of each other, reflecting the fact that as firm A moves to a
higher level of profit, it gains some of the customers of B
when the latter increases its price, even if A also raises its
price. If we join the lowest points of the successive isoprofit
curves we obtain the reaction curve (or conjectural variation)
of firm A: this is the locus of points of maximum profits
that A can attain by charging a certain price, given the price of
its rival.
The reaction curve of firm B may be derived in a
similar way, by joining the lowest points of its isoprofit curves
(figure 4.12).
Figure 4.12


Bertrand’s model leads to a stable equilibrium, defined


by the point of intersection of the two reaction curves (figure
4.13). Point e denotes a stable equilibrium, since any departure
from it sets in motion forces which will lead back to point e
at which the price charged by A and B are PAe and PBe
respectively. For example, if firm A charges a lower price
PA1 firm B will charge PB1 because on the Bertrand
assumption, this price will maximise B’s profit (given PA1).
Firm A will react to this decision of its rival by charging a
higher price PA2. Firm B will react by increasing its price, and
so on, until point e is reached, when the market will be in
equilibrium. The same equilibrium will be reached if firms
started by charging a price higher than PAe or PBe: a
competitive price cut would take place which would drive both
prices down to their equilibrium level PAe and PBe.
Figure 4.13
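The price-adjustment process can be sketched in code. Upward-sloping price reaction curves presuppose some product differentiation, so the snippet assumes an illustrative symmetric demand qi = 1 − pi + 0.5·pj with zero costs; these numbers are my own, not from the text. Each firm's best reply is then pi = (1 + 0.5·pj)/2, and alternating replies climb to the equilibrium, as in figure 4.13.

```python
# Sketch of Bertrand price adjustment with differentiated products.
# Assumed (illustrative) demand: q_i = 1 - p_i + 0.5 * p_j, zero costs.
# Maximising p_i * (1 - p_i + 0.5 * p_j) gives the reaction curve
# p_i = (1 + 0.5 * p_j) / 2.

def best_price(p_rival):
    return (1.0 + 0.5 * p_rival) / 2.0

pA = pB = 0.0                       # start below equilibrium, as in the text
for _ in range(50):
    pA = best_price(pB)             # A reacts to B's current price
    pB = best_price(pA)             # B reacts in turn
print(round(pA, 4), round(pB, 4))   # both prices converge to 2/3 -> 0.6667 0.6667
```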

Note that Bertrand’s model does not lead to the


maximisation of the industry (joint) profit, due to the fact
that firms behave naively, by always assuming that their


rival will keep its price fixed, and they never learn from past
experience, which showed that the rival did not in fact keep its
price constant. The industry profit could be increased if firms
recognised their past mistakes and abandoned the Bertrand
pattern of behaviour (figure 4.14).
Figure 4.14

If firms moved on any point between c and d on the


Edgeworth contract curve (which is the locus of points of
tangency of the isoprofit curves of the competitors) one or
both firms would have higher profits, and hence industry
profits would be higher. At point c firm B would retain the
same profit (B6) as at point e, while A would move to higher
profit level (A9). At point d firm A would have the same
profit (A5) as at the Bertrand equilibrium e, but firm B would
move to a higher isoprofit curve (B10). Finally, at any point
between c and d (e.g. at f) both firms would realise higher


profits (A7 and B8) as compared to those attained at
Bertrand’s solution (A7 > A5 and B8 > B6).
Chamberlin’s model (Small Group Model)
Chamberlin’s contribution to the theory of oligopoly
consists in his suggestion that a stable equilibrium can be
reached with the monopoly price being charged by all
firms, if firms recognise their interdependence and act so as
to maximise the industry profit (monopoly profit).
Chamberlin accepts that if firms do not recognise
their interdependence, the industry will reach either the
Cournot equilibrium, if each firm acts independently on the
assumption that the rivals will keep their output constant; or
the industry will reach the Bertrand equilibrium if each firm
acts independently, trying to maximise its own profit on the
assumption that the other rivals will keep their price
unchanged.
Chamberlin, however, rejects the assumption of
independent action by competitors. He says that the firms do
in fact recognise their interdependence. Firms are not as
naive as Cournot and Bertrand assume. Firms, when
changing their price or output, recognise the direct and
indirect effects of their decisions. The direct effects are those
which would occur if competitors were assumed to remain
passive (either in the Cournot or in the Bertrand sense). The
indirect effects are those which result from the fact that
rivals do not in fact remain passive but react to the decisions
of the firm which changes its price or output. The recognition


of the full effects (direct and indirect) of a change in the


firm’s output (or price) results in a stable industry equilibrium
with the monopoly price and monopoly output. Chamberlin
assumes that the monopoly solution (with industry or joint
profits being maximised) can be achieved without collusion:
the entrepreneurs are assumed to be intelligent enough to
quickly recognise their interdependence, learn from their past
mistakes and adopt the best (for all) position, which is
charging the monopoly price.
Chamberlin’s model can best be understood if
presented in a duopoly market. Initially Chamberlin’s model
is the same as Cournot’s. The market demand is a straight
line with negative slope, and production is assumed costless
for simplicity (figure 4.15).
Figure 4.15

If firm A is the first to start production it will produce the


profit maximising output OXM and sell it at the monopoly


price PM. Firm B, under the Cournot assumption that the rival
A will retain his quantity unchanged, considers that its
demand curve is CD and will attempt to maximise its profit
by producing one-half of this demand, that is, quantity XMB
(at which B’s MR = MC = 0). As a consequence the total
industry output is OB and the price falls to P. Now firm A
realises that its rival does in fact react to its actions, and
taking that into account decides to reduce its output to OA
which is one-half of OXM and equal to B’s output. The
industry output is thus OXM and price rises to the monopoly
level OPM. Firm B realises that this is the best for both of
them and so will keep its output the same at XMB = AXM.
Thus, by recognising their interdependence the firms reach
the monopoly solution. Under the assumption of our example
of equal costs (that is, costs = 0) the market will be shared
equally between A and B (clearly OA = AXM).
Chamberlin’s model is an advance over the previous
models in that it assumes that the firms are sophisticated
enough to realise their interdependence, and that it leads to a
stable equilibrium, which is the monopoly solution.
However, joint profit maximisation via non-collusive
action implies that firms have a good knowledge of the
market-demand curve and that they soon realise their
mistakes. That is, they somehow acquire knowledge of the
total-supply curve (i.e. of the individual costs of the rivals)
and hence they define the (monopoly) price which is best for
the group as a whole.
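The gain from recognising interdependence can be verified in the costless example (with demand normalised to P = 1 − Q; the normalisation is an illustrative choice): each duopolist earns 1/9 at the Cournot outcome but 1/8 at the shared monopoly output.

```python
# Sketch comparing the Cournot and Chamberlin (joint-monopoly) outcomes
# for the costless example, demand normalised to P = 1 - Q.

def profit(q_own, q_rival):
    return q_own * (1.0 - q_own - q_rival)   # revenue; costs are zero

# Cournot: each firm supplies 1/3 of the market
cournot = profit(1/3, 1/3)                   # = 1/9 each
# Chamberlin: monopoly output 1/2 shared equally, 1/4 each
chamberlin = profit(1/4, 1/4)                # = 1/8 each
print(cournot, chamberlin)                   # 0.111... < 0.125
assert chamberlin > cournot                  # both firms gain from recognition
```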


Kinked demand curve model of Sweezy


The kinked-demand curve as a tool of analysis
originated from Chamberlin’s intersection of the individual
dd curve of the firm and its market-share curve DD'.
However, Chamberlin himself did not use ‘kinked-demand’
in his analysis. Hall and Hitch in their famous article ‘Price
Theory and Business Behaviour’ used the kinked-demand
curve not as a tool of analysis for the determination of the
price and output in oligopolistic markets, but to explain why
the price, once determined on the basis of the average-cost
principle, will remain ‘sticky’. That is, Hall and Hitch use the
kinked-demand curve in order to explain the ‘stickiness’ of
prices in oligopolistic markets, but not as a tool for the
determination of the price itself, which is decided on the
average-cost principle. However, in the same year (1939), P.
Sweezy published an article in which he introduced the
kinked-demand curve as an operational tool for the
determination of the equilibrium in oligopolistic markets. The
demand curve of the oligopolist has a kink (at point E in
figure 4.16), reflecting the following behavioural pattern. If
the entrepreneur reduces his price he expects that his
competitors will follow suit, matching the price cut, so
that, although the demand in the market increases, the
shares of competitors remain unchanged. Thus for price
reductions below P (which corresponds to the point of the
kink) the share-of-the market- demand curve is the relevant
curve for decision-making.


Figure 4.16

However, the entrepreneur expects that his competitors will


not follow him if he increases his price, so that he will lose a
considerable part of his custom. Thus for price increases
above P, the relevant demand curve is the section dE of the dd'
curve. The upper section of the kinked- demand curve has a
higher price elasticity than the lower part. Due to the kink in
the demand curve of the oligopolist, his MR curve is
discontinuous at the level of output corresponding to the kink.
The MR has two segments: segment dA corresponds
to the upper part of the demand curve, while the segment
from point B corresponds to the lower part of the kinked-
demand curve. The equilibrium of the firm is defined by the
point of the kink because at any point to the left of the kink
MC is below the MR, while to the right of the kink the MC is
larger than the MR. Thus total profit is maximised at the
point of the kink. However, this equilibrium is not
necessarily defined by the intersection of the MC and the


MR curve. Indeed in general the MC passes somewhere


through the discontinuous segment AB, and in that sense one
might argue that, although marginalistic calculations are
behind the ‘kink-equilibrium,’ the kinked- demand curve is a
manifestation of the breakdown of the basic marginalistic rule
according to which the price and output level that maximise
profit are defined by equating MC with MR. Intersection of
the MC with the MR segments requires abnormally high or
abnormally low costs, which are rather rare in practice. The
discontinuity (between A and B) of the MR curve implies that
there is a range within which costs may change without
affecting the equilibrium P and X of the firm.
In figure 4.16, so long as MC passes through the
segment AB, the firm maximises its profits by producing P
and X. This level of price and output is compatible with a wide
range of costs. Thus the kink can explain why price and
output will not change despite changes in costs (within the
range AB defined by the discontinuity of the MR curve). The
greater the difference of elasticities of the upper and lower
parts of the kinked-demand curve, the wider the discontinuity
in the MR curve, and hence the wider the range of cost
conditions compatible with the equilibrium price P and output
X.
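The discontinuity of the MR curve can be illustrated with a small sketch. The two linear demand segments below are invented numbers, chosen only so that they intersect at a kink: any marginal cost falling between the two MR values at the kink leaves price and output unchanged.

```python
# Sketch of the discontinuous MR under a kinked demand curve.
# Illustrative (invented) segments: above the kink P = 100 - 0.5*X,
# below the kink P = 130 - 1.25*X; they intersect at X = 40, P = 80.
# For linear demand P = a - b*X, marginal revenue is MR = a - 2*b*X.

X_KINK = 40.0

def mr_upper(x):
    return 100.0 - 2 * 0.5 * x      # MR of the elastic (upper) segment

def mr_lower(x):
    return 130.0 - 2 * 1.25 * x     # MR of the less elastic (lower) segment

gap_top = mr_upper(X_KINK)          # 60
gap_bottom = mr_lower(X_KINK)       # 30
print(gap_bottom, gap_top)          # -> 30.0 60.0

for mc in (35.0, 45.0, 55.0):       # cost changes within the gap ...
    assert gap_bottom <= mc <= gap_top   # ... leave P = 80 and X = 40 unchanged
```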
There is only one case in which a rise in costs will
most certainly induce the firm to increase its price, despite
the fact that the higher costs pass through the
discontinuity of the MR curve. This occurs when the rise in
costs is general (for example, imposition of a sales tax) and


affects all firms equally. Under these circumstances the firm


will increase its price with the certainty that the others in
the industry will follow, since their costs are similarly
affected. The point of the kink shifts upwards to the left, and
equilibrium is established at a higher price and a lower
output (figure 4.17). The firms, via independent action,
move closer to the point of joint profit maximisation.
Figure 4.17

Furthermore there is a range through which demand


may shift without a change in price although quantity will
change. If the demand curve is kinked, a shift in the market
demand upwards or downwards will affect the volume of
output, but not the level of price, so long as the cost passes
within the range of the discontinuity of the new MR. In this
case the shift occurs along the same price line (figure 4.18).
As the market expands, the firm will not raise its price,
because its (given) cost continues to pass through the
discontinuity of the new MR curve and hence there is no
incentive to change P, although output will increase. Prima


facie the kinked-demand hypothesis appears attractive. The
behavioural pattern implied by the kink seems quite realistic
in the highly competitive business world which is dominated
by strongly competing oligopolists.
Figure 4.18

The kinked-demand curve can explain the ‘stickiness’ of


prices in a situation of changing costs and of high rivalry.
The kink is the consequence (manifestation) of the
uncertainty of the oligopolists and of their expectations that
competitors will match price cuts, but not price increases.
Stackelberg’s model
This model was developed by the German economist
Heinrich von Stackelberg and is an extension of Cournot’s
model. It is assumed, by von Stackelberg, that one
duopolist is sufficiently sophisticated to recognise that his
competitor acts on the Cournot assumption. This recognition


allows the sophisticated duopolist to determine the reaction


curve of his rival and incorporate it in his own profit
function, which he then proceeds to maximise like a
monopolist.
Figure 4.19

Assume that the isoprofit curves and the reaction


functions of the duopolists are those depicted in figure 4.19. If
firm A is the sophisticated oligopolist, it will assume that its
rival will act on the basis of its own reaction curve. This
recognition will permit firm A to choose to set its own output
at the level which maximises its own profit. This is point a
(in figure 4.19) which lies on the lowest possible isoprofit
curve of A, denoting the maximum profit A can achieve
given B’s reaction curve. Firm A, acting as a monopolist (by
incorporating B’s reaction curve in his profit-maximising
computations) will produce XA, and firm B will react by
producing XB according to its reaction curve.


The sophisticated oligopolist becomes in effect the


leader, while the naive rival who acts on the Cournot
assumption becomes the follower. Clearly sophistication is
rewarding for A because he reaches an isoprofit curve
closer to his axis than if he behaved with the same naivety as
his rival. The naive follower is worse off as compared with the
Cournot equilibrium, since with this level of output he reaches
an isoprofit curve further away from his axis. If firm B is the
sophisticated oligopolist, it will choose to produce XB,
corresponding to point b on A’s reaction curve, because this is
the largest profit that B can achieve given his isoprofit map
and A’s reaction curve. Firm B will now be the leader while
firm A becomes the follower. B has a higher profit and the
naive firm A has a lower profit as compared with the
Cournot equilibrium.
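The leader's advantage can be checked numerically in the costless linear-demand example (normalised to P = 1 − Q; the grid search is an illustrative device). The leader maximises q(1 − q)/2 after substituting the follower's reaction curve, producing 1/2 against the follower's 1/4.

```python
# Sketch: Stackelberg leader vs follower, costless example P = 1 - Q.

def follower(q_leader):
    return (1.0 - q_leader) / 2.0            # naive Cournot reaction curve

def leader_profit(q):
    return q * (1.0 - q - follower(q))       # leader internalises the reaction

# The leader's profit q(1 - q)/2 peaks at q = 1/2 (grid-search sketch)
qs = [i / 1000 for i in range(1001)]
qL = max(qs, key=leader_profit)
qF = follower(qL)
price = 1.0 - qL - qF
print(qL, qF, price)                          # -> 0.5 0.25 0.25

pi_L, pi_F = qL * price, qF * price           # 0.125 and 0.0625
pi_cournot = (1/3) * (1/3)                    # about 0.111 per firm at Cournot
assert pi_L > pi_cournot > pi_F               # sophistication pays; naivety does not
```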
In summary, if only one firm is sophisticated, it
will emerge as the leader, and a stable equilibrium will
emerge, since the naive firm will act as a follower. However,
if both firms are sophisticated, then both will want to act as
leaders, because this action yields a greater profit to them. In
this case the market situation becomes unstable. The situation
is known as Stackelberg’s disequilibrium and the effect will
either be a price war until one of the firms surrenders and
agrees to act as follower, or a collusion is reached, with both
firms abandoning their naive reaction functions and moving to
a point closer to (or on) the Edgeworth contract curve with
both of them attaining higher profits. If the final equilibrium
lies on the Edgeworth contract curve the industry profits (joint
profits) are maximised (figure 4.20).


Figure 4.20

Von Stackelberg’s model has interesting implications.


It shows clearly that naive behaviour does not pay. The rivals
should recognise their interdependence. By recognising the
other’s reactions each duopolist can reach a higher level of
profit for himself. If both firms start recognising their mutual
interdependence, each starts worrying about the rival’s profits
and the rival’s reactions. If each ignores the other, a price
war will be inevitable, as a result of which both will be worse
off.
The model shows that a bargaining procedure and
a collusive agreement become advantageous to both
duopolists. With such a collusive agreement the duopolists
may reach a point on the Edgeworth contract curve, thus
attaining joint profit maximisation. It should be noted that
Stackelberg's model of sophisticated behaviour is not
applicable in a market in which the firms behave on
Bertrand’s assumption. However, in a Bertrand-type market

the sophisticated duopolist can do nothing which would


increase his own profit and persuade the other to stop price-
cutting. The most he can do is to keep his own price
constant, that is, behave exactly as his opponent expects him
to behave.
We may now summarise Stackelberg’s model. Each
duopolist estimates the maximum profit that he would earn
(a) if he acted as leader, (b) if he acted as follower, and
chooses the behaviour which yields the largest maximum.
Four situations may arise: (1) Duopolist A wants to be
leader and B wants to be follower. (2) Duopolist B wants to
be leader and A wants to be follower. (3) Both firms want to
be followers. (4) Both firms desire to be leaders. In situations
(1) and (2) the result is a determinate equilibrium (provided
that the first- and second-order conditions for maxima are
fulfilled). If both firms desire to be followers, their
expectations do not materialise (since each assumes that the
rival will act as a leader), and they must revise them. Two
behavioural patterns are possible. If each duopolist
recognises that his rival wants also to be a follower, the
Cournot equilibrium is reached. Otherwise, one of the rivals
must alter his behaviour and act as a leader before
equilibrium is attained. Finally, if both duopolists want to be
leaders a disequilibrium arises, whose outcome, according to
Stackelberg, is economic warfare. Equilibrium will be
reached either by collusion, or after the ‘weaker’ firm is
eliminated or succumbs to the leadership of the other.


Welfare Properties of Duopolistic Markets


Now we can discuss some long run welfare effects
of oligopoly. First, as in the case of monopoly and
monopolistic competition, oligopolists usually do not
produce at the lowest point on their LAC curve. This would
only occur by sheer coincidence if the oligopolist’s MR curve
intersected the LAC curve at the lowest point of the latter.
Only under perfect competition will firms produce at the
lowest point on the LAC in the long run. Oligopoly,
however, often results because of the smallness of the market
in relation to the optimum size of the firm, and so it does not
make much sense to compare oligopoly to perfect competition.
Automobiles, steel, aluminium and many other products
could only be produced at prohibitive costs under perfectly
competitive conditions.
Second, as in the case of monopoly oligopolists can
earn long run profits and so price can exceed LAC. This is
to be contrasted with the case of perfect competition and
monopolistic competition where P=LAC in the long run.
However, some economists believe that oligopolists utilise a
great deal of their profits for research and development to
produce new and better products and to find cheaper
production methods. These are the primary sources of growth
in modern economies. These same economists point out that,
monopolists do not have as much incentive to engage in
R&D, and perfect competitors and monopolistic competitors
are too small and do not have the resources to do so on a large
scale.


Third, as in imperfect competition in general, P > LMC
under oligopoly, and so there is underallocation of resources
to the industry. Specifically, since the demand curve facing
oligopolists is negatively sloped, P > MR. Thus at the best level
of output (given by the point where the LMC intersects the
firm’s MR curve from below), P > LMC. This means that
society values an additional unit of the commodity more than
the marginal cost of producing it. But again, P=LMC only
under perfect competition, and economies of scale may
make perfect competition infeasible.
Fourth, while some advertising and product
differentiation are useful because they provide information
and satisfy the consumer’s tastes for diversity, they are
likely to be pushed beyond what is socially desirable in
oligopolistic markets. It is difficult, however, to determine
exactly how much advertising and product differentiation is
socially desirable in the real world. For example, the cost of
model changes has equalled about one-fourth of the price of a new
automobile in many years. To the extent that consumers
purchase new automobiles and choose to have the options
introduced into the new models, we can infer that most of
the costs of model changes are wanted by consumers and do
not represent a waste of resources. Nevertheless, the demand
for some model changes and for some new options is
surely created by advertising and may not represent true needs.
We will now investigate the social surplus created at
the Cournot equilibrium in the duopoly. The duopoly social
surplus lies between the social surplus in the monopoly
market, and the social surplus in the competitive market. In
short, duopoly (and more generally oligopoly) creates some
deadweight loss, but not as much as monopoly creates.
Total social surplus under monopoly would be consumers’
surplus plus producers’ surplus. If this market became a
duopoly (a transition from monopoly to duopoly), consumers’
surplus would grow and producers’ surplus would shrink,
but the sum of the two welfare measures would definitely
grow. Finally, if this were a competitive market, the
equilibrium would require that price equal marginal cost. In
a transition from duopoly to competition, consumers’ surplus
would greatly expand and producers’ surplus would
disappear. We conclude that the competitive outcome is best
for society in the sense that it maximizes social surplus. The
Cournot equilibrium in a duopoly is worse than the
competitive outcome. The monopoly outcome is the worst of
all.
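This welfare ranking can be checked numerically for a linear demand curve with constant marginal cost. The parameter values below (a choke price of 100, unit slope, marginal cost 20) are hypothetical, chosen purely for illustration; the sketch simply compares total surplus at the monopoly, Cournot-duopoly and competitive outputs.

```python
def surplus(a, b, c, quantity):
    """Consumer, producer and total surplus when `quantity` is sold on the
    linear demand curve P = a - b*Q with constant marginal cost c."""
    price = a - b * quantity
    consumer = 0.5 * b * quantity ** 2     # triangle under demand, above price
    producer = (price - c) * quantity      # profit rectangle (no fixed costs)
    return consumer, producer, consumer + producer

a, b, c = 100.0, 1.0, 20.0                 # hypothetical demand/cost parameters

q_monopoly = (a - c) / (2 * b)             # output where industry MR = MC
q_cournot  = 2 * (a - c) / (3 * b)         # sum of the two Cournot outputs
q_compet   = (a - c) / b                   # output where P = MC

w_monopoly = surplus(a, b, c, q_monopoly)[2]
w_cournot  = surplus(a, b, c, q_cournot)[2]
w_compet   = surplus(a, b, c, q_compet)[2]

# Competition maximizes social surplus; Cournot lies between; monopoly is worst.
assert w_compet > w_cournot > w_monopoly
```

With these numbers total surplus is 3200 under competition, about 2844 under Cournot duopoly and 2400 under monopoly, reproducing the ranking described in the text.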
Collusive models
One way of avoiding the uncertainty arising from
oligopolistic interdependence is to enter into collusive
agreements. There are two main types of collusion, cartels and
price leadership. Both forms generally imply tacit (secret)
agreements, since open collusive action is commonly illegal
in most countries at present. Although direct agreements
among the oligopolists are the most obvious examples of
collusion, in the modern business world trade associations,
professional organisations and similar institutions usually
perform many of the activities and achieve in a legal or
indirect way the goals of direct collusive agreements. For
example, trade associations issue various periodicals with
information concerning actual or planned action of members.
In this way firms get the message and act
accordingly. Now we will examine the two formal types of
collusion, cartels and price leadership. Both forms have been
exhaustively analysed by W. Fellner.
Cartels
A cartel is an alliance of independent firms of the
same industry which follows common policies related to
pricing, outputs, sales, profit maximization and distribution
of products. Cartels may be voluntary or compulsory, and
open or secret, depending upon the policy of the government
with regard to their formation. A cartel offers the rival firms a
shelter from uncertainty: firms producing a homogeneous
product form a centralised cartel board for the industry, the
individual firms surrender their price-output decisions to this
board, and in return the board decides the output quotas, the
price to be charged and the distribution of industry profits
among its members, with the aim of maximizing the joint
profits of the entire oligopolistic industry.
In figure 4.21, given the market demand curve
and its corresponding MR curve, joint profits are
maximized when the industry MR equals the industry MC.
In the figure, D is the market or cartel demand curve, MR is
its corresponding marginal revenue curve, and MC is obtained
by the lateral summation of the MC curves of firms A and B,
so that MC = MCa + MCb. The cartel solution which
maximizes joint profit is determined at point E, where MC
intersects MR; thus the total output is OQ, to be sold at price OP.
Figure 4.21

Now the cartel board will allocate the industry output by


equating the industry MR to the marginal cost of each firm.
The share of each firm in the industry output is obtained by
drawing a straight line from E to the vertical axis which
passes through the curves MCb and MCa of firms B & A at
point Eb and Ea respectively. Thus the share of firm A is OQa
and that of firm B is OQb which equals the total output OQ.
Here we can see that firm A has the lower cost of
production and thus sells a larger output than firm B, but
this does not mean that A will earn more profit than B. The
joint maximum profit is the sum of RSTP and ABCP, earned
by A and B respectively. Thus this type of perfect collusion
by oligopolistic firms in the form of a cartel not only avoids
price wars among rivals but also maximizes the joint profits
of all the firms, which is generally more than the total profits
they would earn by acting independently.
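The cartel board’s rule (industry MR = MCa = MCb) can be sketched numerically. The linear demand and marginal-cost curves below are hypothetical, and `cartel_allocation` is an illustrative helper, not a function from the text; it bisects on the common marginal-cost level at which industry MR equals the laterally summed MC.

```python
def cartel_allocation(a, b, firms, tol=1e-9):
    """Joint-profit-maximising cartel solution for linear demand P = a - b*Q.

    `firms` lists each member's marginal-cost curve as a pair (c_i, d_i),
    meaning MC_i(q) = c_i + d_i*q.  The board picks total output where
    industry MR = a - 2*b*Q equals the laterally summed MC, and each firm's
    quota is the output at which its own MC equals that common level.
    """
    def total_q(m):
        # lateral summation: total output supplied when every firm's MC = m
        return sum(max(0.0, (m - c) / d) for c, d in firms)

    lo, hi = min(c for c, _ in firms), a
    while hi - lo > tol:                  # bisect on the common MC level m
        m = (lo + hi) / 2
        if a - 2 * b * total_q(m) > m:    # MR still above MC: expand output
            lo = m
        else:
            hi = m
    m = (lo + hi) / 2
    return m, [max(0.0, (m - c) / d) for c, d in firms]

# Firm A is the low-cost firm: it receives the larger quota, as in figure 4.21.
m, (q_a, q_b) = cartel_allocation(100.0, 1.0, [(10.0, 1.0), (30.0, 1.0)])
assert q_a > q_b
```

With these illustrative curves the common level works out to 36, with quotas of 26 for the low-cost firm and 6 for the high-cost firm, mirroring the allocation at points Ea and Eb in the figure.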
However, another type of perfect collusion in an
oligopolistic market relates to market sharing by the member
firms of the cartel. Under this arrangement the firms enter into a
market-sharing agreement, either through non-price competition
or through a quota system, forming a cartel while keeping a
considerable degree of freedom concerning the style of their
output, their selling activities and other decisions. Under the
non-price-competition cartel, the low-cost firms press for a low
common price and the high-cost firms for a high one, but in the
end they agree upon a common price below which none will
sell the product. The firms then compete with one another on
a non-price basis, varying the colour, design, shape and
packing of their products and running their own advertising
and other selling activities; this type of cartel thus allows each
firm to earn some individual profit.
However, this type of cartel is unstable: if one
low-cost firm cheats by charging less than the common price,
it will attract customers away from the other member firms
and earn larger profits; once the other firms discover this,
they will leave the cartel and a price war will start in the
industry. Under market sharing by quota agreement, all the
firms in an oligopolistic industry enter into collusion, charge
an agreed uniform price and share the market among
themselves, so that each firm earns the profit on its own sales.
For example, if there are only two firms with
identical costs, each firm will sell at the monopoly price
one-half of the total quantity demanded in the market at that
price. In figure 4.22 the monopoly price is PM and the
quotas which will be agreed are x1 = x2 = ½XM. However, if
costs are different, the quotas and shares of the market will
differ. Allocation of quota-shares on the basis of costs is again
unstable. Shares in the case of cost differentials are decided by
bargaining. The final quota of each firm depends on its level
of costs as well as on its bargaining skill. During the
bargaining process two main statistical criteria are most often
adopted: quotas are decided on the basis of past levels of
sales, and/or on the basis of ‘productive capacity’. The
definition of ‘past-period sales’ and/or of ‘capacity’ depends
largely on the firms’ bargaining power and skill.
Figure 4.22

In figure 4.22, D is the market demand curve and
MR is its corresponding marginal revenue curve. MC = MC1 + MC2
is the aggregate MC curve of the industry, which intersects the
MR curve at point E; this determines the price OP and the
quantity OXM for the industry, and is known as the monopoly
solution of the market-sharing cartel. This industry output can
then be shared equally between the two firms. Each firm faces
its own demand curve, with MR1 and MR2 as the corresponding
marginal revenue curves, while AC and MC are their identical
cost curves. The MC curve cuts each firm’s marginal revenue
curve at the profit-maximizing outputs OX1 and OX2, so that
OX1 + OX2 = OXM: the industry output is shared equally by the
two firms as per the quota agreement between them.
Another popular method of sharing the market is the
definition of the region in which each firm is allowed to sell.
In this case of geographical sharing of the market the price as
well as the style of the product may differ. There are many
examples of regional market-sharing cartels, some operating
at international levels. However, even a regional split of the
market is inherently unstable. The regional agreements are
often violated in practice, either by mistake or intentionally,
by the low-cost firms, who always have the incentive to
expand their output by openly selling at a lower price, by
granting secret price concessions, or by reaching adjacent
markets through advertising. It should be obvious that the
cartel models of collusive oligopoly are ‘closed’ models.
If entry is free, the inherent instability of cartels is
intensified: the behaviour of the entrant is not predictable
with certainty. It is not certain that a new firm will join the
cartel. On the contrary, if the profits of the cartel members
are lucrative and attract new firms in the industry, the
newcomer has a strong incentive not to join the cartel,
because in this way his demand curve will be more elastic,
and by charging a slightly lower price than the cartel he can
secure a considerable share in the market, on the assumption
that the cartel members will stick to their agreement. Cartels,
being aware of the dangers of entry, will either charge a low
price so as to make entry unattractive, or may threaten a price
war on the newcomer. If entry occurs and the cartel carries
out its threat of a price war, the newcomer may still survive,
depending on his cost advantage and his financial strength
in withstanding possible losses during the initial period of his
establishment, until he reaches the size which allows him to
reap in full any scale economies he enjoys over the existing
firms.
Price Leadership
Price leadership is a form of imperfect collusion among the
oligopolistic firms of an industry in which all firms follow the
lead of one big firm. It is of three types:
1. The Low Cost Price Leadership Model:

In the low-cost price leadership model, an oligopolistic firm
with lower costs than the other firms sets a lower price,
which the other firms have to follow; the low-cost firm thus
becomes the price leader. The main assumption of this
model is that the firms’ costs differ but they all have
identical demand and MR curves. We illustrate the
model with an example of duopoly. It is assumed that the
two firms produce a homogeneous product at
different costs, which clearly must be sold at the same price.
The firms may have equal markets (or they may come to an
agreement to share the market equally) as in figure 4.23, or
they may have unequal markets (or agree to share the market
with unequal shares), as in figure 4.24. The important
condition for this model is that the firms have unequal costs.
Figure 4.23

The firm with the lowest cost will charge a lower price
(PA) and this price will be followed by the high-cost firm,
although at this price firm B (the follower) does not maximise
its profits. The follower would obtain a higher profit by
producing a lower output (XBe) and selling it at a higher price
(PB). However, it prefers to follow the leader, sacrificing some
of its profits in order to avoid a price war, which would
eliminate it if price fell sufficiently low as not to cover its
LAC. It should be stressed that for the leader to maximise his
profit, price must be retained at the level PA and he should
sell XA. This implies that the follower must supply a quantity
(OXB in figure 4.24, or OX1 = OX2 in figure 4.23) sufficient to
maintain the price set by the leader.
Figure 4.24

Although the price leadership model stresses the fact that the
leader sets the price and the follower adopts it, it is clear that
the firms must also enter a share-of-the-market agreement,
formally or informally, otherwise the follower could adopt the
price of the leader but produce a lower quantity than the level
required to maintain the price (set by the leader) in the
market, and thus push (indirectly, by not producing enough
output) the leader to a non-profit-maximising position. In this
respect the price follower is not completely passive: he may
be coerced to adopt the leader's price, but, unless tied by
a quota-share agreement (formal or informal) he can push the
leader to a non-maximising position.
2. The Dominant Firm Price Leadership Model:

Under this model, there is one large dominant firm
and a number of small firms in the industry, where the
dominant firm fixes the price for the entire industry, the
small firms sell as much of the product as they like at that
price, and the remaining market is filled by the dominant
firm itself. The dominant firm therefore selects the price
which gives it the greatest profit. Since each small firm sells
its product at the price set by the dominant firm, its demand
curve is perfectly elastic at that price, its marginal revenue
curve coincides with this horizontal demand curve, and each
small firm produces the output at which its MR = MC. The
MC curves of all the small firms are summed laterally to
establish their aggregate supply curve; these firms behave
competitively as price-takers, while the dominant firm sets
the price and allows the small firms to sell all they wish at
that price.
In this model it is assumed that there is a large
dominant firm which has a considerable share of the total
market, and some smaller firms, each of them having a small
market share. The market demand (DD in figure 4.25) is
assumed known to the dominant firm. It is also assumed that
the dominant leader knows the MC curves of the smaller
firms, which he can add horizontally to find the total supply
of the small firms at each price; or at least that he has a fair
estimate, from past experience, of the likely total output from
this source at various prices. With this knowledge the leader
can obtain his own demand curve as follows.
Figure 4.25

At each price the leader will be able to supply the
section of the total market not supplied by the smaller firms.
That is, at each price the demand for the product of the leader
is the difference between the total market demand D (at that
price) and the total supply of the small firms, S1. For example,
at price P1 the demand for the leader’s product is zero, because
the total quantity demanded (D1) is supplied by the smaller
firms. As price falls below P1, the demand for the leader’s
product increases. At P2 the total demand is D2; the part P2A is
supplied by the small firms and the remainder AD2 by the
leader. At P3 total demand is D3, and the whole of it is
supplied by the leader, since at that price the small firms do
not supply any quantity. Below P3 the market demand curve
coincides with the leader’s demand curve.
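The leader’s residual demand curve can be constructed in exactly this way in code. The linear market demand, small-firm supply and leader’s marginal cost below are hypothetical illustrations, and a fine grid search over price stands in for the MR = MC condition.

```python
def market_demand(p):
    # hypothetical linear market demand curve DD
    return max(0.0, 120.0 - 2.0 * p)

def small_firms_supply(p):
    # laterally summed MC curves of the small firms; nothing supplied below 10
    return max(0.0, 4.0 * (p - 10.0))

def leader_demand(p):
    # residual demand: the part of the market the small firms leave unserved
    return max(0.0, market_demand(p) - small_firms_supply(p))

mc_leader = 8.0   # dominant firm's constant marginal cost (assumed)

# The leader sets the price that maximises profit on its residual demand.
best_price = max((i / 100.0 for i in range(0, 6001)),
                 key=lambda p: (p - mc_leader) * leader_demand(p))

# At high prices the small firms supply the whole market: the leader sells nothing.
assert leader_demand(30.0) == 0.0
```

With these curves the residual demand is 160 − 6p for prices above 10, and the leader’s profit is maximised near p ≈ 17.33, at which the small firms supply the rest of the market.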
The dominant firm leader maximises his profit by
equating his MC to his MR, while the smaller firms are
price-takers, and may or may not maximise their profit,
depending on their cost structure. It is assumed that the
small firms cannot sell more (at each price) than the
quantity denoted by S1. However, if the leader is to maximise
his profit, he must make sure that the small firms not
only follow his price but also produce the right quantity
(PB, at price P). Thus, if there is no tight market-sharing
agreement, the small firms may produce less output than PB
and so force the leader to a non-profit-maximising position.
The price-quantity solution is therefore stable only insofar as
the small firms behave passively as price-takers.
3. The Barometric Price Leadership Model:

Under this model there is no leader firm as such; rather, one
firm among the oligopolists, with the wisest management,
announces a price change first, and the other firms in the
industry follow. The barometric price leader need not be the
dominant firm with the lowest cost, or even the largest firm in
the industry: it is a firm which acts like a barometer,
forecasting changes in cost and demand conditions in the
industry and in economic conditions in the economy as a
whole. In this model it is formally or informally agreed that
all firms will follow (exactly or approximately) the price
changes of a firm which is considered to have a good
knowledge of the prevailing conditions in the market and can
forecast better than the others the future developments in the
market. In short, the firm chosen as the leader is considered
as a barometer, reflecting the changes in
economic environment. The barometric firm may be neither
a low-cost nor a large firm. Usually it is a firm which
from past behaviour has established the reputation of a good
forecaster of economic changes. A firm belonging to another
industry may also be chosen as the barometric leader.
For example, a firm in the steel industry may be
accepted as the (barometric) leader for price changes in the
motor-car industry. Barometric price leadership may be
established for various reasons. Firstly, rivalry between several
large firms in an industry may make it impossible to accept
one among them as the leader. Secondly, followers avoid the
continuous recalculation of costs, as economic conditions
change. Thirdly, the barometric firm usually has proved
itself as a ‘reasonably’ good forecaster of changes in cost
and demand conditions in the particular industry and the
economy as a whole, and by following it the other firms can
be ‘reasonably’ sure that they choose the correct price policy.
MODULE V
THEORY OF GAMES
Basic concepts
A different approach to the study of the oligopoly
problem is provided by the theory of games. The first
systematic attempt in this field is von Neumann’s and
Morgenstern’s Theory of Games and Economic Behaviour,
published in 1944. Since that time numerous economists
have developed models of oligopolistic behaviour based on the
theory of games.
The firm has various instruments or policy variables
with which it can pursue its goals. The most important are the
price, quantity and style of its products, advertising and other
selling activities, research and development expenditures,
channels for selling the product(s), and changes in the
number of products (discontinuation of an old product or
introduction of new ones).
1. Strategy:

A strategy is a specific course of action with clearly
defined values for the policy variables. For example, a
strategy may consist of setting a price of Rs. 3.95,
spending Rs.2000 on advertising, making a change in the
packaging of the product, and selling it in supermarkets.
Another strategy may involve leaving the price unchanged,
spending Rs.1000 on advertising, devoting Rs. 2000 to
research on a new product, and so on. To each of these
strategies a competitor may react in different ways, that is,
by adopting different strategies. He may decide to adopt the
same or a different course of action than the one adopted by
the other firm. Thus each firm has various strategies open to
it, and in any particular case it will adopt the one that will
seem most advantageous under the circumstances.
2. Payoff:

The payoff of a strategy is the ‘net gain’ it will
bring to the firm for any given counterstrategy of the
competitor(s). This gain is measured in terms of the goal(s) of
the firm. For example, if the goal of the firm is to maximise
its profit, the payoffs of a strategy will be measured in terms
of profit levels that it yields; if the goal is maximisation of
the market share, the payoffs will be measured as the actual
shares that the strategy will secure to the firm adopting it.
3. Payoff Matrix:

The payoff matrix of a firm is a table showing the
payoffs accruing to this firm as a result of each possible
combination of strategies adopted by it and by its rival(s). For
example, assume that there are two firms in the industry. Firm
I has to choose among five strategies (A1, A2 ,...., A5 ) and
Firm II can react by adopting any one of six strategies open to
it (B1, B2,....,B6 ). Thus to each strategy of Firm I there are six
possible counterstrategies of Firm II, and similarly to each
strategy of Firm II there are five counterstrategies of the rival
Firm I. Thus the payoff matrix of each firm will include 5 x 6
= 30 payoffs, corresponding to the results of each possible
combination of strategies selected by both rivals.


Let us denote each payoff Gij, where i refers to the
strategy adopted by Firm I and j to the counter-strategy
adopted by Firm II. Thus for the above example the payoff
matrix for Firm I will be of the general form of table 5.1. If
Firm I adopts strategy A1 and its rival reacts by adopting
strategy B5, the payoff (gain) of Firm I will be G15. If Firm I
chooses strategy A4 and its rival reacts with strategy B6, the
payoff of Firm I will be G46, and so on.
In the theory of games the firms in oligopolistic
markets are treated as players in a chess game: to each
movement by one player the other may choose among several
counter-movements. The counter-movements of rivals are
probable but not certain. Yet it is possible to choose a strategy
which (under certain conditions) will maximise the firm’s
expected ‘gain’, after making due allowance for the effects of
rivals’ probable reactions.
Table 5.1: Payoff Matrix of Firm I

                          Firm II's strategies
                     B1    B2    B3    B4    B5    B6
Firm I's        A1   G11   G12   G13   G14   G15   G16
strategies      A2   G21   G22   G23   G24   G25   G26
                A3   G31   G32   G33   G34   G35   G36
                A4   G41   G42   G43   G44   G45   G46
                A5   G51   G52   G53   G54   G55   G56
4. Cooperative versus Non-Cooperative Game
The economic games that firms play can be either
cooperative or non cooperative. In a cooperative game,
players can negotiate binding contracts that allow them to
plan joint strategies. In a non cooperative game, negotiation
and enforcement of binding contracts are not possible.
An example of a cooperative game is the bargaining
between a buyer and a seller over the price of a rug. If the
rug costs Rs.100 to produce and the buyer values the rug at
Rs.200, a cooperative solution to the game is possible: An
agreement to sell the rug at any price between Rs.101 and
Rs.199 will maximize the sum of the buyer’s consumer
surplus and the seller’s profit, while making both parties better
off. Another cooperative game would involve two firms
negotiating a joint investment to develop a new technology
(assuming that neither firm would have enough know-how to
succeed on its own). If the firms can sign a binding contract
to divide the profits from their joint investment, a
cooperative outcome that makes both parties better off is
possible. An example of a non cooperative game is a
situation in which two competing firms take each other’s
likely behaviour into account when independently setting
their prices. Each firm knows that by undercutting its
competitor, it can capture more market share. But it also
knows that in doing so, it risks setting off a price war.
Another non cooperative game is the auction mentioned
above: Each bidder must take the likely behaviour of the
other bidders into account when determining an optimal
bidding strategy.
Note that the fundamental difference between
cooperative and non cooperative games lies in the contracting
possibilities. In cooperative games, binding contracts are
possible; in non cooperative games, they are not. We will be
concerned mostly with non cooperative games. Whatever the
game, however, keep in mind the following key point about
strategic decision making: It is essential to understand your
opponent’s point of view and to deduce his or her likely
responses to your actions. This point may seem obvious—of
course, one must understand an opponent’s point of view. Yet
even in simple gaming situations, people often ignore or
misjudge opponents’ positions and the rational responses that
those positions imply.
Two Person Zero Sum Game
A. Certainty Model
The simplest model is a duopoly market in which
each duopolist attempts to maximise his market share. Given
this goal, whatever a firm gains (by increasing its share of
the market) the other firm loses (because of the decrease in its
share). Thus any gain of one rival is offset by the loss of the
other, and the net gain sums up to zero. Hence, the name ‘zero-
sum game’. The assumptions of the model are:
1. The firms have a given, well-defined goal. In our
particular example the goal is maximisation of the market
share.
2. Each firm knows the strategies open to it and to its
rival, or concentrates on the most important of these
strategies.
3. Each firm knows with certainty the payoffs of all
combinations of the strategies being considered. This implies
that the firm knows its total revenue, total costs and total profit
from each combination of strategies.
4. The actions chosen by the duopolists do not affect the total
size of the market.
5. Each firm chooses its strategy ‘expecting the worst from
its rival’, that is, each firm acts in the most conservative way,
expecting that the rival will choose the best possible
counter- strategy open to him. This behaviour is defined as
‘rational’.
6. In the zero-sum game there is no incentive for
collusion, given assumption 4, since the goals of the firms are
diametrically opposed.
In order to find the equilibrium solution we need
information on the payoff matrix of the two firms. In our
example the payoffs will be shares of the market resulting
from the adoption of any two strategies by the rivals. Assume
that Firm I has four strategies open to it and Firm II has five
strategies. The payoff matrices of the duopolists are shown in
tables 5.2 and 5.3.
Clearly the sum of the payoffs in corresponding cells of
the two payoff tables adds up to unity, since the numbers in
these cells are shares, and the total market is shared between
the two firms. In general, in the two-person zero-sum game
we need not write both payoff matrices because of the nature
of the game: the goals are opposing, and, in our example,
the payoff table of Firm I contains indirectly information
about the payoff of Firm II. Still we start by showing both
tables, and then we show how the equilibrium solution can be
found from only the first payoff matrix.
Choice of strategy by Firm I
Table 5.2: Payoff Matrix of Firm I

                          Firm II's strategies
                     B1     B2     B3     B4     B5
Firm I's        A1   0.10   0.20   0.15   0.30   0.25
strategies      A2   0.40   0.30   0.50   0.55   0.45
                A3   0.35   0.25   0.20   0.40   0.50
                A4   0.25   0.15   0.35   0.60   0.20

Firm I examines the outcomes of each strategy open to
it. That is, Firm I examines each row of its payoff matrix and
finds the worst outcome of the corresponding
strategy, because the firm expects the rival to adopt the most
advantageous action open to him. This is the behavioural rule
implied by assumption 5 of this model. Thus:
If Firm I adopts strategy A1, the worst outcome that it may
expect is a share of 0.10 (which will be realised if the rival
Firm II adopts its most favourable strategy B1).
If Firm I adopts strategy A2, the worst outcome will be a share
of 0.30 (if the rival adopts the best action for him, B2).
If Firm I adopts strategy A3, the worst outcome will be a share
of 0.20 (if Firm II chooses the best open alternative, B3).
If Firm I adopts strategy A4, the worst outcome will be a
share of 0.15 (which would be realised by action B2 of Firm
II).
Among all these minima (that is, among the above
worst outcomes) Firm I chooses the maximum, the ‘best of
the worst’. This is called a maximin strategy, because the firm
chooses the maximum among the minima. In our example the
maximin strategy of Firm I is A2, that is, the strategy which
yields a share of 0.30.
Choice of strategy by Firm II
Table 5.3: Payoff Matrix of Firm II

                          Firm II's strategies
                     B1     B2     B3     B4     B5
Firm I's        A1   0.90   0.80   0.85   0.70   0.75
strategies      A2   0.60   0.70   0.50   0.45   0.55
                A3   0.65   0.75   0.80   0.60   0.50
                A4   0.75   0.85   0.65   0.40   0.80
Firm II behaves in exactly the same way. The only
difference is that Firm II examines the columns of its payoff
table, because these columns include the results-payoffs of
each of the strategies open to Firm II. For each strategy, that
is, for each column, Firm II finds the worst outcome (on the
assumption that the rival will choose the best), and among
these worst outcomes Firm II chooses the best. Thus, if Firm
II uses its own payoff table, its behaviour is a maximin
behaviour identical to the behaviour of Firm I.
However, as we said earlier, in the zero-sum game
only one payoff matrix is adequate for the equilibrium
solution. In our example the first payoff table will be used not
only by Firm I but also by Firm II. Thus concentrating on
the first payoff table we may restate the decision- making
process of Firm II as follows. Firm II examines the
columns of the (first) payoff matrix because these columns
contain the information about the payoffs of its strategies. For
each column-strategy Firm II finds the maximum payoff (of
Firm I) because this is the worst situation the firm (II) will
face if it adopts the strategy corresponding to that column.
Thus for strategy B1 the worst outcome (for Firm II) is 0.40;
for strategy B2 the worst outcome is 0.30; for strategy B3 the
worst outcome is 0.50; for strategy B4 the worst result is 0.60;
for strategy B5 the worst result is 0.50.
Among these maxima of each column-strategy Firm
II will choose the strategy with minimum value. Thus the
strategy of Firm II is a minimax strategy, since it involves
the choice of a minimum among the maxima payoffs
(Table 5.4). It should be stressed that although different
terms are used for the choice of the two firms (maximin
behaviour of Firm I, minimax behaviour of Firm II), the
behavioural rule for both firms is the same: each firm expects
the worst from its rival.
Table 5.4: Combined Payoff Matrix

                          Firm II's strategies (minimax behaviour)
                     B1     B2     B3     B4     B5
Firm I's        A1   0.10   0.20   0.15   0.30   0.25
strategies      A2   0.40   0.30   0.50   0.55   0.45
(maximin        A3   0.35   0.25   0.20   0.40   0.50
behaviour)      A4   0.25   0.15   0.35   0.60   0.20

In our example the equilibrium solution is strategy
A2 for Firm I and B2 for Firm II. This solution yields shares
0.30 for Firm I and 0.70 for Firm II. It is an equilibrium
solution because it is the preferred one by both firms. This
solution is called the ‘saddle point’, and the preferred strategies
A2 and B2 are called ‘dominant strategies’.
It should be clear that there exists no such equilibrium
(saddle) solution if there is no payoff which is preferred by
both firms simultaneously. Under certain mathematical
conditions other solutions and strategy choices can be
determined.
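The maximin and minimax choices can be computed mechanically from Firm I’s payoff matrix alone. A short sketch using the shares of table 5.2 (the variable names are illustrative):

```python
# Firm I's payoff matrix (table 5.2): rows are A1..A4, columns are B1..B5.
payoff = [
    [0.10, 0.20, 0.15, 0.30, 0.25],   # A1
    [0.40, 0.30, 0.50, 0.55, 0.45],   # A2
    [0.35, 0.25, 0.20, 0.40, 0.50],   # A3
    [0.25, 0.15, 0.35, 0.60, 0.20],   # A4
]

# Firm I: for each row take the worst outcome, then pick the best of these.
row_minima = [min(row) for row in payoff]
maximin = max(row_minima)
firm1_choice = row_minima.index(maximin)       # index 1, i.e. strategy A2

# Firm II: for each column take its own worst outcome (Firm I's maximum),
# then pick the smallest of these column maxima.
col_maxima = [max(col) for col in zip(*payoff)]
minimax = min(col_maxima)
firm2_choice = col_maxima.index(minimax)       # index 1, i.e. strategy B2

# The two values coincide, so (A2, B2) is a saddle point: Firm I gets a
# share of 0.30 and Firm II the complementary 0.70.
assert maximin == minimax == payoff[firm1_choice][firm2_choice]
```

When the maximin and minimax values differ, no saddle point exists and the simple pure-strategy reasoning of this section no longer yields an equilibrium.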
B. Uncertainty Model
The assumption that each firm knows with certainty
the exact value of the payoff of each strategy is unrealistic.
The most probable situation in the real business world is that
the firm, by adopting a certain strategy, may expect a range of
results for each counter-strategy of the rival, each result with
an associated probability. Thus the payoff matrix is constructed
so as to include the expected value of each payoff. The
expected value is the sum of the products of the possible
outcomes of a pair of strategies (adopted by the two firms)
each multiplied by its probability:
E(Gij) = g1iP1 + g2iP2 + · · · + gniPn = Σ gsiPs (summed over s = 1, 2, …, n)
Where, gsi = the sth of the n possible outcomes of strategy i of Firm I (given that Firm II has chosen strategy j)
Ps = the probability of the sth outcome of strategy i
For example, assume that Firm I chooses strategy A1
and Firm II reacts with strategy B1. This pair of simultaneous
strategies may yield the shares for Firm I each with a certain
probability, shown in the second column of table 5.5. Thus the
expected payoff of the pair of strategies A1 and B1 is
E(G11 ) = (0.00)(0.00) + (0.05)(0.05) + (0.15)(0.05) + · · · +
(0.95)(0.02) + (1)(0)
= 0.458


Table 5.5

Possible shares of Firm I            Probability of
for the pair of strategies A1, B1    each share
0.00                                 0.00
0.05                                 0.05
0.15                                 0.05
0.25                                 0.10
0.35                                 0.15
0.45                                 0.25
0.55                                 0.20
0.65                                 0.10
0.75                                 0.05
0.85                                 0.03
0.95                                 0.02
1.00                                 0.00
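The expected payoff E(G11) can be recomputed directly from Table 5.5. A minimal sketch follows; note that with the probabilities exactly as tabulated (they are rounded values) the sum comes out a rounding step below the 0.458 quoted in the text:

```python
# Shares of Firm I and their probabilities for the strategy pair (A1, B1), Table 5.5.
shares = [0.00, 0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95, 1.00]
probs  = [0.00, 0.05, 0.05, 0.10, 0.15, 0.25, 0.20, 0.10, 0.05, 0.03, 0.02, 0.00]

assert abs(sum(probs) - 1.0) < 1e-9   # a probability distribution must sum to one

# E(G11) = sum over all outcomes of (outcome x its probability).
expected_payoff = sum(g * p for g, p in zip(shares, probs))
print(round(expected_payoff, 3))      # 0.457 with the tabulated (rounded) probabilities
```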

In a similar way we find the expected payoff of all combinations of strategies. Given the matrix of expected
payoffs, the behavioural pattern of the firms is the same as in
the certainty model.
That is:
Firm I adopts the maximin strategy. It finds for each row the minimum expected payoff, and among these minima the firm chooses the one with the highest value (the maximum among
the minima).
Firm II adopts the minimax strategy. It finds for each column
the maximum expected payoff, and among these maxima
Firm II chooses the one with the smallest value (the
minimum among the maxima).
Although the uncertainty zero-sum game seems simple, its
assumptions are quite stringent:
1. The firms maximise their expected payoffs.
2. The zero-sum game assumes that both firms assign the
same probability to each pair of payoffs; they make the
same judgement. This implies that the firms must have the
same information and the same objective criteria with which
to evaluate the probabilities of the different payoffs.
Otherwise the probability distribution of the payoffs will not
be objective.
3. The firms maximise their total utility, and the utility of
each payoff is proportional to the value assumed by the
payoff.
The above assumptions are clearly strong and
unrealistic. Furthermore, the basic condition of the zero-sum
game, that the ‘gain’ of one firm is equal to the ‘loss’ of the
other, is rarely met in the real business world. Usually the
‘gains’ are not ‘offset’ by equal ‘losses’. Only in the case of a
share goal, and in the rare case of extinction tactics, do we
have a zero-sum game. In most cases we have a non-zero-sum
game.


Non-Zero-Sum Game
This model is illustrated with a duopolistic market in
which the firms aim at the maximisation of their profit. Their
products are close substitutes so that if their prices differ the
firm with the lower price will supply the largest part of the
market. It is assumed that the firms will use price as their
instrumental variable. For simplicity we assume that each
firm can charge two prices (either Rs. 3 or Rs. 5), that is,
there are two strategies open to each competitor. Each firm
has a different cost structure and the market size is affected
by the rivals’ combined action. Under these conditions the
payoff matrix of each firm is expressed in terms of levels of
profit, and the gains of one rival need not be (and in our
example are not) equal to the losses of the other. The payoff
matrices of the two firms are shown below in tables 5.6 and
5.7, and are subsequently combined in a single table (Table
5.8.).
The behavioural rule is the same for both firms: each
expects the worst from the rival. The choice of Firm I is a
maximin strategy. If Firm I sets the price of Rs.5 the
minimum gain is Rs.50; if it sets P = 3 its minimum profit is
Rs.80. Among these two minima the firm chooses the
maximum, that is, the preferred strategy by Firm I is P = 3.
The choice of Firm II is also a maximin strategy. If Firm II
charges a price of Rs.5 the worst it can expect is a profit of
Rs.60; if it charges a price of Rs.3 the minimum level of
profit is Rs.100. Among these minima the firm will choose the
maximum, that is, Firm II will choose the price of Rs.3.


Table 5.6: Firm I’s payoff matrix (level of profits of I)

                              Firm II’s strategies
                              PB = 5       PB = 3
Firm I’s       PA = 5         ΠA = 90      ΠA = 50
strategies     PA = 3         ΠA = 150     ΠA = 80

Table 5.7: Firm II’s payoff matrix (level of profits of II)

                              Firm II’s strategies
                              PB = 5       PB = 3
Firm I’s       PA = 5         ΠB = 110     ΠB = 120
strategies     PA = 3         ΠB = 60      ΠB = 100

Table 5.8: Combined payoff matrix

                                    Firm II’s strategies
                          PB = 5                     PB = 3
Firm I’s     PA = 5       ΠA = 90, ΠB = 110          ΠA = 50, ΠB = 120
strategies                (Joint Π = 200)            (Joint Π = 170)
             PA = 3       ΠA = 150, ΠB = 60          ΠA = 80, ΠB = 100
                          (Joint Π = 210)            (Joint Π = 180)


Under these circumstances there is a unique equilibrium price (Rs.3) which will be adopted by both firms.
Thus the strategy P = Rs.3 is a dominant strategy. Yet with
this strategy both firms are in a worse situation as compared
to the alternative strategy P = 5, since both realise a lower
profit. And of course the industry (joint) profit is not
maximised. The conservative maximin strategy is not the
optimal solution in this case. If the firms colluded and
both charged the higher price of Rs. 5 the joint profit and
their individual profits would be higher (90 > 80, 110 > 100, and 200 > 180). Thus while the maximin strategy provides an
optimal solution in the zero-sum game, this may not be so in
the variable-sum game.
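The maximin choices and the collusion comparison above can be verified from the profits of Table 5.8. An illustrative sketch (not part of the original text):

```python
# Profit payoffs (Firm I, Firm II) from Table 5.8, keyed by the price pair (PA, PB).
profits = {
    (5, 5): (90, 110),
    (5, 3): (50, 120),
    (3, 5): (150, 60),
    (3, 3): (80, 100),
}
prices = (5, 3)

# Firm I's maximin: the worst profit over Firm II's prices, then the best of those worsts.
worst_I = {pa: min(profits[(pa, pb)][0] for pb in prices) for pa in prices}
choice_I = max(worst_I, key=worst_I.get)        # Rs. 3 (worst case 80 vs 50)

# Firm II applies the same rule to its own payoffs.
worst_II = {pb: min(profits[(pa, pb)][1] for pa in prices) for pb in prices}
choice_II = max(worst_II, key=worst_II.get)     # Rs. 3 (worst case 100 vs 60)

# Both charge Rs. 3, yet colluding at Rs. 5 would give each firm more profit.
assert profits[(5, 5)][0] > profits[(choice_I, choice_II)][0]   # 90 > 80
assert profits[(5, 5)][1] > profits[(choice_I, choice_II)][1]   # 110 > 100
```

The two assertions at the end restate the point in the text: the maximin outcome (3, 3) is dominated, for both firms jointly, by the collusive price pair (5, 5).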
Many other oligopolistic actions may be analysed
with the above apparatus of the theory of games. For
example, advertising campaigns, changes in style, diversification, and research and development expenditures may be examined by the principles of the games theory. In most
real-world cases we see that firms choose strategies which
do not maximise their profits: advertising or new-product
rivalry often lead to excessive increases in costs of all firms in
the industry. Such situations may be explained by the
conservative behaviour of maximin strategies.
In many oligopolistic situations firms seem to avoid
the unfavourable outcomes predicted by the maximin-
minimax behaviour of the theory of games. Several reasons
have been offered for these cases.
Firstly, the duration of rivalry: If rivalry has been continuous for a considerable time-period the rivals ‘learn’ to predict the reactions of each other and this leads to the
avoidance of moves which, from well-established past
patterns, have proved disadvantageous to all parties. Secondly,
the stability of tastes and processes: Firms are more likely
to avoid mutually damaging actions in a market where
demand does not change and technological progress is slow.
On the other hand, in markets with frequent changes of
tastes it is almost certain that firms will adopt maximin
strategies despite their mutually unfavourable results.
Thirdly, the existence of common sources of
information and communication between rivals: If the rivals
lack information it is most natural to fear the worst (maximin
assumption) from the competitors and reach suboptimal
solutions.
Fourthly, time-lags are important in deciding which
strategy to adopt. If the imitation of an action (for example, a
new product, a new process) is easy the firms will recognise
that they have little to gain and much to lose by aggressive
action and hence will adopt maximin strategies, which are
by their nature conservative. If, however, an action cannot
easily or quickly be imitated the firms will tend to abandon
maximin attitudes and adopt actions which lead to more
favourable positions for themselves, instead of expecting the
worst from their rivals, for the simple reason that the rivals
cannot quickly adopt the most advantageous action. In
summary: games theory has not provided a general theory
of oligopolistic behaviour. However, the games-theory approach has been able to explain some real-world situations. It has helped in showing that there are strong incentives for
collusion in several oligopolistic situations. Perhaps the
most important contribution of this theory is that it has
led to controlled experiments in the study of firms’
behaviour. By expressing the alternatives open to a firm and
its rivals in the form of payoff matrices it has made possible
the examination of an increasing number of alternatives with
controlled experiments based on the use of high- speed
computers.
Prisoner’s Dilemma
In most cases of variable-sum games the maximin-
minimax behaviour leads the rivals to suboptimal solutions,
that is, to situations worse than they need be. These examples
and their suboptimal solutions are a special case of the general
group of problems which are known as Prisoner’s Dilemma
Games. A brief exposition of the original prisoner’s dilemma
might help the understanding of the behaviour of firms faced
by uncertainty about their rivals’ action. Two criminals are
arrested after committing a big bank robbery. However, the
evidence is not adequate to make the robbery charge stand
unless one or both criminals confess. Each suspect is
interrogated in isolation from his companion so that no
communication is possible between them. The District
Attorney promises no punishment for the suspect who
confesses and a heavy sentence of, say, twenty years’
imprisonment for the other party. If both suspects do not
confess, both will go free. If both confess, they will get the sentence prescribed by the law for the crime of robbery, for example ten years’ imprisonment. Thus each suspect has
two ‘strategies’ open to himself, to confess or not to confess,
and is faced with the dilemma: to confess (and go free if the
other does not confess, or get the ten-year sentence) or not
to confess (and go free if the other does not confess, or get the
heavy twenty-year sentence if he is betrayed by the other
suspect). The payoff matrix of the two prisoners is shown in
table 5.9.
Table 5.9: Payoff matrix of the two prisoners (years of imprisonment)

                                       Prisoner B’s strategies
                                  No confession        Confession
Prisoner A’s    No confession     A: 0,  B: 0          A: 20, B: 0
strategies      Confession        A: 0,  B: 20         A: 10, B: 10

Given the lack of communication between the suspects and the uncertainty as to the ‘loyalty’ of the other prisoner,
each one of them prefers to adopt the second strategy, that is,
to confess, so that both get a ten-year sentence. Clearly this
is a worse situation as compared with the adoption of the
'no confession' strategy by both robbers. The ‘dominant’
strategy, which implies the rule ‘expect the worst from the other(s)’ (the maximin assumption), leads to a worse position
than the robbers need be in. If communication were possible,
or if from past experience the fellows had learned to trust
each other, they would both plead ‘not guilty’ and would go
free, thus maximising their ‘gains’. The drawing of analogies concerning the conditions of uncertainty facing the firms in oligopolistic markets is straightforward.
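The dilemma can be traced in a few lines of code. The sketch below (illustrative, not from the original text) applies the ‘expect the worst’ rule to the sentences of Table 5.9, where fewer years of imprisonment are better; each prisoner's worst case under ‘confession’ (ten years) beats the worst case under ‘no confession’ (twenty years), so both confess:

```python
# Years of imprisonment (prisoner A, prisoner B) from Table 5.9; lower is better.
years = {
    ("no confession", "no confession"): (0, 0),
    ("no confession", "confession"):    (20, 0),
    ("confession",    "no confession"): (0, 20),
    ("confession",    "confession"):    (10, 10),
}
strategies = ("no confession", "confession")

# 'Expect the worst': for each own strategy find the heaviest possible sentence,
# then pick the strategy whose worst case is lightest.
worst_A = {a: max(years[(a, b)][0] for b in strategies) for a in strategies}
worst_B = {b: max(years[(a, b)][1] for a in strategies) for b in strategies}
choice_A = min(worst_A, key=worst_A.get)   # 'confession' (worst case 10 vs 20 years)
choice_B = min(worst_B, key=worst_B.get)   # 'confession'

print(choice_A, choice_B, years[(choice_A, choice_B)])   # both confess -> (10, 10)
```

Both end up with ten-year sentences even though the pair (‘no confession’, ‘no confession’) would set both free, which is the suboptimality the text describes.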
Dominant strategies
How can the rational behaviour of each player lead to an equilibrium solution? Some strategies may be successful if competitors make certain choices, but fail if they make other choices. Other strategies, however, may be successful regardless of what competitors do. We begin with the concept of a dominant strategy: one that is optimal no matter what an opponent does. The following example illustrates this in a duopoly setting.
Suppose Firms A and B sell competing products
and are deciding whether to undertake advertising
campaigns. Each firm will be affected by its competitor’s
decision. The possible outcomes of the game are illustrated by
the payoff matrix in Table 5.10.
Table 5.10: Payoff Matrix of Advertising

                                     Firm B
                          Advertise         Don’t advertise
Firm A   Advertise        10, 5             15, 0
         Don’t advertise  6, 8              10, 2

Observe that if both firms advertise, Firm A will earn a profit of 10 and Firm B a profit of 5. If Firm A advertises and
Firm B does not, Firm A will earn 15 and Firm B zero. The
table also shows the outcomes for the other two possibilities. What strategy should each firm choose? First consider Firm A. It should clearly advertise because no matter
what firm B does, Firm A does best by advertising. If Firm B
advertises, A earns a profit of 10 if it advertises but only 6 if
it doesn’t. If B does not advertise, A earns 15 if it advertises
but only 10 if it doesn’t. Thus advertising is a dominant
strategy for Firm A. The same is true for
Firm B: No matter what firm A does, Firm B does best by
advertising. Therefore, assuming that both firms are rational,
we know that the outcome for this game is that both firms
will advertise. This outcome is easy to determine because both
firms have dominant strategies.
When every player has a dominant strategy, we call the
outcome of the game an equilibrium in dominant strategies.
Such games are straightforward to analyze because each
player’s optimal strategy can be determined without worrying
about the actions of the other players.
Unfortunately, not every game has a dominant strategy for
each player.
Table 5.11: Modified Advertising Game

                                     Firm B
                          Advertise         Don’t advertise
Firm A   Advertise        10, 5             15, 0
         Don’t advertise  6, 8              20, 2

To see this, let’s change our advertising example slightly. The payoff matrix in Table 5.11 is the same as in Table 5.10 except for the bottom right-hand corner: if neither firm advertises, Firm B will again earn a profit of 2, but Firm
A will earn a profit of 20. (Perhaps Firm A’s ads are
expensive and largely designed to refute Firm B’s claims, so
by not advertising, Firm A can reduce its expenses
considerably.) Now Firm A has no dominant strategy. Its
optimal decision depends on what Firm B does. If Firm B
advertises, Firm A does best by advertising; but if Firm B does
not advertise, Firm A also does best by not advertising. Now
suppose both firms must make their decisions at the same
time. Firm A must put itself in Firm B’s shoes. Firm B has a
dominant strategy—advertise, no matter what Firm A does.
(If Firm A advertises, B earns 5 by advertising and 0 by not
advertising; if A doesn’t advertise, B earns 8 if it advertises
and 2 if it doesn’t.) Therefore, Firm A can conclude that Firm
B will advertise. This means that Firm A should advertise
(and thereby earn 10 instead of 6). The logical outcome of
the game is that both firms will advertise because Firm A is
doing the best it can given Firm B’s decision; and Firm B is
doing the best it can given Firm A’s decision.
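A small helper can test whether a player has a dominant strategy. The sketch below (illustrative Python, not part of the original text) uses weak dominance: a row that is at least as good as every other row in every column. Applied to the profits of Tables 5.10 and 5.11, it confirms that ‘Advertise’ dominates for both firms in the first game, while Firm A loses its dominant strategy once its Don’t/Don’t payoff rises to 20:

```python
def dominant_row(payoffs):
    """Return the index of a row that is at least as good as every other row
    in every column (a dominant strategy for the row player), or None."""
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    for r in range(n_rows):
        if all(payoffs[r][c] >= payoffs[o][c]
               for c in range(n_cols) for o in range(n_rows)):
            return r
    return None

# Table 5.10: each firm's own profits, with its own strategies as rows
# (Advertise first, then Don't advertise).
a1 = [[10, 15], [6, 10]]   # Firm A's profits against B's two strategies
b1 = [[5, 8], [0, 2]]      # Firm B's profits against A's two strategies
assert dominant_row(a1) == 0 and dominant_row(b1) == 0   # both: Advertise

# Table 5.11: only Firm A's bottom right-hand payoff changes (10 -> 20).
a2 = [[10, 15], [6, 20]]
assert dominant_row(a2) is None   # Firm A no longer has a dominant strategy
assert dominant_row(b1) == 0      # Firm B still does best by advertising
```

Because Firm B's dominant strategy survives the change, Firm A can still reason its way to advertising, exactly as the text argues.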
Nash Equilibrium
A Nash equilibrium is a set of strategies (or actions)
such that each player is doing the best it can given the actions
of its opponents. Because each player has no incentive to
deviate from its Nash strategy, the strategies are stable. In
the example shown in Table 5.11, the Nash equilibrium is
that both firms advertise: Given the decision of its
competitor, each firm is satisfied that it has made the best decision possible, and so has no incentive to change its decision. Note that a dominant strategy equilibrium is a special
case of a Nash equilibrium. In the advertising game of Table
5.11, there is a single Nash equilibrium—both firms advertise.
In general, a game need not have a single Nash
equilibrium. Sometimes there is no Nash equilibrium, and
sometimes there are several (i.e., several sets of strategies
are stable and self-enforcing).
Such a strategy set is stable and constitutes a Nash equilibrium: given the strategy of its opponent, each firm is doing the best it can and has no incentive to deviate. Similarly, the fact that two firms are not allowed to collude does not mean that they will never reach a Nash equilibrium. As an industry develops, understandings often evolve as firms “signal” each other about the paths the industry is to take.
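Pure-strategy Nash equilibria can be enumerated by checking, cell by cell, that neither player gains from a unilateral deviation. A sketch (illustrative, not from the original text) applied to the modified advertising game of Table 5.11:

```python
def pure_nash(payoffs):
    """payoffs[r][c] = (row player's payoff, column player's payoff).
    Return every (r, c) cell from which neither player wants to deviate."""
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for r in range(n_rows):
        for c in range(n_cols):
            row_ok = all(payoffs[r][c][0] >= payoffs[o][c][0] for o in range(n_rows))
            col_ok = all(payoffs[r][c][1] >= payoffs[r][o][1] for o in range(n_cols))
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

# Table 5.11 (Firm A = row player, Firm B = column player); 0 = Advertise, 1 = Don't.
game = [[(10, 5), (15, 0)],
        [(6, 8), (20, 2)]]
print(pure_nash(game))   # [(0, 0)] -> the single equilibrium: both firms advertise
```

The same function returns an empty list for a game with no pure-strategy equilibrium, which is exactly the situation the next section turns to.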

Pure Strategies and Mixed Strategies


In all of the games that we have examined so far, we have
considered strategies in which players make a specific
choice or take a specific action: advertise or don’t advertise,
set a price of Rs. 4 or a price of Rs. 6, and so on. Strategies of
this kind are called pure strategies. There are games, however,
in which a pure strategy is not the best way to play.
For example, take the game of Matching Pennies. In this game, each player chooses heads or tails and the two players reveal their coins at the same time. If the coins match (i.e.,
both are heads or both are tails), Player A wins and receives a
dollar from Player B. If the coins do not match, Player B wins
and receives a dollar from Player A. The payoff matrix is
shown in Table 5.12.
Table 5.12: Matching Pennies

                         Player B
                  Heads          Tails
Player A  Heads   1, -1          -1, 1
          Tails   -1, 1          1, -1

Note that there is no Nash equilibrium in pure strategies for this game. Suppose, for example, that Player A
chose the strategy of playing heads. Then Player B would want
to play tails. But if Player B plays tails, Player A would also
want to play tails. No combination of heads or tails leaves
both players satisfied; one player or the other will always want
to change strategies. Although there is no Nash equilibrium
in pure strategies, there is a Nash equilibrium in mixed
strategies: strategies in which players make random choices
among two or more possible actions, based on sets of
chosen probabilities. In this game, for example, Player A
might simply flip the coin, thereby playing heads with
probability 0.5 and playing tails with probability 0.5. In fact, if
Player A follows this strategy and Player B does the same, we
will have a Nash equilibrium: both players will be doing the best they can given what the opponent is doing. Note that although the outcome
is random, the expected payoff is 0 for each player.
It may seem strange to play a game by choosing
actions randomly. But put yourself in the position of Player A and think about what would happen if you followed a strategy
other than just flipping the coin. Suppose you decided to play
heads. If Player B knows this, she would play tails and you
would lose. Even if Player B didn’t know your strategy, if the
game were played repeatedly, she could eventually discern
your pattern of play and choose a strategy that countered
it. Of course, you would then want to change your
strategy, which is why this would not be a Nash equilibrium.
Only if you and your opponent both choose heads or tails
randomly with probability 0.5 would neither of you have any
incentive to change strategies.
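That indifference argument can be checked numerically. Using the Matching Pennies payoffs for Player A (+1 on a match, -1 otherwise), if Player B mixes 50-50 then Player A's expected payoff is zero whichever side she plays, so no pure deviation is profitable (an illustrative sketch, not part of the original text):

```python
# Player A's payoffs in Matching Pennies: +1 if the coins match, -1 if they differ.
a_payoff = [[1, -1],    # A plays heads, against B's heads / tails
            [-1, 1]]    # A plays tails

q = 0.5  # probability that Player B plays heads

# A's expected payoff from each pure action when B mixes 50-50.
ev_heads = q * a_payoff[0][0] + (1 - q) * a_payoff[0][1]
ev_tails = q * a_payoff[1][0] + (1 - q) * a_payoff[1][1]
print(ev_heads, ev_tails)   # 0.0 0.0 -- A is indifferent, so mixing 50-50 is a best reply
```

By the symmetry of the game the same holds for Player B, which is why (0.5, 0.5) against (0.5, 0.5) is the mixed-strategy Nash equilibrium with expected payoff 0 for each player.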
One reason to consider mixed strategies is that some
games (such as “Matching Pennies”) do not have any Nash
equilibria in pure strategies. It can be shown, however, that
once we allow for mixed strategies, every game has at least
one Nash equilibrium. Mixed strategies, therefore, provide
solutions to games when pure strategies fail. Of course,
whether solutions involving mixed strategies are reasonable
will depend on the particular game and players. Mixed
strategies are likely to be very reasonable for “Matching
Pennies,” poker, and other such games. A firm, on the
other hand, might not find it reasonable to believe that its
competitor will set its price randomly.
Some games have Nash equilibria both in pure strategies and in mixed strategies. An example is “The Battle of the Sexes”. Jim and Joan would like to spend Saturday
night together but have different tastes in entertainment. Jim
would like to go to the opera, but Joan prefers mud wrestling.
As the payoff matrix in Table 5.13 shows, Jim would most
prefer to go to the opera with Joan, but prefers watching mud
wrestling with Joan to going to the opera alone, and similarly
for Joan.
Table 5.13: The Battle of the Sexes

                          Jim
                  Wrestling      Opera
Joan  Wrestling   2, 1           0, 0
      Opera       0, 0           1, 2

First, note that there are two Nash equilibria in pure strategies for this game: the one in which Jim and Joan both watch mud wrestling, and the one in which they both go to the opera. Joan, of course, would prefer the first of these outcomes and Jim the second, but both outcomes are equilibria: neither
Jim nor Joan would want to change his or her decision, given
the decision of the other. This game also has an equilibrium in
mixed strategies: Joan chooses wrestling with probability 2/3
and opera with probability 1/3, and Jim chooses wrestling with
probability 1/3 and opera with probability 2/3. You can check
that if Joan uses this strategy, Jim cannot do better with any
other strategy, and vice versa. The outcome is random, and
Jim and Joan will each have an expected payoff of 2/3.
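The claimed mixed equilibrium can be verified through the indifference condition: each player's mix must leave the other indifferent between wrestling and opera. A sketch using the payoffs of Table 5.13 (illustrative, using exact fractions to avoid rounding):

```python
from fractions import Fraction

# (Joan's payoff, Jim's payoff): rows = Joan's choice, columns = Jim's choice.
W, O = 0, 1
payoff = [[(2, 1), (0, 0)],   # Joan: wrestling
          [(0, 0), (1, 2)]]   # Joan: opera

p_joan_w = Fraction(2, 3)   # Joan plays wrestling with probability 2/3
p_jim_w = Fraction(1, 3)    # Jim plays wrestling with probability 1/3

# Jim must be indifferent given Joan's mix ...
jim_w = p_joan_w * payoff[W][W][1] + (1 - p_joan_w) * payoff[O][W][1]
jim_o = p_joan_w * payoff[W][O][1] + (1 - p_joan_w) * payoff[O][O][1]

# ... and Joan must be indifferent given Jim's mix.
joan_w = p_jim_w * payoff[W][W][0] + (1 - p_jim_w) * payoff[W][O][0]
joan_o = p_jim_w * payoff[O][W][0] + (1 - p_jim_w) * payoff[O][O][0]

assert jim_w == jim_o == Fraction(2, 3)    # Jim's expected payoff either way
assert joan_w == joan_o == Fraction(2, 3)  # Joan's expected payoff either way
```

Since each player is indifferent between the two pure actions, neither can profit from deviating, and the expected payoff of 2/3 for each matches the text.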


Repeated Games
In oligopolistic markets, firms often find themselves
in a prisoners’ dilemma when making output or pricing
decisions. Whereas the prisoners in the prisoners’ dilemma may have only one opportunity in life to confess or not, most firms set output and price over and over again. In
real life, firms play repeated games: Actions are taken and
payoffs received over and over again. In repeated games,
strategies can become more complex. For example, with each
repetition of the prisoners’ dilemma, each firm can
develop a reputation about its own behaviour and can study
the behaviour of its competitors.
In a repeated game, the prisoners’ dilemma can have a
cooperative outcome. In most markets, the game is in fact
repeated over a long and uncertain length of time, and
managers have doubts about how “perfectly rationally” they
and their competitors operate. As a result, in some
industries, particularly those in which only a few firms
compete over a long period under stable demand and cost
conditions, cooperation prevails, even though no contractual
arrangements are made. In many other industries, however,
there is little or no cooperative behaviour. Sometimes
cooperation breaks down or never begins because there are
too many firms. More often, failure to cooperate is the result of
rapidly shifting demand or cost conditions. Uncertainties
about demand or costs make it difficult for the firms to
reach an implicit understanding of what cooperation should
entail.


Sequential games
In most of the games both players move at the same
time. In the Cournot model of duopoly, for example, both
firms set output at the same time. In sequential games,
players move in turn. The Stackelberg model is an example of
a sequential game; one firm sets output before the other does.
There are many other examples: an advertising decision by
one firm and the response by its competitor; entry-deterring
investment by an incumbent firm and the decision whether to
enter the market by a potential competitor; or a new
government regulatory policy and the investment and output
response of the regulated firms. In a sequential game, the key
is to think through the possible actions and rational reactions of
each player.
As a simple example, let’s return to the product choice
problem. This problem involves two companies facing a
market in which two new variations of breakfast cereal can be
successfully introduced as long as each firm introduces only
one variation. This time, let’s change the payoff matrix
slightly. As Table 5.14 shows, the new sweet cereal will
inevitably be a better seller than the new crispy cereal,
earning a profit of 20 rather than 10 (perhaps because
consumers prefer sweet things to crispy things). Both new
cereals will still be profitable, however, as long as each is
introduced by only one firm.


Table 5.14: Modified Product Choice Problem

                         Firm 2
                  Crispy         Sweet
Firm 1  Crispy    -5, -5         10, 20
        Sweet     20, 10         -5, -5

Suppose that both firms, in ignorance of each other’s intentions, must announce their decisions independently and
simultaneously. In that case, both will probably introduce
the sweet cereal—and both will lose money. Now suppose that
Firm 1 can gear up its production faster and introduce its new
cereal first. We now have a sequential game: Firm 1
introduces a new cereal, and then Firm 2 introduces one.
When making its decision, Firm 1 must consider the rational
response of its competitor. It knows that whichever cereal it
introduces, Firm 2 will introduce the other kind. Thus it will
introduce the sweet cereal, knowing that Firm 2 will respond
by introducing the crispy one.
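Firm 1's reasoning is backward induction: for each cereal it might introduce first, predict Firm 2's best response, then pick the introduction that pays most given that response. A sketch with the payoffs of Table 5.14 (illustrative, not part of the original text):

```python
# (Firm 1 profit, Firm 2 profit) from Table 5.14; index 0 = crispy, 1 = sweet.
payoff = [[(-5, -5), (10, 20)],   # Firm 1 introduces crispy
          [(20, 10), (-5, -5)]]   # Firm 1 introduces sweet
CEREALS = ("crispy", "sweet")

def best_response_2(move_1):
    # Firm 2 observes Firm 1's cereal and picks its own profit-maximising reply.
    return max(range(2), key=lambda move_2: payoff[move_1][move_2][1])

# Firm 1 moves first, anticipating Firm 2's reaction to each possible choice.
move_1 = max(range(2), key=lambda m: payoff[m][best_response_2(m)][0])
move_2 = best_response_2(move_1)
print(CEREALS[move_1], CEREALS[move_2])   # sweet crispy
```

Introducing sweet first earns Firm 1 a profit of 20 (Firm 2 rationally settles for crispy), whereas introducing crispy first would earn only 10; the first mover captures the better product.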
Threats, Commitments and Credibility
1. Threats
The product choice problem and the Stackelberg
model are two examples of how a firm that moves first can
create a fait accompli that gives it an advantage over its
competitor. In the Stackelberg model, the firm that moved
first gained an advantage by committing itself to a large
output. Making a commitment (constraining its future behaviour) is crucial. Suppose that the first mover (Firm 1) could later change its mind in response to what Firm 2 does. Clearly, Firm 2 would produce a large output, because it
knows that Firm 1 will respond by reducing the output that it
first announced. The only way that Firm 1 can gain a first-
mover advantage is by committing itself. In effect, Firm 1
constrains Firm 2’s behaviour by constraining its own
behaviour.
The idea of constraining your own behaviour to gain an
advantage may seem paradoxical, but we’ll soon see that it is
not. Let’s consider a few examples. First, let’s return once
more to the product-choice problem shown in Table 5.14.
The firm that introduces its new breakfast cereal first will
do best. Even if both firms require the same amount of
time to gear up production, each has an incentive to commit
itself first to the sweet cereal. The key word is commit. If
Firm 1 simply announces it will produce the sweet cereal,
Firm 2 will have little reason to believe it. After all, Firm 2,
knowing the incentives, can make the same announcement
louder and more vociferously. Firm 1 must constrain its
own behaviour in some way that convinces Firm 2 that Firm
1 has no choice but to produce the sweet cereal. Firm 1 might
launch an expensive advertising campaign describing the new
sweet cereal well before its introduction, thereby putting its
reputation on the line.
Firm 1 might also sign a contract for the forward
delivery of a large quantity of sugar (and make the contract
public, or at least send a copy to Firm 2). The idea is for Firm
1 to commit itself to produce the sweet cereal. Commitment is a strategic move that will induce Firm 2 to make the decision that Firm 1 wants it to make, namely, to produce the crispy
cereal. Firm 1 can’t simply threaten Firm 2, because Firm 2
has little reason to believe the threat—and can make the same
threat itself. A threat is useful only if it is credible.
2. Commitment and Credibility

Sometimes firms can make threats credible. For example, Race Car Motors, Inc., produces cars, and Far Out
Engines, Ltd., produces specialty car engines. Far Out Engines
sells most of its engines to Race Car Motors, and a few to
a limited outside market. Nonetheless, it depends heavily on
Race Car Motors and makes its production decisions in
response to Race Car’s production plans. We thus have a
sequential game in which Race Car is the “leader.” It will
decide what kind of cars to build, and Far Out Engines will
then decide what kind of engines to produce. The payoff
matrix in Table 5.15 shows the possible outcomes of this
game.
Table 5.15: Production Choice Problem

                                    Race Car Motors
                             Small car        Big car
Far Out      Small engines   3, 6             3, 0
Engines      Big engines     1, 1             8, 3


Observe that Race Car will do best by deciding to produce small cars. It knows that in response to this
decision, Far Out will produce small engines, most of which
Race Car will then buy. As a result, Far Out will make Rs.3
million and Race Car Rs.6 million. Far Out, however, would
much prefer the outcome in the lower right-hand corner of the
payoff matrix. If it could produce big engines and if Race
Car produced big cars and thus bought the big engines it
would make Rs.8 million. (Race Car, however, would make
only Rs.3 million.) Suppose Far Out threatens to produce big
engines no matter what Race Car does; suppose, too, that no
other engine producer can easily satisfy the needs of Race
Car. If Race Car believed Far Out’s threat, it would produce
big cars: Otherwise, it would have trouble finding engines for
its small cars and would earn only Rs.1 million instead of
Rs.3 million. But the threat is not credible: Once Race Car
responded by announcing its intentions to produce small cars,
Far Out would have no incentive to carry out its threat.
Far Out can make its threat credible by visibly and
irreversibly reducing some of its own payoffs in the matrix,
thereby constraining its own choices. In particular, Far Out
must reduce its profits from small engines (the payoffs in the
top row of the matrix). It might do this by shutting down or
destroying some of its small engine production capacity. This
would result in the payoff matrix shown in Table 5.16.


Table 5.16: Modified Production Choice Problem

                                    Race Car Motors
                             Small car        Big car
Far Out      Small engines   0, 6             0, 0
Engines      Big engines     1, 1             8, 3

Now Race Car knows that whatever kind of car it produces, Far Out will produce big engines. If Race Car
produces the small cars, Far Out will sell the big engines as
best it can to other car producers and settle for making only
Rs.1 million. But this is better than making no profits by
producing small engines. Because Race Car will have to look
elsewhere for engines, its profit will also be lower (Rs.1
million). Now it is clearly in Race Car’s interest to produce
large cars. By taking an action that seemingly puts itself at a
disadvantage, Far Out has improved its outcome in the game.
Although strategic commitments of this kind can be effective,
they are risky and depend heavily on having accurate
knowledge of the payoff matrix and the industry. Suppose,
for example, that Far Out commits itself to producing big
engines but is surprised to find that another firm can produce
small engines at a low cost. The commitment may then lead
Far Out to bankruptcy rather than continued high profits.
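The effect of Far Out's capacity destruction can be traced with the same backward-induction logic applied to Tables 5.15 and 5.16: Race Car leads, Far Out follows, and shrinking the small-engine payoffs flips the equilibrium from small cars to big cars. An illustrative sketch (not part of the original text):

```python
def outcome(payoff):
    """payoff[car][engine] = (Far Out profit, Race Car profit).
    Race Car (the leader) picks a car anticipating Far Out's best engine reply."""
    def far_out_reply(car):
        return max(range(2), key=lambda eng: payoff[car][eng][0])
    car = max(range(2), key=lambda c: payoff[c][far_out_reply(c)][1])
    return car, far_out_reply(car)

CARS, ENGINES = ("small car", "big car"), ("small engines", "big engines")

# Table 5.15, re-indexed as payoff[car][engine] = (Far Out, Race Car).
before = [[(3, 6), (1, 1)],    # Race Car builds small cars
          [(3, 0), (8, 3)]]    # Race Car builds big cars
# Table 5.16: Far Out has destroyed small-engine capacity (those payoffs fall to 0).
after = [[(0, 6), (1, 1)],
         [(0, 0), (8, 3)]]

car0, eng0 = outcome(before)
car1, eng1 = outcome(after)
print(CARS[car0], ENGINES[eng0])   # small car small engines (Far Out earns 3)
print(CARS[car1], ENGINES[eng1])   # big car big engines (Far Out earns 8)
```

Before the commitment, Race Car's best anticipated payoff comes from small cars; after it, Far Out's threat to build big engines is credible, so Race Car switches to big cars and Far Out's profit rises from Rs.3 million to Rs.8 million.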
