Basic Investor’s Problem

K Nimmanunta
NIDA Business School
First Version: June 2025
This Version: September 2025

Case I

Consider a simple case of maximizing two-period concave rewards as follows:

\max R(C_1) + R(C_2)

s.t. C_1 + C_2 = W_0

The optimal consumption policy can be found by FOC.

R'(C_1) = R'(C_2) = \lambda

where \lambda is the Lagrange multiplier.

This can be interpreted as follows: the marginal reward gained or lost from moving a small amount dC between the two periods must be equal. Since both periods use the same reward function, equal marginal rewards imply C_1 = C_2.
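As a quick numeric sketch of this result (assuming a specific concave reward, R(C) = sqrt(C), and W_0 = 100, neither of which the text fixes), a grid search over the budget constraint recovers the equal split:

```python
# Numeric check of Case I: maximize R(C1) + R(C2) s.t. C1 + C2 = W0,
# with an assumed concave reward R(C) = sqrt(C) and W0 = 100.
import numpy as np

W0 = 100.0
C1 = np.linspace(0.01, W0 - 0.01, 9999)   # candidate period-1 consumption
C2 = W0 - C1                              # budget constraint C1 + C2 = W0
total = np.sqrt(C1) + np.sqrt(C2)         # two-period reward

best = C1[np.argmax(total)]               # maximizer: C1 = C2 = W0/2
```

Any strictly concave R gives the same conclusion; sqrt is only a convenient example.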

However, if the reward function is linear, the slopes are equal regardless of the consumption allocation. In this case, the allocation does not matter: the investor is indifferent between consuming more or less, now or later.

It is also worth considering a convex reward function. Following the FOC would then yield the minimum reward instead. The optimal consumption policy is a corner solution: consume everything today or everything tomorrow.
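The convex case can be checked the same way (again a sketch, assuming R(C) = C^2 and W_0 = 100): the grid maximum sits at a corner, not at the interior point C_1 = C_2 that the FOC would suggest.

```python
# Numeric check of the convex case: R(C) = C**2 (assumed example).
import numpy as np

W0 = 100.0
C1 = np.linspace(0.0, W0, 10001)
total = C1**2 + (W0 - C1)**2       # convex two-period reward

best = C1[np.argmax(total)]        # maximizer sits at a corner: 0 or W0
corner_gap = min(best, W0 - best)  # distance to the nearest corner
```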

Case II

Now, let’s introduce the time value of the reward. The investor prefers the reward today rather than waiting. The reward received next period is discounted at the discount rate b.

\max R(C_1) + R(C_2)/(1+b)

s.t. C_1 + C_2 = W_0

The optimal consumption policy can be found by FOC.

R'(C_1) = R'(C_2)/(1+b) = \lambda

where \lambda is the Lagrange multiplier.

With a concave reward, the optimal policy again equates the gain and loss from moving a small amount dC between the two periods. To achieve this, set C_2 = (1 + g)C_1, choosing g so that the marginal reward in period 2 is b% larger than in period 1. (With a concave R and b > 0, g turns out to be negative: the investor consumes less in period 2.)

R'(C_1) = R'(C_1 + g C_1)/(1+b) \approx R'(C_1)/(1+b) + R''(C_1)(g C_1)/(1+b)

b \approx [C_1 R''(C_1)/R'(C_1)] g

The term [C_1 R''(C_1)/R'(C_1)] is the elasticity of the marginal reward: it tells us how much larger (in %) the marginal reward becomes when consumption grows by 1%. For a concave R it is negative, so g and b have opposite signs.
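To see the approximation b \approx [C_1 R''(C_1)/R'(C_1)] g in action, here is a sketch assuming a log reward R(C) = ln C (whose elasticity is -1 everywhere), b = 0.05, and W_0 = 100, none of which the text fixes:

```python
# Numeric check of Case II: with R(C) = ln(C) the elasticity C*R''/R' = -1,
# so the approximation predicts g ~ -b (consumption falls by about b%).
import numpy as np

b, W0 = 0.05, 100.0
C1 = np.linspace(0.01, W0 - 0.01, 200001)
C2 = W0 - C1
total = np.log(C1) + np.log(C2) / (1 + b)   # discounted two-period reward

i = np.argmax(total)
g = C2[i] / C1[i] - 1    # consumption growth; exact value is -b/(1+b)
```

The exact solution here is g = -b/(1+b) ≈ -0.0476, close to -b as the first-order approximation promises.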

With a linear reward function, the optimal consumption is a corner solution where the investor consumes everything today. There is no point delaying consumption, as the delayed reward is discounted by b%.

With a convex reward, the optimum is again a corner solution, as in the linear case. Consuming everything later means the reward gets discounted by b, so it is best to consume everything today. The consumption policy from the FOC would instead yield the minimum reward.

Case III

Let’s introduce the risk-free rate. Money left unconsumed grows at the risk-free rate, so consuming early carries an opportunity cost. To compare wealth and consumption across the two periods, one must work in present value. Here b is the time value of the reward and r is the time value of money.

\max R(C_1) + R(C_2(1+r))/(1+b)

s.t. a budget constraint, C_1 + C_2 = W_0.

The optimal consumption policy can be found by FOC.

R'(C_1) = R'(C_2(1+r))(1+r)/(1+b) = \lambda

where \lambda is the Lagrange multiplier.

Setting period-2 consumption C_2(1+r) = (1 + g)C_1 and expanding:

R'(C_1) = R'(C_1 + g C_1)(1+r)/(1+b) \approx R'(C_1)(1+r)/(1+b) + R''(C_1)C_1 g (1+r)/(1+b)

b - r \approx [C_1 R''(C_1)/R'(C_1)]g(1+r)
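A numeric sketch of this relation (assuming R(C) = ln C, b = 0.08, r = 0.03, and W_0 = 100, none of which the text fixes): with a log reward the elasticity is -1, so the prediction is b - r \approx -g(1+r).

```python
# Numeric check of Case III (first setup): maximize
# R(C1) + R(C2*(1+r))/(1+b)  s.t.  C1 + C2 = W0, with R(C) = ln(C).
import numpy as np

b, r, W0 = 0.08, 0.03, 100.0
C1 = np.linspace(0.01, W0 - 0.01, 200001)
C2 = W0 - C1
total = np.log(C1) + np.log(C2 * (1 + r)) / (1 + b)

i = np.argmax(total)
g = C2[i] * (1 + r) / C1[i] - 1   # growth of consumption: C2(1+r) vs C1
approx = -g * (1 + r)             # elasticity of ln is -1
# approx should be close to b - r = 0.05 (first-order approximation)
```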

An alternative (and better) way to set up this problem is

\max R(C_1) + R(C_2)/(1+b)

s.t. W_1 = (1+r)W_0 - C_1 and C_2 = (1+r)W_1

The two budget constraints become one:

(1+r)C_1 + C_2 = (1+r)^2W_0

The optimal consumption policy can be found by FOC.
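For reference, the two conditions that follow can be read off the Lagrangian of the combined problem:

```latex
\mathcal{L} = R(C_1) + \frac{R(C_2)}{1+b} - \lambda\left[(1+r)C_1 + C_2 - (1+r)^2 W_0\right]
```

Setting \partial\mathcal{L}/\partial C_1 = 0 and \partial\mathcal{L}/\partial C_2 = 0 gives the two first-order conditions.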

R'(C_1) = (1+r)\lambda

R'(C_2)/(1+b) = \lambda

R'(C_1)/(1+r) = R'(C_2)/(1+b) = \lambda

where \lambda is the Lagrange multiplier. This can be interpreted as follows: the marginal reward at time 2 is discounted at rate b, the marginal reward at time 1 is discounted at the risk-free rate r, and the two discounted marginal rewards must be equal. I have spent a tremendous amount of time trying to understand why the marginal reward at time 1 must be discounted by r; I will explain later. For now, let’s finish the derivation:

Setting C_2 = (1 + g)C_1 and expanding:

R'(C_1) = R'(C_1 + g C_1)(1+r)/(1+b) \approx R'(C_1)(1+r)/(1+b) + R''(C_1)(g C_1)(1+r)/(1+b)

bR'(C_1) \approx rR'(C_1) + R''(C_1)(g C_1)(1+r)

b - r \approx [C_1 R''(C_1)/R'(C_1)] g (1+r)

Notice that the RHS contains the factor (1+r) as well, matching the result from the first setup.
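As a final numeric sketch (assuming R(C) = ln C, b = 0.08, r = 0.03, W_0 = 100), the alternative setup implies the same consumption growth g = (r - b)/(1+b) as the first setup, even though the budget constraints differ:

```python
# Check the alternative setup: maximize R(C1) + R(C2)/(1+b)
# s.t. (1+r)C1 + C2 = (1+r)^2 * W0, with R(C) = ln(C) assumed.
# The implied growth rate should be g = (r - b)/(1 + b).
import numpy as np

b, r, W0 = 0.08, 0.03, 100.0
C1 = np.linspace(0.01, (1 + r) * W0 - 0.01, 400001)
C2 = (1 + r) ** 2 * W0 - (1 + r) * C1      # combined budget constraint
total = np.log(C1) + np.log(C2) / (1 + b)

i = np.argmax(total)
g = C2[i] / C1[i] - 1     # should match (r - b)/(1 + b) = -0.0463
```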

