In this article, we will try to retell the paper in a simpler way that is easier to follow. For the moment, we will focus only on the first part of the paper, which tries to answer the following question:

Is there a way to guarantee policy improvement?

And the answer is yes.

In order to do so, we need three ingredients:

  1. Policy performance measurement
  2. Policy improvement algorithm
  3. Improved policy performance estimation

The overall idea is that if we can derive a lower bound on the improved policy's performance and show that its gain over the current policy is greater than zero, we thereby guarantee policy improvement.

So the path forward is to walk through approaches for estimating the improved policy's performance.

Basics

$V^\pi(s) = \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^t r_t \,\middle|\, s_0 = s, \pi\right]$ is the state-value function.

$Q^\pi(s, a) = \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^t r_t \,\middle|\, s_0 = s, a_0 = a, \pi\right]$ is the action-value function.

$A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$ is the advantage function.
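
To make these definitions concrete, here is a minimal tabular sketch (not from the paper) that computes $V^\pi$, $Q^\pi$, and $A^\pi$ exactly by solving the Bellman equations. The MDP arrays `P`, `R`, and the policy `pi` are hypothetical placeholders.

```python
import numpy as np

# Hypothetical tabular MDP; all names and sizes here are illustrative only.
S, A, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s']: transition probabilities
R = rng.uniform(size=(S, A))                 # R[s, a]: expected one-step reward
pi = rng.dirichlet(np.ones(A), size=S)       # pi[s, a]: current policy

def evaluate(pi):
    """Exact policy evaluation: returns V^pi, Q^pi and A^pi for the MDP above."""
    P_pi = np.einsum('sap,sa->sp', P, pi)    # state-to-state transitions under pi
    r_pi = np.einsum('sa,sa->s', R, pi)      # expected one-step reward under pi
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)  # V^pi = (I - gamma P_pi)^-1 r_pi
    Q = R + gamma * np.einsum('sap,p->sa', P, V)         # Q^pi(s,a) = r + gamma E[V^pi(s')]
    Adv = Q - V[:, None]                                 # A^pi(s,a) = Q^pi(s,a) - V^pi(s)
    return V, Q, Adv
```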

Policy performance

We first define the policy performance as the average performance over start states:

$$\eta(\pi) = \mathbb{E}_{s_0 \sim D}\!\left[V^\pi(s_0)\right]$$

where $D$ is the start-state distribution. In the paper, this could be substituted with another distribution at will under the notion of a restart distribution, but we do not care much about that here. Let's just say that it is under some start-state distribution.
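
Continuing the sketch above (reusing `evaluate`, `S`, and `pi`), the policy performance is just the start-state average of $V^\pi$; the uniform start-state distribution `D` is an assumption for illustration.

```python
# Hypothetical uniform start-state distribution.
D = np.full(S, 1.0 / S)

V, Q, Adv = evaluate(pi)
eta = D @ V      # eta(pi) = E_{s0 ~ D}[ V^pi(s0) ]
```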

Conservative greedy policy improvement

The usual policy improvement is to alter the current policy to be $\pi'(s) = \arg\max_a Q^\pi(s, a)$ for all $s$. Here we look at a more general case that allows us to transform the policy in a more granular way, using $\alpha$ as a parameter:

$$\pi_{\text{new}}(a \mid s) = (1 - \alpha)\,\pi(a \mid s) + \alpha\,\pi'(a \mid s)$$

where $\pi'$ is a greedy improvement of $\pi$.

So our goal is to guarantee the improvement of the policy under this conservative greedy improvement, that is:

$$\eta(\pi_{\text{new}}) - \eta(\pi) > 0$$

That is, at any moment we need to find an $\alpha$ that satisfies the above inequality. In other words, how small should $\alpha$ be such that it still improves the policy?
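
Here is a minimal sketch of the conservative update, reusing `evaluate` and `D` from the earlier snippets; `alpha` is an arbitrary illustrative value, and the improvement check simply re-evaluates the mixed policy, which is exactly the expensive step the rest of the article tries to avoid.

```python
V, Q, Adv = evaluate(pi)
pi_greedy = np.eye(A)[Q.argmax(axis=1)]           # pi'(a|s): greedy w.r.t. Q^pi
alpha = 0.1                                       # hypothetical step size
pi_new = (1 - alpha) * pi + alpha * pi_greedy     # conservative mixture policy

V_new, _, _ = evaluate(pi_new)
improvement = D @ V_new - D @ V                   # eta(pi_new) - eta(pi), should be > 0
```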

Improved policy performance estimation

As you can see from the inequality above, we need the improved policy's performance $\eta(\pi_{\text{new}})$, but we want to get it cheaply because we might need to fine-tune the right $\alpha$. It is not viable to just rerun policy evaluation (on a new set of experience from $\pi_{\text{new}}$); it is just too slow. We need to estimate its lower bound instead.

In the paper, the author shows two ways to estimate it:

  1. Using Taylor's series to the first order. Unfortunately, this approach does not get us any closer to a true lower bound, but it is a useful starting point anyway.
  2. Using the author's proposed approach. This gives a lower bound.

Using Taylor's series to approximate $\eta(\pi_{\text{new}})$

If we write the policy performance of a parameterized policy using Taylor's expansion to the first degree, we get:

$$\eta(\pi_{\theta + \Delta\theta}) \approx \eta(\pi_\theta) + \Delta\theta^\top \nabla_\theta\, \eta(\pi_\theta)$$

Here we have an approximation error on the order of the squared step size, albeit without knowing its constant factor.

Since our policy improvement is not exactly in the form of the aforementioned parameter update, we rather want it in terms of $\alpha$ (recall the conservative policy improvement $\pi_{\text{new}} = (1-\alpha)\,\pi + \alpha\,\pi'$).

So we want to get an estimate of something like:

$$\eta(\pi_{\text{new}}) \approx \eta(\pi) + \alpha\, \frac{\partial\, \eta(\pi_{\text{new}})}{\partial \alpha}\bigg|_{\alpha=0}$$

In this expansion, the only problematic part is the second term (the first derivative): we want $\frac{\partial\, \eta(\pi_{\text{new}})}{\partial \alpha}$, not $\nabla_\theta\, \eta(\pi_\theta)$.

We now begin to derive $\frac{\partial\, \eta(\pi_{\text{new}})}{\partial \alpha}$.

The gradient of policy performance was first derived in Sutton's 1999 policy gradient theorem. We state it here without further ado:

$$\nabla_\theta\, \eta(\pi_\theta) = \sum_s d^{\pi_\theta}(s) \sum_a \nabla_\theta\, \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a)$$

where $d^\pi$ is the discounted state visitation probability. For completeness:

$$d^{\pi}(s) = \sum_{t=0}^{\infty} \gamma^t\, P(s_t = s \mid D, \pi)$$

where $P(s_t = s \mid D, \pi)$ is the probability of visiting state $s$ after taking $t$ steps under a policy $\pi$. Please note that $d^\pi$ is not a probability distribution (it does not sum to $1$), but we can make it one by multiplying it by $(1-\gamma)$ (since $\sum_{t=0}^{\infty} \gamma^t = \frac{1}{1-\gamma}$).
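
In the tabular sketch, the unnormalized discounted visitation has a closed form, $d^{\pi} = (I - \gamma P_\pi^\top)^{-1} D$, which the snippet below computes; `P`, `pi`, and `D` are the hypothetical arrays from the earlier snippets.

```python
# Unnormalized discounted visitation: d^pi(s) = sum_t gamma^t P(s_t = s | D, pi).
P_pi = np.einsum('sap,sa->sp', P, pi)
d_pi = np.linalg.solve(np.eye(S) - gamma * P_pi.T, D)

# It sums to 1/(1 - gamma); multiplying by (1 - gamma) makes it a proper distribution.
assert np.isclose(d_pi.sum(), 1.0 / (1.0 - gamma))
```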

From the policy gradient theorem, we substitute $\pi_\theta$ with $\pi_{\text{new}}$ and treat $\alpha$ as the policy parameter:

$$\frac{\partial\, \eta(\pi_{\text{new}})}{\partial \alpha} = \sum_s d^{\pi_{\text{new}}}(s) \sum_a \frac{\partial\, \pi_{\text{new}}(a \mid s)}{\partial \alpha}\, Q^{\pi_{\text{new}}}(s, a)$$

Consider $\frac{\partial\, \pi_{\text{new}}(a \mid s)}{\partial \alpha}$:

$$\frac{\partial\, \pi_{\text{new}}(a \mid s)}{\partial \alpha} = \frac{\partial}{\partial \alpha}\Big[(1-\alpha)\,\pi(a \mid s) + \alpha\,\pi'(a \mid s)\Big] = \pi'(a \mid s) - \pi(a \mid s)$$

We substitute this back, evaluate at $\alpha = 0$ (where $\pi_{\text{new}} = \pi$), and do some algebra:

$$\frac{\partial\, \eta(\pi_{\text{new}})}{\partial \alpha}\bigg|_{\alpha=0} = \sum_s d^{\pi}(s) \sum_a \big(\pi'(a \mid s) - \pi(a \mid s)\big)\, Q^{\pi}(s, a) = \sum_s d^{\pi}(s) \sum_a \pi'(a \mid s)\, A^{\pi}(s, a)$$

This gradient can be computed without further interaction with the environment. We just need to change $\pi$ to $\pi'$ in the inner sum and rerun on the previous experience.
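
As a sanity check on the formula above (still in the toy tabular setting, reusing the arrays from the earlier snippets), the derivative at $\alpha = 0$ only needs $d^\pi$, $Q^\pi$ (or $A^\pi$), and the candidate policy $\pi'$, all of which were computed under $\pi$.

```python
# d eta / d alpha at alpha = 0: sum_s d^pi(s) sum_a (pi'(a|s) - pi(a|s)) Q^pi(s,a)
grad_alpha = np.einsum('s,sa,sa->', d_pi, pi_greedy - pi, Q)

# Equivalent advantage form, since sum_a (pi' - pi)(a|s) * V^pi(s) = 0 for every s.
grad_alpha_adv = np.einsum('s,sa,sa->', d_pi, pi_greedy, Adv)
assert np.isclose(grad_alpha, grad_alpha_adv)
```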

Policy advantage

The quantity in the derivative above is closely related to the policy advantage, which is defined as:

$$\mathbb{A}_{\pi}(\pi') = \mathbb{E}_{s \sim (1-\gamma)\, d^{\pi}}\Big[\mathbb{E}_{a \sim \pi'(\cdot \mid s)}\big[A^{\pi}(s, a)\big]\Big]$$

Since it is written as an expectation, it uses the normalized distribution $(1-\gamma)\,d^{\pi}$. Hence:

$$\mathbb{A}_{\pi}(\pi') = (1-\gamma) \sum_s d^{\pi}(s) \sum_a \pi'(a \mid s)\, A^{\pi}(s, a)$$

Intuitively, the policy advantage tells us how much $\pi'$ tries to take large advantages (be greedy). If $\pi' = \pi$, this quantity is $0$. It is maximized when $\pi'$ is a greedy policy w.r.t. $Q^{\pi}$.

Don't be confused! $\mathbb{A}_{\pi}(\pi')$ is the policy advantage, which looks at all states, while $A^{\pi}(s, a)$ is the advantage function, which looks at a particular state and action.
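
In the same toy setting, the normalized policy advantage is just the quantity from the previous snippet scaled by $(1-\gamma)$:

```python
# Policy advantage of pi' w.r.t. pi: uses the normalized visitation (1 - gamma) * d^pi.
policy_advantage = (1 - gamma) * np.einsum('s,sa,sa->', d_pi, pi_greedy, Adv)
```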

Taylor’s expansion of policy performance

We now get:

$$\eta(\pi_{\text{new}}) = \eta(\pi) + \frac{\alpha}{1-\gamma}\, \mathbb{A}_{\pi}(\pi') + O(\alpha^2)$$

Now, we can draw some conclusions from the above equation:

  1. With policy improvement, the second term (the first derivative) is positive (if the policy is not optimal).
  2. If $\alpha$ is small enough, the second term will dominate the third term (the $O(\alpha^2)$ remainder), resulting in policy improvement.

The only problem is that we don't know how small $\alpha$ must be to guarantee the policy improvement. We now turn to a different approach.

Using the author’s approach

In order to guarantee policy improvement, we need to show that $\eta(\pi_{\text{new}}) - \eta(\pi) > 0$.

We first rewrite it in a different form.

Lemma 6.1 (from the paper): for the two policies $\pi_{\text{new}}$ and $\pi$,

$$\eta(\pi_{\text{new}}) - \eta(\pi) = \mathbb{E}_{\tau \sim \pi_{\text{new}}}\left[\sum_{t=0}^{\infty} \gamma^t\, A^{\pi}(s_t, a_t)\right]$$

Proof: write $A^{\pi}(s_t, a_t) = \mathbb{E}\big[r_t + \gamma V^{\pi}(s_{t+1})\big] - V^{\pi}(s_t)$; summing over $t$ with the $\gamma^t$ weights, the value terms telescope, leaving $\mathbb{E}_{\tau \sim \pi_{\text{new}}}\big[\sum_t \gamma^t r_t\big] - \mathbb{E}_{s_0 \sim D}\big[V^{\pi}(s_0)\big] = \eta(\pi_{\text{new}}) - \eta(\pi)$.

With Lemma 6.1 we now have:

$$\eta(\pi_{\text{new}}) - \eta(\pi) = \sum_{t=0}^{\infty} \gamma^t\, \mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi_{\text{new}})}\Big[\mathbb{E}_{a \sim \pi_{\text{new}}(\cdot \mid s)}\big[A^{\pi}(s, a)\big]\Big]$$

where $P(s_t = s \mid D, \pi_{\text{new}})$ is the probability of visiting state $s$ at time $t$ under the policy $\pi_{\text{new}}$.
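
For intuition, here is a small numerical check of Lemma 6.1 in the toy tabular MDP, using the equivalent visitation form $\eta(\pi_{\text{new}}) - \eta(\pi) = \sum_s d^{\pi_{\text{new}}}(s) \sum_a \pi_{\text{new}}(a \mid s)\, A^{\pi}(s, a)$; variables are reused from the earlier snippets.

```python
# Discounted visitation of the new (mixture) policy.
d_new = np.linalg.solve(np.eye(S) - gamma * np.einsum('sap,sa->sp', P, pi_new).T, D)

lhs = D @ V_new - D @ V                               # eta(pi_new) - eta(pi)
rhs = np.einsum('s,sa,sa->', d_new, pi_new, Adv)      # expected A^pi under pi_new
assert np.isclose(lhs, rhs)
```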

Evidently, we do have $P(s_t = s \mid D, \pi)$ (from the experience collected under $\pi$), but we do not have $P(s_t = s \mid D, \pi_{\text{new}})$. A way forward is to estimate the equation with what we have. Since the deviation of our estimate comes from the mismatch between $P(s_t \mid \pi)$ and $P(s_t \mid \pi_{\text{new}})$, intuitively a small $\alpha$ should result in a small mismatch and vice versa. This implies that $\pi_{\text{new}}$ must share some roots with $\pi$, which is the part we can work with. This allows us to get an informed estimate and put an upper bound on the part we cannot work with.

The two parts

Consider the policy $\pi_{\text{new}}$; we know from its definition that it is a compound policy. Another way to look at it is that we have two policies, $\pi$ and $\pi'$: at each step, with probability $(1-\alpha)$ we select an action according to $\pi$, and with probability $\alpha$ we select an action from $\pi'$.

At time $t$, we define our two parts as:

  1. Part one: we have followed $\pi$ from $t = 0$ until now.
  2. Part two: at some point before $t$ we selected an action from $\pi'$.

If we have been following $\pi$ the whole time.

The probability of this event is $(1-\alpha)^t$.

The expected advantage function for this part is:

$$\mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi)}\Big[\mathbb{E}_{a \sim \pi_{\text{new}}(\cdot \mid s)}\big[A^{\pi}(s, a)\big]\Big] = \alpha\, \mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi)}\Big[\mathbb{E}_{a \sim \pi'(\cdot \mid s)}\big[A^{\pi}(s, a)\big]\Big]$$

(the equality holds because $\mathbb{E}_{a \sim \pi}[A^{\pi}(s, a)] = 0$, so only the $\alpha$-fraction of actions coming from $\pi'$ contributes).

If we have followed $\pi'$ at any point prior to $t$.

The probability of this event is $1 - (1-\alpha)^t$.

The expected advantage function for this part is:

$$\mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi_{\text{new}},\ \text{at least one } \pi' \text{ action before } t)}\Big[\mathbb{E}_{a \sim \pi_{\text{new}}(\cdot \mid s)}\big[A^{\pi}(s, a)\big]\Big]$$

which we cannot compute, because we do not know this state distribution. We can, however, define an upper bound on the magnitude of this value:

$$\alpha\,\epsilon, \qquad \text{where} \quad \epsilon = \max_s \Big|\mathbb{E}_{a \sim \pi'(\cdot \mid s)}\big[A^{\pi}(s, a)\big]\Big|$$

This is obvious: we just take the maximum over states, which literally cannot be exceeded (and, as before, the inner expectation under $\pi_{\text{new}}$ contributes only an $\alpha$-fraction from $\pi'$). As you shall see later on, the smaller the $\epsilon$, the tighter our estimate will be.
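
In code, this worst-case quantity $\epsilon$ is a single max over states (toy setting, reusing `pi_greedy` and `Adv` from the earlier snippets):

```python
# epsilon = max_s | E_{a ~ pi'}[ A^pi(s, a) ] |: the per-state expected advantage of pi'
# can never exceed this, no matter which states pi_new actually visits.
epsilon = np.abs(np.einsum('sa,sa->s', pi_greedy, Adv)).max()
```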

The total expected advantage function at time $t$ is the probability-weighted sum of both parts; bounding the second part by its worst case gives:

$$\mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi_{\text{new}})}\Big[\mathbb{E}_{a \sim \pi_{\text{new}}}\big[A^{\pi}(s, a)\big]\Big] \;\ge\; (1-\alpha)^t\, \alpha\, \mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi)}\Big[\mathbb{E}_{a \sim \pi'}\big[A^{\pi}(s, a)\big]\Big] \;-\; \big(1 - (1-\alpha)^t\big)\, \alpha\, \epsilon$$

Furthermore, since the first expectation is itself bounded in magnitude by $\epsilon$, we can show that:

$$(1-\alpha)^t\, \alpha\, \mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi)}\Big[\mathbb{E}_{a \sim \pi'}\big[A^{\pi}(s, a)\big]\Big] \;\ge\; \alpha\, \mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi)}\Big[\mathbb{E}_{a \sim \pi'}\big[A^{\pi}(s, a)\big]\Big] \;-\; \big(1 - (1-\alpha)^t\big)\, \alpha\, \epsilon$$

Substituting this into the previous inequality:

$$\mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi_{\text{new}})}\Big[\mathbb{E}_{a \sim \pi_{\text{new}}}\big[A^{\pi}(s, a)\big]\Big] \;\ge\; \alpha\, \mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi)}\Big[\mathbb{E}_{a \sim \pi'}\big[A^{\pi}(s, a)\big]\Big] \;-\; 2\,\big(1 - (1-\alpha)^t\big)\, \alpha\, \epsilon$$

This is just for a single time step $t$. We still need to incorporate it into whole trajectories, which extend from $t = 0$ to $\infty$.

Apply to all time steps

Substituting the per-time-step bound into the sum from Lemma 6.1:

$$\eta(\pi_{\text{new}}) - \eta(\pi) \;\ge\; \alpha \sum_{t=0}^{\infty} \gamma^t\, \mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi)}\Big[\mathbb{E}_{a \sim \pi'}\big[A^{\pi}(s, a)\big]\Big] \;-\; 2\,\alpha\,\epsilon \sum_{t=0}^{\infty} \gamma^t \big(1 - (1-\alpha)^t\big)$$

Looking more carefully at the right-hand side: it depends on $\alpha$, which is exactly what we want to find (remember, we want the policy-improving $\alpha$). In this form, solving for $\alpha$ would be very hard because the expression is not closed form. We want each $\sum$ term to collapse into a constant independent of $\alpha$ or a closed-form expression in $\alpha$; then solving for $\alpha$ becomes trivial.

To realize this, we substitute the definition of the discounted visitation $d^{\pi}$ into the first sum:

$$\sum_{t=0}^{\infty} \gamma^t\, \mathbb{E}_{s \sim P(s_t = s \mid D,\, \pi)}\Big[\mathbb{E}_{a \sim \pi'}\big[A^{\pi}(s, a)\big]\Big] = \sum_s d^{\pi}(s) \sum_a \pi'(a \mid s)\, A^{\pi}(s, a)$$

This is in fact the unnormalized policy advantage (see the policy advantage section above). It equals $\frac{\mathbb{A}_{\pi}(\pi')}{1-\gamma}$.

Now consider the second sum. We can substitute the geometric series and get:

$$\sum_{t=0}^{\infty} \gamma^t \big(1 - (1-\alpha)^t\big) = \frac{1}{1-\gamma} - \frac{1}{1 - \gamma(1-\alpha)} = \frac{\alpha\,\gamma}{(1-\gamma)\big(1 - \gamma(1-\alpha)\big)}$$

Substituting them back into the bound:

$$\eta(\pi_{\text{new}}) - \eta(\pi) \;\ge\; \frac{\alpha}{1-\gamma}\left(\mathbb{A}_{\pi}(\pi') - \frac{2\,\alpha\,\gamma\,\epsilon}{1 - \gamma(1-\alpha)}\right)$$

This equation is Theorem 4.1 in the paper.
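
The bound is easy to evaluate once $\mathbb{A}_{\pi}(\pi')$ and $\epsilon$ are known; below is a small helper (a sketch, not code from the paper) that computes the right-hand side of Theorem 4.1 for a given $\alpha$.

```python
def improvement_lower_bound(alpha, policy_advantage, epsilon, gamma):
    """Right-hand side of Theorem 4.1: a pessimistic estimate of eta(pi_new) - eta(pi)."""
    penalty = 2.0 * alpha * gamma * epsilon / (1.0 - gamma * (1.0 - alpha))
    return alpha / (1.0 - gamma) * (policy_advantage - penalty)
```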

Finding the right step

Finally, we want to guarantee the policy improvement by selecting a proper $\alpha$. We then need to solve for an $\alpha$ that makes the lower bound positive:

$$\frac{\alpha}{1-\gamma}\left(\mathbb{A}_{\pi}(\pi') - \frac{2\,\alpha\,\gamma\,\epsilon}{1 - \gamma(1-\alpha)}\right) > 0$$

This would be guaranteed to improve the policy because we calculate it from the pessimistic estimate (its lower bound).
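
One simple, illustrative way to pick the step size is to maximize the pessimistic bound over a grid of candidate $\alpha$ values (reusing `improvement_lower_bound` and the toy quantities from the earlier snippets); any $\alpha$ with a positive bound is guaranteed to improve the policy.

```python
alphas = np.linspace(0.0, 1.0, 1001)
bounds = improvement_lower_bound(alphas, policy_advantage, epsilon, gamma)

best_alpha = alphas[bounds.argmax()]
if bounds.max() > 0:                                    # improvement is guaranteed
    pi_new = (1 - best_alpha) * pi + best_alpha * pi_greedy
```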