Tutorial - Pricing a European call option using quantum computing

Welcome! 👋👋 This tutorial notebook is designed so that you get to work hands-on with pricing a European call option using quantum computing algorithms. It is structured in the following four sections:

  0. Reminder - what is a European call option?

  1. Asset distributions and payoff function

  2. Classical Monte Carlo on a QC

  3. Quantum Amplitude Estimation (QAE)

If you have any questions or need assistance, don't hesitate to find either one of us in the room and we'll try to help you! 🤗

Authors: BjΓΆrn LΓΆfdahl, Victorio Úbeda Sosa, Wilhelm Γ…gren


0. Reminder - what is a European call option?

A call option is a financial contract in which the holder (buyer) has the right (but not the obligation) to buy a specified quantity of a security at a specified price (strike price) within a fixed period of time (until its expiration/maturity). For a European call option the holder can only exercise the option at expiry/maturity.

For the writer (seller) of a call option, it represents an obligation to sell the underlying security at the strike price if the option is exercised. The call option writer is paid a premium for taking on the risk associated with the obligation (this is however omitted here as it only offsets the payoff by the premium price).

Suppose a European call option with strike price \(K\) and an underlying asset whose spot price at maturity \(S_T\) follows a given distribution. The corresponding payoff function for the holder is then defined as

\[ \phi(S_T) = \max(S_T - K, 0). \]

For an investor to profit from a European call option, the spot price at maturity \(S_T\) has to be high enough above the strike price to cover the cost of the option premium.

Simple example

Assume that Bob is the writer for a European call option with IonQ (NYSE: IONQ) as the underlying with spot price \(S_0=€30\). The contract has maturity time \(T\) of 30 days from today, strike price \(K=€35\), with a premium of \(€5\) per contract. In this simple example we disregard all brokerage and assume that Bob does not already own the underlying asset (referred to as a naked call option).

Is this premium too high? Is it too low? Is it perhaps a fair price? 🤔

Alice is interested in buying this call option but is unsure whether Bob has put a fair premium on the contract. However, Alice strongly believes that IonQ will be trading at \(€50\) in 30 days (i.e. at maturity \(T\)). Fast forward 30 days: Alice's belief turned out to be true, so she makes \(S_T - K - \text{premium} = 50 - 35 - 5 = €10\) per contract.

If IonQ is worth less than the strike price \(K\) at maturity \(T\), the option is not exercised and expires worthless. The holder (Alice) loses the premium and the writer (Bob) profits on the premium.

Why would Bob want to write this specific call option? Based on the strike price \(K=€35\) and the premium of \(€5\), Bob makes money as long as IonQ trades below \(€40\) at maturity, since his profit per contract is \(\text{premium} - \max(S_T - K, 0)\), which reaches zero at \(S_T=€40\) \((5 - (40 - 35) = 0)\). If \(S_T=€38\) then Bob makes \(5 - (38 - 35) = €2\) per contract. So Bob would only want to write this call option if he believes that IonQ will not be worth much more in the future.
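
If you want to double-check the arithmetic, here is a minimal sketch in plain Python using the numbers from the example above (the variable names strike and premium are just illustrative):

# Numbers from the example above (all amounts in €)
strike = 35   # strike price K
premium = 5   # option premium per contract

def holder_profit(S_T):
    # payoff at maturity minus the premium paid
    return max(S_T - strike, 0) - premium

def writer_profit(S_T):
    # premium received minus the payoff owed to the holder
    return premium - max(S_T - strike, 0)

print(holder_profit(50))  # Alice's scenario: 10
print(writer_profit(38))  # Bob's scenario: 2
print(writer_profit(40))  # Bob's break-even point: 0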

In our simple case above Alice accepted the premium (the price of the option) based solely on her strong belief that IonQ would be worth much more in the future. But could she in some way have made a better decision on whether or not the premium of the call option was fair?

Yes, she could herself have estimated a fair price of the call option based on, e.g., the current spot price \(S_t\), the desired strike price \(K\), the time to maturity \(T\), the risk-free rate \(r\), the asset's implied volatility \(\sigma\), and a prediction of the value of the underlying at maturity, by estimating the discounted expected payoff

\[ v = e^{-rT}\mathbb{E}[\phi(S_T)] = e^{-rT}\mathbb{E}[\max(S_T - K, 0)]. \]
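
For intuition, here is a minimal classical sketch of estimating this discounted expected payoff with plain NumPy, assuming (as in the Black & Scholes model introduced in the next section) that \(S_T\) is log-normally distributed; all parameter values are purely illustrative:

import numpy as np

rng = np.random.default_rng(seed=0)

S0, K, r, T, vol = 2.0, 1.9, 0.05, 40 / 365, 0.4  # illustrative parameters
n_samples = 100_000

# Sample S_T = S0 * exp((r - vol^2 / 2) * T + vol * sqrt(T) * Z) with Z ~ N(0, 1)
Z = rng.standard_normal(n_samples)
S_T = S0 * np.exp((r - 0.5 * vol ** 2) * T + vol * np.sqrt(T) * Z)

# Discounted expected payoff of the call option
v = np.exp(-r * T) * np.maximum(S_T - K, 0).mean()
print(f"Classical Monte Carlo estimate: {v:.4f}")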

So what should be the fair price of the European call option? Let’s estimate it using quantum computing! 🚀

1. Asset distributions and payoff function

The first step in estimating a fair price for the European call option is to encode/load the distribution of the underlying asset onto a quantum circuit. Assume that the underlying asset can be described by the Geometric Brownian Motion

\[ dS_t = rS_tdt + \sigma S_tdW_t \]

and that we follow the Black & Scholes model, such that the spot price at maturity \(T\) satisfies

\[ \ln\Bigl( \frac{S_T}{S_t} \Bigr) \sim N\Bigl( \bigl(r - \tfrac{1}{2}\sigma^2\bigr)T, \, \sigma \sqrt{T} \Bigr) \]

where \(\mu\) and \(\sigma'\) denote the mean and the standard deviation of \(\ln(S_T)\), i.e.

\[\begin{split} \mu &= \Bigl(r - \frac{\sigma^2}{2}\Bigr) T + \ln(S_0)\\ \sigma' &= \sigma \sqrt{T} \end{split}\]

Because we will be working with quantum circuits in this tutorial, we have to create an equidistant discretization of the distribution onto \(2^n\) grid points, where \(n\) is the number of qubits of our quantum circuit. To do this we also have to truncate the distribution to some interval, e.g. within three standard deviations of its mean,

\[ x \sim N(\mu, \sigma'), \qquad x \in [\mu - 3\sigma', \mu + 3\sigma']. \]

πŸ“ Exercise

Now we want you to:

  1. Classically create the distribution that we just defined, and sample from it on an equidistant interval.

  2. Change the variables around and see how the distribution of the spot price at maturity \(S_T\) changes.

  3. What happens if the asset has a large initial spot price \(S_0 > 10\) with a relatively high volatility \(\sigma > 0.2\)?

    • Hint: perhaps we have too few qubits to represent such a β€œwide” distribution?

Some help to get you started:

  • To calculate \(\mu\) and \(\sigma\) refer to the above equations.

  • To create an equidistant range with \(2^n\) grid points you can use the np.linspace method.

  • Normalizing the distribution in this context means that the probabilities should sum to 1.

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

n_uncertainty_qubits = 3  # change this also if you want to :)
n_qubit_states = 2 ** n_uncertainty_qubits

S = 2  # initial spot price
volatility = 0.4  # implied volatility
r = 0.05  # risk-free market rate
T = 40 / 365  # time to maturity

mu = ...  # this should be the mean of ln(S_T), see the equations above
sigma = ...  # this should be the scaled volatility sigma'

# Some properties of the log-normal distribution
# https://en.wikipedia.org/wiki/Log-normal_distribution
mean = np.exp(mu + 0.5 * sigma ** 2)
variance = (np.exp(sigma ** 2) - 1) * np.exp(2 * mu + sigma ** 2)
std_dev = np.sqrt(variance)

low = np.maximum(mean - 3 * std_dev, 0)
high = mean + 3 * std_dev

# Scipy log-normal distribution
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html
lognorm_scale = S * np.exp(r * T - 0.5 * sigma ** 2)
xx = ...  # create an equidistant interval from `low` to `high` with `n_qubit_states` grid points
yy = stats.lognorm.pdf(xx, sigma, scale=lognorm_scale) / sigma
yy = ...  # normalize the pdf

plt.figure()
plt.bar(xx, yy, width=0.2, alpha=0.5)
plt.plot(xx, yy, linewidth=2, color="blue")
plt.xticks(np.arange(min(xx) - 0.05, max(xx) + 0.05, (max(xx) - min(xx)) / 10), size=12, rotation=70)
plt.yticks(size=12)
plt.title(r"$ln \frac{S_T}{S_t} \sim N((r - \frac{1}{2}\sigma^2)T, \sigma\sqrt{T})$", size=20)
plt.xlabel(r"Spot price at maturity $S_T$ (€)", size=15)
plt.ylabel(r"Probability ($\%$)", size=15)
plt.grid()
plt.show()

Encode the distribution onto a quantum circuit

Next we need to construct a circuit that loads the log-normal distribution into a quantum state. The distribution as defined before is truncated to the given interval \([\mu - 3\sigma', \mu + 3\sigma']\) and discretized using \(2^n\) grid points, where \(n\) denotes the number of qubits we are using. The unitary operator \(\mathcal{A}\) corresponding to the circuit can be represented as

\[ \mathcal{A}\ket{0}_n = \sum_{i=0}^{2^n - 1} \sqrt{p_i}\ket{i}_n \]

where \(\ket{i}_n\) is the \(i^{th}\) basis state and \(p_i\) represents the probability of measuring the \(i^{th}\) grid point.

We can use a pre-defined circuit from the qiskit-finance library which calculates the same probabilities as we did above and then initializes a circuit with gates that encode the probabilities of the states. It uses the Initialize class from the main qiskit library to create a circuit with gates that represent the provided probabilities.
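
As a rough sketch of what such a state-preparation circuit does under the hood (this is only an illustration, not the qiskit-finance implementation), you could load the classically computed probabilities yourself with the StatePreparation gate (the unitary part of Initialize) and check the resulting amplitudes with Statevector; here yy is assumed to be the normalized probability vector from the code cell above:

import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import StatePreparation
from qiskit.quantum_info import Statevector

# The amplitudes are the square roots of the normalized probabilities p_i
amplitudes = np.sqrt(yy)

manual_A = QuantumCircuit(n_uncertainty_qubits)
manual_A.append(StatePreparation(amplitudes), range(n_uncertainty_qubits))

# The measurement probabilities of the prepared state should match p_i
print(np.round(Statevector(manual_A).probabilities(), 4))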

πŸ“ Exercise

Now we want you to:

  1. Change the number of qubits (defined in the code cell above) and see how the depth of the circuit changes.

    • What happens to the depth? Is this a sustainable behaviour?

  2. Look at the types of gates and think about what would happen if the hardware we wanted to run on only supported a certain set of (native) basis gates. Do you think the circuit would be deeper? Shallower? What are your thoughts?

    • You will get to explore this practically later in the tutorial, but it is good to start thinking about already.

  3. Verify that the quantum circuit has implemented the expected distribution (the one you made in the code cell above).

from qiskit_finance.circuit.library import LogNormalDistribution

uncertainty_model = LogNormalDistribution(
    n_uncertainty_qubits, mu=mu, sigma=sigma**2, bounds=(low, high)
)

decomposed_uncertainty_model = uncertainty_model.decompose(reps=10)

print(f"Depth of the P(X) circuit: {decomposed_uncertainty_model.depth()} ({n_uncertainty_qubits} qubits)")
display(uncertainty_model.draw("mpl"))
display(decomposed_uncertainty_model.draw("mpl"))

# Verify that the circuit encodes the same probability distribution.
# You should see the exact same distribution as above when we used `scipy.stats.lognorm.pdf`.

xx = uncertainty_model.values
yy = uncertainty_model.probabilities

plt.figure()
plt.bar(xx, yy, width=0.2, alpha=0.5)
plt.plot(xx, yy, linewidth=2, color="blue")
plt.xticks(np.arange(min(xx) - 0.05, max(xx) + 0.05, (max(xx) - min(xx)) / 10), size=12, rotation=70)
plt.yticks(size=12)
plt.title(r"$ln \frac{S_T}{S_t} \sim N((r - \frac{1}{2}\sigma^2)T, \sigma\sqrt{T})$", size=20)
plt.xlabel(r"Spot price at maturity $S_T$ (€)", size=15)
plt.ylabel(r"Probability ($\%$)", size=15)
plt.grid()
plt.show()

Encoding the payoff function

The next step is to encode the payoff function onto a quantum circuit. Remember that the payoff function is defined as

\[ f(X) = \max(X - K, 0) \]

and since we have an equidistant discretized grid for the random variable \(X\) on \(\{0, 1, \dots, 2^n - 1\}\) we can define an operator \(\mathcal{F}\) as

\[ \mathcal{F}\ket{i}_n\ket{0} = \sqrt{1 - f(i)}\ket{i}_n\ket{0} + \sqrt{f(i)}\ket{i}_n\ket{1} \]

where it is required that \(f(i) \in [0, 1]\). Applying the operator \(\mathcal{F}\) to the state \(\mathcal{A}\ket{0}_n\ket{0}\) prepared by our distribution circuit yields

\[ \mathcal{F}\mathcal{A}\ket{0}_n\ket{0} = \dots \ket{0} + \sum_{i=0}^{2^n-1}\sqrt{f(i)}\sqrt{p_i}\ket{i}_n\ket{1} \]

where we get that the probability of measuring \(\ket{1}\) in the final qubit is

\[ \sum_{i=0}^{2^n-1} f(i)p_i = \mathbb{E}[f(X)]. \]

However, we have a problem: our payoff function does not currently map to the \([0, 1]\) interval. To fix this we can rescale it as

\[ \hat{f}(i) = \frac{f(\phi(i))}{f(x_{\max})},\qquad \text{with}\quad \phi(i) = x_{\min} + i \cdot \frac{x_{\max} - x_{\min}}{2^n - 1}, \]

where \(\phi(i)\) maps the grid index \(i \in \{0, 1, \dots, 2^n - 1\}\) to the corresponding spot price.
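
To see what this rescaling does numerically, here is a small classical sketch; the strike K_example is a purely illustrative value (you will pick your own strike in the exercise below), while low, high and n_qubit_states come from the earlier code cell:

import numpy as np

K_example = 1.9  # illustrative strike price, not the one you will choose below

i = np.arange(n_qubit_states)                           # grid indices 0 .. 2^n - 1
phi_i = low + i * (high - low) / (n_qubit_states - 1)   # map grid index to spot price
f_i = np.maximum(phi_i - K_example, 0)                  # raw payoff on the grid
f_hat = f_i / f_i.max()                                 # divide by f(x_max) = high - K_example, now in [0, 1]

print(np.round(f_hat, 3))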

πŸ“ Exercise

Now we want you to:

  1. Set a desired strike price \(K\)

  2. Implement the payoff function \(f(X)\) classically

Some help:

  • Set the strike price based on your belief on how the underlying assets value will evolve until the time to maturity.

  • The np.maximum method can be used to get the largest of two values.

K = ...  # define a strike price

xx = uncertainty_model.values
yy = ...  # implement the payoff function

plt.figure()
plt.plot(xx, yy, linewidth=2, color="red", marker=".", ms=8)
plt.title(r"Payoff function $\phi(S_T) = \max(S_T - K, 0)$", size=20)
plt.xlabel(r"Spot price at maturity $S_T$ (€)", size=15)
plt.ylabel(r"Payoff amount (€)", size=15)
plt.xticks(np.arange(min(xx) - 0.05, max(xx) + 0.05, (max(xx) - min(xx)) / 10), size=12, rotation=70)
plt.yticks(size=12)
plt.grid()
plt.show()

Now we want to implement this function on a quantum circuit. The LinearAmplitudeFunction in the qiskit circuit library uses PiecewiseLinearPauliRotations to implement piecewise linear functions such as our payoff function.

πŸ“ Exercise

  1. Set the slopes of the piecewise payoff function.

  2. Set the minimum and maximum value that the payoff function can have.

Some help:

  • Look at the above plot of the payoff function and try and figure out which two lines together make up the payoff function.

    • Hint: At the strike price \(K\) we start making money, before that, we profit nothing (payoff=0).

  • What is the minimum amount of money that we can make? Think logically 😊

  • The maximum amount of money we can make depends on the highest value in our equidistant range.

from qiskit.circuit.library import LinearAmplitudeFunction

# approximation scaling for the payoff function, determines accuracy in the Taylor approximation.
# https://www.nature.com/articles/s41534-019-0130-6
c_approx = 0.25

# piecewise linear with breakpoints on x_min and when S_T >= K.
breakpoints = (low, K)
slopes = ...  # set the slopes of the piecewise linear payoff function
offsets = (0, 0)
f_min = ...  # set the minimum value that the payoff function can have
f_max = ...  # set the maximum value that the payoff function can have
european_call_objective = LinearAmplitudeFunction(
    num_state_qubits=n_uncertainty_qubits,
    slope=slopes,
    offset=offsets,
    domain=(low, high),
    image=(f_min, f_max),
    breakpoints=breakpoints,
    rescaling_factor=c_approx,
)

decomposed_european_call_objective = european_call_objective.decompose(reps=10)

print(f"Depth of the f(x) circuit: {decomposed_european_call_objective.depth()} ({n_uncertainty_qubits} qubits)")
display(decomposed_european_call_objective.draw("mpl"))

Wait, now we are working with more qubits than we originally specified for \(P(X)\)… ❓

For each qubit in our \(P(X)\) circuit, the payoff function circuit needs one ancilla qubit, plus one extra qubit which encodes the value that we are trying to estimate in its \(\ket{1}\) state.

πŸ“ Exercise

  1. For a circuit with \(n\) qubits for the \(P(X)\) circuit, how many total qubits would we need to construct the full circuit with both \(P(X)\) and the payoff function circuit?

    • Hint: change the number of qubits for \(P(X)\) and see how the number of qubits changes for the payoff function circuit.

Now we need to prepend the \(P(X)\) distribution circuit to the payoff function circuit:

from qiskit import AncillaRegister, ClassicalRegister, QuantumCircuit, QuantumRegister

n_payoff_qubits = european_call_objective.num_qubits
n_cl_bits = 1

qreg_s = QuantumRegister(n_uncertainty_qubits, "s")
qreg_target = QuantumRegister(1, "q")
qreg_a = AncillaRegister(n_payoff_qubits - n_uncertainty_qubits - 1, "a")
creg = ClassicalRegister(n_cl_bits, "creg")

european_call = QuantumCircuit(qreg_s, qreg_target, qreg_a)
european_call.append(uncertainty_model, range(n_uncertainty_qubits))
european_call.append(european_call_objective, range(n_payoff_qubits))

measured_european_call = european_call.copy()
measured_european_call.add_register(creg)
measured_european_call.measure(qreg_target, creg)

decomposed_european_call = european_call.decompose(reps=10)
decomposed_measured_european_call = measured_european_call.decompose(reps=10)

print(f"Depth of the P(X) + f operations circuit: {decomposed_measured_european_call.depth()} ({decomposed_measured_european_call.num_qubits} qubits)")

display(measured_european_call.draw("mpl"))
display(decomposed_measured_european_call.draw("mpl"))

Do you see any potential issues with the circuit depth?

What would happen to the depth if we had to apply the payoff function circuit \(f(x)\) multiple times?
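
As a quick sanity check of how repeated applications affect the depth, here is a minimal sketch that composes the decomposed circuit with itself and compares the depths (the exact numbers depend on your number of qubits):

# Composing the circuit with itself roughly doubles the depth, so applying the
# payoff circuit k times makes the depth grow roughly linearly in k.
repeated_twice = decomposed_european_call.compose(decomposed_european_call)

print(f"Depth when applied once:  {decomposed_european_call.depth()}")
print(f"Depth when applied twice: {repeated_twice.depth()}")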


2. Classical Monte Carlo on a QC

If we sample multiple times from the circuit we have defined above, we can get an estimate of the expected payoff. However, this is just classical Monte Carlo, performed using a quantum circuit.

There is no speed-up to be gained from doing this, but this step should help you see and understand the gains of doing actual Quantum Monte Carlo (also referred to as Quantum Amplitude Estimation), which you will get to later in this tutorial.

Estimate the expected payoff for the European call option

Now you will perform classical Monte Carlo using the quantum circuit that you created above. We are specifically interested in the probability of measuring the \(\ket{1}\) state in the target qubit

\[ \mathbb{P}(\text{measure } 1) = \mathbb{E}[f(X)] = \sum_{i=0}^{2^n-1}f(i)p_i \]

and, with probability \(1 - \delta\), the estimate \(\tilde{\mu}\) obtained from \(N\) samples satisfies

\[ |\mu - \tilde{\mu}| \leq \Phi^{-1}\Bigl( 1 - \frac{\delta}{2} \Bigr)\frac{\sqrt{\text{Var}(f(S_T))}}{\sqrt{N}} \sim \mathcal{O}\Bigl( \frac{1}{\sqrt{N}} \Bigr) \]

where \(\Phi^{-1}\) is the quantile function of the standard normal distribution.
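
To get a feeling for this \(\mathcal{O}(1/\sqrt{N})\) behaviour before running the circuit, here is a small purely classical sketch that estimates a known Bernoulli probability (standing in for the probability of measuring \(\ket{1}\)) with an increasing number of samples; the value p_true is just an illustrative choice:

import numpy as np

rng = np.random.default_rng(seed=1)
p_true = 0.3  # a known probability, standing in for P(measure 1)

for n_samples in (100, 10_000, 1_000_000):
    samples = rng.random(n_samples) < p_true
    p_est = samples.mean()
    print(f"N={n_samples:>9}: estimate={p_est:.4f}, abs. error={abs(p_est - p_true):.4f}")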

πŸ“ Exercise

  1. Determine how many samples you want to take.

    • Try both a small - and a large amount, and see how the estimation and its confidence bound changes.

  2. Rescale the estimated value to its original interval \([x_{\min}, x_{\max}]\).

    • Hint: the circuit used for implementing the objective function has a method called post_processing that can be used to map the scaled results to its original domain.

  3. Define a confidence bound on the estimated value.

    • Hint: assume a confidence bound on \([\mu - 2\sigma, \mu + 2\sigma]\) and use the same method as above to rescale to the correct interval.

from qiskit.primitives import StatevectorSampler as Sampler

sampler = Sampler()
n_shots = ...  # how many samples do you want to take for your MC?
result = sampler.run([measured_european_call, ], shots=n_shots).result()

quasi_dist_dict = result[0].data.creg.get_int_counts()
quasi_dist = np.array([quasi_dist_dict[0], quasi_dist_dict[1]])
dist = quasi_dist / quasi_dist.sum()

plt.figure()
plt.bar(["|0>", "|1>"], dist, width=0.2, alpha=0.8)
plt.yticks(size=12)
plt.xticks(size=12)
plt.title(r"Probability of measuring either $|0>$ or $|1>$", size=20)
plt.ylabel(r"Quasi-probability ($\%$)", size=15)
plt.grid()
plt.show()

p_hat = dist[1]
p_std = np.sqrt(p_hat * (1 - p_hat) / n_shots)

print(f"Raw estimated mu={p_hat:.5f}, std={p_std:.5f}")

mc_expectation = ...  # estimated value
mc_lower_conf = ...  # lower confidence bound
mc_upper_conf = ...  # upper confidence bound

print(f"Estimation of the expected payoff:\t\t{mc_expectation:.5f}")
exact_value = np.dot(uncertainty_model.probabilities, np.maximum(uncertainty_model.values - K, 0))
print(f"Exact expected value from discretization:\t{exact_value:.5f}")
print(f"Confidence bound on estimation:\t\t({mc_lower_conf:.5f}, {mc_upper_conf:.5f})")

3. Quantum Amplitude Estimation (QAE)

Remember that the standard Monte Carlo method for pricing consists in approximating the expected payoff \(\mu = \mathbb{E}[f(S_T)]\) as

\[ \mu \approx \tilde{\mu}_N = \frac{1}{N}\sum_{i=1}^Nf \Bigl( S_T^{(i)} \Bigr) \]

where \(S_T^{(i)},\, i=1,\dots,N\), are i.i.d. samples from the probability distribution of the underlying asset \(S\) at the expiration time \(T\). Note that, as a consequence of the central limit theorem, the estimation error of the classical Monte Carlo method satisfies

\[ |\mu - \tilde{\mu}| \sim \mathcal{O}\Bigl( \frac{1}{\sqrt{N}} \Bigr) \]

and the question is whether or not this estimation error rate can be improved upon.

Quantum Amplitude Estimation (QAE) is a quantum algorithm that provides an alternative to the classical Monte Carlo in order to compute approximate expectations of random variables.

Suppose we are able to construct a circuit that implements a state preparation operator \(\mathcal{U}\) acting on an \((n+1)\) qubit register as follows

\[ \mathcal{U}\ket{0}_{n+1} = \sqrt{1 - a}\ket{\psi_0}_n \ket{0}+ \sqrt{a}\ket{\psi_1}_n\ket{1} \]

where \(a\in[0, 1]\) is an unknown quantity associated with the value \(\mu\) which we want to estimate (e.g. an appropriate re-scaling of \(\mu\) to the interval \([0, 1]\)).

Then, QAE can be used to obtain an estimate of \(a\) through repeated controlled applications of the Grover operator

\[ \mathcal{Q} = \mathcal{U}S_0\mathcal{U}^{\dagger}S_{\Psi_0} \]

together with an inverse Quantum Fourier Transform (QFT), where \(S_0\) is the zero reflection and \(S_{\Psi_0}\) is the phase oracle in the Grover operator. For further details on the algorithm, please refer to Quantum Amplitude Amplification and Estimation. Brassard et al., 2000.

It can be shown that, with high probability \((8/\pi^2 \approx 81\%)\), the estimate \(\tilde{a}_M\) provided by QAE satisfies

\[ |a - \tilde{a}_M| \leq \frac{2\pi\sqrt{a(1-a)}}{M} + \frac{\pi^2}{M^2} \sim \mathcal{O}\Bigl( \frac{1}{M} \Bigr) \]

where \(M=2^m\) and \(m\) is the number of ancilla qubits used by the algorithm (which determines how many applications of the operator \(\mathcal{Q}\) are made). Comparing this to classical Monte Carlo, we can see that QAE provides a (theoretical) quadratic speedup.
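
To compare the two error bounds numerically, here is a small sketch that evaluates the QAE bound above for a few values of \(m\) and puts it next to the classical \(1/\sqrt{N}\) rate at \(N = M\) samples; taking \(a = 0.5\) (the worst case for the first term) and comparing at \(N = M\) are illustrative choices:

import numpy as np

a = 0.5  # sqrt(a * (1 - a)) is maximal here, i.e. the worst case for the bound

for m in (3, 5, 7, 9):
    M = 2 ** m
    qae_bound = 2 * np.pi * np.sqrt(a * (1 - a)) / M + np.pi ** 2 / M ** 2
    classical_rate = 1 / np.sqrt(M)  # classical Monte Carlo error scaling with N = M samples
    print(f"m={m}: M={M:>4}, QAE bound={qae_bound:.4f}, classical 1/sqrt(N)={classical_rate:.4f}")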


Quantum Amplitude Estimation can be applied to the circuit on which we just performed classical Monte Carlo, and doing so should theoretically improve our estimate of the fair price of the European call option.

So let's apply QAE and see if we get a better result than classical Monte Carlo! 🚀🚀

πŸ“ Exercise

  1. Pick \(m\), i.e., the number of repetitions of \(\mathcal{Q}\).

    • How is the depth of the circuit impacted by the choice of \(m\)?

# Define QAE circuit
from qiskit import QuantumCircuit, QuantumRegister, AncillaRegister, ClassicalRegister
from qiskit.circuit.library import QFT, GroverOperator


class AECircuit(QuantumCircuit):
    def __init__(self, state_preparation_circuit, num_ancilla_qubits, objective_qubit):
        self.state_preparation_circuit = state_preparation_circuit  # The circuit implementing the operator A
        self.num_ancilla_qubits = num_ancilla_qubits # Number of ancilla qubits (m in IQAE paper)
        self.num_state_qubits = state_preparation_circuit.num_qubits # Number of qubits in circuit A (n+1 in the IQAE paper)
        self.objective_qubit = objective_qubit  # Index of the objective qubit within circuit A (0 <= objective_qubit < num_state_qubits)
   
        # Initialize circuit 
        ancilla_register = AncillaRegister(self.num_ancilla_qubits, name="ancilla")
        state_register = QuantumRegister(self.num_state_qubits, name="state")
        classical_register = ClassicalRegister(self.num_ancilla_qubits, name="creg")
        super().__init__(
            ancilla_register,
            state_register,
            classical_register
        )
        
        # Hadamard gates on the ancilla qubits
        for j in range(self.num_ancilla_qubits):
            self.h(ancilla_register[j])
            
        # Circuit A on the state and objective qubits
        A_gate = self.state_preparation_circuit.to_gate(label="$A$")
        self.append(A_gate, state_register[:])
       
        self.barrier()
        
        # Powers of Q
        Q = self.groverOp(self.state_preparation_circuit, self.objective_qubit)
        for j in range(num_ancilla_qubits):
            Qj = Q.power(2**j).to_gate(label=f"$Q^{{{2**j}}}$").control(1)  # Controlled version of Q^(2^j)
            # Apply the controlled Q^(2^j) gate on the state and objective qubits, controlled by the j-th ancilla qubit
            self.append(Qj, [ancilla_register[self.num_ancilla_qubits - j - 1]] + state_register[:])
            
        self.barrier()
       
        # Inverse QFT
        QFT_gate = QFT(num_qubits=self.num_ancilla_qubits, inverse=True, do_swaps=False).to_gate(label=r"$QFT^\dagger$")
        self.compose(QFT_gate, ancilla_register, inplace=True)
        
        self.barrier()
        
        # Measure the ancilla qubits
        self.measure(ancilla_register, classical_register)
       
    def groverOp(self, state_preparation_circuit, objective_qubit):
        # construct the grover operator
        oracle = QuantumCircuit(max(state_preparation_circuit.num_qubits - state_preparation_circuit.num_ancillas,1))
        oracle.h(objective_qubit)
        oracle.x(objective_qubit)
        oracle.h(objective_qubit)
        Q = GroverOperator(oracle, state_preparation_circuit) 
        return Q


m = ...  # The number of Q repetitions (number of ancilla qubits)
M = 2 ** m
n_ancilla_qubits = m
qae = AECircuit(
    state_preparation_circuit=decomposed_european_call,
    num_ancilla_qubits=n_ancilla_qubits,
    objective_qubit=n_uncertainty_qubits,
)

decomposed_qae = qae.decompose(reps=10)

print(f"Depth of the full QAE circuit: {decomposed_qae.depth()} ({decomposed_qae.num_qubits} qubits) ({m} ancilla qubits)")
display(qae.draw("mpl"))

Recall from the lecture that quantum amplitude estimation uses a combination of phase estimation and amplitude amplification to obtain an estimate of \(\mu\). Specifically, because of the inverse QFT, what we measure from the quantum circuit is a number \(\tilde{q} \in \{0, 1, \dots, 2^m - 1\}\) that we then map to the actual estimate through

\[ \tilde{\mu} = \sin^2\Bigl( \frac{\pi \tilde{q}}{2^m} \Bigr). \]

Now you should run the QAE circuit that was constructed above. Note that in a world with perfect quantum computers, with no errors, we would only have to run this circuit once.

However, we do not (yet?) live in such a world, and thus we have to sample \(\tilde{q}\) multiple times.

πŸ“ Exercise

  1. Define how many times you want to sample/measure from the QAE circuit.

sampler = Sampler()
n_shots = ...  # how many samples do you want to take for the estimation?
result = sampler.run([qae], shots=n_shots).result()

πŸ“ Exercise

  1. Calculate the angles based on the measured values \(\tilde{q}\):

    • Hint: the formula for the angle is \(\frac{\pi \tilde{q}}{2^m}\)

  2. Calculate the estimates based on the angles:

    • Hint: the formula for the estimate is \(\tilde{\mu} = \sin^2\Bigl( \frac{\pi \tilde{q}}{2^m} \Bigr)\)

    • Optional: round it to a suitable number of decimals (e.g. 15)

  3. Determine which estimate has the highest probability and then post-process it.

    • Hint: you can use the np.argmax method to get the index with the largest value.

#
# QAE Post-processing
#

counts = result[0].data.creg.get_int_counts()
qq = np.array(list(counts.keys()))

theta_estimates = ...  # Calculate the angles
estimates = ...  # Calculate the estimates

probabilities = np.array(list(counts.values()), dtype="float64")
probabilities = probabilities / probabilities.sum()

# Combine any estimate or probability duplicates (because sin is periodic)
unique_estimates = list(set(estimates))
unique_probabilities = [np.sum(probabilities[estimates==e]) for e in unique_estimates]
qae_estimates = np.array(unique_estimates)
qae_probabilities = np.array(unique_probabilities)

qae_p_mode = ...  # Extract the estimate with the highest probability
qae_expectation = european_call_objective.post_processing(qae_p_mode)

print("                   Exact            MC             QAE")
print("="*70)
print(f"Expectation:      {exact_value:.5f}        {mc_expectation:.5f}         {qae_expectation:.5f}")
print(f"Diff  (abs):        N/A          {np.abs(exact_value - mc_expectation):.5f}         {np.abs(exact_value - qae_expectation):.5f}")
print(f"Diff    (%):        N/A          {np.abs(exact_value - mc_expectation)/mc_expectation:.5f}         {np.abs(exact_value - qae_expectation)/qae_expectation:.5f}")

What if we wanted to run this QMC circuit on a specific backend?

If you for example wanted to run this circuit on the Helmi QC then you would have to transpile the circuit to be compatible with the native gate set of the target backend.

Qiskit has a method for (attempting to) translate (transpile) your circuit and its gates to the target gate set. This method is called transpile and is composed of 6 different stages (each of which can be read about in detail via the provided link).

The 5th stage in the transpilation process is called optimization and can be controlled by the user by specifying the desired optimization level. The optimization stage is necessary in order to manage increased circuit depth, since the transpilation process can add a lot of gates both to map the source gates to the target gate set and to match the target hardware's qubit topology. We have discussed circuit depth quite a bit in this tutorial, and one way to (potentially) cope with extremely deep circuits is through transpilation optimization.

The user can specify which optimization level they want to run with:

  0. no optimization

  1. light optimization

  2. heavy optimization

  3. even heavier optimization

and the default level is 2. The higher the optimization level, the more optimized the circuit, at the expense of longer transpilation time. It is important to note that the transpilation process is stochastic and will thus generate circuits with e.g. varying depth and gate layout even for the same source circuit.
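
Before transpiling the (very deep) QAE circuit, you can get a feeling for the optimization levels on a small toy circuit; this is only an illustrative sketch, and the chosen basis gates are an assumption for demonstration purposes, not the native gate set of any particular backend:

from qiskit import transpile
from qiskit.circuit.library import QFT

# A small toy circuit: a 4-qubit QFT
toy_circuit = QFT(num_qubits=4).decompose()

for level in (0, 1, 2, 3):
    transpiled_toy = transpile(toy_circuit, basis_gates=["rz", "sx", "x", "cx"], optimization_level=level)
    print(f"optimization_level={level}: depth={transpiled_toy.depth()}, gate count={sum(transpiled_toy.count_ops().values())}")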

πŸ“ Exercise

  1. Decide on a set of native gates that you want to have as target for the transpilation process.

    • If you have the time, you could try and find the specifications of a real QPU and its native gate set and use that. Here is a link for the IBM QPUs.

  2. Set your desired optimization level and see how the depth of the circuit changes as you change the optimization level.

    • How many gates can you remove from the original (extremely deep) QAE circuit by optimizing it?

  3. Can you find a gate set that you can’t transpile the circuit to? Why do you think that is?

from qiskit import transpile
from typing import List

backend_native_gates: List[str] = ...  # the native gates of the target backend
optimization_level: int = ...  # your desired optimization level [0, 3]

transpiled_circuit = transpile(
    circuits=qae,
    basis_gates=backend_native_gates,
    optimization_level=optimization_level,
)

print(transpiled_circuit.depth())

# You can most likely not display this circuit since the transpilation process
# decomposes any grouped gates, meaning, it will try and draw ~100_000 gates...
# display(transpiled_circuit.draw("mpl"))

That’s all we had prepared for you today, well done! 🎉💫🌟

Hopefully you got a better understanding of how quantum amplitude estimation can be used in finance for pricing options, and also had time to reflect on the suitability of these algorithms in the NISQ era.


Code solutions

If you are stuck on a problem or you want to compare your code to our code, look here. 😎

#
# Log-normal distribution (classically)
#

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

n_uncertainty_qubits = 3
n_qubit_states = 2 ** n_uncertainty_qubits

S = 2  # initial spot price
volatility = 0.4  # implied volatility
r = 0.05  # risk-free market rate
T = 40 / 365  # time to maturity

mu = (r - 0.5 * volatility ** 2) * T + np.log(S)
sigma = volatility * np.sqrt(T)

# Some properties of the log-normal distribution
# https://en.wikipedia.org/wiki/Log-normal_distribution
mean = np.exp(mu + 0.5 * sigma ** 2)
variance = (np.exp(sigma ** 2) - 1) * np.exp(2 * mu + sigma ** 2)
std_dev = np.sqrt(variance)

low = np.maximum(mean - 3 * std_dev, 0)
high = mean + 3 * std_dev

# Scipy log-normal distribution
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html
lognorm_scale = S * np.exp(r * T - 0.5 * sigma ** 2)
xx = np.linspace(low, high, num=n_qubit_states)
yy = stats.lognorm.pdf(xx, sigma, scale=lognorm_scale) / sigma
yy = yy / yy.sum()

plt.figure()
plt.bar(xx, yy, width=0.2, alpha=0.5)
plt.plot(xx, yy, linewidth=2, color="blue")
plt.xticks(np.arange(min(xx) - 0.05, max(xx) + 0.05, (max(xx) - min(xx)) / 10), size=12, rotation=70)
plt.yticks(size=12)
plt.title(r"$ln \frac{S_T}{S_t} \sim N((r - \frac{1}{2}\sigma^2)T, \sigma\sqrt{T})$", size=20)
plt.xlabel(r"Spot price at maturity $S_T$ (€)", size=15)
plt.ylabel(r"Probability ($\%$)", size=15)
plt.grid()
plt.show()

#
# Encoding the payoff function (classically)
#

K = 1.713  # this can be whatever..

xx = uncertainty_model.values
yy = np.maximum(xx - K, 0)

plt.figure()
plt.plot(xx, yy, linewidth=2, color="red", marker=".", ms=8)
plt.title(r"Payoff function $\phi(S_T) = \max(S_T - K, 0)$", size=20)
plt.xlabel(r"Spot price at maturity $S_T$ (€)", size=15)
plt.ylabel(r"Payoff amount (€)", size=15)
plt.xticks(np.arange(min(xx) - 0.05, max(xx) + 0.05, (max(xx) - min(xx)) / 10), size=12, rotation=70)
plt.yticks(size=12)
plt.grid()
plt.show()

#
# Encoding the payoff function (QC)
#

from qiskit.circuit.library import LinearAmplitudeFunction

# approximation scaling for the payoff function, determines accuracy in the Taylor approximation.
# https://www.nature.com/articles/s41534-019-0130-6
c_approx = 0.25

breakpoints = (low, K)
slopes = (0, 1)
offsets = (0, 0)
f_min = 0
f_max = high - K
european_call_objective = LinearAmplitudeFunction(
    num_state_qubits=n_uncertainty_qubits,
    slope=slopes,
    offset=offsets,
    domain=(low, high),
    image=(f_min, f_max),
    breakpoints=breakpoints,
    rescaling_factor=c_approx,
)

decomposed_european_call_objective = european_call_objective.decompose(reps=10)

print(f"Depth of the f(x) circuit: {decomposed_european_call_objective.depth()} ({n_uncertainty_qubits} qubits)")
display(decomposed_european_call_objective.draw("mpl"))

#
# Performing classical MC on a QC
#

from qiskit.primitives import StatevectorSampler as Sampler

sampler = Sampler()
n_shots = 100_000
result = sampler.run([measured_european_call, ], shots=n_shots).result()

quasi_dist_dict = result[0].data.creg.get_int_counts()
quasi_dist = np.array([quasi_dist_dict[0], quasi_dist_dict[1]])
dist = quasi_dist / quasi_dist.sum()

plt.figure()
plt.bar(["|0>", "|1>"], dist, width=0.2, alpha=0.8)
plt.yticks(size=12)
plt.xticks(size=12)
plt.title(r"Probability of measuring either $|0>$ or $|1>$", size=20)
plt.ylabel(r"Quasi-probability ($\%$)", size=15)
plt.grid()
plt.show()

p_hat = dist[1]
p_std = np.sqrt(p_hat * (1 - p_hat) / n_shots)

print(f"Raw estimated mu={p_hat:.5f}, std={p_std:.5f}")

expectation = european_call_objective.post_processing(p_hat)
lower_conf = european_call_objective.post_processing(p_hat - 2 * p_std)
upper_conf = european_call_objective.post_processing(p_hat + 2 * p_std)

print(f"Estimation of the expected payoff:\t\t{expectation:.5f}")
exact_value = np.dot(uncertainty_model.probabilities, np.maximum(uncertainty_model.values - K, 0))
print(f"Exact expected value from discretization:\t{exact_value:.5f}")
print(f"Confidence bound on estimation:\t\t({lower_conf:.5f}, {upper_conf:.5f})")

#
# QAE Post-processing
#

counts = result[0].data.creg.get_int_counts()
qq = np.array(list(counts.keys()))

theta_estimates = qq * np.pi / M
estimates = np.round(np.sin(theta_estimates) ** 2, 15)

probabilities = np.array(list(counts.values()), dtype="float64")
probabilities = probabilities / probabilities.sum()

# Combine any estimate or probability duplicates (because sin is periodic)
unique_estimates = list(set(estimates))
unique_probabilities = [np.sum(probabilities[estimates==e]) for e in unique_estimates]
qae_estimates = np.array(unique_estimates)
qae_probabilities = np.array(unique_probabilities)

qae_p_mode = qae_estimates[np.argmax(qae_probabilities)]
qae_expectation = european_call_objective.post_processing(qae_p_mode)

print("                   Exact            MC             QAE")
print("="*70)
print(f"Expectation:      {exact_value:.5f}        {mc_expectation:.5f}         {qae_expectation:.5f}")
print(f"Diff  (abs):        N/A          {np.abs(exact_value - mc_expectation):.5f}         {np.abs(exact_value - qae_expectation):.5f}")
print(f"Diff    (%):        N/A          {np.abs(exact_value - mc_expectation)/mc_expectation:.5f}         {np.abs(exact_value - qae_expectation)/qae_expectation:.5f}")

#
# Transpiling and optimizing quantum circuits using qiskit
#

from qiskit import transpile
from typing import List

backend_native_gates: List[str] = ["u", "x", "ry", "cx"]
optimization_level: int = 3

transpiled_circuit = transpile(
    circuits=qae,
    basis_gates=backend_native_gates,
    optimization_level=optimization_level,
)

print(transpiled_circuit.depth())

# You can most likely not display this circuit since the transpilation process
# decomposes any grouped gates, meaning, it will try and draw ~100_000 gates...
# display(transpiled_circuit.draw("mpl"))

References