Tutorial - MAP Decoding Exercise

The goal of maximum a posteriori (MAP) decoding is to find the most probable error vector \(e\) given the syndrome \(s\). We optimize:

\(\hat{e} = \arg\max_{e} P(e \mid s)\)

subject to the constraint imposed by the parity-check matrix \(H\):

\(H e \equiv s \pmod{2}\)

Throughout the exercise we will assume that errors are i.i.d. with flip probability \(p<1/2\). Under this model, \(P(e) = p^{w(e)}(1-p)^{n-w(e)}\), where \(w(e)\) is the Hamming weight of \(e\); since \(p<1/2\), this probability decreases with weight, so MAP decoding reduces to finding the minimum-weight error vector consistent with the syndrome.
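
To see this concretely, here is a minimal sketch (assuming an illustrative flip probability p = 0.1, a value not fixed by the tutorial) that evaluates \(P(e)\) for each possible weight and confirms it decreases monotonically:

# Illustrative sketch: P(e) as a function of Hamming weight for i.i.d.
# bit flips; p = 0.1 is a hypothetical choice, any p < 1/2 behaves the same
p, n = 0.1, 7
for w in range(n + 1):
    prob = p**w * (1 - p) ** (n - w)  # probability of any fixed e of weight w
    print(f"weight {w}: P(e) = {prob:.6f}")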

1. MAP Decoding for the [7,4] Hamming Code

import numpy as np
from itertools import product

# Function to compute the syndrome for a given error vector
def compute_syndrome(H, e):
    return np.mod(H @ e, 2)

# Example of a parity-check matrix for a [7,4] Hamming code
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

# Define an example error vector e: a single bit flip in position 0
e = np.array([1, 0, 0, 0, 0, 0, 0])

s = compute_syndrome(H, e)
print("\nReceived syndrome, s =", s)
Received syndrome, s = [1 1 0]
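
As a side note (this check is not part of the original code): the syndrome is just the mod-2 sum of the columns of H selected by the nonzero positions of e. Since e here flips only bit 0, s equals the first column of H:

# Illustrative check: the syndrome equals the mod-2 sum of the columns
# of H at the positions where e is nonzero (here, just column 0)
cols = np.mod(H[:, e == 1].sum(axis=1), 2)
print(cols, np.array_equal(cols, s))  # -> [1 1 0] True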

2. Exhaustive MAP Decoding

Now let’s assume we received only the syndrome s and want to recover the most likely error vector e.

Let’s simulate the decoding process by searching through all possible error vectors:

  • Generate all possible error vectors e.

  • Check which satisfy H @ e = s (mod 2).

  • Among valid solutions, select the one with the smallest Hamming weight.

# Generate all possible error vectors for n = 7
n = H.shape[1]
error_vectors = [np.array(e) for e in product([0, 1], repeat=n)]

# Find all error vectors that satisfy H @ e = s (mod 2)

print("Valid error vectors, with weight:")
valid_errors = []
for e in error_vectors:
    if np.array_equal(compute_syndrome(H, e), s):
        valid_errors.append(e)
        print(e, np.sum(e))

# Select the error vector with the smallest Hamming weight
best_error = min(valid_errors, key=lambda e: np.sum(e))

print("\nBest error vector (minimum weight):", best_error, np.sum(best_error))
print("We needed to search", len(error_vectors), "error vectors.")
Valid error vectors, with weight:
[0 0 0 0 1 1 0] 2
[0 0 0 1 0 0 1] 2
[0 0 1 0 1 0 1] 3
[0 0 1 1 0 1 0] 3
[0 1 0 0 0 1 1] 3
[0 1 0 1 1 0 0] 3
[0 1 1 0 0 0 0] 2
[0 1 1 1 1 1 1] 6
[1 0 0 0 0 0 0] 1
[1 0 0 1 1 1 1] 5
[1 0 1 0 0 1 1] 4
[1 0 1 1 1 0 0] 4
[1 1 0 0 1 0 1] 4
[1 1 0 1 0 1 0] 4
[1 1 1 0 1 1 0] 5
[1 1 1 1 0 0 1] 5

Best error vector (minimum weight): [1 0 0 0 0 0 0] 1
We needed to search 128 error vectors.
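
To tie this back to the MAP objective: as a quick sanity check (again assuming a hypothetical p = 0.1), the minimum-weight candidate is also the one that maximizes \(P(e)\) among the 16 valid error vectors:

# Illustrative check: among the valid candidates, the minimum-weight
# vector maximizes P(e) = p^w (1-p)^(n-w); p = 0.1 is a hypothetical choice
p = 0.1
most_probable = max(valid_errors,
                    key=lambda v: p**np.sum(v) * (1 - p) ** (n - np.sum(v)))
print(most_probable, np.array_equal(most_probable, best_error))  # -> True

Note also that the search space grows as \(2^n\), so exhaustive MAP decoding quickly becomes infeasible for longer codes; this motivates the heuristic in the next section.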

3. Heuristic Approach: Greedy Decoding

Instead of checking all error vectors, we can approximate the solution using a greedy algorithm:

  • Start with an initial guess (e.g., all zeros).

  • Iteratively flip bits to reduce the mismatch between the current syndrome and the target syndrome s.

This is not guaranteed to find the optimal solution but can be faster in practice.

Observe that the number of steps varies from run to run, and the returned solution is not guaranteed to have minimum weight.

# Greedy decoding algorithm
def greedy_decoder(H, s, max_iter=10):
    steps = 0
    n = H.shape[1]
    e = np.zeros(n, dtype=int)  # Start with no errors
    for _ in range(max_iter):
        steps += 1
        syndrome = compute_syndrome(H, e)
        if np.array_equal(syndrome, s):
            break  # Decoded successfully
        # Pick an unsatisfied parity check at random, then flip a random
        # bit that participates in that check
        check = np.random.choice(np.where(syndrome != s)[0])
        flip_index = np.random.choice(np.where(H[check] == 1)[0])
        e[flip_index] ^= 1  # Flip the bit
    return e, steps

# Run the greedy decoder
decoded_error, steps = greedy_decoder(H, s)
print("\nDecoded error vector using heuristic:", decoded_error)
print("Number of steps required:", steps)
Decoded error vector using heuristic: [1 0 0 0 0 0 0]
Number of steps required: 2
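
To observe the variability mentioned above, a small illustrative experiment (the repetition count and max_iter=50 are arbitrary choices) reruns the randomized decoder and tallies the weights of the solutions it finds; the exact counts will differ from run to run:

# Illustrative experiment: rerun the randomized greedy decoder and tally
# the Hamming weights of the error vectors it converges to
from collections import Counter

weights = Counter()
for _ in range(1000):
    e_hat, _ = greedy_decoder(H, s, max_iter=50)
    if np.array_equal(compute_syndrome(H, e_hat), s):  # converged?
        weights[int(np.sum(e_hat))] += 1
print(weights)  # weights > 1 mark runs that did not find the MAP solution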