Monte Carlo View: Nominal vs Robust Grating#
The robust adjoint design trades a sliver of peak efficiency for higher fabrication yield. Building on the fabrication-aware optimizer from the previous notebook, we now quantify how much that robustness actually helps under process variation.
This notebook compares the nominal adjoint design against the robustness-optimized variant using a matched Monte Carlo experiment, highlighting the yield benefits of carrying fabrication awareness into the optimization loop.
[ ]:
import json
from pathlib import Path
import autograd.numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tidy3d as td
from setup import (
    center_wavelength,
    default_spacer_thickness,
    get_mode_monitor_power,
    make_simulation,
)
from tidy3d import web
[2]:
design_paths = {
    "nominal": Path("./results") / "gc_adjoint_best.json",
    "robust": Path("./results") / "gc_adjoint_robust_best.json",
}
[3]:
def load_nominal_parameters(path):
    """Load a design JSON (Bayes or adjoint) into numpy-friendly fields."""
    data = json.loads(Path(path).read_text(encoding="utf-8"))
    return {
        "widths_si": np.array(data["widths_si"], dtype=float),
        "gaps_si": np.array(data["gaps_si"], dtype=float),
        "widths_sin": np.array(data["widths_sin"], dtype=float),
        "gaps_sin": np.array(data["gaps_sin"], dtype=float),
        "first_gap_si": float(data["first_gap_si"]),
        "first_gap_sin": float(data["first_gap_sin"]),
        "spacer_thickness": default_spacer_thickness,
    }


def make_variation_builder(nominal):
    """Return a closure that maps process deltas to a tidy3d Simulation."""
    base_widths_si = np.array(nominal["widths_si"])
    base_gaps_si = np.array(nominal["gaps_si"])

    def builder(*, overlay_delta=0.0, spacer_delta=0.0, etch_bias=0.0):
        # Etch bias widens features when positive and narrows them when
        # negative, so widths grow with the bias while gaps shrink, mirroring
        # the fabrication effect of over/under etching.
        pert_widths_si = base_widths_si + etch_bias
        pert_gaps_si = base_gaps_si - etch_bias
        return make_simulation(
            pert_widths_si,
            pert_gaps_si,
            nominal["widths_sin"],
            nominal["gaps_sin"],
            first_gap_si=nominal["first_gap_si"] + overlay_delta,
            first_gap_sin=nominal["first_gap_sin"],
            spacer_thickness=nominal["spacer_thickness"] + spacer_delta,
        )

    return builder
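The cell counter jumps from [3] to [9], so the cells that draw the shared perturbations, run the simulations, and define the linear_to_loss_db helper and design_results dictionary used below are not shown. The sketch below is a minimal reconstruction of what they could look like, not the notebook's original code: the seed, the use of web.Batch, the output paths, and the exact signature of get_mode_monitor_power are all assumptions, while the sample count and sigmas match the values quoted in the summary further down.
[ ]:
# Hypothetical reconstruction of the omitted sampling/evaluation cells.
from numpy.random import default_rng


def linear_to_loss_db(power):
    """Convert linear transmission to insertion loss in dB (positive = loss)."""
    return -10.0 * np.log10(np.array(power, dtype=float))


# One shared set of process draws (units: um), reused for both designs so the
# comparison is matched sample-by-sample. Seed chosen arbitrarily here.
rng = default_rng(seed=0)
n_samples = 100
draws = [
    {
        "overlay_delta": rng.normal(0.0, 0.025),  # sigma_overlay = 25 nm
        "spacer_delta": rng.normal(0.0, 0.020),   # sigma_spacer = 20 nm
        "etch_bias": rng.normal(0.0, 0.010),      # sigma_width = 10 nm
    }
    for _ in range(n_samples)
]

design_results = {}
for label, path in design_paths.items():
    builder = make_variation_builder(load_nominal_parameters(path))
    sims = {f"{label}_{i}": builder(**draw) for i, draw in enumerate(draws)}
    sims[f"{label}_nominal"] = builder()  # unperturbed reference
    batch_data = web.Batch(simulations=sims).run(path_dir=f"data/{label}")
    # get_mode_monitor_power is assumed to map a SimulationData object to the
    # linear transmitted power at the center wavelength.
    design_results[label] = {
        "samples": np.array(
            [get_mode_monitor_power(batch_data[f"{label}_{i}"]) for i in range(n_samples)]
        ),
        "nominal": get_mode_monitor_power(batch_data[f"{label}_nominal"]),
    }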
Distribution of Center-Wavelength Loss#
Both designs now face identical process draws. The plot below overlays the center-wavelength loss distributions in dB. Dashed vertical lines mark the nominal (unperturbed) efficiency for each design.
[9]:
fig, ax = plt.subplots(figsize=(6, 4))
bins = "auto"
colors = {
    "nominal": "tab:blue",
    "robust": "tab:green",
}
for label, result in design_results.items():
    losses_db = linear_to_loss_db(result["samples"])
    ax.hist(
        losses_db,
        bins=bins,
        alpha=0.6,
        label=f"{label.capitalize()} design",
        color=colors.get(label, None),
        edgecolor="white",
    )
    nominal_loss = linear_to_loss_db([result["nominal"]])[0]
    ax.axvline(
        nominal_loss,
        color=colors.get(label, None),
        linestyle="--",
        linewidth=2,
    )
ax.set_xlabel("Center wavelength loss (dB)")
ax.set_ylabel("Sample count")
ax.set_title("Monte Carlo comparison at shared perturbations")
ax.legend()
ax.grid(alpha=0.25)
plt.show()
What the numbers say#
Both designs were tested under identical Monte Carlo perturbations (N = 100, σ_overlay = 25 nm, σ_spacer = 20 nm, σ_width = 10 nm) and evaluated at the center wavelength.
Results (nominal → robust):
- Average loss: 2.56 dB → 2.51 dB (Δ = −0.05 dB). In linear scale that is 0.555 → 0.561, roughly +1.1 % mean transmission (see the quick check after this list).
- Variability: the linear standard deviation grows slightly (0.027 → 0.028, +3 %), so sample-to-sample fluctuation is essentially unchanged.
- Spread (10th–90th percentile, linear): 0.0707 → 0.0755 (+7 %), a slightly broader distribution.
- Tails: the 90th-percentile loss improves (2.86 → 2.82 dB, a better worst case), and the 10th-percentile loss drops as well (2.31 → 2.23 dB, a slightly better best case).
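As a quick sanity check of the dB-to-linear conversions quoted above (a standalone snippet, not part of the notebook's cells), using loss_dB = −10·log10(T):
[ ]:
# Verify the quoted mean-loss conversions and the relative transmission gain.
mean_loss_db = {"nominal": 2.56, "robust": 2.51}
mean_linear = {k: 10 ** (-v / 10) for k, v in mean_loss_db.items()}
print(mean_linear)  # ~{'nominal': 0.555, 'robust': 0.561}
print(mean_linear["robust"] / mean_linear["nominal"] - 1)  # ~0.011, i.e. +1.1 %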
In short: the robust design keeps a comparable, marginally wider spread but shifts the entire distribution toward lower loss. It delivers a modest boost in average transmission and improved worst-case performance, at the cost of a sliver of unperturbed peak efficiency and a slightly broader spread: a balanced, realistic outcome consistent with fabrication-aware optimization.
How large the gains are in practice depends on several factors:
- How and when robustness was introduced into the optimization (for example, from the start or as a final fine-tuning).
- The starting point, optimizer settings, and number of iterations used.
- The perturbation model and its assumed standard deviations or correlations.
- The type of device. Grating couplers are quite resonant and inherently sensitive to fabrication noise, so they tend to show smaller relative gains.