Virtual Metrology & Process Control

ML-Enhanced Run-to-Run Control

Augmenting traditional R2R control with machine learning for tighter process windows

Run-to-Run Control Fundamentals

Run-to-Run (R2R) control adjusts equipment recipe parameters between runs (wafer-to-wafer or lot-to-lot) to compensate for drift and keep metrology outputs on target. It is the most common form of advanced process control (APC) in semiconductor fabs.

The Classic R2R Framework

Traditional R2R uses a simple linear model:

  • Process model: y = a·x + b, where y is the metrology output and x is the controllable recipe parameter (e.g., etch time).
  • EWMA filter: Exponentially weighted moving average smooths noisy metrology signals to estimate the current process state.
  • Controller: Calculates the recipe adjustment needed to bring the next run's output back to target.

Analogy: The Thermostat

R2R control is like a thermostat for your etch process. The "temperature" is your etch depth, the "setpoint" is your target depth, and the "heater knob" is etch time. If the last wafer came out too deep, you reduce etch time for the next one. The EWMA filter prevents you from overreacting to a single noisy measurement.

import numpy as np

class TraditionalR2RController:
    """Classic EWMA-based Run-to-Run controller."""

    def __init__(self, target, gain, ewma_lambda=0.3):
        self.target = target          # Target metrology value
        self.gain = gain              # Process model: dy/dx
        self.ewma_lambda = ewma_lambda
        self.ewma_state = target      # Initialize at target
        self.recipe_offset = 0.0      # Current recipe adjustment

    def update(self, metrology_value):
        """Update controller with new metrology measurement."""
        # EWMA filter
        self.ewma_state = (
            self.ewma_lambda * metrology_value +
            (1 - self.ewma_lambda) * self.ewma_state
        )

        # Calculate error
        error = self.target - self.ewma_state

        # Calculate recipe adjustment (dead-beat control)
        self.recipe_offset = error / self.gain

        return self.recipe_offset

# Example: controlling etch depth via etch time
controller = TraditionalR2RController(
    target=50.0,   # Target: 50 nm etch depth
    gain=1.1,      # 1.1 nm depth per second of etch time
    ewma_lambda=0.3
)

# Simulate
measurements = [50.5, 50.8, 51.0, 50.3, 49.8]
for m in measurements:
    adj = controller.update(m)
    print(f"Measured: {m:.1f} nm → Recipe adj: {adj:+.2f} sec")

ML-Augmented R2R Control

Traditional R2R has limitations: it assumes a linear process model, handles only a single input and a single output, and has no feed-forward capability. ML enhances R2R in several ways:

1. Nonlinear Process Models

Replace the linear y = a·x + b with a neural network or gradient boosting model that captures complex, nonlinear relationships between multiple recipe knobs and metrology outputs.
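As a sketch of what replacing the linear model looks like, the snippet below fits a gradient-boosted process model on synthetic data. The response function, knob ranges, and sample counts are all made up for illustration; a real model would be trained on historical recipe + metrology records.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training data: each row is (etch_time, rf_power,
# incoming_thickness); the target is measured etch depth.
rng = np.random.default_rng(0)
X = rng.uniform([40, 200, 90], [60, 300, 110], size=(500, 3))

# Synthetic nonlinear response: depth saturates with RF power,
# and thicker incoming films etch slightly shallower.
y = (1.1 * X[:, 0] * (1 - np.exp(-X[:, 1] / 150))
     - 0.2 * (X[:, 2] - 100)
     + rng.normal(0, 0.3, 500))

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X, y)

# The fitted model now plays the role of y = a·x + b in the controller
print(model.predict([[50.0, 250.0, 100.0]])[0])
```

Tree ensembles like this capture saturation and interaction effects that a single linear gain cannot, at the cost of needing enough historical data to cover the recipe space.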

2. Feed-Forward Compensation

Use VM predictions of incoming wafer state (from upstream processes) to proactively adjust the recipe before processing — rather than waiting for post-process metrology to react.

3. Multi-Input, Multi-Output (MIMO) Control

ML models handle the coupling between multiple recipe parameters and multiple metrology targets that traditional R2R cannot.
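A minimal MIMO sketch, using scikit-learn's MultiOutputRegressor to wrap a boosted model. The two knobs, two outputs, and coupling coefficients below are hypothetical, chosen only to show that one model can capture cross-coupling between parameters and targets.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(1)
# Hypothetical inputs: etch_time, rf_power
X = rng.uniform([40, 200], [60, 300], size=(400, 2))

# Hypothetical coupled outputs: etch_depth (nm), sidewall_angle (deg).
# Both outputs depend on both knobs — the coupling traditional SISO
# R2R cannot represent.
Y = np.column_stack([
    1.1 * X[:, 0] + 0.01 * X[:, 1],
    88.0 + 0.02 * X[:, 1] - 0.05 * X[:, 0],
]) + rng.normal(0, 0.1, size=(400, 2))

mimo_model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=100))
mimo_model.fit(X, Y)

pred = mimo_model.predict([[50.0, 250.0]])[0]
# pred[0] is the depth prediction, pred[1] the angle prediction
```

The controller can then search the recipe space for a knob setting that brings both outputs to target simultaneously, instead of tuning one knob per output.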

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

class MLAugmentedR2RController:
    """R2R controller using ML process model and feed-forward."""

    def __init__(self, process_model, target_values, recipe_bounds):
        self.process_model = process_model  # Trained ML model
        self.targets = target_values        # Dict of target metrology values
        self.bounds = recipe_bounds         # Min/max for each recipe param
        # Learned bias corrections, one per recipe parameter
        # (must be pre-populated, or update_feedback is a no-op)
        self.feedback_bias = {param: 0.0 for param in recipe_bounds}

    def compute_feedforward(self, upstream_vm_predictions):
        """
        Use upstream VM to proactively adjust recipe.
        e.g., if incoming film is thicker than nominal,
        increase etch time to compensate.
        """
        # The ML model predicts: metrology = f(recipe, incoming_state)
        # We invert this to find: recipe = f^-1(target, incoming_state)
        # Using simple grid search over recipe space
        best_recipe = None
        best_error = float('inf')

        recipe_grid = self._generate_recipe_grid()
        for recipe_candidate in recipe_grid:
            # Feature ordering must match the order used at training time
            features = {**recipe_candidate, **upstream_vm_predictions}
            predicted_metrology = self.process_model.predict(
                [list(features.values())]
            )[0]

            error = abs(predicted_metrology - self.targets['etch_depth'])
            if error < best_error:
                best_error = error
                best_recipe = recipe_candidate

        # Apply feedback bias correction
        for param, bias in self.feedback_bias.items():
            if param in best_recipe:
                best_recipe[param] += bias

        return best_recipe

    def update_feedback(self, actual_metrology):
        """Update feedback bias from post-process measurement."""
        error = self.targets['etch_depth'] - actual_metrology
        # EWMA update of bias
        for param in self.feedback_bias:
            self.feedback_bias[param] = (
                0.3 * error / self._get_gain(param) +
                0.7 * self.feedback_bias[param]
            )

    def _generate_recipe_grid(self):
        # Simplified: in production, use Bayesian optimization
        return [{'etch_time': t, 'rf_power': p}
                for t in np.linspace(40, 60, 20)
                for p in np.linspace(200, 300, 20)]

    def _get_gain(self, param):
        return 1.0  # Simplified

Key Concept: Feed-Forward + Feedback

The most powerful R2R systems combine feed-forward (proactive adjustment based on incoming wafer state from VM) with feedback (reactive correction based on post-process metrology). Feed-forward handles known incoming variation; feedback handles unknown disturbances and model errors.
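The split can be written as a single illustrative control law. The function name, gains, and nominal thickness below are hypothetical numbers for illustration, not part of any production controller:

```python
def next_recipe(target, incoming_thickness, feedback_bias,
                gain=1.1, nominal_thickness=100.0, ff_gain=0.9):
    """Combine feed-forward and feedback terms (illustrative only).

    feed-forward: compensates known incoming variation reported by VM
    feedback:     an EWMA-learned bias absorbing model error and drift
    """
    base_time = target / gain  # nominal inverse process model
    ff_term = ff_gain * (incoming_thickness - nominal_thickness) / gain
    return base_time + ff_term + feedback_bias

# Incoming film 4 nm thicker than nominal → etch slightly longer,
# plus a small learned bias from recent post-process metrology
t = next_recipe(target=50.0, incoming_thickness=104.0, feedback_bias=0.2)
```

The feed-forward term acts before the wafer is processed; the feedback bias only changes after real metrology arrives, so the two terms never fight over the same disturbance.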

Challenges and Best Practices

Challenge: Controller Stability

ML models can produce unstable control actions if the model is wrong in extrapolation regions. Traditional R2R controllers are inherently stable (EWMA is a low-pass filter). ML controllers need guardrails:

  • Clamp recipe adjustments: Never allow adjustments larger than a predefined maximum per run.
  • Gradual transition: Blend ML recommendations with traditional R2R output using a trust factor that increases as the ML model proves itself.
  • Fallback logic: If the ML model's confidence drops below threshold, revert to traditional R2R.

Challenge: Delays and Missing Data

Metrology results arrive 2–8 hours after processing. During that time, 50+ wafers may have already been processed. The controller must handle this "dead time" gracefully — typically by estimating the current state from the most recent VM predictions anchored to the latest available (delayed) metrology.
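One way to bridge the dead time, sketched with hypothetical names and weights: anchor the state estimate on the last real measurement and pull it toward the VM trend for the runs processed since.

```python
def estimate_current_state(last_metrology, vm_since_metrology, vm_weight=0.6):
    """Estimate the current process state during metrology dead time.

    last_metrology:      latest actual measurement (hours old)
    vm_since_metrology:  VM predictions for runs processed since then
    vm_weight:           trust placed in VM relative to stale metrology
    """
    if not vm_since_metrology:
        return last_metrology
    # Anchor on real metrology, drift toward the VM trend
    vm_trend = sum(vm_since_metrology) / len(vm_since_metrology)
    return (1 - vm_weight) * last_metrology + vm_weight * vm_trend

# Metrology said 50.4 nm hours ago; VM sees the process drifting up
state = estimate_current_state(50.4, [50.6, 50.9, 51.1])
```

The `vm_weight` knob plays the same role as the EWMA lambda: it trades responsiveness to the VM signal against robustness to VM model error.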

Did You Know?

The combination of VM + ML-R2R can reduce process variation by 30–50% compared to traditional R2R alone, with a corresponding improvement in Cpk. In advanced logic fabs, this translates directly to tighter CD distributions, fewer reworks, and higher die yield.
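Cpk makes that claim concrete. The snippet below, with made-up spec limits and sigmas, shows how a roughly 40% variation reduction moves the index:

```python
import numpy as np

def cpk(samples, lsl, usl):
    """Process capability index: distance from the mean to the
    nearest spec limit, in units of 3 sigma."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

rng = np.random.default_rng(42)
before = rng.normal(50.0, 0.60, 1000)  # traditional R2R
after = rng.normal(50.0, 0.35, 1000)   # VM + ML-R2R, ~40% less variation

print(cpk(before, lsl=48.0, usl=52.0))
print(cpk(after, lsl=48.0, usl=52.0))
```

Because sigma sits in the denominator, shrinking variation raises Cpk directly — here from roughly 1.1 to roughly 1.9 with the same mean and spec limits.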

# Safety-wrapped ML controller
class SafeMLController:
    """ML controller with safety bounds and fallback."""

    def __init__(self, ml_controller, traditional_controller,
                 max_adjustment, trust_factor=0.5):
        self.ml = ml_controller
        self.traditional = traditional_controller
        self.max_adj = max_adjustment
        self.trust = trust_factor  # 0 = pure traditional, 1 = pure ML

    def compute_adjustment(self, metrology, vm_predictions, ml_confidence):
        # Get both recommendations
        ml_adj = self.ml.compute_feedforward(vm_predictions)
        trad_adj = self.traditional.update(metrology)

        # Blend based on trust and confidence. Both controllers are
        # assumed to return recipe *adjustments* (deltas from the
        # nominal recipe), so blending and the clamp below operate
        # in the same units.
        effective_trust = self.trust * ml_confidence
        blended = {}
        for param, ml_val in ml_adj.items():
            # The traditional controller is SISO: it returns one scalar
            # adjustment, which contributes to every blended parameter.
            trad_val = trad_adj if isinstance(trad_adj, (int, float)) else 0.0
            blended[param] = effective_trust * ml_val + (1 - effective_trust) * trad_val

            # Safety clamp
            blended[param] = np.clip(
                blended[param], -self.max_adj, self.max_adj
            )

        return blended

Knowledge Check

Question 1 of 3

What is the role of the EWMA filter in traditional R2R control?