1 core.autodiff
Automatic differentiation for symbolic expressions.
Implements symbolic differentiation using the chain rule, producing gradient expressions that can be compiled for fast evaluation.
Supports native gradient rules for vector expressions (VectorSum, LinearCombination, DotProduct) with O(1) coefficient lookup, scaling to problems with 10,000+ variables.
1.1 Functions
| Name | Description |
|---|---|
| apply_gradient_rule | Apply the registered gradient rule for an expression type. |
| compile_hessian | Compile the Hessian for fast evaluation. |
| compile_jacobian | Compile the Jacobian for fast evaluation. |
| compute_hessian | Compute the Hessian matrix of an expression. |
| compute_jacobian | Compute the Jacobian matrix of expressions with respect to variables. |
| gradient | Compute the symbolic gradient of an expression with respect to a variable. |
| has_gradient_rule | Check if an expression type has a registered gradient rule. |
| register_gradient | Decorator to register a gradient rule for an expression type. |
1.1.1 apply_gradient_rule
core.autodiff.apply_gradient_rule(expr, wrt)
Apply the registered gradient rule for an expression type.
1.1.1.1 Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| expr | 'Expression' | The expression to differentiate. | required |
| wrt | 'Variable' | The variable to differentiate with respect to. | required |
1.1.1.2 Returns
| Name | Type | Description |
|---|---|---|
| | 'Expression' | The gradient expression. |
1.1.1.3 Raises
| Name | Type | Description |
|---|---|---|
| | ValueError | If no gradient rule is registered for this expression type. |
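The dispatch behind this function amounts to a type-keyed registry lookup. The sketch below is illustrative only; the `Var`/`Const` classes and the `_GRADIENT_RULES` name are assumptions for this example, not the module's actual internals:

```python
# Minimal sketch of type-keyed gradient-rule dispatch (illustrative; the
# class names and registry name are assumptions, not the library's own).
class Var:
    def __init__(self, name):
        self.name = name

class Const:
    def __init__(self, value):
        self.value = value

_GRADIENT_RULES = {
    # d(x)/d(wrt) is 1 if it is the same variable, else 0
    Var: lambda expr, wrt: Const(1 if expr.name == wrt.name else 0),
}

def apply_gradient_rule(expr, wrt):
    rule = _GRADIENT_RULES.get(type(expr))
    if rule is None:
        # Matches the documented behavior: unregistered types raise ValueError
        raise ValueError(f"No gradient rule registered for {type(expr).__name__}")
    return rule(expr, wrt)

x, y = Var("x"), Var("y")
print(apply_gradient_rule(x, x).value)  # 1
print(apply_gradient_rule(x, y).value)  # 0
```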
1.1.2 compile_hessian
core.autodiff.compile_hessian(expr, variables)
Compile the Hessian for fast evaluation.
1.1.2.1 Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| expr | Expression | The expression to differentiate. | required |
| variables | list[Variable] | List of variables. | required |
1.1.2.2 Returns
| Name | Type | Description |
|---|---|---|
| | Callable | A callable that takes a 1D array of variable values and returns the Hessian as a 2D array. |
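To illustrate the shape of the compiled callable, here is a standalone sketch (independent of this module's classes) that "compiles" the Hessian of a quadratic form, whose second derivatives are all constants; the function name is an assumption for this example:

```python
import numpy as np

def compile_quadratic_hessian(Q):
    """Sketch: for f(x) = 0.5 * x.T @ Q @ x, the Hessian is the constant
    symmetric matrix 0.5 * (Q + Q.T); 'compilation' precomputes it once."""
    H = 0.5 * (Q + Q.T)

    def hessian(x):
        # x is accepted for interface compatibility but unused: H is constant.
        return H

    return hessian

Q = np.array([[2.0, 1.0], [1.0, 4.0]])
hess = compile_quadratic_hessian(Q)
print(hess(np.array([3.0, -1.0])))  # [[2. 1.], [1. 4.]]
```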
1.1.3 compile_jacobian
core.autodiff.compile_jacobian(exprs, variables)
Compile the Jacobian for fast evaluation.
1.1.3.1 Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| exprs | list[Expression] | List of expressions. | required |
| variables | list[Variable] | List of variables. | required |
1.1.3.2 Returns
| Name | Type | Description |
|---|---|---|
| | Callable | A callable that takes a 1D array of variable values and returns the Jacobian as a 2D array. |
1.1.3.3 Performance
For linear expressions, where all Jacobian elements are constants, the compiled callable returns a pre-computed array directly (a 9.7x speedup over element-by-element evaluation).
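The constant-Jacobian fast path described above can be sketched as follows; the function name is an assumption, and a real implementation would first verify that every Jacobian entry is constant:

```python
import numpy as np

def compile_linear_jacobian(A, b):
    """Sketch of the constant-Jacobian fast path: for g(x) = A @ x + b,
    every Jacobian entry is a constant, so the matrix is built once at
    'compile' time and the returned callable simply hands it back."""
    J = np.asarray(A, dtype=float)  # precomputed once, reused on every call
    return lambda x: J

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
jac = compile_linear_jacobian(A, b=[0.0, 0.0, 0.0])
print(jac(np.array([10.0, 20.0])).shape)  # (3, 2)
```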
1.1.4 compute_hessian
core.autodiff.compute_hessian(expr, variables)
Compute the Hessian matrix of an expression.
1.1.4.1 Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| expr | Expression | The expression to differentiate twice. | required |
| variables | list[Variable] | List of variables. | required |
1.1.4.2 Returns
| Name | Type | Description |
|---|---|---|
| | list[list[Expression]] | Hessian matrix as H[i][j] = d²(expr)/d(var_i)d(var_j). |
1.1.4.3 Note
The Hessian is symmetric, so H[i][j] = H[j][i]. We compute the full matrix but could optimize by exploiting symmetry.
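The symmetry optimization mentioned in the note can be sketched as follows: compute only the upper triangle and mirror it, roughly halving the work. Numeric central differences stand in here for the symbolic entries; the function name is an assumption for this example:

```python
import numpy as np

def hessian_exploiting_symmetry(f, x, h=1e-4):
    """Sketch of the symmetry optimization: fill only the upper triangle
    H[i][j] for j >= i, then mirror it, since H[i][j] == H[j][i]."""
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):  # upper triangle only
            e_i = np.eye(n)[i] * h
            e_j = np.eye(n)[j] * h
            # Central-difference estimate of d²f / d(x_i) d(x_j)
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
            H[j, i] = H[i, j]  # mirror into the lower triangle
    return H

# f(x, y) = x**2 * y + y**3, so H = [[2y, 2x], [2x, 6y]]
f = lambda v: v[0]**2 * v[1] + v[1]**3
print(np.round(hessian_exploiting_symmetry(f, np.array([1.0, 2.0])), 3))
```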
1.1.5 compute_jacobian
core.autodiff.compute_jacobian(exprs, variables)
Compute the Jacobian matrix of expressions with respect to variables.
1.1.5.1 Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| exprs | list[Expression] | List of expressions (constraints or objectives). | required |
| variables | list[Variable] | List of variables to differentiate with respect to. | required |
1.1.5.2 Returns
| Name | Type | Description |
|---|---|---|
| | list[list[Expression]] | Jacobian matrix as J[i][j] = d(expr_i)/d(var_j). |
1.1.5.3 Example
```python
x, y = Variable("x"), Variable("y")
exprs = [x**2 + y, x*y]
J = compute_jacobian(exprs, [x, y])
# J[0][0] = 2*x, J[0][1] = 1
# J[1][0] = y,   J[1][1] = x
```
1.1.6 gradient
core.autodiff.gradient(expr, wrt)
Compute the symbolic gradient of an expression with respect to a variable.
Uses a three-tier approach for optimal performance:
1. Registered gradient rules (O(1) for vector expressions)
2. Cached recursive computation (for shallow trees)
3. Iterative fallback (for deep trees, avoiding RecursionError)
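The iterative fallback in the third tier can be sketched as an explicit-stack tree walk; the tuple-based expression encoding and the function name below are assumptions for this example, not the module's real representation:

```python
# Illustrative sketch of the iterative fallback (tier 3): differentiating a
# very deep chain of additions with an explicit stack instead of recursion.
# Expressions are encoded as ("add", left, right), ("var", name), ("const", c).

def gradient_iterative(expr, wrt):
    total = 0.0                  # d/d(wrt) of a sum of leaves
    stack = [expr]
    while stack:                 # explicit stack: no RecursionError, any depth
        node = stack.pop()
        if node[0] == "add":
            stack.append(node[1])
            stack.append(node[2])
        elif node[0] == "var":
            total += 1.0 if node[1] == wrt else 0.0
        # ("const", c) nodes contribute 0 and are simply dropped
    return total

# A chain far deeper than Python's default recursion limit (~1000):
deep = ("var", "x")
for _ in range(100000):
    deep = ("add", deep, ("var", "x"))
print(gradient_iterative(deep, "x"))  # 100001.0
```

A plain recursive walk over this 100,000-deep tree would raise RecursionError; the explicit stack keeps memory use proportional to tree depth without touching the interpreter's call stack.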
1.1.6.1 Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| expr | Expression | The expression to differentiate. | required |
| wrt | Variable | The variable to differentiate with respect to. | required |
1.1.6.2 Returns
| Name | Type | Description |
|---|---|---|
| | Expression | A new Expression representing the derivative. |
1.1.6.3 Example
```python
x = Variable("x")
expr = x**2 + 3*x
grad = gradient(expr, x)
# Returns: 2*x + 3
```
1.1.7 has_gradient_rule
core.autodiff.has_gradient_rule(expr)
Check if an expression type has a registered gradient rule.
1.1.7.1 Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| expr | 'Expression' | The expression to check. | required |
1.1.7.2 Returns
| Name | Type | Description |
|---|---|---|
| | bool | True if a gradient rule is registered for this expression type. |
1.1.8 register_gradient
core.autodiff.register_gradient(expr_type)
Decorator to register a gradient rule for an expression type.
Registered gradient rules are used by the main gradient() function before falling back to recursive tree traversal. This enables O(1) gradient computation for vector expressions.
1.1.8.1 Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| expr_type | type | The expression class to register a gradient rule for. | required |
1.1.8.2 Returns
| Name | Type | Description |
|---|---|---|
| | Callable[[GradientFunc], GradientFunc] | A decorator that registers the gradient function. |
1.1.8.3 Example
```python
@register_gradient(VectorSum)
def gradient_vector_sum(expr: VectorSum, wrt: Variable) -> Expression:
    # O(1) gradient computation
    ...
```