Unlike black-box solvers, Optyx lets you see exactly what’s happening:
```python
from optyx import Variable

x = Variable("x")
y = Variable("y")

# Build an expression
expr = (x + 2*y)**2 - x*y

# See its structure
print(f"Expression: {expr}")
print(f"Variables: {[v.name for v in expr.get_variables()]}")
print(f"Value at x=1, y=2: {expr.evaluate({'x': 1, 'y': 2})}")
```
```
Expression: (((Variable('x') + (Constant(2) * Variable('y'))) ** Constant(2)) - (Variable('x') * Variable('y')))
Variables: ['x', 'y']
Value at x=1, y=2: 23
```
1.4 The Core Ideas
1.4.1 Expressions are Symbolic
When you write x + y, Optyx builds a symbolic tree—not a Python value. This means:
- Expressions can be inspected, differentiated, and analyzed
Optyx analyzes your problem and picks the right solver:
- Linear? → HiGHS (fast industrial LP solver)
- Unconstrained NLP? → L-BFGS-B
- Constrained NLP? → SLSQP or trust-constr
You can override, but you rarely need to.
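The dispatch logic amounts to a small rule table keyed on problem structure. A minimal sketch of the idea in plain Python (this mirrors the rules above; it is not Optyx's actual internals, and the function and parameter names are illustrative):

```python
def pick_solver(is_linear, has_constraints):
    """Map problem structure to a backend solver name (illustrative rules)."""
    if is_linear:
        return "highs"       # linear problem -> HiGHS LP solver
    if not has_constraints:
        return "L-BFGS-B"    # unconstrained nonlinear problem
    return "SLSQP"           # constrained nonlinear problem (or trust-constr)

print(pick_solver(is_linear=True, has_constraints=True))    # highs
print(pick_solver(is_linear=False, has_constraints=False))  # L-BFGS-B
print(pick_solver(is_linear=False, has_constraints=True))   # SLSQP
```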
1.4.4 Re-solving is Fast
Changed a parameter? Optyx caches the problem structure:
```python
# First solve compiles the problem
solution = problem.solve()

# Subsequent solves reuse compilation
for scenario in scenarios:
    solution = problem.solve(x0=scenario)
```
Up to 800x faster on repeated solves.
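The caching pattern behind this is simple: compile once, memoize the result, reuse it on every re-solve. A self-contained sketch of the pattern (the class and method names are illustrative, not Optyx's API; `eval` of a string stands in for tree-to-callable compilation):

```python
class CachedProblem:
    """Compile the objective once; reuse the compiled callable on re-solves."""

    def __init__(self, objective_source):
        self.objective_source = objective_source
        self._compiled = None
        self.compile_count = 0

    def _compile(self):
        # Only the first call pays the compilation cost
        if self._compiled is None:
            self.compile_count += 1
            self._compiled = eval(f"lambda x: {self.objective_source}")
        return self._compiled

    def solve(self, x0):
        f = self._compile()   # cached after the first solve
        return f(x0)          # stand-in for the actual solver loop

p = CachedProblem("(x - 3) ** 2")
for scenario in [0.0, 1.0, 2.0]:
    p.solve(scenario)
print(p.compile_count)  # 1: compiled only on the first solve
```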
1.5 Under the Hood: A Glimpse
Optyx isn’t magic—it’s a SciPy wrapper with a symbolic frontend. Here’s the 30-second version of what happens when you solve a problem:
```
Your Code          What Optyx Builds       What Runs
─────────          ─────────────────       ─────────
x**2 + y**2   →    Expression Tree    →    SciPy's minimize()

                        Add
                       /   \
                   Power   Power
                   /  \    /  \
                  x    2  y    2
```
Step 1: Build the Tree — Python operators (+, *, **) are overloaded to construct a symbolic expression tree instead of computing values.
Step 2: Walk for Gradients — Each node knows its derivative rule. Power(x, 2) knows ∂/∂x = 2x. The tree is walked to compute exact gradients via the chain rule.
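Steps 1 and 2 can be sketched in a few lines of plain Python: operator overloading builds nodes instead of values, and each node supplies its own derivative rule for the chain-rule walk. This is an illustration of the technique, not Optyx's source:

```python
class Expr:
    # Overloaded operators build tree nodes instead of computing values
    def __add__(self, other): return Add(self, other)
    def __mul__(self, other): return Mul(self, other)

class Var(Expr):
    def __init__(self, name): self.name = name
    def eval(self, env): return env[self.name]
    def grad(self, name, env): return 1.0 if name == self.name else 0.0

class Const(Expr):
    def __init__(self, v): self.v = v
    def eval(self, env): return self.v
    def grad(self, name, env): return 0.0

class Add(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, env): return self.a.eval(env) + self.b.eval(env)
    def grad(self, name, env):                 # sum rule: (f + g)' = f' + g'
        return self.a.grad(name, env) + self.b.grad(name, env)

class Mul(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, env): return self.a.eval(env) * self.b.eval(env)
    def grad(self, name, env):                 # product rule: (fg)' = f'g + fg'
        return (self.a.grad(name, env) * self.b.eval(env)
                + self.a.eval(env) * self.b.grad(name, env))

x, y = Var("x"), Var("y")
f = x * x + Const(3.0) * y      # f = x^2 + 3y, built as a tree
env = {"x": 2.0, "y": 5.0}
print(f.eval(env))              # 19.0
print(f.grad("x", env))         # df/dx = 2x = 4.0
print(f.grad("y", env))         # df/dy = 3.0
```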
Step 3: Compile to Callables — The tree is compiled into fast Python functions that SciPy can call repeatedly during optimization.
Step 4: Call SciPy — Optyx passes your objective, gradient, and constraints to scipy.optimize.minimize() (or HiGHS for LP). The actual optimization algorithm is SciPy’s.
Warning: Honest About Overhead
The symbolic layer adds compilation cost. First solve is ~1.5-2x slower than hand-written SciPy. The payoff comes from: (1) never writing gradients, (2) re-solves reusing compiled functions, (3) readable code you can maintain.
What if you have many variables? The natural approach is loops:
```python
from optyx import Variable, Problem
import numpy as np

# 10 portfolio weights
n_assets = 10
weights = [Variable(f"w_{i}", lb=0, ub=1) for i in range(n_assets)]

# Expected returns
np.random.seed(42)
returns = np.random.uniform(0.05, 0.15, n_assets)

# Build objective via loop
expected_return = sum(weights[i] * returns[i] for i in range(n_assets))

# Budget constraint via loop
budget = sum(weights)

print(f"Created {len(weights)} variables")
print(f"Objective type: {type(expected_return).__name__}")
```
```
Created 10 variables
Objective type: BinaryOp
```
This works for small problems. But there are limitations…
1.8 The Limitations of Loops
As problems grow, the loop approach becomes painful:
1.8.1 1. Verbose Code
```python
# 100 variables = 100 iterations
weights = [Variable(f"w_{i}", lb=0, ub=1) for i in range(100)]

# Quadratic objective = 10,000 iterations (n²)
variance = sum(
    weights[i] * covariance[i, j] * weights[j]
    for i in range(100)
    for j in range(100)
)
```
1.8.2 2. Error-Prone Indexing
```python
# Easy to make mistakes with indices
constraint = sum(weights[i] * data[j] for i in range(n))  # Bug: should be data[i]
```
1.8.3 3. Slow Expression Building
Each loop iteration creates expression nodes. For n = 100:

- Linear sum: 100 additions
- Quadratic form: 10,000 multiplications + 10,000 additions
This compilation overhead grows with problem size.
1.8.4 4. No Vectorized Gradients
Loop-built expressions differentiate element-by-element. NumPy-style operations could be much faster.
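The difference is easy to see numerically. For a linear objective f(w) = r·w, element-by-element differentiation takes one pass per variable, while the vectorized gradient is simply the whole vector r, produced in one shot. A plain NumPy sketch (no Optyx involved):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
r = rng.uniform(0.05, 0.15, n)
w = rng.uniform(0.0, 1.0, n)

f = lambda w: r @ w  # linear objective f(w) = r · w

# Element-by-element: one finite-difference pass per variable (n passes)
eps = 1e-6
grad_loop = np.array([
    (f(w + eps * np.eye(n)[i]) - f(w)) / eps for i in range(n)
])

# Vectorized: the analytic gradient of r · w is just r, in one operation
grad_vec = r

print(np.allclose(grad_loop, grad_vec, atol=1e-5))  # True
```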
1.9 VectorVariable: Vectors Done Right
VectorVariable solves these problems with NumPy-like syntax:
```python
from optyx import VectorVariable
import numpy as np

# Create 10 variables in one line
w = VectorVariable("w", size=10, lb=0, ub=1)

print(f"Created: {w.name} with {len(w)} elements")
print(f"First element: {w[0].name}")
print(f"Last element: {w[-1].name}")
```
```
Created: w with 10 elements
First element: w[0]
Last element: w[9]
```
1.9.1 Vectorized Operations
```python
from optyx import VectorVariable, Problem
import numpy as np

np.random.seed(42)
n = 10

# Data as NumPy arrays
returns = np.random.uniform(0.05, 0.15, n)

# Variables as VectorVariable
w = VectorVariable("w", n, lb=0, ub=1)

# Dot product — no loops!
expected_return = returns @ w

# Sum — one call!
budget = w.sum()

print(f"Return type: {type(expected_return).__name__}")
print(f"Sum type: {type(budget).__name__}")
```
```
Return type: LinearCombination
Sum type: VectorSum
```
1.9.2 Full Portfolio Example
```python
from optyx import VectorVariable, Problem
import numpy as np

np.random.seed(42)
n = 20  # 20 assets

# Problem data
returns = np.random.uniform(0.05, 0.15, n)

# Decision variables
w = VectorVariable("w", n, lb=0, ub=1)

# Maximize return subject to budget
solution = (
    Problem("portfolio_vector")
    .maximize(returns @ w)
    .subject_to(w.sum().eq(1))
    .solve()
)

# Extract results
opt_weights = np.array([solution[f"w[{i}]"] for i in range(n)])

print(f"Status: {solution.status}")
print(f"Max return: {solution.objective_value:.2%}")
print(f"Non-zero positions: {np.sum(opt_weights > 0.01)}")
```
```
Status: SolverStatus.OPTIMAL
Max return: 14.70%
Non-zero positions: 1
```
For portfolio variance (w.T @ Σ @ w), use w.dot(Σ @ w) for natural math-like syntax with analytic gradients:
```python
from optyx import VectorVariable, Problem
import numpy as np

np.random.seed(42)
n = 10

# Create a positive-definite covariance matrix
data = np.random.randn(n, 5)
cov = data @ data.T / 5

# Decision variables
w = VectorVariable("w", n, lb=0, ub=1)

# Math-like syntax: w · (Σw) = wᵀΣw with analytic gradient
variance = w.dot(cov @ w)

solution = (
    Problem("min_variance")
    .minimize(variance)
    .subject_to(w.sum().eq(1))
    .solve()
)

print(f"Min variance: {solution.objective_value:.6f}")
```
```
Min variance: 0.000119
```
Note: Why w.dot(Σ @ w)?
A loop-built quadratic form creates n² expression nodes. w.dot(Σ @ w) uses a single MatrixVectorProduct + DotProduct node with analytic gradient: ∇(wᵀ Q w) = 2Qw. This is much faster for large n.
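The identity ∇(wᵀQw) = 2Qw (which holds for symmetric Q, as any covariance matrix is) can be checked numerically against finite differences. A plain NumPy sketch, reusing the same covariance construction as above:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10

# Symmetric positive-definite Q, built the same way as the covariance above
data = rng.standard_normal((n, 5))
Q = data @ data.T / 5

w = rng.uniform(0.0, 1.0, n)

f = lambda w: w @ Q @ w  # quadratic form f(w) = wᵀQw

# Central finite differences, one per coordinate
eps = 1e-6
grad_fd = np.array([
    (f(w + eps * np.eye(n)[i]) - f(w - eps * np.eye(n)[i])) / (2 * eps)
    for i in range(n)
])

# Analytic gradient: 2Qw, one matrix-vector product
grad_analytic = 2 * Q @ w

print(np.allclose(grad_fd, grad_analytic, atol=1e-6))  # True
```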