1 Optyx

1.0.1 Optimization that reads like Python

1.1 See the Difference

1.1.1 With Optyx

from optyx import Variable, Problem

x = Variable("x", lb=0)
y = Variable("y", lb=0)

solution = (
    Problem()
    .minimize(x**2 + y**2)
    .subject_to(x + y >= 1)
    .solve()
)
# x=0.5, y=0.5, objective=0.5

1.1.2 With SciPy

from scipy.optimize import minimize
import numpy as np

def objective(vars):
    return vars[0]**2 + vars[1]**2

def gradient(vars):  # manual gradient!
    return np.array([2*vars[0], 2*vars[1]])

result = minimize(
    objective, x0=[1, 1], jac=gradient,
    method='SLSQP',
    bounds=[(0, None), (0, None)],
    constraints={'type': 'ineq', 
                 'fun': lambda v: v[0]+v[1]-1}
)

Your optimization code should read like your math. With Optyx, x + y >= 1 is exactly that—not a lambda buried in a constraint dictionary.


1.2 Why Another Optimization Library?

Python has excellent tools. SciPy provides algorithms. CVXPY handles convex problems elegantly. Pyomo scales to industrial applications.

Optyx takes a different path: radical simplicity.

We believe most optimization code is harder to write than it needs to be. Optyx is for developers who want to:

  • Write problems as they think them — x**2 + y**2 not lambda v: v[0]**2 + v[1]**2
  • Never compute a gradient by hand — symbolic autodiff handles it
  • Skip the solver configuration — sensible defaults, automatic solver selection

1.2.1 Being Honest

Optyx is young and opinionated. It’s not a replacement for specialized tools:

  • Need MILP at scale? → Use Pyomo or Gurobi
  • Need convex guarantees? → Use CVXPY
  • Need maximum performance? → Use raw solver APIs

But if you want readable optimization code that just works for most problems, keep reading.


1.3 What You Can Do

from optyx import Variable, Problem

# Find the minimum of Rosenbrock function in a box
x = Variable("x", lb=-2, ub=2)
y = Variable("y", lb=-2, ub=2)

rosenbrock = 100*(y - x**2)**2 + (1 - x)**2

solution = (
    Problem("rosenbrock")
    .minimize(rosenbrock)
    .solve()
)

print(f"Minimum at ({solution['x']:.4f}, {solution['y']:.4f})")
print(f"Objective value: {solution.objective_value:.6f}")
Minimum at (1.0000, 1.0000)
Objective value: 0.000000

1.3.1 Portfolio Optimization

from optyx import Variable, Problem

# Three assets with position limits
tech = Variable("tech", lb=0, ub=0.4)
energy = Variable("energy", lb=0, ub=0.4) 
finance = Variable("finance", lb=0, ub=0.4)

# Maximize return while controlling risk
expected_return = 0.12*tech + 0.08*energy + 0.10*finance
risk = 0.04*tech**2 + 0.02*energy**2 + 0.03*finance**2

solution = (
    Problem("portfolio")
    .minimize(risk)
    .subject_to((tech + energy + finance).eq(1))  # fully invested
    .subject_to(expected_return >= 0.095)          # target return
    .solve()
)

print(f"Allocation: tech={solution['tech']:.1%}, energy={solution['energy']:.1%}, finance={solution['finance']:.1%}")
print(f"Expected return: {0.12*solution['tech'] + 0.08*solution['energy'] + 0.10*solution['finance']:.1%}")
print(f"Portfolio risk: {solution.objective_value:.4f}")
Allocation: tech=25.7%, energy=40.0%, finance=34.3%
Expected return: 9.7%
Portfolio risk: 0.0094

1.3.2 Inspect What You Built

Unlike black-box solvers, Optyx lets you see exactly what’s happening:

from optyx import Variable

x = Variable("x")
y = Variable("y")

# Build an expression
expr = (x + 2*y)**2 - x*y

# See its structure
print(f"Expression: {expr}")
print(f"Variables: {[v.name for v in expr.get_variables()]}")
print(f"Value at x=1, y=2: {expr.evaluate({'x': 1, 'y': 2})}")
Expression: (((Variable('x') + (Constant(2) * Variable('y'))) ** Constant(2)) - (Variable('x') * Variable('y')))
Variables: ['x', 'y']
Value at x=1, y=2: 23

1.4 The Core Ideas

1.4.1 Expressions are Symbolic

When you write x + y, Optyx builds a symbolic tree—not a Python value. This means:

  • Expressions can be inspected, differentiated, and analyzed
  • Gradients are exact (no finite differences)
  • Errors are caught before solving, not during

1.4.2 Problems are Fluent

The Problem API chains naturally:

Problem("name")
    .minimize(objective)
    .subject_to(constraint1)
    .subject_to(constraint2)
    .solve()

One line to read, one line to understand.
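The mechanics behind chaining are simple: each builder method records its argument and returns a problem object, so calls compose left to right. A minimal sketch (illustrative only, not Optyx's actual implementation):

```python
# Sketch of a fluent, chainable builder (hypothetical class, not Optyx's source).
class FluentProblem:
    def __init__(self, name="problem"):
        self.name = name
        self.objective = None
        self.constraints = []

    def minimize(self, expr):
        self.objective = ("min", expr)
        return self              # returning the problem is what makes chaining work

    def subject_to(self, constraint):
        self.constraints.append(constraint)
        return self

p = FluentProblem("demo").minimize("x**2 + y**2").subject_to("x + y >= 1")
print(p.name, p.objective[0], len(p.constraints))  # demo min 1
```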

1.4.3 Solvers are Automatic

Optyx analyzes your problem and picks the right solver:

  • Linear? → HiGHS (fast industrial LP solver)
  • Unconstrained NLP? → L-BFGS-B
  • Constrained NLP? → SLSQP or trust-constr

You can override, but you rarely need to.
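Selection boils down to a dispatch on problem structure. A sketch of the rules above (hypothetical function, not Optyx's actual code):

```python
def pick_solver(is_linear, has_constraints):
    # Structure-based solver dispatch, mirroring the rules listed above
    # (hypothetical names — not Optyx's internals).
    if is_linear:
        return "highs"       # LP -> HiGHS
    if not has_constraints:
        return "L-BFGS-B"    # unconstrained NLP
    return "SLSQP"           # constrained NLP

print(pick_solver(is_linear=True, has_constraints=True))     # highs
print(pick_solver(is_linear=False, has_constraints=False))   # L-BFGS-B
```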

1.4.4 Re-solving is Fast

Changed a parameter? Optyx caches the problem structure:

# First solve compiles the problem
solution = problem.solve()

# Subsequent solves reuse compilation
for scenario in scenarios:
    solution = problem.solve(x0=scenario)

Up to 800x faster on repeated solves.
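The pattern behind this is compile-once, solve-many: build the callables on the first solve, then reuse them. A toy sketch of the idea (illustrative, not Optyx's internals):

```python
# Sketch of compile-once / solve-many caching (illustrative, not Optyx's internals).
class CachedProblem:
    def __init__(self, objective_source):
        self.source = objective_source
        self._compiled = None                 # filled on the first solve

    def _compile(self):
        # The expensive step: turn the expression into a fast callable.
        self._compiled = eval(f"lambda x, y: {self.source}")
        return self._compiled

    def solve(self, x0):
        fun = self._compiled or self._compile()   # reused on every later call
        return fun(*x0)

p = CachedProblem("x**2 + y**2")
print(p.solve((1, 2)))   # compiles, then evaluates: 5
print(p.solve((3, 4)))   # skips compilation entirely: 25
```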


1.5 Under the Hood: A Glimpse

Optyx isn’t magic—it’s a SciPy wrapper with a symbolic frontend. Here’s the 30-second version of what happens when you solve a problem:

Your Code                    What Optyx Builds              What Runs
──────────                   ─────────────────              ─────────
x**2 + y**2         →        Expression Tree        →       SciPy's minimize()
                                   Add
                                  /   \
                              Power   Power
                              / \     / \
                             x   2   y   2

Step 1: Build the Tree — Python operators (+, *, **) are overloaded to construct a symbolic expression tree instead of computing values.

Step 2: Walk for Gradients — Each node knows its derivative rule. Power(x, 2) knows ∂/∂x = 2x. The tree is walked to compute exact gradients via the chain rule.

Step 3: Compile to Callables — The tree is compiled into fast Python functions that SciPy can call repeatedly during optimization.

Step 4: Call SciPy — Optyx passes your objective, gradient, and constraints to scipy.optimize.minimize() (or HiGHS for LP). The actual optimization algorithm is SciPy’s.
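The four steps fit in a few dozen lines. Here is a minimal sketch of Steps 1-3 (illustrative classes, not Optyx's actual source):

```python
# Tiny symbolic frontend: operators build a tree, grad() walks it via the chain rule.
class Expr:
    def __add__(self, other): return Add(self, wrap(other))
    def __mul__(self, other): return Mul(self, wrap(other))
    def __pow__(self, n): return Pow(self, n)

class Const(Expr):
    def __init__(self, v): self.v = v
    def eval(self, env): return self.v
    def grad(self, name): return Const(0)

class Var(Expr):
    def __init__(self, name): self.name = name
    def eval(self, env): return env[self.name]
    def grad(self, name): return Const(1 if name == self.name else 0)

class Add(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, env): return self.a.eval(env) + self.b.eval(env)
    def grad(self, name): return Add(self.a.grad(name), self.b.grad(name))

class Mul(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, env): return self.a.eval(env) * self.b.eval(env)
    def grad(self, name):  # product rule
        return Add(Mul(self.a.grad(name), self.b), Mul(self.a, self.b.grad(name)))

class Pow(Expr):
    def __init__(self, base, n): self.base, self.n = base, n
    def eval(self, env): return self.base.eval(env) ** self.n
    def grad(self, name):  # power rule + chain rule: n * base^(n-1) * base'
        return Mul(Mul(Const(self.n), Pow(self.base, self.n - 1)), self.base.grad(name))

def wrap(v):
    return v if isinstance(v, Expr) else Const(v)

# Step 1: operators build the tree.  Step 2: grad() walks it.  Step 3: callables.
x, y = Var("x"), Var("y")
f = x**2 + y**2
fun = lambda v: f.eval({"x": v[0], "y": v[1]})
jac = lambda v: [f.grad(n).eval({"x": v[0], "y": v[1]}) for n in ("x", "y")]
print(fun([1.0, 2.0]), jac([1.0, 2.0]))  # Step 4 would hand fun/jac to SciPy
```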

Warning: Honest About Overhead

The symbolic layer adds compilation cost. First solve is ~1.5-2x slower than hand-written SciPy. The payoff comes from: (1) never writing gradients, (2) re-solves reusing compiled functions, (3) readable code you can maintain.

Full technical deep-dive →


1.6 Variable: The Building Block

Everything in Optyx starts with Variable. It represents a decision variable—a value the solver will determine.

from optyx import Variable

# Basic variable
x = Variable("x")

# With bounds
y = Variable("y", lb=0, ub=10)  # 0 ≤ y ≤ 10

# Integer domain
n = Variable("n", lb=0, ub=100, domain="integer")

# Binary (0 or 1)
select = Variable("select", domain="binary")

print(f"Variable x: bounds=[{x.lb}, {x.ub}]")
print(f"Variable y: bounds=[{y.lb}, {y.ub}]")
print(f"Variable n: domain={n.domain}")
Variable x: bounds=[None, None]
Variable y: bounds=[0, 10]
Variable n: domain=integer

Variables combine with Python operators to build expressions:

from optyx import Variable

x = Variable("x")
y = Variable("y")

# Arithmetic builds expression trees
linear = 3*x + 2*y - 5
quadratic = x**2 + x*y + y**2
nonlinear = (x + 1) / (y + 1)

print(f"Linear: {linear}")
print(f"Quadratic: {quadratic}")
Linear: (((Constant(3) * Variable('x')) + (Constant(2) * Variable('y'))) - Constant(5))
Quadratic: (((Variable('x') ** Constant(2)) + (Variable('x') * Variable('y'))) + (Variable('y') ** Constant(2)))

1.7 Loops: The Traditional Approach

What if you have many variables? The natural approach is loops:

from optyx import Variable, Problem
import numpy as np

# 10 portfolio weights
n_assets = 10
weights = [Variable(f"w_{i}", lb=0, ub=1) for i in range(n_assets)]

# Expected returns
np.random.seed(42)
returns = np.random.uniform(0.05, 0.15, n_assets)

# Build objective via loop
expected_return = sum(weights[i] * returns[i] for i in range(n_assets))

# Budget constraint via loop
budget = sum(weights)

print(f"Created {len(weights)} variables")
print(f"Objective type: {type(expected_return).__name__}")
Created 10 variables
Objective type: BinaryOp

This works for small problems. But there are limitations…


1.8 The Limitations of Loops

As problems grow, the loop approach becomes painful:

1.8.1 1. Verbose Code

# 100 variables = 100 iterations
weights = [Variable(f"w_{i}", lb=0, ub=1) for i in range(100)]

# Quadratic objective = 10,000 iterations (n²)
variance = sum(
    weights[i] * covariance[i, j] * weights[j]
    for i in range(100)
    for j in range(100)
)

1.8.2 2. Error-Prone Indexing

# Easy to make mistakes with indices
constraint = sum(weights[i] * data[j] for i in range(n))  # Bug: should be data[i]

1.8.3 3. Slow Expression Building

Each loop iteration creates expression nodes. For n=100:

  • Linear sum: 100 additions
  • Quadratic form: 10,000 multiplications + 10,000 additions

This compilation overhead grows with problem size.

1.8.4 4. No Vectorized Gradients

Loop-built expressions differentiate element-by-element. NumPy-style operations could be much faster.
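The gap is easy to see even outside any modeling library. The same dot product, computed element-by-element the way a loop-built tree evaluates, versus one vectorized operation (plain NumPy, for illustration):

```python
import time
import numpy as np

n = 200_000
a = np.random.uniform(0.05, 0.15, n)
w = np.random.rand(n)

# Element-by-element, like evaluating a loop-built expression tree:
t0 = time.perf_counter()
slow = sum(a[i] * w[i] for i in range(n))
t_loop = time.perf_counter() - t0

# One vectorized operation:
t0 = time.perf_counter()
fast = a @ w
t_vec = time.perf_counter() - t0

print(np.isclose(slow, fast))   # same value
print(t_loop > t_vec)           # the loop pays per-element Python overhead
```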


1.9 VectorVariable: Vectors Done Right

VectorVariable solves these problems with NumPy-like syntax:

from optyx import VectorVariable
import numpy as np

# Create 10 variables in one line
w = VectorVariable("w", size=10, lb=0, ub=1)

print(f"Created: {w.name} with {len(w)} elements")
print(f"First element: {w[0].name}")
print(f"Last element: {w[-1].name}")
Created: w with 10 elements
First element: w[0]
Last element: w[9]

1.9.1 Vectorized Operations

from optyx import VectorVariable, Problem
import numpy as np

np.random.seed(42)
n = 10

# Data as NumPy arrays
returns = np.random.uniform(0.05, 0.15, n)
 
# Variables as VectorVariable
w = VectorVariable("w", n, lb=0, ub=1)

# Dot product — no loops!
expected_return = returns @ w

# Sum — one call!
budget = w.sum()

print(f"Return type: {type(expected_return).__name__}")
print(f"Sum type: {type(budget).__name__}")
Return type: LinearCombination
Sum type: VectorSum

1.9.2 Full Portfolio Example

from optyx import VectorVariable, Problem
import numpy as np

np.random.seed(42)
n = 20  # 20 assets

# Problem data
returns = np.random.uniform(0.05, 0.15, n)

# Decision variables
w = VectorVariable("w", n, lb=0, ub=1)

# Maximize return subject to budget
solution = (
    Problem("portfolio_vector")
    .maximize(returns @ w)
    .subject_to(w.sum().eq(1))
    .solve()
)

# Extract results
opt_weights = np.array([solution[f"w[{i}]"] for i in range(n)])
print(f"Status: {solution.status}")
print(f"Max return: {solution.objective_value:.2%}")
print(f"Non-zero positions: {np.sum(opt_weights > 0.01)}")
Status: SolverStatus.OPTIMAL
Max return: 14.70%
Non-zero positions: 1

1.9.3 Key VectorVariable Features

Feature    Syntax                   Description
Create     VectorVariable("x", n)   n-element vector
Index      x[i], x[-1]              Single Variable
Slice      x[2:5]                   Sub-vector
Sum        x.sum()                  Σ xᵢ
Dot        a @ x                    Σ aᵢxᵢ
L2 Norm    norm(x)                  √(Σ xᵢ²)
L1 Norm    norm(x, ord=1)           Σ |xᵢ|
Equality   x.sum().eq(1)            Sum equals 1

Full VectorVariable Tutorial →


1.10 MatrixVariable: 2D Arrays

For problems with natural 2D structure—transportation, assignment, scheduling—use MatrixVariable:

from optyx import MatrixVariable

# 3×4 matrix of decision variables
X = MatrixVariable("X", rows=3, cols=4, lb=0)

print(f"Shape: {X.shape}")
print(f"Element [1,2]: {X[1, 2].name}")
Shape: (3, 4)
Element [1,2]: X[1,2]

1.10.1 Row and Column Operations

from optyx import MatrixVariable

X = MatrixVariable("X", rows=3, cols=4, lb=0)

# Row slice — returns VectorVariable
row_0 = X[0, :]
print(f"Row 0: {len(row_0)} elements")

# Column slice
col_2 = X[:, 2]
print(f"Column 2: {len(col_2)} elements")

# Row sum
row_sum = X[0, :].sum()
print(f"Row sum type: {type(row_sum).__name__}")
Row 0: 4 elements
Column 2: 3 elements
Row sum type: VectorSum

1.10.2 Transportation Problem Example

Ship goods from warehouses to stores, minimizing cost:

from optyx import MatrixVariable, Problem
import numpy as np

# 3 warehouses, 4 stores
supply = np.array([100, 150, 200])
demand = np.array([80, 120, 100, 150])
costs = np.array([
    [4, 6, 9, 5],
    [5, 3, 7, 8],
    [6, 8, 4, 3],
])

# Decision: ship[i,j] = units from warehouse i to store j
ship = MatrixVariable("ship", rows=3, cols=4, lb=0)

# Objective: minimize total cost
total_cost = sum(
    costs[i, j] * ship[i, j]
    for i in range(3) for j in range(4)
)

problem = Problem("transport").minimize(total_cost)

# Supply constraints
for i in range(3):
    problem = problem.subject_to(ship[i, :].sum() <= supply[i])

# Demand constraints
for j in range(4):
    problem = problem.subject_to(ship[:, j].sum().eq(demand[j]))

solution = problem.solve()

print(f"Status: {solution.status}")
print(f"Total cost: ${solution.objective_value:.2f}")
Status: SolverStatus.OPTIMAL
Total cost: $1660.00

1.10.3 Key MatrixVariable Features

Feature     Syntax                      Description
Create      MatrixVariable("X", m, n)   m×n matrix
Index       X[i, j]                     Single Variable
Row         X[i, :]                     Row as VectorVariable
Column      X[:, j]                     Column as VectorVariable
Transpose   X.T                         Transposed view
Diagonal    X.diagonal()                Main diagonal (square)
Symmetric   symmetric=True              Enforce X[i,j]==X[j,i]

Full MatrixVariable Tutorial →


1.11 Math-like Quadratic Forms

For portfolio variance (w.T @ Σ @ w), use w.dot(Σ @ w) for natural math-like syntax with analytic gradients:

from optyx import VectorVariable, Problem
import numpy as np

np.random.seed(42)
n = 10

# Create a positive semi-definite covariance (rank-5 factor model)
data = np.random.randn(n, 5)
cov = data @ data.T / 5

# Decision variables
w = VectorVariable("w", n, lb=0, ub=1)

# Math-like syntax: w · (Σw) = wᵀΣw with analytic gradient
variance = w.dot(cov @ w)

solution = (
    Problem("min_variance")
    .minimize(variance)
    .subject_to(w.sum().eq(1))
    .solve()
)

print(f"Min variance: {solution.objective_value:.6f}")
Min variance: 0.000119
Note: Why w.dot(Σ @ w)?

A loop-built quadratic form creates n² expression nodes. w.dot(Σ @ w) uses a single MatrixVectorProduct + DotProduct node with analytic gradient: ∇(wᵀ Q w) = 2Qw. This is much faster for large n.
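The analytic rule can be checked numerically. A quick sketch in plain NumPy (not Optyx internals), comparing ∇(wᵀQw) = 2Qw against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, 5))
Q = A @ A.T                     # symmetric PSD matrix, like a covariance
w = rng.standard_normal(n)

analytic = 2 * Q @ w            # the analytic gradient rule for symmetric Q

# Central finite differences for comparison
eps = 1e-6
numeric = np.zeros(n)
for i in range(n):
    e = np.zeros(n)
    e[i] = eps
    numeric[i] = ((w + e) @ Q @ (w + e) - (w - e) @ Q @ (w - e)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-4))  # True
```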

Advanced Portfolio Tutorial →


1.12 Performance

Optyx is designed for developer productivity first, but it’s not slow:

Scenario                  Time vs Raw SciPy   Notes
Linear programs           ~1.0x               Near-parity via HiGHS
Nonlinear (first solve)   ~1.5-2.0x           Autodiff compilation cost
Nonlinear (re-solve)      0.001-0.4x          Cached structure wins

For problems where compilation is amortized over many solves, Optyx can be orders of magnitude faster than naive loops.

See Benchmarks for detailed analysis.


1.13 Get Started

pip install optyx

Then try the 5-minute quickstart or explore the tutorials.


1.14 Scalable Optimization

Optyx scales from prototypes to production with Vector and Matrix features:

1.14.1 Vector & Matrix Variables

Handle hundreds or thousands of variables with clean syntax:

from optyx import VectorVariable, Problem
import numpy as np

# 100 decision variables in one line
weights = VectorVariable("w", 100, lb=0, ub=1)

# Covariance matrix (100x100)
cov = np.random.randn(100, 100)
cov = cov @ cov.T  # Make positive definite

# Math-like quadratic form: w · (Σw)
risk = weights.dot(cov @ weights)

solution = (
    Problem()
    .minimize(risk)
    .subject_to(weights.sum().eq(1))
    .solve()
)

No loops, no index bookkeeping—just math.

1.14.2 Parameters for Fast Re-solves

Change inputs without rebuilding the problem:

from optyx import Variable, Parameter, Problem

x = Variable("x", lb=0)
price = Parameter("price", value=100)

prob = Problem().maximize(price * x - x**2)

# First solve
sol1 = prob.solve()

# Price changes — instant re-solve
price.set(150)
sol2 = prob.solve()  # Uses cached structure

Instant feedback for real-time applications.
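The mechanism is that a parameter is a named constant holding a mutable value; the compiled problem keeps a reference to it, so updating the value requires no rebuild. A toy sketch (illustrative, not Optyx's Parameter class):

```python
# Sketch of an updatable constant (hypothetical class, not Optyx's Parameter).
class Param:
    def __init__(self, name, value):
        self.name, self.value = name, value

    def set(self, value):
        self.value = value          # callers hold a reference, so no rebuild

    def eval(self):
        return self.value

price = Param("price", 100)
objective = lambda x: price.eval() * x - x**2   # closure sees updated values
print(objective(50))   # 100*50 - 2500 = 2500
price.set(150)
print(objective(50))   # 150*50 - 2500 = 5000
```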

1.14.3 Key Features

  • VectorVariable — Create vectors of variables with VectorVariable("x", 100)
  • MatrixVariable — 2D variable arrays with row/column slicing
  • Native gradient rules — L2Norm, L1Norm, DotProduct compute gradients without loops
  • Math-like quadratic forms — w.dot(Σ @ w) for portfolio variance
  • Parameter class — Updatable constants for fast scenario analysis
  • VectorParameter — Array-valued parameters for bulk updates

Learn more about Vectors →


1.15 What’s Coming

Optyx is actively developed. On the roadmap:

  • JAX backend — JIT-compiled autodiff for complex models
  • More solvers — IPOPT integration for large-scale NLP
  • Debugging tools — Infeasibility diagnostics and constraint analysis
  • Stochastic programming — Scenario-based optimization support

Ready to try it? Get Started → | View on PyPI

© 2025 Optyx Contributors