Problem API

Building and solving optimization problems
Published

February 8, 2026

1 Problem

The Problem class is the main container for optimization problems.

from optyx import Problem

prob = Problem(name=None)

1.1 Parameters

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| name | str \| None | Optional problem name | None |

2 Building Problems

2.1 Fluent API

Chain methods for concise problem definition:

from optyx import Variable, Problem

x = Variable("x", lb=0)
y = Variable("y", lb=0)

solution = (
    Problem("quadratic")
    .minimize(x**2 + y**2)
    .subject_to(x + y >= 1)
    .subject_to(x <= 5)
    .solve()
)

print(f"Objective: {solution.objective_value:.4f}")
Objective: 0.5000

2.2 Step-by-Step

Build problems incrementally:

from optyx import Variable, Problem

x = Variable("x", lb=0)
y = Variable("y", lb=0)

prob = Problem("step-by-step")
prob.minimize(x**2 + y**2)
prob.subject_to(x + y >= 1)
prob.subject_to(x <= 5)

solution = prob.solve()
print(f"Objective: {solution.objective_value:.4f}")
Objective: 0.5000

3 Methods

3.1 .minimize(objective)

Set a minimization objective.

prob.minimize(expr)
| Parameter | Type | Description |
| --- | --- | --- |
| objective | Expression | The expression to minimize |

Returns: self (for chaining)

3.2 .maximize(objective)

Set a maximization objective.

prob.maximize(expr)
| Parameter | Type | Description |
| --- | --- | --- |
| objective | Expression | The expression to maximize |

Returns: self (for chaining)

Note

Internally, maximization is converted to minimization by negating the objective.
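Raw SciPy has no maximize entry point, so the same negation trick applies when calling it directly; a quick sketch:

```python
import numpy as np
from scipy.optimize import minimize

# Maximize f(x) = -(x - 2)^2 by minimizing its negation (x - 2)^2.
f = lambda x: -((x[0] - 2.0) ** 2)

res = minimize(lambda x: -f(x), x0=np.zeros(1), method="BFGS")
print(res.x[0])   # ≈ 2.0, the maximizer of f
print(-res.fun)   # ≈ 0.0, the maximum value of f
```

The maximizer and maximum value fall out of the minimization result unchanged, which is why the conversion is invisible to the caller.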

3.3 .subject_to(constraint)

Add a constraint to the problem.

prob.subject_to(constraint)
| Parameter | Type | Description |
| --- | --- | --- |
| constraint | Constraint | An inequality or equality constraint |

Returns: self (for chaining)

3.4 .solve(**kwargs)

Solve the optimization problem.

solution = prob.solve(method="auto", strict=False, x0=None, tol=None, maxiter=None)
| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| method | str | Solver method | "auto" |
| strict | bool | Raise an error for unsupported features | False |
| x0 | np.ndarray \| None | Initial point | Auto-generated |
| tol | float \| None | Solver tolerance | Solver default |
| maxiter | int \| None | Maximum iterations | Solver default |

Returns: Solution

Raises: ValueError if strict=True and the problem contains features the solver cannot handle (e.g., integer/binary variables with SciPy).

Automatic method selection (method="auto"):

When method="auto" (default), Optyx automatically selects the best solver based on problem structure:

| Problem Type | Selected Method |
| --- | --- |
| Linear objective and constraints | linprog (HiGHS LP solver) |
| Unconstrained, n ≤ 3 | Nelder-Mead |
| Unconstrained, n > 1000 | L-BFGS-B |
| Unconstrained, otherwise | BFGS |
| Bounds only (no general constraints) | L-BFGS-B |
| Has equality constraints | trust-constr |
| Inequality constraints only | SLSQP |
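The heuristic in the table above can be sketched as a plain function (a hypothetical illustration only; this is not the library's actual dispatch code):

```python
def select_method(n_vars, is_linear, has_bounds, has_eq, has_ineq):
    """Mirror the auto-selection table: pick a solver from problem structure."""
    if is_linear:
        return "linprog"                     # linear objective and constraints
    if not (has_bounds or has_eq or has_ineq):
        if n_vars <= 3:
            return "Nelder-Mead"             # tiny unconstrained problems
        if n_vars > 1000:
            return "L-BFGS-B"                # large-scale unconstrained
        return "BFGS"                        # other unconstrained problems
    if has_eq:
        return "trust-constr"                # equality constraints present
    if has_ineq:
        return "SLSQP"                       # inequality constraints only
    return "L-BFGS-B"                        # bounds only

print(select_method(2, False, False, False, True))  # SLSQP
```

The ordering matters: the linear check comes first, then the unconstrained cases, then the constrained ones from most to least restrictive.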

Available methods:

| Method | Bounds | Constraints | Gradient | Hessian | Description |
| --- | --- | --- | --- | --- | --- |
| "auto" | — | — | — | — | Automatic selection (default) |
| "linprog" | ✅ | ✅ (linear) | N/A | N/A | HiGHS LP solver (for linear problems) |
| "SLSQP" | ✅ | ✅ | ✅ | — | Sequential Least Squares Programming |
| "trust-constr" | ✅ | ✅ | ✅ | ✅ | Trust-region constrained optimization |
| "L-BFGS-B" | ✅ | — | ✅ | — | Limited-memory BFGS with bounds |
| "COBYLA" | — | ✅ (ineq) | — | — | Constrained Optimization BY Linear Approximation |
| "TNC" | ✅ | — | ✅ | — | Truncated Newton Conjugate-Gradient |
| "Powell" | ✅ | — | — | — | Powell's conjugate direction method |
| "Nelder-Mead" | ✅ | — | — | — | Simplex algorithm (derivative-free) |
| "CG" | — | — | ✅ | — | Conjugate gradient |
| "BFGS" | — | — | ✅ | — | Broyden-Fletcher-Goldfarb-Shanno |
| "Newton-CG" | — | — | ✅ | ✅ | Newton conjugate gradient |
| "dogleg" | — | — | ✅ | ✅ | Dog-leg trust-region |
| "trust-ncg" | — | — | ✅ | ✅ | Newton conjugate gradient trust-region |
| "trust-exact" | — | — | ✅ | ✅ | Nearly exact trust-region |
| "trust-krylov" | — | — | ✅ | ✅ | Krylov subspace trust-region |

(Capability columns follow SciPy's support for each method in scipy.optimize.)
Tip: Recommended Methods
  • Linear problems: Use "auto" or "linprog" for best performance
  • With constraints: Use "SLSQP" or "trust-constr"
  • Bounds only: Use "L-BFGS-B" for large-scale problems
  • Unconstrained: Use "BFGS" or "trust-ncg" for smooth problems
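For comparison, here is the quadratic example from earlier solved directly with SciPy's SLSQP, the method the selection table would pick for it (bounds plus inequality constraints):

```python
import numpy as np
from scipy.optimize import minimize

# min x^2 + y^2  s.t.  x + y >= 1,  with x in [0, 5] and y >= 0
objective = lambda v: v[0] ** 2 + v[1] ** 2
constraints = [
    {"type": "ineq", "fun": lambda v: v[0] + v[1] - 1.0},  # x + y >= 1
]
bounds = [(0.0, 5.0), (0.0, None)]

res = minimize(objective, x0=np.array([0.0, 0.0]), method="SLSQP",
               bounds=bounds, constraints=constraints)
print(f"Objective: {res.fun:.4f}")  # Objective: 0.5000
```

The optimum sits at x = y = 0.5 on the constraint boundary, matching the fluent-API example above.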
Note: Performance
  • LP problems: ~1x overhead vs raw SciPy (near parity)
  • NLP problems: ~1.4-2.2x overhead (autodiff cost, but exact gradients)
  • Repeated solves: 2x-900x speedup due to caching

See the Benchmarks page for detailed performance analysis.
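The repeated-solve speedups come from reusing work across solves rather than rebuilding it each time; a minimal sketch of that caching pattern using functools.lru_cache (hypothetical illustration, not Optyx's internals):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def compile_objective(expr_key):
    """Pretend-expensive compilation step, keyed by the expression's identity."""
    print(f"compiling {expr_key}...")        # runs only on the first solve
    return lambda v: sum(x ** 2 for x in v)  # stand-in for generated code

f = compile_objective("x**2 + y**2")  # first call: compiles
g = compile_objective("x**2 + y**2")  # second call: cache hit, same callable
print(f is g)         # True
print(f((3.0, 4.0)))  # 25.0
```

Subsequent solves of an unchanged problem skip straight to the numerical iteration, which is where the large repeated-solve speedups come from.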


4 Strict Mode

Use strict=True to enforce that the solver can handle all problem features exactly:

# Default: warn and relax integer/binary to continuous
solution = prob.solve()  # Works, but may produce fractional values

# Strict: fail if problem can't be solved exactly
solution = prob.solve(strict=True)  # Raises ValueError for integer/binary

The two modes suit different stages of development:

  • Prototyping: Use strict=False (default) to quickly test ideas
  • Production: Use strict=True to catch unsupported configurations early
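When wrapping a solver yourself, the relax-or-raise behavior described above can be mimicked with a small validation step (the names here are illustrative, not Optyx's API):

```python
import warnings

def check_variable_domains(domains, strict=False):
    """Relax integer/binary domains to continuous, or fail if strict."""
    unsupported = [name for name, dom in domains.items() if dom != "continuous"]
    if not unsupported:
        return domains
    if strict:
        raise ValueError(f"solver cannot handle non-continuous variables: {unsupported}")
    warnings.warn(f"relaxing {unsupported} to continuous; values may be fractional")
    return {name: "continuous" for name in domains}

relaxed = check_variable_domains({"x": "integer", "y": "continuous"})
print(relaxed)  # both variables continuous after relaxation
```

The default path warns and continues (possibly returning fractional values), while strict=True fails fast before any solve is attempted.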

5 Properties

| Property | Type | Description |
| --- | --- | --- |
| .name | str \| None | Problem name |
| .objective | Expression | Objective function |
| .sense | str | "minimize" or "maximize" |
| .constraints | list[Constraint] | All constraints |
| .variables | list[Variable] | All decision variables |

5.1 Examples

from optyx import Variable, Problem

x = Variable("x", lb=0)
y = Variable("y", lb=0)

prob = (
    Problem("demo")
    .minimize(x**2 + y**2)
    .subject_to(x + y >= 1)
)

print(f"Name: {prob.name}")
print(f"Sense: {prob.sense}")
print(f"Variables: {[v.name for v in prob.variables]}")
print(f"Num constraints: {len(prob.constraints)}")
Name: demo
Sense: minimize
Variables: ['x', 'y']
Num constraints: 1

6 Complete Example

from optyx import Variable, Problem, exp

# Portfolio with 3 assets
w1 = Variable("stocks", lb=0, ub=0.6)
w2 = Variable("bonds", lb=0, ub=0.5)
w3 = Variable("cash", lb=0.1, ub=1.0)

# Expected returns
returns = 0.08*w1 + 0.04*w2 + 0.02*w3

# Risk (simplified variance)
risk = 0.04*w1**2 + 0.01*w2**2 + 0.001*w3**2

# Build and solve
solution = (
    Problem("portfolio")
    .minimize(risk)
    .subject_to(w1 + w2 + w3 >= 0.99)  # Fully invested
    .subject_to(w1 + w2 + w3 <= 1.01)
    .subject_to(returns >= 0.05)        # Min 5% return
    .solve()
)

print("Optimal Allocation:")
print(f"  Stocks: {solution['stocks']:.1%}")
print(f"  Bonds:  {solution['bonds']:.1%}")
print(f"  Cash:   {solution['cash']:.1%}")
print(f"Risk: {solution.objective_value:.6f}")
Optimal Allocation:
  Stocks: 34.1%
  Bonds:  46.8%
  Cash:   20.1%
Risk: 0.006873
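As a cross-check, the same portfolio solved directly with SciPy's SLSQP should reproduce the risk value above to within solver tolerance:

```python
import numpy as np
from scipy.optimize import minimize

# Same portfolio: minimize risk subject to budget and return constraints.
risk = lambda w: 0.04 * w[0] ** 2 + 0.01 * w[1] ** 2 + 0.001 * w[2] ** 2
rets = lambda w: 0.08 * w[0] + 0.04 * w[1] + 0.02 * w[2]

res = minimize(
    risk,
    x0=np.array([0.3, 0.3, 0.4]),
    method="SLSQP",
    bounds=[(0.0, 0.6), (0.0, 0.5), (0.1, 1.0)],  # per-asset limits
    constraints=[
        {"type": "ineq", "fun": lambda w: np.sum(w) - 0.99},  # fully invested (lower)
        {"type": "ineq", "fun": lambda w: 1.01 - np.sum(w)},  # fully invested (upper)
        {"type": "ineq", "fun": lambda w: rets(w) - 0.05},    # min 5% return
    ],
)
print(f"Risk: {res.fun:.6f}")  # ≈ 0.006873
```

At the optimum both the budget's upper bound and the 5% return constraint are active, which pins down the allocation of roughly 34% stocks, 47% bonds, and 20% cash.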