
Accelerate Decision Optimization Using Open Source NVIDIA cuOpt

Businesses make thousands of decisions every day—what to produce, where to ship, how to allocate resources. At scale, optimizing these decisions becomes a computational challenge. Linear programming (LP), mixed-integer programming (MIP), and vehicle routing problems (VRP) provide structure, but solving them quickly is the bottleneck.

NVIDIA cuOpt brings GPU acceleration to decision optimization, delivering massive speedups for real-world LP, MIP, and VRP workloads. Now available as open source under the Apache 2.0 license, cuOpt makes it easier than ever to adopt, adapt, and scale optimization in your workflows—locally or in the cloud.

For developers, the best part is near-zero modeling language changes. You can drop cuOpt into existing models built with PuLP and AMPL, with minimal refactoring. It’s fast, flexible, and ready for experimentation or production.

Want to see cuOpt in action at scale? Check out Supercharging Optimization: How Artelys Powered by FICO and NVIDIA Scale Up Energy Modeling, which showcases cuOpt’s role in achieving up to 20x speedups in large-scale unit commitment problems.

This post explains how cuOpt solves LP and MIP with near-zero changes in modeling languages like PuLP and AMPL. You’ll learn how to:

  • Get started using open source cuOpt optimization in minutes with Python, REST API, or CLI, locally or in the cloud
  • Solve VRPs with cuOpt GPU acceleration

A real-world use case: Coffee logistics at scale

Imagine a global coffee chain. Each store needs thousands of bags of beans per year. Beans are sourced, roasted, packaged, and shipped—each stage constrained by facility capacity and dynamic demand. If a roastery suddenly goes offline, the supply chain must instantly reroute orders and reassign suppliers.

Add delivery? Now you’re routing drivers across shifting orders and time windows, while respecting labor rules and shift limits.

These are real-world LP, MIP, and VRP problems—and they are computationally hard to solve fast. cuOpt is built for this kind of complexity.

Quick start: Solve your first problem in minutes

Whether you’re optimizing supply chains, scheduling production, or routing deliveries, cuOpt offers multiple ways to get started quickly.

cuOpt Server 

This option is best for LP, MIP, and VRP through REST. Spin up a REST API server that supports all problem types.

Install through pip:

pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.5.* cuopt-sh==25.5.*

Run with Docker (includes REST plus client):

docker run --gpus all -it --rm -p 8000:8000 -e CUOPT_SERVER_PORT=8000 nvidia/cuopt:latest-cuda12.8-py312 python3 -m cuopt_server.cuopt_service

Python API

This option is best for VRP. Use the cuOpt native Python API for programmatic control and integration:

pip install --extra-index-url=https://pypi.nvidia.com cuopt-cu12==25.5.*

Command-line interface

This option is best for benchmarking LP and MIP. If you have models in MPS format, use the command-line interface (CLI) to benchmark and automate.

Run a benchmark model:

wget https://plato.asu.edu/ftp/lptestset/ex10.mps.bz2
bunzip2 ex10.mps.bz2
./cuopt_cli ex10.mps

This example solves an LP with over 69,000 constraints and 17,000 variables in under 0.3 seconds on an NVIDIA H100 Tensor Core GPU.

Try cuOpt in the cloud

No local GPU? You can run cuOpt from your browser or in a persistent cloud environment.

| Feature | Google Colab | Deploy Launchable |
| --- | --- | --- |
| Set up | None | 1-click launch |
| GPU access | Yes (limited, free) | Yes (full GPU instance) |
| Persistent environment | No | Yes |
| Preloaded configuration | Manual | Automatic |
| Optimal use | Demos and quick tests | Full development workflows |
Table 1. Cloud deployment options for running cuOpt: Colab versus Launchable

Minimal modeling changes: LP and MIP in AMPL and PuLP

cuOpt integrates with modeling languages like AMPL and PuLP. Just switch the solver; no rewrite is needed.

Example 1: AMPL plus cuOpt

./ampl
var x >= 0; 
var y >= 0; 
maximize objective: 5*x + 3*y;
subject to c1: 2*x + 4*y >= 230;
subject to c2: 3*x + 2*y <= 190;
option solver cuoptmp;
solve;
display x, y;

To switch to MIP, declare the variables as integer (in AMPL: var x integer >= 0;).

Example 2: PuLP plus cuOpt

import pulp
model = pulp.LpProblem("Maximize", pulp.LpMaximize)
x = pulp.LpVariable('x', lowBound=0)
y = pulp.LpVariable('y', lowBound=0)
model += 5*x + 3*y, "obj"
model += 2*x + 4*y >= 230
model += 3*x + 2*y <= 190
model.solve(pulp.CUOPT())
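As a quick sanity check on the model above (and independent of any solver), a two-variable LP is small enough to solve by enumerating the vertices of its feasible region with the standard library alone:

```python
from itertools import combinations

def feasible(x, y, eps=1e-9):
    """Check a point against the LP's constraints (with a small tolerance)."""
    return (x >= -eps and y >= -eps
            and 2 * x + 4 * y >= 230 - eps   # c1
            and 3 * x + 2 * y <= 190 + eps)  # c2

# Boundary lines a*x + b*y = c: the two constraints plus the axes x = 0, y = 0
lines = [(2, 4, 230), (3, 2, 190), (1, 0, 0), (0, 1, 0)]

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue  # parallel boundaries share no vertex
    # Cramer's rule for the 2x2 intersection
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        value = 5 * x + 3 * y
        if best is None or value > best[0]:
            best = (value, x, y)

print(best)  # optimum of the LP: 303.75 at x = 37.5, y = 38.75
```

A solver returning the same objective value confirms the setup is correct.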

To switch to MIP:

x = pulp.LpVariable('x', lowBound=0, cat="Integer")
y = pulp.LpVariable('y', lowBound=0, cat="Integer")
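Because this model is tiny, the integer optimum can be confirmed by brute-force enumeration with the standard library, no solver required:

```python
# Brute-force the integer program: c2 (3x + 2y <= 190) bounds x <= 63, y <= 95
best = max(
    (5 * x + 3 * y, x, y)
    for x in range(64)
    for y in range(96)
    if 2 * x + 4 * y >= 230 and 3 * x + 2 * y <= 190
)
print(best)  # (303, 36, 41): the continuous relaxation reaches 303.75, so 303 is optimal
```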

Solving VRP with cuOpt client

cuOpt solves VRPs using structured JSON inputs through Python or REST:

Example workflow:

from cuopt_sh_client import CuOptServiceSelfHostClient
import json

# Load the problem payload (hypothetical file name; see NVIDIA/cuopt-examples for full schemas)
with open("vrp_problem.json") as f:
    json_data = json.load(f)

cuopt_service_client = CuOptServiceSelfHostClient(ip="localhost", port=5000)
optimized_routes = cuopt_service_client.get_optimized_routes(json_data)
print(json.dumps(optimized_routes, indent=4))
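The `json_data` passed above is a plain JSON document describing costs, tasks, and the fleet. The sketch below shows one plausible shape; the field names are assumptions based on the self-hosted server schema, so check NVIDIA/cuopt-examples for the authoritative format:

```python
import json

json_data = {
    # 5x5 travel-cost matrix for one vehicle type (keyed "0"); location 0 is the depot
    "cost_matrix_data": {"data": {"0": [
        [0, 4, 6, 3, 5],
        [4, 0, 2, 5, 7],
        [6, 2, 0, 4, 3],
        [3, 5, 4, 0, 2],
        [5, 7, 3, 2, 0],
    ]}},
    # Four delivery tasks at locations 1-4, each with unit demand
    "task_data": {"task_locations": [1, 2, 3, 4], "demand": [[1, 1, 1, 1]]},
    # Two vehicles, each starting and ending at the depot, with capacity 3
    "fleet_data": {"vehicle_locations": [[0, 0], [0, 0]], "capacities": [[3, 3]]},
}

payload = json.dumps(json_data)  # this serialized form is what travels over REST
```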

For more, visit NVIDIA/cuopt-examples on GitHub.

Sample output:

"num_vehicles": 2,
"solution_cost": -435.0,
"vehicle_data": {
  "Car-A":  {"task_id": [...],"arrival_stamp": [...]},
  "Bike-B": {"task_id": [...],"arrival_stamp": [...]}
},
"total_solve_time": 10.7

Ideal for logistics or dispatch systems, cuOpt returns optimized routes, cost, and task-level assignments.
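Downstream systems typically consume that response programmatically. A minimal sketch of pulling per-vehicle assignments out of a response with the shape shown above (the field names come from the sample; the task IDs and timestamps here are made up):

```python
import json

# Hypothetical response matching the sample fields above
response = json.loads("""
{
  "num_vehicles": 2,
  "solution_cost": -435.0,
  "vehicle_data": {
    "Car-A":  {"task_id": ["t1", "t3"], "arrival_stamp": [10.0, 25.0]},
    "Bike-B": {"task_id": ["t2"],       "arrival_stamp": [12.5]}
  },
  "total_solve_time": 10.7
}
""")

# Map each vehicle to its ordered (task, arrival time) stops
routes = {
    vehicle: list(zip(info["task_id"], info["arrival_stamp"]))
    for vehicle, info in response["vehicle_data"].items()
}
for vehicle, stops in routes.items():
    print(vehicle, stops)
```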

Get started with open source optimization 

Check out ways you can get started with NVIDIA cuOpt to bring GPU acceleration to your existing optimization stack—no vendor lock-in, no rewrite, just faster solves. cuOpt is GPU-native, developer-first, and built for scale.

NVIDIA cuOpt is also now available in the coin-or/cuopt GitHub repo, a hub for open-source operations research tools. This follows the recent announcement of the collaboration between COIN-OR and NVIDIA, further strengthening the ecosystem for optimization developers. As part of COIN-OR, cuOpt can be more easily discovered, extended, and used alongside other open source solvers.

Join the open source community and help shape the future of real-time, intelligent decision optimization, with full control and flexibility.
