# Codex Integration
This guide shows how to configure Codex CLI to use model capabilities through the unified R9S API.
## Prerequisites

Before you start, make sure you have:
- **Codex CLI installed.** Follow the official Codex CLI installation instructions to install Codex locally.
- **An R9S API key.** Sign in to the R9S platform, create an API key, and copy it. Store this key securely; Codex will read it from an environment variable.
## Configure Codex CLI

Edit your Codex configuration file:
```shell
vim ~/.codex/config.toml
```

Add the following configuration:
```toml
model_provider = "r9s"
model_reasoning_effort = "high"
model = "gpt-5.3-codex"
web_search = "disabled"

[model_providers.r9s]
name = "R9S"
base_url = "https://api.r9s.ai/v1"
env_key = "R9S_API_KEY"
```

Configuration notes:
- `model_provider` must match the provider name under `[model_providers.r9s]`.
- `env_key` is the name of the environment variable that stores your API key. Do not put the API key itself in `config.toml`.
- To switch models, update the `model` value in `config.toml`.
## Set the API Key Environment Variable

For the current terminal session, set:
```shell
export R9S_API_KEY="YOUR_R9S_API_KEY"
```

Replace `YOUR_R9S_API_KEY` with your actual R9S API key.
For long-term use, add the export command to your shell configuration file, such as ~/.zshrc, ~/.bashrc, or ~/.config/fish/config.fish.
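A missing or empty `R9S_API_KEY` is the most common setup failure, and Codex's own error may not make the cause obvious. Here is a small, hypothetical helper (not part of Codex) you could run in any script or shell to fail fast with a clear message:

```python
import os


def require_api_key(var_name: str = "R9S_API_KEY") -> str:
    """Return the API key from the environment, or raise a clear error."""
    key = os.environ.get(var_name, "").strip()
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; add 'export {var_name}=...' "
            "to your shell configuration file."
        )
    return key
```

For example, `require_api_key()` returns the key when the variable is exported, and raises `RuntimeError` with a setup hint otherwise.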
## Start Codex

Open your project directory and start Codex:
```shell
cd /path/to/your/project
codex
```

If the configuration is correct, Codex will send model requests through the R9S API.
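If Codex fails to connect, it can help to check the endpoint independently. The sketch below builds (but does not send) a request against the configured `base_url`, assuming the R9S API is OpenAI-compatible and exposes a `/chat/completions` route; that path, and the helper itself, are assumptions for illustration, not documented R9S behavior:

```python
import json
import os
import urllib.request

# base_url and model from config.toml in the guide above.
BASE_URL = "https://api.r9s.ai/v1"


def build_request(prompt: str, model: str = "gpt-5.3-codex") -> urllib.request.Request:
    """Construct a POST request to the (assumed) chat completions endpoint."""
    key = os.environ.get("R9S_API_KEY", "YOUR_R9S_API_KEY")
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_request("Say hello")
# urllib.request.urlopen(req) would actually send it; here we only inspect it.
print(req.full_url, req.get_method())
```

Sending the request (with a valid key exported) should confirm whether the problem is the endpoint, the key, or the Codex configuration itself.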