Adding AI to VS Code using Continue and Generative APIs
AI-powered coding is transforming software development by automating repetitive tasks, generating code, improving code quality, and even detecting and fixing bugs. By integrating AI-driven tools, developers can significantly boost productivity and streamline their workflows. This guide provides step-by-step instructions for integrating AI-powered code models into VS Code using Continue and Scaleway's Generative APIs.
Before you start
To complete the actions presented below, you must have:
- A Scaleway account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- A valid API key for API authentication
- Installed Visual Studio Code on your local machine
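Before going further, you can optionally check that your API key is accepted by Scaleway's Generative APIs. The command below is a minimal sketch: it assumes the OpenAI-compatible endpoint https://api.scaleway.ai/v1/ used later in this guide exposes the standard /models route, and the SCW_SECRET_KEY variable name is only used for this example.
export SCW_SECRET_KEY="<your-scaleway-secret-key>"  # placeholder: replace with your actual key
curl https://api.scaleway.ai/v1/models \
  -H "Authorization: Bearer $SCW_SECRET_KEY"
A successful response should list the available model names (such as devstral-small-2505 or qwen2.5-coder-32b-instruct) referenced in the configuration steps below.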
Install Continue in VS Code
You can install Continue directly from the Visual Studio Marketplace or via the command line:
code --install-extension continue.continue
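To confirm the extension was installed, you can list installed extensions from the same CLI; this is a quick optional check rather than a required step:
code --list-extensions | grep continue.continue  # prints continue.continue if the extension is installed
On Windows, pipe the output into findstr instead of grep.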
Configure Continue to use Scaleway’s Generative APIs
Configure Continue through the graphical interface
To link Continue with Scaleway's Generative APIs, you can use built-in menus from Continue in VS Code.
- Click Continue in the menu on the left.
- In the prompt section, click the Select model dropdown, then click Add Chat model.
- Select Scaleway as the provider.
- Select the model you want to use (we recommend Qwen 2.5 Coder 32b to get started with chat and autocompletion only).
- Enter your Scaleway secret key.
These actions automatically edit your config.yaml file. To edit it manually, see Configure Continue through a configuration file.
Configure Continue through a configuration file
To link Continue with Scaleway's Generative APIs, you can configure a settings file:
- Open your config.yaml settings file:
  - If you have already configured a Local Assistant, click Local Assistant, then click the wheel icon to open your existing config.yaml file.
  - Otherwise, create a config.yaml file inside your .continue directory.
- Add the following configuration to enable Scaleway's Generative APIs. This configuration uses three different models, one for each task:
  - devstral-small-2505 for agentic workflows through a chat interface
  - qwen2.5-coder-32b-instruct for autocompletion when editing a file
  - bge-multilingual-gemma2 for embedding and retrieving code context

name: Continue Config
version: 0.0.1
models:
  - name: Devstral - Scaleway
    provider: openai
    model: devstral-small-2505
    apiBase: https://api.scaleway.ai/v1/
    apiKey: ###SCW_SECRET_KEY###
    defaultCompletionOptions:
      maxTokens: 8000
      contextLength: 50000
    roles:
      - chat
      - apply
      - embed
      - edit
    capabilities:
      - tool_use
  - name: Autocomplete - Scaleway
    provider: openai
    model: qwen2.5-coder-32b-instruct
    apiBase: https://api.scaleway.ai/v1/
    apiKey: ###SCW_SECRET_KEY###
    defaultCompletionOptions:
      maxTokens: 8000
      contextLength: 50000
    roles:
      - autocomplete
  - name: Embeddings Model - Scaleway
    provider: openai
    model: bge-multilingual-gemma2
    apiBase: https://api.scaleway.ai/v1/
    apiKey: ###SCW_SECRET_KEY###
    roles:
      - embed
    embedOptions:
      maxChunkSize: 256
      maxBatchSize: 32
context:
  - provider: problems
  - provider: tree
  - provider: url
  - provider: search
  - provider: folder
  - provider: codebase
  - provider: web
    params:
      n: 3
  - provider: open
    params:
      onlyPinned: true
  - provider: docs
  - provider: terminal
  - provider: code
  - provider: diff
  - provider: currentFile
- Save the file at the correct location:
  - Linux/macOS: ~/.continue/config.yaml
  - Windows: %USERPROFILE%\.continue\config.yaml
- In Local Assistant, click on Reload config or restart VS Code.
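Before reloading, you can optionally verify that the model name and the secret key in your config.yaml are accepted by the API. The request below is a sketch that assumes the apiBase configured above exposes the standard OpenAI-compatible chat completions route, with $SCW_SECRET_KEY holding the key you placed in the file:
curl https://api.scaleway.ai/v1/chat/completions \
  -H "Authorization: Bearer $SCW_SECRET_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "devstral-small-2505",
    "messages": [{"role": "user", "content": "Reply with OK if you can read this."}],
    "max_tokens": 20
  }'
A chat completion in the response indicates that the endpoint, model name, and key match what Continue will use.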
Alternatively, a config.json file can be used with the following format. Note that this format is deprecated, and we recommend using config.yaml instead.
{
  "models": [
    {
      "model": "devstral-small-2505",
      "title": "Devstral - Scaleway",
      "provider": "openai",
      "apiBase": "https://api.scaleway.ai/v1/",
      "apiKey": "###SCW_SECRET_KEY###"
    }
  ],
  "embeddingsProvider": {
    "model": "bge-multilingual-gemma2",
    "provider": "openai",
    "apiBase": "https://api.scaleway.ai/v1/",
    "apiKey": "###SCW_SECRET_KEY###"
  },
  "tabAutocompleteModel": {
    "model": "qwen2.5-coder-32b-instruct",
    "title": "Autocomplete - Scaleway",
    "provider": "openai",
    "apiBase": "https://api.scaleway.ai/v1/",
    "apiKey": "###SCW_SECRET_KEY###"
  }
}
Activate Continue in VS Code
After configuring the API, open VS Code and activate Continue:
- Open the Command Palette (Ctrl+Shift+P on Windows/Linux, Cmd+Shift+P on macOS).
- Type "Continue" and select the appropriate command to enable AI-powered assistance.
Going further
You can add more parameters to configure your model's behavior by editing config.yaml.
For instance, you can set the following chatOptions.baseSystemMessage value to modify the system prompt (the "role":"system" and/or "role":"developer" messages sent to the LLM) and get less verbose answers:

models:
  - ...
    chatOptions:
      baseSystemMessage: "You are an expert developer. Only write concise answers."
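Similarly, generation behavior can be tuned per model through defaultCompletionOptions, which the configuration above already uses for maxTokens and contextLength. The snippet below is a sketch; the temperature option is an assumption based on Continue's completion options and should be checked against the Continue configuration reference:

models:
  - ...
    defaultCompletionOptions:
      maxTokens: 8000
      contextLength: 50000
      temperature: 0.2  # assumed option name; lower values give more deterministic completions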