Connecting ChatGPT and LLMs to MATLAB: Hands-On Guide with OpenAI, Azure, and Local Ollama

MATLABSolutions. Dec 18 2025 · 7 min read

Large Language Models (LLMs) like ChatGPT have transformed natural language processing, code generation, and data analysis. As of 2025, MATLAB makes it easier than ever to integrate powerful LLMs directly into your workflows using the official Large Language Models (LLMs) with MATLAB add-on (available via the Add-On Explorer or on GitHub). The add-on supports OpenAI (ChatGPT), Azure OpenAI, and local models via Ollama, all without leaving the MATLAB environment.

Whether you're generating code, analyzing text, building chatbots, or running private offline models, this guide walks you through setup and hands-on examples.

Install the LLMs with MATLAB Add-On

Install the add-on from the Add-On Explorer (search for "Large Language Models (LLMs) with MATLAB") or clone it from GitHub. It provides functions such as openAIChat, azureChat, and ollamaChat for seamless integration.
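If you go the GitHub route, a minimal setup could look like this (the repository URL is the official MathWorks one; the folder name is simply what the clone creates):

% In your system shell, clone the official repository:
%   git clone https://github.com/matlab-deep-learning/llms-with-matlab.git
% Then, in MATLAB, add the folder to your path:
addpath("llms-with-matlab");
savepath;   % optional: keep the path across sessions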

Connecting to OpenAI (ChatGPT API)

Access GPT models like gpt-4o, gpt-4o-mini, or gpt-3.5-turbo.

  1. Get your OpenAI API key from platform.openai.com/api-keys.
  2. Store it securely, for example in a .env file or an environment variable (see the snippet below).
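One common pattern, which the add-on's documentation also suggests, is to keep the key in a .env file and load it at the start of a session. A minimal sketch; OPENAI_API_KEY is the variable name the add-on looks for by default:

% Contents of .env (one line): OPENAI_API_KEY=sk-...
loadenv(".env");                 % load variables from the .env file (recent MATLAB releases)
key = getenv("OPENAI_API_KEY");  % retrieve the key if you need it explicitly

With OPENAI_API_KEY set in the environment, openAIChat can pick it up automatically, so you avoid hard-coding the key in scripts.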

Example Code:

% Initialize a ChatGPT connection with a system prompt
chat = openAIChat("You are a helpful MATLAB assistant.", ...
    Model="gpt-4o-mini");

% Generate a response to a user prompt
response = generate(chat, "Write a MATLAB function to plot a sine wave");
disp(response);

This can generate, explain, or debug MATLAB code instantly.
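For longer answers you may prefer to stream tokens as they arrive rather than wait for the full reply. A small sketch, assuming the add-on's StreamFun option (check the documentation for your installed version):

% Print tokens as they stream in (StreamFun is assumed here)
chat = openAIChat("You are a helpful MATLAB assistant.", ...
    Model="gpt-4o-mini", ...
    StreamFun=@(token) fprintf("%s", token));
generate(chat, "Explain vectorization in MATLAB in two sentences.");
fprintf("\n");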

Connecting to Azure OpenAI

Ideal for enterprise users needing secure, compliant deployments.

  1. Deploy a model (e.g., GPT-4o) in your Azure OpenAI resource.
  2. Note your endpoint, deployment name, and API key (the snippet below shows one way to store them).
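Hard-coding credentials works for a quick test, but reading them from environment variables keeps scripts shareable. A sketch, with variable names that are illustrative rather than a fixed convention:

% Read Azure OpenAI credentials from environment variables
% (these variable names are examples, not mandated)
endpoint   = getenv("AZURE_OPENAI_ENDPOINT");
deployment = getenv("AZURE_OPENAI_DEPLOYMENT");
key        = getenv("AZURE_OPENAI_API_KEY");
chat = azureChat(Endpoint=endpoint, DeploymentID=deployment, APIKey=key);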

Example Code:

% Connect to a deployed Azure OpenAI model
chat = azureChat( ...
    Endpoint="https://your-resource.openai.azure.com/", ...
    DeploymentID="gpt-4o-deployment", ...
    APIKey="your-azure-key");

% dataSummary stands in for a description string from your own analysis
dataSummary = "mean = 0.02, dominant frequency = 60 Hz, SNR = 12 dB";
response = generate(chat, "Summarize this signal processing data: " + dataSummary);
disp(response);

Use this route for sensitive data that must stay within Azure's governance and compliance boundaries.

Running Local LLMs with Ollama

Run models locally for privacy and no per-token API costs.

  1. Download and install Ollama from ollama.com.
  2. Pull a model: open a terminal and run ollama pull llama3 (or mistral, phi3, etc.).
  3. The Ollama server starts automatically in the background, listening on http://localhost:11434 by default (see the check below).
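Before connecting from MATLAB, you can verify the server is reachable by querying Ollama's REST API directly; a quick check, assuming the default port 11434:

% Query Ollama's REST API to list locally installed models
info = webread("http://localhost:11434/api/tags");
disp(info);   % shows the models available locally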

Example Code:

% Connect to a local Ollama model (the model name is the first argument)
chat = ollamaChat("llama3");

% Query the local model
response = generate(chat, "Explain the Fourier transform in simple terms");
disp(response);

The add-on also supports tool (function) calling, structured JSON output, and retrieval-augmented generation (RAG) workflows.
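As a taste of structured output, the sketch below asks a local model to answer in JSON and decodes the reply into a struct; it assumes the add-on's ResponseFormat option, which may differ between versions:

% Request machine-readable JSON output (ResponseFormat is assumed here)
chat = ollamaChat("llama3", ResponseFormat="json");
raw = generate(chat, "Return a JSON object with fields 'concept' and 'one_line_summary' about the FFT.");
data = jsondecode(raw);   % decode the JSON reply into a MATLAB struct
disp(data);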

Advanced Features