LiteLLM API

Last updated on 2025-10-14

Overview

Questions

  • What can I use the API for?

Objectives

  • Understand the API endpoints
  • Understand how requests are made

Callout

In this episode, we will be working with the LiteLLM proxy server. This infrastructure setup is specific to the University of Amsterdam. However, if your institution uses LiteLLM as well, you should be able to follow these instructions, too.

Overview


The API can be reached at https://ai-research-proxy.azurewebsites.net/

When you open the link in a browser, you will see the so-called Swagger UI, a user interface that lists all of the API’s endpoints and lets you test them, as long as you have an API key.

You will note that the list is quite long; however, most of these endpoints are relevant neither for this workshop nor for most research purposes.

Callout

For this part of the workshop, you will need a working API key.

Using Swagger UI


The Swagger UI is an ideal starting point for exploring and testing the available API endpoints. Before you can execute requests, you need to authorize yourself (button at the top right) by entering your API key.

Let’s have a look at a simple GET request to the /models endpoint:

We don’t need to pass any data; simply press Execute. The status code of our request should be 200, and the response body should contain a list of available models like this:

JSON

{
  "data": [
    {
      "id": "gpt-4",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "text-embedding-ada-002",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "gpt-4.1",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "Llama-3.3-70B-Instruct",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    }
  ]
}

This information is handy once we start sending input text, as we will need to specify the exact model to use.
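
Once this response arrives in a script rather than in the browser, picking out just the model ids is a matter of walking the data list. A minimal Python sketch, using a truncated copy of the response above as stand-in data:

PYTHON

# Truncated copy of the /models response body shown above
models_response = {
    "data": [
        {"id": "gpt-4", "object": "model", "created": 1677610602, "owned_by": "openai"},
        {"id": "Llama-3.3-70B-Instruct", "object": "model", "created": 1677610602, "owned_by": "openai"},
    ]
}

# Collect the model ids we can pass along with later requests
model_ids = [model["id"] for model in models_response["data"]]
print(model_ids)  # ['gpt-4', 'Llama-3.3-70B-Instruct']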

Using Python and R


In the previous chapter, we installed and used the requests (Python) and httr2 (R) libraries. We will now use them again to make requests to the LiteLLM API.

Since we now need to authorize ourselves with an API key, we have to send additional information to the server, so-called headers. In essence, we specify that we are going to send JSON data and that we would like to receive JSON data in return. We also send the API key with each request so that the server can authorize us.
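
Here is a minimal sketch of such a request in Python with requests, assuming the proxy accepts the key as a Bearer token in the Authorization header (the OpenAI-compatible convention); check the Authorize dialog in the Swagger UI if your setup expects a different scheme:

PYTHON

import requests

BASE_URL = "https://ai-research-proxy.azurewebsites.net"
API_KEY = "YOUR_API_KEY"  # replace with your own key

headers = {
    "Content-Type": "application/json",    # we send JSON data
    "Accept": "application/json",          # we would like JSON data back
    "Authorization": f"Bearer {API_KEY}",  # assumed: OpenAI-style Bearer token
}

response = requests.get(f"{BASE_URL}/models", headers=headers)
print(response.status_code)     # should be 200
print(response.json()["data"])  # the list of available models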