

Portals works immediately without any setup — Pollinations is enabled by default and requires no API key. When you’re ready to use a different model or provider, open Settings → AI Configuration and choose from the supported providers below. Your API keys are stored only on your device and are never sent to any Portals server.
If you don’t have any API keys yet, start with Pollinations (built-in, free) or OpenRouter Free to access a rotating selection of no-cost models.

Supported providers

Pollinations

Pollinations is the default provider in Portals. It runs a free inference service and requires no sign-up or API key. Portal Agent uses it automatically the first time you open the app.

Default model: selected automatically by Pollinations

How to use it: No configuration needed. If you previously switched to another provider and want to return to Pollinations, open Settings → AI Configuration, select Pollinations, and save.
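For the curious, the service behind this provider also exposes a simple public text endpoint. A minimal sketch of building a request URL for it (the endpoint layout is an assumption about Pollinations' public API, not something Portals asks you to call directly):

```python
from urllib.parse import quote

def pollinations_url(prompt: str) -> str:
    # Assumed layout of the public Pollinations text endpoint:
    # https://text.pollinations.ai/<url-encoded prompt>
    return "https://text.pollinations.ai/" + quote(prompt)
```

Portals handles all of this for you; the sketch is only to show that no key or account is involved anywhere in the request.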
OpenAI

Use GPT-4o and other OpenAI models with your own API key.

Default model: gpt-4o

Get an API key:

1. Go to platform.openai.com and sign in or create an account.
2. Create an API key. Navigate to API keys in the left sidebar and click Create new secret key. Copy the key; you won't be able to view it again.
3. Enter it in Portals. Open Settings → AI Configuration, select OpenAI, paste your key into the API Key field, choose a model, and click Save.

You can optionally set an Organization ID and a custom Base URL if you are using a self-hosted or Azure OpenAI deployment.
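If you ever need to debug what such a configuration produces, an OpenAI-style call is a POST to the chat completions endpoint under your Base URL, with the key as a Bearer token and the optional Organization ID as a header. A minimal sketch (the helper name and defaults are illustrative, not Portals internals):

```python
import json

def build_openai_request(api_key, model="gpt-4o",
                         base_url="https://api.openai.com/v1",
                         organization=None):
    """Assemble (url, headers, body) for an OpenAI chat completions POST."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    if organization:  # the optional Organization ID travels as a header
        headers["OpenAI-Organization"] = organization
    body = {"model": model,
            "messages": [{"role": "user", "content": "Hello"}]}
    return f"{base_url}/chat/completions", headers, json.dumps(body)
```

Swapping `base_url` for a self-hosted deployment changes only the URL; the request shape stays the same.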
Anthropic

Use Claude models, including the claude-3-5-sonnet series, with your Anthropic API key.

Default model: claude-3-5-sonnet-20241022

Get an API key:

1. Go to console.anthropic.com and sign in or create an account.
2. Create an API key. Navigate to API Keys and click Create Key. Copy the key.
3. Enter it in Portals. Open Settings → AI Configuration, select Anthropic, paste your key, choose a model, and click Save.
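For debugging purposes, note that Anthropic's Messages API differs from the OpenAI shape: the key is sent in an x-api-key header rather than a Bearer token, an anthropic-version header is required, and so is max_tokens. A sketch (helper name illustrative):

```python
import json

def build_anthropic_request(api_key, model="claude-3-5-sonnet-20241022"):
    """Assemble (url, headers, body) for an Anthropic Messages API POST."""
    headers = {
        "x-api-key": api_key,               # not a Bearer token
        "anthropic-version": "2023-06-01",  # required version header
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": 1024,  # required by the Messages API
        "messages": [{"role": "user", "content": "Hello"}],
    }
    return "https://api.anthropic.com/v1/messages", headers, json.dumps(body)
```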
Google Gemini

Use Gemini models via the Google AI Studio or Vertex AI API.

Default model: gemini-1.5-pro

Get an API key:

1. Go to aistudio.google.com and sign in with your Google account.
2. Create an API key. Click Get API key and follow the prompts. Copy the generated key.
3. Enter it in Portals. Open Settings → AI Configuration, select Google Gemini, paste your key, and click Save.

You can optionally provide a Project ID for Vertex AI deployments.
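The Google AI Studio API also has its own shape: the model name is embedded in the URL path, and the key can ride along as a query parameter rather than a header. A sketch of the URL construction (helper name illustrative):

```python
def gemini_generate_url(api_key: str, model: str = "gemini-1.5-pro") -> str:
    """Build the generateContent URL for the Google AI Studio (Gemini) API.

    Unlike OpenAI-style APIs, the model is part of the path and the key
    is passed as a query parameter.
    """
    return (
        "https://generativelanguage.googleapis.com/v1beta/models/"
        f"{model}:generateContent?key={api_key}"
    )
```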
Groq

Groq provides fast, low-latency inference for open-weight models.

Default model: set during configuration

Get an API key:

1. Go to console.groq.com and sign in or create an account.
2. Create an API key. Navigate to API Keys and generate a new key. Copy it.
3. Enter it in Portals. Open Settings → AI Configuration, select Groq, paste your key, enter a model ID (for example llama3-8b-8192), and click Save.
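Groq exposes an OpenAI-compatible endpoint, which is why only a model ID needs to be supplied: the request is the standard chat completions shape with a different base URL. A sketch (helper name illustrative):

```python
import json

def build_groq_request(api_key, model="llama3-8b-8192"):
    """Groq speaks the OpenAI chat completions protocol; only the
    base URL differs from an OpenAI request."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model,
            "messages": [{"role": "user", "content": "Hello"}]}
    return ("https://api.groq.com/openai/v1/chat/completions",
            headers, json.dumps(body))
```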
Ollama

Ollama runs models entirely on your machine. No API key is required, and no data leaves your device.

Default model: llama3.2
Default base URL: http://localhost:11434

Set up Ollama:

1. Install Ollama. Download and install Ollama from ollama.com, then start the Ollama service.
2. Pull a model. In your terminal, run ollama pull llama3.2 (or any other supported model).
3. Configure Portals. Open Settings → AI Configuration, select Ollama, confirm the base URL is http://localhost:11434, enter the model name, and click Save.
Portals must be able to reach your Ollama server over HTTP. If you’re running Portals in a browser on a remote machine, make sure the Ollama endpoint is accessible from that origin.
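One way to verify that reachability requirement is to query Ollama's model-listing endpoint from the machine where Portals runs. A sketch, assuming the standard Ollama REST API (GET /api/tags returns the locally pulled models):

```python
import json

def ollama_tags_url(base_url="http://localhost:11434"):
    """Endpoint that lists locally pulled models (Ollama's /api/tags)."""
    return base_url.rstrip("/") + "/api/tags"

def parse_model_names(tags_json: str):
    """Extract model names from an /api/tags response body."""
    data = json.loads(tags_json)
    return [m["name"] for m in data.get("models", [])]
```

If an HTTP GET against that URL fails from where Portals runs, Portals will hit the same failure; fix the network path (or Ollama's bind address) first.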
OpenRouter

OpenRouter routes requests to hundreds of models from multiple providers using a single API key.

Default model: anthropic/claude-3.5-sonnet

Get an API key:

1. Go to openrouter.ai and sign in or create an account.
2. Create an API key. Navigate to Keys in your account settings and generate a new key.
3. Enter it in Portals. Open Settings → AI Configuration, select OpenRouter, paste your key, enter a model ID (for example anthropic/claude-3.5-sonnet), and click Save.

You can optionally provide a Site URL and Site Name that OpenRouter will include in request attribution.
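If you are curious how those optional fields are used: OpenRouter speaks the OpenAI chat completions protocol, and the Site URL and Site Name are sent as HTTP-Referer and X-Title headers for attribution. A sketch (helper name illustrative):

```python
import json

def build_openrouter_request(api_key, model="anthropic/claude-3.5-sonnet",
                             site_url=None, site_name=None):
    """OpenRouter uses the OpenAI chat completions shape; the optional
    Site URL and Site Name become attribution headers."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    if site_url:
        headers["HTTP-Referer"] = site_url
    if site_name:
        headers["X-Title"] = site_name
    body = {"model": model,
            "messages": [{"role": "user", "content": "Hello"}]}
    return ("https://openrouter.ai/api/v1/chat/completions",
            headers, json.dumps(body))
```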
OpenRouter Free

OpenRouter Free gives you access to a rotating selection of models at no cost. It requires an OpenRouter API key but uses only the free-tier models that OpenRouter makes available.

How it works: Portals fetches the current list of available free models from OpenRouter and lets you select one. The available models change over time as OpenRouter updates its free tier.

1. Get an OpenRouter API key. Follow the same steps as the OpenRouter provider above to create an account and generate a key at openrouter.ai.
2. Select OpenRouter Free in Portals. Open Settings → AI Configuration, select OpenRouter Free, paste your key, and click Fetch free models.
3. Choose a model. Select from the list of available free models and click Save.
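The fetch-and-filter step can be approximated against OpenRouter's public model list: free-tier models are the ones whose prompt and completion prices are zero. A sketch that filters a GET /api/v1/models response body (this mirrors, but is not, Portals' actual implementation):

```python
import json

def filter_free_models(models_json: str):
    """Keep only models priced at zero for both prompt and completion.

    Assumes OpenRouter's documented response shape:
    {"data": [{"id": ..., "pricing": {"prompt": "...", "completion": "..."}}]}
    """
    data = json.loads(models_json)["data"]
    return [
        m["id"] for m in data
        if float(m["pricing"]["prompt"]) == 0.0
        and float(m["pricing"]["completion"]) == 0.0
    ]
```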

OpenAI-compatible providers

If you run a self-hosted inference server that exposes an OpenAI-compatible API, such as LM Studio, vLLM, or a custom deployment, you can connect it using the OpenAI Compatible provider type.

Required fields:
  • Base URL — the root URL of your API, for example https://api.example.com/v1
  • API Key — your authentication token (required by most servers, even local ones)
  • Model — the model identifier to pass in requests

You can also provide a Display Name to identify this provider in the UI.

Open Settings → AI Configuration, select OpenAI Compatible, fill in the fields above, and click Save.
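Since any OpenAI-compatible server accepts the same request shape, the three required fields map directly onto a single HTTP request. A sketch using only the standard library (the helper name and example values are illustrative):

```python
import json
from urllib.request import Request

def openai_compatible_request(base_url, api_key, model, prompt="Hello"):
    """Build a ready-to-send urllib Request for any server that speaks
    the OpenAI chat completions protocol (LM Studio, vLLM, etc.)."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Note that the Base URL is used as-is as the prefix, which is why it should include any version segment your server expects (such as the trailing /v1 in the example above).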