Welcome to EcoLink

EcoLink is a GPU cloud and inference platform for developers and teams building with AI. You can:

  • Call ready-to-use inference APIs for chat, embeddings, reranking, image generation, speech-to-text, text-to-speech, and video generation — all via standard OpenAI-compatible endpoints at https://api.ecohash.com.
  • Launch your own GPU instances — containerized Linux environments with 1, 2, 4, or 8 GPUs, SSH and web-terminal access, Jupyter support, and attachable block or shared storage.
  • Deploy your own model as a managed inference endpoint — register a model from HuggingFace or a shared filesystem, launch an inference instance, and call it through the same unified API with your API key.
  • Run multi-replica GPU clusters — scale a single container image across N GPU-backed replicas behind one endpoint URL.

Everything runs on our multi-region NVIDIA GPU fleet — on-demand, no reservations required.
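To make the inference API concrete, here is a minimal sketch of building an OpenAI-style chat completion request against the EcoLink base URL. It assumes the standard `/v1/chat/completions` path implied by "OpenAI-compatible"; the model ID and API key are placeholders — pick a real model from the Platform Models catalog and create a key in the console.

```python
import json
import urllib.request

API_BASE = "https://api.ecohash.com"  # EcoLink's OpenAI-compatible base URL
API_KEY = "YOUR_API_KEY"              # created in the EcoLink console
MODEL = "your-model-id"               # placeholder; see Platform Models

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Construct a standard OpenAI-style chat completions request."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Hello!")
# To actually send it:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, existing OpenAI SDKs should also work by pointing their base URL at `https://api.ecohash.com`.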

Pick your starting point

  • Call a pre-deployed LLM / image / audio model with an API key → Your first API call
  • Launch a GPU machine you can SSH into → Launching a GPU instance
  • Deploy your own model as an inference endpoint → User inference overview
  • Understand what you'll be charged → How billing works

How you pay

One credit balance covers everything on the platform:

  • Per-token for LLM / embeddings / reranker requests
  • Per-image for image generation
  • Per-second for audio (TTS / STT) and video
  • Per-GPU-hour for GPU instances, clusters, and user inference instances
  • Per-GB-month (charged daily) for storage
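As a back-of-the-envelope sketch of how the compute and storage units above combine, assume some illustrative rates (the real per-product rates are shown in the console at launch time — these numbers are invented for the example):

```python
# Hypothetical rates for illustration only — real rates appear in the console.
GPU_HOUR_RATE = 2.50          # credits per GPU-hour (assumed)
STORAGE_GB_MONTH_RATE = 0.10  # credits per GB-month (assumed)

def instance_cost(gpus: int, hours: float) -> float:
    """Per-GPU-hour billing: every GPU in the instance accrues hourly."""
    return gpus * hours * GPU_HOUR_RATE

def storage_cost_per_day(gb: float, days_in_month: int = 30) -> float:
    """Per-GB-month storage, charged daily as a pro-rated slice."""
    return gb * STORAGE_GB_MONTH_RATE / days_in_month

# A 4-GPU instance running for 6 hours, plus 100 GB of storage for one day:
total = instance_cost(4, 6) + storage_cost_per_day(100)
```

Token, image, and second-based inference charges are metered per request in the same credit balance, so everything nets against one number.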

See How billing works for the full billing model. Each product's current rate is displayed in the console at the point you launch it.

What's covered in these docs

  • Getting Started — sign up, find your way around the console, create an API key, make your first API call.
  • Platform Models — catalog of pre-deployed models you can call immediately.
  • GPU Compute — launching and managing your own GPU instances and clusters.
  • User Inference — registering a model and exposing it as an OpenAI-compatible endpoint.
  • Billing & Account — credits, pricing, transactions, team invites, notifications.
  • API Reference — every endpoint with request/response examples.
  • Help — FAQ and common issues.

Get in touch

Join the #ecolink-support Slack channel for questions, bug reports, or feature requests.