Working With Models
Choose from a vast library of cloud, private, or external AI models.
Introduction
AI models act as the computer brain that answers queries, makes use of provided tools, and ultimately fulfills the given task.
It's crucial to choose an AI provider and model whose reasoning and problem-solving capabilities best match the task while keeping costs and response times under control. TaskLift empowers you in that effort by working as the ultimate hub for all AI models in the following ways:
- One-click access to leading cloud providers – just pick and switch between OpenAI, Anthropic, Google, xAI or DeepSeek without worrying about provider-specific subscriptions or APIs
- Zero-effort hosting of private models – run your own local models such as Qwen, DeepSeek, Gemma, Llama & others with all the server heavy lifting & cost optimization covered
- Support for custom providers and models – for those power users who need anything other than the above, you can just fill in API details and keys and start rolling on TaskLift too
In all of these cases, you're on a single bill that combines the ease & stability of a subscription with the control & efficiency of a usage-based approach. Learn more on the Pricing page.
Regardless of its type, a model can be used consistently across all TaskLift features, such as:
- Chats – see the Chatting page
- Assistants – see the Working With Assistants page
- Third-party apps, agents and libraries – see the Chatting Over API page
Built-In Models
We provide unified, zero-effort access to all major cloud model providers including OpenAI, Google, xAI, Anthropic and DeepSeek. Our built-in models often provide cheaper access, more models to choose from and faster response times than provider-side subscriptions.
Built-in models are by far the simplest way to start using AI on TaskLift. Just pick one of the leading providers and use it. As long as you have an active TaskLift subscription, you can use them without any setup. You can also switch between them, which is useful for multiple reasons:
- Pick the best model for a given use case or task (use a distinct one for each chat or assistant)
- Stay on top of the fast-paced "best AI model for X" race
- Cut costs in the (also very dynamic and competitive) world of AI pricing
- Fall back to a different model when your previous pick is offline
To start using built-in models, follow the steps below:
- Navigate to the Models section.
- Optionally click the Filter button to filter the listing by Built-In type.
- Pick one of the providers, such as OpenAI, Google, xAI, Anthropic or DeepSeek.
- Find the model you like, such as GPT 4.1 Mini from OpenAI, Gemini 2.5 Pro from Google, Grok 3 from xAI or Claude Sonnet 4 from Anthropic. Take note of the pricing presented below each model.
- Click either the Chat or the Use Externally button next to it, depending on how you'd like to use it.
While on the provider page, you may want to use either the Favorite or the Hide button to increase or reduce the model's visibility when later selecting a model for a chat or assistant.
Private Models
We can host your own private models, with a wide choice of Ollama-supported models that fit the NVIDIA L40S GPU (models up to 40B parameters are usually fine). We offer very competitive prices while covering all the hosting-related heavy lifting. Private models are powered by our scalable, secure and cost-efficient hosting platform.
This type of model is almost as easy to use as the built-in ones. You simply pick which model or models should be provided and you're ready to go. From then on, whenever you use a specific private model, we spin up a GPU-powered server and bootstrap the picked model(s). A few moments later you should see a chat response.
If you make further queries to the same private model provider within the next couple of minutes, it will reply instantly; if you don't, we will stop the server to cut costs.
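The cold-start behavior described above can be smoothed over on the client side with a simple retry loop. The sketch below is illustrative and not part of TaskLift itself; `send_request` stands in for whatever callable performs your actual API request, and the timings are assumptions you'd tune to your model's boot time.

```python
import time

def wait_for_cold_start(send_request, max_wait_s=300, poll_s=10):
    """Retry a request until the private-model server finishes booting.

    `send_request` is any callable that raises an exception while the
    backend is still spinning up and returns a response once it's ready.
    """
    deadline = time.monotonic() + max_wait_s
    while True:
        try:
            return send_request()
        except Exception:
            if time.monotonic() >= deadline:
                raise  # give up: the server never came up within max_wait_s
            time.sleep(poll_s)  # server is likely still bootstrapping the model
```

Subsequent requests within the keep-alive window skip the loop entirely, since the first attempt succeeds immediately.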
By supporting private models, TaskLift gives you an alternative to built-in ones that may be more viable for you for a few reasons:
- Privacy - private models are contained locally so no cloud provider will train on your data
- Capabilities - pick more specialized or increasingly powerful general-purpose models
- Cost - we provide competitive pricing and getting billed per hour may be cheaper than per token
- Availability - a private model is fully yours, so you'll never suffer delays or service disruptions
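To make the per-hour vs. per-token trade-off concrete, here is a back-of-the-envelope comparison. The rates and volumes below are made-up illustrative assumptions, not TaskLift pricing; plug in the actual numbers from the Models and Pricing pages.

```python
def hourly_cost(hours: float, rate_per_hour: float) -> float:
    """Cost when billed for GPU server uptime."""
    return hours * rate_per_hour

def token_cost(tokens: int, rate_per_million: float) -> float:
    """Cost when billed per token processed."""
    return tokens / 1_000_000 * rate_per_million

# Illustrative numbers only: a GPU server at $2/hour vs. a cloud
# model at $5 per million tokens.
busy_hours = 4
tokens_processed = 20_000_000

print(hourly_cost(busy_hours, 2.0))       # per-hour billing total
print(token_cost(tokens_processed, 5.0))  # per-token billing total
```

At high token volumes the flat hourly rate wins; at low, bursty usage the per-token model usually comes out ahead, since you pay nothing while idle.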
Learn more about private model hosting on the Hosting Private Models And Tools page.
External Models
We aim for TaskLift to be as flexible and unconstrained as possible - that's why we let you plug in & use any external (cloud or private) model by connecting to an arbitrary OpenAI-compatible API (which covers virtually all major cloud AI providers).
For example, you may use your own OpenAI or DeepSeek subscription that's already prepaid and ready (or provided to you by your company or team). Or you may want to plug in your own Ollama service hosted on that cheap GPU-backed server you nailed during last Black Friday. That's all possible and as easy as copy-pasting a few API details.
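Under the hood, "OpenAI-compatible" simply means the provider accepts the standard chat completions request shape. The sketch below builds such a request with only the Python standard library, without sending it; the base URL, API key and model name are placeholders (a self-hosted Ollama server typically listens on port 11434, but substitute your own provider's details).

```python
import json
import urllib.request

# Placeholders -- substitute your provider's base URL, key and model.
BASE_URL = "http://localhost:11434/v1"  # e.g. a self-hosted Ollama server
API_KEY = "ollama"                      # some local servers ignore the key
MODEL = "llama3"

def build_completion_request(prompt: str) -> urllib.request.Request:
    """Build (but don't send) an OpenAI-compatible chat completion request."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_completion_request("Hello!")
print(req.full_url)  # http://localhost:11434/v1/chat/completions
# To actually send it: urllib.request.urlopen(req)
```

These are exactly the details (base URL, key, model name) you copy-paste into TaskLift when registering an external provider.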
This lets you include your own AI providers in TaskLift chats and assistants, or use them with TaskLift tools, embeds or APIs. The sky is the limit (we certainly won't be the ones limiting your view).
In case of any questions, issues or concerns related to TaskLift, don't hesitate to contact us.