# tabby
**Repository Path**: RapidAI/tabby
## Basic Information
- **Project Name**: tabby
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2023-04-21
- **Last Updated**: 2023-04-25
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# 🐾 Tabby
[License: Apache-2.0](https://opensource.org/licenses/Apache-2.0)
[Code style: black](https://github.com/psf/black)
[Docker build](https://github.com/TabbyML/tabby/actions/workflows/docker.yml)
[Docker Hub](https://hub.docker.com/r/tabbyml/tabby)

Self-hosted AI coding assistant. An open-source, on-prem alternative to GitHub Copilot.
> **Warning**
> Tabby is still in the alpha phase
## Features
* Self-contained, with no need for a DBMS or cloud service
* Web UI for visualizing and configuring models and MLOps.
* OpenAPI interface, easy to integrate with existing infrastructure (e.g. a Cloud IDE).
* Consumer-level GPU support (FP16 weight loading with various optimizations).
## Demo
## Get started: Server
### Docker
**NOTE**: Tabby requires a [Pascal or newer](https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/) NVIDIA GPU.
Before running Tabby, make sure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) is installed.
We suggest using NVIDIA drivers compatible with CUDA 11.8 or higher.
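If you want to verify that the driver and container toolkit are wired up correctly before starting Tabby, a quick smoke test (the CUDA image tag below is just an example) is to run `nvidia-smi` inside a throwaway container:
```bash
# If this prints your GPU table, Docker can see the GPU through the
# NVIDIA Container Toolkit. The image tag is only an example.
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```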
```bash
# Create the data dir and make uid 1000 its owner (Tabby runs as uid 1000 inside the container)
mkdir -p data/hf_cache && chown -R 1000 data

docker run \
  --gpus all \
  -it --rm \
  -v "/$(pwd)/data:/data" \
  -v "/$(pwd)/data/hf_cache:/home/app/.cache/huggingface" \
  -p 5000:5000 \
  -e MODEL_NAME=TabbyML/J-350M \
  -e MODEL_BACKEND=triton \
  --name=tabby \
  tabbyml/tabby
```
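On first start the container downloads model weights into the Hugging Face cache mounted above, so it can take a while before the server answers. A simple, purely illustrative way to wait for the port to open:
```bash
# Poll port 5000 until curl gets an HTTP response. This only confirms the
# server is listening; it does not inspect the response itself.
until curl -s -o /dev/null http://localhost:5000; do
  echo "waiting for tabby..."
  sleep 5
done
echo "tabby is up"
```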
You can then query the server using the `/v1/completions` endpoint:
```bash
curl -X POST http://localhost:5000/v1/completions -H 'Content-Type: application/json' --data '{
  "prompt": "def binarySearch(arr, left, right, x):\n    mid = (left +"
}'
```
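Real prompts usually span several lines and contain quotes, which are awkward to escape in a JSON string by hand. If you have `jq` installed (an assumption, not a Tabby requirement), you can let it build the request body from a local file such as the hypothetical `prompt.py` below:
```bash
# jq escapes newlines and quotes in the file for us, then curl reads the
# resulting JSON body from stdin. prompt.py is a placeholder file name.
jq -n --arg prompt "$(cat prompt.py)" '{prompt: $prompt}' \
  | curl -X POST http://localhost:5000/v1/completions \
      -H 'Content-Type: application/json' \
      --data @-
```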
We also provide an interactive playground in the admin panel at [localhost:5000/_admin](http://localhost:5000/_admin).
### SkyPilot
See [deployment/skypilot/README.md](./deployment/skypilot/README.md)
## Getting Started: Client
We offer multiple ways to connect to the Tabby server, including the OpenAPI interface and editor extensions.
### API
Tabby exposes a FastAPI server at [localhost:5000](http://localhost:5000), which includes OpenAPI documentation of the HTTP API. The same API documentation is also hosted at https://tabbyml.github.io/tabby.
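Because the server is built on FastAPI, the machine-readable spec is usually available as well. The paths below are FastAPI defaults and may differ in Tabby, so treat them as assumptions:
```bash
# Fetch the OpenAPI spec (FastAPI's default path; may differ in Tabby).
curl http://localhost:5000/openapi.json
# FastAPI's default interactive docs, if enabled:
# http://localhost:5000/docs
```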
### Editor Extensions
* [VSCode Extension](./clients/vscode) – Install from the [marketplace](https://marketplace.visualstudio.com/items?itemName=TabbyML.vscode-tabby)
* [VIM Extension](./clients/vim)
## Development
Go to the `development` directory.
```bash
make dev
```
or
```bash
make dev-triton # Turn on the Triton backend (for developers with a CUDA environment)
```