# FalconFS
[Build Status](https://github.com/falcon-infra/falconfs/actions/workflows/build.yml)
[License: MulanPSL-2.0](LICENSE)
FalconFS is a high-performance distributed file system (DFS) optimized for AI workloads. It addresses the following challenges:
1. **Massive small files**: Its high-performance distributed metadata engine dramatically improves I/O throughput when handling massive numbers of small files (e.g., images), eliminating storage bottlenecks in AI data preprocessing and model training (illustrated in the sketch below).
2. **High throughput requirement**: With tiered storage (i.e., DRAM, SSD, and elastic object store), FalconFS aggregates near-compute DRAM and SSDs to deliver TB/s-scale throughput for AI workloads (e.g., model training, data preprocessing, and KV cache offloading).
3. **Large scale**: FalconFS can scale to thousands of NPUs through its scale-out metadata engine and scaled-up single-node metadata performance.
Through these advantages, FalconFS delivers an ideal storage solution for modern AI workloads and has been running in the production environment of Huawei's autonomous driving system with 10,000 NPUs.
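To make the small-file point concrete, the sketch below issues the kind of create/write/stat-heavy POSIX workload that stresses a DFS metadata engine when a dataset consists of millions of tiny objects. The mount point `/mnt/falconfs`, the directory name, the file count, and the file size are illustrative assumptions, not paths or parameters shipped with FalconFS.

```c
/*
 * Illustrative small-file workload: many tiny create/write/stat operations.
 * The mount point, directory, file count, and file size are assumptions for
 * illustration only; they are not part of the FalconFS distribution.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define NUM_FILES 100000   /* "massive small files": many tiny objects */
#define FILE_SIZE 4096     /* e.g., a small image or feature record    */

int main(void)
{
    static char buf[FILE_SIZE];
    memset(buf, 0xAB, sizeof(buf));

    /* Assumed mount point of the DFS under test. */
    if (mkdir("/mnt/falconfs/smallfile_test", 0755) != 0 && errno != EEXIST) {
        perror("mkdir");
        return EXIT_FAILURE;
    }

    for (int i = 0; i < NUM_FILES; i++) {
        char path[256];
        snprintf(path, sizeof(path),
                 "/mnt/falconfs/smallfile_test/img_%07d.bin", i);

        /* Each iteration is dominated by metadata work: create, write, stat. */
        int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return EXIT_FAILURE;
        }
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
            perror("write");
            close(fd);
            return EXIT_FAILURE;
        }
        close(fd);

        struct stat st;
        if (stat(path, &st) != 0) {
            perror("stat");
            return EXIT_FAILURE;
        }
    }
    return EXIT_SUCCESS;
}
```

Because each iteration touches only a few kilobytes of data, the throughput of such a workload is bounded by metadata operations rather than raw bandwidth, which is why the distributed metadata engine matters.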
## Documents
- [FalconFS Design](./docs/design.md)
- [FalconFS Cluster Test Setup Guide](./docs/setup.md)
## Architecture
*(Architecture diagram: see [FalconFS Design](./docs/design.md).)*
## Performance
**Test Environment Configuration:**
- **CPU:** 2 x Intel Xeon 3.00GHz, 12 cores
- **Memory:** 16 x DDR4 2933 MHz 16GB
- **Storage:** 2 x NVMe SSD
- **Network:** 2 x 100GbE
- **OS:** Ubuntu 20.04 Server 64-bit
> **ℹ️ Note**
> This experiment uses an optimized Linux fuse module. The relevant code will be open-sourced in the near future.
We conduct the experiments in a cluster of 13 dual-socket machines, whose configuration is shown above. To better simulate large-scale deployment in data centers, we use the following setup:
- First, to expand the test scale, we abstract each machine into two nodes, with each node bound to one socket, one SSD, and one NIC, scaling the testbed up to 26 nodes.
- Second, to simulate the resource ratio of real deployments, we limit each server node to 4 cores (see the affinity sketch after this list) so that we can:
  - generate sufficient load to stress the servers with only a few client nodes, and
  - correctly reproduce the 4:1 ratio between CPU cores and NVMe SSDs in typical real deployments.
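As an aside on the 4-core limit, the sketch below shows one way a server process can be confined to a handful of cores on a single socket using standard Linux CPU affinity. The core list (cores 0-3, assumed to sit on socket 0) and the wrapper approach are assumptions for illustration; tools such as `taskset` or `numactl` achieve the same effect.

```c
/*
 * Illustrative wrapper that confines a server process to 4 cores before
 * exec'ing it. Core IDs 0-3 (assumed to be on socket 0) and the wrapper
 * approach are assumptions; taskset or numactl can be used instead.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <server-binary> [args...]\n", argv[0]);
        return EXIT_FAILURE;
    }

    /* Build an affinity mask containing only cores 0-3. */
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 4; cpu++) {
        CPU_SET(cpu, &set);
    }

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    /* The affinity mask is inherited across exec, so the server and all of
     * its threads stay on the 4 selected cores. */
    execvp(argv[1], &argv[1]);
    perror("execvp");
    return EXIT_FAILURE;
}
```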
In the experiments below, we run 4 metadata nodes and 12 data nodes for each DFS instance and saturate them with 10 client nodes. None of the DFSs enable metadata or data replication.
**Compared Systems:**
- CephFS 12.2.13.
- JuiceFS 1.2.1, with TiKV 1.16.1 as the metadata engine and data store.
- Lustre 2.15.6.