<div align="center">
<picture>
  <source media="(prefers-color-scheme: dark)" srcset="https://github.com/fal-ai/flashpack/blob/main/media/flashpack-logo-white.png?raw=true">
  <source media="(prefers-color-scheme: light)" srcset="https://github.com/fal-ai/flashpack/blob/main/media/flashpack-logo-black.png?raw=true">
  <img alt="FlashPack Logo" src="https://github.com/fal-ai/flashpack/blob/main/media/flashpack-logo-black.png?raw=true">
</picture>
<h2>Disk-to-GPU tensor loading at up to 25 Gb/s without GDS</h2>
</div>

## Updates

- **2025-11-25**: Now supports **multiple data types per checkpoint** with no regressions in speed!

<div align="center">
<picture>
  <source media="(prefers-color-scheme: dark)" srcset="https://github.com/fal-ai/flashpack/blob/main/media/benchmark-white.png?raw=true">
  <source media="(prefers-color-scheme: light)" srcset="https://github.com/fal-ai/flashpack/blob/main/media/benchmark-black.png?raw=true">
  <img alt="Benchmark Results" src="https://github.com/fal-ai/flashpack/blob/main/media/benchmark-black.png?raw=true">
</picture>
<em>Run this benchmark with <code>scripts/run_benchmark.py</code></em>
</div>

<div align="center">
<picture>
  <source media="(prefers-color-scheme: dark)" srcset="https://github.com/fal-ai/flashpack/blob/main/media/load-state-dict-comparison-white.png?raw=true">
  <source media="(prefers-color-scheme: light)" srcset="https://github.com/fal-ai/flashpack/blob/main/media/load-state-dict-comparison-black.png?raw=true">
  <img alt="Load State Dict Comparison" src="https://github.com/fal-ai/flashpack/blob/main/media/load-state-dict-comparison-black.png?raw=true">
</picture>
<em>Run this benchmark with <code>tests/test_speed_comparison.py</code></em>
</div>

# Integration Guide
## Mixins
### Diffusers/Transformers

```py
# Integration classes
from flashpack.integrations.diffusers import FlashPackDiffusersModelMixin, FlashPackDiffusionPipeline
from flashpack.integrations.transformers import FlashPackTransformersModelMixin

# Base classes
from diffusers.models import MyModel, SomeOtherModel
from diffusers.pipelines import MyPipeline

# Define mixed classes
class FlashPackMyModel(MyModel, FlashPackDiffusersModelMixin):
    pass

class FlashPackMyPipeline(MyPipeline, FlashPackDiffusionPipeline):
    def __init__(
        self,
        my_model: FlashPackMyModel,
        other_model: SomeOtherModel,
    ) -> None:
        super().__init__()
        # Register components so diffusers tracks them for saving/loading
        self.register_modules(my_model=my_model, other_model=other_model)

# Load base pipeline
pipeline = FlashPackMyPipeline.from_pretrained("some/repository")

# Save flashpack pipeline
pipeline.save_pretrained_flashpack(
    "some_directory",
    push_to_hub=False,  # pass repo_id when using this
)

# Load directly from a flashpack directory or repository
pipeline = FlashPackMyPipeline.from_pretrained_flashpack("my/flashpack-repository")
```
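
The transformers mixin follows the same pattern. Below is a minimal sketch, assuming `LlamaForCausalLM` as the base class and assuming the model mixin exposes the same `save_pretrained_flashpack`/`from_pretrained_flashpack` pair as the pipeline above:

```py
from flashpack.integrations.transformers import FlashPackTransformersModelMixin
from transformers import LlamaForCausalLM  # illustrative base class; any transformers model class

# Mix the FlashPack loader into the base model class
class FlashPackLlamaForCausalLM(LlamaForCausalLM, FlashPackTransformersModelMixin):
    pass

model = FlashPackLlamaForCausalLM.from_pretrained("some/repository")
model.save_pretrained_flashpack("some_directory")  # assumed to mirror the pipeline API
model = FlashPackLlamaForCausalLM.from_pretrained_flashpack("some_directory")
```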

### Vanilla PyTorch

```py
import torch.nn as nn

from flashpack import FlashPackMixin

class MyModule(nn.Module, FlashPackMixin):
    def __init__(self, some_arg: int = 4) -> None:
        super().__init__()
        ...

module = MyModule(some_arg=4)
module.save_flashpack("model.flashpack")

# from_flashpack constructs a new instance; pass the same constructor arguments
loaded_module = MyModule.from_flashpack("model.flashpack", some_arg=4)
```

## Direct Integration

```py
from flashpack import pack_to_file, assign_from_file

flashpack_path = "/path/to/model.flashpack"
model = ...  # any torch.nn.Module instance

pack_to_file(model, flashpack_path)  # write the module's state dict to a file
assign_from_file(model, flashpack_path)  # assign the packed state dict onto the module
```
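
As a concrete round trip, here is a minimal sketch with `nn.Linear` standing in for a real model; `assign_from_file` expects a module with the same architecture as the one that was packed:

```py
import torch.nn as nn

from flashpack import pack_to_file, assign_from_file

# Pack a module's weights to disk
model = nn.Linear(1024, 1024)
pack_to_file(model, "linear.flashpack")

# Later: construct the same architecture and assign the packed weights in place
fresh = nn.Linear(1024, 1024)
assign_from_file(fresh, "linear.flashpack")
```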

# CLI Commands

FlashPack provides a command-line interface for converting, inspecting, and reverting flashpack files.

## `flashpack convert`

Convert a model to a flashpack file.

```bash
flashpack convert <path_or_repo_id> [destination_path] [options]
```

**Arguments:**
- `path_or_repo_id` - Local path or Hugging Face repository ID
- `destination_path` - (Optional) Output path for the flashpack file

**Options:**
| Option | Description |
|--------|-------------|
| `--subfolder` | Subfolder of the model (for repo IDs) |
| `--variant` | Model variant (for repo IDs) |
| `--dtype` | Target dtype for the flashpack file; when omitted, tensors keep their original dtypes |
| `--ignore-names` | Tensor names to ignore (can be specified multiple times) |
| `--ignore-prefixes` | Tensor name prefixes to ignore (can be specified multiple times) |
| `--ignore-suffixes` | Tensor name suffixes to ignore (can be specified multiple times) |
| `--use-transformers` | Load the path as a transformers model |
| `--use-diffusers` | Load the path as a diffusers model |
| `-v, --verbose` | Enable verbose output |

**Examples:**
```bash
# Convert a local model
flashpack convert ./my_model ./my_model.flashpack

# Convert from Hugging Face
flashpack convert stabilityai/stable-diffusion-xl-base-1.0 --subfolder unet --use-diffusers

# Convert with a specific dtype
flashpack convert ./my_model ./my_model.flashpack --dtype float16
```
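
The ignore flags can also be combined or repeated to skip tensors during conversion; a hypothetical invocation (the prefix and suffix values are illustrative, not from a real checkpoint):

```bash
# Skip EMA copies and all bias tensors (illustrative names)
flashpack convert ./my_model ./my_model.flashpack \
  --ignore-prefixes model_ema. \
  --ignore-suffixes .bias
```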

## `flashpack revert`

Revert a flashpack file back to safetensors or torch format.

```bash
flashpack revert <path> [destination_path] [options]
```

**Arguments:**
- `path` - Path to the flashpack file
- `destination_path` - (Optional) Output path for the reverted file

**Options:**
| Option | Description |
|--------|-------------|
| `-v, --verbose` | Enable verbose output |

**Example:**
```bash
flashpack revert ./my_model.flashpack ./my_model.safetensors
```

## `flashpack metadata`

Print the metadata of a flashpack file.

```bash
flashpack metadata <path> [options]
```

**Arguments:**
- `path` - Path to the flashpack file

**Options:**
| Option | Description |
|--------|-------------|
| `-i, --show-index` | Show the tensor index |
| `-j, --json` | Output metadata in JSON format |

**Examples:**
```bash
# View basic metadata
flashpack metadata ./my_model.flashpack

# View metadata with the tensor index
flashpack metadata ./my_model.flashpack --show-index

# Output as JSON
flashpack metadata ./my_model.flashpack --json
```