# transformers-docs-zh

**Repository Path**: runbeyondmove/transformers-docs-zh

## Basic Information

- **Project Name**: transformers-docs-zh
- **Description**: Fully Chinese-language Transformers learning notes and demo examples
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-09-05
- **Last Updated**: 2025-10-29

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# transformers-docs-zh [updated daily | continuously updated]

A fully Chinese-language set of Transformers learning notes and demo examples, with Jupyter Notebook support, based mainly on the 🤗 Hugging Face documentation for Transformers.

Building on the official documentation, these tutorials modify some of the example code, add notes on problems encountered while running it together with their solutions, and give more detailed explanations of the important functions and parameters in the code (a minimal sketch of this kind of example appears after the table of contents).

Let's learn Transformers together!

# Table of Contents

- [Installing Transformers (Windows & macOS)](./docs/started/0_installation.ipynb)
- [Transformers quick tour](./docs/started/1_quick_tour.ipynb)
- Tutorials
  - [Inference with pipelines](./docs/tutorials/2_pipeline.ipynb)
  - [Loading pretrained instances with AutoClass](./docs/tutorials/3_autoclass.ipynb)
  - [Preprocessing data](./docs/tutorials/4_preprocess_data.ipynb)
  - [Fine-tuning a pretrained model](./docs/tutorials/5_fine_tune_pretrained_model.ipynb)
  - [Training with scripts](./docs/tutorials/6_train_with_script.ipynb)
  - [Distributed training with Accelerate](./docs/tutorials/7_distributed_training_with_accelerate.ipynb)
  - [Loading and training adapters with PEFT](./docs/tutorials/8_load_adapters_with_PEFT.ipynb)
  - [Sharing models](./docs/tutorials/9_share_model.ipynb)
  - [Transformers Agents quick tour](./docs/tutorials/10_agents.ipynb)
  - [Generation with LLMs](./docs/tutorials/11_generation_with_llms.ipynb)
  - [Agents and Tools: introduction and guide](./docs/tutorials/12_agents_and_tools.ipynb)
- Task guides
  - Natural language processing
    - [Text classification](./docs/guide/13_text_classification.ipynb)
    - [Token classification (entity classification)](./docs/guide/14_token_classification.ipynb)
    - [Causal language modeling (CLM)](./docs/guide/28_causal_language_modeling.ipynb)
    - [Masked language modeling (MLM)](./docs/guide/29_masked_language_modeling.ipynb)
    - [Translation](./docs/guide/30_translation.ipynb)
    - [Summarization](./docs/guide/31_summarization.ipynb)
    - [Question answering](./docs/guide/33_question_answering.ipynb)
    - [Multiple choice](./docs/guide/32_mutil_choice.ipynb)
  - Audio
    - [Audio classification](./docs/guide/34_audio_classification.ipynb)
    - [Automatic speech recognition (ASR)](./docs/guide/16_automatic_speech_recognition.ipynb)
  - Computer vision
    - [Image classification](./docs/guide/25_image_classification.ipynb)
    - [Image segmentation](./docs/guide/26_image_segmentation.ipynb.ipynb)
    - [Video classification](./docs/guide/35_video_classification.ipynb)
    - [Object detection](./docs/guide/36_object_detection.ipynb)
    - [Zero-shot object detection](./docs/guide/37_Zero-shot_object_detection.ipynb)
    - [Zero-shot image classification](./docs/guide/38_Zero-shot_image_classification.ipynb)
    - [Monocular depth estimation (single-image depth estimation)](./docs/guide/39_monocular_depth_estimation.ipynb)
    - [Image-to-image (image enhancement, image restoration, and other image-processing tasks)](./docs/guide/27_image_to_image.ipynb)
    - [Image feature extraction](./docs/guide/40_Image_Feature_Extraction.ipynb)
    - [Mask generation](./docs/guide/41_Mask_Generation.ipynb)
    - [Keypoint detection](./docs/guide/42_Keypoint_Detection.ipynb)
    - [Knowledge distillation for computer vision](./docs/guide/43_Knowledge_Distillation_for_Computer_Vision.ipynb)
  - Multimodal
    - [Image captioning](./docs/guide/22_image_captioning.ipynb)
    - [Text-to-speech (TTS)](./docs/guide/17_text_to_speech.ipynb)
    - [Image-text-to-text (VLMs with image input)](./docs/guide/18_image_text_to_text.ipynb)
    - [Video-text-to-text (VLMs with video input)](./docs/guide/21_video_text_to_text.ipynb.ipynb)
    - [Document question answering (DQA)](./docs/guide/20_document_question_answering.ipynb)
    - [Visual question answering (VQA)](./docs/guide/19_visual_question_answering.ipynb)
  - Generation strategies
    - [Customizing text generation strategies](./docs/guide/24_text_generation_strategies.ipynb.ipynb)
    - [Best practices for generation with cache](./docs/guide/23_best_practices_for_generation_with_cache.ipynb)
  - Prompting techniques
    - [Image-text tasks with the IDEFICS large multimodal model](./docs/guide/44_Image_tasks_with_IDEFICS.ipynb)
    - [LLM prompting guide](./docs/guide/15_llm_prompt_guide.ipynb)
- Developer guides
  - [Using tokenizers from the Tokenizers library](./docs/developer_guide/45_Use_tokenizers_from_Tokenizers.ipynb)
  - [Running inference with multilingual models](./docs/developer_guide/46_Multilingual_models_for_inference.ipynb)
  - [Creating custom (model) architectures](./docs/developer_guide/47_Create_custom_architecture.ipynb)
  - [Building custom models](./docs/developer_guide/48_Building_custom_models.ipynb)
  - [Chat templates](./docs/developer_guide/49_Chat_Templates.ipynb)
  - [The Trainer class (a class in Transformers that fully implements the PyTorch training and evaluation loop)](./docs/developer_guide/50_Trainer.ipynb)
  - [Exporting models to ONNX](./docs/developer_guide/51_Export_to_ONNX.ipynb)
  - [Exporting models to TFLite](./docs/developer_guide/52_Export_to_TFLite.ipynb)
  - [Exporting to TorchScript](./docs/developer_guide/53_Export_to_TorchScript.ipynb)
  - [Benchmarks](./docs/developer_guide/54_Benchmarks.ipynb)
  - [Transformers Notebooks (a collection of example notebooks)](./docs/developer_guide/55_Transformers_Notebooks.ipynb)
  - [Transformers community and resources](./docs/developer_guide/56_Community_resources.ipynb)
  - [Troubleshooting Transformers](./docs/developer_guide/57_Troubleshoot.ipynb)
  - [Loading GGUF files in Transformers](./docs/developer_guide/58_Interoperability_with_GGUF_files.ipynb)
  - [Loading Tiktoken files in Transformers](./docs/developer_guide/59_Interoperability_with_TikToken_files.ipynb)
  - [Modular Transformers](./docs/developer_guide/60_Modular_transformers.ipynb)
  - [Model hacking (modifying existing Transformers models to meet specific needs)](./docs/developer_guide/61_Model_Hacking.ipynb)
- Quantization methods
  - [Quantization overview](./docs/quantization/62_getting_started.ipynb)
  - [Quantizing models with bitsandbytes (8-bit and 4-bit quantization)](./docs/quantization/63_bitsandbytes.ipynb)
  - [GPTQ: post-training quantization for generative pretrained transformers](./docs/quantization/64_GPTQ.ipynb)
  - [AWQ: activation-aware weight quantization](./docs/quantization/65_AWQ.ipynb)
  - [AQLM: additive quantization of language models](./docs/quantization/66_AQLM.ipynb)
  - [Optimum-quanto: a versatile PyTorch quantization toolkit](./docs/quantization/67_Optimum-quanto.ipynb)
  - [EETQ: efficient tensor quantization](./docs/quantization/68_EETQ.ipynb)
  - [HQQ: half-quadratic quantization](./docs/quantization/69_HQQ.ipynb)
  - [FBGEMM FP8 (quantizing models to 8-bit weights and 8-bit activations)](./docs/quantization/70_FP8.ipynb)
  - [Optimum](./docs/quantization/71_Optimum.ipynb)
  - [TorchAO: an architecture optimization library for PyTorch](./docs/quantization/72_TorchAO.ipynb)
  - [BitNet: a new neural-network architecture](./docs/quantization/73_BitNet.ipynb)
  - [Compressed tensors](./docs/quantization/74_Compressed_Tensors.ipynb)
  - [Contributing a new quantization method](./docs/quantization/75_Contribute_new_quantization_method.ipynb)
- Performance and scalability
  - [Performance overview](./docs/optimization/76_performance.ipynb)
  - [LLM inference optimization](./docs/optimization/77_llm_optims.ipynb)
  - [Instantiating big models (how to load large models)](./docs/optimization/91_big_models.ipynb)
  - [Debugging (resolving problems that may come up during training)](./docs/optimization/92_debugging.ipynb)
  - [XLA integration for TensorFlow models](./docs/optimization/93_tf_xla.ipynb)
  - [Optimizing inference speed with `torch.compile()`](./docs/optimization/94_perf_torch_compile.ipynb)
  - Efficient training techniques
    - [Methods and tools for efficient training on a single GPU](./docs/optimization/78_perf_train_gpu_one.ipynb)
    - [Methods and tools for efficient training on multiple GPUs](./docs/optimization/79_perf_train_gpu_many.ipynb)
    - [FSDP: fully sharded data parallelism](./docs/optimization/80_fsdp.ipynb)
    - [DeepSpeed: a library for efficient distributed training of PyTorch models](./docs/optimization/81_deepSpeed.ipynb)
    - [Efficient training on a single CPU](./docs/optimization/82_perf_train_cpu.ipynb)
    - [Efficient training on multiple CPUs](./docs/optimization/83_perf_train_cpu_many.ipynb)
    - [Training on TPUs with TensorFlow](./docs/optimization/84_perf_train_tpu_tf.ipynb)
    - [Training on Apple silicon with PyTorch](./docs/optimization/85_perf_train_special.ipynb)
    - [Training with custom hardware](./docs/optimization/86_perf_hardware.ipynb)
    - [Hyperparameter search with the Trainer API](./docs/optimization/87_hpo_train.ipynb)
  - Optimizing inference (optimizing generation)
    - [CPU inference](./docs/optimization/88_perf_infer_cpu.ipynb)
    - [Multi-GPU inference](./docs/optimization/89_perf_infer_gpu_multi.ipynb)
    - [GPU inference](./docs/optimization/90_perf_infer_gpu_one.ipynb)
- Conceptual guides
  - [Philosophy](./docs/conceptual/95_philosophy.ipynb)
  - [Glossary](./docs/conceptual/96_glossary.ipynb)
  - [What the Transformers library can do](./docs/conceptual/97_task_summary.ipynb)
  - [How the Transformers library solves these tasks](./docs/conceptual/98_tasks_explained.ipynb)
  - [The Transformer model family](./docs/conceptual/99_model_summary.ipynb)
  - [Tokenizers: an overview of text tokenization](./docs/conceptual/100_tokenizer_summary.ipynb)
  - [Attention mechanisms](./docs/conceptual/101_attention.ipynb)
  - [Padding and truncation](./docs/conceptual/102_pad_truncation.ipynb)
  - [BERTology (research built on BERT)](./docs/conceptual/103_bertology.ipynb)
  - [Perplexity](./docs/conceptual/104_perplexity.ipynb)
  - [Using pipelines as a web server to serve inference](./docs/conceptual/105_pipeline_webserver.ipynb)
  - [Anatomy of model training](./docs/conceptual/106_model_memory_anatomy.ipynb)
  - [Effective techniques for optimizing LLM deployment](./docs/conceptual/107_llm_tutorial_optimization.ipynb)
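To give a quick taste of the style of example these notebooks walk through, here is a minimal sketch built on the `pipeline` API covered in the quick tour and pipeline tutorials. The checkpoint name is an illustrative assumption, not one taken from the notebooks, which use their own models and add far more detailed explanations of each function and parameter.

```python
# Minimal pipeline sketch (illustrative only; see the notebooks for the full,
# annotated versions with problem/solution notes).
from transformers import pipeline

# Build a sentiment-analysis pipeline. The checkpoint below is an assumed example
# and is downloaded from the Hugging Face Hub on first use.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Run inference on a couple of example sentences.
results = classifier([
    "Transformers makes it easy to run state-of-the-art models.",
    "Long download times are not my favourite part of the workflow.",
])

# Each result is a dict with a predicted label and its confidence score.
for item in results:
    print(item["label"], round(item["score"], 4))
```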