# Deployment Steps

1. Clone this repository locally.

2. Create and activate a virtual environment:

``` shell
conda create -n takway python=3.9
conda activate takway
```

3. `cd` into the repository directory and install the dependencies:

``` shell
cd ~/TakwayDisplayPlatform
pip install -r requirements.txt
```

4. Create a `vits_model` folder under `./utils/`

Download the VITS model from [this link](https://huggingface.co/spaces/zomehwh/vits-uma-genshin-honkai/tree/main/model) and place it in that folder; only `config.json` and `G_953000.pth` are needed.
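A minimal sketch of this step, assuming the commands are run from the repository root; the `huggingface-cli` line is only a suggestion (kept commented out) and assumes the `huggingface_hub` CLI is installed:

``` shell
# Create the target folder for the VITS model (assumes repo root as cwd)
mkdir -p ./utils/vits_model

# Then download config.json and G_953000.pth from the link above into it,
# e.g. with huggingface-cli (hedged; note the source is a Hub Space):
# huggingface-cli download zomehwh/vits-uma-genshin-honkai \
#     --repo-type space --include "model/*" --local-dir ./vits_dl
```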

5. Create `bert` and `slm` folders under `./utils/bert_vits2/`

Download the models from [link 1](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large), [link 2](https://huggingface.co/microsoft/deberta-v3-large), and [link 3](https://huggingface.co/ku-nlp/deberta-v2-large-japanese-char-wwm), and place them in the `bert` directory.

Download the model from [link 4](https://huggingface.co/microsoft/wavlm-base-plus) and place it in the `slm` directory.

Create a `bert_models.json` file in the `./utils/bert_vits2/bert` directory with the following content:

``` json
{
    "deberta-v2-large-japanese-char-wwm": {
        "repo_id": "ku-nlp/deberta-v2-large-japanese-char-wwm",
        "files": ["pytorch_model.bin"]
    },
    "chinese-roberta-wwm-ext-large": {
        "repo_id": "hfl/chinese-roberta-wwm-ext-large",
        "files": ["pytorch_model.bin"]
    },
    "deberta-v3-large": {
        "repo_id": "microsoft/deberta-v3-large",
        "files": ["spm.model", "pytorch_model.bin"]
    }
}
```
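The folder layout for this step can be sketched as below. The per-model subdirectory names are an assumption inferred from the keys in `bert_models.json`, and the commented download line assumes the `huggingface_hub` CLI is available:

``` shell
# Per-model subfolders under bert/ (names assumed to mirror bert_models.json keys)
mkdir -p ./utils/bert_vits2/bert/chinese-roberta-wwm-ext-large
mkdir -p ./utils/bert_vits2/bert/deberta-v3-large
mkdir -p ./utils/bert_vits2/bert/deberta-v2-large-japanese-char-wwm
mkdir -p ./utils/bert_vits2/slm/wavlm-base-plus

# Example download (hedged; repeat for each repo listed above):
# huggingface-cli download hfl/chinese-roberta-wwm-ext-large pytorch_model.bin \
#     --local-dir ./utils/bert_vits2/bert/chinese-roberta-wwm-ext-large
```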

6. Create a `models` folder under `./utils/bert_vits2/data/mix/` and place the pretrained model `250000_G.pth` in it.

7. Return to the repository root and start the program with `python main.py`.
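Before launching, a quick sanity check over the paths from the steps above can catch a missing file early. This is a hedged sketch: it only covers files explicitly named in this README, and says nothing about the model weights inside the `bert`/`slm` subfolders:

``` python
from pathlib import Path

# Files and folders named in deployment steps 4-6 of this README.
EXPECTED = [
    "utils/vits_model/config.json",
    "utils/vits_model/G_953000.pth",
    "utils/bert_vits2/bert/bert_models.json",
    "utils/bert_vits2/data/mix/models/250000_G.pth",
]

def check_layout(root="."):
    """Return the subset of EXPECTED paths missing under root."""
    base = Path(root)
    return [p for p in EXPECTED if not (base / p).exists()]

if __name__ == "__main__":
    missing = check_layout()
    if missing:
        print("Missing before launch:", *missing, sep="\n  ")
    else:
        print("Layout looks complete; start with `python main.py`.")
```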