maya2024
Posted on 2025-4-8 08:09:17
Boss, take a look at what's going on here. It starts fine, but this is what happens when I run it.
无言以对
Posted on 2025-4-8 08:26:03
maya2024 posted on 2025-4-8 08:09
Boss, take a look at what's going on here. It starts fine, but this is what happens when I run it.
See the usage tutorial. By default only the 1.3B text-to-video model is downloaded; for the other generation modes you have to download the corresponding models manually first, then run.
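For reference, a manual download can be sketched with huggingface_hub's `snapshot_download` (the repo id, mirror endpoint, and `models\Wan-AI\...` folder layout are taken from the logs later in this thread; the helper names here are hypothetical):

```python
# Hypothetical helpers for fetching a Wan2.1 model into the folder layout
# the app expects (models/<org>/<repo>); repo ids appear in the logs below.
from pathlib import Path

def model_dir(repo_id: str, root: str = "models") -> str:
    """Local directory for a repo, e.g. models/Wan-AI/Wan2.1-I2V-14B-720P."""
    return str(Path(root) / repo_id)

def fetch(repo_id: str, endpoint: str = "https://hf-mirror.com") -> str:
    # Imported lazily so model_dir() works even without huggingface_hub installed.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=repo_id,
                             local_dir=model_dir(repo_id),
                             endpoint=endpoint)

if __name__ == "__main__":
    fetch("Wan-AI/Wan2.1-I2V-14B-720P")  # 720P image-to-video weights
```

Run it once per model you need before launching the app, so the files land where the loader looks for them.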
jerryleee1
Posted on 2025-4-8 19:17:30
Returning existing local_dir `models\Wan-AI\Wan2.1-I2V-14B-720P` as remote repo cannot be accessed in `snapshot_download` (429 Client Error: Too Many Requests for url: https://hf-mirror.com/api/models/Wan-AI/Wan2.1-I2V-14B-720P/revision/main).
无言以对
Posted on 2025-4-8 19:29:21
jerryleee1 posted on 2025-4-8 19:17
Returning existing local_dir `models\Wan-AI\Wan2.1-I2V-14B-720P` as remote repo cannot be accessed i ...
Try the download again at a different time of day. Judging by the 429 (Too Many Requests), the server is rate-limiting access, most likely because the files are so large.
ken7121
Posted on 2025-4-9 12:03:58
@jasonlee1838480 The download finished but it still doesn't work. After I copied the files from the difuserr folder into the launch directory, the 1.3B text-to-video suddenly started working. Video-to-video still doesn't work, and none of the other 480P ones run; all error.
无言以对
Posted on 2025-4-9 12:45:39
ken7121 posted on 2025-4-9 12:03
@jasonlee1838480 The download finished but it still doesn't work. After I copied the files from the difuserr folder into the launch directory, the 1.3B text-to-video suddenly star ...
The hf-mirror mirror has been throttling traffic lately; IPs get rate-limited on large file downloads.
If you can get around the firewall and reach huggingface.co directly, open the file i2v-480.py, delete the code `endpoint='https://hf-mirror.com',`, and then run the model download again.
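An alternative to editing the file each time (a sketch, not the package's own code): huggingface_hub also honors the `HF_ENDPOINT` environment variable, so the endpoint can be made optional instead of hard-coded. The `download_kwargs` helper below is hypothetical:

```python
import os

def download_kwargs(repo_id: str) -> dict:
    """Build snapshot_download kwargs; with no endpoint, huggingface.co is used."""
    kw = {"repo_id": repo_id, "local_dir": f"models/{repo_id}"}
    mirror = os.environ.get("HF_ENDPOINT")  # e.g. https://hf-mirror.com
    if mirror:
        kw["endpoint"] = mirror  # only route through the mirror when asked to
    return kw

# usage: snapshot_download(**download_kwargs("Wan-AI/Wan2.1-I2V-14B-480P"))
```

Set `HF_ENDPOINT=https://hf-mirror.com` when you need the mirror, and leave it unset when you can reach huggingface.co directly.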
jerryleee1
Posted on 2025-4-9 16:15:12
This post was last edited by jerryleee1 on 2025-4-9 17:33
All the models are downloaded, but it errors out at runtime; the web page shows "connection time out".
* Running on local URL: http://127.0.0.1:7860
* Integrated package by: https://deepface.cc
To create a public link, set `share=True` in `launch()`.
WAN 2.1 14B Image-to-Video 720P
Checking if pipeline needs clearing...
Pipeline config not changed or pipeline is None. No clearing needed.
Loading model: 14B_image_720p with torch dtype: torch.bfloat16 and num_persistent_param_in_dit: 3000000000
Loading models from: models\Wan-AI\Wan2.1-I2V-14B-720P\models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth
model_name: wan_video_image_encoder model_class: WanImageEncoder
The following models are loaded: ['wan_video_image_encoder'].
Loading models from: ['models\\Wan-AI\\Wan2.1-I2V-14B-720P\\diffusion_pytorch_model-00001-of-00007.safetensors', 'models\\Wan-AI\\Wan2.1-I2V-14B-720P\\diffusion_pytorch_model-00002-of-00007.safetensors', 'models\\Wan-AI\\Wan2.1-I2V-14B-720P\\diffusion_pytorch_model-00003-of-00007.safetensors', 'models\\Wan-AI\\Wan2.1-I2V-14B-720P\\diffusion_pytorch_model-00004-of-00007.safetensors', 'models\\Wan-AI\\Wan2.1-I2V-14B-720P\\diffusion_pytorch_model-00005-of-00007.safetensors', 'models\\Wan-AI\\Wan2.1-I2V-14B-720P\\diffusion_pytorch_model-00006-of-00007.safetensors', 'models\\Wan-AI\\Wan2.1-I2V-14B-720P\\diffusion_pytorch_model-00007-of-00007.safetensors']
model_name: wan_video_dit model_class: WanModel
This model is initialized with extra kwargs: {'has_image_input': True, 'patch_size': , 'in_dim': 36, 'dim': 5120, 'ffn_dim': 13824, 'freq_dim': 256, 'text_dim': 4096, 'out_dim': 16, 'num_heads': 40, 'num_layers': 40, 'eps': 1e-06}
Press any key to continue . . .
无言以对
Posted on 2025-4-9 16:35:49
jerryleee1 posted on 2025-4-9 16:15
All the models are downloaded, but it errors out at runtime; the web page shows "connection time out".
* Running on local URL: http://1 ...
Does the 1.3B text-to-video work?
jerryleee1
Posted on 2025-4-9 17:18:12
This post was last edited by jerryleee1 on 2025-4-9 17:31
无言以对 posted on 2025-4-9 16:35
Does the 1.3B text-to-video work?
No! It errors out:
* Running on local URL: http://127.0.0.1:7860
* Integrated package by: https://deepface.cc
To create a public link, set `share=True` in `launch()`.
WAN 2.1 1.3B (Text/Video-to-Video)
Checking if pipeline needs clearing...
Pipeline config not changed or pipeline is None. No clearing needed.
Loading model: 1.3B with torch dtype: torch.bfloat16 and num_persistent_param_in_dit: 7000000000
Loading models from: models\Wan-AI\Wan2.1-T2V-1.3B\diffusion_pytorch_model.safetensors
model_name: wan_video_dit model_class: WanModel
This model is initialized with extra kwargs: {'has_image_input': False, 'patch_size': , 'in_dim': 16, 'dim': 1536, 'ffn_dim': 8960, 'freq_dim': 256, 'text_dim': 4096, 'out_dim': 16, 'num_heads': 12, 'num_layers': 30, 'eps': 1e-06}
The following models are loaded: ['wan_video_dit'].
Loading models from: models\Wan-AI\Wan2.1-T2V-1.3B\models_t5_umt5-xxl-enc-bf16.pth
model_name: wan_video_text_encoder model_class: WanTextEncoder
The following models are loaded: ['wan_video_text_encoder'].
Loading models from: models\Wan-AI\Wan2.1-T2V-1.3B\Wan2.1_VAE.pth
model_name: wan_video_vae model_class: WanVideoVAE
The following models are loaded: ['wan_video_vae'].
Using wan_video_text_encoder from models\Wan-AI\Wan2.1-T2V-1.3B\models_t5_umt5-xxl-enc-bf16.pth.
Using wan_video_dit from models\Wan-AI\Wan2.1-T2V-1.3B\diffusion_pytorch_model.safetensors.
Using wan_video_vae from models\Wan-AI\Wan2.1-T2V-1.3B\Wan2.1_VAE.pth.
No wan_video_image_encoder models available.
num_persistent_val 7000000000
Model loaded successfully.
Traceback (most recent call last):
File "E:\Wan2.1-V2\deepface\lib\site-packages\gradio\queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
File "E:\Wan2.1-V2\deepface\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "E:\Wan2.1-V2\deepface\lib\site-packages\gradio\blocks.py", line 2137, in process_api
result = await self.call_function(
File "E:\Wan2.1-V2\deepface\lib\site-packages\gradio\blocks.py", line 1663, in call_function
prediction = await anyio.to_thread.run_sync(# type: ignore
File "E:\Wan2.1-V2\deepface\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "E:\Wan2.1-V2\deepface\lib\site-packages\anyio\_backends\_asyncio.py", line 2470, in run_sync_in_worker_thread
return await future
File "E:\Wan2.1-V2\deepface\lib\site-packages\anyio\_backends\_asyncio.py", line 967, in run
result = context.run(func, *args)
File "E:\Wan2.1-V2\deepface\lib\site-packages\gradio\utils.py", line 890, in wrapper
response = f(*args, **kwargs)
File "E:\Wan2.1-V2\app.py", line 1315, in generate_videos
video_data = loaded_pipeline(
File "E:\Wan2.1-V2\deepface\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
TypeError: WanVideoPipeline.__call__() got an unexpected keyword argument 'cancel_fn'
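The traceback shows app.py passing a `cancel_fn=` keyword that this build of `WanVideoPipeline.__call__` does not declare, i.e. a version mismatch between app.py and the pipeline code. One defensive workaround, as a sketch (only the names in the traceback come from the log; the filtering helper itself is hypothetical), is to drop keywords the callee does not accept:

```python
import inspect

def call_with_supported_kwargs(fn, /, **kwargs):
    """Call fn, silently dropping keyword arguments it does not declare."""
    params = inspect.signature(fn).parameters
    takes_var_kw = any(p.kind is inspect.Parameter.VAR_KEYWORD
                       for p in params.values())
    if not takes_var_kw:
        # fn has no **kwargs catch-all, so keep only the names it declares.
        kwargs = {k: v for k, v in kwargs.items() if k in params}
    return fn(**kwargs)

# e.g. in app.py: video_data = call_with_supported_kwargs(loaded_pipeline,
#                                                         cancel_fn=None, ...)
```

The cleaner fix is updating app.py and the pipeline package together so their interfaces match.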
ken7121
Posted on 2025-4-9 17:26:21
This post was last edited by ken7121 on 2025-4-9 17:32
无言以对 posted on 2025-4-9 16:35
Does the 1.3B text-to-video work?
After I copied the files from the diffusion.... folder into the launch directory, the 1.3B text-to-video works.