Posted on 2025-4-9 17:18:12
This post was last edited by jerryleee1 on 2025-4-9 17:31
Quoting 无言以对 (posted 2025-4-9 16:35):
Does text-to-video generation work with the 1.3B model?
No, it doesn't. It throws an error:
* Running on local URL: http://127.0.0.1:7860
* Integration package by: https://deepface.cc
To create a public link, set `share=True` in `launch()`.
WAN 2.1 1.3B (Text/Video-to-Video)
[CMD - DEBUG] Checking if pipeline needs clearing...
[CMD - DEBUG] Pipeline config not changed or pipeline is None. No clearing needed.
[CMD] Loading model: 1.3B with torch dtype: torch.bfloat16 and num_persistent_param_in_dit: 7000000000
Loading models from: models\Wan-AI\Wan2.1-T2V-1.3B\diffusion_pytorch_model.safetensors
model_name: wan_video_dit model_class: WanModel
This model is initialized with extra kwargs: {'has_image_input': False, 'patch_size': [1, 2, 2], 'in_dim': 16, 'dim': 1536, 'ffn_dim': 8960, 'freq_dim': 256, 'text_dim': 4096, 'out_dim': 16, 'num_heads': 12, 'num_layers': 30, 'eps': 1e-06}
The following models are loaded: ['wan_video_dit'].
Loading models from: models\Wan-AI\Wan2.1-T2V-1.3B\models_t5_umt5-xxl-enc-bf16.pth
model_name: wan_video_text_encoder model_class: WanTextEncoder
The following models are loaded: ['wan_video_text_encoder'].
Loading models from: models\Wan-AI\Wan2.1-T2V-1.3B\Wan2.1_VAE.pth
model_name: wan_video_vae model_class: WanVideoVAE
The following models are loaded: ['wan_video_vae'].
Using wan_video_text_encoder from models\Wan-AI\Wan2.1-T2V-1.3B\models_t5_umt5-xxl-enc-bf16.pth.
Using wan_video_dit from models\Wan-AI\Wan2.1-T2V-1.3B\diffusion_pytorch_model.safetensors.
Using wan_video_vae from models\Wan-AI\Wan2.1-T2V-1.3B\Wan2.1_VAE.pth.
No wan_video_image_encoder models available.
num_persistent_val 7000000000
[CMD] Model loaded successfully.
Traceback (most recent call last):
File "E:\Wan2.1-V2\deepface\lib\site-packages\gradio\queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
File "E:\Wan2.1-V2\deepface\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "E:\Wan2.1-V2\deepface\lib\site-packages\gradio\blocks.py", line 2137, in process_api
result = await self.call_function(
File "E:\Wan2.1-V2\deepface\lib\site-packages\gradio\blocks.py", line 1663, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "E:\Wan2.1-V2\deepface\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "E:\Wan2.1-V2\deepface\lib\site-packages\anyio\_backends\_asyncio.py", line 2470, in run_sync_in_worker_thread
return await future
File "E:\Wan2.1-V2\deepface\lib\site-packages\anyio\_backends\_asyncio.py", line 967, in run
result = context.run(func, *args)
File "E:\Wan2.1-V2\deepface\lib\site-packages\gradio\utils.py", line 890, in wrapper
response = f(*args, **kwargs)
File "E:\Wan2.1-V2\app.py", line 1315, in generate_videos
video_data = loaded_pipeline(
File "E:\Wan2.1-V2\deepface\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
TypeError: WanVideoPipeline.__call__() got an unexpected keyword argument 'cancel_fn'
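
The traceback means the app.py shipped with this package passes a `cancel_fn` keyword (presumably a cancel callback for the UI) that the installed version of `WanVideoPipeline.__call__()` does not declare, i.e. a version mismatch between app.py and the pipeline code. Updating the package so the two match is the proper fix. As a stopgap, here is a minimal sketch of a guard you could wrap around the call site in `generate_videos` (around app.py line 1315). Note that `call_pipeline_safely` is a hypothetical helper I am introducing for illustration, not part of the package:

import inspect

def call_pipeline_safely(pipeline, **kwargs):
    # Hypothetical helper (not part of the package): drop keyword arguments
    # that this version of the pipeline's __call__ does not declare, so a
    # newer app.py can still drive an older WanVideoPipeline.
    params = inspect.signature(pipeline.__call__).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        # The pipeline already accepts arbitrary **kwargs; pass everything through.
        return pipeline(**kwargs)
    supported = {k: v for k, v in kwargs.items() if k in params}
    dropped = sorted(set(kwargs) - set(supported))
    if dropped:
        print(f"[CMD - DEBUG] Dropping unsupported pipeline kwargs: {dropped}")
    return pipeline(**supported)

# Usage: replace the direct call in generate_videos, e.g.
# video_data = call_pipeline_safely(loaded_pipeline, **pipeline_kwargs)
# where pipeline_kwargs stands for whatever arguments app.py currently passes.

This only silences the unsupported argument; cancellation from the UI will not work until the pipeline version that actually accepts `cancel_fn` is installed.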