Posted on 2026-2-27 17:09:40
Hi, admin! V4 runs fine, but V3 fails. The error messages are below; any guidance would be appreciated. Thanks!
Messages from program startup until the interface opened (some earlier lines have already scrolled out of view):
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 850.00 MiB. GPU 0 has a total capacity of 6.00 GiB of which 0 bytes is free. Of the allocated memory 5.33 GiB is allocated by PyTorch, and 8.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management
2026-02-27 16:56:49.905 | WARNING | acestep.llm_inference:initialize:375 - vllm initialization failed, falling back to PyTorch backend
Warning: 5Hz LM initialization failed: ❌ Error initializing 5Hz LM: CUDA out of memory. Tried to allocate 850.00 MiB. GPU 0 has a total capacity of 6.00 GiB of which 0 bytes is free. Of the allocated memory 5.33 GiB is allocated by PyTorch, and 8.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management
Traceback:
Traceback (most recent call last):
File "C:\AI\ACE-Step-V3\acestep\llm_inference.py", line 216, in _load_pytorch_model
self.llm = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\transformers\models\auto\auto_factory.py", line 604, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\transformers\modeling_utils.py", line 277, in _wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\transformers\modeling_utils.py", line 4971, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\transformers\models\qwen3\modeling_qwen3.py", line 436, in __init__
self.model = Qwen3Model(config)
^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\transformers\models\qwen3\modeling_qwen3.py", line 342, in __init__
self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\torch\nn\modules\sparse.py", line 167, in __init__
torch.empty((num_embeddings, embedding_dim), **factory_kwargs),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\torch\utils\_device.py", line 104, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 850.00 MiB. GPU 0 has a total capacity of 6.00 GiB of which 0 bytes is free. Of the allocated memory 5.33 GiB is allocated by PyTorch, and 8.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management
Detected system language: zh
Service initialization completed!
Creating Gradio interface...
Enabling queue for multi-user support...
Launching server on 0.0.0.0:7860...
* Running on local URL:
* To create a public link, set `share=True` in `launch()`.
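The OOM message above suggests one mitigation itself: enabling expandable segments in the CUDA caching allocator to reduce fragmentation. A minimal sketch of trying it (assuming the launcher is a Python script; the variable must be set before torch initializes CUDA, so it goes at the very top of the entry script, or in the Windows shell via `set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` before launching):

```python
import os

# Must run before any `import torch` that touches CUDA; setdefault keeps
# a value already exported in the shell from being overridden.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")
```

Note this only helps when "reserved but unallocated" memory is large; here it is 8.51 MiB, so the card is simply full and the setting may not be enough on its own.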
Messages after selecting "Simple" in the interface, entering a song description, and clicking "Generate Music" (some earlier lines are no longer visible):
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 850.00 MiB. GPU 0 has a total capacity of 6.00 GiB of which 0 bytes is free. Of the allocated memory 5.33 GiB is allocated by PyTorch, and 8.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management
Detected system language: zh
Service initialization completed!
Creating Gradio interface...
Enabling queue for multi-user support...
Launching server on 0.0.0.0:7860...
* Running on local URL:
* To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\gradio\queueing.py", line 766, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\gradio\route_utils.py", line 355, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\gradio\blocks.py", line 2152, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\gradio\blocks.py", line 1641, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\gradio\utils.py", line 861, in async_iteration
return await anext(iterator)
^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\gradio\utils.py", line 852, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\anyio\to_thread.py", line 63, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\anyio\_backends\_asyncio.py", line 2502, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\anyio\_backends\_asyncio.py", line 986, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\gradio\utils.py", line 835, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\deepface\Lib\site-packages\gradio\utils.py", line 1019, in gen_wrapper
response = next(iterator)
^^^^^^^^^^^^^^
File "C:\AI\ACE-Step-V3\acestep\gradio_ui\events\__init__.py", line 671, in generation_wrapper
raise gr.Error(f"Failed to create sample: {result.status_message}")
gradio.exceptions.Error: 'Failed to create sample: 5Hz LM not initialized. Please initialize it first.'
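For context on why the allocation fails on a 6 GiB card: the traceback dies inside `nn.Embedding`, whose weight table scales as vocab_size × hidden_size × bytes per parameter. A rough sizing sketch (the numbers below are hypothetical placeholders, not read from the actual model config):

```python
def embedding_bytes(vocab_size: int, hidden_size: int, bytes_per_param: int) -> int:
    """Memory needed for an embedding weight table, in bytes."""
    return vocab_size * hidden_size * bytes_per_param

# Illustrative only: a 150k-token vocab with hidden size 2048 in fp32
# needs over 1 GiB for the embedding table alone; fp16 halves that.
fp32_gib = embedding_bytes(150_000, 2048, 4) / 2**30
fp16_gib = embedding_bytes(150_000, 2048, 2) / 2**30
```

This is why loading the 5Hz LM in half precision (if V3 exposes such an option) or freeing VRAM used by the other models before initialization are the usual directions to investigate.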