Posted on 2025-4-8 11:32:18

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\gradio\queueing.py", line 625, in process_events
    response = await route_utils.call_process_api(
  File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\gradio\blocks.py", line 2042, in process_api
    result = await self.call_function(
  File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\gradio\blocks.py", line 1589, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\anyio\_backends\_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
  File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\anyio\_backends\_asyncio.py", line 962, in run
    result = context.run(func, *args)
  File "E:\AI\software\LatentSync\LatentSync-V6\deepface\lib\site-packages\gradio\utils.py", line 883, in wrapper
    response = f(*args, **kwargs)
  File "<frozen app>", line 96, in process_video
gradio.exceptions.Error: 'Error during processing: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 11.00 GiB of which 7.00 MiB is free. Of the allocated memory 7.68 GiB is allocated by PyTorch, and 2.14 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/ ... vironment-variables)'
An error occurred during processing. Please help!
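For what it's worth, the error text itself points at a fix: the GPU has 2.14 GiB reserved but unallocated, so fragmentation is the likely culprit, and PyTorch suggests `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`. A minimal sketch of applying that hint before PyTorch initializes CUDA (the `torch` import is guarded so the snippet also runs on machines without it; this is a general workaround, not a LatentSync-specific setting):

```python
import os

# Must be set *before* torch initializes CUDA, or the allocator
# configuration is ignored for the current process.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

try:
    import torch
    if torch.cuda.is_available():
        # Release cached-but-unallocated blocks back to the driver,
        # which can also help between successive generations.
        torch.cuda.empty_cache()
except ImportError:
    # torch not installed in this environment; the env var alone is
    # still picked up when the real launch script imports torch.
    pass
```

Alternatively, set the variable in the shell (`set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` on Windows) before starting the app. If the OOM persists, the 11 GiB card may simply be too small for the chosen resolution or batch size, and reducing those is the other lever.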