Posted on 2024-9-10 00:13:47
I ran into this problem too: after installing CUDA and cuDNN, it stopped running altogether.
Error message:
Running on local URL: http://127.0.0.1:7860
2024-09-10 00:09:43.1035172 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\FaceFusion_2.6.0\python310\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:743 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
when using [('CUDAExecutionProvider', {'device_id': '0', 'cudnn_conv_algo_search': 'DEFAULT'})]
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
2024-09-10 00:09:43.2558304 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\FaceFusion_2.6.0\python310\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
Traceback (most recent call last):
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 463, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:743 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\gradio\queueing.py", line 407, in call_prediction
output = await route_utils.call_process_api(
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\gradio\blocks.py", line 1550, in process_api
result = await self.call_function(
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\gradio\blocks.py", line 1185, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\gradio\utils.py", line 661, in wrapper
response = f(*args, **kwargs)
File "C:\FaceFusion_2.6.0\facefusion\uis\components\preview.py", line 178, in update_preview_image
preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, temp_vision_frame)
File "C:\FaceFusion_2.6.0\facefusion\uis\components\preview.py", line 200, in process_preview_frame
target_vision_frame = frame_processor_module.process_frame(
File "C:\FaceFusion_2.6.0\facefusion\processors\frame\modules\face_enhancer.py", line 261, in process_frame
target_vision_frame = enhance_face(target_face, target_vision_frame)
File "C:\FaceFusion_2.6.0\facefusion\processors\frame\modules\face_enhancer.py", line 204, in enhance_face
crop_vision_frame = apply_enhance(crop_vision_frame)
File "C:\FaceFusion_2.6.0\facefusion\processors\frame\modules\face_enhancer.py", line 213, in apply_enhance
frame_processor = get_frame_processor()
File "C:\FaceFusion_2.6.0\facefusion\processors\frame\modules\face_enhancer.py", line 107, in get_frame_processor
FRAME_PROCESSOR = onnxruntime.InferenceSession(model_path, providers = apply_execution_provider_options(facefusion.globals.execution_device_id, facefusion.globals.execution_providers))
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 430, in __init__
raise fallback_error from e
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 425, in __init__
self._create_inference_session(self._fallback_providers, None)
File "C:\FaceFusion_2.6.0\python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 463, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:743 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported. |
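From what I can tell, LoadLibrary error 126 means Windows found onnxruntime_providers_cuda.dll but could not resolve one of the DLLs it depends on (the CUDA runtime, cuBLAS or cuDNN), usually because the installed CUDA/cuDNN versions don't match what this onnxruntime build expects, or because their bin folders are not on PATH. Here is a small diagnostic sketch I run with the same python310 that FaceFusion uses before launching it; the two CUDA/cuDNN paths are only examples (assumptions), so replace them with your own install locations:

import os
import onnxruntime as ort

# Show which onnxruntime build is active and whether the CUDA provider is even available.
print('onnxruntime version:', ort.__version__)
print('CUDA_PATH:', os.environ.get('CUDA_PATH'))
print('available providers:', ort.get_available_providers())

# Error 126 = a dependent DLL (cudart64_*.dll, cublas64_*.dll, cudnn64_*.dll, ...)
# was not found. On Windows the CUDA and cuDNN "bin" folders must be on PATH,
# or registered explicitly before an InferenceSession is created.
for dll_dir in (
    r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin',  # example path, adjust
    r'C:\Program Files\NVIDIA\CUDNN\v8.9\bin',                        # example path, adjust
):
    if os.path.isdir(dll_dir):
        os.add_dll_directory(dll_dir)
        print('registered DLL directory:', dll_dir)
    else:
        print('directory not found:', dll_dir)

If CUDAExecutionProvider shows up in the available providers but session creation still falls back to CPU as in the log above, the problem is almost always the CUDA/cuDNN versions versus the installed onnxruntime build; the supported combinations are listed on the requirements page linked in the error message.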