Error trying to load model components
#3 by kishimita · opened
.flux2-venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:115: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
warnings.warn(
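The UserWarning above means the CPU-only onnxruntime wheel is installed, so 'CUDAExecutionProvider' is simply not built into it; installing the onnxruntime-gpu package (with a matching CUDA toolkit) is usually what makes it appear. A defensive pattern is to filter the requested providers against what is actually available before creating the session, so the warning never fires. A minimal sketch (the helper name pick_providers is illustrative, not part of any onnxruntime API):

```python
def pick_providers(available, preferred=("CUDAExecutionProvider",)):
    """Keep only the preferred execution providers that are actually
    available, always falling back to the CPU provider at the end."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# With the providers reported in the warning above, only the CPU
# provider survives, so no UserWarning would be emitted:
print(pick_providers(["AzureExecutionProvider", "CPUExecutionProvider"]))
# In real code: pick_providers(onnxruntime.get_available_providers()),
# then pass the result as providers= to InferenceSession.
```

Note that this only silences the warning; to actually run on GPU you still need the GPU build of onnxruntime installed.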
Error loading CLIP model: [ONNXRuntimeError] : 1 : FAIL : Load model from Flux2/ai-toolkit/model_weights/clip/models--black-forest-labs--FLUX.1-dev-onnx/snapshots/b566cc0360f26cdbbbabec71621a9f9260835cdd/clip.opt/model.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:180 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 11, max supported IR version: 10
Error loading T5 model: [ONNXRuntimeError] : 1 : FAIL : Load model from /Flux2/ai-toolkit/model_weights/t5/models--black-forest-labs--FLUX.1-dev-onnx/snapshots/b566cc0360f26cdbbbabec71621a9f9260835cdd/t5.opt/model.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:180 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 11, max supported IR version: 10
Error loading Transformer model: [ONNXRuntimeError] : 1 : FAIL : Load model from /Flux2/ai-toolkit/model_weights/transformer/fp8/models--black-forest-labs--FLUX.1-dev-onnx/snapshots/b566cc0360f26cdbbbabec71621a9f9260835cdd/transformer.opt/fp8/model.onnx failed:Fatal error: trt:TRT_FP8QuantizeLinear(-1) is not a registered function/op
Error loading VAE model: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from /Flux2/ai-toolkit/model_weights/vae/models--black-forest-labs--FLUX.1-dev-onnx/snapshots/b566cc0360f26cdbbbabec71621a9f9260835cdd/vae.opt/model.onnx failed:This is an invalid model. Type Error: Type 'tensor(bfloat16)' of input parameter (latent) of operator (Conv) in node (/decoder/conv_in/Conv) is invalid.
Can this be solved by installing onnx version 1.18?
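Probably the other way around: the "Unsupported model IR version: 11, max supported IR version: 10" failures mean the model was exported with a newer ONNX release than the installed onnxruntime can read (IR version 11 was introduced with ONNX 1.18), so upgrading onnxruntime, not onnx, is what should help here. You can check a model's IR version without onnx installed at all, since ModelProto stores ir_version as protobuf field 1 (a varint tagged 0x08) and serializers normally emit it first. A minimal sketch, assuming that layout:

```python
def read_ir_version(head: bytes) -> int:
    """Decode ir_version from the first bytes of a .onnx file.

    ModelProto serializes ir_version as protobuf field 1, wire type 0:
    a 0x08 tag byte followed by a varint, normally at the very start
    of the file.
    """
    if not head or head[0] != 0x08:
        raise ValueError("file does not start with an ir_version tag")
    value, shift = 0, 0
    for b in head[1:11]:
        value |= (b & 0x7F) << shift
        if not b & 0x80:  # high bit clear: last byte of the varint
            return value
        shift += 7
    raise ValueError("malformed varint")

# b"\x08\x0b" is how a model with ir_version 11 begins:
print(read_ir_version(b"\x08\x0b"))
# With a real file: read_ir_version(open("model.onnx", "rb").read(16))
```

The FP8 and VAE failures look like separate issues: the "trt:" prefix on TRT_FP8QuantizeLinear suggests a TensorRT custom op, so that transformer variant likely needs the TensorRT execution provider, and the bfloat16 Conv rejection in the VAE may also go away with a newer onnxruntime build.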
Same problem!