[Megathread] Troubleshooting every kind of problem you run into with ComfyUI
stablediffusion forum
level 9
alenh (OP)
Whatever confusion or problems you hit while using ComfyUI, post detailed clues (error messages, logs, screenshots) and we can help analyze, identify, and solve them.
2025-04-15 17:04 · #1
level 3
To install Nunchaku I updated Python to 3.11 and PyTorch to 2.8.0. Now not only does Nunchaku not work, a lot of my existing plugins are broken too, e.g. x-flux. The error is below; OP, could you take a look?
2025-04-16 05:04 · #2
level 3
Error message occurred while importing the 'x-flux-comfyui' module.
Traceback (most recent call last):
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\utils\import_utils.py", line 1778, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI-aki-v1.4\python\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\models\clip\modeling_clip.py", line 45, in <module>
from ...modeling_flash_attention_utils import _flash_attention_forward
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\modeling_flash_attention_utils.py", line 27, in <module>
from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\flash_attn-2.7.4.post1-py3.11-win-amd64.egg\flash_attn\__init__.py", line 3, in <module>
from flash_attn.flash_attn_interface import (
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\flash_attn-2.7.4.post1-py3.11-win-amd64.egg\flash_attn\flash_attn_interface.py", line 15, in <module>
import flash_attn_2_cuda as flash_attn_gpu
ImportError: DLL load failed while importing flash_attn_2_cuda: The specified procedure could not be found.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "E:\ComfyUI-aki-v1.4\nodes.py", line 2153, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\__init__.py", line 1, in <module>
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "E:\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\nodes.py", line 17, in <module>
from .xflux.src.flux.util import (configs, load_ae, load_clip,
File "E:\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\xflux\src\flux\util.py", line 16, in <module>
from .modules.conditioner import HFEmbedder
File "E:\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\xflux\src\flux\modules\conditioner.py", line 2, in <module>
from transformers import (CLIPTextModel, CLIPTokenizer, T5EncoderModel,
File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\utils\import_utils.py", line 1767, in __getattr__
value = getattr(module, name)
^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\utils\import_utils.py", line 1766, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\utils\import_utils.py", line 1780, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.clip.modeling_clip because of the following error (look up to see its traceback):
DLL load failed while importing flash_attn_2_cuda: The specified procedure could not be found.
2025-04-16 05:04 · #3
Your PyTorch version is too new; downgrade to 2.5.1 and it will be compatible.
2025-04-16 06:04
@alenh I'm on a 50-series card. Is downgrading safe? Do you know, OP?
2025-04-16 14:04
It does matter: on a 50-series card, stay on PyTorch 2.6 with CUDA 12.8.
2025-04-16 14:04
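The DLL error above is typically an ABI mismatch: the prebuilt flash-attn wheel was compiled against a different Python/torch/CUDA combination than the one now installed. A minimal sketch of a filename sanity check before installing a community wheel; the `+cuXXXtorchY.Y` tag pattern is an assumption based on common community builds, not an official flash-attn naming scheme:

```python
# Hypothetical helper: check that a prebuilt flash-attn wheel's build tags
# match the local environment before installing it. The filename pattern is
# an assumption modeled on common community wheels, e.g.
# "flash_attn-2.7.4.post1+cu128torch2.6-cp311-cp311-win_amd64.whl".
import re

def wheel_matches(wheel_name: str, py_tag: str, torch_ver: str, cuda_tag: str) -> bool:
    """Return True if the wheel's cuXXX/torchY.Y/cpNNN tags match locally."""
    m = re.search(r"\+cu(\d+)torch([\d.]+)-(cp\d+)", wheel_name)
    if not m:
        return False  # no build tags in the name -> cannot verify
    cuda, torch_tag, py = m.groups()
    return cuda == cuda_tag and torch_ver.startswith(torch_tag) and py == py_tag

# A wheel built for torch 2.6 / CUDA 12.8 / Python 3.11 matches that stack:
print(wheel_matches(
    "flash_attn-2.7.4.post1+cu128torch2.6-cp311-cp311-win_amd64.whl",
    py_tag="cp311", torch_ver="2.6.0", cuda_tag="128"))  # -> True
```

If the check fails, reinstall torch to the version the wheel was built for (or rebuild flash-attn from source) rather than mixing the two.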
level 7
I had Nunchaku installed and generating images successfully. After running some other workflows, the Nunchaku text-to-image workflow's nodes turned red and stopped working. I've uninstalled it several times and now the node won't install at all. What's going on? OP, please take a look.
2025-04-17 00:04 · #4
@alenh Thanks, sorry for the trouble — I'll send it over for you to look at later.
2025-04-17 05:04
@alenh I solved it myself. It really was a dependency problem; reinstalling the dependencies fixed it. [thumbs up]
2025-04-18 00:04
Too few clues — check the log file.
2025-04-17 00:04
@alenh I'll take screenshots tonight. The main thing is the node won't install: the nunchaku folder is already inside custom_nodes, but in ComfyUI the node stays red and can't be added.
2025-04-17 01:04
level 3
Bro, I downgraded to PyTorch 2.6 like you said, and got this:
2025-04-17 02:04 · #5
level 3
File "E:\ComfyUI-aki-v1.4\custom_nodes\comfyui_slk_joy_caption_two\joy_caption_two_node.py", line 407, in generate
    generate_ids = text_model.generate(input_ids, inputs_embeds=input_embeds, attention_mask=attention_mask,
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\generation\utils.py", line 2215, in generate
    result = self._sample(
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\generation\utils.py", line 3206, in _sample
    outputs = self(**model_inputs, return_dict=True)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
    return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 1190, in forward
    outputs = self.model(
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
    return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 945, in forward
    layer_outputs = decoder_layer(
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
    return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 676, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
    return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 559, in forward
    query_states = self.q_proj(hidden_states)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
    return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\peft\tuners\lora\bnb.py", line 467, in forward
    result = self.base_layer(x, *args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
    return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\nn\modules.py", line 484, in forward
    return bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state).to(inp_dtype)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\autograd\_functions.py", line 533, in matmul_4bit
    return MatMul4Bit.apply(A, B, out, bias, quant_state)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\autograd\function.py", line 575, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\autograd\_functions.py", line 462, in forward
    output = torch.nn.functional.linear(A, F.dequantize_4bit(B, quant_state).to(A.dtype).t(), bias)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\functional.py", line 1352, in dequantize_4bit
    absmax = dequantize_blockwise(quant_state.absmax, quant_state.state2)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\functional.py", line 1043, in dequantize_blockwise
    lib.cdequantize_blockwise_fp32(*args)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\cextension.py", line 46, in __getattr__
    return getattr(self._lib, item)
File "E:\ComfyUI-aki-v1.4\python\Lib\ctypes\__init__.py", line 389, in __getattr__
    func = self.__getitem__(name)
File "E:\ComfyUI-aki-v1.4\python\Lib\ctypes\__init__.py", line 394, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'cdequantize_blockwise_fp32' not found
2025-04-17T10:03:29.386108 - Prompt executed in 0.46 seconds

## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Workflow too large. Please manually upload the workflow from the local file system.

## Additional Context
(Please add any additional context or steps to reproduce the error here)
2025-04-17 02:04 · #6
@King_Hamlet Did we hit the same problem? Nunchaku also stopped running for me after I used prompt interrogation (caption reverse-inference).
2025-04-17 05:04
Sorry, it's too long to paste in full. The bottom line is JoyCaptionTwo: function 'cdequantize_blockwise_fp32' not found
2025-04-17 02:04
That's Joy2 — DM me the log file.
2025-04-17 02:04
@alenh It works now. After fixing the environment the way you said, it runs. [thumbs up] I haven't tried Nunchaku again yet; I'll test this afternoon. Thanks a lot, haha.
2025-04-18 06:04
level 1
# ComfyUI Error Report

## Error Details
- **Node ID:** 7
- **Node Type:** CLIPTextEncode
- **Exception Type:** RuntimeError
- **Exception Message:** ERROR: clip input is invalid: None. If the clip is from a checkpoint loader node your checkpoint does not contain a valid clip or text encoder model.

## Stack Trace
```
File "E:\1\ComfyUI-aki-v1.4\execution.py", line 345, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\1\ComfyUI-aki-v1.4\execution.py", line 220, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\1\ComfyUI-aki-v1.4\execution.py", line 192, in _map_node_over_list
    process_inputs(input_dict, i)
File "E:\1\ComfyUI-aki-v1.4\execution.py", line 181, in process_inputs
    results.append(getattr(obj, func)(**inputs))
File "E:\1\ComfyUI-aki-v1.4\nodes.py", line 67, in encode
    raise RuntimeError("ERROR: clip input is invalid: None\n\nIf the clip is from a checkpoint loader node your checkpoint does not contain a valid clip or text encoder model.")
```

## System Information
- **ComfyUI Version:** 0.3.29
- **Arguments:** E:\1\ComfyUI-aki-v1.4\main.py --auto-launch --preview-method auto --disable-cuda-malloc
- **OS:** nt
- **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.3.1+cu121

## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 17170825216
- **VRAM Free:** 15821963264
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0

## Logs
```
2025-04-19T00:17:17.802883 - [START] Security scan
2025-04-19T00:17:21.072363 - [DONE] Security scan
2025-04-19T00:17:21.217797 - ## ComfyUI-Manager: installing dependencies done.
2025-04-19T00:17:21.217797 - ** ComfyUI startup time: 2025-04-19 00:17:21.217797
2025-04-19T00:17:21.217797 - ** Platform: Windows
2025-04-19T00:17:21.217797 - ** Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
2025-04-19T00:17:21.217797 - ** Python executable: E:\1\ComfyUI-aki-v1.4\python\python.exe
2025-04-19T00:17:21.217797 - ** ComfyUI Path: E:\1\ComfyUI-aki-v1.4
2025-04-19T00:17:21.217797 - ** Log path: E:\1\ComfyUI-aki-v1.4\comfyui.log
2025-04-19T00:17:21.220791 - Prestartup times for custom nodes:
2025-04-19T00:17:21.220791 -   0.0 seconds: E:\1\ComfyUI-aki-v1.4\custom_nodes\rgthree-comfy
2025-04-19T00:17:21.220791 -   0.0 seconds: E:\1\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold
2025-04-19T00:17:21.220791 -   3.4 seconds: E:\1\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager
2025-04-19T00:17:22.494286 - Warning, you are using an old pytorch version and some ckpt/pt files might be loaded unsafely. Upgrading to 2.4 or above is recommended.
2025-04-19T00:17:22.656335 - Total VRAM 16375 MB, total RAM 65362 MB
2025-04-19T00:17:22.657330 - pytorch version: 2.3.1+cu121
2025-04-19T00:17:23.718809 - xformers version: 0.0.27
2025-04-19T00:17:23.718809 - Set vram state to: NORMAL_VRAM
2025-04-19T00:17:23.718809 - Device: cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
2025-04-19T00:17:25.396817 - Using xformers attention
2025-04-19T00:17:26.093722 - Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
2025-04-19T00:17:26.093722 - ComfyUI version: 0.3.29
2025-04-19T00:17:26.177442 - ComfyUI frontend version: 1.16.8
2025-04-19T00:17:26.179432 - [Prompt Server] web root: E:\1\ComfyUI-aki-v1.4\python\lib\site-packages\comfyui_frontend_package\static
2025-04-19T00:17:26.725086 - [AnimateDiffEvo] - ERROR - No motion models found. Please download one and place in:
```
2025-04-18 16:04 · #7
Here it's the opposite: your PyTorch version is too old.
2025-04-21 06:04
level 1
Traceback (most recent call last):
File "H:\confyUI\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 345, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\confyUI\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 220, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\confyUI\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 192, in _map_node_over_list
process_inputs(input_dict, i)
File "H:\confyUI\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 181, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\confyUI\ComfyUI-aki-v1.6\ComfyUI\custom_nodes\comfyui-mimicmotionwrapper\nodes.py", line 400, in process
snapshot_download(repo_id="hr16/yolox-onnx",
File "H:\confyUI\ComfyUI-aki-v1.6\python\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "H:\confyUI\ComfyUI-aki-v1.6\python\Lib\site-packages\huggingface_hub\_snapshot_download.py", line 235, in snapshot_download
raise LocalEntryNotFoundError(
huggingface_hub.errors.LocalEntryNotFoundError: An error happened while trying to locate the files on the Hub and we cannot find the appropriate snapshot folder for the specified revision on the local disk. Please check your internet connection and try again.
Prompt executed in 21.05 seconds
2025-04-22 15:04 · #8
You're missing the YOLO model.
2025-04-22 15:04
Get this workflow to run through first: comfyui_controlnet_aux\tests\test_cn_aux_full.json
2025-04-22 15:04
level 3
After installing the Nunchaku plugin and the matching wheel dependency, it reports that an outdated version of a package was installed that is incompatible with Impact Pack. Does this matter much, and how should I fix it?
2025-04-22 18:04 · #9
Personally tested — this works.
2025-04-24 01:04
That's a ComfyUI frontend dependency; run pip install -U comfyui_frontend_package to update it.
2025-04-22 23:04
I ran that and the error persisted; rolling the ComfyUI-Impact-Pack node back to version 8.11 made it stop.
2025-04-23 16:04
level 2
It throws this as soon as I run it.
2025-04-23 09:04 · #10
level 2
2025-04-23 09:04 · #11
Too few clues — post a screenshot of the workflow.
2025-04-23 09:04
@alenh Did you run pip install -U comfyui_frontend_package?
2025-04-23 14:04
Thank you — it inexplicably fixed itself yesterday. All I did was adjust the virtual memory.
2025-04-24 06:04
level 4
An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on. — I got this while using a ControlNet model; how do I fix it?
2025-05-02 22:05 · #12
Turn on your proxy — you're missing a preprocessor model and it needs to be downloaded.
2025-05-03 08:05
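Besides a proxy, huggingface_hub can be pointed at a mirror through the HF_ENDPOINT environment variable, which also covers the annotator models that comfyui_controlnet_aux auto-downloads. A sketch, assuming a mirror such as hf-mirror.com is reachable from your network (substitute any endpoint you trust):

```python
# Sketch: redirect huggingface_hub downloads to a mirror when huggingface.co
# is unreachable. "https://hf-mirror.com" is one community mirror, used here
# only as an example endpoint.
import os

# Must be set before ComfyUI (and therefore huggingface_hub) is imported,
# e.g. at the top of the launcher script or in the shell that starts ComfyUI.
os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")
print(os.environ["HF_ENDPOINT"])
```

The same effect can be had by setting the variable in the shell or the launcher's .bat file before starting ComfyUI.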
level 9
1. Prompt length.
Token indices sequence length is longer than the specified maximum sequence length for this model (328 > 77). Running this sequence through the model will result in indexing errors.
In my tests, the part of the prompt beyond 77 tokens still shows up in the generated image, so can this warning be ignored? Or is it better to switch the text encoder to the A1111-style mode? And can Flux even use the A1111 mode?
2. Downloading and loading ControlNet models.
Failed to find S:\StableDiffusion\ComfyUI-aki-v1.6\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\lllyasviel/Annotators\body_pose_model.pth.
Downloading from huggingface.co
cacher folder is C:\Users\AppData\Local\Temp, you can change it by custom_tmp_path in config.yaml
S:\StableDiffusion\ComfyUI-aki-v1.6\python\Lib\site-packages\huggingface_hub\file_download.py:832: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as `local_dir`.
For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.
Even translated I don't really understand this; all I can tell is it says a model is missing and needs to be downloaded online. I came to ComfyUI from WebUI and point my model paths at the WebUI folders, but the ControlNet annotator models apparently can't be read from the WebUI folder, so every time I use ControlNet it tries to download them. I don't have a proxy on, so the download always fails — yet after failing it still manages to produce a skeleton image, and I have no idea where it gets the model from. Now every ControlNet run means a long wait for the download to fail; generation afterwards works fine. How do I fix this? [sweating]
2025-05-03 04:05 · #13
You're missing the annotator (preprocessor) models.
2025-05-03 08:05
For the token warning, update the ComfyUI core.
2025-05-03 08:05
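For background on the 77-token warning: CLIP's text encoder has a fixed 77-token window (75 usable tokens plus begin/end markers). The A1111-style mode works around it by splitting a long prompt into 75-token chunks, encoding each chunk, and concatenating the embeddings. A minimal sketch of just the chunking step; the BOS/EOS ids below are placeholders, not guaranteed to match any specific tokenizer:

```python
# Sketch of A1111-style prompt chunking: split a long token sequence into
# 75-token chunks and wrap each with BOS/EOS so every chunk fits CLIP's
# 77-token window. The BOS/EOS ids are placeholders for illustration.
BOS, EOS, CHUNK = 49406, 49407, 75

def chunk_tokens(token_ids):
    chunks = []
    for i in range(0, len(token_ids), CHUNK):
        body = token_ids[i:i + CHUNK]
        body = body + [EOS] * (CHUNK - len(body))  # pad the final short chunk
        chunks.append([BOS] + body + [EOS])        # exactly 77 tokens each
    return chunks

# A 328-token prompt (as in the warning above) becomes 5 chunks of 77 tokens.
chunks = chunk_tokens(list(range(328)))
print(len(chunks), len(chunks[0]))  # -> 5 77
```

Each chunk is then encoded independently and the results concatenated, which is why text past token 77 can still influence the image when such handling is active.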
level 1
OP, what is going on here?
2025-05-08 05:05 · #14
ERROR: clip input is invalid: None. If the clip is from a checkpoint loader node your checkpoint does not contain a valid clip or text encoder model.
2025-05-08 05:05
@贴吧用户_aJNZKXA Which model?
2025-05-08 06:05
@alenh The F.1 (Flux.1) base model.
2025-05-09 06:05
@贴吧用户_aJNZKXA You need to load the two CLIP models separately (dual CLIP loading).
2025-05-09 06:05
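For context on "dual CLIP loading": Flux checkpoints typically ship without text encoders, so CLIPTextEncode receives None unless the encoders are loaded separately with ComfyUI's DualCLIPLoader node (CLIP-L plus T5-XXL, with type set to flux). A sketch of that node's inputs as they would appear in an exported workflow JSON; the encoder filenames are examples and must match files actually present in models/clip:

```python
# Sketch: the inputs a DualCLIPLoader node needs for Flux, expressed as the
# dict that would appear in an exported workflow JSON. The filenames are
# examples; point them at the encoder files actually in models/clip.
dual_clip_loader = {
    "class_type": "DualCLIPLoader",
    "inputs": {
        "clip_name1": "clip_l.safetensors",            # CLIP-L text encoder
        "clip_name2": "t5xxl_fp8_e4m3fn.safetensors",  # T5-XXL text encoder
        "type": "flux",
    },
}
print(dual_clip_loader["inputs"]["type"])  # -> flux
```

The CLIP output of this node then feeds CLIPTextEncode in place of the (empty) CLIP slot from the checkpoint loader.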