level 1
Swy198781
OP
It errors out every time at 95%:
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query : shape=(1, 900, 1, 512) (torch.float32)
     key   : shape=(1, 900, 1, 512) (torch.float32)
     value : shape=(1, 900, 1, 512) (torch.float32)
     attn_bias : <class 'NoneType'>
     p : 0.0
`cutlassF` is not supported because:
    device=privateuseone (supported: {'cuda'})
`flshattF` is not supported because:
    device=privateuseone (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    max(query.shape[-1] != value.shape[-1]) > 128
`tritonflashattF` is not supported because:
    device=privateuseone (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    max(query.shape[-1] != value.shape[-1]) > 128
    triton is not available
`smallkF` is not supported because:
    device=privateuseone (supported: {'cuda', 'cpu'})
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 512
Does anyone know what's causing this? Changing the image size doesn't help either.
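The gist of the traceback can be sketched as a plain-Python check. This is not xformers' actual dispatch code; `TensorMeta` and `why_no_kernel` are made-up names for illustration. It restates the constraints the error lists: every kernel wants a CUDA device, and the faster ones also want fp16/bf16 and a per-head embedding dim of at most 128, while the failing tensors here are float32, have head dim 512, and live on `privateuseone` (the torch-directml device).

```python
from dataclasses import dataclass

@dataclass
class TensorMeta:
    device: str          # "cuda", "cpu", or "privateuseone" (DirectML)
    dtype: str           # "float32", "float16", "bfloat16"
    embed_per_head: int  # last dimension of query/key/value

def why_no_kernel(q: TensorMeta) -> list:
    """List the reasons no memory-efficient-attention kernel applies,
    mirroring the per-kernel complaints in the traceback above."""
    reasons = []
    if q.device != "cuda":
        reasons.append(f"device={q.device} (supported: cuda)")
    if q.dtype not in ("float16", "bfloat16"):
        reasons.append(f"dtype={q.dtype} (supported: float16/bfloat16)")
    if q.embed_per_head > 128:
        reasons.append(f"embed per head {q.embed_per_head} > 128")
    return reasons

# The tensors from the traceback: shape (1, 900, 1, 512), float32, DirectML.
failing = TensorMeta(device="privateuseone", dtype="float32", embed_per_head=512)
print(why_no_kernel(failing))  # all three constraints fail -> NotImplementedError
```

Since the device check alone rules out every kernel, resizing the image cannot fix it: the tensor shapes change, but the device stays `privateuseone`.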
2023-04-03 11:04