General MPS op coverage tracking issue #77764
Comments
Are there any linear algebra ops not implemented in MPS that you have made custom shaders for? Any shaders I could "borrow" from your project (with full credit) and use in my own? Specifically, it would be helpful to have SVD and reverse-mode Cholesky operators.
Hey, there are no custom shaders at the moment, as everything we needed for the basic networks we looked at was already provided by MPS (or a set of MPS ops). Also, required functions that are not in the hot path simply fall back to CPU for now. It is mentioned here because this is something that can easily be done within the integration, but it is not something that is used today.
I was testing a bunch of speech synthesis and vocoder models, and found the following operators missing so far:
One vote for a CPU fallback. Is there any reason, given the unified memory architecture, that every op not implemented on Metal cannot just fall back to the CPU implementation without memory copy operations? (Based, of course, on my 10,000 ft view of the architecture, which I'm sure is wildly oversimplified.)
Tip for everyone: run your script with PYTORCH_ENABLE_MPS_FALLBACK=1, which will fall back to the CPU. I'm using a custom build that merges pull request #77791, so I'm not sure if this is included in the current build. (Edit: it's not. You need to build PyTorch yourself with the pull request, or trust an online build that includes it.)
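One detail worth noting: the variable generally has to be in the environment before `torch` is imported, or the backend won't pick it up. A minimal sketch (the `fallback_enabled` helper is just illustrative, not a PyTorch API):

```python
import os

# Must be set before importing torch, otherwise the MPS backend
# will not see the fallback flag.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# import torch  # import torch only after the variable is set


def fallback_enabled() -> bool:
    """Report whether the CPU-fallback flag is set in the environment."""
    return os.environ.get("PYTORCH_ENABLE_MPS_FALLBACK") == "1"


print(fallback_enabled())  # → True
```

Alternatively, set it in the shell (`PYTORCH_ENABLE_MPS_FALLBACK=1 python script.py`) so no code change is needed.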
Testing with some Hugging Face transformers code: +1 vote for
One missing op I ran into and haven't seen mentioned yet is
then
and finally the forward pass through my model crashes with
On
+1
@lhoenig could you open a new separate issue for the CPU fallback failing for you? @Willian-Zhang the fallback is ONLY available if you build from source right now. It will be in the nightly build tomorrow (May 21st).
Would like to add
I've got an unsupported op:
Not supported:

```python
import torch
import torch.nn as nn

X, y = torch.rand(16, 10).to("mps"), torch.rand(16, 1).to("mps")
model = nn.Linear(10, 1).to("mps")
criterion = nn.L1Loss()  # nn.KLDivLoss()
loss = criterion(model(X), y)
loss.backward()
```

Output:
Trying to use affine crop from torchvision, and found the operator
Trying to use MPS backend with pytorch geometric, and found the operator
Found: "The operator 'aten::grid_sampler_2d' is not currently implemented for the MPS device."
Would be great to add
I ran into this error with
The operator 'aten::upsample_bicubic2d.out' is not currently implemented for the MPS device.
Please prioritize `aten::isin.Tensor_Tensor_out`: NotImplementedError: The operator 'aten::isin.Tensor_Tensor_out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on #77764. As a temporary fix, you can set the environment variable
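As a stopgap until the op lands, the same result can be computed with a broadcasted comparison. A sketch of the logic in NumPy (`isin_fallback` is a hypothetical helper, not a PyTorch API; the same pattern ports to torch tensors with `unsqueeze`/`any`):

```python
import numpy as np


def isin_fallback(elements, test_elements):
    """Boolean mask marking which entries of `elements` appear
    anywhere in `test_elements`, via a broadcasted comparison."""
    elements = np.asarray(elements)
    test_elements = np.asarray(test_elements).ravel()
    # Compare every element against every test element, then reduce
    # over the test axis.
    return (elements[..., None] == test_elements).any(axis=-1)


mask = isin_fallback([1, 2, 3, 4], [2, 4, 6])
print(mask.tolist())  # → [False, True, False, True]
```

Note the intermediate comparison tensor is of size `elements.size * test_elements.size`, so this is only reasonable for modest test sets.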
Please prioritize `aten::upsample_bicubic2d.out`; the temporary fix doesn't work for me :(
The operator
+1 for
So, really stupid question: why do these functions need to be reimplemented for each accelerator architecture? Why isn't there a code generator/compiler for this?
Please prioritize `aten::_convert_indices_from_coo_to_csr.out` 🙏 NotImplementedError: The operator
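For context, this conversion is just a counting pass plus a prefix sum over the COO row indices. A plain-Python sketch of what the op computes (an illustration of the semantics, not the PyTorch implementation):

```python
def coo_rows_to_csr(row_indices, n_rows):
    """Convert COO row indices into a CSR row-pointer array
    of length n_rows + 1."""
    crow = [0] * (n_rows + 1)
    for r in row_indices:       # count nonzeros per row
        crow[r + 1] += 1
    for i in range(n_rows):     # prefix-sum counts into pointers
        crow[i + 1] += crow[i]
    return crow


# 3-row sparse matrix with 4 nonzeros: two in row 0, one each in rows 1 and 2.
print(coo_rows_to_csr([0, 0, 1, 2], 3))  # → [0, 2, 3, 4]
```

Since it is a simple linear scan, falling back to CPU for this op is usually cheap relative to the rest of a sparse workload.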
Hi, I want to use the Gemma-2 9B model for a generation task. I am running the code on an M3 Pro chip, but I am getting this error. Complete error message: NotImplementedError: The operator 'aten::isin.Tensor_Tensor_out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on #77764. As a temporary fix, you can set the environment variable Please let me know how to fix this.
Hello, I ran into this issue running on
We developed a deep survival library, and we rely on the function below for our Cox model (see code here). Would it be possible to add support for it? Thanks!
Here is a vote for
done. |
done |
Vote for
Please support `aten::cumprod.out`: a basic function used heavily in lots of statistical/ML apps!
We use We noticed that some ops automatically fall back to CPU (e.g.
Vote for
Voting for `aten::upsample_bicubic2d.out` to be implemented for MPS.
Ya, please support MPS for nms.
+1 for
`torchvision::nms` has been supported since last year.
In which version? I just ran into this error after updating the libraries. The last upgrade was:
@FiReTiTi Since torchvision 0.16. If it's still an issue, can you please open an issue in the torchvision repository and provide a minimal reproducer? Thank you.
Voting for `aten::max_pool3d_with_indices` for U-Net models.
Another vote for
Yet another vote for implementing this on MPS: "The operator 'aten::upsample_bicubic2d.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on #77764."
Voting for `aten::_linalg_eigvals`: NotImplementedError: The operator 'aten::_linalg_eigvals' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on #77764. As a temporary fix, you can set the environment variable Needed for torcheval's FrechetInceptionDistance.
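Until the op is implemented, eigenvalues can be computed on the CPU side; the usual workaround pattern is to round-trip a tensor through `.cpu()`, compute there, and move the result back. A small NumPy sketch of the CPU computation (the matrix is just an example):

```python
import numpy as np

# Eigenvalues of a small lower-triangular matrix, computed on CPU
# as a stand-in for aten::_linalg_eigvals on MPS. The eigenvalues
# of a triangular matrix are its diagonal entries (2 and 3 here).
a = np.array([[2.0, 0.0],
              [1.0, 3.0]])
vals = np.linalg.eigvals(a)
print([round(float(v), 6) for v in sorted(vals.real)])
```

The same shape of workaround applies in torch: `torch.linalg.eigvals(t.cpu()).to("mps")`, at the cost of a device transfer.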
This issue is a centralized place to list and track work on adding support for new ops to the MPS backend.
PyTorch MPS Ops Project: project to track all the ops for the MPS backend. There is a very large number of operators in PyTorch, so they are not all implemented yet. We will be prioritizing adding new operators based on user feedback. If possible, please also provide a link to the network or use case where the op is used.
As ops are requested, we will add them to the "To Triage" pool. If we have 3+ requests for an operation, then depending on its complexity/need, the operation will be moved to the "To be implemented" pool. If you want to work on adding support for such an op, feel free to comment below to get assigned one. Please avoid picking up an op that is already being worked on, tracked in the "In progress" pool.
Link to the wiki for details on how to add these ops and example PRs.
MPS operators coverage matrix - the matrix covers most of the supported operators but is not exhaustive. Please look at the `In vx.x.x` column: if the box is green, the op implementation is included in the latest release; if the box is yellow, the op implementation is in the nightly and has not yet been included in the latest release. Before you comment below, please take a look at this matrix to make sure the operator you're requesting has not already been implemented in the nightly. More details can be found in the readme.

cc @kulinseth @malfet @DenisVieriu97 @jhavukainen