Thanks ! #1

Open
Limezy opened this issue Jan 17, 2023 · 1 comment

Comments


Limezy commented Jan 17, 2023

Hi, many thanks for your blog post and for these scripts and resources.

I'm able to get around 2× speed on the large-v2 model with an AMD RX 6800 (Ubuntu 22.04, PyTorch with the ROCm 5.2 backend).

However, this hardware is not mine, and I'm considering upgrading my current Mac to a Mac mini M1 or a Mac Studio. That is why I would be very interested to know whether you are able to run Whisper with the newly released MPS backend of PyTorch.

  • Are you able to run the large model?
  • How fast is it (hopefully faster than your CPU-based tests)?
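For anyone trying this, a minimal sketch of what the question describes: selecting the MPS device when PyTorch exposes it, then loading Whisper on it. This assumes PyTorch ≥ 1.12 (the first release with MPS support) and the `openai-whisper` package; the audio filename is a placeholder, and MPS support for Whisper at this time may still be incomplete.

```python
import torch


def pick_device() -> str:
    """Prefer Apple's MPS backend when available, else fall back to CPU."""
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"


# Hypothetical usage with openai-whisper (requires `pip install openai-whisper`):
#
#   import whisper
#   model = whisper.load_model("large-v2", device=pick_device())
#   result = model.transcribe("audio.mp3")
#   print(result["text"])

if __name__ == "__main__":
    print(pick_device())
```

On a machine without Apple Silicon (or with an older PyTorch build), `pick_device()` simply returns `"cpu"`, so the same script runs unchanged on both setups.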

Thanks

@johnny12150

Maybe you should try this one out.
