Porting to Mac and improving performance #27
Thanks! If transcription is slow, the most likely culprit is this logging message: being false. In that case Whisper cannot transcribe on the GPU and falls back to the CPU, which is much slower. Outside of Whisper, none of the code should run noticeably slowly on any OS or machine.
Agreed. torch.cuda.is_available() is only true on Nvidia CUDA systems; AMD, Intel, or other embedded GPUs do not support it by default. Would you consider integrating the Whisper online API so it doesn't rely on local resources? Perhaps via LangChain?
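The CUDA-only check discussed above can be sketched as a small device-selection helper. This is illustrative, not part of the ecoute codebase: `pick_device` and its boolean inputs are hypothetical names, and in practice the flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`.

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the torch device string a Whisper model could run on.

    Hypothetical helper: shows one way to fall back when CUDA is missing,
    e.g. on AMD, Intel, or Apple-silicon machines.
    """
    if cuda_available:
        return "cuda"  # Nvidia GPU via CUDA
    if mps_available:
        return "mps"   # Apple-silicon GPU via Metal Performance Shaders
    return "cpu"       # fallback; much slower for transcription

# In a real script the flags would be queried from torch, e.g.:
#   device = pick_device(torch.cuda.is_available(),
#                        torch.backends.mps.is_available())
```

Note that local Whisper's MPS support has its own caveats (see the upstream issue linked below), so the CPU fallback is still the common path on non-Nvidia hardware.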
It would be nice to have an option to use an API for Whisper transcriptions. It should be fairly straightforward, since we can use this
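A minimal sketch of routing transcription through the hosted Whisper API instead of a local model. It assumes the `openai` package is installed and an `OPENAI_API_KEY` environment variable is set; `transcribe_remote` and `within_upload_limit` are illustrative names, and the 25 MB limit is the API's documented per-file upload cap.

```python
MAX_UPLOAD_BYTES = 25 * 1024 * 1024  # documented Whisper API upload limit

def within_upload_limit(size_bytes: int) -> bool:
    """Cheap pre-check before sending an audio file to the API."""
    return size_bytes <= MAX_UPLOAD_BYTES

def transcribe_remote(path: str) -> str:
    """Send an audio file to the hosted whisper-1 model and return the text."""
    # Imported lazily so the size helper stays usable without openai installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(path, "rb") as audio:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio)
    return result.text
```

The API call offloads all the heavy compute, which is why it tends to be faster than local CPU transcription on machines without a CUDA GPU.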
I've been using Whisper on an M2 with good results using this: openai/whisper#382
You can check out this branch https://github.com/SevaSk/ecoute/tree/29-add-option-to-use-speech-to-text-api-rather-than-transcribing-locally and use the command. It's honestly way faster and better than the local model. edit:
Thanks for sharing and for the work; I'm on my way to try it.
This is a really good project. I'm learning from it.
I just did some dirty hard-coding and made it work on my MacBook Air (Intel model). It runs pretty slowly.
Could you share any solutions for optimizing it? (Yes, I'm asking Google and ChatGPT at the same time.)
Thanks!
p.s. I'm trying it on an ARM Linux SBC as well; hope it works.
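For CPU-only machines like an Intel MacBook Air, one common speed lever is picking a smaller Whisper checkpoint. The parameter counts below are the ones listed in the openai/whisper README; `fastest_model_under` is a hypothetical helper for illustration, not part of this project.

```python
# Approximate parameter counts (millions) per Whisper checkpoint,
# as listed in the openai/whisper README.
WHISPER_PARAMS_M = {"tiny": 39, "base": 74, "small": 244, "medium": 769, "large": 1550}

def fastest_model_under(param_budget_m: int) -> str:
    """Return the largest checkpoint that fits a parameter budget (millions).

    Hypothetical helper: a bigger model is more accurate but slower on CPU,
    so this picks the best quality that still fits the budget.
    """
    candidates = [m for m, p in WHISPER_PARAMS_M.items() if p <= param_budget_m]
    return max(candidates, key=WHISPER_PARAMS_M.get)

# e.g. whisper.load_model(fastest_model_under(100)) to trade accuracy for speed
```

On slow hardware, dropping from the default model to "base" or "tiny" often makes the difference between unusable and near-real-time, at some cost in accuracy.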