
Add GPU support #346

Closed
wants to merge 8 commits into from

Conversation

chidiwilliams
Owner

No description provided.


codecov bot commented Feb 1, 2023

Codecov Report

Merging #346 (5de7b49) into main (bc51c58) will decrease coverage by 0.37%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##             main     #346      +/-   ##
==========================================
- Coverage   83.97%   83.61%   -0.37%     
==========================================
  Files           8        8              
  Lines        1604     1605       +1     
==========================================
- Hits         1347     1342       -5     
- Misses        257      263       +6     
Flag     Coverage Δ
Linux    83.61% <100.00%> (-0.24%) ⬇️
macOS    ?

Flags with carried forward coverage won't be shown. Click here to find out more.

Impacted Files         Coverage Δ
buzz/transcriber.py    77.53% <100.00%> (+0.49%) ⬆️
buzz/gui.py            87.18% <0.00%> (-0.85%) ⬇️


@chidiwilliams
Owner Author

@Sircam19 @shruru Does this build use the GPU for you? https://github.com/chidiwilliams/buzz/actions/runs/4062599632
@chidiwilliams
Owner Author

Seems to work on Linux

[screenshot]
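For anyone testing these builds, a quick way to confirm whether the GPU is actually in use is to check which device PyTorch selects (Buzz's Whisper backend runs on PyTorch). The `pick_device` helper below is a hypothetical sketch of the usual fallback order, not code from this PR; in practice you would pass it `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the torch device string Whisper would run on,
    preferring NVIDIA CUDA, then Apple's MPS, then plain CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"


# e.g. pick_device(torch.cuda.is_available(),
#                  torch.backends.mps.is_available())
print(pick_device(True, False))   # cuda
print(pick_device(False, False))  # cpu
```

If this prints `cpu` on a machine with a supported GPU, the bundled PyTorch build likely lacks GPU support.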

@Sircam19

Sircam19 commented Feb 2, 2023

> @Sircam19 @shruru
>
> Does this build use the GPU for you? https://github.com/chidiwilliams/buzz/actions/runs/4062599632

Hello Chidi. I'd like to test it for you, as I'm so excited about your project, but I'm not sure how to access the version you've provided. If there is a test version, I'd be happy to try it, but I'm not sure how to via GitHub. Thanks.

@chidiwilliams
Owner Author

@Sircam19 You'll find this at the bottom of the page I shared:

[screenshot]

@Sircam19

Sircam19 commented Feb 2, 2023

> @Sircam19 You'll find this at the bottom of the page I shared:
>
> [screenshot]

Hello Chidi. Thank you; obviously too little sleep on my part, and I missed scrolling all the way down. Anyway, I gave it a try and it failed immediately. Here is the screenshot; note the last two attempts (the others are history from days back). The first failure was with the language-recognition setting enabled, and the second was with the language set specifically to French.
[screenshot: CleanShot 2023-02-02 at 06 21 34@2x]

Happy to send other information to help with debugging. Thanks.

S

@chidiwilliams
Copy link
Owner Author

Thanks, @Sircam19. Could you share the logs from $HOME/Library/Logs/Buzz/logs.txt?

@Sircam19

Sircam19 commented Feb 2, 2023


I also just installed Buzz on my non M1 Mac Mini and got the same type of result.
[screenshot: CleanShot 2023-02-02 at 06 35 59@2x]

@Sircam19

Sircam19 commented Feb 2, 2023

> Thanks, @Sircam19. Could you share the logs from $HOME/Library/Logs/Buzz/logs.txt?

Here you go....

Logs

[2022-12-01 18:01:30,039] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-01 18:01:33,907] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-01 18:01:34,341] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 2, sample rate = 16000
[2022-12-01 18:01:41,826] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.21528509259223938
[2022-12-01 18:01:42,343] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-01 18:01:42,343] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-01 18:01:49,168] transcriber.process_queue:116 DEBUG -> Received next result, length = 62, time taken = 0:00:07.340786
[2022-12-01 18:01:49,169] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-01 18:03:14,928] whispr.get_model_path:118 DEBUG -> Loading model = small, whisper.cpp = False
[2022-12-01 18:03:38,766] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/small.pt
[2022-12-01 18:03:38,768] transcriber.transcribe:277 DEBUG -> Starting file transcription, file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Little Talk in Slow French Learn French through conversations /Listening Test N°1 Qu'est-ce qu'on aime en France .mp3, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Little Talk in Slow French Learn French through conversations /Listening Test N°1 Qu'est-ce qu'on aime en France (Transcribed on 01-Dec-2022 18-03-10).srt, output format = OutputFormat.SRT, model_path = /Users/steven/.cache/whisper/small.pt
[2022-12-01 18:18:57,538] transcriber.transcribe:324 DEBUG -> Completed file transcription, time taken = 0:15:18.768803
[2022-12-01 18:40:34,522] whispr.get_model_path:118 DEBUG -> Loading model = medium, whisper.cpp = False
[2022-12-01 18:42:12,279] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/medium.pt
[2022-12-01 18:42:12,281] transcriber.transcribe:277 DEBUG -> Starting file transcription, file path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Isaac/Isaac_Video-20221024_140107-Meeting Recording.mp4, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Isaac/Isaac_Video-20221024_140107-Meeting Recording (Transcribed on 01-Dec-2022 18-40-28).srt, output format = OutputFormat.SRT, model_path = /Users/steven/.cache/whisper/medium.pt
[2022-12-01 18:48:25,061] transcriber.stop:342 DEBUG -> File transcription process terminated
[2022-12-01 18:48:25,065] transcriber.stop:345 DEBUG -> Waiting for file transcription thread to terminate
[2022-12-01 18:48:25,284] transcriber.transcribe:324 DEBUG -> Completed file transcription, time taken = 0:06:13.002814
[2022-12-01 18:48:25,285] transcriber.stop:348 DEBUG -> File transcription thread terminated
[2022-12-01 18:48:52,657] whispr.get_model_path:118 DEBUG -> Loading model = small, whisper.cpp = False
[2022-12-01 18:48:55,234] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/small.pt
[2022-12-01 18:48:55,235] transcriber.transcribe:277 DEBUG -> Starting file transcription, file path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Isaac/Isaac_Video-20221024_140107-Meeting Recording.mp4, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Isaac/Isaac_Video-20221024_140107-Meeting Recording (Transcribed on 01-Dec-2022 18-48-50).srt, output format = OutputFormat.SRT, model_path = /Users/steven/.cache/whisper/small.pt
[2022-12-01 20:00:12,654] transcriber.transcribe:324 DEBUG -> Completed file transcription, time taken = 1:11:17.417697
[2022-12-01 20:05:40,510] whispr.get_model_path:118 DEBUG -> Loading model = medium, whisper.cpp = False
[2022-12-01 20:05:47,473] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/medium.pt
[2022-12-01 20:05:47,474] transcriber.transcribe:277 DEBUG -> Starting file transcription, file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Real Life French/Le torticolis.mp3, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Real Life French/Le torticolis (Transcribed on 01-Dec-2022 20-05-36).txt, output format = OutputFormat.TXT, model_path = /Users/steven/.cache/whisper/medium.pt
[2022-12-01 20:26:46,901] transcriber.transcribe:324 DEBUG -> Completed file transcription, time taken = 0:20:59.426667
[2022-12-02 14:26:11,151] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-02 14:26:12,578] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-02 14:26:13,012] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 1, sample rate = 16000
[2022-12-02 14:26:18,154] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 2.1019476964872256e-45
[2022-12-02 14:26:19,198] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 14:26:19,198] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 14:26:23,710] transcriber.process_queue:116 DEBUG -> Received next result, length = 4, time taken = 0:00:05.554765
[2022-12-02 14:26:23,710] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 14:26:28,747] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-02 14:26:30,015] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-02 14:26:30,412] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 1, sample rate = 16000
[2022-12-02 14:26:34,985] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 14:26:34,986] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 14:26:34,986] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 14:26:56,358] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-02 14:26:57,642] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-02 14:26:58,047] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 3, sample rate = 16000
[2022-12-02 14:27:01,812] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 14:27:01,812] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 14:27:01,812] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 14:28:09,309] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-02 14:28:10,541] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-02 14:28:10,943] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 3, sample rate = 16000
[2022-12-02 14:28:16,068] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.053412169218063354
[2022-12-02 14:28:25,578] transcriber.process_queue:116 DEBUG -> Received next result, length = 3, time taken = 0:00:09.508281
[2022-12-02 14:28:25,690] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 64001, amplitude = 0.04605419933795929
[2022-12-02 14:28:29,810] transcriber.process_queue:116 DEBUG -> Received next result, length = 62, time taken = 0:00:04.119449
[2022-12-02 14:28:29,818] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 48001, amplitude = 0.04849354177713394
[2022-12-02 14:28:41,028] transcriber.process_queue:116 DEBUG -> Received next result, length = 27, time taken = 0:00:11.210627
[2022-12-02 14:28:41,037] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 144001, amplitude = 0.1449260413646698
[2022-12-02 14:28:45,259] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 14:28:45,259] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 14:28:51,147] transcriber.process_queue:116 DEBUG -> Received next result, length = 37, time taken = 0:00:10.109365
[2022-12-02 14:28:51,147] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 14:28:54,724] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-02 14:28:55,927] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-02 14:28:56,320] transcriber.process_queue:81 DEBUG -> Recording, language = en, task = Task.TRANSCRIBE, device = 3, sample rate = 16000
[2022-12-02 14:29:01,450] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.16822969913482666
[2022-12-02 14:29:11,254] transcriber.process_queue:116 DEBUG -> Received next result, length = 0, time taken = 0:00:09.804224
[2022-12-02 14:29:11,262] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 64001, amplitude = 0.049557387828826904
[2022-12-02 14:29:19,348] transcriber.process_queue:116 DEBUG -> Received next result, length = 0, time taken = 0:00:08.085629
[2022-12-02 14:29:19,356] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 112001, amplitude = 0.059186700731515884
[2022-12-02 14:29:27,936] transcriber.process_queue:116 DEBUG -> Received next result, length = 10, time taken = 0:00:08.580121
[2022-12-02 14:29:27,945] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.043017398566007614
[2022-12-02 14:29:36,355] transcriber.process_queue:116 DEBUG -> Received next result, length = 35, time taken = 0:00:08.410649
[2022-12-02 14:29:36,365] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.046664465218782425
[2022-12-02 14:29:45,919] transcriber.process_queue:116 DEBUG -> Received next result, length = 55, time taken = 0:00:09.553751
[2022-12-02 14:29:45,929] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.6323920488357544
[2022-12-02 14:29:48,724] transcriber.process_queue:116 DEBUG -> Received next result, length = 30, time taken = 0:00:02.794499
[2022-12-02 14:29:48,731] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 128001, amplitude = 0.9676333069801331
[2022-12-02 14:29:58,536] transcriber.process_queue:116 DEBUG -> Received next result, length = 45, time taken = 0:00:09.804411
[2022-12-02 14:29:58,545] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.056489475071430206
[2022-12-02 14:30:07,729] transcriber.process_queue:116 DEBUG -> Received next result, length = 290, time taken = 0:00:09.184130
[2022-12-02 14:30:07,739] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 1.198199987411499
[2022-12-02 14:30:09,576] transcriber.process_queue:116 DEBUG -> Received next result, length = 53, time taken = 0:00:01.836271
[2022-12-02 14:30:09,583] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 112001, amplitude = 1.0584956407546997
[2022-12-02 14:30:13,184] transcriber.process_queue:116 DEBUG -> Received next result, length = 105, time taken = 0:00:03.600852
[2022-12-02 14:30:13,196] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 80001, amplitude = 0.7612597346305847
[2022-12-02 14:30:18,330] transcriber.process_queue:116 DEBUG -> Received next result, length = 76, time taken = 0:00:05.133400
[2022-12-02 14:30:18,338] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 80001, amplitude = 0.7194278836250305
[2022-12-02 14:30:33,906] transcriber.process_queue:116 DEBUG -> Received next result, length = 53, time taken = 0:00:15.568153
[2022-12-02 14:30:33,915] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.7399351596832275
[2022-12-02 14:30:48,056] transcriber.process_queue:116 DEBUG -> Received next result, length = 41, time taken = 0:00:14.141235
[2022-12-02 14:30:48,065] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.13968196511268616
[2022-12-02 14:31:01,985] transcriber.process_queue:116 DEBUG -> Received next result, length = 16, time taken = 0:00:13.919717
[2022-12-02 14:31:01,994] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.13756638765335083
[2022-12-02 14:31:12,942] transcriber.process_queue:116 DEBUG -> Received next result, length = 7, time taken = 0:00:10.947965
[2022-12-02 14:31:12,951] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 1.3710765838623047
[2022-12-02 14:31:29,221] transcriber.process_queue:116 DEBUG -> Received next result, length = 93, time taken = 0:00:16.268858
[2022-12-02 14:31:29,229] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.05511469766497612
[2022-12-02 14:31:39,876] transcriber.process_queue:116 DEBUG -> Received next result, length = 19, time taken = 0:00:10.647087
[2022-12-02 14:31:39,885] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.09478382021188736
[2022-12-02 14:31:50,598] transcriber.process_queue:116 DEBUG -> Received next result, length = 0, time taken = 0:00:10.713263
[2022-12-02 14:31:50,606] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.9868314266204834
[2022-12-02 14:32:00,915] transcriber.process_queue:116 DEBUG -> Received next result, length = 4, time taken = 0:00:10.309439
[2022-12-02 14:32:00,924] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 1.1288502216339111
[2022-12-02 14:32:27,060] transcriber.process_queue:116 DEBUG -> Received next result, length = 8, time taken = 0:00:26.135973
[2022-12-02 14:32:27,069] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.0504312664270401
[2022-12-02 14:32:32,069] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 14:32:32,069] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 14:32:37,352] transcriber.process_queue:116 DEBUG -> Received next result, length = 4, time taken = 0:00:10.283375
[2022-12-02 14:32:37,355] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 16:13:05,407] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-02 16:13:06,781] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-02 16:13:07,200] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 3, sample rate = 16000
[2022-12-02 16:13:12,326] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.07362176477909088
[2022-12-02 16:13:15,270] transcriber.process_queue:116 DEBUG -> Received next result, length = 97, time taken = 0:00:02.943839
[2022-12-02 16:13:17,366] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.08374150842428207
[2022-12-02 16:13:30,188] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 16:13:30,190] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 16:13:32,676] transcriber.process_queue:116 DEBUG -> Received next result, length = 41, time taken = 0:00:15.309618
[2022-12-02 16:13:32,676] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 16:13:35,477] whispr.get_model_path:118 DEBUG -> Loading model = medium, whisper.cpp = False
[2022-12-02 16:13:42,258] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/medium.pt
[2022-12-02 16:13:49,717] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 3, sample rate = 16000
[2022-12-02 16:13:54,908] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.1288035809993744
[2022-12-02 16:14:16,608] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 16:14:16,608] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 16:15:12,974] transcriber.process_queue:116 DEBUG -> Received next result, length = 49, time taken = 0:01:18.063386
[2022-12-02 16:15:13,093] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 16:18:02,183] whispr.get_model_path:118 DEBUG -> Loading model = small, whisper.cpp = False
[2022-12-02 16:18:05,265] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/small.pt
[2022-12-02 16:18:07,321] transcriber.process_queue:81 DEBUG -> Recording, language = en, task = Task.TRANSCRIBE, device = 3, sample rate = 16000
[2022-12-02 16:18:12,450] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.15829291939735413
[2022-12-02 16:18:23,046] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 16:18:23,046] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 16:18:23,998] transcriber.process_queue:116 DEBUG -> Received next result, length = 61, time taken = 0:00:11.547792
[2022-12-02 16:18:24,036] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 16:18:36,339] whispr.get_model_path:118 DEBUG -> Loading model = small, whisper.cpp = False
[2022-12-02 16:18:38,723] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/small.pt
[2022-12-02 16:18:40,650] transcriber.process_queue:81 DEBUG -> Recording, language = en, task = Task.TRANSCRIBE, device = 1, sample rate = 16000
[2022-12-02 16:18:45,786] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.15829291939735413
[2022-12-02 16:19:06,973] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 16:19:06,974] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 16:19:08,873] transcriber.process_queue:116 DEBUG -> Received next result, length = 5, time taken = 0:00:23.086748
[2022-12-02 16:19:08,908] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 19:09:08,132] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-02 19:09:09,529] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-02 19:09:09,953] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 2, sample rate = 16000
[2022-12-02 19:09:15,088] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.3247763514518738
[2022-12-02 19:09:18,024] transcriber.process_queue:116 DEBUG -> Received next result, length = 42, time taken = 0:00:02.935132
[2022-12-02 19:09:20,124] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.24977001547813416
[2022-12-02 19:09:22,817] transcriber.process_queue:116 DEBUG -> Received next result, length = 41, time taken = 0:00:02.692213
[2022-12-02 19:09:25,073] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.05723973736166954
[2022-12-02 19:09:30,673] transcriber.process_queue:116 DEBUG -> Received next result, length = 18, time taken = 0:00:05.599945
[2022-12-02 19:09:30,681] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.0668480396270752
[2022-12-02 19:09:33,341] transcriber.process_queue:116 DEBUG -> Received next result, length = 15, time taken = 0:00:02.660348
[2022-12-02 19:09:35,061] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.09535929560661316
[2022-12-02 19:09:42,920] transcriber.process_queue:116 DEBUG -> Received next result, length = 182, time taken = 0:00:07.857926
[2022-12-02 19:09:43,021] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 32001, amplitude = 0.08367961645126343
[2022-12-02 19:09:45,663] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 19:09:45,663] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 19:09:54,827] transcriber.process_queue:116 DEBUG -> Received next result, length = 111, time taken = 0:00:11.805685
[2022-12-02 19:09:54,830] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 19:10:15,318] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-02 19:10:16,597] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-02 19:10:16,987] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 2, sample rate = 16000
[2022-12-02 19:10:22,117] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.30127131938934326
[2022-12-02 19:10:33,387] transcriber.process_queue:116 DEBUG -> Received next result, length = 102, time taken = 0:00:11.269615
[2022-12-02 19:10:33,396] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 96001, amplitude = 0.4389173984527588
[2022-12-02 19:10:36,162] transcriber.process_queue:116 DEBUG -> Received next result, length = 87, time taken = 0:00:02.765760
[2022-12-02 19:10:36,171] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 64001, amplitude = 0.3582872450351715
[2022-12-02 19:10:46,824] transcriber.process_queue:116 DEBUG -> Received next result, length = 49, time taken = 0:00:10.652778
[2022-12-02 19:10:46,832] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 144001, amplitude = 0.2578394412994385
[2022-12-02 19:10:49,799] transcriber.process_queue:116 DEBUG -> Received next result, length = 113, time taken = 0:00:02.966586
[2022-12-02 19:10:49,807] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 112001, amplitude = 0.24726414680480957
[2022-12-02 19:10:52,793] transcriber.process_queue:116 DEBUG -> Received next result, length = 90, time taken = 0:00:02.986647
[2022-12-02 19:10:52,801] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 80001, amplitude = 0.2968771457672119
[2022-12-02 19:10:55,774] transcriber.process_queue:116 DEBUG -> Received next result, length = 83, time taken = 0:00:02.973660
[2022-12-02 19:10:55,782] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 48001, amplitude = 0.2585233747959137
[2022-12-02 19:10:58,797] transcriber.process_queue:116 DEBUG -> Received next result, length = 67, time taken = 0:00:03.015050
[2022-12-02 19:10:58,805] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 16001, amplitude = 0.3326118588447571
[2022-12-02 19:11:01,951] transcriber.process_queue:116 DEBUG -> Received next result, length = 91, time taken = 0:00:03.145470
[2022-12-02 19:11:02,138] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.2776346802711487
[2022-12-02 19:11:05,445] transcriber.process_queue:116 DEBUG -> Received next result, length = 98, time taken = 0:00:03.306373
[2022-12-02 19:11:07,088] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.3114374876022339
[2022-12-02 19:11:10,378] transcriber.process_queue:116 DEBUG -> Received next result, length = 119, time taken = 0:00:03.289474
[2022-12-02 19:11:12,137] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.26753219962120056
[2022-12-02 19:11:14,264] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 19:11:14,265] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 19:11:27,963] transcriber.process_queue:116 DEBUG -> Received next result, length = 92, time taken = 0:00:15.826330
[2022-12-02 19:11:27,965] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 19:11:52,990] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-02 19:11:54,191] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-02 19:11:54,582] transcriber.process_queue:81 DEBUG -> Recording, language = fr, task = Task.TRANSCRIBE, device = 2, sample rate = 16000
[2022-12-02 19:11:59,716] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.2066861391067505
[2022-12-02 19:12:04,465] transcriber.process_queue:116 DEBUG -> Received next result, length = 193, time taken = 0:00:04.748612
[2022-12-02 19:12:04,746] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.201328843832016
[2022-12-02 19:12:06,282] transcriber.process_queue:116 DEBUG -> Received next result, length = 60, time taken = 0:00:01.535452
[2022-12-02 19:12:09,695] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.23785275220870972
[2022-12-02 19:12:11,368] transcriber.process_queue:116 DEBUG -> Received next result, length = 90, time taken = 0:00:01.673281
[2022-12-02 19:12:14,730] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.22788913547992706
[2022-12-02 19:12:21,716] transcriber.process_queue:116 DEBUG -> Received next result, length = 127, time taken = 0:00:06.985471
[2022-12-02 19:12:21,725] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 32001, amplitude = 0.16706109046936035
[2022-12-02 19:12:23,485] transcriber.process_queue:116 DEBUG -> Received next result, length = 87, time taken = 0:00:01.759255
[2022-12-02 19:12:24,708] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.21054241061210632
[2022-12-02 19:12:26,565] transcriber.process_queue:116 DEBUG -> Received next result, length = 98, time taken = 0:00:01.856719
[2022-12-02 19:12:29,744] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.17741943895816803
[2022-12-02 19:12:31,643] transcriber.process_queue:116 DEBUG -> Received next result, length = 80, time taken = 0:00:01.899358
[2022-12-02 19:12:34,693] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.3242717385292053
[2022-12-02 19:12:39,891] transcriber.process_queue:116 DEBUG -> Received next result, length = 97, time taken = 0:00:05.197598
[2022-12-02 19:12:39,900] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.19012267887592316
[2022-12-02 19:12:41,819] transcriber.process_queue:116 DEBUG -> Received next result, length = 106, time taken = 0:00:01.919383
[2022-12-02 19:12:44,676] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.24466833472251892
[2022-12-02 19:12:46,597] transcriber.process_queue:116 DEBUG -> Received next result, length = 116, time taken = 0:00:01.920290
[2022-12-02 19:12:49,710] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.2291530817747116
[2022-12-02 19:12:51,570] transcriber.process_queue:116 DEBUG -> Received next result, length = 99, time taken = 0:00:01.859360
[2022-12-02 19:12:54,745] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.20052742958068848
[2022-12-02 19:12:56,569] transcriber.process_queue:116 DEBUG -> Received next result, length = 82, time taken = 0:00:01.822760
[2022-12-02 19:12:59,700] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.29614704847335815
[2022-12-02 19:13:01,633] transcriber.process_queue:116 DEBUG -> Received next result, length = 107, time taken = 0:00:01.932086
[2022-12-02 19:13:04,729] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.3353540897369385
[2022-12-02 19:13:06,642] transcriber.process_queue:116 DEBUG -> Received next result, length = 99, time taken = 0:00:01.912281
[2022-12-02 19:13:09,679] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.2578312158584595
[2022-12-02 19:13:20,629] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 19:13:20,629] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 19:13:22,333] transcriber.process_queue:116 DEBUG -> Received next result, length = 83, time taken = 0:00:12.652619
[2022-12-02 19:13:22,334] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 19:14:00,549] whispr.get_model_path:118 DEBUG -> Loading model = small, whisper.cpp = True
[2022-12-02 19:16:04,496] whispr.get_model_path:118 DEBUG -> Loading model = small, whisper.cpp = False
[2022-12-02 19:16:07,494] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/small.pt
[2022-12-02 19:16:09,516] transcriber.process_queue:81 DEBUG -> Recording, language = fr, task = Task.TRANSCRIBE, device = 0, sample rate = 16000
[2022-12-02 19:16:14,855] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 4.999999987376214e-07
[2022-12-02 19:16:31,332] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 19:16:31,333] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 19:17:23,065] transcriber.process_queue:116 DEBUG -> Received next result, length = 0, time taken = 0:01:08.208796
[2022-12-02 19:17:23,111] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 19:17:40,598] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-02 19:17:41,739] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-02 19:17:42,143] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 2, sample rate = 16000
[2022-12-02 19:17:47,266] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.22437821328639984
[2022-12-02 19:17:49,999] transcriber.process_queue:116 DEBUG -> Received next result, length = 28, time taken = 0:00:02.733252
[2022-12-02 19:17:52,299] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.32287272810935974
[2022-12-02 19:17:55,026] transcriber.process_queue:116 DEBUG -> Received next result, length = 66, time taken = 0:00:02.726603
[2022-12-02 19:17:57,259] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.2857457399368286
[2022-12-02 19:18:00,017] transcriber.process_queue:116 DEBUG -> Received next result, length = 107, time taken = 0:00:02.756781
[2022-12-02 19:18:02,283] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.27010732889175415
[2022-12-02 19:18:11,504] transcriber.process_queue:116 DEBUG -> Received next result, length = 99, time taken = 0:00:09.220409
[2022-12-02 19:18:11,512] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 64001, amplitude = 0.3289046883583069
[2022-12-02 19:18:14,414] transcriber.process_queue:116 DEBUG -> Received next result, length = 52, time taken = 0:00:02.902700
[2022-12-02 19:18:14,422] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 32001, amplitude = 0.3605231046676636
[2022-12-02 19:18:28,837] transcriber.process_queue:116 DEBUG -> Received next result, length = 129, time taken = 0:00:14.415141
[2022-12-02 19:18:28,935] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.3196001648902893
[2022-12-02 19:18:31,960] transcriber.process_queue:116 DEBUG -> Received next result, length = 59, time taken = 0:00:03.024277
[2022-12-02 19:18:31,967] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 128001, amplitude = 0.33224421739578247
[2022-12-02 19:18:34,951] transcriber.process_queue:116 DEBUG -> Received next result, length = 59, time taken = 0:00:02.983565
[2022-12-02 19:18:34,959] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 96001, amplitude = 0.29113852977752686
[2022-12-02 19:18:38,059] transcriber.process_queue:116 DEBUG -> Received next result, length = 79, time taken = 0:00:03.100348
[2022-12-02 19:18:38,067] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 64001, amplitude = 0.27384328842163086
[2022-12-02 19:18:41,213] transcriber.process_queue:116 DEBUG -> Received next result, length = 92, time taken = 0:00:03.145498
[2022-12-02 19:18:41,220] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 32001, amplitude = 0.290249228477478
[2022-12-02 19:18:47,676] transcriber.process_queue:116 DEBUG -> Received next result, length = 116, time taken = 0:00:06.456207
[2022-12-02 19:18:47,685] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 64001, amplitude = 0.30817970633506775
[2022-12-02 19:18:50,739] transcriber.process_queue:116 DEBUG -> Received next result, length = 71, time taken = 0:00:03.053674
[2022-12-02 19:18:50,747] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 32001, amplitude = 0.3022621273994446
[2022-12-02 19:18:53,883] transcriber.process_queue:116 DEBUG -> Received next result, length = 86, time taken = 0:00:03.136414
[2022-12-02 19:18:53,892] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.2935682535171509
[2022-12-02 19:18:58,860] transcriber.process_queue:116 DEBUG -> Received next result, length = 75, time taken = 0:00:04.967915
[2022-12-02 19:18:58,869] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.26547640562057495
[2022-12-02 19:19:00,414] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 19:19:00,415] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 19:19:11,457] transcriber.process_queue:116 DEBUG -> Received next result, length = 104, time taken = 0:00:12.588466
[2022-12-02 19:19:11,458] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 19:19:31,192] whispr.get_model_path:118 DEBUG -> Loading model = medium, whisper.cpp = False
[2022-12-02 19:19:38,363] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/medium.pt
[2022-12-02 19:19:45,601] transcriber.process_queue:81 DEBUG -> Recording, language = fr, task = Task.TRANSCRIBE, device = 2, sample rate = 16000
[2022-12-02 19:19:50,769] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.30591264367103577
[2022-12-02 19:20:51,966] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 19:20:51,967] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 19:21:14,262] transcriber.process_queue:116 DEBUG -> Received next result, length = 55, time taken = 0:01:23.491000
[2022-12-02 19:21:14,380] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 19:21:14,412] whispr.get_model_path:118 DEBUG -> Loading model = small, whisper.cpp = False
[2022-12-02 19:21:17,500] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/small.pt
[2022-12-02 19:21:19,450] transcriber.process_queue:81 DEBUG -> Recording, language = fr, task = Task.TRANSCRIBE, device = 2, sample rate = 16000
[2022-12-02 19:21:24,572] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.5509064793586731
[2022-12-02 19:21:36,657] transcriber.process_queue:116 DEBUG -> Received next result, length = 98, time taken = 0:00:12.084641
[2022-12-02 19:21:36,665] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 112001, amplitude = 0.3959283232688904
[2022-12-02 19:21:48,740] transcriber.process_queue:116 DEBUG -> Received next result, length = 82, time taken = 0:00:12.074691
[2022-12-02 19:21:48,747] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.3929823040962219
[2022-12-02 19:22:00,845] transcriber.process_queue:116 DEBUG -> Received next result, length = 45, time taken = 0:00:12.097383
[2022-12-02 19:22:00,853] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.2844272553920746
[2022-12-02 19:22:13,597] transcriber.process_queue:116 DEBUG -> Received next result, length = 81, time taken = 0:00:12.743933
[2022-12-02 19:22:13,607] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.28456106781959534
[2022-12-02 19:22:26,315] transcriber.process_queue:116 DEBUG -> Received next result, length = 68, time taken = 0:00:12.707968
[2022-12-02 19:22:26,324] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.29433953762054443
[2022-12-02 19:22:39,341] transcriber.process_queue:116 DEBUG -> Received next result, length = 75, time taken = 0:00:13.017075
[2022-12-02 19:22:39,350] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.3218532204627991
[2022-12-02 19:22:52,419] transcriber.process_queue:116 DEBUG -> Received next result, length = 79, time taken = 0:00:13.069116
[2022-12-02 19:22:52,429] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.31510722637176514
[2022-12-02 19:24:15,480] transcriber.process_queue:116 DEBUG -> Received next result, length = 114, time taken = 0:01:23.050850
[2022-12-02 19:24:15,489] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.41077888011932373
[2022-12-02 19:25:40,313] transcriber.process_queue:116 DEBUG -> Received next result, length = 21, time taken = 0:01:24.823302
[2022-12-02 19:25:40,323] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.42960453033447266
[2022-12-02 19:27:18,831] transcriber.process_queue:116 DEBUG -> Received next result, length = 32, time taken = 0:01:38.507647
[2022-12-02 19:27:18,842] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.34538355469703674
[2022-12-02 19:27:32,794] transcriber.process_queue:116 DEBUG -> Received next result, length = 65, time taken = 0:00:13.951803
[2022-12-02 19:27:32,805] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.34153735637664795
[2022-12-02 19:29:09,739] transcriber.process_queue:116 DEBUG -> Received next result, length = 86, time taken = 0:01:36.933496
[2022-12-02 19:29:09,750] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.40689897537231445
[2022-12-02 19:29:24,170] transcriber.process_queue:116 DEBUG -> Received next result, length = 95, time taken = 0:00:14.420138
[2022-12-02 19:29:24,178] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.45255690813064575
[2022-12-02 19:31:02,476] transcriber.process_queue:116 DEBUG -> Received next result, length = 257, time taken = 0:01:38.297252
[2022-12-02 19:31:02,490] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.29057615995407104
[2022-12-02 19:32:39,960] transcriber.process_queue:116 DEBUG -> Received next result, length = 20, time taken = 0:01:37.469677
[2022-12-02 19:32:39,970] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.3199145197868347
[2022-12-02 19:33:06,536] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 19:33:06,537] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 19:33:07,585] transcriber.process_queue:116 DEBUG -> Received next result, length = 54, time taken = 0:00:27.615506
[2022-12-02 19:33:07,633] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 19:33:31,593] whispr.get_model_path:118 DEBUG -> Loading model = small, whisper.cpp = False
[2022-12-02 19:33:33,940] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/small.pt
[2022-12-02 19:33:35,877] transcriber.process_queue:81 DEBUG -> Recording, language = fr, task = Task.TRANSCRIBE, device = 2, sample rate = 16000
[2022-12-02 19:33:41,002] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.447084903717041
[2022-12-02 19:34:05,365] transcriber.process_queue:116 DEBUG -> Received next result, length = 75, time taken = 0:00:24.363021
[2022-12-02 19:34:05,374] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.23841620981693268
[2022-12-02 19:34:17,443] transcriber.process_queue:116 DEBUG -> Received next result, length = 94, time taken = 0:00:12.068658
[2022-12-02 19:34:17,451] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.2211081087589264
[2022-12-02 19:34:29,704] transcriber.process_queue:116 DEBUG -> Received next result, length = 77, time taken = 0:00:12.252460
[2022-12-02 19:34:29,711] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.32525330781936646
[2022-12-02 19:35:56,551] transcriber.process_queue:116 DEBUG -> Received next result, length = 101, time taken = 0:01:26.839225
[2022-12-02 19:35:56,563] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.2856135070323944
[2022-12-02 19:36:09,899] transcriber.process_queue:116 DEBUG -> Received next result, length = 104, time taken = 0:00:13.335943
[2022-12-02 19:36:09,908] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.3716166913509369
[2022-12-02 19:36:23,606] transcriber.process_queue:116 DEBUG -> Received next result, length = 50, time taken = 0:00:13.697811
[2022-12-02 19:36:23,616] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.4951305091381073
[2022-12-02 19:36:37,687] transcriber.process_queue:116 DEBUG -> Received next result, length = 62, time taken = 0:00:14.071024
[2022-12-02 19:36:37,698] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.464813768863678
[2022-12-02 19:38:22,096] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 19:38:22,098] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 19:38:23,059] transcriber.process_queue:116 DEBUG -> Received next result, length = 119, time taken = 0:01:45.361759
[2022-12-02 19:38:23,091] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-02 19:40:52,307] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-02 19:40:53,464] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-02 19:40:53,846] transcriber.process_queue:81 DEBUG -> Recording, language = fr, task = Task.TRANSCRIBE, device = 2, sample rate = 16000
[2022-12-02 19:40:58,968] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.48416760563850403
[2022-12-02 19:41:05,366] transcriber.process_queue:116 DEBUG -> Received next result, length = 141, time taken = 0:00:06.398314
[2022-12-02 19:41:05,374] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 16001, amplitude = 0.4367164373397827
[2022-12-02 19:41:15,733] transcriber.process_queue:116 DEBUG -> Received next result, length = 68, time taken = 0:00:10.358917
[2022-12-02 19:41:15,741] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 96001, amplitude = 0.2440016269683838
[2022-12-02 19:41:17,464] transcriber.process_queue:116 DEBUG -> Received next result, length = 109, time taken = 0:00:01.722551
[2022-12-02 19:41:17,472] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 48001, amplitude = 0.27857959270477295
[2022-12-02 19:41:19,202] transcriber.process_queue:116 DEBUG -> Received next result, length = 96, time taken = 0:00:01.729843
[2022-12-02 19:41:19,210] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.33792513608932495
[2022-12-02 19:41:29,992] transcriber.process_queue:116 DEBUG -> Received next result, length = 55, time taken = 0:00:10.782340
[2022-12-02 19:41:30,100] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 96001, amplitude = 0.3007902503013611
[2022-12-02 19:41:45,950] transcriber.process_queue:116 DEBUG -> Received next result, length = 176, time taken = 0:00:15.849271
[2022-12-02 19:41:45,958] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.24651774764060974
[2022-12-02 19:41:58,638] transcriber.process_queue:116 DEBUG -> Received next result, length = 55, time taken = 0:00:12.679586
[2022-12-02 19:41:58,647] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.36448076367378235
[2022-12-02 19:42:07,618] transcriber.process_queue:116 DEBUG -> Received next result, length = 57, time taken = 0:00:08.970939
[2022-12-02 19:42:07,626] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.26903674006462097
[2022-12-02 19:42:19,572] transcriber.process_queue:116 DEBUG -> Received next result, length = 41, time taken = 0:00:11.946476
[2022-12-02 19:42:19,580] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.2663012146949768
[2022-12-02 19:42:30,883] transcriber.process_queue:116 DEBUG -> Received next result, length = 128, time taken = 0:00:11.302397
[2022-12-02 19:42:30,891] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.2884479761123657
[2022-12-02 19:42:32,576] transcriber.process_queue:116 DEBUG -> Received next result, length = 6, time taken = 0:00:01.684324
[2022-12-02 19:42:32,586] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 112001, amplitude = 0.33294379711151123
[2022-12-02 19:42:34,494] transcriber.process_queue:116 DEBUG -> Received next result, length = 108, time taken = 0:00:01.908050
[2022-12-02 19:42:34,502] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 64001, amplitude = 0.28870558738708496
[2022-12-02 19:42:36,363] transcriber.process_queue:116 DEBUG -> Received next result, length = 73, time taken = 0:00:01.860923
[2022-12-02 19:42:36,371] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 16001, amplitude = 0.3381248712539673
[2022-12-02 19:42:38,242] transcriber.process_queue:116 DEBUG -> Received next result, length = 95, time taken = 0:00:01.870835
[2022-12-02 19:42:39,997] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.3269389569759369
[2022-12-02 19:42:54,302] transcriber.process_queue:116 DEBUG -> Received next result, length = 112, time taken = 0:00:14.304833
[2022-12-02 19:42:54,310] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 144001, amplitude = 0.390110045671463
[2022-12-02 19:42:56,137] transcriber.process_queue:116 DEBUG -> Received next result, length = 91, time taken = 0:00:01.826587
[2022-12-02 19:42:56,145] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 96001, amplitude = 0.3015826940536499
[2022-12-02 19:42:57,943] transcriber.process_queue:116 DEBUG -> Received next result, length = 71, time taken = 0:00:01.798355
[2022-12-02 19:42:57,951] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 32001, amplitude = 0.3360745906829834
[2022-12-02 19:42:59,719] transcriber.process_queue:116 DEBUG -> Received next result, length = 51, time taken = 0:00:01.768057
[2022-12-02 19:42:59,964] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.27217572927474976
[2022-12-02 19:43:13,017] transcriber.process_queue:116 DEBUG -> Received next result, length = 133, time taken = 0:00:13.052547
[2022-12-02 19:43:13,032] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 128001, amplitude = 0.2894386351108551
[2022-12-02 19:43:27,354] transcriber.process_queue:116 DEBUG -> Received next result, length = 125, time taken = 0:00:14.321384
[2022-12-02 19:43:27,361] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.21431156992912292
[2022-12-02 19:43:29,295] transcriber.process_queue:116 DEBUG -> Received next result, length = 104, time taken = 0:00:01.933220
[2022-12-02 19:43:29,303] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 112001, amplitude = 0.2195204645395279
[2022-12-02 19:43:31,172] transcriber.process_queue:116 DEBUG -> Received next result, length = 109, time taken = 0:00:01.868891
[2022-12-02 19:43:31,179] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 64001, amplitude = 0.22988831996917725
[2022-12-02 19:43:33,010] transcriber.process_queue:116 DEBUG -> Received next result, length = 64, time taken = 0:00:01.830834
[2022-12-02 19:43:33,018] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 16001, amplitude = 0.27261316776275635
[2022-12-02 19:43:34,876] transcriber.process_queue:116 DEBUG -> Received next result, length = 74, time taken = 0:00:01.857195
[2022-12-02 19:43:36,999] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.2540932297706604
[2022-12-02 19:43:41,368] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-02 19:43:41,368] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-02 19:43:52,807] transcriber.process_queue:116 DEBUG -> Received next result, length = 101, time taken = 0:00:15.807780
[2022-12-02 19:43:52,809] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-05 11:24:39,310] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-05 11:24:40,733] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-05 11:24:41,161] transcriber.process_queue:81 DEBUG -> Recording, language = en, task = Task.TRANSCRIBE, device = 3, sample rate = 16000
[2022-12-05 11:24:46,274] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.5738022923469543
[2022-12-05 11:24:47,952] transcriber.process_queue:116 DEBUG -> Received next result, length = 70, time taken = 0:00:01.677219
[2022-12-05 11:24:51,308] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.35075777769088745
[2022-12-05 11:24:52,732] transcriber.process_queue:116 DEBUG -> Received next result, length = 58, time taken = 0:00:01.423713
[2022-12-05 11:24:56,269] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.0183669812977314
[2022-12-05 11:25:04,698] transcriber.process_queue:116 DEBUG -> Received next result, length = 0, time taken = 0:00:08.426237
[2022-12-05 11:25:04,706] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 48001, amplitude = 0.01710141822695732
[2022-12-05 11:25:07,972] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-05 11:25:07,972] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-05 11:25:13,678] transcriber.process_queue:116 DEBUG -> Received next result, length = 0, time taken = 0:00:08.971892
[2022-12-05 11:25:13,679] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-05 11:25:39,255] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-05 11:25:40,534] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-05 11:25:40,953] transcriber.process_queue:81 DEBUG -> Recording, language = fr, task = Task.TRANSCRIBE, device = 3, sample rate = 16000
[2022-12-05 11:25:46,091] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.10099959373474121
[2022-12-05 11:25:47,580] transcriber.process_queue:116 DEBUG -> Received next result, length = 59, time taken = 0:00:01.489224
[2022-12-05 11:25:51,109] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.10501518845558167
[2022-12-05 11:25:52,534] transcriber.process_queue:116 DEBUG -> Received next result, length = 43, time taken = 0:00:01.424976
[2022-12-05 11:25:56,064] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.11423271894454956
[2022-12-05 11:25:57,514] transcriber.process_queue:116 DEBUG -> Received next result, length = 46, time taken = 0:00:01.449903
[2022-12-05 11:26:01,093] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.1386222541332245
[2022-12-05 11:26:02,595] transcriber.process_queue:116 DEBUG -> Received next result, length = 65, time taken = 0:00:01.501401
[2022-12-05 11:26:06,042] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.10873367637395859
[2022-12-05 11:26:07,607] transcriber.process_queue:116 DEBUG -> Received next result, length = 79, time taken = 0:00:01.564645
[2022-12-05 11:26:11,076] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.09702026844024658
[2022-12-05 11:26:12,622] transcriber.process_queue:116 DEBUG -> Received next result, length = 52, time taken = 0:00:01.543867
[2022-12-05 11:26:16,127] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.10458970814943314
[2022-12-05 11:26:17,759] transcriber.process_queue:116 DEBUG -> Received next result, length = 67, time taken = 0:00:01.632005
[2022-12-05 11:26:21,071] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.09075969457626343
[2022-12-05 11:26:38,405] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-05 11:26:38,406] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-05 11:26:38,547] transcriber.process_queue:116 DEBUG -> Received next result, length = 81, time taken = 0:00:17.475854
[2022-12-05 11:26:38,548] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-05 11:26:43,604] whispr.get_model_path:118 DEBUG -> Loading model = small, whisper.cpp = False
[2022-12-05 11:26:46,437] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/small.pt
[2022-12-05 11:26:48,408] transcriber.process_queue:81 DEBUG -> Recording, language = fr, task = Task.TRANSCRIBE, device = 3, sample rate = 16000
[2022-12-05 11:26:53,551] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.16344428062438965
[2022-12-05 11:27:05,527] transcriber.process_queue:116 DEBUG -> Received next result, length = 65, time taken = 0:00:11.975093
[2022-12-05 11:27:05,535] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 96001, amplitude = 0.09387989342212677
[2022-12-05 11:27:17,365] transcriber.process_queue:116 DEBUG -> Received next result, length = 60, time taken = 0:00:11.829981
[2022-12-05 11:27:17,374] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.10076232254505157
[2022-12-05 11:27:29,785] transcriber.process_queue:116 DEBUG -> Received next result, length = 69, time taken = 0:00:12.410604
[2022-12-05 11:27:29,794] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.0832691639661789
[2022-12-05 11:27:41,891] transcriber.process_queue:116 DEBUG -> Received next result, length = 48, time taken = 0:00:12.096695
[2022-12-05 11:27:41,899] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.11095049977302551
[2022-12-05 11:27:54,611] transcriber.process_queue:116 DEBUG -> Received next result, length = 79, time taken = 0:00:12.712606
[2022-12-05 11:27:54,620] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.09840452671051025
[2022-12-05 11:28:21,180] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-05 11:28:21,182] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-05 11:29:10,191] transcriber.process_queue:116 DEBUG -> Received next result, length = 23, time taken = 0:01:15.570695
[2022-12-05 11:29:10,222] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-05 11:30:38,902] whispr.get_model_path:118 DEBUG -> Loading model = small, whisper.cpp = True
[2022-12-05 11:31:00,810] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/Library/Caches/Buzz/ggml-model-whisper-small.bin
[2022-12-05 11:31:16,834] whispr.get_model_path:118 DEBUG -> Loading model = small, whisper.cpp = True
[2022-12-05 11:31:19,520] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/Library/Caches/Buzz/ggml-model-whisper-small.bin
[2022-12-05 11:31:32,825] whispr.get_model_path:118 DEBUG -> Loading model = tiny, whisper.cpp = False
[2022-12-05 11:31:34,044] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-05 11:31:34,443] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 3, sample rate = 16000
[2022-12-05 11:31:39,566] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.10563242435455322
[2022-12-05 11:31:44,226] transcriber.process_queue:116 DEBUG -> Received next result, length = 50, time taken = 0:00:04.660080
[2022-12-05 11:31:44,595] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.12101537734270096
[2022-12-05 11:31:47,669] transcriber.process_queue:116 DEBUG -> Received next result, length = 68, time taken = 0:00:03.073948
[2022-12-05 11:31:49,544] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.18491193652153015
[2022-12-05 11:31:52,693] transcriber.process_queue:116 DEBUG -> Received next result, length = 60, time taken = 0:00:03.148623
[2022-12-05 11:31:54,584] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.10604482889175415
[2022-12-05 11:31:56,565] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-05 11:31:56,565] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-05 11:31:57,745] transcriber.process_queue:116 DEBUG -> Received next result, length = 67, time taken = 0:00:03.160327
[2022-12-05 11:31:57,748] transcriber.stop_recording:161 DEBUG -> Recording thread terminated
[2022-12-05 11:32:03,277] whispr.get_model_path:118 DEBUG -> Loading model = medium, whisper.cpp = False
[2022-12-05 11:32:10,048] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/medium.pt
[2022-12-05 11:32:17,191] transcriber.process_queue:81 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 3, sample rate = 16000
[2022-12-05 11:32:22,346] transcriber.process_queue:99 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.1507277935743332
[2022-12-05 11:32:37,284] transcriber.stop_recording:156 DEBUG -> Closed recording stream
[2022-12-05 11:32:37,285] transcriber.stop_recording:159 DEBUG -> Waiting for recording thread to terminate
[2022-12-05 11:36:22,012] whispr.get_model_path:118 DEBUG -> Loading model = medium, whisper.cpp = False
[2022-12-05 11:36:28,947] whispr.get_model_path:140 DEBUG -> Model path = /Users/steven/.cache/whisper/medium.pt
[2022-12-05 11:36:28,948] transcriber.transcribe:277 DEBUG -> Starting file transcription, file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Autour de la question/Comment décoloniser la science .mp3, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Autour de la question/Comment décoloniser la science (Transcribed on 05-Dec-2022 11-36-19).txt, output format = OutputFormat.TXT, model_path = /Users/steven/.cache/whisper/medium.pt
[2022-12-05 14:59:44,654] transcriber.transcribe:324 DEBUG -> Completed file transcription, time taken = 3:23:15.156850
[2022-12-16 16:45:40,254] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:45:40,258] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:45:49,305] transcriber.process_queue:98 DEBUG -> Recording, language = None, task = Task.TRANSCRIBE, device = 3, sample rate = 16000, model_path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-16 16:45:54,410] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.15537284314632416
[2022-12-16 16:45:57,575] transcriber.process_queue:134 DEBUG -> Received next result, length = 71, time taken = 0:00:03.164621
[2022-12-16 16:45:59,455] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.11282274127006531
[2022-12-16 16:46:02,482] transcriber.process_queue:134 DEBUG -> Received next result, length = 50, time taken = 0:00:03.026584
[2022-12-16 16:46:04,398] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.020677300170063972
[2022-12-16 16:46:15,056] transcriber.process_queue:134 DEBUG -> Received next result, length = 23, time taken = 0:00:10.657396
[2022-12-16 16:46:15,182] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 80001, amplitude = 0.13592766225337982
[2022-12-16 16:46:18,114] transcriber.process_queue:134 DEBUG -> Received next result, length = 42, time taken = 0:00:02.931311
[2022-12-16 16:46:18,121] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 48001, amplitude = 0.16573503613471985
[2022-12-16 16:46:30,688] transcriber.process_queue:134 DEBUG -> Received next result, length = 106, time taken = 0:00:12.566561
[2022-12-16 16:46:30,697] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.16427335143089294
[2022-12-16 16:46:33,761] transcriber.process_queue:134 DEBUG -> Received next result, length = 62, time taken = 0:00:03.064327
[2022-12-16 16:46:33,769] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 128001, amplitude = 0.2076166570186615
[2022-12-16 16:46:36,931] transcriber.process_queue:134 DEBUG -> Received next result, length = 96, time taken = 0:00:03.162099
[2022-12-16 16:46:36,939] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 96001, amplitude = 0.15268521010875702
[2022-12-16 16:46:40,094] transcriber.process_queue:134 DEBUG -> Received next result, length = 71, time taken = 0:00:03.155019
[2022-12-16 16:46:40,102] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 64001, amplitude = 0.12776078283786774
[2022-12-16 16:46:43,397] transcriber.process_queue:134 DEBUG -> Received next result, length = 82, time taken = 0:00:03.294216
[2022-12-16 16:46:43,405] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 48001, amplitude = 0.15172822773456573
[2022-12-16 16:46:46,729] transcriber.process_queue:134 DEBUG -> Received next result, length = 78, time taken = 0:00:03.324171
[2022-12-16 16:46:46,738] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 16001, amplitude = 0.14099794626235962
[2022-12-16 16:46:50,117] transcriber.process_queue:134 DEBUG -> Received next result, length = 90, time taken = 0:00:03.379234
[2022-12-16 16:46:50,393] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.10724663734436035
[2022-12-16 16:46:53,749] transcriber.process_queue:134 DEBUG -> Received next result, length = 56, time taken = 0:00:03.354820
[2022-12-16 16:46:55,423] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.10390578210353851
[2022-12-16 16:46:58,762] transcriber.process_queue:134 DEBUG -> Received next result, length = 16, time taken = 0:00:03.338030
[2022-12-16 16:47:00,464] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.13473433256149292
[2022-12-16 16:47:05,648] transcriber.process_queue:134 DEBUG -> Received next result, length = 32, time taken = 0:00:05.183451
[2022-12-16 16:47:05,656] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.01626577042043209
[2022-12-16 16:47:21,839] transcriber.process_queue:134 DEBUG -> Received next result, length = 0, time taken = 0:00:16.182921
[2022-12-16 16:47:21,847] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.12504583597183228
[2022-12-16 16:47:25,207] transcriber.process_queue:134 DEBUG -> Received next result, length = 74, time taken = 0:00:03.360598
[2022-12-16 16:47:25,215] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 128001, amplitude = 0.15679895877838135
[2022-12-16 16:47:28,562] transcriber.process_queue:134 DEBUG -> Received next result, length = 89, time taken = 0:00:03.347006
[2022-12-16 16:47:28,571] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 112001, amplitude = 0.13609588146209717
[2022-12-16 16:47:31,899] transcriber.process_queue:134 DEBUG -> Received next result, length = 68, time taken = 0:00:03.328137
[2022-12-16 16:47:31,907] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 80001, amplitude = 0.154939666390419
[2022-12-16 16:47:35,243] transcriber.process_queue:134 DEBUG -> Received next result, length = 70, time taken = 0:00:03.335620
[2022-12-16 16:47:35,251] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 48001, amplitude = 0.17463064193725586
[2022-12-16 16:47:38,673] transcriber.process_queue:134 DEBUG -> Received next result, length = 98, time taken = 0:00:03.421911
[2022-12-16 16:47:38,684] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 32001, amplitude = 0.20816701650619507
[2022-12-16 16:47:41,953] transcriber.process_queue:134 DEBUG -> Received next result, length = 61, time taken = 0:00:03.268723
[2022-12-16 16:47:41,961] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 1, amplitude = 0.017718391492962837
[2022-12-16 16:47:56,215] transcriber.process_queue:134 DEBUG -> Received next result, length = 0, time taken = 0:00:14.253870
[2022-12-16 16:47:56,222] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 144001, amplitude = 0.14641526341438293
[2022-12-16 16:47:59,521] transcriber.process_queue:134 DEBUG -> Received next result, length = 49, time taken = 0:00:03.297962
[2022-12-16 16:47:59,529] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 128001, amplitude = 0.1594594419002533
[2022-12-16 16:48:02,861] transcriber.process_queue:134 DEBUG -> Received next result, length = 71, time taken = 0:00:03.332459
[2022-12-16 16:48:02,869] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 96001, amplitude = 0.13744217157363892
[2022-12-16 16:48:22,166] transcriber.process_queue:134 DEBUG -> Received next result, length = 6, time taken = 0:00:19.296607
[2022-12-16 16:48:22,176] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.22226868569850922
[2022-12-16 16:48:25,527] transcriber.process_queue:134 DEBUG -> Received next result, length = 54, time taken = 0:00:03.350660
[2022-12-16 16:48:25,534] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 144001, amplitude = 0.19941751658916473
[2022-12-16 16:48:28,904] transcriber.process_queue:134 DEBUG -> Received next result, length = 65, time taken = 0:00:03.369692
[2022-12-16 16:48:28,913] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 112001, amplitude = 0.15561245381832123
[2022-12-16 16:48:32,314] transcriber.process_queue:134 DEBUG -> Received next result, length = 93, time taken = 0:00:03.400505
[2022-12-16 16:48:32,322] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 80001, amplitude = 0.05946808308362961
[2022-12-16 16:48:46,757] transcriber.process_queue:134 DEBUG -> Received next result, length = 51, time taken = 0:00:14.435425
[2022-12-16 16:48:46,766] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 160001, amplitude = 0.12715330719947815
[2022-12-16 16:48:50,150] transcriber.process_queue:134 DEBUG -> Received next result, length = 67, time taken = 0:00:03.383232
[2022-12-16 16:48:50,158] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 128001, amplitude = 0.1727868765592575
[2022-12-16 16:48:53,604] transcriber.process_queue:134 DEBUG -> Received next result, length = 53, time taken = 0:00:03.446148
[2022-12-16 16:48:53,613] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 112001, amplitude = 0.16651204228401184
[2022-12-16 16:48:56,980] transcriber.process_queue:134 DEBUG -> Received next result, length = 81, time taken = 0:00:03.366408
[2022-12-16 16:48:56,988] transcriber.process_queue:117 DEBUG -> Processing next frame, sample size = 80000, queue size = 80001, amplitude = 0.23764702677726746
[2022-12-16 16:49:02,268] transcriber.process_queue:134 DEBUG -> Received next result, length = 80, time taken = 0:00:05.280739
[2022-12-16 16:49:02,402] transcriber.stop_recording:174 DEBUG -> Closed recording stream
[2022-12-16 16:49:02,402] transcriber.stop_recording:177 DEBUG -> Waiting for recording thread to terminate
[2022-12-16 16:49:02,402] transcriber.stop_recording:179 DEBUG -> Recording thread terminated
[2022-12-16 16:50:57,249] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:50:57,252] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:51:09,508] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2022-12-16 16:51:10,063] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2022-12-16 16:51:33,495] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:51:33,499] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:51:57,785] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:51:57,787] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:52:06,040] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2022-12-16 16:52:06,210] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2022-12-16 16:52:29,931] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:52:29,933] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:55:07,206] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:55:07,208] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:55:44,390] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:55:44,391] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:55:54,620] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2022-12-16 16:55:54,771] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2022-12-16 16:56:06,411] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:56:06,412] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:56:31,310] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:56:31,311] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-16 16:56:44,669] transcriber.run:325 DEBUG -> Starting file transcription, file path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Robert/Robbie_Video-20221216_150515-Meeting Recording.mp4, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Robert/Robbie_Video-20221216_150515-Meeting Recording (Transcribed on 16-Dec-2022 16-56-37).srt, output format = OutputFormat.SRT, model_path = /Users/steven/.cache/whisper/medium.pt
[2022-12-16 21:04:30,786] transcriber.run:347 DEBUG -> whisper process completed with code = 0, time taken = 4:07:46.098309
[2022-12-22 16:14:49,149] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-22 16:14:49,154] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-22 16:15:22,399] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-22 16:15:22,400] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-22 16:15:56,855] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-22 16:15:56,856] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-22 16:16:10,408] transcriber.run:325 DEBUG -> Starting file transcription, file path = /Users/steven/Downloads/Elizabeth_Osborne_Video-20221222_155344-Meeting Recording.mp4, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Downloads/Elizabeth_Osborne_Video-20221222_155344-Meeting Recording (Transcribed on 22-Dec-2022 16-16-02).srt, output format = OutputFormat.SRT, model_path = /Users/steven/.cache/whisper/medium.pt
[2022-12-22 18:44:26,383] transcriber.run:347 DEBUG -> whisper process completed with code = 0, time taken = 2:28:15.971196
[2022-12-27 10:08:01,678] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-27 10:08:01,680] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-27 10:08:09,296] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2022-12-27 10:08:09,588] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2022-12-28 16:24:23,759] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-28 16:24:23,761] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-28 16:24:52,458] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-28 16:24:52,459] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-28 16:25:11,605] transcriber.run:325 DEBUG -> Starting file transcription, file path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenL_Video-20221227_193017-Meeting Recording/StephenL_Video-20221227_193017-Meeting Recording.mp4, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenL_Video-20221227_193017-Meeting Recording/StephenL_Video-20221227_193017-Meeting Recording (Transcribed on 28-Dec-2022 16-25-03).srt, output format = OutputFormat.SRT, model_path = /Users/steven/.cache/whisper/medium.pt
[2022-12-28 21:58:16,498] transcriber.stop:359 DEBUG -> File transcription process terminated
[2022-12-28 21:58:16,506] transcriber.stop:359 DEBUG -> File transcription process terminated
[2022-12-28 21:58:16,740] transcriber.run:347 DEBUG -> whisper process completed with code = -15, time taken = 5:33:05.135284
[2022-12-28 22:55:24,779] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-28 22:55:24,785] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-28 22:55:43,357] transcriber.run:325 DEBUG -> Starting file transcription, file path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenL_Video-20221227_193017-Meeting Recording/StephenL_Video-20221227_193017-Meeting Recording.mp4, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenL_Video-20221227_193017-Meeting Recording/StephenL_Video-20221227_193017-Meeting Recording (Transcribed on 28-Dec-2022 22-55-32).srt, output format = OutputFormat.SRT, model_path = /Users/steven/.cache/whisper/medium.pt
[2022-12-28 22:56:11,316] transcriber.stop:359 DEBUG -> File transcription process terminated
[2022-12-28 22:56:11,530] transcriber.run:347 DEBUG -> whisper process completed with code = -15, time taken = 0:00:28.173333
[2022-12-28 22:56:30,648] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-28 22:56:30,649] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-30 10:54:29,604] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-30 10:54:29,607] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-30 10:54:58,057] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-30 10:54:58,058] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-30 10:55:07,676] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2022-12-30 10:55:09,143] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2022-12-30 10:55:25,811] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-30 10:55:25,812] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-30 10:55:37,700] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-30 10:55:37,701] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-30 10:55:55,261] transcriber.run:325 DEBUG -> Starting file transcription, file path = /Users/steven/Downloads/Le phénomène Chat GPT.mp4, language = fr, task = Task.TRANSCRIBE, output file path = /Users/steven/Downloads/Le phénomène Chat GPT (Transcribed on 30-Dec-2022 10-55-47).srt, output format = OutputFormat.SRT, model_path = /Users/steven/.cache/whisper/medium.pt
[2022-12-30 12:26:14,957] transcriber.stop:359 DEBUG -> File transcription process terminated
[2022-12-30 12:26:14,960] transcriber.stop:359 DEBUG -> File transcription process terminated
[2022-12-30 12:26:15,134] transcriber.run:347 DEBUG -> whisper process completed with code = -15, time taken = 1:30:19.873182
[2022-12-30 12:26:48,151] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-30 12:26:48,152] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:47:24,233] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:47:24,236] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:47:34,322] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2022-12-31 12:47:34,484] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2022-12-31 12:49:26,280] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:49:26,280] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:49:35,882] transcriber.run:325 DEBUG -> Starting file transcription, file path = /Users/steven/Downloads/La production nucléaire française dépasse les 40 GW - pourquoi est-ce une bonne nouvelle ? - lindependant.fr.mp3, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Downloads/La production nucléaire française dépasse les 40 GW - pourquoi est-ce une bonne nouvelle ? - lindependant.fr (Transcribed on 31-Dec-2022 12-49-33).txt, output format = OutputFormat.TXT, model_path = /Users/steven/.cache/whisper/tiny.pt
[2022-12-31 12:49:46,641] transcriber.run:347 DEBUG -> whisper process completed with code = 0, time taken = 0:00:10.759701
[2022-12-31 12:50:07,806] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2022-12-31 12:50:07,955] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2022-12-31 12:50:22,763] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:50:22,764] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:50:41,732] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:50:41,732] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:50:50,477] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2022-12-31 12:50:50,617] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2022-12-31 12:51:06,371] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:51:06,372] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:51:19,033] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:51:19,034] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:51:34,683] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2022-12-31 12:51:34,834] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2022-12-31 12:51:42,107] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:51:42,108] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:51:54,527] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:51:54,528] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:52:04,756] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2022-12-31 12:52:04,892] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2022-12-31 12:52:39,091] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:52:39,092] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:53:14,079] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:53:14,080] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:53:28,190] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2022-12-31 12:53:28,315] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2022-12-31 12:53:43,587] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:53:43,589] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:53:53,886] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): api.github.com:443
[2022-12-31 12:53:54,136] connectionpool._make_request:456 DEBUG -> https://api.github.com:443 "GET /repos/chidiwilliams/buzz/releases/latest HTTP/1.1" 200 1296
[2022-12-31 12:54:40,531] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:54:40,532] gui.init:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 12:54:56,380] transcriber.run:325 DEBUG -> Starting file transcription, file path = /Users/steven/Movies/FRENCH VIDEO/001 - Racisme - le point de vue de Lili.mp4, language = fr, task = Task.TRANSCRIBE, output file path = /Users/steven/Downloads/001 - Racisme - le point de vue de Lili (Transcribed on 31-Dec-2022 12-54-50).srt, output format = OutputFormat.SRT, model_path = /Users/steven/.cache/whisper/small.pt
[2022-12-31 12:57:19,865] transcriber.run:347 DEBUG -> whisper process completed with code = 0, time taken = 0:02:23.485568
[2023-01-03 13:03:17,299] gui.__init__:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2023-01-03 13:03:17,300] gui.__init__:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2022-12-31 13:03:33,311] transcriber.run:325 DEBUG -> Starting file transcription, file path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenL_Video-20221227_193017-Meeting Recording/StephenL_Video-20221227_193017-Meeting Recording.mp4, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Downloads/StephenL_Video-20221227_193017-Meeting Recording (Transcribed on 31-Dec-2022 13-03-25).srt, output format = OutputFormat.SRT, model_path = /Users/steven/.cache/whisper/small.pt
[2022-12-31 14:16:15,631] transcriber.run:347 DEBUG -> whisper process completed with code = 0, time taken = 1:12:42.319952
[2023-01-03 10:33:25,819] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 10:34:25,861] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2023-01-03 10:34:26,182] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt HTTP/1.1" 200 3086999982
[2023-01-03 10:38:01,020] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Fiascos_industriels__les_Google_Glass.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.LARGE: 'large'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Fiascos_industriels__les_Google_Glass.mp3']), model_path='/Users/steven/.cache/whisper/large-v2.pt', id=303800, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 10:38:19,421] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:18.401115
[2023-01-03 10:38:19,426] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 10:40:57,022] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Fiascos_industriels__les_Google_Glass.mp3', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.8,), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Fiascos_industriels__les_Google_Glass.mp3']), model_path='/Users/steven/.cache/whisper/medium.pt', id=709567, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 10:41:04,526] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:07.504517
[2023-01-03 10:41:04,527] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 10:41:39,149] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 10:42:30,740] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/En Turquie, un avortement légal mais inaccessible dans les hôpitaux.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/En Turquie, un avortement légal mais inaccessible dans les hôpitaux.mp3']), model_path='/Users/steven/.cache/whisper/medium.pt', id=269686, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 10:42:39,784] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:09.043178
[2023-01-03 10:42:39,785] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 10:43:48,375] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 10:44:35,759] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenLargen_Video-20220830_181150-Meeting Recording.mp3', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.SMALL: 'small'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenLargen_Video-20220830_181150-Meeting Recording.mp3']), model_path='/Users/steven/.cache/whisper/small.pt', id=412791, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 10:44:39,186] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:03.426922
[2023-01-03 10:44:39,187] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 10:45:06,577] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 10:51:03,015] gui.__init__:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2023-01-03 10:51:03,017] gui.__init__:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2023-01-03 10:51:37,519] gui.__init__:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2023-01-03 10:51:37,520] gui.__init__:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2023-01-03 10:51:52,158] transcriber.run:325 DEBUG -> Starting file transcription, file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Fiascos_industriels__les_Google_Glass.mp3, language = None, task = Task.TRANSCRIBE, output file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Fiascos_industriels__les_Google_Glass (Transcribed on 03-Jan-2023 10-51-47).srt, output format = OutputFormat.SRT, model_path = /Users/steven/.cache/whisper/small.pt
[2023-01-03 10:55:42,297] transcriber.run:347 DEBUG -> whisper process completed with code = 0, time taken = 0:03:50.138374
[2023-01-03 11:16:56,498] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 11:17:46,560] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Downloads/StephenLargen_Video-20220830_181150-Meeting Recording (Transcribed on 03-Jan-2023 11-17-35).txt, output format = OutputFormat.TXT, number of segments = 0
[2023-01-03 11:17:59,574] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 11:21:48,699] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-03 11:21:48,914] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "GET /datasets/ggerganov/whisper.cpp/resolve/main/ggml-tiny.bin HTTP/1.1" 302 1130
[2023-01-03 11:21:48,975] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): cdn-lfs.huggingface.co:443
[2023-01-03 11:21:49,226] connectionpool._make_request:456 DEBUG -> https://cdn-lfs.huggingface.co:443 "GET /repos/6c/1e/6c1eeb636552d4af5365746431688ab38d2591bc1d919dec249ec309d64812c8/be07e048e1e599ad46341c8d2a135645097a538221678b7acdd1b1919c6e1b21?response-content-disposition=attachment%3B%20filename%3D%22ggml-tiny.bin%22&Expires=1673022109&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzZjLzFlLzZjMWVlYjYzNjU1MmQ0YWY1MzY1NzQ2NDMxNjg4YWIzOGQyNTkxYmMxZDkxOWRlYzI0OWVjMzA5ZDY0ODEyYzgvYmUwN2UwNDhlMWU1OTlhZDQ2MzQxYzhkMmExMzU2NDUwOTdhNTM4MjIxNjc4YjdhY2RkMWIxOTE5YzZlMWIyMT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPWF0dGFjaG1lbnQlM0IlMjBmaWxlbmFtZSUzRCUyMmdnbWwtdGlueS5iaW4lMjIiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2NzMwMjIxMDl9fX1dfQ&Signature=TPI7DQDtOR42WQATjZqyqZrPK46CwvZzPBvegtxEGKuvnLLmQQzYGA1tw9gaDctYEv5ylTo63gCn2SdG0YeDXxtPmGnpwWOeFZqxJV9w4DBhfzw89eUVrvSV-qpq1AW2OTBzCq5AmALiimCSSiUdaovJuzyeKf20efiQ8Yn-9tNdIsUYBI-r31oRr4WRgewavbShBctCXAZizlZqRzil2Yaut1HxhU4i6F-WN9f3HjEeb45jVZCyhuU7dUfRedjY-pS1Id0gLiTgfCO88V2myAi-XSqB0oRQo6RLY6eWj1n-x9-jLOPRHxCESGuvMCSz2NdthhtlEUo7iKlw&Key-Pair-Id=KVTP0A1DKRTAX HTTP/1.1" 200 77691713
[2023-01-03 11:21:53,210] transcriber.run:247 DEBUG -> Starting whisper_cpp file transcription, file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/En Turquie, un avortement légal mais inaccessible dans les hôpitaux.mp3, language = fr, task = Task.TRANSCRIBE, model_path = /Users/steven/Library/Caches/Buzz/ggml-model-whisper-tiny.bin, word level timings = False
[2023-01-03 11:22:01,756] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 11:22:51,119] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-03 11:22:51,319] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /None/resolve/main/preprocessor_config.json HTTP/1.1" 401 0
[2023-01-03 11:23:10,320] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 11:23:33,236] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/8_milliards_d_habitants_sur_Terre___Le_taux_de_croissance_diminue_et_devrait_continuer___diminuer.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.TINY: 'tiny'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/8_milliards_d_habitants_sur_Terre___Le_taux_de_croissance_diminue_et_devrait_continuer___diminuer.mp3']), model_path='/Users/steven/.cache/whisper/tiny.pt', id=641464, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 11:23:35,113] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:01.876583
[2023-01-03 11:23:35,114] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 12:21:14,687] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 12:48:57,507] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 12:49:31,232] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 12:49:41,514] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 13:39:13,060] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Mexique__Portugal__Turquie__Inde___conomiser_l_eau_face___la_s_cheresse.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.TINY: 'tiny'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Mexique__Portugal__Turquie__Inde___conomiser_l_eau_face___la_s_cheresse.mp3']), model_path='/Users/steven/.cache/whisper/tiny.pt', id=974031, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 13:39:15,106] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:02.045973
[2023-01-03 13:39:15,107] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 13:40:05,836] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Mexique__Portugal__Turquie__Inde___conomiser_l_eau_face___la_s_cheresse.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Mexique__Portugal__Turquie__Inde___conomiser_l_eau_face___la_s_cheresse.mp3']), model_path='/Users/steven/.cache/whisper/medium.pt', id=300175, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 13:40:16,176] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:10.340156
[2023-01-03 13:40:16,177] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 13:42:28,742] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/2022.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/2022.mp3']), model_path='/Users/steven/.cache/whisper/medium.pt', id=433272, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 13:42:37,729] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:08.986909
[2023-01-03 13:42:37,730] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 13:51:09,759] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-03 13:51:09,951] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "GET /datasets/ggerganov/whisper.cpp/resolve/main/ggml-medium.bin HTTP/1.1" 302 1132
[2023-01-03 13:51:10,013] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): cdn-lfs.huggingface.co:443
[2023-01-03 13:51:10,513] connectionpool._make_request:456 DEBUG -> https://cdn-lfs.huggingface.co:443 "GET /repos/6c/1e/6c1eeb636552d4af5365746431688ab38d2591bc1d919dec249ec309d64812c8/6c14d5adee5f86394037b4e4e8b59f1673b6cee10e3cf0b11bbdbee79c156208?response-content-disposition=attachment%3B%20filename%3D%22ggml-medium.bin%22&Expires=1673024626&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzZjLzFlLzZjMWVlYjYzNjU1MmQ0YWY1MzY1NzQ2NDMxNjg4YWIzOGQyNTkxYmMxZDkxOWRlYzI0OWVjMzA5ZDY0ODEyYzgvNmMxNGQ1YWRlZTVmODYzOTQwMzdiNGU0ZThiNTlmMTY3M2I2Y2VlMTBlM2NmMGIxMWJiZGJlZTc5YzE1NjIwOD9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPWF0dGFjaG1lbnQlM0IlMjBmaWxlbmFtZSUzRCUyMmdnbWwtbWVkaXVtLmJpbiUyMiIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3MzAyNDYyNn19fV19&Signature=CNiO1m9yIixOHzN7DVtqaU88tsh4fwuan4-xNccBEuqBm0bnoKCRNMKolOG5dWdkQvjS5b4twMvY5PNLt-RZZKU83RZrIb0wZ95nI-MSfMHyKk0zWZC0YBIDCs8n7vOkKg3sg5nPjYryjBH2Ua-VGkRQPFD8hZA35XSjFac0Y4ck7AQz5I1n9OezG975CEs46eXkImqmtVqQooGjpq6-WF1QLIS2OzxMOcEza4ahWLbsT0Rt9gM0mNqJvOtYmU0i2StT2--LhDWoBEBhlaf-vfALZ3b6P-UA-wb2pD6w-UT0kwYGUeza4Kgio8nVAN8O7LYiHUpf9EYCLJ2E2w&Key-Pair-Id=KVTP0A1DKRTAX HTTP/1.1" 200 1533763059
[2023-01-03 13:52:04,712] transcriber.run:247 DEBUG -> Starting whisper_cpp file transcription, file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Diplômées, bio... trois agricultrices bien dans leur botte.mp3, language = None, task = Task.TRANSCRIBE, model_path = /Users/steven/Library/Caches/Buzz/ggml-model-whisper-medium.bin, word level timings = False
[2023-01-03 13:53:14,434] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 13:55:49,517] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): openaipublic.azureedge.net:443
[2023-01-03 13:55:49,657] connectionpool._make_request:456 DEBUG -> https://openaipublic.azureedge.net:443 "GET /main/whisper/models/ed3a0b6b1c0edf879ad9b11b1af5a0e6ab5db9205f891f668f8b0e6c6326e34e/base.pt HTTP/1.1" 200 145262807
[2023-01-03 13:55:58,421] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/En Turquie, un avortement légal mais inaccessible dans les hôpitaux (Transcribed on 03-Jan-2023 12-05-37).srt', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.BASE: 'base'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/En Turquie, un avortement légal mais inaccessible dans les hôpitaux (Transcribed on 03-Jan-2023 12-05-37).srt']), model_path='/Users/steven/.cache/whisper/base.pt', id=446974, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 13:56:00,726] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:02.305127
[2023-01-03 13:56:00,727] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 13:58:10,210] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-03 13:58:10,395] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /None/resolve/main/preprocessor_config.json HTTP/1.1" 401 0
[2023-01-03 13:59:47,278] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 14:00:38,302] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Mexique__Portugal__Turquie__Inde___conomiser_l_eau_face___la_s_cheresse.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.TINY: 'tiny'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Mexique__Portugal__Turquie__Inde___conomiser_l_eau_face___la_s_cheresse.mp3']), model_path='/Users/steven/.cache/whisper/tiny.pt', id=769434, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 14:00:40,144] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:01.841772
[2023-01-03 14:00:40,145] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 14:06:06,454] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Allemagne__l_heure_est_aux__conomies_d__nergie.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.TINY: 'tiny'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Allemagne__l_heure_est_aux__conomies_d__nergie.mp3']), model_path='/Users/steven/.cache/whisper/tiny.pt', id=963401, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 14:06:08,291] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:01.837264
[2023-01-03 14:06:08,292] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 14:17:48,733] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Mexique__Portugal__Turquie__Inde___conomiser_l_eau_face___la_s_cheresse.mp3', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.TINY: 'tiny'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Mexique__Portugal__Turquie__Inde___conomiser_l_eau_face___la_s_cheresse.mp3']), model_path='/Users/steven/.cache/whisper/tiny.pt', id=692055, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 14:17:50,577] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:01.844036
[2023-01-03 14:17:50,578] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 14:40:32,337] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Mexique__Portugal__Turquie__Inde___conomiser_l_eau_face___la_s_cheresse (Transcribed on 03-Jan-2023 14-40-15).srt, output format = OutputFormat.SRT, number of segments = 0
[2023-01-03 14:43:09,611] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 14:43:40,931] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path="/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Italie, l'énergie en partage.mp3", transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.TINY: 'tiny'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=["/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Italie, l'énergie en partage.mp3"]), model_path='/Users/steven/.cache/whisper/tiny.pt', id=996032, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 14:43:42,961] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:02.029829
[2023-01-03 14:43:42,962] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 15:18:57,581] transcriber.run:247 DEBUG -> Starting whisper_cpp file transcription, file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Propagation_des_incendies_en_France___Cet__t__est_certainement_un_aper_u_des_ann_es___venir.mp3, language = None, task = Task.TRANSCRIBE, model_path = /Users/steven/Library/Caches/Buzz/ggml-model-whisper-tiny.bin, word level timings = False
[2023-01-03 15:19:02,967] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 15:21:41,316] transcriber.run:247 DEBUG -> Starting whisper_cpp file transcription, file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Pierre le Grand et l’impôt sur les barbes en Russie | L’Histoire nous le dira # 222.mp3, language = None, task = Task.TRANSCRIBE, model_path = /Users/steven/Library/Caches/Buzz/ggml-model-whisper-medium.bin, word level timings = False
[2023-01-03 16:35:47,472] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 16:36:01,436] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Downloads/testdata_whisper-french.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.TINY: 'tiny'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Downloads/testdata_whisper-french.mp3']), model_path='/Users/steven/.cache/whisper/tiny.pt', id=693629, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 16:36:03,491] transcriber.run:370 DEBUG -> whisper process completed with code = 1, time taken = 0:00:02.055232
[2023-01-03 16:36:03,492] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 16:46:49,596] gui.__init__:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2023-01-03 16:46:49,599] gui.__init__:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2023-01-03 16:46:58,753] gui.__init__:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2023-01-03 16:46:58,754] gui.__init__:556 DEBUG -> Loaded settings from path = /Users/steven/Library/Preferences/com.buzz.Buzz.plist
[2023-01-03 16:47:14,061] transcriber.run:325 DEBUG -> Starting file transcription, file path = /Users/steven/Downloads/testdata_whisper-french.mp3, language = fr, task = Task.TRANSCRIBE, output file path = /Users/steven/Downloads/testdata_whisper-french (Transcribed on 03-Jan-2023 16-47-10).txt, output format = OutputFormat.TXT, model_path = /Users/steven/.cache/whisper/small.pt
[2023-01-03 16:47:30,621] transcriber.run:347 DEBUG -> whisper process completed with code = 0, time taken = 0:00:16.559311
[2023-01-03 21:01:09,575] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 21:01:33,832] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Downloads/testdata_whisper-french.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.TINY: 'tiny'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Downloads/testdata_whisper-french.mp3']), model_path='/Users/steven/.cache/whisper/tiny.pt', id=826352, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 21:01:35,502] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-03 21:01:42,490] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-03 21:01:42,490] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-03 21:01:42,732] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:00:08.900181, number of segments = 2
[2023-01-03 21:01:42,733] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-03 21:02:47,346] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/97. [Langue française] Faut-il simplifier la grammaire française? (argumentaire type DALF) | Should French grammar be simplified?.mp3', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.LARGE: 'large'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/97. [Langue française] Faut-il simplifier la grammaire française? (argumentaire type DALF) | Should French grammar be simplified?.mp3']), model_path='/Users/steven/.cache/whisper/large-v2.pt', id=224252, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-03 21:03:04,176] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-04 01:52:33,273] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-04 01:52:33,299] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-04 01:52:34,388] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 4:49:47.041383, number of segments = 413
[2023-01-04 01:52:34,393] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-04 01:52:34,415] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Propagation_des_incendies_en_France___Cet__t__est_certainement_un_aper_u_des_ann_es___venir.mp3', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.SMALL: 'small'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Propagation_des_incendies_en_France___Cet__t__est_certainement_un_aper_u_des_ann_es___venir.mp3']), model_path='/Users/steven/.cache/whisper/small.pt', id=679004, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-04 01:52:37,907] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-04 01:56:41,557] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-04 01:56:41,558] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-04 01:56:41,785] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:04:07.370395, number of segments = 73
[2023-01-04 01:56:41,788] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-04 07:19:28,449] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Propagation_des_incendies_en_France___Cet__t__est_certainement_un_aper_u_des_ann_es___venir (Transcribed on 04-Jan-2023 07-19-22).srt, output format = OutputFormat.SRT, number of segments = 73
[2023-01-04 07:20:02,229] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/97. [Langue française] Faut-il simplifier la grammaire française? (argumentaire type DALF) | Should French grammar be simplified? (Transcribed on 04-Jan-2023 07-19-57).srt, output format = OutputFormat.SRT, number of segments = 413
[2023-01-04 07:21:41,769] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-04 07:35:01,505] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-04 07:35:56,108] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-04 07:35:56,336] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "GET /datasets/ggerganov/whisper.cpp/resolve/main/ggml-base.bin HTTP/1.1" 302 1130
[2023-01-04 07:35:56,394] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): cdn-lfs.huggingface.co:443
[2023-01-04 07:35:56,738] connectionpool._make_request:456 DEBUG -> https://cdn-lfs.huggingface.co:443 "GET /repos/6c/1e/6c1eeb636552d4af5365746431688ab38d2591bc1d919dec249ec309d64812c8/60ed5bc3dd14eea856493d334349b405782ddcaf0028d4b5df4088345fba2efe?response-content-disposition=attachment%3B%20filename%3D%22ggml-base.bin%22&Expires=1673079473&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzZjLzFlLzZjMWVlYjYzNjU1MmQ0YWY1MzY1NzQ2NDMxNjg4YWIzOGQyNTkxYmMxZDkxOWRlYzI0OWVjMzA5ZDY0ODEyYzgvNjBlZDViYzNkZDE0ZWVhODU2NDkzZDMzNDM0OWI0MDU3ODJkZGNhZjAwMjhkNGI1ZGY0MDg4MzQ1ZmJhMmVmZT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPWF0dGFjaG1lbnQlM0IlMjBmaWxlbmFtZSUzRCUyMmdnbWwtYmFzZS5iaW4lMjIiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2NzMwNzk0NzN9fX1dfQ&Signature=ATmIzCnvkT0n7qOpGQCZKCC1qA8ZoV7XGwBOjbmhw7MeQt-LmjRE37t5SuHBsRpvD8Wk2JepSdr4heIGRe6SHnx9OnwqXIsr56Kte3xDYhFOLGVLojlTKx6LRzXb4ZIvl9hRS-KrF6nAJ1eOpOAV7XjAiN3M1Lxm8XQHAV-VTlty3mTO60ed3rk8JqgJn3AbzR-25BlGPciGHzZSYsXNDp0Prpsd01d1kj6MEDRP-4eJ9XTEyumzPmQQUxTpoIW-5HJoAhTvPUMBBqdJd90l3-QywDR77TGVT-OFIzhPWB0vOUoyg5cGmRSNdVYKGgnrfY3YpOQ8OGThSA8Q&Key-Pair-Id=KVTP0A1DKRTAX HTTP/1.1" 200 147951465
[2023-01-04 07:36:02,780] transcriber.run:247 DEBUG -> Starting whisper_cpp file transcription, file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Italie, l'énergie en partage.mp3, language = None, task = Task.TRANSCRIBE, model_path = /Users/steven/Library/Caches/Buzz/ggml-model-whisper-base.bin, word level timings = False
[2023-01-04 07:36:03,168] transcriber.run:268 DEBUG -> Running whisper_cpp process, args = "--language en --max-len 0 --model /Users/steven/Library/Caches/Buzz/ggml-model-whisper-base.bin /var/folders/bq/pgm63ssd2c38lc5f0jk2n50m0000gn/T/tmpp3wur8kb.wav"
[2023-01-04 07:36:03,181] transcriber.run:278 DEBUG -> whisper_cpp process completed with status = ExitStatus.NormalExit
[2023-01-04 07:36:03,182] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-04 07:36:30,132] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-04 07:36:30,339] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /None/resolve/main/preprocessor_config.json HTTP/1.1" 401 0
[2023-01-04 07:36:44,741] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-04 10:52:24,736] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-04 10:53:02,381] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenL_Video-20230103_192425-Meeting Recording/StephenL_Video-20230103_192425-Meeting Recording.mp4', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.SMALL: 'small'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenL_Video-20230103_192425-Meeting Recording/StephenL_Video-20230103_192425-Meeting Recording.mp4']), model_path='/Users/steven/.cache/whisper/small.pt', id=213239, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-04 10:53:06,005] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-04 10:53:36,311] transcriber.run:375 DEBUG -> whisper process completed with code = -15, time taken = 0:00:33.930311, number of segments = 0
[2023-01-04 10:53:36,315] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-04 21:35:13,687] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-04 21:36:07,051] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-04 21:37:49,037] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Gabbi/Video-20210920_130142-Meeting Recording.mp4', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.BASE: 'base'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Gabbi/Video-20210920_130142-Meeting Recording.mp4']), model_path='/Users/steven/.cache/whisper/base.pt', id=525459, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-04 21:37:51,118] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-04 21:58:26,577] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-04 21:58:26,579] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-04 21:58:27,015] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:20:37.978689, number of segments = 1512
[2023-01-04 21:58:27,020] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-04 22:06:02,579] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Gabbi/Video-20210920_130142-Meeting Recording (Transcribed on 04-Jan-2023 22-05-59).srt, output format = OutputFormat.SRT, number of segments = 1512
[2023-01-07 08:36:35,410] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-07 14:32:20,147] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-07 14:32:47,507] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Allemagne__l_heure_est_aux__conomies_d__nergie.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.SMALL: 'small'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Allemagne__l_heure_est_aux__conomies_d__nergie.mp3']), model_path='/Users/steven/.cache/whisper/small.pt', id=874857, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-07 14:32:54,855] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-07 14:34:48,754] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-07 14:34:48,755] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-07 14:34:49,164] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:02:01.657172, number of segments = 35
[2023-01-07 14:34:49,166] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-07 14:36:39,087] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Géopolitique, le débat - Climat et enjeux stratégiques/Allemagne__l_heure_est_aux__conomies_d__nergie (Transcribed on 07-Jan-2023 14-36-25).srt, output format = OutputFormat.SRT, number of segments = 35
[2023-01-07 14:38:34,518] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Allemagne__l_heure_est_aux__conomies_d__nergie/Allemagne__l_heure_est_aux__conomies_d__nergie (Transcribed on 07-Jan-2023 14-38-29).srt, output format = OutputFormat.SRT, number of segments = 35
[2023-01-07 16:15:30,256] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-08 10:21:29,656] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:30,008] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/preprocessor_config.json HTTP/1.1" 200 0
[2023-01-08 10:21:30,014] _api.acquire:172 DEBUG -> Attempting to acquire lock 6024243392 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/c2048dfa9fd94a052e62e908d2c4dfb18534b4d2.lock
[2023-01-08 10:21:30,014] _api.acquire:176 DEBUG -> Lock 6024243392 acquired on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/c2048dfa9fd94a052e62e908d2c4dfb18534b4d2.lock
[2023-01-08 10:21:30,019] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:30,280] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "GET /openai/whisper-medium/resolve/main/preprocessor_config.json HTTP/1.1" 200 184990
[2023-01-08 10:21:30,883] _api.release:209 DEBUG -> Attempting to release lock 6024243392 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/c2048dfa9fd94a052e62e908d2c4dfb18534b4d2.lock
[2023-01-08 10:21:30,883] _api.release:212 DEBUG -> Lock 6024243392 released on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/c2048dfa9fd94a052e62e908d2c4dfb18534b4d2.lock
[2023-01-08 10:21:30,918] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:31,246] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/tokenizer_config.json HTTP/1.1" 200 0
[2023-01-08 10:21:31,252] _api.acquire:172 DEBUG -> Attempting to acquire lock 6024636848 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/5e6c8377adf6019428b34a1ad906fb43de71d387.lock
[2023-01-08 10:21:31,253] _api.acquire:176 DEBUG -> Lock 6024636848 acquired on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/5e6c8377adf6019428b34a1ad906fb43de71d387.lock
[2023-01-08 10:21:31,257] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:31,634] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "GET /openai/whisper-medium/resolve/main/tokenizer_config.json HTTP/1.1" 200 830
[2023-01-08 10:21:31,639] _api.release:209 DEBUG -> Attempting to release lock 6024636848 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/5e6c8377adf6019428b34a1ad906fb43de71d387.lock
[2023-01-08 10:21:31,639] _api.release:212 DEBUG -> Lock 6024636848 released on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/5e6c8377adf6019428b34a1ad906fb43de71d387.lock
[2023-01-08 10:21:31,642] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:31,926] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/vocab.json HTTP/1.1" 200 0
[2023-01-08 10:21:31,931] _api.acquire:172 DEBUG -> Attempting to acquire lock 6024637184 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/0f3456460629e21d559c6daa23ab6ce3644e8271.lock
[2023-01-08 10:21:31,931] _api.acquire:176 DEBUG -> Lock 6024637184 acquired on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/0f3456460629e21d559c6daa23ab6ce3644e8271.lock
[2023-01-08 10:21:31,936] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:32,271] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "GET /openai/whisper-medium/resolve/main/vocab.json HTTP/1.1" 200 1036558
[2023-01-08 10:21:32,743] _api.release:209 DEBUG -> Attempting to release lock 6024637184 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/0f3456460629e21d559c6daa23ab6ce3644e8271.lock
[2023-01-08 10:21:32,743] _api.release:212 DEBUG -> Lock 6024637184 released on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/0f3456460629e21d559c6daa23ab6ce3644e8271.lock
[2023-01-08 10:21:32,746] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:33,385] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/tokenizer.json HTTP/1.1" 404 0
[2023-01-08 10:21:33,396] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:33,916] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/merges.txt HTTP/1.1" 200 0
[2023-01-08 10:21:33,923] _api.acquire:172 DEBUG -> Attempting to acquire lock 6024636752 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/3a00c89ee5e8ae0cb159a6ec838843fb2266fac6.lock
[2023-01-08 10:21:33,923] _api.acquire:176 DEBUG -> Lock 6024636752 acquired on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/3a00c89ee5e8ae0cb159a6ec838843fb2266fac6.lock
[2023-01-08 10:21:33,928] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:34,407] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "GET /openai/whisper-medium/resolve/main/merges.txt HTTP/1.1" 200 493864
[2023-01-08 10:21:34,754] _api.release:209 DEBUG -> Attempting to release lock 6024636752 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/3a00c89ee5e8ae0cb159a6ec838843fb2266fac6.lock
[2023-01-08 10:21:34,754] _api.release:212 DEBUG -> Lock 6024636752 released on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/3a00c89ee5e8ae0cb159a6ec838843fb2266fac6.lock
[2023-01-08 10:21:34,760] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:35,094] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/normalizer.json HTTP/1.1" 200 0
[2023-01-08 10:21:35,100] _api.acquire:172 DEBUG -> Attempting to acquire lock 6024636560 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/dd6ae819ad738ac1a546e9f9282ef325c33b9ea0.lock
[2023-01-08 10:21:35,100] _api.acquire:176 DEBUG -> Lock 6024636560 acquired on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/dd6ae819ad738ac1a546e9f9282ef325c33b9ea0.lock
[2023-01-08 10:21:35,103] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:35,421] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "GET /openai/whisper-medium/resolve/main/normalizer.json HTTP/1.1" 200 52666
[2023-01-08 10:21:35,506] _api.release:209 DEBUG -> Attempting to release lock 6024636560 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/dd6ae819ad738ac1a546e9f9282ef325c33b9ea0.lock
[2023-01-08 10:21:35,506] _api.release:212 DEBUG -> Lock 6024636560 released on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/dd6ae819ad738ac1a546e9f9282ef325c33b9ea0.lock
[2023-01-08 10:21:35,511] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:35,924] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/added_tokens.json HTTP/1.1" 200 0
[2023-01-08 10:21:35,928] _api.acquire:172 DEBUG -> Attempting to acquire lock 6024630848 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/47e9dd31523ecea227504afad3870da1cfe5ad81.lock
[2023-01-08 10:21:35,928] _api.acquire:176 DEBUG -> Lock 6024630848 acquired on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/47e9dd31523ecea227504afad3870da1cfe5ad81.lock
[2023-01-08 10:21:35,931] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:36,327] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "GET /openai/whisper-medium/resolve/main/added_tokens.json HTTP/1.1" 200 2108
[2023-01-08 10:21:36,336] _api.release:209 DEBUG -> Attempting to release lock 6024630848 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/47e9dd31523ecea227504afad3870da1cfe5ad81.lock
[2023-01-08 10:21:36,336] _api.release:212 DEBUG -> Lock 6024630848 released on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/47e9dd31523ecea227504afad3870da1cfe5ad81.lock
[2023-01-08 10:21:36,341] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:36,547] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/special_tokens_map.json HTTP/1.1" 200 0
[2023-01-08 10:21:36,552] _api.acquire:172 DEBUG -> Attempting to acquire lock 6024633968 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/9115b6806f75d5122486b0e1ae0279a0207199c2.lock
[2023-01-08 10:21:36,552] _api.acquire:176 DEBUG -> Lock 6024633968 acquired on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/9115b6806f75d5122486b0e1ae0279a0207199c2.lock
[2023-01-08 10:21:36,556] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:36,921] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "GET /openai/whisper-medium/resolve/main/special_tokens_map.json HTTP/1.1" 200 2064
[2023-01-08 10:21:36,928] _api.release:209 DEBUG -> Attempting to release lock 6024633968 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/9115b6806f75d5122486b0e1ae0279a0207199c2.lock
[2023-01-08 10:21:36,928] _api.release:212 DEBUG -> Lock 6024633968 released on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/9115b6806f75d5122486b0e1ae0279a0207199c2.lock
[2023-01-08 10:21:37,034] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:37,407] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/config.json HTTP/1.1" 200 0
[2023-01-08 10:21:37,415] _api.acquire:172 DEBUG -> Attempting to acquire lock 6024636752 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/f84be5dbc1bfd09035c3fd3e01b777bc47f14a66.lock
[2023-01-08 10:21:37,415] _api.acquire:176 DEBUG -> Lock 6024636752 acquired on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/f84be5dbc1bfd09035c3fd3e01b777bc47f14a66.lock
[2023-01-08 10:21:37,420] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:37,629] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "GET /openai/whisper-medium/resolve/main/config.json HTTP/1.1" 200 1969
[2023-01-08 10:21:37,637] _api.release:209 DEBUG -> Attempting to release lock 6024636752 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/f84be5dbc1bfd09035c3fd3e01b777bc47f14a66.lock
[2023-01-08 10:21:37,638] _api.release:212 DEBUG -> Lock 6024636752 released on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/f84be5dbc1bfd09035c3fd3e01b777bc47f14a66.lock
[2023-01-08 10:21:37,645] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-08 10:21:37,986] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/pytorch_model.bin HTTP/1.1" 302 0
[2023-01-08 10:21:38,061] _api.acquire:172 DEBUG -> Attempting to acquire lock 6132743232 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/96d734d68ad5d63c8f41d525f5769788432f6963f32dbe36feefaa33d736a962.lock
[2023-01-08 10:21:38,061] _api.acquire:176 DEBUG -> Lock 6132743232 acquired on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/96d734d68ad5d63c8f41d525f5769788432f6963f32dbe36feefaa33d736a962.lock
[2023-01-08 10:21:38,067] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): cdn-lfs.huggingface.co:443
[2023-01-08 10:21:38,707] connectionpool._make_request:456 DEBUG -> https://cdn-lfs.huggingface.co:443 "GET /repos/94/79/9479008c03391f4eb62fd6db91da1b92f3e793982cd98011d29a5a82231ca1e4/96d734d68ad5d63c8f41d525f5769788432f6963f32dbe36feefaa33d736a962?response-content-disposition=attachment%3B%20filename%3D%22pytorch_model.bin%22&Expires=1673450498&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzk0Lzc5Lzk0NzkwMDhjMDMzOTFmNGViNjJmZDZkYjkxZGExYjkyZjNlNzkzOTgyY2Q5ODAxMWQyOWE1YTgyMjMxY2ExZTQvOTZkNzM0ZDY4YWQ1ZDYzYzhmNDFkNTI1ZjU3Njk3ODg0MzJmNjk2M2YzMmRiZTM2ZmVlZmFhMzNkNzM2YTk2Mj9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPWF0dGFjaG1lbnQlM0IlMjBmaWxlbmFtZSUzRCUyMnB5dG9yY2hfbW9kZWwuYmluJTIyIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNjczNDUwNDk4fX19XX0&Signature=vsvMoFjaJt3FDQL2ZoCcm93lCRUaUits5GPf9ZdZa3Iz4IyI3CKrMeEHZ5bddC4jvX1SDiZCOmAwkw8FDDN6MHLmrM3-bCyi2SEeIpavDf00F2KyzhgQtKUneUllJd7wbp7kCJjOeJmcZJqXRljL8Xsb1xWs1FvQzwxXzMfhAYwbUs9Al2kWOTBLbNdZxg2Mo655SGXJi8YlThr0-TZvemHmQUkKCpEQ831gg-AhsZyK0ihTWgrBL1EHIJJF76uTU4N1DhI54cLA0KZztRm02jRRjLOzsjhCRgdCS1vJ5q8G3cjCfShdqtqC2JBckMzeD2b5P3jrKwmkPQsroA&Key-Pair-Id=KVTP0A1DKRTAX HTTP/1.1" 200 3055735323
[2023-01-08 10:25:00,023] _api.release:209 DEBUG -> Attempting to release lock 6132743232 on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/96d734d68ad5d63c8f41d525f5769788432f6963f32dbe36feefaa33d736a962.lock
[2023-01-08 10:25:00,025] _api.release:212 DEBUG -> Lock 6132743232 released on /Users/steven/.cache/huggingface/hub/models--openai--whisper-medium/blobs/96d734d68ad5d63c8f41d525f5769788432f6963f32dbe36feefaa33d736a962.lock
[2023-01-08 10:25:30,140] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/La Biélorussie ouvre sa frontière à ses voisins, les Lituaniens en profitent.mp3', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.HUGGING_FACE: 'Hugging Face'>, whisper_model_size=<WhisperModelSize.TINY: 'tiny'>, hugging_face_model_id='openai/whisper-medium'), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/La Biélorussie ouvre sa frontière à ses voisins, les Lituaniens en profitent.mp3']), model_path='openai/whisper-medium', id=550301, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-08 10:26:15,024] transcriber.read_line:413 DEBUG -> whisper (stderr): transformers/generation_utils.py:1359: UserWarning: Neither max_length nor max_new_tokens has been set, max_length will default to 448 (self.config.max_length). Controlling max_length via the config is deprecated and max_length will be removed from the config in v5 of Transformers -- we recommend using max_new_tokens to control the maximum length of the generation.
warnings.warn(
[2023-01-08 10:41:20,894] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-08 10:41:20,897] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-08 10:41:21,350] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:15:51.209456, number of segments = 5
[2023-01-08 10:41:21,352] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-10 20:32:08,006] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-10 20:36:50,263] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-11 08:57:56,182] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-11 09:16:24,286] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Isaac/Isaac_Video-20230109_150459-Meeting Recording.mp4', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Isaac/Isaac_Video-20230109_150459-Meeting Recording.mp4']), model_path='/Users/steven/.cache/whisper/medium.pt', id=433410, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-11 09:16:32,869] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-11 17:09:30,579] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-11 17:09:30,587] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-11 17:09:31,397] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 7:53:07.111266, number of segments = 924
[2023-01-11 17:09:31,401] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-11 17:12:54,258] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Isaac/Isaac_Video-20230109_150459-Meeting Recording (Transcribed on 11-Jan-2023 17-12-40).srt, output format = OutputFormat.SRT, number of segments = 924
[2023-01-11 22:31:43,390] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path="/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Real Life French/L'alcool.mp3", transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=["/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Real Life French/L'alcool.mp3"]), model_path='/Users/steven/.cache/whisper/medium.pt', id=927622, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-11 22:31:52,850] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-11 22:51:09,500] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-11 22:51:09,503] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-11 22:51:09,975] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:19:26.586055, number of segments = 133
[2023-01-11 22:51:09,976] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-11 22:52:00,874] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Real Life French/L'alcool (Transcribed on 11-Jan-2023 22-51-48).srt, output format = OutputFormat.SRT, number of segments = 133
[2023-01-12 19:20:44,525] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/Video-20230112_180335-Meeting Recording.mp4', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/Video-20230112_180335-Meeting Recording.mp4']), model_path='/Users/steven/.cache/whisper/medium.pt', id=200654, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-12 19:20:55,503] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-12 21:04:20,396] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-12 21:04:20,427] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-12 21:04:21,228] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 1:43:36.703523, number of segments = 461
[2023-01-12 21:04:21,235] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-12 21:24:16,099] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/Video-20230112_180335-Meeting Recording (Transcribed on 12-Jan-2023 21-24-08).srt, output format = OutputFormat.SRT, number of segments = 461
[2023-01-13 09:24:28,023] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-16 11:06:38,434] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-16 12:18:49,910] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Downloads/Call with physiotherapist assistant for confirmation.m4a', transcription_options=TranscriptionOptions(language='en', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.BASE: 'base'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Downloads/Call with physiotherapist assistant for confirmation.m4a']), model_path='/Users/steven/.cache/whisper/base.pt', id=969443, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-16 12:18:52,495] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-16 12:21:12,690] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-16 12:21:12,692] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-16 12:21:12,975] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:02:23.065768, number of segments = 122
[2023-01-16 12:21:12,976] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-16 12:23:53,540] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Downloads/Call with physiotherapist assistant for confirmation (Transcribed on 16-Jan-2023 12-23-49).srt, output format = OutputFormat.SRT, number of segments = 122
[2023-01-16 12:25:01,225] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Downloads/Call with physiotherapist assistant for confirmation 2.m4a', transcription_options=TranscriptionOptions(language='en', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Downloads/Call with physiotherapist assistant for confirmation 2.m4a']), model_path='/Users/steven/.cache/whisper/medium.pt', id=573836, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-16 12:25:09,347] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-16 12:50:06,659] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-16 12:50:06,662] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-16 12:50:06,952] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:25:05.726713, number of segments = 138
[2023-01-16 12:50:06,953] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-16 13:50:15,404] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Downloads/Call with physiotherapist assistant for confirmation 2 (Transcribed on 16-Jan-2023 13-50-13).srt, output format = OutputFormat.SRT, number of segments = 138
[2023-01-17 20:14:07,043] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenL_Video-20230117_213438-Meeting Recording/StephenL_Video-20230117_213438-Meeting Recording.mp4', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenL_Video-20230117_213438-Meeting Recording/StephenL_Video-20230117_213438-Meeting Recording.mp4']), model_path='/Users/steven/.cache/whisper/medium.pt', id=448486, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-17 20:14:17,242] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-17 22:36:22,421] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-17 22:36:22,430] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-17 22:36:23,271] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 2:22:16.229684, number of segments = 511
[2023-01-17 22:36:23,274] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-17 23:26:41,545] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/Stephen Largen/StephenL_Video-20230117_213438-Meeting Recording/StephenL_Video-20230117_213438-Meeting Recording (Transcribed on 17-Jan-2023 23-26-35).srt, output format = OutputFormat.SRT, number of segments = 511
[2023-01-18 10:39:59,909] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/JK Hartley/JK_Hartley_Video-20230118_135237-Meeting Recording.mp4', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/JK Hartley/JK_Hartley_Video-20230118_135237-Meeting Recording.mp4']), model_path='/Users/steven/.cache/whisper/medium.pt', id=126573, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-18 10:40:11,579] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-18 12:41:36,463] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-18 12:41:36,496] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-18 12:41:37,266] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 2:01:37.358474, number of segments = 624
[2023-01-18 12:41:37,272] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-18 13:25:07,927] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/JK Hartley/JK_Hartley_Video-20230118_135237-Meeting Recording (Transcribed on 18-Jan-2023 13-25-02).srt, output format = OutputFormat.SRT, number of segments = 624
[2023-01-19 10:14:14,353] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Real Life French/Le bilan.mp3', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Real Life French/Le bilan.mp3']), model_path='/Users/steven/.cache/whisper/medium.pt', id=219213, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-19 10:14:24,925] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-19 10:31:14,253] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-19 10:31:14,257] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-19 10:31:15,022] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:17:00.670357, number of segments = 70
[2023-01-19 10:31:15,025] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-19 13:14:29,629] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/Me_Video-20230119_180933-Meeting Recording.mp4', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/Me_Video-20230119_180933-Meeting Recording.mp4']), model_path='/Users/steven/.cache/whisper/medium.pt', id=693601, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-19 13:14:40,438] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-19 16:14:42,520] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-19 16:14:42,594] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-19 16:14:43,569] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 3:00:13.941543, number of segments = 1136
[2023-01-19 16:14:43,576] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-19 16:16:24,917] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/Me_Video-20230119_180933-Meeting Recording/Me_Video-20230119_180933-Meeting Recording (Transcribed on 19-Jan-2023 16-16-14).srt, output format = OutputFormat.SRT, number of segments = 1136
[2023-01-19 16:20:08,607] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/Me_Video-20230109_150459-Meeting Recording/Me_Video-20230109_150459-Meeting Recording.mp4', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/Me_Video-20230109_150459-Meeting Recording/Me_Video-20230109_150459-Meeting Recording.mp4']), model_path='/Users/steven/.cache/whisper/medium.pt', id=779721, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-19 16:20:18,652] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-20 00:49:21,973] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-20 00:49:22,025] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-20 00:49:22,904] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 8:29:14.297077, number of segments = 662
[2023-01-20 00:49:22,909] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-20 08:05:33,991] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/Me_Video-20230109_150459-Meeting Recording/Me_Video-20230109_150459-Meeting Recording (Transcribed on 20-Jan-2023 08-05-24).srt, output format = OutputFormat.SRT, number of segments = 662
[2023-01-20 15:18:56,976] transcriber.run:247 DEBUG -> Starting whisper_cpp file transcription, file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/French Expat/Nolwenn (Miami, FL) J'explique l'Amérique aux francophones sur Instagram .mp3, language = fr, task = Task.TRANSCRIBE, model_path = /Users/steven/Library/Caches/Buzz/ggml-model-whisper-medium.bin, word level timings = False
[2023-01-20 15:19:00,518] transcriber.run:268 DEBUG -> Running whisper_cpp process, args = "--language fr --max-len 0 --model /Users/steven/Library/Caches/Buzz/ggml-model-whisper-medium.bin /var/folders/bq/pgm63ssd2c38lc5f0jk2n50m0000gn/T/tmp_28mg61x.wav"
[2023-01-20 15:19:00,534] transcriber.run:278 DEBUG -> whisper_cpp process completed with status = ExitStatus.NormalExit
[2023-01-20 15:19:00,538] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-20 15:28:14,139] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-20 15:28:14,335] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/preprocessor_config.json HTTP/1.1" 200 0
[2023-01-20 15:28:14,384] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-20 15:28:14,557] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/tokenizer_config.json HTTP/1.1" 200 0
[2023-01-20 15:28:14,742] connectionpool._new_conn:1003 DEBUG -> Starting new HTTPS connection (1): huggingface.co:443
[2023-01-20 15:28:14,921] connectionpool._make_request:456 DEBUG -> https://huggingface.co:443 "HEAD /openai/whisper-medium/resolve/main/config.json HTTP/1.1" 200 0
[2023-01-20 15:28:23,752] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Nouvelles technologies/Un «New Deal» numérique pour l’Afrique.mp3', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.HUGGING_FACE: 'Hugging Face'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id='openai/whisper-medium'), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Nouvelles technologies/Un «New Deal» numérique pour l’Afrique.mp3']), model_path='openai/whisper-medium', id=4245, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-20 15:29:08,469] transcriber.read_line:413 DEBUG -> whisper (stderr): transformers/generation_utils.py:1359: UserWarning: Neither max_length nor max_new_tokens has been set, max_length will default to 448 (self.config.max_length). Controlling max_length via the config is deprecated and max_length will be removed from the config in v5 of Transformers -- we recommend using max_new_tokens to control the maximum length of the generation.
warnings.warn(
[2023-01-20 15:32:04,629] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-20 15:32:04,631] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-20 15:32:05,185] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:03:41.433238, number of segments = 5
[2023-01-20 15:32:05,186] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-20 15:34:54,037] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Nouvelles technologies/Un «New Deal» numérique pour l’Afrique (Transcribed on 20-Jan-2023 15-34-51).srt, output format = OutputFormat.SRT, number of segments = 5
[2023-01-23 16:56:02,311] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-23 17:53:28,654] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Downloads/Responsabiliser les auteurs de violences conjugales pour éviter la récidive.mp3', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Downloads/Responsabiliser les auteurs de violences conjugales pour éviter la récidive.mp3']), model_path='/Users/steven/.cache/whisper/medium.pt', id=765876, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-23 17:53:38,849] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-23 18:45:32,422] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-23 18:45:32,431] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-23 18:45:33,245] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:52:04.592323, number of segments = 454
[2023-01-23 18:45:33,251] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-23 23:40:03,921] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Downloads/Responsabiliser les auteurs de violences conjugales pour éviter la récidive (Transcribed on 23-Jan-2023 23-39-55).srt, output format = OutputFormat.SRT, number of segments = 454
[2023-01-26 11:13:28,077] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-26 11:14:41,764] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Choses à Savoir TECH VERTE/Nucléaire qu’est-ce que le projet Cigéo .mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.SMALL: 'small'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Choses à Savoir TECH VERTE/Nucléaire qu’est-ce que le projet Cigéo .mp3']), model_path='/Users/steven/.cache/whisper/small.pt', id=498313, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-26 11:14:45,816] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-26 11:16:29,208] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-26 11:16:29,209] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-26 11:16:29,591] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:01:47.827216, number of segments = 27
[2023-01-26 11:16:29,592] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-26 11:16:57,013] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Choses à Savoir TECH VERTE/Nucléaire qu’est-ce que le projet Cigéo (Transcribed on 26-Jan-2023 11-16-54).srt, output format = OutputFormat.SRT, number of segments = 27
[2023-01-26 11:17:05,044] transcriber.write_output:456 DEBUG -> Writing transcription output, path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Choses à Savoir TECH VERTE/Nucléaire qu’est-ce que le projet Cigéo (Transcribed on 26-Jan-2023 11-17-02).txt, output format = OutputFormat.TXT, number of segments = 27
[2023-01-26 17:11:21,329] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-26 21:35:02,127] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/2023/Me_Video-20230126_180031-Meeting Recording.mp4', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.SMALL: 'small'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/2023/Me_Video-20230126_180031-Meeting Recording.mp4']), model_path='/Users/steven/.cache/whisper/small.pt', id=233712, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-26 21:35:06,286] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-26 22:27:38,810] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-26 22:27:38,813] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-26 22:27:39,245] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:52:37.117801, number of segments = 1287
[2023-01-26 22:27:39,253] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-28 08:05:11,703] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-28 11:19:41,603] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-28 18:36:55,789] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-28 21:17:31,684] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-01-30 16:36:43,190] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Downloads/La promotion.mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Downloads/La promotion.mp3']), model_path='/Users/steven/.cache/whisper/medium.pt', id=319110, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-30 16:36:53,803] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-01-30 16:58:39,823] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-30 16:58:39,825] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-01-30 16:58:40,283] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:21:57.095662, number of segments = 66
[2023-01-30 16:58:40,284] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-02-02 06:17:21,774] transcriber.run:596 DEBUG -> Waiting for next transcription task
[2023-02-02 06:20:15,615] model_loader.download_model:102 DEBUG -> Downloading model from https://openaipublic.azureedge.net/main/whisper/models/345ae4da62f9b3d59415adc60127b97c714f32e89e936602e85993674d08dcb1/medium.pt to /Users/steven/.cache/whisper/medium.pt
[2023-02-02 06:20:20,307] transcriber.run:616 DEBUG -> Starting next transcription task
[2023-02-02 06:20:20,309] transcriber.run:358 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/2023/Me_Video-20230126_180031-Meeting Recording/Me_Video-20230126_180031-Meeting Recording.mp4', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/LUC UK TEACHER/SKYPE_LESSONS/ME/2023/Me_Video-20230126_180031-Meeting Recording/Me_Video-20230126_180031-Meeting Recording.mp4']), model_path='/Users/steven/.cache/whisper/medium.pt', id=746418, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-02-02 06:20:22,234] transcriber.run:380 DEBUG -> whisper process completed with code = -11, time taken = 0:00:01.925414, number of segments = 0
[2023-02-02 06:20:22,235] transcriber.run:596 DEBUG -> Waiting for next transcription task
[2023-02-02 06:21:17,348] model_loader.download_model:102 DEBUG -> Downloading model from https://openaipublic.azureedge.net/main/whisper/models/345ae4da62f9b3d59415adc60127b97c714f32e89e936602e85993674d08dcb1/medium.pt to /Users/steven/.cache/whisper/medium.pt
[2023-02-02 06:21:21,975] transcriber.run:616 DEBUG -> Starting next transcription task
[2023-02-02 06:21:21,979] transcriber.run:358 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Autour de la question/Comment habiter le monde autrement .mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Autour de la question/Comment habiter le monde autrement .mp3']), model_path='/Users/steven/.cache/whisper/medium.pt', id=666087, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-02-02 06:21:23,838] transcriber.run:380 DEBUG -> whisper process completed with code = -11, time taken = 0:00:01.859365, number of segments = 0
[2023-02-02 06:21:23,839] transcriber.run:596 DEBUG -> Waiting for next transcription task
[2023-02-02 06:27:02,749] model_loader.download_model:102 DEBUG -> Downloading model from https://huggingface.co/datasets/ggerganov/whisper.cpp/resolve/main/ggml-medium.bin to /Users/steven/Library/Caches/Buzz/ggml-model-whisper-medium.bin
[2023-02-02 06:27:07,798] transcriber.run:616 DEBUG -> Starting next transcription task
[2023-02-02 06:27:07,805] transcriber.run:250 DEBUG -> Starting whisper_cpp file transcription, file path = /Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Autour de la question/Comment revenir sur Terre .mp3, language = None, task = Task.TRANSCRIBE, model_path = /Users/steven/Library/Caches/Buzz/ggml-model-whisper-medium.bin, word level timings = False
[2023-02-02 06:27:14,662] transcriber.run:271 DEBUG -> Running whisper_cpp process, args = "--language en --max-len 0 --model /Users/steven/Library/Caches/Buzz/ggml-model-whisper-medium.bin /var/folders/bq/pgm63ssd2c38lc5f0jk2n50m0000gn/T/tmp68b2azco.wav"
[2023-02-02 06:27:14,675] transcriber.run:281 DEBUG -> whisper_cpp process completed with status = ExitStatus.NormalExit
[2023-02-02 06:27:14,676] transcriber.run:596 DEBUG -> Waiting for next transcription task
[2023-02-02 06:28:10,857] transcriber.run:586 DEBUG -> Waiting for next file transcription task
[2023-02-02 06:28:55,024] transcriber.run:355 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Choses à Savoir ÉCONOMIE/À quel âge les Français partent-ils réellement à la retraite .mp3', transcription_options=TranscriptionOptions(language='fr', task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.MEDIUM: 'medium'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/Users/steven/Desktop/FRENCH/PODCASTS_FRENCH/PODCAST_AUDIOS_ONLY/Choses à Savoir ÉCONOMIE/À quel âge les Français partent-ils réellement à la retraite .mp3']), model_path='/Users/steven/.cache/whisper/medium.pt', id=730446, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-02-02 06:29:03,834] transcriber.read_line:413 DEBUG -> whisper (stderr): whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
[2023-02-02 06:38:33,771] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-02-02 06:38:33,773] transcriber.read_line:413 DEBUG -> whisper (stderr):
[2023-02-02 06:38:34,218] transcriber.run:375 DEBUG -> whisper process completed with code = 0, time taken = 0:09:39.193901, number of segments = 33
[2023-02-02 06:38:34,219] transcriber.run:586 DEBUG -> Waiting for next file transcription task

@chidiwilliams
Owner Author

chidiwilliams commented Feb 2, 2023

@Sircam19, thanks for the logs. I also just caught that you're using a Mac. Whisper only supports CUDA-enabled GPUs at the moment. Support for Mac MPS GPUs has not been merged yet: openai/whisper#382.
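A minimal sketch of the device-selection logic this implies (a hypothetical helper, not Buzz's actual code), assuming availability flags like those exposed by `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Choose a torch device string for Whisper: prefer CUDA, else CPU.

    MPS is deliberately excluded even when available, because Whisper
    did not yet support Apple-silicon GPUs at the time of this PR
    (see openai/whisper#382).
    """
    if cuda_available:
        return "cuda"
    # Even on an MPS-capable Mac, Whisper falls back to the CPU here.
    return "cpu"
```

So on a Mac like the one in the logs above, transcription runs on the CPU regardless of the GPU, which is why the `FP16 is not supported on CPU; using FP32 instead` warning keeps appearing.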

@chidiwilliams
Owner Author

chidiwilliams commented Feb 2, 2023

Running with PyTorch version 1.12.1 (tested on Ubuntu 20.04) fails with:

NVIDIA A10 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA A10 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Traceback (most recent call last):
  File "main.py", line 50, in <module>
  File "whisper/__init__.py", line 120, in load_model
  File "torch/nn/modules/module.py", line 1604, in load_state_dict
RuntimeError: Error(s) in loading state_dict for Whisper:
        While copying the parameter named "encoder.blocks.0.attn.query.weight", whose dimensions in the model are torch.Size([384, 384]) and whose dimensions in the checkpoint are torch.Size([384, 384]), an exception occurred : ('CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors m

Meanwhile, upgrading PyTorch to version 1.13.1 seems to fail on all environments (with/without GPU) with a segfault:

crash-logs.txt

I've already spent a lot of time trying to make this work, and unfortunately, I think I'll have to stop here for now. I'll continue if I can find any information on bundling PyTorch 1.13.1 into a PyInstaller build.
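To illustrate the warning above: the stock PyTorch 1.12.1 wheels ship CUDA kernels only for the listed compute capabilities, and an A10 (Ampere, `sm_86`) isn't among them. A rough check of that rule can be sketched with hypothetical helpers (`parse_sm`, `build_supports` are illustrative names, and this ignores PTX forward compatibility for simplicity):

```python
def parse_sm(cap: str) -> int:
    # "sm_86" -> 86
    return int(cap.split("_")[1])

def build_supports(device_cap: str, arch_list: list[str]) -> bool:
    """Rough compatibility check mirroring PyTorch's warning: the build
    must include a compiled kernel for the device's exact compute
    capability."""
    return parse_sm(device_cap) in {parse_sm(a) for a in arch_list}

# The situation from the traceback above:
arch_list = ["sm_37", "sm_50", "sm_60", "sm_70"]  # arch list reported by the 1.12.1 build
build_supports("sm_86", arch_list)  # False: the A10 needs a newer PyTorch build
```

This is why loading the model then fails at runtime with `no kernel image is available for execution on the device`: there is simply no `sm_86` kernel in the binary.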

@Sircam19

Sircam19 commented Feb 2, 2023

Totally understood, and your effort is appreciated. Thanks for trying; your tool is still so useful and accurate for my purposes. THANK YOU.
