I am receiving CSVs from an Azure database, based on a URL from the client side. I then overwrite the ExampleData.csv file with the new CSV received from the database. The problem: when the CSV is read to load a visualization, the file may be large enough that the read starts before the write has finished, so the visualization ends up loading the previous (stale) file. Right now I work around this with time.sleep(5), but that is not a viable long-term solution: it slows down loading the visualizations, and five seconds would likely not be enough for larger files.
The CSV files range from about 80 MB up to 15-20 GB. What should I use to receive the CSVs faster?
I have thought about trying caching (in-process Python caching, Redis, or file-based caching), but I am unsure which is best for my use case.
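For context, one approach I am considering for the stale-read problem is writing to a temporary file and then atomically renaming it over ExampleData.csv, so a reader only ever sees either the complete old file or the complete new file. A minimal sketch (the function name and paths are my own, not from any library):

```python
import os
import tempfile

def write_csv_atomically(chunks, dest_path):
    """Stream byte chunks to a temp file in the destination's directory,
    then atomically replace dest_path so readers never see a partial file."""
    dest_dir = os.path.dirname(os.path.abspath(dest_path))
    # Temp file must live on the same filesystem for the rename to be atomic.
    fd, tmp_path = tempfile.mkstemp(dir=dest_dir, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as tmp:
            for chunk in chunks:  # stream, so a 20 GB file never sits in RAM
                tmp.write(chunk)
            tmp.flush()
            os.fsync(tmp.fileno())  # make sure bytes hit disk before the rename
        os.replace(tmp_path, dest_path)  # atomic on both POSIX and Windows
    except BaseException:
        os.remove(tmp_path)  # clean up the partial temp file on failure
        raise

# usage sketch: pass the download's chunk iterator instead of a list
write_csv_atomically([b"a,b\n", b"1,2\n"], "ExampleData.csv")
```

This would remove the need for time.sleep entirely, though it does not by itself make the transfer faster.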