
I'm planning to write a Python script that monitors some external parameters and acts based on the retrieved data. The check is basically: make an HTTP call to a server on the network, and if the response is not as expected, make another HTTP call to another server on the network.

The script will check at a 60-second interval and will hopefully keep checking indefinitely.

Here is a trimmed-down version of what I currently have:

import time

if __name__ == '__main__':
    while True:
        runCheckFunction()
        time.sleep(60)
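
For reference, runCheckFunction() is roughly along these lines (simplified; the URLs and the expected "OK" body below are placeholders):

import urllib.request

PRIMARY_URL = "http://primary-server.local/status"     # placeholder
FALLBACK_URL = "http://fallback-server.local/trigger"  # placeholder

def runCheckFunction():
    # Query the first server; treat any network error as "not as expected".
    try:
        with urllib.request.urlopen(PRIMARY_URL, timeout=10) as resp:
            body = resp.read().decode()
    except OSError:
        body = None
    # If the response is not what we expect, poke the second server.
    if body != "OK":
        try:
            urllib.request.urlopen(FALLBACK_URL, timeout=10)
        except OSError:
            pass  # nothing else to do; the next check runs in 60 seconds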

It's very straightforward, which makes me doubt it: it can't be that simple, right?

Concerns:

  1. During the sleep I am worried that this script might use excessive CPU resources, which would be a concern since it will run on an embedded machine.

  2. Memory leaks. This is very unlikely, and if it does happen it won't be in the infinite-loop part but in runCheckFunction(), due to my own bad programming. After the function exits, is it safe to assume that it frees up all the memory it used? I don't have global variables, and everything is self-contained within that function.

  3. Are there better methods for this, like a standard module I do not know of that is made for exactly this purpose?

  • sleep uses virtually no resources. The operating system blocks the process until the timer expires.
    – Barmar
    Commented Jul 9 at 20:21
  • There's pretty much no alternative.
    – Barmar
    Commented Jul 9 at 20:22
  • Beyond what Barmar said -- use a process supervision tool to restart your service if it crashes (mostly this is systemd in modern distros, but some embedded systems use an alternative one). You can also configure it to run in a constrained namespace so it exits and gets restarted on hitting a high-water mark well below what would impact other services on the same device (see the sketch after these comments). Commented Jul 9 at 20:23
  • All local variables of the function go away when it returns. So unless it adds something to a global variable there's no memory leak.
    – Barmar
    Commented Jul 9 at 20:23
  • This is a perfectly fine implementation. The one drawback I can see is that 60 seconds is hard-coded into it. A cron job or external scheduler would be one way to extract that. If you're concerned about the job hanging or a memory leak, then an external scheduler would also help with that, but I would rather monitor the thing for a while and see if it has any issues worth spending time on.
    – kojiro
    Commented Jul 9 at 20:27
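
A hedged sketch of the resource-limit idea from the supervision comment above, under systemd (MemoryMax= is a real systemd directive; the 50M ceiling here is only an example value):

[Service]
ExecStart=/usr/bin/python3 /path/to/your/script/test.py
# Restart the service if it crashes or is killed for exceeding the memory ceiling.
Restart=on-failure
MemoryMax=50M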

1 Answer


It sounds like it's a stateless script that you want to run periodically.

Instead of relying on Python to keep it running periodically, why not use a cron job that invokes your script at your interval?

Advantages:

  1. Cron jobs are made exactly for this; they are consistent.
  2. It prevents drift: with time.sleep the next interval only starts after the check finishes, so each iteration slips a little later (see the drift-compensating sketch after this list if you do stay in-process).
  3. Most importantly, I think: if your PC/server restarts, you would have to start the script again manually, whereas a cron job will start running again automatically.
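
If you do stay with the in-process loop instead, here is a minimal drift-compensating sketch (assuming each check normally finishes well within the interval; runCheckFunction() stands for your existing check):

import time

INTERVAL = 60  # seconds between checks

def runCheckFunction():
    ...  # your actual HTTP checks go here

if __name__ == '__main__':
    next_run = time.monotonic()
    while True:
        runCheckFunction()
        next_run += INTERVAL
        delay = next_run - time.monotonic()
        if delay > 0:
            time.sleep(delay)            # wake up on the original 60-second grid
        else:
            next_run = time.monotonic()  # a slow check overran; resynchronise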

To set up the cron job (* * * * * runs the script every minute, i.e. every 60 seconds):

crontab -e
* * * * * /usr/bin/python3 /path/to/your/script/test.py

A similar option is a systemd service (note that for Restart=/RestartSec= to give a 60-second cadence, the script should do a single check and exit rather than loop). Create a service file /etc/systemd/system/test_script.service:

[Unit]
Description=Run Python script every 60 seconds

[Service]
ExecStart=/usr/bin/python3 /path/to/your/script/test.py
Restart=always
RestartSec=60

[Install]
WantedBy=multi-user.target

Alternative option: Python's signal module (e.g. signal.setitimer with SIGALRM on Unix); a sketch follows.
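
A minimal sketch of that signal-based approach (Unix-only; runCheckFunction() is a placeholder for the actual checks, and a slow check will delay the following tick):

import signal

def runCheckFunction():
    ...  # your actual HTTP checks go here

def handle_alarm(signum, frame):
    runCheckFunction()

if __name__ == '__main__':
    signal.signal(signal.SIGALRM, handle_alarm)
    # Fire SIGALRM 60 seconds from now and every 60 seconds thereafter.
    signal.setitimer(signal.ITIMER_REAL, 60, 60)
    while True:
        signal.pause()  # block until the next signal arrives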

  • If you want to use systemd with regular restarts, the right way to do that is with a .timer unit and Type=oneshot on the corresponding .service unit, not using Restart=always. Commented Jul 9 at 21:06
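
A hedged sketch of that timer-based setup (the file names are examples). Create /etc/systemd/system/test_script.timer:

[Unit]
Description=Run the check script every 60 seconds

[Timer]
OnBootSec=60
OnUnitActiveSec=60

[Install]
WantedBy=timers.target

and make the service unit a one-shot run, /etc/systemd/system/test_script.service:

[Unit]
Description=Check external parameters

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /path/to/your/script/test.py

then enable it with systemctl enable --now test_script.timer.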
