Silent log source detection

Hi Team,

I am looking to get an alert when logs stop arriving from a specific endpoint on a server. Since the ingestion API monitoring is not granular enough to surface this, I planned to create a dedicated YARA-L rule for that server. The rule I had planned looked something like this:

rule silentLogFromCriticalEndpoint {

  meta:
    author = "Srijan Kafle"
    description = "Detects missing logs from server"
    severity = "MEDIUM"
    created = "2024-06-03"

  events:
    $session.principal.namespace = "prodSydney" and $session.principal.hostname = "redacted"

    $source = $session.principal.hostname
    $tenant = $session.principal.namespace

  match:
    $source, $tenant over 12h

  outcome:
    $risk_score = 3

    // logic to check delay here which is not working
    $delay = timestamp.current_seconds() - $session.metadata.event_timestamp.seconds

  condition:
    $session and $delay >= xyz
}

The delay calculation in the outcome section and the $delay check in the condition aren't working currently, but I wanted to include them to share the logic. Is there any way I can calculate the delay? An alternative was to search over a shorter duration (e.g. 1 hour) and trigger an alert if there are no results, but since the query returns no fields when nothing matches, that doesn't trigger either.

Any alternative that detects missing logs at a similar granularity would help.


There isn't really a great way to solve this from the YARA-L side, but there is a blog post from Chris Martin that covers how to approach this problem from BigQuery: https://medium.com/@thatsiemguy/silent-asset-detection-47ad34fdab55. I recommend you give it a read.
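
In case it helps, here's a minimal sketch of that approach with the BigQuery Python client. The project, table, and column names (my-project.datalake.events, metadata.event_timestamp, principal.hostname) are assumptions based on the general shape of the Chronicle BigQuery export; check the blog post and your own export schema for the real names.

# Sketch: flag watched hosts whose last event is older than a threshold,
# via the Chronicle BigQuery export. Table/column names are assumptions.
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

THRESHOLD = timedelta(hours=12)
WATCHED_HOSTS = ["redacted"]  # the critical endpoints to watch

client = bigquery.Client()

# Last-seen timestamp per watched host, over a lookback window wide
# enough to catch long outages.
query = """
    SELECT principal.hostname AS hostname,
           MAX(metadata.event_timestamp) AS last_seen
    FROM `my-project.datalake.events`  -- assumed export table name
    WHERE principal.hostname IN UNNEST(@hosts)
      AND metadata.event_timestamp >
          TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY hostname
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ArrayQueryParameter("hosts", "STRING", WATCHED_HOSTS)
        ]
    ),
)

last_seen = {row.hostname: row.last_seen for row in job}
now = datetime.now(timezone.utc)
for host in WATCHED_HOSTS:
    seen = last_seen.get(host)
    # Missing from the results entirely, or stale beyond the threshold,
    # both count as silent.
    if seen is None or now - seen > THRESHOLD:
        print(f"Silent source: {host} (last seen: {seen})")

Doing the final diff in Python rather than purely in SQL matters here: a host with zero rows in the lookback window won't appear in the query results at all, and that absence is exactly the signal you want.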

Hi @Rene_Figueroa,

As stated in the post, for my use case the ingestion API is not granular enough, as it only goes as far as the source and not the fields within the pulled data. Let me know if I am missing something in the implementation.

Ahh, apologies, I missed the Ingestion API part. I think the blog that Jeremy suggested will be helpful here then.

Rules are meant to create detections on events that are present, not to track logs that are missing, so the engine doesn't have logic for this.

A combination of Cloud Logging, the BindPlane agent, Cloud Monitoring, and 1:1 ingestion labels might be a solution; a sketch of the monitoring half is below.
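
To illustrate the Cloud Monitoring half: assuming the forwarder's output is routed through Cloud Logging and you've built a log-based metric over it (chronicle_forwarded_entries and its source label are hypothetical names here), an absence condition can notify you when the metric stops reporting. A sketch with the Monitoring Python client:

# Sketch: alert when a (hypothetical) log-based metric covering the
# forwarded entries stops reporting for an hour.
from google.cloud import monitoring_v3

PROJECT = "projects/my-project"  # replace with your project

client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    display_name="Silent log source: redacted",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="No forwarded entries for 1h",
            # MetricAbsence fires when no points arrive for `duration`.
            condition_absent=monitoring_v3.AlertPolicy.Condition.MetricAbsence(
                filter=(
                    'metric.type="logging.googleapis.com/user/'
                    'chronicle_forwarded_entries" '
                    'AND metric.labels.source="redacted"'
                ),
                duration={"seconds": 3600},
            ),
        )
    ],
)
created = client.create_alert_policy(name=PROJECT, alert_policy=policy)
print(f"Created alert policy: {created.name}")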

You could create a rule that matches across the log source and identifies whether there's at least one event being generated, then drive it from outside the platform:

1. Use the runRetrohunt API endpoint (https://cloud.google.com/chronicle/docs/reference/detection-engine-api#runretrohunt) to run that rule over the window you care about.
2. Use the listRetrohunts endpoint (https://cloud.google.com/chronicle/docs/reference/detection-engine-api#listretrohunts) to identify when the retrohunt has finished running.
3. Use getRetrohunt (https://cloud.google.com/chronicle/docs/reference/detection-engine-api#getretrohunt) from a SOAR platform or a Python script to view the retrohunt's detections, and identify which log sources haven't generated a detection and therefore aren't logging.
4. Re-ingest that discrepancy into Google Chronicle using the Ingestion API (https://cloud.google.com/chronicle/docs/reference/ingestion-api) and create a rule that looks for that specific ingested log.

The above is just one way; it does, however, require a combination of in-platform and out-of-platform pieces to reach your solution. A rough Python sketch of the orchestration follows.
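
Here's a rough sketch of steps 1 to 4 against the legacy Detection Engine API linked above. The rule ID, credentials path, and expected-host set are placeholders, and the response field names (retrohuntId, state, collectionElements) follow the docs as I read them, so verify them against your tenant; I've also used the listDetections endpoint to pull the detections themselves, since getRetrohunt reports the run's status.

# Rough sketch of the retrohunt loop described above, against the legacy
# Detection Engine API. Rule ID, credentials path, and the expected-host
# set are placeholders.
import time

from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

BASE = "https://backstory.googleapis.com/v2"
SCOPES = ["https://www.googleapis.com/auth/chronicle-backstory"]
RULE_ID = "ru_<your-rule-id>"   # the rule matching across the log source
EXPECTED = {"redacted"}         # hosts that should have produced events

creds = service_account.Credentials.from_service_account_file(
    "chronicle-credentials.json", scopes=SCOPES
)
session = AuthorizedSession(creds)

# 1. Kick off a retrohunt over the window being checked.
run = session.post(
    f"{BASE}/detect/rules/{RULE_ID}:runRetrohunt",
    json={"startTime": "2024-06-03T00:00:00Z",
          "endTime": "2024-06-03T12:00:00Z"},
)
run.raise_for_status()
retrohunt_id = run.json()["retrohuntId"]

# 2. Poll until the retrohunt reports DONE.
while True:
    state = session.get(
        f"{BASE}/detect/rules/{RULE_ID}/retrohunts/{retrohunt_id}"
    ).json()
    if state.get("state") == "DONE":
        break
    time.sleep(30)

# 3. Collect the hostnames that did produce detections...
detections = session.get(
    f"{BASE}/detect/rules/{RULE_ID}/detections", params={"pageSize": 1000}
).json()
seen = set()
for det in detections.get("detections", []):
    for element in det.get("collectionElements", []):
        for ref in element.get("references", []):
            host = ref.get("event", {}).get("principal", {}).get("hostname")
            if host:
                seen.add(host)

# 4. ...and anything expected but unseen is a silent source.
for host in sorted(EXPECTED - seen):
    print(f"Silent source: {host}")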