Title Image for KQL Jobs in Sentinel Data Lake

Unlocking Scalable Security Analytics: How to Automate KQL Jobs in Sentinel Data Lake

Introduction

In Part 4 of this series, we focused on optimizing KQL queries in Microsoft Sentinel Data Lake. Optimized queries are powerful for investigations, but sometimes you need to automate and repeat them. That’s where KQL jobs come in.

A KQL job allows you to run scheduled queries across Sentinel Data Lake, store the results, and reuse them in dashboards, alerts, or reports. Consequently, KQL jobs make security analytics more scalable by reducing manual work and ensuring consistent insights.

📚 Reference: Create KQL jobs in Sentinel Data Lake (preview)


1. What Are KQL Jobs?

KQL jobs are scheduled or on-demand query executions in Sentinel Data Lake. They:

  • Run across large volumes of data, including archived logs.
  • Save query outputs into new tables for faster access.
  • Enable automation for recurring security tasks.

👉 Instead of rerunning the same query manually, you schedule it as a job that refreshes results automatically.
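As a sketch of the kind of query that benefits from being a job rather than a manual rerun, consider a long-lookback hunt over archived Windows security events. SecurityEvent and EventID 4625 (failed logon) are standard in Sentinel; the 90-day window is an illustrative assumption:

SecurityEvent
| where TimeGenerated > ago(90d)          // reaches back into archived data
| where EventID == 4625                   // Windows failed logon events
| summarize FailedLogons = count() by Account, bin(TimeGenerated, 1d)

Scheduled as a daily job, this scan runs once per day and its results stay ready for analysts, instead of each analyst paying the full 90-day scan cost interactively.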


2. Why Use KQL Jobs?

  • Performance: Pre-compute results to reduce latency.
  • Cost efficiency: Avoid repeatedly scanning massive datasets.
  • Consistency: Ensure teams use the same logic in every investigation.
  • Integration: Feed outputs into workbooks, alerts, or hunting queries.

As a result, your SOC spends less time rerunning queries and more time responding to threats.


3. How to Create a KQL Job

Step 1: Open the Defender Portal

Navigate to https://security.microsoft.com → Sentinel Data Lake → KQL jobs.

Defender Portal Page

Step 2: Define the Query

Write your KQL query directly in the editor. For example, to summarize failed sign-ins daily:

SigninLogs
| where ResultType != "0"                 // ResultType is a string; "0" means success
| summarize Failures = count() by bin(TimeGenerated, 1d), UserPrincipalName
KQL Query Job Page

Step 3: Configure Job Settings

  • Name the job clearly (e.g., “Daily Failed Sign-ins”).
  • Select job type: Scheduled (recurring) or On-demand.
  • Choose the output table where results will be stored.
  • Define frequency (hourly, daily, weekly).
Creating a New KQL Job

Step 4: Monitor Job Results

Afterwards, monitor job runs and verify that results populate in the target table. You can also visualize the outputs in workbooks or feed them into automated alerts.
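For example, once the job has populated its target table, downstream consumers can read the pre-computed results instead of rescanning raw logs. The table name below is hypothetical; substitute whatever output table you configured in Step 3:

DailyFailedSignins_CL
| where TimeGenerated > ago(7d)
| top 10 by Failures desc                 // noisiest accounts over the past week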

KQL Query Results

4. Best Practices for KQL Jobs

  • Start small: Test queries interactively before scheduling jobs.
  • Be selective: Schedule only queries that provide recurring value.
  • Optimize cost: Use summarization and filters to minimize scanned data.
  • Secure results: Apply RBAC to control access to job outputs.
  • Integrate: Feed job outputs into detections, dashboards, or workflows.
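To illustrate the integration point above, a detection could compare each user's latest failure count against a baseline computed from the same job output. The table name is hypothetical and the 3x threshold is illustrative, not a recommendation:

DailyFailedSignins_CL
| where TimeGenerated > ago(1d)
| join kind=inner (
    DailyFailedSignins_CL
    | where TimeGenerated between (ago(30d) .. ago(1d))
    | summarize Baseline = avg(Failures) by UserPrincipalName
    ) on UserPrincipalName
| where Failures > 3 * Baseline           // flag spikes well above the user's own baseline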

Conclusion

KQL jobs extend Sentinel Data Lake beyond one-time queries into repeatable, automated analytics. By creating jobs, your SOC gains consistency, cost efficiency, and the ability to scale detection and reporting across large datasets.

This completes the first five parts of the Unlocking Scalable Security Analytics series.

👉 Next week, we’ll explore how Jupyter Notebooks can bring even deeper insights into Sentinel Data Lake.