Welcome to the final post in the Mastering SecOps series! We’ve explored smarter detections with MITRE ATT&CK and UEBA, automated response with playbooks, visualized SOC performance with workbooks, and mapped alerts to compliance frameworks. Now, let’s talk about how to extend Microsoft Sentinel’s capabilities even further—with custom integrations.
Microsoft Sentinel is powerful out of the box, but its true strength lies in its flexibility. Whether you need to ingest logs from a legacy system, run custom enrichment scripts, or build automation that crosses toolsets, Sentinel has options.
Why Extend Sentinel?

No two environments are the same. Your SOC might need to work with:
- A legacy application that doesn’t support native connectors
- A third-party threat feed you want to enrich alerts with
- Internal security tools that produce valuable logs
By extending Sentinel, you can:
- Eliminate visibility gaps
- Automate advanced logic
- Enrich alerts with proprietary intel
- Unify your security tools into one workflow
Option 1: Use the Logs Ingestion API (New Standard)
Microsoft now recommends the Logs Ingestion API for sending custom data to Sentinel via Azure Monitor. This modern approach provides better schema control, authentication, and performance. Note that the sample below uses the older HTTP Data Collector API (workspace ID plus shared key), which is simpler to demo; for new integrations, prefer the Logs Ingestion API with a Data Collection Rule.
Use cases:
- Sending logs from custom or legacy apps
- Integrating data from unsupported SaaS tools
import base64
import datetime
import hashlib
import hmac
import json

import requests

# Replace with your actual values
workspace_id = 'YOUR_WORKSPACE_ID'
shared_key = 'YOUR_PRIMARY_KEY'
log_type = 'WebAppUserActivity'

# Simulated user login event
user_event = {
    "TimeGenerated": datetime.datetime.utcnow().isoformat(),
    "Username": "jane.doe@example.com",
    "Action": "Login",
    "Status": "Success",
    "IPAddress": "192.168.1.10",
    "App": "CustomerPortal"
}

body = json.dumps([user_event])
method = 'POST'
content_type = 'application/json'
resource = '/api/logs'
rfc1123date = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
content_length = len(body)

# Create the SharedKey signature
string_to_hash = f"{method}\n{content_length}\n{content_type}\nx-ms-date:{rfc1123date}\n{resource}"
bytes_to_hash = bytes(string_to_hash, encoding='utf-8')
decoded_key = base64.b64decode(shared_key)
encoded_hash = base64.b64encode(
    hmac.new(decoded_key, bytes_to_hash, digestmod=hashlib.sha256).digest()
).decode()
signature = f"SharedKey {workspace_id}:{encoded_hash}"

# Build the request
uri = f"https://{workspace_id}.ods.opinsights.azure.com{resource}?api-version=2016-04-01"
headers = {
    'Content-Type': content_type,
    'Authorization': signature,
    'Log-Type': log_type,
    'x-ms-date': rfc1123date
}

# Send the data
response = requests.post(uri, data=body, headers=headers)
print(f"Response code: {response.status_code}")
if 200 <= response.status_code <= 299:
    print("✅ Log sent successfully to Sentinel.")
else:
    print(f"❌ Failed to send log: {response.text}")
📘 Azure Monitor Logs Ingestion API Overview
🔧 Tip: Use Data Collection Rules (DCRs) to define the schema and route your data into Log Analytics.
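For comparison, the modern path sends data through a Data Collection Endpoint using the azure-monitor-ingestion SDK and Entra ID authentication instead of a shared key. A minimal sketch, assuming a DCR with a stream named Custom-WebAppUserActivity already exists (the endpoint URL, DCR immutable ID, and stream name below are placeholders):

```python
import datetime

def build_events():
    # Sample event; the field names must match the DCR's stream declaration
    return [{
        "TimeGenerated": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "Username": "jane.doe@example.com",
        "Action": "Login",
        "Status": "Success",
        "IPAddress": "192.168.1.10",
        "App": "CustomerPortal",
    }]

def upload_to_sentinel(endpoint, dcr_immutable_id, stream_name):
    # Imports kept local so the sketch is readable without the SDK installed
    from azure.identity import DefaultAzureCredential
    from azure.monitor.ingestion import LogsIngestionClient

    client = LogsIngestionClient(endpoint=endpoint, credential=DefaultAzureCredential())
    client.upload(rule_id=dcr_immutable_id, stream_name=stream_name, logs=build_events())

# Example call (placeholders):
# upload_to_sentinel("https://my-dce.eastus-1.ingest.monitor.azure.com",
#                    "dcr-00000000000000000000000000000000",
#                    "Custom-WebAppUserActivity")
```

The upload authenticates with whatever identity DefaultAzureCredential resolves (managed identity, CLI login, etc.), so no workspace key ever leaves Key Vault or appears in code.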
Option 2: Azure Functions for Custom Logic
Use Azure Functions to write and run serverless code that reacts to Sentinel alerts or timers.
Use cases:
- Enriching alerts with external API data (e.g., VirusTotal, Shodan)
- Performing response actions in non-Microsoft tools
- Normalizing or transforming custom logs
import logging

import requests
import azure.functions as func
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

def main(req: func.HttpRequest) -> func.HttpResponse:
    indicator = req.params.get('indicator')
    if not indicator:
        return func.HttpResponse("Missing 'indicator' parameter.", status_code=400)

    try:
        # Get the VirusTotal API key from Key Vault
        credential = ManagedIdentityCredential()
        secret_client = SecretClient(vault_url="https://test-keys.vault.azure.net/", credential=credential)
        api_key = secret_client.get_secret("VirusTotalApiKey").value

        # Call the VirusTotal API
        headers = {"x-apikey": api_key}
        vt_url = f"https://www.virustotal.com/api/v3/ip_addresses/{indicator}"  # or /domains/, /files/, etc.
        response = requests.get(vt_url, headers=headers)
        return func.HttpResponse(response.text, status_code=response.status_code)
    except Exception as e:
        logging.error(f"Error: {e}")
        return func.HttpResponse("Internal server error", status_code=500)
📘 Trigger Sentinel playbooks with Azure Functions
🔐 Tip: Use managed identities for secure API calls.
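The "normalizing or transforming custom logs" use case from the list above can be sketched in pure Python before anything touches Sentinel. The raw field names here (usr, act, src, ts) are hypothetical; map them to whatever your source actually emits:

```python
from datetime import datetime, timezone

# Hypothetical raw event from a custom app
RAW_EVENT = {"usr": "jane.doe@example.com", "act": "LOGIN", "src": "192.168.1.10", "ts": 1719878400}

def normalize(raw: dict) -> dict:
    """Map a raw custom-app event onto a consistent schema before ingestion."""
    return {
        "TimeGenerated": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        "Username": raw.get("usr", "unknown"),
        "Action": raw.get("act", "").capitalize(),  # "LOGIN" -> "Login"
        "IPAddress": raw.get("src", ""),
    }

print(normalize(RAW_EVENT))
```

Normalizing up front means your analytics rules can key off one schema regardless of which app produced the event.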
Option 3: Logic Apps with External Integrations
Logic Apps are low-code workflows that can:
- Fetch data from APIs
- Connect to databases or ticketing systems
- Orchestrate multi-step responses
In a Logic App you would typically use the built-in Jira connector or an HTTP action; for illustration, here is the equivalent REST call as a Python script:

import json

import requests
from requests.auth import HTTPBasicAuth

# Jira credentials and endpoint
jira_url = "https://your-domain.atlassian.net"
issue_key = "PROJ-123"
api_token = "your_api_token"
user_email = "your_email@example.com"

# Fields to update. Note: Jira Cloud REST API v3 requires rich-text fields
# like "description" to be in Atlassian Document Format (ADF).
update_payload = {
    "fields": {
        "summary": "Updated summary from automation",
        "description": {
            "type": "doc",
            "version": 1,
            "content": [{
                "type": "paragraph",
                "content": [{"type": "text", "text": "This issue was updated via automated script."}]
            }]
        },
        "priority": {"name": "High"}
    }
}

# Make the request
response = requests.put(
    f"{jira_url}/rest/api/3/issue/{issue_key}",
    data=json.dumps(update_payload),
    headers={"Content-Type": "application/json"},
    auth=HTTPBasicAuth(user_email, api_token)
)

# Output the result
print(f"Status Code: {response.status_code}")
print(f"Response: {response.text}")
Use cases:
- Sending alerts to ServiceNow, Jira, or Slack
- Automating account disables across hybrid environments
- Posting to Teams channels with context
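For the Teams use case above, a Logic App normally uses the Teams connector; the raw equivalent is an HTTP POST to an incoming webhook. A sketch assuming a legacy MessageCard-style webhook (the URL and alert fields are placeholders; check current Microsoft guidance, as Teams webhooks are being migrated to Workflows):

```python
import json

import requests

def build_teams_card(alert_name: str, severity: str, entity: str) -> dict:
    # Legacy Office 365 connector "MessageCard" payload
    return {
        "@type": "MessageCard",
        "@context": "http://schema.org/extensions",
        "summary": f"Sentinel alert: {alert_name}",
        "themeColor": "FF0000" if severity == "High" else "FFA500",
        "title": f"[{severity}] {alert_name}",
        "text": f"Affected entity: {entity}",
    }

def post_to_teams(webhook_url: str, card: dict) -> int:
    resp = requests.post(webhook_url, data=json.dumps(card),
                         headers={"Content-Type": "application/json"})
    return resp.status_code

# Example call (placeholder URL):
# post_to_teams("https://example.webhook.office.com/webhookb2/...",
#               build_teams_card("Impossible travel", "High", "jane.doe@example.com"))
```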
Option 4: Logstash via the Microsoft Sentinel Output Plugin
Microsoft now provides an officially supported Logstash output plugin that integrates with the Logs Ingestion API using Data Collection Rules (DCRs). This is the preferred approach for scalable, structured log ingestion.
Why use it:
- Supports Logstash 7.0–8.15
- Enables high-volume, schema-controlled ingestion
- Integrates with modern DCR architecture
input {
  file {
    path => "/var/log/custom_logs/sample.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  # Example: Parse a simple log line
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:log_message}" }
  }
  date {
    match => ["timestamp", "ISO8601"]
  }
  mutate {
    remove_field => ["@version", "host", "path"]
  }
}

output {
  microsoft-sentinel-log-analytics-logstash-output-plugin {
    client_app_Id => "<app-id>"
    client_app_secret => "<app-secret>"
    tenant_id => "<tenant-id>"
    data_collection_endpoint => "<DCE ingestion URI>"
    dcr_immutable_id => "<DCR immutable ID>"
    dcr_stream_name => "Custom-MyTableRawData"
  }
}
📘 Configure the Microsoft Sentinel Logstash Output Plugin
Example Setup:
- Install the plugin: bin/logstash-plugin install microsoft-sentinel-log-analytics-logstash-output-plugin
- Define a DCR and Log Analytics table
- Configure the output section in Logstash to point to the DCR stream
- Start streaming logs directly into Sentinel
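Once logs are flowing, a quick KQL check in Sentinel confirms ingestion (the table name depends on what you defined in the DCR; CustomLogType_CL here is a placeholder):

```
CustomLogType_CL
| where TimeGenerated > ago(15m)
| take 10
```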
Final Thoughts
Microsoft Sentinel doesn’t have to work in isolation. With the right custom integrations, it becomes the central nervous system of your security operations – connected, responsive, and deeply contextual.
As you continue your SecOps journey, remember: smart security isn’t just about detection. It’s about integration, automation, and visibility across the stack.
Thank you for joining the Mastering SecOps series!
If you missed the 4th post in this series, please take a look here or check out other blog posts.
As we close out FY25 at Microsoft, I will be taking a few weeks' break to spend time with family and recharge. Be ready for more exciting content in July 2025 (FY26)!
