If your security tools scream non-stop about things that aren't actually problems, your IT person (or team) will start ignoring alerts out of sheer exhaustion. A textile export company in Tiruppur was breached because their firewall threw 500 alerts per day about routine network noise; the one real intrusion attempt got lost in the pile, and by the time anyone noticed, customer payment data had already been copied. High false-alarm rates also waste hours each month investigating phantom issues instead of doing real work, and in an audit or compliance check, examiners will see that you are not monitoring effectively. Regulators and customers (especially international ones) expect you to show that you are watching for real threats, not just collecting data.
Find where your organisation is today. Be honest — the self-assessment is only useful if it reflects reality.
Absent
You either don't have any security monitoring tools running, or you have them running but nobody ever looks at the alerts they produce. When alerts pile up in an inbox or log file, nobody reviews them or adjusts anything.
Initial
You have tools generating alerts, and someone checks them occasionally (maybe once a week), but there's no formal process for deciding which alerts matter and which ones are noise. You're not tuning or adjusting the alert rules based on what you see.
Developing
You have a basic list of alert types you care about, and once a month someone spends a few hours manually reviewing alerts and updating a couple of rules to reduce obvious noise. Most alerts are still either ignored or acted on without much thought.
Defined
You maintain a documented process for reviewing alerts (at least weekly), you track which alerts are false versus real, and you have a simple spreadsheet or log showing rule changes you've made to reduce noise. Your IT person knows which alert types to trust and which ones to investigate.
Managed
You have a formal alert tuning process with monthly reviews, documented alert baselines for your environment, and you track metrics like false-positive rate and alert response times. Rules are adjusted systematically, and there's evidence of collaboration between security and operations teams to refine thresholds.
Optimised
Your alert system is continuously tuned through both manual review and automated analysis of alert patterns; you maintain a detailed baseline of normal behaviour, and your false-positive rate is under 10%. Alert rules are version-controlled, changes are tested before rollout, and you have clear escalation paths based on alert severity and confidence.
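The "escalation paths based on alert severity and confidence" at the Optimised level can be as simple as a lookup function. The sketch below is illustrative only: the severity labels, confidence thresholds, and path names are assumptions, not part of any standard or specific tool.

```python
# Illustrative triage sketch: map an alert's severity and confidence to an
# escalation path. Labels and thresholds are assumed for illustration.

def escalate(severity: str, confidence: float) -> str:
    """Return an escalation path for an alert.

    severity: 'low' | 'medium' | 'high' (hypothetical labels)
    confidence: 0.0-1.0, how likely the alert is a true positive
    """
    if severity == "high" and confidence >= 0.7:
        return "page-on-call"          # immediate human response
    if severity == "high" or confidence >= 0.9:
        return "ticket-same-day"       # investigate within business hours
    if confidence < 0.2:
        return "log-only"              # candidate for rule tuning
    return "weekly-review-queue"       # batch into the weekly review

# Example: a high-severity alert the rules are fairly sure about
print(escalate("high", 0.8))
```

The point of writing the policy down as code (or even as a table in a document) is that it forces explicit answers to "who gets woken up, for what" instead of leaving triage to whoever happens to see the alert.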
| Step | What to Do | Who | Effort |
|---|---|---|---|
| 0 → 1 | Enable and configure at least one security monitoring tool (such as Windows Event Log review on servers, or firewall log review), and assign one person to spend 30 minutes each week looking at what it reports | IT Manager or Owner | 1 day |
| 1 → 2 | Create a simple one-page document listing the top 5 types of alerts your tools generate; mark each as 'always investigate', 'sometimes investigate', or 'ignore'; review this list monthly with your IT person and update based on what you actually saw | IT Manager | 1 week |
| 2 → 3 | Set up a weekly alert review meeting (30 minutes); maintain a simple log showing date, alert type, whether it was a real issue or false alarm, and any rule adjustments made; document the thresholds you're using for key alerts | IT Manager with Owner or Finance Head | 2-4 weeks |
| 3 → 4 | Establish a formal alert management procedure document that includes alert triage levels, response times for each level, a monthly metrics report (false-positive rate, alert volume trends), and a change log for all alert rule modifications | IT Manager with CISO or Compliance Officer if available | 1-2 months |
| 4 → 5 | Implement automated analysis of alert patterns (using built-in SIEM/SOC tools or basic scripts), establish machine-learning-based baselining of normal traffic, test all alert rule changes in a sandbox before production deployment, and publish monthly alert effectiveness reports to leadership | IT Manager or Security Analyst with vendor support | Ongoing |
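The "basic scripts" mentioned in step 4 → 5 need not be elaborate. The sketch below assumes a review log kept as a CSV with columns `date`, `rule`, and `outcome` (where outcome is `false_positive` or `real_incident`); that format is an assumption for illustration, matching the kind of log described in step 2 → 3.

```python
# Sketch of a basic alert-pattern analysis script (step 4 -> 5).
# Assumes a review-log CSV with columns: date, rule, outcome,
# where outcome is 'false_positive' or 'real_incident' (assumed format).
import csv
from collections import Counter

def noisiest_rules(path: str, min_alerts: int = 20):
    """Rank alert rules by false-positive rate; the top entries are
    the best candidates for threshold tuning or disabling."""
    total, fps = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total[row["rule"]] += 1
            if row["outcome"] == "false_positive":
                fps[row["rule"]] += 1
    ranked = [
        (rule, fps[rule] / n, n)
        for rule, n in total.items()
        if n >= min_alerts          # skip rules with too little data to judge
    ]
    return sorted(ranked, key=lambda r: r[1], reverse=True)

# Usage: print the top tuning candidates
# for rule, fp_rate, n in noisiest_rules("alert_log.csv")[:5]:
#     print(f"{rule}: {fp_rate:.0%} false positives over {n} alerts")
```

Running something like this monthly and attaching the output to the metrics report gives exactly the kind of evidence an auditor asks for.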
Documents and records that prove your maturity level.
- A documented list or policy of alert types you monitor, with documented thresholds (e.g., 'failed login attempts > 5 per hour = alert')
- Weekly or monthly alert review logs showing the date, number of alerts reviewed, count of false alarms identified, and count of real incidents found
- A change log documenting at least 3-5 instances where you modified alert rules or thresholds to reduce noise, with dates and the reason for each change
- A simple spreadsheet or dashboard showing alert volume over time, false-positive rate trends, and mean time to respond to alerts
- Email or meeting notes from at least one alert tuning discussion where a team member suggested a rule change to reduce false alarms
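The documented threshold in the first evidence item ('failed login attempts > 5 per hour = alert') can be implemented in a few lines. The sketch below assumes failed logins arrive as `(timestamp, username)` pairs; in practice they would be parsed from Windows Event Log or syslog, and the per-user hourly bucketing shown here is one reasonable choice, not the only one.

```python
# Minimal sketch of the 'failed login attempts > 5 per hour = alert'
# threshold from the evidence list. The input format (a list of
# (timestamp, username) tuples for failed logins) is an assumption;
# real data would come from Windows Event Log or syslog parsing.
from collections import defaultdict
from datetime import datetime

THRESHOLD = 5  # failed attempts per user per hour, as in the example

def hourly_alerts(failed_logins):
    """Return (hour, user, count) for every user exceeding the threshold."""
    buckets = defaultdict(int)
    for ts, user in failed_logins:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets[(hour, user)] += 1
    return [(h, u, n) for (h, u), n in buckets.items() if n > THRESHOLD]

# Usage: six failed logins for one user within one hour trips the alert
events = [(datetime(2024, 5, 1, 9, i), "ops-admin") for i in range(6)]
print(hourly_alerts(events))
```

Whatever tool enforces the threshold, the written 'e.g.' rule plus this kind of concrete logic is what turns a vague policy into something reviewable and tunable.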
Prepare for these questions from customers or third-party reviewers.
- "How many security alerts does your monitoring system generate per day on average, and what percentage of them are typically false positives? Can you show me your tracking of this?"
- "Walk me through a recent example where you identified that an alert was causing too much noise and decided to adjust or disable it. What was the process and who approved it?"
- "How often do you review your alert rules, and who is responsible for tuning them? Do you have documented procedures for this?"
- "Can you show me a case where a real security incident was caught by your alerts in the past 6 months, and one where an alert rule change actually improved your monitoring?"
| Purpose | Free Option | Paid Option |
|---|---|---|
| Collect and review security logs from servers and firewalls to identify patterns and tune alert rules | Windows Event Viewer (built-in), Syslog, open-source ELK Stack (Elasticsearch, Logstash, Kibana) on your own server | Splunk (₹4-8 lakhs/year for small deployment), Microsoft Sentinel (₹50,000-2 lakhs/year depending on data volume), Datadog (₹3-6 lakhs/year) |
| Aggregate alerts from multiple tools and reduce noise through correlation rules and deduplication | Graylog (self-hosted), open-source OSSIM, Wazuh (open-source with managed cloud option) | IBM QRadar (enterprise pricing, ₹10+ lakhs/year), Rapid7 InsightIDR (₹5-10 lakhs/year), CrowdStrike Falcon (₹8-15 lakhs/year) |
| Track and document alert review decisions, rule changes, and false-positive metrics over time | Google Sheets or Excel with shared access, Jira with a custom 'Alert Tuning' project | ServiceNow ITSM (₹3-8 lakhs/year depending on seats), Atlassian Jira with custom workflows (₹50,000-2 lakhs/year) |
- Setting alert thresholds too low in the hope of 'catching everything': this creates thousands of false alarms per day and guarantees nothing gets investigated. A manufacturing company in Maharashtra set failed-login alerts to trigger on every failed attempt, then ignored the resulting 800-per-day deluge until a real breach happened.
- Treating all alerts as equally urgent instead of prioritising by severity and confidence: when everything is high-priority, nothing is. Many Indian small businesses don't distinguish between 'suspicious but probably benign' and 'needs immediate action', wasting hours on low-risk events.
- Never actually reviewing the alerts you generate, so you have no idea whether your monitoring is working or just producing noise; this is especially common when a tool is installed and then forgotten. Audit findings often reveal alerts that have run for months or years with zero review, making them worthless for compliance.
| Standard | Relevant Section |
|---|---|
| DPDP Act 2023 | Section 8(5) requires a Data Fiduciary to implement reasonable security safeguards to prevent personal data breach; effective, regularly reviewed security monitoring is part of demonstrating such safeguards |
| CERT-In Directions 2022 | Clause (iv) requires logs of all ICT systems to be enabled and retained securely for a rolling period of 180 days; organisations must also report specified incidents, which presumes those logs are actually monitored |
| ISO 27001:2022 | Annex A controls A.8.15 (Logging) and A.8.16 (Monitoring activities); requires logs to be produced, protected, and analysed, and systems monitored for anomalous behaviour |
| NIST CSF 2.0 | Detect (DE) function, Adverse Event Analysis category; DE.AE-02 (potentially adverse events are analysed) and DE.AE-03 (information is correlated from multiple sources) |
Ready to assess your organisation?
Answer all 191 questions and get your NIRMATA maturity score across all 12 pillars.
Start Free Self-Assessment →