MD-13 · Monitoring & Detection · 6% of OML score

Are monitoring outputs used to improve security controls over time?

Are you actually using the security alerts and reports you collect to make your systems safer? This question asks whether you review what your monitoring systems find and then fix the problems they identify, rather than just collecting data and ignoring it.

⚡
Why This Matters to Your Business

If you collect security alerts but never act on them, you are spending money on monitoring without getting protection, and attackers will exploit the same holes repeatedly. A Delhi logistics company was hit by ransomware twice in one year: their firewall logs showed the attack pattern the first time, but no one reviewed them, so the attacker was never blocked. Without using monitoring data to improve, you also fail compliance audits (regulators and customers will ask what you did with the findings) and lose customer trust when breaches happen that your own logs could have prevented. You may also face penalties under the DPDP Act 2023 for failing to maintain reasonable security safeguards or to notify breaches that your own monitoring had already detected.

📊
What Each Maturity Level Looks Like

Find where your organisation is today. Be honest — the self-assessment is only useful if it reflects reality.

Level 0
Absent

You have no monitoring system in place, or alerts are generated but never looked at. No one is assigned to read logs or security reports, and no action is ever taken based on what they show.

Level 1
Initial

You have monitoring tools running, and someone occasionally glances at alerts, but there is no documented process to decide what to do about them. Findings are not tracked, and it is unclear if any of them have been fixed.

Level 2
Developing

You have a basic process: alerts are logged in a spreadsheet, and the IT person reviews them monthly. Some obvious issues (like repeated failed login attempts) trigger a manual action, but there is no formal follow-up system or timeline.

Level 3
Defined

You have a documented incident response process that requires the IT team to review alerts weekly, categorize them, and create a ticket for each finding that needs fixing. A manager reviews the ticket list monthly to ensure issues are being closed, and there is a record of what was found and what was done.

Level 4
Managed

Your monitoring system automatically categorizes and prioritizes alerts; high-risk findings trigger immediate action with a deadline, and all resolutions are tracked in a formal system. You conduct quarterly reviews of trends (e.g., 'we had 50 brute-force attempts last quarter; we implemented IP blocking') and update security controls based on patterns.

Level 5
Optimised

Monitoring data feeds directly into a continuous improvement cycle: alerts are analyzed for root causes, findings automatically trigger policy updates or control enhancements, and you measure the effectiveness of each fix. You conduct annual security reviews with external input and adjust your entire security strategy based on what monitoring has revealed over time.

🚀
How to Move Up — Practical Steps
Step 0 → 1: Enable basic logging on your critical systems (firewall, server, email) and assign one person (IT staff or an external consultant) to review logs manually at least once per week and document what they see in a simple shared folder or email. Who: IT person or IT Manager. Effort: 3 days.

Step 1 → 2: Create a simple spreadsheet with columns: Date Found, Issue Description, Severity (High/Medium/Low), Action Taken, Date Closed. Set a recurring weekly task to fill it in and show it to the business owner or operations manager. Who: IT person. Effort: 1 week.

Step 2 → 3: Document a formal process: define who reviews alerts, when (e.g., every Monday morning), what action categories exist (e.g., 'block IP', 'reset password', 'patch system'), and require sign-off by a manager. Move tracking to a simple ticketing system (even a free Trello board) so nothing is forgotten. Who: IT Manager + Business Owner. Effort: 2–3 weeks.

Step 3 → 4: Set up automated alert categorization (using built-in rules in your monitoring tool or a simple script) and define SLAs (e.g., 'critical alerts must be acted on within 24 hours'). Conduct a monthly trend review meeting and document what changes you made based on the data (e.g., 'We saw 30 failed logins from country X; we geo-blocked it'). Who: IT Manager + Security Lead. Effort: 4–6 weeks.

Step 4 → 5: Integrate monitoring insights into your annual security strategy review. Measure the impact of each control change (e.g., 'After we blocked high-risk countries, unauthorized login attempts dropped 60%'). Share findings with your leadership and external auditors as evidence of continuous improvement. Who: IT Manager + Business Owner + External Auditor (annual). Effort: ongoing quarterly reviews plus an annual strategy update.
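The 'simple script' mentioned in step 3 → 4 can be as small as a keyword-rule triager. The sketch below is illustrative only: the rules, severities, SLA hours, and alert text are assumptions to adapt to whatever your monitoring tool actually emits; the CSV columns match the tracking spreadsheet suggested in step 1 → 2, with one extra column for the SLA deadline.

```python
# Hypothetical keyword-rule alert triage for step 3 -> 4.
# Rules, severities, and SLA hours are illustrative assumptions.
import csv
from datetime import datetime, timedelta

# First matching keyword wins; each rule is (keyword, severity, SLA hours).
RULES = [
    ("brute-force", "Critical", 24),
    ("malware", "Critical", 24),
    ("failed login", "High", 72),
    ("port scan", "Medium", 168),
]
DEFAULT = ("uncategorized", "Low", 720)  # anything unmatched: 30 days

def triage(message, found):
    """Return (category, severity, SLA deadline) for one alert."""
    text = message.lower()
    for keyword, severity, hours in RULES:
        if keyword in text:
            return keyword, severity, found + timedelta(hours=hours)
    category, severity, hours = DEFAULT
    return category, severity, found + timedelta(hours=hours)

def log_alert(sheet_path, message, found):
    """Append a triaged alert to the step 1 -> 2 tracking sheet."""
    _category, severity, deadline = triage(message, found)
    with open(sheet_path, "a", newline="") as f:
        csv.writer(f).writerow([
            found.date().isoformat(),                # Date Found
            message,                                 # Issue Description
            severity,                                # Severity
            "", "",                                  # Action Taken, Date Closed
            deadline.isoformat(timespec="minutes"),  # SLA deadline
        ])

print(triage("Repeated failed login from 203.0.113.7",
             datetime(2024, 6, 3, 9, 0)))
```

A manager-facing view is then just the sheet sorted by the SLA-deadline column: anything past its deadline and still open gets escalated.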
📁
Evidence You Should Have

Documents and records that prove your maturity level.

  • A monitoring system dashboard or log file showing active alerts and their timestamps (firewall logs, antivirus alerts, server event logs, or a SIEM tool output)
  • A tracking record (spreadsheet, ticketing system, or document) listing each finding, its severity, the action taken, who took it, and the date it was closed
  • A documented process or policy describing who reviews alerts, how often, and what triggers each type of action
  • Meeting minutes or emails showing that a manager or senior staff member reviews the alert log and approves actions taken (at least monthly)
  • Evidence of at least 3 control improvements made in the past 12 months based on monitoring findings (e.g., firewall rule added, patch deployed, access revoked, training conducted)
🔍
What an Auditor Will Ask

Prepare for these questions from customers or third-party reviewers.

  • "Show me your monitoring logs and alert records for the past 3 months. What high-risk alerts did you find, and what did you do about each one?"
  • "Who is responsible for reviewing your security alerts, and how often do they do it? Can you show me a record of their reviews?"
  • "Can you give me an example of a security finding you detected through monitoring and then fixed? What was it, and how did you verify the fix worked?"
  • "How do you know if your security controls are actually getting better? What metrics or trends do you track from your monitoring data?"
🛠
Tools That Work in India
  • Centralized log collection and alert management (collects logs from all systems in one place). Free: ELK Stack (Elasticsearch, Logstash, Kibana), open-source, requires technical setup; Graylog, free version available. Paid: Splunk (₹8–15 lakhs/year for small businesses); Microsoft Azure Monitor (₹10,000–50,000/month depending on data volume).
  • Ticket and issue tracking system (to record findings and track their resolution). Free: Trello (free tier); Jira (free tier for small teams); OpenProject, open-source alternative. Paid: Jira (₹70,000–2,00,000/year for a small team); ServiceNow (₹3–5 lakhs/year).
  • Real-time alert and notification system (sends alerts to staff when critical events occur). Free: Grafana, open-source, needs setup; Zabbix, open-source monitoring platform. Paid: Datadog (₹3–8 lakhs/year); New Relic (₹2–10 lakhs/year).
🛡
How This Makes You More Resilient
When you act on monitoring findings, you stop attacks and problems before they become breaches or outages. Instead of discovering a data theft months later when a customer complains, you detect suspicious activity in real time and block it the same day. Your business avoids costly recovery efforts, regulatory fines, and the loss of customer trust that comes from publicizing a breach.
⚠️
Common Pitfalls in India
  • Collecting logs but never reading them: Many Indian SMEs buy monitoring tools to tick a compliance box, but no one is assigned to actually review the alerts regularly, so breaches go undetected for weeks or months.
  • No documented process for action: When an alert is found, the IT person fixes it ad hoc with no record, so there is no proof of what was done, the same issue recurs, and a manager cannot verify that actions were taken.
  • Treating all alerts as equally urgent: Without prioritization, staff are overwhelmed by low-severity warnings and miss the critical ones; conversely, some teams only act on critical alerts and leave medium-risk issues unaddressed.
  • No follow-up or closure verification: Tickets are opened but never formally closed; no one confirms that the fix actually worked (e.g., a firewall rule was added but not tested), so the same problem recurs.
  • Ignoring trends and repeating the same mistakes: Logs show the same vulnerability being exploited every month, but because findings are not reviewed collectively, the underlying cause is never addressed and the business keeps spending time on reactive fixes instead of preventing the issue once and for all.
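That last pitfall, findings that recur because no one reviews them collectively, is cheap to detect once findings are tracked. A sketch with made-up issue descriptions that flags any issue seen in two or more distinct months:

```python
# Flag recurring findings in the tracking sheet.
# Row layout and sample issues are illustrative assumptions.
from collections import defaultdict

def recurring_issues(rows, min_months=2):
    """Return issues that appear in at least `min_months` distinct months."""
    months = defaultdict(set)
    for date_found, issue in rows:
        months[issue].add(date_found[:7])
    return [issue for issue, m in months.items() if len(m) >= min_months]

rows = [
    ("2024-03-10", "SQL injection attempt on /login"),
    ("2024-04-08", "SQL injection attempt on /login"),
    ("2024-04-21", "Expired TLS certificate warning"),
]
print(recurring_issues(rows))  # ['SQL injection attempt on /login']
```

Anything this flags is a candidate for a permanent control change rather than another one-off fix.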
⚖️
Compliance References
  • DPDP Act 2023: Section 8(5) (Data Fiduciary's obligation to implement reasonable security safeguards), Section 8(6) (obligation to notify the Data Protection Board and affected Data Principals of a personal data breach)
  • CERT-In Directions (28 April 2022): mandatory retention of ICT system logs for 180 days and reporting of specified cyber incidents to CERT-In within 6 hours of noticing them
  • ISO 27001:2022: A.8.15 (Logging), A.8.16 (Monitoring activities), A.5.27 (Learning from information security incidents)
  • NIST CSF 2.0: Detect function (DE), timely discovery and analysis of potentially adverse events; Respond function (RS), actions to contain the impact of detected incidents; Improvement category (ID.IM), using lessons learned to improve processes

Ready to assess your organisation?

Answer all 191 questions and get your NIRMATA maturity score across all 12 pillars.

Start Free Self-Assessment →

TRUST-IN Bharat · NIRMATA Framework · Licensed CC BY-SA 4.0 · Custodian: Elytra Security
