Manchester University Tiny Network Element Monitor(MUTiny NEM)
A Network/Systems Management Tool
Dave McClenaghan, Manchester Computing George Neisser, Manchester Computing
1. Introduction.

MUTiny Overview:
- Runs on commodity hardware.
- Coded in Perl, Perl/Tk.
- Easy to install, use and maintain.
- Free (unsupported) to academia.
- Subject to ongoing development.
Network Management Overview.

NMS Components:
- A Manager, running NM applications.
- A set of managed nodes (the managed domain).
- Defined management information (MIBs).
- A Network Management protocol (SNMP).
2. The MUTiny NM Model.
Platform: PC (Running Linux).
Description: A Network/Systems Management tool. Coded entirely in Perl and Perl/Tk.
MUTiny Applications.
- A Network Element Monitor/Manager.
- A Unix Systems Monitor/Manager.
- A MIB data collection and reporting tool.
MUTiny NM Applications.
- Graphically display the domain topology.
- Monitor and report node status changes.
MUTiny NM Applications (continued).
- Display monitored node information.
- Collect and report network statistics.
MUTiny Network Management.
Key areas:
- Domain Management.
- Event Management.
- Network Statistics.
The MUTiny NEM front end (the GUI): menu bar, host attributes area, domain status area, topology display, session bar.
2.1 Domain Management.
The Managed Domain. The set of all monitored nodes.
Monitored Node. Any network device (router, switch, etc.) that is regularly polled for management information.
Domain Topology Display. ICMP Status (Background):
- Green: OK.
- Red: No Echo.
- Amber: Problem.
- Clear: Unknown.
Domain Topology Display. SNMP Status (Foreground):
- Black: OK.
- Blue: No SNMP.
- Grey: Unknown.
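The two status dimensions above can be sketched as a simple lookup. This is an illustrative Python sketch, not MUTiny's actual (Perl) code; the status keywords are invented for the example.

```python
# Map poll results to the topology-display colours: ICMP status sets
# the symbol background, SNMP status sets the foreground.
# (Status keywords here are invented for illustration.)

ICMP_COLOURS = {
    "ok": "green",        # echo reply received
    "no_echo": "red",     # no echo reply
    "problem": "amber",   # e.g. intermittent replies
    "unknown": "clear",   # not yet polled
}

SNMP_COLOURS = {
    "ok": "black",        # agent responded to sysUpTime query
    "no_snmp": "blue",    # no response from the agent
    "unknown": "grey",    # not yet polled
}

def node_colours(icmp_status, snmp_status):
    """Return (background, foreground) for a node symbol."""
    return (ICMP_COLOURS.get(icmp_status, "clear"),
            SNMP_COLOURS.get(snmp_status, "grey"))
```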
Topology Management.
The initial domain topology display
Topology Management.
Add Node Window - no auto-discovery; nodes are added by choice.
Topology Management.
- Node and ‘path’ nodes are added to the display.
Topology Management.
- Path is determined by traceroute.
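Determining the path from traceroute output amounts to extracting one hop address per line. A minimal Python sketch follows (MUTiny itself is Perl); the regex assumes typical Linux traceroute output with IPv4 addresses, and the sample output is invented.

```python
import re

# Match lines like " 3  gw-liv (192.0.2.30)  4.021 ms ..." and capture
# the hop's IPv4 address; '* * *' timeout lines are skipped.
HOP_RE = re.compile(r"^\s*\d+\s+\S+\s+\((\d+\.\d+\.\d+\.\d+)\)")

def path_nodes(traceroute_output):
    """Return the hop IP addresses, in order, that become 'path' nodes."""
    hops = []
    for line in traceroute_output.splitlines():
        m = HOP_RE.match(line)
        if m:
            hops.append(m.group(1))
    return hops

sample = """traceroute to gw-liv (192.0.2.30), 30 hops max
 1  gw-local (192.0.2.1)  0.412 ms  0.380 ms  0.371 ms
 2  * * *
 3  gw-liv (192.0.2.30)  4.021 ms  3.980 ms  3.975 ms"""
```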
Topology Management.
Change Symbol/Label Window
Connectivity Status Polling.
ICMP connectivity is determined by ping; SNMP connectivity by polling sysUpTime.
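The SNMP side of the status poll can be sketched as follows: a node is SNMP-reachable if its agent answers the sysUpTime query. As an added illustration (not stated in the slides), a sysUpTime value lower than the previous reading would suggest the agent or node has restarted, since sysUpTime counts TimeTicks since the agent came up.

```python
# Hypothetical classification of one sysUpTime poll against the
# previous reading; event names beyond SNMP:No_Response and SNMP:OK
# are invented for illustration.

def snmp_poll_result(prev_uptime, new_uptime):
    """Classify an SNMP status poll. Uptimes are TimeTicks or None."""
    if new_uptime is None:            # no response from the agent
        return "SNMP:No_Response"
    if prev_uptime is not None and new_uptime < prev_uptime:
        return "SNMP:Restarted"       # counter went backwards: reboot?
    return "SNMP:OK"
```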
[Diagram: the nemPoll process on the management station, configured by nemPoll.conf, polls the monitored node's interfaces via ICMP and queries sysUpTime from its SNMP agent, recording the results in nemStatusTable.]
Configuring Status Poll Parameters.
Figure 2.2a Interface Polling Parameters
Configuring Status Poll Parameters.
Figure 2.2b General Polling Parameters
- Fully configurable polling.
- Poll 'Back Off' options.
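A poll 'back off' policy might look like the following sketch: when a node keeps failing its status poll, the interval between polls is lengthened so a dead node is not polled as aggressively. The parameters (doubling factor, one-hour ceiling) are invented for illustration, not MUTiny's actual defaults.

```python
# Hypothetical exponential back-off for status polling.

def next_poll_interval(base, failures, factor=2, ceiling=3600):
    """Seconds until the next poll after `failures` consecutive failures."""
    return min(base * factor ** failures, ceiling)
```

With a 60-second base interval, three consecutive failures would stretch the interval to 480 seconds, and it tops out at the one-hour ceiling.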
2.2 Event Management.

Event Definition: Connectivity Events.
- ICMP:Node_No_Echo
- ICMP:Node_OK
- ICMP:Node_Problem
- ICMP:Status_Unknown
- SNMP:No_Response
- SNMP:OK
- SNMP:Status_Unknown
Network Event Logging.
Figure 2.3a Network Event Log Window
Network Event Alarms.
Figure 2.3b A Pop-Up Alarm
Pop-ups may be accompanied by an optional Audible_Alarm (bell).
Event/Action Management.*
Event Configuration Window
Event/Action Management.*
Event/Source Configuration Window
Event Time Frames.
Contact/Frame Configuration Window
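An event time frame can be sketched as a check of whether an event falls inside a contact's configured hours, so alarms only reach the right person at the right time. The frame format below (start hour, end hour, set of weekdays with Monday = 0) is invented for illustration; MUTiny's actual configuration may differ.

```python
# Hypothetical time-frame check for routing event alarms to contacts.

def in_time_frame(hour, weekday, frame):
    """True if an event at (hour, weekday) is inside the frame."""
    start, end, days = frame
    return weekday in days and start <= hour < end

# Example frame: Monday-Friday, 09:00-17:00.
office_hours = (9, 17, {0, 1, 2, 3, 4})
```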
2.3 Network Statistics.

MIB Data Collection.
Figure 2.4 MIB Data Collection Mechanism Data stored in: */nemdata/mibdata/<yearmon>/<Datafile>
[Diagram: the nemDataPoll process on the management station, configured by nemDataPoll.conf, collects MIB data from the monitored node's SNMP agent and stores it in per-host datafiles.]
Data Storage.

Time       Object         Value  PI
953036400  ifOutOctets.6  84118  60
953036400  ifOutOctets.7  13275  60
953037000  ifInOctets.8    7219  60
953037000  ifInOctets.5   14303  60
953037000  ifInOctets.6   18287  60

Sample data stored in: */nemdata/mibdata/200002/gw-site
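Reading such a datafile is a matter of splitting whitespace-separated rows. The sketch below assumes the Value column is the octet count accumulated over the PI-second polling interval, so the average rate for a row is Value / PI; this interpretation, and the Python itself, are illustrative only.

```python
# Hypothetical reader for the datafile layout shown above
# (whitespace-separated columns: Time, Object, Value, PI).

def read_samples(text):
    """Yield (time, object, bytes_per_sec) for each data row."""
    for line in text.splitlines():
        fields = line.split()
        if len(fields) != 4 or not fields[0].isdigit():
            continue                      # skip header or malformed rows
        t, obj, value, pi = fields
        yield int(t), obj, int(value) / int(pi)

sample = """Time Object Value PI
953036400 ifOutOctets.6 84118 60
953037000 ifInOctets.8 7219 60"""
```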
Configuring MIB Data Collection.
Figure 2.5a Collection Configuration Window
Node Data Collection.
Figure 2.5b Node Collection Window
Node Data Collection. The Storage Interval. A multiple of the sampling interval, e.g.

samp-int = 60 seconds, store-int = 15 min

- This greatly reduces the amount of disk space required to store the data (by a factor of 15 in this case).
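The reduction works because each storage interval condenses many raw samples into one stored value. A minimal sketch, assuming simple averaging (the slides do not state the aggregation method):

```python
# Hypothetical storage-interval aggregation: with 60-second samples and
# a 15-minute storage interval, 15 raw samples collapse into one stored
# value, cutting disk usage by a factor of 15.

def store(samples, samp_int=60, store_int=15 * 60):
    """Average consecutive samples, one value per storage interval."""
    per_store = store_int // samp_int          # 15 samples here
    return [sum(chunk) / len(chunk)
            for chunk in (samples[i:i + per_store]
                          for i in range(0, len(samples), per_store))]
```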
Node Data Collection. The Store-Identifier.

The store-id, if set, stores the object-id as:

<mib-obj>.<store-id>

e.g. ifInOctets.Liv3

- Useful if the instance-id is prone to change.
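The substitution above can be sketched in a line: when a store-id is set, data is keyed by <mib-obj>.<store-id> rather than by the SNMP instance index, so the stored name survives interface renumbering. A hypothetical helper:

```python
# Hypothetical store-identifier substitution (illustrative only).

def storage_key(mib_obj, instance_id, store_id=None):
    """Key under which a sampled object is stored."""
    return f"{mib_obj}.{store_id if store_id is not None else instance_id}"
```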
Node Data Collection.
Test Collection Window
‘Change Control’.* ‘Anchors’ a collection to an IP_addr or Phys_addr.
Reporting Network Statistics.
Figure 4.11 The Reporting Mechanism.
The reports are generated from user-defined Report Parameter Files (RPFs).
# nemReport -r my.rpf
[Diagram: nemReport reads Report Parameter Files and nemDatafiles to produce reports (*.rpt).]
Reporting Network Statistics.
The Report Template.
Reporting Network Statistics.
Figure 4.12b Sample ‘Fixed Column’ Report
- Variable and graphical* formats supported.
Report Name:  liv_traffic_251099.rpt
Report Date:  Wed Nov 10 13:00:55 GMT 1999
Host Name:    gw-liv.netnw.net.uk
MIB Objects:  ifInOctets ifOutOctets

                      -------InBytes.138--------  -------OutBytes.138-------
                      -------(Bytes/sec)--------  -------(Bytes/sec)--------
Date     From  To     Average    Max        MaxT   Average   Max       MaxT
10/25/99 08:00 09:00   37575.64  358257.00  08:15   7254.06  31816.40  08:30
10/25/99 09:00 10:00   48182.86  102154.00  09:02  11811.73  59364.60  09:39
10/25/99 10:00 11:00   63443.69  123937.00  10:22  16965.79  41818.30  10:48
10/25/99 11:00 12:00  109494.27  204122.00  11:38  22146.68  95139.40  11:21
10/25/99 12:00 13:00  120689.33  219084.00  12:59  36755.46  77071.20  12:27
10/25/99 13:00 14:00  104552.70  262792.00  13:00  39029.91  96999.50  13:21
10/25/99 14:00 15:00  102345.99  187674.00  14:47  20576.39  56095.10  14:21
10/25/99 15:00 16:00   93852.23  278003.00  15:19  22362.19  69550.20  15:54
10/25/99 16:00 17:00  115890.41  485605.00  16:36  23883.81  57638.10  16:50
10/25/99 17:00 18:00   61375.79  143225.00  17:39  16664.71  41997.50  17:31

Absolute Total: 2943.66 MBytes (In), 746.56 MBytes (Out)
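The fixed-column aggregation above groups per-sample rates into hourly bins and reports, per bin, the average rate, the maximum rate, and the time (MaxT) at which the maximum occurred. A hedged Python sketch, with an invented input format of (minutes-past-midnight, rate) pairs:

```python
# Hypothetical hourly aggregation behind a 'fixed column' report.

def hourly_report(samples):
    """Return {hour: (average, max, max_time)} from (minute, rate) pairs."""
    bins = {}
    for minute, rate in samples:
        bins.setdefault(minute // 60, []).append((minute, rate))
    out = {}
    for hour, rows in bins.items():
        rates = [r for _, r in rows]
        max_minute, max_rate = max(rows, key=lambda mr: mr[1])
        out[hour] = (sum(rates) / len(rates), max_rate,
                     f"{max_minute // 60:02d}:{max_minute % 60:02d}")
    return out
```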
2.4 Monitoring MUTiny.
Figure 4.14 The Host System Attributes Area
nemNEMPoll self-checks:
- NEM processes.
- Host system metrics.
MUTiny Self Monitoring.
NEM Self Monitor Configuration Window
3. WWW Cache Status Monitoring.

3.1 Caching Service Configuration.
Figure 3.1 The Operation of the UK National JANET Caching Service. [Diagram: requests from 150+ sites are handled by cache clusters at Manchester (30 machines), Loughborough (3 machines) and London (6 machines); responses to cache misses come from the global Internet.]
3.2 Caching Service Operation. Need to know, for each node:
- Network accessibility.
- CPU loading.
- Memory utilisation.
- Disk utilisation.
- Squid application status.
Network Accessibility.
Figure 3.2 Manchester Main Window
System Metric Monitoring.
Figure 3.3 Manchester Cache Systems Window
System Metric Monitoring.
Display indicates:
- Whether the machine is pingable.
- Whether SNMP is operational.
- The CPU loading.
- Memory utilisation.
- Disk utilisation.
- Critical process status (squid).
System Metric Monitoring. Prerequisites.
The UCD-SNMP mechanism
The monitored host must be running the UCD-SNMP agent software.
[Diagram: the management station issues UCD-SNMP requests (snmpwalk, snmpget) to the UCD-SNMP agent on the remote host; snmpd.conf controls access, and the UCD-MIB's extensible section exposes host information: load, memory, disk and processes.]
System Metric Monitoring.
System Polling Configuration Window
System Metric Monitoring.
Disk Statistics Window
System Metric Monitoring.
A Pop-Up Alarm, with an optional Audible_Alarm (bell).
System Metric Monitoring.
Figure 5.5b Domain Status Section
Indicates most critical entry in each column.
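Summarising a column to its most critical entry can be sketched as a maximum over a severity ordering. The ordering below is invented for illustration; MUTiny's actual status levels may differ.

```python
# Hypothetical domain-status summary: for each metric column, report
# the single worst value seen across all nodes.

SEVERITY = {"ok": 0, "warning": 1, "critical": 2}

def most_critical(column_values):
    """Return the worst status in one column of the domain table."""
    return max(column_values, key=lambda s: SEVERITY.get(s, 0))
```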
Real Time Metrics.
‘top’ Metric Window
4 Future Developments.
- Distributed Network Management.
- MUTiny NEM GUI.
- Web-based Topology Display.
- An Applications Interface.