IBM Operations Analytics for z Systems
Installing, configuring, using, and troubleshooting
Version 3 Release 1
IBM
ii Operations Analytics for z Systems: Installing, configuring, using, and troubleshooting
Contents
What's new in Version 3.1.0 . . . . . . 1
Conventions used in this documentation 3
Operations Analytics for z Systems overview . . . . . 5
  z/OS Insight Packs . . . . . 7
  IBM Operations Analytics - Log Analysis V1.3.3 . . . . . 8
  Log Analysis extensions for z/OS Problem Insights and client-side Expert Advice . . . . . 9
  IBM zAware V3.1 feature . . . . . 11
  IBM zAware data gatherer . . . . . 12
  IBM Common Data Provider for z Systems V1.1.0 . . . . . 12
  Logstash and the ioaz Logstash output plugin . . . . . 12
  Installation and configuration checklists . . . . . 12
Planning for installation and configuration of Operations Analytics for z Systems . . . . . 15
  System requirements . . . . . 15
  Planning for installation of Log Analysis . . . . . 16
  Planning for Log Analysis to support LDAP interaction with RACF for sign-on authentication . . . . . 17
  Planning for configuration of IBM Common Data Provider for z Systems . . . . . 18
    Correlation of the data to be analyzed with the associated data stream types, data source types, and dashboards . . . . . 19
    Determination of time zone information for z/OS log records . . . . . 23
    Requirements for enabling the generation of SMF data at the source . . . . . 24
    Requirements for configuring SMF data streams . . . . . 26
    Requirements for gathering WebSphere Application Server for z/OS log data . . . . . 27
  Planning for configuration of Logstash . . . . . 29
  Creation of data sources . . . . . 30
  Data source types . . . . . 31
Installing Operations Analytics for z Systems . . . . . 33
  Installing Log Analysis . . . . . 33
  Installing the z/OS Insight Packs, extensions, and IBM zAware data gatherer . . . . . 35
  Installing Logstash and the ioaz Logstash output plugin . . . . . 36
  Uninstalling the Insight Packs . . . . . 38
  Removing data from Log Analysis . . . . . 39
  Uninstalling Log Analysis . . . . . 40
Upgrading Operations Analytics for z Systems . . . . . 43
  Upgrading to Log Analysis V1.3.3.1 . . . . . 45
    Upgrading from Log Analysis V1.3.2 to V1.3.3.1 . . . . . 46
    Upgrading from Log Analysis V1.3.3.0 to V1.3.3.1 . . . . . 46
  Upgrading the z/OS Insight Packs and extensions . . . . . 47
  Replacing the scala Logstash output plugin with the ioaz Logstash output plugin . . . . . 48
  Migrating to IBM Common Data Provider for z Systems for streaming z/OS operational data . . . . . 50
    Upgrading to the new z/OS Log Forwarder and the System Data Engine . . . . . 51
    Migrating to use IBM Common Data Provider for z Systems for gathering data . . . . . 52
    Migrating to route SMF data directly from the System Data Engine to the Data Streamer . . . . . 54
Configuring Operations Analytics for z Systems . . . . . 57
  Configuring the IBM zAware data gatherer . . . . . 57
    Configuration reference for the IBM zAware data gatherer . . . . . 58
  Configuring Logstash . . . . . 60
    Verifying the identity of the Log Analysis server . . . . . 61
    Updating the Logstash configuration with the user password for the Log Analysis server . . . . . 62
  Securing communication between IBM Common Data Provider for z Systems and Logstash . . . . . 62
Preparing to analyze z/OS log data . . . . . 65
  Logging in to Log Analysis . . . . . 65
    Use of cookies in the Log Analysis UI . . . . . 65
  Grouping data sources to optimize troubleshooting in your IT environment . . . . . 66
  Extending troubleshooting capability with Custom Search Dashboard applications . . . . . 66
    Customizing the dashboard applications . . . . . 67
    Running the dashboard applications . . . . . 72
  Getting started with the IBM zAware data gatherer . . . . . 72
  Getting started with Problem Insights for z/OS . . . . . 74
  Getting started with client-side Expert Advice . . . . . 74
Troubleshooting Operations Analytics for z Systems . . . . . 77
  Log files . . . . . 77
  Enabling tracing for the ioaz Logstash output plugin . . . . . 78
  APPLID values for CICS Transaction Server might not be correct in Log Analysis user interface . . . . . 78
  DB2 or MQ command prefix values might not be correct in Log Analysis user interface . . . . . 79
  Log record skipped and not available in Log Analysis . . . . . 79
  z/OS log data is missing in Log Analysis search results . . . . . 80
  After upgrade, interval anomaly data is not visible in Log Analysis user interface . . . . . 82
© Copyright IBM Corp. 2014, 2016 iii
Reference . . . . . 85
  Configuration options for the ioaz Logstash output plugin . . . . . 85
  WebSphere Application Server for z/OS data source types . . . . . 87
    zOS-WAS-SYSOUT data source type . . . . . 88
    zOS-WAS-SYSPRINT data source type . . . . . 90
    zOS-WAS-HPEL data source type . . . . . 93
  z/OS Network data source types . . . . . 96
    zOS-NetView data source type . . . . . 96
  z/OS SMF data source types . . . . . 99
    zOS-SMF30-Annotate annotations . . . . . 100
    zOS-SMF80-Annotate annotations . . . . . 101
    zOS-SMF110_E-Annotate annotations . . . . . 103
    zOS-SMF110_S_10-Annotate annotations . . . . . 105
    zOS-SMF120-Annotate annotations . . . . . 107
  z/OS SYSLOG data source types . . . . . 111
    zOS-SYSLOG-Console data source type . . . . . 112
    zOS-SYSLOG-SDSF data source type . . . . . 115
    zOS-syslogd data source type . . . . . 118
    zOS-CICS-MSGUSR data source type: three variations . . . . . 120
    zOS-CICS-EYULOG data source type: three variations . . . . . 121
    zOS-CICS-Annotate annotations . . . . . 122
    zOS-Anomaly-Interval data source type . . . . . 124
  Dashboards . . . . . 126
    WebSphere Application Server for z/OS Custom Search Dashboard applications . . . . . 126
    z/OS Network Custom Search Dashboard applications . . . . . 127
    z/OS SMF Custom Search Dashboard applications . . . . . 128
    z/OS SYSLOG Custom Search Dashboard applications . . . . . 132
  Sample searches . . . . . 136
    WebSphere Application Server for z/OS sample searches . . . . . 136
    z/OS Network sample searches . . . . . 137
    z/OS SMF sample searches . . . . . 138
    z/OS SYSLOG sample searches . . . . . 139
  SMF type 80-related records that the System Data Engine creates . . . . . 143
    SMF80_COMMAND record type . . . . . 144
    SMF80_LOGON record type . . . . . 144
    SMF80_OMVS_RES record types . . . . . 146
    SMF80_OMVS_SEC record types . . . . . 146
    SMF80_OPERATION record type . . . . . 147
    SMF80_RESOURCE record type . . . . . 149
Notices . . . . . 151
  Terms and conditions for product documentation . . . . . 152
  Trademarks . . . . . 153
What's new in Version 3.1.0
Review the new features that are available in Version 3.1.0 of IBM® Operations Analytics for z Systems to understand the potential benefits to your z/OS-based IT operations environment.
V3.1.0 features
Table 1 lists the new features and indicates where you can find more information.
Table 1. New features in IBM Operations Analytics for z Systems V3.1.0
Feature description More information
IBM Common Data Provider for z Systems replaces the z/OS® Log Forwarder. It can automatically monitor and forward z/OS data to Logstash.

IBM Operations Analytics for z Systems provides a copy of Logstash 2.3.4 and an ioaz Logstash output plugin that forwards data to the IBM Operations Analytics - Log Analysis server for processing by the z/OS Insight Packs.
v “Operations Analytics for z Systems overview” on page 5
v “Planning for installation and configuration of Operations Analytics for z Systems” on page 15
v “Installing Operations Analytics for z Systems™” on page 33
v “Upgrading Operations Analytics for z Systems” on page 43
v “Configuring Operations Analytics for z Systems” on page 57
v “Configuration options for the ioaz Logstash output plugin” on page 85
v “Log files” on page 77
v “Enabling tracing for the ioaz Logstash output plugin” on page 78
IBM Operations Analytics for z Systems includes the separately installable IBM z Advanced Workload Analysis Reporter (IBM zAware) feature.
v “Operations Analytics for z Systems overview” on page 5
v “IBM zAware V3.1 feature” on page 11
Updates to the Problem Insights UI:
v The Problem Insights and Suggested Actions table has an Interval Score column that shows the zAware anomaly score for the interval in which the problem occurred. You can click a score to navigate in context to IBM zAware for more information about the interval.
v If you hover over a number in the Count column of the Problem Insights and Suggested Actions table, a pop-up window shows more information about each occurrence of the problem, including the interval score. You can click the score to navigate in context to IBM zAware for more information about that interval.
v A bar chart for each system within the selected sysplex shows the interval scores for the last 24 hours. If you hover over an individual bar in a bar chart, a pop-up window shows the start and stop time for the interval, the total number of unique message IDs for the interval, and the interval score. You can click an individual bar to navigate in context to IBM zAware for more information about the interval.
v “Log Analysis extensions for z/OS Problem Insights and client-side Expert Advice” on page 9
v “Installing the z/OS Insight Packs, extensions, and IBM zAware data gatherer” on page 35
v “Getting started with Problem Insights for z/OS” on page 74
Support for gathering interval anomaly data from IBM zAware v “Operations Analytics for z Systems overview” on page 5
v “IBM zAware V3.1 feature” on page 11
v “IBM zAware data gatherer” on page 12
v “Installing the z/OS Insight Packs, extensions, and IBM zAware data gatherer” on page 35
v “Configuring the IBM zAware data gatherer” on page 57
v “Configuration reference for the IBM zAware data gatherer” on page 58
v “Getting started with the IBM zAware data gatherer” on page 72
v “After upgrade, interval anomaly data is not visible in Log Analysis user interface” on page 82
v “z/OS SYSLOG data source types” on page 111
v “zOS-Anomaly-Interval data source type” on page 124
v “zOS-Anomaly-Interval-Annotate annotations” on page 124
Conventions used in this documentation
This information describes conventions that are used in the IBM Operations Analytics for z Systems documentation.
LA_INSTALL_DIR
    LA_INSTALL_DIR is the directory where IBM Operations Analytics - Log Analysis is installed.

    The default location for installing Log Analysis is the /home/user/IBM/LogAnalysis directory, where user represents the user ID of the non-root user who installs Log Analysis.

LOGSTASH_INSTALL_DIR
    LOGSTASH_INSTALL_DIR is the directory where Logstash is installed.
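In shell sessions, commands from later sections can be written against these names by setting them as environment variables. The Log Analysis path below is the documented default; the Logstash path is only an assumed example, because this documentation does not define a default Logstash installation directory:

```shell
# Substitute your own installation paths. /home/user/IBM/LogAnalysis is
# the documented default for Log Analysis; /opt/logstash is only an
# assumed example for Logstash.
LA_INSTALL_DIR=/home/user/IBM/LogAnalysis
LOGSTASH_INSTALL_DIR=/opt/logstash
export LA_INSTALL_DIR LOGSTASH_INSTALL_DIR

echo "$LA_INSTALL_DIR"
echo "$LOGSTASH_INSTALL_DIR"
```

With the variables exported, a documented path such as LA_INSTALL_DIR/utilities can then be typed as "$LA_INSTALL_DIR"/utilities.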
Operations Analytics for z Systems overview
IBM Operations Analytics for z Systems extends the capabilities of IBM Operations Analytics - Log Analysis to help you more quickly identify, isolate, and resolve problems in a z/OS-based IT operations environment.
IBM Operations Analytics for z Systems provides the following capabilities:
v Capability to gather z/OS logs across the System z® enterprise and to forward them to the IBM Operations Analytics - Log Analysis server for analysis
v Capability to index, search, and analyze application, middleware, and infrastructure log data across the System z enterprise
v Capability to quickly search and visualize errors across thousands of log records
v Expert advice that is based on linking search results to available troubleshooting information, such as best practices and previously documented solutions
v Continuous streaming of z/OS logs
IBM products that are closely related to IBM Operations Analytics for z Systems

IBM Operations Analytics for z Systems is closely related to the following products:
IBM Operations Analytics - Log Analysis
    IBM Operations Analytics - Log Analysis is a cross-platform solution for identifying and resolving problems in an IT operations environment.
    In IBM Operations Analytics - Log Analysis, an Insight Pack is software that extends the capabilities of IBM Operations Analytics - Log Analysis to provide support for loading and analyzing data from sources that share common characteristics. Examples include log sources for a specific operating system or for a specific application, such as IBM WebSphere® Application Server.
    IBM Operations Analytics for z Systems includes z/OS Insight Packs that extend IBM Operations Analytics - Log Analysis function to analyze the various types of z/OS log data.

    IBM Operations Analytics for z Systems also includes a version of IBM Operations Analytics - Log Analysis.

IBM Common Data Provider for z Systems
    IBM Common Data Provider for z Systems is a single data provider for sources of both structured and unstructured data. It provides a near real-time data feed of z/OS log data and System Management Facilities (SMF) data that can be used by IBM Operations Analytics for z Systems and other software.

    IBM Operations Analytics for z Systems includes a version of IBM Common Data Provider for z Systems.
IBM z Advanced Workload Analysis Reporter (IBM zAware)
    IBM zAware monitors software and models normal system behavior to detect and diagnose anomalies in z/OS and Linux on z Systems environments. It creates a model of normal system behavior that is based on prior system data, and it uses pattern recognition techniques to identify unexpected messages from the systems that it is monitoring. This analysis of events provides near real-time detection of anomalies that you can easily view through a GUI, which you can also use to determine the cause of past or current anomalies. This early detection can help IT personnel correct problems before the problems affect system processing.
    IBM Operations Analytics for z Systems includes the separately installable IBM zAware feature.
Flow of source data
Figure 1 illustrates the flow of data among the following primary components of IBM Operations Analytics for z Systems:
v “z/OS Insight Packs” on page 7
v “IBM Operations Analytics - Log Analysis V1.3.3” on page 8
v “IBM Common Data Provider for z Systems V1.1.0” on page 12
The following steps describe the data flow among components, which is indicated by arrows in the illustration:
1. In each z/OS logical partition (LPAR), the IBM Common Data Provider for z Systems retrieves the data from the respective source (log or SMF record) and forwards it to the IBM Operations Analytics - Log Analysis server.
2. The source data is processed by the respective z/OS Insight Pack. Insight is provided for data from the following source types:
   v z/OS system log (SYSLOG)
   v CICS® Transaction Server for z/OS EYULOG or MSGUSR log data
   v Network data, such as data from UNIX System Services system log (syslogd) or z/OS Communications Server
[Figure 1. Flow of source data among IBM Operations Analytics for z Systems components. The figure shows logs in each z/OS LPAR flowing to the Common Data Provider for z Systems in that LPAR (step 1), which sends the data to the Generic Receiver on the Log Analysis server, where the z/OS Insight Packs process it (step 2) and the user interface presents it (step 3).]
   v NetView® for z/OS message data
   v SMF data
   v WebSphere Application Server for z/OS logs that include SYSOUT, SYSPRINT, or HPEL log data
3. Users can see visualizations of the analyzed data in the Log Analysis user interface.
Insight is also provided for IBM zAware interval anomaly data. The flow of interval anomaly data includes the following steps:
1. The IBM zAware data gatherer of IBM Operations Analytics for z Systems resides on the same distributed system as the Log Analysis server. It retrieves interval anomaly data from the IBM zAware Representational State Transfer (REST) API and provides it to the Log Analysis server.
2. The source data is processed by the z/OS SYSLOG Insight Pack.
3. Users can see visualizations of the analyzed data in the Log Analysis user interface.
z/OS Insight Packs

Each z/OS Insight Pack in IBM Operations Analytics for z Systems extends IBM Operations Analytics - Log Analysis function to analyze a unique type of z/OS log data.
The following z/OS Insight Packs are available:
z/OS SYSLOG Insight Pack
    This Insight Pack enables Log Analysis to ingest, and perform searches against, data that is retrieved from the following sources:
    v z/OS SYSLOG console, which includes data from the following software:
      – IBM CICS Transaction Server for z/OS
      – IBM DB2® for z/OS
      – IBM IMS™ for z/OS
      – IBM MQ for z/OS
      – IBM Resource Access Control Facility (RACF®)
      – z/OS Communications Server
    v CICS Transaction Server for z/OS EYULOG and MSGUSR log
    v UNIX System Services system log (syslogd)
    v IBM z Advanced Workload Analysis Reporter (IBM zAware)

    Summary: Install this Insight Pack if you want to analyze any of the following data:
    v z/OS SYSLOG console data
    v CICS Transaction Server for z/OS EYULOG or MSGUSR log data
    v UNIX System Services system log (syslogd)
    v IBM zAware interval anomaly data
    v SMF data

z/OS Network Insight Pack
    This Insight Pack enables Log Analysis to perform searches against z/OS network data. It also enables Log Analysis to ingest, and perform searches against, IBM Tivoli® NetView for z/OS message data.
    To analyze z/OS network data, you must use both the z/OS Network Insight Pack and the z/OS SYSLOG Insight Pack. The z/OS SYSLOG Insight Pack enables Log Analysis to ingest z/OS network data.
    Summary: Install this Insight Pack if you want to analyze any of the following data:
    v Network data, such as data from UNIX System Services system log (syslogd) or z/OS Communications Server
    v NetView for z/OS message data
z/OS SMF Insight Pack
    This Insight Pack enables Log Analysis to ingest, and perform searches against, data that is retrieved from IBM z/OS System Management Facilities (SMF).

    Summary: Install this Insight Pack if you want to analyze SMF data.

WebSphere Application Server for z/OS Insight Pack
    This Insight Pack enables Log Analysis to ingest, and perform searches against, logging data that is retrieved from IBM WebSphere Application Server for z/OS.

    Summary: Install this Insight Pack if you want to analyze WebSphere Application Server for z/OS log data.
Table 2 lists each z/OS Insight Pack with its associated installation package file.
Table 2. Installation package files for z/OS Insight Packs

Insight Pack                                          Installation package
WebSphere Application Server for z/OS Insight Pack    WASforzOSInsightPack_v3.1.0.0.zip
z/OS Network Insight Pack                             zOSNetworkInsightPack_v3.1.0.0.zip
  Important: To analyze network data, install both
  the z/OS SYSLOG Insight Pack and this Insight Pack.
z/OS SMF Insight Pack                                 SMFforzOSInsightPack_v3.1.0.0.zip
  Important: To analyze SMF data, install both the
  z/OS SYSLOG Insight Pack and this Insight Pack.
z/OS SYSLOG Insight Pack                              SYSLOGforzOSInsightPack_v3.1.0.0.zip
IBM Operations Analytics - Log Analysis V1.3.3

IBM Operations Analytics - Log Analysis is used with the z/OS Insight Packs to provide support for analyzing z/OS log data.

IBM Operations Analytics - Log Analysis includes the following functions that you can configure in the Log Analysis UI:
Role-based access control
    You can create and modify users and roles to assign role-based access control to individual users.

    For more information, see Users and roles in the Log Analysis documentation.
Alerting
    You can create alerts that are based on events, and you can define the actions for the system to take when an alert is triggered. For example, you can define an action to send an email notification to one or more people when an alert is triggered.

    For more information, see Managing alerts in the Log Analysis documentation.
Some components that are preinstalled with Log Analysis
Although many components are preinstalled with IBM Operations Analytics - Log Analysis, the following components might be especially useful with your z/OS Insight Packs:
Expert advice custom search dashboard
    The Expert Advice application provides links to contextually relevant information to help you resolve problems quickly. Using this application, you can select any column or cells in the Grid view and can launch a search of the IBM Support Portal. The application searches for matches to unique terms that are contained in the column that you select.

    You can start the Expert Advice application by clicking Search Dashboards in the navigation pane of the Search workspace.

    For more information about this component, see Custom Search Dashboards in the Log Analysis documentation.

    Tip: To use this Expert Advice, the Log Analysis server must have access to the Internet. With the client-side Expert Advice extension that is provided by IBM Operations Analytics for z Systems, you can access expert advice even if the Log Analysis server does not have access to the Internet. For more information about the client-side Expert Advice, see “Log Analysis extensions for z/OS Problem Insights and client-side Expert Advice.”

WebSphere Application Server Insight Pack
    This Insight Pack is different from the WebSphere Application Server for z/OS Insight Pack. It is intended for use with WebSphere Application Server on distributed platforms. It does not support native logging formats for WebSphere Application Server for z/OS. However, if you configure your WebSphere Application Server for z/OS environment to use a distributed logging format, this Insight Pack provides you with the required annotation and indexing capabilities.

    For more information about this component, see WebSphere Application Server Insight Pack in the Log Analysis documentation.
Log Analysis extensions for z/OS Problem Insights and client-side Expert Advice

IBM Operations Analytics for z Systems provides extensions to IBM Operations Analytics - Log Analysis for z/OS Problem Insights and client-side Expert Advice. To use these extensions, you must have Log Analysis Version 1.3.3 Fix Pack 1 installed.
Problem Insights
The Problem Insights extension provides near real-time insight into problems in your IT environment, with suggested actions to help resolve the problems.
If the Problem Insights component of IBM Operations Analytics for z Systems is installed, the Log Analysis UI includes a new tab that is titled Problem Insights. For each sysplex from which data is being forwarded to the Log Analysis server, the Problem Insights page includes insight about certain problems that are identified in the ingested data.

You can select the time range for which you want to see insights. For example, if you want to know about certain problems that were identified in the last hour, you can select the last hour as the time range. The default time range is the last 15 minutes.

You can also reload the Problem Insights page and associated data by clicking the Refresh button that is located in the area of the UI where you select the time range.

Each sysplex in the monitored environment is represented by a button that indicates the number of problems that are found in that sysplex. You can toggle the showing or hiding of data for a sysplex by clicking the button for the respective sysplex.
The Problem Insights and Suggested Actions table includes the following information about each problem that is discovered:
Severity
    The indication of the severity of the problem.

Sysplex
    The sysplex where the problem occurred.

System
    The system where the problem occurred.

Interval score
    Indicates unusual patterns of message IDs within an analysis interval, in comparison to the model of normal system behavior for the respective system. This score is extracted from the IBM zAware interval anomaly score. A score of 99.5 or higher indicates an anomaly.

    You can click a score to navigate in context to IBM zAware for more information about the interval.

Subsystem
    The subsystem or system resources manager where the problem occurred.

Time
    The last time that the problem occurred in the selected time range. For example, if you select the last 15 minutes as the time range, this column shows the last time that the problem occurred in the last 15 minutes.

Problem Summary
    A summary of the problem that provides insight into the cause.

Count
    The total number of occurrences of the problem in the selected time range.

    If you hover over this number, a pop-up window shows the following information about each occurrence:
    v Time
    v Severity
    v Sysplex
    v System
    v Interval score. You can click a score to navigate in context to IBM zAware for more information about that interval.

Suggested Actions
    Click a question mark (?) to go to more insight about the problem, including actions that you can take to resolve the problem. The Suggested Actions information also includes links to other sources of information that might help you resolve the problem, such as relevant topics in the IBM Knowledge Center.

Evidence
    The message number of the message that identifies the problem, which you can click to view more information.
After the Problem Insights and Suggested Actions table, a bar chart for each system within the selected sysplex shows the interval scores for the last 24 hours. If you hover over an individual bar in a bar chart, a pop-up window shows the following information:
v The start and stop time for the interval
v The total number of unique message IDs for the interval
v Interval score

You can click an individual bar to navigate in context to IBM zAware for more information about the interval.
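The anomaly rule behind the interval score (a score of 99.5 or higher indicates an anomaly) can be illustrated with a quick filter over score data. The "system score" input lines here are made-up sample values, not a real IBM zAware export format:

```shell
# Print only the systems whose interval score marks an anomaly (>= 99.5).
# The input lines are fabricated sample data for illustration.
printf 'SYS1 99.6\nSYS2 12.0\nSYS3 100.0\n' |
  awk '$2 >= 99.5 { print $1, "anomalous" }'
# Prints:
#   SYS1 anomalous
#   SYS3 anomalous
```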
Client-side Expert Advice
If the IBM Operations Analytics - Log Analysis server is behind a firewall, and therefore does not have access to the Internet, you can use the Client-side Expert Advice (IBMSupportPortal-ExpertAdvice on Client) rather than the default Expert Advice (IBMSupportPortal-ExpertAdvice) in the Log Analysis UI.

After you install the IBM Operations Analytics for z Systems extensions, the following applications are shown under Expert Advice in the Custom Search Dashboards panel of the left navigation pane of the Search workspace:
v IBMSupportPortal-ExpertAdvice, which is the default Expert Advice in Log Analysis. When this Expert Advice is launched, the Log Analysis server sends search requests to the IBM Support Portal and displays the query search results in the Log Analysis UI.
v IBMSupportPortal-ExpertAdvice on Client, which is the extension for client-side Expert Advice that is provided by IBM Operations Analytics for z Systems. When this Expert Advice is launched, the client browser sends search requests directly to the IBM Support Portal and opens a new browser tab to display the query search results.
IBM zAware V3.1 feature

IBM Operations Analytics for z Systems includes the separately installable IBM z Advanced Workload Analysis Reporter (IBM zAware) feature.
For more information about this feature, see the IBM zAware Guide.
This feature must be installed before you can use the IBM zAware data gatherer.
IBM zAware data gatherer

IBM Operations Analytics for z Systems includes the IBM zAware data gatherer, which is a client that gathers interval anomaly data from an IBM zAware server.

The data is ingested into IBM Operations Analytics - Log Analysis and is shown in the Log Analysis user interface under the data source that is named zOS Anomaly Interval. Visualizations of this interval anomaly data are also shown on the new Problem Insights page, if the Problem Insights extension is installed.

Before you can use the IBM zAware data gatherer, both the IBM zAware V3.1 feature and the data gatherer must be installed, and the data gatherer must be configured with an IBM zAware server definition. The data gatherer is installed as part of the installation of the z/OS Insight Packs and extensions. You can also configure the data gatherer during this installation by responding to prompts for the values of the configuration parameters.
IBM Common Data Provider for z Systems V1.1.0

The IBM Common Data Provider for z Systems automatically monitors z/OS log data and SMF data and forwards it to the configured destination.

In each logical partition (LPAR) from which you want to analyze z/OS log data or SMF data, a unique instance of IBM Common Data Provider for z Systems must be installed and configured to specify the type of data to be gathered and the destination for that data.
For more information, see the IBM Common Data Provider for z Systems V1.1.0documentation.
Logstash and the ioaz Logstash output plugin

Logstash is an open source data collection engine that, in near real time, can dynamically unify data from disparate sources and normalize the data into the destinations of your choice for analysis and visualization. IBM Operations Analytics for z Systems provides a copy of Logstash 2.3.4 and an ioaz Logstash output plugin that forwards data to the IBM Operations Analytics - Log Analysis server for processing by the z/OS Insight Packs.

Because IBM Common Data Provider for z Systems cannot forward data directly to the Log Analysis server, the z/OS log data and SMF data that is gathered by IBM Common Data Provider for z Systems must be forwarded to a Logstash instance. The Logstash instance then forwards the data to the Log Analysis server. Through Logstash, you can also forward data to other destinations, or route data through a message broker such as Apache Kafka.
For more information about Logstash, see the Logstash documentation.
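As a sketch of what such a pipeline might look like, the following fragment writes a skeleton Logstash configuration with an input that receives data from IBM Common Data Provider for z Systems and an ioaz output. The tcp input and port 8080 are illustrative assumptions, and the ioaz block is left empty; the real options are listed in “Configuration options for the ioaz Logstash output plugin” on page 85:

```shell
# Write a skeleton Logstash pipeline file. The tcp input and port 8080
# are illustrative assumptions, not documented values; see the ioaz
# configuration options reference for the actual output options.
cat > ioaz-pipeline.conf << 'EOF'
input {
  tcp {
    port => 8080
  }
}
output {
  ioaz {
    # Connection options for the Log Analysis server go here.
  }
}
EOF
```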
Installation and configuration checklists

These checklists summarize the important steps for installing and configuring the Operations Analytics for z Systems components, including IBM Operations Analytics - Log Analysis, the z/OS Insight Packs, the Log Analysis extensions, the IBM zAware data gatherer, and Logstash and the ioaz Logstash output plugin.
The following steps outline the installation and configuration process:
__ 1. Install IBM Operations Analytics - Log Analysis V1.3.3 Fix Pack 1. See “Checklist for IBM Operations Analytics - Log Analysis.”
__ 2. On the Log Analysis server, install the z/OS Insight Packs, the Log Analysis extensions, and the IBM zAware data gatherer. See Checklist for z/OS Insight Packs, Log Analysis extensions, and the IBM zAware data gatherer.
__ 3. Install Logstash 2.3.4 and the ioaz Logstash output plugin. See “Checklist for Logstash and the ioaz Logstash output plugin” on page 14.
__ 4. Determine the sources from which you want to gather log data. For more information, see “Planning for configuration of IBM Common Data Provider for z Systems” on page 18.
__ 5. Install the IBM Common Data Provider for z Systems in each z/OS logical partition (LPAR) for which you want to process z/OS log messages. Then, in each LPAR where the IBM Common Data Provider for z Systems is installed, configure it to gather and forward z/OS log data to Logstash. For more information, see the IBM Common Data Provider for z Systems V1.1.0 documentation.
      If you plan to gather System Management Facilities (SMF) data, also complete the configuration steps that are described in “Requirements for configuring SMF data streams” on page 26.
__ 6. Secure communication between IBM Common Data Provider for z Systems and Logstash. For more information, see “Securing communication between IBM Common Data Provider for z Systems and Logstash” on page 62.
__ 7. Ensure that z/OS data sources are configured and grouped appropriately in Log Analysis. For more information, see “Grouping data sources to optimize troubleshooting in your IT environment” on page 66.
Checklist for IBM Operations Analytics - Log Analysis

__ 1. Verify that the system requirements for Log Analysis, including the installation of the prerequisite software, are met. For more information, see Hardware and software requirements.
__ 2. Using the ulimit command, tune the Linux operating system so that the maximum number of open files is 4096 and the virtual memory is unlimited. For more information, see Hardware and software requirements.
__ 3. To install and run Log Analysis, you must be logged in to the Linux computer system with a non-root user ID. Either create this non-root user ID, or use an existing non-root user ID. For more information, see “Planning for installation of Log Analysis” on page 16.
__ 4. Review the network port assignments for Log Analysis. If the default ports are not available, determine alternative port assignments. For more information, see Default ports in the Log Analysis documentation.
__ 5. The IBM Tivoli Monitoring Log File Agent is an optional component that is provided with Log Analysis and other IBM products. If you plan to gather logs from Linux for z Systems, install this agent as part of the Log Analysis installation. For more information, see Installing and configuring the IBM Tivoli Monitoring Log File Agent in the Log Analysis documentation.
__ 6. Verify that you are logged in to the Linux computer system with the non-root user ID, and complete the steps in “Installing Log Analysis” on page 33.
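The limits that step 2 requires can be verified from a shell before you start the installation. This is a quick-check sketch; the required values are the ones stated in the Log Analysis hardware and software requirements:

```shell
# Show the current limits for the logged-in non-root user.
# Log Analysis requires a maximum of 4096 open files and
# unlimited virtual memory for this user.
open_files=$(ulimit -n)
virtual_mem=$(ulimit -v)
echo "open files limit: $open_files (required: 4096)"
echo "virtual memory limit: $virtual_mem (required: unlimited)"
```

If either value is too low, the system administrator typically raises it for the installation user, for example in /etc/security/limits.conf or with ulimit in the user's shell profile.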
Checklist for z/OS Insight Packs, Log Analysis extensions, and the IBM zAware data gatherer

__ 1. Verify that Log Analysis is running.
__ 2. Verify that you are logged in to the Linux computer system with the non-root user ID that was used to install Log Analysis.
__ 3. Install the z/OS Insight Packs with sample searches, the extensions for Problem Insights and client-side Expert Advice, and the IBM zAware data gatherer. For more information, see “Installing the z/OS Insight Packs, extensions, and IBM zAware data gatherer” on page 35.
Tip: If the IBM zAware data gatherer is not configured, you cannot see visualizations of the interval anomaly data in the Log Analysis user interface. Therefore, if you choose not to configure the IBM zAware data gatherer at installation time, you must configure it before you use the Problem Insights for z/OS.

For more information about configuring and getting started with the data gatherer, see the following topics:
v “Configuring the IBM zAware data gatherer” on page 57
v “Getting started with the IBM zAware data gatherer” on page 72
Checklist for Logstash and the ioaz Logstash output plugin

__ 1. Verify that the Logstash system requirements are met. For more information, see “Logstash requirements” on page 16.
__ 2. Install Logstash and the ioaz Logstash output plugin. For more information, see “Installing Logstash and the ioaz Logstash output plugin” on page 36.
__ 3. Configure Logstash and the ioaz Logstash output plugin. For more information, see “Configuring Logstash” on page 60.
Planning for installation and configuration of Operations Analytics for z Systems

In planning for the installation of IBM Operations Analytics for z Systems, you must also plan for the installation of IBM Operations Analytics - Log Analysis and the installation and configuration of both Logstash and the IBM Common Data Provider for z Systems.
System requirements

Ensure that your environment meets the system requirements for IBM Operations Analytics for z Systems.
Log Analysis requirements
For information about the system requirements for Log Analysis, see Hardware and software requirements.

Restriction: If you deploy Log Analysis on a Linux on System z system, you can use the IBM InfoSphere® BigInsights® Hadoop Version 3.0 service, but at this time, the Hadoop Distributed File System (HDFS) must be installed on an x86 Linux system. The support for integrating Log Analysis on a Linux on System z system with other Hadoop distributions is dependent on the platform support that is provided by the respective Hadoop distribution.
z/OS Insight Pack requirements
You must install the z/OS Insight Packs on a Log Analysis server with Log Analysis Version 1.3.3. The IBM Operations Analytics for z Systems Version 3.1.0 package includes IBM Operations Analytics - Log Analysis Version 1.3.3 Standard Edition. To use the extensions for z/OS Problem Insights and client-side Expert Advice and to use the IBM zAware data gatherer, you must also have Log Analysis Version 1.3.3 Fix Pack 1 (also known as V1.3.3.1) installed.

Tip: To get Log Analysis Version 1.3.3 Fix Pack 1 (1.3.3.1), complete the following steps:
1. Go to IBM Fix Central.
2. In the Product selector field, start typing IBM Operations Analytics - Log Analysis, and when the correct product name is shown in the resulting list, select it. More fields are then shown.
3. In the Installed Version field, select 1.3.3.
4. In the Platform field, select All.
5. Click Continue.
6. In the resulting “Identify fixes” window, select Browse for fixes, and click Continue.
7. In the “Select fixes” window, you should see the fix pack 1.3.3-TIV-IOALA-FP001, which you can select and download. For installation instructions, see the readme file.

Table 3 on page 16 indicates the software versions that are supported by each z/OS Insight Pack.
© Copyright IBM Corp. 2014, 2016 15
Table 3. Software versions that are supported by each z/OS Insight Pack

Insight Pack | Supports data ingestion from this software
z/OS Network Insight Pack | z/OS 2.1 and 2.2; NetView for z/OS 6.1, 6.2, and 6.2.1
z/OS SMF Insight Pack | z/OS 2.1 and 2.2
z/OS SYSLOG Insight Pack | z/OS 2.1 and 2.2; DB2 for z/OS 10.1 and 11.1; CICS Transaction Server for z/OS 4.2, 5.1.1, 5.2, and 5.3; IMS for z/OS 12.1, 13.1, and 14.1; MQ for z/OS 7.1, 8.0, and 9.0
WebSphere Application Server for z/OS Insight Pack | WebSphere Application Server for z/OS 7.0, 8.0, 8.5.5, and 9.0
Logstash requirements
You must install Logstash 2.3.4 on a Linux on System p, System x, or System z system with the following software:
v Red Hat Enterprise Linux 6 or 7, or SUSE Linux Enterprise Server 11 or 12
v Java Runtime Environment (JRE) 7 or 8

Logstash requires 180 MB of disk space, plus more disk space for logs and the cache. The cache is used for storing data that cannot be sent if the IBM Operations Analytics - Log Analysis server is down.
IBM Common Data Provider for z Systems requirements
You must run the IBM Common Data Provider for z Systems in each z/OS logical partition (LPAR) for which you want to process z/OS log messages. For information about the system requirements for this provider, see the IBM Common Data Provider for z Systems V1.1.0 documentation.
Planning for installation of Log Analysis

Before you install IBM Operations Analytics - Log Analysis, review the requirements for the installation user ID, domain name resolution, and network connectivity. Also, decide whether you need to use the optional IBM Tivoli Monitoring Log File Agent component.
Before you begin
Verify that your environment meets the system requirements that are described in “System requirements” on page 15.
About this task
Installation user ID
    To install and run Log Analysis, you must be logged in to the Linux computer system with a non-root user ID.
Domain name resolution
    On the system where you plan to install Log Analysis, verify that the details for the Log Analysis server are maintained correctly in the /etc/hosts file.
Network connectivity between the Logstash server and the IBM Common Data Provider for z Systems LPARs
    The LPARs with IBM Common Data Provider for z Systems must communicate with the Logstash server on the port that is defined during Logstash pipeline configuration.
    The default value for this port is 8080. Ensure that communication on this port is not blocked by a firewall or by other aspects of your network configuration.
Network connectivity between the Logstash server and the Log Analysis server
    The Logstash server must communicate with the Log Analysis server on the port that is defined during Log Analysis installation.
    The default value for this port is 9987. Ensure that communication on this port is not blocked by a firewall or by other aspects of your network configuration.
Optional Log File Agent component
    The IBM Tivoli Monitoring Log File Agent is an optional component that is provided with Log Analysis and other IBM products. You can use this agent to collect logs from Linux for z Systems and from other Linux and UNIX operating systems. For more information, see Installing and configuring the IBM Tivoli Monitoring Log File Agent in the Log Analysis documentation.
Planning for Log Analysis to support LDAP interaction with RACF for sign-on authentication

You can configure IBM Operations Analytics - Log Analysis to support LDAP interaction with RACF for sign-on authentication.
Procedure
You must plan for the following configuration steps:
1. In Log Analysis, verify that no data sources are created and that no permissions are assigned to data sources.
2. In the Log Analysis UI, create at least one user ID for use in doing administrative tasks in Log Analysis.
3. Define other user IDs for use in doing non-administrative tasks in Log Analysis.
4. Because LDAP and RACF authenticate these user IDs during sign-on, ensure that these user IDs are also defined to LDAP and RACF.

   Tip: In this configuration with LDAP and RACF, Log Analysis cannot use the default user IDs unityadmin and unityuser due to RACF restrictions on the length of a user ID (it must be 7 characters or less).

5. Configure Log Analysis to use LDAP for authentication. For instructions, see LDAP configuration in the Log Analysis documentation. For z Systems LDAP support, choose the Tivoli Directory Server as the type of LDAP server.
6. Save a backup copy of the following Log Analysis configuration files so that you can update these files:
   v LA_INSTALL_DIR/utilities/datacollector-client/javaDatacollector.properties
   v LA_INSTALL_DIR/remote_install_tool/config/rest-api.properties
   v LA_INSTALL_DIR/UnityEIFReceiver/config/unity.conf
   v LA_INSTALL_DIR/solr_install_tool/scripts/register_solr_instance.sh
7. Update the passwords in the four previously listed Log Analysis configuration files. For more information, see Changing a user password in the Log Analysis documentation.
8. Map your LDAP groups to the Log Analysis security role so that the users can access Log Analysis. Individual users can also be granted user or administrative privileges. For more information, see LDAP configuration in the Log Analysis documentation.
9. If you use the delete utility to prune data sources, provide this utility with a user ID and password that are known to RACF. You can encrypt the password if you choose to store it on the Log Analysis server.
Planning for configuration of IBM Common Data Provider for z Systems

You must determine the sources from which you want IBM Common Data Provider for z Systems to gather data.
About this task
You can gather data from the following sources:
v Any job log
v Any z/OS UNIX log file or entry-sequenced Virtual Storage Access Method (VSAM) cluster that meets the following criteria:
  – Is encoded in Extended Binary Coded Decimal Interchange Code (EBCDIC)
  – Is record-based
  – Has a time stamp in each record
v z/OS SYSLOG console
v System Management Facilities (SMF)

  Tip: Table 11 on page 22 indicates the SMF record types that are supported by the z/OS SMF Insight Pack with their associated data stream types. These data stream types are shown in the configuration tool for the IBM Common Data Provider for z Systems under the title SMF Data > IOAz.

v NetView for z/OS messages
v WebSphere Application Server for z/OS High Performance Extensible Logging (HPEL) logs
If you plan to gather data from the following software, you must gather z/OS SYSLOG data:
v IBM CICS Transaction Server for z/OS
v IBM DB2 for z/OS
v IBM IMS for z/OS
v IBM MQ for z/OS
v IBM Resource Access Control Facility (RACF)
v z/OS Communications Server
Procedure
Determine the sources from which you want to gather data. You can then configure the IBM Common Data Provider for z Systems to stream the operational data to IBM Operations Analytics for z Systems, and obtain actionable insights into the health of your IT operations environment.
Correlation of the data to be analyzed with the associated data stream types, data source types, and dashboards

For the data that you want to analyze, you must know which data stream types to define during the configuration of the IBM Common Data Provider for z Systems. The data stream types are associated with data source types that are defined in the IBM Operations Analytics - Log Analysis server and UI.

The data source types are also associated with IBM-provided Custom Search Dashboard applications. You can customize these dashboard applications, which are available in the Log Analysis UI, to help you analyze the data and troubleshoot problems.
The following sections correlate each type of data that you can analyze with the following information:
v The Insight Packs that must be installed to analyze the data
v The associated data stream types that you must define during the configuration of IBM Common Data Provider for z Systems
v The associated data source types that are defined in the Log Analysis server and UI
v The associated Custom Search Dashboard applications from the Insight Pack that you can customize and use in the Log Analysis UI
Important:
v All data that is ingested into Log Analysis must be encoded in Unicode Transformation Format, 8-bit encoding form (UTF-8).
v All data streams that are destined for Log Analysis must be transformed to UTF-8 encoding.
CICS Transaction Server for z/OS data

Table 4. Analyzing CICS Transaction Server for z/OS data: required Insight Packs and associated data stream types, data source types, and dashboards

Required Insight Pack | Data stream types | Data source types | Dashboard applications
z/OS SYSLOG Insight Pack | z/OS SYSLOG | zOS-SYSLOG-Console, zOS-SYSLOG-SDSF | “CICS Transaction Server for z/OS Dashboard” on page 134
z/OS SYSLOG Insight Pack | CICS EYULOG | zOS-CICS-EYULOG | None
z/OS SYSLOG Insight Pack | CICS EYULOG DMY | zOS-CICS-EYULOGDMY | None
z/OS SYSLOG Insight Pack | CICS EYULOG YMD | zOS-CICS-EYULOGYMD | None
z/OS SYSLOG Insight Pack | CICS User Messages | zOS-CICS-MSGUSR | None
z/OS SYSLOG Insight Pack | CICS User Messages DMY | zOS-CICS-MSGUSRDMY | None
z/OS SYSLOG Insight Pack | CICS User Messages YMD | zOS-CICS-MSGUSRYMD | None
z/OS SMF Insight Pack | SMF110_E | zOS-SMF110_E | “z/OS SMF Custom Search Dashboard applications” on page 128
z/OS SMF Insight Pack | SMF110_S_10 | zOS-SMF110_S_10 | “z/OS SMF Custom Search Dashboard applications” on page 128
DB2 for z/OS data

Table 5. Analyzing DB2 for z/OS data: required Insight Packs and associated data stream types, data source types, and dashboards

Required Insight Pack | Data stream types | Data source types | Dashboard applications
z/OS SYSLOG Insight Pack | z/OS SYSLOG | zOS-SYSLOG-Console, zOS-SYSLOG-SDSF | “DB2 for z/OS Dashboard” on page 134
IMS for z/OS data

Table 6. Analyzing IMS for z/OS data: required Insight Packs and associated data stream types, data source types, and dashboards

Required Insight Pack | Data stream types | Data source types | Dashboard applications
z/OS SYSLOG Insight Pack | z/OS SYSLOG | zOS-SYSLOG-Console, zOS-SYSLOG-SDSF | “IMS for z/OS Dashboard” on page 135
MQ for z/OS data

Table 7. Analyzing MQ for z/OS data: required Insight Packs and associated data stream types, data source types, and dashboards

Required Insight Pack | Data stream types | Data source types | Dashboard applications
z/OS SYSLOG Insight Pack | z/OS SYSLOG | zOS-SYSLOG-Console, zOS-SYSLOG-SDSF | “MQ for z/OS Dashboard” on page 135
NetView for z/OS messages

Table 8. Analyzing NetView for z/OS messages: required Insight Packs and associated data stream types, data source types, and dashboards

Required Insight Pack | Data stream types | Data source types | Dashboard applications
z/OS Network Insight Pack | NetView Netlog | zOS-NetView | “NetView for z/OS Dashboard” on page 127
Network data

Network data can include, for example, data from the UNIX System Services system log (syslogd) or z/OS Communications Server.

Table 9. Analyzing network data: required Insight Packs and associated data stream types, data source types, and dashboards

Required Insight Pack | Data stream types | Data source types | Custom Search Dashboard applications
z/OS Network Insight Pack | None | None | “z/OS Networking Dashboard” on page 127
z/OS SYSLOG Insight Pack | z/OS SYSLOG | zOS-SYSLOG-Console, zOS-SYSLOG-SDSF | “SYSLOG for z/OS Time Comparison Dashboard” on page 133; “SYSLOG for z/OS Dashboard” on page 132
z/OS SYSLOG Insight Pack | USS Syslogd Admin, USS Syslogd Debug, USS Syslogd Error | zOS-syslogd | None
Security data

Table 10. Analyzing security data: required Insight Packs and associated data stream types, data source types, and dashboards

Required Insight Pack | Data stream types | Data source types | Custom Search Dashboard applications
z/OS SYSLOG Insight Pack | z/OS SYSLOG | zOS-SYSLOG-Console, zOS-SYSLOG-SDSF | “Security for z/OS Dashboard” on page 136
z/OS SMF Insight Pack | SMF80_COMMAND, SMF80_LOGON, SMF80_OMVS_RES_1, SMF80_OMVS_RES_2, SMF80_OMVS_SEC_1, SMF80_OMVS_SEC_2, SMF80_OPERATION, SMF80_RESOURCE | zOS-SMF80 (RACF) | “z/OS SMF Custom Search Dashboard applications” on page 128
SMF data

Table 11. Analyzing SMF data: required Insight Packs and associated data stream types, data source types, and dashboards

Required Insight Pack | Data stream types | Data source types | Dashboard applications
z/OS SMF Insight Pack | SMF30 | zOS-SMF30 (z/OS job performance) | “z/OS SMF Custom Search Dashboard applications” on page 128
z/OS SMF Insight Pack | SMF80_COMMAND, SMF80_LOGON, SMF80_OMVS_RES_1, SMF80_OMVS_RES_2, SMF80_OMVS_SEC_1, SMF80_OMVS_SEC_2, SMF80_OPERATION, SMF80_RESOURCE | zOS-SMF80 (RACF) | “z/OS SMF Custom Search Dashboard applications” on page 128
z/OS SMF Insight Pack | SMF110_E | zOS-SMF110_E (CICS Transaction Server for z/OS) | “z/OS SMF Custom Search Dashboard applications” on page 128
z/OS SMF Insight Pack | SMF110_S_10 | zOS-SMF110_S_10 (CICS Transaction Server for z/OS) | “z/OS SMF Custom Search Dashboard applications” on page 128
z/OS SMF Insight Pack | WASACT_REQAPPL, WASACT_REQCONT | zOS-SMF120 (WebSphere Application Server for z/OS) | “z/OS SMF Custom Search Dashboard applications” on page 128
z/OS SYSLOG Insight Pack | z/OS SYSLOG | zOS-SYSLOG-Console, zOS-SYSLOG-SDSF | None
WebSphere Application Server for z/OS data

Table 12. Analyzing WebSphere Application Server for z/OS data: required Insight Packs and associated data stream types, data source types, and dashboards

Required Insight Pack | Data stream types | Data source types | Dashboard applications
WebSphere Application Server for z/OS Insight Pack | WebSphere SYSOUT | zOS-WAS-SYSOUT | “WebSphere Application Server for z/OS Custom Search Dashboard applications” on page 126
WebSphere Application Server for z/OS Insight Pack | WebSphere SYSPRINT, WebSphere USS Sysprint | zOS-WAS-SYSPRINT | “WebSphere Application Server for z/OS Custom Search Dashboard applications” on page 126
WebSphere Application Server for z/OS Insight Pack | WebSphere HPEL | zOS-WAS-HPEL | “WebSphere Application Server for z/OS Custom Search Dashboard applications” on page 126
z/OS SMF Insight Pack | WASACT_REQAPPL, WASACT_REQCONT | zOS-SMF120 | “z/OS SMF Custom Search Dashboard applications” on page 128
z/OS system log (SYSLOG) data

Table 13. Analyzing z/OS SYSLOG data: required Insight Packs and associated data stream types, data source types, and dashboards

Required Insight Pack | Data stream types | Data source types | Dashboard applications
z/OS SYSLOG Insight Pack | z/OS SYSLOG | zOS-SYSLOG-Console, zOS-SYSLOG-SDSF | “SYSLOG for z/OS Time Comparison Dashboard” on page 133; “SYSLOG for z/OS Dashboard” on page 132
Determination of time zone information for z/OS log records

IBM Operations Analytics - Log Analysis determines time zone information for z/OS log records. The time zone information varies depending on the source of the log data.

If Log Analysis cannot determine the time zone information for each log record, it might not identify the correct relative placement in time for log records from different sources.

The following information describes how the time zone is determined for z/OS log records, depending on the source of the log data:
z/OS SYSLOG data
    The time stamps for z/OS SYSLOG messages include time zone information.

CICS Transaction Server for z/OS log data
    The time stamps for CICS Transaction Server for z/OS EYULOG and MSGUSR log messages do not include time zone information. These time stamps are based on the local z/OS system time zone.

NetView for z/OS messages
    The time stamps for the NetView for z/OS messages that are provided by the NetView message provider do not include time zone information. These time stamps are based on Coordinated Universal Time (UTC).

SMF data
    The time stamps for SMF messages do not include time zone information. These time stamps are based on the local z/OS system time zone.

UNIX System Services system log (syslogd) messages
    The time stamps for syslogd messages do not include time zone information. These time stamps are based on the local z/OS system time zone.
WebSphere Application Server for z/OS log data
    The time stamps for the WebSphere Application Server for z/OS log messages do not include time zone information, with the following exceptions:
    v WebSphere Application Server for z/OS log message data that is produced in distributed format contains time stamps with time zone information.
    v WebSphere Application Server for z/OS log message data that is retrieved from High Performance Extensible Logging (HPEL) contains time stamps with time zone information.
By default, time stamps in the WebSphere Application Server for z/OS logs are based on UTC. However, if the WebSphere Application Server for z/OS variable ras_time_local is set to 1, time stamps are based on the local z/OS system time zone. WebSphere Application Server for z/OS variables can be set at the cell, cluster, node, or server scope level.

For each WebSphere Application Server for z/OS data set that is written to a JES job log or z/OS UNIX log file, determine whether the time stamps in the log data are based on the local z/OS system time zone or on UTC.
Requirements for enabling the generation of SMF data at the source

In addition to configuring the IBM Common Data Provider for z Systems System Data Engine to monitor and stream System Management Facilities (SMF) data, you must enable SMF data to be generated at its source.
SMF 30 data generation
SMF record type 30 data is job performance data for z/OS software, which is based on accounting data.

To enable the generation of SMF record type 30 data, the only requirement is to include the SMF 30 record type in the single SMF log stream that the System Data Engine processes.
SMF 80 data generation
SMF record type 80 data is produced during Resource Access Control Facility (RACF) processing.

To enable the generation of SMF record type 80 data, you must include the SMF 80 record type in the single SMF log stream that the System Data Engine processes. RACF must also be installed, active, and configured to protect resources.

For information about the subset of SMF record type 80 data that the System Data Engine collects, see “SMF type 80-related records that the System Data Engine creates” on page 143.
SMF also records information that is gathered by RACF auditing. By using various RACF options, you can regulate the granularity of SMF record type 80 data that is collected. In the IBM Knowledge Center, see the following information from the z/OS documentation:
v Information about the following options of the SETROPTS LOGOPTIONS command, through which you can control auditing:
  – DIRSRCH
  – DIRACC
  – FSOBJ
  – FSSEC
v Examples for setting audit controls by using SETROPTS
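For illustration only, auditing for some of these classes might be switched on with commands of the following general form. This is a sketch; confirm the exact operands against the RACF SETROPTS documentation before use:

```text
SETROPTS LOGOPTIONS(ALWAYS(FSOBJ FSSEC))
SETROPTS LOGOPTIONS(FAILURES(DIRACC DIRSRCH))
```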
Before you enable RACF log options, consider the impact in your environment. For example, enabling RACF log options can result in the following consequences:
v An increase in the amount of disk space that is used for logging
v An increase in the network activity that is required to transmit SMF data to the IBM Operations Analytics - Log Analysis server
SMF 110 data generation
The System Data Engine collects only a subset of the SMF data that is generated by CICS Transaction Server for z/OS. It collects the following data from SMF record type 110:
v Monitoring exceptions data for CICS Transaction Server for z/OS from SMF type 110 subtype 1 records, with a class where data = 4
v Global transaction manager statistics data for CICS Transaction Server for z/OS from SMF type 110 subtype 2 records, with a class where STID = 10
To enable the generation of SMF record type 110 data, you must include the SMF 110 record type in the single SMF log stream that the System Data Engine processes. You must also define the following CICS Transaction Server for z/OS initialization parameters in the SYSIN data set of the CICS startup job stream:

STATRCD=ON,       Interval statistics recording
STATINT=001000,   Interval definition
MN=ON,            Turn monitoring on or off
MNEXC=ON,         Exceptions monitoring
MNRES=ON,         Resource monitoring

For more information about enabling the generation of SMF record type 110 data, see Specifying system initialization parameters before startup in the CICS Transaction Server for z/OS Version 5.3 documentation.

The System Data Engine creates the following record types as it extracts the relevant data from SMF type 110 records:
v zOS-SMF110_E for monitoring exceptions data
v zOS-SMF110_S_10 for global transaction manager statistics data
The monitoring exceptions records contain information about CICS Transaction Server for z/OS resource shortages that occur during a transaction, such as queuing for file strings and waiting for temporary storage. This data highlights possible problems in CICS system operation. It can help you identify system constraints that affect the performance of your transactions. CICS writes one exception record for each exception condition that occurs.

The global transaction manager records contain transaction summary information for CICS Transaction Server for z/OS. This data can give you a more holistic view of the CICS region, including a comparison of the current and peak numbers of transactions that are running in the region, and the maximum number of allowed transactions.
SMF 120 data generation
The System Data Engine collects only a subset of the SMF data that is generated by WebSphere Application Server for z/OS. It collects performance data from SMF record type 120 subtype 9. The default SMF type 120 subtype 9 record contains information for properly monitoring the performance of your EJB components and web applications.
Tip: This data does not include data for the WebSphere Liberty server.
To enable the generation of SMF record type 120 data, you must include the SMF 120 record type in the single SMF log stream that the System Data Engine processes. Also, for each application server instance that you want to monitor, you must specify properties for SMF data collection by setting WebSphere Application Server for z/OS environment variables from the WebSphere Application Server Administrative Console. For more information about enabling the generation of SMF record type 120 data, see Using the administrative console to enable properties for specific SMF record types in the WebSphere Application Server for z/OS Version 9.0 documentation.

The System Data Engine creates the following record types as it extracts the performance data from SMF type 120 subtype 9 records:
v SMF120_REQAPPL for WebSphere application records
v SMF120_REQCONT for WebSphere controller records

The SMF type 120 subtype 9 record contains information about the activity of the WebSphere server and the hosted applications. This record is produced whenever a server receives a request. When you do capacity planning, consider the costs that are involved in running requests and the number of requests that you process during a specific time. You can use the SMF type 120 subtype 9 record to monitor which requests are associated with which applications, the number of requests that occur, and the amount of resource that each request uses. You can also use this record to identify the applications that are involved and the amount of CPU time that the requests use.

As part of planning to collect SMF 120 data, consider the disk space requirements for storing the data and the increase in network activity that is required to transmit SMF data to the IBM Operations Analytics - Log Analysis server.

To reduce any system performance degradation due to data collection and to improve the usability of the data, the System Data Engine aggregates the SMF activity records in 1-minute collection intervals by default. Ensure that the collection interval is an integral factor of the SMF global recording interval, as measured in minutes, so that data collection is synchronized. For example, a 1-, 3-, or 5-minute collection interval is an integral factor of a typical 15-minute SMF global recording interval, but a 4-minute collection interval is not. The SMF global recording interval INTERVAL(nn) is defined in the SMFPRMxx member of SYS1.PARMLIB (or its equivalent).
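The integral-factor rule can be sanity-checked with simple shell arithmetic. A small sketch, assuming the typical 15-minute SMF global recording interval mentioned above:

```shell
# A collection interval (minutes) is synchronized with the SMF global
# recording interval only if it divides that interval with no remainder.
smf_interval=15
for n in 1 3 4 5; do
  if [ $((smf_interval % n)) -eq 0 ]; then
    echo "$n-minute collection interval: integral factor of $smf_interval (synchronized)"
  else
    echo "$n-minute collection interval: not an integral factor (avoid)"
  fi
done
```

Running the sketch confirms the example in the text: 1, 3, and 5 are integral factors of 15, but 4 is not.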
Requirements for configuring SMF data streams

After you install and configure IBM Common Data Provider for z Systems, you must complete some other configuration steps that are related to defining System Management Facilities (SMF) data streams for IBM Operations Analytics for z Systems.
Enable the configuration tool to support SMF data
Complete the step that is described in this section after the postinstallation customization of the configuration tool for the IBM Common Data Provider for z Systems is complete, but before you define any SMF data streams for IBM Operations Analytics for z Systems.
You must enable the configuration tool of the IBM Common Data Provider for z Systems to support SMF data for IBM Operations Analytics for z Systems. To do this, you must copy the glasmf.streams.json file from the installation directory for IBM Operations Analytics for z Systems to the directory in IBM Common Data Provider for z Systems where policies and other configuration files are stored. This directory is defined by the variable configPath in the file LIB/cdpConfig.json, which is in the installation directory for the configuration tool of IBM Common Data Provider for z Systems. The default path for this directory is /etc/cdpConfig.

From UNIX System Services, use the following command to copy the configuration file, but update the path for the IBM Operations Analytics for z Systems installation directory for your environment:

cp /usr/lpp/IBM/zscala/V3R1/samples/glasmf.streams.json /etc/cdpConfig/

For more information about postinstallation customization of the IBM Common Data Provider for z Systems configuration tool, see the IBM Common Data Provider for z Systems V1.1.0 documentation.
Specify the necessary parameters in the configuration tool
To configure an SMF data stream for IBM Operations Analytics for z Systems, you must specify the following parameters, with the following values, in the configuration tool for IBM Common Data Provider for z Systems:

IOAz Concatenation
    Value: SGLASAMP data set installed with IBM Operations Analytics for z Systems

CDP Concatenation
    Value: SHBODEFS data set installed with IBM Common Data Provider for z Systems
For more information about configuring the System Data Engine, see the IBMCommon Data Provider for z Systems V1.1.0 documentation.
Requirements for gathering WebSphere Application Server for z/OS log data

If you plan to gather log data for WebSphere Application Server for z/OS, you must determine the application servers from which to gather log data.

For each of the application servers, you must then determine where to retrieve the log data.

If the application server is configured to use High Performance Extensible Logging (HPEL) mode, a best practice is to retrieve the log data by using the HPEL API.

If the application server is configured to use basic logging, the log data is retrieved from JES job logs, z/OS UNIX log files, or both, depending on how the server is configured.
If the application server is configured to use HPEL mode
For each application server that is configured to use HPEL mode, complete the following steps:
Planning for installation and configuration 27
1. Use the WebSphere Integrated Solutions Console to determine the HPEL logging and trace directories. Logging and trace information is typically in the same directory, but it can be configured to be in different directories.
2. Determine whether logging data only, trace data only, or both, is gathered. Logging data includes data from the java.util.logging package (the level DETAIL and higher), the System.out stream, and the System.err stream. Trace data includes data from the java.util.logging package (the level DETAIL and lower).
3. Ensure that the user ID that is associated with the IBM Common Data Provider for z Systems procedure is authorized to read the HPEL logging and trace directories and files.
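One way to verify the authorization in step 3 is to run a read check while logged in as the user ID in question. This sketch creates a temporary stand-in for the HPEL logging and trace directories so that it is self-contained; point it at the real directories, and run it under the IBM Common Data Provider for z Systems user ID. The file names are illustrative.

```shell
# Temporary stand-in for the HPEL logging/trace directories; replace
# with the real directories and run as the procedure's user ID.
HPEL_DIR=$(mktemp -d)
touch "$HPEL_DIR/logdata.sample" "$HPEL_DIR/tracedata.sample"

# Check read access on the directory itself and every file in it
unreadable=0
for f in "$HPEL_DIR" "$HPEL_DIR"/*; do
  if [ ! -r "$f" ]; then
    echo "cannot read: $f"
    unreadable=1
  fi
done
if [ "$unreadable" -eq 0 ]; then
  echo "all HPEL paths are readable"
fi
```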
If the application server is configured to log to JES job logs
For each application server that is configured to log to JES job logs, complete the following steps:
1. Determine which regions of the application server to gather log data from.

   Region              Focus area
   Controller region   Inbound and outbound communication, security, and transaction control
   Servant region      Most of the application server components
   Adjunct region      Internal messaging
2. For each application server region, determine the job name.
   Region              Typical job name
   Controller region   Server short name
   Servant region      Job name for the controller region with an “S” appended
   Adjunct region      Job name for the controller region with an “A” appended

3. For each application server region, determine whether to gather SYSOUT data, SYSPRINT data, or both types of data. SYSOUT data includes Java™ logs (non-trace levels) and native message logs. SYSPRINT data includes Java logs (with trace levels) and native trace.
If the application server is configured to log to z/OS UNIX log files

For each application server that is configured to log to z/OS UNIX log files, complete the following steps:
1. Determine which regions of the application server to gather log data from.

   Region              Focus area
   Controller region   Inbound and outbound communication, security, and transaction control
   Servant region      Most of the application server components
   Adjunct region      Internal messaging
2. For each application server region, determine whether to gather SYSOUT data, SYSPRINT data, or both types of data. SYSOUT data includes Java logs (non-trace levels) and native message logs. SYSPRINT data includes Java logs (with trace levels) and native trace.
3. Determine the file path of each z/OS UNIX log file to be gathered.
4. For each z/OS UNIX file to be gathered, determine whether the path name is constant or varies. The path name varies in the following situations:
   v When date and time substitution is used in the file name that is specified in the data definition (DD) statement. The use of date and time substitution causes a new log file to be created for each server instance.
   v When the WebSphere environment variable redirect_server_output_dir is used to redirect output to files. The use of this variable causes a new log file to be created for each server instance. It also gives you the capability to use the ROLL_LOGS parameter of the modify command to create a new set of log files.
   For more information about file paths for rolling logs, see the IBM Common Data Provider for z Systems V1.1.0 documentation.
5. Ensure that the user ID that is associated with the z/OS Log Forwarder procedure is authorized to read the z/OS UNIX log files.
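When the path name varies (step 4), you can enumerate the current set of rolling log files with a shell pattern rather than a constant path. The directory and file names in the following sketch are illustrative stand-ins, so it runs anywhere; substitute your server output directory and naming pattern.

```shell
# Stand-in for a server output directory in which date/time substitution
# has produced one log file per server instance; names are illustrative.
OUT_DIR=$(mktemp -d)
touch "$OUT_DIR/SYSPRINT.2016-04-27.log" "$OUT_DIR/SYSPRINT.2016-04-28.log"

# Count the files that the pattern currently matches
count=0
for f in "$OUT_DIR"/SYSPRINT.*.log; do
  [ -e "$f" ] || continue   # skip the literal pattern if nothing matches
  count=$((count + 1))
done
echo "found $count rolling SYSPRINT logs"
```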
Planning for configuration of Logstash

You must install and configure at least one Logstash instance to receive z/OS data from one or more IBM Common Data Provider for z Systems instances and to forward this data to the IBM Operations Analytics - Log Analysis server.
About this task
You can install Logstash on the same system as the Log Analysis server or on a separate server.

A self-extracting installer installs Logstash and the ioaz Logstash output plugin and updates the Logstash configuration file with the values that are provided during the installation. You can use that Logstash configuration, or you can set up a more advanced Logstash configuration.
Procedure
Plan to use either the basic Logstash configuration that results from running the self-extracting installer, or set up your configuration as described in one of the following two examples:

Basic Logstash configuration
   Use the single Logstash instance that is configured by the self-extracting installer. This configuration is a good way to start.

Example 1 of an advanced Logstash configuration
   Use the following two different Logstash instances:

   Receiver instance
      A Logstash instance that receives data from IBM Common Data Provider for z Systems.

   Sender instance
      A Logstash instance that forwards data to the Log Analysis server.

   With this configuration, you can route the data through a message broker such as Apache Kafka.

Example 2 of an advanced Logstash configuration
   For each z/OS sysplex or other grouping of LPARs, use a different Logstash instance. With this configuration, each Logstash instance processes a smaller volume of data.
Important: Regardless of the Logstash configuration that you use, the data that is read by IBM Common Data Provider for z Systems for each data stream must arrive at the Log Analysis server in the order in which it was read. Otherwise, partial records might be incorrectly combined in the Log Analysis server.
Creation of data sources

In the IBM Common Data Provider for z Systems configuration for each LPAR, you define a data stream for each separate source of z/OS log data or SMF data.

For each data stream that you define, a corresponding data source definition must exist in IBM Operations Analytics - Log Analysis. A data source is metadata about log data that enables the log data to be ingested for analysis. The data source includes, for example, the log type, the origin of the log, and an annotation function that enables the content to be more easily searched, filtered, and used to create visualizations in dashboards.

Important: Data sources are not predefined. Before you can use an Insight Pack, at least one data source must be defined.

To create a data source, you must have the following information:
v A name that uniquely identifies the data source
v The data source type, as it is specified in the respective Insight Pack configuration artifact
You can use either of the following two methods to create data sources:
Automatic creation of data sources
   The preferred method for creating data sources is to define data source names in the data stream definitions in the IBM Common Data Provider for z Systems configuration. These data source names, together with the configured data source types, are sent to Logstash as part of each Logstash event.

   Each time that the ioaz Logstash output plugin is started, it creates data sources as needed when it gets data for a data stream. Before the ioaz Logstash output plugin sends data for a data stream, it verifies that the appropriate data source exists in IBM Operations Analytics - Log Analysis. If a data source does not exist, the ioaz Logstash output plugin uses the data source name and data source type in the Logstash event to create the data source automatically.

Manual creation of data sources by using the Log Analysis Data Sources workspace
   Before you start IBM Common Data Provider for z Systems for the first time, manually configure the data sources by using the Log Analysis Data Sources workspace.
Tip:
v The host name of the Log Analysis data source must match the fully qualified domain name of the source system.
v The file path must match the filePath value that is specified in the data stream definition.
Data source types

The configuration artifacts that are provided with each z/OS Insight Pack include data source types. A log file splitter and a log record annotator are provided for each type of data source.

Log file splitters
   The log file splitters split data into records.

Log record annotators
   The annotators annotate fields in the log records so that data in those fields can be more easily searched, filtered, and used to create visualizations in dashboards. Each field corresponds to part of a log record. The fields are defined in the index configuration file, and each field is assigned index configuration attributes.

   The annotated fields are displayed in the IBM Operations Analytics - Log Analysis Search workspace and can be used to filter or search the log records.

Related reference:
“WebSphere Application Server for z/OS data source types” on page 87
   The WebSphere Application Server for z/OS Insight Pack includes support for ingesting, and performing metadata searches against, the following types of data sources: zOS-WAS-SYSOUT, zOS-WAS-SYSPRINT, and zOS-WAS-HPEL.
“z/OS Network data source types” on page 96
   The z/OS Network Insight Pack includes support for ingesting, and performing metadata searches against, the zOS-NetView type of data source.
“z/OS SMF data source types” on page 99
   The z/OS SMF Insight Pack includes support for ingesting, and performing metadata searches against, the following types of data sources for System Management Facilities (SMF) data: zOS-SMF30, zOS-SMF80, zOS-SMF110_E, zOS-SMF110_S_10, and zOS-SMF120.
“z/OS SYSLOG data source types” on page 111
   The z/OS SYSLOG Insight Pack includes support for ingesting, and performing metadata searches against, the following types of data sources: zOS-SYSLOG-Console, zOS-SYSLOG-SDSF, zOS-syslogd, the three variations of zOS-CICS-MSGUSR, the three variations of zOS-CICS-EYULOG, and zOS-Anomaly-Interval.
Installing Operations Analytics for z Systems™
As part of installing Operations Analytics for z Systems, you install IBM Operations Analytics - Log Analysis, the z/OS Insight Packs, Logstash and the ioaz Logstash output plugin, and the IBM Common Data Provider for z Systems.
About this task
You must install the following software:
1. Log Analysis
2. z/OS Insight Packs
   The following software is installed as part of installing the z/OS Insight Packs:
   v Log Analysis extensions for Problem Insights and client-side Expert Advice
   v IBM zAware data gatherer
3. Logstash, including the ioaz Logstash output plugin
4. IBM Common Data Provider for z Systems

For information about installing IBM Common Data Provider for z Systems, see the IBM Common Data Provider for z Systems V1.1.0 documentation.

Tip: To verify that your version of Operations Analytics for z Systems is the latest available version, check for updates on the IBM Software Support site.
Installing Log Analysis

IBM Operations Analytics for z Systems can be used with an existing instance of IBM Operations Analytics - Log Analysis Version 1.3.3. The IBM Operations Analytics for z Systems Version 3.1.0 package includes IBM Operations Analytics - Log Analysis Version 1.3.3 Standard Edition. You must also install Log Analysis Version 1.3.3 Fix Pack 1.
Before you begin
Tip: To get Log Analysis Version 1.3.3 Fix Pack 1 (1.3.3.1), complete the following steps:
1. Go to IBM Fix Central.
2. In the Product selector field, start typing IBM Operations Analytics - Log Analysis, and when the correct product name is shown in the resulting list, select it. More fields are then shown.
3. In the Installed Version field, select 1.3.3.
4. In the Platform field, select All.
5. Click Continue.
6. In the resulting “Identify fixes” window, select Browse for fixes, and click Continue.
7. In the “Select fixes” window, you should see the fix pack 1.3.3-TIV-IOALA-FP001, which you can select and download. For installation instructions, see the readme file.

Complete the planning task that is described in “Planning for installation of Log Analysis” on page 16.
© Copyright IBM Corp. 2014, 2016 33
The Log Analysis installation media is included in the Operations Analytics for z Systems offering on the DVDs with the following labels:

IBM Operations Analytics - Log Analysis 1.3 Linux 64 bit (LCD8-2724)
IBM Operations Analytics - Log Analysis 1.3 Linux on System z 64 bit (LCD7-6059)
IBM Operations Analytics - Log Analysis 1.3 Linux on Power 8 64 bit (LCD8-2737)
For more information about installing Log Analysis, see Installing in the Log Analysis documentation.

Procedure
1. Insert the DVD media into the DVD drive of (or mount the corresponding ISO image on) the Linux system that you plan to use as the Log Analysis server. If the DVD or image is not mounted automatically, mount it by using one of the utilities that are provided with the Linux operating system.
2. In a terminal window, change to the DVD or image directory by issuing the following command, where media_mountpoint is the directory in which the media is mounted:
   cd media_mountpoint
3. Run the installation process in either graphical or console mode, and provide values as necessary.

   Graphical mode
      Run the install.sh script.
   Console mode
      Run the install.sh script with the -c option, as shown in the following example:
      ./install.sh -c
4. Wait for installation to complete.
   When the installation is complete, the Log Analysis server starts automatically.
5. To verify the status of the Log Analysis server, issue the following command:
   LA_INSTALL_DIR/utilities/unity.sh -status

   The following example shows sample output of this command:

   Mon April 27 09:43:13 EDT 2016
   IBM Operations Analytics - Log Analysis v1.3.3.1 STANDARD EDITION

   Application Services Status:
   ---------------------------------------------------------
   No.  Service                    Status  Process ID
   ---------------------------------------------------------
   1    Derby Network Server       UP      26279
   2    ZooKeeper                  UP      26317
   3    Websphere Liberty Profile  UP      26440
   4    EIF Receiver               UP      26579
   ---------------------------------------------------------
   Getting status of Solr on myhost.mydomain.com
   Status of Solr Nodes:
   -----------------------------------------------------------------
   No.  Instance Name    Host                 Status  State
   -----------------------------------------------------------------
   1    SOLR_NODE_LOCAL  myhost.mydomain.com  UP      ACTIVE
   -----------------------------------------------------------------
   All Application Services are in Running State
   Checking server initialization status: Server has initialized!

Depending on the components that are installed, different services are displayed in the output for this command. Verify that all services display an UP status. Also, verify that the last message in the output indicates that the server has initialized.
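If you script this verification, one approach is to scan the status output for any service line that does not report UP. In the following sketch, a shell variable stands in for the real output of LA_INSTALL_DIR/utilities/unity.sh -status so that the sketch is self-contained; in practice you would capture the command's output instead.

```shell
# Stand-in for the service table printed by unity.sh -status; in a real
# script, capture the command output with:
#   status_output=$(LA_INSTALL_DIR/utilities/unity.sh -status | grep ' UP \| DOWN ')
status_output='1 Derby Network Server UP 26279
2 ZooKeeper UP 26317
3 Websphere Liberty Profile UP 26440
4 EIF Receiver UP 26579'

# Count lines that do not contain " UP "
down_count=$(printf '%s\n' "$status_output" | grep -vc ' UP ')
if [ "$down_count" -eq 0 ]; then
  echo "all services UP"
else
  echo "$down_count service(s) not UP"
fi
```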
Installing the z/OS Insight Packs, extensions, and IBM zAware data gatherer

For IBM Operations Analytics - Log Analysis to provide insight about z/OS operational data, you must install the IBM Operations Analytics for z Systems components for Log Analysis, including the z/OS Insight Packs, the extensions for Problem Insights and client-side Expert Advice, and the IBM zAware data gatherer.
Before you begin
IBM Operations Analytics - Log Analysis Version 1.3.3 Fix Pack 1 (1.3.3.1) must be installed. For more information, see “Installing Log Analysis” on page 33.

For more information about the z/OS Insight Packs, the extensions, and the IBM zAware data gatherer, see the following topics:
v “z/OS Insight Packs” on page 7
v “Log Analysis extensions for z/OS Problem Insights and client-side Expert Advice” on page 9
v “IBM zAware data gatherer” on page 12

Each Insight Pack includes optional sample searches (sometimes called Quick Search samples) to help you find common errors in the software products that the Insight Pack supports.

The installation packages are on the product DVD with the following label:
IBM Operations Analytics for z Systems (LCD7-6544)

When you install the z/OS Insight Packs and extensions, Log Analysis must be running, and you must be logged in to the Linux computer system with the non-root user ID that was used to install Log Analysis.
About this task
The self-extracting installer file ioaz_install.run simplifies the installation of the Insight Packs with sample searches and the extensions for Problem Insights and client-side Expert Advice. All four z/OS Insight Packs and their sample searches are installed. The IBM zAware data gatherer is also installed.

To install the Insight Packs, the installer uses the pkg_mgmt.sh command with the -upgrade option. If previous versions of the z/OS Insight Packs are installed on the system, the installer upgrades those Insight Packs as part of the installation process. For information about the pkg_mgmt.sh command, see pkg_mgmt.sh command in the Log Analysis documentation.

To install the Insight Packs, extensions, sample searches, and the IBM zAware data gatherer, the installer needs the directory where Log Analysis is installed (LA_INSTALL_DIR) and the user name and password for logging in to Log Analysis.

The installer prompts you to specify whether you want to configure the IBM zAware data gatherer. If you choose to configure this data gatherer at installation time, you are then prompted for other parameters that are needed to complete the configuration. If you choose not to configure it at installation time, you can configure this data gatherer later from the command line by using the zAwareDataGathererConfig.py script. For more information about this configuration, see “Configuring the IBM zAware data gatherer” on page 57.
Important: During the installation of the Problem Insights and client-side Expert Advice extensions with the installer file, the Log Analysis server is stopped and restarted.
Procedure
1. Insert the Insight Pack DVD media into the DVD drive of the Log Analysis server. If the DVD is not mounted automatically, mount it by using one of the utilities that are provided with the Linux operating system.

   Restriction: If you obtained the z/OS Insight Packs and extensions as a .tar file from IBM Shopz or IBM Fix Central, unpack the .tar file into a temporary directory on the target computer, and complete the remaining steps in this procedure.
2. Run the command sh ioaz_install.run, and specify the requested information.
If you try to install with an incorrect user ID: Remember that to install the Insight Packs and extensions, you must be logged in to the Linux computer system with the non-root user ID that was used to install Log Analysis.

If you try to install these with some other user ID, and then rerun the installer with the correct user ID, the installer might stop with the following message:

Creating directory /tmp/install_ioaz
Verifying archive integrity... All good.
Uncompressing Install Extensions and Insight Packs  100%
Extraction failed.
Terminated

To resolve this error, remove the directory and log file by running the following commands:
rm -rf /tmp/install_ioaz
rm /tmp/ioaz_install.log
Related reference:
“Sample searches” on page 136
   If you installed any of the optional sample searches (sometimes called Quick Search samples) in IBM Operations Analytics - Log Analysis, they are available in the zos folder in the Saved Searches navigator of the Log Analysis user interface.
Installing Logstash and the ioaz Logstash output plugin

To route data from the IBM Common Data Provider for z Systems to the IBM Operations Analytics - Log Analysis server, you must install the ioaz Logstash output plugin with Logstash 2.3.4. The Logstash 2.3.4 version that is provided with IBM Operations Analytics for z Systems is optimized for use with Linux on z Systems.
Before you begin
Review the following information, and verify that the system where you plan to install Logstash meets the Logstash system requirements:
v “Logstash and the ioaz Logstash output plugin” on page 12
v Logstash system requirements
v “Planning for configuration of Logstash” on page 29
About this task
To install Logstash and the ioaz Logstash output plugin, run the self-extracting installer file logstash_install.run on the target system. For the installation of Logstash, you do not need administrator privileges, and you can use a non-root user ID.
During the installation, you are prompted for the following information:
Logstash installation directory
   The directory where you want to install Logstash. You must have system permissions to create this directory, and adequate free space must be available on the file system for the installation.

Log Analysis server name
   The fully qualified domain name of the IBM Operations Analytics - Log Analysis server that the ioaz Logstash output plugin connects to.
   If you use the default value of localhost, the Log Analysis server must be running on the same system where you install Logstash.

Log Analysis server port number
   The port number that is used by the ioaz Logstash output plugin to communicate with the Log Analysis server. The default value is 9987.

Log Analysis user name
   The user name that the ioaz Logstash output plugin uses to log in to the Log Analysis server. The default value is unityadmin.

Password for Log Analysis user name
   The password for the Log Analysis user name that the ioaz Logstash output plugin uses to log in to the Log Analysis server. The default value is unityadmin.

Plugin cache directory
   The directory that the ioaz Logstash output plugin uses for caching events when it cannot communicate with the Log Analysis server. The default value is LOGSTASH_INSTALL_DIR/cache_dir. Verify that adequate space for the cache is available in the specified cache directory.

Plugin log directory
   The directory to which the ioaz Logstash output plugin writes logs. The default value is LOGSTASH_INSTALL_DIR/logs.

Port on which Logstash listens for z/OS data
   The port on which the ioaz Logstash output plugin listens for data from the IBM Common Data Provider for z Systems. The default value is 8080.

Trust all certificates
   A true or false indication of whether the ioaz Logstash output plugin should trust all security certificates when it sends data to the Log Analysis server. The default value is true.
   A value of false instructs the ioaz Logstash output plugin to compare the security certificate from the Log Analysis server with the certificates that are stored in the cacerts keystore file. Therefore, if you specify a value of false, you must manually import a security certificate for the Log Analysis server into the cacerts keystore file for the Java Runtime Environment (JRE) that Logstash uses. For more information, see “Verifying the identity of the Log Analysis server” on page 61.
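If you plan to specify false for Trust all certificates, the import into the cacerts keystore is done with the Java keytool utility. The following command is only a hedged sketch: the JRE path, certificate file, and alias are illustrative, and the keystore password depends on your JRE (the common default is "changeit"). See “Verifying the identity of the Log Analysis server” on page 61 for the supported procedure.

```
# Illustrative only: adjust the JRE path, certificate file, and alias
# for your environment before running.
keytool -importcert \
    -alias loganalysis-server \
    -file /tmp/loganalysis-server.crt \
    -keystore /opt/IBM/LogAnalysis/ibm-java/jre/lib/security/cacerts \
    -storepass changeit
```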
Procedure
To install Logstash and the ioaz Logstash output plugin, complete the following steps:
1. Gather the information that is described in About this task so that you can specify it during the installation.
2. Transfer the logstash_install.run file to the system where you plan to install Logstash and the ioaz Logstash output plugin.
3. Complete one of the following actions to give the self-extracting installer file logstash_install.run access to the Java Runtime Environment (JRE) that is used for installing and running Logstash:
   v Set the JAVA_HOME environment variable to the JRE location.
   v Add the fully qualified path of the JRE executable to the PATH environment variable.
   For example, if Log Analysis is installed in the /opt/IBM/LogAnalysis directory on the system where you plan to install Logstash and the plugin, and you want to use the JRE 8 that is packaged with Log Analysis, issue one of the following commands:
   To set JAVA_HOME
      export JAVA_HOME=/opt/IBM/LogAnalysis/ibm-java
   To add the JRE path to PATH
      export PATH=/opt/IBM/LogAnalysis/ibm-java/bin:$PATH
4. Issue the following command to run the installer:
   sh logstash_install.run
Results
After Logstash and the ioaz Logstash output plugin are installed, the Logstash configuration file is updated with the values that were provided during the installation, and Logstash is started. The Logstash configuration file is LOGSTASH_INSTALL_DIR/config/logstash-ioaz.conf, and you can update it by using a text editor.

You can start and stop the ioaz Logstash output plugin by using the LOGSTASH_INSTALL_DIR/bin/logstash_util.sh script.

Related reference:
“Configuration options for the ioaz Logstash output plugin” on page 85
   The ioaz Logstash output plugin, which is provided by IBM Operations Analytics for z Systems, forwards event data from the IBM Common Data Provider for z Systems to the IBM Operations Analytics - Log Analysis server. This reference lists the configuration options that can be set for the plugin in the Logstash configuration file, which is LOGSTASH_INSTALL_DIR/config/logstash-ioaz.conf. You can update this file by using a text editor.
Uninstalling the Insight Packs

When you uninstall an Insight Pack, all data sources that are associated with the Insight Pack are deleted. If ingested data exists for any data source types that are defined by the Insight Pack, you must remove that data before you can uninstall the Insight Pack.
Before you begin
For information about the data source types that are defined by each Insight Pack, see the following information:

Insight Pack                                         Data source types
WebSphere Application Server for z/OS Insight Pack   “WebSphere Application Server for z/OS data source types” on page 87
z/OS Network Insight Pack                            “z/OS Network data source types” on page 96
z/OS SMF Insight Pack                                “z/OS SMF data source types” on page 99
z/OS SYSLOG Insight Pack                             “z/OS SYSLOG data source types” on page 111
If you are uninstalling the WebSphere Application Server for z/OS Insight Pack: The WASSystemOut data source type is defined by the WebSphere Application Server Insight Pack that is provided with Log Analysis. Data sources that are associated with this data source type are not deleted when you uninstall the WebSphere Application Server for z/OS Insight Pack. You do not have to remove data for the WASSystemOut data source type.
Procedure
To uninstall an Insight Pack, complete the following steps:
1. If ingested data exists for any data source types that are defined by the Insight Pack, remove that data from IBM Operations Analytics - Log Analysis by following the instructions in “Removing data from Log Analysis.”
2. Uninstall the Insight Pack by using the pkg_mgmt.sh command that is provided with Log Analysis. For example, issue one of the following commands, depending on the Insight Pack that you are uninstalling:

   LA_INSTALL_DIR/utilities/pkg_mgmt.sh -uninstall LA_INSTALL_DIR/unity_content/SMFforzOS/SMFforzOSInsightPack_v3.1.0.0.zip

   LA_INSTALL_DIR/utilities/pkg_mgmt.sh -uninstall LA_INSTALL_DIR/unity_content/SYSLOGforzOS/SYSLOGforzOSInsightPack_v3.1.0.0.zip

   LA_INSTALL_DIR/utilities/pkg_mgmt.sh -uninstall LA_INSTALL_DIR/unity_content/WASforzOS/WASforzOSInsightPack_v3.1.0.0.zip

   LA_INSTALL_DIR/utilities/pkg_mgmt.sh -uninstall LA_INSTALL_DIR/unity_content/zOSNetwork/zOSNetworkInsightPack_v3.1.0.0.zip
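The four uninstall commands differ only in the Insight Pack archive name, so a small loop keeps them consistent if you need to remove all of the packs. This sketch only prints the commands (LA_INSTALL_DIR is set to a placeholder path; adjust it, and remove the echo to actually run pkg_mgmt.sh).

```shell
# Print the uninstall command for each z/OS Insight Pack archive.
# LA_INSTALL_DIR below is a placeholder; substitute your Log Analysis
# installation directory, and drop the echo to execute the commands.
LA_INSTALL_DIR=/opt/IBM/LogAnalysis
for pack in SMFforzOS/SMFforzOSInsightPack_v3.1.0.0.zip \
            SYSLOGforzOS/SYSLOGforzOSInsightPack_v3.1.0.0.zip \
            WASforzOS/WASforzOSInsightPack_v3.1.0.0.zip \
            zOSNetwork/zOSNetworkInsightPack_v3.1.0.0.zip; do
  echo "$LA_INSTALL_DIR/utilities/pkg_mgmt.sh -uninstall $LA_INSTALL_DIR/unity_content/$pack"
done
```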
Removing data from Log Analysis

To remove data from IBM Operations Analytics - Log Analysis, use the deletion tool.
Before you begin
Before you remove the data, complete the following steps:
1. Stop each instance of the IBM Common Data Provider for z Systems.
2. Ensure that Log Analysis is running.
About this task
The deletion tool provides the following options (or use cases) for deleting data:
1. Delete all data from a single data source.
2. Delete all data from a single collection.
3. For a specified time period, delete data from all data sources.
4. At regular intervals, delete data that is older than the specified retention period.

For more information about removing data from Log Analysis, see delete.properties.
Procedure
To remove data from Log Analysis by using the deletion tool, complete the following steps:
1. In the LA_INSTALL_DIR/utilities/deleteUtility directory, open the delete.properties file.
2. For the use case that you want to run, specify the use case number and the variables that are associated with that use case. The use cases are summarized in About this task.
3. Save the delete.properties file.
4. Run the following command, where Python_path represents the location where Python is installed, and password represents the password that is defined in the delete.properties file for the associated user name:
   Python_path deleteUtility.py password
Uninstalling Log Analysis

You can uninstall IBM Operations Analytics - Log Analysis by using the IBM Installation Manager.
Before you begin
Before you uninstall Log Analysis, uninstall any remote installations of Apache Solr.
For more information about uninstalling Log Analysis, see Removing Log Analysis.
Procedure
Run the uninstallation process in either graphical or console mode.
Graphical mode
   1. Go to the following directory, where im_install_dir represents the directory where the IBM Installation Manager is installed: im_install_dir/IBM/InstallationManager/eclipse
   2. Run the following command:
      ./launcher

Console mode
   1. Go to the following directory, where im_install_dir represents the directory where the IBM Installation Manager is installed: im_install_dir/IBM/InstallationManager/eclipse/tools
   2. Run the following command:
      ./imcl -c
Upgrading Operations Analytics for z Systems
You can upgrade from IBM Operations Analytics for z Systems Version 2.2.0.1 Interim Feature 2 to Version 3.1.0. IBM Operations Analytics for z Systems Version 3.1.0 includes IBM Operations Analytics - Log Analysis Version 1.3.3 Standard Edition and IBM Common Data Provider for z Systems Version 1.1.0.
Before you begin
If your version of IBM Operations Analytics for z Systems is earlier than V2.2.0.1 Interim Feature 2, see the following table for more information about the upgrades that you must do before you upgrade to V3.1.
For each installed IBM Operations Analytics for z Systems version, upgrade to V2.2.0.1 Interim Feature 2 as follows:

2.2.0.1 or 2.2.0.1 Interim Feature 1
   From IBM Fix Central, obtain the following fix, and apply it according to the installation instructions in the readme file:
   v 2.2.0.1-CSI-IOAz-IF0002

2.2.0.0
   From IBM Fix Central, obtain the following fixes, and apply them according to the installation instructions in the readme files:
   v 2.2.0.0-CSI-IOAz-FP0001
   v 2.2.0.1-CSI-IOAz-IF0002

2.1
   Upgrade to IBM Operations Analytics for z Systems V2.2.0.1 Interim Feature 2 by completing the instructions in Upgrading Operations Analytics for z Systems in the IBM Operations Analytics for z Systems V2.2.0 documentation.
Ensure that RACF definitions are updated with the correct started task names: Before you start the upgrade process, ensure that your RACF definitions are updated with the correct names for the following started tasks:
v The System Data Engine component of IBM Common Data Provider for z Systems is a started task that replaces the SMF real-time data provider component of IBM Operations Analytics for z Systems V2.2.
v The Data Streamer component of IBM Common Data Provider for z Systems is a new started task.
For more information, see Security and authorization prerequisites in the IBM Common Data Provider for z Systems documentation.
About this task
The following information explains the notations that are used in the procedure to indicate the type of system on which the step must be done.

On Linux systems
A Linux system administrator typically does these tasks. The systems can be Linux on System p, System x, or System z, but they are typically Linux on System z systems.
© Copyright IBM Corp. 2014, 2016
On z/OS systems
A z/OS system administrator typically does these tasks.

Across Linux and z/OS systems
The Linux and z/OS system administrators must cooperatively do these tasks that involve both Linux and z/OS systems.
Because IBM Operations Analytics for z Systems remains fully operational after each step (and each sub-step) in this procedure, you can stage the upgrade process over a period of time (for example, hours, days, or even weeks) to allow for system stabilization after each step.
Procedure
To upgrade from IBM Operations Analytics for z Systems Version 2.2.0.1 Interim Feature 2 to Version 3.1.0, complete the following steps:
1. On Linux systems: If you are using IBM Operations Analytics - Log Analysis Version 1.3.2 or earlier, or Version 1.3.3.0, upgrade Log Analysis to Version 1.3.3 with Fix Pack 1 (V1.3.3.1). See “Upgrading to Log Analysis V1.3.3.1” on page 45.
2. On Linux systems: Upgrade the z/OS Insight Packs and extensions. See “Upgrading the z/OS Insight Packs and extensions” on page 47.
3. On Linux systems: Depending on the following characteristics of your configuration, complete one of the following actions:

Characteristics of your configuration: You are using the z/OS Log Forwarder to forward data directly to the Log Analysis server.
Action to take: Install and configure Logstash and the ioaz Logstash output plugin to forward data to the Log Analysis server. For more information, see “Installing Logstash and the ioaz Logstash output plugin” on page 36. Accept the default value of true for Trust all certificates. You can update this value later.

Characteristics of your configuration: You are using the z/OS Log Forwarder to forward data to Logstash, and using Logstash to forward data to the Log Analysis server.
Action to take:
v If either of the following conditions is true, install a new Logstash instance with the ioaz Logstash output plugin as described in “Installing Logstash and the ioaz Logstash output plugin” on page 36:
– Your existing Logstash instance is not Logstash Version 2.2.1 or later
– You want to have your Logstash instance on a Linux on System z system
v Otherwise, you can continue to use your Logstash instance, but replace the scala Logstash output plugin with the ioaz Logstash output plugin. For more information, see “Replacing the scala Logstash output plugin with the ioaz Logstash output plugin” on page 48.
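Before choosing between these actions, you might check the installed Logstash version against the 2.2.1 minimum. The sketch below is illustrative only: `version_ge` is a hypothetical helper name, and the literal version string stands in for the output of `LOGSTASH_INSTALL_DIR/bin/logstash --version`.

```shell
# Hypothetical helper: succeeds when version $1 is >= version $2,
# comparing dotted numeric fields (sufficient for x.y.z version strings).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n -k3,3n | head -n1)" = "$2" ]
}

# Example decision: replace the literal with the version reported by
# your Logstash instance (for example, from `bin/logstash --version`).
installed="2.3.4"
if version_ge "$installed" "2.2.1"; then
  echo "existing instance can be reused"
else
  echo "install a new Logstash instance"
fi
```

The field-by-field numeric sort avoids the usual string-comparison pitfall where, for example, "2.10.0" would sort before "2.2.1".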
4. On z/OS systems: Migrate to IBM Common Data Provider for z Systems to stream z/OS operational data to IBM Operations Analytics for z Systems. See “Migrating to IBM Common Data Provider for z Systems for streaming z/OS operational data” on page 50. This step includes the following sub-steps:
a. Upgrade to using the new z/OS Log Forwarder and the System Data Engine with your existing configurations. See “Upgrading to the new z/OS Log Forwarder and the System Data Engine” on page 51.
b. Migrate to using IBM Common Data Provider for z Systems to gather z/OS log data and SMF data. See “Migrating to use IBM Common Data Provider for z Systems for gathering data” on page 52.
c. If you gather z/OS SMF data, migrate the IBM Common Data Provider for z Systems configuration to route SMF data directly from the System Data Engine to the Data Streamer, rather than through the z/OS Log Forwarder. See “Migrating to route SMF data directly from the System Data Engine to the Data Streamer” on page 54.
5. Across Linux and z/OS systems: Configure a secure data connection for streaming operational data from IBM Common Data Provider for z Systems to Logstash. See “Securing communication between IBM Common Data Provider for z Systems and Logstash” on page 62.
6. On Linux systems: If you installed a new Logstash instance, configure Logstash to verify the identity of the Log Analysis server. See “Verifying the identity of the Log Analysis server” on page 61.
Upgrading to Log Analysis V1.3.3.1

If you are using IBM Operations Analytics - Log Analysis Version 1.3.2 or earlier, or Version 1.3.3.0, you must upgrade to Log Analysis Version 1.3.3 with Fix Pack 1 (V1.3.3.1).
Before you begin
Because a direct upgrade from Log Analysis Version 1.3.0 or 1.3.1 to Version 1.3.3 is not supported, you must first upgrade Log Analysis Version 1.3.0 or 1.3.1 to Version 1.3.2.
See the following table for more information about the upgrade paths:
Log Analysis version that is installed: 1.3.3.0
How to upgrade to Log Analysis V1.3.3.1: For instructions, see “Upgrading from Log Analysis V1.3.3.0 to V1.3.3.1” on page 46.

Log Analysis version that is installed: 1.3.2
How to upgrade to Log Analysis V1.3.3.1: For instructions, see “Upgrading from Log Analysis V1.3.2 to V1.3.3.1” on page 46.

Log Analysis version that is installed: 1.3.0 or 1.3.1
How to upgrade to Log Analysis V1.3.3.1:
1. Upgrade to Log Analysis V1.3.2.
2. Upgrade from Log Analysis V1.3.2 to V1.3.3.1. For instructions, see “Upgrading from Log Analysis V1.3.2 to V1.3.3.1” on page 46.
Obtain Log Analysis V1.3.3.1 before you begin the upgrade procedure that is appropriate to your environment.
Tip: To get Log Analysis Version 1.3.3 Fix Pack 1 (1.3.3.1), complete the following steps:
1. Go to IBM Fix Central.
2. In the Product selector field, start typing IBM Operations Analytics - Log Analysis, and when the correct product name is shown in the resulting list, select it. More fields are then shown.
3. In the Installed Version field, select 1.3.3.
4. In the Platform field, select All.
5. Click Continue.
6. In the resulting “Identify fixes” window, select Browse for fixes, and click Continue.
7. In the “Select fixes” window, you should see the fix pack 1.3.3-TIV-IOALA-FP001, which you can select and download. For installation instructions, see the readme file.
Upgrading from Log Analysis V1.3.2 to V1.3.3.1

Log data, System Management Facilities (SMF) data, and data sources that are associated with the z/OS Insight Packs are retained during the upgrade from IBM Operations Analytics - Log Analysis Version 1.3.2 to Version 1.3.3.1.
Procedure
To upgrade from Log Analysis Version 1.3.2 to Version 1.3.3.1, complete the following steps:
1. Stop each instance of the z/OS Log Forwarder, and in each LPAR from which you gather z/OS SMF data, stop the z/OS SMF real-time data provider.
2. Stop the Log Analysis server.
3. Back up the data in Log Analysis. For information about backing up the data, see Upgrading, backing up, and migrating data in the Version 1.3.3 Log Analysis documentation.
4. Uninstall Log Analysis Version 1.3.2. See “Uninstalling Log Analysis” on page 40.
5. Install IBM Operations Analytics - Log Analysis Version 1.3.3. See “Installing Log Analysis” on page 33.
6. Restore the backed-up data to Log Analysis. For information about restoring the data, see Restoring data in the Version 1.3.3 Log Analysis documentation.
7. Install Fix Pack 1 of Log Analysis Version 1.3.3.
8. Start each instance of the z/OS Log Forwarder, and in each LPAR from which you gather z/OS SMF data, start the z/OS SMF real-time data provider.
9. Ensure that z/OS log data and SMF data (if applicable) are being properly sent to, and ingested by, the Log Analysis server.
Upgrading from Log Analysis V1.3.3.0 to V1.3.3.1

If you are using IBM Operations Analytics - Log Analysis Version 1.3.3.0, you must install Fix Pack 1 (V1.3.3.1).
Procedure
To upgrade from Log Analysis Version 1.3.3.0 to Version 1.3.3.1, complete the following steps:
1. Stop each instance of the z/OS Log Forwarder, and in each LPAR from which you gather z/OS SMF data, stop the z/OS SMF real-time data provider.
2. Install Fix Pack 1 of Log Analysis Version 1.3.3.
3. Start each instance of the z/OS Log Forwarder, and in each LPAR from which you gather z/OS SMF data, start the z/OS SMF real-time data provider.
4. Ensure that z/OS log data and SMF data (if applicable) are being properly sent to, and ingested by, the Log Analysis server.
Upgrading the z/OS Insight Packs and extensions

To get the latest insights about z/OS operational data, you must upgrade several IBM Operations Analytics for z Systems components, including the z/OS Insight Packs and the extensions for Problem Insights and client-side Expert Advice. You must also install the zAware data gatherer.
Before you begin
Your IBM Operations Analytics - Log Analysis version must be Version 1.3.3 with Fix Pack 1 (V1.3.3.1). If this is not your version, upgrade your version according to the instructions in “Upgrading to Log Analysis V1.3.3.1” on page 45.
About this task
The self-extracting installer file ioaz_install.run upgrades the four Insight Packs with sample searches and the extensions for Problem Insights and client-side Expert Advice. If the extensions were previously installed, they are upgraded. If they were not previously installed, they are installed as part of the upgrade process. The zAware data gatherer is also installed.
Procedure
To upgrade the z/OS Insight Packs and the extensions for Problem Insights and client-side Expert Advice, complete the following steps:
1. Stop each instance of the z/OS Log Forwarder, and in each LPAR from which you gather z/OS SMF data, stop the z/OS SMF real-time data provider.
2. Because Custom Search Dashboard applications are reinstalled as part of this upgrade, back up any IBM-provided Custom Search Dashboard applications that you customized. These Custom Search Dashboard applications are in the following directories:

WebSphere Application Server for z/OS Insight Pack directory for dashboard applications
LA_INSTALL_DIR/AppFramework/Apps/WASforzOSInsightPack_v2.2.0.2

z/OS Network Insight Pack directory for dashboard applications
LA_INSTALL_DIR/AppFramework/Apps/zOSNetworkInsightPack_v1.1.0.2

z/OS SMF Insight Pack directory for dashboard applications
LA_INSTALL_DIR/AppFramework/Apps/SMFforzOSInsightPack_v1.1.0.3

z/OS SYSLOG Insight Pack directory for dashboard applications
LA_INSTALL_DIR/AppFramework/Apps/SYSLOGforzOSInsightPack_v2.2.0.2
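The backup in step 2 can be scripted as a minimal sketch like the following; the function name, echo messages, and backup location are illustrative choices, while the application directory names come from the list above.

```shell
# Illustrative backup of the IBM-provided Custom Search Dashboard app
# directories. Arguments: the Log Analysis install directory and a
# backup directory of your choosing.
backup_dashboard_apps() {
  la_dir="$1"
  backup_dir="$2"
  mkdir -p "$backup_dir"
  for app in WASforzOSInsightPack_v2.2.0.2 \
             zOSNetworkInsightPack_v1.1.0.2 \
             SMFforzOSInsightPack_v1.1.0.3 \
             SYSLOGforzOSInsightPack_v2.2.0.2; do
    if [ -d "$la_dir/AppFramework/Apps/$app" ]; then
      # -R copies the directory recursively; -p preserves timestamps/modes
      cp -Rp "$la_dir/AppFramework/Apps/$app" "$backup_dir/"
      echo "backed up $app"
    fi
  done
}
# Usage: backup_dashboard_apps "$LA_INSTALL_DIR" /tmp/dashboard-backup
```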
3. To upgrade the z/OS Insight Packs and extensions, complete the steps for installing the z/OS Insight Packs and extensions, which are described in “Installing the z/OS Insight Packs, extensions, and IBM zAware data gatherer” on page 35.
4. Update the IBM-provided Custom Search Dashboard applications in the following directories to include any custom changes that you made to the previous version of these Apps.

WebSphere Application Server for z/OS Insight Pack directory for dashboard applications
LA_INSTALL_DIR/AppFramework/Apps/WASforzOSInsightPack_v3.1.0.0

z/OS Network Insight Pack directory for dashboard applications
LA_INSTALL_DIR/AppFramework/Apps/zOSNetworkInsightPack_v3.1.0.0

z/OS SMF Insight Pack directory for dashboard applications
LA_INSTALL_DIR/AppFramework/Apps/SMFforzOSInsightPack_v3.1.0.0

z/OS SYSLOG Insight Pack directory for dashboard applications
LA_INSTALL_DIR/AppFramework/Apps/SYSLOGforzOSInsightPack_v3.1.0.0
5. Start each instance of the z/OS Log Forwarder, and in each LPAR from which you gather z/OS SMF data, start the z/OS SMF real-time data provider.
6. Ensure that z/OS log data and SMF data (if applicable) are being properly sent to, and ingested by, the Log Analysis server.
What to do next
If the Problem Insights page does not get updated, or does not render correctly,clear the browser cache.
Replacing the scala Logstash output plugin with the ioaz Logstash output plugin
If you are using the z/OS Log Forwarder to forward data to Logstash, and using Logstash to forward data to the IBM Operations Analytics - Log Analysis server, you can continue to use your existing Logstash instance if the following conditions are true: 1) your existing Logstash instance is Logstash Version 2.2.1 or later, and 2) you do not plan to have your Logstash instance on a Linux on System z system. However, you must replace the scala Logstash output plugin with the ioaz Logstash output plugin.
Before you begin
The ioaz Logstash output plugin automatically forwards the metadata that is needed by IBM Operations Analytics for z Systems to the Log Analysis server. The plugin also supports automatic data source creation in the Log Analysis server. Use the ioaz Logstash output plugin rather than the scala Logstash output plugin to forward z/OS log data and SMF data from Logstash to the Log Analysis server. Continue to use the scala Logstash output plugin for data from distributed systems, including Linux on System z systems.
The ioaz Logstash output plugin is packaged in a self-contained Ruby package that is called a gem. The package file is named logstash-output-ioaz-3.1.0.0.gem and is located in both of the following places:
v In the Logstash directory on the Insight Pack DVD
v In the .tar file that you obtain from IBM Shopz or IBM Fix Central
About this task
The following procedure describes how to update a single Logstash instance to use the ioaz Logstash output plugin rather than the scala Logstash output plugin. However, if this Logstash instance also processes log data from distributed systems, including Linux on System z systems, you must keep the scala Logstash output plugin, and add an ioaz Logstash output plugin to the configuration. Use tags or another Logstash technique to route z/OS data to the ioaz output plugin and to route distributed data to the scala output plugin.
If you have multiple Logstash instances that forward data to the Log Analysis server, choose a single Logstash instance to update first, and update the other instances later.
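A tag-based routing arrangement of the kind described above can be sketched as follows. This fragment is illustrative only: the tag name zos must be applied upstream by an input or filter in your pipeline, the log_path value is an example, and the connection options are placeholders for whatever your environment requires.

```conf
# Illustrative output section: z/OS events (tagged "zos" upstream) go to
# the ioaz plugin; all other events keep using the existing scala plugin.
# Only options named in this documentation are shown.
output {
  if "zos" in [tags] {
    ioaz {
      # connection options for your Log Analysis server go here
      log_path => "/root/Logstash/logs"
      trust_all_certificates => true
    }
  } else {
    scala {
      # existing scala output options remain unchanged
    }
  }
}
```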
Procedure
To replace the scala Logstash output plugin with the ioaz Logstash output plugin in an existing Logstash instance, complete the following steps:
1. Stop each instance of the z/OS Log Forwarder, and in each LPAR from which you gather z/OS SMF data, stop the z/OS SMF real-time data provider.
2. Wait until the disk cache is empty. Use the following command to determine whether any files are present in the disk cache directory (directories might still be present). The disk cache directory is specified by the disk_cache_path configuration option in the scala Logstash output plugin.
find disk_cache_path -type f
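The wait in step 2 can be wrapped in a simple poll around the documented find command; the function name and the 5-second interval below are arbitrary illustrative choices, not product guidance.

```shell
# Illustrative poll: block until no regular files remain under the
# scala output plugin's disk_cache_path (subdirectories may remain).
wait_for_empty_cache() {
  cache_dir="$1"
  # `head -n1` stops the scan at the first file found, so each poll is cheap
  while [ -n "$(find "$cache_dir" -type f -print | head -n1)" ]; do
    sleep 5
  done
  echo "disk cache is empty"
}
# Usage: wait_for_empty_cache /path/to/disk_cache_path
```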
3. Stop Logstash.
4. Delete all directories and files under the disk_cache_path directory.
5. Install the ioaz Logstash output plugin by running the following command:
LOGSTASH_INSTALL_DIR/bin/logstash-plugin install --local logstash-output-ioaz-3.1.0.0.gem
6. Update the Logstash pipeline configuration file to replace the scala output with an ioaz output, and make the following changes to the configuration options for the output plugin:
v Remove the batch_size option, if it is set
v Remove the sequential_flush option, if it is set
v Remove the num_current_writers option, if it is set
v Remove the use_structured_api option, if it is set
v Remove the scala_fields option, if it is set
v Remove the metadata_fields option, if it is set
v Remove the date_format_string option, if it is set
v Change the log_file option to log_path, and remove the file name from the path in the value, as shown in the following example:

Before the change
log_file => "/root/Logstash/logs/scala_logstash.log"

After the change
log_path => "/root/Logstash/logs"
v Change the value of the log_level option to "info", if it is set
v Add a trust_all_certificates option with a value of true
If you are adding an ioaz output rather than replacing the scala output, make a copy of the existing scala output, and make the preceding changes in the copy.
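Taken together, the edits in step 6 might leave an output section like the following sketch. Only the options named in this procedure are shown; the connection options that your scala output carried are environment-specific and are represented here by a placeholder comment.

```conf
# Illustrative result of step 6: the scala output renamed to ioaz, the
# listed options removed, log_file changed to log_path, and the two new
# settings added.
output {
  ioaz {
    # ...connection options carried over from the scala output...
    log_path => "/root/Logstash/logs"
    log_level => "info"
    trust_all_certificates => true
  }
}
```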
7. Add a tcp input to the Logstash pipeline configuration. Use a port that is different from the port that is specified in the http input, and specify a codec of json, as shown in the following example. The IBM Common Data Provider for z Systems Data Streamer uses this port to send data to Logstash.

Before the change
input {
  http {
    port => 8080
  }
}

After the change
input {
  http {
    port => 8080
  }
  tcp {
    port => 8081
    codec => "json"
  }
}

Having both inputs enables some LPARs to use a z/OS Log Forwarder to send data to Logstash and other LPARs to use an IBM Common Data Provider for z Systems instance to send data to Logstash.
8. Start Logstash.
9. Start each instance of the z/OS Log Forwarder, and in each LPAR from which you gather z/OS SMF data, start the z/OS SMF real-time data provider.
10. Ensure that z/OS log data and SMF data (if applicable) are being properly sent to, and ingested by, the Log Analysis server.
Migrating to IBM Common Data Provider for z Systems for streaming z/OS operational data
You must now use IBM Common Data Provider for z Systems to stream z/OS operational data to IBM Operations Analytics for z Systems. Components of IBM Common Data Provider for z Systems include a new version of the z/OS Log Forwarder and a System Data Engine, which is a new version of the SMF real-time data provider that was a component of IBM Operations Analytics for z Systems Version 2.2 and earlier.
About this task
The z/OS Log Forwarder in IBM Common Data Provider for z Systems gathers z/OS log data and SMF data and forwards the data to the IBM Common Data Provider for z Systems Data Streamer. The Data Streamer forwards the data to Logstash, which forwards the data to the IBM Operations Analytics - Log Analysis server.
This procedure describes how to migrate a single LPAR to use IBM Common Data Provider for z Systems for streaming z/OS operational data. Choose a single LPAR to migrate first, then go back and migrate the others. You must complete the following tasks, in the following order:
1. Upgrade to using the new z/OS Log Forwarder and the System Data Engine with your existing configurations.
2. Migrate to using IBM Common Data Provider for z Systems to gather z/OS log data and SMF data.
3. If you gather z/OS SMF data, migrate the IBM Common Data Provider for z Systems configuration to route SMF data directly from the System Data Engine to the Data Streamer, rather than through the z/OS Log Forwarder.
Upgrading to the new z/OS Log Forwarder and the System Data Engine
The first step in migrating to IBM Common Data Provider for z Systems is to upgrade to using the new z/OS Log Forwarder and the System Data Engine with your existing configurations.
About this task
The following procedure describes how to upgrade to using the new z/OS Log Forwarder and the System Data Engine for a single logical partition (LPAR). Choose a single LPAR to upgrade first, and upgrade your other LPARs later.
Procedure
To upgrade to using the new z/OS Log Forwarder and the System Data Engine for a single LPAR, complete the following steps:
1. Stop the z/OS Log Forwarder, and if you gather z/OS SMF data from this LPAR, stop the z/OS SMF real-time data provider also.
2. Install IBM Common Data Provider for z Systems by using SMP/E for z/OS (SMP/E).

Important: Create a new SMP/E installation environment, and maintain the old environment.

For the installation instructions, see the following program directories:
v Program Directory for IBM Operations Analytics for z Systems (GI13-4168)
v IBM Common Data Provider for z Systems - Common Data Handler Program Directory (GI13-4173) in the IBM Publications Center
v IBM Common Data Provider for z Systems - System Data Engine Program Directory (GI13-4177) in the IBM Publications Center

3. Complete the following updates to reflect the new directory:
a. Edit the z/OS Log Forwarder start procedure, and set the GLABASE procedure variable to the directory where the startup.sh script is located. The following directory is the default installation directory for the startup.sh script:
/usr/lpp/IBM/zscala/V2R2/samples
b. Edit the environment configuration file to update the ZLF_HOME and ZLF_LOG environment variables. The following default values indicate the default installation directories:
Environment variable   Default value
ZLF_HOME               /usr/lpp/IBM/zscala/V3R1
ZLF_LOG                /usr/lpp/IBM/zscala/V3R1/samples
4. To ensure that the z/OS Log Forwarder user exit is loaded from the correct library during future system IPLs, add the IBM Common Data Provider for z Systems Version 1.1 SGLALPA library to an LPALSTnn member. Also, remove any previous version of the SGLALPA library from the link pack area (LPA) library list.
5. Edit the z/OS SMF real-time data provider start procedure to set the GLALMOD assignment to point to the SGLALINK data set, as shown in the following example:

Assignment   Default value
GLALMOD      ZSCALA.V3R1M0.SGLALINK
6. Edit the z/OS SMF real-time data provider start procedure to update the following statements in the JCL:

Change this statement:   To this statement:
DRLIN DD                 HBOIN DD
PGM=GLAPDE               PGM=HBOPDE
DRLOUT DD                HBOOUT DD
DRLDUMP DD               HBODUMP DD
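As a rough sketch only, the renamed items might appear in the updated start procedure like this. Only the PGM= value and the DD names come from the table in step 6; the step label, data set names, and other DD details are hypothetical placeholders for whatever your existing procedure contains.

```jcl
//* Hypothetical fragment after the step 6 edits. Only PGM=HBOPDE and
//* the HBOIN/HBOOUT/HBODUMP DD names are from this documentation;
//* labels and data set names are illustrative.
//SDESTEP  EXEC PGM=HBOPDE
//HBOIN    DD DISP=SHR,DSN=USER.SDE.CONTROL
//HBOOUT   DD SYSOUT=*
//HBODUMP  DD SYSOUT=*
```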
7. Start the z/OS Log Forwarder, and if you gather z/OS SMF data from this LPAR, start the z/OS SMF real-time data provider also.
8. Ensure that z/OS log data and SMF data (if applicable) are being properly sent to, and ingested by, the Log Analysis server.
Migrating to use IBM Common Data Provider for z Systems for gathering data
After you upgrade to using the new z/OS Log Forwarder and the System Data Engine with your existing configurations, migrate to using IBM Common Data Provider for z Systems to gather z/OS log data and SMF data.
Before you begin
Complete the following steps, if you have not yet done so:
1. Complete the steps that are described in “Upgrading to the new z/OS Log Forwarder and the System Data Engine” on page 51.
2. Install and configure a Logstash instance. For more information, see “Installing Logstash and the ioaz Logstash output plugin” on page 36.
3. Install the IBM Common Data Provider for z Systems Configuration Tool. For installation instructions, see Common Data Provider for z Systems Configuration Tool in the IBM Common Data Provider for z Systems documentation.
About this task
The following procedure describes how to migrate a single logical partition (LPAR) to use IBM Common Data Provider for z Systems to gather z/OS log data and SMF data. Choose a single LPAR to migrate first, and migrate your other LPARs later.
Procedure
To migrate a single LPAR to use IBM Common Data Provider for z Systems to gather z/OS log data and SMF data, complete the following steps:
1. Stop the z/OS Log Forwarder, and if you gather z/OS SMF data from this LPAR, stop the z/OS SMF real-time data provider also.
2. Download the z/OS Log Forwarder data configuration file and environment configuration file to the desktop of the Linux or Windows system that includes the web browser that you plan to use to access the IBM Common Data Provider for z Systems Configuration Tool.

Attention: To prevent file corruption, file transfer must occur in ASCII mode.

3. By using the IBM Common Data Provider for z Systems Configuration Tool, complete the following steps to create a new policy that is equivalent to the z/OS Log Forwarder configuration:
a. In the Configuration Tool, create a new policy and edit it.
b. Drag the data configuration file from the existing z/OS Log Forwarder configuration to the graph area of the policy edit page. The Configuration Tool then creates the following items:
v A data stream for each data gatherer that is configured in the data configuration file
v A UTF-8 transform for all data streams
v A single subscriber that represents the Logstash instance to which the data is to be streamed
c. Drag the environment configuration file from the existing z/OS Log Forwarder configuration to the graph area of the policy edit page. The Configuration Tool then updates the IBM Common Data Provider for z Systems z/OS Log Forwarder configuration with the information in the environment configuration file.
d. If the data configuration file that you imported does not have a data source name and a data source type for each data gatherer, individually edit each data stream to configure the appropriate data source name and data source type.
e. Edit the subscriber to configure the host and the port for the Logstash instance to which the data is to be streamed. This port is the port on which the tcp input plugin is listening.
f. Save the policy.
Tip: For more information about migrating a z/OS Log Forwarder configuration to an IBM Common Data Provider for z Systems configuration, see the IBM Common Data Provider for z Systems V1.1.0 documentation.

4. Replace the z/OS Log Forwarder data configuration file and environment configuration file with the equivalent files that are produced by the Configuration Tool.
5. Prepare the Data Streamer by using the Data Streamer configuration file that is produced by the Configuration Tool.
Tip: For information about running the Data Streamer, see the IBM Common Data Provider for z Systems V1.1.0 documentation.

6. If the z/OS Log Forwarder forwarded data to a Logstash instance rather than directly to the IBM Operations Analytics for z Systems server, use the Log Analysis Administrative Settings UI to change the host name for each data source from the IP address of the LPAR to the fully qualified domain name of the LPAR. This step must be done because the http input plugin that was used to receive data from the z/OS Log Forwarder overwrote the host field in the Logstash event with the IP address of the originating system. Because the tcp input plugin that is used to receive data from the IBM Common Data Provider for z Systems Data Streamer does not overwrite the host field, the host field contains the fully qualified domain name as intended.
7. Start the Data Streamer and the z/OS Log Forwarder. If you gather z/OS SMF data from this LPAR, start the System Data Engine by using the z/OS SMF real-time data provider start procedure.

Tip: For information about running the Data Streamer, see the IBM Common Data Provider for z Systems V1.1.0 documentation.
8. Ensure that z/OS log data and SMF data (if applicable) are being properly sent to, and ingested by, the Log Analysis server.
Migrating to route SMF data directly from the System Data Engine to the Data Streamer
If you gather z/OS SMF data, migrate the IBM Common Data Provider for z Systems configuration to route SMF data directly from the System Data Engine to the Data Streamer, rather than through the z/OS Log Forwarder.
Before you begin
Complete the steps that are described in “Migrating to use IBM Common Data Provider for z Systems for gathering data” on page 52.
About this task
The following procedure describes how to migrate a single LPAR to route SMF data directly from the System Data Engine to the Data Streamer. The result of this migration is that new data streams are created for SMF data, typically with file path values that are different from the values for the previous data streams. Therefore, new data sources are created on the IBM Operations Analytics - Log Analysis server so that the SMF data from the new data streams can be ingested.
Choose a single LPAR to migrate first, and migrate your other LPARs later.
Procedure
To migrate a single LPAR to route SMF data directly from the System Data Engine to the Data Streamer, complete the following steps:
1. Stop the z/OS Log Forwarder and the System Data Engine.
2. By using the IBM Common Data Provider for z Systems Configuration Tool, edit the policy, and for each Generic ZFS File data stream that is used to stream SMF data, complete the following steps:
a. Create a new SMF data stream of the corresponding type. For example, for a Generic ZFS File data stream to gather SMF record type 30 data, select the SMF data stream with the name SMF30, which creates a new data stream for SMF record type 30 data.

Tip: Eight different data stream types exist for SMF record type 80 data. If you gather SMF record type 80 data, you typically create data streams for all eight types, but if you want only some subtypes, you can choose a subset of the types.

b. Transform the newly created stream (or for SMF record type 80, the streams) to UTF-8.
c. Subscribe the UTF-8 transform, or transforms, to the subscriber.
d. Delete the original Generic ZFS File data stream.
e. Save the policy.
3. Replace the z/OS Log Forwarder data configuration file with the one that is produced by the Configuration Tool.
4. Replace the Data Streamer configuration file with the one that is produced by the Configuration Tool.
5. Prepare the System Data Engine by using the System Data Engine configuration file that is produced by the Configuration Tool.

Tip: For more information about configuring the System Data Engine, see the IBM Common Data Provider for z Systems V1.1.0 documentation.

6. Start the Data Streamer and the z/OS Log Forwarder, and start the System Data Engine by using the System Data Engine start procedure.

Tip: For more information about running the System Data Engine, see the IBM Common Data Provider for z Systems V1.1.0 documentation.

7. Ensure that z/OS log data and SMF data are being properly sent to, and ingested by, the Log Analysis server.
Configuring Operations Analytics for z Systems
The primary configuration tasks for Operations Analytics for z Systems are to configure the IBM zAware data gatherer and to secure communication between IBM Common Data Provider for z Systems and Logstash, and between Logstash and the IBM Operations Analytics - Log Analysis server.
Configuring the IBM zAware data gatherer

During the installation of the z/OS Insight Packs, extensions, and IBM zAware data gatherer, you are prompted about whether you want to configure the IBM zAware data gatherer. If you choose not to configure this data gatherer at installation time, you can configure it later from the command line by using the zAwareDataGathererConfig.py script.
Before you begin
For more information about the installation process, see “Installing the z/OS Insight Packs, extensions, and IBM zAware data gatherer” on page 35.
About this task
For more information about the configuration process, see “Configuration reference for the IBM zAware data gatherer” on page 58. The following sections list and describe the information that you must provide during the configuration, regardless of whether you configure at installation time or later from the command line:
v “IBM zAware server information in the zAwareConfig.json configuration file” on page 59
v “Log Analysis server information in the unityClientConfig.json configuration file” on page 60
Procedure
To configure the data gatherer from the command line, complete the following steps:
1. Verify that you are logged in to the Linux computer system with the non-root user ID that was used to install Log Analysis.
2. Run the zAwareDataGathererConfig.py script. When you run the zAwareDataGathererConfig.py script, it first prompts you for whether you are adding or deleting an IBM zAware server definition.
Tips:
v The data gatherer gathers interval anomaly data from only one IBM zAware server. In this configuration, you must specify the host name or IP address of the IBM zAware server from which you want to gather interval anomaly data. That server must be actively collecting z/OS log data and deriving interval anomaly scores.
v If you want to change the server definition after it is configured, you mustfirst delete the existing definition, and then, add a new definition.
© Copyright IBM Corp. 2014, 2016 57
• You can configure this data gatherer from the command line without using the options. Then, you are prompted for the parameters that are needed to complete the configuration. Otherwise, the following examples illustrate the use of the options:
  – The following example illustrates how to add a new IBM zAware server definition to the configuration:

    ./zAwareDataGathererConfig.py add -z server01.anycompany.com -u admin -w password -l unityadmin -s unityadmin -t 9987

  – The following example illustrates how to delete an existing IBM zAware server definition from the configuration:

    ./zAwareDataGathererConfig.py delete -z server01.anycompany.com

  – To access the Help for adding or deleting a server definition, use the -h option, as shown in the following example for getting help for the add operation:

    ./zAwareDataGathererConfig.py add -h
What to do next
See “Getting started with the IBM zAware data gatherer” on page 72.
Configuration reference for the IBM zAware data gatherer

The configuration of the IBM zAware data gatherer results in the creation of two JavaScript Object Notation (JSON) files, zAwareConfig.json and unityClientConfig.json, on the Log Analysis server in the LA_INSTALL_DIR/zAwareDataGatherer/config directory. These files contain the configuration information that was provided for the data gatherer either at installation time or later by using the zAwareDataGathererConfig.py script.
This reference includes the following information:

• “Verification of the configuration”
• “Situations in which you must reconfigure the data gatherer”
• “IBM zAware server information in the zAwareConfig.json configuration file” on page 59
• “Log Analysis server information in the unityClientConfig.json configuration file” on page 60
Verification of the configuration
If the two configuration files are present in the following directory, the IBM zAware data gatherer is configured, and you can open the files to view or verify their content:

LA_INSTALL_DIR/zAwareDataGatherer/config
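If you want to script this check, a minimal sketch follows. The helper name is ours, and the LA_INSTALL_DIR argument must be the actual Log Analysis installation path; only the directory and file names come from this documentation.

```python
import os

# The two files that the data-gatherer configuration is expected to create.
REQUIRED_FILES = ("zAwareConfig.json", "unityClientConfig.json")

def zaware_gatherer_configured(la_install_dir):
    """Return True only if both data-gatherer configuration files exist
    under LA_INSTALL_DIR/zAwareDataGatherer/config."""
    config_dir = os.path.join(la_install_dir, "zAwareDataGatherer", "config")
    return all(os.path.isfile(os.path.join(config_dir, f)) for f in REQUIRED_FILES)
```

For example, `zaware_gatherer_configured("/home/unity/IBM/LogAnalysis")` (a hypothetical installation path) returns True only after the configuration step has created both files.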
Situations in which you must reconfigure the data gatherer
If an external change occurs to the values for any of the entries in the configuration files, you must reconfigure the IBM zAware data gatherer. For example, assume that a system administrator changes the password for the administrator user ID for the IBM zAware server server01.anycompany.com, which is defined in the data gatherer configuration. For the IBM zAware data gatherer to have continued access to server01.anycompany.com, the system administrator must reconfigure the data gatherer according to the following steps:
1. Stop the running process for the data gatherer.
2. Use the zAwareDataGathererConfig.py script to delete the existing definition for the server.
3. Use the zAwareDataGathererConfig.py script to add a new definition for the server, and specify the administrator user ID with the new password.
4. Restart the data gatherer.
IBM zAware server information in the zAwareConfig.json configuration file

The zAwareConfig.json file contains the information that is required by the IBM zAware data gatherer to access the IBM zAware server that provides interval anomaly data.

The following information about the IBM zAware server is defined in this file:

Host name or IP address
   The host name or IP address of the IBM zAware server from which you want to gather interval anomaly data. The server must be actively collecting z/OS log data and deriving interval anomaly scores.

   If the host name or IP address that you provide is not valid, the IBM zAware data gatherer does not start.

   Required or optional?
      Required for adding or deleting an IBM zAware server definition
   Default value
      None
   Command line option for specifying this information
      -z

User ID
   The user ID that is defined to the IBM zAware server in the Administrator role and is authorized to retrieve interval anomaly data.

   Required or optional?
      Required for adding a new IBM zAware server definition
   Default value
      admin
   Command line option for specifying this information
      -u

Password
   The password for the user ID. This password is encrypted in the configuration file.

   Required or optional?
      Required for adding a new IBM zAware server definition
   Default value
      None
   Command line option for specifying this information
      -w
Log Analysis server information in the unityClientConfig.json configuration file

The unityClientConfig.json file contains the information that is required by the IBM zAware data gatherer to access the Log Analysis server to provide the interval anomaly data for display in the Log Analysis UI.

The following information about the Log Analysis server is defined in this file:

User ID
   The user ID for the Log Analysis server that is authorized to receive interval anomaly data.

   Required or optional?
      Required for adding a new IBM zAware server definition
   Default value
      unityadmin
   Command line option for specifying this information
      -l

Password
   The password for the user ID. This password is encrypted in the configuration file.

   Required or optional?
      Required for adding a new IBM zAware server definition
   Default value
      None
   Command line option for specifying this information
      -s

Port number
   The number of the Log Analysis server port that is listening for inbound communication.

   Required or optional?
      Optional
   Default value
      9987
   Command line option for specifying this information
      -t
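The exact layout of the two generated JSON files is not shown in this documentation. Purely as a hypothetical illustration of the values described above (every key name here is an assumption, and the password value is an encrypted placeholder), zAwareConfig.json might resemble:

```json
{
  "hostname": "server01.anycompany.com",
  "userid": "admin",
  "password": "{aes}..."
}
```

Open the actual file in LA_INSTALL_DIR/zAwareDataGatherer/config to see the real key names that your installation produced.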
Configuring Logstash

After Logstash and the ioaz Logstash output plugin are installed, the Logstash configuration file is updated with the values that were provided during the installation, and Logstash is started. The remaining basic configuration task is to secure communication between Logstash and the IBM Operations Analytics - Log Analysis server.
Before you begin
Review “Planning for configuration of Logstash” on page 29.
For more information about the configuration options for the ioaz Logstash output plugin, see “Configuration options for the ioaz Logstash output plugin” on page 85.
Verifying the identity of the Log Analysis server

For communication between Logstash and the IBM Operations Analytics - Log Analysis server to be secure, the identity of the Log Analysis server must be verified by the ioaz Logstash output plugin. Communication between Logstash and the Log Analysis server occurs over the Hypertext Transfer Protocol Secure (HTTPS). By default, the identity of the Log Analysis server is trusted without validation.
About this task
In the Logstash configuration file, specifying a value of false for the trust_all_certificates option directs the ioaz Logstash output plugin to compare the security certificate from the Log Analysis server with the certificates that are stored in the cacerts keystore file. Therefore, you must manually import a security certificate for the Log Analysis server into the cacerts keystore file for the Java Runtime Environment (JRE) that Logstash uses.

Location of cacerts keystore file
   The cacerts keystore file is in the following path, where java.home represents the directory for the JRE that Logstash uses:

   java.home/lib/security

   You can configure and manage this file by using the keytool utility. The initial password of the cacerts file is changeit.

Default location of Log Analysis server certificate file
   The default location of the Log Analysis server certificate file is the following path:

   LA_INSTALL_DIR/wlp/usr/servers/Unity/resources/security/client.crt
Procedure
To secure communication between Logstash and the Log Analysis server, complete the following steps:
1. Copy the Log Analysis server certificate file from the Log Analysis server to the Logstash server.

   Attention: To prevent file corruption, the file transfer must occur in binary mode.
2. To import the Log Analysis server certificate, issue the following command (all on one line):

   JRE_path/bin/keytool -import -alias IOAz -file certificate_path/client.crt -keystore java.home/lib/security/cacerts -storepass changeit

3. In the Logstash configuration file, change the value of the trust_all_certificates option from true to false.
4. Restart Logstash by using the LOGSTASH_INSTALL_DIR/bin/logstash_util.sh script.
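After step 3, the ioaz output section of the Logstash configuration file contains a line like the following. This is a sketch: only the option name trust_all_certificates comes from this procedure, and the surrounding structure and elided options are illustrative.

```conf
output {
  ioaz {
    # ... other ioaz options ...
    trust_all_certificates => false
  }
}
```

With this value in place, the plugin validates the Log Analysis server certificate against the cacerts keystore instead of trusting it without validation.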
Updating the Logstash configuration with the user password for the Log Analysis server

For communication between Logstash and the IBM Operations Analytics - Log Analysis server to be secure, you must change the user password for the Log Analysis server on a regular schedule, encrypt the password, and update the Logstash configuration with the encrypted password.
Procedure
To update the user password in the Logstash configuration, complete the following steps:
1. If Logstash is not installed on the Log Analysis server, copy the ioaz.ks file that is specified by the keystore_path option in the Logstash configuration file to the Log Analysis server.

   Attention: To prevent file corruption, the file transfer must occur in binary mode.
2. On the Log Analysis server, use the unity_securityUtility.sh command to encrypt the password. Specify the ioaz.ks file as the third parameter to ensure that the key in this keystore is used to perform the encoding.
3. Copy the Advanced Encryption Standard (AES) encrypted password, and specify it as the value of the password option in the Logstash configuration file.
4. Restart Logstash by using the LOGSTASH_INSTALL_DIR/bin/logstash_util.sh script.
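Step 3 amounts to replacing the value of the password option in the Logstash configuration file with the new encrypted string. A minimal sketch of that substitution in Python follows; the helper is ours, and it assumes the option is written in the usual Logstash form `password => "..."`.

```python
import re

def set_ioaz_password(conf_text, encrypted):
    """Swap in a new AES-encrypted password value for the ioaz 'password'
    option, leaving the rest of the configuration text untouched."""
    return re.sub(r'(password\s*=>\s*")[^"]*(")',
                  lambda m: m.group(1) + encrypted + m.group(2),
                  conf_text)
```

For example, calling `set_ioaz_password(text, "{aes}...")` with the string that unity_securityUtility.sh printed updates the option in place; write the result back to the configuration file before restarting Logstash.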
Securing communication between IBM Common Data Provider for z Systems and Logstash

You must configure a secure data connection for streaming operational data from IBM Common Data Provider for z Systems to Logstash.
Procedure
Depending on your environment and Logstash setup, use one of the following options to secure the data connection:
Option: You want to use the Logstash that is provided by IBM Operations Analytics for z Systems, and you have OpenSSL installed.
   To enable SSL, uncomment the SSL statements in the Logstash configuration file (LOGSTASH_INSTALL_DIR/config/logstash-ioaz.conf). As part of the Logstash installation, the Logstash installer then secures the Logstash end of the data connection.
   To secure the IBM Common Data Provider for z Systems end of the data connection, complete the steps that are outlined in the IBM Common Data Provider for z Systems V1.1.0 documentation.

Option: You want to use the Logstash that is provided by IBM Operations Analytics for z Systems, but you do not have OpenSSL installed.
   To secure the data connection, complete the steps that are outlined in the IBM Common Data Provider for z Systems V1.1.0 documentation.

Option: You want to use an existing installation of Logstash rather than the Logstash that is provided by IBM Operations Analytics for z Systems.
   To secure the data connection, complete the steps that are outlined in the IBM Common Data Provider for z Systems V1.1.0 documentation.
Preparing to analyze z/OS log data
To begin analyzing z/OS log data, you must log in to IBM Operations Analytics - Log Analysis. To optimize troubleshooting in your IT operations environment, assign data sources to groups within the Log Analysis user interface, and customize the Custom Search Dashboards that are provided in the z/OS Insight Packs.
About this task
You might also want to use the Problem Insights page and the client-side Expert Advice in the Log Analysis UI.
Logging in to Log Analysis

To create or update data sources (including to group data sources), you must log in to IBM Operations Analytics - Log Analysis with a user name that has administrator authority. If you are only analyzing z/OS log data, the user name that you use does not require administrator authority.
Before you begin
For information about administering, using, and troubleshooting IBM Operations Analytics - Log Analysis, see the Log Analysis documentation.
Procedure
To log in to IBM Operations Analytics - Log Analysis for the first time, use the following URL:

https://fully_qualified_ioala_hostname:secure_port/Unity

where fully_qualified_ioala_hostname is the fully qualified domain name of the IBM Operations Analytics - Log Analysis server, and secure_port is the Web Console secure port that is defined during the installation of IBM Operations Analytics - Log Analysis. The default value for this port is 9987.
Use of cookies in the Log Analysis UI

In the IBM Operations Analytics - Log Analysis UI, you can enable the search history. Log Analysis then uses a cookie that is saved in the temporary directory of the browser to remember the last 10 terms that are entered in the search field.
By default, the cookie that saves the search terms expires every 30 days.
For information about how to enable or clear this search history, see Enabling the GUI search history in the Log Analysis documentation.
Privacy policy considerations
IBM Software products, including software as a service solutions, (“Software Offerings”) may use cookies or other technologies to collect product usage information, to help improve the end user experience, to tailor interactions with the end user, or for other purposes. In many cases, no personally identifiable information is collected by the Software Offerings. Some of our Software Offerings can help enable you to collect personally identifiable information. If this Software Offering uses cookies to collect personally identifiable information, specific information about this offering’s use of cookies is set forth below.

This Software Offering does not use cookies or other technologies to collect personally identifiable information.

For more information about the use of various technologies, including cookies, for these purposes, see IBM's Privacy Policy at http://www.ibm.com/privacy and IBM's Online Privacy Statement at http://www.ibm.com/privacy/details in the section entitled “Cookies, Web Beacons and Other Technologies,” and the “IBM Software Products and Software-as-a-Service Privacy Statement” at http://www.ibm.com/software/info/product-privacy.
Grouping data sources to optimize troubleshooting in your IT environment

By defining logical groups of data sources within IBM Operations Analytics - Log Analysis, you can more easily apply searches to related sets of data sources to optimize troubleshooting in your IT environment.
About this task
Determine whether any data sources can be organized into meaningful logical groups.

For example, assume that your IT environment contains an online banking application with a WebSphere Application Server for z/OS front end and a DB2 for z/OS back end. Organizing the associated data sources into a group can help you to quickly focus your troubleshooting efforts on the online banking application.

Although you can configure the ioaz Logstash output plugin to create the necessary data sources if they do not exist, the plugin does not group the data sources. To group the data sources that are created by the ioaz Logstash output plugin, you must manually assign the data sources to groups by using the Log Analysis user interface.

For more information about grouping data sources, see Editing service topology information in Group JSON in the Log Analysis documentation.
Extending troubleshooting capability with Custom Search Dashboard applications

The Custom Search Dashboard applications that are provided in the z/OS Insight Packs are samples of custom logic that you can use to extend the troubleshooting capability of IBM Operations Analytics - Log Analysis. You can use these samples to create dashboards for presenting the generated data in a useful visual format, such as a chart or graph, and for presenting HTML content.
Before you begin
For information about the content of each dashboard, see “Dashboards” on page 126.

For information about the data source types that are used in populating the dashboards, see “Correlation of the data to be analyzed with the associated data stream types, data source types, and dashboards” on page 19.

For more information about Custom Search Dashboards, see Custom Search Dashboards.
About this task
Before you run these applications, you must customize them for your environment.
Each dashboard application is defined in a JavaScript Object Notation (JSON) file that specifies the following information:

• The script that runs the custom logic
• The parameters to pass to the script
• The output charts to display in the dashboard

To update the parameters for a dashboard application, edit the appropriate *.app file in the following directories:
WebSphere Application Server for z/OS Custom Search Dashboard applications
   LA_INSTALL_DIR/AppFramework/Apps/WASforzOSInsightPack_v3.1.0.0

z/OS Network Custom Search Dashboard applications
   LA_INSTALL_DIR/AppFramework/Apps/zOSNetworkInsightPack_v3.1.0.0

z/OS SMF Custom Search Dashboard applications
   LA_INSTALL_DIR/AppFramework/Apps/SMFforzOSInsightPack_v3.1.0.0

z/OS SYSLOG Custom Search Dashboard applications
   LA_INSTALL_DIR/AppFramework/Apps/SYSLOGforzOSInsightPack_v3.1.0.0
Customizing the dashboard applications

Before you run the Custom Search Dashboard applications, customize them for your environment.
Before you begin
In this topic, the DB2 for z/OS Troubleshooting dashboard is used as an example to show you how to customize a dashboard application.

If you are customizing the SYSLOG for z/OS Time Comparison dashboard, also see “Customizing the SYSLOG for z/OS Time Comparison dashboard” on page 71.
About this task
To customize a Troubleshooting application, you must update the parameters for the following items:

data source
   The data source, or group of data sources, to use as input for the Troubleshooting dashboard

time interval
   The time interval for which to extract data to present in the Troubleshooting dashboard

host name (optional)
   The host name for which to group log data in the Troubleshooting dashboard. Specifying this parameter is optional, but grouping log data by host name can help you identify where problems are occurring.
Procedure
1. To update the parameters, edit the following file:

   LA_INSTALL_DIR/AppFramework/Apps/SYSLOGforzOSInsightPack_v3.1.0.0/DB2_for_zOS_Troubleshooting.app

   In the search parameter, you specify the data source, or group of data sources, to be used as input for the DB2 for z/OS Troubleshooting dashboard.

   Tip: The name of the data source or group of data sources that you specify must match the name that was defined in the Data Sources tab of the Administrative Settings workspace in IBM Operations Analytics - Log Analysis.
2. To specify the data sources to be used for the dashboard data, use one of the following formats, depending on whether you want to specify data sources individually or as a group:
   • To specify individual data sources, use the following format:

     "parameters": [
       {
         "name": "search",
         "type": "SearchQuery",
         "value": {
           "logsources": [
             {
               "type": "logSource",
               "name": "SYSLOG1"
             },
             {
               "type": "logSource",
               "name": "SYSLOG2"
             }
           ]
         }
       },

     Specify the following values for the search parameter:

     type
        logSource
     name
        The name of the data source that was defined in the Data Sources tab of the Administrative Settings workspace in IBM Operations Analytics - Log Analysis. An example value is SYSLOG.
   • To specify a group of data sources, use the following format:

     "parameters": [
       {
         "name": "search",
         "type": "SearchQuery",
         "value": {
           "logsources": [
             {
               "type": "tag",
               "name": "/day trader/trading application1/"
             },
             {
               "type": "tag",
               "name": "/day trader/trading application2/"
             }
           ]
         }
       },

     Specify the following values for the search parameter:

     type
        tag
     name
        The tag path that was defined in the Data Sources tab of the Administrative Settings workspace in IBM Operations Analytics - Log Analysis. An example value is /day trader/trading application/.
3. To specify the time interval for which to extract data to present in the dashboard, define either a relative time interval or a custom time interval.
   • To specify a relative time interval, use the relativeTimeInterval parameter, as shown in the following format:

     "parameters": [
       {
         "name": "search",
         "type": "SearchQuery",
         "value": {
           "logsources": [
             {
               "type": "logSource",
               "name": "SYSLOG"
             }
           ]
         }
       },
       {
         "name": "relativeTimeInterval",
         "type": "string",
         "value": "LastDay"
       },
       {
         "name": "timeFormat",
         "type": "data",
         "value": {
           "timeUnit": "hour",
           "timeUnitFormat": "MM-dd HH:mm"
         }
       },
       {
         "name": "hostnameField",
         "type": "string",
         "value": "SystemName"
       }
     ],
     The following values are valid for the relativeTimeInterval parameter:
     – LastQuarterHour
     – LastHour
     – LastDay
     – LastWeek
     – LastMonth
     – LastYear
   • To specify a custom time interval, use the timestamp value in the search parameter, as shown in the following format:

     "parameters": [
       {
         "name": "search",
         "type": "SearchQuery",
         "value": {
           "filter": {
             "range": {
               "timestamp": {
                 "from": "01/01/2012 00:00:00.000 -0400",
                 "to": "04/25/2013 00:00:00.000 -0400",
                 "dateFormat": "MM/dd/yyyy HH:mm:ss.SSS Z"
               }
             }
           },
           "logsources": [
             {
               "type": "logSource",
               "name": "SYSLOG"
             }
           ]
         }
       },
       {
         "name": "timeFormat",
         "type": "data",
         "value": {
           "timeUnit": "hour",
           "timeUnitFormat": "yyyy-MM-dd HH:mm:ss"
         }
       },
       {
         "name": "hostnameField",
         "type": "string",
         "value": "SystemName"
       }
     ],
     Specify the following values:

     from
        The start of the time interval for which data is extracted to present in the dashboard
     to
        The end of the time interval for which data is extracted to present in the dashboard
     dateFormat
        The date format string that is used in the from and to fields
4. The time format defines how the time interval (either relative or custom) is displayed on the x-axis of a chart in the dashboard. To specify the time format, use the timeFormat parameter, as shown in the following format:

   {
     "name": "timeFormat",
     "type": "data",
     "value": {
       "timeUnit": "hour",
       "timeUnitFormat": "MM-dd HH:mm"
     }
   },
   Specify the following values:

   timeUnit
      The discrete time unit that is displayed on the x-axis of a chart in the dashboard. The following values are valid:
      • minute
      • hour
      • day
      • week
      • month
      • year

      The time unit should be a smaller unit than the time interval. For example, if the chart displays a time interval of LastWeek, a reasonable time unit is day.

   timeUnitFormat
      The date format of the time unit that is displayed on the x-axis of a chart in the dashboard, as specified by the Java SimpleDateFormat class. For example, a date format might be yyyy-MM-dd HH:mm:ssZ.
5. Optional: To specify the host name, use the hostnameField parameter, as shown in the following format:

   {
     "name": "hostnameField",
     "type": "string",
     "value": "SystemName"
   }

   Specify one of the following values for the host name, or accept the default value, which is SystemName:

   hostname
      The host name that is specified in the Service Topology that is associated with the data source.
   SystemName
      The system name.
6. Ensure that the port on which IBM Operations Analytics - Log Analysis listens is set to the default value of 9987 in the following file:

   LA_INSTALL_DIR/AppFramework/Apps/SYSLOGforzOSInsightPack_v3.1.0.0/CommonAppMod.py

   For example, ensure that the following line of the file contains the correct port number:

   baseurl = 'https://localhost:9987/Unity'
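Because each *.app file is ordinary JSON, the parameter edits described above can also be scripted. The following sketch updates the data source name in the search parameter; the helper name is ours, and the structure it walks is taken from the parameter formats shown in this procedure.

```python
import json

def set_search_log_source(app_text, new_name):
    """Point every logSource entry of the 'search' parameter in a dashboard
    .app file at a new data source name, and return the updated JSON text."""
    app = json.loads(app_text)
    for param in app["parameters"]:
        if param["name"] == "search":
            for src in param["value"]["logsources"]:
                if src["type"] == "logSource":
                    src["name"] = new_name
    return json.dumps(app, indent=2)
```

For example, read the *.app file, pass its contents and the new data source name to the helper, and write the result back before running the dashboard.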
Customizing the SYSLOG for z/OS Time Comparison dashboard

To customize the SYSLOG for z/OS Time Comparison dashboard, you must first complete the basic customization for the SYSLOG for z/OS dashboard. Then, update the parameter for the time filter in the *.app file.
Before you begin
For more information about the basic customization, see “Customizing the dashboard applications” on page 67.
About this task
To specify a time filter, use the timefilters parameter, as shown in the following format. The lastnum value specifies the number of days between the two time periods for the comparison.

"filter": {
  "timefilters": {
    "lastnum": 1,
    "granularity": "days",
    "type": "relative"
  }
}
You can use the SYSLOG for z/OS Time Comparison dashboard in either of the following ways:
• If you have an active set of search results in the Log Analysis user interface, this application uses the time filter, the data set filter, and the query from that search result as the base for time period 1. Time period 2 is then based on what you specify for the time filter in the *.app file. In the preceding example, time period 2 is one day.
• If you have no active search results in the Log Analysis user interface, time period 1 is based on the values for the logsources and the relativeTimeInterval parameters.
Procedure
To update the parameters, edit the appropriate *.app file in the following directory:

LA_INSTALL_DIR/AppFramework/Apps/SYSLOGforzOSInsightPack_v3.1.0.0
Running the dashboard applications

You run the Custom Search Dashboard applications from the IBM Operations Analytics - Log Analysis user interface.
Procedure
To run the applications, complete the following steps:
1. Log in to the IBM Operations Analytics - Log Analysis Search workspace.
2. In the Search Dashboards section, expand one of the following folders, depending on which applications you want to run:
   • SMFforzOSInsightPack_v3.1.0.0
   • SYSLOGforzOSInsightPack_v3.1.0.0
   • WASforzOSInsightPack_v3.1.0.0
   • zOSNetworkInsightPack_v3.1.0.0

   Tip: If you do not see the folder, refresh the contents of this section.
3. Double-click the name of the application.
Tip: If you hover over the application tab in the workspace, the tooltip displays the time that the application was run, which is an indication of the time that the data for the charts was generated.
Getting started with the IBM zAware data gatherer

Before you can use the IBM zAware data gatherer and see visualizations of the resulting interval anomaly data in the IBM Operations Analytics - Log Analysis user interface, you must verify that all prerequisites are met, and start the data gatherer.
Before you begin
Verify that the following prerequisites for using the IBM zAware data gatherer are met:
__ 1. The IBM zAware V3.1 feature is installed. See “IBM zAware V3.1 feature” on page 11.
__ 2. IBM Operations Analytics - Log Analysis Version 1.3.3 Fix Pack 1 (V1.3.3.1) is installed. See “Installing Log Analysis” on page 33.
__ 3. The z/OS Insight Packs, extensions, and IBM zAware data gatherer are installed. See “Installing the z/OS Insight Packs, extensions, and IBM zAware data gatherer” on page 35.
__ 4. The IBM zAware data gatherer is correctly configured with the definition for the IBM zAware server from which you plan to gather interval anomaly data. See “Configuring the IBM zAware data gatherer” on page 57.
Tip: The data gatherer requires a version of Python that is supported by Log Analysis. For more information about the versions, see Verifying the Python version in the Log Analysis documentation.
About this task
The IBM zAware data gatherer files are in the following directory on the Log Analysis server:

LA_INSTALL_DIR/zAwareDataGatherer

Logs are in the following directory:

LA_INSTALL_DIR/zAwareDataGatherer/logs
Usage notes for the data gatherer:
• The data gatherer creates a data source with the name “zOS Anomaly Interval” and the type zOS-Anomaly-Interval. All interval anomaly data that is retrieved by the data gatherer is ingested into this data source. If another data source with the same name but with a different type was previously defined to the Log Analysis server, the IBM zAware data gatherer cannot gather data.
• The data gatherer gathers data only if the following conditions are met:
  – The data gatherer is running.
  – The IBM zAware server from which data is being gathered is accessible.
  – The Log Analysis server to which data is being sent for analysis and visualization is accessible.
• The data gatherer does not support the retrieval of historical interval anomaly data.
• The data gatherer obtains interval anomaly data only for z/OS systems that are monitored by IBM zAware. It does not obtain Linux system data.
Procedure
To start the IBM zAware data gatherer, complete the following steps:
1. Verify that you are logged in to the Linux computer system with the non-root user ID that was used to install Log Analysis.
2. Run the following command from the LA_INSTALL_DIR/zAwareDataGatherer/bin directory, where zaware_server represents either the host name or the IP address of the IBM zAware server from which you want to gather interval anomaly data:

   python zAwareDataGatherer.py zaware_server

   Tip: The data gatherer gathers interval anomaly data from only one IBM zAware server. Therefore, the data gatherer script requires only one argument, which is for specifying the host name or IP address of the IBM zAware server from which you want to gather interval anomaly data. This server must have a configuration entry in the following file:

   LA_INSTALL_DIR/zAwareDataGatherer/config/zAwareConfig.json
Getting started with Problem Insights for z/OS

If the Problem Insights component of IBM Operations Analytics for z Systems is installed, the IBM Operations Analytics - Log Analysis UI includes a new tab that is titled Problem Insights. For each sysplex from which data is being forwarded to the Log Analysis server, the Problem Insights page includes insight about certain problems that are identified in the ingested data.
Before you begin
The Problem Insights extension must be installed.
For more information about the Problem Insights component and the system requirements and installation information for this component, see the following topics:

• “Log Analysis extensions for z/OS Problem Insights and client-side Expert Advice” on page 9
• “System requirements” on page 15
• “Installing Log Analysis” on page 33
• “Installing the z/OS Insight Packs, extensions, and IBM zAware data gatherer” on page 35
Procedure
To use the Problem Insights page in the Log Analysis UI, click the tab that is titled Problem Insights.
Getting started with client-side Expert Advice

If the client-side Expert Advice component of IBM Operations Analytics for z Systems is installed, IBMSupportPortal-ExpertAdvice on Client is a choice under Expert Advice in the Search workspace of the IBM Operations Analytics - Log Analysis UI. With this extension, you can access Expert Advice even if the Log Analysis server does not have access to the Internet.
Before you begin
Ensure that the client-side Expert Advice component is installed. Also, ensure that you are using Log Analysis Version 1.3.3 Fix Pack 1. For more information about the client-side Expert Advice component and the system requirements and installation information for this component, see the following topics:

• “Log Analysis extensions for z/OS Problem Insights and client-side Expert Advice” on page 9
• “System requirements” on page 15
• “Installing Log Analysis” on page 33
• “Installing the z/OS Insight Packs, extensions, and IBM zAware data gatherer” on page 35
About this task
The only field that you can update in the IBMSupportPortal-ExpertAdvice on Client.app file is _TERM_LIMIT, which defines the maximum number of keywords that a user can search for at one time.
The following data is removed from the search keywords by the client-side Expert Advice (IBMSupportPortal-ExpertAdvice on Client.app) to increase the chance of a successful search on the client:

• URLs
• File names
• File paths
• IP addresses
• Numbers
• Double spaces
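As an illustration of this kind of scrubbing (a sketch in Python, not the product's actual implementation), keyword cleanup along these lines might look like the following. The function name and the regular expressions are ours; only the list of removed items comes from this documentation.

```python
import re

def scrub_keywords(text):
    """Remove URLs, file paths, IP addresses, file names, numbers,
    and double spaces from a search string (illustrative only)."""
    text = re.sub(r"https?://\S+", " ", text)                 # URLs
    text = re.sub(r"(?:/[\w.-]+)+/?", " ", text)              # file paths
    text = re.sub(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", " ", text)  # IPv4 addresses
    text = re.sub(r"\b[\w-]+\.[A-Za-z]\w{0,3}\b", " ", text)  # file names
    text = re.sub(r"\b\d+\b", " ", text)                      # bare numbers
    return re.sub(r" {2,}", " ", text).strip()                # double spaces
```

Stripping these environment-specific tokens leaves only the generic message text, which is more likely to match documents in the IBM Support Portal.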
Tips for using the client-side Expert Advice:
1. Verify that the client browser settings allow pop-up windows from the Log Analysis server.
2. If the Log Analysis server cannot be reached, clear the cookies for that domain, and try to connect again.
Procedure
To use the client-side Expert Advice, complete the following steps:
1. In the _TERM_LIMIT field of the IBMSupportPortal-ExpertAdvice on Client.app file, define the maximum number of keywords that a user can search for at one time.
2. To launch the client-side Expert Advice, click IBMSupportPortal-ExpertAdvice on Client under Expert Advice in the Custom Search Dashboards panel of the left navigation pane of the Search workspace.
The client browser sends search requests directly to the IBM Support Portal and opens a new browser tab to display the query search results.
Troubleshooting Operations Analytics for z Systems
This reference lists known problems that you might experience when using IBM Operations Analytics for z Systems and describes known solutions.
About this task
For more troubleshooting information, see the following sources:
• IBM Operations Analytics - Log Analysis V1.3.3 documentation
• IBM Common Data Provider for z Systems V1.1.0 documentation
Log files

Troubleshooting information is available in log files from IBM Common Data Provider for z Systems, IBM Operations Analytics - Log Analysis, Logstash, and the ioaz Logstash output plugin.
IBM Common Data Provider for z Systems log files

The following log files provide information about the IBM Common Data Provider for z Systems:
Data Streamer log files
Logging information (and trace information, if trace is enabled) is sent to the STDOUT data set on the HBODS001 job. Some Data Streamer messages are also written to the console.
z/OS Log Forwarder log files
Logging information (and trace information, if trace is enabled) is sent to the STDERR data set on the GLAPROC job. Significant z/OS Log Forwarder messages are also written to the console.
System Data Engine log files
Logging information is sent to the HBOOUT data set on the HBOSMF job. The following information is also included if you specify in the logging configuration that this information must be sent:
• A copy of the input stream is included with the information that is sent to the HBOOUT data set on the HBOSMF job.
• A report about the records that are processed for each processing interval is sent to the HBODUMP data set on the HBOSMF job.
Log Analysis log files

The following log files, which are located on the Log Analysis server, provide information about the processing of z/OS log data:

LA_INSTALL_DIR/logs/GenericReceiver.log
Contains information about the ingestion of log records and Insight Pack processing.

LA_INSTALL_DIR/logs/UnityApplication.log
Contains information about searches.
Logstash log file

The log file is LOGSTASH_INSTALL_DIR/logs/logstash-ioaz.log.
© Copyright IBM Corp. 2014, 2016 77
For information about using the following options when you start Logstash, see the Logstash documentation:
• --debug
• --debug-config
ioaz Logstash output plugin log file

The log file is ioaz-logstash.n.log, where n is a number from 0 to 19, where 0 indicates the most recent file, and 19 indicates the oldest file. The log file location is specified by the user at installation time. The location is the value of the log_path option in the Logstash configuration file.
The Logstash configuration file also contains a log_level option that you can use to gather more information. For more information about the configuration options, see "Configuration options for the ioaz Logstash output plugin" on page 85.
Enabling tracing for the ioaz Logstash output plugin

For the ioaz Logstash output plugin, you can enable tracing by using the log_level option in the Logstash configuration file.
About this task
The Logstash configuration file is LOGSTASH_INSTALL_DIR/config/logstash-ioaz.conf. The value of the log_level option in the Logstash configuration file is applied each time that Logstash is started.
For more information about the configuration options, see "Configuration options for the ioaz Logstash output plugin" on page 85.
Procedure
To enable tracing for the ioaz Logstash output plugin, complete the following steps:
1. Edit the Logstash configuration file, and change the value of the log_level option to debug or trace. A value of debug results in limited additional information, and a value of trace results in more detailed information.
2. Restart the ioaz Logstash output plugin by using the LOGSTASH_INSTALL_DIR/bin/logstash_util.sh script.
What to do next
When you no longer need the trace settings, change the value of the log_level option to info (the default value).
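For reference, the edited output section of logstash-ioaz.conf might look like the following sketch. Only the plugin name and the log_level option are taken from this documentation; the comment placeholder stands in for your existing, unchanged options:

```
output {
  ioaz {
    # ...existing ioaz options unchanged...
    log_level => "trace"    # was "info"; use "debug" for less detailed tracing
  }
}
```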
APPLID values for CICS Transaction Server might not be correct in Log Analysis user interface

For CICS Transaction Server for z/OS, the application identifier (APPLID) values might not be correct in the user interface of IBM Operations Analytics - Log Analysis.
Symptom
APPLID values for CICS Transaction Server for z/OS are expected to be provided in grid views and patterns in the Log Analysis user interface. However, if the APPLID value is not present in the CICS Transaction Server for z/OS message text, the first word of the message text is incorrectly used as the APPLID.
Cause
CICS Transaction Server for z/OS typically includes the APPLID as the first word of the message. However, when CICS Transaction Server for z/OS messages do not include the APPLID as the first word in the message, the z/OS SYSLOG Insight Pack incorrectly assumes that the first word of the message is an APPLID.
Solution
No workaround is available.
DB2 or MQ command prefix values might not be correct in Log Analysis user interface

For DB2 for z/OS and MQ for z/OS, the command prefix values might not be correct in the user interface of IBM Operations Analytics - Log Analysis.
Symptom
Command prefix values for DB2 for z/OS and MQ for z/OS are expected to be provided in grid views and patterns in the Log Analysis user interface. However, if the command prefix value is not present in the DB2 for z/OS or MQ for z/OS message text, the first word of the message text is incorrectly used as the command prefix.
Cause
DB2 for z/OS and MQ for z/OS typically include the command prefix as the first word of the message. However, when DB2 for z/OS or MQ for z/OS messages do not include the command prefix as the first word in the message, the z/OS SYSLOG Insight Pack incorrectly assumes that the first word of the message is a command prefix.
Solution
No workaround is available.
Log record skipped and not available in Log Analysis

Individual log records are not retained by the IBM Operations Analytics - Log Analysis server.
Symptom
If the Log Analysis server is shut down, the server might not retain the last transmitted log record for each data source.
Cause
When the Log Analysis server shuts down, the log records that the server is processing might be lost because the server does not cache in-process log records.
Solution
No workaround is available.
z/OS log data is missing in Log Analysis search results

The IBM Operations Analytics - Log Analysis search results do not include the expected z/OS log data.
If log data that is issued in a z/OS logical partition (LPAR) is not displayed in IBM Operations Analytics - Log Analysis, the following steps can help you determine possible causes.
If NetView or SMF data is not displayed: If NetView for z/OS messages or SMF data are not displayed, also see the IBM Common Data Provider for z Systems V1.1.0 documentation.
Step 1: Verify that IBM Common Data Provider for z Systems is running

Verify that IBM Common Data Provider for z Systems is running. If it is running, determine whether it logged any error or warning messages.
For more information about any error or warning messages that you find, see the IBM Common Data Provider for z Systems V1.1.0 documentation.
Step 2: Check the logstash-ioaz.log and ioaz-logstash.n.log files for error or warning messages

If no IBM Common Data Provider for z Systems error or warning messages are logged, review the LOGSTASH_INSTALL_DIR/logs/logstash-ioaz.log and LOGSTASH_INSTALL_DIR/logs/ioaz-logstash.n.log files on the Logstash server, where n represents a number from 0 to 19, with 0 indicating the most recent file and 19 indicating the oldest file.
Look for messages with severity ERROR or WARN.
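A quick way to scan these files is with grep. In the following sketch, the LOGSTASH_INSTALL_DIR default is an illustrative assumption; point it at your actual installation path:

```shell
# LOGSTASH_INSTALL_DIR is a placeholder; set it to your Logstash installation path.
LOGSTASH_INSTALL_DIR=${LOGSTASH_INSTALL_DIR:-/opt/logstash}
# -h omits file-name prefixes; -s silences errors for rotated logs that do not exist yet.
grep -hsE 'ERROR|WARN' \
    "$LOGSTASH_INSTALL_DIR"/logs/logstash-ioaz.log \
    "$LOGSTASH_INSTALL_DIR"/logs/ioaz-logstash.*.log || true
```

The same pattern works for the GenericReceiver.log check in the next step.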
Step 3: If no Logstash or ioaz Logstash output plugin error or warning messages are logged, check the GenericReceiver.log file for error or warning messages

If no Logstash or ioaz Logstash output plugin error or warning messages are logged, review the LA_INSTALL_DIR/logs/GenericReceiver.log file on the IBM Operations Analytics - Log Analysis server.
Look for messages with severity ERROR or WARN.
For more information about any error or warning messages that you find, see Troubleshooting in the Log Analysis documentation.
Step 4: Check the GenericReceiver.log file for ingestion status
The LA_INSTALL_DIR/logs/GenericReceiver.log file also specifies the number of log records that are processed in each batch of log data, as shown in the following example:

03/06/14 15:58:25:660 EST [Default Executor-thread-4] INFO - DataCollectorRestServlet : Batch of Size 9 processed and encountered 0 failures

In this example, nine log records were successfully ingested, and no log records failed to be ingested.

You can determine the data source name by looking for a data source ingestion record before the batch status record in the GenericReceiver.log. In the following example, the data source name is my_sysprint:

03/06/14 15:58:25:650 EST [Default Executor-thread-4] INFO - UnityFlowController : Adding data source under ingestion, logsource = my_sysprint threadId = 96
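When a log contains many batch status lines, you can total the record and failure counts rather than reading each line. The following awk sketch parses the "Batch of Size" message format shown above; to use it against a real log, replace the sample here-document with the path LA_INSTALL_DIR/logs/GenericReceiver.log:

```shell
# Sum the ingested-record and failure counts from "Batch of Size" status lines
# (message format taken from the example above).
awk '/Batch of Size/ {
         for (i = 1; i < NF; i++) if ($i == "Size") recs += $(i + 1)
         fails += $(NF - 1)              # the number just before "failures"
     }
     END { printf "records=%d failures=%d\n", recs, fails }' <<'EOF'
03/06/14 15:58:25:660 EST [Default Executor-thread-4] INFO - DataCollectorRestServlet : Batch of Size 9 processed and encountered 0 failures
EOF
```

For the sample input, this prints records=9 failures=0, matching the interpretation given above.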
Determine whether the status information in the GenericReceiver.log indicates one of the following three situations:

All batch status records for a data source have a size value of 0
Log data is being ingested into IBM Operations Analytics - Log Analysis, but one of the following issues is preventing any log records from being identified:
• The data source type that is specified for the data source that is configured in IBM Operations Analytics - Log Analysis is incorrect.
Examples:
– The data source type is zOS-CICS-MSGUSR, but the log data is being sent from an EYULOG data set. Therefore, the data source type must be specified as zOS-CICS-EYULOG.
– The data source type is zOS-WAS-SYSPRINT, and the log data is being sent from a SYSPRINT data set, but the SYSPRINT data set contains log data in the distributed format. Therefore, the data source type must be specified as WASSystemOut.
• Although the data source type is correct, the log data does not contain any records in the format that is expected by the data source type.
Examples:
– The SYSPRINT data contains only WebSphere Application Server internal trace records. It does not contain records that are produced by the Java RAS component. For SYSPRINT, only records that are produced by the Java RAS component are ingested.
– The SYSPRINT data contains only unstructured data, such as data that is produced by System.out.println() calls from Java code. For SYSPRINT, only records that are produced by the Java RAS component are ingested.
– The SYSOUT data contains Java garbage collector trace data. Java garbage collector data is not supported by the zOS-WAS-SYSOUT data source type.
Batch status records are present for the data source, the batch size is nonzero, and the number of failures is zero
Log records are being ingested successfully. Determine whether you have one of the following situations:
• The search criteria are incorrect or not broad enough to include the expected log data.
Example: If the search time filter is set from 2:00 PM until 2:05 PM, a record that was logged at 2:05:01 PM is not flagged.
Verify the search filters. You might want to start with a broad search, and refine it as needed.
• The time zone information is incorrectly configured in the IBM Common Data Provider for z Systems. This issue can cause the time stamps of the ingested records to be 1 - 12 hours earlier or later than they should be.
• If only the most recent log record is missing, the record might be held in buffer by IBM Operations Analytics - Log Analysis. IBM Operations Analytics - Log Analysis cannot confirm whether a log record is complete until the next log record is received for that data source. Therefore, for each data source, the last log record that is sent to IBM Operations Analytics - Log Analysis is typically held (and not ingested) until the next log record is received.
No batch status records exist for the data source
Determine whether you have one of the following situations:
• IBM Common Data Provider for z Systems and Logstash are started, but no log records are generated for the data source. No log records are ingested because no log data exists to ingest.
• IBM Common Data Provider for z Systems does not have the appropriate access to files or directories.
Example: The user ID might not have read access to one of the following items:
– A z/OS UNIX log file
– The High Performance Extensible Logging (HPEL) log or trace directory
Change the file permissions to give IBM Common Data Provider for z Systems the appropriate access.
• Because the IBM Common Data Provider for z Systems is incorrectly configured, it cannot find the log data.
After upgrade, interval anomaly data is not visible in Log Analysis user interface

After you upgrade to IBM Operations Analytics for z Systems Version 3.1.0, you do not see any information about interval anomalies on the Problem Insights page of the Log Analysis user interface. For example, when you look at the data for a sysplex, you do not see the new Interval Score column in the table, and you do not see a bar chart for each system within the selected sysplex.
Cause
The most probable cause is that the IBM zAware data gatherer is not configured. If the data gatherer is configured, the most probable cause is that the browser cache needs to be cleared.
Solution
Complete the following steps:
1. Verify that the IBM zAware data gatherer is configured. See "Verification of the configuration" on page 58.
2. Complete one of the following steps:
• If the data gatherer is not configured, configure it. See "Configuring the IBM zAware data gatherer" on page 57.
• If the data gatherer is configured, clear the browser cache.
Reference
Reference information is available for configuration options for the ioaz Logstash output plugin, data source types, dashboards, sample searches, SMF type 80-related records that the System Data Engine component of IBM Common Data Provider for z Systems creates, and messages.
Configuration options for the ioaz Logstash output plugin

The ioaz Logstash output plugin, which is provided by IBM Operations Analytics for z Systems, forwards event data from the IBM Common Data Provider for z Systems to the IBM Operations Analytics - Log Analysis server. This reference lists the configuration options that can be set for the plugin in the Logstash configuration file, which is LOGSTASH_INSTALL_DIR/config/logstash-ioaz.conf. You can update this file by using a text editor.
disk_cache_path
The file system path to the directory where data is temporarily held. The ioaz Logstash output plugin writes data to this directory before transmission. The available disk space under the path must be large enough to store bursts of data that are not immediately handled by the Log Analysis server.
Input type: String
Required value? Yes
Default value: None
idle_flush_time
The maximum time (in seconds) between successive transmissions for a data source.
Input type: Number
Required value? No
Default value: 5
keystore_path
The path to the Log Analysis keystore that contains the key for decrypting the Log Analysis user password.
Input type: String
Required value? This value is required only if the value of the password option is encrypted.
Default value: None
log_level
The level of logging information. The following values are valid:
• severe
• warning
• info
• event
• debug
• trace
The value of the log_level option is applied each time that Logstash is started.
Input type: String
Required value? No
Default value: info
log_path
The path to the directory that is used for logging information from the ioaz Logstash output plugin. A maximum of 200 MB of log files can be written to this directory.
Input type: String
Required value? No
Default value: /tmp/ioaz
password
The Log Analysis user password. The password can be cleartext or encrypted. If the password is encrypted, the value of the keystore_path option must also be specified.
Use the unity_securityUtility.sh command on the Log Analysis server to encrypt the password.
Input type: String
Required value? No
Default value: unityadmin
port
The port on which the ioaz Logstash output plugin listens for data from the IBM Common Data Provider for z Systems.
Input type: String
Required value? No
Default value: 8080
trust_all_certificates
A true or false indication of whether the ioaz Logstash output plugin should trust all security certificates when it sends data to the IBM Operations Analytics - Log Analysis server.
A value of true directs the plugin to trust all security certificates that are provided by the Log Analysis server. A value of false directs the plugin to use only a specific security certificate.
The default value is true because the default behavior is to trust all security certificates.
Input type: Boolean
Required value? No
Default value: true
url
The Representational State Transfer (REST) endpoint for the Log Analysis ingestion REST API.
Input type: String
Required value? Yes
Default value: None
user
The Log Analysis user name.
Input type: String
Required value? No
Default value: unityadmin
WebSphere Application Server for z/OS data source types

The WebSphere Application Server for z/OS Insight Pack includes support for ingesting, and performing metadata searches against, the following types of data sources: zOS-WAS-SYSOUT, zOS-WAS-SYSPRINT, and zOS-WAS-HPEL.
zOS-WAS-SYSOUT
WebSphere Application Server for z/OS SYSOUT job log data
zOS-WAS-SYSPRINT
WebSphere Application Server for z/OS SYSPRINT job log data
zOS-WAS-HPEL
WebSphere Application Server for z/OS High Performance Extensible Logging (HPEL) data
Table 14 on page 88 lists the configuration artifacts that are provided with the Insight Pack for each type of WebSphere Application Server for z/OS data source.
Table 14. Configuration artifacts that are provided with the WebSphere Application Server for z/OS Insight Pack
Data source type Splitter Annotator
zOS-WAS-SYSOUT zOS-WAS-SYSOUT-Split zOS-WAS-SYSOUT-Annotate
zOS-WAS-SYSPRINT zOS-WAS-SYSPRINT-Split zOS-WAS-SYSPRINT-Annotate
zOS-WAS-HPEL zOS-WAS-HPEL-Split zOS-WAS-HPEL-Annotate
zOS-WAS-SYSOUT data source type

For WebSphere Application Server for z/OS data sources of type zOS-WAS-SYSOUT, the log file splitter zOS-WAS-SYSOUT-Split breaks up the data into log records. The log record annotator zOS-WAS-SYSOUT-Annotate annotates the log records.
For data sources of type zOS-WAS-SYSOUT, SYSOUT contains the WebSphere Application Server for z/OS log stream. IBM Operations Analytics - Log Analysis ingests all log records in SYSOUT.
File format
The following output is sample output from the error log. The line numbers are added for illustrative purposes and are not in the actual output.

1| BossLog: { 0017} 2008/10/01 15:58:25.557 03 SYSTEM=SY1 CELL=BBOCELL
2| NODE=BBONODE CLUSTER=BBOC001 SERVER=BBOS001 PID=0X0100003C
3| TID=0X24F82920 00000000 t=6C8B58 c=3.C5D02 ./bboiroot.cpp+1195
4| tag=classlvl... BBOU0012W The function IRootHomeImpl::findHome(const char*)
5| +1195 received CORBA system exception CORBA::INTERNAL. Error code is
6| C9C21200.
Table 15 describes the fields in the sample output. Some of these fields are not annotated. For information about which fields are annotated, see "zOS-WAS-SYSOUT-Annotate annotations" on page 89.
Tip: If WebSphere Application Server for z/OS is configured to use distributed logging, SYSOUT is either empty or contains Java garbage collection data. Java garbage collection data is not processed by the WebSphere Application Server for z/OS Insight Pack.
Table 15. Fields in sample output from error log for zOS-WAS-SYSOUT data source
Line Component value Value description
1 0017 Entry number
1 2008/10/01 15:58:25.557 03 Date, time stamp, and two-digit record version number
1 SY1 System name
1 BBOCELL Cell name
2 BBONODE Node name
2 BBOC001 Cluster name
2 BBOS001 Server name
2 0X0100003C Process ID
3 0X24F82920 00000000 Thread ID
3 6C8B58 Thread address
3 3.C5D02 Request correlation information
Table 15. Fields in sample output from error log for zOS-WAS-SYSOUT data source (continued)
Line Component value Value description
3 ./bboiroot.cpp+1195 File name and line number
4 classlvl Message tag that is defined in the classification file
4 BBOU0012W Log message number
4 The function IRootHomeImpl::findHome(const char*) Log message
5 - 6 +1195 received CORBA system exception CORBA::INTERNAL. Error code is C9C21200. Continuation lines of the CERR job message
zOS-WAS-SYSOUT-Split log record splitter
To start a new record, the zOS-WAS-SYSOUT-Split splitter uses the presence of the string "BossLog: ".

Records that are generated by calls to the System.out.println() or System.err.println() methods are ingested, but are not treated as separate log records.
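The splitting rule can be approximated as follows. This is an illustrative sketch, not the Insight Pack implementation: every line that contains "BossLog: " starts a new record, so counting those markers counts the records. The sample input reuses the error log excerpt from above:

```shell
# Illustrative approximation of the zOS-WAS-SYSOUT-Split rule: a "BossLog: "
# marker starts a record; lines without it are continuations of the
# current record. Counting markers therefore counts log records.
awk '/BossLog: / { n++ } END { printf "records=%d\n", n }' <<'EOF'
BossLog: { 0017} 2008/10/01 15:58:25.557 03 SYSTEM=SY1 CELL=BBOCELL
NODE=BBONODE CLUSTER=BBOC001 SERVER=BBOS001 PID=0X0100003C
BossLog: { 0018} 2008/10/01 15:58:26.001 03 SYSTEM=SY1 CELL=BBOCELL
EOF
```

For this input, the three physical lines form two log records, so the script prints records=2.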
zOS-WAS-SYSOUT-Annotate annotations

This reference describes the fields that are annotated by zOS-WAS-SYSOUT-Annotate.
Table 16 includes the following information about each field:
• The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
• The description of what the annotation represents
• The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 16. Log record fields that are annotated by zOS-WAS-SYSOUT-Annotate
Field Description Data type Sortable Filterable Searchable
application The application name that is populated by the group data source field
Text ✓
datasourceHostname The host name that is specified in the data source
Text ✓ ✓
entryNumber Entry number Long ✓
exceptionClassName If this record was generated due to an exception, this name is the class name in the top stack trace entry.
Text ✓ ✓
exceptionFileName If this record was generated due to an exception, this name is the file name in the top stack trace entry.
Text ✓
exceptionLineNumber If this record was generated due to an exception, this number is the line number in the top stack trace entry.
Long
Table 16. Log record fields that are annotated by zOS-WAS-SYSOUT-Annotate (continued)
Field Description Data type Sortable Filterable Searchable
exceptionMethodName If this record was generated due to an exception, this name is the method name in the top stack trace entry.
Text ✓ ✓
exceptionPackageName If this record was generated due to an exception, this name is the package name in the top stack trace entry.
Text ✓ ✓
hostname The host name that is populated by the group data source field
Text ✓ ✓ ✓
javaException The first Java exception name that matches the following pattern:
*.*Exception
Text ✓ ✓
logRecord The log record Text ✓
message The log message text. If a value is detected for msgClassifier, message contains the msgClassifier also.
Text ✓
messageTag The message tag that is defined in the classification file
Text ✓ ✓
middleware The middleware name that is populated by the group data source field
Text ✓
msgClassifier The log message number Text ✓ ✓
processID The process identifier Text ✓ ✓
service The service name that is populated by the group data source field
Text ✓
SysplexName The sysplex name Text ✓ ✓
SystemName The system name Text ✓ ✓ ✓
threadAddress The thread address Text ✓ ✓
threadID An eight-character hexadecimal thread identifier.
Text ✓ ✓
timestamp The time stamp of the log record Date ✓ ✓ ✓
zOS-WAS-SYSPRINT data source type

For WebSphere Application Server for z/OS data sources of type zOS-WAS-SYSPRINT, the log file splitter zOS-WAS-SYSPRINT-Split breaks up the data into log records. The log record annotator zOS-WAS-SYSPRINT-Annotate annotates the log records.

For data sources of type zOS-WAS-SYSPRINT, SYSPRINT contains many trace streams, including Java trace, System.out trace, and native trace. IBM Operations Analytics - Log Analysis ingests Java trace log records that are reported by the Java Reliability and Serviceability (RAS) component. Trace records from other components and trace streams are not ingested.
The following items are ingested:
• Any message that is logged by a web application that is using Java core logging facilities
• Any message that is logged by any log utility (such as log4j) that is built on the Java core logging facilities
Trace records that are generated by calls to the System.out.println() or System.err.println() methods are ingested, but are not treated as separate log records.
File format
The following output is sample output from a trace log that is ingested. The line numbers are added for illustrative purposes and are not in the actual output.

1| Trace: 2009/07/14 17:26:19.577 02 t=6C8B58 c=UNK key=P8 tag=jperf (13007004)
2| SourceId: PingServlet
3| ExtendedMessage: BBOO0222I: Audit Message from PingServlet
Table 17 describes the fields in the sample output. Some of these fields are not annotated. For information about which fields are annotated, see "zOS-WAS-SYSPRINT-Annotate annotations."
Tip: If WebSphere Application Server for z/OS is configured to use distributed logging, SYSPRINT contains log data in distributed format. Therefore, to annotate that log data, you must use the WebSphere Application Server Insight Pack, which is preinstalled as part of the IBM Operations Analytics - Log Analysis installation.
Table 17. Fields in sample output from trace log for zOS-WAS-SYSPRINT data source
Line Component value Value description
1 2009/07/14 17:26:19.577 02 Date, time stamp, and two-digit record version number
1 6C8B58 Thread address
1 UNK Request correlation information
1 P8 System protection key
1 jperf Message tag from classification file
1 13007004 Trace specific value for this trace point
2 PingServlet Source ID
3 BBOO0222I: Audit Message from PingServlet Extended Message
zOS-WAS-SYSPRINT-Split log record splitter
To start a new record, the zOS-WAS-SYSPRINT-Split splitter uses the presence of the string "Trace: ".

Records that are generated by calls to the System.out.println() or System.err.println() methods are ingested, but are not treated as separate log records.
zOS-WAS-SYSPRINT-Annotate annotations

This reference describes the fields that are annotated by zOS-WAS-SYSPRINT-Annotate.

Table 18 on page 92 includes the following information about each field:
• The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
• The description of what the annotation represents
• The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 18. Log record fields that are annotated by zOS-WAS-SYSPRINT-Annotate
Field Description Data type Sortable Filterable Searchable
application The application name that is populated by the group data source field
Text ✓
datasourceHostname The host name that is specified in the data source
Text ✓ ✓
exceptionClassName If this record was generated due to an exception, this name is the class name in the top stack trace entry.
Text ✓ ✓
exceptionFileName If this record was generated due to an exception, this name is the file name in the top stack trace entry.
Text ✓
exceptionLineNumber If this record was generated due to an exception, this number is the line number in the top stack trace entry.
Long
exceptionMethodName If this record was generated due to an exception, this name is the method name in the top stack trace entry.
Text ✓ ✓
exceptionPackageName If this record was generated due to an exception, this name is the package name in the top stack trace entry.
Text ✓ ✓
hostname The host name that is populated by the group data source field
Text ✓ ✓ ✓
javaException The first Java exception name that matches the following pattern:
*.*Exception
Text ✓ ✓
logRecord The log record Text ✓
message The extended message. If a value is detected for msgClassifier, message contains the msgClassifier also.
Text ✓
messageTag The message tag that is defined in the classification file
Text ✓ ✓
middleware The middleware name that is populated by the group data source field
Text ✓
msgClassifier The extended message number Text ✓ ✓
service The service name that is populated by the group data source field
Text ✓
sourceID The source identifier Text ✓ ✓
SysplexName The sysplex name Text ✓ ✓
SystemName The system name Text ✓ ✓ ✓
threadAddress The hexadecimal thread address Text ✓ ✓
timestamp The time stamp of the log record Date ✓ ✓ ✓
zOS-WAS-HPEL data source type

For WebSphere Application Server for z/OS data sources of type zOS-WAS-HPEL, the log file splitter zOS-WAS-HPEL-Split breaks up the data into log records. The log record annotator zOS-WAS-HPEL-Annotate annotates the log records.
File format
For data sources of type zOS-WAS-HPEL, the IBM Common Data Provider for z Systems retrieves data from the WebSphere Application Server for z/OS High Performance Extensible Logging (HPEL) API. The IBM Common Data Provider for z Systems converts the data to a custom comma-separated value (CSV) format and sends it to IBM Operations Analytics - Log Analysis.
The following format illustrates a sample WebSphere Application Server for z/OS message:

"08/03/15 21:51:28:935 UTC","00000001","com.ibm.ws390.orb.CommonBridge","printProperties","BBOJ0077I",,,,,,,"AUDIT","com.ibm.ws390.orb.CommonBridge","10",,"STC00968","BBOSKDM","BBOJ0077I: osgi.framework = file:/u/WebSphere/V8R5/bbocell/bbonode/AppServer/plugins/org.eclipse.osgi_.jar",
Table 19 describes the fields in the sample output. Some of these fields are not annotated. For information about which fields are annotated, see "zOS-WAS-HPEL-Annotate annotations" on page 94.
Table 19. Fields in sample output for zOS-WAS-HPEL data source
Component value Value description
08/03/15 21:51:28:935 UTC Time stamp of the log record
00000001 The ID of the thread on which this request was logged.
This ID is based on the java.util.logging representation of the thread ID and is not equivalent to the operating system representation of the thread ID.
com.ibm.ws390.orb.CommonBridge The name of the Java class that made the call to the logger. This name might be the name of the source class that is supplied in the call to the logger, or it might be an inferred source class name. The name might not be accurate.
printProperties The name of the method that made the call to the logger. This name might be the name of the source method that is supplied in the call to the logger, or it might be an inferred source method name. The name might not be accurate.
BBOJ0077I The message ID of the log record message. This message ID is the same regardless of the locale in which the message is rendered. For non-message records, and for other messages that do not begin with a message ID, this field is empty.
AUDIT The message level, which is an indication of the severity of the message
com.ibm.ws390.orb.CommonBridge The name of the logger that created this record
10 The sequence index of the message that is generated by the logger
STC00968 The ID of the JES job that created this record
BBOSKDM The name of the JES job that created this record
BBOJ0077I: osgi.framework = file:/u/WebSphere/V8R5/bbocell/bbonode/AppServer/plugins/org.eclipse.osgi_.jar
The formatted version of the log record, with values substituted for any placeholder parameters
zOS-WAS-HPEL-Split log record splitter
The zOS-WAS-HPEL-Split splitter considers each line of the log data to be a record. However, if a line break is contained within a quoted field, the line break is considered to be part of the field rather than an indication of the end of a record.
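As an illustration only (not the product's code), a standard CSV reader reproduces both behaviors: it separates the quoted fields of the custom CSV format, and it keeps a line break inside a quoted field as part of that field. In this Python sketch, the shortened message text is a placeholder, not the full sample value:

```python
import csv
import io

# A shortened version of the sample record above; empty fields appear
# as consecutive commas, and the record ends with a trailing comma.
sample = ('"08/03/15 21:51:28:935 UTC","00000001",'
          '"com.ibm.ws390.orb.CommonBridge","printProperties",'
          '"BBOJ0077I",,,,,,,"AUDIT","com.ibm.ws390.orb.CommonBridge",'
          '"10",,"STC00968","BBOSKDM","BBOJ0077I: osgi.framework = ...",\n')

fields = next(csv.reader(io.StringIO(sample)))
print(fields[0])    # time stamp field
print(fields[4])    # message ID field
print(fields[16])   # JES job name field

# A line break inside a quoted field does not end the record:
wrapped = '"field one","line one\nline two","field three"\n'
assert next(csv.reader(io.StringIO(wrapped)))[1] == "line one\nline two"
```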
zOS-WAS-HPEL-Annotate annotations

This reference describes the fields that are annotated by zOS-WAS-HPEL-Annotate.
Table 20 includes the following information about each field:
• The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
• The description of what the annotation represents
• The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 20. Log record fields that are annotated by zOS-WAS-HPEL-Annotate
Field Description Data type Sortable Filterable Searchable
application The application name that is populated by the group data source field
Text U
appName The name of the Java Platform, Enterprise Edition (Java EE) application that the log or trace record relates to, if any.
Text U U
className The name of the class that made the call to the logger. This name might be the name of the source class that is supplied in the call to the logger, or it might be an inferred source class name. The name might not be accurate.
Text U U
datasourceHostname The host name that is specified in the data source
Text U U
exceptionClassName If this record was generated due to an exception, this name is the class name in the top stack trace entry.
Text U U
exceptionFileName If this record was generated due to an exception, this name is the file name in the top stack trace entry.
Text U U
exceptionLineNumber If this record was generated due to an exception, this number is the line number in the top stack trace entry.
Long
exceptionMethodName If this record was generated due to an exception, this name is the method name in the top stack trace entry.
Text U U
exceptionPackageName If this record was generated due to an exception, this name is the package name in the top stack trace entry.
Text U U
hostname The host name that is populated by the group data source field
Text U U U
javaException The first Java exception name that matches the following pattern:
*.*Exception
Text U U
jobId The identifier of the Job Entry Subsystem (JES) job that created this record
Text U U
jobName The name of the JES job that created this record
Text U U
level The message level, which is an indication of the severity of the message
Text U U
loggerName The name of the logger that created this record Text U U
logRecord The log record Text U
message The formatted version of the log record, with values substituted for any placeholder parameters. If a value is detected for msgClassifier, message contains the msgClassifier also.
Text U
methodName The name of the method that made the call to the logger. This name might be the name of the source method that is supplied in the call to the logger, or it might be an inferred source method name. This name might not be accurate.
Text U U
middleware The middleware name that is populated by the group data source field
Text U
msgClassifier The message identifier of the log record message. This message ID is the same regardless of the locale in which the message is rendered. For non-message records, and for other messages that do not begin with a message ID, this field is empty.
Text U U
sequence The sequence index of the message as generated by the logger
Long U
service The service name that is populated by the group data source field
Text U
SysplexName The sysplex name Text U U
SystemName The system name Text U U U
threadID The identifier of the thread on which this request was logged. This ID is based on the java.util.logging representation of the thread ID, and is not equivalent to the operating system representation of the thread ID.
Text U U
timestamp The time stamp of the log record Date U U U
traceBlockAll If this record was generated due to an exception, this is the stack trace. The stack trace is computed only for records where a throwable exception is explicitly supplied by the caller.
Text U
z/OS Network data source types

The z/OS Network Insight Pack includes support for ingesting, and performing metadata searches against, the zOS-NetView type of data source.
zOS-NetView
NetView for z/OS message data
Table 21 lists the configuration artifacts that are provided with the Insight Pack for each type of z/OS Network data source.
Table 21. Configuration artifacts that are provided with the z/OS Network Insight Pack
Data source type Splitter Annotator
zOS-NetView zOS-NetView-Split zOS-NetView-Annotate
zOS-NetView data source type

For z/OS Network data sources of type zOS-NetView, the log file splitter zOS-NetView-Split breaks up the data into log records. The log record annotator zOS-NetView-Annotate annotates the log records.
File format
For data sources of type zOS-NetView, the z/OS NetView Message gatherer sends data directly to the IBM Common Data Provider for z Systems by using the NetView program-to-program interface (PPI). The IBM Common Data Provider for z Systems forwards the NetView for z/OS data to IBM Operations Analytics - Log Analysis.
The following format illustrates a sample NetView message as formatted by the z/OS NetView Message gatherer:

06/23/15 19:48:09,NMP190  ,CNM01   ,NETOP1  ,-,BNH365E AUTOTBL MEMBER SPECIFIED IS NOT UNIQUE WITHIN THE LIST OF ACTIVE AUTOMATION TABLES
Table 22 describes the fields in the sample output.
Table 22. Fields in sample output from HFS file for zOS-NetView data source
Component value Value description
06/23/15 19:48:09 Time stamp, which is formatted as MM/DD/YY HH:mm:ss
NMP190 System name, with blank characters added as needed to form the maximum of 8 characters
CNM01 Domain, with blank characters added as needed to form the maximum of 8 characters
NETOP1 Operator ID, with blank characters added as needed to form the maximum of 8 characters
- HDRMTYPE, which is only 1 character
BNH365E Message ID
BNH Message prefix
E Message type
BNH365E AUTOTBL MEMBER SPECIFIED IS NOT UNIQUE WITHIN THE LIST OF ACTIVE AUTOMATION TABLES
Message text
NetView commands can be echoed in the message, and they are also included in the HFS file, as shown in the following example:

06/23/15 19:49:56,NMP190  ,CNM01   ,NETOP1  ,*,LIST DEFAULTS
Table 23 describes the fields in the sample output.
Table 23. Fields in sample output from HFS file with echoed NetView command
Component value Value description
06/23/15 19:49:56 Time stamp
NMP190 System name
CNM01 Domain
NETOP1 Operator ID
* HDRMTYPE
Message ID
Message prefix
Message type
LIST DEFAULTS Message text
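As a hedged illustration of this layout (this sketch is not part of the product), the fields of such a record can be separated in Python. Because the message text itself can contain commas, only the first five commas are treated as field separators, and the blank-padded 8-character fields are trimmed:

```python
import datetime

# The sample NetView message shown above, with the 8-character
# blank-padded fields written out.
record = ("06/23/15 19:48:09,NMP190  ,CNM01   ,NETOP1  ,-,"
          "BNH365E AUTOTBL MEMBER SPECIFIED IS NOT UNIQUE WITHIN "
          "THE LIST OF ACTIVE AUTOMATION TABLES")

# Split at most five times: the message text keeps any commas it contains.
timestamp, system, domain, operator, hdrmtype, text = record.split(",", 5)

# System name, domain, and operator ID are blank-padded to 8 characters.
system, domain, operator = (s.rstrip() for s in (system, domain, operator))

# The time stamp is formatted as MM/DD/YY HH:mm:ss.
when = datetime.datetime.strptime(timestamp, "%m/%d/%y %H:%M:%S")
```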
zOS-NetView-Split log record splitter
The NetView data that is provided by the z/OS NetView Message gatherer is formatted so that the fields in a log record are separated by commas, as shown in the following example:

06/23/15 19:48:09,NMP190  ,CNM55   ,NETOP1  ,-,BNH365E AUTOTBL MEMBER SPECIFIED IS NOT UNIQUE WITHIN THE LIST OF ACTIVE AUTOMATION TABLES
To start a new record, the zOS-NetView-Split splitter searches for the following first five fields in the record:
• Time stamp
• System name
• Domain
• Operator ID
• HDRMTYPE
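The record-start test can be sketched as a regular expression that looks for those five leading fields. This is an illustrative approximation (the pattern name and helper function are mine, not the splitter's actual code):

```python
import re

# Time stamp, then three blank-padded fields of up to 8 characters,
# then the 1-character HDRMTYPE, all comma-separated.
RECORD_START = re.compile(
    r"^\d{2}/\d{2}/\d{2} \d{2}:\d{2}:\d{2},"  # time stamp MM/DD/YY HH:mm:ss
    r"[^,]{1,8},[^,]{1,8},[^,]{1,8},"          # system name, domain, operator ID
    r".,"                                       # HDRMTYPE
)

def split_records(text):
    """Treat a line that does not look like a record start as a
    continuation of the previous record."""
    records = []
    for line in text.splitlines():
        if RECORD_START.match(line) or not records:
            records.append(line)
        else:
            records[-1] += " " + line
    return records
```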
zOS-NetView-Annotate annotations

This reference describes the fields that are annotated by zOS-NetView-Annotate.
Table 24 includes the following information about each field:
• The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
• The description of what the annotation represents
• The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 24. Log record fields that are annotated by zOS-NetView-Annotate
Field Description Data type Sortable Filterable Searchable
datasourceHostname The host name that is specified in the IBM Operations Analytics - Log Analysis data source
Text U U
Domain The NetView domain Text U U U
HDRMTYPE The NetView message type Text U
logRecord The log record Text U
MessageID The message identifier
Also, see “Message IDs.”
Text U U U
MessagePrefix The first 3 characters of the message identifier. If no value is detected for MessageID, MessagePrefix has no value.
Text U U
MessageText The message text. If a value is detected for MessageID, MessageText contains the MessageID also.
Text U
MessageType The 1-character message type that is specified in the MessageID value. Valid values are A, D, E, I, S, U, or W.
If no value is detected for MessageID, or if the MessageID value does not contain a message type, MessageType has no value.
Text U U
OperatorID The NetView operator ID Text U
SubsystemID The identifier of the software product or subsystem that generated the message.
Text U U U
SysplexName The sysplex name Text U U
SystemName The system name Text U U U
timestamp The time stamp of the log record Date U U U
Message IDs
A string is detected as a message ID if it matches one of the following formats:

aaannn
aaannnt
aaaannn
aaaannnt
aaaaannn
aaaaannnt
aaannnn
aaannnnt
aaaannnn
aaaannnnt
aaaaannnn
aaaaannnnt
aaannnnn
aaannnnnt
aaaannnnn
aaaannnnnt
aaaaannnnn
aaaaannnnnt

where:
• a represents an uppercase alphabetic character (A - Z). The string can have 3 to 5 uppercase alphabetic characters, but only the first 3 characters are considered the message prefix.
• n represents a numeric character (0 - 9).
• t represents a type character (A, D, E, I, S, U, or W).
Sometimes, a string that is not a message ID, but matches one of the preceding formats, might show in the MessageID field.
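Taken together, these formats amount to 3 to 5 uppercase letters, followed by 3 to 5 digits, followed by an optional type character. As an illustrative sketch (the regular expression and function name are mine, not the Insight Pack's actual code), the detection and the derived MessagePrefix and MessageType values can be expressed as:

```python
import re

# 3-5 uppercase letters, 3-5 digits, optional type character.
MSGID = re.compile(r"^([A-Z]{3,5})([0-9]{3,5})([ADEISUW]?)$")

def annotate(token):
    """Return (MessagePrefix, MessageType) for a message ID, or None."""
    m = MSGID.match(token)
    if not m:
        return None
    # Only the first 3 alphabetic characters form the message prefix.
    return m.group(1)[:3], m.group(3)
```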
z/OS SMF data source types

The z/OS SMF Insight Pack includes support for ingesting, and performing metadata searches against, the following types of data sources for System Management Facilities (SMF) data: zOS-SMF30, zOS-SMF80, zOS-SMF110_E, zOS-SMF110_S_10, and zOS-SMF120.
zOS-SMF30
Job performance data (based on accounting data) for z/OS software
zOS-SMF80
Data that is produced during Resource Access Control Facility (RACF) processing
zOS-SMF110_E
Monitoring exceptions data for CICS Transaction Server for z/OS
zOS-SMF110_S_10
Global transaction manager statistics data for CICS Transaction Server for z/OS
zOS-SMF120
Performance data for WebSphere Application Server for z/OS from SMF record type 120 subtype 9.
Tip: This data does not include data for the WebSphere Liberty server.
Table 25 lists the configuration artifacts that are provided with the Insight Pack for each type of SMF data source.
Table 25. Configuration artifacts that are provided with the z/OS SMF Insight Pack
Data source type Splitter Annotator
zOS-SMF30 zOS-SMF-Split zOS-SMF30-Annotate
zOS-SMF80 zOS-SMF-Split zOS-SMF80-Annotate
zOS-SMF110_E zOS-SMF-Split zOS-SMF110_E-Annotate
zOS-SMF110_S_10 zOS-SMF-Split zOS-SMF110_S_10-Annotate
zOS-SMF120 zOS-SMF-Split zOS-SMF120-Annotate
File format
For SMF data source types, the SMF data stream stores SMF data in a UNIX System Services file in a proprietary format that is similar to a log file format.
zOS-SMF-Split log record splitter
The zOS-SMF-Split splitter breaks up the SMF data that is sent by the IBM Common Data Provider for z Systems into individual records. The SMF data file is formatted so that each record is contained on a single line.
zOS-SMF30-Annotate annotations

This reference describes the fields that are annotated by zOS-SMF30-Annotate.
Table 26 includes the following information about each field:
• The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
• The description of what the annotation represents
• The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 26. Log record fields that are annotated by zOS-SMF30-Annotate
Field Description Data type Sortable Filterable Searchable
CPU The CPU usage for the monitored task Double U U U
datasourceHostname The host name that is specified in the IBM Operations Analytics - Log Analysis data source
Text U U
IORate The I/O rate for the monitored task Double U U U
JobName The 8-character name of the job on the z/OS system
Text U U U
logRecord The log record Text U
PagingRate The paging rate for the monitored task Double U U U
ProgName The name of the program that is running under the monitored task
Text U U U
RecordType The type of SMF record Text U U U
SysplexName The sysplex name Text U U
SystemID The system identifier Text U U U
SystemName The system name Text U U U
Task The job name for the task that issued the message
Text U
timestamp The time stamp of the log record Date U U U
WorkingSet The working set size for the monitored task
Double U U U
zOS-SMF80-Annotate annotations

This reference describes the fields that are annotated by zOS-SMF80-Annotate.
Table 27 includes the following information about each field:
• The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
• The description of what the annotation represents
• The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
The column titled “Corresponding SMF field” indicates the SMF field name that corresponds to the field name in the annotation.
Table 27. Log record fields that are annotated by zOS-SMF80-Annotate
Field Description Corresponding SMF field Data type Sortable Filterable Searchable
AccessAllow Access authority is allowed SMF80DTA Text
AccessReq Access authority is requested SMF80DTA Text
AccessType Setting that is used in granting access. The following values are possible:
• None
• Owner
• Group
• Other
SMF80DA2 Text
Application Application name that is specified on the RACROUTE request
SMF80DTA Text U U U
AuditDesc Descriptive name of the operation that is audited
SMF80DA2 Text U
AuditName Name of the operation that is audited SMF80DA2 Text U
Auditor AUDITOR attribute (Y/N) SMF80ATH Text
AuditorExec Auditor execute/search audit options SMF80DA2 Text
AuditorRead Auditor read access audit options SMF80DA2 Text
AuditorUserExec User execute/search audit options SMF80DA2 Text
AuditorUserRead User read access audit options SMF80DA2 Text
AuditorUserWrite User write access audit options SMF80DA2 Text
AuditorWrite Auditor write access audit options SMF80DA2 Text
AuthorityFlags Flags that indicate the authority checks that are made for the user who requested the action
SMF80ATH Text U
CHOWNGroupID z/OS UNIX group identifier (GID) input parameter
SMF80DA2 Text
CHOWNUserID z/OS UNIX user identifier (UID) input parameter
SMF80DA2 Text
Class The class entries that are supplied by IBM in the class descriptor table (ICHRRCDX)
SMF80DTA Text U
Command A string that is derived by using the SMF80EVT and SMF80EVQ values
SMF80EVT, SMF80EVQ
Text U
EffectiveGroup User's effective GID setting SMF80DA2 Text
EffectiveUser User's effective UID setting SMF80DA2 Text
Event Short description of the event code and qualifier
SMF80EVT, SMF80EVQ
Text U U U
EventCode Event code SMF80EVT Text U
EventDesc Verbose description of the event code and qualifier
SMF80EVT Text U U U
EventQual Event code qualifier SMF80EVQ Text U
Failed Event code qualifier is nonzero, which indicates a failed request (Y/N)
SMF80EVQ Text
Filename File name of the file that is being checked
SMF80DA2 Text U U U
FileOwnerGroup File owner's GID SMF80DA2 Text
FileOwnerUser File owner's UID SMF80DA2 Text
Generic Generic profile used (Y/N) SMF80DTP Text
GroupExec Group permissions bit: execute SMF80DA2 Text
GroupRead Group permissions bit: read SMF80DA2 Text
GroupWrite Group permissions bit: write SMF80DA2 Text
ISGID Requested file mode: S_ISGID bit SMF80DA2 Text
ISUID Requested file mode: S_ISUID bit SMF80DA2 Text
ISVTX Requested file mode: S_ISVTX bit SMF80DA2 Text
logRecord The log record Set by the data provider
Text U
OtherExec Other permissions bit: execute SMF80DA2 Text
OtherRead Other permissions bit: read SMF80DA2 Text
OtherWrite Other permissions bit: write SMF80DA2 Text
OwnerExec Owner permissions bit: execute SMF80DA2 Text
OwnerRead Owner permissions bit: read SMF80DA2 Text
OwnerWrite Owner permissions bit: write SMF80DA2 Text
Pathname Full path name of the file that is being checked
SMF80DA2 Text U
ProfileName Name of the Resource Access Control Facility (RACF) profile that is used to access the resource
SMF80DTA Text
RealGroup User's real GID setting SMF80DA2 Text
RealUser User's real UID setting SMF80DA2 Text
RecordType Internal record type. The following values are possible:
• SMF80_COMMAND
• SMF80_LOGON
• SMF80_OMVS_RES_1
• SMF80_OMVS_RES_2
• SMF80_OMVS_SEC_1
• SMF80_OMVS_SEC_2
• SMF80_OPERATION
• SMF80_RESOURCE
For information about these values, see the IBM Common Data Provider for z Systems documentation in the IBM Knowledge Center.
Set by the data provider
Text U U U
ResourceName Resource name SMF80DTA Text U U U
SavedGroup User's saved GID setting SMF80DA2 Text
SavedUser User's saved UID setting SMF80DA2 Text
Special SPECIAL attribute (Y/N) SMF80ATH Text
SuperUser z/OS UNIX superuser (Y/N) SMF80AU2 Text
SysplexName The sysplex name Set by the data provider
Text U U
SystemID The system identifier from the SID parameter in the SMFPRMnn member
SMF80SID Text U U U
SystemName The system name Set by the data provider
Text U U U
TermID Terminal ID of the foreground user (zero if not available)
SMF80TRM Text U
timestamp Date that the record was moved to the SMF buffer
SMF80DTE, SMF80TME
Date U U U
UserID Identifier of the user that is associated with this event. The value of JobName is used if the user is not defined to RACF.
SMF80USR Text U U U
zOS-SMF110_E-Annotate annotations

This reference describes the fields that are annotated by zOS-SMF110_E-Annotate.
Table 28 includes the following information about each field:
• The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
• The description of what the annotation represents
• The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 28. Log record fields that are annotated by zOS-SMF110_E-Annotate
Field Description Data type Sortable Filterable Searchable
ApplID The product name Text U U U
CICSTrans The transaction identification
Text U U U
datasourceHostname The host name that is specified in the IBM Operations Analytics - Log Analysis data source
Text U U
ExceptionID The exception ID Text U
ExceptionID2 The extended exception ID Text U
ExceptionNumber The exception sequence number for the task
Text U U
ExceptionType The exception type Text U U
JobName The 8-character name of the job on the z/OS system
Text U U U
logRecord The log record Text U
LU The logical unit on the z/OS system
Text U
NetID The NETID if a network qualified name is received from z/OS Communications Server. For a z/OS Communications Server resource where the network qualified name is not yet received, NETID is eight blanks. In all other cases, this field is null.
Text U
ProgName The name of the currently running program for the user task when the exception condition occurred
Text U U U
RecordType The type of SMF record Text U U U
ResourceID The exception resource identification
Text U U
ResourceType The exception resource type
Text U U
SubsystemID The subsystem identification
Text U U U
SysplexName The sysplex name Text U U
SystemID The system identifier Text U U U
SystemName The system name Text U U U
timestamp The time stamp of the log record
Date U U U
TransNum The transaction identification number
Text U
UserID The user identification at task creation. This identifier can also be the remote user identifier for a task that is created as the result of receiving an ATTACH request across a multiregion operation (MRO) or Advanced Program-to-Program Communication (APPC) link with attach-time security enabled.
Text U U U
zOS-SMF110_S_10-Annotate annotations

This reference describes the fields that are annotated by zOS-SMF110_S_10-Annotate.
Table 29 includes the following information about each field:
• The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
• The description of what the annotation represents
• The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 29. Log record fields that are annotated by zOS-SMF110_S_10-Annotate
Field Description Data type Sortable Filterable Searchable
ApplID The application identifier Text U U U
AtsMxt An indicator of the limit for the number of concurrent tasks
Text U U
datasourceHostname The host name that is specified in the IBM Operations Analytics - Log Analysis data source
Text U U
GmtsMxtReached According to Greenwich mean time (GMT), the time when the task limit (the value of MAXTASKS) is met
Date U
IntervalDuration For a status type (StatsType) of INT, the interval duration, which is represented in the time format HHMMSS
Text
logRecord The log record Text U
MAXTASKS The limit for the number of concurrent tasks
Long
RecordType The type of SMF record Text U U U
StatsArea The status area Text U
StatsType The status type. For example, one of the following types:
• EOD
• INT
• REQ
• RRT
• USS
Text U
SysplexName The sysplex name Text U U
SystemID The system identifier Text U U U
SystemName The system name Text U U U
timestamp The time stamp of the log record
Date U U U
TransCount The number of user and system transactions that are attached
Double U
TransCurrentActiveUser At the present time, the number of active user transactions in the system
Long U U U
TransCurrent_QSec At the present time, the number of seconds that transactions have been queued because the task limit (the value of MAXTASKS) is met
Double U
TransPeakActiveUser The highest number of active user transactions
Long U
TransQueuedUser The number of queued user transactions in the system
Long U
TransTimesAtMAXTASKS The number of times that the task limit (the value of MAXTASKS) was met
Long U U
TransTotalActive For a specified time interval, the number of active user transactions in the system
Long U
TransTotalDelayed For a specified time interval, the number of user transactions that were delayed because the task limit (the value of MAXTASKS) was met
Long U
TransTotal_QSec For a specified time interval, the number of seconds that transactions were queued because the task limit (the value of MAXTASKS) was met
Double U
TransTotalTasks At the time of the last reset, the number of transactions in the system
Double U U
zOS-SMF120-Annotate annotations

This reference describes the fields that are annotated by zOS-SMF120-Annotate.
Table 30 includes the following information about each field:
• The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
• The description of what the annotation represents
• The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
The column titled “Corresponding SMF field” indicates the SMF field name that corresponds to the field name in the annotation.
Table 30. Log record fields that are annotated by zOS-SMF120-Annotate
Field Description Corresponding SMF field Data type Sortable Filterable Searchable
Application The application name SM1209EO Text U U U
ControllerJobname The job name for the controller
SM1209BT Text U U U
DeleteServiceCPUActiveCount The count of samples when the enclave delete CPU service time was non-zero. Time is accumulated by the enclave as reported by the CPUSERVICE parameter of the IWM4EDEL API. A value of 0 indicates that the enclave was not deleted.
SM1209DN count
Long
DispatchCPU The amount of CPU time, in microseconds, that is used by dispatch TCB.
SM1209CI Double U U U
EnclaveCPU The amount of CPU time that was used by the enclave as reported by the CPUTIME parameter of the IWM4EDEL API.
SM1209DH Double
EnclaveServiceDeleteCPU The enclave delete CPU service that is accumulated by the enclave as reported by the CPUSERVICE parameter of the IWM4EDEL API. A value of 0 indicates that the enclave was not deleted.
SM1209DN Double
logRecord The log record Set by the data provider
Text U
RecordType Internal record type. The following values are possible:
• SMF120_REQAPPL, which indicates a WebSphere application record
• SMF120_REQCONT, which indicates a WebSphere controller record
Set by the data provider
Text U U U
RequestCount Request count Set by the data provider
Long U U U
RequestEnclaveCPU The enclave CPU time at the end of the dispatch of this request, as reported by the CPUTIME parameter of the IWMEQTME API. The units are in TOD format.
SM1209DA Double U U U
RequestTime The time that the request was received, or the time that the WebSphere application or controller completed processing of the request response.
SM1209CM, SM1209CQ
Double U U U
RequestType The type of request that was processed. The following values are possible:
• HTTP
• HTTPS
• IIOP
• INTERNAL
• MBEAN
• MDB-A
• MDB-B
• MDB-C
• NOTKNOWN
• OTS
• SIP
• SIPS
• UNKNOWN
SM1209CK Text U U U
SpecialtyCPU The amount of CPU time that was spent on non-standard CPs, such as the z Systems Application Assist Processor (zAAP) and z Systems Integrated Information Processor (zIIP). This value is obtained from the TIMEUSED API.
SM1209CX Double U U U
SpecialtyCPUActiveCount The count of samples when the amount of CPU time that was spent on non-standard CPs, such as the zAAP and zIIP, was non-zero. The CPU utilization value is obtained from the TIMEUSED API.
SM1209CX count
Long
SysplexName The sysplex name Set by the data provider
Text U U
SystemID The system identifier SM120SID Text U U U
SystemName The system name Set by the data provider
Text U U U
timestamp The date that the record was moved to the SMF buffer
SM120DTE, SM120TME
Date U U U
zAAPCPUActiveCount The count of samples when the delete zAAP CPU enclave time was non-zero. A value of 0 indicates that the enclave was not deleted or not normalized. This CPU time is obtained from the ZAAPTIME field in the IWM4EDEL macro.
SM1209DI count
Long
zAAPEligibleCPU The amount of CPU time at the end of the dispatch of this request that is spent on a regular CP that could have been run on a zAAP, but the zAAP was not available. This value is obtained from the ZAAPONCPTIME field in the IWMEQTME macro.
SM1209DC Double U U U
zAAPEnclaveCPUNormalized The enclave zAAP CPU time at the end of the dispatch of this request, as reported by the ZAAPTIME parameter of the IWMEQTME API. This utilization is adjusted by the zAAP normalization factor at the end of the dispatch of this request. The normalization factor is obtained from the ZAAPNFACTOR parameter of the IWMEQTME API.
SM1209DG, SM1209DB
Double
zAAPEnclaveDeleteCPU The delete zAAP CPU enclave. A value of 0 indicates that the enclave was not deleted or not normalized. This value is obtained from the ZAAPTIME field in the IWM4EDEL macro. This value is normalized by the enclave delete zAAP normalization factor as reported by the ZAAPNFACTOR parameter of the IWM4EDEL API.
SM1209DJ, SM1209DI
Double
zAAPEnclaveServiceDeleteCPU The enclave delete zAAP service that is accumulated by the enclave as reported by the ZAAPSERVICE parameter of the IWM4EDEL API. A value of 0 indicates that the enclave was not deleted.
SM1209DM Double
zAAPServiceCPUActiveCount The count of samples when the enclave delete zAAP service time was non-zero. Time is accumulated by the enclave as reported by the ZAAPSERVICE parameter of the IWM4EDEL API. A value of 0 indicates that the enclave was not deleted.
SM1209DM count
Long
zIIPCPUActiveCount The count of samples when the enclave delete zIIP time was non-zero. Time is accumulated by the enclave as reported by the ZIIPTIME parameter of the IWM4EDEL API. A value of 0 indicates that the enclave was not deleted.
SM1209DK count
Long
zIIPEligibleCPUEnclave The eligible zIIP enclave that is on the CPU at the end of the dispatch of this request. This value is obtained from the ZIIPTIME field in the IWMEQTME macro.
SM1209DF Double U U U
zIIPEnclaveCPU The zIIP enclave that is on the CPU at the end of the dispatch of this request. This value is obtained from the ZIIPONCPTIME field in the IWMEQTME macro.
SM1209DD Double
zIIPEnclaveDeleteCPU The enclave delete zIIP time that is accumulated by the enclave as reported by the ZIIPTIME parameter of the IWM4EDEL API. A value of 0 indicates that the enclave was not deleted.
SM1209DK Double
110 Operations Analytics for z Systems: Installing, configuring, using, and troubleshooting
Table 30. Log record fields that are annotated by zOS-SMF120-Annotate (continued)
Field    Description    Corresponding SMF field    Data type    Sortable    Filterable    Searchable
zIIPEnclaveQualityCPU The zIIP Quality Time enclave that was on the CPU at the end of the dispatch of this request. This value is obtained from the ZIIPQUALTIME field in the IWMEQTME macro.
SM1209DE Double
zIIPEnclaveServiceDeleteCPU The enclave delete zIIP service that is accumulated by the enclave as reported by the ZIIPSERVICE parameter of the IWM4EDEL API. A value of 0 indicates that the enclave was not deleted or not normalized.
SM1209DL Double
zIIPServiceCPUActiveCount The count of samples when the enclave delete zIIP service time was non-zero. Time is accumulated by the enclave as reported by the ZIIPSERVICE parameter of the IWM4EDEL API. A value of 0 indicates that the enclave was not deleted or not normalized.
SM1209DL count
Long
z/OS SYSLOG data source types
The z/OS SYSLOG Insight Pack includes support for ingesting, and performing metadata searches against, the following types of data sources: zOS-SYSLOG-Console, zOS-SYSLOG-SDSF, zOS-syslogd, the three variations of zOS-CICS-MSGUSR, the three variations of zOS-CICS-EYULOG, and zOS-Anomaly-Interval.
zOS-SYSLOG-Console
z/OS SYSLOG data that is formatted by the IBM Common Data Provider for z Systems
zOS-SYSLOG-SDSF
z/OS SYSLOG data that is formatted by the System Display and Search Facility (SDSF)
zOS-syslogd
Syslogd data in UNIX System Services
zOS-CICS-MSGUSR, zOS-CICS-MSGUSRDMY, and zOS-CICS-MSGUSRYMD
CICS Transaction Server for z/OS MSGUSR log data.
Use one of the three variations of this data source type, depending on the date format in the time stamp for the data source.
v zOS-CICS-MSGUSR uses the default date format MDY.
v zOS-CICS-MSGUSRDMY uses the format DMY.
v zOS-CICS-MSGUSRYMD uses the format YMD.
zOS-CICS-EYULOG, zOS-CICS-EYULOGDMY, and zOS-CICS-EYULOGYMD
CICS Transaction Server for z/OS EYULOG data.
Use one of the three variations of this data source type, depending on the date format in the time stamp for the data source.
v zOS-CICS-EYULOG uses the default date format MDY.
v zOS-CICS-EYULOGDMY uses the format DMY.
v zOS-CICS-EYULOGYMD uses the format YMD.
zOS-Anomaly-Interval
IBM z Advanced Workload Analysis Reporter (IBM zAware) interval anomaly data
Table 31 lists the configuration artifacts that are provided with the Insight Pack for each type of z/OS SYSLOG data source.
Table 31. Configuration artifacts that are provided with the z/OS SYSLOG Insight Pack
Data source type Splitter Annotator
zOS-SYSLOG-Console zOS-SYSLOG-Console-Split zOS-SYSLOG-Console-Annotate
zOS-SYSLOG-SDSF zOS-SYSLOG-SDSF-Split zOS-SYSLOG-SDSF-Annotate
zOS-syslogd zOS-syslogd-Split zOS-syslogd-Annotate
zOS-CICS-MSGUSR zOS-CICS-MSGUSR-Split zOS-CICS-Annotate
zOS-CICS-MSGUSRDMY zOS-CICS-MSGUSRDMY-Split zOS-CICS-DMY-Annotate
zOS-CICS-MSGUSRYMD zOS-CICS-MSGUSRYMD-Split zOS-CICS-YMD-Annotate
zOS-CICS-EYULOG zOS-CICS-EYULOG-Split zOS-CICS-Annotate
zOS-CICS-EYULOGDMY zOS-CICS-EYULOGDMY-Split zOS-CICS-DMY-Annotate
zOS-CICS-EYULOGYMD zOS-CICS-EYULOGYMD-Split zOS-CICS-YMD-Annotate
zOS-Anomaly-Interval zOS-Anomaly-Split zOS-Anomaly-Interval-Annotate
zOS-SYSLOG-Console data source type
For z/OS SYSLOG data sources of type zOS-SYSLOG-Console, the log file splitter zOS-SYSLOG-Console-Split breaks up the data into log records. The log record annotator zOS-SYSLOG-Console-Annotate annotates the log records.
File format
For data sources of type zOS-SYSLOG-Console, the data is converted to a custom comma-separated value (CSV) format.

The following sample illustrates the data in CSV format:
NE,001E,15203 08.55.11.340 -0400,TVT7007,STC00722,INSTREAM,00000000000000000000000000000000,00000290,TSO ," LOGON"
Table 32 describes the fields in the sample output. Some of these fields are not annotated. For information about which fields are annotated, see "zOS-SYSLOG-Console-Annotate annotations" on page 113.
Table 32. Fields in sample output from SYSLOG for zOS-SYSLOG-Console data source
Component value Value description
NE 2-character indicator that specifies whether the message is a single-line message (NC or NE) or a multi-line message (MC)
001E Address space identifier
Table 32. Fields in sample output from SYSLOG for zOS-SYSLOG-Console datasource (continued)
Component value Value description
15203 08.55.11.340 -0400 Time stamp of the log record
TVT7007 System name
STC00722 Job identifier for the task that issued the message
INSTREAM Console name
00000000000000000000000000000000 Route codes
00000290 User exit flags
TSO 8-character name of the job on the z/OS system
LOGON Message text
zOS-SYSLOG-Console-Split log record splitter
The zOS-SYSLOG-Console-Split splitter considers each line of the log data to be a record. However, if a line break is contained within a quoted field, the line break is considered to be part of the field rather than an indication of the end of a record.
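Because a quoted field in this custom CSV format can contain embedded commas (and, per the splitter rule, even line breaks), a standard CSV reader handles such records correctly. The following Python sketch is illustrative only and is not part of the product; the field names are taken from Table 32 purely for readability:

```python
import csv
import io

# The sample zOS-SYSLOG-Console record from above, in the custom CSV format.
record = ('NE,001E,15203 08.55.11.340 -0400,TVT7007,STC00722,INSTREAM,'
          '00000000000000000000000000000000,00000290,TSO     ," LOGON"')

# csv.reader treats commas and line breaks inside a quoted field as part
# of the field, which matches the splitter behavior described above.
fields = next(csv.reader(io.StringIO(record)))

# Hypothetical field names, based on Table 32, used here only for clarity.
names = ["type", "asid", "timestamp", "system", "task", "console",
         "route_codes", "user_exit_flags", "job_name", "message_text"]
parsed = dict(zip(names, (f.strip() for f in fields)))
print(parsed["system"])        # TVT7007
print(parsed["message_text"])  # LOGON
```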
zOS-SYSLOG-Console-Annotate annotations
This reference describes the fields that are annotated by zOS-SYSLOG-Console-Annotate.
Table 33 includes the following information about each field:
v The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
v The description of what the annotation represents
v The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 33. Log record fields that are annotated by zOS-SYSLOG-Console-Annotate
Field Description Data type Sortable Filterable Searchable
ApplID The application identifier Text U U U
ASID The address space identifier Text U U U
CommandPrefix The command prefix Text U U U
Component The component identifier, which shows the domain or component that issues the message
Text U U U
ConsoleName The console name Text U U U
datasourceHostname The host name that is specified in the IBM Operations Analytics - Log Analysis data source
Text U U
JobName The 8-character name of the job on the z/OS system
Text U U U
logRecord The log record Text U
Table 33. Log record fields that are annotated by zOS-SYSLOG-Console-Annotate (continued)
Field Description Data type Sortable Filterable Searchable
MessageID The message identifier
Also, see “Message IDs.”
Text U U U
MessagePrefix The first 3 characters of the message identifier. If no value is detected for MessageID, MessagePrefix has no value.
Text U U
MessageText The message text. If a value is detected for MessageID, MessageText contains the MessageID also.
Text U
MessageType The one-character message type that is specified in the MessageID value. Valid values are A, I, E, W, D, or S.
If no value is detected for MessageID, or if the MessageID value does not contain a message type, MessageType has no value.
Text U U
RouteCodes The route codes Text U
SubsystemID The identifier of the software product or subsystem that generated the message.
Text U U U
SysplexName The sysplex name Text U U
SystemName The system name Text U U U
Task The job identifier for the task that issued the message
Text U
timestamp The time stamp of the log record Date U U U
UserExitFlags The user exit flags Text U
Message IDs
A string is detected as a message ID if it matches one of the following formats:
aaxxxn
aaxxxnt
aaxxxxn
aaxxxxnt
aaxxxxxn
aaxxxxxnt
aaxxxxxxn
aaxxxxxxnt
$HASPnnn
$HASPnnnn
DFHaann
DFHaannn
DFHaannnn
DFHnn
DFHnnt
DFHnnn
DFHnnnn
EYUaann
EYUaannn
EYUaannnn
EYUnn
EYUnnt
EYUnnn
EYUnnnn
where:
v a represents an uppercase alphabetic character (A - Z).
v n represents a numeric character (0 - 9).
v x represents an uppercase alphabetic character or a numeric character.
v t represents a type character (A, I, E, W, D, or S). If the first 3 characters of the message ID are DFH or EYU, U is also a valid type character.
Sometimes, a string that is not a message ID, but matches one of the preceding formats, might show in the MessageID field.
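The format list above can be approximated with a single regular expression. This Python sketch is illustrative only, not the product's actual detection logic, and it is slightly permissive (it accepts a superset of the listed forms):

```python
import re

# a = A-Z, n = 0-9, x = A-Z or 0-9, t = type character (A, I, E, W, D, S;
# U is also accepted after a DFH or EYU prefix). Approximation only.
MSGID = re.compile(r"""
      \$HASP[0-9]{3,4}$                               # $HASPnnn, $HASPnnnn
    | (?:DFH|EYU)(?:[A-Z]{2})?[0-9]{2,4}[AIEWDSU]?$   # DFH/EYU forms
    | [A-Z]{2}[A-Z0-9]{3,6}[0-9][AIEWDS]?$            # aaxxxn .. aaxxxxxxnt
""", re.VERBOSE)

def looks_like_message_id(token: str) -> bool:
    return MSGID.match(token) is not None

print(looks_like_message_id("IXC467I"))    # True
print(looks_like_message_id("DFHPA1101"))  # True
print(looks_like_message_id("LOGON"))      # False
```

As the text notes, a token that merely matches one of these shapes is not necessarily a real message ID.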
zOS-SYSLOG-SDSF data source type
For z/OS SYSLOG data sources of type zOS-SYSLOG-SDSF, the log file splitter zOS-SYSLOG-SDSF-Split breaks up the data into log records. The log record annotator zOS-SYSLOG-SDSF-Annotate annotates the log records.

For data sources of type zOS-SYSLOG-SDSF, the log data is rendered by the System Display and Search Facility (SDSF) and can be ingested in batch mode by using the IBM Operations Analytics - Log Analysis Data Collector client.
File format
The following output is sample output from the SYSLOG. The line numbers are added for illustrative purposes and are not in the actual output.
1| MR C000000 TVT7007 13234 10:33:19.20 INTERNAL 00000290  IXC467I STOPPING PATH STRUCTURE IXCSTR1 151
2| ER 151 00000290   RSN: SYSPLEX PARTITIONING OF LOCAL SYSTEM
Table 34 describes the fields in the sample output. Some of these fields are not annotated. For information about which fields are annotated, see "zOS-SYSLOG-SDSF-Annotate annotations" on page 116.
Table 34. Fields in sample output from SYSLOG for zOS-SYSLOG-SDSF data source
Line Component value Value description
1 M Record type
1 R An indication of whether the line was generated because of a command
1 C000000 Hexadecimal representation of routing codes 1 - 28
1 TVT7007 System name
1 13234 Julian date
1 10:33:19.20 Time stamp
1 INTERNAL Job identifier for the task that issued the message
1 00000290 Installation exit and message suppression flags
1 IXC467I Message identifier
Table 34. Fields in sample output from SYSLOG for zOS-SYSLOG-SDSF datasource (continued)
Line Component value Value description
1 STOPPING PATH STRUCTURE IXCSTR1 First line of message text
1 151 Multi-line identifier
2 E Record type
2 R An indication of whether the line was generated because of a command
2 151 Multi-line identifier
2 00000290 Installation exit and message suppression flags
2 RSN: SYSPLEX PARTITIONING OF LOCAL SYSTEM Last line of message text
If the message is a DB2 or MQ message, a command prefix might be present, as shown in the following example:
M 8000000 SYSL 13237 14:00:20.23 STC21767 00000294 DSNR031I :D91A...
                                                            |
                                                            DB2 CommandPrefix

If the message is a CICS message, an Application ID might be present, as shown in the following example:
N FFFF000 MV20 14071 20:16:14.27 JOB21691 00000090 DFHPA1101 CMAS51 ...
                                                             |
                                                             CICS ApplID
zOS-SYSLOG-SDSF-Split log record splitter
To start a new record, the zOS-SYSLOG-SDSF-Split splitter uses the time stamp of a line. Records are split at the beginning of the line that contains the time stamp rather than at the beginning of the time stamp itself. The time stamp is a combination of the Julian date, the time, and the time zone values.
Any subsequent lines that do not contain a time stamp are considered to be part of the same record. In the following example, the first line has a time stamp. Because the second and third lines do not have a time stamp, they are considered to be part of the same record. The fourth line has a time stamp and starts a new record.
M 0000000 TVT7007 13314 13:25:15.31          00000290  IEA009I SYMBOLIC DEFINITIONS...
D                          007 00000290  IEASYM00
E                          007 00000290  IEASYM01
N 4000000 TVT7007 13314 13:25:15.32          00000290  IEE252I MEMBER IEASYM00...
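The splitting rule can be sketched as a small function: a line that contains a Julian-date/time stamp starts a new record, and every other line continues the previous one. The time-stamp pattern below is an approximation for illustration, not the product's actual implementation:

```python
import re

# Approximate stamp: a 5-digit Julian date followed by hh:mm:ss.hh.
STAMP = re.compile(r"\b\d{5} \d{2}:\d{2}:\d{2}\.\d{2}\b")

def split_records(lines):
    """Start a new record at each line containing a time stamp; fold
    stamp-less lines into the record that precedes them."""
    records = []
    for line in lines:
        if STAMP.search(line) or not records:
            records.append(line)
        else:
            records[-1] += "\n" + line
    return records

lines = [
    "M 0000000 TVT7007 13314 13:25:15.31          00000290  IEA009I SYMBOLIC DEFINITIONS...",
    "D                          007 00000290  IEASYM00",
    "E                          007 00000290  IEASYM01",
    "N 4000000 TVT7007 13314 13:25:15.32          00000290  IEE252I MEMBER IEASYM00...",
]
print(len(split_records(lines)))  # 2
```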
zOS-SYSLOG-SDSF-Annotate annotations
This reference describes the fields that are annotated by zOS-SYSLOG-SDSF-Annotate.
Table 35 on page 117 includes the following information about each field:
v The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
v The description of what the annotation represents
v The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 35. Log record fields that are annotated by zOS-SYSLOG-SDSF-Annotate
Field    Description    Data type    Sortable    Filterable    Searchable
ApplID The application identifier Text U U U
CommandPrefix The command prefix Text U U U
Component The component identifier, which shows the domain or component that issues the message
Text U U U
ConsoleName The console name Text U U U
datasourceHostname The host name that is specified in the IBM Operations Analytics - Log Analysis data source
Text U U
logRecord The log record Text U
MessageID The message identifier
Also, see “Message IDs.”
Text U U U
MessagePrefix The first 3 characters of the message identifier. If no value is detected for MessageID, MessagePrefix has no value.
Text U U
MessageText The message text. If a value is detected for MessageID, MessageText contains the MessageID also.
Text U
MessageType The one-character message type that is specified in the MessageID value. Valid values are A, I, E, W, D, or S.
If no value is detected for MessageID, or if the MessageID value does not contain a message type, MessageType has no value.
Text U U
RouteCodes Routing codes 1 - 28 Text U
SubsystemID The identifier of the software product or subsystem that generated the message.
Text U U U
SysplexName The sysplex name Text U U
SystemName The system name Text U U U
Task The job identifier for the task that issued the message. If the ConsoleName field contains a value, this field does not contain a value.
Text U
timestamp The time stamp of the log record Date U U U
UserExitFlags The user exit flags Text U
Message IDs
A string is detected as a message ID if it matches one of the following formats:
aaxxxn
aaxxxnt
aaxxxxn
aaxxxxnt
aaxxxxxn
aaxxxxxnt
aaxxxxxxn
aaxxxxxxnt
$HASPnnn
$HASPnnnn
DFHaann
DFHaannn
DFHaannnn
DFHnn
DFHnnt
DFHnnn
DFHnnnn
EYUaann
EYUaannn
EYUaannnn
EYUnn
EYUnnt
EYUnnn
EYUnnnn
where:
v a represents an uppercase alphabetic character (A - Z).
v n represents a numeric character (0 - 9).
v x represents an uppercase alphabetic character or a numeric character.
v t represents a type character (A, I, E, W, D, or S). If the first 3 characters of the message ID are DFH or EYU, U is also a valid type character.
Sometimes, a string that is not a message ID, but matches one of the preceding formats, might show in the MessageID field.
zOS-syslogd data source type
For z/OS SYSLOG data sources of type zOS-syslogd, the log file splitter zOS-syslogd-Split breaks up the data into log records. The log record annotator zOS-syslogd-Annotate annotates the log records.
File format
The following output is sample output from the error log:
Dec 15 14:29:23 TVT7008 sshd[33554458]: debug3: mm_request_send entering: type 25
Table 36 describes the fields in the sample output.
Table 36. Fields in sample output from error log for zOS-syslogd data source
Component value Value description
Dec 15 14:29:23 Time stamp
TVT7008 System name
sshd Application
33554458 Process ID
debug3: mm_request_send entering: type 25 Message
zOS-syslogd-Split log record splitter
To start a new record, the zOS-syslogd-Split splitter uses the time stamp that is shown at the beginning of each line of the log data. Any subsequent lines that do not begin with a time stamp are considered to be part of the same record, although typically each record is only one line. The time stamp is a combination of the month, the day of the month, and the time.

The year is determined by assuming that the record is no more than one year old. Records that are more than one year old are not supported.
In the following example, each of the two lines begins a new record:
Mar 24 07:29:30 localhost syslogd: FSUM1220 syslogd: restart
Mar 24 07:29:30 localhost syslogd: FSUM1232 syslogd: running non-swappable
Based on the preceding example, the following two scenarios illustrate how the year is determined, depending on the present time:

Assume that today is January 2015
The year that is associated with these records is 2014 because the month in the records is March, and in January 2015, March 2015 is in the future.

Assume that today is June 2015
The year that is associated with these records is 2015 because March, the month in the records, is three months earlier than the present time of June 2015.
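The year rule lends itself to a short sketch (illustrative only; it ignores leap-day edge cases): assume the record is at most one year old, so a month and day that would fall in the future must belong to the previous year.

```python
from datetime import datetime

def infer_year(month: int, day: int, now: datetime) -> int:
    """If the record's month/day is in the future relative to 'now',
    the record must be from the previous year."""
    candidate = datetime(now.year, month, day)
    return now.year - 1 if candidate > now else now.year

# March 24 seen in January 2015: March 2015 is in the future, so 2014.
print(infer_year(3, 24, datetime(2015, 1, 10)))  # 2014
# March 24 seen in June 2015: March 2015 is in the past, so 2015.
print(infer_year(3, 24, datetime(2015, 6, 10)))  # 2015
```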
zOS-syslogd-Annotate annotations
This reference describes the fields that are annotated by zOS-syslogd-Annotate.
Table 37 includes the following information about each field:
v The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
v The description of what the annotation represents
v The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 37. Log record fields that are annotated by zOS-syslogd-Annotate
Field    Description    Data type    Sortable    Filterable    Searchable
Application The application identifier Text U U U
datasourceHostname The host name that is specified in the IBM Operations Analytics - Log Analysis data source
Text U U
logRecord The log record Text U
MessageID The message identifier Text U U U
MessagePrefix The first 3 characters of the message identifier. If no value is detected for MessageID, MessagePrefix has no value.
Text U U
MessageText The message text Text U
Table 37. Log record fields that are annotated by zOS-syslogd-Annotate (continued)
Field DescriptionDatatype Sortable Filterable Searchable
MessageType The one-character message type that is specified in the MessageID value. Valid values are A, I, E, W, D, or S.
If no value is detected for MessageID, or if the MessageID value does not contain a message type, MessageType has no value.
Text U U
processID The process identifier Text U U U
SubsystemID The identifier of the software product or subsystem that generated the message.
Text U U U
SysplexName The sysplex name Text U U
SystemName The system name Text U U U
timestamp The time stamp of the log record Date U U U
zOS-CICS-MSGUSR data source type: three variations
The three variations of the z/OS SYSLOG zOS-CICS-MSGUSR data source type are zOS-CICS-MSGUSR, zOS-CICS-MSGUSRDMY, and zOS-CICS-MSGUSRYMD. Each variation corresponds to a different date format in the time stamp for the data source.
For z/OS SYSLOG data sources of type zOS-CICS-MSGUSR, the log file splitter zOS-CICS-MSGUSR-Split breaks up the data into log records. The log record annotator zOS-CICS-Annotate annotates the log records.

For z/OS SYSLOG data sources of type zOS-CICS-MSGUSRDMY, the log file splitter zOS-CICS-MSGUSRDMY-Split breaks up the data into log records. The log record annotator zOS-CICS-DMY-Annotate annotates the log records.

For z/OS SYSLOG data sources of type zOS-CICS-MSGUSRYMD, the log file splitter zOS-CICS-MSGUSRYMD-Split breaks up the data into log records. The log record annotator zOS-CICS-YMD-Annotate annotates the log records.
File format
For data sources of type zOS-CICS-MSGUSR, zOS-CICS-MSGUSRDMY, or zOS-CICS-MSGUSRYMD, the following format illustrates a sample CICS message in the CICS Transaction Server for z/OS MSGUSR log:
DFHAP1901 04/04/2014 17:03:25 CMAS01 SPI audit log is available.
Table 38 describes the fields in the sample output.
Table 38. Fields in sample CICS message in the MSGUSR log for zOS-CICS-MSGUSR datasource
Component value Value description
DFHAP1901 Message ID
04/04/2014 17:03:25 (for the zOS-CICS-MSGUSR data source type, which uses a default date format of MDY)
Time stamp
CMAS01 Application identifier (APPLID)
Table 38. Fields in sample CICS message in the MSGUSR log for zOS-CICS-MSGUSR datasource (continued)
Component value Value description
SPI audit log is available. Message text
The CICS messages in the CICS MSGUSR log can have different formats, and the log might include custom application messages.

Example: Assume that your custom application generates messages to be added to the CICS MSGUSR log. When the CICS annotator processes the messages, it has the following behavior:
v If the messages follow the preceding format with the time stamps, the CICS annotator uses the time stamp for each message.
v If the messages do not follow the preceding format and do not have time stamps, the CICS annotator assigns the current time as the time stamp for each message.
zOS-CICS-MSGUSR-Split log record splitter
The MSGUSR log file starts each new record in the first column of the file. Subsequent lines of the same record are indented a number of spaces.
To start a new record, the zOS-CICS-MSGUSR-Split splitter uses the absence of spaces in the first few columns. Also, the records that have a time stamp, but do not include a date, are assigned to the prior record, as shown in the following example:
DFHZC3437 I 04/07/2014 17:43:09 CMAS01 N701 CSNE Node N701 action taken: NOCREATE
          CLSDST ABTASK ABSEND ABRECV ((1) Module name: DFHZNAC)
DFHZC3462 I 04/07/2014 17:43:09 CMAS01 N701 CSNE Node N701 session terminated.
          ((2) Module name: DFHZCLS)
NQNAME N701,CSNE,17:43:09,NET1 N701
DFHZC5966 I 04/07/2014 17:43:09 CMAS01 DELETE started for TERMINAL ( N701)
          (Module name: DFHBSTZ).
DFHZC6966 I 04/07/2014 17:43:09 CMAS01 Autoinstall delete for terminal N701
          with netname N701 was successful.
In this example, the line
NQNAME N701,CSNE,17:43:09,NET1 N701
is assigned to the record for message DFHZC3462.
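Both MSGUSR splitting rules can be sketched together: a line starts a new record only when it begins in column 1 and carries a date; indented lines, and column-1 lines whose stamp has a time but no date (such as the NQNAME line), are folded into the prior record. The following Python sketch is an illustrative approximation, not the product code:

```python
import re

DATE = re.compile(r"\d{2}/\d{2}/\d{4}")  # e.g. 04/07/2014 (MDY variation)

def split_msgusr(lines):
    """A new record needs both column-1 start and a date in the line;
    everything else continues the previous record."""
    records = []
    for line in lines:
        starts_new = not line.startswith(" ") and DATE.search(line)
        if starts_new or not records:
            records.append(line)
        else:
            records[-1] += "\n" + line
    return records

lines = [
    "DFHZC3437 I 04/07/2014 17:43:09 CMAS01 N701 CSNE Node N701 action taken: NOCREATE",
    "          CLSDST ABTASK ABSEND ABRECV ((1) Module name: DFHZNAC)",
    "DFHZC3462 I 04/07/2014 17:43:09 CMAS01 N701 CSNE Node N701 session terminated.",
    "          ((2) Module name: DFHZCLS)",
    "NQNAME N701,CSNE,17:43:09,NET1 N701",
    "DFHZC5966 I 04/07/2014 17:43:09 CMAS01 DELETE started for TERMINAL ( N701)",
]
records = split_msgusr(lines)
print(len(records))  # 3
```

Note that the NQNAME line, which has a time but no date, ends up inside the DFHZC3462 record, matching the behavior described above.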
zOS-CICS-EYULOG data source type: three variations
The three variations of the z/OS SYSLOG zOS-CICS-EYULOG data source type are zOS-CICS-EYULOG, zOS-CICS-EYULOGDMY, and zOS-CICS-EYULOGYMD. Each variation corresponds to a different date format in the time stamp for the data source.
For z/OS SYSLOG data sources of type zOS-CICS-EYULOG, the log file splitter zOS-CICS-EYULOG-Split breaks up the data into log records. The log record annotator zOS-CICS-Annotate annotates the log records.

For z/OS SYSLOG data sources of type zOS-CICS-EYULOGDMY, the log file splitter zOS-CICS-EYULOGDMY-Split breaks up the data into log records. The log record annotator zOS-CICS-DMY-Annotate annotates the log records.
For z/OS SYSLOG data sources of type zOS-CICS-EYULOGYMD, the log file splitter zOS-CICS-EYULOGYMD-Split breaks up the data into log records. The log record annotator zOS-CICS-YMD-Annotate annotates the log records.
File format
For data sources of type zOS-CICS-EYULOG, zOS-CICS-EYULOGDMY, or zOS-CICS-EYULOGYMD, the following format illustrates a sample CICS message in the CICS Transaction Server for z/OS EYULOG:
06/29/2015 19:41:26 EYUVS0001I WUINCM03 CICSPLEX SM WEB USER INTERFACE INITIALIZATION STARTED
Table 39 describes the fields in the sample output.
Table 39. Fields in sample CICS message in the EYULOG for zOS-CICS-EYULOG data source
Component value Value description
EYUVS0001I Message ID
06/29/2015 19:41:26 (for the zOS-CICS-EYULOG data source type, which uses a default date format of MDY)
Time stamp
WUINCM03 Application identifier (APPLID)
CICSPLEX SM WEB USER INTERFACE INITIALIZATION STARTED
Message text
zOS-CICS-EYULOG-Split log record splitter
The EYULOG file starts each new record in the first column of the file.
To start a new record, the zOS-CICS-EYULOG-Split splitter uses the time stamp of a line. Any subsequent lines that do not contain a time stamp are considered to be part of the same record.

In the following example, each line begins a new record:
06/10/2015 19:41:26 EYUVS0001I CMAS01 CICSPLEX SM WEB USER INTERFACE INITIALIZATION STARTED.
06/10/2015 19:41:26 EYUVS0107I CMAS01 READING STARTUP PARAMETERS.
06/10/2015 19:41:26 EYUVS0109I CMAS01 TCPIPHOSTNAME(VM30059.IBM.COM)
06/10/2015 19:41:26 EYUVS0109I CMAS01 TCPIPPORT(12345)
06/10/2015 19:41:26 EYUVS0109I CMAS01 CMCIPORT(12346)
06/10/2015 19:41:26 EYUVS0109I CMAS01 DEFAULTCMASCTXT(CMAS03)
06/10/2015 19:41:26 EYUVS0109I CMAS01 DEFAULTCONTEXT(PLEX3)
06/10/2015 19:41:26 EYUVS0109I CMAS01 DEFAULTSCOPE(PLEX3)
06/10/2015 19:41:26 EYUVS0109I CMAS01 AUTOIMPORTDSN(DFH.V5R2M0.CPSM.SEYUVIEW)
06/10/2015 19:41:26 EYUVS0109I CMAS01 AUTOIMPORTMEM(EYUEA*)
06/10/2015 19:41:26 EYUVS0108I CMAS01 STARTUP PARAMETERS READ.
06/10/2015 19:41:26 EYUVS0101I CMAS01 Parameter service initialization complete.
06/10/2015 19:41:28 EYUVS1063I CMAS01 Import ’ALL (OVERWRITE)’ initiated for user (IBMUSER) from data
06/10/2015 19:41:28 EYUVS1063I CMAS01 set(DFH.V5R2M0.CPSM.SEYUVIEW), member (EYUEA*).
zOS-CICS-Annotate annotations
This reference describes the fields that are annotated by zOS-CICS-Annotate, zOS-CICS-DMY-Annotate, and zOS-CICS-YMD-Annotate.
Table 40 on page 123 includes the following information about each field:
v The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
v The description of what the annotation represents
v The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 40. Log record fields that are annotated by zOS-CICS-Annotate, zOS-CICS-DMY-Annotate, andzOS-CICS-YMD-Annotate
Field    Description    Data type    Sortable    Filterable    Searchable
ApplID The application identifier Text U U U
Component The component identifier, which shows the domain or component that issues the message
Text U U U
datasourceHostname The host name that is specified in the IBM Operations Analytics - Log Analysis data source
Text U U
logRecord The log record Text U
MessageID The message identifier
Also, see “Message IDs.”
Text U U U
MessagePrefix The first 3 characters of the message identifier. If no value is detected for MessageID, MessagePrefix has no value.
Text U U
MessageText The message text Text U
MessageType The one-character message type that is specified in the MessageID value. Valid values are A, I, E, W, D, or S.
If no value is detected for MessageID, or if the MessageID value does not contain a message type, MessageType has no value.
Text U U
SubsystemID The identifier of the software product or subsystem that generated the message.
Text U U U
SysplexName The sysplex name Text U U
SystemName The system name Text U U U
timestamp The time stamp of the log record Date U U U
Message IDs
A string is detected as a message ID if it matches one of the following formats:
DFHnn
DFHnnt
DFHnnn
DFHnnnt
DFHnnnn
DFHnnnnt
DFHaann
DFHaannt
DFHaannn
DFHaannnt
DFHaannnn
DFHaannnnt
EYUnn
EYUnnt
EYUnnn
EYUnnnt
EYUnnnn
EYUnnnnt
EYUaann
EYUaannt
EYUaannn
EYUaannnt
EYUaannnn
EYUaannnnt

where:
v a represents an uppercase alphabetic character (A - Z).
v n represents a numeric character (0 - 9).
v t represents a type character (A, I, E, W, D, S, or U).

Sometimes, a string that is not a message ID, but matches one of the preceding formats, might show in the MessageID field.
zOS-Anomaly-Interval data source type
For z/OS SYSLOG data sources of type zOS-Anomaly-Interval, the log file splitter zOS-Anomaly-Split breaks up the data into log records. The log record annotator zOS-Anomaly-Interval-Annotate annotates the log records.
zOS-Anomaly-Split log record splitter
To start a new record, the zOS-Anomaly-Split splitter uses the open bracket { that is shown at the beginning of each line of the data.
zOS-Anomaly-Interval-Annotate annotations
This reference describes the fields that are annotated by zOS-Anomaly-Interval-Annotate.
Table 41 includes the following information about each field:
v The field name, which corresponds to a field in the IBM Operations Analytics - Log Analysis Search workspace
v The description of what the annotation represents
v The primary index configuration attributes that are assigned to the field, such as the data type and the indication of whether the field can be sorted, filtered, or searched.
Table 41. Log record fields that are annotated by zOS-Anomaly-Interval-Annotate
Field Description Data type Sortable Filterable Searchable
IntervalAnomaly A double value that indicates the anomaly score for the interval. The score is the percentile of the sum of each anomaly score for individual message IDs within the interval.
Double U U
Table 41. Log record fields that are annotated by zOS-Anomaly-Interval-Annotate (continued)
Field Description Data type Sortable Filterable Searchable
IntervalEndTime The time, based on Coordinated Universal Time (UTC), that indicates the end of an interval for which the log messages that are produced are used to generate the anomaly record. The format is YYYY-MM-DDTHH:mm:ss.sssZ.
Date U U U
IntervalIndex An integer that indicates the sequence number of this interval within the specified date. Each index represents a 10-minute period.
Long U
IntervalStartTime The time, based on UTC, that indicates the start of an interval for which log messages that are produced are used to generate the anomaly record. The format is YYYY-MM-DDTHH:mm:ss.sssZ.
Date U U U
LimitedModelStatus An indication of whether the model that is used to calculate the anomaly score for this interval is a limited model. The following values are valid:
v YES
v NO
v UNKNOWN
Text
logRecord The log record Text U
ModelGroupName The name of an analysis group. Each analysis group is associated with one or more systems from which the logs are used to create a single model.
Text U
NumMessagesNeverSeenBefore An integer that indicates the number of message IDs that were issued during this analysis interval for the first time but were never seen in any previous analysis interval or in the current model.
Long
NumMessagesNotInModelFirstReported An integer that indicates the number of message IDs that are not in the model and were issued during this analysis interval for the first time.
Long
NumMessagesUnique An integer that indicates the number of unique message IDs that were issued during this analysis interval.
Long
Table 41. Log record fields that are annotated by zOS-Anomaly-Interval-Annotate (continued)
Field Description Data type Sortable Filterable Searchable
SysplexName The sysplex name Text U U
SystemName The system name Text U U U
timestamp The time, based on UTC, that indicates the end of the interval record. This time is equivalent to the value for the IntervalEndTime field. When you search for interval anomaly scores that are based on a time stamp, ensure that you search for the end time of the interval record. The format is YYYY-MM-DDTHH:mm:ss.sssZ.
Date U U U
zAwareServer The host name or IP address of the IBM z Advanced Workload Analysis Reporter (IBM zAware) server from which the interval anomaly data is retrieved.
Text U U
Dashboards
The Custom Search Dashboard applications that are provided in the z/OS Insight Packs can help you troubleshoot problems in your IT operations environment. Each main troubleshooting dashboard contains visual representations of the data that is generated by the dashboard application. "Information links" dashboards contain links to troubleshooting information in the respective software documentation, including message explanations.
WebSphere Application Server for z/OS Custom Search Dashboard applications

The WebSphere Application Server for z/OS Custom Search Dashboard applications include dashboards for WebSphere Application Server for z/OS.
WebSphere Application Server for z/OS Dashboard
This dashboard contains the following content:
Message Counts - Top 5 over Last Day
Shows counts of the top five WebSphere Application Server for z/OS messages, based on the msgClassifier value, that are found in a time interval.

Messages by Hostname - Top 5 over Last Day
Shows counts of the WebSphere Application Server for z/OS messages, based on the msgClassifier value, that are found for a host name in a time interval.

Java Exception Counts - Top 5 over Last Day
Shows counts of Java exceptions, based on the javaException value, that are found in a time interval.
Java Exception by Hostname - Top 5 over Last Day
Shows counts of Java exceptions, based on the javaException value, that are found for a host name in a time interval.
z/OS Network Custom Search Dashboard applications
The z/OS Network Custom Search Dashboard applications include network-related dashboards.
Information Links dashboards for the following software are included:v NetView for z/OSv TCP/IPv UNIX System Services system log (syslogd)
z/OS Networking Dashboard
This dashboard contains the following content:
syslogd Message Counts - Top 5 Applications per hour over Last Day
    Shows the message count for each of the five applications that logged the most data in the syslogd file.
syslogd Messages by Hostname - Top 5 per hour over Last Day
    Shows the message count for each of the five hosts that logged the most data in the syslogd file.
Total syslogd Message Counts per hour over Last Day
    Shows the total message count for all applications in a time interval.
Total syslogd Messages by Hostname per hour over Last Day
    Shows the total message count for all hosts in a time interval.
NetView for z/OS Dashboard
This dashboard contains the following content:
Message Counts - Top 5 per hour over Last Day
    Shows counts of the top five NetView for z/OS messages per hour, based on the MessageID value, that are found in a time interval.
Message Type Counts - Top 5 per hour over Last Day
    Shows counts of the top five NetView for z/OS message types per hour, based on the MessageType value, that are found in a time interval.
Total Message Counts - Top 5 per hour over Last Day
    Shows the total number of NetView for z/OS messages per hour that are found in a time interval.
Messages by Hostname - Top 5 per hour over Last Day
    Shows counts of the top five NetView for z/OS messages per hour, based on the MessageID value, that are found for a host name in a time interval.
Message Types by Hostname - Top 5 per hour over Last Day
    Shows counts of the top five NetView for z/OS message types, based on the MessageType value, that are found for a host name in a time interval.
Total Messages by Hostname - Top 5 per hour over Last Day
    Shows the total number of NetView for z/OS messages per hour that are found for a host name in a time interval.
Reference 127
z/OS SMF Custom Search Dashboard applications

The z/OS SMF Custom Search Dashboard applications include dashboards that represent SMF data for CICS Transaction Server for z/OS, DB2 for z/OS, IMS for z/OS, MQ for z/OS, security for z/OS, and WebSphere Application Server for z/OS.

The following dashboards are included:
v CICS TS for z/OS Performance Dashboards:
  – CICS Regions Dashboard
  – CICS Regions Transaction Dashboard
  – CICS Jobs Dashboard
v DB2 for z/OS Performance Dashboard
v IMS for z/OS Performance Dashboard
v MQ for z/OS Performance Dashboard
v z/OS Jobs Performance Dashboard
v z/OS SMF 80 Security Events Dashboard
v z/OS Top 10 RACF SMF 80 Events Dashboard
v WebSphere Application Server for z/OS System Activity Dashboard
v WebSphere Application Server Activity Summary by Controller Dashboard

Content that is common to many dashboards from the z/OS SMF Custom Search Dashboard

The following dashboards contain the content that is described in this section:
v CICS Jobs Dashboard (under CICS TS for z/OS Performance Dashboards)
v DB2 for z/OS Performance Dashboard
v IMS for z/OS Performance Dashboard
v MQ for z/OS Performance Dashboard
v z/OS Jobs Performance Dashboard
Tip: The z/OS Jobs Performance Dashboard shows data based on the performance of the programs that are started by the job JCL. If the job that is being monitored contains multiple steps that start different programs, you might notice vertical lines in the dashboards. These vertical lines are shown when one job step program completes, and a new job step program starts.
CPU Utilization - Top 5 Jobs over Last Day
    Shows a line chart of the CPU usage of the top five tasks, based on average CPU usage during the last day.
CPU Utilization - Max and Average per Job over Last Day
    For all tasks on each host, shows the maximum and average CPU usage during the last day.
I/O Rate - Top 5 Jobs over Last Day
    Shows a line chart of the I/O rate of the top five tasks, based on average I/O rates during the last day.
I/O Rate - Max and Average per Job over Last Day
    For all tasks on each host, shows the maximum and average I/O rates during the last day.
Working Set - Top 5 Jobs over Last Day
    Shows a line chart of the working set size of the top five tasks, based on average working set size during the last day.
Working Set - Max and Average per Job over Last Day
    For all tasks on each host, shows the maximum and average working set size during the last day.
Paging Rate - Top 5 Jobs over Last Day
    Shows a line chart of the paging rate of the top five tasks, based on average paging rates during the last day.
Paging Rate - Max and Average per Job over Last Day
    For all tasks on each host, shows the maximum and average paging rate during the last day.
CICS Regions Dashboard
The CICS Regions Dashboard contains the following content:
Wait on Storage Exceptions per Region over Last Day
    For each CICS region, shows the number of wait-on-storage events that occurred during the last day.
Exceptions by Resource ID over Last Day
    Shows the number of exceptions that occurred for each resource ID during the last day.
Short on Storage per Region over Last Day
    For each CICS region, shows the number of low-on-storage messages that occurred during the last day.
Tasks at Maximum Threshold per Region over Last Day
    For each CICS region, shows the number of times that the number of active user transactions equaled the specified maximum number of user transactions (MXT) during the last day.
Storage Violations per Region over Last Day
    For each CICS region, shows the number of storage violations that occurred during the last day.
CICS Regions Transaction Dashboard
The CICS Regions Transaction Dashboard contains the following content:
Transactions - Top 5 Regions over Last Day
    Shows the number of transactions for the five CICS regions with the highest number of transactions, based on the average number of transactions during the last day.
Transactions - Max and Average per Region over Last Day
    For each CICS region, shows the maximum and average numbers of transactions that occurred during the last day.
Security events dashboards
The dashboards for security events focus on access attempts that fail due to incorrect credentials or insufficient authority levels. Failed access attempts might indicate that external hackers are guessing at credentials or that a company insider is trying to use personal information to get a higher level of access than authorized. Violations due to insufficient authority might indicate either probing for access to a resource from an existing user or a compromised credential.
These dashboards contain the following content:
z/OS SMF 80 Security Events Dashboard
    This dashboard provides an overview of the system access violations that occurred during the last week. It includes five pie chart views. Each view is limited to a maximum of 1000 data points by count from the search results.

    Failed z/OS Logon Attempts by UserID
        By user ID, shows the number of z/OS authentication violations due to incorrect or revoked credentials.
    Failed z/OS Logon Attempts by TermID
        By terminal ID, shows the number of z/OS authentication violations due to incorrect or revoked credentials.
    Failed z/OS UNIX File System Access Attempts
        By file name, shows the number of UNIX file access violations. The string /CWD represents the user’s current working directory.
    Failed z/OS Resource Access Attempts
        By resource name, shows the number of access violations due to insufficient RACF authority to access a z/OS resource.
    Failed z/OS RACF Commands
        By user ID, shows the number of access violations due to insufficient authority to issue a RACF command. Only security administrators are authorized to change the RACF settings of another user.
z/OS Top 10 RACF SMF 80 Events Dashboard
    This dashboard includes four graphical views. Each view is limited to a maximum of the 10 largest data points by count from the search results.

    Top 10 Failed Logons by SystemName and UserID
        By user ID, shows the number of z/OS authentication violations due to incorrect or revoked credentials.
    Top 10 Failed Logons by SystemName and UserID (Current day, off-shift)
        By user ID, shows the number of z/OS authentication violations due to incorrect or revoked credentials that do not occur during the traditional prime shift work hours of 9:00 AM to 5:00 PM Coordinated Universal Time (UTC) on the current day.
    Top 10 OMVS File Access Failures by UserID and Filename
        By user ID and file name, shows the number of UNIX file access violations due to insufficient RACF authority to access a UNIX resource.
    Top 10 z/OS Resource Accesses by UserID and ResourceName
        By user ID and resource name, shows the number of access violations due to insufficient RACF authority to access a z/OS resource. Examples of z/OS resources include data sets, disk volumes, and tape volumes.
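The off-shift view described above excludes the 9:00 AM to 5:00 PM UTC prime shift. A minimal sketch of such a filter follows; the function name and the exact boundary handling are assumptions for illustration, not the product's implementation:

```python
from datetime import datetime, timezone

# Prime shift is taken here as 09:00 (inclusive) to 17:00 (exclusive) UTC;
# any other hour counts as off-shift.
def is_off_shift(dt):
    return not (9 <= dt.hour < 17)

is_off_shift(datetime(2016, 5, 1, 3, 0, tzinfo=timezone.utc))   # 03:00 UTC -> True
is_off_shift(datetime(2016, 5, 1, 12, 0, tzinfo=timezone.utc))  # 12:00 UTC -> False
```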
WebSphere Application Server for z/OS dashboards
These dashboards contain the following content:
WebSphere Application Server for z/OS System Activity Dashboard
    This dashboard provides insight into the amount of processing that is performed by WebSphere Application Server for z/OS for each z/OS system. The dashboard views are from a system-wide perspective and indicate when a problem might be developing with a web server or application. Problems are often shown initially because of spikes in performance due to higher than normal activity. Performance data, together with synchronized WebSphere Application Server for z/OS logs, can provide a context for diagnosing application issues.

    Total System-wide WebSphere Controller Request Count
        Shows the number of requests that are processed by all the WebSphere Application Server for z/OS servers on the system.
    Total System-wide WebSphere Controller CPU Utilization
        Aggregates the CPU utilization for all the WebSphere Application Server for z/OS servers on the system. This view is a high-level perspective of the amount of CPU resources that are used to handle web requests.
    Total System-wide WebSphere Controller Specialty CPU Utilization
        Shows the amount of WebSphere Application Server for z/OS processing that was offloaded to specialty processors on this system to free general purpose processors for other processing.
    Total System-wide WebSphere Controller Eligible Specialty CPU Utilization
        Depending on the available capacity of specialty processors, work can be moved from the general purpose processors to the specialty processors. This value shows the amount of WebSphere Application Server for z/OS processing that was eligible for dispatch on a specialty processor, but because no specialty processor was available, was instead processed on a general purpose processor.
WebSphere Application Server Activity Summary by Controller Dashboard
    This dashboard provides insight into the amount of processing that is performed by WebSphere Application Server for z/OS for each WebSphere controller address space across z/OS systems. The dashboard views are from a controller perspective and indicate when a problem might be developing with an individual controller address space. Because many WebSphere controllers might be running on one z/OS system, or multiple systems might be providing data to the same IBM Operations Analytics - Log Analysis server, graph content is limited to views of the 10 controllers with the most processing.

    Top 10 Total WebSphere Controller Request Count
        Shows the number of requests that are processed by individual WebSphere Application Server for z/OS controller address spaces on the system. The view is filtered for the top 10 controllers.
    Top 10 Total WebSphere Controller CPU Utilization
        Aggregates the CPU utilization by controller name for the WebSphere Application Server for z/OS servers on the system. This view is a high-level perspective of the amount of CPU resources that are used by controller address spaces to handle web requests. The view is filtered for the top 10 controllers.
    Top 10 Total WebSphere Controller Specialty CPU Utilization
        Shows the amount of WebSphere controller processing that was offloaded to specialty processors on this system to free general purpose processors for other processing. The view is filtered for the top 10 controllers.
    Top 10 Total WebSphere Controller Eligible Specialty CPU Utilization
        Depending on the available capacity of specialty processors, work can be moved from the general purpose processors to the specialty processors. This value shows the amount of WebSphere controller processing that was eligible for dispatch on a specialty processor, but because no specialty processor was available, was instead processed on a general purpose processor. The view is filtered for the top 10 controllers.
z/OS SYSLOG Custom Search Dashboard applications

The z/OS SYSLOG Custom Search Dashboard applications include dashboards for SYSLOG for z/OS, CICS Transaction Server for z/OS, DB2 for z/OS, IMS for z/OS, MQ for z/OS, and security for z/OS.

The following troubleshooting dashboards are included:
v SYSLOG for z/OS Dashboard
v SYSLOG for z/OS Time Comparison Dashboard
v “CICS Transaction Server for z/OS Dashboard” on page 134
v “DB2 for z/OS Dashboard” on page 134
v “IMS for z/OS Dashboard” on page 135
v “MQ for z/OS Dashboard” on page 135
v “Security for z/OS Dashboard” on page 136

Information Links dashboards for the following software are included:
v CICS Transaction Server for z/OS
v DB2 for z/OS
v IMS for z/OS
v MQ for z/OS
v Resource Access Control Facility (RACF)
SYSLOG for z/OS Dashboard
This dashboard contains the following content:
Message Counts - Top 5 per hour over Last Day
    Shows counts of the top five z/OS SYSLOG messages per hour, based on the MessageID value, that are found in the last day.
Message Type Counts - Top 5 per hour over Last Day
    Shows counts of the top five z/OS SYSLOG message types per hour, based on the MessageType value, that are found in the last day.
Total Message Counts per hour over Last Day
    Shows the total number of z/OS SYSLOG messages per hour that are found in the last day.
Messages by Hostname - Top 5 per hour over Last Day
    Shows counts of the top five z/OS SYSLOG messages per hour, based on the MessageID value, that are found for a host name in the last day.
Message Types by Hostname - Top 5 over Last Day
    Shows counts of the top five z/OS SYSLOG message types, based on the MessageType value, that are found for a host name in the last day.
Total Messages by Hostname per hour over Last Day
    Shows the total number of z/OS SYSLOG messages per hour that are found for a host name in the last day.
SYSLOG for z/OS Time Comparison Dashboard
Known issue: Python Version 2.6 or later must be installed on the system where this dashboard is run. The dashboard fails when it is run on a system with an earlier version of Python.
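Given the known issue above, you can verify the Python level before running the dashboard. A small illustrative check (the function name is an assumption, not part of the product):

```python
import sys

# The Time Comparison dashboard requires Python 2.6 or later.
def python_level_ok(version_info=sys.version_info):
    # Compare only the (major, minor) pair against the documented minimum.
    return tuple(version_info[:2]) >= (2, 6)

python_level_ok()  # True when the interpreter is at least Version 2.6
```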
This dashboard contains the following content:
Time Range - Time Period 1
    Shows the “from” and “to” dates and times that are used for the first time period of the comparison.
Time Range - Time Period 2
    Shows the “from” and “to” dates and times that are used for the second time period of the comparison.
Message Type Counts - Top 5 per hour over Time Period 1
    Shows counts of the top five message types per hour, based on the MessageType value, that are found in the first time period.
Message Type Counts - Top 5 per hour over Time Period 2
    Shows counts of the top five message types per hour, based on the MessageType value, that are found in the second time period.
Message Type Counts - Top 5 from Time Periods 1 and 2
    Shows the top five message types from both time periods, based on the MessageType value.
Message Type - Percentage Change
    Shows, as a percentage value, the amount of change between the top 5 message type counts in the first time period and the second time period.
Message ID Counts - Top 5 per hour over Time Period 1
    Shows counts of the top five messages per hour, based on the MessageID value, that are found in the first time period.
Message ID Counts - Top 5 per hour over Time Period 2
    Shows counts of the top five messages per hour, based on the MessageID value, that are found in the second time period.
Message ID Counts - Top 5 from Time Periods 1 and 2
    Shows the top five messages from both time periods, based on the MessageID value.
Message ID - Percentage Change
    Shows, as a percentage value, the amount of change between the top 5 message ID counts in the first time period and the second time period.
Message Prefix Counts - Top 5 per hour over Time Period 1
    Shows counts of the top five message prefixes per hour, based on the MessagePrefix value, that are found in the first time period.
Message Prefix Counts - Top 5 per hour over Time Period 2
    Shows counts of the top five message prefixes per hour, based on the MessagePrefix value, that are found in the second time period.
Message Prefix Counts - Top 5 from Time Periods 1 and 2
    Shows the top five message prefixes from both time periods, based on the MessagePrefix value.
Message Prefix - Percentage Change
    Shows, as a percentage value, the amount of change between the top 5 message prefix counts in the first time period and the second time period.
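The percentage-change views above compare the count of a message type, ID, or prefix in the first time period against the second. A sketch of the underlying arithmetic; the helper name and the zero-baseline handling are assumptions, and the product's exact rounding is not documented here:

```python
# Percentage change between a count in time period 1 and the
# corresponding count in time period 2.
def percentage_change(count1, count2):
    if count1 == 0:
        # Undefined baseline; the dashboard may handle this case differently.
        return None
    return 100.0 * (count2 - count1) / count1

percentage_change(200, 250)  # 25.0 (count grew by a quarter)
```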
CICS Transaction Server for z/OS Dashboard
This dashboard contains the following content:
CICS Message Counts - Top 5 per hour over Last Day
    Shows counts of the top five CICS Transaction Server for z/OS messages per hour, based on the MessageID value, that are found in the last day.
CICS Message Type Counts - Top 5 per hour over Last Day
    Shows counts of the top five CICS Transaction Server for z/OS message types per hour, based on the MessageType value, that are found in the last day.
Total CICS Message Counts per hour over Last Day
    Shows the total number of CICS Transaction Server for z/OS messages per hour that are found in the last day.
CICS Messages by Hostname - Top 5 per hour over Last Day
    Shows counts of the top five CICS Transaction Server for z/OS messages per hour, based on the MessageID value, that are found for a host name in the last day.
CICS Message Types by Hostname - Top 5 over Last Day
    Shows counts of the top five CICS Transaction Server for z/OS message types, based on the MessageType value, that are found for a host name in the last day.
Total CICS Messages by Hostname per hour over Last Day
    Shows the total number of CICS Transaction Server for z/OS messages per hour that are found for a host name in the last day.
DB2 for z/OS Dashboard
This dashboard contains the following content:
DB2 Message Counts - Top 5 per hour over Last Day
    Shows counts of the top five DB2 for z/OS messages per hour, based on the MessageID value, that are found in the last day.
DB2 Message Type Counts - Top 5 per hour over Last Day
    Shows counts of the top five DB2 for z/OS message types per hour, based on the MessageType value, that are found in the last day.
Total DB2 Message Counts per hour over Last Day
    Shows the total number of DB2 for z/OS messages per hour that are found in the last day.
DB2 Message Count by Hostname - Top 5 per hour over Last Day
    Shows counts of the top five DB2 for z/OS messages per hour, based on the MessageID value, that are found for a host name in the last day.
DB2 Message Types by Hostname - Top 5 over Last Day
    Shows counts of the top five DB2 for z/OS message types, based on the MessageType value, that are found for a host name in the last day.
Total DB2 Messages by Hostname per hour over Last Day
    Shows the total number of DB2 for z/OS messages per hour that are found for a host name in the last day.
IMS for z/OS Dashboard
This dashboard contains the following content:
IMS Message Counts - Top 5 per hour over Last Day
    Shows counts of the top five IMS for z/OS messages per hour, based on the MessageID value, that are found in the last day.
IMS Message Type Counts - Top 5 per hour over Last Day
    Shows counts of the top five IMS for z/OS message types per hour, based on the MessageType value, that are found in the last day.
Total IMS Message Counts per hour over Last Day
    Shows the total number of IMS for z/OS messages per hour that are found in the last day.
IMS Messages by Hostname - Top 5 per hour over Last Day
    Shows counts of the top five IMS for z/OS messages per hour, based on the MessageID value, that are found for a host name in the last day.
IMS Message Types by Hostname - Top 5 over Last Day
    Shows counts of the top five IMS for z/OS message types, based on the MessageType value, that are found for a host name in the last day.
Total IMS Messages by Hostname per hour over Last Day
    Shows the total number of IMS for z/OS messages per hour that are found for a host name in the last day.
MQ for z/OS Dashboard
This dashboard contains the following content:
MQ Message Counts - Top 5 per hour over Last Day
    Shows counts of the top five MQ for z/OS messages per hour, based on the MessageID value, that are found in the last day.
MQ Message Type Counts - Top 5 per hour over Last Day
    Shows counts of the top five MQ for z/OS message types per hour, based on the MessageType value, that are found in the last day.
Total MQ Message Counts per hour over Last Day
    Shows the total number of MQ for z/OS messages per hour that are found in the last day.
MQ Messages by Hostname - Top 5 per hour over Last Day
    Shows counts of the top five MQ for z/OS messages per hour, based on the MessageID value, that are found for a host name in the last day.
MQ Message Types by Hostname - Top 5 over Last Day
    Shows counts of the top five MQ for z/OS message types, based on the MessageType value, that are found for a host name in the last day.
Total MQ Messages by Hostname per hour over Last Day
    Shows the total number of MQ for z/OS messages per hour that are found for a host name in the last day.
Security for z/OS Dashboard
This dashboard contains the following content:
Security Message Counts - Top 5 per hour over Last Day
    Shows counts of the top five RACF messages per hour, based on the MessageID value, that are found in the last day.
Security Message Type Counts - Top 5 per hour over Last Day
    Shows counts of the top five RACF message types per hour, based on the MessageType value, that are found in the last day.
Total Security Message Counts per hour over Last Day
    Shows the total number of RACF messages per hour that are found in the last day.
Security Messages by Hostname - Top 5 per hour over Last Day
    Shows counts of the top five RACF messages per hour, based on the MessageID value, that are found for a host name in the last day.
Security Message Types by Hostname - Top 5 over Last Day
    Shows counts of the top five RACF message types, based on the MessageType value, that are found for a host name in the last day.
Total Security Messages by Hostname per hour over Last Day
    Shows the total number of RACF messages per hour that are found for a host name in the last day.
Intrusion Detection
    Shows messages that are flagged as intrusion-related indicators by some applications.
Sample searches

If you installed any of the optional sample searches (sometimes called Quick Search samples) in IBM Operations Analytics - Log Analysis, they are available in the zos folder in the Saved Searches navigator of the Log Analysis user interface.
Some sample searches are organized into subfolders under the zos folder.
To use a sample search to search logs, double-click the title of the search in the Saved Searches navigator.
WebSphere Application Server for z/OS sample searches

You can install these optional WebSphere Application Server for z/OS sample searches for use with the WebSphere Application Server for z/OS Insight Pack.

WAS Error Messages
    Searches for WebSphere Application Server for z/OS messages that occurred in the last day and that indicate an error occurred.
WAS Exceptions
    Searches for occurrences of Java exceptions in the WebSphere Application logs during the last day.
z/OS Network sample searches

You can install these optional z/OS Network sample searches for use with the z/OS Network Insight Pack.

Searches for common network errors

The following samples search for common network error messages:
v ATTLS Error Messages
v CSSMTP Error Messages
v Device Error Messages
v FTP Error Messages
v IKED Error Messages
v IPSEC Error Messages
v OMPROUTE Error Messages
v PAGENT Error Messages
v Storage Error Messages
v syslogd FTPD Messages
v syslogd Messages
v syslogd SSHD Messages
v syslogd TELNETD Messages
v TCP/IP Error Messages
v TN3270 Telnet Error Messages
v VTAM® Connection Error Messages
v VTAM CSM Error Messages
v VTAM Storage Error Messages
NetView for z/OS sample searches
NetView Messages
    Searches for NetView for z/OS messages that occurred during the last day.
NetView Action, Decision, Err
    Searches for NetView for z/OS messages that occurred during the last day and that indicate any of the following situations:
    v Immediate action is required.
    v A decision is required.
    v An error occurred.
NetView Automation
    Searches for a set of predefined NetView for z/OS messages that indicate that automation table violations possibly occurred during the last day.
NetView Command Authorization
    Searches for a set of predefined NetView for z/OS messages that indicate that command authorization table violations possibly occurred during the last day.
NetView Resource Limits
    Searches for a set of predefined NetView for z/OS messages that indicate that resource limits or storage thresholds were possibly exceeded during the last day.
NetView Security
    Searches for a set of predefined NetView for z/OS messages that indicate that insufficient access authority or security environment violations possibly occurred during the last day.

z/OS SMF sample searches

You can install these optional z/OS SMF sample searches for use with the z/OS SMF Insight Pack.
CICS Transaction Server for z/OS sample searches:
    All Exceptions
        Searches for CICS Transaction Server for z/OS exceptions that occurred in the last day.
    Job Performance
        Searches for records that occurred in the last day and have a program name of DFHSIP or EYU9XECS.
    Policy Exceptions
        Searches for CICS Transaction Server for z/OS SMF policy-based exceptions that occurred in the last day.
    Transaction Summary
        Searches for CICS Transaction Server for z/OS transaction summary interval records that occurred in the last day.
    Transaction Summary - Week
        Searches for CICS Transaction Server for z/OS end-of-day transaction summary records that occurred in the last week.
    Transaction Times at MAXTASKS
        Searches for CICS Transaction Server for z/OS transaction records where the number of active user transactions equaled the specified maximum allowed number of user transactions.
    Wait on Storage Exceptions
        Searches for CICS storage manager messages and CICS Transaction Server for z/OS SMF Wait on Storage exceptions that occurred in the last day.

DB2 for z/OS Performance
    Searches for records that occurred in the last day and have a program name of DSNYASCP or DSNADMT0.
IMS for z/OS Performance
    Searches for records that occurred in the last day and have a program name of DFSAMVRC0, DFSRRC00, or DXRRLM00.
MQ for z/OS Performance
    Searches for records that occurred in the last day and have a program name of CSQXJST or CSQYASCP.
z/OS Job Performance
    Searches for records that occurred in the last day and have an assigned program name.
Security sample searches:
    To obtain results from these searches, the Resource Access Control Facility (RACF) must be active and protecting the resources or commands that are the subject of each search.

    RACF SETROPTS Commands Issued
        Searches for SETROPTS commands that were issued during the last day.
    RACF Accesses of config files
        Searches for any file with the extension .config that was accessed by RACF during the last day.
    RACF Activity for Operations
        Searches for any events that were caused by a user with the RACF Operation attribute during the last day.
    RACF Logons and Commands
        Searches for logons and commands that were issued from a specific terminal ID (TermID field) during the last day. The default value for the TermID field is *.
    RACF CHOWN, CHGRP, CHMOD Cmds
        Searches for occurrences of the UNIX commands CHOWN, CHGRP, and CHMOD that were issued during the last day.
    RACF Data Set Access Successes
        Searches for successful attempts by RACF to access data sets during the last day.
    RACF Failed Access Attempts
        Searches for unsuccessful attempts by RACF to access data sets during the last day.
WebSphere Application Server for z/OS sample searches:
    To obtain results from these searches, WebSphere Application Server for z/OS must be active and must be configured to create SMF 120 subtype 9 records.

    Controller Managed JavaBeans
        Searches for the managed JavaBeans requests that are processed by the WebSphere Application Server Controller during the last day.
    Cont Requests Not Internals
        Searches for the requests for controller processing that are not attributed to internal WebSphere processing during the last day.
    Activity for All Apps
        Searches for the requests for processing that are attributed to WebSphere Application Server for z/OS applications.
    Apps with Nonzero Dispatch TCB
        Searches for the requests for processing that are attributed to WebSphere Application Server for z/OS applications with non-zero dispatch Task Control Block (TCB) time.
z/OS SYSLOG sample searches

You can install these optional z/OS SYSLOG sample searches for use with the z/OS SYSLOG Insight Pack.
v “CICS Transaction Server for z/OS sample searches” on page 140
v “DB2 for z/OS sample searches” on page 140
v “IMS for z/OS sample searches” on page 141
v “Security for z/OS sample searches” on page 142
v “MQ for z/OS sample searches” on page 143
CICS Transaction Server for z/OS sample searches
CICS TS Messages
    Searches for CICS Transaction Server messages that occurred during the last day. CICS Transaction Server messages start with the prefix DFH or EYU.
CICS Action, Decision, or Error
    Searches for CICS messages that occurred during the last day and that indicate any of the following situations:
    v Immediate action is required.
    v A decision is required.
    v An error occurred.
    The search is based on the CICS message IDs and on an action code of A, D, E, S, or U during the last day.
CICS TS Abend or Severe
    Searches for CICS Transaction Server messages that have both of the following characteristics:
    v The messages occurred during the last day.
    v The messages have the format DFHccxxxx, where cc represents a component identifier (such as SM for Storage Manager), and xxxx is either 0001 or 0002 (which indicates an abend or severe error in the specified component).
    For example: This sample would search for DFHSM0001 but not for DFH0001.
CICS TS Key Messages
    Searches for a set of predefined message numbers to determine whether any of the corresponding messages occurred during the last day.
CICS TS Short on Storage
    Searches for CICS Transaction Server for z/OS messages that indicate that a storage shortage occurred during the last day.
CICS TS Storage Violations
    Searches for CICS Transaction Server for z/OS messages that indicate that a storage violation occurred during the last day.
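The DFHccxxxx convention described under CICS TS Abend or Severe (a two-letter component identifier followed by 0001 or 0002) can be expressed as a regular expression. This is an illustrative sketch of the pattern, not the product's actual saved-search definition, and it assumes the component identifier is two uppercase letters:

```python
import re

# Matches DFHccxxxx where cc is a two-letter component identifier
# (such as SM for Storage Manager) and xxxx is 0001 or 0002,
# which indicates an abend or severe error in that component.
ABEND_OR_SEVERE = re.compile(r"^DFH[A-Z]{2}000[12]$")

bool(ABEND_OR_SEVERE.match("DFHSM0001"))  # True: abend in Storage Manager
bool(ABEND_OR_SEVERE.match("DFH0001"))    # False: no component identifier
```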
DB2 for z/OS sample searches
DB2 Messages
    Searches for DB2 messages that occurred during the last day. DB2 messages start with the prefix DSN.
DB2 Action, Decision, or Errors
    Searches for DB2 messages that occurred during the last day and that indicate any of the following situations:
    v Immediate action is required.
    v A decision is required.
    v An error occurred.
DB2 Data Set Messages
    Searches for DB2 messages that occurred during the last day and that indicate any of the following situations:
    v Failure of a data set definition
    v Failure of a data set extend
    v Impending space shortage
DB2 Data Sharing Messages
    Searches for internal resource lock manager (IRLM) messages that were issued to DB2 during the last day and that indicate at least one of the following situations:
    v The percentage of available lock structure capacity is low.
    v An error occurred when IRLM used the specified z/OS automatic restart manager (ARM) function.
DB2 Lock Conflict Messages
    Searches for DB2 messages that occurred during the last day and that indicate that a plan was denied an IRLM lock due to a detected deadlock or timeout.
DB2 Log Data Set Messages
    Searches for messages that indicate that DB2 log data sets are full, are becoming full, or could not be allocated during the last day.
DB2 Log Frequency Messages
    Searches for DB2 messages that occurred during the last day and that indicate that log archives were offloaded or are waiting to be offloaded.
DB2 Pool Shortage Messages
    Searches for DB2 messages that occurred during the last day and that indicate that the amount of storage in the group buffer pool (GBP) coupling facility structure that is available for writing new pages is low or critically low.
IMS for z/OS sample searches
IMS Messages
    Searches for IMS messages that occurred during the last day. IMS messages start with any of the following prefixes: BPE, CQS, CSL, DFS, DSP, DXR, ELX, FRP, HWS, MDA, PCB, PGE, SEG, or SFL.
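The prefix test implied by this search can be sketched in a few lines. The prefix tuple below is taken from the text; the function name and the sample message IDs in the usage lines are illustrative assumptions:

```python
# Prefix list from the documentation above; function name is hypothetical.
IMS_PREFIXES = ("BPE", "CQS", "CSL", "DFS", "DSP", "DXR", "ELX",
                "FRP", "HWS", "MDA", "PCB", "PGE", "SEG", "SFL")

def is_ims_message(message_id):
    """Return True if the message ID starts with a known IMS prefix."""
    # str.startswith accepts a tuple of prefixes, so one call covers all 14.
    return message_id.startswith(IMS_PREFIXES)

print(is_ims_message("DFS000I"))   # True: DFS is an IMS prefix
print(is_ims_message("DSN1234I"))  # False: DSN is the DB2 prefix
```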
IMS Action, Decision, or Error
    Searches for IMS messages that occurred during the last day and that indicate any of the following situations:
    v Immediate action is required.
    v A decision is required.
    v An error occurred.
    The search is based on the IMS message IDs and on an action code of A, E, W, or X during the last day.
IMS Security Violations
    Searches for error messages that indicate security violations that were detected during the last day.
IMS Abend Messages
    Searches for messages that indicate abends that were detected during the last day.
IMS Common Queue Server Msgs
    Searches for IMS Common Queue Server component messages that occurred during the last day. IMS Common Queue Server messages start with the prefix CQS.
IMS Resources in Waiting Error
    Searches for error messages that occurred during the last day and that indicate that a resource is waiting on other resources to become available.
IMS DB Recovery Control Errors
    Searches for error messages that occurred for the DB Recovery Control component during the last day. DB Recovery Control messages start with the prefix DSP.
IMS Connect Messages
    Searches for IMS Connect component messages that occurred during the last day. IMS Connect messages start with the prefix HWS.
IMS Log Messages
    Searches for messages that occurred during the last day and that indicate how often IMS logs are rolled.
IMS Stopped Resources
    Searches for messages that occurred during the last day and that indicate the IMS and related components that stopped running.
IMS Pool Issues
    Searches for messages that occurred during the last day and that indicate IMS pool-related issues.
IMS Terminal Related Msgs
    Searches for messages that occurred during the last day and that indicate IMS terminal-related issues or terminals that stopped receiving messages.
IMS Locking Messages
    Searches for messages that occurred during the last day and that indicate the IMS resources that are locked.
Security for z/OS sample searches
RACF Messages
    Searches for RACF messages that occurred during the last day. Resource Access Control Facility (RACF) messages start with either of the following prefixes:
    v ICH
    v IRR
RACF Action, Decision, or Error
    Searches for RACF messages that occurred during the last day and that indicate any of the following situations:
    v Immediate action is required.
    v A decision is required.
    v An error occurred.
RACF Insufficient Access
    Searches for RACF messages that occurred during the last day and that indicate insufficient access authority.
RACF Insufficient Authority
    Searches for RACF messages that occurred during the last day and that indicate insufficient authority.
RACF Invalid Logon Attempts
    Searches for RACF messages that occurred during the last day and that indicate invalid logon attempts.
MQ for z/OS sample searches
MQ Messages
    Searches for MQ messages that occurred during the last day. MQ messages start with the prefix CSQ.
MQ Action, Decision, or Error
    Searches for MQ messages that occurred during the last day and that indicate any of the following situations:
    v Immediate action is required.
    v A decision is required.
    v An error occurred.
    The search is based on the MQ message IDs and on an action code of A, D, or E during the last day.
MQ Queue Manager Storage
    Searches for messages that indicate whether the MQ queue manager required more storage at any time during the last day.
MQ Logs Start and Stop
    Searches for messages that are related to the starting, stopping, and flushing of the MQ log data sets during the last day.
MQ Key Messages
    Searches for a set of predefined message numbers to determine whether any of the corresponding messages occurred in the last day.
MQ Interesting Informational
    Searches for a set of predefined message numbers for informational messages that can be analyzed to determine whether any of the corresponding messages occurred during the last day.
MQ Channel Initiator Errors
    Searches for error messages that indicate the occurrence of MQ channel initiator errors during the last day.
MQ Channel Errors
    Searches for error messages that indicate the occurrence of MQ channel errors during the last day.
MQ Buffer Pool Errors
    Searches for error messages that indicate the occurrence of MQ buffer pool errors during the last day.
SMF type 80-related records that the System Data Engine creates
The IBM Common Data Provider for z Systems System Data Engine collects a subset of the SMF data that is generated by the Resource Access Control Facility (RACF). This reference describes the types of records that the System Data Engine creates as it extracts relevant data from SMF type 80 records.
The System Data Engine creates the following record types:
v SMF80_COMMAND
v SMF80_LOGON
v SMF80_OMVS_RES_1
v SMF80_OMVS_RES_2
v SMF80_OMVS_SEC_1
v SMF80_OMVS_SEC_2
v SMF80_OPERATION
v SMF80_RESOURCE
From each SMF type 80 record that it collects, the System Data Engine uses the following information to determine what data to extract:
v SMF event in the SMF80EVT field
v RACF event code qualifier in the SMF80EVQ field
The System Data Engine excludes SMF events that occur for hierarchical storage management (HSM), for example, events where the value of the user ID SMF80USR is HSM.
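As a rough sketch of that selection logic: the field names SMF80EVT and SMF80USR come from the text, but the dictionary representation, the function name, and the partial event-to-record-type mapping are simplified assumptions; the real System Data Engine handles many more events than shown here.

```python
# Hypothetical sketch, not the System Data Engine's actual code.
COMMAND_EVENTS = set(range(8, 26))  # events 8 - 25 map to SMF80_COMMAND
LOGON_EVENT = 1                     # event 1 maps to SMF80_LOGON

def classify_smf80(record):
    """Return the output record type for one SMF type 80 record, or None."""
    # Events that occur for hierarchical storage management are excluded.
    if record.get("SMF80USR") == "HSM":
        return None
    event = record.get("SMF80EVT")
    if event == LOGON_EVENT:
        return "SMF80_LOGON"
    if event in COMMAND_EVENTS:
        return "SMF80_COMMAND"
    return None  # other events map to the remaining record types (not shown)

print(classify_smf80({"SMF80EVT": 1, "SMF80USR": "JSMITH"}))  # SMF80_LOGON
print(classify_smf80({"SMF80EVT": 1, "SMF80USR": "HSM"}))     # None
```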
For more information about SMF record type 80 records, see the following topics from the z/OS documentation in the IBM Knowledge Center:
v SMF record type 80: RACF processing record
v Format of SMF record type 80 records
v SMF record type 80 event codes and event code qualifiers
SMF80_COMMAND record type
SMF record type 80 records for events 8 - 25 are created when RACF commands fail because the user who ran them does not have sufficient authority. Relevant fields from these SMF event records are stored in the SMF80_COMMAND records that are created by the System Data Engine.
Table 42 describes the event code qualifiers for events 8 - 25, which provide more information about why the command failed.
Table 42. SMF80_COMMAND record type: event code qualifiers for events 8 - 25
Event code qualifier Description
1 Insufficient authority
2 Keyword violations detected
3 Successful listing of data sets
4 System error in listing of data sets
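A qualifier table like Table 42 translates naturally into a lookup structure when processing these records downstream. The dictionary entries below are taken from Table 42; the names are illustrative assumptions:

```python
# Lookup table built from Table 42 (SMF80_COMMAND, events 8 - 25).
SMF80_COMMAND_QUALIFIERS = {
    1: "Insufficient authority",
    2: "Keyword violations detected",
    3: "Successful listing of data sets",
    4: "System error in listing of data sets",
}

def describe_qualifier(evq):
    """Return the description for an event code qualifier, if known."""
    return SMF80_COMMAND_QUALIFIERS.get(evq, "Unknown qualifier")

print(describe_qualifier(2))   # Keyword violations detected
print(describe_qualifier(99))  # Unknown qualifier
```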
SMF80_LOGON record type
SMF record type 80 records for event 1 are created when RACF authentication fails because of incorrect user credentials, which prevents the user from accessing the system. Relevant fields from this SMF event record are stored in the SMF80_LOGON records that are created by the System Data Engine.
Table 43 describes the event code qualifiers for event 1, which provide more information about why the logon failed.
Table 43. SMF80_LOGON record type: event code qualifiers for event 1
Event code qualifier Description
1 Invalid password
2 Invalid group
3 Invalid object identifier (OID) card
4 Invalid terminal/console
5 Invalid application
6 Revoked user ID attempting access
7 User ID automatically revoked
9 Undefined user ID
10 Insufficient security label authority
11 Not authorized to security label
14 System now requires more authority
15 Remote job entry—job not authorized
16 Surrogate class is inactive
17 Submitter is not authorized by user
18 Submitter is not authorized to security label
19 User is not authorized to job
20 Warning—insufficient security label authority
21 Warning—security label missing from job, user, or profile
22 Warning—not authorized to security label
23 Security labels not compatible
24 Warning—security labels not compatible
25 Current password has expired
26 Invalid new password
27 Verification failed by installation
28 Group access has been revoked
29 Object identifier (OID) card is required
30 Network job entry—job not authorized
31 Warning—unknown user from trusted node propagated
32 Successful initiation using PassTicket
33 Attempted replay of PassTicket
34 Client security label not equivalent to server's
35 User automatically revoked due to inactivity
36 Passphrase is not valid
37 New passphrase is not valid
38 Current passphrase has expired
39 No RACF user ID found for distributed identity
SMF80_OMVS_RES record types
SMF record type 80 records for events 28 - 30 are created when the following z/OS UNIX operations occur: directory search, check access to directory, or check access to file. Relevant fields from these SMF event records are stored in the SMF80_OMVS_RES_1 and SMF80_OMVS_RES_2 records that are created by the System Data Engine.
Table 44 describes the event code qualifiers for events 28 - 30, which provide more information about the operation results.
Table 44. SMF80_OMVS_RES_1 and SMF80_OMVS_RES_2 record types: event code qualifiers for events 28 - 30
Event code qualifier Description
0 Access allowed
1 Not authorized to search directory
2 Security label failure
SMF80_OMVS_SEC record types
SMF record type 80 records for events 31 and 33 - 35 are created when the z/OS UNIX commands CHAUDIT, CHMOD, or CHOWN are entered, or when the SETID bits for a file are cleared. Relevant fields from these SMF event records are stored in the SMF80_OMVS_SEC_1 and SMF80_OMVS_SEC_2 records that are created by the System Data Engine.
Table 45, Table 46, Table 47, and Table 48 describe the event code qualifiers for events 31 and 33 - 35, which provide more information about the operation results.
Table 45. SMF80_OMVS_SEC_1 and SMF80_OMVS_SEC_2 record types: event code qualifiers for event 31
Event code qualifier Description
0 File's audit options changed
1 Caller does not have authority to change user audit options of specified file
2 Caller does not have authority to change auditor audit options
3 Security label failure
Table 46. SMF80_OMVS_SEC_1 and SMF80_OMVS_SEC_2 record types: event code qualifiers for event 33
Event code qualifier Description
0 File's mode changed
1 Caller does not have authority to change mode of specified file
2 Security label failure
Table 47. SMF80_OMVS_SEC_1 and SMF80_OMVS_SEC_2 record types: event code qualifiers for event 34
Event code qualifier Description
0 File's owner or group owner changed
1 Caller does not have authority to change owner or group owner of specified file
2 Security label failure
Table 48. SMF80_OMVS_SEC_1 and SMF80_OMVS_SEC_2 record types: event code qualifiers for event 35
Event code qualifier Description
0 S_ISUID, S_ISGID, and S_ISVTX bits changed to zero (write).
SMF80_OPERATION record type
SMF record type 80 records for events 2 - 7 are created when a z/OS resource that is protected by RACF is updated, deleted, or accessed by a user that is defined to RACF with the SPECIAL attribute. Relevant fields from these SMF event records are stored in the SMF80_OPERATION records that are created by the System Data Engine.
Table 49, Table 50, Table 51, Table 52, Table 53, and Table 54 describe the event code qualifiers for events 2 - 7, which provide more information about the operation results.
Table 49. SMF80_OPERATION record type: event code qualifiers for event 2
Event code qualifier Description
0 Successful access
1 Insufficient authority
2 Profile not found—RACFIND specified on macro
3 Access permitted due to warning
4 Failed due to PROTECTALL SETROPTS
5 Warning issued due to PROTECTALL SETROPTS
6 Insufficient category/SECLEVEL
7 Insufficient security label authority
8 Security label missing from job, user, or profile
9 Warning—insufficient security label authority
10 Warning—data set not cataloged
11 Data set not cataloged
12 Profile not found—required for authority checking
13 Warning—insufficient category/SECLEVEL
14 Warning—non-main execution environment
15 Conditional access allowed via basic mode program
Table 50. SMF80_OPERATION record type: event code qualifiers for event 3
Event code qualifier Description
0 Successful processing of new volume
1 Insufficient authority
2 Insufficient security label authority
3 Less specific profile exists with different security label
Table 51. SMF80_OPERATION record type: event code qualifiers for event 4
Event code qualifier Description
0 Successful rename
1 Invalid group
2 User not in group
3 Insufficient authority
4 Resource name already defined
5 User not defined to RACF
6 Resource not protected SETROPTS
7 Warning—resource not protected SETROPTS
8 User in second qualifier is not RACF defined
9 Less specific profile exists with different security label
10 Insufficient security label authority
11 Resource not protected by security label
12 New name not protected by security label
13 New security label must dominate old security label
14 Insufficient security label authority
15 Warning—resource not protected by security label
16 Warning—new name not protected by security label
17 Warning—new security label must dominate old security label
Table 52. SMF80_OPERATION record type: event code qualifiers for event 5
Event code qualifier Description
0 Successful scratch
1 Resource not found
2 Invalid volume
Table 53. SMF80_OPERATION record type: event code qualifiers for event 6
Event code qualifier Description
0 Successful deletion
Table 54. SMF80_OPERATION record type: event code qualifiers for event 7
Event code qualifier Description
0 Successful definition
1 Group undefined
2 User not in group
3 Insufficient authority
4 Resource name already defined
5 User not defined to RACF
6 Resource not protected
7 Warning—resource not protected
8 Warning—security label missing from job, user, or profile
9 Insufficient security label authority
10 User in second qualifier is not defined to RACF
11 Insufficient security label authority
12 Less specific profile exists with a different security label
SMF80_RESOURCE record type
SMF record type 80 records for event 2 are created when a z/OS resource that is protected by RACF is updated, deleted, or accessed by a user. Relevant fields from these SMF event records are stored in the SMF80_RESOURCE records that are created by the System Data Engine.
Table 55 describes the event code qualifiers for event 2, which provide more information about the operation results.
Table 55. SMF80_RESOURCE record type: event code qualifiers for event 2
Event code qualifier Description
0 Successful access
1 Insufficient authority
2 Profile not found—RACFIND specified on macro
3 Access permitted due to warning
4 Failed due to PROTECTALL SETROPTS
5 Warning issued due to PROTECTALL SETROPTS
6 Insufficient category/SECLEVEL
7 Insufficient security label authority
8 Security label missing from job, user, or profile
9 Warning—insufficient security label authority
10 Warning—data set not cataloged
11 Data set not cataloged
12 Profile not found—required for authority checking
13 Warning—insufficient category/SECLEVEL
14 Warning—non-main execution environment
15 Conditional access allowed via basic mode program
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan, Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan
The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.
© Copyright IBM Corp. 2014, 2016
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758
U.S.A.
Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.
The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.
Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.
Terms and conditions for product documentation
Permissions for the use of these publications are granted subject to the following terms and conditions.
Applicability
These terms and conditions are in addition to any terms of use for the IBM website.
Personal use
You may reproduce these publications for your personal, noncommercial use provided that all proprietary notices are preserved. You may not distribute, display or make derivative work of these publications, or any portion thereof, without the express consent of IBM.
Commercial use
You may reproduce, distribute and display these publications solely within your enterprise provided that all proprietary notices are preserved. You may not make derivative works of these publications, or reproduce, distribute or display these publications or any portion thereof outside your enterprise, without the express consent of IBM.
Rights
Except as expressly granted in this permission, no other permissions, licenses or rights are granted, either express or implied, to the publications or any information, data, software or other intellectual property contained therein.
IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use of the publications is detrimental to its interest or, as determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full compliance with all applicable laws and regulations, including all United States export laws and regulations.
IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS ARE PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at http://www.ibm.com/legal/copytrade.shtml.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
IBM®
Printed in USA