

ibm.com/redbooks

End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2

Vasfi Gucer
Michael A. Lowry

Finn Bastrup Knudsen

Plan and implement your end-to-end scheduling environment

Experiment with real-life scenarios

Learn best practices and troubleshooting

Front cover


End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2

September 2004

International Technical Support Organization

SG24-6624-00


© Copyright International Business Machines Corporation 2004. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

First Edition (September 2004)

This edition applies to IBM Tivoli Workload Scheduler Version 8.2 and IBM Tivoli Workload Scheduler for z/OS Version 8.2.

Note: Before using this information and the product it supports, read the information in “Notices” on page ix.


Contents

Notices  ix
Trademarks  x

Preface  xi
The team that wrote this redbook  xi
Notice  xii
Become a published author  xiii
Comments welcome  xiii

Chapter 1. Introduction  1
1.1 Job scheduling  2
1.2 Introduction to end-to-end scheduling  3
1.3 Introduction to Tivoli Workload Scheduler for z/OS  4
1.3.1 Overview of Tivoli Workload Scheduler for z/OS  4
1.3.2 Tivoli Workload Scheduler for z/OS architecture  4
1.4 Introduction to Tivoli Workload Scheduler  5
1.4.1 Overview of IBM Tivoli Workload Scheduler  5
1.4.2 IBM Tivoli Workload Scheduler architecture  6
1.5 Benefits of integrating Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler  7
1.6 Summary of enhancements in V8.2 related to end-to-end scheduling  8
1.6.1 New functions related with performance and scalability  8
1.6.2 General enhancements  10
1.6.3 Security enhancements  20
1.7 The terminology used in this book  21

Chapter 2. End-to-end scheduling architecture  25
2.1 IBM Tivoli Workload Scheduler for z/OS architecture  27
2.1.1 Tivoli Workload Scheduler for z/OS configuration  28
2.1.2 Tivoli Workload Scheduler for z/OS database objects  32
2.1.3 Tivoli Workload Scheduler for z/OS plans  37
2.1.4 Other Tivoli Workload Scheduler for z/OS features  44
2.2 Tivoli Workload Scheduler architecture  50
2.2.1 The IBM Tivoli Workload Scheduler network  51
2.2.2 Tivoli Workload Scheduler workstation types  54
2.2.3 Tivoli Workload Scheduler topology  56
2.2.4 IBM Tivoli Workload Scheduler components  57
2.2.5 IBM Tivoli Workload Scheduler plan  58
2.3 End-to-end scheduling architecture  59
2.3.1 How end-to-end scheduling works  60
2.3.2 Tivoli Workload Scheduler for z/OS end-to-end components  62
2.3.3 Tivoli Workload Scheduler for z/OS end-to-end configuration  68
2.3.4 Tivoli Workload Scheduler for z/OS end-to-end plans  75
2.3.5 Making the end-to-end scheduling system fault tolerant  84
2.3.6 Benefits of end-to-end scheduling  86
2.4 Job Scheduling Console and related components  89
2.4.1 A brief introduction to the Tivoli Management Framework  90
2.4.2 Job Scheduling Services (JSS)  91
2.4.3 Connectors  91
2.5 Job log retrieval in an end-to-end environment  98
2.5.1 Job log retrieval via the Tivoli Workload Scheduler connector  98
2.5.2 Job log retrieval via the OPC connector  99
2.5.3 Job log retrieval when firewalls are involved  101
2.6 Tivoli Workload Scheduler, important files, and directory structure  103
2.7 conman commands in the end-to-end environment  106

Chapter 3. Planning end-to-end scheduling with Tivoli Workload Scheduler 8.2  109
3.1 Different ways to do end-to-end scheduling  111
3.2 The rationale behind end-to-end scheduling  112
3.3 Before you start the installation  113
3.3.1 How to order the Tivoli Workload Scheduler software  114
3.3.2 Where to find more information for planning  116
3.4 Planning end-to-end scheduling with Tivoli Workload Scheduler for z/OS  116
3.4.1 Tivoli Workload Scheduler for z/OS documentation  117
3.4.2 Service updates (PSP bucket, APARs, and PTFs)  117
3.4.3 Tivoli Workload Scheduler for z/OS started tasks for end-to-end scheduling  123
3.4.4 Hierarchical File System (HFS) cluster  124
3.4.5 Data sets related to end-to-end scheduling  127
3.4.6 TCP/IP considerations for end-to-end server in sysplex  129
3.4.7 Upgrading from Tivoli Workload Scheduler for z/OS 8.1 end-to-end scheduling  137
3.5 Planning for end-to-end scheduling with Tivoli Workload Scheduler  139
3.5.1 Tivoli Workload Scheduler publications and documentation  139
3.5.2 Tivoli Workload Scheduler service updates (fix packs)  140
3.5.3 System and software requirements  140
3.5.4 Network planning and considerations  141
3.5.5 Backup domain manager  142
3.5.6 Performance considerations  144
3.5.7 Fault-tolerant agent (FTA) naming conventions  146
3.6 Planning for the Job Scheduling Console  149
3.6.1 Job Scheduling Console documentation  150
3.6.2 Job Scheduling Console service (fix packs)  150
3.6.3 Compatibility and migration considerations for the JSC  151
3.6.4 Planning for Job Scheduling Console availability  153
3.6.5 Planning for server started task for JSC communication  154
3.7 Planning for migration or upgrade from previous versions  155
3.8 Planning for maintenance or upgrades  156

Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling  157
4.1 Before the installation is started  158
4.2 Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling  159
4.2.1 Executing EQQJOBS installation aid  162
4.2.2 Defining Tivoli Workload Scheduler for z/OS subsystems  167
4.2.3 Allocate end-to-end data sets  168
4.2.4 Create and customize the work directory  170
4.2.5 Create started task procedures for Tivoli Workload Scheduler for z/OS  173
4.2.6 Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling  174
4.2.7 Initialization statements used to describe the topology  184
4.2.8 Example of DOMREC and CPUREC definitions  197
4.2.9 The JTOPTS TWSJOBNAME() parameter  200
4.2.10 Verify end-to-end installation in Tivoli Workload Scheduler for z/OS  203
4.3 Installing Tivoli Workload Scheduler in an end-to-end environment  207
4.3.1 Installing multiple instances of Tivoli Workload Scheduler on one machine  207
4.3.2 Verify the Tivoli Workload Scheduler installation  211
4.4 Define, activate, verify fault-tolerant workstations  211
4.4.1 Define fault-tolerant workstation in Tivoli Workload Scheduler controller workstation database  212
4.4.2 Activate the fault-tolerant workstation definition  213
4.4.3 Verify that the fault-tolerant workstations are active and linked  214
4.5 Creating fault-tolerant workstation job definitions and job streams  217
4.5.1 Centralized and non-centralized scripts  217
4.5.2 Definition of centralized scripts  219
4.5.3 Definition of non-centralized scripts  221
4.5.4 Combination of centralized script and VARSUB, JOBREC parameters  232
4.5.5 Definition of FTW jobs and job streams in the controller  234
4.6 Verification test of end-to-end scheduling  235
4.6.1 Verification of job with centralized script definitions  236
4.6.2 Verification of job with non-centralized scripts  239
4.6.3 Verification of centralized script with JOBREC parameters  242
4.7 Activate support for the Tivoli Workload Scheduler Job Scheduling Console  245
4.7.1 Install and start Tivoli Workload Scheduler for z/OS JSC server  246
4.7.2 Installing and configuring Tivoli Management Framework 4.1  252
4.7.3 Alternate method using Tivoli Management Framework 3.7.1  253
4.7.4 Creating connector instances  255
4.7.5 Creating TMF administrators for Tivoli Workload Scheduler  257
4.7.6 Installing the Job Scheduling Console  261

Chapter 5. End-to-end implementation scenarios and examples  265
5.1 Description of our environment and systems  266
5.2 Creation of the Symphony file in detail  273
5.3 Migrating Tivoli OPC tracker agents to end-to-end scheduling  274
5.3.1 Migration benefits  274
5.3.2 Migration planning  276
5.3.3 Migration checklist  277
5.3.4 Migration actions  278
5.3.5 Migrating backward  288
5.4 Conversion from Tivoli Workload Scheduler network to Tivoli Workload Scheduler for z/OS managed network  288
5.4.1 Illustration of the conversion process  289
5.4.2 Considerations before doing the conversion  291
5.4.3 Conversion process from Tivoli Workload Scheduler to Tivoli Workload Scheduler for z/OS  293
5.4.4 Some guidelines to automate the conversion process  299
5.5 Tivoli Workload Scheduler for z/OS end-to-end fail-over scenarios  303
5.5.1 Configure Tivoli Workload Scheduler for z/OS backup engines  304
5.5.2 Configure DVIPA for Tivoli Workload Scheduler for z/OS end-to-end server  305
5.5.3 Configure backup domain manager for first-level domain manager  306
5.5.4 Switch to Tivoli Workload Scheduler backup domain manager  308
5.5.5 Implementing Tivoli Workload Scheduler high availability on high availability environments  318
5.6 Backup and maintenance guidelines for FTAs  318
5.6.1 Backup of the Tivoli Workload Scheduler FTAs  319
5.6.2 Stdlist files on Tivoli Workload Scheduler FTAs  319
5.6.3 Auditing log files on Tivoli Workload Scheduler FTAs  321
5.6.4 Monitoring file systems on Tivoli Workload Scheduler FTAs  321
5.6.5 Central repositories for important Tivoli Workload Scheduler files  322
5.7 Security on fault-tolerant agents  323
5.7.1 The security file  325
5.7.2 Sample security file  327
5.8 End-to-end scheduling tips and tricks  331
5.8.1 File dependencies in the end-to-end environment  331
5.8.2 Handling offline or unlinked workstations  332
5.8.3 Using dummy jobs  334
5.8.4 Placing job scripts in the same directories on FTAs  334
5.8.5 Common errors for jobs on fault-tolerant workstations  334
5.8.6 Problems with port numbers  336
5.8.7 Cannot switch to new Symphony file (EQQPT52E) messages  340

Appendix A. Connector reference  343
Setting the Tivoli environment  344
Authorization roles required  344
Working with Tivoli Workload Scheduler for z/OS connector instances  344
The wopcconn command  345
Working with Tivoli Workload Scheduler connector instances  346
The wtwsconn.sh command  347
Useful Tivoli Framework commands  348

Related publications  349
IBM Redbooks  349
Other publications  349
Online resources  350
How to get IBM Redbooks  350
Help from IBM  350

Abbreviations and acronyms  351

Index  353


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.


Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX®, AS/400®, HACMP™, IBM®, Language Environment®, Maestro™, MVS™, NetView®, OS/390®, OS/400®, RACF®, Redbooks™, Redbooks (logo)™, S/390®, ServicePac®, Tivoli®, Tivoli Enterprise Console®, TME®, VTAM®, z/OS®, zSeries®

The following terms are trademarks of other companies:

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel is a trademark of Intel Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, and service names may be trademarks or service marks of others.


Preface

The beginning of the new century sees the data center with a mix of work, hardware, and operating systems previously undreamed of. Today’s challenge is to manage disparate systems with minimal effort and maximum reliability. People experienced in scheduling traditional host-based batch work must now manage distributed systems, and those working in the distributed environment must take responsibility for work running on the corporate OS/390® system.

This IBM® Redbook considers how best to provide end-to-end scheduling using IBM Tivoli® Workload Scheduler Version 8.2, both distributed (previously known as Maestro™) and mainframe (previously known as OPC) components.

In this book, we provide the information for installing the necessary Tivoli Workload Scheduler software components and configuring them to communicate with each other. In addition to technical information, we consider various scenarios that may be encountered in the enterprise and suggest practical solutions. We describe how to manage work and dependencies across both environments using a single point of control.

We believe that this redbook will be a valuable reference for IT specialists who implement end-to-end scheduling with Tivoli Workload Scheduler 8.2.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, Austin Center.

Vasfi Gucer is a Project Leader at the International Technical Support Organization, Austin Center. He worked for IBM Turkey for 10 years and has been with the ITSO since January 1999. He has more than 10 years of experience in the areas of systems management and networking hardware and software on mainframe and distributed platforms. He has worked on various Tivoli customer projects as a Systems Architect in Turkey and the United States. Vasfi is also an IBM Certified Senior IT Specialist.

Michael A. Lowry is an IBM Certified Consultant and Instructor currently working for IBM in Stockholm, Sweden. Michael does support, consulting, and training for IBM customers, primarily in Europe. He has 10 years of experience in the IT services business and has worked for IBM since 1996. Michael studied engineering and biology at the University of Texas in Austin, his hometown.


Before moving to Sweden, he worked in Austin for Apple, IBM, and the IBM Tivoli Workload Scheduler Support Team at Tivoli Systems. He has five years of experience with Tivoli Workload Scheduler and has extensive experience with IBM network and storage management products. He is also an IBM Certified AIX® Support Professional.

Finn Bastrup Knudsen is an Advisory IT Specialist in Integrated Technology Services (ITS) in IBM Global Services in Copenhagen, Denmark. He has 12 years of experience working with IBM Tivoli Workload Scheduler for z/OS® (OPC) and four years of experience working with IBM Tivoli Workload Scheduler. Finn primarily does consultation and services at customer sites, as well as IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler training. He is a certified Tivoli Instructor in IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler. He has worked at IBM for 13 years. His areas of expertise include IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler.

Also thanks to the following people for their contributions to this project:

International Technical Support Organization, Austin Center
Budi Darmawan and Betsy Thaggard

IBM Italy
Angelo D'ambrosio, Paolo Falsi, Antonio Gallotti, Pietro Iannucci, Valeria Perticara

IBM USA
Robert Haimowitz, Stephen Viola

IBM Germany
Stefan Franke

Notice

This publication is intended to help Tivoli specialists implement an end-to-end scheduling environment with IBM Tivoli Workload Scheduler 8.2. The information in this publication is not intended as the specification of any programming interfaces that are provided by Tivoli Workload Scheduler 8.2. See the PUBLICATIONS section of the IBM Programming Announcement for Tivoli Workload Scheduler 8.2 for more information about what publications are considered to be product documentation.


Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You will team with IBM technical professionals, Business Partners, and/or customers.

Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you will develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us. We want our Redbooks™ to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

- Use the online Contact us review redbook form found at:
  ibm.com/redbooks

- Send your comments in an e-mail to:
  [email protected]

- Mail your comments to:
  IBM Corporation, International Technical Support Organization
  Dept. JN9B Building 905 Internal Zip 2834
  11501 Burnet Road
  Austin, Texas 78758-3493



Chapter 1. Introduction

IBM Tivoli Workload Scheduler for z/OS Version 8.2 introduces many new features and further integrates the OPC-based and Maestro-based scheduling engines.

In this chapter, we give a brief introduction to the IBM Tivoli Workload Scheduler 8.2 suite and summarize the functions that are introduced in Version 8.2:

- “Job scheduling” on page 2
- “Introduction to end-to-end scheduling” on page 3
- “Introduction to Tivoli Workload Scheduler for z/OS” on page 4
- “Introduction to Tivoli Workload Scheduler” on page 5
- “Benefits of integrating Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler” on page 7
- “Summary of enhancements in V8.2 related to end-to-end scheduling” on page 8
- “The terminology used in this book” on page 21


1.1 Job scheduling

Scheduling is the nucleus of the data center. Orderly, reliable sequencing and management of process execution is an essential part of IT management. The IT environment consists of multiple strategic applications, such as SAP R/3 and Oracle, payroll, invoicing, e-commerce, and order handling. These applications run on many different operating systems and platforms. Legacy systems must be maintained and integrated with newer systems.

Workloads are increasing, accelerated by electronic commerce. Staffing and training requirements increase, and many platform experts are needed. There are too many consoles and no overall point of control. Constant (24x7) availability is essential and must be maintained through migrations, mergers, acquisitions, and consolidations.

Dependencies exist between jobs in different environments. For example, a customer can use a Web browser to fill out an order form that triggers a UNIX® job that acknowledges the order, an AS/400® job that orders parts, a z/OS job that debits the customer’s bank account, and a Windows NT® job that prints an invoice and address label. Each job must run only after the job before it has completed.

The IBM Tivoli Workload Scheduler Version 8.2 suite provides an integrated solution for running this kind of complicated workload. Its Job Scheduling Console provides a centralized point of control and unified interface for managing the workload regardless of the platform or operating system on which the jobs run.

The Tivoli Workload Scheduler 8.2 suite includes IBM Tivoli Workload Scheduler, IBM Tivoli Workload Scheduler for z/OS, and the Job Scheduling Console. Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS can be used separately or together.

End-to-end scheduling means using both products together, with an IBM mainframe acting as the scheduling controller for a network of other workstations.

Because Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS have different histories and work on different platforms, someone who is familiar with one of the programs may not be familiar with the other. For this reason, we give a short introduction to each product separately and then proceed to discuss how the two programs work together.


1.2 Introduction to end-to-end scheduling

End-to-end scheduling means scheduling workload across all computing resources in your enterprise, from the mainframe in your data center, to the servers in your regional headquarters, all the way to the workstations in your local office. The Tivoli Workload Scheduler end-to-end scheduling solution is a system whereby scheduling throughout the network is defined, managed, controlled, and tracked from a single IBM mainframe or sysplex.

End-to-end scheduling requires using two different programs: Tivoli Workload Scheduler for z/OS on the mainframe, and Tivoli Workload Scheduler on other operating systems (UNIX, Windows®, and OS/400®). This is shown in Figure 1-1.

Figure 1-1 Both schedulers are required for end-to-end scheduling

Despite the similar names, Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler are quite different and have distinct histories. IBM Tivoli Workload Scheduler for z/OS was originally called OPC. It was developed by IBM in the early days of the mainframe. IBM Tivoli Workload Scheduler was originally developed by a company called Unison Software. Unison was purchased by Tivoli, and Tivoli was then purchased by IBM.

Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler have slightly different ways of working, but the two programs have many features in common. IBM has continued development of both programs toward the goal of providing closer and closer integration between them. The reason for this integration is simple: to facilitate an integrated scheduling system across all operating systems.

It should be obvious that end-to-end scheduling depends on using the mainframe as the central point of control for the scheduling network. There are other ways to integrate scheduling between z/OS and other operating systems. We will discuss these in the following sections.

1.3 Introduction to Tivoli Workload Scheduler for z/OS

IBM Tivoli Workload Scheduler for z/OS has been scheduling and controlling batch workloads in data centers since 1977. Originally called Operations Planning and Control (OPC), the product has been extensively developed and extended to meet the increasing demands of customers worldwide. An overnight workload consisting of 100,000 production jobs is not unusual, and Tivoli Workload Scheduler for z/OS can easily manage this kind of workload.

1.3.1 Overview of Tivoli Workload Scheduler for z/OS

IBM Tivoli Workload Scheduler for z/OS databases contain all of the information about the work that is to be run, when it should run, and the resources that are needed and available. This information is used to calculate a forecast called the long-term plan. Data center staff can check this to confirm that the desired work is being scheduled when required. The long-term plan usually covers a time range of four to twelve weeks. The current plan is produced based on the long-term plan and the databases. The current plan usually covers 24 hours and is a detailed production schedule. Tivoli Workload Scheduler for z/OS uses the current plan to submit jobs to the appropriate processor at the appropriate time. All jobs in the current plan have Tivoli Workload Scheduler for z/OS status codes that indicate the progress of work. When a job’s predecessors are complete, Tivoli Workload Scheduler for z/OS considers it ready for submission. It verifies that all requested resources are available, and when these conditions are met, it causes the job to be submitted.

1.3.2 Tivoli Workload Scheduler for z/OS architecture

IBM Tivoli Workload Scheduler for z/OS consists of a controller and one or more trackers. The controller, which runs on a z/OS system, manages the Tivoli Workload Scheduler for z/OS databases and the long-term and current plans. The controller schedules work and causes jobs to be submitted to the appropriate system at the appropriate time.


Trackers are installed on every system managed by the controller. The tracker is the link between the controller and the managed system. The tracker submits jobs when the controller instructs it to do so, and it passes job start and job end information back to the controller.

The controller can schedule jobs on z/OS system using trackers or on other operating systems using fault-tolerant agents (FTAs). FTAs can be run on many operating systems, including AIX, Linux®, Solaris, HP-UX, OS/400, and Windows. FTAs run IBM Tivoli Workload Scheduler, formerly called Maestro.

The most common way of working with the controller is via ISPF panels. However, several other methods are available, including Program Interfaces, TSO commands, and the Job Scheduling Console.

The Job Scheduling Console (JSC) is a Java™-based graphical user interface for controlling and monitoring workload on the mainframe and other platforms. The first version of JSC was released at the same time as Tivoli OPC Version 2.3. The current version of JSC (1.3) has been updated with several new functions specific to Tivoli Workload Scheduler for z/OS. JSC provides a common interface to both Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler.

For more information about IBM Tivoli Workload Scheduler for z/OS architecture, see Chapter 2, “End-to-end scheduling architecture” on page 25.

1.4 Introduction to Tivoli Workload Scheduler

IBM Tivoli Workload Scheduler is descended from the Unison Maestro program. Unison Maestro was developed by Unison Software on the Hewlett-Packard MPE operating system. It was then ported to UNIX and Windows. In its various manifestations, Tivoli Workload Scheduler has a 17-year track record. During the processing day, Tivoli Workload Scheduler manages the production environment and automates most operator activities. It prepares jobs for execution, resolves interdependencies, and launches and tracks each job. Because jobs begin as soon as their dependencies are satisfied, idle time is minimized. Jobs never run out of sequence. If a job fails, IBM Tivoli Workload Scheduler can handle the recovery process with little or no operator intervention.

1.4.1 Overview of IBM Tivoli Workload Scheduler

As with IBM Tivoli Workload Scheduler for z/OS, there are two basic aspects to job scheduling in IBM Tivoli Workload Scheduler: the database and the plan. The database contains all definitions for scheduling objects, such as jobs, job streams, resources, and workstations. It also holds statistics of job and job stream execution, as well as information on the user ID that created an object and when an object was last modified. The plan contains all job scheduling activity planned for a period of one day. In IBM Tivoli Workload Scheduler, the plan is created every 24 hours and consists of all the jobs, job streams, and dependency objects that are scheduled to execute for that day. Job streams that do not complete successfully can be carried forward into the next day’s plan.

1.4.2 IBM Tivoli Workload Scheduler architecture

A typical IBM Tivoli Workload Scheduler network consists of a master domain manager, domain managers, and fault-tolerant agents. The master domain manager, sometimes referred to as just the master, contains the centralized database files that store all defined scheduling objects. The master creates the plan, called Symphony, at the start of each day.

Each domain manager is responsible for distribution of the plan to the fault-tolerant agents (FTAs) in its domain. A domain manager also handles resolution of dependencies between FTAs in its domain.

FTAs are the workhorses of a Tivoli Workload Scheduler network. FTAs are where most jobs are run. As their name implies, fault-tolerant agents are fault tolerant. This means that in the event of a loss of communication with the domain manager, FTAs are capable of resolving local dependencies and launching their jobs without interruption. FTAs are capable of this because each FTA has its own copy of the plan. The plan contains a complete set of scheduling instructions for the production day. Similarly, a domain manager can resolve dependencies between FTAs in its domain even in the event of a loss of communication with the master, because the domain manager’s plan receives updates from all subordinate FTAs and contains the authoritative status of all jobs in that domain.

The master domain manager is updated with the status of all jobs in the entire IBM Tivoli Workload Scheduler network. Logging and monitoring of the IBM Tivoli Workload Scheduler network is performed on the master.

Starting with Tivoli Workload Scheduler Version 7.0, a new Java-based graphical user interface was made available to provide an easy-to-use interface to Tivoli Workload Scheduler. This new GUI is called Job Scheduling Console (JSC). The current version of JSC has been updated with several functions specific to Tivoli Workload Scheduler. The JSC provides a common interface to both Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS.

For more about IBM Tivoli Workload Scheduler architecture, see Chapter 2, “End-to-end scheduling architecture” on page 25.


1.5 Benefits of integrating Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler

Both Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler have individual strengths. While an enterprise running mainframe and non-mainframe systems could schedule and control work using only one of these tools or using both tools separately, a complete solution requires that Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler work together.

The Tivoli Workload Scheduler for z/OS long-term plan gives peace of mind by showing the workload forecast weeks or months into the future. Tivoli Workload Scheduler fault-tolerant agents go right on running jobs even if they lose communication with the domain manager. Tivoli Workload Scheduler for z/OS manages huge numbers of jobs through a sysplex of connected z/OS systems. Tivoli Workload Scheduler extended agents can control work on applications such as SAP R/3 and Oracle.

Many data centers need to schedule significant amounts of both mainframe and non-mainframe jobs. It is often desirable to have a single point of control for scheduling on all systems in the enterprise, regardless of platform, operating system, or application. These businesses would probably benefit from implementing the end-to-end scheduling configuration. End-to-end scheduling enables the business to make the most of its computing resources.

That said, the end-to-end scheduling configuration is not necessarily the best way to go for every enterprise. Some computing environments would probably benefit from keeping their mainframe and non-mainframe schedulers separate. Others would be better served by integrating the two schedulers in a different way (for example, z/OS [or MVS™] extended agents). Enterprises with a majority of jobs running on UNIX and Windows servers might not want to cede control of these jobs to the mainframe. Because the end-to-end solution involves software components on both mainframe and non-mainframe systems, there will have to be a high level of cooperation between your mainframe operators and your UNIX and Windows system administrators. Careful consideration of the requirements of end-to-end scheduling is necessary before going down this path.

There are also several important decisions that must be made before beginning an implementation of end-to-end scheduling. For example, there is a trade-off between centralized control and fault tolerance. Careful planning now can save you time and trouble later. In Chapter 3, “Planning end-to-end scheduling with Tivoli Workload Scheduler 8.2” on page 109, we explain in detail the decisions that must be made prior to implementation. We strongly recommend that you read this chapter in full before beginning any implementation.


1.6 Summary of enhancements in V8.2 related to end-to-end scheduling

Version 8.2 is the latest version of both IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS. In this section we cover the new functions that affect end-to-end scheduling in three categories.

1.6.1 New functions related with performance and scalability

Several features are now available with IBM Tivoli Workload Scheduler for z/OS 8.2 that directly or indirectly affect performance.

Multiple first-level domain managers

In IBM Tivoli Workload Scheduler for z/OS 8.1, there was a limitation of only one first-level domain manager (called the primary domain manager). In Version 8.2, you can have multiple first-level domain managers (that is, the level immediately below OPCMASTER). See Figure 1-2 on page 9.

This allows greater flexibility and scalability and eliminates a potential performance bottleneck. It also allows greater freedom in defining your Tivoli Workload Scheduler distributed network.
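As an illustration, the sketch below shows how two first-level domains might be declared in the end-to-end topology initialization statements (see 4.2.7 and the DOMREC examples in 4.2.8). This is only a sketch based on the DOMREC keywords discussed later in this book; the domain names (DOMAINA, DOMAINB) and domain manager workstation names (FDMA, FDMB) are invented, and your own topology definitions will differ.

DOMREC DOMAIN(DOMAINA) DOMMNGR(FDMA) DOMPARENT(MASTERDM)
DOMREC DOMAIN(DOMAINB) DOMMNGR(FDMB) DOMPARENT(MASTERDM)

Because both DOMREC statements name MASTERDM as the parent domain, both domain managers report directly to the Tivoli Workload Scheduler for z/OS engine, which is the topology that was not possible in Version 8.1.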


Figure 1-2 IBM Tivoli Workload Scheduler network with two first-level domains

Improved SCRIPTLIB parser

The job definitions for non-centralized scripts are kept in members of the SCRPTLIB data set (EQQSCLIB DD statement). The definitions are specified as keywords and parameter definitions, as shown in Example 1-1.

Example 1-1 SCRPTLIB dataset

BROWSE    TWS.INST.SCRPTLIB(AIXJOB01) - 01.08        Line 00000000 Col 001
Command ===>                                              Scroll ===>
********************************* Top of Data *********************************
/* Job to be executed on AIX machines */
VARSUB TABLES(FTWTABLE) PREFIX('&') VARFAIL(YES) TRUNCATE(NO)
JOBREC JOBSCR('&TWSHOME./scripts/return_rc.sh 2') RCCONDSUCC('(RC=4) OR (RC=6)')
RECOVERY OPTION(STOP) MESSAGE('Reply Yes when OK to continue')
******************************** Bottom of Data *******************************

The information in the SCRPTLIB member must be parsed every time a job is added to the Symphony file (both at Symphony creation or dynamically).

In IBM Tivoli Workload Scheduler 8.1, the TSO parser was used, but this caused a major performance issue: up to 70% of the time that it took to create a Symphony file was spent parsing the SCRIPTLIB library members. In Version 8.2, a new parser has been implemented that significantly reduces the parsing time and consequently the Symphony file creation time.

Check server status before Symphony file creation

In an end-to-end configuration, daily planning batch jobs require that both the controller and the server are active, to be able to synchronize all the tasks and avoid unprocessed events being left in the event files. If the server is not active, the daily planning batch process now fails at the beginning to avoid pointless extra processing. Two new log messages show the status of the end-to-end server:

EQQ3120E END-TO-END SERVER NOT AVAILABLE
EQQZ193I END-TO-END TRANSLATOR SERVER PROCESS IS NOW AVAILABLE

Improved job log retrieval performance

In IBM Tivoli Workload Scheduler 8.1, the thread structure of the Translator process meant that only the usual incoming events were immediately notified to the controller; job log events were detected by the controller only when another event arrived or after a 30-second timeout.

In IBM Tivoli Workload Scheduler 8.2, a new input-writer thread has been implemented that manages the writing of events to the input queue and takes input from both the input translator and the job log retriever. This enables the job log retriever to test whether there is room on the input queue and, if there is not, to loop until enough space is available. Meanwhile, the input translator can continue to write its smaller events to the queue.

1.6.2 General enhancements

In this section, we cover enhancements in the general category.

Centralized Script Library Management

In order to ease the migration path from OPC tracker agents to IBM Tivoli Workload Scheduler Distributed Agents, a new function has been introduced in Tivoli Workload Scheduler 8.2 called Centralized Script Library Management (or Centralized Scripting). It is now possible to use the Tivoli Workload Scheduler for z/OS engine as the centralized repository for scripts of distributed jobs.


A centralized script is stored in the JOBLIB, and it provides features that were available on OPC tracker agents, such as:

- JCL editing
- Variable substitution and job setup
- Automatic recovery
- Support for usage of the job-submit exit (EQQUX001)

Rules for defining centralized scripts

To define a centralized script in the JOBLIB, the following rules must be considered:

- The lines that start with //* OPC, //*%OPC, and //*>OPC are used for variable substitution and automatic recovery. They are removed before the script is downloaded to the distributed agent.

- Each line runs from column 1 to column 80.

- A backslash (\) in column 80 is the continuation character.

- Blanks at the end of a line are automatically removed.

These rules guarantee the compatibility with the old tracker agent jobs.
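To make the rules concrete, here is a minimal sketch of what a centralized script member in the JOBLIB might look like. It is illustrative only: the member content, the script path, and the use of the scheduler-supplied date variable &OYMD1. are our own assumptions, not an example taken from the product documentation. The //* OPC and //*%OPC lines are processed by the controller and stripped before the script is sent to the fault-tolerant agent.

//* OPC  Centralized script for a daily extract job on an FTA
//*%OPC SCAN
echo "Starting daily extract for date &OYMD1."
/opt/tws/scripts/daily_extract.sh &OYMD1.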

For more details, refer to 4.5.2, “Definition of centralized scripts” on page 219.

A new data set, EQQTWSCS, has been introduced with this new release to facilitate centralized scripting. EQQTWSCS is a PDSE data set used to temporarily store a script when it is downloaded from the JOBLIB data set to the agent for its submission.
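One way to allocate such a PDSE is sketched below. The job name, data set name, space, and DCB attributes are assumptions on our part, shown only to make the role of the data set concrete; the EQQJOBS-generated sample jobs contain the supported allocation JCL for your installation.

//ALLOCCS  JOB (ACCT),'ALLOC EQQTWSCS',CLASS=A,MSGCLASS=X
//* Allocate a PDSE to be referenced by the EQQTWSCS DD statement
//* (data set name and attributes are examples only)
//ALLOC    EXEC PGM=IEFBR14
//CS       DD DSN=TWS.INST.CS,
//            DISP=(NEW,CATLG),
//            DSNTYPE=LIBRARY,
//            SPACE=(CYL,(1,1)),
//            DCB=(RECFM=FB,LRECL=80)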

User interface changes for the centralized script

Centralized scripting required changes to several Tivoli Workload Scheduler for z/OS interfaces, such as ISPF, the Job Scheduling Console, and a number of batch interfaces. In this section, we cover the changes to the ISPF and Job Scheduling Console user interfaces.

In ISPF, a new job option has been added to specify whether an operation that runs on a fault-tolerant workstation has a centralized script. Its value can be Y or N:

- Y if the job has its script stored centrally in the JOBLIB.
- N if the script is stored locally and the job has its job definition in the SCRIPTLIB.

Note: The centralized script feature is not supported for fault-tolerant jobs running on an AS/400 fault-tolerant agent.

Note: The SCRIPTLIB follows TSO rules, so the rules for defining a centralized script in the JOBLIB differ from those for defining the JOBSCR and JOBCMD of a non-centralized script.

In the database, the value of this new job option can be modified when adding or modifying an application or operation. It can be set for every operation, without workstation checking. When a new operation is created, the default value for this option is N. For non-FTW (fault-tolerant workstation) operations, the value of the option is automatically changed to Y during Daily Plan or when exiting the Modify an occurrence or Create an occurrence dialog.

The new Centralized Script option was added for operations in the Application Description database and is always editable (Figure 1-3).

Figure 1-3 CENTRALIZED SCRIPT option in the AD dialog

The Centralized Script option also has been added for operations in the current plan. It is editable only when adding a new operation. It can be browsed when modifying an operation (Figure 1-4 on page 13).


Figure 1-4 CENTRALIZED SCRIPT option in the CP dialog

Similarly, the Centralized Script option has been added to the Job Scheduling Console dialog for creating an FTW task, as shown in Figure 1-5.

Figure 1-5 Centralized Script option in the JSC dialog


Considerations when using centralized scripts

Using centralized scripts can ease the migration path from OPC tracker agents to FTAs. It is also easier to maintain centralized scripts because they are kept in a central location, but these benefits come with some limitations. When deciding whether to store a script locally or centrally, take into consideration that:

- The script must be downloaded every time a job runs. There is no caching mechanism on the FTA. The script is discarded as soon as the job completes. A rerun of a centralized job causes the script to be downloaded again.

- There is a reduction in the fault tolerance, because the centralized dependency can be released only by the controller.

Recovery for non-centralized jobs

In Tivoli Workload Scheduler 8.2, a new, simple syntax has been added to the job definition to specify recovery options and actions. Recovery is performed automatically on the FTA if the job abends. With this feature, it is now possible to use recovery for jobs running in an end-to-end network in the same way as it is implemented in IBM Tivoli Workload Scheduler distributed.

Defining recovery for non-centralized jobs

To activate recovery for a non-centralized job, you have to specify the RECOVERY statement in the job member in the SCRPTLIB.

It is possible to specify one or both of the following recovery actions:

- A recovery job (JOBCMD or JOBSCR keywords)
- A recovery prompt (MESSAGE keyword)

The recovery actions must be followed by one of the recovery options (the OPTION keyword): stop, continue, or rerun. The default is stop, with no recovery job and no recovery prompt.

Figure 1-6 on page 15 shows the syntax of the RECOVERY statement.


Figure 1-6 Syntax of the RECOVERY statement

The keywords JOBUSR, JOBWS, INTRACTV, and RCCONDSUC can be used only if you have defined a recovery job using the JOBSCR or JOBCMD keyword.

You cannot use the recovery prompt if you specify the recovery STOP option without using a recovery job. Specifying OPTION(RERUN) with no recovery prompt could cause a loop; to prevent this situation, a recovery prompt message is shown automatically after a failed rerun of the job.

For more details, refer to 4.5.3, “Definition of non-centralized scripts” on page 221.
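As an illustration, the following is a minimal sketch of a SCRPTLIB member that defines a non-centralized job with a rerun recovery action. The script paths, the user name, and the prompt text are placeholder assumptions; only the JOBREC and RECOVERY keywords described above are taken from the product syntax.

JOBREC    JOBSCR('/prod/scripts/load_sales.sh')
          JOBUSR(twsuser)
RECOVERY  OPTION(RERUN)
          MESSAGE('Load failed. Clean up the work files, then reply Yes to rerun')
          JOBSCR('/prod/scripts/cleanup_sales.sh')
          JOBUSR(twsuser)

With a definition like this, when the principal job abends the FTA issues the prompt; if the reply is yes, it launches the cleanup script and, if that is successful, reruns the principal job.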

Recovery actions available

Table 1-1 describes the recovery actions that can be taken against a job that ended in error (and not failed). JobP is the principal job and JobR is the recovery job; for each combination of recovery actions, the result is shown for the stop, continue, and rerun options.

Note: The RECOVERY statement is ignored if it is used with a job that runs a centralized script.

Table 1-1   Recovery actions taken against a job that ended in error

No recovery prompt / no recovery job
   Stop:      JobP remains in error.
   Continue:  JobP is completed.
   Rerun:     Rerun JobP.

A recovery prompt / no recovery job
   Stop:      Issue the prompt. JobP remains in error.
   Continue:  Issue the prompt. If the reply is "yes", JobP is completed. If the reply is "no", JobP remains in error.
   Rerun:     Issue the prompt. If the reply is "no", JobP remains in error. If the reply is "yes", rerun JobP.

No recovery prompt / a recovery job
   Stop:      Launch JobR. If it is successful, JobP is completed; otherwise JobP remains in error.
   Continue:  Launch JobR. JobP is completed.
   Rerun:     Launch JobR. If it is successful, rerun JobP; otherwise JobP remains in error.

A recovery prompt / a recovery job
   Stop:      Issue the prompt. If the reply is "no", JobP remains in error. If the reply is "yes", launch JobR; if it is successful, JobP is completed, otherwise JobP remains in error.
   Continue:  Issue the prompt. If the reply is "no", JobP remains in error. If the reply is "yes", launch JobR and JobP is completed.
   Rerun:     Issue the prompt. If the reply is "no", JobP remains in error. If the reply is "yes", launch JobR; if it is successful, rerun JobP, otherwise JobP remains in error.

Job Instance Recovery Information panels

Figure 1-7 shows the Job Scheduling Console Job Instance Recovery Information panel. You can browse the job log of the recovery job, and you can reply to the recovery prompt. Note the mapping between the fields in the Job Scheduling Console panel and the JOBREC parameters.

Figure 1-7 JSC and JOBREC parameters mapping

Also note that you can access the same information from the ISPF panels. From the Operation list in MCP (5.3), if the operation is abended and the RECOVERY statement has been used, you can use the row command RI (Recovery Information) to display the new panel EQQRINP as shown in Figure 1-8.

Figure 1-8 EQQRINP ISPF panel

Variable substitution for non-centralized jobs

In Tivoli Workload Scheduler 8.2, a new, simple syntax has been added to the job definition to specify variable substitution directives. This provides the capability to use variable substitution for jobs running in an end-to-end network without using the centralized script solution.

Tivoli Workload Scheduler for z/OS–supplied variables and user-defined variables (defined using a table) are supported in this new function. Variables are substituted when a job is added to Symphony (that is, when the Daily Planning creates the Symphony or the job is added to the plan using the MCP dialog).

To activate variable substitution, use the VARSUB statement. The syntax of the VARSUB statement is given in Figure 1-9 on page 18. Note that VARSUB must be the first statement in the SCRPTLIB member that contains the job definition. The VARSUB statement enables you to specify variables when you set a statement keyword in the job definition.


Figure 1-9 Syntax of the VARSUB statement

Use the TABLES keyword to identify the variable tables that must be searched and the search order. In particular:

• APPL indicates the application variable table specified in the VARIABLE TABLE field on the MCP panel, at Occurrence level.

• GLOBAL indicates the table defined in the GTABLE keyword of the OPCOPTS controller options and of the BATCHOPT batch options.

Any non-alphanumeric character, except blanks, can be used as a symbol to indicate that the characters that follow represent a variable. You can define two kinds of symbols using the PREFIX and BACKPREF keywords of the VARSUB statement; this allows you to define simple and compound variables.
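As an illustration, the following is a minimal sketch of a SCRPTLIB member that enables variable substitution for a non-centralized job. It assumes that the default '&' prefix symbol is in effect; MYTABLE, the script path, and the user name are placeholders, and OYMD1 is used as an example of a scheduler-supplied date variable.

VARSUB    TABLES(APPL,MYTABLE,GLOBAL)
JOBREC    JOBSCR('/prod/scripts/extract_&OYMD1')
          JOBUSR(twsuser)

When the job is added to the Symphony file, &OYMD1 is replaced with the occurrence date, so the FTA receives the resolved script name.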

For more details, refer to 4.5.3, “Definition of non-centralized scripts” on page 221, and “Job Tailoring” in IBM Tivoli Workload Scheduler for z/OS Managing the Workload, SC32-1263.

Return code mapping

In Tivoli Workload Scheduler 8.1, a fault-tolerant job that ends with a return code greater than 0 is considered abended.

A way was needed to define whether a job is successful or abended according to a “success condition” defined at job level, supplying the NOERROR functionality that was previously available only for host jobs.

In Tivoli Workload Scheduler for z/OS 8.2, a new keyword (RCCONDSUC) has been added to the job definition to specify the success condition, and the Tivoli Workload Scheduler for z/OS 8.2 interfaces show the operation’s return code.

Customize the JOBREC and RECOVERY statements in the SCRIPTLIB to specify a success condition for the job by adding the RCCONDSUC keyword. The success condition expression can contain a combination of comparison and Boolean expressions.
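For example, a minimal sketch of a JOBREC statement that treats any return code below 3, or equal to 5, as successful could look like the following; the script path and the user name are placeholders:

JOBREC    JOBSCR('/prod/scripts/daily_extract.sh')
          JOBUSR(twsuser)
          RCCONDSUC("(RC<3) OR (RC=5)")

Any other return code causes the job to be reported as ended in error.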


Comparison expression

A comparison expression specifies the job return codes. The syntax is:

(RC operator operand)

RC         The RC keyword.
Operand    An integer between -2147483647 and 2147483647.
Operator   The comparison operator. Table 1-2 lists the values it can have.

Table 1-2   Comparison operator values

Example    Operator   Description
RC < a     <          Less than
RC <= a    <=         Less than or equal to
RC > a     >          Greater than
RC >= a    >=         Greater than or equal to
RC = a     =          Equal to
RC <> a    <>         Not equal to

Note: Unlike IBM Tivoli Workload Scheduler distributed, the != operator is not supported to specify a "not equal to" condition.

The successful RC is specified by a logical combination of comparison expressions. The syntax is:

comparison_expression operator comparison_expression

For example, you can define a successful job as a job that ends with a return code less than 3 or equal to 5 as follows:

RCCONDSUC("(RC<3) OR (RC=5)")

Note: If you do not specify RCCONDSUC, only a return code equal to zero corresponds to a successful condition.

Late job handling

In IBM Tivoli Workload Scheduler 8.2 distributed, a user can define a DEADLINE time for a job or a job stream. If the job has never started or is still executing after the deadline time has passed, Tivoli Workload Scheduler informs the user about the missed deadline.


IBM Tivoli Workload Scheduler for z/OS 8.2 now supports this function. In Version 8.2, the user can specify and modify a deadline time for a job or a job stream. If the job is running on a fault-tolerant agent, the deadline time is also stored in the Symphony file, and it is managed locally by the FTA.

In an end-to-end network, the deadline is always defined for operations and occurrences. To improve performance, the batchman process on USS does not check the deadline.

1.6.3 Security enhancements

This new version includes a number of security enhancements, which are discussed in this section.

Firewall support in an end-to-end environment

In previous versions of Tivoli Workload Scheduler for z/OS, running the commands to start or stop a workstation or to get the standard list required opening a direct TCP/IP connection between the originator and the destination nodes. In a firewall environment, this forced users to break the firewall in order to open a direct communication path between the Tivoli Workload Scheduler for z/OS master and each fault-tolerant agent in the network.

In this version, it is now possible to enable the firewall support of Tivoli Workload Scheduler in an end-to-end environment. If a firewall exists between a workstation and its domain manager, set the FIREWALL option to YES in that workstation's CPUREC statement to force the start, stop, and get job output commands to go through the domain hierarchy.

Example 1-2 shows a CPUREC definition that enables the firewall support.

Example 1-2 CPUREC definition with firewall support enabled

CPUREC CPUNAME(TWAD)
       CPUOS(WNT)
       CPUNODE(jsgui)
       CPUDOMAIN(maindom)
       CPUTYPE(FTA)
       FIREWALL(Y)

SSL support

It is now possible to enable the strong authentication and encryption (SSL) support of IBM Tivoli Workload Scheduler in an end-to-end environment.

You can enable the Tivoli Workload Scheduler processes that run as USS (UNIX System Services) processes in the Tivoli Workload Scheduler for z/OS address space to establish SSL authentication between a Tivoli Workload Scheduler for z/OS master and the underlying IBM Tivoli Workload Scheduler domain managers.

The authentication mechanism of IBM Tivoli Workload Scheduler is based on the OpenSSL toolkit, while IBM Tivoli Workload Scheduler for z/OS uses the System SSL services of z/OS.

To enable SSL authentication for your end-to-end network, you must perform the following actions:

1. Create as many private keys, certificates, and trusted certification authority (CA) chains as you plan to use in your network.

Refer to the OS/390 V2R10.0 System SSL Programming Guide and Reference, SC23-3978, for further details about the SSL protocol.

2. Customize the localopts file on the IBM Tivoli Workload Scheduler workstations (see the localopts sketch after this procedure).

To find how to enable SSL in the IBM Tivoli Workload Scheduler domain managers, refer to IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264.

3. Configure IBM Tivoli Workload Scheduler for z/OS:

– Customize the localopts file in the USS work directory.

– Customize the TOPOLOGY statement for the OPCMASTER.

– Customize the CPUREC statement for every workstation in the network.

Refer to IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265, for details about SSL support in Tivoli Workload Scheduler for z/OS.
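As an illustration of step 2, the following is a minimal sketch of the SSL-related entries in the localopts file of a distributed workstation. The option names are those documented for the distributed product; the file paths and the port number are placeholder assumptions, and your values will differ.

nm SSL port        =31113
SSL key            =/opt/tws/ssl/fta.key
SSL certificate    =/opt/tws/ssl/fta.crt
SSL key pwd        =/opt/tws/ssl/fta.sth
SSL CA certificate =/opt/tws/ssl/TrustedCA.crt
SSL random seed    =/opt/tws/ssl/fta.rnd
SSL auth mode      =caonly

After changing localopts, restart the workstation processes so that the new SSL settings take effect.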

1.7 The terminology used in this book

The IBM Tivoli Workload Scheduler 8.2 suite comprises two somewhat different software programs, each with its own history and terminology. For this reason, there are sometimes two different and interchangeable names for the same thing. At other times, a term used in one context can have a different meaning in another context. To help clear up this confusion, we now introduce some of the terms and acronyms that will be used throughout the book. To make the terminology used in this book internally consistent, we adopted a system of terminology that may be a bit different from that used in the product documentation. So take a moment to read through this list, even if you are already familiar with the products.

IBM Tivoli Workload Scheduler 8.2 suite


The suite of programs that includes IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS. These programs are used together to make end-to-end scheduling work. Sometimes called just IBM Tivoli Workload Scheduler.

IBM Tivoli Workload Scheduler

This is the version of IBM Tivoli Workload Scheduler that runs on UNIX, OS/400, and Windows operating systems, as distinguished from IBM Tivoli Workload Scheduler for z/OS, a somewhat different program. Sometimes called IBM Tivoli Workload Scheduler Distributed. IBM Tivoli Workload Scheduler is based on the old Maestro program.

IBM Tivoli Workload Scheduler for z/OS

This is the version of IBM Tivoli Workload Scheduler that runs on z/OS, as distinguished from IBM Tivoli Workload Scheduler (by itself, without the for z/OS specification). IBM Tivoli Workload Scheduler for z/OS is based on the old OPC program.

Master
The top level of the IBM Tivoli Workload Scheduler or IBM Tivoli Workload Scheduler for z/OS scheduling network. Also called the master domain manager, because it is the domain manager of the MASTERDM (top-level) domain.

Domain manager
The agent responsible for handling dependency resolution for subordinate agents. Essentially an FTA with a few extra responsibilities.

Fault-tolerant agent
An agent that keeps its own local copy of the plan file and can continue operation even if the connection to the parent domain manager is lost. Also called an FTA. In IBM Tivoli Workload Scheduler for z/OS, FTAs are referred to as fault tolerant workstations.

Scheduling engine
An IBM Tivoli Workload Scheduler engine or IBM Tivoli Workload Scheduler for z/OS engine.

IBM Tivoli Workload Scheduler engine

The part of IBM Tivoli Workload Scheduler that does actual scheduling work, as distinguished from the other components that are related primarily to the user interface (for example, the IBM Tivoli Workload Scheduler connector). Essentially the part of IBM Tivoli Workload Scheduler that is descended from the old Maestro program.


IBM Tivoli Workload Scheduler for z/OS engine

The part of IBM Tivoli Workload Scheduler for z/OS that does actual scheduling work, as distinguished from the other components that are related primarily to the user interface (for example, the IBM Tivoli Workload Scheduler for z/OS connector). Essentially the controller plus the server.

IBM Tivoli Workload Scheduler for z/OS controller

The part of the IBM Tivoli Workload Scheduler for z/OS engine that is based on the old OPC program.

IBM Tivoli Workload Scheduler for z/OS server

The part of IBM Tivoli Workload Scheduler for z/OS that is based on the UNIX IBM Tivoli Workload Scheduler code. Runs in UNIX System Services (USS) on the mainframe.

JSC
Job Scheduling Console. This is the common graphical user interface (GUI) to both the IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS scheduling engines.

Connector
A small program that provides an interface between the common GUI (Job Scheduling Console) and one or more scheduling engines. The connector translates to and from the different “languages” used by the different scheduling engines.

JSS
Job Scheduling Services. Essentially a library that is used by the connectors.

TMF
Tivoli Management Framework. Also called just the Framework.


Chapter 2. End-to-end scheduling architecture

End-to-end scheduling involves running programs on multiple platforms. For this reason, it is important to understand how the different components work together. Taking the time to get acquainted with end-to-end scheduling architecture will make it easier for you to install, use, and troubleshoot your end-to-end scheduling system.

In this chapter, the following topics are discussed:

• “IBM Tivoli Workload Scheduler for z/OS architecture” on page 27

• “Tivoli Workload Scheduler architecture” on page 50

• “End-to-end scheduling architecture” on page 59

• “Job Scheduling Console and related components” on page 89

If you are unfamiliar with IBM Tivoli Workload Scheduler for z/OS, you can start with the section about its architecture to get a better understanding of how it works.

If you are already familiar with Tivoli Workload Scheduler for z/OS but would like to learn more about IBM Tivoli Workload Scheduler (for other platforms such as UNIX, Windows, or OS/400), you can skip to that section.


If you are already familiar with both IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS, skip ahead to the third section, in which we describe how both programs work together when configured as an end-to-end network.

The Job Scheduling Console, its components, and its architecture, are described in the last topic. In this topic, we describe the different components that are used to establish a Job Scheduling Console environment.


2.1 IBM Tivoli Workload Scheduler for z/OS architecture

IBM Tivoli Workload Scheduler for z/OS expands the scope for automating your data processing operations. It plans and automatically schedules the production workload. From a single point of control, it drives and controls the workload processing at both local and remote sites. By using IBM Tivoli Workload Scheduler for z/OS to increase automation, you use your data processing resources more efficiently, have more control over your data processing assets, and manage your production workload processing better.

IBM Tivoli Workload Scheduler for z/OS is composed of three major features:

• The IBM Tivoli Workload Scheduler for z/OS agent feature

The agent is the base product in IBM Tivoli Workload Scheduler for z/OS. The agent is also called a tracker. It must run on every operating system in your z/OS complex on which IBM Tivoli Workload Scheduler for z/OS controlled work runs. The agent records details of job starts and passes that information to the engine, which updates the plan with statuses.

• The IBM Tivoli Workload Scheduler for z/OS engine feature

One z/OS operating system in your complex is designated the controlling system and it runs the engine. The engine is also called the controller. Only one engine feature is required, even when you want to establish standby engines on other z/OS systems in a sysplex.

The engine manages the databases and the plans and causes the work to be submitted at the appropriate time and at the appropriate system in your z/OS sysplex or on another system in a connected z/OS sysplex or z/OS system.

• The IBM Tivoli Workload Scheduler for z/OS end-to-end feature

This feature makes it possible for the IBM Tivoli Workload Scheduler for z/OS engine to manage a production workload in a Tivoli Workload Scheduler distributed environment. You can schedule, control, and monitor jobs in Tivoli Workload Scheduler from the Tivoli Workload Scheduler for z/OS engine with this feature.

The end-to-end feature is covered in 2.3, “End-to-end scheduling architecture” on page 59.

The workload on other operating environments can also be controlled with the open interfaces that are provided with Tivoli Workload Scheduler for z/OS. Sample programs using TCP/IP or a Network Job Entry/Remote Spooling Communication Subsystem (NJE/RSCS) combination show you how you can control the workload on environments that at present have no scheduling feature.


In addition to these major parts, the IBM Tivoli Workload Scheduler for z/OS product also contains the IBM Tivoli Workload Scheduler for z/OS connector and the Job Scheduling Console (JSC).

• IBM Tivoli Workload Scheduler for z/OS connector

Maps the Job Scheduling Console commands to the IBM Tivoli Workload Scheduler for z/OS engine. The Tivoli Workload Scheduler for z/OS connector requires that the Tivoli Management Framework be configured for a Tivoli server or Tivoli managed node.

• Job Scheduling Console

A Java-based graphical user interface (GUI) for the IBM Tivoli Workload Scheduler suite.

The Job Scheduling Console runs on any machine from which you want to manage Tivoli Workload Scheduler for z/OS engine plan and database objects. It provides, through the IBM Tivoli Workload Scheduler for z/OS connector, functionality similar to the IBM Tivoli Workload Scheduler for z/OS legacy ISPF interface. You can use the Job Scheduling Console from any machine as long as it has a TCP/IP link with the machine running the IBM Tivoli Workload Scheduler for z/OS connector.

The same Job Scheduling Console can be used for Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS.

In the next topics, we provide an overview of IBM Tivoli Workload Scheduler for z/OS configuration, the architecture, and the terminology used in Tivoli Workload Scheduler for z/OS.

2.1.1 Tivoli Workload Scheduler for z/OS configuration

IBM Tivoli Workload Scheduler for z/OS supports many configuration options using a variety of communication methods:

• The controlling system (the controller or engine)

• Controlled z/OS systems

• Remote panels and program interface applications

• Job Scheduling Console

• Scheduling jobs that are in a distributed environment using Tivoli Workload Scheduler (described in 2.3, “End-to-end scheduling architecture” on page 59)


The controlling system

The controlling system requires both the agent and the engine. One controlling system can manage the production workload across all of your operating environments.

The engine is the focal point of control and information. It contains the controlling functions, the dialogs, the databases, the plans, and the scheduler’s own batch programs for housekeeping and so forth. Only one engine is required to control the entire installation, including local and remote systems.

Because IBM Tivoli Workload Scheduler for z/OS provides a single point of control for your production workload, it is important to make this system redundant. This minimizes the risk of having any outages in your production workload in case the engine or the system with the engine fails. To make the engine redundant, one can start backup engines (hot standby engines) on other systems in the same sysplex as the active engine. If the active engine or the controlling system fails, Tivoli Workload Scheduler for z/OS can automatically transfer the controlling functions to a backup system within a Parallel Sysplex. Through Cross Coupling Facility (XCF), IBM Tivoli Workload Scheduler for z/OS can automatically maintain production workload processing during system failures. The standby engine can be started on several z/OS systems in the sysplex.

Figure 2-1 on page 30 shows an active engine with two standby engines running in one sysplex. When an engine is started on a system in the sysplex, it checks whether there is already an active engine in the sysplex. If there is no active engine, it becomes the active engine; if there is an active engine, it becomes a standby engine. The engine in Figure 2-1 on page 30 has connections to eight agents: three in the sysplex, two remote, and three in another sysplex. The agents on the remote systems and in the other sysplex are connected to the active engine via ACF/VTAM® connections.


Figure 2-1 Two sysplex environments and stand-alone systems

Controlled z/OS systems

An agent is required for every controlled z/OS system in a configuration. This includes, for example, locally controlled systems within shared DASD or sysplex configurations.

The agent runs as a z/OS subsystem and interfaces with the operating system through JES2 or JES3 (the Job Entry Subsystem) and SMF (System Management Facilities), using the subsystem interface and the operating system exits. The agent monitors and logs the status of work, and passes the status information to the engine via shared DASD, XCF, or ACF/VTAM.

You can exploit z/OS and the cross-system coupling facility (XCF) to connect your local z/OS systems. Rather than being passed to the controlling system via shared DASD, work status information is passed directly via XCF connections. XCF enables you to exploit all production-workload-restart facilities and its hot standby function in Tivoli Workload Scheduler for z/OS.

Remote systems

The agent on a remote z/OS system passes status information about the production work in progress to the engine on the controlling system. All communication between Tivoli Workload Scheduler for z/OS subsystems on the controlling and remote systems is done via ACF/VTAM.


Tivoli Workload Scheduler for z/OS enables you to link remote systems using ACF/VTAM networks. Remote systems are frequently used locally (on premises) to reduce the complexity of the data processing installation.

Remote panels and program interface applications

ISPF panels and program interface (PIF) applications can run in a different z/OS system than the one where the active engine is running. Dialogs and PIF applications send requests to and receive data from a Tivoli Workload Scheduler for z/OS server that is running on the same z/OS system as the target engine, via advanced program-to-program communications (APPC). The APPC server communicates with the active engine to perform the requested actions.

Using an APPC server for ISPF panels and PIF gives the user the freedom to run ISPF panels and PIF on any system in a z/OS enterprise, as long as this system has advanced program-to-program communication with the system where the active engine is started. This also means that you do not have to make sure that your PIF jobs always run on the z/OS system where the active engine is started. Furthermore, using the APPC server makes it seamless for panel users and PIF programs if the engine is moved to its backup engine.

The APPC server is a separate address space, started and stopped either automatically by the engine, or by the user via the z/OS start command. There can be more than one server for an engine. If the dialogs or the PIF applications run on the same z/OS system as the target engine, the server may not be involved. As shown in Figure 2-2 on page 32, it is possible to run the IBM Tivoli Workload Scheduler for z/OS dialogs and PIF applications from any system as long as the system has an ACF/VTAM connection to the APPC server.


Figure 2-2 APPC server with remote panels and PIF access to ITWS for z/OS

2.1.2 Tivoli Workload Scheduler for z/OS database objects

Scheduling with IBM Tivoli Workload Scheduler for z/OS includes the capability to do the following:

• Schedule jobs across multiple systems, locally and remotely.

• Group jobs into job streams according to, for example, function or application, and define advanced run cycles based on customized calendars for the job streams.

• Set workload priorities and specify times for the submission of particular work.

• Base submission of work on availability of resources.

• Tailor jobs automatically based on dates, date calculations, and so forth.

• Ensure correct processing order by identifying dependencies such as successful completion of previous jobs, availability of resources, and time of day.

• Define automatic recovery and restart for jobs.

• Forward incomplete jobs to the next production day.

Note: Job Scheduling Console is the GUI to both IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler. JSC is discussed in 2.4, “Job Scheduling Console and related components” on page 89.

This is accomplished by defining scheduling objects in the Tivoli Workload Scheduler for z/OS databases that are managed by the active engine and shared by the standby engines. Scheduling objects are combined in these databases so that they represent the workload that you want to have handled by Tivoli Workload Scheduler for z/OS.

Tivoli Workload Scheduler for z/OS databases contain information about the work that is to be run, when it should be run, and the resources that are needed and available. This information is used to calculate a forward forecast called the long-term plan.

Scheduling objects are elements that are used to define your Tivoli Workload Scheduler for z/OS workload. Scheduling objects include job streams (jobs and dependencies as part of job streams), workstations, calendars, periods, operator instructions, resources, and JCL variables.

All of these scheduling objects can be created, modified, or deleted by using the legacy IBM Tivoli Workload Scheduler for z/OS ISPF panels. Job streams, workstations, and resources can be managed from the Job Scheduling Console as well.

Job streams

A job stream (also known as an application in the legacy OPC ISPF interface) is a description of a unit of production work. It includes a list of jobs (related tasks) that are associated with that unit of work. For example, a payroll job stream might include a manual task in which an operator prepares a job; several computer-processing tasks in which programs are run to read a database, update employee records, and write payroll information to an output file; and a print task that prints paychecks.

IBM Tivoli Workload Scheduler for z/OS schedules work based on the information that you provide in your job stream description. A job stream can include the following:

• A list of the jobs (related tasks) that are associated with that unit of work, such as:

– Data entry

– Job preparation

– Job submission or started-task initiation

– Communication with the NetView® program

– File transfer to other operating environments


– Printing of output

– Post-processing activities, such as quality control or dispatch

– Other tasks related to the unit of work that you want to schedule, control, and track

• A description of dependencies between jobs within a job stream and between jobs in other job streams

• Information about resource requirements, such as exclusive use of a data set

• Special operator instructions that are associated with a job

• How, when, and where each job should be processed

• Run policies for that unit of work; that is, when it should be scheduled or, alternatively, the name of a group definition that records the run policy

Workstations

When scheduling and processing work, Tivoli Workload Scheduler for z/OS considers the processing requirements of each job. Some typical processing considerations are:

• What human or machine resources are required for processing the work (for example, operators, processors, or printers)?

• When are these resources available?

• How will these jobs be tracked?

• Can this work be processed somewhere else if the resources become unavailable?

You can plan for maintenance windows in your hardware and software environments. Tivoli Workload Scheduler for z/OS enables you to perform a controlled and incident-free shutdown of the environment, preventing last-minute cancellation of active tasks. You can choose to reroute the workload automatically during any outage, planned or unplanned.

Tivoli Workload Scheduler for z/OS tracks jobs as they are processed at workstations and dynamically updates the plan with real-time information on the status of jobs. You can view or modify this status information online using the workstation ready lists in the dialog.

Dependencies

In general, every data-processing-related activity must occur in a specific order. Activities performed out of order will, at the very least, create invalid output; in the worst case, your corporate data will be corrupted. In any case, the result is costly reruns, missed deadlines, and unsatisfied customers.

You can define dependencies for jobs when a specific processing order is required. When IBM Tivoli Workload Scheduler for z/OS manages the dependent relationships, the jobs are started in the correct order every time they are scheduled. A dependency is called internal when it is between two jobs in the same job stream, and external when it is between two jobs in different job streams.

You can work with job dependencies graphically from the Job Scheduling Console (Figure 2-3).

Figure 2-3 Job Scheduling Console display of dependencies between jobs

Calendars

Tivoli Workload Scheduler for z/OS uses information about when departments work and when they are free, so that job streams are not scheduled to run on days when processing resources are not available (such as Sundays and holidays). This information is stored in a calendar. Tivoli Workload Scheduler for z/OS supports multiple calendars for enterprises where different departments have different work days and free days (different groups within a business operate according to different calendars).

The multiple calendar function is critical if your enterprise has installations in more than one geographical location (for example, with different local or national holidays).

Resources

Tivoli Workload Scheduler for z/OS enables you to serialize work based on the status of any data processing resource. A typical example is a job that uses a data set as input but must not start until the data set is successfully created and loaded with valid data. You can use resource serialization support to send availability information about a data processing resource to the workload in Tivoli Workload Scheduler for z/OS. To accomplish this, Tivoli Workload Scheduler for z/OS uses resources (also called special resources).

Resources are typically defined to represent physical or logical objects used by jobs. A resource can be used to serialize access to a data set or to limit the number of file transfers on a particular network link. The resource does not have to represent a physical object in your configuration, although it often does.

Tivoli Workload Scheduler for z/OS keeps a record of the state of each resource and its current allocation status. You can choose to hold resources in case a job allocating the resources ends abnormally. You can also use the Tivoli Workload Scheduler for z/OS interface with the Resource Object Data Manager (RODM) to schedule jobs according to real resource availability. You can subscribe to RODM updates in both local and remote domains.

Tivoli Workload Scheduler for z/OS enables you to subscribe to data set activity on z/OS systems. Its dataset triggering function automatically updates special resource availability when a data set is closed. You can use this notification to coordinate planned activities or to add unplanned work to the schedule.

Periods

Tivoli Workload Scheduler for z/OS uses business processing cycles, or periods, to calculate when your job streams should be run; for example, weekly or every 10th working day. Periods are based on the business cycles of your customers.

Tivoli Workload Scheduler for z/OS supports a range of periods for processing the different job streams in your production workload. It has several predefined periods that can be used when defining run cycles for your job streams, such as week, month, year, and all of the Julian months (January through December).

When you define a job stream, you specify when it should be planned using a run cycle, which can be:

• A rule with a format such as:

  ONLY the SECOND TUESDAY of every MONTH
  EVERY FRIDAY in the user-defined period SEMESTER1

In this example, the words in capitals are selected from lists of ordinal numbers, names of days, and common calendar intervals or period names, respectively.

• A combination of period and offset. For example, an offset of 10 in a monthly period specifies the tenth day of each month.

Operator instructions

You can specify an operator instruction to be associated with a job in a job stream. This could be, for example, special running instructions for a job or detailed restart information in case a job abends and needs to be restarted.

JCL variables

JCL variables are used to do automatic job tailoring in Tivoli Workload Scheduler for z/OS. There are several predefined JCL variables, such as current date, current time, planning date, day number of week, and so forth. Besides these predefined variables, you can define specific or unique variables, so your locally defined variables can be used for automatic job tailoring as well.

2.1.3 Tivoli Workload Scheduler for z/OS plans

IBM Tivoli Workload Scheduler for z/OS plans your production workload schedule. It produces both a high-level (long-term) plan and a detailed (current) plan. These plans drive the production workload and can show the status of the production workload on your system at any specified time. You can produce trial plans to forecast future workloads (for example, to simulate the effects of changes to your production workload, calendar, and installation).

Tivoli Workload Scheduler for z/OS builds the plans from your description of the production workload (that is, the objects you have defined in the Tivoli Workload Scheduler for z/OS databases).

The plan process

First, the long-term plan is created, which shows the job streams that should be run each day in a period, usually one or two months. Then a more detailed current plan is created. The current plan is used by Tivoli Workload Scheduler for z/OS to submit and control jobs and job streams.

Long-term planning

The long-term plan is a high-level schedule of your anticipated production workload. It lists, by day, the instances of job streams to be run during the period of the plan. Each instance of a job stream is called an occurrence. The long-term plan shows when occurrences are to run, as well as the dependencies that exist between the job streams. You can view these dependencies graphically on your terminal as a network to check that work has been defined correctly. The plan can assist you in forecasting and planning for heavy processing days. The long-term-planning function can also produce histograms showing planned resource use for individual workstations during the plan period.

You can use the long-term plan as the basis for documenting your service level agreements. It lets you relate service level agreements directly to your production workload schedules so that your customers can see when and how their work is to be processed.

The long-term plan provides a window to the future. How far into the future is up to you, from one day to four years. Normally, the long-term plan goes two to three months into the future. You can also produce long-term plan simulation reports for any future date. IBM Tivoli Workload Scheduler for z/OS can automatically extend the long-term plan at regular intervals. You can print the long-term plan as a report, or you can view, alter, and extend it online using the legacy ISPF dialogs.

The long-term plan extension is performed by a Tivoli Workload Scheduler for z/OS program. This program is normally run as part of the daily Tivoli Workload Scheduler for z/OS housekeeping job stream. By running this program on workdays and letting the program extend the long-term plan by one working day, you assure that the long-term plan is always up-to-date (Figure 2-4 on page 39).


Figure 2-4 The long-term plan extension process

This way the long-term plan always reflects changes that are made to job streams, run cycles, and calendars, because these definitions are reread by the program that extends the long-term plan. The long-term plan extension program reads job streams (run cycles), calendars, and periods and creates the high-level long-term plan based on these objects.

Current plan

The current plan, or simply the plan, is the heart of Tivoli Workload Scheduler for z/OS processing: it drives the production workload automatically and provides a way to check its status. The current plan is produced by a run of batch jobs that extract from the long-term plan the occurrences that fall within the specified period of time, together with the job details. The current plan selects a window from the long-term plan and makes the jobs ready to be run. They are actually started depending on the defined restrictions (dependencies, resource availability, or time-dependent jobs).

Job streams and related objects are copied from the Tivoli Workload Scheduler for z/OS databases to the current plan occurrences. Because the objects are copied to the current plan data set, any changes that are made to them in the plan will not be reflected in the Tivoli Workload Scheduler for z/OS databases.

The current plan is a rolling plan that can cover several days. The extension of the current plan is performed by a Tivoli Workload Scheduler for z/OS program that normally is run on workdays as part of the daily workday-scheduled housekeeping job stream (Figure 2-5 on page 40).


Figure 2-5 The current plan extension process

Extending the current plan by one workday means that it can cover more than one calendar day. If, for example, Saturday and Sunday are considered as Fridays (in the calendar used by the run cycle for the housekeeping job stream), then when the current plan extension program is run on Friday afternoon, the plan extends to Monday afternoon. A common method is to cover 1–2 days with regular extensions each shift.

Production workload processing activities are listed by minute in the plan. You can either print the current plan as a report, or view, alter, and extend it online, by using the legacy ISPF dialogs.

Note: Changes that are made to the job stream run cycle, such as changing the job stream from running on Mondays to running on Tuesdays, will not be reflected immediately in the long-term or current plan. To have such changes reflected in the long-term plan and current plan, you must first run a Modify all or Extend long-term plan and then extend or replan the current plan. Therefore, it is a good practice to run the Extend long-term plan with one working day (shown in Figure 2-4 on page 39) before the Extend of current plan as part of normal Tivoli Workload Scheduler for z/OS housekeeping.

Running job streams and jobs in the plan

Tivoli Workload Scheduler for z/OS automatically:

• Starts and stops started tasks

• Edits z/OS job JCL statements before submission

• Submits jobs in the specified sequence to the target operating environment, every time

• Tracks each scheduled job in the plan

• Determines the success or failure of the jobs

• Displays status information and instructions to guide workstation operators

• Provides automatic recovery of z/OS jobs when they end in error

• Generates processing dates for your job stream run cycles using rules such as:

– Every second Tuesday of the month

– Only the last Saturday in June, July, and August

– Every third workday in the user-defined PAYROLL period

• Starts jobs with regard to real resource availability

• Performs data set cleanup in error and rerun situations for the z/OS workload

• Tailors the JCL for step restarts of z/OS jobs and started tasks

• Dynamically schedules additional processing in response to activities that cannot be planned

• Provides automatic notification when an updated data set is closed, which can be used to trigger subsequent processing

• Generates alerts when abnormal situations are detected in the workload

Automatic workload submission

Tivoli Workload Scheduler for z/OS automatically drives work through the system, taking into account work that requires manual or program-recorded completion. (Program-recorded completion refers to situations where the status of a scheduler-controlled job is set to Complete by a user-written program.) It also promotes the optimum use of resources, improves system availability, and automates complex and repetitive operator tasks. Tivoli Workload Scheduler for z/OS automatically controls the submission of work according to:

• Dependencies between jobs
• Workload priorities
• Specified times for the submission of particular work
• Availability of resources

By saving a copy of the JCL for each separate run, or occurrence, of a particular job in its plans, Tivoli Workload Scheduler for z/OS prevents the unintentional reuse of temporary JCL changes, such as overrides.


Job tailoring

Tivoli Workload Scheduler for z/OS provides automatic job-tailoring functions, which enable jobs to be automatically edited. This can reduce your dependency on time-consuming and error-prone manual editing of jobs. Tivoli Workload Scheduler for z/OS job tailoring provides:

• Automatic variable substitution
• Dynamic inclusion and exclusion of inline job statements
• Dynamic inclusion of job statements from other libraries or from an exit

For jobs to be submitted on a z/OS system, these job statements will be z/OS JCL.

Variables can be substituted in specific columns, and you can define verification criteria to ensure that invalid strings are not substituted. Special directives supporting the variety of date formats used by job stream programs enable you to dynamically define the required format and change them multiple times for the same job. Arithmetic expressions can be defined to let you calculate values such as the current date plus four work days.
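As a brief illustration, the following is a sketch of a JCL fragment that uses variable substitution. It assumes that the //*%OPC SCAN directive activates substitution from that point in the JCL and that OYMD1 is one of the supplied date variables; the job, program, and data set names are placeholders.

//*%OPC SCAN
//DAILYRPT JOB (ACCT),'DAILY REPORT',CLASS=A
//STEP1    EXEC PGM=REPORT
//INPUT    DD DSN=PROD.SALES.D&OYMD1,DISP=SHR

At submission time, &OYMD1 is replaced with the occurrence date, so each run of the job reads the data set for the correct day.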

Manual control and intervention

Tivoli Workload Scheduler for z/OS enables you to check the status of work and intervene manually when priorities change or when you need to run unplanned work. You can query the status of the production workload and then modify the schedule if needed.

Status inquiries

With the legacy ISPF dialogs or with the Job Scheduling Console, you can make queries online and receive timely information about the status of the production workload.

Time information that is displayed by the dialogs can be in the local time of the dialog user. Using the dialogs, you can request detailed or summary information about individual job streams, jobs, and workstations, as well as summary information concerning workload production as a whole. You can also display dependencies graphically as a network at both job stream and job level.

Status inquiries:

• Provide you with overall status information that you can use when considering a change in workstation capacity or when arranging an extra shift or overtime work.

• Help you supervise the work flow through the installation; for instance, by displaying the status of work at each workstation.


• Help you decide whether intervention is required to speed the processing of specific job streams. You can find out which job streams are the most critical. You can also check the status of any job stream, as well as the plans and actual times for each job.

• Enable you to check information before making modifications to the plan. For example, you can check the status of a job stream and its dependencies before deleting it or changing its input arrival time or deadline. See “Modifying the current plan” on page 43 for more information.

• Provide you with information about the status of processing at a particular workstation. Perhaps work that should have arrived at the workstation has not arrived. Status inquiries can help you locate the work and find out what has happened to it.

Modifying the current plan

Tivoli Workload Scheduler for z/OS makes status updates to the plan automatically using its tracking functions. However, you can change the plan manually to reflect unplanned changes to the workload or to the operations environment, which often occur during a shift. For example, you may need to change the priority of a job stream, add unplanned work, or reroute work from one workstation to another. Or you may need to correct operational errors manually. Modifying the current plan may be the best way to handle these situations.

You can modify the current plan online. For example, you can:

• Include unexpected jobs or last-minute changes to the plan. Tivoli Workload Scheduler for z/OS then automatically creates the dependencies for this work.

• Manually modify the status of jobs.

• Delete occurrences of job streams.

• Graphically display job dependencies before you modify them.

• Modify the data in job streams, including the JCL.

• Respond to error situations by:

– Rerouting jobs

– Rerunning jobs or occurrences

– Completing jobs or occurrences

– Changing jobs or occurrences

• Change the status of workstations by:

– Rerouting work from one workstation to another

– Modifying workstation reporting attributes


– Updating the availability of resources

– Changing the way resources are handled

• Replan or extend the current plan.

In addition to using the dialogs, you can modify the current plan from your own job streams using the program interface or the application programming interface. You can also trigger Tivoli Workload Scheduler for z/OS to dynamically modify the plan using TSO commands or a batch program. This enables unexpected work to be added automatically to the plan.

It is important to remember that the current plan contains copies of the objects that are read from the Tivoli Workload Scheduler for z/OS databases. This means that changes that are made to current plan instances will not be reflected in the corresponding database objects.

2.1.4 Other Tivoli Workload Scheduler for z/OS features

In the following sections, we investigate other features of IBM Tivoli Workload Scheduler for z/OS.

Automatically controlling the production workload

Tivoli Workload Scheduler for z/OS automatically drives the production workload by monitoring the flow of work and by directing the processing of jobs so that it follows the business priorities that are established in the plan.

Through its interface to the NetView program or its management-by-exception ISPF dialog, Tivoli Workload Scheduler for z/OS can alert the production control specialist to problems in the production workload processing. Furthermore, the NetView program can automatically trigger Tivoli Workload Scheduler for z/OS to perform corrective actions in response to these problems.

Recovery and restart

Tivoli Workload Scheduler for z/OS provides automatic restart facilities for your production work. You can specify the restart actions to be taken if work that it initiates ends in error (Figure 2-6 on page 45). You can use these functions to predefine automatic error-recovery and restart actions for jobs and started tasks. The scheduler’s integration with the NetView for OS/390 program enables it to automatically pass alerts to NetView for OS/390 in error situations. Use of the z/OS cross-system coupling facility (XCF) enables Tivoli Workload Scheduler for z/OS processing to continue when system failures occur.


Figure 2-6 IBM Tivoli Workload Scheduler for z/OS automatic recovery and restart

Recovery of jobs and started tasks

Automatic recovery actions for failed jobs are specified in user-defined control statements. Parameters in these statements determine the recovery actions to be taken when a job or started task ends in error.

Restart and cleanup

Restart and cleanup are basically two tasks:

• Restarting an operation at the job level or step level
• Cleaning up the associated data sets

You can use restart and cleanup to catalog, uncatalog, or delete data sets when a job ends in error or when you need to rerun a job. Data set cleanup takes care of JCL in the form of in-stream JCL, in-stream procedures, and cataloged procedures on both local and remote systems. This function can be initiated automatically by Tivoli Workload Scheduler for z/OS or manually by a user through the panels. Tivoli Workload Scheduler for z/OS resets the catalog to the status that it was in before the job ran, for both generation data set groups (GDGs) and for DD-allocated data sets contained in the JCL. In addition, restart and cleanup supports the use of Removable Media Manager in your environment.

Note: The IBM Tivoli Workload Scheduler for z/OS 8.2 restart and cleanup function has been updated and redesigned. Apply the fixes for APARs PQ79506 and PQ79507 to get the redesigned and updated function.

Restart at both the step level and the job level is also provided in the IBM Tivoli Workload Scheduler for z/OS legacy ISPF panels and in the JSC. It manages resolution of generation data group (GDG) names, JCL containing nested INCLUDEs or PROCs, and IF-THEN-ELSE statements. Tivoli Workload Scheduler for z/OS also automatically identifies problems that can prevent a successful restart, providing a “best restart step” logic.

You can browse the job log or request a step-level restart for any z/OS job or started task even when there are no catalog modifications. The job-log browse functions are also available for the workload on other operating platforms, which is especially useful for those environments that do not support a System Display and Search Facility (SDSF) or something similar.

These facilities are available to you without the need to make changes to your current JCL. Tivoli Workload Scheduler for z/OS gives you an enterprise-wide data set cleanup capability on remote agent systems.

Production workload restart

Tivoli Workload Scheduler for z/OS provides a production workload restart, which can automatically maintain the processing of your work if a system or connection fails. Scheduler-controlled production work for the unsuccessful system is rerouted to another system. Because Tivoli Workload Scheduler for z/OS can restart and manage the production workload, the integrity of your processing schedule is maintained, and service continues for your customers.

Tivoli Workload Scheduler for z/OS exploits the VTAM Model Application Program Definition feature and z/OS-defined symbols to ease configuration in a sysplex environment, giving the user a single-system view of the sysplex.

Starting, stopping, and managing your engines and agents does not require you to know which z/OS image in the sysplex they are actually running on.

z/OS Automatic Restart Manager support

In case of program failure, all of the scheduler components can be restarted by the Automatic Restart Manager (ARM) of the z/OS operating system.

Automatic status checking

To track the work flow, Tivoli Workload Scheduler for z/OS interfaces directly with the operating system, collecting and analyzing status information about the production work that is currently active in the system. Tivoli Workload Scheduler for z/OS can record status information from both local and remote processors.


When status information is reported from remote sites in different time zones, Tivoli Workload Scheduler for z/OS makes allowances for the time differences.

Status reporting from heterogeneous environments

The processing on other operating environments can also be tracked by Tivoli Workload Scheduler for z/OS. You can use supplied programs to communicate with the engine from any environment that can establish communications with a z/OS system.

Status reporting from user programs

You can pass status information about production workload processing to Tivoli Workload Scheduler for z/OS from your own user programs through a standard supplied routine.

Additional job-completion checking

If required, Tivoli Workload Scheduler for z/OS provides further status checking by scanning SYSOUT and other print data sets from your processing when the success or failure of the processing cannot be determined by completion codes. For example, Tivoli Workload Scheduler for z/OS can check the text of system messages or messages originating from your user programs. Using information contained in job completion checker (JCC) tables, Tivoli Workload Scheduler for z/OS determines what actions to take when it finds certain text strings. These actions can include:

- Reporting errors
- Re-queuing SYSOUT
- Writing incident records to an incident data set

Managing unplanned work

Tivoli Workload Scheduler for z/OS can be automatically triggered to update the current plan with information about work that cannot be planned in advance. This enables Tivoli Workload Scheduler for z/OS to control unexpected work. Because it checks the processing status of this work, automatic recovery facilities are also available.

Interfacing with other programs

Tivoli Workload Scheduler for z/OS provides a program interface (PIF) with which you can automate most actions that you can perform online through the dialogs. This interface can be called from CLISTs and user programs, and via TSO commands.

The application programming interface (API) lets your programs communicate with Tivoli Workload Scheduler for z/OS from any compliant platform. You can use Common Programming Interface for Communications (CPI-C), advanced program-to-program communication (APPC), or your own logical unit (LU) 6.2 verbs to converse with Tivoli Workload Scheduler for z/OS through the API. You can use this interface to query and update the current plan. The programs can be running on any platform that is connected locally, or remotely through a network, with the z/OS system where the engine runs.

Management of critical jobs

IBM Tivoli Workload Scheduler for z/OS exploits the capability of the Workload Manager (WLM) component of z/OS to ensure that critical jobs are completed on time. If a critical job is late, Tivoli Workload Scheduler for z/OS favors it using the existing Workload Manager interface.

Security

Today, data processing operations increasingly require a high level of data security, particularly as the scope of data processing operations expands and more people within the enterprise become involved. Tivoli Workload Scheduler for z/OS provides complete security and data integrity within the range of its functions. It provides a shared central service to different user departments even when the users are in different companies and countries, and a high level of security to protect scheduler data and resources from unauthorized access. With Tivoli Workload Scheduler for z/OS, you can easily organize, isolate, and protect user data to safeguard the integrity of your end-user applications (Figure 2-7). Tivoli Workload Scheduler for z/OS can plan and control the work of many user groups and maintain complete control of access to data and services.

Figure 2-7 IBM Tivoli Workload Scheduler for z/OS security


Audit trail

With the audit trail, you can define how you want IBM Tivoli Workload Scheduler for z/OS to log accesses (both reads and updates) to scheduler resources. Because it provides a history of changes to the databases, the audit trail can be extremely useful for staff who work on debugging and problem determination.

A sample program is provided for reading audit-trail records. The program reads the logs for a period that you specify and produces a report detailing changes that have been made to scheduler resources.

System Authorization Facility (SAF)

IBM Tivoli Workload Scheduler for z/OS uses the system authorization facility, a function of z/OS, to pass authorization verification requests to your security system (for example, RACF®). This means that you can protect your scheduler data objects with any security system that uses the SAF interface.

Protection of data and resources

Each user request to access a function or to access data is validated by SAF. The following is some of the information that can be protected:

- Calendars and periods
- Job stream names or job stream owner, by name
- Workstation, by name
- Job stream-specific data in the plan
- Operator instructions
- JCL

To support distributed, multi-user handling, Tivoli Workload Scheduler for z/OS enables you to control the level of security that you want to implement, right down to the level of individual records. You can define generic or specific RACF resource names to extend the level of security checking.

If you have RACF Version 2 Release 1 installed, you can use the IBM Tivoli Workload Scheduler for z/OS reserved resource class (IBMOPC) to manage your Tivoli Workload Scheduler for z/OS security environment. This means that you do not have to define your own resource class by modifying RACF and restarting your system.
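As an illustration only, the RACF commands below activate the reserved class and protect the fixed resource for application descriptions. The fixed resource name AD and the group name SCHEDGRP are assumptions for this sketch; check the customization documentation for the exact profile names that apply to your installation.

SETROPTS CLASSACT(IBMOPC)
RDEFINE  IBMOPC AD UACC(NONE)
PERMIT   AD CLASS(IBMOPC) ID(SCHEDGRP) ACCESS(UPDATE)

With profiles like these in place, only users connected to SCHEDGRP (or otherwise authorized) can update application descriptions through the scheduler dialogs and interfaces.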

Data integrity during submission

Tivoli Workload Scheduler for z/OS can ensure the correct security environment for each job it submits, regardless of whether the job is run on a local or a remote system. Tivoli Workload Scheduler for z/OS enables you to create tailored security profiles for individual jobs or groups of jobs.


2.2 Tivoli Workload Scheduler architecture

Tivoli Workload Scheduler helps you plan every phase of production. During the processing day, its production control programs manage the production environment and automate most operator activities. Tivoli Workload Scheduler prepares jobs for execution, resolves interdependencies, and launches and tracks each job. Because jobs start running as soon as their dependencies are satisfied, idle time is minimized and throughput is improved. Jobs never run out of sequence. If a job ends in error, Tivoli Workload Scheduler handles the recovery process with little or no operator intervention.

IBM Tivoli Workload Scheduler is composed of three major parts:

- IBM Tivoli Workload Scheduler engine

The IBM Tivoli Workload Scheduler engine is installed on every non-mainframe workstation in the scheduling network (UNIX, Windows, and OS/400 computers). When the engine is installed on a workstation, it can be configured to play a specific role in the scheduling network. For example, the engine can be configured to be a master domain manager, a domain manager, or a fault-tolerant agent. In an ordinary Tivoli Workload Scheduler network, there is a single master domain manager at the top of the network. However, in an end-to-end scheduling network, there is no master domain manager; its functions are instead performed by the IBM Tivoli Workload Scheduler for z/OS engine, installed on a mainframe. This is discussed in more detail later in this chapter.

- IBM Tivoli Workload Scheduler connector

The connector “connects” the Job Scheduling Console to Tivoli Workload Scheduler, routing commands from the JSC to the Tivoli Workload Scheduler engine. In an ordinary IBM Tivoli Workload Scheduler network, the Tivoli Workload Scheduler connector is usually installed on the master domain manager. In an end-to-end scheduling network, there is no master domain manager, so the connector is usually installed on the first-level domain managers. The Tivoli Workload Scheduler connector can also be installed on other domain managers or fault-tolerant agents in the network.

The connector software is installed on top of the Tivoli Management Framework, which must be configured as a Tivoli Management Region server or managed node. The connector software cannot be installed on a TMR endpoint.

- Job Scheduling Console (JSC)

JSC is the Java-based graphical user interface for the IBM Tivoli Workload Scheduler suite. The Job Scheduling Console runs on any machine from which you want to manage Tivoli Workload Scheduler plan and database objects. It provides, through the Tivoli Workload Scheduler connector, the functions of the command-line programs conman and composer. The Job Scheduling Console can be installed on a desktop workstation or laptop, as long as the JSC has a TCP/IP link with the machine running the Tivoli Workload Scheduler connector. Using the JSC, operators can schedule and administer Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS over the network.

In the next sections, we provide an overview of the IBM Tivoli Workload Scheduler network and workstations, the topology that is used to describe the architecture in Tivoli Workload Scheduler, the Tivoli Workload Scheduler components, and the plan.

2.2.1 The IBM Tivoli Workload Scheduler network

A Tivoli Workload Scheduler network is made up of the workstations, or CPUs, on which jobs and job streams are run.

A Tivoli Workload Scheduler network contains at least one IBM Tivoli Workload Scheduler domain, the master domain, in which the master domain manager is the management hub. It is the master domain manager that manages the databases and it is from the master domain manager that you define new objects in the databases. Additional domains can be used to divide a widely distributed network into smaller, locally managed groups.

In the simplest configuration, the master domain manager maintains direct communication with all of the workstations (fault-tolerant agents) in the Tivoli Workload Scheduler network. All workstations are in the same domain, MASTERDM (Figure 2-8).

Figure 2-8 A sample IBM Tivoli Workload Scheduler network with only one domain


Using multiple domains reduces the amount of network traffic by reducing the communications between the master domain manager and the other computers in the network. Figure 2-9 depicts an example of a Tivoli Workload Scheduler network with three domains. In this example, the master domain manager is shown as an AIX system. The master domain manager does not have to be on an AIX system; it can be installed on any of several different platforms, including AIX, Linux, Solaris, HPUX, and Windows. Figure 2-9 is only an example that is meant to give an idea of a typical Tivoli Workload Scheduler network.

Figure 2-9 IBM Tivoli Workload Scheduler network with three domains

In this configuration, the master domain manager communicates directly only with the subordinate domain managers. The subordinate domain managers communicate with the workstations in their domains. In this way, the number of connections from the master domain manager is reduced. Multiple domains also provide fault tolerance: If the link from the master is lost, a domain manager can still manage the workstations in its domain and resolve dependencies between them. This limits the impact of a network outage. Each domain may also have one or more backup domain managers that can become the domain manager for the domain if the domain manager fails.
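For example, if a domain manager fails, an operator can promote its backup with the conman switchmgr command; the domain and workstation names here are only illustrative:

conman "switchmgr DomainA;FTA2"     # make FTA2 the new domain manager for DomainA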

Before the start of each day, the master domain manager creates a plan for the next 24 hours. This plan is placed in a production control file, named Symphony. Tivoli Workload Scheduler is then restarted throughout the network, and the master domain manager sends a copy of the Symphony file to each of the subordinate domain managers. Each domain manager then sends a copy of the Symphony file to the fault-tolerant agents in that domain.

After the network has been started, scheduling events such as job starts and completions are passed up from each workstation to its domain manager. The domain manager updates its Symphony file with the events and then passes the events up the network hierarchy to the master domain manager. The events are then applied to the Symphony file on the master domain manager. Events from all workstations in the network will be passed up to the master domain manager. In this way, the master’s Symphony file contains the authoritative record of what has happened during the production day. The master also broadcasts the changes down throughout the network, updating the Symphony files of domain managers and fault-tolerant agents that are running in full status mode.

It is important to remember that Tivoli Workload Scheduler does not limit the number of domains or levels (the hierarchy) in the network. There can be as many levels of domains as is appropriate for a given computing environment. The number of domains or levels in the network should be based on the topology of the physical network where Tivoli Workload Scheduler is installed. Most often, geographical boundaries are used to determine divisions between domains.

See 3.5.4, “Network planning and considerations” on page 141 for more information about how to design an IBM Tivoli Workload Scheduler network.

Figure 2-10 on page 54 shows an example of a four-tier Tivoli Workload Scheduler network:

1. Master domain manager, MASTERDM
2. DomainA and DomainB
3. DomainC, DomainD, DomainE, FTA1, FTA2, and FTA3
4. FTA4, FTA5, FTA6, FTA7, FTA8, and FTA9


Figure 2-10 A multi-tiered IBM Tivoli Workload Scheduler network

2.2.2 Tivoli Workload Scheduler workstation types

In most cases, workstation definitions refer to physical workstations. However, in the case of extended and network agents, the workstations are logical definitions that must be hosted by a physical IBM Tivoli Workload Scheduler workstation.

There are several different types of Tivoli Workload Scheduler workstations:

- Master domain manager (MDM)

The domain manager of the topmost domain of a Tivoli Workload Scheduler network. It contains the centralized database of all defined scheduling objects, including all jobs and their dependencies. It creates the plan at the start of each day, and performs all logging and reporting for the network. The master distributes the plan to all subordinate domain managers and fault-tolerant agents. In an end-to-end scheduling network, the IBM Tivoli Workload Scheduler for z/OS engine (controller) acts as the master domain manager.


- Domain manager (DM)

The management hub in a domain. All communications to and from the agents in a domain are routed through the domain manager. The domain manager can resolve dependencies between jobs in its subordinate agents. The copy of the plan on the domain manager is updated with reporting and logging from the subordinate agents.

- Backup domain manager

A fault-tolerant agent that is capable of assuming the responsibilities of its domain manager. The copy of the plan on the backup domain manager is updated with the same reporting and logging information as the domain manager plan.

- Fault-tolerant agent (FTA)

A workstation that is capable of resolving local dependencies and launching its jobs in the absence of a domain manager. It has a local copy of the plan generated in the master domain manager. It is also called a fault tolerant workstation.

- Standard agent (SA)

A workstation that launches jobs only under the direction of its domain manager.

- Extended agent (XA)

A logical workstation definition that enables one to launch and control jobs on other systems and applications. IBM Tivoli Workload Scheduler for Applications includes extended agent methods for the following systems: SAP R/3, Oracle Applications, PeopleSoft, CA7, JES2, and JES3.

Figure 2-11 on page 56 shows a Tivoli Workload Scheduler network with some of the different workstation types.

It is important to remember that domain manager FTAs, including the master domain manager FTA and backup domain manager FTAs, are FTAs with some extra responsibilities. The servers with these FTAs can, and most often will, be servers where you run normal batch jobs that are scheduled and tracked by Tivoli Workload Scheduler. This means that these servers do not have to be servers dedicated only for Tivoli Workload Scheduler work. The servers can still do some other work and run some other applications.

However, you should not choose one of your busiest servers as one of your first-level Tivoli Workload Scheduler domain managers.


Figure 2-11 IBM Tivoli Workload Scheduler network with different workstation types

2.2.3 Tivoli Workload Scheduler topology

The purpose of having multiple domains is to delegate some of the responsibilities of the master domain manager and to provide extra fault tolerance. Fault tolerance is enhanced because a domain manager can continue to resolve dependencies within the domain even if the master domain manager is temporarily unavailable.

Workstations are generally grouped into a domain because they share a common set of characteristics. Most often, workstations will be grouped into a domain because they are in close physical proximity to one another, such as in the same office. Domains may also be based on organizational unit (for example, department), business function, or application. Grouping related workstations in a domain reduces the amount of information that must be communicated between domains, and thereby reduces the amount of network traffic generated.

In 3.5.4, “Network planning and considerations” on page 141, you can find more information about how to configure an IBM Tivoli Workload Scheduler network based on your particular distributed network and environment.


2.2.4 IBM Tivoli Workload Scheduler components

Tivoli Workload Scheduler is comprised of several separate programs, each with a distinct function. This division of labor segregates networking, dependency resolution, and job launching into their own individual processes. These processes communicate among themselves through the use of message files (also called event files). Every event that occurs during the production day is handled by passing events between processes through the message files.

A computer running Tivoli Workload Scheduler has several active IBM Tivoli Workload Scheduler processes. They are started as a system service, by the StartUp command, or manually from the Job Scheduling Console. The main processes are:

netman The network listener program, which initially receives all TCP connections. The netman program accepts an incoming request from a remote program, spawns a new process to handle the request, and if necessary hands the socket over to the new process.

writer The network writer process that passes incoming messages from a remote workstation to the local mailman process (via the Mailbox.msg event file).

mailman The primary message management process. The mailman program reads events from the Mailbox.msg file and then either passes them to batchman (via the Intercom.msg event file) or sends them to a remote workstation.

batchman The production control process. Working with the plan (Symphony), batchman starts job streams, resolves dependencies, and directs jobman to launch jobs. After the Symphony file has been created (at the beginning of the production day), batchman is the only program that makes changes to the Symphony file.

jobman The job control process. The jobman program launches and monitors jobs.
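On a UNIX fault-tolerant agent, for example, the processes can be started and inspected from the command line roughly as follows; the installation path /opt/tws and the user name tws are assumptions that vary per installation:

su - tws                        # switch to the Tivoli Workload Scheduler user
/opt/tws/StartUp                # start netman, the network listener
/opt/tws/bin/conman start       # start the production processes (mailman, batchman, jobman)
/opt/tws/bin/conman sc          # show workstations and their link status
/opt/tws/bin/conman sj          # show the jobs in the local copy of the Symphony file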

Figure 2-12 on page 58 shows the IBM Tivoli Workload Scheduler processes and their intercommunication via message files.


Figure 2-12 IBM Tivoli Workload Scheduler interprocess communication

2.2.5 IBM Tivoli Workload Scheduler plan

The IBM Tivoli Workload Scheduler plan is the to-do list that tells Tivoli Workload Scheduler what jobs to run and what dependencies must be satisfied before each job is launched. The plan usually covers 24 hours; this period is sometimes referred to as the production day and can start at any point in the day. The best time of day to create a new plan is a time when few or no jobs are expected to be running. A new plan is created at the start of the production day.

After the plan has been created, a copy is sent to all subordinate workstations. The domain managers then distribute the plan to their fault-tolerant agents.

The subordinate domain managers distribute their copy to all of the fault-tolerant agents in their domain and to all domain managers that are subordinate to them, and so on down the line. This enables fault-tolerant agents throughout the network to continue processing even if the network connection to their domain manager is down. From the Job Scheduling Console or the command line interface, the operator can view and make changes in the day’s production by making changes in the Symphony file.

Figure 2-13 on page 59 shows the distribution of the Symphony file from master domain manager to domain managers and their subordinate agents.


Figure 2-13 Distribution of plan (Symphony file) in a Tivoli Workload Scheduler network

IBM Tivoli Workload Scheduler processes monitor the Symphony file and make calls to the operating system to launch jobs as required. The operating system runs the job, and in return informs IBM Tivoli Workload Scheduler whether the job has completed successfully or not. This information is entered into the Symphony file to indicate the status of the job. This way the Symphony file is continuously updated with the status of all jobs: the work that needs to be done, the work in progress, and the work that has been completed.

2.3 End-to-end scheduling architecture

In the two previous sections, 2.2, “Tivoli Workload Scheduler architecture” on page 50, and 2.1, “IBM Tivoli Workload Scheduler for z/OS architecture” on page 27, we described the architecture of Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS. In this section, we bring the two together; here we describe how the programs work together to function as a unified end-to-end scheduling solution.

End-to-end scheduling makes it possible to schedule and control jobs on mainframe, Windows, and UNIX environments, providing truly distributed scheduling. In the end-to-end configuration, Tivoli Workload Scheduler for z/OS is used as the planner for the job scheduling environment. Tivoli Workload Scheduler domain managers and fault-tolerant agents are used to schedule on the non-mainframe platforms, such as UNIX and Windows.

2.3.1 How end-to-end scheduling works

End-to-end scheduling means controlling scheduling from one end of an enterprise to the other, from the mainframe all the way down to the client workstation. Tivoli Workload Scheduler provides an end-to-end scheduling solution whereby one or more IBM Tivoli Workload Scheduler domain managers, and their underlying agents and domains, are put under the direct control of an IBM Tivoli Workload Scheduler for z/OS engine. To the domain managers and FTAs in the network, the IBM Tivoli Workload Scheduler for z/OS engine appears to be the master domain manager.

Tivoli Workload Scheduler for z/OS creates the plan (the Symphony file) for the Tivoli Workload Scheduler network and sends the plan down to the first-level domain managers. Each of these domain managers sends the plan to all of the subordinate workstations in its domain.

The domain managers act as brokers for the distributed network by resolving all dependencies for the subordinate managers and agents. They send their updates (in the form of events) to Tivoli Workload Scheduler for z/OS, which updates the plan accordingly. Tivoli Workload Scheduler for z/OS handles its own jobs and notifies the domain managers of all the status changes of its jobs that involve the IBM Tivoli Workload Scheduler plan. In this configuration, the domain managers and all the Tivoli Workload Scheduler workstations recognize Tivoli Workload Scheduler for z/OS as the master domain manager and notify it of all of the changes occurring in their own plans. At the same time, the agents are not permitted to interfere with the Tivoli Workload Scheduler for z/OS jobs, because those jobs are viewed as running on the master, which is the only node in charge of them.

In Figure 2-14 on page 61, you can see a Tivoli Workload Scheduler network managed by a Tivoli Workload Scheduler for z/OS engine. This is accomplished by connecting a Tivoli Workload Scheduler domain manager directly to the Tivoli Workload Scheduler for z/OS engine. If you compare Figure 2-9 on page 52 with Figure 2-14 on page 61, you will see that the Tivoli Workload Scheduler network that is connected to Tivoli Workload Scheduler for z/OS is the same network that was managed by the Tivoli Workload Scheduler master domain manager. When connecting this network to the engine, the AIX server that was acting as the Tivoli Workload Scheduler master domain manager is replaced by a mainframe. The new master domain manager is the Tivoli Workload Scheduler for z/OS engine.


Figure 2-14 IBM Tivoli Workload Scheduler for z/OS end-to-end scheduling

In Tivoli Workload Scheduler for z/OS, you can access job streams (also known as schedules in Tivoli Workload Scheduler and applications in Tivoli Workload Scheduler for z/OS) and add them to the current plan in Tivoli Workload Scheduler for z/OS. In addition, you can build dependencies among Tivoli Workload Scheduler for z/OS job streams and Tivoli Workload Scheduler jobs. From Tivoli Workload Scheduler for z/OS, you can monitor and control the FTAs.

In the Tivoli Workload Scheduler for z/OS current plan, you can specify jobs to run on workstations in the Tivoli Workload Scheduler network. The Tivoli Workload Scheduler for z/OS engine passes the job information to the Symphony file in the Tivoli Workload Scheduler for z/OS server, which in turn passes the Symphony file to the first-level Tivoli Workload Scheduler domain managers to distribute and process. In turn, Tivoli Workload Scheduler reports the status of running and completed jobs back to the current plan for monitoring in the Tivoli Workload Scheduler for z/OS engine.

The IBM Tivoli Workload Scheduler for z/OS engine is comprised of two components (started tasks on the mainframe): the controller and the server (also called the end-to-end server).


2.3.2 Tivoli Workload Scheduler for z/OS end-to-end components

To run Tivoli Workload Scheduler for z/OS end-to-end scheduling, you must have a Tivoli Workload Scheduler for z/OS server started task dedicated to end-to-end scheduling. It is also possible to use the same server to communicate with the Job Scheduling Console. The Tivoli Workload Scheduler for z/OS server uses TCP/IP for communication.

The Tivoli Workload Scheduler for z/OS controller uses the end-to-end server to communicate events to the FTAs. The end-to-end server will start multiple tasks and processes using the z/OS UNIX System Services (USS).

The Tivoli Workload Scheduler for z/OS end-to-end server must run on the same z/OS system where the served Tivoli Workload Scheduler for z/OS controller is started and active.
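The link between the two started tasks is established through initialization statements. The following is a minimal sketch with invented member, subsystem, and task names; the complete keyword reference is in Chapter 4.

/* Controller parameter member: activate the end-to-end subtasks  */
OPCOPTS  TPLGYSRV(TWSE2E)        /* started task name of the end-to-end server */

/* End-to-end server parameter member                             */
SERVOPTS SUBSYS(TWSC)            /* controller subsystem this server serves    */
         PROTOCOL(E2E)           /* run as the end-to-end server               */
         TPLGYPRM(TPLGPARM)      /* member containing the TOPOLOGY statement   */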

Tivoli Workload Scheduler for z/OS end-to-end scheduling is comprised of three major components:

- The IBM Tivoli Workload Scheduler for z/OS controller: Manages database objects, creates plans with the workload, and executes and monitors the workload in the plan.

- The IBM Tivoli Workload Scheduler for z/OS server: Acts as the Tivoli Workload Scheduler master domain manager. It receives a part of the current plan (the Symphony file) from the Tivoli Workload Scheduler for z/OS controller, which contains jobs and job streams to be executed in the Tivoli Workload Scheduler network. The server is the focal point for all communication to and from the Tivoli Workload Scheduler network.

- IBM Tivoli Workload Scheduler domain managers at the first level: Serve as the communication hub between the Tivoli Workload Scheduler for z/OS server and the distributed Tivoli Workload Scheduler network. The domain managers at the first level are connected directly to the Tivoli Workload Scheduler master domain manager running in USS in the Tivoli Workload Scheduler for z/OS end-to-end server.

In Tivoli Workload Scheduler for z/OS 8.2, you can have one or several Tivoli Workload Scheduler domain managers at the first level. These domain managers are connected directly to the Tivoli Workload Scheduler for z/OS end-to-end server, so they are called first-level domain managers.

It is possible to designate backup domain managers for the first-level Tivoli Workload Scheduler domain managers (as it is for “normal” Tivoli Workload Scheduler fault-tolerant agents and domain managers).


Detailed description of the communication

Figure 2-15 shows the communication between the Tivoli Workload Scheduler for z/OS controller and the Tivoli Workload Scheduler for z/OS server.

Figure 2-15 IBM Tivoli Workload Scheduler for z/OS 8.2 interprocess communication

Tivoli Workload Scheduler for z/OS server processes and tasks

The end-to-end server address space hosts the tasks and the data sets that function as the intermediaries between the controller and the first-level domain managers. In many cases, these tasks and data sets are replicas of the distributed Tivoli Workload Scheduler processes and files.

The Tivoli Workload Scheduler for z/OS server uses the following processes, threads, and tasks for end-to-end scheduling (see Figure 2-15):

netman The Tivoli Workload Scheduler network listener daemon. It is started automatically when the end-to-end server task starts. The netman process monitors the NetReq.msg queue and listens to the TCP port defined in the server topology portnumber parameter. (Default is port 31111.) When netman receives a request, it starts another program to handle the request, usually writer or mailman. Requests to start or stop mailman are written by the output translator to the NetReq.msg queue. Requests to start or stop writer are sent via TCP by the mailman process on a remote workstation (domain manager at the first level).

writer One writer process is started by netman for each connected remote workstation (domain manager at the first level). Each writer process receives events from the mailman process on a remote workstation and writes these events to the Mailbox.msg file.

mailman The main message handler process. Its main tasks are:

- Routing events. It reads the events stored in the Mailbox.msg queue and sends them either to the controller (writing them in the Intercom.msg file), or to the writer process on a remote workstation (via TCP).

- Linking to remote workstations (domain managers at the first level). The mailman process requests that the netman program on each remote workstation starts a writer process to accept the connection.

- Sending the Symphony file to subordinate workstations (domain managers at the first level). When a new Symphony file is created, the mailman process sends a copy of the file to each subordinate domain manager and fault-tolerant agent.

batchman Updates the Symphony file and resolves dependencies at master level. After the Symphony file has been written the first time, batchman is the only program that makes changes to the file.

The batchman program in USS does not perform job submission (this is why there is no jobman process running in UNIX System Services).

translator Through its input and output threads (discussed in more detail below), the translator process translates events from Tivoli Workload Scheduler format to Tivoli Workload Scheduler for z/OS format and vice versa. The translator program was developed specifically to handle the job of event translation from OPC events to Maestro events, and vice versa. The translator process runs in UNIX System Services on the mainframe; it does not run on domain managers or FTAs. The translator program provides the glue that binds Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler together; translator enables these two products to function as a unified scheduling system.

job log retriever A thread of the translator process that is spawned to fetch a job log from a fault-tolerant agent. One job log retriever thread is spawned for each requested FTA job log.

The job log retriever receives the log, sizes it according to the LOGLINES parameter, translates it from UTF-8 to EBCDIC, and queues it in the inbound queue of the controller. The retrieval of a job log is a lengthy operation and can take a few moments to complete.

The user may request several logs at the same time. The job log retriever thread terminates after the log has been written to the inbound queue. If using the IBM Tivoli Workload Scheduler for z/OS ISPF panel interface, the user will be notified by a message when the job log has been received.

script downloader A thread of the translator process that is spawned to download the script for an operation (job) defined in Tivoli Workload Scheduler with the Centralized Script option set to Yes. One script downloader thread is spawned for each script that must be downloaded. Several script downloader threads can be active at the same time. The script that is to be downloaded is received from the output translator.

starter The main process of the end-to-end server in UNIX System Services. The starter process is the first process that is started in UNIX System Services when the end-to-end server started task is started. The starter process starts the translator and the netman processes (not shown in Figure 2-15 on page 63).

Events passed from the server to the controller

input translator A thread of the translator process. The input translator thread reads events from the tomaster.msg file and translates them from Tivoli Workload Scheduler format to Tivoli Workload Scheduler for z/OS format. It also performs UTF-8 to EBCDIC translation and sends the translated events to the input writer.

input writer Receives the input from the job log retriever, input translator, and script downloader and writes it in the inbound queue (the EQQTWSIN data set).


receiver subtask A subtask of the end-to-end task in the Tivoli Workload Scheduler for z/OS controller. It receives events from the inbound queue and queues them to the Event Manager task. The events have already been filtered and processed by the input translator.

Events passed from the controller to the server

sender subtask A subtask of the end-to-end task in the Tivoli Workload Scheduler for z/OS controller. It receives events for changes to the current plan that are related to Tivoli Workload Scheduler fault-tolerant agents. The Tivoli Workload Scheduler for z/OS tasks that can change the current plan are: General Service (GS), Normal Mode Manager (NMM), Event Manager (EM), and Workstation Analyzer (WA).

The events are received via SSI, the usual method that the Tivoli Workload Scheduler for z/OS tasks use to exchange events.

The NMM sends events to the sender task when the plan is extended or replanned for synchronization purposes.

output translator A thread of the translator process. The output translator thread reads events from the outbound queue. It translates the events from Tivoli Workload Scheduler for z/OS format to Tivoli Workload Scheduler format and evaluates them, performing the appropriate function. Most events, including those related to changes to the Symphony file, are written to Mailbox.msg. Requests to start or stop netman or mailman are written to NetReq.msg. Output translator also translates events from EBCDIC to UTF-8.

The output translator interacts with several different components, depending on the type of the event:

- Starts a job log retriever thread if the event is to retrieve the log of a job from a Tivoli Workload Scheduler agent.

- Starts a script downloader thread if the event is to download the script.

- Queues an event in NetReq.msg if the event is to start or stop mailman.

- Queues events in Mailbox.msg for the other events that are sent to update the Symphony file on the Tivoli Workload Scheduler agents (for example, events for a job that has changed status, events for manual changes on jobs or workstations, or events to link or unlink workstations).

- Switches the Symphony files.

IBM Tivoli Workload Scheduler for z/OS datasets and files used for end-to-end scheduling

The Tivoli Workload Scheduler for z/OS server and controller use the following data sets and files for end-to-end scheduling:

EQQTWSIN Sequential data set used to queue events sent by the server to the controller (the inbound queue). Must be defined in Tivoli Workload Scheduler for z/OS controller and the end-to-end server started task procedure (shown as TWSIN in Figure 2-15 on page 63).

EQQTWSOU Sequential data set used to queue events sent by the controller to the server (the outbound queue). Must be defined in Tivoli Workload Scheduler for z/OS controller and the end-to-end server started task procedure (shown as TWSOU in Figure 2-15 on page 63).

EQQTWSCS Partitioned data set used to temporarily store a script when it is downloaded from the Tivoli Workload Scheduler for z/OS JOBLIB data set to the fault-tolerant agent for its submission. This data set is shown as TWSCS in Figure 2-15 on page 63.

This data set is described in “Tivoli Workload Scheduler for z/OS end-to-end database objects” on page 69. It is not shown in Figure 2-15 on page 63.

Symphony HFS file containing the active copy of the plan used by the distributed Tivoli Workload Scheduler agents.

Sinfonia HFS file containing the distribution copy of the plan used by the distributed Tivoli Workload Scheduler agents. This file is not shown in Figure 2-15 on page 63.

NetReq.msg HFS file used to queue requests for the netman process.

Mailbox.msg HFS file used to queue events sent to the mailman process.

Intercom.msg HFS file used to queue events sent to the batchman process.

tomaster.msg HFS file used to queue events sent to the input translator process.


Translator.chk HFS file used as the checkpoint file for the translator process. It is equivalent to the checkpoint data set used by the Tivoli Workload Scheduler for z/OS controller. For example, it contains information about the status of the Tivoli Workload Scheduler for z/OS current plan, the Symphony run number, and Symphony availability. This file is not shown in Figure 2-15 on page 63.

Translator.wjl HFS file used to store information about job log retrievals and script downloads that are in progress. At initialization, the translator checks the Translator.wjl file for job log retrievals and script downloads that did not complete (either correctly or in error) and sends the error back to the controller. This file is not shown in Figure 2-15 on page 63.

EQQSCLIB Partitioned data set used as a repository for jobs with non-centralized script definitions running on FTAs. The EQQSCLIB data set is described in “Tivoli Workload Scheduler for z/OS end-to-end database objects” on page 69. It is not shown in Figure 2-15 on page 63.

EQQSCPDS VSAM data set containing a copy of the current plan that is used by the daily plan batch programs to create the Symphony file.

The end-to-end plan creating process is described in 2.3.4, “Tivoli Workload Scheduler for z/OS end-to-end plans” on page 75. It is not shown in Figure 2-15 on page 63.
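As an illustration, the queues and the centralized script data set are allocated in the controller and end-to-end server started task procedures with DD statements similar to the following; the data set names are examples only:

//EQQTWSIN  DD DISP=SHR,DSN=TWS.INST.TWSIN          inbound event queue
//EQQTWSOU  DD DISP=SHR,DSN=TWS.INST.TWSOU          outbound event queue
//EQQTWSCS  DD DISP=SHR,DSN=TWS.INST.TWSCS          centralized script data set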

2.3.3 Tivoli Workload Scheduler for z/OS end-to-end configuration

The topology of the distributed IBM Tivoli Workload Scheduler network that is connected to the IBM Tivoli Workload Scheduler for z/OS engine is described in parameter statements for the Tivoli Workload Scheduler for z/OS server and for the Tivoli Workload Scheduler for z/OS programs that handle the long-term plan and the current plan.

Parameter statements are also used to activate the end-to-end subtasks in the Tivoli Workload Scheduler for z/OS controller.

The parameter statements that are used to describe the topology are covered in 4.2.6, “Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling” on page 174. That section also includes an example of how to reflect a specific Tivoli Workload Scheduler network topology in the Tivoli Workload Scheduler for z/OS servers and plan programs using the Tivoli Workload Scheduler for z/OS topology parameter statements.
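To give an idea of what the topology statements look like before reading that section, the following is a minimal sketch describing one first-level domain with one AIX fault-tolerant agent and one Windows workstation user. All member names, host names, directories, and the password are examples only; 4.2.6 gives the authoritative syntax.

/* Member referenced by TPLGYPRM: overall end-to-end options        */
TOPOLOGY TPLGYMEM(TPDOMAIN)       /* member holding DOMREC and CPUREC */
         USRMEM(TPUSER)           /* member holding USRREC            */
         BINDIR('/usr/lpp/TWS/V8R2M0')
         WRKDIR('/var/TWS/inst')
         HOSTNAME(MVSE2E.IBM.COM)
         PORTNUMBER(31111)
         LOGLINES(100)

/* Member TPDOMAIN: the distributed network                          */
DOMREC   DOMAIN(DOMAINA) DOMMNGR(F100) DOMPARENT(MASTERDM)
CPUREC   CPUNAME(F100)
         CPUOS(AIX)
         CPUNODE(copenhagen.itso.ibm.com)
         CPUTCPIP(31111)
         CPUDOMAIN(DOMAINA)
         CPUTYPE(FTA)
         CPUAUTOLNK(ON)
         CPUFULLSTAT(ON)
         CPULIMIT(20)
         CPUTZ(Europe/Copenhagen)
         CPUUSER(tws)

/* Member TPUSER: Windows users (only needed for Windows workstations) */
USRREC   USRCPU(F101) USRNAM(tws) USRPSW(secret)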

Tivoli Workload Scheduler for z/OS end-to-end database objects

In order to run jobs on fault-tolerant agents or extended agents, one must first define database objects related to the Tivoli Workload Scheduler workload in Tivoli Workload Scheduler for z/OS databases.

The Tivoli Workload Scheduler for z/OS end-to-end related database objects are:

- IBM Tivoli Workload Scheduler for z/OS fault tolerant workstations

A fault tolerant workstation is a computer workstation configured to schedule jobs on FTAs. The workstation must also be defined in the server CPUREC initialization statement (see Figure 2-16 on page 70).

- IBM Tivoli Workload Scheduler for z/OS job streams, jobs, and dependencies

Job streams and jobs to run on Tivoli Workload Scheduler FTAs are defined like other job streams and jobs in Tivoli Workload Scheduler for z/OS. To run a job on a Tivoli Workload Scheduler FTA, the job is simply defined on a fault tolerant workstation. Dependencies between Tivoli Workload Scheduler distributed jobs are created exactly the same way as other job dependencies in the Tivoli Workload Scheduler for z/OS controller. This is also the case when creating dependencies between Tivoli Workload Scheduler distributed jobs and Tivoli Workload Scheduler for z/OS mainframe jobs.

Some of the Tivoli Workload Scheduler for z/OS mainframe-specific options are not available for Tivoli Workload Scheduler distributed jobs.


Figure 2-16 A workstation definition and its corresponding CPUREC

- IBM Tivoli Workload Scheduler for z/OS resources

Only global resources are supported and can be used for Tivoli Workload Scheduler distributed jobs. This means that the resource dependency is resolved by the Tivoli Workload Scheduler for z/OS controller and not locally on the FTA.

For a job running on an FTA, the use of resources causes the loss of fault tolerance. Only the controller determines the availability of a resource and consequently lets the FTA start the job. Thus, if a job running on an FTA uses a resource, the following occurs:

– When the resource is available, the controller sets the state of the job to started and the extended status to waiting for submission.

– The controller sends a release-dependency event to the FTA.

– The FTA starts the job.

If the connection between the engine and the FTA is broken, the operation does not start on the FTA even if the resource becomes available.


- The task or script associated with the FTA job, defined in Tivoli Workload Scheduler for z/OS

In IBM Tivoli Workload Scheduler for z/OS 8.2, the task or script associated with the FTA job can be defined in two different ways:

a. Non-centralized script

The job or task definition is stored in a special partitioned data set, EQQSCLIB, allocated in the Tivoli Workload Scheduler for z/OS controller started task procedure. The script (the JCL) resides on the fault-tolerant agent. This is the default behavior in Tivoli Workload Scheduler for z/OS for fault-tolerant agent jobs.

b. Centralized script

The job is defined in Tivoli Workload Scheduler for z/OS with the Centralized Script option set to Y (Yes).

A centralized script resides in the IBM Tivoli Workload Scheduler for z/OS JOBLIB and is downloaded to the fault-tolerant agent every time the job is submitted. The concept of centralized script has been added for compatibility with the way that Tivoli Workload Scheduler for z/OS manages jobs in the z/OS environment.

Note: Special resource dependencies are represented differently depending on whether you are looking at the job through Tivoli Workload Scheduler for z/OS interfaces or Tivoli Workload Scheduler interfaces. If you observe the job using Tivoli Workload Scheduler for z/OS interfaces, you can see the resource dependencies as expected.

However, when you monitor a job on a fault-tolerant agent by means of the Tivoli Workload Scheduler interfaces, you will not be able to see the resource that is used by the job. Instead you will see a dependency on a job called OPCMASTER#GLOBAL.SPECIAL_RESOURCES. This dependency is set by the engine. Every job that has special resource dependencies has a dependency to this job.

When the engine allocates the resource for the job, the dependency is released. (The engine sends a release event for the specific job through the network.)

Note: The default for all operations and jobs in Tivoli Workload Scheduler for z/OS is N (No).


Non-centralized script

For every FTA job definition in Tivoli Workload Scheduler for z/OS where the centralized script option is set to N (non-centralized script), there must be a corresponding member in the EQQSCLIB data set. The members of EQQSCLIB contain a JOBREC statement that describes the path to the job or the command to be executed and, optionally, the user to be used when the job or command is executed.

Example for a UNIX script:

JOBREC JOBSCR(/Tivoli/tws/scripts/script001_accounting) JOBUSR(userid01)

Example for a UNIX command:

JOBREC JOBCMD(ls) JOBUSR(userid01)

If the JOBUSR (user for the job) keyword is not specified, the user defined in the CPUUSER keyword of the CPUREC statement for the fault-tolerant workstation is used.

If necessary, Tivoli Workload Scheduler for z/OS JCL variables can be used in the JOBREC definition. Tivoli Workload Scheduler for z/OS JCL variables and variable substitution in an EQQSCLIB member are managed and controlled by VARSUB statements placed directly in the EQQSCLIB member with the JOBREC definition for the particular job.

Furthermore, it is possible to define Tivoli Workload Scheduler recovery options for the job defined in the JOBREC statement. Tivoli Workload Scheduler recovery options are defined with RECOVERY statements placed directly in the EQQSCLIB member with the JOBREC definition for the particular job.
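As a sketch, an EQQSCLIB member that combines the three statements could look like the following; the variable table, script path, supplied variable, and recovery prompt text are examples only:

VARSUB   TABLES(ACCTTAB)
JOBREC   JOBSCR(/tivoli/tws/scripts/acct_&OYMD1)
         JOBUSR(userid01)
RECOVERY OPTION(RERUN)
         MESSAGE('Check the accounting input file, then reply to the prompt')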

The JOBREC (and optionally VARSUB and RECOVERY) definitions are read by the Tivoli Workload Scheduler for z/OS plan programs when producing the new current plan and placed as part of the job definition in the Symphony file.

If a Tivoli Workload Scheduler distributed job stream is added to the plan in Tivoli Workload Scheduler for z/OS, the JOBREC definition will be read by Tivoli Workload Scheduler for z/OS, copied to the Symphony file on the Tivoli Workload Scheduler for z/OS server, and sent (as events) by the server to the Tivoli Workload Scheduler agent Symphony files via the directly connected Tivoli Workload Scheduler domain managers.

It is important to remember that the EQQSCLIB member only has a pointer (the path) to the job that is going to be executed. The actual job (the JCL) is placed locally on the FTA or workstation in the directory defined by the JOBREC JOBSCR definition.


This also means that it is not possible to use the JCL edit function in Tivoli Workload Scheduler for z/OS to edit the script (the JCL) for jobs where the script (the pointer) is defined by a JOBREC statement in the EQQSCLIB data set.

Centralized script

The script for a job defined with the centralized script option set to Y must be defined in the Tivoli Workload Scheduler for z/OS JOBLIB. The script is defined in the same way as normal JCL.

It is possible (but not necessary) to define some parameters of the centralized script, such as the user, in a job definition member of the SCRPTLIB data set.

With centralized scripts, you can perform variable substitution, automatic recovery, JCL editing, and job setup (as for “normal” z/OS jobs defined in the Tivoli Workload Scheduler for z/OS JOBLIB). It is also possible to use the job-submit exit (EQQUX001).
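A centralized script member in the JOBLIB could look like the following minimal sketch; the member contents, script path, and use of the supplied date variable &OYMD1 are illustrative assumptions:

#!/bin/ksh
# Centralized script stored as a member of the controller JOBLIB.
# Variable substitution (here &OYMD1) can be applied before the
# script reaches the fault-tolerant agent.
/tivoli/tws/scripts/daily_extract.sh &OYMD1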

Note that jobs with centralized script will be defined in the Symphony file with a dependency named script. This dependency will be released when the job is ready to run and the script is downloaded from the Tivoli Workload Scheduler for z/OS controller to the fault-tolerant agent.

To download a centralized script, the DD statement EQQTWSCS must be present in the controller and server started tasks. During the download the <twshome>/centralized directory is created at the fault-tolerant workstation. The script is downloaded to this directory. If an error occurs during this operation, the controller retries the download every 30 seconds for a maximum of 10 times. If the script download still fails after 10 retries, the job (operation) is marked as Ended-in-error with error code OSUF.

Here are the detailed steps for downloading and executing centralized scripts on FTAs (Figure 2-17 on page 75):

1. Tivoli Workload Scheduler for z/OS controller instructs sender subtask to begin script download.

2. The sender subtask writes the centralized script to the centralized scripts data set (EQQTWSCS).

3. The sender subtask writes a script download event (type JCL, action D) to the output queue (EQQTWSOU).

4. The output translator thread reads the JCL-D event from the output queue.

5. The output translator thread reads the script from the centralized scripts data set (EQQTWSCS).

6. The output translator thread spawns a script downloader thread.


7. The script downloader thread connects directly to netman on the FTA where the script will run.

8. netman spawns dwnldr and connects the socket from the script downloader thread to the new dwnldr process.

9. dwnldr downloads the script from the script downloader thread and writes it to the TWSHome/centralized directory on the FTA.

10.dwnldr notifies the script downloader thread of the result of the download.

11.The script downloader thread passes the result to the input writer thread.

12.If the script download was successful, the input writer thread writes a script download successful event (type JCL, action C) on the input queue (EQQTWSIN). If the script download was unsuccessful, the input writer thread writes a script download in error event (type JCL, action E) on the input queue.

13.The receiver subtask reads the script download result event from the input queue.

14.The receiver subtask notifies the Tivoli Workload Scheduler for z/OS controller of the result of the script download. If the result of the script download was successful, the OPC controller then sends a release dependency event (type JCL, action R) to the FTA, via the normal IPC channel (sender subtask → output queue → output translator → Mailbox.msg → mailman → writer on FTA, and so on). This event causes the job to run.


Figure 2-17 Steps and processes for downloading centralized script

Creating a centralized script in the Tivoli Workload Scheduler for z/OS JOBLIB data set is described in 4.5.2, “Definition of centralized scripts” on page 219.

2.3.4 Tivoli Workload Scheduler for z/OS end-to-end plans

When scheduling jobs in the Tivoli Workload Scheduler environment, current plan processing also includes the automatic generation of the Symphony file that goes to the IBM Tivoli Workload Scheduler for z/OS server and IBM Tivoli Workload Scheduler subordinate domain managers as well as fault-tolerant agents.

The Tivoli Workload Scheduler for z/OS current plan program is normally run on workdays in the engine as described in 2.1.3, “Tivoli Workload Scheduler for z/OS plans” on page 37.


Figure 2-18 shows a combined view of long-term planning and current planning. Changes to the databases require an update of the long-term plan, so most sites run the LTP Modify batch job immediately before extending the current plan.

Figure 2-18 Combined view of the long-term planning and current planning

If the end-to-end feature is activated in Tivoli Workload Scheduler for z/OS, the current plan program will read the topology definitions described in the TOPLOGY, DOMREC, CPUREC, and USRREC initialization statements (see 2.3.3, “Tivoli Workload Scheduler for z/OS end-to-end configuration” on page 68) and the script library (EQQSCLIB) as part of the planning process. Information from the initialization statements and the script library will be used to create a Symphony file for the Tivoli Workload Scheduler FTAs (see Figure 2-19 on page 77). The whole process is handled by Tivoli Workload Scheduler for z/OS planning programs.
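As a sketch only, the topology-related initialization statements might look like the following. The member names, domain names, host names, paths, and the user password shown here are placeholders; refer to the customization chapters for the full set of keywords:

TOPOLOGY BINDIR('/usr/lpp/TWS820')
         WRKDIR('/var/TWS/inst')
         HOSTNAME(MVSE2E.IBM.COM)
         PORTNUMBER(31182)
         TPLGYMEM(TPLGINFO)
         USRMEM(USRINFO)

DOMREC   DOMAIN(DOMAINA) DOMMGR(FDMA) DOMPARENT(MASTERDM)

CPUREC   CPUNAME(FDMA)
         CPUNODE(dma.example.com)
         CPUDOMAIN(DOMAINA)
         CPUTYPE(FTA)
         CPUOS(AIX)

USRREC   USRCPU(FTA2)
         USRNAM(twsuser)
         USRPSW('password')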


Figure 2-19 Creation of Symphony file in Tivoli Workload Scheduler for z/OS plan programs

This process is handled by the Tivoli Workload Scheduler for z/OS planning programs, as described in the next section.

Detailed description of the Symphony creation

Figure 2-15 on page 63 gives a description of the tasks and processes involved in the Symphony creation.


Figure 2-20 IBM Tivoli Workload Scheduler for z/OS 8.2 interprocess communication

1. The process is handled by Tivoli Workload Scheduler for z/OS planning batch programs. The batch produces the NCP and initializes the symUSER.

2. The Normal Mode Manager (NMM) sends the SYNC START ('S') event to the server, and the end-to-end receiver leaves all events in the inbound queue (TWSIN) without processing them.

3. When the SYNC START ('S') is processed by the output translator, it stops the OPCMASTER, sends the SYNC END ('E') to the controller, and stops the entire network.

4. The NMM applies the job tracking events received while the new plan was produced. It then copies the new current plan data set (NCP) to the Tivoli Workload Scheduler for z/OS current plan data set (CP1 or CP2), makes a current plan backup (copies the active CP1/CP2 to the inactive CP1/CP2), and creates the Symphony Current Plan (SCP) data set as a copy of the active current plan (CP1 or CP2) data set.

5. Tivoli Workload Scheduler for z/OS mainframe scheduling is resumed.

6. The end-to-end receiver begins to process events in the queue.


7. The SYNC CPREADY ('Y') event is sent to the output translator, which starts, leaving all events in the outbound queue (TWSOU).

8. The plan program produces the SymUSER file, starting from the SCP, and then renames it to Symnew.

9. When the Symnew file has been created, the plan program ends and NMM notifies the output translator that the Symnew file is ready, sending the SYNC SYMREADY ('R') event to the output translator.

10.The output translator renames old Symphony and Sinfonia files to Symold and Sinfold files, and a Symphony OK ('X') or NOT OK ('B') Sync event is sent to the Tivoli Workload Scheduler for z/OS engine, which logs a message in the engine message log indicating whether the Symphony has been switched.

11.The Tivoli Workload Scheduler for z/OS server master is started in USS and the input translator starts to process new events. As in a distributed Tivoli Workload Scheduler network, mailman and batchman process the events left in the local event files and start distributing the new Symphony file to the whole IBM Tivoli Workload Scheduler network.

When the Symphony file is created by the Tivoli Workload Scheduler for z/OS plan programs, it (or, more precisely, the Sinfonia file) will be distributed to the Tivoli Workload Scheduler for z/OS subordinate domain manager, which in turn distributes the Symphony (Sinfonia) file to its subordinate domain managers and fault-tolerant agents. (See Figure 2-21 on page 80.)


Figure 2-21 Symphony file distribution from ITWS for z/OS server to ITWS agents

The Symphony file is generated:

� Every time the Tivoli Workload Scheduler for z/OS plan is extended or replanned

� When a Symphony renew batch job is submitted (from Tivoli Workload Scheduler for z/OS legacy ISPF panels, option 3.5)

The Symphony file contains:

� Jobs to be executed on Tivoli Workload Scheduler FTAs

� z/OS (mainframe) jobs that are predecessors to Tivoli Workload Scheduler distributed jobs

� Job streams that have at least one job in the Symphony file

� Topology information for the distributed network with all the workstation and domain definitions, including the master domain manager of the distributed network; that is, the Tivoli Workload Scheduler for z/OS host.


After the Symphony file is created and distributed to the Tivoli Workload Scheduler FTAs, the Symphony file is updated by events:

� When job status changes

� When jobs or job streams are modified

� When jobs or job streams for the Tivoli Workload Scheduler FTAs are added to the plan in the Tivoli Workload Scheduler for z/OS controller.

If you look at the Symphony file locally on a Tivoli Workload Scheduler FTA, from the Job Scheduling Console, or using the Tivoli Workload Scheduler command line interface to the plan (conman), you will see that:

� The Tivoli Workload Scheduler workstation has the same name as the related workstation defined in Tivoli Workload Scheduler for z/OS for the agent. OPCMASTER is the hard-coded name for the master domain manager workstation for the Tivoli Workload Scheduler for z/OS controller.

� The name of the job stream (or schedule) is the hexadecimal representation of the occurrence (job stream instance) token (an internal, unique, and invariant identifier for occurrences). The job streams are always defined on the OPCMASTER workstation. (Because these job stream instances have no dependencies of their own, this does not reduce fault tolerance.) See Figure 2-22 on page 82.

� Using this hexadecimal representation for the job stream instances makes it possible to have several instances for the same job stream, because they have unique job stream names. Therefore, it is possible to have a plan in the Tivoli Workload Scheduler for z/OS controller and a distributed Symphony file that spans more than 24 hours.

� The job name is made up according to one of the following formats (see Figure 2-22 on page 82 for an example):

– <T>_<opnum>_<applname> when the job is created in the Symphony file

– <T>_<opnum>_<ext>_<applname> when the job is first deleted from the current plan and then recreated in the current plan

Note: In Tivoli Workload Scheduler for z/OS, the key in the plan for an occurrence is job stream name and input arrival time.

In the Symphony file, the key is the job stream instance name. Since Tivoli Workload Scheduler for z/OS can have several job stream instances with the same name in the plan, a unique and invariant identifier (the occurrence token) is needed for the occurrence or job stream instance name in the Symphony file.


In these examples:

– <T> is J for normal jobs (operations), P for jobs that are representing pending predecessors, or R for recovery jobs (jobs added by Tivoli Workload Scheduler recovery).

– <opnum> is the operation number for the job in the job stream (in current plan).

– <ext> is a sequential number that is incremented every time the same operation is deleted and then recreated in current plan; if 0, it is omitted.

– <applname> is the name of the occurrence (job stream) the operation belongs to.

Figure 2-22 Job name and job stream name as generated in the Symphony file

Tivoli Workload Scheduler for z/OS uses the job name and an operation number as the key for the job in a job stream.

In the Symphony file, only the job name is used as the key. Since Tivoli Workload Scheduler for z/OS can have the same job name several times in one job stream and distinguishes between identical job names by the operation number, the job names generated in the Symphony file contain the Tivoli Workload Scheduler for z/OS operation number as part of the job name.
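For example, for a normal job at operation number 010 in a job stream (occurrence) named PAYROLL (both values chosen only for illustration), the job name in the Symphony file takes the form:

J_010_PAYROLL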

The name of a job stream (application) can contain national characters such as dollar ($), section (§), and pound (£). These characters are converted into dashes (-) in the names of included jobs when the job stream is added to the Symphony file or when the Symphony file is created. For example, consider the job stream name:

APPL$$234§§ABC£

In the Symphony file, the names of the jobs in this job stream will be:

<T>_<opnum>_APPL--234--ABC-

This nomenclature is still valid because the job stream instance (occurrence) is identified by the occurrence token, and the operations are each identified by the operation numbers (<opnum>) that are part of the job names in the Symphony file.

In normal situations, the Symphony file is automatically generated as part of the Tivoli Workload Scheduler for z/OS plan process. The topology definitions are read and built into the Symphony file by the Tivoli Workload Scheduler for z/OS plan programs. Even so, situations can occur during regular operation where you need to renew (or rebuild) the Symphony file from the Tivoli Workload Scheduler for z/OS plan:

� When you make changes to the script library or to the definitions of the TOPOLOGY statement

� When you add or change information in the plan, such as workstation definitions

To have the Symphony file rebuilt or renewed, you can use the Symphony Renew option of the Daily Planning menu (option 3.5 in the legacy IBM Tivoli Workload Scheduler for z/OS ISPF panels).

This renew function can also be used to recover from error situations such as:

� A non-valid job definition in the script library

� Incorrect workstation definitions

� An incorrect Windows user name or password

� Changes to the script library or to the definitions of the TOPOLOGY statement

In 5.8.5, “Common errors for jobs on fault-tolerant workstations” on page 334, we describe how to correct several of these error situations without redistributing the Symphony file. It is worthwhile to become familiar with these alternatives before you start redistributing a Symphony file in a heavily loaded production environment.

Note: The criteria that are used to generate job names in the Symphony file can be managed by the Tivoli Workload Scheduler for z/OS JTOPTS TWSJOBNAME() parameter, which was introduced with APAR PQ77970. It is possible, for example, to use the job name (from the operation) instead of the job stream name for the job name in the Symphony file, so the job name will be <T>_<opnum>_<jobname> in the Symphony file.
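For instance, to build the Symphony job names from the operation's job name rather than the job stream name, the JTOPTS statement might carry a parameter like the following sketch (check the parameter reference for the exact set of permitted values):

JTOPTS TWSJOBNAME(JOBNAME)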


2.3.5 Making the end-to-end scheduling system fault tolerant

In the following, we cover some possible cases of failure in end-to-end scheduling and ways to mitigate these failures:

1. The Tivoli Workload Scheduler for z/OS engine (controller) can fail due to a system or task outage.

2. The Tivoli Workload Scheduler for z/OS server can fail due to a system or task outage.

3. The domain managers at the first level, that is the domain managers directly connected to the Tivoli Workload Scheduler for z/OS server, can fail due to a system or task outage.

To avoid an outage of the end-to-end workload managed in the Tivoli Workload Scheduler for z/OS engine and server and in the Tivoli Workload Scheduler domain manager, you should consider:

� Using a standby engine (controller) for the Tivoli Workload Scheduler for z/OS engine (controller).

� Making sure that your Tivoli Workload Scheduler for z/OS server can be reached if the Tivoli Workload Scheduler for z/OS engine (controller) is moved to one of its standby engines (TCP/IP configuration in your enterprise). Remember that the end-to-end server started task always must be active on the same z/OS system as the active engine (controller).

� Defining backup domain managers for your Tivoli Workload Scheduler domain managers at the first level.

Figure 2-23 shows an example of a fault-tolerant end-to-end network with a Tivoli Workload Scheduler for z/OS standby controller engine and one Tivoli Workload Scheduler backup domain manager for one Tivoli Workload Scheduler domain manager at the first level.

Note: It is a good practice to define backup domain managers for all domain managers in the distributed Tivoli Workload Scheduler network.


Figure 2-23 Redundant configuration with standby engine and IBM Tivoli Workload Scheduler backup DM

If the domain manager for DomainZ fails, it will be possible to switch to the backup domain manager. The backup domain manager has an updated Symphony file and knows the subordinate domain managers and fault-tolerant agents, so it can take over the responsibilities of the domain manager. This switch can be performed without any outages in the workload management.
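The switch itself can be made, for example, with the conman switchmgr command issued on the backup domain manager; the domain and workstation names here are examples:

conman "switchmgr DOMAINZ;FDMB"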

If the switch to the backup domain manager is going to remain active across the Tivoli Workload Scheduler for z/OS plan extension, you must change the topology definitions in the Tivoli Workload Scheduler for z/OS DOMREC initialization statements so that the backup domain manager's fault-tolerant workstation remains the first-level domain manager for the Tivoli Workload Scheduler distributed network even after the plan extension.

Example 2-1 shows how to change the name of the fault tolerant workstation in the DOMREC initialization statement, if the switch to the backup domain manager is effective across the Tivoli Workload Scheduler for z/OS plan extension. (See 5.5.4, “Switch to Tivoli Workload Scheduler backup domain manager” on page 308 for more information.)


Example 2-1 DOMREC initialization statement

DOMREC DOMAIN(DOMAINZ) DOMMGR(FDMZ) DOMPARENT(MASTERDM)

Should be changed to:

DOMREC DOMAIN(DOMAINZ) DOMMGR(FDMB) DOMPARENT(MASTERDM)

Where FDMB is the name of the fault tolerant workstation where the backup domain manager is running.

If the Tivoli Workload Scheduler for z/OS engine or server fails, it will be possible to let one of the standby engines in the same sysplex take over. This takeover can be accomplished without any outages in the workload management.

The Tivoli Workload Scheduler for z/OS server must follow the Tivoli Workload Scheduler for z/OS engine. That is, if the Tivoli Workload Scheduler for z/OS engine is moved to another system in the sysplex, the Tivoli Workload Scheduler for z/OS server must be moved to the same system in the sysplex.

2.3.6 Benefits of end-to-end scheduling

The benefits that can be gained from using the Tivoli Workload Scheduler for z/OS end-to-end scheduling include:

� The ability to connect Tivoli Workload Scheduler fault-tolerant agents to a Tivoli Workload Scheduler for z/OS controller.

� Scheduling on additional operating systems.

� The ability to define resource dependencies between jobs that run on different FTAs or in different domains.

� Synchronizing work in mainframe and distributed environments.

� The ability to organize the scheduling network into multiple tiers, delegating some responsibilities to Tivoli Workload Scheduler domain managers.

� Extended planning capabilities, such as the use of long-term plans, trial plans, and extended plans, also for the Tivoli Workload Scheduler network.

Extended plans also mean that the current plan can span more than 24 hours. One possible benefit is being able to extend a current plan over a time period when no one will be available to verify that the current plan was successfully created each day, such as over a holiday weekend. The end-to-end environment also allows the current plan to be extended for a specified length of time, or to replan the current plan to remove completed jobs.

� Powerful run-cycle and calendar functions. Tivoli Workload Scheduler end-to-end enables more complex run cycles and rules to be defined to determine when a job stream should be scheduled.

� Ability to create a Trial Plan that can span more than 24 hours.

� Improved use of resources (keep resource if job ends in error).

� Enhanced use of host names instead of dotted IP addresses.

� Multiple job or job stream instances in the same plan. In the end-to-end environment, job streams are renamed using a unique identifier so that multiple job stream instances can be included in the current plan.

� The ability to use batch tools (for example, Batchloader, Massupdate, OCL, BCIT) that enable batched changes to be made to the Tivoli Workload Scheduler end-to-end database and plan.

� The ability to specify at the job level whether the job’s script should be centralized (placed in Tivoli Workload Scheduler for z/OS JOBLIB) or non-centralized (placed locally on the Tivoli Workload Scheduler agent).

� Use of Tivoli Workload Scheduler for z/OS JCL variables in both centralized and non-centralized scripts.

� The ability to use Tivoli Workload Scheduler for z/OS recovery in centralized scripts or Tivoli Workload Scheduler recovery in non-centralized scripts.

� The ability to define and browse operator instructions associated with jobs in the database and plan. In a Tivoli Workload Scheduler distributed environment, it is possible to insert comments or a description in a job definition, but these comments and description are not visible from the plan functions.

� The ability to define a job stream that will be submitted automatically to Tivoli Workload Scheduler when one of the following events occurs in the z/OS system: a particular job is executed or terminated in the z/OS system, a specified resource becomes available, or a z/OS dataset is created or opened.


Considerations

Implementing Tivoli Workload Scheduler for z/OS end-to-end also imposes some limitations:

� Windows users’ passwords are defined directly (without any encryption) in the Tivoli Workload Scheduler for z/OS server initialization parameters. It is possible to place these definitions in a separate library with restricted access (restricted by RACF, for example) to authorized persons.

� In an end-to-end configuration, some of the conman command options are disabled. On an end-to-end FTA, the conman command only allows display operations and the subset of commands (such as kill, altpass, link/unlink, start/stop, switchmgr) that do not affect the status or sequence of jobs. Command options that could affect the information that is contained in the Symphony file are not allowed. For a complete list of allowed conman commands, refer to 2.7, “conman commands in the end-to-end environment” on page 106.

� Workstation classes are not supported in an end-to-end configuration.

� The LIMIT attribute is supported on the workstation level, not on the job stream level in an end-to-end environment.

� Some Tivoli Workload Scheduler functions are not available directly on Tivoli Workload Scheduler FTAs, but can be handled by other functions in Tivoli Workload Scheduler for z/OS.

For example:

– IBM Tivoli Workload Scheduler prompts

• Recovery prompts are supported.

• The Tivoli Workload Scheduler predefined and ad hoc prompts can be replaced with the manual workstation function in Tivoli Workload Scheduler for z/OS.

– IBM Tivoli Workload Scheduler file dependencies

• It is not possible to define file dependencies directly at job level in Tivoli Workload Scheduler for z/OS for distributed Tivoli Workload Scheduler jobs.

• The filewatch program that is delivered with Tivoli Workload Scheduler can be used to create file dependencies for distributed jobs in Tivoli Workload Scheduler for z/OS. Using the filewatch program, the file dependency is “replaced” by a job dependency in which a predecessor job checks for the file using the filewatch program.


– Dependencies on job stream level

The traditional way to handle these types of dependencies in Tivoli Workload Scheduler for z/OS is to define a “dummy start” and “dummy end” job at the beginning and end of the job streams, respectively.

– Repeat range (that is, “rerun this job every 10 minutes”)

Although there is no built-in function for this in Tivoli Workload Scheduler for z/OS, it can be accomplished in different ways, such as by defining the job repeatedly in the job stream with specific start times or by using a PIF (Tivoli Workload Scheduler for z/OS Programming Interface) program to rerun the job every 10 minutes.

– Job priority change

Job priority cannot be changed directly for an individual fault-tolerant job. In an end-to-end configuration, it is possible to change the priority of a job stream. When the priority of a job stream is changed, all jobs within the job stream will have the same priority.

– Internetwork dependencies

An end-to-end configuration supports dependencies on a job that is running in the same Tivoli Workload Scheduler end-to-end or distributed topology (network).

2.4 Job Scheduling Console and related components

The Job Scheduling Console (JSC) provides another way of working with Tivoli Workload Scheduler for z/OS databases and current plan. The JSC is a graphical user interface that connects to the Tivoli Workload Scheduler for z/OS engine via a Tivoli Workload Scheduler for z/OS TCP/IP server task. Usually this task is dedicated exclusively to handling JSC communications. Later in this book, the server task that is dedicated to JSC communications will be referred to as the JSC server (Figure 2-24 on page 90).

The TCP/IP server is a separate address space, started and stopped either automatically by the engine or by the user via the z/OS start and stop commands. More than one TCP/IP server can be associated with an engine.


Figure 2-24 Communication between JSC and ITWS for z/OS via the JSC Server

The Job Scheduling Console can be run on almost any platform. Using the JSC, an operator can access both Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS scheduling engines. In order to communicate with the scheduling engines, the JSC requires several additional components to be installed:

� Tivoli Management Framework

� Job Scheduling Services (JSS)

� Tivoli Workload Scheduler connector, Tivoli Workload Scheduler for z/OS connector, or both

The Job Scheduling Services and the connectors must be installed on top of the Tivoli Management Framework. Together, the Tivoli Management Framework, the Job Scheduling Services, and the connector provide the interface between JSC and the scheduling engine.

The Job Scheduling Console is installed locally on your desktop computer, laptop computer, or workstation.

2.4.1 A brief introduction to the Tivoli Management Framework

Tivoli Management Framework provides the foundation on which the Job Scheduling Services and connectors are installed. It also performs access verification when a Job Scheduling Console user logs in. The Tivoli Management Environment (TME®) uses the concept of Tivoli Management Regions (TMRs). There is a single server for each TMR, called the TMR server; this is analogous to the IBM Tivoli Workload Scheduler master server. The TMR server contains the Tivoli object repository (a database used by the TMR). Managed nodes are semi-independent agents that are installed on other nodes in the network; these are roughly analogous to Tivoli Workload Scheduler fault-tolerant agents. For more information about the Tivoli Management Framework, see the IBM Tivoli Management Framework 4.1 User’s Guide, GC32-0805.

2.4.2 Job Scheduling Services (JSS)

The Job Scheduling Services component provides a unified interface in the Tivoli Management Framework for different job scheduling engines. Job Scheduling Services does not do anything on its own; it requires additional components called connectors in order to connect to job scheduling engines. It must be installed on either the TMR server or a managed node.

2.4.3 Connectors

Connectors are the components that enable the Job Scheduling Services to talk with different types of scheduling engines. When working with a particular type of scheduling engine, the Job Scheduling Console communicates with the scheduling engine via the Job Scheduling Services and the connector. A different connector is required for each type of scheduling engine. A connector can only be installed on a computer where the Tivoli Management Framework and Job Scheduling Services have already been installed.

There are two types of connectors for connecting to the two types of scheduling engines in the IBM Tivoli Workload Scheduler 8.2 suite:

� IBM Tivoli Workload Scheduler for z/OS connector (or OPC connector)
� IBM Tivoli Workload Scheduler connector

Job Scheduling Services communicates with the engine via the connector of the appropriate type. When working with a Tivoli Workload Scheduler for z/OS engine, the JSC communicates via the Tivoli Workload Scheduler for z/OS connector. When working with a Tivoli Workload Scheduler engine, the JSC communicates via the Tivoli Workload Scheduler connector.

The two types of connectors function somewhat differently: The Tivoli Workload Scheduler for z/OS connector communicates over TCP/IP with the Tivoli Workload Scheduler for z/OS engine running on a mainframe (MVS or z/OS) computer. The Tivoli Workload Scheduler connector performs direct reads and writes of the Tivoli Workload Scheduler plan and database files on the same computer where the Tivoli Workload Scheduler connector runs.


A connector instance must be created before the connector can be used. Each type of connector can have multiple instances. A separate instance is required for each engine that will be controlled by JSC.

We will now discuss each type of connector in more detail.

Tivoli Workload Scheduler for z/OS connector

Also sometimes called the OPC connector, the Tivoli Workload Scheduler for z/OS connector can be instantiated on any TMR server or managed node. The Tivoli Workload Scheduler for z/OS connector instance communicates via TCP with the Tivoli Workload Scheduler for z/OS TCP/IP server. You might, for example, have two different Tivoli Workload Scheduler for z/OS engines that both must be accessible from the Job Scheduling Console. In this case, you would install one connector instance for working with one Tivoli Workload Scheduler for z/OS engine, and another connector instance for communicating with the other engine. When a Tivoli Workload Scheduler for z/OS connector instance is created, the IP address (or host name) and TCP port number of the Tivoli Workload Scheduler for z/OS engine’s TCP/IP server are specified. The Tivoli Workload Scheduler for z/OS connector uses these two pieces of information to connect to the Tivoli Workload Scheduler for z/OS engine. See Figure 2-25 on page 93.

Tivoli Workload Scheduler connector

The Tivoli Workload Scheduler connector must be instantiated on the host where the Tivoli Workload Scheduler engine is installed so that it can access the plan and database files locally. This means that the Tivoli Management Framework must be installed (either as a TMR server or managed node) on the server where the Tivoli Workload Scheduler engine resides. Usually, this server is the Tivoli Workload Scheduler master domain manager. But it may also be desirable to connect with JSC to another domain manager or to a fault-tolerant agent. If multiple instances of Tivoli Workload Scheduler are installed on a server, it is possible to have one Tivoli Workload Scheduler connector instance for each Tivoli Workload Scheduler instance on the server. When a Tivoli Workload Scheduler connector instance is created, the full path to the Tivoli Workload Scheduler home directory associated with that Tivoli Workload Scheduler instance is specified. This is how the Tivoli Workload Scheduler connector knows where to find the Tivoli Workload Scheduler databases and plan. See Figure 2-25 on page 93.

Connector instances

We now give some examples of how connector instances might be installed in the real world.


One connector instance of each type

In Figure 2-25, there are two connector instances: one Tivoli Workload Scheduler for z/OS connector instance and one Tivoli Workload Scheduler connector instance:

� The Tivoli Workload Scheduler for z/OS connector instance is associated with a Tivoli Workload Scheduler for z/OS engine running in a remote sysplex. Communication between the connector instance and the remote scheduling engine is conducted over a TCP connection.

� The Tivoli Workload Scheduler connector instance is associated with a Tivoli Workload Scheduler engine installed on the same AIX server. The Tivoli Workload Scheduler connector instance reads from and writes to the plan (the Symphony file) of the Tivoli Workload Scheduler engine.

Figure 2-25 One ITWS for z/OS connector and one ITWS connector instance


In this example, the connectors are installed on the domain manager DMB. This domain manager has one connector instance of each type:

� A Tivoli Workload Scheduler connector to monitor the plan file (Symphony) locally on DMB

� A Tivoli Workload Scheduler for z/OS (OPC) connector to work with the databases and current plan on the mainframe

Having the Tivoli Workload Scheduler connector installed on a DM provides the operator with the ability to use JSC to look directly at the Symphony file on that workstation. This is particularly useful in the event that problems arise during the production day. If any discrepancy appears between the state of a job in the Tivoli Workload Scheduler for z/OS current plan and the Symphony file on an FTA, it is useful to be able to look at the Symphony file directly. Another benefit is that retrieval of job logs from an FTA is much faster when the job log is retrieved through the Tivoli Workload Scheduler connector. If the job log is fetched through the Tivoli Workload Scheduler for z/OS engine, it can take much longer.

Connectors on multiple domain managers

With the previous version of IBM Tivoli Workload Scheduler (Version 8.1), it was necessary to have a single primary domain manager that was the parent of all other domain managers. Figure 2-25 on page 93 shows an example of such an arrangement. Tivoli Workload Scheduler 8.2 removes this limitation. With Version 8.2, it is possible to have more than one domain manager directly under the master domain manager. Most end-to-end scheduling networks will have more than one domain manager under the master. For this reason, it is a good idea to install the Tivoli Workload Scheduler connector and OPC connector on more than one domain manager.

Tip: Tivoli Workload Scheduler connector instances must be created on the server where the Tivoli Workload Scheduler engine is installed. This is because the connector must be able to have access locally to the Tivoli Workload Scheduler engine (specifically, to the plan and database files). This limitation obviously does not apply to Tivoli Workload Scheduler for z/OS connector instances because the Tivoli Workload Scheduler for z/OS connector communicates with the remote Tivoli Workload Scheduler for z/OS engine over TCP/IP.


Figure 2-26 An example with two connector instances of each type

Next, we discuss the connectors in more detail.

The connector programs

These are the programs that run behind the scenes to make the connectors work. Each program and its function is described below.

Programs of the IBM Tivoli Workload Scheduler for z/OS connector

The programs that comprise the Tivoli Workload Scheduler for z/OS connector are located in $BINDIR/OPC (Figure 2-27 on page 96).

Note: It is a good idea to set up more than one Tivoli Workload Scheduler for z/OS connector instance associated with the engine (as in Figure 2-26). This way, if there is a problem with one of the workstations running the connector, JSC users will still be able to access the Tivoli Workload Scheduler for z/OS engine via the other connector. If JSC access is important to your enterprise, it is vital to set up redundant connector instances like this.


Figure 2-27 Programs of the IBM Tivoli Workload Scheduler for z/OS (OPC) connector

� opc_connector

The main connector program, which contains the implementation of the main connector methods (basically all the methods that are required to connect to and retrieve data from the Tivoli Workload Scheduler for z/OS engine). It is implemented as a threaded daemon, which means that it is automatically started by the Tivoli Framework at the first request that it should handle, and it stays active until no requests have been received for a long time. After it is started, it handles starting new threads for all JSC requests that require data from a specific Tivoli Workload Scheduler for z/OS engine.

� opc_connector2

A small connector program that contains the implementation of small methods that do not require data from Tivoli Workload Scheduler for z/OS. This program is implemented per method, which means that the Tivoli Framework starts this program when a method implemented by it is called; the process performs the action for this method and then terminates. This is useful for methods (like the ones called by JSC when it starts and asks for information from all of the connectors) that can be isolated and for which it is not worthwhile to keep the process active.


Programs of the IBM Tivoli Workload Scheduler connector

The programs that comprise the Tivoli Workload Scheduler connector are located in $BINDIR/Maestro (Figure 2-28).

Figure 2-28 Programs of the IBM Tivoli Workload Scheduler connector

� maestro_engine

The maestro_engine program performs authentication when a user logs in via the Job Scheduling Console. It also starts and stops the Tivoli Workload Scheduler engine. It is started by the Tivoli Management Framework (specifically, the oserv program) when a user logs in from JSC. It terminates after 30 minutes of inactivity.

� maestro_plan

The maestro_plan program reads from and writes to the Tivoli Workload Scheduler plan. It also handles switching to a different plan. The program is started when a user accesses the plan. It terminates after 30 minutes of inactivity.

Note: oserv is the Tivoli service that is used as the object request broker (ORB). This service runs on the Tivoli management region server and each managed node.


� maestro_database

The maestro_database program reads from and writes to the Tivoli Workload Scheduler database files. It is started when a JSC user accesses a database object or creates a new object definition. It terminates after 30 minutes of inactivity.

� job_instance_output

The job_instance_output program retrieves job standard list files. It is started when a JSC user runs the Browse Job Log operation. It starts up, retrieves the requested stdlist file, and then terminates.

� maestro_x_server

The maestro_x_server program is used to provide an interface to certain types of extended agents, such as the SAP R/3 extended agent (r3batch). It starts up when a command is run in JSC that requires execution of an agent method. It runs the X-agent method, returns the output, and then terminates. It only runs on workstations that host an r3batch extended agent.

2.5 Job log retrieval in an end-to-end environment

In this section, we cover the detailed steps of job log retrieval in an end-to-end environment using the JSC. There are different steps involved depending on which connector you are using to retrieve the job log and whether firewalls are involved. We cover all of these scenarios: using the Tivoli Workload Scheduler (distributed) connector (via the domain manager or first-level domain manager), using the Tivoli Workload Scheduler for z/OS (or OPC) connector, and with firewalls in the picture.

2.5.1 Job log retrieval via the Tivoli Workload Scheduler connector

As shown in Figure 2-29 on page 99, the steps behind the scenes in an end-to-end scheduling network when retrieving the job log via the domain manager (using the Tivoli Workload Scheduler (distributed) connector) are:

1. Operator requests joblog in Job Scheduling Console.

2. JSC connects to oserv running on the domain manager.

3. oserv spawns job_instance_output to fetch the job log.

4. job_instance_output communicates over TCP directly with the workstation where the joblog exists, bypassing the domain manager.

5. netman on that workstation spawns scribner and hands over the TCP connection with job_instance_output to the new scribner process.

6. scribner retrieves the joblog.


7. scribner sends the joblog to job_instance_output on the master.

8. job_instance_output relays the job log to oserv.

9. oserv sends the job log to JSC.

Figure 2-29 Job log retrieval in an end-to-end scheduling network via the domain manager

2.5.2 Job log retrieval via the OPC connector

As shown in Figure 2-30 on page 101, the following steps take place behind the scenes in an end-to-end scheduling network when retrieving the job log using the OPC connector.

The initial request for the job log proceeds as follows:

1. Operator requests joblog in Job Scheduling Console.

2. JSC connects to oserv running on the domain manager.

3. oserv tells the OPC connector program to request the joblog from the OPC system.


4. opc_connector relays the request to the JSC Server task on the mainframe.

5. The JSC Server requests the job log from the controller.

The next step depends on whether the job log has already been retrieved. If the job log has already been retrieved, skip to step 17. If the job log has not been retrieved yet, continue with step 6.

Assuming that the log has not been retrieved already:

6. The controller sends the request for the joblog to the sender subtask.

7. The controller sends a message to the operator indicating that the job log has been requested. This message is displayed in a dialog box in JSC. (The message is sent via this path: Controller → JSC Server → opc_connector → oserv → JSC).

8. The sender subtask sends the request to the output translator, via the output queue.

9. The output translator thread reads the request and spawns a job log retriever thread to handle it.

10.The job log retriever thread opens a TCP connection directly to the workstation where the job log exists, bypassing the domain manager.

11.netman on that workstation spawns scribner and hands over the TCP connection with the job log retriever to the new scribner process.

12.scribner retrieves the job log.

13.scribner sends the joblog to the job log retriever thread.

14.The job log retriever thread passes the job log to the input writer thread.

15.The input writer thread sends the job log to the receiver subtask, via the input queue.

16.The receiver subtask sends the job log to the controller.

When the operator requests the job log a second time, the first five steps are the same as in the initial request (above). This time around, because the job log has already been received by the controller:

17.The controller sends the job log to the JSC Server.

18.The JSC Server sends the information to the OPC connector program running on the domain manager.

19.The IBM Tivoli Workload Scheduler for z/OS connector relays the job log to oserv.

20.oserv relays the job log to JSC and JSC displays the job log in a new window.


Figure 2-30 Job log retrieval in an end-to-end network via the ITWS for z/OS- no FIREWALL=Y configured

2.5.3 Job log retrieval when firewalls are involved

When firewalls are involved (that is, FIREWALL=Y is configured in the CPUREC definition of the workstation from which the job log is retrieved), the steps for retrieving the job log in an end-to-end scheduling network are different. These steps are shown in Figure 2-31 on page 102. Note that the firewall is configured to allow only the following traffic: DMY → DMA and DMZ → DMB.

1. Operator requests the job log in JSC or in the mainframe ISPF panels.

2. A TCP connection is opened to the parent domain manager of the domain with the workstation where the job log exists.

3. netman on that workstation spawns router and hands over the TCP socket to the new router process.


4. router opens a TCP connection to netman on the parent domain manager of the workstation where the job log exists, because this DM is also behind the firewall.

5. netman on the DM spawns router and hands over the TCP socket with router to the new router process.

6. router opens a TCP connection to netman on the workstation where the job log exists.

7. netman on that workstation spawns scribner and hands over the TCP socket with router to the new scribner process.

8. scribner retrieves the job log.

9. scribner on FTA4 sends the job log to router on DMB.

10.router sends the job log to the router program running on DMZ.

Figure 2-31 Job log retrieval in an end-to-end network via the ITWS for z/OS- with FIREWALL=Y configured

It is important to note that in the previous scenario, you should not configure the domain manager DMB as FIREWALL=N in its CPUREC definition. If you do, you will not be able to retrieve the job log from FTA4, even though FTA4 is configured as FIREWALL=Y. This is shown in Figure 2-32.

In this case, when the TCP connection to the parent domain manager of the workstation where the job log exists (DMB) is blocked by the firewall, the connection request is not received by netman on DMB. The firewall does not allow direct connections from DMZ to FTA4. The only connections from DMZ that are permitted are those that go to DMB. Because DMB has FIREWALL=N, the connection did not go through DMB; it tried to go straight to FTA4.

Figure 2-32 Wrong configuration: connection blocked
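As a sketch (the workstation, node, and domain names are examples), a fault-tolerant workstation that must be reached through a firewall carries the FIREWALL attribute in its CPUREC definition:

CPUREC CPUNAME(FTA4)
       CPUNODE(fta4.example.com)
       CPUDOMAIN(DOMAINB)
       CPUTYPE(FTA)
       FIREWALL(Y)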

2.6 Tivoli Workload Scheduler, important files, and directory structure

Figure 2-33 on page 104 shows the most important files in the Tivoli Workload Scheduler 8.2 working directory in USS (WRKDIR).


Figure 2-33 The most important files in the Tivoli Workload Scheduler 8.2 working directory in USS

The descriptions of the files are:

SymX (where X is the name of the user that ran the CP extend or Symphony renew job): A temporary file created during a CP extend or Symphony renew. This file is copied to Symnew, which is then copied to Sinfonia and Symphony.

Symbad (Bad Symphony) Only created if CP extend or Symphony renew results in an invalid Symphony.

Symold (Old Symphony) From prior to most recent CP extend or Symphony renew.

Translator.wjl Translator event log for requested job logs.

Translator.chk Translator checkpoint file.


YYYYMMDD_E2EMERGE.log Translator log.

Figure 2-34 shows the most important files in the Tivoli Workload Scheduler 8.2 binary directory in USS (BINDIR). The options files in the config subdirectory are only reference copies of these files; they are not active configuration files.

Figure 2-34 A list of the most important files in the Tivoli Workload Scheduler 8.2 binary directory in USS

Figure 2-35 on page 106 shows the Tivoli Workload Scheduler 8.2 directory structure on the fault-tolerant agents. Note that the database files (such as jobs and calendars) are not used in the Tivoli Workload Scheduler 8.2 end-to-end scheduling environment.

Note: The Symnew, SymX, and Symbad files are temporary files and normally cannot be seen in USS work directory.


Figure 2-35 Tivoli Workload Scheduler 8.2 directory structure on the fault-tolerant agents

2.7 conman commands in the end-to-end environment

In Tivoli Workload Scheduler, you can use the conman command line interface to manage the distributed production. A subset of these commands can also be used in end-to-end scheduling. In general, command options that could affect the information contained in the Symphony file are not allowed. Disallowed conman command options include add and remove dependencies, submit and cancel jobs, and so forth.

Figure 2-36 on page 107 and Figure 2-37 on page 107 list the conman commands that are available on end-to-end fault-tolerant workstations in a Tivoli Workload Scheduler 8.2 end-to-end scheduling network. Note that in the Type field, M stands for domain managers, F for fault-tolerant agents and A stands for standard agents.
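To give an idea of what this looks like in practice, the following conman session is an illustrative sketch, not taken from the figures; the exact set of commands that is accepted depends on the workstation type, as listed in Figure 2-36 and Figure 2-37:

   conman "sc @!@"                  # show the status of all workstations in all domains
   conman "sj"                      # show the jobs known to this workstation's Symphony file
   conman "link @!@;noask"          # relink unlinked workstations without prompting
   conman "kill FTA1#SCHED1.JOB1"   # kill a running job instance (FTA1, SCHED1, and JOB1 are made-up names)

Commands such as adddep, deldep, submit, and cancel are not accepted on end-to-end workstations because they would change the Symphony file, which in end-to-end scheduling is produced by the Tivoli Workload Scheduler for z/OS plan programs.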


Note: The composer command line interface, which is used to manage database objects in a distributed Tivoli Workload Scheduler environment, is not used in end-to-end scheduling because in end-to-end scheduling, the databases are located on the Tivoli Workload Scheduler for z/OS master.


Figure 2-36 conman commands available in end-to-end environment

Figure 2-37 conman commands available in end-to-end environment


Chapter 3. Planning end-to-end scheduling with Tivoli Workload Scheduler 8.2

In this chapter, we provide details on how to plan for end-to-end scheduling with Tivoli Workload Scheduler for z/OS, Tivoli Workload Scheduler, and the Job Scheduling Console.

The chapter covers two areas:

1. Before the installation is performed

Here we describe what to consider before performing the installation and how to order the product. This includes the following sections:

– “Different ways to do end-to-end scheduling” on page 111

– “The rationale behind end-to-end scheduling” on page 112

– “Before you start the installation” on page 113

2. Planning for end-to-end scheduling

Here we describe relevant planning issues that should be considered and handled before the actual installation and customization of Tivoli Workload Scheduler for z/OS, Tivoli Workload Scheduler, and Job Scheduling Console is performed. This includes the following sections:

– “Planning end-to-end scheduling with Tivoli Workload Scheduler for z/OS” on page 116

– “Planning for end-to-end scheduling with Tivoli Workload Scheduler” on page 139

– “Planning for the Job Scheduling Console” on page 149

– “Planning for migration or upgrade from previous versions” on page 155

– “Planning for maintenance or upgrades” on page 156


3.1 Different ways to do end-to-end scheduling

The ability to connect mainframe and distributed platforms into an integrated scheduling network is not new. Several years ago, IBM offered two methods:

- By use of Tivoli OPC tracker agents

With tracker agents, Tivoli Workload Scheduler for z/OS can submit and monitor jobs on remote tracker agents. The tracker agent software had limited support for diverse operating systems. Also, tracker agents were not fault tolerant, so if the network went down, tracker agents could not continue to run.

Furthermore, tracker agents did not scale well: it simply was not possible to get a stable environment for large distributed environments with several hundred tracker agents.

- By use of Tivoli Workload Scheduler MVS extended agents

Using extended agents, Tivoli Workload Scheduler can submit and monitor mainframe jobs in (for example) OPC or JES. The extended agents had limited functionality and were not fault tolerant. This approach required a Tivoli Workload Scheduler master and was not ideal for large, established MVS workloads. Extended agents, though, can be a perfectly viable solution for a large Tivoli Workload Scheduler network that needs to run only a few jobs in a z/OS mainframe environment.

From Tivoli Workload Scheduler 8.1, it was possible to integrate Tivoli Workload Scheduler agents with Tivoli Workload Scheduler for z/OS, so Tivoli Workload Scheduler for z/OS was the master doing scheduling and tracking for jobs in the mainframe environment as well as in the distributed environment.

The end-to-end scheduling feature of Tivoli Workload Scheduler 8.1 was the first step toward a complete unified system.

The end-to-end solution has been optimized in Tivoli Workload Scheduler 8.2, where the integration between the two products, Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS, is even tighter.

Furthermore, some of the functions that were missing in the first Tivoli Workload Scheduler 8.1 solution have been added in the Version 8.2 end-to-end solution.


3.2 The rationale behind end-to-end scheduling

As described in Section 2.3.6, “Benefits of end-to-end scheduling” on page 86, you can gain several benefits by using Tivoli Workload Scheduler for z/OS end-to-end scheduling. To review:

- You can use fault-tolerant agents, so distributed job scheduling is less dependent on network connection problems and poor network performance.

- You can schedule workload on additional operating systems such as Linux and Windows 2000.

- You get seamless synchronization of work in mainframe and distributed environments.

Making dependencies between mainframe jobs and jobs in distributed environments is straightforward, using the same terminology and known interfaces.

- Tivoli Workload Scheduler for z/OS can use a multi-tier architecture with Tivoli Workload Scheduler domain managers.

- You get extended planning capabilities, such as long-term plans, trial plans, and extended plans, and these capabilities now also apply to the distributed Tivoli Workload Scheduler network.

Extended plans mean that the current plan can span more than 24 hours.

- The powerful run-cycle and calendar functions in Tivoli Workload Scheduler for z/OS can be used for distributed Tivoli Workload Scheduler jobs.

Besides these benefits, using Tivoli Workload Scheduler for z/OS end-to-end scheduling also makes it possible to:

- Reuse or reinforce the procedures and processes that are established for the Tivoli Workload Scheduler for z/OS mainframe environment.

Operators, planners, and administrators who are trained and experienced in managing Tivoli Workload Scheduler for z/OS workload can reuse their skills and knowledge for the distributed jobs managed by Tivoli Workload Scheduler for z/OS end-to-end scheduling.

- Extend the disciplines established to manage and operate workload scheduling in mainframe environments to the distributed environment.

- Extend contingency procedures established for the mainframe environment to the distributed environment.

Basically, when we look at end-to-end scheduling in this book, we consider scheduling in the enterprise (mainframe and distributed) where the Tivoli Workload Scheduler for z/OS engine is the master.


3.3 Before you start the installation

The short version of this story is: “Get the right people on board.”

End-to-end scheduling with Tivoli Workload Scheduler is not complicated to implement, but it is important to understand that end-to-end scheduling can involve many different platforms and operating systems, will use IP communication, can work across firewalls, and uses SSL communication.

As described earlier in this book, end-to-end scheduling involves two products: Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS. These products must be installed and configured to work together for successful end-to-end scheduling. Tivoli Workload Scheduler for z/OS is installed in the z/OS mainframe environment, and Tivoli Workload Scheduler is installed on the distributed platforms where job scheduling is going to be performed.

We suggest that you establish an end-to-end scheduling team or project group that includes people who are skilled in the different platforms and operating systems. Ensure that you have skilled people who know how IP communication, firewalls, and SSL work in the different environments and can configure these components to work in them.

The team will be responsible for doing the planning, installation, and operation of the end-to-end scheduling environment, must be able to cooperate across department boundaries, and must understand the entire scheduling environment, both mainframe and distributed.

Tivoli Workload Scheduler for z/OS administrators should be familiar with the domain architecture and the meaning of fault tolerant in order to understand that, for example, the script is not necessarily located in the job repository database. This is essential when it comes to reflecting the end-to-end network topology in Tivoli Workload Scheduler for z/OS.

On the other hand, people who are in charge of Tivoli Workload Scheduler need to know the Tivoli Workload Scheduler for z/OS architecture to understand the new planning mechanism and Symphony file creation.

Another important thing to plan for is education or skills transfer to planners and operators who will have the daily responsibilities of end-to-end scheduling. If your planners and operators are knowledgeable, they will be able to work more independently with the products and you will realize better quality.

We recommend that all involved people (mainframe and distributed scheduling) become familiar with both scheduling environments as described throughout this book.


Because end-to-end scheduling can involve different platforms and operating systems with different interfaces (TSO/ISPF on the mainframe, command prompt on UNIX, and so forth), we also suggest planning to deploy the Job Scheduling Console. The reason is that the JSC provides a unified and platform-independent interface to job scheduling, and users do not need detailed skills to handle or use interfaces that depend on a particular operating system.

3.3.1 How to order the Tivoli Workload Scheduler software

The Tivoli Workload Scheduler solution consists of three products:

- IBM Tivoli Workload Scheduler for z/OS (formerly called Tivoli Operations Planning and Control, or OPC): Focused on mainframe-based scheduling.

- Tivoli Workload Scheduler (formerly called Maestro): Focused on open systems-based scheduling; can be used with the mainframe-based products for a comprehensive solution across both distributed and mainframe environments.

- Tivoli Workload Scheduler for Applications: Enables direct, easy integration between Tivoli Workload Scheduler and enterprise applications such as Oracle E-business Suite, PeopleSoft, and SAP R/3.

Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS can be ordered independently or together in one program suite. The JSC graphical user interface is delivered together with Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler. This is also the case for the connector software that makes it possible for the JSC to communicate with either Tivoli Workload Scheduler for z/OS or Tivoli Workload Scheduler.

Table 3-1 shows each product and its included components.

Table 3-1 Product and components

Components                                                 ITWS for z/OS 8.2   TWS 8.2   TWS 8.2 for Applications
z/OS engine (OPC Controller and Tracker)                          X
Tracker agent enabler                                             X
End-to-end enabler                                                X
Tivoli Workload Scheduler distributed (Maestro)                                   X
Tivoli Workload Scheduler Connector                                               X
IBM Tivoli Workload Scheduler for z/OS Connector                  X
Job Scheduling Console                                            X               X
IBM Tivoli Workload Scheduler for Applications for z/OS
(Tivoli Workload Scheduler extended agent for z/OS)                                              X

(Column headings abbreviated: ITWS for z/OS 8.2 = IBM Tivoli Workload Scheduler for z/OS 8.2; TWS 8.2 = Tivoli Workload Scheduler 8.2; TWS 8.2 for Applications = Tivoli Workload Scheduler 8.2 for Applications.)


Note that the end-to-end enabler component (FMID JWSZ203) is used to populate the base binary directory in an HFS during System Modification Program/Extended (SMP/E) installation.

The tracker agent enabler component (FMID JWSZ2C0) makes it possible for the Tivoli Workload Scheduler for z/OS controller to communicate with old Tivoli OPC distributed tracker agents.

To be able to use the end-to-end scheduling solution you should order both products: IBM Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler. In the following section, we list the ordering details.

Contact your IBM representative if you have any problems ordering the products or are missing some of the delivery or components.

Software ordering details

Table 3-2 on page 116 shows ordering details for Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler.


Attention: The Tivoli OPC distributed tracker agents went out of support October 31, 2003.



Table 3-2 Ordering details

IBM Tivoli Workload Scheduler for z/OS 8.2 (program number 5697-WSZ):
  Components: z/OS Engine (Yes, optional), z/OS Agent (Yes, optional), End-to-end Enabler (Yes, optional), JSC (Yes).
  Delivery: Native tape, Service Pack, or CBPDO.
  Comments: The 3 z/OS components can be licensed and delivered individually.

IBM Tivoli Workload Scheduler for z/OS Host Edition (program number 5698-WSH):
  Components: z/OS Engine (Yes), z/OS Agent (Yes), End-to-end Enabler (Yes), JSC (Yes).
  Delivery: ServicePac® or CBPDO.
  Comments: All 3 z/OS components are included when the customer buys the product.

Tivoli Workload Scheduler 8.2 (program number 5698-A17):
  Components: Distributed FTA (Yes), JSC (Yes).
  Delivery: CD-ROM for all distributed platforms.

3.3.2 Where to find more information for planning

Besides this redbook, you can find more information in IBM Tivoli Workload Scheduling Suite General Information Version 8.2, SC32-1256. This manual is a good place to start to learn more about Tivoli Workload Scheduler, Tivoli Workload Scheduler for z/OS, the JSC, and end-to-end scheduling.

3.4 Planning end-to-end scheduling with Tivoli Workload Scheduler for z/OS

Before installing the Tivoli Workload Scheduler for z/OS and activating the end-to-end scheduling feature, there are several areas to consider and plan for. These areas are described in the following sections.



3.4.1 Tivoli Workload Scheduler for z/OS documentation

Tivoli Workload Scheduler for z/OS documentation is not shipped in hardcopy form with IBM Tivoli Workload Scheduler for z/OS 8.2.

The books are available in PDF and IBM softcopy format and delivered on a CD-ROM with the Tivoli Workload Scheduler for z/OS product. The CD-ROM has part number SK2T-6951 and can also be ordered separately.

Several of the Tivoli Workload Scheduler for z/OS books have been updated or revised starting in April 2004. This means that the books that are delivered with the base product are outdated, and we strongly suggest that you confirm that you have the newest versions of the books before starting the installation. This is true even for Tivoli Workload Scheduler for z/OS 8.2.

We recommend that you have access to, and possibly print, the newest versions of the Tivoli Workload Scheduler for z/OS publications before starting the installation.

Tivoli OPC tracker agents

Although the distributed Tivoli OPC tracker agents are not supported and cannot be ordered any more, Tivoli Workload Scheduler for z/OS 8.2 can still communicate with these tracker agents, because the agent enabler software (FMID JWSZ2C0) is delivered with Version 8.2.

However, the Version 8.2 manuals do not describe the related TCP or APPC ROUTOPTS initialization statement parameters. If you are going to use Tivoli OPC tracker agents with Version 8.2, then save the related Tivoli OPC publications, so you can use them for reference when necessary.

3.4.2 Service updates (PSP bucket, APARs, and PTFs)

Before starting the installation, be sure to check the service level of the Tivoli Workload Scheduler for z/OS distribution that you have received from IBM, and make sure that you obtain all available service so that it can be installed together with Tivoli Workload Scheduler for z/OS.

Note: The publications are available for download in PDF format at:

http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html

Look for books marked with “Revised April 2004,” as they have been updated with changes introduced by service (APARs and PTFs) for Tivoli Workload Scheduler for z/OS produced after the base version of the product was released in June 2003.


Because the period from the time that installation of Tivoli Workload Scheduler for z/OS is started until it is activated in your production environment can be several months, we suggest that the installed Tivoli Workload Scheduler for z/OS be updated with all service that is available at the installation time.

Preventive service planning (PSP)

The Program Directory that is provided with your Tivoli Workload Scheduler for z/OS distribution tape is an important document that may include technical information that is more recent than the information provided in this section. It also describes the program temporary fix (PTF) level of the Tivoli Workload Scheduler for z/OS licensed program when it was shipped from IBM, contains instructions for unloading the software, and gives information about additional maintenance for the level of the distribution tape you received for the z/OS installation.

Before you start installing Tivoli Workload Scheduler for z/OS, check the preventive service planning bucket for recommendations that may have been added by the service organizations after your Program Directory was produced. The PSP includes a recommended service section that includes high-impact or pervasive (HIPER) APARs. Ensure that the corresponding PTFs are installed before you start to customize a Tivoli Workload Scheduler for z/OS subsystem.

Table 3-3 gives the PSP information for Tivoli Workload Scheduler for z/OS to be used when ordering the PSP bucket.

Table 3-3 PSP upgrade and subset ID information

Upgrade     Subset    Description
TWSZOS820   HWSZ200   Agent for z/OS
            JWSZ202   Engine (Controller)
            JWSZ2A4   Engine English NLS
            JWSZ201   TCP/IP communication
            JWSZ203   End-to-end enabler
            JWSZ2C0   Agent enabler

Important: If you are running a previous version of IBM Tivoli Workload Scheduler for z/OS or OPC on a system where the JES2 EXIT2 was assembled using the Tivoli Workload Scheduler for z/OS 8.2 macros, apply the following PTFs to avoid job tracking problems due to missing A1 and A3P records:

- Tivoli OPC 2.3.0: Apply UQ66036 and UQ68474.
- IBM Tivoli Workload Scheduler for z/OS 8.1: Apply UQ67877.


Important service for Tivoli Workload Scheduler for z/OS

Besides the APARs and PTFs that are listed in the PSP bucket, we suggest that you plan to apply all available service for Tivoli Workload Scheduler for z/OS in the installation phase.

At the time of writing this book, we found several important APARs for Tivoli Workload Scheduler for z/OS end-to-end scheduling and have listed some of them in Table 3-4. The table also shows whether the corresponding PTFs were available when this book was written (the number in the PTF number column).

Table 3-4 Important service

Note: The APAR list in Table 3-4 is not complete, but is used to give some examples of important service to apply during the installation. As mentioned before, we strongly suggest that you apply all available service during your installation of Tivoli Workload Scheduler for z/OS.

APAR PQ76474 (PTFs UQ81495, UQ81498): Checks the number of dependencies for an FTW job and adds two new messages, EQQX508E and EQQ3127E, to indicate that an FTW job cannot be added to the AD or CP (Symphony file) because it has more than 40 dependencies.

APAR PQ77014 (PTFs UQ81476, UQ81477): During the daily planning or Symphony renew, the batch job ends with RC=0 even though warning messages have been issued for the Symphony file.

APAR PQ77535 (documentation APAR, PTF not available): Important documentation with additional information for creating and maintaining the HFS files needed for Tivoli Workload Scheduler end-to-end processing.

APAR PQ77970 (PTFs UQ82583, UQ82584, UQ82585, UQ82587, UQ82579, UQ82601, UQ82602): Makes it possible to customize the job name in the Symphony file. Before the fix, the job name was always generated using the operation number and occurrence name; now it can be customized. The EQQPDFXJ member in the SEQQMISC library holds a detailed description (see Chapter 4, “Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling” on page 157 for more information).

APAR PQ78043 (PTF UQ81567): 64M is recommended as the minimum region size for an E2E server; however, the sample server JCL (member EQQSER in SEQQSAMP) still has REGION=6M. This should be changed to REGION=64M.

APAR PQ78097 (documentation APAR, PTF not available): Better documentation of the WSSTAT MANAGES keyword.

APAR PQ78356 (PTF UQ82697): When a job stream is added via MCP to the Symphony file, it is always added with the GMT time; therefore, in this case the local time zone set for the FTA is completely ignored.

APAR PQ78891 (PTFs UQ82790, UQ82791, UQ82784, UQ82793, UQ82794): Introduces new messages in the server message log when USS processes end abnormally or unexpectedly; important for monitoring of the server and USS processes. Also updates server-related messages in the controller message log to be more precise.

APAR PQ79126 (documentation APAR, PTF not available): In the documentation, any reference to zFS files is missing. The Tivoli Workload Scheduler end-to-end server fully supports and can access UNIX System Services (USS) in a Hierarchical File System (HFS) or in a zSeries® File System (zFS) cluster.

APAR PQ79875 (documentation APAR, PTF not available): If you have any fault-tolerant workstations on supported Windows platforms and you want to run jobs on these workstations, you must create a member containing all users and passwords for the Windows users who need to schedule jobs to run on Windows workstations. The Windows users are described using USRREC initialization statements.

APAR PQ80229 (documentation APAR, PTF not available): In the IBM Tivoli Workload Scheduler for z/OS Installation Guide, the description of the end-to-end input and output events data sets (EQQTWSIN and EQQTWSOU) is misleading because it states that the LRECL for these files can be anywhere from 120 to 32000 bytes. In reality, the LRECL must be 120. Defining a larger LRECL causes a waste of disk space, which can lead to problems if the EQQTWSIN and EQQTWSOU files fill up completely. Also see the text in APAR PQ77970.

APAR PQ80341 (PTFs UQ88867, UQ88868, UQ88869): End-to-end: missing synchronization process between the event manager and receiver tasks at controller startup. Several new messages are introduced by this APAR (documented in the EQQPDFEM member in SEQQMISC).

APAR PQ81405 (PTFs UQ82765, UQ82766): Checks the number of dependencies for an FTW job and adds a new message, EQQG016E, to indicate that an FTW job cannot be added to the CP because it has more than 40 dependencies.

APAR PQ84233 (PTFs UQ87341, UQ87342, UQ87343, UQ87344, UQ87345, UQ87377): Implements support for the Tivoli Workload Scheduler for z/OS commands NP (NOP), UN (UN-NOP), and EX (Execute), and for the “submit” automatic option, for operations defined on fault-tolerant workstations. Also introduces a new TOPOLOGY NOPTIMEDEPENDENCY(YES/NO) parameter.

APAR PQ87120 (PTF UQ89138): Porting of Tivoli Workload Scheduler 8.2 FixPack 04 to the end-to-end feature on z/OS. With this APAR the Tivoli Workload Scheduler for z/OS 8.2 end-to-end code has been aligned with the Tivoli Workload Scheduler distributed code FixPack 04 level. This APAR also introduces the Backup Domain Fault Tolerant feature in the end-to-end environment.

APAR PQ87110 (PTFs UQ90485, UQ90488): The Tivoli Workload Scheduler end-to-end server is not able to get a mutex lock if the mount point of a shared HFS is moved without stopping the server. The APAR also contains a very important documentation update that describes how to configure the end-to-end server work directory correctly in a sysplex environment with hot standby controllers.

Note: To learn about updates to the Tivoli Workload Scheduler for z/OS books and the APARs and PTFs that pre-date April 2004, consult the “April 2004 Revised” versions of the books, as mentioned in 3.4.1, “Tivoli Workload Scheduler for z/OS documentation” on page 117.


Special documentation updates introduced by service

Some APARs were fixed on Tivoli Workload Scheduler for z/OS 8.1 while the general availability code for Tivoli Workload Scheduler for z/OS 8.2 was frozen for shipment. All of these fixes or PTFs are sysrouted through the level-set APAR PQ74854 (also described as a hiper cumulative APAR).

This cumulative APAR is meant to align the Version 8.2 code with the maintenance level that was reached during the time the GA code was frozen.

With APAR PQ74854, the documentation has been updated and is available in a PDF file. To access the changes described in this PDF file:

- Apply the PTF for APAR PQ74854.

- Transfer the EQQPDF82 member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.

- Read the document using Adobe (Acrobat) Reader.
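The transfer step can be done, for example, with a standard FTP client in binary mode. The following is a minimal sketch; the host name mvs1.yourcompany.com and the data set name TWS.INST.SEQQMISC are hypothetical, so substitute the names used in your installation:

   ftp mvs1.yourcompany.com
   ftp> binary
   ftp> get 'TWS.INST.SEQQMISC(EQQPDF82)' eqqpdf82.pdf
   ftp> quit

The same procedure applies to the other EQQPDFxx members described in the rest of this section.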

APAR PQ77970 (see Table 3-4 on page 119) makes it possible to customize how the job name in the Symphony file is generated. The PTF for APAR PQ77970 installs a member, EQQPDFXJ, in the SEQQMISC library. This member holds a detailed description of how the job name in the Symphony file can be customized and how to specify the related parameters. To read the documentation in the EQQPDFXJ member:

- Apply the PTF for APAR PQ77970.

- Transfer the EQQPDFXJ member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.

- Read the document using Adobe Reader.

APAR PQ84233 (see Table 3-4 on page 119) implements support for Tivoli Workload Scheduler for z/OS commands for fault-tolerant agents and introduces a new TOPOLOGY NOPTIMEDEPENDENCY(Yes/No) parameter. The PTF for APAR PQ84233 installs a member, EQQPDFNP, in the SEQQMISC library. This member holds a detailed description of the supported commands and the NOPTIMEDEPENDENCY parameter. To read the documentation in the EQQPDFNP member:

- Apply the PTF for APAR PQ84233.

- Transfer the EQQPDFNP member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.

- Read the document using Adobe Reader.


APAR PQ80341 (see Table 3-4 on page 119) improves the synchronization process between the controller event manager and receiver tasks. The APAR also introduces several new or updated messages. The PTF for APAR PQ80341 installs a member, EQQPDFEM, in the SEQQMISC library. This member holds a detailed description of the new or updated messages related to the improved synchronization process. To read the documentation in the EQQPDFEM member:

- Apply the PTF for APAR PQ80341.

- Transfer the EQQPDFEM member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.

- Read the document using Adobe Reader.

APAR PQ87110 (see Table 3-4 on page 119) contains important documentation updates with suggestions on how to define the end-to-end server work directory in a SYSPLEX shared HFS environment and a procedure to be followed before starting a scheduled shutdown for a system in the sysplex.

The PTF for APAR PQ87110 installs a member, EQQPDFSY, in the SEQQMISC library. This member holds the documentation updates. To read the documentation in the EQQPDFSY member:

- Apply the PTF for APAR PQ87110.

- Transfer the EQQPDFSY member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.

- Read the document using Adobe Reader.

Note: The documentation updates that are described in the EQQPDF82, EQQPDFXJ, and EQQPDFNP members in SEQQMISC are in the “April 2004 Revised” versions of the Tivoli Workload Scheduler for z/OS books, mentioned in 3.4.1, “Tivoli Workload Scheduler for z/OS documentation” on page 117.

3.4.3 Tivoli Workload Scheduler for z/OS started tasks for end-to-end scheduling

As described in the architecture chapter, end-to-end scheduling involves at least two started tasks: the Tivoli Workload Scheduler for z/OS controller and the Tivoli Workload Scheduler for z/OS server.

The server started task will do all communication with the distributed fault-tolerant agents and will handle updates (for example, to the Symphony file). The server task must always run on the same z/OS system as the active controller task.

In Tivoli Workload Scheduler for z/OS 8.2, it is possible to configure one server started task that can handle end-to-end scheduling, communication with JSC users, and APPC communication.

Even though it is possible to configure one server started task, we strongly suggest using a dedicated server started task for the end-to-end scheduling.

Using dedicated started tasks with dedicated responsibilities makes it possible, for example, to restart the JSC server started task without any impact on the scheduling in the end-to-end server started task.

Although it is possible to run end-to-end scheduling with the Tivoli Workload Scheduler for z/OS ISPF interface, we suggest that you plan for use of the Job Scheduling Console (JSC) graphical user interface. Users with background in the distributed world will find the JSC much easier to use than learning a new interface such as TSO/ISPF to manage their daily work. Therefore we suggest planning for implementation of a server started task that can handle the communication with the JSC Connector (JSC users).

3.4.4 Hierarchical File System (HFS) cluster

Tivoli Workload Scheduler code has been ported into UNIX System Services (USS) on z/OS. When planning for the end-to-end scheduling with Tivoli Workload Scheduler for z/OS, keep in mind that the server starts multiple tasks and processes using the USS in z/OS. The end-to-end server accesses the code delivered from IBM and creates several work files in Hierarchical File System clusters.

Because of this, the z/OS USS function must be active in the z/OS environment before you can install and use the end-to-end scheduling feature in Tivoli Workload Scheduler for z/OS.

Terminology note: An HFS data set is a z/OS data set that contains a POSIX-compliant hierarchical file system, which is a collection of files and directories organized in a hierarchical structure that can be accessed using the z/OS UNIX system services (USS).


The Tivoli Workload Scheduler code is installed with SMP/E in an HFS cluster in USS. It can be installed in an existing HFS cluster or in a dedicated HFS cluster, depending on how the z/OS USS is configured.

Besides the installation binaries delivered from IBM, the Tivoli Workload Scheduler for z/OS server also needs several work files in a USS HFS cluster. We suggest that you use a dedicated HFS cluster for the server work files. If you are planning to install several Tivoli Workload Scheduler for z/OS end-to-end scheduling environments, you should allocate one USS HFS cluster for work files per end-to-end scheduling environment.

Furthermore, if the z/OS environment is configured as a sysplex, where the Tivoli Workload Scheduler for z/OS server can be active on different z/OS systems within the sysplex, you should make sure that the USS HFS clusters with Tivoli Workload Scheduler for z/OS binaries and workfiles can be accessed from all of the sysplex’s systems. Starting from OS/390 Version 2 Release 9, it is possible to mount USS HFS clusters either in read-only mode or in read/write mode on all systems in a sysplex.

The USS HFS cluster with the Tivoli Workload Scheduler for z/OS binaries should then be mounted in read mode on all systems and the USS HFS cluster with the Tivoli Workload Scheduler for z/OS work files should be mounted in read/write mode on all systems in the sysplex.
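For illustration, this is roughly what the TSO MOUNT commands could look like; the data set names and mount points are the ones used in Figure 3-1 on page 126 and should be replaced with your own naming standards:

   MOUNT FILESYSTEM('OMVS.PROD.TWS820.HFS') MOUNTPOINT('/TWS/PROD/bin820') TYPE(HFS) MODE(READ)
   MOUNT FILESYSTEM('OMVS.TWSCPROD.HFS')    MOUNTPOINT('/TWS/TWSCPROD')    TYPE(HFS) MODE(RDWR)

In practice you would normally put equivalent MOUNT statements in the BPXPRMxx parmlib member so that the file systems are mounted automatically at IPL.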

Figure 3-1 on page 126 illustrates the use of dedicated HFS clusters for two Tivoli Workload Scheduler for z/OS environments: test and production.


Figure 3-1 Dedicated HFS clusters for Tivoli Workload Scheduler for z/OS server test and production environment

We recommend that you create a separate HFS cluster for the working directory, mounted in read/write mode. This is because the working directory is application specific and contains application-related data. It also makes your backup easier. The size of the cluster depends on the size of the Symphony file and how long you want to keep the stdlist files. We recommend starting with at least 2 GB of space.

We also recommend that you plan to have separate HFS clusters for the binaries if you have more than one Tivoli Workload Scheduler end-to-end scheduling environment, as shown in Figure 3-1. This makes it possible to apply and test maintenance and test it in the test environment before it is populated to the production environment.

Note: IBM Tivoli Workload Scheduler for z/OS 8.2 supports zFS (z/OS File System) clusters as well as HFS clusters (APAR PQ79126). Because zFS clusters offer significant performance improvements over HFS, we suggest considering the use of zFS clusters instead of HFS clusters. For this redbook, we used HFS clusters in our implementation.

In the configuration shown in Figure 3-1, the production environment for the server uses WRKDIR('/TWS/TWSCPROD'), which is HFS data set OMVS.TWSCPROD.HFS mounted read/write on all systems at mount point /TWS/TWSCPROD, and BINDIR('/TWS/PROD/bin820'), which is HFS data set OMVS.PROD.TWS820.HFS mounted read-only on all systems at mount point /TWS/PROD/bin820. The test environment uses WRKDIR('/TWS/TWSCTEST'), which is HFS data set OMVS.TWSCTEST.HFS mounted read/write at /TWS/TWSCTEST, and BINDIR('/TWS/TEST/bin820'), which is HFS data set OMVS.TEST.TWS820.HFS mounted read-only at /TWS/TEST/bin820.


As mentioned earlier, OS/390 2.9 and higher support use of shared HFS clusters. Some directories (usually /var, /dev, /etc, and /tmp) are system specific, meaning that those paths are logical links pointing to different directories. When you specify the work directory, make sure that it is not on a system-specific filesystem. Or, if this is the case, make sure that the same directories on the filesystem of the other systems are pointing to the same directory. For example, you can use /u/TWS; that is not system-specific. Or you can use /var/TWS on system SYS1 and create a symbolic link /SYS2/var/TWS to /SYS1/var/TWS so that /var/TWS will point to the same directory on both SYS1 and SYS2.
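As an illustration of the symbolic-link approach just described (SYS1 and SYS2 are the hypothetical system names from the example), the commands issued from a USS shell could look like this:

   # on the owning system, create the real working directory
   mkdir -p /SYS1/var/TWS
   # make the same path on SYS2 resolve to that directory
   ln -s /SYS1/var/TWS /SYS2/var/TWS

After this, /var/TWS points to the same physical directory on both SYS1 and SYS2, so the end-to-end server finds its work files no matter which system it is started on.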

If you are using OS/390 versions earlier than Version 2.9 in a sysplex, the HFS cluster with the work files and binaries should be mounted manually on the system where the server is active. If the server is going to be moved to another system in the sysplex, the HFS clusters should be unmounted from the first system and mounted on the system where the server is going to be active. On the new system, the HFS cluster with work files should be mounted in read/write mode, and the HFS cluster with the binaries should be mounted in read mode. The filesystem can be mounted in read/write mode on only one system at a time.

Migrating from IBM Tivoli Workload Scheduler for z/OS 8.1

If you are migrating from Tivoli Workload Scheduler for z/OS 8.1 to Tivoli Workload Scheduler for z/OS 8.2 and you are using end-to-end scheduling in the 8.1 environment, we suggest that you allocate new dedicated USS HFS clusters for the Tivoli Workload Scheduler for z/OS 8.2 work files and installation binaries.

3.4.5 Data sets related to end-to-end scheduling

Tivoli Workload Scheduler for z/OS has several data sets that are dedicated for end-to-end scheduling:

- End-to-end input and output data sets (EQQTWSIN and EQQTWSOU). These data sets are used to send events from the controller to the server and from the server to the controller. They must be defined in the controller and end-to-end server started task procedures.

- Current plan backup copy data set to create the Symphony file (EQQSCPDS). This is a VSAM data set used as a CP backup copy for the production of the Symphony file in USS. It must be defined in the controller started task procedure and in the current plan extend job, the current plan replan job, and the Symphony renew job.

Note: Please also check the documentation updates in APAR PQ87110 (see Table 3-4 on page 119) if you are planning to use a shared HFS for the end-to-end server work directory. The PTFs for this APAR contain important documentation updates with suggestions on how to define the end-to-end server work directory in a sysplex shared HFS environment and a procedure to be followed before starting a scheduled shutdown for a system in the sysplex.

- End-to-end script library (EQQSCLIB). This is a partitioned data set that holds commands or job definitions for fault-tolerant agent jobs. It must be defined in the controller started task procedure and in the current plan extend job, the current plan replan job, and the Symphony renew job.

- End-to-end centralized script data set (EQQTWSCS). This is a partitioned data set that holds scripts for fault-tolerant agent jobs while they are sent to the agent. It must be defined in the controller and end-to-end server started tasks.

Plan for the allocation of these data sets, and remember to specify them in the controller and end-to-end server started task procedures, as well as in the current plan extend job, the current plan replan job, and the Symphony renew job, as required.
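As a rough sketch (not the complete procedure), the end-to-end related DD statements in a dedicated end-to-end server started task could look like the following; the data set names are hypothetical, EQQSERVR is the server program, and the EQQSER member in SEQQSAMP contains the complete sample JCL:

   //TWSCE2E  EXEC PGM=EQQSERVR,REGION=64M,TIME=1440
   //STEPLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQLMD0
   //EQQMLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQMSG0
   //EQQMLOG  DD SYSOUT=*
   //EQQPARM  DD DISP=SHR,DSN=TWS.INST.PARM(TWSCE2E)
   //EQQTWSIN DD DISP=SHR,DSN=TWS.INST.TWSIN      END-TO-END INPUT EVENTS
   //EQQTWSOU DD DISP=SHR,DSN=TWS.INST.TWSOU      END-TO-END OUTPUT EVENTS
   //EQQTWSCS DD DISP=SHR,DSN=TWS.INST.TWSCS      CENTRALIZED SCRIPTS

The controller procedure also needs EQQTWSIN, EQQTWSOU, EQQTWSCS, EQQSCLIB, and EQQSCPDS, and the plan batch jobs need EQQSCLIB and EQQSCPDS, as described above.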

In the planning phase you should also consider whether your installation will use centralized scripts, non-centralized (local) scripts, or a combination of centralized and non-centralized scripts.

- Non-centralized (local) scripts

– In Tivoli Workload Scheduler for z/OS 8.2, it is possible to have job definitions in the end-to-end script library and have the script (the job) be executed on the fault-tolerant agent. This is referred to as a non-centralized script (a sample definition is sketched after this list).

– Using non-centralized scripts makes it possible for the fault-tolerant agent to run local jobs without any connection to the controller on the mainframe.

– On the other hand, if a non-centralized script needs to be updated, it must be updated locally on the agent.

– Locally placed scripts can be consolidated in a central repository placed on the mainframe or on a fault-tolerant agent; then, on a daily basis, changed or updated scripts can be distributed to the FTAs where they will be executed. By doing this, you can keep all scripts in a common repository. This facilitates easy modification of scripts, because you only have to change the scripts in one place. We recommend this option because it gives most of the benefits of using centralized scripts without sacrificing fault tolerance.

- Centralized scripts

– Another possibility in Tivoli Workload Scheduler for z/OS 8.2 is to have the scripts on the mainframe. The scripts will then be defined in the controller job library and, via the end-to-end server, the controller will send the script to the fault-tolerant agent when jobs are ready to run.

128 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2

Page 145: End to-end scheduling with ibm tivoli workload scheduler version 8.2 sg246624

– This makes it possible to centrally manage all scripts.

– However, it compromises the fault tolerance in the end-to-end scheduling network, because the controller must have a connection to the fault-tolerant agent to be able to send the script.

– The centralized script function makes migration from Tivoli OPC tracker agents with centralized scripts to end-to-end scheduling much simpler.

- Combination of non-centralized and centralized scripts

– The third possibility is to use a combination of non-centralized and centralized scripts.

– Here the decision can be made based on such factors as:

• Where a particular FTA is placed in the network

• How stable the network connection is to the FTA

• How fast the connection is to the FTA

• Special requirements for different departments to have dedicated access to their scripts on their local FTA

– For non-centralized scripts, it is still possible to have a centralized repository with the scripts and then, on a daily basis, to distribute changed or updated scripts to the FTAs with non-centralized scripts.
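The following is a minimal sketch of what a non-centralized job definition in the EQQSCLIB script library can look like; the member name, script path, and user name are illustrative only, and the complete SCRPTLIB syntax is described in the Tivoli Workload Scheduler for z/OS documentation:

   /* Member FTAJOB1 in the EQQSCLIB data set                       */
   JOBREC
     JOBSCR('/opt/apps/batch/nightly_backup.sh')  /* script on the FTA */
     JOBUSR(twsuser)                              /* user that runs it */

For a job that runs a single command instead of a script, JOBCMD can be used in place of JOBSCR; jobs that use centralized scripts are instead kept in the controller job library (EQQJBLIB).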

3.4.6 TCP/IP considerations for end-to-end server in sysplex

In Tivoli Workload Scheduler end-to-end scheduling, the TCP/IP protocol is used to communicate between the end-to-end server task and the domain managers at the first level.

The fault-tolerant architecture in the distributed network has the advantage that the individual FTAs in the distributed network can continue their own processing during a network failure. If there is no connection to the controller on the mainframe, the domain managers at the first level will buffer their events in a local file called tomaster.msg. This buffering continues until the link to the end-to-end server is re-established. If there is no connection between the domain managers at the first level and the controller on the mainframe side, dependencies between jobs on the mainframe and jobs in the distributed environment cannot be resolved. You cannot schedule these jobs before the connection is re-established.

If the connection is down when, for example, a new plan is created, this new plan (the new Symphony file) will not be distributed to the domain managers at the first level and further down in the distributed network.

Chapter 3. Planning end-to-end scheduling with Tivoli Workload Scheduler 8.2 129

Page 146: End to-end scheduling with ibm tivoli workload scheduler version 8.2 sg246624

In the planning phase, consider what can happen:

- When the z/OS system with the controller and server tasks fails

- When the controller or the server task fails

- When the z/OS system with the controller has to be stopped for a longer time (for example, due to maintenance)

The goal is to make the end-to-end server task and the controller task as fail-safe as possible and to make it possible to move these tasks from one system to another within a sysplex without any major disruption in the mainframe and distributed job scheduling.

As explained earlier, the end-to-end server is a started task that must be running on the same z/OS system as the controller. The end-to-end server handles all communication with the controller task and the domain managers at the first level in the distributed Tivoli Workload Scheduler distributed network.

One of the main reasons to configure the controller and server task in a sysplex environment is to make these tasks as fail-safe as possible. This means that the tasks can be moved from one system to another within the same sysplex without any stop in the batch scheduling. The controller and server tasks can be moved as part of planned maintenance or in case a system fails. Handling of this process can be automated and made seamless for the user by using the Tivoli Workload Scheduler for z/OS Hot Standby function.

The problem running end-to-end in a z/OS sysplex and trying to move the end-to-end server from one system to another is that the end-to-end server by default gets the IP address from the TCP/IP stack of the z/OS system where it is started. If the end-to-end server is moved to another z/OS system within the sysplex it normally gets another IP address (Figure 3-2).


Figure 3-2 Moving one system to another within a z/OS sysplex

When the end-to-end server starts, it looks in the topology member to find its host name or IP address and port number. In particular, the host name or IP address is:

- Used to identify the socket on which the server receives data from and sends data to the distributed agents (the domain managers at the first level)

- Stored in the Symphony file, where it is recognized by the distributed agents as the IP address (or host name) of the master domain manager (OPCMASTER)

If the host name is not defined or the default is used, the end-to-end server by default will use the host name that is returned by the operating system (that is, the host name returned by the active TCP/IP stack on the system).

The port number and host name will be inserted in the Symphony file when a current plan extend or replan batch job is submitted or a Symphony renew is initiated in the controller task. The Symphony file will then be distributed to the domain managers at the first level. The domain managers at the first level in turn use this information to link back to the server.

Figure 3-2 illustrates two states: (1) the active controller and server run on one z/OS system in the sysplex, and the server has a “system dependent” IP address; (2) the active engine is moved to another system in the z/OS sysplex, and the server gets a new “system dependent” IP address, which can cause problems for FTA connections because the IP address is stored in the Symphony file.


Figure 3-3 First-level domain managers connected to Tivoli Workload Scheduler for z/OS server in z/OS sysplex

In Figure 3-3, the z/OS sysplex systems wtsc63 (9.12.6.8), wtsc64 (9.12.6.9), and wtsc65 (9.12.6.10) make up the MASTERDM domain, and the first-level domain managers are london (9.3.4.63) for the UK domain U000, geneva (9.3.4.185) for the Europe domain E000, and stockholm (9.3.4.47) for the Nordic domain N000, each with underlying fault-tolerant agents.

If the z/OS controller fails on the wtsc64 system (see Figure 3-3), the standby controller either on wtsc63 or wtsc65 can take over all of the engine functions (run the controller and the end-to-end server tasks). Which controller takes over depends on how the standby controllers are configured.

The domain managers at the first level (london, geneva, and stockholm in Figure 3-3 on page 132) know wtsc64 as their master domain manager (from the Symphony file), so the link from the domain managers to the end-to-end server will fail, no matter which system (wtsc63 or wtsc65) the controller takes over on. One solution could be to send a new Symphony file (that is, renew the Symphony file) from the controller and server that have taken over to the domain managers at the first level.

Doing a renew of the Symphony file on the new controller and server recreates the Symphony file and adds the new z/OS host name or IP address (read from the topology definition or returned by the z/OS operating system) to the Symphony file. The domain managers then use this information to reconnect to the server on the new z/OS system.

Since renewing the Symphony file can be disruptive, especially in a heavily loaded scheduling environment, we explain three alternative strategies that can be used to solve the reconnection problem after the server and controller have been moved to another system in a sysplex.


For all three alternatives, the topology member is used to specify the host name and port number for the Tivoli Workload Scheduler for z/OS server task. The host name is copied to the Symphony file when the Symphony file is renewed or the Tivoli Workload Scheduler for z/OS current plan is extended or replanned. The distributed domain managers at the first level will use the host name read from the Symphony file to connect to the end-to-end server.

Because the first-level domain managers will try to link to the end-to-end server using the host name that is defined in the server hostname parameter, you must take the required action to successfully establish a reconnection. Make sure that the host name always resolves correctly to the IP address of the z/OS system with the active end-to-end server. This can be achieved in different ways.
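To make the discussion concrete, the host name and port number come from the TOPOLOGY statement in the server topology parameter member. A minimal sketch could look like the following, where the BINDIR and WRKDIR values match the HFS example earlier in this chapter and TWSCE2E and port 31182 are the illustrative host name and port used in this section:

   TOPOLOGY BINDIR('/TWS/PROD/bin820')   /* HFS directory with the TWS code              */
            WRKDIR('/TWS/TWSCPROD')      /* HFS working directory for the server         */
            HOSTNAME(TWSCE2E)            /* name the first-level DMs connect back to     */
            PORTNUMBER(31182)            /* port the end-to-end server listens on        */

This is not the complete statement; additional TOPOLOGY keywords exist, and the full syntax is described in the Tivoli Workload Scheduler for z/OS customization documentation.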

In the following three sections, we have described three different ways to handle the reconnection problem when the end-to-end server is moved from one system to another in the same sysplex.

Use of the host file on the domain managers at the first level

To be able to use the same host name after a failover situation (where the engine is moved to one of its backup engines) and to gain additional flexibility, we will use a host name that can always be resolved to the IP address of the z/OS system with the active end-to-end server. The resolution of the host name is done by the first-level domain managers, which use their local hosts files to get the IP address of the z/OS system with the end-to-end server.

In the end-to-end server topology we can define a host name with a given name (such as TWSCE2E). This host name will be associated with an IP address by the TCP/IP stack, for example in the USS /etc/hosts file, where the end-to-end server is active.

The different IP addresses of the systems where the engine can be active are defined in the host name file (/etc/hosts on UNIX) on the domain managers at the first level, as in Example 3-1.

Example 3-1 hosts file

9.12.6.8    wtsc63.itso.ibm.com
9.12.6.9    wtsc64.itso.ibm.com   TWSCE2E
9.12.6.10   wtsc65.itso.ibm.com

If the server is moved to the wtsc63 system, you only have to edit the hosts file on the domain managers at the first level, so TWSCE2E now points to the new system as in Example 3-2.


Example 3-2 hosts file

9.12.6.8    wtsc63.itso.ibm.com   TWSCE2E
9.12.6.9    wtsc64.itso.ibm.com
9.12.6.10   wtsc65.itso.ibm.com

This change takes effect dynamically (the next time the domain manager tries to reconnect to the server).

One major disadvantage with this solution is that the change must be carried out by editing a local file on the domain managers at the first level. A simple move of the tasks on mainframe will then involve changes on distributed systems as well. In our example in Figure 3-3 on page 132, the local host file should be edited on three domain managers at the first level (the london, geneva and stockholm servers).

Furthermore, localopts nm ipvalidate must be set to none on the agent, because the node name and IP address for the end-to-end server, which are stored for the OPCMASTER workstation (the workstation representing the end-to-end server) in the Symphony file on the agent, have changed. See the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273, for further information.
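For reference, the corresponding entry in the localopts file on the first-level domain managers is simply along these lines:

   nm ipvalidate = none

This disables the IP validation check that would otherwise reject the connection when the stored address of OPCMASTER no longer matches the actual address of the end-to-end server.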

Use of stack affinity on the z/OS system

Another possibility is to use stack affinity to ensure that the end-to-end server host name resolves to the same IP address, even if the end-to-end server is moved to another z/OS system in the sysplex.

With stack affinity, the end-to-end server host name will always be resolved using the same TCP/IP stack (the same TCP/IP started task) and hence always get the same IP address, regardless of which z/OS system the end-to-end server is started on.

Stack affinity provides the ability to define which specific TCP/IP instance the application should bind to. If you are running in a multiple-stack environment in which each system has its own TCP/IP stack, the end-to-end server can be forced to use a specific stack, even if it runs on another system.

A specific stack or stack affinity is defined in the Language Environment® variable _BPXK_SETIBMOPT_TRANSPORT. To define environment variables for the end-to-end server, the STDENV DD-name should be added to the end-to-end server started task procedure. The STDENV DD-name can point to a sequential data set or a member in a partitioned data set (for example, a member of the end-to-end server PARMLIB) in which it is possible to define environment variables to initialize Language Environment.

In this data set or member, environment variables can be specified as VARNAM=value. See IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264, for further information.

For example:

//STDENV DD DISP=SHR,DSN=MY.FILE.PARM(STDENV)

This member can be used to set the stack affinity using the following environment variable.

_BPXK_SETIBMOPT_TRANSPORT=xxxxx

(xxxxx indicates the TCP/IP stack the end-to-end server should bind to.)
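For example, assuming a TCP/IP started task named TCPIPA (an illustrative name; use your installation's stack name), the STDENV member would contain a single line:

_BPXK_SETIBMOPT_TRANSPORT=TCPIPA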

One disadvantage of stack affinity is that a particular stack on a specific z/OS system is used. If this stack (the TCP/IP started task) or the z/OS system that runs it has to be stopped or requires an IPL, the end-to-end server, even though it can run on another system, will not be able to establish connections to the domain managers at the first level. If this happens, manual intervention is required.

For more information, see the z/OS V1R2 Communications Server: IP Configuration Guide, SC31-8775.

Use of Dynamic Virtual IP Addressing (DVIPA)

DVIPA, which was introduced with OS/390 V2R8, makes it possible to assign a specific virtual IP address to a specific application. The configuration can be set up so that this virtual IP address is independent of any specific TCP/IP stack within the sysplex and depends only on the started application; that is, the IP address will be the same for the application no matter which system in the sysplex the application is started on.

Even if your application has to be moved to another system because of failure or maintenance, the application can be reached under the same virtual IP address. Use of DVIPA is the most flexible way to be prepared for application or system failure.

We recommend that you plan for use of DVIPA for the following Tivoli Workload Scheduler for z/OS components:

� Server started task used for end-to-end scheduling � Server started task used for the JSC communication


The Tivoli Workload Scheduler for z/OS end-to-end (and JSC) server has been improved in Version 8.2 to make better use of DVIPA than was possible in Version 8.1.

In IBM Tivoli Workload Scheduler for z/OS 8.1, a range of IP addresses to be used by DVIPA (VIPARANGE) had to be defined, as did specific PORT and IP addresses for the end-to-end server (Example 3-3).

Example 3-3 Some required DVIPA definitions for Tivoli Workload Scheduler for z/OS 8.1

VIPADYNAMIC
viparange define 255.255.255.248 9.12.6.104
ENDVIPADYNAMIC
PORT
5000  TCP TWSJSC  BIND 9.12.6.106
31182 TCP TWSCE2E BIND 9.12.6.107

In this example, DVIPA automatically assigns IP address 9.12.6.107 to started task TWSCE2E (our end-to-end server task), which is configured to use port 31182.

DVIPA is described in great detail in the z/OS V1R2 Communications Server: IP Configuration Guide, SC31-8775. In addition, the redbook TCP/IP in a Sysplex, SG24-5235, provides useful information for DVIPA.

One major problem with using DVIPA with the Tivoli Workload Scheduler for z/OS 8.1 end-to-end server was that the end-to-end server mailman process still used the IP address of the z/OS system (the local IP address for outbound connections was determined by the routing table on the z/OS system). If localopts nm ipvalidate was set to full on the first-level domain manager or backup domain manager, the outbound connection from the end-to-end server mailman process to the domain manager netman process was rejected. The result was that the outbound connection could not be established when the end-to-end server was moved from one system in the sysplex to another.

This is changed in Tivoli Workload Scheduler for z/OS 8.2, so the end-to-end server will use the host name or IP address that is specified in the TOPOLOGY HOSTNAME parameter for both inbound and outbound connections. This has the following advantages compared to Version 8.1:

1. It is not necessary to define the end-to-end server started task in the static DVIPA PORT definition. It is sufficient to define the DVIPA VIPARANGE parameter.

When the end-to-end server starts and reads the TOPOLOGY HOSTNAME() parameter, it performs a gethostbyname() on the host name. The host name can be mapped to an IP address (in the VIPARANGE), for example in the USS


/etc/hosts file. It then will get the same IP address across z/OS systems in the sysplex.

Another major advantage is that if the host name or IP address has to be changed, it is sufficient to make the change in the /etc/hosts file. It is not necessary to change the TCP/IP definitions and restart the TCP/IP stack (as long as the new IP address is within the defined range of IP addresses in the VIPARANGE parameter).

2. The host name in the TOPOLOGY HOSTNAME() parameter is used for outbound connections (from end-to-end server to the domain managers at the first level).

3. You can use network address IP validation on the domain managers at the first level.

The advantages of 1 and 2 also apply to the JSC server.

Example 3-4 shows required DVIPA definitions for Tivoli Workload Scheduler 8.2 in our environment.

Example 3-4 Example of required DVIPA definitions for ITWS for z/OS 8.2

VIPADYNAMIC
viparange define 255.255.255.248 9.12.6.104
ENDVIPADYNAMIC

And the /etc/hosts file in USS looks like:

9.12.6.107 twsce2e.itso.ibm.com twsce2e
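For reference, the TOPOLOGY statement fragment that goes with this /etc/hosts entry might then look like the following sketch (only the host name and port keywords are shown; the port value 31182 is the one used in Example 3-3, and the remaining required TOPOLOGY keywords are omitted):

TOPOLOGY HOSTNAME(twsce2e.itso.ibm.com)  /* resolves to 9.12.6.107 via DVIPA */
         PORTNUMBER(31182)               /* end-to-end server port           */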

Note: In the previous example, we show use of the /etc/hosts file in USS. For DVIPA, it is advisable to use the DNS instead of the /etc/hosts file because the /etc/hosts definitions in general are defined locally on each machine (each z/OS image) in the sysplex.

3.4.7 Upgrading from Tivoli Workload Scheduler for z/OS 8.1 end-to-end scheduling

If you are running Tivoli Workload Scheduler for z/OS 8.1 end-to-end scheduling and are going to update this environment to 8.2 level, you should plan for use of the new functions and possibilities in Tivoli Workload Scheduler for z/OS 8.2 end-to-end scheduling.


Be aware especially of the new possibilities introduced by:

� Centralized script

Are you using non-centralized script in the Tivoli Workload Scheduler for z/OS 8.1 scheduling environment?

Will it be better or more efficient to use centralized scripts?

If centralized scripts are going to be used, you should plan for necessary activities to have the non-centralized scripts consolidated in Tivoli Workload Scheduler for z/OS controller JOBLIB.

� JCL variables in centralized or non-centralized scripts or both

In Tivoli Workload Scheduler for z/OS 8.2 you can use Tivoli Workload Scheduler for z/OS JCL variables in centralized and non-centralized scripts.

If you have implemented some locally developed workaround in Tivoli Workload Scheduler for z/OS 8.1 to use JCL variables in the Tivoli Workload Scheduler for z/OS non-centralized script, you should consider using the new possibilities in Tivoli Workload Scheduler for z/OS 8.2.

� Recovery possible for jobs with non-centralized and centralized scripts.

Will or can use of recovery in jobs with non-centralized or centralized scripts improve your end-to-end scheduling? Is it something you should use in your Tivoli Workload Scheduler for z/OS 8.2 environment?

Should the Tivoli Workload Scheduler for z/OS 8.1 job definitions be updated or changed to use these new recovery possibilities?

Here again, some planning and considerations will be of great value.

� New options and possibilities when defining fault-tolerant workstation jobs in Tivoli Workload Scheduler for z/OS and working with fault-tolerant workstations.

Tivoli Workload Scheduler for z/OS 8.2 introduces some new options in the legacy ISPF dialog as well as in the JSC, when defining fault-tolerant jobs in Tivoli Workload Scheduler for z/OS.

Furthermore, the legacy ISPF dialogs have changed and improved, and new options have been added to work more easily with fault-tolerant workstations.

Be prepared to educate your planners and operations so they know how to use these new options and functions!

End-to-end scheduling is greatly improved in Version 8.2 of Tivoli Workload Scheduler for z/OS. Together with this improvement, several initialization statements have been changed. Furthermore, the network configuration for the end-to-end environment can be designed in another way in Tivoli Workload


Scheduler for z/OS 8.2 because, for example, Tivoli Workload Scheduler for z/OS 8.2 supports more than one first-level domain manager.

To summarize: Expect to take some time to plan your upgrade from Tivoli Workload Scheduler for z/OS Version 8.1 end-to-end scheduling to Version 8.2 end-to-end scheduling, because Tivoli Workload Scheduler for z/OS Version 8.2 has been improved with many new functions and initialization parameters.

Plan time to investigate and read the new Tivoli Workload Scheduler for z/OS 8.2 documentation (remember to use the April 2004 Revised versions) to get a good understanding of the new end-to-end scheduling possibilities in Tivoli Workload Scheduler for z/OS Version 8.2 compared to Version 8.1.

Furthermore, plan time to test and verify the use of the new functions and possibilities in Tivoli Workload Scheduler for z/OS 8.2 end-to-end scheduling.

3.5 Planning for end-to-end scheduling with Tivoli Workload Scheduler

In this section, we discuss how to plan end-to-end scheduling for Tivoli Workload Scheduler. We show how to configure your environment to fit your requirements, and we point you to special considerations that apply to the end-to-end solution with Tivoli Workload Scheduler for z/OS.

3.5.1 Tivoli Workload Scheduler publications and documentation

Hardcopy Tivoli Workload Scheduler documentation is not shipped with the product. The books are available in PDF format on the Tivoli Workload Scheduler 8.2 product CD-ROM.

Note: The publications are also available for download in PDF format at:

http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html

Look for books marked with “Revised April 2004,” because they have been updated with documentation changes introduced by the service (fix packs) for Tivoli Workload Scheduler produced since the base version of the product was released in June 2003.


3.5.2 Tivoli Workload Scheduler service updates (fix packs)

Before installing Tivoli Workload Scheduler, it is important to check for the latest service (fix pack) for Tivoli Workload Scheduler. Service for Tivoli Workload Scheduler is released in packages that normally contain a full replacement of the Tivoli Workload Scheduler code. These packages are called fix packs and are numbered FixPack 01, FixPack 02, and so forth. New fix packs are usually released every three months. The base version of Tivoli Workload Scheduler must be installed before a fix pack can be installed.

Check for the latest fix pack level and download it so that you can update your Tivoli Workload Scheduler installation and test the end-to-end scheduling environment on the latest fix pack level.

At the time of writing, the latest fix pack for Tivoli Workload Scheduler was FixPack 04.

When the fix pack is downloaded, installation guidelines can be found in the 8.2.0-TWS-FP04.README file.

The Tivoli Workload Scheduler documentation has been updated to FixPack 03 in the “April 2004 Revised” versions of the Tivoli Workload Scheduler manuals. As mentioned in 3.5.1, “Tivoli Workload Scheduler publications and documentation” on page 139, the latest versions of the Tivoli Workload Scheduler manuals can be downloaded from the IBM Web site.

3.5.3 System and software requirements

System and software requirements for installing and running Tivoli Workload Scheduler are described in great detail in the IBM Tivoli Workload Scheduler Release Notes Version 8.2 (Maintenance Release April 2004), SC32-1277.

Tip: Fix packs for Tivoli Workload Scheduler can be downloaded from:

ftp://ftp.software.ibm.com

Log on with user ID anonymous and your e-mail address for the password. Fix packs for Tivoli Workload Scheduler are in this directory: /software/tivoli_support/patches/patches_8.2.0.
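For example, an anonymous FTP session to look for the latest fix pack could be run as follows (a sketch; the directory layout and archive names on the server change over time, so list the directory and pick the package for your platform):

ftp ftp.software.ibm.com
Name: anonymous
Password: your.email@example.com
ftp> cd /software/tivoli_support/patches/patches_8.2.0
ftp> ls
ftp> binary
ftp> get <fix pack package for your platform>
ftp> quit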

Note: FixPack 04 introduces a new Fault-Tolerant Switch Feature, which is described in a PDF file named FaultTolerantSwitch.README.

The new Fault-Tolerant Switch Feature replaces and enhances the existing or traditional Fault-Tolerant Switch Manager for backup domain managers.


It is important to consult this release notes document before installing Tivoli Workload Scheduler because it contains system and software requirements, as well as the latest installation and upgrade notes.

3.5.4 Network planning and considerations

Before you install Tivoli Workload Scheduler, be sure that you know about the various configuration options. Each option has specific benefits and disadvantages. Here are some guidelines to help you find the right choice:

� How large is your IBM Tivoli Workload Scheduler network? How many computers does it hold? How many applications and jobs does it run?

The size of your network will help you decide whether to use a single domain or the multiple-domain architecture. If you have a small number of computers or a small number of applications to control with Tivoli Workload Scheduler, there may not be a need for multiple domains.

� How many geographic locations will be covered in your Tivoli Workload Scheduler network? How reliable and efficient is the communication between locations?

This is one of the primary reasons for choosing a multiple-domain architecture. One domain for each geographical location is a common configuration. If you choose single domain architecture, you will be more reliant on the network to maintain continuous processing.

� Do you need centralized or decentralized management of Tivoli Workload Scheduler?

A Tivoli Workload Scheduler network, with either a single domain or multiple domains, gives you the ability to manage Tivoli Workload Scheduler from a single node, the master domain manager. If you want to manage multiple locations separately, you can consider installing a separate Tivoli Workload Scheduler network at each location. Note that some degree of decentralized management is possible in a stand-alone Tivoli Workload Scheduler network by mounting or sharing file systems.

� Do you have multiple physical or logical entities at a single site? Are there different buildings with several floors in each building? Are there different departments or business functions? Are there different applications?

These may be reasons for choosing a multi-domain configuration, such as a domain for each building, department, business function, or application (manufacturing, financial, engineering).


� Do you run applications, such as SAP R/3, that operate with Tivoli Workload Scheduler?

If they are discrete and separate from other applications, you may choose to put them in a separate Tivoli Workload Scheduler domain.

� Would you like your Tivoli Workload Scheduler domains to mirror your Windows NT domains? This is not required, but may be useful.

� Do you want to isolate or differentiate a set of systems based on performance or other criteria? This may provide another reason to define multiple Tivoli Workload Scheduler domains to localize systems based on performance or platform type.

� How much network traffic do you have now? If your network traffic is manageable, the need for multiple domains is less important.

� Do your job dependencies cross system boundaries, geographical boundaries, or application boundaries? For example, does the start of Job1 on workstation3 depend on the completion of Job2 running on workstation4?

The degree of interdependence between jobs is an important consideration when laying out your Tivoli Workload Scheduler network. If you use multiple domains, you should try to keep interdependent objects in the same domain. This will decrease network traffic and take better advantage of the domain architecture.

� What level of fault tolerance do you require? An obvious disadvantage of the single domain configuration is the reliance on a single domain manager. In a multi-domain network, the loss of a single domain manager affects only the agents in its domain.

3.5.5 Backup domain manager

Each domain has a domain manager and, optionally, one or more backup domain managers. A backup domain manager (Figure 3-4 on page 143) must be in the same domain as the domain manager it is backing up. The backup domain managers must be fault-tolerant agents running the same product version as the domain manager they are supposed to replace, and must have the Resolve Dependencies and Full Status options enabled in their workstation definitions.

If a domain manager fails during the production day, you can use either the Job Scheduling Console, or the switchmgr command in the console manager command line (conman), to switch to a backup domain manager. A switch manager action can be executed by anyone with start and stop access to the domain manager and backup domain manager workstations.
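For example, using the domain and workstation names from Figure 3-4, the switch could be issued from the conman command line as follows (a sketch; see the Tivoli Workload Scheduler reference documentation for the full syntax):

conman "switchmgr DOMAINA;FTA1"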


A switch manager operation stops the backup domain manager, restarts it as the new domain manager, and converts the old domain manager to a fault-tolerant agent.

The identities of the current domain managers are documented in the Symphony files on each FTA and remain in effect until a new Symphony file is received from the master domain manager (OPCMASTER).

Figure 3-4 Backup domain managers (BDM) within an end-to-end scheduling network

As mentioned in 2.3.5, “Making the end-to-end scheduling system fault tolerant” on page 84, a switch to a backup domain manager remains in effect until a new Symphony file is received from the master domain manager (OPCMASTER in Figure 3-4). If the switch to the backup domain manager will be active across the Tivoli Workload Scheduler for z/OS plan extension or replan, you must change the topology definitions in the Tivoli Workload Scheduler for z/OS DOMREC initialization statements. The backup domain manager fault-tolerant workstation should be changed to the domain manager for the domain.

Example 3-5 shows how DOMREC for DomainA is changed so that the backup domain manager FTA1 in Figure 3-4 is the new domain manager for DomainA.

(Figure 3-4 shows the OPCMASTER master domain manager on z/OS; DomainA with domain manager FDMA and backup domain manager FTA1 on AIX, plus fault-tolerant agent FTA2; and DomainB with domain manager FDMB and backup domain manager FTA3 on AIX, plus fault-tolerant agent FTA4.)

Because the change is also made in the DOMREC topology definition (in connection with the switch of the domain manager from FDMA to FTA1), FTA1 remains the domain manager even if the Symphony file is re-created by the Tivoli Workload Scheduler for z/OS plan extend or replan jobs.

Example 3-5 Change in DOMREC for long-term switch to backup domain manager FTA1

DOMREC DOMAIN(DOMAINA) DOMMGR(FDMA) DOMPARENT(OPCMASTER)

Should be changed to:

DOMREC DOMAIN(DOMAINA) DOMMGR(FTA1) DOMPARENT(OPCMASTER)

Where FDMA is the name of the fault-tolerant workstation that was the domain manager before the switch.

3.5.6 Performance considerations

Tivoli Workload Scheduler 8.1 introduced some important performance-related initialization parameters. These can be used to optimize or tune Tivoli Workload Scheduler networks. If you suffer from poor performance and have already isolated the bottleneck on the Tivoli Workload Scheduler side, you may want to take a closer look at the localopts parameters listed in Table 3-5 (default values are shown in the table).

Table 3-5 Performance-related localopts parameters

Syntax                              Default value
mm cache mailbox=yes/no             No
mm cache size = bytes               32
sync level=low/medium/high          High
wr enable compression=yes/no        No

These localopts parameters are described in detail in the following sections. For more information, check the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273, and the redbook IBM Tivoli Workload Scheduler Version 8.2: New Features and Best Practices, SG24-6628.

Mailman cache (mm cache mailbox and mm cache size)

Tivoli Workload Scheduler can read groups of messages from a mailbox and put them into a memory cache. Accessing disk through the cache is much faster than accessing the disk directly. The advantage is even greater when you consider that traditional mailman needs at least two disk accesses for every mailbox message.


A special mechanism ensures that messages that are considered essential are not put into cache but are handled immediately. This avoids loss of vital information in case of a mailman failure. The settings in the localopts file regulate the behavior of mailman cache:

� mm cache mailbox
The default is no. Specify yes to enable mailman to use a reading cache for incoming messages.

� mm cache size
Specify this option only if you use the mm cache mailbox option. The default is 32 bytes, which should be a reasonable value for most small and medium-sized Tivoli Workload Scheduler installations. The maximum value is 512, and higher values are ignored.

File system synchronization level (sync level)

The sync level attribute specifies the frequency at which Tivoli Workload Scheduler synchronizes messages held on disk with those in memory. There are three possible settings:

Low Lets the operating system handle the speed of write access. This option speeds up all processes that use mailbox files. Disk usage is notably reduced, but if the file system is reliable the data integrity should be assured anyway.

Medium Makes an update to the disk after a transaction has completed. This setting could be a good trade-off between acceptable performance and high security against loss of data. Write is transaction-based; data written is always consistent.

High (default setting) Makes an update every time data is entered.

Important: The mm cache mailbox parameter can be used on both UNIX and Windows workstations. This option is not applicable (has no effect) on USS.

Tip: If necessary, you can experiment with increasing this setting gradually for better performance. You can use values larger than 32 bytes for large networks, but in small networks do not set this value unnecessarily large, because this would reduce the available memory that could be allocated to other applications or other Tivoli Workload Scheduler processes.


Sinfonia file compression (wr enable compression)

Starting with Tivoli Workload Scheduler 8.1, domain managers may distribute Sinfonia files to their FTAs in compressed form. Each Sinfonia record is compressed by the mailman process on the domain manager, then decompressed by the writer process on the FTA. A compressed Sinfonia record is about seven times smaller. Compression can be particularly useful when the Symphony file is huge and the network connection between two nodes is slow or unreliable (WAN). If any FTAs in the network have pre-8.1 versions of Tivoli Workload Scheduler, Tivoli Workload Scheduler domain managers can send Sinfonia files to these workstations in uncompressed form.

The following localopts setting is used to set compression in Tivoli Workload Scheduler:

� wr enable compression=yes: This means that Sinfonia will be compressed. The default is no.
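Pulling the parameters of this section together, a localopts fragment on a fault-tolerant agent or domain manager might look like the following sketch (the values are only illustrative and should be adjusted to your environment):

# Performance-related localopts settings (illustrative values)
mm cache mailbox      = yes
mm cache size         = 32
sync level            = low
wr enable compression = no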

Important considerations for sync level usage:

� For most UNIX systems (especially newer UNIX systems with reliable disk subsystems), a setting of low or medium is recommended.

� In end-to-end scheduling, we recommend that you set this to low, because host disk subsystems are considered highly reliable.

� This option is not applicable on Windows systems.

� Regardless of the sync level value that you set in the localopts file, Tivoli Workload Scheduler makes an update every time data is entered for messages that are considered essential (that is, it uses sync level=high for essential messages). Essential messages are those that Tivoli Workload Scheduler considers most important.

Tip: Due to the overhead of compression and decompression, we recommend that you use compression only if the Sinfonia file is 4 MB or larger.

3.5.7 Fault-tolerant agent (FTA) naming conventions

Each FTA represents a physical machine within a Tivoli Workload Scheduler network. Depending on the size of your distributed environment or network and how much it can grow in the future, it makes sense to think about naming conventions for your FTAs and possibly also for your Tivoli Workload Scheduler domains. A good naming convention for FTAs and domains can help to identify an FTA easily in terms of where it is located or the business unit it belongs to. This becomes even more important in end-to-end scheduling environments because the length of the workstation name for an FTA is limited in Tivoli Workload Scheduler for z/OS.


Figure 3-5 on page 147 shows a typical end-to-end network. It consists of two domain managers at the first level, two backup domain managers, and some FTAs.

Figure 3-5 Example of naming convention for FTA workstations in end-to-end network

In Figure 3-5, we have illustrated one naming convention for the fault-tolerant workstations in Tivoli Workload Scheduler for z/OS. The idea with this naming convention is the following:

� First digit

Character F is used to identify the workstation as an FTA. It will, for example, be possible to make lists in the legacy ISPF interface and in the JSC that shows all FTAs.

Note: The name of any workstation in Tivoli Workload Scheduler for z/OS, including the fault-tolerant workstations used for end-to-end scheduling, is limited to four digits. The name must be alphanumeric, and the first digit must be alphabetical or national.


� Second digit

Character or number used to identify the domain for the workstation.

� Third and fourth digits

Use to allow a high number of uniquely named servers or machines. The last two digits are reserved for the numbering of each workstation.

With this naming convention there will be room to define 1296 (that is, 36*36) fault-tolerant workstations for each domain named F1** to FZ**. If the domain manager fault-tolerant workstation for the first domain is named F100 (F000 is not used), it will be possible to define 35 domains with 1296 FTAs in each domain — that is, 45360 FTAs.

This example is meant to give you an idea of the number of fault-tolerant workstations that can be defined, even using only four digits in the name.

In the example, we did not change the first character in the workstation name: It was “fixed” at F. It is, of course, possible to use different characters here as well; for example, one could use D for domain managers and F for fault-tolerant agents. Changing the first digit in the workstation name increases the total number of fault-tolerant workstations that can be defined in Tivoli Workload Scheduler for z/OS. This example cannot cover all specific requirements; it only demonstrates that the naming needs careful consideration.

Because a four-character name for the FTA workstation does not tell much about the server name or IP address for the server where the FTA is installed, another good naming convention for the FTA workstations is to have the server name (the DNS name or maybe the IP address) in the description field for the workstation in Tivoli Workload Scheduler for z/OS. The description field for workstations in Tivoli Workload Scheduler for z/OS allows up to 32 characters. This way, it is much easier to relate the four-character workstation name to a specific server in your distributed network.

Example 3-6 shows how the description field can relate the four-character workstation name to the server name for the fault-tolerant workstations used in Figure 3-5 on page 147.

Example 3-6 Workstation description field (copy of workstation list in the ISPF panel)

Work station                               T R  Last update
name description                                user   date      time
F100 COPENHAGEN - AIX DM for Domain1       C A  CCFBK  04/07/16  14.59
F101 STOCKHOLM - AIX BDM for Domain1       C A  CCFBK  04/07/16  15.00
F102 OSLO - OS/400 LFTA in DM1             C A  CCFBK  04/07/16  15.00
F200 ROM - AIX DM for Domain2              C A  CCFBK  04/07/16  15.02
F201 MILANO - AIX BDM for Domain2          C A  CCFBK  04/07/16  15.08
F202 VENICE - SOLARIS FTA in DM2           C A  CCFBK  04/07/16  15.17

Tip: The host name in the workstation description field, in conjunction with the four-character workstation name, provides an easy way to illustrate your configured environment.

3.6 Planning for the Job Scheduling Console

In this section, we discuss planning considerations for the Tivoli Workload Scheduler Job Scheduling Console (JSC). The JSC is not a required component when running end-to-end scheduling with Tivoli Workload Scheduler. The JSC provides a unified GUI to the different job scheduling engines: the Tivoli Workload Scheduler for z/OS controller, and Tivoli Workload Scheduler master domain managers, domain managers, and fault-tolerant agents.

Job Scheduling Console 1.3 is the version that is delivered and used with Tivoli Workload Scheduler 8.2 and Tivoli Workload Scheduler for z/OS 8.2. The JSC code is shipped together with the Tivoli Workload Scheduler for z/OS or the Tivoli Workload Scheduler code.

With the JSC, it is possible to work with different Tivoli Workload Scheduler for z/OS controllers (such as test and production) from one GUI. Also, from this same GUI, the user can at the same time work with Tivoli Workload Scheduler master domain managers or fault-tolerant agents.

In end-to-end scheduling environments, the JSC can be a helpful tool when analyzing problems with the end-to-end scheduling network or for giving some dedicated users access to their own servers (fault-tolerant agents).

The JSC is installed locally on your personal desktop, laptop, or workstation.

Before you can run and use the JSC, the following additional components must be installed and configured:

� Tivoli Management Framework, V3.7.1 or V4.1

� The following components, installed and configured in the Tivoli Management Framework:

– Job Scheduling Services (JSS)

– Tivoli Workload Scheduler connector

– Tivoli Workload Scheduler for z/OS connector

– JSC instances for Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS environments


� Server started task on mainframe used for JSC communication

This server started task is necessary to communicate and work with Tivoli Workload Scheduler for z/OS from the JSC.

3.6.1 Job Scheduling Console documentation

The documentation for the Job Scheduling Console includes:

� IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes (Maintenance Release April 2004), SC32-1258

� IBM Tivoli Workload Scheduler Job Scheduling Console Users Guide (Maintenance Release April 2004), SC32-1257. This manual contains information about how to:

– Install and update the JSC.

– Install and update JSS, Tivoli Workload Scheduler connector and Tivoli Workload Scheduler for z/OS connector.

– Create Tivoli Workload Scheduler connector instances and Tivoli Workload Scheduler for z/OS connector instances.

– Use the JSC to work with Tivoli Workload Scheduler.

– Use the JSC to work with Tivoli Workload Scheduler for z/OS.

The documentation is not shipped in hardcopy form with the JSC code, but is available in PDF format on the JSC Version 1.3 CD-ROM.

3.6.2 Job Scheduling Console service (fix packs)

Before installing the JSC, it is important to check for and, if necessary, download the latest service (fix pack) level. Service for the JSC is released in packages that normally contain a full replacement of the JSC code. These packages are called fix packs and are numbered FixPack 01, FixPack 02, and so forth. Usually, a new fix pack is released once every three months. The base version of the JSC must be installed before a fix pack can be installed.

Note: The publications are also available for download in PDF format at:

http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html

Here you can find the newest versions of the books. Look for books marked with “Maintenance Release April 2004” because they have been updated with documentation changes introduced after the base version of the product was released in June 2003.


At the time of writing, the latest fix pack for the JSC was FixPack 05. Note that the JSC fix pack level should correspond to the connector fix pack level; that is, apply the same fix pack level to the JSC and to the connector at the same time.

Tip: Fix packs for the JSC can be downloaded from the IBM FTP site:

ftp://ftp.software.ibm.com

Log in with user ID anonymous and use your e-mail address for the password. Look for JSC fix packs in the /software/tivoli_support/patches/patches_1.3.0 directory.

Installation guidelines are in the 1.3.0-JSC-FP05.README text file.

Note: FixPack 05 improves performance for the JSC in two areas:

� Response time improvements
� Memory consumption improvements

3.6.3 Compatibility and migration considerations for the JSC

The Job Scheduling Console feature level 1.3 can work with different versions of Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS.

Before installing the Job Scheduling Console, consider Table 3-6 and Table 3-7 on page 152, which summarize the supported interoperability combinations between the Job Scheduling Console, the connectors, and the Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS engines.

Table 3-6 shows the supported combinations of JSC, Tivoli Workload Scheduler connectors, and Tivoli Workload Scheduler engine (master domain manager, domain manager, or fault-tolerant agent).

Table 3-6 Tivoli Workload Scheduler connector and engine combinations

Job Scheduling Console   Connector   Tivoli Workload Scheduler engine
1.3                      8.2         8.2
1.3                      8.1         8.1
1.2                      8.2         8.2

Note: The engine can be a fault-tolerant agent, a domain manager, or a master domain manager.


Table 3-7 shows the supported combinations of JSC, Tivoli Workload Scheduler for z/OS connectors, and Tivoli Workload Scheduler for z/OS engine (controller).

Table 3-7 Tivoli Workload Scheduler for z/OS connector and engine combinations

Job Scheduling Console   Connector   IBM Tivoli Workload Scheduler for z/OS engine (controller)
1.3                      1.3         8.2
1.3                      1.3         8.1
1.3                      1.3         2.3 (Tivoli OPC)
1.3                      1.2         8.1
1.3                      1.2         2.3 (Tivoli OPC)
1.2                      1.3         8.2
1.2                      1.3         8.1
1.2                      1.3         2.3 (Tivoli OPC)

Note: If your environment comprises installations of updated and back-level versions of the products, some functionality might not work correctly. For example, new functions such as Secure Socket Layer (SSL) protocol support, return code mapping, late job handling, and extended task name and recovery information for z/OS jobs are not supported by Job Scheduling Console feature level 1.2. A warning message is displayed if you try to open an object created with the new functions, and the object is not opened.

Satisfy the following requirements before installing

The following software and hardware prerequisites and other considerations should be taken care of before installing the JSC.

Software
The following software is required:

� Tivoli Management Framework Version 3.7.1 with FixPack 4 or higher
� Tivoli Job Scheduling Services 1.2

Hardware
The following hardware is required:

� CD-ROM drive
� Approximately 200 MB free disk space for installation of the JSC
� At least 256 MB RAM (preferably 512 MB RAM)


Other
The Job Scheduling Console can be installed on any workstation that has a TCP/IP connection. It can connect only to a server or workstation that has properly configured installations of the following products:

� Job Scheduling Services and IBM Tivoli Workload Scheduler for z/OS connector (mainframe-only scheduling solution)

� Job Scheduling Services and Tivoli Workload Scheduler connector (distributed-only scheduling solution)

� Job Scheduling Services, IBM Tivoli Workload Scheduler for z/OS connector, and Tivoli Workload Scheduler Connector (end-to-end scheduling solution)

The most up-to-date system and software requirements for installing and running the Job Scheduling Console are described in great detail in the IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes, Feature level 1.3, SC32-1258 (remember to get the April 2004 revision).

It is important to consult this release notes document before installing the JSC because it contains system and software requirements, as well as the latest installation and upgrade notes.

3.6.4 Planning for Job Scheduling Console availability

The legacy GUI programs gconman and gcomposer are no longer included with Tivoli Workload Scheduler, so the Job Scheduling Console fills the role of those programs as the primary graphical interface to Tivoli Workload Scheduler. Staff who work only with the JSC and are not familiar with the command line interface (CLI) depend on continuous JSC availability. This requirement must be taken into consideration when planning for a Tivoli Workload Scheduler backup domain manager. We therefore recommend that there be a Tivoli Workload Scheduler connector instance on the Tivoli Workload Scheduler backup domain manager. This guarantees JSC access without interruption.

Because the JSC communicates with Tivoli Workload Scheduler for z/OS, Tivoli Workload Scheduler domain managers, and Tivoli Workload Scheduler backup domain managers through one IBM Tivoli Management Framework (Figure 3-6 on page 154), this framework can be a single point of failure. Consider establishing a backup Tivoli Management Framework, or minimize the risk of an outage in the framework by using, for example, clustering techniques.

You can read more about how to make a Tivoli Management Framework fail-safe in the redbook High Availability Scenarios with IBM Tivoli Workload Scheduler and IBM Tivoli Framework, SG24-6632. Figure 3-6 on page 154 shows two domain managers at the first level directly connected to Tivoli Workload Scheduler for z/OS (OPC). In end-to-end scheduling environments it is, as


mentioned earlier, advisable to plan and install connectors and prerequisite components (Tivoli Management Framework and Job Scheduling Services) on all first-level domain managers.

Figure 3-6 JSC connections in an end-to-end environment

3.6.5 Planning for server started task for JSC communication

To use the JSC to communicate with Tivoli Workload Scheduler for z/OS, the z/OS system must have a started task that handles IP communication with the JSC (more precisely, with the Tivoli Workload Scheduler for z/OS (OPC) connector in the Tivoli Management Framework), as shown in Figure 3-6.

The same server started task can be used for JSC communication and for the end-to-end scheduling. We recommend having two server started tasks, one dedicated for end-to-end scheduling and one dedicated for JSC communication. With two server started tasks, the JSC server started task can be stopped and started without any impact on the end-to-end scheduling network.


The JSC server started task acts as the communication layer between the Tivoli Workload Scheduler for z/OS connector in the Tivoli Management Framework and the Tivoli Workload Scheduler for z/OS controller.
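As a sketch, the initialization members for the two server started tasks might contain SERVOPTS fragments like the following. The subsystem name TWSC and the member name TPLGPARM are illustrative, and the SUBSYS and PROTOCOL keyword values are recalled from the product documentation and should be verified against the Tivoli Workload Scheduler for z/OS documentation for your level:

/* End-to-end server started task (illustrative) */
SERVOPTS SUBSYS(TWSC)
         PROTOCOL(E2E)
         TPLGYPRM(TPLGPARM)

/* JSC server started task (illustrative) */
SERVOPTS SUBSYS(TWSC)
         PROTOCOL(JSC)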

3.7 Planning for migration or upgrade from previous versions

If you are running end-to-end scheduling with Tivoli Workload Scheduler for z/OS Version 8.1 and Tivoli Workload Scheduler Version 8.1, you should plan how to do the upgrade or migration from Version 8.1 to Version 8.2. This is also the case if you are running an even older version, such as Tivoli OPC Version 2.3.0, Tivoli Workload Scheduler 7.0, or Maestro 6.1.

Tivoli Workload Scheduler 8.2 supports backward compatibility so you can upgrade your network gradually, at different times, and in no particular order.

You can upgrade top-down — that is, upgrade the Tivoli Workload Scheduler for z/OS controller (master) first, then the domain managers at the first level, then the subordinate domain managers and fault-tolerant agents — or upgrade bottom-up by starting with the fault-tolerant agents, then upgrading in sequence, leaving the Tivoli Workload Scheduler for z/OS controller (master) for last.

However, if you upgrade the Tivoli Workload Scheduler for z/OS controller first, some new Version 8.2 functions (firewall support, centralized script) will not work until the whole network is upgraded. During the upgrade procedure, the installation backs up all of the configuration information, installs the new product code, and automatically migrates old scheduling data and configuration information. However, it does not migrate user files or directories placed in the Tivoli Workload Scheduler for z/OS server work directory or in the Tivoli Workload Scheduler TWShome directory.

Before doing the actual installation, you should decide on the migration or upgrade strategy that will be best in your end-to-end scheduling environment. This is also the case if you are upgrading from old Tivoli OPC tracker agents or if you decide to merge a stand-alone Tivoli Workload Scheduler environment with your Tivoli Workload Scheduler for z/OS environment to have a new end-to-end scheduling environment.

Our experience is that installation and upgrading of an existing end-to-end scheduling environment takes some time, and the time required depends on the size of the environment. It is good to be prepared from the first day and to make realistic implementation plans and schedules.


Another important thing to remember is that Tivoli Workload Scheduler end-to-end scheduling has been improved and has changed considerably from Version 8.1 to Version 8.2. If you are running Tivoli Workload Scheduler 8.1 end-to-end scheduling and are planning to upgrade to Version 8.2 end-to-end scheduling, we recommend that you:

1. First do a “one-to-one” upgrade from Tivoli Workload Scheduler 8.1 end-to-end scheduling to Tivoli Workload Scheduler 8.2 end-to-end scheduling.

2. When the upgrade is completed and you are running Tivoli Workload Scheduler 8.2 end-to-end scheduling in the whole network, then start to implement the new functions and facilities that were introduced in Tivoli Workload Scheduler for z/OS 8.2 and Tivoli Workload Scheduler 8.2.

3.8 Planning for maintenance or upgrades

The Tivoli maintenance strategy for Tivoli Workload Scheduler introduces a new way to maintain the product more effectively and easily. On a quarterly basis, Tivoli provides updates with recent patches and offers a fix pack that is similar to a maintenance release. This fix pack can be downloaded from the common support FTP site ftp://ftp.software.ibm.com/software/tivoli_support/patches, or it can be shipped on a CD. Ask your local Tivoli support for more details.

In this book, we have recommended upgrading your end-to-end scheduling environment to FixPack 04 level. This level will change with time, of course, so when you start the installation you should plan to download and install the latest fix pack level.


Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling

When the planning described in the previous chapter is completed, it is time to install the software (Tivoli Workload Scheduler for z/OS V8.2, Tivoli Workload Scheduler V8.2 and, optionally, Tivoli Workload Scheduler Job Scheduling Console V1.3) and configure it for end-to-end scheduling.

In this chapter, we provide details on how to install and configure Tivoli Workload Scheduler end-to-end scheduling and the Job Scheduling Console (JSC), including the necessary steps involved.

We describe installation of:

� IBM Tivoli Workload Scheduler for z/OS V8.2

� IBM Tivoli Workload Scheduler V8.2

� IBM Tivoli Workload Scheduler Job Scheduling Console V1.3

We also describe installation of the components that are required to run the JSC.


4.1 Before the installation is started

Before you start the installation, it is important to understand that Tivoli Workload Scheduler end-to-end scheduling involves two components:

� IBM Tivoli Workload Scheduler for z/OS� IBM Tivoli Workload Scheduler

The Tivoli Workload Scheduler Job Scheduling Console is not a required product, but our experience from working with the Tivoli Workload Scheduler end-to-end scheduling environment is that the JSC is a very nice and helpful tool to have for troubleshooting or for new users who do not know much about job scheduling, Tivoli Workload Scheduler, or Tivoli Workload Scheduler for z/OS.

The overall installation and customization process is not complicated and can be narrowed down to the following steps:

1. Design the topology (for example, domain hierarchy or number of domains) for the distributed Tivoli Workload Scheduler network in which Tivoli Workload Scheduler for z/OS will do the workload scheduling. Use the guidelines in 3.5.4, “Network planning and considerations” on page 141 when designing the topology.

2. Install and verify the Tivoli Workload Scheduler for z/OS controller and end-to-end server tasks in the host environment.

Installation and verification of Tivoli Workload Scheduler for z/OS end-to-end scheduling is described in 4.2, “Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling” on page 159.

3. Install and verify the Tivoli Workload Scheduler distributed workstations (fault-tolerant agents).

Installation and verification of the Tivoli Workload Scheduler distributed workstations is described in 4.3, “Installing Tivoli Workload Scheduler in an end-to-end environment” on page 207.

Note: If you run on a previous release of IBM Tivoli Workload Scheduler for z/OS (OPC), you should also migrate from this release to Tivoli Workload Scheduler for z/OS 8.2 as part of the installation. Migration steps are described in the Tivoli Workload Scheduler for z/OS Installation Guide, SH19-4543. Migration is performed with a standard program supplied with Tivoli Workload Scheduler for z/OS.


4. Define and activate fault-tolerant workstations (FTWs) in the Tivoli Workload Scheduler for z/OS controller:

– Define FTWs in the Tivoli Workload Scheduler for z/OS database.

– Activate the FTW definitions by running the plan extend or replan batch job.

– Verify that the workstations are active and linked.

This is described in 4.4, “Define, activate, verify fault-tolerant workstations” on page 211.

5. Create fault-tolerant workstation jobs and job streams for the jobs to be executed on the FTWs, using either centralized script, non-centralized script, or a combination.

This is described in 4.5, “Creating fault-tolerant workstation job definitions and job streams” on page 217.

6. Do a verification test of the Tivoli Workload Scheduler for z/OS end-to-end scheduling. The verification test is used to verify that the Tivoli Workload Scheduler for z/OS controller can schedule and track jobs on the FTWs.

The verification test should also confirm that it is possible to browse the job log for completed jobs run on the FTWs.

This is described in 4.6, “Verification test of end-to-end scheduling” on page 235.

If you would like to use the Job Scheduling Console to work with Tivoli Workload Scheduler for z/OS, Tivoli Workload Scheduler, or both, you should also activate support for the JSC. The necessary installation steps for activating support for the JSC are described in 4.7, “Activate support for the Tivoli Workload Scheduler Job Scheduling Console” on page 245.

4.2 Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling

In this section, we guide you through the installation process of Tivoli Workload Scheduler for z/OS, especially the end-to-end feature. We do not duplicate the

Important: These workstations can be installed and configured before the Tivoli Workload Scheduler for z/OS components, but it will not be possible to test the connections before the mainframe components are installed and ready.


entire installation of the base product, which is described in the IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264.

To activate support for end-to-end scheduling in Tivoli Workload Scheduler for z/OS to be able to schedule jobs on the Tivoli Workload Scheduler FTAs, follow these steps:

1. Run EQQJOBS and specify Y for the end-to-end feature.

See 4.2.1, “Executing EQQJOBS installation aid” on page 162.

2. Define controller (engine) and tracker (agent) subsystems in SYS1.PARMLIB.

See 4.2.2, “Defining Tivoli Workload Scheduler for z/OS subsystems” on page 167.

3. Allocate the end-to-end data sets running the EQQPCS06 sample generated by EQQJOBS.

See 4.2.3, “Allocate end-to-end data sets” on page 168.

4. Create and customize the work directory by running the EQQPCS05 sample generated by EQQJOBS.

See 4.2.4, “Create and customize the work directory” on page 170.

5. Create started task procedures for Tivoli Workload Scheduler for z/OS

See 4.2.5, “Create started task procedures for Tivoli Workload Scheduler for z/OS” on page 173.

6. Define workstation (CPU) configuration and domain organization by using the CPUREC and DOMREC statements in a new PARMLIB member. (The default member name is TPLGINFO.)

See 4.2.6, “Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling” on page 174, “DOMREC statement” on page 185, “CPUREC statement” on page 187, and Figure 4-6 on page 176.

7. Define Windows user IDs and passwords by using the USRREC statement in a new PARMLIB member. (The default member name is USRINFO.)

It is important to remember that you have to define Windows user IDs and passwords only if you have fault-tolerant agents on Windows-supported platforms and want to schedule jobs to be run on these Windows platforms.

See “USRREC statement” on page 195.


8. Define the end-to-end configuration by using the TOPOLOGY statement in a new PARMLIB member. (The default member name is TPLGPARM.) The TOPOLOGY statement is described in “TOPOLOGY statement” on page 178.

In the TOPOLOGY statement, you should specify the following:

– For the TPLGYMEM keyword, write the name of the member used in step 6. (See Figure 4-6 on page 176.)

– For the USRMEM keyword, write the name of the member used in step 7 on page 160. (See Figure 4-6 on page 176.)

9. Add the TPLGYSRV keyword to the OPCOPTS statement in the Tivoli Workload Scheduler for z/OS controller to specify the server name that will be used for end-to-end communication.

See “OPCOPTS TPLGYSRV(server_name)” on page 176.

10. Add the TPLGYPRM keyword to the SERVOPTS statement in the Tivoli Workload Scheduler for z/OS end-to-end server to specify the member name used in step 8 on page 161. This step activates end-to-end communication in the end-to-end server started task.

See “SERVOPTS TPLGYPRM(member name/TPLGPARM)” on page 177.

11. Add the TPLGYPRM keyword to the BATCHOPT statement to specify the member name used in step 8 on page 161. This step activates the end-to-end feature in the plan extend, plan replan, and Symphony renew batch jobs.

See “TPLGYPRM(member name/TPLGPARM) in BATCHOPT” on page 177.

12. Optionally, you can customize the way the job name is generated in the Symphony file by the Tivoli Workload Scheduler for z/OS plan extend, replan, and Symphony renew batch jobs.

The job name in the Symphony file can be tailored or customized by the JTOPTS TWSJOBNAME() parameter. See 4.2.9, “The JTOPTS TWSJOBNAME() parameter” on page 200 for more information.

If you decide to customize the job name layout in the Symphony file, be aware that it can require that you reallocate the EQQTWSOU data set with larger record length. See “End-to-end input and output data sets” on page 168 for more information.

Note: The JTOPTS TWSJOBNAME() parameter was introduced by APAR PQ77970.


13. Verify that the Tivoli Workload Scheduler for z/OS controller and server started tasks can be started (or restarted if already running) and that everything comes up correctly.

Verification is described in 4.2.10, “Verify end-to-end installation in Tivoli Workload Scheduler for z/OS” on page 203.

4.2.1 Executing EQQJOBS installation aid

EQQJOBS is a CLIST-driven ISPF dialog that can help you install Tivoli Workload Scheduler for z/OS. EQQJOBS assists in the installation of the engine and agent by building batch-job JCL that is tailored to your requirements. To make EQQJOBS executable, allocate these libraries to the DD statements in your TSO session (a sketch of one way to do this follows the list):

- SEQQCLIB to SYSPROC
- SEQQPNL0 to ISPPLIB
- SEQQSKL0 and SEQQSAMP to ISPSLIB
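One way to make these libraries available for the current session, without changing your TSO logon procedure, is a small setup CLIST such as the following sketch. The high-level qualifier TWS.V8R2M0 is an assumption; substitute the names of your SMP/E target libraries.

PROC 0
/* Make the EQQJOBS CLIST findable as a TSO command */
ALTLIB ACTIVATE APPLICATION(CLIST) DATASET('TWS.V8R2M0.SEQQCLIB')
/* Make the EQQJOBS panels and skeletons available to ISPF */
ISPEXEC LIBDEF ISPPLIB DATASET ID('TWS.V8R2M0.SEQQPNL0')
ISPEXEC LIBDEF ISPSLIB DATASET ID('TWS.V8R2M0.SEQQSKL0','TWS.V8R2M0.SEQQSAMP')

After running the CLIST, you should be able to start the dialog with TSO %EQQJOBS from ISPF.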

Use EQQJOBS installation aid as follows:

1. To invoke EQQJOBS, enter the TSO command EQQJOBS from an ISPF environment. The primary panel shown in Figure 4-1 appears.

Figure 4-1 EQQJOBS primary panel

You only need to select options 1 and 2 for end-to-end specifications. We do not want to step through the whole EQQJOBS dialog so, instead, we show only the related end-to-end panels. (The referenced panel names are indicated in the top-left corner of the panels, as shown in Figure 4-1.)

2. Select option 1 in panel EQQJOBS0 (and press Enter twice), and make your necessary input into panel ID EQQJOBS8. (See Figure 4-2 on page 163.)

EQQJOBS0 ------------ EQQJOBS application menu --------------
Select option ===>

1 - Create sample job JCL
2 - Generate OPC batch-job skeletons
3 - Generate OPC Data Store samples
X - Exit from the EQQJOBS dialog


Figure 4-2 Server-related input panel

The following definitions are important:

– END-TO-END FEATURE

Specify Y if you want to install end-to-end scheduling and run jobs on Tivoli Workload Scheduler fault-tolerant agents.

– Installation Directory

Specify the (HFS) path where SMP/E has installed the Tivoli Workload Scheduler for z/OS files for UNIX System Services when applying the end-to-end enabler feature. This directory is the one that contains the bin directory. The default path is /usr/lpp/TWS/V8R2M0.

The installation directory is created by SMP/E job EQQISMKD and populated by applying the end-to-end feature (JWSZ103).

This should be mounted Read-Only on every system in your sysplex.

– Work Directory

Specify the directory where the subsystem-specific work files reside. Use a name that uniquely identifies your subsystem. Each subsystem that will use the fault-tolerant workstations must have its own work directory. Only the server and the daily planning batch jobs update the work directory.

EQQJOBS8---------------------------- Create sample job JCL --------------------Command ===> END TO END FEATURE: Y (Y= Yes ,N= No) Installation Directory ===> /usr/lpp/TWS/V8R2M0_____________________ ===> ________________________________________ ===> ________________________________________ Work Directory ===> /var/inst/TWS___________________________ ===> ________________________________________ ===> ________________________________________ User for OPC address space ===> UID ___ Refresh CP group ===> GID __ RESTART AND CLEANUP (DATA STORE) N (Y= Yes ,N= No) Reserved destination ===> OPC_____ Connection type ===> SNA (SNA/XCF) SNA Data Store luname ===> ________ (only for SNA connection ) SNA FN task luname ===> ________ (only for SNA connection ) Xcf Group ===> ________ (only for XCF connection ) Xcf Data store member ===> ________ (only for XCF connection ) Xcf FL task member ===> ________ (only for XCF connection ) Press ENTER to create sample job JCL


This directory is where the end-to-end processes have their working files (Symphony, event files, traces). It should be mounted Read/Write on every system in your sysplex.

– User for OPC address space

This information is used to create the EQQPCS05 sample job used to build the directory with the right ownership. In order to run the end-to-end feature correctly, the ownership of the work directory and the files contained in it must be assigned to the same user ID that RACF associates with the server started task. In the User for OPC address space field, specify the RACF user ID used for the Server address space. This is the name specified in the started-procedure table.

– Refresh CP group

This information is used to create the EQQPCS05 sample job used to build the directory with the right ownership. In order to create the new Symphony file, the user ID that is used to run the daily planning batch job must belong to the group that you specify in this field. Make sure that the user ID that is associated with the Server and Controller address spaces (the one specified in the User for OPC address space field) belongs to this group or has this group as a supplementary group.

Important: To configure end-to-end scheduling in a sysplex environment successfully, make sure that the work directory is available to all systems in the sysplex. In a takeover situation, the server is started on another system in the sysplex, and that server must be able to access the work directory to continue processing.

As described in Section 3.4.4, “Hierarchical File System (HFS) cluster” on page 124, we recommend having dedicated HFS clusters for each end-to-end scheduling environment (end-to-end server started task), that is:

- One HFS cluster for the installation binaries per environment (test, production, and so forth)

- One HFS cluster for the work files per environment (test, production, and so forth)

The work HFS clusters should be mounted in Read/Write mode, and the HFS cluster with the binaries should be mounted Read-Only. This is because the working directory is application-specific and contains application-related data; the separation also makes your backups easier. The size of the cluster depends on the size of the Symphony file and how long you want to keep the stdlist files. We recommend that you allocate 2 GB of space.


As you can see in Figure 4-3 on page 165, we defined RACF user ID TWSCE2E for the end-to-end server started task. User TWSCE2E belongs to RACF group TWSGRP. Therefore, all users in the RACF group TWSGRP, or with TWSGRP as a supplementary group, get access to create the Symphony file and to modify and read the other files in the work directory.

Figure 4-3 Description of the input fields in the EQQJOBS8 panel

3. Press Enter to generate the installation job control language (JCL) jobs. Table 4-1 lists the subset of the sample JCL members created by EQQJOBS that relate to end-to-end scheduling.

Tip: The Refresh CP group field can be used to give access to the HFS file as well as to protect the HFS directory from unauthorized access.

EQQPCS05 sample JCL:

//TWS      JOB ,'TWS INSTALL',CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1)
/*JOBPARM SYSAFF=SC64
//JOBLIB   DD DSN=TWS.V8R2M0.SEQQLMD0,DISP=SHR
//ALLOHFS  EXEC PGM=BPXBATCH,REGION=4M
//STDOUT   DD PATH='/tmp/eqqpcs05out',
//            PATHOPTS=(OCREAT,OTRUNC,OWRONLY),PATHMODE=SIRWXU
//STDIN    DD PATH='/usr/lpp/TWS/V8R2M0/bin/config',
//            PATHOPTS=(ORDONLY)
//STDENV   DD *
eqqBINDIR=/usr/lpp/TWS/V8R2M0
eqqWRKDIR=/var/inst/TWS
eqqUID=E2ESERV
eqqGID=TWSGRP
/*
//*
//OUTPUT1  EXEC PGM=IKJEFT01
//STDOUT   DD SYSOUT=*,DCB=(RECFM=V,LRECL=256)
//OUTPUT   DD PATH='/tmp/eqqpcs05out',
//            PATHOPTS=ORDONLY
//SYSTSPRT DD DUMMY
//SYSTSIN  DD *
OCOPY INDD(OUTPUT) OUTDD(STDOUT)
BPXBATCH SH rm /tmp/eqqpcs05out
/*

The corresponding EQQJOBS8 input (values used in our environment):

end-to-end FEATURE:             Y  (Y= Yes, N= No)
HFS Installation Directory ===> /usr/lpp/TWS/V8R2M0
HFS Work Directory         ===> /var/inst/TWS
User for OPC Address Space ===> E2ESERV
Refresh CP Group           ===> TWSGRP

Descriptions of the EQQJOBS8 input fields:

- HFS Installation Directory: Where the TWS binaries that run in USS were installed, for example translator, mailman, and batchman. This should be the same as the value of the TOPOLOGY BINDIR parameter.

- HFS Work Directory: Where the TWS files that change throughout the day reside, for example the Symphony file, mailbox files, and logs for the TWS processes that run in USS. This should be the same as the value of the TOPOLOGY WRKDIR parameter.

- User for OPC Address Space: The user associated with the end-to-end server started task.

- Refresh CP Group: The group containing all users who will run batch planning jobs (CP extend, replan, refresh, and Symphony renew).


Table 4-1   Sample JCL members related to end-to-end scheduling (created by EQQJOBS)

Member     Description
EQQCON     Sample started task procedure for a Tivoli Workload Scheduler for z/OS controller and tracker in the same address space.
EQQCONO    Sample started task procedure for the Tivoli Workload Scheduler for z/OS controller only.
EQQCONP    Sample initial parameters for a Tivoli Workload Scheduler for z/OS controller and tracker in the same address space.
EQQCONOP   Sample initial parameters for a Tivoli Workload Scheduler for z/OS controller only.
EQQPCS05   Creates the working directory in HFS used by the end-to-end server task.
EQQPCS06   Allocates the data sets necessary to run end-to-end scheduling.
EQQSER     Sample started task procedure for a server task.
EQQSERV    Sample initialization parameters for a server task.

4. EQQJOBS is also used to create batch-job skeletons, that is, skeletons for the batch jobs (such as plan extend, replan, and Symphony renew) that you can submit from the Tivoli Workload Scheduler for z/OS legacy ISPF panels. To create batch-job skeletons, select option 2 in the EQQJOBS primary panel (see Figure 4-1 on page 162). Make your necessary entries until panel EQQJOBSA appears (Figure 4-4).


Figure 4-4   Generate end-to-end skeletons

EQQJOBSA -------------- Generate OPC batch-job skeletons ----------------------
Command ===>

Specify if you want to use the following optional features:

END TO END FEATURE:                  Y  (Y= Yes, N= No)
  (To interoperate with TWS fault tolerant workstations)

RESTART AND CLEAN UP (DATA STORE):   N  (Y= Yes, N= No)
  (To be able to retrieve job log, execute dataset clean up actions
   and step restart)

FORMATTED REPORT OF TRACKLOG EVENTS: Y  (Y= Yes, N= No)
  EQQTROUT dsname     ===> TWS.V8R20.*.TRACKLOG____________________________
  EQQAUDIT output dsn ===> TWS.V8R20.*.EQQAUDIT.REPORT_____________________

Press ENTER to generate OPC batch-job skeletons

5. Specify Y for the END TO END FEATURE if you want to use end-to-end scheduling to run jobs on Tivoli Workload Scheduler fault-tolerant workstations.

6. Press Enter. The skeleton members for daily plan extend, replan, and trial plan, and for long-term plan extend, replan, and trial plan, are created with the data sets related to end-to-end scheduling. In addition, a new member is created. (See Table 4-2 on page 167.)

Table 4-2   End-to-end skeletons

Member     Description
EQQSYRES   Tivoli Workload Scheduler Symphony renew

4.2.2 Defining Tivoli Workload Scheduler for z/OS subsystems

The subsystems for the Tivoli Workload Scheduler for z/OS controllers (engines) and trackers on the z/OS images (agents) must be defined in the active subsystem-name-table member of SYS1.PARMLIB. It is advisable to install at least two Tivoli Workload Scheduler for z/OS controlling systems, one for testing and one for your production environment.

Note: We recommend that you install the trackers (agents) and the Tivoli Workload Scheduler for z/OS controller (engine) in separate address spaces.


To define the subsystems, update the active IEFSSNnn member in SYS1.PARMLIB. The name of the subsystem initialization module for Tivoli Workload Scheduler for z/OS is EQQINITF. Include records, as in the following example.

Example 4-1 Subsystem definition record (IEFSSNnn member of SYS1.PARMLIB)

SUBSYS SUBNAME(subsystem name)       /* TWS for z/OS subsystem */
       INITRTN(EQQINITF)
       INITPARM('maxecsa,F')

Note that the subsystem name must be two to four characters: for example, TWSC for the controller subsystem and TWST for the tracker subsystems. See IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264, for more information.
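For example, a controller subsystem named TWSC and a tracker subsystem named TWST could be defined as shown below. The maxecsa value of 4 is purely illustrative; calculate the correct value for your installation as described in the installation manual.

SUBSYS SUBNAME(TWSC) INITRTN(EQQINITF) INITPARM('4,F')
SUBSYS SUBNAME(TWST) INITRTN(EQQINITF) INITPARM('4,F')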

4.2.3 Allocate end-to-end data sets

Member EQQPCS06, created by EQQJOBS in your sample job JCL library, allocates the following VSAM and sequential data sets needed for end-to-end scheduling:

- End-to-end script library (EQQSCLIB) for non-centralized scripts
- End-to-end input and output events data sets (EQQTWSIN and EQQTWSOU)
- Current plan backup copy data set used to create the Symphony file (EQQSCPDS)
- End-to-end centralized script data library (EQQTWSCS)

We explain the use and allocation of these data sets in more detail.

End-to-end script library (EQQSCLIB)

This script library data set includes members containing the commands or the job definitions for fault-tolerant workstations. It is required in the controller if you want to use the end-to-end scheduling feature. See Section 4.5.3, “Definition of non-centralized scripts” on page 221 for details about the JOBREC, RECOVERY, and VARSUB statements.

Tip: Do not compress members in this PDS. For example, do not use the ISPF PACK ON command, because Tivoli Workload Scheduler for z/OS does not use ISPF services to read it.

End-to-end input and output data sets

These data sets are required by every Tivoli Workload Scheduler for z/OS address space that uses the end-to-end feature. They record the descriptions of


events related to operations running on FTWs, and they are used by both the end-to-end enabler task and the translator process in the scheduler’s server.

The data sets are device-dependent and can have only primary space allocation. Do not allocate any secondary space. They are automatically formatted by Tivoli Workload Scheduler for z/OS the first time they are used.

EQQTWSIN and EQQTWSOU are wrap-around data sets. In each data set, the header record is used to track the number of records read and written. To avoid the loss of event records, a writer task does not write any new records until more space is available, that is, until all existing records have been read.

The quantity of space that you need to define for each data set requires some attention. Because the two data sets are also used for job log retrieval, the limit for the job log length is half the maximum number of records that can be stored in the input events data set. Two cylinders are sufficient for most installations.

The maximum length of the events logged in these two data sets, including the job logs, is 120 bytes. However, you can allocate the data sets with a longer logical record length; using record lengths greater than 120 bytes provides no advantage and causes no problems. The maximum allowed value is 32000 bytes; greater values cause the end-to-end server started task to terminate. In both data sets there must be enough space for at least 1000 events (the maximum number of job log events is 500). Use this as a reference if you plan to define a record length greater than 120 bytes. When the record length of 120 bytes is used, the space allocation must be at least 1 cylinder. The data sets must be unblocked, and the block size must be the same as the logical record length.

A minimum record length of 160 bytes is necessary for the EQQTWSOU data set in order to be able to decide how to build the job name in the Symphony file. (Refer to the TWSJOBNAME parameter in the JTOPTS statement in Section 4.2.9, “The JTOPTS TWSJOBNAME() parameter” on page 200.)
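The following allocation sketch ties these rules together: primary space only, unblocked records, block size equal to the logical record length, and a 160-byte record length so that the job name layout can later be tailored. The data set names and unit are assumptions; adjust them to your naming standards.

//ALLOCEVT EXEC PGM=IEFBR14
//* End-to-end event data sets: primary space only, no secondary extents
//EQQTWSIN DD DSN=TWS.V8R2M0.TWSIN,DISP=(NEW,CATLG),
//            UNIT=3390,SPACE=(CYL,2),
//            DCB=(RECFM=F,LRECL=160,BLKSIZE=160)
//EQQTWSOU DD DSN=TWS.V8R2M0.TWSOU,DISP=(NEW,CATLG),
//            UNIT=3390,SPACE=(CYL,2),
//            DCB=(RECFM=F,LRECL=160,BLKSIZE=160)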

For good performance, define the data sets on a device that is not heavily used. If you run programs that use the RESERVE macro, try to allocate the data sets on a device that is not, or is only slightly, reserved.

Initially, you may need to test your system to get an idea of the number and types of events that are created at your installation. After you have gathered enough information, you can reallocate the data sets. Before you reallocate a data set, ensure that the current plan is entirely up-to-date. You must also stop the

Note: An SD37 abend code is produced when Tivoli Workload Scheduler for z/OS formats a newly allocated data set. Ignore this error.


end-to-end sender and receiver task on the controller and the translator thread on the server that use this data set.

Current plan backup copy data set (EQQSCPDS)

EQQSCPDS is the current plan backup copy data set that is used to create the Symphony file.

During the creation of the current plan, the SCP data set is used as a CP backup copy for the production of the Symphony file. This VSAM data set is used when the end-to-end feature is active. It should be allocated with the same size as the CP1/CP2 and NCP VSAM data sets.

End-to-end centralized script data set (EQQTWSCS)

Tivoli Workload Scheduler for z/OS uses the end-to-end centralized script data set to temporarily store a script when it is downloaded from the JOBLIB data set to the agent for submission.

Set the following attributes for EQQTWSCS:

DSNTYPE=LIBRARY, SPACE=(CYL,(1,1,10)), DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)

If you want to use centralized script support when scheduling end-to-end, use the EQQTWSCS DD statement in the controller and server started tasks. The data set must be a partitioned data set extended (PDSE).
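A minimal allocation sketch, using the attributes listed above, might look like this; the data set name and unit are assumptions (EQQPCS06 allocates this data set for you):

//ALLOCCS  EXEC PGM=IEFBR14
//* Centralized script data set: must be a PDSE (DSNTYPE=LIBRARY)
//EQQTWSCS DD DSN=TWS.V8R2M0.TWSCS,DISP=(NEW,CATLG),
//            DSNTYPE=LIBRARY,UNIT=3390,
//            SPACE=(CYL,(1,1,10)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)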

Tip: Do not move these data sets after they have been allocated. They contain device-dependent information and cannot be copied from one type of device to another, or moved around on the same volume. An end-to-end event data set that is moved will be re-initialized, which causes all events in the data set to be lost. If you have DFHSM or a similar product installed, you should specify that end-to-end event data sets are not migrated or moved.

4.2.4 Create and customize the work directory

To install the end-to-end feature, you must allocate the files that the feature uses. Then, on every Tivoli Workload Scheduler for z/OS controller that will use this feature, run the EQQPCS05 sample to create the directories and files.

The EQQPCS05 sample must be run by a user with one of the following permissions:

- UNIX System Services (USS) user ID (UID) equal to 0

- BPX.SUPERUSER FACILITY class profile in RACF


- UID specified in the JCL in eqqUID and belonging to the group (GID) specified in the JCL in eqqGID

If the GID or the UID was not specified in EQQJOBS, you can specify them in the STDENV DD statement before running the EQQPCS05 job.
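For example, the STDENV input stream in EQQPCS05 could look like the following; the values shown are the ones used in our example environment and are assumptions for yours:

//STDENV   DD *
eqqBINDIR=/usr/lpp/TWS/V8R2M0
eqqWRKDIR=/var/inst/TWS
eqqUID=E2ESERV
eqqGID=TWSGRP
/*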

The EQQPCS05 job runs a configuration script (named config) residing in the installation directory. This configuration script creates a working directory with the right permissions. It also creates several files and directories in this working directory. (See Figure 4-5 on page 171.)

Figure 4-5   EQQPCS05 sample JCL and the configure script

As shown in Figure 4-5, the EQQPCS05 JCL (on z/OS) invokes the configure script that resides in the installation directory (BINDIR) in USS, and the script populates the work directory (WRKDIR). The configure script creates subdirectories; copies configuration files; and sets the owner, group, and permissions of these directories and files. This last step is the reason that EQQPCS05 must be run by a user with sufficient privileges: a user associated with USS UID 0, a user with the BPX.SUPERUSER facility in RACF, or the user that will be specified in eqqUID (the user associated with the end-to-end server started task). A listing of the work directory after the job has run looks like this:

Permissions  Owner    Group    Size  Date    Time   File Name
-rw-rw----   E2ESERV  TWSGRP    755  Feb  3  13:01  NetConf
-rw-rw----   E2ESERV  TWSGRP   1122  Feb  3  13:01  TWSCCLog.properties
-rw-rw----   E2ESERV  TWSGRP   2746  Feb  3  13:01  localopts
drwxrwx---   E2ESERV  TWSGRP   8192  Feb  3  13:01  mozart
drwxrwx---   E2ESERV  TWSGRP   8192  Feb  3  13:01  pobox
drwxrwxr-x   E2ESERV  TWSGRP   8192  Feb 11  09:48  stdlist

After running EQQPCS05, you can find the following files in the work directory:

- localopts

  Defines the attributes of the local workstation (OPCMASTER) for batchman, mailman, netman, and writer processes and for SSL. Only a subset of these attributes is used by the end-to-end server on z/OS. Refer to IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265, for information about customizing this file.


- mozart/globalopts

  Defines the attributes of the IBM Tivoli Workload Scheduler network (OPCMASTER ignores them).

- NetConf

  The netman configuration file.

- TWSCCLog.properties

  Defines the attributes for the trace function in the end-to-end server USS processes.

You will also find the following directories in the work directory:

- mozart
- pobox
- stdlist
- stdlist/logs (contains the log files for the USS processes)

Do not touch or delete any of these files or directories, which are created in the work directory by the EQQPCS05 job, unless you are directed to do so, for example in error situations.

Note that the EQQPCS05 job does not define the physical HFS (or zFS) data set. EQQPCS05 initializes an existing HFS data set with the necessary files and directories for the end-to-end server started task.

The physical HFS data set can be created with a job that contains an IEFBR14 step, as shown in Example 4-2.

Example 4-2 HFS data set creation

//USERHFS EXEC PGM=IEFBR14
//D1      DD DISP=(,CATLG),DSNTYPE=HFS,
//           SPACE=(CYL,(prispace,secspace,1)),
//           DSN=OMVS.TWS820.TWSCE2E.HFS

Allocate the HFS work data set with enough space for your end-to-end server started task. In most installations, 2 GB disk space is enough.
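The data set must also be mounted (Read/Write) at the work directory mount point. A minimal sketch of a BPXPRMxx MOUNT statement follows; the data set name and mount point repeat the examples above and are assumptions for your environment.

MOUNT FILESYSTEM('OMVS.TWS820.TWSCE2E.HFS')
      MOUNTPOINT('/var/inst/TWS')
      TYPE(HFS)
      MODE(RDWR)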

Tip: If you execute this job in a sysplex that cannot share the HFS (prior to OS/390 V2R9) and you get messages such as “cannot create directory,” take a closer look at which system the job actually ran on. Without system affinity, any member that has an initiator started in the right class can execute the job, so you must add a /*JOBPARM SYSAFF statement to make sure that the job runs on the system where the work HFS is mounted.


4.2.5 Create started task procedures for Tivoli Workload Scheduler for z/OS

Perform this task for a Tivoli Workload Scheduler for z/OS tracker (agent), controller (engine), and server started task. You must define a started task procedure or batch job for each Tivoli Workload Scheduler for z/OS address space.

The EQQJOBS dialog generates several members in the output sample library that you specified when running the EQQJOBS installation aid. These members contain started task JCL that is tailored with the values you entered in the EQQJOBS dialog. Tailor these members further, according to the data sets you require. See Table 4-1 on page 166.

Because the end-to-end server started task uses TCP/IP communication, you should do the following.

First, modify the JCL of EQQSER in the following way (a sketch follows this list):

- Make sure that the end-to-end server started task has access to the C runtime libraries, either through STEPLIB (include CEE.SCEERUN in the STEPLIB concatenation) or through the LINKLIST (CEE.SCEERUN is in the LINKLIST concatenation).

- If you have multiple TCP/IP stacks, or if the name of the procedure that starts the TCPIP address space is not the default (TCPIP), change the end-to-end server started task procedure to include a SYSTCPD DD card that points to a data set containing the TCPIPJOBNAME parameter. The standard method to determine the connecting TCP/IP image is:

  – Connect to the TCP/IP specified by TCPIPJOBNAME in the active TCPIP.DATA.

  – Locate TCPIP.DATA using the SYSTCPD DD card.
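The following is a minimal sketch of how these two additions could look in the EQQSER started task procedure. The program name, the data set names other than CEE.SCEERUN, and the TCPIP.DATA member are assumptions for illustration; use the EQQSER member generated by EQQJOBS as your starting point.

//TWSCE2E  EXEC PGM=EQQSERVR,REGION=0M,TIME=1440
//* Load modules followed by the C runtime libraries
//STEPLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQLMD0
//         DD DISP=SHR,DSN=CEE.SCEERUN
//* Points to the TCPIP.DATA that contains TCPIPJOBNAME
//SYSTCPD  DD DISP=SHR,DSN=SYS1.TCPPARMS(TCPDATA)
//EQQMLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQMSG0
//EQQMLOG  DD SYSOUT=*
//EQQPARM  DD DISP=SHR,DSN=TWS.V8R2M0.PARMLIB(TWSCE2E)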

You can also use the end-to-end server TOPOLOGY TCPIPJOBNAME() parameter to specify the TCP/IP started task name that is used by the end-to-end server. This parameter can be used if you have multiple TCP/IP stacks or if the TCP/IP started task name is different from TCPIP.

You must have a server started task to handle end-to-end scheduling. You can use the same server also to communicate with the Job Scheduling Console. In fact, the server can also handle APPC communication if configured to do so.

In Tivoli Workload Scheduler for z/OS 8.2, the type of communication that should be handled by the server started task is defined in the new SERVOPTS PROTOCOL() parameter.


In the PROTOCOL() parameter, you can specify any combination of:

- APPC: The server should handle APPC communication.
- JSC: The server should handle JSC communication.
- E2E: The server should handle end-to-end communication.

The Tivoli Workload Scheduler for z/OS controller uses the end-to-end server to communicate events to the FTAs. The end-to-end server will start multiple tasks and processes using the UNIX System Services.

4.2.6 Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling

Initialization statements for end-to-end scheduling fit into two categories:

1. Statements used to configure the Tivoli Workload Scheduler for z/OS controller (engine) and end-to-end server:

a. OPCOPTS and TPLGYPRM statements for the controller

b. SERVOPTS statement for the end-to-end server

2. Statements used to define the end-to-end topology (the network topology for the distributed Tivoli Workload Scheduler network).

The end-to-end topology statements fall into two categories:

a. Topology statements used to initialize the end-to-end server environment in USS on the mainframe:

• The TOPOLOGY statement

Recommendations: The Tivoli Workload Scheduler for z/OS controller and end-to-end server use TCP/IP services. Therefore, it is necessary to define a USS segment for the controller and end-to-end server started task user IDs. No special authorization is necessary; the user IDs only need to be defined to USS (any UID will do).

Even though it is possible to have one server started task handle end-to-end scheduling, JSC communication, and even APPC communication, we recommend having a server started task dedicated to end-to-end scheduling (SERVOPTS PROTOCOL(E2E)). This has the advantage that you do not have to stop the end-to-end server processes if, for example, the JSC server must be restarted.

The server started task is important for handling JSC and end-to-end communication. We recommend making the end-to-end and JSC server started tasks non-swappable and giving them at least the same dispatching priority as the Tivoli Workload Scheduler for z/OS controller (engine).


b. Statements used to describe the distributed Tivoli Workload Scheduler network and the responsibilities for the different Tivoli Workload Scheduler agents in this network:

• The DOMREC, CPUREC, and USRREC statements

These statements are used by the end-to-end server and the plan extend, plan replan, and Symphony renew batch jobs. The batch jobs use the information when the Symphony file is created.

See “Initialization statements used to describe the topology” on page 184.

We go through each initialization statement in detail and give you an example of how a distributed Tivoli Workload Scheduler network can be reflected in Tivoli Workload Scheduler for z/OS using the topology statements.

Table 4-3   Initialization members related to end-to-end scheduling

Statement/keyword   Description
TPLGYSRV            Activates end-to-end scheduling in the Tivoli Workload Scheduler for z/OS controller.
TPLGYPRM            Activates end-to-end scheduling in the Tivoli Workload Scheduler for z/OS server and in the batch jobs (plan jobs).
TOPOLOGY            Specifies all the statements for end-to-end scheduling.
DOMREC              Defines domains in a distributed Tivoli Workload Scheduler network.
CPUREC              Defines agents in a distributed Tivoli Workload Scheduler network.
USRREC              Specifies user IDs and passwords for Windows NT users.

You can find more information in Tivoli Workload Scheduler for z/OS Customization and Tuning, SH19-4544.

Figure 4-6 on page 176 illustrates the relationship between the initialization statements and members related to end-to-end scheduling.


Figure 4-6   Relationship between end-to-end initialization statements and members

The figure shows the following example definitions:

OPC controller (TWSC), EQQPARM:

    OPCOPTS TPLGYSRV(TWSCE2E)
            SERVERS(TWSCJSC,TWSCE2E)
            ...

End-to-end server (TWSCE2E), EQQPARM:

    SERVOPTS SUBSYS(TWSC)
             PROTOCOL(E2E)
             TPLGYPRM(TPLGPARM)
             ...

Daily planning batch jobs (CPE, LTPE, and so forth):

    BATCHOPT ...
             TPLGYPRM(TPLGPARM)
             ...

Topology parameters, EQQPARM(TPLGPARM):

    TOPOLOGY BINDIR(/tws)
             WRKDIR(/tws/wrkdir)
             HOSTNAME(TWSC.IBM.COM)
             PORTNUMBER(31182)
             TPLGYMEM(TPLGINFO)
             USRMEM(USERINFO)
             TRCDAYS(30)
             LOGLINES(100)

Topology records, EQQPARM(TPLGINFO):

    DOMREC ...
    DOMREC ...
    CPUREC ...
    CPUREC ...
    ...

User records, EQQPARM(USRINFO):

    USRREC ...
    USRREC ...
    ...

JSC server (TWSCJSC), EQQPARM:

    SERVOPTS SUBSYS(TWSCC)
             PROTOCOL(JSC)
             CODEPAGE(500)
             JSCHOSTNAME(TWSCJSC)
             PORTNUMBER(42581)
             USERMAP(USERMAP)
             ...

User map, EQQPARM(USERMAP):

    USER 'ROOT@M-REGION' RACFUSER(TMF) RACFGROUP(TIVOLI)
    ...

Two remarks from the figure: If you plan to use the Job Scheduling Console to work with OPC, it is a good idea to run two separate servers: one for JSC connections (JSCSERV) and another for the connection with the TWS network (E2ESERV). Also note that it is possible to run many servers, but only one server can be the end-to-end server (also called the topology server); specify this server using the TPLGYSRV controller option. The SERVERS option specifies the servers that will be started when the controller starts.

In the following sections, we cover the different initialization statements and members and describe their meaning and usage one by one. Refer to Figure 4-6 when reading these sections.

OPCOPTS TPLGYSRV(server_name)

Specify this keyword if you want to activate the end-to-end feature in the Tivoli Workload Scheduler for z/OS (OPC) controller (engine).

Activates the end-to-end feature in the controller. If you specify this keyword, the IBM Tivoli Workload Scheduler Enabler task is started. The specified server_name is that of the end-to-end server that handles the events to and from the FTAs. Only one server can handle events to and from the FTAs.

This keyword is defined in OPCOPTS.


SERVOPTS TPLGYPRM(member name/TPLGPARM)

The SERVOPTS statement is the first statement read by the end-to-end server started task. In the SERVOPTS, you specify different initialization options for the server started task, such as:

- The name of the Tivoli Workload Scheduler for z/OS controller that the server should communicate with (serve). The name is specified with the SUBSYS() keyword.

- The type of protocol. The PROTOCOL() keyword is used to specify the type of communication used by the server.

  In Tivoli Workload Scheduler for z/OS 8.2, you can specify any combination of the following values, separated by commas: E2E, JSC, APPC.

- The TPLGYPRM() parameter is used to define the name of the PARMLIB member that contains the TOPOLOGY definitions for the distributed Tivoli Workload Scheduler network.

The TPLGYPRM() parameter must be specified if PROTOCOL(E2E) is specified.

See Figure 4-6 on page 176 for an example of the required SERVOPTS parameters for an end-to-end server (TWSCE2E in Figure 4-6 on page 176).

TPLGYPRM(member name/TPLGPARM) in BATCHOPT

It is important to remember to add the TPLGYPRM() parameter to the BATCHOPT initialization statement that is used by the Tivoli Workload Scheduler for z/OS planning jobs (trial plan extend, plan extend, plan replan) and Symphony renew.

If the TPLGYPRM() parameter is not specified in the BATCHOPT initialization statement that is used by the plan jobs, no Symphony file will be created and no jobs will run in the distributed Tivoli Workload Scheduler network.

See Figure 4-6 on page 176 for an example of how to specify the TPLGYPRM() parameter in the BATCHOPT initialization statement.

Tip: If you want to let the Tivoli Workload Scheduler for z/OS controller start and stop the end-to-end server, use the SERVERS keyword in the OPCOPTS PARMLIB member (see Figure 4-6 on page 176).

Note: With Tivoli Workload Scheduler for z/OS 8.2, the TCPIP value has been replaced by the combination of the E2E and JSC values, but the TCPIP value is still allowed for backward compatibility.


TOPOLOGY statement

This statement includes all of the parameters that are related to the end-to-end feature. TOPOLOGY is defined in the member of the EQQPARM library that is specified by the TPLGYPRM parameter in the BATCHOPT and SERVOPTS statements. Figure 4-7 on page 179 shows the syntax of the topology member.

Note: The topology definitions in the member specified by TPLGYPRM() in the BATCHOPT initialization statement are read and verified by the trial plan extend job in Tivoli Workload Scheduler for z/OS. This means that the trial plan extend job can be used to verify the TOPOLOGY definitions, such as DOMREC, CPUREC, and USRREC, for syntax errors or logical errors before the plan extend or plan replan job is executed.

Also note that the trial plan extend job does not create a new Symphony file because it does not update the current plan in Tivoli Workload Scheduler for z/OS.


Figure 4-7 The statements that can be specified in the topology member

Description of the topology statements

The topology parameters are described in the following sections.

BINDIR(directory name)

Specifies the name of the base file system (HFS or zFS) directory where binaries, catalogs, and the other files are installed and shared among subsystems.

The specified directory must be the same as the directory where the binaries are, without the final bin. For example, if the binaries are installed in /usr/lpp/TWS/V8R2M0/bin and the catalogs are in


/usr/lpp/TWS/V8R2M0/catalog/C, the directory must be specified in the BINDIR keyword as follows: /usr/lpp/TWS/V8R2M0.

CODEPAGE(host system codepage/IBM-037)

Specifies the name of the host code page; it applies to the end-to-end feature. The value is used by the input translator to convert data received from first-level Tivoli Workload Scheduler domain managers from UTF-8 format to EBCDIC format. You can provide the value IBM-xxx, where xxx is the EBCDIC code page. The default value, IBM-037, defines the EBCDIC code page for US English, Portuguese, and Canadian French.

For a complete list of available code pages, refer to Tivoli Workload Scheduler for z/OS Customization and Tuning, SH19-4544.

ENABLELISTSECCHK(YES/NO)

This security option controls the ability to list objects in the plan on an FTA using conman and the Job Scheduling Console. Put simply, this option determines whether conman and the Tivoli Workload Scheduler connector programs will check the Tivoli Workload Scheduler Security file before allowing the user to list objects in the plan.

If set to YES, objects in the plan are shown to the user only if the user has been granted the list permission in the Security file. If set to NO, all users will be able to list objects in the plan on FTAs, regardless of whether list access is granted in the Security file. The default value is NO. Change the value to YES if you want to check for the list permission in the security file.

GRANTLOGONASBATCH(YES/NO)

This is only for jobs running on Windows platforms. If set to YES, the logon users for Windows jobs are automatically granted the right to log on as a batch job. If set to NO or omitted, the right must be granted manually to each user or group. The right cannot be granted automatically for users running jobs on a backup domain controller, so you must grant those rights manually.

HOSTNAME(host name/IP address/local host name)

Specifies the host name or the IP address used by the server in the end-to-end environment. The default is the host name returned by the operating system.

If you change the value, you also must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

As described in Section 3.4.6, “TCP/IP considerations for end-to-end server in sysplex” on page 129, you can define a virtual IP address for each server of the active controller and the standby controllers. If you use a dynamic virtual IP address in a sysplex environment, when the active controller fails and the


standby controller takes over the communication, the FTAs automatically switch the communication to the server of the standby controller.

To change the HOSTNAME of a server, perform the following actions:

1. Set the nm ipvalidate keyword to off in the localopts file on the first-level domain managers.

2. Change the HOSTNAME value of the server using the TOPOLOGY statement.

3. Restart the server with the new HOSTNAME value.

4. Renew the Symphony file.

5. If the renewal ends successfully, you can set the ipvalidate to full on the first-level domain managers.

See 3.4.6, “TCP/IP considerations for end-to-end server in sysplex” on page 129 for a description of how to define a DVIPA IP address.

LOGLINES(number of lines/100)

Specifies the maximum number of lines that the job log retriever returns for a single job log. The default value is 100. In all cases, the job log retriever does not return more than half of the number of records that exist in the input queue.

If the job log retriever does not return all of the job log lines because there are more lines than the LOGLINES() number of lines, a notice similar to this appears in the retrieved job log output:

*** nnn lines have been discarded. Final part of Joblog ... ******

The line specifies the number (nnn) of job log lines not displayed, between the first lines and the last lines of the job log.

NOPTIMEDEPENDENCY(YES/NO)

With this option, you can change the behavior of noped operations that are defined on fault-tolerant workstations and have the centralized script option set to N. By default, Tivoli Workload Scheduler for z/OS completes these noped operations without waiting for the time dependency to be resolved. With this option set to YES, the operation is completed in the current plan only after the time dependency has been resolved. The default value is NO.

PLANAUDITLEVEL(0/1)

Enables or disables plan auditing for FTAs. Each Tivoli Workload Scheduler workstation maintains its own log. Valid values are 0 to disable plan auditing and

Note: This statement is introduced by APAR PQ84233.


1 to activate plan auditing. Auditing information is logged to a flat file in the TWShome/audit/plan directory. Only actions are logged in the auditing file, not the success or failure of the actions. If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

PORTNUMBER(port/31111)

Defines the TCP/IP port number that is used by the server to communicate with the FTAs. This value has to be different from that specified in the SERVOPTS member. The default value is 31111, and accepted values are from 0 to 65535.

If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

SSLLEVEL(ON/OFF/ENABLED/FORCE)

Defines the type of SSL authentication for the end-to-end server (OPCMASTER workstation). It must have one of the following values:

ON The server uses SSL authentication when it connects with the first-level domain managers.

OFF (default value) The server does not support SSL authentication for its connections.

ENABLED The server uses SSL authentication only if another workstation requires it.

FORCE The server uses SSL authentication for all of its connections. It refuses any incoming connection if it is not SSL.

If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

SSLPORT(SSL port number/31113)

Defines the port used to listen for incoming SSL connections on the server. It substitutes the value of nm SSL port in the localopts file, activating SSL support on the server. If SSLLEVEL is specified and SSLPORT is missing, 31113 is used as the default value. If SSLLEVEL is not specified, the default value of this parameter is 0 on the server, which indicates that no SSL authentication is required.

If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

Important: The port number must be unique within a Tivoli Workload Scheduler network.


TCPIPJOBNAME(TCP/IP started-task name/TCPIP)

Specifies the TCP/IP started-task name used by the server. Set this keyword when you have multiple TCP/IP stacks or a TCP/IP started task with a name different from TCPIP. You can specify a name from one to eight alphanumeric or national characters, where the first character is alphabetic or national.

TPLGYMEM(member name/TPLGINFO)

Specifies the PARMLIB member where the domain (DOMREC) and workstation (CPUREC) definitions specific to end-to-end scheduling reside. The default value is TPLGINFO.

If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

TRCDAYS(days/14)

Specifies the number of days that the trace files and the files in the stdlist directory are kept before being deleted. Every day the USS code creates a new stdlist directory to contain the logs for the day. All log directories that are older than the number of days specified in TRCDAYS() are deleted automatically. The default value is 14. Specify 0 if you do not want the trace files to be deleted.

USRMEM(member name/USRINFO)

Specifies the PARMLIB member where the user definitions are. This keyword is optional unless you are going to schedule jobs on Windows operating systems, in which case it is required.

The default value is USRINFO.

If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

WRKDIR(directory name)

Specifies the location of the working files for an end-to-end server started task. Each Tivoli Workload Scheduler for z/OS end-to-end server must have its own WRKDIR.

Recommendation: Monitor the size of your working directory (that is, the size of the HFS cluster with work files) to prevent the HFS cluster from becoming full. The trace files and files in the stdlist directory contain internal logging information and Tivoli Workload Scheduler messages that may be useful for troubleshooting. You should consider deleting them on a regular interval using the TRCDAYS() parameter.


ENABLESWITCHFT(Y/N)

New parameter (not shown in Figure 4-7 on page 179) that was introduced by FixPack 04 for Tivoli Workload Scheduler and by APAR PQ81120 for Tivoli Workload Scheduler for z/OS.

It is used to activate the enhanced fault-tolerant mechanism on domain managers. The default is N, meaning that the enhanced fault-tolerant mechanism is not activated. For more information, check the documentation in the FaultTolerantSwitch.README.pdf file delivered with FixPack 04 for Tivoli Workload Scheduler.
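To pull the keywords above together, the following sketch shows what a complete TOPOLOGY member might look like. All values are illustrative assumptions that mirror the example environment used in this chapter; they are not recommended or required settings.

TOPOLOGY BINDIR(/usr/lpp/TWS/V8R2M0)   /* installation (binaries) directory    */
         WRKDIR(/var/inst/TWS)         /* work directory (Symphony, stdlist)   */
         HOSTNAME(TWSC.IBM.COM)        /* host name or DVIPA of the server     */
         PORTNUMBER(31182)             /* port for end-to-end communication    */
         TPLGYMEM(TPLGINFO)            /* member with DOMREC/CPUREC records    */
         USRMEM(USERINFO)              /* member with USRREC records           */
         CODEPAGE(IBM-037)             /* host EBCDIC code page                */
         TCPIPJOBNAME(TCPIP)           /* TCP/IP started task name             */
         ENABLELISTSECCHK(NO)          /* no list check in FTA Security file   */
         GRANTLOGONASBATCH(YES)        /* grant Windows logon-as-batch right   */
         PLANAUDITLEVEL(0)             /* plan auditing disabled on FTAs       */
         TRCDAYS(30)                   /* keep stdlist and trace files 30 days */
         LOGLINES(100)                 /* max job log lines returned           */
         SSLLEVEL(OFF)                 /* no SSL for the server connections    */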

4.2.7 Initialization statements used to describe the topology

With the last three parameters listed in Table 4-3 on page 175, DOMREC, CPUREC, and USRREC, you define the topology of the distributed Tivoli Workload Scheduler network in Tivoli Workload Scheduler for z/OS. The defined topology is used by the plan extend, replan, and Symphony renew batch jobs when creating the Symphony file for the distributed Tivoli Workload Scheduler network.

Figure 4-8 on page 185 shows how the distributed Tivoli Workload Scheduler topology is described using CPUREC and DOMREC initialization statements for the Tivoli Workload Scheduler for z/OS server and plan programs. The Tivoli Workload Scheduler for z/OS fault-tolerant workstations are mapped to physical Tivoli Workload Scheduler agents or workstations using the CPUREC statement. The DOMREC statement is used to describe the domain topology in the distributed Tivoli Workload Scheduler network.

Note that the MASTERDM domain is predefined in Tivoli Workload Scheduler for z/OS. It is not necessary to specify a DOMREC parameter for the MASTERDM domain.

Also note that the USRREC parameters are not depicted in Figure 4-8 on page 185.


Figure 4-8 The topology definitions for server and plan programs

In the following sections, we walk through the DOMREC, CPUREC, and USRREC statements.

DOMREC statement

This statement begins a domain definition. You must specify one DOMREC for each domain in the Tivoli Workload Scheduler network, with the exception of the master domain.

The domain name used for the master domain is MASTERDM. The master domain consists of the controller, which acts as the master domain manager. The CPU name used for the master domain manager is OPCMASTER.

You must specify at least one domain, child of MASTERDM, where the domain manager is a fault-tolerant agent. If you do not define this domain, Tivoli Workload Scheduler for z/OS tries to find a domain definition that can function as a child of the master domain.


Figure 4-9   Example of two DOMREC statements for a network with two domains

EQQPARM(TPLGINFO):

DOMREC DOMAIN(DOMAINA)
       DOMMNGR(A000)
       DOMPARENT(MASTERDM)
DOMREC DOMAIN(DOMAINB)
       DOMMNGR(B000)
       DOMPARENT(MASTERDM)
...

In this example, the master domain MASTERDM (managed by OPCMASTER) has two child domains: DomainA, with domain manager A000 and agents A001 and A002, and DomainB, with domain manager B000 and agents B001 and B002. OPC does not have a built-in place to store information about TWS domains; domains and their relationships are defined in DOMRECs, which are used to add information about the TWS domains to the Symphony file. There is no DOMREC for the master domain, MASTERDM.

DOMREC is defined in the member of the EQQPARM library that is specified by the TPLGYMEM keyword in the TOPOLOGY statement (see Figure 4-6 on page 176 and Figure 4-9).

Figure 4-10 illustrates the DOMREC syntax.

Figure 4-10 Syntax for the DOMREC statement

DOMAIN(domain name)

The name of the domain, consisting of up to 16 characters starting with a letter. It can contain dashes and underscores.


DOMMNGR(domain manager name)

The Tivoli Workload Scheduler workstation name of the domain manager. It must be a fault-tolerant agent running in full status mode.

DOMPARENT(parent domain)

The name of the parent domain.

CPUREC statement

This statement begins a Tivoli Workload Scheduler workstation (CPU) definition. You must specify one CPUREC for each workstation in the Tivoli Workload Scheduler network, with the exception of the controller that acts as master domain manager. You must provide a definition for each Tivoli Workload Scheduler for z/OS workstation that is defined in the database as a Tivoli Workload Scheduler fault-tolerant workstation.

CPUREC is defined in the member of the EQQPARM library that is specified by the TPLGYMEM keyword in the TOPOLOGY statement (see Figure 4-6 on page 176 and Figure 4-11 on page 188).


Figure 4-11   Example of two CPUREC statements for two workstations

EQQPARM(TPLGINFO):

CPUREC CPUNAME(A000)
       CPUOS(AIX)
       CPUNODE(stockholm)
       CPUTCPIP(31281)
       CPUDOMAIN(DomainA)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(ON)
       CPURESDEP(ON)
       CPULIMIT(20)
       CPUTZ(ECT)
       CPUUSER(root)
CPUREC CPUNAME(A001)
       CPUOS(WNT)
       CPUNODE(copenhagen)
       CPUDOMAIN(DOMAINA)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPULIMIT(10)
       CPUTZ(ECT)
       CPUUSER(Administrator)
       FIREWALL(Y)
       SSLLEVEL(FORCE)
       SSLPORT(31281)
...

CPURECs are used to add information about domain managers and FTAs to the Symphony file. OPC does not have fields to contain the extra information in a TWS workstation definition, so OPC workstations marked fault tolerant must also have a CPUREC; the workstation name in OPC (A000, B000, A001, A002, B001, and B002 in this example) acts as a pointer to the CPUREC. There is no CPUREC for the master domain manager, OPCMASTER. Valid CPUOS values are AIX, HPUX, POSIX, UNIX, WNT, and OTHER.

Figure 4-12 on page 189 illustrates the CPUREC syntax.


Figure 4-12 Syntax for the CPUREC statement

CPUNAME(cpu name)

The name of the Tivoli Workload Scheduler workstation, consisting of up to four alphanumeric characters, starting with a letter.


CPUOS(operating system)

The host CPU operating system related to the Tivoli Workload Scheduler workstation. The valid entries are AIX, HPUX, POSIX, UNIX, WNT, and OTHER.

CPUNODE(node name)

The node name or the IP address of the CPU. Fully qualified domain names up to 52 characters are accepted.

CPUTCPIP(port number/31111)

The TCP port number of netman on this CPU. It can be up to five digits; if omitted, the default value 31111 is used.

CPUDOMAIN(domain name)

The name of the Tivoli Workload Scheduler domain of the CPU.

CPUHOST(cpu name)

The name of the host CPU of the agent. It is required for standard and extended agents. The host is the Tivoli Workload Scheduler CPU with which the standard or extended agent communicates and where its access method resides.

CPUACCESS(access method)

The name of the access method. It is valid for extended agents and must be the name of a file that resides in the Tivoli Workload Scheduler <home>/methods directory on the host CPU of the agent.

CPUTYPE(SAGENT/XAGENT/FTA)

The CPU type, specified as one of the following:

FTA (default) Fault-tolerant agent, including domain managers and backup domain managers.

SAGENT Standard agent

XAGENT Extended agent

Note: The host cannot be another standard or extended agent.

Note: If the extended-agent workstation is manually set to Link, Unlink, Active, or Offline, the command is sent to its host CPU.

CPUAUTOLNK(OFF/ON)

Autolink is most effective during the initial start-up sequence of each plan, when a new Symphony file is created and all workstations are stopped and restarted.


For a fault-tolerant agent or standard agent, specify ON so that, when the domain manager starts, it sends the new production control file (Symphony) to start the agent and open communication with it.

For the domain manager, specify ON so that, when the agents start, they open communication with the domain manager.

Specify OFF to initialize an agent when you submit a link command manually from the Tivoli Workload Scheduler for z/OS Modify Current Plan ISPF dialogs or from the Job Scheduling Console.

CPUFULLSTAT(ON/OFF) This applies only to fault-tolerant agents. If you specify OFF for a domain manager, the value is forced to ON.

Specify ON for the link from the domain manager to operate in Full Status mode. In this mode, the agent is kept updated about the status of jobs and job streams that are running on other workstations in the network.

Specify OFF for the agent to receive status information only about the jobs and schedules on other workstations that affect its own jobs and schedules. This can improve the performance by reducing network traffic.

To keep the production control file for an agent at the same level of detail as its domain manager, set CPUFULLSTAT and CPURESDEP (see below) to ON. Always set these modes to ON for backup domain managers.

You should also be aware of the new TOPOLOGY ENABLESWITCHFT() parameter described in “ENABLESWITCHFT(Y/N)” on page 184.

CPURESDEP(ON/OFF) This applies only to fault-tolerant agents. If you specify OFF for a domain manager, the value is forced to ON.

Specify ON to have the agent’s production control process operate in Resolve All Dependencies mode. In this mode, the agent tracks dependencies for all of its jobs and schedules, including those running on other CPUs.

Note: If the X-agent workstation is manually set to Link, Unlink, Active, or Offline, the command is sent to its host CPU.

Note: CPUFULLSTAT must also be ON so that the agent is informed about the activity on other workstations.


Specify OFF if you want the agent to track dependencies only for its own jobs and schedules. This reduces CPU usage by limiting processing overhead.

To keep the production control file for an agent at the same level of detail as its domain manager, set CPUFULLSTAT and CPURESDEP to ON. Always set these modes to ON for backup domain managers.

You should also be aware of the new TOPOLOGY ENABLESWITCHFT() parameter that is described in “ENABLESWITCHFT(Y/N)” on page 184.
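
For example, a fault-tolerant agent that is a candidate backup domain manager would carry both options set to ON; this fragment matches the F101 workstation used later in this chapter, with all other keywords omitted:

CPUREC CPUNAME(F101)        /* Candidate backup domain manager      */
       ...
       CPUFULLSTAT(ON)      /* Keep full status of the whole domain */
       CPURESDEP(ON)        /* Resolve dependencies for all jobs    */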

CPUSERVER(server ID) This applies only to fault-tolerant and standard agents. Omit this option for domain managers.

This keyword can be a letter or a number (A-Z or 0-9) and identifies a server (mailman) process on the domain manager that sends messages to the agent. The IDs are unique to each domain manager, so you can use the same IDs for agents in different domains without conflict. If more than 36 server IDs are required in a domain, consider dividing it into two or more domains.

If a server ID is not specified, messages to a fault-tolerant or standard agent are handled by a single mailman process on the domain manager. Entering a server ID causes the domain manager to create an additional mailman process. The same server ID can be used for multiple agents. The use of servers reduces the time that is required to initialize agents and generally improves the timeliness of messages.
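
As a sketch, the two fragments below (the workstation names are illustrative) would make their domain manager start one extra mailman process, identified by server ID A, that handles outbound messages for both agents:

CPUREC CPUNAME(A001)
       ...
       CPUSERVER(A)         /* Served by the extra mailman 'A'      */
CPUREC CPUNAME(A002)
       ...
       CPUSERVER(A)         /* Same server ID, same mailman process */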

Notes on multiple mailman processes:

• When setting up multiple mailman processes, do not forget that each mailman server process uses extra CPU resources on the workstation on which it is created, so be careful not to create excessive mailman processes on low-end domain managers. In most cases, using extra domain managers is a better choice than configuring extra mailman processes.

• Cases in which use of extra mailman processes might be beneficial include:

– Important FTAs that run mission-critical jobs.

– Slow-initializing FTAs that are at the other end of a slow link. (If you have more than a couple of workstations over a slow link connection to the OPCMASTER, a better idea is to place a remote domain manager to serve these workstations.)

• If you have unstable workstations in the network, do not put them under the same mailman server ID as your critical servers.


See Figure 4-13 for an example of CPUSERVER() use. The figure shows that one mailman process on domain manager FDMA has to handle all outbound communication with the five FTAs (FTA1 to FTA5) if these workstations (CPUs) are defined without the CPUSERVER() parameter. If FTA1 and FTA2 are defined with CPUSERVER(A), and FTA3 and FTA4 are defined with CPUSERVER(1), the domain manager FDMA will start two new mailman processes for these two server IDs (A and 1).

Figure 4-13 Usage of CPUSERVER() IDs to start extra mailman processes

CPULIMIT(value/1024) Specifies the number of jobs that can run at the same time in a CPU. The default value is 1024. The accepted values are integers from 0 to 1024. If you specify 0, no jobs are launched on the workstation.

CPUTZ(timezone/UTC) Specifies the local time zone of the FTA. It must match the time zone of the operating system in which the FTA runs. For a complete list of valid time zones, refer to the appendix of the IBM Tivoli Workload Scheduler Reference Guide, SC32-1274.



If the time zone does not match that of the agent, the message AWSBHT128I is displayed in the log file of the FTA. The default is UTC (universal coordinated time).

To avoid inconsistency between the local date and time of the jobs and of the Symphony creation, use the CPUTZ keyword to set the local time zone of the fault-tolerant workstation. If the Symphony creation date is later than the current local date of the FTW, Symphony is not processed.

In the end-to-end environment, time zones are disabled by default when installing or upgrading Tivoli Workload Scheduler for z/OS. If the CPUTZ keyword is not specified, time zones are disabled. For additional information about how to set the time zone in an end-to-end network, see the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273.

CPUUSER(default user/tws) Specifies the default user for the workstation. The maximum length is 47 characters. The default value is tws.

The value of this option is used only if you have not defined the user in the JOBUSR option of the SCRPTLIB JOBREC statement or supplied it with the Tivoli Workload Scheduler for z/OS job submit exit EQQUX001 for centralized scripts.

SSLLEVEL(ON/OFF/ENABLED/FORCE) Must have one of the following values:

ON The workstation uses SSL authentication when it connects with its domain manager. The domain manager uses SSL authentication when it connects with a domain manager of a parent domain. However, it refuses any incoming connection from its domain manager if the connection does not use SSL authentication.

OFF (default) The workstation does not support SSL authentication for its connections.

ENABLED The workstation uses SSL authentication only if another workstation requires it.

FORCE The workstation uses SSL authentication for all of its connections. It refuses any incoming connection if it is not SSL.

If this attribute is set to OFF or omitted, the workstation is not intended to be configured for SSL. In this case, any value for SSLPORT (see below) will be ignored. You should also set the nm ssl port local option to 0 (in the localopts file) to be sure that this port is not opened by netman.


SSLPORT(SSL port number/31113) Defines the port used to listen for incoming SSL connections. This value must match the one defined in the nm SSL port local option (in the localopts file) of the workstation (the server with Tivoli Workload Scheduler installed). It must be different from the nm port local option (in the localopts file) that defines the port used for normal communications. If SSLLEVEL is specified but SSLPORT is missing, 31113 is used as the default value. If SSLLEVEL is not specified either, the default value of this parameter is 0 on FTWs, which indicates that no SSL authentication is required.
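
As a sketch, the SSL-related keywords for a hypothetical workstation F103 that must always authenticate with SSL could look like the fragment below; the workstation name and port are assumptions, and the port value must be mirrored by the nm ssl port option in that agent's localopts file:

CPUREC CPUNAME(F103)
       ...
       SSLLEVEL(FORCE)      /* Refuse any connection that is not SSL  */
       SSLPORT(31113)       /* Must match 'nm ssl port' in localopts, */
                            /* and differ from the normal 'nm port'   */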

FIREWALL(YES/NO) Specifies whether the communication between a workstation and its domain manager must cross a firewall. If you set the FIREWALL keyword for a workstation to YES, it means that a firewall exists between that particular workstation and its domain manager, and that the link between the domain manager and the workstation (which can be another domain manager itself) is the only link that is allowed between the respective domains. Also, for all workstations having this option set to YES, the commands to start (start workstation) or stop (stop workstation) the workstation or to get the standard list (showjobs) are transmitted through the domain hierarchy instead of opening a direct connection between the master (or domain manager) and the workstation. The default value for FIREWALL is NO, meaning that there is no firewall boundary between the workstation and its domain manager.

To specify that an extended agent is behind a firewall, set the FIREWALL keyword for the host workstation. The host workstation is the Tivoli Workload Scheduler workstation with which the extended agent communicates and where its access method resides.
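
For instance, if a hypothetical extended agent is hosted by a workstation F900 that sits behind a firewall relative to its domain manager, the keyword goes on the host workstation definition (fragment only; the name F900 is an assumption):

CPUREC CPUNAME(F900)        /* Host workstation of the extended agent  */
       ...
       FIREWALL(YES)        /* Link to the domain manager crosses a    */
                            /* firewall; commands follow the domain    */
                            /* hierarchy instead of direct connections */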

USRREC statement

This statement defines the passwords for the users who need to schedule jobs to run on Windows workstations.

USRREC is defined in the member of the EQQPARM library as specified by the USERMEM keyword in the TOPOLOGY statement. (See Figure 4-6 on page 176 and Figure 4-15 on page 197.)

Figure 4-14 illustrates the USRREC syntax.

Figure 4-14 Syntax for the USRREC statement


USRCPU(cpuname) The Tivoli Workload Scheduler workstation on which the user can launch jobs. It consists of four alphanumerical characters, starting with a letter. It is valid only on Windows workstations.

USRNAM(logon ID) The user name of a Windows workstation. It can include a domain name and can consist of up to 47 characters.

Windows user names are case-sensitive. The user must be able to log on to the computer on which Tivoli Workload Scheduler launches jobs, and must also be authorized to log on as batch.

If the user name is not unique in Windows, it is considered to be either a local user, a domain user, or a trusted domain user, in that order.

USRPSW(password) The user password for the user of a Windows workstation (Figure 4-15 on page 197). It can consist of up to 31 characters and must be enclosed in single quotation marks. Do not specify this keyword if the user does not need a password. You can change the password every time you create a Symphony file (when creating a CP extension).

Attention: The password is not encrypted. You must take the necessary action to protect the password from unauthorized access.

One way to do this is to place the USRREC definitions in a separate member in a separate library. This library should then be protected with RACF so it can be accessed only by authorized persons. The library should be added in the EQQPARM data set concatenation in the end-to-end server started task and in the plan extend, replan, and Symphony renew batch jobs.

Example JCL for plan replan, extend, and Symphony renew batch jobs:

//EQQPARM DD DISP=SHR,DSN=TWS.V8R20.PARMLIB(BATCHOPT)
//         DD DISP=SHR,DSN=TWS.V8R20.PARMUSR

In this example, the USRREC member is placed in the TWS.V8R20.PARMUSR library. This library can then be protected with RACF according to your standards. All other BATCHOPT initialization statements are placed in the usual parameter library. In the example, this library is named TWS.V8R20.PARMLIB and the member is BATCHOPT.
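
A minimal sketch of that RACF protection, assuming a discrete data set profile and placeholder IDs (OPCADM for the scheduler administrators who maintain the member, TWSC for the started tasks and plan batch jobs that only read it); your installation's naming and generic-profile conventions may call for a different setup:

ADDSD  'TWS.V8R20.PARMUSR'  UACC(NONE)
PERMIT 'TWS.V8R20.PARMUSR'  ID(OPCADM) ACCESS(UPDATE)
PERMIT 'TWS.V8R20.PARMUSR'  ID(TWSC)   ACCESS(READ)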


Figure 4-15 Example of three USRREC definitions: for a local and domain Windows user

4.2.8 Example of DOMREC and CPUREC definitions

We have explained how to use DOMREC and CPUREC statements to define the network topology for a Tivoli Workload Scheduler network in a Tivoli Workload Scheduler for z/OS end-to-end environment. We now use these statements to define a simple Tivoli Workload Scheduler network in Tivoli Workload Scheduler for z/OS.

As an example, Figure 4-16 on page 198 illustrates a simple Tivoli Workload Scheduler network. In this network there is one domain, DOMAIN1, under the master domain (MASTERDM).

[Figure 4-15 content: three USRREC definitions in the user member, EQQPARM(USERINFO): the Windows users tws and Jim Smith on workstation F202, and the domain user South\MUser1 on workstation F302. OPC does not have a built-in way to store Windows users and passwords, so the users are defined by adding USRRECs to the user member of EQQPARM; the USRRECs are used to add the Windows user definitions to the Symphony file.]


Figure 4-16 Simple end-to-end scheduling environment

Example 4-3 describes the DOMAIN1 domain with the DOMREC topology statement.

Example 4-3 Domain definition

DOMREC DOMAIN(DOMAIN1)        /* Name of the domain is DOMAIN1 */
       DOMMNGR(F100)          /* F100 workst. is domain mng.   */
       DOMPARENT(MASTERDM)    /* Domain parent is MASTERDM     */

In end-to-end, the master domain (MASTERDM) is always the Tivoli Workload Scheduler for z/OS controller. (It is predefined and cannot be changed.) Since the DOMAIN1 domain is under the MASTERDM domain, MASTERDM must be defined in the DOMPARENT parameter. The DOMMNGR keyword represents the name of the workstation that acts as the domain manager.

There are three workstations (CPUs) in the DOMAIN1 domain. To define these workstations in the Tivoli Workload Scheduler for z/OS end-to-end network, we must define three CPURECs, one for each workstation (server) in the network.

Example 4-4 Workstation (CPUREC) definitions for the three FTWs

CPUREC CPUNAME(F100)                  /* Domain manager for DM100      */
       CPUOS(AIX)                     /* AIX operating system          */
       CPUNODE(copenhagen.dk.ibm.com) /* IP address of CPU (DNS)       */
       CPUTCPIP(31281)                /* TCP port number of NETMAN     */
       CPUDOMAIN(DM100)               /* The TWS domain name for CPU   */
       CPUTYPE(FTA)                   /* This is a FTA CPU type        */
       CPUAUTOLNK(ON)                 /* Autolink is on for this CPU   */
       CPUFULLSTAT(ON)                /* Full status on for DM         */
       CPURESDEP(ON)                  /* Resolve dependencies on for DM*/
       CPULIMIT(20)                   /* Number of jobs in parallel    */
       CPUTZ(Europe/Copenhagen)       /* Time zone for this CPU        */
       CPUUSER(twstest)               /* default user for CPU          */
       SSLLEVEL(OFF)                  /* SSL is not active             */
       SSLPORT(31113)                 /* Default SSL port              */
       FIREWALL(NO)                   /* WS not behind firewall        */
CPUREC CPUNAME(F101)                  /* fault tolerant agent in DM100 */
       CPUOS(AIX)                     /* AIX operating system          */
       CPUNODE(london.uk.ibm.com)     /* IP address of CPU (DNS)       */
       CPUTCPIP(31281)                /* TCP port number of NETMAN     */
       CPUDOMAIN(DM100)               /* The TWS domain name for CPU   */
       CPUTYPE(FTA)                   /* This is a FTA CPU type        */
       CPUAUTOLNK(ON)                 /* Autolink is on for this CPU   */
       CPUFULLSTAT(ON)                /* Full status on for BDM        */
       CPURESDEP(ON)                  /* Resolve dependencies on BDM   */
       CPULIMIT(20)                   /* Number of jobs in parallel    */
       CPUSERVER(A)                   /* Start extra mailman process   */
       CPUTZ(Europe/London)           /* Time zone for this CPU        */
       CPUUSER(maestro)               /* default user for ws           */
       SSLLEVEL(OFF)                  /* SSL is not active             */
       SSLPORT(31113)                 /* Default SSL port              */
       FIREWALL(NO)                   /* WS not behind firewall        */
CPUREC CPUNAME(F102)                  /* fault tolerant agent in DM100 */
       CPUOS(WNT)                     /* Windows operating system      */
       CPUNODE(stockholm.se.ibm.com)  /* IP address for CPU (DNS)      */
       CPUTCPIP(31281)                /* TCP port number of NETMAN     */
       CPUDOMAIN(DM100)               /* The TWS domain name for CPU   */
       CPUTYPE(FTA)                   /* This is a FTA CPU type        */
       CPUAUTOLNK(ON)                 /* Autolink is on for this CPU   */
       CPUFULLSTAT(OFF)               /* Full status off for FTA       */
       CPURESDEP(OFF)                 /* Resolve dependencies off FTA  */
       CPULIMIT(10)                   /* Number of jobs in parallel    */
       CPUSERVER(A)                   /* Start extra mailman process   */
       CPUTZ(Europe/Stockholm)        /* Time zone for this CPU        */
       CPUUSER(twstest)               /* default user for ws           */
       SSLLEVEL(OFF)                  /* SSL is not active             */
       SSLPORT(31113)                 /* Default SSL port              */
       FIREWALL(NO)                   /* WS not behind firewall        */


Because F101 is going to be backup domain manager for F100, F101 is defined with CPUFULLSTAT(ON) and CPURESDEP(ON).

F102 is a fault-tolerant agent without extra responsibilities, so it is defined with CPUFULLSTAT(OFF) and CPURESDEP(OFF), because dependency resolution within the domain is the task of the domain manager. This improves performance by reducing network traffic.

Finally, since F102 runs on a Windows server, we must create at least one USRREC definition for this server. In our example, we would like to be able to run jobs on the Windows server under either the Tivoli Workload Scheduler installation user (twstest) or the database user, databusr.

Example 4-5 USRREC definition for tws F102 Windows users, twstest and databusr

USRREC USRCPU(F102)           /* Definition for F102 Windows CPU */
       USRNAM(twstest)        /* The user name (local user)      */
       USRPSW('twspw01')      /* The password for twstest        */
USRREC USRCPU(F102)           /* Definition for F102 Windows CPU */
       USRNAM(databusr)       /* The user name (local user)      */
       USRPSW('data01ad')     /* Password for databusr           */

4.2.9 The JTOPTS TWSJOBNAME() parameter

With the JTOPTS TWSJOBNAME() parameter, it is possible to specify different criteria that Tivoli Workload Scheduler for z/OS should use when creating the job name in the Symphony file in USS.

The syntax for the JTOPTS TWSJOBNAME() parameter is:

TWSJOBNAME(EXTNAME/EXTNOCC/JOBNAME/OCCNAME)

If you do not specify the TWSJOBNAME() parameter, the value OCCNAME is used by default.

When choosing OCCNAME, the job names in the Symphony file will be generated with one of the following formats:

• <X>_<Num>_<Application Name> when the job is created in the Symphony file

• <X>_<Num>_<Ext>_<Application Name> when the job is first deleted and then recreated in the current plan

In these examples, <X> can be J for normal jobs (operations), P for jobs representing pending predecessors, and R for recovery jobs.

Note: CPUOS(WNT) applies for all Windows platforms.


<Num> is the operation number.

<Ext> is a sequential decimal number that is increased every time an operation is deleted and then recreated.

<Application Name> is the name of the occurrence that the operation belongs to.

See Figure 4-17 for an example of how the job names (and job stream names) are generated by default in the Symphony file when JTOPTS TWSJOBNAME(OCCNAME) is specified or defaulted.

Note that an occurrence in Tivoli Workload Scheduler for z/OS is the same as a JSC job stream instance (that is, a job stream or an application that is on the plan in Tivoli Workload Scheduler for z/OS).

Figure 4-17 Generation of job and job stream names in the Symphony file

[Figure 4-17 content: each job stream instance (application occurrence) in the OPC current plan is assigned a unique occurrence token, for example the DAILY occurrence with input arrival time 0800 and token B8FF08015E683C44. When an occurrence is added to the Symphony file, its occurrence token is used as the job stream (schedule) name, and its operations, for example DLYJOB1 (010), DLYJOB2 (015), and DLYJOB3 (020), become the job instances J_010_DAILY, J_015_DAILY, and J_020_DAILY.]


If any of the other values (EXTNAME, EXTNOCC, or JOBNAME) is specified in the JTOPTS TWSJOBNAME() parameter, the job name in the Symphony file is created according to one of the following formats:

• <X><Num>_<JobInfo> when the job is created in the Symphony file

• <X><Num>_<Ext>_<JobInfo> when the job is first deleted and then recreated in the current plan

In these examples:

<X> can be J for normal jobs (operations), P for jobs representing pending predecessors, and R for recovery jobs. For jobs representing pending predecessors, the job name is in all cases generated by using the OCCNAME criterion. This is because, in the case of pending predecessors, the current plan does not contain the required information (excepting the name of the occurrence) to build the Symphony name according to the other criteria.

<Num> is the operation number.

<Ext> is the hexadecimal value of a sequential number that is increased every time an operation is deleted and then recreated.

<JobInfo> depends on the chosen criterion:

– For EXTNAME: <JobInfo> is filled with the first 32 characters of the extended job name associated with that job (if it exists) or with the eight-character job name (if the extended name does not exist). Note that the extended job name, in addition to being defined in the database, must also exist in the current plan.

– For EXTNOCC: <JobInfo> is filled with the first 32 characters of the extended job name associated with that job (if it exists) or with the application name (if the extended name does not exist). Note that the extended job name, in addition to being defined in the database, must also exist in the current plan.

– For JOBNAME: <JobInfo> is filled with the 8-character job name.

The criterion that is used to generate a Tivoli Workload Scheduler job name will be maintained throughout the entire life of the job.
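
As an illustration (the job name PAYJOB1 is hypothetical, not from the scenario in this book), specifying

JTOPTS TWSJOBNAME(JOBNAME)    /* use the 8-character z/OS job name */

would make operation 010 with job name PAYJOB1 appear in the Symphony file under a name such as J010_PAYJOB1, whereas the default OCCNAME criterion generates J_010_<application name>, as shown in Figure 4-17.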

Note: In order to choose the EXTNAME, EXTNOCC, or JOBNAME criterion, the EQQTWSOU data set must have a record length of 160 bytes. Before using any of the above keywords, you must migrate the EQQTWSOU data set if you have allocated the data set with a record length less than 160 bytes. Sample EQQMTWSO is available to migrate this data set from record length 120 to 160 bytes.


Limitations when using the EXTNAME and EXTNOCC criteria:

• The job name in the Symphony file can contain only alphanumeric characters, dashes, and underscores. All other characters that are accepted for the extended job name are converted into dashes. Note that a similar limitation applies with JOBNAME: When defining members of partitioned data sets (such as the script or the job libraries), national characters can be used, but they are converted into dashes in the Symphony file.

• The job name in the Symphony file must be in uppercase. All lowercase characters in the extended name are automatically converted to uppercase by Tivoli Workload Scheduler for z/OS.

4.2.10 Verify end-to-end installation in Tivoli Workload Scheduler for z/OS

When all installation tasks as described in the previous sections have been completed, and all initialization statements and data sets related to end-to-end scheduling have been defined in the Tivoli Workload Scheduler for z/OS controller, end-to-end server, and plan extend, replan, and Symphony renew batch jobs, it is time to do the first verification of the mainframe part.

Verify the Tivoli Workload Scheduler for z/OS controller

After the customization steps have been completed, simply start the Tivoli Workload Scheduler controller. Check the controller message log (EQQMLOG) for any unexpected error or warning messages. All Tivoli Workload Scheduler for z/OS messages are prefixed with EQQ. See the IBM Tivoli Workload Scheduler for z/OS Messages and Codes Version 8.2 (Maintenance Release April 2004), SC32-1267.

Note: Using the job name (or the extended name as part of the job name) in the Symphony file implies that it becomes a key for identifying the job. This also means that the extended name - job name is used as a key for addressing all events that are directed to the agents. For this reason, be aware of the following facts for the operations that are included in the Symphony file:

• Editing the extended name is inhibited for operations that are created when the TWSJOBNAME keyword was set to EXTNAME or EXTNOCC.

• Editing the job name is inhibited for operations created when the TWSJOBNAME keyword was set to EXTNAME or JOBNAME.

Note: This verification can be postponed until workstations for the fault-tolerant agents have been defined in Tivoli Workload Scheduler for z/OS and, optionally, Tivoli Workload Scheduler has been installed on the fault-tolerant agents (the Tivoli Workload Scheduler servers or agents).


Because we have activated the end-to-end feature in the controller initialization statements by specifying the OPCOPTS TPLGYSRV() parameter and we have asked the controller to start our end-to-end server by the SERVERS(TWSCE2E) parameter, we will see messages as shown in Example 4-6 in the Tivoli Workload Scheduler for z/OS controller message log (EQQMLOG).

Example 4-6 IBM Tivoli Workload Scheduler for z/OS controller messages for end-to-end

EQQZ005I OPC SUBTASK E2E ENABLER IS BEING STARTED
EQQZ085I OPC SUBTASK E2E SENDER IS BEING STARTED
EQQZ085I OPC SUBTASK E2E RECEIVER IS BEING STARTED
EQQG001I SUBTASK E2E ENABLER HAS STARTED
EQQG001I SUBTASK E2E SENDER HAS STARTED
EQQG001I SUBTASK E2E RECEIVER HAS STARTED
EQQW097I END-TO-END RECEIVER STARTED SYNCHRONIZATION WITH THE EVENT MANAGER
EQQW097I 0 EVENTS IN EQQTWSIN WILL BE REPROCESSED
EQQW098I END-TO-END RECEIVER FINISHED SYNCHRONIZATION WITH THE EVENT MANAGER
EQQ3120E END-TO-END TRANSLATOR SERVER PROCESS IS NOT AVAILABLE
EQQZ193I END-TO-END TRANSLATOR SERVER PROCESS NOW IS AVAILABLE

The messages in Example 4-6 are extracted from the Tivoli Workload Scheduler for z/OS controller message log. There will be several other messages between the messages shown in Example 4-6 if you look in your controller message log.

If the Tivoli Workload Scheduler for z/OS controller is started with empty EQQTWSIN and EQQTWSOU data sets, messages shown in Example 4-7 will be issued in the controller message log (EQQMLOG).

Example 4-7 Formatting messages when EQQTWSOU and EQQTWSIN are empty

EQQW030I A DISK DATA SET WILL BE FORMATTED, DDNAME = EQQTWSOU
EQQW030I A DISK DATA SET WILL BE FORMATTED, DDNAME = EQQTWSIN
EQQW038I A DISK DATA SET HAS BEEN FORMATTED, DDNAME = EQQTWSOU
EQQW038I A DISK DATA SET HAS BEEN FORMATTED, DDNAME = EQQTWSIN

Note: If you do not see all of these messages in your controller message log, you probably have not applied all available service updates. See 3.4.2, “Service updates (PSP bucket, APARs, and PTFs)” on page 117.


The messages in Example 4-8 and Example 4-9 show that the controller is started with the end-to-end feature active and that it is ready to run jobs in the end-to-end environment.

When the Tivoli Workload Scheduler for z/OS controller is stopped, the end-to-end related messages shown in Example 4-8 will be issued.

Example 4-8 Controller messages for end-to-end when controller is stopped

EQQG003I SUBTASK E2E RECEIVER HAS ENDED
EQQG003I SUBTASK E2E SENDER HAS ENDED
EQQZ034I OPC SUBTASK E2E SENDER HAS ENDED.
EQQZ034I OPC SUBTASK E2E RECEIVER HAS ENDED.
EQQZ034I OPC SUBTASK E2E ENABLER HAS ENDED.

Verify the Tivoli Workload Scheduler for z/OS server

After the customization steps have been completed for the Tivoli Workload Scheduler end-to-end server started task, simply start the end-to-end server started task. Check the server message log (EQQMLOG) for any unexpected error or warning messages. All Tivoli Workload Scheduler for z/OS messages are prefixed with EQQ. See the IBM Tivoli Workload Scheduler for z/OS Messages and Codes, Version 8.2 (Maintenance Release April 2004), SC32-1267.

When the end-to-end server is started for the first time, check that the messages shown in Example 4-9 appear in the Tivoli Workload Scheduler for z/OS end-to-end server EQQMLOG.

Example 4-9 End-to-end server messages first time the end-to-end server is started

EQQPH00I SERVER TASK HAS STARTED
EQQPH33I THE END-TO-END PROCESSES HAVE BEEN STARTED
EQQZ024I Initializing wait parameters
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/translator" has been started, pid is 67371783
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/netman" has been started, pid is 67371919
EQQPT56W The /DD:EQQTWSIN queue has not been formatted yet
EQQPT22I Input Translator thread stopped until new Symphony will be available

Note: In the Tivoli Workload Scheduler for z/OS system messages, there will also be two IEC031I messages related to the formatting messages in Example 4-7. These messages can be ignored because they are related to the formatting of the EQQTWSIN and EQQTWSOU data sets.

The IEC031I messages look like:

IEC031I D37-04,IFG0554P,TWSC,TWSC,EQQTWSOU,........................
IEC031I D37-04,IFG0554P,TWSC,TWSC,EQQTWSIN,.............................


The messages shown in Example 4-9 on page 205 are normal when the Tivoli Workload Scheduler for z/OS end-to-end server is started for the first time and there is no Symphony file created.

Furthermore, the end-to-end server message EQQPT56W is normally issued only for the EQQTWSIN data set, if the EQQTWSIN and EQQTWSOU data sets are both empty and there is no Symphony file created.

If the Tivoli Workload Scheduler for z/OS controller and end-to-end server are started with an empty EQQTWSOU data set (for example, reallocated with a new record length), message EQQPT56W will be issued for the EQQTWSOU data set:

EQQPT56W The /DD:EQQTWSOU queue has not been formatted yet

If a Symphony file has been created, the end-to-end server message log contains the messages in the following example.

Example 4-10 End-to-end server messages when server is started with Symphony file

EQQPH33I THE END-TO-END PROCESSES HAVE BEEN STARTED
EQQZ024I Initializing wait parameters
EQQPT01I Program "/usr/lpp/TWS/TWS820/bin/translator" has been started, pid is 33817341
EQQPT01I Program "/usr/lpp/TWS/TWS820/bin/netman" has been started, pid is 262958
EQQPT20I Input Translator waiting for Batchman and Mailman are started
EQQPT21I Input Translator finished waiting for Batchman and Mailman

The messages shown in Example 4-10 are the normal start-up messages for a Tivoli Workload Scheduler for z/OS end-to-end server with a Symphony file.

When the end-to-end server is stopped, the messages shown in Example 4-11 should be issued in the EQQMLOG.

Example 4-11 End-to-end server messages when server is stopped

EQQZ000I A STOP OPC COMMAND HAS BEEN RECEIVED
EQQPT04I Starter has detected a stop command
EQQPT40I Input Translator thread is shutting down
EQQPT12I The Netman process (pid=262958) ended successfully
EQQPT40I Output Translator thread is shutting down
EQQPT53I Output Translator thread has terminated
EQQPT53I Input Translator thread has terminated
EQQPT40I Input Writer thread is shutting down
EQQPT53I Input Writer thread has terminated
EQQPT12I The Translator process (pid=33817341) ended successfully
EQQPT10I All Starter's sons ended
EQQPH34I THE END-TO-END PROCESSES HAVE ENDED
EQQPH01I SERVER TASK ENDED

After successful completion of the verification, move on to the next step in the end-to-end installation.

4.3 Installing Tivoli Workload Scheduler in an end-to-end environment

In this section, we describe how to install Tivoli Workload Scheduler in an end-to-end environment.

Installing a Tivoli Workload Scheduler agent in an end-to-end environment is not very different from installing Tivoli Workload Scheduler when Tivoli Workload Scheduler for z/OS is not involved. Follow the installation instructions in the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273. The main differences to keep in mind are that in an end-to-end environment, the master domain manager is always the Tivoli Workload Scheduler for z/OS engine (known by the Tivoli Workload Scheduler workstation name OPCMASTER), and the local workstation name of the fault-tolerant workstation is limited to four characters.

4.3.1 Installing multiple instances of Tivoli Workload Scheduler on one machine

As mentioned in Chapter 2, “End-to-end scheduling architecture” on page 25, there are often good reasons to install multiple instances of the Tivoli Workload Scheduler engine on the same machine. If you plan to do this, there are some important considerations that should be made. Careful planning before installation can save you a considerable amount of work later.

Important: Maintenance releases of Tivoli Workload Scheduler are made available about every three months. We recommend that, before installing, you check for the latest available update at:

ftp://ftp.software.ibm.com

The latest release (as we write this book) for IBM Tivoli Workload Scheduler is 8.2-TWS-FP04 and is available at:

ftp://ftp.software.ibm.com/software/tivoli_support/patches/patches_8.2.0/8.2.0-TWS-FP04/


The following items must be unique for each instance of the Tivoli Workload Scheduler engine that is installed on a computer:

• The Tivoli Workload Scheduler user name and ID associated with the instance

• The home directory of the Tivoli Workload Scheduler user

• The component group (only on tier-2 platforms: Linux/PPC, IRIX, Tru64 UNIX, Dynix, HP-UX 11i Itanium)

• The netman port number (set by the nm port option in the localopts file)

First, the user name and ID must be unique. There are many different ways to name these users. Choose user names that make sense to you. It may simplify things to create a group for IBM Tivoli Workload Scheduler and make all Tivoli Workload Scheduler users members of this group. This would enable you to add group access to files to grant access to all Tivoli Workload Scheduler users. When installing Tivoli Workload Scheduler on UNIX, the Tivoli Workload Scheduler user is specified by the -uname option of the UNIX customize script. It is important to specify the Tivoli Workload Scheduler user because otherwise the customize script will choose the default user name maestro. Obviously, if you plan to install multiple Tivoli Workload Scheduler engines on the same computer, they cannot both be installed as the user maestro.

Second, the home directory must be unique. In order to keep two different Tivoli Workload Scheduler engines completely separate, each one must have its own home directory.

Note: Previous versions of Tivoli Workload Scheduler installed files into a directory called unison in the parent directory of the Tivoli Workload Scheduler home directory. Tivoli Workload Scheduler 8.2 simplifies things by placing the unison directory inside the Tivoli Workload Scheduler home directory.

The unison directory is a relic of the days when Unison Software’s Maestro program (the direct ancestor of IBM Tivoli Workload Scheduler) was one of several programs that all shared some common data. The unison directory was where the common data shared between Unison’s various products was stored. Important information is still stored in this directory, including the workstation database (cpudata) and the NT user database (userdata). The Tivoli Workload Scheduler Security file is no longer stored in the unison directory; it is now stored in the Tivoli Workload Scheduler home directory.


Figure 4-18 should give you an idea of how two Tivoli Workload Scheduler engines might be installed on the same computer. You can see that each engine has its own separate Tivoli Workload Scheduler directory.

Figure 4-18 Two separate Tivoli Workload Scheduler engines on one computer

Example 4-12 shows the /etc/passwd entries that correspond to the two Tivoli Workload Scheduler users.

Example 4-12 Excerpt from /etc/passwd: two different Tivoli Workload Scheduler users

tws-a:!:31111:9207:TWS Engine A User:/tivoli/tws/tws-a:/usr/bin/ksh
tws-b:!:31112:9207:TWS Engine B User:/tivoli/tws/tws-b:/usr/bin/ksh

Note that each Tivoli Workload Scheduler user has a unique name, ID, and home directory.
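
A minimal sketch of how such users might be created on Linux; the group name twsgrp is a placeholder, and on AIX you would typically use mkgroup and mkuser instead:

groupadd -g 9207 twsgrp
useradd -u 31111 -g twsgrp -d /tivoli/tws/tws-a -s /usr/bin/ksh -m -c "TWS Engine A User" tws-a
useradd -u 31112 -g twsgrp -d /tivoli/tws/tws-b -s /usr/bin/ksh -m -c "TWS Engine B User" tws-b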

On tier-2 platforms only (Linux/PPC, IRIX, Tru64 UNIX, Dynix, HP-UX 11i/Itanium), Tivoli Workload Scheduler still uses the /usr/unison/components file to keep track of each installed Tivoli Workload Scheduler engine. Each Tivoli Workload Scheduler engine on a tier-2 platform computer must have a unique component group name. The component group is arbitrary; it is just a name that is used by Tivoli Workload Scheduler programs to keep each engine separate. The name of the component group is entirely up to you. It can be specified using the -group option of the UNIX customize script during installation on a tier-2 platform machine. It is important to specify a different component group name for each instance of the Tivoli Workload Scheduler engine installed on a computer.

[Figure 4-18 content: a directory tree under /tivoli/tws with separate engine home directories tws-a and tws-b, each containing its own bin, mozart, network, and Security files (with databases such as cpudata, userdata, mastsked, and jobs).]


Component groups are stored in the file /usr/unison/components. This file contains two lines for each component group.

Example 4-13 shows the components file corresponding to the two Tivoli Workload Scheduler engines.

Example 4-13 Sample /usr/unison/components file for tier-2 platforms

netman  1.8.1    /tivoli/TWS/TWS-A/tws  TWS-Engine-A
maestro 8.1      /tivoli/TWS/TWS-A/tws  TWS-Engine-A
netman  1.8.1.1  /tivoli/TWS/TWS-B/tws  TWS-Engine-B
maestro 8.1      /tivoli/TWS/TWS-B/tws  TWS-Engine-B

The component groups are called TWS-Engine-A and TWS-Engine-B. For each component group, the version and path for netman and maestro (the Tivoli Workload Scheduler engine) are listed. In this context, maestro refers simply to the Tivoli Workload Scheduler home directory.

On tier-1 platforms (such as AIX, Linux/x86, Solaris, HP-UX, and Windows XP), there is no longer a need to be concerned with component groups because the new ISMP installer automatically keeps track of each installed Tivoli Workload Scheduler engine. It does so by writing data about each engine to a file called /etc/TWS/TWS Registry.dat.

Finally, because netman listens for incoming TCP link requests from other Tivoli Workload Scheduler agents, it is important that the netman program for each Tivoli Workload Scheduler engine listen to a unique port. This port is specified by the nm port option in the Tivoli Workload Scheduler localopts file. If you change this option, you must shut down netman and start it again to make the change take effect.
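
For example, with the two engines used here, each engine's localopts file would carry its own value for the netman port (only the relevant line is shown; the values match Table 4-4):

# localopts for engine A (user tws-a)
nm port =31111

# localopts for engine B (user tws-b)
nm port =31112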

In our test environment, we chose, for each Tivoli Workload Scheduler engine, a netman port number that is the same as the numeric user ID of that engine's Tivoli Workload Scheduler user. This makes the numbers easier to remember and simpler when troubleshooting. Table 4-4 on page 211 shows the names and numbers we used in our testing.

Important: The /usr/unison/components file is used only on tier-2 platforms.

Important: Do not edit or remove the /etc/TWS/TWS Registry.dat file because this could cause problems with uninstalling Tivoli Workload Scheduler or with installing fix packs. Do not remove this file unless you intend to remove all installed Tivoli Workload Scheduler 8.2 engines from the computer.


Table 4-4 If possible, choose user IDs and port numbers that are the same

User name    User ID    Netman port
tws-a        31111      31111
tws-b        31112      31112

4.3.2 Verify the Tivoli Workload Scheduler installation

Start Tivoli Workload Scheduler and verify that it starts without any error messages.

Note that if there are no active workstations in Tivoli Workload Scheduler for z/OS for the Tivoli Workload Scheduler agent, only the netman process will be started. But you can verify that the netman process is started and that it listens to the IP port number that you have decided to use in your end-to-end environment.
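
A quick way to check this is sketched below; 31281 is the CPUTCPIP value used in the examples in this chapter, so substitute your own port number, and expect the exact output to vary by platform:

ps -ef | grep netman        # the netman process should be running under the TWS user
netstat -an | grep 31281    # expect a socket in LISTEN state on the netman port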

4.4 Define, activate, verify fault-tolerant workstations

To be able to define jobs in Tivoli Workload Scheduler for z/OS to be scheduled on FTWs, the workstations must be defined in the Tivoli Workload Scheduler for z/OS controller.

The workstations that are defined via the CPUREC keyword should also be defined in the Tivoli Workload Scheduler for z/OS workstation database before they can be activated in the Tivoli Workload Scheduler for z/OS plan. The workstations are defined the same way as computer workstations in Tivoli Workload Scheduler for z/OS, except they need a special flag: fault tolerant. This flag is used to indicate in Tivoli Workload Scheduler for z/OS that these workstations should be treated as FTWs.

When the FTWs have been defined in the Tivoli Workload Scheduler for z/OS workstation database, they can be activated in the Tivoli Workload Scheduler for z/OS plan by either running a plan replan or plan extend batch job.

The process is as follows:

1. Create a CPUREC definition for the workstation as described in “CPUREC statement” on page 187.

2. Define the FTW in the Tivoli Workload Scheduler for z/OS workstation database. Remember to set it to fault tolerant.

3. Run Tivoli Workload Scheduler for z/OS plan replan or plan extend to activate the workstation definition in Tivoli Workload Scheduler for z/OS.

4. Verify that the FTW gets active and linked.



5. Define jobs and job streams on the newly created and activated FTW as described in 4.5, “Creating fault-tolerant workstation job definitions and job streams” on page 217.

4.4.1 Define fault-tolerant workstation in Tivoli Workload Scheduler controller workstation database

A fault-tolerant workstation can be defined either from Tivoli Workload Scheduler for z/OS legacy ISPF dialogs (use option 1.1 from main menu) or in the JSC.

In the following steps, we show how to define an FTW from the JSC (see Figure 4-19 on page 213):

1. Open the Actions Lists, select New Workstation, then select the instance for the Tivoli Workload Scheduler for z/OS controller where the workstation should be defined (TWSC-zOS in our example).

2. The Properties - Workstation in Database window opens.

3. Select the Fault Tolerant check box and fill in the Name field (the four-character name of the FTW) and, optionally, the Description field. See Figure 4-19 on page 213.

4. Save the new workstation definition by clicking OK.

Important: Please note that the order of the operations in this process is important.

Note: It is a good standard to use the first part of the description field to list the DNS name or host name for the FTW. This makes it easier to remember which server or machine the four-character workstation name in Tivoli Workload Scheduler for z/OS relates to. You can add up to 32 alphanumeric characters in the description field.


Figure 4-19 Defining a fault-tolerant workstation from the JSC

4.4.2 Activate the fault-tolerant workstation definition

Fault-tolerant workstation definitions can be activated in the Tivoli Workload Scheduler for z/OS plan either by running the replan or the extend plan programs in the Tivoli Workload Scheduler for z/OS controller.

Note: When we used the JSC to create FTWs as described, we sometimes received this error:

GJS0027E Cannot save the workstation xxxx. Reason: EQQW787E FOR FT WORKSTATIONS RESOURCES CANNOT BE USED AT PLANNING

If you receive this error when creating the FTW from the JSC, then select the Resources tab (see Figure 4-19 on page 213) and un-check the Used for planning check box for Resource 1 and Resource 2. This must be done before selecting the Fault Tolerant check box on the General tab.


When running the replan or extend program, Tivoli Workload Scheduler for z/OS creates (or recreates) the Symphony file and distributes it to the domain managers at the first level. These domain managers, in turn, distribute the Symphony file to their subordinate fault-tolerant agents and domain managers, and so on. If the Symphony file is successfully created and distributed, all defined FTWs should be linked and active.

We run the replan program and verify that the Symphony file is created in the end-to-end server. We also verify that the FTWs become available and have linked status in the Tivoli Workload Scheduler for z/OS plan.

4.4.3 Verify that the fault-tolerant workstations are active and linked

First, it should be verified that there is no warning or error message in the replan batch job (EQQMLOG). The message log should show that all topology statements (DOMREC, CPUREC, and USRREC) have been accepted without any errors or warnings.

Verify messages in plan batch job

For a successful creation of the Symphony file, the message log should show messages similar to those in Example 4-14.

Example 4-14 Plan batch job EQQMLOG messages when Symphony file is created

EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPDOMAIN IS: 0000
EQQZ013I NOW PROCESSING PARAMETER LIBRARY MEMBER TPUSER
EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPUSER IS: 0000
EQQQ502I SPECIAL RESOURCE DATASPACE HAS BEEN CREATED.
EQQQ502I 00000020 PAGES ARE USED FOR 00000100 SPECIAL RESOURCE RECORDS.
EQQ3011I WORKSTATION F100 SET AS DOMAIN MANAGER FOR DOMAIN DM100
EQQ3011I WORKSTATION F200 SET AS DOMAIN MANAGER FOR DOMAIN DM200
EQQ3105I A NEW CURRENT PLAN (NCP) HAS BEEN CREATED
EQQ3106I WAITING FOR SCP
EQQ3107I SCP IS READY: START JOBS ADDITION TO SYMPHONY FILE
EQQ4015I RECOVERY JOB OF F100DJ01 HAS NO JOBWS KEYWORD SPECIFIED,
EQQ4015I THE WORKSTATION F100 OF JOB F100DJ01 IS USED
EQQ3108I JOBS ADDITION TO SYMPHONY FILE COMPLETED
EQQ3101I 0000019 JOBS ADDED TO THE SYMPHONY FILE FROM THE CURRENT PLAN
EQQ3087I SYMNEW FILE HAS BEEN CREATED

Verify messages in the end-to-end server message log

In the Tivoli Workload Scheduler for z/OS end-to-end server message log, we see the messages shown in Example 4-15. These messages show that the Symphony file has been created by the plan replan batch job and that it was possible for the end-to-end server to switch to the new Symphony file.


Example 4-15 End-to-end server messages when Symphony file is created

EQQPT30I Starting switching Symphony
EQQPT12I The Mailman process (pid=Unknown) ended successfully
EQQPT12I The Batchman process (pid=Unknown) ended successfully
EQQPT22I Input Translator thread stopped until new Symphony will be available
EQQPT31I Symphony successfully switched
EQQPT20I Input Translator waiting for Batchman and Mailman are started
EQQPT21I Input Translator finished waiting for Batchman and Mailman
EQQPT23I Input Translator thread is running

Verify messages in the controller message log

The Tivoli Workload Scheduler for z/OS controller shows the messages in Example 4-16, which indicate that the Symphony file was created successfully and that the fault-tolerant workstations are active and linked.

Example 4-16 Controller messages when Symphony file is created

EQQN111I SYMNEW FILE HAS BEEN CREATED
EQQW090I THE NEW SYMPHONY FILE HAS BEEN SUCCESSFULLY SWITCHED
EQQWL10W WORK STATION F100, HAS BEEN SET TO LINKED STATUS
EQQWL10W WORK STATION F100, HAS BEEN SET TO ACTIVE STATUS
EQQWL10W WORK STATION F101, HAS BEEN SET TO LINKED STATUS
EQQWL10W WORK STATION F102, HAS BEEN SET TO LINKED STATUS
EQQWL10W WORK STATION F101, HAS BEEN SET TO ACTIVE STATUS
EQQWL10W WORK STATION F102, HAS BEEN SET TO ACTIVE STATUS

Verify that fault-tolerant workstations are active and linked

After the replan job has completed and output messages have been displayed, the FTWs are checked using the JSC instance pointing to the Tivoli Workload Scheduler for z/OS controller (Figure 4-20).

The Fault Tolerant column indicates that it is an FTW. The Linked column indicates whether the workstation is linked. The Status column indicates whether the mailman process is up and running on the FTW.

Figure 4-20 Status of FTWs in the Tivoli Workload Scheduler for z/OS plan


The F200 workstation is Not Available because we have not installed a Tivoli Workload Scheduler fault-tolerant workstation on this machine yet. We have prepared for a future installation of the F200 workstation by creating the related CPUREC definitions for F200 and defining the FTW (F200) in the Tivoli Workload Scheduler controller workstation database.

Figure 4-21 shows the status of the same FTWs, as it is shown in the JSC, when looking at the Symphony file at domain manager F100.

Note that much more information is available for each FTW. For example, in Figure 4-21 we can see that jobman and writer are running and that we can run 20 jobs in parallel on the FTWs (the Limit column). Also note the information in the Run, CPU type, and Domain columns.

The information shown in Figure 4-21 is read from the Symphony file and generated by the plan programs based on the specifications in CPUREC and DOMREC definitions. This is one of the reasons why we suggest activating support for JSC when running end-to-end scheduling with Tivoli Workload Scheduler for z/OS.

Note that the status of the OPCMASTER workstation is correct, and also remember that the OPCMASTER workstation and the MASTERDM domain are predefined in Tivoli Workload Scheduler for z/OS and cannot be changed.

Jobman is not running on OPCMASTER (in USS in the end-to-end server), because the end-to-end server is not supposed to run jobs in USS. So the information that jobman is not running on the OPCMASTER workstation is OK.

Figure 4-21 Status of FTWs in the Symphony file on domain manager F100

Tip: If the workstation does not link as it should, the cause could be that the writer process has not initiated correctly or the run number for the Symphony file on the FTW is not the same as the run number on the master. Mark the unlinked workstations and right-click to open a pop-up menu where you can click Link to try to link the workstation.

The run number for the Symphony file in the end-to-end server can be seen from legacy ISPF panels in option 6.6 from the main menu.


4.5 Creating fault-tolerant workstation job definitions and job streams

When the FTWs are active and linked in Tivoli Workload Scheduler for z/OS, you can run jobs on these workstations.

To submit work to the FTWs in Tivoli Workload Scheduler for z/OS, you should:

1. Define the script (the JCL or the task) that should be executed on the FTW (that is, on the server).

When defining scripts in Tivoli Workload Scheduler for z/OS, it is important to remember that the script can be placed centrally in the Tivoli Workload Scheduler for z/OS job library or non-centralized on the FTW (on the Tivoli Workload Scheduler server).

Definitions of scripts are found in:

– 4.5.1, “Centralized and non-centralized scripts” on page 217

– 4.5.2, “Definition of centralized scripts” on page 219

– 4.5.3, “Definition of non-centralized scripts” on page 221

– 4.5.4, “Combination of centralized script and VARSUB, JOBREC parameters” on page 232

2. Create a job stream (application) in Tivoli Workload Scheduler for z/OS and add the job (operation) defined in step 1.

It is possible to add the job (operation) to an existing job stream and create dependencies between jobs on FTWs and jobs on mainframe.

Definition of FTW jobs and job streams in Tivoli Workload Scheduler for z/OS is found in 4.5.5, “Definition of FTW jobs and job streams in the controller” on page 234.

4.5.1 Centralized and non-centralized scripts

As described in “Tivoli Workload Scheduler for z/OS end-to-end database objects” on page 69, a job can use two kinds of scripts: centralized or non-centralized.

A centralized script is a script that resides in the controller job library (the EQQJBLIB DD statement, also called JOBLIB) and that is downloaded to the FTW every time the job is submitted. Figure 4-22 on page 218 illustrates the relationship between the centralized script job definition and the member name in the job library (JOBLIB).


Figure 4-22 Centralized script defined in controller job library (JOBLIB)

A non-centralized script is a script that is defined in the SCRPTLIB and that resides on the FTW. Figure 4-23 on page 219 shows the relationship between the job definition and the member name in the script library (EQQSCLIB).


Figure 4-23 Non-centralized script defined in controller script library (EQQSCLIB)

4.5.2 Definition of centralized scripts

Define the centralized script job (operation) in a Tivoli Workload Scheduler for z/OS job stream (application) with the centralized script option set to Y (Yes). See Figure 4-24 on page 220.

Note: The default is N (No) for all operations in Tivoli Workload Scheduler for z/OS.


Figure 4-24 Centralized script option set in ISPF panel or JSC window

A centralized script is a script that resides in the Tivoli Workload Scheduler for z/OS JOBLIB and that is downloaded to the fault-tolerant agent every time the job is submitted.

The centralized script is defined the same way as a normal job JCL in Tivoli Workload Scheduler for z/OS.

Example 4-17 Centralized script for job AIXHOUSP defined in controller JOBLIB

EDIT       TWS.V8R20.JOBLIB(AIXHOUSP) - 01.02                 Columns 00001 00072
Command ===>                                                  Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 //*%OPC SCAN
000002 //* OPC Comment: This job calls TWS rmstdlist script.
000003 //* OPC ======== - The rmstdlist script is called with -p flag and
000004 //* OPC             with parameter 10.
000005 //* OPC           - This means that the rmstdlist script will print
000006 //* OPC             files in the stdlist directory older than 10 days.
000007 //* OPC           - If rmstdlist ends with RC in the interval from 1
000008 //* OPC             to 128, OPC will add recovery application
000009 //* OPC             F100CENTRECAPPL.
000010 //* OPC
000011 //*%OPC RECOVER JOBCODE=(1-128),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO)
000012 //* OPC
000013 echo 'OPC occurrence plan date is: &ODMY1.'
000014 rmstdlist -p 10
****** **************************** Bottom of Data ****************************

In the centralized script in Example 4-17 on page 220, we are running the rmstdlist program that is delivered with Tivoli Workload Scheduler. In the centralized script, we use Tivoli Workload Scheduler for z/OS Automatic Recovery as well as JCL variables.

Rules when creating centralized scripts

Follow these rules when creating the centralized scripts in the Tivoli Workload Scheduler for z/OS JOBLIB:

• Each line starts in column 1 and ends in column 80.

• A backslash (\) in column 80 can be used to continue script lines with more than 80 characters (see the sketch after this list).

• Blanks at the end of a line are automatically removed.

• Lines that start with //* OPC, //*%OPC, or //*>OPC are used for comments, variable substitution directives, and automatic job recovery. These lines are automatically removed before the script is downloaded to the FTA.
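As an illustration of the continuation rule, a long script line in a JOBLIB member could be coded along these lines (the script name and options are hypothetical, and the backslash must sit exactly in column 80, which cannot be rendered precisely here):

   /opt/payroll/bin/extract.sh --input /data/extracts/today --cutoff 18:00      \
   --log /var/payroll/logs/extract.log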

4.5.3 Definition of non-centralized scripts

Non-centralized scripts are defined in a special partitioned data set, EQQSCLIB, that is allocated in the Tivoli Workload Scheduler for z/OS controller started task procedure and used to store the job or task definitions for FTA jobs. The script (the JCL) resides on the fault-tolerant agent.

You must use the JOBREC statement in every SCRPTLIB member to specify the script or command to run. In the SCRPTLIB members, you can also specify the following statements:

• VARSUB to use the Tivoli Workload Scheduler for z/OS automatic substitution of variables when the Symphony file is created or when an operation on an FTW is added to the current plan dynamically.

• RECOVERY to use the Tivoli Workload Scheduler recovery.

Note: This is the default behavior in Tivoli Workload Scheduler for z/OS for fault-tolerant agent jobs.


Example 4-18 shows the syntax for the VARSUB, JOBREC, and RECOVERY statements.

Example 4-18 Syntax for VARSUB, JOBREC, and RECOVERY statements

VARSUB
  TABLES(GLOBAL|tab1,tab2,..|APPL)
  PREFIX('char')
  BACKPREF('char')
  VARFAIL(YES|NO)
  TRUNCATE(YES|NO)

JOBREC
  JOBSCR|JOBCMD('task')
  JOBUSR('username')
  INTRACTV(YES|NO)
  RCCONDSUC('success condition')

RECOVERY
  OPTION(STOP|CONTINUE|RERUN)
  MESSAGE('message')
  JOBCMD|JOBSCR('task')
  JOBUSR('username')
  JOBWS('wsname')
  INTRACTV(YES|NO)
  RCCONDSUC('success condition')

If you define a job with a SCRPTLIB member in the Tivoli Workload Scheduler for z/OS database that contains errors, the daily planning batch job sets the status of that job to failed in the Symphony file. This change of status is not shown in the Tivoli Workload Scheduler for z/OS interface. You can find the messages that explain the error in the log of the daily planning batch job.

If you dynamically add a job to the plan in Tivoli Workload Scheduler for z/OS whose associated SCRPTLIB member contains errors, the job is not added. You can find the messages that explain this failure in the controller EQQMLOG.

Rules when creating JOBREC, VARSUB, or RECOVERY statements

Each statement consists of a statement name, keywords, and keyword values, and follows TSO command syntax rules. When you specify SCRPTLIB statements, follow these rules:

• Statement data must be in columns 1 through 72. Information in columns 73 through 80 is ignored.

• A blank serves as the delimiter between two keywords; if you supply more than one delimiter, the extra delimiters are ignored.

• Continuation characters and blanks are not used to define a statement that continues on the next line.


• Values for keywords are contained within parentheses. If a keyword can have multiple values, the list of values must be separated by valid delimiters. Delimiters are not allowed between a keyword and the left parenthesis of the specified value.

• Type /* to start a comment and */ to end a comment. A comment can span record images in the parameter member and can appear anywhere except in the middle of a keyword or a specified value.

• A statement continues until the next statement or until the end of records in the member.

• If the value of a keyword includes spaces, enclose the value within single or double quotation marks, as in Example 4-19.

Example 4-19 JOBCMD and JOBSCR examples

JOBCMD('ls la')
JOBSCR('C:/USERLIB/PROG/XME.EXE')
JOBSCR("C:/USERLIB/PROG/XME.EXE")
JOBSCR("C:/USERLIB/PROG/XME.EXE 'THIS IS THE PARAMETER LIST' ")
JOBSCR('C:/USERLIB/PROG/XME.EXE "THIS IS THE PARAMETER LIST" ')

Description of the VARSUB statement

The VARSUB statement defines the variable substitution options. This statement must always be the first one in the members of the SCRPTLIB. For more information about the variable definition, see IBM Tivoli Workload Scheduler for z/OS Managing the Workload, Version 8.2 (Maintenance Release April 2004), SC32-1263.

Figure 4-25 shows the format of the VARSUB statement.

Figure 4-25 Format of the VARSUB statement

Note: VARSUB can be used in combination with a job that is defined with a centralized script.


VARSUB is defined in the members of the EQQSCLIB library, as specified by the EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the plan extend, replan, and Symphony renew batch job JCL.
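For reference, the corresponding DD statement in the controller started task and in the plan batch job JCL might look like the following line (the data set name is the one used in our examples; adjust it to your own naming standards):

   //EQQSCLIB DD DISP=SHR,DSN=TWS.V8R20.SCRPTLIB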

Description of the VARSUB parameters

The following describes the VARSUB parameters:

• TABLES(GLOBAL|APPL|table1,table2,...)

Identifies the variable tables that must be searched and the search order. APPL indicates the application variable table (see the VARIABLE TABLE field in the MCP panel, at Occurrence level). GLOBAL indicates the table defined in the GTABLE keyword of the OPCOPTS controller and BATCHOPT batch options.

• PREFIX(char|&)

A non-alphanumeric character that precedes a variable. It serves the same purpose as the ampersand (&) character that is used in variable substitution in z/OS JCL.

• BACKPREF(char|%)

A non-alphanumeric character that delimits a variable to form simple and compound variables. It serves the same purpose as the percent (%) character that is used in variable substitution in z/OS JCL.

• VARFAIL(NO|YES)

Specifies whether Tivoli Workload Scheduler for z/OS is to issue an error message when a variable substitution error occurs. If you specify NO, the variable string is left unchanged without any translation.

• TRUNCATE(YES|NO)

Specifies whether variables are to be truncated if they are longer than the allowed length. If you specify NO and the keywords are longer than the allowed length, an error message is issued. The allowed length is the length of the keyword for which you use the variable. For example, if you specify a variable of five characters for the JOBWS keyword, the variable is truncated to the first four characters.
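Putting the VARSUB parameters together, the statement at the top of a SCRPTLIB member could look like this sketch (the variable table name MYVARS is a hypothetical example; the examples later in this chapter use the tables E2EVAR and IBMGLOBAL):

   VARSUB
     TABLES(MYVARS)
     PREFIX('&')
     BACKPREF('%')
     VARFAIL(YES)
     TRUNCATE(NO)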

Description of the JOBREC statement

The JOBREC statement defines the fault-tolerant workstation job properties. You must specify JOBREC for each member of the SCRPTLIB. For each job, this statement specifies the script or the command to run and the user that must run the script or command.


Figure 4-26 shows the format of the JOBREC statement.

Figure 4-26 Format of the JOBREC statement

JOBREC is defined in the members of the EQQSCLIB library, as specified by the EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the plan extend, replan, and Symphony renew batch job JCL.

Description of the JOBREC parameters

The following describes the JOBREC parameters:

• JOBSCR(script name)

Specifies the name of the shell script or executable file to run for the job. The maximum length is 4095 characters. If the script includes more than one word, it must be enclosed within single or double quotation marks. Do not specify this keyword if the job uses a centralized script.

• JOBCMD(command name)

Specifies the name of the shell command to run the job. The maximum length is 4095 characters. If the command includes more than one word, it must be enclosed within single or double quotation marks. Do not specify this keyword if the job uses a centralized script.

• JOBUSR(user name)

Specifies the name of the user submitting the specified script or command. The maximum length is 47 characters. If you do not specify the user in the JOBUSR keyword, the user defined in the CPUUSER keyword of the CPUREC statement is used. The CPUREC statement is the one related to the workstation on which the specified script or command must run. If the user is not specified in the CPUUSER keyword, the tws user is used.

If the script is centralized, you can also use the job-submit exit (EQQUX001) to specify the user name. This user name overrides the value specified in the JOBUSR keyword. In turn, the value that is specified in the JOBUSR keyword overrides that specified in the CPUUSER keyword of the CPUREC statement. If no user name is specified, the tws user is used.

Note: JOBREC can be used in combination with a job that is defined with centralized script.

If you use this keyword to specify the name of the user who submits the specified script or command on a Windows fault-tolerant workstation, you must associate this user name to the Windows workstation in the USRREC initialization statement.

• INTRACTV(YES|NO)

Specifies that a Windows job runs interactively on the Windows desktop. This keyword is used only for jobs running on Windows fault-tolerant workstations.

• RCCONDSUC("success condition")

An expression that determines the return code (RC) that is required to consider a job as successful. If you do not specify this keyword, the return code equal to zero corresponds to a successful condition. A return code different from zero corresponds to the job abend.

The success condition maximum length is 256 characters, and the total length of JOBCMD or JOBSCR plus the success condition must not exceed 4086 characters. This is because the TWSRCMAP string is inserted between the success condition and the script or command name. For example, the dir command together with the success condition RC<4 is translated into:

dir TWSRCMAP: RC<4

The success condition expression can contain a combination of comparison and Boolean expressions:

– Comparison expression specifies the job return codes. The syntax is:

(RC operator operand), where:

• RC is the RC keyword (type RC).

• operator is the comparison operator. It can have the values shown in Table 4-5.

Table 4-5 Comparison operators

  Example     Operator    Description
  RC < a      <           Less than
  RC <= a     <=          Less than or equal to
  RC > a      >           Greater than
  RC >= a     >=          Greater than or equal to
  RC = a      =           Equal to
  RC <> a     <>          Not equal to


• operand is an integer between -2147483647 and 2147483647.

For example, you can define a successful job as a job that ends with a return code less than or equal to 3 as follows:

RCCONDSUC "(RC <= 3)"

– Boolean expression specifies a logical combination of comparison expressions. The syntax is:

comparison_expression operator comparison_expression, where:

• comparison_expression: The expression is evaluated from left to right. You can use parentheses to assign a priority to the expression evaluation.

• operator: Logical operator. It can have the following values: and, or, not.

For example, you can define a successful job as a job that ends with a return code less than or equal to 3 or with a return code not equal to 5, and less than 10 as follows:

RCCONDSUC "(RC<=3) OR ((RC<>5) AND (RC<10))"
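Putting these keywords together, a SCRPTLIB member for a UNIX job might contain a JOBREC such as the following sketch (the script path, user name, and return-code rule are hypothetical examples, not taken from our environment):

   JOBREC
     JOBSCR('/opt/batch/scripts/nightly_load.sh daily')
     JOBUSR(batchusr)
     RCCONDSUC('(RC<=4) OR (RC=8)')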

Description of the RECOVERY statement

The RECOVERY statement defines the Tivoli Workload Scheduler recovery for a job whose status is in error, but whose error code is not FAIL. To run the recovery, you can specify one or both of the following recovery actions:

• A recovery job (JOBCMD or JOBSCR keywords)

• A recovery prompt (MESSAGE keyword)

The recovery actions must be followed by one of the recovery options (the OPTION keyword): stop, continue, or rerun. The default is stop with no recovery job and no recovery prompt. For more information about recovery in a distributed network, see Tivoli Workload Scheduler Reference Guide Version 8.2 (Maintenance Release April 2004), SC32-1274.

The RECOVERY statement is ignored if it is used with a job that runs a centralized script.

Figure 4-27 on page 228 shows the format of the RECOVERY statement.


Figure 4-27 Format of the RECOVERY statement

RECOVERY is defined in the members of the EQQSCLIB library, as specified by the EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the plan extend, replan, and Symphony renew batch job JCL.

Description of the RECOVERY parameters

The following describes the RECOVERY parameters:

• OPTION(STOP|CONTINUE|RERUN)

Specifies the option that Tivoli Workload Scheduler for z/OS must use when a job abends. For every job, Tivoli Workload Scheduler for z/OS enables you to define a recovery option. You can specify one of the following values:

– STOP: Do not continue with the next job. The current job remains in error. You cannot specify this option if you use the MESSAGE recovery action.

– CONTINUE: Continue with the next job. The current job status changes to complete in the z/OS interface.

– RERUN: Automatically rerun the job (once only). The job status changes to ready, and then to the status of the rerun. Before rerunning the job for a second time, an automatically generated recovery prompt is displayed.

• MESSAGE("message")

Specifies the text of a recovery prompt, enclosed in single or double quotation marks, to be displayed if the job abends. The text can contain up to 64 characters. If the text begins with a colon (:), the prompt is displayed, but no reply is required to continue processing. If the text begins with an exclamation mark (!), the prompt is not displayed but a reply is required to proceed. You cannot use the recovery prompt if you specify the recovery STOP option without using a recovery job.


• JOBCMD(command name)

Specifies the name of the shell command to run if the job abends. The maximum length is 4095 characters. If the command includes more than one word, it must be enclosed within single or double quotation marks.

• JOBSCR(script name)

Specifies the name of the shell script or executable file to be run if the job abends. The maximum length is 4095 characters. If the script includes more than one word, it must be enclosed within single or double quotation marks.

• JOBUSR(user name)

Specifies the name of the user submitting the recovery job action. The maximum length is 47 characters. If you do not specify this keyword, the user defined in the JOBUSR keyword of the JOBREC statement is used. Otherwise, the user defined in the CPUUSER keyword of the CPUREC statement is used. The CPUREC statement is the one related to the workstation on which the recovery job must run. If the user is not specified in the CPUUSER keyword, the tws user is used.

If you use this keyword to specify the name of the user who runs the recovery on a Windows fault-tolerant workstation, you must associate this user name to the Windows workstation in the USRREC initialization statement

• JOBWS(workstation name)

Specifies the name of the workstation on which the recovery job or command is submitted. The maximum length is 4 characters. The workstation must belong to the same domain as the workstation on which the main job runs. If you do not specify this keyword, the workstation name of the main job is used.

• INTRACTV(YES|NO)

Specifies that the recovery job runs interactively on a Windows desktop. This keyword is used only for jobs running on Windows fault-tolerant workstations.

• RCCONDSUC("success condition")

An expression that determines the return code (RC) that is required to consider a recovery job as successful. If you do not specify this keyword, the return code equal to zero corresponds to a successful condition. A return code different from zero corresponds to the job abend.

The success condition maximum length is 256 characters, and the total length of the JOBCMD or JOBSCR plus the success condition must not exceed 4086 characters. This is because the TWSRCMAP string is inserted between the success condition and the script or command name. For example, the dir command together with the success condition RC<4 is translated into:

dir TWSRCMAP: RC<4


The success condition expression can contain a combination of comparison and Boolean expressions:

– Comparison expression Specifies the job return codes. The syntax is:

(RC operator operand)

where:

• RC is the RC keyword (type RC).

• operator is the comparison operator. It can have the values in Table 4-6.

Table 4-6 Operator comparison operator values

  Example     Operator    Description
  RC < a      <           Less than
  RC <= a     <=          Less than or equal to
  RC > a      >           Greater than
  RC >= a     >=          Greater than or equal to
  RC = a      =           Equal to
  RC <> a     <>          Not equal to

• operand is an integer between -2147483647 and 2147483647.

For example, you can define a successful job as a job that ends with a return code less than or equal to 3 as follows:

RCCONDSUC "(RC <= 3)"

– Boolean expression: Specifies a logical combination of comparison expressions. The syntax is:

comparison_expression operator comparison_expression

where:

• comparison_expression: The expression is evaluated from left to right. You can use parentheses to assign a priority to the expression evaluation.

• operator: Logical operator (it can be and, or, or not).

For example, you can define a successful job as a job that ends with a return code less than or equal to 3 or with a return code not equal to 5, and less than 10 as follows:

RCCONDSUC "(RC<=3) OR ((RC<>5) AND (RC<10))"
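As a consolidated sketch of the RECOVERY keywords described above (the recovery script, user, message text, and workstation name are hypothetical; JOBWS must name a workstation in the same domain as the main job):

   RECOVERY
     OPTION(RERUN)                            /* Rerun the failed job once      */
     MESSAGE('Check file system space, then reply to rerun')
     JOBSCR('/opt/batch/scripts/cleanup.sh')  /* Recovery job                   */
     JOBUSR(batchusr)
     JOBWS(F101)
     RCCONDSUC('RC<=4')                       /* Recovery job success condition */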


Example VARSUB, JOBREC, and RECOVERY

For the test of VARSUB, JOBREC, and RECOVERY, we used the non-centralized script member as shown in Example 4-20.

Example 4-20 Non-centralized AIX script with VARSUB, JOBREC, and RECOVERY

EDIT       TWS.V8R20.SCRPTLIB(F100DJ02) - 01.05               Columns 00001 00072
Command ===>                                                  Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 /* Definition for job with "non-centralized" script                */
000002 /* ------------------------------------------------                */
000003 /* VARSUB - to manage JCL variable substitution                    */
000004 VARSUB
000005   TABLES(E2EVAR)
000006   PREFIX('&')
000007   BACKPREF('%')
000008   VARFAIL(YES)
000009   TRUNCATE(YES)
000010 /* JOBREC - to define script, user and some other specifications   */
000011 JOBREC
000012   JOBCMD('rm &TWSHOME/demo.sh')
000013   JOBUSR ('%TWSUSER')
000014 /* RECOVERY - to define what FTA should do in case of error in job */
000015 RECOVERY
000016   OPTION(RERUN)                      /* Rerun the job after recover */
000017   JOBCMD('touch &TWSHOME/demo.sh')   /* Recover job                 */
000018   JOBUSR('&TWSUSER')                 /* User for recover job        */
000019   MESSAGE ('Create demo.sh on FTA?') /* Prompt message              */
****** **************************** Bottom of Data ****************************

The member F100DJ02 in Example 4-20 was created in the SCRPTLIB (EQQSCLIB) partitioned data set. In the non-centralized script F100DJ02, we use VARSUB to specify how we want Tivoli Workload Scheduler for z/OS to scan for and substitute JCL variables. The JOBREC parameters specify that we will run the UNIX (AIX) rm command for a file named demo.sh.

If the file does not exist (it does not exist the first time the script is run), we run the recovery command (touch), which creates the missing file. So we can rerun (OPTION(RERUN)) the JOBREC JOBCMD() without any errors.

Before the job is rerun, an operator has to reply Yes to the prompt message: Create demo.sh on FTA?

Example 4-21 on page 232 shows another example. The job will be marked complete if the return code from the script is less than 16 and different from 8, or if it is equal to 20.


Example 4-21 Non-centralized script definition with RCCONDSUC parameter

EDIT       TWS.V8R20.SCRPTLIB(F100DJ03) - 01.01               Columns 00001 00072
Command ===>                                                  Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 /* Definition for job with "distributed" script                    */
000002 /* --------------------------------------------                    */
000003 /* VARSUB - to manage JCL variable substitution                    */
000004 VARSUB
000005   TABLES(IBMGLOBAL)
000006   PREFIX(%)
000007   VARFAIL(YES)
000008   TRUNCATE(NO)
000009 /* JOBREC - to define script, user and some other specifications   */
000010 JOBREC
000011   JOBSCR('/tivoli/tws/scripts/rc_rc.sh 12')
000012   JOBUSR(%DISTUID.)
000013   RCCONDSUC('((RC<16) AND (RC<>8)) OR (RC=20)')

4.5.4 Combination of centralized script and VARSUB, JOBREC parameters

Sometimes it can be necessary to create a member in the EQQSCLIB (normally used for non-centralized script definitions) for a job that is defined in Tivoli Workload Scheduler for z/OS with centralized script.

Important: Be careful with lowercase and uppercase. In Example 4-21, it is important that the variable name DISTUID is typed with capital letters because Tivoli Workload Scheduler for z/OS JCL variable names are always uppercase. On the other hand, it is important that the value for the DISTUID variable is defined in Tivoli Workload Scheduler for z/OS variable table IBMGLOBAL with lowercase letters, because the user ID is defined on the UNIX system with lowercase letters.

Remember to type with CAPS OFF when editing members in SCRPTLIB (EQQSCLIB) for jobs with non-centralized script and members in Tivoli Workload Scheduler for z/OS JOBLIB (EQQJBLIB) for jobs with centralized script.


This can be the case if:

• The RCCONDSUC parameter will be used for the job to accept specific return codes or return code ranges.

• A special user should be assigned to the job with the JOBUSR parameter.

• Tivoli Workload Scheduler for z/OS JCL variables should be used in the JOBUSR() or the RCCONDSUC() parameters (for example).

Remember that the RECOVERY statement cannot be specified in EQQSCLIB for jobs with centralized script. (It will be ignored.)

To make this combination, you simply:

1. Create the centralized script in Tivoli Workload Scheduler for z/OS JOBLIB. The member name should be the same as the job name defined for the operation (job) in the Tivoli Workload Scheduler for z/OS job stream (application).

2. Create the corresponding member in the EQQSCLIB. The member name should be the same as the member name for the job in the JOBLIB.

For example, suppose we have a job with a centralized script. The job should accept return codes less than 7, and it should run with user dbprod.

To accomplish this, we define the centralized script in Tivoli Workload Scheduler for z/OS the same way as shown in Example 4-17 on page 220. Next, we create a member in the EQQSCLIB with the same name as the member name used for the centralized script.

This member should only contain the JOBREC RCCONDSUC() and JOBUSR() parameters (Example 4-22).

Example 4-22 EQQSCLIB (SCRIPTLIB) definition for job with centralized script

EDIT       TWS.V8R20.SCRPTLIB(F100CJ02) - 01.05               Columns 00001 00072
Command ===>                                                  Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 JOBREC
000002   RCCONDSUC('RC<7')
000003   JOBUSR(dbprod)
****** **************************** Bottom of Data ****************************

Note: You cannot use the Tivoli Workload Scheduler for z/OS highest return code for fault-tolerant workstation jobs. You have to use the RCCONDSUC parameter.
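For example, where a mainframe operation would use a highest return code of 4, the equivalent for an FTW job is a JOBREC entry along these lines (sketch):

   JOBREC
     RCCONDSUC('RC<=4')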

4.5.5 Definition of FTW jobs and job streams in the controller

When the script is defined either as centralized in the Tivoli Workload Scheduler for z/OS job library (JOBLIB) or as non-centralized in the Tivoli Workload Scheduler for z/OS script library (EQQSCLIB), you can define some job streams (applications) to run the defined scripts.

Definition of job streams (applications) for fault-tolerant workstation jobs is done exactly the same way as for normal mainframe job streams: The job is defined in the job stream, and dependencies are added (predecessor jobs, time dependencies, special resources). Optionally, a run cycle can be added to run the job stream at a set time.

When the job stream is defined, the fault-tolerant workstation jobs can be executed and the final verification test can be performed.

Figure 4-28 shows an example of a job stream that is used to test the end-to-end scheduling environment. There are four distributed jobs (seen in the left window in the figure) and these jobs will run on workdays (seen in the right window).

Figure 4-28 Example of a job stream used to test end-to-end scheduling


It is not necessary to create a run cycle for job streams to test the FTW jobs, as they can be added manually to the plan in Tivoli Workload Scheduler for z/OS.

4.6 Verification test of end-to-end scheduling

At this point we have:

• Installed and configured the Tivoli Workload Scheduler for z/OS controller for end-to-end scheduling

• Installed and configured the Tivoli Workload Scheduler for z/OS end-to-end server

• Defined the network topology for the distributed Tivoli Workload Scheduler network in the end-to-end server and plan batch jobs

• Installed and configured Tivoli Workload Scheduler on the servers in the network for end-to-end scheduling

• Defined fault-tolerant workstations and activated these workstations in the Tivoli Workload Scheduler for z/OS network

• Verified that the plan program executed successfully with the end-to-end topology statements

• Created members with centralized script and non-centralized scripts

• Created job streams containing jobs with centralized and non-centralized scripts

It is time to perform the final verification test of end-to-end scheduling. This test verifies that:

• Jobs with centralized script definitions can be executed on the FTWs, and the job log can be browsed for these jobs.

• Jobs with non-centralized script definitions can be executed on the FTWs, and the job log can be browsed for these jobs.

• Jobs with a combination of centralized and non-centralized script definitions can be executed on the FTWs, and the job log can be browsed for these jobs.

The verification can be performed in several ways. Because we would like to verify that our end-to-end environment is working and that it is possible to run jobs on the FTWs, we have focused on this verification.

We used the Job Scheduling Console in combination with legacy Tivoli Workload Scheduler for z/OS ISPF panels for the verifications. Of course, it is possible to perform the complete verification only with the legacy ISPF panels.


Finally, if you decide to use only centralized scripts or non-centralized scripts, you do not have to verify both cases.

4.6.1 Verification of job with centralized script definitions

Add a job stream with a job defined with centralized script. The job from Example 4-17 on page 220 is used in this example.

Before the job was submitted, the JCL (script) was edited and the parameter on the rmstdlist program was changed from 10 to 1 (Figure 4-29).

Figure 4-29 Edit JCL for centralized script, rmstdlist parameter changed from 10 to 1

The job is submitted, and it is verified that the job completes successfully on the FTA. Output is verified by browsing the job log. Figure 4-30 on page 237 shows only the first part of the job log. See the complete job log in Example 4-23 on page 237.

From the job log, you can see that the centralized script that was defined in the controller JOBLIB is copied to (see the line with the = JCLFILE text):

/tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK.sh


The Tivoli Workload Scheduler for z/OS JCL variable &ODMY1 in the “echo” line (Figure 4-29) has been substituted by the Tivoli Workload Scheduler for z/OS controller with the job stream planning date (in our case, 210704, as seen in Example 4-23 on page 237).

Figure 4-30 Browse first part of job log for the centralized script job in JSC

Example 4-23 The complete job log for the centralized script job

===============================================================
= JOB       : OPCMASTER#BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK
= USER      : twstest
= JCLFILE   : /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK.sh
= Job Number: 52754
= Wed 07/21/04 21:52:39 DFT
===============================================================
TWS for UNIX/JOBMANRC 8.2
AWSBJA001I Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK.sh
TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for user ''.
Locale LANG set to "C"
Now we are running the script /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK.sh
OPC occurrence plan date is: 210704
TWS for UNIX/RMSTDLIST 8.2
AWSBJA001I Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
AWSBIS324I Will list directories older than -1
/tivoli/tws/twstest/tws/stdlist/2004.07.13
/tivoli/tws/twstest/tws/stdlist/2004.07.14
/tivoli/tws/twstest/tws/stdlist/2004.07.15
/tivoli/tws/twstest/tws/stdlist/2004.07.16
/tivoli/tws/twstest/tws/stdlist/2004.07.18
/tivoli/tws/twstest/tws/stdlist/2004.07.19
/tivoli/tws/twstest/tws/stdlist/logs/20040713_NETMAN.log
/tivoli/tws/twstest/tws/stdlist/logs/20040713_TWSMERGE.log
/tivoli/tws/twstest/tws/stdlist/logs/20040714_NETMAN.log
/tivoli/tws/twstest/tws/stdlist/logs/20040714_TWSMERGE.log
/tivoli/tws/twstest/tws/stdlist/logs/20040715_NETMAN.log
/tivoli/tws/twstest/tws/stdlist/logs/20040715_TWSMERGE.log
/tivoli/tws/twstest/tws/stdlist/logs/20040716_NETMAN.log
/tivoli/tws/twstest/tws/stdlist/logs/20040716_TWSMERGE.log
/tivoli/tws/twstest/tws/stdlist/logs/20040718_NETMAN.log
/tivoli/tws/twstest/tws/stdlist/logs/20040718_TWSMERGE.log
===============================================================
= Exit Status           : 0
= System Time (Seconds) : 1          Elapsed Time (Minutes) : 0
= User Time (Seconds)   : 0
= Wed 07/21/04 21:52:40 DFT
===============================================================

This completes the verification of centralized script.


4.6.2 Verification of job with non-centralized scripts

Add a job stream with a job defined with a non-centralized script. Our example uses the non-centralized job script from Example 4-20 on page 231.

The job is submitted, and it is verified that the job ends in error. (Remember that the JOBCMD will try to remove a non-existing file.)

Reply to the prompt with Yes, and the recovery job is executed. Figure 4-31 shows how this is done in the JSC; the callouts in the figure correspond to these steps:

• The job ends in error with RC=0002.

• Right-click the job to open a context menu (1).

• In the context menu, select Recovery Info to open the Job Instance Recovery Information window.

• The recovery message is shown, and you can reply to the prompt by clicking the Reply to Prompt arrow.

• Select Yes and click OK to run the recovery job and rerun the failed F100DJ02 job (if the recovery job ends successfully).

Figure 4-31 Running F100DJ02 job with non-centralized script and RECOVERY options

The same process can be performed in Tivoli Workload Scheduler for z/OS legacy ISPF panels.

When the job ends in error, type RI (for Recovery Info) for the job in the Tivoli Workload Scheduler for z/OS Error list to get the panel shown in Figure 4-32 on page 240.


Figure 4-32 Recovery Info ISPF panel in Tivoli Workload Scheduler for z/OS

To reply Yes to the prompt, type PY in the Option field.

Then press Enter several times to see the result of the recovery job in the same panel. The Recovery job info fields will be updated with information for Recovery jobid, Duration, and so on (Figure 4-33).

Figure 4-33 Recovery Info after the Recovery job has been executed.

The recovery job has been executed successfully, and because the recovery option (Figure 4-32) was RERUN, the failing job (F100DJ02) is rerun and completes successfully.

Finally, the job log is browsed for the completed F100DJ02 job (Example 4-24 on page 241). The job log shows that the user is twstest ( = USER) and that the twshome directory is /tivoli/tws/twstest/tws (part of the = JCLFILE line).


Example 4-24 The job log for the second run of F100DJ02 (after the RECOVERY job)

===============================================================
= JOB       : OPCMASTER#BB8D04BFE71A3901.J_010_F100DECSCRIPT01
= USER      : twstest
= JCLFILE   : rm /tivoli/tws/twstest/tws/demo.sh
= Job Number: 24100
= Wed 07/21/04 22:46:33 DFT
===============================================================
TWS for UNIX/JOBMANRC 8.2
AWSBJA001I Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc rm
TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for user ''.
Locale LANG set to "C"
Now we are running the script rm /tivoli/tws/twstest/tws/demo.sh
===============================================================
= Exit Status           : 0
= System Time (Seconds) : 0          Elapsed Time (Minutes) : 0
= User Time (Seconds)   : 0
= Wed 07/21/04 22:46:33 DFT
===============================================================

If you compare the job log output with the non-centralized script definition in Example 4-20 on page 231, you see that the user and the twshome directory were defined as Tivoli Workload Scheduler for z/OS JCL variables (&TWSHOME and %TWSUSER). These variables have been substituted with values from the Tivoli Workload Scheduler for z/OS variable table E2EVAR (specified in the VARSUB TABLES() parameter).

This variable substitution is performed when the job definition is added to the Symphony file, either during normal Tivoli Workload Scheduler for z/OS plan extension or replan, or when a user adds the job stream to the plan ad hoc.

This completes the test of non-centralized script.


4.6.3 Verification of centralized script with JOBREC parameters

We did a verification with a job with centralized script combined with a JOBREC statement in the script library (EQQSCLIB).

The verification uses a job named F100CJ02 and centralized script, as shown in Example 4-25. The centralized script is defined in the Tivoli Workload Scheduler for z/OS JOBLIB.

Example 4-25 Centralized script for test in combination with JOBREC

EDIT       TWS.V8R20.JOBLIB(F100CJ02) - 01.07                 Columns 00001 00072
Command ===>                                                  Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 //*%OPC SCAN
000002 //* OPC Here is an OPC JCL Variable OYMD1: &OYMD1.
000003 //* OPC
000004 //*%OPC RECOVER JOBCODE=(12),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO)
000005 //* OPC
000006 echo 'Todays OPC date is: &OYMD1'
000007 echo 'Unix system date is: '
000008 date
000009 echo 'OPC schedule time is: ' &CHHMMSSX
000010 exit 12
****** **************************** Bottom of Data ****************************

The JOBREC statement for the F100CJ02 job is defined in the Tivoli Workload Scheduler for z/OS scriptlib (EQQSCLIB); see Example 4-26. It is important that the member name for the job (F100CJ02 in our example) is the same in JOBLIB and SCRPTLIB.

Example 4-26 JOBREC definition for the F100CJ02 job

EDIT       TWS.V8R20.SCRPTLIB(F100CJ02) - 01.07               Columns 00001 00072
Command ===>                                                  Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 JOBREC
000002   RCCONDSUC('RC<7')
000003   JOBUSR(maestro)
****** **************************** Bottom of Data ****************************

The first time the job is run, it abends with return code 12 (due to the exit 12 line in the centralized script).

Example 4-27 on page 243 shows the job log. Note the “= JCLFILE” line. Here you can see TWSRCMAP: RC<7, which is added because we specified RCCONDSUC(‘RC<7’) in the JOBREC definition for the F100CJ02 job.


Example 4-27 Job log for the F100CJ02 job (ends with return code 12)

===============================================================
= JOB       : OPCMASTER#BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01
= USER      : maestro
= JCLFILE   : /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh TWSRCMAP: RC<7
= Job Number: 56624
= Wed 07/21/04 23:07:16 DFT
===============================================================
TWS for UNIX/JOBMANRC 8.2
AWSBJA001I Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh
TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for user ''.
Locale LANG set to "C"
Todays OPC date is: 040721
Unix system date is:
Wed Jul 21 23:07:17 DFT 2004
OPC schedule time is: 23021516
===============================================================
= Exit Status           : 12
= System Time (Seconds) : 0          Elapsed Time (Minutes) : 0
= User Time (Seconds)   : 0
= Wed 07/21/04 23:07:17 DFT
===============================================================

The job log also shows that the user is set to maestro (the = USER line). This is because we specified JOBUSR(maestro) in the JOBREC statement.

Next, before the job is rerun, the JCL (the centralized script) is edited, and the last line is changed from exit 12 to exit 6. Example 4-28 on page 244 shows the edited JCL.


Example 4-28 The script (JCL) for the F100CJ02 job after editing; exit changed to 6

****** ***************************** Top of Data ******************************
000001 //*>OPC SCAN
000002 //* OPC Here is an OPC JCL Variable OYMD1: 040721
000003 //* OPC
000004 //*>OPC RECOVER JOBCODE=(12),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO)
000005 //* OPC MSG:
000006 //* OPC MSG: I *** R E C O V E R Y  A C T I O N S  T A K E N ***
000007 //* OPC
000008 echo 'Todays OPC date is: 040721'
000009 echo
000010 echo 'Unix system date is: '
000011 date
000012 echo
000013 echo 'OPC schedule time is: ' 23021516
000014 echo
000015 exit 6
****** **************************** Bottom of Data ****************************

Note that the line with Tivoli Workload Scheduler for z/OS Automatic Recovery has changed: The % sign has been replaced by the > sign. This means that Tivoli Workload Scheduler for z/OS has performed the recovery action by adding the F100CENTRECAPPL job stream (application).

The result after the edit and rerun of the job is that the job completes successfully. (It is marked as completed with return code = 0 in Tivoli Workload Scheduler for z/OS.) The RCCONDSUC() parameter in the scriptlib definition for the F100CJ02 job sets the job to successful even though the exit code from the script was 6 (Example 4-29).

Example 4-29 Job log for the F100CJ02 job with script exit code = 6

===============================================================
= JOB       : OPCMASTER#BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01
= USER      : maestro
= JCLFILE   : /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh TWSRCMAP: RC<7
= Job Number: 41410
= Wed 07/21/04 23:35:48 DFT
===============================================================
TWS for UNIX/JOBMANRC 8.2
AWSBJA001I Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh
TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for user ''.
Locale LANG set to "C"
Todays OPC date is: 040721
Unix system date is:
Wed Jul 21 23:35:49 DFT 2004
OPC schedule time is: 23021516
===============================================================
= Exit Status           : 6
= System Time (Seconds) : 0          Elapsed Time (Minutes) : 0
= User Time (Seconds)   : 0
= Wed 07/21/04 23:35:49 DFT
===============================================================

This completes the verification of centralized script combined with JOBREC statements.

4.7 Activate support for the Tivoli Workload Scheduler Job Scheduling Console

To activate support for use of the Tivoli Workload Scheduler Job Scheduling Console (JSC), perform the following steps:

1. Install and start a Tivoli Workload Scheduler for z/OS JSC server on mainframe.

2. Install Tivoli Management Framework 4.1 or 3.7.1.

3. Install Job Scheduling Services in Tivoli Management Framework.

4. To be able to work with Tivoli Workload Scheduler for z/OS (OPC) controllers from the JSC:

a. Install the Tivoli Workload Scheduler for z/OS connector in Tivoli Management Framework.

b. Create instances in Tivoli Management Framework that point to the Tivoli Workload Scheduler for z/OS controllers you want to access from the JSC.


5. To be able to work with Tivoli Workload Scheduler domain managers or fault-tolerant agents from the JSC:

a. Install the Tivoli Workload Scheduler connector in Tivoli Management Framework. Note that the Tivoli Management Framework server or managed node must be installed on the machine where the Tivoli Workload Scheduler instance is installed.

b. Create instances in Tivoli Management Framework that point to the Tivoli Workload Scheduler domain managers or fault-tolerant agents that you would like to access from the JSC.

6. Install the JSC on the workstations where it should be used.

The following sections describe installation steps in more detail.

4.7.1 Install and start Tivoli Workload Scheduler for z/OS JSC server

To use Tivoli Workload Scheduler Job Scheduling Console for communication with Tivoli Workload Scheduler for z/OS, you must initialize the Tivoli Workload Scheduler for z/OS connector. The connector forms the bridge between the Tivoli Workload Scheduler Job Scheduling Console and the Tivoli Workload Scheduler for z/OS product.

The JSC communicates with Tivoli Workload Scheduler for z/OS through the scheduler server using the TCP/IP protocol. The JSC needs the server to run as a started task in a separate address space. The Tivoli Workload Scheduler for z/OS server communicates with Tivoli Workload Scheduler for z/OS and passes the data and return codes back to the connector.

The security model that is implemented for Tivoli Workload Scheduler Job Scheduling Console is similar to that already implemented by other Tivoli products that have been ported to z/OS (namely IBM Tivoli User Administration and IBM Tivoli Security Management). The Tivoli Framework security handles the initial user verification, but it is necessary to obtain a valid corresponding RACF user ID to be able to work with the security environment in z/OS.

Even though it is possible to have one server started task handling end-to-end scheduling, JSC communication, and even APPC communication, we recommend having a server started task dedicated to JSC communication (SERVOPTS PROTOCOL(JSC)). This has the advantage that you do not have to stop the whole end-to-end server process if only the JSC communication has to be restarted.

We will install a server dedicated to JSC communication and call it the JSC server.


When JSC is used to access the Tivoli Workload Scheduler for z/OS controller through the JSC server, the JSC server uses the Tivoli Workload Scheduler for z/OS program interface (PIF) to interface with the controller.

You can find an example of the started task procedure in installation member EQQSER in the sample library that is generated by the EQQJOBS installation aid. An example of the initialization statements can be found in the EQQSERP member in the sample library generated by the EQQJOBS installation aid. After the installation of the JSC server, you can get almost the same functionality from the JSC as you have with the legacy Tivoli Workload Scheduler for z/OS ISPF interface.

Configure and start the JSC server and verify the start

First, create the started task procedure for the JSC server. The EQQSER member in the sample library can be used. Take the following into consideration when customizing the EQQSER sample:

• Make sure that the C runtime library (CEE.SCEERUN) is concatenated in STEPLIB in the server JCL, if it is not in the LINKLIST.

• If you have multiple TCP/IP stacks, or if the name of the procedure that was used to start the TCPIP address space is different from TCPIP, introduce the SYSTCPD DD card pointing to a data set containing the TCPIPJOBNAME parameter. (See DD SYSTCPD in the TCP/IP manuals.)

• Customize the JSC server initialization parameters file. (See the EQQPARM DD statement in the server JCL.) The installation member EQQSERP already contains a template.

For information about the JSC server parameters, refer to IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265.

We used the JSC server initialization parameters shown in Example 4-30. Also see Figure 4-34 on page 249.

Example 4-30 The JSC server initialization parameter

/**********************************************************************/
/* SERVOPTS: run-time options for the TWSCJSC started task            */
/**********************************************************************/
SERVOPTS SUBSYS(TWSC)
/*--------------------------------------------------------------------*/
/* TCP/IP server is needed for JSC GUI usage. Protocol=JSC             */
/*--------------------------------------------------------------------*/
         PROTOCOL(JSC)        /* This server is for JSC   */
         JSCHOSTNAME(TWSCJSC) /* DNS name for JSC         */
         USERMAP(USERS)       /* RACF user / TMF adm. map */
         PORTNUMBER(38888)    /* Portnumber for JSC comm. */
         CODEPAGE(IBM-037)    /* Codep. EBCIDIC/ASCII tr. */
/*--------------------------------------------------------------------*/
/* CALENDAR parameter is mandatory for server when using TCP/IP        */
/* server.                                                              */
/*--------------------------------------------------------------------*/
INIT     ADOICHK(YES)         /* ADOI Check ON            */
         CALENDAR(DEFAULT)    /* Use DEFAULT calendar     */
         HIGHDATE(711231)     /* Default HIGHDATE         */

The SUBSYS(), PROTOCOL(JSC), CALENDAR(), and HIGHDATE() parameters are mandatory for using the Tivoli Job Scheduling Console. Make sure that the port you try to use is not reserved by another application.

If JSCHOSTNAME() is not specified, the default is to use the host name that is returned by the operating system.

Remember that you always have to define OMVS segments for the user IDs of Tivoli Workload Scheduler for z/OS server started tasks.
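For example, an OMVS segment could be added with a RACF command along the following lines (this assumes the JSC server started task runs under a user ID named TWSCJSC; the UID number and home directory are illustrative only, so follow your installation standards):

   ALTUSER TWSCJSC OMVS(UID(3882) HOME('/u/twscjsc') PROGRAM('/bin/sh'))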

Optionally, the JSC server started task name can be defined in the Tivoli Workload Scheduler for z/OS controller OPCOPTS SERVERS() parameter to let the controller start and stop the JSC server task when the controller itself is started and stopped (Figure 4-34 on page 249).
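In our configuration (see Figure 4-34 on page 249), the relevant controller OPCOPTS keywords look like this:

   OPCOPTS TPLGYSRV(TWSCE2E)
           SERVERS(TWSCJSC,TWSCE2E)
           ...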

Note: We got an error when trying to use the JSCHOSTNAME with a host name instead of an IP address (EQQPH18E COMMUNICATION FAILED). This problem is fixed with APAR PQ83670.


Figure 4-34 JSC Server that communicates with TWSC controller

Note: As Figure 4-34 illustrates, it is possible to run many servers, but only one server can be the end-to-end server (also called the topology server). Specify this server using the TPLGYSRV controller option. The SERVERS option specifies the servers that will be started when the controller starts.

After the configuration and customization of the JSC server initialization statements and the JSC server started task procedure, we started the JSC server and saw the messages in Example 4-31 during start.

Example 4-31 Messages in EQQMLOG for JSC server when started

EQQZ005I OPC SUBTASK SERVER IS BEING STARTED
EQQPH09I THE SERVER IS USING THE TCP/IP PROTOCOL
EQQPH28I THE TCP/IP STACK IS AVAILABLE
EQQPH37I SERVER CAN RECEIVE JSC REQUESTS
EQQPH00I SERVER TASK HAS STARTED

Controlling access to Tivoli Workload Scheduler for z/OS from the JSC

The Tivoli Framework performs a security check, verifying the user ID and password, when a user tries to use the Job Scheduling Console. The Tivoli Framework associates each user ID and password with an administrator. Tivoli Workload Scheduler for z/OS resources are protected by RACF.



The JSC user should have to enter only a single user ID and password combination, not one at the Tivoli Framework level and then another at the Tivoli Workload Scheduler for z/OS level.

The security model is based on having the Tivoli Framework security handle the initial user verification while obtaining a valid corresponding RACF user ID. This makes it possible for the user to work with the security environment in z/OS. The z/OS security is based on a table mapping the Tivoli Framework administrator to an RACF user ID. When a Tivoli Framework user tries to initiate an action on z/OS, the Tivoli administrator ID is used as a key to obtain the corresponding RACF user ID.

The JSC server uses the RACF user ID to build the RACF environment to access Tivoli Workload Scheduler for z/OS services, so the Tivoli Administrator must relate, or map, to a corresponding RACF user ID.

There are two ways of getting the RACF user ID:

� The first way is by using the RACF Tivoli-supplied predefined resource class, TMEADMIN.

Consult the section about implementing security in Tivoli Workload Scheduler for z/OS in IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265, for the complete setup of the TMEADMIN RACF class. (A rough sketch of the RACF definitions involved follows this list.)

� The other way is to use a new OPC Server Initialization Parameter to define a member in the file identified by the EQQPARM DD statement in the server startup job.

This member contains all of the associations for a TME user with an RACF user ID. You should set the parameter USERMAP in the JSC server SERVOPTS Initialization Parameter to define the member name.
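For the first approach, the RACF definitions involved look roughly like the following sketch: activate the TMEADMIN class, define a profile for the TMF administrator, and put the associated RACF user ID in the profile's APPLDATA field. The profile-name and APPLDATA conventions shown here are assumptions based on our USERMAP example (MIKE@M-REGION mapped to MAL); verify the exact format in IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265, before using it:

SETROPTS CLASSACT(TMEADMIN)
RDEFINE TMEADMIN MIKE@M-REGION APPLDATA('MAL')
SETROPTS RACLIST(TMEADMIN) REFRESH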

Use of the USERMAP(USERS)

We used the JSC server SERVOPTS USERMAP(USERS) parameter to define the mapping between Tivoli Framework Administrators and z/OS RACF users.

USERMAP(USERS) means that the definitions (mappings) are defined in a member named USERS in the EQQPARM library. See Figure 4-35 on page 251.


Figure 4-35 The relation between TMF administrators and RACF users via USERMAP

For example, in the definitions in the USERS member in EQQPARM in Figure 4-35, TMF administrator MIKE@M-REGION is mapped to RACF user MAL (MAL is a member of RACF group TIVOLI). If MIKE logs in to the TMF region named M-REGION and works with the Tivoli Workload Scheduler for z/OS controller from the JSC, he will have the access defined for RACF user MAL. In other words, the USER definition maps TMF Administrator MIKE@M-REGION to RACF user MAL.

Whatever MIKE@M-REGION does from the JSC against the controller is performed with the RACF authorization defined for the MAL user. All RACF logging is also done for the MAL user.

The TMF Administrator is defined in TMF with a certain authorization level. The TMF Administrator must have the USER role to be able to use the Tivoli Workload Scheduler for z/OS connector.

The USERS member shown in Figure 4-35 contains one USER entry for each TMF administrator:

USER 'ROOT@M-REGION'   RACFUSER(TMF) RACFGROUP(TIVOLI)
USER 'MIKE@M-REGION'   RACFUSER(MAL) RACFGROUP(TIVOLI)
USER 'FINN@M-REGION'   RACFUSER(FBK) RACFGROUP(TIVOLI)
USER 'STEFAN@M-REGION' RACFUSER(SF)  RACFGROUP(TIVOLI)
...

As Figure 4-35 illustrates, a JSC user who connects to the computer running the OPC connector is identified as a local TMF administrator. When the user attempts to view or modify the OPC databases or plan, the JSC server task uses RACF to determine whether to authorize the action. If the USERMAP option is specified in the SERVOPTS of the JSC server task, the JSC server uses this map to associate TMF administrators with RACF users; it is also possible to activate the TMEADMIN RACF class and add the TMF administrator names directly there. For auditing purposes, it is recommended that one TMF administrator be defined for each RACF user.


4.7.2 Installing and configuring Tivoli Management Framework 4.1

As we have already discussed, the new Job Scheduling Console interface requires a few other components to be installed in order to communicate with the scheduling engines. If you are still not sure how all of the pieces fit together, review 2.4, “Job Scheduling Console and related components” on page 89.

When installing Tivoli Workload Scheduler 8.2 using the ISMP installer GUI, you are given the option to install the Tivoli Workload Scheduler connector. If you choose this option, the installer program automatically installs the following components:

� Tivoli Management Framework 4.1, configured as a TMR server
� Job Scheduling Services 1.2
� Tivoli Workload Scheduler Connector 8.2

The Tivoli Workload Scheduler 8.2 installer GUI will also automatically create a Tivoli Workload Scheduler connector instance and a TMF administrator associated with your Tivoli Workload Scheduler user. Letting the installer do the work of installing and configuring these components is generally a very good idea because it saves the trouble of performing each of these steps individually.

If you choose not to let the Tivoli Workload Scheduler 8.2 installer install and configure these components for you, you can install them later. The following instructions for getting a TMR server installed and up and running, and to get Job Scheduling Services and the connectors installed, are primarily intended for environments that do not already have a TMR server, or one in which a separate TMR server will be installed for IBM Tivoli Workload Scheduler.

Notes:

� If you decide to use the USERMAP to map TMF administrators to RACF users, you should be aware that users with update access to the member with the mapping definitions (the USERS member in our example) can get access to the Tivoli Workload Scheduler for z/OS controller by editing the mapping definitions.

To avoid any misuse, make sure that the member with the mapping definitions is protected according to your security standards. Or use the standard RACF TMEADMIN resource class in RACF to do the mapping.

� To be able to audit what different JSC users do in Tivoli Workload Scheduler for z/OS, we recommend that you establish a one-to-one relationship between the TMF Administrator and the corresponding RACF user. (That is, you should not allow multiple users to use the same TMF Administrator by adding several different logons to one TMF Administrator.)


In the last part of this section, we discuss in more detail the steps specific to end-to-end scheduling: creating connector instances and TMF administrators.

The Tivoli Management Framework is easy to install. If you already have the Framework installed in your organization, it is only necessary to install the components specific to Tivoli Workload Scheduler (the JSS and connectors) on a node in your existing Tivoli Managed Region. Alternatively, you may prefer to install a stand-alone TMR server solely for the purpose of providing the connection between the IBM Tivoli Workload Scheduler suite and its interface, the JSC. If your existing TMR is busy with other operations, such as monitoring or software distribution, you might want to consider installing a separate stand-alone TMR server for Tivoli Workload Scheduler. If you decide to install the JSS and connectors on an existing TMR server or managed node, you can skip to “Install Job Scheduling Services” and “Installing the connectors” on page 254.

4.7.3 Alternate method using Tivoli Management Framework 3.7.1

If for some reason you need to use the older 3.7.1 version of TMF instead of the newer 4.1 version, you must first install TMF 3.7B and then upgrade it to 3.7.1.

Installing Tivoli Management Framework 3.7B

The first step is to install Tivoli Management Framework Version 3.7B. For instructions, refer to the Tivoli Framework 3.7.1 Installation Guide, GC32-0395.

Upgrade to Tivoli Management Framework 3.7.1

Version 3.7.1 of Tivoli Management Framework is required by Job Scheduling Services 1.3, so if you do not already have Version 3.7.1 of the Framework installed, you must upgrade to it.

Install Job Scheduling Services

Follow the instructions in IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257, to install JSS. As we discussed in Chapter 2, “End-to-end scheduling architecture” on page 25, JSS is simply a library used by the Framework, and it is a prerequisite of the connectors.

Note: If you are installing TMF 3.7B on AIX 5.1 or later, you will need an updated version of the TMF 3.7B CD because the original TMF 3.7B CD did not correctly recognize AIX 5 as a valid target platform.

Order PTF U482278 to get this updated TMF 3.7B CD.


The hardware and software prerequisites for the Job Scheduling Services are:

� Software

IBM Tivoli Management Framework: Version 3.7.1 or later for Microsoft® Windows, AIX, HP-UX, Sun Solaris, and Linux.

� Hardware

– CD-ROM drive for installation

– Approximately 4 MB of free disk space

Job Scheduling Services is supported on the following platforms:

� Microsoft Windows

– Windows NT 4.0 with Service Pack 6

– Windows 2000 Server or Advanced Server with Service Pack 3

� IBM AIX Version 4.3.3, 5.1, 5.2

� HP-UX PA-RISC Version 11.0, 11i

� Sun Solaris Version 7, 8, 9

� Linux Red Hat Version 7.2, 7.3

� SuSE Linux Enterprise Server for x86 Version 8

� SuSE Linux Enterprise Server for S/390® and zSeries (kernel 2.4, 31–bit) Version 7 (new with this version)

� Red Hat Linux for S/390 (31–bit) Version 7 (new with this version)

Installing the connectors

Follow the installation instructions in the IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257.

When installing the Tivoli Workload Scheduler connector, we recommend that you do not select the Create Instance check box. Create the instances after the connector has been installed.

The hardware and software prerequisites for the Tivoli Workload Scheduler for z/OS connector are:

� Software:

– IBM Tivoli Management Framework: Version 3.7.1 or later

– Tivoli Workload Scheduler for z/OS 8.1, or Tivoli OPC 2.1 or later

– Tivoli Job Scheduling Services 1.3

– TCP/IP network communications


– A Tivoli Workload Scheduler for z/OS user account (required), which you can create beforehand or have the setup program create for you

� Hardware:

– CD-ROM drive for installation.

– Approximately 3 MB of free disk space for the installation. In addition, the Tivoli Workload Scheduler for z/OS connector produces log files and temporary files, which are placed on the local hard drive.

The Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS connectors are supported on the following platforms:

� Microsoft Windows

– Windows NT 4.0 with Service Pack 6

– Windows 2000 Server or Advanced Server with Service Pack 3

� IBM AIX Version 4.3.3, 5.1, 5.2

� HP-UX PA-RISC Version 11.0, 11i

� Sun Solaris Version 7, 8, 9

� Linux Red Hat Version 7.2, 7.3

� SuSE Linux Enterprise Server for x86 Version 8

� SuSE Linux Enterprise Server for S/390 and zSeries (kernel 2.4, 31–bit) Version 7 (new with this version)

� Red Hat Linux for S/390 (31–bit) Version 7 (new with this version)

For more information, see IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes, Feature level 1.3, SC32-1258.

4.7.4 Creating connector instances

As we discussed in Chapter 2, “End-to-end scheduling architecture” on page 25, the connectors tell the Framework how to communicate with the different types of scheduling engine.

To control the workload of the entire end-to-end scheduling network from the Tivoli Workload Scheduler for z/OS controller, it is necessary to create a Tivoli Workload Scheduler for z/OS connector instance to connect to that controller.

It may also be a good idea to create a Tivoli Workload Scheduler connector instance on a fault-tolerant agent or domain manager. Sometimes the status may get out of sync between an FTA or DM and the Tivoli Workload Scheduler for z/OS controller. When this happens, it is helpful to be able to connect directly to that agent and get the status directly from there. Retrieving job logs (standard lists) is also much faster through a direct connection to the FTA than through the Tivoli Workload Scheduler for z/OS controller.

Creating a Tivoli Workload Scheduler for z/OS connector instance

You have to create at least one Tivoli Workload Scheduler for z/OS connector instance for each z/OS controller that you want to access with the Tivoli Job Scheduling Console. This is done using the wopcconn command.

In our test environment, we wanted to be able to connect to a Tivoli Workload Scheduler for z/OS controller running on a mainframe with the host name twscjsc. On the mainframe, the Tivoli Workload Scheduler for z/OS TCP/IP server listens on TCP port 5000. london is the name of the TMR-managed node where we created the connector instance. We called the new connector instance TWSC. Here is the command we used:

wopcconn -create -h london -e TWSC -a twscjsc -p 5000

The result of this will be that when we use JSC to connect to london, a new connector instance called TWSC appears in the Job Scheduling list on the left side of the window. We can access the Tivoli Workload Scheduler for z/OS scheduling engine by clicking that new entry in the list.

It is also possible to run wopcconn in interactive mode. To do this, just run wopcconn with no arguments.

Refer to Appendix A, “Connector reference” on page 343 for a detailed description of the wopcconn command.

Creating a Tivoli Workload Scheduler connector instance

Remember that a Tivoli Workload Scheduler connector instance must have local access to the Tivoli Workload Scheduler engine with which it is associated. This is done using the wtwsconn.sh command.

In our test environment, we wanted to be able to use JSC to connect to a Tivoli Workload Scheduler engine on the host london. london has two Tivoli Workload Scheduler engines installed, so we had to be sure that the path we specified when creating the connector pointed to the correct Tivoli Workload Scheduler engine. We called the new connector instance TWS-A to reflect that this connector instance would be associated with the TWS-A engine on this host (as opposed to the other Tivoli Workload Scheduler engine, TWS-B). Here is the command we used:

wtwsconn.sh -create -h london -n TWS-A -t /tivoli/TWS/tws-a


The result is that when we use JSC to connect to london, a new connector instance called TWS-A appears in the Job Scheduling list on the left side of the window. We can access the TWS-A scheduling engine by clicking that new entry in the list.

Refer to Appendix A, “Connector reference” on page 343 for a detailed description of the wtwsconn.sh command.

4.7.5 Creating TMF administrators for Tivoli Workload Scheduler

When a user logs onto the Job Scheduling Console, the Tivoli Management Framework verifies that the user’s logon is listed in an existing TMF administrator.

TMF administrators for Tivoli Workload Scheduler

A Tivoli Management Framework administrator must be created for the Tivoli Workload Scheduler user. Additional TMF administrators can be created for other users who will access Tivoli Workload Scheduler using JSC.

TMF administrators for Tivoli Workload Scheduler for z/OS

The Tivoli Workload Scheduler for z/OS TCP/IP server associates the Tivoli administrator to an RACF user. If you want to be able to identify each user uniquely, one Tivoli Administrator should be defined for each RACF user. If operating system users corresponding to the RACF users do not already exist on the TMR server or on a managed node in the TMR, you must first create one OS user for each Tivoli administrator that will be defined. These users can be created on the TMR server or on any managed node in the TMR. After you have created those users, you can simply add those users’ logins to the TMF administrators that you create.
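For example, on an AIX TMR server or managed node, the OS users could be created with the standard commands sketched below. The user names (mal and fbk) simply mirror the RACF user IDs from our USERMAP example and are only placeholders; follow your own standards for groups, home directories, and passwords:

# Create one OS user for each TMF administrator login (AIX)
mkuser mal
mkuser fbk
# Set an initial password for each user
passwd mal
passwd fbk

As noted in the Important box later in this section, avoid options that force the users to change their passwords at the first logon.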

Creating TMF administrators

If Tivoli Workload Scheduler 8.2 is installed using the graphical ISMP installer, you have the option of installing the Tivoli Workload Scheduler connector automatically during Tivoli Workload Scheduler installation. If you choose this option, the installer will create one TMF administrator automatically.

We still recommend that you create one Tivoli Management Framework Administrator for each user who will use JSC.

Important: When creating users or setting their passwords, disable any option that requires the user to set a password at the first logon. If the operating system requires the user’s password to change at the first logon, the user will have to do this before he will be able to log on via the Job Scheduling Console.


Perform the following steps from the Tivoli desktop to create a new TMF administrator:

1. Double-click the Administrators icon and select Create → Administrator, as shown in Figure 4-36.

Figure 4-36 Create Administrator

2. Enter the Tivoli Administrator name you want to create.

3. Click Set Logins to specify the login name (Figure 4-37 on page 259). This field is important because it is used to determine the UID with which many operations are performed and represents a UID at the operating system level.


Figure 4-37 Create Administrator

4. Type in the login name and press Enter. Click Set & Close (Figure 4-38).

Figure 4-38 Set Login Names


5. Enter the name of the group. This field is used to determine the GID under which many operations are performed. Click Set & Close.

The TMR roles you assign to the administrator depend on the actions the user will need to perform.

Table 4-7 Authorization roles required for connector actions

An Administrator with this role...   Can perform these actions
User                                 Use the instance
                                     View instance settings
Admin, senior, or super              Use the instance
                                     View instance settings
                                     Create and remove instances
                                     Change instance settings
                                     Start and stop instances

6. Click the Set TMR Roles icon and add the desired role or roles (Figure 4-39).

Figure 4-39 Set TMR roles

7. Click Set & Close to finish your input. This returns you to the Administrators desktop (Figure 4-40 on page 261).


Figure 4-40 Tivoli Administrator desktop

4.7.6 Installing the Job Scheduling Console

Tivoli Workload Scheduler for z/OS is shipped with the latest version (Version 1.3) of the Job Scheduling Console. We recommend that you use this version because it provides the best functionality and stability.

The JSC can be installed on the following platforms:

� Microsoft Windows

– Windows NT 4.0 with Service Pack 6

– Windows 2000 Server, Professional and Advanced Server with Service Pack 3

– Windows XP Professional with Service Pack 1

– Windows 2000 Terminal Services

� IBM AIX Version 4.3.3, 5.1, 5.2

� HP-UX PA-RISC 11.0, 11i

� Sun Solaris Version 7, 8, 9

� Linux Red Hat Version 7.2, 7.3

� SuSE Linux Enterprise Server for x86 Version 8


Hardware and software prerequisites

The following are the hardware and software prerequisites for the Job Scheduling Console.

For use with Tivoli Workload Scheduler for z/OS

� Software:

– IBM Tivoli Workload Scheduler for z/OS connector 1.3

– IBM Tivoli Workload Scheduler for z/OS 8.1 or OPC 2.1 or later

– Tivoli Job Scheduling Services 1.3

– TCP/IP network communication

– Java Runtime Environment Version 1.3

� Hardware:

– CD-ROM drive for installation

– 70 MB disk space for full installation, or 34 MB for customized (English base) installation plus approximately 4 MB for each additional language.

For use with Tivoli Workload Scheduler

� Software:

– IBM Tivoli Workload Scheduler connector 8.2

– IBM Tivoli Workload Scheduler 8.2

– Tivoli Job Scheduling Services 1.3

– TCP/IP network communication

– Java Runtime Environment Version 1.3

� Hardware:

– CD-ROM drive for installation

– 70 MB disk space for full installation, or 34 MB for customized (English base) installation plus approximately 4 MB for each additional language

Note that the Tivoli Workload Scheduler for z/OS connector can support any Operations Planning and Control V2 release level as well as Tivoli Workload Scheduler for z/OS 8.1.

For the most recent software requirements, refer to IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes, Feature level 1.3, SC32-1258.

Note: You must use the same versions of the scheduler and the connector.


The following steps describe how to install the Job Scheduling Console:

1. Insert the Tivoli Job Scheduling Console CD-ROM into the system CD-ROM drive or mount the CD-ROM from a drive on a remote system. For this example, the CD-ROM drive is drive F.

2. Perform the following steps to run the installation command:

– On Windows:

• From the Start menu, select Run to display the Run dialog.

• In the Open field, enter F:\Install

– On AIX:

• Type the following command:

jre -nojit -cp install.zip install

• If that does not work, try:

jre -nojit -classpath [path to] classes.zip:install.zip install

• If that does not work either, on sh-like shells, try:

cd [to directory where install.zip is located]
CLASSPATH=[path to]classes.zip:install.zip
export CLASSPATH
java -nojit install

• Or, for csh-like shells, try:

cd [to directory where install.zip is located]
setenv CLASSPATH [path to]classes.zip:install.zip
java -nojit install

– On Sun Solaris:

• Change to the directory where you downloaded install.zip before running the installer.

• Enter sh install.bin.

3. The splash window is displayed. Follow the prompts to complete the installation. Refer to IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257 for more information about installation of JSC.

Starting the Job Scheduling Console

Use the following to start the JSC, depending on your platform:

� On Windows

Depending on the shortcut location that you specified during installation, click the JS Console icon or select the corresponding item in the Start menu.


� On Windows 95 and Windows 98

You can also start the JSC from the command line. Type runcon from the \bin\java subdirectory of the installation path.

� On AIX

Type ./AIXconsole.sh

� On Sun Solaris

Type ./SUNconsole.sh

A Tivoli Job Scheduling Console start-up window is displayed (Figure 4-41).

Figure 4-41 JSC login window

Enter the following information and click the OK button to proceed:

User name The user name of the person who has permission to use the Tivoli Workload Scheduler for z/OS connector instances

Password The password for the Tivoli Framework administrator

Host Machine The name of the Tivoli-managed node that runs the Tivoli Workload Scheduler for z/OS connector


Chapter 5. End-to-end implementation scenarios and examples

In this chapter, we describe different scenarios and examples for Tivoli Workload Scheduler for z/OS end-to-end scheduling.

We describe and show:

� “Description of our environment and systems” on page 266

� “Creation of the Symphony file in detail” on page 273

� “Migrating Tivoli OPC tracker agents to end-to-end scheduling” on page 274

� “Conversion from Tivoli Workload Scheduler network to Tivoli Workload Scheduler for z/OS managed network” on page 288

� “Tivoli Workload Scheduler for z/OS end-to-end fail-over scenarios” on page 303

� “Backup and maintenance guidelines for FTAs” on page 318

� “Security on fault-tolerant agents” on page 323

� “End-to-end scheduling tips and tricks” on page 331


5.1 Description of our environment and systems

In this section, we describe the systems and configuration we used for the end-to-end test scenarios when working on this redbook.

Figure 5-1 shows the systems and configuration that are used for the end-to-end scenarios. All of the systems are connected using TCP/IP connections.

Figure 5-1 Systems and configuration used in end-to-end scheduling test scenarios

The figure shows the Tivoli Workload Scheduler for z/OS controller (OPCMASTER) and its standby engines running in a z/OS sysplex (wtsc63, wtsc64, and wtsc65); three first-level domains (UK, Europe, and Nordic) with domain managers london (U000), geneva (E000), and stockholm (N000); the fault-tolerant agents belfast, edinburgh, rome, amsterdam, oslo, helsinki, and copenhagen (the Nordic workstations are reached through a firewall and router); and two extended agents, UX01 (unixlocl) and UX02 (unixrsh), hosted on belfast.

We defined the following started task procedure names in z/OS:

TWST      For the Tivoli Workload Scheduler for z/OS agent
TWSC      For the Tivoli Workload Scheduler for z/OS engine
TWSCE2E   For the end-to-end server
TWSCJSC   For the Job Scheduling Console server

In the following sections we have listed the started task procedure for our end-to-end server and the different initialization statements defined for the end-to-end scheduling network in Figure 5-1.


Started task procedure for the end-to-end server (TWSCE2E)

Example 5-1 shows the started task procedure for the Tivoli Workload Scheduler for z/OS end-to-end server, TWSCE2E.

Example 5-1 Started task procedure for the end-to-end server TWSCE2E

//TWSCE2E EXEC PGM=EQQSERVR,REGION=64M,TIME=1440
//* NOTE: 64M IS THE MINIMUM REGION SIZE FOR E2E (SEE PQ78043)
//*********************************************************************
//* THIS IS A STARTED TASK PROCEDURE FOR AN OPC SERVER DEDICATED
//* FOR END-TO-END SCHEDULING.
//*********************************************************************
//STEPLIB  DD DISP=SHR,DSN=EQQ.SEQQLMD0
//EQQMLIB  DD DISP=SHR,DSN=EQQ.SEQQMSG0
//EQQMLOG  DD SYSOUT=*
//EQQPARM  DD DISP=SHR,DSN=TWS.INST.PARM(TWSCE2E)
//SYSMDUMP DD DISP=SHR,DSN=TWS.INST.SYSDUMPS
//EQQDUMP  DD DISP=SHR,DSN=TWS.INST.EQQDUMPS
//EQQTWSIN DD DISP=SHR,DSN=TWS.INST.TWSC.TWSIN   -> INPUT TO CONTROLLER
//EQQTWSOU DD DISP=SHR,DSN=TWS.INST.TWSC.TWSOU   -> OUTPUT FROM CONT.
//EQQTWSCS DD DISP=SHR,DSN=TWS.INST.CS           -> CENTRALIZED SCRIPTS

The end-to-end server (TWSCE2E) initialization statements

Example 5-2 defines the initialization statements for the end-to-end scheduling network shown in Figure 5-1 on page 266.

Example 5-2 End-to-end server (TWSCE2E) initialization statements

/*********************************************************************/
/* SERVOPTS: run-time options for end-to-end server                  */
/*********************************************************************/
SERVOPTS SUBSYS(TWSC)
/*-------------------------------------------------------------------*/
/* TCP/IP server is needed for end-to-end usage.                     */
/*-------------------------------------------------------------------*/
         PROTOCOL(E2E)             /* This server is for E2E "only"*/
         TPLGYPRM(TOPOLOGY)        /* E2E topology definition mbr. */
/*-------------------------------------------------------------------*/
/* If you want to use Automatic Restart manager you must specify:    */
/*-------------------------------------------------------------------*/
         ARM(YES)                  /* Use ARM to restart if abend  */


Example 5-3 shows the TOPOLOGY initialization statements.

Example 5-3 TOPOLOGY initialization statements; member name is TOPOLOGY

/**********************************************************************/
/* TOPOLOGY: End-to-End options                                       */
/**********************************************************************/
TOPOLOGY TPLGYMEM(TPDOMAIN)              /* Mbr. with domain+FTA descr.*/
         USRMEM(TPUSER)                  /* Mbr. with Windows user+pw  */
         BINDIR('/usr/lpp/TWS/V8R2M0')   /* The TWS for z/OS inst. dir */
         WRKDIR('/tws/twsce2ew')         /* The TWS for z/OS work dir  */
         LOGLINES(200)                   /* Lines sent by joblog retr. */
         TRCDAYS(10)                     /* Days to keep stdlist files */
         CODEPAGE(IBM-037)               /* Codepage for translator    */
         TCPIPJOBNAME(TCPIP)             /* Name of TCPIP started task */
         ENABLELISTSECCHK(N)             /* CHECK SEC FILE FOR LIST?   */
         PLANAUDITLEVEL(0)               /* Audit level on DMs&FTAs    */
         GRANTLOGONASBATCH(Y)            /* Automatically grant right? */
         HOSTNAME(twsce2e.itso.ibm.com)  /* DNS hostname for server    */
         PORTNUMBER(31111)               /* Port for netman in USS     */

Example 5-4 shows DOMREC and CPUREC initialization statements for the network in Figure 5-1 on page 266.

Example 5-4 Domain and fault-tolerant agent definitions; member name is TPDOMAIN

/**********************************************************************/
/* DOMREC: Defines the domains in the distributed Tivoli Workload     */
/* Scheduler network                                                  */
/**********************************************************************/
/*--------------------------------------------------------------------*/
/* Specify one DOMREC for each domain in the distributed network,     */
/* with the exception of the master domain (whose name is MASTERDM    */
/* and consists of the TWS for z/OS controller).                      */
/*--------------------------------------------------------------------*/
DOMREC    DOMAIN(UK)                     /* Domain name   = UK           */
          DOMMNGR(U000)                  /* Domain manager= London       */
          DOMPARENT(MASTERDM)            /* Domain parent = MASTERDM     */
DOMREC    DOMAIN(Europe)                 /* Domain name   = Europe       */
          DOMMNGR(E000)                  /* Domain manager= Geneva       */
          DOMPARENT(MASTERDM)            /* Domain parent = MASTERDM     */
DOMREC    DOMAIN(Nordic)                 /* Domain name   = Nordic       */
          DOMMNGR(N000)                  /* Domain manager= Stockholm    */
          DOMPARENT(MASTERDM)            /* Domain parent = MASTERDM     */
/**********************************************************************/
/* CPUREC: Defines the workstations in the distributed Tivoli         */
/* Workload Scheduler network                                         */
/**********************************************************************/
/*--------------------------------------------------------------------*/
/* You must specify one CPUREC for each workstation in the TWS        */
/* network, with the exception of the OPC Controller, which acts as   */
/* Master Domain Manager.                                             */
/*--------------------------------------------------------------------*/
CPUREC    CPUNAME(U000)                  /* DM of UK domain              */
          CPUOS(AIX)                     /* AIX operating system         */
          CPUNODE(london.itsc.austin.ibm.com)    /* Hostname of CPU      */
          CPUTCPIP(31182)                /* TCP port number of NETMAN    */
          CPUDOMAIN(UK)                  /* The TWS domain name for CPU  */
          CPUTYPE(FTA)                   /* CPU type: FTA/SAGENT/XAGENT  */
          CPUAUTOLNK(ON)                 /* Autolink is on for this CPU  */
          CPUFULLSTAT(ON)                /* Full status on for DM        */
          CPURESDEP(ON)                  /* Resolve dependencies on for DM */
          CPULIMIT(20)                   /* Number of jobs in parallel   */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(maestro)               /* Default user for jobs on CPU */
CPUREC    CPUNAME(E000)                  /* DM of Europe domain          */
          CPUOS(WNT)                     /* Windows 2000 operating system */
          CPUNODE(geneva.itsc.austin.ibm.com)    /* Hostname of CPU      */
          CPUTCPIP(31182)                /* TCP port number of NETMAN    */
          CPUDOMAIN(Europe)              /* The TWS domain name for CPU  */
          CPUTYPE(FTA)                   /* CPU type: FTA/SAGENT/XAGENT  */
          CPUAUTOLNK(ON)                 /* Autolink is on for this CPU  */
          CPUFULLSTAT(ON)                /* Full status on for DM        */
          CPURESDEP(ON)                  /* Resolve dependencies on for DM */
          CPULIMIT(20)                   /* Number of jobs in parallel   */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(tws)                   /* Default user for jobs on CPU */
CPUREC    CPUNAME(N000)                  /* DM of Nordic domain          */
          CPUOS(AIX)                     /* AIX operating system         */
          CPUNODE(stockholm.itsc.austin.ibm.com) /* Hostname of CPU      */
          CPUTCPIP(31182)                /* TCP port number of NETMAN    */
          CPUDOMAIN(Nordic)              /* The TWS domain name for CPU  */
          CPUTYPE(FTA)                   /* CPU type: FTA/SAGENT/XAGENT  */
          CPUAUTOLNK(ON)                 /* Autolink is on for this CPU  */
          CPUFULLSTAT(ON)                /* Full status on for DM        */
          CPURESDEP(ON)                  /* Resolve dependencies on for DM */
          CPULIMIT(20)                   /* Number of jobs in parallel   */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(tws)                   /* Default user for jobs on CPU */
CPUREC    CPUNAME(U001)                  /* 1st FTA in UK domain         */
          CPUOS(AIX)                     /* AIX operating system         */
          CPUNODE(belfast.itsc.austin.ibm.com)   /* Hostname of CPU      */
          CPUTCPIP(31182)                /* TCP port number of NETMAN    */
          CPUDOMAIN(UK)                  /* The TWS domain name for CPU  */
          CPUTYPE(FTA)                   /* CPU type: FTA/SAGENT/XAGENT  */
          CPUAUTOLNK(ON)                 /* Autolink is on for this CPU  */
          CPUFULLSTAT(OFF)               /* Full status off for FTA      */
          CPURESDEP(OFF)                 /* Resolve dep. off for FTA     */
          CPULIMIT(20)                   /* Number of jobs in parallel   */
          CPUSERVER(1)                   /* Not allowed for DM/XAGENT CPU */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(tws)                   /* Default user for jobs on CPU */
CPUREC    CPUNAME(U002)                  /* 2nd FTA in UK domain         */
          CPUTYPE(FTA)                   /* CPU type: FTA/SAGENT/XAGENT  */
          CPUOS(WNT)                     /* Windows 2000 operating system */
          CPUNODE(edinburgh.itsc.austin.ibm.com) /* Hostname of CPU      */
          CPUTCPIP(31182)                /* TCP port number of NETMAN    */
          CPUDOMAIN(UK)                  /* The TWS domain name for CPU  */
          CPUAUTOLNK(ON)                 /* Autolink is on for this CPU  */
          CPUFULLSTAT(OFF)               /* Full status off for FTA      */
          CPURESDEP(OFF)                 /* Resolve dep. off for FTA     */
          CPULIMIT(20)                   /* Number of jobs in parallel   */
          CPUSERVER(2)                   /* Not allowed for DM/XAGENT CPU */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(tws)                   /* Default user for jobs on CPU */
CPUREC    CPUNAME(E001)                  /* 1st FTA in Europe domain     */
          CPUOS(AIX)                     /* AIX operating system         */
          CPUNODE(rome.itsc.austin.ibm.com)      /* Hostname of CPU      */
          CPUTCPIP(31182)                /* TCP port number of NETMAN    */
          CPUDOMAIN(Europe)              /* The TWS domain name for CPU  */
          CPUTYPE(FTA)                   /* CPU type: FTA/SAGENT/XAGENT  */
          CPUAUTOLNK(ON)                 /* Autolink is on for this CPU  */
          CPUFULLSTAT(OFF)               /* Full status off for FTA      */
          CPURESDEP(OFF)                 /* Resolve dep. off for FTA     */
          CPULIMIT(20)                   /* Number of jobs in parallel   */
          CPUSERVER(1)                   /* Not allowed for domain mng.  */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(tws)                   /* Default user for jobs on CPU */
CPUREC    CPUNAME(E002)                  /* 2nd FTA in Europe domain     */
          CPUOS(WNT)                     /* Windows 2000 operating system */
          CPUNODE(amsterdam.itsc.austin.ibm.com) /* Hostname of CPU      */
          CPUTCPIP(31182)                /* TCP port number of NETMAN    */
          CPUDOMAIN(Europe)              /* The TWS domain name for CPU  */
          CPUTYPE(FTA)                   /* CPU type: FTA/SAGENT/XAGENT  */
          CPUAUTOLNK(ON)                 /* Autolink is on for this CPU  */
          CPUFULLSTAT(OFF)               /* Full status off for FTA      */
          CPURESDEP(OFF)                 /* Resolve dep. off for FTA     */
          CPULIMIT(20)                   /* Number of jobs in parallel   */
          CPUSERVER(2)                   /* Not allowed for domain mng.  */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(tws)                   /* Default user for jobs on CPU */
CPUREC    CPUNAME(N001)                  /* 1st FTA in Nordic domain     */
          CPUOS(WNT)                     /* Windows 2000 operating system */
          CPUNODE(oslo.itsc.austin.ibm.com)      /* Hostname of CPU      */
          CPUTCPIP(31182)                /* TCP port number of NETMAN    */
          CPUDOMAIN(Nordic)              /* The TWS domain name for CPU  */
          CPUTYPE(FTA)                   /* CPU type: FTA/SAGENT/XAGENT  */
          CPUAUTOLNK(ON)                 /* Autolink is on for this CPU  */
          CPUFULLSTAT(OFF)               /* Full status off for FTA      */
          CPURESDEP(OFF)                 /* Resolve dep. off for FTA     */
          CPULIMIT(20)                   /* Number of jobs in parallel   */
          CPUSERVER(1)                   /* Not allowed for domain mng.  */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(tws)                   /* Default user for jobs on CPU */
          SSLLEVEL(OFF)                  /* Use SSL? ON/OFF/ENABLED/FORCE */
          SSLPORT(31382)                 /* Port for SSL communication   */
          FIREWALL(Y)                    /* Is CPU behind a firewall?    */
CPUREC    CPUNAME(N002)                  /* 2nd FTA in Nordic domain     */
          CPUOS(UNIX)                    /* Linux operating system       */
          CPUNODE(helsinki.itsc.austin.ibm.com)  /* Hostname of CPU      */
          CPUTCPIP(31182)                /* TCP port number of NETMAN    */
          CPUDOMAIN(Nordic)              /* The TWS domain name for CPU  */
          CPUTYPE(FTA)                   /* CPU type: FTA/SAGENT/XAGENT  */
          CPUAUTOLNK(ON)                 /* Autolink is on for this CPU  */
          CPUFULLSTAT(OFF)               /* Full status off for FTA      */
          CPURESDEP(OFF)                 /* Resolve dep. off for FTA     */
          CPULIMIT(20)                   /* Number of jobs in parallel   */
          CPUSERVER(2)                   /* Not allowed for domain mng.  */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(tws)                   /* Default user for jobs on CPU */
          SSLLEVEL(OFF)                  /* Use SSL? ON/OFF/ENABLED/FORCE */
          SSLPORT(31382)                 /* Port for SSL communication   */
          FIREWALL(Y)                    /* Is CPU behind a firewall?    */
CPUREC    CPUNAME(N003)                  /* 3rd FTA in Nordic domain     */
          CPUOS(WNT)                     /* Windows 2000 operating system */
          CPUNODE(copenhagen.itsc.austin.ibm.com) /* Hostname of CPU     */
          CPUTCPIP(31182)                /* TCP port number of NETMAN    */
          CPUDOMAIN(Nordic)              /* The TWS domain name for CPU  */
          CPUTYPE(FTA)                   /* CPU type: FTA/SAGENT/XAGENT  */
          CPUAUTOLNK(ON)                 /* Autolink is on for this CPU  */
          CPUFULLSTAT(OFF)               /* Full status off for FTA      */
          CPURESDEP(OFF)                 /* Resolve dep. off for FTA     */
          CPULIMIT(20)                   /* Number of jobs in parallel   */
          CPUSERVER(3)                   /* Not allowed for domain mng.  */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(tws)                   /* Default user for jobs on CPU */
          SSLLEVEL(OFF)                  /* Use SSL? ON/OFF/ENABLED/FORCE */
          SSLPORT(31382)                 /* Port for SSL communication   */
          FIREWALL(Y)                    /* Is CPU behind a firewall?    */
CPUREC    CPUNAME(UX01)                  /* X-agent in UK Domain         */
          CPUOS(OTHER)                   /* Extended agent               */
          CPUNODE(belfast.itsc.austin.ibm.com)   /* Hostname of CPU      */
          CPUDOMAIN(UK)                  /* The TWS domain name for CPU  */
          CPUHOST(U001)                  /* U001 is the host for x-agent */
          CPUTYPE(XAGENT)                /* This is an extended agent    */
          CPUACCESS(unixlocl)            /* Use unixlocl access method   */
          CPULIMIT(2)                    /* Number of jobs in parallel   */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(tws)                   /* Default user for jobs on CPU */
CPUREC    CPUNAME(UX02)                  /* X-agent in UK Domain         */
          CPUOS(OTHER)                   /* Extended agent               */
          CPUNODE(belfast.itsc.austin.ibm.com)   /* Hostname of CPU      */
          CPUDOMAIN(UK)                  /* The TWS domain name for CPU  */
          CPUHOST(U001)                  /* U001 is the host for x-agent */
          CPUTYPE(XAGENT)                /* This is an extended agent    */
          CPUACCESS(unixrsh)             /* Use unixrsh access method    */
          CPULIMIT(2)                    /* Number of jobs in parallel   */
          CPUTZ(CST)                     /* Time zone for this CPU       */
          CPUUSER(tws)                   /* Default user for jobs on CPU */

User and password definitions for the Windows fault-tolerant workstations are defined as shown in Example 5-5.

Example 5-5 User and password definition for Windows FTAs; member name is TPUSER

/*********************************************************************/
/* USRREC: Windows users password definitions                        */
/*********************************************************************/
/*-------------------------------------------------------------------*/
/* You must specify at least one USRREC for each Windows workstation */
/* in the distributed TWS network.                                    */
/*-------------------------------------------------------------------*/
USRREC USRCPU(U002) USRNAM(tws) USRPSW('tws')
USRREC USRCPU(E000) USRNAM(tws) USRPSW('tws')
USRREC USRCPU(E002) USRNAM(tws) USRPSW('tws')
USRREC USRCPU(N001) USRNAM(tws) USRPSW('tws')
USRREC USRCPU(N003) USRNAM(tws) USRPSW('tws')


5.2 Creation of the Symphony file in detail

A new Symphony file is generated whenever any of these daily planning batch jobs is run:

� Extend the current plan.
� Replan the current plan.
� Renew the Symphony.

Daily planning batch jobs must be able to read from and write to the HFS working directory (WRKDIR) because these jobs create the Symnew file in WRKDIR. For this reason, the group associated with WRKDIR must contain all of the users that will run daily planning batch jobs.

The end-to-end server task starts the translator process in USS (via the starter process). The translator process inherits its ownership from the starting task, so it runs as the same user as the end-to-end server task.

The translator process must be able to read from and write to the HFS working directory (WRKDIR). For this reason, WRKDIR must be owned by the user associated with the end-to-end server started task (E2ESERV in the following example). This underscores the importance of specifying the correct user and group in EQQPCS05.
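A quick way to check or correct this from a UNIX System Services shell is sketched below, using the work directory (/tws/twsce2ew), server user (E2ESERV), and group (TWSGRP) from our environment. EQQPCS05 normally creates the directory with the correct owner and group, so treat this only as an illustration, and note that the 775 permission mask is an assumption:

# WRKDIR must be owned by the end-to-end server user and writable by the
# group that contains the daily planning batch job users
chown -R E2ESERV:TWSGRP /tws/twsce2ew
chmod -R 775 /tws/twsce2ew
ls -ld /tws/twsce2ew       # verify owner, group, and permissions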

Figure 5-2 shows the steps of Symphony file creation:

1. The daily planning batch job copies the Symphony Current Plan VSAM data set to an HFS file in WRKDIR called SymUSER, where USER is the user name of the user who submitted the batch job.

2. The daily planning batch job renames SymUSER to Symnew.

3. The translator program running in UNIX System Services copies Symnew to Symphony and Sinfonia.


Figure 5-2 Creation of the Symphony file in WRKDIR

The figure illustrates how process ownership of the translator program is inherited from the end-to-end server task. It also shows how file ownership of Symnew and Symphony is inherited from the daily planning batch jobs and the translator, respectively.

5.3 Migrating Tivoli OPC tracker agents to end-to-end scheduling

In this section, we describe how to migrate from a Tivoli OPC tracker agent scheduling environment to a Tivoli Workload Scheduler for z/OS end-to-end scheduling environment with Tivoli Workload Scheduler fault-tolerant agents. We show the benefits of migrating to the fault-tolerant workstations with a step-by-step migration procedure.

5.3.1 Migration benefits

If you plan to migrate to the end-to-end solution, you can gain the following advantages:

� The use of fault-tolerant technology enables you to continue scheduling without a continuous connection to the z/OS engine.


� Multi-tier architecture enables you to configure your distributed environment into logical and geographic needs through the domain topology.

Monitoring of the workload can be separated into dedicated views of the distributed environment.

The multi-tier architecture also improves scalability and removes the limitation on the number of tracker agent workstations in Tivoli OPC. (In Tivoli OPC, the designated maximum number of tracker agents was 999, but the practical limit was around 500.)

� High availability configuration through:

– The support of AIX High Availability Cluster Multi-Processing (HACMP™), HP Service Guard, and Windows clustering, for example.

– Support for using host names instead of numeric IP addresses.

– The ability to change workstation addresses as well as the distributed network topology without recycling the Tivoli Workload Scheduler for z/OS controller. Only a replan of the current plan is required.

� New supported platforms and operating systems, such as:

– Windows 2000 and Windows XP

– SuSE Linux Enterprise Server for zSeries Version 7

– Red Hat Linux (Intel®) Version 7.2, 7.3

– Other third-party access methods such as Tandem

For a complete list of supported platforms and operating system levels, refer to IBM Tivoli Workload Scheduler Release Notes Version 8.2 (Maintenance Release April 2004), SC32-1277.

� Support for extended agents.

Extended agents (XA) are used to extend the job scheduling functions of Tivoli Workload Scheduler to other systems and applications. An extended agent is defined as a workstation that has a host and an access method.

Extended agents make it possible to run jobs in the end-to-end scheduling solution on:

– Oracle E-Business Suite

– PeopleSoft

– SAP R/3

For more information, refer to IBM Tivoli Workload Scheduler for Applications User’s Guide Version 8.2 (Maintenance Release April 2004), SC32-1278.

� Open extended agent interface, which enables you to write extended agents for non-supported platforms and applications. For example, you can write


your own extended agent for Tivoli Storage Manager. For more information, refer to Implementing TWS Extended Agent for Tivoli Storage Manager, GC24-6030.

� User ID and password definitions for Windows fault-tolerant workstations are easier to implement and maintain. They do not require the Tivoli OPC tracker agent impersonation support.

� IBM Tivoli Business Systems Manager support enables you to integrate the entire end-to-end environment.

� If you use alternate workstations for your tracker agents, be aware that this function is not available for fault-tolerant agents. As part of the fault-tolerant technology, an FTW cannot be an alternate workstation.

� You do not have to touch your planning-related definitions such as run cycles, periods, and calendars.

5.3.2 Migration planning

Before starting the migration process, you may consider the following issues:

� The Job Migration Tool in Tivoli Workload Scheduler for z/OS 8.2 can be used to facilitate the migration from distributed tracker agents to Tivoli Workload Scheduler distributed agents.

� You may choose not to migrate your entire tracker agent environment at once. For better planning, we recommend first deciding which part of your tracker agent environment is best suited to migrate first. This enables you to migrate smoothly to the new fault-tolerant agents. The decision can be based on:

– Agents belonging to a certain business unit

– Agents running at a specific location or time zone

– Agents having dependencies to Tivoli Workload Scheduler for z/OS job streams

– Agents used for testing purposes

� The tracker agents topology is not based on any domain manager structure as used in the Tivoli Workload Scheduler end-to-end solution, so plan the topology configuration that suits your needs. The guidelines for helping you find your best configuration are detailed in 3.5.4, “Network planning and considerations” on page 141.

� Even though you can use centralized scripts to facilitate the migration from distributed tracker agents to Tivoli Workload Scheduler distributed agents, it may be necessary to make some modifications to the JCL (the script) used at tracker agents when the centralized script for the corresponding fault-tolerant workstation is copied or moved.

For example, this is the case for comments:

– In JCL for tracker agent, a comment line can commence with //*

//* This is a comment line

– In centralized script, a comment line can commence with //* OPC

//* OPC This is a comment line

If centralized script is used, the migration from tracker agents to fault-tolerant workstations should be a simple task. Basically, the migration is done by changing the workstation name from the name of a tracker agent workstation to the name of the new fault-tolerant workstation. This is even more true if you follow the migration checklist that is outlined in the following sections.

Also note that with centralized script you can assign a user to a fault-tolerant workstation job exactly the same way as you did for tracker agents (for example by use of the job submit exit, EQQUX001).

5.3.3 Migration checklist

To guide you through the migration, Table 5-1 provides a step-by-step checklist.

Tip: We recommend starting the migration with the less critical workload in the environment. The migration process needs some handling and experience; therefore you could start by migrating a test tracker agent with test scripts. If this is successful, you can continue with less critical production job streams and progress to the most important ones.

Important: Tivoli OPC tracker agent went out of support on October 31, 2003.

Table 5-1 Migration checklist

1. Install IBM Tivoli Workload Scheduler end-to-end on the z/OS mainframe. See “Installing IBM Tivoli Workload Scheduler end-to-end solution” on page 278.
2. Install fault-tolerant agents on each tracker agent server or system that should be migrated to end-to-end. See “Installing fault-tolerant agents” on page 279.
3. Define the topology for the distributed Tivoli Workload Scheduler network. See “Define the network topology in the end-to-end environment” on page 279.
4. Decide if centralized, non-centralized, or a combination of centralized and non-centralized script should be used. See “Decide to use centralized or non-centralized script” on page 281.
5. Define centralized script. See “Define the centralized script” on page 284.
6. Define non-centralized script. See “Define the non-centralized script” on page 285.
7. Define user ID and password for Windows fault-tolerant workstations. See “Define the user and password for Windows FTWs” on page 285.
8. Change the workstation name inside the job streams from tracker agent workstation name to fault-tolerant workstation name. See “Change the workstation name inside the job streams” on page 285.
9. Consider doing some parallel testing before the definitive shift from tracker agents to fault-tolerant agents. See “Parallel testing” on page 286.
10. Perform the cutover. See “Perform the cutover” on page 287.
11. Educate and train planners and operators. See “Education and training of operators and planners” on page 287.


5.3.4 Migration actions

We now explain each step of the migration actions listed in Table 5-1 in detail.

Installing IBM Tivoli Workload Scheduler end-to-end solution

The Tivoli Workload Scheduler for z/OS end-to-end feature is required for the migration, and its installation and configuration are detailed in 4.2, “Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling” on page 159.

End-to-end scheduling is not complicated, but job scheduling on the distributed systems works very differently in an end-to-end environment than in the tracker agent scheduling environment.


Important: It is important to start the installation of the end-to-end solution as early as possible in the migration process to gain as much experience as possible with this new environment before it should be handled in the production environment.


Installing fault-tolerant agents

When you have decided which tracker agents to migrate, you can install the Tivoli Workload Scheduler code on the machines or servers that host the tracker agent. This enables you to migrate a mixed environment of tracker agents and fault-tolerant workstations in a more controlled way, because both environments (Tivoli Workload Scheduler and Tivoli OPC Tracker Agents) can coexist on the same physical machine.

Both environments might coexist until you decide to perform the cutover. Cutover means switching to the fault-tolerant agent after the testing phase.

Installation of the fault-tolerant agents is explained in detail in 4.3, “Installing Tivoli Workload Scheduler in an end-to-end environment” on page 207.

Define the network topology in the end-to-end environment

In Tivoli Workload Scheduler for z/OS, define the topology of the Tivoli Workload Scheduler network. The definition process contains the following steps:

1. Designing the end-to-end network topology.

2. Definition of the network topology in Tivoli Workload Scheduler for z/OS with the DOMREC and CPUREC keywords.

3. Definition of the fault-tolerant workstations in the Tivoli Workload Scheduler for z/OS database.

4. Activation of the fault-tolerant workstation in the Tivoli Workload Scheduler for z/OS plan by a plan extend or plan replan batch job.

After completion of the definition process, each workstation should be defined twice, once for the tracker agent and once for the distributed agent. This way, you can run a distributed agent and a tracker agent on the same computer or server. This should enable you to gradually migrate jobs from tracker agents to distributed agents.
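As an illustration of the topology definition, a single machine that previously ran a tracker agent needs one CPUREC (and a DOMREC if you place it in a new domain). The following minimal sketch reuses the keywords from Example 5-4; the domain name (DomainA), domain manager (FDMA), workstation name (F100), host name, and other values are placeholders that you replace with your own:

DOMREC    DOMAIN(DomainA)                   /* Domain for the migrated agents */
          DOMMNGR(FDMA)                     /* FTA acting as domain manager   */
          DOMPARENT(MASTERDM)               /* Parent is the controller       */
CPUREC    CPUNAME(F100)                     /* Replaces a tracker agent WS    */
          CPUOS(AIX)                        /* Operating system of the box    */
          CPUNODE(tracker1.yourcompany.com) /* Host name of the machine       */
          CPUTCPIP(31182)                   /* netman port on the FTA         */
          CPUDOMAIN(DomainA)                /* Domain for this CPU            */
          CPUTYPE(FTA)                      /* Fault-tolerant agent           */
          CPUAUTOLNK(ON)                    /* Link automatically             */
          CPUFULLSTAT(OFF)                  /* OFF for an ordinary FTA        */
          CPURESDEP(OFF)                    /* OFF for an ordinary FTA        */
          CPULIMIT(20)                      /* Number of jobs in parallel     */
          CPUTZ(CST)                        /* Time zone of the machine       */
          CPUUSER(tws)                      /* Default user for jobs          */

Remember that the new definitions only become active in the plan after a plan extend or replan (step 4).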

Tips:

� If you decide to define a topology with domain managers, you should also define backup domain managers.

� To better distinguish the fault-tolerant workstations, follow a consistent naming convention.

Example: From tracker agent network to end-to-end network

In this example, we illustrate how an existing tracker agent network can be reflected in (or converted to) an end-to-end network topology. This example also shows the major differences between tracker agent network topology and the end-to-end network topology.

In Figure 5-3, we have what we can call a classical tracker agent environment. This environment consists of multiple tracker agents on various operating platforms. All communication with the tracker agents is handled by a single subtask in the Tivoli Workload Scheduler for z/OS controller started task, and there is no domain structure with multiple levels (tiers) to minimize the load on the controller.

Figure 5-3 A classic tracker agent environment


Figure 5-4 shows how the tracker agent environment in Figure 5-3 on page 280 can be defined in an end-to-end scheduling environment by use of domain managers, back-up domain managers, and fault-tolerant agents.


Figure 5-4 End-to-end scheduling network with DMs and FTAs

In the migration phase, it is possible for these two environments to co-exist. This means that on every machine, a tracker agent and a fault-tolerant workstation are installed.

Decide to use centralized or non-centralized script

When migrating from tracker agents to fault-tolerant agents, you have two options regarding scripts: you can use centralized or non-centralized scripts.

If all of the tracker agent JCL (scripts) is kept in the Tivoli Workload Scheduler for z/OS controller job library, the simplest approach when migrating to end-to-end scheduling is to use centralized scripts.

If all of the tracker agent JCL (scripts) is kept locally on the tracker agent systems, the simplest approach is to use non-centralized scripts.

Finally, if some of the tracker agent JCL is kept in the Tivoli Workload Scheduler for z/OS controller job library and some is kept locally on the tracker agent systems, the simplest approach is to migrate to end-to-end scheduling with a combination of centralized and non-centralized scripts.


Use of the Job Migration Tool to help with the migration

This tool can be used to help analyze the existing tracker agent environment to be able to decide whether the tracker agent JCL should be migrated using centralized script, non-centralized script, or a combination.

To run the tool, select option 1.1.5 from the main menu in Tivoli Workload Scheduler for z/OS legacy ISPF. In the panel, enter the name for the tracker agent workstation that you would like to analyze and submit the job generated by Tivoli Workload Scheduler for z/OS.

The tool analyzes the operations (jobs) that are defined on the specified workstation and generates output in four data sets:

1. Report data set (default suffix: LIST)

Contains warning messages for the processed jobs on the workstation specified as input to the tool. (See Example 5-6 on page 283.) There will be warning messages for:

– Operations (jobs) that are associated with a job library member that uses JCL variables and directives and that have the centralized script option set to N (No).

– Scripts (JCL) that do not have variables and are associated with operations that have the centralized script option set to Y (Yes). (This situation lowers performance.)

– Operations (jobs) defined in Tivoli Workload Scheduler for z/OS for which the tool did not find the JCL (member not found) in the JOBLIB libraries specified as input to the tool.

For jobs (operations) defined with centralized script option set to No (the default), the tool suggests defining the job on a workstation named DIST. For jobs (operations) defined with centralized script option set to Yes, the tool suggests defining the job on a workstation named CENT.

The last part of the report contains a cross-reference that shows which application (job stream) the job (operation) is defined in.

Note: Before submitting the job, modify it by adding all JOBLIBs for the tracker agent workstation that you are analyzing. Also remember to add JOBLIBs processed by the job-library-read exit (EQQUX002) if it is used.

For a permanent change of the sample job, modify the sample migration job skeleton, EQQWMIGZ.

Important: Check the tool report for warning messages.


The report is a good starting point for an overview of the migration effort.

2. JOBLIB data set (default suffix: JOBLIB)

This library contains a copy of all detected jobs (members) for a specific workstation. The job is copied from the JOBLIB.

In our example (Example 5-6), there are four jobs in this library: NT01AV01, NT01AV02, NT01JOB1, and NT01JOB2.

3. JOBCEN data set (default suffix: JOBCEN)

This library contains a copy of all jobs (members) that have centralized scripts for a specific workstation that is defined with the centralized script option set to Yes. The job is copied from the JOBLIB.

In our example, (Example 5-6), there are two jobs in this library: NT01JOB1 and NT01JOB2. These jobs were defined in Tivoli Workload Scheduler for z/OS with the centralized script option set to Yes.

4. JOBDIS data set (default suffix: JOBDIS).

This library contains all jobs (members) that do not have centralized scripts for a specific workstation. These jobs must be transferred to the fault-tolerant workstation.

In our example (Example 5-6), there are three jobs in this library: NT01AV01, NT01AV02, and NT01JOB1. These jobs were defined in Tivoli Workload Scheduler for z/OS with the centralized script option set to No (the default).

Example 5-6 Report generated by the Job Migration Tool

P R I N T O U T   O F   W O R K S T A T I O N   D E S C R I P T I O N S
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
REPORT TYPE: CROSS-REFERENCE OF JOBNAMES AND ACTIVE APPLICATIONS
================================================================
JOBNAME  APPL ID          VALID TO OpTYPE_OpNUMBER
-------- ---------------- -------- ---------------
NT01AV01 NT01HOUSEKEEPING 31/12/71 DIST_005
         NT01TESTAPPL2    31/12/71 DIST_005
NT01AV02 NT01TESTAPPL2    31/12/71 DIST_010
NT01AV03 NT01TESTAPPL2    31/12/71 DIST_015
         WARNING: NT01AV03 member not found in job library
NT01JOB1 NT01HOUSEKEEPING 31/12/71 DIST_010
         NT01TESTAPPL     31/12/71 CENT_005
         WARNING: Member NT01JOB1 contain directives (//*%OPC) or variables
         (& or % or ?). Modify the member manually or change the operation(s)
         type to centralized.
NT01JOB2 NT01TESTAPPL     31/12/71 CENT_010
         WARNING: You could change operation(s) to NON centralized type.

APPL ID          VALID TO JOBNAME  OpTYPE_OpNUMBER
---------------- -------- -------- ---------------
NT01HOUSEKEEPING 31/12/71 NT01AV01 DIST_005
                          NT01JOB1 DIST_010
NT01TESTAPPL     31/12/71 NT01JOB1 CENT_005
                          NT01JOB2 CENT_010
NT01TESTAPPL2    31/12/71 NT01AV01 DIST_005
                          NT01AV02 DIST_010
                          NT01AV03 DIST_015
>>>>>>> END OF APPLICATION DESCRIPTION PRINTOUT <<<<<<<

Note: The NT01JOB1 operation (job) is defined in two different applications (job streams): NT01HOUSEKEEPING and NT01TESTAPPL. The NT01JOB1 operation is defined with the centralized script option set to Yes in the NT01TESTAPPL application and No in the NT01HOUSEKEEPING application. That is why NT01JOB1 is defined on both the CENT and the DIST workstations.

Before you migrate the tracker agents to distributed agents, you should use this tool to obtain these files for help in deciding whether the jobs should be defined with centralized or non-centralized scripts.

Define the centralized script

If you decide to use centralized script for all or some of the tracker agent jobs, do the following:

1. Run the job migration tool for each tracker agent workstation and analyze the generated report.

2. Change the value of the centralized script flag to Yes, based on the result of the job migration tool output and your decision.

3. Run the job migration tool as many times as you want; for example, until there are no warning messages and all jobs are defined on the correct workstation in the report (the CENT workstation).

4. Change the generated JCL (jobs) in the JOBCEN data set (created by the migration tool); for example, it could be necessary to change the comments line from //* to //* OPC.

5. The copied and amended members (jobs) can be activated one by one when the corresponding operation in the Tivoli Workload Scheduler for z/OS application is changed from the tracker agent workstation to the fault-tolerant workstation.

Note: If you plan to run the migration tool several times, you should copy the job to another library when it has been changed and is ready for the switch to avoid it being replaced by a new run of the migration tool.
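
For orientation, a centralized script member in the JOBLIB might end up looking something like the following minimal sketch, assuming the member holds a UNIX script for the fault-tolerant workstation. The script path is hypothetical, and &OADID. is one of the supplied Tivoli Workload Scheduler for z/OS variables; check the members generated in the JOBCEN data set for the exact format in your installation.

//*%OPC SCAN
//* OPC Housekeeping job migrated from the NT01 tracker agent
/opt/scripts/nt01job1.sh &OADID.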


Define the non-centralized script

If you decide to use non-centralized script for all or some of the tracker agent jobs, do the following:

1. Run the job migration tool for each tracker agent workstation and analyze the generated report.

2. Run the job migration tool as many times as you want; for example, until there are no warning messages and all jobs are defined on the correct workstation in the report (the DIST workstation).

3. Transfer the scripts from the JOBDIS data set (created by the migration tool) to the distributed agents.

4. Create a member in the script library (SCRPTLIB/EQQSCLIB) for every job in the JOBDIS data set and, optionally, for the jobs in JOBCEN (if you decide to change these jobs from use of centralized script to use of non-centralized script).
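
As an illustration, a SCRPTLIB (EQQSCLIB) member for one of the jobs from Example 5-6, NT01AV01, might look like the following minimal sketch; the script path and user ID are assumptions:

JOBREC JOBSCR('C:\tws\scripts\nt01av01.cmd')   /* Script to run on the FTW       */
       JOBUSR(tws)                             /* User ID the job runs under     */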

Define the user and password for Windows FTWs

For each user running jobs on Windows fault-tolerant agents, define a new USRREC statement to provide the Windows user and password. USRREC is defined in the member of the EQQPARM library as specified by the USERMEM keyword in the TOPOLOGY statement.

If you use the impersonation support for NT tracker agent workstations, it does not interfere with the USRREC definitions. The impersonation support assigns a user ID based on the user ID from exit EQQUX001. Since the exit is not called for jobs with non-centralized script, impersonation support is not used any more.
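
A minimal sketch of a USRREC entry follows; the workstation name, user name, and password shown here are placeholders:

USRREC USRCPU(FTW1)        /* Windows fault-tolerant workstation (placeholder) */
       USRNAM(tws)         /* Windows user that runs the jobs                  */
       USRPSW('secret')    /* Password in clear text - protect this data set   */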

Change the workstation name inside the job streams

At this point in the migration, the end-to-end scheduling environment should be active and the fault-tolerant workstations on the systems with tracker agents should be active and linked in the plan in Tivoli Workload Scheduler for z/OS.

The plan in Tivoli Workload Scheduler for z/OS and the Symphony file on the fault-tolerant agents do not contain any job streams with jobs that are scheduled on the tracker agent workstations.

Note: The job submit exit EQQUX001 is not called for non-centralized script jobs.

Important: Because the passwords are not encrypted, we strongly recommend that you protect the data set containing the USRREC definitions with your security product.


The job streams (applications) in the Tivoli Workload Scheduler for z/OS controller are still pointing to the tracker agent workstations.

In order to submit workload to the distributed environment, you must change the workstation name of your existing job definitions to the new FTW, or define new job streams to replace the job streams with the old tracker agent jobs.

Parallel testing

If possible, do some parallel testing before the cutover. With parallel testing, you run the same job flow on both types of workstations: tracker agent workstations and fault-tolerant workstations.

The only problem with parallel testing is that it requires duplicate versions of the applications (job streams): one application for the tracker agent and one application for the fault-tolerant workstation. Also, you cannot run the same job in both applications, so one of the jobs must be changed to a dummy job.

Some initial setup is required to do parallel testing, but once it is done you can verify that the jobs are executed in the same sequence, and operators and planners can gain some experience with the new environment.

Notes:

� It is not possible to change the workstation within a job instance from a tracker agent workstation to a fault-tolerant workstation via the Job Scheduling Console. We have already raised this issue with development. The change can be performed via the legacy GUI (ISPF) and the batch loader program.

� Be aware that changes to the workstation affect only the job stream database. If you want to take this modification into the plans, you must run a long-term plan (LTP) Modify All batch job and a current plan extend or replan batch job.

� The highest acceptable return code for operations on fault-tolerant workstations is 0. If you have a tracker agent operation with highest return code set to 8 and you change the workstation for this operation from a tracker agent workstation to a fault-tolerant workstation, you will not be able to save the modified application.

When trying to save the application, you will see this error message:

EQQA531E Inconsistent option when FT work station

Be aware of this if you are planning to use Tivoli Workload Scheduler for z/OS mass update functions or unload/reload functions to update a large number of applications.


Another approach could be to migrate a few applications from tracker agents to fault-tolerant agents and use these applications to verify the migration strategy and the migrated jobs (JCL/scripts), and to gain some experience. When you are satisfied with the test results of these applications, the next step is to migrate the rest of the applications.

Perform the cutover

When the parallel testing has been completed with satisfactory results, you can do the final cutover. For example, the process can be:

� Change all workstation names from tracker agent workstation to fault-tolerant workstation for all operations in the Tivoli Workload Scheduler for z/OS controller.

This can be done with the Tivoli Workload Scheduler for z/OS mass update function, or by an unload (with the Batch Command Interface Tool), edit, and batch load (with the Tivoli Workload Scheduler for z/OS batch loader) process.

� Run the Extend of long-term plan batch job or Modify All of long-term plan in Tivoli Workload Scheduler for z/OS.

Verify that the changed applications and operations look correct in the long-term plan.

� Run the Extend of plan (current plan) batch job.

– Verify that the changed applications and operations look correct in the plan.

– Verify that the tracker agent jobs have been moved to the new fault-tolerant workstations and that there are no jobs on the tracker agent workstations.

Education and training of operators and planners

Tracker agents and fault-tolerant workstations work differently, and there are new options related to jobs on fault-tolerant workstations. Handling of fault-tolerant workstations is different from handling of tracker agent workstations.

A tracker agent workstation can be set to Active or Offline and can be defined with open intervals and servers. A fault-tolerant workstation can be started, stopped, linked, or unlinked.

To ensure that the migration from tracker agents to fault-tolerant workstations will be successful, be sure to plan for education of your planners and operators.


5.3.5 Migrating backward

Normally, it should not be necessary to migrate backward because it is possible to run the two environments in parallel. As we have shown, you can run a tracker agent and a fault-tolerant agent on the same physical machine. If the preparation, planning, and testing of the migration are done as described in the previous sections, it should not be necessary to migrate backward.

If a situation forces backward migration from the fault-tolerant workstations to tracker agents, follow these steps:

1. Install the tracker agent on the computer. (This is necessary only if you have uninstalled the tracker agent.)

2. Define a new destination in the ROUTOPTS initialization statement of the controller and restart the controller.

3. Make a duplicate of the workstation definition of the computer. Define the new workstation as Computer Automatic instead of Fault Tolerant and specify the destination you defined in step 2. This way, the same computer can be run as a fault-tolerant workstation and as a tracker agent for smoother migration.

4. For non-centralized scripts, copy the scripts from the fault-tolerant workstation repository to the JOBLIB. As an alternative, copy the script to a local directory that can be accessed by the tracker agent and create a JOBLIB member to execute the script. You can accomplish this by using FTP.

5. Implement the EQQUX001 sample to execute jobs with the correct user ID.

6. Modify the workstation name inside the operation. Remember to change the JOBNAME if the member in the JOBLIB has a name different from the member of the script library.

5.4 Conversion from Tivoli Workload Scheduler network to Tivoli Workload Scheduler for z/OS managed network

In this section, we outline the guidelines for converting a Tivoli Workload Scheduler network to a Tivoli Workload Scheduler for z/OS managed network.

The distributed Tivoli Workload Scheduler network is managed by a Tivoli Workload Scheduler master domain manager, which manages the databases and the plan. Converting the Tivoli Workload Scheduler managed network to a Tivoli Workload Scheduler for z/OS managed network means that responsibility for database and plan management move from the Tivoli Workload Scheduler master domain manager to the Tivoli Workload Scheduler for z/OS engine.


5.4.1 Illustration of the conversion process

Figure 5-5 shows a distributed Tivoli Workload Scheduler network. The database management and daily planning are carried out by the Tivoli Workload Scheduler master domain manager.

Figure 5-5 Tivoli Workload Scheduler distributed network with a master domain manager

Figure 5-6 shows a Tivoli Workload Scheduler for z/OS managed network. Database management and daily planning are carried out by the Tivoli Workload Scheduler for z/OS engine.


Figure 5-6 Tivoli Workload Scheduler for z/OS network

The conversion process is to change the Tivoli Workload Scheduler master domain manager to the first-level domain manager and then connect it to the Tivoli Workload Scheduler for z/OS engine (new master domain manager). The result of the conversion is a new end-to-end network managed by the Tivoli Workload Scheduler for z/OS engine (Figure 5-7 on page 291).


Figure 5-7 IBM Tivoli Workload Scheduler for z/OS managed end-to-end network

5.4.2 Considerations before doing the conversion

Before you start to convert your Tivoli Workload Scheduler managed network to a Tivoli Workload Scheduler for z/OS managed network, you should evaluate the positives and negatives of doing the conversion.

The pros and cons of doing the conversion will differ from installation to installation. Some installations will gain significant benefits from conversion, while other installations will gain fewer benefits. Based on the outcome of the evaluation of pros and cons, it should be possible to make the right decision for your specific installation and current usage of Tivoli Workload Scheduler as well as Tivoli Workload Scheduler for z/OS.


Some important aspects of the conversion that you should consider are:

� How is your Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS organization today?

– Do you have two independent organizations working independently of each other?

– Do you have two groups of operators and planners to manage Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS?

– Or do you have one group of operators and planners that manages both the Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS environments?

– Do you use considerable resources keeping a high skill level for both products, Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS?

� How integrated is the workload managed by Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS?

– Do you have dependencies between jobs in Tivoli Workload Scheduler and in Tivoli Workload Scheduler for z/OS?

– Or do most of the jobs in one workload scheduler run independently of jobs in the other scheduler?

– Have you already managed to solve dependencies between jobs in Tivoli Workload Scheduler and in Tivoli Workload Scheduler for z/OS efficiently?

� The current use of Tivoli Workload Scheduler–specific functions that are not available in Tivoli Workload Scheduler for z/OS.

– How intensive is the use of prompts, file dependencies, and “repeat range” (run job every 10 minutes) in Tivoli Workload Scheduler?

Can these Tivoli Workload Scheduler–specific functions be replaced by Tivoli Workload Scheduler for z/OS–specific functions or should they be handled in another way?

Does it require some locally developed tools, programs, or workarounds?

– How extensive is the use of Tivoli Workload Scheduler job recovery definitions?

Is it possible to handle these Tivoli Workload Scheduler recovery definitions in another way when the job is managed by Tivoli Workload Scheduler for z/OS?

Does it require some locally developed tools, programs, or workarounds?


� Will Tivoli Workload Scheduler for z/OS give you some of the functions you are missing in Tivoli Workload Scheduler today?

– Extended planning capabilities, long-term plan, current plan that spans more than 24 hours?

– Better handling of carry-forward job streams?

– Powerful run-cycle and calendar functions?

� Which platforms or systems are going to be managed by the Tivoli Workload Scheduler for z/OS end-to-end scheduling?

� What kind of integration do you have between Tivoli Workload Scheduler and, for example, SAP R/3, PeopleSoft, or Oracle Applications?

� Partial conversion of some jobs from the Tivoli Workload Scheduler-managed network to the Tivoli Workload Scheduler for z/OS managed network?

� Effort to convert Tivoli Workload Scheduler database object definitions to Tivoli Workload Scheduler for z/OS database object definitions.

Will it be possible to convert the database objects with reasonable resources and within a reasonable time frame?

Partial conversion example: About 15% of your Tivoli Workload Scheduler–managed jobs or workload is directly related to the Tivoli Workload Scheduler for z/OS jobs or workload. This means that the Tivoli Workload Scheduler jobs are either predecessors or successors to Tivoli Workload Scheduler for z/OS jobs. The current handling of these interdependencies is not effective or stable with your current solution.

Converting the 15% of jobs to Tivoli Workload Scheduler for z/OS managed scheduling using the end-to-end solution will stabilize dependency handling and make scheduling more reliable. Note that this requires two instances of Tivoli Workload Scheduler workstations (one each for Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS).

5.4.3 Conversion process from Tivoli Workload Scheduler to Tivoli Workload Scheduler for z/OS

The process of converting from a Tivoli Workload Scheduler-managed network to a Tivoli Workload Scheduler for z/OS-managed network has several steps. In the following description, we assume that we have an active Tivoli Workload Scheduler for z/OS environment as well as an active Tivoli Workload Scheduler environment. We also assume that the Tivoli Workload Scheduler for z/OS end-to-end server is installed and ready for use. The conversion process mainly contains the following steps or tasks:

1. Plan the conversion and establish new naming standards.

2. Install new Tivoli Workload Scheduler workstation instances dedicated to communicating with the Tivoli Workload Scheduler for z/OS server.

3. Define the topology of the Tivoli Workload Scheduler network in Tivoli Workload Scheduler for z/OS and define associated Tivoli Workload Scheduler for z/OS fault-tolerant workstations.

4. Create JOBSCR members (in the SCRPTLIB data set) for all Tivoli Workload Scheduler–managed jobs that should be converted.

5. Convert the database objects from Tivoli Workload Scheduler format to Tivoli Workload Scheduler for z/OS format.

6. Educate planners and operators in the new Tivoli Workload Scheduler for z/OS server functions.

7. Test and verify the conversion and finalize for production.

The sequencing of these steps may be different in your environment, depending on the strategy that you will follow when doing your own conversion.

Step 1. Planning the conversion

The conversion from Tivoli Workload Scheduler-managed scheduling to Tivoli Workload Scheduler for z/OS-managed scheduling can be a major project and can require significant resources, depending on the current size and usage of the Tivoli Workload Scheduler environment. Planning the conversion is an important task and can be used to estimate the effort required to do the conversion as well as to detail the different conversion steps.

In the planning phase you should try to identify special usage of Tivoli Workload Scheduler functions or facilities that are not easily converted to Tivoli Workload Scheduler for z/OS. Furthermore, you should try to outline how these functions or facilities should be handled when scheduling is done by Tivoli Workload Scheduler for z/OS.

Part of planning is also establishing the new naming standards for all or some of the Tivoli Workload Scheduler objects that are going to be converted. Some examples:

� Naming standards for the fault-tolerant workstations in Tivoli Workload Scheduler for z/OS

Names for workstations can be up to 16 characters in Tivoli Workload Scheduler (if you are using expanded databases). In Tivoli Workload Scheduler for z/OS, workstation names can be up to four characters. This means you have to establish a new naming standard for the fault-tolerant workstations in Tivoli Workload Scheduler for z/OS.

� Naming standards for job names

In Tivoli Workload Scheduler you can specify job names with lengths of up to 40 characters (if you are using expanded databases). In Tivoli Workload Scheduler for z/OS, job names can be up to eight characters. This means that you have to establish a new naming standard for jobs on fault-tolerant workstations in Tivoli Workload Scheduler for z/OS.

� Adoption of the existing Tivoli Workload Scheduler for z/OS object naming standards

You probably already have naming standards for job streams, workstations, job names, resources, and calendars in Tivoli Workload Scheduler for z/OS. When converting Tivoli Workload Scheduler database objects to the Tivoli Workload Scheduler for z/OS databases, you must adopt the Tivoli Workload Scheduler for z/OS naming standard.

� Access to the objects in Tivoli Workload Scheduler for z/OS database and plan

Access to Tivoli Workload Scheduler for z/OS database and plan objects is protected by your security product (for example, RACF). Depending on the naming standards for the imported Tivoli Workload Scheduler objects, you may need to modify the definitions in your security product.

� Is the current Tivoli Workload Scheduler network topology suitable and can it be implemented directly in a Tivoli Workload Scheduler for z/OS server?

The current Tivoli Workload Scheduler network topology, as it is implemented today, may need some adjustments to be optimal. If your Tivoli Workload Scheduler network topology is not optimal, it should be reconfigured when it is implemented in Tivoli Workload Scheduler for z/OS end-to-end scheduling.

Step 2. Install Tivoli Workload Scheduler workstation instances for Tivoli Workload Scheduler for z/OS

By Tivoli Workload Scheduler workstation instances, we mean the installation and configuration of a new Tivoli Workload Scheduler engine. This engine should be configured to be a domain manager, a fault-tolerant agent, or a backup domain manager, according to the Tivoli Workload Scheduler production environment you are going to mirror. Following this approach, you will have two instances on all the Tivoli Workload Scheduler managed systems:

1. One old Tivoli Workload Scheduler workstation instance dedicated to the Tivoli Workload Scheduler master.


2. One new Tivoli Workload Scheduler workstation instance dedicated to the Tivoli Workload Scheduler for z/OS engine (master). Remember to use different port numbers.
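
As a sketch, one of the localopts settings that must differ between the two instances on the same machine is the netman port. The port value below is an assumption; it must be different from the port used by the existing instance and must match the CPUTCPIP() value used in the CPUREC definition for the new workstation.

# localopts excerpt for the new, dedicated workstation instance
nm port =31182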

By creating dedicated Tivoli Workload Scheduler workstation instances for Tivoli Workload Scheduler for z/OS scheduling, you can start testing the new environment without disturbing the distributed Tivoli Workload Scheduler production environment. This also makes it possible to do partial conversion, testing, and verification without interfering with the Tivoli Workload Scheduler production environment.

You can choose different approaches for the conversion:

� Try to group your Tivoli Workload Scheduler job streams and jobs into logical and isolated groups and then convert them, group by group.

� Convert all job streams and jobs, run some parallel testing and verification, and then do the switch from Tivoli Workload Scheduler–managed scheduling to Tivoli Workload Scheduler for z/OS–managed scheduling in one final step.

The suitable approach differs from installation to installation. Some installations will be able to group job streams and jobs into isolated groups, while others will not. You have to decide the strategy for the conversion based on your installation.

Step 3. Define topology of Tivoli Workload Scheduler network in Tivoli Workload Scheduler for z/OS

The topology for your Tivoli Workload Scheduler distributed network can be implemented directly in Tivoli Workload Scheduler for z/OS. This is done by creating the associated DOMREC and CPUREC definitions in the Tivoli Workload Scheduler for z/OS initialization statements.

To activate the topology definitions, create the associated definitions for fault-tolerant workstations in the Tivoli Workload Scheduler for z/OS workstation database. A Tivoli Workload Scheduler for z/OS plan extend or replan will activate these new workstation definitions.

Note: If you decide to reuse the Tivoli Workload Scheduler distributed workstation instances in your Tivoli Workload Scheduler for z/OS managed network, this is also possible. You may decide to move the distributed workstations one by one (depending on how you have grouped your job streams and how you are doing the conversion). When a workstation is going to be moved to Tivoli Workload Scheduler for z/OS, you simply change the port number in the localopts file on the Tivoli Workload Scheduler workstation. The workstation will then be active in Tivoli Workload Scheduler for z/OS at the next plan extension, replan, or redistribution of the Symphony file. (Remember to create the associated DOMREC and CPUREC definitions in the Tivoli Workload Scheduler for z/OS initialization statements.)

If you are using a dedicated Tivoli Workload Scheduler workstation for Tivoli Workload Scheduler for z/OS scheduling, you can create the topology definitions in an early stage of the conversion process. This way you can:

� Verify that the topology definitions are correct in Tivoli Workload Scheduler for z/OS.

� Verify that the dedicated fault-tolerant workstations are linked and available.

� Start getting some experience with the management of fault-tolerant workstations and a distributed Tivoli Workload Scheduler network.

� Implement monitoring and handling routines in your automation application on z/OS.

Step 4. Create JOBSCR members for all Tivoli Workload Scheduler–managed jobs

Tivoli Workload Scheduler-managed jobs that should be converted to Tivoli Workload Scheduler for z/OS must be defined in the SCRPTLIB data set. For every active job defined in the Tivoli Workload Scheduler database, you define a member in the SCRPTLIB data set containing:

� Name of the script or command for the job (defined in the JOBREC JOBSCR() or the JOBREC JOBCMD() specification)

� Name of the user ID that the job should execute under (defined in the JOBREC JOBUSR() specification)
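
For a job that runs a single command rather than a script, the JOBREC JOBCMD() form can be used instead of JOBSCR(); a minimal hypothetical member could look like this (the command and user ID are assumptions):

JOBREC JOBCMD('ls -l /var/log')   /* Command executed on the fault-tolerant workstation */
       JOBUSR(tws)                /* User ID the command runs under                      */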

Note: If the same job script is going to be executed on several systems (it is defined on several workstations in Tivoli Workload Scheduler), you only have to create one member in the SCRPTLIB data set. This job (member) can be defined on several fault-tolerant workstations in several job streams in Tivoli Workload Scheduler for z/OS. This requires that the script is placed in a common directory (path) across all systems.

Step 5. Convert database objects from Tivoli Workload Scheduler to Tivoli Workload Scheduler for z/OS

Tivoli Workload Scheduler database objects that should be converted (job streams, resources, and calendars) probably cannot be converted directly to Tivoli Workload Scheduler for z/OS. In this case, you must amend the Tivoli Workload Scheduler database objects to Tivoli Workload Scheduler for z/OS format and create the corresponding objects in the respective Tivoli Workload Scheduler for z/OS databases.

Pay special attention to object definitions such as:

� Job stream run-cycles for job streams and use of calendars in Tivoli Workload Scheduler

� Use of local (workstation-specific) resources in Tivoli Workload Scheduler (local resources converted to global resources by the Tivoli Workload Scheduler for z/OS master)

� Jobs defined with “repeat range” (for example, run every 10 minutes in job streams)

� Job streams defined with dependencies on job stream level

� Jobs defined with Tivoli Workload Scheduler recovery actions

For these object definitions, you have to design alternative ways of handling for Tivoli Workload Scheduler for z/OS.

Step 6. Education for planners and operators

Some of the handling of distributed Tivoli Workload Scheduler jobs in Tivoli Workload Scheduler for z/OS will be different from the handling in Tivoli Workload Scheduler. Also, some specific fault-tolerant workstation features will be available in Tivoli Workload Scheduler for z/OS.

You should plan for the education of your operators and planners so that they have knowledge of:

� How to define jobs and job streams for the Tivoli Workload Scheduler fault-tolerant workstations

� Specific rules to be followed for scheduling objects related to fault-tolerant workstations

� How to handle jobs and job streams on fault-tolerant workstations

� How to handle resources for fault-tolerant workstations

� The implications of doing, for example, Symphony redistribution

� How Tivoli Workload Scheduler for z/OS end-to-end scheduling works (engine, server, domain managers)

� How the Tivoli Workload Scheduler network topology has been adopted in Tivoli Workload Scheduler for z/OS


Step 7. Test and verify conversion and finalize for production

After testing your approach for the conversion, doing some trial conversions, and testing the conversion carefully, it is time to do the final conversion to Tivoli Workload Scheduler for z/OS.

The goal is to reach this final conversion and switch from Tivoli Workload Scheduler scheduling to Tivoli Workload Scheduler for z/OS scheduling within a reasonable time frame and with a reasonable level of errors. If the period during which you are running both Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS is too long, your planners and operators must handle two environments during that period. This is not effective and can cause some frustration for both planners and operators.

The key to a successful conversion is good planning, testing, and verification. When you are comfortable with the testing and verification it is safe to do the final conversion and finalize for production.

Tivoli Workload Scheduler for z/OS will then handle the central and the distributed workload, and you will have one focal point for your workload. The converted Tivoli Workload Scheduler production environment can be stopped.

5.4.4 Some guidelines to automate the conversion process

If you have a large Tivoli Workload Scheduler scheduling environment, doing manual conversion will be too time-consuming. In this case you should consider trying to automate some or all of the conversion from Tivoli Workload Scheduler to Tivoli Workload Scheduler for z/OS.

One obvious place to automate is the conversion of Tivoli Workload Scheduler database objects to Tivoli Workload Scheduler for z/OS database objects. Although this is not a trivial task, some automation can be implemented. Automation requires some locally developed tools or programs to handle conversion of the database objects.

Some guidelines to help automate the conversion process are:

� Create text copies of all the Tivoli Workload Scheduler database objects by using the composer create command (Example 5-7).

Example 5-7 Tivoli Workload Scheduler objects creation

composer create calendars.txt from CALENDARS
composer create workstations.txt from CPU=@
composer create jobdef.txt from JOBS=@#@
composer create jobstream.txt from SCHED=@#@
composer create parameter.txt from PARMS
composer create resources.txt from RESOURCES


composer create prompts.txt from PROMPTS
composer create users.txt from USERS=@#@

These text files are a good starting point when trying to estimate the effort and time for conversion from Tivoli Workload Scheduler to Tivoli Workload Scheduler for z/OS.

� Use the workstations.txt file when creating the topology definitions (DOMREC and CPUREC) in Tivoli Workload Scheduler for z/OS.

Creating the topology definitions in Tivoli Workload Scheduler for z/OS based on the workstations.txt file is quite straightforward. The task can be automated by coding a program (script or REXX) that reads the workstations.txt file and converts the definitions to DOMREC and CPUREC specifications.

� Use the jobdef.txt file when creating the SCRPTLIB members.

In jobdef.txt, you have the workstation name for the script (used in the job stream definition), the script name (goes to the JOBREC JOBSCR() definition), the streamlogon user (goes to the JOBREC JOBUSR() definition), the description (can be added as comments in the SCRPTLIB member), and the recovery definition.

The recovery definition needs special consideration because it cannot be converted to Tivoli Workload Scheduler for z/OS auto-recovery; here you need to make some workarounds. Use of Tivoli Workload Scheduler CPU class definitions also needs special consideration: the job definitions using CPU classes probably have to be copied to separate workstation-specific job definitions in Tivoli Workload Scheduler for z/OS. The task can be automated by coding a program (script or REXX) that reads the jobdef.txt file and converts each job definition to a member in the SCRPTLIB. If you have many Tivoli Workload Scheduler job definitions, a program that helps automate this task can save a considerable amount of time.

Restriction: Tivoli Workload Scheduler CPU class definitions cannot be converted directly to similar definitions in Tivoli Workload Scheduler for z/OS.

� The users.txt file (if you have Windows NT/2000 jobs) is converted to USRREC initialization statements on Tivoli Workload Scheduler for z/OS.

Be aware that the password for the user IDs is encrypted in the users.txt file, so you cannot automate the conversion right away. You must get the password as it is defined on the Windows workstations and type it in the USRREC USRPSW() definition.

� The jobstream.txt file is used to generate corresponding job streams in Tivoli Workload Scheduler for z/OS. The calendars.txt file is used in connection with the jobstream.txt file when generating run cycles for the job streams in Tivoli Workload Scheduler for z/OS. It could be necessary to create additional calendars in Tivoli Workload Scheduler for z/OS.

When doing the conversion, you should notice that:

– Some of the Tivoli Workload Scheduler job stream definitions cannot be converted directly to Tivoli Workload Scheduler for z/OS job stream definitions (for example: prompts, workstation specific resources, file dependencies, and jobs with repeat range).

For these definitions you must analyze the usage and find other ways to implement similar functions when using Tivoli Workload Scheduler for z/OS.

– Some of the Tivoli Workload Scheduler job stream definitions must be amended to Tivoli Workload Scheduler for z/OS definitions. For example:

• Dependencies on job stream level (use dummy start and end jobs in Tivoli Workload Scheduler for z/OS for job stream dependencies).

Note that dependencies are also dependencies to prompts, file dependencies, and resources.

• Tivoli Workload Scheduler job and job stream priority (0 to 101) must be amended to Tivoli Workload Scheduler for z/OS priority (1 to 9). Furthermore, priority in Tivoli Workload Scheduler for z/OS is always on job stream level. (It is not possible to specify priority on job level.)

• Job stream run cycles (and calendars) must be converted to Tivoli Workload Scheduler for z/OS run cycles (and calendars).

– Description texts longer than 24 characters are not allowed for job streams or jobs in Tivoli Workload Scheduler for z/OS. If you have Tivoli Workload Scheduler job streams or jobs with more than 24 characters of description text, you should consider adding this text as Tivoli Workload Scheduler for z/OS operator instructions.

If you have a large number of Tivoli Workload Scheduler job streams, manual handling of job streams can be too time-consuming. The task can be automated to a certain extent by coding a program (script or REXX).

A good starting point is to code a program that identifies all areas where you need special consideration or action. Use the output from this program to estimate the effort of doing the conversion. Further, the output can be used to identify and group used Tivoli Workload Scheduler functions where special workarounds must be performed when converting to Tivoli Workload Scheduler for z/OS.


The program can be further refined to handle the actual conversion, performing the following steps:

– Read all of the text files.

– Analyze the job stream and job definitions.

– Create corresponding Tivoli Workload Scheduler for z/OS job streams with amended run cycles and jobs.

– Generate a file with Tivoli Workload Scheduler for z/OS batch loader statements for job streams and jobs (batch loader statements are Tivoli Workload Scheduler for z/OS job stream definitions in a format that can be loaded directly into the Tivoli Workload Scheduler for z/OS databases).

The batch loader file can be sent to the z/OS system and used as input to the Tivoli Workload Scheduler for z/OS batch loader program. The Tivoli Workload Scheduler for z/OS batch loader will read the file (data set) and create the job streams and jobs defined in the batch loader statements.

� The resources.txt file is used to define the corresponding resources in Tivoli Workload Scheduler for z/OS.

Remember that local (workstation-specific) resources are not allowed in Tivoli Workload Scheduler for z/OS. This means that the Tivoli Workload Scheduler workstation-specific resources will be converted to global special resources in Tivoli Workload Scheduler for z/OS.

The Tivoli Workload Scheduler for z/OS engine is directly involved when resolving a dependency to a global resource. A fault-tolerant workstation job must interact with the Tivoli Workload Scheduler for z/OS engine to resolve a resource dependency. This can jeopardize the fault tolerance in your network.

� The use of parameters in the parameters.txt file must be analyzed.

What are the parameters used for?

– Are the parameters used for date calculations?

– Are the parameters used to pass information from one job to another job (using the Tivoli Workload Scheduler parms command)?

– Are the parameters used as parts of job definitions, for example, to specify where the script is placed?

Depending on how you used the Tivoli Workload Scheduler parameters, there will be different approaches when converting to Tivoli Workload Scheduler for z/OS. Unless you use parameters as part of Tivoli Workload Scheduler object definitions, you usually do not have to do any conversion. Parameters will still work after the conversion.

You have to copy the parameter database to the home directory of the Tivoli Workload Scheduler fault-tolerant workstations. The parms command can still be used locally on the fault-tolerant workstation when managed by Tivoli Workload Scheduler for z/OS.

We will show how to use Tivoli Workload Scheduler parameters in connection with Tivoli Workload Scheduler for z/OS JCL variables. This is a way to pass values for Tivoli Workload Scheduler for z/OS JCL variables to Tivoli Workload Scheduler parameters so that they can be used locally on the fault-tolerant workstation.
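
As a sketch of such local usage, a job script on a fault-tolerant workstation might resolve a parameter at run time with the parms command; the parameter name, its use, and the installation path shown here are assumptions:

#!/bin/sh
# Resolve the ACCTDATE parameter from the local TWS parameters database
ACCTDATE=`/opt/tws/bin/parms ACCTDATE`
echo "Running with accounting date $ACCTDATE"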

5.5 Tivoli Workload Scheduler for z/OS end-to-end fail-over scenarios

In this section, we describe how to make the Tivoli Workload Scheduler for z/OS end-to-end environment fail-safe and plan for system outages. We also show some fail-over scenario examples.

To make your Tivoli Workload Scheduler for z/OS end-to-end environment fail-safe, you have to:

� Configure Tivoli Workload Scheduler for z/OS backup engines (also called hot standby engines) in your sysplex.

If you do not run sysplex, but have more than one z/OS system with shared DASD, then you should make sure that the Tivoli Workload Scheduler for z/OS engine can be moved from one system to another without any problems.

� Configure your z/OS systems to use a virtual IP address (VIPA).

VIPA is used to make sure that the Tivoli Workload Scheduler for z/OS end-to-end server always gets the same IP address no matter which z/OS system it is run on. VIPA assigns a system-independent IP address to the Tivoli Workload Scheduler for z/OS server task.

If using VIPA is not an option, you should consider other ways of assigning a system-independent IP address to the Tivoli Workload Scheduler for z/OS server task. For example, this can be a hostname file, DNS, or stack affinity.

� Configure a backup domain manager for the first-level domain manager.

Refer to the Tivoli Workload Scheduler for z/OS end-to-end configuration, shown in Figure 5-1 on page 266, for the fail-over scenarios.

When the environment is configured to be fail-safe, the next step is to test that the environment actually is fail-safe. We did the following fail-over tests:

� Switch to the Tivoli Workload Scheduler for z/OS backup engine.

� Switch to the Tivoli Workload Scheduler backup domain manager.


5.5.1 Configure Tivoli Workload Scheduler for z/OS backup engines

To ensure that the Tivoli Workload Scheduler for z/OS engine will be started, either as the active engine or as a standby engine, we specify:

OPCOPTS OPCHOST(PLEX)

in the initialization statements for the Tivoli Workload Scheduler for z/OS engine (pointed to by the member of the EQQPARM library specified by the PARM parameter on the JCL EXEC statement). OPCHOST(PLEX) means that the engine starts as the controlling system; if there already is an active engine in the XCF group, startup continues as a standby engine.

OPCHOST(PLEX) is valid only when the XCF group and member have been specified. Also, this selection requires that Tivoli Workload Scheduler for z/OS is running on a z/OS/ESA Version 4 Release 1 or later system. Because we are running z/OS 1.3, we can use the OPCHOST(PLEX) definition. We specify the XCF group and member definitions for the engine as shown in Example 5-8.

Example 5-8 xcf group and member definitions

XCFOPTS GROUP(TWS820)
        MEMBER(TWSC&SYSNAME.)
     /* TAKEOVER(SYSFAIL,HOSTFAIL)       Do takeover manually !! */

You must have unique member names for all your engines (active and standby) running in the same sysplex. We ensure this by using the SYSNAME variable.

Note: OPCOPTS OPCHOST(YES) must be specified if you start the engine with an empty checkpoint data set. This could be the case the first time you start a newly installed engine or after you have migrated from a previous release of Tivoli Workload Scheduler for z/OS.

Tip: We use the z/OS sysplex-wide SYSNAME variable when specifying the member name for the engine in the sysplex. Using z/OS variables this way, we can have common Tivoli Workload Scheduler for z/OS parameter member definitions for all our engines (and agents as well).

For example, when the engine is started on SC63, the MEMBER(TWSC&SYSNAME) will be MEMBER(TWSCSC63).


We did not define a Tivoli Workload Scheduler for z/OS APPC server task for the Tivoli Workload Scheduler for z/OS panels and PIF programs, as described in “Remote panels and program interface applications” on page 31. However, we strongly recommend that you use such a server task in sysplex environments where the engine can be moved to different systems in the sysplex; without it, you must log off and then log on to the system where the engine is active.

5.5.2 Configure DVIPA for Tivoli Workload Scheduler for z/OS end-to-end server

To make sure that the engine can be moved from SC64 to either SC63 or SC65, Dynamic VIPA is used to define the IP address for the server task. This DVIPA IP address is defined in the profile data set pointed to by PROFILE DD-card in the TCPIP started task.

The VIPA definition that is used to define logical sysplex-wide IP addresses for the Tivoli Workload Scheduler for z/OS end-to-end server, engine, and JSC server is shown in Example 5-9.

Example 5-9 The VIPA definition

VIPADYNAMIC
  viparange define 255.255.255.248 9.12.6.104
ENDVIPADYNAMIC

Tip: We have not activated the TAKEOVER(SYSFAIL,HOSTFAIL) parameter in XCFOPTS because we do not want the engine to switch automatically to one of its backup engines in case the active engine fails or the system fails.

Because we have not specified the TAKEOVER parameter, we are making the switch to one of the backup engines manually. The switch is made by issuing the following modify command on the z/OS system where you want the backup engine to take over:

F TWSC,TAKEOVER

In this example, TWSC is the name of our Tivoli Workload Scheduler for z/OS backup engine started task (same name on all systems in the sysplex).

The takeover can be managed by SA/390, for example. This way SA/390 can integrate the switch to a backup engine with other automation tasks in the engine or on the system.


PORT
  424   TCP TWSC    BIND 9.12.6.105
  5000  TCP TWSCJSC BIND 9.12.6.106
  31282 TCP TWSCE2E BIND 9.12.6.107

In this example, the first column under PORT is the port number, the third column is the name of the started task, and the fifth column is the logical sysplex-wide IP address.

Port 424 is used for the Tivoli Workload Scheduler for z/OS tracker agent IP address, port 5000 for the Tivoli Workload Scheduler for z/OS JSC server task, and port 31282 is used for the Tivoli Workload Scheduler for z/OS end-to-end server task.

With these VIPA definitions, we have made a relation between port number, started task name, and the logical IP address that can be used sysplex-wide.

The TWSCE2E host name and the 31282 port number that are used for the Tivoli Workload Scheduler for z/OS end-to-end server are defined in the TOPOLOGY HOSTNAME(TWSCE2E) initialization statement, which is used by the TWSCE2E server and the Tivoli Workload Scheduler for z/OS plan programs.

When the Tivoli Workload Scheduler for z/OS engine creates the Symphony file, the TWSCE2E host name and 31282 port number will be part of the Symphony file. The first-level domain manager (U100) and the backup domain manager (F101) will use this host name when they establish outbound IP connections to the Tivoli Workload Scheduler for z/OS server. The backup domain manager only establishes outbound IP connections to the Tivoli Workload Scheduler for z/OS server if it is going to take over the responsibilities for the first-level domain manager.
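
For reference, the corresponding TOPOLOGY keywords might look as follows; PORTNUMBER is assumed here to be the keyword that sets the server port, and the values are the ones used in this scenario:

TOPOLOGY HOSTNAME(TWSCE2E)       /* Sysplex-wide VIPA host name of the end-to-end server */
         PORTNUMBER(31282)       /* Must match the port reserved for TWSCE2E above       */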

5.5.3 Configure backup domain manager for first-level domain manager

Note: The examples and text below refer to a different end-to-end scheduling network, so the names of workstations are different than in the rest of the redbook. This section is included here mostly unchanged from End-to-End Scheduling with Tivoli Workload Scheduler 8.1, SG24-6022, because the steps to switch to a backup domain manager are the same in Version 8.2 as they were in Version 8.1.

One additional option that is available with Tivoli Workload Scheduler 8.2 is to use the WSSTAT command instead of the Job Scheduling Console to do the switch (from backup domain manager to first-level domain manager). This method is also shown in this scenario, in addition to the GUI method.

In this section, we show how to configure a backup domain manager for a first-level domain manager. In this scenario, the F100 FTA is configured as the first-level domain manager and the F101 FTA is configured as its backup domain manager. The initial DOMREC definitions in Example 5-10 show that F100 (in bold) is defined as the first-level domain manager.

Example 5-10 DOMREC definitions

/**********************************************************************/
/* DOMREC: Defines the domains in the distributed Tivoli Workload    */
/* Scheduler network                                                  */
/**********************************************************************/
/*--------------------------------------------------------------------*/
/* Specify one DOMREC for each domain in the distributed network.     */
/* With the exception of the master domain (whose name is MASTERDM    */
/* and consist of the TWS for z/OS engine).                           */
/*--------------------------------------------------------------------*/
DOMREC DOMAIN(DM100)         /* Domain name for 1st domain    */
       DOMMNGR(F100)         /* Chatham FTA - domain manager  */
       DOMPARENT(MASTERDM)   /* Domain parent is MASTERDM     */
DOMREC DOMAIN(DM200)         /* Domain name for 2nd domain    */
       DOMMNGR(F200)         /* Yarmouth FTA - domain manager */
       DOMPARENT(DM100)      /* Domain parent is DM100        */

The F101 fault-tolerant agent can be configured to be the backup domain manager simply by specifying the Example 5-11 entries (in bold) in its CPUREC definition.

Example 5-11 Configuring F101 to be the backup domain manager

CPUREC CPUNAME(F101)
       CPUTCPIP(31758)
       CPUUSER(tws)
       CPUDOMAIN(DM100)
       CPUSERVER(1)
       CPUFULLSTAT(ON)       /* Full status on for Backup DM  */
       CPURESDEP(ON)         /* Resolve dep. on for Backup DM */

With CPUFULLSTAT (full status information) and CPURESDEP (resolve dependency information) set to On, the Symphony file on F101 is updated with the same reporting and logging information as the Symphony file on F100. The backup domain manager will then be able to take over the responsibilities of the first-level domain manager.

5.5.4 Switch to Tivoli Workload Scheduler backup domain manager

This scenario is divided into two parts:

� A short-term switch to the backup manager

By short-term switch, we mean that we have switched back to the original domain manager before the current plan is extended or replanned.

� A long-term switch

By a long-term switch, we mean that the switch to the backup manager will be effective across the current plan extension or replan.

Short-term switch to the backup manager

In this scenario, we issue a switchmgr command on the F101 backup domain manager. We verify that F101 takes over the responsibilities of the old first-level domain manager.

The steps in the short-term switch scenario are:

1. Issue the switch command on the F101 backup domain manager.

2. Verify that the switch is done.

Step 1. Issue switch command on F101 backup domain manager

Before we do the switch, we check the status of the workstations from a JSC instance pointing to the first-level domain manager (Figure 5-8 on page 308).

Figure 5-8 Status for workstations before the switch to F101

Note: FixPack 04 introduces a new Fault-Tolerant Switch Feature, which is described in a PDF file named FaultTolerantSwitch.README.

The new Fault-Tolerant Switch Feature replaces and enhances the existing or traditional Fault-Tolerant Switch Manager for backup domain managers.

Note in Figure 5-8 that F100 is MANAGER (in the CPU Type column) for the DM100 domain. F101 is FTA (in the CPU Type column) in the DM100 domain.

To simulate that the F100 first-level domain manager is down or unavailable due to a system failure, we issue the switch manager command on the F101 backup domain manager. The switch manager command is initiated from the conman command line on F101:

conman switchmgr "DM100;F101"

In this example, DM100 is the domain and F101 is the fault-tolerant workstation we are going to switch to. The F101 fault-tolerant workstation responds with the message shown in Example 5-12.

Example 5-12 Messages showing switch has been initiated

TWS for UNIX (AIX)/CONMAN 8.1 (1.36.1.3)
Licensed Materials Property of IBM
5698-WKB (C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for group 'TWS-EndToEnd'.
Locale LANG set to "en_US"
Schedule (Exp) 02/27/02 (#107) on F101.  Batchman LIVES.  Limit: 20, Fence: 0, Audit Level: 0
switchmgr DM100;F101
AWS20710041I Service 2005 started on F101
AWS22020120 Switchmgr command executed from cpu F101 on cpu F101.

This indicates that the switch has been initiated.

It is also possible to initiate the switch from a JSC instance pointing to the F101 backup domain manager. Because we do not have a JSC instance pointing to the backup domain manager we use the conman switchmgr command locally on the F101 backup domain manager.

For your information, we show how to initiate the switch from the JSC:

1. Double-click Status of all Domains in the Default Plan Lists in the domain manager JSC instance (TWSC-F100-Eastham) (Figure 5-9).

Figure 5-9 Status of all Domains list

2. Right-click the DM100 domain for the context menu shown in Figure 5-10.

Figure 5-10 Context menu for the DM100 domain

3. Select Switch Manager. The JSC shows a new pop-up window in which we can search for the agent we will switch to (Figure 5-11).

Figure 5-11 The Switch Manager - Domain search pop-up window

4. Click the search button (the square box with three dots to the right of the F100 domain shown in Figure 5-11), and JSC opens the Find Workstation Instance pop-up window (Figure 5-12 on page 311).

Figure 5-12 JSC Find Workstation Instance window

5. Click Start (Figure 5-12). JSC opens a new pop-up window that contains all the fault-tolerant workstations in the network (Figure 5-13 on page 312).

6. If we specify a filter in the Find field (shown in Figure 5-12), this filter is used to narrow the list of workstations that are shown.

Figure 5-13 The result from Find Workstation Instance

7. Mark the workstation to switch to (F101 in our example) and click OK in the Find Workstation Instance window (Figure 5-13).

8. Click OK in the Switch Manager - Domain pop-up window to initiate the switch. Note that the selected workstation (F101) appears in the pop-up window (Figure 5-14).

Figure 5-14 Switch Manager - Domain pop-up window with selected FTA

The switch to F101 is initiated and Tivoli Workload Scheduler performs the switch.

If you prefer to work with the JSC, the above method of switching will appeal to you. If you are a mainframe operator, you may prefer to perform this sort of task from the mainframe. The example below shows how to do switching using the WSSTAT TSO command instead.

In Example 5-13, the workstation F101 is instructed to become the new domain manager of the DM100 domain. The command is sent via the TWST tracker subsystem.

Example 5-13 Alternate method of switching domain manager, using WSSTAT command

WSSTAT SUBSYS(TWST) WSNAME(F101) MANAGES(DM100)

If you prefer to work with the UNIX or Windows command line, Example 5-14 shows how to run the switchmgr command from conman.

Example 5-14 Alternate method of switching domain manager, using switchmgr

conman 'switchmgr DM100;F101'

Note: With Tivoli Workload Scheduler for z/OS 8.2, you can now switch the domain manager using the WSSTAT TSO command on the mainframe. The Tivoli Workload Scheduler for z/OS Managing the Workload, SC32-1263 guide incorrectly states the syntax of this command. DOC APAR PQ93442 has been opened to correct the documentation.

Step 2. Verify that the switch is done

We check the status of the workstations using the JSC pointing to the old first-level domain manager, F100 (Figure 5-15).

Figure 5-15 Status for the workstations after the switch to F101

In Figure 5-15, it can be verified that F101 is now MANAGER (see CPU Type column) for the DM100 domain (the Domain column). The F100 is changed to an FTA (the CPU Type column).

The OPCMASTER workstation has the status unlinked (as shown in the Link Status column in Figure 5-15 on page 314). This status is correct, as we are using the JSC instance pointing to the F100 workstation. The OPCMASTER has a linked status on F101, as expected.

Switching to the backup domain manager takes some time, so be patient. The reason for this is that the switch manager command stops the backup domain manager and restarts it as the domain manager. All domain member fault-tolerant workstations are informed about the switch, and the old domain manager is converted to a fault-tolerant agent in the domain. The fault-tolerant workstations use the switch information to update their Symphony file with the name of the new domain manager. Then they stop and restart to link to the new domain manager.

On rare occasions, the link status is not shown correctly in the JSC after a switch to the backup domain manager. If this happens, try to link the workstation manually by right-clicking the workstation and clicking Link in the pop-up menu.

Long-term switch to the backup manager

The identification of domain managers is placed in the Symphony file. If a switch domain manager command is issued, the old domain manager name will be replaced with the new (backup) domain manager name in the Symphony file.

Note: To reactivate F100 as the domain manager, simply do a switch manager to F100 or run Symphony redistribute. The F100 will also be reinstated as the domain manager when you run the extend or replan programs.

If the switch to the backup domain manager is going to be effective across Tivoli Workload Scheduler for z/OS plan extension or replan, we have to update the DOMREC definition. This is also the case if we redistribute the Symphony file from Tivoli Workload Scheduler for z/OS.

The plan program reads the DOMREC definitions and creates a Symphony file with domain managers and fault-tolerant agents defined accordingly. If the DOMREC definitions are not updated to reflect the switch to the backup domain manager, the old domain manager will automatically resume domain management responsibilities.

The steps in the long-term switch scenario are:

1. Issue the switch command on the F101 backup domain manager.

2. Verify that the switch is done.

3. Update the DOMREC definitions used by the TWSCE2E server and the Tivoli Workload Scheduler for z/OS plan programs.

4. Run the replan plan program in Tivoli Workload Scheduler for z/OS.

5. Verify that the switched F101 is still the domain manager.

Step 1. Issue switch command on F101 backup domain manager

The switch command is done as described in “Step 1. Issue switch command on F101 backup domain manager” on page 308.

Step 2. Verify that the switch is done

We check the status of the workstations using the JSC pointing to the old first-level domain manager, F100 (Figure 5-16).

Figure 5-16 Status of the workstations after the switch to F101

From Figure 5-16 it can be verified that F101 is now MANAGER (see CPU Type column) for the DM100 domain (see the Domain column). F100 is changed to an FTA (see the CPU Type column).

The OPCMASTER workstation has the status unlinked (see the Link Status column in Figure 5-16). This status is correct, as we are using the JSC instance pointing to the F100 workstation. The OPCMASTER has a linked status on F101, as expected.

Step 3. Update the DOMREC definitions for server and plan program

We update the DOMREC definitions so that F101 becomes the new first-level domain manager (Example 5-15).

Example 5-15 DOMREC definitions

/**********************************************************************/
/* DOMREC: Defines the domains in the distributed Tivoli Workload    */
/* Scheduler network                                                  */
/**********************************************************************/
/*--------------------------------------------------------------------*/
/* Specify one DOMREC for each domain in the distributed network.     */
/* With the exception of the master domain (whose name is MASTERDM    */
/* and consist of the TWS for z/OS engine).                           */
/*--------------------------------------------------------------------*/
DOMREC DOMAIN(DM100)         /* Domain name for 1st domain    */
       DOMMNGR(F101)         /* Chatham FTA - domain manager  */
       DOMPARENT(MASTERDM)   /* Domain parent is MASTERDM     */
DOMREC DOMAIN(DM200)         /* Domain name for 2nd domain    */
       DOMMNGR(F200)         /* Yarmouth FTA - domain manager */
       DOMPARENT(DM100)      /* Domain parent is DM100        */

The DOMREC DOMMNGR(F101) keyword defines the name of the first-level domain manager. This is the only change needed in the DOMREC definitions.

We created an extra member in the EQQPARM data set and called it TPSWITCH. This member contains the updated DOMREC definitions to be used when we have a long-term switch. In the EQQPARM data set, we now have three members: TPSWITCH (F101 is domain manager), TPNORM (F100 is domain manager), and TPDOMAIN (the member used by TWSCE2E and the plan programs).

Before the plan programs are executed, we replace the TPDOMAIN member with the TPSWITCH member. When F100 is going to be the domain manager again we simply replace the TPDOMAIN member with the TPNORM member.

Step 4. Run replan plan program in Tivoli Workload Scheduler for z/OS

We submit a replan plan program (job) using option 3.1 from legacy ISPF in the Tivoli Workload Scheduler for z/OS engine and verify the output.

Example 5-16 shows the messages in EQQMLOG.

Example 5-16 EQQMLOG

EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPDOMAIN IS: 0000
EQQ3005I CPU F101 IS SET AS DOMAIN MANAGER OF FIRST LEVEL
EQQ3030I DOMAIN MANAGER F101 MUST HAVE SERVER ATTRIBUTE SET TO BLANK
EQQ3011I CPU F200 SET AS DOMAIN MANAGER
EQQZ013I NOW PROCESSING PARAMETER LIBRARY MEMBER TPUSER
EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPUSER IS: 0000

The F101 fault-tolerant workstation is the first-level domain manager.

The EQQ3030I message is due to the CPUSERVER(1) specification in the CPUREC definition for the F101 workstation. The CPUSERVER(1) specification is used when F101 is running as a fault-tolerant workstation managed by the F100 domain manager.

Step 5. Verify that the switched F101 is still domain manager

Finally, we verify that F101 is still the domain manager after the replan program has finished and the Symphony file has been distributed (Figure 5-17).

Tip: If you let your system automation (for example, System Automation/390) handle the switch to the backup domain manager, you can automate the entire process.

� System automation replaces the EQQPARM members.

� System automation initiates the switch manager command remotely on the fault-tolerant workstation.

� System automation resets the definitions when the original domain manager is ready to be activated.

Figure 5-17 Workstations status after Tivoli Workload Scheduler for z/OS replan program

From Figure 5-17, it can be verified that F101 is still MANAGER (in the CPU Type column) for the DM100 domain (in the Domain column). The CPU type for F100 is FTA.

The OPCMASTER workstation has the status unlinked (the Link Status column). This status is correct, as we are using the JSC instance pointing to the F100 workstation. The OPCMASTER has a linked status on F101, as expected.

5.5.5 Implementing Tivoli Workload Scheduler high availability in high availability environments

You can also use high availability environments such as High Availability Cluster Multi-Processing (HACMP) or Microsoft Cluster Server (MSCS) to implement fail-safe Tivoli Workload Scheduler workstations.

The redbook High Availability Scenarios with IBM Tivoli Workload Scheduler and IBM Tivoli Framework, SG24-6632, discusses these scenarios in detail, so we refer you to it for implementing Tivoli Workload Scheduler high availability using HACMP or MSCS.

5.6 Backup and maintenance guidelines for FTAs

In this section, we discuss some important backup and maintenance guidelines for Tivoli Workload Scheduler fault-tolerant agents (workstations) in an end-to-end scheduling environment.

Note: To reactivate F100 as a domain manager, simply do a switch manager to F100 or Symphony redistribute. The F100 will also be reinstated as domain manager when you run the extend or replan programs.

Remember to change the DOMREC definitions before the plan programs are executed or the Symphony file will be redistributed.

5.6.1 Backup of the Tivoli Workload Scheduler FTAs

To make sure that you can recover from disk or system failures on the system where the Tivoli Workload Scheduler engine is installed, you should make a daily or weekly backup of the installed engine.

The backup can be done in several ways. You probably already have some backup policies and routines implemented for the system where the Tivoli Workload Scheduler engine is installed. These backups should be extended to make a backup of files in the <TWShome> and the <TWShome/..> directories.

We suggest that you have a backup of all of the Tivoli Workload Scheduler files in the <TWShome> and <TWShome/..> directories. If the Tivoli Workload Scheduler engine is running as a fault-tolerant workstation in an end-to-end network, it should be sufficient to make the backup on a weekly basis.

When deciding how often a backup should be generated, consider:

� Are you using parameters on the Tivoli Workload Scheduler agent?

If you are using parameters locally on the Tivoli Workload Scheduler agent and do not have a central repository for the parameters, you should consider making daily backups.

� Are you using specific security definitions on the Tivoli Workload Scheduler agent?

If you are using specific security file definitions locally on the Tivoli Workload Scheduler agent and do not have a central repository for the security file definitions, you should consider making daily backups.

Another approach is to make a backup of the Tivoli Workload Scheduler agent files, at least before making any changes to the files. For example, the changes can be updates to configuration parameters or a patch update of the Tivoli Workload Scheduler agent.
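As a simple illustration, the backup job on a UNIX fault-tolerant agent could be a small shell script such as the following sketch. The installation path /opt/tws, the backup directory, and the seven-day retention are assumptions for this example; adapt them to your installation and your existing backup policies.

#!/bin/sh
# Back up <TWShome> and <TWShome/..> to a date-stamped tar archive.
TWSHOME=/opt/tws                  # assumed TWS installation directory
BACKUPDIR=/backup/tws             # assumed backup target
DATE=`date +%Y%m%d`
PARENT=`dirname $TWSHOME`         # parent directory, contains the unison directory

mkdir -p $BACKUPDIR
# Archive the parent directory, which includes <TWShome> and <TWShome/..>/unison.
tar -cf $BACKUPDIR/tws_backup_$DATE.tar $PARENT
# Keep only the last seven days of backups (assumed retention).
find $BACKUPDIR -name 'tws_backup_*.tar' -mtime +7 -exec rm {} \;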

5.6.2 Stdlist files on Tivoli Workload Scheduler FTAs

Tivoli Workload Scheduler fault-tolerant agents save job logs on the system where the jobs run. These job logs are stored in a directory named <TWShome>/stdlist. In the stdlist (standard list) directory, there are subdirectories with the name ccyy.mm.dd (where cc is the century, yy is the year, mm is the month, and dd is the date).

This subdirectory is created daily by the Tivoli Workload Scheduler netman process when a new Symphony file (Sinfonia) is received on the fault-tolerant agent. The Symphony file is generated by the Tivoli Workload Scheduler for z/OS controller plan program in the end-to-end scheduling environment.

The ccyy.mm.dd subdirectory contains a job log for each job that is executed on a particular production day, as seen in Example 5-17.

Example 5-17 Files in a stdlist/ccyy.mm.dd directory

O19502.0908   File with job log for job with process no. 19502 run at 09.08
O19538.1052   File with job log for job with process no. 19538 run at 10.52
O38380.1201   File with job log for job with process no. 38380 run at 12.01

These log files are created by the Tivoli Workload Scheduler job manager process (jobman) and will remain there until deleted by the system administrator.

Tivoli Workload Scheduler also logs messages from its own programs. These messages are stored in a subdirectory of the stdlist directory called logs.

The easiest way to manage the growth of these directories is to decide how long the log files are needed and then schedule a job, under Tivoli Workload Scheduler for z/OS control, that removes any file older than the given number of days. The Tivoli Workload Scheduler rmstdlist command can remove or display files in the stdlist directory based on the age of the files:

rmstdlist [-v | -u]
rmstdlist [-p] [age]

In these commands, the arguments are:

-v Displays the command version and exits.

-u Displays the command usage information and exits.

-p Displays the names of qualifying standard list file directories. No directories or files are removed. If you do not specify -p, the qualifying standard list files are removed.

age The minimum age, in days, for standard list file directories to be displayed or removed. The default is 10 days.

We suggest that you run the rmstdlist command daily on all of your fault-tolerant agents. This command can be defined in a job in a job stream and scheduled by Tivoli Workload Scheduler for z/OS. You may need to save a backup copy of the stdlist files, for example, for internal revision or due to company policies. If this is the case, a backup job can be scheduled to run just before the rmstdlist job.
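Such a job could be as simple as the following sketch, which first archives the stdlist directory (in case you need to keep the job logs) and then removes standard list directories older than 10 days. The paths and the 10-day retention are assumptions for this example.

#!/bin/sh
# Run as the TWS user on the fault-tolerant agent.
TWSHOME=/opt/tws                          # assumed TWS installation directory

# Optional backup of the job logs before they are removed.
tar -cf /backup/stdlist_`date +%Y%m%d`.tar $TWSHOME/stdlist

# Remove standard list file directories older than 10 days.
$TWSHOME/bin/rmstdlist 10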

5.6.3 Auditing log files on Tivoli Workload Scheduler FTAs

The auditing function can be used to track changes to the Tivoli Workload Scheduler plan (the Symphony file) on FTAs. Plan auditing is enabled by the TOPOLOGY PLANAUDITLEVEL parameter, described below.

PLANAUDITLEVEL(0|1)

Enables or disables plan auditing for distributed agents. Valid values are 0 to disable plan auditing and 1 to activate plan auditing. Auditing information is logged to a flat file in the TWShome/audit/plan directory. Each Tivoli Workload Scheduler workstation maintains its own log. Only actions are logged in the auditing file, not the success or failure of any action. If you change the value, you also need to restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.
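For example, plan auditing could be activated by adding the parameter to the existing TOPOLOGY statement, as in this minimal sketch (all other TOPOLOGY parameters are omitted here):

TOPOLOGY PLANAUDITLEVEL(1)      /* Activate plan auditing on the FTAs */

Remember that the change only takes effect after the server is restarted and the Symphony file is renewed, as noted above.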

After plan auditing has been enabled, modifications to the Tivoli Workload Scheduler plan (the Symphony file) on an FTA are logged to the plan audit directory on that workstation:

<TWShome>/audit/plan/date (where date is in ccyymmdd format)

We suggest that you clean out the audit database and plan directories regularly, daily if necessary. The cleanout in the directories can be defined in a job in a job stream and scheduled by Tivoli Workload Scheduler for z/OS. You may need to save a backup copy of the audit files (for internal revision or due to company policies, for example). If so, a backup job can be scheduled to run just before the cleanup job.
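A cleanup job of this kind could look like the following sketch, which removes audit files older than a given number of days from the audit subdirectories. The installation path and the 10-day retention are assumptions for this example.

#!/bin/sh
TWSHOME=/opt/tws      # assumed TWS installation directory
RETENTION=10          # days of audit data to keep (assumed value)

# Remove plan and database audit files older than the retention period.
for DIR in $TWSHOME/audit/plan $TWSHOME/audit/database
do
    [ -d $DIR ] && find $DIR -type f -mtime +$RETENTION -exec rm {} \;
done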

5.6.4 Monitoring file systems on Tivoli Workload Scheduler FTAs

It is easier to deal with file system problems before they happen. If your file system fills up, Tivoli Workload Scheduler will no longer function and your job processing will stop. To avoid problems, monitor the file systems containing your Tivoli Workload Scheduler home directory and /tmp. For example, if you have a 2 GB file system, you might want a warning at 80%, but if you have a smaller file system, you will need a warning at a lower percentage. We cannot give you an exact percentage at which to be warned; it depends on many variables that change from installation to installation (or company to company).

Monitoring the file system utilization can be done with, for example, IBM Tivoli Monitoring and IBM Tivoli Enterprise Console® (TEC).

Example 5-18 shows a shell script that tests what percentage of the Tivoli Workload Scheduler file system is filled and reports back if it is over 80% full.

Example 5-18 Monitoring script

/usr/bin/df -P /dev/lv01 | grep TWS >> tmp1$$
/usr/bin/awk '{print $5}' tmp1$$ > tmp2$$
/usr/bin/sed 's/%$//g' tmp2$$ > tmp3$$
x=`cat tmp3$$`
i=`expr $x \> 80`
echo "This file system is less than 80% full." >> tmp4$$
if [ "$i" -eq 1 ]; then
  echo "This file system is over 80% full. You need to remove schedule logs and audit logs from the subdirectories in the file system." > tmp4$$
fi
cat tmp4$$
rm tmp1$$ tmp2$$ tmp3$$ tmp4$$

5.6.5 Central repositories for important Tivoli Workload Scheduler files

Tivoli Workload Scheduler has several files that are important both for the use of Tivoli Workload Scheduler and for the daily production workload, whether you are running a Tivoli Workload Scheduler master domain manager or a Tivoli Workload Scheduler for z/OS end-to-end server. Managing these files across several Tivoli Workload Scheduler workstations can be a cumbersome and very time-consuming task. Using central repositories for these files can save time and make your management more effective.

Script files

Scripts (or the JCL) are very important objects when doing job scheduling on the Tivoli Workload Scheduler fault-tolerant agents. It is the scripts that actually perform the work on the agent system, such as updating the payroll database or the customer inventory database.

The job definition for distributed jobs in Tivoli Workload Scheduler or Tivoli Workload Scheduler for z/OS contains a pointer (the path or directory) to the script. The script itself is placed locally on the fault-tolerant agent. Because the fault-tolerant agents have a local copy of the plan (Symphony) and the script to run, they can continue running jobs on the system even if the connection to the Tivoli Workload Scheduler master or the Tivoli Workload Scheduler for z/OS controller is broken. This is how fault tolerance on the workstations is achieved.

Managing scripts on several Tivoli Workload Scheduler fault-tolerant agents and making sure that you always have the correct versions on every fault-tolerant agent can be a time-consuming task. You also must ensure that the scripts are protected so that they cannot be updated by the wrong person. Poorly protected scripts can cause problems in your production environment if someone has changed something without notifying the responsible planner or change manager.

We suggest placing all scripts that are used for production workload in one common script repository. The repository can be designed in different ways. One way could be to have a subdirectory for each fault-tolerant workstation (with the same name as the name on the Tivoli Workload Scheduler workstation).

All changes to scripts are made in this production repository. On a daily basis, for example, just before the plan is extended, the master scripts in the central repository are distributed to the fault-tolerant agents. The daily distribution can be handled by a Tivoli Workload Scheduler scheduled job. This job can be defined as predecessor to the plan extend job.

This approach can be made even more advanced by using a software distribution application to handle the distribution of the scripts. The software distribution application can help keep track of different versions of the same script. If you encounter a problem with a changed script in a production shift, you can simply ask the software distribution application to redistribute a previous version of the same script and then rerun the job.
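In its simplest form, the daily distribution job could be a script along the lines of the following sketch, which copies each workstation's subdirectory of the central repository to that workstation before the plan is extended. The repository layout, the workstation and host names, and the target directory are assumptions for this illustration; a software distribution product would replace this logic in the more advanced setup described above.

#!/bin/sh
REPOSITORY=/repository            # assumed central script repository
TARGET=/tivoli/TWS/scripts        # assumed script directory on every FTA

# One repository subdirectory per fault-tolerant workstation (assumed names).
for WS in F101 F102 F200
do
    # Assumed convention: the host name is the workstation name in lowercase.
    HOST=`echo $WS | tr 'A-Z' 'a-z'`
    # Copy the production scripts for this workstation, preserving permissions.
    scp -rp $REPOSITORY/$WS/* tws@$HOST:$TARGET/
done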

Security files

The Tivoli Workload Scheduler security file, discussed in detail in 5.7, “Security on fault-tolerant agents” on page 323, is used to protect access to Tivoli Workload Scheduler database and plan objects. On every Tivoli Workload Scheduler engine (such as a domain manager or fault-tolerant agent), you can issue conman commands for the plan and composer commands for the database. Tivoli Workload Scheduler security files are used to ensure that the right people have the right access to objects in Tivoli Workload Scheduler.

Security files can be created or modified on every local Tivoli Workload Scheduler workstation, and they can be different from workstation to workstation.

We suggest having a common security strategy for all Tivoli Workload Scheduler workstations in your IBM Tivoli Workload Scheduler network (and end-to-end network). This way, the security file can be placed centrally and changes are made only in the central security file. If the security file has been changed, it is simply distributed to all Tivoli Workload Scheduler workstations in your IBM Tivoli Workload Scheduler network.
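A minimal sketch of such a distribution is shown below. It assumes that the master security definitions are kept in text form in /repository/Security.conf, that Tivoli Workload Scheduler is installed in the same path on every workstation, and that the tws user can log in to each workstation; makesec is then run locally on each workstation to compile and activate the file.

#!/bin/sh
MASTER_SECURITY=/repository/Security.conf   # assumed central, editable security file
TWSHOME=/opt/tws                            # assumed TWS installation directory

for HOST in f101 f102 f200                  # assumed workstation host names
do
    # Copy the text security file to the workstation ...
    scp $MASTER_SECURITY tws@$HOST:$TWSHOME/Security.conf
    # ... and compile and install it as the active security file.
    ssh tws@$HOST "$TWSHOME/bin/makesec $TWSHOME/Security.conf"
done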

5.7 Security on fault-tolerant agents

In this section, we offer an overview of how security is implemented on Tivoli Workload Scheduler fault-tolerant agents (including domain managers). For more details, see the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273.

Figure 5-18 shows the security model on Tivoli Workload Scheduler fault-tolerant agents. When a user attempts to display a list of defined jobs, submit a new job stream, add a new resource, or any other operation related to the Tivoli Workload Scheduler plan or databases, Tivoli Workload Scheduler performs a check to verify that the user is authorized to perform that action.

Figure 5-18 Sample security setup

Tivoli Workload Scheduler users have different roles within the organization. The Tivoli Workload Scheduler security model you implement should reflect these roles. You can think of the different groups of users as nested boxes, as in Figure 5-18 on page 324. The largest box represents the highest access, granted to only the Tivoli Workload Scheduler user and the root user. The smaller boxes represent more restricted roles, with correspondingly restricted access. In the figure, the nested roles are the TWS and root users (full access to all areas), the operations group (can manage the whole workload but cannot create job streams and has no root access), the applications manager (can document jobs and schedules for the entire group and manage some production), the application user (can document own jobs and schedules), and the general user (display access only). Each group that is represented by a box in the figure has a corresponding USER stanza in the security file. Tivoli Workload Scheduler programs and commands read the security file to determine whether the user has the access that is required to perform an action.

5.7.1 The security file

Each workstation in a Tivoli Workload Scheduler network has its own security file. These files can be maintained independently on each workstation, or you can keep a single centralized security file on the master and copy it periodically to the other workstations in the network.

At installation time, a default security file is created that allows unrestricted access to only the Tivoli Workload Scheduler user (and, on UNIX workstations, the root user). If the security file is accidentally deleted, the root user can generate a new one.

If you have one security file for a network of agents, you may wish to make a distinction between the root user on a fault-tolerant agent and the root user on the master domain manager. For example, you can restrict local users to performing operations that affect only the local workstation, while permitting the master root user to perform operations that affect any workstation in the network.

A template file named TWShome/config/Security is provided with the software. During installation, a copy of the template is installed as TWShome/Security, and a compiled copy is installed as TWShome/../unison/Security.

Security file stanzas

The security file is divided into one or more stanzas. Each stanza limits access at three different levels:

� User attributes appear between the USER and BEGIN statements and determine whether a stanza applies to the user attempting to perform an action.

� Object attributes are listed, one object per line, between the BEGIN and END statements. Object attributes determine whether an object line in the stanza matches the object the user is attempting to access.

� Access rights appear to the right of each object listed, after the ACCESS statement. Access rights are the specific actions that the user is allowed to take on the object.

The steps of a security check

The steps of a security check reflect the three levels listed above:

1. Identify the user who is attempting to perform an action.
2. Determine the type of object being accessed.
3. Determine whether the requested access should be granted to that object.

Step 1: Identify the user

When a user attempts to perform any Tivoli Workload Scheduler action, the security file is searched from top to bottom to find a stanza whose user attributes match the user attempting to perform the action. If no match is found in the first stanza, the user attributes of the next stanza are searched. If a stanza is found whose user attributes match that user, that stanza is selected for the next part of the security check. If no stanza in the security file has user attributes that match the user, access is denied.

Step 2: Determine the type of object being accessed

After the user has been identified, the stanza that applies to that user is searched, top-down, for an object attribute that matches the type of object the user is trying to access. Only that particular stanza (between the BEGIN and END statements) is searched for a matching object attribute. If no matching object attribute is found, access is denied.

Step 3: Determine whether access is granted to that object

If an object attribute is located that corresponds to the object that the user is attempting to access, the access rights following the ACCESS statement on that line in the file are searched for the action that the user is attempting to perform. If this access right is found, then access is granted. If the access right is not found on this line, then the rest of the stanza is searched for other object attributes (other lines) of the same type, and this step is repeated for each of these.

Figure 5-19 on page 327 illustrates the steps of the security check algorithm.

Important: Because only a subset of conman commands is available on FTAs in an end-to-end environment, some of the ACCESS rights that would be applicable in an ordinary non-end-to-end IBM Tivoli Workload Scheduler network will not be applicable in an end-to-end network.

Figure 5-19 Example of a Tivoli Workload Scheduler security check (user johns issues conman 'release mars#weekly.cleanup'; the check finds the matching USER stanza, then the matching JOB object line, and then the RELEASE access right)

5.7.2 Sample security file

Here are some things to note about the security file stanza (Example 5-19 on page 328):

� mastersm is an arbitrarily chosen name for this group of users.

� The example security stanza in Example 5-19 would match a user that logs on to the master (or to the Framework via JSC), where the user name (or TMF Administrator name) is maestro, root, or Root_london-region.

� These users have full access to jobs, job streams, resources, prompts, files, calendars, and workstations.

� The users have full access to all parameters except those whose names begin with r (parameter name=@ ~ name=r@ access=@).


� For NT user definitions (userobj), the users have full access to objects on all workstations in the network.

Example 5-19 Sample security file

###########################################################
# Sample Security File
###########################################################
# (1) APPLIES TO MAESTRO OR ROOT USERS LOGGED IN ON THE
#     MASTER DOMAIN MANAGER OR FRAMEWORK.
user mastersm cpu=$master,$framework +logon=maestro,root,Root_london-region
begin
# OBJECT    ATTRIBUTES                ACCESS CAPABILITIES
#---------------------------------------------------------
job                                   access=@
schedule                              access=@
resource                              access=@
prompt                                access=@
file                                  access=@
calendar                              access=@
cpu                                   access=@
parameter   name=@ ~ name=r@          access=@
userobj     cpu=@ + logon=@           access=@
end

Creating the security file

To create user definitions, edit the template file TWShome/Security. Do not modify the original template in TWShome/config/Security. Then use the makesec command to compile and install a new operational security file. After it is installed, you can make further modifications by creating an editable copy of the operational file with the dumpsec command.

The dumpsec command

The dumpsec command takes the security file, generates a text version of it, and sends that to stdout. The user must have display access to the security file.

� Synopsis:

dumpsec -v | -u
dumpsec > security-file

� Description:

If no arguments are specified, the operational security file (../unison/Security) is dumped. To create an editable copy of a security file, redirect the output of the command to another file, as shown in “Example of dumpsec and makesec” on page 330.

� Arguments

– -v displays command version information only.

– -u displays command usage information only.

– security-file specifies the name of the security file to dump.

Figure 5-20 The dumpsec command

The makesec command

The makesec command essentially does the opposite of what the dumpsec command does. The makesec command takes a text security file, checks its syntax, compiles it into a binary security file, and installs the new binary file as the active security file. Changes to the security file take effect when Tivoli Workload Scheduler is stopped and restarted. Affected programs are:

� Conman
� Composer
� Tivoli Workload Scheduler connectors

Simply exit the programs. The next time they are run, the new security definitions will be recognized. Tivoli Workload Scheduler connectors must be stopped using the wmaeutil command before changes to the security file will take effect for users of JSC. The connectors will automatically restart as needed.

The user must have modify access to the security file.

� Synopsis:

makesec -v | -u
makesec [-verify] in-file

Note: On Windows NT, the connector processes must be stopped (using the wmaeutil command) before the makesec command will work correctly.

� Description:

The makesec command compiles the specified file and installs it as the operational security file (../unison/Security). If the -verify argument is specified, the file is checked for correct syntax, but it is not compiled and installed.

� Arguments:

– -v displays command version information only.

– -u displays command usage information only.

– -verify checks the syntax of the user definitions in the in-file only. The file is not installed as the security file. (Syntax checking is performed automatically when the security file is installed.)

– in-file specifies the name of a file or set of files containing user definitions. A file name expansion pattern is permitted.

Example of dumpsec and makesec

Example 5-20 creates an editable copy of the active security file in a file named Security.conf, modifies the user definitions with a text editor, then compiles Security.conf and replaces the active security file.

Example 5-20 Using dumpsec and makesec

dumpsec > Security.conf
vi Security.conf
(Here you would make any required modifications to the Security.conf file)
makesec Security.conf

Configuring Tivoli Workload Scheduler security for the Tivoli Administrator

In order to use the Job Scheduling Console on a master or on an FTA, the Tivoli Administrator user (or users) must be defined in the security file of that master or FTA. The $framework variable can be used as a user attribute in place of a specific workstation. This indicates a user logging in via the Job Scheduling Console.

Note: Add the Tivoli Administrator to the Tivoli Workload Scheduler security file after you have installed the Tivoli Management Framework and Tivoli Workload Scheduler connector.

5.8 End-to-end scheduling tips and tricks

In this section, we provide some tips, tricks, and troubleshooting suggestions for the end-to-end scheduling environment.

5.8.1 File dependencies in the end-to-end environment

Use the filewatch.sh program that is delivered with Tivoli Workload Scheduler. A description of its usage and parameters can be found at the top of the filewatch.sh program itself.

In an ordinary (non-end-to-end) IBM Tivoli Workload Scheduler network — one in which the MDM is a UNIX or Windows workstation — it is possible to create a file dependency on a job or job stream; this is not possible in an end-to-end network because the controlling system is Tivoli Workload Scheduler for z/OS.

It is very common to use files as triggers or predecessors to job flows in a distributed environment.

Tivoli Workload Scheduler 8.2 includes TWSHOME/bin/filewatch.sh, a sample script that can be used to check for the existence of files. You can configure the script to check periodically for the file, just as with a real Tivoli Workload Scheduler file dependency. By defining a job that runs filewatch.sh, you can implement a file dependency.

To learn more about filewatch and how to use it, read the detailed description in the comments at the top of the filewatch.sh script. The options of the script are:

-kd (mandatory) The options to pass to the test command. See the man page for “test” for a list of allowed values.

-fl (mandatory) Path name of the file (or directory) to look for.

-dl (mandatory) The deadline period (in seconds); cannot be used together with -nd.

-nd (mandatory) Suppress the deadline; cannot be used together with -dl.

-int (mandatory) The search interval period (in seconds).

-rc (optional) The return code that the script will exit with if the deadline is reached without finding the file (ignored if -nd is used).

-tsk (optional) The path of the task launched if the file is found.

Here are two filewatch examples:

� In this example, the script checks for file /tmp/filew01 every 15 seconds indefinitely:

JOBSCR('/tws/bin/filewatch.sh -kd f -fl /tmp/filew01 -int 15 -nd')

� In this example, the script checks for file /tmp/filew02 every 15 seconds for 60 seconds. If the file is not there 60 seconds after the check has started, the script ends with return code 12.

JOBSCR('/tws/bin/filewatch.sh -kd f -fl /tmp/filew02 -int 15 -dl 60 -rc 12')

Figure 5-21 shows how the filewatch script might be used as a predecessor to the job that will process or work with the file being “watched.” This way you can make sure that the file to be processed is there before running the job that will process the file.

Figure 5-21 How to use filewatch.sh to set up a file dependency

5.8.2 Handling offline or unlinked workstations

Tip: If the workstation does not link as it should, the cause can be that the writer process has not initiated correctly or that the run number for the Symphony file on the fault-tolerant workstation is not the same as the run number on the master. If you select the unlinked workstation and right-click it, a pop-up menu opens as shown in Figure 5-22 on page 333. Click Link to try to link the workstation.

Figure 5-22 Context menu for workstation linking

You can check the Symphony run number and the Symphony status in the legacy ISPF using option 6.6.

Figure 5-23 Pop-up window to set status of workstation

Tip: If the workstation is Not Available/Offline, the cause might be that the mailman, batchman, and jobman processes are not started on the fault-tolerant workstation. You can right-click the workstation to open the context menu shown in Figure 5-22, then click Set Status. This opens a new window (Figure 5-23), in which you can try to activate the workstation by clicking Active. This action attempts to start the mailman, batchman, and jobman processes on the fault-tolerant workstation by issuing a conman start command on the agent.

5.8.3 Using dummy jobs

Because it is not possible to add a dependency at the job stream level in Tivoli Workload Scheduler for z/OS (as it is in the IBM Tivoli Workload Scheduler distributed product), dummy start and dummy end general jobs are a workaround for this Tivoli Workload Scheduler for z/OS limitation. When using dummy start and dummy end general jobs, you can always uniquely identify the start point and the end point of the jobs in the job stream.

5.8.4 Placing job scripts in the same directories on FTAs

The SCRPTLIB members can be reused in several job streams and on different fault-tolerant workstations of the same type (such as UNIX or Windows). For example, if a job (script) is scheduled on all of your UNIX systems, you can create one SCRPTLIB member for this job and define it in several job streams on the associated fault-tolerant workstations, though this requires that the script is placed in the same directory on all of your systems. This is another good reason to have all job scripts placed in the same directories across your systems.
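For example, a single SCRPTLIB member along the lines of the following sketch could be referenced from job streams on several UNIX fault-tolerant workstations, provided the script exists in the same path on each of them. The member content, script path, and user are assumptions for this illustration; the JOBREC syntax is the same as in the examples later in this section.

/* UXCLEAN - daily cleanup job, reusable on all UNIX FTWs */
JOBREC JOBSCR('/tivoli/TWS/scripts/daily_cleanup.sh')
       JOBUSR(tws)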

5.8.5 Common errors for jobs on fault-tolerant workstations

This section discusses two of the most common errors for jobs on fault-tolerant workstations.

Handling errors in script definitions

When adding a job stream to the current plan in Tivoli Workload Scheduler for z/OS (using JSC or option 5.1 from legacy ISPF), you may see this error message:

EQQM071E A JOB definition referenced by this occurrence is wrong

This message indicates that there is an error in the definition of one or more jobs in the job stream and that the job stream has not been added to the current plan. If you look in the EQQMLOG for the Tivoli Workload Scheduler for z/OS engine, you will find messages similar to Example 5-21.

Example 5-21 EQQMLOG messages

EQQM992E WRONG JOB DEFINITION FOR THE FOLLOWING OCCURRENCE:
EQQZ068E JOBRC IS AN UNKNOWN COMMAND AND WILL NOT BE PROCESSED
EQQZ068I FURTHER STATEMENT PROCESSING IS STOPPED

In our example, the F100J011 member in EQQSCLIB looks like:

JOBRC JOBSCR('/tivoli/TWS/scripts/japjob1')
      JOBUSR(tws-e)

Note the typo error: JOBRC should be JOBREC. The solution to this problem is simply to correct the error and try to add the job stream again. The job stream must be added to the Tivoli Workload Scheduler for z/OS plan again, because the job stream was not added the first time (due to the typo).

If an FTA job is defined in Tivoli Workload Scheduler for z/OS but the corresponding JOBREC is missing, the job will be added to the Symphony file but it will be set to priority 0 and state FAIL. This combination of priority and state is not likely to occur normally, so if you see a job like this, you can assume that the problem is that the JOBREC was missing when the Symphony file was built.

Another common error is a misspelled name for the script or the user (in the JOBREC, JOBSCR, or JOBUSR definition) in the FTW job.

Say we have the JOBREC definition in Example 5-22.

Example 5-22 Typo in JOBREC

/* Definition for F100J010 job to be executed on F100 machine */
/*                                                            */
JOBREC JOBSCR('/tivoli/TWS/scripts/jabjob1')
       JOBUSR(tws-e)

Here the typo error is in the name of the script. It should be japjob1 instead of jabjob1. This typo will result in an error with the error code FAIL when the job is run. The error will not be caught by the plan programs or when you add the job stream to the plan in Tivoli Workload Scheduler for z/OS.

It is easy to correct this error using the following steps:

1. Correct the typo in the member in the SCRPTLIB.

2. Add the same job stream again to the plan in Tivoli Workload Scheduler for z/OS.

This way of handling typo errors in the JOBREC definitions is actually the same as performing a rerun on a Tivoli Workload Scheduler master. The job stream must be re-added to the Tivoli Workload Scheduler for z/OS plan to have Tivoli Workload Scheduler for z/OS send the new JOBREC definition to the fault-tolerant workstation agent. Remember that when you extend or replan the Tivoli Workload Scheduler for z/OS plan, the JOBREC definition is built into the Symphony file. By re-adding the job stream, we ask Tivoli Workload Scheduler for z/OS to send the re-added job stream, including the new JOBREC definition, to the agent.

Note: You will get similar error messages in the EQQMLOG for the plan programs if the job stream is added during plan extension. The error messages that are issued by the plan program are:

EQQZ068E JOBRC IS AN UNKNOWN COMMAND AND WILL NOT BE PROCESSED
EQQZ068I FURTHER STATEMENT PROCESSING IS STOPPED
EQQ3077W BAD MEMBER F100J011 CONTENTS IN EQQSCLIB

Note that the plan extension program will end with return code 0.

Handling the wrong password definition for Windows FTW

If you have defined the wrong password for a Windows user ID in the USRREC topology definition, or if the password has been changed on the Windows machine, the FTW job will end with an error and the error code FAIL.

To solve this problem, you have two options:

� Change the wrong USRREC definition and redistribute the Symphony file (using option 3.5 from legacy ISPF).

This approach can be disruptive if you are running a huge batch load on FTWs and are in the middle of a batch peak.

� Log into the first-level domain manager (the domain manager directly connected to the Tivoli Workload Scheduler for z/OS server; if there is more than one first-level domain manager, log on the first-level domain manager that is in the hierarchy of the FTW), then alter the password either using conman or using a JSC instance pointing to the first-level domain manager. When you have changed the password, simply rerun the job that was in error.

The USRREC definition should still be corrected so it will take effect the next time the Symphony file is created.

5.8.6 Problems with port numbers

There are two different parameters named PORTNUMBER: one in the SERVOPTS statement, which is used for the JSC and the OPC Connector, and one in the TOPOLOGY parameters, which is used by the E2E server to communicate with the distributed FTAs.

The two PORTNUMBER parameters must have different values. The localopts file for the FTA has a parameter named nm port, which is the port on which netman listens. The nm port value must match the CPUREC CPUTCPIP value for each FTA. There is no requirement that CPUTCPIP must match the TOPOLOGY PORTNUMBER. The values of the TOPOLOGY PORTNUMBER and HOSTNAME parameters are embedded in the Symphony file, which enables the FTA to know how to communicate back to OPCMASTER. The next sections illustrate different ways in which setting the values for PORTNUMBER and CPUTCPIP incorrectly can cause problems in the E2E environment.
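To make the relationship concrete, a consistent set of definitions might look like the following sketch. The workstation name FA01 and the port values are assumptions chosen for illustration; the point is only that CPUTCPIP must equal the nm port value in that FTA's localopts file, while the TOPOLOGY PORTNUMBER is the separate port that the end-to-end server listens on and that is embedded in the Symphony file.

CPUREC   CPUNAME(FA01)          /* Example FTA (assumed name)               */
         CPUTCPIP(31111)        /* Must match "nm port" in FA01's localopts */

TOPOLOGY PORTNUMBER(31182)      /* Port used by the E2E server itself       */

The corresponding line in the localopts file on FA01 would then be:

nm port =31111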

CPUTCPIP not the same as NM PORT

The value for CPUTCPIP in the CPUREC parameter for an FTA should always be set to the same port that the FTA has defined as nm port in localopts.

We did some tests to see what errors occur if the wrong value is used for CPUTCPIP. In the first test, nm port for the domain manager (DM) HR82 was 31111 but CPUTCPIP was set to 31122, a value that was not used by any FTA on our network. The current plan (CP) was extended to distribute a Symphony file with the wrong CPUTCPIP in place. The DM failed to link and the messages in Example 5-23 were seen in the USS stdlist TWSMERGE log.

Example 5-23 Excerpt from TWSMERGE log

MAILMAN:+ AWSBCV082I Cpu HR82, Message: AWSDEB003I Writing socket: EDC8128I Connection refused.
MAILMAN:+ AWSBCV035W WARNING: Linking to HR82 failed, will write to POBOX.

Therefore, if the DM will not link and the messages shown above are seen in TWSMERGE, the nm port value should be checked and compared to the CPUTCPIP value. In this case, correcting the CPUTCPIP value and running a Symphony Renew job eliminated the problem.

We did another test with the same DM, this time setting CPUTCPIP to 31113.

Example 5-24 Setting CPUTCPIP to 31113

CPUREC CPUNAME(HR82) CPUTCPIP(31113)

The TOPOLOGY PORTNUMBER was also set to 31113, its normal value:

TOPOLOGY PORTNUMBER(31113)

After cycling the E2E Server and running a CP EXTEND, the DM and all FTAs were LINKED and ACTIVE, which was not expected (Example 5-25).

Example 5-25 Messages showing DM and all the FTAs are LINKED and ACTIVE

EQQMWSLL -------- MODIFYING WORK STATIONS IN THE CURRENT PLAN    Row 1 to 8 of 8

Enter the row command S to select a work station for modification, or
I to browse system information for the destination.

Row  Work station                 L S T R  Completed    Active  Remaining
cmd  name text                             oper   dur.    oper  oper   dur.
'    HR82 PDM on HORRIBLE         L A C A     4   0.00       0    13   0.05
'    OP82 MVS XAGENT on HORRIBLE  L A C A     0   0.00       0     0   0.00
'    R3X1 SAP XAGENT on HORRIBLE  L A C A     0   0.00       0     0   0

How could the DM be ACTIVE if the CPUTCPIP value was intentionally set to the wrong value? We found that there was an FTA on the network that was set up with nm port=31113. It was actually an MDM (master domain manager) for a Tivoli Workload Scheduler 8.1 distributed-only (not E2E) environment, so our Version 8.2 E2E environment connected to the Version 8.1 MDM as if it were HR82. This illustrates that extreme care must be taken to code the CPUTCPIP values correctly, especially if there are multiple Tivoli Workload Scheduler environments present (for example, a test system and a production system).

The localopts nm ipvalidate parameter can be used to prevent the Symphony file from being overwritten when incorrect parameters are set up. If the following is specified in localopts:

nm ipvalidate=full

the connection is not allowed if IP validation fails. However, if SSL is active, the recommendation is to use the following localopts parameter:

nm ipvalidate=none

PORTNUMBER set to PORT reserved for another task

We wanted to test the effect of setting the TOPOLOGY PORTNUMBER parameter to a port that is reserved for use by another task. The data set specified by the PROFILE DD statement in the TCP/IP procedure contained the entry shown in Example 5-26.

Example 5-26 Port reservation in the TCP/IP profile data set

PORT 3000 TCP CICSTCP ; CICS Socket

After setting PORTNUMBER in TOPOLOGY to 3000 and running a CP EXTEND to create a new Symphony file, there were no obvious indications in the messages that there was a problem with the PORTNUMBER setting. However, the following message appeared in the NETMAN log in USS stdlist/logs:

NETMAN:Listening on 3000 timeout 10 started Sun Aug 1 21:01:57 2004

These messages then occurred repeatedly in the NETMAN log (Example 5-27).

Example 5-27 Excerpt from the NETMAN log

NETMAN:+ AWSEDW020E Error opening IPC:
NETMAN:AWSDEB001I Getting a new socket: 7

If these messages are seen and the DM will not link, the following command can be issued to determine whether the problem is a reserved TCP/IP port:

TSO NETSTAT PORTLIST

In Example 5-28, the output shows that the PORTNUMBER port (3000) is reserved for another task.

Example 5-28 TSO NETSTAT PORTLIST output

EZZ2350I MVS TCP/IP NETSTAT CS V1R5       TCPIP Name: TCPIP
EZZ2795I Port# Prot User     Flags    Range        IP Address
EZZ2796I ----- ---- ----     -----    -----        ----------
EZZ2797I 03000 TCP  CICSTCP  DA

PORTNUMBER set to PORT already in use

PORTNUMBER in TOPOLOGY was set to 424, which was already in use as the TCPIPPORT by the controller. Everything worked correctly, but when the E2E Server was shut down, the messages in Example 5-29 occurred in the controller EQQMLOG every 10 minutes.

Example 5-29 Excerpt from the controller EQQMLOG

08/01 18.48.49 EQQTT11E AN UNDEFINED TRACKER AT IP ADDRESS 9.48.204.143 ATTEMPTED TO CONNECT TO THE
08/01 18.48.49 EQQTT11I CONTROLLER. THE REQUEST IS NOT ACCEPTED
EQQMA11E Cannot allocate connection
EQQMA17E TCP/IP socket I/O error during Connect() call for
"SocketImpl<Binding=/192.227.118.43,port=31111,localport=32799>", failed
with error: 146=Connection refused

When the E2E Server was up, it handled port 424. When the E2E Server was down, port 424 was handled by the controller task (which still had TCPIPPORT set to the default value of 424). Because there were some TCP/IP connected trackers defined on that system, message EQQTT11E was issued when the FTA IP addresses did not match the TCP/IP addresses in the ROUTOPTS parameter.

TOPOLOGY PORTNUMBER set the same as SERVOPTS PORTNUMBER

The PORTNUMBER in SERVOPTS is used for JSC and OPC Connector. If the TOPOLOGY PORTNUMBER is set to the same value as the SERVOPTS PORTNUMBER, E2E processing will still work, but errors will occur when starting the OPC Connector. We did a test with the parmlib member for the E2E Server containing the values shown in Example 5-30 on page 340.

Example 5-30 TOPOLOGY and SERVOPTS PORTNUMBER are the same

SERVOPTS SUBSYS(O82C) PROTOCOL(E2E,JSC) PORTNUMBER(446)

TOPOLOGY PORTNUMBER(446)

The OPC Connector got the error messages shown in Example 5-31 and the JSC would not function.

Example 5-31 Error message for the OPC Connector

GJS0005E Cannot load workstation list. Reason:
EQQMA11E Cannot allocate connection
EQQMA17E TCP/IP socket I/O error during Recv() call for
"SocketImpl<Binding=dns name/ip address,port=446,localport=4699>" failed
with error: 10054=Connection reset by peer

For the OPC Connector and JSC to work again, it was necessary to change the TOPOLOGY PORTNUMBER to a different value (not equal to the SERVOPTS PORTNUMBER) and cycle the E2E Server task. Note that this problem could also occur if the JSC and E2E PROTOCOL functions were implemented in separate server tasks (one task E2E only, one task JSC only) and the two PORTNUMBER values were set to the same value.
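A corrected sketch of the server parmlib member follows; the TOPOLOGY port number is an example value, the point being simply that the two PORTNUMBER values differ:

  SERVOPTS SUBSYS(O82C)
           PROTOCOL(E2E,JSC)
           PORTNUMBER(446)       /* JSC and OPC Connector port    */
  TOPOLOGY PORTNUMBER(31111)     /* E2E server port - different   */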

5.8.7 Cannot switch to new Symphony file (EQQPT52E) messages

The EQQPT52E message, with the text shown in Example 5-32, can be difficult to troubleshoot because there are several possible causes.

Example 5-32 EQQPT52E message

EQQPT52E Cannot switch to the new symphony file:run numbers of Symphony (x) and CP (y) aren't matching

The x and y in the example message would be replaced by the actual run number values. Sometimes the problem is resolved by running a Symphony Renew or CP REPLAN (or CP EXTEND) job. However, there are some other things to check if this does not correct the problem:

- The EQQPT52E message can be caused if new FTA workstations are added through the Tivoli Workload Scheduler for z/OS dialogs but the TOPOLOGY parameters are not updated with the new CPUREC information. In this case, adding the TOPOLOGY information and running a CP batch job should resolve the problem.

- EQQPT52E can also occur if there are problems with the user ID used to run the CP batch job or the E2E Server task. One clue that a user ID problem is involved is a file left in the WRKDIR after the CP batch job completes whose name is Sym followed by the user ID that the CP batch job runs under; a quick USS check is sketched below. For example, if the CP EXTEND job runs under ID TWSRES9, the file in the WRKDIR would be named SymTWSRES9. If security were set up correctly, the SymTWSRES9 file would have been renamed to Symnew before the CP batch job ended.
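A quick USS check for this symptom is sketched below; /tws/wrkdir stands for your own WRKDIR value:

  cd /tws/wrkdir     # the work directory defined for the E2E server
  ls -l Sym*         # a leftover file such as SymTWSRES9 indicates a user ID or permission problem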

If the cause of the EQQPT52E still cannot be determined, add the DIAGNOSE statements in Example 5-33 to the parm file indicated.

Example 5-33 DIAGNOSE statements added

(1) CONTROLLER:        DIAGNOSE NMMFLAGS('00003000')
(2) BATCH (CP EXTEND): DIAGNOSE PLANMGRFLAGS('00040000')
(3) SERVER:            DIAGNOSE TPLGYFLAGS(X'181F0000')

Then collect the following documentation for analysis:

- Controller and server EQQMLOGs
- Output of the CP EXTEND (EQQDNTOP) job
- EQQTWSIN and EQQTWSOU files
- USS stdlist/logs directory (or a tar backup of the entire WRKDIR; see the sketch below)
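If the complete work directory is needed, a tar backup can be taken from a USS shell as sketched below; the paths are examples only:

  cd /tws/wrkdir                      # the E2E server work directory (WRKDIR)
  tar -cvf /tmp/wrkdir_backup.tar .   # archive the whole directory, including stdlist/logs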

Appendix A. Connector reference

In this appendix, we describe the commands related to the IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS connectors. We also describe some Tivoli Management Framework commands related to the connectors.

Setting the Tivoli environment

To use the commands described in this appendix, you must first set the Tivoli environment. To do this, log in as root or administrator, then enter one of the commands shown in Table A-1. A short usage sketch follows the table.

Table A-1   Setting the Tivoli environment

  Shell           Command to set the Tivoli environment
  sh or ksh       . /etc/Tivoli/setup_env.sh
  csh             source /etc/Tivoli/setup_env.csh
  DOS (Windows)   %SYSTEMROOT%\system32\drivers\etc\Tivoli\setup_env.cmd
  bash            . /etc/Tivoli/setup_env.sh
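For example, on a UNIX managed node the environment can be set and then checked as follows; the odadmin check is optional and assumes the Framework object dispatcher (oserv) is running:

  . /etc/Tivoli/setup_env.sh     # set the Tivoli environment (sh, ksh, or bash)
  odadmin odlist                 # list the object dispatchers to confirm that the Framework responds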

Authorization roles required

To manage connector instances, you must be logged in as a Tivoli administrator with one or more of the roles listed in Table A-2.

Table A-2   Authorization roles required for working with connector instances

  An administrator with this role...   Can perform these actions
  user                                 Use the instance, view instance settings
  admin, senior, or super              Use the instance, view instance settings, create and remove
                                       instances, change instance settings, start and stop instances

Working with Tivoli Workload Scheduler for z/OS connector instances

This section describes how to use the wopcconn command to create and manage Tivoli Workload Scheduler for z/OS connector instances.

Note: To control access to the scheduler, the TCP/IP server associates each Tivoli administrator with a Resource Access Control Facility (RACF) user. For this reason, a Tivoli administrator should be defined for every RACF user. For additional information, refer to Tivoli Workload Scheduler V8R1 for z/OS Customization and Tuning, SH19-4544.

Much of the following information is excerpted from the IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257.

The wopcconn command

Use the wopcconn command to create, remove, and manage Tivoli Workload Scheduler for z/OS connector instances. This program is installed with the connector. Table A-3 describes how to use wopcconn from the command line to manage connector instances.

Table A-3   Managing Tivoli Workload Scheduler for z/OS connector instances

  If you want to...                    Use this syntax
  Create an instance                   wopcconn -create [-h node] -e instance_name -a address -p port
  Stop an instance                     wopcconn -stop -e instance_name | -o object_id
  Start an instance                    wopcconn -start -e instance_name | -o object_id
  Restart an instance                  wopcconn -restart -e instance_name | -o object_id
  Remove an instance                   wopcconn -remove -e instance_name | -o object_id
  View the settings of an instance     wopcconn -view -e instance_name | -o object_id
  Change the settings of an instance   wopcconn -set -e instance_name | -o object_id [-n new_name]
                                       [-a address] [-p port] [-t trace_level] [-l trace_length]

In this syntax:

- node is the name or the object ID (OID) of the managed node on which you are creating the instance. The TMR server name is the default.

- instance_name is the name of the instance.

- object_id is the object ID of the instance.

- new_name is the new name for the instance.

- address is the IP address or host name of the z/OS system where the Tivoli Workload Scheduler for z/OS subsystem to which you want to connect is installed.

- port is the port number of the OPC TCP/IP server to which the connector must connect.

- trace_level is the trace detail level, from 0 to 5.

- trace_length is the maximum length of the trace file.

Note: Before you can run wopcconn, you must set the Tivoli environment. See “Setting the Tivoli environment” on page 344.


Example

We used a z/OS system with the host name twscjsc. On this system, the Tivoli Workload Scheduler for z/OS TCP/IP server listens on port 5000. Yarmouth is the name of the TMR managed node where we installed the OPC connector. We called this new connector instance twsc.

The following command creates the instance:

wopcconn -create -h yarmouth -e twsc -a twscjsc -p 5000
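Once the instance exists, it can be verified or adjusted with the syntax shown in Table A-3; the trace values below are examples only:

  wopcconn -view -e twsc                   # display the settings of the twsc instance
  wopcconn -set  -e twsc -t 3 -l 1048576   # example: raise the trace level and trace file length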

You can also run the wopcconn command in interactive mode. To do this, perform the following steps:

1. At the command line, enter wopcconn with no arguments.

2. Select choice number 1 in the first menu.

Example A-1   Running wopcconn in interactive mode

 Name         : TWSC
 Object id    : 1234799117.5.38#OPC::Engine#
 Managed node : yarmouth
 Status       : Active
 OPC version  : 2.3.0

 2. Name                    : TWSC
 3. IP Address or Hostname  : TWSCJSC
 4. IP portnumber           : 5000
 5. Data Compression        : Yes
 6. Trace Length            : 524288
 7. Trace Level             : 0
 0. Exit

Working with Tivoli Workload Scheduler connector instances

This section describes how to use the wtwsconn.sh command to create and manage Tivoli Workload Scheduler connector instances.

For more information, refer to IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257.

The wtwsconn.sh command

Use the wtwsconn.sh utility to create, remove, and manage Tivoli Workload Scheduler connector instances. This program is installed with the connector. Table 5-2 describes how to use wtwsconn.sh from the command line to manage connector instances.

Table 5-2   Managing Tivoli Workload Scheduler connector instances

  If you want to...                        Use this syntax
  Create an instance                       wtwsconn.sh -create [-h node] -n instance_name -t twsdir
  Stop an instance                         wtwsconn.sh -stop -n instance | -t twsdir
  Remove an instance                       wtwsconn.sh -remove -n instance_name
  View the settings of an instance         wtwsconn.sh -view -n instance_name
  Change the Tivoli Workload Scheduler     wtwsconn.sh -set -n instance_name -t twsdir
  home directory of an instance

In this syntax:

- node specifies the node where the instance is created. If not specified, it defaults to the node from which the script is run.

- instance is the name of the new instance. This name identifies the engine node in the Job Scheduling tree of the Job Scheduling Console. The name must be unique within the Tivoli Managed Region.

- twsdir specifies the home directory of the Tivoli Workload Scheduler engine that is associated with the connector instance.

Example

Yarmouth is the name of the TMR managed node where we installed the Tivoli Workload Scheduler connector; the Tivoli Workload Scheduler engine on this node is installed in /tivoli/TWS/. We called this new connector instance Yarmouth-A.

The following command creates the instance:

wtwsconn.sh -create -h yarmouth -n Yarmouth-A -t /tivoli/TWS/
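The instance can then be checked, or pointed at a different Tivoli Workload Scheduler home directory, with the same utility; a sketch using the values from this example:

  wtwsconn.sh -view -n Yarmouth-A                    # display the settings of the instance
  wtwsconn.sh -set  -n Yarmouth-A -t /tivoli/TWS/    # change the TWS home directory it points to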

Note: Before you can run wtwsconn.sh, you must set the Tivoli environment. See “Setting the Tivoli environment” on page 344.

Useful Tivoli Framework commands

These commands can be used to check your Framework environment. Refer to the Tivoli Framework 3.7.1 Reference Manual, SC31-8434, for more details.

wlookup -ar ProductInfo lists the products that are installed on the Tivoli server.

wlookup -ar PatchInfo lists the patches that are installed on the Tivoli server.

wlookup -ar MaestroEngine lists the instances of this class type (same for the other classes).

For example:

barb 1318267480.2.19#Maestro::Engine#

The number before the first period (.) is the region number and the second number is the managed node ID (1 is the Tivoli server). In a multi-Tivoli environment, you can determine where a particular instance is installed by looking at this number because all Tivoli regions have a unique ID.

wuninst -list lists all products that can be un-installed.

wuninst {ProductName} -list lists the managed nodes where a product is installed.

wmaeutil Maestro -Version lists the versions of the installed engine, database, and plan.

wmaeutil Maestro -dbinfo lists information about the database and the plan.

wmaeutil Maestro -gethome lists the installation directory of the connector.
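Taken together, these commands form a quick health check that can be run after setting the Tivoli environment; a sketch:

  wlookup -ar ProductInfo      # products installed in this TMR
  wlookup -ar MaestroEngine    # connector engine instances and the nodes that host them
  wmaeutil Maestro -Version    # versions of the installed engine, database, and plan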

Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks

For information on ordering these publications, see “How to get IBM Redbooks” on page 350. Note that some of the documents referenced here may be available in softcopy only.

- End-to-End Scheduling with OPC and TWS Mainframe and Distributed Environment, SG24-6013

- End-to-End Scheduling with Tivoli Workload Scheduler 8.1, SG24-6022

- High Availability Scenarios with IBM Tivoli Workload Scheduler and IBM Tivoli Framework, SG24-6632

- IBM Tivoli Workload Scheduler Version 8.2: New Features and Best Practices, SG24-6628

- Implementing TWS Extended Agent for Tivoli Storage Manager, GC24-6030

- TCP/IP in a Sysplex, SG24-5235

Other publications

These publications are also relevant as further information sources:

- IBM Tivoli Management Framework 4.1 User’s Guide, GC32-0805

- IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes, Feature level 1.3, SC32-1258

- IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257

- IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273

- IBM Tivoli Workload Scheduler Reference Guide, SC32-1274

- IBM Tivoli Workload Scheduler Release Notes Version 8.2 (Maintenance Release April 2004), SC32-1277

- IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265

- IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264

- IBM Tivoli Workload Scheduler for z/OS Managing the Workload, SC32-1263

- IBM Tivoli Workload Scheduler for z/OS Messages and Codes, Version 8.2 (Maintenance Release April 2004), SC32-1267

- IBM Tivoli Workload Scheduling Suite General Information Version 8.2, SC32-1256

- OS/390 V2R10.0 System SSL Programming Guide and Reference, SC23-3978

- Tivoli Workload Scheduler for z/OS Installation Guide, SH19-4543

- z/OS V1R2 Communications Server: IP Configuration Guide, SC31-8775

Online resources

These Web sites and URLs are also relevant as further information sources:

- IBM Tivoli Workload Scheduler publications in PDF format

http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html

- Search for IBM fix packs

http://www.ibm.com/support/us/all_download_drivers.html

- Adobe (Acrobat) Reader

http://www.adobe.com/products/acrobat/readstep2.html

How to get IBM Redbooks

You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications, and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:

ibm.com/redbooks

Help from IBM

IBM Support and downloads

ibm.com/support

IBM Global Services

ibm.com/services

Abbreviations and acronyms

ACF Advanced Communications Function

API Application Programming Interface

ARM Automatic Restart Manager

CORBA Common Object Request Broker Architecture

CP Current plan

DM Domain manager

DVIPA Dynamic virtual IP address

EM Event Manager

FTA Fault-tolerant agent

FTW Fault-tolerant workstation

GID Group Identification Definition

GS General Service

GUI Graphical user interface

HFS Hierarchical File System

IBM International Business Machines Corporation

ISPF Interactive System Productivity Facility

ITSO International Technical Support Organization

ITWS IBM Tivoli Workload Scheduler

JCL Job control language

JES Job Entry Subsystem

JSC Job Scheduling Console

JSS Job Scheduling Services

MN Managed nodes

NNM Normal Mode Manager

OMG Object Management Group

OPC Operations, planning, and control

PDS Partitioned data set

PID Process ID

PIF Program interface

PSP Preventive service planning

PTF Program temporary fix

RACF Resource Access Control Facility

RFC Remote Function Call

RODM Resource Object Data Manager

RTM Recovery and Terminating Manager

SCP Symphony Current Plan

SMF System Management Facility

SMP System Modification Program

SMP/E System Modification Program/Extended

STDLIST Standard list

TMF Tivoli Management Framework

TMR Tivoli Management Region

TSO Time-sharing option

TWS IBM Tivoli Workload Scheduler

TWSz IBM Tivoli Workload Scheduler for z/OS

USS UNIX System Services

VIPA Virtual IP address

VTAM Virtual Telecommunications Access Method

WA Workstation Analyzer

WLM Workload Manager

X-agent Extended agent

XCF Cross-system coupling facility

SG24-6624-00 ISBN 073849139X

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks

End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2

Plan and implement your end-to-end scheduling environment

Experiment with real-life scenarios

Learn best practices and troubleshooting

The beginning of the new century sees the data center with a mix of work, hardware, and operating systems previously undreamed of. Today’s challenge is to manage disparate systems with minimal effort and maximum reliability. People experienced in scheduling traditional host-based batch work must now manage distributed systems, and those working in the distributed environment must take responsibility for work running on the corporate OS/390 system.

This IBM Redbook considers how best to provide end-to-end scheduling using IBM Tivoli Workload Scheduler Version 8.2, both distributed (previously known as Maestro) and mainframe (previously known as OPC) components.

In this book, we provide the information for installing the necessary Tivoli Workload Scheduler 8.2 software components and configuring them to communicate with each other. In addition to technical information, we consider various scenarios that may be encountered in the enterprise and suggest practical solutions. We describe how to manage work and dependencies across both environments using a single point of control.

We believe that this book will be a valuable reference for IT specialists who implement end-to-end scheduling with Tivoli Workload Scheduler 8.2.

Back cover