

BIRT iHub System Administration Guide


Information in this document is subject to change without notice. Examples provided are fictitious. No part of this document may be reproduced or transmitted in any form, or by any means, electronic or mechanical, for any purpose, in whole or in part, without the express written permission of Actuate Corporation.

© 1995 - 2015 by Actuate Corporation. All rights reserved. Printed in the United States of America.

Contains information proprietary to: Actuate Corporation, 951 Mariners Island Boulevard, San Mateo, CA 94404

www.actuate.com

The software described in this manual is provided by Actuate Corporation under an Actuate License agreement. The software may be used only in accordance with the terms of the agreement. Actuate software products are protected by U.S. and International patents and patents pending. For a current list of patents, please see http://www.actuate.com/patents.

Actuate Corporation trademarks and registered trademarks include: Actuate, ActuateOne, the Actuate logo, Archived Data Analytics, BIRT, BIRT 360, BIRT Analytics, The BIRT Company, BIRT Content Services, BIRT Data Analyzer, BIRT for Statements, BIRT iHub, BIRT Metrics Management, BIRT Performance Analytics, Collaborative Reporting Architecture, e.Analysis, e.Report, e.Reporting, e.Spreadsheet, Encyclopedia, Interactive Viewing, OnPerformance, The people behind BIRT, Performancesoft, Performancesoft Track, Performancesoft Views, Report Encyclopedia, Reportlet, X2BIRT, and XML reports.

Actuate products may contain third-party products or technologies. Third-party trademarks or registered trademarks of their respective owners, companies, or organizations include: Mark Adler and Jean-loup Gailly (www.zlib.net): zLib. Adobe Systems Incorporated: Flash Player, Source Sans Pro font. Amazon Web Services, Incorporated: Amazon Web Services SDK. Apache Software Foundation (www.apache.org): Ant, Axis, Axis2, Batik, Batik SVG library, Commons Command Line Interface (CLI), Commons Codec, Commons Lang, Commons Math, Crimson, Derby, Hive driver for Hadoop, Kafka, log4j, Pluto, POI ooxml and ooxml-schema, Portlet, Shindig, Struts, Thrift, Tomcat, Velocity, Xalan, Xerces, Xerces2 Java Parser, Xerces-C++ XML Parser, and XML Beans. Daniel Bruce (www.entypo.com): Entypo Pictogram Suite. Castor (www.castor.org), ExoLab Project (www.exolab.org), and Intalio, Inc. (www.intalio.org): Castor. Alessandro Colantonio: CONCISE Bitmap Library. d3-cloud. Day Management AG: Content Repository for Java. Dygraphs Gallery. Eclipse Foundation, Inc. (www.eclipse.org): Babel, Data Tools Platform (DTP) ODA, Eclipse SDK, Graphics Editor Framework (GEF), Eclipse Modeling Framework (EMF), Jetty, and Eclipse Web Tools Platform (WTP). Bits Per Second, Ltd. and Graphics Server Technologies, L.P.: Graphics Server. Dave Gandy: Font Awesome. Gargoyle Software Inc.: HtmlUnit. GNU Project: GNU Regular Expression. Google Charts. Groovy project (groovy.codehaus.org): Groovy. Guava Libraries: Google Guava. HighSlide: HighCharts. headjs.com: head.js. Hector Project: Cassandra Thrift, Hector. Jason Hsueth and Kenton Varda (code.google.com): Protocole Buffer. H2 Database: H2 database. IDAutomation.com, Inc.: IDAutomation. IDRsolutions Ltd.: JPedal JBIG2. InfoSoft Global (P) Ltd.: FusionCharts, FusionMaps, FusionWidgets, PowerCharts. InfoVis Toolkit. Matt Inger (sourceforge.net): Ant-Contrib. Matt Ingenthron, Eric D. Lambert, and Dustin Sallings (code.google.com): Spymemcached. International Components for Unicode (ICU): ICU library. JCraft, Inc.: JSch. jQuery: jQuery, JQuery Sparklines. Yuri Kanivets (code.google.com): Android Wheel gadget. LEAD Technologies, Inc.: LEADTOOLS. The Legion of the Bouncy Castle: Bouncy Castle Crypto APIs. Bruno Lowagie and Paulo Soares: iText. Membrane SOA Model. MetaStuff: dom4j. Microsoft Corporation (Microsoft Developer Network): CompoundDocument Library. Mozilla: Mozilla XML Parser. MySQL Americas, Inc.: MySQL Connector/J. Netscape Communications Corporation, Inc.: Rhino. NodeJS. nullsoft project: Nullsoft Scriptable Install System. OOPS Consultancy: XMLTask. OpenSSL Project: OpenSSL. Oracle Corporation: Berkeley DB, Java Advanced Imaging, JAXB, Java SE Development Kit (JDK), Jstl, Oracle JDBC driver. PostgreSQL Global Development Group: pgAdmin, PostgreSQL, PostgreSQL JDBC driver. Progress Software Corporation: DataDirect Connect XE for JDBC Salesforce, DataDirect JDBC, DataDirect ODBC. Quality Open Software: Simple Logging Facade for Java (SLF4J), SLF4J API and NOP. Raphael. RequireJS. Rogue Wave Software, Inc.: Rogue Wave Library SourcePro Core, tools.h++. Sencha Inc.: Extjs, Sencha Touch. Shibboleth Consortium: OpenSAML, Shibboleth Identity Provider. Matteo Spinelli: iscroll. StAX Project (stax.codehaus.org): Streaming API for XML (StAX). Sam Stephenson (prototype.conio.net): prototype.js. SWFObject Project (code.google.com): SWFObject. ThimbleWare, Inc.: JMemcached. Twittr: Twitter Bootstrap. VMWare: Hyperic SIGAR. 
Woodstox Project (woodstox.codehaus.org): Woodstox Fast XML processor (wstx-asl). World Wide Web Consortium (W3C) (MIT, ERCIM, Keio): Flute, JTidy, Simple API for CSS. XFree86 Project, Inc.: (www.xfree86.org): xvfb. ZXing Project (code.google.com): ZXing.

All other brand or product names are trademarks or registered trademarks of their respective owners, companies, or organizations.

Document No. 141215-2-530303 August 31, 2015


Contents

About BIRT iHub System Administration Guide

Part 1. Architecture and configuration

Chapter 1. Understanding Actuate BIRT iHub architecture
   Understanding BIRT iHub architecture
      Using a third-party RDBMS with BIRT iHub system
      About the cluster and volume schemas
      Installing and configuring BIRT iHub System
      Managing backup, recovery, and failover
      Managing an iHub cluster
   Understanding the BIRT iHub process model
      Understanding process flow in a BIRT iHub system
   iHub security overview
      Understanding SAML
      About the BIRT iHub SAML implementation
      Understanding SSL
      Understanding SSO
   Administering BIRT iHub System
      About Migration and Administration Tools
      Using JDBC to connect to the BIRT iHub database
      API Compatibility
      About international character sets
      Administrative reports
      Supported operating systems
      Supporting collateral
      Training

Chapter 2. Configuring BIRT iHub to use an alternative database on a Windows platform
   Switching to a pre-existing database for storing volume metadata
   Preparing to switch databases
      Locating folders in the BIRT iHub installation
      Taking the cluster offline
   Using the cluster database switcher tool
      Creating a properties file for the Cluster Database Switcher
         Specifying properties for switching to an Oracle database
         Specifying properties for switching to a PostgreSQL database
         Specifying properties when you have multiple volumes
      Running the Cluster Database Switcher
   Verifying the new database connection
   Bringing the cluster online
   Testing the new database
   Backing up BIRT iHub system and volume metadata

Chapter 3. Configuring BIRT iHub to use an alternative database on a Linux platform
   Switching to a pre-existing database for storing volume metadata
   Preparing to switch databases
      Locating folders in the BIRT iHub installation
      Taking the cluster offline
   Using the cluster database switcher tool
      Creating a properties file for the Cluster Database Switcher
         Specifying properties for switching to an Oracle database
         Specifying properties for switching to a PostgreSQL database
         Specifying properties when you have multiple volumes
      Running the Cluster Database Switcher
   Verifying the new database connection
   Bringing the cluster online
   Testing the new database
   Backing up BIRT iHub system and volume metadata

Chapter 4. Configuring a cluster using shared files
   About configuring a cluster using shared files
   Changing default port numbers
   Specifying an output document destination using a URL
   Configuring custom events
   Setting default values for notification e-mail properties
      Configuring limits on notification e-mail size and number of recipients
      Creating a dummy To: line for a notification e-mail
   Working with the job completion notification e-mail message template
      About e-mail template elements and attributes
      About variable attribute values
      Using HTML in the e-mail template
   Stopping and starting iHub processes
      Stopping and starting the cluster
         Editing the cluster
         Stopping the cluster
         Starting the cluster
      Stopping and starting iHub processes for a standalone iHub installation on a Windows machine
      Stopping and starting iHub processes for a standalone iHub installation on a Linux machine

Chapter 5. Configuring BIRT iHub security
   Securing data in a BIRT iHub volume
   BIRT iHub secure communications
      Understanding HTTPS
      Understanding SSL
         BIRT iHub SSL communication process
         BIRT iHub SSL support
      Understanding SAML
         SAML communication process
         BIRT iHub SAML support
   Using SSL
      Using SSL with IDAPI
      Using SSL with JSAPI
      Using SSL and external user management
      Using SSL with System Console
      Using SSL with Visualization Platform
      Using SSL for communication with the volume metadata database
      Using SSL with mobile browsers
      Managing SSL files
      Updating the SSL keystore password in BIRT iHub
      Using a commercial SSL certificate
      Using a commercial SSL certificate (previous)
   Using a Java policy file to control access to protected resources

Part 2. BIRT iHub System Console

Chapter 6. Understanding System Console
   About System Console
   Viewing clusters, nodes, and system administrators
   Logging in to System Console
   Using the System Console configuration user interface
   About Monitoring
      Logging in to Universal Management Console

Chapter 7. Managing clusters
   About clusters
   Creating and configuring a cluster
      About the cluster configuration categories
      Understanding the default cluster
      Adding cluster nodes to a cluster
      Preparing the iHub cluster environment
         Creating the shared configuration directory
         Sharing the folders that all cluster nodes access
         Configuring two nodes to communicate with each other
         Specifying a logon account for the Actuate iHub 3.1 service on a cluster node
      Understanding Cluster Configuration
         Performing management tasks for the entire cluster
         Performing management tasks for an individual cluster node
         Performing management tasks for a service running on a cluster node
      Understanding Cluster Configuration-prev
      Configuring Server Nodes
      Adding a volume
         Adding a storage location
         Updating a storage location
         Removing a storage location
         Understanding the volume menu
      Viewing metadata database properties
      Configuring logging
      Configuring alerts
         Viewing the list of alerts
         Adding an alert
         Enabling e-mail notification
         Editing, deleting, disabling, and enabling an alert
      Configuring Single Sign-On
      Configuring User Management
         Configuring LDAP/Active Directory Adapter
         Configuring RSSE SOAP Service
      Updating the license
      About Configuration File
      Adding storage
      Configuring alerts
   Editing an existing cluster
   Managing a cluster node
   Viewing the list of clusters
      Filtering the list of clusters using a search string
      Filtering non-starred clusters
   Deleting clusters
   Viewing cluster resources
      Viewing diagnostic log files using Logs
      Viewing system-level information using Trends
      About the Actuate Viewer menu
      Viewing Current Activity
   Adding clusters to Home
   Configuring LMServer and LSTailer
   Understanding the monitor and agent services
      About the monitor service
      About the agent service
      About the in-memory database
      Managing the AC_DATA_HOME\lmservice folder
   Using the ControlConsole utility to free memory
   BIRT iHub service and resource group properties
      Reporting service template properties
      Viewing service template properties
      Integration service template properties
      Understanding resource groups
      Resource group properties pertaining to a template
      Resource group properties pertaining to all templates
      Configuring data source connections in BIRT iHub
   Exporting and importing a volume
      Exporting or importing a volume when one BIRT iHub system is Release 3.1
      Exporting a volume
      Viewing the log file the export_volume script creates
      Preparing to import a volume to a BIRT iHub system that uses an Oracle database
      Importing a volume
      Viewing the log file the import_volume script creates
      Rolling back if an error occurs during volume import
   Configuring an Apache web server for load balancing and proxying

Chapter 8. Managing system administrators
   About Settings
   Viewing System Information
   Working with system administrators
      Creating a system administrator
      Customizing the notification e-mail template file
      Editing a system administrator
      Viewing the list of System Administrators
         Filtering the list of System Administrators
      Deleting system administrators
   Configuring Email Settings

Part 3. Managing and backing up

Chapter 9. Licensing BIRT iHub
   Understanding licensing types
   Understanding licensing options
   Installing BIRT iHub System license files
      About the license file
      Collecting machine information for a license
      About modifying a license
      Specifying the minimum number of factories for a work unit license
      About modifying the data collection option
      Hot and Warm Backup Licenses
      Cold Backup Licenses
   Understanding CPU binding
      Configuring CPU binding on Windows
         Binding to specific CPU cores on Windows
         Binding to multiple-core CPU cores on Windows
         Binding an Actuate process to a processor
         About processors and hyperthreading
      Configuring CPU binding on Linux
      Checking BIRT iHub bound processors
         Determining the number of processors BIRT iHub System uses
         Understanding CPU binding validation while BIRT iHub is running
         Understanding CPU binding validation when a volume comes online
         Understanding CPU binding validation when running BIRT iHub processes
      Configuring e-mail for CPU license problems

Chapter 10. Backing up BIRT iHub System
   Performing a BIRT iHub System backup
      Managing the backup and recovery of BIRT iHub metadata and data files
      Using RDBMS and file system backup utilities
      Avoiding conflict with the file purging process
   Backing up and restoring a BIRT iHub System that uses a PostgreSQL database
      Backing up BIRT iHub System using pgAdmin
      Backing up BIRT iHub System using pg_dump
      Restoring BIRT iHub System using pg_restore

Chapter 11. BIRT iHub template settings

Index


About BIRT iHub System Administration Guide

BIRT iHub System Administration Guide includes the following chapters:

■ About BIRT iHub System Administration Guide. Provides an overview of this guide, Actuate BIRT iHub documentation, and the typographical conventions in this book.

■ Part 1. Architecture and configuration. Describes BIRT iHub architecture and customizing the set-up of BIRT iHub.

■ Chapter 1. Understanding Actuate BIRT iHub architecture. Describes BIRT iHub architecture, the iHub System process model, and system administration, including new utilities and third-party relational database management systems (RDBMS) used to store iHub system and volume metadata.

■ Chapter 2. Configuring BIRT iHub to use an alternative database on a Windows platform. Describes how to switch to an alternative metadata database, such as a pre-existing PostgreSQL or Oracle RDBMS, in a Windows environment.

■ Chapter 3. Configuring BIRT iHub to use an alternative database on a Linux platform. Describes how to switch to an alternative metadata database, such as a pre-existing PostgreSQL or Oracle RDBMS, in a Linux environment.

■ Chapter 4. Configuring a cluster using shared files. Describes configuration tasks using files the cluster shares.

■ Chapter 5. Configuring BIRT iHub security. Explains how to secure data in a BIRT iHub volume, BIRT iHub secure communications, and how to use SSL.

■ Part 2. BIRT iHub System Console. Describes how to use BIRT iHub System Console.

■ Chapter 6. Understanding System Console. Introduces BIRT iHub System Console.


■ Chapter 7. Managing clusters. Describes creating, configuring, and managing a BIRT iHub cluster. Describes BIRT iHub cluster monitoring features.

■ Chapter 8. Managing system administrators. Describes creating and working with System administrators.

■ Part 3. Managing and backing up. Describes licensing, backup, and utilities.

■ Chapter 9. Licensing BIRT iHub. Describes licensing options, license key installation, and CPU-binding policies for BIRT iHub.

■ Chapter 10. Backing up BIRT iHub System. Describes how to back up and restore BIRT iHub volume metadata and data.


Part 1
Architecture and configuration

■ Understanding Actuate BIRT iHub architecture

■ Configuring BIRT iHub to use an alternative database on a Windows platform

■ Configuring BIRT iHub to use an alternative database on a Linux platform

■ Configuring a cluster using shared files

■ Configuring BIRT iHub security



Chapter 1
Understanding Actuate BIRT iHub architecture

This chapter contains the following topics:

■ Understanding BIRT iHub architecture

■ Understanding the BIRT iHub process model

■ iHub security overview

■ Administering BIRT iHub System


Understanding BIRT iHub architecture

Actuate BIRT iHub stores metadata containing information about the cluster and volume configurations in a relational database management system (RDBMS). In the default configuration, BIRT iHub uses the open-source, third-party PostgreSQL database. iHub also supports alternative third-party database systems, such as Oracle or a pre-existing PostgreSQL instance.

iHub stores metadata in the following schemas:

■ Cluster: Contains settings related to BIRT iHub configuration, such as nodes, templates, volumes, users, and roles.

■ Volume: Contains settings related to volume configuration, such as folders, files, and other objects.

Using a third-party RDBMS with BIRT iHub system

Actuate installs BIRT iHub cluster and volume schemas in the default PostgreSQL RDBMS installation. Installing the schemas in a pre-existing PostgreSQL RDBMS or alternative RDBMS, such as Oracle, requires running a utility that performs the operations necessary to switch from the default PostgreSQL database to the pre-existing PostgreSQL or Oracle database. Chapter 2, “Configuring BIRT iHub to use an alternative database on a Windows platform” and Chapter 3, “Configuring BIRT iHub to use an alternative database on a Linux platform” provide detailed, step-by-step instructions on how to perform this operation.
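As a preview of that procedure, the Cluster Database Switcher reads a Java properties file that identifies the target database. The sketch below only illustrates the kind of entries such a file contains; the property names are assumptions, and the JDBC URL, host, and credentials are placeholders. Chapter 2 and Chapter 3 list the exact properties the tool requires.

   # Illustrative sketch only; see Chapter 2 (Windows) or Chapter 3 (Linux)
   # for the documented property names.
   database.type=postgresql
   database.url=jdbc:postgresql://dbhost.example.com:5432/ihub_metadata
   database.user=ihub_admin
   database.password=secret
   # For Oracle, a thin-driver URL typically takes this form:
   # database.url=jdbc:oracle:thin:@dbhost.example.com:1521:ORCL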

Only the metadata that specifies the cluster and volume configuration is stored in the database. Designs, documents, and other data objects are stored in the file system.

About the cluster and volume schemas

Actuate supports read-only operations on the cluster and volume metadata stored in the tables of the out-of-the-box (OOTB) or other third-party database. Actuate does not support the addition, deletion, or modification of these metadata tables.

Actuate does permit the creation of additional indexes on these tables. For example, a customer can create an index on the job completion notices table to expedite database processing.

Actuate does not recommend any customization of the cluster schema. Any customization that the customer does on the volume schema must be redone when migrating to or reinstalling BIRT iHub. BIRT iHub does not track any database objects that a customer creates. Actuate reserves the right to change the structure of the schemas in future releases.

Managing backup, recovery, and failover

The BIRT iHub administrator uses third-party RDBMS tools to manage the backup, recovery, and failover capabilities of the cluster and volume metadata database. The administrator uses standard operating system or other third-party tools to manage the backup and recovery of the data files.

The third-party database schemas that contain cluster and volume metadata are critical components of a BIRT iHub system. To guard against data loss, the database administrator must back up the cluster and volume schemas and all related file data to ensure recoverability in the event of failure. Please consult Actuate Support at the time of installation if you have any questions about the backup, recovery, or failover procedures necessary to protect a BIRT iHub system against the possibility of catastrophic failure.

For more information on backing up a BIRT iHub installation, see Chapter 10, “Backing up BIRT iHub System,” later in this book.
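For the OOTB PostgreSQL configuration, the metadata schemas can be backed up with standard PostgreSQL utilities such as pg_dump. The command below is a minimal sketch only; the host, port, user, and database name are placeholders, and Chapter 10 describes the supported backup procedure in detail.

   pg_dump -h <host> -p <port> -U <user> -F c -f ihub_metadata.backup <database>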

In BIRT iHub, there is no concept of volume failover, since each node in a cluster can operate on all the volumes. Configuring cluster and volume database failover is the responsibility of the third-party RDBMS administrator. The database administrator must use the facilities available in the RDBMS to configure this failover capability. Consult the third-party RDBMS documentation for detailed information on how to use native system tools to configure backup, recovery, and failover operations for the externally managed cluster and volume database.

Documentation for a PostgreSQL RDBMS is available at:

http://www.postgresql.org/docs

Documentation for an Oracle RDBMS is available at:

http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html

Understanding the BIRT iHub process model

The Actuate BIRT iHub platform uses a multi-threaded, multi-process model, running single instances of the following components on each iHub node:

■ iHub servlet container: Provides the run-time environment for client applications, such as System Console and Information Console. Client applications communicate with BIRT iHub System using SOAP-based messaging.

■ Process Management Daemon (PMD): Distributes service requests among available BIRT iHub services and nodes.

In addition, the BIRT iHub platform supports multiple instances of the following services on each node:

■ Factory: Executes requests to generate documents and perform server-side printing.

■ View: Supports viewing documents in DHTML and other output formats, such as CSV and PDF. Handles requests to download files from a volume.

■ Integration (Actuate Integration service, or AIS): Coordinates the running of information object files that extract data from multiple data sources.

This loosely coupled BIRT iHub architecture model provides the following maintenance and performance benefits:

■ Startup and shutdown of a BIRT iHub node is fast because it is independent of the RDBMS that manages a volume. The database server can remain online when shutting down a BIRT iHub node and is available when the node starts up.

■ Controlling the sequence of a volume startup is not necessary. All volumes are either already online in the database server or come online as the database server starts.

■ Downtime to apply a patch or diagnostic fix for a BIRT iHub node is reduced. The RDBMS does not have to be shut down.

Understanding process flow in a BIRT iHub system

Figure 1-1 illustrates the BIRT iHub system architecture for a multi-volume, out-of-the-box (OOTB) PostgreSQL database configuration. In this configuration, the iHub administrator starts and stops an iHub instance by running scripts from the command line or using the graphical user interface (GUI) available in System Console.

Client applications, such as System Console and Information Console, run in a servlet container. Single-Sign-On (SSO) security using Security Assertion Markup Language (SAML) provides access to the BIRT iHub system.

BIRT iHub supports administering security internally through iHub system services or externally using Report Server Security Extension (RSSE) services such as LDAP or Active Directory. Client applications communicate with BIRT iHub through SOAP messaging using the Actuate Information Delivery API (IDAPI).


Figure 1-1 BIRT iHub Release 3 system architecture

The Process Management Daemon (PMD) or ihubd process handles the services and processes defined on the cluster node in acpmdconfig.xml. The iportal process running in Information Console routes messages to different nodes in the cluster in a round-robin manner when the Message Distribution service (MDS) is enabled, which is the default setting. To modify MDS processing, set the MDS_ENABLED property to false in the web.xml file in iHub/web/iportal/WEB-INF.
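In a standard servlet deployment descriptor, a property of this kind is defined as a context parameter. The fragment below is a minimal sketch, assuming MDS_ENABLED is declared this way in web.xml; verify the exact form in the installed file before editing it.

   <context-param>
      <param-name>MDS_ENABLED</param-name>
      <param-value>false</param-value>
   </context-param>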


When a message reaches iHub, the node handles administrative and provisioning messages locally. A built-in load-balancing program dispatches report generation and viewing requests based on cluster service configuration, the current load on Java Factory and viewing processes, and the available work units specified for each node.

When a BIRT iHub node receives a request, iHub deserializes the SOAP message, performs the appropriate action, and sends a response in the form of a SOAP message back to the application. For example, BIRT iHub receives a request to run a design, such as a BIRT design, immediately or as a scheduled job. BIRT iHub communicates with the internal framework and the cluster and volume metadata database as necessary to locate the design and identify the resources required to run it.

The reporting engine selects a Java Factory service to run the BIRT design and checks job status. BIRT iHub uses an asynchronous Java Factory service to generate a temporary document or a synchronous Java Factory service to generate a scheduled document.

The View service renders the document in DHTML format, or converts the output to other supported formats, such as CSV or PDF, and handles requests to download files from the volume. The View service sends the document to the requesting application for viewing.

A design that uses an information object utilizes Actuate Integration service (AIS) to extract and cache data from an external data source and perform the following processing:

■ Run a query to extract data from an external data source.

■ Cache data in iHub System for high availability and to reduce load on the network, data source, and volume by avoiding repetitive data retrieval operations.

The PostgreSQL RDBMS runs as a service in Windows or a process in Linux. The RDBMS can be configured to start automatically or run manually, using a script similar to the BIRT iHub startup script.

iHub stores cluster and volume metadata in the third-party RDBMS, communicating with the RDBMS as necessary using JDBC. iHub uses the physical file system to read and store designs, documents, and other iHub objects as data in volume storage locations.

The out-of-the-box (OOTB) iHub PostgreSQL installation configures the volume database on the local disk to increase the reliability and performance of file input and output (I/O) operations. PostgreSQL discourages creating databases accessed using a Network File System (NFS) for these reasons. For more information, see section 17.2.1 Network File Systems at the following URL:

http://www.postgresql.org/docs/8.3/static/creating-cluster.html


The iHub OOTB PostgreSQL RDBMS starts multiple instances to handle connections for running queries that access metadata. In database jargon, PostgreSQL uses a process-per-user, client/server model. For more information, refer to the PostgreSQL documentation at the following URL:

http://www.postgresql.org/docs/8.4/static/connect-estab.html

A cluster node is a machine running a BIRT iHub instance. The system administrator adds a node to a cluster to scale BIRT iHub System to the necessary processing requirements. Every cluster node must have network access to the following directory and resources to join the cluster:

■ The shared configuration directory

■ Cluster resources, such as printers, database systems, and disk storage systems

Each node gets its configuration from a template in acserverconfig.xml, which is located in a shared configuration home directory along with the license file, acserverlicense.xml.

The acserverconfig.xml file contains the server templates as well as other configuration parameters specifying the host names, volume names, port numbers, printers, and services used by nodes in the cluster. When the Process Management Daemon (PMD) starts up, it reads these configurations and exposes them to the process environment variable list. When a node joins a cluster, it configures itself using its template.
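The fragment below only illustrates the kind of information acserverconfig.xml carries. It is a hypothetical sketch; the element and attribute names are assumptions rather than the actual acserverconfig.xml schema, so always start from the file that the installer generates.

   <!-- Hypothetical illustration; not the actual acserverconfig.xml schema -->
   <Config>
      <System>
         <Volume VolumeName="Default Volume" StorageLocation="/shared/storage"/>
      </System>
      <Templates>
         <ServerTemplate Name="small" HostName="ihubnode1" PortNumber="8000">
            <Services>Factory View Integration</Services>
         </ServerTemplate>
      </Templates>
   </Config>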

Nodes that have the same cluster ID and run on the same subnet automatically detect and join each other when added to the same cluster in System Console. This feature is known as elastic iHub clustering.

The cluster automatically detects the on-off status of any node. Single-point node failure does not affect the availability of other nodes.

The cluster communicates across the network using standard HTTP/IP addressing. The Process Management Daemons (PMDs) located on each node coordinate processing among the available BIRT iHub services based on message type to balance the workload across the nodes.

This loosely coupled model provides the following improvements to intra-cluster messaging:

■ Each node in the cluster is relatively independent and identical in terms of components and functionality. Intra-cluster messages are limited to messages for cluster membership and load balancing.

■ Operations like design execution and viewing typically require intermediate information from the volume metadata database. This information is now directly retrieved from or updated in the RDBMS, eliminating internal messages to services on other nodes.


Increased scale of operations at the cluster level can, however, create bottlenecks in the metadata database. Important factors to consider when configuring nodes and ancillary resources include estimating processing power and access to hardware and software resources, such as printers and database drivers.

BIRT iHub instances running on multiple machines maintain cluster and volume metadata in a database, which controls access to shared volume data. This data can be on machines that are not running BIRT iHub, but must be shared and accessible to each iHub instance.

This loosely coupled cluster model provides the following maintenance and performance benefits:

■ Startup and shutdown of a BIRT iHub node is fast because it is independent of the RDBMS that manages the cluster and volume. An RDBMS can remain online when shutting down a BIRT iHub node. The RDBMS is available when the node starts up.

■ Controlling the sequence of node and volume startup is not necessary. All nodes and volumes are either already online or come online as the RDBMS starts.

■ Downtime to apply a patch or diagnostic fix for a BIRT iHub node is reduced. The RDBMS, including the OOTB PostgreSQL database server, does not have to be shut down. In a BIRT iHub cluster, the patch or diagnostic fix can be applied to one node at a time.

This operational model lends itself well to grid, cloud, and other data-center types of deployments.

iHub security overview

The following sections describe the basic elements of iHub security:

■ Security Assertion Markup Language (SAML)

■ Secure Sockets Layer (SSL)

■ Single sign-on (SSO)

Understanding SAML

Security Assertion Markup Language (SAML) is an open standard that provides a secure means of authenticating and authorizing a user to resources on the internet. SAML eliminates the need for a user to provide an authentication credential, such as a password, to every internet resource the user accesses. The following entities participate in SAML-based communication:


■ User: The party accessing a resource on the internet.

■ Service provider: The application, resource, or service the user wants to access.

■ Identity provider: The entity that authenticates the user to the service provider.

A SAML-enabled identity provider and a SAML-enabled service provider can communicate with each other. The user has an account with the identity provider. The identity provider maintains a list of users and can authenticate them.

The user attempts to access a service provider. The service provider contacts the identity provider. The identity provider authenticates the user, sending an assertion, or message containing information about the user, back to the service provider. The service provider determines that the assertion is valid, then allows the user access.

SAML-based user-authentication activity is transparent to the user. Any service provider that can communicate with an identity provider with which the user has an account can obtain user authentication and grant access to the user. The user authenticates once, with the identity provider. Then, the user can potentially access all the service providers that communicate with the identity provider.

About the BIRT iHub SAML implementation

BIRT iHub provides its own out-of-the-box (OOTB) SAML identity provider (IdP) implementation and does not support third-party SAML identity provider software. BIRT iHub uses Shibboleth IdP 2.4.0, with some customization, to implement this feature.

BIRT iHub implements single sign-on (SSO) in SAML 2.0, using the Shibboleth OpenSAML API for System Console, Information Console, and BIRT Analytics. BIRT iHub uses the default authentication method, which specifies PasswordProtectedTransport and PreviousSession.

BIRT iHub enables SSL by default, specifying port 8001 in the acpmdconfig.xml file. BIRT iHub stores SSL certificates in the AC_SERVER_HOME/shared/credential folder.

BIRT iHub implements 2048-bit RSA private key encryption. Actuate supports the following secure protocol configurations:

■ SSL V3 and TLS V1.0

■ TLS V1.1 and TLS V1.2

BIRT iHub disables SSL V2, client-initiated renegotiation for ihubd, and TLS compression.


Understanding SSL

SSL, or Secure Sockets Layer, is a protocol for securely exchanging information on the internet. SSL provides the means to:

■ Encrypt the data exchanged between two parties.

■ Establish trust between the two parties by proving identity.

SSL uses the following components:

■ Public key: Available to the public. Used to:

■ Encrypt a message sent to the owner of the private key.

■ Verify the digital signature on a digital certificate.

■ Private key: Available only to the owner of the private key. Used to:

■ Decrypt a message encrypted by the public key.

■ Sign a digital certificate using a digital signature.

■ Digital certificate: Contains information such as the certificate issuer, certificate expiration date, information about the certificate owner, and the certificate owner’s public key. The certificate is signed by the certificate issuer, using a digital signature.

■ Certification authority (CA): A trusted agency that validates information about an entity such as an online business, then creates a digital certificate for the entity.

As an example, a client user wants to purchase a product from an online business. The client contacts the online business over the internet, and the online business sends the client its digital certificate, which carries a digital signature. Creating that signature requires the private key. The client uses the corresponding public key to verify the signature. If verification succeeds, the certificate was signed with the matching private key.

The client must also verify that the signer of the certificate is a trusted party, such as a Certification Authority. The client possesses a list of root certificates, each belonging to a trusted Certification Authority. Using the root certificate of the Certification Authority named on the certificate that the online business sent, the client verifies the signature on that certificate. If the verification succeeds, the identity of the online business is authenticated, and the client and the online business can establish a secure connection.
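To make this verification step concrete, the following Java fragment sketches how a client can check a server certificate's validity period and signature against a trusted CA certificate, using the standard java.security.cert API. It is an illustration of the general mechanism only, not code that BIRT iHub itself executes, and the certificate file names are hypothetical.

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class VerifyCertificate {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        // Certificate presented by the online business (hypothetical file name)
        X509Certificate serverCert;
        try (FileInputStream in = new FileInputStream("business.cer")) {
            serverCert = (X509Certificate) cf.generateCertificate(in);
        }

        // Trusted root certificate of the issuing CA (hypothetical file name)
        X509Certificate caCert;
        try (FileInputStream in = new FileInputStream("ca-root.cer")) {
            caCert = (X509Certificate) cf.generateCertificate(in);
        }

        // Check the validity period, then verify that the signature on the
        // business certificate was produced with the CA's private key.
        serverCert.checkValidity();
        serverCert.verify(caCert.getPublicKey());

        System.out.println("Certificate signed by trusted CA: "
            + serverCert.getIssuerX500Principal());
    }
}

If either check fails, the checkValidity or verify call throws an exception, which corresponds to the client rejecting the connection.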


Understanding SSO

Single sign-on (SSO) refers to any authentication mechanism that supports a user logging in once to gain access to multiple applications. Single sign-on authenticates the user for every application the user is authorized to access and eliminates the need for the user to log in again when switching to another application during a session.

Administering BIRT iHub System

Administering a BIRT iHub System includes the following tasks:

■ Setting up users, user groups, folders, files, and other administrative tasks. An administrator creates, configures, and manages users, user groups, files, and folders, including assigning and updating privileges and managing group memberships. User and user group privileges selectively control access to the volume and its data objects.

■ Scheduling jobs to run designs and generate documents. Each node in an iHub cluster has a job scheduler and dispatcher. The job dispatcher sends jobs to the local resource group factories.

In this loosely coupled cluster model, the dispatcher sends a job from the pending queue to available factories, balancing the load across the cluster. Multiple job schedulers running on the nodes in a cluster allow the system to scale processing to handle a large number of scheduled jobs at the same time.

■ Reviewing logs and auditing the information to diagnose system problems. BIRT iHub Logging and Monitoring System (LMS) can capture usage and error information in log files to assist a system administrator in evaluating resource usage and troubleshooting problems.

It is best to set up the iHub system so that System Console and the LMS server are on a separate computer. This allows system administrators to access system logging information when iHub is busy, and also reduces the memory requirements on the iHub computer.

■ Configuring a cluster using automated installation programs. The system administrator can run the installation program to configure BIRT iHub. Each cluster node gets its configuration from a template in acserverconfig.xml, located in a shared configuration directory. Nodes with the same cluster ID, running on the same sub-net, automatically detect and join each other to form the cluster.


■ Using the Actuate Information Delivery application programming interface to develop client applications and extend functionality. The Actuate Information Delivery application programming interface (IDAPI) supports integrating and administering BIRT iHub using Extensible Markup Language (XML) and the Simple Object Access Protocol (SOAP). Using the IDAPI, developers can create an application that performs such tasks as scheduling a custom event or running a Report Server Security Extension (RSSE) application to manage users and user groups in an external system, such as LDAP or Active Directory.

A BIRT iHub administrator uses the System Console, Visualization Platform, command-line utilities, and IDAPI to perform these tasks.

Please consult the following iHub documentation for more information on how to administer the system using these components:

■ Installing and Upgrading BIRT iHub. Provides detailed instructions on how to use automated installation programs to install or upgrade Visualization Platform and System Console. There are two versions of this manual, one for Windows, and one for Linux.

■ BIRT iHub System Administration Guide. Describes how to use System Console to perform tasks such as managing a cluster, connecting to a database, adding a volume, setting up user accounts and groups, updating the license, and configuring other BIRT iHub properties, such as logging levels, notification, and printing. Also describes BIRT iHub system architecture, and backup and recovery operations.

■ Managing Volumes and Users. Describes how to use Information Console to perform volume administration tasks such as executing designs, viewing documents, and scheduling jobs.

■ Using Information Console. Describes how to use Information Console to perform volume administration tasks such as executing designs, viewing documents, and scheduling jobs.

■ Integrating Applications into BIRT iHub. Provides information about application programming using the SOAP-based Actuate Information Delivery API (IDAPI), including a Java developer guide and sections on logging and using the Java Report Server Security Extension (RSSE).

Using JDBC to connect to the BIRT iHub database

BIRT iHub uses JDBC for connecting to the metadata database. The BIRT iHub run-time JRE environment supports Java 6.0 and 7.0. Any JDBC driver must be compatible with these JRE versions.


iHub requires a JDBC driver that complies with the JDBC 3.0 specification or later. The driver's Driver.jdbcCompliant( ) method must return true, and DatabaseMetaData.getJDBCMajorVersion( ) must return 3 or greater.

A system administrator who decides to customize BIRT iHub to connect to a database other than the OOTB PostgreSQL database must ensure that the JDBC driver returns adequate information about the data types in the database. At a minimum, the database must return the common data types, such as integer, floating point, and character. If the database does not return these common data types, then the database administrator must customize the database mapping framework to specify the types.

The JDBC driver must also support the following features:

■ Scrollable cursor

■ Retention of a cursor after commit

■ Update using a prepared cursor

When using connection pooling, the tracing functionality of the JDBC driver captures connection pool run-time statistics.
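The following Java sketch illustrates one way to confirm the compliance and cursor requirements described above for a candidate driver before configuring iHub to use it. The JDBC URL, user, and password are placeholders, not values that iHub itself uses; substitute the settings for your driver and database.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class CheckJdbcDriver {
    public static void main(String[] args) throws Exception {
        // Placeholder connection settings; replace with your driver's values.
        String url = "jdbc:postgresql://dbhost:5432/ihub";
        try (Connection conn = DriverManager.getConnection(url, "ihub", "secret")) {
            Driver driver = DriverManager.getDriver(url);
            DatabaseMetaData meta = conn.getMetaData();

            // iHub requires jdbcCompliant() == true and a JDBC major version of 3 or greater.
            System.out.println("jdbcCompliant: " + driver.jdbcCompliant());
            System.out.println("JDBC major version: " + meta.getJDBCMajorVersion());

            // Scrollable cursor support
            System.out.println("Scrollable result sets: "
                + meta.supportsResultSetType(ResultSet.TYPE_SCROLL_INSENSITIVE));

            // Retention of a cursor after commit
            System.out.println("Cursors held over commit: "
                + meta.supportsResultSetHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT));

            // Updatable scrollable result sets, a reasonable proxy for update through a prepared cursor
            System.out.println("Updatable scrollable result sets: "
                + meta.supportsResultSetConcurrency(
                    ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE));
        }
    }
}

A driver that reports false for any of these capabilities may still connect, but it does not meet the requirements stated above for the metadata database connection.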

About international character sets

BIRT iHub operates on the assumption that the metadata database is configured to run with UTF-8 encoding. Any other database encoding scheme requires configuring the connection parameters to specify the database encoding. The driver must handle the conversion to UCS2.
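As a purely illustrative sketch, driver connection parameters are typically supplied either in the JDBC URL or as a java.util.Properties object. The charSet property name used below is hypothetical and driver-specific; consult your JDBC driver's documentation for the actual parameter that declares the database encoding.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ConnectWithEncoding {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "ihub");       // placeholder credentials
        props.setProperty("password", "secret");
        // Hypothetical, driver-specific property naming the database encoding.
        props.setProperty("charSet", "LATIN1");

        String url = "jdbc:postgresql://dbhost:5432/ihub";   // placeholder URL
        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connected; encoding parameter passed to the driver.");
        }
    }
}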

Administrative reports

The default BIRT iHub volume contains sample volume content, such as BIRT designs you can run to generate documents, examples of a data object and a dashboard, BIRT Library files, images, and HTML files. Installing the sample volume is an option in a custom installation.

Supported operating systems

Actuate BIRT iHub supports the following operating systems:

■ Windows

■ Linux


Chapter 2

Configuring BIRT iHub to use an alternative database on a Windows platform

This chapter contains the following topics:

■ Switching to a pre-existing database for storing volume metadata

■ Preparing to switch databases

■ Using the cluster database switcher tool

■ Verifying the new database connection

■ Bringing the cluster online

■ Testing the new database

■ Backing up BIRT iHub system and volume metadata


Switching to a pre-existing database for storing volume metadata

The BIRT iHub Visualization Platform installation program installs a PostgreSQL database to contain volume metadata such as users, user groups, and privileges. You can alternatively store volume metadata in a pre-existing PostgreSQL or Oracle database. The topics in this chapter explain the operations you perform in sequence to switch from the installed PostgreSQL database to a pre-existing PostgreSQL or Oracle database in a Windows environment.

Preparing to switch databases

Before performing the switch database operation, you must perform the following tasks:

■ Locate several folders in the BIRT iHub installation.

■ Take the BIRT iHub cluster offline.

This section describes these tasks.

Locating folders in the BIRT iHub installation

The tools folder is in the Visualization Platform home folder, and the Visualization Platform home folder is under the installation root folder. The following list identifies the locations of these folders in a typical Visualization Platform installation.

■ The installation root folder: The folder where Visualization Platform 3 was installed. It contains a folder named modules. In a typical Windows installation, the default path of the root folder is either:

C:\Actuate\BIRTiHubVisualization

or:

C:\Actuate3\BIRTiHubVisualization

■ The Visualization Platform home folder: The home folder in your existing installation is named iHub. This folder is in the BIRTiHub folder, which the modules folder contains. In a typical Windows installation, the default path of the home folder is either:

C:\Actuate\BIRTiHubVisualization\modules\BIRTiHub\iHub


or:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub

■ The iHub tools folder: Contents of this folder include build and product migration tools. In a typical Windows installation, the default path of the tools folder is either:

C:\Actuate\BIRTiHubVisualization\modules\BIRTiHub\iHub\tools

or:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\tools

Taking the cluster offline

Before performing this operation, you must have created a cluster. To create a cluster, choose Create Cluster.

Before switching databases, you must take the BIRT iHub cluster offline by stopping the cluster in System Console. The following section describes this operation.

How to stop the cluster

1 Log in to System Console as the default system administrator.

2 Choose Clusters, as shown in Figure 2-1.

Figure 2-1 Choosing Clusters

3 Left-click the arrowhead icon next to the cluster and choose Edit, as shown in Figure 2-2.

Figure 2-2 Choosing to edit the cluster

4 Left-click the arrowhead icon next to the gear icon to access the Manage Cluster menu, and choose Stop Cluster, as shown in Figure 2-3.


Figure 2-3 Stopping the cluster

5 From the Manage Cluster menu, choose Refresh, as shown in Figure 2-4.

Figure 2-4 Choosing Refresh to refresh the cluster status

When all the services appear in red, the cluster is offline, as shown in Figure 2-5.

Figure 2-5 The cluster is offline

Using the cluster database switcher tool

The cluster database switcher tool performs all the operations necessary to switch from the default PostgreSQL database to a pre-existing PostgreSQL or Oracle database. The tool performs the following operations:


■ Connects to both the default PostgreSQL database and to the pre-existing database.

■ Creates an application user and schema in the pre-existing database. Creates multiple schemas if your Visualization Platform installation uses more than one volume.

■ Copies iHub configuration data and volume metadata from the default PostgreSQL database to the pre-existing database.

■ Updates the database connection settings in the iHub configuration file, acserverconfig.xml.

To use the cluster database switcher tool, you perform the following tasks:

■ Create a properties file. The properties in this file enable the tool to connect to the pre-existing database and to create the application user and schema or schemas. You pass the properties file to the tool when you run it.

■ Run the tool. You open a command prompt, navigate to the folder containing the cluster database switcher tool script, and run it.

Creating a properties file for the Cluster Database Switcher

Using a text editor, create a properties file named switch_cluster_database.properties. You can create the file in any folder that is convenient. For example, you can create the properties file in the folder containing the cluster database switcher tool. By default, this is the \bin folder inside the iHub tools folder:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\tools\bin

The properties file you create is slightly different for each of the following scenarios:

■ When switching to a pre-existing Oracle database

■ When switching to a pre-existing PostgreSQL database

■ When switching to either database type and you have more than one volume

The following sections describe the details for creating a properties file for each of these scenarios.

When you create the properties file, you must provide database entity names, including names for databases, schemas, tablespaces, and users. Database entity names are subject to the following restrictions:


■ Database entity names must contain only 7-bit ASCII characters.

■ The name must begin with an ASCII letter and contain only ASCII letters, digits, and underscores.

■ An Oracle schema name cannot be longer than 30 characters.
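The naming rules in this list can be checked mechanically. The following small Java sketch, which is not part of the switcher tool, shows one way to validate a proposed name against these restrictions before you put it in the properties file.

public class EntityNameCheck {
    // Must begin with an ASCII letter; ASCII letters, digits, and underscores only.
    private static final String NAME_PATTERN = "[A-Za-z][A-Za-z0-9_]*";

    // Pass 30 as maxLength for an Oracle schema name; use a larger limit otherwise.
    static boolean isValidName(String name, int maxLength) {
        return name != null
            && name.length() <= maxLength
            && name.matches(NAME_PATTERN);
    }

    public static void main(String[] args) {
        System.out.println(isValidName("ac_volume", 30));  // true
        System.out.println(isValidName("1volume", 30));    // false: starts with a digit
        System.out.println(isValidName("ac-volume", 30));  // false: hyphen not allowed
    }
}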

Specifying properties for switching to an Oracle database

Table 2-1 describes the properties to use when switching to a pre-existing Oracle database. Properties having names that begin with TARGET_ pertain to the database to which you are switching. The tool automatically retrieves properties pertaining to the default PostgreSQL database from the iHub configuration file, acserverconfig.xml. An example properties file follows the table.

Table 2-1 Properties required for switching to an Oracle database

AC_SERVER_HOME: Path of the Visualization Platform home folder.

TARGET_DATABASE_TYPE: Type of relational database management system (RDBMS). Specify Oracle.

TARGET_ORACLE_TNS_NAMES_FILE: Full path to an Oracle TNS names file, named tnsnames.ora, which defines the net service for the Oracle database. BIRT iHub does not use an Oracle client. BIRT iHub only needs a valid TNS names file. If an Oracle client is not installed on the machine where Visualization Platform is installed, you can create a TNS names file to define the net service for the Oracle database. Actuate recommends placing the TNS names file in the BIRT iHub configuration folder, which contains acserverconfig.xml. By default, the path to the iHub configuration folder is <Visualization Platform home folder>\shared\config.

TARGET_DATABASE_NAME: Net service name for the Oracle database.

TARGET_DATABASE_ADMINISTRATOR: Name of a user that has administration privileges on the Oracle database, for example SYSTEM. Do not specify a user with SYSDBA privileges, for example SYS.

TARGET_CREATE_DATABASE_USER: Whether to create a database user. Valid values are true or false. Specify true.

TARGET_DATABASE_USER: User ID that BIRT iHub uses to connect to the Oracle database.

TARGET_CREATE_SCHEMA: Whether to create a schema. Valid values are true or false. Specify true.

TARGET_SCHEMA_NAME: Name of the schema to create for BIRT iHub tables.

TARGET_TABLESPACE_NAME: Name of the Oracle tablespace to use for the BIRT iHub tables and indexes.


Use the following example to create the properties file. In the properties file, paths must always use forward slashes.

AC_SERVER_HOME = C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub
TARGET_DATABASE_TYPE = Oracle
TARGET_ORACLE_TNS_NAMES_FILE = C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/shared/config/tnsnames.ora
TARGET_DATABASE_NAME = ABC123
TARGET_DATABASE_ADMINISTRATOR = SYSTEM
TARGET_CREATE_DATABASE_USER = true
TARGET_DATABASE_USER = ihub
TARGET_CREATE_SCHEMA = true
TARGET_SCHEMA_NAME = ac_volume
TARGET_TABLESPACE_NAME = USERS

Specifying properties for switching to a PostgreSQL database

Table 2-2 describes the properties to use when switching to a pre-existing PostgreSQL database. Properties having names that begin with TARGET_ pertain to the database to which you are switching. The tool automatically retrieves properties pertaining to the default PostgreSQL database from the iHub configuration file, acserverconfig.xml. An example properties file follows the table.

Table 2-2 Properties required for switching to a PostgreSQL database

AC_SERVER_HOME: Path to the Visualization Platform home folder.

TARGET_DATABASE_TYPE: Type of relational database management system (RDBMS). Specify PostgreSQL.

TARGET_DATABASE_HOST: Network host name of the database server.

TARGET_DATABASE_PORT: Network port for the database server.

TARGET_DATABASE_NAME: Name of the PostgreSQL database.

TARGET_DATABASE_ADMINISTRATOR: Name of a user that has administration privileges on the PostgreSQL database.

TARGET_CREATE_DATABASE_USER: Whether to create a database user. Valid values are true or false. Specify true.

TARGET_DATABASE_USER: User ID that BIRT iHub uses to connect to the PostgreSQL database.

TARGET_CREATE_SCHEMA: Whether to create a schema. Valid values are true or false. Specify true.

TARGET_SCHEMA_NAME: Name of the schema to create for BIRT iHub tables.


Use the following example to create the properties file. In the properties file, paths must always use forward slashes.

AC_SERVER_HOME = C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub
TARGET_DATABASE_TYPE = PostgreSQL
TARGET_DATABASE_HOST = foo.bar.com
TARGET_DATABASE_PORT = 5432
TARGET_DATABASE_NAME = ihub
TARGET_DATABASE_ADMINISTRATOR = postgres
TARGET_CREATE_DATABASE_USER = true
TARGET_DATABASE_USER = ihub
TARGET_CREATE_SCHEMA = true
TARGET_SCHEMA_NAME = ac_volume

Specifying properties when you have multiple volumes

If the cluster has a single volume, the metadata for the volume resides in the same schema as the cluster configuration data. If the cluster has more than one volume, the cluster configuration data resides in its own schema and the metadata for each volume resides in a separate schema per volume. The properties required when you have multiple volumes are listed in Table 2-3.

Table 2-3 Properties required when you have multiple volumes

TARGET_CLUSTER_SCHEMA_NAME: Name of the schema to create for the cluster configuration data.

TARGET_VOLUME_SCHEMA_NAMES: A list of volume name and schema name pairs. Each pair defines the schema name for a volume. A volume name must be separated from its schema name by a colon (:). Schema and volume name pairs must be separated by a pipe character (|). Use a backslash (\) to continue the list onto a new line.


Use the following example to add these two properties to a properties file.

TARGET_CLUSTER_SCHEMA_NAME = ac_cluster
TARGET_VOLUME_SCHEMA_NAMES = Volume One : ac_volume_01 |\
    Volume Two : ac_volume_02 |\
    Volume Three : ac_volume_03

Running the Cluster Database Switcher

The Cluster Database Switcher is a batch script named switch_cluster_database.bat. The script is in the \bin folder inside the iHub tools folder. The script takes a single command-line argument, the path to the properties file.

BIRT iHub connects to the metadata database using a database user name and password. When you create the properties file for the cluster database switcher script, you provide the name of the user that BIRT iHub uses to connect to the target database in the TARGET_DATABASE_USER property. You do not, however, include the password in the properties file for security reasons. When you run the script, the script pauses to prompt you for this and other passwords. When you enter the password, the script resumes. The default source database user password is postgres.

How to run the cluster database switcher

1 Open a command prompt. Navigate to the \bin folder inside the iHub tools folder. For example:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\tools\bin

2 Run the switch_cluster_database.bat file passing the name of the properties file as an argument. For example:

switch_cluster_database.bat switch_cluster_database.properties

Verifying the new database connection

You use System Console to verify the new database connection.

How to verify the new database connection

1 In System Console, select Metadata Database from the side menu, as shown in Figure 2-6.


Figure 2-6 Choosing Metadata Database

2 Choose Test Connection to test the connection between iHub and the new database, as shown in Figure 2-7.

Figure 2-7 Testing the database connection

Bringing the cluster online

After running the Cluster Database Switcher, you bring the BIRT iHub cluster back online by starting the cluster in System Console. The following section describes this operation.


How to start the cluster

1 In System Console, choose Cluster Configuration from the side menu, as shown in Figure 2-8.

Figure 2-8 Choosing Cluster Configuration

2 Left-click the arrowhead icon next to the gear icon to access the Manage Cluster menu, and choose Start Cluster, as shown in Figure 2-9.

Figure 2-9 Starting the cluster

3 From the Manage Cluster menu, choose Refresh, as shown in Figure 2-10.

Figure 2-10 Choosing Refresh to refresh the cluster status

When all the services appear in green, the cluster is online, as shown in Figure 2-11.


Figure 2-11 The cluster is online

Testing the new database

Log in to BIRT iHub Visualization Platform as the default user, Administrator, and verify that Visualization Platform is behaving as expected. For example:

■ Run a design and view the generated document. For more information, see Using Information Console.

■ Create a user and assign privileges on a file to the user. For more information, see Managing Volumes and Users.

Backing up BIRT iHub system and volume metadata

The schemas containing cluster configuration data and volume metadata in a pre-existing, third-party database are critical components of BIRT iHub System. To guard against data loss, the database administrator must back up the schemas using the tools and resources of the pre-existing, third-party database system.

A BIRT iHub system administrator must take all necessary precautions to ensure that the schemas are properly backed up to safeguard the metadata. Please consult Actuate Support if you have any questions about these backup procedures to protect against the possibility of catastrophic failure. For information on the recommended procedures to back up a BIRT iHub system and volume schemas in the BIRT iHub environment, see Chapter 10, “Backing up BIRT iHub System,” later in this book.


Chapter 3

Configuring BIRT iHub to use an alternative database on a Linux platform

This chapter contains the following topics:

■ Switching to a pre-existing database for storing volume metadata

■ Preparing to switch databases

■ Using the cluster database switcher tool

■ Verifying the new database connection

■ Bringing the cluster online

■ Testing the new database

■ Backing up BIRT iHub system and volume metadata


Switching to a pre-existing database for storing volume metadata

The BIRT iHub Visualization Platform installation program installs a PostgreSQL database to contain volume metadata such as users, user groups, and privileges. You can alternatively store volume metadata in a pre-existing PostgreSQL or Oracle database. The topics in this chapter explain the operations you perform in sequence to switch from the installed PostgreSQL database to a pre-existing PostgreSQL or Oracle database in a Linux environment.

Preparing to switch databases

Before performing the switch database operation, you must perform the following tasks:

■ Locate several folders in the BIRT iHub installation.

■ Take the BIRT iHub cluster offline.

This section describes these tasks.

Locating folders in the BIRT iHub installation

The tools folder is in the Visualization Platform home folder, and the Visualization Platform home folder is under the installation root folder. The following list identifies the locations of these folders in a typical Visualization Platform installation.

■ The installation root folder: The folder where Visualization Platform 3 was installed. It contains a folder named modules. In a typical Linux installation, the default path of the root folder is either:

/opt/actuate/BIRTiHubVisualization

or:

/opt/actuate3/BIRTiHubVisualization

■ The Visualization Platform home folder: The home folder in your existing installation is named iHub. This folder is in the BIRTiHub folder, which the modules folder contains. In a typical Linux installation, the default path of the home folder is either:

/opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub


or:

/opt/actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub

■ The iHub tools folder: Contents of this folder include build and product migration tools. In a typical Linux installation, the default path of the tools folder is either:

/opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub/tools

or:

/opt/actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/tools

Taking the cluster offline

Before performing this operation, you must have created a cluster. To create a cluster, choose Create Cluster.

Before switching databases, you must take the BIRT iHub cluster offline by stopping the cluster in System Console. The following section describes this operation.

How to stop the cluster

1 Log in to System Console as the default system administrator.

2 Choose Clusters, as shown in Figure 3-1.

Figure 3-1 Choosing Clusters

3 Left-click the arrowhead icon next to the cluster and choose Edit, as shown in Figure 3-2.

Figure 3-2 Choosing to edit the cluster

4 Left-click the arrowhead icon next to the gear icon to access the Manage Cluster menu, and choose Stop Cluster, as shown in Figure 3-3.


Figure 3-3 Stopping the cluster

5 From the Manage Cluster menu, choose Refresh, as shown in Figure 3-4.

Figure 3-4 Choosing Refresh to refresh the cluster status

When all the services appear in red, the cluster is offline, as shown in Figure 3-5.

Figure 3-5 The cluster is offline

Using the cluster database switcher tool

The cluster database switcher tool performs all the operations necessary to switch from the default PostgreSQL database to a pre-existing PostgreSQL or Oracle database. The tool performs the following operations:


■ Connects to both the default PostgreSQL database and to the pre-existing database.

■ Creates an application user and schema in the pre-existing database. Creates multiple schemas if your Visualization Platform installation uses more than one volume.

■ Copies iHub configuration data and volume metadata from the default PostgreSQL database to the pre-existing database.

■ Updates the database connection settings in the iHub configuration file, acserverconfig.xml.

To use the cluster database switcher tool, you perform the following tasks:

■ Create a properties file. The properties in this file enable the tool to connect to the pre-existing database and to create the application user and schema or schemas. You pass the properties file to the tool when you run it.

■ Run the tool. You navigate to the folder containing the cluster database switcher tool script and run it.

Creating a properties file for the Cluster Database Switcher

Using a text editor, create a properties file named switch_cluster_database.properties. You can create the file in any folder that is convenient. For example, you can create the properties file in the folder containing the cluster database switcher tool. By default, this is the /bin folder inside the iHub tools folder:

/opt/actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/tools/bin

The properties file you create is slightly different for each of the following scenarios:

■ When switching to a pre-existing Oracle database

■ When switching to a pre-existing PostgreSQL database

■ When switching to either database type and you have more than one volume

The following sections describe the details for creating a properties file for each of these scenarios.

When you create the properties file, you must provide database entity names, including names for databases, schemas, tablespaces, and users. Database entity names are subject to the following restrictions:


■ Database entity names must contain only 7-bit ASCII characters.

■ The name must begin with an ASCII letter and contain only ASCII letters, digits, and underscores.

■ An Oracle schema name cannot be longer than 30 characters.

Specifying properties for switching to an Oracle database

Table 3-1 describes the properties to use when switching to a pre-existing Oracle database. Properties having names that begin with TARGET_ pertain to the database to which you are switching. The tool automatically retrieves properties pertaining to the default PostgreSQL database from the iHub configuration file, acserverconfig.xml. An example properties file follows the table.

Table 3-1 Properties required for switching to an Oracle database

AC_SERVER_HOME: Path of the Visualization Platform home folder.

TARGET_DATABASE_TYPE: Type of relational database management system (RDBMS). Specify Oracle.

TARGET_ORACLE_TNS_NAMES_FILE: Full path to an Oracle TNS names file, named tnsnames.ora, which defines the net service for the Oracle database. BIRT iHub does not use an Oracle client. BIRT iHub only needs a valid TNS names file. If an Oracle client is not installed on the machine where Visualization Platform is installed, you can create a TNS names file to define the net service for the Oracle database. Actuate recommends placing the TNS names file in the BIRT iHub configuration folder, which contains acserverconfig.xml. By default, the path to the iHub configuration folder is <Visualization Platform home folder>/shared/config.

TARGET_DATABASE_NAME: Net service name for the Oracle database.

TARGET_DATABASE_ADMINISTRATOR: Name of a user that has administration privileges on the Oracle database, for example SYSTEM. Do not specify a user with SYSDBA privileges, for example SYS.

TARGET_CREATE_DATABASE_USER: Whether to create a database user. Valid values are true or false. Specify true.

TARGET_DATABASE_USER: User ID that BIRT iHub uses to connect to the Oracle database.

TARGET_CREATE_SCHEMA: Whether to create a schema. Valid values are true or false. Specify true.

TARGET_SCHEMA_NAME: Name of the schema to create for BIRT iHub tables.

TARGET_TABLESPACE_NAME: Name of the Oracle tablespace to use for the BIRT iHub tables and indexes.


Use the following example to create the properties file. In the properties file, paths must always use forward slashes (/).

AC_SERVER_HOME = /opt/actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub
TARGET_DATABASE_TYPE = Oracle
TARGET_ORACLE_TNS_NAMES_FILE = /opt/actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/shared/config/tnsnames.ora
TARGET_DATABASE_NAME = ABC123
TARGET_DATABASE_ADMINISTRATOR = SYSTEM
TARGET_CREATE_DATABASE_USER = true
TARGET_DATABASE_USER = ihub
TARGET_CREATE_SCHEMA = true
TARGET_SCHEMA_NAME = ac_volume
TARGET_TABLESPACE_NAME = USERS

Specifying properties for switching to a PostgreSQL database

Table 3-2 describes the properties to use when switching to a pre-existing PostgreSQL database. Properties having names that begin with TARGET_ pertain to the database to which you are switching. The tool automatically retrieves properties pertaining to the default PostgreSQL database from the iHub configuration file, acserverconfig.xml. An example properties file follows the table.

Table 3-2 Properties required for switching to a PostgreSQL database

AC_SERVER_HOME: Path to the Visualization Platform home folder.

TARGET_DATABASE_TYPE: Type of relational database management system (RDBMS). Specify PostgreSQL.

TARGET_DATABASE_HOST: Network host name of the database server.

TARGET_DATABASE_PORT: Network port for the database server.

TARGET_DATABASE_NAME: Name of the PostgreSQL database.

TARGET_DATABASE_ADMINISTRATOR: Name of a user that has administration privileges on the PostgreSQL database.

TARGET_CREATE_DATABASE_USER: Whether to create a database user. Valid values are true or false. Specify true.

TARGET_DATABASE_USER: User ID that BIRT iHub uses to connect to the PostgreSQL database.

TARGET_CREATE_SCHEMA: Whether to create a schema. Valid values are true or false. Specify true.

TARGET_SCHEMA_NAME: Name of the schema to create for BIRT iHub tables.


Use the following example to create the properties file. In the properties file, paths must always use forward slashes.

AC_SERVER_HOME = /opt/actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub
TARGET_DATABASE_TYPE = PostgreSQL
TARGET_DATABASE_HOST = foo.bar.com
TARGET_DATABASE_PORT = 5432
TARGET_DATABASE_NAME = ihub
TARGET_DATABASE_ADMINISTRATOR = postgres
TARGET_CREATE_DATABASE_USER = true
TARGET_DATABASE_USER = ihub
TARGET_CREATE_SCHEMA = true
TARGET_SCHEMA_NAME = ac_volume

Specifying properties when you have multiple volumes

If the cluster has a single volume, the metadata for the volume resides in the same schema as the cluster configuration data. If the cluster has more than one volume, the cluster configuration data resides in its own schema and the metadata for each volume resides in a separate schema per volume. The properties required when you have multiple volumes are listed in Table 3-3.

Table 3-3 Properties required when you have multiple volumes

TARGET_CLUSTER_SCHEMA_NAME: Name of the schema to create for the cluster configuration data.

TARGET_VOLUME_SCHEMA_NAMES: A list of volume name and schema name pairs. Each pair defines the schema name for a volume. A volume name must be separated from its schema name by a colon (:). Schema and volume name pairs must be separated by a pipe character (|). Use a backslash (\) to continue the list onto a new line.


Use the following example to add these two properties to a properties file.

TARGET_CLUSTER_SCHEMA_NAME = ac_cluster
TARGET_VOLUME_SCHEMA_NAMES = Volume One : ac_volume_01 |\
    Volume Two : ac_volume_02 |\
    Volume Three : ac_volume_03

Running the Cluster Database Switcher

The Cluster Database Switcher is a shell script named switch_cluster_database.sh. The script is in the /bin folder inside the iHub tools folder. The script takes a single command-line argument, the path to the properties file.

BIRT iHub connects to the metadata database using a database user name and password. When you create the properties file for the cluster database switcher script, you provide the name of the user that BIRT iHub uses to connect to the target database in the TARGET_DATABASE_USER property. You do not, however, include the password in the properties file for security reasons. When you run the script, the script pauses to prompt you for this and other passwords. When you enter the password, the script resumes. The default source database user password is postgres.

How to run the cluster database switcher

1 Navigate to the /bin folder inside the iHub tools folder. For example:

/opt/actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/tools/bin

2 Run the switch_cluster_database.sh file passing the name of the properties file as an argument. For example:

sh switch_cluster_database.sh switch_cluster_database.properties

Verifying the new database connection

You use System Console to verify the new database connection.

How to verify the new database connection

1 In System Console, select Metadata Database from the side menu, as shown in Figure 3-6.


Figure 3-6 Choosing Metadata Database

2 Choose Test Connection to test the connection between iHub and the new database, as shown in Figure 3-7.

Figure 3-7 Testing the database connection

Bringing the cluster online

After running the Cluster Database Switcher, you bring the BIRT iHub cluster back online by starting the cluster in System Console. The following section describes this operation.


How to start the cluster

1 In System Console, choose Cluster Configuration from the side menu, as shown in Figure 3-8.

Figure 3-8 Choosing Cluster Configuration

2 Left-click the arrowhead icon next to the gear icon to access the Manage Cluster menu, and choose Start Cluster, as shown in Figure 3-9.

Figure 3-9 Starting the cluster

3 From the Manage Cluster menu, choose Refresh, as shown in Figure 3-10.

Figure 3-10 Choosing Refresh to refresh the cluster status

When all the services appear in green, the cluster is online, as shown in Figure 3-11.


Figure 3-11 The cluster is online

Testing the new database

Log in to BIRT iHub Visualization Platform as the default user, Administrator, and verify that Visualization Platform is behaving as expected. For example:

■ Run a design and view the generated document. For more information, see Using Visualization Platform.

■ Create a user and assign privileges on a file to the user. For more information, see Managing Volumes and Users.

Backing up BIRT iHub system and volume metadata

The schemas containing cluster configuration data and volume metadata in a pre-existing, third-party database are critical components of BIRT iHub System. To guard against data loss, the database administrator must back up the schemas using the tools and resources of the pre-existing, third-party database system.

A BIRT iHub system administrator must take all necessary precautions to ensure that the schemas are properly backed up to safeguard the metadata. Please consult Actuate Support if you have any questions about these backup procedures to protect against the possibility of catastrophic failure. For information on the recommended procedures to back up a BIRT iHub system and volume schemas in the BIRT iHub environment, see Chapter 10, “Backing up BIRT iHub System,” later in this book.


Chapter 4

Configuring a cluster using shared files

This chapter contains the following topics:

■ About configuring a cluster using shared files

■ Changing default port numbers

■ Specifying an output document destination using a URL

■ Configuring custom events

■ Setting default values for notification e-mail properties

■ Working with the job completion notification e-mail message template

■ Stopping and starting iHub processes


About configuring a cluster using shared files

The topics in this chapter describe:

■ Cluster-level configuration tasks you can perform by setting properties in acserverconfig.xml

■ Customizing job completion notification e-mail using the job completion notification e-mail message template file

■ Stopping and starting iHub processes, which is commonly necessary when making changes to shared files such as acserverconfig.xml

As a best practice, create a copy of a file before you make any changes to it.

This chapter makes reference to the following variables:

■ AC_SERVER_HOME: Represents the BIRT iHub home folder. In a default BIRT iHub installation on Windows, where BIRT iHub was installed as an individual module, AC_SERVER_HOME represents the following path:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub

In a default BIRT iHub installation on Linux, where BIRT iHub was installed as an individual module, AC_SERVER_HOME represents the following path:

/opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub

■ AC_CONFIG_HOME: Represents the shared configuration directory. This directory contains resources that all cluster nodes share, such as acserverlicense.xml, the BIRT iHub license file, and acserverconfig.xml, the shared configuration file. For example, if the system administrator created a folder for the shared configuration directory named config_cluster, then in a default BIRT iHub installation on Windows, where BIRT iHub was installed as an individual module, AC_CONFIG_HOME represents the following path:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config_cluster

If the system administrator did not create a folder for the shared configuration directory, then in a default BIRT iHub installation on Windows, where BIRT iHub was installed as an individual module, AC_CONFIG_HOME represents the following path:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config

In a default BIRT iHub installation on Linux, where BIRT iHub was installed as an individual module, if the system administrator created a folder for the shared configuration directory named config_cluster, for example, AC_CONFIG_HOME represents the following path:

/opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub/shared/config_cluster

If the system administrator did not create a folder for the shared configuration directory, then in a default BIRT iHub installation on Linux, where BIRT iHub was installed as an individual module, AC_CONFIG_HOME represents the following path:

/opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub/shared/config

Changing default port numbers

Table 4-1 lists the ports for which you can change the value, if necessary, after installing BIRT iHub. These port numbers are defined in acserverconfig.xml, the configuration file that all nodes in a BIRT iHub cluster share.

Although the permissible range for these port numbers is 1-65535, a best practice is to use ports in the range 1024-65535. Linux does not allow non-root applications to open ports below 1024, and many ports below 1024 are reserved for common protocols such as FTP and e-mail.

How to change a port number

1 Stop iHub processes. For information on stopping and starting iHub processes for a cluster or for a standalone iHub installation, see “Stopping and starting iHub processes,” later in this chapter.

2 Open AC_CONFIG_HOME\acserverconfig.xml using a text editor.

3 Change any port number appearing in Table 4-1 as necessary. Save and exit.

Table 4-1 iHub ports appearing in acserverconfig.xml

AppContainerPort: Application container process listen port. Default 8700. Range 1 - 65535.

CustomEventServicePort: Custom Event Service port. Default 8700. Range 1 - 65535.

ProvisioningSOAPPort: Provisioning service port. Default 8010. Range 1 - 65535.

ProvisioningSOAPSSLPort: Provisioning service SSL port. Default 8011. Range 1 - 65535.

SOAPDispatchSOAPPort: Message Distribution service port. Default 8000. Range 1 - 65535.

SOAPDispatchSOAPSSLPort: Message Distribution service SSL port. Default 8001. Range 1 - 65535.


If you are changing the value of the message distribution service port, SOAPDispatchSOAPPort, perform the next step. Otherwise, skip the next step.

4 If you specify a new port number for SOAPDispatchSOAPPort in acserverconfig.xml, perform the following tasks on every node in the cluster.

1 Navigate to the following file:

AC_SERVER_HOME\web\iportal\WEB-INF\volumeProfile.xml

2 As a best practice, create a copy of volumeProfile.xml. Then, edit volumeProfile.xml, and specify the new port number in the URL that the <ServerUrl> element specifies. For example, in Listing 4-1, the port number in the URL that <ServerUrl> specifies has been changed from 8000 to 8200.

Listing 4-1 Changing the port number that <ServerUrl> specifies

...
<VolumeProfiles>
  ...
  <Profile>
    <Default>true</Default>
    <ProfileName>Default Volume</ProfileName>
    <RepositoryType>enterprise</RepositoryType>
    <ServerUrl>http://URUP:8200</ServerUrl>
    <Volume>Default Volume</Volume>
  </Profile>
</VolumeProfiles>

Save and exit.

5 Start iHub processes. For information on stopping and starting iHub processes for a cluster or for a standalone iHub installation, see “Stopping and starting iHub processes,” later in this chapter.

Specifying an output document destination using a URL

The iHubWebURL property in acserverconfig.xml supports automatically sending a copy of the output document that a scheduled job generates to a location you specify.

How to set the iHubWebURL property

1 Stop iHub processes. For information on stopping and starting iHub processes for a cluster or for a standalone iHub installation, see “Stopping and starting iHub processes,” later in this chapter. Wait until all services have stopped before going to the next step.

2 Open AC_CONFIG_HOME\acserverconfig.xml using a text editor. Find iHubWebURL. Specify the URL for the location where you want Visualization Platform to send a copy of the output document when the job generating the document completes. For example, in Listing 4-2, iHubWebURL specifies that Visualization Platform sends output documents to a cluster node named TUPO.

Listing 4-2 Setting the iHubWebURL property

<System ...
   SystemName="URUP"
   iHubWebURL="http://TUPO:8700/iportal"
   ... >
   ...
</System>

Save and exit the file.

3 Start iHub processes. For information on stopping and starting iHub processes for a cluster or for a standalone iHub installation, see “Stopping and starting iHub processes,” later in this chapter.

Configuring custom events

When a user needs to schedule a job based on a custom event, the administrator configures the following system properties in AC_CONFIG_HOME\acserverconfig.xml to set up a web service application for triggering the job to run (see the sketch after this list):

■ EventPollingInterval
Specifies the frequency in minutes that iHub checks for a system event. The default value is 5 minutes.

■ EventPollingDuration
Specifies the duration in minutes that BIRT iHub checks for an event. If the event does not occur within the allotted time, BIRT iHub marks it as expired. A user can customize this value when creating an event-driven schedule. This value applies to all types of system events. The default value is 300 minutes.

■ EventLagTime
Specifies the number of minutes of completed-job history that BIRT iHub scans to determine whether an event occurred. For example, if you submit an event-based schedule with the default event lag time, BIRT iHub checks the status of jobs for the previous 60 minutes. If the event occurred within the previous 60 minutes, it sets the event status to satisfied. The default value is 60 minutes.

■ EnableCustomEventService
A flag that enables BIRT iHub custom event processing for a scheduled job. If the value is true, the service is enabled. If you change the value to false, all the existing instances of scheduled jobs using the custom event fail. This configuration value also affects the EnableEventService property value in the Actuate IDAPI GetVolumeProperties response. The default value is True.

■ CustomEventServiceIPAddress
Specifies the server name or IP address where the custom event service resides. The default value is localhost.

■ CustomEventServicePort
Specifies a valid port number for the custom event service. iHub uses an application container to host web services applications. The default value is 8700.

■ CustomEventServiceContextString
Specifies the context string of the request URL for sending a message to the custom event service. The default value is /acevent/servlet/AxisServlet.
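The following sketch shows one way these properties can be set. It assumes they are attributes of the <System> element, as the notification properties in Listing 4-3 are; verify the exact placement against your own acserverconfig.xml before editing.

<System
   ...
   EnableCustomEventService="True"
   CustomEventServiceIPAddress="localhost"
   CustomEventServicePort="8700"
   CustomEventServiceContextString="/acevent/servlet/AxisServlet"
   EventPollingInterval="5"
   EventPollingDuration="300"
   EventLagTime="60"
   ...>
   ...
</System>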

Setting default values for notification e-mail properties

You can set properties in acserverconfig.xml, the shared configuration file, to:

■ Limit notification e-mail size and number of recipients

■ Specify text to use in the To: line of a notification e-mail if the To: line is empty

Configuring limits on notification e-mail size and number of recipients

Set values for the following properties to limit notification size and number of e-mail recipients:

■ MaxMailMessageSize
Specifies the maximum size of a notification e-mail, in kilobytes. The permissible range is 100 through 1048576. The default value is 5120. If the combined size of the e-mail message text and attachment exceeds the specified value, BIRT iHub does not send the e-mail. The diagnostic log in AC_SERVER_HOME\data\server\log contains a record for a failed e-mail notification attempt.

■ MaxMailRecipients
Specifies the maximum number of e-mail addresses to which iHub can address a single e-mail message. If the number of e-mail recipients exceeds the value of this property, BIRT iHub divides the list into smaller lists and sends the same e-mail message to each of the smaller lists. For example, with the default value, a notification addressed to 25,000 recipients is sent as three separate messages. The permissible range is 100 through 100000. The default value is 10000.

Creating a dummy To: line for a notification e-mail

Set values for the following properties to specify whether BIRT iHub uses text you specify in an empty notification e-mail To: line, and to specify the text itself:

■ UseDummyToLine
Indicates whether to use the value of the DummyToLine property in the To: line of a notification e-mail. Possible values are True or False.

■ DummyToLine
Specifies the text to use in the To: line of a notification e-mail if the To: line is empty.

How to set e-mail notification properties in acserverconfig.xml

1 Stop iHub processes. For information on stopping and starting iHub processes for a cluster or for a standalone iHub installation, see “Stopping and starting iHub processes,” later in this chapter. Wait until all services have stopped before going to the next step.

2 Open AC_CONFIG_HOME\acserverconfig.xml, using a text editor.

3 Add the e-mail notification properties you want to use to the <System> element, as shown in Listing 4-3.

Listing 4-3 Configuring e-mail notification properties

<System KeyAlias="birtihub"
   ...
   DummyToLine="(names withheld)"
   UseDummyToLine="True"
   MaxMailRecipients="10000"
   MaxMailMessageSize="5120">
   <UsageAndErrorLogging/>
   <SMTPServers>
   ...
   </SMTPServers>
</System>

4 Start iHub processes. For information on stopping and starting iHub processes for a cluster or for a standalone iHub installation, see “Stopping and starting iHub processes,” later in this chapter.


Working with the job completion notification e-mail message template

BIRT iHub uses an XML file as a template for creating the e-mail notification of a completed job. You can customize the job completion notification e-mail using the XML file, acnotification.xml, as shown in Listing 4-4. AC_CONFIG_HOME contains this file.

The acnotification.xml file uses UTF-8 encoding.

Listing 4-4 Viewing acnotification.xml

<?xml version="1.0" encoding="UTF-8"?><notificationTemplate version="1.0">

<successMessage><subject>Actuate iHub Notification</subject><body>

<!-- Body Text Begin -->Actuate iHub - Report <insert variable="jobType"/> complete

For Information Console:Report: <insert variable="reportLink"/>

If the URL above does not work, copy the entire link and paste it into the address bar of your web browser, then press Enter or Return.

Completed at: <insert variable="jobCompletion"></insert>

Note: If the job submitter requested that you receive the report as an attachment to this email, but the report is not attached, then you might not have the privileges required to view the entire report. Please contact your system administrator.

<!-- Body Text End --></body>

</successMessage><failureMessage>

<subject>Actuate iHub Notification</subject><body>

<!-- Body Text Begin -->Actuate iHub - Report <insert variable="jobType"/> failed.

For Information Console:Report: <insert variable="reportLink"/>

If the URL above does not work, copy the entire link and paste it into the address bar of your web browser, then press Enter or Return.

Page 61: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

C h a p t e r 4 , C o n f i g u r i n g a c l u s t e r u s i n g s h a r e d f i l e s 59

Completed at: <insert variable="jobCompletion"></insert>

Warning/Error:<insert variable="jobDetailedStatus"/>

<!-- Body Text End --></body>

</failureMessage></notificationTemplate>

Listing 4-5 shows the text of an e-mail message that uses the default acnotification.xml file.

Listing 4-5 Viewing a job completion e-mail notification

Actuate iHub - Report Execution complete

For Information Console:
Report: http://HULU:8700/iportal/iv?__report=/Home/administrator/Sales%20and%20Profit%20Analysis%20by%20Country.RPTDOCUMENT%3B3&volume=Default%20Volume&repositoryType=enterprise

If the URL above does not work, copy the entire link and paste it into the address bar of your web browser, then press Enter or Return.

Completed at: Monday, May 04, 2015 1:47:31 PM Pacific Standard Time

Note: If the job submitter requested that you receive the report as an attachment to this email, but the report is not attached, then you might not have the privileges required to view the entire report. Please contact your system administrator.

About e-mail template elements and attributes

Table 4-2 describes the e-mail template elements and attributes.

Table 4-2  E-mail template elements and attributes

notificationTemplate element
   Required root element of the e-mail notification template.
version attribute
   Required notification template attribute. Specifies the version number of the notification template file.
failureMessage element
   Parent element of the subject and body elements for a failed job e-mail notification.
successMessage element
   Parent element of the subject and body elements for a successful job e-mail notification.
subject element
   Element that specifies the content of the subject line.
body element
   Element that specifies the content of the e-mail body.
email-content-type attribute
   Optional body element attribute that specifies the type of the body content.
insert element
   Optional element that inserts job and document information in subject or body message content.
variable attribute
   Required insert element attribute that specifies the information to insert in the e-mail subject or body.


About variable attribute values

Table 4-3 describes the valid variable attribute values.

Table 4-3  Values for the variable attribute

jobCompletion
   Date and time of job completion.
jobDetailedStatus
   Detailed status of the job, taken from the job status page.
jobHeadline
   Job headline.
jobName
   Job name.
jobStatus
   Status of the job, either completed or failed.
jobSubmitter
   Job submitter user name.
jobType
   Type of job, either Execution or Printing.
reportDocumentName
   Document name. Available for a successful job.
reportDocumentVersionName
   Document version name. Available for a successful job.
reportDocumentVersionNumber
   Document version number. Available for a successful job.
reportLink
   Hyperlink to access a document in the volume. For a failed job, the link accesses the job status page.


The following example uses the insert element reportLink variable to display the URL for the document:

Report: <insert variable="reportLink"/>
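Insert elements can also be combined in the subject element. The following illustrative sketch builds a subject line from the jobName and jobStatus variables in Table 4-3; it is not part of the default template.

<subject>Actuate iHub: <insert variable="jobName"/> <insert variable="jobStatus"/></subject>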

Using HTML in the e-mail template

To use HTML in the successMessage or failureMessage element, set the email-content-type attribute to text/html:

<body email-content-type="text/html">

Enclose the HTML in CDATA sections. Place insert elements outside the CDATA sections. The following example shows a successMessage element with HTML formatting:

<?xml version="1.0" encoding="UTF-8" ?> <notificationTemplate>

<successMessage> <subject>

Report Delivery Notification: <insert variable="jobHeadline"/>

</subject><body email-content-type="text/html">

<![CDATA[ <html><body><h2>

]]> <insert variable="jobHeadline"/> <![CDATA[

</h2>]]> Version <insert variable="reportDocumentVersionNumber"/> of report <insert variable="reportDocumentName"/> is now available online.<![CDATA[

<a href="]]> <insert variable="reportLink"/> <![CDATA[

reportLink Hyperlink to access a document in the volume. For a failed job, the link accesses the job status page.

Table 4-3 Values for variable attribute (continued)

Variable attribute value Description

Page 64: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

62 B I R T i H u b S y s t e m A d m i n i s t r a t i o n G u i d e

">Go to Report</a><p><table border="1pt;"><tr><td>Report Submitter: </td><td>

]]> <insert variable="jobSubmitter"/> <![CDATA[

</td></tr><tr><td>Report Generation Date: </td><td>

]]> <insert variable="jobCompletion"/> <![CDATA[

</td></tr></table><br><br>

</body></html>]]>

</body></successMessage>

If your e-mail server supports HTML, the message header and body look similar to Figure 4-1.

Figure 4-1  Viewing the message and body of an HTML e-mail

   From: Actuate iHub
   Sent: Monday November 1, 2014 11:46 AM
   Subject: Report Delivery Notification: New office roster

   New office roster
   Version 2 of office-info.rptdocument is now available online. Go to report
   Report submitter: Administrator
   Report Generation Date: Nov 1, 2014 11:43:50 AM Pacific Standard Time

Stopping and starting iHub processes

You stop and start iHub processes when performing tasks such as switching to a different database for storing volume metadata, applying a new license, or modifying certain properties in acserverconfig.xml, the shared configuration file. If you want to modify acserverconfig.xml, for example, you stop the processes that iHub runs, modify and save acserverconfig.xml, then start the iHub processes again.

This section describes how to stop and start iHub processes for a cluster, and for a standalone iHub installation on a single machine.

If the task for which you are stopping and starting iHub processes is for a cluster, follow the instructions in “Stopping and starting the cluster.” If the machine on which you are performing a task does not belong to a cluster, follow the instructions in “Stopping and starting iHub processes for a standalone iHub installation on a Windows machine,” or “Stopping and starting iHub processes for a standalone iHub installation on a Linux machine.”

Stopping and starting the cluster

In System Console, you stop and start the cluster, which stops and starts iHub processes on all cluster nodes.

Editing the cluster

Whether starting or stopping the cluster, you begin by editing the cluster. The following section describes this operation.

How to edit the cluster

1 Log in to System Console.

2 Choose Clusters, as shown in Figure 4-2.

Figure 4-2 Choosing Clusters

3 Left-click the arrowhead icon next to a cluster and choose Edit, as shown in Figure 4-3.

Figure 4-3 Choosing to edit a cluster

Cluster Configuration appears. Figure 4-4 shows a cluster named Company, containing one node, kozu.


Figure 4-4 Editing a cluster

Stopping the cluster

Stop the cluster by taking the cluster offline in System Console. The following section describes this operation.

How to stop the cluster

1 On Cluster Configuration, left-click the arrowhead icon next to the gear icon to access the Manage Cluster menu, and choose Stop Cluster, as shown in Figure 4-5.

Figure 4-5 Stopping the cluster

2 From the Manage Cluster menu, choose Refresh, as shown in Figure 4-6.


Figure 4-6 Choosing Refresh to refresh the cluster status

When all the services appear in red, the cluster is offline, as shown in Figure 4-7.

Figure 4-7 The cluster is offline

Starting the cluster

Start the cluster by bringing the cluster online in System Console. The following section describes this operation.

How to start the cluster

1 On Cluster Configuration, left-click the arrowhead icon next to the gear icon to access the Manage Cluster menu, and choose Start Cluster, as shown in Figure 4-8.


Figure 4-8 Starting the cluster

2 From the Manage Cluster menu, choose Refresh, as shown in Figure 4-9.

Figure 4-9 Choosing Refresh to refresh the cluster status

When all the services appear in green, the cluster is online, as shown in Figure 4-10.

Figure 4-10 The cluster is online

Stopping and starting iHub processes for a standalone iHub installation on a Windows machine

On a Windows machine, the process for stopping and starting iHub processes differs, depending on whether you installed iHub to run as Windows services. The following procedures describe how to stop and start iHub processes when iHub is running as Windows services, and when iHub is not running as Windows services.

How to stop and start the iHub processes when iHub was installed as Windows Services

1 Open Windows Services by choosing Windows Start➛Control Panel➛Administrative Tools➛Services.

2 Stop iHub processes by stopping the iHub service. Right-click the Actuate iHub 3.1 Service and choose Stop.

3 Start iHub processes by starting the iHub service. Right-click the Actuate iHub 3.1 Service and choose Start.

How to stop and start iHub processes when iHub was not installed as Windows services

1 Stop the iHub processes by entering s in the Actuate BIRT iHub3 command prompt and pressing Enter, as shown in Figure 4-11. The command stops iHub processes but does not stop the out-of-the-box (OOTB) PostgreSQL database.

Figure 4-11 Stopping iHub

2 Start iHub by double-clicking the Start iHub Platform icon on the desktop.

Stopping and starting iHub processes for a standalone iHub installation on a Linux machine

In a standalone iHub installation on a Linux machine, if you want to modify acserverconfig.xml for example, you stop iHub, modify and save acserverconfig.xml, then start iHub again.

How to stop and start iHub on a Linux machine

1 Navigate to <installation directory>/modules/BIRTiHub.

2 Shut down iHub by executing the stopiHub.sh script using the following command:

sh ./stopiHub.sh

3 Start iHub by executing the startiHub.sh script using the following command:

sh ./startiHub.sh
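Putting the steps together, a typical acserverconfig.xml edit on a standalone Linux installation looks like the following sketch. The path to the shared configuration file mirrors the default Windows layout and is an assumption; adjust it and the editor to your installation.

cd <installation directory>/modules/BIRTiHub
sh ./stopiHub.sh
# back up the shared configuration file before editing (path is an assumption)
cp iHub/shared/config/acserverconfig.xml iHub/shared/config/acserverconfig.xml.bak
vi iHub/shared/config/acserverconfig.xml
sh ./startiHub.sh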


Chapter 5
Configuring BIRT iHub security

This chapter discusses the following topics:

■ Securing data in a BIRT iHub volume

■ BIRT iHub secure communications

■ Using SSL

■ Using a Java policy file to control access to protected resources


Securing data in a BIRT iHub volume

All files stored in a BIRT iHub volume are subject to a standard security procedure, which restricts file access to authorized users. The iHub security model is based on user groups and privileges. The iHub administrator creates groups for various job functions in an organization, such as finance, marketing, and sales. The privileges, or permissions, to perform certain operations, such as read, write, and execute, are assigned to individual users and to user groups. Administrators assign users to groups. Through these groups, users acquire the privileges to perform particular operations on folders and files.

With this level of security, each user has access to files and folders on a need-to-know basis. For security at a more detailed level, BIRT iHub provides the following types of security:

■ Page-level security, which controls user access to particular sections or pages in a report. This security feature requires the Page Level Security option on iHub. To access pages of the published report, a user requires the Secure Read privilege. Read privilege on the report provides access to the entire document.

■ Data security, which controls user access to a particular set of data provided by a BIRT data object. This security feature is part of the core iHub functionality. To access data in the published data object, a user requires the Secure Read privilege. Read privilege on the data object provides access to the entire data object.

The security procedure that manages users and their access to files and folders in an iHub volume is implemented using one of BIRT iHub’s user management solutions. Page-level security and data security, however, require implementation in BIRT Designer Professional in addition to the licensed options for BIRT iHub.

BIRT iHub secure communications

BIRT iHub secures data in transit to prevent unauthorized third parties from accessing information passing between iHub services and between the iHub server and the end user. Data in transit uses the following security protocols:

■ HyperText Transfer Protocol Secure (HTTPS) is used for communication between the user’s web browser and both the Administration Console and BIRT iHub Visualization Platform services.

■ Secure Socket Layer (SSL) for communications between:

■ The BIRT iHub server processes and the JDBC data sources

■ The BIRT iHub server processes and the metadata database


■ Security Assertion Markup Language (SAML) provides a secure means of authenticating and authorizing a user to access a volume.

Understanding HTTPS

HyperText Transfer Protocol Secure (HTTPS) is a protocol used to enable secure communication over a computer network. When HTTPS is at the beginning of a URL web address such as https://www.actuate.com, the web browser attempts to activate an encrypted connection using the Secure Sockets Layer (SSL).

A server requires two keys and one certificate to build the HTTPS connection. These establish an SSL handshake to encrypt the data between the client and the server. HTTPS also encrypts the URL’s query parameters, headers, and cookies.

After the client completes the SSL handshake, the web browser securely uses the same features available to an HTTP connection, such as hyperlinks and JavaScript.

Understanding SSL

Secure Sockets Layer (SSL) is a protocol for securely exchanging information, such as passwords, credit card numbers, and medical or other private data, over a computer network. SSL provides the means to:

■ Encrypt data exchanged between two parties.

■ Verify the identity of the server and optionally the client.

■ Verify the integrity of the message, that no tampering of the data occurred.

SSL uses the following components:

■ Private key
Available only to the owner of the private key. Used to decrypt a message encrypted by the public key.

■ Public key
Available to the public. Used to encrypt a message sent to the owner of the private key.

■ Digital certificate
Contains information such as the certificate issuer, certificate expiration date, information about the certificate owner, and the certificate owner’s public key. The certificate is signed by a certificate authority, using a digital signature.

■ Certification authority (CA)
A trusted agency that validates information about an entity such as an online business, and then signs a digital certificate for the entity to use.


BIRT iHub SSL communication process

A client uses a web browser to access a server hosting Visualization Platform. The secured connection starts when a client visits the SSL-secured server using a URL web address beginning with HTTPS. The client receives the server’s digital certificate identifying the server and including the server’s public key. The client web browser checks the following characteristics of the certificate:

■ That the domain in the certificate matches the domain of the server

■ That the certificate is trusted or signed by a trusted certificate authority

■ That the certificate is not expired

■ That the certificate is not revoked

■ That the encryption cipher chosen by the server is supported

After accepting the server’s certificate, the client uses the public key from the server’s certificate to encrypt a message and then sends the message to the server. The message requests that the server generate a session key, also known as a shared secret key. At the same time, the client uses the data in the message to generate the same session key that the client expects the server to generate.

The server uses its private key to decrypt the message from the client. Then, the server uses the data in the message to generate a session key, identical to the session key the client generated. The client and the server use the generated session key to encrypt data, decrypt data, and verify data integrity using checksums.

The server sends a message encrypted using the generated session key back to the client. This message completes the SSL handshake and confirms that data travels securely between both sides of the connection.

BIRT iHub SSL support

BIRT iHub enables SSL by default, specifying port 8001 in the acpmdconfig.xml file. BIRT iHub stores SSL certificates in the AC_SERVER_HOME/shared/config/credentials folder.

BIRT iHub implements 2048-bit RSA private key encryption. Actuate supports the following secure protocol configurations:

■ SSL v3 and TLS v1.0

■ TLS v1.1 and TLS v1.2

BIRT iHub disables SSL V2, client-initiated renegotiation for ihubd, and TLS compression.
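To check which protocol and cipher a running iHub negotiates, you can use the openssl command-line tool, which the BIRT iHub installation includes. The following sketch assumes a server named urup and the default SSL port 8001; the -tls1_2 option requires OpenSSL 1.0.1 or later. The output includes the server certificate chain and the negotiated protocol and cipher.

openssl s_client -connect urup:8001
openssl s_client -connect urup:8001 -tls1_2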


Understanding SAML

Security Assertion Markup Language (SAML) is an open standard that provides a secure means of authenticating a user and authorizing the user’s access to resources on the internet. SAML eliminates the need to present an authentication credential, such as a password, to every internet resource the user accesses. The following entities participate in SAML-based communication:

■ User
The party accessing a resource on the internet. The user has an account with the identity provider.

■ Service provider
The application, resource, or service the user wants to access.

■ Identity provider (IdP)
The entity that authenticates the user to the service provider. The identity provider maintains a list of users.

SAML communication process

A SAML-enabled service provider communicates with the SAML-enabled identity provider to authenticate the user in the following way:

The user attempts to access a service provider. The service provider contacts the identity provider. The identity provider authenticates the user, sending an assertion, or message containing information about the user, to the service provider. The service provider checks that the assertion is valid, and then allows the user access.

SAML-based user-authentication activity is transparent to the user. Any service provider that can communicate with an identity provider with which the user has an account can obtain user authentication and grant access to the user. The user authenticates once, with the identity provider. Then, the user can access all the service providers that communicate with the identity provider.

BIRT iHub SAML support

System Console, Visualization Platform, and BIRT Analytics use SAML to authenticate and authorize users.

BIRT iHub provides its own SAML IdP implementation, which is a customized implementation of Shibboleth IdP 2.4.0 OpenSAML API supporting SAML 2.0. This implementation uses the default authentication method, which specifies PasswordProtectedTransport and PreviousSession. BIRT iHub does not support other third-party SAML identity provider software.


Using SSL

Using an SSL certificate with BIRT iHub enables verification of the identity of the BIRT iHub server and encryption of data sent between the web browser and BIRT Visualization Platform. The certificate installed with BIRT iHub is self-signed and is for demonstration purposes only. A self-signed certificate is not signed by a Certification Authority (CA). A CA verifies that a certificate is valid and that no tampering has occurred. Using this demonstration SSL certificate shows a warning in the web browser. Use the self-signed certificate to test the creation of an SSL-based connection between the web browser and the BIRT iHub server.

Using self-signed certificates during server testing is common practice for web developers. To test a secure SSL connection, generate a root certificate that signs and validates the self-signed certificate. This root certificate can then be installed and trusted in web browsers that connect to BIRT iHub. Root certificates from many certificate authorities are preinstalled in operating systems. These certificate authorities offer temporary SSL certificates that can also be used for testing SSL data security.

An SSL certificate has the following general characteristics:

■ Domain name, such as actuate.com. The name confirms the server is associated with the domain name of the web site.

■ Expiration date. After this expiration date, the certificate will not be trusted.

■ Certificate authority signature. The certificate authority distributes a public root certificate that, when trusted, can validate an SSL certificate. Most commercial certificate authorities distribute a public root certificate to computer operating systems. Check that this is the case with your certificate authority.

■ The server’s public key, used to send encrypted information to the server.

Using SSL with IDAPI

The Actuate Information Delivery application programming interface (IDAPI) supports integrating and administering BIRT iHub using Extensible Markup Language (XML) and the Simple Object Access Protocol (SOAP). Using the IDAPI, developers create applications that perform such tasks as uploading and downloading files, generating a document and scheduling document generation, sending an e-mail notification when a job completes, and using external libraries.

By default, BIRT iHub supports SSL-secured SOAP services on port 8001. This port number is set in the acserverconfig.xml file in the SOAPDispatchService element. In a default Windows installation, the location of this file is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config


The default values for the port numbers defined in the SOAPDispatchService are:

<SOAPDispatchService EnableRequestService="true"
   ProvisioningSOAPPort="8010"
   SOAPDispatchSOAPPort="8000"
   ProvisioningSOAPSSLPort="8011"
   SOAPDispatchSOAPSSLPort="8001"/>

After enabling SSL for the Visualization Platform, test the SSL-secured SOAP port using a URL of the following format:

https://<servername>:8001/wsdl

For example, for a server named urup, use the following URL:

https://urup:8001/wsdl

This request asks which Web Service Description Language (WSDL) utilities are available. The response is a list of available SOAP APIs and their implementations, as shown in Figure 5-1. The green padlock symbol in the browser address field confirms that SSL security is enabled.

Figure 5-1 WSDL utility secured with SSL
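The same check can be made from a command line if the curl tool is available on the machine. The following sketch assumes a server named urup; the -k option tells curl to skip certificate verification, which is useful while the self-signed demonstration certificate is still installed.

curl -k https://urup:8001/wsdl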

For more information about using IDAPI, see Integrating Applications into BIRT iHub.

Using SSL with JSAPI

The Actuate JavaScript API (JSAPI) is a set of JavaScript classes that support authenticating users, connecting to data sources, interacting with the user, generating reports, and viewing reports. These classes support using the HTTPS protocol and SSL security.



The JSAPI library is available from any iHub Information Console client installation or Actuate BIRT Deployment Kit. The URL for the library is:

http://127.0.0.1:8700/iportal/jsapi

■ 127.0.0.1:8700 is the host name and TCP port for an available Actuate web application host.

■ /iportal is the context root for the web application.

■ /jsapi is the default location of the JSAPI libraries.

A script tag in an HTML page loads the JSAPI library, as shown in the following code:

<script type="text/javascript" src="http://127.0.0.1:8700/iportal/jsapi">

</script>

After enabling SSL for the Visualization Platform, access the JSAPI library securely using the following URL:

https://127.0.0.1:8701/iportal/jsapi

The following code uses HTTPS in the script tag that loads the JSAPI library:

<script type="text/javascript" src="https://127.0.0.1:8701/iportal/jsapi">

</script>

Using SSL and external user management

A BIRT iHub system using external tools to manage Visualization Platform users supports connecting to a Lightweight Directory Access Protocol (LDAP) or Active Directory server using SSL. By default, BIRT iHub only connects to an LDAP or Active Directory server that has a signed certificate. To connect to a server that does not have a signed certificate, use the Java keytool utility to add that certificate as a trusted certificate. For information on using the Java keytool utility, see the following URL:

http://docs.oracle.com/javase/6/docs/technotes/tools/windows/keytool.html
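As a sketch only, the following keytool command imports an LDAP server certificate into a Java trust store. The certificate file name, the alias, the trust store path under the iHub JDK, and the default store password changeit are all assumptions; confirm which trust store the iHub JVM actually reads before importing.

keytool -importcert -trustcacerts -alias ldapserver -file ldapserver.crt -keystore "C:\Actuate3\BIRTiHubVisualization\modules\JDK64\jre\lib\security\cacerts" -storepass changeit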

Using SSL with System Console

You can use SSL to secure the communication between a client web browser and System Console. System Console runs as a web application under the Tomcat servlet container. Tomcat installs with System Console.

This section refers to a variable named TOMCAT_HOME, which represents the path of the folder into which Tomcat installs when you install System Console. In a default System Console installation on Windows, the location of TOMCAT_HOME is:

C:\Actuate3\SystemConsole\modules\SystemConsole\tomcat

How to secure the communication between a web browser and System Console

1 Stop Tomcat by performing the following tasks:

1 Open a command prompt. Navigate to TOMCAT_HOME\bin.

2 Execute the shutdown script. For example, on a Windows machine, execute shutdown.bat.

2 Generate a certificate keystore, and configure TOMCAT_HOME\conf\server.xml as described in the Apache Tomcat SSL/TLS Configuration HOW-TO document, at the following location:

https://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html
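As a sketch, the following keytool command generates a keystore at the path that the <Connector> example below uses. The alias, key size, and validity period are assumptions; the Tomcat HOW-TO describes the available options in detail.

keytool -genkeypair -alias tomcat -keyalg RSA -keysize 2048 -validity 365 -keystore C:\Actuate3\SystemConsole\modules\SystemConsole\.keystore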

Listing 5-6 shows an example of the <Connector> element after modifying it in TOMCAT_HOME\conf\server.xml.

Listing 5-6  Modifying the <Connector> element

<Connector
   port="8743"
   scheme="https"
   protocol="org.apache.coyote.http11.Http11Protocol"
   secure="true"
   SSLEnabled="true"
   clientAuth="want"
   keystoreFile="C:\Actuate3\SystemConsole\modules\SystemConsole\.keystore"
   sslEnabledProtocols="TLSv1, TLSv1.1, TLSv1.2"/>

3 Start Tomcat by executing the startup script in TOMCAT_HOME\bin. For example, on a Windows machine, execute startup.bat.

Using SSL with Visualization Platform

Use SSL to validate the identity of the BIRT iHub Visualization Platform server and to encrypt the data between the client web browser and the BIRT iHub server. To use SSL with the BIRT Visualization Platform, disable SAML and the Message Distribution service (MDS) in the web.xml file located at \iHub\web\iportal\WEB-INF\. In a default Windows installation, this location is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\web\iportal\WEB-INF

Change the following values and restart the Actuate iHub 3.1 Service:


■ Set the SAMLEntityID parameter to an empty value. For example:

<context-param>
   <description>The SP ID for SAML SSO</description>
   <param-name>SAMLEntityID</param-name>
   <param-value></param-value>
</context-param>

■ Set the Message Distribution service MDS_ENABLED parameter to false. For example:

<context-param>
   <!-- true or false: Enable or disable MDS -->
   <param-name>MDS_ENABLED</param-name>
   <param-value>false</param-value>
</context-param>

After restarting the Actuate iHub 3.1 Service, access the Visualization Platform using a URL of the following format:

https://<servername>:8701/iportal/

For example, for a server named urup, use the following URL:

https://urup:8701/iportal/

Figure 5-2 shows the secured SSL connection to the Visualization Platform using HTTPS. The default certificate included with the installation of Visualization Platform is not signed by a certification authority and the browser identifies it as not trusted. When you use your own signed and trusted SSL certificate, the web browser trusts your certificate.

For testing purposes, install and trust the self-signed demonstration certificate included in the default installation of Visualization Platform. This certificate is birtihub.crt located in the following folder:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config\credentials

Each operating system has a different method to install a trusted certificate. For example, in Windows, install this certificate into the Trusted Root Certification Authorities certificate store. Figure 5-3 shows the same web URL as Figure 5-2 after setting the demonstration certificate to be trusted.


Figure 5-2  Using HTTPS to access Information Console (the red padlock icon shows that the SSL certificate is not trusted)

Figure 5-3  Using HTTPS with a trusted certificate (the green padlock icon shows that the SSL certificate is trusted)


How to install and trust the demonstration SSL certificate on Windows

This example shows how to install your own root certificate for testing purposes. This procedure applies to browsers other than Mozilla Firefox. Firefox uses a different mechanism to trust certificates. Refer to the Firefox documentation to set up a trusted certificate on Firefox.

1 Using Windows Explorer, navigate to the following folder:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config\credentials

2 Open the birtihub.crt file. Certificate—General appears, as shown in Figure 5-4.

Figure 5-4  Opening an untrusted root certificate (the red cross on the icon shows that the SSL certificate is not trusted)

3 Choose Install Certificate. Certificate Import Wizard appears, as shown in Figure 5-5.


Figure 5-5 Installing a root certificate

4 Choose Next. Certificate Store appears.

5 Enable Place all certificates in the following store. Choose Browse. Select Certificate Store appears.

6 Select Trusted Root Certification Authorities, as shown in Figure 5-6. Choose OK.

Figure 5-6 Selecting a store to install the root certificate

7 In Certificate Store, choose Next, as shown in Figure 5-7.


Figure 5-7 Selecting where to install the root certificate

In Completing the Certificate Import Wizard, choose Finish, as shown in Figure 5-8.

Figure 5-8 Finishing the installation of the root certificate

If you receive a security warning asking if you want to install this certificate, choose Yes.

When you receive an alert that the import was successful, choose OK.


Choose OK to close Certificate—General.

How to verify that the HTTPS connection is trusted

This example shows how to verify that the HTTPS connection to Information Console is trusted. This procedure applies to browsers other than Mozilla Firefox. Firefox uses a different mechanism to check trusted certificates. Refer to the Firefox documentation to check the HTTPS connection on Firefox.

1 Using Windows Explorer, navigate to the following folder:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config\credentials

2 Open the birtihub.crt file. The certificate should look similar to Figure 5-9.

Figure 5-9  Verifying the installation of the root certificate (no red cross on the icon indicates that the SSL certificate is trusted)

3 Choose Certification Path. Verify that the certificate status is OK, as shown in Figure 5-10.


Figure 5-10 Verifying the certificate status

Choose OK.

4 Open a web browser such as Google Chrome. Type a URL of the following format, replacing servername with the name of your server. Do not use localhost as the name of the server.

https://servername:8701/iportal/

Information Console appears, using HTTPS.

5 Choose View site information. Choose Connection, as shown in Figure 5-11.


Figure 5-11  Verifying a secured SSL connection to Information Console (the green padlock icon shows that the SSL certificate is trusted)

Using SSL for communication with the volume metadata database

You can encrypt the connection from BIRT iHub to the volume metadata database. By default, BIRT iHub uses a PostgreSQL database to contain volume metadata. In System Console, Metadata Database displays the properties for this PostgreSQL database. The database type for this database is PostgreSQL. Figure 5-12 shows the following properties for this database, installed on a machine named urup. An asterisk (*) next to the property name means the property is required.

■ Database server
The host name of the machine containing the database.

■ Database port
The default port number for the default PostgreSQL database is 8432.

■ Database name
The name of the database. The default database name is ihub.

■ Encryption Method
One of the following methods:


■ requestSSL
BIRT iHub encrypts the login request and data using SSL. If the database server does not support SSL, the driver establishes an unencrypted channel.

■ SSL
BIRT iHub performs SSL certificate verification.

■ noEncryption
The channel between BIRT iHub and the metadata database passes unencrypted data.

■ Username
The database user name. The default user name is ihub.

■ Password
The database user name password. The default password of the ihub user is postgres.

■ Test Connection
Choose to verify that the BIRT iHub system can successfully connect to the metadata database.

Figure 5-12 Viewing OOTB PostgreSQL metadata database properties

Page 89: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

C h a p t e r 5 , C o n f i g u r i n g B I R T i H u b s e c u r i t y 87

The PostgreSQL data folder is the location of the certificate and keys used by the PostgreSQL server. If you change the certificate or keys, restart the PostgreSQL server. The SSL files for a default PostgreSQL database are in the following folder:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\data\postgresql\data

Test the SSL connection of the PostgreSQL database using the PostgreSQL interactive terminal (psql) command. This command is located in the \postgresql\bin folder of the iHub installation. The default location of this software is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\postgresql\bin

See the documentation at the following URL for more information about configuring and securing a PostgreSQL database:

http://www.postgresql.org/docs/

How to verify that a PostgreSQL server supports an SSL connection

The following example shows how to use the Windows command prompt to check whether an SSL connection to a PostgreSQL database is possible. This example connects you to the default PostgreSQL server installed with iHub. Use the same computer as the PostgreSQL server. This server has a user name and password with the value postgres and a database named ihub. If either the user name or password of your PostgreSQL server has changed, use the current user name and password.

1 In a command window, navigate to the \postgresql\bin folder of the iHub installation. The default location is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\postgresql\bin

2 Type the following command. Then press Enter:

psql postgresql://postgres@localhost:8432/ihub?sslmode=require

3 When prompted for a password, type the password for the postgres user. In this example, the password is postgres. Then press Enter. You should receive the following response.

psql (9.2.4)
WARNING: Console code page (437) differs from Windows code page (1252)
         8-bit characters might not work correctly. See psql reference
         page "Notes for Windows users" for details.
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.

ihub=#


4 Type \q and press Enter to quit the terminal. You can see in the connection information that an SSL connection is established.

Managing SSL files

The SSL certificates and keys used to secure BIRT iHub and the Visualization Platform are located in the \shared\config\credentials folder in the BIRT iHub installation folder. The default location is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config\credentials

This location contains the iHub’s digital certificate in the Privacy Enhanced Mail (PEM) format and the Java KeyStore (JKS) file, which is a repository of security certificates. BIRT iHub is configured to use these certificates in the following files:

■ The acpmdconfig.xml file located in the iHub \etc\ folder. The default location is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\etc

The following settings in the acpmdconfig.xml file point to the PEM files.

<EnableSSLEngine>true</EnableSSLEngine>
<SSLCertificateFile>
   $AC_CONFIG_HOME$/credentials/birtihub_nopassphrase.pem
</SSLCertificateFile>
<SSLCertificateKeyFile>
   $AC_CONFIG_HOME$/credentials/birtihub_nopassphrase.pem
</SSLCertificateKeyFile>
<SSLRootCertificateFile>
   $AC_CONFIG_HOME$/credentials/birtihub_nopassphrase.pem
</SSLRootCertificateFile>
<SSLCipherSuite>ALL:!ADH:!EDH</SSLCipherSuite>

■ The acserverconfig.xml file located in the iHub \shared\config folder. The default location is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config

The following settings in the acserverconfig.xml file point to the JKS file.

<System KeyAlias="birtihub" ... KeystoreFile="$AC_CONFIG_HOME$/credentials/birtihub.jks"

KeystorePass="!1!MsGLAyDce0TZhxvh1xDrTkG0Ea6hTslzaidAvxx5pfK!"

...


You can change the SSL key alias and keystore file. You can also update the keystore password, which the KeystorePass property specifies. By default, this password is:

birtihub

If you change the JKS keystore password, use the update_keystore_password utility to update the password in acserverconfig.xml. See “Updating the SSL keystore password in BIRT iHub,” later in this chapter.

If you change these SSL files, you must restart the Actuate iHub 3.1 Windows service. If the machine on which you make a change is a node in a cluster, stop and start the cluster instead. For information on this task, see “Stopping and starting iHub processes,” in Chapter 4, “Configuring a cluster using shared files.”

You can use the Java keytool utility to view and create SSL certificates and keys. This utility is located in the \bin folder of the BIRT iHub installation of the Java SE Development Kit (JDK). The default location of the JDK is:

C:\Actuate3\BIRTiHubVisualization\modules\JDK64\bin

How to use the Java keytool utility to view the contents of the JKS file

BIRT iHub generates sample SSL certificates that securely connect a web browser to BIRT iHub. To use SSL security in a production environment, you must replace these SSL certificates with certificates signed by a Certification Authority.

1 In a command window, navigate to the \credentials folder. The default location is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config\credentials

2 Type the following command and press Enter:

keytool -list -v -keystore birtihub.jks -storepass birtihub

Information similar to the following example appears:

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: birtihub
Creation date: Mar 24, 2014
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=CH-IHUBTRAINING, OU=admin@localhost, O=Actuate, C=US, ST=CA
Issuer: CN=CH-IHUBTRAINING, OU=admin@localhost, O=Actuate, C=US, ST=CA
Serial number: 1ca757fc
Valid from: Mon Mar 24 09:14:30 PDT 2014 until: Thu Mar 21 09:14:30 PDT 2024
Certificate fingerprints:
   MD5:  90:15:F7:79:FB:0F:23:7E:BF:4C:CE:C3:FA:8A:84:91
   SHA1: E8:A8:2C:14:74:97:61:F2:F3:74:82:34:B3:AC:F0:A4:D7:4C:BA:0F
   Signature algorithm name: SHA256withRSA
   Version: 3

Extensions:

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 66 8E E8 FC DF D8 6E 48 22 CD 61 E1 3E DB 58 90  f.....nH".a.>.X.
0010: CD CC 6D F9                                        ..m.
]
]

**************************************************

Updating the SSL keystore password in BIRT iHub

The shared configuration file, acserverconfig.xml, contains the properties that specify the JKS keystore and its password. These properties appear as attributes of the <System> element in acserverconfig.xml, as shown in the following example:

<System
   ...
   KeystoreFile="$AC_CONFIG_HOME$/credentials/birtihub.jks"
   KeystorePass="!0!birtihub!"
   ...
</System>

If you change the password for the JKS keystore, you can update the KeystorePass property with the new password by running the update_keystore_password utility. For example, if you change the KeystoreFile property to specify a different JKS keystore, you run the utility to update the KeystorePass property with the new JKS keystore password.

This utility performs the following tasks:


■ Creates a backup copy of the acserverconfig.xml file

■ Encrypts the password you pass to the utility when you run it

■ Updates the KeystorePass property in acserverconfig.xml with the new encrypted password

You pass the new password to the utility on the command line when you execute it.

How to run the update_keystore_password utility

1 Create a new password for the keystore.

2 Stop iHub processes. If the machine on which you are updating acserverconfig.xml belongs to a cluster, stop the cluster instead. For information on stopping and starting a cluster, see “Stopping and starting iHub processes,” in Chapter 4, “Configuring a cluster using shared files.”

3 Open a command prompt. Set an environment variable named AC_CONFIG_HOME. On Windows for example, execute the following command:

set AC_CONFIG_HOME=<path of shared configuration directory>

Using the default BIRT iHub shared configuration directory, the command is:

set AC_CONFIG_HOME=C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config

4 Set an environment variable named AC_SERVER_HOME. On Windows for example, execute the following command:

set AC_SERVER_HOME=<path of BIRT iHub home directory>

Using the default BIRT iHub home directory, the command is:

set AC_SERVER_HOME=C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub

5 Navigate to the AC_SERVER_HOME\tools\bin folder. Execute the following command:

update_keystore_password.bat <new JKS keystore password>

On a Linux system, the command is:

sh ./update_keystore_password.sh <new JKS keystore password>

Listing 5-1 shows the output of update_keystore_password.bat when passing the new JKS keystore password to the utility in the command.

Listing 5-1 Running update_keystore_password.bat with password

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\tools\bin>update_keystore_password.bat myNewPassword
System Administrator can use this utility to update keystore password in acserverconfig.xml file
This utility does following operations;
1. Make a backup copy of acserverconfig.xml file.
2. Encrypt the password.
3. Update acserverconfig.xml file with new encrypted password.
-------------------------------------------------------------
Updating Keystore password in ihub configuration file.....

Keystore password successfully updated in server configuration file.

Done.....

If you do not pass the new JKS keystore password in the command when executing the utility, the utility prompts you for the password, as shown in Listing 5-2.

Listing 5-2 Running update_keystore_password.bat without password

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\tools\bin>update_keystore_password.bat
System Administrator can use this utility to update keystore password in acserverconfig.xml file
This utility does following operations.
1. Make a backup copy of acserverconfig.xml file.
2. Encrypt the password.
3. Update acserverconfig.xml file with new encrypted password.
-------------------------------------------------------------
Please provide required keystore password.
Usage: update_keystore_password.bat <password>

Execute the utility again, passing the new JKS keystore password to the utility, as shown in Listing 5-1.

6 Start iHub processes. For information on this task, see “Stopping and starting iHub processes,” in Chapter 4, “Configuring a cluster using shared files.”

Using a commercial SSL certificate

The keystore in a default BIRT iHub installation contains a self-signed certificate. If you want to use a certificate issued by a certificate authority (CA), you must replace the BIRT iHub default keystore with your own keystore, containing the CA-issued certificate.

To replace the keystore, BIRT iHub must be installed, and you must have the following items:

■ The private key file used to generate the certificate signing request (CSR)


■ The CA-issued site certificate

■ The CA-issued root certificate

■ Intermediate chain certificates, if any

■ A Java Development Kit (JDK), for the keytool application

■ The openssl application, which is included with BIRT iHub

How to replace the default BIRT iHub keystore

1 Verify that all certificates are in the PEM format by viewing each certificate file in a text editor. The certificate should have the following characteristics, as shown in Listing 5-3:

■ Certificate begins with -----BEGIN CERTIFICATE-----

■ Certificate ends with -----END CERTIFICATE-----

■ ASCII characters represent the content between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----

Listing 5-3 Viewing a certificate in PEM format

-----BEGIN CERTIFICATE-----
...E4MjBaMFUxCzAJBgNVBAgTA...
-----END CERTIFICATE-----

2 Merge the certificates by concatenating them in the following order, which is the order in which BIRT iHub expects them. Maintain this order even if you do not have all of the following certificates. For example, if you do not have any intermediate certificates, merge the site certificate followed by the root certificate. An example command appears after this list.

■ Site certificate

■ Intermediate certificate

■ Root certificate

Note that the way certificate authorities issue certificates varies by vendor. For more information, consult your CA's documentation or support.
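
As an example of the merge step, assuming the individual certificate files are named site_crt.pem, intermediate_crt.pem, and root_crt.pem (hypothetical names), you could concatenate them on Windows with:

copy /b site_crt.pem + intermediate_crt.pem + root_crt.pem ca_issued_combined_crt.pem

or on Linux with:

cat site_crt.pem intermediate_crt.pem root_crt.pem > ca_issued_combined_crt.pem

The /b option prevents the copy command from appending an end-of-file character to the merged file.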

3 Convert the merged certificate and the private key to PKCS#12 format by using the openssl application in BIRT iHub. For example, on a Windows machine, perform the following tasks:

1 Open a command prompt. Navigate to AC_SERVER_HOME\bin, where AC_SERVER_HOME represents the BIRT iHub home directory. For example:

cd C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\bin


2 Set the OPENSSL_CONF environment variable:

set OPENSSL_CONF=C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\bin\openssl.cfg

3 Execute the following command:

openssl pkcs12 -export -inkey private_key.pem -in ca_issued_combined_crt.pem -out key_and_crt.p12 -name somealias

Enter a password when prompted by the openssl application. The value of the -name option, somealias in this example, becomes the alias for the key when the key is stored in the keystore.

4 Create a Java keystore from the .p12 file you created in the previous step by using the Java keytool. Execute the following command:

keytool -v -importkeystore -srckeystore key_and_crt.p12 -srcstoretype PKCS12 -destkeystore mykeystore.jks -deststoretype JKS

Enter a password when prompted by the Java keytool. The tool prompts you twice: first for the new password for the keystore that the -destkeystore option specifies, and then for the password that protects the .p12 file. The two passwords you specify must be the same.
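
To verify that the key was imported under the expected alias, you can list the contents of the new keystore, for example:

keytool -list -v -keystore mykeystore.jks

The keytool prompts for the keystore password and then displays the somealias entry along with its certificate chain.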

5 Stop iHub processes. For information on this task, see “Stopping and starting iHub processes,” in Chapter 4, “Configuring a cluster using shared files.”

6 Open the shared configuration file, acserverconfig.xml, using a text editor. In a default BIRT iHub installation on a Windows machine, the location of this file is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config

In acserverconfig.xml, perform the following tasks:

1 Set the value of the KeyAlias property to the value of the -name option you specified in the command in step 3.3, for example, somealias.

2 Set the value of the KeystoreFile property to the location of the new .jks file you created in step 4, for example, mykeystore.jks.

3 Set the value of the KeystorePass property to the password you specified when you ran the java keytool in step 4, using the format !0!password!. For example, if the password you specified was oranges, then specify !0!oranges! as the value for KeystorePass.

Listing 5-4 shows these properties in acserverconfig.xml.

Listing 5-4 Setting the KeyAlias, KeystoreFile, and KeystorePass properties

<Config>


<System
   KeyAlias="somealias"
   ...
   KeystoreFile="$AC_CONFIG_HOME$/credentials/mykeystore.jks"
   KeystorePass="!0!oranges!"
   ...
</System>
...
</Config>

7 Start iHub processes. For information on performing this task, see “Stopping and starting iHub processes,” in Chapter 4, “Configuring a cluster using shared files.”

8 To confirm that the new certificate is properly applied, connect to the HTTPS URL of Information Console, https://localhost:8701/iportal, in a browser and verify that the browser receives the new certificate.
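
You can also inspect the certificate that iHub serves from the command line using the openssl application, for example:

openssl s_client -connect localhost:8701 -showcerts

The output includes the certificate chain; its subject and issuer should match the CA-issued certificate you imported.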

Using a Java policy file to control access to protected resources

By default, a BIRT design cannot obtain permission to perform a potentially unsafe operation, such as opening an internet connection. If the administrator needs to further restrict the operations that a BIRT design has permission to perform, the administrator can implement restrictions using a Java policy file. A volume implements a java.security.Policy object, which obtains permission information from a Java policy file. This section describes how to use a policy file.

If the administrator wants to keep a set of restrictions in place, but the design developer needs to work around a restriction, the developer can implement a Java-based event handler, which the administrator places in a .jar file in the AC_SERVER_HOME\resources folder. BIRT iHub places no restriction on the actions that a program residing in AC_SERVER_HOME\resources may perform. For example, if a report developer wants to create a BIRT design that obtains customer-specific information from a public entity, the developer can create an event handler that calls an application that obtains the customer information. The administrator places the event handler in AC_SERVER_HOME\resources. When the BIRT design needs to load the customer information, the design calls the event handler, which in turn calls the application that retrieves the customer information.

This section makes reference to the AC_CONFIG_HOME variable, which represents the shared configuration directory that all nodes in a cluster access. In a default BIRT iHub installation on Windows, performed using the installer, in which the install folder is C:\Actuate, AC_CONFIG_HOME represents the following path:

C:\Actuate\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config

In a cluster consisting of two or more nodes, AC_CONFIG_HOME represents the shared configuration directory the system administrator created before adding the second node to a cluster.

How to create and enable a policy file

1 In System Console—Clusters, edit the cluster. Then, from the Manage Cluster menu, choose Stop Cluster to stop the cluster. Wait until all services stop before going to the next step.

2 Using Windows Explorer, navigate to AC_CONFIG_HOME.

3 Open acserverconfig.xml in a text editor.

4 Edit each <ServerResourceGroupSetting> element, adding the -Djavaserver.security argument to the StartArguments attribute definition, as shown in the following code sample:

StartArguments="-Djavaserver.security -Xmx512M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server”

The following example shows the <ServerResourceGroupSetting> element for the Default BIRT Online resource group in the small template after adding the -Djavaserver.security argument to the StartArguments attribute definition:

<ServerResourceGroupSetting
   Name="Default BIRT Online"
   Activate="TRUE"
   MinFactory="1"
   StartArguments="-Djavaserver.security -Xmx512M
   -XX:MaxPermSize=256m -XX:-UsePerfData
   -Djava.awt.headless=true
   -Djava.net.preferIPv4Stack=true
   -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol
   com.actuate.javaserver.Server"/>

Save and exit acserverconfig.xml.

5 In AC_CONFIG_HOME, create a new file named javaserver.policy. You can create this file manually, or use the Policy Tool utility to create it. Launch the Policy Tool by executing AC_JAVA_HOME\bin\policytool.exe in a command prompt. In a default BIRT iHub installation on Windows, performed using the installer, in which the install folder is C:\Actuate, AC_JAVA_HOME represents the following path:

C:\Actuate\BIRTiHubVisualization\modules\JDK64

Instructions for using the Policy Tool utility can be found at the following location:

http://docs.oracle.com/javase/7/docs/technotes/guides/security/PolicyGuide.html

Listing 5-5 shows example policy file content:

Listing 5-5 Viewing example policy file content

grant codeBase "file:/${AC_SERVER_HOME}/Jar/-" {
   permission java.security.AllPermission;
};
grant codeBase "file:/${AC_SERVER_HOME}/MyClasses/-" {
   permission java.security.AllPermission;
};
grant codeBase "file:/${AC_SERVER_HOME}/drivers/-" {
   permission java.security.AllPermission;
};
grant codeBase "file:/${AC_SERVER_HOME}/oda/-" {
   permission java.security.AllPermission;
};
grant codeBase "file:/${AC_SERVER_HOME}/reportengines/-" {
   permission java.security.AllPermission;
};
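
Listing 5-5 grants AllPermission to the listed locations. To restrict a location instead, you can grant only the specific permissions a design needs. The following grant is a sketch, not part of the default file; the host name, port, and folder are hypothetical and must be adjusted for your environment:

grant codeBase "file:/${AC_SERVER_HOME}/MyClasses/-" {
   // allow outbound HTTPS connections to one hypothetical host only
   permission java.net.SocketPermission "api.example.com:443", "connect,resolve";
   // allow read access to a hypothetical data folder
   permission java.io.FilePermission "${AC_SERVER_HOME}/temp/-", "read";
};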

6 Using Windows Explorer, navigate to AC_JRE_HOME\lib\security. In a default BIRT iHub installation on Windows, performed using the installer, in which the install folder is C:\Actuate, AC_JRE_HOME represents the following path:

C:\Actuate\BIRTiHubVisualization\modules\JDK64\jre

Create a copy of the java.security file, then open java.security using a text editor.

7 Search for policy.url. By default, java.security contains two policy.url.n properties, where n is a number, as shown in Listing 5-6.

Listing 5-6 Viewing policy.url.n properties in java.security

# The default is to have a single system-wide policy file,
# and a policy file in the user's home directory.
policy.url.1=file:${java.home}/lib/security/java.policy
policy.url.2=file:${user.home}/.java.policy


Add another policy.url.n property, specifying the location of javaserver.policy, as shown in the example in Listing 5-7.

Listing 5-7 Specifying the location of javaserver.policy in java.security

# The default is to have a single system-wide policy file,
# and a policy file in the user's home directory.
policy.url.1=file:${java.home}/lib/security/java.policy
policy.url.2=file:${user.home}/.java.policy
policy.url.3=file:C:/Actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub/shared/config/javaserver.policy

Save and exit java.security.

8 In System Console—Clusters, edit the cluster. Then, from the Manage Cluster menu, choose Start Cluster to start the cluster.


Part 2
BIRT iHub System Console

■ Understanding System Console

■ Managing clusters

■ Managing system administrators



Chapter 6
Understanding System Console

This chapter discusses the following topics:

■ About System Console

■ Viewing clusters, nodes, and system administrators

■ Logging in to System Console

■ Using the System Console configuration user interface

■ About Monitoring


About System Console

System Console provides a single graphical user interface (GUI) that an administrator uses to create and manage the resources used by the Actuate applications for an entity, such as a corporation or corporate division.

A system administrator creates a cluster for the entity in System Console. A cluster supports creating a scalable BIRT iHub system consisting of two or more machines, or cluster nodes, each running BIRT iHub using the same configuration details.

When setting up a cluster, the administrator defines and configures resources such as:

■ Cluster nodes

■ Volumes the cluster uses

■ BIRT iHub license

■ Storage space for volume data on disk

In addition to creating and managing clusters, a system administrator uses System Console to perform the following operations:

■ Create alerts.
System Console monitors a range of activity, conditions, and resources in a BIRT iHub System. An attribute identifies a monitored item. The system administrator can set a threshold on an attribute. When the monitored item reaches the threshold, for example, when the days until the product license expires reaches a specified number, System Console generates an alert.

■ Create other system administrators.
A system administrator can create other system administrators.

■ Configure security.
The system administrator selects where user authentication and authorization information are stored.

System Console contains the following features:

■ Monitoring
Monitor cluster activity.

■ Clusters
Create, update, and delete clusters. View logs and detailed information on system resource usage.


■ Settings
View system information, create System Administrator users, update the information for an existing System Administrator, and specify an e-mail server.

Viewing clusters, nodes, and system administrators

System Console displays a list for each of the following System Console data items:

■ Alerts

■ Clusters

■ Volumes

For each list, the system administrator can view the entire list, or a subset. By default, System Console displays the entire list.

The system administrator can filter each list to display only those items whose searched property contains a string the administrator specifies. The Clusters list can alternatively be filtered to display only starred items.

Table 6-1 shows the list type, the list item property on which System Console performs a string search, and whether System Console filters the list type to show only monitored items.

Logging in to System Console

System Console is a standalone application, which the administrator logs into separately from a BIRT iHub application.

When logging in to System Console for the first time, you log in as the default system administrator, using a default password. The following section describes how to log in to System Console.

Table 6-1 Filtering System Console lists

System Console list   Property searched for specified string   Displays only monitored items?
Alerts                Cluster name                             N/A
Clusters              Cluster ID                               Yes
Volumes               Volume Name                              N/A


How to log in to System Console

1 Open a new browser window. In the address bar, type the following URL:

http://localhost:8500/sysconsole

2 On System Login, specify the following credentials, as shown in Figure 6-1:

■ Username: sysadmin

■ Password: system11

Figure 6-1 Logging in to System Console as default system administrator

Using the System Console configuration user interface

The System Console configuration user interface supports the following actions:

■ Separating System Console from the System Console metadata database
The name of the database System Console uses is umc. System Console uses umc to store metadata about objects System Console contains, for example, the name, ID, and description of each cluster, all volume names in System Console and the clusters to which the volumes belong, and the name and e-mail address of each system administrator. By default, the System Console install program installs and configures a PostgreSQL database for System Console on the same machine as System Console. The System Console configuration user interface supports System Console creating and using a PostgreSQL metadata database that resides on a different machine.

■ Updating the System Console metadata database
For example, if a database administrator changes the password for the umc database and System Console can no longer connect to umc, a system administrator can specify the new password in the System Console configuration user interface to re-establish connectivity.

How to log in to the System Console configuration user interface

1 Open a new browser window. In the address bar, type the following URL and press Enter:


http://localhost:8500/sysconsole/pages/config.jsp

2 On Login to Configuration Console, type the password for sysadmin, the default System Console administrator, as shown in Figure 6-2. The default password for sysadmin is system11.

Figure 6-2 Accessing the System Console configuration user interface

System Console appears, as shown in Figure 6-3.

How to create a PostgreSQL database for System Console

1 Log in to the System Console user interface, as described in “How to log in to the System Console configuration user interface,” earlier in this chapter.

2 On System Console, provide or accept the values for the following properties, as shown in Figure 6-3.

■ Database Connection Type
Type of database to connect to. Must be PostgreSQL.

■ Database Connection Name
Name of the database connection. Must be PostgreSQL.

■ Database Server Name
Name of the machine containing the database to which you want System Console to connect.


Figure 6-3 Specifying metadata database properties

■ Database Server Port
Number of the port you want System Console to use to connect to the database.

■ DBAdmin Database
Name of the administrative connection database. This is the database that System Console connects to initially, before System Console creates the umc database and schema.

■ DBAdmin Username
Name of the superuser that logs in to the DBAdmin Database.


■ DBAdmin Password
Password for the superuser that logs in to the DBAdmin Database.

■ Cluster DB name
Name of the System Console metadata database. System Console creates this database when you choose Create Database. Accept the name umc.

■ Cluster Schema name
Name of the schema containing the System Console metadata. System Console creates this schema when you choose Create Database. Accept the name umc.

■ Cluster Schema Password
Type a password for the Cluster schema.

■ Cluster User Name
Name of the user with which System Console logs in to the metadata database. Must be umc_user. This user reads and writes to the System Console metadata database.

■ Cluster User Password
Type a password for the Cluster user.

■ Email Server Type
Type a name for the mail server. When System Console creates the umc database, System Console sends an e-mail notifying the system administrator if you specify values for Email Server Type and Email Server.

■ Email Server
Type the IP address or fully qualified domain name of the mail server. For example, type mailserver.companydomain.com.

3 Choose Create Database.
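
After System Console reports success, a database administrator can optionally confirm that the umc database exists by listing the databases on the PostgreSQL server, for example (a sketch; substitute your own server name, port, and superuser):

psql -h dbserver.companydomain.com -p 5432 -U postgres -l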

How to update the System Console metadata database

1 Log in to the System Console user interface, as described in “How to log in to the System Console configuration user interface,” earlier in this chapter.

2 On System Console you can update any of the following properties for an existing System Console metadata database:

■ Database Server Port

■ Cluster User Password

■ Email Server Type

■ Email Server

3 Make any desired changes and choose Update, as shown in Figure 6-3.


About Monitoring

When the administrator logs in to System Console, System Console displays Monitoring. Monitoring consists of the following categories, which the administrator chooses from a side menu, as shown in Figure 6-4:

Figure 6-4 Monitoring menu options

■ Weekly Activity Report
Displays the status of any starred cluster for each of the previous seven days. To star a cluster, choose Clusters. Then, on Clusters, select the star next to a cluster for which you want to view a Weekly Activity Report, as shown in Figure 6-5.

Figure 6-5 Starring a cluster

■ Alerts
Displays an alert if the attribute that System Console is monitoring has reached the alert threshold. Alerts displays the following information for an alert:

■ Cluster
Name of the cluster for which System Console generated the alert

■ Attribute Name
Name of the attribute System Console is monitoring

■ Timestamp
Date and time at which System Console generated the alert


■ Condition
Specifies the threshold that, when reached, triggers an alert

■ Current Value
Current value of the attribute System Console is monitoring

■ Object Type
Type of object System Console is monitoring, for example, SERVER

■ Process
Name of BIRT iHub process, if applicable

■ Server
Machine name of the cluster node

■ Volume
Name of the volume, if applicable

■ Email Address
E-mail address to which iHub sends notification of an alert

■ Errors
Displays error messages a cluster generates. Errors displays the following information for an error:

■ Cluster
Name of the cluster in which the error occurred

■ Host Name
Machine name of the cluster node in which the error occurred

■ Time Stamp
Date and time of the error

■ PID
Process identification number

■ Application
Name of the application

■ Thread ID
Identification number of the thread that was active when the error occurred

■ Error Message
Error message the error generated

■ Error type
Severity of the error


Chapter 7
Managing clusters

This chapter contains the following topics:

■ About clusters

■ Creating and configuring a cluster

■ Editing an existing cluster

■ Managing a cluster node

■ Viewing the list of clusters

■ Deleting clusters

■ Viewing cluster resources

■ Understanding the monitor and agent services

■ Using the ControlConsole utility to free memory

■ BIRT iHub service and resource group properties

■ Exporting and importing a volume

■ Configuring an Apache web server for load balancing and proxying


About clusters

The system administrator creates, configures, and manages clusters. Creating a cluster helps a system administrator rapidly configure and monitor Actuate applications deployed for use by an entity such as a corporation or corporate division. A cluster consists of one or more computers, or cluster nodes, on which a BIRT iHub instance runs. The system administrator adds a node to a cluster to improve availability and throughput and to scale the installation to meet processing requirements.

A node configures itself using a template in acserverconfig.xml, which is located in a shared configuration directory along with the license file, acserverlicense.xml. All cluster nodes access the shared configuration directory. When a node joins a cluster, the node configures itself using its designated template. In addition to templates, the acserverconfig.xml file also contains other configuration parameters specifying the host names, volume names, port numbers, printers, and services used by nodes in the cluster.
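
The following heavily abridged sketch suggests how these pieces fit together in acserverconfig.xml, using only elements shown elsewhere in this guide; the real file contains many more attributes and elements, and their exact placement may differ:

<Config>
   <System
      ...
      KeystoreFile="$AC_CONFIG_HOME$/credentials/birtihub.jks"
      KeystorePass="!0!birtihub!"
      ...
   </System>
   <!-- resource group templates such as <ServerResourceGroupSetting .../>, plus
        host name, volume, port, printer, and service settings, appear elsewhere
        in the file; placement simplified here -->
   ...
</Config>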

In System Console, choose Clusters to perform the following tasks, as shown in Figure 7-1:

■ Create a cluster.

■ Edit an existing cluster.

■ View and filter the list of clusters.

■ Delete clusters.

■ Select only starred clusters.

■ Star or unstar clusters.

■ Add a volume.

■ View log files.

■ View resource usage information.


Figure 7-1 Viewing Clusters

The following sections describe how to perform these tasks.

Creating and configuring a cluster

The system administrator creates a cluster in System Console—Clusters. Then, the system administrator configures the cluster. The cluster must exist before the system administrator can perform any configuration tasks.

How to create a cluster

1 Choose Create Cluster, as shown in Figure 7-1.

2 In Create Cluster, set the following properties, as shown in Figure 7-2. A property name appearing with an asterisk (*) next to the name is a required property.

Figure 7-2 Creating a basic cluster


■ Name
Type a unique name for the cluster, such as the company name.

■ Description
Type a description for the cluster.

■ Cluster URL
Optionally, type a URL for the cluster. The cluster URL specifies the location of a BIRT iHub proxy that performs load balancing by distributing requests among nodes. The proxy can be a server or a third-party load-balancing mechanism. Apache and Nginx are examples of third-party load-balancing solution providers. For information about using an Apache web server for load balancing and as a proxy for a BIRT iHub cluster, see “Configuring an Apache web server for load balancing and proxying,” later in this chapter.

■ New Password
Type a new password. Actuate recommends creating a password at least eight characters long, containing at least one lowercase letter, one uppercase letter, and one digit.

■ Confirm New Password
Type the new password again.

Choose OK.

System Console creates the cluster and displays a message stating that the cluster has been created and that the system administrator must add a cluster node to make the cluster operational.

About the cluster configuration categories

After System Console creates the cluster, System Console displays Cluster Configuration, which includes a side menu containing the following cluster configuration categories, as shown in Figure 7-3. The system administrator specifies property settings for each category to configure the cluster.

■ Cluster Configuration
Add a cluster node to BIRT iHub System. Completing this task makes the cluster operational. The system administrator must complete this task before specifying any other property settings for the cluster.

■ Volumes
Add a volume to the cluster.

■ Metadata Database
Specify the type of relational database management system (RDBMS) the cluster uses, such as PostgreSQL or Oracle.


■ Alerts
Configure one or more alerts for the cluster. System Console monitors conditions and activity levels in the cluster. An alert is a notification triggered by a condition or an activity level crossing a particular threshold.

■ Single Sign-On
View or change SAML Identity Provider information for the cluster. View or change Service Provider information for the cluster. Add a Service Provider.

■ User Management
Specify settings for managing user authentication and authorization.

■ License
Update the license file for the cluster.

■ Configuration File
Update the shared configuration file for the cluster.

Figure 7-3 Viewing menu of cluster configuration categories

Understanding the default cluster

When you install System Console, a default cluster named DefaultCluster is created automatically. The configuration of the default cluster differs depending on which of the following installation scenarios applies:

■ You installed System Console individually, either on the same machine as BIRT iHub, or on a different machine. If you installed System Console individually, you must log in to System Console and add the machine on which you installed BIRT iHub as a node to the default cluster. For information about adding the first node to this cluster, see “How to add the first cluster node to a cluster,” later in this chapter.

■ You installed System Console and BIRT iHub using the install procedure that supports installing multiple BIRT iHub modules at the same time on the same machine. If you installed System Console and BIRT iHub using this install procedure, System Console automatically adds the machine on which you installed these modules as a node to the default cluster. For information about adding the second node to this cluster, see “How to add the second cluster node to a cluster and enable the default volume,” later in this chapter.

Adding cluster nodes to a cluster

This section demonstrates adding three nodes to a cluster named Company. The machine names of the first, second, and third nodes the system administrator adds to the cluster are urup, kozu, and tupo, respectively. System Console and BIRT iHub run on urup, which also contains the shared configuration directory that all nodes in the cluster access. The second and third nodes, kozu and tupo, each run a BIRT iHub instance but not a System Console instance.

This section references tasks described in “Preparing the iHub cluster environment,” later in this chapter, which also uses the same example machine names that this section uses.

How to add the first cluster node to a cluster

Before adding the first node, urup, to a cluster, the system administrator ensures that the logon account for the Actuate iHub service on the node has administrator privileges. See “Specifying a logon account for the Actuate iHub 3.1 service on a cluster node,” in “Preparing the iHub cluster environment,” later in this chapter.

After performing this task at the operating system level, the system administrator performs the following tasks in System Console:

1 On Cluster Configuration, choose Add Cluster node, as shown in Figure 7-4.

Figure 7-4 Choosing Add Cluster Node


2 On Add Cluster Node, set the following properties, as shown in Figure 7-5. A property name appearing with an asterisk (*) next to the name is a required property.

■ Host Name
Type the cluster node computer name.

■ Description
Type a description for the node.

Figure 7-5 Adding a cluster node

Choose OK.

System Console displays the following information about the cluster node, as shown in Figure 7-6:

■ Host Name
The machine name of the cluster node

■ Status
Status is either Running or Not Running

■ Services
The services running on the cluster node


Figure 7-6 Viewing cluster node host name, status, and services

How to add the second cluster node to a cluster and enable the default volume

Before adding the second node, kozu, to the cluster, the system administrator performs the following tasks:

■ On urup, the system administrator:

■ Creates a folder for the shared configuration directory and shares it.

■ Shares the folder containing the files for the out-of-the-box (OOTB) sample volume, Default Volume.

See “Creating the shared configuration directory” and “Sharing the folders that all cluster nodes access” in “Preparing the iHub cluster environment,” later in this chapter.

■ On both urup and kozu, the system administrator:

■ Turns off the firewall.

■ Obtains the machine name and IP address, and pings each machine from the other machine to ensure the machines can communicate.

See “Configuring two nodes to communicate with each other” in “Preparing the iHub cluster environment,” later in this chapter.

■ On kozu, the system administrator ensures that the logon account for the Actuate iHub service on the node has administrator privileges. See “Specifying a logon account for the Actuate iHub 3.1 service on a cluster node” in “Preparing the iHub cluster environment,” later in this chapter.

After performing these tasks at the operating system level, the system administrator performs the following tasks in System Console:

1 On Cluster Configuration, choose Add Cluster Node, as shown in Figure 7-6.

2 On Edit Configuration Home, in Enter the configuration path, type the path to the shared configuration directory, using UNC format, as shown in Figure 7-7.


UNC format enables all nodes in the cluster to find the shared configuration directory. The path you type is the path that appears as the Network Path in Properties—Sharing for the shared configuration directory. In this example, the shared configuration directory, config_cluster, is on a machine named URUP. Choose OK.

Figure 7-7 Specifying the path of the shared configuration directory

3 On Confirmation, choose OK to stop the services on the previously added cluster node, urup in this example, as shown in Figure 7-8.

Figure 7-8 Stopping the services on previously added cluster node

4 On Add Cluster Node, specify the machine name of the cluster node you are adding and optionally, a description, as shown in Figure 7-9. Choose OK.

Figure 7-9 Specifying name and description of node you are adding


5 On Confirmation, choose OK to stop the services on the node you are adding to the cluster, as shown in Figure 7-10.

Figure 7-10 Stopping the services on the node you are adding to the cluster

System Console adds the second node to the cluster, as shown in Figure 7-11. By default, the Monitor service runs only on the node having the shared configuration directory.

Figure 7-11 Viewing the second node added to the cluster

6 Choose Start Cluster from the Manage Cluster menu, as shown in Figure 7-12. Then, choose Refresh from this menu to update the status of the services during the Start Cluster operation.


Figure 7-12 Choosing to start the cluster

Wait until all services that are red turn green before proceeding to the next step, as shown in Figure 7-13.

Figure 7-13 Viewing the started services on both nodes

7 Choose Volumes from the side menu. Default Volume shows a status of ‘Error’. Left-click the arrowhead icon next to Default Volume and choose Disable, as shown in Figure 7-14.


Figure 7-14 Choosing to disable Default Volume

On Confirmation, choose OK to confirm that you want to disable the Default Volume.

8 On Volumes, left-click the arrowhead icon in the first Storage Status box for Default Volume and choose Set Read Only, as shown in Figure 7-15.

Figure 7-15 Choosing set Default Volume to Read Only

On Confirmation, choose OK to confirm that you want to change the status of the volume to Read Only.

9 On Volumes, left-click the arrowhead icon in the first Storage Status box for Default Volume and choose Edit, as shown in Figure 7-16.


Figure 7-16 Choosing to edit Default Volume storage

10 On Edit Storage, in Storage Location, type the path to the Default Volume storage folder, storage, using UNC format, as shown in Figure 7-17. UNC format enables all nodes in the cluster to find this folder. The path you type is the path that appears as the Network Path in Properties—Sharing for the storage folder after sharing it. In this example, the Default Volume storage folder is on a machine named URUP. Choose OK.

Figure 7-17 Specifying the Default Volume storage folder

11 On Volumes, left-click the arrowhead icon in the first Storage Status box for Default Volume and choose Set Read/Write, as shown in Figure 7-18.

Figure 7-18 Setting Default Volume to Read/Write status


On Confirmation, choose OK to confirm that you want to change the Default Volume state to Read/Write.

12 On Volumes, left-click the arrowhead icon next to Default Volume and choose Enable, as shown in Figure 7-19.

Figure 7-19 Enabling Default Volume

On Confirmation, choose OK to confirm that you want to enable Default Volume.

Default Volume is enabled and ready for use, as shown in Figure 7-20.

Figure 7-20 Viewing Enabled status of Default Volume

How to add a third or subsequent node

Before adding the third node, tupo, or any subsequent node, to the cluster, the system administrator performs the following tasks:

■ On tupo, the system administrator:


■ Turns off the firewall

■ Obtains the machine name and IP address

■ Ensures that the logon account for the Actuate iHub service on the node has administrator privileges

■ On both urup and tupo, the system administrator pings each machine from the other machine to ensure the machines can communicate.

For details on how to perform these tasks, see “Configuring two nodes to communicate with each other” and “Specifying a logon account for the Actuate iHub 3.1 service on a cluster node” in “Preparing the iHub cluster environment,” later in this chapter.

After performing these tasks at the operating system level, the system administrator performs the following tasks in System Console:

1 On Cluster Configuration, choose Add Cluster Node.

2 On Add Cluster Node, specify the machine name of the cluster node you are adding and optionally, a description, as shown in Figure 7-21. Choose OK.

Figure 7-21 Specifying name and description of node you are adding

3 On Confirmation, choose OK to confirm that you want to stop the services on tupo, as shown in Figure 7-22.

Figure 7-22 Stopping services on the third node

System Console adds the third node to the cluster, as shown in Figure 7-23. By default, the Monitor service runs only on the node having the shared configuration directory.


Figure 7-23 Viewing the third node added to the cluster

4 Left-click the arrowhead icon next to tupo and choose Start Node, as shown in Figure 7-24. Then, choose Refresh from the Manage Cluster menu to update the status of the services during the Start Node operation. When all the services that are red turn green, the node is ready for use.

Figure 7-24 Choosing to start the cluster


5 Choose Refresh from the Manage Clusters menu to update the status of the services during the Start Node operation, as shown in Figure 7-25. When the services appear green, the node is ready for use, as shown in Figure 7-26.

Figure 7-25 Refreshing the status of services on the third node

By default, the Monitor service runs only on the node containing the shared configuration directory, urup, in this example.

Figure 7-26 Viewing the running services in the cluster


Preparing the iHub cluster environment

The system administrator performs the following tasks to support clustering:

■ Creates the shared configuration directory

■ Shares the folders that all cluster nodes access

■ Configures two nodes to communicate with each other

■ Specifies a logon account for the Actuate iHub 3.1 service on a cluster node, if necessary

This section provides examples of these tasks in the Windows environment where System Console and BIRT iHub Visualization Platform were installed as individual modules.

The procedures in the “Adding cluster nodes to a cluster” section earlier in this chapter indicate when to perform a task described in this section.

AC_SHARED_HOME is a variable that represents the folder that contains the shared configuration directory, to which all nodes in a cluster share access. This section makes reference to the following AC_SHARED_HOME variable settings:

■ In a default BIRT iHub installation on Windows, where BIRT iHub Visualization Platform was installed as an individual module to a folder named C:\Actuate3\BIRTiHubVisualization, AC_SHARED_HOME represents the following path:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared

■ In a default BIRT iHub installation on Windows, where BIRT iHub Visualization Platform was installed at the same time as System Console, to a folder named C:\Actuate3, AC_SHARED_HOME represents the following path:

C:\Actuate3\iHub3\modules\BIRTiHub\iHub\shared

■ In a default BIRT iHub installation on Linux, where BIRT iHub Visualization Platform was installed as an individual module to a folder named /opt/actuate, AC_SHARED_HOME represents the following path:

/opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub/shared

■ In a default BIRT iHub installation on Linux, where BIRT iHub Visualization Platform was installed at the same time as System Console, to a folder named /opt/actuate, AC_SHARED_HOME represents the following path:

/opt/actuate/iHub3/modules/BIRTiHub/iHub/shared

Creating the shared configuration directory

The system administrator creates the folder for the shared configuration directory on urup before adding the second node to the cluster.


How to create the shared configuration directory

On urup, in AC_SHARED_HOME, create a new folder for the cluster to use as the shared configuration directory. For example, create a folder named config_cluster.
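
For example, in a default installation where BIRT iHub Visualization Platform was installed as an individual module, you could create the folder from a command prompt (a sketch; adjust the path to match your AC_SHARED_HOME):

mkdir C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config_cluster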

Sharing the folders that all cluster nodes access

In a BIRT iHub installation, cluster nodes must have read-write access to the following folders in AC_SHARED_HOME on urup:

■ config_cluster
The shared configuration directory. System Console populates this folder when the system administrator adds the second node to the cluster.

■ storage
Contains the data files for the sample volume, Default Volume.

The system administrator shares these folders before adding the second node to the cluster.

Note that you must share the folder containing the data files for any volume you add to the cluster. For more information, see “Adding a volume,” later in this chapter.

The following instructions provide a basic example of the operations required to configure network sharing. It is the responsibility of the system administrator performing this task to make sure that all settings conform to the security policies in force for the environment.

How to share the \config_cluster and \storage folders

To give a cluster node read-write access to these resources on urup, perform the following tasks:

1 Using Windows Explorer on urup, right-click the config_cluster folder, and choose Properties, as shown in Figure 7-27.


2 On config_cluster Properties, choose Sharing, as shown in Figure 7-28. On Sharing, choose Advanced Sharing.

Figure 7-27 Choosing Properties


Figure 7-28 Choosing Advanced Sharing

3 On Advanced Sharing, select Share this folder, as shown in Figure 7-29.

Figure 7-29 Selecting Share this folder

On Advanced Sharing, choose Permissions.

4 On Permissions for config_cluster, in Share Permissions, select Allow for Change and Read, as shown in Figure 7-30.

Choose OK.


Figure 7-30 Selecting Change and Read permission

On Advanced Sharing, choose OK.

On config_cluster Properties, take note of the Network Path, as shown in Figure 7-31. You specify this path when adding the node to the cluster in System Console. Choose Close.

Figure 7-31 Taking note of the Network Path


5 Repeat steps 1 through 4 for the storage folder that contains the sample volume files. Make sure that all settings conform to the security policies in force for the environment.

In step 4, take note of the Network Path appearing on storage Properties—Sharing. You specify this path when enabling Default Volume in System Console after adding the second node to the cluster.

Close Windows Explorer.
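
Administrators who prefer the command line can create equivalent shares with the net share command. The following commands are a sketch only; the grantee and paths must be adjusted to match the default install location and the security policies in force for the environment:

net share config_cluster=C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config_cluster /grant:Everyone,CHANGE
net share storage=C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\storage /grant:Everyone,CHANGE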

Configuring two nodes to communicate with each other

Before adding a node to a cluster, perform the following tasks to support communication between the node containing the shared configuration directory, for example node1, and the node you are going to add to the cluster, for example node2.

■ Turn off a Windows firewall.

■ Obtain the machine name and IP address of each machine.

■ Test the network connection between the two machines.

How to turn off a Windows firewall

Perform the following steps on both node1 and node2:

1 Choose Start➛Control Panel➛System and Security➛Windows Firewall.

2 On Windows Firewall, choose Turn Windows Firewall on or off. Make sure that the firewall settings conform to the security policies in force for the environment.

3 On Customize Settings, in Home or work (private) network location settings, choose Turn off Windows Firewall, as shown in Figure 7-32.

Choose OK.

How to display a computer’s IP address

To obtain the host name and IP address of node1 and of the computer on which you will install the cluster node, perform the following tasks on node1 and node2:

1 Choose Start➛Programs➛Accessories➛Command Prompt.

2 In Command Prompt, type the following command:

ipconfig /all

Press Enter. The host name appears, as shown in Figure 7-33. In this example, the host name for node1 is urup.


Figure 7-32 Turning off the home or work network location firewall

Figure 7-33 Displaying the host name

3 Write the host names and IP addresses of the computers to be clustered, as shown in Table 7-1.

Table 7-1 Host names and IP addresses of computers to be clustered

iHub    Host name   IP address
node1   urup        192.168.41.140
node2   kozu        192.168.41.138


How to test the connection between computers

Perform the following steps on both computers:

1 In Command Prompt, type the ping command followed by the IP address or host name of the other computer. For example, type the following command to ping a computer named kozu:

ping kozu

Press Enter.

If your computer reaches the other computer, Command Prompt displays a series of replies, as shown in Figure 7-34.

Figure 7-34 Receiving a reply to a ping command

2 Close Command Prompt.

Specifying a logon account for the Actuate iHub 3.1 service on a cluster node

Before adding the node to the cluster, the system administrator checks whether the Actuate iHub 3.1 Service Log On property specifies an account having administrator privileges. If it does not, the system administrator performs the following tasks on the node:

■ Stops the Actuate iHub 3.1 service

■ Specifies a logon account for the Actuate iHub 3.1 service that has administrator privileges

■ Restarts the Actuate iHub 3.1 service

How to check the Actuate iHub 3.1 Service Log On property

1 Choose Start➛Control Panel➛System and Security➛Administrative Tools➛Services. On Services, right-click Actuate iHub 3.1 Service, and choose Properties, as shown in Figure 7-35.


Figure 7-35 Choosing Actuate iHub 3.1 Service properties

2 On Actuate iHub 3.1 Service Properties, choose Log on. If This account already specifies an account having administrator privileges, as shown in the example in Figure 7-36, you do not need to specify a logon account for the Actuate iHub 3.1 service. Choose Cancel on Actuate iHub 3.1 Service Properties, and close Services. Otherwise, perform the tasks described in “How to specify a logon account for the Actuate iHub 3.1 service,” which follows.

Figure 7-36 Checking the Log On property


How to specify a logon account for the Actuate iHub 3.1 service

1 Choose Start➛Control Panel➛System and Security➛Administrative Tools➛Services. On Services, select Actuate iHub 3.1 Service. Then, choose Stop the service, as shown in Figure 7-37.

Figure 7-37 Stopping the Actuate iHub 3.1 service

2 On Services, right-click Actuate iHub 3.1 Service, and choose Properties, as shown in Figure 7-38.

Figure 7-38 Choosing Properties for the Actuate iHub 3.1 service

3 On Actuate iHub 3.1 Service Properties, perform the following tasks:

1 Choose Log On.

2 In Log On, select This account, and specify an account that has administrator privileges, such as <machine name>\administrator.


3 In Password and Confirm password, type the password for the account.

4 Choose Apply. Figure 7-39 shows Actuate iHub 3.1 Service Properties—Log On for a machine named kozu.

Figure 7-39 Specifying an account with administrator privileges

Choose OK.

4 On Services, select Actuate iHub 3.1 Service, and choose Start the service, as shown in Figure 7-40.

Figure 7-40 Starting the Actuate iHub 3.1 service
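As an alternative to the Services window, the service can also be stopped and started from an elevated Command Prompt using its display name. This is an illustration only and assumes the default display name shown in the steps above:

rem Stop, then restart, the iHub service by its display name
net stop "Actuate iHub 3.1 Service"
net start "Actuate iHub 3.1 Service"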

Understanding Cluster Configuration

In Cluster Configuration, the system administrator adds a cluster node to the cluster. Additionally, Cluster Configuration supports management tasks such as starting, stopping, and editing the properties of the following:

■ The entire cluster


■ An individual cluster node

■ A service running on a cluster node

Performing management tasks for the entire cluster

The system administrator chooses the cog-shaped icon to access the Manage Cluster menu, as shown in Figure 7-41.

Figure 7-41 Accessing the Manage Cluster menu

The Manage Cluster menu consists of the following options:

■ Refresh: Refreshes the status of the services running on all cluster nodes.

■ Stop or Start Cluster: Stops or starts all nodes in the cluster. If the cluster is running, or online, Stop Cluster displays in the Manage Cluster menu. If the cluster is stopped, or offline, Start Cluster displays in the Manage Cluster menu.

■ Edit Cluster Properties: Displays Edit Cluster Properties. The system administrator can change any of the following cluster properties. Choose Stop Cluster to stop the cluster before changing Cluster URL.

■ Name

■ Description

■ Cluster URL

■ Password

After making any cluster property changes, choose OK. If you changed the Cluster URL, choose Start Cluster to start the cluster after choosing OK.

■ Show Cluster Configuration Home: Displays the location of the shared configuration folder that the AC_CONFIG_HOME element specifies in the acpmdconfig.xml file on the cluster node, in UNC format. For example, the following line specifies the path to the shared configuration folder used in “How to add the second cluster node to a cluster and enable the default volume,” earlier in this chapter:

\\urup\config_cluster

urup is the name of the machine containing the shared configuration folder.

In a default BIRT iHub installation on Windows, performed using the installer, in which the install folder is C:\Actuate3, the path AC_CONFIG_HOME specifies is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config_cluster
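For orientation only, the corresponding entry in acpmdconfig.xml might resemble the following line. The elements surrounding AC_CONFIG_HOME vary by installation, so check your own acpmdconfig.xml rather than copying this sketch:

<!-- Illustrative value only; a UNC path lets every cluster node reach the share -->
<AC_CONFIG_HOME>\\urup\config_cluster</AC_CONFIG_HOME>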

Performing management tasks for an individual cluster node

The system administrator chooses the arrowhead icon next to a cluster node name to access the cluster node menu, as shown in Figure 7-42.

Figure 7-42 Accessing the cluster node menu

The following list describes the options on the cluster node menu:

■ Stop or start node: Stops or starts the cluster node. If the cluster node is running, or online, Stop Node displays in the cluster node menu. If the cluster node is stopped, or offline, Start Node displays in the cluster node menu.

■ Edit: Displays Edit Cluster Node. The system administrator can change either of the following properties:

■ Host Name

■ Description

■ Delete: Deletes the node from the cluster.


Performing management tasks for a service running on a cluster node

The system administrator chooses the arrowhead icon next to a service name to access the service menu. For example, Figure 7-43 shows the menu for the Web service.

Figure 7-43 Accessing a service menu

The following list describes the options on any service menu except the BIRT menu. For more information about the BIRT service, see “About the BIRT service,” later in this chapter.

■ Stop or Start Service: Stops or starts the service. If the service is running, the color of the icon for the service is green, and Stop Service displays in the service menu. If the service is stopped, the color of the icon for the service is red, and Start Service displays in the service menu.

■ Edit: Displays Edit <service name>. For example, when the system administrator chooses to edit the Web service, System Console displays Edit Web.

For each service, Edit <service name> displays the Startup Mode, Process Name, and Java Arguments properties, as shown in Table 7-1. The system administrator can change the Startup Mode and the Java Arguments properties. A property name appearing with an asterisk (*) next to the name is a required property.

If you modify the Java heap size argument for a service, do not specify a size that exceeds the amount of RAM on the node. On some Linux platforms, the LMServer process may encounter an error if the Java heap size you specify exceeds the amount of RAM available on the node.
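As an illustration only, on a node with 4 GB of RAM a maximum heap setting such as the following stays below physical memory, whereas a value such as -Xmx8g would exceed it. Appropriate values depend on the node's RAM and on the other services running on it:

-Xms256m -Xmx2048m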


Table 7-1 Cluster node service properties

Agent
Startup mode: Auto Start, Manual, Disable
Process name: LSTailer
Java arguments: -Xms64m -Xmx256m "-Dlog4j.configuration=file:C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/etc/lmagent-log4j.properties" "-Djava.library.path=C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/bin" com.actuate.lmservice.logging.logtailer.ProducerAgent

Monitor
Startup mode: Auto Start, Manual, Disable
Process name: LMServer
Java arguments: -Xms386m -Xmx8g "-Dlog4j.configuration=file:C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/etc/lmserver-log4j.properties" "-Djava.library.path=C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/bin" com.actuate.lmservice.server.LMServer -p

Platform
Startup mode: Auto Start, Manual, Disable
Process name: ihub
Java arguments: -Xms256m -Xmx2048m -XX:MaxPermSize=128m -Ddeployment.security.SSLv2Hello=false -Ddeployment.security.SSLv3=false -Ddeployment.security.TLSv1=true -Ddeployment.security.TLSv1.1=true -Ddeployment.security.TLSv1.2=true "-Djava.library.path=C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/bin" com.actuate.iserver.server.Server

Platform
Startup mode: Auto Start, Manual, Disable
Process name: ihubc
Java arguments: -Spmd -Jtupo.actuate.com -YCluster

REST Service Server
Startup mode: Auto Start, Manual, Disable
Process name: node_server
Java arguments: C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/RESTAPI/server/app.js

Web
Startup mode: Auto Start, Manual, Disable
Process name: ihubservletcontainer
Java arguments: -Xms256m -Xmx1024m -XX:PermSize=64M -XX:MaxNewSize=256m -XX:MaxPermSize=128m -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Ddeployment.security.SSLv2Hello=false -Ddeployment.security.SSLv3=false -Ddeployment.security.TLSv1=true -Ddeployment.security.TLSv1.1=true -Ddeployment.security.TLSv1.2=true com.actuate.server.embededtomcat.EmbededTomcat


About the BIRT service

Choosing the arrowhead icon next to BIRT displays a menu containing one option, Edit.

When the system administrator chooses Edit on the menu for BIRT, System Console displays Edit BIRT, as shown in Figure 7-44.

Figure 7-44 Editing the BIRT service

Changing the Capacity Option changes the server configuration template that configures this cluster node. AC_CONFIG_HOME\acserverconfig.xml contains the server configuration templates. The names of the default server configuration templates in acserverconfig.xml are small, medium, large, and disable. Stop the Platform service before changing the Capacity Option setting in Edit BIRT.

For more information on server configuration templates, see “BIRT iHub service and resource group properties,” later in this chapter.



Adding a volume

A cluster can contain one or more volumes, with 1000 volumes as a practical limit. Creating more than 1000 volumes can cause instability in the iHub system.

For each volume, there is one database schema and one or more storage areas. The metadata database contains volume metadata, such as user and user group information. The storage area or areas contain volume consumable data, such as BIRT document content. When adding a volume, properties the system administrator specifies include schema name, storage area or areas, and the database user and password with which to connect to the metadata database.

A single BIRT iHub cluster can use only one security mechanism. For example, if the system administrator wants to use iHub User Management (default) as the user management setting for one volume and LDAP/Active Directory Adapter as the user management setting for a second volume, the system administrator must create a cluster for each volume. For more information on the user management setting, see “Configuring User Management,” later in this chapter.

Actuate recommends enabling e-mail notification before creating a new volume, if you have not already done so. E-mail notification must be enabled to successfully perform the following tasks:

■ Create a volume, if also specifying an e-mail address for the volume administrator

■ Edit an existing volume and select the option to reset the password

If you have not enabled e-mail notification, System Console displays an error message and cannot complete these tasks. For information on enabling e-mail notification, see “Enabling e-mail notification,” later in this chapter.

This section demonstrates adding an example volume named sales_volume in the process of creating a two-node cluster.

How to add a volume

1 Actuate recommends enabling e-mail notification. For information on enabling e-mail notification, see “Enabling e-mail notification,” later in this chapter.

2 Create a new folder at the location where you want to store the volume data. For example, create a new folder in AC_SHARED_HOME named sales_storage. In a default BIRT iHub installation on Windows, performed using the installer, in which the install folder is C:\Actuate3, the path for AC_SHARED_HOME\sales_storage is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\sales_storage

Do not reuse a storage location for a new volume. For example, if AC_SHARED_HOME\sales_storage was the storage folder for a previously existing volume, create a storage location for the new volume that has a path other than AC_SHARED_HOME\sales_storage. System Console accepts the use of a subfolder of AC_SHARED_HOME\sales_storage, such as AC_SHARED_HOME\sales_storage\sales_storage_2.

3 Share the new folder. For information on how to perform this task, see “Sharing the folders that all cluster nodes access,” in “Preparing the iHub cluster environment,” earlier in this chapter.
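For example, where site policy permits, steps 2 and 3 can also be performed from an elevated Command Prompt. The share name and the Everyone/Change grant below are assumptions, so substitute permissions that conform to the security policies in force for the environment:

rem Create the storage folder and share it so that all cluster nodes can reach it
mkdir C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\sales_storage
net share sales_storage=C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\sales_storage /GRANT:Everyone,CHANGE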

4 On Volumes, choose Add Volume, as shown in Figure 7-45.

Figure 7-45 Choosing Add Volume

5 Configure the following properties on Add Volume. Figure 7-46 shows the property values for an example volume, sales_volume. An asterisk (*) next to the property name means the property is required.

■ Volume Name: Type a name for the volume.

■ Description: Type a description for the volume.

■ Volume Administrator Email: Type the e-mail address of the volume administrator. When you create a volume, System Console sends a notification e-mail containing the volume password to this address if you have enabled e-mail notification. For more information, see “Enabling e-mail notification,” later in this chapter. If you leave Volume Administrator Email blank, BIRT iHub does not create a password for accessing the new volume in Information Console. The BIRT iHub default user, Administrator, can log in to Information Console to access the new volume without using a password. Then, in Information Console, the administrator can choose My Profile and create a new password for accessing the volume.


■ Schema Name: Type a name for the volume schema that is 30 characters or less. BIRT iHub creates the volume and the volume schema at the same time.

■ Create New Schema: Select this property except under either of the following conditions:

❏ You have already populated the schema using the Volume Data Store Administrator utility.

❏ You are adding a volume for which the schema is already populated and the storage location already contains files.

■ Tablespace: Type the name of a tablespace for the volume schema. Alternatively, leave Tablespace blank to use the default tablespace.

■ DBA User: Type the name of the PostgreSQL superuser, postgres.

■ DBA password: Type the PostgreSQL superuser password. By default, the password is postgres.

■ Storage Location: Type the path of the volume storage folder you created in step 2, using UNC format. UNC format enables all nodes in the cluster to find this folder. The path you type is the path that appears as the Network Path in Properties—Sharing for the storage folder after sharing it. In this example, the sales_storage folder is on a machine named URUP, so the path takes the form \\URUP\sales_storage.

■ Organization ID: Type an alphanumeric character string for the Organization ID. The LDAP adapter and RSSE implementation use the Organization ID to filter users and user groups. Alternatively, leave Organization ID blank. For more information on Organization ID, see “Managing volume access by users and user groups when using LDAP,” later in this chapter.

■ Encryption Key for Storage: Type the name of the encryption key. Alternatively, leave Encryption Key for Storage blank.

On Add Volume, choose OK.


Figure 7-46 Adding a volume

If e-mail notification is enabled, BIRT iHub sends an e-mail notifying the volume administrator that BIRT iHub has created the volume. The e-mail contains the password with which to log in to Information Console to access sales_volume, as shown in Figure 7-47.

Figure 7-47 Viewing the notification e-mail that the volume is created

6 On Volumes, left-click the arrowhead icon next to the new volume name and choose Enable, as shown in Figure 7-48.


Figure 7-48 Viewing the new volume in the list on Volumes

Adding a storage location

The system administrator can add a storage location for a volume. A single volume can use a maximum of 10 storage locations.

How to add a storage location for an existing volume

1 Create a new folder at the location where you want to add storage. Do not use a storage location that a volume has used previously. The path of the storage location must be new.

2 Share the new folder. For information on how to perform this task, see “Sharing the folders that all cluster nodes access,” in “Preparing the iHub cluster environment,” earlier in this chapter.

3 On Volumes, in the Storage Status column, left-click the plus sign (+) in the row containing the name of the volume for which you want to add storage.

4 In Add Storage, specify the new storage location in Storage Location using UNC format, as shown in Figure 7-49. Choose OK.

Figure 7-49 Adding a storage location for a volume


Updating a storage location

The system administrator can create a new storage folder for a volume, relocate the volume data to the new folder, then update the volume storage location using System Console.

How to update a storage location

1 Create a new folder for the volume.

2 Copy the contents of the old folder to the new folder. A command sketch for this copy appears after this procedure.

3 Share the new folder. For information on how to perform this task, see “Sharing the folders that all cluster nodes access,” in “Preparing the iHub cluster environment,” earlier in this chapter.

4 On Volumes, perform the following tasks:

1 Left-click the arrowhead icon next to the volume name and choose Disable, as shown in Figure 7-50. Confirm that you want to disable the volume.

Figure 7-50 Disabling the volume

On Confirmation, choose OK to confirm that you want to disable the volume.

2 Left-click the arrowhead icon in the Storage Status box for the volume and choose Set Read Only, as shown in Figure 7-51.


Figure 7-51 Setting a volume to Read Only status

On Confirmation, choose OK to confirm that you want to change the status of the volume to Read Only.

3 Left-click the arrowhead icon in the Storage Status box for the volume and choose Edit.

4 In Edit Storage, specify the path of the new storage folder using UNC format, as shown in Figure 7-52. Choose OK.

Figure 7-52 Specifying the new storage location

5 Left-click the arrowhead icon in the Storage Status box for the volume and choose Set Read/Write. On Confirmation, choose OK to confirm that you want to change the Volume status to Read/Write.

6 Left-click the arrowhead icon next to the volume name and choose Enable. On Confirmation, choose OK to confirm that you want to enable the volume.

5 Log in to Information Console and ensure that the contents of the volume appear.

6 Delete the old volume storage folder.
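For step 2 of this procedure, a minimal copy sketch on Windows appears below. The folder paths are placeholders, and the /COPYALL flag assumes the command runs with sufficient privileges to preserve file security information:

rem Copy the old storage folder to the new one, including empty subfolders
rem and file security information (placeholder paths; adjust to your system)
robocopy D:\old_sales_storage D:\new_sales_storage /E /COPYALL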


Removing a storage location

If you want to remove a storage location from a volume, please contact Actuate Support. They will ensure that you remove the storage location without any loss of data.

Understanding the volume menu

Left-click the arrowhead icon next to a volume name to display a menu containing the following options:

■ Edit: Supports changing the following volume properties:

■ Description

■ Organization ID

■ Encryption Key for Storage

■ Delete: Deletes the volume. Delete is a menu option only when the volume is offline.

■ Enable or Disable: Brings the volume online or takes it offline. If the status of the volume is Enabled, the menu option is Disable. If the status of the volume is Disabled, the menu option is Enable.

■ Monitoring: Displays a link named Server Resource. Choose Server Resource to open a new browser window, in which System Console uses Actuate Viewer to display a chart showing the last 48 hours of activity on this volume for each of the following statistics:

■ Response Time (milliseconds)

■ Number of Alerts

Viewing metadata database properties

Metadata Database displays properties for the database storing volume metadata. BIRT iHub supports storing volume metadata in one of the following databases:

■ The default PostgreSQL database that installs with BIRT iHub

■ A pre-existing PostgreSQL database

■ A pre-existing Oracle database

By default, BIRT iHub stores volume metadata in the PostgreSQL database that installs with BIRT iHub. Figure 7-53 shows the following properties for the default PostgreSQL database, installed on a machine named urup.


■ Database Type: Type of database BIRT iHub is using for volume metadata. Can be PostgreSQL or Oracle.

■ Test Connection: Choose Test Connection to test the connection to the database.

■ Database server: Host name of the machine containing the database.

■ Database port: The default port number for the default PostgreSQL database is 8433.

■ Database name: Name of the database. The name of the default database is ihub.

■ Encryption Method: The default value is noEncryption. The channel between BIRT iHub and the metadata database passes unencrypted data.

■ Schema Name: Name of the volume schema. The name of the default volume schema is ac_cluster.

■ Username: Database user name. The name of the default user is ihub.

■ Change Password: Choose Change Password to change the password for the database user.

Figure 7-53 Viewing OOTB PostgreSQL metadata database properties


Metadata Database displays the following properties when an Oracle database contains the volume metadata, as shown, for example, in Figure 7-54:

■ Database Type: Type of database BIRT iHub is using for volume metadata, for example, Oracle.

■ Test Connection: Choose Test Connection to test the connection to the database.

■ TNS Name: Host name of the machine containing the TNSNAMES.ORA file.

■ TNS Names File: Path to the TNSNAMES.ORA file.

■ Encryption Method: The default value is noEncryption. The channel between BIRT iHub and the metadata database passes unencrypted data.

■ Schema name: Name of the volume schema.

■ Username: Database user name.

■ Change Password: Choose Change Password to change the password for the database user.

Figure 7-54 Viewing Oracle metadata database properties


Configuring alerts

System Console monitors a range of activity, conditions, and resources in a BIRT iHub system. An attribute identifies a monitored item. The system administrator can create an alert for any system attribute. Alerts supports the system administrator in performing the following operations:

■ Viewing the list of alerts

■ Adding an alert

■ Editing an alert

■ Disabling and enabling an alert

■ Deleting an alert

The following sections describe these operations.

Viewing the list of alerts

View the list of alerts by choosing Alerts from the side menu, as shown in Figure 7-55. An alert contains the following information:

■ Alert name: Name of the alert

■ Attribute: Name of the attribute identifying the item BIRT iHub monitors

■ Condition: Condition determining whether a monitored item reaches the alert threshold

■ Threshold: Limit that, when met, triggers an alert

■ Enable: True if the alert is enabled, false if the alert is disabled

■ Email: E-mail address to which System Console sends notification of an alert

■ Message: Message System Console sends when an alert occurs

Adding an alert

When adding an alert, the system administrator selects an attribute name from a list, and sets a threshold value that, when reached, causes System Console to trigger an alert. Monitoring displays the alert, and System Console sends an e-mail to the address the system administrator specifies. The e-mail notifies the recipient that the monitored attribute for the item has met the specified value.


The value for Threshold that the system administrator specifies for most Alert attributes is a number from 0 (zero) to 100. For these Alert attributes, the administrator can specify one of the following values for determining whether the condition which triggers an alert has been met:

■ equal to (=)

■ greater than (>)

■ greater than or equal to (>=)

■ less than (<)

■ less than or equal to (<=)

For the remainder of the Alert attributes, the administrator specifies a string value for Threshold and a condition value of equal to (=).

Table 7-2 displays the Condition and Threshold values that the administrator can specify for Alert attributes.

Figure 7-55 Viewing the list of alerts

Table 7-2 Alert attribute Condition and Threshold values

Alert attribute name                    Threshold value data type   Permissible values for Condition   Permissible values for Threshold
Encyclopedia service status on server   String                      =                                  ONLINE, OFFLINE
Factory service status on server        String                      =                                  ONLINE, OFFLINE
Integration service status on server    String                      =                                  ONLINE, OFFLINE
Server needs restart                    String                      =                                  YES, NO
Server status                           String                      =                                  ONLINE, OFFLINE
View service status on server           String                      =                                  ONLINE, OFFLINE
Volume status                           String                      =                                  ONLINE, OFFLINE, ERROR
All other Alert attributes              Numeric                     =, >, >=, <, <=                    Any number from 0 through 100


The following section demonstrates adding an alert on the system attribute named Percent of server RAM used (MB).

How to add an alert

1 Choose Alerts from the Clusters side menu.

2 Choose Add Alert.

3 On Add Alert, perform the following tasks, as shown in Figure 7-56. An asterisk (*) next to the property name means the property is required.

1 In Attribute Name, select an attribute.

2 In Condition, select a condition by which System Console determines whether the monitored item has reached the threshold.

3 In Threshold, specify a value that triggers an alert when reached.

4 In Email, specify an e-mail address to which System Console sends notification of an alert. You must enable e-mail notification. For more information, see “Enabling e-mail notification,” later in this chapter.

5 In Message, type a message to display on Monitoring and to include in the notification e-mail when an alert is triggered, such as ‘Number of jobs running on the volume has reached the specified limit’.

6 In Alert Name, type a name for the alert. Choose OK.



Figure 7-56 Adding an alert

Enabling e-mail notification

System Console supports e-mail notification when the following events occur:

■ The system administrator creates another system administrator.

■ The system administrator creates a volume.

■ System Console triggers an alert.

Information Console supports e-mail notification when a scheduled job completes.

To enable e-mail notification when System Console or Information Console is installed on a Windows machine, the administrator modifies acserverconfig.xml, the configuration file that all cluster nodes share, adding properties that support e-mail notification.

E-mail notification is enabled on a Linux machine if Sendmail is running on the machine, and the Linux administrator has set any necessary privileges for using Sendmail. It is not necessary to modify acserverconfig.xml to enable e-mail notification when running System Console or Information Console on a Linux machine.
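A quick, illustrative way to confirm that Sendmail is running on a Linux machine is to look for its process; the exact service name and management command can vary by distribution:

# List running processes and filter for sendmail (illustrative check only)
ps -ef | grep -i sendmail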

How to enable e-mail notification

The procedure in this section is necessary only if System Console or Information Console runs on a Windows machine.

1 On Clusters, choose to edit the cluster for which you want to enable e-mail notification.

2 Stop the cluster by performing the following steps:

1 Choose Cluster Configuration from the side menu.


2 On Cluster Configuration, left-click the cog icon and choose Stop Cluster from the Manage Cluster menu.

3 Choose Refresh from the Manage Cluster menu. When all the services icons have turned red, continue to the next step.

3 Using Windows Explorer, navigate to AC_CONFIG_HOME. For example, if the system administrator created a folder for the shared configuration directory named config_cluster, then in a default BIRT iHub installation on Windows, performed using the installer, in which the install folder is C:\Actuate3, AC_CONFIG_HOME represents the following path:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config_cluster

Create a backup copy of acserverconfig.xml. Then, open acserverconfig.xml in a text editor. In acserverconfig.xml, navigate to the following string:

<SMTPServers/>

Create a child element of <SMTPServers> named <SMTPServer>. Using the following lines as an example, provide values for the attributes of the <SMTPServer> element:

<SMTPServers>
   <SMTPServer
      Name="mailhost.actuate.com"
      SenderName="Notifications"
      SMTPHostName="mailhost.actuate.com"
      SenderAddress="[email protected]"/>
</SMTPServers>

The <SMTPServers> element appears in acserverconfig.xml as shown in Listing 7-1.

Listing 7-1 acserverconfig.xml with configured <SMTPServer> element

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<Config>
   <System
      KeyAlias="birtihub"
      ...
      <UsageAndErrorLogging/>
      <SMTPServers>
         <SMTPServer
            Name="mailhost.actuate.com"
            SenderName="Notifications"
            SMTPHostName="mailhost.actuate.com"
            SenderAddress="[email protected]"/>
      </SMTPServers>
   </System>


Save acserverconfig.xml and close the file.

4 Start the cluster by performing the following steps:

1 Choose Cluster Configuration from the side menu.

2 In System Console, on Clusters—Cluster Configuration, left-click the cog icon and choose Start Cluster from the Manage Cluster menu.

3 Choose Refresh from the Manage Cluster menu. When all the services icons have turned green, the cluster is back online.

5 If your anti-virus software prevents processes from sending e-mail, you must disable the blocking of alert notification e-mails. Configure your anti-virus software to allow processes such as java.exe, LMServer.exe, ihub.exe, and ihubc.exe to send e-mail.

6 Verify that you are receiving e-mail notification by performing any one of the following tasks. Completion of any of these tasks prompts System Console to send an e-mail notification.

■ Schedule a job in Information Console to run immediately. For more information, see Chapter 3, “Scheduling and Managing Jobs,” in Using Information Console.

■ Configure an alert. For example, add an alert having the following properties:

❏ Attribute Name: Percent of server RAM used (MB)

❏ Condition: Greater than or equal to

❏ Threshold: 0 (zero)

For more information on configuring an alert, see “Adding an alert,” earlier in this chapter.

■ Add a volume. For more information, see “Adding a volume,” earlier in this chapter.

Editing, deleting, disabling, and enabling an alert

Choose the icon next to an alert on Clusters—Alerts to access the alert menu. This menu contains the following options, as shown in Figure 7-57:

■ Edit: Edit the alert.

■ Delete: Delete the alert.

■ Disable: If the alert is enabled, the menu contains Disable. If the alert is disabled, the menu contains Enable.


Figure 7-57 Viewing the alert

When editing an existing alert, the system administrator can change any value except the attribute name and the alert name.

How to edit an alert

1 Point to the icon next to the name of an alert and choose Edit.

2 On Edit Alert, modify any properties as necessary, as shown in Figure 7-58. Choose OK.

Figure 7-58 Editing an alert

How to delete an alert

Point to the icon next to the name of an alert and choose Delete.

How to disable or enable an alert

Disable an enabled alert by left-clicking the icon next to the name of an enabled alert and choosing Disable.

Enable a disabled alert by left-clicking the icon next to the name of a disabled alert and choosing Enable.


Configuring Single Sign-On

Choose Single Sign-On to view the SAML identity and service provider information for the nodes in the cluster and, optionally, to add a service provider, as shown in Figure 7-59. Service provider information for a cluster node becomes visible to the cluster when the node joins the cluster.

Figure 7-59 Choosing iHub User Management

Viewing the information in SAML Identity Provider (IdP) for this cluster

SAML Identity Provider (IdP) for this cluster specifies the following Security Assertion Markup Language (SAML) information:

■ Entity ID: The identity provider identifier. This is the value of the entityID attribute in the <EntityDescriptor> element in the identity provider metadata.

■ Metadata URI: The identifier for the identity provider metadata.

■ Metadata path: The path to the identity provider metadata on disk.

Viewing and adding service provider information

Service Provider Information displays the information for each service provider on each node in the cluster. The system administrator can also add additional service providers using Add Service Provider.


By default, each node uses the iportal service provider, which provides access to Information Console.

Choose the icon next to the service provider URL to view the following information for the service provider:

■ Entity ID: The service provider identifier. This is the value of the entityID attribute in the <md:EntityDescriptor> element in the service provider metadata.

■ Server URL: The URL of the login for a service provider. To enable https, set up a proxy that has https enabled.

■ Metadata path: The path of the metadata file for this service provider.

■ Metadata URI: The URI for the metadata for this service provider.

■ ACS Post URL: The URL for ACS Post.

Choose Add Service Provider to specify these properties, as shown in Figure 7-60.

■ Server URL: The URL for the service provider

■ Entity ID: The service provider identifier

Figure 7-60 Specifying Service Provider information


Configuring User Management

The system administrator specifies settings for managing user authentication and authorization on User Management. Select among the following ways that BIRT iHub manages users for this cluster:

■ iHub User Management (default)

■ LDAP/Active Directory Adapter

■ RSSE SOAP Service

iHub User Management is the default setting and requires no action.

Configuring LDAP/Active Directory Adapter

Choose LDAP/Active Directory Adapter to configure settings for user management using an LDAP server. Settings for LDAP/Active Directory Adapter are grouped into the following sections:

■ Search setting

■ LDAP connection settings

■ LDAP Performance Settings

■ LDAP Mapping

The following sections describe these property groups.

About Search setting

Search setting contains one property, Search Cache Only. Search Cache Only restricts any search for users and user groups that BIRT iHub performs to the open security cache, with the exception of user authentication. When performing user authentication, BIRT iHub always searches the external security source, such as the LDAP server. Searching only the cache improves performance because data retrieval from the cache is faster than from the external data source.

A user sync thread runs in the background and refreshes the cache automatically at the interval that the Performance Settings—Cache Timeout property specifies. To prevent BIRT iHub from refreshing the cache, set Performance Settings—Cache Timeout to -1, which prevents a cached user from ever expiring. If you want BIRT iHub to refresh the cache, Actuate recommends setting Performance Settings—Cache Timeout to 1440 minutes (24 hours) or more, instead of the default 60 minutes.

To use the Search Cache Only feature, create a script that sends the SOAP request that executes the caching operation, an example of which is shown in Listing 7-2. For more information, see “Chapter 24, Actuate Information Delivery API operations,” in Integrating Applications into BIRT iHub.


Listing 7-2 The SOAP request for the operation that loads the cache

<soapenv:Envelope
   xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
   xmlns:act="http://schemas.actuate.com/actuate11">
   <soapenv:Header/>
   <soapenv:Body>
      <act:Administrate>
         <act:AdminOperation>
            <act:UpdateOpenSecurityCache>
               <act:LoadAllUsers>true</act:LoadAllUsers>
               <act:LoadAllUserGroups>true</act:LoadAllUserGroups>
            </act:UpdateOpenSecurityCache>
         </act:AdminOperation>
      </act:Administrate>
   </soapenv:Body>
</soapenv:Envelope>
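A minimal script sketch for sending this request is shown below, assuming the request is saved in a file named update_cache.xml. The host, port, and any required authentication headers (such as an AuthId obtained from a prior Login call) are assumptions, so confirm the endpoint details in Integrating Applications into BIRT iHub before relying on this:

# Post the SOAP request to the iHub SOAP endpoint (hypothetical host and port)
curl -X POST -H "Content-Type: text/xml; charset=utf-8" \
  --data-binary @update_cache.xml http://ihub-host:8000/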

Perform this operation immediately after installing BIRT iHub, to load the open security cache. Subsequently, perform the operation to refresh the cache when information in the external data source has changed.

Actuate recommends selecting Search Cache Only only if you have a large number of users or user groups, so that the performance gain is enough to warrant the management task of refreshing the cache.

Configuring LDAP connection settings

Provide values for each of the following LDAP Connection settings to connect to the LDAP or Active Directory server, as shown in Figure 7-61. An asterisk next to a property name indicates that the property is required. After setting all the values in LDAP Connection settings, choose Test Connection to test the connection to the LDAP server. A message displays, indicating whether the connection is successful.

■ LDAP Server: Name of the machine hosting the LDAP or Active Directory server. BIRT iHub must be able to resolve this name. For example, when using an LDAP server, specify:

ldap.company.com

For an Active Directory server, an example value is:

ad.company.com

■ LDAP Port: Port on which the LDAP or Active Directory server listens. Whether using an LDAP server or an Active Directory server, the default port is:

389


For an LDAP server with SSL (LDAPS), the default port is:

636

■ User DN: Distinguished name of the user that can log in to the LDAP or Active Directory server; this is the distinguished name with which BIRT iHub binds to the LDAP server. For example, when using an LDAP server, specify:

ou=Engineering,dc=company,dc=com

For an Active Directory server, an example value is:

[email protected]

■ Password: Password for the LDAP or Active Directory server.

■ SSL: Enables connecting to an LDAP server or an Active Directory server with SSL. An out-of-the-box (OOTB) BIRT iHub installation only connects to an LDAP or Active Directory server that has a signed certificate. To connect to a server without a signed certificate, use the Java keytool utility to add that certificate as a trusted certificate. For information on using the Java keytool utility, see:

http://docs.oracle.com/javase/6/docs/technotes/tools/windows/keytool.html.

■ Active Directory: Supports an LDAP implementation using Active Directory. Select if implementing LDAP using Active Directory.

■ Recursive Groups: Supports nested group membership. Leave this property deselected if not using an Active Directory LDAP implementation.

Figure 7-61 Configuring LDAP connection settings
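Before entering these values, it can help to verify them outside of System Console with a standard LDAP client. The following ldapsearch sketch uses placeholder host, bind DN, password, and base DN values, so substitute your own and treat it purely as an illustration:

# Simple bind against the directory and look up one user (placeholder values)
ldapsearch -x -H ldap://ldap.company.com:389 \
  -D "cn=admin,dc=company,dc=com" -w secret \
  -b "ou=Users,dc=company,dc=com" "(uid=someuser)"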


Configuring LDAP Performance Settings

Choose LDAP Performance Settings to set the following properties, as shown in Figure 7-62. An asterisk next to the property name indicates that this property is required.

■ Timeout: The number of milliseconds before the time to perform an LDAP operation expires.

■ Maximum Pool Size: The maximum number of connections per connection identity that can be maintained concurrently.

■ Fetch Limit: The maximum number of entries to be returned from the directory.

■ Preferred Pool Size: The preferred number of connections per connection identity to maintain concurrently.

■ Cache Timeout: The number of minutes before BIRT iHub deletes cached data.

Figure 7-62 Setting Performance Settings properties

Configuring LDAP Mapping

Configure LDAP Mapping to map BIRT iHub user data to the LDAP or Active Directory server by providing values for each of the following settings in LDAP Mapping, as shown in Figure 7-63. An asterisk next to the property name indicates that this property is required.


Figure 7-63 Configuring LDAP Mapping


■ Prefix: For simple authentication, a string value that LDAP prepends to the name with which the user logs on to the server. For LDAP servers requiring distinguished name (DN) login, set this property to the appropriate value, followed by an equal sign (=). For example, specify:

uid=

When using an Active Directory server, leave Prefix blank.

■ Suffix: For simple authentication, a string value that LDAP appends to the name with which the user logs on to the server. For LDAP servers requiring distinguished name (DN) login, set this property to the appropriate chain of values, preceded by a comma (,). For example, specify:

,ou=company users,dc=company,dc=com

When using an Active Directory server, which requires logging in with an e-mail address, set Suffix to @ followed by the domain name of the Active Directory. For example, specify:

@company.com

■ User Base DN: The root of the tree that BIRT iHub searches for user information. A user name must be unique for each distinguished name BIRT iHub searches. Separate multiple distinguished names with a semicolon. For example, when using an LDAP server, specify:

ou=Users, dc=east, dc=com; ou=Users, dc=west, dc=com

For an Active Directory server, an example value is:

OU=Users, DC=east, DC=com; OU=Users, DC=west, DC=com

Note that you must delimit multiple branches with semicolons. The following example consists of three branches:

OU=Department1, OU=Users, DC=actuate, DC=com; OU=Department2, OU=Users, DC=actuate, DC=com; OU=Department3, OU=Users, DC=actuate, DC=com

■ User Login Name Attribute: Attribute that specifies the user login name. Cannot contain a space. For example, when using an LDAP server, specify:

uid

When using an Active Directory server, specify:

sAMAccountName


Note that if the LDAP or Active Directory server contains a user login name longer than 255 characters, BIRT iHub reads only the first 255 characters. User login names longer than 255 characters are not supported.

■ User Full Name Attribute: Attribute that specifies the user’s full name. For example, when using an LDAP server, specify:

cn

or:

displayName

When using an Active Directory server, specify:

cn

■ User Description Attribute: Attribute specifying a description of the user. For example, whether using an LDAP server or an Active Directory server, specify:

description

■ User Object: LDAP object class for users. For example, when using an LDAP server, specify:

person

When using an Active Directory server, specify:

user

■ User Search Filter: Use this property to identify which users can access BIRT iHub. Use the format appropriate to the indicated provider. For example, create a group for BIRT iHub users on your LDAP server. Then, specify this group as a filter to ensure that BIRT iHub imports only users belonging to the group of BIRT iHub users. For example, when using an LDAP server, specify:

cn=birtUsers

When using an Active Directory server, an example value is:

memberOf:1.2.840.113556.1.4.1941:=CN=\\#QA,CN=Users,DC=actuate,DC=com

Be aware that for a distinguished name containing one or more special characters, LDAP stores the distinguished name with any special characters escaped with a backslash, so you must also escape any special character in the value you specify for User Search Filter with a backslash. For more information, see “About searching when Active Directory implements LDAP,” later in this chapter.


■ Email Attribute: Attribute that stores a user’s e-mail address. For example, whether using an LDAP server or an Active Directory server, specify:

mail

■ Group Base DN: The root of the tree that BIRT iHub searches for user group information. Separate multiple distinguished names with a semicolon. For example, when using an LDAP server, specify:

ou=Groups, dc=eastern, dc=com; ou=Groups, dc=western, dc=com

For an Active Directory server, an example value is:

CN=Groups,OU=east,DC=company,DC=com;DC=Groups,OU=west,DC=company,DC=com

■ Group Description Attribute: Attribute specifying a description of the user group. For example, whether using an LDAP server or an Active Directory server, specify:

description

■ Group Object: LDAP object class for user groups. For example, when using a Sun Directory LDAP server, specify:

groupofuniquenames

For an Active Directory server, an example value is:

group

■ Group Search Filter: Value with which to filter user groups. For example, when using an LDAP server, specify:

cn=Engineering*

For either an LDAP Directory server or an Active Directory server, a more advanced example is:

(&(businessCategory=Sales)(cn=a*))

For an Active Directory server, an example value is:

member:1.2.840.113556.1.4.1941:=CN=Vincent Price,CN=Users,DC=actuate,DC=com


■ Member List Attribute: The LDAP Role Member attribute. BIRT iHub uses this attribute to find a user in a group. Groups use this attribute to name a user to a group. For example, when using a Sun Directory LDAP server, specify:

uniqueMember

When using an Active Directory server, specify:

member

■ Member ID Type: The LDAP Role Member. Specifies the type of a member in a group. Whether using an LDAP server or an Active Directory server, specify the type as:

DN

or:

LoginID

■ Home Folder Attribute: Attribute key that maps to a user’s home folder. For example, when using an LDAP server or an Active Directory server, specify:

companyHomeFolder

When using an Active Directory server, leave this property blank.

■ Default Home Folder: Value that specifies the default parent folder of a user’s home folder.

If no Home Folder Attribute exists, BIRT iHub uses this property to construct the user's home folder. For example, whether using an LDAP server or an Active Directory server, specifying /home results in a home folder of /home/bHill for a user named bHill.

■ User Volume Filter Attribute: Specifies an attribute, for example, employeeType, that BIRT iHub uses to determine which users have access to a volume. Requires the Multi-Tenant license option. For more information, see “Managing volume access by users and user groups when using LDAP,” later in this chapter.

■ Group Volume Filter Attribute: Specifies an attribute, for example, businessType, that BIRT iHub uses to determine which user groups have access to a volume. Requires the Multi-Tenant license option. For more information, see “Managing volume access by users and user groups when using LDAP,” later in this chapter.

■ “Admin” Group: Specifies the name of a group of users to whom BIRT iHub gives Administrator-level privileges in Information Console. When using an LDAP or Active Directory server for user management, BIRT iHub does not use the default Administrators user group in Information Console—iHub Administration. For example, whether using an LDAP server or an Active Directory server, specify:

CN=volumeAdministrators,OU=SystemGroups,OU=Groups,OU=Common_Users,DC=example,DC=com

Managing volume access by users and user groups when using LDAP

The LDAP mapping attribute User Volume Filter Attribute identifies the users that can access a particular volume. The LDAP mapping attribute Group Volume Filter Attribute identifies user groups that can access a particular volume.

Whether using an LDAP server or an Active Directory server, the value the system administrator specifies for User Volume Filter Attribute is the name of an attribute having a value that is shared by a group of users to which the system administrator wants to give access to a particular volume. The system administrator specifies this attribute value for the Organization ID when creating a volume.

As an example, employeeType is an attribute for a user on an LDAP or Active Directory server. All users for which the value of employeeType is Sales can access a volume having an Organization ID of Sales.

Likewise, the value the system administrator specifies for Group Volume Filter Attribute is the name of an attribute having a value that is shared by a group of user groups to which the system administrator wants to give access to a particular volume. The system administrator specifies this attribute value for the Organization ID when creating a volume.

As an example, businessType is an attribute for a user group on an LDAP or Active Directory server. All user groups for which the value of businessType is Insurance can access a volume having an Organization ID of Insurance.

Multiple volumes can have the same Organization ID, but a volume can have only one Organization ID value. If the system administrator specifies both User Volume Filter Attribute and Group Volume Filter Attribute, and the value of these two attributes on the LDAP or Active Directory server is the same for a given user and user group, then specifying this value as the Organization ID when creating a volume gives both the user and the user group access to the volume.

As an example, the system administrator specifies employeeType for User Volume Filter Attribute and businessType for Group Volume Filter Attribute. If the value for each of these attributes is Sales, and the system administrator specifies Sales for the Organization ID when creating a volume, then the users for which the value of employeeType is Sales and the user groups for which the value of businessType is Sales can access the volume.
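The directory entries behind this example might look like the following LDIF sketch. The DNs, object classes, and the WestCoastSales group name are illustrative only, and your directory schema may define employeeType and businessType differently:

dn: uid=bHill,ou=Users,dc=example,dc=com
objectClass: inetOrgPerson
uid: bHill
employeeType: Sales

dn: cn=WestCoastSales,ou=Groups,dc=example,dc=com
objectClass: groupOfNames
cn: WestCoastSales
businessType: Sales

Creating a volume with an Organization ID of Sales would then give both this user and this user group access to that volume.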


If the LDAP server is shut down or restarted, the volume associated with the LDAP server is disabled. When the LDAP server is back online, the system administrator must log in to System Console to re-enable the volume. Users do not have access to the volume until it is re-enabled.

About searching when Active Directory implements LDAP

Active Directory requires that the following characters be escaped with a backslash (\) if used in a Distinguished Name (DN):

■ Comma (,)

■ Backslash character (\)

■ Pound sign (#)

■ Plus sign (+)

■ Less than symbol (<)

■ Greater than symbol (>)

■ Semicolon (;)

■ Double quote (")

■ Equal sign (=)

■ Leading or trailing spaces

If any of these characters appear in a component of a DN, Active Directory stores the character escaped. For example, Active Directory stores the following DN:

memberOf=CN=\#QA,CN=Users,DC=actuate,DC=com

For Active Directory to recognize this DN in a search, you must escape the backslash escape character with another backslash. The following example query returns the users belonging to the #QA group:

memberOf=CN=\\#QA,CN=Users,DC=actuate,DC=com

Configuring RSSE SOAP Service

Choose RSSE SOAP Service to configure and view properties for user management using an RSSE web service application for a volume. RSSE SOAP Service is an appropriate choice if you manage user information using an external data source that does not implement LDAP. Configure the following properties for RSSE SOAP Service, as shown in Figure 7-64:

■ Search setting: Contains Search Cache Only. Restricts searching to only the BIRT iHub metadata database.

■ RSSE SOAP service settings: Contains the following properties:

■ Server Name: Machine name of the server that runs the RSSE web service.

■ Port Number: Port number for the RSSE web service.

■ Context Path: Specifies the location of the RSSE web service for BIRT iHub to use when sending messages to the web service. The path for the default volume is /acrsse/servlet/AxisServlet.

■ Cache Timeout: Number of minutes before BIRT iHub deletes cached data.

Figure 7-64 Configuring security settings
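Taken together, these settings identify the web service endpoint that BIRT iHub calls. As an illustration only, assuming a host named rsse-host listening on port 8900 and the default Context Path, the endpoint would take the following form; substitute the host, port, and scheme that apply to your RSSE deployment:

http://rsse-host:8900/acrsse/servlet/AxisServlet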

Updating the license

Each BIRT iHub cluster uses a separate BIRT iHub license. Choose License to view the license options or update the license, as shown in Figure 7-65.


Figure 7-65 Choosing License

Choose Update License to browse for and select the license file, as shown in Figure 7-66.

Figure 7-66 Updating the license

About Configuration File

The configuration file contains BIRT iHub System property settings and templates that all cluster nodes in a cluster use for configuration. The name of the configuration file is acserverconfig.xml. Choose Configuration File to perform the following operations, as shown in Figure 7-67:

■ Update the configuration file: Uploads a new acserverconfig.xml to the default location, AC_CONFIG_HOME. If the system administrator created a folder for the shared configuration directory named config_cluster, then in a default BIRT iHub installation on Windows, performed using the installer, AC_CONFIG_HOME points to the following location:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config_cluster

■ Download or edit the configuration file: Supports saving AC_CONFIG_HOME\acserverconfig.xml to a new location or editing acserverconfig.xml.

Figure 7-67 Viewing the options on Clusters—Configuration File

How to update the configuration file

1 On Clusters, left-click the icon next to the cluster name and choose Edit to edit the cluster.

2 On Cluster Configuration, left-click the cog icon and choose Stop Cluster.

3 On Configuration File, choose Update Configuration File.

4 On Update Configuration File, choose Browse to navigate to the location of the acserverconfig.xml file to upload, as shown in Figure 7-68.

Figure 7-68 Updating the configuration file

5 On File Upload, select the file and choose Open.

6 On Update Configuration, choose OK.


7 On Cluster Configuration, left-click the cog icon and choose Start Cluster.

How to edit or download the configuration file

1 On Configuration File, choose Download acserverconfig.xml File.

2 On Opening acserverconfig.xml, choose to open the file for editing, or save the file, as shown in Figure 7-69.

Figure 7-69 Choosing to open or save acserverconfig.xml

Editing an existing cluster

The system administrator has access to the same properties when editing a cluster as when creating a cluster. Choosing to edit a cluster displays the side menu containing the same property categories the system administrator chooses from when creating a cluster.

How to edit an existing cluster

On Clusters, left-click the arrowhead next to a cluster and choose Edit, as shown in Figure 7-70.

Figure 7-70 Choosing to edit a cluster

Make any necessary property changes in any cluster category.


Managing a cluster node

After adding a cluster node to a cluster, the system administrator can access the node and perform the following operations on the node:

■ Starting and stopping

■ Editing

■ Deleting

How to access a cluster node

1 In System Console, navigate to Clusters. On Clusters, left-click the arrowhead next to the cluster name and choose Edit, as shown in Figure 7-71.

Figure 7-71 Choosing to edit a cluster

2 On Cluster Configuration, left-click the arrowhead next to a node. System Console displays the operations available to perform on a cluster node, as shown in Figure 7-72.

Figure 7-72 Accessing the cluster node menu


How to stop and start a cluster node

On Cluster, left-click the arrowhead icon next to a cluster node. If the node is running, Stop Node displays in the cluster node menu, as shown in Figure 7-73. Choose Stop Node to stop the cluster node.

Figure 7-73 Stopping a cluster node

If the node is not running, Start Node displays in the cluster node menu instead of Stop Node. Choose Start Node to start the cluster node.

How to edit a cluster node

1 On Cluster, left-click the arrowhead next to a cluster node. Choose Edit to edit the cluster node properties.

2 On Edit Cluster Node, make any necessary changes and choose OK, as shown in Figure 7-74.

Figure 7-74 Editing cluster node properties

How to delete a cluster node

On Cluster, left-click the arrowhead next to a cluster node. Choose Delete to delete the cluster node. Confirm the deletion.


Viewing the list of clusters

Clusters displays all existing clusters. The system administrator can view the entire list or a subset of the list. Choose Clusters to view the entire list. The system administrator can filter the cluster list to display only the following cluster subsets:

■ Clusters where the cluster ID contains a specified search string

■ Clusters that are starred

The following sections describe how to filter the cluster list.

Filtering the list of clusters using a search string

The system administrator can filter the cluster list to display only clusters containing a specified search string in the cluster ID. In Search, type the first letter or letters of a cluster ID to display the clusters for which the name starts with the specified letter or letters. Search is not case-sensitive.

System Console supports using an asterisk (*) in a search string as a wildcard character. The asterisk represents zero or more characters, excluding spaces and punctuation.

How to filter the list of clusters using a search string

Using the cluster list shown in Figure 7-75 as an example, type *est* in Search to find all cluster IDs that contain this string. System Console filters the list and displays the results, as shown in Figure 7-76.

Figure 7-75 Viewing the cluster list before filtering


Figure 7-76 Viewing the results of filtering the cluster list

How to refresh the list of clusters

Refresh the list of clusters by left-clicking Actions and choosing Refresh items, as shown in Figure 7-77.

Figure 7-77 Refreshing the clusters list

Alternatively, choose the x icon in the search text box.

Filtering non-starred clusters

The system administrator can filter the cluster list to show only starred clusters. A starred cluster appears with a filled-in star next to the cluster ID. A starred cluster appears in the list of clusters on Monitoring—Weekly Activity Report.

How to filter non-starred clusters

1 Choose Clusters, as shown in Figure 7-78.


Figure 7-78 Choosing Clusters

2 Select Starred. Only the starred clusters appear, as shown in Figure 7-79.

Figure 7-79 Viewing only the starred clusters

Choose All to re-display all clusters.

Deleting clusters

The system administrator can delete a single cluster or multiple clusters simultaneously.

How to delete a single cluster

Left-click the arrowhead icon next to the cluster name in the list of clusters and choose Delete to delete a cluster, as shown in Figure 7-80.


Figure 7-80 Deleting a single cluster

How to delete one or more clusters

Select each cluster to delete, then choose Actions. In Actions, select Delete items, as shown in Figure 7-81.

Figure 7-81 Deleting multiple clusters

How to delete all clusters

1 Left-click the arrowhead in the check box next to Actions and choose Select All, as shown in Figure 7-82.


Figure 7-82 Selecting all clusters

2 Left-click the arrowhead in Actions and choose Delete Items, as shown in Figure 7-83. Left-clicking a check box toggles between selected and deselected.

Figure 7-83 Deleting all clusters

Viewing cluster resources

System Console provides detailed information on activity and resource usage in a cluster. System Console groups this information into the following categories:

■ Logs: Diagnostic logs the cluster creates

■ Trends: System-level information

■ Current Activity: Number of active requests the cluster is processing

How to access cluster activity and resource usage information

1 On Clusters, left-click the arrowhead icon next to a cluster name and choose View Resources, as shown in Figure 7-84.

Figure 7-84 Choosing View Resources

2 On View Resources, choose a category from the side menu, as shown in Figure 7-85.

Figure 7-85 Choosing a resource information category

The following sections describe the resource information categories.

Viewing diagnostic log files using Logs

The system administrator can view the diagnostic logs a cluster creates. Each log is an aggregate of the most recent log entries from every node in the cluster. An aggregate log exists for each of the following log types:

■ ihub: System log entries

■ ihubc: Volume log entries

■ ihubd: Process Management Daemon (PMD) log entries

■ jsrvrihub: View service log entries

■ lmserver: Monitoring service log entries

The system administrator chooses the type of log to view, and how much of the log to view.

Each log entry contains the host name of the cluster node generating the log entry, the process ID, and the product name. Using the following jsrvrihub log entry as an example,

hn:URUP|pid:4920|prod:iHub|[Thread 1] 2013-11-25 08:49:51 UTC-0800 com.sun.xml.internal.bind.v2.runtime.reflect.opt.OptimizedAccessorFactory.get()

URUP is the host name, 4920 is the process id, and iHub is the product name.
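Because the aggregated log interleaves entries from every node, it is often useful to isolate the entries for a single node. If you save the log or work with an aggregated log file on disk, a standard text filter is sufficient; the file name below is an example only:

grep "hn:URUP" jsrvrihub.log

On Windows, findstr "hn:URUP" jsrvrihub.log produces the same result.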

How to view the diagnostic log

1 On Clusters, left-click the arrowhead icon next to a cluster name and choose View Resources.

2 From the side menu, choose Logs, as shown in Figure 7-86.

Figure 7-86 Choosing Logs

3 On View Resources—Logs, perform the following tasks:

1 In Log File, select the type of log file to view. For example, choose jsrvrihub.log.

2 In Log Buffer, Kb, select how many kilobytes of the log to view. For example, choose 10 KB.

The aggregated diagnostic log appears, as shown in Figure 7-87.


Figure 7-87 Viewing jsrvrihub.log

Viewing system-level information using Trends

System Console monitors each cluster and makes a broad range of system-level information on a cluster available for viewing, including statistics on cluster resource usage, performance, and load, presented in graphical and tabular formats.

How to view system-level information

1 On Clusters, left-click the arrowhead icon next to a cluster name and choose View Resources to access the list of system-level information categories.

2 From the side menu, choose Trends, as shown in Figure 7-88.

Figure 7-88 Choosing Trends

3 Choose one of the following system-level information categories on View Resources—Trends. A new browser window opens, in which Actuate Viewer displays one or more charts or tables showing the information for the chosen category.


■ Server Resource: For each of the following statistics, System Console displays a chart showing the last 48 hours of activity for every node in the cluster. Each node appears as a line in the chart:

❏ Response Time (milliseconds): Left-click anywhere on a line in the chart to generate node-specific charts showing the last 48 hours of activity for the following statistics:

❏ Response Times (milliseconds)

❏ Running Requests

❏ Total Completed OnDemand Requests Today

❏ CPU Usage (%)

❏ Memory Usage (%)

❏ Disk Usage (%)

When viewing the CPU Usage, Memory Usage, or Disk Usage charts, left-click anywhere on a line in the chart to generate node-specific charts showing the last 48 hours of activity for each of the following statistics:

❏ Memory, CPU and Disk Usage

❏ Processes CPU Usage (%)

❏ Processes Memory Usage (MB)

■ Work Units: Displays a chart showing usage for each of the following work units over the past 48 hours:

❏ BIRT Online

❏ BIRT Factory

❏ Dashboards

❏ Interactive Crosstabs

❏ Report Studio

■ Peak Work Units: Displays a table showing the dates that each work unit reached a particular threshold, and the number of times the work unit reached the threshold on each date.

■ Resource Group: View process, thread, and queue statistics by resource group. Displays a table showing the following information:

❏ Time stamp: Date and time at which BIRT iHub used the resource group

❏ Server name: Name of the server using the resource group

❏ Resource group: Name of the resource group

❏ Number of processes: Number of Factory processes the resource group can allocate for executing jobs

❏ Busy threads: Number of threads running under a resource group

❏ Queue size: Size of the queue where jobs wait to use the resource group

■ System Error: Displays a table listing system errors and their details.

About the Actuate Viewer menu

System Console displays each of the system-level information category views using Actuate Viewer. Actuate Viewer provides a menu that supports working with the contents of each system-level information category view.

Choose the icon in the banner of a system-level information view to access a menu containing the following options:

■ Disable Interactivity: Disables interactivity on a chart or table in a system-level information category view. By default, Actuate Viewer displays a chart or table in a system-level category view with interactivity enabled. Interactivity enables control over any column or chart in the view. Left-click a column name or chart in the view. Choose the ellipsis to choose options such as filtering, sorting, and alignment. Figure 7-89 shows the menu for a column.


Figure 7-89 Viewing the menu of column control options

■ Parameters: Supports refreshing the view. Choose Parameters. Then, choose Run Report.

■ Table of Contents: Supports immediately navigating to a location in the view.

■ Hide/Show Item: Supports hiding or showing one or more elements in a view, such as a chart, table, table column, or column label.

■ Link to this page: Provides a link to paste in an e-mail or message, or the HTML to embed in a web page.

■ Export Content: Supports downloading the view in one of the following formats:

■ Excel (XLS)

■ Excel (XLSX)

■ PDF

■ PostScript (PS)

■ PowerPoint (PPT)

■ PowerPoint (PPTX)

■ Word (DOC)

■ Word (DOCX)

■ XHTML

■ Export Data: Supports downloading the view data in one of the following file formats:

■ Comma (CSV)


■ Pipe (PSV)

■ Tab (TSV)

■ Semicolon (SSV)

■ Print: Supports printing the view.

■ Help: Accesses help for Actuate Viewer.

Figure 7-90 shows an example of the menu, from the Request/Response Time by Category and Application view.

Figure 7-90 Viewing the menu common to each system information view

Viewing Current Activity

Current Activity shows the Node Active Request Summary, which lists the number of active requests each cluster node is processing. Choose a node name in the list to see a detailed status report for the node. The Node Status report displays statistics for the following categories, as shown in Figure 7-91:


■ Resource Status: Displays the following resource usage statistics:

■ RAM: Percentage of total memory the node is using

■ CPU: Percentage of total CPU the node is using

■ Disk: Percentage of disk space the node is using

■ Process Status: Displays the following information about BIRT Service processes and BIRT iHub processes on the node:

Figure 7-91 Viewing the Node Status report


■ PID: Process ID number

■ Process Name: Name of the process

■ Resource Group: The resource group running the process

■ File Descriptors: Number of file descriptors the process is using

■ RAM: Amount of memory in megabytes that the process is using

■ CPU: Percentage of CPU the process is using

■ Total threads: Total number of threads the process is using

■ Busy Worker Threads: Number of worker threads performing work

■ Acquired WUs: Number of acquired work units

■ Queue Length: Length of the queue

■ Last Updated Time: The time at which Process Status information was captured

■ Active Requests Status: Displays the following information about requests that are active on the node:

■ PID: Process ID number

■ Request ID: Request ID number

■ Volume: Volume on which the request is active

■ User: User who initiated the request

■ Operation: Type of operation the request is performing


■ File: File the request is using

■ Status: Status of the request

■ Submission time: Time the user submitted the request

■ Running time: Length of time the request has been running

■ Action: Action the system is taking for the request

How to view the list of active requests

1 On Clusters, left-click the arrowhead icon next to a cluster name and choose View Resources.

2 On View Resources, choose Current Activity to access the Node Active Request Summary. Then left-click the Refresh button to update the Node Active Request Summary, as shown in Figure 7-92.

Figure 7-92 Viewing the list of active requests

3 On Node Active Request Summary, choose the node name for which you want to view a node status report. For example, choosing the node name URUP in Figure 7-92 generates the report shown in Figure 7-91.

Understanding the monitor and agent services

The monitor and agent services obtain the data that System Console uses to report on cluster activity.

This section provides an overview of what the services do, and discusses how to manage the content of the folder to which the services write.

This section makes reference to the following variables:


■ AC_CONFIG_HOME: Represents the folder containing the configuration directory that all cluster nodes share. For example, if the system administrator created a folder for the shared configuration directory named config_cluster, then in a default BIRT iHub installation, where BIRT iHub was installed as an individual module, AC_CONFIG_HOME represents the following path on Windows:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config_cluster

For an equivalent installation on Linux, AC_CONFIG_HOME represents:

/opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub/shared/config_cluster

■ AC_DATA_HOME: Represents the folder containing the lmservice folder, which contains the folders and files to which the monitoring and agent services write. In a default BIRT iHub installation on Windows, where BIRT iHub was installed as an individual module, AC_DATA_HOME represents the following path:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\data

In a default BIRT iHub installation on Linux, where BIRT iHub was installed as an individual module, AC_DATA_HOME represents the following path:

/opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub/data

■ AC_SERVER_HOME: Represents the BIRT iHub home folder. In a default BIRT iHub installation on Windows, where BIRT iHub was installed as an individual module, AC_SERVER_HOME represents the following path:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub

In a default BIRT iHub installation on Linux, where BIRT iHub was installed as an individual module, AC_SERVER_HOME represents the following path:

/opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub

About the monitor service

The monitor service obtains information from every node in a cluster and provides that information to System Console. System Console processes this information and presents it to the system administrator in the form of:

■ Alerts

■ BIRT iHub process logs

■ Errors

■ Reports and charts detailing cluster resource usage


About the agent service

BIRT iHub processes run on every node in a cluster. Each process generates log entries. The agent service runs on every cluster node. The monitor service runs on only one node, typically the node where System Console runs. Each agent service sends BIRT iHub process log entries to the monitor service. The monitor service aggregates logs in the AC_DATA_HOME\lmservice\aggregated-logfiles folder.

About the in-memory database

The monitor service uses an in-memory database to contain cluster resource usage data such as CPU, memory, disk space, resource group, and work unit usage. By default, the in-memory database holds the last 15 days of data. You can configure the number of days of data the in-memory database contains. The number of days of data the database contains is directly proportional to the amount of RAM required for the machine on which the monitor service runs. 3GB of RAM is sufficient for a cluster of two or three nodes handling an average of 650,000 daily requests. Reducing the number of days of data the in-memory database contains reduces the amount of RAM the monitoring service needs.

Managing the AC_DATA_HOME\lmservice folder

The monitor and agent services persist data in the AC_DATA_HOME\lmservice folder. The size of this folder grows over time. This section describes the data this folder contains and options for managing the data.

Table 7-3 introduces the folders and files AC_DATA_HOME\lmservice contains. The monitor or agent service writes to each folder or file continuously, over the life of a cluster. Folders or files such as the aggregated-logfiles folder, and the lmserver.log and lstailer.log files, grow continuously because the monitor or agent services do not overwrite or remove any data. The table indicates whether you can configure a limit on the size of a folder or file, such as the inmemdb and newkafka-logs folders. The table also indicates whether an item has any configurable properties other than a property supporting a limit on the folder or file size. Finally, the table lists the configuration file associated with each folder or file in AC_DATA_HOME\lmservice.

Table 7-3 Folders and files in AC_DATA_HOME\lmservice

Folder or file name | Grows continuously? | Configure maximum size? | Configure any other properties? | Configuration file for this folder or file
aggregated-logfiles | Yes | No | Yes | AC_CONFIG_HOME\lmsrvr\lmserviceconfig.properties
aggregator | No | No | No | n/a
hadoop | Yes | No | No | n/a
inmemdb | No | Yes | No | AC_CONFIG_HOME\lmsrvr\lmserviceconfig.properties
lmserver.log | Yes | Yes | Yes | AC_SERVER_HOME\etc\lmserver-log4j.properties
lstailer.log | Yes | Yes | Yes | AC_SERVER_HOME\etc\lmagent-log4j.properties
newkafka-logs | Yes | Yes | Yes | AC_SERVER_HOME\etc\lmserverconfig.properties
tailer | Yes | No | No | n/a

The following list describes each folder or file in AC_DATA_HOME\lmservice and includes any configuration options for each folder or file:

■ aggregated-logfiles: The monitor service aggregates the BIRT iHub process logs from all cluster nodes into this folder. The monitor service writes the log files as long as the cluster exists. The size of the folder grows continuously unless you remove files from this folder; see the example command after this list. New log files do not overwrite older log files.

AC_CONFIG_HOME\lmsrvr\lmserviceconfig.properties contains the following properties, which affect log file creation in the aggregated-logfiles folder, as shown in Listing 7-3:

■ lmservice.iHub.aggregator.logfile.maxsize.mb: Specifies, in megabytes, the file size at which the monitor service creates a new log file in the aggregated-logfiles folder. The default value of zero (0) specifies that the monitor service creates a new log file daily.

■ lmservice.iHub.aggregator.diagnostic.enable: Specifies whether the monitor service creates log files. The default value is true.

Listing 7-3 Configuring properties related to the aggregated-logfiles folder

...
# Set 0 to configure daily rotation for aggregated file.
lmservice.iHub.aggregator.logfile.maxsize.mb=0
lmservice.iHub.aggregator.diagnostic.enable=true
...

■ aggregator: The monitor service creates the files in this folder, which the service uses to track aggregated log creation activity. The size of this folder does not grow. The monitor service overwrites the files in this folder continuously.

■ hadoop: The monitor service persists in-memory database data to this folder as .csv files. The size of this folder grows continuously unless you remove older files from the folder. New files do not overwrite older files.

■ inmemdb: Contains the in-memory database data.

AC_CONFIG_HOME\lmsrvr\lmserviceconfig.properties contains the lmservice.monitoring.inmem.window.days.size property, which specifies the number of days of monitoring data that the in-memory database contains, as shown in Listing 7-4. The default value is 15 days.

Listing 7-4 Configuring lmservice.monitoring.inmem.window.days.size

...
lmservice.monitoring.inmem.window.days.size=15
...

■ lmserver.log: The monitor service log file.

AC_SERVER_HOME\etc\lmserver-log4j.properties contains the following properties for managing the lmserver.log file, as shown in Listing 7-5:

■ log4j.appender.file.File: Specifies the location of the lmserver.log file.

■ log4j.appender.file.MaxFileSize: Specifies the maximum size to which an lmserver.log file can grow. When the file reaches this size, the monitor service removes data on a first in, first out (FIFO) basis. The default value is 100MB.

■ log4j.appender.file.MaxBackupIndex: Specifies the maximum number of lmserver.log files the monitor service can create in addition to the original lmserver.log file. The default value is 2, which limits the maximum number of lmserver.log files to three, including the original lmserver.log file.

If log4j.appender.file.MaxFileSize=100MB and log4j.appender.file.MaxBackupIndex=2, the maximum size for all lmserver.log files combined is 300MB.

Listing 7-5 Configuring properties related to the lmserver.log file

...


log4j.appender.file.File=C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/data/lmservice/lmserver.log

log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=2
...

■ lstailer.log: The agent service log file.

AC_SERVER_HOME\etc\lmagent-log4j.properties contains the following properties for managing the lstailer.log file, as shown in Listing 7-6:

■ log4j.appender.file.File: Specifies the location of the lstailer.log file.

■ log4j.appender.file.MaxFileSize: Specifies the maximum size to which the lstailer.log file can grow. When the file reaches this size, the agent service removes data on a first in, first out (FIFO) basis. The default value is 100MB.

Listing 7-6 Configuring properties related to the lstailer.log file

...
log4j.appender.file.File=C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/data/lmservice/lstailer.log
log4j.appender.file.MaxFileSize=100MB
...

■ newkafka-logs: Kafka is the messaging engine that the agent service uses to send BIRT iHub process log entries from cluster nodes to the monitoring service. The agent service writes log messages that Kafka creates to the newkafka-logs folder.

AC_SERVER_HOME\etc\lmserverconfig.properties contains the following properties, which determine when the monitor service deletes log messages from the newkafka-logs folder, and the location of the folder, as shown in Listing 7-7:

■ log.retention.hours: The age at which a log message becomes eligible for deletion. The default value is 72 hours.

■ log.retention.size: The monitor service removes log messages from the newkafka-logs folder until the disk space that the log messages in the folder occupy drops to this level. The value of this property is expressed in bytes. By default, this property is commented out, but it shows a value of 1073741824 bytes, or 1 GB.


If the disk space that the total number of log messages in the newkafka-logs folder occupies is greater than the log.retention.size value, the agent service deletes only log messages older than the log.retention.hours value.

■ log.dir: The location of the newkafka-logs folder.

Listing 7-7 Configuring properties related to the newkafka-logs folder

...
# The minimum age of a log file to be eligible for deletion
log.retention.hours=72

# A size-based retention policy for logs. Segments are pruned
# from the log as long as the remaining segments don't drop
# below log.retention.size.
#log.retention.size=1073741824
...
# The directory under which to store log files
log.dir=C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/data/lmservice/newkafka-logs

■ tailer: The agent service writes data files to the tailer folder that support keeping track of which BIRT iHub process log entries the agent service has sent to the monitor service. If the cluster stops, the agent service knows where in a given log file to begin processing when the cluster starts again. The size of this folder grows continuously unless you remove older files from the folder. New files do not overwrite older files.
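For folders such as aggregated-logfiles, hadoop, and tailer, which grow until you remove files yourself, an ordinary operating system command is enough to prune old files. The following sketch assumes the default Linux installation path shown earlier and an illustrative 30-day retention period; adjust the path and age to your installation and retention policy:

find /opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub/data/lmservice/aggregated-logfiles -type f -mtime +30 -delete

Run the equivalent cleanup for the hadoop and tailer folders if their growth becomes a concern.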

Using the ControlConsole utility to free memory

LMServer maintains an in-memory database. By default, this database holds up to 15 days of monitoring data. This database can fill up, causing LMServer to run out of memory.

ControlConsole is a Java utility that performs various logging and monitoring operations. ControlConsole accepts the following options for freeing memory:

■ gc: Runs Java garbage collection

■ dataunload <number of days of monitoring data to keep>: Releases monitoring data from memory that is older than the number of days you specify


How to run the ControlConsole utility

1 Create the following environment variables:

■ AC_CONFIG_HOME: Location of the shared configuration directory for a cluster. If the system administrator created a folder for the shared configuration directory named config_cluster, and assuming the BIRT iHub installation folder is C:\Actuate3\BIRTiHubVisualization, the default location for AC_CONFIG_HOME is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config_cluster

■ AC_DATA_HOME: Location of logs to which BIRT iHub processes write and System Console monitors. Assuming the BIRT iHub installation folder is C:\Actuate3\BIRTiHubVisualization, the default location for AC_DATA_HOME is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\data

■ JAVA_HOME: Location of the Java JRE. Assuming the BIRT iHub installation folder is C:\Actuate3\BIRTiHubVisualization, the default location for the JRE is:

C:\Actuate3\BIRTiHubVisualization\java

2 Append the following value to the PATH environment variable:

%JAVA_HOME%\bin
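As a sketch only, steps 1 and 2 can be performed for the current session from a Windows command prompt using the default paths above; set the variables as system environment variables instead if you want them to persist:

set AC_CONFIG_HOME=C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config_cluster
set AC_DATA_HOME=C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\data
set JAVA_HOME=C:\Actuate3\BIRTiHubVisualization\java
set PATH=%PATH%;%JAVA_HOME%\bin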

3 Open a command prompt. Navigate to AC_SERVER_HOME\Jar. The lmsutility.jar file contains the ControlConsole utility. Run ControlConsole by executing the Java command, including the classpath of the ControlConsole utility, followed by:

-h <hostname> <command option>

For example, on a machine named urup, run ControlConsole with the help command option, which lists the ControlConsole command options:

java -classpath .;lmsutility.jar com.actuate.lmservice.utils.ControlConsole -h urup help

Listing 7-8 shows this command and its output.

Listing 7-8 Executing the ControlConsole help command

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\Jar>java -classpath .;lmsutility.jar com.actuate.lmservice.utils.ControlConsole -h urup help

Help documentation:
Usage: lmsctrl -p password (-h hostname) command


Usage: ping command does not need to have password. hostname is
optional where command is one of the following:
  shutdown lmserver | lstailer
  ping lmserver
  gc
  start logging | monitoring
  stop logging | monitoring
  restart logging | monitoring
  memprofile
  datareload
  dataunload day          //dataunload command second parameter is number of days to keep
  nightlysnapshot
  snapshotcsv yyyy-MM-dd  //snapshotcsv command second parameter is data format
  help
Command Example:
  lmsctrl -p password -h svr1 shutdown lmserver
  lmsctrl -p password -h svr1 restart monitoring

4 Run Java garbage collection by executing the gc command, as shown in Listing 7-9.

Listing 7-9 Running Java garbage collection

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\Jar>java -classpath .;lmsutility.jar com.actuate.lmservice.utils.ControlConsole -h urup gc -p password
Send control command to host:urup

LMServer execute gc command issued

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\Jar>

5 Release monitoring data from memory that is older than the number of days you specify by running ControlConsole with the dataunload <number of days of data to keep> command option. Listing 7-10 shows running this command option to keep three days worth of monitoring data.

Listing 7-10 Releasing monitoring data older than number of days you specify

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\Jar>java -classpath .;lmsutility.jar com.actuate.lmservice.utils.ControlConsole -h urup dataunload 3 -p password

Send control command to host:urup

dataunload command successful

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\Jar>


6 View the amount of memory used and the amount of memory free by running ControlConsole with the memprofile command option. Listing 7-11 shows running ControlConsole with this command option.

Listing 7-11 Viewing the amount of memory used and amount of memory free

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\Jar>java -classpath .;lmsutility.jar com.actuate.lmservice.utils.ControlConsole -h urup memprofile -p password

Send control command to host:urup

MEMORY USED: 25 MB MEMORY FREE: 353 MB Completed operation in : 2 Sec

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\Jar>

BIRT iHub service and resource group properties

The shared configuration file, acserverconfig.xml, contains the server configuration templates that the nodes in a cluster use. In a typical BIRT iHub installation, the location of acserverconfig.xml is AC_CONFIG_HOME.

Each node uses a single server configuration template. The names of the default templates are small, medium, large, and disable. The system administrator can assign the disable configuration template to a cluster node to prevent the node from performing any processing. By default, all the resource groups in the disable template are configured to run no Factory processes. For example, assign the disable template to a node that runs only the Monitor service.

Table 7-4, Table 7-5, and Table 7-6 describe the Reporting, Viewing, and Integration service properties that the small, medium, and large server configuration templates contain. The tables show when a property change takes effect, and include the default value for the property for each of the templates. If a template does not contain a property, the node uses the default value. To use a non-default value, add the property to the template.

Reporting service template properties

Table 7-4 Viewing the Reporting service template properties

Property Name | Description | Takes effect | Small | Medium | Large
BIRTChartConvertToImageTimeOut |  |  |  |  | 
DatamartArchiveLimit | Number limit for datamart files | Server Restart | 10 | 20 | 40
EnableGenerationService | Enable factory service | Immediate | true | true | true
MaxBIRTDataResultsetBufferSize | Maximum result set buffer size for BIRT data object generation query | Server Restart | 128 MB | 256 MB | 512 MB
MaxDatamartArchiveSize | Maximum memory size for each datamart file | Server Restart | 30720 KB | 30720 KB | 30720 KB
MaxPersistentArchiveSize | Maximum memory size for each persistent BIRT document file | Server Restart | 512 KB | 512 KB | 512 KB
MaxSyncJobRuntime | Maximum execution time for on-demand execution requests | Immediate | 300 seconds | 300 seconds | 300 seconds
MaxSyncRequestTime | Maximum execution time for all request types except on-demand execution | Immediate | 300 seconds | 300 seconds | 300 seconds
PersistentArchiveLimit | Number limit for persistent BIRT document files | Server Restart | 100 | 200 | 400
SyncJobQueueSize | Job queue size for synchronous reports | Immediate | 100 | 100 | 100
SyncJobQueueWait | Job queue timeout for transient reports | Immediate | 600 seconds | 600 seconds | 600 seconds
SynchReportingWeight | Weight of this server for load balancing on-demand execution requests | Immediate | 100 | 200 | 400
TransientReportCacheSize | Disk cache size for transient reports | Immediate | 100 MB | 100 MB | 100 MB
TransientReportTimeOut | Disk cache timeout for transient reports | Server Restart | 30 minutes | 30 minutes | 30 minutes
TransientStoreMaxCacheEntries | Maximum memory cache entries for transient reports | Immediate | 10000 | 10000 | 10000


Viewing service template properties

Table 7-5 Viewing the Viewing service template properties

Property Name | Description | Takes effect | Small | Medium | Large
BIRTImageCacheTimeout | Cache timeout for images and charts from BIRT designs, documents and datamarts | Server Restart | 86400 seconds | 86400 seconds | 86400 seconds
BIRTReportDesignCacheTimeout | Cache timeout for BIRT designs | Server Restart | 1800 seconds | 1800 seconds | 1800 seconds
BIRTReportDesignCacheTotalNumberOfEntries | Maximum number of BIRT designs to cache | Server Restart | 50 | 50 | 50
DatamartArchiveLimit | Number limit for datamart files | Server Restart | 10 | 10 | 20
EnableViewingService | Enable viewing service | Immediate | true | true | true
FileCacheTimeout | Cache timeout for search results, table of contents and image files | Server Restart | 86400 seconds | 86400 seconds | 86400 seconds
JavaServerClientConnectionTimeout | The timeout value for the connections from client to encyc server | Server Restart | 300 seconds | 300 seconds | 300 seconds
JavaServerClientMaxConnections | Maximum number of connections from client to encyc server | Server Restart | 8 | 8 | 8
JavaServerClientMinConnections | Minimum number of connections from client to encyc server | Server Restart | 3 | 3 | 3
MaxBIRTDataResultsetBufferSize | Maximum buffer size for BIRT Data Object query result set in Information Console | Server Restart | 8 | 8 | 8
MaxConcurrentRequests | Maximum queue size per process for requests | Immediate | 128 | 128 | 128
MaxDatamartArchiveSize | Maximum memory size for each datamart file | Server Restart | 30720 KB | 30720 KB | 30720 KB
MaxPersistentArchiveSize | Maximum memory size for each persistent BIRT document file | Server Restart | 512 KB | 512 KB | 512 KB
MaxTransientArchiveSize | Maximum memory size for each transient BIRT document file | Server Restart | 1024 KB | 1024 KB | 1024 KB
OnDemandServerViewMessageTimeout | Timeout for on-demand and viewing messages | Server Restart | 300 seconds | 300 seconds | 300 seconds
PersistentArchiveFileCacheTimeout | Expiration timeout for persistent BIRT documents and datamarts in the archive file cache | Server Restart | 7200 seconds | 7200 seconds | 7200 seconds
PersistentArchiveLimit | Number limit for persistent BIRT document files | Server Restart | 100 | 200 | 400
TransientArchiveFileCacheTimeout | Expiration timeout for transient BIRT documents and datamarts in the archive file cache | Server Restart | 1200 seconds | 1200 seconds | 1200 seconds
TransientArchiveLimit | Number limit for transient BIRT document files | Server Restart | 100 | 200 | 400
ViewingWeight | Weight of this server for load balancing viewing requests | Immediate | 100 | 200 | 400

Integration service template properties

Table 7-6 Viewing the Integration service template properties

Property Name | Description | Takes effect | Small | Medium | Large
BufferPoolSize | Buffer pool size | Server Restart | 18000 pages | 18000 pages | 18000 pages
EnableIntegrationService | Enable Integration service | Immediate | false | false | false
PagePoolSize | Page pool size | Server Restart | 2000 pages | 2000 pages | 2000 pages
StartArguments | Start parameters for Integration service processes | Server Restart | -Xms64M -Xmx128M com.nimble.nie.server.Server | -Xms64M -Xmx128M com.nimble.nie.server.Server | -Xms64M -Xmx128M com.nimble.nie.server.Server

Understanding resource groups

A resource group controls the Factory processes that BIRT iHub uses to run a synchronous or asynchronous job. A resource group specifies a set of Factory processes reserved to run only those jobs assigned to the group.

A design that runs unscheduled runs synchronously, in the foreground, as soon as possible, and iHub does not store the generated document in the volume. A scheduled job runs asynchronously in the background, and iHub stores the generated document in the volume. Whether you generate a document by running a design scheduled or unscheduled, you can view, navigate, and search the generated document.

BIRT iHub uses the following default resource groups:

■ Default BIRT Factory: Used to run a BIRT design (.rptdesign) as a scheduled job. Also used to print a BIRT document (.rptdocument).

■ Default BIRT Online: Used for running a BIRT design unscheduled and viewing the generated BIRT document.

■ Default Dashboards: Used for running a BIRT dashboard (.dashboard) or gadget (.gadget) design unscheduled and viewing the generated document.

■ Default Interactive Crosstabs: Used for running a Data Object Store (.data) design unscheduled and viewing the generated document.

■ Default Report Studio: Used when creating, modifying, and viewing documents using Report Studio.



Resource group properties pertaining to a template

Each template contains an XML element named <ServerResourceGroupSetting> for each resource group. This element contains attributes that affect the configuration of the resource group only for that template.

A <ServerResourceGroupSetting> element contains the following properties for a resource group:

■ Name: Name of the resource group.

■ Activate: Possible values are true or false. A value of true activates the resource group, enabling the node using this configuration template to use the Factory processes this resource group provides. A value of false deactivates the resource group. By default, Activate is true for all resource groups.

■ MinFactory: Minimum number of factories a resource group can use.

■ MaxWorkUnit: Maximum number of work units a Java view server can use.

With respect to dashboards, iHub 3.1.1 increases the scope of the work that a work unit can handle. In previous versions, work units represented a capacity to execute requests. In iHub 3.1.1, work units represent user sessions. This enhancement results in improved dashboard performance.

■ StartArguments: The Java Runtime Environment (JRE) start arguments, or Java command-line options, for the resource group.

Any change to a resource group property requires a cluster restart to take effect.
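As a sketch only, a <ServerResourceGroupSetting> element that sets these properties might look like the following. The Name, Activate, MinFactory, and StartArguments values are taken from the defaults described in this section, but the MaxWorkUnit value is illustrative and the exact attribute layout in your acserverconfig.xml templates may differ:

<ServerResourceGroupSetting Name="Default BIRT Online"
    Activate="true"
    MinFactory="1"
    MaxWorkUnit="100"
    StartArguments="-Xmx512M -XX:MaxPermSize=256m -XX:-UsePerfData
      -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true
      -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol
      com.actuate.javaserver.Server"/>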

The StartArguments property appearing in each <ServerResourceGroupSetting> element includes the following JRE start arguments:

■ Heap limit option: Specifies the amount of heap the Java process can use. For example, -Xmx512M specifies that the Java process can use 512 MB of heap. Too large a heap can slow garbage collection because there is more heap to scan. This property affects Java view server memory usage.

■ MaxPermSize: PermSize is additional heap space, separate from the space the Heap limit option specifies. The heap space that PermSize specifies holds reflective data for the JVM, such as class and method objects. By specifying MaxPermSize without also specifying PermSize, heap size does not increase unless an application needs more heap.


■ UsePerfData: -XX:-UsePerfData turns off jvmstat instrumentation.

■ Preferred protocol stack: -Djava.net.preferIPv4Stack=true permits only IPv4 network connections. The default value of java.net.preferIPv4Stack is true.

■ Headless graphics option: Includes the Java graphics environment in lieu of a native graphics environment when set to true. For example, -Djava.awt.headless=true specifies including the Java graphics environment.

■ Protocol library specification: For example, -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol specifies the package name in which the Actuate protocol handler class can be found.

■ Java server entry point specification: For example, com.actuate.javaserver.Server specifies the Java server main class.

The following list describes the start arguments that each StartArguments property specifies, for the small, medium, and large templates. Additionally, if the <ServerResourceGroupSetting> element contains a MinFactory property, the list includes the value for this property.

■ Small

■ Default BIRT Online This resource group contains the following start arguments:

-Xmx512M -XX:MaxPermSize=256m -XX:-UsePerfData-Djava.awt.headless=true -Djava.net.preferIPv4Stack=true-Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

The minimum number of factories this resource group can use is 1.

■ Default Dashboards This resource group contains the following start arguments:

-Xmx512M -XX:MaxPermSize=256m -XX:-UsePerfData-Djava.awt.headless=true -Djava.net.preferIPv4Stack=true-Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

The minimum number of factories this resource group can use is 1.

■ Default Interactive Crosstabs This resource group contains the following start arguments:


-Xmx512M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

■ Default BIRT Factory This resource group contains the following start arguments:

-Xmx512M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

■ Default Report Studio This resource group contains the following start arguments:

-Xmx512M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

■ Medium

■ Default BIRT Online This resource group contains the following start arguments:

-Xmx1024M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

The minimum number of factories this resource group can use is 1.

■ Default Dashboards This resource group contains the following start arguments:

-Xmx1024M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

The minimum number of factories this resource group can use is 1.

■ Default Interactive Crosstabs This resource group contains the following start arguments:

-Xmx1024M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server


■ Default BIRT Factory This resource group contains the following start arguments:

-Xmx1024M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

■ Default Report Studio This resource group contains the following start arguments:

-Xmx1024M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

■ Large

■ Default BIRT Online This resource group contains the following start arguments:

-Xmx2048M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

The minimum number of factories this resource group can use is 1.

■ Default Dashboards This resource group contains the following start arguments:

-Xmx2048M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

The minimum number of factories this resource group can use is 1.

■ Default Interactive Crosstabs This resource group contains the following start arguments:

-Xmx2048M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

■ Default BIRT Factory This resource group contains the following start arguments:

-Xmx2048M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

■ Default Report Studio This resource group contains the following start arguments:

-Xmx2048M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server

Resource group properties pertaining to all templates
The shared configuration file, acserverconfig.xml, contains an XML element named <ResourceGroup> for each resource group. This element contains attributes that affect the configuration of the resource group for all templates.

A <ResourceGroup> element contains the following properties for a resource group, combined in the illustrative element that follows this list:

■ Name
Name of the resource group.

■ Type
Type of job the resource group supports. Resource groups support the following job types:

■ View
Supports running a job synchronously, or unscheduled, and viewing a document.

■ Async
Supports running a job asynchronously, or scheduled, and printing a document.

■ Volume
Supports specifying a particular volume. Default value is all volumes.

■ Disabled
Supports disabling and enabling the resource group. Value is true or false. Default value is false.

■ Reserved
Supports reserving a synchronous resource group for targeted requests.

■ ReportType
Supports running a report type of JavaReport.

■ Description
The resource group description.


■ MaxPriority
Used by a resource group having a Type of Async. The maximum priority a job can have. A job with a higher priority than another job runs first. The range of possible values is 0 through 1000.

■ MinPriority
Used by a resource group having a Type of Async. The minimum priority a job can have. A job with a lower priority than another job runs after the higher priority job. The range of possible values is 0 through 1000.

■ WorkUnitType
Specifies the type of processing this resource group can perform. For example, generating a BIRT document asynchronously requires the BIRT Factory work unit type. Generating a BIRT document immediately requires the BIRT Online work unit type.
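The following sketch shows how these properties can combine in a single element. It is for illustration only; the attribute values are hypothetical, so use the <ResourceGroup> elements in your own acserverconfig.xml as the authoritative reference for attribute names and values:

<ResourceGroup
   Name="Default BIRT Factory"
   Type="Async"
   Disabled="false"
   Reserved="false"
   ReportType="JavaReport"
   MinPriority="0"
   MaxPriority="1000"
   WorkUnitType="BIRT Factory"
   Description="Runs scheduled BIRT document generation jobs"/>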

Configuring data source connections in BIRT iHub
A connection configuration file is an XML file that specifies the data source connection properties to use when BIRT iHub runs a design. Keeping the data source connection information for a design in an external file makes it convenient to modify; you can change the connection information without altering the design. You specify the location of the file in the template settings in acserverconfig.xml.

To specify the location of a data configuration file in acserverconfig.xml, add the ConnConfigFile property containing the configuration file path as shown in the following code:

<Config>
   <Templates>
      <Template ConnConfigFile="config_file_path">
         ...
      </Template>
   </Templates>
</Config>

You can create an external connection profile to a data source used by a design. Changes to the profile are automatically picked up by the design. The settings in a connection configuration file override any connection configuration properties in the connection profile. The sample connection configuration file in Listing 7-12 externalizes the file path to the connection profile, C:\PostgreSQL.profile.

Listing 7-12 BIRT connection configuration file example

<oda-data-source extensionID="org.eclipse.birt.report.data.oda.jdbc"
   name="JDBC Data Source - PostgreSQL" id="783">
   <property name="odaDriverClass">com.actuate.jdbc.postgresql.PostgreSQLDriver</property>
   <property name="odaURL">jdbc:actuate:postgresql://DBSRV1-W2K</property>
</oda-data-source>
<ConnectOptions Type="org.eclipse.birt.report.data.oda.jdbc_JDBC Data Source - PostgreSQL">
   <Property PropName="OdaConnProfileStorePath">C:\Mypath</Property>
</ConnectOptions>

In a BIRT design, the configuration key used to specify a data source is the unique ID of the ODA data source extension combined with the data source name defined in the BIRT design or library. You concatenate the string as follows:

extensionID + "_" + data source name

For example, the key is org.eclipse.birt.report.data.oda.jdbc_PostgreSQL.
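In a connection configuration file, this concatenated key appears as the Type attribute of the <ConnectOptions> element. For instance, for the data source in Listing 7-12, whose name is JDBC Data Source - PostgreSQL, the element begins as follows:

<ConnectOptions Type="org.eclipse.birt.report.data.oda.jdbc_JDBC Data Source - PostgreSQL">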

Exporting and importing a volume
BIRT iHub 3.1.1 contains scripts that support exporting a volume from either a BIRT iHub 3.1 or 3.1.1 system, and importing the volume to either a BIRT iHub 3.1 or 3.1.1 system.

Exporting and importing a volume is useful, for example, when you want to back up a volume to a second machine.

Exporting and importing a volume consists of the following tasks:

■ Run the export_volume script on the machine from which you are exporting the volume. The script creates an archive file containing the volume data and volume metadata. The name of the archive file is <volume name>.zip on both a Windows machine and a Linux machine.

■ Copy the archive file to the machine to which you are importing the volume.

■ Run the import_volume script on the machine to which you are importing the volume. The script uses the archive file to create the volume.

This section makes reference to AC_SERVER_HOME, a variable that represents the BIRT iHub home folder. In a default BIRT iHub installation on Windows, where BIRT iHub was installed as an individual module, AC_SERVER_HOME represents the following path:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub

In a default BIRT iHub installation on Linux, where BIRT iHub was installed as an individual module, AC_SERVER_HOME represents the following path:

/opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub


Exporting or importing a volume when one BIRT iHub system is Release 3.1
BIRT iHub contains the tools that support exporting and importing a volume beginning with Release 3.1.1. If one of the two BIRT iHub systems involved in a volume export or import operation is a Release 3.1 system, you must copy the iHub Release 3.1.1 tools to the BIRT iHub Release 3.1 system.

How to copy BIRT iHub 3.1.1 tools to a BIRT iHub 3.1 system

1 In the BIRT iHub 3.1.1 system, navigate to AC_SERVER_HOME\tools.

2 Copy the 3.1F1Tools.zip file (for Windows machines) or the 3.1F1Tools.tar.gz file (for Linux machines) to AC_SERVER_HOME in the BIRT iHub 3.1 system.

3 On a Windows system, extract the contents of 3.1F1Tools.zip to a new folder, AC_SERVER_HOME\3.1F1Tools. The name of the new folder must be 3.1F1Tools.

On a Linux system, you do not need to create the 3.1F1Tools folder manually. The tar command creates this folder automatically when you use the command from AC_SERVER_HOME to extract the contents of 3.1F1Tools.tar.gz.
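A minimal sketch of the Linux extraction, assuming the archive was copied to AC_SERVER_HOME as described in step 2; -xzf is the standard tar option set for extracting a .tar.gz archive:

cd /opt/actuate/BIRTiHubVisualization/modules/BIRTiHub/iHub
tar -xzf 3.1F1Tools.tar.gz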

Exporting a volume
In this section:

■ For a BIRT iHub 3.1 system involved in an export operation, AC_SERVER_HOME\<3.1tools> refers to the AC_SERVER_HOME\3.1F1Tools folder.

■ For a BIRT iHub 3.1.1 system involved in an export operation, AC_SERVER_HOME\<3.1tools> refers to the AC_SERVER_HOME\tools folder.

How to export a volume

This procedure assumes that you are working in a Windows environment. If the task a step describes is different for a Linux environment, the step includes the Linux version of the task.

1 Copy AC_SERVER_HOME\<3.1tools>\sample_properties\export_volume.properties to AC_SERVER_HOME\<3.1tools>\bin.

2 Open AC_SERVER_HOME\<3.1tools>\bin\export_volume.properties using a text editor. When specifying a path name in this file, always use a forward slash (“/”) as the delimiter character. Perform the following tasks:

1 For SOURCE_VOLUME_NAME, specify the name of the volume you are exporting.


2 For EXPORT_PATH, specify the full path of the folder to which the export_volume script writes the archive file it creates from the volume you export. Alternatively, leave this property commented out to accept the default value, AC_SERVER_HOME/data/migrate/workspace. The export_volume script creates this folder if it does not exist.

3 If a volume in the BIRT iHub System to which you are importing a volume has the same name as the volume you are exporting, specify a new name for the export volume using TARGET_VOLUME_NAME to avoid a naming conflict. If a naming conflict between volumes occurs, the export_volume script aborts and displays an error message. Leave TARGET_VOLUME_NAME commented out to leave the volume name unchanged.

4 If a schema in the BIRT iHub System to which you are importing a volume has the same name as the schema for the volume you are exporting, specify a new name for the export volume schema using TARGET_SCHEMA_NAME to avoid a naming conflict. If a naming conflict between schemas occurs, the export_volume script aborts and displays an error message. Leave TARGET_SCHEMA_NAME commented out to leave the schema name unchanged.

5 For SOURCE_DATABASE_USER_PASSWORD, specify the password for the BIRT iHub metadata database used by the BIRT iHub system from which you are exporting the volume. For example, the default password for the PostgreSQL database that installs with BIRT iHub is postgres.

6 For TARGET_DATABASE_TYPE, specify whether the metadata database used by the BIRT iHub system to which you are importing the volume is PostgreSQL or Oracle. Permissible values are PostgreSQL and Oracle.

7 The folder that WORKSPACE specifies contains the log files that the export_volume script creates, and also the <export volume name>.zip file if you left EXPORT_PATH commented out. Leave the WORKSPACE property commented out to accept the default value, AC_SERVER_HOME/data/migrate/workspace. Alternatively, specify the full path of the folder to which you want the export_volume script to write log files. The export_volume script creates the WORKSPACE folder if it does not exist.

Listing 7-13 shows an example of export_volume.properties.

Listing 7-13 Setting the properties the export_volume script uses

#Always use a forward slash ("/") as the delimiter character in
#a pathname in this properties file, even when using the file
#in a Windows environment.

#Specify the name of the volume you are exporting
SOURCE_VOLUME_NAME=volume1

#Specify the full path of the folder to which the export volume
#script writes the archive file the script creates from the
#volume you export. Leave this property commented out to accept
#the default path, <AC_SERVER_HOME>/data/migrate/workspace.
#EXPORT_PATH=

#If a volume on the target BIRT iHub system has the same name
#as the volume you are exporting, specify a new volume name to
#avoid a naming conflict. Leave TARGET_VOLUME_NAME commented
#out to leave the volume name unchanged.
#TARGET_VOLUME_NAME=

#If a schema on the target BIRT iHub system has the same name
#as the schema for the volume you are exporting, specify a new
#schema name to avoid a naming conflict. Leave
#TARGET_SCHEMA_NAME commented out to leave the schema name
#unchanged.
#TARGET_SCHEMA_NAME=

#Specify the password for the BIRT iHub metadata database used
#by the BIRT iHub system from which you are exporting the
#volume.
SOURCE_DATABASE_USER_PASSWORD=postgres

#Specify the type of database the target BIRT iHub system uses
#for volume metadata. Permissible values are PostgreSQL and
#Oracle.
TARGET_DATABASE_TYPE=PostgreSQL

#Specify the full path of the folder to which the export_volume
#script writes log files. Alternatively, leave this property
#commented out to accept the default location,
#<AC_SERVER_HOME>/data/migrate/workspace.
#WORKSPACE=

3 Stop iHub processes. For information on performing this task, see “Stopping and starting iHub processes,” in Chapter 4, “Configuring a cluster using shared files.” Wait until all services have stopped before going to the next step.

4 Open a command prompt. Navigate to AC_SERVER_HOME\<3.1tools>\bin. Execute the export_volume script, passing it the export_volume.properties file. If export_volume.properties is in a different folder than the export_volume script, specify the full path of export_volume.properties in the command.

If export_volume.bat and export_volume.properties are in AC_SERVER_HOME\<3.1tools>\bin, execute the following command:

export_volume.bat export_volume.properties


If your environment is Linux, and export_volume.sh and export_volume.properties are in AC_SERVER_HOME/<3.1tools>/bin, execute the following command:

sh ./export_volume.sh export_volume.properties

The export_volume script creates the volume archive file.

5 Start iHub processes. For information on performing this task, see “Stopping and starting iHub processes,” in Chapter 4, “Configuring a cluster using shared files.” Wait until all services have started before going to the next step.

6 Copy the archive file that the export_volume script created to the machine on which you are importing the volume. You can copy the archive file to any location on this machine. For example, copy the file to a new folder, C:\volume_archive_files.

Viewing the log file the export_volume script creates
The export_volume script creates a log file for each volume the script exports. For example, the default location of the log file for a volume named volume1 is:

AC_SERVER_HOME\data\migrate\workspace\logs\export\volumes\volume1\volume1.<timestamp>.log

For a volume named volume2, the default location is:

AC_SERVER_HOME\data\migrate\workspace\logs\export\volumes\volume2\volume2.<timestamp>.log

Preparing to import a volume to a BIRT iHub system that uses an Oracle database
If the BIRT iHub system to which you are importing a volume uses an Oracle database for volume metadata:

■ An Oracle client must be present on the machine to which you are importing the volume

■ The Oracle client tnsnames.ora file must specify the following information:

■ Database name

■ Name of the machine to which you are importing the volume

■ Port number where the database listens for requests

For example, Listing 7-14 shows a tnsnames.ora file in which the database name is ORA11-WIN, the machine name is URUP, and the port number is 1521.

Listing 7-14 Configuring the tnsnames.ora file

ORA11-WIN =
   (DESCRIPTION =
      (ADDRESS_LIST =
         (ADDRESS = (PROTOCOL = TCP)(HOST = URUP)(PORT = 1521))
      )
      (CONNECT_DATA =
         (SID = ORA11)
         (SERVER = DEDICATED)
      )
   )

The tnsnames.ora file enables the import_volume script to connect to the database on the machine to which you are importing the volume. The script connects to the database, then loads the volume metadata into it.

The import_volume script must be able to access the Oracle client. Make sure the Path environment variable includes the Oracle client path.
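For example, in a Windows command session you might append the Oracle client bin folder to the Path before running the script. The folder shown here is hypothetical; substitute the path of your own Oracle client installation:

set PATH=%PATH%;C:\oracle\product\12.1.0\client_1\bin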

Importing a volume
In this section:

■ For a BIRT iHub 3.1 system involved in an import operation, AC_SERVER_HOME\<3.1tools> refers to the AC_SERVER_HOME\3.1F1Tools folder.

■ For a BIRT iHub 3.1.1 system involved in an import operation, AC_SERVER_HOME\<3.1tools> refers to the AC_SERVER_HOME\tools folder.

How to import a volume

This procedure assumes that you are working in a Windows environment. If the task a step describes is different for a Linux environment, the step includes the Linux version of the task.

1 Copy AC_SERVER_HOME\<3.1tools>\sample_properties\import_volume.properties to AC_SERVER_HOME\<3.1tools>\bin.

2 Open AC_SERVER_HOME\<3.1tools>\bin\import_volume.properties using a text editor. When specifying a path name in this file, always use a forward slash (“/”) as the delimiter character. Perform the following tasks:

1 For IMPORT_FILE, specify the full path of the location where you copied the volume archive file on the machine to which you are importing the volume, for example, C:/volume_archive_files/volume1.zip.

2 For VOLUME_STORAGE_ROOT, specify the full path of the folder to contain the imported volume data after you run the import_volume script. The script creates this folder.

3 For CREATE_DATABASE_SCHEMA, specify one of the following values:


❏ true
Specify true to create a new schema for the volume you are importing. If you specified a value for the TARGET_SCHEMA_NAME property in export_volume.properties, this value is the name of the new schema. If you did not specify a value for TARGET_SCHEMA_NAME, the new schema has the same name as the original volume schema. If a schema having the same name as the new schema already exists on the machine to which you are importing the volume, the import_volume script will abort and display an error message.

❏ false
Specify false if the schema for the volume you are importing already exists on the machine to which you are importing the volume, and the imported volume is to use this schema. Ensure that the schema is empty before running the import_volume script. If this schema does not exist, the import_volume script will abort and display an error message.

4 For TABLESPACE_NAME, specify a tablespace for the schema of the volume you are importing if you are using an Oracle database for volume metadata. Alternatively, accept the default value of USERS. The import_volume script ignores this property if you are not using an Oracle database.

5 For DATABASE_ADMINISTRATOR and DATABASE_ADMINISTRATOR_PASSWORD, specify the name and password for the database administrator. For example, the default database administrator and database administrator password for the PostgreSQL database that installs with BIRT iHub is postgres.

6 For DATABASE_USER_PASSWORD, specify the database user password. For the PostgreSQL database that installs with BIRT iHub, the password is postgres.

7 The folder that WORKSPACE specifies contains the log files that the import_volume script creates. Leave the WORKSPACE property commented out to accept the default value, AC_SERVER_HOME/data/migrate/workspace. Alternatively, specify the full path of the folder to which you want the import_volume script to write log files. The import_volume script creates the WORKSPACE folder if it does not exist.

Listing 7-15 shows an example of import_volume.properties:

Listing 7-15 Setting the properties the import_volume script uses

#Always use a forward slash ("/") as the delimiter character in
#a pathname in this properties file, even when using the file
#in a Windows environment.

#Specify the full path of the location where you copied the
#volume archive file on the machine to which you are importing
#the volume.
IMPORT_FILE=C:/volume_archive_files/volume1.zip

#Specify the full path of the folder that will contain the
#imported volume data after you run the import_volume script,
#for example: <AC_SERVER_HOME>/shared/volume1. The script
#creates this folder if it does not exist.
VOLUME_STORAGE_ROOT=C:/Actuate3/BIRTiHubVisualization/modules/BIRTiHub/iHub/shared/volume1

#Specify true to create a new schema for the volume you are
#importing. Specify false if you want this volume to use the
#schema that already exists in the target database for the
#volume.
CREATE_DATABASE_SCHEMA=true

#Specify a tablespace for the schema of the volume you are
#importing if you are using an Oracle database for volume
#metadata. The import_volume script ignores this property if
#you are not using an Oracle database.
TABLESPACE_NAME=USERS

#Specify the login credentials of the database administrator of
#the target database
DATABASE_ADMINISTRATOR=postgres
DATABASE_ADMINISTRATOR_PASSWORD=postgres

#Specify the target database user password.
DATABASE_USER_PASSWORD=postgres

#Specify the full path of the folder to which the import_volume
#script writes log files. Alternatively, leave this property
#commented out to accept the default value,
#<AC_SERVER_HOME>/data/migrate/workspace. The script creates
#this folder if it does not exist.
#WORKSPACE=

3 Stop iHub processes. For information on performing this task, see “Stopping and starting iHub processes,” in Chapter 4, “Configuring a cluster using shared files.” Wait until all services have stopped before going to the next step.

4 Open a command prompt. Navigate to AC_SERVER_HOME\<3.1tools>\bin. Execute the import_volume script, passing it the import_volume.properties file. If import_volume.properties is in a different folder than the import_volume script, specify the full path of import_volume.properties in the command.

If import_volume.bat and import_volume.properties are in AC_SERVER_HOME\<3.1tools>\bin, execute the following command:

import_volume.bat import_volume.properties


If your environment is Linux, and import_volume.sh and import_volume.properties are in AC_SERVER_HOME/<3.1tools>/bin, execute the following command:

sh ./import_volume.sh import_volume.properties

The import_volume script imports the volume.

5 Start the BIRT iHub cluster. For information on performing this task, see “Stopping and starting iHub processes,” in Chapter 4, “Configuring a cluster using shared files.” Wait until all services have started before going to the next step.

6 Enable the volume by performing the following tasks:

1 In System Console—Clusters, edit the cluster. Then, choose Volumes from the side menu.

2 Left-click the arrowhead icon next to the volume name and choose Enable, as shown in Figure 7-93. Confirm that you want to enable the volume.

Figure 7-93 Enabling the imported volume

7 Access the imported volume by logging in to Information Console, specifying the volume name, user name, and user password, as shown in Figure 7-94.


Figure 7-94 Accessing the imported volume in Information Console

Viewing the log file the import_volume script creates
The import_volume script creates a log file for each volume the script imports. For example, the default location of the log file for a volume named volume1 is:

AC_SERVER_HOME\data\migrate\workspace\logs\import\volumes\volume1\volume1.<timestamp>.log

For a volume named volume2, the default location is:

AC_SERVER_HOME\data\migrate\workspace\logs\import\volumes\volume2\volume2.<timestamp>.log

Rolling back if an error occurs during volume import
If the import_volume script encounters an error while executing, look for the cause in the messages the script writes to the command prompt and in the log file the script creates. After correcting the error, perform the following tasks:

■ Delete the volume schema if it was already created.

■ Delete the storage folder that the import script created for the imported volume. If you specified a value for the VOLUME_STORAGE_ROOT property in import_volume.properties, this value is the path of the volume storage folder. If you did not specify a value for this property, the default volume storage folder path is AC_SERVER_HOME\shared\<volume name>. A command sketch for these cleanup steps follows this list.

■ Run the import_volume script again.
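The following is a minimal cleanup sketch, assuming a Windows machine, a PostgreSQL metadata database, and psql available on the Path; the database name, schema name, and folder path are placeholders based on the defaults described above:

rem Drop the partially created volume schema (PostgreSQL metadata database only)
psql -U postgres -d <metadata database> -c "DROP SCHEMA <volume schema> CASCADE;"

rem Remove the volume storage folder that the import created
rmdir /s /q C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\volume1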


Configuring an Apache web server for load balancing and proxying

A system administrator can configure an Apache web server to perform both load balancing and general proxying against a BIRT iHub cluster. Following are the minimum steps necessary to perform this task.

1 Create a cluster using System Console—Clusters.

2 Set up an Apache HTTP server with mod_headers, mod_proxy, mod_proxy_balancer, and mod_proxy_http enabled.

3 Use the following example configuration to create an Apache VirtualHost.

<VirtualHost _default_:*>
   # required
   ProxyRequests Off
   ProxyPreserveHost On

   # add cookie for sticky session
   Header add Set-Cookie "IHUB_ROUTE=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

   # load balancing on 2 internal nodes ("node-01" and "node-02")
   <Proxy balancer://ihub/>
      BalancerMember http://node-01:8700 route=1
      BalancerMember http://node-02:8700 route=2
      ProxySet stickysession=IHUB_ROUTE
   </Proxy>

   # do not forward balancer-manager requests
   ProxyPass /balancer-manager !

   # forward all requests to load balancer
   ProxyPass / balancer://ihub/
</VirtualHost>

4 Set the cluster URL to the Apache website address by performing the following tasks:

1 On System Console—Clusters, edit the cluster created in step 1 by left-clicking the icon next to the cluster name.

2 On Cluster Configuration, left-click the icon next to the Cluster Status indicator and choose Edit Cluster Properties, as shown in Figure 7-95.


Figure 7-95 Choosing to edit cluster properties

3 On Edit Cluster Properties, type the Apache website address in Cluster URL, for example, http://mywebsite.com, as shown in Figure 7-96.

Choose OK.

5 Restart the Apache Web server to apply changes, if necessary.

Figure 7-96 Specifying the Apache website address

6 Restart the cluster by performing the following tasks:

1 On Cluster Configuration, left-click the icon next to the Cluster Status indicator and choose Stop Cluster, as shown in Figure 7-97.


Figure 7-97 Stopping the cluster

2 On Cluster Configuration, left-click the icon next to the Cluster Status indicator and choose Refresh. When all the services appear in red, the cluster is stopped. Wait until the cluster is stopped before going to the next step.

3 On Cluster Configuration, left-click the icon next to the Cluster Status indicator and choose Start Cluster, as shown in Figure 7-98.

Figure 7-98 Starting the cluster

As an alternative to restarting the cluster from Clusters—Cluster Configuration, you can restart the Actuate PostgreSQL for iHub 3.1 Service on every node in the cluster.

Users can now access BIRT iHub from a web browser using the following URL:

http://mywebsite.com/iportal


Chapter 8
Managing system administrators

This chapter contains the following topics:

■ About Settings

■ Viewing System Information

■ Working with system administrators

■ Configuring Email Settings


About Settings
In System Console, choose Settings to perform the following tasks, as shown in Figure 8-1:

■ View System Information.

■ Create a System Administrator user.

■ Edit a System Administrator user.

■ Delete a System Administrator user.

Figure 8-1 Choosing Settings

The following sections describe how to perform these tasks.

Viewing System Information
On Settings, choose System Information to view the following information, as shown in Figure 8-2:

■ JSAPI Version
The JavaScript API version number

■ Product version
The System Console build number

Figure 8-2 Viewing System Information


Working with system administrators
On Settings, choose System Admin Users to work with System Administrator users, as shown in Figure 8-3.

Figure 8-3 Working with System Administrator users

Working with System Administrator users includes creating, deleting, and editing a System Administrator user. The following sections describe how to perform these tasks.

Creating a system administrator
A system administrator can create other System Administrator users.

How to create a system administrator

1 On Settings—System Admin Users, choose Add System Administrator to create a new System Administrator user, as shown in Figure 8-3.

2 Specify the following properties on Add System Administrator, as shown in Figure 8-4. A property name appearing with an asterisk (*) next to the name is a required property.

■ Username
Type the name of the new system administrator.

■ Email
Type the new administrator's e-mail address.

■ Description
Type a description for the new administrator.

■ Language
Choose the language System Console uses.

■ Is Locked Out
Type true or false.


■ Password
Type a password for the new user. The password must be at least eight characters long, and must contain at least one lower case letter, one upper case letter, and one digit. If you create a user using IDAPI, there is no restriction on what the password can be.

■ Confirm Password
Type the password again.

Choose OK. Confirm that you want to save changes.

Figure 8-4 Adding a system administrator

The new System Administrator user appears in the user list, as shown in Figure 8-5.

Figure 8-5 Viewing the new system administrator in the list

If you have e-mail notification enabled, BIRT iHub sends a notification e-mail to the new system administrator, as shown in Figure 8-6. For information on enabling e-mail notification, see “Enabling e-mail notification” in Chapter 7, “Managing clusters.”

Figure 8-6 Viewing the notification e-mail

Customizing the notification e-mail template file
System Console uses a .properties file with a .template file to create a notification e-mail for each of the following events:

■ New system administrator created

■ New cluster created

■ New trial cluster created

■ New volume created

The .properties file contains the text for each e-mail type. The .template file contains the HTML for each e-mail type. The .template file formats the notification e-mail, and populates it with appropriate text from the .properties file.

You can edit the .properties and .template files to customize them for your organization.

How to customize the notification e-mail files

1 In a default BIRT iHub installation on Windows, in which the install folder is C:\Actuate3, navigate to the following file:

C:\Actuate3\SystemConsole\modules\SystemConsole\tomcat\webapps\sysconsole\WEB-INF\lib\resource.jar


Using a file utility such as WinZip, open resource.jar.

2 Locate the following files:

■ com\actuate\umc\data\resource\Email.properties

■ com\actuate\umc\data\resource\Email.template

Open these files and make any necessary modifications.

Save and exit the files.

Editing a system administrator
When editing a system administrator, the properties are the same as when creating a system administrator, and the administrator can change any property.

How to edit a system administrator

1 Left-click the arrowhead icon next to the System Administrator user in the list of users and choose Edit to edit a System Administrator, as shown in Figure 8-7.

Figure 8-7 Choosing to edit a system administrator

2 Make any necessary changes on Edit <System Administrator name> and choose Save.

Deleting system administrators
The system administrator can delete a single System Administrator user or multiple users simultaneously.

You cannot delete the default system administrator, sysadmin.

How to delete a single administrator

Left-click the arrowhead icon next to the System Administrator user in the list of administrators and choose Delete to delete an administrator, as shown in Figure 8-8.


Figure 8-8 Deleting a system administrator

How to delete selected administrators

Check the boxes next to the administrators you want to delete. Then, left-click Actions and choose Delete Items, as shown in Figure 8-9. Choose Refresh Items to deselect selected administrators.

Figure 8-9 Deleting one or more users

How to delete all administrators

Delete all system administrators except sysadmin, the default administrator, by performing the following tasks:

1 Left-click the arrowhead icon in the check box next to Actions and choose Select All, to select all administrators, as shown in Figure 8-10. Alternatively, choose Select None to deselect all administrators.


Figure 8-10 Selecting all system administrators

2 Left-click Actions and choose Delete Items, as shown in Figure 8-11. Alternatively, select Refresh Items to deselect administrators.

Figure 8-11 Deleting all administrators except sysadmin

Configuring Email Settings
On Settings, choose Email Settings to configure the following properties, as shown in Figure 8-12.

■ Email Server Type
Type a name for the mail server. For example, type mail.smtp.host.

■ Email Server
Type the IP address or fully qualified domain name of the mail server. For example, type mailserver.companydomain.com.

■ Email Address
Type the e-mail address that appears in the From line of the e-mail notification. For example, type [email protected]. BIRT iHub also sends a notification to this address when the mail server cannot deliver an e-mail notification to a user.


Figure 8-12 Configuring an e-mail server

The system administrator can configure monitoring not to send alerts. If the system administrator chooses to have alerts but does not set up an e-mail server configuration, BIRT iHub continuously generates errors stating that an e-mail server configuration cannot be found. This message is classified as a severe error. To remedy this problem, edit the lmserviceconfig.properties file and change the following setting from:

lmservice.monitoring.alert.notify.email.enable=true

to:

lmservice.monitoring.alert.notify.email.enable=false

After changing this setting to false, BIRT iHub still logs errors, but does not send e-mail alerts to the system administrator.

In a default BIRT iHub installation on Windows, performed using the graphical installer, in which the install folder is C:\Actuate3, lmserviceconfig.properties is in the following location:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config\lmsrvr


Part 3
Managing and backing up

■ Licensing BIRT iHub

■ Backing up BIRT iHub System



Chapter 9
Licensing BIRT iHub

This chapter discusses the following topics:

■ Understanding licensing types

■ Understanding licensing options

■ Installing BIRT iHub System license files

■ Understanding CPU binding


Understanding licensing types
BIRT iHub System licensing supports running BIRT iHub with sets of features grouped as license options. You enable BIRT iHub System options using one or more of the following types of license models:

■ CPU Core
Specifies the maximum number of CPU cores that BIRT iHub System can use. Any number of users can access the licensed options on the system provided adequate licensing and capacity exists.

■ Work Unit (WU)
Specifies BIRT iHub features and functionality using an aggregate model. This plan defines each BIRT iHub System resource as a work unit.

Work Unit licensing is similar to CPU Core licensing, but is defined at a more granular level. With Work Unit licensing, the customer can license just the precise amount of capacity needed for application requirements. Any number of users can access the licensed options provided sufficient capacity has been purchased.

For both CPU Core and Work Unit licensing, Actuate currently uses the Standard Performance Evaluation Corporation (SPEC) benchmark for measuring machine capacity to establish the value of iHub performance on any given system.

Understanding licensing options
To determine the license options installed on iHub, log in to System Console, and choose Show License. The license options appear, as shown in Figure 9-1.


Figure 9-1 iHub License options

Installing BIRT iHub System license files
Actuate provides a license file to use when installing BIRT iHub System. New customers receive an e-mail containing a temporary BIRT iHub license file to use for the initial installation after Actuate processes the order. The temporary BIRT iHub System license expires 45 days after installation.

Actuate license enforcement for BIRT iHub requires a single, shared license for all nodes in a cluster. The name for the BIRT iHub license file uses the following format:

Actuate_iHub_key_xxxxxxx.xml

XXXXXXX is a unique seven-digit number generated by Actuate Licensing when it creates the license file.
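For example, a license file might be named Actuate_iHub_key_1234567.xml; the seven digits shown here are only illustrative.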

Actuate BIRT iHub System customers perform an initial installation using a temporary license. After installing BIRT iHub System using the temporary license, the login screen displays two messages.

The following message appears on the login screen for a license that has an expiration date:

Reminder
Your BIRT iHub license expires in [the number of days] days, on [the specified date]. When the current license expires, the iHub will shut down and require a new license to restart. Please contact Actuate to purchase a new license.

The following message about how to obtain the second license file from Actuate Licensing appears until you install the new license issued by Actuate Licensing:

Reminder
One or more iHubs in your BIRT iHub System are in violation of the node locked BIRT iHub license. After the grace period expires, the iHubs that violate the node locked BIRT iHub license cannot be restarted. Please contact Actuate Licensing ([email protected] or http://www.actuate.com/licensing), or your representative, and request a new license file for the iHub nodes that are in violation. Please restart the iHubs on the nodes after updating the license key file.

You have 45 days to apply for and install the license file after you install Actuate BIRT iHub System.

After installing BIRT iHub System, the installation program instructs a customer who requires a license to obtain the machine ID of the machine on which Actuate BIRT iHub is running and to transmit this information to Actuate Licensing. The machine ID is displayed in the reminder message. You can also use the acmachineid utility to obtain the machine ID. For information on how to use this utility, see “How to use the acmachineid utility,” later in this chapter.

After receiving the machine ID information, Actuate Licensing issues a new Actuate BIRT iHub System license file.

About the license file
This license file specifies the available BIRT iHub license options and node-key information for the cluster nodes. This license file must be in a shared location, specified by the <AC_CONFIG_HOME> attribute of the <Server> element in the acpmdconfig.xml file of each node, and accessible to all nodes in the cluster.
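As a sketch only, the shared location might be configured in acpmdconfig.xml along the following lines; the path is hypothetical and the exact element layout can differ by release, so use the <Server> element in your own acpmdconfig.xml as the reference:

<Server ...>
   ...
   <AC_CONFIG_HOME>\\fileserver\iHub\shared\config</AC_CONFIG_HOME>
   ...
</Server>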

A node key associates a BIRT iHub node in a cluster with the machine ID. The node-key licensing mechanism restricts the iHub node installation to that machine.

On startup, each node in the cluster checks the shared license file, verifies the installed options, and determines whether its node key, which is generated at run time, matches the license information. If the node key matches, the node joins the cluster. Otherwise, it shuts down with an error if the node-lock-violation grace period has been exceeded.

A license file with an expiration date remains valid until the specified date. If the license file is about to expire, the system reminds you that the file expires on the specified date when you log in to System Console or Information Console. Reminders also appear in the system log file. To arrange for a permanent license file, or if you have a problem with an expiring file, please contact Actuate Licensing at [email protected].

When upgrading a cluster node or installing BIRT iHub on a new machine, the customer must request a new license and supply the machine ID of the new machine.

Collecting machine information for a license
After installing BIRT iHub System using a temporary license file, you must collect information about the machines running BIRT iHub software and send it to Actuate Licensing. During the installation process, the install program prompts you to provide the location of the license file. After you provide the location of the license file, the install program issues a prompt similar to the following message:

The iHub system license file is locked to the machines that are used in the iHub system. The following machine id must be used to request a node key license file from Actuate:

IORRHEHs6S5UCsEtrdVu6jOixmzvFY3BbOqXLiwswQGDceJmKYYaEu0j18lQxjMsYCxnka3hVkDZFGwkmQMxb+hgKaz4om2vLUcS0ocYTA7Ta6VTMavLFQo7bEjRyrolwxAKu0Vr4NA6o8uWCzjGZXX8KrjViSUoROj70hWOY=

Please contact Actuate Licensing ([email protected] or http://www.actuate.com/licensing), or your representative, and request a node locked iHub system license.

The machine id required for the node locked iHub system license can also be generated by using the acmachineid utility that can be found in the ACTUATE_HOME\iHub\bin folder.

The format of the alphanumeric string for the machine ID and location of the license file are different depending on the operating system. On a Windows system, the unique identifier for the network card is among the elements used to determine the machine ID. You must have at least one network card enabled on the BIRT iHub machine to be able to obtain the machine ID.

After installing BIRT iHub, you must run the utility, acmachineid, from the command line to generate the machine ID information. Copy the machine ID in the command prompt to a file or e-mail message and send it to Actuate Licensing. Actuate Licensing processes your request and sends the new license file for BIRT iHub System.

How to use the acmachineid utility

Use the acmachineid utility to obtain the machine ID information by performing the following tasks:

1 Open a command prompt and navigate to AC_SERVER_HOME\bin.

2 Type the following command and press Enter:


acmachineid

The utility provides output in the following format:

C:\Actuate3\iHub3\modules\BIRTiHub\iHub\bin>acmachineid
STATUS: OK
GEN_VERSION: 23 Augusta R1 Development
GEN_BUILD: DEV131216
SERVERNAME: W7OOTB
MACHINEID:
u0RREHs0Jk6tu0o8AbCrVL61x7kDpLlQKwS2t1W7qM67GbO8VjcFs6pcuAgbtDaZauSbFFa2mRejwVJc7ZjKfMEVl1suXglMKmZLiwtLykwJisqMS0EhYe5sCYoKjG+XL2UEnL2GGhLtI9fJUMYzZORSlWPPRyrolwxAKu0Vr61qxoMC8KhvpylHGtKvIhgEcasrUerKE674lnutEXtIVo+

Send the licenselocation.txt file that the acmachineid utility generates to Actuate Licensing as an e-mail attachment.

How to obtain a license file

To register as a new user and obtain a new license file for a product, go to the Actuate Support web site at the following location:

http://support.actuate.com/SiteRegister

Enter the new user account and license key request information.

A maintenance customer should have login information for the Actuate e.Support web site. If you do not have access, please contact Actuate Licensing at [email protected].

If you are not a direct Actuate customer, contact the partner or distributor who supplies the product for the license file. If you have a problem obtaining a license file from this source, please contact Actuate Licensing at [email protected].

Updating the BIRT iHub System license file

After you install BIRT iHub System and transmit the required machine ID information to obtain a license, Actuate sends an e-mail containing an attached .txt (TXT) file. Replace the .txt extension with a .zip (ZIP) extension and open the file. The ZIP file contains the following files:

■ readme.txt
Instructions for installing BIRT iHub System using a license file and for obtaining a license file

■ Actuate_iHub_key_XXXXXXX.xml
BIRT iHub System license

An Actuate license file is an XML file. Actuate Licensing sends this XML file inside a ZIP file with its extension altered to TXT because transmitting a file with a .xml extension can cause problems in an e-mail system.
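For example, on a Windows machine you can rename the attachment from a command prompt before extracting it; the digits in the file name are illustrative:

ren Actuate_iHub_key_1234567.txt Actuate_iHub_key_1234567.zip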


How to install the license file

To install the license file, perform the following steps:

1 Extract the contents of the ZIP file to a location on your local file system.

2 Log in to System Console. For example, open a browser and type the following URL address, http://localhost:8500/sysconsole, and log in using the system administrator ID and password.

3 In Clusters—License, choose Update License.

4 On Update License, choose Choose File and browse to navigate to the location where you extracted the contents of the ZIP file. Select the Actuate BIRT iHub System license file and choose Save to apply the license.

If iHub requires a system restart to update the license file, the following message appears:

The license file cannot be applied without a server restart. Please copy the license file to the iHub license file location and restart the iHub system.

5 Restart any node where the node-key configuration changed.

If you change the machine on which you installed BIRT iHub, you must re-apply to Actuate Licensing for a new license file. If you replace the network card on some machines, such as a Windows system, you may have to obtain a new license file since the unique identifier for the network card may be the source of the machine ID. If you have a license file installed and a reminder message appears when logging into System Console, contact Actuate Licensing and provide the current BIRT iHub System license file with the output from the machine ID utility.

Actuate_iHub_key_XXXXXXX.xml contains the node key information for the stand-alone machine or for all machines in a cluster. There is no separate node license file for each machine.

Listing 9-1 shows a sample of the node key information that the license file contains, generated from the acmachineid output submitted to Actuate Licensing.

Listing 9-1 Viewing license node key information

<NodeKeys>
   <NodeKey
      MachineId="E0RREHs0Jk6tu0o8AbCrVL61x7kDpLlQKwS2t1W7qM67GbO8VjcFs6pcuAgbtZauSbFFa2mRejwVJc7ZjKfMEVl1suXglMKmZLiwtLykDa/wJisqMS0EhYe5sCYoKjG+XL2UEnL2GGhLtI9fJUMYzZORKk23jrxaSwUDigKsvlc1A6q8UbmrrAYHD8GgtpuiAmxWt4xjEM6rqlmsNEW/4Vjm40KxlkSv"
      ServerName="W7CLSTRNODE1"/>
   <NodeKey
      MachineId="I0RREHs0Jk6tu0o8AbCrVL61x7kDpLlQKwS2t1W7qM67GbO8VjcFs6pcuAgbtZauSbFFa2mRejwVJc7ZjKfMEVl1suXglMKmZLiwtLykDa/wJisqMS0EhYe5sCYoKjG+XL2UEnL2GGhLtI9fJUMYzZORKk23jrxaSwUDsgKsvlc1A6q8UbmrrAYHD8GgtpuiAmxWt4xjEM6rqlmsNEW/4ViMC0KDBkSn"
      ServerName="W7CLSTRNODE2"/>
</NodeKeys>

About modifying a license

If you decide later to license additional BIRT iHub options, the existing license file becomes invalid. You must install a new license file. Contact Actuate Licensing for the new license file.

Specifying the minimum number of factories for a work unit license

If you have a work unit license, acserverlicense.xml specifies how many work units you have in the LicenseInfo element, as shown in the following code snippet:

<LicenseInfo
  …
  BIRTOnlineWorkUnits="16"
  DashboardsWorkUnits="8"
  BIRTFactoryWorkUnits="1"
  ReportStudioWorkUnits="8"
  InteractiveCrosstabsWorkUnits="8">
  …

You must inspect acserverlicense.xml to determine the value of BIRTOnlineWorkUnits, and then modify acserverconfig.xml so that the MinFactory setting for the Default BIRT Online resource group is correct. One BIRT Online Factory is required for every 8 BIRT Online work units. For example, if BIRTOnlineWorkUnits is set to 16 in acserverlicense.xml, MinFactory must be set to 2 in acserverconfig.xml, as shown in the following code snippet:

<ServerResourceGroupSetting
  Name="Default BIRT Online"
  Activate="TRUE"
  MinFactory="2"
  StartArguments="-Xmx2048M -XX:MaxPermSize=256m -XX:-UsePerfData -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=com.actuate.javaserver.protocol com.actuate.javaserver.Server"/>
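For reference, the required MinFactory value can be derived from BIRTOnlineWorkUnits with a ceiling division, one Factory per 8 BIRT Online work units. The following is a minimal shell sketch, assuming you have already read the work-unit count from acserverlicense.xml and that counts that are not a multiple of 8 round up:

birt_online_work_units=16   # value of BIRTOnlineWorkUnits in acserverlicense.xml
# one BIRT Online Factory for every 8 work units, rounded up
min_factory=$(( (birt_online_work_units + 7) / 8 ))
echo "MinFactory=$min_factory"   # prints MinFactory=2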

Understanding CPU binding

BIRT iHub System supports CPU binding on a machine with an appropriate CPU-based license. CPU binding restricts a process or processes to run on a subset of CPU cores. If you bind the BIRT iHub System to a subset of CPU cores, only those CPU cores count toward the total number of licensed CPU cores. The CPU limit in the license file applies to all CPU cores for all machines in the cluster. Depending on the operating system and specific system command, you can restrict other processes from running on the processor to which you bind a process.

You can bind BIRT iHub processes to a specific set of processors on a machine that runs a Windows or Linux operating system. The default configuration does not bind BIRT iHub to a set of processors. In the default configuration, all processors on a BIRT iHub machine count toward the maximum number of licensed CPU cores.

To bind BIRT iHub to a set of processors, bind the iHub Daemon (ihubd) to the processors. ihubd starts all BIRT iHub processes. The processes inherit the binding from ihubd.

In a cluster, BIRT iHub counts only the processors on nodes that join the cluster and run the ihubc process. An ihubc process runs when a node is online. BIRT iHub counts the number of processors on a machine when the first ihubc process starts.

This section contains information on the following topics:

■ Configuring CPU binding on Windows

■ Configuring CPU binding on Linux

■ Checking BIRT iHub bound processors

■ Configuring e-mail for CPU license problems

Configuring CPU binding on Windows

You can perform the following types of CPU binding on Windows:

■ Binding to specific CPU cores

■ Binding to multiple-core CPUs

The following sections describe these features.

Binding to specific CPU cores on Windows

On a multiple-CPU machine running the Windows operating system, the operating system assigns an ID number to each processor. Windows Task Manager lists the IDs of the available processors. The numbering starts at 0.

If you are not licensed to use all the CPU cores on the iHub 3 host machine, you must bind iHub 3 to the appropriate number of CPU cores. To bind iHub 3 to a set of processors, modify the acpmdconfig.xml file in AC_SERVER_HOME\etc. For example, to bind iHub to four logical cores, add the following line to acpmdconfig.xml:

<DaemonCPUaffinity>0,1,2,3</DaemonCPUaffinity>

After restarting iHub, ihubc.log shows this setting in the System Bound field, for example:

System Bound : 4

How to configure CPU binding on a Windows machine

The following example shows the settings for a two-CPU (0,1) machine running Windows, which uses only one CPU (0) for BIRT iHub:

1 To set CPU affinity, edit the <DaemonCPUaffinity> element in acpmdconfig.xml located in AC_SERVER_HOME\etc as follows:

<DaemonCPUaffinity>0</DaemonCPUaffinity>

2 Restart iHub.

3 Use Process Explorer to verify that all BIRT iHub processes, including ihub, ihubc, and ihubd, are using only CPU 0. Alternatively, verify the CPU binding by checking the processor affinity of the BIRT iHub processes in Task Manager.

Binding to multiple-core CPUs on Windows

You can also perform multiple-core CPU binding, similar to the way you bind to a single CPU, as described in the previous section. To BIRT iHub, each core appears as a logical CPU.

For example, on a dual-core, two-CPU system, setting the DaemonCPUaffinity value to 0,1 binds BIRT iHub to both cores on the first CPU. Setting the value to 0,2 binds BIRT iHub to one core on each CPU. Setting the value to 0 binds BIRT iHub to one core on the first CPU.

Actuate does not recommend restricting BIRT iHub processing on a multiple-core CPU machine to one core for licensing purposes. BIRT iHub System achieves significant performance gains on a multiple-core CPU machine.

For example, BIRT iHub scales nearly perfectly from 1 to 2 cores and gets 50% better throughput on a dual-core system than on a two-CPU system.

About processors and hyperthreading

Some Intel processors use hyperthreading, a technology that counts each physical processor as a specific number of logical processors. The operating system and any programs running on the machine see the number of logical processors, not the number of physical processors.

When a machine uses hyperthreading, Windows Task Manager lists the logical processors, not the physical ones. When you configure CPU binding, you specify logical processor IDs, not physical ones. BIRT iHub calculates the number of bound processors by dividing the number of bound logical processors by the number of logical processors for each physical processor.

If the result contains a decimal component, BIRT iHub uses the next highest integer. For example, it rounds 4.3 to 5. In the following example, a machine has four physical processors. With hyperthreading enabled, each physical processor corresponds to two logical processors. The machine has the following logical processors available:

■ Physical processor 0 corresponds to logical processors 0 and 1.

■ Physical processor 1 corresponds to logical processors 2 and 3.

■ Physical processor 2 corresponds to logical processors 4 and 5.

■ Physical processor 3 corresponds to logical processors 6 and 7.

If you bind BIRT iHub to the five logical processors 0, 2, 3, 6, and 7, it calculates the number of bound processors as:

5/2 = 2.5

BIRT iHub rounds this number up to determine that you have three bound processors.
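Expressed as arithmetic, the bound-processor count is a ceiling division. The following is a minimal shell sketch using the values from the example above; the two-logical-processors-per-physical-processor figure is an assumption that matches this example:

bound_logical=5          # logical processors 0, 2, 3, 6, and 7
logical_per_physical=2   # with hyperthreading, two logical processors per physical processor
# integer ceiling division: (5 + 2 - 1) / 2 = 3 bound physical processors
echo $(( (bound_logical + logical_per_physical - 1) / logical_per_physical ))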

Configuring CPU binding on Linux

This section describes how to perform CPU-binding operations in the Linux environment.

The ihubd process is the root parent process for all other BIRT iHub processes, so CPU binding can be done only for ihubd. Binding must be done before starting the ihubd process. Binding a running ihubd process has no effect.

On a multiple-CPU machine running the Linux operating system, the operating system assigns an ID number to each processor. The numbering starts at 0.

If you are not licensed to use all the CPU cores on the iHub 3 host machine, you must bind iHub 3 to the appropriate number of CPU cores. To bind iHub 3 to a set of processors, modify the acpmdconfig.xml file in AC_SERVER_HOME/etc. For example, to bind iHub to four logical cores, add the following line to acpmdconfig.xml:

<DaemonCPUaffinity>0,1,2,3</DaemonCPUaffinity>

After restarting iHub, ihubc.log shows this setting in the System Bound field, for example:

System Bound : 4
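To confirm the setting after a restart, you can search the log for the System Bound field. The following is a minimal sketch; the path to ihubc.log is a placeholder that depends on your installation:

grep "System Bound" /path/to/ihubc.log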

How to configure CPU binding on Linux

The following example shows the settings for a four-CPU (0,1,2,3) machine running Linux, which uses only two CPUs (0,1) for BIRT iHub:

1 In Linux, log in as root and use the less command to view CPU core information, as shown in the following example:

less /proc/cpuinfo

Press q to exit the less display.

2 To verify the CPU cores, you can also display the same information with the cat command, as shown in the following example:

sudo cat /proc/cpuinfo

3 Use the taskset command, referencing the process ID (PID) to verify processor affinity, as shown in the following example:

taskset -p -c PID

Using the -c option provides a processor affinity mask in list form rather than a bit mask for a PID specified using the -p option. For example, a typical affinity list generated by these arguments is 0,2,3-5.

4 Use the ps|grep commands to verify the current core settings for all running processes, as shown in the following example, where ihub_id is the user that runs the BIRT iHub processes:

ps -e -o pid,cpuid,comm,user | grep ihub_id

5 To bind CPU cores on Linux, perform the following tasks:

1 Stop BIRT iHub, including ihubd. Make sure no processes are running by typing the following command, where ihub_id is the user that runs the BIRT iHub processes:

ps -e -o pid,cpuid,comm,user | grep ihub_id

2 In a typical installation, using a text editor such as vi, open the acpmdconfig.xml file located in AC_SERVER_HOME/etc and edit the <DaemonCPUaffinity> element as follows:

<DaemonCPUaffinity>0,1</DaemonCPUaffinity>

3 Restart BIRT iHub. Make sure that all processes have started, and verify the core to which each process is bound by running the following command, where ihub_id is the user that runs the BIRT iHub processes:

ps -e -o pid,cpuid,comm,user | grep ihub_id

The taskset command binds processes to logical cores. If a machine does not have hyperthreading enabled, there is no difference between the physical and logical cores.
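The verification commands in the preceding steps can be combined into a quick check. The following is a minimal sketch, assuming the daemon process is named ihubd and that the pgrep and taskset utilities are available on the machine:

# count the logical processors the operating system reports
grep -c '^processor' /proc/cpuinfo

# show the processor affinity list of the running ihubd process
taskset -p -c "$(pgrep -x ihubd | head -n 1)"

# list the CPU on which each BIRT iHub process is currently running
ps -e -o pid,cpuid,comm,user | grep ihub_id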

Checking BIRT iHub bound processors

BIRT iHub performs the following bound processor checks:

■ The number of processors a cluster uses

■ The set of bound processors

Determining the number of processors BIRT iHub System uses

When the ihubd process starts the first ihubc process on a machine, ihubd determines the number of processors to which BIRT iHub is bound and stores the list of bound processors.

If you change the processor binding, BIRT iHub does not recognize the changes until you shut down all ihubc processes on the machine and restart one of the ihubc processes.

For example, a cluster that has a maximum licensed CPU limit of nine processors consists of two nodes, machine A and machine B.

The machines have the following configuration:

■ Machine A has four processors with no processor binding. All the processors can run Actuate processes. BIRT iHub manages a volume.

■ Machine B has eight processors with BIRT iHub bound to five processors. There is no ihubc process running on the machine, only the ihubd process.

The cluster counts four processors, the processors on machine A. If you start an ihubc process on machine B, BIRT iHub counts the five bound processors on machine B and increases the cluster processor count to nine: four on machine A and five on machine B.

If you bind the ihubd process on machine B to six processors, the change has no effect until you shut down all the running ihubc processes on machine B and restart an ihubc process on machine B.

After you stop the ihubc processes and restart an ihubc process on machine B, BIRT iHub System detects that the number of processors in the cluster is ten, which is greater than the maximum number of nine licensed processors. When the number of CPU cores exceeds the number of CPU cores your license permits, BIRT iHub does not start and returns an error message to System Console.

Understanding CPU binding validation while BIRT iHub is running

When BIRT iHub is running, each ihubc process periodically compares the list of processors to which it is bound with the list to which it was bound when it started. If the lists differ:

■ BIRT iHub writes a message with the processor information to the log file. The message contains the maximum number of processors the BIRT iHub license file permits and the following information:

■ Current and original number of bound processors

■ Current and original list of bound processors

■ If configured, BIRT iHub sends an e-mail message to the administrator. The message states that the BIRT iHub System will shut down in one hour if the list of bound processors is not corrected. The e-mail message contains the information that BIRT iHub sends to the log file.

You must rebind the ihubc process to the same processors to which it was originally bound. During the next hour, any attempt to use the ihubc services fails and a message is written to the appropriate log file. If the list of processors is not restored after an hour, each BIRT iHub in the cluster shuts down and writes an error to its log file.

After updating a CPU-limit license, the system administrator must perform a complete restart of the system to refresh the baseline list of processors against which BIRT iHub periodically compares the currently bound processors. Before restarting, the system administrator must also edit the <DaemonCPUaffinity> element in the acpmdconfig.xml file to reflect the new CPU affinity settings.

Understanding CPU binding validation when a volume comes online

BIRT iHub uses a separate ihubc process to manage each volume on a machine. When you take a volume online, ihubd starts an ihubc process.

When ihubd starts an ihubc process, ihubd compares the list of processors to which the ihubc process is bound to the original list of processors to which the ihubd is bound. If the lists differ:

■ The ihubc process writes an error to its log file and shuts down.

■ BIRT iHub does not take the volume online. A message in the configuration states that the binding of the new process differs from the original binding of the parent process.

Understanding CPU binding validation when running BIRT iHub processes

Each Factory and View process periodically compares its list of bound processors with the list of processors to which it was bound at startup. If the lists differ, the process writes an error to its log file and shuts down.

Configuring e-mail for CPU license problems

BIRT iHub System can send e-mail messages to an administrator if a change in processor binding violates the maximum number of licensed CPU cores for BIRT iHub System. To send e-mail about a CPU license problem, set up BIRT iHub System by completing the following tasks:

1 Configure every BIRT iHub node to send e-mail.

2 Specify the administrator e-mail address for BIRT iHub System.

Specify an administrator e-mail address as the value for the Account to receive administrative e-mail parameter. Set the value by logging into System Console, and choosing Settings—System Admin Users. Choose the sysadmin user name. In Edit sysadmin, type the e-mail address for the sysadmin user. Choose OK.

For example, the following e-mail address sends e-mail to a user named admin at a company for which the domain is mycompany:

admin@mycompany.com

3 Restart BIRT iHub System. Restarting applies the changes after you set or change the e-mail address.

For more information about configuring BIRT iHub System to send e-mail messages, see Chapter 7, “Managing clusters,” earlier in this book.

Chapter 10
Backing up BIRT iHub System

This chapter discusses the following topics:

■ Performing a BIRT iHub System backup

■ Backing up and restoring a BIRT iHub System that uses a PostgreSQL database

Performing a BIRT iHub System backup

When performing a backup, it is important to note that there are two types of data:

■ Metadata: Information about BIRT iHub cluster and volume settings and data objects stored in third-party relational database management system (RDBMS) schemas.

■ Data: BIRT iHub cluster and volume data objects, such as designs, documents, and information objects, stored as files in storage locations on disk, and the acserverconfig.xml file containing BIRT iHub System configuration settings.

The administrator must back up all BIRT iHub System metadata and data to ensure the recoverability of the system in the event of failure. The third-party database that contains BIRT iHub metadata is a critical component of BIRT iHub System. The system administrator must take all necessary precautions to ensure that this database is properly backed up and available to safeguard cluster and volume metadata. Please consult Actuate Support at the time of installation if you have any questions about the backup, recovery, or failover procedures necessary to protect against the possibility of catastrophic failure.

Managing the backup and recovery of BIRT iHub metadata and data files

A complete backup of BIRT iHub System must include the following items:

■ A database backup of the cluster and volume schemas containing the metadata

■ A copy of the folders from all volume storage file locations containing file data

■ A copy of the acserverconfig.xml file containing BIRT iHub configuration information

In the Windows BIRT iHub environment, the default AC_SERVER_HOME path is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub

The default location for volume storage folders is AC_SERVER_HOME\shared. The absolute path of this location is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared

The default acserverconfig.xml file path is AC_SERVER_HOME\shared\config. The absolute path of this location is:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config

Back up the metadata in the RDBMS at the same time that you back up the data files in the volume storage locations. A carefully coordinated backup ensures that a one-to-one correspondence exists between each entry in the volume metadata database and the data files.

The metadata backup on the RDBMS must be done before the backup of the data files in the volume storage locations. Files that are partially created when the metadata backup begins are either not yet registered in the database or are marked incomplete in the database. The metadata database does not retain a record of incomplete files.

When contacting Actuate Support to troubleshoot problems, it is best to provide a snapshot of the BIRT iHub System configuration, including the following items and information:

■ A database backup of the cluster and volume metadata schemas

■ The names of the volume schemas and users that iHub uses to connect to the RDBMS

■ A copy of the acserverconfig.xml file containing BIRT iHub configuration information

■ A copy of BIRT iHub logs

Using RDBMS and file system backup utilities

The administrator must perform the metadata backup using the tools provided or supported by the RDBMS. Copying the physical files of a database at the operating system level while an RDBMS is running does not create a valid backup.

Most RDBMS backup tools can be scripted and run while BIRT iHub is using the database. PostgreSQL and Oracle also provide graphical administration tools in addition to command-line tools. This chapter provides instructions on how to perform a backup in the PostgreSQL RDBMS environment as a reference example. For more information on using other RDBMS systems and tools to back up and restore BIRT iHub schemas, see the vendor documentation.

How to perform a BIRT iHub System backup

To back up BIRT iHub System, perform the following tasks:

1 Make sure that the autoarchive file purging process is not running.

2 Make an online backup of the cluster and volume schemas using the tools provided by the RDBMS.

3 Back up the volume data and system configuration files using the tools available in the operating system environment.

A metadata backup is consistent with a data backup only if the file purging process that runs during an autoarchive operation does not occur between the time you back up the metadata and the time you back up the data. In System Console, the administrator can specify when the file purging process runs. Configure the following time-related file purging properties to times that do not conflict with the time when the backup operation runs:

■ Purge deleted files time: Specifies the time when the file purging process runs to permanently delete expired files

■ Expiration time of deleted files: Specifies the length of time that must elapse before the file purging process permanently deletes an expired file

Backing up and restoring a BIRT iHub System that uses a PostgreSQL database

PostgreSQL provides the pgAdmin graphical administration tool or the pg_dump and pg_restore command-line utilities to back up and restore a database. These PostgreSQL utilities run on the client, not the server.

To back up a volume in the out-of-the-box (OOTB) PostgreSQL RDBMS environment, the administrator performs the following operations:

■ Backs up cluster and volume schemas containing the BIRT iHub System metadata using the pgAdmin graphical administration tool or the pg_dump PostgreSQL command-line utility

■ Backs up volume data and BIRT iHub System configuration files using operating system copy commands

Note that a backup of a PostgreSQL database is not portable across all operating systems.

To restore BIRT iHub System in the OOTB PostgreSQL RDBMS environment, the administrator performs the following operations:

■ Restores BIRT iHub System cluster and volume schemas containing the BIRT iHub System metadata using the pgAdmin graphical administration tool or the pg_restore PostgreSQL command-line utility

■ Restores volume data and BIRT iHub System configuration files using operating system copy commands

The following sections describe how to back up and restore a BIRT iHub System that uses the OOTB PostgreSQL database to store the metadata. These demonstrations serve as a detailed reference example. Other supported database systems, such as Oracle, provide similar utilities and require comparable operational procedures.

Backing up BIRT iHub System using pg_dump

To back up BIRT iHub System using the PostgreSQL pg_dump utility, perform the following tasks:

■ Create a folder to contain the metadata and volume data backup files.

■ Back up BIRT iHub System metadata using the pg_dump utility.

■ Back up the acserverconfig.xml file and volume data folders to the backup folder.

Create a folder to contain the metadata, configuration file, and volume data backup files outside the BIRT iHub data installation environment. To provide protection against single-point media failure, it is best to store the backup files at a storage location that is physically separate from the BIRT iHub System and volume data locations.

The following example shows a typical pg_dump command used to export the contents of the iHub cluster and volume schemas to a backup file:

pg_dump.exe --host dbhost --port 8432 --username "postgres" --format custom --blobs --verbose --file "C:\Actuate3\BIRTiHubVisualization\backup\ihub.backup" "dbname"

This pg_dump command example uses the following arguments:

■ host: Specifies the host name of the machine where the PostgreSQL server is running, such as dbhost.

■ port: Specifies the port where the server listens for connection requests.

■ username: Specifies the user name for the connection to the PostgreSQL server, such as postgres.

■ format: Specifies the output format. The value custom creates a compressed archive that can be used as input to pg_restore.

■ blobs: Specifies including blob data types.

■ verbose: Specifies the level of command-line status messages.

■ file: Specifies the output file, such as ihub.backup.

■ dbname: Replace this string in the example with the database name, such as ihub.

■ n or schema: Specifies the schema name. Use multiple -n arguments to specify a list. Use wildcard notation to specify a character pattern, such as ac_*, to specify all volume names that start with the prefix ac_. If -n is not specified, pg_dump exports all non-system schemas.
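For example, to back up a single volume schema to its own archive, as the following paragraph describes, you could add the -n argument to the same command. This is a hedged sketch that assumes a hypothetical volume schema named ac_corp:

pg_dump.exe --host dbhost --port 8432 --username "postgres" --format custom --blobs --verbose -n "ac_corp" --file "C:\Actuate3\BIRTiHubVisualization\backup\ac_corp.backup" "ihub"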

Alternatively, run the command at the volume schema level to back up an individual volume schema to a separate archive. To run a backup using a script, set up auto-login using a .pgpass file. The file should contain connection information in the following format:

hostname:port:database:username:password
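For example, on Linux the file is ~/.pgpass and must be readable only by its owner, while on Windows the equivalent file is %APPDATA%\postgresql\pgpass.conf. The following is a minimal sketch for Linux, using a hypothetical password:

echo 'dbhost:8432:ihub:postgres:secret_password' >> ~/.pgpass
chmod 600 ~/.pgpass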

More information about setting up a scripted backup using a .pgpass file is available at:

http://www.postgresql.org/docs/9.2/static/libpq-pgpass.html

Back up BIRT iHub System metadata using pg_dump by performing the following tasks.

How to run pg_dump from a command prompt

1 Open a command prompt.

2 Navigate to the following location:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\postgresql\bin

3 Execute the following command. Substitute your machine name for urup in this example:

pg_dump.exe --host urup --port 8432 --username "postgres" --format custom --blobs --verbose --file "C:\Actuate3\BIRTiHubVisualization\backup\ihub.backup" "ihub"

This operation backs up the entire ihub database. If the -n argument specifying a specific schema or list of schemas is not specified, pg_dump exports all database schemas. Alternatively, you can back up only one volume schema by using the -n argument to specify a particular schema.

4 If prompted to enter a password, type the postgres superuser password.

pg_dump executes, writing status messages to the command prompt.

After backing up the BIRT iHub System metadata, back up the acserverconfig.xml file and volume data directories to the backup directory by performing the following tasks.

How to back up the volume data folders

1 Open Windows Explorer and navigate to the config folder that contains the acserverconfig.xml file. In a default BIRT iHub installation, this file is located in AC_SERVER_HOME\shared\config. For example:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config

2 Select acserverconfig.xml, right-click, and choose Copy. Copy the file to the backup location. For example:

C:\Actuate3\BIRTiHubVisualization\backup

3 Navigate to the folder or folders that contain volume data files, such as AC_SERVER_HOME\shared\storage. Right-click the folder, and choose Copy. Copy this folder to the backup location. For example:

C:\Actuate3\BIRTiHubVisualization\backup
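Alternatively, you can script the copy from a command prompt instead of using Windows Explorer. The following is a minimal sketch, assuming the default installation and backup paths shown in the preceding steps:

robocopy "C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\config" "C:\Actuate3\BIRTiHubVisualization\backup" acserverconfig.xml
robocopy "C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\shared\storage" "C:\Actuate3\BIRTiHubVisualization\backup\storage" /E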

Restoring BIRT iHub System using pg_restore

To restore a backed-up BIRT iHub System, perform the following tasks:

■ Take BIRT iHub System offline.

■ Delete the acserverconfig.xml file in AC_SERVER_HOME\shared\config and the volume data folder in AC_SERVER_HOME\shared.

■ Copy the backed-up acserverconfig.xml file from the backup folder to AC_SERVER_HOME\shared\config, and copy the volume data folder from the backup folder to AC_SERVER_HOME\shared.

■ Restore the BIRT iHub system and volume metadata using the PostgreSQL pg_restore utility.

■ Take BIRT iHub System online.

Alternatively, the administrator can restore an individual volume by selectively backing up and restoring only the related volume schema and data.

To begin a restore operation, take BIRT iHub System or the individual volume offline.

How to restore the backed-up data folders

1 In Windows Explorer, navigate to AC_SERVER_HOME\shared\config.

2 Select acserverconfig.xml, right-click, and choose Delete. Confirm the deletion.

3 In AC_SERVER_HOME\shared, delete the volume folder, for example, AC_SERVER_HOME\shared\storage. Confirm the deletion.

4 In Windows Explorer, navigate to the following location:

C:\Actuate3\BIRTiHubVisualization\backup

Select acserverconfig.xml, right-click, choose Copy, and copy this file to AC_SERVER_HOME\shared\config.

5 In C:\Actuate3\BIRTiHubVisualization\backup, right-click the volume folder, for example, storage. Choose Copy, and copy this folder to AC_SERVER_HOME\shared.

Restore the BIRT iHub System schemas using the command-line version of pg_restore. The pg_restore utility accepts arguments similar to those of the pg_dump utility.

The following example shows a typical pg_restore command used to import the contents of a backup file to the BIRT iHub System database:

pg_restore -h dbhost -p 8432 -U postgres -d db_name_ihub ihub.backup

Run pg_restore from the command line by performing the following tasks.

How to run pg_restore from a command prompt

1 Open a command prompt.

2 Navigate to the following location:

C:\Actuate3\BIRTiHubVisualization\modules\BIRTiHub\iHub\postgresql\bin

3 Enter the following command. Substitute your machine name for urup in this example:

pg_restore.exe --host urup --port 8432 --username postgres --dbname ihub --clean --verbose "C:\Actuate3\BIRTiHubVisualization\backup\ihub.backup"

Press Enter.

4 If prompted, type the postgres superuser password. Press Enter.

pg_restore executes, writing status messages to the command prompt.

Take BIRT iHub System or, alternatively, the individual volume online.

More information about backing up and restoring a volume schema using the PostgreSQL pg_dump and pg_restore utilities is available at:

http://www.postgresql.org/docs/9.2/static/backup.html

Page 265: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

I n d e x 365

IndexAAC_CONFIG_HOME element 315, 318AC_PMD_WINDOWS_CPUS registry key

327AC_PMD_WINDOWS_CPUS variable 325AC_SERVER_HOME parameter 28, 30, 42,

44AC_SERVER_HOME variable 337AC_SHARED_HOME variable 139AccessiblePDF licensing option 311accessing

configuration parameters 12Encyclopedia volumes 313iHub features 308metadata 12metadata databases 21RDBMS documentation 7shared resources 140volumes 17, 308web-based content 312, 313

accountspre-existing RDBMS databases and 5

AcEncycUpgrade utility 20AcExport utility 19AcExtern utility 20AcImport utility 19AcIntern utility 20acmachineid utility 315, 316AcMode utility 20acpmdconfig.xml 72, 90, 97acserverconfig.xml 12, 75, 90, 98Active Directory property 204, 205Active Directory server 77Active Directory servers 18, 204, 205, 222active requests 246, 250AcToc utility 20Actuate Analytics Option 310Actuate Basic reports 312Actuate Query Option 310Actuate Viewer 244AcVerify utility 20adding

cluster nodes 123, 125, 126, 129, 135, 148JDBC drivers 21license files 314licensing options 319schema owners 5system administrators 293volumes 123, 163

“Admin” Group property 212, 216administration tools (iHub) 338administration tools (obsolete) 19administrative reports 21administrators

adding cluster nodes and 12, 148backing up metadata and 35, 49backing up volumes and 337, 338, 340changing system 296creating system 293deleting 299, 300deleting clusters and 236failover procedures and 7installing alternate databases and 21installing iHub and 5, 6launching iHub images and 12managing cluster nodes and 125, 153, 231,

232managing iHub clusters and 7, 123, 151,

231managing iHub services and 154managing iHub System and 6, 17, 19, 106,

292managing metadata databases and 6obtaining licenses and 308preventing data loss and 6receiving CPU violation notices 332scheduling file purging processes and 339setting properties for 293

aggregation 310Alert Name property 192alerts

changing 196configuring 123, 189creating 106, 190, 192displaying 114, 189

Page 266: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

366 B I R T i H u b S y s t e m A d m i n i s t r a t i o n G u i d e

enabling or disabling 197Alerts list 108Analytics licensing option 311Analytics Option 310analyzing

data 310search results 312

analyzing data 311Apache web servers 286AppContainerPort parameter 53application container process listen port 53application programming interfaces 18, 21applications

backward compatibility with 21developing client 18managing resources for 106managing users and 224restricting CPU processes for 323running client 10running iHub processes and 8

archives (cloud deployments) 6archiving report files 339Attribute Name property 192authentication 10, 107, 123, 199, 201authorization 107, 123, 201autoarchive file purging processes 338auto-login scripts (pgpass) 349automated installation option 6

Bbacking up

configuration files 350, 351data files 20, 337data folders 351database schemas 349iHub System 336, 348iHub system schemas 7metadata 35, 49, 337, 338metadata databases 19PostgreSQL databases 340report files 19system databases 349volumes 338

backup conflicts 339backup files 349backup operations 7

backup procedures 6backup utilities 19, 338backward compatibility 21Basic reports 312BIRT 360 274BIRT Analytics licensing option 311BIRT Data Analyzer 274BIRT Data Analyzer Option 311BIRT Designer Professional 313BIRT Factory 275BIRT iHub. See iHub SystemBIRT Interactive Viewer licensing option 311BIRT licensing option 311BIRT onDemand licensing option 309BIRT Page Level Security licensing option 313BIRT reports 21BIRT service 156BIRT SmartSheet Security Option 311BIRT Spreadsheet Designer 311BIRT Spreadsheet Option 311BIRT Studio 275BIRTChartConvertToImageTimeOut

property 264birtihub.crt 79, 81, 84birtihub.jks 98BIRTImageCacheTimeout property 265BIRTReportDesignCacheTimeout property

265BIRTReportDesignCacheTotalNumber

OfEntries property 265briefing books 311browsers. See web browsersBufferPoolSize property 267

Ccache (data) 312Cache Timeout property 206, 224certificate authority 75Certification Authority 91certification authority 79Certification Authority (CA) 74certification authority (CA) 71changing

alerts 196cluster node properties 233cluster properties 152, 231

Page 267: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

I n d e x 367

configurations 227, 228CPU bindings 331, 332database encoding 21database schemas 5iHub service properties 154license file names 318licensing options 319metadata databases 5network cards 319server configuration templates 156system administrator 296

character encoding 21charts 310client applications 8, 10, 18client/server models 12cloud deployments 6cluster activity information 239, 250Cluster Configuration page (System Console)

123, 151cluster IDs 12, 234cluster nodes

accessing 231adding 123, 125, 126, 129, 135, 148associating with machine IDs 315changing properties for 233configuring 18, 140deleting 153, 233failing 12getting host names for 146managing 153, 231, 232networked environments and 12running iHub instances and 12setting properties for 126, 153starting or stopping 13, 152, 153, 232viewing active requests for 246, 250viewing activity information for 242

cluster schemas 5clusters

accessing configurations for 12accessing resources for 140adding nodes to 123, 125, 126, 129, 135administrative tasks for 7, 152changing node-key configurations for 319changing properties for 152, 231configuring 120, 121, 123creating 106, 121deleting 236, 237, 238

determining number of processors for 324, 331

exceeding CPU licenses for 332getting IP addresses for 146licensing options for 314, 315, 323managing 7monitoring 108, 114removing nodes from 153retrieving volume metadata and 13running iHub processes and 9running iHub services and 9running jobs and 17sending e-mail over 332setting properties for 121, 123setting up failover procedures for 7setting up iHub environment for ??–148shutting down 9starting 9storing metadata for 4, 5testing connections for 148turning off firewalls for 145updating configurations for 227viewing activity and resource usage for

239viewing diagnostic logs for 240viewing error messages for 115viewing list of 234, 235viewing node key information for 319viewing service provider information for

199, 200viewing system information for 241, 244

Clusters page (System Console) 120, 234collecting machine information 315, 316command line utilities 18, 19, 261, 316, 340completion notices 351Condition property 192Configuration Console 313

archiving report files and 339taking Encyclopedia offline with 353taking Encyclopedia online with 361

configuration fileupdating 123

configuration files 226configuration home directory 152configuration keys 276configuration parameters 12, 263configuration template properties 263

Page 268: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

368 B I R T i H u b S y s t e m A d m i n i s t r a t i o n G u i d e

configuration templates 12, 156configurations

accessing information about 4accessing resources and 140adding JDBC drivers and 21backing up 336, 350, 351changing database encoding and 21CPU binding and 323, 327, 329editing 227, 228failover procedures and 7file I/O operations and 12licensing iHub and 308, 315, 318purging report files and 339restoring 341running iHub clusters and 12, 18, 129storing metadata for 4troubleshooting 338updating 227

configuringalerts 123, 189Apache web servers 286cluster nodes 18, 140clusters 120, 121, 123connections 275e-mail settings 302iHub System 6, 13, 18LDAP adapters 201LDAP mappings 206network sharing 139, 140

ConnConfigFile property 275connection configuration files 275connection information 275connection pooling 21, 206connection profiles 275connection properties 275connections

accessing metadata databases and 12, 21backing up multiple schemas and 349changing database encoding and 21configuring 275externalizing 275installing alternate databases and 21running cluster nodes and 12setting LDAP server 203testing 87, 148, 176

Context Path property 224ControlConsole utility 261

copyingdatabase files 338license files 318

CPU binding 323–333CPU binding validation 325, 326, 329, 331,

332CPU-based licenses

combining named-user licenses with 309determining number of 323exceeding number of 331, 332obtaining 308, 310viewing information about 331

CPUsbinding iHub processes to 324, 329binding to multiple core 327deploying iHub over multi-threaded 324determining number of 327, 331hyperthreading and 328licensing options for 309measuring machine capacity for 309restricting iHub processes for 323running encycsrvr processes and 331stopping iHub processes for 330testing connections to 148viewing information about 329viewing maximum number of 331viewing processor IDs for 324, 325

creatingbackup folders 342data cubes 310iHub clusters 106, 121schema owners 5shared files and folders 140system administrators 293

crosstabs 311cube reports 310cubes 310cubeview files 311custom event service port 53custom events 18customer licenses 318CustomEventServicePort parameter 53customizing

cluster schemas 5database encoding 21metadata databases 5

Page 269: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

I n d e x 369

Ddashboards 311Dashboards licensing option 311data

analyzing 310, 311backing up volumes 336exporting 311preventing loss of 6recovering 6restoring 341retrieving 11, 13securing 70sharing across multiple machines 13storing volume 164verifying integrity of 20

Data Analyzer Option 311data cache 312data cubes 311data extraction processes 11data files 20, 337data in transit

securing 70Data Integration Option 312data objects 11, 19, 70, 336data security 70data sources 11, 275, 312data types 21database administration tool

backing up Encyclopedia and 341–347, 348–??

restoring Encyclopedia and 357database encoding 21Database name property 86, 175database objects 5Database port property 86, 175database schema owners

creating 5database schemas

backing up 7, 349customizing 5installing iHub System and 4preventing data loss for 6restoring 7storing metadata and 4

Database server property 86, 175databases

See also specific database typeaccessing documentation for 7accessing shared data and 13analyzing data and 310backing up 19, 349caching information objects and 312changing metadata 5configuring failover procedures for 7connecting to 21copying files in 338installing schemas for 4integrating 312integrating with iHub 11managing 6monitoring data and 261retrieving data from 13searching 224setting properties for 86, 179setting up user accounts for 5shutting down iHub clusters and 13specifying type 123, 175storing metadata and 4testing connections to 87, 176viewing properties for 175

DatamartArchiveLimit property 265DB2 databases

accessing documentation for 7managing volume schemas and 5setting properties for 184

Default BIRT 360 property 274Default BIRT Data Analyzer property 274Default BIRT Factory property 275Default BIRT Online property 274Default BIRT Studio property 275default database encoding 21default directories. See directoriesDefault Home Folder property 211, 215deleting

alerts 197cluster nodes 153, 233clusters 236, 237, 238system administrators 299, 300volumes 172

deployingiHub System 6, 18, 324spreadsheets 311

design files 11, 19

Page 270: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

370 B I R T i H u b S y s t e m A d m i n i s t r a t i o n G u i d e

See also designsdesigns 11, 275, 311developing

client applications 18DHTML reports 312diagnosing system problems 17diagnostic fixes 9diagnostic log files 240digital certificate 71, 72digital certificates 89digital signature 71directories

backing up volume data and 351cluster configurations and 12, 129restoring volume data 354shared resources and 140storing volume data and 164storing volumes and 168viewing configuration home 152

directory paths 129, 275, 337disk partitions 349disk storage location backups 337displaying

alerts 114, 189clusters 234, 235CPU information 329, 331cube reports 310diagnostic logs 240host names 146IP addresses 146licensing information 313, 314licensing options 226node key information 319processor IDs 324, 325reports 311service provider information 199, 200system information 241, 244, 292

distinguished names 205, 222document files 11, 19document format conversions 172, 244documentation

administering iHub System and ix, 18Network File Systems 12PostgreSQL client/server models 12third-party RDBMS tools and 7

documents 11, 311downloading

configuration files 227, 228System Console category views 245

driversaccessing metadata databases and 21adding JDBC 21changing database encoding and 21configuring iHub System and 13encryption and 87, 176running third-party databases and 11

dual-core CPUs 327

Ee.Analysis Option 312e.Report Designer Professional 312e.Report Option 312e.reports 312elastic iHub clustering 12e-mail 190, 332

See also notificationse-mail addresses 302Email Attribute property 210, 214Email property 192e-mail settings 302email-content-type attribute 61EnableGenerationService property 264EnableIntegrationService property 267EnableViewingService property 265encoding 21encryption 87, 176encryption cipher 72encryption keys 166Encryption Method property 86, 176Encyclopedia Data Store Administrator

migrating Encyclopedia and 20Encyclopedia Data Store Upgrader 274Encyclopedia processes 328Encyclopedia processes. See encycsrvr

processesEncyclopedia volume path 337Encyclopedia volumes

accessing multiple 313binding to CPUs 328installing metadata for 5migrating to current release 20running iHub processes and 9storing 171

Page 271: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

I n d e x 371

storing backup files for 349storing metadata for 4taking offline 353taking online 361

encycsrvr processes 9, 331, 332Enterprise Information Integration (EII)

services 312Entity ID property 201environment variables 325error information 17, 115, 244error logging application 17error logging extensions 18error logging reports 21error messages 115escaped characters 222evaluation licenses 315event service port 53events 18Excel spreadsheets 311exit code "0" 345Expiration time of deleted files property 339expired licenses 314, 315exporting

data 311Extensible Markup Language (XML) 75external connection profiles 275external data sources 11, 224external management systems 18, 20

FFactory service 9, 11, 264Factory service processes 328, 332failover operations 7failover procedures 6failureMessage element 61features 308Fetch Limit property 206file I/O operations 12file names 314file paths 129, 275, 337file purging properties 339file system backups 19, 338file systems 11FileCacheTimeout property 265files

archiving 339

backing up data 20, 337backing up report 19copying 338installing license 314, 318purging 339restoring 354sending licensing information and 318setting up network sharing for 140storing backup 349storing BIRT-specific 11updating license 123, 318viewing information about 21

filtering cluster list 234, 235firewalls 139, 145folders

backing up data 342, 351restoring 354setting up network sharing for 140storing backup files in 349storing Encyclopedia volumes and 171storing volume data and 164storing volumes and 168viewing shared configuration 152

freeing memory 261

Ggenerating

backup files 348machine IDs 316reports 11

getJDBCMajorVersion function 21graphs. See chartsgreen padlock symbol 75Group Base DN property 210, 214Group Description Attribute property 210,

215Group Object property 210, 215Group Search Filter property 210, 215Group Volume Filter Attribute property 212,

215, 216

Hhelp topics

See also online documentationHome Folder Attribute property 211, 215host names 126, 146

Page 272: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

372 B I R T i H u b S y s t e m A d m i n i s t r a t i o n G u i d e

HTML tags 61HTTP connections 12HTTPS connection

verifying 84HTTPS protocol 76HyperText Transfer Protocol Secure (HTTPS)

70, 71hyperthreading 328, 330

II/O operations 12IDAPI applications 10, 18identity provider (IdP) 73iHub images 6, 12, 18iHub Integration Technology 18, 20iHub licensing options 310, 319iHub Logging and Monitoring System 17iHub processes

adding cluster nodes and 12binding to CPUs 324, 327binding to Linux servers 329binding to Windows systems 324, 327, 328monitoring 114restricting number of running 323running 8, 9starting 324stopping 330verifying CPU bindings for 325, 326, 329,

331, 332iHub run-time environment 8, 21iHub servers

binding to CPUs 323–333exceeding CPU licenses for 331getting machine ID for 315, 316monitoring 114sending requests to 11

iHub service configuration parameters 263iHub services 9, 10, 12, 154iHub servlet container 8iHub System

accessing features 308administering 6, 17, 19, 106, 292backing up 336, 348changing CPU bindings and 331, 332changing machines for 319

checking bound processors for 329, 330–332

communication protocol for 10configuring 6, 13, 18customizing volume schemas and 5deploying 6, 18, 324extending functionality of 18installation options for 4, 6installing license files for 314, 318integrating with RDBMS databases 11, 21loading metadata for 5maintaining 9, 309monitoring 106, 239online backup mode and 20preventing data loss for 6restoring 340, 353, 359running on multiple machines 13security mechanisms for 10shutting down 9specifying number of CPUs for 308starting 9storing metadata for 4testing connections for 87, 148, 176viewing diagnostic logs for 240viewing licensing information for 313, 314viewing usage and performance statistics

for 243incomplete files 338indexed searches 5Information Console 76

administering iHub System and 18documentation for 19providing run-time environment for 8running 10

Information Delivery API 10, 18Information Delivery application

programming interface (IDAPI) 75Information Object Caching Option 312information objects 310, 312installation

database schemas 4Encyclopedia volume metadata 5JDBC drivers 21license files 314, 318sample volume 21

installation options 4, 6instance licenses 309

Page 273: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

I n d e x 373

Integration service 9, 11, 267, 312Integration service processes 328IntegrationService element 328Intel processors 328Interactive Crosstabs licensing option 311Interactive Viewer licensing option 311intra-cluster messaging 13IP addresses 146iServer

binding to CPUs 314iServer utilities (obsolete) 19

JJava developer guide 19Java KeyStore (JKS) file 89Java keytool utility 77, 91Java programs 20Java Runtime Environment 21Java SE Development Kit (JDK) 91Java Virtual Machines. See JVM librariesJavaScript API (JSAPI) 76JavaScript classes 76JavaServerClientConnectionTimeout

property 266JavaServerClientMaxConnections property

266JavaServerClientMinConnections property

266JDBC drivers 11, 21jdbcCompliant function 21job completion notices 351job dispatcher 17job information 21job schedulers 17jobs 11, 17, 264, 351JRE environment 21JSAPI libraries 76

KKeyAlias 98KeystorePass 90

LLDAP Adapter options (System Console) 201LDAP connection settings 203LDAP Mapping configurations 206

LDAP Performance Settings 206LDAP Port property 203, 205LDAP server 77LDAP Server property 203, 205LDAP servers 18, 20, 201, 203license file names 314, 318license files

adding licenses and 319installing 314, 318obtaining 314, 315, 317reapplying for 319receiving e-mail about 318running iHub clusters and 315selecting 226specifying location of 315updating 123, 318

license keys 318, 324License Management Server 261licensed CPUs 331licenses 308, 314licensing information 318, 331, 332licensing options 226, 308, 310, 319licensing support (Actuate) 317, 318, 319Lightweight Directory Access Protocol

(LDAP) 77Linux servers

binding iHub processes to 329running logging applications on 17verifying core settings for 330verifying CPU bindings for 329

LM Server 261load balancing (clusters) 12, 17, 286log files 240, 331Logging and Monitoring System 17logging applications 17logging in to

System Console 108logical processors 324, 327, 328, 330losing data 6

Mmachine capacity 310machine IDs 315, 316machine information 315, 316mail servers 302maintenance 9, 309

Page 274: BIRT iHub System Administration Guideotadocs.opentext.com/documentation/ManualsIHUB31/... · Information in this document is subject to change without notice. Examples provided are

374 B I R T i H u b S y s t e m A d m i n i s t r a t i o n G u i d e

maintenance customers 317
manuals. See documentation
master cluster nodes 7
MaxBIRTDataResultsetBufferSize property 264
MaxConcurrentRequests property 266
MaxDatamartArchiveSize property 264, 266
Maximum Pool Size property 206
MaxPersistentArchiveSize property 264, 266
MaxSyncJobRuntime property 264
MaxSyncRequestTime property 264
MaxTransientArchiveSize property 266
MDS_ENABLED 78
media failures 349
Member ID Type property 211, 215
Member List Attribute property 211, 215
memory 261, 310
Message Distribution service
  setting port for 53
Message Distribution service (MDS) 78
message distribution service port 53
Message property 192
metadata
  accessing 12
  autoarchiving and 339
  backing up 35, 49, 337, 338
  defined 336
  loading 5
  restoring 340
  storing 4
metadata databases 4, 5, 6, 13, 123
  encrypting connections 86
  previous releases and 4
metadata tables 5
Metrics Management licensing option 311
Microsoft Excel spreadsheets 311
migration 6
migration tools (obsolete) 19
monitoring data 261
Monitoring page (System Console) 114, 192
multidimensional data analysis 310
multiple-core CPU binding 327
Multi-Tenant licensing option 313
multi-threaded CPUs 324

N
named-user licenses 308
native system tools 7
network cards 316, 319
Network File Systems 12
network sharing configurations 139, 140
networked environments
  obtaining licenses for 310, 316, 319
  running cluster nodes and 12
  sending data over 11
NFS (Network File Systems) 12
node keys 315
node-key configurations 319
node-key licensing 315, 316, 319
notifications 115, 302, 351
  developing templates for ??–62

O
obsolete command line utilities 19
obsolete features 7
ODA data sources 276
onDemand licensing option 309
OnDemandServerViewMessageTimeout property 266
online backup mode 20
online documentation
  administering iHub System and ix, 18
online help. See online documentation
operating systems 22
options (licensing) 308, 310, 319
Oracle databases
  accessing documentation for 7
  managing volume schemas and 5
  setting properties for 183
output 11
output formats 11, 245

P
packages (licensing option) 310
Page Level Security Option 312
Page Level Security option 70
page-level security 70, 312, 313
PagePoolSize property 267
parameters
  configuring clusters and 12
  setting iHub ports and 53
  setting iHub service 263
partitions (volume) 11
Password property 87, 176, 204, 205
PasswordProtectedTransport 74
passwords
  creating 122, 294
patches 9
paths 129, 275, 337
PEM files 90, 97
performance
  CPU binding and 327
  file I/O operations and 12
  iHub architecture models and 9
  iHub clusters and 13
performance monitoring extensions 18
performance statistics 243, 311
permanent licenses 315
PersistentArchiveFileCacheTimeout property 266
PersistentArchiveLimit property 264, 266
pg_dump command line options 348
pg_dump utility 340, 342, 348, 349
pg_restore command line options 360
pg_restore utility 340, 353, 357, 359, 360
pgAdmin database administration tool 340, 361
  backing up Encyclopedia and 341–347, 348–??
  restoring Encyclopedia and 357
pgpass files 349
ping command 148
PKCS12 format 98
platform licenses 309
Port Number property 224
ports 86, 175, 205, 224
  application container listen 53
  custom event service 53
  Message Distribution service 53
  SOAP dispatcher service 53
PostgreSQL administration utilities 340, 361
PostgreSQL command line utilities 340
PostgreSQL data folder 88
PostgreSQL databases
  accessing documentation for 7
  accessing metadata in 12
  backing up 340
  backing up data folders for 351
  file I/O operations and 12
  installing iHub and 4
  managing volume schemas and 5
  running volume backup and restore operations for 340
  running volume backup and restore operations with 341, 348
  setting properties for 86, 179
  starting 11
  storing metadata and 4
  taking Encyclopedia offline for 353
  taking Encyclopedia online for 361
  viewing properties for 175
PostgreSQL interactive terminal (psql) command 88
Preferred Pool Size property 206
Prefix property 208, 213
PreviousSession 74
printers 13
Privacy Enhanced Mail (PEM) format 89
private key 71
  encryption 72
privileges 70
  accessing shared resources and 143
  accessing volumes and 17
Process Management Daemon
  CPU binding and 324, 327, 329
  distributing SOAP requests and 11
  running iHub clusters and 12
  running iHub processes and 9
  starting encycsrvr processes and 331, 332
  viewing diagnostic logs for 240
processor affinity 325, 326, 327, 330
processor IDs 324, 325, 327, 329
ProcessorAffinity element 327
processors. See CPUs
properties
  adding service providers and 201
  adding volumes and 163, 165, 172
  changing iHub cluster 152, 231
  changing iHub service 154
  configuring RSSE SOAP Service 224
  connecting to data sources and 275
  creating system administrators and 293
  file purging processes 339
  file sharing 140
  licensing iHub and 308
  sending e-mail and 302
  setting iHub cluster 121, 123
  setting iHub cluster node 126, 153, 233
  setting LDAP server 203, 205, 206
  setting metadata database 86, 179
  viewing metadata database 175
ProvisioningSOAPPort parameter 53
ProvisioningSOAPSSL 53
proxy servers 286
public key 71, 72, 75
publishing reports 311
Purge deleted files time property 339
purging processes 338
purging volume data 339

Q
queries 11, 12
Query Option (licensing) 310

R
RDBMS command line utilities 340
RDBMS databases 4, 5, 338
RDBMS documentation 7
RDBMS tools 6, 19, 338
Read privilege 70
rebinding encycsrvr processes 332
recovery operations 6
Recursive Groups property 204, 205
relational database management systems 4
relational databases. See databases
renaming license files 318
report design files 11, 19
  See also report designs
report designs 11, 275, 311
report document files 11, 19
report documents 11, 311
report files
  archiving 339
  backing up 19, 337
  purging 339
  restoring 354
  setting up network sharing for 140
  storing 11
  viewing information about 21
Report Server Security Extension 18, 20
Report Server Security Extension services 10
Report Studio licensing option 313
report viewers 311
reporting servers. See iHub servers
reporting services. See iHub services
ReportingService element 328
reports
  displaying 311
  generating 11
  installing sample 21
  publishing 311
  running 312
requests 9, 11, 244, 246
resource groups 243, 274
resource usage information 239
resources
  accessing 140
  creating cluster nodes and 12
  defining as work unit 308
  evaluating usage 17
  managing 106
  monitoring 106, 239
  retrieving volume metadata and 13
restoring iHub System 340, 353, 359
restricting iHub processes 323
root certificates 74
  installing 81
RSSE applications 10, 18, 20, 224
RSSE SOAP service settings 224
running
  acmachineid utility 316
  console client applications 10
  ControlConsole utility 261
  encycsrvr processes 331, 332
  iHub processes 8, 9, 323
  iHub services 9
  jobs 11, 17, 264
  pg_dump utility 342, 349
  pg_restore utility 357, 360
  PostgreSQL databases 11
  queries 11, 12
  report designs 311
  reports 312
  spreadsheet reports 311
  third-party databases 11

S
SaaS licensing option 310
SAML Identity Provider information 123, 199, 200
SAML IdP implementation 74
SAMLEntityID 78
sample reports 21
scalability 106
scheduling jobs 17
schemas
  backing up 7, 349
  customizing 5
  installing iHub System and 4
  preventing data loss for 6
  restoring 7
  storing metadata and 4
scripts
  extending iHub functionality and 18
  running RDBMS backup tools 338
search results 312
Search setting property 224
searching
  Active Directory servers and 222
  clusters 234, 235
  metadata database 224
Secure Read privilege 70
Secure Socket Layer (SSL) 70
Secure Sockets Layer (SSL) 71
secured connection 72
secured SSL connection 79
securing data 70
security 10, 70, 107, 139, 312
Security Assertion Markup Language (SAML) 71, 73
security settings 123
security systems 20
self-signed certificates 74
sending licensing information 318, 331, 332
server configuration template properties 263
server configuration templates 12, 156
Server element 328
Server Name property 224
Server URL property 201
servers
  See also iHub servers
  configuring Apache web 286
  configuring mail 302
  exceeding CPU licenses for 332
  licensing information and 319
  shutting down iHub clusters and 9
service provider 73
service provider information 199, 200
service providers 200
services (reporting). See iHub services
servlet container (iHub) 8
session key 72
shared licenses 314, 315
shared resources 140
shared secret key 72
Shibboleth IdP 74
side-by-side migrations 6
signed certificates 77
Simple Object Access Protocol (SOAP) 75
single sign-on authentication 123, 199
single-point media failures 349
single-point node failures 12
SmartSheet Security Option 311
snapshots 338
SOAP APIs 75
SOAP dispatch port 53
SOAP services 75
SOAP-based messages 8, 10, 19
SOAPDispatchService
  port numbers 75
SOAPDispatchSOAPPort parameter 53
SOAPDispatchSOAPSSLPort 53
Software as a Service licensing option 310
Solaris systems 22
spreadsheet reports 311
SQL Server databases
  accessing documentation for 7
  managing volume schemas and 5
  setting properties for 181
Squirrel Data Exporter 20
Squirrel database 4
SSL certificate 74
SSL certificates 91
  installing 97
  location 89
  storing 72
SSL connections 88
  verifying support 88
SSL encryption 87, 176
SSL handshake 72
SSL property 204, 205
SSL secured server 72
SSL security 76
stand-alone servers 319
starred clusters 235
StartArguments property 267
starting
  iHub cluster nodes 13, 152, 153, 232
  iHub clusters 9
  iHub processes 324
  iHub services 154
  iHub System 9
  pg_dump utility 342, 349
  pg_restore utility 357, 360
  PostgreSQL databases 11
  volumes 9, 172
stopping
  cluster nodes 13, 152, 153, 232
  iHub processes 330
  iHub services 154
  iHub System 9
  volumes 172
storage locations (Encyclopedia) 171
storage locations (volumes) 168
subscription licenses 309
successMessage element 61
Suffix property 208, 214
SynchReportingWeight property 265
SyncJobQueueSize property 264
SyncJobQueueWait property 264
system. See iHub System
system administrators 106, 293
  See also administrators
System Console
  adding Encyclopedia and 171
  adding volumes and 165, 168, 171
  administering iHub and 18, 106, 226, 239, 292
  documentation for 19
  filtering cluster list in 234, 235
  licensing and 308, 319, 331
  logging in to 108
  managing iHub clusters and 120, 123, 127
  managing users and 201
  monitoring tasks and 114, 189
  providing run-time environment for 8
  running 10
  sending e-mail and 333
  starting iHub and 9
  viewing data items in 107
  viewing system information and 241, 244, 292
system databases
  backing up 349
  changing 5
  connecting to 21
  installing 4
  storing metadata and 4
system failover procedures 7
system metadata
  backing up 35, 49
  maintaining 13
  storing 4
system schemas
  backing up 7
  customizing 5
  restoring 7
  storing configuration metadata and 4
system tools 7

T
tables (metadata databases) 5
tags 61
TARGET_DATABASE_TYPE parameter 28, 30, 42, 44
temporary documents 11, 265
temporary licenses 314, 315
Test Connection property 87, 176
testing
  connections 87, 148, 176
  JDBC drivers 21
text files (licenses) 318
third-party command line utilities 340
third-party databases 4, 5, 21, 312
  caching information objects and 312
  storing metadata and 4
third-party RDBMS documentation 7
third-party RDBMS tools 6, 19, 338
Threshold property 192
Timeout property 206
TLS compression 72
TransientArchiveFileCacheTimeout property 267
TransientArchiveLimit property 267
TransientReportCacheSize property 265
TransientReportTimeOut property 265
TransientStoreMaxCacheEntries property 265
troubleshooting 17, 338
trusted certificates 77
Trusted Root Certification Authorities certificate store 79

U
Unlimited User CPU License 313
updating
  configuration file 123
  configurations 227
  licenses 123, 226, 318
upgrades
  application programming interfaces and 21
  Encyclopedia volumes and 20
  licensing options and 315
  side-by-side migrations and 6
  volumes and 5
URLs
  Network File Systems documentation 12
  PostgreSQL administration utilities 361
  PostgreSQL client/server models 12
  RDBMS documentation 7
  service providers 201
  System Console 108
usage information 17, 243
usage logging application 17
usage logging extensions 18
usage reports 21
user accounts
  installing iHub and 5
user authentication 10, 107, 123, 199, 201
user authorization 107, 123, 201
User Base DN property 208, 214
user credentials 5
User Description Attribute property 209, 214
User DN property 204, 205
User Full Name Attribute property 209, 214
user groups 17, 70
User Login Name Attribute property 208, 214
User Object property 209, 214
User Search Filter property 209, 214
User Volume Filter Attribute property 211, 216
Username property 87, 176
users 70
  accessing volumes and 17
  adding as system administrator 293
  authenticating 71, 73
  licensing and 308
  managing 18, 20, 21, 201

V
validating CPU binding 325, 326, 329, 331, 332
variables (iHub environment) 325
verifying data integrity 20
View monitoring service 240
View service 9, 11, 240, 265
View service processes 328, 332
viewers 311
viewing
  alerts 114, 189
  clusters 234, 235
  CPU information 329, 331
  cube reports 310
  diagnostic logs 240
  host names 146
  IP addresses 146
  licensing information 313, 314
  licensing options 226
  node key information 319
  processor IDs 324, 325
  reports 311
  service provider information 199, 200
  system information 241, 244, 292
ViewingService element 328
ViewingWeight property 267
Visualization Platform 200
  documentation for 19
volume data 13, 163, 336
volume data directories 351, 354
volume databases
  backing up 19
  changing 5
  connecting to 21
  file I/O operations and 12
  installing 4
  retrieving data from 13
  viewing incomplete files in 338
volume failover procedures 7
volume metadata 86
  backing up 35, 49, 337, 338
  installing 5
  maintaining 13
  restoring 340
  storing 4
volume partitions 11
volume schemas
  backing up 349
  customizing 5
  storing configuration metadata and 4
Volumes
  viewing diagnostic logs for 240
volumes
  accessing objects in 308
  adding to clusters 123, 163, 164
  backing up 338
  controlling access to 17
  CPU binding and 332
  customizing metadata databases for 5
  customizing schemas for 5
  deleting 172
  disabling and enabling 172
  installing sample 21
  managing 7
  monitoring 108, 114
  preventing data loss for 6
  running iHub processes and 13
  running third-party databases and 21
  saving data objects for 11
  saving metadata for 4
  setting file-sharing permissions for 143
  setting properties for 163, 165, 172
  setting up failover procedures for 7
  starting 9, 172
  storing 168
  taking snapshots of 338
  viewing activity information for 172
  viewing information for 172
volumes. See Encyclopedia volumes

W
web browsers
  accessing structured content and 313
  accessing System Console and 108
web pages 312, 313
web servers 286
web service applications 224
Web Service Description Language (WSDL) 75
web services 154
web.xml 78, 97
wildcard characters 234
Windows systems
  backing up volume data folders on 351
  binding iHub processes to 324, 327, 328
  distributing iHub System over 6
  network cards and 316, 319
  obtaining IP addresses for 146
  restoring iHub data folders on 354
  running logging applications on 17
  setting up firewalls for 139
  setting up network sharing for 140
  turning off firewalls for 145
work unit licenses 308, 310
work units 243, 308
WSDL utilities 75

X
XML files 275, 318

Z
ZIP files 318