
EMC® Data Computing Appliance
Appliance Version 2.x

Administration Guide

APPLIES TO DCA SOFTWARE VERSION 2.1.0.0
PART NUMBER: 302-001-555
REVISION: 02


Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA.

Published April 2015

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).


CONTENTS

Preface

Chapter 1 System Information and Configuration

DCA Configurations .......................................................................... 11
Identify the version of the installed DCA software ............................ 11
DCA configuration requirements ....................................................... 11
DCA Module Types ............................................................................ 12
DCA configuration rules .................................................................... 15
Racking order .................................................................................... 15
Racking guidelines ............................................................................ 16
Rack density ...................................................................................... 17
Mixed System rack components ....................................................... 19
Hadoop-only System Rack components (minimum config.) .............. 20
HD-Compute System Rack components (minimum config.) .............. 21
Aggregation rack components ........................................................... 22
Expansion rack components ............................................................. 23
Power supply reference ..................................................................... 24
Network and cabling configurations .................................................. 29
Interconnect cabling reference .......................................................... 29
Administration switch reference ........................................................ 36
Aggregation switch reference ............................................................ 40
Network hostname and IP configuration ........................................... 47
Multiple-rack cabling reference ......................................................... 50
Configuration files ............................................................................. 51
Location of old core files ................................................................... 51
Default passwords ............................................................................. 52
Common administrative tasks using dca_setup ................................ 52
Configuring access to an external SMTP Forwarder .......................... 52
Set FIPS mode with dca_setup .......................................................... 53
Creating a Customer Maintained Repository for RPMs ..................... 54

Chapter 2 Networking

Architecture ....................................................................................... 56
Administration Network ..................................................................... 57
Interconnect Network ........................................................................ 59
Interconnect Fault Tolerance ............................................................. 61
Host Port, Interconnect Cable, or Switch Port Failure ....................... 61
Switch Failure .................................................................................... 62
External Connectivity ........................................................................ 64
Connect through the Master Servers ................................................. 64
Connect through the Interconnect or Aggregation Switch ................. 67

Diagnostics................................................................................................. 71

EMC DCA Administration Guide 3


Chapter 3 Default Ports for Customer Networks

Chapter 4 Storage

Overview ............................................................................................ 75
Storage layout ................................................................................... 75
Common storage commands ............................................................. 78

Chapter 5 Hadoop on the DCA

Hadoop overview ............................................................................... 79
Hadoop modules ............................................................................... 79
Hadoop platforms .............................................................................. 80
Common commands ......................................................................... 81
Hadoop configuration ........................................................................ 81
Configuration files ............................................................................. 82
Default ports ...................................................................................... 82
Storage .............................................................................................. 82
Hadoop installation specification ...................................................... 83
Hadoop example ............................................................................... 85
Creating table and inserting rows in Hive .......................................... 85
LDAP and Kerberos security .............................................................. 85
Enabling Kerberos on the DCA .......................................................... 86
Adding users to security group .......................................................... 88
Removing Greenplum path ................................................................ 88
Disabling security mode .................................................................... 88

Chapter 6 Master Server Failover

Orchestrated Failover ........................................................................ 90
What happens during an Orchestrated Failover ................................ 90
Automated Failover ........................................................................... 92
Enable, Disable, and Status of Automated Failover .......................... 92
Triggers for Automatic Failover ......................................................... 93
Monitor a Failover in Progress ........................................................... 93
Failback After an Automated Failover ............................................... 93

Chapter 7 SNMP

DCA MIB information .......................................................................... 95
MIB Locations .................................................................................... 95
MIB Contents ..................................................................................... 95
View MIB .......................................................................................... 110
Integrate DCA MIB with environment ............................................... 110
Change the SNMP community string ............................................... 110

Chapter 8 Database and System Monitoring Tools

ConnectEMC Dial Home Capability .................................................. 113
Greenplum Command Center .......................................................... 118
Pivotal Command Center ................................................................. 118
PCC User Interface ........................................................................... 118
Greenplum Database email and SNMP alerting ............................... 119


Chapter 9 General Database Maintenance Tasks

Routine Vacuum and Analyze ........................................................... 120
Transaction ID Management ............................................................ 120
Routine Reindexing .......................................................................... 121
Managing Greenplum Database Log Files ....................................... 121
Database Server Log Files ............................................................... 122
Management Utility Log Files ........................................................... 122

Chapter 10 Utility Reference

dca_setup ........................................................................................ 124
Available operations ........................................................................ 125
Automating dca_setup using a configuration file ............................. 134
dca_shutdown .................................................................................. 138
dcacheck ......................................................................................... 140
dca_healthmon_ctl ........................................................................... 141
dca_blinker ...................................................................................... 142
gppkg ............................................................................................... 142


Preface

This guide is intended for EMC field personnel and customers responsible for configuring or managing an EMC Data Computing Appliance (DCA). This guide provides information on DCA-specific management tools and features.

• About This Guide

• Document Conventions

• Getting Support

About This Guide

This guide assumes knowledge of Linux/UNIX system administration, database management systems, database administration, and structured query language (SQL).

This guide contains the following chapters and appendices:

• Chapter 1, “System Information and Configuration” explains the DCA hardware configurations, power supply reference, network and cabling, authentication information for the administrator, and common administrative tasks.

• Chapter 2, “Networking” describes DCA network architecture, external connectivity, and steps to troubleshoot a switch.

• Chapter 3, “Default Ports for Customer Networks” lists the default ports available for connectivity to customer networks.

• Chapter 4, “Storage” contains information about DCA storage layout and common storage commands.

• Chapter 5, “Hadoop on the DCA” discusses Hadoop implementation and enabling security mode on the DCA.

• Chapter 6, “Master Server Failover” describes the Master Server Failover feature of the DCA and the two types of failover.

• Chapter 7, “SNMP” describes the DCA MIB information and how to integrate DCA MIB with an environment.

• Chapter 8, “Database and System Monitoring Tools” lists various tools to monitor the status of Greenplum Database as well as the hardware components it runs on.

• Chapter 9, “General Database Maintenance Tasks” describes tasks to be performed regularly to maintain optimum performance of the GPDB.

• Chapter 10, “Utility Reference” contains reference information about DCA utilities.

Document Conventions

The following conventions are used throughout the DCA documentation to help you identify certain types of information.

• Text Conventions

• Command Syntax Conventions


Text Conventions

Command Syntax Conventions

Table 1 Text Conventions

bold
• Usage: Button, menu, tab, page, and field names in GUI applications
• Example: Click Cancel to exit the page without saving your changes.

italics
• Usage: New terms where they are defined; database objects, such as schema, table, or column names
• Examples: The master instance is the postgres process that accepts client connections. Catalog information for Greenplum Database resides in the pg_catalog schema.

monospace
• Usage: File names and path names; programs and executables; command names and syntax; parameter names
• Examples: Edit the postgresql.conf file. Use gpstart to start Greenplum Database.

monospace italics
• Usage: Variable information within file paths and file names; variable information within command syntax
• Examples: /home/gpadmin/config_file and COPY tablename FROM 'filename'

monospace bold
• Usage: Calls attention to a particular part of a command, parameter, or code snippet.
• Example: Change the host name, port, and database name in the JDBC connection URL: jdbc:postgresql://host:5432/mydb

UPPERCASE
• Usage: Environment variables; SQL commands; keyboard keys
• Examples: Make sure that the Java /bin directory is in your $PATH. SELECT * FROM my_table; Press CTRL+C to escape.

Table 2 Command Syntax Conventions

Text Convention Usage Examples

{ } Within command syntax, curly braces group related command options. Do not type the curly braces.

FROM { 'filename' | STDIN }

[ ] Within command syntax, square brackets denote optional arguments. Do not type the brackets.

TRUNCATE [ TABLE ] name


... Within command syntax, an ellipsis denotes repetition of a command, variable, or option. Do not type the ellipsis.

DROP TABLE name [, ...]

| Within command syntax, the pipe symbol denotes an "OR" relationship. Do not type the pipe symbol.

VACUUM [ FULL | FREEZE ]

$ system_command
# root_system_command
=> gpdb_command
=# su_gpdb_command

Denotes a command prompt; do not type the prompt symbol. $ and # denote terminal command prompts. => and =# denote Greenplum Database interactive program command prompts (psql or gpssh, for example).

$ createdb mydatabase

# chown gpadmin -R /datadir

=> SELECT * FROM mytable;

=# SELECT * FROM pg_database;

Getting Support

EMC support, product, and licensing information can be obtained as follows.

Product information

For DCA product-specific documentation, release notes, or software updates, go to the EMC Online Support site at http://support.emc.com, click Support By Product, and search for Data Computing Appliance.

Technical support

For technical support, go to http://support.emc.com. The Support page includes several support options, including an option to request service. Note that to open a service request, you must have a valid support agreement. Please contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.


CHAPTER 1 System Information and Configuration

The following sections are included:

DCA Configurations

Power supply reference

Network and cabling configurations

Network hostname and IP configuration

Multiple-rack cabling reference

Configuration files

Default passwords

Common administrative tasks using dca_setup

DCA Configurations

This section describes the supported hardware configurations.

Identify the version of the installed DCA software

DCA documentation is tied to a specific version of the DCA software. To identify the version of the software running on a particular DCA, perform this procedure:

1. Log in to the Primary Master server as the user root.

2. View the contents of the /opt/dca/etc/dca-build-info.txt file. For example:

# cat /opt/dca/etc/dca-build-info.txt

In the output, see the ISO_VERSION information.

## =============================================
ISO_BUILD_DATE="Wed Sept 15 21:59:56 PST 2014"
ISO_VERSION="2.1.0.0"
ISO_BUILD_VERSION="4"
ISO_INSTALL_TYPE="iso"
## =============================================
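To script this check, the version line can be extracted directly. This is a convenience sketch, not a DCA utility; it assumes the shell-style KEY="value" format shown above.

```shell
# Print only the installed DCA software version (illustrative one-liner).
# Assumes /opt/dca/etc/dca-build-info.txt uses KEY="value" lines as shown above.
grep '^ISO_VERSION=' /opt/dca/etc/dca-build-info.txt | cut -d'"' -f2
```

On this release the command would print 2.1.0.0.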

DCA configuration requirements

The DCA is built from required switches, two master nodes for cluster management, and server increments called modules.

The following DCA configurations are supported:


Greenplum Database (GPDB) DCA (can be GPDB-only or a mix of GPDB and other types of servers):

• Requires a minimum of 1 GPDB module in the System Rack occupying the lowest rack position

• A GPDB module is comprised of x4 Intel 2U 24-drive servers

• Maximum GPDB modules per rack: x4 modules (x16 24-drive servers)

• Hi-memory servers (servers with 256GB memory) only allow a maximum of 3 modules per rack

Hadoop-only DCA (applies to DCA version 2.0.1.0 and later):

• Minimum Hadoop configuration: 1 hdw module + 1 hdm module

• A Hadoop Worker module (hdw) is comprised of x4 2U Intel 12-drive servers

• A Hadoop Master module (hdm) is comprised of x4 2U Intel 12-drive servers

Hadoop Compute configuration:

• Four HDC modules

DCA Module Types

DCA modules consist of either two or four servers. EMC-supported servers for the DCA are named Dragon 12, Dragon 24, or Kylin. This naming helps customers and EMC Support to easily identify servers.

This section describes the server types that make up the three available module types:

Pivotal Greenplum Database™ (GPDB) Module

Data Integration Accelerator (DIA) Modules

Hadoop Modules

GPDB Module Server Types and Specifications

Table 3 lists the server types and specifications for the GPDB modules.


Table 3 GPDB Module Specifications

GPDB Standard Module (introduced in DCA version 2.0.0.0)
• Comprised of four Dragon 24 servers
• Disks: Twenty-four 900GB drives per server
• Memory: 64GB per server
• Usage: GPDB

GPDB Compute Module (introduced in DCA version 2.0.0.0)
• Comprised of four Dragon 24 servers
• Disks: Twenty-four 300GB drives per server
• Memory: 64GB per server
• Usage: GPDB

GPDB Hi-Memory Module (introduced in DCA version 2.0.2.0)
• Comprised of four Dragon 24 servers
• Disks: Twenty-four 300GB drives per server
• Memory: 256GB per server
• Usage: GPDB


DIA Modules Server Types and Specifications

Table 4 lists the server types and specifications for the DIA modules.

Table 4 DIA Module Specifications

DIA-Kylin 300GB Disk Module (introduced in DCA version 2.0.0.0)
• Comprised of two Kylin servers
• Disks: Six 300GB drives per server
• Memory: 64GB per server
• Usage: Business Intelligence Tools

DIA 3TB Disk Module (introduced in DCA version 2.0.2.0)
• Comprised of two Dragon 12 servers
• Disks: Twelve 3TB drives per server
• Memory: 64GB per server
• Usage: Business Intelligence Tools

DIA Hi-Memory Module with 24 HDDs (introduced in DCA version 2.0.2.0)
• Comprised of two Dragon 24 servers
• Disks: Twenty-four 300GB drives per server
• Memory: 256GB per server
• Usage: Business Intelligence Tools

DIA-Kylin Hi-Memory Module (introduced in DCA version 2.1.0.0)
• Comprised of two Kylin servers
• Disks: Six 300GB drives per server
• Memory: 256GB per server
• Usage: Business Intelligence Tools

Hadoop Modules Server Types and Specifications

Table 5 lists the server types and specifications for the Hadoop modules.

Table 5 Hadoop Module Specifications

Hadoop (HD) Module (master or worker)
• Comprised of four Dragon 12 servers
• Disks: Twelve 3TB drives per server
• Memory: 64GB per server
• Usage: Hadoop

Hadoop-Compute (HDC) Module
• Comprised of two Kylin servers
• Disks: Six 300GB drives per server
• Memory: 64GB per server
• Usage: Hadoop with Isilon storage

Hadoop Dragon 12 Hi-Memory Module (introduced in DCA version 2.1.0.0)
• Comprised of four Dragon 12 servers
• Disks: Twelve 3TB drives per server
• Memory: 256GB per server
• Usage: Pivotal Hadoop and Pivotal HAWQ

Hadoop Dragon 12 Large Disk Module (introduced in DCA version 2.1.0.0)
• Comprised of four Dragon 12 servers
• Disks: Twelve 6TB drives per server
• Memory: 256GB per server
• Usage: Pivotal Hadoop and Pivotal HAWQ


DCA configuration rules

Manufacturing ships three basic types of racks for the DCA:

System - DCA2-SYSRACK

Aggregation - DCA2-AGGREG

Expansion - DCA2-EXPAND

Racking order

Table 6 lists the EMC-approved racking order for the DCA. All master nodes and switches are racked first. All other nodes are racked in the order shown below:

Table 6 Approved DCA Racking Sequence (*New module introduced in DCA software release 2.1.0.0)

SKU, Description | Host Name Prefix | Rack Priority (when present)

100-585-031-07, Dragon 24, 900GB disks, 64GB RAM | Segment server (sdw) | First
100-585-035-06, Dragon 24, 300GB disks, 64GB RAM | sdw | Second
100-585-030-06, Dragon 12, 3TB disks, 64GB RAM | Hadoop Master (hdm), Hadoop Worker (hdw), dia (etl) server | Third
100-585-055-01, Dragon 24, 300GB disks, 256GB RAM | sdw, etl | Fourth
100-585-068-01, Dragon 12, 3TB disks, 256GB RAM | hdm, hdw | Fifth
*100-585-161-xx, Dragon 12, 6TB disks, 256GB RAM | hdw | Sixth
100-585-067-01, Kylin, 300GB disks, 256GB RAM | etl | Seventh
100-585-029-05, Kylin, 300GB disks, 64GB RAM | etl, hdc | Eighth


Racking guidelines

GPDB Compute, Standard, and High Memory modules must not be mixed in the same DCA.

GPDB Hi-Mem servers are limited to three modules (12 servers) per rack.

The minimum Hadoop configuration must include two Hadoop modules, one serving as the Hadoop Master module (hdm) and a second serving as the Hadoop Worker (data) module (hdw). For Hadoop Compute with Isilon, the minimum requirement is eight Kylin servers (four 2-server Hadoop Compute modules).

The 2nd rack (if present) is always an Aggregation rack. HD-C and DIA-Kylin are limited to a maximum of 10 modules or 20 servers in rack 1 (system rack) and rack 2 (aggregation rack).

Racks 3 through 11 (if present) are Expansion racks. HD-C and DIA-Kylin are limited to a maximum of 11 modules or 22 servers in expansion racks.

Any rack containing even one 100-585-055-01 server is limited to thirty rack units (30U) for servers; switches remain in the standard locations. Racks with High Memory servers should not exceed 30U.

Figure 1 11-rack configuration

Figure 2 Aggregation switch locations in a multi-rack DCA



Rack density

Rack density refers to the number of servers possible in a rack. This number is dictated first by the physical space in a rack and next by how much power is delivered to the rack.

EMC uses racks with 40 rack units of usable space (a rack unit is 44.45mm or 1.75 inches) and 9600 watts of usable input power. Static hardware (switches, master nodes, and so on) draws a maximum of 1250W per rack, leaving 8350W for servers.

2U servers (servers that occupy two rack units) with 64GB of RAM use at most 520W. Servers with 256GB of RAM use at most 600W. 1U servers with 64GB of RAM use at most 430W. Therefore, a 40U rack with 8350W of usable input power can fit the following:

16x2U servers, each with 64GB RAM (standard memory, GPDB/PHD nodes)

22x1U servers each with 64GB RAM, also known as the Dense Rack (master nodes, DIA nodes, or HD+Isilon)

12x2U servers each with 256GB RAM (High memory nodes for DIA or GPDB)

18x1U servers each with 256GB RAM (High memory nodes for DIA or HDC)

Or any combination thereof.

The following diagram shows where servers can be placed in racks. 2U servers should be racked before 1U servers.


Figure 3 DCA rack density
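The power arithmetic behind the standard-memory figure above can be verified with simple shell arithmetic. The wattage numbers come from this section; the script itself is an illustrative sketch, and physical space and racking rules can lower the resulting count further.

```shell
# Back-of-the-envelope check of the per-rack power budget described above.
TOTAL_W=9600       # usable input power per rack
STATIC_W=1250      # switches, master nodes, and other static hardware
SERVER_W=$((TOTAL_W - STATIC_W))
echo "power available for servers: ${SERVER_W}W"                  # 8350W

# A standard-memory 2U server draws at most 520W, which yields the
# 16-server figure quoted above (16 x 520W = 8320W <= 8350W).
echo "max 2U/64GB servers (power-limited): $((SERVER_W / 520))"   # 16
```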


Mixed System rack components

Figure 4 DCA2-SYSRACK

Table 7 DCA2-SYSRACK - System rack components

DCA Component Quantity

Hadoop Servers (Dragon 12, 2U) 16 (8 minimum, 4 hdw + 4 hdm) or 12 High Memory Systems

Master Servers (Kylin, 1U) 2 (1 Primary + 1 Standby)

GPDB (Segment) Servers (Dragon 24, 2U) 16 or 12 High Memory Systems

Interconnect Switches (Arista 7050S-52) 2

Administration Switches (Arista 7048T-A) 1


Hadoop-only System Rack components (minimum config.)

Note: Supported in DCA version 2.0.1.0 and later.

Figure 5 Hadoop-only System rack

Table 8 Hadoop-only System Rack components

DCA Component Quantity

Hadoop Master Servers (hdm) 4 minimum

Hadoop Worker Servers (hdw) 4 minimum

Master Servers (Kylin, 1U) 2

Interconnect Switches (Arista 7050S-52) 2

Administration Switch (Arista 7048T-A) 1


HD-Compute System Rack components (minimum config.)

Note: Supported in DCA version 2.0.2.0 and later.

Figure 6 HDC-Compute System rack

Table 9 HDC-Compute System rack components

DCA Component Quantity

Hadoop Compute Servers (hdc) 8 minimum, 22 maximum


Aggregation rack components

Figure 7 DCA2-AGGREG

Table 10 DCA2-AGGREG - Aggregation rack components

DCA Component Quantity

Segment Servers 16 maximum (or 12 maximum with High Memory Modules)

Master Servers (Kylin, 1U) 0

Interconnect Switches (Arista 7050S-52) 4 (2 for the Interconnect network; 2 for the Aggregation network)

Administration Switch (Arista 7048T-A) 1


Expansion rack components

Figure 8 DCA2-EXPAND

Table 11 DCA2-EXPAND - Expansion rack components

Component Quantity

Segment Servers 16 maximum (or 12 maximum with High Mem Module)

Master Servers (Kylin, 1U) 0

Interconnect Switches (Arista 7050S-52) 2

Administration Switch (Arista 7048T-A) 1


Power supply reference

Figure 9 shows four external customer-supplied power input circuits connected to DCA Power Distribution Units (PDUs). The figure shows a full System rack.

Figure 9 DCA power cable configuration, full System rack

(Figure labels: customer-supplied power connects through power switches to the Upper Zone A, Lower Zone A, Upper Zone B, and Lower Zone B inputs.)


Figure 10 DCA power cable configuration, 1/2 System rack

(Figure labels: customer-supplied power connects through power switches to the Upper Zone A, Lower Zone A, Upper Zone B, and Lower Zone B inputs.)


Figure 11 DCA power cable configuration, 1/4 System rack

(Figure callouts: power switches; customer-supplied power inputs for the lower Zone A and Zone B circuits. Customer-supplied power is not needed for the upper Zone A and Zone B inputs in a 1/4 rack.)


Figure 12 Dense rack configuration


Figure 13 High memory system rack configuration


Network and cabling configurations

This section describes the network cabling configurations for the Interconnect and Administration switches.

Interconnect cabling reference

Each rack in the DCA contains two Interconnect switches, which provide the DCA Interconnect network. Topics in this section include:

“Lower Interconnect switch cabling reference”

“Upper Interconnect switch cabling reference”

“Dense rack switch cabling reference”

“Dense rack Interconnect 2 configuration (dual NIC)”

Figure 14 Interconnect switch port map

(Figure callouts: serial console; ports 1-8 to servers sdw1-sdw8; ports 9-16 to servers sdw9-sdw16; port 17 to the Primary Master server (mdw); port 18 to the Standby Master server (smdw); ports 41-44 to the customer network (single-rack DCA only); ports 45-46 to the lower Aggregation switch (aggr-sw-1); ports 47-48 to the upper Aggregation switch (aggr-sw-2); mLAG peer connections to the other Interconnect switch in the rack; connection to the Administration switch.)


Lower Interconnect switch cabling reference

The lower Interconnect switch connects servers to the first Interconnect. Lower Interconnect switches always have odd-numbered hostnames (for example, i-sw-1, i-sw-3, i-sw-5, and so on).

Figure 15 Lower Interconnect switch cabling reference


Upper Interconnect switch cabling reference

The upper Interconnect switch connects servers to the second Interconnect. Upper Interconnect switches always have even-numbered hostnames (for example, i-sw-2, i-sw-4, i-sw-6, and so on).

Figure 16 Upper Interconnect switch cabling reference
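The odd/even hostname convention described above maps rack numbers to switch hostnames directly. The following is a minimal sketch of that rule; the helper function is illustrative only and is not part of the DCA tooling:

```shell
# Illustrative helper (not a DCA utility): print the lower and upper
# Interconnect switch hostnames for rack N, using the odd/even rule
# (rack N holds i-sw-(2N-1), lower, and i-sw-2N, upper).
isw_names_for_rack() {
  n=$1
  echo "i-sw-$((2 * n - 1)) i-sw-$((2 * n))"
}

isw_names_for_rack 1   # i-sw-1 i-sw-2
isw_names_for_rack 3   # i-sw-5 i-sw-6
```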


Dense rack switch cabling reference

Figure 17 Dense rack Interconnect 1 configuration (dual NIC)


Dense rack Interconnect 2 configuration (dual NIC)

Figure 18 Dense rack Interconnect 2 configuration (dual NIC)


Table 12 Interconnect switch cable routing, 3-rack DCA (page 1 of 2)

SYS-RACK  AGGR-RACK  EXPAND-RACK

IC switch port  i-sw-1  i-sw-2  i-sw-3  i-sw-4  i-sw-5  i-sw-6

(Lower switches i-sw-1, i-sw-3, and i-sw-5 connect to each server's CNA port 0; upper switches i-sw-2, i-sw-4, and i-sw-6 connect to each server's CNA port 1.)

1 server 1 server 1 server 1 server 1 server 1 server 1

2 server 2 server 2 server 2 server 2 server 2 server 2

3 server 3 server 3 server 3 server 3 server 3 server 3

4 server 4 server 4 server 4 server 4 server 4 server 4

5 server 5 server 5 server 5 server 5 server 5 server 5

6 server 6 server 6 server 6 server 6 server 6 server 6

7 server 7 server 7 server 7 server 7 server 7 server 7

8 server 8 server 8 server 8 server 8 server 8 server 8

9 server 9 server 9 server 9 server 9 server 9 server 9

10 server 10 server 10 server 10 server 10 server 10 server 10

11 server 11 server 11 server 11 server 11 server 11 server 11

12 server 12 server 12 server 12 server 12 server 12 server 12

13 server 13 server 13 server 13 server 13 server 13 server 13

14 server 14 server 14 server 14 server 14 server 14 server 14

15 server 15 server 15 server 15 server 15 server 15 server 15

16 server 16 server 16 server 16 server 16 server 16 server 16

17 mdw mdw server 17 server 17 server 17 server 17

18 smdw smdw server 18 server 18 server 18 server 18

19 server 17 server 17 server 19 server 19 server 19 server 19

20 server 18 server 18 server 20 server 20 server 20 server 20

21 server 19 server 19

22 server 20 server 20

23 to 40

41 to 44 Customer network (in single-rack DCA)

45 aggr-sw-1 port 1 aggr-sw-1 port 3 aggr-sw-1 port 5 aggr-sw-1 port 7 aggr-sw-1 port 9 aggr-sw-1 port 11

46 aggr-sw-1 port 2 aggr-sw-1 port 4 aggr-sw-1 port 6 aggr-sw-1 port 8 aggr-sw-1 port 10 aggr-sw-1 port 12

47 aggr-sw-2 port 1 aggr-sw-2 port 3 aggr-sw-2 port 5 aggr-sw-2 port 7 aggr-sw-2 port 9 aggr-sw-2 port 11

48 aggr-sw-2 port 2 aggr-sw-2 port 4 aggr-sw-2 port 6 aggr-sw-2 port 8 aggr-sw-2 port 10 aggr-sw-2 port 12

49 mLAG peer link: i-sw-1 to i-sw-2 mLAG peer link: i-sw-3 to i-sw-4 mLAG peer link: i-sw-5 to i-sw-6


50 mLAG peer link: i-sw-1 to i-sw-2 mLAG peer link: i-sw-3 to i-sw-4 mLAG peer link: i-sw-5 to i-sw-6

51 mLAG peer link: i-sw-1 to i-sw-2 mLAG peer link: i-sw-3 to i-sw-4 mLAG peer link: i-sw-5 to i-sw-6

52 mLAG peer link: i-sw-1 to i-sw-2 mLAG peer link: i-sw-3 to i-sw-4 mLAG peer link: i-sw-5 to i-sw-6



Administration switch reference

The DCA contains one Administration switch per rack. The Administration switch routes management traffic, connects all of the servers and switches in a DCA, and provides service connectivity through a red service cable.

Topics in this section include:

“Rack 1 Administration switch cabling reference”

“Dense rack Administration switch port mapping reference”

“Administration switch cabling routing reference”

Figure 19 Administration switch port map, single rack DCA

(Figure callouts: ports 1-8 to servers 1 through 8; ports 9-16 to servers 9 through 16; port 17 to the Primary Master server (mdw); port 18 to the Standby Master server (smdw); port 43 to the lower Interconnect switch; port 44 to the upper Interconnect switch; ports 45 and 46 to the other Administration switches in a multi-rack DCA (a-sw-1 to a-sw-2); port 48: red service cable for cluster management (a-sw-1 only); serial console.)


Rack 1 Administration switch cabling reference

Figure 20 Rack 1 Administration switch cabling reference

Port 47: Customer Admin network access (optional)

Port 48: Cluster Management (red service cable)


Dense rack Administration switch port mapping reference

Figure 21 Dense rack Administration switch port mapping to servers 9 - 16


Administration switch cabling routing reference

Note: A dash (-) indicates cable connections that vary depending on the specific type(s) and quantity of servers and racks in the DCA.

Table 13 Administration switch cable routing

Admin Switch Port  a-sw-1 in SYS-RACK  a-sw-2 in AGGR-RACK  a-sw-3 in EXPAND-RACK  |  Admin Switch Port  a-sw-1 in SYS-RACK  a-sw-2 in AGGR-RACK  a-sw-3 in EXPAND-RACK

(Each row lists two port groups: ports 1-24 in the left columns and ports 25-48 in the right columns.)

1 server 1 server 1 server 1 25 a-sw-3, port 45 a-sw-3, port 46 n/a

2 server 2 server 2 server 2 26 a-sw-4, port 45 a-sw-4, port 46 n/a

3 server 3 server 3 server 3 27 a-sw-5, port 45 a-sw-5, port 46 n/a

4 server 4 server 4 server 4 28 a-sw-6, port 45 a-sw-6, port 46 n/a

5 server 5 server 5 server 5 29 a-sw-7, port 45 a-sw-7, port 46 n/a

6 server 6 server 6 server 6 30 a-sw-8, port 45 a-sw-8, port 46 n/a

7 server 7 server 7 server 7 31 a-sw-9, port 45 a-sw-9, port 46 n/a

8 server 8 server 8 server 8 32 a-sw-10, port 45 a-sw-10, port 46 n/a

9 server 9 server 9 server 9 33 a-sw-11, port 45 a-sw-11, port 46 n/a

10 server 10 server 10 server 10 34 a-sw-12, port 45 a-sw-12, port 46 n/a

11 server 11 server 11 server 11 35 n/a n/a n/a

12 server 12 server 12 server 12 36 n/a n/a n/a

13 server 13 server 13 server 13 37 n/a n/a n/a

14 server 14 server 14 server 14 38 n/a n/a n/a

15 server 15 server 15 server 15 39 n/a n/a n/a

16 server 16 server 16 server 16 40 n/a n/a n/a

17 mdw server 17 server 17 41 n/a n/a n/a

18 smdw server 18 server 18 42 n/a n/a n/a

19 server 17 server 19 server 19 43 Lower Interconnect switch <...> port

20 server 18 server 20 server 20 44 Upper Interconnect switch <...> port

21 server 19 — — 45 a-sw-2 peer a-sw-1 peer a-sw-1, port 25

22 server 20 — n/a 46 a-sw-2 peer a-sw-1 peer a-sw-2, port 25

23 — — n/a 47 Customer Admin network access (optional) Customer Admin network access (optional) n/a

24 — — n/a 48 Cluster management (red service cable) n/a n/a


Aggregation switch reference

Servers in a multi-rack configuration communicate through the two Aggregation switches located in Rack 2. The following figure and table show the proper connectivity.

Figure 22 Aggregation switch port map


Interconnect switch-to-Aggregation switch port mapping

Table 14 Interconnect switch-to-Aggregation switch port mapping (page 1 of 6)

Rack 1 Expansion

Upper Interconnect switch (i-sw-2)

Ports Ports

Rack 2 AGGR Rack

47 <.......> 3 Upper Aggregation switch (aggr-sw-2)

48 <.......> 4

Ports Ports

45 <.......> 3 Lower Aggregation switch (aggr-sw-1)

46 <.......> 4

Lower Interconnect switch (i-sw-1)

Ports Ports

47 <.......> 1 Upper Aggregation switch (aggr-sw-2)

48 <.......> 2

Ports Ports

45 <.......> 1 Lower Aggregation switch (aggr-sw-1)

46 <.......> 2

Rack 2 AGGR Rack

Upper Interconnect switch (i-sw-4)

Ports Ports

Rack 2 AGGR Rack

47 <.......> 7 Upper Aggregation switch (aggr-sw-2)

48 <.......> 8

Ports Ports

45 <.......> 7 Lower Aggregation switch (aggr-sw-1)

46 <.......> 8

Lower Interconnect switch (i-sw-3)

Ports Ports

47 <.......> 5 Upper Aggregation switch (aggr-sw-2)

48 <.......> 6

Ports Ports

45 <.......> 5 Lower Aggregation switch (aggr-sw-1)

46 <.......> 6


Rack 3 Expansion

Upper Interconnect switch (i-sw-6)

Ports Ports

Rack 2 AGGR Rack

47 <.......> 11 Upper Aggregation switch (aggr-sw-2)

48 <.......> 12

Ports Ports

45 <.......> 11 Lower Aggregation switch (aggr-sw-1)

46 <.......> 12

Lower Interconnect switch (i-sw-5)

Ports Ports

47 <.......> 9 Upper Aggregation switch (aggr-sw-2)

48 <.......> 10

Ports Ports

45 <.......> 9 Lower Aggregation switch (aggr-sw-1)

46 <.......> 10


Rack 4 Expansion

Upper Interconnect switch (i-sw-8)

Ports Ports

Rack 2 AGGR Rack

47 <.......> 15 Upper Aggregation switch (aggr-sw-2)

48 <.......> 16

Ports Ports

45 <.......> 15 Lower Aggregation switch (aggr-sw-1)

46 <.......> 16

Lower Interconnect switch (i-sw-7)

Ports Ports

47 <.......> 13 Upper Aggregation switch (aggr-sw-2)

48 <.......> 14

Ports Ports

45 <.......> 13 Lower Aggregation switch (aggr-sw-1)

46 <.......> 14

Rack 5 Expansion

Upper Interconnect switch (i-sw-10)

Ports Ports

Rack 2 AGGR Rack

47 <.......> 19 Upper Aggregation switch (aggr-sw-2)

48 <.......> 20

Ports Ports

45 <.......> 19 Lower Aggregation switch (aggr-sw-1)

46 <.......> 20

Lower Interconnect switch (i-sw-9)

Ports Ports

47 <.......> 17 Upper Aggregation switch (aggr-sw-2)

48 <.......> 18

Ports Ports

45 <.......> 17 Lower Aggregation switch (aggr-sw-1)

46 <.......> 18


Rack 6 Expansion

Upper Interconnect switch (i-sw-12)

Ports Ports

Rack 2 AGGR Rack

47 <.......> 23 Upper Aggregation switch (aggr-sw-2)

48 <.......> 24

Ports Ports

45 <.......> 23 Lower Aggregation switch (aggr-sw-1)

46 <.......> 24

Lower Interconnect switch (i-sw-11)

Ports Ports

47 <.......> 21 Upper Aggregation switch (aggr-sw-2)

48 <.......> 22

Ports Ports

45 <.......> 21 Lower Aggregation switch (aggr-sw-1)

46 <.......> 22

Rack 7 Expansion

Upper Interconnect switch (i-sw-14)

Ports Ports

Rack 2 AGGR Rack

47 <.......> 27 Upper Aggregation switch (aggr-sw-2)

48 <.......> 28

Ports Ports

45 <.......> 27 Lower Aggregation switch (aggr-sw-1)

46 <.......> 28

Lower Interconnect switch (i-sw-13)

Ports Ports

47 <.......> 25 Upper Aggregation switch (aggr-sw-2)

48 <.......> 26

Ports Ports

45 <.......> 25 Lower Aggregation switch (aggr-sw-1)

46 <.......> 26


Rack 8 Expansion

Upper Interconnect switch (i-sw-16)

Ports Ports

Rack 2 AGGR Rack

47 <.......> 31 Upper Aggregation switch (aggr-sw-2)

48 <.......> 32

Ports Ports

45 <.......> 31 Lower Aggregation switch (aggr-sw-1)

46 <.......> 32

Lower Interconnect switch (i-sw-15)

Ports Ports

47 <.......> 29 Upper Aggregation switch (aggr-sw-2)

48 <.......> 30

Ports Ports

45 <.......> 29 Lower Aggregation switch (aggr-sw-1)

46 <.......> 30

Rack 9 Expansion

Upper Interconnect switch (i-sw-18)

Ports Ports

Rack 2 AGGR Rack

47 <.......> 35 Upper Aggregation switch (aggr-sw-2)

48 <.......> 36

Ports Ports

45 <.......> 35 Lower Aggregation switch (aggr-sw-1)

46 <.......> 36

Lower Interconnect switch (i-sw-17)

Ports Ports

47 <.......> 33 Upper Aggregation switch (aggr-sw-2)

48 <.......> 34

Ports Ports

45 <.......> 33 Lower Aggregation switch (aggr-sw-1)

46 <.......> 34


Rack 10 Expansion

Upper Interconnect switch (i-sw-20)

Ports Ports

Rack 2 AGGR Rack

47 <.......> 39 Upper Aggregation switch (aggr-sw-2)

48 <.......> 40

Ports Ports

45 <.......> 39 Lower Aggregation switch (aggr-sw-1)

46 <.......> 40

Lower Interconnect switch (i-sw-19)

Ports Ports

47 <.......> 37 Upper Aggregation switch (aggr-sw-2)

48 <.......> 38

Ports Ports

45 <.......> 37 Lower Aggregation switch (aggr-sw-1)

46 <.......> 38

Rack 11 Expansion

Upper Interconnect switch (i-sw-22)

Ports Ports

Rack 2 AGGR Rack

47 <.......> 43 Upper Aggregation switch (aggr-sw-2)

48 <.......> 44

Ports Ports

45 <.......> 43 Lower Aggregation switch (aggr-sw-1)

46 <.......> 44

Lower Interconnect switch (i-sw-21)

Ports Ports

47 <.......> 41 Upper Aggregation switch (aggr-sw-2)

48 <.......> 42

Ports Ports

45 <.......> 41 Lower Aggregation switch (aggr-sw-1)

46 <.......> 42

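Table 14 follows a regular pattern: for rack N, the lower Interconnect switch lands on Aggregation switch ports 4N-3 and 4N-2 (ports 45/46 go to aggr-sw-1 and ports 47/48 to aggr-sw-2, at the same Aggregation port numbers), and the upper Interconnect switch lands on ports 4N-1 and 4N. A minimal sketch of that arithmetic; the helper name is illustrative only:

```shell
# Illustrative helper: print the four Aggregation switch ports used by
# rack N, as "lower-pair upper-pair" (lower i-sw first, then upper i-sw).
aggr_ports_for_rack() {
  n=$1
  echo "$((4 * n - 3)) $((4 * n - 2)) $((4 * n - 1)) $((4 * n))"
}

aggr_ports_for_rack 1    # 1 2 3 4
aggr_ports_for_rack 11   # 41 42 43 44
```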

Network hostname and IP configuration

Table 15 DCA network configuration (page 1 of 3)

Rack  Component  hostname  BMC IP (host-sp)  NIC 1 IP (host-cm)  Interconnect IP

Reserved for DHCP n/a n/a 172.28.6.170 through172.28.6.179

n/a

Rack 1 Administration Switch a-sw-1 172.28.0.190

Rack 2 Administration Switch a-sw-2 172.28.0.191

Rack 3 Administration Switch a-sw-3 172.28.0.192

Rack 4 Administration Switch a-sw-4 172.28.0.193

Rack 5 Administration Switch a-sw-5 172.28.0.194

Rack 6 Administration Switch a-sw-6 172.28.0.195

Rack 7 Administration Switch a-sw-7 172.28.0.196

Rack 8 Administration Switch a-sw-8 172.28.0.197

Rack 9 Administration Switch a-sw-9 172.28.0.198

Rack 10 Administration Switch a-sw-10 172.28.0.199

Rack 11 Administration Switch a-sw-11 172.28.1.190

Rack 1 Interconnect Switch, lower i-sw-1 172.28.0.170

Interconnect Switch, upper i-sw-2 172.28.0.180

Rack 2 Interconnect Switch, lower i-sw-3 172.28.0.171

Interconnect Switch, upper i-sw-4 172.28.0.181

Rack 3 Interconnect Switch, lower i-sw-5 172.28.0.172

Interconnect Switch, upper i-sw-6 172.28.0.182

Rack 4 Interconnect Switch, lower i-sw-7 172.28.0.173

Interconnect Switch, upper i-sw-8 172.28.0.183

Rack 5 Interconnect Switch, lower i-sw-9 172.28.0.174

Interconnect Switch, upper i-sw-10 172.28.0.184

Rack 6 Interconnect Switch, lower i-sw-11 172.28.0.175

Interconnect Switch, upper i-sw-12 172.28.0.185

Rack 7 Interconnect Switch, lower i-sw-13 172.28.0.176

Interconnect Switch, upper i-sw-14 172.28.0.186

Rack 8 Interconnect Switch, lower i-sw-15 172.28.0.177

Interconnect Switch, upper i-sw-16 172.28.0.187


Rack 9 Interconnect Switch, lower i-sw-17 172.28.0.178

Interconnect Switch, upper i-sw-18 172.28.0.188

Rack 10 Interconnect Switch, lower i-sw-19 172.28.0.179

Interconnect Switch, upper i-sw-20 172.28.0.189

Rack 11 Interconnect Switch, lower i-sw-21 172.28.1.170

Interconnect Switch, upper i-sw-22 172.28.1.180

Rack 2 Aggregation Switch, lower aggr-sw-1 172.28.0.248

Aggregation Switch, upper aggr-sw-2 172.28.0.249

Rack 1 Primary Master Server, lower server mdw 172.28.0.250 172.28.4.250 172.28.8.250

Standby Master Server, upper server smdw 172.28.0.251 172.28.4.251 172.28.8.251


GPDB Segment Server 1-160 sdw# 172.28.0.# 172.28.4.# 172.28.8.#

GPDB Segment Server 161-176 sdw# 172.28.1.1 -172.28.1.16

172.28.5.1 -172.28.5.16

172.28.9.1 -172.28.9.16

DIA Server 1-16 etl# 172.28.0.20# 172.28.4.20# 172.28.8.20#

DIA Server 17-32 etl# 172.28.1.201-172.28.1.216

172.28.5.201-172.28.5.216

172.28.9.201-172.28.9.216

DIA Server 33-48 etl# 172.28.2.231-172.28.2.246

172.28.6.231-172.28.6.246

172.28.10.231-172.28.10.246

DIA Server 49-64 etl# 172.28.3.231-172.28.3.246

172.28.7.231-172.28.7.246

172.28.11.231-172.28.11.246

Hadoop Master Node 1-8:

hdm1 172.28.1.250 172.28.5.250 172.28.9.250
hdm2 172.28.1.251 172.28.5.251 172.28.9.251
hdm3 172.28.1.252 172.28.5.252 172.28.9.252
hdm4 172.28.1.253 172.28.5.253 172.28.9.253
hdm5 172.28.2.250 172.28.6.250 172.28.10.250
hdm6 172.28.2.251 172.28.6.251 172.28.10.251
hdm7 172.28.3.250 172.28.7.250 172.28.11.250
hdm8 172.28.3.251 172.28.7.251 172.28.11.251

Hadoop Worker Node 1-160 hdw1-160 172.28.2.# 172.28.6.# 172.28.10.#

Hadoop Worker Node 161-320 (1) hdw161-320 172.28.3.# 172.28.7.# 172.28.11.# (# = node number minus 160; for example, hdw162-sp = 172.28.3.2, hdw162-cm = 172.28.7.2, hdw162-1 = 172.28.11.2)

Hadoop Compute Node 1-60 hdc1-60 172.28.2.170 - 172.28.2.229

172.28.6.170 - 172.28.6.229

172.28.10.170 -172.28.10.229

Hadoop Compute Node 61-120 hdc61-120 172.28.3.170 - 172.28.3.229

172.28.7.170 - 172.28.7.229

172.28.11.170 -172.28.11.229

IP addresses reserved for Isilon (2)

1. Hadoop Worker nodes are numbered 1-320. To accommodate the required number of hosts, the third octet of the IP address is incremented by 1 and the fourth octet restarts at 1 when the node number reaches 161. For example, the host hdw160-sp uses a third octet of 2 and a fourth octet of 160; the host hdw161-sp uses a third octet of 3 and a fourth octet of 1. To see a complete list of IP addresses and hostnames, view the /etc/hosts file.

2. 172.28.8.217 through 172.28.8.246 and 172.28.9.217 through 172.28.9.246
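The worker-node numbering rule in footnote 1 reduces to simple arithmetic. The following sketch computes the -sp (BMC) address from the node number, based on the ranges in Table 15; the helper name is illustrative, and the -cm and Interconnect addresses follow the same rule on the 172.28.6/172.28.7 and 172.28.10/172.28.11 subnets:

```shell
# Illustrative helper: BMC (-sp) IP for Hadoop Worker node N (1-320).
# Nodes 1-160 use 172.28.2.N; nodes 161-320 use 172.28.3.(N-160).
hdw_sp_ip() {
  n=$1
  if [ "$n" -le 160 ]; then
    echo "172.28.2.$n"
  else
    echo "172.28.3.$((n - 160))"
  fi
}

hdw_sp_ip 160   # 172.28.2.160
hdw_sp_ip 162   # 172.28.3.2
```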


Multiple-rack cabling reference

Table 16 Cabling kit contents and part numbers

Kit Name  Component Part Number  Quantity  Description

DCA2-CBL10

100-585-048 16 ARISTA 10GBASE-SRL SFP+ OPTIC MODULE

038-003-733 8 10m LC to LC Optical 50 Micron MM Cable Assemblies

038-003-476 2 25’ CAT6 Ethernet Cable

DCA2-CBL30

100-585-048 16 ARISTA 10GBASE-SRL SFP+ OPTIC MODULE

038-003-740 8 30m LC to LC Optical 50 Micron MM Cable Assemblies

038-003-475 2 100’ CAT6 Ethernet Cable

Table 17 Cable kits for a 7-to-11-rack DCA

Connect from: Rack 2 - AGGREG

To:  Use cable kit:

Rack 1 - SYSRACK DCA2-CBL10

Rack 2 - AGGREG DCA2-CBL10

Rack 3 - 1st EXPAND DCA2-CBL10

Rack 4 - 2nd EXPAND DCA2-CBL10

Rack 5 - 3rd EXPAND DCA2-CBL10

Rack 6 - 4th EXPAND DCA2-CBL10

Rack 7 - 5th EXPAND DCA2-CBL30

Rack 8 - 6th EXPAND DCA2-CBL30

Rack 9 - 7th EXPAND DCA2-CBL30

Rack 10 - 8th EXPAND DCA2-CBL30

Rack 11 - 9th EXPAND DCA2-CBL30


Configuration files

Configuration files are text files that contain the hostnames of the servers that occupy quarter, half, or full rack configurations. The file used depends on the desired function. Refer to Table 18 for a description of each configuration and host file. The hostfiles are located in /home/gpadmin/gpconfigs.

Location of old core files

(Applies to DCA version 2.0.1.0 and later.) Old core files are moved automatically to a separate directory to prevent them from being sent to Support again following a healthmon restart. For example, on sdw1, old core files are moved to /var/crash/user:

[root@sdw1 user-processed]# ls -l /var/crash/user

Table 18 Hostfiles created by the DCA Setup utility

File Description

gpexpand_map Expansion MAP file created during the dca_setup option Expand the DCA. Its purpose is to reallocate GPDB primary and mirror instances on the new hardware.

gpinitsystem_map MAP file used during installation of GPDB blocks to assign primary and mirror segments to each server.

hostfile Contains one hostname per server for ALL servers in the system. Includes GPDB, DIA and HD (if present).

hostfile_segments Contains the hostnames of the segment servers of all GPDB blocks.

hostfile_gpdb Contains the hostnames for GPDB servers.

hostfile_dia Contains the hostnames of the DIA servers.

hostfile_hadoop Contains the hostnames of the Hadoop servers.

hostfile_hdm Contains the hostnames of all Hadoop Master servers.

hostfile_hdw Contains the hostnames of all Hadoop Worker servers.

hostfile_hdc Contains the hostnames of all Hadoop Compute servers.
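Each hostfile contains one hostname per line. For illustration only, a hypothetical hostfile_segments for a quarter-rack GPDB block might look like the following (the actual entries depend on your configuration; view the files in /home/gpadmin/gpconfigs for the real contents):

```
sdw1
sdw2
sdw3
sdw4
```

These files are commonly passed to Greenplum utilities that accept a host list, for example: gpssh -f /home/gpadmin/gpconfigs/hostfile_segments uptime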


Default passwords

The following table lists the default passwords for all of the components in a DCA.

Common administrative tasks using dca_setup

Configuring access to an external SMTP Forwarder

From your DCA servers, you can configure sendmail to listen on the loopback interface and send out emails via your SMTP server.

For example, suppose you have written a script that runs every 30 minutes in cron and logs the list of active queries and their current runtimes. The script runs on the active master node (mdw or smdw). If you want the log sent to your Database Administrator (DBA) twice per day, you must make the master nodes aware of the SMTP server, and then run a command, typically from cron, twice per day to send the log.

Note: Ensure that the server being configured has network access to an external SMTP server.

1. On the DCA server, sign in as root.

2. Verify that sendmail-cf is installed:

root@hostname# rpm -qa | grep sendmail-cf

3. Open the sendmail configuration file:

root@hostname# cd /etc/mail
root@hostname# vi sendmail.mc

4. In the configuration file, find the following line:

dnl define(`SMART_HOST', `smtp.your.provider')dnl

5. Uncomment the line by removing the leading dnl, and change it to point to your SMTP server:

define(`SMART_HOST', `smtp.foobar.com')dnl

6. Add the following line to map the current hostname to its external interface:

Table 19 Default user names and passwords

Component User Password

Master Servers BMC root user For a new unconfigured server: password

For an existing configured server: sephiroth

root changeme

gpadmin changeme

Interconnect, Administration, and Aggregation switches

admin changeme


define(`confDOMAIN_NAME', `somename.foobar.com.')dnl

7. Rebuild sendmail.cf:

root@hostname# make

8. Run dca_setup to add an entry to /etc/hosts so sendmail can map to the hostname in confDOMAIN_NAME:

a. Start dca_setup:

# dca_setup

b. Select Option 2 - Modify DCA Settings.

c. Select Option 14 - Modify Hostnames.

d. Select Option 4 - Add a Non DCA Hostname.

e. Enter the name and IP address of the SMTP server when prompted:

<The Non DCA Hostname> <The Non DCA IP>

f. Press Enter.

9. Turn on sendmail and start the service:

root@hostname# chkconfig sendmail on
root@hostname# service sendmail start
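With sendmail configured, the twice-daily report described earlier can be scheduled from cron. The following is a hypothetical crontab entry on the active master; the schedule, log path, and recipient address are examples only, not documented DCA paths:

```
# Mail the query log to the DBA at 08:00 and 20:00 every day.
0 8,20 * * * /bin/mail -s "Active query report" dba@example.com < /home/gpadmin/query_report.log
```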

Set FIPS mode with dca_setup

Federal Information Processing Standards (FIPS) mode determines whether a DCA meets federal government security standards. The default setting is non-FIPS mode, in which the feature is disabled.

Note: Use the ICM client to manually start Hadoop services after enabling FIPS mode. See the latest version of the Pivotal HD Manager Installation and Administration Guide for additional information.

1. Launch the dca_setup utility as the user root:

# dca_setup

2. Select Option 2: Modify DCA Settings.

3. Select Option 15: Configure Security Settings.

4. Follow the on screen prompt to enable FIPS mode.


Creating a Customer Maintained Repository for RPMs

DCA software release 2.1.0.0 adds a feature that allows customers to create a repository for RPMs. This removes the previous requirement that non-DCA applications be removed and then reinstalled after a DCA software upgrade.

IMPORTANT

Customer supplied software must be in RPM form. Additionally, customers are responsible for ensuring that all relevant packages and dependencies not already present in the DCA are properly included in this repository. Neither EMC nor Pivotal will maintain or update the contents of this repository.

1. Identify the RPMs and dependencies associated with the software.

2. Enable httpd (the Apache web server) on each Master server to access the RPMs and their dependencies:

Note: The Apache web server (httpd) must be enabled on each host to access the RPMs and their dependencies from the Master server. Alternatively, if httpd is not enabled, the RPMs must reside on each host for the Customer Repository feature to work.

# service httpd start

3. Log in as user root and create a folder named /data/customer:

# mkdir -p /data/customer

4. Copy all relevant RPMs to the /data/customer folder:

# cp *.rpm /data/customer

5. Create the repository:

# createrepo -p -d /data/customer
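After the repository is created, client hosts can be pointed at it with a yum repository definition. The following sketch is illustrative only: the baseurl host (mdw) and URL path are assumptions that depend on how httpd is configured to expose /data/customer, and the file is written to /tmp here so the sketch is self-contained (the real location is /etc/yum.repos.d/).

```shell
# Hypothetical yum repo definition for the customer repository.
# The baseurl assumes httpd on the Master server (mdw) serves the
# /data/customer directory at /customer -- adjust to your setup.
cat > /tmp/dca-customer.repo <<'EOF'
[dca-customer]
name=Customer maintained RPM repository
baseurl=http://mdw/customer
enabled=1
gpgcheck=0
EOF

# After copying the file into /etc/yum.repos.d/ on each host, packages
# install as usual:
#   yum install <package-name>
```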


CHAPTER 2
Networking

This chapter provides information about networking on a DCA. Topics include:

Architecture
Interconnect Fault Tolerance
External Connectivity
Diagnostics


Architecture

This section describes the DCA network architecture. The DCA has two physical networks:

Administration network for traffic such as health monitoring and lights out management.

Interconnect network for application traffic such as Greenplum Database (GPDB), data loading using Data Integration Accelerator (DIA), and Hadoop.

Figure 23 GPDB network configuration


Figure 24 PHD network configuration

Administration Network

The administration network consists of a Gigabit Ethernet switch connected to each host and interconnect switch with Cat 6 cabling. This network carries traffic on a separate physical connection for management tasks such as health monitoring, loading of interconnect switch configurations, and lights-out management through the Baseboard Management Controller (BMC).

The following sections illustrate administration network topologies.


Single-Rack Administration Network

Figure 25 Health Monitoring Traffic from sdw1 to mdw over the Administration Network

Two-Rack Administration Network

Figure 26 Health Monitoring Traffic from sdw17 to mdw over the Administration Network


Three Rack+ Administration Network

Figure 27 Health monitoring Traffic from sdw33 to mdw over the Administration Network

Interconnect Network

The interconnect is a redundant 10 Gigabit Ethernet network used for high-speed application traffic such as Greenplum Database, data loading with DIA modules, or Hadoop. Redundancy is provided by the switches' MLAG (Multi-Chassis Link Aggregation) feature; failures of networking components are handled at the network layer.

The dual port converged network adapter (CNA) is configured as a bonded interface through the operating system. This device, identified as bond0, has a single IP address. The network bond operates in the 802.3ad Dynamic link aggregation mode. Packets are transmitted on both ports (0 and 1) of the CNA to the corresponding port on the receiving host. For example, a packet sent out port 0 will be routed to port 0 on the receiving host. The operating system bonding driver handles distribution and assembly of packets.

When a failure occurs on a port, the packets are routed through the available path, and the switch uses its MLAG peer link to pass the packets to the switch connected to the receiving port. For more information on failure scenarios, see “Interconnect Fault Tolerance” below.
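The bond state can be inspected from the operating system through the standard Linux bonding driver, which publishes per-bond status under /proc/net/bonding. The sketch below reproduces a typical excerpt in a scratch file so the filtering step is self-contained; on a live DCA host, run the grep against /proc/net/bonding/bond0 instead (the exact field text varies by kernel version, and the slave interface names here are placeholders).

```shell
# Illustrative excerpt of /proc/net/bonding/bond0 for an 802.3ad bond.
cat > /tmp/bond0.txt <<'EOF'
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: eth4
MII Status: up
Slave Interface: eth5
MII Status: up
EOF

# Confirm the aggregation mode and that both CNA ports report link up:
grep -E 'Bonding Mode|Slave Interface|MII Status' /tmp/bond0.txt
```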


Figure 28 Packets are sent over device bond0 through the Interconnect

The following topologies are used based on the rack configuration:

Interconnect Cabling in a Single Rack Configuration

Figure 29 Traffic between mdw and sdw1 through Interconnect switches


Interconnect Cabling in a Multiple Rack Configuration

Figure 30 Traffic between sdw1 and sdw17 over Interconnect using the aggregation switches

Interconnect Fault Tolerance

The Interconnect in the DCA 2.x utilizes MLAG technology to provide network-layer fault tolerance. This is in contrast to the DCA 1.x, which provided fault tolerance at the application level through segment mirroring in Greenplum Database.

Fault tolerance provides protection against a failed network interface card (NIC), cable, switch port or switch. The following scenarios are handled by the fault tolerance feature of the switches.

Host Port, Interconnect Cable, or Switch Port Failure

If a single port on the CNA of a host, a switch port, or an Interconnect cable fails, the bonding driver in the host operating system and the MLAG feature of the Interconnect switch will route packets to the correct destination.

For example, if port 0 of the CNA is unavailable, the OS bonding driver will route that packet through port 1 instead. Interconnect switch i-sw-2 will send the packet through one of the four MLAG peer links between i-sw-1 and i-sw-2. Interconnect switch i-sw-1 will receive the packet and route it to port 0 of the receiving host.


Figure 31 Traffic is routed through MLAG after Port 0 on sdw1 fails

Switch Failure

In the event of an Interconnect switch failure, the OS bonding driver and aggregation switch (multi-rack) MLAG feature will route packets to the intended destination.

Failed Interconnect Switch—Single Rack

In a single rack configuration, a failure of Interconnect switch i-sw-2 will cause all traffic to be routed through Interconnect switch i-sw-1 from port 0 of the sending host to port 0 of the receiving host. The OS bonding driver handles detection of the failure and re-establishing the connection once the switch has been restored.

Figure 32 Interconnect switch i-sw-2 fails causing traffic to route through i-sw-1


Failed Interconnect Switch—Multiple Rack

In a multiple rack configuration, failure of an Interconnect switch will cause all traffic to route through the other switch in that rack. Once traffic needs to leave the rack, it will route through the aggregation switch down to the correct Interconnect switch.

Figure 33 Interconnect switch i-sw-3 fails causing traffic to route through i-sw-4 then aggr-sw-1


Failed Aggregation Switch

During a failure of an aggregation switch, the Interconnect switches will detect a link down and route traffic through the other aggregation switch.

Figure 34 Aggregation switch aggr-sw-1 fails and traffic is routed through aggr-sw-2

External Connectivity

Connectivity to the DCA can be established in multiple ways based on the intended use. The standard connection is established over the 1 Gb Ethernet ports on the Primary and Standby Master servers. A high-speed 10 Gb connection is available by connecting through the Interconnect or Aggregation switches.

Connect through the Master Servers

The standard connection to the DCA is made through the 1 Gb ports on the Primary and Standby Master servers. This connection is used for Greenplum Database access, call-home, and administrator access. Client tools such as psql communicate with Greenplum Database over this connection.

By default, the external connections to the master servers are bonded. The bond mode is set to active-backup. During normal operation, one of the two slave interfaces is used; if the active interface becomes unavailable, the other takes over.

To use the fault-tolerant bonded connection to the master servers, two physical connections to each master server, on ports GB1 (eth1 in the OS) and GB2 (eth2 in the OS), are required. Only one IP address per server is required. If two connections per master server are not available, one can be used in a non-fault-tolerant configuration.
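The active slave of an active-backup bond can be read from the same Linux bonding-driver interface described for the interconnect. A typical excerpt is reproduced in a scratch file below so the check is self-contained; on a master server, run the grep against the external bond's file under /proc/net/bonding instead (bond1 is the external bond device per the VIP discussion later in this section; verify the device name on your system).

```shell
# Illustrative excerpt of the bonding state for an active-backup bond;
# the interface name is a placeholder.
cat > /tmp/bond1.txt <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth1
MII Status: up
EOF

# Identify the mode and which slave currently carries traffic:
grep -E 'Bonding Mode|Currently Active Slave' /tmp/bond1.txt
```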


Figure 35 Master server interfaces eth1 and eth2 are bonded for fault tolerance

Set External IP Addresses of Master Servers

The network information for the Primary and Standby Master servers is configured through the DCA Setup utility. Before beginning, collect the IP address, netmask, gateway, and (optional) DNS server information.

This example assumes you have run the DCA Setup utility once to configure basic cluster information. If this is the first time running DCA Setup, you will be prompted for external network information.

1. Log on to the Primary Master server as the user root.

2. Launch the DCA Setup utility:

# dca_setup

3. Select option 2) Modify DCA Settings.

4. Select option 11) Modify the Master Servers’ External Network Settings.

5. Select sub-options 1 through 4 to modify the IP addresses, gateway, and netmask. Once complete, enter A to apply changes.

1) External Primary Master server IP
2) External Standby Master server IP
3) External gateway
4) External netmask

Set an External DNS server

1. Log on to the Primary Master server as the user root.

2. Launch the DCA Setup utility:

# dca_setup

3. Select option 2) Modify DCA Settings.

4. Select option 12) Modify the DNS Settings.


5. Select sub-options 1 through 3 to modify the DNS settings. Once complete, enter A to apply changes.

1) Domain Name Servers (DNS)
2) Resolve Search Path
3) Resolve Domain

Set a Virtual IP Address for Master Server Failover

A virtual IP address (VIP) is used to move a single IP address between the Primary and Standby Master servers in the event of a failover. This feature is used to maintain a single IP address for client applications.

The VIP feature will use device bond1 to create virtual device bond1:0 on the Primary Master server during normal operation. When a failover occurs, device bond1:0 will be created on the Standby Master server. This feature will function if you are using one or two connections per master.

Figure 36 Client applications access DCA over VIP 10.10.10.3

To configure the virtual IP address:

1. Log on to the Primary Master server as the user root.

2. Launch the DCA Setup utility:

# dca_setup

3. Select option 2) Modify DCA Settings.

4. Select option 11) Modify the Master Servers’ External Network Settings.

5. Select sub option 5) Failover VIP. Enter the virtual IP address.

5) Failover VIP

6. Enter A to apply changes.

Set One External Connection per Master Server

By default, the bonded external interface on the Primary and Standby Master servers uses two connections. The interface will function if only one connection is available; however, call-home error messages will be generated for the missing connection.

To configure the DCA to use one external connection per master server:

1. Log on to the Primary Master server as the user root.


2. Launch the DCA Setup utility:

# dca_setup

3. Select option 2) Modify DCA Settings.

4. Select option 11) Modify the Master Servers’ External Network Settings.

5. Select sub option 6) Use two external connections per master.

6. Select n to specify one connection:

Do you wish to use both connections for external connectivity on the masters? (Yy|Nn)
Current Setting = Yes. Press Enter to keep this setting.
>> n

7. Enter A to apply changes.

Connect through the Interconnect or Aggregation Switch

Connecting to the DCA through the Interconnect or Aggregation switches should be used when high-speed 10 Gb communication with internal DCA hosts is required. Data loading is a common scenario for connecting through an Interconnect or Aggregation switch.

Connectivity to internal DCA hosts is established by creating a VLAN (virtual local area network) overlay. A VLAN overlay separates internal and external traffic by creating a separate virtual network over the same physical network with a different IP address range. In order to create a VLAN overlay on the DCA, the switch configurations must be modified to allow traffic over the new VLAN ID and a VNIC (virtual network interface card) must be created on each host to receive traffic over the different IP address range.

The DCA Setup utility provides an automated method to configure switches to allow VLAN traffic and create VNICs on hosts.

Figure 37 Traffic from external hosts travels over VLAN to internal DCA hosts


There are several components to connecting external hosts to DCA internal hosts:

“Connect to switches”

“Setting up a VLAN Overlay”

“Set Custom Hostnames (optional)”

Connect to switches

When connecting to a single rack configuration, ports on the Interconnect switches should be used. When connecting to a multiple rack configuration, ports on the aggregation switch should be used for an even network load.

Ports 41-44 are available on the Interconnect switches for external connectivity. Aggregation switches reserve ports 45-48 for external connectivity. When using the DCA Setup utility to add a VLAN for external connectivity, it will be added to these ports.

Figure 38 Ports 41-44 are reserved for external connectivity on an Interconnect switch

Figure 39 Ports 45-48 are reserved for external connectivity on an Aggregation switch

Setting up a VLAN Overlay

By default, internal traffic on the DCA travels over VLANs with IDs 4 and 4094. In order to allow external traffic, a new VLAN overlay should be created to allow traffic over switches and hosts from the external environment.


Before beginning this procedure, collect the information listed in Table 20.

The switch configurations can be modified using the DCA Setup utility:

1. Create a VLAN map file to assign VLAN IP addresses to internal hosts. This file should contain the hostname of each internal server and the IP address it should be assigned. For example, to assign a one-module DCA IP addresses in the 10.5.6.x range:

mdw = 10.5.6.143
smdw = 10.5.6.164
sdw1 = 10.5.6.165
sdw2 = 10.5.6.169
sdw3 = 10.5.6.155
sdw4 = 10.5.6.145

The recommended location for this file is /home/gpadmin/vlan_map_file. An example file is provided in: /opt/dca/var/dca_setup/customer_vlan_map_example.
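The map file is plain text and can be written with a heredoc. The hostnames and addresses below are the example values from step 1, and /tmp is used only so the sketch is self-contained; the recommended location is /home/gpadmin/vlan_map_file.

```shell
# Write the VLAN map file consumed by dca_setup (example values).
cat > /tmp/vlan_map_file <<'EOF'
mdw = 10.5.6.143
smdw = 10.5.6.164
sdw1 = 10.5.6.165
sdw2 = 10.5.6.169
sdw3 = 10.5.6.155
sdw4 = 10.5.6.145
EOF
```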

2. From the Primary Master server, launch the DCA Setup utility as the user root:

# dca_setup

3. Select option 2) Modify DCA Settings.

4. Select option 13) Switch and VLAN Settings.

5. Select sub option 4) Add/Enable Customer VLAN.

6. Enter the location of the VLAN map file created earlier:

>> /home/gpadmin/vlan_map_file

7. Enter the netmask of the new VLAN:

Please enter the vlan netmask address.
>> 255.255.255.0

8. Enter the gateway of the new VLAN:

Please enter the vlan gateway address
>> 192.168.0.100

Table 20 Requirements for creating a VLAN overlay

VLAN ID: Customer-provided VLAN ID. This is the ID of the external network that will be connecting to the DCA. Packets in a VLAN are tagged with an ID and routed over the intended network using this ID.

IP Addresses for Internal Hosts: Internal hosts are assigned an additional IP address so hosts can be accessed over the VLAN by external hosts in the same subnet. One IP address per host is required.

Subnet Mask: Subnet that the external hosts belong to.

Gateway: External gateway server that is within the VLAN subnet.

External Connect LAG: Cabling from the switch to external hosts/switches can be configured in a port-channel LAG or as separate links. This configuration item depends on the configuration of the external environment.


9. Enter the VLAN ID:

Please choose a customer vlan ID. Current Setting/Default = 222. Press Enter to keep this setting.
>> 100

10. Specify if you would like to create a LAG (link aggregation group) over the external connections to the switch.

11. Once the setup completes, hosts will be accessible over their VLAN IP addresses. Confirm the VNIC was created on a host by issuing the ifconfig command and looking for the device bond0.100, where 100 is the VLAN ID you specified above.

Set Custom Hostnames (optional)

You can assign custom hostnames to internal DCA hosts so that they can be resolved by name. For example, a customer with a Hadoop or ETL server may want to modify hostnames to be consistent with the external environment.

Note: You should stop Greenplum Database prior to performing this procedure. If Greenplum Database is running, you will be prompted to exit and stop the database.

To set custom hostnames:

1. Create a hostname map file. For example, to create a map file called hostname_map:

# vi /home/gpadmin/hostname_map

2. From the Primary Master server, launch the DCA Setup utility as the user root:

# dca_setup

a. Select option 2) Modify DCA Settings

b. Select option 14) Modify Hostnames

c. Select sub option 2) Modify the Hostnames from a File

3. Select a host by entering the corresponding number, and then type the custom hostname. For example, to modify the hostname of the Primary Master server, mdw:

select: 3

DCA hostname: mdw
Custom hostname: mdw

4. Enter new custom hostname [default=mdw]: dca11-mdw1

5. Enter 1 to apply changes, and 1 again to apply.

Please Enter the full path for the Hostname Map File. The file should specify the host(s) whose name(s) you want to change.

The file must specify the default name of the host followed by the new hostname you want to assign.

For example, to change the hostname of mdw and sdw2, modify the map file as follows:

mdw = MasterHost


sdw2 = SegmentHost2
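As with the VLAN map file, the hostname map file is plain text and can be written with a heredoc. The entries below are the example values above; /tmp is used only so the sketch is self-contained (the procedure above creates the file at /home/gpadmin/hostname_map).

```shell
# Write the hostname map file consumed by dca_setup (example values).
cat > /tmp/hostname_map <<'EOF'
mdw = MasterHost
sdw2 = SegmentHost2
EOF
```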

Diagnostics

The following information can be used to troubleshoot a switch. For a full list of switch commands, refer to the Arista switch documentation.

Log into the switch:

From any internal host in the DCA:

# ssh admin@i-sw-1

Show Command List

# show ?

Show Switch firmware version:

# show version

Show the configuration on the switch

The switch uses a running configuration and a startup configuration. The running config can be modified while the switch is live, but any changes are lost during a reboot. Changes that must persist across a reboot should be made to the startup config.
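To persist the current running config across reboots, copy it to the startup config (a standard Arista EOS command, entered at the switch CLI):

```
# copy running-config startup-config
```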

Show running config:

# show running-config

Show startup config:

# show startup-config

Show hosts logged into switch (by MAC)

# show mac address-table

Show Port Traffic / Error Information

# show interfaces counters

Show Connectivity / Port Status

# show interfaces status

Show Interface Status with Port Description

# show interfaces description

Copy a Switch Configuration to Host

In this example, the startup configuration is being copied to the Primary Master server in the /home/gpadmin directory:

# copy startup-config scp:[email protected]/home/gpadmin


CHAPTER 3
Default Ports for Customer Networks

This chapter lists the default ports available for connectivity to customer networks.

Table 21 GPDB ports

Ports: Application

22 SSH

5432 The default port number of the Greenplum Database server running on the master host.

28080 Command Center / GP Performance Manager

8080 The default HTTP port for Greenplum Chorus

Table 22 ConnectEMC (Remote Support)

Ports: Application

989 ConnectEMC

990 ConnectEMC

20000-30000 Passive FTP Ports

Table 23 PHD ports

Ports: Application

8020 mapreduce job hdfs servers

8030 yarn resourcemanager scheduler address

8032 yarn resourcemanager address

8033 yarn resourcemanager admin address

8042 yarn nodemanager webapp address

8088 yarn resourcemanager webapp address

50010 dfs datanode address (non-secure mode)

1004 dfs datanode address (secure mode)

50020 dfs datanode ipc address

50070 dfs namenode http address

50075 dfs datanode http address (non-secure mode)


1006 dfs datanode http address (secure mode)

50090 dfs namenode secondary http address

For a complete list of PHD ports, see the Pivotal HD Enterprise Installation and Administrator Guide available on http://docs.pivotal.io/.



CHAPTER 4
Storage

This chapter describes the DCA storage system. Topics include:

Overview
Storage layout
Common storage commands

Overview

The DCA uses direct-attached storage to house application and operating environment data. Each server in a DCA contains an LSI RAID controller with 512MB of cache protected by a supercapacitor (GPDB servers feature two RAID controllers).

Storage layout

There are different physical and virtual disk configurations based on the type and role of a server in the DCA:

Table 24 Storage layout by server role
(Device/mount lists do not show unused logical volumes.)

Master Servers
  Drives: 6 x 300GB 10k
  RAID protection: RAID-5 (4+1) and a Hot Spare
  Logical volumes: 5
  Devices/mounts: /dev/sda2 /, /dev/sdb1 /home, /dev/sdd1 /crash, /dev/sdc1 swap, /dev/sde /data

GPDB (Segment) Servers
  Drives: 24 x 300GB 10k (Compute) or 900GB 10k (Standard)
  RAID protection: 2x RAID controllers, each with RAID-5 (10+1) and a Hot Spare
  Logical volumes: 6
  Devices/mounts: /dev/sda2 /, /dev/sdb swap, /dev/sdc /data1, /dev/sde1 /crash, /dev/sdf /data2

Kylin DIA Servers; Hadoop Compute
  Drives: 6 x 300GB 10k
  RAID protection: RAID-5 (4+1) and a Hot Spare
  Logical volumes: 5
  Devices/mounts: /dev/sda2 /, /dev/sdc1 swap, /dev/sdd1 /crash, /dev/sde /data

Dragon 12 DIA Server
  Drives: 12 x 3TB 7.2k
  RAID protection: RAID-5 (10+1) and a Hot Spare
  Logical volumes: 5
  Devices/mounts: /dev/sda2 /, /dev/sdc1 swap, /dev/sdd1 /crash, /dev/sde /data

Dragon 12 DIA Server
  Drives: 12 x 6TB 7.2k
  RAID protection: RAID-5 (10+1) and a Hot Spare
  Logical volumes: 5
  Devices/mounts: /dev/sda2 /, /dev/sdc1 swap, /dev/sdd1 /crash, /dev/sde /data

GPDB High Memory; Dragon 24 DIA Server (High Memory)
  Drives: 24 x 300GB 10k
  RAID protection: 2x RAID controllers, each with RAID-5 (10+1) and a Hot Spare
  Logical volumes: 6
  Devices/mounts: /dev/sda2 /, /dev/sdb swap, /dev/sdc /data1, /dev/sde1 /crash, /dev/sdf /data2

GP Hadoop Master Server (DCA version 2.0.0.0)
  Drives: 12 x 3TB 7.2k
  RAID protection: RAID-5 (10+1) and a Hot Spare
  Logical volumes: 5
  Devices/mounts: /dev/sda2 /, /dev/sdc1 swap, /dev/sdd1 /crash, /dev/sde /data

GP Hadoop Worker Server (DCA version 2.0.0.0)
  Drives: 12 x 3TB 7.2k
  RAID protection: 12x R0
  Logical volumes: 16
  Devices/mounts: /dev/sda2 /, /dev/sdc1 swap, /dev/sdg1 /var/crash, /dev/sdb /data1, /dev/sdd /data2, /dev/sdf /data3, /dev/sdh /data4, /dev/sdi /data5, /dev/sdj /data6, /dev/sdk /data7, /dev/sdl /data8, /dev/sdm /data9, /dev/sdn /data10, /dev/sdo /data11, /dev/sdp /data12

GP Hadoop Master Server (DCA version 2.0.1.0)
  Drives: 12 x 3TB 7.2k
  RAID protection: RAID-5 (10+1) and a Hot Spare
  Logical volumes: 5
  Devices/mounts: /dev/sda2 /, /dev/sdc1 swap, /dev/sdd1 /crash, /dev/sde /data

GP Hadoop Worker Server (DCA version 2.0.1.0)
  Drives: 12 x 3TB 7.2k
  RAID protection: R1 = 2 system disks (0-1); R0 = 10 data disks (2-11)
  Logical volumes: 14
  Devices/mounts: /dev/sda1 /boot, /dev/sda2 /, /dev/sdb swap, /dev/sdc1 /var/crash, /dev/sde /data1, /dev/sdf /data2, /dev/sdg /data3, /dev/sdh /data4, /dev/sdi /data5, /dev/sdj /data6, /dev/sdk /data7, /dev/sdl /data8, /dev/sdm /data9, /dev/sdn /data10

GP Hadoop Master Server (DCA version 2.1.0.0)
  Drives: 12 x 6TB 7.2k
  RAID protection: RAID-5 (10+1) and a Hot Spare
  Logical volumes: 5
  Devices/mounts: /dev/sda2 /, /dev/sdc1 swap, /dev/sdd1 /crash, /dev/sde /data

GP Hadoop Worker Server (DCA version 2.1.0.0)
  Drives: 12 x 6TB 7.2k
  RAID protection: R1 = 2 system disks (0-1); R0 = 10 data disks (2-11)
  Logical volumes: 14
  Devices/mounts: /dev/sda1 /boot, /dev/sda2 /, /dev/sdb swap, /dev/sdc1 /var/crash, /dev/sde /data1, /dev/sdf /data2, /dev/sdg /data3, /dev/sdh /data4, /dev/sdi /data5, /dev/sdj /data6, /dev/sdk /data7, /dev/sdl /data8, /dev/sdm /data9, /dev/sdn /data10


Common storage commands

The following commands can be used to view information about the storage system on a DCA server. Run all commands as the user root.

View List of commands:

# CmdTool2 -?

Show Virtual Disk State:

# CmdTool2 -LDInfo -Lall -aAll | grep -e State -e 'Virtual Drive'

Show Physical Disk State:

# CmdTool2 -PDList -aAll | grep -e 'Slot Number' -e 'Firmware state'

Show Storage Config:

# CmdTool2 -CfgDsply -aAll

View disk free space by OS mount:

# df -h


CHAPTER 5
Hadoop on the DCA

For the latest information on Pivotal Hadoop, refer to the PHD documentation on http://pivotalhd.docs.pivotal.io/.

This chapter describes the Hadoop implementation on the DCA. Major topics include:

Hadoop overview
Hadoop configuration
Hadoop example
LDAP and Kerberos security

Hadoop overview

This section contains the following major topics:

“Hadoop modules” on page 79

“Hadoop platforms” on page 80

“Common commands” on page 81

Hadoop modules

There are two types of Hadoop modules used in the DCA implementation:

Hadoop Master

Hadoop Worker

Hadoop Master module

Hadoop Master modules are configured to assume the roles of namenode, secondary namenode, resourcemanager, and hive server. A single Hadoop Master module is required as the foundation of a Hadoop installation. (Only one Hadoop Master module is allowed in the current DCA implementation.) Each server in a Master module serves a unique role, as shown in Table 25:

Table 25 Hadoop Master Module server roles

hdm1: Namenode, Hadoop Namenode, zookeeper1, HAWQ standby


Although the DCA’s modular architecture allows the Hadoop Master module to be installed in any location, it is typically the first Hadoop module installed.

Hadoop Worker module

Hadoop Worker modules provide data storage for a Hadoop installation. A Hadoop cluster must have at least one Hadoop Worker module and can have as many as 46 Hadoop Worker modules, enough to populate a 12-rack DCA.

Hadoop Worker modules run the nodemanager and datanode services.

Hadoop platforms

The following platforms are installed and configured for use with Hadoop modules.

Hive

Apache Hive is data warehousing software that can access files stored in the HDFS. Hive on the DCA has an initial configuration and is ready for use when you run the following command from any Hadoop Master module server:

$ hive

Hive binaries, examples, and documentation are located in:

/usr/lib/gphd/hive

The Hive configuration is located in:

/etc/gphd/hive/conf

Pig

Apache Pig is a platform for analyzing large data sets. Pig on the DCA has an initial configuration and is ready for use when you run the following command from any Hadoop Master module server:

$ pig

Pig binaries, examples, and documentation are located in:

/usr/lib/gphd/pig

The Pig configuration is located in:

/etc/gphd/pig/conf

Table 25 Hadoop Master Module server roles (continued)

hdm2: Secondary Namenode, Hadoop Secondary Namenode, zookeeper2, HAWQ master
hdm3: Resourcemanager, YARN Resource Manager/MapReduce History Server, zookeeper3
hdm4: Hive server, HBASE master, Secondary DNS, client gateway/workhorse


ZooKeeper

Apache ZooKeeper is a coordination service for distributed applications in Hadoop. The ZooKeeper binaries are located in:

/usr/lib/gphd/zookeeper

The ZooKeeper configuration is located in:

/etc/gphd/zookeeper/conf

Common commands

This section presents some common commands used for interacting with the Hadoop file system on a DCA. These commands are not specific to the DCA; for a full list, see the Apache Hadoop documentation.

You must switch to the user hdfs to run Hadoop commands:

[root@hdm1 ~]# su - hdfs
[hdfs@hdm1 ~]$

View Hadoop FS commands:

$ hadoop dfs -help

List directories in HDFS:

$ hadoop dfs -ls /

Run the Hadoop filesystem check tool:

$ hadoop fsck /

Remove files in a directory:

$ hadoop dfs -rmr TEST

Copy files from a local directory into HDFS:

$ hadoop dfs -copyFromLocal /home/gpadmin/gpAdminLogs/* TEST

Hadoop configuration

This section describes how Hadoop is configured in the DCA. It contains the following major topics:

“Configuration files”

“Default ports”

“Storage”


Configuration files

The following table lists the location of important configuration files used by Hadoop.

Default ports

The following table lists ports configured on the DCA for various Hadoop services.

Storage

PHD Master and Worker modules are configured with the Hadoop distribution and are ready for high-performance unstructured data queries.

Table 26 File location of Hadoop configuration files

Filename Location

core-site.xml On Hadoop Masters: /etc/gphd/hadoop/conf

hdfs-site.xml On Hadoop Masters: /etc/gphd/hadoop/conf

mapred-site.xml On Hadoop Masters: /etc/gphd/hadoop/conf

Table 27 Hadoop ports on the DCA

Service Port

Hadoop IPC 9000

Namenode HTTP 50070

Secondary Namenode HTTP 50090
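The ports in Table 27 can be spot-checked from any host with network access to the Hadoop Masters. The following illustrative Python sketch (not a DCA utility; hostnames and reachability depend on your environment) tests whether a TCP port is accepting connections:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname is illustrative, result depends on your DCA):
# port_open("hdm1", 50070)  # Namenode HTTP
```

A closed or unreachable port returns False rather than raising, so the check can be looped over all services in the table.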


In DCA version 2.0.1.0 and later, Hadoop Master modules run namenode, secondary namenode, resourcemanager, zookeeper, and hive server services. Hadoop Worker modules run nodemanager and datanode services.

Figure 40 DCA Hadoop Master and Worker modules

Hadoop installation specification

The PHD installation specification is provided in the following location:

[root@hdm1 etc]# cat gphd_install_spec


Table 28 Hadoop installation specification

Service Component details

datanode nodes: hdw1, hdw2, hdw3, hdw4....hdw320

rpms: hadoop-datanode-1.0.3_gphd_1.2.0.0-GA.x86_64.rpm

service: hadoop-datanode

hbase nodes: hdm4

rpms:
hbase-0.92.1_gphd_1.2.0.0-GA.noarch.rpm
hbase-doc-0.92.1_gphd_1.2.0.0-GA.noarch.rpm
hbase-master-0.92.1_gphd_1.2.0.0-GA.noarch.rpm
hbase-thrift-0.92.1_gphd_1.2.0.0-GA.noarch.rpm

service: hbase-master


hbase-regionserver nodes: *id001

rpms: hbase-regionserver-0.92.1_gphd_1.2.0.0-GA.noarch.rpm

service: hbase-regionserver

hdfs nodes: hdm1, hdm2, hdm3, hdm4, hdw1, hdw2, hdw3, hdw4

rpms: hadoop-1.0.3_gphd_1.2.0.0-GA.x86_64.rpm

hive nodes: hdm1, hdm2, hdm3, hdm4

rpms: hive-0.8.1_gphd_1.2.0.0-GA.noarch.rpm

hive-server nodes: hdm4

rpms:
hive-metastore-0.8.1_gphd_1.2.0.0-GA.noarch.rpm
hive-server-0.8.1_gphd_1.2.0.0-GA.noarch.rpm

service: hive-metastore, hive-server

resourcemanager nodes: hdm3

rpms: hadoop-resourcemanager-1.0.3_gphd_1.2.0.0-GA.x86_64.rpm

service: hadoop-resourcemanager

namenode nodes: hdm1

rpms: hadoop-namenode-1.0.3_gphd_1.2.0.0-GA.x86_64.rpm

service: hadoop-namenode

pig nodes: hdm1, hdm2, hdm3, hdm4

rpms: pig-0.9.2_gphd_1.2.0.0-GA.noarch.rpm

secondary-namenode nodes: hdm2

rpms: hadoop-secondarynamenode-1.0.3_gphd_1.2.0.0-GA.x86_64.rpm

service: hadoop-secondarynamenode

nodemanager nodes: *id001

rpms: hadoop-nodemanager-1.0.3_gphd_1.2.0.0-GA.x86_64.rpm

service: hadoop-tasktracker

zookeeper nodes: hdm2, hdm3, hdm4

rpms:
zookeeper-3.3.5_gphd_1.2.0.0-GA.noarch.rpm
zookeeper-server-3.3.5_gphd_1.2.0.0-GA.noarch.rpm

service: zookeeper-server


Hadoop example

Creating table and inserting rows in Hive

1. Log in to hdm1 as the user hdfs.

2. Create a text file /home/hdfs/test_hive and populate it with integer values, one on each line:

$ vi /home/hdfs/test_hive
1
2
3
...

3. Start Hive:

$ hive

4. Create a table named test:

hive> create table test (foo int);

5. Show the table:

hive> show tables;

6. Load data from the test_hive file into the table test:

hive> load data local inpath '/home/hdfs/test_hive' overwrite into table test;

7. View the contents of the table test:

hive> select * from test;

8. Exit the Hive shell:

hive> quit;

9. View the contents of the table test from the HDFS:

$ hadoop fs -cat /user/hive/warehouse/test/test_hive

LDAP and Kerberos security

Security is enabled by default in PHD version 2.1. Follow this procedure if you need to manually enable security mode on the DCA:

1. Enabling Kerberos on the DCA

2. Adding users to security group

3. Removing Greenplum path

Note: The dca_setup Verify Health option for Pivotal Hadoop is not supported in security mode.


Enabling Kerberos on the DCA

To enable security on a deployed, but unsecured, cluster, you need to set up a Kerberos server, as follows. If you already have a Kerberos server set up, you do not need to run this command, but you need to make security-specific edits to the Cluster configuration file.

To configure Kerberos:

1. Stop the cluster:

[gpadmin]# icm_client stop -l <CLUSTERNAME>

2. On the Admin node, as gpadmin, run:

$ icm_client security -i

3. You will be prompted for the following information:

Do you wish to configure Kerberos Server? (y/n) [Yes]? yes

Enter no if you do not wish to use the built-in Kerberos server.

Choose a realm for your Kerberos server; usually this will be your domain name:

Enter REALM for Kerberos (ex PIVOTAL.IO): PIVOTAL.IO

Choose a login and password for your Kerberos server. You will need these if you ever need to manage the Kerberos server directly via the command line tool (for example, kadmin):

Enter username for Kerberos Server ADMIN [admin]: gpadmin
Enter new password for Kerberos Server ADMIN:
Re-enter the new password for Kerberos Server Admin:
Enter new MASTER password for KDC:
Re-enter new MASTER password for KDC:

You are now prompted to set up the built-in LDAP server:

[WARNING] Attempt to re-configure previously configure LDAP server may result in data or functionality loss
Do you wish to configure LDAP Server? (y/n) [Yes]? yes

Select a suitable base DN; usually this will be your domain name:

Enter Domain name for LDAP base DN (ex pivotal.io): pivotal.io

Choose a login and password for the LDAP administrator. You will need these to add new users to the system and to manage the built-in LDAP server directly:

Enter username for LDAP Administrator [Manager]: gpadmin
Enter new password for LDAP administrator:
Re-enter new password for LDAP administrator:

The installer will now install and configure the built-in Kerberos and LDAP server, based on the information you provided:

[INFO] Attempting to configure KDC and/or LDAP. It may take few minutes...
[DONE] Security components initialized successfully


4. You now need to add security-specific parameters/values to the configuration file. You can use icm_client to reconfigure for this purpose. Make sure it runs successfully on all nodes before proceeding further. Perform the following steps on the Admin node:

a. Fetch the current configuration into a directory named SecureConfiguration:

[gpadmin]# icm_client fetch-configuration -o SecureConfiguration -l <CLUSTERNAME>

b. Open the cluster configuration file and set the security parameter to true:

<securityEnabled>true</securityEnabled>

c. Locate the following section in Global Services Properties <servicesConfigGlobals>:

<!-- Security configurations -->
<!-- provide security realm. e.g. EXAMPLE.COM -->
<security.realm></security.realm>
<!-- provide the path of kdc conf file -->
<security.kdc.conf.location>/etc/krb5.conf</security.kdc.conf.location>

You need to add a valid value to the <security.realm> parameter. Use the same (case-sensitive) realm value entered in step 3.

The default value for the <security.kdc.conf.location> parameter is valid if you are using the Kerberos server set up during Configuring Kerberos and LDAP; if you are using an existing Kerberos server, provide the location of that server's configuration file.
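For example, with the realm entered in step 3, the security section would look like the following (shown for illustration only; leave the rest of <servicesConfigGlobals> unchanged):

```
<!-- Security configurations -->
<security.realm>PIVOTAL.IO</security.realm>
<security.kdc.conf.location>/etc/krb5.conf</security.kdc.conf.location>
```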

5. Run reconfigure to push your changes to cluster nodes:

[gpadmin]# icm_client reconfigure -l <CLUSTERNAME> -c SecureConfiguration -f

6. Start the Cluster:

[gpadmin]# icm_client start -l <CLUSTERNAME>

7. If HAWQ is configured:

a. Start HAWQ:

$ /etc/init.d/hawq start

b. Make sure you have a Kerberos principal for gpadmin.

c. Locate HAWQ's data directory:

On the HAWQ master, open /etc/gphd/hawq/conf/gpinitsystem_config

Locate the DFS URL and note the directory that follows the nameservice or namenode. By default this value is hawq_data; this document refers to it as HAWQ_DATA_DIR.

d. Create HAWQ_DATA_DIR on HDFS:

Start the cluster using icm_client and make sure the HDFS service is up and running. Then, as gpadmin, on the namenode or a client machine, run the following:

kinit


hadoop fs -mkdir /<HAWQ_DATA_DIR>
hadoop fs -chown -R postgres:gpadmin /<HAWQ_DATA_DIR>
hadoop fs -mkdir /user/gpadmin
hadoop fs -chown gpadmin:gpadmin /user/gpadmin
hadoop fs -chmod 777 /user/gpadmin
kdestroy

e. Specify that security is enabled by running the following:

source /usr/local/hawq/greenplum_path.sh
gpconfig -c enable_secure_filesystem -v "on"
gpconfig --masteronly -c krb_server_keyfile -v "'/path/to/keytab/file'"

Note: The single quotes inside the double quotes in the keytab path above are required.

f. Restart HAWQ:

$ /etc/init.d/hawq restart

At this point, security is enabled and you may run test commands to validate that data is still accessible in secure mode.

Adding users to security group

After enabling security mode, and before running a MapReduce job, the users you want to use in secure mode must be manually added to the security group using the adduserphd script:

/usr/lib/gphd/tools/security/utils/adduserphd.py <host-name-list-file> -H -s -r <realm name>

Removing Greenplum path

The file /etc/profile needs to be modified to remove the Greenplum path for the user gpadmin.

On each node, perform the following steps:

1. Run the command: su

2. Enter the root password: changeme

3. Run the command: vi /etc/profile

4. Comment out three lines:

if [[ "$ID" == 'gpadmin' ]]; then
# gpadmin user will get greenplum path
# . /usr/local/greenplum-db/greenplum_path.sh
# . /usr/local/greenplum-cc-web/gpcc_path.sh 2>/dev/null

5. To verify that the variable is no longer set, or to reset it, run the command: set

Disabling security mode

To disable security mode, follow instructions in the Pivotal PHD Enterprise Installation and Administrator Guide available on http://docs.pivotal.io/.


CHAPTER 6
Master Server Failover

Note: (This note applies only to DCA version 2.0.1.0 and later) DCA versions 2.0.1.0 and later support an option for a Hadoop-only DCA. Master Server failover in a Hadoop-only DCA differs in significant ways from Master Server failover in a GPDB-only or mixed DCA. The procedures for replacing a Primary or Standby Master server in a Hadoop-only DCA are documented in the latest version of the EMC Data Computing Appliance Maintenance Guide.

This chapter describes the Master Server Failover feature of the DCA. In the DCA, a backup master server called the Standby is available to resume operations if the Primary master has a failure. A failover to the Standby Master - hostname smdw - is performed when the Primary Master - hostname mdw - can no longer accept connections to the Greenplum Database.

There are two types of failover, orchestrated and automatic. An orchestrated failover is done by invoking the failover manually - this may be due to marginally failing hardware or scheduled maintenance of the Primary Master. Automatic failover occurs when the DCA detects the Primary Master has failed and performs an unattended failover to the Standby Master.

The failover scenarios listed in this chapter only pertain to the case where the Primary Master fails. If a Standby Master fails, there is no impact to availability of the Greenplum Database.

The following sections are included in this chapter:

“Orchestrated Failover”

“Automated Failover”

Figure 41 Overview of Master Server failover process


Orchestrated Failover

Orchestrated failover refers to the manual failover from the Primary to the Standby Master. The Primary Master may still be operational, but does not need to be.

An orchestrated failover can be done for several reasons, including:

Marginally failing hardware

Scheduled maintenance

Control of when failover occurs, when the user does not want to activate Automated Failover

Orchestrated failover does not require any special setup beforehand. The utility used to invoke an orchestrated failover is an automation of steps that were required to be run manually in previous versions of the DCA software.

The following topics are included in this section:

“What happens during an Orchestrated Failover”

“Syntax for the dca_failover Utility”

“DCA Failover Configuration”

“Orchestrated Failover Examples”

What happens during an Orchestrated Failover

An orchestrated failover is an aggregation of functionality which moves operations from the Primary to Standby Master. The following operations are performed in order during an orchestrated failover:

1. The Greenplum Database is stopped, if the --stopmasterdb parameter is given.

2. The virtual IP address of the failed Primary Master is deleted, if the --deletevip parameter is given.

3. The gpactivatestandby command is run on the Standby Master, which becomes the new Primary Master.

4. A query is run against the new Primary Master to validate that the Greenplum Database is running.

5. The failed master server is shut down, if the --shutdown parameter is given.

6. The virtual IP address is added to the new Primary Master, if the --shutdown and --vip parameters are given.

7. The new Primary Master sends an ARP request to the gateway with the virtual IP address it was assigned, if the --shutdown and --vip parameters are given.

8. Command Center is started on the new Primary Master, if the --shutdown parameter is given.


Syntax for the dca_failover Utility

The dca_failover utility is used to initiate an orchestrated failover from the Primary to the Standby Master. The utility parameters are listed below.

dca_failover [--master_data_dir <dir>] [--port <port>] [--user <user>] [--vip <ip>] [--gateway <ip>] [--netmask <mask>] [--shutdown] [--deletevip] [--noninteractive] [--stopmasterdb] [--help]

--master_data_dir: Master Data Directory of Greenplum Database. This value is read from the /opt/dca/etc/env/default/master_data_directory file by default. This parameter is not required.

--port: The port which the Greenplum Database is running on. If the parameter is not given, the value is read from the /opt/dca/etc/dca_setup/dca_setup.cnf file. This parameter is not required.

--user: The user that performs Greenplum Database operations (gpactivatestandby, the test query). This is gpadmin by default. This parameter is not required.

--vip: The virtual IP address to move. If the parameter is not given, the value is read from the /opt/dca/etc/dca_setup/dca_setup.cnf file. This parameter is not required.

--gateway: The gateway of the virtual IP address. The --vip parameter must be given in addition to this parameter. If the parameter is not given, the value is read from the /opt/dca/etc/dca_setup/dca_setup.cnf file. This parameter is not required.

--netmask: The netmask of the virtual IP address. The --vip parameter must be given in addition to this parameter. If the parameter is not given, the value is read from the /opt/dca/etc/dca_setup/dca_setup.cnf file. This parameter is not required.

--shutdown: Shutdown the failed Primary Master after the Standby has been initialized as the new Primary Master. This parameter is required if the failed Primary Master is still powered on.

--noninteractive: Do not prompt the user for confirmation during failover.

--stopmasterdb: Stop the Greenplum Database on the failed Primary Master server before failover. This parameter is required if the failed Primary Master is powered on and still accepting connections to the Greenplum Database.

--help: Display syntax of dca_failover command.

DCA Failover Configuration

The DCA Failover utility uses values from configuration files if certain command-line parameters are not specified. These values are populated by the DCA Setup utility during installation and configuration.

/opt/dca/etc/healthmond/healthmond.cnf:
failover_vip = 10.10.10.3
failover_netmask = 255.255.248.0
failover_gateway = 10.10.10.10

/opt/dca/etc/env/default/master_data_directory:
/data/master/gpseg-1
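The healthmond.cnf excerpt above uses simple key = value lines. A minimal illustrative parser for that format (the file layout is inferred from the excerpt; this is not DCA code) can make the fallback behavior concrete:

```python
def parse_healthmond_cnf(text: str) -> dict:
    """Parse simple 'key = value' lines, as in healthmond.cnf, into a dict."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks, comments, and anything without an assignment.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf
```

When a parameter such as --vip is not given on the command line, dca_failover falls back to values like these from the configuration files.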


Orchestrated Failover Examples

To perform an orchestrated failover where the Primary Master is powered on and the Greenplum Database is online, enter the following command:

# dca_failover --shutdown --stopmasterdb --noninteractive

To perform an orchestrated failover where the Primary Master server has completely failed - powered off - and the Greenplum Database is offline, enter the following command:

# dca_failover --noninteractive

Automated Failover

The automated failover feature monitors the Primary Master and automatically initiates a failover to the Standby Master when certain conditions are met. Automated failover refers only to a failover from the Primary to the Standby Master. If a Standby Master fails, there is no effect on Greenplum Database availability, and no operations are moved.

The algorithm that determines whether the Primary Master has failed uses pessimistic logic: queries and ping requests across network interfaces from multiple hosts must all fail before a failover is initiated. By default, the automated failover feature is not enabled. The feature is enabled through the DCA Setup utility.

The following topics are included in this section:

“Enable, Disable, and Status of Automated Failover”

“Triggers for Automatic Failover”

“Monitor a Failover in Progress”

“Failback After an Automated Failover”

Enable, Disable, and Status of Automated Failover

Automated failover is controlled by the DCA Setup utility. In order to run the DCA Setup utility, the user must have root permissions.

Enable Automated Failover

1. As the user root, log into the Primary Master Server - mdw.

2. Launch the DCA Setup Utility:

# dca_setup

3. Select option 2) Modify DCA Settings.

4. Select option 19) Enable/Disable Master Server Auto Failover (currently disabled).

Disable Automated Failover

1. As the user root, log into the Primary Master Server - mdw.

2. Launch the DCA Setup Utility:

# dca_setup

3. Select option 2) Modify DCA Settings.

4. Select option 19) Enable/Disable Master Server Auto Failover (currently enabled).


Check Status of Automated Failover

The status of automated failover can be verified by reading the text in the DCA Setup utility. If option 19 shows currently enabled, the failover feature is active:

1. As the user root, log into the Primary Master Server - mdw.

2. Launch the DCA Setup Utility:

# dca_setup

3. Select option 2) Modify DCA Settings.

4. Review option 19); the text specifies whether the feature is enabled or disabled.

Triggers for Automatic Failover

The following triggers are evaluated by the failover logic. The triggers are checked in order, on a 15-second poll interval. If the first trigger fails, the next is tried; all must fail for a failover to the Standby Master to initiate. If any trigger succeeds, the failover does not occur, and the checks start from the beginning on the next poll interval.

Failover monitoring and execution are carried out by a service called dbwatcherd. The dbwatcherd service must be running for these triggers to be monitored.

Note: A partial failure, such as a failed NIC or the Greenplum Database being down, may not trigger a failover to the Standby. These failures are reported to EMC Support by the ConnectEMC software.

The Standby Master connects to the Primary Master and runs a query.

The Standby Master issues a ping request to the Primary Master.

The Standby Master asks the segment servers to validate Primary Master availability:

• 5 different segments query the Primary Master.

• The same 5 segments issue ping requests to the Primary Master.

• All 5 segments must report the Primary Master as unreachable.
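The pessimistic, all-must-fail decision described above can be sketched as follows. This is an illustrative Python model of the logic, not the actual dbwatcherd implementation; the check results are placeholders for the real query and ping checks:

```python
def should_failover(standby_query_ok, standby_ping_ok, segment_reports):
    """Return True only if every availability check of the Primary Master fails.

    standby_query_ok / standby_ping_ok: results of the Standby's own checks.
    segment_reports: per-segment (query_ok, ping_ok) pairs from the 5 segments.
    Any single success means the Primary is still considered reachable.
    """
    if standby_query_ok or standby_ping_ok:
        return False
    # All polled segments must report the Primary Master as unreachable.
    return all(not q and not p for q, p in segment_reports)
```

If this evaluates to False on a poll, no failover occurs and the checks restart from the beginning on the next 15-second interval.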

Monitor a Failover in Progress

When an automated failover is in progress, events are written to the OS log files. By monitoring the /var/log/messages file, you can see what phase of the failover is occurring. Use the following command from the Standby Master:

# tail -f /var/log/messages

Failback After an Automated Failover

After an automated failover occurs, the original Standby Master (smdw) is the new Primary Master. Once the original Primary Master (mdw) is replaced, it can be integrated back into the cluster in one of two ways: as the Standby Master or as the Primary Master. The supported process is to always return the replaced master server to its original role; only in certain circumstances, where additional downtime is not feasible, should the replaced master server be left in a reversed role.


This section contains guidelines and high level information about restoring normal operation after a failover. For the replacement procedure, see the latest version of the EMC Data Computing Appliance Maintenance Guide.

Return Master Server to Original Role After Replacement

The following high level steps should be performed to return the replaced Master server to its original role as Primary Master Server. At completion of the activity, both Master servers will be in their original role, and able to perform an automated failover should a new failure of the Primary Master occur.

Initialize the replaced Master Server - mdw - as the Standby Master

Perform an Orchestrated failover to move operations from the new Primary Master - smdw - to the original Primary Master - mdw.

Initialize the original Standby Master - smdw - as the new Standby Master.

Keep the Master Servers in a Reversed Role

After the original Primary Master - mdw - is physically replaced, the DCA will be operating with the original Standby Master - smdw - as the Primary Master and no Standby Master. In order to reduce the downtime that would be induced by a fail-back, the original Primary Master - mdw - can be initialized as a Standby Master.

This configuration is transparent to the Greenplum Database, and in the event of a failure, the failover process would function correctly. This configuration is not supported by EMC, and should only be done under special circumstances.

Note: The Primary Master should always be returned to its original role due to serviceability concerns regarding rack position, cabling, and hostnames.


CHAPTER 7
SNMP

This chapter describes the DCA implementation of SNMP. The DCA has an SNMP version 2 management information base (MIB). The MIB can be used by enterprise monitoring systems to identify issues with components and services in the DCA.

The following sections are included in this chapter:

“DCA MIB information”

“Integrate DCA MIB with environment”

DCA MIB information

This section contains information on how to understand and view the DCA Data and Trap MIB.

MIB Locations

The DCA MIBs are located in the following locations:

/usr/share/snmp/mibs/GP-DCA-TRAP-MIB.txt
/usr/share/snmp/mibs/GP-DCA-DATA-MIB.txt

MIB Contents

The DCA public MIB is organized in the following way:

Figure 42 MIB OID Structure

DCA OIDs follow the layout 1.3.6.1.4.1.1139.23.1.1.X.y, where 1.3.6.1.4.1.1139 is the Enterprise OID, 23.1.1 completes the DCA MIB OID, X selects the MIB (1 = Trap MIB, 2 = Data MIB), and y is the Component OID.

Trap MIB components (1.y):
1.1 Trap Notifications
1.2 Symptom Code
1.3 Detailed Symptom Code
1.4 Description
1.5 Severity
1.6 Hostname

Data MIB components (2.y):
2.1 DCA v1 Hardware
2.2 DCA UAP Edition Hardware
2.3 Services
2.4 Software Version
2.5 Hadoop Version
2.6 Basic System Information
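The OID layout can be expressed as a small helper. This is an illustrative Python sketch; the enterprise prefix 1.3.6.1.4.1.1139.23.1.1 comes from the figure, and the function name is hypothetical:

```python
# Enterprise prefix through the DCA MIB OID, per Figure 42.
DCA_MIB_PREFIX = "1.3.6.1.4.1.1139.23.1.1"

TRAP_MIB, DATA_MIB = 1, 2  # the X in the <prefix>.X.y layout

def dca_oid(mib: int, component: int) -> str:
    """Build a full DCA OID from the MIB branch (1=Trap, 2=Data) and component."""
    return f"{DCA_MIB_PREFIX}.{mib}.{component}"

# e.g. dca_oid(DATA_MIB, 3) gives the Services branch of the Data MIB
```

A monitoring system could walk one of these branches (for example with an SNMP walk against the appliance) to enumerate the corresponding components.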


Table 29 DCA Data MIB

DCA MIB Content Component OID Description

1 - gpDCAv1Hardware

gpMasterNodes 1 GPDB Master servers.

gpSegmentNodes 2 GPDB Segment servers.

gpAdminSwitches 3 DCA administration switches.

gpInterconnectSwitches 4 DCA Interconnect switches.

gpEtlNodes 5 DIA servers.

gpHadoopMasterNodes 6 Hadoop Master servers.

gpHadoopWorkerNodes 7 Hadoop Worker servers.

gpAggAdminSwitches 8 DCA aggregation administration switches.

gpAggInterconnectSwitches 9 DCA aggregation Interconnect switches.

gpHbaseWorkerNodes 10 Hadoop HBase servers.

2 - gpDCAv2Hardware

gpMasterNodes 1 GPDB Master servers.

gpSegmentNodes 2 GPDB Segment servers.

gpAdminSwitches 3 DCA administration switches.

gpInterconnectSwitches 4 DCA Interconnect switches.

gpEtlNodes 5 DIA servers.

gpHadoopMasterNodes 6 Hadoop Master servers.

gpHadoopWorkerNodes 7 Hadoop Worker servers.

gpAggAdminSwitches 8 DCA aggregation administration switches.

gpAggInterconnectSwitches 9 DCA aggregation Interconnect switches.

gpHadoopComputeNodes 10 Hadoop Compute servers.

3 - gpDCAServices

gpDbService 1 Greenplum Database processes.

gpHadoopService 2 Hadoop processes.


An example healthmon dialhome message looks like this:

(955): snmp_vals=['11','9002','Controller Battery 1 Status: ok','3','smdw : smdw'];
Event Code 11.9002, Severity: Informational (3) - Message about smdw (standby master)

Table 31, “Dialhome message detail” shows how each element of the message corresponds to the rows in Table 30, “DCA Trap MIB” above.

Table 32 below shows examples of actual trap descriptions and trap severities. The list is not comprehensive.

Table 30 DCA Trap MIB

DCA Trap MIB Contents Description

1 - gpDCATrap This OID is used for notifications generated for a hardware or database event.

2 - gpDCATrapSymCode Symptom code for the event.

3 - gpDCATrapDetailedSymCode Detailed symptom code for the event.

4 - gpDCATrapDesc Description of the event.

5 - gpDCATrapSeverity Severity of the event:
0 - unknown
1 - error
2 - warning
3 - info
4 - debug

6 - gpDCATrapHostname Server where the event occurred.

Table 31 Dialhome message detail

Trap Notifications (Internal: Custom): snmp_vals
Symptom Code: 11
Detailed Symptom Code: 9002
Description: Controller Battery 1 Status: ok
Severity: 3
Hostname: smdw:smdw
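The fields of a healthmond snmp_vals list map directly onto the Trap MIB rows in Table 30. A minimal illustrative parser for the quoted-list portion of such a message (the format is inferred from the example above; the function name is hypothetical):

```python
import ast

# Severity names per the gpDCATrapSeverity values in Table 30.
SEVERITY_NAMES = {0: "Unknown", 1: "Error", 2: "Warning", 3: "Informational", 4: "Debug"}

def parse_snmp_vals(payload: str) -> dict:
    """Parse a snmp_vals list payload into named Trap MIB fields."""
    sym, detail, desc, sev, host = ast.literal_eval(payload)
    return {
        "symptom_code": int(sym),
        "detailed_symptom_code": int(detail),
        "description": desc,
        "severity": SEVERITY_NAMES[int(sev)],
        "hostname": host,
    }
```

Applied to the example message, this yields Event Code 11.9002 with severity Informational, matching the dialhome summary line.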

Table 32 Example trap descriptions and severities

1.1 Host not responding to SNMP calls, host may be down

Unknown

Power Supply Name: timeout

Upgrade State: timeout

Operational Status: timeout

Interface Description: timeout

Array Disk Name: timeout

Network Device Name: timeout

Virtual Disk Device Name: timeout

sysDescr: timeout


Virtual Disk Read Policy: timeout

Virtual Disk State: timeout

Controller Name: timeout

Network Device IP Address: timeout

Virtual Disk Write Policy: timeout

Sensor Name: timeout

Network Device Status: timeout

Battery Status: timeout

Disk Space Used Percentage on Segment (/) Value: timeout

Power Supply Status: timeout

Memory Device Status: timeout

Cache Device Status: timeout

Controller Battery State: timeout

Interface 404161031 Description: timeout

Cooling Device High critical temp: timeout

1.4

Unknown

Interface Status: could not open session to host

2.15 GPDB status - Sent from inside GPDB such as panics, GPDB start

Info

database system is ready to accept connections
PostgreSQL 8.2.15 (Greenplum Database 4.1.1.3 build 4) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Aug 2 2011 10:46:48

2.15005

Error

PANIC: insufficient resource queues available

PANIC: proclock table corrupted (lock.c:1247)

PANIC: could not write to file "pg_xlog/xlogtemp.12691": No space left on device

PANIC: Unable to complete 'Abort Prepared' broadcast for gid = 1323052957-0000019676 (cdbtm.c:930)

PANIC: Waiting on lock already held! (lwlock.c:552)

PANIC: could not open file "global/pg_control": No such file or directory

PANIC: out of shared memory

3.2 Status of a power supply. If a PS fails, an error with this code is generated.

Error

PS 1 Status: critical


PS 2 Status: critical

Info

PS 2 Status: ok

PS 1 Status: ok

Warning

PS 2 Status: nonCritical

5.4002 Temperature of the system.

Warning

System Temperature: Temperature not in normal range

9.7 Memory device status. Failed memory devices will get this code.

Error

Memory Device 1 Status: critical

Memory Device 5 Status: critical

Info

Memory Device 1 Status: ok

Memory Device 5 Status: ok

Warning

Memory Device 6 Status: nonCritical

Memory Device 1 Status: nonCritical

10.8003 Status of the network device.

11.9001 Status of IO Controller.

Error

Controller 1 Status: Degraded

Info

Controller 1 Status: ok

11.9002 Status of battery on the IO Controller.

Error

Controller Battery 1 Status: critical

Info

Controller Battery 1 Status: ok

Warning

Controller Battery 1 Status: nonCritical

12.10002

Error


Virtual Disk 4 Status: /dev/sdd: critical

Virtual Disk 3 Status: /dev/sdc: critical

Info

Virtual Disk 3 Status: /dev/sdc: ok

Virtual Disk 4 Status: /dev/sdd: ok

Virtual Disk 2 Status: /dev/sdb: ok

Virtual Disk 1 Status: /dev/sda: ok

Warning

Virtual Disk 3 Status: /dev/sdc: nonCritical

Virtual Disk 4 Status: /dev/sdd: nonCritical

Virtual Disk 1 Status: /dev/sda: nonCritical

Virtual Disk 2 Status: /dev/sdb: nonCritical

12.10005 Write cache policy on virtual disk. The expected setting is write-back mode.

Info

Virtual Disk 2 Write Policy: /dev/sdb: LSI Write Back

Virtual Disk 3 Write Policy: /dev/sdc: LSI Write Back

Virtual Disk 1 Write Policy: /dev/sda: LSI Write Back

Virtual Disk 4 Write Policy: /dev/sdd: LSI Write Back

Virtual Disk 6 Write Policy: /dev/sdf: LSI Write Back

Virtual Disk 5 Write Policy: /dev/sde: LSI Write Back

Virtual Disk 11 Write Policy: /dev/sdk: LSI Write Back

Virtual Disk 8 Write Policy: /dev/sdh: LSI Write Back

Virtual Disk 13 Write Policy: /dev/sdm: LSI Write Back

Virtual Disk 14 Write Policy: /dev/sdn: LSI Write Back

Virtual Disk 15 Write Policy: /dev/sdo: LSI Write Back

Virtual Disk 7 Write Policy: /dev/sdg: LSI Write Back

Virtual Disk 9 Write Policy: /dev/sdi: LSI Write Back

Virtual Disk 12 Write Policy: /dev/sdl: LSI Write Back

Virtual Disk 10 Write Policy: /dev/sdj: LSI Write Back

Virtual Disk 16 Write Policy: /dev/sdp: LSI Write Back

Warning

Virtual Disk 2 Write Policy: /dev/sdb: LSI Write Through

Virtual Disk 3 Write Policy: /dev/sdc: LSI Write Through

Virtual Disk 1 Write Policy: /dev/sda: LSI Write Through


Virtual Disk 4 Write Policy: /dev/sdd: LSI Write Through

Virtual Disk 2 Write Policy: /dev/sdb: Enabled Always (SAS)

Virtual Disk 1 Write Policy: /dev/sda: Enabled Always (SAS)

Virtual Disk 4 Write Policy: /dev/sdd: Enabled Always (SAS)

Virtual Disk 3 Write Policy: /dev/sdc: Enabled Always (SAS)

Virtual Disk 5 Write Policy: /dev/sde: LSI Write Through

Virtual Disk 6 Write Policy: /dev/sdf: LSI Write Through

Virtual Disk 16 Write Policy: /dev/sdp: LSI Write Through

Virtual Disk 10 Write Policy: /dev/sdj: LSI Write Through

Virtual Disk 12 Write Policy: /dev/sdl: LSI Write Through

Virtual Disk 14 Write Policy: /dev/sdn: LSI Write Through

Virtual Disk 7 Write Policy: /dev/sdg: LSI Write Through

Virtual Disk 9 Write Policy: /dev/sdi: LSI Write Through

Virtual Disk 8 Write Policy: /dev/sdh: LSI Write Through

Virtual Disk 13 Write Policy: /dev/sdm: LSI Write Through

Virtual Disk 15 Write Policy: /dev/sdo: LSI Write Through

Virtual Disk 11 Write Policy: /dev/sdk: LSI Write Through

12.10006 Read cache policy of virtual disk. The expected setting is adaptive read-ahead.

Warning

Virtual Disk 1 Read Policy: /dev/sda: LSI No Read Ahead

Virtual Disk 3 Read Policy: /dev/sdc: LSI No Read Ahead

Virtual Disk 2 Read Policy: /dev/sdb: LSI No Read Ahead

12.10007 Detects offline, rebuilding RAID, and other unexpected virtual disk states.

Error

Virtual Disk 3 State: /dev/sdc: Degraded

Virtual Disk 4 State: /dev/sdd: Degraded

Virtual Disk 1 State: /dev/sda: Degraded

Virtual Disk 2 State: /dev/sdb: Degraded

Info

Virtual Disk 4 State: /dev/sdd: Ready

Virtual Disk 2 State: /dev/sdb: Ready

Virtual Disk 3 State: /dev/sdc: Ready

Virtual Disk 1 State: /dev/sda: Ready

Warning


Virtual Disk 4 State: /dev/sdd: Background Initialization

Virtual Disk 2 State: /dev/sdb: Background Initialization

Virtual Disk 3 State: /dev/sdc: Background Initialization

Virtual Disk 1 State: /dev/sda: Background Initialization

12.10011 Percentage of disk space on virtual disk used.

Error

Disk Space Used Percentage on Segment (/data1) 2 Value: value 90 outside of range 0 to 89

Disk Space Used Percentage on Segment (/data2) 3 Value: value 90 outside of range 0 to 89

Disk Space Used Percentage on Segment (/data2) 3 Value: value 93 outside of range 0 to 89

Disk Space Used Percentage on Segment (/) 1 Value: value 100 outside of range 0 to 89

Info

Disk Space Used Percentage on Segment (/data2) 3 Value: 79

Disk Space Used Percentage on Segment (/data1) 2 Value: 79

Disk Space Used Percentage on Segment (/) 1 Value: 16

Warning

Disk Space Used Percentage on Segment (/data2) 3 Value: value 80 outside of range 0 to 79

Disk Space Used Percentage on Segment (/data1) 2 Value: value 80 outside of range 0 to 79

Disk Space Used Percentage on Master (/) 1 Value: value 84 outside of range 0 to 79

13.11001 Status of drive. Drive failures use this code.

Error

Array Disk 9 Status: critical

Array Disk 6 Status: critical

Array Disk 10 Status: critical

Info

Array Disk 8 Status: ok

Array Disk 2 Status: ok

Array Disk 3 Status: ok

Warning

Array Disk 5 Status: nonCritical

Array Disk 4 Status: nonCritical

Array Disk 12 Status: nonCritical

Array Disk 11 Status: nonCritical

14.12002 Interconnect Switch Operational Status.

Error


Operational Status: down

Info

Operational Status: ok

Unknown

Operational Status: unexpected status from device

14.13001 Status errors from switch sensors – Fans, Power Supplies, and Temperature.

Error

Sensor 9 Status: failed: Power Supply #2 -- sensor 9: type 5 is faulty, value is 0

Sensor 8 Status: failed: Power Supply #1 -- sensor 8: type 5 is faulty, value is 0

Info

Sensor 9 Status: ok: Power Supply #2 -- sensor 9: type 5 is OK, value is 1

Sensor 8 Status: ok: Power Supply #1 -- sensor 8: type 5 is OK, value is 1

14.14

Unknown

Interface 0 Description: unexpected snmp value: val_len<= 0

14.14001

Unknown

Interface 0 Status: unexpected status from device

15.2 An error was detected in the SNMP configuration of the host.

Error

Crash files on system: SNMP configuration issue on host

Core files on system: SNMP configuration issue on host

Disk Space Used Percentage on Segment (/data2) Value: SNMP configuration issue on host

Disk Space Used Percentage on Segment (/data1) Value: SNMP configuration issue on host

Cache Device Size: SNMP configuration issue on host

Power Supply Name: SNMP configuration issue on host

Disk Space Used Percentage on Master (/) Value: SNMP configuration issue on host

Power Probe Type: SNMP configuration issue on host

Cooling Device Low critical temp: SNMP configuration issue on host

Virtual Disk Device Name: SNMP configuration issue on host

Network Device IP Address: SNMP configuration issue on host

Power Supply Volts: SNMP configuration issue on host

Array Disk Name: SNMP configuration issue on host

Network Device Status: SNMP configuration issue on host


System Temperature: SNMP configuration issue on host

Processor Status: SNMP configuration issue on host

Memory Device Status: SNMP configuration issue on host

Controller Name: SNMP configuration issue on host

Disk Space Used on Master in megabytes (/data) Value: SNMP configuration issue on host

OS Memory Status: SNMP configuration issue on host

Virtual Disk Read Policy: SNMP configuration issue on host

Virtual Disk Write Policy: SNMP configuration issue on host

Battery Status: SNMP configuration issue on host

Cache Device Status: SNMP configuration issue on host

Power Supply Status: SNMP configuration issue on host

Power Probe Value: SNMP configuration issue on host

Power Probe Name: SNMP configuration issue on host

Virtual Disk State: SNMP configuration issue on host

Controller Battery Status: SNMP configuration issue on host

Cooling Device High critical temp: SNMP configuration issue on host

Controller Battery State: SNMP configuration issue on host

Cooling Device Status: SNMP configuration issue on host

Percentage of idle CPU time: SNMP configuration issue on host

Percentage of user CPU time: SNMP configuration issue on host

Controller Status: SNMP configuration issue on host

RAM available: SNMP configuration issue on host

Network Device Name: SNMP configuration issue on host

Swap space available: SNMP configuration issue on host

Percentage of system CPU time: SNMP configuration issue on host

Swap space total: SNMP configuration issue on host

Cooling Device Name: SNMP configuration issue on host

Upgrade State: SNMP configuration issue on host

RAM total: SNMP configuration issue on host

15.3 Other SNMP-related errors.

Error

Power Supply Name: Got unexpected error looking for snmp OID

Virtual Disk Device Name: Got unexpected error looking for snmp OID

Cooling Device High critical temp: Got unexpected error looking for snmp OID


Controller Name: Got unexpected error looking for snmp OID

Disk Space Used on Segment in kilobytes (/data2) Value: Got unexpected error looking for snmp OID

Crash files on system: Got unexpected error looking for snmp OID

Power Probe Value: Got unexpected error looking for snmp OID

Cooling Device Low critical temp: Got unexpected error looking for snmp OID

Cooling Device Status: Got unexpected error looking for snmp OID

OS Memory Status: Got unexpected error looking for snmp OID

Array Disk Status: Got unexpected error looking for snmp OID

Power Supply Volts: Got unexpected error looking for snmp OID

Power Supply Status: Got unexpected error looking for snmp OID

Network Device Name: Got unexpected error looking for snmp OID

Array Disk Name: Got unexpected error looking for snmp OID

Network Device IP Address: Got unexpected error looking for snmp OID

Cache Device Size: Got unexpected error looking for snmp OID

Controller Status: Got unexpected error looking for snmp OID

Disk Space Used Percentage on Segment (/) Value: Got unexpected error looking for snmp OID

System Temperature: Got unexpected error looking for snmp OID

Battery Status: Got unexpected error looking for snmp OID

Virtual Disk Read Policy: Got unexpected error looking for snmp OID

Disk Space Used Percentage on Segment (/data2) Value: Got unexpected error looking for snmp OID

Cooling Device Name: Got unexpected error looking for snmp OID

Virtual Disk Write Policy: Got unexpected error looking for snmp OID

Virtual Disk State: Got unexpected error looking for snmp OID

Virtual Disk Status: Got unexpected error looking for snmp OID

Memory Device Status: Got unexpected error looking for snmp OID

Network Device Status: Got unexpected error looking for snmp OID

Power Probe Type: Got unexpected error looking for snmp OID

Cache Device Status: Got unexpected error looking for snmp OID

Processor Status: Got unexpected error looking for snmp OID

Power Probe Name: Got unexpected error looking for snmp OID

Core files on system: Got unexpected error looking for snmp OID

Controller Battery State: Got unexpected error looking for snmp OID

Controller Battery Status: Got unexpected error looking for snmp OID

Disk Space Used Percentage on Segment (/data1) Value: Got unexpected error looking for snmp OID


Disk Space Used on Segment in kilobytes (/data1) Value: Got unexpected error looking for snmp OID

Interface Description: Got unexpected error looking for snmp OID

Operational Status: Got unexpected error looking for snmp OID

Disk Space Used on Master in kilobytes (/data) Value: Got unexpected error looking for snmp OID

Disk Space Used Percentage on Master (/data) Value: Got unexpected error looking for snmp OID

Sensor Status: Got unexpected error looking for snmp OID

Interface Status: Got unexpected error looking for snmp OID

15.6 Cannot find the expected OID during an SNMP walk.

Error

Power Supply Name: Data not found for expected snmp OID

Memory Device Status: Data not found for expected snmp OID

Network Device IP Address: Data not found for expected snmp OID

OS Memory Status: Data not found for expected snmp OID

Power Supply Status: Data not found for expected snmp OID

Controller Status: Data not found for expected snmp OID

Power Supply Volts: Data not found for expected snmp OID

Virtual Disk Read Policy: Data not found for expected snmp OID

Virtual Disk Write Policy: Data not found for expected snmp OID

Cooling Device Name: Data not found for expected snmp OID

Cooling Device High critical temp: Data not found for expected snmp OID

Cache Device Size: Data not found for expected snmp OID

Power Probe Value: Data not found for expected snmp OID

Cooling Device Low critical temp: Data not found for expected snmp OID

Network Device Status: Data not found for expected snmp OID

Virtual Disk Status: Data not found for expected snmp OID

Virtual Disk Device Name: Data not found for expected snmp OID

Controller Battery State: Data not found for expected snmp OID

System Temperature: Data not found for expected snmp OID

Virtual Disk State: Data not found for expected snmp OID

Battery Status: Data not found for expected snmp OID

Network Device Name: Data not found for expected snmp OID

Cache Device Status: Data not found for expected snmp OID

Array Disk Status: Data not found for expected snmp OID

Controller Name: Data not found for expected snmp OID


Array Disk Name: Data not found for expected snmp OID

Power Probe Name: Data not found for expected snmp OID

Power Probe Type: Data not found for expected snmp OID

Controller Battery Status: Data not found for expected snmp OID

Cooling Device Status: Data not found for expected snmp OID

Processor Status: Data not found for expected snmp OID

Sensor Status: Data not found for expected snmp OID

Sensor Message: Data not found for expected snmp OID

Processor Device Status: Data not found for expected snmp OID

Sensor Name: Data not found for expected snmp OID

Operational Status: Data not found for expected snmp OID

16 Test Dial Home

Error

EMC Connect Test Error Alert

Info

EMC Connect Test Info Alert

18.17 Sent by healthmond when GPDB status is normal.

Info

GPDB Status: GPDB not running

GPDB Status: ok

18.17001 Sent by healthmond when GPDB cannot be connected to and was not shut down cleanly; possible GPDB failure.

Error

GPDB Status: fe_sendauth: no password supplied

GPDB Status: timeout expired

GPDB Status: FATAL: Upgrade in progress, connection refused

GPDB Status: FATAL: no pg_hba.conf entry for host "172.28.10.250", user "gpadmin", database "template1", SSL off

GPDB Status: FATAL: could not open file "global/pg_database": No such file or directory

Connection Status: Unsuccessful

GPDB Status: FATAL: DTM initialization: failure during startup/recovery, retry failed, check segment status (cdbtm.c:1351)

GPDB Status: FATAL: semctl(7241763, 14, SETVAL, 0) failed: Invalid argument (pg_sema.c:154)

GPDB Status: could not connect to server: No route to host Is the server running on host "mdw" and accepting TCP/IP connections on port 5432?

Info


Connection Status: ok

18.17002 Sent by healthmond when detecting a failed segment.

Error

GPDB Status: One or more segments are down

Count of segments down: 6

Count of segments down: 12

Count of segments down: 1

Count of segments down: 4

Count of segments down: 3

Info

Count of segments down: 0

18.17003 Sent by healthmond when detecting a segment in change tracking.

Error

Count of segments in change tracking: 6

Count of segments in change tracking: 12

Count of segments in change tracking: 1

Count of segments in change tracking: 4

Count of segments in change tracking: 3

Info

Count of segments in change tracking: 0

18.17004 Sent by healthmond when detecting a segment in resync mode.

Error

GPDB Status: One or more database segments are in resync mode

Count of segments in resync mode: 12

Count of segments in resync mode: 24

Count of segments in resync mode: 6

Count of segments in resync mode: 2

Count of segments in resync mode: 22

Count of segments in resync mode: 8

Info

Count of segments in resync mode: 0

18.17005 Sent by healthmond when detecting a segment not in its preferred role (unbalanced cluster).

Error

Count of segments not in preferred role: 12


GPDB Status: One or more database segments is not in its preferred role

Count of segments not in preferred role: 6

Info

Count of segments not in preferred role: 0

18.17006 Sent by healthmond when detecting a move of the master segment from mdw to smdw.

Warning

GPDB has moved to smdw from mdw

18.17007 Sent by healthmond when detecting a move of the master segment from smdw to mdw.

Warning

GPDB has moved to mdw from smdw

18.17008 Sent by healthmond when a query fails during health checking.

Error

GPDB Status: Database mirrors are not in sync with the master

18.17009 Healthmond error querying GPDB State.

Error

GPDB Status: no connection to the server

19.18 ID for informational dial homes with general system usage information.

Info

Informational Dial Home

21.2 Core files were found on the system.

Error

Core files on system: Core files present on system

Info

Core files on system: ok

22.21 Master Node Failover was successful.

Info

Successful

22.21001 The gpactivatestandby command failed during master node failover.

Error

Gpactivatestandby failed

22.21003 Error in bringing the remote (other) master server down during master node failover.

Error

Could not shutdown remote master
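The disk-usage traps (code 12.10011) in the table above follow fixed thresholds: usage of 0 to 79 percent reports Info, 80 to 89 percent reports Warning, and 90 percent or above reports Error. A minimal sketch of that mapping, with thresholds inferred from the sample traps rather than read from DCA configuration (the function name is illustrative):

```python
def classify_disk_usage(percent_used: int) -> str:
    """Map a disk-usage percentage to the trap severity implied by the
    12.10011 examples: <= 79 is Info, 80-89 is Warning, >= 90 is Error.
    Thresholds are inferred from the sample traps, not read from the
    appliance configuration."""
    if percent_used >= 90:
        return "Error"
    if percent_used >= 80:
        return "Warning"
    return "Info"
```

For example, the sample trap reporting 84 percent on the master maps to Warning, and the 93 percent segment value maps to Error.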


View MIB

Issue the following commands from a Master server as the user root:

# MIBS+=GP-DCA-DATA-MIB
# export MIBS
# snmpwalk -v 2c -c public 172.28.4.250 1.3.6.1.4.1.1139.23.1.1.2.2
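snmpwalk prints one varbind per line in the form OID = TYPE: value. The output can be post-processed with a short script; the sketch below runs against embedded sample lines (illustrative stand-ins, not actual GP-DCA-DATA-MIB output):

```python
# Parse snmpwalk-style output into (oid, value) pairs.
# The sample lines are illustrative stand-ins for live appliance output.
sample_output = """\
SNMPv2-SMI::enterprises.1139.23.1.1.2.2.1.0 = STRING: "mdw"
SNMPv2-SMI::enterprises.1139.23.1.1.2.2.2.0 = INTEGER: 1
"""

def parse_snmpwalk(text):
    rows = []
    for line in text.splitlines():
        if " = " not in line:
            continue
        oid, rhs = line.split(" = ", 1)
        # Drop the "TYPE:" prefix and surrounding quotes, if present.
        value = rhs.split(": ", 1)[1] if ": " in rhs else rhs
        rows.append((oid.strip(), value.strip().strip('"')))
    return rows

for oid, value in parse_snmpwalk(sample_output):
    print(oid, "->", value)
```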

Integrate DCA MIB with environment

This section contains information on how to integrate the DCA MIB with an environment. The following information is included:

“Change the SNMP community string”

“Set an SNMP Trap Sink”

Change the SNMP community string

The SNMP community string can be modified through the DCA Setup utility. Changing the SNMP community string through DCA Setup updates all hosts in the DCA. Follow the instructions below to modify the SNMP community string.

The following restrictions apply when modifying the SNMP community string:

The Greenplum Database must be version 4.1.1.3 or later. If the Greenplum Database is a version earlier than 4.1.1.3, the option to modify the SNMP community string will not be available.

If the SNMP community string is modified while running Greenplum Database 4.1.1.3 or later, and the Greenplum Database is then downgraded to a version earlier than 4.1.1.3, the modified SNMP configuration will not function properly, and dial home and health monitoring will be affected.

If the DCA cluster is expanded with new hosts, the new hosts will use the default SNMP configuration, not the modified one. The updated SNMP configuration must be copied from an existing host to the new hosts.
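The 4.1.1.3 minimum above is a simple dotted-version comparison; a hedged sketch (the helper name is illustrative, and the version string would be obtained from the database, for example via SELECT version()):

```python
def snmp_community_change_supported(gpdb_version: str) -> bool:
    """Return True when the Greenplum Database version is 4.1.1.3 or
    later, the minimum required above for modifying the SNMP community
    string through DCA Setup. Dotted versions are compared numerically,
    component by component."""
    parts = tuple(int(p) for p in gpdb_version.split("."))
    return parts >= (4, 1, 1, 3)
```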

1. Open an SSH connection to the Primary Master server and log in as the user root.

2. Start the DCA Setup utility:

# dca_setup

3. Select option 2) Modify DCA Settings.

4. Select option 16) Modify the Health Monitoring Configuration.

5. Select sub option 6) Configure the SNMP Community.

6. Enter the new SNMP community string when prompted.

7. Enter A to apply.

Set an SNMP Trap Sink

Up to 6 SNMP Trap Sink servers can be specified through the DCA Setup utility. Follow the instructions below to set Trap Sink servers:

1. Open an SSH connection to the Primary Master server and log in as the user root.


2. Start the DCA Setup utility:

# dca_setup

3. Select option 2) Modify DCA Settings.

4. Select option 16) Modify the Health Monitoring Configuration.

5. Select option 7) trap hosts.

6. Enter the IP address or qualified name of a trap server at the following prompt:

Please enter a trap server.

7. Specify if you want to add an additional trap server at the following prompt:

Would you like to add another trap host? (Yy|Nn).

8. Enter A to apply the above settings.
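Since DCA Setup accepts at most 6 trap sink servers, a candidate list can be sanity-checked before walking the menus; a minimal sketch (the limit comes from the text above; the validation rules and names are illustrative):

```python
MAX_TRAP_SINKS = 6  # DCA Setup accepts up to 6 trap sink servers

def validate_trap_sinks(hosts):
    """Raise ValueError if the list cannot be used as a DCA trap sink
    configuration; otherwise return it unchanged. Entries may be IP
    addresses or qualified names; only basic sanity checks are made."""
    if len(hosts) > MAX_TRAP_SINKS:
        raise ValueError(f"at most {MAX_TRAP_SINKS} trap sinks allowed")
    for host in hosts:
        if not host or " " in host:
            raise ValueError(f"invalid trap host: {host!r}")
    return hosts
```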


CHAPTER 8 Database and System Monitoring Tools

EMC Data Computing Appliance provides various tools to monitor the status of Greenplum Database as well as the hardware components it runs on. This section contains the following topics:

ConnectEMC Dial Home Capability

Greenplum Database email and SNMP alerting

ConnectEMC Dial Home Capability

The EMC Data Computing Appliance and Data Integration Accelerator support dial home functionality through the ConnectEMC software. ConnectEMC is a support utility that collects and sends event data (files indicating system errors and other information) from EMC products to EMC Global Services customer support. ConnectEMC sends DCA event files using the secure file transfer protocol (FTPS). If an EMC Secure Remote Support Gateway (ESRS) is used for connectivity, HTTPS or FTP are available protocols for sending alerts.

The ConnectEMC software is configured on the DCA master and standby master server and sent out through the external connection (eth1) either to an ESRS Gateway server or directly to EMC.

Dial Home Severity Levels

Alerts that arrive at EMC Global Services can have one of the following severity levels:

WARNING: This indicates a condition that might require immediate attention. This severity creates a service request.

ERROR: This indicates that an error occurred on the DCA. System operation, performance, or both are likely affected. This severity creates a service request.

UNKNOWN: This severity level is associated with hosts and devices on the DCA that are either disabled due to hardware failure or unreachable for some reason. This severity creates a service request.

INFO: This severity level indicates that a previously reported error condition is now resolved. An event with this severity level is also used to provide information about the system that does not require any action. This severity does not create a service request. For example, Greenplum Database startup triggers an INFO alert.

The severity of an event determines whether a service request is created for EMC support to act on. The events listed in the “DCA Error Codes” table below can generate multiple severity levels based on the error condition.
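The mapping from severity to service-request creation described above is small enough to capture directly; a sketch (names are illustrative):

```python
# Whether each dial home severity opens a service request with EMC
# Global Services, per the severity descriptions above.
CREATES_SERVICE_REQUEST = {
    "WARNING": True,
    "ERROR": True,
    "UNKNOWN": True,
    "INFO": False,
}

def creates_service_request(severity: str) -> bool:
    return CREATES_SERVICE_REQUEST[severity.upper()]
```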


For example, if a segment server disk drive fails, Symptom Code 13 is generated with a severity of ERROR. The ConnectEMC software dials home to Global Services customer support, and a service request is created. On successful replacement of the disk drive, Symptom Code 13.11001 is generated again, this time with a severity of INFO to note that the disk drive was replaced.
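The disk-replacement example above shows the general pattern: an ERROR opens a condition for a symptom code, and a later INFO for the same code marks it resolved. A toy tracker sketching that lifecycle (an illustration of the pattern, not documented ConnectEMC behavior):

```python
def apply_event(open_conditions: set, code: str, severity: str) -> set:
    """Update the set of open symptom codes for one dial home event:
    ERROR/WARNING/UNKNOWN opens (or keeps open) the condition, and a
    later INFO clears it, mirroring the disk-replacement example."""
    if severity in ("ERROR", "WARNING", "UNKNOWN"):
        open_conditions.add(code)
    elif severity == "INFO":
        open_conditions.discard(code)
    return open_conditions
```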

ConnectEMC Event Alerts

The following table lists the conditions that cause ConnectEMC to send event data alerts to EMC Global Services.

DCA Error Codes

Code Description

1.1 Host not responding to SNMP calls, host may be down.

1.4 Interface status: could not open session to host 1.

2.15 Greenplum Database is ready to accept connections.

2.15000 GPDB status: sent from inside GPDB, such as panics and GPDB start.

2.15001 GPDB status: could not access the status of a transaction.

2.15002 GPDB status: interrupted in recovery.

2.15003 Two-phase file corrupted.

2.15005 Greenplum Database panic, insufficient resource queues available.

3.2000 Status of power supply. A failed power supply generates an error with this code.

3.2004 Server power supply monitoring (using IPMI).

5.4000 Server status.

5.4001 Status of cooling device, e.g., fan failure.

5.4002 Temperature of system.

6.5001 Status check of a CPU. CPU failure will register here.

9.7000 Memory device status. A failed memory device generates this code.

10.8003 Status of the network device.

10.8005 A configured network bond is unavailable.

10.8006 Network bonding on master servers: The bond interface has no active link/slave.

10.8007 Network bonding on master servers: The bond interface link/slave has changed.

10.8008 Network bonding on master servers: The bond interface links are all down.

10.8009 Network bonding on master servers: One of the bond interface links is down.

11.9001 Status of IO Controller.

12.10002 Virtual Disk Status: one of the configured drives has failed or is offline.

12.10004 Virtual disk size (MB).

12.10005 Write cache policy on virtual disk. The expected setting is write-back mode.

12.10007 Detects offline, rebuilding RAID, and other unexpected virtual disk states.


12.10011 The percentage of disk space used on the virtual disk has exceeded the error threshold.

Example:

mdw: 12.10011 : Disk space used 2%: error: one or more disk usage exceed error threshold, System is configured to operate under 1% disk capacity.

12.10014 Virtual disk space used (KB).

13.11001 Physical disk needs to be replaced. Slot number and capacity of the disk are indicated.

Example:

mdw: 13.11001: Physical Disk slot 6 Status: warning: unconfigured-good: Dev Id 6 : Adp Id 0 : Size 279 GiB

14.12002 Interconnect Switch Operational Status.

14.12008 Switch thermal status (V2).

14.12009 Switch power supply status 1.

14.12010 Switch power supply status 2.

14.12011 mLAG status with peer switch

14.12012 mLAG status of port 1

14.12013 mLAG status of port 2

14.12014 mLAG status of port 3

14.12015 mLAG status of port 4

14.12016 mLAG status of port 5

14.12017 mLAG status of port 6

14.12018 mLAG status of port 7

14.12019 mLAG status of port 8

14.12020 mLAG status of port 9

14.12021 mLAG status of port 10

14.12022 mLAG status of port 11

14.12023 mLAG status of port 12

14.12024 mLAG status of port 13

14.12025 mLAG status of port 14

14.12026 mLAG status of port 15

14.12027 mLAG status of port 16

14.12028 mLAG status of port 17 (mdw)

14.12029 mLAG status of port 18 (smdw)

14.12030 mLAG status of port 19

14.12031 mLAG status of port 20

14.12032 mLAG status of port 21

14.12033 mLAG status of port 22

14.12034 LAG status


14.13007 Status errors from switch sensors – Fans (V2).

14.14000 Interface 0 Description: unexpected snmp value: val_len<=0.

14.14001 Interface 0 Status: unexpected status from device.

15.20000 An error is detected in the SNMP configuration of the host. Indicates an issue with the IP address setting in the SNMP configuration.

15.30000 Other SNMP-related errors.

15.40000 Connection aborted by SNMP.

15.50000 Unexpected SNMP errors from the SNMP system libraries.

15.60000 Cannot find the expected OID during an SNMP walk.

16.00000 Test Dial Home.

18.15000 Sent from inside GPDB when starting up.

18.15001 Sent from inside GPDB when GPDB could not access the status of a transaction.

18.15002 Sent from inside GPDB when interrupted in recovery.

18.15003 Sent from inside GPDB when a 2 phase file is corrupted.

18.15004 A test message sent from inside GPDB.

18.15005 Sent from inside GPDB when hitting a panic.

18.17000 Sent by healthmond when GPDB status is normal.

18.17001 Sent by healthmond when GPDB cannot be connected to and was not shut down cleanly; possible GPDB failure.

18.17002 Sent by healthmond when detecting a failed segment.

18.17003 Sent by healthmond when detecting a segment in change tracking.

18.17004 Sent by healthmond when detecting a segment in resync mode.

18.17005 Sent by healthmond when detecting a segment not in its preferred role (unbalanced cluster).

18.17006 Sent by healthmond when detecting a move of the master segment from mdw to smdw.

18.17007 Sent by healthmond when detecting a move of the master segment from smdw to mdw.

18.17008 Sent by healthmond when a query fails during health checking.

18.17009 Healthmond error querying GPDB State.

18.17010 Database starts (informational only).

18.17011 Database stops (informational only).

19.18000 ID for informational dial homes with general system usage information.

21.20000 Core files were found on the system.

21.20001 Linux kernel core dump files were found on the system - indicates a crash and reboot.

21.20002 GPDB (PostgreSQL) core dump files were found on the system - indicates a crash and reboot.

22.21000 Master Node Failover was successful.

22.21001 The gpactivatestandby command failed during master node failover.


22.21002 Greenplum Database is not reachable after the failover.

22.21003 Error in bringing the remote (other) master server down during master node failover.

22.21004 Error in taking over the remote (other) master server IP.

22.21005 Unknown error in failover.

23.22002 Host did not complete upgrade within the specified timeout period. Timeout period is 12 hours by default unless set in /opt/dca/etc/healthmond/healthmond.cnf.

Greenplum Command Center

Greenplum Command Center allows administrators to collect query and system performance metrics from a running Greenplum Database system. Monitor data is stored within Greenplum Database.

Greenplum Command Center is comprised of data collection agents that run on the master host and each segment host. The agents collect performance data about active queries and system utilization and send it to the DCA master at regular intervals. The data is stored in a dedicated database on the master, where it can be accessed using the Greenplum Command Center web application or SQL queries.

Greenplum Command Center is a browser-based application that administrators can use to view active and historical query and system metrics stored in the gpperfmon database. By default, Greenplum Command Center is installed on the Greenplum Database master host using HTTP port 28080. It can be accessed through a browser using a URL, such as http://masterhostname.companydomain.com:28080. Before you can log in to Greenplum Command Center, your Greenplum Database administrator must assign you a username and password. For instructions on granting access, see the Greenplum Command Center Administrator Guide at gpdb.docs.gopivotal.com.

Pivotal Command Center

Pivotal Command Center allows an administrative user to administer and monitor one or more Pivotal HD clusters. The Command Center has command-line tools to deploy and configure Pivotal HD clusters, as well as an intuitive graphical user interface (GUI) designed to help the user view the status of the clusters and take appropriate action. This release of Command Center allows only administering and monitoring of Pivotal HD Enterprise 1.0.x clusters.

Pivotal Command Center 2.0.x is comprised of the following:

• Pivotal Command Center UI

• Pivotal HD Manager

• Performance Monitor (nmon)

PCC User Interface

The PCC UI provides the user with a single web-based GUI to monitor and manage one or more Pivotal HD clusters. The web application is built on Ruby on Rails and presents the status and metrics of the clusters. This data comes from multiple sources: all of the Hadoop-specific data comes from the Pivotal HD Manager component, and the system metrics are gathered by the Performance Monitor (nmon) component. The UI can be accessed through a browser using a URL such as http://masterhostname.companydomain.com:5000/.

For more details and instructions about Pivotal Command Center, see the Pivotal Command Center 2.x Installation Guide.

Greenplum Database email and SNMP alerting

The Greenplum Database system can be configured to trigger SNMP alerts or send email notifications to system administrators when certain database events occur. These events can include fatal server errors, segment shutdown and recovery, and database system shutdown and restart.

CHAPTER 9
General Database Maintenance Tasks

Like any database management system, Greenplum Database requires that certain tasks be performed regularly to maintain optimum performance. The tasks discussed here are required, but because they are repetitive they can easily be automated using standard UNIX tools such as cron scripts. It is the database administrator’s responsibility to set up appropriate scripts and see that they run successfully. This section contains the following topics:

Routine Vacuum and Analyze

Routine Reindexing

Managing Greenplum Database Log Files

Routine Vacuum and Analyze

Because of the multi-version concurrency control (MVCC) transaction model used in Greenplum Database, data rows that are deleted or updated still occupy physical space even though they are not visible to any new transactions. If you have a database with many updates and deletes, you will generate a large number of expired rows. Running the VACUUM SQL command reclaims this disk space. The VACUUM command also collects table-level statistics, such as the number of rows and pages, so it is necessary to periodically run VACUUM on all tables.

Transaction ID Management

Greenplum Database’s MVCC transaction semantics must be able to compare transaction ID (XID) numbers to determine visibility to other transactions. However, since transaction IDs have limited size, a system that runs the Greenplum Database for a long time (more than four billion transactions) would suffer transaction ID wraparound: the XID counter wraps around to zero, so that transactions that occurred in the past appear to occur in the future, which means their outputs become invisible. To avoid this, you must run VACUUM on every table in every database at least once every two billion transactions. For more information, see the Pivotal™ Greenplum Database® Administrator Guide at gpdb.docs.gopivotal.com.
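The urgency of this maintenance can be monitored with standard PostgreSQL catalog columns, which are also present in Greenplum Database. A hedged example query (run it from any database):

```sql
-- age(datfrozenxid) grows toward the wraparound limit (about two billion);
-- vacuum the databases with the largest values first.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```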

System Catalog Maintenance

System performance can be affected by numerous database updates with the CREATE and DROP commands, which can cause the system catalog to grow. For example, after a large number of DROP TABLE statements, the overall performance of the system can degrade due to excessive data scanning during metadata operations on the catalog tables. Depending on your system, the performance loss can be caused by thousands to tens of thousands of DROP TABLE statements.

EMC recommends that you periodically run VACUUM on the system catalog to clear the space occupied by deleted objects. If numerous DROP statements are a part of your regular database operations, you can safely run a system catalog maintenance procedure with VACUUM at off-peak hours every day. This can be done while the system is running and available.
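Because this procedure is repetitive, it is a natural candidate for cron. A hypothetical crontab entry for the gpadmin user follows; the script path and the 2:00 AM schedule are assumptions, not DCA defaults:

```
# Run the catalog VACUUM script nightly at 2:00 AM, appending output to a log.
0 2 * * * /home/gpadmin/bin/vacuum_catalog.sh >> /home/gpadmin/gpAdminLogs/vacuum_catalog.log 2>&1
```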

The following sample script performs a VACUUM of the Greenplum Database system catalog:

#!/bin/bash
DBNAME="<database_name>"
VCOMMAND="VACUUM ANALYZE"
psql -tc "select '$VCOMMAND' || ' pg_catalog.' || relname || ';' from pg_class a, pg_namespace b where a.relnamespace=b.oid and b.nspname='pg_catalog' and a.relkind='r'" $DBNAME | psql -a $DBNAME

Vacuum and Analyze for Query Optimization

Greenplum Database uses a cost-based query planner that relies on database statistics. Accurate statistics allow the query planner to better estimate selectivity and the number of rows retrieved by a query operation in order to choose the most efficient query plan. The ANALYZE command collects column-level statistics needed by the query planner.

Both VACUUM and ANALYZE operations can be run in the same command. For example:

=# VACUUM ANALYZE mytable;

Routine Reindexing

For B-tree indexes, a freshly constructed index is somewhat faster to access than one that has been updated many times, because logically adjacent pages are usually also physically adjacent in a newly built index. It might help to reindex periodically to improve access speed. Also, if all but a few index keys on a page have been deleted, there is wasted space on the index page that a reindex can reclaim. In Greenplum Database it is often faster to drop an index (DROP INDEX) and recreate it (CREATE INDEX) than to use the REINDEX command.

Bitmap indexes are not updated when changes are made to the indexed columns. If you have updated a table that has a bitmap index, you must drop and recreate the index for it to remain current.
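For example, to refresh a bitmap index after updating its table (the table and index names below are hypothetical):

```sql
-- Drop and recreate the bitmap index so it reflects the updated rows.
DROP INDEX customer_gender_bmp_idx;
CREATE INDEX customer_gender_bmp_idx ON customer USING bitmap (gender);
```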

Managing Greenplum Database Log Files

This section contains the following topics:

Database Server Log Files

Management Utility Log Files

Database Server Log Files

Greenplum Database log output tends to be voluminous, especially at higher debug levels, and you do not need to save it indefinitely. Administrators need to rotate the log files periodically so that new log files are started and old ones are removed after a reasonable period of time.

Greenplum Database has log file rotation enabled on the master and all segment instances. Daily log files are created in pg_log on the master and in each segment data directory using the naming convention gpdb-YYYY-MM-DD.log. Though log files roll over daily, they are not automatically truncated or deleted. Administrators must implement a script or program to periodically delete old log files in the pg_log directory of the master and each segment instance.
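A minimal cleanup sketch in bash: the pg_log path and the 14-day retention window are assumptions to adapt to your site's policy, and the same pattern applies to the pg_log directory of each segment data directory.

```shell
#!/bin/bash
# Delete rotated GPDB server logs older than RETENTION_DAYS.
# LOG_DIR below is a placeholder for the master data directory's pg_log.
LOG_DIR="${LOG_DIR:-/data/master/gpseg-1/pg_log}"
RETENTION_DAYS="${RETENTION_DAYS:-14}"
if [ -d "$LOG_DIR" ]; then
    # Match the daily naming convention gpdb-YYYY-MM-DD.log described above.
    find "$LOG_DIR" -type f -name 'gpdb-*.log' -mtime +"$RETENTION_DAYS" -print -delete
fi
```

Scheduling this script from cron on the master (and via ssh or gpssh on each segment host) satisfies the periodic-deletion requirement described above.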

Management Utility Log Files

By default, log files for the Greenplum Database management utilities are written to ~/gpAdminLogs, the home directory of the gpadmin user. The naming convention for management log files is <script_name>_<date>.log.

The log file for a particular utility is appended to its daily log file each time that utility is run. Administrators need to implement a script or program to periodically clean up old log files in ~/gpAdminLogs.

For information on connecting to a Greenplum Database system running on the DCA, see the Pivotal™ Greenplum Database® Administrator Guide available at http://docs.pivotal.io/.

CHAPTER 10
Utility Reference

This chapter contains reference information about utilities that pertain to the DCA.

dca_setup

The dca_setup utility is an administration tool used to install, upgrade, and modify settings on a Data Computing Appliance (DCA). EMC recommends using the dca_setup utility rather than modifying the Linux configuration files directly, for the following reasons:

• Changes made through the dca_setup utility automatically take care of dependencies that may exist. For example, if a hostname is changed by a method other than the dca_setup utility, some files may not be updated appropriately with the new hostname. These naming inconsistencies can lead to problems during configuration and upgrade processes.

• Operations through the dca_setup utility are recorded for audit purposes.

• The dca_setup utility is the EMC recommended and supported administration tool. Not using it could invalidate the customer’s support warranty.

Synopsis

• Run dca_setup in interactive mode:
  dca_setup

• Display the online help:
  dca_setup --help

• Run dca_setup in batch mode:
  dca_setup [ --config USER_CONFIG_FILE | -c USER_CONFIG_FILE ] [-a]

• Run dca_setup in development mode:
  dca_setup { --dev | -d } [ --rackhosts HOST_PER_RACK | -r HOST_PER_RACK ] [ --offline | --o ]

• Start the Switch and VLAN Settings menu:
  dca_setup [ --switch | -s ]

• Reset default configuration values:
  dca_setup [ --update | -u ]

Description

The dca_setup utility was implemented starting in DCA software version 1.1.0.0 to automate common tasks performed on a DCA. The dca_setup utility can be run in an interactive mode (default) or as a batch script using a configuration file. Using a configuration file is best done when there are multiple changes to be made, such as configuration of custom IP addresses in a VLAN overlay. See Automating dca_setup using a configuration file.

Usage:

Must be run as the user root

Must be run from the Primary or Standby Master server

Only one instance can run at a time.

Navigation is done by entering the numeric option or sub-option and pressing ENTER.

Use B to navigate back one level.

Use X to exit.

Use A to apply changes.

Options

--help

Display the online help.

--config USER_CONFIG_FILE | -c USER_CONFIG_FILE

Specify a configuration file for a batch mode. This option is best used when making multiple changes, such as assigning custom IP addresses to multiple hosts for a VLAN overlay.

-a

Use this parameter with the --config option to perform unattended changes. With this option, dca_setup does not prompt for confirmation before performing an action.

--dev | -d

Run dca_setup in development mode.

--rackhosts HOST_PER_RACK | -r HOST_PER_RACK

Set the number of hosts per rack (normally 16). Must be run in development mode.

--offline | --o

Run dca_setup in offline mode. This mode is for development only. It keeps the DCA from connecting or copying files to other hosts.

--switch | -s

Start the Switch and VLAN Settings menu. Use this option if you cannot perform an installation due to switch configuration issues.

--update | -u

Reset default configuration values.

Examples

Run dca_setup in interactive mode:

# dca_setup

Run dca_setup with a configuration file, in unattended mode:

# dca_setup -c my_config.cnf -a

Available operations

The following operations are available through the dca_setup Main menu:

1. Install DCA

2. Modify DCA Settings

3. Host Inventory

4. Application Management (PHD-Application-Suite and PHD-Isilon)

5. RPQ Management

Install DCA

This option installs a new DCA with interactive options, and configures the GPDB, DIA, and PHD modules.

Modify DCA Settings

Once DCA installation is completed, a variety of features can be enabled and modified by selecting this option.

Table 33 lists the operations available under this option:

Table 33 Modify DCA Settings

Option Sub Option Description

Regenerate Hostfiles Generate or regenerate hostfiles while ensuring that connectivity between all components is operating as expected. Switches are also checked and validated for connectivity. Files are created in /home/gpadmin/gpconfigs. Health monitoring is configured with the hosts entered in this option. For more information about this option, see Section “Regenerate Hostfiles” on page 132.

Set System Information Set the DCA name, description, location, and contact information. After you set a value for the name, description, location, or contact information, enter A to make the setting permanent.

Set DCA Locale Set the locale of the DCA. Available locales can be displayed by using the locale -a command.

Set DCA Timezone Set time zone of the DCA.

Modify NTP/Clock Configuration Options

1) Reset the NTP configuration to the default settings and synchronize clocks across the cluster

Modify settings related to the time/clocks on DCA servers. Reset configuration to default settings.

2) Add external NTP servers and synchronize clocks across the cluster.

Add customer timeserver.

3) Synchronize clocks across the cluster to the NTP server (can take several minutes).

Synchronize time across all hosts in the DCA.

4) Quick Clock Synchronization (synchronize clocks across the cluster to the clock on the Master Server).

Synchronize all nodes' clocks in the cluster to the master's clock. The difference between options 3 and 4: option 3 first synchronizes the master clock with the external NTP server, and then the other nodes synchronize with the master. Option 4 skips the first step and can finish faster.

Generate SSH Keys Generate SSH keys across all servers in the DCA. This operation should be used any time new hardware is added to the DCA. You need passwords for the root, gpadmin, and switch components within the cluster to perform this action.

Change Passwords Change the gpadmin, root, BMC, and Switch passwords across all servers in the DCA. Save your most current passwords; EMC Customer Support needs them when diagnosing issues as well as retrieving logs.

Initialize the GPDB Master and Standby Master

Initialize the GPDB installation on both the Master and Standby Master servers.

Initialize/Synchronize the GPDB Standby Master

Initialize a Standby Master server for an existing Greenplum Database instance, and ensure the Standby Master server is in sync with the currently running Master server.

Expand the DCA In some cases it is required to expand the DCA to improve performance or increase capacity. Use this option to add GPDB, DIA, Hadoop, or Hadoop Compute servers to a DCA. Values must be entered in increments of 4 (modules). Once the expansion completes, further validation operations need to be completed. For details, see the EMC Data Computing Appliance Implementation Guide Appliance Version 2.x / DCA Software Version 2.1.0.0.

Modify the Master Servers' External Network Settings

Set NIC bonding and the external IP address/gateway/netmask for the Primary and Standby Master, and optionally set a Virtual IP (provided by the customer) on the Primary Master server (used in failover). This option also enables the use of two external connections per master. For details, see the EMC Data Computing Appliance Implementation Guide Appliance Version 2.x / DCA Software Version 2.1.0.0. If two connections are enabled, make sure that both cables are connected; otherwise, dial home messages can be generated indicating that one cable is missing.

Modify the DNS Settings This setting is used to modify the DNS forwarding parameters on the DCA. DNS forwarding is the process by which particular sets of DNS queries are handled by a designated server, rather than the initial server contacted by the client. Ensure the customer is consulted prior to setting this variable.

Switch and VLAN Settings A VLAN is configured on each switch so that ports connected to servers that need external access carry both the internal VLAN (4) and the overlay VLAN (2000). The ports to which the external cables are connected are given only the overlay VLAN. Also use this option to change certain settings, for instance passwords, at the switch layer.

1) Change the Switch Passwords

Changes the switch password. Keep the new password in a safe place in case customer support requires access and the default password is not accepted.

2) Upload the Switch Configurations

Uploads the default switch configuration already loaded onto the mdw & smdw servers.

3) Verify and Fix the Switch Configurations

Run checks on the switch setup in the DCA.

4) Download the Switch Configurations

Downloads all of the available switch configurations to a user-specified directory. The specified directory must exist prior to executing this step. This process downloads the running-config and startup-config files. File names appear as <switch hostname>.startup-config and <switch hostname>.running-config.

5) Add/Enable Customer VLAN

External resources are considered directly connected when one or more non-DCA servers are cabled directly to the DCA switches. These sorts of connections do not technically require a VLAN Overlay, but the standard practice is to establish one. Doing so makes it easier to expand to more complex connections later if needed, and ensures a supportable configuration.

Once selected, the following prompt appears:

Please enter the full path for the VLAN IP Map File. If no file exists, please press CTRL-C, exit dca_setup, and create a file as described below.

This file should contain all the hosts in the VLAN followed by their VLAN IP address.

For example, a quarter rack VLAN Map File may look like:

mdw = 10.5.6.143
smdw = 10.5.6.164
sdw1 = 10.5.6.165
sdw2 = 10.5.6.169
sdw3 = 10.5.6.155
sdw4 = 10.5.6.145

You can find an example file here:

/opt/dca/var/dca_setup/customer_vlan_map_example

For more details and examples of setting up a VLAN on both a single and expanded rack setup, see the EMC Data Computing Appliance Security Configuration Guide Appliance Version 2.x / DCA Software Version 2.1.0.0.

Modify Hostnames 1) Modify the Hostnames from a Menu

Use a menu to set custom hostnames of the servers in a DCA.

2) Modify the Hostnames from a File

Use the Hostname Map File to set custom hostnames of the servers in a DCA.

3) Reset all the Hostnames to their default value

Set hostnames in /etc/hosts to their default values, and copy /etc/hosts to the entire cluster.

4) Add a Non DCA Hostname Set a hostname in /etc/hosts for a server that is not part of the DCA.

Configure Security Settings 1) Pre-Login System Message You can set up a custom system message that is displayed pre-login across all servers in the DCA. You must create a plain-text file with the content beforehand.

2) Post-Login System Message

You can set up a custom system message that is displayed post-login across all servers in the DCA. You must create a plain-text file with the content beforehand.

3) Configure Auditd Service Use this to enable/disable the auditd service. The auditd service is disabled by default. The auditd service is a component of the Linux Auditing System that is responsible for writing audit records to disk. By default, the service audits security-relevant events such as system logins, account modifications, and authentication events performed by programs such as sudo. The audit service, configured with at least its default rules, is strongly recommended for all sites. Note that comprehensive auditing might affect your system's performance. After enabling/disabling the auditd service you must reboot the servers using the Security Settings menu option 9: Reboot Remote Servers.

Note: Once you have enabled the auditd service, you must configure and start it manually. Refer to the EMC Data Computing Appliance Security Configuration Guide Appliance Version 2.x / DCA Software Version 2.1.0.0 for details.

4) Configure FIPS mode Use this to enable/disable FIPS mode. FIPS mode is disabled by default. Note that after changing the FIPS mode you must reboot the servers using the Security Settings menu option 9: Reboot Remote Servers.

5) Check FIPS Mode Status After selecting this option, you will see several status messages appear, including one that displays the current FIPS mode states (disabled or enabled).

6) Configure SSH Port Forwarding

Use this option to display the current state (enabled/disabled) of SSH Port Forwarding, and to enable or disable SSH Port Forwarding.

7) Configure Secure Permissions

This option is used for DCA security hardening. Secure permissions are disabled by default. Use this option to enable or disable secure permissions. When secure permissions are enabled, certain files on the DCA have more secure file permissions. See Section “Configure Secure Permissions” on page 133 for the impact on file access.

8) Configure Enhanced Security Login

Use this option to enhance the security of logins, for example by specifying a minimum password length. See the EMC Data Computing Appliance Security Configuration Guide Appliance Version 2.x / DCA Software Version 2.1.0.0 for more details.

Configure Security Settings (cont’d)

9) Reboot Remote Servers Use this option to reboot all servers after the following operations:
• Changing the FIPS mode
• Enabling/disabling the auditd service
• Expanding your system. Note that this option reboots all hosts, not only the newly added ones.

Modify the Health Monitoring Configuration

1) Enable Connect EMC (dial homes)

Enable or disable ConnectEMC (dial home) operation.

2) Enable periodic licensing reports

Turn on or off periodic licensing report call home.

3) Enable periodic dial home Turn periodic dial home on or off. Periodic dial home gathers basic system information and status and sends them to the EMC Support Center for record keeping and diagnostic purposes. If enabled, this information is sent on a weekly basis.

4) Configure disk space warning percentage

Change the threshold at which a warning notification is generated. The default setting is 80%.

5) Configure disk space error percentage

Change the threshold at which an error notification is generated. The default setting is 90%.

6) Configure the SNMP Community

Set the SNMP Community String. Default setting is public. For information about the correct Community String to use, see the applicable Greenplum Database Administrator Guide.

7) Trap hosts Set 1 to 6 trapsink server hostnames. The hosts specified will be the recipients of trap messages from the DCA.

Regenerate Hostfiles

During the Install DCA operation, generic hostfiles are created based on the user's input. These hostfiles are used in different ways within the cluster, but at times they may need to be regenerated.

Using the Regenerate Hostfiles option will regenerate these files while also ensuring that connectivity between all components is operating as expected.

Switches, both interconnect and admin, are also checked and validated for connectivity.

In the following example, this option is executed on a ¼ rack GPDB cluster.

root Connectivity Test Complete

Testing switch Connectivity for hosts i-sw-1 and i-sw-2
Switch Connectivity Test Complete

Generate Host and Device Files Action: Checking the Prerequisites for Generate Host and Device Files Action

Generate the Cluster Map Action: Checking the Prerequisites for Generate the Cluster Map Action

** Running Actions **

Regenerate PXE Boot Configuration Files on All Servers

This provides the user with two modes of operation for setting up PXE boot configurations on the DCA: a full image or a partial image. When selected, the following prompt appears:

***************************************
** What mode do you want to use for PXE booting?
***************************************
** 1) Full Image (erases all existing data)
*  2) Partial Image (preserves data directories)
***************************************
What mode do you want to use for PXE booting?
Default = Full Image. Press Enter to use this setting or type new setting.
>>

When a PXE boot is required on a server within the DCA, it defaults to the Full Image option unless the Partial Image option is specified here. The Partial Image option preserves the existing data directories on the node being PXE booted.

Light Bar Controls The light bar control can be used for server identification within a DCA rack. Use this option to turn on, turn off, or blink the DCA door light bar and a server LED. For this operation to function, a valid cluster must have been installed and the DCA initialized.

Enable/Disable Master Server Auto Failover

This option only shows on a Master server in a GPDB cluster. If enabled, when the GPDB master process goes down, the Standby Master automatically takes over. Master server auto failover is enabled by default.

Regenerate Hostfiles Action: Starting to run Generate Host and Device Files Action

Generate Host and Device Files Action: /home/gpadmin/gpconfigs/hostfile_gpdb generated on mdw and smdw

Generate Host and Device Files Action: /home/gpadmin/gpconfigs/hostfile generated on mdw and smdw

** Running Prerequisite Checks **
Testing root Connectivity for hosts sdw1, sdw2, sdw3, sdw4, and smdw

Generate Host and Device Files Action: /home/gpadmin/gpconfigs/hostfile_dia generated on mdw and smdw

Generate Host and Device Files Action: /home/gpadmin/gpconfigs/hostfile_hadoop generated on mdw and smdw

Generate Host and Device Files Action: /home/gpadmin/gpconfigs/hostfile_hdm generated on mdw and smdw

Generate Host and Device Files Action: /home/gpadmin/gpconfigs/hostfile_hdw generated on mdw and smdw

Generate Host and Device Files Action: /home/gpadmin/gpconfigs/hostfile_hdc generated on mdw and smdw

Generate Host and Device Files Action: /home/gpadmin/gpconfigs/hostfile_segments generated on mdw and smdw

Generate Host and Device Files Action: /opt/dca/etc/healthmond/devices.cnf generated on mdw and smdw

Generate Host and Device Files Action: /opt/dca/etc/healthmond/devices.cnf copied to segments

Regenerate Hostfiles Action: Finished running Generate Host and Device Files Action

Regenerate Hostfiles Action: Starting to run Generate the Cluster Map Action

Generate the Cluster Map Action: Generating the cluster map.
Generate the Cluster Map Action: Finding MAC addresses of all hosts
Generate the Cluster Map Action: Successfully written cluster map into /opt/dca/var/clustermap
Regenerate Hostfiles Action: Finished running Generate the Cluster Map Action
Restarting Healthmon and snmpd on the masters
Selections Successfully Completed

Configure Secure Permissions

When Secure Permissions is enabled, certain files on the DCA have more secure file permissions. The file permissions for the following files change based on the Secure Permissions setting:

Table 34 File Permissions

File  Enabled  Disabled

/etc/sysconfig 700 755

/etc/login.defs 600 644

/etc/ntp.conf 600 644

/etc/security/access.conf 600 644

/opt/dca/bin/gdb 700 755


Automating dca_setup using a configuration file

dca_setup operations can be run in an automated batch mode using a configuration file. You can automate the following operations:

• Generating and regenerating the DCA host files

• Initializing GPDB in a standard way

• Adding GPDB, DIA or GP HD servers to a DCA

• Modifying the GPDB Segment Network Settings

Sample configuration files

Sample configuration files are located in the /opt/dca/var/dca_setup/ directory:

Using a configuration file

Perform the following steps to customize a configuration file and run dca_setup with it:

1. Edit a copy of the file:

# cp /opt/dca/var/dca_setup/sample_config.cnf config_copy.cnf
# vi config_copy.cnf

Table 34  File Permissions (continued)

File                        Secure Permissions enabled   Secure Permissions disabled
/usr/bin/gdb                700                          755
/usr/sbin/tcpdump           700                          755
/var/lib/nfs                750                          755
/var/log/lastlog            600                          644
/var/log/wtmp               660                          664
/var/log/wtmpx              660                          664
/var/run/utmp               660                          664
/var/run/utmpx              660                          664
/var/spool/mail             700                          775

Table 35  File Options in dca_setup

File                    dca_setup Options
dca_setup.cnf           Option 1 Regenerate DCA Config Files; Option 9 Expand DCA; Option 13 Networking: Segments
custom_gpdb_port.cnf    Option 7 Initializing GPDB
networking_sample.cnf   Option 13 Networking: Segments


2. Remove the comment marker (#) from a line to make it active.

3. Select the parameters at the top of the file, then select the associated action to perform in the # ACTIONS section.

4. Execute dca_setup with the configuration file:

# dca_setup -c config_copy.cnf
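The four steps above can be captured in a small wrapper. This is a minimal sketch: the sample-file path comes from this guide, but build_batch_cmd is a hypothetical helper that only assembles the command line, since dca_setup itself exists only on a DCA master.

```shell
# Assemble the batch-mode dca_setup command for a customized configuration copy.
# build_batch_cmd is a hypothetical helper, not part of the DCA tooling.
build_batch_cmd() {
    printf 'dca_setup -c %s' "$1"
}

# Step 1 (on the appliance): work on a copy, never on the shipped sample:
#   cp /opt/dca/var/dca_setup/sample_config.cnf config_copy.cnf
# Steps 2-3: uncomment the parameters and exactly one ACTION in the copy.
# Step 4: run the assembled command as root.
build_batch_cmd config_copy.cnf   # prints: dca_setup -c config_copy.cnf
```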

Host Inventory

This option provides a mechanism to gather and present hardware information on a DCA server.

To display hardware information on a DCA server:

1. Type the option number, 4, to select this option. A list of servers in the current configuration is displayed.

2. Select a server, for example, the Primary Master server, mdw. Instead of selecting a server, you can select the Generate system report option, which checks connectivity for the Bond0 interface, eth0, SSH, IPMI, and the swapp client, and displays the hardware information for all servers in the current configuration. A system report, dca_report.html, is generated in the /tmp directory.

Note: If the IPMI/SSH password has been changed from the default value, the corresponding connectivity check will fail.

3. If you selected a server in step 2, select an option in Table 36 to display the hardware information on the server.

Table 36 Host Inventory

Option Description

Summary Lists information about the DCA model, CPU, memory, disk, OS version, and DCA version.

CPU  For each CPU, displays the following information:
• model - The CPU model
• cores - The number of CPU cores
• speed - The clock speed in megahertz
• bogomips - A rough measurement of CPU speed used by the Linux kernel

DIMM  Displays the following information about the DIMMs in all slots, including empty slots:
• Product name
• Vendor name
• Serial number
• Size


Application Management

Several Pivotal applications, such as the PHD Application Suite and PHD-Isilon, run on the DCA platform. The Application Management option provides a set of clean interfaces so that new applications can be easily integrated into the DCA.

To deploy and manage Pivotal applications:

1. Type the option number, 4, to select this option.

2. Type the option number, 1, to list available applications. A list of applications in the current DCA ISO package is displayed.

3. Select an application, for example, PHD-Application-Suite.

RAID Controller  Displays the following information about the RAID Controller:
• description - The description of the RAID Controller
• address - The address of the RAID Controller
• revision - The revision number
• raidlevels - The RAID levels of the disk drives
• drivetypes - The Controller type, for example, SAS and SATA
• vdiskcount - The number of virtual disks
• pdiskcount - The number of physical disks
• bios - The BIOS version of the Controller
• firmware - The firmware version of the Controller

NIC  Lists the following information about the Network Interface Card:
• Vendor
• Product name
• Description
• Serial number
• Class
• Bus
• Handle
• Firmware
• Driver
• Driver version

Physical Disk  Lists the following information about each physical disk:
• Drive type
• Size
• Media type
• Serial number
• Slot number
• State

Virtual Disk  Displays the following information about each virtual disk:
• Disk name
• RAID level
• Size
• Number of drives
• Strip size
• State



4. Select a version of the application, for example, 2.1.0.

5. Select an option in Table 37 to deploy or manage the application:

Table 37 Application Management

Option Sub Option Description

Deployment  1) Install  Deploys the application instance on the DCA cluster. A hash sign (#) appears next to the installed application; you must exit and restart dca_setup after installation for the hash sign to appear. The hash sign is removed when the application is removed.

2) Verify Health  Performs a health check on the deployed services by running a series of jobs to verify that Hadoop, HAWQ, and any other deployed services work.

3) Uninstall Uninstalls the application instance on the DCA cluster.

Warning: Uninstalling an application forcibly removes the existing cluster and its data. Use caution when selecting this option.

4) Change Configuration Reconfigures the application instance by resetting some of the parameters set during the initial installation of an application.

5) Add Services Adds an optional service such as HAWQ.

6) Remove Services Removes optional services that are installed on the DCA.

Operations 1) Start Services Starts all deployed services. Use icm_client or PCC to start individual services.

2) Stop Services Stops all deployed services. Use icm_client or PCC to stop individual services.

3) Get Status Retrieves the status (UP, DOWN, NOTINSTALLED or UNKNOWN) of services running on the DCA cluster.


Manage RPQ

A request for product qualification, or RPQ, is a request for a customized configuration of a DCA. The Manage RPQ option makes it easy to document, update, and display RPQ records.

Table 38 lists the operations available under this option:

dca_shutdown
Safely power off all servers in a DCA.

Synopsis

dca_shutdown { -f hostfile | -h hostname } [ --ignoredb ] [ --password=password ] [ --passfile=password_file ] [ --statusonly ]

dca_shutdown
dca_shutdown --help

Description

The dca_shutdown utility safely powers down all servers in a DCA. The utility can be run with no parameters, in which case it uses the system inventory generated by DCA Setup during an installation or a Regenerate DCA Config Files operation. If the utility is run with a hostfile or hostname specified, only those hosts are shut down. This utility does not shut down the administration, Interconnect, or aggregation switches.

The utility should be run as the user root. Prior to running dca_shutdown, perform the following steps to ensure a clean shutdown:

Table 38  Manage RPQ

Option  Description

Document an RPQ  Creates a new RPQ record. When prompted, enter a unique number, title, and description for the RPQ. If the number you enter already exists in the system, an "invalid input" message is displayed.
Note: The number must be 16 characters or fewer, and the description must be 240 characters or fewer.

Update an RPQ  Adds new text to the description of an existing RPQ record. To update an RPQ record:
1. When prompted, enter the number of an existing RPQ record.
2. Enter the text you want to add to the description of the RPQ record.
3. Enter y to confirm the action.

Show RPQs  Displays the numbers and titles of RPQ records in the system. When selected, it displays the first 10 records. Press Enter to display the next record, or the spacebar to display the next 10 records.


1. Stop Greenplum Database:

$ gpstop -af

2. Stop Command Center:

$ gpcmdr --stop

3. Stop health monitoring as the user root:

$ su -
# dca_healthmon_ctl -d
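Steps 1-3 and the shutdown itself can be reviewed in one place. A minimal sketch, assuming only the utilities named in this guide; shutdown_plan is a hypothetical helper that prints the planned commands so an operator can inspect the sequence before running it as root on the master.

```shell
# Print the clean-shutdown sequence in order. Actually running the commands
# is left to the operator, as root on the Primary Master server.
shutdown_plan() {
    printf '%s\n' \
        "su - gpadmin -c 'gpstop -af'" \
        "su - gpadmin -c 'gpcmdr --stop'" \
        'dca_healthmon_ctl -d' \
        'dca_shutdown'
}
shutdown_plan
```

Note that gpstop and gpcmdr run as gpadmin (hence the su wrappers, an assumption based on the $ prompts above), while dca_healthmon_ctl and dca_shutdown run as root.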

Options

-?, --help

Print usage and help information.

-i, --ignoredb

Do not check if Greenplum Database, health monitoring or Command Center are running. Shut down all servers immediately.

-h, --host hostname

Perform a shutdown on the host specified.

-f, --hostfile hostfile

Perform a shutdown on the hosts listed in the hostfile. This option cannot be used with the --host option.

-p, --password password

Specify a password to connect to the server's IPMI (iDRAC) to perform the shutdown. The password is originally set during installation with DCA Setup. If an installation through DCA Setup has never been run, you are prompted for a password.

-s, --passfile password_file

Specify a file containing the password to use to connect to the server’s IPMI (iDRAC) to perform the shutdown. This file is generated during installation with DCA Setup, and is located in /opt/dca/etc/ipmipasswd.

-o, --statusonly

Print the power status (ON | OFF) of all servers. This will not power off any servers.

Examples

Shut down all servers in a DCA:

# dca_shutdown

Shut down servers listed in the file hostfile:

# dca_shutdown -f /home/gpadmin/gpconfigs/hostfile


dcacheck
Validate hardware and operating system settings.

Synopsis

dcacheck { -f hostfile | -h hostname } { --stdout | --zipout } [ --config config_file ]

dcacheck --zipin dcacheck_zipfile
dcacheck -?

Description

The dcacheck utility validates DCA operating system and hardware configuration settings. The utility can use a host file or a file previously created with the --zipout option to validate settings. At the end of a successful validation process, a DCACHECK_NORMAL message is displayed. If DCACHECK_ERROR is displayed, one or more validation checks failed. You can also use dcacheck to gather and view platform settings on hosts without running validation checks.

EMC recommends that you run dcacheck as the user root. If you do not run dcacheck as root, the utility displays a warning message and cannot validate all configuration settings; only the settings that do not require root privileges are validated.

Running dcacheck with no parameters validates settings in the following file:

/opt/dca/etc/dcacheck/dcacheck_config

The specific configuration parameters that are validated depend on the DCA software release.

Options

--config config_file

The name of a configuration file to use instead of the default file /opt/dca/etc/dcacheck/dcacheck_config.

-f hostfile

The name of a file that contains a list of hosts that dcacheck uses to validate settings. This file should contain one host name per line for all hosts in the DCA.

-h hostname

The name of a host that dcacheck will use to validate platform-specific settings.

--stdout

Display collected host information from dcacheck. No checks or validations are performed.

--zipout

Save all collected data to a .zip file in the current working directory. dcacheck automatically creates the .zip file and names it dcacheck_timestamp.tar.gz. No checks or validations are performed.


--zipin file

Use this option to decompress and check a .zip file created with the --zipout option. dcacheck performs validation tasks on the file you specify in this option.

-?

Print the online help.

Examples

Verify and validate the DCA settings on specific servers:

# dcacheck -f /home/gpadmin/gpconfigs/hostfile

Verify custom settings on all DCA servers:

# dcacheck --config my_config_file
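The --zipout/--zipin pair also supports an offline workflow: capture settings on the appliance now, then validate the capture later or on another machine. A minimal sketch; offline_check_plan is a hypothetical helper that prints the two commands, and the snapshot filename is illustrative (the real name follows the dcacheck_timestamp pattern described above).

```shell
# Print an offline-validation plan: collect settings first, validate the
# snapshot later. The snapshot filename here is a placeholder.
offline_check_plan() {
    snapshot=$1
    printf 'dcacheck -f /home/gpadmin/gpconfigs/hostfile --zipout\n'
    printf 'dcacheck --zipin %s\n' "$snapshot"
}
offline_check_plan dcacheck_snapshot.tar.gz
```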

dca_healthmon_ctl
Control the healthmond service on a DCA.

Synopsis

dca_healthmon_ctl { -e | -d | -s } [ -n mdw|smdw ]
dca_healthmon_ctl --help

Description

The dca_healthmon_ctl utility controls the DCA health monitoring service. This utility is used to manually enable, disable, or check the status of health monitoring. The health monitoring service must be stopped during most service activities to avoid false call home messages. This utility must be run as the user root.

Options

-e, --enable

Enable health monitoring services on the DCA.

-d, --disable

Disable health monitoring services on the DCA.

-s, --status

Query the status of health monitoring services on the DCA.

-n, --node mdw|smdw

Run command on a specific server, mdw or smdw. Normal operation runs the command on both mdw and smdw. This option must be used with the --enable or --disable parameters.

-h, -?, --help

Print the online help.


Examples

Stop health monitoring services on a DCA:

# dca_healthmon_ctl -d

Start health monitoring services on the Primary Master server, mdw:

# dca_healthmon_ctl -e -n mdw
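Because health monitoring must be stopped during most service activities to avoid false call home messages, a common pattern is disable, do the work, re-enable, verify. A sketch using only the flags documented above; maintenance_plan is a hypothetical helper that prints the sequence rather than executing it.

```shell
# Print the maintenance-window sequence around an arbitrary service command.
# maintenance_plan is a hypothetical helper, not part of the DCA tooling.
maintenance_plan() {
    printf 'dca_healthmon_ctl -d\n'   # disable monitoring first
    printf '%s\n' "$1"                # the service activity itself
    printf 'dca_healthmon_ctl -e\n'   # re-enable when the work is done
    printf 'dca_healthmon_ctl -s\n'   # confirm monitoring status
}
maintenance_plan 'dcacheck -f /home/gpadmin/gpconfigs/hostfile'
```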

dca_blinker
Identify a DCA server by flashing its marker LED.

Synopsis

dca_blinker { -h hostname | -f hostfile } [ -a ON|OFF ] [ -t interval ]
dca_blinker -?

Description

The dca_blinker utility turns the marker LED on a DCA server on or off. This utility can be used from any server in a DCA. The default flash interval is 15 minutes. This utility must be run as the user root.

Options

-h hostname

Specify the host to flash the marker LED. Multiple hosts can be specified.

-f hostfile

Specify a list of hosts to flash the marker LED.

-a ON|OFF

Set the LED flash to ON or OFF.

-t interval

Interval in seconds to flash the server marker LED. Default interval is 15 minutes.

Examples

Flash the marker LED on sdw1 for 20 seconds:

# dca_blinker -h sdw1 -a ON -t 20

Turn off the flashing LED on sdw1:

# dca_blinker -h sdw1 -a OFF

gppkg
Installs Greenplum Database extensions such as pgcrypto, PL/R, PL/Java, PL/Perl, and PostGIS, along with their dependencies, across an entire cluster.

Synopsis

gppkg [-i package | -u package | -r name-version | -c]


[-d master_data_directory] [-a] [-v]

gppkg --migrate GPHOME_1 GPHOME_2 [-a] [-v]

gppkg [-q | --query] query_option

gppkg -? | --help | -h

gppkg --version

Description

The Greenplum Package Manager (gppkg) utility installs Greenplum Database extensions, along with any dependencies, on all hosts across a cluster. It will also automatically install extensions on new hosts in the case of system expansion and segment recovery.

First, download one or more of the available packages from the EMC Download Center, then copy them to the master host. Use the Greenplum Package Manager to install each package using the options described below.

Note: After a major upgrade to Greenplum Database, you must download and install all extensions again.

The following packages are available for download from the EMC Download Center (http://support.emc.com):

• PostGIS
• PL/Java
• PL/R
• PL/Perl
• pgcrypto

Options

-a (do not prompt)

Do not prompt the user for confirmation.

-c | --clean

Reconciles the package state of the cluster to match the state of the master host. Running this option after a failed or partial install/uninstall ensures that the package installation state is consistent across the cluster.

-d master_data_directory

The master data directory. If not specified, the value set for $MASTER_DATA_DIRECTORY will be used.

-i package | --install=package

Installs the given package. This includes any pre/post installation steps and installation of any dependencies.


--migrate GPHOME_1 GPHOME_2

Migrates packages from a separate $GPHOME. Carries over packages from one version of Greenplum Database to another.

For example: gppkg --migrate /usr/local/greenplum-db-4.2.2.4 /usr/local/greenplum-db-4.2.3.2

This option is automatically invoked by the installer during minor upgrades. This option is given here for cases when the user wants to migrate packages manually.

Migration can only proceed if gppkg is executed from the installation directory to which packages are being migrated. That is, GPHOME_2 must match the $GPHOME from which the currently executing gppkg is being run.

-q | --query query_option

Provides information specified by query_option about the installed packages. Only one query_option can be specified at a time. Table 39 lists the possible values for query_option, where <package_file> is the name of a package file.

-r name-version | --remove=name-version

Removes the specified package.

-u package | --update=package

Updates the given package.

--version (show utility version)

Displays the version of this utility.

-v | --verbose

Sets the logging level to verbose.

-? | -h | --help

Displays the online help.

Table 39  Query Options for gppkg

query_option            Returns
<package_file>          Whether the specified package is installed.
--info <package_file>   The name, version, and other information about the specified package.
--list <package_file>   The file contents of the specified package.
--all                   A list of all installed packages.
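A typical install-and-verify session combining the options above might look like the following sketch. gppkg_plan is a hypothetical helper that prints the commands, and the .gppkg filename is illustrative only; real filenames come from the EMC Download Center.

```shell
# Print an install-and-query plan for one extension package.
# gppkg_plan is a hypothetical helper; the package filename is a placeholder.
gppkg_plan() {
    pkg=$1
    printf 'gppkg -i %s\n' "$pkg"              # install the package and its dependencies
    printf 'gppkg --query --all\n'             # confirm it appears in the installed list
    printf 'gppkg --query --info %s\n' "$pkg"  # show its name and version details
}
gppkg_plan example-extension-1.0.gppkg
```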
