Was v7 Notes Chapter

Source: http://slidepdf.com/reader/full/was-v7-notes-chapter (7/16/2019)


Chapter 1

Architecture

Illustrate the relationships between IBM WebSphere Application Server Network Deployment V7.0 and the application components (e.g., browser, HTTP server, proxy server, plug-in, firewall, database servers, WebSphere MQ, load balancing, IP spraying, and Tivoli Performance Viewer).

Load balancers

A load balancer, also referred to as an IP sprayer, enables horizontal scalability by dispatching TCP/IP traffic among several identically configured servers. Depending on the product used for load balancing, different protocols are supported.

Load balancing is implemented using the Load Balancer Edge component provided with the Network Deployment package, which provides load balancing capabilities for HTTP, FTP, SSL, SMTP, NNTP, IMAP, POP3, Telnet, SIP, and any other TCP-based application.

Horizontal scaling topology with an IP sprayer

Load balancing products can be used to distribute HTTP requests among Web servers running on multiple physical machines. The Load Balancer component of Network Dispatcher, for example, is an IP sprayer that performs intelligent load balancing among Web servers based on server availability and workload.

The figure below illustrates a horizontal scaling configuration that uses an IP sprayer to redistribute requests between Web servers on multiple machines.

Figure 1.1. Simple IP sprayer horizontally scaled topology

The active Load Balancer hosts the highly available TCP/IP address (the cluster address of your service) and sprays requests to the Web servers. At the same time, the Load Balancer keeps track of the Web servers' health and routes requests around Web servers that are not available. To avoid having the Load Balancer become a single point of failure, it is set up in a hot-standby cluster. The primary Load Balancer communicates its state and routing table to the secondary Load Balancer. The secondary Load Balancer monitors the primary through heartbeats and takes over when it detects a problem with the primary Load Balancer. Only one Load Balancer is active at a time.

Both Web servers are active at the same time and perform load balancing and failover between the application servers in the cluster through the Web server plug-in. If any component on System C or System D fails, the plug-in detects this and the other server can continue to receive requests.
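The spray-and-failover behavior described above can be sketched as a small simulation. This is hypothetical Python, not WebSphere or Load Balancer code; the server names and the `mark_down`/`mark_up` health hooks are invented for illustration:

```python
# Hypothetical sketch: round-robin IP spraying with health checks,
# mimicking how a Load Balancer routes around failed Web servers.
from itertools import cycle

class Sprayer:
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)
        self._ring = cycle(self.servers)

    def mark_down(self, server):
        self.healthy.discard(server)   # a health check detected a failure

    def mark_up(self, server):
        self.healthy.add(server)       # the server recovered

    def dispatch(self):
        # Skip unhealthy servers; give up after one full rotation.
        for _ in range(len(self.servers)):
            server = next(self._ring)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

sprayer = Sprayer(["webA", "webB"])
sprayer.mark_down("webB")
# All requests now go to webA until webB recovers.
assert all(sprayer.dispatch() == "webA" for _ in range(4))
```

In the actual Edge component, health checking is performed by advisor processes rather than by explicit calls like these.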

Using Web servers

In WebSphere Application Server, a Web server can be administratively defined to the cell. This allows the association of applications to one or more Web servers, and custom plug-in configuration files to be generated for each Web server.

Managed and unmanaged nodes: When you define a Web server to WebSphere Application Server, it is associated with a node. The node is either managed or unmanaged. When we refer to managed Web servers, we are referring to a Web server defined on a managed node. An unmanaged Web server resides on an unmanaged node. In a stand-alone server environment, you can define one unmanaged Web server. In a distributed environment, you can define multiple managed or unmanaged Web servers.

•  Managed Web servers

Defining a managed Web server allows you to start and stop the Web server from the Integrated Solutions Console and push the plug-in configuration file to the Web server. A node agent must be installed on the Web server machine. An exception is if the Web server is the IBM HTTP Server. The figure below illustrates a Web server managed node:

Figure 1.2. Web server managed node

•  Unmanaged Web servers

Unmanaged Web servers reside on a system without a node agent. This is the only option in a stand-alone server environment and is a common option for Web servers installed outside a firewall. The use of this topology requires that each time the plug-in configuration file is regenerated, it is copied from the machine where WebSphere Application Server is installed to the machine where the Web server is running. The figure below illustrates a Web server unmanaged node:


Figure 1.3. Web server unmanaged node

IBM HTTP Server as an unmanaged Web server (special case)

If the Web server is IBM HTTP Server, it can be installed on a remote machine without installing a node agent. You can administer IBM HTTP Server through the deployment manager using the IBM HTTP Server Admin Process for tasks such as starting, stopping, or automatically pushing the plug-in configuration file. The figure below illustrates an IBM HTTP Server unmanaged node:

Figure 1.4. IBM HTTP Server unmanaged node

Although you can install the Web server on the same system as WebSphere Application Server, and you can even direct HTTP requests directly to the application server, you should have a Web server in a DMZ as a front end to receive requests. The Web server is located in a DMZ to provide security, performance, throughput, availability, and maintainability, while the application server containing the business logic is located securely in a separate network:

Figure 1.5. Stand-alone Server topology with Web server in a DMZ

Relate the various components of the IBM WebSphere Application Server Network Deployment V7.0 runtime architecture.

Intelligent runtime provisioning

Intelligent runtime provisioning is a new concept introduced with WebSphere Application Server V7.0. This mechanism selects only the runtime functions needed for an application. Each application is examined by WebSphere Application Server during deployment to generate an activation plan. At run time, the server uses the activation plan to start only those components that are required inside the application server.

Figure 1.6. WebSphere Application Server V6.1: without Intelligent runtime provisioning


 

Figure 1.7. WebSphere Application Server V7.0: with Intelligent runtime provisioning

 

Intelligent runtime provisioning reduces the memory footprint, the application server startup time, and the CPU resources needed to start the application server.
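The activation-plan idea can be illustrated with a toy sketch. This is hypothetical Python; the component names are invented and do not correspond to WebSphere's actual internal components:

```python
# Hypothetical illustration: record the needed runtime functions at
# deployment, then start only those components at server startup.
AVAILABLE = {"webcontainer", "ejbcontainer", "sip", "webservices", "security"}

def build_activation_plan(app_requirements):
    """At deployment, keep only the runtime functions the app needs."""
    return sorted(AVAILABLE & set(app_requirements))

def start_server(activation_plan):
    """At run time, start only the components listed in the plan."""
    return [f"started:{c}" for c in activation_plan]

plan = build_activation_plan({"webcontainer", "security"})
assert start_server(plan) == ["started:security", "started:webcontainer"]
```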


Describe WebSphere dynamic caching features.

Using the dynamic cache service to improve performance

Caching the output of servlets, commands, and JavaServer Pages (JSP) improves application performance. WebSphere Application Server consolidates several caching activities, including servlets, Web services, and WebSphere commands, into one service called the dynamic cache. These caching activities work together to improve application performance and share many configuration parameters that are set in the dynamic cache service of an application server. You can use the dynamic cache to improve the performance of servlet and JSP files by serving requests from an in-memory cache. Cache entries contain servlet output, the results of a servlet after it runs, and metadata.

The dynamic cache service works within an application server Java virtual machine (JVM), intercepting calls to cacheable objects. For example, it intercepts calls through a servlet service method or a command execute method, and either stores the output of the object in the cache or serves the content of the object from the dynamic cache.
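The intercept-and-cache pattern described here can be sketched in a few lines. This is hypothetical Python; the decorator and names are illustrative only, not WebSphere APIs:

```python
# Hypothetical sketch of intercepting a service method: on a miss, run the
# servlet and store its output; on a hit, serve the output from the cache.
calls = []

def cacheable(func):
    cache = {}
    def wrapper(*args):
        if args in cache:           # serve content from the cache
            return cache[args]
        result = func(*args)        # run the servlet/command
        cache[args] = result        # store its output in the cache
        return result
    return wrapper

@cacheable
def service(request):
    calls.append(request)           # expensive work happens only on a miss
    return f"<html>page for {request}</html>"

service("/app?action=portfolio")
service("/app?action=portfolio")    # second call is served from the cache
assert len(calls) == 1
```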

Procedure:

1.  The dynamic cache service is enabled by default. You can configure the default cache instance in the administrative console.

2.  Configure the type of caching that you are using:

•  Configuring servlet caching.

•  Configuring portlet fragment caching.

•  Configuring Edge Side Include caching.

•  Configuring command caching.

•  Caching Web services.

•  Configuring the JAX-RPC Web services client cache.

3.  Monitor the results of your configuration using the dynamic cache monitor.

Dynamic caching

Dynamic caching refers to the methods employed by WebSphere Application Server either to provide fragment caching or to reuse components within the application server engine. Fragment caching means that only some portions of a page are cached.

Dynamic caching is enabled at the application server container services level. Cacheable objects are defined inside the cachespec.xml file, located inside the Web module WEB-INF or enterprise bean META-INF directory. The cachespec.xml file enables you to configure caching at a servlet/JSP level. The caching options in the cachespec.xml file must include sufficient details to allow the dynamic cache service to build a unique cache key, which is used to uniquely identify each object. This might be achieved by specifying request parameters, cookies, and so on. The cachespec.xml file also allows you to define cache invalidation rules and policies.
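Cache-key construction under such a policy can be sketched as follows. This is hypothetical Python; the policy tuples mirror the component/type/required elements of cachespec.xml, but the code is illustrative only:

```python
# Hypothetical sketch: combine the components a cachespec.xml policy names
# (a request parameter and a cookie) into a key that uniquely identifies
# each cacheable object.
def build_cache_key(uri, params, cookies, policy):
    parts = [uri]
    for comp_id, comp_type, required in policy:
        source = params if comp_type == "parameter" else cookies
        value = source.get(comp_id)
        if value is None:
            if required:
                return None          # policy not satisfied: do not cache
            continue
        parts.append(f"{comp_id}={value}")
    return ":".join(parts)

policy = [("action", "parameter", True), ("JSESSIONID", "cookie", True)]
key = build_cache_key("/app", {"action": "portfolio"},
                      {"JSESSIONID": "abc123"}, policy)
assert key == "/app:action=portfolio:JSESSIONID=abc123"
```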


Note: Pay special attention to the servlet caching configuration, because you can create unexpected results by returning a cached servlet fragment that is stale.

Another dynamic caching option available is Edge Side Include (ESI) caching. ESI caching is an in-memory caching solution implemented through the Web server plug-in, the WebSphere proxy server, or the DMZ secure proxy server. If dynamic caching is enabled at the servlet Web container level, the plug-in uses ESI caching.

An additional header, called the Surrogate-Capabilities header, is added to the HTTP request by the caching facility. The application server returns a Surrogate-Control header in the response. Then, depending on the rules specified for servlet caching, you can cache responses for JSPs and servlets.

Cache replication

With replication, data is generated one time and copied or replicated to other servers in the cluster, saving time and resources. Caching in a cluster has additional concerns. In particular, the same data can be required and generated in multiple places. Also, the permissions that resources need to generate the cached data can be restricted, preventing access to the data.

Cache replication deals with these concerns by generating the data one time and copying it to the other servers in the cluster. It also aids in cache consistency. Cache entries that are not needed are removed or replaced.

The data replication configuration can exist as part of the Web container dynamic cache configuration, accessible through the administrative console, or on a per-cache-entry basis through the cachespec.xml file. With the cachespec.xml file, you can configure cache replication at the Web container level but disable it for a specific cache entry.

Cache replication can take on three forms:

• PUSH - Sends out new entries, both ID and data, and updates to those entries.

• PULL - Requests data from other servers in the cluster when that data is not locally present. This mode of replication is not recommended.

• PUSH/PULL - Sends out IDs for new entries, then requests entries from other servers in the cluster only for IDs previously broadcast. The dynamic cache always sends out cache entry invalidations.
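A minimal simulation of the PUSH and PUSH/PULL sharing policies (hypothetical Python, purely illustrative; the class and method names are invented):

```python
# Hypothetical simulation: each cluster member holds a local cache.
# "push" copies ID and data eagerly; "push/pull" broadcasts only IDs
# and fetches the data on demand.
class Member:
    def __init__(self):
        self.data = {}      # id -> value held locally
        self.known = set()  # ids broadcast by other members (PUSH/PULL)

def replicate(cluster, origin, key, value, mode):
    origin.data[key] = value
    for peer in cluster:
        if peer is origin:
            continue
        if mode == "push":
            peer.data[key] = value     # ID and data sent immediately
        elif mode == "push/pull":
            peer.known.add(key)        # only the ID is sent

def lookup(member, cluster, key):
    if key in member.data:
        return member.data[key]
    if key in member.known:            # fetch from whichever peer has it
        for peer in cluster:
            if key in peer.data:
                member.data[key] = peer.data[key]
                return member.data[key]
    return None

a, b = Member(), Member()
replicate([a, b], a, "page1", "<html>...</html>", "push/pull")
assert "page1" not in b.data           # only the ID was broadcast
assert lookup(b, [a, b], "page1") == "<html>...</html>"
```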

You can override the global sharing policy by specifying a specific sharing policy in the cache policy. For example, if your global policy is to use PUSH only, you can change the sharing policy of a specific cache entry by making this change to your cache policy:

 

<cache-entry>
    <sharing-policy>not-shared</sharing-policy>
    <class>servlet</class>
    <name>/app</name>
    <cache-id>
        <component id="action" type="parameter">
            <value>portfolio</value>
            <required>true</required>
        </component>
        <component id="JSESSIONID" type="cookie">
            <required>true</required>
        </component>
        <property name="EdgeCacheable">true</property>
    </cache-id>
</cache-entry>

 

Compare the Network Deployment (ND) cell model with the flexible management model.

Flexible management

This release introduces an optional administrative model that enables you to implement a more flexible, scalable, and asynchronous administrative topology. This new loosely coupled model, called "flexible management", is built around autonomous nodes that maintain local control over their configuration. Servers on a node are locally managed by an "administrative agent" that can host the administrative logic for all servers on a node, reducing their footprint. A central "job manager" process provides a single interface from which you can asynchronously submit administrative tasks to a node or group of nodes. Because it does not rely on tightly coupled, synchronous communication, flexible management can be advantageous in situations that push the limits of the cell model, including environments with very large numbers of nodes, or topologies that include high-latency, long-distance links. Keep in mind that this new model is an option, and the cell model is still available. Many environments will still find the cell model to be the most appropriate, since it provides many services, such as high availability, that applications might require.

Two new Java processes, the administrative agent and the job manager, work together to enable the "flexible management" administrative topology. The administrative agent is responsible for the administrative logic for all servers on a node. By consolidating the management logic for all servers on the node, administrative overhead is reduced, and there is a single point of administration. After a profile has been registered with an administrative agent, the administrative console runs on the administrative agent, not the application server. An administrative agent can only manage local servers. Multiple nodes can be administered remotely using a job manager. The job manager provides its own console and enables you to send management jobs to registered servers through the administrative agent. Note that individual nodes retain their autonomy and can still be managed locally, even when registered with a job manager. A job manager can also send commands to a deployment manager, providing a way to administer multiple Network Deployment cells from a single interface. The administrative agent is available as part of all WebSphere Application Server packages, while the job manager is only available with the Network Deployment offering.

Figure 1.8. Flexible management topology

 

The job manager communicates with either administrative agents or deployment managers. The flexible management components enable some scenarios, such as very large server farms or a single administrative interface for separate data centers, that are not easy to manage with the traditional cell model.

Job manager

A job manager is a component that provides management capabilities for multiple standalone application servers, administrative agents, and deployment managers. It brings enhanced multiple-node installation options for your environment.

It is possible to encounter a scenario where there are multiple distributed environments, each managed by its own deployment manager. With multiple deployment managers, each must be administered individually, and there is no way to coordinate management actions between the different distributed environments. Distributed environment administration also requires low-latency networks, because file synchronization between the deployment manager and node agents depends on network communication.


The job manager can be used to administer multiple distributed environments as well as standalone servers. The job manager administers the environment asynchronously using the concept of jobs. Because jobs are submitted asynchronously, even a high-latency network is sufficient, which can be useful when the environment is distributed over distant geographical areas.

The job manager is available only with WebSphere Application Server Network Deployment andWebSphere Application Server for z/OS.

To administer a distributed environment, the deployment manager is registered with the job manager. To administer standalone servers, the nodes managed by the administrative agent are registered with the job manager. This relationship between the job manager and the environments it can interact with is shown below:

Figure 1.9. High-level overview of a job manager architecture

 

The job manager administers the registered environments by submitting jobs that perform tasks, for example:

• Start and stop servers

• Create and delete servers

• Install and uninstall applications

• Start and stop applications


• Run wsadmin scripts

• Distribute files

The job manager has a repository for its own configuration files, which are related to security, administration of the job manager, configurations, and so on. However, it does not maintain a master repository the way a deployment manager does. Rather, the job manager allows the administrative agents and deployment managers to continue managing their environments as they would have had they not been registered with the job manager. The job manager simply provides another point of administration.

The job manager can administer multiple administrative agents and deployment managers. Conversely, each administrative agent and deployment manager can be registered with multiple job managers.

Flexible management

Flexible management is a concept introduced with WebSphere Application Server V7.0. With flexible management components such as the administrative agent and the job manager, you can build advanced and large-scale topologies and manage single and multiple application server environments from a single point of control. This reduces management and maintenance complexity.

• Administrative agent profile

The administrative agent is a new profile that provides enhanced management capabilities for stand-alone application servers. This is a new concept introduced with WebSphere Application Server V7.0.

An administrative agent profile is created on the same node as the standalone servers and can manage only servers on that node. The node configuration for each standalone server is totally separate from any other servers on the system, but it can be managed using the administrative console on the administrative agent.

To participate in flexible management, standalone base servers first register themselves with the administrative agent. When a base application server registers with an administrative agent, much of the administrative code that was in the base server is consumed by the administrative agent. This results in a significantly smaller and faster-starting base server:

Figure 1.10. High-level overview of an administrative agent profile architecture


 

Unlike a node in a WebSphere Application Server Network Deployment cell, the configuration repository of a WebSphere Application Server base profile that is registered with an admin agent is not federated into a master repository. The administration services in the admin agent modify the configuration of the various registered base profiles directly. This also means the admin agent can only manage WebSphere Application Server base profiles running on the same machine:

Figure 1.11. Administrative agent


 

• Job manager profile

The job manager is a new server type that was added to support flexible management. A job manager is defined by a job manager profile.

To participate in flexible management, a standalone application server first registers itself with the administrative agent. The administrative agent must then register the node for the application server with the job manager. If a deployment manager wants to participate in an environment controlled by a job manager, the deployment manager registers directly with the job manager; no administrative agent is involved in this case:

Figure 1.12. Flexible management


 

The main use of the job manager is to queue jobs to application servers in a flexible management environment. These queued jobs are pulled from the job manager by the administrative agent and distributed to the appropriate application server or servers.

Both deployment managers and administrative agents retain autonomy and can be managed without the job manager.

The units of work that are handled by the flexible management environment are known as jobs. The semantics of these jobs are typically straightforward, and the jobs require few parameters. The jobs are processed asynchronously and can have an activation time, an expiration time, and a recurrence indicator. You can specify that an e-mail notification be sent upon completion of a job. Additionally, you can view the current status of a job by issuing a status command.
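The asynchronous job model described above can be sketched as a queue that agents poll. This is hypothetical Python; the class, method names, and job fields are invented for illustration:

```python
# Hypothetical sketch: the job manager queues jobs, and administrative
# agents pull the runnable jobs addressed to their node.
import time

class JobManager:
    def __init__(self):
        self.jobs = []

    def submit(self, target, task, activation_time=0.0):
        job = {"target": target, "task": task,
               "activation": activation_time, "status": "pending"}
        self.jobs.append(job)
        return job

    def poll(self, target, now=None):
        """Called by an administrative agent: pull runnable jobs for its node."""
        now = time.time() if now is None else now
        due = [j for j in self.jobs
               if j["target"] == target and j["status"] == "pending"
               and j["activation"] <= now]
        for j in due:
            j["status"] = "delivered"
        return due

jm = JobManager()
jm.submit("nodeA", "startServer")
jm.submit("nodeB", "installApplication")
jobs = jm.poll("nodeA")
assert [j["task"] for j in jobs] == ["startServer"]
assert jm.poll("nodeA") == []          # already delivered, nothing new
```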

Figure 1.13. Job manager management model for multiple administrative agents


 

In a deployment manager environment, there is a tight coupling between application servers and node agents, and also between node agents and the deployment manager. This tight coupling can impact the scalability of the administrative run time if the runtime components are not located together in close proximity using redundant, high-capacity, low-latency networks.

The job manager addresses the limitations inherent in the current management architecture by adopting a loosely coupled management architecture. Rather than synchronously controlling a number of remote endpoints (node agents), the job manager coordinates management across a group of endpoints by providing an asynchronous job management capability across a number of nodes.

The advanced management model relies on the submission of management jobs to these remote endpoints, which can be either a WebSphere Application Server administrative agent or a deployment manager. In turn, the administrative agent or the deployment manager executes the jobs, which update the configuration, start or stop applications, and perform a variety of other common administrative tasks.


Figure 1.14. Job manager management model

 

To create a job manager and coordinate administrative actions among multiple deployment managers and administer multiple unfederated application servers, you need to create a management profile during the profile creation phase of the installation.

The job manager can manage nodes that span multiple systems and platforms. A node managed by one job manager can also be managed by multiple job managers.

Note: The job manager is not a replacement for a deployment manager. It is an option for remotely managing a Network Deployment deployment manager or, more likely, multiple deployment managers, removing the cell boundaries.

Chapter 2

Installation/Configuration of WebSphere Application Server

Identify installation options and determine the desired configuration (e.g., silent install, etc.)


Installation method

On distributed systems, you have several choices for installation:

•  Graphical installation

The installation wizard is suitable for installing WebSphere Application Server on a small number of systems. Each execution of the installation wizard installs one system. You can start with the Launchpad, which contains a list of installation activities to select, or you can execute the installation program directly.

The installer checks for the required operating system level, sufficient disk space, and user permissions. If any of these checks fail, you can choose to ignore the warnings, but note that there is a danger that the installation might fail or the product might not work as expected later on.

•  Silent installation

To install WebSphere Application Server V7.0 on multiple systems or remote systems, use the silent installation. This option enables you to store installation and profile creation options in a single response file, and then issue a command to perform the installation and (optionally) profile creation. The silent installation approach offers the same options as the graphical installer. Providing the options in a response file offers various advantages over using the graphical installation wizard:

o  The installation options can be planned and prepared in advance

o  The prepared response file can be tested

o  The installation is consistent and repeatable

o  The installation is less fault-prone

o  The installation is documented through the response file

•  Installation factory

The Installation Factory is an Eclipse-based tool that allows the creation of WebSphere Application Server installation packages in a reliable and repeatable way, tailored to your needs.

The Installation Factory is part of the WebSphere deliverable, on separate media and as a download.

The Installation Factory can produce two types of packages:

o  Customized Installation Packages (CIP)

A WebSphere Application Server CIP includes a WebSphere Application Server product, product maintenance, profile customization, enterprise archives, other user files, as well as user-defined scripting.

o  Integrated Installation Packages (IIP)


An IIP can be used to install a full WebSphere software stack, including application servers, feature packs, and other user files, and might even contain multiple CIPs.

The Installation Factory allows you to create one installation package to install the full product stack you need to run your applications. Using the scripting interface, you can ship and install components not related to the WebSphere installation process.

Depending on the platform on which you are running the Installation Factory, you can build installation packages for operating systems other than the one on which the Installation Factory is running. The Installation Factory running on AIX, HP-UX, Linux, and Solaris operating systems can create installation packages for all supported platforms. The Installation Factory running on Windows can create installation packages for Windows and i5/OS.

The CIP or IIP can be installed on the target system through two methods:

o  Installation wizard

o  Silent installer using a response file

The benefit of the Installation Factory is mainly in terms of installation time (fix packs, for example, are directly incorporated into the installation image) and in the consistency and repeatability of the installations. This gives you a quick payback for the time required to build the CIP or IIP.

•  Centralized installation manager

Another product feature that can be used to install and update WebSphere Application Server Network Deployment installations is the centralized installation manager (CIM).

Installing silently

Silently install the application server product. To configure the installation, change the options in the response file before you issue the installation command.

Customize the response file to add your selections before attempting to install silently.

Use the response file to supply values to the installation wizard as the wizard runs in silent mode. The wizard does not display interactive panels when it runs in silent mode, but reads values from the response file instead.

Procedure:

1.  Log on to the operating system. If you are installing as a non-root or non-administrative user, there are certain limitations.

In addition, select a umask that allows the owner to read and write the files, and allows others to access them according to the prevailing system policy. For root, a umask of 022 is recommended. For non-root users, a umask of 002 or 022 could be used, depending on whether or not the users share the group. To verify the umask setting, issue the following command:

umask


To set the umask setting to 022, issue the following command:

umask 022

2.  Access the root directory of your installation image on your hard disk, or insert the disk labeled "WebSphere Application Server Network Deployment" into the disk drive.

3.  Locate the sample options response file. The file is named responsefile.nd.txt in the WAS directory on the product image, CD-ROM, or DVD.

4.  Copy the file to preserve it in its original form. For example, copy and save it as myresponsefile.txt to a location on your hard drive.

5.  Edit the copy in your flat-file editor of choice on the target operating system. Read the directions within the response file to choose appropriate values that reflect parameters for your system. The response file contains a description of each option.

6.  Save the file.

7.  Issue the proper command to use your custom response file. For example, issue a command such as the following:

mnt_cdrom/WAS/install.sh -options /tmp/WAS/myresponsefile.txt -silent

8.  After the installation, examine the logs for success.
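For orientation, a response file is a plain-text list of -OPT settings. The fragment below is illustrative only — these two option names are typical of the sample responsefile.nd.txt, but always start from the sample file shipped with your product image, since option names vary by release:

```text
# myresponsefile.txt -- illustrative fragment of a silent-install response file.
# Start from the shipped responsefile.nd.txt; option names vary by release.
-OPT silentInstallLicenseAcceptance="true"
-OPT installLocation="/opt/IBM/WebSphere/AppServer"
```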

Install WebSphere Application Server Network Deployment V7.0 and verify the installation (e.g., Installation Verification Tool (IVT), default application (i.e., snoop and/or hitcount)).

Figure 2.1. First steps console


The firststeps command starts the First steps console. The First steps console is a post-installation ease-of-use tool for directing WebSphere Application Server Network Deployment elements from one place. Options display dynamically on the First steps console, depending on features that you install and the availability of certain elements on a particular operating system platform. Options include verifying the installation, starting and stopping deployment manager and application server processes, creating profiles, accessing the administrative console, launching the Migration wizard, accessing the online information center, and accessing the Samples gallery.

The location of the firststeps command that starts the First steps console for a profile is:

profile_root/firststeps/firststeps.sh

Option descriptions:


• Installation verification

This option starts the installation verification test. The test consists of starting and monitoring the deployment manager or the standalone application server during its startup.

If this is the first time that you have used the First steps console since creating a deployment manager or standalone application server profile, click "Installation verification" to verify your installation. The verification process starts the deployment manager or the application server.

The IVT provides the following useful information about the deployment manager or the application server:

o The name of the server process

o The name of the profile

o The profile path, which is the file path and the name of the profile

o The type of profile

o The cell name

o The node name

o The current encoding

o The port number for the administrative console, which is 9060 by default

o Various informational messages that include the location of the SystemOut.log file and how many errors are listed within the file

o A completion message

The location of the installation verification test command is:

profile_root/bin/ivt.sh

• Start the server

This option toggles to "Stop the server" when the application server runs.

This option displays when the First steps console is in a standalone application server profile or a cell profile.

After selecting the "Start the server" option, an output screen displays with status messages. The success message informs you that the server is open for e-business. Then the menu item toggles to "Stop the server" and both the "Administrative console" option and the "Samples gallery" option are enabled.

The location of the startServer command is:

profile_root/bin/startServer.sh server_name

When you have more than one application server on the same machine, the command starts the same application server that is associated with the First steps console.


• Start the deployment manager

This option toggles to "Stop the deployment manager" when the deployment manager runs.

This option displays when the First steps console is in a deployment manager profile or a cell profile.

After selecting the "Start the deployment manager" option, an output screen displays with status messages. The success message informs you that the deployment manager is open for e-business. Then the menu item changes to "Stop the deployment manager".

The location of the startManager command is:

profile_root/bin/startManager.sh

When you have more than one deployment manager on the same machine, the command starts the same deployment manager that is associated with the First steps console.

• Start the administrative agent

This option toggles to "Stop the administrative agent" when the administrative agent runs.

This option displays when the First steps console is in an administrative agent profile.

After selecting the "Start the administrative agent" option, an output screen displays with status messages. The success message informs you that the administrative agent is open for e-business. Then the menu item changes to "Stop the administrative agent".

• Start the job manager

This option toggles to "Stop the job manager" when the job manager runs.

This option displays when the First steps console is in a job manager profile or a cell profile.

After selecting the "Start the job manager" option, an output screen displays with status messages. The success message informs you that the job manager is open for e-business. Then the menu item changes to "Stop the job manager".

• Administrative console

This option is unavailable until the application server or deployment manager runs.

The administrative console is a configuration editor that runs in one of the supported Web browsers. The administrative console lets you work with XML configuration files for the standalone application server or the deployment manager and all of the application servers that are in the cell.

To launch the administrative console, click "Administrative console" or point your browser to http://localhost:9060/ibm/console. Substitute the host name for localhost if the address does not load. If 9060 does not load, verify the installation to confirm the administrative console port number.


The administrative console prompts for a login name. This is not a security item, but merely a tag to identify configuration changes that you make during the session. Secure signon is also available when administrative security is enabled.

The installation procedure in the information center cautions you to write down the administrative user ID and password when security is enabled during installation. Without the ID and password, you cannot use the administrative console or scripting.

• Profile Management Tool

This option starts the Profile Management Tool, which can create standalone application server profiles, a management profile, a cell profile, a secure proxy profile, or a custom profile.

Each profile has its own administrative interface. A custom profile is an exception. A custom profile is an empty node that you can federate into a deployment manager cell and customize. No default server processes or applications are created for a custom profile.

Each profile also has its own First steps console except for the secure proxy profile.

The command file name is:

app_server_root/bin/ProfileManagement/pmt.sh

• Samples gallery

This option starts the Samples gallery. The option is unavailable until you start the application server. The option displays when you have installed the Samples during installation.

To launch the "Samples gallery", click "Samples gallery" or point your browser to http://localhost:9080/WSsamples. The Web address is case sensitive.

Substitute your own host name and default host port number if the address does not load. Verify the port number by opening the administrative console and clicking "Servers > Application servers > server_name > [Communications] Ports". Use the WC_defaulthost port number value or the WC_defaulthost_secure value instead of 9080, which is the default.

If you do not install the Samples during the initial installation of the product, the option does not display on the First steps console. You can perform an incremental installation to add the Samples feature. After adding the Samples, the option displays on the First steps console.

• Information center for WebSphere Application Server

This option links you to the online information center.

• Migration wizard

This option starts the Migration wizard, which is the graphical interface to the migration tools.

The location of the migration command is:

app_server_root/bin/migration.sh


Create profiles.

Managing profiles using the graphical user interface

You can create profiles, which define runtime environments, using the Profile Management Tool. Using profiles instead of multiple product installations saves disk space and simplifies updating the product because a single set of core product files is maintained.

The Profile Management Tool is the graphical user interface for the manageprofiles.sh command.

NOTE: You cannot use the Profile Management Tool to create profiles for WebSphere Application Server installations on 64-bit architectures except on the Linux for zSeries platform. However, you can use the Profile Management Tool on other 64-bit architectures if you use a WebSphere Application Server 32-bit installation.
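Because the PMT is a front end to manageprofiles.sh, the same profile creation can be done from the command line. The sketch below builds such an invocation; -create, -profileName, -templatePath, and -profilePath are standard manageprofiles options, but the install root, template, and paths shown here are placeholders, and the command is printed rather than executed since it requires a real installation:

```shell
# Sketch: building a manageprofiles.sh invocation equivalent to creating
# an application server profile in the PMT. Paths are placeholders; the
# command is echoed, not run, because it needs a real installation.
app_server_root=/opt/IBM/WebSphere/AppServer   # placeholder install root
cmd="$app_server_root/bin/manageprofiles.sh -create"
cmd="$cmd -profileName AppSrv01"
cmd="$cmd -templatePath $app_server_root/profileTemplates/default"
cmd="$cmd -profilePath $app_server_root/profiles/AppSrv01"
echo "$cmd"
```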

Procedures:

•  Create a cell profile.

With a cell profile, you can create a deployment manager profile and a profile for a federated application server node in a single pass through the Profile Management Tool. Use the cell profile creation option to create the deployment manager profile and the federated application server node profile, unless you have a specific reason to create them separately.

After you install the Network Deployment product and apply the feature pack, you can create two different types of cell profiles: one that is enabled for the Network Deployment product only or one that is also enabled for the feature pack.

•  Create a management profile with a deployment manager server.

With a deployment manager you can create the administrative node for a multinode, multi-machine group of application server nodes that you create later. This logical group of application server processes is known as a cell.

After you install the Network Deployment product and apply the feature pack, you can create a management profile with a deployment manager that is enabled for the Network Deployment product only or a deployment manager profile that is enabled for the feature pack.

•  Create a management profile with an administrative agent server.

You can create a management profile for the administrative agent to administer multiple application servers that run customer applications only. The administrative agent provides a single administrative console to administer the application servers.

After you install the Network Deployment product and apply the feature pack, you can create a management profile with an administrative agent that is enabled for the Network Deployment product only or an administrative agent profile that is enabled for the feature pack.

•  Create a management profile with a job manager server.


You can create a management profile for the job manager to coordinate administrative actions among multiple deployment managers, administer multiple unfederated application servers, asynchronously submit jobs to start servers, and perform a variety of other tasks.

•  Create an application server profile.

Create an application server profile so that you can make applications available to the Internet or to an intranet, typically using Java technology.

After you install the Network Deployment product and apply the feature pack, you can create two different types of application server profiles: one that is enabled for the Network Deployment product only or one that is also enabled for the feature pack.

•  Create a custom profile.

A custom profile is an empty node that you can customize through the deployment manager to include application servers, clusters, or other Java processes, such as a messaging server.

Create a custom profile on a distributed machine and add the node into the deployment manager cell to get started customizing the node.

After you install the Network Deployment product and apply the feature pack, you can create two different types of custom profiles: one that is enabled for the Network Deployment product only or one that is also enabled for the feature pack.

•  Create a secure proxy profile.

You can create a secure proxy profile to serve as the initial point of entry into your enterprise environment. Typically, a secure proxy server exists in the DMZ, accepts requests from clients on the Internet, and forwards the requests to servers in your enterprise environment.

Augmenting profiles using the graphical user interface

After you install a feature pack, the feature pack might require you to augment a profile to make that profile compatible with it. You can use the Profile Management Tool to augment a profile.

Augmenting existing profiles might result in changes to the profile configuration. Before augmenting a profile, back up the existing configuration in case you need to restore the configuration. Use the backupConfig.sh command to back up your current configuration.
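backupConfig.sh writes an archive of the profile configuration. As a portable stand-in for the same back-up-before-augment habit, this sketch snapshots a simulated configuration directory with tar before changing it; the directory and file names are invented for the demo:

```shell
# Portable stand-in for backupConfig.sh: archive a profile's config
# directory before making changes, so it can be restored on failure.
profile=$(mktemp -d)                    # pretend profile_root (demo only)
mkdir -p "$profile/config/cells"
echo "cellName=DemoCell" > "$profile/config/cells/cell.props"
backup="$profile/WebSphereConfig_backup.tar"
tar -cf "$backup" -C "$profile" config  # snapshot taken before augmenting
tar -tf "$backup" | grep cell.props     # verify the snapshot holds the file
```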

When you apply a feature pack, the feature pack might require augmentation of Network Deployment profiles to use the new capabilities.

For this feature pack, you can augment the application server profile, the management profile with a server type of deployment manager, the management profile with a server type of administrative agent, the custom profile, and the cell profile of the Network Deployment product. You cannot augment the management profile with a server type of job manager or the secure proxy profile. You can also create a new application server profile, a new deployment manager profile, a new administrative agent profile, a new cell profile, or a new custom profile that is enabled for the feature pack. Use the profile creation tasks to create these profiles.

1.  Back up the existing configuration using the backupConfig.sh command if you have not already done so.


2.  Start the Profile Management Tool.

You can use one of the following ways to start the tool:

•  Issue the command directly from a command prompt:

•  app_server_root/bin/ProfileManagement/pmt.sh

•  Select the "Profile Management Tool" option from the First steps console.

•  Windows: Use the "Start" menu to access the Profile Management Tool. For example, click "Start > Programs or All Programs > IBM WebSphere > your_product > Profile Management Tool".

•  Linux: Use the Linux operating system menus used to start programs to start the Profile Management Tool. For example, click "the_operating_system_menus_to_access_programs > IBM WebSphere > your_product > Profile Management Tool".

3.  Click "Launch Profile Management Tool".

4.  Select the profile that you want to augment.

5.  Click "Augment".

6.  Select the augmentation that you want to apply to the profile, and click "Next".

7.  Click "Augment" on the summary panel.

8.  When augmentation is complete, click "Finish".

Troubleshoot the installation (e.g., identify and analyze log files.)

Troubleshooting installation

The installer program records the following indicators of success in the logs:

•  INSTCONFSUCCESS

The operation was a success.

•  INSTCONFPARTIALSUCCESS

The operation was partially successful. Refer to the log for more details.

•  INSTCONFFAILED

The operation failed. Refer to the log for more details.


Return codes:

•  0 - Success

•  1 - Failure

•  2 - Partial Success
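A wrapper script can combine the two signals above — the process return code and the INSTCONF marker written to the log. This hedged sketch classifies a simulated result using exactly the codes and markers listed above:

```shell
# Classify an installation result from its return code and log marker,
# using the indicators documented above. The log file here is simulated.
classify_rc() {
  case "$1" in
    0) echo "Success" ;;
    1) echo "Failure" ;;
    2) echo "Partial Success" ;;
    *) echo "Unknown code $1" ;;
  esac
}
log=$(mktemp)
echo "... INSTCONFSUCCESS: install completed ..." > "$log"  # simulated log line
classify_rc 0                          # prints: Success
grep -o "INSTCONF[A-Z]*" "$log"        # prints: INSTCONFSUCCESS
```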

Procedure:

1.  Run the installation verification test (IVT).

2.  Run the installver.sh command to calculate and compare checksums for all installed components to the bill of materials list for the product.

Compare the output from the installver.sh command to the installation log files that are described in the next step.
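The internals of installver.sh are not shown here, but the underlying idea — checksum every installed file and compare against a bill-of-materials list — can be sketched portably with sha256sum; the file names are invented:

```shell
# Sketch of the checksum idea behind installver.sh: record checksums of
# installed files in a bill-of-materials list, then re-verify them later.
inst=$(mktemp -d)
echo "websphere core jar contents" > "$inst/core.jar"   # stand-in file
( cd "$inst" && sha256sum core.jar > bom.sha256 )       # "bill of materials"
( cd "$inst" && sha256sum -c bom.sha256 )               # prints: core.jar: OK
```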

3.  Check the installation log files for errors after installing:

The app_server_root/logs/install/log.txt file and the app_server_root/logs/manageprofiles/profile_name_create.log file record installation and profile creation status.

If the error happens early in the installation, look for the log.txt file in the system temporary area.

•  user_home/waslogs/log_date_stamp.time_stamp.txt if installation finishes but is unsuccessful or for some other reason cannot be copied to app_server_root/logs/install/log.txt

•  user_home/waslogs/log.txt if installation is interrupted

The installation program copies the log from the temporary area to the logs directory at the end of the installation.

During installation, a single entry in the app_server_root/logs/install/log.txt file points to the temporary log file, /tmp/log.txt on platforms such as AIX or Linux. The installation program copies the file from the temporary directory to the app_server_root/logs/install/log.txt location at the end of the installation.

If the installation fails and the app_server_root/logs/install/log.txt file has only this one pointer to the temporary directory, open the log.txt file in the temporary directory. The log might have clues to the installation failure.

Uninstalling creates the app_server_root/logs/uninstall/log.txt file.

4.  Determine whether the installation problem is caused by a failing ANT script.

The app_server_root/logs/install/instconfig.log file indicates ANT configuration problems that could prevent the product from working correctly.


5.  Verify that no files exist in the app_server_root/classes directory.

6.  Uninstall the product, if possible, and reinstall after turning on tracing if the error logs do not contain enough information to determine the cause of the problem.

7.  Use the command line method to start the application server.

8.  Verify whether the server starts and loads properly by looking for a running Java process and the "Open for e-business" message in the SystemOut.log and SystemErr.log files.

You can find the SystemOut.log and SystemErr.log files in the profile_root/logs/server1 (platforms such as AIX or Linux) directory in an Application Server profile.

9.  Start the Snoop servlet to verify the ability of the Web server to retrieve an application from the Application Server.

Test the internal HTTP transport provided by the Application Server:

http://localhost:9080/snoop

Test the Web server plug-in:

http://Host_name_of_Web_server_machine/snoop
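A scripted version of this check might probe both URLs and report reachability. The curl flags below are standard, but the host names are placeholders, and with no application server running the probe simply reports UNREACHABLE:

```shell
# Probe the snoop servlet through a given host and port and report the
# result. With no application server running, it reports UNREACHABLE.
check_snoop() {
  url="http://$1:$2/snoop"
  if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
    echo "OK $url"
  else
    echo "UNREACHABLE $url"
  fi
}
check_snoop localhost 9080              # internal HTTP transport
check_snoop webserver.example.com 80    # through the Web server plug-in
```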

10.  Start the WebSphere Application Server administrative console:

http://localhost:9060/ibm/console

11.  Resolve any IP address caching problems.

 

Utilize installation factory and Centralized Installation Manager (CIM).

Installation Factory

The Installation Factory is an Eclipse-based tool that allows the creation of WebSphere Application Server installation packages in a reliable and repeatable way, tailored to your needs.

The Installation Factory can produce two types of packages:

• Customized Installation Packages (CIP)

A WebSphere Application Server CIP package includes a WebSphere Application Server product, product maintenance, profile customization, enterprise archives, other user files, as well as user-defined scripting.


• Integrated Installation Packages (IIP)

An IIP can be used to install a full WebSphere software stack including application servers, feature packs, and other user files, and might even contain multiple CIPs.

The Installation Factory allows you to create one installation package to install the full product stack you need to run your applications. Using the scripting interface, you can ship and install components not related to the WebSphere installation process.

Depending on the platform on which you are running the Installation Factory, you can build installation packages for operating systems other than the one on which the Installation Factory is running. The Installation Factory running on AIX, HP-UX, Linux, and Solaris operating systems can create installation packages for all supported platforms. The Installation Factory running on Windows can create installation packages for Windows and i5/OS.

The CIP or IIP can be installed on the target system through two methods:

• Installation wizard

• Silent installer using a response file

The benefit of the Installation Factory is mainly in terms of installation time (fix packs, for example, are directly incorporated into the installation image) and in consistency and repeatability of the installations. This gives you a quick payback for the time required to build the CIP and IIP.

You can also use the Installation Factory to create a CIM repository and add additional installation packages to your centralized installation manager repository:

Figure 2.2. Installation Factory


Centralized installation manager (CIM)

The CIM allows you to install and uninstall WebSphere Application Server binaries and maintenance patches from a centralized location (the deployment manager) to any servers in the network.

CIM is a new feature added in WebSphere Application Server V7.0. CIM is supported on the following operating systems:

• Unix-based systems

• Windows

• IBM System i

Using CIM, the following tasks can be performed:

• Installation of WebSphere Application Server V7.0 and creation of a managed profile that gets federated to the deployment manager automatically.

• Installation of the Update Installer for WebSphere Application Server V7.0.


• Installation of a customized installation package (CIP) as created using the Installation Factory.

• Central download of interim fixes and fix packs from the IBM support site. The downloaded packages are stored in the installation manager's repository.

• Installation of fixes and fix packs on nodes within the deployment manager's cell.

The repository for the CIM can either be created during the installation of WebSphere Application Server Network Deployment V7.0 or afterwards using the Installation Factory. Installation using the CIM provides a good approach to performing centralized remote installations and upgrades. The drawback is that you cannot control the naming of the profiles created when performing a standard installation. This problem can be avoided by using custom installation packages created through the Installation Factory.

Managing installation targets

You can add or remove an installation target, which is the workstation on which selected software packages might be installed. You can also edit the configuration of an existing installation target, and store the administrative ID and password of each target for later use when installing or uninstalling packages.

From the "Installation Targets" page in the administrative console, you can add additional installation targets that are located outside of the cell. For example, you can install the middleware agent on a node that is running other middleware servers that were created outside of the product cell by adding the remote workstation as a new installation target. Other tasks that you can complete to further manage your installation targets include removing installation targets, editing the configuration of installation targets, and installing a Secure Shell (SSH) public key on installation targets. To access this page, click "System administration > Centralized Installation Manager > Installation targets".

To add additional installation targets that are located outside of the cell, click "Add Installation Target". The configuration page is displayed next.

1.  Provide the host name and platform of the installation target, and optionally specify the administrative ID and password, which the centralized installation manager later uses to install one or more packages on the installation target.

Figure 2.3. Adding Installation Target


2.  Optional: Click "Test Connection" to test the connection using the administrative ID and password that you provide.

3.  Click "OK" after you specify the configuration settings to return to the "Installation targets" page. The new installation target is now displayed in the table.

 

Chapter 3

Application Assembly/Deployment and Cell Configuration/Resource Allocation

Describe the name service management of WebSphere Application Server Network Deployment V7.0 (JNDI).

Naming includes both server-side and client-side components. The server-side component is a Common Object Request Broker Architecture (CORBA) naming service (CosNaming). The client-side component is a Java Naming and Directory Interface (JNDI) service provider. JNDI is a core component in the Java Platform, Enterprise Edition (Java EE) programming model.

The WebSphere JNDI service provider can be used to interoperate with any CosNaming name server implementation. Yet WebSphere name servers implement an extension to CosNaming, and the JNDI service provider uses those WebSphere extensions to provide greater capability than CosNaming alone. Some added capabilities are binding and looking up of non-CORBA objects.

Java EE applications use the JNDI service provider supported by WebSphere Application Server to obtain references to objects related to server applications, such as enterprise bean (EJB) homes, which have been bound into a CosNaming name space.


Configuring namespace bindings

Instead of creating namespace bindings from a program, you can configure namespace bindings using the administrative console. Name servers add these configured bindings to the namespace view by reading the configuration data for the bindings. Configured bindings are created each time a server starts, even when the binding is created in a transient partition of the namespace. One major use of configured bindings is to provide fixed qualified names for server application objects.

A deployed application requires qualified fixed names if the application is accessed by thin client applications or by Java Platform, Enterprise Edition (Java EE) client applications or server applications running in another server process.

When you configure a namespace binding, you create a qualified fixed name for a server object. A fixed name does not change if the object is moved to another server. A qualified fixed name with a cell scope has the following form:

cell/persistent/fixedName

The fixedName is an arbitrary fixed name.
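To make the name forms concrete, this small sketch composes qualified names for the scopes discussed in this section; the binding name ejb/DemoHome and the node and cluster names are invented:

```shell
# Compose WebSphere qualified names for different binding scopes,
# following the cell/... name forms shown in the surrounding text.
qname() { printf '%s/%s\n' "$1" "$2"; }
qname cell/persistent             ejb/DemoHome   # cell-scoped fixed name
qname cell/nodes/node1/persistent ejb/DemoHome   # node-scoped binding
qname cell/clusters/cluster1      ejb/DemoHome   # cluster-scoped binding
```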

You can configure namespace bindings, and thus qualified fixed names, for the following objects:

• A string constant value

• An enterprise bean (EJB) home installed on a server in the cell

• A CORBA object available from a CosNaming name server

• An object bound in a WebSphere® Application Server namespace that is accessible using a Java Naming and Directory Interface (JNDI) indirect lookup

Procedure

1.  Go to the "Name space bindings" page.

In the administrative console, click "Environment > Naming > Name space bindings".

2.  Select the desired scope.

The scope determines where in the namespace the binding is created. It also affects which name servers contain the binding in the namespace that they manage. Regardless of the scope, a namespace binding is accessible from all name servers in the cell. However, the scope can affect whether the lookup can be resolved locally by a name server or whether the name server must make a remote call to another name server to resolve the binding.

Only namespace bindings created with the selected scope are visible in the collection table on the page. By changing the scope, you can see and create bindings in other scopes.

a.  Select a scope.

If you are creating a new namespace binding, refer to the table below as a guide in selecting a scope:


Table 3.1. Namespace binding scope descriptions. The scope can be a cell, node, server, or cluster.

Scope Description

Cell Cell-scoped bindings are created under the cell persistent root context.Select Cell if the namespace binding is not specific to any particular node orserver, or if you do not want the binding to be associated with any specific node,cluster or server. For example, you can use cell-scoped bindings to create fixed

qualified names for enterprise beans. Fixed qualified names do not have any node,cluster or server names embedded within them.

Node Node-scoped bindings are created under the node persistent root context for the

selected node. Select Node if the namespace binding is specific to a particularnode, or if you want the binding to be associated with a specific node.

Node-scoped bindings are created in the node agent and all application serverprocesses in the selected node. Therefore, all name servers in the node can resolve

those bindings locally. No remote invocations to other name servers are necessaryto resolve the bindings. However, name servers in other nodes must make remote

calls to the node agent in the selected node in order to resolve the bindings. Forexample, in order for a name server running in node node1 to resolve the

namecell/nodes/node2/persistent/nodeScopedConfiguredBinding ,

the name server must make a remote call to the node agent running in node2.

Any name server running in node2 can resolve that name without invoking any

other name servers.

Server Server-scoped bindings are created under the server root context for the selected

server. Select Server if a binding is to be used only by clients of an applicationrunning on a particular server, or if you want to configure a binding with the same

name on different servers which resolve to different objects. Note that two serverscan have configured bindings with the same name but resolve to different objects.

Server-scoped bindings are created in the process of the selected application

server. Therefore, the name server running in the selected application server canresolve those bindings locally. No remote invocations to other name servers arenecessary to resolve the bindings. However, all other name servers in the cell must

make remote calls to the selected server in order to resolve the bindings. Forexample, in order for the name server running in server1 in node node1 to

resolve thenamecell/nodes/node1/servers/server2/serverScopedConfiguredBinding, it must make a remote call to server2 in node1. Only the name

server in server2 in node1 can resolve that name without invoking any other

name servers.

Cluster

Cluster-scoped bindings are created under the server root context for all members of the selected cluster. Select Cluster if the namespace binding is specific to a particular cluster, or if you want the binding to be associated with a specific cluster.

Cluster-scoped bindings are created in all member processes of the selected cluster. Therefore, the name server running in each member of the selected cluster can resolve those bindings locally. No remote invocations to other name servers are necessary to resolve the bindings. However, all other name servers in the cell must make remote calls to a member of the selected cluster in order to resolve the bindings. For example, in order for a name server running in any member of cluster1 to resolve the name cell/clusters/cluster2/clusterScopedConfiguredBinding, it



must make a remote call to some member in cluster2. Only the name servers in cluster2 members can resolve that name without invoking any other name servers.

Server-scoped bindings in cluster members override cluster-scoped bindings with the same binding name. However, cluster members generally should all be configured identically, and server-scoped bindings should not be required for individual cluster members.
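The resolution rules described above reduce to a simple lookup: the scope at which a binding is created determines which name servers can resolve it without a remote call. The following is a conceptual sketch in plain Python (not wsadmin Jython, and not a WebSphere API); the return strings are only illustrative:

```python
def local_resolvers(scope, target):
    """Which name servers can resolve a configured binding without making
    a remote call, per the scope rules described above.

    scope  -- "node", "server", or "cluster"
    target -- the node, server, or cluster the binding was created for
    """
    if scope == "node":
        # Node-scoped bindings live in the node's processes: any name
        # server in that node resolves them locally.
        return "any name server in " + target
    if scope == "server":
        # Server-scoped bindings live only in the selected server process.
        return "only the name server in " + target
    if scope == "cluster":
        # Cluster-scoped bindings are created in every member process.
        return "any member of " + target
    raise ValueError("unknown scope: " + scope)
```

All other name servers in the cell must call out to one of these processes to resolve the binding.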

 

b.  Click "Apply".

3.  Create a new namespace binding.

a.  Open the "New Name Space Binding" wizard.

On the "Name space bindings" page, click "New".

b.  On the "Specify binding type" panel, select the binding type.

The namespace binding can be for a constant string value, an EJB home, a CORBA CosNaming NamingContext or CORBA leaf node object, or an object that you can look up indirectly using JNDI.

c.  On the "Specify basic properties" panel, specify the binding identifier and other properties for the binding.

d.  Optional: On the "Other context properties" panel, specify new properties to be passed to the javax.naming.InitialContext constructor.

This step applies to indirect lookup bindings only.

e.  On the "Summary" panel, verify the settings and click "Finish".

The name of the new binding is displayed in the collection table on the Name space bindings page.

4.  Optional: Edit a previously created binding.

Cell-scoped bindings are created under the cell persistent root context. Node-scoped bindings are created under the node persistent root context for the specified node. Server-scoped bindings are created under the server root context for the selected server. Cluster-scoped bindings are created under the server root context for each member of the selected cluster.

Package Java enterprise applications, including enhanced EAR files, using the Rational Assembly and Deployment Tool.


WebSphere Enhanced EAR 

A WebSphere enhanced EAR is a regular JEE EAR file, but with additional configuration information for resources required by JEE applications. While adding this extra configuration information at packaging time is not mandatory, it can simplify deployment of JEE applications to WebSphere if the environments where the application is to be deployed are similar.

When an Enhanced EAR is deployed to WebSphere Application Server, WebSphere can automatically configure the resources specified in the Enhanced EAR. This reduces the number of configuration steps required to set up the WebSphere environment to host the application.

When an Enhanced EAR is uninstalled, the resources that are defined at the application level scope are removed as well. However, resources defined at a scope other than application level are not removed because they might be in use by other applications. Resources created at the application level scope are limited in visibility to only that application.

The table below shows the resources supported by the Enhanced EAR and the scope in which they are created.

Table 3.2. Scope for resources in WebSphere Enhanced EAR file

Resource                     Scope

JDBC providers               Application
Data sources                 Application
Resource adapters            Application
JMS resources                Application
Substitution variables       Application
Class loader policies        Application
Shared libraries             Server
JAAS authentication aliases  Cell
Virtual hosts                Cell

 

The supplemental information in an Enhanced EAR is modified by using the WebSphere Application Server Deployment editor. The information itself lives in XML files in a folder called ibmconfig in the EAR file's META-INF folder.

Examining the WebSphere Enhanced EAR file

The information about the configured resources is stored in the ibmconfig subdirectory of the EAR file's META-INF directory. Expanding this directory reveals the well-known directory structure for a cell configuration, as seen in the figure below. You can also see the scope level where each resource is configured.
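To illustrate the layout, the following sketch composes the path at which such a configuration file would sit inside the EAR. The cell name defaultCell and the file name resources.xml are hypothetical examples, not values taken from this document:

```python
from pathlib import PurePosixPath

def ibmconfig_path(*segments):
    """Path of a configuration file inside the EAR, under the well-known
    META-INF/ibmconfig tree that mirrors a cell configuration layout."""
    return PurePosixPath("META-INF", "ibmconfig", *segments)

# Hypothetical cell-scoped resources file carried in an Enhanced EAR:
path = ibmconfig_path("cells", "defaultCell", "resources.xml")
```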


Figure 3.1. Enhanced EAR file contents

 

At deployment time, WebSphere Application Server uses this information to automatically create the resources.

Configure application resources (e.g., JCA resource adapters, connection factories, resource scoping, MDB activation specification, JDBC providers, data sources)

Relational resource adapters and JCA

A resource adapter is a system-level software driver that a Java application uses to connect to an enterprise information system (EIS). A resource adapter plugs into an application server and provides connectivity between the EIS, the application server, and the enterprise application.

WebSphere Application Server supports JCA versions 1.0 and 1.5, including additional configurable features for JCA 1.5 resource adapters with activation specifications that handle inbound requests.


Data access for container-managed persistence (CMP) beans is indirectly managed by the WebSphere Persistence Manager. The JCA specification supports persistence manager delegation of the data access to the JCA resource adapter without knowing the specific backend store. For relational database access, the persistence manager uses the relational resource adapter to access the data from the database.

Note: The terms J2C and JCA both refer to the J2EE Connector Architecture and are used interchangeably.

Java EE Connector Architecture and WebSphere relational resource adapters

An application server vendor extends its system once to support the Java Platform, Enterprise Edition Connector Architecture (JCA) and is then assured of seamless connectivity to multiple EISs. Likewise, an EIS vendor provides one standard resource adapter with the capability to plug into any application server that supports the connector architecture.

The product supports any resource adapter that implements version 1.0 or 1.5 of this specification. IBM includes WebSphere MQ and the Service Integration Bus with the Application Server, and IBM supplies resource adapters for many enterprise systems separately from the WebSphere Application Server package. These include, but are not limited to, the Customer Information Control System (CICS), Host On-Demand (HOD), Information Management System (IMS), and Systems, Applications, and Products (SAP) R/3.

The general approach to writing an application that uses a JCA resource adapter is to develop EJB session beans or services with tools such as Rational Application Developer. The session bean uses the javax.resource.cci interfaces to communicate with an enterprise information system through the resource adapter.
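The javax.resource.cci call pattern (connection factory, connection, interaction, execute) can be modeled abstractly. The Python classes below are conceptual stand-ins for the Java interfaces, not the real API; the EIS name and interaction spec are invented for illustration:

```python
class Interaction:
    """Stand-in for javax.resource.cci.Interaction."""
    def __init__(self, conn):
        self.conn = conn

    def execute(self, spec, record):
        # A real adapter would drive the EIS here; this model just echoes.
        return {"spec": spec, "input": record, "from": self.conn.eis}

class Connection:
    """Stand-in for javax.resource.cci.Connection."""
    def __init__(self, eis):
        self.eis = eis
        self.closed = False

    def create_interaction(self):
        return Interaction(self)

    def close(self):
        self.closed = True

class ConnectionFactory:
    """Looked up via JNDI in a real application; built directly here."""
    def __init__(self, eis):
        self.eis = eis

    def get_connection(self):
        return Connection(self.eis)

# The call pattern a session bean would follow (names are hypothetical):
cf = ConnectionFactory("CICS")
conn = cf.get_connection()
result = conn.create_interaction().execute("ECI", {"program": "TRN1"})
conn.close()
```

The essential discipline is the same as in Java: obtain the connection from the factory, perform the interaction, and close the connection so it returns to the pool.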

WebSphere Relational Resource Adapter

WebSphere Application Server provides the WebSphere Relational Resource Adapter implementation. This resource adapter provides data access through JDBC calls to access the database dynamically. The connection management is based on the JCA connection management architecture and provides connection pooling, transaction, and security support. The WebSphere RRA is installed and runs as part of WebSphere Application Server, and needs no further administration.

The RRA supports the configuration and use of data sources implemented as either JDBC data sources or Java EE Connector Architecture (JCA) connection factories. Data sources can be used directly by applications, or they can be configured for use by container-managed persistence (CMP) entity beans.

Configuring Java EE Connector connection factories in the administrative console

To access an enterprise information system (EIS), configure connection factories, which instantiate resource adapter classes for establishing and maintaining resource connections.

An application component uses a connection factory to access a connection instance, which the component then uses to connect to the underlying enterprise information system (EIS). Examples of connections include database connections, Java Message Service connections, and SAP R/3 connections.

1.  Click "Resources > Resource Adapters > Resource adapters".

2.  In the "Resource adapters" panel, select the resource adapter that you want to configure.


3.  From the "Additional Properties" heading, click "J2C connection factories".

a.  Click "New".

b.  Specify any properties for the connection factory in the "General Properties" panel.

Figure 3.2. J2C connection factory properties

 

c.  Select the authentication preference.


d.  Select the aliases for "Component-managed authentication", "Container-managed authentication", or both. Some choices for the mapping-configuration alias do not use a container-managed authentication alias, so you will not be able to select a container-managed alias if one of those mapping-configuration aliases is selected.

If you have defined security domains in the application server, you can click "Browse..." to select an authentication alias for the resource that you are configuring. Security domains allow you to isolate authentication aliases between servers. The tree view is useful in determining the security domain to which an alias belongs, and it can help you determine the servers that will be able to access each authentication alias. The tree view is tailored for each resource, so domains and aliases are hidden when you cannot use them.

e.  Click "OK".

4.  Click the name of the J2C connection factory that you created.

5.  From the "Additional Properties" heading, click "Connection pool properties".

a.  Change any values by clicking the property name.

b.  Click "OK".

6.  Click "Custom properties" from the "Additional Properties" heading.

a.  Click any property name to change its value. If the "UserName" and "Password" properties are defined, they will be overridden by the component-managed authentication alias that you specified in the previous step.

b.  Click "Save".

7.  Restart the application server for the changes to take effect.

JMS and the default messaging provider

Java Enterprise Edition (Java EE) applications (producers and consumers) access the SIBus and the bus members through the JMS API. JMS destinations are associated with SIBus destinations; a SIBus destination implements a JMS destination function. Session Enterprise JavaBeans (EJBs) use a JMS connection factory to connect to the JMS provider. Message-driven beans (MDBs) use a JMS activation specification to connect to the JMS provider.

Figure 3.3. WebSphere default messaging provider and JMS


 

Configuring an activation specification for the default messaging provider

Configure a JMS activation specification to enable a message-driven bean to communicate with thedefault messaging provider.

You create a JMS activation specification if you want to use a message-driven bean to communicate with the default messaging provider through Java EE Connector Architecture (JCA) 1.5. JCA provides Java connectivity between application servers, such as WebSphere Application Server, and enterprise information systems. It provides a standardized way of integrating JMS providers with Java EE application servers, and provides a framework for exchanging data with enterprise systems, where data is transferred in the form of messages.

One or more message-driven beans can share a single JMS activation specification.

Because a JMS activation specification is a group of messaging configuration properties, not a component, it cannot be manually started and stopped. For this reason, to prevent a message-driven bean from processing messages you must complete the following tasks:


• Stop the application that contains the message-driven bean.

• Stop the messaging engine.

All the activation specification configuration properties apart from "Name", "JNDI name", "Destination JNDI name", and "Authentication alias" are overridden by appropriately named activation-configuration properties in the deployment descriptor of an associated EJB 2.1 or later message-driven bean. For an EJB 2.0 message-driven bean, the "Destination type", "Subscription durability", "Acknowledge mode", and "Message selector" properties are overridden by the corresponding elements in the deployment descriptor. For either type of bean, the "Destination JNDI name" property can be overridden by a value specified in the message-driven bean bindings.
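For EJB 2.1 and later beans, these override rules amount to a merge in which deployment-descriptor properties win except for the four protected administrative properties. A sketch, assuming simple string-keyed property dictionaries (the property values below are invented examples):

```python
# Properties the deployment descriptor can NOT override, per the rules above.
PROTECTED = {"Name", "JNDI name", "Destination JNDI name", "Authentication alias"}

def effective_config(activation_spec, descriptor_props):
    """Merge an activation specification with the activation-configuration
    properties of an EJB 2.1+ message-driven bean: descriptor values
    override everything except the PROTECTED administrative keys.
    (The MDB bindings can still override "Destination JNDI name".)"""
    merged = dict(activation_spec)
    for key, value in descriptor_props.items():
        if key not in PROTECTED:
            merged[key] = value
    return merged

# Hypothetical example:
spec = {"Name": "MyActSpec", "JNDI name": "jms/myAS",
        "Acknowledge mode": "Auto-acknowledge"}
descriptor = {"Name": "ignored", "Acknowledge mode": "Dups-ok-acknowledge"}
merged = effective_config(spec, descriptor)
```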

1.  Start the administrative console.

2.  Display the default messaging provider. In the navigation pane, expand "Resources > JMS > JMS providers".

3.  Select the default provider for which you want to configure an activation specification.

4.  Optional: Change the "Scope" setting to the scope level at which the activation specification is to be visible to applications, according to your needs.

5.  In the content pane, under the "Additional properties" heading, click "Activation specifications".

This lists any existing JMS activation specifications for the default messaging provider in the content pane.

6.  Display the properties of the JMS activation specification. If you want to display an existing activation specification, click one of the names listed.

Alternatively, if you want to create a new activation specification, click "New", then specify the following required properties:

a.  "Name" - Type the name by which the activation specification is known for administrative purposes.

b.  "JNDI name" - Type the JNDI name that is used to bind the activation specification into the JNDI namespace.

c.  "Destination type" - Whether the message-driven bean uses a queue or topic destination.

d.  "Destination JNDI name" - Type the JNDI name that the message-driven bean uses to look up the JMS destination in the JNDI namespace.

Select the type of destination on the "Destination type" property.

e.  "Bus name" - The name of the bus to connect to.

Specify the name of the service integration bus to which connections are made. This must be the name of the bus on which the bus destination identified by the "Destination JNDI name" property is defined.

You can either select an existing bus or type the name of another bus. If you type the name of a bus that does not exist, you must create and configure that bus before the activation specification can be used.
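The required properties from steps a through e can be checked with a small validation sketch (plain Python; the property names are as shown in the panels, the sample values are invented):

```python
# The properties the "New" activation-specification panel requires,
# per steps a through e above.
REQUIRED = ("Name", "JNDI name", "Destination type",
            "Destination JNDI name", "Bus name")

def missing_properties(spec):
    """Return the required activation-specification properties that are
    absent or empty in the given property dictionary."""
    return [p for p in REQUIRED if not spec.get(p)]

# Hypothetical partially filled-in specification:
todo = missing_properties({"Name": "MyAS", "JNDI name": "jms/myAS"})
```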


7.  Specify properties for the JMS activation specification, according to your needs.

8.  Optional: Specify the JMS activation specification connection properties that influence how the default messaging provider chooses the messaging engine to which your message-driven bean application connects. By default, the environment automatically connects applications to an available messaging engine on the bus. However, you can specify extra configuration details to influence the connection process; for example, to identify special bootstrap servers, to limit connection to a subgroup of available messaging engines, to improve availability or performance, or to ensure sequential processing of messages received.

9.  Click "OK".

10.  Save your changes to the master configuration.

Data sources

Installed applications use a data source to obtain connections to a relational database. A data source is analogous to the Java Platform, Enterprise Edition (Java EE) Connector Architecture (JCA) connection factory, which provides connectivity to other types of enterprise information systems (EIS).

A data source is associated with a JDBC provider, which supplies the driver implementation classes that are required for JDBC connectivity with your specific vendor database. Application components transact directly with the data source to obtain connection instances to your database. The connection pool that corresponds to each data source provides connection management.
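The connection management that the pool provides can be sketched minimally. A real WebSphere connection pool adds timeouts, purge policies, and sizing controls that this toy model omits; it only illustrates the reuse of physical connections:

```python
class ConnectionPool:
    """Toy connection pool: hand out free connections first, create new
    ones up to max_size, and keep released connections for reuse."""

    def __init__(self, max_size=10):
        self.max_size = max_size
        self.free = []       # returned connections, available for reuse
        self.in_use = 0
        self.created = 0     # physical connections created so far

    def get_connection(self):
        if self.free:
            conn = self.free.pop()          # reuse a pooled connection
        elif self.created < self.max_size:
            self.created += 1
            conn = object()                 # stand-in for a JDBC connection
        else:
            raise RuntimeError("pool exhausted (a real pool would block)")
        self.in_use += 1
        return conn

    def release(self, conn):
        self.in_use -= 1
        self.free.append(conn)              # returned to the pool, not closed

# Usage: closing a connection in the application returns it to the pool.
pool = ConnectionPool(max_size=2)
c1 = pool.get_connection()
c2 = pool.get_connection()
pool.release(c1)
c3 = pool.get_connection()   # reuses c1 rather than creating a new one
```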

You can create multiple data sources with different settings, and associate them with the same JDBC provider. For example, you might use multiple data sources to access different databases within the same vendor database application. WebSphere Application Server requires JDBC providers to implement one or both of the following data source interfaces, which are defined by Sun Microsystems. These interfaces enable the application to run in a single-phase or two-phase transaction protocol.

• ConnectionPoolDataSource - a data source that supports application participation in local and global transactions, except two-phase commit transactions. When a connection pool data source is involved in a global transaction, transaction recovery is not provided by the transaction manager. The application is responsible for providing the backup recovery process if multiple resource managers are involved.

• XADataSource - a data source that supports application participation in any single-phase or two-phase transaction environment. When this data source is involved in a global transaction, the product transaction manager provides transaction recovery.
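The choice between the two interfaces reduces to the transaction requirement described in the bullets above. A one-line sketch of that decision:

```python
def datasource_interface(needs_two_phase_commit):
    """Pick the JDBC data source interface per the rules above:
    XADataSource when two-phase commit (with transaction recovery by the
    product transaction manager) is required, ConnectionPoolDataSource
    otherwise."""
    return "XADataSource" if needs_two_phase_commit else "ConnectionPoolDataSource"
```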

JDBC providers

Installed applications use Java Database Connectivity (JDBC) providers to interact with relational databases.

The JDBC provider object supplies the specific JDBC driver implementation class for access to a specific vendor database. To create a pool of connections to that database, you associate a data source with the JDBC provider. Together, the JDBC provider and the data source objects are functionally equivalent to the Java Platform, Enterprise Edition (Java EE) Connector Architecture (JCA) connection factory, which provides connectivity with a non-relational database.

Planning for resource scope use


Resource scope is a powerful concept to prevent duplication of resources across lower-level scopes. For example, if a data source can be used by multiple servers in a node, it makes sense to define that data source once at the node level, rather than create the data source multiple times, possibly introducing errors along the way. Also, if the data source definition needs to change (perhaps due to changes to an underlying database), the definition can be changed once and is visible to all servers within the node. The savings in time and cost should be self-evident.

Some thought needs to be put toward outlining what resources you will need for all the applications to be deployed, and at what scope to define each. You select the scope of a resource when you create it.

The following list describes the scope levels, listed in order of granularity with the most general scope first:

• Cell scope

The cell scope is the most general scope and does not override any other scope. We recommend that cell-scope resource definitions be further granularized at a more specific scope level. When you define a resource at a more specific scope, you provide greater isolation for the resource. When you define a resource at a more general scope, you provide less isolation, and the resource has greater exposure to cross-application conflicts.

The cell scope value limits the visibility of all servers to the named cell. The resource factories within the cell scope are defined for all servers within this cell and are overridden by any resource factories that are defined within application, server, cluster, and node scopes that are in this cell and have the same Java Naming and Directory Interface (JNDI) name. The resource providers that are required by the resource factories must be installed on every node within the cell before applications can bind or use them.

• Cluster scope

The cluster scope value limits the visibility to all the servers on the named cluster. The resource factories that are defined within the cluster scope are available for all the members of this cluster to use and override any resource factories that have the same JNDI name defined within the cell scope. The resource factories that are defined within the cell scope are available for this cluster to use, in addition to the resource factories that are defined within this cluster scope.

• Node scope (default)

The node scope value limits the visibility to all the servers on the named node. This is the default scope for most resource types. The resource factories that are defined within the node scope are available for servers on this node to use and override any resource factories that have the same JNDI name defined within the cell scope. The resource factories that are defined within the cell scope are available for servers on this node to use, in addition to the resource factories that are defined within this node scope.

• Server scope

The server scope value limits the visibility to the named server. This is the most specific scope for defining resources. The resource factories that are defined within the server scope are available for applications that are deployed on this server and override any resource factories that have the same JNDI name defined within the node and cell scopes. The resource factories that are defined within the node and cell scopes are available for this server to use, in addition to the resource factories that are defined within this server scope.


• Application scope

The application scope value limits the visibility to the named application. Application scope resources cannot be configured from the Integrated Solutions Console. Use Rational Application Developer Assembly and Deploy V7.5, or the wsadmin.sh tool, to view or modify the application scope resource configuration. The resource factories that are defined within the application scope are available for this application to use only. The application scope overrides all other scopes.

You can define resources at multiple scopes, but the definition at the most specific scope is used.

When selecting a scope, the following rules apply:

• The application scope has precedence over all the scopes.

• The server scope has precedence over the node, cell, and cluster scopes.

• The cluster scope has precedence over the node and cell scopes.

• The node scope has precedence over the cell scope.

When viewing resources, you can select the scope to narrow the list to just the resources defined at that scope. Alternatively, you can choose to view resources for all scopes. Resources are always created at the currently selected scope. Resources created at a given scope might be visible to a lower scope. For example, a data source created at the node level might be visible to servers within the node.
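The precedence rules above can be expressed directly as code. A sketch that resolves a JNDI name against per-scope definitions, most specific scope first (plain Python; the data source names are invented examples):

```python
# Precedence order, most specific scope first, per the rules above.
PRECEDENCE = ["application", "server", "cluster", "node", "cell"]

def resolve(jndi_name, definitions):
    """definitions maps scope -> {jndi_name: resource}. Return the resource
    from the most specific scope that defines jndi_name, or None."""
    for scope in PRECEDENCE:
        resource = definitions.get(scope, {}).get(jndi_name)
        if resource is not None:
            return resource
    return None

# Hypothetical example: the node-level definition shadows the cell-level one.
defs = {
    "cell": {"jdbc/Sample": "cell-level data source"},
    "node": {"jdbc/Sample": "node-level data source"},
}
winner = resolve("jdbc/Sample", defs)
```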

Configure WebSphere messaging (e.g., SIBus bus members and destinations).

Service integration buses

A service integration bus is a group of one or more application servers or server clusters in a WebSphere Application Server cell that cooperate to provide asynchronous messaging services. The application servers or server clusters in a bus are known as bus members. In the simplest case, a service integration bus consists of a single bus member, which is one application server.

Usually, a cell requires only one bus, but a cell can contain any number of buses. The server component that enables a bus to send and receive messages is a messaging engine.

A service integration bus provides the following capabilities:

• Any application can exchange messages with any other application by using a destination to which one application sends, and from which the other application receives.

• A message-producing application, that is, a producer, can produce messages for a destination regardless of which messaging engine the producer uses to connect to the bus.

• A message-consuming application, that is, a consumer, can consume messages from a destination (whenever that destination is available) regardless of which messaging engine the consumer uses to connect to the bus.

Different service integration buses can, if required, be connected. This allows applications that use one bus (the local bus) to send messages to destinations in another bus (a foreign bus). Note, though, that applications cannot receive messages from destinations in a foreign bus.


An application can connect to more than one bus. For example, although an application cannot receive messages from destinations in a foreign bus, if the application connects to that bus, the bus becomes a local bus and then the application can receive messages.

For example, in the following diagram, the application can send messages to destination A and destination B, but it cannot receive messages from destination B:

Figure 3.4. An application that is connected to bus A

 

In the following diagram, the application can send messages to, and receive messages from, destination A and destination B:

Figure 3.5. An application that is connected to bus A and bus B
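The send and receive rules illustrated by the two figures can be captured as two small predicates. A sketch in plain Python, modeling bus links as (local, foreign) pairs; the bus names A and B are the ones from the figures:

```python
def can_send(connected_buses, links, destination_bus):
    """An application can send to a destination if the destination's bus is
    one it is connected to, or a foreign bus linked to a connected bus.
    links is a set of (local_bus, foreign_bus) pairs."""
    return destination_bus in connected_buses or any(
        (local, destination_bus) in links for local in connected_buses)

def can_receive(connected_buses, destination_bus):
    """Receiving is only possible from a bus the application connects to."""
    return destination_bus in connected_buses

# Figure 3.4: connected to bus A only, with A linked to foreign bus B.
app_on_a = {"A"}
links = {("A", "B")}

# Figure 3.5: connected to both buses, so B is now a local bus.
app_on_a_and_b = {"A", "B"}
```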


 

A service integration bus comprises a SIB Service, which is available on each application server in the WebSphere Application Server environment. By default, the SIB Service is disabled. This means that when a server starts, it cannot undertake any messaging. The SIB Service is enabled automatically when you add a server to a service integration bus. You can choose to disable the service again by configuring the server.

A service integration bus supports asynchronous messaging; that is, a program places a message on a message queue, then proceeds with its own processing without waiting for a reply to the message. Asynchronous messaging is possible regardless of whether the consuming application is running, or whether the destination is available. Also, point-to-point and publish/subscribe messaging are supported.

After an application connects to the bus, the bus behaves as a single logical entity and the connected application does not have to be aware of the bus topology. In many cases, connecting to the bus and defining bus resources is handled by an application programming interface (API) abstraction, for example the administered JMS connection factory and JMS destination objects.

The service integration bus is sometimes referred to as the messaging bus if it provides the messaging system for JMS applications that use the default messaging provider.

Many scenarios require a simple bus topology, for example, a single server. If you add multiple servers to a single bus, you increase the number of connection points for applications to use. If you add server clusters as members of a bus, you can increase scalability and achieve high availability. Servers, however, do not have to be bus members to connect to a bus. In more complex bus topologies, multiple buses are configured and can be interconnected to form complex networks. An enterprise might deploy multiple interconnected buses for organizational reasons. For example, an


enterprise with several independent departments might want separately administered buses in each location.

Multiple-server bus with clustering

You can have a bus consisting of multiple servers, some or all of which are members of a cluster.

When servers are members of a cluster, they can run common applications on different machines. Installing an application on a cluster that has multiple servers on different machines provides high availability. If one machine fails, the other servers in the cluster do not fail.

When you configure a server bus member, that server runs a messaging engine. For many purposes, this is sufficient, but such a messaging engine can run only in the server it was created for. The server is therefore a single point of failure; if the server cannot run, the messaging engine is unavailable. By configuring a cluster bus member instead, the messaging engine can run in one server in the cluster, and if that server fails, the messaging engine can run in an alternative server.

Figure 3.6. Service integration bus with clustered server

 

Another advantage of configuring a cluster bus member is the ability to share the workload associated with a destination across multiple servers. You can deploy additional messaging engines to the cluster. A destination deployed to a cluster bus member is partitioned across the set of messaging engines that the cluster servers run. The messaging engines in the cluster each handle a share of the messages arriving at the destination.

Figure 3.7. Service integration bus with partitioned destinations
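Destination partitioning can be sketched as spreading arriving messages across the cluster's messaging engines. The round-robin assignment below is only illustrative; the product's actual partition selection is not described in this text:

```python
def partition(messages, engines):
    """Spread messages across the messaging engines of a cluster bus
    member. Round-robin assignment is a simplification; it only shows
    that each engine handles a share of the arriving messages."""
    shares = {engine: [] for engine in engines}
    for i, msg in enumerate(messages):
        shares[engines[i % len(engines)]].append(msg)
    return shares

# Hypothetical example: four messages over two messaging engines.
shares = partition(["m1", "m2", "m3", "m4"], ["ME1", "ME2"])
```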


 

To summarize, with a cluster bus member you can achieve high availability (through failover). You can also configure a cluster to achieve workload sharing, or workload sharing with high availability, depending on the policies that you configure for the messaging engines.

Bus member types and their effect on high availability and workload sharing

You can add a server to a service integration bus, to create a server bus member. You can also add a cluster to a service integration bus, to create a cluster bus member. A cluster bus member can provide scalability and workload sharing, or high availability, but a server bus member cannot.

• Adding a server to a bus

When you add a server to a service integration bus, a messaging engine is created automatically. This single messaging engine cannot participate in workload sharing with other messaging engines; it can only do that in a cluster. The messaging engine also cannot be highly available, because there are no other servers in which it can run.

• Adding a cluster to a bus

A cluster deployment can provide scalability and workload sharing, or high availability, or a combination of these aspects. This depends on the number of messaging engines in the cluster and the behavior of those messaging engines, such as whether the messaging engines can fail over to another server, or fail back when a server becomes available again.


You can use messaging engine policy assistance to create and configure messaging engines in a cluster. The following predefined messaging engine policy types are available, which support frequently used cluster configurations:

o High availability. One messaging engine is created in the cluster. It can fail over to any other server in the cluster, so it is highly available.

o Scalability. One messaging engine is created for each application server in the cluster. The messaging engines cannot fail over.

o Scalability with high availability. One messaging engine is created for each application server in the cluster. Each messaging engine can fail over to one specific server in the cluster, creating a circular pattern of availability.

You can also use messaging engine policy assistance to create a custom messaging engine policy. You can create any number of messaging engines for the cluster, and configure the messaging engines as you require. The associated core group policies and settings for the messaging engines are created automatically.
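The circular failover pattern of the Scalability with high availability policy type can be pictured with a short sketch (the server names are invented for illustration; the actual placement of messaging engines is managed by the HAManager):

```python
def circular_failover(servers):
    """Map each messaging engine's preferred server to its single
    failover target, forming a circular pattern of availability."""
    n = len(servers)
    return {servers[i]: servers[(i + 1) % n] for i in range(n)}

# Invented server names for a three-member cluster.
cluster = ["server1", "server2", "server3"]
print(circular_failover(cluster))
# {'server1': 'server2', 'server2': 'server3', 'server3': 'server1'}
```

Every server hosts one engine and backs up exactly one other, so a single failure never loses a partition.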

If you do not use messaging engine policy assistance, when you add a server cluster to a service integration bus, a single messaging engine is created automatically. This messaging engine uses the default SIBus core group policy that already exists in WebSphere Application Server. The policy allows the messaging engine to fail over to any server in the cluster. You can then add further messaging engines if required. The cluster deployment depends on the number of messaging engines in the cluster and the policy bound to the high availability group (HAGroup) of each messaging engine.

If there is only one messaging engine in the cluster and you deploy a destination to that cluster, the destination is localized by that messaging engine. All messaging workload for that destination is handled by that messaging engine; the messaging workload cannot be shared. The availability characteristics of the destination are the same as the availability characteristics of the messaging engine.

You can benefit from increased scalability by introducing additional messaging engines to the cluster. When you deploy a destination to the cluster, it is localized by all the messaging engines in the cluster and the destination becomes partitioned across the messaging engines. The messaging engines can share all traffic passing through the destination, reducing the impact of one messaging engine failing. The availability characteristics of each destination partition are the same as the availability characteristics of the messaging engine the partition is localized by.

If you do not use messaging engine policy assistance, you control the availability behavior of each messaging engine by modifying the core group policy that the HAManager applies to the HAGroup of the messaging engine.

The simplest way to create and configure messaging engines in a cluster is to add a cluster to a bus and use messaging engine policy assistance with one of the predefined messaging engine policy types. If you are familiar with creating messaging engines and configuring messaging engine behavior, you can use messaging engine policy assistance and the custom messaging engine policy type. To add a cluster to a bus without using messaging engine policy assistance, you should be familiar with all the creation and configuration steps involved, for example, creating a messaging engine, configuring core group policies, and using match criteria.

Automate deployment tasks with scripting.


Automating application deployment is something to consider if it is done more than one time. Successful automation provides an error-free and consistent application deployment approach. Most application deployments involve not only installing the application itself, but also creating other WebSphere objects and configuring the Web servers, file systems, and so on. These tasks can be automated using shell scripting (depending on the operating system), Jacl (a Java implementation of the Tcl scripting language), or Jython.

Getting started with wsadmin scripting

The WebSphere Application Server wsadmin.sh tool provides the ability to run scripts. The wsadmin tool supports a full range of product administrative activities.

The following figure illustrates the major components involved in a wsadmin scripting solution:

Figure 3.8. wsadmin scripting

 

The wsadmin tool supports two scripting languages: Jacl and Jython. Five objects are available when you use scripts:

• AdminControl: Use to run operational commands.

• AdminConfig: Use to run configuration commands that create or modify WebSphere Application Server configuration elements.

• AdminApp: Use to administer applications.

• AdminTask: Use to run administrative commands.

• Help: Use to obtain general help.

The scripts use these objects to communicate with MBeans that run in WebSphere Application Server processes. MBeans are Java objects that represent Java Management Extensions (JMX) resources. JMX is an optional package addition to Java 2 Platform Standard Edition (J2SE). JMX is a technology that provides a simple and standard way to manage Java objects.

Important: Some wsadmin scripts, including the AdminApp install, AdminApp update, and some AdminTask commands, require that the user ID under which the server is running has read permission to the files that are created by the user that is running wsadmin scripting. For example, if the application server is running under user1, but you are running wsadmin scripting under user2, you might encounter exceptions involving a temporary directory. When user2 runs wsadmin scripting to deploy an application, a temporary directory for the enterprise application archive (EAR) file is created. However, when the application server attempts to read and unzip the EAR file as user1, the process fails. It is not recommended that you set the umask value of the user that is running wsadmin scripting to 022 or 023 to work around this issue, because that approach makes all of the files that are created by the user readable by other users. To resolve this issue, consider the following approaches based on your administrative policies:

• Run wsadmin scripting with the same user ID as the user that runs the deployment manager or application server. A root user can switch the user ID to complete these actions.

• Set the group ID of the user that is running the deployment manager or application server to be the same group ID as the user that is running wsadmin scripting. Also, set the umask value of the user that is running wsadmin scripting to at least a umask 027 value so that files created by wsadmin scripting can be read by members of the group.

• Run wsadmin scripting from a different machine. This approach forces files to be transferred and bypasses the file copy permission issue.

Launching wsadmin

The wsadmin.sh command file resides in the bin directory of every profile. Start wsadmin from a command prompt with the command:

profile_root/bin/wsadmin.sh

Note that the wsadmin command also exists in the bin directory of the install_root directory. If you start wsadmin from this location, you must be careful to specify the profile to work with in the command. If you do not specify the profile, the default profile will be chosen.

Example below illustrates how to start wsadmin. In this example, the wsadmin command is used to connect to the job manager. It is issued from the bin directory of the job manager profile, so the profile does not need to be specified. The -lang argument indicates Jython will be used (Jacl is the default):

 

/opt/IBM/WebSphere/AppServer/profiles/jmgr40/bin>wsadmin.sh -lang jython
WASX7209I: Connected to process "jobmgr" on node jmgr40node using SOAP connector; The type of process is: JobManager


WASX7031I: For help, enter: "print Help.help()"

wsadmin>

 

wsadmin syntax:

wsadmin.sh [ -h(elp) ] [ -? ]
           [ -c <command> ]
           [ -p <properties_file_name> ]
           [ -profile <profile_script_name> ]
           [ -f <script_file_name> ]
           [ -javaoption java_option ]
           [ -lang language ]
           [ -wsadmin_classpath class_path ]
           [ -profileName profile ]
           [ -conntype
               SOAP [-host host_name] [-port port_number]
                    [-user userid] [-password password] |
               RMI [-host host_name] [-port port_number]
                   [-user userid] [-password password] |
               JSR160RMI [-host host_name] [-port port_number]
                         [-user userid] [-password password] |
               IPC [-ipchost host_name] [-port port_number]
                   [-user userid] [-password password] |
               NONE ]
           [ -jobid <jobid_string> ]
           [ -tracefile <trace_file> ]
           [ -appendtrace <true/false> ]
           [ script parameters ]

Command and script invocation

The wsadmin.sh command can be invoked in three different ways.

Note: For simplicity, the examples assume that:

• wsadmin.sh is executed from the profile_root/bin directory, so it is not necessary to specify the profile name, host, and port.


• Administrative security is disabled. In reality, you will need to specify the username and password when you invoke wsadmin.

1.  Invoking a single command (-c)

The -c option is used to execute a single command using wsadmin. In the example below, we use the AdminControl object to query the node name of the WebSphere server process:

/opt/IBM/WebSphere/AppServer/profiles/jmgr40/bin>wsadmin.sh -lang jython -c AdminControl.getNode()

WASX7209I: Connected to process "jobmgr" on node jmgr40node using SOAP connector; The type of process is: JobManager

'jmgr40node'

2.  Running script files (-f)

The -f option is used to execute a script file. Example below shows a two-line Jython script named myScript.py. The script has a .py extension to reflect the Jython language syntax of the script. The extension has no significance in wsadmin; the com.ibm.ws.scripting.defaultLang property or -lang parameter is used to determine the language used. If the property setting is not correct, use the -lang option to identify the scripting language, because the default is Jacl.

Jython script:

print "This is an example Jython script"

print ""+ AdminControl.getNode()+""

Running a Jython script in wsadmin:

/opt/IBM/WebSphere/AppServer/profiles/jmgr40/bin>wsadmin.sh -f myScript.py -lang jython
WASX7209I: Connected to process "dmgr" on node dmgr40node using SOAP connector; The type of process is: DeploymentManager

This is an example Jython script

dmgr40node

3.  Invoking commands interactively

The command execution environment can be run in interactive mode, so you can invoke multiple commands without the overhead of starting and stopping the wsadmin environment for every single command. Run the wsadmin.sh command without the command (-c) or script file (-f) options to start the interactive command execution environment, as shown in the example below:

/opt/IBM/WebSphere/AppServer/profiles/jmgr40/bin>wsadmin.sh -lang jython

WASX7209I: Connected to process "dmgr" on node dmgr40node using SOAP connector; The type of process is: DeploymentManager

WASX7031I: For help, enter: "print Help.help()"

wsadmin>

From the wsadmin> prompt, the WebSphere administrative objects and built-in language objects can be invoked, as shown in the example below. Simply type the commands at the wsadmin> prompt:

wsadmin>AdminControl.getNode()

'dmgr40node'

wsadmin>

End the interactive execution environment by typing quit and pressing the "Enter" key.

Manage the plug-in configuration file (e.g., regenerate, edit, propagate, etc.)

Web servers and WebSphere Application Server Plug-in


Most WebSphere Application Server topologies will have a Web server which receives HTTP-based requests from clients. For security reasons, the Web server should be placed in a separate network zone secured by firewalls (a DMZ).

Usually the Web server, in conjunction with the WebSphere Application Server Plug-in, provides the following functionality in the topology:

• Serves requests for static HTTP content like HTML files, images, and so forth.

• Forwards requests for dynamic content, such as JavaServer Pages (JSPs), servlets, and portlets, to the appropriate WebSphere Application Server through the WebSphere Application Server Plug-in.

• Allows caching of response fragments using the Edge Side Include (ESI) cache.

• Breaks the Secure Sockets Layer (SSL) connection from the client (unless this is done by another device in the architecture) and optionally opens a separate secured connection from the Web server to the Web container on the application server system.

WebSphere Application Server comes with Web server plug-ins for all supported Web servers.

The plug-in uses a configuration file (plugin-cfg.xml) that contains settings describing how to pass requests to the application server. The configuration file is generated on the application server. Each time a change on the application server affects request routing (for example, a new application is installed), the plug-in configuration file must be regenerated and propagated to the Web server machine again.

Note: In a stand-alone topology, only unmanaged Web servers are possible. This means the plug-in must be manually pushed out to the Web server system. The exception to this is if you are using IBM HTTP Server. The application server can automatically propagate the plug-in configuration file to IBM HTTP Server, even though it is an unmanaged node, by using the administrative instance of IBM HTTP Server.

Figure 3.9. IBM HTTP Server (IHS) as unmanaged node (remote)

 


Figure 3.10. Web Server on a managed node (local)

 

Web server configuration

Plug-in configuration involves configuring the Web server to use the binary plug-in module that WebSphere Application Server provides. Plug-in configuration also includes updating the plug-in XML configuration file to reflect the current application server configuration. The binary module uses the XML file to help route Web client requests.

After installing a supported Web server, you must install a binary plug-in module for the Web server. The plug-in module lets the Web server communicate with the application server. The Plug-ins installation wizard installs the Web server plug-in, configures the Web server, and creates a Web server definition in the configuration of the application server. The wizard uses the following files to configure a plug-in for the Web server that you select:

• The Web server configuration file on the Web server machine, such as the httpd.conf file for IBM HTTP Server.

The Web server configuration file is installed as part of the Web server. The wizard must reconfigure the configuration file for a supported Web server. Configuration consists of adding directives that identify the locations of two files:

1.  The binary plug-in file
2.  The plugin-cfg.xml configuration file

• The binary Web server plug-in file that the Plug-ins installation wizard installs on the Web server machine.

An example of a binary plug-in module is the mod_ibm_app_server_http.so file for IBM HTTP Server on the Linux platform.

The binary plug-in file does not change. However, the configuration file for the binary plug-in is an XML file. The application server changes the configuration file when certain changes to your WebSphere Application Server configuration occur. The binary module reads the XML file to adjust settings and to route requests to the application server.

• The plug-in configuration file, plugin-cfg.xml, on the application server machine that you propagate (copy) to a Web server machine.

The plug-in configuration file is an XML file with settings that you can tune in the administrative console. The file lists all of the applications installed on the Web server definition. The binary module reads the XML file to adjust settings and to route requests to the application server.

The standalone application server regenerates the plugin-cfg.xml file in the profile_root/config/cells/cell_name/nodes/web_server_name_node/servers/web_server_name directory. Regeneration occurs whenever a change occurs in the application server configuration that affects deployed applications.

After regeneration, propagate (copy) the file to the Web server machine. The binary plug-in then has access to the most current copy of its configuration file.

The Web server plug-in configuration service automatically regenerates the plugin-cfg.xml file after certain events that change the configuration. The configuration service automatically propagates the plugin-cfg.xml file to an IBM HTTP Server machine when the file is regenerated. You must manually copy the file to other Web servers.

• The default (temporary) plug-in configuration file, plugin-cfg.xml, on the Web server machine.

The Plug-ins installation wizard creates the temporary plugin-cfg.xml file in the plugins_root/config/web_server_name directory. The wizard creates the file for every remote installation scenario, at the same time that it installs the binary plug-in module for the Web server.

The default file is a placeholder that you must replace with the plugin-cfg.xml file from the Web server definition on the application server. The default file is a replica of the file that the application server creates for a default standalone application server that has the samples installed.

Run the configureweb_server_name script from the app_server_root/bin directory of the application server machine for a remote installation, or directly from the plugins_root/bin directory for a local installation. The script creates the Web server definition in the configuration files of the default profile. To configure a profile other than the default, edit the configureweb_server_name script and use the -profileName parameter to identify a different profile.

After the Web server definition is created, the Web server plug-in configuration service within the application server creates the first plugin-cfg.xml file in the Web server definition on the application server machine. If you install an application, create a virtual host, or do anything that changes the configuration, you must propagate the updated plugin-cfg.xml file from the application server machine to the Web server machine to replace the default file.

• The configureweb_server_name script that you copy from the Web server machine to the application server machine.

The Plug-ins installation wizard creates the configureweb_server_name script on the Web server machine in the plugins_root/bin directory. If one machine in a remote scenario is running under an operating system like AIX or Linux and the other machine is running under Windows, use the script created in the plugins_root/bin/crossPlatformScripts directory. The script is created for remote installation scenarios only.

Copy the script from the Web server machine to the app_server_root/bin directory on a remote application server machine. You do not have to copy the script on a local installation. Run the script to create a Web server definition in the configuration of the application server.

When using IBM HTTP Server, also configure the IBM HTTP Administration Server. The IBM HTTP Administration Server works with the administrative console to manage Web server definitions. Also, use the administrative console to update your Web server definition with remote Web server management options. Click Servers > Server Types > Web servers > web_server_name to see configuration options. For example, click "Remote Web server management" to change such properties as:

o  Host name
o  Administrative port
o  User ID
o  Password

Implementing a web server plug-in

1.  Use the administrative console to change the settings in the plug-in configuration file.

When setting up your web server plug-in, you must decide whether to have the configuration automatically generated in response to a configuration change. When the web server plug-in configuration service is enabled and any of the following conditions occur, the plug-in configuration file is automatically generated:

•  When the web server is created or saved
•  When an application is installed
•  When an application is uninstalled
•  When the virtual host definition is updated

You can either use the administrative console, or issue the GenPluginCfg command to regenerate your plugin-cfg.xml file.


NOTE: You must delete the plugin-cfg.xml file in the profile_root/config/cells directory before you complete this task. Otherwise, configuration changes do not persist to the plugin-cfg.xml file.

Complete the following steps to regenerate your plugin-cfg.xml file by using the administrative console:

e.  Select Servers > Server Types > Web Servers > web_server_name > Plug-in properties.

Figure 3.11. Plug-in properties

 

f.  Select Automatically generate plug-in configuration file, or click one or more of the following topics to manually configure the plugin-cfg.xml file:

•  Caching


•  Request and response

•  Request routing

•  Custom Properties

NOTE: Do not manually update the plugin-cfg.xml file. Any manual updates you make for a web server are overridden whenever the plugin-cfg.xml file for that web server is regenerated.

g.  Click OK.

h.  You might need to stop the application server, and then restart the application server for the web server to locate the plugin-cfg.xml file.

Propagate the plug-in configuration. The plug-in configuration file, plugin-cfg.xml, is automatically propagated to the web server if the web server plug-in configuration service is enabled and one of the following conditions is true:

Figure 3.12. Propagating the plugin-cfg.xml

 

• The web server is a local web server, which means that the web server is located on the same workstation as an application server.

•  The web server is a remote IBM HTTP Server Version 7.0 that has a running IBM HTTP Server administration server.

If neither of these conditions is true, you must manually copy the plugin-cfg.xml file to the installation location of the remote web server. Copy the plugin-cfg.xml file in app_server_root/profiles/profile_name/config/cells/../../nodes/../servers/web_server_name to the web server host location, which is Plugin_Install_Root/config/web_server_name/.

NOTE: If you use the FTP function to copy the file, and the configuration reload fails, check the file permissions on the plugin-cfg.xml file and make sure that they are set to rw-r--r--. If the file permissions are not correct, the web server is not able to access the new version of the file, which causes the configuration reload to fail.

If the file permissions are incorrect, issue the following command to change the file permissions to the appropriate settings:

chmod 644 plugin-cfg.xml

The remote web server installation location is the location you specified when you created thenode for this web server.
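The relationship between the octal mode 644 and the rw-r--r-- string above can be illustrated with a small Python sketch (not part of WebSphere; purely for clarity):

```python
import stat

def mode_string(mode):
    """Render a numeric permission mode as an ls-style rwx string."""
    bits = [
        (stat.S_IRUSR, "r"), (stat.S_IWUSR, "w"), (stat.S_IXUSR, "x"),
        (stat.S_IRGRP, "r"), (stat.S_IWGRP, "w"), (stat.S_IXGRP, "x"),
        (stat.S_IROTH, "r"), (stat.S_IWOTH, "w"), (stat.S_IXOTH, "x"),
    ]
    return "".join(ch if mode & bit else "-" for bit, ch in bits)

# chmod 644 -> owner read/write, group read-only, others read-only
print(mode_string(0o644))  # rw-r--r--
```

The web server process only needs to read plugin-cfg.xml, so read permission for group and others (the two r-- triples) is sufficient.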

Request routing using plug-in

The Web Server plug-in uses an XML configuration file to determine whether a request is for the Web Server or the Application Server.

When a request reaches the Web Server, the URL is compared to those managed by the plug-in. If a match is found, the plug-in configuration file contains the information needed to forward the request to the Application Server's web container using the web container inbound chain.

For example, let's say you make a request to the http://localhost:80/snoop URL. The Web Server Plug-in will check the /snoop URL to find out how it should be handled. It will check whether there is a matching UriGroup element in plugin-cfg.xml:

 

<UriGroup Name="default_host_cluster1_URIs">
  <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/snoop/*"/>
</UriGroup>

In this case it knows that the /snoop/* URL is for dynamic content, so the next part is how to route it to the correct server. It will read the value of the Name attribute of the UriGroup, which is default_host_cluster1_URIs, and the name of the cluster is cluster1. It will use these values to find out the virtual host and cluster:

 

<Route ServerCluster="cluster1" UriGroup="default_host_cluster1_URIs" VirtualHostGroup="default_host" />

 

<VirtualHostGroup Name="default_host">
  <VirtualHost Name="*:9080"/>
  <VirtualHost Name="*:80"/>
  <VirtualHost Name="*:9443"/>
  <VirtualHost Name="*:5060"/>
  <VirtualHost Name="*:5061"/>
  <VirtualHost Name="*:443"/>
  <VirtualHost Name="*:9081"/>
  <VirtualHost Name="*:9082"/>
</VirtualHostGroup>

 

<ServerCluster CloneSeparatorChange="false" GetDWLMTable="false"
    IgnoreAffinityRequests="true" LoadBalance="Round Robin"
    Name="cluster1" PostBufferSize="64" PostSizeLimit="-1"
    RemoveSpecialHeaders="true" RetryInterval="60">

  <Server CloneID="14dtuu8g3" ConnectTimeout="0" ExtendedHandshake="false"
      LoadBalanceWeight="2" MaxConnections="-1" Name="dmgrNode01_server2"
      ServerIOTimeout="0" WaitForContinue="false">
    <Transport Hostname="dmgr.webspherenotes.com" Port="9081" Protocol="http"/>
    <Transport Hostname="dmgr.webspherenotes.com" Port="9444" Protocol="https">
      <Property Name="keyring" Value="C:\Cert\HTTPServer\Plugins\config\webserver2\plugin-key.kdb"/>
      <Property Name="stashfile" Value="C:\Cert\HTTPServer\Plugins\config\webserver2\plugin-key.sth"/>
    </Transport>
  </Server>

  <Server CloneID="14dtuueci" ConnectTimeout="0" ExtendedHandshake="false"
      LoadBalanceWeight="2" MaxConnections="-1" Name="dmgrNode01_server4"
      ServerIOTimeout="0" WaitForContinue="false">
    <Transport Hostname="dmgr.webspherenotes.com" Port="9082" Protocol="http"/>
    <Transport Hostname="dmgr.webspherenotes.com" Port="9445" Protocol="https">
      <Property Name="keyring" Value="C:\Cert\HTTPServer\Plugins\config\webserver2\plugin-key.kdb"/>
      <Property Name="stashfile" Value="C:\Cert\HTTPServer\Plugins\config\webserver2\plugin-key.sth"/>
    </Transport>
  </Server>

  <PrimaryServers>
    <Server Name="dmgrNode01_server2"/>
    <Server Name="dmgrNode01_server4"/>
  </PrimaryServers>

</ServerCluster>


The plug-in now knows that cluster1 has two servers, dmgrNode01_server2 and dmgrNode01_server4, and from the cluster definition it can find their HTTP and HTTPS ports and forward the request to either server. The cluster definition also states that the load balancing algorithm is Round Robin.
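The Round Robin selection between the two cluster members can be sketched roughly as follows (a toy model using the server names and LoadBalanceWeight values from the plugin-cfg.xml above; the real plug-in also honors session affinity and server health, which this sketch omits):

```python
def weighted_round_robin(weights):
    """Yield server names in a simple weighted round-robin order."""
    while True:
        remaining = dict(weights)  # requests left for each server this cycle
        while any(remaining.values()):
            for name, left in remaining.items():
                if left > 0:
                    remaining[name] -= 1
                    yield name

# Server names and weights from the example ServerCluster definition.
rr = weighted_round_robin({"dmgrNode01_server2": 2, "dmgrNode01_server4": 2})
print([next(rr) for _ in range(4)])
# ['dmgrNode01_server2', 'dmgrNode01_server4', 'dmgrNode01_server2', 'dmgrNode01_server4']
```

With equal weights this degenerates to plain alternation; unequal LoadBalanceWeight values would send proportionally more requests to the heavier server.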

Configure class loader parameters.

 

Application server class loader policies

For each application server in the system, the class loader policy can be set to Single or Multiple. These settings can be found in the administrative console by selecting Servers > Server Types > WebSphere application servers > server_name.

Figure 3.13. Application server classloader settings


 

• When the application server class loader policy is set to Single, a single application class loader is used to load all EJBs, utility JARs, and shared libraries within the application server (JVM). If the WAR class loader policy then has been set to Single class loader for application, the Web module contents for this particular application are also loaded by this single class loader.

• When the application server class loader policy is set to Multiple, the default, each application will receive its own class loader for loading EJBs, utility JARs, and shared libraries. Depending on whether the WAR class loader policy is set to Class loader for each WAR file in application or Single class loader for application, the Web module might or might not receive its own class loader.

Here is an example to illustrate. Suppose that you have two applications, Application1 and Application2, running in the same application server. Each application has one EJB module, one utility JAR, and two Web modules. If the application server has its class loader policy set to Multiple and the class loader policy for all the Web modules is set to Class loader for each WAR file in application, the result is as shown in the figure below:

Figure 3.14. Class loader policies: Example 1

 

Each application is completely separated from the other, and each Web module is also completely separated from the other one in the same application. WebSphere's default class loader policies result in total isolation between the applications and the modules.

If we now change the class loader policy for the WAR2-2 module to Single class loader for application, the result is shown in the figure below:

Figure 3.15. Class loader policies: Example 2


 

Web module WAR2-2 is loaded by Application2's class loader, and classes in, for example, Util2.jar are able to see classes in WAR2-2's /WEB-INF/classes and /WEB-INF/lib directories.

As a last example, if we change the class loader policy for the application server to Single and also change the class loader policy for WAR2-1 to Single class loader for application, the result is as shown in the figure below:

Figure 3.16. Class loader policies: Example 3


 

There is now only a single application class loader loading classes for both Application1 and Application2. Classes in Util1.jar can see classes in EJB2.jar, Util2.jar, WAR2-1.war, and WAR2-2.war. The classes loaded by the application class loader still cannot, however, see the classes in the WAR1-1 and WAR1-2 modules, because a class loader can only find classes by going up the hierarchy, never down.

Class loading/delegation mode

WebSphere's application class loader and WAR class loader both have a setting called the class loader order.

Figure 3.17. Application server classloader settings


 

This setting determines whether the class loader order should follow the normal Java class loader delegation mechanism, or override it.

There are two possible options for the class loading mode:

• Classes loaded with parent class loader first

• Classes loaded with local class loader first (parent last)


In previous WebSphere releases, these settings were called PARENT_FIRST and PARENT_LAST,

respectively.

The default value for the class loading mode is Classes loaded with parent class loader first. This mode causes the class loader to first delegate the loading of classes to its parent class loader before attempting to load the class from its local class path. This is the default policy for standard Java class loaders.

If the class loading policy is set to Classes loaded with local class loader first (parent last), the class loader attempts to load classes from its local class path before delegating the class loading to its parent. This policy allows an application class loader to override and provide its own version of a class that exists in the parent class loader.
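The parent-last algorithm described above can be sketched in plain Java. This is an illustrative stand-alone sketch, not WebSphere's actual loader implementation; the class names are invented for the example:

```java
import java.net.URL;
import java.net.URLClassLoader;

// Minimal sketch of "parent last" (local class loader first) delegation.
class ParentLastClassLoader extends URLClassLoader {
    ParentLastClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            // 1. Reuse a class this loader has already defined.
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    // 2. Try the local class path FIRST (the "parent last" twist).
                    c = findClass(name);
                } catch (ClassNotFoundException e) {
                    // 3. Only then delegate to the parent (standard loaders do this first).
                    c = super.loadClass(name, false);
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}

public class DelegationDemo {
    public static void main(String[] args) throws Exception {
        ParentLastClassLoader loader =
                new ParentLastClassLoader(new URL[0], DelegationDemo.class.getClassLoader());
        // With an empty local class path, step 2 fails and the parent still
        // supplies core classes, so java.* types are shared as usual.
        Class<?> c = loader.loadClass("java.lang.String");
        System.out.println(c == String.class);
    }
}
```

If the local class path contained its own copy of a library class, step 2 would succeed and that copy would shadow the parent's version, which is exactly the behavior the parent-last setting provides.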

NOTE: The administrative console is a bit confusing at this point. On the settings page for a Web module, the two options for class loader order are Classes loaded with parent class loader first and Classes loaded with local class loader first (parent last). However, in this context, the "local class loader" really refers to the WAR class loader, so the option Classes loaded with local class loader first should really be called Classes loaded with WAR class loader first.

Assume that you have an application, similar to Application1 in the previous examples, and it uses

the popular log4j package to perform logging from both the EJB module and the two Web modules.

Also assume that each module has its own, unique, log4j.properties file packaged into the

module. You could configure log4j as a utility JAR so you would only have a single copy of it in your EAR file.

However, if you do that, you might be surprised to see that all modules, including the Web modules, load the log4j.properties file from the EJB module. The reason is that when a Web module

initializes the log4j package, the log4j classes are loaded by the application class loader. Log4j is configured as a utility JAR. Log4j then looks for a log4j.properties file on its class path and finds

it in the EJB module.

Even if you do not use log4j for logging from the EJB module and the EJB module does not, therefore, contain a log4j.properties file, log4j does not find the log4j.properties file in any of the Web modules anyway. The reason is that a class loader can only find classes by going up the hierarchy, never down.

To solve this problem, you can use one of the following approaches:

• Create a separate file, for example, Resource.jar, configure it as a utility JAR, move

all log4j.properties files from the modules into this file, and make their names unique

(like war1-1_log4j.properties, war1-2_log4j.properties,

and ejb1_log4j.properties). When initializing log4j from each module, tell it to load the

proper configuration file for the module instead of the default (log4j.properties).

• Keep the log4j.properties for the Web modules in their original place (/WEB-INF/classes), add log4j.jar to both Web modules (/WEB-INF/lib), and set the class loading mode for the Web modules to Classes loaded with local class loader first (parent last). When initializing log4j from a Web module, it loads the log4j.jar from the module itself, and log4j finds the log4j.properties on its local classpath, the Web module itself. When the EJB module initializes log4j, it loads from the application class loader and it finds the log4j.properties file on the same class path, the one in the EJB1.jar file.

• If possible, merge all log4j.properties files into one and place it at the application class loader level (in a Resource.jar file, for example).
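The first approach relies only on standard class loader resource lookup. Here is a self-contained sketch of that mechanism; the temp directory stands in for a Resource.jar on the application class path, and the file name war1-1_log4j.properties follows the unique-name convention suggested above:

```java
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class UniqueConfigDemo {
    // Loads a module-specific configuration resource through the given loader,
    // the way a module would ask for its own uniquely named log4j config.
    static Properties loadModuleConfig(ClassLoader loader, String resourceName) throws Exception {
        Properties props = new Properties();
        try (InputStream in = loader.getResourceAsStream(resourceName)) {
            if (in == null) {
                throw new IllegalStateException(resourceName + " not found on class path");
            }
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the Resource.jar contents on the application class path.
        Path dir = Files.createTempDirectory("resources");
        Files.writeString(dir.resolve("war1-1_log4j.properties"),
                "log4j.rootLogger=DEBUG, stdout\n");

        // A class loader whose class path contains the "resource JAR".
        try (URLClassLoader appLoader =
                new URLClassLoader(new URL[] { dir.toUri().toURL() })) {
            // The module loads its own file by unique name, so it cannot
            // accidentally pick up another module's log4j.properties.
            Properties p = loadModuleConfig(appLoader, "war1-1_log4j.properties");
            System.out.println(p.getProperty("log4j.rootLogger"));
        }
    }
}
```

Each module would pass its own unique name when initializing log4j, so the shared class loader no longer matters for picking the right configuration.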


Shared Libraries

Shared libraries are files used by multiple applications. Examples of shared libraries are commonly used frameworks like Apache Struts or log4j. You typically use shared libraries to point to a set of JARs and associate those JARs with an application, a Web module, or the class loader of an application server. Shared libraries are especially useful when you have different versions of the same framework that you want to associate with different applications.

Shared libraries are defined using the administration tools. They consist of a symbolic name, a Java class path, and a native path for loading JNI libraries. They can be defined at the cell, node, server, or cluster level. However, simply defining a library does not cause the library to be loaded. You must associate the library with an application, a Web module, or the class loader of an application server for the classes represented by the shared library to be loaded. Associating the library with the class loader of an application server makes the library available to all applications on the server.

NOTE: If you associate a shared library to an application, do not associate the same library to theclass loader of an application server.

You can associate the shared library to an application in one of two ways:

• You can use the administrative console. The library is added using the Shared libraries

references link under the References section for the enterprise application.

• You can use the manifest file of the application and the shared library. The shared library contains a manifest file that identifies it as an extension. The dependency on the library is declared in the application's manifest file by listing the library extension name in an extension list.
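As a rough sketch of that manifest-based association, following the standard Java optional-package convention (the names log4jLib and com/example/log4j are made up for illustration), the shared library JAR's MANIFEST.MF declares an extension name:

```
Extension-Name: com/example/log4j
Specification-Version: 1.2.15
```

and the depending application module's MANIFEST.MF references it in an extension list:

```
Extension-List: log4jLib
log4jLib-Extension-Name: com/example/log4j
log4jLib-Specification-Version: 1.2.15
```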

Shared libraries are associated with the class loader of an application server using the administrative tools. The settings are found in the Server Infrastructure section. Expand Java and Process Management, select Class loader, and then click the New button to define a new class loader. After you have defined a new class loader, you can modify it and, using the Shared library references link, associate it with the shared libraries you need.

Isolated Shared Libraries

An Isolated Shared Library is another way to deploy application artifacts into the WebSphere runtime environment. An Isolated Shared Library can be associated or shared with one or many WebSphere application and Web module class loaders. It provides a mechanism for sharing a common set of classes across a subset of the applications within the WebSphere Application Server. This is similar to a server associated shared library, but an Isolated Shared Library is not typically used across ALL applications.

An Isolated Shared Library associated with the application class loader can be thought of as an application associated shared library that can be shared across a subset of the applications in the WebSphere Application Server. However, unlike application associated shared libraries, Isolated Shared Libraries DO NOT have visibility to application classes loaded by the application class loader.

Types of shared libraries:

• Application associated shared libraries have their classpath added to the application's class loader classpath. Each application has its own instances of the shared library's classes. This allows one application to specify version XXX while another application specifies version YYY. The drawback with using application shared libraries is that every application using the shared library has its own class instances. For example, if you want six applications to


have version XXX and two applications to have version YYY, each will have its own set of class objects, resulting in eight instances of an object loaded into memory.

• Server associated shared libraries have their own class loader in the WebSphere Application Server class loader hierarchy. This allows a single instance of the classes to be shared by all applications. The drawback with server associated shared libraries is that they are shared across all applications on that server, which prevents version handling of classes. If one shared library contained version X and one contained version Y, the version higher in the class loader hierarchy (assuming parent-first delegation) wins.

Application and Web module associated shared libraries are not Isolated Shared Libraries by default. Specifying a server associated shared library as an Isolated Shared Library has no effect.

WebSphere Application Server version 7.0 includes a new feature, Isolated Shared Libraries, to provide a way to share a common set of classes across a subset of the applications within the WebSphere Application Server. The major benefit of Isolated Shared Libraries is the ability to reduce the number of class instances that are loaded in a JVM, reducing the JVM's memory footprint. To better understand the benefit of Isolated Shared Libraries, look at how a solution deployer can decide to share an application associated shared library across multiple applications. The solution deployer has several possibilities:

Using an Isolated Shared Library, the solution deployer can combine pieces of the prior two solutions. Isolated Shared Libraries each have their own class loader, allowing a single instance of the classes to be shared across the applications. Each application can specify which Isolated Shared Libraries it wants to reference, and different applications can reference different versions of the Isolated Shared Library, resulting in a set of applications sharing an Isolated Shared Library. The advantage here is seen in comparison to the previous example: with Isolated Shared Libraries, you are now sharing a single copy of version XXX and a single copy of version YYY, for a total of two instances in memory.

Isolated Shared Libraries do not have visibility to application classes loaded by the application classloader.

The JDK defines the class loader delegation model, which provides a way to establish parent-child relationships between two class loaders. This is necessary for creating multiple class loaders in a hierarchical environment, as it defines the algorithm that standardizes loading behavior. Under the delegation model, every class loader has an associated parent class loader (except for the JVM's Bootstrap class loader).

The order of searching libraries changes when Isolated Shared Libraries are used. The hierarchy, or order of search, without the new Isolated Shared Libraries present follows the delegation path from bottom to top. The root is always the JDK bootstrap loader. The only variation is to select either parent first or parent last. In both cases, the child looks to see if it has already loaded the class, uses that instance if found, and delegates otherwise. In parent first mode, the child delegates to the parent and then tries to load the class if the parent cannot load it. In parent last mode, the child tries to load the class and then delegates to the parent if it cannot load the class. In all cases, a child delegates to all Isolated Shared Libraries present before delegating to a parent.

Figure 3.18. Use and behavior of Isolated Shared Libraries


In this picture an Isolated Shared Library is shared between two application servers. When an artifact needs to be loaded for a Web module with parent first delegation, the search order is 1 (Isolated Shared Library), 2 (server class loader), 3 (application class loader), and finally 4 (Web module class loader). If parent last delegation is used on the Web module and application class loaders, the order is 4, 3, 1, and then 2.

Not shown in this figure are several WebSphere loaders and the JDK bootstrap loader; they are all higher in the hierarchy tree. Also note that the server, application, library, and Web module shared loaders are optional and might not be present. In this example there are library, server, and application shared loaders, but no WebSphere Application Server shared loader.

To interactively control Isolated Shared Libraries, the Integrated Solutions Console system application provides a new check box. The check box specifies whether this shared library will have a single instance when it is associated with an application or Web module.

Figure 3.19. Isolated Shared Libraries


The default value for this attribute is false (unchecked). If an Isolated Shared Library is used for a shared library associated with an application or Web module class loader, the shared library will have its own class loader. Specifying a server associated shared library as isolated will have no effect.

Chapter 4

WebSphere Application Server Security

Implement security policies (e.g., authentication and authorization (using different security registries), global security, etc.). Protect WebSphere Application Server Network Deployment V7.0 resources (e.g., Java 2 security, Java EE security roles).


Global security and security domains

WebSphere Application Server provides configuration facilities that allow you to secure the administrative applications and services that are used to manage and configure a WebSphere environment, and to secure applications running in that environment. These configuration activities are done separately, although they can share common settings.

Global security settings are the security configuration settings that apply to all administrativefunctions and provide the default settings for user applications.

WebSphere Application Server V7 introduces the ability to create additional security domains to secure user applications and their resources. A security domain is specific to the application servers, clusters, and service integration buses that are assigned to it. A security domain can have attributes that differ from the global security settings. For example, a separate user registry can be used to secure administrative functions and applications.

Global security compared to security domains

The global security domain in WebSphere Application Server V7 defines the administrative security configuration and the default configuration for applications. If no other security domains are configured, and application security is enabled at the global security domain, all of the user applications and administrative applications use the same security configuration.

Although extremely convenient and straightforward, a single-domain configuration might not be the ideal configuration for certain clients that need settings customized for applications. Fortunately, WebSphere Application Server V7 offers the flexibility to override the global security domain configuration with additional security domains that are configured at a different scope. Security domains provide the flexibility to use security configuration settings that differ from those specified in the global security settings.

Administrative security must be enabled before you can enable application security. However,application security can be disabled at the global security level and enabled at the security domainlevel.

You define attributes at the security domain level that need to be different from those at the global level. If the information is common, the security domain does not need to have the information duplicated in it. Any attributes that are missing in the domain are obtained from the global configuration.

The table below compares the security features that can be specified in the global security settings with those that a security domain can override.

Table 4.1. Comparison of global and domain security settings

Global security configuration:

• Enablement of application security

• Java 2 security

• User realm (registry)

• Trust Association Interceptor (TAI)

• SPNEGO Web authentication

• RMI/IIOP security (CSIv2 protocol)

• JAAS

• Authentication mechanism attributes

• Authorization provider

• Custom properties

• Web attributes (single sign-on)

• SSL

• Audit

• LTPA authentication mechanism

• Kerberos authentication mechanism

Security domain overrides:

• Enablement of application security

• Java 2 security

• User realm (registry)

• Trust Association Interceptor (TAI)

• SPNEGO Web authentication

• RMI/IIOP security (CSIv2 protocol)

• JAAS

• Authentication mechanism attributes

• Authorization provider

• Custom properties

Security domain scope

A security domain can be scoped to an entire cell, or to a specific set of servers, clusters, or service integration buses. Therefore, multiple security domains can be used to allow security settings to vary from one application to another.

Figure 4.1. Configure a new Security Domain


Security settings that apply to an application are determined by the following scope rules:

1.  If the application is running on a server or cluster that is within the scope of a security domain, those settings will be used. Security settings that are not defined in this domain will be taken from the global security settings (not a cell-level domain).

2.  If the application is running on a server or cluster that is not within the scope of a security domain, but a security domain has been defined at the cell scope, that domain will be used. Security settings that are not defined in this domain will be taken from the global security settings.

3.  If the previous conditions do not apply, the global domain settings will be used.

Note that you can enable or disable application security at the domain and global level, so just falling within a domain does not necessarily mean that application security is enabled. Also note that naming operations always use the global security configuration.
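The attribute fallback behavior can be pictured with a small illustrative sketch (invented names, not WebSphere code): any attribute absent from the domain comes from the global configuration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative sketch of how a missing attribute in a security domain
// falls back to the global security configuration.
public class SecurityDomainDemo {
    final Map<String, String> global = new HashMap<>();
    final Map<String, String> domain = new HashMap<>();

    String effective(String attribute) {
        // An attribute defined in the domain overrides the global value;
        // anything missing is obtained from the global configuration.
        return Optional.ofNullable(domain.get(attribute))
                .orElse(global.get(attribute));
    }

    public static void main(String[] args) {
        SecurityDomainDemo d = new SecurityDomainDemo();
        d.global.put("userRealm", "federated");
        d.global.put("appSecurity", "disabled");
        d.domain.put("appSecurity", "enabled"); // overridden for this domain

        System.out.println(d.effective("appSecurity")); // enabled
        System.out.println(d.effective("userRealm"));   // federated
    }
}
```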

Define and implement administrative security roles.

Administrative roles


The Java Platform, Enterprise Edition (Java EE) role-based authorization concept is extended to protect the WebSphere Application Server administrative subsystem.

A number of administrative roles are defined to provide degrees of authority that are needed to perform certain administrative functions from either the Web-based administrative console or the system management scripting interface. The authorization policy is only enforced when administrative security is enabled. The following list describes the administrative roles:

•  Monitor

An individual or group that uses the monitor role has the least amount of privileges. A monitor can complete the following tasks:

o  View the WebSphere Application Server configuration.

o  View the current state of the Application Server.

•  Configurator

An individual or group that uses the configurator role has the monitor privilege plus the ability to change the WebSphere Application Server configuration. The configurator can perform all the daily configuration tasks. For example, a configurator can complete the following tasks:

o  Create a resource.

o  Map an application server.

o  Install and uninstall an application.

o  Deploy an application.

o  Assign users and groups to roles for applications.

o  Set up Java 2 security permissions for applications.

o  Customize the Common Secure Interoperability Version 2 (CSIv2), Security

Authentication Service (SAS), and Secure Sockets Layer (SSL) configurations.

•  Operator

An individual or group that uses the operator role has monitor privileges plus the ability to change the runtime state. For example, an operator can complete the following tasks:

o  Stop and start the server.

o  Monitor the server status in the administrative console.

•  Administrator


An individual or group that uses the administrator role has the operator and configurator privileges, plus additional privileges that are granted solely to the administrator role. For example, an administrator can complete the following tasks:

o  Modify the server user ID and password.

o  Configure authentication and authorization mechanisms.

o  Enable or disable administrative security.

o  Enable or disable Java 2 security.

o  Change the Lightweight Third Party Authentication (LTPA) password and generate keys.

o  Create, update, or delete users in the federated repositories configuration.

o  Create, update, or delete groups in the federated repositories configuration.

NOTE: An administrator CANNOT map users and groups to the administrator roles without also having the adminsecuritymanager role.

•  ISC Admins

This role is only available for administrative console users, not for wsadmin users. Users who are granted this role have administrator privileges for managing users and groups in the federated repositories. For example, a user of the iscadmins role can complete the following tasks:

o  Create, update, or delete users in the federated repositories configuration.

o  Create, update, or delete groups in the federated repositories configuration.

•  Deployer

Users granted this role can complete both configuration actions and runtime operations on applications.

•  Admin Security Manager

You can assign users and groups to the Admin Security Manager role at the cell level through wsadmin scripts and the administrative console. Using the Admin Security Manager role, you can assign users and groups to the administrative user roles and administrative group roles. However, an administrator cannot assign users and groups to the administrative user roles and administrative group roles, including the Admin Security Manager role.

•  Auditor

Users granted this role can view and modify the configuration settings for the security auditing subsystem. For example, a user with the auditor role can complete the following tasks:

o  Enable and disable the security auditing subsystem.


o  Select the event factory implementation to be used with the event factory plug-in point.

o  Select and configure the service provider, or emitter, or both to be used with the service provider plug-in point.

o  Set the audit policy that describes the behavior of the application server in the event of an error with the security auditing subsystem.

o  Define which security events are to be audited.

The auditor role includes the monitor role. This allows the auditor to view, but not change, the rest of the security configuration.

Implement federated repositories.

Selecting a registry or repository

During profile creation, either during installation or post-installation, administrative security is enabled

by default. The file-based federated user repository is configured as the active user registry.

WebSphere Application Server provides implementations that support multiple types of registries and repositories, including the local operating system registry, a standalone Lightweight Directory Access Protocol (LDAP) registry, a standalone custom registry, and federated repositories.

With WebSphere Application Server, a user registry or a repository, such as a federated repository, authenticates a user and retrieves information about users and groups to perform security-related functions, including authentication and authorization.

With WebSphere Application Server, a user registry or repository is used for:

•  Authenticating a user using basic authentication, identity assertion, or client certificates.

•  Retrieving information about users and groups to perform security-related administrative

functions, such as mapping users and groups to security roles.

In addition to local operating system, LDAP, and federated repository registries, WebSphere Application Server also provides a plug-in to support any registry by using the custom registry feature. The custom registry feature enables you to configure any user registry that is not made available through the security configuration panels of the WebSphere Application Server.

Configuring the correct registry or repository is a prerequisite to assigning users and groups to roles for applications. When a user registry or repository is not configured, the local operating system registry is used by default. If your choice of user registry is not the local operating system registry, you need to first configure the registry or repository (which is normally done as part of enabling security), restart the servers, and then assign users and groups to roles for all your applications.

WebSphere Application Server supports the following types of user registries:

•  Federated repository

•  Local operating system


•  Standalone Lightweight Directory Access Protocol (LDAP) registry

•  Standalone custom registry

The UserRegistry interface is used to implement both the custom registry and the federated repository options for the user account repository. The interface is very helpful in situations where the current user and group information exists in some other format, for example, a database, and cannot move to local operating system or LDAP registries. In such a case, you can implement the UserRegistry interface so that WebSphere Application Server can use the existing registry for all the security-related operations. The process of implementing a custom registry is a software implementation effort, and it is expected that the implementation does not depend on WebSphere Application Server resource management for its operation. For example, you cannot use an Application Server data source configuration; generally you must invoke database connections and dictate their behavior directly in your code.

Federated repositories

Federated repositories enable you to use multiple repositories with WebSphere Application Server. These repositories, which can be file-based repositories, LDAP repositories, or a sub-tree of an LDAP repository, are defined and theoretically combined under a single realm. All of the user repositories that are configured under the federated repository functionality are invisible to WebSphere Application Server.

When you use the federated repositories functionality, all of the configured repositories, which you specify as part of the federated repository configuration, become active. It is required that the user ID, and the distinguished name (DN) for an LDAP repository, be unique across the multiple user repositories that are configured under the same federated repository configuration. For example, there might be three different repositories configured for the federated repositories configuration: Repository A, Repository B, and Repository C. When user1 logs in, the federated repository adapter searches each of the repositories for all of the occurrences of that user. If multiple instances of that user are found in the combined repositories, an error message displays.

In addition, the federated repositories functionality in WebSphere Application Server supports the logical joining of entries across multiple user repositories when the Application Server searches and retrieves entries from the repositories. For example, when an application calls for a sorted list of people whose age is greater than twenty, WebSphere Application Server searches all of the repositories in the federated repositories configuration. The results are combined and sorted before the Application Server returns the results to the application.
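The two behaviors just described, unique-match login searches and combined, sorted query results, can be pictured with a toy sketch. This is not WebSphere's implementation; each repository is modeled as a simple map of user ID to age, and all names are invented:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Toy model of a federated realm over three repositories.
public class FederatedLookupDemo {
    static final List<Map<String, Integer>> REPOSITORIES = List.of(
            Map.of("user1", 34, "alice", 25),   // Repository A
            Map.of("bob", 41),                  // Repository B
            Map.of("carol", 19, "dave", 52));   // Repository C

    // A login search must match the user ID in exactly one repository.
    static int findUnique(String userId) {
        List<Integer> hits = REPOSITORIES.stream()
                .filter(r -> r.containsKey(userId))
                .map(r -> r.get(userId))
                .collect(Collectors.toList());
        if (hits.size() != 1) {
            throw new IllegalStateException(
                    "user must be unique across the federated repositories: " + userId);
        }
        return hits.get(0);
    }

    // Queries combine and sort results from all repositories before returning.
    static List<String> olderThan(int age) {
        return REPOSITORIES.stream()
                .flatMap(r -> r.entrySet().stream())
                .filter(e -> e.getValue() > age)
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(findUnique("user1"));   // 34
        System.out.println(olderThan(20));         // [alice, bob, dave, user1]
    }
}
```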

Unlike the local operating system, standalone LDAP registry, or custom registry options, federated repositories provide user and group management with read and write capabilities. When you configure federated repositories, you can use one of the following methods to add, create, and delete users and groups:

•  Use the user management application programming interfaces (API).

• Use the administrative console. To manage users and groups within the administrative console, click Users and Groups > Manage Users or Users and Groups > Manage Groups. For information on user and group management, click the Help link that displays in the upper right corner of the window. From the left navigation pane, click Users and Groups.

•  Use the wsadmin commands. For more information, see

the WIMManagementCommands command group for the AdminTask object.


Configure auditing.

Auditing

WebSphere Application Server V7.0 introduces a new feature as part of its security infrastructure: the security auditing subsystem.

Security auditing has two primary goals:

•  Confirming the effectiveness and integrity of the existing security configuration (accountability

and compliance with policies and laws)

•  Identifying areas where improvement to the security configuration might be needed

(vulnerability analysis)

Security auditing achieves these goals by providing the infrastructure that allows you to implement your code to capture and store supported auditable security events. During run time, all code (except the Java EE 5 application code) is considered to be trusted. Each time a Java EE 5 application accesses a secured resource, any internal application server process with an audit point included can be recorded as an auditable event.

If compliance with regulatory laws or organizational policies has to be proved, you can enable auditing and configure filters to log the events you are interested in according to your needs.

The security auditing subsystem has the ability to capture the following types of auditable events:

•  Authentication

•  Authorization

•  Principal/credential mapping

•  Audit policy management

•  Administrative configuration management

•  User registry and identity management

•  Delegation

These events are recorded in signed and encrypted audit log files in order to ensure their integrity. Encryption and signing of audit logs are not set by default, though we suggest their use to protect those records from being altered. You will have to add keystores and certificates for encryption and signing.

Log files can be read with the audit reader, a tool that is included in WebSphere Application Server V7.0 in the form of a wsadmin.sh command. For example, the following wsadmin.sh command line returns a basic audit report:

AdminTask.binaryAuditLogReader('[-fileName myFileName -reportMode basic -keyStorePassword password123 -outputLocation /binaryLogs]')


WebSphere Application Server provides a default audit service provider and event factory, but you can change them if you have special needs. For instance, you could configure a third-party audit service provider to record the generated events to a different repository.

Chapter 5

Workload Management, Scalability, High Availability Failover

Federate nodes (including custom profiles).

Managed and unmanaged nodes

A node is a logical grouping of managed servers.

A node usually corresponds to a logical or physical computer system with a distinct IP host address.Nodes cannot span multiple computers.

By default, node names are based on the host name of the computer, for example MyHostNode01.

Nodes can be managed or unmanaged. An unmanaged node does not have a node agent or administrative agent to manage its servers, whereas a managed node does. Both application servers and supported Web servers can be on unmanaged or managed nodes.

A stand-alone application server is an unmanaged node. The application server node becomes a managed node when it is either federated into a cell or registered with an administrative agent.

When you create a managed node by federating the application server node into a deployment manager cell, a node agent is automatically created. The node agent process manages the application server configurations and servers on the node.

When you create a managed node by registering an application server node with an administrative agent, the application server must be an unfederated application server node. The administrative agent is a single interface that monitors and controls one or more application server nodes so that the application servers can be dedicated to running your applications. Using a single interface reduces the overhead of running administrative services in every application server.

A managed node in a cell can have WebSphere Application Server, Java Message Service (JMS) servers (on Version 5 nodes only), Web servers, or generic servers. A managed node that is not in a cell, but is instead registered to an administrative agent, can have application servers, Web servers, and generic servers on the node.

Federating nodes to a cell

A custom profile defines a node that can be added to a cell. The addNode command is used to federate a node in a custom profile to a cell.

A stand-alone application server can also be federated to a cell with the addNode command, or from the deployment manager administrative console. The administrative console invokes the addNode command on the target system.

When you federate a node, the node name from the federated node is used as the new node name and must be unique in the cell. If the name of the node that you are federating already exists, the addNode operation will fail.


addNode command syntax

The syntax of the addNode command is shown below:

 

addNode dmgr_host [dmgr_port] [-conntype <type>] [-includeapps]
    [-includebuses] [-startingport <portnumber>]
    [-portprops <qualified-filename>] [-nodeagentshortname <name>]
    [-nodegroupname <name>] [-registerservice] [-serviceusername <name>]
    [-servicepassword <password>] [-coregroupname <name>] [-noagent]
    [-statusport <port>] [-quiet] [-nowait] [-logfile <filename>]
    [-replacelog] [-trace] [-username <username>] [-password <pwd>]
    [-localusername <localusername>] [-localpassword <localpassword>]
    [-profileName <profile>] [-excludesecuritydomains] [-help]

• dmgr_host, -username, -password

This command connects to the deployment manager, so you have to specify the deployment manager host name and a user ID/password with administrative privileges on the deployment manager.

• dmgr_port, -conntype

The default is to connect to the deployment manager using SOAP and port 8879. If your deployment manager was defined with this port, you do not need to specify anything. If not, you can specify the correct port, or you can use RMI as the connection type.

For SOAP connections, the port defined as the SOAP_CONNECTOR_ADDRESS number on the deployment manager must be specified. If you choose to use an RMI connection instead, the ORB_LISTENER_ADDRESS port must be specified. You can see these in the port list of the deployment manager in the administrative console.

Tip: Port numbers are also stored in profile_root/properties/portdef.props

• -startingport, -portprops <filename>

The new node agent is assigned a range of ports automatically. If you want to specify the ports for the node rather than taking the default, you can specify a starting port using the -startingport parameter. The numbers are incremented from this number.


For example, if you specify 3333, the BOOTSTRAP_ADDRESS port will be

3333, CSIV2_SSL_MUTUALAUTH_LISTENER_ADDRESS will be 3334, and so on.

As an alternative, you can provide specific ports by supplying a file with the port properties.

• -includeapps, -includebuses

If you are federating an application server, you can keep any applications that are deployed to the server and you can keep any service integration bus definitions that have been created. The default is that these are not included during federation and are lost.
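The sequential port numbering described for -startingport can be illustrated with a small sketch. The endpoint names match those in the example above, but the assignment order here is illustrative only; the actual order addNode uses may differ:

```python
def assign_ports(starting_port, endpoint_names):
    # Toy illustration: give each named endpoint the next consecutive
    # port number, starting from the -startingport value.
    return {name: starting_port + i for i, name in enumerate(endpoint_names)}

endpoints = ["BOOTSTRAP_ADDRESS",
             "CSIV2_SSL_MUTUALAUTH_LISTENER_ADDRESS",
             "SOAP_CONNECTOR_ADDRESS"]
ports = assign_ports(3333, endpoints)
print(ports["BOOTSTRAP_ADDRESS"])                         # 3333
print(ports["CSIV2_SSL_MUTUALAUTH_LISTENER_ADDRESS"])     # 3334
```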

The addNode command performs the following actions:

1.  Connects to the deployment manager process. This is necessary for the file transfers performed to and from the deployment manager in order to add the node to the cell.

2.  Attempts to stop all running application servers on the node.

3.  Backs up the current stand-alone node configuration to the profile_root/config/backup/base/ directory.

4.  Copies the stand-alone node configuration to a new cell structure that matches the deployment manager structure at the cell level.

5.  Creates a new local config directory and definition (server.xml) for the node agent.

6.  Creates entries (directories and files) in the master repository for the new node's managed servers, node agent, and application servers.

7.  Uses the FileTransfer service to copy files from the new node to the master repository.

8.  Uploads applications to the cell only if the -includeapps option is specified.

9.  Performs the first file synchronization for the new node. This pulls everything down from the cell to the new node.

10.  Fixes the node's setupCmdLine and wsadmin scripts to reflect the new cell environment settings.

11.  Launches the node agent (unless -noagent is specified).

Federating a custom node to a cell

Note: You only have to do this if you created a custom profile and chose not to federate it at the time. This requires that you have a deployment manager profile and that the deployment manager is up and running.

To federate the node to the cell, do the following actions:

1.  Start the deployment manager.


2.  Open a command window on the system where you created the custom profile for the new node. Switch to the profile_root/bin directory or install_root/bin directory.

3.  Run the addNode command.

Example of using the addNode command on a Windows system to add Node01 to the deployment manager using 8879 as the SOAP connector address:

C:\WebSphere\AppServer\profiles\Custom01\bin>addNode localhost 8879

4.  Open the deployment manager administrative console and view the node and node agent:

•  Select "System Administration > Nodes". You should see the new node.

•  Select "System Administration > Node agents". You should see the new node agent and its status.

The node is started as a result of the federation process. If it does not appear to be started in the console, you can check the status from a command window on the node system:

 

cd profile_root/bin

serverStatus -all

If you find that it is not started, start it with this command:

cd profile_root/bin

startNode

Federating an application server profile to a cell

If you are using the administrative console to federate an application server, keep in mind the following considerations:

• Both the deployment manager and the application server must be running.

• You need to be logged into the console with an ID that has administrator privileges.

• The command will connect to the application server. This requires you to specify the application server host name and a user ID that can connect to the server. In turn, the node has to connect to the deployment manager. Specify a user ID and password for this connection.

• You need to specify the host name, JMX connection type, and port number to use to connect to the application server. The JMX connection type can be SOAP or RMI. The default is a SOAP connection using port 8880.

To federate an application server profile to a cell, do the following steps:

1.  Ensure that the application server and deployment manager are running.

2.  Open the deployment manager administrative console.

3.  Select "System Administration > Nodes > Add Node".

4.  Select "Managed node" and click "Next".

5.  Enter the host name and SOAP connector port of the application server profile.

If you want to keep the sample applications and any other applications you have installed, check the "Include applications" box.

Enter the administrator user ID and passwords for both the application server and the deployment manager.

Figure 5.1. Adding a standalone application profile to a cell


Click "OK".

6.  If the node is a Windows node, you have the opportunity to register the new node agent as a Windows service. Make your selection and click "OK".

The federation process stops the application server. It creates a new node agent for the node, and adds the node to the cell. The federation process then starts the node agent, but not the server.

You can now display the new node, node agent, and application server from the console. You can also start the server from the console.

At the completion of the process:

• The profile directory for the application server still exists and is used for the new node.

• The old cell name for the application server has been replaced in the profile directory with the cell name of the deployment manager.

profile_root/config/cells/dmgr_cell

• A new entry in the deployment manager profile directory has been added for the new node.


dmgr_profile_root/config/cells/dmgr_cell/nodes/federated_node

• An entry for each node in the cell is added to the application server profile configuration. Each node entry contains the serverindex.xml file for the node.

profile_root/config/cells/dmgr_cell/nodes/federated_node

In turn, an entry for the new node is added to the nodes directory for each node in the cell with a serverindex.xml entry for the new node.

Example of using the addNode command to add an application server profile to a cell. The command specifies the deployment manager host (host60) and the SOAP connector port (8882). Applications currently installed on the application server will still be installed on the server after federation:

C:\WebSphereV7\AppServer\bin>addNode host60 8882 -profileName node40b -includeapps -username admin -password adminpwd

Managing profiles using the graphical user interface

You can create profiles, which define runtime environments, using the Profile Management Tool. Using profiles instead of multiple product installations saves disk space and simplifies updating the product because a single set of core product files is maintained.

The Profile Management Tool is the graphical user interface for the manageprofiles.sh command.

NOTE: You cannot use the Profile Management Tool to create profiles for WebSphere Application Server installations on 64-bit architectures except on the Linux for zSeries platform. However, you can use the Profile Management Tool on other 64-bit architectures if you use a WebSphere Application Server 32-bit installation.

Procedures:

• Create a cell profile.

With a cell profile, you can create a deployment manager profile and a profile for a federated application server node in a single pass through the Profile Management Tool. Use the cell profile creation option to create the deployment manager profile and the federated application server node profile, unless you have a specific reason to create them separately.

After you install the Network Deployment product and apply the feature pack, you can create two different types of cell profiles: one that is enabled for the Network Deployment product only or one that is also enabled for the feature pack.

• Create a management profile with a deployment manager server.

With a deployment manager you can create the administrative node for a multinode, multi-machine group of application server nodes that you create later. This logical group of application server processes is known as a cell.


After you install the Network Deployment product and apply the feature pack, you can create a management profile with a deployment manager that is enabled for the Network Deployment product only or a deployment manager profile that is enabled for the feature pack.

• Create a management profile with an administrative agent server.

You can create a management profile for the administrative agent to administer multiple application servers that run customer applications only. The administrative agent provides a single administrative console to administer the application servers.

After you install the Network Deployment product and apply the feature pack, you can create a management profile with an administrative agent that is enabled for the Network Deployment product only or an administrative agent profile that is enabled for the feature pack.

• Create a management profile with a job manager server.

You can create a management profile for the job manager to coordinate administrative actions among multiple deployment managers, administer multiple unfederated application servers, asynchronously submit jobs to start servers, and perform a variety of other tasks.

• Create an application server profile.

Create an application server profile so that you can make applications available to the Internet or to an intranet, typically using Java technology.

After you install the Network Deployment product and apply the feature pack, you can create two different types of application server profiles: one that is enabled for the Network Deployment product only or one that is also enabled for the feature pack.

• Create a custom profile.

A custom profile is an empty node that you can customize through the deployment manager to include application servers, clusters, or other Java processes, such as a messaging server. Create a custom profile on a distributed machine and add the node into the deployment manager cell to get started customizing the node.

After you install the Network Deployment product and apply the feature pack, you can create two different types of custom profiles: one that is enabled for the Network Deployment product only or one that is also enabled for the feature pack.

• Create a secure proxy profile.

You can create a secure proxy profile to serve as the initial point of entry into your enterprise environment. Typically, a secure proxy server exists in the DMZ, accepts requests from clients on the Internet, and forwards the requests to servers in your enterprise environment.

Create clusters and cluster members.

Creating application server clusters


When you create a cluster, you have the option to create an empty cluster (no servers) or to create the cluster with one or more servers. The first application server added to the cluster acts as a template for subsequent servers. You can create the first server during the cluster creation process or you can convert an existing application server. The rest of the servers must be new and can be created when you create the cluster or added later.

Tip: When creating a cluster, it is possible to select the template of an existing application server for the cluster without adding that application server into the new cluster. If you need to change the attributes of the servers in your cluster after the cluster has been created, you must change each server individually. For this reason, consider creating an application server with the server properties that you want as a standard in the cluster first, then use that server as a template or as the first server in the cluster.

Cluster and cluster member options

When you create a new cluster, you have the following options to consider:

• Prefer local:

This setting indicates that a request to an EJB should be routed to an EJB on the local node if available. This is the default setting and generally will result in better performance.

• Configure HTTP session memory-to-memory replication (create a replication domain):

WebSphere Application Server supports session replication to another WebSphere Application Server instance. In this mode, sessions can replicate to one or more WebSphere Application Server instances to address the HTTP session single point of failure.

When you create a cluster, you can elect whether to create a replication domain for the cluster. The replication domain is given the same name as the cluster and is configured with the default settings for a replication domain. When the default settings are in effect, a single replica is created for each piece of data and encryption is disabled. Also, the Web container for each cluster member is configured for memory-to-memory replication.

When you create a new cluster member, you have the following options to consider:

• Basis for first cluster member:

You can add application servers to the cluster when you create the cluster or later.

The first cluster member can be a new application server or you can convert an existing application server so that it becomes the first cluster member.

Subsequent application servers in the cluster must be created new. The first application server in the cluster acts as a template for the subsequent servers.

The options you have depend on how you create the cluster.

When you use the job manager, you have the option to convert an existing server to use as the first cluster member, or create an empty cluster and run additional jobs to add cluster members.

When you use the deployment manager, you can convert an existing server, create one or more new servers, or create an empty cluster.


Note: The option to use an existing application server does not appear in the deployment manager administrative console if you create an empty cluster, then add a member later. If you want to convert an existing application server into the first member, specify that option when you create the cluster or use the job manager to create the cluster member.

Tip: To remove a server from a cluster, you must delete the server. Take this into consideration when you are determining whether to convert an existing server to a cluster.

• Server weight for each cluster member:

The weight value controls the amount of work that is directed to the application server. If the weight value for this server is greater than the weight values that are assigned to other servers in the cluster, then this server receives a larger share of the workload. The weight value represents a relative proportion of the workload that is assigned to a particular application server. The value can range from 0 to 20.

Member weight: Specify the relative weight of this server in the cluster. Values are from 0 to 20. A weight of 0 indicates that work is to be routed to this server only in the event that no other servers are available.
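As an alternative to the console procedure that follows, a cluster and weighted members can also be created from wsadmin. The fragment below is a sketch that only runs inside wsadmin.sh -lang jython; the command and parameter names (createCluster, createClusterMember, -clusterConfig, -memberConfig) and the cluster/node names are written from memory of the V7 scripting interface and should be verified with AdminTask.help('createCluster') on your installation:

```python
# wsadmin Jython fragment -- runs only inside wsadmin.sh -lang jython.
# Cluster name, node name, and member name are hypothetical examples;
# verify the command syntax with AdminTask.help('createCluster').
AdminTask.createCluster('[-clusterConfig [-clusterName MyCluster -preferLocal true]]')
AdminTask.createClusterMember('[-clusterName MyCluster -memberConfig '
    '[-memberNode Node01 -memberName member1 -memberWeight 2 -genUniquePorts true]]')
AdminConfig.save()
```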

Using the deployment manager administrative console

To create a new cluster:

1.  Select "Servers > Clusters > WebSphere application server clusters".

2.  Click New.

3.  Enter the information for the new cluster (see the figure below):

•  Enter a cluster name of your choice.

Figure 5.2. Creating a new cluster

 


4.  Create first cluster member: The first cluster member determines the server settings for the cluster members:

Figure 5.3. First cluster member

 

•  Member name: Type a name of the new server to be added to the cluster.

•  Select node: Specifies the node on which this new cluster member is created.


•  Weight: Assign the weight for this server.

Work is distributed across the servers in the cluster based on weights assigned to each application server. If all cluster members have identical weights, work is distributed among the cluster members equally. Servers with higher weight values are given more work. An example formula for determining routing preference is as follows:

% routed to Server1 = weight_1 / (weight_1 + weight_2 + ... + weight_n)

In the formula, n represents the number of cluster members in the cluster. Consider the capacity of the system that hosts the application server.

For example, if you have a cluster that consists of members A, B, and C with weights 2, 3, and 4, respectively, then 2/9 of the requests are assigned to member A, 3/9 are assigned to member B, and 4/9 are assigned to member C. If a new member, member D, is added to the cluster and member D has a weight of 5, then member A now gets 2/14 of the requests, member B gets 3/14 of the requests, member C gets 4/14 of the requests, and member D gets 5/14 of the requests.

•  Generate unique HTTP ports: Generates unique port numbers for every transport that is defined in the source server, so that the resulting server that is created will not have transports that conflict with the original server or any other servers defined on the same node.

•  Select basis for first cluster member:

o  If you select "Create the member using an application server template", the settings for the new application server are identical to the settings of the application server template you select from the list of available templates.

o  If you select "Create the member using an existing application server as a template", the settings for the new application server are identical to the settings of the application server you select from the list of existing application servers. However, applications that are installed on the template server are not installed on the new servers.

o  If you select "Create the member by converting an existing application server", the application server you select from the list of available application servers becomes a member of this cluster.

Applications that are installed on the existing server are automatically installed on new members of the cluster.

Note that the only way to remove a server from a cluster is to delete it, and when you delete the cluster, all servers in the cluster are deleted. Consider this before selecting this option.


o  If you select "None. Create an empty cluster", a new cluster is created, but it does not contain any cluster members.

Click Next.

5.  Create additional cluster members: Use this page to create additional members for a cluster.

You can add a member to a cluster when you create the cluster or after you create the cluster. A copy of the first cluster member that you create is stored as part of the cluster data and becomes the template for all additional cluster members that you create.

To add a member, enter a new server name, select the node, and click "Add Member". The new member will be added to the list.

Figure 5.4. Additional cluster members


6.  When all the servers have been entered, click Next.

7.  A summary page shows you what will be created.

8.  Click Finish to create the cluster and new servers.

9.  Save the configuration.
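The weight formula from step 4 can be checked numerically. This sketch reproduces the worked example from the member-weight description (weights 2, 3, 4, then adding a fourth member with weight 5):

```python
from fractions import Fraction

def routing_shares(weights):
    # Each member's share of requests: weight_i / (weight_1 + ... + weight_n)
    total = sum(weights)
    return [Fraction(w, total) for w in weights]

# Members A, B, C with weights 2, 3, 4 receive 2/9, 3/9, and 4/9 of requests.
print(routing_shares([2, 3, 4]))
# After adding member D with weight 5, the shares become 2/14, 3/14, 4/14, 5/14.
print(routing_shares([2, 3, 4, 5]))
```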

Node groups


A node group is a collection of managed nodes. Managed nodes are WebSphere Application Server nodes. A node group defines a boundary for server cluster formation.

In a distributed environment, you can have nodes in a cell with different capabilities. However, there are restrictions on how the nodes can coexist.

Node groups are created to group nodes of similar capability together to allow validation during system administration processes. Effectively, this means that a node group establishes a boundary from which servers can be selected for a cluster. Nodes on distributed platforms and nodes on the IBM i platform can be members of the same node group; however, they cannot be members of a node group that contains a node on a z/OS platform.

A node group validates that the node is capable of performing certain functions before allowing them. For example, a cluster cannot contain both z/OS nodes and nodes that are not z/OS-based. In this case, you can define multiple node groups, one for the z/OS nodes and one for nodes other than z/OS. A DefaultNodeGroup is automatically created. This node group contains the deployment manager and any new nodes with the same platform type. A node can be a member of more than one node group.

Figure 5.5. Cell, deployment manager, node, and node group concepts

 

To delete a node group, the node group must be empty. The default node group cannot be deleted.

A default node group called DefaultNodeGroup is automatically created for you when the deployment manager is created, based on the deployment manager platform. New nodes on similar platforms are automatically added to the default group. A node must belong to at least one node group, but can belong to more than one.

As long as you have nodes in a cell with similar platforms, you do not need to do anything with node groups. New nodes are automatically added to the node group. However, before adding a node on a platform that does not have the same capabilities as the deployment manager platform, you will need to create the new node group.

Note: Do not confuse node groups with "groups of nodes" in the job manager. These are two different concepts.

Configure session management (memory-to-memory, database persistence).

Session support

Information entered by a user in a Web application is often needed throughout the application. For example, a user selection might be used to determine the path through future menus or options to display content. This information is kept in a session.

A session is a series of requests to a servlet that originate from the same user. Each request arriving at the servlet contains a session ID. Each ID allows the servlet to associate the request with a specific user. The WebSphere session management component is responsible for managing sessions, providing storage for session data, allocating session IDs that identify a specific session, and tracking the session ID associated with each client request through the use of cookies or URL rewriting techniques.

When planning for session data, there are three basic considerations:

• Application design

• Session tracking mechanism

• Session storage options.

Application design

Although using session information is a convenient method for the developer, this usage should be minimized. Only objects really needed for processing of subsequent requests should be stored in the session. If sessions are persisted during runtime, there is a performance impact if the session data is too big.

Session tracking mechanism

You can choose to use cookies, URL rewriting, SSL session IDs, or a combination of these as the mechanism for managing session IDs.

• Cookies

Using cookies as a session tracking mechanism is common. WebSphere session management generates a unique session ID and returns it to the user's Web browser to be stored as a cookie.

• URL rewriting

URL rewriting requires the developer to use special encoding APIs and to set up the site page flow to avoid losing the encoded information. The session identifier is stored in the page returned to the user. WebSphere encodes the session identifier as a parameter on URLs that have been encoded programmatically by the Web application developer.


URL rewriting can only be used for pages that are dynamically generated for each request, such as pages generated by servlets or JSPs. If a static page is used in the session flow, the session information is lost. URL rewriting forces the site designer to plan the user's flow in the site to avoid losing the session ID.

• SSL ID tracking

With SSL ID tracking, SSL session information is used to track the session ID. Because the SSL session ID is negotiated between the Web browser and an HTTP server, it cannot survive an HTTP server failure. However, the failure of an application server does not affect the SSL session ID. In environments that use WebSphere components with multiple HTTP servers, you must use an affinity mechanism for the Web servers when SSL session ID is used as the session tracking mechanism.

When the SSL session ID is used as the session tracking mechanism in a clustered environment, either cookies or URL rewriting must be used to maintain session affinity. The cookie or rewritten URL contains session affinity information that enables the Web server to properly route requests back to the same server after the HTTP session has been created on a server. The SSL ID is not sent in the cookie or rewritten URL but is derived from the SSL information. The main disadvantage of using SSL ID tracking is the performance degradation due to the SSL overhead. If you have a business requirement to use SSL, this is probably a good choice.

It is possible to combine multiple options for a Web application.

• Use of SSL session identifiers takes precedence over cookies and URL rewriting.

• Use of cookies takes precedence over URL rewriting.

If selecting SSL session ID tracking, we suggest that you also select cookies or URL rewriting so that session affinity can be maintained. The cookie or rewritten URL contains session affinity information enabling the Web server to properly route a session back to the same server for each request.
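To make the URL-rewriting mechanism concrete, here is a simplified sketch of what a servlet container's URL encoding does: embed the session ID in the URL path. The `;jsessionid=` syntax is the standard servlet convention; WebSphere's actual rewritten URLs also carry affinity information (such as a clone ID), which this sketch omits:

```python
def encode_url(url, session_id):
    # Simplified servlet-style URL rewriting: embed the session ID in the
    # URL path before any query string, using the ;jsessionid= convention.
    if "?" in url:
        path, query = url.split("?", 1)
        return "%s;jsessionid=%s?%s" % (path, session_id, query)
    return "%s;jsessionid=%s" % (url, session_id)

print(encode_url("/shop/cart", "0000A1B2C3"))
# /shop/cart;jsessionid=0000A1B2C3
```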

Storage of session-related information

You can choose whether to store the session data as follows:

• Local sessions (non-persistent)

• Database persistent sessions

• Memory-to-memory replicated persistent sessions

The last two options allow session data to be accessed by multiple servers and should be considered when planning for failover. Using a database or session replication is also called session persistence.

Storing session data external to the system can have drawbacks in performance. The amount of impact depends on the amount of session data, the method chosen, and the performance and capacity of the external storage. Session management implements caching optimizations to minimize the impact of accessing the external storage, especially when consecutive requests are routed to the same application server.

Local sessions (non-persistent)


If the session data is stored in the application server memory only, the session data is not available to any other servers. Although this option is the fastest and the simplest to set up, an application server failure ends the session, because the session data is lost.

The following settings can help you manage the local session storage:

• Maximum in-memory session count

This setting enables you to define a limit to the number of sessions in memory. This prevents the sessions from acquiring too much of the JVM heap and causing out-of-memory errors.

• Allow overflow

This setting permits an unlimited number of sessions. If you choose this option, monitor the session cache size closely.

• Session time-out

This setting determines when sessions can be removed from cache.

Database persistent sessions

You can store session data in an external database. The administrator must create the database and configure the session database in WebSphere through a data source.

The Use multi-row schema setting gives you the option to use multi-row sessions to support large session objects. With multi-row support, the WebSphere session manager breaks the session data across multiple rows if the size of the session object exceeds the size for a row. This also provides a more efficient mechanism for storing and retrieving session contents when session attributes are large and few changes are required to the session attributes.
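The row-splitting idea can be sketched in a few lines of Java. This is only an illustration of the concept, not the session manager's actual implementation: a serialized attribute larger than one row is chopped into fixed-size chunks, one per database row. The class name and row size are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: break a serialized session attribute into
// fixed-size "rows", the way a multi-row schema stores large objects.
class MultiRowSplit {
    static List<byte[]> split(byte[] data, int rowSize) {
        List<byte[]> rows = new ArrayList<>();
        for (int off = 0; off < data.length; off += rowSize) {
            int len = Math.min(rowSize, data.length - off);
            byte[] row = new byte[len];
            System.arraycopy(data, off, row, 0, len);
            rows.add(row);           // one database row per chunk
        }
        return rows;
    }

    public static void main(String[] args) {
        byte[] attr = "a large serialized session attribute".getBytes(StandardCharsets.UTF_8);
        List<byte[]> rows = split(attr, 10);   // pretend a row holds 10 bytes
        System.out.println(rows.size() + " rows");  // 36 bytes -> 4 rows
    }
}
```

Reading the session back is the reverse: fetch the rows in order and concatenate them before deserializing.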

Memory-to-memory replicated persistent sessions

Memory-to-memory replication copies session data across application servers in a cluster, storing the data in the memory of an application server and providing session persistence. Using memory-to-memory replication eliminates the effort of maintaining a production database and eliminates the single point of failure that can occur with a database. Test to determine which persistence mechanism is the best one in your environment.

The administrator sets up memory-to-memory replication by creating a replication domain and adding application servers to it. You can manage replication domains from the administrative console by navigating to "Environment > Replication domains". When defining a replication domain, you must specify whether each session is replicated in one of the following manners:

• To one server (single replica)

• To every server (entire domain)

• To a defined number of servers

The number of replicas can affect performance. Smaller numbers of replicas result in better performance because the data does not have to be copied into many servers. By configuring more replicas, your system becomes more tolerant to possible failures of application servers because the data is backed up in several locations.


When adding an application server to a replication domain, you must specify the replication mode for the server:

• Server mode

In this mode, a server only stores backup copies of other application server sessions. It does not send copies of its own sessions to other application servers.

• Client mode

In this mode, a server only broadcasts or sends copies of its own sessions. It does not receive copies of sessions from other servers.

• Both mode

In this mode, the server is capable of sending its own sessions and receiving sessions from other application servers. Because each server has a copy of all sessions, this mode uses the most memory on each server. Replication of sessions can impact performance.

Session manager settings

Session management in WebSphere Application Server can be defined at the following levels:

• Application server

This is the default level. Configuration at this level is applied to all Web modules within the server.

Navigate to "Servers > Server Types > Application servers > <server_name> > Session management > Distributed environment settings > Memory-to-memory replication".

• Application

Configuration at this level is applied to all Web modules within the application.

Navigate to "Applications > Application Types > WebSphere enterprise applications > <app_name> > Session management > Distributed environment settings > Memory-to-memory replication".

• Web module

Configuration at this level is applied only to that Web module.

Navigate to "Applications > Application Types > WebSphere enterprise applications > <app_name> > Manage modules > <web_module> > Session management > Distributed environment settings > Memory-to-memory replication".

The following session management properties can be set:

• Session tracking mechanism


Session tracking mechanism lets you select from cookies, URL rewriting, and SSL ID tracking. Selecting cookies will lead you to a second configuration page containing further configuration options.

• Maximum in-memory session count

Select Maximum in-memory session count and whether to allow this number to be exceeded, or overflow.

• Session time-out

Session time-out specifies the amount of time to allow a session to remain idle before invalidation.

• Security integration

Security integration specifies that a user ID be associated with the HTTP session.

• Serialize session access

Serialize session access determines if concurrent session access in a given server is allowed.

• Overwrite session management

Overwrite session management, available at the enterprise application and Web module levels only, determines whether these session management settings are used for the current module or inherited from the parent object.

• Distributed environment settings

Distributed environment settings select how to persist sessions (memory-to-memory replication or a database) and set tuning properties.

Session affinity

In a clustered environment, any HTTP requests associated with an HTTP session must be routed to the same Web application in the same JVM. This ensures that all of the HTTP requests are processed with a consistent view of the user's HTTP session. The exception to this rule is when the cluster member fails or has to be shut down.

WebSphere assures that session affinity is maintained in the following way: Each server ID is appended to the session ID. When an HTTP session is created, its ID is passed back to the browser as part of a cookie or URL encoding. When the browser makes further requests, the cookie or URL encoding will be sent back to the Web server. The Web server plug-in examines the HTTP session ID in the cookie or URL encoding, extracts the unique ID of the cluster member handling the session, and forwards the request.

This situation can be seen in the figure below, where the session ID from the HTTP header, request.getHeader("Cookie"), is displayed along with the session ID from session.getId(). The application server ID is appended to the session ID from the HTTP header. The first four characters of the HTTP header session ID are the cache identifier that determines the validity of cache entries.


Figure 5.6. Session ID containing the server ID and cache ID

 

The JSESSIONID cookie can be divided into these parts: cache ID, session ID, separator, clone ID, and partition ID. JSESSIONID will include a partition ID instead of a clone ID when memory-to-memory replication in peer-to-peer mode is selected. Typically, the partition ID is a long numeric value.

The table below shows these mappings based on the previous example. A clone ID is the ID of a cluster member.

Table 5.1. Cookie mapping

Content      Value in the example
Cache ID     0000
Session ID   SHOQmBQ8EokAQtzl_HYdxIt
Separator    :
Clone ID     vuel491u
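The mapping above can be expressed as a small parsing sketch. This is illustrative only (the class name is made up, and the layout assumed is exactly the one from Table 5.1: a 4-character cache ID, the session ID, a ":" separator, then the clone ID):

```java
// Illustrative sketch: split a JSESSIONID value of the form
// <cacheId(4 chars)><sessionId>:<cloneId> into its parts, per Table 5.1.
class SessionCookieParts {
    final String cacheId, sessionId, cloneId;

    SessionCookieParts(String jsessionid) {
        cacheId = jsessionid.substring(0, 4);       // e.g. "0000"
        int sep = jsessionid.indexOf(':');          // the separator
        sessionId = jsessionid.substring(4, sep);   // server-generated session ID
        cloneId = jsessionid.substring(sep + 1);    // ID of the cluster member
    }

    public static void main(String[] args) {
        SessionCookieParts p =
            new SessionCookieParts("0000SHOQmBQ8EokAQtzl_HYdxIt:vuel491u");
        System.out.println(p.cacheId + " / " + p.sessionId + " / " + p.cloneId);
    }
}
```

The clone ID extracted this way is what the Web server plug-in matches against the CloneID attributes in plugin-cfg.xml to route the request.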

 

The application server ID can be seen in the Web server plug-in configuration file, plugin-cfg.xml, as shown in the example below:

 

<?xml version="1.0" encoding="ISO-8859-1"?>

<Config>

......

<ServerCluster Name="MyCluster">

<Server CloneID="vuel491u" LoadBalanceWeight="2" Name="NodeA_server1">

<Transport Hostname="wan" Port="9080" Protocol="http"/>

<Transport Hostname="wan" Port="9443" Protocol="https">

......

</Transport>

</Server>

</ServerCluster>

......

</Config>

Note: Session affinity can still be broken if the cluster member handling the request fails. To avoid losing session data, use persistent session management. In persistent sessions mode, the cache ID and server ID will change in the cookie when there is a failover or when the session is read from the persistent store, so do not rely on the value of the session cookie remaining the same for a given session.

Session affinity and failover

Server clusters provide a solution for failure of an application server. Sessions created by cluster members in the server cluster share a common persistent session store. Therefore, any cluster member in the server cluster has the ability to see any user's session saved to persistent storage.

If one of the cluster members fails, the user can continue to use session information from another cluster member in the server cluster. This is known as failover. Failover works regardless of whether the nodes reside on the same machine or several machines. Only a single cluster member can control and access a given session at a time.


Figure 5.7. Session affinity and failover

 

After a failure, WebSphere redirects the user to another cluster member, and the user's session affinity switches to this replacement cluster member. After the initial read from the persistent store, the replacement cluster member places the user's session object in the in-memory cache, assuming that the cache has space available for additional entries.

The Web server plug-in maintains a cluster member list and picks the cluster member next in the list to avoid the breaking of session affinity. From then on, requests for that session go to the selected cluster member. The requests for the session go back to the failed cluster member when the failed cluster member restarts.
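The routing behavior just described can be sketched as follows. This is a toy model, not the plug-in's actual code: given the member list, the clone ID from the cookie, and the set of members currently marked down, it returns the preferred member if healthy, else the next healthy member in the list.

```java
import java.util.List;
import java.util.Set;

// Toy model of the plug-in's affinity routing: honor the clone named in
// the cookie; if that member is down, pick the next member in the list.
class AffinityRouter {
    static String route(List<String> members, String preferredClone, Set<String> down) {
        int i = members.indexOf(preferredClone);
        if (i >= 0 && !down.contains(preferredClone)) {
            return preferredClone;                    // affinity preserved
        }
        int start = Math.max(i, 0);
        for (int k = 1; k <= members.size(); k++) {
            String candidate = members.get((start + k) % members.size());
            if (!down.contains(candidate)) {
                return candidate;                     // failover target
            }
        }
        return null;                                  // no member available
    }

    public static void main(String[] args) {
        List<String> cluster = List.of("cloneA", "cloneB", "cloneC");
        // cloneB (the session's owner) has failed; route to the next member.
        System.out.println(route(cluster, "cloneB", Set.of("cloneB")));
    }
}
```

Once the replacement member is chosen, the plug-in keeps sending the session there, as the surrounding text explains.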

WebSphere provides session affinity on a best-effort basis. There are narrow windows where session affinity fails. These windows are as follows:

• When a cluster member is recovering from a crash, a window exists where concurrent requests for the same session could end up in different cluster members. The reason for this is that the Web server is multi-processed and each process separately maintains its own retry timer value and list of available cluster members. The end result is that requests being processed by different processes might end up being sent to more than one cluster member after at least one process has determined that the failed cluster member is running again.


To avoid or limit exposure in this scenario, if your cluster members are expected to crash very seldom and are expected to recover fairly quickly, consider setting the retry timeout to a small value. This narrows the window during which multiple requests being handled by different processes get routed to multiple cluster members.

• A server overload can cause requests belonging to the same session to go to different cluster members. This can occur even if all the cluster members are running. For each cluster member, there is a backlog queue where an entry is made for each request sent by the Web server plug-in waiting to be picked up by a worker thread in the servlet engine. If the depth of this queue is exceeded, the Web server plug-in starts receiving responses that the cluster member is not available. This failure is handled in the same way by the Web server plug-in as an actual JVM crash. Here are some examples of when this can happen:

o The servlet engine does not have an appropriate number of threads to handle the user load.

o The servlet engine threads take a long time to process the requests. Reasons for this include: applications taking a long time to execute, resources being used by applications taking a long time, and so on.

Persistent session management

By default, WebSphere places session objects in memory. However, the administrator has the option of enabling persistent session management, which instructs WebSphere to place session objects in a persistent store. Administrators should enable persistent session management when:

• The user's session data must be recovered by another cluster member after a cluster member in a cluster fails or is shut down.

• The user's session data is too valuable to lose through unexpected failure at the WebSphere node.

• The administrator desires better control of the session cache memory footprint. By sending cache overflow to a persistent session store, the administrator controls the number of sessions allowed in memory at any given time.

There are two ways to configure session persistence as shown below:

• Database persistence, supported for the Web container only

Figure 5.8. Database Persistent Sessions


 

• Memory-to-memory session state replication using the data replication service available in distributed server environments

Figure 5.9. Data Replication Service

 

All information stored in a persistent session store must be serialized. As a result, all of the objects held by a session must implement java.io.Serializable if the session needs to be stored in a persistent session store.

In general, consider making all objects held by a session serializable, even if immediate plans do not call for the use of persistent session management. If the website grows, and persistent session management becomes necessary, the transition between local and persistent management occurs transparently to the application if the sessions only hold serializable objects. If not, a switch to persistent session management requires coding changes to make the session contents serializable.

Persistent session management does not impact the session API, and Web applications require no API changes to support persistent session management. However, as mentioned previously, applications storing unserializable objects in their sessions require modification before switching to persistent session management.
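The serialization requirement can be verified in isolation. The sketch below (class and attribute names are made up for illustration) round-trips a session attribute through the same java.io serialization a persistent session store would apply; an attribute that fails this round-trip would also fail under persistent session management.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch: a session attribute must implement java.io.Serializable to be
// written to a persistent session store; this round-trips one attribute.
class SerializableSessionDemo {
    static class Cart implements Serializable {   // hypothetical attribute
        private static final long serialVersionUID = 1L;
        int items;
        Cart(int items) { this.items = items; }
    }

    static Cart roundTrip(Cart c) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
                out.writeObject(c);               // what the store persists
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                return (Cart) in.readObject();    // what failover reads back
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(new Cart(3)).items);  // prints 3
    }
}
```

If Cart held a field whose type was not serializable (an open socket, for example), writeObject would throw java.io.NotSerializableException, which is exactly the coding change the text warns about.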

If you use database persistence, using multi-row sessions becomes important if the size of the session object exceeds the size for a row, as permitted by the WebSphere session manager. If the administrator requests multi-row session support, the WebSphere session manager breaks the session data across multiple rows as needed. This allows WebSphere to support large session objects. Also, this provides a more efficient mechanism for storing and retrieving session contents under certain circumstances.

Using a cache lets the session manager maintain a cache of most recently used sessions in memory. Retrieving a user session from the cache eliminates a more expensive retrieval from the persistent store. The session manager uses a "least recently used" scheme for removing objects from the cache. Session data is stored to the persistent store based on your selections for write frequency and write option.
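The "least recently used" eviction scheme mentioned above can be sketched with the standard library. This is a conceptual model, not WebSphere's internal cache: LinkedHashMap in access order evicts the eldest (least recently touched) session once the configured capacity is exceeded.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual LRU session cache: least recently used sessions are evicted
// once the cache exceeds its capacity (they would then be reloaded from
// the persistent store on the next request).
class SessionLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    SessionLruCache(int capacity) {
        super(16, 0.75f, true);               // accessOrder = true -> LRU order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;             // evict the least recently used
    }

    public static void main(String[] args) {
        SessionLruCache<String, String> cache = new SessionLruCache<>(2);
        cache.put("s1", "data1");
        cache.put("s2", "data2");
        cache.get("s1");                      // touch s1, so s2 becomes eldest
        cache.put("s3", "data3");             // evicts s2
        System.out.println(cache.keySet());   // s2 is gone; s1 and s3 remain
    }
}
```

In the real session manager, an evicted session is not lost: it is simply fetched again from the persistent store the next time a request needs it.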

Create and configure Data Replication Service (DRS) replication domains.

Data replication

Replication is a service that transfers data, objects, or events among application servers. Data replication service (DRS) is the internal WebSphere Application Server component that replicates data.

Use data replication to make data for session manager, dynamic cache, and stateful session beans available across many application servers in a cluster. The benefits of using replication vary depending on the component that you configure to use replication:

• Session manager uses the data replication service when configured to do memory-to-memory replication. When memory-to-memory replication is configured, session manager maintains data about sessions across multiple application servers, preventing the loss of session data if a single application server fails.

• Dynamic cache uses the data replication service to further improve performance by copying cache information across application servers in the cluster, preventing the need to repeatedly perform the same tasks and queries in different application servers.

• Stateful session beans use the replication service so that applications using stateful session beans are not limited by unexpected server failures.

Important: When you use the replication services, ensure that the "Propagate security attributes" option is enabled. Security attribute propagation is enabled by default.

You can define the number of replicas that DRS creates on remote application servers. A replica is a copy of the data that copies from one application server to another. The number of replicas that you configure affects the performance of your configuration. Smaller numbers of replicas result in better performance because the data does not have to copy many times. However, if you create more replicas, you have more redundancy in your system. By configuring more replicas, your system becomes more tolerant to possible failures of application servers in the system because the data is backed up in several locations.


Defining a single replica configuration helps you to avoid a single point of failure in the system. However, if your system must tolerate more failures, introduce extra redundancy by increasing the number of replicas that you create for any HTTP session that is replicated with DRS. The "Number of replicas" property for any replication domain that is used by the dynamic cache service must be set to "Entire domain".

Session manager, dynamic cache, and stateful session beans are the three consumers of replication. A consumer is a component that uses the replication service. When you configure replication, the same types of consumers belong to the same replication domain. For example, if you are configuring both session manager and dynamic cache to use DRS to replicate objects, create separate replication domains for each consumer. Create one replication domain for all the session managers on all the application servers and one replication domain for the dynamic cache on all the application servers. The only exception to this rule is to create one replication domain if you are configuring replication for HTTP sessions and stateful session beans. Configuring one replication domain in this case ensures that the backup state information is located on the same backup application servers.

Configuring cache replication

Use this task to improve performance by configuring the data replication service (DRS) to replicate data from the dynamic cache service across the consumers in a replication domain.

You should have a replication domain created for the dynamic cache service. Configure a different replication domain for each type of consumer of the replication domain. For example, configure two different replication domains for dynamic cache and session manager. There are two ways to configure replication domains:

• To create replication domains manually, click Environment > Replication domains in the administrative console.

• To create a new replication domain automatically when you create a cluster, click Servers > Clusters > New in the administrative console.

1.  In the administrative console, click "Servers > Application servers > server_name > Container services > Dynamic cache service".

Figure 5.10. Dynamic cache service replication


 

2.  To enable replication, select "Enable cache replication".

3.  Full group replication domain: Choose a replication domain. Use different replication domains for each type of consumer. For example, dynamic cache should use a different replication domain than session manager. The only replication domains that you can select in this panel include replication domains that are configured to use full-group replication. In a full-group configuration, every cache entry is replicated to every other cache that is configured in the servers that are in the replication domain. If none of the replication domains in your configuration meet these requirements, the list is empty. In this case, create a replication domain or alter an existing replication domain so that you have a replication domain that can perform full-group replication.

4.  Define the dynamic cache replication settings:

Replication type: Select the appropriate replication type:


•  Not Shared: Cache entries for this object are not shared among different application servers. These entries can contain non-serializable data. For example, a cached servlet can place non-serializable objects into the request attributes, if the class type supports it.

•  PUSH: Cache entries for this object are automatically distributed to the dynamic caches in other application servers or cooperating Java Virtual Machines (JVMs). Each cache has a copy of the entry at the time it is created. These entries cannot store non-serializable data.

•  PULL: Cache entries for this object are shared between application servers on demand. If an application server gets a cache miss for this object, it queries the cooperating application servers to see if they have the object. If no application server has a cached copy of the object, the original application server runs the request and generates the object. These entries cannot store non-serializable data. This mode of sharing is NOT recommended.

•  PUSH_PULL: Cache entries for this object are shared between application servers on demand. When an application server generates a cache entry, it broadcasts the cache ID of the created entry to all cooperating application servers. Each server then knows whether an entry exists for any given cache ID. On a given request for that entry, the application server knows whether to generate the entry or pull it from somewhere else. These entries cannot store non-serializable data.
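The PUSH_PULL protocol can be modeled with a toy simulation. This is purely illustrative (all class and method names are invented, and real DRS messaging is asynchronous): each server broadcasts only the cache IDs it has generated; on a local miss, a server checks the known-ID set before deciding whether to pull from a peer or regenerate.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy, single-threaded simulation of PUSH_PULL: IDs are pushed to all
// peers; entry bodies are pulled from a peer only on a local cache miss.
class PushPullCacheSim {
    final Map<String, String> local = new HashMap<>();     // id -> entry body
    final Set<String> knownIds = new HashSet<>();          // ids seen cluster-wide
    final List<PushPullCacheSim> peers = new ArrayList<>();

    void put(String id, String value) {
        local.put(id, value);
        for (PushPullCacheSim p : peers) p.knownIds.add(id);  // broadcast ID only
    }

    String get(String id) {
        String v = local.get(id);
        if (v != null) return v;                 // local hit
        if (!knownIds.contains(id)) return null; // nobody has it: regenerate
        for (PushPullCacheSim p : peers) {       // pull from a cooperating JVM
            v = p.local.get(id);
            if (v != null) { local.put(id, v); return v; }
        }
        return null;
    }

    public static void main(String[] args) {
        PushPullCacheSim a = new PushPullCacheSim(), b = new PushPullCacheSim();
        a.peers.add(b);
        b.peers.add(a);
        a.put("page1", "<html>...</html>");       // a generates; b learns the ID
        System.out.println(b.get("page1") != null);  // true: pulled from a
    }
}
```

The point of the model is the bandwidth trade-off: only small IDs travel on put, and full entry bodies travel only when another server actually needs them.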

Push Frequency: You can define when and how often data is replicated across the dynamic cache replication domain.

Chapter 6

Maintenance, Performance Monitoring and Tuning

 

Perform WebSphere Application Server Network Deployment V7.0 backup, restore and configuration tasks.

manageprofiles command

Use the manageprofiles command to create, delete, augment, back up, and restore profiles, which define runtime environments. Using profiles instead of multiple product installations saves disk space and simplifies updating the product because a single set of core product files is maintained.

The manageprofiles command and its graphical user interface, the "Profile Management Tool", are the only ways to create runtime environments.

The command file is located in the app_server_root/bin directory. The command file is a script named manageprofiles.sh.

The manageprofiles.sh command is used to perform the following tasks:

•  create a profile (-create)


•  delete a profile (-delete)

•  augment a profile (-augment)

•  unaugment a profile (-unaugment)

•  unaugment all profiles that have been augmented with a specific augmentation template (-unaugmentAll)

•  delete all profiles (-deleteAll)

•  list all profiles (-listProfiles)

•  list augments for a profile (-listAugments)

•  get a profile name (-getName)

•  get a profile path (-getPath)

•  validate a profile registry (-validateRegistry)

•  validate and update a profile registry (-validateAndUpdateRegistry)

•  get the default profile name (-getDefaultName)

•  set the default profile name (-setDefaultName)

•  back up a profile (-backupProfile)

•  restore a profile (-restoreProfile)

•  perform manageprofiles command tasks that are contained in a response file (-response)

Parameters:

•  -backupProfile

Performs a file system backup of a profile folder and the profile metadata from the profile registry file. Any servers using the profile that you want to back up must first be stopped prior to invoking the manageprofiles command with the -backupProfile option. The -backupProfile parameter must be used with the -backupFile and -profileName parameters, for example:

manageprofiles.sh -backupProfile -profileName profile_name -backupFile backupFile_name


When you back up a profile using the -backupProfile option, you must first stop the server and the running processes for the profile that you want to back up.

-backupFile backupFile_name

Backs up the profile registry file to the specified file. You must provide a fully qualified file path for the backupFile_name.

•  -restoreProfile

Restores a profile backup. Must be used with the -backupFile parameter, for example:

manageprofiles.sh -restoreProfile -backupFile file_name

To restore a profile, perform the following steps:

1.  Stop the server and the running processes for the profile that you want to restore.

2.  Manually delete the directory for the profile from the file system.

3.  Run the -validateAndUpdateRegistry option of the manageprofiles.sh command.

4.  Restore the profile by using the -restoreProfile option of the manageprofiles.sh command.

backupConfig command

The backupConfig.sh command is a simple utility to back up the configuration of your node to a file.

By default, all servers on the node stop before the backup is made so that partially synchronized information is not saved. If you do not have root authority, you must specify a path for the backup file in a location where you have write permission. The backup file will be in zip format and a .zip extension is recommended.

In a UNIX or Linux environment, the backupConfig.sh command does not save file permissions or ownership information. The restoreConfig.sh command uses the current umask and effective user ID (EUID) to set the permissions and ownership when restoring a file. If it is required that the restored files have the original permissions and ownership, use the tar command (available on all UNIX or Linux systems) to back up and restore the configuration.

Issue the command from the profile_root/bin directory.

The command syntax is as follows:

backupConfig.sh [backup_file] [-nostop] [-quiet] [-logfile <filename>] [-replacelog] [-trace] [-username <username>] [-password <password>] [-profileName <profile>] [-help]


 

The backup_file parameter specifies the file where the backup is to be written. If you do not specify a backup file name, a unique name is generated and the file is stored in the current directory. If you specify a backup file name in a directory other than the current directory, the specified directory must exist.

restoreConfig command

Use the restoreConfig.sh command to restore the configuration of your node after backing up the configuration using the backupConfig.sh command.

The restoreConfig.sh command is a simple utility to restore the configuration of your node after backing up the configuration using the backupConfig.sh command. By default, all servers on the node stop before the configuration restores so that a node synchronization does not occur during the restoration. If the configuration directory already exists, it is renamed before the restoration occurs.

The backupConfig.sh command does not save file permissions or ownership information. The restoreConfig.sh command uses the current umask and effective user ID (EUID) to set the permissions and ownership when restoring a file. If it is required that the restored files have the original permissions and ownership, use the tar command (available on all UNIX or Linux systems) to back up and restore the configuration.

Issue the command from the profile_root/bin directory.

The command syntax is as follows:

restoreConfig.sh backup_file [-location restore_location] [-quiet] [-nostop] [-nowait] [-logfile <filename>] [-replacelog] [-trace] [-username <username>] [-password <password>] [-profileName <profile>] [-help]

If the configuration to be restored exists, the config directory is renamed to config.old (then config.old_1, etc.) before the restore begins. The command then restores the entire contents of the profile_root/config directory.

Use Tivoli Performance Viewer (TPV) / Request Metrics to gather information about resources and analyze results.

Tivoli performance viewer

Tivoli Performance Viewer (TPV) is included with WebSphere Application Server V7.0 and is used to record and display performance data. Since WebSphere Application Server V6.0, TPV is integrated into the Integrated Solutions Console.

Using Tivoli Performance Viewer, you can perform the following tasks:


•  Display Performance Monitoring Infrastructure (PMI) data collected from local and remote application servers:

o  Summary reports show key areas of contention.

o  Graphical/tabular views of raw PMI data.

o  Optionally save collected PMI data to logs.

•  Provide configuration advice through the performance advisor section

Tuning advice formulated from gathered PMI and configuration data.

•  Log performance data

Using TPV you can log real-time performance data and review the data at a later time.

•  View server performance logs

You can record and view data that has been logged by TPV in the Integrated Solutions Console.

You can use TPV to create summary reports. These reports let you monitor the server's real-time performance and health. TPV enables you to work with the performance modules. With these modules, you can drill down on specific areas of interest, even in old logs. Use the log analysis tools to detect trends over time. TPV can also save performance data for later analysis or problem determination.

As the TPV runs inside the Integrated Solutions Console, the performance impact depends on which edition of WebSphere Application Server you run. When running the single server edition, the TPV runs in the same JVM as your application. In Network Deployment, the TPV runs in the JVM of the deployment manager. Certain functions (like the advisor), however, require resources in the node agents or in the application servers.

WebSphere performance advisors

Gathering information made available through the PMI, the WebSphere performance advisors can make suggestions about the environment. The advisors are able to determine the current configuration for an application server and, by trending the PMI data over time, make informed decisions about potential environmental changes that can enhance the performance of the system. Advice is hard coded into the system and is based on IBM best practices for tuning and performance. The advisors do not implement any changes to the environment. Instead, they identify the problem and allow the system administrator to make the decision whether or not to implement it. You should perform tests after any change is implemented. There are two types of advisors:

•  Performance and Diagnostic Advisor

This advisor is configured through the Integrated Solutions Console. It writes to the SystemOut.log and to the console while in monitor mode. The interface is configurable to determine how often data is gathered and advice is written. It offers advice about the following components:

o  J2C Connection Manager


▪  Thread pools

▪  LTC nesting

▪  Serial reuse violation

▪  Various other diagnostic advice

o  Web Container Session Manager

▪  Session size with overflow enabled

▪  Session size with overflow disabled

▪  Persistent session size

o  Web Container

▪  Bounded thread pool

▪  Unbounded thread pool

o  ORB Service

▪  Unbounded thread pool

▪  Bounded thread pool

o  Data Source

▪  Connection pool size

▪  Prepared statement cache size

o  Java virtual machine (JVM)

▪  Memory leak detection

If you need to gather advice about items outside this list, use the Performance Advisor in Tivoli Performance Viewer.

•  Performance Advisor in Tivoli Performance Viewer

This advisor is slightly different from the Performance and Diagnostic Advisor. The Performance Advisor in Tivoli Performance Viewer is invoked only through the TPV interface of the Integrated Solutions Console. It runs on the application server you are monitoring, but the refresh intervals are based on selecting refresh through the console. Also, the output is routed to the user interface instead of to an application server output log. This advisor also captures data and gives advice about more components. Specifically, this advisor can capture the following types of information:


o  ORB service thread pools

o  Web container thread pools

o  Connection pool size

o  Persisted session size and time

o  Prepared statement cache size

o  Session cache size

o  Dynamic cache size

o  JVM heap size

o  DB2 performance configuration

The Performance Advisor in Tivoli Performance Viewer provides more extensive advice than the Performance and Diagnostic Advisor. Running the Performance Advisor in Tivoli Performance Viewer can consume significant resources and impact performance, so use it with care in production environments.
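The trend-and-threshold style of reasoning the advisors apply can be sketched roughly as follows. This is a toy illustration in Python; the 90% threshold, the function name, and the advice text are invented for the example and are not the product's actual rules:

```python
# Illustrative sketch of advisor-style reasoning: trend PMI-like samples
# over time and suggest (but never apply) a configuration change.
# The threshold and wording are hypothetical, not IBM's actual rules.

def advise_pool_size(samples, max_size, threshold=0.9):
    """Suggest growing a pool when average usage trends near its maximum."""
    if not samples:
        return None
    avg_used = sum(samples) / len(samples)
    if avg_used >= threshold * max_size:
        return ("Connection pool is %.0f%% utilized on average; "
                "consider increasing its maximum size."
                % (100.0 * avg_used / max_size))
    return None  # no advice; the administrator decides what to change

print(advise_pool_size([18, 19, 20, 19], max_size=20))
```

As with the real advisors, the function only reports a finding; acting on it (and testing afterward) is left to the administrator.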

WebSphere request metrics

PMI provides information about average system resource usage statistics but does not provide any correlation between the data. Request metrics, in contrast, provide data about each individual transaction and correlate this data.

Request metrics gather information about single transactions within an application. The metric tracks each step of a transaction and determines the process time for each of the major application components. Several components support this transaction metric:

•  Web server plug-ins

•  Web container

•  EJB container

•  JDBC calls

•  Web services engine

•  Default messaging provider

The amount of time that a request spends in each component is measured and aggregated to define the complete execution time for that transaction. Both the individual component times and the overall transaction time can be useful metrics when trying to gauge user experience on a site. The data allows for a hierarchical, response-time-ordered view of each individual transaction. When debugging resource constraints, these metrics provide critical data at each component. Request metrics provide filtering mechanisms to monitor synthetic transactions or to track the performance of a specific transaction. By using test transactions, you can measure performance of the site end-to-end.
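As a rough illustration of this aggregation, the sketch below models each component's own (exclusive) time and rolls nested component times up into the transaction total. The component names and times are made up for the example:

```python
# Sketch: per-component times roll up into a transaction total, the way
# request metrics correlate a request's path through the components.

def total_time(node):
    """Sum a component's own time plus all nested component times."""
    return node.get("elapsed_ms", 0) + sum(
        total_time(c) for c in node.get("children", []))

request = {
    "name": "web container", "elapsed_ms": 5,
    "children": [
        {"name": "EJB container", "elapsed_ms": 12,
         "children": [{"name": "JDBC call", "elapsed_ms": 30}]},
    ],
}

print(total_time(request))  # 47
```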


From a performance perspective, using transaction request metrics can aid in determining if an application is meeting service level agreements (SLAs) for the client. The metrics can be used to alert the user when an SLA target is not met.

Request metrics help administrators answer the following questions:

•  What performance area should the user be focused on?

•  Is there too much time being spent on any given area?

•  How do I determine whether response times for transactions are meeting their goals and not violating the SLAs?
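The SLA question above can be sketched as a simple check over measured transaction times. The transaction names and the 2-second target below are illustrative only:

```python
# Sketch: flag transactions whose measured response time misses an SLA
# target. Names and the 2000 ms target are made-up examples.

def sla_violations(timings_ms, target_ms=2000):
    """Return the names of transactions exceeding the SLA target."""
    return [name for name, t in timings_ms.items() if t > target_ms]

print(sla_violations({"checkout": 2500, "login": 300}))  # ['checkout']
```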

Chapter 7

Problem Determination

Configure, review and analyze logs (e.g., Web server, IBM WebSphere Application Server Network Deployment V7.0, first failure data capture (FFDC)).

Components and log files

During runtime, each WebSphere Application Server process creates log files that report the actions being performed. Each of these logs can be very important to troubleshooting a problem.

Figure 7.1. Log Files


There are also many log files that are not shown in this picture. The logs shown in this picture are the logs that you should look at first when dealing with a problem during runtime.

The logs shown in the table below will exist for each profile, and as such will exist in the particular profile's profile_home/logs directory. Other than activity.log, which only exists once per node, each server on your node will have its own copy of these runtime log files.

Table 7.1. WebSphere logs: Overview

Service log (activity.log)

Binary log file that contains data from each JVM in a node, for analysis using the Log Analyzer. It contains system messages from all application servers and the node agent for a given node, as well as messages produced by instrumented applications.

The service log is a binary-format log that contains more or less the same messages as the SystemOut JVM log, plus a few extra serviceability messages.

The benefit of the service log is that it can be used in conjunction with the Log Analyzer, which can compare logged messages with a "symptom database". The symptom database correlates messages to known problems. Viewing your service log with Log Analyzer is a good first step in problem determination, because it can compare your situation to a list of known WebSphere problems.

JVM logs (SystemOut.log, SystemErr.log)

Contain all messages sent to the Java System.out and System.err streams.

The most commonly used logs are the JVM logs, often referred to as "standard out" and "standard error". They contain messages written to the System.out and System.err streams, respectively. Each Java process has its own JVM logs. Exceptions are written to these logs from Exception.printStackTrace().

The JVM logs are a good place to look for detailed information when there is a problem with an application server. Node agent and deployment manager processes also write to JVM logs. You should look at these if the node agent or deployment manager is unable to start, similar to the trace file in older versions of WebSphere Application Server.

A log can be configured to roll over at specific time intervals, or when the file reaches a certain size; the two can also be combined. You could, for example, configure the log to roll over each day at midnight, unless the file grows larger than 10 MB, in which case it rolls over immediately.

You can also specify how many "historical" (previously rolled-over) log files to keep.

Native process logs (native_stdout.log, native_stderr.log)

Contain messages sent to stdout and stderr from native code segments, including the JVM.

The native process logs contain messages written by native code segments, including the JVM itself. Given the relatively small amount of native code compared to Java code, very few messages are written to these logs. Since the files are small, they do not have rollover capability.

One component that does use a reasonable amount of native code is the security subsystem. If you are having security-related problems, you should take a look at the native logs.

Embedded HTTP server logs (http_access.log, http_error.log)

Contain all requests to the embedded HTTP server.

These logs are disabled by default, but can be enabled using the Administrative Console or wsadmin.sh. They can be particularly useful when you want to verify that a request is reaching the application server, or track the progress of a particular request as it moves through your environment.

HTTP server plug-in log (http_plugin.log)

Contains data about the operation of the HTTP server plug-in module.

Although it runs inside the web server process, the plug-in writes to its own log file. The log contains messages about the plug-in's startup process and runtime events. If your web server does not start, you should look for errors in this file, and in the web server's log files. If there seems to be a communication problem between the plug-in and an application server, you should also examine this file.

Command-line program logs (startServer.log, startNode.log, addNode.log, <command>.log)

Contain data about the execution of individual command-line utilities.

Most of the command-line utilities, such as startServer.sh or addNode.sh, write data to their own log files. These logs will appear in your profile's "logs" directory, and have the same name as the utility. For most utilities, you can specify an alternate log file on the command line if you prefer.

System application and sample application logs (<name>_deploy.txt, <name>_config.txt)

Deployment and configuration logs for each of the enterprise applications installed by the WebSphere Application Server installer (Administrative Console, samples, etc.).

 

The JVM logs are the most useful of the runtime logs. They contain information about the server runtime, and any messages that the application writes to System.out or System.err. You will also find exception and stack trace information in these logs. The native process logs contain information that gets logged by native code, such as the JVM itself, or some parts of the security implementation.

First Failure Data Capture (FFDC)

WebSphere Application Server V7 includes a feature called First Failure Data Capture (FFDC). The FFDC feature runs in the background and collects events and errors that occur during WebSphere Application Server runtime. The information that it collects is written to log files in the WAS_install_root/profiles/profile_name/logs/ffdc directory.


FFDC does not affect the performance of WebSphere Application Server and should not be disabled. The FFDC logs will most likely not be useful in your own problem determination efforts. However, they might be useful to the WebSphere Application Server support team if you open a PMR.

There are three FFDC configuration files in the WAS_install_root/properties directory. The only file that you should modify is the ffdcRun.properties file. You can add the ExceptionFileMaximumAge property to the file. This property specifies the number of days that an FFDC log remains in the WAS_install_root/profiles/profile_name/logs/ffdc directory before it is deleted. As part of your diagnostic data collection plan, you might want to modify the ExceptionFileMaximumAge property to ensure that the FFDC files remain on your system for a certain time period. You should not modify any other properties unless you are asked to do so by the WebSphere Application Server support team.
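For example, to keep FFDC logs for two weeks, the entry in ffdcRun.properties would look like the following (the value 14 is illustrative; choose a retention period that fits your diagnostic data collection plan):

```
ExceptionFileMaximumAge=14
```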

 

Use trace facility (e.g., enabling, selecting components, and log configuration).

Changing the logging and tracing options

You might want to customize the logging and tracing properties for the new application server. There are several ways to access the logging and tracing properties for an application server:

• Select Troubleshooting > Logs and Trace in the navigation bar, then select a server.

• Select Servers > Server Types > WebSphere application servers, select a server, and then select Logging and Tracing from the Troubleshooting section.

• Select Servers > Server Types > WebSphere application servers, select a server, and select Process definition from the Java and Process Management section. Then select Logging and Tracing from the Additional Properties section.

We will take the third navigation path to customize the location of the JVM logs, the diagnostic trace logs, and the process logs.

1.  Select Logging and Tracing.

2.  Select JVM Logs.

This allows you to change the JVM standard output and error file properties. Both are rotating files. You can choose to save the current file and create a new one, either when it reaches a certain size, or at a specific moment during the day. You can also choose to disable the output of calls to System.out.print() or System.err.print().

Figure 7.2. JVM Logs


 

We recommend that you specify a new file name, using an environment variable to specify it, such as:

${APPLICATION_ROOT}/logs/SystemOut.log

${APPLICATION_ROOT}/logs/SystemErr.log

On this page you can also modify how WebSphere will rotate your log files.

Click OK.

3.  Select Diagnostic Trace.

Each component of the WebSphere Application Server is enabled for tracing with the JRas interface. This trace can be changed dynamically while the process is running using the Runtime tab, or added to the application server definition from the Configuration tab. As shown in the figure below, the trace output can be directed either to memory or to a rotating trace file.


Change the trace output file name so the trace is stored in a specific location for the server, using the ITSOBANK_ROOT variable, and select the Log Analyzer format.

Figure 7.3. Specifying diagnostic trace service options

 

Click OK.

4.  Select Process Logs.

Messages written by native code (JNI) to the standard out and standard error streams are redirected by WebSphere to process logs, usually called native_stdout.log and native_stderr.log. Change the native process logs to:

${APPLICATION_ROOT}/logs/native_stdout.log

${APPLICATION_ROOT}/logs/native_stderr.log

Click OK.


5.  All log files produced by the application server are now redirected to the ${APPLICATION_ROOT}/logs directory. Save the configuration.
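For reference, a trace specification is a colon-separated list of component=level pairs. The example below is a sketch; the package prefixes shown are common WebSphere component names used here for illustration, not a tuning recommendation:

```
*=info: com.ibm.ws.webcontainer*=fine: com.ibm.ejs.j2c*=finest
```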

Log levels

Levels control which events are processed by Java logging. WebSphere Application Server controls the levels of all loggers in the system.

The level value is set from configuration data when the logger is created and can be changed at run time from the administrative console. If a level is not set in the configuration data, a level is obtained by proceeding up the hierarchy until a parent with a level value is found. You can also set a level for each handler to indicate which events are published to an output device. When you change the level for a logger in the administrative console, the change is propagated to the children of the logger.

Levels are cumulative; a logger can process logged objects at the level that is set for the logger, and at all levels above the set level.

Valid log levels:

• Off - No events are logged.

• Fatal - Task cannot continue and component cannot function.

• Severe - Task cannot continue, but component can still function.

• Warning - Potential error or impending error.

• Audit - Significant event affecting server state or resources.

• Info - General information outlining overall task progress.

• Config - Configuration change or status.

• Detail - General information detailing subtask progress.

• Fine - Trace information - General trace.

• Finer - Trace information - Detailed trace plus method entry, exit, and return values.

• Finest - Trace information - A more detailed trace that includes all the detail needed to debug problems.

• All - All events are logged. If you create custom levels, All includes your custom levels, and can provide a more detailed trace than Finest.

NOTE: Trace information, which includes events at the Fine, Finer, and Finest levels, can be written only to the trace log. Therefore, if you do not enable diagnostic trace, setting the log detail level to Fine, Finer, or Finest does NOT affect the logged data.
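The cumulative-level behavior described above follows the same threshold model as Python's standard logging module. The sketch below is an analogy only (not WebSphere code): a logger set to a verbose level also processes every level above it, and raising the level filters out the less severe events:

```python
# Analogy to cumulative log levels: DEBUG here plays the role of "Fine".
import logging
from io import StringIO

buf = StringIO()
logger = logging.getLogger("level-demo")
logger.setLevel(logging.DEBUG)          # verbose threshold
handler = logging.StreamHandler(buf)    # handlers can filter too
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.debug("trace-like detail")       # passes: at the set level
logger.warning("potential error")       # passes: above the set level

logger.setLevel(logging.WARNING)        # raise the threshold
logger.info("progress message")         # filtered: below WARNING

print(buf.getvalue())
```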

Analyze the content of the JNDI namespace using dumpNameSpace.

dumpNameSpace

Run the dumpNameSpace.sh command against any bootstrap port to get a listing of the names bound with that provider URL.


The output of the command:

• Does not present a full logical view of the name space.

• Shows CORBA URLs where the name space transitions to another server.

The tool indicates that certain names point to contexts external to the current server and its name space. The links show the transitions necessary to perform a lookup from one name space to another.

NOTE: An invocation of the dumpNameSpace.sh command cannot generate a dump of the entire name space, only the objects bound to the bootstrap server and links to other local name spaces that compose the federated name space. Use the correct host name and port number for the server to be dumped.

To run the dumpNameSpace.sh command, type the following:

server_root/bin/dumpNameSpace.sh [options]

All arguments are optional. Table below shows the available options.

Table 7.2. dumpNameSpace.sh Options

-host <hostname>
The host name of the bootstrap server. If it is not defined, the default is localhost.

-port <portnumber>
The bootstrap server port number. If it is not defined, the default is 2809.

-factory <factory>
The initial context factory to be used to get the initial context. The default of com.ibm.websphere.naming.WsnInitialContextFactory is okay for most uses.

-root [cell | server | node | host | legacy | tree | default]
WebSphere V5.0 or later:

• cell: dumpNameSpace default. Dump the tree starting at the cell root context.

• server: Dump the tree starting at the server root context.

• node: Dump the tree starting at the node root context. (Synonymous with "host".)

• default: Dump the tree starting at the initial context that JNDI returns by default for that server type. This is the only -root choice that is compatible with WebSphere servers prior to V4.0 and with non-WebSphere name servers.

-url <url>
The value for the java.naming.provider.url property used to get the initial JNDI context. This option can be used in place of the -host, -port, and -root options. If the -url option is specified, the -host, -port, and -root options are ignored.

-startAt <context>
The path from the requested root context to the top-level context where the dump should begin. Recursively dumps subcontexts below this point. Defaults to the empty string, that is, the root context requested with the -root option.

-format <format>

• jndi: Display name components as atomic strings.

• ins: Display name components parsed against INS rules (id.kind).

The default format is jndi.

-report <length>

• short: Dumps the binding name and bound object type, essentially what JNDI Context.list() provides.

• long: Dumps the binding name, bound object type, local object type, and string representation of the local object. In other words, IORs, string values, and so on, are printed.

The default report option is short.

-traceString <tracespec>
Trace string of the same format used with servers, with output going to the file DumpNameSpaceTrace.out.

-help or -?
Prints a usage statement.

 

dumpNameSpace.sh usage examples:

Get help on options:

$ dumpNameSpace.sh -?

Dump server on localhost:2809 from cell root:

$ dumpNameSpace.sh

Dump server on localhost:2806 from cell root:

$ dumpNameSpace.sh -port 2806


Dump server on yourhost:2811 from cell root:

$ dumpNameSpace.sh -port 2811 -host yourhost

Dump server on localhost:9810 from server root:

$ dumpNameSpace.sh -root server

Dump server at corbaloc:

$ dumpNameSpace.sh -url corbaloc:iiop:yourhost:901

Perform JVM troubleshooting tasks (e.g., thread dump, JVM core dump, and heap dump, verbose Garbage Collection (GC)).

Generating a Java heap dump

A Java heap dump is an IBM software development kit (SDK)-generated data file containing a snapshot of the current memory state of the application server JVM. A Java heap dump is generated automatically when the JVM runs out of memory.

To issue a command to generate a heap dump in wsadmin.sh, you must first obtain a reference to the managed bean (MBean) that is associated with the JVM running in your process, using the command shown below:

jvm = AdminControl.queryNames("WebSphere:type=JVM,process=server1,node=devNode,*")

Afterward, use the following command to induce the Java heap dump:

AdminControl.invoke(jvm, 'generateHeapDump')

Generating Java core dump

The Java core dump is an IBM SDK-generated data file that contains information pertaining to the threads and monitors in the JVM. Just as the Java heap dump is a snapshot of the process JVM memory, the Java core dump is a snapshot of the threads running on the JVM. This data file is generated automatically every time a server crashes, or a user can issue a command to generate it.

To issue a command to generate a core dump in wsadmin, you must first obtain a reference to the MBean that is associated with the JVM running in your process, using the command shown below:

jvm = AdminControl.queryNames("WebSphere:type=JVM,process=server1,node=devNode,*")

Afterward, use the following command to start the Java core dump:

AdminControl.invoke(jvm, 'dumpThreads')
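The objective above also mentions verbose GC. It is typically enabled by adding -verbose:gc to the server's generic JVM arguments (with the IBM JDK, the verbose GC output goes to native_stderr.log). A wsadmin (Jython) sketch, with hypothetical cell, node, and server names, might look like this:

```
server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')
jvm = AdminConfig.list('JavaVirtualMachine', server)
AdminConfig.modify(jvm, [['genericJvmArguments', '-verbose:gc']])
AdminConfig.save()
```

The server must be restarted for the new JVM argument to take effect.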

Locating and analyzing heap dumps

Do not analyze heap dumps on the WebSphere Application Server machine because the analysis is very expensive. For analysis, transfer heap dumps to a dedicated problem determination machine.

When a memory leak is detected and heap dumps are generated, you must analyze heap dumps on a problem determination machine and not on the application server, because the analysis is very central processing unit (CPU) and disk I/O intensive.

Perform the following procedure to locate the heap dump files:

1.  On the physical application server where a memory leak is detected, go to the WebSphere Application Server home directory:

WAS_install_root/profiles/profile_name

2.  IBM heap dump files are usually named in the following way:

heapdump.<date>.<timestamp>.<pid>.phd

3.  Gather all the .phd files and transfer them to your problem determination machine for analysis.

4.  Many tools are available to analyze heap dumps, including Rational Application Developer 7.5. WebSphere Application Server serviceability released a technology preview called Memory Dump Diagnostic for Java (MDD4J). You can download this preview from the product download Web site.
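The gathering step above can be sketched as a small script. This is an illustrative helper, not a WebSphere tool; the profile path is a placeholder you would replace with your own:

```python
# Sketch: collect IBM heap dump (.phd) files from a profile directory so
# they can be transferred to a problem determination machine. The file
# naming follows the heapdump.<date>.<timestamp>.<pid>.phd pattern above.
import glob
import os

def find_heap_dumps(profile_root):
    """Return .phd heap dump files under profile_root, newest first."""
    pattern = os.path.join(profile_root, "heapdump.*.phd")
    return sorted(glob.glob(pattern), key=os.path.getmtime, reverse=True)
```

For example, find_heap_dumps("/opt/IBM/WebSphere/AppServer/profiles/AppSrv01") would list the dumps to copy off the server.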

Have insight into IBM Support Assistant.


Diagnosing problems using IBM Support Assistant tooling

The IBM Support Assistant (ISA) is a free local software serviceability workbench that helps you resolve questions and problems with IBM software products.

Tools for IBM Support Assistant perform numerous functions, from memory-heap dump analysis and Java core-dump analysis to enabling remote assistance from IBM Support. All of these tools come with help and usage documentation that allows you to learn about the tools and start using them to analyze and resolve your problems.

The following are samples of the tools available in IBM Support Assistant:

•  Memory Dump Diagnostic for Java (MDD4J)

The Memory Dump Diagnostic for Java tool analyzes common formats of memory dumps (heap dumps) from the Java virtual machine (JVM) that is running the WebSphere Application Server or any other standalone Java application. The analysis of memory dumps is targeted towards identifying data structures within the Java heap that might be root causes of memory leaks. The analysis also identifies major contributors to the Java heap footprint of the application and their ownership relationships. The tool is capable of analyzing very large memory dumps obtained from production-environment application servers encountering OutOfMemoryError issues.

•  IBM Thread and Monitor Dump Analyzer (TMDA)

IBM Thread and Monitor Dump Analyzer (TMDA) provides analysis for Java thread dumps or javacores such as those from WebSphere Application Server. You can analyze thread usage at several different levels, starting with a high-level graphical view and drilling down to a detailed tally of individual threads. If any deadlocks exist in the thread dump, TMDA detects and reports them.

•  Log Analyzer

Log Analyzer is a graphical user interface that provides a single point of contact for browsing, analyzing, and correlating logs produced by multiple products. In addition to importing log files from multiple products, Log Analyzer enables you to import and select symptom catalogs against which log files can be analyzed and correlated.

•  IBM Visual Configuration Explorer

The IBM Visual Configuration Explorer provides a way for you to visualize, explore, and analyze configuration information from diverse sources.

•  IBM Pattern Modeling and Analysis Tool for Java Garbage Collector (PMAT)

The IBM Pattern Modeling and Analysis Tool for Java Garbage Collector (PMAT) parses IBM verbose garbage-collection (GC) trace, analyzes Java heap usage, and recommends key configurations based on pattern modeling of Java heap usage. Only verbose GC traces that are generated from IBM Java Development Kits (JDKs) are supported.

•  IBM Assist On-site


IBM Assist On-site provides remote desktop capabilities. You run this tool when you are instructed to do so by IBM Support personnel. With this live remote-assistance tool, a member of the IBM Support team can view your desktop and share control of your mouse and keyboard to help you find a solution. The tool can speed up problem determination, data collection, and ultimately your problem solution.