
Abstract

Sizing of PRIMERGY systems for Microsoft Exchange Server 2003.

This technical documentation is aimed at those responsible for the sizing of PRIMERGY servers for Microsoft Exchange Server 2003. It is intended to help in the presales phase when determining the correct server model for a requested number of users or performance class.

In addition to the question of how the required system performance is determined for a specific number of users, special emphasis is given to the requirements of Exchange Server 2003 and the bottlenecks that can arise from them. The options offered by the various PRIMERGY models and their performance classes are explained with regard to Exchange Server 2003, and sample configurations are presented.

Contents

PRIMERGY .............................................. 2
Exchange Server 2003 .................................. 3
   What's new in Exchange Server 2003 ................. 4
   Preview of Exchange 2007 ........................... 5
Exchange measurement methodology ...................... 6
   User definition .................................... 6
   Load simulation with LoadSim 2003 .................. 7
   User profiles ...................................... 8
   Evolution of the user profiles ..................... 8
   LoadSim 2000 vs. LoadSim 2003 ...................... 9
   Benchmark versus reality .......................... 10
   System load ....................................... 11
Exchange-relevant resources .......................... 13
   Exchange architecture ............................. 13
   Active Directory and DNS .......................... 14
   Operating system .................................. 15
   Computing performance ............................. 16
   Main memory ....................................... 16
   Disk subsystem .................................... 18
      Transaction principle .......................... 19
      Access pattern ................................. 20
      Caches ......................................... 20
      RAID levels .................................... 21
      Data throughput ................................ 23
   Hard disks ........................................ 24
      Storage space .................................. 26
   Network ........................................... 27
   High availability ................................. 27
   Backup ............................................ 28
   Backup solutions for Exchange Server 2003 ......... 32
   Archiving ......................................... 36
   Virus protection .................................. 37
System analysis tools ................................ 38
   Performance analysis .............................. 40
PRIMERGY as Exchange Server 2003 ..................... 44
   PRIMERGY Econel 100 ............................... 46
   PRIMERGY Econel 200 ............................... 48
   PRIMERGY TX150 S4 ................................. 49
   PRIMERGY TX200 S3 ................................. 51
   PRIMERGY RX100 S3 ................................. 53
   PRIMERGY RX200 S3 ................................. 54
   PRIMERGY RX220 .................................... 56
   PRIMERGY RX300 S3 / TX300 S3 ...................... 58
   PRIMERGY BX600 .................................... 60
      PRIMERGY BX620 S3 .............................. 60
      PRIMERGY BX630 ................................. 60
   PRIMERGY RX600 S3 / TX600 S3 ...................... 63
   PRIMERGY RX800 S2 ................................. 66
   PRIMERGY RXI300 / RXI600 .......................... 66
   Summary ........................................... 67
References ........................................... 68
Document History ..................................... 69
Contacts ............................................. 69

Sizing Guide

Exchange Server 2003

Version 4.2, July 2006

Pages 69

White Paper Sizing Guide Exchange Server 2003 Version: 4.2, July 2006

© Fujitsu Technology Solutions, 2009 Page 2 (69)

PRIMERGY

The following definition is aimed at all readers for whom the name PRIMERGY has no meaning and serves as a short introduction: PRIMERGY has been the trade name of a very successful server family from Fujitsu since 1995. It is a product line developed and produced by Fujitsu, ranging from systems for small work groups to solutions for large-scale companies.

Scalability, Flexibility & Expandability

The latest technologies are used throughout the PRIMERGY family, from small mono-processor systems through to systems with 16 processors. Intel or AMD processors of the highest performance class form the heart of these systems. Multiple 64-bit PCI I/O and memory buses, fast RAM and high-performance components, such as SCSI technology and Fibre Channel products, ensure high data throughput. This means full performance, regardless of whether the goal is scaling out or scaling up. For the scale-out method (similar to an ant colony, where enhanced performance is achieved by a multitude of individuals), the Blade Servers and compact Compu Node systems are ideally suited. The scale-up method, i.e. the upgrading of an existing system, is supported by the extensive upgrade options of the PRIMERGY systems: up to 16 processors and up to 256 GB of main memory. PCI and PCI-X slots provide the required expansion options for I/O components. Long-term planning in close cooperation with renowned component suppliers, such as Intel, AMD, LSI and ServerWorks, ensures continuous and optimal compatibility from one server generation to the next. PRIMERGY planning reaches two years into the future and guarantees the earliest possible integration of the latest technologies.

Reliability & Availability

In addition to performance, emphasis is also placed on quality. This includes not only excellent build quality and the use of high-quality components, but also fail-safety, early error diagnostics and data-protection features. Important system components are designed redundantly and their functionality is monitored by the system. Many parts can be replaced during operation, keeping downtimes to a minimum and guaranteeing availability.

Security

Your data are of the greatest importance. Protection against data loss is provided by the high-performance disk subsystems of the PRIMERGY and FibreCAT product families. Even higher availability rates are achieved by PRIMERGY cluster configurations, in which not only the servers but also the disk subsystems and the entire cabling are redundant in design.

Manageability

Comprehensive management software for all phases of the server lifecycle ensures smooth operation and simplifies the maintenance and error diagnostics of the PRIMERGY.


ServerStart is a user-friendly, menu-based software package for the optimum installation and configuration of the system, with automatic hardware detection and installation of all required drivers.

ServerView is used for server monitoring and provides alarm, threshold, report and basis management, pre-failure detection and analysis, an alarm service and version management.

RemoteView permits independent remote maintenance and diagnostics of the hardware and operating system via LAN or modem.

Further detailed information about the PRIMERGY systems is available on the Internet at http://www.primergy.com.


Exchange Server 2003

This brief chapter is aimed at readers who have not yet gained any experience with Microsoft Exchange Server 2003. Only the most important functions can be mentioned here; explaining all the features of the Microsoft Exchange Server would exceed the scope of this white paper and its actual subject matter.

Microsoft Exchange Server 2003 is a modern client-server solution for messaging and workgroup computing. Exchange enables secure access to mailboxes, mail storage and address books. In addition to the transmission of electronic mail, the platform provides convenient appointment and calendar functions within an organization or work group, publication of information in public folders and web storage, electronic forms, as well as user-defined applications for workflow automation.

Microsoft Exchange Server 2003 is completely integrated into the Windows Active Directory and supports a hierarchical topology. This permits a multitude of Exchange servers, grouped according to location, to be operated jointly on a worldwide basis within an organization. Administration can be performed on a central and cross-location basis. This decentralized concept increases the performance and availability of Microsoft Exchange for use as a messaging system within the company and enables outstanding scalability.

On the one hand, Exchange guarantees data security through its complete integration into the Windows security mechanisms; on the other hand, it provides additional mechanisms, such as digital signatures and e-mail encryption and decryption.

The high degree of reliability already offered by a single Exchange server is greatly enhanced through the support of the Microsoft Clustering Service, which is included in Windows Server 2003 Enterprise Edition and Windows Server 2003 Datacenter Edition. This enables the realization of clusters with two to eight nodes.

By means of so-called connectors, Microsoft Exchange servers can be linked to worldwide e-mail services, such as the Internet and X.400. Similarly, interoperability with other mail systems, such as Lotus Notes, PROFS and SNADS, is also possible. Furthermore, third-party suppliers now offer numerous gateways that integrate further services into the Exchange server, such as fax, telephone connections for call-center solutions, voicemail, etc.

Microsoft Exchange Server 2003 supports numerous standard communication protocols, such as Post Office Protocol Version 3 (POP3), Simple Mail Transfer Protocol (SMTP), Lightweight Directory Access Protocol (LDAP), Internet Message Access Protocol Version 4 (IMAP4), Network News Transfer Protocol (NNTP) and Hypertext Transfer Protocol (HTTP), with which Exchange can be integrated into heterogeneous network and client environments. This guarantees location-independent access to the information administered by the Exchange Server 2003, regardless of whether the device is a desktop PC (irrespective of the operating system), a Personal Digital Assistant (PDA) or a mobile telephone.

Microsoft Exchange Server 2003 is available in two editions:

Exchange Server 2003 Standard Edition
   Platform for small and medium-sized companies
   Max. 2 databases
   Max. 16 GB per database (with Service Pack 2 up to 75 GB)

Exchange Server 2003 Enterprise Edition
   Platform for medium to very large, globally active companies with the highest requirements concerning reliability and scalability
   Max. 20 databases
   Max. 16 TB per database
   Cluster support
   X.400 Connector

The Exchange Server 2003 Standard Edition is also available as part of the bundle »Windows Small Business Server 2003«. Windows Small Business Server (SBS) is designed as a complete package to meet the requirements of small and medium enterprises with up to 75 client workplaces.

Additional information concerning the functionality of Microsoft Exchange Server 2003 is available on the Internet at www.microsoft.com/exchange and www.microsoft.com/windowsserver2003/sbs.


What’s new in Exchange Server 2003

Microsoft Exchange Server's ten-year history goes back to 1996. The first version of Exchange bore version number 4.0 because it replaced the predecessor product, MS Mail 3.2. Architecturally, however, Exchange has nothing in common with MS Mail, except that both applications are basically used to exchange e-mails. Today, ten years after the market launch, Exchange bears the product name Exchange Server 2003 and the internal version number 6.5.

Exchange Server 2003 succeeded Exchange 2000 Server after three years. Based on the product name and the period of time between the two versions, one might assume that Exchange Server 2003 has revolutionary changes to offer, but as the internal version code indicates, it is a so-called point release. In comparison with the changes between Exchange 5.5 and Exchange 2000 Server, the changes are less revolutionary and more evolutionary in nature.

Compared with its predecessor Exchange 5.5, Exchange 2000 Server introduced a completely new concept for database organization and user data administration, with consequences reaching up to the domain structure of the Windows network, so that a genuine migration from Exchange 5.5 to Exchange 2000 Server was necessary. As a consequence, many Exchange users shied away from these migration costs and are now migrating directly from Exchange 5.5 to Exchange Server 2003. The migration from Exchange 2000 Server to Exchange Server 2003, on the other hand, is considerably less problematic and is the equivalent of a simple update.

Nevertheless, the new functions in Exchange Server 2003 compared with Exchange 2000 Server are not to be dismissed. Service Pack 1 (SP1) and Service Pack 2 (SP2) for Exchange Server 2003 not only fix problems, they also offer a number of new and improved features, such as mobile e-mail access with direct push technology and improved spam filtering with the »Intelligent Message Filter« (IMF). The white paper What's new in Exchange Server 2003 [L7] from Microsoft describes on more than 200 pages the new features of Exchange Server 2003, including SP2. The bulk of the new functions concern security, manageability, mobility and client-side extensions, as provided by the new standard client for Exchange, Outlook 2003, and the revised OWA (Outlook Web Access). In the following we concentrate on a number of outstanding innovations that have an impact on the hardware basis of an Exchange server.

Shorter backup and restore times

The Volume Shadow Copy Service (VSS for short) is in fact a new function of Windows Server 2003 that enables the creation of snapshot backups. Exchange Server 2003 is compatible with this new VSS function, making it possible to back up the Exchange databases in a very short time and thus drastically reduce backup and restore times, which turn out to be the limiting factor in large Exchange servers. The chapters Consolidation and Backup address this functionality in more detail.

Exchange Server 2003 offers for the first time a simple method of restoring individual mailboxes. To this end, there is a separate Recovery Storage Group, which permits the restore of individual mailboxes or individual databases during operation.

Extended clustering

In contrast to Exchange 2000 Server, which supported a cluster with two nodes on the basis of Windows 2000 Advanced Server and four nodes under Windows 2000 Datacenter Server, Exchange Server 2003 permits a cluster with up to eight nodes as early as Windows Server 2003 Enterprise Edition.

Number of cluster nodes:

                                      Exchange 2000        Exchange 2003
                                      Enterprise Server    Enterprise Edition
Windows 2000 Server                          -                    -
Windows 2000 Advanced Server                 2                    2
Windows 2000 Datacenter Server               4                    4
Windows 2003 Standard Edition                -                    -
Windows 2003 Enterprise Edition              -                    8
Windows 2003 Datacenter Edition              -                    8

Version history:

Exchange 4.0        v 4.0    Apr. 1996
Exchange 5.0        v 5.0    Mar. 1997
Exchange 5.5        v 5.5    Nov. 1997
Exchange 2000       v 6.0    Oct. 2000
Exchange 2003       v 6.5    Oct. 2003
Exchange 2003 SP1   v 6.5    May 2004
Exchange 2003 SP2   v 6.5    Oct. 2005


Reduced network load

With the new client-side caching functionality, Outlook 2003, the current version of the standard Exchange client, makes an important contribution toward reducing network load. Particularly with clients connected through a low-capacity WAN, this greatly relieves the load on the network and also on the server. Whereas all previous Outlook versions submitted a request to the Exchange server for every object, the Exchange server is now only contacted in this way on first access. Moreover, data compression is used for the communication between Outlook 2003 and Exchange Server 2003.

Communication among the Exchange servers has also been optimized. For example, the replication of public folders is now always based on a least-cost calculation, and the accesses of Exchange Server 2003 to the Active Directory are reduced by up to 60% through better caching algorithms.

It should be noted that many of the new functions of Exchange Server 2003 are only available if Exchange Server 2003 is used in conjunction with Windows Server 2003 and Outlook 2003. For more details see the white paper entitled What's new in Exchange Server 2003 [L7].

Preview of Exchange 2007

After looking back on the history of Exchange Server 2003, we would now also like to look briefly at the future of Exchange Server. The next version of Exchange will be called »Exchange Server 2007« and, as the name suggests, will appear in 2007.

The outstanding performance-relevant change will be the changeover from 32-bit to 64-bit. Exchange Server 2003 is a 32-bit application and runs solely on the 32-bit version of Windows. Since Exchange Server 2003 does not actively use PAE, Exchange is limited to an address space of 3 GB; configuring an Exchange server with more than 4 GB of main memory does not yield a performance gain. At the same time, Exchange »lives« on the cache for its database (see chapter Main memory). This limitation will be overcome with Exchange Server 2007, which will only be available as a 64-bit version for x64 architectures (Intel CPUs with EM64T and AMD Opteron); a version for the IA64 architecture (Itanium) is not planned.

As the table below illustrates, the usage of physical memory is considerably improved with 64 bit. With sufficient memory, Exchange Server 2007 will benefit particularly from a larger database cache. This reduces the number of disk accesses and thus the demands on the disk subsystem, which in today's Exchange configurations frequently determines the performance of the entire system. As a result, Exchange Server 2007 makes it possible either to organize the disk subsystem more cost-effectively for an unchanged number of users or to serve a larger number of users.

In addition, Exchange Server 2007 entails a number of new or extended functionalities, the listing of which would go beyond the scope of this paper. More details about Exchange Server 2007 can be found on the Microsoft web page http://www.microsoft.com/exchange/preview/default.mspx.

At the same time as Exchange Server 2007, a revised version of Outlook will appear on the client side. The web page http://www.microsoft.com/office/preview/programs/outlook/overview.mspx gives an overview of Microsoft Office Outlook 2007.

                                                 Physical     Exchange
                                                 memory       address space
Windows Server 2003 R2 Standard Edition            4 GB          3 GB
Windows Server 2003 R2 Enterprise Edition         64 GB          3 GB
Windows Server 2003 R2 Datacenter Edition        128 GB          3 GB
Windows Server 2003 R2 x64 Standard Edition       32 GB          8 TB
Windows Server 2003 R2 x64 Enterprise Edition      1 TB          8 TB
Windows Server 2003 R2 x64 Datacenter Edition      1 TB          8 TB
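The effect of a larger database cache on the disk subsystem can be illustrated with a small toy model. The linear hit-rate formula, the 64 GB working set and the request rate below are illustrative assumptions only, not measured Exchange behavior:

```python
def cache_hit_rate(cache_gb, working_set_gb):
    """Toy model: hit rate grows linearly with the cache's share of the working set."""
    return min(1.0, cache_gb / working_set_gb)

def disk_reads_per_s(logical_reads_per_s, cache_gb, working_set_gb):
    """Only reads that miss the cache must go to the disk subsystem."""
    return logical_reads_per_s * (1.0 - cache_hit_rate(cache_gb, working_set_gb))

# 32-bit Exchange 2003: the ~3 GB address space caps the usable cache;
# a 64-bit server could cache far more of the same working set.
for cache_gb in (3, 32):
    print(cache_gb, "GB cache ->", disk_reads_per_s(1000, cache_gb, 64), "disk reads/s")
```

Even this crude model shows why the 64-bit version can either relax the disk-subsystem requirements or support more users at the same disk load.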


Exchange measurement methodology

The following question is one that is repeatedly asked concerning the sizing of a server: »Which PRIMERGY do I require for N Exchange users?« or »How many Exchange users can a specific PRIMERGY model serve?«

This question is asked particularly by customers and sales staff, who are of course looking for the best possible system: not under-sized, so that the performance is right, but also not over-sized, so that the price is right.

In addition, service engineers ask: »How can I configure the system to achieve the best possible performance from existing hardware?« For example, it is decisive how the hard disks are organized in RAID arrays.

Unfortunately, the answers cannot be listed in a concise table with the number of users in one column and the ideal PRIMERGY system or its configuration in the other, even if many would like to see this and several competitors even suggest it. Why the answer to this apparently simple question is not so trivial, and how a suitable PRIMERGY system can nevertheless be chosen on the basis of the number of users, is explained below.

User definition

The most difficult point in the apparently simple question »Which PRIMERGY do I require for N users?« is the user himself. One possible answer to the question »What is a user?« could be: a person who uses Exchange, i.e. who sends e-mails. Is that all? No! He of course also reads e-mails... and uses the filing options provided by Exchange... and manages addresses... and uses the calendar. And just how intensively does the user perform each of these tasks? That depends on his daily requirements...

In addition to the question of user behavior with regard to the number and size of mails, the question of the access method is increasingly being asked: »How does the user gain access to the mail system?« A few years ago, largely homogeneous infrastructures were the norm, at least within an organizational unit, and practically all employees worked uniformly with Outlook. Due to the increasing mobility of end users and the growing diversity of mail-compatible devices, the multitude of protocols and access mechanisms is increasing, e.g. Outlook (MAPI), Internet protocols (POP3, IMAP4, LDAP), web-based access (HTTP) or mobile devices (WAP), to list just the most important ones.

One could now assume that this has no direct influence on the number of users an Exchange server can handle, because at any given time a user uses only one protocol or another, and in the end is still only one user. This is however not the case, because the mail protocols differ in their method of communication: one type processes mails as a whole, another on an object-oriented basis (sender, recipient, subject, ...).

These different access patterns mean that the Exchange server must convert mails into the required format, as the information is stored in only one format in the information store. The load caused by this conversion is in part not insignificant. In Exchange 2000 Server and Exchange Server 2003, in contrast to Exchange 5.5, Microsoft has taken several measures to reduce the load caused by the conversion of mail formats and protocols. For example, a special streaming cache is set up for Internet-based access, which relieves the load on the database optimized for MAPI-based access.

Our greatest problem lies in the question: how do we integrate the human factor? There is no such thing as »the« user. Just as people come in all shapes and sizes, there are users who use electronic mail more or less intensively. This depends not least on the respective task of the user. Whereas one user may send only a few short text mails per day, another may send mails with attachments of several MB. One user may read a mail and delete it straight away, whereas another collects his mails in archives, which naturally results in a completely different load on the Exchange server.


We have now realized that one user is not the same as another. But even if we know the user profile, the question remains: how many users of a specific performance class can a server system serve? Due to the manifold influences, an exact answer could only be provided by a test in a real environment, which is of course impossible. However, very good performance statements can be obtained through simulations.

Load simulation with LoadSim 2003

How do we determine the number of users who can be served by a server? By trial and error. This of course cannot be done with real users; instead, users are simulated with the aid of computers, so-called load generators, and special software. The load simulator for the Microsoft Exchange server is LoadSim.

The Microsoft Exchange Server load simulator LoadSim 2003 (internal version number 6.5) is a tool that simulates numerous mail users. LoadSim can be used to determine the efficiency of the Microsoft Exchange server under different loads. The behavior of the server can be studied with the aid of freely definable load profiles. During this process, LoadSim determines a so-called score: the mean response time a user has to wait for his job to be acknowledged. A response time of 0.2 seconds is regarded as an excellent value, because it is equivalent to the natural reaction time of a person; a value below one second is generally viewed as acceptable. The results can then be used to determine the optimum number of users per server, to analyze performance bottlenecks and to evaluate the efficiency of a specific hardware configuration. To perform your own performance analysis, use the Exchange load simulator LoadSim 2003, available on the Microsoft web page Downloads for Exchange Server 2003 [L9].
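The score logic described above can be sketched as follows. The function names and the simple averaging over a latency list are illustrative only (LoadSim's internal calculation is not reproduced here); the 0.2-second and 1-second thresholds are the ones quoted in the text:

```python
def loadsim_score(response_times_s):
    """Mean response time over all simulated user actions (illustrative)."""
    return sum(response_times_s) / len(response_times_s)

def rate_score(score_s):
    """Classify a score using the thresholds given in the text."""
    if score_s <= 0.2:
        return "excellent"   # matches the natural reaction time of a person
    if score_s < 1.0:
        return "acceptable"
    return "too slow"

# Example: latencies (in seconds) sampled from a hypothetical run
latencies = [0.08, 0.12, 0.25, 0.40, 0.15]
score = loadsim_score(latencies)
print(round(score, 2), rate_score(score))  # 0.2 excellent
```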

However, load simulation is also problematic, because it is only as good as the load profile that has been defined. Only when the load profile coincides with reality, or is at least very close to it, will the results of the load simulation correlate with the load in real operation. If the customer knows his load profile, the performance of an Exchange solution can be evaluated very exactly in a customer-specific simulation. Unfortunately, the load profile is only rarely known exactly. And although this method provides precise performance information for a particular customer, general performance statements cannot be inferred from it.

In order to perform performance measurements that are valid on a general basis, two things must be standardized: a standard user profile must be defined, and the Exchange environment must be idealized. Both assumptions must cover as large a bandwidth of real scenarios as possible.

With the aid of load simulation it is then possible to determine the influence of specific key components, such as CPU performance, main memory and the disk system, on the performance of the overall system. From this, a set of basic rules can be derived that must be observed when sizing an Exchange server. In addition, using a standard load profile, the various models of the PRIMERGY system family can be measured according to a uniform method and graded into specific performance classes.

(Figure: Load simulation setup - a controller coordinates several load generators, which access the Exchange server to be measured over the network; an Active Directory server completes the environment.)

White Paper Sizing Guide Exchange Server 2003 Version: 4.2, July 2006

© Fujitsu Technology Solutions, 2009 Page 8 (69)

User profiles

The simulation tool LoadSim 2003 for Exchange Server 2003 allows arbitrary user profiles to be created. LoadSim also provides several predefined standard profiles which, according to Microsoft, were derived from analyses of existing Exchange installations. As it was evidently difficult to determine a single standard user, Microsoft has defined three profiles - medium, heavy and cached-mode user - as well as an additional fourth profile, MMB3, for pure benchmark purposes.

All four predefined load profiles of LoadSim 2003 use the same mix of mails with an average mail size of 76.8 kB. The profiles differ in the number of mails in the mailbox and in the number of mails sent per day, as shown in the table below.

All further considerations and sizing measurements in this white paper are based on the medium profile, which should reflect most application scenarios. The heavy profile, with far more than 200 mail activities per user and day, hardly reflects an average user's real activities. The cached-mode profile was developed especially for simulating the new cache mode of Outlook 2003 and Exchange Server 2003; unfortunately, the mail traffic it generates is not comparable with any other standard profile of LoadSim 2003, so it cannot be used for a comparison between cached-mode and classic Outlook. The MMB3 profile is solely suited for benchmark purposes, as illustrated in the chapter Benchmark versus reality.

Evolution of the user profiles

A load simulation tool, LoadSim, for MAPI-based accesses has been available since the first version of Exchange. Over this period of ten years it was necessary to adapt the load profiles to user behavior that changed over the years and to the growing functionality of Exchange and Outlook. The load profiles have been redefined approximately every three years, so that LoadSim 2003 now represents the third generation of load profiles for Exchange.

The table below shows how mail volume has shifted over the years toward larger mails with attachments. Between 1997 and 2000 the average mail size increased almost seven-fold. Fortunately, this trend has not persisted: during the last three years the average mail size has only doubled. However, this is also due to the fact that many mail-server operators restrict the permissible size of e-mails.

Mail Attachment Weighting in LoadSim

  Size    Attachment                 5.5    2000   2003
  4 kB    -                          60%    41%    15%
  5 kB    -                          13%    18%    18%
  6 kB    -                           5%    14%    16%
  10 kB   Excel Object                5%     -      -
  14 kB   Bitmap Object               2%    10%     5%
  14 kB   Text File                   5%     -      -
  18 kB   Excel Spreadsheet           2%     7%    17%
  19 kB   Word Document               8%     7%    20%
  107 kB  PowerPoint Presentation     -      1%     5%
  1 MB    PowerPoint Presentation     -      1%     2%
  2 MB    Word Document               -      1%     2%

  Average mail size [kB]            5.7   39.2   76.8
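The average mail size in the last table row follows directly from the weightings above. As a cross-check, a short Python sketch recomputes the value for the LoadSim 2003 column (1 MB and 2 MB counted as 1024 kB and 2048 kB):

```python
# Recompute the average mail size of the LoadSim 2003 mail mix from the
# weighting table above; sizes in kB, weights as fractions of all mails.
mix_2003 = [
    (4, 0.15), (5, 0.18), (6, 0.16), (14, 0.05), (18, 0.17),
    (19, 0.20), (107, 0.05), (1024, 0.02), (2048, 0.02),
]
assert abs(sum(w for _, w in mix_2003) - 1.0) < 1e-9  # weights cover all mails
avg_kb = sum(size * w for size, w in mix_2003)
print(f"{avg_kb:.2f} kB")  # -> 76.81 kB, matching the table
```

The same calculation applied to the LoadSim 2000 column yields 39.16 kB, likewise matching the table.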

Activity Profile                Medium   Heavy   Cached Mode   MMB3
New mails per work day            10       12        7           8
Reply, Reply All, Forward         20       40       37          56
Average recipients per mail      4.8      4.0      3.7         2.4
Received mails                   141      208      162         152
Mail traffic in MB per day        13       20       15          16
Mailbox size in MB                60      112       93         100
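The daily mail traffic in the table is consistent with the common 76.8 kB mail mix: it is essentially (received mails + new mails + replies/forwards) multiplied by the average mail size. A quick consistency check in Python:

```python
# Cross-check "Mail traffic in MB per day" from the activity table:
# traffic is roughly (received + new + reply/forward mails) * 76.8 kB.
profiles = {  # name: (new, reply/forward, received, stated traffic in MB)
    "Medium":      (10, 20, 141, 13),
    "Heavy":       (12, 40, 208, 20),
    "Cached Mode": ( 7, 37, 162, 15),
    "MMB3":        ( 8, 56, 152, 16),
}
for name, (new, rf, received, stated) in profiles.items():
    mb = (new + rf + received) * 76.8 / 1024
    print(f"{name}: computed {mb:.1f} MB, table value {stated} MB")
```

All four computed values agree with the table within rounding.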

Exchange Version   Year of Publication   LoadSim Version
Exchange 4.0       1996                  LoadSim 4.x
Exchange 5.0       1997                  LoadSim 4.x
Exchange 5.5       1997                  LoadSim 4.x
Exchange 2000      2000                  LoadSim 2000
Exchange 2003      2003                  LoadSim 2003

Not only has the average mail size grown over the years, but so has the number of mails sent. Consequently, the average mailbox size is also growing. Since most mailbox operators intervene here as well in a restrictive capacity, typically setting mailbox limits of 100 MB, the average mailbox size is at present approximately 60 MB.

In addition to the growing mail volume, LoadSim also does justice to changing user behavior by including various user actions in the load profile. Thus the simulation of accesses to the calendar and to contacts was added in LoadSim 2000, and the simulation of smart folders and rules was included in the load profile of LoadSim 2003.

LoadSim 2000 vs. LoadSim 2003

In order to do justice to present-day user behavior, we use the current version LoadSim 2003 with the medium user profile for all sizing measurements in this paper.

Although comparability with the previous version 3.x of this sizing guide suffers as a result, it would not be justified to make sizing statements for an Exchange server based on an outdated load profile. However, in order to gain an impression of the load differences that a different load profile causes on an Exchange server, an identical system configuration was measured both with the load profile of LoadSim 2000, on which Sizing Guide 3.x was based, and with the current LoadSim 2003 profile, on which this Sizing Guide 4.x is based.

The diagram opposite shows the percentage changes of significant performance data with the measurement results of LoadSim 2000 taken as the 100% reference basis.

Activity               LoadSim 5.5   LoadSim 2000   LoadSim 2003
Send Mail                   x             x              x
Process Inbox               x             x              x
Browse Mail                 x             x              x
Free/Busy                   x             x              x
Request Meetings                          x              x
Make Appointments                         x              x
Browse Calendar                           x              x
Journal Applications        x
Logon/off                                                x
Smart Folders                                            x
Rules                                                    x
Browse Contacts                                          x
Create Contact                                           x

Activity                       LoadSim 5.5   LoadSim 2000   LoadSim 2003
New mails per work day              4              7             10
Mails with attachment              22%            27%            51%
Average recipients per mail        4.68           3.67           4.78
Average mail size [kB]             5.72          39.16          76.81
Daily mail volume [MB]             0.45           7.88          12.82
Mailbox size [MB]                   5             26             60

Benchmark versus reality

If this type of load simulation is performed under standardized conditions, it is called a benchmark. The focus of a benchmark is usually to determine the largest possible number of »users per server«. The advantage of such standardized conditions is that comparisons can be made across systems and across manufacturers. The disadvantage is that each manufacturer attempts to get the optimum out of its systems in order to do well in such comparisons. As a result, all functions that a production system normally requires, but that the benchmark rules do not mandate, are disregarded or even consciously deactivated. Functions such as backup, virus protection and other server roles, e.g. the classic file and print services, or growth options are then typically ignored completely. Even the functions that provide a system's fail-safety, e.g. data protection through RAID 1 or RAID 5, remain disregarded.

This increasingly causes confusion when the efficiency of an Exchange server is advertised by means of benchmarks. Fujitsu has therefore, in contrast to many a competitor, always consciously distinguished between benchmark and sizing measurements.

Thus, with the MAPI Messaging Benchmark MMB2 under Exchange 2000 Server, more than 16,000 users were achieved on one server. In reality, however, such a high number of users cannot be achieved; realistic user numbers are roughly a quarter of this figure. With the successor MMB3 for Exchange Server 2003, Microsoft attempted to develop a load profile that yields lower, more realistic user numbers. But with current server hardware and a disk subsystem that is oversized compared with a real environment, about 13,500 MMB3 users can be achieved - likewise about three times the number of users a server can host in real operation.

A cross-manufacturer collection of results of the MAPI Messaging Benchmark (MMB) is maintained by Microsoft in a list with MMB3 Results [L8].

Nevertheless, benchmarks are an important aid for determining the operating efficiency of computer systems, provided that the results are interpreted correctly. Above all, a benchmark must not be mistaken for a performance measurement or for a real application. The most important differences are therefore summarized below:

Benchmark: Optimized for maximum performance so that a cross-manufacturer comparison is possible.

Performance measurement: The measurement of several systems which are not necessarily trimmed for maximum performance, but are configured in a realistic, simplified scenario for the purpose of comparison with each other.

Real application: Real scenarios with several services on one server, including peak loads and exceptional situations that have to be overcome.

System load

When is an Exchange server working at full capacity? The Windows Server 2003 performance monitor provides numerous counters for a very detailed analysis of the system, and Exchange Server adds further counters of its own.

The most important counters for assessing the behavior of the system are:

• The counter »Processor / % Processor Time« describes the average CPU load. The MMB3 benchmark rules specify that this value may not exceed 90%. For a productive system, however, this is clearly too high. Depending on the source, it is recommended that the average CPU load should not permanently exceed 70% - 80%. In all our simulations for sizing the PRIMERGY as an Exchange server we set ourselves a limit of 30%, so that sufficient CPU performance remains - in addition to Exchange - for other tasks such as virus checking, backup, etc.

• The counter »System / Processor Queue Length« indicates how many threads are waiting to be processed by the CPU. This counter should not be larger than the number of logical processors for a longer period of time.

• The counter »Logical Disk / Average Disk Queue Length« provides information about the disk subsystem. Over a lengthy measuring period, this counter should not be larger than the number of physical disks that make up the logical drive.

• The Exchange-specific counter »MSExchangeIS Mailbox / Send Queue Size« counts the Exchange objects that are waiting to be forwarded. The destination can either be a local database or another mail server. The send queue should always be below 500, not grow continuously over a lengthy period of time and now and again reach a value close to zero.

• During the simulation run the simulation tool LoadSim determines the processing time of all transactions in milliseconds (ms) and calculates from this a so-called 95% score. This is the maximum time that 95% of all transactions have required. In other words, there are transactions that took longer, but 95% of the transactions implemented need less time than that specified by the score.

• The MMB3 rules & regulations stipulate that the score should be < 1000 ms. We consider a response time of 1s to be unacceptable for a productive system. For example, when scrolling through the mail inbox this would mean having to wait a second for every new entry. Therefore, for the measurements which constitute the basis for this paper we have set a maximum score of 200 ms. This is equivalent to a typical human reaction time.

• The LoadSim-specific counter »LoadSim Action: Latency« is the weighted average of the client response time. According to the MMB3 rules & regulations, this counter should also be less than 1000 ms. Analogous to the score, we have reduced this value to 200 ms as well.
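The 95% score described above is simply the 95th percentile of the measured transaction times. LoadSim computes it internally; the following independent sketch merely illustrates the principle and the 200 ms limit used in this paper (the sample values are invented):

```python
# Illustrative only: derive a "95% score" (95th-percentile response time)
# from a list of transaction times in ms and compare it with the 200 ms
# limit applied in this paper. The sample values are invented.
def score_95(latencies_ms):
    """Time within which 95% of all transactions completed."""
    ordered = sorted(latencies_ms)
    # index of the sample below which 95% of all measurements lie
    idx = max(int(len(ordered) * 0.95) - 1, 0)
    return ordered[idx]

samples = [120, 80, 150, 90, 300, 110, 95, 130, 100, 170,
           140, 85, 125, 105, 160, 115, 135, 145, 75, 190]
score = score_95(samples)
print(f"95% score: {score} ms ({'ok' if score <= 200 else 'above limit'})")
```

Note that a few transactions may take far longer than the score (here, one 300 ms outlier); the score only guarantees that 95% of transactions stay below it.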

In addition, there are further performance counters that provide information about the »health« of an Exchange server and should not continuously increase during a simulation run:

• SMTP Server: Categorizer Queue Length

• SMTP Server: Local Queue Length

• SMTP Server: Remote Queue Length

• LoadSim Global: Task Queue Length

For more information about the performance monitor and other tools for the analysis and monitoring of Exchange servers, see the chapter System analysis tools.

Let us now summarize this chapter in a few clear-cut sentences:

• An exact performance statement can only be realized by performing a customer-specific load simulation with a customer-specific user profile.

• Through idealized load simulations it is possible to derive sets of rules for server sizing, which can then be used as an aid to good planning. These rules should not be mistaken for a formula; they still have to be interpreted, meaning that the assumptions on which the rules are founded must reflect reality. To translate this into a real project, it is necessary to determine the requirements of the real users and to put these in relation to the standardized medium user used in this paper.

• Benchmark measurements should not be confused or equated with performance measurements. Benchmarks are optimized for maximum performance. With performance measurements, such as the ones on which this paper is based, the systems analyzed are configured for real operation.

The rules shown below are based on measurements performed in the PRIMERGY Performance Lab with the load simulation LoadSim 2003 and the medium user profile.

Exchange-relevant resources

After having explained how to measure an Exchange server in the previous chapter, we now wish to analyze the performance-critical components of an Exchange server, before we then discuss the individual PRIMERGY models with regard to their suitability and performance as an Exchange server in the following chapter.

Exchange architecture

At this juncture we will point out several important aspects that should be taken into consideration when designing an Exchange environment. This chapter is especially intended for readers who are not conversant with the architecture of Exchange environments.

Decentrally distributed: Exchange Server is designed for a decentralized network containing many servers. In other words, a company with, for example, 200,000 employees would not install one Exchange server for all of its employees, but 40, 50 or even 100 Exchange servers which reflect the organizational and geographical structure of the company. Exchange can nevertheless still be administered centrally and efficiently. The complete undertaking should be structured in such a way that the users can access the Exchange server allocated to them, and that all servers within a geographical location can be reached through a LAN-type connection. WAN connections suffice between the individual locations.

This decentralized concept has to all intents and purposes several practical advantages:

• The computing performance is available at the location where the user needs it.

• If a system fails, not all the users are affected.

• Data can be replicated, i.e. they are available on several servers.

• Connections to Exchange servers at other locations of the company and to worldwide mail systems can be redundant in design. If a server or a connection fails, Exchange automatically looks for another route; otherwise the most economical route is used.

• The backup data size for the classic backup (compare chapter Backup) is spread over several servers, the backup can run in parallel.

• User groups with very different requirements (mail volume, data sensitivity, etc.) can be separated from each other.

But there are also disadvantages:

• Administration personnel is required at every geographical location for backup and hardware maintenance.

• Depending on the degree of geographical spread, more hardware - particularly for backup - is necessary.

• If a great many small servers are used in contrast to a few large server systems, higher costs are incurred for software licenses.

Consolidation: It is in particular these disadvantages of decentralization affecting the Total Cost of Ownership (TCO) that result - in times of sinking company sales and increasingly difficult market conditions - in the demand for consolidation in Exchange environments.

Exchange Server 2003 does justice to this trend toward consolidation. In addition to classic platform consolidation (the use of fewer, but larger servers), consolidation of locations is also made possible. Falling costs for WAN lines and intelligent client-side caching, as provided by Outlook 2003, are the prerequisites for this consolidation approach. Where geographically distributed servers were still required with Exchange 5.5 or Exchange 2000 Server, it is now possible in many scenarios to reduce the number of locations with Exchange Server 2003.

In turn, several larger Exchange servers at one location provide the opportunity of combining the servers to form a cluster. Thus, in the event of hardware failure other server hardware can help out. In a very much decentralized scenario the hardware expenditure required for this would not be justified.

A modern infrastructure based on Exchange Server 2003 will, in comparison with an Exchange 5.5 environment, consist of considerably fewer servers and, above all, fewer locations. A reduction in locations is also conceivable in comparison with Exchange 2000 Server, because the cached mode of Outlook 2003 and the optimized communication between Outlook 2003 and Exchange Server 2003 achieve a substantial reduction in the required network bandwidth.

Dedicated servers: In addition to e-mail, Exchange Server 2003 also offers a number of other components. It can therefore be practical to distribute the various tasks over dedicated Exchange servers. A distinction is made between the following Exchange server types:

The mailbox server - also frequently known as back-end server - houses the user mailboxes and is responsible for delivering the mail to the clients by means of a series of different protocols, such as MAPI, HTTP, IMAP4 or POP3.

The public folder server is dedicated to public folders, which are brought to the end user by means of protocols, such as MAPI, HTTP, HTTP-DAV or IMAP4.

A connector server is responsible for various connections to other Exchange sites or mail systems. In this regard, standard protocols, such as SMTP (Simple Mail Transfer Protocol), or X.400 can be used, or proprietary connectors to mail systems, such as Lotus Notes or Novell GroupWise. Such a dedicated server should then be used if a connection type is used very intensively.

The term front-end server is used for a server which talks to the clients and passes on the requests of the client to a back-end-server, which typically houses the mailboxes and public folders. Such a staggered scenario of front-end and back-end servers is frequently implemented for web-based client accesses - Outlook Web Access (OWA).

Moreover, a distinction is made between so-called real-time collaboration servers, as well as data conferencing servers, video conferencing servers, instant messaging servers and chat servers, which accommodate one or more of these Exchange components in a dedicated manner. (Note: The real-time collaboration features contained in Exchange 2000 Server have been removed from Exchange Server 2003 and incorporated in a dedicated Microsoft product »Live Communications Server 2003« for real-time communication and collaboration.)

Below, we will turn our attention to the most frequently used Exchange server type, the mailbox server, which houses the users' mailboxes and public folders.

Active Directory and DNS

Exchange 2000 Server and Exchange Server 2003 are completely integrated into the Windows Active Directory. Unlike previous Exchange versions, such as 4.0 and 5.5, the information about mail users and mail folders is no longer kept within Exchange, but in the Active Directory. Exchange makes intensive use of the Active Directory and of DNS (Domain Name System). This must be taken into consideration in the complete design of an Exchange Server 2003 infrastructure: in addition to adequately performing Exchange servers, adequately performing Active Directory servers are also required, as they could otherwise have a detrimental effect on Exchange performance. As the Active Directory typically mirrors the organizational structure of a company, organizational and geographical realities must also be taken into consideration in the design. For performance reasons - apart from small installations in the small-business sector or in branch offices - Active Directory and Exchange should not be installed on the same server, because the amount of processor and memory capacity required by the Active Directory is substantial. Whereas the disk storage needed for the Active Directory is quite moderate, substantial computing performance is required for the administration and processing of accesses to it.

In larger Exchange environments the Exchange server should not simultaneously assume the role of a domain controller, but dedicated domain controllers should be used. In this respect, the sizing of the domain controllers is at least as complex as the sizing of Exchange servers. Since it would be fully beyond the scope of this white paper, the topic of sizing the Active Directory cannot be discussed any further here. Helpful information as to the design and the sizing of the Active Directory can be found at Windows Server 2003 Active Directory [L17].

Operating system

Exchange Server 2003 can be operated not only on the basis of Windows 2000 Server but also on Windows Server 2003. Conversely, however, Exchange 2000 Server cannot be operated on Windows Server 2003. The table below shows which version of Exchange can be used on which operating system. It should be noted that Exchange Server 2003 only runs on the 32-bit versions of Windows; it is not possible to install Exchange Server 2003 on a 64-bit version of Windows Server 2003. 64-bit support will only become available with the next version, Exchange Server 2007.

Many new functionalities of Exchange Server 2003 are, however, only available if Exchange is operated on the basis of Windows Server 2003. This includes in particular performance-relevant features, such as:

• Memory tuning with /3GB: The /3GB switch is already available in the Standard Edition of Windows Server 2003 and shifts the distribution of the virtual address space in favor of the application at a ratio of 3:1. Under Windows 2000 this option was only supported by Advanced Server.

• Memory tuning with the /USERVA switch: The /USERVA switch is used to fine-tune memory distribution in connection with the /3GB switch. This option allows the operating system to create larger administration tables in the kernel area while still making almost 3 GB of virtual address space available to the application.

• Data backup with Volume Shadow Copy Service (VSS): This functionality of Windows Server 2003 enables snapshot backups of the Exchange databases to be generated during ongoing operation of Exchange Server 2003. Further details are described in the chapter Backup.

• Support of 8-node clusters: On the basis of Windows Server 2003, Enterprise Edition, clusters with up to eight nodes can be implemented. In contrast, under Windows 2000 the Advanced Server only supports two nodes and the Datacenter Server four nodes.

• Support of mount points: Windows Server 2003 allows disk volumes to be mounted into an existing volume instead of being assigned a separate drive letter, thus overcoming the limit of at most 26 possible drive letters - which represented a bottleneck in clustered environments in particular.
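For illustration, both memory-tuning switches are entered in the boot.ini of a 32-bit Windows Server 2003 system. The /USERVA=3030 value shown here is the one Microsoft commonly recommended for Exchange Server 2003; the concrete ARC path and edition string are placeholders that must match the actual installation:

```ini
; Sketch of a boot.ini entry with the /3GB and /USERVA switches
; (ARC path and edition string are examples - adapt to the real system)
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /3GB /USERVA=3030
```

After changing boot.ini, the server must be rebooted for the new address-space split to take effect.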

                       Exchange 5.5   Exchange 2000   Exchange 2003
Windows 2000                ×               ×               ×
Windows 2003 32-bit                                         ×
Windows 2003 64-bit

                                          Number of cluster nodes
Windows 2000 Advanced Server                         2
Windows 2000 Datacenter Server                       4
Windows Server 2003, Enterprise Edition              8
Windows Server 2003, Datacenter Edition              8

Computing performance

It is obvious that the more powerful the processors, and the more of them a system contains, the faster the data are processed. With Exchange, however, CPU performance is not the only decisive factor. The Microsoft Exchange server provides acceptable performance even with a relatively small CPU configuration. What matters is that a fast disk subsystem and an adequate memory configuration are available and, of course, that the network connection is not a bottleneck. With small server systems in particular it becomes evident that the restricting factor is not processor performance, but the configuration options of the disk subsystem.

The diagram further below can be used as a guideline for the number of processors. There are no strict limits defining the number of processors: all systems are available with processors of different performance, so a 2-way system with highly clocked processors and a large cache can certainly lie within the performance range of a poorly equipped 4-way system. Which system is used, also with regard to possible expandability, ultimately depends on the forecast of the customer's growth rates. In addition, it can prove advantageous to use a 2-way or 4-way system even for a smaller number of users, as this type of system frequently offers better configuration options for the disk subsystem.

As far as pure computing performance is concerned, 4-way systems are generally adequate for Exchange Server 2003, because with large Exchange servers it is not computing performance but Exchange-internal memory administration that sets the limits. However, it may still make sense to use an 8-way system if very CPU-intensive Exchange extensions are used in addition to the pure Exchange server or if with regard to greater availability the use of Windows Server 2003, Datacenter Edition is taken into consideration.

With more than approximately 5000 active users on a server the scaling of Exchange Server is not satisfactory (see Main memory). Therefore, for large numbers of users a scale-out scenario, in which several servers in a logical array can serve an arbitrarily large number of mail users, should be considered instead of a scale-up scenario.

Main memory

As far as the main memory is concerned, there is a quite simple rule: the more, the better. First, the main memory of the server must at least be large enough that the system is not compelled to swap program parts from physical memory into virtual memory on the hard disk; otherwise the system would be hopelessly slowed down. As far as the program code is concerned, 512 MB will generally suffice: the system then runs freely and no program code has to be swapped out to the hard disk.

If more memory is available, however, it is used by Exchange as a cache for data from the Exchange database, the so-called store. This substantially relieves the load on the disk subsystem and thus yields a gain in performance; after all, accesses to memory are about 1000 times faster than accesses to the hard disks.

But unfortunately there are limits here as well. First of all, 4 GB of RAM is a magical limit: more cannot be addressed with a 32-bit address. Windows 2000 and 2003 have mechanisms for overcoming this limit, the so-called Physical Address Extension (PAE); depending on the version, Windows supports up to 128 GB of RAM. From a hardware viewpoint it would also be entirely possible to provide this memory in the PRIMERGY RX600 S3 or RX800 S2, but Microsoft Exchange Server 2003 does not support PAE addressing and is therefore limited to 4 GB of RAM. Further information is available at Physical Address Extension - PAE Memory and Windows [L18].

                                          # CPU   RAM [GB]   /3GB support
Windows 2000 Server                         4        4            -
Windows 2000 Advanced Server                8        8            ×
Windows 2000 Datacenter Server             32       32            ×
Windows Server 2003, Standard Edition       4        4            ×
Windows Server 2003, Enterprise Edition     8       32            ×
Windows Server 2003, Datacenter Edition    32       64            ×
Windows Server 2003, Standard SP1/R2        4        4            ×
Windows Server 2003, Enterprise SP1/R2      8       64            ×
Windows Server 2003, Datacenter SP1/R2     64      128            ×

(Figure: Guideline for the number of CPUs (1, 2 or 4) as a function of the number of users, from 500 to 4000.)

A further reduction in the effectively usable memory results from the system architecture of Windows Server 2003. By default the virtual address space is divided into 2 GB for the operating system and 2 GB for applications, to which Exchange belongs. By means of an appropriate configuration parameter it is possible to shift this split to 1:3 GB in favor of the applications. It is advisable to use this so-called /3GB option from a physical memory configuration of 1 GB RAM upward. (For better understanding: this /3GB option refers to the administration of virtual memory addresses, not to the physical memory.) The /USERVA option, with which the 3:1 distribution of the address space can be controlled in more detail, was introduced with Windows Server 2003 as a supplement to the /3GB option.

The necessity of the /3GB switch for Exchange becomes clearer in view of the fact that Exchange evidently requires about twice as many virtual addresses as the physical memory it actually uses - an effect that Microsoft describes only indirectly. From a programming point of view this can be explained, and because of the underlying methodology it can even be regarded as good, or at least modern, programming style. But given that with 32-bit systems the limits of available virtual addresses and physically available memory are converging, and that physical memory can even exceed the addressable range, this memory administration architecture represents an (unnecessary) limitation.

In other words, one might assume that an IA64 system, such as the PRIMERGY RXI600, is the optimal system for Exchange. In reality, a 64-bit architecture would be optimal for the internal memory administration of Exchange Server 2003, but not for its other components. There is at present no 64-bit version of Exchange Server 2003, so despite the almost unlimited virtual address space of 8 TB (terabytes) that 64-bit Windows provides to applications, the current version of Exchange would, from today's perspective, not run faster on an IA64 system, but more slowly.

But let us return to the current hardware options. A rough estimate: 3 GB of virtual address space for Exchange, of which at most half can be used physically, results in 1.5 GB. In fact, Microsoft has limited the ESE cache size for the store to about 900 MB; according to a Microsoft description, this may be increased to 1.2 GB. Please note: this memory requirement is only for the store cache. It goes without saying that Exchange Server 2003 uses additional memory for other data. With 2 GB of RAM an Exchange server is already quite well equipped. Memory above a configuration of 3 GB is only conditionally used by Exchange. However, a configuration with further memory of up to 4 GB can be very practical if other components, such as a virus scanner or fax services, are to run in addition to Exchange.

In addition to these considerations as to the maximum (sensibly) usable memory, there are also guidelines based on empirical values that calculate the memory configuration on a per-user basis. Microsoft recommends calculating 512 kB of RAM per active user. If you assume a base of 512 MB for the operating system and the core components, the outcome is the linear lower curve opposite:

RAM = 512 MB + 0.5 MB * [number of users]

If this is set against the limitations discussed above, it is also possible to read off an upper limit for the number of users that an individual Exchange server can serve: approximately 5000. This is not a hard limit; the Exchange server does not crash with a higher number of users, it is simply no longer able to work efficiently under high load. For lack of memory, the disk subsystem comes under greater load, and the resulting longer response times ultimately increase the load on the CPU as well. More jobs have to be managed in the queues, which in turn causes higher administrative overhead and greater memory requirements, ultimately at the expense of the cache memory. The process thus escalates until in the end the users can no longer be adequately served.

In addition, practical experience has shown that »small« systems in particular benefit from a somewhat larger memory configuration (see upper curve).
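The rule-of-thumb formula above can be turned into a one-line sizing aid. A minimal Python sketch (the 512 MB base and 0.5 MB per user are the empirical values from the text, not exact requirements):

```python
def exchange_ram_mb(users, base_mb=512, per_user_mb=0.5):
    """Empirical RAM sizing: base for OS and core components
    plus 0.5 MB for each active Exchange user."""
    return base_mb + per_user_mb * users
```

For 5000 users this yields about 3 GB, which matches the limit discussed above, beyond which additional memory brings Exchange little benefit.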

White Paper Sizing Guide Exchange Server 2003 Version: 4.2, July 2006

© Fujitsu Technology Solutions, 2009 Page 18 (69)

Disk subsystem

Practical experience shows that Exchange systems are frequently oversized as far as CPU performance is concerned, but hopelessly undersized when it comes to the disk subsystem. The overall system performance is then thoroughly unsatisfactory, which is why we intend to treat the disk subsystem topic intensively. In the following chapters we will see that system performance (with regard to Exchange) is frequently determined not by the CPU performance but by the connection options for a disk subsystem.

An Exchange server that is primarily used as a mailbox server needs a large amount of storage in order to administer the mailbox contents efficiently. The internal hard disks of a server are as a rule inadequate, and an external disk subsystem is needed. There are currently four approaches to the topic of disk subsystems:

Direct Attached Storage (DAS) is the name of a storage technology in which the hard disks are connected directly to one or more disk controllers installed in the server. Typically, SCSI, SAS or SATA is used in conjunction with intelligent RAID controllers. These controllers are relatively cheap and offer good performance. The hard disks are either in the server housing or in external disk housings, which basically accommodate only the disks and their power supply.

DAS offers top-class performance and is a good choice for small and medium-sized Exchange installations. For large-scale Exchange installations, however, DAS has some limitations as regards scaling. The physical disks must all be connected directly to the server through SCSI cabling. The number of disks per SCSI bus is limited, as is the number of possible SCSI channels in one system. This results in limits on the maximum configuration. Further DAS disadvantages are the extensive and thus error-prone SCSI cabling, as well as the fact that clusters are not fully supported: all servers integrated in a cluster must be able to access a common data pool, whereas DAS requires a dedicated disk allocation.

Under the aspect of these limitations, the networks of Network Attached Storage (NAS) or Storage Area Network (SAN) appear considerably more attractive. Both concepts are based on the idea that the disk subsystem must be detached from the server and made available as a separate unit in a network to one or more servers. Vice versa, a server can also access several storage units.

Network Attached Storage (NAS) is in principle a classical file server. Such a NAS server is specialized in the efficient administration of large data quantities and provides this storage to other servers through a LAN. Internally, NAS servers typically use the DAS disk and controller technology. Classical LAN infrastructures are used for the data transport to and from the servers; consequently, NAS systems can be built at reasonable prices. As the data storage is not allocated to one server on a dedicated basis, high scalability can be achieved by using several NAS servers.

Classic NAS topology is basically unsuited for Exchange 2000 Server and Exchange Server 2003. Exchange uses an installable file system (IFS), which requires access to a block-oriented device. The IFS driver is an integral component of the Exchange 200x architecture and is used for internal Exchange processes. If the Exchange database is filed on a non-block-oriented device, the Exchange 200x database cannot be mounted (cf. Microsoft Knowledge Base Article Q317173).

However, in addition to the classic file sharing through NFS and CIFS, modern NAS systems also provide special disk drivers, which make the NAS visible by means of a block-oriented device for Windows 200x. If this is the case, an NAS can also be used in conjunction with Exchange 200x.

Storage Area Network (SAN) is currently the most innovative technology in the fast-growing storage market. In contrast to NAS, a SAN does not use the LAN for data transport, but its own high-bandwidth network based on Fibre Channel (FC). The conversion from LAN protocol to SCSI required with NAS is not needed with Fibre Channel, since Fibre Channel uses the same data protocol as SCSI. However, Fibre Channel is not subject to the physical restrictions of SCSI. Thus, in contrast to SCSI, where the cable length is restricted to 25 meters, cable lengths of up to 10 kilometers are possible depending on the cable medium and bandwidth. Fibre Channel also offers much greater scope with regard to the number of devices: in contrast to the maximum of 15 devices on a SCSI channel, Fibre Channel enables up to 126 devices through a so-called arbitrated loop (FC-AL), and this limit can be increased further by using Fibre Channel switches. In a Storage Area Network all the servers and storage systems are connected to each other and can thus access a large data pool. A SAN is therefore ideal for cluster solutions, where several servers share the same storage areas in order to take over the tasks of a failed server. A SAN is an ideal solution for large or clustered Exchange installations.

             Copper    Glass fiber
                       MMF                SMF
                       62.5 µm   50 µm    9 µm
1 Gbit FC    10 m      175 m     500 m    10 km
2 Gbit FC    -         90 m      300 m    2 km
4 Gbit FC    -         45 m      150 m    1 km


Internet Small Computer System Interface (iSCSI), specified by the »Internet Engineering Task Force« (IETF) in RFC 3720, is increasingly gaining in significance alongside Fibre Channel (FC) with its entirely separate infrastructure. The concept is based on the idea of detaching the disk subsystem from the server and making it available as a separate unit in a network to one or more servers. Conversely, a server can also access several storage units. In contrast to most Network Attached Storage (NAS) products, which provide the protocols Server Message Block (SMB) or Common Internet File System (CIFS), known from the Microsoft world, or the Network File System (NFS), known from UNIX / Linux, through a LAN, both iSCSI and Fibre Channel make block devices available in the server. Some applications, e.g. Exchange Server, need block-device interfaces for their data repository. Such applications cannot tell whether they access a directly connected disk subsystem or whether the data reside somewhere in the network. Unlike Fibre Channel with its complex infrastructure of special controllers (Host Bus Adapters, HBAs), separate cabling, separate switches and even separate management, iSCSI uses the infrastructure known from TCP/IP – hence the designation »IP-SAN«. As a result of using existing infrastructures, the initial costs with iSCSI are lower than in the Fibre Channel environment. See also Performance Report - iSCSI and iSCSI Boot [L4].

Transaction principle

Microsoft Exchange Server works in a transaction-oriented way and stores all data in databases (the so-called store). Exchange 2000 Server and Exchange Server 2003 support up to 20 separate databases, which can be structured in four so-called storage groups, each with a maximum of five databases. A joint transaction log file is written per storage group. Compared with Exchange 5.5 with only one storage group and database, this architecture has a number of advantages and thus overcomes a number of limitations. Here are some of the most important advantages that are of interest for our sizing considerations:

• A database is restricted to one logical volume, whose size is limited by the disk subsystem; with many RAID controllers, for example, a maximum of only 32 disks can be combined into one volume. This limitation can be overcome by using several databases.

• One backup process is possible per storage group; as a result the backup process can be effected in parallel by using several storage groups and thus optimized as regards time. Prerequisite for this is of course adequate backup hardware.

• Under the aspect of availability, the time needed to restore a database after its loss is critical. By distributing the data among several databases this restore time can be reduced.

• Sensitive data can be physically separated by using different databases and storage groups. This is particularly interesting when an ASP (application service provider) wants to serve several customers on one Exchange server.


Access pattern

Database administration results in two completely complementary data access patterns. On the one hand there is the database with 100% random access, of which typically 2/3 are read and 1/3 write accesses. On the other hand, the transaction log files produce a data stream that is written 100% sequentially. To do justice to this, it is advisable to spread the databases and log files over different physical disks.

A second aspect concerns the physical separation of log files and databases: in the transaction concept all changes to the database are recorded in the log files. If the database is lost, it can be completely restored using a backup and the log files written since the backup was generated. In order to achieve maximum security it is sensible to store the log files and the database on physically different hard disks, so that not all data are lost in the event of a disk crash. As long as only one of the two sets of information is lost, the missing information can be restored. This applies particularly to small Exchange installations, where the small number of hard disks tempts one to store both sets of information on one data medium.

Caches

An intelligent SCSI controller with its own cache mechanisms offers further opportunities for adapting the disk subsystem to the requirements of the Exchange server. Thus the write-back cache should be activated for the volume on which the log files reside. Read-ahead caches should also be activated for the log files; this has advantages during restores, when log files have to be read. The same applies to the volume on which queues (SMTP or MTA queues) are filed.

However, for the volume on which the store resides it is not practical to activate the read-ahead cache. It may sound illogical to deactivate a cache that exists to accelerate accesses. However, the store is a database of many gigabytes that is accessed randomly in blocks of 4 kB. The probability that a 4 kB block from such a huge amount of data is found in a cache of a few MB, and does not have to be read from disk, is very low. With some controllers it is unfortunately not possible to deactivate the read cache independently of the write cache, so that on every read a check is first made as to whether the requested data are in the cache. In this case better overall throughput is achieved by deactivating the cache (except with RAID 5), as read accesses are typically twice as frequent as write accesses.

In addition, each individual hard disk provides its own write and read caches. As standard, the read cache is always activated. There is a lengthy discussion about activating the write cache because, in contrast to the write cache of the RAID controller, this cache has no battery backup. Provided that the server (and of course the disk subsystem) is UPS-protected, it may be sensible to activate the write caches of the hard disks. For safety reasons, all hard disks from Fujitsu are supplied with the write caches deactivated. Some of our competitors supply their hard disks with activated write caches, with the result that at first sight such systems appear to perform better in comparative testing. However, if a system is not protected against power failures by a UPS, or if it is switched off abruptly, data in the activated write caches of the hard disks can be lost.


RAID levels

One of the most failure-prone components of a computer system is the hard disk. It is a mechanical part and is heavily used above all by database-based applications, a category that includes Exchange. It is therefore important to be prepared for the failure of such a component. To this end, there are methods of arranging several hard disks in an array in such a way that the failure of one hard disk can be coped with. This is known as a Redundant Array of Independent Disks, or RAID for short. Below is a brief overview of the most important RAID levels.

The figure illustrates how the blocks of a data flow are organized on the individual disks.

RAID 0 RAID level 0 is also denoted as the »non-redundant striped array«. With RAID 0, two or more hard disks are combined solely with the aim of increasing the read/write speed. The data are split into small blocks with a size of between 4 and 128 kB, so-called stripes, and are stored alternately on the disks. In this way, several disks can be accessed at the same time, which increases the speed. Since no redundant information is generated with RAID 0, all data in the RAID 0 array are lost if even a single hard disk fails. RAID 0 offers the fastest and most efficient access, but is only suitable for data that can be regenerated without any problems at all times.

RAID 1 With RAID 1, also known as »drive duplexing« or »mirroring«, identical data are stored on two hard disks, which results in a redundancy of 100%. In addition, alternating access can also increase the read performance. If one of the two hard disks fails, the system continues to work with the remaining hard disk without interruption. RAID 1 is the first choice in performance-critical, fault-tolerant environments. Moreover, there is no alternative to RAID 1 when fault tolerance is called for but no more than two disks are required or available. However, the high reliability has its price: twice the number of hard disks is necessary.

RAID 5 A RAID 5 array requires at least three hard disks. Similarly to RAID 0, the data flow is split into blocks. Parity information is formed across the individual blocks and stored on the RAID array in addition to the data, whereby a data block and its parity information are always written to two different hard disks. If one hard disk fails, the data can be restored with the aid of the remaining data and parity information. The wastage caused by the additional parity information decreases with the number of hard disks used and amounts to 1/(number of disks); a simple rule of thumb is one disk of wastage per RAID 5 array. RAID 5 offers redundancy and budgets disk resources best of all. However, the cost is the performance lost to the parity calculation, which even special RAID controllers cannot fully compensate.

[Figure: block layout of a data flow A, B, C, … on the individual disks for RAID 0 (striping), RAID 1 (mirroring: A/A′, B/B′, C/C′) and RAID 5 (striping with rotating parity blocks P(ABC), P(DEF), P(GHI)).]


RAID 1+0 Also occasionally referred to as RAID 10. It is actually not a separate RAID level, but merely RAID 0 combined with RAID 1, so the features of the two basic levels - security and sequential performance - are combined. RAID 1+0 uses an even number of hard disks. Two disks each are combined into a mirrored pair (RAID 1), and the data are then striped across these pairs (RAID 0). RAID 1+0 is particularly suited to the redundant storage of large files. Since no parity has to be calculated, write access with RAID 1+0 is very fast.

RAID 0+1 In addition to the RAID level combination 1+0 there is also the combination 0+1. A RAID 0 array is formed from half of the hard disks, and the information is then mirrored onto the other half (RAID 1). As regards performance, RAID 0+1 and 1+0 are identical. However, RAID 1+0 offers a higher degree of availability than RAID 0+1. If a disk fails with RAID 0+1, redundancy is no longer given; with RAID 1+0, by contrast, further disks may fail as long as both disks of the same RAID 1 pair are not affected. In a RAID 1+0 consisting of n disks, the probability of both disks of a specific RAID 1 pair failing, 2/(n²-n), is considerably smaller than the probability of a disk failing together with a disk that does not belong to its pair, (2n-4)/(n²-n).
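The two probability expressions can be verified by enumerating all equally likely two-disk failures. A small Python sketch (illustrative; disks are numbered so that 2k and 2k+1 form a mirror pair):

```python
from itertools import combinations
from fractions import Fraction

def two_disk_failure_probs(n):
    """Among all equally likely two-disk failures in an array of n disks
    organized as mirror pairs (0,1), (2,3), ..., return:
      - P(the failed disks are exactly one specific mirror pair) = 2/(n^2-n)
      - P(a specific disk fails with a non-partner disk)         = (2n-4)/(n^2-n)
    """
    failures = list(combinations(range(n), 2))
    total = len(failures)                  # n(n-1)/2 possible failure sets
    p_pair = Fraction(1, total)            # e.g. exactly disks {0, 1}
    cross = sum(1 for a, b in failures
                if 0 in (a, b) and {a, b} != {0, 1})
    return p_pair, Fraction(cross, total)
```

For n = 8 this gives 2/56 versus 12/56, i.e. the non-partner case is six times as likely.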

Others There are a number of other RAID levels that are in part no longer in use today, or other combinations, such as RAID 5+0.

More information about the different RAID levels is to be found in the white paper Performance Report – Modular RAID [L5].

For all RAID levels care must be taken that hard disks of the same capacity and performance are used; otherwise the smallest disk determines the overall capacity and the slowest disk the overall performance. The performance of a RAID array is determined on the one hand by the RAID level used, but also by the number of disks in the array. The RAID controllers themselves also show differing performance, particularly for more complex RAID algorithms such as RAID 5. Finally, parameters such as block and stripe size, which have to be defined when setting up the RAID array, also influence the efficiency of a RAID array. The diagram opposite shows the relative performance of various RAID arrays.
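The capacity rules of the individual levels can be summarized in a small helper. A Python sketch (illustrative; the level names and the identical-disk assumption follow the text):

```python
def usable_capacity_gb(n_disks, disk_gb, level):
    """Net capacity of a RAID array built from identical disks."""
    if level == "RAID0":
        return n_disks * disk_gb             # no redundancy at all
    if level == "RAID1":
        return disk_gb                       # two-disk mirror
    if level == "RAID5":
        return (n_disks - 1) * disk_gb       # one disk of parity wastage
    if level in ("RAID10", "RAID01"):
        return n_disks // 2 * disk_gb        # half the disks hold mirror copies
    raise ValueError("unknown RAID level: " + level)
```

Eight 36 GB disks in RAID 1+0 thus yield 144 GB net; the same eight disks in RAID 5 would yield 252 GB, but at the cost of the parity calculation.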

[Figure: block layout for RAID 1+0 (mirrored pairs A/A′ … H/H′, striped with RAID 0) and RAID 0+1 (two RAID 0 stripe sets mirrored with RAID 1).]


Data throughput

Current SCSI RAID controllers provide a data throughput of 320 MB/s per SCSI channel. This is more than adequate for database-oriented applications, even if 7 or more hard disks are operated on one channel. With regard to the number of disks on a SCSI channel, the maximum data transfer rate of the SCSI bus is frequently, and mistakenly, divided by the peak data transfer rate of a hard disk. With fast hard disks this peak may well be over 80 MB/s, according to which a SCSI bus would already be at full capacity with four hard disks. However, this calculation only applies to a theoretical extreme situation. The diagram below shows the real situation: the theoretical curve is roughly achieved only with purely sequential reads of large data blocks, as is the case with video streaming. As soon as write operations are added, the data throughput slumps markedly. For random access with block sizes of 4 and 8 kB, as occurs with access to the Exchange database, the throughput is only approximately 10 MB/s. This means that the maximum possible number of hard disks can be run on one SCSI bus without any trouble.

SCSI RAID controllers provide up to 4 SCSI buses. Consequently the possible throughput adds up in principle. Therefore it is important for such controllers to also be used in an adequate PCI slot. The table opposite shows the various PCI bus speeds and the throughputs ascertained with them. However, the data throughputs also depend on the type and number of the controllers used as well as on the memory interface (chip set) of the server.

To ensure that controller type and number are matched to the server, each controller is tested and certified for the individual systems by Fujitsu. In this connection, the system configurator determines which and how many controllers can sensibly be used per system.

PCI bus                   Throughput in MB/s
                          theoretical   measured
PCI 33 MHz, 32-bit             133          82
PCI 33 MHz, 64-bit             266         184
PCI-X 66 MHz, 64-bit           533         330
PCI-X 100 MHz, 64-bit          800         n/a
PCI-X 133 MHz, 64-bit         1066         n/a
PCI-E 2500 MHz, 1×             313         250
PCI-E 2500 MHz, 2×             625         500
PCI-E 2500 MHz, 4×            1250        1000
PCI-E 2500 MHz, 8×            2500        2000
PCI-E 2500 MHz, 16×           5000        4000


Hard disks

A major influence on the performance is the speed of the hard disk. In addition to mean access time, rotational speed in particular is an important parameter here. The faster the disk rotates, the more quickly the data of a whole track can be transferred; but also the data density of the disk has an influence upon this. The closer the data are together on the disk, i.e. the more data can be packed into one track, the more data can be transferred per revolution and without repositioning the heads.

In the SCSI and SAS environment only disks from the top of the performance range are offered. Thus no hard disks with less than 10000 rpm (revolutions per minute) or with a seek time (positioning time) greater than 6 ms are offered. The table opposite shows the currently available disk types. Hard disks with even greater capacities are to be expected in the near future.

The rotational speed of the hard disk is directly reflected in the number of read/write operations that a disk can process per time unit. If the number of I/O commands that an application produces per second is known, it is possible to calculate the number of hard disks required to prevent the occurrence of a bottleneck. In comparison with a hard disk with 10 krpm, a hard disk with 15 krpm shows - depending on the access pattern - an up to 40% higher performance, particularly in the case of random accesses with small block sizes as occur with the Exchange database. For sequential accesses with large block sizes that occur in backup and restore processes, the advantage of 15 krpm is reduced to between 10% and 12%.

Moreover, the number of hard disks in a RAID array plays a major role. For example, eight 36 GB disks in a RAID 1+0 are substantially faster than two 146 GB disks, although the effective capacity is the same. In other words, it is necessary to weigh up the number of available slots for hard disks, the required disk capacity and ultimately the costs. From a performance point of view, the rule is: more small hard disks rather than fewer large ones.

If Exchange Server 2003 is placed under stress using the medium load profile of LoadSim 2003, 0.6 I/Os per second and user occur for the Exchange database. The table below shows the required number of hard disks subject to the number of users, disk rotational speed and RAID level. It takes into consideration that write accesses need two I/O operations with RAID 10 and up to four I/O operations with RAID 5. If you also take the typical database access profile for Exchange of 2/3 read and 1/3 write accesses as the basis, the I/O rate for a RAID 10 is calculated according to the formula

IO(RAID 10) = IO × (2/3 + 2 × 1/3) = 4/3 × IO

and the I/O rate for a RAID 5 according to the formula

IO(RAID 5) = IO × (2/3 + 4 × 1/3) = 2 × IO

However, it should be noted that the actually required number depends on user behavior: a different user profile can initiate a different I/O load.
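Both formulas, together with the per-disk I/O rates given further below (about 120 I/Os per second at 10 krpm and 170 at 15 krpm), allow the required disk count to be estimated. A Python sketch (illustrative; rounding RAID 10 up to whole mirror pairs and the three-disk RAID 5 minimum are assumptions chosen to reproduce the table values):

```python
import math

def disks_for_io(users, raid, disk_ios, io_per_user=0.6):
    """Disks needed so the array sustains the Exchange database load,
    assuming the 2/3 read : 1/3 write access profile."""
    write_penalty = {"RAID10": 2, "RAID5": 4}[raid]   # I/Os per logical write
    array_io = users * io_per_user * (2 + write_penalty) / 3
    n = math.ceil(array_io / disk_ios)
    if raid == "RAID10":
        return max(n + n % 2, 2)     # round up to whole mirror pairs
    return max(n, 3)                 # RAID 5 needs at least three disks
```

For 1000 users on 10 krpm disks this gives 800 I/Os per second and 8 disks in RAID 10, as in the table.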

# Users   IO/s          RAID 10                        RAID 5
                  # IO     # Disks               # IO     # Disks
                           10 krpm   15 krpm              10 krpm   15 krpm
50          30      40        2         2          60        3         3
100         60      80        2         2         120        3         3
500        300     400        4         4         600        5         4
1000       600     800        8         6        1200       10         8
2000      1200    1600       14        10        2400       20        15
3000      1800    2400       20        16        3600       30        22
4000      2400    3200       28        20        4800       40        29
5000      3000    4000       34        24        6000       50        36

rpm       IO/s
 5400       62
 7200       75
10000      120
15000      170

Type         rpm      Capacity [GB]
                      36   60   73   80   100   146   160   250   300   500
2½" SATA     7200          ×              ×
3½" SATA     7200                    ×                 ×     ×           ×
2½" SAS      10000    ×         ×
3½" SCSI     10000    ×         ×               ×                  ×
3½" SCSI     15000    ×         ×               ×


As far as data security is concerned, the log files are considerably more important than the database itself, because the log files record all changes since the last backup. The log files should therefore be protected by RAID 1 (mirror disk) or RAID 5 (stripe set with parity); for performance reasons RAID 1, or RAID 1+0, is advisable. Since the log files are automatically deleted during backup, no large quantities of data accrue as long as backups are made regularly.

Theoretically, the database should require no further protection as far as data loss is concerned; here it would be possible to work without RAID or, for performance reasons, with RAID 0 (stripe set). In practice, however, we strongly advise against this. If a hard disk fails, the Exchange server is out of operation until the disk has been replaced, the last backup loaded and the restored database synchronized with the log files. Depending on database size this may take hours or even a whole working day, which is unacceptable for an increasingly pivotal medium such as e-mail. For the database, RAID 5 or RAID 1+0 should therefore be used. From a performance point of view RAID 1+0 is advisable; however, cost pressure or the maximum disk configuration frequently forces a side-step to RAID 5. For small Exchange installations, where performance is not in the forefront, RAID 5 is a good compromise between performance and costs.


Storage space

Now that we have intensively discussed the types and performance of the individual components of the disk subsystem, a significant question still remains: How much storage space do I need for how many users? This again leads to the classic problem of user behavior. Are all mails administered centrally by the users on the Exchange server, or locally in private stores (PST) under Outlook? How large are individual mails typically? Even the client used (Outlook, Outlook Express, or web-based access through a web browser) influences the storage space required on the Exchange server.

If the customer has made no specifications, it is possible to take as a basis a moderately active Outlook user who administers his mails on the Exchange server. In this respect, 100 MB per user or mailbox is a very practical value. If the calculation adds a further 100% as space for growth, so that Exchange has adequate working scope, this is a very good value. The table shows the disk requirement for the database. The calculation has to take into consideration that a 36 GB disk only has a net capacity of 34 GB; accordingly, a 73 GB disk offers 68 GB net, a 146 GB disk 136 GB, and a 300 GB disk 286 GB. In the RAID 5 calculation a package size of at most 7 disks was taken as a basis. From a performance point of view it would be advisable to choose a package size of 4 or 5; in this case the hard disk requirement increases by 6 or 11% respectively. As already mentioned, RAID 5 should be avoided from a performance viewpoint and preference given to a RAID 1+0 array.
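The sizing rule above can be sketched as a small calculation (Python, illustrative; the net capacities and the RAID 5 package size of at most 7 disks are the values from the text):

```python
import math

def mailbox_db_disks(users, raid, disk_net_gb, mailbox_mb=100):
    """Disks for the mailbox database: 100 MB per mailbox,
    doubled as space for growth."""
    need_gb = users * mailbox_mb * 2 / 1000     # 100% growth reserve
    data = math.ceil(need_gb / disk_net_gb)     # data-carrying disks
    if raid == "RAID10":
        return max(2 * data, 2)                 # every data disk mirrored
    # RAID 5: packages of at most 7 disks, i.e. one parity disk per 6 data disks
    return max(data + math.ceil(data / 6), 3)
```

With 34 GB net disks (36 GB gross) and 1000 users this yields 12 disks in RAID 1+0, matching the table.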

In addition to the hard disks for the database(s) with the user mailboxes, disk requirements for public folders must also be taken into consideration.

Moreover, hard disks are still required for the log files. The scope of the log files depends on the one hand on user activity and on the other hand on backup cycles. After a backup the log files are deleted. A RAID 1 or RAID 1+0 should be used for the log files. The table opposite shows the disk requirements for a log file size of 6 MB per user for three days' storage.

In addition to the disk requirements for the database and log files, Exchange also needs storage space for queues. Queues can occur when mails cannot be delivered immediately, e.g. when other mail servers cannot be reached or a database is off-line because of a restore. Queues are typically written and read sequentially; separate storage space should be provided for them as well. The data volume can be estimated analogously to the log file requirements from the average mail volume per user and the anticipated maximum downtime of the components causing the queue.
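The log sizing from the table can likewise be expressed as a one-liner (Python sketch; 6 MB per user and day and three days of storage are the example values used here):

```python
def log_space_gb(users, mb_per_user_day=6, days=3):
    """Net space for transaction logs accumulated between full backups."""
    return users * mb_per_user_day * days / 1000
```

For 1000 users this gives 18 GB net, matching the table.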

Logs per user and day: 6 MB   Number of days: 3

Number of   Logfile   Number of disks, RAID 1+0
users       GB net    36 GB   73 GB   146 GB
50             1        2       2       2
100            2        2       2       2
500            9        2       2       2
1000          18        2       2       2
2000          36        4       2       2
3000          54        4       2       2
4000          72        6       4       2
5000          90        6       4       2

User with 100 MB mailbox

Users   GB net   Number of disks, RAID 1+0        Number of disks, RAID 5
                 36 GB   73 GB   146 GB  300 GB   36 GB   73 GB   146 GB  300 GB
50         10      2       2       2       2        3       3       3       3
100        20      2       2       2       2        3       3       3       3
500       100      6       4       2       2        4       3       3       3
1000      200     12       6       4       2        7       4       3       3
2000      400     24      12       6       4       14       7       4       3
3000      600     36      18      10       6       20      11       6       3
4000      800     48      24      12       6       28      14       7       4
5000     1000     60      30      16       8       35      18      10       5


Network

Network quality represents an important performance factor. For example, an overloaded Ethernet segment, in which many collisions occur, has a significant influence on performance. It is advisable to connect the Exchange Server - depending on data volume - to the backbone through a 100-Mbit Ethernet or a gigabit Ethernet.

If the backup is implemented not on the server on a dedicated basis but centrally through the network, appropriate bandwidth must be provided. In the case of an on-line backup - which is the recommended backup method, see chapter Backup - the Exchange backup API provides a data throughput of approximately 70 GB/h, which is equivalent to about 200 Mbit/s.

For a user as described in the medium profile (see chapter User profiles) an average data volume of 5 kbit/s per user is to be expected. In addition to the pure data volume, consideration must be given to the fact that the network is loaded differently depending on the protocol used. Thus the MAPI protocol induces many small network packets, which place a greater load on the network than the fewer, larger packets that occur with the IMAP protocol.
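The resulting network load can be estimated directly from this figure (Python sketch; 5 kbit/s per user is the medium-profile value given above):

```python
def client_bandwidth_mbit(users, kbit_per_user=5):
    """Average client traffic of an Exchange server in Mbit/s."""
    return users * kbit_per_user / 1000
```

Even 5000 medium-profile users thus generate only about 25 Mbit/s on average; a central network backup, by contrast, claims about 200 Mbit/s on its own.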

High availability

Availability plays a major role in large-scale Exchange server installations. If several thousand users depend on one single server, uncontrolled downtimes may cause sizable damage. In this case it is recommended to implement availability through a cluster solution, as offered by the Windows Cluster Services of Windows Server 2003 Enterprise Edition and Windows Server 2003 Datacenter Edition. Entirely different restrictions as regards disk subsystem and performance apply to such clusters.


Backup

Backup is one of the most important components for safeguarding data. With regard to high-availability hardware resources there could be a tendency to neglect the backup of mirrored data. Studies, however, show that only about 10% of data loss is due to hardware and environmental influences. The remaining 90% is split approximately into one half for data losses as a result of software problems, such as program errors, system crashes and viruses, and one half for data losses as a result of reckless handling of the data by users and administrators.

It is possible to eliminate part of the possible causes by means of preventive measures. The hardware-induced causes for data losses can be intercepted by excellent hardware equipment, as provided by Fujitsu. For natural disasters, Fujitsu even offers a disaster-tolerant hardware platform for Exchange servers. A good virus scanner should by all means be used with every Exchange server in order to protect against data loss through viruses. Data loss as a result of reckless handling can be reduced in an Exchange server by training the administrators accordingly. Nevertheless, there is still great potential for data losses due to program errors, system crashes or human error. In this case, prevention through reliable backup hardware and a careful backup strategy is the only cure.

One measure for avoiding data losses through program errors and system crashes is the transaction database concept used by Exchange, which was already explained in the chapter Transaction principle. All changes to the database are recorded in so-called log files. The log files, which are only written sequentially, are for the most part immune to logical errors due to program errors, as can occur in a database with complex data structures that is permanently written and read at random positions. Furthermore, the data volume of the log files is quite small compared with the database, so that errors in the log files are, statistically speaking, substantially less frequent.

However, this transaction principle necessitates a regular backup, as otherwise the accumulated record of all changes to the database would in the long run take up a great deal of storage space. With an on-line Exchange backup the log files are automatically deleted by Exchange once the database backup has been completed. If the database is lost, it can - with the aid of a backup and the log files written since that backup - be restored to its state at the time of the loss. After a database has been restored, Exchange automatically replays the log files with all the changes since the backup.

All versions of Exchange Server provide the option of carrying out a so-called on-line backup during ongoing operations. In this way, the Exchange database can be backed up while all the services of Exchange - apart from some performance loss - remain available without restriction. Alternatively, it is of course possible to carry out a so-called off-line backup. However, this is not an adequate method: the Exchange services are not available during the backup, the data volume to be backed up is larger (because the database files are backed up as a whole and not logically), no data check takes place and the log files are not purged automatically. The essential disadvantage of an off-line backup, however, lies in the fact that with a restore the log files written since the backup was made cannot be replayed.

The choice of a suitable backup medium and suitable backup software has a considerable influence on the availability of the Exchange server. Whereas the backup can be carried out during the ongoing operations of Exchange and the duration of a backup is thus not directly critical, the duration of a restore is particularly decisive for availability, since, in contrast to the backup, the Exchange services are not fully available during the restore. This is why, when selecting a backup solution - hardware and software - particular attention must be paid to speed in addition to reliability.

Exchange Server itself contains a number of features that accommodate a fast backup and restore. Since Exchange 2000 Server, Exchange has supported multiple databases and storage groups. Storage groups can be backed up in parallel and databases can be restored individually. In the event of a restore only those users are affected whose data are in the database to be restored. All other users can use the Exchange services without any restrictions, apart from possible performance loss.

[Figure: Backup + log files = current database]


In connection with Windows Server 2003, Exchange Server 2003 has a further innovation to offer, the so-called »Volume Shadow Copy Service« (VSS) which substantially shortens the time for a backup. Essentially, the storage technology VSS is an innovation of Windows Server 2003. New in Exchange Server 2003 is the support for this function provided by Windows Server 2003. This means that VSS is only available for Exchange Server 2003 when Exchange Server 2003 is used in combination with Windows Server 2003.

With VSS, Microsoft provides a Windows-proprietary and uniform interface for shadow copies, frequently also referred to as snapshot backups. Snapshot backups are nothing new: a great many storage systems have supported such backups for a long time, and there are also various third-party software solutions that implement snapshot backups of Exchange databases with the support of such storage systems. What is new, however, is an interface that is supported by Microsoft, standardized and independent of the disk subsystem. The emphasis is deliberately placed on interface, or framework as Microsoft puts it.

The manufacturers of intelligent storage systems and backup solutions now have to adapt their products to this framework. Even applications have to be adapted if they want to be VSS-compliant and to support snapshot backups. Exchange Server 2003 is already one such VSS-compliant application.

In the main, the VSS framework consists of three parts:

• The Requestor is the software that initiates a snapshot; typically this is backup software. As regards Microsoft's own backup tool, »ntbackup.exe«, which is supplied with Windows Server 2003 or in an extended version with Exchange Server 2003, it must be noted that it is not a VSS Requestor capable of generating snapshots of Exchange Server 2003. As far as Exchange is concerned, it merely controls classic on-line backups.

• The Writer is a component that every VSS-compliant application has to provide, with the Writer having to be adjusted to the application-specific data architecture. With Exchange, the Writer must ensure that a consistent database state is created in accordance with the transaction database principle on which Exchange is based, and that no changes are made to the data during the time of the actual snapshot. In addition, the Writer must also provide metadata for the data to be backed up. For example, in the case of Exchange a consistent data set can extend over several volumes that must be backed up together.

• The VSS Provider performs the actual snapshot. The Provider is usually supplied by the storage manufacturers whose storage systems offer internal mechanisms for cloning. Windows Server 2003 includes a software-based Provider that works according to the copy-on-write method.

The advantage of the VSS framework is that components from various software and hardware manufacturers work together. Particularly in somewhat larger data centers, in which various hardware and applications are used, the backup can now - regardless of the storage manufacturer - be coordinated with standardized software, and special solutions are no longer needed to meet the requirements of, for example, database-based applications.

[Figure: VSS framework - VSS Requestors, application-specific VSS Writers (e.g. for Exchange and SQL Server) and VSS Providers, connected through the VSS framework]

Several VSS Requestors and several VSS Providers can co-exist for various volumes.


Backup hardware

When choosing the backup medium, attention must be paid to the fact that the database can be backed up in an adequate time. Since an on-line backup means performance losses for the users, it should be possible to carry out a backup during the typically low-use hours of the night. In addition to the backup, database maintenance, such as garbage collection and on-line defragmenting, also takes place during this time. In this connection, garbage collection should run first, followed by on-line defragmenting and then the backup. As a result of garbage collection the data volume to be backed up becomes smaller, and due to the defragmenting the subsequent database accesses during the backup take place more quickly. The data transfer rate and also the capacity of an individual backup medium play a role in the selection of the backup medium. It is indeed possible to carry out a backup on hard disk, Magneto Optical Disk (MO), CD-RW or DVD-RW. However, due to the data volume, tapes are the typical medium used.

If there are no other requirements, such as existing backup structures, the backup medium should be chosen in such a way that backup and restore can be completed in an acceptable time with a manageable number of tapes. At any rate the backup device should be selected in such a way that the backup is carried out without any operator intervention, i.e. without an administrator having to change tapes during the backup or restore. For larger data volumes, where one medium is not enough, there are so-called tape libraries, which automatically change the tapes, as well as devices with several drives that write onto several tapes in parallel - similar to RAID 0 (striping) with hard disks - so as to increase data throughput. The table below shows a small selection of tape libraries.
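The media calculation behind this choice can be sketched in a few lines. The helper name and the example figures are assumptions for illustration; the LTO2 native capacity of 200 GB is taken from the table below:

```python
import math

def tapes_needed(data_gb: float, tape_capacity_gb: float,
                 compression_ratio: float = 1.0) -> int:
    """Tapes required for a full backup; compression_ratio > 1 shrinks the data."""
    return math.ceil(data_gb / (tape_capacity_gb * compression_ratio))

# 300 GB of Exchange databases on LTO2 media (200 GB native capacity):
print(tapes_needed(300, 200))        # 2 -> an autoloader/library is advisable
print(tapes_needed(300, 200, 1.6))   # 1 if the data compress about 1.6:1
```

As soon as more than one tape is needed, unattended operation requires a changer or library.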

Technology   Maximum data throughput   Capacity/tape without compr.
             [MB/s]      [GB/h]        [GB]
DDS Gen5       3           10            36
VXA-2          6           21            80
VXA-320       12           42           160
LTO2          24 (35)     105 (123)     200
LTO3          80          281           400

Library              Technology   Drives    Tapes        Maximum throughput   Capacity without compr.
                                                         [GB/h]               [TB]
VXA-2 PackerLoader   VXA-2        1         10           21                   0.8
FibreCAT TX10        VXA-320      1         10           42                   1.6
FibreCAT TX24        LTO2, LTO3   1 - 2     12 / 24      123 - 281            2.4 - 9.6
MBK 9084             LTO2, LTO3   1 - 2     10 - 21      123 - 281            2.0 - 8.4
Scalar i500          LTO2, LTO3   1 - 18    36 - 402     123 - 5058           7.2 - 160
Scalar i2000         LTO2, LTO3   1 - 96    100 - 3492   123 - 26976          20 - 1396
Scalar 10k           LTO2, LTO3   1 - 324   700 - 9582   123 - 91044          789 - 3832


Backup duration

Calculation of the backup duration is not quite so trivial. In theory, it is the result of data volume divided by the data transfer rate. However, the maximum data transfer rate stipulated for tape technology cannot be taken as a basis. The effective data transfer rate is determined by other factors.

To begin with, the data must be made available. In this respect, the performance of the disk subsystem on which the database is stored plays a role, as do CPU performance, main memory size and finally even Exchange Server 2003 itself. For an on-line backup, Exchange must provide all the data by means of the so-called backup API. In so doing, 64 kB blocks must be read, verified and transferred to the backup software. Microsoft quotes a throughput of approximately 70 GB/h for the Exchange Server 2003 backup API.

A further limitation to the data throughput follows from a technical feature of tape drives. Tapes are faster streaming devices than disks, but they become dramatically slow when the data are not provided continuously and in sufficient quantities. If the data arrive too slowly or irregularly, the tape drive can no longer operate in so-called streaming mode but switches to start-stop operation. In this mode the tape is stopped when no data are outstanding. When sufficient data are again available, the tape is restarted; for many recording procedures the tape even has to be rewound a little. This takes time and the writing speed decreases. How pronounced this effect is depends on the recording technology used, on the caching abilities of the backup drive, and on the backup software used. The better the backup software is familiar with and designed for the features of the backup drive, the higher the effective data transfer rate.

Compressing the data is another influence on the effective data transfer rate. All backup drives support data compression. This is not implemented by the backup software or the driver for the tape drive, but by the firmware of the tape drive itself. Depending on how well the data compress, the write speed can increase. As a result the effective data throughput can even be above the maximum data throughput of the backup medium.

The table below shows effective data throughput rates. In each case an on-line backup of a 50 GB Exchange database was carried out with a sufficiently fast disk subsystem, using the Windows backup program.

Technology   Maximum data throughput   Effective data throughput   Total duration
             [MB/s]   [GB/h]           [MB/s]   [GB/h]             [h]
DDS Gen5       3        10               4.8      16.8             3:10
VXA-2          6        21               7.5      26.3             1:45
VXA-320       12        42              15.0      52.7             1:00
LTO2          30       105              47.0     165.2             0:18
LTO3          80       281             105.0     369.1             0:08
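The "Total duration" column can be approximated from the effective throughput. The following sketch uses assumed helper names; the measured totals additionally include start-up and tape-handling overhead, so real durations come out slightly longer than the pure streaming time:

```python
def backup_duration_hours(database_gb: float, effective_gb_per_h: float) -> float:
    """Pure streaming time for an on-line backup."""
    return database_gb / effective_gb_per_h

def fmt_h_mm(hours: float) -> str:
    """Format a duration as h:mm."""
    h = int(hours)
    m = round((hours - h) * 60)
    if m == 60:          # guard against rounding up to a full hour
        h, m = h + 1, 0
    return f"{h}:{m:02d}"

# 50 GB database as in the measurements above:
print(fmt_h_mm(backup_duration_hours(50, 369.1)))   # LTO3: "0:08"
print(fmt_h_mm(backup_duration_hours(50, 16.8)))    # DDS Gen5: "2:59" (measured: 3:10)
```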


Backup solutions for Exchange Server 2003

As backup software, either the Windows backup program or a 3rd-party product that supports the Exchange backup API, such as BrightStor ARCserve from Computer Associates or NetWorker from EMC Legato, can be used. Compared with the Windows backup program, third-party products provide additional functions, such as support for tape libraries and the backup of individual mailboxes or even individual mails or folders. However, when backing up individual mailboxes or folders, note that the throughput is considerably smaller - only approximately 20% - in comparison with an on-line backup of entire Exchange databases.

With Windows Server 2003 and Exchange Server 2003, VSS-based backup and recovery should become a standard procedure for disaster recovery in the enterprise environment. The advantages of this methodology, as discussed above, speak for themselves. A 3rd-party backup product is also necessary here, because Windows' own backup tool, ntbackup, does not support VSS snapshots of Exchange databases.

The functional scope of the products available on the market varies substantially, particularly as regards the supported hardware devices, the support of applications other than Exchange, or even operating systems other than Windows. For Exchange Server 2003, Fujitsu recommends the backup products of the NetWorker family. This backup solution is VSS-compliant and also enables individual mailboxes to be backed up and restored. In addition, NetWorker supports the on-line backup of a wide range of applications, all market-relevant backup devices and operating system platforms. Such a backup solution is therefore not an insular solution for Exchange Server 2003, but lays the foundation for a company-wide enterprise backup solution.

Feature                Windows Backup   BrightStor ARCserve   NetWorker
Offline Backup               ×                   ×                 ×
Online Backup                ×                   ×                 ×
Single Database              ×                   ×                 ×
Single Mailbox                                   ×                 ×
Single Objects                                   ×                 ×
VSS Snapshots                                    ×                 ×
Backup in parallel           ×                   ×                 ×
Online Restore               ×                   ×                 ×
Restore in parallel          ×                   ×                 ×
Cluster Support              ×                   ×                 ×
Tape Library Support                             ×                 ×
Remote Backup                ×                   ×                 ×
Environments              Small               Windows         Heterogeneous


[Figure: Backup topologies - an Exchange server with locally attached backup software vs. an Exchange server with a backup agent and a dedicated backup server in the network]

Backup strategy

Even more fundamental than adequate backup hardware and software is an appropriate backup strategy. The backup strategy influences the requirements made of the backup hardware and software and in turn defines the restore strategy. Thus the backup intervals and backup method are critical for restore times. The structuring of the Exchange server into storage groups and databases also influences the backup and restore times. Since the restore time in particular is the critical path (it means downtime), it determines - particularly with larger Exchange servers - both the backup concept and the Exchange storage group and database concept.

Exchange 2000 Server and Exchange Server 2003 support up to four storage groups, each with up to five databases. Each storage group is administered within its own process, and for each storage group a separate set of log files is maintained for all its databases. One backup process is possible per storage group. In this way, provided there is appropriate backup hardware, backups can run in parallel. On the other hand, it should not be concealed that splitting into several storage groups means additional overhead during normal operations, because each group runs in its own process, and thus a higher CPU load and memory requirement.

In order to completely back up an Exchange 2000 Server or Exchange Server 2003 system, it is - in contrast to Exchange 5.5 - not sufficient to back up only the Exchange databases. Although Active Directory is not a component of Exchange 200x and its backup, Exchange 200x is very much based upon it. The entire Exchange 200x configuration data are stored in Active Directory, as are the user data. Moreover, Exchange 200x is based on IIS, and various fundamental Exchange configuration data are stored in the IIS metabase. Both sets of information, Active Directory and the IIS metabase, are backed up in a system-state backup. Note in this respect that Active Directory is not necessarily hosted on the Exchange server; in that case a system-state backup of the domain controller must also be made. With clustered systems, other components must also be taken into consideration in the backup.

The backup hardware can either be directly connected to the Exchange server or to a dedicated backup server in a network. With an on-line backup, access to the Exchange data is effected in both cases through the backup interface of the respective Exchange server. If a decision is made in favor of a dedicated backup server, then a Gigabit Ethernet LAN is advisable in order to guarantee adequate data throughput.

Online backup types   save       save        purge
                      database   log files   log files
Full                     ×          ×           ×
Incremental                         ×           ×
Differential                        ×
Copy                     ×          ×


Restore

Databases can be individually reconstructed. During this process the other databases are not affected and the users allocated to them can use all the Exchange services.

Depending on the cause that triggers a database restore, loss of data is possible despite a diligent backup. A distinction is made between two recovery scenarios:

Roll-Forward Recovery

In the scenario of a roll-forward recovery, one or more databases of a storage group are lost but the log files are intact. In this case, a selective restore of the databases concerned can be effected from a backup. Exchange restores all the data changed since the time of the backup on the basis of the transaction logs. This means that no data are lost at all, despite the necessity of accessing a backup.

Point-in-Time Recovery

If, in addition to one or more databases, the log files of a storage group are also affected, all the databases and the log files of the storage group must be copied back from the backup. As in this case the changes since the last backup, in the form of transaction logs, are also lost, only the database state at the time of the last backup can be restored.

In such a disaster, which necessitates a point-in-time recovery and in which data are lost, only backup intervals that are as short as possible can minimize the data loss.

The time required to restore a database is always greater than the time needed for the backup. On the one hand, this is hardware-induced, because tapes are faster streaming devices than disks - particularly when writing to a RAID 5 array, where parity must also be calculated and written. On the other hand, it is software-induced, because the restore process is more complex than the backup process. The restore process is made up of

• Installation of the last full backup.

• Installation of incremental or differential backups.

• Replay of the changes saved in the log files since the last full backup.

For the restore, it can be assumed that typically 60% - 70% of the backup speed is achieved. The time for the replay of the log files depends on the backup intervals and the performance of the Exchange server. The longer the time since the last backup, the more log information has to be replayed. The replay of the log files can in fact take considerably longer than restoring the backup itself (see box).

In order to increase the restore speed it is advisable to deactivate the virus scanner for the corresponding drive during the loading of the data. A virus check ought to be superfluous as the data were already checked prior to their entry in the database (see chapter Virus protection).

Restore time is tantamount to downtime. Particularly with such an elementary medium as e-mail, certain requirements are made of availability. For example, if the requirement is made that an Exchange server may fail for a maximum of one hour, then it must be possible to implement the necessary restore of the database in precisely this time. Thus the maximum size of an Exchange database is determined indirectly. Therefore, the practical upper limit of an Exchange server is in the end not determined by the efficiency of the hardware, such as CPU, main memory and disk subsystem, but by the backup concept.

Restore example

Database size: 4 GB
Log files of one week: 360 × 5 MB = 1.8 GB
Restore time for the database: ½ hour
Replay of the log files: 5 hours
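The example can be reproduced with a rough model. The 65% restore-speed factor and the per-log replay time below are assumptions chosen to match the figures above, not measured constants:

```python
def restore_time_hours(database_gb: float, backup_rate_gb_per_h: float,
                       log_files: int, replay_s_per_log: float) -> float:
    """Total restore time: copy back the database, then replay the log files."""
    copy_back = database_gb / (0.65 * backup_rate_gb_per_h)  # ~65% of backup speed
    replay = log_files * replay_s_per_log / 3600.0           # sequential log replay
    return copy_back + replay

# 4 GB database, ~12.3 GB/h backup rate, one week of logs (360) at ~50 s each:
print(restore_time_hours(4, 12.3, 360, 50))   # ~5.5 h -> the log replay dominates
```

The calculation makes the point of the example explicit: with long backup intervals, the log replay - not the tape restore - determines the downtime.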


Best Practice

The best backup method is a regular on-line full backup. In addition to the Exchange databases the full backup should include the system state of the Exchange server and of the domain controller as well as all the Exchange program files.

In contrast to incremental and differential backups, a full backup minimizes the restore times. In order to minimize the time required for loading the log information, it is advisable to carry out a full backup as often as possible. With differential and incremental backups, it must also be taken into consideration that additional disk space is temporarily required for the log files during the restore.
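The difference between the strategies can be made concrete by counting the backup sets a restore has to load. This is a sketch with assumed names, not a tool from the guide:

```python
def restore_sets(strategy: str, days_since_full: int) -> int:
    """Number of backup sets a restore must load before replaying open log files."""
    if strategy == "daily_full":
        return 1                                      # always just the last full backup
    if strategy == "weekly_full_daily_differential":
        return 1 + (1 if days_since_full > 0 else 0)  # full + latest differential
    if strategy == "weekly_full_daily_incremental":
        return 1 + days_since_full                    # full + every incremental since
    raise ValueError(f"unknown strategy: {strategy}")

# Four days after the last weekly full backup:
print(restore_sets("daily_full", 0))                      # 1
print(restore_sets("weekly_full_daily_differential", 4))  # 2
print(restore_sets("weekly_full_daily_incremental", 4))   # 5
```

The incremental strategy saves backup time and tape at the cost of the longest restore chain, which is why the daily full backup is recommended above.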

The backup hardware should be designed in such a way that a full backup can be carried out without manually changing the tape. This provides the basis for an automatic, unattended, daily backup.

For further information on backup strategies see Exchange Server 2003 Technical Documentation Library [L10] and Exchange 2000 Server Resource Kit [L12]

Daily Full Backup: 1 tape
Weekly Full Backup with Daily Differentials: 2 tapes
Weekly Full Backup with Daily Incrementals: n tapes


Archiving

Although Exchange could theoretically manage data storage of up to 320 TB, there are in practice limits of the magnitude of 400 GB. Therefore, a limitation of the mailbox size in order to control the data volume on one Exchange server is found in almost all larger Exchange installations. And where can older mails be put? In most cases the answer to this question is left up to the mailbox user. The user decides whether information that exceeds the mailbox limit is deleted or archived on the client side. In this regard, however, neither data integrity nor data security is generally ensured. This cannot be the solution in fields of business in which the retention of all correspondence is required by law. Server-side solutions are required in this case.

There are a series of high-performance 3rd-party products for the automatic archiving of e-mails. The term archiving should not be confused with backup in this regard. A backup is used to save the data of current databases and for their recovery. An archive is used to retain the entire information history. Moreover, with archiving a distinction must be made between classic »long-term archiving« and »data relocation« to lower-cost storage media.

Long-term archiving

In order to meet statutory or auditing retention requirements, certain data stocks must be retained for the stipulated period. Once successfully archived, these data may no longer be changed, but must be available at any time for evaluation on request.

Data relocation

Data relocation is particularly suited for the displacement of so-called inactive data. This is the usual term for e-mails that are forgotten after a fairly long time. These e-mails are relocated to lower-cost media by means of migration according to set rules, such as ageing (date received, date of last access), size, and threshold values, such as high and low watermarks. In contrast to long-term archived e-mails, these e-mails remain visible in the Exchange database by means of so-called stub objects. User access to a relocated e-mail automatically and transparently triggers restoring of the e-mail to the Exchange database.

An archiving solution for Exchange Server can, on the one hand, be used to meet statutory regulations and, on the other hand, also increase performance and availability. Relieving the Exchange database of old e-mails results in better throughput and, in the event of a restore, in shorter downtimes.


Virus protection

Approximately as many data losses are due to computer viruses as to hardware failures - about 8%. These figures only cover cases that entail data loss; they do not include failures due to mail servers being blocked by viruses, or downtimes for the elimination of viruses. It is therefore vital to protect a mail system with a virus scanner, so as to block any viruses before they spread or do any damage. In this regard, it is not sufficient to check only incoming mails for viruses. Outgoing mails must also be checked, in order not to unintentionally distribute to business partners viruses that were introduced in other ways than by incoming mail; the damage in this case would above all be loss of image. Internal mail traffic must likewise be checked, because the ways in which viruses can be introduced are varied, e.g. data media (such as floppies, CDs, removable disks or USB sticks), Internet and remote access, or portable computers that are also operated in external networks.

In addition to viruses that spread through e-mail, spam mail - unsolicited advertising - is also a nuisance nowadays and places a substantial load upon the mail servers. Statistics show that between 5 and 40% of the volume of mail is caused by spam.

Exchange Server 2003 itself does not provide a virus scanner. Third-party solutions are required in this respect. As of version Exchange 5.5 Service Pack 3, however, Exchange has at least a virus scanner API, known in short as AV API. This interface permits virus scanners to check unencrypted e-mails for viruses in an efficient way and to eliminate them before they reach the recipient's mailbox. For encrypted e-mails, client-side anti-virus tools are needed.

There is a whole range of anti-virus products. These products are generally not restricted to the protection of Exchange, but mostly comprise a whole suite of protective programs with which client PCs, web servers, file servers and other services can be safeguarded against viruses. Although the Exchange virus API was already introduced with Exchange 5.5 SP3, not all anti-virus solutions support this interface; even today some products are still restricted to the SMTP gateway and the client interface. When selecting an anti-virus solution, care should be taken that it is compatible with the virus scanner API of Exchange Server 2003. Only then are effective protection and optimal performance guaranteed.

An overview of existing anti-virus solutions for Exchange Server and an independent performance assessment are provided by the web site www.av-test.org, a project of the University of Magdeburg and AV-Test GmbH.

For its work, a virus scanner uses considerable system resources. Processor performance above all is required, particularly for compressed attachments, because in these cases the contents to be checked must first be unpacked. A virus scanner does not, on the other hand, place an appreciable load on main memory, disk I/O or network I/O.

Measurements with the medium profile on a PRIMERGY TX150 with TrendMicro ScanMail 6.2 have shown that with a virus scanner the CPU requirement of Exchange increases by approximately a factor of 1.6. The response times, on the other hand, increase by an almost constant amount of approximately 25 ms.

However, for the processors of the PRIMERGY it is not a problem to provide the necessary computing performance. During sizing, attention merely has to be paid to planning for this CPU requirement and selecting a correspondingly powerful CPU or number of processors.
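The measured factor can be applied directly during sizing. This is an illustrative calculation only; the factor of 1.6 is taken from the measurement above and should be treated as workload-dependent:

```python
def cpu_load_with_scanner(base_cpu_percent: float, scanner_factor: float = 1.6) -> float:
    """Planned Exchange CPU load including the virus scanner overhead."""
    return base_cpu_percent * scanner_factor

# A server planned at 40% CPU load for Exchange alone:
print(cpu_load_with_scanner(40.0))   # 64.0% once the scanner is active
```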

Nowadays data security plays an increasingly important role and many people decide in favor of encrypting their mails. However, in this respect the fact that such mails cannot be checked for viruses on the Exchange server must be taken into consideration. In this case classic virus scanners must be used on the client systems.


System analysis tools

A business-critical application, such as e-mail communication, requires far-sighted planning and continuous performance checks during ongoing operations. For Exchange Server, Microsoft provides a variety of tools that enable the efficiency of an Exchange server to be planned, verified and analyzed. The tools can be assigned to three phases:

Planning and Design

White Paper

A variety of documents exists for the planning of an Exchange Server environment and the sizing of the individual Exchange Servers. In addition to this white paper, which deals specifically with the sizing of PRIMERGY servers, there are numerous Microsoft documents, see Exchange Server 2003 Technical Documentation Library [L10]. The white paper Exchange Server 2003 Performance and Scalability Guide [L11], which contains essential information about the performance and scaling of Exchange Server 2003, is particularly worth mentioning here.

System Center Capacity Planner 2006

Microsoft also provides System Center Capacity Planner 2006 [L14], which enables the interactive planning and modeling of Exchange Server 2003 topologies and Operations Manager 2005 environments.

Prototyping (design verification)

Microsoft provides two tools for evaluating the performance of Exchange Server and for verifying whether the planned server hardware has been adequately sized. Neither tool is suited for ongoing operations; both should only be used on non-productive systems.

JetStress

The tool JetStress is used to check the disk I/O performance of an Exchange Server. JetStress simulates the disk I/O load of Exchange for a definable number of users with regard to the Exchange databases and their log files. It is not absolutely necessary for Exchange Server 2003 to be installed for this purpose. However, CPU, memory and network I/O are not simulated by JetStress.

LoadSim

The simulation tool LoadSim has already been presented in detail in chapter Exchange Measurement Methodology. It simulates the activity of Exchange users and thus puts the Exchange Server under realistic load, with all the resources (CPU, memory, disk subsystem, network) the Exchange Server needs being involved.

Both stress tools can be downloaded free of charge from the web site Downloads for Exchange Server 2003 [L9].

Operate

Windows contains a central concept to monitor and analyze the performance of a system at run time. Events and performance counters are collected and archived on a system-wide basis for this purpose. This standardized concept is also open to all applications, provided they make use of it. Microsoft Exchange Server uses it intensively and not only records events in the event log but also provides a variety of Exchange-specific performance counters. To evaluate the events and performance counters you can either use Event Viewer and Performance Monitor, available as standard in every Windows system, or use special tools that evaluate and assess the contents of the event log and the performance counters under specific application aspects.


Event Viewer

The Event Viewer is a standard tool in every Windows system and can be found under the name »Event Viewer« in the start menu under »Administrative Tools«. The events are sorted into various groups, such as »Application«, »Security«, »System« or »DNS Server« and divided in each case into the classes »Information«, »Warning« and »Error«. The events logged by Exchange Server are to be found under the category »Application«. However, events that influence the availability of an Exchange Server can also appear under »System«.

Performance Monitor

The Performance Monitor is an integral part of every Windows system and can be found under the name »Performance« in the start menu under »Administrative Tools«. It can also be selected using the short command »perfmon«.

A description of the most important performance counters relevant to the performance of an Exchange Server can be found in the following chapter Performance Analysis.

Microsoft Exchange Server Best Practices Analyzer Tool

The tool Microsoft Exchange Server Best Practices Analyzer [L9], also known in short as ExBPA, determines the »state of health« of an entire Exchange Server topology. For this purpose, the Exchange Server Best Practices Analyzer automatically collects the settings and data from the relevant components, such as Active Directory, Registry, Metabase and performance counters. These data are compared using comprehensive best-practice rules and a detailed report is consequently prepared with recommendations for the optimization of the Exchange environment.

Microsoft Exchange Server Performance Troubleshooting Analyzer Tool

The Microsoft Exchange Server Performance Troubleshooting Analyzer Tool [L9] collects the configuration data and performance counters of an Exchange Server at run time. The tool analyzes the data and provides information about the possible causes of bottlenecks.

Microsoft Exchange Server Profile Analyzer

The Exchange Server Profile Analyzer [L9] can be of help for future capacity planning and performance analyses. With the aid of this tool it is possible to collect statistical information about the activities of individual mailboxes as well as entire Exchange Servers.

Microsoft Exchange Server User Monitor

Unlike the above listed tools, the Microsoft Exchange Server User Monitor [L9] does not work on the server side, but on the client side. As a result it is possible to analyze the impression of an individual user with regard to the performance of Exchange Server. The Exchange Server User Monitor collects data such as processor usage, response times of the network and response times of the Outlook 2003 MAPI interface. These data can then be used for bottleneck analyses and for the planning of future infrastructures.

Microsoft Operations Manager

In Microsoft Operations Manager (MOM) [L15] Microsoft makes a powerful software product available, with which events and the system performance of various server groups can be monitored within the company network. MOM creates reports, trend analyses and offers proactive notifications in the event of alerts and errors on the basis of freely configurable filters and rules. These rules can be extended by additional management packs, which are available for various applications. Such an Exchange-specific management pack [L9] is also available for Microsoft Exchange Server.


Performance analysis

Windows and applications such as Exchange Server provide performance counters for all relevant components. These performance counters can be viewed, monitored and recorded through a standardized interface with the performance monitor that is available in all Windows versions – also known as system monitor in some Windows versions.

The performance monitor can be found under the name »Performance« in the start menu under »Administrative Tools«. It can also be started using the short command »perfmon«.

Performance counters are grouped in an object-specific manner; some also exist in several instances when an object is present more than once. For example, there is a performance counter »% Processor Time« for the object »Processor«, with one instance per CPU on a multi-processor system. Not all performance counters are provided by Windows itself; many applications, such as Exchange Server, come with their own performance counters, which integrate into the operating system and can be queried through the performance monitor. The performance data can either be monitored on screen or, better, written to a file and analyzed offline. Not only performance counters of the local system can be evaluated but also those of remote servers, which requires appropriate access rights. How to use the performance monitor is described in detail in the Windows help and in Microsoft articles on the Internet, and an explanation of each individual performance counter is available under »Explain«.

Please note that the performance monitor is itself a Windows application that needs computing time. Under extreme server overload the performance monitor may be unable to determine and display any performance data; in this case the relevant values are shown as 0 or blank.

To obtain an overview of the efficiency of an Exchange Mailbox Server it is sufficient to observe a number of performance counters from the categories:

Processor

Memory

Logical Disk

MSExchangeIS

SMTP Server

In detail there are the following performance counters:

\\<Exchange Server>\Processor(_Total)\% Processor Time

\\<Exchange Server>\System\Processor Queue Length

\\<Exchange Server>\Memory\Available MBytes

\\<Exchange Server>\Memory\Free System Page Table Entries

\\<Exchange Server>\Memory\Pages/sec

\\<Exchange Server>\LogicalDisk(<drive>:)\Avg. Disk Queue Length

\\<Exchange Server>\LogicalDisk(<drive>:)\Avg. Disk sec/Read

\\<Exchange Server>\LogicalDisk(<drive>:)\Avg. Disk sec/Write

\\<Exchange Server>\LogicalDisk(<drive>:)\Disk Reads/sec

\\<Exchange Server>\LogicalDisk(<drive>:)\Disk Writes/sec

\\<Exchange Server>\MSExchangeIS Mailbox(_Total)\Send Queue Size

\\<Exchange Server>\MSExchangeIS\RPC Averaged Latency

\\<Exchange Server>\MSExchangeIS\RPC Requests

\\<Exchange Server>\MSExchangeIS\VM Total Large Free Block Bytes

\\<Exchange Server>\SMTP Server(_Total)\Local Queue Length

\\<Exchange Server>\SMTP Server(_Total)\Remote Queue Length

Depending on the configuration, the Exchange Server to be monitored (<Exchange Server>) has to be selected; it is also possible to monitor several Exchange Servers at the same time. With the logical disk counters you also need to select the logical drive(s) (<drive>:) relevant to the Exchange databases and log files.
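As an illustration, the counter paths listed above can be assembled programmatically, for example when preparing a logging configuration for several servers and drives. The following Python sketch is purely illustrative; the function name and structure are not part of any Microsoft tooling, only the counter paths themselves come from the list above.

```python
# Sketch: assembling the Exchange-relevant counter paths for a given server
# and set of logical drives. The paths follow the standard Windows counter
# syntax \\server\object(instance)\counter.

def exchange_counter_paths(server, drives):
    """Return the Exchange-relevant counter paths for one mailbox server."""
    fixed = [
        r"\Processor(_Total)\% Processor Time",
        r"\System\Processor Queue Length",
        r"\Memory\Available MBytes",
        r"\Memory\Free System Page Table Entries",
        r"\Memory\Pages/sec",
        r"\MSExchangeIS Mailbox(_Total)\Send Queue Size",
        r"\MSExchangeIS\RPC Averaged Latency",
        r"\MSExchangeIS\RPC Requests",
        r"\MSExchangeIS\VM Total Large Free Block Bytes",
        r"\SMTP Server(_Total)\Local Queue Length",
        r"\SMTP Server(_Total)\Remote Queue Length",
    ]
    per_drive = [
        r"\LogicalDisk({0}:)\Avg. Disk Queue Length",
        r"\LogicalDisk({0}:)\Avg. Disk sec/Read",
        r"\LogicalDisk({0}:)\Avg. Disk sec/Write",
        r"\LogicalDisk({0}:)\Disk Reads/sec",
        r"\LogicalDisk({0}:)\Disk Writes/sec",
    ]
    paths = ["\\\\" + server + c for c in fixed]
    for d in drives:
        paths += ["\\\\" + server + c.format(d) for c in per_drive]
    return paths

# Hypothetical server name and database/log drives:
paths = exchange_counter_paths("EXCH01", ["E", "F"])
```

Such a list can then be fed into any logging mechanism that accepts counter paths.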


On their own, the figures and graphs provided by the performance monitor of course say nothing about the efficiency and health of an Exchange Server. For this purpose a number of rules and thresholds are necessary, which the administrator should be aware of for each of these performance counters.

Processor

Processor(_Total)\% Processor Time

To also be able to manage load peaks, the average CPU usage should not be greater than 80% for a longer period of time.

System\Processor Queue Length

Adequate processor performance exists when the counter Processor Queue Length has an average value that is smaller than the number of logical processors.

If a bottleneck is evident here, two solution options exist: increase the processor performance through additional or faster processors, or relocate services or mailboxes to other Exchange Servers.

Memory

Memory\Available MBytes

The free memory still available should always be greater than 50 MB, and in any case greater than 4 MB, as Windows otherwise drastically reduces the resident working sets of the processes.

If a bottleneck is evident here, the upgrading of the main memory should be taken into consideration (see also chapter Main memory).

Memory\Free System Page Table Entries

To ensure that the operating system remains runnable, the Free System Page Table Entries should not go below 3,500.

A check should be made here as to whether the boot.ini switch /USERVA=3030 has also been set along with the boot.ini switch /3GB (see also chapter Main memory).


Logical Disk

LogicalDisk(<drive>:)\Avg. Disk Queue Length

The average queue length of a logical drive should not exceed the number of hard disks from which the logical drive was formed. Longer disk queues occur together with higher disk response times and are an indication of an overloaded disk subsystem.

LogicalDisk(<drive>:)\Avg. Disk sec/Read

LogicalDisk(<drive>:)\Avg. Disk sec/Write

The read and write response times should be clearly below 20 ms, ideally around 5 ms for read and around 10 ms for write. Higher response times occur together with longer disk queues and are an indication of an overloaded disk subsystem.

A remedy can be found by adding further hard disks, by using faster hard disks or by using a more efficient disk subsystem. Activating the hard-disk read and write caches, or activating or increasing the cache of the disk subsystem or of the RAID controller, can contribute toward reducing the response times and thus also the disk queue length. Another way of relieving the load on the disk subsystem is to enlarge the Exchange cache by upgrading the server with additional main memory, which reduces the need to access the databases.

LogicalDisk(<drive>:)\Disk Reads/sec

LogicalDisk(<drive>:)\Disk Writes/sec

The number of I/Os that a logical drive can manage per second depends on the RAID level used and on the number of hard disks. The two performance counters show the number of read and write requests that are generated on the server side. Depending on the RAID level, the outcome for the hard disks is a different number of I/O requests, which is calculated for a RAID 10 according to the formula

IO10 = IOread + 2 × IOwrite

and for a RAID 5 according to the formula

IO5 = IOread + 4 × IOwrite.

It must also be taken into consideration that a hard disk, depending on its speed, can only satisfy a certain number of IO/s. Hence, for example, a RAID 10 consisting of four hard disks with 10 krpm yields a maximum rate of 4 × 120 = 480 IO/s.

If the I/O rate observed here is too high, there are various remedies. On the one hand, the number of hard disks used can be increased. If a RAID 5 is used, a RAID 10 can be considered instead, which achieves a better I/O rate with the same number of hard disks (see also chapter Hard disks). If the I/O bottleneck affects a logical drive on which an Exchange database lies, upgrading the main memory is another possible solution. Exchange Server 2003 can use up to 1.2 GB RAM as the database cache; increasing the database cache naturally lowers the disk I/O rate. The default size of the Exchange cache is about 500 MB, and with the switch /3GB set about 900 MB. If sufficient memory is available, a further 300 MB can be added. For this purpose, the value of msExchESEParamCacheSizeMax must be set to 307200 (database cache pages of 4 kB each, i.e. 1.2 GB) using the Active Directory Service Interfaces and the ADSI Edit tool.

More information about hard disks, controllers and RAID levels is to be found in the white paper Performance Report – Modular RAID [L5].

Disk speed    IO/s
 5400 rpm       62
 7200 rpm       75
10000 rpm      120
15000 rpm      170
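The disk I/O arithmetic above can be sketched in a few lines of Python. The function names are illustrative, the write penalties are the RAID 10 and RAID 5 formulas from the text, and the per-disk IO/s figures are the rough values from the table.

```python
# Sketch of the disk I/O arithmetic described in the text.

def backend_ios(reads_per_s, writes_per_s, raid_level):
    """Host-side I/O rate translated into physical disk I/Os.
    RAID 10: each write costs 2 disk I/Os (both mirror halves);
    RAID 5: each write costs 4 (read data, read parity, write data,
    write parity)."""
    write_penalty = {10: 2, 5: 4}[raid_level]
    return reads_per_s + write_penalty * writes_per_s

# Rough per-disk I/O capacity by rotational speed (from the table above).
IOS_PER_DISK = {5400: 62, 7200: 75, 10000: 120, 15000: 170}

def array_capacity(n_disks, rpm):
    """Maximum sustainable disk-level I/O rate of the array."""
    return n_disks * IOS_PER_DISK[rpm]

# Example from the text: a RAID 10 of four 10 krpm disks sustains at most
print(array_capacity(4, 10000))  # 480
```

Comparing `backend_ios(...)` with `array_capacity(...)` shows immediately whether a planned array can carry the observed read and write rates.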


Exchange Server

MSExchangeIS Mailbox(_Total)\Send Queue Size

The send queue contains the Exchange objects that are waiting to be forwarded. The destination can be either a local database or another mail server. This queue should be smaller than 500. A higher value is an indication of an overloaded system.

SMTP Server(_Total)\Local Queue Length

The local queue of the SMTP server contains the Exchange objects that have to be incorporated in the local databases. It should not be larger than 100. A larger queue, in connection with longer disk queues and higher disk response times, is an indication of an overloaded disk subsystem.

SMTP Server(_Total)\Remote Queue Length

The remote queue of the SMTP server contains the Exchange objects that have to be processed by remote mail servers. It should always be smaller than 1000. A larger queue is an indication of connection problems or problems with the remote mail servers.

MSExchangeIS\RPC Averaged Latency

This counter contains the average waiting time of outstanding requests. The value in milliseconds should be less than 50. A higher value is an indication of an overloaded system.

MSExchangeIS\RPC Requests

This counter contains the number of outstanding requests. The value should be less than 30. A higher value is an indication of an overloaded system.

MSExchangeIS\VM Total Large Free Block Bytes

This counter contains the size of the largest free block of virtual memory. The value is a measure of the fragmentation of the virtual address space. It should be more than 500 MB. For more information see the Microsoft Knowledge Base article KB325044 [L19].
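As a summary, the rules of thumb from this chapter can be expressed as simple threshold checks. The following Python sketch is purely illustrative: the abbreviated counter names and the sample values are hypothetical, while the thresholds themselves are the ones quoted in the text.

```python
# Sketch: the chapter's rules of thumb as one check per counter.

def check_exchange_health(sample, n_cpus, n_db_disks):
    """Return a list of warnings for counter values outside the
    recommended ranges quoted in the text."""
    warnings = []
    if sample["cpu_pct"] > 80:
        warnings.append("CPU usage above 80%")
    if sample["proc_queue"] >= n_cpus:
        warnings.append("processor queue >= number of logical processors")
    if sample["avail_mb"] < 50:
        warnings.append("less than 50 MB memory available")
    if sample["free_pte"] < 3500:
        warnings.append("free system page table entries below 3,500")
    if sample["disk_queue"] > n_db_disks:
        warnings.append("disk queue longer than number of disks")
    if sample["disk_read_ms"] > 20 or sample["disk_write_ms"] > 20:
        warnings.append("disk response time above 20 ms")
    if sample["rpc_latency_ms"] >= 50:
        warnings.append("RPC averaged latency 50 ms or more")
    if sample["rpc_requests"] >= 30:
        warnings.append("30 or more outstanding RPC requests")
    return warnings

# A hypothetical healthy sample: every value is within the quoted limits.
healthy = {"cpu_pct": 45, "proc_queue": 1, "avail_mb": 800,
           "free_pte": 9000, "disk_queue": 2, "disk_read_ms": 6,
           "disk_write_ms": 11, "rpc_latency_ms": 12, "rpc_requests": 4}
print(check_exchange_health(healthy, n_cpus=2, n_db_disks=4))  # []
```

In practice such checks would be fed with averages over a longer observation period, not with single samples.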

As already mentioned at the start of the chapter, the performance counters described here represent only a small selection of those available and relevant for Exchange. Far-reaching bottleneck analyses will undoubtedly call for additional performance counters. Describing all Exchange-relevant counters would go beyond the scope of this paper; reference is therefore made to the appropriate documentation from Microsoft [L10], in particular the white paper Troubleshooting Exchange Server 2003 Performance [L12].


PRIMERGY as Exchange Server 2003

Which PRIMERGY models are suitable for Exchange Server 2003? In principle, every PRIMERGY model has sufficient performance and adequate memory configuration options for a certain number of Exchange users. However, since the previous sections have shown that Exchange Server is a very disk-I/O-intensive application, the hard disk configuration of the PRIMERGY plays a significant role in its suitability for Exchange and in the maximum number of users.

With regard to disk I/O, the internal configuration options of the servers are mostly inadequate for Exchange. It therefore makes sense to extend the server with an external disk subsystem. Direct Attached Storage (see chapter Disk Subsystem) and Storage Area Networks (SAN) are fully suitable for Exchange Server 2003. Various technologies exist for connecting such disk subsystems: SCSI for direct connection, and Fibre Channel (FC) or Internet SCSI (iSCSI) in the SAN environment. A further option is Network Attached Storage (NAS) in conjunction with Windows Storage Server 2003.

Below the current PRIMERGY systems and their performance classes are explained with regard to Exchange Server 2003 and configuration options for performance classes of between 75 and 5,000 users are presented. Configurations with more than 5,000 users are possible in cluster solutions with PRIMERGY systems, but are not described here.

The configurations described here have all been tested for their functionality in the PRIMERGY Performance Lab and subjected to the medium load profile described in the chapter Exchange Measurement Methodology.

When dimensioning all configurations the following assumptions were taken as a basis:

Disk capacity

• For the operating system and program files we estimate 10 GB.

• For the Active Directory we estimate 2 GB; this is adequate for approximately 300,000 entries.

• We calculate the Exchange databases on the basis of an average mailbox size of 100 MB per user. Since mails are not deleted directly from a database, but only after a fixed latency period (default 30 days), we calculate for this an additional disk capacity of 100%.

• For the disk requirements of the log files we assume an average mail volume of 3 MB per user and day. The disk area for the log files must be sufficient to store all the data that occurs until the next online backup. A daily backup is advisable. For security reasons we plan log-file space that will be adequate for seven days.

• Based on the same mail volume of 3 MB per user and day we plan space for one day for SMTP and MTA queues.

• While the restore of a database is effected directly onto the volume of the database, extra disk space must be planned for the restore of log files. Its size is determined by the sum of the log files that accumulate between two full backups. For this we calculate the same disk capacity as for the log files of a storage group.
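The disk capacity assumptions above can be combined into a small calculation. The following Python sketch is illustrative only: the constants all come from the list above, while the function and parameter names are hypothetical.

```python
# Sketch of the disk capacity assumptions, per number of users.

def disk_capacity_gb(users, mailbox_mb=100, mail_mb_per_day=3, log_days=7):
    os_and_programs = 10               # GB, operating system and programs
    active_directory = 2               # GB, ~300,000 entries
    # Databases: average mailbox size plus 100% reserve for deleted
    # items kept for the 30-day latency period.
    databases = users * mailbox_mb * 2 / 1024
    # Log files: daily mail volume, kept for log_days until backup.
    logs = users * mail_mb_per_day * log_days / 1024
    # One day of SMTP and MTA queues.
    queues = users * mail_mb_per_day / 1024
    # Restore area for log files, sized like the log volume itself.
    restore = logs
    return (os_and_programs + active_directory + databases
            + logs + queues + restore)

print(round(disk_capacity_gb(200), 1))  # roughly 60 GB for 200 users
```

The result is only the raw net capacity; RAID overhead has to be added on top, depending on the RAID level chosen.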

Processor performance and main memory

• The processor performance was dimensioned in such a way that during a load simulation (without a virus scanner, spam filter or backup) the processor load did not exceed 30%. This leaves sufficient headroom for other services, such as a virus scanner or backup.

• Since most of the main memory is used as a cache for the Exchange databases, it was dimensioned in relation to the disk subsystem in such a way that the disk queue length is not greater than the number of hard disks used for the databases and the response time for 95% of all transactions is not more than 500 ms.


Backup

Since regular data backups are vital for stable Exchange operations, we have taken the following into consideration for the sample configurations:

• It must be possible to carry out the entire backup at maximum database size in a maximum of seven hours.

• The restore of an individual database may not take more than four hours. This requirement also influences the Exchange design and the structuring in storage groups and databases.
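The two backup requirements above translate directly into a minimum sustained throughput for the backup device. The following Python sketch is a simple illustrative calculation, not a statement about any particular backup hardware.

```python
# Sketch: minimum sustained throughput needed to move a given amount of
# data within a given time window.

def required_throughput_mb_s(data_gb, window_hours):
    """Minimum sustained rate in MB/s to move data_gb within window_hours."""
    return data_gb * 1024 / (window_hours * 3600)

# Example: a 40 GB mailbox database backed up within seven hours ...
backup_rate = required_throughput_mb_s(40, 7)    # ~1.6 MB/s
# ... and restored within four hours.
restore_rate = required_throughput_mb_s(40, 4)   # ~2.8 MB/s
```

Even modest tape drives meet such rates for a single database; the requirement becomes demanding only with the maximum database sizes of larger configurations.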

The sample configurations presented here are pure Exchange mailbox servers, so-called back-end servers. Particularly with larger Exchange installations there is a need for further servers for Active Directory, DNS and, if necessary, other services, such as Outlook Web Access (OWA), SharePoint Portal Server or others.

At any rate, the sizing of an Exchange server requires the analysis of the customer requirements and of the existing infrastructure as well as expert consulting.


PRIMERGY Econel 100

The mono processor system PRIMERGY Econel 100 offers configuration options with up to 8 GB of main memory and four internal SATA hard disks. The optional 1-channel SCSI controller is intended to control the backup media.

As the entry-level model of the PRIMERGY server family, the PRIMERGY Econel 100 is suited for small companies in the small business segment, where it provides data security for smaller budgets. It should be noted that when deployed in such a scenario the Active Directory must in the majority of cases also be accommodated on the system. However, in the small business environment the Active Directory is not critical as regards hard disk access, since there should not be many changes and read access is temporarily buffered by Exchange; the only prerequisite is somewhat more main memory. If used in the branch offices of larger companies, the data volume arising from replication of the Active Directory depends on the design of the Active Directory. This influences not only the necessary network bandwidth between branch office and head office, but also the hardware of the Active Directory server in the branch office.

If a Pentium 4 631, memory of 1 GB and four 80 GB hard disks are used, the configuration depicted below could in connection with Microsoft Small Business Server 2003 [L16] be an entry-level solution for up to 75 users. Since a regular data backup is essential for the smooth operation of an Exchange Server (see chapter Backup), a DDS Gen5 or VXA-2 tape drive is recommended.

In connection with the standard products of Windows Server 2003 and Exchange Server 2003 instead of Small Business Server 2003 there is no limitation to 75 users, and a PRIMERGY Econel 100 with the four internal SATA hard disks, a Pentium D 820 and main memory of 2 GB can manage up to 200 Exchange users. In this larger configuration a VXA-2 drive should be used for the backup, because with DDS Gen5 the tape capacity may not be adequate to perform a complete backup without having to change the tape.

Although hard disks with capacities of up to 500 GB are available for the PRIMERGY Econel 100, the four small-capacity hard disks should not be replaced with two large-capacity ones. Four hard disks are used deliberately so that the Exchange databases and log files can be stored on different physical hard disks, for performance and security reasons; see chapter Disk subsystem.

The picture opposite shows a small configuration for an Exchange server with Active Directory. The four disks are connected directly to the internal onboard SATA controller. The RAID 1 functionality can be realized either with the disk mirroring of Windows Server 2003 or with the onboard 4-port SATA controller with HW RAID. Provided the system is secured by a UPS, the disk caches should be activated to improve performance.

Two mirrored system drives are set up. One partition is created for each system drive. The first system drive is used for the operating system, Active Directory, log files and queues, the second one is solely for the Exchange databases (store).

Processors           1 × Pentium D 820, 2.8 GHz, 2 × 1 MB SLC or
                     1 × Pentium 4 631/641, 3.0/3.2 GHz, 2 MB SLC or
                     1 × Celeron D 346, 3.06 GHz, 256 kB SLC
Memory               max. 8 GB
Onboard RAID         SATA
PCI SCSI controller  1 × 1-channel
SCSI channels        1 (backup device)
Disks internal       max. 4, 80 – 500 GB, 7200 rpm
Disks external       none
Onboard LAN          2 × 10/100/1000 Mbit

[Diagram: two mirrored pairs: RAID 1 (OS, AD, logs, queues) and RAID 1 (store)]


Exchange Server 2003 in its Standard Edition supports two databases with a maximum database size of 75 GB each, with one database being required for the mailboxes and one for the public folders. Thus this configuration meets the assumptions specified at the start of the calculation: an average mailbox size of 100 MB per user and a 100% reserve for database objects that are not deleted immediately, but only after a standard latency of 30 days. Under these conditions the database for the mailboxes would in the worst case grow to 100 MB × 200 users × 2 = 40 GB.

In accordance with the initial assumption of a log file volume of 3 MB per user and day, a disk space requirement of approximately 4 GB must be calculated for a seven-day stock of log files for 200 users.


PRIMERGY Econel 200

The dual processor system PRIMERGY Econel 200 offers the computing performance of two Intel Xeon DP processors and configuration options with up to 4 GB of main memory as well as four internal SATA hard disks. The optional 1-channel SCSI controller is intended to control the backup drives.

Like the entry-level model PRIMERGY Econel 100, the PRIMERGY Econel 200 is ideally suited for the small business segment or for branch offices where more computing performance is needed than a mono processor system can provide.

It should be noted that when deployed in such a scenario the Active Directory must in the majority of cases also be accommodated on the system. However, in the small business environment the Active Directory is not critical as regards hard disk access, since there should not be many changes and read access is temporarily buffered by Exchange; the only prerequisite is somewhat more main memory. If used in the branch offices of larger companies, the data volume arising from replication of the Active Directory depends on the design of the Active Directory. This influences not only the necessary network bandwidth between branch office and head office, but also the hardware of the Active Directory server in the branch office.

The picture opposite shows a small configuration for an Exchange server with Active Directory. The four disks are connected directly to the internal onboard SATA controller. The RAID 1 functionality can be realized either with the disk mirroring of Windows Server 2003 or with the onboard 4-port SATA controller with HW RAID. Provided the system is secured by a UPS, the disk caches should be activated to improve performance. Although hard disks with capacities of up to 500 GB are available for the PRIMERGY Econel 200, the four small-capacity hard disks should not be replaced with two large-capacity ones. Four hard disks are used deliberately so that the Exchange databases and log files can be stored on different physical hard disks, for performance and security reasons; see chapter Disk subsystem.

Two mirrored system drives are set up. One partition is created for each system drive. The first system drive is used for the operating system, Active Directory, log files and queues, the second one is solely for the Exchange databases (store).

If you equip the PRIMERGY Econel 200 with one or two Xeon DP processors and 1 GB of memory, this configuration can, in connection with Microsoft Small Business Server 2003 [L16], be an entry-level configuration for up to 75 users that can additionally be loaded with other CPU- or memory-intensive tasks. Since a regular data backup is essential for the smooth operation of an Exchange Server (see chapter Backup), a VXA-2 or VXA-320 tape drive is recommended.

If you use the standard products Windows Server 2003 and Exchange Server 2003 instead of the Microsoft Small Business Server edition that is limited to 75 users, an appropriately configured PRIMERGY Econel 200 with two processors and 2 GB of main memory can manage up to 200 users, with the disk subsystem of at most four internal SATA hard disks proving to be the limiting factor. Alternatively, Network Attached Storage (NAS) based on Windows Storage Server 2003 can be used as the disk subsystem. As regards computing performance, the PRIMERGY Econel 200 would thus also be suitable as a cost-effective solution for dedicated Exchange Servers in branch offices or Application Service Provider (ASP) data centers. However, since the PRIMERGY Econel 200 cannot be integrated in a 19" rack, it is less suited for this field of application; here the rack servers of the PRIMERGY product line are recommended.


Processors           2 × Xeon DP 2.8/3.4 GHz, 2 MB SLC
Memory               max. 4 GB
Onboard RAID         SATA
PCI SCSI controller  1 × 1-channel
SCSI channels        1 (backup device)
Disks internal       max. 4, 80 – 500 GB, 7200 rpm
Disks external       none
Onboard LAN          2 × 10/100/1000 Mbit

[Diagram: two mirrored pairs: RAID 1 (OS, AD, logs, queues) and RAID 1 (store)]


PRIMERGY TX150 S4

The mono processor system PRIMERGY TX150 S4 offers configuration options with up to 8 GB RAM and four internal SCSI or SATA hard disks. Optionally, a further 28 hard disks are possible with two externally connected PRIMERGY SX30 storage extensions. In addition to the classic floor-stand housing, the PRIMERGY TX150 S4 is also available in a rack version.

As a floor-stand server the PRIMERGY TX150 S4 is suited for small companies in the small business segment or as a server for small branch offices. It should be noted that when deployed in such a scenario the Active Directory must in the majority of cases also be accommodated on the system. However, in the small business environment the Active Directory is not critical as regards hard disk access, since there should not be many changes and read access is temporarily buffered by Exchange; the only prerequisite is somewhat more main memory. If used in the branch offices of larger companies, the data volume arising from replication of the Active Directory depends on the design of the Active Directory. This influences not only the necessary network bandwidth between branch office and head office, but also the hardware of the Active Directory server in the branch office.

If a Pentium 4 631 and 1 GB of memory are used, the PRIMERGY TX150 S4 with four internal 80 GB SATA or four 73 GB SCSI hard disks can, in connection with Microsoft Small Business Server 2003 [L16], be an entry-level configuration for up to 75 users. The diagram below shows a small configuration for an Exchange server with Active Directory. The four disks can be connected directly to the internal onboard SCSI or SATA controller. The RAID 1 functionality can be realized either with the disk mirroring of Windows Server 2003, with the zero-channel RAID option of the 1-channel onboard Ultra320 SCSI controller »LSI 1020A«, or with the 4-port SATA controller »Promise FastTrak S150-TX4« with HW RAID. Provided the system is secured by a UPS, the controller and disk caches should be activated.

Two mirrored system drives are set up. One partition is created for each system drive. The first system drive is used for the operating system, Active Directory, log files and queues, the second one is solely for the Exchange databases (store).

Under no circumstances should the four hard disks of the PRIMERGY TX150 S4 be combined into a RAID 5 array; nor should a disk be saved by building a RAID 5 array from only three hard disks; nor should the two RAID 1 arrays of four hard disks be replaced by a single RAID 1 array of two large-capacity disks. Any of these choices would be fatal for system performance: RAID 5 is considerably slower than RAID 1, and the operating system and all user data with their different access patterns would end up on one volume. The performance would no longer be acceptable.

With a maximum of 75 users under Small Business Server 2003 the Exchange database for the mailboxes will – given the assumptions made at the beginning of this chapter of an average mailbox size of 100 MB per user and a 100% reserve for deleted database objects – grow to about 100 MB × 75 users × 2 = 15 GB. A DDS Gen5 drive is recommended as backup drive.
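This sizing rule recurs throughout the chapter, so it can be written as a small helper. This is a sketch of the guide's arithmetic; the function name and the decimal-gigabyte rounding (1 GB = 1000 MB, matching the figures in the text) are my own choices:

```python
def store_size_gb(users, mailbox_mb=100, reserve_factor=2):
    """Exchange store size: mailbox quota times a 100% reserve for
    deleted items not yet purged from the database (decimal GB)."""
    return users * mailbox_mb * reserve_factor / 1000

print(store_size_gb(75))   # 15.0 GB for the Small Business Server case
print(store_size_gb(200))  # 40.0 GB for the 200-user upgrade below
```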

Figure: TX150 S4 disk layout – RAID 1 (OS, AD, logs, queues); RAID 1 (store)

PRIMERGY TX150 S4 configuration overview:
Processors: 1 × Pentium D 820/930/940/950 (2.8/3.0/3.2/3.4 GHz, 2 MB SLC) or 1 × Pentium 4 631/651 (3.0/3.4 GHz, 2 MB SLC) or 1 × Celeron D 346 (3.06 GHz, 256 KB SLC)
Memory: max. 8 GB
Onboard RAID: SCSI or SATA
PCI SCSI controller: 1 × 1-channel (backup device), 1 × 2-channel RAID
Disks internal: max. 4
Disks external: max. 28
Onboard LAN: 1 × 10/100/1000 Mbit


White Paper Sizing Guide Exchange Server 2003 Version: 4.2, July 2006

© Fujitsu Technology Solutions, 2009 Page 50 (69)

If you equip the PRIMERGY TX150 S4 with a more powerful processor, e.g. a Pentium D 820, and 2 GB of main memory, it is quite possible to manage up to 200 Exchange users. However, Small Business Server 2003, which is limited to 75 users, is then no longer sufficient; Windows Server 2003 Standard Edition and Exchange Server 2003 Standard Edition should be used instead. With a maximum mailbox size of 100 MB, the database for the mailboxes of 200 users will grow to approximately 100 MB × 200 users × 2 = 40 GB. A VXA-2 should therefore be used as backup medium, as otherwise more than one tape may be required for a full backup.

A server in the small and medium-sized enterprise (SME) environment or in a branch office is frequently also used as a file and print server, because it is often the only server on site. For simple print services a somewhat more powerful processor, e.g. a Pentium D 940, and more memory are sufficient. For file-server services that go beyond occasional accesses, however, the internal hard disks are inadequate. In this case at least two further hard disks should be added as a redundant RAID 1 array, which makes an extension by a PRIMERGY SX30 with up to 14 additional hard disks appropriate. The PRIMERGY SX30 is available as a 1- or 2-channel version; which version is preferable depends on the field of application. In the SME environment, where further RAID arrays are built in the PRIMERGY SX30 in addition to the arrays for the Exchange databases, the 2-channel variant with an appropriate RAID controller is recommended.

With more disks and more memory this configuration can, as a dedicated Exchange server, serve up to 700 users. In this case a 1-channel PRIMERGY SX30 is advisable, supplemented by a second PRIMERGY SX30 if additional data volume is required for the Exchange databases. The picture above shows a sample configuration with a 2-channel RAID controller and one PRIMERGY SX30.

The PRIMERGY TX150 and PRIMERGY SX30 are alternatively also offered as rack solutions. The operational area is then less that of an office server and more that of a dedicated server in a data center or at an application service provider (ASP).

Figure: TX150 S4 disk layout with PRIMERGY SX30 – RAID 1 (OS); RAID 1 (AD); RAID 1 (queues); RAID 1+0 with 6 disks (store); RAID 1+0 or RAID 5 with 4 disks (file sharing, etc.); RAID 1 (logs)


PRIMERGY TX200 S3

The PRIMERGY TX200 S3 is the »big brother« of the PRIMERGY TX150 S4. Likewise designed as an all-round tower server, the PRIMERGY TX200 S3 offers the computing performance of two Intel Xeon DP dual-core processors, up to nine internal hot-plug hard disks and a RAID-capable onboard SCSI, SATA or SAS controller. With two additional 2-channel PCI RAID controllers, up to four PRIMERGY SX30 with up to 56 hard disks can be connected externally.

Like the PRIMERGY TX150 S4, the PRIMERGY TX200 S3 is also available in a rack version. For rack use, however, the rack-optimized systems PRIMERGY RX200 S3 and PRIMERGY RX300 S3, which offer the same computing performance, should be of greater interest.

As a floor-stand version the PRIMERGY TX200 S3 is ideally suited both to the SME segment and to branch offices where more computing performance is needed than a mono-processor system can provide. In such environments there are – as already discussed for the PRIMERGY TX150 S4 – usually additional tasks for a server. Since at most one server is installed in these environments, it must perform other services in addition to Exchange, such as Active Directory, DNS, and file and print services. The following picture shows a sample configuration for such tasks.

The six internal hard disks are mirrored in pairs (RAID 1). The first system drive is used for the operating system, the second one for the Active Directory and the third one for queues and restore. Two of the external hard disks are planned as a RAID 1 for log files. Six hard disks in a RAID 10 array house the Exchange databases (store) and the data area for file sharing. Provided the system is secured by UPS, the controller and disk caches should be activated for performance reasons.

With this disk subsystem, two Xeon DP 51xx processors and 3 GB of main memory it is quite possible to manage up to 700 users. Depending on the disk capacity requirements, an upgrade with a second PRIMERGY SX30 is also possible. As a dedicated Exchange server with a good CPU and memory configuration and fast 15 krpm hard disks, the system can manage up to 1,000 users of a branch office or a medium-sized business.

Figure: TX200 S3 disk layout – RAID 1 (OS); RAID 1 (AD); RAID 1+0 with 6 disks (store); RAID 1+0 with 6 disks (file sharing, etc.); RAID 1 (logs); RAID 1 (queues)

PRIMERGY TX200 S3 configuration overview:
Processors: 2 × Xeon DP 5050/5060 (3.0/3.2 GHz, 2 × 2 MB SLC) or 2 × Xeon DP 5110/5120/5130/5140 (1.6/1.8/2.0/2.3 GHz, 4 MB SLC)
Memory: max. 16 GB
Onboard RAID: 2-channel SCSI, 2-port SATA, or 8-port SAS with 0-channel RAID controller
PCI RAID controller: 2 × 2-channel SCSI
Disks internal: 6 × SAS/SATA (optional 9 × SCSI)
Disks external: max. 56
Onboard LAN: 1 × 10/100/1000 Mbit


For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see chapters Main memory and Operating system). The 4 GB virtual address space is then divided in favor of the application at a ratio of 3:1 – 3 GB for the application and 1 GB for the kernel – instead of the standard 2:2 split. Microsoft recommends the /3GB option from a physical memory configuration of 1 GB upwards. For more details see Microsoft Knowledge Base article Q823440.
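A BOOT.INI entry with these switches might look as follows. This is an illustrative fragment: the ARC path and the /USERVA value of 3030, commonly recommended for Exchange Server 2003, are assumptions, not values taken from this guide:

```ini
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Standard" /fastdetect /3GB /USERVA=3030
```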

If we take the general conditions specified at the beginning of this chapter as a basis – a mailbox of at most 100 MB and a factor of two for deleted mails that have not yet been removed from the database – then e.g. 700 users need 700 users × 100 MB × 2 = 140 GB and 1,200 users need 1200 users × 100 MB × 2 = 240 GB of disk space for the Exchange databases. However, an individual database should not be larger than 100 GB, as otherwise the restore time of a database can exceed four hours; to enable fast restores, several small databases are preferable. As already explained in the chapters Transaction principle and Backup, Exchange Server supports up to 20 databases, organized in so-called storage groups (SG) of at most five databases each. Whereas for versions prior to Exchange Server 2003 the rule was to fill an individual storage group with databases before creating a further one, in order to limit the administration overhead, for Exchange Server 2003 it is recommended to open an additional storage group as soon as more than two databases are needed (see Microsoft Knowledge Base article Q890699). Unless organizational reasons suggest a different distribution, one storage group with two databases would be used for 700 users and two storage groups with two databases each for 1,200 users.
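The database and storage-group rule of thumb above can be sketched in a few lines. The helper name and the even distribution of two databases per storage group are my own simplifications of the guide's recommendation:

```python
import math

def storage_layout(users, mailbox_mb=100, reserve_factor=2, max_db_gb=100):
    """Size the store, split it into databases of at most max_db_gb
    (to keep restore times acceptable), and group them two per
    storage group as recommended for Exchange Server 2003."""
    total_gb = users * mailbox_mb * reserve_factor / 1000
    groups = math.ceil(math.ceil(total_gb / max_db_gb) / 2)
    databases = 2 * groups  # distribute evenly, two databases per SG
    return total_gb, databases, groups

print(storage_layout(700))   # (140.0, 2, 1): one SG with two databases
print(storage_layout(1200))  # (240.0, 4, 2): two SGs with two databases each
```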

In accordance with the assumptions made initially of a log file volume of 3 MB per user and day, a disk space requirement of 15 GB over 7 days results for 700 users and about 26 GB for 1,200 users. A redundant RAID 1 of two 36-GB hard disks is therefore sufficient for this purpose.
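The log sizing follows the same pattern. A sketch, where the seven-day retention between full backups is the guide's assumption and the function name is mine:

```python
import math

def log_space_gb(users, mb_per_user_day=3, days=7):
    """Transaction log volume accumulated between full backups,
    which truncate the logs (decimal GB, rounded up)."""
    return math.ceil(users * mb_per_user_day * days / 1000)

print(log_space_gb(700))   # 15 GB
print(log_space_gb(1200))  # 26 GB
```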

On account of the database size, a VXA-320 or LTO2 drive with a tape capacity of 160 or 200 GB respectively is suitable as backup medium for 700 users. An LTO3 drive with a tape capacity of 400 GB should be used for 1,200 users, as otherwise more than one tape may be required for a full backup.


PRIMERGY RX100 S3

The PRIMERGY RX100 S3 is a rack-optimized server of only one height unit (1U).

The PRIMERGY RX100 S3 was especially designed for use in server farms, appliances, front-end solutions and hard-disk-less solutions for Internet and application service providers (ASP). In other words, for applications in which it is important to use many servers in a very confined space.

The PRIMERGY RX100 S3 offers two internal SATA hard disks with an integrated RAID 1 controller. Despite its compact design, one full-height PCI slot and a low-profile PCI slot, each of half length, are available.

The computing performance is comparable with that of a PRIMERGY TX150 S4. However, due to the rack optimization the configuration options are somewhat restricted. The internal hard disks meet the requirements of the operating system, but an external disk subsystem has to be connected for the data of Exchange. Unfortunately, neither SCSI-RAID controllers nor Fibre-Channel controllers can be used. Thus the PRIMERGY RX100 S3 is - in connection with a Network Attached Storage (NAS) together with Windows Storage Server 2003 or with an iSCSI-compliant storage system - only suited for branch offices or ASP data centers that provide their customers with cost-effective dedicated Exchange Servers on this basis.

In connection with an adequately equipped LAN-based disk subsystem the PRIMERGY RX100 S3 can manage up to 250 users. To connect external backup drives either a PRIMERGY SX10 is used in conjunction with 5¼“ devices, or backup devices in 19“ format are used.

PRIMERGY RX100 S3 configuration overview:
Processors: 1 × Celeron D 346 (3.06 GHz, 256 KB SLC) or 1 × Pentium 4 631 (3.0 GHz, 2 MB SLC) or 1 × Pentium D 820 (2.8 GHz, 2 × 1 MB SLC) or 1 × Pentium D 930/940/950 (3.0/3.2/3.4 GHz, 2 × 2 MB SLC)
Memory: max. 8 GB
Onboard RAID: SATA
Disks internal: max. 2
Disks external: none
Onboard LAN: 2 × 10/100/1000 Mbit


PRIMERGY RX200 S3

Like the PRIMERGY RX100 S3, the PRIMERGY RX200 S3 is a rack-optimized computing node of one height unit (1U). In contrast to the PRIMERGY RX100 S3, however, the PRIMERGY RX200 S3 offers – despite its small height – two dual-core processors and four SAS hard disks. Moreover, the PRIMERGY RX200 S3 already has an onboard RAID-compliant SAS controller for the internal hard disks and two Gbit LAN interfaces. For further scaling a 2-channel RAID controller can be used, so that the system can be ideally combined with one or two PRIMERGY SX30. As an alternative to a SCSI-based disk subsystem, a SAN can be connected through up to two 1-channel Fibre-Channel controllers.

In a configuration with two Xeon DP 51xx processors and 3 GB of main memory the PRIMERGY RX200 S3 can, with an appropriately configured PRIMERGY SX30, efficiently manage up to 1,200 mail users.

For this purpose the four internal hard disks are mirrored in pairs (RAID 1) through the onboard SAS controller. The first logical drive is used for the operating system and the Active Directory, the second for queues and restore. The log files are stored on two external hard disks configured as a RAID 1. The remaining eight hard disks form a RAID 10 and house the Exchange databases (store). Do not be misled into believing that the number of hard disks can be reduced by using RAID 5 instead of RAID 10 or by using large-capacity hard disks: the number of hard disks is determined by the anticipated I/O accesses per second. The efficiency of the individual hard disks and RAID levels was discussed in detail in the chapter Hard disks. For approximately 1,200 users this scenario means an I/O rate of 1200 users × 0.6 IO/user/s = 720 IO/s for the Exchange databases. On the basis of 15 krpm hard disks, six hard disks are needed to satisfy this I/O rate; with 10 krpm hard disks, eight are required.
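The spindle-count arithmetic can be sketched as follows. The per-disk rates of roughly 160 IO/s (15 krpm) and 120 IO/s (10 krpm) are assumptions inferred to reproduce the guide's six- and eight-disk figures, not quoted specifications; the typical Exchange mix of 2/3 reads and 1/3 writes is taken from the guide:

```python
import math

def store_disks(users, disk_ios, io_per_user=0.6):
    """Disks needed for the store on RAID 10: each front-end write
    costs two backend I/Os (one per mirror half), reads cost one."""
    front = users * io_per_user               # 1200 * 0.6 = 720 IO/s
    reads, writes = front * 2 // 3, front // 3
    backend = reads + 2 * writes              # 480 + 2 * 240 = 960 IO/s
    return math.ceil(backend / disk_ios)

print(store_disks(1200, disk_ios=160))  # 6 disks at 15 krpm
print(store_disks(1200, disk_ios=120))  # 8 disks at 10 krpm
```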

Provided the system is secured by a UPS, the controller and disk caches should be activated to improve performance.

As an alternative to a SCSI-based disk subsystem, a Fibre-Channel-based SAN can also be used. Regardless of the disk subsystem used, the logical distribution of the hard disks should be analogous to the SCSI solution.

Figure: RX200 S3 disk layout – RAID 1 (OS, AD); RAID 1 (queues); RAID 1 (logs); RAID 1+0 with 8 disks (store)

PRIMERGY RX200 S3 configuration overview:
Processors: 2 × Xeon DP 5050/5060/5080 (3.0/3.2/3.73 GHz, 2 × 2 MB SLC) or 2 × Xeon DP 5110/5120/5130/5140/5150/5160 (1.66/1.86/2.0/2.33/2.66/3.0 GHz, 4 MB SLC)
Memory: max. 32 GB
Onboard RAID: SAS / SATA
PCI SCSI controller: 2 × 1-channel, 1 × 2-channel RAID
Disks internal: 2 × SAS or SATA 3½” or 4 × SAS or SATA 2½”
Disks DAS external: max. 28
Fibre-Channel: max. 2 channels
Onboard LAN: 2 × 10/100/1000 Mbit


To connect external backup drives either a PRIMERGY SX10 is used in conjunction with 5¼“ devices, or backup devices in 19“ format are used. On account of the data volume with 1,200 users either an LTO3 drive or a tape library that automatically changes the tapes should be used.

For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see chapters Main memory and Operating system). The 4 GB virtual address space is then divided in favor of the application at a ratio of 3:1 – 3 GB for the application and 1 GB for the kernel – instead of the standard 2:2 split. Microsoft recommends the /3GB option from a physical memory configuration of 1 GB upwards. For more details see Microsoft Knowledge Base article Q823440.

For 1,200 planned Exchange users and under the general conditions specified at the beginning of this chapter, a space requirement of up to 1200 users × 100 MB × 2 = 240 GB results for the Exchange databases. However, an individual database should not be larger than 100 GB, as otherwise the restore time of a database can exceed four hours; to enable fast restores, several small databases are preferable. As already explained in the chapters Transaction principle and Backup, Exchange Server supports up to 20 databases, organized in so-called storage groups (SG) of at most five databases each. Whereas for versions prior to Exchange Server 2003 the rule was to fill an individual storage group with databases before creating a further one, in order to limit the administration overhead, for Exchange Server 2003 it is recommended to open an additional storage group as soon as more than two databases are needed (see Microsoft Knowledge Base article Q890699). Unless organizational reasons suggest a different distribution, two storage groups with two databases each would be used for 1,200 users.

A disk space requirement of 26 GB should be planned for the log files; this follows from the initial assumption of a log file volume of 3 MB per user and day over 7 days.


PRIMERGY RX220

Like the PRIMERGY RX200 S3, the PRIMERGY RX220 is a rack-optimized computing node with two processors and one height unit (1U). In contrast to the PRIMERGY RX200 S3, which is based on the Intel Xeon architecture, the PRIMERGY RX220 is based on the AMD Opteron architecture. The PRIMERGY RX220 offers an onboard RAID-compliant SATA controller for the two internal hot-plug SATA hard disks and two Gbit LAN interfaces. Two PCI slots are available for further scaling: a 2-channel RAID controller can be used, so that the system can be ideally combined with one or two PRIMERGY SX30, or alternatively the PRIMERGY RX220 can be connected to a SAN through a Fibre-Channel controller.

In a configuration with two AMD Opteron 280 processors and 3 GB of main memory the PRIMERGY RX220 can efficiently manage up to 1,200 mail users with an appropriately configured PRIMERGY SX30 or a SAN disk subsystem of a corresponding performance level.

The two internal hard disks are mirrored (RAID 1) through the onboard SATA controller and used for the operating system and the Active Directory. Two of the external hard disks are intended as a RAID 1 for queues and restore. The log files are stored on two external hard disks configured as a RAID 1. The remaining eight hard disks form a RAID 10 and house the Exchange databases (store). Do not be misled into believing that the number of hard disks can be reduced by using RAID 5 instead of RAID 10 or by using large-capacity hard disks: the number of hard disks is determined by the anticipated I/O accesses per second. The efficiency of the individual hard disks and RAID levels was discussed in detail in the chapter Hard disks. For approximately 1,200 users this scenario means an I/O rate of 1200 users × 0.6 IO/user/s = 720 IO/s for the Exchange databases. On the basis of 15 krpm hard disks, six hard disks are needed to satisfy this I/O rate; with 10 krpm hard disks, eight are required.

Provided the system is secured by UPS, the controller and disk caches should be activated to improve performance.

As an alternative to a SCSI-based disk subsystem, a Fibre-Channel-based SAN can also be used. Regardless of the disk subsystem used, the logical distribution of the hard disks should be analogous to the SCSI solution.

Figure: RX220 disk layout – RAID 1 (OS, AD); RAID 1 (queues); RAID 1 (logs); RAID 1+0 with 8 disks (store)

PRIMERGY RX220 configuration overview:
Processors: 2 × Opteron DP 246–256 (2.0–3.0 GHz, 1 MB SLC) or 2 × Opteron DP 265–280 (1.8–2.4 GHz, 2 × 1 MB SLC)
Memory: max. 28 GB
Onboard RAID: SATA
PCI SCSI controller: 1 × 1-channel, 1 × 2-channel RAID
PCI FC controller: 1 × 2-channel
Disks internal: 2 × SATA 3½”
Disks DAS external: max. 28 SCSI
Onboard LAN: 2 × 10/100/1000 Mbit


To connect external backup drives either a PRIMERGY SX10 is used in conjunction with 5¼“ devices, or backup devices in 19“ format are used. On account of the data volume with 1,200 users either an LTO3 drive or a tape library that automatically changes the tapes should be used.

For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see chapters Main memory and Operating system). The 4 GB virtual address space is then divided in favor of the application at a ratio of 3:1 – 3 GB for the application and 1 GB for the kernel – instead of the standard 2:2 split. Microsoft recommends the /3GB option from a physical memory configuration of 1 GB upwards. For more details see Microsoft Knowledge Base article Q823440.

For 1,200 planned Exchange users and under the general conditions specified at the beginning of this chapter, a space requirement of up to 1200 users × 100 MB × 2 = 240 GB results for the Exchange databases. However, an individual database should not be larger than 100 GB, as otherwise the restore time of a database can exceed four hours; to enable fast restores, several small databases are preferable. As already explained in the chapters Transaction principle and Backup, Exchange Server supports up to 20 databases, organized in so-called storage groups (SG) of at most five databases each. Whereas for versions prior to Exchange Server 2003 the rule was to fill an individual storage group with databases before creating a further one, in order to limit the administration overhead, for Exchange Server 2003 it is recommended to open an additional storage group as soon as more than two databases are needed (see Microsoft Knowledge Base article Q890699). Unless organizational reasons suggest a different distribution, two storage groups with two databases each would be used for 1,200 users.

In accordance with the initial assumption of a log data volume of 3 MB per user and day, a disk space requirement of 26 GB results for 7 days and 1,200 users.


PRIMERGY RX300 S3 / TX300 S3

Like the PRIMERGY RX200 S3, the PRIMERGY RX300 S3 is a rack-optimized dual-core, dual processor system. However, with its height of two units (2U) it provides space for a larger number of hard disks and PCI controllers, thus enabling a greater scaling with external disk subsystems.

The PRIMERGY TX300 S3 offers computing performance comparable to that of a PRIMERGY RX300 S3, but due to its larger housing it can also accommodate 5¼” drives, e.g. backup devices.

Both the PRIMERGY RX300 S3 and the PRIMERGY TX300 S3 can be equipped with two 2-channel RAID controllers, enabling the connection of up to four PRIMERGY SX30 with a total of 56 hard disks. It is also possible to connect a SAN as an external disk subsystem through up to six Fibre-Channel channels.

With two Xeon DP 51xx processors, 4 GB of main memory and a well-sized disk subsystem with fast 15 krpm hard disks, the PRIMERGY RX300 S3 or PRIMERGY TX300 S3 can as a dedicated Exchange server manage up to 3,000 Exchange users. The picture on the next page shows a solution with a SCSI-based disk subsystem with one 2-channel RAID controller and two PRIMERGY SX30. A SAN can of course also be used instead of the SCSI-based disk subsystem; this makes no difference with regard to the number of hard disks and RAID arrays.

With 3,000 Exchange users an I/O rate of 3000 users × 0.6 IO/user/s = 1800 IO/s is to be expected for the user profile described in the chapter Exchange measurement methodology. Typically, Exchange has 2/3 read and 1/3 write accesses, which for a RAID 10 results in 2,400 IO/s that have to be processed by the hard disks. Providing this I/O rate calls for 16 hard disks with 15 krpm. RAID 5 should be dispensed with because, as already discussed in the chapter Hard disks, each hard disk and each RAID level only has a certain I/O performance; a RAID 5 would require 22 hard disks, compared with only 16 for a RAID 10.

RX300 S3 configuration overview:
Processors: 2 × Xeon DP 5050/5060/5080 (3.0/3.2/3.73 GHz, 2 × 2 MB SLC) or 2 × Xeon DP 5110/5120/5130/5140/5150/5160 (1.66/1.86/2.0/2.33/2.66/3.0 GHz, 4 MB SLC)
Memory: max. 32 GB
Onboard RAID: SAS, optional “MegaRAID ROMB” kit
PCI RAID controller: 2 × 2-channel SCSI
PCI FC controller: 3 × 2-channel
Disks internal: 6 × 3½" SAS/SATA
Disks DAS external: max. 56 SCSI
Onboard LAN: 2 × 10/100/1000 Mbit

TX300 S3 configuration overview:
Processors: 2 × Xeon DP 5110/5120/5130/5140/5150/5160 (1.66/1.86/2.0/2.33/2.66/3.0 GHz, 4 MB SLC)
Memory: max. 32 GB
Onboard RAID: SAS, optional “MegaRAID ROMB” kit
PCI RAID controller: 1 × 8-port SAS or 2 × 2-channel SCSI
PCI FC controller: 3 × 2-channel
Disks internal: 6 × 3½" SAS/SATA, optional 8 × 3½"
Disks external: max. 56 SCSI
Onboard LAN: 2 × 10/100/1000 Mbit

Under the general conditions defined at the beginning of the chapter, the space requirement for the mailbox contents of 3,000 Exchange users is calculated as 3000 users × 100 MB × 2 = 600 GB. With 16 hard disks in one or more RAID 10 arrays, disks with a capacity of at least 75 GB are required. 73-GB hard disks are thus somewhat too small for 3,000 users; in other words, hard disks with a capacity of 146 GB and 15 krpm should be used for the databases (store). At 3 MB per user and day over seven days, a log data volume of 63 GB results for 3,000 users. For performance and data security reasons, however, a dedicated drive should be used per storage group for the log files. Since four storage groups are advisable, this results in 63 GB / 4 ≈ 16 GB per group, so hard disks with the smallest available capacity of 36 GB are sufficient.
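The capacity check for the 16-spindle RAID 10 and the per-storage-group log volume can be verified in a few lines; the names are mine, the arithmetic is the guide's:

```python
import math

def min_disk_capacity_gb(store_gb, raid10_disks):
    """RAID 10 halves raw capacity: usable space comes from
    raid10_disks / 2 mirrored pairs."""
    return store_gb / (raid10_disks / 2)

store_gb = 3000 * 100 * 2 / 1000            # 600 GB for 3,000 users
print(min_disk_capacity_gb(store_gb, 16))   # 75.0 GB -> 73-GB disks are too small

logs_gb = 3000 * 3 * 7 / 1000               # 63 GB of logs per week
print(math.ceil(logs_gb / 4))               # 16 GB per storage group
```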

An individual database should not be larger than 100 GB, as otherwise the restore time of a database can exceed four hours; to enable fast restores, several small databases are preferable. It is therefore advisable to use four storage groups with two databases each, unless there are objections for other organizational reasons.

For an Exchange Server of this size it is recommended to use dedicated hard disks per storage group for the database log files. This results in the following distribution of the disk subsystem:

The six internal hard disks are run on the onboard controller of the PRIMERGY RX300 S3 and form three RAID 1 pairs: one for the operating system, one for the queues and a third for restore. The PRIMERGY SX30 units contain the Exchange databases and logs: in each PRIMERGY SX30, two RAID 1 arrays of two hard disks each are built for the log files and two RAID 10 arrays of fast 15 krpm hard disks for the databases.

In an installation of this size the entire system or the data center should be secured by UPS so that the caches of the controllers and the hard disks can be activated without a loss of data occurring in case of a possible power failure.

For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see chapters Main memory and Operating system). The 4 GB virtual address space is then divided in favor of the application at a ratio of 3:1 – 3 GB for the application and 1 GB for the kernel – instead of the standard 2:2 split. Microsoft recommends the /3GB option from a physical memory configuration of 1 GB upwards. For more details see Microsoft Knowledge Base article Q823440.

For an Exchange Server of this magnitude the Active Directory should be placed on a dedicated server.

Figure: RX300 S3 / TX300 S3 disk layout – RAID 1 (OS); RAID 1 (queues); RAID 1 (restore); per storage group SG 1–SG 4: RAID 10 with 4 disks (store) and RAID 1 (logs)


PRIMERGY BX600

The PRIMERGY BX600 is a scalable 19" rack server system. With only seven height units (7U) it provides space for up to ten server blades, as well as the entire infrastructure, such as gigabit LAN, Fibre-Channel and KVM (keyboard-video-mouse) switch and remote management.

Alternatively, the PRIMERGY BX600 rack server system can be equipped with the PRIMERGY BX630 blade with AMD Opteron processors that can be scaled from two to eight processors or with the PRIMERGY BX620 S3 blade with Intel Xeon processors.

PRIMERGY BX620 S3

The PRIMERGY BX620 S3 is a server blade with two Intel Xeon processors. Its computing performance is comparable to that of a PRIMERGY RX300 S3. Each PRIMERGY BX620 S3 server blade offers an onboard SAS/SATA controller with RAID functionality, two hot-plug 2½" SAS/SATA hard disks and two gigabit-LAN interfaces. Optionally, the PRIMERGY BX620 S3 can be equipped with a 2-channel Fibre-Channel interface.

PRIMERGY BX630

The PRIMERGY BX630 is a server blade with two AMD Opteron processors. Its computing performance is comparable to that of a PRIMERGY RX220. The PRIMERGY BX630 offers two hot-plug 3½" hard disks and can be equipped with either an SAS or SATA controller. Two gigabit-LAN interfaces are available onboard and it can be optionally equipped with a 2-channel Fibre-Channel interface.

However, the special feature of the PRIMERGY BX630 server blade is its scalability. Thus it is possible to couple two PRIMERGY BX630 to form a 4-processor system and four PRIMERGY BX630 to form an 8-processor system.

PRIMERGY BX620 S3 configuration overview:
Processors: 2 × Xeon DP 5050/5060/5080/5063 (3.0/3.2/3.73/3.2 GHz, 2 × 2 MB SLC) or 2 × Xeon DP 5110/5120/5130/5140 (1.6/1.8/2.0/2.3 GHz, 4 MB SLC)
Memory: max. 32 GB
Onboard LAN: 2 × 10/100/1000 Mbit
Onboard RAID: SAS/SATA
Disks internal: 2 × SAS/SATA 2½”
Fibre-Channel: 2 × 2 Gbit
Disks external: depending on SAN disk subsystem

PRIMERGY BX630 configuration overview:
Processors: 2 × Opteron DP 246–256 (2.0–3.0 GHz, 1 MB SLC) or 2 × Opteron DP 265–285 (1.8–2.6 GHz, 2 × 1 MB SLC) or 2 × Opteron MP 865–885 (1.8–2.6 GHz, 2 × 1 MB SLC)
Memory: max. 16 GB
Onboard LAN: 2 × 10/100/1000 Mbit
Onboard RAID: SAS/SATA
Disks internal: 2 × SAS/SATA 3½”
Fibre-Channel: 2 × 2 Gbit
Disks external: depending on SAN disk subsystem


The computing performance of a PRIMERGY BX630 equipped with two AMD Opteron 2xx dual-core processors and of a PRIMERGY BX620 S3 equipped with two Xeon 51xx processors is very similar, and both server blades can as dedicated Exchange servers with a well-sized SAN disk subsystem manage up to 3,000 Exchange users. The diagram shows a FibreCAT CX500, but any other SAN disk subsystem of an adequate performance level can of course be used.

With 3,000 Exchange users an I/O rate of 0.6 IO/user/s × 3000 users = 1800 IO/s is to be expected for the user profile described in the chapter Exchange measurement methodology. Typically, Exchange has 2/3 read and 1/3 write accesses, which for a RAID 10 results in 2,400 IO/s that have to be processed by the hard disks. Providing this I/O rate calls for 16 hard disks with 15 krpm. RAID 5 should be dispensed with because, as already discussed in the chapter Hard disks, each hard disk and each RAID level only has a certain I/O performance; a RAID 5 would require 22 hard disks, compared with only 16 for a RAID 10.
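The RAID 5 write penalty behind this comparison can be sketched as follows. The ~150 IO/s per 15 krpm disk is an assumption chosen to reproduce the guide's 16-disk RAID 10 figure; with it, RAID 5 comes out at 24 disks, in the same range as the 22 quoted in the text (which implies a slightly higher per-disk rate):

```python
import math

def backend_ios(front_ios, raid):
    """Backend I/O load for a 2/3-read, 1/3-write mix: RAID 10 turns
    each write into 2 backend I/Os (both mirror halves), RAID 5 into
    4 (read data + parity, write data + parity)."""
    reads, writes = front_ios * 2 // 3, front_ios // 3
    penalty = {"10": 2, "5": 4}[raid]
    return reads + penalty * writes

front = 3000 * 0.6                                     # 1800 IO/s front-end
per_disk = 150                                         # assumed IO/s per 15 krpm disk
print(math.ceil(backend_ios(front, "10") / per_disk))  # 16 disks
print(math.ceil(backend_ios(front, "5") / per_disk))   # 24 disks
```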

The space requirement for the mailbox contents of 3,000 Exchange users is calculated – under the general conditions defined at the beginning of the chapter – as 3000 users × 100 MB × 2 = 600 GB. With 16 hard disks in one or more RAID 10 arrays, disks with a capacity of at least 75 GB are required. 73-GB hard disks are thus somewhat too small for 3,000 users; in other words, hard disks with a capacity of 146 GB and 15 krpm should be used for the databases (store).

An individual database should not be larger than 100 GB, as otherwise the restore time of a database can exceed four hours; to enable fast restores, several small databases are preferable. It is therefore advisable to use four storage groups with two databases each, unless there are objections for other organizational reasons. For an Exchange server of this size it is recommended to use dedicated hard disks per storage group for the database log files. At 3 MB per user and day over seven days, a log data volume of 63 GB results for 3,000 users; distributed over four storage groups this is 63 GB / 4 ≈ 16 GB, so hard disks with the smallest available capacity of 36 GB are sufficient.

The two internal hard disks are run as RAID 1 on the onboard SAS or SATA controller of the PRIMERGY BX620 S3 or PRIMERGY BX630 and are used to house the operating system. In the SAN, two RAID 1 arrays are built for queues and restore; moreover, four RAID 1 arrays are used for the log files and four RAID 10 arrays made up of fast hard disks with 15 krpm for the databases.

[Figure: SAN disk layout for 3,000 users – one RAID 10 store array and one RAID 1 log array per storage group (SG 1–4, 16 store disks in total), plus RAID 1 arrays for OS, queues and restore.]

White Paper Sizing Guide Exchange Server 2003 Version: 4.2, July 2006

© Fujitsu Technology Solutions, 2009 Page 62 (69)

In an installation of this size the entire system or the data center should be secured by UPS so that the caches of the controllers and the hard disks can be activated without a loss of data occurring in case of a possible power failure.

For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see chapters Main memory and Operating system). The virtual address space of 4 GB is then distributed in favor of applications at a ratio of 3:1 – 3 GB for the application and 1 GB for the kernel; the standard distribution is 2:2. Microsoft recommends using the /3GB option from a physical memory configuration of 1 GB upwards. For more details see Microsoft Knowledge Base Article Q823440.
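For illustration, a BOOT.INI entry with both switches might look as follows. The ARC path shown and the /USERVA value of 3030 (a value Microsoft commonly recommends for Exchange servers) are examples only and must match the actual installation:

```
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB /USERVA=3030
```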

For an Exchange Server of this magnitude the Active Directory should be placed on a dedicated server.

If you combine two PRIMERGY BX630 server blades to form a 4-processor system, between 5,000 and 6,000 users can then be managed with a configuration of four AMD Opteron 8xx dual-core processors, 4 GB of main memory and an adequate disk subsystem.

The main difference from the disk subsystem configuration shown on the previous page for 3,000 users is the larger number of hard disks for the Exchange databases (store). The total of 24 hard disks with 15 krpm is necessary to handle the higher I/O rate induced by the additional users. It goes without saying that the hard disks for queues, restore and log files must also be adapted to the higher capacity requirements.

As already described, the PRIMERGY BX630 can be scaled up to an 8-processor system through the combination of four PRIMERGY BX630 server blades. However, this concentrated computing performance cannot be put to full use by Exchange as a mere 32-bit application that does not use PAE.

Scaling beyond the 5,000 to 6,000 users already managed with a 4-processor system is not possible. Instead, a scale-out scenario should be used for scaling Exchange above 5,000 users, with additional Exchange Servers installed on 2- or 4-processor systems. See also chapter Exchange architecture.

[Figure: SAN disk layout for 5,000 to 6,000 users – one RAID 10 store array and one RAID 1 log array per storage group (SG 1–4, 24 store disks in total), plus RAID 1 arrays for OS, queues and restore.]


PRIMERGY RX600 S3 / TX600 S3

With four processors and a large potential for storage and PCI controllers, the PRIMERGY RX600 S3 and PRIMERGY TX600 S3 represent a basis for major application servers. The two models only differ in the form of the housing. The PRIMERGY RX600 S3 has been optimized as a pure computing unit for rack installation with four height units (4U), whereas the PRIMERGY TX600 S3 occupies six height units (6U) but in return also offers space for five further hard disks and accessible 5¼″ devices. The PRIMERGY TX600 S3 is also available as a floor-stand version.

Although these servers can include up to 64 GB of memory, only a configuration of 4 GB is practical for Exchange as a pure 32-bit application (see chapter Main memory). This also limits the scaling of an Exchange Server – more than 5,000 to 6,000 users cannot sensibly be run on an individual Exchange Server.

Due to the memory limitation, a higher number of users results particularly in increased access to the Exchange databases. The additional load with 6,000 instead of 5,000 users thus fully affects the disk subsystem, because for lack of main memory the Exchange Server cannot adequately buffer the database accesses. If you want to run such a large monolithic Exchange Server, you should select an efficient and generously sized Fibre-Channel-based disk subsystem that can absorb load peaks with a cache on the disk subsystem side.

Up to 5,000 users it is by all means still possible to use an SCSI-based Direct Attached Storage (DAS) subsystem, as shown on the next page. A configuration with a Fibre-Channel-based disk subsystem for 5,000 users was already illustrated in the previous chapter PRIMERGY BX600.

TX600 S3

Processors 4 × Xeon MP 3.16/3.66 GHz, 1 MB SLC, or

4 × Xeon MP 7020/7030 2.67/2.80 GHz, 2 × 1 MB SLC, or

4 × Xeon MP 7041 3.0 GHz, 2 × 2 MB SLC

Memory max. 64 GB

Onboard RAID 2-channel SCSI

PCI RAID controller 2 × 2-channel SCSI

PCI FC controller up to 4 × 2-channel

Disks internal max. 10

Disks DAS external max. 56

Onboard LAN 2 × 10/100/1000 Mbit

RX600 S3

Processors 4 × Xeon MP 3.16/3.66 GHz, 1 MB SLC, or

4 × Xeon MP 7020/7030 2.67/2.80 GHz, 2 × 1 MB SLC, or

4 × Xeon MP 7041 3.0 GHz, 2 × 2 MB SLC

Memory max. 64 GB

Onboard RAID 2-channel SCSI

PCI RAID controller up to 2 × 2-channel SCSI

PCI FC controller up to 4 × 2-channel

Disks internal max. 5

Disks DAS external max. 56

Onboard LAN 2 × 10/100/1000 Mbit


For 5,000 users the PRIMERGY RX600 S3 or PRIMERGY TX600 S3 should be equipped with four Xeon 7041 processors and 4 GB of main memory. An I/O rate of 5000 users × 0.6 IO/user/s = 3000 IO/s is to be expected for the user profile described in the chapter Exchange measurement methodology. Typically, Exchange has 2/3 read and 1/3 write accesses, resulting for a RAID 10 in 4,000 IO/s that have to be processed by the hard disks. Providing this I/O rate calls for 24 hard disks with 15 krpm. RAID 5 should be dispensed with because, as already discussed in the chapter Hard disks, each hard disk and each RAID level only delivers a certain I/O performance. A RAID 5 would require 36 hard disks, compared with only 24 hard disks in a RAID 10.

Under the general conditions defined at the beginning of the chapter, the space requirement for the mailbox contents of 5,000 Exchange users is calculated to be 5000 users × 100 MB × 2 = 1 TB. With 24 hard disks in one or more RAID 10 arrays hard disks with a capacity of at least 84 GB are required. Thus, hard disks with a capacity of 146 GB should be used for the databases (store).

An individual database should not be larger than 100 GB, as otherwise the restore time of a database can be more than four hours. Thus to enable fast restore times several small databases should be preferred. It is therefore advisable, unless there is an objection for other organizational reasons, to use four storage groups each with three databases.
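The 100 GB limit translates into a simple lower bound on the number of databases. A small illustrative check (not taken from the paper):

```python
import math

def min_databases(total_store_gb, max_db_gb=100):
    # Smallest number of databases so that none exceeds max_db_gb.
    return math.ceil(total_store_gb / max_db_gb)

min_databases(1000)  # -> 10; four storage groups with three databases
                     #    each (12) stay comfortably below the limit
```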

With 5,000 users planning must be made for about 105 GB for log files, providing the assumption made at the beginning of the chapter of 3 MB per user and day for a 7-day stock of log files is taken as the basis. For performance and data security reasons it is advisable to set up a separate drive for the log files for each storage group. For four storage groups 105 GB / 4 ≈ 27 GB are needed per log file volume for this purpose. It is therefore sufficient to use hard disks with a capacity of 36 GB.

For an Exchange Server of this size it is recommended to use dedicated hard disks per storage group for the database log files. This results in the following distribution of the disk subsystem:

The internal hard disks of the PRIMERGY RX600 S3 are run on the onboard SAS controller. A RAID 1 array is used for the operating system and a second RAID 1-array for the SMTP queues. The fifth hard disk is planned as temporary disk space for restore, and as only temporary data are to be found here it need not be mirrored.

The two 2-channel RAID controllers drive the four 1-channel PRIMERGY SX30 storage units, whose hard disks contain the Exchange databases and logs. On each PRIMERGY SX30 a RAID 1 array is set up for the log files and a RAID 10 array made up of fast hard disks with 15 krpm for the databases.

[Figure: disk layout – internal RAID 1 arrays for OS and queues plus a single restore disk; four PRIMERGY SX30 units, each with a RAID 1 array for the logs and a 6-disk RAID 10 array for the stores.]


For an Exchange Server of this magnitude dedicated servers should be used for the Active Directory so that in this system no hard disks have to be provided for the database of the Active Directory.

An Exchange Server of this dimension should be protected by a UPS against power failures. Then the controller and hard disk caches can also be activated without hesitation to improve performance.

For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see chapters Main memory and Operating system). The virtual address space of 4 GB is then distributed in favor of applications at a ratio of 3:1 – 3 GB for the application and 1 GB for the kernel; the standard distribution is 2:2. Since Exchange Server 2003 requires approximately 1.8 times as much virtual address space as physical memory, Exchange would – with the standard allotment of 2 GB for applications – not be able to use a physical memory of 2 GB, but only about 1.1 GB. Comparison measurements with and without /3GB have shown a performance gain of 28% here. For more details see Microsoft Knowledge Base Article Q823440.
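The effect of the 1.8 factor can be checked with one line of arithmetic. The 1.8 ratio is the empirical value quoted above; everything else is a trivial illustration:

```python
def usable_physical_gb(user_va_gb, va_ratio=1.8):
    # Physical memory Exchange 2003 can effectively use, given its
    # user-mode virtual address space and the ~1.8x VA-to-physical ratio.
    return user_va_gb / va_ratio

print(round(usable_physical_gb(2.0), 1))  # standard 2:2 split -> 1.1 GB
print(round(usable_physical_gb(3.0), 1))  # with /3GB -> 1.7 GB
```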

A feature of this system's chipset is that it fully reserves the address area between 3 and 4 GB for the addressing of controllers. To make the physical memory underlying this address area usable, the chipset can map it into the address area between 4 and 5 GB. Thus, with a configuration of more than 3 GB of physical memory, PAE should – contrary to the recommendations elsewhere – be activated to enable the memory area above 4 GB to be addressed.

See Microsoft Knowledge Base Article Q815372 for more information about the optimization of large Exchange Servers.


Processors 4 - 16 × Xeon MP 7020 2.67 GHz, 2 × 1 MB SLC, or

4 - 16 × Xeon MP 7040 3.0 GHz, 2 × 2 MB SLC

Memory max. 256 GB

Onboard RAID SAS

PCI RAID controller 16 × 2-channel SCSI

PCI FC controller up to 6 × 1-channel

Disks internal 6 × SAS

Onboard LAN 2 × 10/100/1000 Mbit per unit

PRIMERGY RX800 S2

The PRIMERGY RX800 S2 is the flagship of the PRIMERGY family. The PRIMERGY RX800 S2 can be scaled from a 4-processor to a 16-processor system in increments of four processors. The maximum configuration provides 256 GB of main memory.

Unfortunately Exchange Server 2003 with its limitations as regards memory usage and the high dependence on the disk I/O does not make full use of the possible computing performance of a PRIMERGY RX800 S2. Therefore, the PRIMERGY RX800 S2 is hardly used in the field of stand-alone Exchange Servers.

However, the PRIMERGY RX800 S2 is an adequate system for the setting-up of high-availability high-end Exchange clusters on the basis of Windows Server 2003 Datacenter Edition. Under Windows Server 2003 Datacenter Edition it is possible to run up to eight PRIMERGY RX800 S2 together in a cluster, with six or seven servers typically being run actively and one or two servers passively so that these can stand in if one of the active servers in the cluster fails. A high-availability Exchange cluster, which can manage up to about 35,000 (7 × 5,000) active users, can be realized on this model.

PRIMERGY RXI300 / RXI600

The PRIMERGY RXI300 and PRIMERGY RXI600 are based on Intel's state-of-the-art 64-bit platform with Itanium 2 processor architecture. However, since Exchange Server 2003 is to date not available in a 64-bit version, an Exchange Server cannot be implemented on the basis of a PRIMERGY RXI300 or PRIMERGY RXI600.

Processors 2 or 4 × Itanium2

1.5 GHz, 4 MB TLC

1.6 GHz, 9 MB TLC

Memory 16 or 32 GB


Summary

If we summarize the results discussed above once more in a clearly arranged way, the outcome is the diagram opposite. (Note: the horizontal axis is not proportionate.) You can immediately see that many of the PRIMERGY systems overlap or are even on a par as regards their efficiency concerning Exchange. A well equipped PRIMERGY RX300 S3 can serve just as many users as a smaller configuration of a PRIMERGY RX600 S3. There are no strict limits because – as already discussed in-depth at the outset – the real number of users depends on the customer-specific load profile. In the chart this fact is depicted by the gradient of the bars.

Which system is ultimately the most suitable depends on the customer requirements: for example, is a floor-stand or a rack required, is the backup to be performed internally or externally, does the customer have growth potential and require expandability, etc.

Irrespective of the performance of the hardware, maximum scaling in the scale-up scenario is as a result of limitations in the Exchange software reached at approximately 5,000 active users per server. A larger number of users can be achieved in a scale-out scenario through additional servers. In this case, it is sensible to use clustered solutions because these also provide redundancy against hardware failure. In this way, clusters for up to approximately 35,000 users can be set up.

However, when sizing an Exchange server it is by all means necessary to analyze the customer requirements and the existing infrastructure, such as network, Active Directory, etc. These must then be prudently incorporated in the planning of the Exchange server or the Exchange environment. A number of influences, such as backup data volume, can be calculated directly, other influences can only be estimated or weighed up on the basis of empirical values. Thus when configuring a medium-sized to large-scale Exchange server, detailed planning and consulting are an absolute necessity.

At this point it must be expressly pointed out once again that all the data used in this paper has been determined on the basis of the medium user profile, without aiming for or optimizing toward the maximum possible values. All measurements were based on RAID systems protected against hardware failure, and on all systems adequate computing performance was also available for active virus protection and backup. Our competitors very frequently quote benchmark results for the efficiency of their systems. However, these are generally based on insecure RAID 0 arrays and leave no room for virus protection, server management or the like.

[Figure: user-capacity ranges of the PRIMERGY models on a non-proportional axis from 50 to 35,000 users – Economy Solutions (Econel 100, Econel 200), Tower Solutions (TX150 S4, TX200 S3, TX300 S3, TX600 S3), Rack Solutions (RX100 S3, RX200 S3, RX220, RX300 S3, RX600 S3), Blade Solutions (BX620 S3, BX630 dual, BX630 quad) and Cluster Solutions (BX600, RX300 S3, RX600 S3, RX800).]




Delivery subject to availability, specifications subject to change without notice, correction of errors and omissions excepted. All conditions quoted (TCs) are recommended cost prices in EURO excl. VAT (unless stated otherwise in the text). All hardware and software names used are brand names and/or trademarks of their respective holders. Copyright © Fujitsu Technology Solutions GmbH 2009

Published by department: Enterprise Products PRIMERGY Server PRIMERGY Performance Lab mailto:[email protected]

Internet:

http://ts.fujitsu.com/primergy

Extranet:

http://partners.ts.fujitsu.com/com/products/servers/primergy


Document History

Version Date of publication Exchange Version

Version 1.0 March 1997 4.0

Version 2.0 July 1999 5.5

Version 3.0 February 2002 2000

Version 3.1 September 2002 2000

Version 4.0 February 2004 2003

Version 4.1 July 2006 2003

Version 4.2 July 2006 2003

Contacts

PRIMERGY Performance and Benchmarks

mailto:[email protected]

PRIMERGY Product Marketing

mailto:[email protected]