Scalability


From Wikipedia, the free encyclopedia

Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner or its ability to be enlarged to accommodate that growth.[1] For example, it can refer to the capability of a system to increase its total output under an increased load when resources (typically hardware) are added. An analogous meaning is implied when the word is used in an economic context, where scalability of a company implies that the underlying business model offers the potential for economic growth within the company.

Scalability, as a property of systems, is generally difficult to define,[2] and in any particular case it is necessary to define the specific requirements for scalability on those dimensions that are deemed important. It is a highly significant issue in electronics systems, databases, routers, and networking. A system whose performance improves after adding hardware, proportionally to the capacity added, is said to be a scalable system.

An algorithm, design, networking protocol, program, or other system is said to scale if it is suitably efficient and practical when applied to large situations (e.g. a large input data set, a large number of outputs or users, or a large number of participating nodes in the case of a distributed system). If the design or system fails when a quantity increases, it does not scale. In practice, if there are a large number of things (n) that affect scaling, then resource requirements (for example, algorithmic time complexity) must grow less than n² as n increases. An example is a search engine, which must scale not only for the number of users, but also for the number of objects it indexes.

Scalability refers to the ability of a site to increase in size as demand warrants.[3] The concept of scalability is desirable in technology as well as business settings. The base concept is consistent: the ability for a business or technology to accept increased volume without impacting the contribution margin (= revenue − variable costs). For example, a given piece of equipment may have a capacity for 1 to 1,000 users, while beyond 1,000 users additional equipment is needed or performance will decline (variable costs will increase and reduce contribution margin).
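The sub-quadratic growth requirement mentioned above can be made concrete with a small illustration. The following Python sketch is not from the original article; the task (finding duplicate IDs) and the input sizes are hypothetical, chosen only to contrast a nested-loop approach whose work grows roughly as n² with a set-based approach whose work grows roughly linearly with n. Only the second keeps resource requirements growing more slowly than n² and would be said to scale.

```python
# Illustrative sketch: quadratic vs. sub-quadratic resource growth.
# The data and sizes here are hypothetical, chosen only to show the idea.
import time


def has_duplicates_quadratic(ids):
    """Nested loop: comparisons grow roughly as n^2."""
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            if ids[i] == ids[j]:
                return True
    return False


def has_duplicates_linear(ids):
    """Set membership: work grows roughly linearly with n."""
    seen = set()
    for x in ids:
        if x in seen:
            return True
        seen.add(x)
    return False


if __name__ == "__main__":
    for n in (500, 1_000, 2_000):
        ids = list(range(n))  # worst case: no duplicates present
        t0 = time.perf_counter()
        has_duplicates_quadratic(ids)
        t1 = time.perf_counter()
        has_duplicates_linear(ids)
        t2 = time.perf_counter()
        # Doubling n roughly quadruples the first time but only doubles the second.
        print(f"n={n}: quadratic {t1 - t0:.4f}s, linear {t2 - t1:.4f}s")
```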
Measures

Scalability can be measured in various dimensions, such as:

- Administrative scalability: the ability for an increasing number of organizations or users to easily share a single distributed system.
- Functional scalability: the ability to enhance the system by adding new functionality at minimal effort.
- Geographic scalability: the ability to maintain performance, usefulness, or usability regardless of expansion from concentration in a local area to a more distributed geographic pattern.
- Load scalability: the ability for a distributed system to easily expand and contract its resource pool to accommodate heavier or lighter loads or numbers of inputs. Alternatively, the ease with which a system or component can be modified, added, or removed to accommodate changing load.
- Generation scalability: the ability of a system to scale up by using new generations of components. Relatedly, heterogeneous scalability is the ability to use components from different vendors.[4]

Examples

- A routing protocol is considered scalable with respect to network size if the size of the necessary routing table on each node grows as O(log N), where N is the number of nodes in the network.
- A scalable online transaction processing system or database management system is one that can be upgraded to process more transactions by adding new processors, devices and storage, and which can be upgraded easily and transparently without shutting it down.
- Some early peer-to-peer (P2P) implementations of Gnutella had scaling issues. Each node query flooded its requests to all peers, so the demand on each peer would increase in proportion to the total number of peers, quickly overrunning the peers' limited capacity. Other P2P systems like BitTorrent scale well because the demand on each peer is independent of the total number of peers. There is no centralized bottleneck, so the system may expand indefinitely without the addition of supporting resources (other than the peers themselves). A rough model of this difference is sketched after this list.
- The distributed nature of the Domain Name System allows it to work efficiently even when all hosts on the worldwide Internet are served, so it is said to "scale well".
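To make the Gnutella versus BitTorrent contrast concrete, here is a small back-of-the-envelope model in Python. It is a sketch under simplified assumptions (every peer issues the same number of queries, a flooded query visits every other peer, and a swarm peer talks to a fixed number of neighbors); it does not describe either protocol's actual implementation, and the message counts are hypothetical.

```python
# Hypothetical model of per-peer load as a P2P network grows.
# Assumption: each peer issues `queries_per_peer` requests in some time window.

def flooding_load_per_peer(num_peers, queries_per_peer=10):
    """Query flooding: every query is forwarded to every other peer,
    so each peer must handle traffic from all other peers."""
    return (num_peers - 1) * queries_per_peer


def swarm_load_per_peer(num_peers, neighbors=4, queries_per_peer=10):
    """BitTorrent-style swarm: each peer only exchanges data with a
    bounded set of neighbors, independent of the total swarm size."""
    return min(neighbors, num_peers - 1) * queries_per_peer


if __name__ == "__main__":
    for n in (10, 100, 1_000, 10_000):
        print(f"{n:>6} peers: flooding={flooding_load_per_peer(n):>8} msgs/peer, "
              f"swarm={swarm_load_per_peer(n):>4} msgs/peer")
```

In this toy model the per-peer load under flooding grows linearly with the number of peers, while the swarm load stays constant; only the latter design scales.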
Horizontal and vertical scaling

Methods of adding more resources for a particular application fall into two broad categories: horizontal and vertical scaling.[5]

To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application. An example might involve scaling out from one web server system to three. As computer prices have dropped and performance has continued to increase, high-performance computing applications such as seismic analysis and biotechnology workloads have adopted low-cost "commodity" systems for tasks that once would have required supercomputers. System architects may configure hundreds of small computers in a cluster to obtain aggregate computing power that often exceeds that of computers based on a single traditional processor. The development of high-performance interconnects such as Gigabit Ethernet, InfiniBand and Myrinet further fueled this model. Such growth has led to demand for software that allows efficient management and maintenance of multiple nodes, as well as hardware such as shared data storage with much higher I/O performance. Size scalability is the maximum number of processors that a system can accommodate.[4]

To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer. Such vertical scaling of existing systems also enables them to use virtualization technology more effectively, as it provides more resources for the hosted set of operating system and application modules to share. Taking advantage of such resources can also be called "scaling up", such as expanding the number of Apache daemon processes currently running. Application scalability refers to the improved performance of running applications on a scaled-up version of the system.[4]

There are tradeoffs between the two models. Larger numbers of computers mean increased management complexity, as well as a more complex programming model and issues such as throughput and latency between nodes; also, some applications do not lend themselves to a distributed computing model. In the past, the price difference between the two models favored "scale up" computing for those applications that fit its paradigm, but recent advances in virtualization technology have blurred that advantage, since deploying a new virtual system over a hypervisor (where possible) is almost always less expensive than actually buying and installing a real one. Configuring an existing idle system has always been less expensive than buying, installing, and configuring a new one, regardless of the model.
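The scale-out example above (one web server growing to three) can be sketched in a few lines of Python. This is a minimal illustration, not a production load balancer: the node names and request counts are hypothetical, and the routing policy is a plain hash rather than the consistent hashing a real system would likely use.

```python
# Minimal sketch of horizontal scaling: spreading requests over more nodes.
# Node names and request counts are hypothetical, for illustration only.
from collections import Counter
import hashlib


def pick_node(request_id: str, nodes: list[str]) -> str:
    """Route a request to a node by hashing its ID (a simple, stateless policy).
    Adding a node remaps most keys; real systems often use consistent hashing
    to limit that churn."""
    digest = hashlib.sha256(request_id.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]


def simulate(num_requests: int, nodes: list[str]) -> Counter:
    load = Counter()
    for i in range(num_requests):
        load[pick_node(f"request-{i}", nodes)] += 1
    return load


if __name__ == "__main__":
    requests = 30_000
    for cluster in (["web1"], ["web1", "web2", "web3"]):
        load = simulate(requests, cluster)
        print(f"{len(cluster)} node(s): "
              + ", ".join(f"{n}={c}" for n, c in sorted(load.items())))
    # Scaling out from one web server to three cuts per-node load to roughly
    # one third, at the cost of a routing layer and coordination between nodes.
```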
Database scalability

A number of different approaches enable databases to grow to very large size while supporting an ever-increasing rate of transactions per second. Not to be discounted, of course, is the rapid pace of hardware advances in both the speed and capacity of mass storage devices, as well as similar advances in CPU and networking speed.

One technique supported by most of the major database management system (DBMS) products is the partitioning of large tables based on ranges of values in a key field. In this manner, the database can be scaled out across a cluster of separate database servers. Also, with the advent of 64-bit microprocessors, multi-core CPUs, and large SMP multiprocessors, DBMS vendors have been at the forefront of supporting multi-threaded implementations that substantially scale up transaction processing capacity.

Network-attached storage (NAS) and storage area networks (SANs) coupled with fast local area networks and Fibre Channel technology enable still larger, more loosely coupled configurations of databases and distributed computing power. The widely supported X/Open XA standard employs a global transaction monitor to coordinate distributed transactions among semi-autonomous XA-compliant database resources. Oracle RAC uses a different model to achieve scalability, based on a "shared-everything" architecture that relies upon high-speed connections between servers.

While DBMS vendors debate the relative merits of their favored designs, some companies and researchers question the inherent limitations of relational database management systems. GigaSpaces, for example, contends that an entirely different model of distributed data access and transaction processing, space-based architecture, is required to achieve the highest performance and scalability. On the other hand, Base One makes the case for extreme scalability without departing from mainstream relational database technology.[6] For specialized applications, NoSQL architectures such as Google's BigTable can further enhance scalability. Google's massively distributed Spanner technology, positioned as a successor to BigTable, supports general-purpose database transactions and provides a more conventional SQL-based query language.[7]

Strong versus eventual consistency (storage)

In the context of scale-out data storage, scalability is defined as the maximum storage cluster size which guarantees full data consistency, meaning there is only ever one valid version of stored data in the whole cluster, independently of the number of redundant physical data copies. Clusters which provide "lazy" redundancy by updating copies in an asynchronous fashion are called "eventually consistent". This type of scale-out design is suitable when availability and responsiveness are rated higher than consistency, which is true for many web file-hosting services and web caches (if you want the latest version, wait some seconds for it to propagate). For all classical transaction-oriented applications, this design should be avoided.[8]

Many open source and even commercial scale-out storage clusters, especially those built on top of standard PC hardware and networks, provide eventual consistency only, as do some NoSQL databases such as CouchDB and others mentioned above. Write operations invalidate other copies, but often don't wait for their acknowledgements. Read operations typically don't check every redundant copy prior to answering, potentially missing the preceding write operation. The large amount of metadata signal traffic would require specialized hardware and short distances to be handled with acceptable performance (i.e. to act like a non-clustered storage device or database).

Whenever strong data consistency is expected, look for these indicators:

- the use of InfiniBand, Fibre Channel or similar low-latency networks to avoid performance degradation with increasing cluster size and number of redundant copies;
- short cable lengths and limited physical extent, avoiding signal runtime performance degradation;
- majority/quorum mechanisms to guarantee data consistency whenever parts of the cluster become inaccessible.

Indicators of eventually consistent designs (not suitable for transactional applications!) are:

- write performance increases linearly with the number of connected devices in the cluster;
- while the storage cluster is partitioned, all parts remain responsive, with a risk of conflicting updates.

Performance tuning versus hardware scalability

It is often advised to focus system design on hardware scalability rather than on capacity. It is typically cheaper to add a new node to a system in order to achieve improved performance than to partake in performance tuning to improve the capacity that each node can handle. But this approach can have diminishing returns (as discussed in performance engineering).

For example: suppose 70% of a program can be sped up if parallelized and run on multiple CPUs instead of one. If α is the fraction of a calculation that is sequential, and 1 − α is the fraction that can be parallelized, the maximum speedup that can be achieved by using P processors is given according to Amdahl's Law:

S(P) = 1 / (α + (1 − α)/P).

Substituting the values for this example, using 4 processors we get

S(4) = 1 / (0.3 + 0.7/4) ≈ 2.105.

If we double the compute power to 8 processors we get

S(8) = 1 / (0.3 + 0.7/8) ≈ 2.581.

Doubling the processing power has only improved the speedup by roughly one-fifth. If the whole problem were parallelizable, we would of course expect the speedup to double as well. Therefore, throwing in more hardware is not necessarily the optimal approach.

Weak versus strong scaling

In the context of high performance computing there are two common notions of scalability. The first is strong scaling, which is defined as how the solution time varies with the number of processors for a fixed total problem size. The second is weak scaling, which is defined as how the solution time varies with the number of processors for a fixed problem size per processor.[9]
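As a check on the arithmetic above, here is a short Python sketch (not part of the original article) that evaluates Amdahl's Law for a 30% sequential fraction at several processor counts. Because the total problem size is held fixed while processors are added, the resulting curve is exactly a strong-scaling curve, and its flattening is the diminishing return discussed in the performance tuning section.

```python
# Amdahl's Law: maximum speedup with P processors when a fraction `alpha`
# of the work is inherently sequential (here alpha = 0.3, i.e. 70% parallel).

def amdahl_speedup(alpha: float, processors: int) -> float:
    return 1.0 / (alpha + (1.0 - alpha) / processors)


if __name__ == "__main__":
    alpha = 0.3
    for p in (1, 2, 4, 8, 16, 10**6):
        print(f"P={p:>7}: speedup = {amdahl_speedup(alpha, p):.3f}")
    # Output includes S(4) ~= 2.105 and S(8) ~= 2.581, matching the text above;
    # even with an enormous processor count the speedup approaches 1/alpha ~= 3.33.
```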