7/29/2019 Unit-1 Introduction to Computer Problem-Solving
Acknowledgement
I would like to thank all who gave me this opportunity to prepare and teach.
First of all, I wish you all the very best for your B.Tech and for life. As I said in the class lectures, be eloquent in at least one programming language, as it decides your fate if you wish to go into the software industry. We can say your B.Tech program starts with the C language...
PCDS Theory, 1st unit: start-up basics
We want to learn a programming language because "programming is the part where the rubber meets the road": it is the point where the actual implementation of a program starts.
Requirements analysis -> Design -> Implementation -> Testing -> Maintenance: these are simply the Software Development Life Cycle (SDLC) phases. Programming comes into existence after the design phase, i.e., in Implementation.
My dear students, this is the 1st unit of your academic syllabus of Programming in C and Data Structures. The 1st unit deals with the basic concepts: instructions, software, programs, pseudocode, heuristics for algorithms, problem-solving aspects, strategies, and the implementation of algorithms.
Note: most of the things specified in brackets (.....) are just comments for understandability and readability. Please refer to the previous document for the syllabus copy and academic calendar.
Analysis, complexity and efficiency of algorithms, and a few concepts of testing such as verification, are covered in later units, as they are advanced concepts related to the final output, such as sorting and looping. (Testing is generally performed at the end of every phase of the life cycle, to check that requirements are met, with the intention of finding errors in that phase; a programmer generally thinks about how to make a program, and a tester thinks about how to break it.) Just as your performance is assessed at the end of a year or semester, analysis and efficiency measures are used to evaluate the performance of an algorithm at the end of its implementation. Now carry on with your 1st unit. Apart from these notes, please prepare precise notes of your own; sometimes you may feel the textbook is better than these notes, as its treatment of this unit is short but dense, while I have elongated it in a way you can understand, with some information collected from other resources.
CP Lab initial conditions to start:
A C program is generally stored with the extension *.c. C programs can be developed on many platforms, such as Windows and Unix. On Windows you can use various compilers such as Turbo C (tc, tcc) or Dev-C++; on Unix, gcc and many other command-line tools are available. For future reference, to review or change your past work, you must save your programs by creating a folder on one of your drives. Ex: D:\vicky\sample.c (D:\ is the drive in which the program is to be saved, vicky\ is the folder where the program is to be saved, and sample.c is the filename with its extension).
(\ is a backslash and / is a forward slash; the save/execution path uses \. Comments: // for single-line and /*.....*/ for multi-line. Escape sequences for whitespace characters: \n, \t. Please distinguish round brackets ( ), square brackets [ ] and curly brackets { } properly.)
SHRADDHAVAN LABHATE GNANAM.. GNANAVAN LABHATE SHOURYAM.. SHOURYAVAN LABHATE SARVAM..... YASHASWI BHAVA
- With Regards, G.N.Vivekananda, M.Tech, Assistant Professor, Email Id: [email protected]
UNIT I: INTRODUCTION TO COMPUTER PROBLEM SOLVING
1.1 Introduction
Computer problem solving (CPS) is a complex process that requires much thought, careful planning, logical precision, persistence and attention to detail. It is thus a challenging, exciting and satisfying experience, with considerable scope for personal creativity and expression, and therefore demanding. If CPS is followed correctly, the chances of success are greatly amplified.
Programs and Algorithms:
A program is a set of explicit and unambiguous (clear) instructions, expressed in a certain programming language, that acts as the vehicle for a computer solution.
An algorithm is a sequence of step-by-step instructions to solve a problem, corresponding to a solution that is independent of any programming language. Alternatively, an algorithm consists of a set of explicit and unambiguous finite steps which, when carried out for a given set of initial conditions, produce output and terminate in a finite time.
To attain a solution to the problem, we supply the program with suitable input. The program then takes the input, manipulates it according to its instructions, and produces output that represents a solution to the problem.
Features of an efficient algorithm:
Free of ambiguity
Efficient in execution time
Concise and compact
Completeness
Definiteness
Finiteness
Requirements for solving problems by computer:
There are many algorithms for many problems. Consider looking up someone's telephone number in a telephone directory: as a first step, we need to design or employ an algorithm. Tasks like this are generally performed without any thought for the complex underlying mechanism needed to conduct the search effectively, so it sometimes surprises us, when we develop computer algorithms, that the solution must be specified with logical precision.
Requires interaction between programmer and user
Specifications include:
Input data
Type, accuracy, units, range, format, location, sequence
Special symbols to signal end of data
Output data (results)
type, accuracy, units, range, format, location, headings
We have to know how the output is related to the input
Any special constraints
Example: find the phone number of a person
Problems get revised often
Examples like these serve to emphasize the influence of data organization on the performance of algorithms
1.2 The Problem Solving Aspect
Problem solving is a creative process that resists systemization and mechanization: there is no universal method, and different strategies appear to work for different people.
Problem solving is the sequential process of analysing information related to a given situation and generating appropriate response options.
There are 6 steps that you should follow in order to solve a problem:
1. Understand the Problem
2. Formulate a Model
3. Develop an Algorithm
4. Write the Program
5. Test the Program
6. Evaluate the Solution
The main thing to be known here is what must be done, rather than how to do it
Problem Definition Phase:
The first step in solving a problem is to understand problem clearly. Hence,
the first phase is the problem definition phase. That is, to extract the task from the problem
statement. If the problem is not understood, then the solution will not be correct and it may
result in wastage of time and effort.
When do we get started on a problem?
There are many ways to solve a problem, and often many possible solutions. Remember, though: the sooner you start coding your program, the longer it is going to take.
A rule of thumb, or heuristic (a common-sense rule, or set of rules, intended to increase the probability of solving some problem), is generally used to get a start on a problem. This general approach of focusing on the particular problem can often give us the foothold we need for making a start on the solution.
Consider a simple example of how input/process/output works on a simple problem:
Example: Calculate the average grade for all students in a class.
1. Input: get all the grades perhaps by typing them in via the keyboard or by reading them
from a USB flash drive or hard disk.
2. Process: add them all up and compute the average grade.
3. Output: output the answer to either the monitor, to the printer, to the USB flash drive or
hard disk or a combination of any of these devices.
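The input/process/output breakdown above can be sketched directly (shown here in Python for brevity; the grade values are an invented sample, not from the notes):

```python
def average_grade(grades):
    """Process step: add all the grades up and divide by their count."""
    total = 0
    for g in grades:          # add them all up
        total += g
    return total / len(grades)

# Input step: in a real program these would be read from the keyboard or a file.
grades = [72, 85, 90, 63]     # hypothetical sample data
# Output step: display the computed average.
print(average_grade(grades))  # prints 77.5
```

The same three-step shape (get input, compute, produce output) applies however the data arrives or leaves.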
As you can see, the problem is easily solved by simply getting the input, computing
something and producing the output. However, nothing less than a complete proof of
correctness of our algorithm is entirely satisfactory.
Similarities of problems:
This method is used to find out whether a problem of this sort has already been solved, and to adopt a similar method in solving the current problem. Experience with a previous problem helps and enhances the method of solution for the current one.
Working backward from the solution:
When we have the solution to a problem, we can work backwards to find the starting conditions; even a guess can take us to the start of the problem. It is very important to systematize the investigation in order to avoid duplication of effort.
The strategy of working backwards entails starting with the end result and reversing the steps needed to obtain that result, in order to figure out the answer to the problem. There are at least two different types of problems which can best be solved by this strategy:
(1) When the goal is singular and there are a variety of alternative routes to take. In this situation, the strategy of working backwards allows us to ascertain which of the alternative routes is optimal.
An example of this is when you are trying to figure out the best route to take to get from your
house to a store. You
would first look at what neighbourhood the store is in and trace the optimal route backwards
on a map to your home.
(2) When end results are given or known in the problem and you're asked for the initial
conditions.
An example of this is when we are trying to figure out how much money we started with at
the beginning of the day, if we know how much money we have at the end of the day and all
of the transactions we made during the day.
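The money example can be sketched by reversing each transaction, last one first (the transaction amounts below are invented for illustration):

```python
def starting_money(end_amount, transactions):
    """Work backwards: undo each signed transaction, most recent first."""
    amount = end_amount
    for t in reversed(transactions):
        amount -= t              # reverse the step that produced the later state
    return amount

# Ended the day with 35 after spending 20, earning 10, then spending 5.
# Transactions are recorded as signed changes (+earn, -spend).
print(starting_money(35, [-20, +10, -5]))  # prints 50
```

Running the day forwards from 50 (spend 20, earn 10, spend 5) indeed gives 35, confirming the backwards reconstruction.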
Fig: Working of Phases Fig: The Interactions between Computer
Problem-Solving Phases
General Problem-Solving Strategies:
The following are general approaches:
Working backwards (reverse engineering): once you know it can be done, it is much easier to do.
Look for a related problem that has been solved before (e.g., Java design patterns; sort a particular list such as David, Alice, Carol and Bob to find a general sorting algorithm).
Stepwise refinement: break the problem into several sub-problems and solve each sub-problem separately; this produces a modular structure.
K.I.S.S. = Keep It Simple, Stupid!
Apart from the above, there are a number of general and powerful computational strategies that are used in various guises in computing science. The most widely used strategies are listed below:
a) Divide and conquer
b) Binary doubling strategy
c) Dynamic programming
a) Divide and conquer method:
The basic idea is to divide the problem into several sub-problems beyond which it cannot be further subdivided, then solve the sub-problems efficiently and join them together to get the solution to the main problem. When we apply this strategy to searching an ordered data set, we get the binary search algorithm, which needs to make only about log2 n rather than n comparisons to locate an item among n elements. When the strategy is used in sorting algorithms, the number of comparisons can be reduced from the order of n^2 steps to n log2 n steps.
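A minimal sketch (in Python) of the binary search just described; the comparison counter makes the log2 n behaviour visible:

```python
def binary_search(a, x):
    """Divide and conquer on a sorted list: halve the search range each step."""
    lower, upper, comparisons = 0, len(a) - 1, 0
    while lower < upper:
        middle = (lower + upper) // 2
        comparisons += 1
        if x > a[middle]:
            lower = middle + 1   # discard the lower half
        else:
            upper = middle       # discard the upper half
    return a[lower] == x, comparisons

found, steps = binary_search(list(range(1, 1025)), 700)
print(found, steps)  # prints: True 10  (10 = log2 of 1024 comparisons)
```

Each iteration halves the remaining range, so for n = 1024 elements only 10 comparisons are needed instead of up to 1024.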
b) Binary doubling strategy:
The reverse of divide and conquer, i.e. combining solutions to small problems to build up the solution to a larger one, is known as the binary doubling strategy. This strategy is used to avoid the generation of intermediate results. With the doubling strategy we express the next term n to be computed in terms of the current term, which is usually a function of n/2, in order to avoid the need to generate intermediate terms.
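A standard illustration of this doubling idea (not from the original notes) is computing x^n from the value for n/2, so only about log2 n multiplications are needed:

```python
def power(x, n):
    """Compute x**n by expressing the term for n via the term for n // 2."""
    if n == 0:
        return 1
    half = power(x, n // 2)      # solve the half-size problem once
    if n % 2 == 0:
        return half * half       # x^n = (x^(n/2))^2
    return half * half * x       # odd n needs one extra factor of x

print(power(2, 10))  # prints 1024
```

Computing 2^10 this way takes 4 doubling steps rather than 9 successive multiplications; no intermediate powers 2^1, 2^3, 2^6, ... are generated explicitly.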
c) Dynamic programming:
Dynamic programming is used when the problem is to be solved in a sequence of intermediate steps. It is particularly relevant for many optimization problems frequently encountered in operations research. The method relies on the idea that a good solution to a large problem can sometimes be built up from good or optimal solutions to smaller problems. Related approaches such as greedy search, backtracking and branch-and-bound follow a similar idea.
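A minimal sketch of the dynamic-programming idea for an optimization problem: the best answer for a larger amount is built from stored answers for smaller amounts (the coin values below are illustrative):

```python
def min_coins(coins, amount):
    """Dynamic programming: best[a] is the fewest coins that make amount a."""
    INF = amount + 1                      # larger than any real answer
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                # optimal solution for a built from the stored one for a - c
                best[a] = min(best[a], best[a - c] + 1)
    return best[amount] if best[amount] <= amount else -1

print(min_coins([1, 5, 6], 10))  # prints 2  (5 + 5)
```

Note that a greedy choice (take the largest coin, 6, first) would give 6+1+1+1+1 = 5 coins here, while the dynamic program finds the optimal 2.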
There are still more strategies, but they are usually associated with more advanced algorithms, so we will not proceed further with them.
1.3 Top Down Design
After defining the problem to be solved, we usually have only a vague idea of how to solve it. People, as problem solvers, can only focus on so much logic or so many instructions at a time. Top-down design, or stepwise refinement, is a strategy for taking a solution from a vague outline to a precisely defined algorithm and program implementation by successive refinement. It is used to build solutions to problems in a stepwise fashion.
Breaking a problem into sub-problems involves the following steps:
In the top-down model, an overview of the system is formulated, without going into detail for any part of it.
Each part of the system is then refined in more detail.
Each new part may then be refined again, defining it in yet more detail, until the entire specification is detailed enough to validate the model.
This design model can also be applied while developing an algorithm.
Refinement is applied until we reach the stage where the subtasks can be carried out directly.
For ease of use and understandability, the problem is successively refined by breaking the problem/task into a set of subtasks/sub-problems called modules (divide and conquer).
Fig: A view of top-down design Fig: Subdividing the party planning
This process continues for as many levels as it takes to expand every task to the smallest detail.
A step that needs to be expanded is an abstract step; a step that is specified to the finest detail is a concrete step.
Top-down design in detail involves the following:
Breaking a problem into sub-problems
Choice of a suitable data structure
Construction of loops
Establishing initial conditions for loops
Finding the iterative construct
Termination of loops
Choice of a suitable data structure:
One of the most important decisions we have to make in formulating computer solutions to
problems is the choice of appropriate data structures.
An inappropriate choice of data structure often leads to clumsy, inefficient and difficult implementations, whereas an appropriate choice leads to simple, transparent and efficient implementations. The key to effectively solving many problems comes down to making appropriate choices about the associated data structures. Since data structures and algorithms are usually closely linked to one another, it is not easy to formulate general rules that say which choice of data structure is appropriate.
Unfortunately with regards to data structures each problem must be considered on its merits.
The sort of things we must however be aware of in setting up data structures are such
questions as:
1) How can intermediate results be arranged to allow fast access to information that will
reduce the amount of computation required at a later stage?
2) Can the data structure be easily searched?
3) Can the data structure be easily updated?
4) Does the data structure provide a way of recovering an earlier state in the computation?
5) Does the data structure involve the excessive use of storage?
6) Is it possible to impose some data structure on a problem that is not initially apparent?
7) Can the problem be formulated in terms of one of the common data structures (e.g. array,
set, queue, stack, tree, graph, list)?
These are general considerations, but they give the flavour of the sort of things to bear in mind as we proceed with the development of an algorithm.
Construction of loops:
From the general statements of an algorithm we are led to a series of constructs: loops and structures that are executed conditionally. These structures, together with input and output statements, computable expressions, and assignments, make up the heart of a program implementation.
To construct a loop 3 conditions must be considered:
- 1.Initial conditions that must be applied before the loop begins to execute
- 2.Invariant relation that must apply after each iteration of the loop
- 3.Conditions under which the iterative process must terminate
1. To establish the initial conditions for a loop, an effective strategy is to set the loop variables to the values they would have to assume in order to solve the smallest problem associated with the loop. The number of iterations a loop must make lies in some range 0..n, and the smallest problem usually corresponds to zero (or one) iterations. For the problem of summing the elements of an array a[1..n], the initial conditions are:

i:=0; s:=0   --- initial conditions for the loop, and the solution to the summing problem when n=0

The iterative construct then extends this solution one element at a time:

while i<n do
begin
  i:=i+1;
  s:=s+a[i]
end

The steps above give the solution to the summation problem for all n>=0.
The other consideration in constructing loops is the setting up of the termination conditions.
Termination of loops:
Loop termination can come about in many ways; in general, the means of termination is dictated by the nature of the problem. The simplest termination occurs when it is known in advance how many iterations need to be made. In this case we can use the termination facilities provided by the programming language.
For ex. In Pascal the for-loop can be used for such computations;
for i:=1 to n do
begin
  .
  .
  .
end

The loop terminates unconditionally after n iterations.
A second form of termination occurs when some conditional expression becomes false, for example: while (x>0) and (x<10) do. With this form it is not always possible to tell in advance how many iterations will be made before the loop terminates.
1.4 Implementation of Algorithms
If an algorithm has been properly designed, the path of execution should flow in a straight line from top to bottom. Such programs are usually easier to understand and modify, since the various parts of the program are much more apparent.
Use of procedures to emphasize modularity:
To assist readability and modularity, it is useful to develop a set of independent procedures that perform specific and well-defined tasks. For example, if part of an algorithm is required to sort an array, a specific independent procedure should be used for it.
In the first phase of implementation, before we have implemented any of the procedures, we can just place a write statement in each skeleton procedure which simply writes out the procedure's name when it is called. For example:
procedure sort;
begin
  writeln('sort called')
end
This lets us test the mechanism of the main program at an early stage, and then implement and test the procedures one by one.
Choice of variable names:
Another implementation practice that makes a program more meaningful and easier to understand is to choose appropriate variable and constant names. For example, if we have to perform manipulations on the days of the week, we are much better off using a variable named day rather than a single letter a or some other name.
It makes program much more self documenting. In addition each variable must have only one
role in a given program. A clear definition of all variables and constants at start of each
procedure can also be very useful.
Documenting of programs:
Writing description that explain what the program does.
Can be done in 2 ways:
Writing comments between the line of codes
Creating a separate text file to explain the program
Important not only for other people to use or modify your program, but also for you to
understand your own program after a long time (believe me, you will forget the details
of your own program after some time ...)
Documentation is so important because:
You may return to this program in future to use the whole of or a part of it
again
Other programmer or end user will need some information about your
program for reference or maintenance
You may someday have to modify the program, or may discover some errors
or weaknesses in your program
Although documentation is listed as the last stage of software development method, it
is actually an ongoing process which should be done from the very beginning of the
software development process.
Debugging Programs:
Testing ensures that a program behaves correctly according to its specification. Debugging is the process of finding and correcting mistakes in program code: syntax errors, semantic errors, logical errors (bugs) and run-time errors. The simplest way to implement a debugging tool is to have a Boolean variable (e.g. debug) which is set to true when verbose debugging output for the program is required. Each piece of debugging output can then be parenthesized in the following way:
if debug then
begin
  writeln(...)
  .
  .
  .
end
The best advice is to always work the program through by hand before ever attempting to execute it. If this is done systematically and thoroughly, it should catch most errors. If the process we are modelling is a loop, it is usually necessary only to check the first couple of iterations and the last couple of iterations before termination. For example, we could draw up the essential steps of the binary search procedure as below:
lower:=1; upper:=n;
while lower<upper do
begin
  middle:=(lower+upper) div 2;
  if x>a[middle] then
    lower:=middle+1
  else
    upper:=middle
end;
found:=(x=a[lower])
It is usually a wasted effort to assume that some things work and only start a systematic study
of the algorithm halfway through the execution path. A good rule to follow when debugging
is not to assume anything.
Program Testing: the process of executing a program with the intention of finding errors.
Testing means running the program, executing all its instructions and functions, and testing the logic by entering sample data to check the output. Field testing is carried out by users who operate the software with the purpose of locating problems. In attempting to test whether or not a program will handle all variations of the problem it was designed to solve, we must make every effort to be sure that it will cope with the limiting and unusual cases. Program testing is the process of executing a program to demonstrate its correctness.
One practice to avoid is the use of fixed constants where variables should be used. For example, we should not use statements of the form while i<1000 do where the bound is really the problem size; the variable form while i<n do should be used instead.
Differences between Risk / Error / Bug / Defect / Fault /Failure:
Warning: A message informing of danger (or) Cautionary advice about something
imminent (especially imminent danger or other unpleasantness) (or) Notification of
something, usually in advance.
Risk: Exposure to a chance of loss or damage; a source of danger, i.e., a possibility of incurring loss or injury.
Defect: Found in the product itself after it is shipped to the customer.
Bug: Nothing but a logical error; it is found in the development environment before the product is shipped to the customer.
Error: The deviation between actual and expected values.
Fault: The result of an error.
Failure: Occurs when a fault executes.
1.5 Program Verification
Verification and Validation are steps involved in the Testing phase. Verification means building the product right; Validation means building the right product.
Program verification is the process of ensuring that a program meets its user requirements. After the program is compiled, we must execute it and test/verify it with different inputs before the program can be released to the public or other users (or to the instructor of this class).
More formally, program verification refers to the application of mathematical proof techniques to verify that the results obtained by executing the program, with arbitrary inputs, are in accord with formally defined output specifications. To handle the branches that appear in program segments, it is necessary to set up and prove verification conditions individually.
Fig: Process of Verification
How is the computer model for program execution designed?
The execution path that is followed for given input (I/P) conditions is an important aspect. A program may have different execution paths leading to successful termination. For a given set of I/P conditions, only one of these paths will be followed (although some paths may share common sub-paths). The progress of a computation from specific I/P conditions through to termination can be thought of as a sequence of transitions from one computation state to another.
Input (I/P) and Output (O/P) Assertions (statements/arguments):
The very first step in proving a program correct is to provide a formal statement of its specification in terms of the variables it employs. The formal statement has two parts: an I/P assertion and an O/P assertion, expressed in logic notation as predicates describing the states of the executing program's variables.
I/P assertion: specifies any constraints that have been placed on the values of the input variables used by the program. (E.g., an input variable d may play the role of a divisor in the program; clearly d cannot have the value 0, so the I/P assertion is d <> 0.) When there are no restrictions on the values of an input variable, the I/P assertion is given the logical value true.
O/P assertion: states symbolically the results that the program is expected to produce for input data satisfying the I/P assertion. (E.g., if a program is designed to calculate the quotient q and remainder r resulting from the division of x by y, then the O/P assertion can be written as (x = q*y + r) ∧ (r < y).)
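The quotient/remainder specification can be checked mechanically by asserting the output condition after the computation; the repeated-subtraction routine below is an assumed implementation, not from the notes:

```python
def divide(x, y):
    """Division by repeated subtraction; returns quotient q and remainder r."""
    q, r = 0, x
    while r >= y:
        r -= y
        q += 1
    return q, r

q, r = divide(17, 5)
# Output assertion from the text: (x = q*y + r) and (r < y)
assert 17 == q * 5 + r and r < 5
print(q, r)  # prints: 3 2
```

The assert is exactly the O/P assertion instantiated for one input; proof techniques establish it for all inputs satisfying the I/P assertion (here, x >= 0 and y > 0).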
Ex:
Symbolic execution transforms the verification procedure into proving that the I/P assertion, with symbolic values substituted for all input variables, implies the O/P assertion with final symbolic values for all variables. This is called the Verification Condition (VC) over the program segment from the I/P assertion to the O/P assertion.
Initially we may set up a number of intermediate verification conditions between the I/P and O/P assertions, carried statement by statement. In practice we consider verification conditions only for blocks of a program, as marked out by straight-line segments and loop segments. We adopt the convention VC(A-B) to refer to the verification condition over the program segment from A to B.
Verification of straight line program segments:
An exchange (swap) mechanism serves as an example of a straight-line program segment:
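The exchange example itself did not survive extraction; the following sketch shows what symbolic execution over the three-statement swap t:=a; a:=b; b:=t establishes (the symbolic names A0, B0 are assumptions):

```python
# Symbolic execution of the straight-line swap segment:
#   t := a;  a := b;  b := t
# Track each variable's symbolic value after every statement.
state = {"a": "A0", "b": "B0"}        # input assertion: a = A0, b = B0
state["t"] = state["a"]               # t := a   ->  t = A0
state["a"] = state["b"]               # a := b   ->  a = B0
state["b"] = state["t"]               # b := t   ->  b = A0
# Output assertion (a = B0) and (b = A0) holds for the final symbolic values.
print(state["a"], state["b"])  # prints: B0 A0
```

Because the values are symbolic, this single pass proves the verification condition for every possible pair of inputs, not just one test case.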
Verification of program segments with branches:
To handle program segments that contain branches, it is necessary to prove the verification conditions for each branch separately. For example:
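The example here was a figure that did not survive extraction; as a sketch, each branch of a maximum-of-two-numbers segment can be verified separately against the same output assertion (the routine and names are assumed for illustration):

```python
def maximum(x, y):
    """Branching segment: each branch must satisfy the output assertion on its own."""
    if x > y:
        m = x     # branch 1: m = x and x > y, hence m >= x and m >= y
    else:
        m = y     # branch 2: m = y and x <= y, hence m >= x and m >= y
    # Output assertion, checked on whichever path was taken:
    assert m >= x and m >= y and m in (x, y)
    return m

print(maximum(3, 7))  # prints 7
```

The point is that the proof splits into one verification condition per branch, each combining the branch's entry condition (x > y, or x <= y) with its assignment.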
Verification of program segments with loops:
Loops are harder to work with because the number of iterations traced by symbolic execution is arbitrary (it can take any value). A special assertion, the loop invariant, is therefore employed. It must be a predicate (property) that captures the progressive computational role of the loop while at the same time remaining true before and after each loop traversal, irrespective of how many times the loop is executed. Once the invariant is established, there are several steps to verify the loop segment. To understand this we use a single-loop program structure as a model.
Partial correctness: if the initial condition holds and the program terminates then the
input-output claim holds.
Termination: if the initial condition holds, the program terminates.
Total correctness: if the initial condition holds, the program terminates and the
input-output claim holds.
Assign an assertion to each pair of nodes; the assertion expresses the relation between the variables when the program counter is located between those nodes.
Verification of program segments that employ arrays:
The idea of symbolic execution can be extended to simple examples that employ arrays. As an example of verifying a program segment containing an array, we can set up the verification conditions for a program that finds the position of the smallest element in the array.
Proof of termination:
Proving that a program accomplishes its stated objective in a finite number of steps is called proof of program termination. The proof of termination is obtained directly from the properties of the iterative constructs. For example, consider the for-loop below:
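The for-loop figure did not survive extraction; the shape of a termination argument can still be sketched with a non-negative quantity (a variant) that strictly decreases on every iteration:

```python
def countdown_steps(n):
    """Terminates because the variant (n - i) is non-negative and strictly decreases."""
    i, steps = 0, 0
    while i < n:
        # Here the variant n - i > 0, and each iteration reduces it by exactly 1,
        # so the loop can execute at most n times: termination is guaranteed.
        i += 1
        steps += 1
    return steps

print(countdown_steps(5))  # prints 5
```

A for-loop of the form "for i := 1 to n do" carries the same argument built in: its counter advances by one each time, so it terminates unconditionally after n iterations.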
1.6 Efficiency of algorithms:
Efficiency is tied to the design, implementation and analysis of algorithms.
Generally there are 3 types of efficiency:
a) Worst case efficiency:
is the maximum number of steps that an algorithm can take for any collection
of data values.
b) Best case efficiency:
is the minimum number of steps that an algorithm can take any collection ofdata values.
c) Average case efficiency:
- the efficiency averaged on all possible inputs
- must assume a distribution of the input
- we normally assume uniform distribution (all keys are equally probable)
If the input has size n, efficiency will be a function of n.
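Linear search is the usual illustration of the three cases: one comparison in the best case, n in the worst. A sketch, with the sample list invented:

```python
def linear_search(a, x):
    """Return (index, comparisons); comparisons counts the steps taken."""
    for i, v in enumerate(a):
        if v == x:
            return i, i + 1          # found after i + 1 comparisons
    return -1, len(a)                # worst case: every element examined

a = [4, 8, 15, 16, 23, 42]
print(linear_search(a, 4))    # best case:  (0, 1)
print(linear_search(a, 99))   # worst case: (-1, 6)
```

Averaged over all positions, a successful search examines about (n + 1) / 2 elements under the uniform-distribution assumption mentioned above.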
Efficiency is the ratio of the output to the input of any system. Every algorithm uses system resources to complete its task, such as CPU (central processing unit) time and internal memory. Because of the high cost of computing resources, it is always desirable to design algorithms that are economical in their use of CPU time and memory. This is an easy statement to make but often hard to achieve, because of bad design habits, the inherent complexity of the problem, or both. Every problem has its own characteristics that determine how it can be solved efficiently. Below are some observations that may be useful in designing efficient algorithms.
Redundant Computations:
Most inefficiencies in an algorithm come from redundancy (duplicate computation), which wastes work without need. The effect is serious when the redundancy is embedded in a loop. A common mistake with loops is to repeatedly recalculate part of an expression that remains constant throughout the entire execution of the loop.
For example:
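The example figure did not survive extraction; a sketch of the kind of redundancy meant here is recomputing a loop-invariant sub-expression on every iteration (the formula is illustrative):

```python
import math

def table_slow(xs, mean, sd):
    # Redundant: sd * sqrt(2*pi) is recomputed on every pass of the loop,
    # even though it never changes.
    return [(x - mean) / (sd * math.sqrt(2 * math.pi)) for x in xs]

def table_fast(xs, mean, sd):
    scale = sd * math.sqrt(2 * math.pi)   # hoisted: computed once, before the loop
    return [(x - mean) / scale for x in xs]

print(table_fast([2.0], 2.0, 1.0))  # prints [0.0]
```

Both functions produce the same values; the second simply moves the constant part of the expression outside the loop.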
Referencing array elements:
If care is not taken, redundant computations can also easily creep into array processing.
Inefficiency due to late termination:
Inefficiencies can creep into an implementation when more tests are done than are required to solve the problem. For example, suppose we had to linear-search an alphabetically ordered list of names for some particular name. An inefficient implementation in this instance would be one where all names were examined even after reaching a point in the list beyond which the name could not occur. If we were looking for the name Vicks, then as soon as we encountered a name that occurs alphabetically later than Vicks (e.g., Vicky), there would be no need to proceed further.
The inefficient implementation could have the form below:
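The figure with the inefficient form did not survive extraction; as a sketch, the improved version stops as soon as an alphabetically later name is reached (the names below are invented):

```python
def contains_sorted(names, target):
    """Search an alphabetically ordered list, terminating as early as possible."""
    for name in names:
        if name == target:
            return True
        if name > target:      # every remaining name is later still
            return False       # early termination: target cannot occur
    return False

names = ["Alice", "Bob", "Carol", "Vicky"]
print(contains_sorted(names, "Vicks"))  # prints False, stopping at "Vicky"
```

The inefficient form would omit the `name > target` test and always scan to the end of the list, paying the worst-case cost even when the answer is known sooner.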
Early detection of desired O/P conditions:
Sometimes, due to the nature of the input data, an algorithm establishes the desired output condition before the general conditions for termination have been met.
Ex: A bubble sort might be applied to a set of data that is already in sorted order. The sort should terminate as soon as the data is sorted. To do this, check whether any exchanges were made in the current pass of the inner loop; if there were no exchanges in the current pass, the data must be sorted, and early termination can be applied. In general, we must include additional steps and tests to detect the conditions for early termination, but if they can be kept inexpensive, as in bubble sort, then they are worth including. When early termination is possible, we always have to trade extra tests, and possibly even extra storage, to bring it about.
Trading storage for efficiency gains:
A trade-off between storage and efficiency is often used to improve the performance of an algorithm. In this trade-off we precompute or save some intermediate results, and thereby avoid a lot of unnecessary testing and computation later. One strategy sometimes used to speed up an algorithm is to implement it with the least number of loops; however, this usually makes the program much harder to read and debug. It is better to have each loop do one job, just as we have each variable play one role.
To obtain a more efficient solution to a problem, it is far better to try to improve the algorithm itself than to resort to programming tricks that tend to obscure what is being done. A clear implementation of a better algorithm is to be preferred to a tricky implementation of an algorithm that is not as good.
1.7 The Analysis of Algorithms
Why algorithm analysis:
Generally, we use a computer because we need to process a large amount of data. When we run a program on large amounts of input, besides making sure the program is correct, we must be certain that it terminates within a reasonable amount of time.
Algorithm analysis is the process of determining the amount of time, storage and other resources required to execute an algorithm.
A good algorithm design has both qualitative and quantitative aspects. In practice, we are interested in solutions that are economical in their use of computing and human resources. Among other things, good algorithms should usually possess the following qualities and capabilities:
i) They can be easily understood by others; that is, the implementation is clear and concise without being tricky.
ii) They can be understood on a number of levels.
iii) They are easily modifiable if necessary.
iv) They are simple, general and powerful.
v) They are correct for clearly defined inputs.
vi) They require less computer time, storage and peripherals; i.e., they are more economical.
vii) They are documented well enough to be used by others who do not have detailed knowledge of the inner workings.
viii) They do not depend on being run on a particular computer.
ix) The solution is pleasing and satisfying to its designer and user.
x) They can be used as sub-procedures for other problems.
Two or more algorithms can solve the same problem in different ways, so quantitative measures are valuable: they provide a way of comparing the performance of two or more algorithms intended to solve the same problem. This is an important step, because using the algorithm that is more efficient in time and resources can save time and money.
Computational Complexity:
We can characterize an algorithm's performance in terms of the size (usually n) of the problem being solved. More computing resources are needed to solve larger problems in the same class. The table below illustrates the comparative cost of solving a problem for a range of values of n.
The table shows that only very small problems can be solved with an algorithm that exhibits exponential behaviour: an exponential algorithm with n = 100 would take an immeasurably long time. At the other extreme, an algorithm with logarithmic dependence takes far less time (about 13 steps for log2 n when n is on the order of 10^4). These examples emphasize the importance of the way algorithms behave as a function of the problem size. Analysis of an algorithm also provides a theoretical model of the inherent computational complexity of a particular problem.
To decide how to characterize the behaviour of an algorithm as a function of the problem size n, we must study the mechanism very carefully to decide just what constitutes its dominant operation. It may be the number of times a particular expression is evaluated, or the number of comparisons or exchanges that must be made as n grows. For example, in sorting algorithms the comparisons, exchanges, and moves count most. The number of comparisons usually dominates, so we use comparisons in the computational model for sorting algorithms.
The Order Notation:
The O-notation gives an upper bound to a function within a constant factor. For a given function g(n), we denote by O(g(n)) the set of functions
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= f(n) <= c.g(n) for all n >= n0 }.
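As a small worked example of this definition (standard, and not taken from the original notes), we can verify that 3n^2 + 2n is O(n^2) by exhibiting the constants:

```latex
% Show 3n^2 + 2n \in O(n^2) by exhibiting constants c and n_0.
% For n \ge 1 we have 2n \le 2n^2, hence
%   0 \le 3n^2 + 2n \le 3n^2 + 2n^2 = 5n^2.
% Taking c = 5 and n_0 = 1 satisfies the definition:
0 \le 3n^2 + 2n \le 5n^2 \quad \text{for all } n \ge 1.
```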
To determine the worst-case complexity of an algorithm, we choose a set of input conditions that force the algorithm to make the least possible progress at each step towards its final goal.
In many practical applications it is more important to have a measure of the expected complexity of an algorithm than of its worst-case behaviour. The expected complexity gives a measure of the behaviour of the algorithm averaged over all possible problems of size n.
As a simple example, suppose we wish to characterize the behaviour of an algorithm that linearly searches an ordered list of n elements (positions 1, 2, 3, ..., n) for some value x.
In the worst case, the algorithm examines all n values in the list before terminating.
In the average case, the probability that x will be found at position 1 is 1/n, at position 2 is 1/n, and so on. Therefore:
average search cost = (1/n)(1 + 2 + 3 + ... + n) = (1/n)(n(n+1)/2) = (n+1)/2.
Probabilistic average-case analysis:
Suppose we wish to characterize the behaviour of an algorithm that linearly searches an ordered list of n elements (positions 1, 2, ..., n) for some value x.
In the worst case it will be necessary for the algorithm to examine all n values in the list before terminating.
In the average case it is assumed that all possible points of termination are equally likely; i.e., the probability that x will be found at position 1 is 1/n, at position 2 is 1/n, and so on.
The average search cost is thus the sum of all possible search costs, each multiplied by its associated probability.
For example, if n = 5, average search cost = (1/5)(1 + 2 + 3 + 4 + 5) = 3.
Noting that 1 + 2 + 3 + ... + n = n(n+1)/2 (Gauss's formula), in general we have:
average search cost = (1/n)(n(n+1)/2) = (n+1)/2.
As a second, more complicated example, consider the average-case analysis of the binary search procedure. What we wish to establish in this analysis is the average number of iterations of the search loop required before the algorithm terminates in a successful search. This analysis corresponds to the first binary search implementation proposed earlier, which terminates as soon as the search value is found.
Referring to the binary decision tree, we see that 1 element can be found with 1 comparison, 2 elements with 2 comparisons, 4 elements with 3 comparisons, and so on.
i.e., the sum over all possible elements = 1 + 2 + 2 + 3 + 3 + 3 + 3 + 4 + ...
In the general case, the 2^i elements at level i of the tree require i+1 comparisons each. Assuming that all items present in the array are equally likely to be retrieved (probability of retrieval 1/n), the average search cost is again just the sum over all possible search costs, each multiplied by its associated probability.
Fig: Binary decision tree for a set of 15 elements
THE END OF UNIT 1
THANK YOU.