
BT2258 -LECTURE NOTES

ON

INSTRUMENTAL METHODS OF ANALYSIS

BACHELOR OF TECHNOLOGY

IV SEMESTER

PREPARED BY

Mr. K. Selvaraj B. Pharm, M. Tech

DEPARTMENT OF BIOTECHNOLOGY

RAJALAKSHMI ENGINEERING COLLEGE

THANDALAM- 602 105


SYLLABUS

SUBJECT : INSTRUMENTAL METHODS OF ANALYSIS

SUBJECT CODE : BT2258

YEAR / SEM : II YEAR B.Tech (BIOTECH)/ IV SEMESTER

UNIT I BASICS OF MEASUREMENT

Classification of methods – calibration of instrumental methods – electrical

components and circuits – signal-to-noise ratio – signal-to-noise enhancement.

UNIT II OPTICAL METHODS

General design – sources of radiation – wavelength selectors – sample containers –

radiation transducers – types of optical instruments – Fourier transform measurements.

UNIT III MOLECULAR SPECTROSCOPY

Measurement of transmittance and absorbance – Beer's law – spectrophotometric

analysis – qualitative and quantitative absorption measurements – types of spectrometers –

UV – visible – IR – Raman spectroscopy – instrumentation – theory.

UNIT IV THERMAL METHODS

Thermo-gravimetric methods – differential thermal analysis – differential scanning

calorimetry.

UNIT V SEPARATION METHODS

Introduction to chromatography – models – ideal separation – retention parameters –

van Deemter equation – gas chromatography – stationary phases – detectors – Kovats

indices – HPLC – pumps – columns – detectors – ion exchange chromatography – size

exclusion chromatography – supercritical fluid chromatography – capillary electrophoresis


UNIT I BASICS OF MEASUREMENT

Classification of methods – calibration of instrumental methods – electrical

components and circuits – signal-to-noise ratio – signal-to-noise enhancement.

INTRODUCTION

Analytical chemistry deals with methods for determining the chemical composition

of samples of matter. A qualitative method yields information about the identity of atomic or

molecular species or the functional groups in the sample; a quantitative method, in contrast,

provides numerical information as to the relative amount of one or more of these

components.

Analytical methods are often classified as being either classical or instrumental. This

classification is largely historical with classical methods, sometimes called wet-chemical

methods, preceding instrumental methods by a century or more.

Classical Methods

Separation of analytes by precipitation, extraction, or distillation.

Qualitative analysis by reaction of analytes with reagents that yielded products that

could be recognized by their colors, boiling or melting points, solubilities, optical

activities, or refractive indexes.

Quantitative analysis by gravimetric or by titrimetric techniques.

1. Gravimetric Methods – the mass of the analyte or some compound produced from the

analyte was determined.


2. Titrimetric Methods – the volume or mass of a standard reagent required to react

completely with the analyte was measured.

Instrumental Methods

Measurements of physical properties of analytes, such as conductivity, electrode

potential, light absorption or emission, mass-to-charge ratio, and fluorescence, began to be

used for quantitative analysis of a variety of inorganic, organic, and biochemical analytes.

Highly efficient chromatographic and electrophoretic techniques began to replace distillation,

extraction, and precipitation for the separation of components of complex mixtures prior to

their qualitative or quantitative determination. These newer methods for separating and

determining chemical species are known collectively as instrumental methods of analysis.

Instrumentation can be divided into two categories: quantitation and detection.

1. Quantitation: Measurement of physical properties of analytes - such as conductivity, electrode

potential, light absorption or emission, mass-to-charge ratio, and fluorescence-began to be

employed for quantitative analysis of inorganic, organic, and biochemical analytes.

2. Detection: Efficient chromatographic separation techniques are used for the separation of

components of complex mixtures.

Table 1. Classification of instrumental methods based on different analytical signals

Signal – Instrumental Methods

Emission of radiation – Emission spectroscopy (X-ray, UV, visible, electron, Auger); fluorescence, phosphorescence, and luminescence (X-ray, UV, and visible)

Absorption of radiation – Spectrophotometry and photometry (X-ray, UV, visible, IR); photoacoustic spectroscopy; nuclear magnetic resonance and electron spin resonance spectroscopy

Scattering of radiation – Turbidimetry; nephelometry; Raman spectroscopy

Refraction of radiation – Refractometry; interferometry

Diffraction of radiation – X-ray and electron diffraction methods

Rotation of radiation – Polarimetry; optical rotatory dispersion; circular dichroism

Electrical potential – Potentiometry; chronopotentiometry

Electrical charge – Coulometry

Electrical current – Polarography; amperometry

Electrical resistance – Conductometry

Mass-to-charge ratio – Mass spectrometry

Rate of reaction – Kinetic methods

Thermal properties – Thermal conductivity and enthalpy

Radioactivity – Activation and isotope dilution methods

CALIBRATION OF INSTRUMENTAL METHODS

A calibration curve is one approach to the problem of instrument calibration; other

approaches may mix the standard into the unknown, giving an internal standard.

The calibration curve is a plot of how the instrumental response, the so-called

analytical signal, changes with the concentration of the analyte (the substance to be

measured). The operator prepares a series of standards across a range of concentrations near

the expected concentration of analyte in the unknown. The concentrations of the standards

must lie within the working range of the technique (instrumentation) they are using (see

figure). Analyzing each of these standards using the chosen technique will produce a series of

measurements. For most analyses a plot of instrument response vs. analyte concentration will

show a linear relationship. The operator can measure the response of the unknown and, using

the calibration curve, can interpolate to find the concentration of analyte.

Figure 1. Calibration curve, showing the limit of detection (LOD), limit of quantification (LOQ), dynamic range, and limit of linearity (LOL).

How to create a calibration curve

The data - the concentrations of the analyte and the instrument response for each

standard - can be fit to a straight line, using linear regression analysis. This yields a model

described by the equation y = mx + c, where y is the instrument response, m represents the

sensitivity, and c is a constant that describes the background. The analyte concentration (x) of

unknown samples may be calculated from this equation.
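As a rough illustration of this least-squares fit, the following Python sketch fits y = mx + c to made-up calibration data (the concentrations and responses are illustrative, not from any real instrument) and interpolates an unknown:

    import numpy as np

    # Hypothetical calibration standards: concentration (x) and instrument response (y)
    conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])            # e.g. ug/mL
    response = np.array([0.02, 0.21, 0.40, 0.59, 0.81])   # e.g. absorbance

    # Least-squares fit to y = m*x + c
    m, c = np.polyfit(conc, response, 1)   # m = sensitivity (slope), c = background (intercept)

    # Interpolate the concentration of an unknown sample from its measured response
    unknown_response = 0.33
    unknown_conc = (unknown_response - c) / m
    print(f"m = {m:.4f}, c = {c:.4f}, unknown ~ {unknown_conc:.2f} ug/mL")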


Many different variables can be used as the analytical signal. For instance, chromium

(III) might be measured using a chemiluminescence method, in an instrument that contains a

photomultiplier tube (PMT) as the detector. The detector converts the light produced by the

sample into a voltage, which increases with intensity of light. The amount of light measured

is the analytical signal.

Most analytical techniques use a calibration curve. There are a number of advantages

to this approach. First, the calibration curve provides a reliable way to calculate the

uncertainty of the concentration calculated from the calibration curve (using the statistics of

the least squares line fit to the data). [1]

Second, the calibration curve provides data on an empirical relationship. The

mechanism for the instrument's response to the analyte may be predicted or understood

according to some theoretical model, but most such models have limited value for real

samples. (Instrumental response is usually highly dependent on the condition of the analyte,

solvents used and impurities it may contain; it could also be affected by external factors such

as pressure and temperature.)

Many theoretical relationships, such as fluorescence, require the determination of an

instrumental constant anyway, by analysis of one or more reference standards; a calibration

curve is a convenient extension of this approach. The calibration curve for a particular

analyte in a particular (type of) sample provides the empirical relationship needed for those

particular measurements.

The chief disadvantages are that the standards require a supply of the analyte material,

preferably of high purity and in known concentration. (Some analytes - e.g., particular

proteins - are extremely difficult to obtain pure in sufficient quantity.)

Applications

Analysis of concentration

Verifying the proper functioning of an analytical instrument or a sensor device such as an

ion selective electrode

Determining the basic effects of a control treatment (such as a dose-survival curve in

clonogenic assay)

Standard addition method

The method of standard addition is used in instrumental analysis to determine

concentration of a substance (analyte) in an unknown sample by comparison to a set of

samples of known concentration, similar to using a calibration curve. Standard addition can


be applied to most analytical techniques and is used instead of a calibration curve to solve the

matrix effect problem.

This graph is an example of a standard addition plot used to determine the

concentration of calcium in an unknown sample by atomic absorption spectroscopy. The

point at zero concentration added Ca is the reading of the unknown, the other points are the

readings after adding increasing amounts ('spikes') of standard solution. The absolute value of

the x-intercept is the concentration of Ca in the unknown, in this case 1.69 × 10⁻⁶ g/mL.
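A minimal sketch of the standard-addition calculation (with illustrative numbers, not the data behind the plot described above): fit the spiked readings to a line and take the absolute value of the x-intercept.

    import numpy as np

    # Hypothetical spikes of added Ca (g/mL) and the corresponding instrument readings
    added = np.array([0.0, 1.0e-6, 2.0e-6, 3.0e-6, 4.0e-6])
    signal = np.array([0.151, 0.240, 0.330, 0.421, 0.509])

    # Fit signal = m*added + b; the x-intercept of the line is -b/m
    m, b = np.polyfit(added, signal, 1)
    conc_unknown = abs(-b / m)    # concentration of analyte in the unknown

    print(f"Ca in unknown ~ {conc_unknown:.2e} g/mL")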

Applications

Standard addition is frequently used in atomic absorption spectroscopy and gas

chromatography.

LAWS OF ELECTRICITY

Ohm's Law

For many conductors of electricity, the electric current which will flow through them

is directly proportional to the voltage applied to them. When a microscopic view of Ohm's

law is taken, it is found to depend upon the fact that the drift velocity of charges through the

material is proportional to the electric field in the conductor. The ratio of voltage to current is

called the resistance, and if the ratio is constant over a wide range of voltages, the material is

said to be an "ohmic" material. If the material can be characterized by such a resistance, then

the current can be predicted from the relationship I = V/R.


Kirchhoff's Current Law (KCL)

The current entering any junction is equal to the current leaving that junction; for

example, i1 + i4 = i2 + i3. This law is also called Kirchhoff's first law, Kirchhoff's point rule, Kirchhoff's junction

rule (or nodal rule), and Kirchhoff's first rule.

Power law

The electrical power law relates the power P dissipated in a circuit element to the voltage V across it and the current I through it:

P = VI

where P is the power in watts, V is the voltage in volts, and I is the current in amperes. Combining this with Ohm's law also gives P = I²R = V²/R.

DIRECT CURRENT CIRCUITS AND MEASUREMENTS

Direct current (DC) is current that always flows in the same direction. Most electronic

devices need direct current because they require a steady flow of electrons that always head

in the same direction. A battery supplies direct current. Alternating current (AC) is changed to

direct current (DC) with the use of diode rectifiers. A transformer cannot be used with

direct current.

Series Circuit


A Series circuit is one in which all components are connected in tandem. The current

at every point of a series circuit stays the same. In series circuits the current remains the same

but the voltage drops may vary.

Parallel Circuit

Parallel circuits are those in which the components are so arranged that the current

divides between them. In parallel circuits the voltage remains the same but the current may

vary. The circuits in your home are wired in parallel.

MEASUREMENT OF DC


Digital voltmeters (DVM)

Digital voltmeters usually employ an electronic circuit that acts as an integrator,

linearly ramping output voltage when input voltage is constant (this can be easily realized

with an opamp). The dual-slope integrator method applies a known reference voltage to the

integrator for a fixed time to ramp the integrator's output voltage up, then the unknown

voltage is applied to ramp it back down, and the time to ramp output voltage down to zero is

recorded (realized in an ADC implementation). The unknown voltage being measured is the

product of the voltage reference and the ramp-up time divided by the ramp-down time. The

voltage reference must remain constant during the ramp-up time, which may be difficult due

to supply voltage and temperature variations. Part of the problem of making an accurate

voltmeter is that of calibration to check its accuracy. In laboratories, the Weston Cell is used

as a standard voltage for precision work. Precision voltage references are available based on

electronic circuits. Digital voltmeters, like vacuum tube voltmeters, generally exhibit a

constant input resistance of 10 megohms regardless of set measurement range.
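The dual-slope arithmetic described above reduces to a single ratio. A small sketch, following the convention used in this paragraph (reference ramps up for a fixed time, unknown ramps back down), with hypothetical values:

    def dual_slope_voltage(v_ref, t_up, t_down):
        """Unknown voltage = reference voltage * ramp-up time / ramp-down time."""
        return v_ref * t_up / t_down

    # Hypothetical example: 1.000 V reference, 100 ms fixed ramp-up, 41.3 ms measured ramp-down
    print(dual_slope_voltage(1.000, 0.100, 0.0413))   # ~2.42 V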

ALTERNATING CURRENT CIRCUITS

The amplitude or peak value of the sinusoidal variation we shall represent by Vm and

Im, and we shall use V = Vm/√2 and I = Im/√2 without subscripts to refer to the RMS values.

For an explanation of RMS values, see Power and RMS values.

So for instance, we shall write:

v = v(t) = Vm sin (ωt + φ)

i = i(t) = Im sin (ωt).

where ω is the angular frequency. ω = 2πf, where f is the ordinary or cyclic frequency. f is the

number of complete oscillations per second. φ is the phase difference between the voltage

and current. We shall meet this and the geometrical significance of ω later.

Resistors and Ohm's law in AC circuits

The voltage v across a resistor is proportional to the current i travelling through it.

Further, this is true at all times: v = Ri. So, if the current in a resistor is

i = Im . sin (ωt) ,           we write:

v = R.i = R.Im sin (ωt)

v = Vm. sin (ωt)            where

Vm = R.Im


So for a resistor, the peak value of voltage is R times the peak value of current. Further, they

are in phase: when the current is a maximum, the voltage is also a maximum.

(Mathematically, φ = 0.) The first animation shows the voltage and current in a resistor as a

function of time.

Impedance and reactance

Circuits in which current is proportional to voltage are called linear circuits. (As soon

as one inserts diodes and transistors, circuits cease to be linear, but that's another story.) The

ratio of voltage to current in a resistor is its resistance. Resistance does not depend on

frequency, and in resistors the two are in phase, as we have seen in the animation. However,

circuits with only resistors are not very interesting.

In general, the ratio of voltage to current does depend on frequency and in general there is a

phase difference. So impedance is the general name we give to the ratio of voltage to current.

It has the symbol Z. Resistance is a special case of impedance. Another special case is that in

which the voltage and current are out of phase by 90°: this is an important case because when

this happens, no power is lost in the circuit. In this case where the voltage and current are out

of phase by 90°, the ratio of voltage to current is called the reactance, and it has the symbol

X.

Capacitors and charging

The voltage on a capacitor depends on the amount of charge you store on its plates.

The current flowing onto the positive capacitor plate (equal to that flowing off the negative

plate) is by definition the rate at which charge is being stored. So the charge Q on the

capacitor equals the integral of the current with respect to time. From the definition of the

capacitance,

vC = q/C, so

we have a sinusoidal current i = Im sin (ωt), so integration gives

q = −(Im/ω) cos (ωt), and therefore vC = −(Im/ωC) cos (ωt) = (Im/ωC) sin (ωt − 90°).

(The constant of integration has been set to zero so that the average charge on the capacitor is

0).

Now we define the capacitive reactance XC as the ratio of the magnitude of the voltage to

magnitude of the current in a capacitor. From the equation above, we see that XC = 1/ωC.

Now we can rewrite the equation above to make it look like Ohm's law. The voltage is

proportional to the current, and the peak voltage and current are related by

Vm = XC.Im.

RC Series combinations

When we connect components together, Kirchoff's laws apply at any instant. So the

voltage v(t) across a resistor and capacitor in series is just

vseries(t) = vR(t) + vC(t)

However the addition is complicated because the two are not in phase. The next animation

makes this clear: they add to give a new sinusoidal voltage, but the amplitude is less than

VmR(t) + VmC(t). Similarly, the RMS voltages (amplitude divided by √2) do not add up. This may

seem confusing, so it's worth repeating:

vseries = vR + vC        but

Vseries < VR + VC.

This should be clear on the animation and the still graphic below: check that the voltages v(t)

do add up, and then look at the magnitudes. The amplitudes and the RMS voltages V do not

add up in a simple arithmetical way.

Here's where phasor diagrams are going to save us a lot of work. Play the animation again

(click play), and look at the projections on the vertical axis. Because we have sinusoidal

variation in time, the vertical component (magnitude times the sine of the angle it makes with

the x axis) gives us v(t). But the y components of different vectors, and therefore phasors, add

up simply: if

rtotal = r1 + r2,        then

ry total = ry1 + ry2.

So v(t), the sum of the y projections of the component phasors, is just the y projection of the

sum of the component phasors. So we can represent the three sinusoidal voltages by their

phasors. (While you're looking at it, check the phases. You'll see that the series voltage is

behind the current in phase, but the relative phase is somewhere between 0 and 90°, the exact

value depending on the relative sizes of VR and VC.)

All of the variables (i, vR, vC, vseries) have the same frequency f and the same angular

frequency ω, so their phasors rotate together, with the same relative phases. In this series


circuit, the current is common. (In a parallel circuit, the voltage is common, so I would make

the voltage the horizontal axis.)

The phasor diagram below shows us a simple way to calculate the series voltage. The

components are in series, so the current is the same in both. The voltage phasors (brown for

resistor, blue for capacitor in the convention we've been using) add according to vector or

phasor addition, to give the series voltage (the red arrow).

From Pythagoras' theorem:

VmRC² = VmR² + VmC²

If we divide this equation by two, remembering that the RMS value V = Vm/√2, we also get

VRC = √(VR² + VC²) = I√(R² + (1/ωC)²)

Now this looks like Ohm's law again: V is proportional to I. Their ratio is the series

impedance, Zseries, and so for this series circuit,

Zseries = ZRC = √(R² + (1/ωC)²)

Note the frequency dependence of the series impedance ZRC: at low frequencies, the

impedance is very large; because the capacitive reactance 1/ωC is large (the capacitor is open

circuit for DC). At high frequencies, the capacitive reactance goes to zero (the capacitor

doesn't have time to charge up) so the series impedance goes to R. At the angular frequency ω

= ωo = 1/RC, the capacitive reactance 1/ωC equals the resistance R. We shall show this

characteristic frequency on all graphs on this page.

Remember how, for two resistors in series, you could just add the resistances: Rseries = R1 + R2

to get the resistance of the series combination. That simple result comes about because the

two voltages are both in phase with the current, so their phasors are parallel. Because the

phasors for reactances are 90° out of phase with the current, the series impedance of a resistor

R and a reactance X are given by Pythagoras' law:

Zseries² = R² + X²

Ohm's law in AC

We can rearrange the equations above to obtain the current flowing in this circuit.

Alternatively we can simply use the Ohm's law analogy and say that I = Vsource/ZRC. Either

way we get

I = Vsource/ZRC = Vsource/√(R² + (1/ωC)²)

where the current goes to zero at DC (capacitor is open circuit) and to V/R at high

frequencies (no time to charge the capacitor).


From simple trigonometry, the angle by which the current leads the voltage is

    tan⁻¹ (VC/VR) = tan⁻¹ (IXC/IR)

  = tan⁻¹ (1/ωRC) = tan⁻¹ (1/2πfRC).

However, we shall refer to the angle φ by which the voltage leads the current. The voltage is

behind the current because the capacitor takes time to charge up, so φ is negative, ie

φ = −tan⁻¹ (1/ωRC) = −tan⁻¹ (1/2πfRC).

At low frequencies, the impedance of the series RC circuit is dominated by the

capacitor, so the voltage is 90° behind the current. At high frequencies, the impedance

approaches R and the phase difference approaches zero. The frequency dependence of Z and

φ are important in the applications of RC circuits. The voltage is mainly across the capacitor

at low frequencies, and mainly across the resistor at high frequencies. Of course the two

voltages must add up to give the voltage of the source, but they add up as vectors.

VRC² = VR² + VC².

At the frequency ω = ωo = 1/RC, the phase magnitude is 45° and the voltage fractions are VR/VRC =

VC/VRC = 1/√2 ≈ 0.71.

So, by choosing to look at the voltage across the resistor, you select mainly the high

frequencies; by looking across the capacitor, you select the low frequencies. This brings us to one of the very

important applications of RC circuits, and one which merits its own page: filters, integrators

and differentiators where we use sound files as examples of RC filtering.
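The frequency dependence of ZRC and φ described above can be tabulated directly. A short sketch with arbitrary component values (R = 1 kΩ, C = 0.1 µF, so fo ≈ 1.6 kHz):

    import numpy as np

    R = 1.0e3     # ohms (arbitrary)
    C = 1.0e-7    # farads (arbitrary); fo = 1/(2*pi*R*C) ~ 1.6 kHz

    def rc_series(f):
        """Impedance magnitude (ohm) and phase (deg) of a series RC circuit at frequency f (Hz)."""
        w = 2 * np.pi * f
        Xc = 1.0 / (w * C)                    # capacitive reactance
        Z = np.sqrt(R**2 + Xc**2)             # Pythagorean addition of R and Xc
        phi = -np.degrees(np.arctan(Xc / R))  # voltage lags current, so phi is negative
        return Z, phi

    for f in (10.0, 1.6e3, 1.0e6):
        Z, phi = rc_series(f)
        print(f"f = {f:9.1f} Hz: |Z| = {Z:10.1f} ohm, phi = {phi:6.1f} deg")

At low frequency the impedance is huge and the phase approaches −90° (capacitor dominates); at high frequency it falls to R and the phase goes to zero, as described above.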


RL Series combinations

In an RL series circuit, the voltage across the inductor is ahead of the current by 90°,

and the inductive reactance, as we saw before, is XL = ωL. The resulting v(t) plots and phasor

diagram look like this.

RLC Series combinations

Now let's put a resistor, capacitor and inductor in series. At any given time, the

voltage across the three components in series, vseries(t), is the sum of these:

vseries(t) = vR(t) + vL(t) + vC(t),

The current i(t) we shall keep sinusoidal, as before. The voltage across the resistor, vR(t), is in

phase with the current. That across the inductor, vL(t), is 90° ahead and that across the

capacitor, vC(t), is 90° behind.

Once again, the time-dependent voltages v(t) add up at any time, but the RMS voltages V do

not simply add up. Once again they can be added by phasors representing the three sinusoidal

voltages. Again, let's 'freeze' it in time for the purposes of the addition, which we do in the

graphic below. Once more, be careful to distinguish v and V.


Look at the phasor diagram: The voltage across the ideal inductor is antiparallel to that of the

capacitor, so the total reactive voltage (the voltage which is 90° ahead of the current) is VL -

VC, so Pythagoras now gives us:

Vseries² = VR² + (VL − VC)²

Now VR = IR, VL = IXL = IωL and VC = IXC = I/ωC. Substituting and taking out the common

factor I gives

Vseries = I√(R² + (ωL − 1/ωC)²) = I Zseries,

where Zseries is the series impedance: the ratio of the voltage to current in an RLC series circuit.

Note that, once again, reactances and resistances add according to Pythagoras' law:

Zseries² = R² + Xtotal² = R² + (XL − XC)².

Remember that the inductive and capacitive phasors are 180° out of phase, so their reactances

tend to cancel.

Now let's look at the relative phase. The angle by which the voltage leads the current is

φ = tan-1 ((VL - VC)/VR).

Substituting VR = IR, VL = IXL = IωL and VC = IXC = I/ωC gives φ = tan⁻¹((ωL − 1/ωC)/R).

The dependence of Zseries and φ on the angular frequency ω is shown in the next figure. The

angular frequency ω is given in terms of a particular value ωo, the resonant frequency

(ωo² = 1/LC), which we meet below.


The next graph shows us the special case where the frequency is such that VL = VC.

Because vL(t) and vC(t) are 180° out of phase, this means that vL(t) = −vC(t), so the two reactive

voltages cancel out, and the series voltage is just equal to that across the resistor. This case is

called series resonance.
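A corresponding sketch for the series RLC case, with arbitrary values of R, L and C, showing that the impedance falls to R and the phase to zero at the resonant frequency ωo = 1/√(LC):

    import numpy as np

    R, L, C = 10.0, 1.0e-3, 1.0e-7   # arbitrary component values

    def rlc_series(f):
        """Impedance magnitude (ohm) and phase (deg) of a series RLC circuit at frequency f (Hz)."""
        w = 2 * np.pi * f
        X = w * L - 1.0 / (w * C)            # net reactance X_L - X_C
        Z = np.sqrt(R**2 + X**2)
        phi = np.degrees(np.arctan2(X, R))   # angle by which the voltage leads the current
        return Z, phi

    f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))  # resonant frequency, where X_L = X_C
    print(f"resonance at ~{f0:.0f} Hz")
    for f in (0.2 * f0, f0, 5.0 * f0):
        Z, phi = rlc_series(f)
        print(f"f = {f:8.0f} Hz: |Z| = {Z:8.1f} ohm, phi = {phi:6.1f} deg")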

SIGNAL-TO-NOISE RATIO

The signal is what you are measuring that is the result of the presence of your analyte.

Noise is extraneous information that can interfere with or alter the signal. It cannot be

completely eliminated, but it can hopefully be reduced. True noise is considered random.

Signal-to-noise ratio (often abbreviated SNR or S/N) is an electrical engineering concept,

also used in other fields (such as scientific measurements, biological cell signaling), defined

as the ratio of a signal power to the noise power corrupting the signal.

In less technical terms, signal-to-noise ratio compares the level of a desired signal (such as

music) to the level of background noise. The higher the ratio, the less obtrusive the

background noise is.


In analog and digital communications, signal-to-noise ratio, often written S/N or SNR, is a

measure of signal strength relative to background noise. The ratio is usually measured in

decibels (dB). If the incoming signal strength in microvolts is Vs, and the noise level, also in

microvolts, is Vn, then the signal-to-noise ratio, S/N, in decibels is given by the formula

S/N = 20 log10(Vs/Vn)

If Vs = Vn, then S/N = 0. In this situation, the signal borders on unreadable, because the noise

level severely competes with it. In digital communications, this will probably cause a

reduction in data speed because of frequent errors that require the source (transmitting)

computer or terminal to resend some packets of data.

Ideally, Vs is greater than Vn, so S/N is positive. As an example, suppose that Vs = 10.0

microvolts and Vn = 1.00 microvolt. Then

S/N = 20 log10(10.0) = 20.0 dB

which results in the signal being clearly readable. If the signal is much weaker but still above

the noise -- say 1.30 microvolts -- then

S/N = 20 log10(1.30) = 2.28 dB

which is a marginal situation. There might be some reduction in data speed under these

conditions.

If Vs is less than Vn, then S/N is negative. In this type of situation, reliable communication is

generally not possible unless steps are taken to increase the signal level and/or decrease the

noise level at the destination (receiving) computer or terminal.
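The decibel arithmetic above is easy to check numerically; a small sketch reproducing the worked examples from the text plus one negative case:

    import math

    def snr_db(v_signal, v_noise):
        """Signal-to-noise ratio in decibels from signal and noise voltages (same units)."""
        return 20 * math.log10(v_signal / v_noise)

    print(snr_db(10.0, 1.00))   # 20.0 dB - clearly readable
    print(snr_db(1.30, 1.00))   # ~2.28 dB - marginal
    print(snr_db(0.50, 1.00))   # ~-6.0 dB - signal below the noise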

Communications engineers always strive to maximize the S/N ratio. Traditionally, this has

been done by using the narrowest possible receiving-system bandwidth consistent with the

data speed desired. However, there are other methods. In some cases, spread spectrum

techniques can improve system performance. The S/N ratio can be increased by providing the

source with a higher level of signal output power if necessary. In some high-level systems

such as radio telescopes, internal noise is minimized by lowering the temperature of the

receiving circuitry to near absolute zero (-273 degrees Celsius or -459 degrees Fahrenheit). In

wireless systems, it is always important to optimize the performance of the transmitting and

receiving antennas.

Types of Noise

Chemical Noise

Chemical reactions

Reaction/technique/instrument specific


Instrumental Noise

Germane to all types of instruments

Can often be controlled physically (e.g. temp) or electronically (software

averaging)

Instrumental Noise

Thermal (Johnson) Noise:

Thermal agitation of electrons affects their “smooth” flow.

Due to different velocities and movement of electrons in electrical

components.

Depends upon both temperature and the range of frequencies (frequency bandwidths)

being utilized.

Can be reduced by reducing temperature of electrical components.

Eliminated at “absolute” zero.

Considered “white noise” because it is independent of frequency (but

dependent on frequency bandwidth or the range of frequencies being measured).

Shot Noise:

Occurs when electrons or charged particles cross junctions (different

materials, vacuums, etc.)

Considered “white noise” because it is independent of frequency.

It is the same at any frequency but also dependent on frequency bandwidth

Due to the statistical variation of the flow of electrons (current) across some

junction

Some of the electrons jump across the junction right away

Some of the electrons take their time jumping across the junction
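The notes give only the qualitative dependencies of these two white-noise sources. As a hedged illustration, the standard textbook expressions for the RMS Johnson-noise voltage, √(4kTRΔf), and the RMS shot-noise current, √(2qIΔf), which are not written out in these notes, can be evaluated for hypothetical component values:

    import math

    K_B = 1.380649e-23      # Boltzmann constant, J/K
    Q_E = 1.602176634e-19   # elementary charge, C

    def johnson_noise_vrms(R, T, bandwidth):
        """RMS thermal (Johnson) noise voltage of a resistance R (ohm) at temperature T (K)."""
        return math.sqrt(4 * K_B * T * R * bandwidth)

    def shot_noise_irms(i_dc, bandwidth):
        """RMS shot-noise current for a DC current i_dc (A) crossing a junction."""
        return math.sqrt(2 * Q_E * i_dc * bandwidth)

    # Hypothetical numbers: 1 Mohm resistor at 298 K, 1 kHz bandwidth; 1 nA photocurrent
    print(f"Johnson noise: {johnson_noise_vrms(1e6, 298.0, 1e3) * 1e6:.2f} uV rms")
    print(f"Shot noise:    {shot_noise_irms(1e-9, 1e3) * 1e15:.0f} fA rms")

Both scale with the square root of the measurement bandwidth, which is why narrowing the bandwidth (as discussed under signal-to-noise enhancement below) reduces white noise.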

Flicker Noise

Frequency dependent

Significant at frequencies less than 100 Hz

Magnitude is inversely proportional to frequency

Results in long-term drift in electronic components

Can be controlled by using special wire resistors instead of the less expensive

carbon type.


Environmental Noise

Unlimited possible sources

Can often be eliminated by eliminating the source

Other noise sources cannot be eliminated.

Methods of eliminating it…

Moving the instrument somewhere else

Isolating/conditioning the instrument's power source

Controlling temperature in the room

Control expansion/contraction of components in instrument

Eliminating interferences

Stray light from open windows, panels on instrument

Turning off radios, TVs, and other instruments

SIGNAL-NOISE ENHANCEMENT

HARDWARE METHODS

Lock-in amplifier

A lock-in amplifier (also known as a phase-sensitive detector) is a type of amplifier

that can extract a signal with a known carrier wave from an extremely noisy environment (the S/N

ratio can be as low as −60 dB or even less). It is essentially a homodyne detector with an extremely low

pass filter (making it very narrow band). Lock-in amplifiers use mixing, through a frequency

mixer, to convert the signal's phase and amplitude to a DC—actually a time-varying low-

frequency—voltage signal.

Basic principles

Operation of a lock-in amplifier relies on the orthogonality of sinusoidal functions.

Specifically, when a sinusoidal function of frequency ν is multiplied by another sinusoidal

function of frequency μ not equal to ν and integrated over a time much longer than the period

of the two functions, the result is zero. In the case when μ is equal to ν, and the two functions

are in phase, the average value is equal to half of the product of the amplitudes.

In essence, a lock-in amplifier takes the input signal, multiplies it by the reference

signal (either provided from the internal oscillator or an external source), and integrates it

over a specified time, usually on the order of milliseconds to a few seconds. The resulting

signal is an essentially DC signal, where the contribution from any signal that is not at the

same frequency as the reference signal is attenuated essentially to zero, as well as the out-of-

phase component of the signal that has the same frequency as the reference signal (because

sine functions are orthogonal to the cosine functions of the same frequency), and this is also

why a lock-in is a phase sensitive detector.

More basic principles

Lock-in amplifiers are used to measure the amplitude and phase of signals buried in

noise. They achieve this by acting as a narrow bandpass filter which removes much of the

unwanted noise while allowing through the signal which is to be measured.

The frequency of the signal to be measured and hence the passband region of the filter is set

by a reference signal, which has to be supplied to the lock-in amplifier along with the

unknown signal. The reference signal must be at the same frequency as the modulation of the

signal to be measured.

A basic lock-in amplifier can be split into 4 stages: an input gain stage, the reference

circuit, a demodulator and a low pass filter.

Input Gain Stage: The variable gain input stage pre-processes the signal by amplifying it to

a level suitable for the demodulator. Nothing complicated here, but high performance

amplifiers are required.

Reference Circuit: The reference circuit allows the reference signal to be phase shifted.


Demodulator: The demodulator is a multiplier. It takes the input signal and the reference and

multiplies them together. When you multiply two waveforms together you get the sum and

difference frequencies as the result. As the input signal to be measured and the reference

signal are of the same frequency, the difference frequency is zero and you get a DC output

which is proportional to the amplitude of the input signal and the cosine of the phase

difference between the signals. By adjusting the phase of the reference signal using the

reference circuit, the phase difference between the input signal and the reference can be

brought to zero and hence the DC output level from the multiplier is proportional to the input

signal. The noise signals will still be present at the output of the demodulator and may have

amplitudes 1000 times as large as the DC offset.

Low Pass Filter:

As the various noise components on the input signal are at different frequencies to the

reference signal, the sum and difference frequencies will be non zero and will not contribute

to the DC level of the output signal. This DC level (which is proportional to the input signal)

can now be recovered by passing the output from the demodulator through a low pass filter.

The above gives an idea of how a basic lock-in amplifier works. Actual lock-in amplifiers are

more complicated, as there are instrument offsets that need to be removed, but the basic

principle of operation is the same.
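A numerical sketch of this demodulate-and-average idea (not any particular instrument's implementation): a weak sine at a known reference frequency is buried in much larger random noise, multiplied by in-phase and quadrature references, and averaged; the crude "low-pass filter" here is simply the mean over the record.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical signal: amplitude 0.05 at f_ref = 137 Hz with 30 deg phase, noise sigma = 1.0
    fs = 100_000.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    f_ref, amp, phase = 137.0, 0.05, np.deg2rad(30.0)
    signal = amp * np.sin(2 * np.pi * f_ref * t + phase) + rng.normal(0.0, 1.0, t.size)

    # Demodulate: multiply by in-phase and quadrature references, then average (low-pass)
    X = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))
    Y = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))

    print(f"recovered amplitude ~ {np.hypot(X, Y):.3f}")                     # close to 0.05
    print(f"recovered phase     ~ {np.degrees(np.arctan2(Y, X)):.1f} deg")   # close to 30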

Application to signal measurements in a noisy environment

The essential idea in signal recovery is that noise tends to be spread over a wider

spectrum, often much wider than the signal. In the simplest case of white noise, even if the

root mean square of noise is 10⁶ times as large as the signal to be recovered, if the bandwidth

of the measurement instrument can be reduced by a factor much greater than 10⁶ around the

signal frequency, then the equipment can be relatively insensitive to the noise. In a typical

100 MHz bandwidth (e.g. an oscilloscope), a bandpass filter with width much narrower than

100 Hz would accomplish this.

In summary, even when noise and signal is indistinguishable in time domain, if signal

has a definite frequency band and there is no large noise peak within that band, noise and

signal can be separated sufficiently in the frequency domain.If the signal is either slowly

varying or otherwise constant (essentially a DC signal), then 1/f noise typically overwhelms

the signal. It may then be necessary to use external means to modulate the signal. For

example, in the case of detection of small light signal against a bright background, the signal

can be modulated either by a chopper wheel, acousto-optical modulator, photoelastic


modulator at a large enough frequency so that 1/f noise drops off significantly, and the lock-

in amplifier is referenced to the operating frequency of the modulator. In the case of an

atomic force microscope, in order to achieve nanometer and piconewton resolution, the

cantilever position is modulated at a high frequency, to which lock-in amplifier is again

referenced.

When the lock-in technique is applied, care must be taken in calibration of signal,

because lock-in amplifiers generally detect only the root-mean-square signal at the operating

frequency. For a sinusoidal modulation, this would introduce a factor of √2 between the

lock-in amplifier output and the peak amplitude of the signal, and a different factor for a

modulation of a different shape. In fact, in the case of extremely nonlinear systems, it may be

advantageous to use a higher harmonic of reference frequency because of frequency-doubling

that takes place in a nonlinear medium.

Chopper amplifiers

One classic use for a chopper circuit and where the term is still in use is in chopper

amplifiers. These are DC amplifiers. Some types of signal that need amplifying can be so

small that an incredibly high gain is required, but very high gain DC amplifiers are much

harder to build with low offset and 1/f noise, and reasonable stability and bandwidth. It's

much easier to build an AC amplifier instead. A chopper circuit is used to break up the input

signal so that it can be processed as if it were an AC signal, then integrated back to a DC

signal at the output. In this way, extremely small DC signals can be amplified. This approach

is often used in electronic instrumentation where stability and accuracy are essential; for

example, it is possible using these techniques to construct pico-voltmeters and Hall sensors.

SOFTWARE METHODS

Signal Averaging

(one way of controlling noise)

Ensemble Averaging

Collect Multiple Signals Over The Same Time Or Wavelength (For Example)

Domain

Easily Done With Computers


Calculate The Mean Signal At Each Point In The Domain

Re-Plot The Averaged Signal

Since Noise Is Random (Some +, Some −), This Helps Reduce The Overall Noise By

Cancellation.
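A short sketch of ensemble averaging on synthetic data (a made-up peak plus Gaussian noise), showing the roughly √n improvement in noise:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical "true" signal over some domain (e.g. 500 wavelength points)
    x = np.linspace(0.0, 1.0, 500)
    true_signal = np.exp(-((x - 0.5) / 0.05) ** 2)

    # Collect n repeated scans, each with random noise, and average point by point
    n_scans = 100
    scans = true_signal + rng.normal(0.0, 0.5, size=(n_scans, x.size))
    ensemble_average = scans.mean(axis=0)

    # Random noise partially cancels: the noise drops roughly as 1/sqrt(n_scans)
    print("noise of a single scan :", np.std(scans[0] - true_signal).round(3))
    print("noise after averaging  :", np.std(ensemble_average - true_signal).round(3))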

Boxcar Averaging

Take an average of 2 or more signals in some domain

Plot these points as the average signal in the same domain

Can be done with just one set of data

You lose some detail in the overall signal


Polynomial Smoothing

Like Boxcar Averaging

Multipoint digital data averaging

Results in loss of some data at the beginning and the end of the data set.
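A sketch of both smoothing approaches on the same synthetic noisy peak: boxcar averaging is an unweighted moving mean, while polynomial (Savitzky-Golay) smoothing fits a low-order polynomial in a sliding window (here via SciPy, assuming it is available):

    import numpy as np
    from scipy.signal import savgol_filter

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 1.0, 201)
    clean = np.exp(-((x - 0.5) / 0.05) ** 2)
    noisy = clean + rng.normal(0.0, 0.1, x.size)

    # Boxcar averaging: replace each point by the mean of a small window around it
    width = 5
    boxcar = np.convolve(noisy, np.ones(width) / width, mode="same")

    # Polynomial smoothing: least-squares polynomial fit in a moving window
    poly = savgol_filter(noisy, window_length=11, polyorder=3)

    # Both reduce noise at the cost of some detail (and of points near the ends)
    for name, y in (("raw", noisy), ("boxcar", boxcar), ("polynomial", poly)):
        print(name, np.std(y - clean).round(3))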


UNIT II

OPTICAL METHODS


GENERAL DESIGNS OF OPTICAL INSTRUMENTS

Block diagram of a general optical instrument: POWER SOURCE → SAMPLE CELL → WAVELENGTH DISPERSER → PHOTODETECTOR → READOUT. The information starts in the chemical/physical domain of the sample, is carried as electromagnetic radiation (optical domain), is converted to an electrical current (electrical domain), and finally appears as a number (digital domain).

Power Source: spectrochemical encoding system

Sample: must be in form suitable for analysis, may involve a separation or speciation

Sample cell: cuvette for UV-VIS, flame for atomic spectroscopy

Wavelength Disperser: an information sorting system, spreads light out spatially according to

its wavelength

Photodetector: radiation transducer changing optical info into electrical info

Readout: digital (ADC), meter, strip chart recorder

SOURCES OF RADIATION

Continuum Sources

Ar Lamp – Vacuum UV

Xe Lamp – Vacuum UV, UV-VIS

H2 or D2 Lamp – UV

Tungsten Lamp – UV-Near IR

Nernst Glower – UV-VIS-Near IR-IR

Nichrome Wire – Near IR-Far IR

Globar – Near IR-Far IR

Hollow Cathode Lamp – UV-VIS

Lasers – UV-VIS-Near IR

Radiation Sources

Sources may be continuous or pulsed in time

Continuum sources

- Continuum sources are preferred for spectroscopy because of their relatively

flat radiance versus wavelength curves

- examples: Nernst glower, W filament, D2 lamp, arc, arc plus reflector

- produce broad, featureless range of wavelengths

- black and gray bodies, high pressure arc lamps

Line sources

- produce relatively narrow bands at specific wavelengths generating structured

emission spectrum

- lasers, low pressure arc lamps, hollow cathode lamps

Line plus continuum sources

- contain lines superimposed on continuum background

- medium pressure arc lamps, D2 lamp

Black body sources

Nernst glowers (ZrO2, Y2O3), Globars (SiC)

1000-1500 K in air - λmax lies in the IR

relatively fragile

low spectral radiance (B ~ 10⁻⁴ W·cm⁻²·nm⁻¹·sr⁻¹)

Arc sources

Hg, Xe, D2 lamps

AC or DC discharge through gas or metal vapor


- 20-70 V, 10 mA-20 A

Line sources

Generally not much use for molecular spectroscopy

useful for luminescence excitation, photochemistry experiments

where high radiant intensity at one λ is required

Arc lamps

Low pressure (<10 Torr) with many different fill vapors

Hg, Cd, Zn, Ga, In, Th and alkali metals

Excellent wavelength calibration sources

Hollow cathode lamps (HCL)

primary line sources in atomic spectroscopy

low gas pressure (<10 mtorr)

o linewidths ~ 0.01 Å

o high currents (> a few mA) reduce lifetime and broaden lines

single or multi-element cathodes

moderate radiance B ~ 10⁻² W·cm⁻²·nm⁻¹·sr⁻¹

Electrodeless discharge lamps (EDL)

contain a microwave or RF-excited plasma

need ignition pulse to start plasma

electric field of RF or microwave drives ions and electrons in plasma

no electrodes

gas pressures and temperatures relatively low

slight pressure broadening

line widths are not as narrow as the HCL (<1Å)

moderate radiance B ~ 10⁻¹ W·cm⁻²·nm⁻¹·sr⁻¹

Lasers

intense (radiance B > 10⁴ W·cm⁻²·nm⁻¹·sr⁻¹)

nearly monochromatic (0.01-0.1 Å)

coherent (temporally and spatially)

directed (small divergence)

pulsed or continuous

stable

Continuous wave (cw) lasers

operate continuously in time and are continuously pumped

lifetime of the upper lasing state (j) must be longer than that of

the lower lasing state (i) to maintain a population inversion

low gain systems, typically just above threshold

require high reflectivity mirrors

Average powers of up to a few tens of watts are possible

Pulsed lasers

operate intermittently in time

single pulses

repetitive pulse trains

lifetime of level j is shorter than level i

population inversion cannot be sustained indefinitely

high gain systems

can still lase with poor quality mirrors or with no mirrors

Peak and Average Power for Pulsed Laser

Peak output power (energy per pulse divided by duration of pulse)

may be MW or GW (109 W)

Average output power (energy per pulse multiplied by pulse repetition rate)

May be much more modest (few W).

different because of short duty cycle of pulsed laser

Q-switched lasers

Pulsed and cw lasers operate close to threshold - as soon as threshold

is exceeded, lasing starts

- if delay lasing while pumping, larger population inversion created

Q-switched lasers work like pulsed lasers but contain an additional cavity

component to prevent lasing action (cavity spoiling).

Cavity spoiling:

slightly move one of mirrors

add a saturable absorber to the cavity (for example, a dye)

during pumping dye absorbs a considerable fraction of the photons traversing the

cavity

population inversion is continually created

when dye is saturated becomes transparent

allows an intense burst of laser light


Mode-locked lasers

force random cavity modes to be phase locked

In free-running multimode laser, individual cavity modes not synchronized to each

other

show a time varying range of phases and hence amplitudes

Forcing the laser to operate with the phases of the cavity modes

fixed (or locked) means that a pulse train is produced for each regular pulse

Each component of the pulse train may be picoseconds (10⁻¹² s) or shorter

Mode locking is achieved by rapid time-varying absorber in the cavity (up to 100

MHz)

- commonly an electro-optic modulator in addition to Q-switch

- when modulator absorbs, the beam is spoiled, no lasing occurs

- when modulator becomes transparent, cavity modes established

within a short period of time

Most electro-optic modulators are based on the Pockels effect (Pockels cell)

KDP crystal is birefringent when V applied

rotates polarization of light

apply sinusoidally varying voltage for mode-locking

Laser types

Solid state lasers

contain solid-state crystal as lasing media

single crystal rods with parallel mirrored ends

flashlamp or continuously pumped

ruby laser (Cr-doped alumina) (red 694 nm, 500 ns)

Nd:YAG laser (Nd-doped yttrium aluminum garnet) (IR 1064 nm,10 ns)

Semiconductor diode lasers

a type of solid state laser currently undergoing rapid development

no optical pumping

usually operated in cw mode

current through semiconductor pn junction forces recombination of electrons and

holes

a variety of λ's can be produced by changing the band-gap of the semiconductor (for example

in AlGaAs)


average powers up to a few tens of W (if cooled)

small and relatively cheap to manufacture

Gas lasers

include gas or gas mixture as lasing medium (He-Ne)

pumped by an electrical discharge - a fraction of the He atoms are excited (ionized)

electronically excited metastable states in the He (optically forbidden transitions, long-

lived) transfer energy in a nearly resonant

collisional process to Ne atoms

the 632.8 nm line is a 3s→2p transition in Ne

CO2 (contains N2, CO2 & He):

electrical discharge excites N2(v=1)

multiple rotational transitions can be involved

many discrete lines in the region of 10.6-9.6 µm

may be pulsed or cw and can produce high peak and average powers

Excimer

creation of excited state dimers (excimer) between a noble gas atom and halogen

excimer is only stable in the electronically excited state

dissociates rapidly in ground state

intense (up to >500 mJ·pulse⁻¹) on a 10 ns timescale

somewhat tunable - few nm - lifetime broadened

gas lifetime limited by reaction of the halogen with cavity materials

Dye lasers

based on fluorescent dyes (rhodamine, coumarin, fluorescein)

pumped by a flashlamp or another laser (often excimer or N2 for pulsed operation or

Ar+ for cw operation)

four-level systems - emission from the ground vibrational state of some electronically

excited state to some (high-lying)

vibrationally excited state of the ground electronic state

- fluorescence in liquid is solvent-broadened

- a wide range of wavelengths is produced

- narrower band of wavelengths selected by intracavity diffraction

grating

- only the λ's supported by the diffraction grating are amplified

- tunable over a 40-50 nm range


Wavelength selector

A. Filters are used to pass a band of wavelengths

Absorption filters

Interference filters

Monochromators - one color - pass a narrow band of wavelengths

B. Prisms

Dispersing prisms

Separation of wavelengths is due to the variation of the refractive index of

the prism glass with wavelength, so each wavelength is refracted through a

slightly different angle.

Dispersion is angular (nonlinear). Single order is obtained. The larger

the focal length, the better the dispersion.

Reflecting prisms

Designed to change direction of propagation of beam, orientation, or

both

Polarizing prisms

Made of birefringent materials

C. Gratings

Can be considered as a set of slits at which diffraction occurs and

destructive/constructive interference occurs that yields a diffraction pattern. Groove

patterns are now generated by machine. They are manufactured by ruling a piece of

glass or metal. Replica gratings are then produced by laying down a polymer film

over it to copy the groove pattern. Replicas are what are actually used in instruments

due to the great difficulty and cost of achieving high quality gratings. Holographic

gratings are also used but are not as efficient. Never touch a grating with your fingers.


D. Types of Mounts

1) Littrow: autocollimating

2) Czerny-Turner: two mirrors used to collimate and focus.

3) Fastie-Ebert: single mirror used to collimate and focus

4) Rowland Circle: used in polychromators

5) Echelle: uses prism to sort orders from a grating

E. Performance Characteristics

Resolving power

R = λ/Δλ = n N

where n = diffraction order and N = number of lines of the grating illuminated from the entrance

slit. The resolving power therefore depends on:

1) Physical size of the dispersing element

2) Order of hν being observed

To get better resolution, either:

1) Increase N

2) Increase n (the cost now is lessened intensity)

If R = 100: poor quality

If R = 10⁶: high quality

Number of orders detectable is proportional to N

Higher orders yield greater resolution but poorer intensity

The quality of the slits is also important.

Some light is also lost in reflection (n = 0, zero order)

Reciprocal Linear Dispersion

Rd or D⁻¹ = dλ/dy: the number of wavelength intervals (e.g., nm) contained in each

interval of distance (e.g., mm) along the focal plane, usually quoted in nm/mm.
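A small numerical sketch of the resolving-power relation above, using a hypothetical grating (1200 lines/mm, 50 mm of it illuminated) in first order:

    def resolving_power(order, lines_illuminated):
        """Grating resolving power R = lambda / d_lambda = n * N."""
        return order * lines_illuminated

    # Hypothetical grating: 1200 lines/mm, 50 mm illuminated, first order
    R = resolving_power(order=1, lines_illuminated=1200 * 50)
    print(R)           # 60000
    print(500.0 / R)   # smallest resolvable interval at 500 nm, ~0.008 nm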


F/Number

A measure of the ability of the monochromator to gather the light that emerges from the

entrance slit.

f = F / d

Where F = focal length of the collimating mirror or lens

and d = diameter of collimating mirror or lens

The light-gathering power of an optical device increases as the inverse square of the

f/number, therefore an f/2 lens gathers four times more light than an f/4 lens.

The f/numbers of many monochromators lie in the 1 to 10 range

F. Slits

Slits are used to limit the amount of light impinging on the dispersing element as well

as to limit the light reaching the detector.

There is a dichotomy between intensity and resolution.

                    Wide slits    Narrow slits

Throughput          High          Low

Resolution          Low           High

Quantitative work   Good          Poor

Qualitative work    Poor          Good

Atomic lines are not infinitely narrow due to types of broadening

1) Natural

2) Doppler

3) Stark

4) Collisional broadening

The use of entrance and exit slits convolutes this broadening as a triangular function -

the slit function.


Spectral bandpass, s, is the width at half-height of the wavelength distribution as

passed by the exit slit

s = Rd W

where Rd is the reciprocal linear dispersion and W is the slit width.

The slit-width-limited resolution Δλ is

Δλ = 2s = 2 Rd W
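These two slit relations reduce to simple products; a sketch with hypothetical values (Rd = 2 nm/mm, 0.1 mm slits):

    def spectral_bandpass(rd_nm_per_mm, slit_width_mm):
        """Spectral bandpass s = Rd * W, in nm."""
        return rd_nm_per_mm * slit_width_mm

    s = spectral_bandpass(2.0, 0.1)
    print(s)          # 0.2 nm bandpass
    print(2.0 * s)    # slit-width-limited resolution = 2 Rd W = 0.4 nm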

SAMPLE CONTAINERS

Required of all spectroscopic methods except emission spectroscopy

Must be made of material that is transparent to the spectral region of interest

Spectral Region     Material

UV                  Fused silica

VIS                 Plastic, glass

IR                  NaCl

V. RADIATION TRANSDUCERS

High sensitivity

High S/N

Constant response over range of wavelengths

Fast response

Zero output in absence of illumination

Electrical signal directly proportional to radiant power

Photomultiplier Tubes

Sensitivity: Significantly more sensitive than simple phototube

Process of Multiplication: Electrons emitted from the cathode surface are accelerated

towards a dynode (each successive dynode is 90 V more positive than the preceding

dynode).

Construction

- Photocathode: made of alkali metals with low work functions

- Focusing electrodes

- An electron multiplier (dynodes): amplification by a factor of 10⁶ to 10⁷ for each

photon


- An electron collector (anode)

- Window: borosilicate, quartz, sapphire, or MgF2

Spectral response

- Depends on photocathodic material

- Conversion efficiency varies with λ

- Lower cutoff determined by window composition

ARRAY DETECTORS

- An "electrical photographic plate"

- Detect differences in light intensity at different points on their photosensitive

surfaces

- Fabricated from silicon using semiconductor technology

- Originally conceived as television camera sensing elements

- Placed at focal plane of polychromator in place of the exit slit

- Sensitive for detection of light in 200-1000 nm range

- Major advantage is simultaneous detection of all wavelengths within range

H. Types

SIT : silicon intensifier target

PDA : photodiode array

CCD : charge-coupled device

CID : charge injection device

PHOTODIODE ARRAYS (PDA)

Usually 1-3 cm long; contains a few hundred photodiodes (256 - 2048) in a linear

array

Partitions spectrum into x number of wavelength increments

Each photodiode captures photons simultaneously

Measures total light energy over the time of exposure (whereas PMT measures

instantaneous light intensity)

Process

Each diode in the array is reverse-biased and thus can store charge like a capacitor

Before being exposed to light to be detected, diodes are fully charged via a transistor

switch


Light falling on the PDA will generate charge carriers in the silicon which combine

with stored charges of opposite polarity and neutralize them

The amount of charge lost is proportional to the intensity of light

Amount of current needed to recharge each diode is the measurement made which is

proportional to light intensity

Recharging signal is sent to sample-and-hold amplifier and then digitized

Array is however read sequentially over a common output line

Use minicomputer to handle data

Disadvantages

Must have fast data storage system

High dark noise

Must cool PDA to well below room temperature

Diode saturates within a few seconds integration time

Resolution not good, limited by # diodes/linear distance

Stray radiant energy (SRE) is a killer

Used as detectors in Raman, fluorescence, and absorption

CHARGE TRANSFER DEVICES

Two-dimensional arrays of silicon integrated circuits, postage-stamp-size

Typical pixel dimensions are 20 x 20 µm

Both CCDs and CIDs accumulate photogenerated charges in similar ways but differ in the

way accumulated charge is detected

A. Charge Injection Devices (CID)

A CID sensing element can be thought of as two electrodes side by side

One of the electrodes is biased so as to create a potential well near it

When an incident photon creates an electron-hole pair in the sensor region,

one member of the pair will be attracted to the well and held there

n-doped Si used as charge storage region

After exposure to light accumulated charge is moved from one electrode to

the other

Potential change caused by the change in charge stored on second electrode is

measured

Potential change is proportional to amount of stored charge and thus


proportional to integrated light flux

Charge sensing may be done non-destructively therefore can take repeated

readings of same accumulated charge to improve S/N

Charge-Coupled Devices (CCD)

Potential well formed by an electrode as in CID

p-type material, however, used to store charges as electrons after exposure to light

charge packets are transferred along the row to special low-capacitance readout diode

Passage of charge induces a voltage change proportional to amount of charge

Advantages over CID include

A. An increased voltage change and

B. Lower reading noise

A drawback is that the small pixels are not well-suited to ordinary dispersive spectroscopy

"Binning"

o Aggregates charges formed in several detector elements into one element prior

to readout

o Yields increased detector sensitivity at a cost of resolution but elements are

very small so loss of resolution can be minimized

Summation is done on the chip rather than in memory after the readout, thus only one

read operation required for all the pixels to be summed, thus lower readout noise per

pixel is achieved

Used in astronomy and low light situations: fluorometry, Raman, CZE, HPLC

Thermal Transducers

phototransducers are not applicable in the IR due to the low photon energy

Thermocouples

Bolometers

Pyroelectric Transducers


UNIT III

MOLECULAR SPECTROSCOPY

INTRODUCTION

Spectroscopy is the study of the interaction between radiation (electromagnetic radiation, or

light, as well as particle radiation) and matter. Spectrometry is the measurement of these

interactions and an instrument which performs such measurements is a spectrometer or

spectrograph. A plot of the interaction is referred to as a spectrum.

Historically, spectroscopy referred to a branch of science in which visible light was used for

the theoretical study of the structure of matter and for qualitative and quantitative analyses.

Recently, however, the definition has broadened as new techniques have been developed that

utilise not only visible light, but many other forms of radiation.

Spectroscopy is often used in physical and analytical chemistry for the identification of

substances through the spectrum emitted from or absorbed by them. Spectroscopy is also

heavily used in astronomy and remote sensing. Most large telescopes have spectrometers,

which are used either to measure the chemical composition and physical properties of

astronomical objects or to measure their velocities from the Doppler shift of their spectral

lines.


Classification of spectroscopic methods

Nature of radiation measured

The type of spectroscopy depends on the physical quantity measured. Normally, the quantity

that is measured is an amount or intensity of something.

Optical Spectroscopy (Electromagnetic Spectroscopy) involves interactions of matter

with electromagnetic radiation or light. Ultraviolet-visible spectroscopy is an

example.

Electron Spectroscopy involves interactions with electron beams. Auger spectroscopy

involves inducing the Auger effect with an electron beam.

Mass spectroscopy involves the interaction of charged species with magnetic and/or

electric fields, giving rise to a mass spectrum. The term "mass spectroscopy" is

deprecated in favor of mass spectrometry, for the technique is primarily a form of

measurement, though it does produce a spectrum for observation.

Measurement process

Most spectroscopic methods are differentiated as either atomic or molecular based on

whether or not they apply to atoms or molecules. Along with that distinction, they can be

classified on the nature of their interaction:

Absorption spectroscopy uses the range of the electromagnetic spectra in which a

substance absorbs. This includes atomic absorption spectroscopy and various

molecular techniques, such as infrared spectroscopy in that region and nuclear

magnetic resonance (NMR) spectroscopy in the radio region.

Emission spectroscopy uses the range of electromagnetic spectra in which a substance

radiates (emits). The substance first must absorb energy. This energy can be from a

variety of sources, which determines the name of the subsequent emission, like

luminescence. Molecular luminescence techniques include spectrofluorimetry.

Scattering spectroscopy measures the amount of light that a substance scatters at

certain wavelengths, incident angles, and polarization angles. The scattering process

is much faster than the absorption/emission process. One of the most useful

applications of light scattering spectroscopy is Raman spectroscopy.


Common types of spectroscopy

[Figure: spectrum of light from a fluorescent lamp showing prominent mercury peaks.]

Fluorescence spectroscopy

Fluorescence spectroscopy uses higher energy photons to excite a sample, which will then

emit lower energy photons. This technique has become popular for its biochemical and

medical applications, and can be used for confocal microscopy, fluorescence resonance

energy transfer, and fluorescence lifetime imaging.

X-ray spectroscopy and X-ray crystallography

When X-rays of sufficient frequency

(energy) interact with a substance, inner shell electrons in the atom are excited to outer empty

orbitals, or they may be removed completely, ionizing the atom. The inner shell "hole" will

then be filled by electrons from outer orbitals. The energy available in this de-excitation

process is emitted as radiation (fluorescence) or will remove other less-bound electrons from

the atom (Auger effect). The absorption or emission frequencies (energies) are characteristic

of the specific atom. In addition, for a specific atom small frequency (energy) variations

occur which are characteristic of the chemical bonding. With a suitable apparatus, these

characteristic X-ray frequencies or Auger electron energies can be measured. X-ray

absorption and emission spectroscopy is used in chemistry and material sciences to determine

elemental composition and chemical bonding.

X-ray crystallography is a scattering process; crystalline materials scatter X-rays at well-

defined angles. If the wavelength of the incident X-rays is known, this allows calculation of

the distances between planes of atoms within the crystal. The intensities of the scattered X-

rays give information about the atomic positions and allow the arrangement of the atoms

within the crystal structure to be calculated.

Flame Spectroscopy

Liquid solution samples are aspirated into a burner or nebulizer/burner combination,

desolvated, atomized, and sometimes excited to a higher energy electronic state. The use of a

flame during analysis requires fuel and oxidant, typically in the form of gases. Common fuel

gases used are acetylene (ethyne) or hydrogen. Common oxidant gases used are oxygen, air,

or nitrous oxide. These methods are often capable of analyzing metallic element analytes in


the part per million, billion, or possibly lower concentration ranges. Light detectors are

needed to detect light with the analysis information coming from the flame.

Atomic Emission Spectroscopy - This method uses flame excitation; atoms are excited

from the heat of the flame to emit light. This method commonly uses a total

consumption burner with a round burning outlet. A higher temperature flame than

atomic absorption spectroscopy (AA) is typically used to produce excitation of

analyte atoms. Since analyte atoms are excited by the heat of the flame, no special

elemental lamps to shine into the flame are needed. A high resolution polychromator

can be used to produce an emission intensity vs. wavelength spectrum over a range of

wavelengths showing multiple element excitation lines, meaning multiple elements

can be detected in one run. Alternatively, a monochromator can be set at one

wavelength to concentrate on analysis of a single element at a certain emission line.

Plasma emission spectroscopy is a more modern version of this method. See Flame

emission spectroscopy for more details.

Atomic absorption spectroscopy (often called AA) - This method commonly uses a pre-

burner nebulizer (or nebulizing chamber) to create a sample mist and a slot-shaped

burner which gives a longer pathlength flame. The temperature of the flame is low

enough that the flame itself does not excite sample atoms from their ground state. The

nebulizer and flame are used to desolvate and atomize the sample, but the excitation

of the analyte atoms is done by the use of lamps shining through the flame at various

wavelengths for each type of analyte. In AA, the amount of light absorbed after going

through the flame determines the amount of analyte in the sample. A graphite furnace

for heating the sample to desolvate and atomize is commonly used for greater

sensitivity. The graphite furnace method can also analyze some solid or slurry

samples. Because of its good sensitivity and selectivity, it is still a commonly used

method of analysis for certain trace elements in aqueous (and other liquid) samples.

Atomic Fluorescence Spectroscopy - This method commonly uses a burner with a round

burning outlet. The flame is used to solvate and atomize the sample, but a lamp shines

light at a specific wavelength into the flame to excite the analyte atoms in the flame.

The atoms of certain elements can then fluoresce emitting light in a different

direction. The intensity of this fluorescing light is used for quantifying the amount of

analyte element in the sample. A graphite furnace can also be used for atomic


fluorescence spectroscopy. This method is not as commonly used as atomic

absorption or plasma emission spectroscopy.

Plasma Emission Spectroscopy

In some ways similar to flame atomic emission

spectroscopy, it has largely replaced it.

Direct-current plasma (DCP)

A direct-current plasma (DCP) is created by an electrical discharge between two

electrodes. A plasma support gas is necessary, and Ar is common. Samples can be

deposited on one of the electrodes, or if conducting can make up one electrode.

Glow discharge-optical emission spectrometry (GD-OES)

Inductively coupled plasma-atomic emission spectrometry (ICP-AES)

Laser Induced Breakdown Spectroscopy (LIBS), also called Laser-induced

plasma spectrometry (LIPS)

Microwave-induced plasma (MIP)

Spark or arc (emission) spectroscopy - is used for the analysis of metallic elements in solid

samples. For non-conductive materials, a sample is ground with graphite powder to make it

conductive. In traditional arc spectroscopy methods, a sample of the solid was commonly

ground up and destroyed during analysis. An electric arc or spark is passed through the

sample, heating the sample to a high temperature to excite the atoms in it. The excited analyte

atoms glow emitting light at various wavelengths which could be detected by common

spectroscopic methods. Since the conditions producing the arc emission typically are not

controlled quantitatively, the analysis for the elements is qualitative. Nowadays, however, spark

sources with controlled discharges under an argon atmosphere make the method fully

quantitative, and it is widely used in the production-control laboratories of

foundries and steel mills.

Visible spectroscopy

Many atoms emit or absorb visible light. In order to obtain a fine line spectrum, the atoms

must be in a gas phase. This means that the substance has to be vaporised. The spectrum is


studied in absorption or emission. Visible absorption spectroscopy is often combined with

UV absorption spectroscopy in UV/Vis spectroscopy.

Ultraviolet spectroscopy

All atoms absorb in the UV region because these photons are energetic enough to excite outer

electrons. If the frequency is high enough, photoionisation takes place. UV spectroscopy is

also used in quantifying protein and DNA concentration as well as the ratio of protein to

DNA concentration in a solution. Several amino acids usually found in protein, such as

tryptophan, absorb light in the 280nm range and DNA absorbs light in the 260nm range. For

this reason, the ratio of 260/280nm absorbance is a good general indicator of the relative

purity of a solution in terms of these two macromolecules. Reasonable estimates of protein or

DNA concentration can also be made this way using Beer's law.
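A minimal Python sketch of such an estimate; the absorbance readings below, the roughly 1.8 ratio expected for mostly-DNA solutions, and the 50 ug/mL-per-A260 conversion factor for double-stranded DNA are common rules of thumb used here only as assumed example values:

# Rough purity check and concentration estimate from UV absorbance readings.
A260 = 0.45          # absorbance at 260 nm (1 cm path), assumed reading
A280 = 0.24          # absorbance at 280 nm, assumed reading

ratio = A260 / A280
dna_conc_ug_per_ml = A260 * 50.0   # Beer's law with an empirical factor for dsDNA

print(f"A260/A280 ratio: {ratio:.2f}")            # ~1.8 suggests mostly DNA; lower suggests protein
print(f"Estimated dsDNA: {dna_conc_ug_per_ml:.0f} ug/mL")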

Infrared spectroscopy

Infrared spectroscopy offers the possibility to measure different types of interatomic bond

vibrations at different frequencies. Especially in organic chemistry the analysis of IR

absorption spectra shows what types of bonds are present in the sample.

Raman Spectroscopy

Raman spectroscopy uses the inelastic scattering of light to analyse vibrational and rotational

modes of molecules. The resulting 'fingerprints' are an aid to analysis.

Nuclear magnetic resonance spectroscopy

Nuclear magnetic resonance spectroscopy analyzes the magnetic properties of certain atomic

nuclei to determine different electronic local environments of hydrogen, carbon, or other

atoms in an organic compound or other compound. This is used to help determine the

structure of the compound.


Photoemission spectroscopy

Mössbauer spectroscopy

Transmission or conversion-electron (CEMS) modes of Mössbauer spectroscopy probe the

properties of specific isotope nuclei in different atomic environments by analyzing the

resonant absorption of characteristic energy gamma-rays known as the Mössbauer effect.

Less frequently used / combined spectroscopy

Photoacoustic Spectroscopy measures the sound waves produced upon the absorption

of radiation.

Photothermal Spectroscopy measures heat evolved upon absorption of radiation.

Circular Dichroism spectroscopy

Raman Optical Activity Spectroscopy exploits Raman scattering and optical activity

effects to reveal detailed information on chiral centers in molecules.

Terahertz spectroscopy uses wavelengths longer than those of infrared spectroscopy and shorter

than those of microwave or millimeter-wave measurements.

Inelastic neutron scattering works like Raman spectroscopy, with neutrons instead of

photons.

Inelastic electron tunneling spectroscopy uses the changes in current due to inelastic

electron-vibration interaction at specific energies which can also measure optically

forbidden transitions.

Auger Spectroscopy is a method used to study surfaces of materials on a micro-scale.

It is often used in connection with electron microscopy.

Cavity ring down spectroscopy

Fourier transform is an efficient method for processing spectra data obtained using

interferometers. The use of Fourier transform in spectroscopy is called Fourier

transform spectroscopy. Nearly all infrared spectroscopy (FTIR) and Nuclear

Magnetic Resonance (NMR) spectroscopy are performed with Fourier transforms.

Spectroscopy of matter in situations where the properties are changing with time is

called Time-resolved spectroscopy.

Mechanical spectroscopy involves interactions with macroscopic vibrations, such as

phonons. An example is acoustic spectroscopy, involving sound waves.


Spectroscopy using an AFM-based analytical technique is called Force spectroscopy.


Dielectric spectroscopy

Thermal infrared spectroscopy measures thermal radiation emitted from materials and

surfaces and is used to determine the type of bonds present in a sample as well as their

lattice environment. The techniques are widely used by organic chemists,

mineralogists, and planetary scientists.

Background Subtraction

Background subtraction is a term typically used in spectroscopy when one explains the

process of acquiring a background radiation level (or ambient radiation level) and then makes

an algorithmic adjustment to the data to obtain qualitative information about any deviations

from the background, even when they are an order of magnitude less decipherable than the

background itself.

Background subtraction can affect a number of statistical calculations (continuum, Compton,

Bremsstrahlung) leading to improved overall system performance.
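A minimal sketch of the idea in Python (numpy assumed; the wavelength grid, background shape, and peak are invented numbers), showing the point-by-point subtraction of an acquired background from a sample spectrum:

import numpy as np

wavelengths = np.linspace(400, 700, 301)                   # nm
background = 0.05 + 0.0001 * (wavelengths - 400)           # slowly varying ambient level
sample_raw = background + 0.8 * np.exp(-((wavelengths - 550) ** 2) / (2 * 15 ** 2))

corrected = sample_raw - background                        # deviation from the background
peak_nm = wavelengths[np.argmax(corrected)]
print(f"Corrected peak at ~{peak_nm:.0f} nm")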

SPECTROPHOTOMETRY

In physics, spectrophotometry is the quantitative study of electromagnetic spectra. It is more

specific than the general term electromagnetic spectroscopy in that spectrophotometry deals

with visible light, near-ultraviolet, and near-infrared. Also, the term does not cover time-

resolved spectroscopic techniques.

Spectrophotometry involves the use of a spectrophotometer. A spectrophotometer is a

photometer (a device for measuring light intensity) that can measure intensity as a function of

the color, or more specifically, the wavelength of light. There are many kinds of

spectrophotometers. Among the most important distinctions used to classify them are the

wavelengths they work with, the measurement techniques they use, how they acquire a

spectrum, and the sources of intensity variation they are designed to measure. Other

important features of spectrophotometers include the spectral bandwidth and linear range.

Perhaps the most common application of spectrophotometers is the measurement of light

absorption, but they can be designed to measure diffuse or specular reflectance. Strictly, even

the emission half of a luminescence instrument is a kind of spectrophotometer.


Design

There are two major classes of spectrophotometers; single beam and double beam. A double

beam spectrophotometer measures the ratio of the light intensity on two different light paths,

and a single beam spectrophotometer measures the absolute light intensity. Although ratio

measurements are easier, and generally stabler, single beam instruments have advantages; for

instance, they can have a larger dynamic range, and they can be more compact.

Historically, spectrophotometers used a monochromator to analyze the spectrum, but there are

also spectrophotometers that use arrays of photosensors. Especially for infrared

spectrophotometers, there are instruments that use a Fourier transform technique to

acquire the spectral information more quickly, a technique called Fourier transform

infrared (FTIR) spectroscopy.

The spectrophotometer measures quantitatively the fraction of light that passes through a

given solution. In a spectrophotometer, a light from the lamp is guided through a

monochromator, which picks light of one particular wavelength out of the continuous

spectrum. This light passes through the sample that is being measured. After the sample, the

intensity of the remaining light is measured with a photodiode or other light sensor, and the

transmittance for this wavelength is then calculated.

In short, the sequence of events in a spectrophotometer is as follows:

1. The light source shines through the sample.

2. The sample absorbs light.

3. The detector detects how much light the sample has absorbed.

4. The detector then converts how much light the sample absorbed into a number.

5. The numbers are either plotted straight away, or are transmitted to a computer to be

further manipulated (e.g. curve smoothing, baseline correction)
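A short Python sketch of the final conversion step, assuming arbitrary detector readings for the blank (I0) and the sample (I):

import math

I0 = 1000.0   # intensity measured with the blank / no sample, arbitrary units
I = 250.0     # intensity measured after passing through the sample

T = I / I0                      # transmittance (fraction)
percent_T = 100.0 * T
A = -math.log10(T)              # absorbance

print(f"T = {percent_T:.1f} %  ->  A = {A:.3f}")   # 25 % T -> A of about 0.602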

UV and IR spectrophotometers

The most common spectrophotometers are used in the UV and visible regions of the

spectrum, and some of these instruments also operate into the near-infrared region as well.


Visible-region (400-700 nm) spectrophotometry is used extensively in colorimetry science. Ink

manufacturers, printing companies, textiles vendors, and many more, need the data provided

through colorimetry. They usually take readings every 20 nanometers along the visible

region, and produce a spectral reflectance curve. These curves can be used to test a new batch

of colorant to check if it makes a match to specifications.

Traditional visible-region spectrophotometers cannot detect if a colorant has fluorescence.

This can make it impossible to manage color issues if one or more of the printing inks is

fluorescent. Where a colorant contains fluorescence, a bi-spectral fluorescent

spectrophotometer is used. There are two major setups for visible-spectrum

spectrophotometers, d/8 (spherical) and 0/45. The names are due to the geometry of the light

source, observer and interior of the measurement chamber. Scientists use this machine to

measure the amount of compounds in a sample. If the compound is more concentrated more

light will be absorbed by the sample; within small ranges, the Beer-Lambert law holds and

the absorbance varies linearly with concentration.

Samples are usually prepared in cuvettes; depending on the region of interest, they may be

constructed of glass, plastic, or quartz.

IR spectrophotometry

Spectrophotometers designed for the main infrared region are quite different because of the

technical requirements of measurement in that region. One major factor is the type of

photosensors that are available for different spectral regions, but infrared measurement is also

challenging because virtually everything emits IR light as thermal radiation, especially at

wavelengths beyond about 5 μm.

Another complication is that quite a few materials such as glass and plastic absorb infrared

light, making them unsuitable as optical media. Ideal optical materials are salts, which do

not absorb strongly. Samples for IR spectrophotometry may be smeared between two discs of

potassium bromide or ground with potassium bromide and pressed into a pellet. Where

aqueous solutions are to be measured, insoluble silver chloride is used to construct the cell.


Spectroradiometers

Spectroradiometers, which operate almost like the visible region spectrophotometers, are

designed to measure the spectral density of illuminants in order to evaluate and categorize

lighting for sales by the manufacturer, or for the customers to confirm the lamp they decided

to purchase is within their specifications.

The sequence of events in a spectroradiometer is as follows:

1. The light source shines onto or through the sample.

2. The sample transmits or reflects light.

3. The detector detects how much light was reflected from or transmitted through the sample.

4. The detector then converts how much light the sample transmitted or reflected into a number.

ULTRAVIOLET-VISIBLE SPECTROSCOPY

Ultraviolet-visible spectroscopy or ultraviolet-visible spectrophotometry (UV/ VIS)

involves the spectroscopy of photons and spectrophotometry. It uses light in the visible and

adjacent near-ultraviolet (UV) and near-infrared (NIR) ranges. In this region of the spectrum,

molecules undergo electronic transitions.

Applications

UV/Vis spectroscopy is routinely used in the quantitative determination of solutions of

transition metal ions and highly conjugated organic compounds.

Solutions of transition metal ions can be coloured (i.e., absorb visible light) because d

electrons within the metal atoms can be excited from one electronic state to another.

The colour of metal ion solutions is strongly affected by the presence of other species,

such as certain anions or ligands. For instance, the colour of a dilute solution of

copper sulphate is a very light blue; adding ammonia intensifies the colour and

changes the wavelength of maximum absorption (λ_max).

Organic compounds , especially those with a high degree of conjugation, also absorb

light in the UV or visible regions of the electromagnetic spectrum. The solvents for

these determinations are often water for water soluble compounds, or ethanol for

organic-soluble compounds. (Organic solvents may have significant UV absorption;

not all solvents are suitable for use in UV spectroscopy. Ethanol absorbs very weakly

at most wavelengths.)


While charge transfer complexes also give rise to colours, the colours are often too

intense to be used for quantitative measurement.

The Beer-Lambert law states that the absorbance of a solution is directly proportional to the

solution's concentration. Thus UV/VIS spectroscopy can be used to determine the

concentration of a solution. It is necessary to know how quickly the absorbance changes with

concentration. This can be taken from references (tables of molar extinction coefficients), or

more accurately, determined from a calibration curve.
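A minimal Python sketch of a calibration curve, assuming invented standard concentrations and absorbances; the slope of the fitted line plays the role of εL in the Beer-Lambert law:

import numpy as np

# Absorbance of standards of known concentration (made-up numbers).
conc_standards = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # e.g. mg/L
abs_standards = np.array([0.01, 0.16, 0.33, 0.47, 0.65])

slope, intercept = np.polyfit(conc_standards, abs_standards, 1)   # A = slope*c + intercept

A_unknown = 0.40
c_unknown = (A_unknown - intercept) / slope
print(f"slope (~ epsilon*L) = {slope:.3f} L/mg, unknown ~ {c_unknown:.2f} mg/L")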

A UV/Vis spectrophotometer may be used as a detector for HPLC. The presence of an

analyte gives a response which can be assumed to be proportional to the concentration. For

accurate results, the instrument's response to the analyte in the unknown should be compared

with the response to a standard; this is very similar to the use of calibration curves. The

response (e.g., peak height) for a particular concentration is known as the response factor.

Beer-Lambert law

The method is most often used in a quantitative way to determine concentrations of an

absorbing species in solution, using the Beer-Lambert law

A = log10(I0 / I) = ε L c

where A is the measured absorbance, I0 is the intensity of the incident light at a given

wavelength, I is the transmitted intensity, L the pathlength through the sample, and c the

concentration of the absorbing species. For each species and wavelength, ε is a constant

known as the molar absorptivity or extinction coefficient. This constant is a fundamental

molecular property in a given solvent, at a particular temperature and pressure, and has units

of 1/(M·cm), i.e. L mol-1 cm-1.

The absorbance and extinction ε are sometimes defined in terms of the natural logarithm

instead of the base-10 logarithm.

The Beer-Lambert Law is useful for characterizing many compounds but does not hold as a

universal relationship for the concentration and absorption of all substances. A 2nd order

polynomial relationship between absorption and concentration is sometimes encountered for


very large, complex molecules such as organic dyes (Xylenol Orange or Neutral Red, for

example).

The instrument used in ultraviolet-visible spectroscopy is called a UV/vis

spectrophotometer. It measures the intensity of light passing through a sample (I), and

compares it to the intensity of light before it passes through the sample (Io). The ratio I / Io is

called the transmittance, and is usually expressed as a percentage (%T). The absorbance, A, is

based on the transmittance:

A = − log10(T) = 2 − log10(%T)

The basic parts of a spectrophotometer are a light source (often an incandescent bulb for the

visible wavelengths, or a deuterium arc lamp in the ultraviolet), a holder for the sample, a

diffraction grating or monochromator to separate the different wavelengths of light, and a

detector. The detector is typically a photodiode or a CCD. Photodiodes are used with

monochromators, which filter the light so that only light of a single wavelength reaches the

detector. Diffraction gratings are used with CCDs, which collect light of different

wavelengths on different pixels.

A spectrophotometer can be either single beam or double beam. In a single beam instrument

(such as the Spectronic 20), all of the light passes through the sample cell. Io must be

measured by removing the sample. This was the earliest design, but is still in common use in

both teaching and industrial labs.

In a double-beam instrument, the light is split into two beams before it reaches the sample.

One beam is used as the reference; the other beam passes through the sample. Some double-

beam instruments have two detectors (photodiodes), and the sample and reference beam are

measured at the same time. In other instruments, the two beams pass through a beam chopper,

which blocks one beam at a time. The detector alternates between measuring the sample

beam and the reference beam.

Samples for UV/Vis spectrophotometry are most often liquids, although the absorbance of

gases and even of solids can also be measured. Samples are typically placed in a transparent

cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an

internal width of 1 cm. (This width becomes the path length, L, in the Beer-Lambert law.)


Test tubes can also be used as cuvettes in some instruments. The best cuvettes are made of

high quality quartz, although glass or plastic cuvettes are common. (Glass and most plastics

absorb in the UV, which limits their usefulness to visible wavelengths.)

Ultraviolet-visible spectrum

An ultraviolet-visible spectrum is essentially a graph of light absorbance versus wavelength

in a range of ultraviolet or visible regions. Such a spectrum can often be produced directly by

a more sophisticated spectrophotometer, or the data can be collected one wavelength at a time

by simpler instruments. Wavelength is often represented by the symbol λ. Similarly, for a

given substance, a standard graph of the extinction coefficient (ε) vs. wavelength (λ) may be

made or used if one is already available. Such a standard graph would be effectively

"concentration-corrected" and thus independent of concentration. For the given substance, the

wavelength at which maximum absorption in the spectrum occurs is called λmax, pronounced

"Lambda-max".

The Woodward-Fieser rules are a set of empirical observations which can be used to

predict λmax, the wavelength of the most intense UV/Vis absorption, for conjugated organic

compounds such as dienes and ketones.

This spectrum can be used qualitatively to identify components in a sample as each

component has its own unique absorbance spectrum (like a fingerprint).

INFRARED SPECTROSCOPY

Infrared spectroscopy (IR spectroscopy) is the subset of spectroscopy that deals with the

infrared region of the electromagnetic spectrum. It covers a range of techniques, the most

common being a form of absorption spectroscopy. As with all spectroscopic techniques, it

can be used to identify compounds or investigate sample composition. Infrared spectroscopy

correlation tables are tabulated in the literature.

Theory

The infrared portion of the electromagnetic spectrum is divided into three regions; the near-,

mid- and far- infrared, named for their relation to the visible spectrum. The far-infrared,

approximately 400–10 cm-1 (25–1000 μm), lying adjacent to the microwave region, has low

energy and may be used for rotational spectroscopy. The mid-infrared, approximately 4000–

400 cm-1 (2.5–25 μm), may be used to study the fundamental vibrations and associated

rotational-vibrational structure. The higher energy near-IR, approximately 14000–4000 cm-1

(0.7–2.5 μm), can excite overtone or harmonic vibrations. The names and classifications of

these subregions are merely conventions. They are neither strict divisions nor based on exact

molecular or electromagnetic properties.

Infrared spectroscopy exploits the fact that molecules have specific frequencies at which they

rotate or vibrate corresponding to discrete energy levels. These resonant frequencies are

determined by the shape of the molecular potential energy surfaces, the masses of the atoms

and, by the associated vibronic coupling. In order for a vibrational mode in a molecule to be

IR active, it must be associated with a change in the dipole moment of the molecule. In particular, in the

Born-Oppenheimer and harmonic approximations, i.e. when the molecular Hamiltonian

corresponding to the electronic ground state can be approximated by a harmonic oscillator in

the neighborhood of the equilibrium molecular geometry, the resonant frequencies are

determined by the normal modes corresponding to the molecular electronic ground state

potential energy surface. Nevertheless, the resonant frequencies can be in a first approach

related to the strength of the bond, and the mass of the atoms at either end of it. Thus, the

frequency of the vibrations can be associated with a particular bond type.

Simple diatomic molecules have only one bond, which may stretch. More complex molecules

have many bonds, and their vibrations can be coupled, leading to infrared absorptions at

characteristic frequencies that may be related to chemical groups. For example, the atoms in a

CH2 group, commonly found in organic compounds can vibrate in six different ways:

symmetrical and antisymmetrical stretching, scissoring, rocking, wagging and twisting.

The infrared spectrum of a sample is collected by passing a beam of infrared light through the

sample. Examination of the transmitted light reveals how much energy was absorbed at each

wavelength. This can be done with a monochromatic beam, which changes in wavelength

over time, or by using a Fourier transform instrument to measure all wavelengths at once.

From this, a transmittance or absorbance spectrum can be produced, showing at which IR

wavelengths the sample absorbs. Analysis of these absorption characteristics reveals details

about the molecular structure of the sample.


This technique works almost exclusively on samples with covalent bonds. Simple spectra are

obtained from samples with few IR active bonds and high levels of purity. More complex

molecular structures lead to more absorption bands and more complex spectra. The technique

has been used for the characterization of very complex mixtures.

Sample preparation

Gaseous samples require little preparation beyond purification, but a sample cell with a long

pathlength (typically 5-10 cm) is normally needed, as gases show relatively weak

absorbances.

Liquid samples can be sandwiched between two plates of a high purity salt (commonly

sodium chloride, or common salt, although a number of other salts such as potassium

bromide or calcium fluoride are also used). The plates are transparent to the infrared light and

will not introduce any lines onto the spectra. Some salt plates are highly soluble in water, so

the sample and washing reagents must be anhydrous (without water).

Solid samples can be prepared in two major ways. The first is to crush the sample with a

mulling agent (usually nujol) in a marble or agate mortar, with a pestle. A thin film of the

mull is applied onto salt plates and measured.

The second method is to grind a quantity of the sample with a specially purified salt (usually

potassium bromide) finely (to remove scattering effects from large crystals). This powder

mixture is then crushed in a mechanical die press to form a translucent pellet through which

the beam of the spectrometer can pass.

It is important to note that spectra obtained from different sample preparation methods will

look slightly different from each other due to differences in the samples' physical states.

The last technique is the Cast Film technique.

The cast film technique is used mainly for polymeric compounds. The sample is first dissolved in a

suitable, non-hygroscopic solvent. A drop of this solution is deposited on the surface of a KBr or

NaCl cell. The solution is then evaporated to dryness and the film formed on the cell is

analysed directly. Care must be taken to ensure that the film is not too thick, otherwise light

cannot pass through. This technique is suitable for qualitative analysis.


Typical method

Typical apparatus

A beam of infrared light is produced and split into two separate beams. One is passed through

the sample, the other passed through a reference which is often the substance the sample is

dissolved in. Both beams are reflected back towards a detector; first, however, they pass

through a splitter which rapidly alternates which of the two beams enters the detector. The

two signals are then compared and a printout is obtained.

A reference is used for two reasons:

This prevents fluctuations in the output of the source affecting the data

This allows the effects of the solvent to be cancelled out (the reference is usually a

pure form of the solvent the sample is in)

Uses and applications

Infrared spectroscopy is widely used in both research and industry as a simple and reliable

technique for measurement, quality control and dynamic measurement. The instruments are

now small, and can be transported, even for use in field trials. With increasing technology in

computer filtering and manipulation of the results, samples in solution can now be measured

accurately (water produces a broad absorbance across the range of interest, and thus renders

the spectra unreadable without this computer treatment). Some instruments will also

identify the substance being measured automatically by comparison with a library of

thousands of reference spectra.


By measuring at a specific frequency over time, changes in the character or quantity of a

particular bond can be measured. This is especially useful in measuring the degree of

polymerization in polymer manufacture. Modern research machines can take infrared

measurements across the whole range of interest as frequently as 32 times a second. This can

be done whilst simultaneous measurements are made using other techniques. This makes the

observations of chemical reactions and processes quicker and more accurate.

Techniques have been developed to assess the quality of tea-leaves using infrared

spectroscopy. This will mean that highly trained experts (also called 'noses') can be used

more sparingly, at a significant cost saving.[1]

Infrared spectroscopy has been highly successful for applications in both organic and

inorganic chemistry. Infrared spectroscopy has also been successfully utilized in the field of

semiconductor microelectronics[2]: for example, infrared spectroscopy can be applied to

semiconductors like silicon, gallium arsenide, gallium nitride, zinc selenide, amorphous

silicon, silicon nitride, etc.

Isotope effects

The different isotopes in a particular species may give fine detail in infrared spectroscopy.

For example, the O-O stretching frequency of oxyhemocyanin is experimentally determined

to be 832 and 788 cm-1 for ν(16O-16O) and ν(18O-18O) respectively.

By considering the O-O bond as a spring, the wavenumber of the absorption, ν, can be calculated as

ν = (1 / 2πc) · sqrt(k / μ)

where k is the spring (force) constant for the bond, and μ is the reduced mass of the A-B system,

μ = (mA · mB) / (mA + mB)

(mi is the mass of atom i).

The reduced masses for 16O-16O and 18O-18O can be approximated as 8 and 9 amu respectively.

Thus ν(16O-16O) / ν(18O-18O) = sqrt(9/8) ≈ 1.06, consistent with the observed ratio 832/788 ≈ 1.06.
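A small Python sketch of this estimate; the atomic masses of 16 and 18 amu and the harmonic-oscillator relation are the only inputs, and the observed 832 cm-1 value is taken from the text above:

import math

# With the same force constant k, the wavenumber scales as 1/sqrt(reduced mass).
def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

mu_16 = reduced_mass(16.0, 16.0)   # = 8 amu
mu_18 = reduced_mass(18.0, 18.0)   # = 9 amu

ratio = math.sqrt(mu_18 / mu_16)   # nu(16O-16O) / nu(18O-18O)
nu_16 = 832.0                      # observed wavenumber, cm^-1
print(f"predicted nu(18O-18O) ~ {nu_16 / ratio:.0f} cm^-1 (observed ~ 788 cm^-1)")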

Fourier transform infrared spectroscopy

Fourier transform infrared (FTIR) spectroscopy is a measurement technique for

collecting infrared spectra. Instead of recording the amount of energy absorbed when the

frequency of the infra-red light is varied (monochromator), the IR light is guided through an


interferometer. After passing the sample the measured signal is the interferogram. Performing

a mathematical Fourier transform on this signal results in a spectrum identical to that from

conventional (dispersive) infrared spectroscopy.
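A minimal numpy sketch of that last step, Fourier transforming a synthetic interferogram back into a spectrum; real FTIR processing also involves apodization and phase correction, which are omitted here, and the two component frequencies are invented:

import numpy as np

n_points = 4000
x = np.arange(n_points)                      # optical path difference in arbitrary steps
interferogram = (np.cos(2 * np.pi * 0.05 * x) +
                 0.5 * np.cos(2 * np.pi * 0.12 * x))

spectrum = np.abs(np.fft.rfft(interferogram))   # magnitude spectrum
freqs = np.fft.rfftfreq(n_points)               # cycles per step, arbitrary units

peaks = freqs[np.argsort(spectrum)[-2:]]        # the two strongest spectral components
print("recovered component frequencies:", np.sort(peaks))   # ~0.05 and 0.12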

FTIR spectrometers are cheaper than conventional spectrometers because building an

interferometer is easier than fabricating a monochromator. In addition, measurement

of a single spectrum is faster for the FTIR technique because the information at all

frequencies is collected simultaneously. This allows multiple scans to be collected and

averaged together, resulting in an improvement in sensitivity. Because of its various

advantages, virtually all modern infrared spectrometers are FTIR instruments.

Two-dimensional infrared spectroscopy

Two-dimensional infrared correlation spectroscopy analysis is the application of 2D

correlation analysis on infrared spectra. By extending the spectral information of a perturbed

sample, spectral analysis is simplified and resolution is enhanced. The 2D synchronous and

2D asynchronous spectra represent a graphical overview of the spectral changes due to a

perturbation (such as a changing concentration or changing temperature) as well as the

relationship between the spectral changes at two different wavenumbers.

Nonlinear two-dimensional infrared spectroscopy[3][4] is the infrared version of correlation

spectroscopy. Nonlinear two-dimensional infrared spectroscopy is a technique that has

become available with the development of femtosecond infrared laser pulses. In this

experiment first a set of pump pulses are applied to the sample. This is followed by a waiting

time, where the system is allowed to relax. The waiting time typically lasts from zero to

several picoseconds and the duration can be controlled with a resolution of tens of

femtoseconds. A probe pulse is then applied resulting in the emission of a signal from the

sample. The nonlinear two-dimensional infrared spectrum is a two-dimensional correlation

plot of the frequency ω1 that was excited by the initial pump pulses and the frequency ω3

excited by the probe pulse after the waiting time. This allows the observation of coupling

between different vibrational modes. Because of its extremely high time resolution it can be

used to monitor molecular dynamics on a picosecond timescale. It is still a largely unexplored

technique and is becoming increasingly popular for fundamental research.


Like in two-dimensional nuclear magnetic resonance (2DNMR) spectroscopy this technique

spreads the spectrum in two dimensions and allow for the observation of cross peaks that

contain information on the coupling between different modes. In contrast to 2DNMR

nonlinear two-dimensional infrared spectroscopy also involves excitation to overtones.

These excitations result in excited state absorption peaks located below the diagonal and

cross peaks. In 2DNMR two distinct techniques, COSY and NOESY, are frequently used.

The cross peaks in the first are related to the scalar coupling, while in the latter they are

related to the spin transfer between different nuclei. In nonlinear two-dimensional infrared

spectroscopy analogs have been drawn to these 2DNMR techniques. Nonlinear two-

dimensional infrared spectroscopy with zero waiting time corresponds to COSY and

nonlinear two-dimensional infrared spectroscopy with finite waiting time allowing

vibrational population transfer corresponds to NOESY. The COSY variant of nonlinear two-

dimensional infrared spectroscopy has been used for determination of the secondary structure

content of proteins.[5]

RAMAN SPECTROSCOPY

Raman spectroscopy is a spectroscopic technique used in condensed matter physics and

chemistry to study vibrational, rotational, and other low-frequency modes in a system. [1] It

relies on inelastic scattering, or Raman scattering of monochromatic light, usually from a

laser in the visible, near infrared, or near ultraviolet range. The laser light interacts with

phonons or other excitations in the system, resulting in the energy of the laser photons being

shifted up or down. The shift in energy gives information about the phonon modes in the

system. Infrared spectroscopy yields similar, but complementary information.

Typically, a sample is illuminated with a laser beam. Light from the illuminated spot is

collected with a lens and sent through a monochromator. Wavelengths close to the laser line,

due to elastic Rayleigh scattering, are filtered out while the rest of the collected light is

dispersed onto a detector.
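Raman bands are normally reported as a shift in wavenumbers relative to the laser line; a small Python sketch of that conversion, with an assumed 532 nm laser and an assumed scattered wavelength:

# Converting an absolute scattered wavelength into a Raman shift in wavenumbers.
laser_nm = 532.0        # assumed excitation laser wavelength
scattered_nm = 563.0    # assumed wavelength of a Stokes-scattered band

shift_cm1 = 1e7 / laser_nm - 1e7 / scattered_nm   # 1e7 converts nm to cm^-1
print(f"Raman shift ~ {shift_cm1:.0f} cm^-1")      # roughly 1035 cm^-1 for these values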

Spontaneous Raman scattering is typically very weak, and as a result the main difficulty of

Raman spectroscopy is separating the weak inelastically scattered light from the intense

Rayleigh scattered laser light. Raman spectrometers typically use holographic diffraction

gratings and multiple dispersion stages to achieve a high degree of laser rejection. In the past,

PMTs were the detectors of choice for dispersive Raman setups, which resulted in long


acquisition times. However, the recent uses of CCD detectors have made dispersive Raman

spectral acquisition much more rapid.

Raman spectroscopy has a stimulated version, analogous to stimulated emission, called

stimulated Raman scattering.

Basic theory

[Figure: energy level diagram showing the states involved in the Raman signal; line thickness is roughly proportional to the signal strength of the different transitions.]

The Raman effect occurs when light impinges upon a molecule and interacts with the electron

cloud of the bonds of that molecule. The incident photon excites one of the electrons into a

virtual state. For the spontaneous Raman effect, the molecule will be excited from the ground

state to a virtual energy state, and relax into a vibrational excited state, which generates

Stokes Raman scattering. If the molecule was already in an elevated vibrational energy state,

the Raman scattering is then called anti-Stokes Raman scattering.

A molecular polarizability change, or amount of deformation of the electron cloud, with

respect to the vibrational coordinate is required for the molecule to exhibit the Raman effect.

The amount of the polarizability change will determine the intensity, whereas the Raman shift

is equal to the frequency of the vibrational mode that is involved.


History

Although the inelastic scattering of light was predicted by Smekal in 1923, it was not until

1928 that it was observed in practice. The Raman effect was named after one of its

discoverers, the Indian scientist Sir C. V. Raman who observed the effect by means of

sunlight (1928, together with K. S. Krishnan and independently by Grigory Landsberg and

Leonid Mandelstam).[1] Raman won the Nobel Prize in Physics in 1930 for this discovery

accomplished using sunlight, a narrow band photographic filter to create monochromatic light

and a "crossed" filter to block this monochromatic light. He found that light of changed

frequency passed through the "crossed" filter.

Subsequently the mercury arc became the principal light source, first with photographic

detection and then with spectrophotometric detection. Currently lasers are used as light

sources.

Applications

Raman spectroscopy is commonly used in chemistry, since vibrational information is very

specific for the chemical bonds in molecules. It therefore provides a fingerprint by which the

molecule can be identified. The fingerprint region of organic molecules is in the range 500-

2000 cm-1. Another way that the technique is used is to study changes in chemical bonding,

e.g. when a substrate is added to an enzyme.

Raman gas analyzers have many practical applications, for instance they are used in medicine

for real-time monitoring of anaesthetic and respiratory gas mixtures during surgery.

In solid state physics, spontaneous Raman spectroscopy is used to, among other things,

characterize materials, measure temperature, and find the crystallographic orientation of a

sample.

As with single molecules, a given solid material has characteristic phonon modes that can

help an experimenter identify it. In addition, Raman spectroscopy can be used to observe

other low frequency excitations of the solid, such as plasmons, magnons, and

superconducting gap excitations.


The spontaneous Raman signal gives information on the population of a given phonon mode

in the ratio between the Stokes (downshifted) intensity and anti-Stokes (upshifted) intensity.
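A hedged Python sketch of that idea: under a simple Boltzmann-population model the anti-Stokes/Stokes intensity ratio depends on temperature, so it can serve as a rough thermometer. The laser wavelength, mode wavenumber, and the (ν0 ± νm)^4 prefactor follow the usual textbook form and are assumptions here, not values from these notes:

import math

h = 6.626e-34      # J*s
c = 2.998e10       # cm/s, so wavenumbers in cm^-1 can be used directly
kB = 1.381e-23     # J/K

nu0 = 1e7 / 532.0  # laser wavenumber for an assumed 532 nm laser, cm^-1
num = 520.0        # Raman mode wavenumber, cm^-1 (e.g. the silicon phonon)
T = 300.0          # sample temperature, K

ratio = ((nu0 + num) / (nu0 - num)) ** 4 * math.exp(-h * c * num / (kB * T))
print(f"anti-Stokes/Stokes ratio at {T:.0f} K ~ {ratio:.3f}")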

Raman scattering by an anisotropic crystal gives information on the crystal orientation. The

polarization of the Raman scattered light with respect to the crystal and the polarization of the

laser light can be used to find the orientation of the crystal, if the crystal structure

(specifically, its point group) is known.

Raman active fibers, such as aramid and carbon, have vibrational modes that show a shift in

Raman frequency with applied stress. Polypropylene fibers also exhibit similar shifts.

The radial breathing mode is a commonly used technique to evaluate the diameter of carbon

nanotubes.

Spatially Offset Raman Spectroscopy (SORS), which is less sensitive to surface layers than

conventional Raman, can be used to discover counterfeit drugs without opening their internal

packaging, and for non-invasive monitoring of biological tissue.[2][3]

Raman spectroscopy can be used to investigate the chemical composition of historical

documents such as the Book of Kells and contribute to knowledge of the social and economic

conditions at the time the documents were produced. [4] This is especially helpful because

Raman spectroscopy offers a non-invasive way to determine the best course of preservation

or conservation treatment for such materials.

Raman microspectroscopy

Raman spectroscopy offers several advantages for microscopic analysis. Since it is a

scattering technique, specimens do not need to be fixed or sectioned. Raman spectra can be

collected from a very small volume (< 1 µm in diameter); these spectra allow the

identification of species present in that volume. Water does not interfere very strongly. Thus,

Raman spectroscopy is suitable for the microscopic examination of minerals, materials such

as polymers and ceramics, cells and proteins. A Raman microscope begins with a standard

optical microscope, and adds an excitation laser, a monochromator, and a sensitive detector

(such as a charge-coupled device (CCD) or photomultiplier tube (PMT)). FT-Raman has also

been used with microscopes.


In direct imaging, the whole field of view is examined for scattering over a small range of

wavenumbers (Raman shifts). For instance, a wavenumber characteristic for cholesterol could

be used to record the distribution of cholesterol within a cell culture.

The other approach is hyperspectral imaging or chemical imaging, in which thousands of

Raman spectra are acquired from all over the field of view. The data can then be used to

generate images showing the location and amount of different components. Taking the cell

culture example, a hyperspectral image could show the distribution of cholesterol, as well as

proteins, nucleic acids, and fatty acids. Sophisticated signal- and image-processing

techniques can be used to ignore the presence of water, culture media, buffers, and other

interferents.

Raman microscopy, and in particular confocal microscopy, has very high spatial resolution.

For example, the lateral and depth resolutions were 250 nm and 1.7 µm, respectively, using a

confocal Raman microspectrometer with the 632.8 nm line from a He-Ne laser with a pinhole

of 100 µm diameter.

Since the objective lenses of microscopes focus the laser beam to several micrometres in

diameter, the resulting photon flux is much higher than achieved in conventional Raman

setups. This has the added benefit of enhanced fluorescence quenching. However, the high

photon flux can also cause sample degradation, and for this reason some setups require a

thermally conducting substrate (which acts as a heat sink) in order to mitigate this process.

By using Raman microspectroscopy, in vivo time- and space-resolved Raman spectra of

microscopic regions of samples can be measured. As a result, the fluorescence of water,

media, and buffers can be removed. Consequently in vivo time- and space-resolved Raman

spectroscopy is suitable to examine proteins, cells and organs.

Raman microscopy for biological and medical specimens generally uses near-infrared (NIR)

lasers (785 nm diodes and 1064 nm Nd:YAG are especially common). This reduces the risk

of damaging the specimen by applying high power. However, the intensity of NIR Raman is

low (owing to the ω^4 dependence of Raman scattering intensity), and most detectors required

very long collection times. Recently, more sensitive detectors have become available, making

the technique better suited to general use. Raman microscopy of inorganic specimens, such as

rocks and ceramics and polymers, can use a broader range of excitation wavelengths.[5]


Variations

Several variations of Raman spectroscopy have been developed. The usual purpose is to

enhance the sensitivity (e.g., surface-enhanced Raman), to improve the spatial resolution

(Raman microscopy), or to acquire very specific information (resonance Raman).

Surface Enhanced Raman Spectroscopy (SERS) - Normally done in a silver or

gold colloid or a substrate containing silver or gold. Surface plasmons of silver and

gold are easily excited by the laser, and the resulting electric fields cause other nearby

molecules to become Raman active. The result is amplification of the Raman signal

(by up to 10^11). This effect was originally observed by Fleischmann, but the prevailing

explanation was proposed by Van Duyne in 1977.[6]

Hyper Raman - A non-linear effect in which the vibrational modes interact with the

second harmonic of the excitation beam. This requires very high power, but allows

the observation of vibrational modes which are normally "silent". It frequently relies

on SERS-type enhancement to boost the sensitivity.

Resonance Raman spectroscopy - The excitation wavelength is matched to an

electronic transition of the molecule or crystal, so that vibrational modes associated

with the excited electronic state are greatly enhanced. This is useful for studying large

molecules such as polypeptides, which might show hundreds of bands in

"conventional" Raman spectra. It is also useful for associating normal modes with

their observed frequency shifts.

Spontaneous Raman Spectroscopy - Used to study the temperature dependence of

the Raman spectra of molecules.

Optical Tweezers Raman Spectroscopy (OTRS) - Used to study individual

particles, and even biochemical processes in single cells trapped by optical tweezers.

Stimulated Raman Spectroscopy - A two color pulse transfers the population from

ground to a rovibrationally excited state, if the difference in energy corresponds to an

allowed Raman transition. Two photon UV ionization, applied after the population

transfer but before relaxation, allows the intra-molecular or inter-molecular Raman


spectrum of a gas or molecular cluster (indeed, a given conformation of molecular

cluster) to be collected. This is a useful molecular dynamics technique.

Spatially Offset Raman Spectroscopy (SORS) - The Raman scatter is collected

from regions laterally offset away from the excitation laser spot, leading to

significantly lower contributions from the surface layer than with traditional Raman

spectroscopy.

UNIT IV

THERMAL ANALYSIS


THERMOGRAVIMETRIC ANALYSIS

[Figure: sketch of a typical TGA instrument (Setaram TG-DTA 92 B type); the cooling-water pipe is omitted.]

Thermogravimetric Analysis or TGA is a type of testing that is performed on samples to

determine changes in weight in relation to change in temperature. Such analysis relies on a

high degree of precision in three measurements: weight, temperature, and temperature

change. As many weight loss curves look similar, the weight loss curve may require

transformation before results may be interpreted. A derivative weight loss curve can be used

to tell the point at which weight loss is most apparent. Again, interpretation is limited without

further modifications and deconvolution of the overlapping peaks may be required.
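A minimal Python sketch of a derivative weight-loss (DTG) curve computed from synthetic mass-versus-temperature data; the sigmoidal single-step loss is invented for illustration:

import numpy as np

temperature = np.linspace(25, 600, 576)                               # deg C
mass = 10.0 - 2.0 / (1.0 + np.exp(-(temperature - 350) / 15.0))       # mg, one-step loss

dtg = np.gradient(mass, temperature)          # d(mass)/dT, the derivative weight-loss curve
t_max_loss = temperature[np.argmin(dtg)]      # most negative slope = fastest weight loss
print(f"Fastest weight loss near {t_max_loss:.0f} deg C")   # ~350 deg C for this curve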

TGA is commonly employed in research and testing to determine characteristics of materials

such as polymers, to determine degradation temperatures, absorbed moisture content of

materials, the level of inorganic and organic components in materials, decomposition points

of explosives, and solvent residues. It is also often used to estimate the corrosion kinetics in

high temperature oxidation.

Analyzer

The analyzer usually consists of a high-precision balance with a pan loaded with the sample.

The sample is placed in a small electrically heated oven with a thermocouple to accurately

measure the temperature. The atmosphere may be purged with an inert gas to prevent

oxidation or other undesired reactions. A computer is used to control the instrument.

Analysis is carried out by raising the temperature gradually and plotting weight against

temperature. After the data is obtained, curve smoothing and other operations may be done

such as to find the exact points of inflection.

DIFFERENTIAL THERMAL ANALYSIS


Differential thermal analysis (or DTA) is a thermoanalytic technique, similar to differential

scanning calorimetry. In DTA, the material under study and an inert reference are heated (or

cooled) under identical conditions, while recording any temperature difference between

sample and reference.[1] This differential temperature is then plotted against time, or against

temperature (DTA curve or thermogram). Changes in the sample, either exothermic or

endothermic, can be detected relative to the inert reference. Thus, a DTA curve provides data

on the transformations that have occurred, such as glass transitions, crystallization, melting

and sublimation. The area under a DTA peak can be related to the enthalpy change and is not

affected by the heat capacity of the sample.

Apparatus

A DTA apparatus consists of a sample holder comprising thermocouples, sample containers

and a ceramic or metallic block; a furnace; a temperature programmer; and a recording

system. The key feature is the existence of two thermocouples connected to a voltmeter. One

thermocouple is placed in an inert material such as Al2O3, while the other is placed in a

sample of the material under study. As the temperature is increased, there will be a brief

deflection of the voltmeter if the sample is undergoing a phase transition. This occurs because

the input of heat will raise the temperature of the inert substance, but be incorporated as latent

heat in the material changing phase.

Applications

A DTA curve can be used simply as a fingerprint for identification purposes, but usually the applications of this method are the determination of phase diagrams, heat-change measurements and the study of decomposition in various atmospheres.

DTA is widely used in the pharmaceutical and food industries.

DTA may be used in cement chemistry, mineralogical research and in environmental studies.

DTA curves may also be used to date bone remains or to study archaeological materials.


DIFFERENTIAL SCANNING CALORIMETRY

Differential scanning calorimetry or DSC is a thermoanalytical technique in which the

difference in the amount of heat required to increase the temperature of a sample and of a reference is measured as a function of temperature. Both the sample and reference are

maintained at nearly the same temperature throughout the experiment. Generally, the

temperature program for a DSC analysis is designed such that the sample holder temperature

increases linearly as a function of time. The reference sample should have a well-defined heat

capacity over the range of temperatures to be scanned. The basic principle underlying this

technique is that, when the sample undergoes a physical transformation such as phase

transitions, more (or less) heat will need to flow to it than the reference to maintain both at

the same temperature. Whether more or less heat must flow to the sample depends on

whether the process is exothermic or endothermic. For example, as a solid sample melts to a

liquid it will require more heat flowing to the sample to increase its temperature at the same

rate as the reference. This is due to the absorption of heat by the sample as it undergoes the

endothermic phase transition from solid to liquid. Likewise, as the sample undergoes

exothermic processes (such as crystallization) less heat is required to raise the sample

temperature. By observing the difference in heat flow between the sample and reference,

differential scanning calorimeters are able to measure the amount of heat absorbed or

released during such transitions. DSC may also be used to observe more subtle phase

changes, such as glass transitions. DSC is widely used in industrial settings as a quality

control instrument due to its applicability in evaluating sample purity and for studying

polymer curing.[1][2][3]

An alternative technique, which shares much in common with DSC, is differential thermal

analysis (DTA). In this technique it is the heat flow to the sample and reference that remains

the same rather than the temperature. When the sample and reference are heated identically,

phase changes and other thermal processes cause a difference in temperature between the

sample and reference. Both DSC and DTA provide similar information; DSC is the more

widely used of the two techniques.[1][2][3]

DSC curves

The result of a DSC experiment is a curve of heat flux versus temperature or versus time.

There are two different sign conventions: exothermic events in the sample may be shown as either positive or negative peaks, depending on the type of instrumentation used for the experiment. This curve can be used to calculate enthalpies of

transitions. This is done by integrating the peak corresponding to a given transition. It can be

shown that the enthalpy of transition can be expressed using the following equation:

ΔH = KA

where ΔH is the enthalpy of transition, K is the calorimetric constant, and A is the area under

the curve. The calorimetric constant will vary from instrument to instrument, and can be

determined by analyzing a well-characterized sample with known enthalpies of transition.[2]
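The calibration and integration described above can be sketched numerically as follows. This is only an illustration: the peak shapes, masses and time bases are made up, and the indium heat of fusion used for calibration (about 28.6 J/g) is a nominal literature value rather than a figure taken from these notes.

import numpy as np

def peak_area(time_s, heat_flow_mW, baseline_mW=0.0):
    """Integrate a DSC peak (heat flow vs. time) above a flat baseline.
    Returns the area in millijoules (mW * s = mJ)."""
    return np.trapz(heat_flow_mW - baseline_mW, time_s)

# Calibration: well-characterized standard with a known enthalpy of transition
t_std = np.linspace(0, 60, 601)                        # s
hf_std = 40.0 * np.exp(-((t_std - 30.0) / 5.0) ** 2)   # mW, idealized melting peak
mass_std = 5.0                                         # mg
area_std = peak_area(t_std, hf_std)                    # mJ
K = (28.6 * mass_std) / area_std                       # calorimetric constant (dimensionless here)

# Unknown sample: delta_H = K * A, expressed per unit mass
t_smp = np.linspace(0, 60, 601)
hf_smp = 25.0 * np.exp(-((t_smp - 32.0) / 6.0) ** 2)
mass_smp = 4.0                                         # mg
delta_H = K * peak_area(t_smp, hf_smp) / mass_smp      # J/g
print(f"Enthalpy of transition is about {delta_H:.1f} J/g")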

Applications

Differential scanning calorimetry can be used to measure a number of characteristic

properties of a sample. Using this technique it is possible to observe fusion and crystallization

events as well as glass transition temperatures (Tg). DSC can also be used to study oxidation,

as well as other chemical reactions.[1][2][3]

Glass transitions may occur as the temperature of an amorphous solid is increased. These

transitions appear as a step in the baseline of the recorded DSC signal. This is due to the

sample undergoing a change in heat capacity; no formal phase change occurs.[1][3]

As the temperature increases, an amorphous solid will become less viscous. At some point

the molecules may obtain enough freedom of motion to spontaneously arrange themselves

into a crystalline form. This is known as the crystallization temperature (Tc). This transition

from amorphous solid to crystalline solid is an exothermic process, and results in a peak in

the DSC signal. As the temperature increases the sample eventually reaches its melting

temperature (Tm). The melting process results in an endothermic peak in the DSC curve. The

ability to determine transition temperatures and enthalpies makes DSC an invaluable tool in

producing phase diagrams for various chemical systems.
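As an illustrative sketch only (assuming an exotherm-up sign convention and a purely synthetic curve), the crystallization and melting events described above can be located on a DSC trace with a simple peak search, for example using scipy.signal.find_peaks:

import numpy as np
from scipy.signal import find_peaks

# Synthetic DSC heating scan: a crystallization exotherm near 120 deg C
# and a melting endotherm near 250 deg C (exotherm-up convention assumed)
temperature = np.linspace(40, 300, 1301)
heat_flow = (1.5 * np.exp(-((temperature - 120) / 8.0) ** 2)    # exothermic peak
             - 2.0 * np.exp(-((temperature - 250) / 6.0) ** 2))  # endothermic peak

# Exothermic events appear as maxima, endothermic events as minima
exo_idx, _ = find_peaks(heat_flow, height=0.5)
endo_idx, _ = find_peaks(-heat_flow, height=0.5)

Tc = temperature[exo_idx]    # crystallization temperature(s)
Tm = temperature[endo_idx]   # melting temperature(s)
print("Tc:", Tc, "deg C;  Tm:", Tm, "deg C")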

DSC may also be used in the study of liquid crystals. As matter transitions between solid and

liquid it often goes through a third state, which displays properties of both phases. This

anisotropic liquid is known as a liquid crystalline or mesomorphous state. Using DSC, it is

possible to observe the small energy changes that occur as matter transitions from a solid to a

liquid crystal and from a liquid crystal to an isotropic liquid.[2]


Using differential scanning calorimetry to study the oxidative stability of samples generally

requires an airtight sample chamber. Usually, such tests are done isothermally (at constant

temperature) by changing the atmosphere of the sample. First, the sample is brought to the

desired test temperature under an inert atmosphere, usually nitrogen. Then, oxygen is added

to the system. Any oxidation that occurs is observed as a deviation in the baseline. Such

analyses can be used to determine the stability and optimum storage conditions for a

compound.[1]
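A minimal sketch of how the onset of such a baseline deviation might be located numerically is given below; the isothermal record is simulated, and the five-sigma threshold is an arbitrary illustrative choice rather than a prescribed criterion.

import numpy as np

# Simulated isothermal DSC record after switching from nitrogen to oxygen:
# time in minutes and heat flow in mW (exotherm-up convention assumed)
time_min = np.linspace(0, 120, 1201)
heat_flow = 0.02 * np.random.randn(time_min.size)                   # flat baseline + noise
heat_flow[time_min > 45] += 0.05 * (time_min[time_min > 45] - 45)   # oxidation exotherm

# Baseline estimated from the early, clearly inert part of the record
baseline = heat_flow[time_min < 10].mean()
noise = heat_flow[time_min < 10].std()

# Oxidation onset = first point deviating from the baseline by more than 5 sigma
onset_idx = np.argmax(heat_flow - baseline > 5 * noise)
print(f"Oxidation onset (induction time) near {time_min[onset_idx]:.1f} min")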

DSC is widely used in the pharmaceutical and polymer industries. For the polymer chemist,

DSC is a handy tool for studying curing processes, which allows the fine tuning of polymer

properties. The cross-linking of polymer molecules that occurs in the curing process is

exothermic, resulting in a positive peak in the DSC curve that usually appears soon after the

glass transition.[1][2][3]

In the pharmaceutical industry it is necessary to have well-characterized drug compounds in

order to define processing parameters. For instance, if it is necessary to deliver a drug in the

amorphous form, it is desirable to process the drug at temperatures below those at which

crystallization can occur.

In food science research, DSC is used in conjunction with other thermal analytical techniques

to determine water dynamics. Changes in water distribution may be correlated with changes

in texture. Similar to material science studies, the effects of curing on confectionery products

can also be analyzed.

DSC curves may also be used to evaluate drug and polymer purities. This is possible because

the temperature range over which a mixture of compounds melts is dependent on their

relative amounts. This effect is due to a phenomenon known as freezing point depression,

which occurs when a foreign solute is added to a solution. (Freezing point depression is what

allows salt to de-ice sidewalks and antifreeze to keep your car running in the winter.)

Consequently, less pure compounds will exhibit a broadened melting peak that begins at

lower temperature than a pure compound.

In recent years this technique has also been applied to the study of metallic materials. Characterization of such materials by DSC is still not straightforward because relatively little literature is available. DSC can be used to find the solidus and liquidus temperatures of a metal alloy, but its widest applications at present are the study of precipitation, Guinier-Preston zones, phase transitions, dislocation movement, grain growth and the like.


UNIT V

SEPARATION TECHNIQUES


INTRODUCTION

Chromatography (from Greek χρώμα chroma, "color", and γραφειν graphein, "to write") is the

collective term for a family of laboratory techniques for the separation of mixtures. It

involves passing a mixture dissolved in a "mobile phase" through a stationary phase, which

separates the analyte to be measured from other molecules in the mixture and allows it to be

isolated.

Chromatography may be preparative or analytical. Preparative chromatography seeks to

separate the components of a mixture for further use (and is thus a form of purification).

Analytical chromatography normally operates with smaller amounts of material and seeks to

measure the relative proportions of analytes in a mixture. The two are not mutually exclusive.

Explanation

An analogy which is sometimes useful is to suppose a mixture of bees and wasps passing

over a flower bed. The bees would be more attracted to the flowers than the wasps, and

would become separated from them. If one were to observe at a point past the flower bed, the

wasps would pass first, followed by the bees. In this analogy, the bees and wasps represent

the analytes to be separated, the flowers represent the stationary phase, and the mobile phase

could be thought of as the air. The key to the separation is the differing affinities among

analyte, stationary phase, and mobile phase. The observer could represent the detector used in

some forms of analytical chromatography. A key point is that the detector need not be

capable of discriminating between the analytes, since they have become separated before

passing the detector.

Chromatography terms

The analyte is the substance that is to be separated during chromatography.

Analytical chromatography is used to determine the existence and possibly also the

concentration of analyte(s) in a sample.

A bonded phase is a stationary phase that is covalently bonded to the support

particles or to the inside wall of the column tubing.


A chromatogram is the visual output of the chromatograph. In the case of an optimal

separation, different peaks or patterns on the chromatogram correspond to different

components of the separated mixture.

Plotted on the x-axis is the retention time and plotted on the y-axis a signal (for

example obtained by a spectrophotometer, mass spectrometer or a variety of other

detectors) corresponding to the response created by the analytes exiting the system. In

the case of an optimal system the signal is proportional to the concentration of the

specific analyte separated.

A chromatograph is the instrument that carries out a sophisticated separation, e.g. a gas chromatographic or liquid chromatographic separation.

Chromatography is a physical method of separation in which the components to be

separated are distributed between two phases, one of which is stationary (stationary

phase) while the other (the mobile phase) moves in a definite direction.

The effluent is the mobile phase leaving the column.

An immobilized phase is a stationary phase which is immobilized on the support

particles, or on the inner wall of the column tubing.

The mobile phase is the phase which moves in a definite direction. It may be a liquid

(LC and CEC), a gas (GC), or a supercritical fluid (supercritical-fluid

chromatography, SFC). A better definition: The mobile phase consists of the sample

being separated/analyzed and the solvent that moves the sample through the column.

In one case of HPLC the solvent consists of a carbonate/bicarbonate solution and the

sample is the anions being separated. The mobile phase moves through the

chromatography column (the stationary phase) where the sample interacts with the

stationary phase and is separated.

Preparative chromatography is used to purify sufficient quantities of a substance

for further use, rather than analysis.

The retention time is the characteristic time it takes for a particular analyte to pass

through the system (from the column inlet to the detector) under set conditions. See

also: Kovat's retention index

The sample is the matter analysed in chromatography. It may consist of a single

component or it may be a mixture of components. When the sample is treated in the

course of an analysis, the phase or the phases containing the analytes of interest is/are


referred to as the sample, whereas everything not of interest that is separated from the sample before or in the course of the analysis is referred to as waste.

The solute refers to the sample components in partition chromatography.

The solvent refers to any substance capable of solubilizing another substance, and

especially the liquid mobile phase in LC.

The stationary phase is the substance which is fixed in place for the chromatography

procedure. Examples include the silica layer in thin layer chromatography.

Techniques by chromatographic bed shape

Column chromatography

[Figure: a standard column chromatography setup and a flash column chromatography setup.]

Column chromatography is a separation technique in which the stationary bed is within a

tube. The particles of the solid stationary phase or the support coated with a liquid stationary

phase may fill the whole inside volume of the tube (packed column) or be concentrated on or

along the inside tube wall leaving an open, unrestricted path for the mobile phase in the

middle part of the tube (open tubular column). Differences in the rates of movement through the medium result in different retention times for the components of the sample.

In 1978, W. C. Still introduced a modified version of column chromatography called flash

column chromatography (flash). The technique is very similar to traditional column chromatography, except that the solvent is driven through the column by applying positive pressure. This allowed most separations to be performed in less than 20 minutes,

with improved separations compared to the old method. Modern flash chromatography

systems are sold as pre-packed plastic cartridges, and the solvent is pumped through the

cartridge. Systems may also be linked with detectors and fraction collectors providing

automation. The introduction of gradient pumps resulted in quicker separations and less

solvent usage.

In expanded bed adsorption, a fluidized bed is used, rather than a solid phase made by a

packed bed. This allows omission of initial clearing steps such as centrifugation and

filtration, for culture broths or slurries of broken cells.

Planar Chromatography

[Figure: thin-layer chromatography used to separate the components of chlorophyll.]

Planar chromatography is a separation technique in which the stationary phase is present as

or on a plane. The plane can be a paper, serving as such or impregnated by a substance as the

stationary bed (paper chromatography) or a layer of solid particles spread on a support such

as a glass plate (thin layer chromatography).

Paper Chromatography

Paper chromatography is a technique that involves placing a small dot of sample solution

onto a strip of chromatography paper. The paper is placed in a jar containing a shallow layer


of solvent and sealed. As the solvent rises through the paper it meets the sample mixture

which starts to travel up the paper with the solvent. Different compounds in the sample

mixture travel different distances according to how strongly they interact with the paper. This

paper is made of cellulose, a polar molecule, and the compounds within the mixture travel

farther if they are non-polar. More polar substances bond with the cellulose paper more

quickly, and therefore do not travel as far. This process allows the calculation of an Rf value, which can be compared with those of standard compounds to aid in the identification of an unknown substance.
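The Rf calculation is simply the ratio of the distance travelled by the spot to the distance travelled by the solvent front; a small illustrative sketch (with made-up measurements and made-up reference values) is given below.

# Rf value: distance travelled by the spot divided by the distance
# travelled by the solvent front (both measured from the origin line)
def rf_value(spot_distance_cm: float, solvent_front_cm: float) -> float:
    return spot_distance_cm / solvent_front_cm

# Hypothetical measurements from a developed paper chromatogram
unknown_rf = rf_value(3.1, 8.0)                        # unknown spot
standards = {"compound A": 0.39, "compound B": 0.62}   # hypothetical reference Rf values

# Identify the standard whose Rf is closest to that of the unknown
best_match = min(standards, key=lambda name: abs(standards[name] - unknown_rf))
print(f"Unknown Rf = {unknown_rf:.2f}; closest standard: {best_match}")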

Thin layer chromatography

Thin layer chromatography (TLC) is a widely-employed laboratory technique and is similar

to paper chromatography. However, instead of using a stationary phase of paper, it involves a

stationary phase of a thin layer of adsorbent like silica gel, alumina, or cellulose on a flat,

inert substrate. Compared to paper, it has the advantage of faster runs, better separations, and

the choice between different adsorbents. Different compounds in the sample mixture travel

different distances according to how strongly they interact with the adsorbent. This allows the calculation of an Rf value, which can be compared with those of standard compounds to aid in the identification of an unknown substance.

Techniques by physical state of mobile phase

Gas chromatography

Gas chromatography (GC), also sometimes known as Gas-Liquid chromatography, (GLC), is

a separation technique in which the mobile phase is a gas. Gas chromatography is always

carried out in a column, which is typically "packed" or "capillary" (see below) .

Gas chromatography (GC) is based on a partition equilibrium of the analyte between a stationary phase (often a liquid silicone-based material) and a mobile gas (most often helium). The stationary phase is adhered to the inside of a small-diameter glass tube (a

capillary column) or a solid matrix inside a larger metal tube (a packed column). It is widely

used in analytical chemistry; though the high temperatures used in GC make it unsuitable for

high molecular weight biopolymers or proteins (heat will denature them), frequently

encountered in biochemistry, it is well suited for use in the petrochemical, environmental

monitoring, and industrial chemical fields. It is also used extensively in chemistry research.


Liquid chromatography

Liquid chromatography (LC) is a separation technique in which the mobile phase is a liquid.

Liquid chromatography can be carried out either in a column or a plane. Present day liquid

chromatography that generally utilizes very small packing particles and a relatively high

pressure is referred to as high performance liquid chromatography (HPLC).

In the HPLC technique, the sample is forced through a column that is packed with irregularly

or spherically shaped particles or a porous monolithic layer (stationary phase) by a liquid

(mobile phase) at high pressure. HPLC is historically divided into two different sub-classes

based on the polarity of the mobile and stationary phases. The technique in which the stationary

phase is more polar than the mobile phase (e.g. toluene as the mobile phase, silica as the

stationary phase) is called normal phase liquid chromatography (NPLC) and the opposite

(e.g. water-methanol mixture as the mobile phase and C18 = octadecylsilyl as the stationary

phase) is called reversed phase liquid chromatography (RPLC). Ironically the "normal phase"

has fewer applications and RPLC is therefore used considerably more.

Specific techniques which come under this broad heading are listed below. It should also be

noted that the following techniques can also be considered fast protein liquid chromatography

if no pressure is used to drive the mobile phase through the stationary phase. See also

Aqueous Normal Phase Chromatography.

Affinity chromatography

Affinity chromatography is based on selective non-covalent interaction between an analyte

and specific molecules. It is very specific, but not very robust. It is often used in biochemistry

in the purification of proteins bound to tags. These fusion proteins are labelled with

compounds such as His-tags, biotin or antigens, which bind to the stationary phase

specifically. After purification, some of these tags are usually removed and the pure protein is

obtained.

Supercritical fluid chromatography

Supercritical fluid chromatography is a separation technique in which the mobile phase is a

fluid above and relatively close to its critical temperature and pressure.


Techniques by separation mechanism

Ion exchange chromatography

Ion exchange chromatography utilizes ion exchange mechanism to separate analytes. It is

usually performed in columns, but the mechanism can also be exploited in planar mode. Ion

exchange chromatography uses a charged stationary phase to separate charged compounds

including amino acids, peptides, and proteins. In conventional methods the stationary phase is

an ion exchange resin that carries charged functional groups which interact with oppositely

charged groups of the compound to be retained. Ion exchange chromatography is commonly

used to purify proteins using FPLC.

Size exclusion chromatography

Size exclusion chromatography (SEC) is also known as gel permeation chromatography

(GPC) or gel filtration chromatography and separates molecules according to their size (or

more accurately according to their hydrodynamic diameter or hydrodynamic volume).

Smaller molecules are able to enter the pores of the media and, therefore, take longer to elute,

whereas larger molecules are excluded from the pores and elute faster. It is generally a low

resolution chromatography technique and thus it is often reserved for the final, "polishing"

step of purification. It is also useful for determining the tertiary structure and quaternary

structure of purified proteins, especially since it can be carried out under native solution

conditions.

Special techniques

Reversed-phase chromatography

Reversed-phase chromatography is an elution procedure used in liquid chromatography in

which the mobile phase is significantly more polar than the stationary phase.

Two-dimensional chromatography

In some cases, the chemistry within a given column can be insufficient to separate some

analytes. It is possible to direct a series of unresolved peaks onto a second column with

different physico-chemical properties. Since the mechanism of

retention on this new solid support is different from the first dimensional separation, it can be


possible to separate compounds that are indistinguishable by one-dimensional

chromatography.

Pyrolysis gas chromatography

Fast protein liquid chromatography

Fast protein liquid chromatography (FPLC) is a term applied to several chromatography

techniques which are used to purify proteins. Many of these techniques are identical to those

carried out under high performance liquid chromatography.

Countercurrent chromatography

Countercurrent chromatography (CCC) is a type of liquid-liquid chromatography, where both

the stationary and mobile phases are liquids. It involves mixing a solution of liquids, allowing

them to settle into layers and then separating the layers.

Chiral chromatography

Chiral chromatography involves the separation of stereoisomers. In the case of enantiomers,

these have no chemical or physical differences apart from being three dimensional mirror

images. Conventional chromatography or other separation processes are incapable of

separating them. To enable chiral separations to take place, either the mobile phase or the

stationary phase must themselves be made chiral, giving differing affinities between the

analytes. Chiral chromatography HPLC columns (with a chiral stationary phase) in both

normal and reversed phase are commercially available.