TRANSCRIPT

APPLICATIONS OF QUANTUM PHYSICS

Compiled by

GITA THERESIA ARTITA
072244610035

Faculty of Mathematics and Natural Sciences
Universitas Negeri Medan
2009
QUANTUM THEORY
In 1900, the German physicist Max Planck (1858-1947) decided to study black-body
radiation. He sought a mathematical equation describing the shape and position of the curves
in the spectral distribution graph. Planck assumed that the surface of a black body emits
radiation continuously, in accordance with the laws of physics accepted at the time, laws
derived from the fundamental mechanics developed by Sir Isaac Newton. With this
assumption, however, Planck failed to obtain the equation he was looking for. This failure
led him to conclude that the laws of mechanics governing the workings of an atom differ
somewhat from Newton's laws.
Max Planck then began with a new assumption: the surface of a black body does not
absorb or emit energy continuously, but rather little by little, in discrete steps. According to
Planck, a black body absorbs energy in small bundles and re-emits the energy it absorbs in
small bundles as well. These small bundles were later called quanta. Quantum theory can be
likened to climbing or descending a staircase: we can set foot only at certain positions, on
the steps themselves, and never between them. With this revolutionary hypothesis, Planck
found a mathematical equation for black-body radiation that fit his experimental data
exactly. The equation, later known as Planck's law of black-body radiation, states that the
intensity of light emitted by a black body varies with the wavelength of the light.
Planck's radiation relation is:

E = hν

where
E is the energy (J),
h is Planck's constant (6.626 × 10⁻³⁴ J·s), and
ν is the frequency of the light (Hz).
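As an illustration, the photon energy for a given frequency follows directly from this relation; a minimal sketch (the green-light frequency is an assumed example value, not from the text):

```python
# Photon energy from Planck's relation E = h * nu.
h = 6.626e-34  # Planck's constant, J*s

def photon_energy(frequency_hz):
    """Return the energy in joules of one quantum (photon) of light."""
    return h * frequency_hz

green_light = 5.45e14  # Hz, roughly 550 nm green light (assumed example)
print(photon_energy(green_light))  # ~3.6e-19 J
```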
Planck's hypothesis, which contradicted the classical theory of electromagnetic
waves, was the starting point of quantum theory and marked a revolution in physics.
Planck's breakthrough was a very bold step, as it contradicted well-established and highly
respected laws of physics. With this theory, physics could offer deep insight into the nature
of matter. Planck published his work in a highly regarded journal, yet for some time it
received little attention from the scientific community. At first, Planck himself and other
physicists regarded the hypothesis as nothing more than a convenient mathematical fiction.
Over the following years, however, this view changed, as Planck's quantum hypothesis
proved able to explain a variety of physical phenomena.
Recognition of Quantum Theory
Quantum theory is of great importance in science because, in principle, it can be used
to predict the chemical and physical properties of a substance. Recognition of Planck's work
came slowly, because his approach represented an entirely new way of thinking. Albert
Einstein, for example, used the quantum concept to explain the photoelectric effect he had
observed. The photoelectric effect is the emission of electrons from the surface of a material
when light of sufficient energy strikes that surface. All metals exhibit this phenomenon.
Einstein's explanation of the photoelectric effect was considered so radical that for a time it
was not generally accepted. When Einstein published his work in 1905, however, his
explanation attracted wide attention among physicists. The application of quantum theory to
the photoelectric effect thus drew extraordinary attention to Planck's previously neglected
quantum theory.
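Einstein's explanation can be summarized by the equation K_max = hν − φ, where φ is the metal's work function. A minimal sketch (the sodium work function and the light frequency are assumed example values, not from the text):

```python
# Einstein's photoelectric equation: K_max = h*nu - phi.
h = 6.626e-34   # Planck's constant, J*s
eV = 1.602e-19  # joules per electronvolt

def max_kinetic_energy(frequency_hz, work_function_eV):
    """Max kinetic energy (eV) of a photoelectron; negative => no emission."""
    return h * frequency_hz / eV - work_function_eV

# Light of 6.0e14 Hz on sodium (work function ~2.28 eV, assumed values):
print(max_kinetic_energy(6.0e14, 2.28))  # ~0.2 eV
```

A negative result means the photon energy is below the work function, so no electrons are emitted regardless of the light's intensity, which is the key quantum prediction.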
In 1913, the Danish physicist Niels Bohr followed Einstein's lead in applying
quantum theory, using it to explain the results of his studies of the hydrogen atomic
spectrum. Bohr proposed a new theory of the structure and properties of the atom, one that
in essence combined Planck's quantum theory with the atomic model put forward by Ernest
Rutherford in 1911. Bohr proposed that when an electron in an atomic orbit absorbs a
quantum of energy, it jumps outward to a higher orbit; conversely, when the electron emits
a quantum of energy, it falls to an orbit closer to the nucleus.
Using quantum theory, Bohr also derived a mathematical formula for calculating the
wavelengths of all the lines appearing in the hydrogen spectrum. The calculated values
agreed very closely with those obtained by direct experiment. For elements more complex
than hydrogen, however, Bohr's theory failed to predict the wavelengths of the spectral
lines. Nevertheless, it was recognized as a major step forward in explaining physical
phenomena at the atomic level. Planck's quantum theory gained acceptance because it could
explain various physical phenomena that classical theory at the time could not. In 1918
Planck received the Nobel Prize in Physics for his quantum theory. For his quantum-
theoretical explanation of the photoelectric effect, Einstein won the Nobel Prize in Physics
in 1921, and Bohr, who had followed Einstein in applying quantum theory to his atomic
model, was awarded the Nobel Prize in Physics in 1922.
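Bohr's formula for the hydrogen lines is equivalent to the Rydberg formula, 1/λ = R(1/n₁² − 1/n₂²). A minimal sketch of the calculation described above (the quantum numbers are example values):

```python
# Rydberg formula for hydrogen spectral lines:
#   1/lambda = R * (1/n1^2 - 1/n2^2),  with n2 > n1.
R = 1.097e7  # Rydberg constant, 1/m

def wavelength_nm(n1, n2):
    """Wavelength (nm) of the photon emitted when an electron falls n2 -> n1."""
    inverse_wavelength = R * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inverse_wavelength

# First Balmer line (n=3 -> n=2), the red H-alpha line:
print(round(wavelength_nm(2, 3), 1))  # ~656 nm, matching experiment
```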
These three Nobel Prizes in physics, awarded in near succession in the early
twentieth century, marked the broad recognition of the newly born theory of quantum
mechanics. The theory is of fundamental importance in physics. Among the scientific
developments of the twentieth century, the development of quantum mechanics is arguably
the most significant, even more so than Einstein's theory of relativity. Planck is therefore
regarded as the father of quantum mechanics, having shifted the focus of research from
macroscopic physics, which studies visible objects, to microscopic physics, which studies
subatomic objects. With this transformation of physics research at the start of the twentieth
century, attention turned to the study of the atom, and it is through the explanations of
quantum theory that we have come to understand the atom well.
As a consequence of this shift in the focus of physics, specialized disciplines such as
nuclear physics and solid-state physics emerged. Nuclear physics, despite its rather
controversial development, now offers a range of practical applications of great benefit to
everyday life; nuclear energy, for example, currently supplies about 17% of the world's
electricity. Developments in solid-state physics, meanwhile, have led to a revolution in
microelectronics and are now moving toward nanoelectronics.
APPLICATIONS OF QUANTUM PHYSICS
Quantum mechanics has had enormous success in explaining many of the features of
our world. The individual behaviour of the subatomic particles that make up all forms of
matter—electrons, protons, neutrons, photons and others—can often only be satisfactorily
described using quantum mechanics. Quantum mechanics has strongly influenced string
theory, a candidate for a theory of everything (see reductionism) and the multiverse
hypothesis. It is also related to statistical mechanics.
Quantum mechanics is important for understanding how individual atoms combine
covalently to form chemicals or molecules. The application of quantum mechanics to
chemistry is known as quantum chemistry. (Relativistic) quantum mechanics can in principle
mathematically describe most of chemistry. Quantum mechanics can provide quantitative
insight into ionic and covalent bonding processes by explicitly showing which molecules are
energetically favorable to which others, and by approximately how much. Most of the
calculations performed in computational chemistry rely on quantum mechanics.
Much of modern technology operates at a scale where quantum effects are significant.
Examples include the laser, the transistor (and thus the microchip), the electron microscope,
and magnetic resonance imaging. The study of semiconductors led to the invention of the
diode and the transistor, which are indispensable for modern electronics.
Researchers are currently seeking robust methods of directly manipulating quantum
states. Efforts are being made to develop quantum cryptography, which will allow guaranteed
secure transmission of information. A more distant goal is the development of quantum
computers, which are expected to perform certain computational tasks exponentially faster
than classical computers. Another active research topic is quantum teleportation, which deals
with techniques to transmit quantum states over arbitrary distances.
Quantum tunneling is vital in many devices, even in the simple light switch, as
otherwise the electrons in the electric current could not penetrate the potential barrier made
up of a layer of oxide. Flash memory chips found in USB drives use quantum tunneling to
erase their memory cells.
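The tunneling mentioned above can be estimated with the standard rectangular-barrier approximation T ≈ exp(−2κL), where κ = √(2m(V−E))/ħ. The barrier height and width below are illustrative assumptions on roughly the scale of an oxide layer, not values from the text:

```python
# Approximate transmission probability for an electron tunneling through
# a rectangular potential barrier: T ~ exp(-2 * kappa * L).
import math

hbar = 1.055e-34  # reduced Planck constant, J*s
m_e = 9.109e-31   # electron mass, kg
eV = 1.602e-19    # joules per electronvolt

def tunneling_probability(barrier_eV, energy_eV, width_m):
    """Rough probability that an electron tunnels through the barrier."""
    kappa = math.sqrt(2 * m_e * (barrier_eV - energy_eV) * eV) / hbar
    return math.exp(-2 * kappa * width_m)

# 1 nm oxide barrier, 3 eV high, electron at 1 eV (assumed example values):
print(tunneling_probability(3.0, 1.0, 1e-9))  # ~5e-7
```

The exponential dependence on width is why tunneling currents are so sensitive to oxide thickness in flash memory cells.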
1. LASER
Experiment with a laser (U.S. Air Force).
Light Amplification by Stimulated Emission of Radiation (LASER, or laser) is a
mechanism for emitting light within the electromagnetic spectrum via the process of
stimulated emission. The emitted laser light is usually a spatially coherent, narrow,
low-divergence beam that can be manipulated with lenses. In laser technology, "coherent
light" denotes a light source that emits in-step waves of identical frequency and phase. This
coherence distinguishes a laser from light sources that emit incoherent beams, whose phase
varies randomly with time and position. Laser light is typically monochromatic, occupying
a narrow band of the electromagnetic spectrum, although some lasers emit a broad spectrum
of light or operate simultaneously at several different wavelengths.
Terminology
From left to right: gamma rays, X-rays, ultraviolet rays, visible spectrum, infrared,
microwaves, radio waves.
The word laser was originally the upper-case acronym LASER, from Light Amplification
by Stimulated Emission of Radiation, wherein light broadly denotes electromagnetic
radiation of any frequency, not only the visible spectrum; hence infrared laser, ultraviolet
laser, X-ray laser, et cetera. Because the microwave predecessor of the laser, the maser, was
developed first, devices that emit microwave and radio frequencies are called "masers". In
the early technical literature, especially that of the Bell Telephone Laboratories researchers,
the laser was also called an optical maser, a term now uncommon; since 1998 even Bell
Laboratories has used laser. Linguistically, the back-formation verb to lase means "to
produce laser light" or "to apply laser light to". The word laser is sometimes inaccurately
applied to non-laser technologies, e.g. a coherent-state atom source is called an atom laser.
Design
Principal components:
1. Gain medium
2. Laser pumping energy
3. High reflector
4. Output coupler
5. Laser beam
A laser consists of a gain medium inside a highly reflective optical cavity, as well as a
means to supply energy to the gain medium. The gain medium is a material with properties
that allow it to amplify light by stimulated emission. In its simplest form, a cavity consists of
two mirrors arranged such that light bounces back and forth, each time passing through the
gain medium. Typically one of the two mirrors, the output coupler, is partially transparent.
The output laser beam is emitted through this mirror.
Light of a specific wavelength that passes through the gain medium is amplified
(increases in power); the surrounding mirrors ensure that most of the light makes many
passes through the gain medium, being amplified repeatedly. Part of the light that is between
the mirrors (that is, within the cavity) passes through the partially transparent mirror and
escapes as a beam of light.
The process of supplying the energy required for the amplification is called pumping.
The energy is typically supplied as an electrical current or as light at a different wavelength.
Such light may be provided by a flash lamp or perhaps another laser. Most practical lasers
contain additional elements that affect properties such as the wavelength of the emitted light
and the shape of the beam.
Laser physics
A helium-neon laser demonstration at the Kastler-Brossel Laboratory at Univ. Paris 6.
The glowing ray in the middle is an electric discharge producing light in much the same way
as a neon light. It is the gain medium through which the laser passes, not the laser beam itself,
which is visible there. The laser beam crosses the air and marks a red point on the screen to
the right.
Spectrum of a helium neon laser showing the very high spectral purity intrinsic to nearly all
lasers. Compare with the relatively broad spectral emittance of a light emitting diode.
The gain medium of a laser is a material of controlled purity, size, concentration, and
shape, which amplifies the beam by the process of stimulated emission. It can be of any state:
gas, liquid, solid or plasma. The gain medium absorbs pump energy, which raises some
electrons into higher-energy ("excited") quantum states. Particles can interact with light both
by absorbing photons or by emitting photons. Emission can be spontaneous or stimulated. In
the latter case, the photon is emitted in the same direction as the light that is passing by.
When the number of particles in one excited state exceeds the number of particles in some
lower-energy state, population inversion is achieved and the amount of stimulated emission
due to light that passes through is larger than the amount of absorption. Hence, the light is
amplified. By itself, this makes an optical amplifier. When an optical amplifier is placed
inside a resonant optical cavity, one obtains a laser.
The light generated by stimulated emission is very similar to the input signal in terms
of wavelength, phase, and polarization. This gives laser light its characteristic coherence, and
allows it to maintain the uniform polarization and often monochromaticity established by the
optical cavity design.
The optical cavity, a type of cavity resonator, contains a coherent beam of light
between reflective surfaces so that the light passes through the gain medium more than once
before it is emitted from the output aperture or lost to diffraction or absorption. As light
circulates through the cavity, passing through the gain medium, if the gain (amplification) in
the medium is stronger than the resonator losses, the power of the circulating light can rise
exponentially. But each stimulated emission event returns a particle from its excited state to
the ground state, reducing the capacity of the gain medium for further amplification. When
this effect becomes strong, the gain is said to be saturated. The balance of pump power
against gain saturation and cavity losses produces an equilibrium value of the laser power
inside the cavity; this equilibrium determines the operating point of the laser. If the chosen
pump power is too small, the gain is not sufficient to overcome the resonator losses, and the
laser will emit only very small light powers. The minimum pump power needed to begin laser
action is called the lasing threshold. The gain medium will amplify any photons passing
through it, regardless of direction; but only the photons aligned with the cavity manage to
pass more than once through the medium and so have significant amplification.
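The threshold behavior described above can be sketched as a simple round-trip power balance, assuming a two-mirror cavity with reflectivities R1 and R2 and a gain coefficient g over a medium of length L (all numbers below are assumed examples):

```python
# Lasing-threshold sketch: light lases when one cavity round trip at least
# conserves power, i.e. R1 * R2 * exp(2 * g * L) >= 1, where R1 and R2 are
# mirror reflectivities, g the gain coefficient (1/m), L the medium length.
import math

def above_threshold(R1, R2, gain_per_m, length_m):
    """True if the round-trip gain overcomes the mirror losses."""
    return R1 * R2 * math.exp(2 * gain_per_m * length_m) >= 1.0

print(above_threshold(0.99, 0.95, 0.1, 0.1))  # False -- gain too weak to lase
print(above_threshold(0.99, 0.95, 5.0, 0.1))  # True -- above threshold
```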
The beam in the cavity and the output beam of the laser, if they occur in free space
rather than waveguides (as in an optical fiber laser), are, at best, low order Gaussian beams.
However, this is rarely the case with powerful lasers. If the beam is not a low-order
Gaussian shape, its transverse modes can be described as a superposition of Hermite-
Gaussian or Laguerre-Gaussian beams (for stable-cavity lasers). Unstable laser resonators,
on the other hand, have been shown to produce fractal-shaped beams. The beam may be
highly collimated, that is, parallel without diverging. However, a perfectly collimated beam
cannot be created, due to diffraction. The beam remains collimated over a distance which
varies with the square of the beam diameter, and eventually diverges at an angle which varies
inversely with the beam diameter. Thus, a beam generated by a small laboratory laser such as
a helium-neon laser spreads to about 1.6 kilometers (1 mile) diameter if shone from the Earth
to the Moon. By comparison, the output of a typical semiconductor laser, due to its small
diameter, diverges almost as soon as it leaves the aperture, at an angle of anything up to 50°.
However, such a divergent beam can be transformed into a collimated beam by means of a
lens. In contrast, the light from non-laser light sources cannot be collimated by optics as well.
Although the laser phenomenon was discovered with the help of quantum physics, it is not
essentially more quantum mechanical than other light sources. The operation of a free
electron laser can be explained without reference to quantum mechanics.
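The divergence figures quoted above follow from Gaussian-beam optics, where the far-field half-angle is roughly λ/(πw₀). In this sketch, the ~10 cm effective beam radius is an assumption chosen so that the Earth-to-Moon example lands near the quoted ~1.6 km spot:

```python
# Far-field divergence of a Gaussian beam: theta ~ lambda / (pi * w0),
# where w0 is the beam waist radius. Example values are assumptions.
import math

def divergence_rad(wavelength_m, waist_radius_m):
    """Half-angle beam divergence in radians."""
    return wavelength_m / (math.pi * waist_radius_m)

def spot_diameter_m(wavelength_m, waist_radius_m, distance_m):
    """Approximate beam diameter after propagating a long distance."""
    return 2 * divergence_rad(wavelength_m, waist_radius_m) * distance_m

# HeNe laser (633 nm), ~10 cm expanded beam radius, Earth-Moon ~3.84e8 m:
print(spot_diameter_m(633e-9, 0.1, 3.84e8) / 1000)  # ~1.5 km
```

Note the inverse relation: a millimetre-scale raw beam would diverge a hundred times faster, which is why the Earth-Moon figure presumes an expanded beam.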
Modes of operation
The output of a laser may be a continuous constant-amplitude output (known as CW
or continuous wave); or pulsed, by using the techniques of Q-switching, modelocking, or
gain-switching. In pulsed operation, much higher peak powers can be achieved.
Some types of lasers, such as dye lasers and vibronic solid-state lasers, can produce light
over a broad range of wavelengths; this property makes them suitable for generating
extremely short pulses of light, on the order of a few femtoseconds (10⁻¹⁵ s).
Continuous wave operation
In the continuous wave (CW) mode of operation, the output of a laser is relatively
constant with respect to time. The population inversion required for lasing is continually
maintained by a steady pump source.
Pulsed operation
In the pulsed mode of operation, the output of a laser varies with respect to time,
typically taking the form of alternating 'on' and 'off' periods. In many applications the aim
is to deposit as much energy as possible at a given place in as short a time as possible. In
laser ablation, for example, a small volume of material at the surface of a workpiece can be
evaporated if it receives the energy required to heat it sufficiently in a very short time. If the
same energy is spread over a longer time, however, the heat may disperse into the bulk of
the piece, and less material evaporates. There are a number of methods for achieving this.
Q-switching
In a Q-switched laser, the population inversion (usually produced in the same way as
CW operation) is allowed to build up by making the cavity conditions (the 'Q') unfavorable
for lasing. Then, when the pump energy stored in the laser medium is at the desired level, the
'Q' is adjusted (electro- or acousto-optically) to favourable conditions, releasing the pulse.
This results in high peak powers as the average power of the laser (were it running in CW
mode) is packed into a shorter time frame.
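The peak-power advantage of Q-switching is simply the stored pulse energy divided by the much shorter pulse duration; a sketch with assumed example numbers:

```python
# Peak power of a pulsed laser: the same pulse energy delivered in a much
# shorter time gives a far higher instantaneous power.
def peak_power_w(pulse_energy_j, pulse_duration_s):
    """Peak power in watts for a pulse of given energy and duration."""
    return pulse_energy_j / pulse_duration_s

# A modest 100 mJ pulse compressed into 10 ns (assumed example values):
print(peak_power_w(0.1, 10e-9))  # ~1e7 W, i.e. 10 MW peak
```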
Modelocking
A modelocked laser emits extremely short pulses on the order of tens of picoseconds
down to less than 10 femtoseconds. These pulses are typically separated by the time that a
pulse takes to complete one round trip in the resonator cavity. Due to the Fourier limit (also
known as energy-time uncertainty), a pulse of such short temporal length has a spectrum
which contains a wide range of wavelengths. Because of this, the laser medium must have a
broad enough gain profile to amplify them all. An example of a suitable material is titanium-
doped, artificially grown sapphire (Ti:sapphire).
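The Fourier limit mentioned above can be made concrete with the Gaussian time-bandwidth product Δν·Δt ≈ 0.441; the 10 fs pulse below is an assumed example:

```python
# Fourier (time-bandwidth) limit for a transform-limited Gaussian pulse:
# delta_nu * delta_t >= ~0.441, so shorter pulses require broader spectra.
def min_bandwidth_hz(pulse_duration_s, time_bandwidth_product=0.441):
    """Minimum spectral width (Hz) for a Gaussian pulse of given duration."""
    return time_bandwidth_product / pulse_duration_s

print(min_bandwidth_hz(10e-15))  # ~4.4e13 Hz -- tens of THz of bandwidth
```

This is why only media with very broad gain profiles, such as Ti:sapphire, can support few-femtosecond pulses.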
The modelocked laser is a most versatile tool for researching processes happening at
extremely fast time scales also known as femtosecond physics, femtosecond chemistry and
ultrafast science, for maximizing the effect of nonlinearity in optical materials (e.g. in
second-harmonic generation, parametric down-conversion, optical parametric oscillators and
the like), and in ablation applications. Again, because of the short timescales involved, these
lasers can achieve extremely high powers.
Pulsed pumping
Another method of achieving pulsed laser operation is to pump the laser material with
a source that is itself pulsed, either through electronic charging in the case of flashlamps, or
another laser which is already pulsed. Pulsed pumping was historically used with dye lasers
where the inverted population lifetime of a dye molecule was so short that a high energy, fast
pump was needed. The way to overcome this problem was to charge up large capacitors
which are then switched to discharge through flashlamps, producing a broad spectrum pump
flash. Pulsed pumping is also required for lasers which disrupt the gain medium so much
during the laser process that lasing has to cease for a short period. These lasers, such as the
excimer laser and the copper vapour laser, can never be operated in CW mode.
Recent innovations
Graph showing the history of maximum laser pulse intensity throughout the past 40 years.
Since the early period of laser history, laser research has produced a variety of improved and
specialized laser types, optimized for different performance goals, including:
new wavelength bands
maximum average output power
maximum peak output power
minimum output pulse duration
maximum power efficiency
maximum charging
maximum firing
minimum cost
Lasing without maintaining a population inversion in the medium was discovered in
1992 in sodium gas, and again in 1995 in rubidium gas, by various international teams. This
was accomplished by using an external maser to induce "optical transparency" in the
medium: the ground-state electron transitions along two paths are made to interfere
destructively, cancelling the probability that the ground-state electrons absorb any energy.
Types and operating principles
Wavelengths of commercially available lasers. Laser types with distinct laser lines are shown
above the wavelength bar, while below are shown lasers that can emit in a wavelength range.
The color codifies the type of laser material (see the figure description for more details).
Gas lasers
Gas lasers using many gases have been built and used for many purposes. The
helium-neon laser (HeNe) emits at a variety of wavelengths; units operating at 633 nm are
very common in education because of their low cost. Carbon dioxide lasers can emit
hundreds of kilowatts at 9.6 µm and 10.6 µm, and are often used in industry for cutting and
welding. The efficiency of a CO2 laser is over 10%. Argon-ion lasers emit light in the range
351-528.7 nm. Depending on the optics and the laser tube, a different number of lines is
usable, but the most commonly used lines are 458 nm, 488 nm and 514.5 nm. A nitrogen transverse
the most commonly used lines are 458 nm, 488 nm and 514.5 nm. A nitrogen transverse
electrical discharge in gas at atmospheric pressure (TEA) laser is an inexpensive gas laser
producing UV light at 337.1 nm. Metal ion lasers are gas lasers that generate deep ultraviolet
wavelengths. Helium-silver (HeAg) 224 nm and neon-copper (NeCu) 248 nm are two
examples. These lasers have particularly narrow oscillation linewidths of less than 3 GHz
(0.5 picometers), making them candidates for use in fluorescence suppressed Raman
spectroscopy.
Chemical lasers
Chemical lasers are powered by a chemical reaction, and can achieve high powers in
continuous operation. For example, in the Hydrogen fluoride laser (2700-2900 nm) and the
Deuterium fluoride laser (3800 nm) the reaction is the combination of hydrogen or deuterium
gas with combustion products of ethylene in nitrogen trifluoride. They were invented by
George C. Pimentel.
Excimer lasers
Excimer lasers are powered by a chemical reaction involving an excited dimer, or
excimer, which is a short-lived dimeric or heterodimeric molecule formed from two species
(atoms), at least one of which is in an excited electronic state. They typically produce
ultraviolet light, and are used in semiconductor photolithography and in LASIK eye surgery.
Commonly used excimer molecules include F2 (fluorine, emitting at 157 nm), and noble gas
compounds (ArF [193 nm], KrCl [222 nm], KrF [248 nm], XeCl [308 nm], and XeF
[351 nm]).
Solid-state lasers
Solid-state laser materials are commonly made by "doping" a crystalline solid host
with ions that provide the required energy states. For example, the first working laser was a
ruby laser, made from ruby (chromium-doped corundum). The population inversion is
actually maintained in the "dopant", such as chromium or neodymium. Formally, the class of
solid-state lasers also includes fiber lasers, since the active medium (the fiber) is in the solid
state. In practice, however, the scientific literature usually uses solid-state laser to mean a
laser with a bulk active medium, while waveguide lasers are called fiber lasers.
"Semiconductor lasers" are also solid-state lasers, but in customary laser terminology,
"solid-state laser" excludes semiconductor lasers, which have their own name.
Neodymium is a common "dopant" in various solid-state laser crystals, including yttrium
orthovanadate (Nd:YVO4), yttrium lithium fluoride (Nd:YLF) and yttrium aluminium garnet
(Nd:YAG). All these lasers can produce high powers in the infrared spectrum at 1064 nm.
They are used for cutting, welding and marking of metals and other materials, and also in
spectroscopy and for pumping dye lasers. These lasers are also commonly frequency doubled,
tripled or quadrupled to produce 532 nm (green, visible), 355 nm (UV) and 266 nm (UV)
light when those wavelengths are needed.
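Frequency doubling, tripling, and quadrupling divide the wavelength by the harmonic order, which is how the 532 nm, 355 nm, and 266 nm outputs quoted above arise from 1064 nm:

```python
# Harmonic generation: the n-th harmonic has 1/n the fundamental wavelength.
def harmonic_nm(fundamental_nm, n):
    """Wavelength (nm) of the n-th harmonic of the fundamental."""
    return fundamental_nm / n

print(harmonic_nm(1064, 2))  # 532.0 nm (green, visible)
print(harmonic_nm(1064, 3))  # ~354.7 nm (UV, usually quoted as 355 nm)
print(harmonic_nm(1064, 4))  # 266.0 nm (UV)
```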
Ytterbium, holmium, thulium, and erbium are other common "dopants" in solid-state lasers.
Ytterbium is used in crystals such as Yb:YAG, Yb:KGW, Yb:KYW, Yb:SYS, Yb:BOYS,
Yb:CaF2, typically operating around 1020-1050 nm. They are potentially very efficient and
high powered due to a small quantum defect. Extremely high powers in ultrashort pulses can
be achieved with Yb:YAG. Holmium-doped YAG crystals emit at 2097 nm and form an
efficient laser operating at infrared wavelengths strongly absorbed by water-bearing tissues.
The Ho-YAG is usually operated in a pulsed mode, and passed through optical fiber surgical
devices to resurface joints, remove rot from teeth, vaporize cancers, and pulverize kidney and
gall stones.
Titanium-doped sapphire (Ti:sapphire) produces a highly tunable infrared laser, commonly
used for spectroscopy as well as the most common ultrashort pulse laser.
Thermal limitations in solid-state lasers arise from unconverted pump power that manifests
itself as heat and phonon energy. This heat, when coupled with a high thermo-optic
coefficient (dn/dT) can give rise to thermal lensing as well as reduced quantum efficiency.
These types of issues can be overcome by another novel diode-pumped solid-state laser, the
diode-pumped thin disk laser. The thermal limitations in this laser type are mitigated by using
a laser medium geometry in which the thickness is much smaller than the diameter of the
pump beam. This allows for a more even thermal gradient in the material. Thin disk lasers
have been shown to produce up to kilowatt levels of power.
Fiber-hosted lasers
Solid-state lasers where the light is guided due to the total internal reflection in an
optical fiber are called fiber lasers. Guiding of light allows extremely long gain regions
providing good cooling conditions; fibers have high surface area to volume ratio which
allows efficient cooling. In addition, the fiber's waveguiding properties tend to reduce
thermal distortion of the beam. Erbium and ytterbium ions are common active species in such
lasers.
Quite often, the fiber laser is designed as a double-clad fiber. This type of fiber
consists of a fiber core, an inner cladding and an outer cladding. The index of the three
concentric layers is chosen so that the fiber core acts as a single-mode fiber for the laser
emission while the outer cladding acts as a highly multimode core for the pump laser. This
lets the pump propagate a large amount of power into and through the active inner core
region, while still having a high numerical aperture (NA) to have easy launching conditions.
Pump light can be used more efficiently by creating a fiber disk laser, or a stack of such
lasers.
Fiber lasers have a fundamental limit: the intensity of the light in the fiber cannot be
so high that optical nonlinearities, induced by the local electric field strength, become
dominant and prevent laser operation or lead to the material destruction of the fiber. This
effect is called photodarkening. In bulk laser materials the cooling is less efficient, and it is
difficult to separate the effects of photodarkening from thermal effects, but experiments in
fibers show that the photodarkening can be attributed to the formation of long-lived color
centers.
Photonic crystal lasers
Photonic crystal lasers are lasers based on nano-structures that provide the mode
confinement and the density of optical states (DOS) structure required for the feedback to
take place. They are typically micrometre-sized and tunable on the bands of the photonic
crystals.
Semiconductor lasers
Semiconductor lasers are also solid-state lasers but have a different mode of laser
operation. Commercial laser diodes emit at wavelengths from 375 nm to 1800 nm, and
wavelengths of over 3 µm have been demonstrated. Low power laser diodes are used in laser
printers and CD/DVD players. More powerful laser diodes are frequently used to optically
pump other lasers with high efficiency. The highest power industrial laser diodes, with power
up to 10 kW (70 dBm), are used in industry for cutting and welding. External-cavity
semiconductor lasers have a semiconductor active medium in a larger cavity. These devices
can generate high power outputs with good beam quality, wavelength-tunable narrow-
linewidth radiation, or ultrashort laser pulses.
A 5.6 mm 'closed can' commercial laser diode, probably from a CD or DVD player.
Vertical cavity surface-emitting lasers (VCSELs) are semiconductor lasers whose
emission direction is perpendicular to the surface of the wafer. VCSEL devices typically have
a more circular output beam than conventional laser diodes, and potentially could be much
cheaper to manufacture. As of 2005, only 850 nm VCSELs are widely available, with
1300 nm VCSELs beginning to be commercialized and 1550 nm devices an area of research.
VECSELs are external-cavity VCSELs. Quantum cascade lasers are semiconductor lasers
that have an active transition between energy sub-bands of an electron in a structure
containing several quantum wells.
The development of a silicon laser is important in the field of optical computing.
Silicon is the material of choice for integrated circuits, and so electronic and silicon photonic
components (such as optical interconnects) could be fabricated on the same chip.
Unfortunately, silicon is a difficult lasing material to deal with, since it has certain properties
which block lasing. However, recently teams have produced silicon lasers through methods
such as fabricating the lasing material from silicon and other semiconductor materials, such
as indium(III) phosphide or gallium(III) arsenide, materials which allow coherent light to be
produced from silicon. These are called hybrid silicon laser. Another type is a Raman laser,
which takes advantage of Raman scattering to produce a laser from materials such as silicon.
Dye lasers
Dye lasers use an organic dye as the gain medium. The wide gain spectrum of available dyes
allows these lasers to be highly tunable, or to produce very short-duration pulses (on the order
of a few femtoseconds).
Free electron lasers
Free electron lasers, or FELs, generate coherent, high-power radiation that is widely tunable,
currently ranging in wavelength from microwaves, through terahertz radiation and infrared,
to the visible spectrum, to soft X-rays. They have the widest frequency range of any laser
type. While FEL beams share the same optical traits as other lasers, such as coherent
radiation, FEL operation is quite different. Unlike gas, liquid, or solid-state lasers, which rely
on bound atomic or molecular states, FELs use a relativistic electron beam as the lasing
medium, hence the term free electron.
Exotic laser media
In September 2007, the BBC News reported that there was speculation about the possibility
of using positronium annihilation to drive a very powerful gamma ray laser. Dr. David
Cassidy of the University of California, Riverside proposed that a single such laser could be
used to ignite a nuclear fusion reaction, replacing the hundreds of lasers used in typical
inertial confinement fusion experiments.
Space-based X-ray lasers pumped by a nuclear explosion have also been proposed as
antimissile weapons. Such devices would be one-shot weapons.
Uses
Lasers range in size from microscopic diode lasers (top) with numerous applications, to
football field sized neodymium glass lasers (bottom) used for inertial confinement fusion,
nuclear weapons research and other high energy density physics experiments.
When lasers were invented in 1960, they were called "a solution looking for a problem".
Since then, they have become ubiquitous, finding utility in thousands of highly varied
applications in every section of modern society, including consumer electronics, information
technology, science, medicine, industry, law enforcement, entertainment, and the military.
The first application of lasers visible in the daily lives of the general population was the
supermarket barcode scanner, introduced in 1974. The laserdisc player, introduced in 1978,
was the first successful consumer product to include a laser, but the compact disc player was
the first laser-equipped device to become truly common in consumers' homes, beginning in
1982, followed shortly by laser printers.
Some of the other applications include:
Medicine: Bloodless surgery, laser healing, surgical treatment, kidney stone
treatment, eye treatment, dentistry
Industry: Cutting, welding, material heat treatment, marking parts
Defense: Marking targets, guiding munitions, missile defence, electro-optical
countermeasures (EOCM), alternative to radar, blinding enemy troops.
Research: Spectroscopy, laser ablation, laser annealing, laser scattering, laser
interferometry, LIDAR, laser capture microdissection
Product development/commercial: laser printers, CDs, barcode scanners,
thermometers, laser pointers, holograms, bubblegrams.
Laser lighting displays: Laser light shows
Laser skin procedures such as acne treatment, cellulite reduction, and hair removal.
In 2004, excluding diode lasers, approximately 131,000 lasers were sold worldwide, with a
value of US$2.19 billion. In the same year, approximately 733 million diode lasers, valued at
$3.20 billion, were sold.
Examples by power
Different applications need lasers with different output powers. Lasers that produce a
continuous beam or a series of short pulses can be compared on the basis of their average
power. Lasers that produce pulses can also be characterized based on the peak power of each
pulse. The peak power of a pulsed laser is many orders of magnitude greater than its average
power. The average output power is always less than the power consumed.
The continuous or average power required for some uses:
less than 1 mW – laser pointers
5 mW – CD-ROM drive
5–10 mW – DVD player or DVD-ROM drive
100 mW – High-speed CD-RW burner
250 mW – Consumer DVD-R burner
1 W – green laser in current Holographic Versatile Disc prototype development
1–20 W – output of the majority of commercially available solid-state lasers used for
micro machining
30–100 W – typical sealed CO2 surgical lasers
100–3000 W (peak output 1.5 kW) – typical sealed CO2 lasers used in industrial laser
cutting
1 kW – Output power expected to be achieved by a prototype 1 cm diode laser bar
Examples of pulsed systems with high peak power:
700 TW (700×10¹² W) – National Ignition Facility, a 192-beam, 1.8-megajoule laser
system adjoining a 10-meter-diameter target chamber.
1.3 PW (1.3×10¹⁵ W) – world's most powerful laser as of 1998, located at the
Lawrence Livermore Laboratory.
Hobby uses
In recent years, some hobbyists have taken an interest in lasers. Lasers used by
hobbyists are generally of class IIIa or IIIb, although some have made their own class IV
types. Laser hobbyists are far less common than other electronics hobbyists, owing to the
cost and potential dangers involved. Because of the cost of lasers, some hobbyists use
inexpensive means to obtain them, such as extracting diodes from DVD burners.
Hobbyists also have been taking surplus pulsed lasers from retired military applications and
modifying them for pulsed holography. Pulsed ruby and pulsed YAG lasers have been used.
Laser safety
Warning symbol for lasers.
Even the first laser was recognized as being potentially dangerous. Theodore Maiman
characterized the first laser as having a power of one "Gillette" as it could burn through one
Gillette razor blade. Today, it is accepted that even low-power lasers with only a few
milliwatts of output power can be hazardous to human eyesight, when the beam from such a
laser hits the eye directly or after reflection from a shiny surface. At wavelengths which the
cornea and the lens can focus well, the coherence and low divergence of laser light means
that it can be focused by the eye into an extremely small spot on the retina, resulting in
localized burning and permanent damage in seconds or even less time.
Lasers are usually labeled with a safety class number, which identifies how dangerous the
laser is:
Class I/1 is inherently safe, usually because the light is contained in an enclosure, for
example in CD players.
Class II/2 is safe during normal use; the blink reflex of the eye will prevent damage.
Usually up to 1 mW power, for example laser pointers.
Class IIIa/3R lasers are usually up to 5 mW and involve a small risk of eye damage
within the time of the blink reflex. Staring into such a beam for several seconds is
likely to cause (minor) eye damage.
Class IIIb/3B can cause immediate severe eye damage upon exposure. Usually lasers
up to 500 mW, such as those in CD and DVD writers.
Class IV/4 lasers can burn skin, and in some cases, even scattered light can cause eye
and/or skin damage. Many industrial and scientific lasers are in this class.
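As a rough sketch, the visible-light continuous-wave thresholds listed above can be encoded in a small lookup. The function name and the hard cutoffs are simplifications of the real safety standards, and Class I is omitted because it is defined by enclosure rather than by output power:

```python
def laser_safety_class(power_mw: float) -> str:
    """Simplified classifier for visible-light, continuous-wave lasers,
    following the thresholds listed above (real standards add many caveats)."""
    if power_mw <= 1:
        return "Class II/2"     # blink reflex prevents damage
    elif power_mw <= 5:
        return "Class IIIa/3R"  # small risk within blink-reflex time
    elif power_mw <= 500:
        return "Class IIIb/3B"  # immediate severe eye damage possible
    else:
        return "Class IV/4"     # can burn skin; scattered light hazardous

print(laser_safety_class(0.5))  # a typical laser pointer -> Class II/2
print(laser_safety_class(250))  # a DVD-R burner diode -> Class IIIb/3B
```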
The indicated powers are for visible-light, continuous-wave lasers. For pulsed lasers and
invisible wavelengths, other power limits apply. People working with class 3B and class 4
lasers can protect their eyes with safety goggles which are designed to absorb light of a
particular wavelength.
Certain infrared lasers with wavelengths beyond about 1.4 micrometres are often referred to
as being "eye-safe". This is because the intrinsic molecular vibrations of water molecules
very strongly absorb light in this part of the spectrum, and thus a laser beam at these
wavelengths is attenuated so completely as it passes through the eye's cornea that no light
remains to be focused by the lens onto the retina. The label "eye-safe" can be misleading,
however, as it only applies to relatively low power continuous wave beams and any high
power or Q-switched laser at these wavelengths can burn the cornea, causing severe eye
damage.
Lasers as weapons
Laser beams are famously employed as weapon systems in science fiction, but actual
laser weapons are only beginning to enter the market. The general idea of laser-beam
weaponry is to hit a target with a train of brief pulses of light. The rapid evaporation and
expansion of the surface causes shockwaves that damage the target.
The power needed to project a high-powered laser beam of this kind is beyond the reach of
current mobile power technology. Public prototypes are chemically powered gas dynamic
lasers.
Lasers of all but the lowest powers can potentially be used as incapacitating weapons,
through their ability to produce temporary or permanent vision loss in varying degrees when
aimed at the eyes. The degree, character, and duration of vision impairment caused by eye
exposure to laser light varies with the power of the laser, the wavelength(s), the collimation
of the beam, the exact orientation of the beam, and the duration of exposure. Lasers of even a
fraction of a watt in power can produce immediate, permanent vision loss under certain
conditions, making such lasers potential non-lethal but incapacitating weapons. The extreme
handicap that laser-induced blindness represents makes the use of lasers even as non-lethal
weapons morally controversial, and weapons designed to cause blindness have been banned
by the Protocol on Blinding Laser Weapons.
In the field of aviation, the hazards of exposure to ground-based lasers deliberately
aimed at pilots have grown to the extent that aviation authorities have special procedures to
deal with such hazards. On March 18, 2009 Northrop Grumman announced that its engineers
in Redondo Beach had successfully built and tested an electric laser capable of producing a
100-kilowatt ray of light, powerful enough to destroy an airplane or a tank. An electric laser
is theoretically capable, according to Brian Strickland, manager for the United States Army's
Joint High Power Solid State Laser program, of being mounted in an aircraft, ship, or vehicle
because it requires much less space for its supporting equipment than a chemical laser.
Applications
In manufacturing, lasers are used for cutting, bending, and welding metal and other
materials, and for "marking"—producing visible patterns such as letters by changing the
properties of a material or by inscribing its surface. In science, lasers are used for many
applications. One of the more common is laser spectroscopy, which typically takes advantage
of the laser's well-defined wavelength or the possibility of generating very short pulses of
light. Lasers are used by the military for range-finding, target designation, and illumination.
Lasers have also begun to be tested for directed-energy weapons. Lasers are used in medicine
for surgery, diagnostics, and therapeutic applications.
Fictional predictions
Before stimulated emission was discovered, novelists described machines that we can now
identify as "lasers".
A laser-like device was described in Alexey Tolstoy's sci-fi novel The Hyperboloid of
Engineer Garin in 1927.
Mikhail Bulgakov exaggerated the biological effect (laser bio stimulation) of
intensive red light in his sci-fi novel Fatal Eggs (1925), without any reasonable
description of the source of this red light. (In that novel, the red light first appears
occasionally from the illuminating system of an advanced microscope; then the
protagonist Prof. Persikov arranges the special set-up for generation of the red light.)
2. TRANSISTOR
A transistor is a semiconductor device commonly used to amplify or switch electronic
signals. A transistor is made of a solid piece of a semiconductor material, with at least three
terminals for connection to an external circuit. A voltage or current applied to one pair of the
transistor's terminals changes the current flowing through another pair of terminals. Because
the controlled (output) power can be much more than the controlling (input) power, the
transistor provides amplification of a signal. Some transistors are packaged individually but
most are found in integrated circuits. The transistor is the fundamental building block of
modern electronic devices, and its presence is ubiquitous in modern electronic systems.
Importance
The transistor is considered by many to be one of the greatest inventions of the twentieth
century. The transistor is the key active component in practically all modern electronics. Its
importance in today's society rests on its ability to be mass produced using a highly
automated process (fabrication) that achieves astonishingly low per-transistor costs.
Although several companies each produce over a billion individually-packaged
(known as discrete) transistors every year, the vast majority of transistors produced are in
integrated circuits (often shortened to IC, microchips or simply chips) along with diodes,
resistors, capacitors and other electronic components to produce complete electronic circuits.
A logic gate consists of up to about twenty transistors whereas an advanced microprocessor,
as of 2006, can use as many as 1.7 billion transistors (MOSFETs). "About 60 million
transistors were built this year [2002] ... for [each] man, woman, and child on Earth."
The transistor's low cost, flexibility, and reliability have made it a ubiquitous device.
Transistorized mechatronic circuits have replaced electromechanical devices in controlling
appliances and machinery. It is often easier and cheaper to use a standard microcontroller and
write a computer program to carry out a control function than to design an equivalent
mechanical control function.
Usage
The bipolar junction transistor, or BJT, was the most commonly used transistor in the
1960s and 70s. Even after MOSFETs became widely available, the BJT remained the
transistor of choice for many analog circuits such as simple amplifiers because of its
greater linearity and ease of manufacture. Desirable properties of MOSFETs, such as their
utility in low-power devices, usually in the CMOS configuration, allowed them to capture
nearly all market share for digital circuits; more recently MOSFETs have captured most
analog and power applications as well, including modern clocked analog circuits, voltage
regulators, amplifiers, power transmitters, motor drivers, etc.
Simplified operation
The essential usefulness of a transistor comes from its ability to use a small signal
applied between one pair of its terminals to control a much larger signal at another pair of
terminals. This property is called gain. A transistor can control its output in proportion to the
input signal, that is, can act as an amplifier. Or, the transistor can be used to turn current on or
off in a circuit as an electrically controlled switch, where the amount of current is determined
by other circuit elements.
The two types of transistors have slight differences in how they are used in a circuit.
A bipolar transistor has terminals labeled base, collector, and emitter. A small current at the
base terminal (that is, flowing from the base to the emitter) can control or switch a much
larger current between the collector and emitter terminals. For a field-effect transistor, the
terminals are labeled gate, source, and drain, and a voltage at the gate can control a current
between source and drain.
In a typical bipolar transistor circuit, charge will flow between the emitter and
collector terminals depending on the current in the base. Since
internally the base and emitter connections behave like a semiconductor diode, a voltage drop
develops between base and emitter while the base current exists. The size of this voltage
depends on the material the transistor is made from, and is referred to as VBE.
Transistor as a switch
BJT used as an electronic switch, in grounded-emitter configuration.
Transistors are commonly used as electronic switches, for both high power
applications including switched-mode power supplies and low power applications such as
logic gates. In a grounded-emitter transistor circuit, such as the light-switch circuit shown, as
the base voltage rises the base and collector current rise exponentially, and the collector
voltage drops because of the collector load resistor. The relevant equations:
VRC = ICE × RC, the voltage across the load (the lamp with resistance RC)
VRC + VCE = VCC, the supply voltage shown as 6 V
If VCE could fall to 0 (perfect closed switch) then IC could go no higher than VCC / RC, even
with higher base voltage and current. The transistor is then said to be saturated. Hence, values
of input voltage can be chosen such that the output is either completely off, or completely on.
The transistor is acting as a switch, and this type of operation is common in digital circuits
where only "on" and "off" values are relevant.
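The switch equations above can be sketched numerically. The supply voltage matches the 6 V of the example circuit, while the lamp resistance is an assumed value:

```python
# Sketch of the grounded-emitter switch equations; the 6 V supply matches
# the example circuit, and the lamp resistance is an assumption.

V_CC = 6.0   # supply voltage (V)
R_C = 60.0   # assumed lamp/load resistance (ohms)

# Saturation: even with more base drive, the collector current cannot exceed
I_C_max = V_CC / R_C          # 0.1 A

# At saturation the full supply appears across the load:
V_RC = I_C_max * R_C          # V_RC = I_CE * R_C
V_CE = V_CC - V_RC            # V_RC + V_CE = V_CC  ->  ~0 V (closed switch)

print(I_C_max, V_RC, V_CE)    # 0.1 6.0 0.0
```

Driving the base hard enough to reach this limit gives the clean "completely on" state used in digital circuits.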
Transistor as an amplifier
Amplifier circuit, standard common-emitter configuration.
The common-emitter amplifier is designed so that a small change in the input
voltage (Vin) changes the small current through the base of the transistor; the transistor's
current amplification, combined with the properties of the circuit, means that small swings
in Vin produce large changes in Vout.
It is important that the operating values of the transistor are chosen and the circuit
designed such that as far as possible the transistor operates within a linear portion of the
graph, such as that shown between A and B, otherwise the output signal will suffer distortion.
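As an idealized illustration of this amplification (all component values below are assumptions, and biasing details are omitted), a small base-current swing produces a much larger output-voltage swing:

```python
# Idealized common-emitter sketch (all component values are assumptions):
# a small base-current swing is multiplied by beta, and the load resistor
# converts the collector-current swing into a large output-voltage swing.

beta = 100      # current gain, assumed
V_CC = 9.0      # supply voltage (V), assumed
R_C = 1_000.0   # collector load resistor (ohms), assumed

def v_out(i_base_a: float) -> float:
    """Output voltage for a given base current, within the linear region."""
    i_collector = beta * i_base_a
    return V_CC - i_collector * R_C

# A 10 uA swing in base current...
low, high = v_out(20e-6), v_out(30e-6)
print(high - low)   # ...moves the output by a full volt: -1.0
```

The negative sign reflects the inversion of a common-emitter stage: raising the input lowers the output.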
Various configurations of single transistor amplifier are possible, with some providing
current gain, some voltage gain, and some both. From mobile phones to televisions, vast
numbers of products include amplifiers for sound reproduction, radio transmission, and signal
processing. The first discrete transistor audio amplifiers barely supplied a few hundred
milliwatts, but power and audio fidelity gradually increased as better transistors became
available and amplifier architecture evolved.
Modern transistor audio amplifiers of up to a few hundred watts are common and
relatively inexpensive. Some musical instrument amplifier manufacturers mix transistors and
vacuum tubes in the same circuit, as some believe tubes have a distinctive sound.
Comparison with vacuum tubes
Prior to the development of transistors, vacuum (electron) tubes (or in the UK "thermionic
valves" or just "valves") were the main active components in electronic equipment.
Advantages
The key advantages that have allowed transistors to replace their vacuum tube predecessors
in most applications are:
Small size and minimal weight, allowing the development of miniaturized electronic
devices.
Highly automated manufacturing processes, resulting in low per-unit cost.
Lower possible operating voltages, making transistors suitable for small, battery-
powered applications.
No warm-up period for cathode heaters required after power application.
Lower power dissipation and generally greater energy efficiency.
Higher reliability and greater physical ruggedness.
Extremely long life. Some transistorized devices have been in service for more than
30 years.
Complementary devices available, facilitating the design of complementary-symmetry
circuits, something not possible with vacuum tubes.
Insensitivity to mechanical shock and vibration, thus avoiding the problem of
microphonics in audio applications.
Limitations
Silicon transistors do not operate at voltages higher than about 1,000 volts (SiC
devices can be operated as high as 3,000 volts). In contrast, electron tubes have been
developed that can be operated at tens of thousands of volts.
High power, high frequency operation, such as used in over-the-air television
broadcasting, is better achieved in electron tubes due to improved electron mobility in
a vacuum.
On average, a higher degree of amplification linearity can be achieved in electron
tubes as compared to equivalent solid state devices, a characteristic that may be
important in high fidelity audio reproduction.
Silicon transistors are much more sensitive than electron tubes to an electromagnetic
pulse, such as generated by an atmospheric nuclear explosion.
Types
Circuit symbols: PNP and NPN BJTs; P-channel and N-channel JFETs; JFET,
enhancement-mode MOSFET, and depletion-mode MOSFET (IGFET) symbols.
Transistors are categorized by
Semiconductor material: germanium, silicon, gallium arsenide, silicon carbide, etc.
Structure: BJT, JFET, IGFET (MOSFET), IGBT, "other types"
Polarity: NPN, PNP (BJTs); N-channel, P-channel (FETs)
Maximum power rating: low, medium, high
Maximum operating frequency: low, medium, high, radio frequency (RF), microwave
(The maximum effective frequency of a transistor is denoted by the term fT, an
abbreviation for "frequency of transition". The frequency of transition is the
frequency at which the transistor yields unity gain).
Application: switch, general purpose, audio, high voltage, super-beta, matched pair
Physical packaging: through hole metal, through hole plastic, surface mount, ball grid
array, power modules
Amplification factor hfe (transistor beta)[13]
Thus, a particular transistor may be described as silicon, surface mount, BJT, NPN, low
power, high frequency switch.
The 'BC' letters in a common transistor name like BC547B indicate the prefix class:
BC – Small-signal transistor ("all-round")
BF – High frequency, many MHz
BD – Withstands higher current and power
BA – Germanium
Bipolar junction transistor
The bipolar junction transistor (BJT) was the first type of transistor to be mass-
produced. Bipolar transistors are so named because they conduct by using both majority and
minority carriers. The three terminals of the BJT are named emitter, base, and collector. The
BJT consists of two p-n junctions: the base–emitter junction and the base–collector junction,
separated by a thin region of semiconductor known as the base region (two junction diodes
wired together without sharing an intervening semiconducting region will not make a
transistor). "The [BJT] is useful in amplifiers because the currents at the emitter and collector
are controllable by the relatively small base current." In an NPN transistor operating in the
active region, the emitter-base junction is forward biased (electrons and holes recombine at
the junction), and electrons are injected into the base region. Because the base is narrow,
most of these electrons will diffuse into the reverse-biased (electrons and holes are formed at,
and move away from the junction) base-collector junction and be swept into the collector;
perhaps one-hundredth of the electrons will recombine in the base, which is the dominant
mechanism in the base current. By controlling the number of electrons that can leave the
base, the number of electrons entering the collector can be controlled. Collector current is
approximately β (common-emitter current gain) times the base current. It is typically greater
than 100 for small-signal transistors but can be smaller in transistors designed for high-power
applications.
Unlike the FET, the BJT is a low-input-impedance device. Also, as the base–emitter
voltage (Vbe) is increased, the base–emitter current and hence the collector–emitter current (Ice)
increase exponentially according to the Shockley diode model and the Ebers-Moll model.
Because of this exponential relationship, the BJT has a higher transconductance than the
FET.
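A minimal sketch of this exponential relationship, using the Shockley form with an assumed saturation current:

```python
import math

# Sketch of the exponential V_BE -> I_C relationship (Shockley model).
# The saturation current I_S is an assumed but plausible small-signal value.

I_S = 1e-14      # saturation current (A), assumed
V_T = 0.02585    # thermal voltage kT/q at ~300 K (V)

def collector_current(v_be: float) -> float:
    return I_S * (math.exp(v_be / V_T) - 1.0)

# Roughly every 60 mV of extra V_BE multiplies the current by about 10:
i1 = collector_current(0.60)
i2 = collector_current(0.66)
print(i2 / i1)   # ~10

# The high transconductance follows from the exponential: gm = I_C / V_T.
gm = i1 / V_T
```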
Bipolar transistors can be made to conduct by exposure to light, since absorption of
photons in the base region generates a photocurrent that acts as a base current; the collector
current is approximately β times the photocurrent. Devices designed for this purpose have a
transparent window in the package and are called phototransistors.
Field-effect transistor
The field-effect transistor (FET), sometimes called a unipolar transistor, uses either
electrons (in N-channel FET) or holes (in P-channel FET) for conduction. The four terminals
of the FET are named source, gate, drain, and body (substrate). On most FETs, the body is
connected to the source inside the package, and this will be assumed for the following
description.
In FETs, the drain-to-source current flows via a conducting channel that connects the
source region to the drain region. The conductivity is varied by the electric field that is
produced when a voltage is applied between the gate and source terminals; hence the current
flowing between the drain and source is controlled by the voltage applied between the gate
and source. As the gate–source voltage (Vgs) is increased, the drain–source current (Ids)
increases exponentially for Vgs below threshold, and then at a roughly quadratic rate
(Ids ∝ (Vgs − VT)², where VT is the threshold voltage at which drain current begins) in
the "space-charge-limited" region above threshold. A quadratic behavior is not observed in
modern devices, for example, at the 65 nm technology node.
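The long-channel square law above threshold can be sketched as follows; the parameter values are arbitrary illustrative choices, and the exponential subthreshold current is ignored:

```python
# Sketch of the long-channel "square law" above threshold:
#   I_DS = k * (V_GS - V_T)**2   for V_GS > V_T
# k and V_T here are assumed illustrative values.

k = 0.5e-3   # transconductance parameter (A/V^2), assumed
V_T = 1.0    # threshold voltage (V), assumed

def drain_current(v_gs: float) -> float:
    if v_gs <= V_T:
        return 0.0   # ignoring the exponential subthreshold current
    return k * (v_gs - V_T) ** 2

# Doubling the overdrive voltage (V_GS - V_T) quadruples the current:
print(drain_current(3.0) / drain_current(2.0))   # 4.0
```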
For low noise at narrow bandwidth the higher input resistance of the FET is
advantageous. FETs are divided into two families: junction FET (JFET) and insulated gate
FET (IGFET). The IGFET is more commonly known as a metal–oxide–semiconductor FET
(MOSFET), reflecting its original construction from layers of metal (the gate), oxide (the
insulation), and semiconductor. Unlike IGFETs, the JFET gate forms a PN diode with the
channel which lies between the source and drain. Functionally, this makes the N-channel
JFET the solid state equivalent of the vacuum tube triode which, similarly, forms a diode
between its grid and cathode. Also, both devices operate in the depletion mode, they both
have a high input impedance, and they both conduct current under the control of an input
voltage.
Metal–semiconductor FETs (MESFETs) are JFETs in which the reverse biased PN
junction is replaced by a metal–semiconductor Schottky-junction. These, and the HEMTs
(high electron mobility transistors, or HFETs), in which a two-dimensional electron gas with
very high carrier mobility is used for charge transport, are especially suitable for use at very
high frequencies (microwave frequencies; several GHz).
Unlike bipolar transistors, FETs do not inherently amplify a photocurrent.
Nevertheless, there are ways to use them, especially JFETs, as light-sensitive devices, by
exploiting the photocurrents in channel–gate or channel–body junctions. FETs are further
divided into depletion-mode and enhancement-mode types, depending on whether the channel
is turned on or off with zero gate-to-source voltage. For enhancement mode, the channel is
off at zero bias, and a gate potential can "enhance" the conduction. For depletion mode, the
channel is on at zero bias, and a gate potential (of the opposite polarity) can "deplete" the
channel, reducing conduction. For either mode, a more positive gate voltage corresponds to a
higher current for N-channel devices and a lower current for P-channel devices. Nearly all
JFETs are depletion-mode as the diode junctions would forward bias and conduct if they
were enhancement mode devices; most IGFETs are enhancement-mode types.
Packaging
Through-hole transistors (tape measure marked in centimetres)
Transistors come in many different packages (chip carriers) (see images). The two
main categories are through-hole (or leaded), and surface-mount, also known as surface
mount device (SMD). The ball grid array (BGA) is the latest surface mount package
(currently only for large transistor arrays). It has solder "balls" on the underside in place of
leads. Because they are smaller and have shorter interconnections, SMDs have better high
frequency characteristics but lower power rating.
Transistor packages are made of glass, metal, ceramic, or plastic. The package often
dictates the power rating and frequency characteristics. Power transistors have larger
packages that can be clamped to heat sinks for enhanced cooling. Additionally, most power
transistors have the collector or drain physically connected to the metal can/metal plate. At
the other extreme, some surface-mount microwave transistors are as small as grains of sand.
Often a given transistor type is available in sundry packages. Transistor packages are mainly
standardized, but the assignment of a transistor's functions to the terminals is not: other
transistor types can assign other functions to the package's terminals. Even for the same
transistor type the terminal assignment can vary (normally indicated by a suffix letter to the
part number, e.g. BC212L and BC212K).
3. DIODE
Figure 1: Closeup of a diode, showing the square-shaped semiconductor crystal
Figure 2 : Structure of a vacuum tube diode
In electronics a diode is a two-terminal electronic component that conducts electric
current in only one direction. The term usually refers to a semiconductor diode, the most
common type today, which is a crystal of semiconductor connected to two electrical
terminals, a P-N junction. A vacuum tube diode, now little used, is a vacuum tube with two
electrodes: a plate and a cathode.
The most common function of a diode is to allow an electric current in one direction
(called the forward direction) while blocking current in the opposite direction (the reverse
direction). Thus, the diode can be thought of as an electronic version of a check valve. This
unidirectional behavior is called rectification, and is used to convert alternating current to
direct current, and to extract modulation from radio signals in radio receivers.
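The check-valve analogy can be sketched with an ideal diode model. This is a deliberate simplification: a real diode has a forward voltage drop and some reverse leakage.

```python
import math

# Sketch of rectification as a "check valve": an ideal diode passes current
# only in the forward direction, turning an AC waveform into pulsating DC.

def ideal_diode(v: float) -> float:
    """Pass positive (forward) voltages, block negative (reverse) ones."""
    return v if v > 0 else 0.0

ac = [math.sin(2 * math.pi * t / 20) for t in range(20)]  # one AC cycle
rectified = [ideal_diode(v) for v in ac]

print(min(ac) < 0)          # True: the input swings negative
print(min(rectified) >= 0)  # True: the half-wave output never does
```

Smoothing this pulsating output with a capacitor is the usual next step toward steady direct current.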
However, diodes can have more complicated behavior than this simple on-off action,
due to their complex non-linear electrical characteristics, which can be tailored by varying the
construction of their P-N junction. These are exploited in special purpose diodes that perform
many different functions. Diodes are used to regulate voltage (Zener diodes), electronically
tune radio and TV receivers (varactor diodes), generate radio frequency oscillations (tunnel
diodes), and produce light (light emitting diodes).
Diodes were the first semiconductor electronic devices. The discovery of crystals'
rectifying abilities was made by German physicist Ferdinand Braun in 1897. The first
semiconductor diodes, called cat's whisker diodes were made of crystals of minerals such as
galena. Today most diodes are made of silicon, but other semiconductors such as germanium
are sometimes used.
Thermionic and gaseous state diodes
Figure 3: The symbol for an indirect heated vacuum tube diode. From top to bottom, the
components are the anode, the cathode, and the heater filament.
Thermionic diodes are thermionic-valve devices (also known as vacuum tubes, tubes,
or valves), which are arrangements of electrodes surrounded by a vacuum within a glass
envelope. Early examples were fairly similar in appearance to incandescent light bulbs.
In thermionic valve diodes, a current through the heater filament indirectly heats the cathode,
another internal electrode treated with a mixture of barium and strontium oxides, which are
oxides of alkaline earth metals; these substances are chosen because they have a small work
function. (Some valves use direct heating, in which a tungsten filament acts as both heater
and cathode.) The heat causes thermionic emission of electrons into the vacuum. In forward
operation, a surrounding metal electrode called the anode is positively charged so that it
electrostatically attracts the emitted electrons. However, electrons are not easily released
from the unheated anode surface when the voltage polarity is reversed. Hence, any reverse
flow is negligible.
For much of the 20th century, thermionic valve diodes were used in analog signal
applications, and as rectifiers in many power supplies. Today, valve diodes are only used in
niche applications such as rectifiers in electric guitar and high-end audio amplifiers as well as
specialized high-voltage equipment.
Semiconductor diodes
A modern semiconductor diode is made of a crystal of semiconductor like silicon that
has impurities added to it to create a region on one side that contains negative charge carriers
(electrons), called n-type semiconductor, and a region on the other side that contains positive
charge carriers (holes), called p-type semiconductor. The diode's terminals are attached to
each of these regions. The boundary within the crystal between these two regions, called a
PN junction, is where the action of the diode takes place. The crystal conducts conventional
current in a direction from the p-type side (called the anode) to the n-type side (called the
cathode), but not in the opposite direction.
Another type of semiconductor diode, the Schottky diode, is formed from the contact
between a metal and a semiconductor rather than by a p-n junction.
Current–voltage characteristic
A semiconductor diode’s behavior in a circuit is given by its current–voltage
characteristic, or I–V curve (see graph at right). The shape of the curve is determined by the
transport of charge carriers through the so-called depletion layer or depletion region that
exists at the p-n junction between differing semiconductors. When a p-n junction is first
created, conduction band (mobile) electrons from the N-doped region diffuse into the P-
doped region where there is a large population of holes (places for electrons in which no
electron is present) with which the electrons “recombine”. When a mobile electron
recombines with a hole, both hole and electron vanish, leaving behind an immobile positively
charged donor (the dopant) on the N-side and negatively charged acceptor (the dopant) on the
P-side. The region around the p-n junction becomes depleted of charge carriers and thus
behaves as an insulator.
However, the width of the depletion region (called the depletion width) cannot grow
without limit. For each electron-hole pair that recombines, a positively-charged dopant ion is
left behind in the N-doped region, and a negatively charged dopant ion is left behind in the P-
doped region. As recombination proceeds and more ions are created, an increasing electric
field develops through the depletion zone which acts to slow and then finally stop
recombination. At this point, there is a “built-in” potential across the depletion zone.
If an external voltage is placed across the diode with the same polarity as the built-in
potential, the depletion zone continues to act as an insulator, preventing any significant
electric current flow (unless electron/hole pairs are actively being created in the junction by,
for instance, light; see photodiode). This is the reverse bias phenomenon. However, if the
polarity of the external voltage opposes the built-in potential, recombination can once again
proceed, resulting in substantial electric current through the p-n junction (i.e. substantial
numbers of electrons and holes recombine at the junction). For silicon diodes, the built-in
potential is approximately 0.6 V. Thus, if an external current is passed through the diode,
about 0.6 V will be developed across the diode such that the P-doped region is positive with
respect to the N-doped region and the diode is said to be “turned on” as it has a forward bias.
Figure 4: I–V characteristics of a P-N junction diode (not to scale).
A diode’s I–V characteristic can be approximated by four regions of operation (see the figure
at right). At very large reverse bias, beyond the peak inverse voltage or PIV, a process called
reverse breakdown occurs which causes a large increase in current (i.e. a large number of
electrons and holes are created at, and move away from the pn junction) that usually damages
the device permanently. The avalanche diode is deliberately designed for use in the avalanche
region. In the zener diode, the concept of PIV is not applicable. A zener diode contains a
heavily doped p-n junction allowing electrons to tunnel from the valence band of the p-type
material to the conduction band of the n-type material, such that the reverse voltage is
“clamped” to a known value (called the zener voltage), and avalanche does not occur. Both
devices, however, do have a limit to the maximum current and power in the clamped reverse
voltage region. Also, following the end of forward conduction in any diode, there is reverse
current for a short time. The device does not attain its full blocking capability until the
reverse current ceases.
The second region, at reverse biases more positive than the PIV, has only a very small
reverse saturation current. In the reverse bias region for a normal P-N rectifier diode, the
current through the device is very low (in the µA range). However, this is temperature
dependent, and at sufficiently high temperatures, a substantial amount of reverse current can be
observed (mA or more).
The third region is forward but small bias, where only a small forward current is conducted.
As the potential difference is increased above an arbitrarily defined “cut-in voltage” or “on-
voltage” or “diode forward voltage drop (Vd)”, the diode current becomes appreciable (the
level of current considered “appreciable” and the value of cut-in voltage depends on the
application), and the diode presents a very low resistance.
The current–voltage curve is exponential. In a normal silicon diode at rated currents,
the arbitrary “cut-in” voltage is defined as 0.6 to 0.7 volts. The value is different for other
diode types — Schottky diodes can be rated as low as 0.2 V and red or blue light-emitting
diodes (LEDs) can have values of 1.4 V and 4.0 V respectively.
At higher currents the forward voltage drop of the diode increases. A drop of 1 V to
1.5 V is typical at full rated current for power diodes.
Shockley diode equation
The Shockley ideal diode equation or the diode law (named after transistor co-inventor
William Bradford Shockley, not to be confused with tetrode inventor Walter H. Schottky)
gives the I–V characteristic of an ideal diode in either forward or reverse bias (or no bias).
The equation is:
I = IS · (e^(VD / (n·VT)) − 1)
where
I is the diode current,
IS is the reverse bias saturation current,
VD is the voltage across the diode,
VT is the thermal voltage, and
n is the emission coefficient, also known as the ideality factor. The emission
coefficient n varies from about 1 to 2 depending on the fabrication process and
semiconductor material and in many cases is assumed to be approximately equal to 1
(thus the notation n is omitted).
The thermal voltage VT is approximately 25.85 mV at 300 K, a temperature close to “room
temperature” commonly used in device simulation software. At any temperature it is a known
constant defined by:
VT = kT / q
where k is the Boltzmann constant, T is the absolute temperature of the p-n junction, and q is
the magnitude of charge on an electron (the elementary charge).
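Since k and q are fixed physical constants, VT follows directly from the junction temperature. A minimal sketch in Python, using the CODATA exact values for the constants:

```python
# Thermal voltage V_T = k*T/q (constants are CODATA exact values).
k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C

def thermal_voltage(T):
    """Return V_T in volts for absolute temperature T in kelvins."""
    return k * T / q

print(thermal_voltage(300) * 1000)  # ≈ 25.85 mV at 300 K
```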
The Shockley ideal diode equation or the diode law is derived with the assumption that the
only processes giving rise to current in the diode are drift (due to electrical field), diffusion,
and thermal recombination-generation. It also assumes that the recombination-generation (R-
G) current in the depletion region is insignificant. This means that the Shockley equation
doesn’t account for the processes involved in reverse breakdown and photon-assisted R-G.
Additionally, it doesn’t describe the “leveling off” of the I–V curve at high forward bias due
to internal resistance.
Under reverse bias voltages (see Figure 4) the exponential in the diode equation is
negligible, and the current is a constant (negative) reverse current value of −IS. The reverse
breakdown region is not modeled by the Shockley diode equation.
For even rather small forward bias voltages (see Figure 4) the exponential is very large
because the thermal voltage is very small, so the subtracted ‘1’ in the diode equation is
negligible and the forward diode current is often approximated as
I ≈ IS · e^(VD / (n·VT))
The use of the diode equation in circuit problems is illustrated in the article on diode
modeling.
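The diode law and its forward-bias approximation can be evaluated directly. A minimal sketch, where the saturation current IS = 1 pA and n = 1 are illustrative assumptions, not values from the text:

```python
import math

V_T = 0.02585  # thermal voltage at 300 K, volts

def diode_current(v_d, i_s=1e-12, n=1.0):
    """Shockley diode law: I = I_S * (exp(V_D / (n * V_T)) - 1)."""
    return i_s * (math.exp(v_d / (n * V_T)) - 1.0)

def diode_current_forward(v_d, i_s=1e-12, n=1.0):
    """Forward-bias approximation: the subtracted 1 is negligible."""
    return i_s * math.exp(v_d / (n * V_T))

print(diode_current(-1.0))   # reverse bias: ≈ -I_S = -1e-12 A
print(diode_current(0.6))    # forward bias: roughly 12 mA for this I_S
```

Note how the current saturates at −IS for any appreciable reverse voltage, while a forward voltage near the 0.6 V built-in potential already produces milliamps.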
Small-signal behaviour
For circuit design, a small-signal model of the diode behavior often proves useful. A specific
example of diode modeling is discussed in the article on small-signal circuits.
Types of semiconductor diode
Figure 5: Some diode symbols (diode, Zener diode, Schottky diode, tunnel diode,
light-emitting diode, photodiode, varicap, silicon controlled rectifier).
Figure 6: Typical diode packages in same alignment as diode symbol. Thin bar depicts the
cathode.
There are several types of junction diodes. They either emphasize a different
physical aspect of a diode (often by geometric scaling, doping level, or choice of
electrodes), are applications of a diode in a special circuit, or are really different devices
such as the Gunn diode, the laser diode, and the MOSFET:
Normal (p-n) diodes, which operate as described above, are usually made of doped
silicon or, more rarely, germanium. Before the development of modern silicon power rectifier
diodes, cuprous oxide and later selenium were used; their low efficiency gave them a much higher
forward voltage drop (typically 1.4–1.7 V per “cell”, with multiple cells stacked to increase
the peak inverse voltage rating in high voltage rectifiers), and required a large heat sink (often
an extension of the diode’s metal substrate), much larger than a silicon diode of the same
current ratings would require. The vast majority of all diodes are the p-n diodes found in
CMOS integrated circuits, which include two diodes per pin and many other internal diodes.
Avalanche diodes
Diodes that conduct in the reverse direction when the reverse bias voltage exceeds the
breakdown voltage. These are electrically very similar to Zener diodes, and are often
mistakenly called Zener diodes, but break down by a different mechanism, the avalanche
effect. This occurs when the reverse electric field across the p-n junction causes a wave of
ionization, reminiscent of an avalanche, leading to a large current. Avalanche diodes are
designed to break down at a well-defined reverse voltage without being destroyed. The
difference between the avalanche diode (which has a reverse breakdown above about 6.2 V)
and the Zener is that the channel length of the former exceeds the “mean free path” of the
electrons, so there are collisions between them on the way out. The only practical difference
is that the two types have temperature coefficients of opposite polarities.
Constant current diodes
These are actually a JFET with the gate shorted to the source, and function as a two-
terminal current limiter, analogous to the Zener diode, which limits voltage. They allow the
current through them to rise to a certain value, and then level off at a specific value. Also
called CLDs, constant-current diodes, diode-connected transistors, or current-regulating
diodes.
Esaki or tunnel diodes
These have a region of operation showing negative resistance caused by quantum
tunneling, thus allowing amplification of signals and very simple bistable circuits. These
diodes are also the type most resistant to nuclear radiation.
Gunn diodes
These are similar to tunnel diodes in that they are made of materials such as GaAs or
InP that exhibit a region of negative differential resistance. With appropriate biasing, dipole
domains form and travel across the diode, allowing high frequency microwave oscillators to
be built.
Light-emitting diodes (LEDs)
In a diode formed from a direct band-gap semiconductor, such as gallium arsenide,
carriers that cross the junction emit photons when they recombine with the majority carrier on
the other side. Depending on the material, wavelengths (or colors) from the infrared to the
near ultraviolet may be produced. The forward potential of these diodes depends on the
wavelength of the emitted photons: 1.2 V corresponds to red, 2.4 V to violet. The first LEDs
were red and yellow, and higher-frequency diodes have been developed over time. All LEDs
produce incoherent, narrow-spectrum light; “white” LEDs are actually combinations of three
LEDs of different colors, or a blue LED with a yellow scintillator coating. LEDs can also be
used as low-efficiency photodiodes in signal applications. An LED may be paired with a
photodiode or phototransistor in the same package, to form an opto-isolator.
Laser diodes
When an LED-like structure is contained in a resonant cavity formed by polishing the
parallel end faces, a laser can be formed. Laser diodes are commonly used in optical storage
devices and for high speed optical communication.
Photodiodes
All semiconductors are subject to optical charge carrier generation. This is typically
an undesired effect, so most semiconductors are packaged in light blocking material.
Photodiodes are intended to sense light (photodetectors), so they are packaged in materials that
allow light to pass, and are usually PIN (the kind of diode most sensitive to light). A
photodiode can be used in solar cells, in photometry, or in optical communications. Multiple
photodiodes may be packaged in a single device, either as a linear array or as a two-
dimensional array. These arrays should not be confused with charge-coupled devices.
Point-contact diodes
These work the same as the junction semiconductor diodes described above, but their
construction is simpler. A block of n-type semiconductor is built, and a conducting sharp-
point contact made with some group-3 metal is placed in contact with the semiconductor.
Some metal migrates into the semiconductor to make a small region of p-type semiconductor
near the contact. The long-popular 1N34 germanium version is still used in radio receivers as
a detector and occasionally in specialized analog electronics.
PIN diodes
A PIN diode has a central un-doped, or intrinsic, layer, forming a p-type/intrinsic/n-
type structure. They are used as radio frequency switches and attenuators. They are also used
as large volume ionizing radiation detectors and as photodetectors. PIN diodes are also used
in power electronics, as their central layer can withstand high voltages. Furthermore, the PIN
structure can be found in many power semiconductor devices, such as IGBTs, power
MOSFETs, and thyristors.
Zener diodes
Diodes that can be made to conduct backwards. This effect, called Zener breakdown,
occurs at a precisely defined voltage, allowing the diode to be used as a precision voltage
reference. In practical voltage reference circuits Zener and switching diodes are connected in
series and opposite directions to balance the temperature coefficient to near zero. Some
devices labeled as high-voltage Zener diodes are actually avalanche diodes (see above). Two
(equivalent) Zeners in series and in reverse order, in the same package, constitute a transient
absorber (or Transorb, a registered trademark). The Zener diode is named for Dr. Clarence
Melvin Zener of Southern Illinois University, inventor of the device.
Applications
Radio demodulation
The first use for the diode was the demodulation of amplitude modulated (AM) radio
broadcasts. The history of this discovery is treated in depth in the radio article. In summary,
an AM signal consists of alternating positive and negative peaks of voltage, whose amplitude
or “envelope” is proportional to the original audio signal. The diode (originally a crystal
diode) rectifies the AM radio frequency signal, leaving an audio signal which is the original
audio signal, minus atmospheric noise. The audio is extracted using a simple filter and fed
into an audio amplifier or transducer, which generates sound waves.
Power conversion
Rectifiers are constructed from diodes, and are used to convert alternating current
(AC) electricity into direct current (DC). Automotive alternators are a common example,
where the diode, which rectifies the AC into DC, provides better performance than the
commutator of earlier dynamos. Similarly, diodes are also used in Cockcroft–Walton voltage
multipliers to convert AC into higher DC voltages.
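The rectifying action described above can be sketched with the simplest diode model, a fixed forward drop. The 0.7 V drop and the 10 V input amplitude below are illustrative assumptions:

```python
import math

V_F = 0.7  # assumed silicon forward voltage drop, volts

def half_wave_rectify(v_in):
    """Single-diode rectifier model: conducts only when the input
    exceeds the forward drop; blocks the negative half-cycle."""
    return max(0.0, v_in - V_F)

# One cycle of a 10 V peak sine wave, 20 samples
wave = [10 * math.sin(2 * math.pi * t / 20) for t in range(20)]
out = [half_wave_rectify(v) for v in wave]
print(max(out))                          # peak output ≈ 9.3 V
print(all(v == 0.0 for v in out[10:]))   # negative half-cycle blocked: True
```

A full-wave bridge rectifier would use four such diodes so that both half-cycles contribute to the DC output.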
Over-voltage protection
Diodes are frequently used to conduct damaging high voltages away from sensitive electronic
devices. They are usually reverse-biased (non-conducting) under normal circumstances.
When the voltage rises above the normal range, the diodes become forward-biased
(conducting). For example, diodes are used in (stepper motor and H-bridge) motor controller
and relay circuits to de-energize coils rapidly without the damaging voltage spikes that would
otherwise occur. (Any diode used in such an application is called a flyback diode). Many
integrated circuits also incorporate diodes on the connection pins to prevent external voltages
from damaging their sensitive transistors. Specialized diodes are used to protect from over-
voltages at higher power (see Diode types above).
Logic gates
Diodes can be combined with other components to construct AND and OR logic gates.
This is referred to as diode logic.
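A rough model of such diode logic, assuming idealized diodes with a fixed 0.7 V drop and a 5 V supply (both values illustrative):

```python
# Idealized diode-resistor logic (assumed 0.7 V drop, 5 V supply).
V_SUPPLY = 5.0
V_DROP = 0.7

def diode_or(*inputs):
    """Diodes from each input to an output pulled down by a resistor:
    the output follows the highest input, less one diode drop."""
    return max(max(inputs) - V_DROP, 0.0)

def diode_and(*inputs):
    """Diodes from an output pulled up to the supply towards each input:
    the lowest input drags the output down to one diode drop above it."""
    return min(min(inputs) + V_DROP, V_SUPPLY)

print(diode_or(0.0, 5.0))   # ≈ 4.3 V, reads as logic high
print(diode_and(0.0, 5.0))  # ≈ 0.7 V, reads as logic low
```

The accumulating diode drops are why diode logic cannot be cascaded indefinitely without amplifying stages.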
Ionizing radiation detectors
In addition to light, mentioned above, semiconductor diodes are sensitive to more
energetic radiation. In electronics, cosmic rays and other sources of ionizing radiation cause
noise pulses and single and multiple bit errors. This effect is sometimes exploited by particle
detectors to detect radiation. A single particle of radiation, with thousands or millions of
electron volts of energy, generates many charge carrier pairs, as its energy is deposited in the
semiconductor material. If the depletion layer is large enough to catch the whole shower or to
stop a heavy particle, a fairly accurate measurement of the particle’s energy can be made,
simply by measuring the charge conducted, without the complexity of a magnetic
spectrometer. These semiconductor radiation detectors need efficient and uniform
charge collection and low leakage current. They are often cooled by liquid nitrogen. For
longer range (about a centimetre) particles they need a very large depletion depth and large
area. For short range particles, they need any contact or un-depleted semiconductor on at least
one surface to be very thin. The back-bias voltages are near breakdown (around a thousand
volts per centimetre). Germanium and silicon are common materials. Some of these detectors
sense position as well as energy. They have a finite life, especially when detecting heavy
particles, because of radiation damage. Silicon and germanium are quite different in their
ability to convert gamma rays to electron showers.
Semiconductor detectors for high energy particles are used in large numbers. Because of
energy loss fluctuations, accurate measurement of the energy deposited is of less use.
Temperature measurements
A diode can be used as a temperature measuring device, since the forward voltage drop
across the diode depends on temperature, as in a Silicon bandgap temperature sensor. From
the Shockley ideal diode equation given above, it appears the voltage has a positive
temperature coefficient (at a constant current) but depends on doping concentration and
operating temperature (Sze 2007). The temperature coefficient can be negative as in typical
thermistors or positive for temperature sense diodes down to about 20 kelvins. Typically,
silicon diodes have approximately −2 mV/˚C temperature coefficient at room temperature.
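Such a sensor can be linearized around room temperature. Only the −2 mV/˚C slope comes from the text above; the 0.6 V reference drop at 25 ˚C is an assumed, illustrative calibration point:

```python
# Linearized diode thermometer. Only the -2 mV/degC slope comes from
# the text; the 0.6 V @ 25 degC reference point is an assumed value.
V_REF = 0.600    # forward drop at the reference temperature, volts
T_REF = 25.0     # reference temperature, degrees C
TEMPCO = -0.002  # volts per degree C

def temperature_from_vf(v_f):
    """Estimate junction temperature from a measured forward drop."""
    return T_REF + (v_f - V_REF) / TEMPCO

print(temperature_from_vf(0.580))  # drop fell 20 mV -> ≈ 35 degC
```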
Current steering
Diodes will prevent currents in unintended directions. To supply power to an electrical
circuit during a power failure, the circuit can draw current from a battery. An Uninterruptible
power supply may use diodes in this way to ensure that current is only drawn from the battery
when necessary. Similarly, small boats typically have two circuits each with their own
battery/batteries: one used for engine starting; one used for domestics. Normally both are
charged from a single alternator, and a heavy duty split charge diode is used to prevent the
higher charge battery (typically the engine battery) from discharging through the lower
charged battery when the alternator is not running.
Diodes are also used in electronic musical keyboards. To reduce the amount of wiring
needed in electronic musical keyboards, these instruments often use keyboard matrix circuits.
The keyboard controller scans the rows and columns to determine which note the player has
pressed. The problem with matrix circuits is that when several notes are pressed at once, the
current can flow backwards through the circuit and trigger "phantom keys" that cause “ghost”
notes to play. To avoid triggering unwanted notes, most keyboard matrix circuits have diodes
soldered with the switch under each key of the musical keyboard. The same principle is also
used for the switch matrix in solid state pinball machines.
4. QUANTUM TUNNELLING
Wave-mechanical tunneling (also called quantum-mechanical tunneling,
quantum tunneling, and the tunnel effect) is an evanescent wave coupling effect that occurs
in the context of quantum mechanics because the behaviour of particles is governed by
Schrödinger's wave-equation. All wave equations exhibit evanescent wave coupling effects if
the conditions are right. Wave coupling effects, mathematically equivalent to those called
"tunneling" in quantum mechanics, can occur with Maxwell's wave-equation (both with light
and with microwaves), and with the common non-dispersive wave-equation often applied (for
example) to waves on strings and to acoustics.
For these effects to occur there must be a situation where a thin region of "medium
type 2" is sandwiched between two regions of "medium type 1", and the properties of these
media have to be such that the wave equation has "traveling-wave" solutions in medium type
1, but "real exponential solutions" (rising and falling) in medium type 2. In optics, medium
type 1 might be glass, medium type 2 might be vacuum. In quantum mechanics, in
connection with motion of a particle, medium type 1 is a region of space where the particle
total energy is greater than its potential energy, medium type 2 is a region of space (known as
the "barrier") where the particle total energy is less than its potential energy - for further
explanation see the section on "Schrödinger equation - tunnelling basics" below.
If conditions are right, amplitude from a traveling wave, incident on medium type 2
from medium type 1, can "leak through" medium type 2 and emerge as a traveling wave in
the second region of medium type 1 on the far side. If the second region of medium type 1 is
not present, then the traveling wave incident on medium type 2 is totally reflected, although it
does penetrate into medium type 2 to some extent. Depending on the wave equation being
used, the leaked amplitude is interpreted physically as traveling energy or as a traveling
particle, and, numerically, the ratio of the square of the leaked amplitude to the square of the
incident amplitude gives the proportion of incident energy transmitted out the far side, or (in
the case of the Schrödinger equation) the probability that the particle "tunnels" through the
barrier.
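The sensitivity of this transmitted fraction to barrier thickness can be illustrated with the wide-barrier estimate T ≈ exp(−2κL) for a rectangular barrier (a simplification of the exact result; the 2 eV barrier height and 1 eV electron energy below are illustrative):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def tunnel_probability(e_ev, v0_ev, width_nm):
    """Wide-barrier estimate T ~ exp(-2*kappa*L) for a rectangular
    barrier of height v0 and thickness L, particle energy e < v0."""
    kappa = math.sqrt(2 * M_E * (v0_ev - e_ev) * EV) / HBAR  # 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# A 1 eV electron against a 2 eV barrier: doubling the thickness
# squares (i.e. drastically reduces) the tunnelling probability.
print(tunnel_probability(1.0, 2.0, 0.5))
print(tunnel_probability(1.0, 2.0, 1.0))
```

This exponential dependence on thickness is why the effect is only observable for barriers a few nanometres thick, as the next section notes.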
Introduction
Schematic representation of quantum tunnelling through a barrier. The energy of the
tunneled particle is the same, only the quantum amplitude (and hence the probability of the
process) is decreased.
The scale on which these "tunnelling-like phenomena" occur depends on the
wavelength of the traveling wave. For electrons the thickness of "medium type 2" (called in
this context "the tunnelling barrier") is typically a few nanometres; for alpha-particles
tunnelling out of a nucleus the thickness is very much less; for the analogous phenomenon
involving light the thickness is very much greater.
With Schrödinger's wave-equation, the characteristic that defines the two media
discussed above is the kinetic energy of the particle if it is considered as an object that could
be located at a point. In medium type 1 the kinetic energy would be positive, in medium type
2 the kinetic energy would be negative. There is some inconsistency in this, because particles
cannot physically be located at a point: they are always spread out ("delocalised") to some
extent, and the kinetic energy of the delocalised object is always positive.
What is true is that it is sometimes mathematically convenient to treat particles as
behaving like points, particularly in the context of Newton's Second Law and classical
mechanics generally. In the past, people thought that the success of classical mechanics
meant that particles could always and in all circumstances be treated as if they were located at
points. But there never was any convincing experimental evidence that this was true when
very small objects and very small distances are involved, and we now know that this
viewpoint was mistaken. However, because it is still traditional to teach students early in their
careers that particles behave like points, it sometimes comes as a big surprise for people to
discover that it is well established that traveling physical particles always physically obey a
wave-equation (even when it is convenient to use the mathematics of moving points).
Clearly, a hypothetical classical point particle analysed according to Newton's Laws could
not enter a region where its kinetic energy would be negative. But, a real delocalised object,
that obeys a wave-equation and always has positive kinetic energy, can leak through such a
region if conditions are right. An approach to tunnelling that avoids mention of the concept of
"negative kinetic energy" is set out below in the section on "Schrödinger equation tunnelling
basics".
An electron approaching a barrier has to be represented as a wave-train. This wave-
train can sometimes be quite long – electrons in some materials can be 10 to 20 nm long. This
makes animations difficult. If it were legitimate to represent the electron by a short wave-
train, then tunnelling could be represented as in the animation alongside.
Reflection and tunnelling of an electron wavepacket directed at a potential barrier.
The bright spot moving to the left is the reflected part of the wavepacket. A very dim spot can
be seen moving to the right of the barrier. This is the small fraction of the wavepacket that
tunnels through the classically forbidden barrier. Also notice the interference fringes between
the incoming and reflected waves.
It is sometimes said that tunnelling occurs only in quantum mechanics. Unfortunately,
this statement is a bit of a linguistic conjuring trick. As indicated above, "tunnelling-type"
evanescent-wave phenomena occur in other contexts too. But, until recently, it has only been
in quantum mechanics that evanescent wave coupling has been called "tunnelling".
(However, there is an increasing tendency to use the label "tunnelling" in other contexts too,
and the names "photon tunnelling" and "acoustic tunnelling" are now used in the research
literature.)
With regards to the mathematics of tunnelling, a special problem arises. For simple
tunnelling-barrier models, such as the rectangular barrier, the Schrödinger equation can be
solved exactly to give the value of the tunnelling probability (sometimes called the
"transmission coefficient"). Calculations of this kind make the general physical nature of
tunnelling clear. One would also like to be able to calculate exact tunnelling probabilities for
barrier models that are physically more realistic. However, when appropriate mathematical
descriptions of barriers are put into the Schrödinger equation, then the result is an awkward
non-linear differential equation. Usually, the equation is of a type where it is known to be
mathematically impossible in principle to solve the equation exactly in terms of the usual
functions of mathematical physics, or in any other simple way. Mathematicians and
mathematical physicists have been working on this problem since at least 1813, and have
been able to develop special methods for solving equations of this kind approximately. In
physics these are known as "semiclassical" or "quasiclassical" methods. A common
semiclassical method is the so-called WKB approximation (also known as the "JWKB
approximation"). The first known attempt to use such methods to solve a tunnelling problem
in physics was made in 1928, in the context of field electron emission. It is sometimes
considered that the first people to get the mathematics of applying this kind of approximation
to tunnelling fully correct (and to give reasonable mathematical proof that they had done so)
were N. Fröman and P.O. Fröman, in 1965. Their complex ideas have not yet made it into
theoretical-physics textbooks, which tend to give simpler (but slightly more approximate)
versions of the theory. An outline of one particular semiclassical method is given below.
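A crude numerical version of the semiclassical idea is to integrate the decay constant over the classically forbidden region, where M(x) = V(x) − E is positive. This is a sketch, not the Frömans' rigorous treatment; the flat test barrier is illustrative:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def wkb_transmission(V, e_ev, x0, x1, steps=1000):
    """WKB-style estimate T ~ exp(-2 * integral of kappa(x) dx) over
    the forbidden region [x0, x1]; V(x) in eV, positions in metres."""
    dx = (x1 - x0) / steps
    integral = 0.0
    for i in range(steps):
        x = x0 + (i + 0.5) * dx          # midpoint rule
        m_x = V(x) - e_ev                # "motive energy" M(x), eV
        if m_x > 0:                      # only where E < V(x)
            integral += math.sqrt(2 * M_E * m_x * EV) / HBAR * dx
    return math.exp(-2 * integral)

# Sanity check: for a flat barrier this reduces to exp(-2*kappa*L).
flat = lambda x: 2.0  # 2 eV barrier height, illustrative
print(wkb_transmission(flat, 1.0, 0.0, 1e-9))
```

Unlike the rectangular-barrier case, this numerical form accepts any barrier profile V(x), which is the point of the semiclassical methods discussed above.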
Three notes may be helpful. In general, students taking physics courses in quantum
mechanics are presented with problems (such as the quantum mechanics of the hydrogen
atom) for which exact mathematical solutions to the Schrödinger equation exist. Tunnelling
through a realistic barrier is a reasonably basic physical phenomenon. So it is sometimes the
first problem that students encounter where it is mathematically impossible in principle to
solve the Schrödinger equation exactly in any simple way. Thus, it may also be the first
occasion on which they encounter the "semiclassical-method" mathematics needed to solve
the Schrödinger equation approximately for such problems. Not surprisingly, this
mathematics is likely to be unfamiliar, and may feel "odd". Unfortunately, it also comes in
several different variants, which doesn't help.
Also, some accounts of tunnelling seem to be written from a philosophical viewpoint
that a particle is "really" point-like, and just has wave-like behaviour. There is very little
experimental evidence to support this viewpoint. A preferable philosophical viewpoint is that
the particle is "really" delocalised and wave-like, and always exhibits wave-like behaviour,
but that in some circumstances it is convenient to use the mathematics of moving points to
describe its motion. This second viewpoint is used in this section. The precise nature of this
wave-like behaviour is, however, a much deeper matter, beyond the scope of this article on
tunnelling.
Although the phenomenon under discussion here is usually called "quantum
tunnelling" or "quantum-mechanical tunnelling", it is the wave-like aspects of particle
behaviour that are important in tunnelling theory, rather than effects relating to the
quantization of the particle's energy states. For this reason, some writers prefer to call the
phenomenon "wave-mechanical tunnelling".
History
By 1928, George Gamow had solved the theory of the alpha decay of a nucleus via
tunnelling. Classically, the particle is confined to the nucleus because of the high energy
requirement to escape the very strong potential. Under this system, it takes an enormous
amount of energy to pull apart the nucleus. In quantum mechanics, however, there is a
probability the particle can tunnel through the potential and escape. Gamow solved a model
potential for the nucleus and derived a relationship between the half-life of the particle and
the energy of the emission.
Alpha decay via tunnelling was also solved concurrently by Ronald Gurney and
Edward Condon. Shortly thereafter, both groups considered whether particles could also
tunnel into the nucleus.
After attending a seminar by Gamow, Max Born recognized the generality of
quantum-mechanical tunnelling. He realized that the tunnelling phenomenon was not
restricted to nuclear physics, but was a general result of quantum mechanics that applies to
many different systems. Today the theory of tunnelling is even applied to the early
cosmology of the universe.
Quantum tunnelling was later applied to other situations, such as the cold emission of
electrons, and perhaps most importantly semiconductor and superconductor physics.
Phenomena such as field emission, important to flash memory, are explained by quantum
tunnelling. Tunnelling is a source of major current leakage in Very-large-scale integration
(VLSI) electronics, and results in the substantial power drain and heating effects that plague
high-speed and mobile technology.
Another major application is in electron-tunnelling microscopes (see scanning
tunnelling microscope) which can resolve objects that are too small to see using conventional
microscopes. Electron tunnelling microscopes overcome the limiting effects of conventional
microscopes (optical aberrations, wavelength limitations) by scanning the surface of an object
with tunnelling electrons.
Quantum tunnelling has been shown to be a mechanism used by enzymes to enhance
reaction rates. It has been demonstrated that enzymes use tunnelling to transfer both electrons
and nuclei such as hydrogen and deuterium. It has even been shown, in the enzyme glucose
oxidase, that oxygen nuclei can tunnel under physiological conditions.
Schrödinger equation - tunnelling basics
Consider the time-independent Schrödinger equation for one particle, in one
dimension. This can be written in the forms

    -\frac{\hbar^2}{2m}\,\frac{d^2\Psi}{dx^2} + V(x)\,\Psi(x) = E\,\Psi(x)

or, equivalently,

    \frac{d^2\Psi}{dx^2} = \frac{2m}{\hbar^2}\,M(x)\,\Psi(x),

where \hbar is Planck's constant divided by 2\pi, m is the particle mass, x represents distance
measured in the direction of motion of the particle, Ψ(x) is the Schrödinger wave function,
V(x) is the potential energy of the particle (measured relative to any convenient reference
level), E is that part of the total energy of the particle that is associated with motion in the x-
direction (measured relative to the same reference level as V(x)), and M(x) is a quantity
defined by this equation. Explicitly, M(x) is given by

    M(x) = V(x) − E.
The quantity M(x) has no accepted name in physics generally; the name "motive energy" is
used in the article on field electron emission.
The solutions of the Schrödinger equation take different forms for different values of
x, depending on whether M(x) is positive or negative. This is easiest to understand if we
consider a situation in which we have regions of space in which M(x) is (a) constant and
negative and (b) constant and positive. When M(x) is constant and negative, then the
Schrödinger equation can be written in the form

    \frac{d^2\Psi}{dx^2} = -k^2\,\Psi(x), \qquad k^2 = -\frac{2m}{\hbar^2}M.

The solutions of this equation represent travelling waves, with phase-constant +k or -
k. Alternatively, if M(x) is constant and positive, then the Schrödinger equation can be written
in the form

    \frac{d^2\Psi}{dx^2} = \kappa^2\,\Psi(x), \qquad \kappa^2 = \frac{2m}{\hbar^2}M.

The solutions of this equation are rising and falling exponentials, which take the form
exp(+κx) for rising exponentials, or the form exp(-κx) for decaying exponentials (also called
"evanescent waves"). When M(x) varies with position, the same difference in behaviour
occurs, depending on whether M(x) is negative or positive, but the parameters k and κ
become functions of position. It follows that the sign of M(x) determines the "nature of the
medium", with negative M corresponding to the "medium of type 1" discussed above, and
positive M corresponding to the "medium of type 2". It thus follows from well-established
mathematical principles of classical wave-physics - but applied to the Schrödinger equation -
that evanescent wave coupling can occur if a region of positive M is sandwiched between two
regions of negative M. This occurs if V(x) has a "hill-type" shape.
A problem is that the mathematics of dealing with the situation where M(x) varies
with x is intensely difficult, except in certain mathematical special cases that usually do not
correspond quantitatively well to physical reality. A discussion of the simple (but
quantitatively unrealistic) case of the rectangular potential barrier appears elsewhere. A
discussion of the "semi-classical" approximate method, as sometimes found in physics
textbooks, is given in the next section. A full (but very complicated) mathematical
treatment appears in the 1965 monograph by Fröman and Fröman noted below. Their ideas
have not yet made it into physics textbooks, but probably in most cases their corrections have
little quantitative effect. A brief statement of the outcome of the Fröman and Fröman
treatment appears in the article on field electron emission (which was the first major physical
effect to be identified as due to electron tunnelling, in 1928), in the section on escape
probability.
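For the rectangular barrier just mentioned, the exact result is simple enough to compute directly. The sketch below uses the standard textbook formula for E < V0; the electron energy, the 2 eV barrier height and the 1 nm width are hypothetical illustrative values, not taken from the text.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def transmission(E_eV, V0_eV, width_m, m=M_E):
    """Exact T for a rectangular barrier of height V0 and given width, E < V0."""
    E, V0 = E_eV * EV, V0_eV * EV
    kappa = math.sqrt(2.0 * m * (V0 - E)) / HBAR  # decay constant inside the barrier
    s = math.sinh(kappa * width_m)
    return 1.0 / (1.0 + (V0 ** 2 * s ** 2) / (4.0 * E * (V0 - E)))

# A 1 eV electron meeting a 2 eV, 1 nm barrier: classically always reflected,
# quantum mechanically transmitted with small but non-zero probability.
T = transmission(1.0, 2.0, 1e-9)
```

Because of the sinh factor, T falls off roughly exponentially with barrier width, which is the evanescent-wave decay described above.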
Note that, in the hypothetical physical picture of "particle" motion used in the 1800s
and earlier, in which a "particle" is assumed to have the behaviour of a moving point mass,
positive values of M(x) correspond to negative values of the kinetic energy of a point mass
located at position "x". There is, however, no logical need to introduce the concept of
"negative kinetic energy at a point in space" into discussion of evanescent wave coupling
(i.e., there is no logical need to introduce this concept into discussions of "tunnelling" based
on the Schrödinger equation.)
A semiclassical method for determining a formula for tunnelling probability
Now let us recast the wave function Ψ(x) as the exponential of a function Φ(x):

    \Psi(x) = e^{\Phi(x)}, \qquad \text{so that} \qquad \Phi''(x) + \left[\Phi'(x)\right]^2 = \frac{2m}{\hbar^2}\left(V(x)-E\right).

Now we separate Φ'(x) into real and imaginary parts using real-valued functions A and B:

    \Phi'(x) = A(x) + i\,B(x),

because the pure imaginary part needs to vanish due to the real-valued right-hand side:

    A'(x) + A(x)^2 - B(x)^2 = \frac{2m}{\hbar^2}\left(V(x)-E\right), \qquad B'(x) + 2\,A(x)\,B(x) = 0.

Next we want to take the semiclassical approximation to solve this. That means we
expand each function as a power series in \hbar. From the equations we can see that the power
series must start with at least an order of \hbar^{-1} to satisfy the real part of the equation. But as we
want a good classical limit, we also want to start with as high a power of Planck's constant as
possible:

    A(x) = \frac{1}{\hbar}\sum_{k=0}^{\infty} \hbar^k A_k(x), \qquad B(x) = \frac{1}{\hbar}\sum_{k=0}^{\infty} \hbar^k B_k(x).

The constraints on the lowest-order terms are as follows:

    A_0(x)^2 - B_0(x)^2 = 2m\left(V(x)-E\right), \qquad A_0(x)\,B_0(x) = 0.

If the amplitude varies slowly as compared to the phase, we set A_0(x) = 0 and get

    B_0(x) = \pm\sqrt{2m\left(E-V(x)\right)},

which is only valid when you have more energy than potential - classical motion. After the
same procedure on the next order of the expansion we get

    \Psi(x) \approx C\,\frac{e^{\,i\int dx\,\sqrt{\frac{2m}{\hbar^2}\left(E-V(x)\right)}\, +\, \theta}}{\left(\frac{2m}{\hbar^2}\left(E-V(x)\right)\right)^{1/4}}.

On the other hand, if the phase varies slowly as compared to the amplitude, we set B_0(x) = 0
and get

    A_0(x) = \pm\sqrt{2m\left(V(x)-E\right)},

which is only valid when you have more potential than energy - tunnelling motion. Resolving
the next order of the expansion yields

    \Psi(x) \approx \frac{C_{+}\,e^{+\int dx\,\sqrt{\frac{2m}{\hbar^2}\left(V(x)-E\right)}} + C_{-}\,e^{-\int dx\,\sqrt{\frac{2m}{\hbar^2}\left(V(x)-E\right)}}}{\left(\frac{2m}{\hbar^2}\left(V(x)-E\right)\right)^{1/4}}.
It is apparent from the denominator, that both these approximate solutions are bad near the
classical turning point E = V(x). What we have are the approximate solutions away from the
potential hill and beneath the potential hill. Away from the potential hill, the particle acts
similarly to a free wave - the phase is oscillating. Beneath the potential hill, the particle
undergoes exponential changes in amplitude.
In a specific tunnelling problem, we might suspect that the transition amplitude is
proportional to

    e^{-\int dx\,\sqrt{\frac{2m}{\hbar^2}\left(V(x)-E\right)}},

and thus the tunnelling is exponentially dampened by large deviations from classically
allowable motion.
But to be complete we must find the approximate solutions everywhere and match
coefficients to make a global approximate solution. We have yet to approximate the solution
near the classical turning points E = V(x).
Let us label a classical turning point x_1. Now because we are near E = V(x_1), we can expand
\frac{2m}{\hbar^2}\left(V(x)-E\right) in a power series:

    \frac{2m}{\hbar^2}\left(V(x)-E\right) = v_1\,(x - x_1) + v_2\,(x - x_1)^2 + \cdots

Let us only approximate to linear order, so the equation becomes

    \frac{d^2\Psi}{dx^2} = v_1\,(x - x_1)\,\Psi(x).

This differential equation looks deceptively simple. Its solutions are Airy functions.
Hopefully this solution should connect the far away and beneath solutions. Given the 2
coefficients on one side of the classical turning point, we should be able to determine the 2
coefficients on the other side of the classical turning point by using this local solution to
connect them. We are able to find a relationship between C, θ and C_+, C_-.
Fortunately the Airy-function solutions asymptote into sine, cosine and exponential
functions in the proper limits, and matching these asymptotic forms on either side of the
turning point yields the standard WKB connection formulas relating C, θ and C_+, C_-.
Now we can construct global solutions and solve tunnelling problems.
The transmission coefficient T for a particle tunnelling through a single
potential barrier is found to be

    T \approx e^{-2\int_{x_1}^{x_2} dx\,\sqrt{\frac{2m}{\hbar^2}\left(V(x)-E\right)}},

where x_1, x_2 are the two classical turning points for the potential barrier. If we take the classical
limit of all other physical parameters much larger than Planck's constant, abbreviated as
\hbar \to 0, we see that the transmission coefficient correctly goes to zero. This classical limit
would have failed in the unphysical, but much simpler to solve, situation of a square
potential. A related subject is above-barrier reflection: in classical physics a particle will not
reflect if its energy is above the potential barrier, but in the quantum case it is possible. In this
case the reflection coefficient is exponentially small in Planck's constant. The semiclassical
technique for calculating the reflection coefficient is similar to the calculation of the tunnelling
described above.
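The WKB estimate T ≈ exp(−2∫κ(x)dx), with κ(x) = √(2m(V(x)−E))/ℏ, is easy to evaluate numerically. The sketch below is a minimal illustration: the inverted-parabola barrier, its 2 eV peak, its ~1 nm half-width and the electron parameters are all hypothetical choices, not from the text.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def V(x):
    """Hypothetical smooth barrier: 2 eV peak at x=0, zero beyond +/-1 nm."""
    return max(0.0, 2.0 * EV * (1.0 - (x / 1e-9) ** 2))

def wkb_transmission(E, n=20000):
    """Midpoint-rule estimate of T ~ exp(-2 * integral of kappa(x) dx)."""
    a, b = -1e-9, 1e-9
    dx = (b - a) / n
    integral = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        if V(x) > E:  # only the classically forbidden region contributes
            integral += math.sqrt(2.0 * M_E * (V(x) - E)) * dx
    return math.exp(-2.0 * integral / HBAR)

T = wkb_transmission(1.0 * EV)  # tiny but non-zero, as the text argues
```

Raising E narrows the forbidden region and shrinks the integral, so T grows rapidly as the energy approaches the barrier top.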
5. QUANTUM COMPUTER
The Bloch sphere is a representation of a qubit, the fundamental building block of quantum
computers.
A quantum computer is a device for computation that makes direct use of quantum
mechanical phenomena, such as superposition and entanglement, to perform operations on
data. The basic principle behind quantum computation is that quantum properties can be used
to represent data and perform operations on these data. A theoretical model is the quantum
Turing machine, also known as the universal quantum computer.
Although quantum computing is still in its infancy, experiments have been carried out
in which quantum computational operations were executed on a very small number of qubits
(quantum bit). Both practical and theoretical research continues with interest, and many
national government and military funding agencies support quantum computing research to
develop quantum computers for both civilian and national security purposes, such as
cryptanalysis.
If large-scale quantum computers can be built, they will be able to solve certain
problems much faster than any of our current classical computers (for example Shor's
algorithm). Quantum computers are different from other computers such as DNA computers
and traditional computers based on transistors. Some computing architectures such as optical
computers may use classical superposition of electromagnetic waves. Without some
specifically quantum mechanical resources such as entanglement, it is conjectured that an
exponential advantage over classical computers is not possible. Quantum computers however
do not allow one to compute functions that are not theoretically computable by classical
computers, i.e. they do not alter the Church-Turing thesis. The gain is only in efficiency.
Basis
A classical computer has a memory made up of bits, where each bit represents either a
one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can
represent a one, a zero, or, crucially, any quantum superposition of these; moreover, a pair of
qubits can be in any quantum superposition of 4 states, and three qubits in any superposition
of 8. In general a quantum computer with n qubits can be in an arbitrary superposition of up
to 2^n different states simultaneously (this compares to a normal computer that can only be in
one of these 2^n states at any one time). A quantum computer operates by manipulating those
qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is
called a quantum algorithm.
An example of an implementation of qubits for a quantum computer could start with
the use of particles with two spin states: "down" and "up" (typically written |↓⟩ and |↑⟩, or
|0⟩ and |1⟩). But in fact any system possessing an observable quantity A which is conserved
under time evolution and such that A has at least two discrete and sufficiently spaced
consecutive eigenvalues, is a suitable candidate for implementing a qubit. This is true
because any such system can be mapped onto an effective spin-1/2 system.
Bits vs. qubits
Qubits are made up of controlled particles and the means of control (e.g. devices that trap
particles and switch them from one state to another).
Consider first a classical computer that operates on a three-bit register. The state of
the computer at any time is a probability distribution over the 2^3 = 8 different three-bit strings
000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly
one of these states with probability 1. However, if it is a probabilistic computer, then there is
a possibility of it being in any one of a number of different states. We can describe this
probabilistic state by eight nonnegative numbers a,b,c,d,e,f,g,h (where a = probability
computer is in state 000, b = probability computer is in state 001, etc.). There is a restriction
that these probabilities sum to 1.
The state of a three-qubit quantum computer is similarly described by an eight-
dimensional vector (a,b,c,d,e,f,g,h), called a wavefunction. However, instead of adding to
one, the sum of the squares of the coefficient magnitudes, |a|^2 + |b|^2 + ... + |h|^2, must
equal one. Moreover, the coefficients are complex numbers. Since states are represented by
complex wavefunctions, two states being added together will undergo interference. This is a
key difference between quantum computing and probabilistic classical computing.
If you measure the three qubits, then you will observe a three-bit string. The
probability of measuring a given string equals the squared magnitude of that string's coefficient
(using our example, the probability that we read the state as 000 is |a|^2, the probability that we
read it as 001 is |b|^2, etc.). Thus a measurement of the quantum state with coefficients
(a,b,...,h) gives the classical probability distribution (|a|^2, |b|^2, ..., |h|^2). We say that the
quantum state "collapses" to a classical state.
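The collapse rule just described can be made concrete. In the sketch below, the four non-zero amplitudes are hypothetical values chosen only so that the squared magnitudes sum to one:

```python
import random

# Hypothetical 3-qubit state: four non-zero complex amplitudes, each of
# magnitude 0.5, so the squared magnitudes sum to 1 as required.
amps = {
    "000": 0.5 + 0.0j,
    "011": 0.0 + 0.5j,
    "101": -0.5 + 0.0j,
    "110": 0.0 - 0.5j,
}

# Born rule: the probability of reading a string is |amplitude|^2.
probs = {s: abs(a) ** 2 for s, a in amps.items()}
assert abs(sum(probs.values()) - 1.0) < 1e-12  # normalization

def measure():
    """Sample one classical bit string -- the "collapse" to a classical state."""
    r, acc = random.random(), 0.0
    for s, p in probs.items():
        acc += p
        if r < acc:
            return s
    return s  # guard against floating-point rounding at the top end
```

Note that the phases (the j factors and minus signs) disappear under |·|^2; they matter only when amplitudes are added, i.e. under interference.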
Note that an eight-dimensional vector can be specified in many different ways,
depending on what basis you choose for the space. The basis of three-bit strings 000, 001, ...,
111 is known as the computational basis, and is often convenient, but other bases of unit-
length, orthogonal vectors can also be used. Ket notation is often used to make explicit the
choice of basis. For example, the state (a,b,c,d,e,f,g,h) in the computational basis can be
written as

    a|000⟩ + b|001⟩ + c|010⟩ + d|011⟩ + e|100⟩ + f|101⟩ + g|110⟩ + h|111⟩,

where, e.g., |010⟩ = (0,0,1,0,0,0,0,0). The computational basis for a single qubit (two
dimensions) is |0⟩ = (1,0) and |1⟩ = (0,1), but another common basis consists of the eigenvectors
of the Pauli-x operator: |+⟩ = (1/√2)(1,1) and |−⟩ = (1/√2)(1,−1).
Note that although recording a classical state of n bits, a 2^n-dimensional probability
distribution, requires an exponential number of real numbers, practically we can always think
of the system as being exactly one of the n-bit strings; we just don't know which one.
Quantum mechanically, this is not the case, and all 2^n complex coefficients need to be kept
track of to see how the quantum system evolves. For example, a 300-qubit quantum computer
has a state described by 2^300 (approximately 10^90) complex numbers, more than the number of
atoms in the observable universe.
Operation
While a classical three-bit state and a quantum three-qubit state are both eight-
dimensional vectors, they are manipulated quite differently for classical or quantum
computation. For computing in either case, the system must be initialized, for example into
the all-zeros string, |000⟩, corresponding to the vector (1,0,0,0,0,0,0,0). In classical
randomized computation, the system evolves according to the application of stochastic
matrices, which preserve that the probabilities add up to one (i.e., preserve the L1 norm). In
quantum computation, on the other hand, allowed operations are unitary matrices, which are
effectively rotations (they preserve that the sum of the squares add up to one, the Euclidean
or L2 norm). (Exactly which unitaries can be applied depends on the physics of the quantum
device.) Consequently, since rotations can be undone by rotating backward, quantum
computations are reversible. (Technically, quantum operations can be probabilistic
combinations of unitaries, so quantum computation really does generalize classical
computation. See quantum circuit for a more precise formulation.)
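The claim that allowed operations are norm-preserving rotations can be checked directly. A minimal sketch, assuming NumPy; the choice of a Hadamard gate acting on the first qubit of |000⟩ is purely illustrative:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2, dtype=complex)

# Full 8x8 operator on a 3-qubit register: H on the first (most
# significant) qubit, identity on the other two.
U = np.kron(H, np.kron(I2, I2))

state = np.zeros(8, dtype=complex)
state[0] = 1.0                 # the all-zeros state |000>
out = U @ state                # (|000> + |100>) / sqrt(2)

# Unitarity: the L2 norm is preserved, and the rotation can be undone.
assert np.isclose(np.linalg.norm(out), 1.0)
assert np.allclose(U.conj().T @ U, np.eye(8))
assert np.allclose(U.conj().T @ out, state)  # reversibility
```

The last assertion is the reversibility point from the text: applying the conjugate transpose rotates the state back to where it started.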
Finally, upon termination of the algorithm, the result needs to be read off. In the case
of a classical computer, we sample from the probability distribution on the three-bit register
to obtain one definite three-bit string, say 000. Quantum mechanically, we measure the three-
qubit state, which is equivalent to collapsing the quantum state down to a classical
distribution (with the coefficients in the classical state being the squared magnitudes of the
coefficients for the quantum state, as described above) followed by sampling from that
distribution. Note that this destroys the original quantum state. Many algorithms will only
give the correct answer with a certain probability, however by repeatedly initializing, running
and measuring the quantum computer, the probability of getting the correct answer can be
increased.
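The boost from repetition is just the arithmetic of independent trials. A quick sketch, where p is a hypothetical single-run success probability:

```python
# If one run of the algorithm (initialize, run, measure) gives the correct
# answer with probability p, then k independent runs give the correct
# answer at least once with probability 1 - (1 - p)^k.
def success_after(k: int, p: float) -> float:
    return 1.0 - (1.0 - p) ** k

p = 0.6                            # hypothetical single-run success probability
one_run = success_after(1, p)      # just p
ten_runs = success_after(10, p)    # failure probability shrinks geometrically
```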
For more details on the sequences of operations used for various quantum algorithms,
see universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch-Jozsa
algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum
adiabatic algorithm and quantum error correction.
Potential
Integer factorization is believed to be computationally infeasible with an ordinary
computer for large integers that are the product of only a few prime numbers (e.g., products
of two 300-digit primes). By comparison, a quantum computer could efficiently solve this
problem using Shor's algorithm to find its factors. This ability would allow a quantum
computer to "break" many of the cryptographic systems in use today, in the sense that there
would be a polynomial time (in the number of digits of the integer) algorithm for solving the
problem. In particular, most of the popular public key ciphers are based on the difficulty of
factoring integers (or the related discrete logarithm problem which can also be solved by
Shor's algorithm), including forms of RSA. These are used to protect secure Web pages,
encrypted email, and many other types of data. Breaking these would have significant
ramifications for electronic privacy and security. The only way to increase the security of an
algorithm like RSA would be to increase the key size and hope that an adversary does not
have the resources to build and use a powerful enough quantum computer.
A way out of this dilemma would be to use some kind of quantum cryptography.
There are also some digital signature schemes that are believed to be secure against quantum
computers. See for instance Lamport signatures.
Besides factorization and discrete logarithms, quantum algorithms offering a more
than polynomial speedup over the best known classical algorithm have been found for several
problems, including the simulation of quantum physical processes from chemistry and solid
state physics, the approximation of Jones polynomials, and solving Pell's equation. No
mathematical proof has been found that shows that an equally fast classical algorithm cannot
be discovered, although this is considered unlikely. For some problems, quantum computers
offer a polynomial speedup. The most well-known example of this is quantum database
search, which can be solved by Grover's algorithm using quadratically fewer queries to the
database than are required by classical algorithms. In this case the advantage is provable.
Several other examples of provable quantum speedups for query problems have subsequently
been discovered, such as for finding collisions in two-to-one functions and evaluating NAND
trees.
Consider a problem that has these four properties:
1. The only way to solve it is to guess answers repeatedly and check them,
2. There are n possible answers to check,
3. Every possible answer takes the same amount of time to check, and
4. There are no clues about which answers might be better: generating possibilities
randomly is just as good as checking them in some special order.
An example of this is a password cracker that attempts to guess the password for an
encrypted file (assuming that the password has a maximum possible length).
For problems with all four properties, the time for a quantum computer to solve this
will be proportional to the square root of n. That can be a very large speedup, reducing some
problems from years to seconds. It can be used to attack symmetric ciphers such as Triple
DES and AES by attempting to guess the secret key. Regardless of whether any of these
problems can be shown to have an advantage on a quantum computer, they nonetheless will
always have the advantage of being an excellent tool for studying quantum mechanical
interactions, which of itself is an enormous value to the scientific community.
Grover's algorithm can also be used to obtain a quadratic speed-up over a brute-force search
for a class of problems known as NP-complete.
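The quadratic speed-up can be made concrete by comparing query counts. In this sketch the key-space size is a hypothetical example; Grover's algorithm needs on the order of (π/4)√n queries, against n in the worst case for a classical brute-force search:

```python
import math

def classical_queries(n):
    """Worst-case brute force: check every candidate."""
    return n

def grover_queries(n):
    """Grover's algorithm: about (pi/4) * sqrt(n) quantum queries."""
    return math.ceil(math.pi / 4.0 * math.sqrt(n))

n = 2 ** 56                   # hypothetical key space (e.g. a 56-bit key)
brute = classical_queries(n)  # ~7.2e16 checks
grover = grover_queries(n)    # ~2.1e8 iterations
```

This is the sense in which "years" of guessing can shrink to a far smaller workload: the square root of the search-space size.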
Since chemistry and nanotechnology rely on understanding quantum systems, and
such systems are impossible to simulate in an efficient manner classically, many believe
quantum simulation will be one of the most important applications of quantum computing.
There are a number of practical difficulties in building a quantum computer, and thus far
quantum computers have only solved trivial problems. David DiVincenzo, of IBM, listed the
following requirements for a practical quantum computer:
scalable physically to increase the number of qubits;
qubits can be initialized to arbitrary values;
quantum gates faster than decoherence time;
universal gate set;
qubits can be read easily.
Quantum decoherence
One of the greatest challenges is controlling or removing quantum decoherence. This
usually means isolating the system from its environment as the slightest interaction with the
external world would cause the system to decohere. This effect is irreversible, as it is non-
unitary, and is usually something that should be avoided, if not highly controlled.
Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for
NMR and MRI technology, also called the dephasing time), typically range between
nanoseconds and seconds at low temperature.
These issues are more difficult for optical approaches as the timescales are orders of
magnitude lower and an often cited approach to overcoming them is optical pulse shaping.
Error rates are typically proportional to the ratio of operating time to decoherence time, hence
any operation must be completed much more quickly than the decoherence time.
If the error rate is small enough, it is thought to be possible to use quantum error
correction, which corrects errors due to decoherence, thereby allowing the total calculation
time to be longer than the decoherence time. An often-cited figure for the required error rate in
each gate is 10^−4. This implies that each gate must be able to perform its task 10,000 times
faster than the decoherence time of the system.
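The arithmetic behind that 10,000× figure is simple to check; the decoherence time below is a hypothetical value:

```python
t2 = 1e-3              # hypothetical decoherence time: 1 ms
target_error = 1e-4    # often-cited per-gate error budget (from the text)

# If error accumulates roughly as (gate time) / (decoherence time),
# each gate must finish within target_error * t2 seconds.
max_gate_time = target_error * t2    # ~100 ns here
speedup_needed = t2 / max_gate_time  # gates ~10,000x faster than decoherence
```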
Meeting this scalability condition is possible for a wide range of systems. However,
the use of error correction brings with it the cost of a greatly increased number of required
qubits. The number required to factor integers using Shor's algorithm is still polynomial, and
thought to be between L and L^2, where L is the number of bits in the number to be factored;
error-correction algorithms would inflate this figure by an additional factor of L. For a 1000-
bit number, this implies a need for about 10^4 qubits without error correction. With error
correction, the figure would rise to about 10^7 qubits. Note that computation time is about L^2
or about 10^7 steps and at 1 MHz, about 10 seconds.
A very different approach to the stability-decoherence problem is to create a topological
quantum computer with anyons, quasi-particles used as threads and relying on braid theory to
form stable logic gates.
Developments
There are a number of quantum computing candidates, among those:
Superconductor-based quantum computers (including SQUID-based quantum
computers)
Trapped ion quantum computer
Optical lattices
Topological quantum computer
Quantum dot on surface (e.g. the Loss-DiVincenzo quantum computer)
Nuclear magnetic resonance on molecules in solution (liquid NMR)
Solid state NMR Kane quantum computers
Electrons on helium quantum computers
Cavity quantum electrodynamics (CQED)
Molecular magnet
Fullerene-based ESR quantum computer
Optic-based quantum computers (Quantum optics)
Diamond-based quantum computer
Bose–Einstein condensate-based quantum computer
Transistor-based quantum computer - string quantum computers with entrainment of
positive holes using an electrostatic trap
Spin-based quantum computer
Adiabatic quantum computation
Rare-earth-metal-ion-doped inorganic crystal based quantum computers
The large number of candidates shows explicitly that the topic, in spite of rapid progress,
is still in its infancy. But at the same time there is also a vast amount of flexibility. In 2005,
researchers at the University of Michigan built a semiconductor chip which functioned as an
ion trap. Such devices, produced by standard lithography techniques, may point the way to
scalable quantum computing tools. An improved version was made in 2006.
In 2009, researchers at Yale University created the first rudimentary solid-state quantum
processor. The two-qubit superconducting chip was able to run elementary algorithms. Each
of the two artificial atoms (or qubits) was made up of a billion aluminum atoms, but they
acted like a single atom that could occupy two different energy states.
Another team, working at the University of Bristol, also created a silicon-based quantum
computing chip, based on quantum optics. The team was able to run Shor's algorithm on the
chip.
Relation to computational complexity theory
The suspected relationship of BQP to other problem spaces.
The class of problems that can be efficiently solved by quantum computers is called
BQP, for "bounded error, quantum, polynomial time". Quantum computers only run
probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP on
classical computers. It is defined as the set of problems solvable with a polynomial-time
algorithm, whose probability of error is bounded away from one half. A quantum computer is
said to "solve" a problem if, for every instance, its answer will be right with high probability.
If that solution runs in polynomial time, then that problem is in BQP.
BQP is contained in the complexity class #P (or more precisely in the associated class of
decision problems P^#P), which is a subclass of PSPACE.
BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not
known. Both integer factorization and discrete log are in BQP. Both of these problems are
NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be
NP-complete. There is a common misconception that quantum computers can solve NP-
complete problems in polynomial time. That is not known to be true, and is generally
suspected to be false.
Although quantum computers may be faster than classical computers, those described above
can't solve any problems that classical computers can't solve, given enough time and memory
(however, those amounts might be practically infeasible). A Turing machine can simulate
these quantum computers, so such a quantum computer could never solve an undecidable
problem like the halting problem. The existence of "standard" quantum computers does not
disprove the Church–Turing thesis. It has been speculated that theories of quantum gravity,
such as M-theory or loop quantum gravity, may allow even faster computers to be built.
Currently, it's an open problem to even define computation in such theories due to the
problem of time, i.e. there's no obvious way to describe what it means for an observer to
submit input to a computer and later receive output.