
Industrial Training Report at DDK Mandi House, New Delhi

Rajeev Kumar

Roorkee Institute of Technology, Roorkee

ACKNOWLEDGEMENT

This report is an outcome of the practical training which I underwent at Doordarshan Kendra, Mandi House, Copernicus Marg, New Delhi – 110001.

Words often fail to express one's feelings towards others; still, I express my sincere gratitude to Mr. Gurjeet Singh, Assistant Director (Engineering), Doordarshan Kendra, Mandi House, for his valuable guidance, without which it would have been difficult for me to complete my training. I also express my gratitude to Mr. Rakesh Kumar (Dy. Director), Mrs. Amita Gautam, and all who helped me a lot in understanding the various processes and concepts involved. It was really a great experience working in the DD Kendra and learning from such experienced engineers with hands-on experience of the subject.

Rajeev Kumar

Roorkee Institute Of Technology

Roorkee, Uttarakhand -247667

PREFACE

With the ongoing revolution in Electronics and Communication, where innovations take place in the blink of an eye, it is almost impossible to keep pace with the emerging trends.

Excellence is an attitude that the whole of the human race is born with; it is the environment that determines whether the result of this attitude becomes visible or not. A well planned, properly executed and evaluated industrial training helps a lot in developing a professional attitude. It provides a linkage between the student and industry, developing an awareness of the industrial approach to problem solving based on a broad understanding of the processes and mode of operation of an organization. During this period the student gets real experience of working in the industry environment. Most of the theoretical knowledge gained during the course of studies is put to the test here. Apart from this, the student gets an opportunity to learn the latest technology, which immensely helps in building a career. I had the opportunity to gain real experience on many ventures, which increased my sphere of knowledge to a great extent. I got a chance to learn many new technologies and also to interface with many instruments. All this credit goes to the organization, Doordarshan.

CERTIFICATE

This is to certify that Rajeev Kumar, a student of B.Tech (3rd year, ECE) of Roorkee Institute of Technology, Roorkee, Uttarakhand, has successfully completed his Industrial Training under the guidance of Mr. Gurjeet Singh (ADE, DDK Delhi) and Mrs. Amita Gautam at Doordarshan Kendra, Mandi House, for a period from 20th June, 2016 to 15th July, 2016.

He worked hard and diligently completed his presentation in time. He took a lot of initiative in learning.

His overall performance during the project was excellent.

We wish him all success in his career.

CONTENTS

TV Studio Hall
Studio Lighting
TV Camera
CCU
Microphones
VTR
PCR
Post Production Suites
Video Chain
Audio Chain
MSR
Modes of Transmission
Earth Station
OB/DSNG Van


TV STUDIO HALL

Doordarshan have their own shooting studio. Doordarshan, Delhi has four studio halls. One is

used as News Room and the others are used for shooting various programs. Artificial set is

created in the studio hall according to the requirements of program to be shooted. Studio hall

contains numbers of lights to give natural effect to artificial set. Hall is big enough to build

the set of about 4 rooms.

The four studios are named as:

1. Studio Big (B)

2. Studio Small (S)

3. Studio Medium (M)

4. Studio Large (L)

The studio hall also contains many devices for shooting and for creating a natural-looking set, such as:

Lighting winches & control board.
Cyclorama.
Many microphone connections.
Makeup room.
Furniture.
Cameras.
Sound absorbers.

The lights are hung on the lighting winches and arranged in rows. The types and purposes of the lighting are explained under the title "Studio Lighting".


A cyclorama is a special type of white curtain hung along the walls on three sides. The cyclorama works as a light and colour absorber to maintain the original colour tone in the video output. Because of its white colour, it is also used to create backgrounds of various colours by placing coloured paper (gels) over the lights.

Many microphone connections hung in between the winches are used to attach the microphones during dialogue delivery in a play.

White tiles of sound-absorbing material are used on the walls to reduce echoes. The whole studio is centrally air conditioned and all the doors are kept airtight to prevent outside noise from entering the studio.

The studio is also the main action area, and it requires a very large space compared to other sections. Action in this area includes staging, lighting, performance, and arrangements to pick up picture and sound.

Requirements of TV Studio

Very efficient air conditioning.

Uniform and smooth floor for smooth movement of cameras.

Efficient sound absorbers.

Effective communication with other sections.

3 to 4 studio cameras with teleprompters.

Cyclorama and curtain.

Warning light and safety devices like fire alarm, firefighting equipment.

Digital clock display.


Studio Lighting

Lighting for television is very exciting and needs creative talent. There is always tremendous scope for experimenting to achieve the required effect. Light is a kind of electromagnetic radiation with a visible spectrum from red to violet, i.e. wavelengths from about 700 nm down to about 380 nm respectively.

Why is lighting done?

When we shoot an outdoor programme, the source of light is the Sun, and the natural effect we see outdoors depends greatly on proper lighting.

There are two main reasons to use lighting techniques in the studio. First, when we prepare an artificial set to look natural, we have to give it the proper lighting effect, as if it were outdoors; the lighting also depends on the mood of the scene. Secondly, the output picture of the camera is 2D while the natural scenes we see are 3D, so on the TV screen lighting is a must to differentiate the main object from the background and to give a 3D effect.

Types of lights

Nature has provided us with two types of light: hard and soft.

Hard light comes from a point source, so the shadow of the object looks sharp. In nature the sun is the hard light source, while reflecting clouds, hills and buildings are soft sources. The shadow of an object under a soft light source looks feathered and soft.

In the studio, mainly the three-point lighting technique is used. The three lights are:

• Key light: It is the main light, used to highlight an object or to draw attention to a person. It gives shape and modelling by casting a shadow. It is treated as the sun in the sky and should cast only one shadow. The key light is usually a hard source at an angle of 15-30 degrees to the camera axis, at an elevation of about 40 degrees.

• Fill light: It controls the lighting contrast by filling in shadows. It has about 80% of the intensity of the key light and is placed on the opposite side of the camera axis. It is used to suppress the shadow made by the key light. It can also provide catch lights in the eyes.

• Back light: It separates the body from the background, gives roundness to the subject and reveals texture.


Fig: Light arrangement

Key Light 100%
Fill Light 80%
Back Light 110%
Background Light 50-60%

Table: Intensity of the lighting points

The three-point lighting ratio of 3:2:1 (back : key : fill) in monochrome and 3:2:2 in colour provides good portrait lighting.

Lighting Techniques

To understand lighting techniques, we should know the parameters of light, which are:

Quantity
Quality
Colour temperature
Contrast ratio

Quantity means the amount of light, or the amount of energy radiated by the light source, and quality means the type of light source used. Contrast ratio is the difference between the most brightly lit and the darkest parts of the scene.

Different light sources have their own characteristic temperature, known as colour temperature. When a black body is heated, it may be noted that the colour of the body changes from black to red and then towards white as the temperature increases. The colour temperatures of some common light sources are listed below.

Sun Light 5600 Kelvin

Studio Lamp 3200 Kelvin

Domestic Lamp 2780 Kelvin


Fire 1930 Kelvin

Fluorescent 6500 Kelvin

Cloudy Day 6500 Kelvin

Clear Blue Sky 12000 Kelvin

Motors, attached to the winches by metal belts, control the height of the lights. The movement of the winches is controlled from the control panel, which also contains the connection for the talkback system. The intensity and the power on/off of the lights are controlled from the Light Control Unit (LCU). In Doordarshan the LCU is merged with the CCU.

Light Control

The scene to be televised must be well illuminated to produce a clear and noise free picture.

The lighting should also give the depth, the correct contrast and artistic display of various

shades without multiple shadows.

The lighting arrangements in a TV studio have to be very elaborate. A large number of lights

are used to meet the needs of ‘key’, ‘fill’, and ‘back’ lights etc. Lights are classified as spot

and soft lights. These are suspended from motorized hoists and telescopes. The up and down

movement is remotely controlled. The switching on and off of the lights at the required times and their dimming are controlled from the light control panel inside a lighting control room, using SCR dimmer controls which remotely control the various lights inside the studios.

Modern TV studios have a computer-controlled lighting system. The intensities of various

lights can be adjusted independently and memorized for reproduction. The status indication

of lights regarding their location and intensity is available on a monitor/MIMIC display.

During reproduction of a particular sequence, the information from the memory operates the

respective light dimmers. Hand held control boxes are also available for controlling light

intensities inside the studios which communicate via a control panel. Most of the operational


controls of the computerized light control system can also be performed manually with the

back–up matrix and fader controls.

T.V. Camera

A TV Camera consists of three sections:

1. A Camera lens and optical block

2. A transducer or pick up device

3. Electronics

Camera Lens

The purpose of the camera lens is to focus the optical energy onto the faceplate of the pickup device, i.e. to form an optical image. The lens has the following sections:

1. Main focus section

2. Zoom section with manual or servo mode operation

3. Servo drive assembly for Zoom and iris control

4. Aperture section with manual or auto mode

5. Back focus section with adjustment facilities for back and micro focus.

The R, G & B signals, as separated by the optical block, are converted to electrical signals in the transducer section of the camera. They are then processed in the camera electronics to give a CCVS (colour composite video signal) output.


Camera Control Unit (CCU)

The television cameras, which include the camera head with its optical focusing lens, pan and tilt head, video signal pre-amplifier, viewfinder and other associated electronic circuitry, are mounted on camera trolleys and operate inside the studios. The output of the cameras is pre-amplified in the head and then connected to the camera control unit (CCU) through triax cable.

All the camera control voltages are fed from the CCU to the camera head over the camera

cable. The view-finder signal is also sent over the camera cable to the camera head

viewfinder for helping the cameraman in proper focusing, adjusting and composing the shots.

The video signal so obtained is amplified, H.F. corrected, equalized for cable delays and D.C. clamped, and horizontal and vertical blanking pulses are added to it. The peak white level is also clipped to avoid overloading the following stages and over-modulating the transmitter. The composite sync signals are then added and these video signals are fed to a

distribution amplifier, which normally gives multiple outputs for monitoring etc.


MICROPHONES

A microphone is an acoustic-to-electric transducer or sensor that converts sound into an electrical signal.

TYPES OF MICROPHONES

1. CONDENSER MICROPHONE:

In a condenser microphone, also called a capacitor microphone or electrostatic microphone, the diaphragm acts as one plate of a capacitor, and the vibrations produce changes in the distance between the plates.

2. ELECTRET CONDENSER MICROPHONE:

An electret microphone is a relatively new type of capacitor microphone invented at Bell Laboratories in 1962 by Gerhard Sessler and Jim West. An electret is a dielectric material that has been permanently electrically charged or polarized. The name comes from electrostatic and magnet; a static charge is embedded in an electret by alignment of the static charges in the material, much the way a magnet is made by aligning the magnetic domains in a piece of iron.

3. DYNAMIC MICROPHONE:

Dynamic microphones work via electromagnetic induction. They are robust, relatively inexpensive and resistant to moisture. This, coupled with their potentially high gain before feedback, makes them ideal for on-stage use. Moving-coil microphones use the same dynamic principle as a loudspeaker, only reversed.

4. RIBBON MICROPHONE:

Ribbon microphones use a thin, usually corrugated metal ribbon suspended in a

magnetic field. The ribbon is electrically connected to the microphone's output, and its

vibration within the magnetic field generates the electrical signal. Ribbon microphones are similar to moving-coil microphones in the sense that both generate the signal by means of electromagnetic induction.

5. PIEZOELECTRIC MICROPHONE:

A crystal microphone or piezo microphone uses the phenomenon of piezoelectricity (the ability of some materials to produce a voltage when subjected to pressure) to convert vibrations into an electrical signal.

6. LASER MICROPHONE:

Laser microphones are often portrayed in movies as spy gadgets. A laser beam is

aimed at the surface of a window or other plane surface that is affected by sound. The

slight vibrations of this surface displace the returned beam, causing it to trace the

sound wave. The vibrating laser spot is then converted back to sound. In a more robust

and expensive implementation, the returned light is split and fed to an interferometer,

which detects movement of the surface.

7. FIBER OPTIC MICROPHONE:


A fiber optic microphone converts acoustic waves into electrical signals by sensing

changes in light intensity, instead of sensing changes in capacitance or magnetic fields

as with conventional microphones.

Analog & Digital Video Tape Recorder (VTR)

The video tape recorder is one of the most complex pieces of studio equipment, with analog and digital processing, servo systems, microprocessors, memories, logic circuits, mechanical devices, etc. These recorders have also been the main limitation so far as the quality of the output from the studio is concerned.

Video recording requires a higher writing speed and a smaller head gap, along with a reduction in the number of octaves.

Video Tape Formats

Introduction:

The format of a video tape recorder defines the arrangement of magnetic information on the tape. It specifies:

The width of the tape.
The number of tracks for video, audio, control, time code and cue.
The width of the tracks.
Their electrical characteristics and orientation.


Classification

TELECINE:

It is used for telecast purposes. It converts motion picture film into a video signal. This allows the use of old movies and documentaries, and educational and commercial films of various kinds, as a source of programme material. The film transport is continuous, driven by a microcomputer-controlled wrap capstan drive, which is silent and film-saving.

BCN-3:

Composite Analog Formats.

BETACAM:

Component Analog Formats 3(A).

BETACAM SP

The updated models, the Betacam-SP and M-II formats, use metal particle tape and a higher FM carrier frequency. These machines have performance equal to the one-inch Type B or C formats. Introduced in 1986 as an improvement over Betacam, with 60 min/90 min cassettes, Betacam-SP uses a higher specification metal tape which increases overall performance, particularly in bandwidth and multi-generation work. It offers two high specification FM audio channels in addition to two linear audio tracks.

BETACAM SYSTEM

Fig: Betacam Track Layout

Fig: Betacam tape path around the head drum


Heads

Video head(two)

Audio head (two for ch-1 & ch-2)

Video erase head

Audio erase head

All erase head

Time code head (TC head)

Control track head (CTL head)

The R-Y and B-Y signals are clocked into separate one-line-duration stores. Similarly, during the second line, a second pair of stores receives the next R-Y & B-Y.

Meanwhile, R-Y is clocked out of its first store at twice the clock speed, compressing it to 32 microseconds. Then B-Y is clocked out of its first store to fill the next 32-microsecond period. This is called CTDM (compressed time division multiplexing).

The first pair of stores is now empty, ready to receive new R-Y & B-Y from the input signal. While this is going on, double-speed clocks are used to empty the second pair of stores in a sequence of R-Y first and then B-Y.

DVC PROFESSIONAL

Digital composite formats

Fig: Tape construction

These are the ultimate in video recording as the information is recorded in digital form and

multi-generation dubbing is no longer a problem.


Fig: DVC Pro

Magnetic Principle

Let us refresh ourselves with the magnetic principles.

Fig (a) shows a current-carrying conductor causing a magnetic field proportional to the current.

Fig (b) shows that a current-carrying conductor, when wound like a coil, acts like a bar magnet.

Fig (c) shows that when the current-carrying coil is bent to form a ring, the inner field remains homogeneous but the outer field vanishes, i.e. the field lines inside are able to close.


Fig (d) shows a ferromagnetic material inserted inside the ring with a narrow air gap, causing a flux bubble because of the magnetic potential difference across the gap.

The property of ferromagnetic materials to retain magnetism even after the current (or the field H) is removed is called retentivity, and it is used for recording electrical signals in magnetic form on magnetic tape. This relationship can also be represented by a curve called the B-H curve. Magnetic tapes are made of ferromagnetic materials with a broader B-H curve than the material used for the video heads, as the heads are not required to retain information.

Writing Speed and Frequency Response

Recording process: with reference to fig (d), when a tape is passed over the magnetic flux bubble, the electric signal in the coil causes the magnetic lines of force from the head gap to pass through the magnetic material of the tape, producing small magnets whose strength depends upon the strength of the current. The polarity of the magnets depends on the change of current: a decreasing current will cause an N-S magnet and vice versa. The strength of these magnets follows the B-H curve. Thus the magnetic flux aligns the unarranged magnetic particles as per the signal, and they stay in that condition after the tape has passed the magnetic head.

Fig: Recording process and playback process

The length of the magnet thus formed is directly proportional to the writing speed of the head, v, and inversely proportional to the frequency of the signal to be recorded, i.e.

Recorded wavelength for one cycle of signal = speed x time

or, wavelength of the magnetic signal on tape, λ = v/f

During playback, when the recorded tape is passed over the head gap at the same speed at which it was recorded, the flux lines emerging from the tape on crossing the head gap induce a voltage in the coil proportional to the rate of change of flux, i.e. dΦ/dt, and this in turn depends on the frequency of the recorded signal. Doubling the frequency causes the voltage to increase by 6 dB. This accounts for the well-known 6 dB/octave playback characteristic of the recording medium. This holds good only up to a certain limit; thereafter, at very high


frequencies, a lot of losses take place during the playback and recording processes, causing the noise to become greater than the signal itself. It may be noted that when the gap becomes equal to the wavelength of the recorded signal, two adjoining bar magnets produce opposite currents during playback and the output becomes zero. The same thing happens when the gap equals n times the wavelength. For maximum output, the head gap has to be one half of the wavelength. The frequency at which zero output occurs is called the extinction frequency, so the maximum usable frequency becomes half of the extinction frequency. These parameters are related by:

Maximum usable frequency (MUF) = Fext / 2 = writing speed / (2 x head gap)

Fext = v / λ, since the wavelength λ recorded on tape equals the head gap at Fext.

So in order to record higher frequencies we must increase the writing speed for a given minimum value of the wavelength recorded on tape, i.e. λ tape. This minimum value of λ tape is in turn restricted by the smallest practically achievable head gap.

Now the ratio of the highest video and audio frequencies is approximately 300, so we must increase the writing speed or reduce the gap by the same factor of about 300 to get the desired results; a speed of perhaps 60 mph would be required to cope with the higher video frequencies. Keeping in mind the practical limitations, a gap of the order of 0.025 mil and a writing speed of about 600 ips (approximately 15 m/s) are used. For most present-day portable machines, higher performance specifications even at lower writing speeds have become possible because of the development of better quality metal tape and improvements in the video heads.
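As a quick numerical check of these relations, here is a small Python sketch that simply plugs in the head gap and writing speed quoted above (illustrative figures, not the data of any particular machine):

import math

v = 15.0                   # writing speed in m/s (approx. 600 ips, as quoted above)
gap_m = 0.025 * 25.4e-6    # head gap: 0.025 mil converted to metres (1 mil = 25.4 um)

f_ext = v / gap_m          # extinction frequency: wavelength on tape equals the gap
muf = f_ext / 2.0          # maximum usable frequency
print(f"Extinction frequency ~ {f_ext/1e6:.1f} MHz, MUF ~ {muf/1e6:.1f} MHz")

wavelength_um = (v / 5e6) * 1e6   # recorded wavelength of a 5 MHz component, in um
print(f"Recorded wavelength at 5 MHz ~ {wavelength_um:.1f} um")

# Playback output is proportional to dPhi/dt, i.e. to frequency, so doubling the
# frequency raises the output by 20*log10(2), the 6 dB/octave characteristic.
print(f"Rise per octave ~ {20 * math.log10(2):.2f} dB")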


Frequency Modulation

It was found difficult to record and reproduce the roughly 18 octaves of video signal frequencies on tape, even after increasing the writing speed, because the usable dynamic range of about 60 dB corresponds to only about 10 octaves at 6 dB/octave. Thus another requirement for recording video frequencies was to reduce the number of octaves. This was achieved by modulation. Amplitude modulation is not suitable, because variations in output level due to imperfect tape-head contact would themselves appear as modulation; frequency and pulse modulation systems are immune to such amplitude variations. FM, however, produces a large number of sidebands, and since VTRs have a limited bandwidth, a low modulation index has to be used to reduce the sidebands. As an example, for an average-picture-level carrier of 6 MHz and a modulating signal of 5 MHz, the first-order sidebands will be at (6+5) MHz and (6-5) MHz, i.e. the signal extends from 1 MHz to 11 MHz. The octave range of this modulated signal is now only about 4. Hence the octave range is compressed, while the frequency requirement is extended up to 11 MHz. The frequency range is now such that the lowest frequencies are at 1 MHz and the highest at 11 MHz; the required extinction frequency is higher, but the octave range has been reduced to about 4.
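The octave compression can be verified with a couple of lines of Python; the 25 Hz lower limit assumed for baseband video is only illustrative, while the 1-11 MHz FM band comes from the example above.

import math

def octaves(f_low, f_high):
    """Number of octaves spanned between two frequencies."""
    return math.log2(f_high / f_low)

print(f"Baseband video, 25 Hz to 5 MHz : {octaves(25, 5e6):.1f} octaves")    # ~17.6
print(f"FM signal, 1 MHz to 11 MHz     : {octaves(1e6, 11e6):.1f} octaves")  # ~3.5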


Production Control Room (PCR)

One of the important parts of DDK is the PCR, which is the second step of the video chain. The signals generated in the studio are controlled here, and effects and characters are added. So the signal first comes to the PCR and then goes for further processing.

There are three main parts of the PCR i.e.

Base station

Vision mixer

Audio Console

Fig : Basic diagram of PCR

Activities in this area are:-

1. Direction to the production crew by the producer of the programme.

2. Timing a production/telecast.

3. Editing of different sources available at the production desk.

4. Monitoring of output/off air signal.

Hardware provided in this area includes:

1. Monitoring facilities for all the input and output sources (audio/video).


2. Remote control for video mixer, telecine and library store and special effect (ADO) etc.

3. Communication facilities with technical areas and studio floor.

Video Signal Generation

Video is nothing but a sequence of pictures. An image persists in our eye for about 1/16 second, so if we see images at a rate of more than 16 pictures per second our eyes cannot recognize the difference and we perceive continuous motion. For movie cameras and projectors it was found that 24 fps suits the human eye; TV systems could also use this rate, but in the PAL system 25 fps is used. In TV cameras the image is converted into an electrical signal using photosensitive material. The whole image is divided into many tiny elements known as pixels. These pixels are small enough that our eyes cannot resolve them and we see a continuous image.

Thus, at any particular instant there is an almost infinite number of pixels that would need to be converted into electrical signals simultaneously to transmit the picture detail. However, this is not practical, because it is not feasible to provide a separate path for each pixel. In practice this problem is solved by a method known as scanning, in which the information is converted pixel by pixel, line by line and frame by frame.
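A small sketch relating these figures (25 frames per second, together with the 625-line SD frame quoted later in this report):

frames_per_second = 25     # PAL frame rate, as stated above
lines_per_frame = 625      # SD line count quoted later in the report

line_rate = frames_per_second * lines_per_frame   # lines scanned per second
line_period_us = 1e6 / line_rate                  # duration of one scanned line

print(f"Line rate   : {line_rate} Hz")            # 15625 Hz
print(f"Line period : {line_period_us:.0f} us")   # 64 us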

Vision Mixer (or Video Switcher)

Unlike film, the television medium allows switching between different sources at the video switcher in the production control room, operated by the vision mixer (also called the VM engineer) under the direction of the programme producer. The producer directs the cameramen for proper shots on the various cameras through intercom, and the vision mixer switches shots from the selected camera or cameras with split-second accuracy, in close cooperation with the producer. The shots can be switched from one video source to another, superimposed, cross-faded, faded in or faded out electronically, with the actual switching being done during the vertical intervals between the picture frames. Electronic special effects are also used nowadays as a transition between two sources.

For most video switchers, mixing between the sources is possible only if the sources have a timing accuracy within 50 ns to 200 ns and a subcarrier (SC) burst phase accuracy of 1.5 to 5 degrees.

Though the video switching is done by the VM at the remote panel, the electronics is located in the CAR (central apparatus room). The vision mixer is typically a 10 x 6 or 20 x 10 crossbar switcher, selecting any one of the 10 or 20 input sources onto 6 or 10 different output lines. The input sources include: Camera 1, Camera 2, Camera 3, VTR 1, VTR 2, Telecine 1, Telecine 2, test signal, etc.

Some of the sources that have their sync coincident with the station sync are called

synchronous, while others having their own independent sync are called non-synchronous.


The vision mixer provides for the following operational facilities for editing of TV

programs:-

TAKE: Selection of any input source.

CUT: Switching cleanly from one source to another.

DISSOLVE: Fading out of one source of video and fading in another source of video

SUPERIMPOSITION: Superimposition can be obtained by fading up two or more pictures together. This may be used to add titling to an existing picture or for special montage effects.

PREVIEW: The vision mixer also has a preview bank, with its output connected to a monitor. It enables us to check any selected non-studio picture source before it is taken to air.

WIPE: The wipe is a common special effect. It can be described as one picture chasing the original picture off the screen. The direction of entry of the new picture into the original picture can be horizontal, vertical, diagonal, circular and so on.

KEYING: The keying signal can be generated from the luminance, hue or chrominance of the source input. The keyed portion can be filled with the same source, with a matte or with an external source. Matte means an internally generated background with a choice of colour from the vision mixer itself.

Chroma keying:

Fig: Original studio scene


Fig: Computer graphics that will be used for chroma keying.

Fig: Result of chroma keying

The computer graphics are superimposed on the blue background of the studio scene. In this effect a selected portion of the background video source is replaced with the foreground video source. The FG portion to be inserted is determined from a keying waveform, which may be derived from the foreground picture. Sometimes this is also known as colour separation overlay (CSO).

Principle of Chroma Keying: chroma key is concerned with hue, saturation and luminance. A chroma key is produced by the use of a processing unit which is adjustable to a specific hue, saturation and luminance combination in a source signal.
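As an illustration of the principle only (not the processing unit actually used at DDK), here is a minimal chroma-key sketch in Python/NumPy: wherever a pixel of the studio frame is close to the key colour, it is replaced by the corresponding pixel of the graphics frame.

import numpy as np

def chroma_key(studio, graphics, key_rgb=(0, 0, 255), threshold=90):
    """studio, graphics: HxWx3 uint8 frames of the same size; blue key by default."""
    diff = studio.astype(np.int32) - np.array(key_rgb, dtype=np.int32)
    distance = np.sqrt((diff ** 2).sum(axis=-1))   # colour distance to the key colour
    mask = distance < threshold                    # True where the backdrop shows
    out = studio.copy()
    out[mask] = graphics[mask]                     # fill the keyed area with graphics
    return out

# Tiny synthetic example: a 2x2 studio frame with two blue backdrop pixels.
studio = np.array([[[0, 0, 255], [200, 180, 160]],
                   [[10, 20, 250], [90, 90, 90]]], dtype=np.uint8)
graphics = np.full((2, 2, 3), 40, dtype=np.uint8)
print(chroma_key(studio, graphics))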

SPECIAL EFFECTS: A choice of a number of wipe patterns for split screen or wipe effects.


Fig : Vision Mixer

The selected output can be monitored in the corresponding pre-view monitor. All the picture

sources are available on the monitors. The preview monitors can be used for previewing the

telecine, VTR, test signals, etc. with any desired special effect, prior to the actual switching. The switcher also provides cue facilities to switch the camera tally lights as an indication to the cameraman of whether his camera is on the output of the switcher.

The vision mixer is almost the final piece of equipment in programme (video) production, and its output is used either for recording or for transmission. Vision mixing is the process of producing a composite signal from various input sources. It has many input sources, such as cameras, VCR/server, graphics and IRDs. Out of these inputs, any source can be taken to the output.

It is used to switch or cut between 2 video sources, or to combine them in a variety of ways.

There are two types of mixing:

1. Additive mixing

2. Non additive mixing
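As a simple illustration of combining two sources, the Python/NumPy sketch below performs a dissolve (cross-fade) between two frames; real mixers do this in dedicated hardware, so this is only a model of the idea.

import numpy as np

def dissolve(frame_a, frame_b, alpha):
    """Mix two HxWx3 uint8 frames; alpha=0 gives B only, alpha=1 gives A only."""
    mixed = alpha * frame_a.astype(np.float32) + (1.0 - alpha) * frame_b.astype(np.float32)
    return np.clip(mixed, 0, 255).astype(np.uint8)

a = np.full((2, 2, 3), 200, dtype=np.uint8)   # bright source
b = np.full((2, 2, 3), 50, dtype=np.uint8)    # dark source
for alpha in (0.0, 0.5, 1.0):                 # three points along the fade
    print(alpha, dissolve(a, b, alpha)[0, 0])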

Sound mixing and control

As a rule, in television, sound accompanies the picture. Several microphones are generally required for the production of complex television programmes, besides other audio sources.


Audio facilities

An audio mixing console with a number of inputs, say about 32, is provided in each major studio. It includes special facilities such as equalization, PFL, phase reversal, echo send/receive and, at some places, digital reverberation units. Meltron console tape recorders and EMI 938 disc reproducers are provided for playing back or creating audio effects as independent (unmarried) sources to the switcher.

Fig: Audio Console

TALK BACK SYSTEM

This is an audio loop connecting all the sections of Doordarshan so that the different sections can communicate with each other. It works just like an intercom.


POST PRODUCTION SUITES

Modern videotape editing has revolutionized the production of television programs

over the years. The latest trend all over the world is to have more fully equipped post-production suites than studios. The actual production is done in these suites. The jobs of a post-production suite are:

1. To knit together programme material available from various sources.

2. While doing editing with multiple sources, it should be possible to have any kind of

transition.

3. Adding/Mixing sound tracks.

4. Voice over facilities.

5. Creating special effects.

The concept of live editing on the vision mixer is being replaced by "doing it at leisure" in post-production suites.

A well-equipped post production suite will have:-

1. Five VTRs/VCRs, possibly of different formats, remotely controlled by the editor.

2. Vision mixing with special effect and wipes etc. with control from a remote editor panel.

3. Audio mixer with remote control from the editor remote panel.

4. Multi-track audio recorder with time code facilities and remote operation.

5. Adequate monitoring facilities.

6. Non Linear Editing System

7. 3-D Graphics

LINEAR EDITING AND NON-LINEAR EDITING

Linear Editing

Linear editing is a sequential kind of editing, where editing is done continuously along the tape. We cannot jump directly to the exact point of interest where the edit needs to be made.


Fig: Linear Editing

Disadvantages:

1. Low S/N ratio
2. High noise
3. Editing becomes problematic
4. Time consuming

Non Linear Editing (NLE)

Fundamentally, editing is a process in which one places audio and video clips in an appropriate sequence; it is mainly used in video post-production. Linear editing is tape based and sequential in nature. It has various problems, such as long hours spent rewinding tapes in search of material, the potential risk of damage to the original footage, difficulty in inserting a new shot into an edit, difficulty in experimenting with variations, greater quality loss, and limited compositing effects and colour correction capability. Non-linear editing (NLE) is video editing in digital format with standard computer-based technology. NLE can also be extended to film editing. Computer technology is harnessed for random access, computational and manipulation capability, multiple copies, multiple versions, intelligent search, sophisticated project and media management tools, standard interfaces and powerful displays.

Fig: Standard Non Linear Editing


Advantages Of NLE Over LE

1. NLE has various advantages over tape-based (linear) editing: flexibility in all editing functions.

2. Easy to do changes, undo, copy, duplicate and multiple version

3. Easy operation for cut, dissolve, wipes and other transition effects. Multi-layering of video

is easy.

4. Powerful integration of video and graphics, tools for filtering, color correction, key

framing and special 2D/3D effects.

5. Equally powerful audio effects and mixing.

6. Possible to trim; compress or expand the length of the clip.

7. Intelligent and powerful 3D video effect can be created and customized. Efficient and

intelligent storage.

VIDEO CHAIN

The video we see at home is either pre-recorded in the studio or telecast live. The block diagram shown in the figure illustrates the different chains of video recording, video playback, news and live broadcasting. In the first chain we consider studio programme recording. The camera output from the studio hall is sent to the CCU, where many parameters of the video signals are controlled. The output signal of the CCU, after all corrections have been made, is sent to the VM in PCR-1 (production control room). The outputs of 3 to 4 cameras arrive here and the final signal is selected using the VM according to the director's choice.

The final signal from the vision mixer is sent to the VTR. VTRs use both analog and digital tape recording systems. At the time of transmitting a pre-recorded programme, the cassettes are played back in the respective VTR room. The signal from the VTR is sent to PCR-2, which has a vision mixer, a video monitoring system and computer graphics (CG). From PCR-2 the signal travels via the MSR to the transmitter or the earth station for terrestrial or satellite transmission. The MSR is the main control room between the studio and the transmitter and receiver.


Fig: Video Chain

AUDIO CHAIN

In a studio programme, audio from the studio microphones is fed directly to the audio console placed in PCR-1, which is used to mix audio from the different sources and control its output. From the audio console the signal is recorded directly onto tape, along with the video signal, in the VTR. During playback the audio is extracted from the tape, fed to another audio console placed in PCR-2 and then travels with the video signal.

Fig: Audio Chain


MASTER SWITCHING ROOM (MSR)

Introduction

The master switching room (MSR) is the engineering coordination centre for selecting and routing signals from various sources to the transmitter and the earth station. It is the room where all the different sources from the studios arrive first and from where transmission is routed to different destinations such as the transmitter and the earth station. This room comprises the routing switcher, stabilizing amplifiers, video/audio distribution amplifiers, etc. It is the heart of the studio.

Most of the switching electronics is kept here, e.g. camera base stations, the switcher mainframe, SPG, satellite receivers, MW link, DDAs and most of the patch panels. Signals are routed through the MSR and can be monitored at various stages.

This section is equipped with a 64X64 Digital Routing Switcher where all the signals from

Studio-A, Studio-B, Transmitter, Earth Station, OB Van signal, DSNG etc are routed to

various areas as per requirements for recording/transmission. One OFC link between MSR

and Earth Station has also been installed.

Equipment Installed

1. Router (64 X 64) audio video

2. Switcher OFC rack

3. Frame synchronizer

4. DVDs and AVDs

5. D/A and A/D converters

6. SPG

7. IRDs

Fig: Master Switching Room when source is antenna


The Master Switching Room or Master Control Room (MSR/MCR) houses equipment that is

too noisy or runs too hot for the production control room (PCR). It also ensures that coax cable and other wire lengths and installation requirements are kept within manageable limits, since most high-quality wiring runs only between devices in this room. This can include the actual circuitry and connections between all specified devices.

Fig: Master Switching Room when source is PCR

DETAILS OF THE DEVICES USED

IRDs

An Integrated Receiver Decoder (IRD) is an electronic device used to pick up a radio-frequency signal and recover the digital information transmitted in it. The IRD is the interface between the receiving satellite dish network and the broadcasting facility's video/audio infrastructure. The IRD routing used in the MSR feed room at DDK Mandi House is 16x16, i.e. each source can be passed to 16 destinations, which makes transmission easy. In addition to this, a 32x32 type is also used.

Fig: A typical Integrated Receiver Decoder


VTRs

A video tape recorder (VTR) is a tape recorder designed to record video material, usually on

magnetic tape. VTRs originated as individual tape reels, serving as a replacement for motion

picture film stock and making recording for television applications cheaper and quicker. It is

similar to VCRs and VDRs.

Fig: A Typical VTR

DVDAs

A DVDA (digital video distribution amplifier) amplifies the digital video signal and routes it to the defined video displays. It is the device used in the MSR and PCR to view the video signals on more than one interface.

Fig: DVDA with waveform monitor and TV display.

FS (FRAME SYNCHRONISER)

A frame synchronizer is a device used in live television production to match the timing of an

incoming video source to the timing of an existing video system. They are often used to "time


in" consumer video equipment to a professional system but can be used to stabilize any

video. The frame synchronizer essentially takes a picture of each frame of incoming video

and then immediately outputs it with the correct synchronization signals to match an existing

video system.

Fig: Frame Synchronizer

SPG (SYNC PULSE GENERATOR)

A sync pulse generator is a special type of generator which produces synchronization signals,

with a high level of stability and accuracy. These devices are used to provide a master timing

source for a video facility. The output of an SPG will typically be in one of several forms,

depending on the needs of the facility.

Fig: Synchronous Pulse Generator


MODES OF TRANSMISSION

The signal generated or to be received can be transmitted by one of these techniques

1. Satellite

2. Microwave

3. Optical fiber cable (OFC)

4. DSNG

SATELLITE

Fig: satellite communication

This is the most widely used means of transmission. Satellite transmission requires an

unobstructed line of sight. The line of sight will be between the orbiting satellite and a station

on Earth. Satellite signals must travel in straight lines but do not have the limitations of

ground based wireless transmission, such as the curvature of the Earth.

Microwave signals from a satellite can be transmitted to any place on Earth which means that

high quality communications can be made available to remote areas of the world without

requiring the massive investment in ground-based equipment.

This is better discussed in Earth station section.

MICROWAVE TRANSMISSION

Microwave transmission refers to the technology of transmitting information or energy by the

use of radio waves whose wavelengths are conveniently measured in small numbers of centimetres; these are called microwaves. Microwaves are widely used for point-to-point


communications because their small wavelength allows conveniently-sized antennas to direct

them in narrow beams, which can be pointed directly at the receiving antenna. This is better

discussed in Pitampura visit section.

Fig: Microwave Communication

OPTICAL FIBRE CABLE

An optical fiber cable is a cable containing one or more optical fibers. The optical fiber

elements are typically individually coated with plastic layers and contained in a protective

tube suitable for the environment where the cable will be deployed.

Optical fiber consists of a core and a cladding layer, selected for total internal reflection due

to the difference in the refractive index between the two. In practical fibers, the cladding is

usually coated with a layer of acrylate polymer or polyimide. This coating protects the fiber

from damage but does not contribute to its optical waveguide properties. Individual coated

fibers (or fibers formed into ribbons or bundles) then have a tough resin buffer layer and/or

core tube(s) extruded around them to form the cable core. Several layers of protective

sheathing, depending on the application, are added to form the cable. Rigid fiber assemblies

sometimes put light-absorbing ("dark") glass between the fibers, to prevent light that leaks

out of one fiber from entering another. This reduces cross-talk between the fibers, or reduces

flare in fiber bundle imaging applications.


Fig: Optical Fibre

DIGITAL SATELLITE NEWS GATHERING

Digital satellite news gathering (DSNG) is a system that combines electronic news gathering

(ENG) with satellite news gathering (SNG). The first types of ENG systems were extensively

used during the dispute over the Falkland Islands between the United Kingdom and Argentina in 1982. As time passed and electronic devices became smaller, a whole DSNG system could be fitted into a van. DSNG vans are now common; they are extensively used in covering news events.

This is better discussed in OB/DSNG section.


Fig: DSNG Van


EARTH STATION

Introduction

An earth station, ground station, or earth terminal is a terrestrial terminal station designed for

extra planetary telecommunication with spacecraft, and/or reception of radio waves from an

astronomical radio source. Earth stations are located either on the surface of the Earth, or within

Earth’s atmosphere. Earth stations communicate with spacecraft by transmitting and receiving

radio waves in the super high frequency or extremely high frequency bands (e.g., microwaves).

When an earth station successfully transmits radio waves to a spacecraft (or vice versa), it

establishes a telecommunications link.

When a satellite is within an earth station’s line of sight, the earth station is said to have a view

of the satellite. It is possible for a satellite to communicate with more than one earth station at

a time. A pair of earth stations are said to have a satellite in mutual view when the stations

share simultaneous, unobstructed, line-of-sight contact with the satellite.

There are currently three classes of earth stations:

1. Mass capacity station

--Designed for large users or inter-exchange carrier applications. This type of earth station

serves a user community with communications needs great enough to require feeder line access

to the earth station. The cost for earth stations in this class runs into millions of dollars.

2. Middle range earth station

--Designed for large corporate applications. This type of earth station serves a single large user

(e.g. newspaper publisher, financial institution, etc.). The cost for earth stations in this class

runs into the hundreds of thousands of dollars.

3. Low-end earth station

--Designed for smaller corporate applications. This type of earth station serves a single user

(e.g., retailers, general business, etc.) and is typically designed to handle data traffic (e.g., point-

of-sale information, inventory control, credit authorization, and other types of remote

processing). These types of earth stations are established with a minimal amount of equipment

and a very small aperture terminal (VSAT).

Problems of Analog

One programme per channel/transponder

Comparatively noisy

Ghosts in Terrestrial Transmission

Lower quality with respect to VCD, DVD digital medium

Fixed reception


Why Digital?

More programmes per channel/Transponder i.e. spectrum efficient.

Noise-Free Reception.

Ghost elimination.

CD quality sound & better than DVD quality picture.

Reduced transmission power.

Flexibility in service planning.

SATELLITE COMMUNICATION

Satellite communication is the outcome of man's desire to achieve the concept of a global village. The penetration of frequencies beyond 30 MHz through the ionosphere forced people to think that if an object (a reflector) could be placed in space above the ionosphere, then it would be possible to use the complete spectrum for communication purposes.

Intelsat-I (nicknamed Early Bird) was launched on 6 April 1965. It was parked in geosynchronous orbit over the Atlantic Ocean and provided telecommunication or television service between the USA and Europe. It had capacity for 240 one-way telephone channels or one television channel. Subsequently, Intelsat-II generation satellites were launched and parked over the Atlantic Ocean and the Pacific Ocean. During the Intelsat-III generation, not only the Atlantic and Pacific Oceans but also the Indian Ocean got a satellite for the first time. Now Arthur C. Clarke's vision of providing global communication using three satellites spaced about 120 degrees apart became a reality. So far Intelsat has launched seven generations of geosynchronous satellites in all three regions, namely the Atlantic Ocean, Pacific Ocean and Indian Ocean.

Started in 1960.

Uses Geo Stationary Satellite.

Operates in C-band & Ku Band.

Started in India in 1975.

First Indian Satellite INSAT launched in 1982.

Gulf War brought satellite television to prominence.

Architecture of a Satellite Communication System

The Space Segment

The space segment contains the Satellite and all terrestrial facilities for the control and

monitoring of the Satellite. This includes the tracking, telemetry and command stations

(TT&C) together with the Satellite control center where all the operations associated with

station-keeping and checking the vital functions of the satellite are performed. In our case it is

Master Control Facility (MCF) at Hassan. The radio waves transmitted by the earth stations

are received by the satellite; this is called the uplink. The satellite in turn transmits to the

receiving earth stations; this is the down link. The quality of a radio link is specified by its

carrier-to-noise ratio. The important factor is the quality of the total link, from station to station,

and this is determined by the quality of the uplink and that of the down link. The quality of the

total link determines the quality of the signals delivered to the end user in accordance with the

type of modulation and coding used.


The Ground Segment

The ground segment consists of all the earth stations; these are most often connected to the end user's equipment by a terrestrial network or, in the case of small stations (Very Small Aperture Terminal, VSAT), directly connected to the end user's equipment. Stations are

distinguished by their size which varies according to the volume of traffic to be carried on the

space link and the type of traffic (telephone, television or data). The largest are equipped with

antenna of 30 m diameter (Standard A of the INTELSAT network). The smallest have 0.6 m

antenna (direct television receiving stations). Fixed, transportable and mobile stations can also

be distinguished. Some stations are both transmitters and receivers.

Earth station involves the two terms which are basically the important parameters of the

communication i.e. UPLINK & DOWNLINK.

Space Geometry

Types of Orbit

The orbit is the trajectory followed by the satellite in equilibrium between two opposing forces. These are the force of attraction, due to the earth's gravitation, directed towards the centre of the earth, and the centrifugal force associated with the curvature of the satellite's trajectory. The trajectory lies within a plane and is shaped as an ellipse, with a maximum extension at the apogee and a minimum at the perigee. The satellite moves more slowly in its trajectory as the distance from the earth increases.

Most favourable Orbits

Elliptical orbits inclined at an angle of 64 degrees with respect to the equatorial plane. This orbit

enables the satellite to cover regions of high latitude for a large fraction of the orbital period as

it passes to the apogee. This type of orbit has been adopted by the USSR for the satellites of

the MOLNYA system with a period of 12 hours. Please note that the satellite remains above

the regions located under the apogee for a period of the order of 8 hours. Continuous coverage

can be ensured with three phased satellites on different orbits.

Circular inclined orbits: The altitude of the satellite is constant and equal to several hundreds of kilometres. The period is of the order of one and a half hours. With an inclination near 90 degrees, this type of orbit guarantees that the satellite will pass over every region of the earth. Several systems with worldwide coverage use constellations of satellites in low-altitude circular orbits, e.g. IRIDIUM, GLOBALSTAR, ODYSSEY, ARIES, LEOSAT, STARNET, etc.

Circular orbits with zero inclination (equatorial orbits): The most popular is the geostationary satellite orbit; the satellite orbits the earth at an altitude of 35,786 km, in the same direction and with the same period as the rotation of the earth. The satellite thus appears as a point fixed in the sky and ensures continuous operation as a real-time radio relay for the area of visibility of the satellite (43% of the earth's surface).
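As a cross-check of the 35,786 km figure, the geostationary altitude follows from Kepler's third law; the short Python sketch below uses standard constants rather than values taken from this report.

import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2 (standard value)
R_EARTH = 6378.137e3        # equatorial radius of the Earth, m (standard value)
T_SIDEREAL = 86164.0905     # sidereal day, s (one rotation of the Earth)

# Kepler's third law: a^3 = mu * T^2 / (4 * pi^2)
a = (MU_EARTH * T_SIDEREAL ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
altitude_km = (a - R_EARTH) / 1e3

print(f"Geostationary orbit radius : {a/1e3:.0f} km")       # ~42164 km
print(f"Geostationary altitude     : {altitude_km:.0f} km") # ~35786 km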

Factors deciding the selection of orbit

The choice of orbit depends on the nature of the mission, the acceptable interference and the performance of the launchers:

The extent and latitude of the area to be covered.
The elevation angle of the earth stations.
Transmission duration and delay.
Interference.
The performance of the launchers.

UPLINK:

The process of gathering the programme information and sending it to the satellite on a specified frequency is termed the UPLINK. The uplink frequency used is 5950 MHz.

Fig: Uplink

Output Chain of the Earth Station (Uplink)

1. The recorded information is in analog form and needs to be converted into digital form for long-route transmission by the encoder.

2. The encoder also applies compression.

3. The many digitized signals are then fed to a multiplexer (many into one) so that we have one output signal at a time.

4. This output signal has poor strength and power and is not fit for long distance transmission, so it needs to be passed through a modulator, where it is superimposed on a carrier; the modulator, however, can raise its frequency only up to 70 MHz.

5. After that, an IF switch selects one of the modulator outputs and divides it in a 1:4 ratio.

6. The signal is then passed through an equalizer to limit its parameters, such as amplitude and phase, and to compensate for the effect of delay in the signal.

7. The UPC (up-converter) increases the signal frequency to a range that can reach the satellite, as sketched below. This is done by mixing the signal with a locally generated high-frequency oscillator signal.

8. It is then sent to an RF selector switch, where the signal is split and provided to two different HPAs.

9. The High Power Amplifier (HPA) amplifies the signal to 750 W.

10. The two signals are combined in a combiner and carried through a hollow rectangular waveguide (nowadays travelling wave tubes, TWTs, are also in use) to the antenna and then to the SATELLITE.

11. The uplink frequency assigned to the Doordarshan service station is 5950 MHz.

12. All these signals are SD, i.e. standard definition, which uses 625 lines.

13. Very few signals are HD, i.e. high definition, which uses 720 lines.
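The frequency-plan sketch referred to in step 7 is given below; the 70 MHz IF and the 5950 MHz uplink frequency are taken from the list above, while the single-mixer arrangement and the resulting LO value are only an illustrative assumption.

IF_MHZ = 70.0          # modulator output (intermediate frequency), from the text
UPLINK_MHZ = 5950.0    # uplink frequency assigned to Doordarshan, from the text

LO_MHZ = UPLINK_MHZ - IF_MHZ   # LO the up-converter would need (illustrative)
print(f"Required up-converter LO : {LO_MHZ:.0f} MHz")   # 5880 MHz
print(f"Check: {IF_MHZ:.0f} + {LO_MHZ:.0f} = {IF_MHZ + LO_MHZ:.0f} MHz")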

DOWNLINK

Input Chain of the Earth Station (Downlink)

1. Downlinking is just the opposite of uplinking.

2. The satellite transmits the signal, which is received by the earth station.

3. The signals are passed through an LNA (Low Noise Amplifier), so that less noise is added while receiving the signal from the satellite, or through an LNBC (Low Noise Block Converter), whose local oscillator near 5150 MHz brings the frequency down.

4. The signal is then divided in a 1:4 ratio by the RF divider for the various outputs.

5. An RF patch panel is used to monitor the signal at this point.

6. The incoming C-band signals are converted into L band by a C-to-L band down-converter, just to lower the high-frequency signals.

7. The signal is then divided in a 1:4 ratio by the L-band divider for the various outputs.

8. These four outputs are then passed through IRDs (decoders) to convert them back to analog signals and then to a patch panel for monitoring purposes.

9. After this, they are sent through 40x40 SDI/ASI routers to the Suit View Leitch (SVL).

10. The output of the SVL enables us to view 8 channels simultaneously on a single 42" LCD monitor.

11. The downlinking procedure is done mostly for monitoring purposes, just to confirm that whatever we are uplinking is actually going out.


Fig: Downlink

Satellite Transponder

As shown in the figure, the uplinked signal (6 GHz band) received at the satellite is amplified, down-converted to the 4 GHz band and sent back through a filter and power amplifier (TWT). The local oscillator frequency of the down-converter is 2225 MHz for C-band and Extended C-band transponders.

Fig: Block diagram of Satellite Transponder

Receiving Satellite Signal

For receiving a satellite signal, the following equipment is needed:

1. Satellite receiving antenna (PDA).

2. Feed with low noise block converter (LNBC).

3. Indoor unit consisting of a satellite system unit and a synthesised satellite receiver.


Azimuth and Elevation

For receiving a satisfactory signal from the satellite the dish antenna should be pointed

towards the satellite accurately. For that we need to know the azimuth and elevation of a

particular satellite from our place. The azimuth and elevation are angles which specify the

direction of a satellite from a point on the earth's surface. In layman terms the azimuth is the

east west movement and the elevation can be defined as the north south movement of the

dish. Both the azimuth and elevation of a dish can be affected by three factors for geo-

stationary satellites.

They are

1. The longitude of the satellite.

2. The latitude of the place.

3. The longitude of the place.

Calculation of Angle of Elevation

E = tan⁻¹[ (cos φ · cos D − r/R) / √(1 − cos²φ · cos²D) ]

where r = radius of the earth (6367 km), R = radius of the synchronous orbit (42,165 km), φ = latitude of the earth station, and D = difference in longitude between the earth station and the satellite.


Calculation of Azimuth
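As a worked illustration of both calculations, the sketch below applies the standard geostationary look-angle formulas; the azimuth expression and the example satellite longitude of 93.5° E are assumptions used only for illustration, while r, R, the station latitude and the longitude difference D are the quantities defined above.

```python
import math

# Look-angle sketch for a geostationary satellite using the standard
# formulas. Only r, R and the input quantities (latitude, longitudes)
# are taken from this report; the azimuth expression and the example
# values below are assumptions used purely for illustration.

R_EARTH_KM = 6367.0      # r: radius of the earth
R_ORBIT_KM = 42165.0     # R: radius of the synchronous orbit

def look_angles(lat_deg: float, lon_deg: float, sat_lon_deg: float):
    """Return (elevation, azimuth) in degrees for a northern-hemisphere site."""
    lat = math.radians(lat_deg)
    d = math.radians(sat_lon_deg - lon_deg)   # D: positive if the satellite is east of the site

    # Cosine of the central angle between the site and the sub-satellite point
    cos_g = math.cos(lat) * math.cos(d)
    elevation = math.degrees(math.atan2(cos_g - R_EARTH_KM / R_ORBIT_KM,
                                        math.sqrt(1.0 - cos_g ** 2)))

    # Azimuth measured clockwise from true north (northern-hemisphere case)
    azimuth = 180.0 - math.degrees(math.atan2(math.tan(d), math.sin(lat)))
    return elevation, azimuth

# Example: a New Delhi earth station (28.6 N, 77.2 E) looking at a
# satellite parked at 93.5 E (longitude chosen only for illustration).
el, az = look_angles(28.6, 77.2, 93.5)
print(f"Elevation ~ {el:.1f} deg, Azimuth ~ {az:.1f} deg")   # roughly 52 deg and 149 deg
```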

Indoor Units

The indoor unit consists of two units:

1. System unit

2. Satellite Receiver Unit

System unit

The system unit contains a passive power divider and the power supply for the LNBC. The power divider splits the IF into two equal parts to feed the two receivers, and the power supply is fed to the LNBC through the same cable.

Satellite Receiver Unit

The satellite receiver contains the down converter, video/audio demodulators and processing circuits, and finally provides two video/audio outputs. A synthesised receiver accepts signals in the range of 900 to 1700 MHz. The block diagram of a typical EC receiver is shown in figure 9. The IF is applied to a four-stage low noise amplifier with an overall gain of around 22 dB. The signal is then applied to a FET mixer, where it is mixed with an LO frequency of 1500 to 2300 MHz to produce an IF of 600 MHz. The local oscillator consists of two similar VCOs (voltage controlled oscillators), one operating in the range of 1500 to 1749 MHz and the other in the range of 1750 to 2300 MHz, controlled by a synthesiser IC. A sample of the LO frequency is phase-compared with a stable 4 MHz reference crystal frequency, and any error is applied to the VCO through a low pass filter for frequency correction. Thus the VCO works in a phase-locked loop.
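A small sketch of this receiver's frequency plan, assuming high-side injection (the LO sitting one IF above the input); the 600 MHz IF, the VCO ranges and the 4 MHz crystal reference are the figures quoted above, while the 1425 MHz input carrier is only an assumed example:

```python
# Frequency-plan sketch for the synthesised receiver described above.
# The 600 MHz IF, the 1750-2300 MHz upper VCO range and the 4 MHz
# reference come from this report; the input carrier is assumed.

IF_MHZ = 600.0
REF_MHZ = 4.0
input_mhz = 1425.0                 # example L-band carrier (assumed)

# High-side injection: the LO sits one IF above the input frequency.
lo_mhz = input_mhz + IF_MHZ        # 2025 MHz, inside the 1750-2300 MHz VCO range
print(f"Required LO: {lo_mhz:.0f} MHz")

# In the phase-locked loop a sample of the LO is compared against the
# 4 MHz crystal; the overall LO-to-reference ratio is therefore:
print(f"LO / reference ratio: {lo_mhz / REF_MHZ:.1f}")   # ~506 (divider details not given here)
```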

OB/DSNG VAN

Introduction

Outside broadcasting is the production of television or radio programmes (typically to cover

news and sports events) from a mobile television studio. This mobile control room is known

as an "Outside Broadcasting Van", "OB Van", "Scanner" (a BBC term), "mobile unit",

"remote truck", "live truck", or "production truck". Signals from cameras and microphones

come into the OB Van for processing and transmission. The final output is then transferred to

the DSNG Van (Digital Satellite News Gathering) for direct uplinking or transmission to the MSR. The OB Van is known as the 'Mobile Studio' and the DSNG Van as the 'Mobile Earth Station'.


Fig: Outside Broadcast van (OB van)

OB VAN

The OB Van is equipped with 8 Thomson TTV 1657 digital CCD cameras and a 16-input ROSS Synergy vision mixer with various special effects. A 16-channel Sound Craft audio mixer provides individual channel equalization and limiting. In addition to the above, one computerized MOVE CG is provided for superimposing titles.

Two broadcast-quality VCRs with slow motion (TTV 3575P), two recording VCRs and one EVS live slow-motion hard disc recording system are also installed. One long-haul microwave link is also available with the OB Van.

OB VAN IS EQUIPPED WITH

1. Highly compact 20 kVA on-line UPS system.

2. Mains and UPS Power Supply status (Aural & Visual) indication should be made available

in the Operational Area through Remote Monitoring Facility. It should be clearly visible and

audible.

3. Power Distribution Rack with Metering facilities should be provided in the OB Van.

4. Internal lighting suitable for both TV Production and Operation & Maintenance should be

provided in the OB Van.

5. Air conditioners of adequate tonnage, concentrated especially on the equipment racks while keeping the operators comfortable, will be part of the offer.


6. The OB Van should be equipped with equipment bays for Production, Recording, Super

Slow Motion/Slow Motion Area, CCU, Monitoring and Test Equipment with an ease of

access while servicing.

7. The OB Van should be provided with ergonomically & aesthetically designed Production,

CCU, Audio, Recording, Super Slow Motion/ Slow Motion Area, End Control Desks and

Video Monitoring walls etc.

8. Technical Furniture including Operator chairs etc. will also be part of the offer.

9. The OB Van should be provided with suitable storage area in the rear and both sides of the

vehicle for Cameras, Lens, various cable drums and miscellaneous equipment etc.

10. The OB Van should have hydraulic stabilization jacks.

11. Portable Fire Fighting Extinguishers of suitable type in each partition should also be

provided.

OB SECTIONS

The OB Van has five sections:

1. The 1st and largest part is the production area where the director, technical director,

assistant director, character generator operator and producers usually sit in front of a wall of

monitors. This area is very similar to a Production control room. The technical director sits in

front of the video switcher. The monitors show all the video feeds from various sources,

including computer graphics, cameras, video tapes, video servers and slow motion replay

machines. The wall of monitors also contains a preview monitor showing what could be the

next source on air and a program monitor that shows the feed currently going to air or being

recorded. Behind the directors there is usually a desk with monitors for the editors to operate.

It is essential that the directors and editors stay in constant communication during events, so that replays and slow-motion shots can be selected and aired.

2. The 2nd part of a van is for the audio engineer; it has a sound mixer. The audio engineer

can control which channels are added to the output and will follow instructions from the

director.

3. The 3rd part of the van is the video tape area. It has a collection of VTRs and may also house additional power supplies or computer equipment.

4. The 4th part is the video control area where the cameras are controlled by 1 or 2 people to

make sure that the iris is at the correct exposure and that all the cameras look the same.

5. The 5th part is transmission, where the signal is monitored and engineered for quality-control purposes and is then transmitted or sent to other trucks.


Fig: OB Van Sections

Fig: Inside view of OB Van

Main Components of OB Van

1. Parabolic Dish Antenna

2. Feed

3. LNA / LNBC

4. Wave Guide / Low Loss Cable

5. HPA / SSPA

6. UP Converter

7. Modulator / Multiplexer

8. Encoder / Decoder