
A General Seminar Report On

“ENGINEERING IN ENTERTAINMENT”

BACHELOR OF ENGINEERING

IN

ELECTRONICS AND COMMUNICATION ENGINEERING

BY SYED MUNEEB ULLAH HUSSAINI

(1604-10-735-128)

DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING

MUFFAKHAM JAH COLLEGE OF ENGINEERING AND TECHNOLOGY

(Affiliated to OSMANIA UNIVERSITY)

HYDERABAD

2013


TABLE OF CONTENTS

1. Introduction
2. Hand Drawn Animation
3. Computer Animation
4. 2D Technology
5. 3D Technology
6. 4D Technology
7. 5D Technology
8. Super Fast Camera
9. Aerial Filming
10. CGI Visual Effects
11. Rhythm And Hues Visual Effects
12. LOLA Visual Effects
13. Conclusion
14. References


INTRODUCTION

With the growth of public interest in entertainment, many filmmakers are coming up with new story lines to satisfy viewers, cope with the competition, and attract larger audiences.

Electronics companies such as Samsung and LG have also introduced various products to enhance the viewing experience.

While making a movie, however, filmmakers face various problems, and these problems give rise to new technology.

“INSIDE OF EVERY PROBLEM LIES AN OPPORTUNITY”

- NIKOLA TESLA


HAND DRAWN ANIMATION

Hand-drawn animation is an animation technique in which each frame is drawn by hand. The technique was the dominant form of animation in cinema until the advent of computer animation. In the early days, a cartoon like Mickey Mouse had to be drawn entirely by hand, and for every action the cartoonist had to draw a separate frame. For instance, to show a hand moving from one position to another, the cartoonist might have to draw around 50 to 60 frames.

CARTOONIST DRAWING MICKEY MOUSE

Making a single episode of a cartoon therefore took many cartoonists and a great deal of time.

This problem gave rise to digital animation technology.


Different Techniques of Hand Drawn Animation

Cels

This image shows how two transparent cels, each with a different character

drawn on them, and an opaque background are photographed together to form

the composite image.

The cel is an important innovation to traditional animation, as it allows some parts of each frame to be

repeated from frame to frame, thus saving labor. A simple example would be a scene with two characters

on screen, one of which is talking and the other standing silently. Since the latter character is not moving,

it can be displayed in this scene using only one drawing, on one cel, while multiple drawings on multiple

cels are used to animate the speaking character.

For a more complex example, consider a sequence in which a boy sets a plate upon a table. The table

stays still for the entire sequence, so it can be drawn as part of the background. The plate can be drawn

along with the character as the character places it on the table. However, after the plate is on the table,

the plate no longer moves, although the boy continues to move as he draws his arm away from the plate.

In this example, after the boy puts the plate down, the plate can then be drawn on a separate cel from the

boy. Further frames feature new cels of the boy, but the plate does not have to be redrawn as it is not

moving; the same cel of the plate can be used in each remaining frame that it is still upon the table. The

cel paints were actually manufactured in shaded versions of each color to compensate for the extra layer


of cel added between the image and the camera; in this example the still plate would be painted slightly

brighter to compensate for being moved one layer down. In TV and other low-budget productions, cels

were often "cycled" (i.e. a sequence of cels was repeated several times), and even archived and reused

in other episodes. After the film was completed, the cels were either thrown out or, especially in the early

days of animation, washed clean and reused for the next film. Some studios saved a portion of the cels

and either sold them in studio stores or presented them as gifts to visitors.

In very early cartoons made before the use of the cel, such as Gertie the Dinosaur (1914), the entire

frame, including the background and all characters and items, were drawn on a single sheet of paper,

then photographed. Everything had to be redrawn for each frame containing movements. This led to a

"jittery" appearance; imagine seeing a sequence of drawings of a mountain, each one slightly different

from the one preceding it. The pre-cel animation was later improved by using techniques like the slash

and tear system invented by Raoul Barre; the background and the animated objects were drawn on

separate papers. A frame was made by removing all the blank parts of the papers where the objects were

drawn before being placed on top of the backgrounds and finally photographed. The cel animation

process was invented by Earl Hurd and John Bray in 1915.

Limited animation

In lower-budget productions, shortcuts available through the cel technique are used extensively. For

example, in a scene in which a man is sitting in a chair and talking, the chair and the body of the man may

be the same in every frame; only his head is redrawn, or perhaps even his head stays the same while

only his mouth moves. This is known as limited animation. The process was popularized in theatrical

cartoons by United Productions of America and used in most television animation, especially that

of Hanna-Barbera. The end result does not look very lifelike, but is inexpensive to produce, and therefore

allows cartoons to be made on small television budgets.

"Shooting on twos"

Moving characters are often shot "on twos", that is to say, one drawing is shown for every two frames of

film (which usually runs at 24 frames per second), meaning there are only 12 drawings per second. Even

though the image update rate is low, the fluidity is satisfactory for most subjects. However, when a

character is required to perform a quick movement, it is usually necessary to revert to animating "on

ones", as "twos" are too slow to convey the motion adequately. A blend of the two techniques keeps the

eye fooled without unnecessary production cost.
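
To make the frame arithmetic concrete, here is a minimal Python sketch (not part of the original report; the function name is my own) showing how drawings map to 24 fps film frames when shooting on ones versus on twos:

```python
# Illustrative sketch: expanding drawings to 24 fps film frames when
# shooting "on twos" (each drawing held for 2 frames) versus "on ones".

def frames_from_drawings(num_drawings, hold=2):
    """Return the list of drawing indices shown on each film frame."""
    return [i for i in range(num_drawings) for _ in range(hold)]

# One second of animation at 24 fps:
on_twos = frames_from_drawings(12, hold=2)   # 12 drawings -> 24 frames
on_ones = frames_from_drawings(24, hold=1)   # 24 drawings -> 24 frames

print(len(on_twos), on_twos[:6])   # 24 frames: [0, 0, 1, 1, 2, 2]
print(len(on_ones), on_ones[:6])   # 24 frames: [0, 1, 2, 3, 4, 5]
```

For a fast movement the animator would simply switch to hold=1 for the frames that need it, which is the blend of ones and twos described above.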

Animation loops

Creating animation loops or animation cycles is a labor-saving technique for animating repetitive motions,

such as a character walking or a breeze blowing through the trees. In the case of walking, the character is

animated taking a step with his right foot, then a step with his left foot. The loop is created so that, when

the sequence repeats, the motion is seamless. However, since an animation loop essentially uses the

same bit of animation over and over again, it is easily detected and can in fact become distracting to an

audience. In general, they are used only sparingly by productions with moderate or high budgets.
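
A minimal sketch of the looping idea, assuming a hypothetical eight-drawing walk cycle; the modulo step is what makes the cycle repeat seamlessly, provided the last drawing flows back into the first:

```python
# Illustrative sketch: a walk cycle of 8 drawings looped over a longer shot.

walk_cycle = ["contact_R", "down_R", "pass_R", "up_R",
              "contact_L", "down_L", "pass_L", "up_L"]

shot_length_frames = 48  # two seconds at 24 fps
for frame in range(shot_length_frames):
    drawing = walk_cycle[frame % len(walk_cycle)]  # repeat the same 8 drawings
    print(frame, drawing)  # ...composite this drawing onto the background...
```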


Multiplane camera

The multiplane camera is a tool used to add depth to scenes in 2D animated movies, called the

multiplane effect or the parallax process. The art is placed on different layers of glass plates, and as the

camera moves vertically towards or away from the artwork levels, the camera's viewpoint appears to

move through the various layers of artwork in 3D space. The panorama views in Pinocchio are examples

of the effects a multiplane camera can achieve. Different versions of the camera have been made through

time, but the most famous is the one developed by the Walt Disney studio beginning with their 1937

short The Old Mill. Another one, the "Tabletop", was developed by Fleischer Studios. The Tabletop, first

used in 1934's Poor Cinderella, used miniature sets made of paper cutouts placed in front of the camera

on a rotating platform, with the cels between them. By rotating the entire setup one frame at a time in

accordance with the cel animation, realistic panoramas could be created. Ub Iwerks and Don Bluth also

built multiplane cameras for their studios.
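
A rough sketch of the parallax idea behind the multiplane camera, using the standard pinhole approximation (on-screen shift inversely proportional to depth); this illustrates the principle only, not Disney's actual rig:

```python
# Illustrative sketch of the multiplane/parallax idea: for a sideways camera
# move, a layer's apparent shift on screen is inversely proportional to its
# distance from the camera (pinhole approximation).

def apparent_shift(camera_move, layer_depth, focal_length=1.0):
    """On-screen shift of a layer for a lateral camera move."""
    return camera_move * focal_length / layer_depth

camera_move = 10.0  # arbitrary units
for name, depth in [("foreground bushes", 2.0),
                    ("mid-ground house", 5.0),
                    ("background hills", 20.0)]:
    print(f"{name:18s} shifts by {apparent_shift(camera_move, depth):.2f}")
# Near layers slide past quickly, far layers barely move -> illusion of depth.
```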

Xerography

Applied to animation by Ub Iwerks at the Walt Disney studio during the late 1950s,

the electrostatic copying technique called xerography allowed the drawings to be copied directly onto the

cels, eliminating much of the "inking" portion of the ink-and-paint process. This saved time and money,

and it also made it possible to put in more details and to control the size of the xeroxed objects and

characters (this replaced the little known, and seldom used, photographic lines technique at Disney, used

to reduce the size of animation when needed). At first it resulted in a more sketchy look, but the technique

was improved upon over time.

The APT process

Invented by Dave Spencer, the APT (Animation Photo Transfer) process was a technique for transferring

the animators' art onto cels. The APT process is a photographic transfer system that can

photographically transfer lines or solid blocks of colors onto acetate sheets (cels). A very similar process

is used in silk screen printing. The process relies on UV-sensitive inks that cure when exposed to light

and stick to the plastic sheet, while the ink in the non-exposed areas are chemically removed from the

sheet. Its main advantage is that coloring - normally done via back painting after xerox scanning - can be

controlled better and multiple versions made quickly. To put it simply: the drawings are photographed and

the negatives then processed onto the cels instead of the typical photography. It also meant that a line on

an animated character could be in color instead of just black (although xerography at this point could be

done in colors too); this is known as self-colored lines.

This process was used on Disney's animated features such as The Black Cauldron, The Great Mouse

Detective, Oliver & Company and The Little Mermaid.

Spencer received an Academy Award for Technical Achievement for developing this process.

Cel overlay

A cel overlay is a cel with inanimate objects used to give the impression of a foreground when laid on top

of a ready frame. This creates the illusion of depth, but not as much as a multiplane camera would. A


special version of cel overlay is called line overlay, made to complete the background instead of making

the foreground, and was invented to deal with the sketchy appearance of xeroxed drawings. The

background was first painted as shapes and figures in flat colors, containing rather few details. Next, a cel

with detailed black lines was laid directly over it, each line drawn to add more information to the

underlying shape or figure and give the background the complexity it needed. In this way, the visual style

of the background will match that of the xeroxed character cels. As the xerographic process evolved, line

overlay was left behind.

Rotoscoping

Rotoscoping is a method of traditional animation invented by Max Fleischer in 1915, in which animation is

"traced" over actual film footage of actors and scenery. Traditionally, the live action will be printed out

frame by frame and registered. Another piece of paper is then placed over the live action printouts and

the action is traced frame by frame using a lightbox. The end result still looks hand drawn but the motion

will be remarkably lifelike.

A method related to conventional rotoscoping was later invented for the animation of solid inanimate

objects, such as cars, boats, or doors. A small live action model of the required object was built and

painted white, while the edges of the model were painted with thin black lines. The object was then filmed

as required for the animated scene by moving the model, the camera, or a combination of both, in real

time or using stop-motion animation. The film frames were then printed on paper, showing a model made

up of the painted black lines. After the artists had added details to the object not present in the live-action

photography of the model, it was xeroxed onto cels. A notable example is Cruella de Vil's car in

Disney's One Hundred and One Dalmatians. The process of transferring 3D objects to cels was greatly

improved in the 1980s when computer graphics advanced enough to allow the creation of 3D computer

generated objects that could be manipulated in any way the animators wanted, and then printed as

outlines on paper before being copied onto cels using Xerography or the APT process. This technique

was used in Disney films such as Oliver and Company (1988) and The Little Mermaid (1989). This

process has more or less been superseded by the use of cel-shading.


COMPUTER ANIMATION

Computer animation or CGI animation is the process used for generating animated images by

using computer graphics. The more general term computer-generated imagery encompasses both static

scenes and dynamic images, while computer animation only refers to moving images.

Computer animation

Modern computer animation usually uses 3D computer graphics, although 2D computer graphics are still

used for stylistic, low bandwidth, and faster real-time renderings. Sometimes the target of the animation is

the computer itself, but sometimes the target is another medium, such as film.

Computer animation is essentially a digital successor to the stop motion techniques used in traditional

animation with 3D models and frame-by-frame animation of 2D illustrations. Computer generated

animations are more controllable than other more physically based processes, such as

constructing miniatures for effects shots or hiring extras for crowd scenes, and because it allows the

creation of images that would not be feasible using any other technology. It can also allow a single

graphic artist to produce such content without the use of actors, expensive set pieces, or props.

To create the illusion of movement, an image is displayed on the computer monitor and repeatedly

replaced by a new image that is similar to it, but advanced slightly in time (usually at a rate of 24 or 30

frames/second). This technique is identical to how the illusion of movement is achieved with

television and motion pictures.

For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are

rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and separate

transparent layers are used, with or without a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc.

of the figure are moved by the animator on key frames. The differences in appearance between key

frames are automatically calculated by the computer in a process known as tweening or morphing.

Finally, the animation is rendered.

For 3D animations, all frames must be rendered after modeling is complete. For 2D vector animations, the

rendering process is the key frame illustration process, while tweened frames are rendered as needed.
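
A minimal sketch of tweening, assuming simple linear interpolation between two keyed positions (real packages also use easing curves and splines):

```python
# Illustrative sketch of tweening: the computer interpolates a value (here a
# 2D position) between two key frames set by the animator.

def tween(key_a, key_b, frame_a, frame_b, frame):
    """Linear interpolation between two keyed 2D positions."""
    t = (frame - frame_a) / (frame_b - frame_a)
    return tuple(a + t * (b - a) for a, b in zip(key_a, key_b))

# Key frames: arm tip at (0, 0) on frame 0 and (100, 40) on frame 24.
for frame in range(0, 25, 6):
    print(frame, tween((0, 0), (100, 40), 0, 24, frame))
# (0, 0), (25, 10), (50, 20), (75, 30), (100, 40): the in-betweens come free.
```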


For pre-recorded presentations, the rendered frames are transferred to a different format or medium such

as film or digital video. The frames may also be rendered in real time as they are presented to the end-

user audience. Low bandwidth animations transmitted via the internet (e.g. 2D Flash, X3D) often use

software on the end-users computer to render in real time as an alternative to streaming or pre-loaded

high bandwidth animations.

Computer-assisted Vs computer-generated animation

To animate means "to give life to" and there are two basic ways that animators commonly do this.

Computer-assisted animation is usually classed as two-dimensional (2D) animation. The creator's drawings, either hand drawn (pencil on paper) or drawn interactively on the computer using various assisting devices, are brought into specific software packages. Within the software package the creator places drawings into different key frames, which fundamentally create an outline of the most important movements. The computer then fills in all the "in-between" frames, a process commonly known as tweening. Computer-assisted animation basically uses new technology to cut down the time that traditional animation would take, while keeping the traditional, drawn look of characters and objects.

Two examples of films using computer-assisted animation are Beauty and the Beast and Antz.

Computer-generated animation is known as three-dimensional (3D) animation. Creators design an object or character along X, Y and Z axes; unlike traditional animation, no pencil-and-paper drawings are required. The object or character is then taken into software where key framing and tweening are carried out, along with many techniques that have no equivalent in traditional animation. Animators can break physical laws by using mathematical algorithms to cheat the rules of mass, force and gravity. Fundamentally, production time and image quality are the two major things enhanced by computer-generated animation, which is why it is often the preferred way to produce animation. Another great aspect of CGA is that you can create a flock of creatures that act independently once created as a group, and an animal's fur can be programmed to wave in the wind and lie flat when it rains instead of animating each strand of hair separately.

Three examples of computer-generated animation movies are Toy Story, The Incredibles and Shrek.
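
As a loose illustration of that last point, here is a toy sketch of a group of creatures driven by one shared rule plus individual randomness; a real crowd or fur system is far more sophisticated, but the principle of animating the group through rules rather than by hand is the same:

```python
# Illustrative sketch of "a flock of creatures acting independently": each
# agent follows the same simple rule (drift toward the flock centre plus a
# little individual randomness), so the group moves without per-creature
# hand animation.

import numpy as np

rng = np.random.default_rng(1)
positions = rng.uniform(0, 100, size=(20, 2))   # 20 creatures on a 2D plane

def step(positions, cohesion=0.05, jitter=1.0):
    centre = positions.mean(axis=0)
    toward_centre = (centre - positions) * cohesion   # stay with the group
    wander = rng.normal(0, jitter, size=positions.shape)  # individual motion
    return positions + toward_centre + wander

for _ in range(100):
    positions = step(positions)

print(positions.mean(axis=0))  # the flock drifts together as one group
```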


2D Technology

2D computer graphics is the computer-based generation of digital images—mostly from two-

dimensional models (such as 2D geometric models, text, and digital images) and by techniques specific

to them. The word may stand for the branch of computer science that comprises such techniques, or for

the models themselves.

2D computer graphics are mainly used in applications that were originally developed upon

traditional printing and drawing technologies, such as typography, cartography, technical

drawing, advertising, etc. In those applications, the two-dimensional image is not just a representation of

a real-world object, but an independent artifact with added semantic value; two-dimensional models are

therefore preferred, because they give more direct control of the image than 3D computer

graphics (whose approach is more akin to photography than to typography).

In many domains, such as desktop publishing, engineering, and business, a description of a document

based on 2D computer graphics techniques can be much smaller than the corresponding digital image—

often by a factor of 1/1000 or more. This representation is also more flexible since it can be rendered at

different resolutions to suit different output devices. For these reasons, documents and illustrations are

often stored or transmitted as 2D graphic files.
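
A small sketch of why the vector description is so much smaller, using made-up numbers for a single filled circle versus the equivalent bitmap:

```python
# Illustrative sketch of why a 2D vector description can be far smaller than
# the equivalent raster image (the numbers are made up for illustration).

# Vector form: a filled circle is just a few numbers.
circle = {"cx": 512, "cy": 384, "r": 200, "fill": "#cc0000"}
vector_bytes = len(str(circle).encode("utf-8"))

# Raster form: a 1024x768 RGB bitmap stores 3 bytes per pixel.
raster_bytes = 1024 * 768 * 3

print(vector_bytes, raster_bytes, raster_bytes // vector_bytes)
# A few dozen bytes versus ~2.3 MB, and the vector version can be re-rendered
# at any resolution for any output device.
```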

2D computer graphics started in the 1950s, based on vector graphics devices. These were largely

supplanted by raster-based devices in the following decades. The PostScript language and the X Window

System protocol were landmark developments in the field.

Distinction from photorealistic 2D graphics

Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with

3D photorealistic effects are often achieved without wireframe modeling and are sometimes

indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D

vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D

effects and manually render photorealistic effects without the use of filters


3D TECHNOLOGY

3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional

representation of geometric data (often Cartesian) that is stored in the computer for the purposes of

performing calculations and rendering 2D images. Such images may be stored for viewing later or

displayed in real-time.

3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-

frame model and 2D computer raster graphics in the final rendered display. In computer graphics

software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D

techniques to achieve effects such as lighting, and 3D may use 2D rendering techniques.

3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is

contained within the graphical data file. However, there are differences. A 3D model is

the mathematical representation of any three-dimensional object. A model is not technically a graphic until

it is displayed. Due to 3D printing, 3D models are not confined to virtual space. A model can be displayed

visually as a two-dimensional image through a process called 3D rendering, or used in non-

graphical computer simulations and calculations.

Rendering

Rendering converts a model into an image either by simulating light transport to get photo-realistic

images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in

realistic rendering are transport (how much light gets from one place to another) and scattering (how

surfaces interact with light). This step is usually performed using 3D computer graphics software or a 3D

graphics API. Altering the scene into a suitable form for rendering also involves 3D projection, which

displays a three-dimensional image in two dimensions.
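
A minimal sketch of that projection step, assuming a simple pinhole (perspective) camera:

```python
# Illustrative sketch of 3D projection: a perspective (pinhole) projection of
# camera-space points onto a 2D image plane.

def project(point, focal_length=35.0):
    """Project a camera-space point (x, y, z) onto the image plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

cube_corner_near = (1.0, 1.0, 10.0)
cube_corner_far = (1.0, 1.0, 40.0)
print(project(cube_corner_near))  # (3.5, 3.5)     -- larger on screen
print(project(cube_corner_far))   # (0.875, 0.875) -- smaller, farther away
```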


3D SCREEN

A 3D screen provides depth perception to the viewer by employing techniques such

as stereoscopic display, multi-view display, 2D-plus-depth, or any other form of 3D display. Most modern

3D television sets use an active shutter 3D system or a polarized 3D system, and some

are autostereoscopic without the need of glasses.

These TV sets are high-end and generally include Ethernet, USB player and recorder, Bluetooth and

USB Wi-Fi.

3D-ready TV sets


3D-ready TV sets are those that can operate in 3D mode (in addition to regular 2D mode) using one of

several display technologies to recreate a stereoscopic image. These TV sets usually support HDMI

1.4 and a minimum output refresh rate of 120 Hz; glasses may be sold separately.

Philips was developing a 3D television set that would be available for the consumer market by about 2011

without the need for special glasses (autostereoscopy).[13] However, it was canceled because of the slow

consumer adoption of 3D.

In August 2010, Toshiba announced plans to bring a range of autostereoscopic TVs to market by the end

of the year.[14]

The Chinese manufacturer TCL Corporation has developed a 42-inch (110 cm) LCD 3D TV called the TD-

42F, which is currently available in China. This model uses a lenticular system and does not require any

special glasses (autostereoscopy). It currently sells for approximately $20,000.[15][16]

Onida, LG, Samsung, Sony, and Philips intend to increase their 3D TV offering with plans to make 3D TV

sales account for over 50% of their respective TV distribution offering by 2012. It is expected that the

screens will use a mixture of technologies until there is standardisation across the industry.[17] Samsung

offers the LED 7000, LCD 750, PDP 7000 TV sets and the Blu-ray 6900.[18]

Full 3D TV sets

Full 3D TV sets include Samsung Full HD 3D (1920x1080p, that is, 2K x 1K; and 600 Hz)

and Panasonic Full HD 3D (1920x1080p, that is, 2K x 1K; and 600 Hz).

A September 2011 Cnet review touted Toshiba's 55ZL2 as "the future of television". Because of the

demanding nature of auto-stereoscopic 3D technology, the display features a 3840x2160 panel;

however, there is no video content available at this resolution. That said, it utilizes a multi-core

processor to provide excellent upscaling to the "4k2k" resolution. Using a directional lenticular lenslet

filter, the display generates nine 3D views. This technology commonly creates deadspots, which Toshiba

avoids by using an eye-tracking camera to adjust the image. The reviewers also note that the 3D

resolution for a 1080p signal looks more like 720p and lacks parallax, which reduces immersion.


WORKING OF 3D


3D Stereoscopic glasses are nothing new. In fact you had them when you were a kid and probably didn't even know it.

In order to see things in 3D each eye must see a slightly different picture. This is done in the real world by your eyes being spaced apart so each eye has its own slightly different view. The brain then puts the two pictures together to form one 3D image that has depth to it.

Anaglyphic - A stereoscopic motion or still picture in which the right component of a composite image usually red in color is superposed on the left component in a contrasting color to produce a three-dimensional effect when viewed through correspondingly colored filters in the form of spectacles.

The mode of 3D presentation you are most familiar with is the paper glasses with red and blue lenses. The technology behind 3D, or stereoscopic, movies is actually pretty simple: it simply recreates the way humans see normally.

Since your eyes are about two inches apart, they see the same picture from slightly different angles. Your brain then correlates these two images in order to gauge distance. This is called binocular vision, and binoculars mimic the process by presenting each eye with a slightly different image. The binocular vision system relies on the fact that our two eyes are spaced about 2 inches (5 centimeters) apart, so each eye sees the world from a slightly different perspective, and the brain uses the difference to calculate distance.

To create a stereoscopic film, two cameras photograph the same scene from slightly different positions, and each of your eyes is shown only one of the resulting images. A 3D film viewed without glasses is a very strange sight and may appear out of focus, fuzzy or out of register, because the same scene is projected simultaneously from two different angles in two different colors, red and cyan (or blue or green). Here's where those glasses come in: the colored filters separate the two images so that each image enters only one eye. Your brain puts the two pictures back together, and now you're dodging a flying meteor.

The reason you wear 3D glasses in a movie theater is to feed a different image into each eye, just as a View-Master does. The screen actually displays two images, and the glasses cause one image to enter one eye and the other image to enter the other eye. There are two common systems for doing this: color filtering (anaglyph) and polarization. The red/green or red/blue system is now mainly used for television 3D effects and was used in many older 3D movies. In this system, two images are displayed on the screen, one in red and the other in blue (or green); the filters on the glasses allow only one image to enter each eye, and your brain does the rest. You cannot really have a full-color movie when you are using color to provide the separation, so the image quality is not nearly as good as with the polarized system.

In old fashioned 3D films, footage for the left eye would be filmed using a red lens filter, producing a red image, and footage for the right eye would be shot using a blue filter, resulting in a blue image. Two projectors then superimposed the images on the cinema screen.


3D glasses with blue and red filters ensured viewers' left and right eyes saw the correct image: the red filter would only let red light through to your left eye, and the blue filter would only let blue light through to your right eye. Your brain would then combine these two slightly different images to create the illusion of 3D. Unfortunately, this meant that old fashioned 3D films couldn't make full use of colour.

To get around this problem, modern 3D films use polarised light instead of red and blue light.
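
Before moving on to polarisation, here is a small NumPy sketch (my own illustration, not from the report) of how a red/cyan anaglyph frame can be composed from left-eye and right-eye images:

```python
# Illustrative sketch of composing a red/cyan anaglyph frame from a left-eye
# and right-eye image using NumPy (random arrays stand in for real footage).

import numpy as np

height, width = 4, 6  # tiny stand-in frames
left = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
right = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)

anaglyph = np.zeros_like(left)
anaglyph[..., 0] = left[..., 0]    # red channel from the left-eye image
anaglyph[..., 1] = right[..., 1]   # green channel from the right-eye image
anaglyph[..., 2] = right[..., 2]   # blue channel (green + blue = cyan)

# A red filter passes only the red channel (left eye); a cyan filter passes
# only green and blue (right eye), so each eye sees its own picture.
```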

What is polarised light?

A polarised light wave vibrates on only one plane. The light produced by the sun is unpolarised, meaning it is made up of light waves vibrating on many different planes. It can however be transformed into polarised light using a polarising filter.

A polarising filter has tiny parallel lines etched into it, a bit like the slats on a set of venetian blinds. This means it will only let light vibrating on a particular plane through.
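
A rough sketch of why a matched filter passes its intended image and blocks the other, using Malus's law (transmitted intensity falls off with the cosine squared of the angle between the light's plane and the filter's plane); note that many modern cinemas actually use circular rather than linear polarisation:

```python
# Illustrative sketch: Malus's law. Light polarised at angle A, passing
# through a filter oriented at angle B, keeps a fraction cos^2(A - B) of its
# intensity.

import math

def transmitted(intensity, light_angle_deg, filter_angle_deg):
    theta = math.radians(light_angle_deg - filter_angle_deg)
    return intensity * math.cos(theta) ** 2

# Left-eye image polarised horizontally (0 deg), right-eye vertically (90 deg).
print(transmitted(1.0, 0, 0))    # 1.0  -> left lens passes the left image
print(transmitted(1.0, 90, 0))   # ~0.0 -> left lens blocks the right image
```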

Analygraph VS Polarised glasses

As with old fashioned 3D, the film is recorded using two camera lenses sat side by side. But in the cinema, the two reels of film are projected through different polarised filters. So images destined for viewers' left eyes are polarised on a horizontal plane, whereas images destined for their right eyes are polarised on a vertical plane. Cinema-goers' glasses use the same polarising filters to separate out the two images again, so that each eye sees a slightly different perspective, fooling the brain into 'seeing' Avatar's planet Pandora as though the viewer were actually there.

In the stone age of the 20th century it was simple: just get a pair of red/blue (cyan, to be exact) glasses that say "geek" better than if you had it tattooed on your forehead. Those are called anaglyph glasses and have the advantage of being the cheapest; you can easily make them from cellophane at home. The trickery relies on the two video streams being edited so that one is shown in red and the other in cyan. The glasses then act as filters: the red lens passes the "red" stream and blocks the cyan one, while the cyan lens feeds only the cyan stream to the respective eye. The rest, as you might have already guessed, is brain work.

But in movie theaters you watch something completely different. While there are still two streams of video, this time they are superimposed onto the screen using different filters, and what you wear are polarized glasses, which have differently polarized lenses.


Usually, there is vertical polarization on one lens, allowing only the vertically polarized content to pass through, and horizontal polarization on the other lens, which in turn passes the horizontally polarized stream. Hence, each eye sees a different picture and we are back to the brain doing the rest. In this case, though, the quality is much better, since no color information is lost as is the case with anaglyph (red/cyan) glasses. While polarized glasses themselves are relatively cheap, your TV needs a screen with a polarized coating that lets each eye see every other line, which comes at a higher cost. The first TVs with this technology have already hit the market.

Anaglyph glasses pros and cons
Pros: Cheap, can be made at home, don't require special equipment
Cons: Some of the colors are lost, the effect is not as immersive

Polarized glasses pros and cons
Pros: Better 3D effect, colors are represented more accurately, relatively affordable
Cons: Moving your head distorts the 3D effect with linear polarization, requires a special setup


4D Technology

4D film or 4-D film is a marketing term for an entertainment presentation system combining a 3D film with

physical effects that occur in the theatre in synchronization with the film. (Note that 4D films are not

actually four-dimensional in the geometric sense of the word.) Because physical effects can be expensive

to install, 4D films are most often presented in custom-built theatres at special venues such as theme

parks and amusement parks. However, some movie theatres have the ability to present 4D versions of

wide-release 3D films. The films Journey to the Center of the Earth (2008), and Avatar (2009) are among

the films that have received a 4D treatment in certain theatres.

Effects simulated in a 4D film may include rain, wind, strobe lights, and vibration. Seats in 4D venues may

vibrate or move a few inches during the presentations. Other common chair effects include air jets, water

sprays, and leg and back ticklers. Hall effects may include smoke, rain, lightning, air bubbles, and special

smells (for example, fireworks smells at the London Eye's Experience, and gassy smells when a stinkbug

sprays it in It's Tough to Be a Bug).

4D films have occasionally been marketed as 5D, 6D, or 7D films in order to emphasize the variety or

uniqueness of their theatre effects. However, there is no consistent standard among films for the

application of these marketing labels.

Notable formats for providing different aspects of a "fourth dimension" to films include Sensurround, Smell-O-Vision and 4DX. Smell-O-Vision was a system that released odors during the projection of a film so that the viewer could "smell" what was happening in the movie, giving a more lifelike experience.
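
As a purely hypothetical illustration of the synchronisation involved, a 4D theatre could drive its physical effects from a cue list keyed to the film's running time; the cue format and effect names below are invented for this sketch:

```python
# Hypothetical sketch of synchronizing physical effects with the film: a cue
# list keyed to the film's running time, polled once per projected frame.

cues = [
    (12.5, "water_spray"),
    (47.0, "seat_vibration"),
    (47.0, "strobe"),
    (93.2, "wind"),
]

def cues_for_frame(frame_number, fps=24.0, window=1.0 / 24.0):
    """Return the effects that should fire during this frame."""
    t = frame_number / fps
    return [effect for start, effect in cues if t <= start < t + window]

print(cues_for_frame(300))   # 300 / 24 = 12.5 s -> ['water_spray']
print(cues_for_frame(301))   # []
```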


5D Technology

5D technology is a predicted future technology that has not yet been introduced.

In this technology, images could be touched and rotated to any angle,

so a picture could be viewed from every side.


SUPER FAST CAMERA

A high-speed camera is a device used for recording fast-moving objects as a photographic image(s)

onto a storage medium. After recording, the images stored on the medium can be played back in slow-

motion. Early high-speed cameras used film to record the high-speed events, but today high-speed

cameras are entirely electronic using either a charge-coupled device (CCD) or a CMOS active pixel

sensor, recording typically over 1,000 frames per second into DRAM and playing images back slowly to

study the motion for scientific study of transient phenomena.[1] A high-speed camera can be classified as

(1) a high-speed film camera that records to film, (2) a high-speed framing camera that records a short

burst of images to film/digital still camera, a high-speed streak camera that records to film/digital memory

or (3) a high-speed video camera recording to digital memory.

A normal motion picture is filmed and played back at 24 frames per second, while television uses 25

frames/s (PAL) or 29.97 frames/s (NTSC). High-speed film cameras can film up to a quarter of a million

frames per second by running the film over a rotating prism or mirror instead of using a shutter, thus

reducing the need for stopping and starting the film behind a shutter which would tear the film stock at

such speeds. Using this technique one can stretch one second to more than ten minutes of playback time

(super slow motion). High-speed video cameras are widely used for scientific research,[2][3] military test

and evaluation,[4] and industry.[5] Examples of industrial applications are filming a manufacturing line to

better tune the machine, or in the car industry the crash testing to better document the crash and what

happens to the automobile and passengers during a crash. Today, the digital high-speed camera has

replaced the film camera used for Vehicle Impact Testing

High-speed cameras are frequently used in television productions of many major sporting events for slow

motion instant replays when normal slow motion is not slow enough, such as international Cricket

matches.
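
The slow-motion arithmetic is straightforward: footage captured at a high frame rate and played back at a normal rate is stretched in time by the ratio of the two rates, as in this small sketch:

```python
# Illustrative sketch of the slow-motion arithmetic: one second of action
# captured at a high frame rate, played back at normal speed.

def playback_seconds(capture_fps, event_seconds=1.0, playback_fps=24):
    frames_recorded = capture_fps * event_seconds
    return frames_recorded / playback_fps

print(playback_seconds(1_000))     # ~41.7 s for one second of action
print(playback_seconds(15_000))    # ~625 s, i.e. over ten minutes
print(playback_seconds(250_000))   # ~10417 s, nearly three hours
```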


The human eye is often said to perceive only about 10 to 12 distinct images per second, and ordinary video cameras record at around 24 to 60 frames per second, so fast action is hard to capture cleanly and tends to look blurred and shaky. A super fast (high-speed) camera records many more frames per second, capturing every bit of the action, and the footage is used to turn fast motion into smooth slow motion.

As can be seen in the image above, every bit of the action is recorded and can be viewed clearly.

This enhances the viewing experience with high-definition slow-motion recording.


Aerial Filming

Aerial Filming is the taking of videos of the ground from an elevated position. The term usually refers to images in which the camera is not supported by a ground-based structure.

Earlier, to do aerial filming the videographer had to go up in a helicopter and record the action. This method was expensive because of fuel costs, and renting a helicopter and hiring a pilot made it more expensive still.

Furthermore, this method could be dangerous because of bad weather or human error.

To eliminate these problems, remotely piloted camera drones were created, which require neither a helicopter nor its fuel.


CAMERA DRONES


A camera drone is a remotely piloted aircraft custom built to provide professional aerial photography, filming and live broadcasting of video.

As a multi-rotor electric unmanned aerial vehicle, it is typically propelled by eight electric brushless DC motors.

These camera drones are controlled by an analog-stick controller to which a smartphone can be attached, running a mobile operating system such as Apple's iOS, Google's Android or Microsoft's Windows.

The smartphone attached to the controller is used as a screen to view the scene.

Using the analog sticks, the camera can be rotated to the desired angle.

The drone can travel long distances with the help of the GPS system, and GPRS can be used as well.
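
As a hypothetical illustration (not any real drone's control API), the mapping from stick deflection to camera gimbal angle can be sketched like this:

```python
# Hypothetical sketch: mapping an analog stick deflection in [-1, 1] to a
# camera gimbal pitch angle, with an "expo" curve for fine control near the
# centre and a hard limit on the angle. Names and numbers are invented.

def stick_to_gimbal_pitch(stick, max_angle_deg=90.0, expo=0.4):
    stick = max(-1.0, min(1.0, stick))               # clamp the input
    shaped = (1 - expo) * stick + expo * stick ** 3  # gentler near the centre
    return shaped * max_angle_deg

for s in (-1.0, -0.5, 0.0, 0.25, 1.0):
    print(s, round(stick_to_gimbal_pitch(s), 1))
```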


CGI VISUAL EFFECTS

In 1987, a movie called RoboCop was released, based on a man turned into a robot (a cyborg). Making the suit for this character consumed a significant portion of the film's budget.

The movie was a hit, but it did not gross as much profit as expected.

For this reason, filmmakers became reluctant to make robot movies because of the cost of building suits.

Later, filmmakers adopted a new technology called CGI (computer-generated imagery) visual effects to solve this problem.


The filmmakers did not invest in building a physical robot suit; instead they used CGI VFX.

Using CGI VFX, the desired robot is placed over the person acting as the robot with the help of uniquely colored markers (tracking bands). These colored markers let the system follow the motion of the performer, and the animated robot is moved to match automatically.
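
A toy sketch of the tracking idea, assuming just two 2D markers; real pipelines track many markers in three dimensions, but the principle of deriving a transform from tracked points and applying it to the CG element is the same:

```python
# Illustrative sketch of marker-based tracking: the tracked positions of two
# colored markers on the performer give a position, rotation and scale that
# can be applied to the CG robot overlay on each frame. (2D toy version.)

import math

def frame_transform(marker_a, marker_b):
    """Translation, rotation and scale implied by two tracked 2D markers."""
    ax, ay = marker_a
    bx, by = marker_b
    dx, dy = bx - ax, by - ay
    rotation = math.atan2(dy, dx)   # radians
    scale = math.hypot(dx, dy)      # distance between the markers
    return (ax, ay), rotation, scale

# Markers tracked on two consecutive frames of footage:
print(frame_transform((100, 200), (150, 200)))  # facing right, 50 px apart
print(frame_transform((102, 198), (150, 215)))  # performer has turned slightly
```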


Original person CG robot

Final shot

This is how it works: the person is transformed into the desired robot using CGI VFX, which also adds the final touches, and this is the robot we see in the movie. The technology gives a convincing robot effect and eliminates the need to build a robotic suit.

This technology can create not only robots but entire imaginary worlds. For example, in the movie The Hobbit, castles and mountains were created with the help of CGI VFX.


RHYTHM AND HUES

VISUAL EFFECTS

Another problem faced by filmmakers is the use of animals in a movie. It is difficult to care for animals on set, and if things do not go well, action may be taken against the filmmakers by animal protection organizations. In addition, many actors have a phobia of animals.

This problem is solved by Rhythm & Hues VFX. A stuffed stand-in is used on set instead of a real animal, and a computer-generated animal is visually placed where the stand-in was. Thus, there are no animal issues.


LOLA VISUAL EFFECTS

BEFORE AFTER

In some movies an actor has to lose weight in order to get into character. For one such role, the actor Christian Bale (as seen in the image above) was required to lose weight. He lost a drastic amount of weight, but he faced many health problems and risked his life. Issues like this are what the techniques of LOLA Visual Effects address.

LOLA VFX can make a person look thinner or stronger without making the actor lose or gain weight.

Lola does employ 3D, but only for reference; central to their work is just

2D compositing. This fact is either a source of amazement or

humiliation, amazement at the quality of the work, and humiliation that

such work has been done with basically the same tools the rest of us use

daily. It is easy to dismiss great work when it is the result of specialist


in-house tools, but quite another when it is apparent that it is just great

artistry.

LOLA Plate Original Plate

For Captain America, Lola worked on over 300 shots, primarily the body transformation of Chris Evans, but also some work on the nose replacement of Red Skull (some 80 shots).

Lola had three primary approaches to shrinking the 220-pound Evans to the 140-pound guy he needed to be, while maintaining Evans' performance as closely as possible:

1. Body double / actor doubling for the entire body. The body double was English Shakespearean-trained stage actor Leander Deeny.

2. Digital head replacement / face projection – similar to the technique Lola used in The Social Network, where the actor is filmed with multiple cameras and this digital file is object-tracked onto the body double's (Deeny's) body. For example, when Rogers is at the recruitment center, standing semi-naked in the queue, about to be rejected near the start of the film. This was only used in about 5% of Lola's shots.

3. Shrink and scale the actor in the principal photography (no greenscreen) – a 2D scale of the actor Chris Evans. This was used in the majority, about 85%, of Lola's effects shots.


The third approach of digitally shrinking the actor is highly detailed. Evans did not have much body hair, but as the skin was shrunk, the granularity of any skin texture needed to be consistent. “It was more of a grain problem than anything else,” says Williams. “The scaled down sections of his body would become sharper and have very little grain. We would shrink him in some parts by as much as 30%. We took a lot of mass off. This meant we would get the skin looking sharper and as if it had no grain so we had to do a de-grain and then an over all re-grain to get the skin to match the rest of his body.”

As standard, one of the first things Lola does is remove and balance out shadows before adding them back in again, or, as Williams explains, "selectively removing them. It is one of the ongoing tricks we deploy. So for example, when he was sitting, his shoulder muscles would be casting a shadow down on his bicep and then at the bottom of his bicep, near his elbow, it would also get very dark, so one of the first things we would do is go through and reduce all those shadow values, before we scaled him down. A skinny guy is not going to have shadows cast down to his belly button from his biceps, because he just doesn't have biceps."

Hands were particularly hard. While a man's body size may vary greatly with muscle mass, both hands and feet are not muscle bound, and so while an arm bicep would be reduced by say 60%, a hand may only be reduced 10% and most of the work would need to go into making the fingers more slender, and not just smaller.
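
A toy NumPy sketch of the two operations described above, a horizontal 2D shrink of a region followed by re-graining so the scaled area matches the surrounding plate; this is only an illustration of the idea, not Lola's actual tools:

```python
# Illustrative sketch (NumPy only): shrink a body region horizontally, then
# add grain back so the scaled area matches the untouched plate. Toy data.

import numpy as np

rng = np.random.default_rng(0)
plate = rng.normal(128, 4, size=(120, 200))   # grainy stand-in plate
region = plate[:, 60:140]                     # "arm" region, 80 px wide

# 1. Shrink the region horizontally by 30% (nearest-neighbour column sampling).
new_width = int(region.shape[1] * 0.7)
cols = np.linspace(0, region.shape[1] - 1, new_width).astype(int)
shrunk = region[:, cols]

# 2. Re-grain: measure the grain level in an untouched part of the plate and
#    add back a fraction of matching noise so the scaled area is not too clean.
plate_grain = np.std(plate[:, :60])
shrunk = shrunk + rng.normal(0, plate_grain * 0.3, size=shrunk.shape)

print(plate.shape, region.shape, shrunk.shape)  # (120, 200) (120, 80) (120, 56)
```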


Shot with actor Chris Evans

Shot with body double Leander Deeny

Final Shot


For each setup there would be three passes shot:

1. Chris Evans acting the scene
2. A body double acting the scene – often just for lighting
3. A clean pass (but not motion control)

In addition to the central task of shrinking Chris Evans, all the surrounding action needed to be correct, including eye lines and props. Here are a number of tricks that were done on set:

- Evans would walk with bent knees, Groucho Marx style, to be lower in shot (although if he was taking more than a couple of steps this was not done, as his walk and posture would be wrong).
- Evans would take shorter steps. The character Steve Rogers needed to vary between 6 ft 4″ and 5 ft 4″, so the smaller Rogers would naturally have a smaller pace. If you tried to scale the walk in post, the feet would appear to slide relative to the ground. "He would seem to moonwalk," joked Williams. Note: even body double Leander Deeny was 5 ft 7″, a full 4 inches taller than 'Skinny Steve'.
- Seats, such as Evans' side of the taxi, would be lowered by several inches so his co-stars would naturally look down at him.
- Shirts and hats were oversized. For example, Evans wore the largest army helmet that could be found, so that when he and the helmet were shrunk digitally the helmet would look the same size as everyone else's but he would appear to barely fit it. Shirt collars were also oversized, so that when Evans was shrunk the shirt would appear normal but too big for him, again making him look frail.
- Evans' co-stars would focus on his chin in shots where they were looking directly at him, so that when he was shrunk their eyelines would line up with his lower-positioned eyes. Evans in turn looked at the brow of his co-stars.
- If possible, production would remove things in front of Evans' face. So when Rogers is crawling through barbed wire during basic training, the filmmakers would shoot the real Chris Evans pass without foreground barbed wire and add it back later, based on a reference pass filmed with the wire in place. This clean pass allowed the slimming-down process to happen without the wire in the way, and the new correct-looking, correctly scaled barbed wire added back on top would sell the illusion.

In scenes where Evans was taking a few steps, the team would have the actor walk bent-kneed, so that his hair was at the correct height, but then the team would need to bring his waist up and digitally straighten his legs as part of the process.


While there was always a clean pass, this was not motion control, so in a moving camera shot – such as Rogers in the army barracks – all that Lola got was what was jokingly referred to as ‘poor man’s motion control’. But in the environment of the barracks, the two plates only roughly lined up. With all the parallax and objects in the scene, background patching and replacement in this scene was some of the hardest that Lola had to do. “The plates were so dissimilar we ended up having to make a 3D background environment for that one,” says Williams. “Overall we must have spent as much time cleaning up the back plates as we did slimming down Chris. Some of the clean plates were crazy. There were crowd scenes, for example at the World Expo registration center, we ended up with about four or five digital doubles.” In this scene Chris Evans needed to walk down some stairs and have people pass him. Walking ‘groucho-style’ is not possible when walking down stairs, so “as people walking behind him started to get close to him they would merge into digi-doubles,” says Williams. “They would then pass him and then fade back into the original performances again as they cleared him.”

Another scene that had very complex background cleanup was the alley fight scene. Nearly all the shots in this fight were a scaled Chris Evans, with the exception of the actual face punch, which was a face projection shot, but the background replacement was vast, as the real Evans covered so much of the frame. Williams and his team digitally recreated the alley from the clean plate, digitally projected it onto matching background geometry and composited it into the hero take, fixing about 25% of the alley.

In all shots Leander Deeny was a lighting and body reference, although, as he was a stage actor rather than a screen actor first and foremost, his style was a little different from Chris Evans'. "He was very dramatic," notes Williams. "If the sun was coming up, he was like 'Look the Sun is coming UP' – so his moves were stagey – very dramatic." A lot of his moves Lola couldn't use as a reference, as they were more of a stage presence, and Chris, by contrast, was more fluid. Says Williams: "Chris was more of a cinematic actor instead of a stage actor, but having Leander's body was always helpful as we could always see his proportions."

LOLA VFX can not only shrink a person; it can also make an older actor look young and a younger actor look old, removing the age barrier from casting.


In some movies an actor has to play a young character, for example a college student. For this type of role he needs to look young, and traditionally extensive make-up is used; applying it takes a lot of time and it does not last long. Moreover, make-up can cause skin problems. In the movie The Curious Case of Benjamin Button, an actor was transformed into many different ages.


The overall process included:

1. Working from life-casts of Brad Pitt and body actors to create three photo-real maquettes representing Benjamin in his 80s, 70s and 60s, then shooting them in different lighting conditions using a light stage.

2. Creating 3D computer scans of each of the three maquettes.

3. Shooting scenes on set with body actors in blue hoods.

4. Creating computer-based lighting to match the on-set lighting for every frame where Benjamin appears.

5. Having Brad perform facial expressions while being volumetrically captured (with Mova/Contour), and creating a library of ‘micro-expressions.’

6. Shooting Brad in high definition performing the role, from four camera angles, and using image analysis technology data to get animation curves and timings.

7. Matching the library of expressions to Brad’s live performance of Benjamin.

8. Re-targeting the performance and expression data to the digital models of Benjamin (created from scanning the maquettes) at the specific age required in the shot

9. Finessing the performance to match current-Brad expressions to old-Benjamin physiology using hand animation.


10. Creating software systems for hair, eyes, skin, teeth, and all elements that make up Benjamin.

11. Creating software to track the exact movements of the body actor and the camera, to integrate the CG head precisely with the body.

12. Compositing all of Benjamin's elements to integrate animation, lighting, and create the final shot.
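
Steps 5 to 8 above amount to blending a library of captured expressions onto a digital head. Here is a toy sketch of that blendshape idea, with tiny vectors standing in for full 3D meshes:

```python
# Illustrative sketch of blendshape-style expression transfer: a captured
# facial pose is reproduced on a digital head by blending pre-built
# expression offsets onto a neutral mesh. Tiny vectors stand in for meshes.

import numpy as np

neutral = np.array([0.0, 0.0, 0.0, 0.0])          # stand-in for a neutral mesh
library = {
    "smile":      np.array([1.0, 0.2, 0.0, 0.0]),
    "brow_raise": np.array([0.0, 0.0, 1.0, 0.3]),
}

def apply_expression(weights):
    """Blend weighted expression offsets onto the neutral mesh."""
    result = neutral.copy()
    for name, w in weights.items():
        result += w * (library[name] - neutral)
    return result

# Weights as might be derived from analysing one frame of the filmed performance:
print(apply_expression({"smile": 0.8, "brow_raise": 0.25}))
```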


CONCLUSION

As we know, even during an economic meltdown the entertainment field stays evergreen, and it is a field in which we engineers can make the most of our skills.

Many of us love watching movies but ignore the fact that we can contribute to them; after all, you can turn your hobby into your work.

India is a country where more than 1000 films are produced every year. But this does not benefit our country as much as it could, because producers rely on foreign technologies. So it is our duty to create new technologies to help our country grow economically. As we all know, ECE is about much more than designing chips and circuits.

“INSTEAD OF THINKING OUT OF THE BOX

GET RID OF THE BOX ”

REFERENCES


www.fxguide.com
www.en.wikipedia.org
www.google.com
www.physics.org
www.howstuffworks.com
www.joblo.com
www.wiki-fx.net
