Teradata Frequently Asked Questions


1. Explain Teradata Architecture.

Major Components of a Teradata Architecture

NODE: A node is made up of various hardware and software components. The components that make up a node are:

1. Parsing Engine (PE)
2. BYNET
3. Access Module Processor (AMP)
4. Disks

Parsing Engine: The Parsing Engine (PE) is a component that interprets SQL requests, receives input records, and passes data. To do that, it sends messages through the BYNET to the AMPs.

BYNET: The BYNET is the message-passing layer. It determines which AMP(s) (Access Module Processors) should receive a message.

Access Module Processor (AMP): The AMP is a virtual processor designed for and dedicated to managing a portion of the entire database. It performs all database management functions such as sorting, aggregating, and formatting data. The AMP receives data from the PE, formats rows, and distributes them to the disk storage units it controls. The AMP also retrieves the rows requested by the Parsing Engine.

Disks: Disks are the disk drives associated with an AMP that store the data rows. On current systems, they are implemented using a disk array.


All applications run under UNIX, Windows NT, or Windows 2000, and all Teradata software runs under the PDE (Parallel Database Extensions). All share the CPU and memory resources of the node. AMPs and PEs are virtual processors running under control of the PDE; their numbers are software-configurable. In addition to user applications, gateway software and channel driver support may also be running.

The Teradata RDBMS has a "shared-nothing" architecture, which means that the vprocs (which are the PEs and AMPs) do not share common components. For example, each AMP manages its own dedicated memory space (taken from the memory pool) and the data on its own vdisk -- these are not shared with other AMPs. Each AMP uses system resources independently of the other AMPs so they can all work in parallel for high system performance overall.

Symmetric Multi-Processor (SMP): A single-node system is a Symmetric Multi-Processor (SMP) system.

Massively Parallel Processing (MPP): When multiple SMP nodes are connected to form a larger configuration, we refer to this as a Massively Parallel Processing (MPP) system.

2. Explain the functionality of each component included in the Teradata architecture.

Parsing Engine:

A Parsing Engine (PE) is a virtual processor (vproc). It is made up of the following software components:
1. Session Control
2. Parser
3. Optimizer
4. Dispatcher


Session Control: The major functions performed by Session Control are logon and logoff. Logon takes a textual request for session authorization, verifies it, and returns a yes or no answer. Logoff terminates any ongoing activity and deletes the session's context.

Parser: The Parser interprets SQL statements, checks them for proper SQL syntax, and evaluates them semantically. The PE also consults the Data Dictionary to ensure that all objects and columns exist and that the user has authority to access these objects.

Optimizer: The Optimizer is responsible for developing the least expensive plan to return the requested response set. Processing alternatives are evaluated and the fastest alternative is chosen. This alternative is converted to executable steps, to be performed by the AMPs, which are then passed to the Dispatcher.

Dispatcher: The Dispatcher controls the sequence in which the steps are executed and passes the steps on to the BYNET. It is composed of execution-control and response-control tasks. Execution control receives the step definitions from the Parser and transmits them to the appropriate AMP(s) for processing, receives status reports from the AMPs as they process the steps, and passes the results on to response control once the AMPs have completed processing. Response control returns the results to the user. The Dispatcher sees that all AMPs have finished a step before the next step is dispatched. Depending on the nature of the SQL request, a step will be sent to one AMP, or broadcast to all AMPs.


The BYNET handles the internal communication of the Teradata RDBMS. All communication between PEs and AMPs is done via the BYNET.

When the PE dispatches the steps for the AMPs to perform, they are dispatched onto the BYNET. The messages are routed to the appropriate AMP(s), where result sets and status information are generated. This response information is also routed back to the requesting PE via the BYNET.

Depending on the nature of the dispatch request, the communication may be a:

• Broadcast: the message is routed to all nodes in the system.
• Point-to-point: the message is routed to one specific node in the system.

Once the message is on a participating node, PDE handles the multicast (it carries the message to just the AMPs that should get it). So, while a Teradata system does do multicast messaging, the BYNET hardware alone cannot do it; the BYNET can only do point-to-point and broadcast between nodes.

FEATURES OF BYNET: The BYNET has several unique features:

Fault tolerant: each network has multiple connection paths. If the BYNET detects an unusable path in either network, it will automatically reconfigure that network so all messages avoid the unusable path. Additionally, in the rare case that BYNET 0 cannot be reconfigured, hardware on BYNET 0 is disabled and messages are re-routed to BYNET 1 (or equally distributed if there are more than two BYNETs present), and vice versa.

Load balanced: traffic is automatically and dynamically distributed between both BYNETs.

Scalable: as you add nodes to the system, overall network bandwidth scales linearly - meaning an increase in system size without loss of performance.

High Performance: an MPP system typically has two or more BYNET networks. Because all networks are active, the system benefits from the full aggregate bandwidth of all networks. Since the number of networks can be scaled, performance can also be scaled to meet the needs of demanding applications. The technology of the BYNET is what makes the Teradata parallelism possible.

The Access Module Processor (AMP)


The Access Module Processor (AMP) is a virtual processor. An AMP will control some portion of each table on the system. AMPs do the physical work associated with generating an answer set, including sorting, aggregating, formatting, and converting. An AMP can control up to 64 physical disks. The AMPs perform all database management functions in the system. An AMP responds to Parser/Optimizer steps transmitted across the BYNET by selecting data from or storing data to its disks. For some requests, the AMPs may redistribute a copy of the data to other AMPs.

The Database Manager subsystem resides on each AMP. The Database Manager:

• Receives the steps from the Dispatcher and processes them. It has the ability to:
  - Lock databases and tables.
  - Create, modify, or delete definitions of tables.
  - Insert, delete, or modify rows within the tables.
  - Retrieve information from definitions and tables.
• Collects accounting statistics, recording accesses by session so users can be billed appropriately.
• Returns responses to the Dispatcher.

The Database Manager provides a bridge between the logical organization of the data and its physical organization on the disks. The Database Manager also performs a space-management function that controls the use and allocation of space.


A disk array is a configuration of disk drives that utilizes specialized controllers to manage and distribute data and parity across the disks while providing fast access and data integrity. Each AMP vproc must have access to an array controller that in turn accesses the physical disks. AMP vprocs are associated with one or more ranks of data. The total disk space associated with an AMP vproc is called a vdisk. A vdisk may have up to three ranks.

Teradata supports several protection schemes:

• RAID Level 5: data and parity protection striped across multiple disks.
• RAID Level 1: each disk has a physical mirror replicating the data.
• RAID Level S: data and parity protection similar to RAID 5, but used for EMC disk arrays.

The disk array controllers are referred to as dual active array controllers, which means that both controllers are actively used in addition to serving as backup for each other.

3. How is Teradata parallel?

Teradata is Parallel for the following reasons:

Each PE can support up to 120 user sessions in parallel. Each session may handle multiple requests concurrently. While only one request at a time may be active on behalf of a session, the session itself can manage the activities of 16 requests and their associated answer sets.

The Message Passing Layer (MPL) is implemented differently for different platforms; this means that it will always be well within the bandwidth needed for each particular platform's maximum throughput.

Each AMP can perform up to 80 tasks in parallel. This means that AMPs are not dedicated at any moment in time to the servicing of only one request, but rather are multi-threading multiple requests concurrently. Because AMPs are designed to operate on only one portion of the database, they must operate in parallel to accomplish their intended results.

In addition to this, the optimizer may direct the AMPs to perform certain steps in parallel if there are no dependencies between the steps. This means that an AMP might be concurrently performing more than one step on behalf of the same request.

Query Parallelism: breaking the request into smaller components, with all components being worked on at the same time and one single answer delivered. Parallel execution can incorporate all or part of the operations within a query, and can significantly reduce the response time of an SQL statement, particularly if the query reads and analyzes a large amount of data.

Query parallelism is enabled in Teradata by hash-partitioning the data across all the VPROCs defined in the system. A VPROC provides all the database services on its allocation of data blocks. All relational operations such as table scans, index scans, projections, selections, joins, aggregations, and sorts execute in parallel across all the VPROCs simultaneously and unconditionally. Each operation is performed on a VPROC's data independently of the data associated with the other VPROCs.

4. Explain the mechanisms of data distribution and data retrieval.

Data Distribution:

Teradata uses hash partitioning and distribution to randomly and evenly distribute data across all AMPs. The rows of every table are distributed among all AMPs, and ideally will be evenly distributed among all AMPs. The rows of all tables are distributed across the AMPs according to their Primary Index value. The Primary Index value goes into the hashing algorithm and the output is a 32-bit Row Hash. The high-order 16 bits are referred to as the 'bucket number' and are used to identify a hash map entry. The 'hash bucket' is also referred to as the DSW (Destination Selection Word). This entry, in turn, is used to identify the AMP that will be targeted. The remaining 16 bits are not used to locate the AMP. Each hash map is simply an array that associates DSW values (or bucket numbers) with specific AMPs.
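This hashing path can be observed directly in SQL through Teradata's built-in hash functions; a minimal sketch (the table and column names are hypothetical):

SELECT HASHROW (cust_id) AS row_hash                         /* 32-bit row hash */
     , HASHBUCKET (HASHROW (cust_id)) AS dsw                 /* bucket number / DSW */
     , HASHAMP (HASHBUCKET (HASHROW (cust_id))) AS target_amp /* AMP from the hash map */
FROM Customer_Table;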

To locate a row, the AMP file system searches through a memory-resident structure called the Master Index. An entry in the Master Index will indicate that if a row with this Table ID and row hash exists, then it must be on a specific disk cylinder. The file system will then search through the designated Cylinder Index. There it will find an entry that indicates that if a row with this Table ID and row hash exists, it must be in one specific data block on that cylinder. The file system then searches the data block until it locates the row(s) or returns a No Rows Found condition code.


Data Retrieval: retrieving data from the Teradata RDBMS simply reverses the storage-model process. A request made for data is passed on to a Parsing Engine (PE). The PE optimizes the request for efficient processing and creates tasks for the AMPs to perform, which results in the request being satisfied. Tasks are then dispatched to the AMPs via the BYNET. Often, all AMPs must participate in creating the answer set, such as returning all rows of a table to a client application. Other times, only one or a few AMPs need participate, and the PE will ensure that only the AMPs that need to are assigned tasks. Once the AMPs have been given their assignments, they retrieve the desired rows from their respective disks. The AMPs will sort, aggregate, or format the rows if needed. The rows are then returned to the requesting PE via the BYNET, and the PE passes the returned answer set back to the requesting client application.

5. If a PI is not defined on a Teradata table, what will happen?

Teradata tables must have a primary index. If none is specified while creating the table, Teradata supplies an automatically created one. This is the situation prior to 13.0.

From 13.0 onwards, Teradata creates a NoPI table (No Primary Index). Without a PI, the hash value as well as the AMP ownership of a row is arbitrary. Within the AMP, there are no row-ordering constraints, so rows can be appended to the end of the table as if it were a spool table. Each row in a NoPI table has an internally generated hash bucket value. A NoPI table is internally treated as a hashed table; it is just that typically all the rows on one AMP will have the same hash bucket value.

6. What are the types of indexes in Teradata?

Unique Primary Index (UPI)
Non-Unique Primary Index (NUPI)
Unique Secondary Index (USI)
Non-Unique Secondary Index (NUSI)
Join Index

7. What is a secondary index? What are its uses?

A secondary index is an alternate path to the data. Secondary indexes are used to improve performance by allowing the user to avoid scanning the entire table during a query. A secondary index is like a primary index in that it allows the user to locate rows; unlike a primary index, it has no influence on the way rows are distributed among AMPs. Secondary indexes are optional and can be created and dropped dynamically. Secondary indexes require separate subtables, which require extra I/O to maintain the indexes.
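Creating and dropping them dynamically looks like this; a minimal sketch (table and column names hypothetical):

CREATE UNIQUE INDEX (emp_ssn) ON Employee_Table;   /* USI */
CREATE INDEX (dept_no) ON Employee_Table;          /* NUSI */
DROP INDEX (dept_no) ON Employee_Table;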

Compared to primary indexes, secondary indexes allow access to information in a table by alternate, less frequently used paths. Teradata automatically creates a Secondary Index Subtable. The subtable will contain:

• Secondary Index Value
• Secondary Index Row ID
• Primary Index Row ID


When a user writes an SQL query that has a SI in the WHERE clause, the Parsing Engine will hash the Secondary Index Value. The output is the Row Hash of the SI. The PE creates a request containing the Row Hash and gives the request to the Message Passing Layer (which includes the BYNET software and network). The Message Passing Layer uses a portion of the Row Hash to point to a bucket in the Hash Map. That bucket contains an AMP number to which the PE's request will be sent. The AMP gets the request and accesses the Secondary Index Subtable pertaining to the requested SI information. The AMP will check to see if the Row Hash exists in the subtable and double check the subtable row with the actual secondary index value. Then, the AMP will create a request containing the Primary Index Row ID and send it back to the Message Passing Layer. This request is directed to the AMP with the base table row, and the AMP easily retrieves the data row.

Secondary indexes can be useful for:

• Satisfying complex conditions
• Processing aggregates
• Value comparisons
• Matching character combinations
• Joining tables

8. Why is a secondary index needed?

Secondary indexes are used to improve performance by allowing the user to avoid scanning the entire table during a query. Secondary indexes are frequently used in the WHERE clause. The base table data are not redistributed when secondary indexes are defined.


9. What are the different types of locks in Teradata?

Locking prevents multiple users who are trying to change the same data at the same time from violating the data's integrity. Locks are automatically acquired during the processing of a request and released at the termination of the request.

There are four types of locks:

Exclusive Lock: Exclusive locks are only applied to databases or tables, never to rows. They are the most restrictive type of lock; all other users are locked out. Exclusive locks are used rarely, most often when structural changes are being made to the database.

Read Lock: Read locks are used to ensure consistency during read operations. Several users may hold concurrent read locks on the same data, during which no modification of the data is permitted.

Write Lock: Write locks enable users to modify data while locking out all other users except readers not concerned about data consistency (Access lock readers). Until a Write lock is released, no new read or write locks are allowed.

Access Lock: Access locks can be specified by users who are not concerned about data consistency. The use of an access lock allows for reading data while modifications are in process. Access locks are designed for decision support on large tables that are updated only by small single-row changes. Access locks are sometimes called 'stale read' locks, i.e. you may get stale data that has not been updated.

Locks may be applied at three levels:

• Database: applies to all tables/views in the database
• Table/View: applies to all rows in the table/view
• Row Hash: applies to all rows with the same row hash

Lock types are automatically applied based on the SQL command:

• SELECT applies a Read lock
• UPDATE applies a Write lock
• CREATE TABLE applies an Exclusive lock


10. When is an ACCESS lock used?

Access locks are used for quick access to tables in a multi-user environment, even if other requests are updating the data. They also have minimal effect on locking out others: when you use an access lock, virtually all requests are compatible with yours.
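An access lock can be requested explicitly with the LOCKING modifier; a minimal sketch (table and column names hypothetical):

LOCKING TABLE Sales_Table FOR ACCESS
SELECT store_id, SUM(amount)
FROM Sales_Table
GROUP BY store_id;   /* reads through concurrent writes: a 'stale read' */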

11. How do you set the default database?

Setting the default database: the user name you log on with is your default database. For example, if you log on as .logon abc; with password xyz, then abc is normally the default database.

Queries you make that do not specify a database name will be made against your default database.

Changing the default database: the DATABASE command is used to change the default database. For example:

DATABASE birla;

sets your default database to birla, and subsequent queries are made against the birla database.

12. What is a cluster?

A cluster is a group of AMPs that act as a single fallback unit. Clustering has no effect on primary row distribution of the table, but the fallback row copy will always go to another AMP in the same cluster. Should an AMP fail, the primary and fallback row copies stored on that AMP cannot be accessed; however, their alternate copies are available through the other AMPs in the same cluster. The loss of an AMP in one cluster has no effect upon other clusters. It is possible to lose one AMP in each cluster and still have full access to all fallback-protected table data. If there are two AMP failures in the same cluster, the entire Teradata system halts. While an AMP is down, the remaining AMPs in the cluster must do their own work plus the work of the down AMP.

For example, an 8-AMP system might be set up as two clusters of four AMPs each.

13. What are the connections involved in a channel-attached system?

In channel-attached systems, there are three major software components which play important roles in getting the requests to and from the Teradata RDBMS.


The client application is either written by a programmer or is one of Teradata's provided utility programs. Many client applications are written as 'front ends' for SQL submission, but they also are written for file maintenance and report generation. Any client-supported language may be used, provided it can interface to the Call Level Interface (CLI).

The Call Level Interface (CLI) is the lowest level interface to the Teradata RDBMS. It consists of system calls which create sessions, allocate request and response buffers, create and de-block “parcels” of information, and fetch response information to the requesting client.

The Teradata Director Program (TDP) is a Teradata-supplied program that must run on any client system that will be channel-attached to the Teradata RDBMS. The TDP manages the session traffic between the Call-Level Interface and the RDBMS. Its functions include session initiation and termination, logging, verification, recovery, and restart, as well as physical input to and output from the PEs, (including session balancing) and the maintenance of queues. The TDP may also handle system security.

The Host Channel Adapter is a mainframe hardware component that allows the mainframe to connect to an ESCON or Bus/Tag channel. The PBSA (PCI Bus ESCON Adapter) is a PCI adapter card that allows a WorldMark server to connect to an ESCON channel. The PBCA (PCI Bus Channel Adapter) is a PCI adapter card that allows a WorldMark server to connect to a Bus/Tag channel.

14. What are the connections involved in a network-attached system?

In network-attached systems, there are four major software components that play important roles in getting the requests to and from the Teradata RDBMS.

The Call Level Interface (CLI) is a library of routines that resides on the client side. Client application programs use these routines to perform operations such as logging on and off, submitting SQL queries, and receiving responses which contain the answer set. These routines are 98% the same in a network-attached environment as they are in a channel-attached one.

The Teradata ODBC (Open Database Connectivity) driver uses an open, standards-based ODBC interface to provide client applications access to Teradata across LAN-based environments. NCR has ODBC drivers for both UNIX and Windows-based applications.

The Micro Teradata Director Program (MTDP) is a Teradata-supplied program that must be linked to any application that will be network-attached to the Teradata RDBMS. The MTDP performs many of the functions of the channel based TDP including session management. The MTDP does not control session balancing across PEs. Connect and Assign Servers that run on the Teradata system handle this activity.

The Micro Operating System Interface (MOSI) is a library of routines providing operating system independence for clients accessing the RDBMS. By using MOSI, we only need one version of the MTDP to run on all network-attached platforms.

15. How do you replace a null value with a default value while loading?

Use the COALESCE function. Syntax: COALESCE(col, 'DEFAULT')

16. What is COMPRESS?

The COMPRESS clause works in two different ways: when issued by itself, COMPRESS causes all NULL values for that column to be compressed to zero space. When issued with an argument (e.g., COMPRESS 'constant'), the COMPRESS clause will compress every occurrence of 'constant' in that column to zero space, as well as cause every NULL value to be compressed.
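A minimal sketch of both forms in DDL (table, column names, and compressed values hypothetical):

CREATE TABLE Employee_Table
( emp_no      INTEGER
, middle_name CHAR(20) COMPRESS                  /* compresses NULLs only */
, city        CHAR(20) COMPRESS ('Chicago','LA') /* compresses NULLs plus these values */
)
UNIQUE PRIMARY INDEX (emp_no);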

17. How many values can we compress in Teradata?

Up to 255 values (plus NULL) can be compressed per column. Only fixed width columns can be compressed. Primary index columns cannot be compressed.

18. What is the difference between a volatile table and a global temporary table?

Global Temporary Tables (GTT):
1. When created, the table definition goes into the Data Dictionary.
2. When materialized, the data goes into temp space.
3. Because of this, the data stays active only until the session ends, while the definition remains until it is dropped with a DROP TABLE statement (if dropping from another session, it should be DROP TABLE ... ALL).
4. You can collect stats on a GTT.

Volatile Temporary Tables (VTT):
1. The table definition is stored in the system cache.
2. The data is stored in spool space.
3. Because of this, both the data and the table definition are active only until the session ends.
4. No collect stats for a VTT.
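A minimal sketch of both DDL forms (table and column names hypothetical):

CREATE GLOBAL TEMPORARY TABLE gtt_sales
( store_id INTEGER
, amount   DECIMAL(10,2) )
PRIMARY INDEX (store_id)
ON COMMIT PRESERVE ROWS;   /* definition persists in the Data Dictionary */

CREATE VOLATILE TABLE vt_sales
( store_id INTEGER
, amount   DECIMAL(10,2) )
PRIMARY INDEX (store_id)
ON COMMIT PRESERVE ROWS;   /* definition and rows disappear at session end */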

19. What is the difference between a PK and a PI?

Primary Key: a relational concept used to determine relationships among entities and to define referential constraints. It is not required unless referential integrity checks are to be performed. Defined in the CREATE TABLE statement. Always unique: it identifies each row uniquely. Its value cannot be changed and cannot be null. It is not related to the access path.

Primary Index: used to distribute and store rows on disk. Defined in the CREATE TABLE statement. Unique or non-unique. Values can be changed and can be null. It is related to the access path.

20. What is multiple statement processing?

Multiple statement processing increases performance when loading into large tables. All statements are sent to the Parser simultaneously, and all statements are executed in parallel.
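In BTEQ, placing the semicolon at the start of the next line keeps the statements in one multi-statement request; a minimal sketch (table name and values hypothetical):

INSERT INTO Sales_Table VALUES (1001, 250.00)
;INSERT INTO Sales_Table VALUES (1002, 175.50)
;INSERT INTO Sales_Table VALUES (1003, 300.25);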

21. What is a TDPID?

The TDPID identifies the Teradata system to connect to; in practice it is the host name or IP address of the Teradata server machine.

22. What is TENACITY?

TENACITY specifies the number of hours that Teradata FastLoad keeps trying to log on when the maximum number of load jobs is already running on the Teradata database.

23. What is SLEEP?

SLEEP specifies the number of minutes that Teradata FastLoad pauses before retrying a logon operation.

24. What is database skewing?

Skew occurs when the primary index column selected is not a good candidate; that is, if the PI selected for a table has highly non-unique values, the table will get a high skew factor. Ideally the skew factor will be near zero; a skew factor greater than 25 is not a good sign.

25. What are soft referential integrity and batch referential integrity?

Soft Referential Integrity:
• Does not test for referential integrity.
• Enables optimization techniques such as join elimination.

Batch Referential Integrity: tests an entire insert, delete, or update batch operation for referential integrity. If the insertion, deletion, or update of any row in the batch violates referential integrity, the Parsing Engine software rolls back the entire batch and returns an abort message.

26. Difference between MLOAD and FLOAD

MLOAD:

MultiLoad does the loading in 5 phases:
Phase 1: it reads the import file and checks the script.
Phase 2: it reads the records and stores them in the work tables.
Phase 3: in this application phase, it locks the table headers.
Phase 4: the DML operations are applied to the tables.
Phase 5: the table locks are released and the work tables are dropped.

MultiLoad allows non-unique secondary indexes and automatically rebuilds them after loading.

MultiLoad can load at most 5 tables at a time and can also update and delete data.

FastLoad:

FastLoad performs the loading of the data in 2 phases, and it needs no work table for loading the data, so it is faster. It follows these steps to load the data into the table:
Phase 1: it moves all the records to the AMPs first, without any hashing.
Phase 2: after the END LOADING command is given, the AMPs hash the records and send them to the appropriate AMPs.

FastLoad is used to load empty tables and is very fast; it can load only one table at a time.
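A minimal sketch of a FastLoad script (logon string, file, table, and error-table names hypothetical):

SESSIONS 4;
LOGON tdpid/user,password;
SET RECORD VARTEXT ",";
DEFINE in_emp  (VARCHAR(10))
     , in_name (VARCHAR(30))
FILE = employee.txt;
BEGIN LOADING Employee_Stage ERRORFILES Err1, Err2;
INSERT INTO Employee_Stage VALUES (:in_emp, :in_name);
END LOADING;
LOGOFF;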

27. What are the advantages of a PPI?

PPI: Partitioned Primary Index.

When a table is partitioned on a column, queries that constrain on the partitioning column only need to scan the qualifying partitions rather than the whole table, so the scan is faster, especially when the PI value is supplied as well.
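A minimal sketch of PPI DDL (table, columns, and date range hypothetical):

CREATE TABLE Orders
( order_id   INTEGER
, order_date DATE
, amount     DECIMAL(10,2) )
PRIMARY INDEX (order_id)
PARTITION BY RANGE_N (order_date BETWEEN DATE '2016-01-01'
                      AND DATE '2016-12-31'
                      EACH INTERVAL '1' MONTH);

A query with a WHERE condition on order_date then scans only the qualifying month partitions.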

28. What are the disadvantages of a PPI?

If rows are inserted whose partitioning values do not map to a declared partition, or if queries do not constrain on the partitioning column, the partitioning is wasted; access via the primary index alone must probe every partition. In such cases a secondary index may give better performance.

29. Explain Teradata joins.

Join Processing

A join is the combination of two or more tables in the same FROM clause of a single SELECT statement. When writing a join, the key is to locate a column in both tables that is from a common domain. Like the correlated subquery, joins are normally based on an equal comparison between the join columns.

The following is the original join syntax for a two-table join:

SELECT [<table-name>.]<column-name>

[,<table-name>.<column-name> ]

FROM <table-name1> [ AS <alias-name1> ] ,<table-name2> [ AS <alias-name2> ]

[ WHERE [<table-name1>.]<column-name>= [<table-name2>.]<column-name> ]

The JOIN keyword is used in an SQL statement to query data from two or more tables, based on a relationship between certain columns in these tables.

Common join types in Teradata:
1. Self Join
2. Inner Join
3. Outer Join

The three formats of an OUTER JOIN are:

Left_table LEFT OUTER JOIN Right_table -left table is outer table


Left_table RIGHT OUTER JOIN Right_table -right table is outer table

Left_table FULL OUTER JOIN Right_table -both are outer tables

Self Join

A Self Join is simply a join that uses the same table more than once in a single join operation. The first requirement for this type of join is that the table must contain two different columns of the same domain. This may involve de-normalized tables.

For instance, if the Employee table contained a column for the manager's employee number and the manager is an employee, these two columns have the same domain. By joining on these two columns in the Employee table, the managers can be joined to the employees.

Example:

SELECT Mgr.Last_name (TITLE 'Manager Name', FORMAT 'X(10)')

,Department_name (Title 'For Department ')

FROM Employee_table AS Emp INNER JOIN Employee_table AS Mgr ON Emp.Manager_Emp_ID = Mgr.Employee_Number

INNER JOIN Department_table AS Dept

ON Emp.Department_number = Dept.Department_number

ORDER BY 2 ;

INNER JOIN:

The INNER JOIN keyword returns rows when there is at least one match in both tables.

INNER JOIN Syntax:

SELECT column_name(s) FROM table_name1 INNER JOIN table_name2

ON table_name1.column_name=table_name2.column_name

LEFT OUTER JOIN

The LEFT OUTER JOIN keyword returns all rows from the left table (table_name1), even if there are no matches in the right table (table_name2).

LEFT OUTER JOIN Syntax:

SELECT column_name(s) FROM table_name1 LEFT OUTER JOIN table_name2

ON table_name1.column_name=table_name2.column_name

RIGHT OUTER JOIN:

The RIGHT OUTER JOIN keyword returns all rows from the right table (table_name2), even if there are no matches in the left table (table_name1).


RIGHT OUTER JOIN Syntax:

SELECT column_name(s) FROM table_name1 RIGHT OUTER JOIN table_name2

ON table_name1.column_name=table_name2.column_name

FULL OUTER JOIN:

The FULL OUTER JOIN keyword returns rows when there is a match in either of the tables.

FULL OUTER JOIN Syntax:

SELECT column_name(s) FROM table_name1 FULL OUTER JOIN table_name2

ON table_name1.column_name=table_name2.column_name

A FULL OUTER JOIN uses both of the tables as outer tables. The non-matching rows are returned from both tables, and the missing column values from either table are extended with NULL.

Product Join

It is very important to use an equality condition in the WHERE clause; otherwise you get a product join, in which one row of a table is joined to multiple rows of another table. The name comes from mathematics: the number of result rows is the product (multiplication) of the participating row counts.

30. What is the difference between a primary index and a secondary index?

1. A primary index cannot be created after table creation, whereas a secondary index can be created and dropped dynamically.
2. Primary index access is a 1-AMP operation; unique secondary index access is a 2-AMP operation; non-unique secondary index access is an all-AMP operation.

31. What are journals?

Journaling is a data protection mechanism in Teradata. Journals are generated to maintain before-images and after-images of a DML transaction starting/ending at/from a checkpoint. When a DML transaction fails, the table is restored back to the last available checkpoint using the journal images.

There are two types of journals: (1) the permanent journal and (2) the transient journal.

The purpose of the permanent journal is to provide selective or full database recovery to a specified point in time. It permits recovery from unexpected hardware or software disasters. The permanent journal also reduces the need for full table backups that can be costly in both time and resources.

1. Permanent Journal

Permanent journals are explicitly created at database and/or table creation time. This journaling can be implemented depending upon the need and available disk space. PJ processing is a user-selectable option on a database which allows the user to select extra journaling for changes made to a table. There are more options, and the data can be rolled forward or backward (depending on the selected options) at points of the customer's choosing. They are permanent because the changes are kept until the customer deletes them or unloads them to a backup tape. They are usually kept in conjunction with backups of the database and allow partial rollback or roll-forward for corrupted data or an operational error, such as someone deleting a month's worth of data because they messed up the WHERE clause.

2. Transient Journal

The transient journal permits the successful rollback of a failed transaction (TXN). Transactions are not committed to the database until the AMPs have received an End Transaction request, either implicitly or explicitly. There is always the possibility that the transaction may fail. If so, the participating table(s) must be restored to their pre-transaction state.

The transient journal maintains a copy of before images of all rows affected by the transaction. In the event of transaction failure, the before images are reapplied to the affected tables, then are deleted from the journal, and a rollback operation is completed. In the event of transaction success, the before images for the transaction are discarded from the journal at the point of transaction commit.

Transient journal activities are automatic and transparent to the user.

32. Give an example of a Teradata FastExport script.

.LOGTABLE RestartLog1_fxp;

.RUN FILE logon ;

.BEGIN EXPORT SESSIONS 4 ;

.LAYOUT Record_Layout ;

.FIELD in_City 1 CHAR(20) ;

.FIELD in_Zip * CHAR(5);

.IMPORT INFILE city_zip_infile LAYOUT Record_Layout ;

.EXPORT OUTFILE cust_acct_outfile2 ;

SELECT A.Account_Number
     , C.Last_Name
     , C.First_Name
     , A.Balance_Current
FROM Accounts A
INNER JOIN Accounts_Customer AC
   ON A.Account_Number = AC.Account_Number
INNER JOIN Customer C
   ON C.Customer_Number = AC.Customer_Number
WHERE A.City = :in_City
  AND A.Zip_Code = :in_Zip
ORDER BY 1 ;

.END EXPORT ;

.LOGOFF ;

33. Explain Teradata statistics.

Statistics collection is essential for the optimal performance of the Teradata query optimizer. The query optimizer relies on statistics to help it determine the best way to access data. Statistics also help the optimizer ascertain how many rows exist in the tables being queried and predict how many rows will qualify for given conditions. A lack of statistics, or outdated statistics, might result in the optimizer choosing a less-than-optimal method for accessing data tables.

Points:

1. Once COLLECT STATISTICS has been run on a table (on an index or column), where is this information stored so that the optimizer can refer to it?

Ans: Collected statistics are stored in DBC.TVFields or DBC.Indexes. However, you cannot query these two tables.
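A minimal sketch of collecting and refreshing stats (table, column, and index names hypothetical):

COLLECT STATISTICS ON Employee_Table COLUMN (dept_no);
COLLECT STATISTICS ON Employee_Table INDEX (emp_no);
COLLECT STATISTICS ON Employee_Table;   /* refreshes all stats previously defined on the table */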

2. How often do stats have to be collected for a table that is frequently updated?

Ans: You need to refresh stats when 5 to 10% of the table's rows have changed. Collecting stats can be quite resource-consuming for large tables, so it is advisable to schedule the job at an off-peak period, normally after approximately 10% of the data has changed.

3. Once stats have been collected on a table, how can I be sure that the optimizer is considering them before execution? That is, will the optimizer keep referring to them until the next collect?

Ans: Yes, the optimizer will use the stats data for the query execution plan if it is available. That is why stale stats are dangerous: they may mislead the optimizer.

4. How can I know the tables for which stats have been collected?

Ans: Run the HELP STATISTICS command on the table, e.g. HELP STATISTICS table_name; This gives the date and time when stats were last collected, along with the stats for the columns for which stats were defined. You can use Teradata Manager too.

5. To what extent will there be performance issues when stats are not collected? Can a performance issue be attributed only to missing stats? Perhaps a hot AMP could be the reason for the lack of spool space that is leading to performance degradation?

Ans (1st part): Teradata uses a cost-based optimizer, and cost estimates are made from statistics. If you have no collected statistics, the optimizer uses a Dynamic AMP Sampling method to estimate them. If your table is big and the data is unevenly distributed, dynamic sampling may not get the right information, and your performance will suffer.

Ans (2nd part): No, performance can also be related to a bad choice of indexes (most importantly the PI) and the access path of a particular query.

6. What can lead to a lack of spool space apart from a hot AMP?

Ans: One reason that comes to mind: a product join on two big data sets may exhaust spool space.

34. Where will you define error tables in the script?

In FLOAD and MLOAD, error tables are defined in the BEGIN LOADING statement.

35. I have to load data daily. Which load utility would be good? TPump.

36. What are the different spaces available in Teradata?

Perm Space, Spool Space, and Temp Space.

Perm Space: all databases have a defined upper limit of permanent space. Permanent space is used for storing the data rows of tables. Perm space is not pre-allocated; it represents a maximum limit.

Spool Space: all databases also have an upper limit of spool space. If there is no limit defined for a particular database or user, limits are inherited from parents. Theoretically, a user could use all unallocated space in the system for their query. Spool space is temporary space used to hold intermediate query results or formatted answer sets to queries. Once the query is complete, the spool space is released.

Example: You have a database with total disk space of 100GB. You have 10GB of user data and an additional 10GB of overhead. What is the maximum amount of spool space available for queries?

Answer: 80GB. All of the remaining space in the system is available for spool.

Temp Space: the third type of space is temporary space. Temp space is used for global temporary tables, and these results remain available to the user until the session is terminated. Tables created in temp space will survive a restart.

37. What are the different options we can specify in a CREATE TABLE statement?

There are two different table-type philosophies, so there are two different table types: SET and MULTISET. It has been said, 'A man with one watch knows the time, but a man with two watches is never sure.' When Teradata was originally designed, it did not allow duplicate rows in a table. If any row in the same table had the same values in every column, Teradata would throw one of the rows out. They believed a second row was a mistake. Why would someone need two watches, and why would someone need two rows exactly the same? This is SET theory, and a SET table kicks out duplicate rows. The ANSI standard took a different philosophy: if two rows entered into a table are exact duplicates, that is acceptable. If a person wants to wear two watches, they probably have a good reason. This is a MULTISET table, and duplicate rows are allowed. If you do not specify SET or MULTISET, one is used as a default. Here is the issue: the default in Teradata mode is SET, and the default in ANSI mode is MULTISET.

Therefore, to eliminate confusion it is important to explicitly define which one is desired. Otherwise, you must know in which mode the CREATE TABLE will execute so that the correct type is used for each table. The implications of using a SET or MULTISET table are discussed further below.

SET and MULTISET Tables

A SET table does not allow duplicate rows so Teradata checks to ensure that no two rows in a table are exactly the same. This can be a burden. One way around the duplicate row check is to have a column in the table defined as UNIQUE. This could be a Unique Primary Index (UPI), Unique Secondary Index (USI) or even a column with a UNIQUE or PRIMARY KEY constraint. Since all must be unique, a duplicate row may never exist. Therefore, the check on either the index or constraint eliminates the need for the row to be examined for uniqueness. As a result, inserting new rows can be much faster by eliminating the duplicate row check.

However, if the table is defined with a NUPI and the table uses SET as the table type, a duplicate row check must be performed. Since SET tables do not allow duplicate rows, a check must be performed every time a NUPI DUP (duplicate of an existing row NUPI value) value is inserted or updated in the table. Do not be fooled! A duplicate row check can be a very expensive operation in terms of processing time, because every new row inserted must be checked against every existing row with the same NUPI Row Hash value. The cost of these checks grows quickly as more rows with the same NUPI value are added to the table.

What is the solution? There are two: either make the table a MULTISET table (only if you want duplicate rows to be possible) or define at least one column or composite columns as UNIQUE. If neither is an option then the SET table with no unique columns will work, but inserts and updates will take more time because of the mandatory duplicate row check.

Below is an example of creating a SET table:

CREATE SET TABLE TomC.employee

( emp INTEGER

,dept INTEGER

,lname CHAR(20)

,fname VARCHAR(20)

,salary DECIMAL(10,2)

,hire_date DATE )

UNIQUE PRIMARY INDEX(emp);

Notice the UNIQUE PRIMARY INDEX on the column emp. Because this is a SET table, it is much more efficient to have at least one unique key so the duplicate row check is eliminated.

The following is an example of creating the same table as before, but this time as a MULTISET table:

CREATE MULTISET TABLE employee

( emp INTEGER

,dept INTEGER

,lname CHAR(20)

,fname VARCHAR(20)

,salary DECIMAL(10,2)

,hire_date DATE )

PRIMARY INDEX(emp);

Notice also that the PI is now a NUPI because it does not use the word UNIQUE. This is important! As mentioned previously, if a UPI is requested, no duplicate rows can be inserted, so the table acts more like a SET table. This MULTISET example allows duplicate rows, and inserts avoid the mandatory duplicate row check.

38. What is a macro? What are its advantages?

A macro is a predefined, stored set of one or more SQL commands and report-formatting commands. Macros are used to simplify the execution of frequently used SQL commands. Macros do not require permanent space.
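A minimal sketch of creating and executing a macro (macro, table, and column names hypothetical):

CREATE MACRO Get_Dept_Emps (dept INTEGER) AS
( SELECT emp_no, last_name
  FROM Employee_Table
  WHERE dept_no = :dept; );

EXEC Get_Dept_Emps (402);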

39. What are the functions of AMPs in Teradata?

Each AMP is designed to hold a portion of the rows of each table. An AMP is responsible for the storage, maintenance, and retrieval of the data under its control. Teradata uses hash partitioning to randomly and evenly distribute data across all AMPs for balanced performance.


40. How does Teradata store rows?

• Teradata uses hash partitioning and distribution to randomly and evenly distribute data across all AMPs.
• The rows of every table are distributed among all AMPs, and ideally will be evenly distributed among all AMPs.
• Each AMP is responsible for a subset of the rows of each table.
• Evenly distributed tables result in evenly distributed workloads.

41. What takes care of things when an AMP goes down?

1. The down-AMP recovery journal starts when an AMP goes down, to restore the data for the down AMP when it returns.
2. Fallback provides redundant data: if one AMP in the cluster goes down, your queries are not affected; they use the data from the fallback rows while the down AMP cannot be updated.

For example, if you run an update while an AMP is down, the fallback rows are updated. If you then run a query while the AMP is still down, it uses the updated fallback rows. Whenever the down AMP becomes active again, the down-AMP recovery journal is used to bring its data up to date.

42. What takes care of things when a node goes down?

In the event of node failure, all virtual processors can migrate to another available node in the clique. All nodes in the clique must have access to the same disk arrays.

43. What is the use of the EXPLAIN plan?

The EXPLAIN facility allows you to preview how Teradata will execute a requested query. It returns a summary of the steps the Teradata RDBMS would perform to execute the request. EXPLAIN also discloses the strategy and access method to be used, how many rows will be involved, and the cost in minutes and seconds. Use EXPLAIN to evaluate a query's performance and to develop an alternative processing strategy that may be more efficient. EXPLAIN works on any SQL request. The request is fully parsed and optimized, but not run. The complete plan is returned to the user in readable English statements.
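Using it is just a prefix on any request; a minimal sketch (table and column names hypothetical):

EXPLAIN
SELECT last_name
FROM Employee_Table
WHERE dept_no = 402;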

EXPLAIN provides information about locking, sorting, row selection criteria, join strategy and conditions, access method, and parallel step processing. EXPLAIN is useful for performance tuning, debugging, pre-validation of requests, and for technical training.

44. What is the use of the COALESCE function?

The newer ANSI standard COALESCE can also convert a NULL to a zero. However, it can convert a NULL value to any data value as well. COALESCE searches a value list, ranging from one to many values, and returns the first non-NULL value it finds. At the same time, it returns a NULL if all values in the list are NULL.

To use COALESCE, the SQL must pass the name of a column to the function. The data in the column is then compared for a NULL. Although one column name is all that is required, normally more than one column is passed to it. Additionally, a literal value, which is never NULL, can be returned to provide a default value if all of the previous column values are NULL.

The syntax for the COALESCE follows:

SELECT COALESCE (<column-list> [,<literal> ] )

,<Aggregate>( COALESCE(<column-list>[,<literal>] ) )

FROM <table-name>

GROUP BY 1 ;

In the above syntax the <column-list> is a list of columns. It is written as a series of column names separated by commas.

SELECT COALESCE(NULL,0) AS Col1

,COALESCE(NULL,NULL,NULL) AS Col2

,COALESCE(3) AS Col3
,COALESCE('A',3) AS Col4 ;

45. What is the difference between a role, a privilege, and a profile?

A role can be assigned a collection of access rights in the same way a user can.

You then grant the role to a set of users, rather than grant each user the same rights.

This cuts down on maintenance, adds standardisation (hence reducing erroneous access to sensitive data), and reduces the size of the dbc.allrights table, which is very important in reducing DBC blocking in a large environment. Profiles assign different characteristics to a user, such as spool space, perm space, and account strings. Again, this helps with standardisation. Note that spool assigned to a profile will overrule spool assigned in a CREATE USER statement. Check the online manuals for the full list of properties.

Data Control Language is used to restrict or permit a user's access. It can selectively limit a user's ability to retrieve, add, or modify data. It is used to grant and revoke access privileges on tables and views.

46. What is the difference between a database and a user?

Both may own objects such as tables, views, macros, procedures, and functions. Both users and databases may hold privileges. However, only users may log on, establish a session with the Teradata Database, and submit requests.

A user performs actions where as a database is passive. Users have passwords and startup strings; databases do not. Users can log on to the Teradata Database, establish sessions, and submit SQL statements; databases cannot.

Creator privileges are associated only with a user, because only a user can log on and submit a CREATE statement. Implicit privileges are associated with either a database or a user, because each can hold an object, and an object is owned by the named space in which it resides.

47. How many MLOAD scripts are required for the following scenario: first, load data from a source file into a volatile table; after that, load data from the volatile table into a permanent table?

48. What are the types of CASE statements available in Teradata?

The CASE function provides an additional level of data testing after a row is accepted by the WHERE clause. The additional test allows for multiple comparisons on multiple columns with multiple outcomes. It also incorporates logic to handle a situation in which none of the values compares equal.

When using CASE, each row retrieved is evaluated once by every CASE function. Therefore, if two CASE operations are in the same SQL statement, each row has a column checked twice, or two different values each checked one time.

The basic syntax of the CASE follows:

CASE <column-name>

WHEN <value1> THEN <true-result1>

WHEN <value2> THEN <true-result2>

WHEN <valueN> THEN <true-resultN>

[ ELSE <false-result> ]

END

Types:

1. Flexible Comparisons within CASE

When it is necessary to compare more than just equal conditions within the CASE, the format is modified slightly to handle the comparison. Many people prefer to use the following format because it is more flexible and can compare inequalities as well as equalities.

This is a more flexible form of the CASE syntax and allows for inequality tests:

CASE

WHEN <condition-test1> THEN <true-result1>

WHEN <condition-test2> THEN <true-result2>

WHEN <condition-testN> THEN <true-resultN>

[ ELSE <false-result> ]

END

The above syntax shows that multiple tests can be made within each CASE. The value stored in the column continues to be tested until it finds a true condition. At that point, it does the THEN portion and exits the CASE logic by going directly to the END.

2. Comparison Operators within CASE

In this section, we will investigate adding more power to the CASE statement. In the above examples, a literal value was returned. In most cases, it is necessary to return data. The returned value can come from a column name just like any selected column or a mathematical operation.

Additionally, the above examples used a literal ‘=’ as the comparison operator. The CASE comparisons also allow the use of IN, BETWEEN, NULLIF and COALESCE. In reality, the BETWEEN is a compound comparison. It checks for values that are greater than or equal to the first number and less than or equal to the second number.

The next example uses both formats of the CASE in a single SELECT with each one producing a column display. It also uses AS to establish an alias after the END:

SELECT CASE
         WHEN Grade_pt IS NULL THEN 'Grade Point Unknown'
         WHEN Grade_pt IN (1,2,3) THEN 'Integer GPA'
         WHEN Grade_pt BETWEEN 1 AND 2 THEN 'Low Decimal value'
         WHEN Grade_pt < 3.99 THEN 'High Decimal value'
         ELSE '4.0 GPA'

END AS Grade_Point_Average

,CASE Class_code
         WHEN 'FR' THEN 'Freshman'
         WHEN 'SO' THEN 'Sophomore'
         WHEN 'JR' THEN 'Junior'
         WHEN 'SR' THEN 'Senior'
         ELSE 'Unknown Class'

END AS Class_Description

FROM Student_table

ORDER BY Class_code ;

3. CASE for Horizontal Reporting

Another interesting usage for the CASE is to perform horizontal reporting. Normally, SQL does vertical reporting. This means that every row returned is shown on the next output line of the report as a separate line. Horizontal reporting shows the output of all information requested on one line as columns instead of vertically as rows.

Previously, we discussed aggregation. It eliminates detail data and outputs only one line or one line per unique value in the non-aggregate column(s) when utilizing the GROUP BY. That is how vertical reporting works, one output line below the previous. Horizontal reporting shows the next value on the same line as the next column, instead of the next line.

Using the next SELECT statement, we achieve the same information in a horizontal reporting format by making each value a column:

SELECT AVG(CASE Class_code WHEN 'FR' THEN Grade_pt

ELSE NULL END) (format 'Z.ZZ') AS Freshman_GPA

,AVG(CASE Class_code WHEN 'SO' THEN Grade_pt

ELSE NULL END) (format 'Z.ZZ') AS Sophomore_GPA

,AVG(CASE Class_code WHEN 'JR' THEN Grade_pt

ELSE NULL END) (format 'Z.ZZ') AS Junior_GPA

,AVG(CASE Class_code WHEN 'SR' THEN Grade_pt

ELSE NULL END) (format 'Z.ZZ') AS Senior_GPA

FROM Student_Table

WHERE Class_code IS NOT NULL ;

4. Nested CASE Expressions

After becoming comfortable with the previous examples of CASE, it may become apparent that a single check on a column is not sufficient for more complicated requests. When that is the situation, one CASE can be embedded within another. This is called nesting CASE statements.

The CASE may be nested to check data in a second column in a second CASE before determining what value to return. It is common to have more than one CASE in a single SQL statement. However, it is powerful enough to have a CASE statement within a CASE statement.

Example:

SELECT Last_name

,CASE Class_code WHEN 'JR'

THEN 'Junior ' || (CASE WHEN Grade_pt < 2 THEN 'Failing'
                        WHEN Grade_pt < 3.5 THEN 'Passing'
                        ELSE 'Exceeding' END)

ELSE 'Senior ' || (CASE WHEN Grade_pt < 2 THEN 'Failing'
                        WHEN Grade_pt < 3.5 THEN 'Passing'
                        ELSE 'Exceeding' END)

END AS Current_Status

FROM Student_Table

WHERE Class_code IN ('JR','SR')

ORDER BY class_code, last_name;

49. How will you ALTER a table in Teradata?

1. ALTER TABLE (ADD) syntax:

ALTER TABLE employee ADD Street VARCHAR(30) ,ADD City VARCHAR(20);

2. ALTER TABLE (DROP) syntax:

ALTER TABLE employee DROP Phone ,DROP Pref;

3. ALTER TABLE (RENAME) syntax:

ALTER TABLE employee RENAME Street TO StreetAddr;

50. Mention the order of SQL execution.

FROM - WHERE - GROUP BY - HAVING - SELECT - ORDER BY

51. What is the SQL to find the AMP and the number of records stored for a particular table?
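One common way to answer this is to group rows by the AMP their PI value hashes to; a minimal sketch (table and column names hypothetical):

SELECT HASHAMP (HASHBUCKET (HASHROW (order_id))) AS amp_no
     , COUNT(*) AS row_count
FROM Orders
GROUP BY 1
ORDER BY 1;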

52. When a PI is not mentioned on a table, how will Teradata determine the PI for that table?

If you don't specify a PI at table create time, then Teradata must choose one. For instance, if the DDL is ported from another database that uses a Primary Key instead of a Primary Index, the CREATE TABLE contains a PRIMARY KEY (PK) constraint. Teradata is smart enough to know that primary keys must be unique and cannot be null, so the first level of default is to use the PRIMARY KEY column(s) as a UPI. If the DDL defines no PRIMARY KEY, Teradata looks for a column defined as UNIQUE. As a second-level default, Teradata uses the first column defined with a UNIQUE constraint as a UPI. If none of the above attributes are found, Teradata uses the first column defined in the table as a NON-UNIQUE PRIMARY INDEX (NUPI).

53. What is a covered query in Teradata?

If a SELECT query references only columns that are defined in the join index, the query is said to be covered, and the optimizer can answer it from the index alone without touching the base table. The columns of a multi-column NUSI can be used to cover a query in the same way.
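A sketch, reusing the Student_Table from the CASE examples with a hypothetical multi-column NUSI:

CREATE INDEX (last_name, class_code) ON Student_Table;

-- Covered: every column the query references is in the NUSI, so the
-- optimizer can answer it from the index subtable alone.
SELECT last_name, class_code
FROM Student_Table
WHERE last_name LIKE 'S%';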

54. What is NUSI bit mapping?

55. What are data demographics?

Data demographics give us information about the data in a table's columns, such as how frequently the columns are updated. Data demographics include:

maximum rows per value
typical rows per value
distinct values

56. Difference between logical and physical data modeling?

Logical Versus Physical Database Modeling

After all business requirements have been gathered for a proposed database, they must be modeled. Models are created to visually represent the proposed database so that business requirements can easily be associated with database objects to ensure that all requirements have been completely and accurately gathered. Different types of diagrams are typically produced to illustrate the business processes, rules, entities, and organizational units that have been identified. These diagrams often include entity relationship diagrams, process flow diagrams, and server model diagrams. An entity relationship diagram (ERD) represents the entities, or groups of information, and their relationships maintained for a business. Process flow diagrams represent business processes and the flow of data between different processes and entities that have been defined. Server model diagrams represent a detailed picture of the database as being transformed from the business model into a relational database with tables, columns, and constraints. Basically, data modeling serves as a link between business needs and system requirements.

Two types of data modeling are as follows:

Logical modeling Physical modeling

If you are going to be working with databases, then it is important to understand the difference between logical and physical modeling, and how they relate to one another. Logical and physical modeling are described in more detail in the following subsections.

Logical Modeling

Logical modeling deals with gathering business requirements and converting those requirements into a model. The logical model revolves around the needs of the business, not the database, although the needs of the business are used to establish the needs of the database. Logical modeling involves gathering information about business processes, business entities (categories of data), and organizational units. After this information is gathered, diagrams and reports are produced, including entity relationship diagrams, business process diagrams, and eventually process flow diagrams. The diagrams produced should show the processes and data that exist, as well as the relationships between business processes and data. Logical modeling should accurately render a visual representation of the activities and data relevant to a particular business.

The diagrams and documentation generated during logical modeling are used to determine whether the requirements of the business have been completely gathered. Management, developers, and end users alike review these diagrams and documentation to determine if more work is required before physical modeling commences.

Typical deliverables of logical modeling include

Entity relationship diagrams: An entity relationship diagram, also referred to as an analysis ERD, provides the development team with a picture of the different categories of data for the business, as well as how these categories of data are related to one another.

Business process diagrams: The process model illustrates all the parent and child processes that are performed by individuals within a company. The process model gives the development team an idea of how data moves within the organization. Because process models illustrate the activities of individuals in the company, the process model can be used to determine how a database application interface is designed.

User feedback documentation

Physical Modeling

Physical modeling involves the actual design of a database according to the requirements that were established during logical modeling. Logical modeling mainly involves gathering the requirements of the business, with the latter part of logical modeling directed toward the goals and requirements of the database. Physical modeling deals with the conversion of the logical, or business, model into a relational database model. When physical modeling occurs, objects are being defined at the schema level. A schema is a group of related objects in a database. A database design effort is normally associated with one schema.

During physical modeling, objects such as tables and columns are created based on entities and attributes that were defined during logical modeling. Constraints are also defined, including primary keys, foreign keys, other unique keys, and check constraints. Views can be created from database tables to summarize data or to simply provide the user with another perspective of certain data. Other objects such as indexes and snapshots can also be defined during physical modeling. Physical modeling is when all the pieces come together to complete the process of defining a database for a business.

Physical modeling is database software specific, meaning that the objects defined during physical modeling can vary depending on the relational database software being used. For example, most relational database systems have variations with the way data types are represented and the way data is stored, although basic data types are conceptually the same among different implementations. Additionally, some database systems have objects that are not available in other database systems.

57. What is a derived table?

Derived tables are always local to a single SQL request. They are built dynamically using an additional SELECT within the query. The rows of the derived table are stored in spool and discarded as soon as the query finishes. The Data Dictionary has no knowledge of derived tables, therefore no extra privileges are necessary. Its space comes from the user's spool space.

Following is a simple example using a derived table named DT with a column alias called avgsal and its data value is obtained using the AVG aggregation:

SELECT * FROM (SELECT AVG(salary) FROM Employee_table) DT(avgsal);

58. What is the use of WITH CHECK OPTION in Teradata?

In Teradata, the additional key phrase WITH CHECK OPTION indicates that the view's WHERE clause conditions should be applied during the execution of an UPDATE or DELETE against the view. This is not a concern if views are not used for maintenance activity, due to restricted privileges.
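A minimal sketch (hypothetical view over an employee table):

CREATE VIEW dept10_emp_v AS
SELECT emp, dept, salary
FROM employee
WHERE dept = 10
WITH CHECK OPTION;

-- Fails: the updated row would no longer satisfy the view's WHERE clause.
UPDATE dept10_emp_v SET dept = 20 WHERE emp = 101;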

59. What is soft referential integrity and batch referential integrity?

Soft RI is just an indication that there is a PK-FK relationship between the columns; it is not enforced on the Teradata side. But having it helps in cases like join processing, etc.

Batch:

- Tests an entire insert, delete, or update batch operation for referential integrity.

- If insertion, deletion, or update of any row in the batch violates referential integrity, then parsing engine software rolls back the entire batch and returns an abort message.

Let's say that I had a table called X with some number of rows and I wanted to insert these rows into table Y (INSERT INTO Y SELECT * FROM X). However, some of the rows violated an RI constraint that table Y had. From reading the manuals, it seemed to me that with standard RI, all of the valid rows would be inserted but the invalid ones would not. But with batch RI (which is "all or nothing") I would expect nothing to get inserted, since it would check for problem rows up front and return an error right away.

If in fact there is no difference except in how Teradata processes things internally (i.e. where it checks for invalid rows) then why would you want to use one over the other? Wouldn't you always want to use batch since it does the checking up front and saves processing time?

Points: Suppose we have 3 dimension tables and 1 fact table, and a join index (or AJI) is built over the 3 dimensions and the fact table (all tables inner joined).

1. With or without referential integrity: if you submit a query which joins dim1, dim2, dim3 and the fact table, the index can be used.
2. With referential integrity: if you submit a query which joins only dim1 and the fact table, the index can be used, because the optimizer knows that fact rows reference rows from the other dimensions (so it knows the inner join will not throw away those records).
3. Without referential integrity: if you submit a query which joins only dim1 and the fact table, the index cannot be used, because the optimizer does not know whether rows from the fact table reference rows from the other dimensions, nor whether the relationship is one-to-many, many-to-one, or anything else.
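In DDL terms, the two flavors are declared like this (a sketch with hypothetical fact and dimension tables):

-- Soft RI: the relationship is declared for the optimizer but never enforced.
ALTER TABLE facts ADD CONSTRAINT fk_dim1
  FOREIGN KEY (dim1_id) REFERENCES WITH NO CHECK OPTION dim1 (dim1_id);

-- Batch RI: the whole insert/update/delete is validated as a unit and
-- rolled back entirely if any row violates the constraint.
ALTER TABLE facts ADD CONSTRAINT fk_dim2
  FOREIGN KEY (dim2_id) REFERENCES WITH CHECK OPTION dim2 (dim2_id);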

60. What is an identity column? An identity column automatically generates a unique number for each row inserted. In Teradata V2R5.1, a table can have one column (INTEGER data type) defined as an identity column. Here's the DDL:

CREATE SET TABLE test_table, NO FALLBACK,
  NO BEFORE JOURNAL, NO AFTER JOURNAL, CHECKSUM = DEFAULT
( PRIM_REGION_ID INTEGER GENERATED ALWAYS AS IDENTITY
    (START WITH 1 INCREMENT BY 1
     MINVALUE -2147483647 MAXVALUE 2147483647 NO CYCLE)
, PRIM_REGION_CD CHAR(6) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL)
PRIMARY INDEX ( PRIM_REGION_ID );

61.How to implement UPSERT logic in Teradata using SQL?

We have the MERGE-INTO option available in Teradata, which works as UPSERT logic.

Example:

MERGE INTO dept_table1 AS Target
USING (SELECT dept_no, dept_name, budget
       FROM dept_table
       WHERE dept_no = 20) AS Source
ON (Target.dept_no = 20)
WHEN MATCHED THEN
  UPDATE SET dept_name = 'Being Renamed'
WHEN NOT MATCHED THEN
  INSERT (dept_no, dept_name, budget)
  VALUES (Source.dept_no, Source.dept_name, Source.budget);

62. What is SAMPLEID in Teradata? SAMPLEID identifies which sample each row belongs to when a SELECT requests multiple samples; since SAMPLEID is a column, it can be used as the sort key.
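For example (a sketch reusing the Student_Table from earlier):

-- Two 25% samples; SAMPLEID (1 or 2) tells you which sample each row is in.
SELECT last_name, SAMPLEID
FROM Student_Table
SAMPLE 0.25, 0.25
ORDER BY SAMPLEID;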

63. What other options are available with the SAMPLE function in Teradata?

The SAMPLE function is used to retrieve a random subset of rows from a table.

Example 1:

SELECT * FROM emp SAMPLE 10;

Example 2:

SELECT * FROM tab
SAMPLE WHEN prod_code = 'AS' THEN 10
       WHEN prod_code = 'CM' THEN 10
       WHEN prod_code = 'DQ' THEN 10
END;

64. What are the considerations for choosing a Primary Index? The Primary Index determines which AMP stores an individual row of a table. The PI value is converted into a Row Hash using a mathematical hashing formula; the result is used as an offset into the Hash Map to determine the AMP number. Since the PI value determines how the data rows are distributed among the AMPs, requesting a row by its PI value is always the most efficient retrieval mechanism in Teradata. In short: the PI determines how data will be distributed, and it is also the most efficient access path.

65. How many roles, at maximum, can be assigned to a user?

66. Consider Mload or Tpump according to the volume of data: in which situations should Tpump or Mload be used?

In general, the more you tend to accumulate your updates into large batches before applying them to your tables, the more likely it is that you'll want to use Mload. Mload is more efficient at applying a large number of updates. However, Mload has certain limitations like it can't update unique secondary indexes or join indexes, it can't fire triggers, and you can't use it on a table with referential integrity defined. Also, Mload will lock the entire table with a write lock when it's in the APPLY phase (when it's applying the updates).

Tpump, on the other hand, is best used if you are applying updates throughout the day in small batches (or using a queue). Tpump is not as fast, especially as the update volumes grow. Its advantages are that it doesn't lock the entire table for write, but only locks the specific row-hash values that are being updated, and it only locks them for the duration of the update. Also, since there is no special code inside the DBMS for Tpump, it supports all DBMS features (it updates unique secondary indexes and join indexes, fires triggers, etc.).

If you are applying updates on a weekly or daily basis, I would tend to use Multiload. As you start to apply updates more frequently throughout the day, you may start to find that Tpump is the better option.

67. Why 3rd NF in a Teradata LDM?

68. How many subject areas are in FSLDM?

10: 1. Party 2. Asset 3. Product 4. Agreement 5. Event 6. Location 7. Campaign 8. Channel 9. Financial Management 10. Internal Organization.

69. Explain about MLoad and SI.

MLOAD will not work with unique secondary indexes.

70. Show PPI and UPI in a table creation statement.

UPI:

CREATE SET TABLE TomC.employee
( emp       INTEGER
, dept      INTEGER
, lname     CHAR(20)
, fname     VARCHAR(20)
, salary    DECIMAL(10,2)
, hire_date DATE )
UNIQUE PRIMARY INDEX (emp);
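The PPI form is not shown above; here is a sketch (hypothetical table and date range, and note the index can no longer be UNIQUE because the partitioning column is not part of it):

PPI:

CREATE SET TABLE TomC.employee_ppi
( emp       INTEGER
, dept      INTEGER
, salary    DECIMAL(10,2)
, hire_date DATE )
PRIMARY INDEX (emp)
PARTITION BY RANGE_N (hire_date BETWEEN DATE '2000-01-01'
                                AND     DATE '2009-12-31'
                                EACH INTERVAL '1' MONTH);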

71. What is a value-ordered NUSI? When we define a value-ordered NUSI on a column, the rows in the secondary index subtable are sorted based on the secondary index value rather than the row hash. The columns should be of integer or date type. Value-ordered NUSIs are used for range queries and to avoid full table scans on large tables.
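For example (hypothetical Sales_Table):

-- The NUSI subtable is sorted by sale_date values instead of row hash,
-- so a range predicate can scan just the qualifying portion of the subtable.
CREATE INDEX (sale_date) ORDER BY VALUES (sale_date) ON Sales_Table;

SELECT *
FROM Sales_Table
WHERE sale_date BETWEEN DATE '2009-01-01' AND DATE '2009-01-31';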


72. Difference between Oracle and Teradata.

Both databases have their advantages and disadvantages, and a lot of factors must be taken into consideration before deciding which database is better. If you are talking about OLTP systems, then Oracle is far better than Teradata, and Oracle is more flexible in terms of programming: you can write packages, procedures and functions. Teradata is useful if you want to generate reports on a very huge database. The recent versions of Oracle such as 10g are quite good and contain a lot of features to support data warehousing.

Teradata is an MPP system which can process complex queries very fast. Another advantage is the uniform distribution of data through unique primary indexes, without any overhead. In one recent evaluation with experts from both Oracle and Teradata for an OLAP system, they were really impressed with the performance of Teradata over Oracle.

Oracle supports MPP in the form of grid computing, and uniform distribution of data based on a primary key is not much use when accessing a huge amount of data, since a full scan is required. So far we found Teradata almost equal in performance to Oracle 10g. Based on benchmarks and after consulting different people, we found the following problems with Teradata:

- It is too expensive; you need long pockets to work with Teradata.
- It has only one type of index, while Oracle has many types, especially its bitmap index.
- Teradata does not have materialized views; Oracle's materialized views decrease the I/O bandwidth and make the system more scalable.
- Oracle has a very wide variety of analytic functions for SQL, three types of partitioning (with new additions in 11g), and the ability to use clusters without having to statically partition data.

Further, these are remarks found on some Oracle discussion forums: "the largest databases in the world run on Oracle" (http://biz.yahoo.com/prnews/031114/sff029_1.html), though they count a) all disk on the computer, not just database disk, and b) the sum of all databases a customer is using, not individual databases.

But still, we saw that the best database is the one you have the technical resources to work with and, especially, to tune.

73. What are the DBQL tables?

Database Query Log (DBQL) tables are tables in the DBC database which store the history of the operations performed on the tables in the system's databases. The history can grow very large, so these tables should be purged when the data is no longer needed.

74. What are the different kinds of users in Teradata?

75. Explain RAID 1 and RAID 5.

RAID Protection

There are many forms of disk array protection in Teradata. RAID 1 and RAID 5 are commonly used and are discussed here; the disk array controllers manage both. RAID 1 is a disk-mirroring technique. Each physical disk is mirrored elsewhere in the array. This requires the array controllers to write all data to two separate locations, which means data can be read from two locations as well. In the event of a disk failure, the mirror disk becomes the primary disk to the array controller and performance is unchanged. RAID 1 may be configured as RAID 1+0, which uses mirrored striping.

RAID 5 is a parity-checking technique. For every three blocks of data (spread over three disks), there is a fourth block on a fourth disk that contains parity information. This allows any one of the four blocks to be reconstructed by using the information on the other three. If two of the disks fail, the rank becomes unavailable. The array controller does the recalculation of the information for the missing block. Recalculation will have some impact on performance, but at a much lower cost in terms of disk space.

76. What is the difference between SAMPLE and TOP?

The sampling function (SAMPLE) permits a SELECT to randomly return rows from a Teradata database table. It allows the request to specify either an absolute number of rows or a percentage of rows to return. Additionally, it provides an ability to return rows from multiple samples.

SELECT * FROM student_course_table SAMPLE 5;

TOP Clause The TOP clause is used to specify the number of records to return.

The TOP clause can be very useful on large tables with thousands of records, since returning a large number of records can impact performance.

Note: Not all database systems support the TOP clause.

Example:

1. SELECT TOP 50 PERCENT * FROM emp;

2. SELECT TOP 2 * FROM emp;

77.How to improve performance of the query

78. Explain the Primary Index and how we select it. (See question 64; the answer is the same: the PI determines which AMP stores each row, drives data distribution, and is the most efficient access path.)

79. What is the difference between a role, a privilege and a profile? A role can be assigned a collection of access rights in the same way a user can.

You then grant the role to a set of users, rather than grant each user the same rights.

This cuts down on maintenance, adds standardisation (hence reducing erroneous access to sensitive data) and reduces the size of the dbc.allrights table, which is very important in reducing DBC blocking in a large environment. Profiles assign different characteristics to a user, such as spool space, perm space and account strings; again this helps with standardisation. Note that spool assigned to a profile will overrule spool assigned in a CREATE USER statement. Check the online manuals for the full lists of properties.

Data Control Language is used to restrict or permit a user's access. It can selectively limit a user's ability to retrieve, add, or modify data. It is used to grant and revoke access privileges on tables and views.

80. What are the different spaces in Teradata, and what is the difference?

Perm space, spool space, and temp space.

Perm Space: All databases have a defined upper limit of permanent space. Permanent space is used for storing the data rows of tables. Perm space is not pre-allocated; it represents a maximum limit.

Spool Space: All databases also have an upper limit of spool space. If there is no limit defined for a particular database or user, limits are inherited from parents. Theoretically, a user could use all unallocated space in the system for their query. Spool space is temporary space used to hold intermediate query results or formatted answer sets to queries. Once the query is complete, the spool space is released.

Example: You have a database with total disk space of 100GB. You have 10GB of user data and an additional 10GB of overhead. What is the maximum amount of spool space available for queries?

Answer: 80GB. All of the remaining space in the system is available for spool.

Temp Space: The third type of space is temporary space. Temp space is used for global temporary tables (volatile tables, by contrast, live in spool space), and these results remain available to the user until the session is terminated. Tables created in temp space will survive a restart.

81. If your skew factor is going up, what are the remedies? Skew occurs when the primary index column selected is not a good candidate, i.e. the chosen PI has highly non-unique values, so rows concentrate on a few AMPs. Ideally the skew factor is close to zero; a skew factor greater than 25 is not a good sign. The remedy is to choose a more unique column, or combination of columns, as the primary index.

82.When, How and why we use Secondary Indexes.?A secondary index is an alternate path to the data. Secondary indexes are used to improve performance by allowing the user to avoid scanning the entire table during a query. A secondary index is like a primary index in that it allows the user to locate rows. Unlike a primary index, it has no influence on the way rows are distributed among AMPs. Secondary Indexes are optional and can be created and dropped dynamically. Secondary Indexes require separate subtables which require extra I/O to maintain the indexes.
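For example (hypothetical columns on an employee table):

CREATE UNIQUE INDEX (ssn) ON employee;  -- USI: typically a two-AMP access path
CREATE INDEX (dept) ON employee;        -- NUSI: all-AMP, but avoids a full base-table scan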

83.What is difference between Primary Key and Primary Index

84. What is the difference between a database and a user in Teradata? What can you do, or not do, in each?

Both may own objects such as tables, views, macros, procedures, and functions. Both users and databases may hold privileges. However, only users may log on, establish a session with the Teradata Database, and submit requests.

A user performs actions where as a database is passive. Users have passwords and startup strings; databases do not. Users can log on to the Teradata Database, establish sessions, and submit SQL statements; databases cannot.

Creator privileges are associated only with a user, because only a user can log on and submit a CREATE statement. Implicit privileges are associated with either a database or a user, because each can hold an object, and an object is owned by the named space in which it resides.


85. What is a checkpoint?

86. When do you use BTEQ? What other software have you used, or can we use, rather than BTEQ?

When a query performs operations on a smaller amount of data in a table, we go for BTEQ. It supports all kinds of SQL operations (SELECT, UPDATE, INSERT, DELETE), can be used for import, export and reporting purposes, and macros and stored procedures can also be run using BTEQ.

The other utilities which we can use instead of BTEQ for loading purposes are FASTLOAD and MLOAD, and for exporting, FASTEXPORT. These are used when accessing large amounts of data.

87. How many types of files have you loaded, and what are their differences (fixed and variable)?

88. How do you execute your jobs in a Teradata environment?

In a channel environment (i.e., mainframes), the load utilities can be executed through JCL. In a network environment (i.e., from a command prompt), a load script can be run with: <utility name> < <script name>

89. What was the environment of your latest project (number of AMPs, nodes, Teradata server number, etc.)?

Number of AMPs: production and integration – 24; development – 12.
Number of nodes: production and integration – 4; development – 2.

90. What is the process to restart a MultiLoad if it fails?

If the Mload failed in the acquisition phase, just rerun the job. If the Mload failed in the application phase:

a) Try to drop the error tables, work tables and log table, release the Mload lock if required, and resubmit the job from .BEGIN IMPORT onwards.
b) If the table is fallback protected, use the RELEASE MLOAD ... IN APPLY statement, then resubmit the job.

In short:

1. Release the Mload lock on the target table.
2. Drop all error tables and work tables.
3. Resubmit the Mload script.
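The release statements look like this (hypothetical table name):

RELEASE MLOAD employee;           -- after an acquisition-phase failure
RELEASE MLOAD employee IN APPLY;  -- after an application-phase failure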

91. How does indexing improve query performance?

Indexing is a way to physically reorganize the records to enable some frequently used queries to run faster. The index can be used as a pointer into the large table: it helps locate the required row quickly and return it to the user. Also, frequently used queries need not hit the large table for data; they can get what they want from the index itself (cover queries). An index comes with maintenance overhead: Teradata maintains its indexes itself, and each time an insert/update/delete is done on the table, the indexes must also be updated and maintained. Indexes cannot be accessed directly by users; only the optimizer has access to the index.

92. What is the difference between Multiload, FastLoad and TPUMP?

93. What are the different functions you use in BTEQ (ERRORCODE, ERRORLEVEL, etc.)? Error level assigns a severity to errors: you can assign an error level (severity) for each error code returned, and decisions can then be based on the error level.

94. What is the difference between ZEROIFNULL and NULLIFZERO? The ZEROIFNULL function passes zero when the incoming value is null; the NULLIFZERO function passes null when the incoming value is zero.
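For example (hypothetical commission column):

SELECT ZEROIFNULL(commission)  -- NULL becomes 0
     , NULLIFZERO(commission)  -- 0 becomes NULL
FROM employee;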

95. What is RANGE_N? RANGE_N is used in a partitioned primary index to specify the ranges of a column's values that are assigned to each partition. The number of partitions = the number of ranges specified + NO RANGE + UNKNOWN, where NO RANGE catches values that do not belong to any range and UNKNOWN catches values such as nulls.
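A sketch of a RANGE_N partitioning expression (hypothetical orders table and dates):

CREATE TABLE orders_p
( order_id   INTEGER
, order_date DATE )
PRIMARY INDEX (order_id)
PARTITION BY RANGE_N (order_date BETWEEN DATE '2008-01-01'
                                 AND     DATE '2009-12-31'
                                 EACH INTERVAL '1' MONTH,
                      NO RANGE, UNKNOWN);
-- 24 monthly ranges + NO RANGE + UNKNOWN = 26 partitions.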

96. Explain PPI. Partitioned Primary Indexes are created to divide a table into partitions based on ranges or values, as required. The data is first hashed to the AMPs, then stored within each AMP grouped by partition. A retrieval that touches a single partition (or a few partitions) is still an all-AMP operation, but not a full table scan. This is especially effective for large tables partitioned on a date. There is no extra overhead on the system (no special subtables are created, etc.).


97. What is casting in Teradata? CAST converts a value from one data type to another. The syntax is similar to DDL: CAST('02/03/2009-01:25:11' AS TIMESTAMP FORMAT 'MM/DD/YYYY-HH:MI:SS')

98.What is difference between UNION and MINUS?

Both are set operators on two tables generally. UNION gives all rows from both tables eliminating duplicate rows.

MINUS gives the records from the first table, excluding the records common to both tables. It is just like EXCEPT in Teradata.

Operator     Returns
UNION        All rows selected by either query.
UNION ALL    All rows selected by either query, including all duplicates.
INTERSECT    All distinct rows selected by both queries.
MINUS        All distinct rows selected by the first query but not the second.

UNION Example

The following statement combines the results with the UNION operator, which eliminates duplicate selected rows. This statement shows that you must match data types (using the TO_DATE and TO_NUMBER functions) when columns do not exist in one or the other table:

SELECT part, partnum, TO_DATE(NULL) date_in FROM orders_list1
UNION
SELECT part, TO_NUMBER(NULL), date_in FROM orders_list2;

PART       PARTNUM  DATE_IN
---------- -------- --------
SPARKPLUG  3323165
SPARKPLUG           10/24/98
FUEL PUMP  3323162
FUEL PUMP           12/24/99
TAILPIPE   1332999
TAILPIPE            01/01/01
CRANKSHAFT 9394991
CRANKSHAFT          09/12/02

SELECT part FROM orders_list1 UNION SELECT part FROM orders_list2;

PART
----------
SPARKPLUG
FUEL PUMP
TAILPIPE
CRANKSHAFT

MINUS Example

The following statement combines results with the MINUS operator, which returns only rows returned by the first query but not by the second:

SELECT part FROM orders_list1
MINUS
SELECT part FROM orders_list2;

PART
----------
SPARKPLUG
FUEL PUMP

99. What is EXPLAIN in Teradata? Ans: EXPLAIN is a function with which you can see the execution plan of any query in SQL Assistant. To use it, type EXPLAIN before any query and run it, or press F6 after writing the query. It also gives the estimated time and the join confidence for each step of the query. It is advisable to EXPLAIN any complex query before executing it.

100. What will you do if you get low confidence in an EXPLAIN plan?

When the EXPLAIN plan shows low confidence on a column, we define COLLECT STATISTICS for that particular column. From then on, the PE prepares the plan with high confidence.

101. What will you do if you get high confidence in an EXPLAIN plan?

Then we will run the query without hesitation.

102. I have one SQL query, and when I ran EXPLAIN the plan showed a product join. What would you look for in the query to make it a merge join?

Product joins are the only join type that can join two tables without a bind term. The only way to avoid a product join and get a merge join is to supply a connecting term between the tables where the operator of the term is = (these terms are called bind terms), i.e. add an equality join condition between columns of the two tables.

110. How does indexing improve query performance?

(See question 91 above; the answer is the same.)

111. Can we collect stats on a table while the table is being updated?

No.

112. What is a Join Index in TD and how does it work?

Ans: JOIN INDEX: A join index pre-joins two or more tables or views which are commonly joined, in order to reduce the joining overhead: Teradata uses the join index instead of resolving the joins from the participating base tables. Join indexes increase the efficiency and performance of join queries. They can have different primary indexes than the base tables, they are automatically updated as and when the base rows are updated, and they can have repeating values.

There are 3 types of join indexes:

1) Single-table join index: the rows of one base table are redistributed based on the hash of a foreign key value.
2) Multi-table join index: pre-joins two or more tables.
3) Aggregate join index: pre-computes aggregates, but only SUM and COUNT.
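A multi-table join index is defined like this (a sketch with hypothetical customer and orders tables):

CREATE JOIN INDEX cust_ord_ji AS
SELECT c.cust_id, c.cust_name, o.order_id, o.order_amt
FROM customer c
INNER JOIN orders o ON c.cust_id = o.cust_id;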

113. I have two tables; one table's index is defined as UPI or USI, and the second table has any of the indexes UPI, NUPI, USI or NUSI. In this scenario, what join strategy will the optimizer use? Merge join strategy.

114. I have two tables which I join on the same columns most of the time. Which type of join index will improve performance in this scenario? A multi-table join index.

115. When will you create a PPI and when will you create secondary indexes?

Partitioned Primary Indexes are created to divide a table into partitions based on ranges or values, as required. This is effective for large tables partitioned on date or integer columns. There is no extra overhead on the system (no special subtables are created, etc.).

Secondary indexes are created on a table as an alternate way to access data. This is the second fastest method to retrieve data from a table, next to the primary index. Subtables are created.

Neither PPI access nor secondary index access performs a full table scan; they access only a defined set of data on the AMPs.

116. What are optimization and performance tuning, and how do they really work in practical projects? Can I get an example to understand better?

117. Explain about skew factor. (See question 81; the answer is the same.)

118. When do you choose a primary index and when do you choose a secondary index? The primary index is chosen at table creation time; it helps with data distribution, data retrieval and join operations. Secondary indexes can be created and dropped at any time; they are used as an alternate path to access data other than the primary index.

119. When will you go for a join index? When two tables are joined on the same join condition very frequently, we go for a join index.

120. When will you go for a hash index?

a. A hash index organizes the search keys, with their associated pointers, into a hash file structure.
b. We apply a hash function on a search key to identify a bucket, and store the key and its associated pointers in the bucket (or in overflow buckets).
c. Strictly speaking, hash indices are only secondary index structures, since if a file itself is organized using hashing, there is no need for a separate hash index structure on it.

SCENARIO BASED QUESTIONS

121. In case of replacement loading, which utility do you prefer: Mload or Fload? Fload.

122. I have a scenario where I update one column in a table using a flat file as source. At the same time, the same column is getting updated from another flat file. Which utility is more applicable in this case?

Tpump is better, as it locks at the row level.

A table got loaded with wrong data using FastLoad, and the job failed with: "RDBMS error 2652: Operation not allowed: _db_._table_ is being Loaded." How do we release the lock on this table?

When the data has loaded completely and the table is still locked, submit another FastLoad script containing only the BEGIN LOADING and END LOADING statements.

I need to create a delimited file using FastExport. As FastExport does not support delimited format, I have written the following SELECT to get delimited output:

SELECT TRIM(col1) || '|' ||
       TRIM(col2) || '|' ||
       TRIM(col3) || '|' || ...
       TRIM(col50)
FROM table;

But the script prefixes each line with 2 junk characters. How do I get the data without the junk characters? (The usual explanation is that in RECORD mode FastExport writes a 2-byte record-length indicator at the start of each record; common workarounds are casting the concatenated result to a fixed-length CHAR or stripping the first two bytes of each record in post-processing.)

When the FastLoad checkpoint value is <= 60 or > 60, how does that matter? When the checkpoint interval is <= 60, it indicates a time interval in minutes. If the value is more than 60, it is considered a number of records, not a time.
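In the BEGIN LOADING statement that looks like this (hypothetical table and error-table names):

BEGIN LOADING employee ERRORFILES emp_err1, emp_err2
CHECKPOINT 30;       -- <= 60: checkpoint every 30 minutes

BEGIN LOADING employee ERRORFILES emp_err1, emp_err2
CHECKPOINT 100000;   -- > 60: checkpoint every 100,000 records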

123. I am loading a delimited flat file with a time format of HH:MM AM/PM. Examples would be:

9:45 AM
10:25 PM

And there is no leading zero when the hour is a single-digit value.

Is there any way to get the Mload acquisition-phase count in the Mload script? The MLOAD support environment provides different variables (total inserts, updates, deletes, etc.) at the application phase, but not at the acquisition phase. Is there any way other than scanning the log file?

There are various system variables available for this:

&SYSAPLYCNT
&SYSNOAPLYCNT
&SYSRCDCNT
&SYSRJCTCNT

124. I have a requirement that when an error table gets generated during an MLOAD, I want to send an email. How can I achieve this?

After the Mload, use a BTEQ step to query for the error table; if it is present, quit with some value, say 99, and have your OS send the mail when the return code is 99.

I am using the following syntax to log on to the Teradata demo through BTEQ/BTEQWin:

.logon demotdat/dbc,dbc;

and I am getting the following error:

*** Error: Invalid logon!
*** Total elapsed time was 1 second.
Teradata BTEQ 08.02.00.00 for WIN32. Enter your logon or BTEQ command:

The hosts file shows the following:

127.0.0.1 localhost DemoTDAT DemoTDATcop1

However, when I use .logon demotdat/dbc without specifying the password, it prompts for a password, and when I type the password in, I am able to log on.

125. What is the reason?

When we use BTEQ in interactive mode, we cannot give the ID and password on the same line. We have to first give the logon ID and press Enter; only after that do we enter the password.

126. Can we make an MLOAD script fail when the error tables are created?

Currently the Mload script exits with return code 0, which means the load is "successful" even though it is not: it has created error tables, which indicates some data was rejected. The system variables can be used for this, e.g.:

.LOGOFF &SYSUVCNT + &SYSRJCTCNT + &SYSETCNT + &SYSRC;

TROUBLE SHOOTING

1) An open batch session failed with the following error: WRITER_1_*_1> WRT_8229 Database errors occurred: FnName: Execute -- [NCR][ODBC Teradata][Teradata Database] Duplicate unique prime key error in CFDW2_DEV_CNTL.CFDW_ECTL_CURRENT_BATCH. FnName: Execute -- [DataDirect][ODBC lib] Function sequence error

Solution: Whenever you want to open a fresh batch ID, you must first close the existing batch ID and then open a fresh one.

2) The source is a flat file and I am staging this flat file in Teradata. I found that leading zeros are truncated in Teradata. What could be the reason?

Solution: In Teradata the column data type is defined as INTEGER, which is why the leading zeros are truncated. Change the target table data type to VARCHAR; the VARCHAR data type will not truncate the leading zeros.

3) Can't determine current batch ID for Data Source 47.

Solution: For any fresh stage load you must open a batch ID for the current data source ID.

4) Unique primary key violation on the CFDW_ECTL_CURRENT_BATCH table.

Solution: In the CFDW_ECTL_CURRENT_BATCH table, a unique primary key is defined on the ECTL_DATA_SRCE_ID and ECTL_DATA_SRCE_INST_ID columns. At any point in time you should have only one record per ECTL_DATA_SRCE_ID, ECTL_DATA_SRCE_INST_ID combination.

5) Can't insert a NULL value into a NOT NULL column.

Solution: First find all the NOT NULL columns in the target table, cross-verify them with the corresponding source columns, identify which source column is producing the NULL value, and take the necessary action.

6) (Same as item 2 above: leading zeros are truncated when staging a flat file because the column is defined as INTEGER; define it as VARCHAR instead.)

7) I am passing one record to the target lookup but the lookup is not returning the matching record, although I know the record is present in the lookup. What action will you take?

Solution: Use LTRIM and RTRIM in the lookup SQL override. This removes the unwanted blank spaces; the lookup will then find the matching record.

8) I am getting duplicate records for the natural key (ECTL_DATA_SRCE_KEY). What will you do to eliminate the duplicate natural-key records?

Solution: We concatenate 2, 3 or more source columns and check for duplicate records. If you no longer get duplicates after concatenating, use those columns to populate the ECTL_DATA_SRCE_KEY column in the target.

9) Accti_id is a NOT NULL column in the AGREEMENT table, and you are getting a NULL value from the CFDW_AGREEMENT_XREF lookup. What will you do to eliminate the NULL records?

Solution: After the stage load, I will populate the CFDW_AGREEMENT_XREF table (this table basically contains surrogate keys). Once the XREF table is populated, you will not get any NULL records in the Accti_id column.

10) Unique primary key violation on the CFDW_ECTL_BATCH_HIST table.

Solution: In the CFDW_ECTL_BATCH_HIST table, a unique primary index is defined on the ectl_btch_id column, so there should be only one unique record per ectl_btch_id value.

11) When will you use the ECTL_PGM_ID column in the target lookup SQL override?

Solution: When you are populating a single target table (the AGREEMENT table) from multiple mappings in the same Informatica folder, we use ECTL_PGM_ID in the target lookup SQL override. This eliminates unnecessary updating of records.

12) You defined the primary keys as per the ETL spec but you are still getting duplicate records. How will you handle it?

Solution: Apart from the primary key columns in the spec, I will add another column as part of the key and check for duplicate records. If I no longer get duplicates, I will ask the modeller to add this column as a primary key.

13) In Teradata the error is mentioned as: "no more room in database".

Solution: I spoke with the DBA to add space for that database.

14) Though the column is available in the target table, when I try to load using Mload it says the column is not available in the table. Why?

Solution: The loading process was happening through a view, and the view had not been refreshed to include the new column; hence the error message. Refresh the view definition to add the new column.

15) While deleting from the target table, although I wanted to delete only some data, by mistake all the data got deleted from the development table.

Solution: Add ECTL_DATA_SRCE_ID and PGM_ID to the WHERE clause of the query.

16) While updating the target table, an error message says multiple rows are trying to update a single row.

Solution: There are duplicates in the table matching the WHERE condition of the update query; these duplicate records need to be eliminated.

17) I have a file with a header, data records, and a trailer. The data records are comma-delimited, while the header and trailer are fixed width and start with HDR and TRA respectively. I need to skip the header and trailer while loading the file with MultiLoad. Please help me in this case.

Solution: Code the Mload APPLY clause to consider only the data records, excluding the header and trailer records:

APPLY label WHERE REC_TD_IN NOT IN ('HDR','TRA')

****MORE ON JOINS & INDEXES****

* Teradata decides itself whether to use an index or not. If you are not careful, you spend time in table updates keeping up an index that is never used (one cannot give the query optimizer hints to use some index, though collecting statistics may affect the optimizer's strategy).
* In the MP-RAS environment, look at the script "/etc/gsc/bin/perflook.sh". This will provide a system-wide snapshot in a series of files. The GSC uses this data for incident analysis.
* When using an index, one must make sure that the index condition is met in the subqueries (using IN, nested queries, or derived tables).
* An indication of proper index use is the explain log entry "a ROW HASH MATCH SCAN across ALL-AMPS".
* If the index is not used, the result is a FULL TABLE SCAN, where the elapsed time grows as the size of the history table grows.
* Keeping up index information is a time/space consuming issue. Sometimes Teradata performs much better when you "manually" imitate the index by building it from scratch.
* Keeping a join index might help, but you cannot MultiLoad to a table which is part of a join index; loading with Tpump or pure SQL is OK but does not perform as well. Dropping and re-creating a join index on a big table takes time and space.
* Be wary when your Teradata EXPLAIN produces 25 steps for a query (even without the update of the results) and the actual query is a join of six or more tables.

Case e.g.: We had already given up updating the secondary indexes, because we have not had much use for them. After some trial and error we ended up with a strategy where the actual "purchase frequency analysis" is never made directly against the history table. Instead:

1) There is a one-shot run to build the initial "customer's previous purchase" table from the "purchase history". It takes time, but that time is saved later.
2) The purchase frequency is calculated by joining the "latest purchase" with the "customer's previous purchase".
3) When the "latest purchase" rows are inserted into the "purchase history", the "customer's previous purchase" table is dropped and recreated by merging the "customer's previous purchase" with the "latest purchase".
4) Following these steps, the performance is not too fast yet (about 25 minutes on our two-node system for a batch of almost 1,000,000 latest receipts), but it is tolerable now.

(We also tested adding both the previous and latest purchase to the same table, but because its size was on average much bigger than the pure "latest purchase", the self-join was slower in that case.)

*********

MANAGING CONCURRENT WORKLOADS

Integrated e-commerce efforts present many warehouse challenges. Here's how Teradata can help.

The word e-commerce means many things to many people. Although for some it connotes only the Web, the real value of e-commerce can only be realized when all channels of a business are integrated and have full access to all customer information and transactions. In fact, to me, e-commerce means using the rich technology available today to bring added value to the customer and additional value to the business through all customer interaction channels. Under this definition of e-commerce, an active warehouse is at the epicenter, providing the storage and access for decision making in the e-commerce world.

As more and more companies adopt active warehousing for this purpose, data warehouse workloads are expanding and changing. If your warehouse relies on a Teradata DBMS, you'll find that handling the challenge of high-volume, widely varying, disparate service-level workloads is one of its core competencies. One of the biggest concerns I hear from customers is how to deal with the quickly rising number of concurrent queries and concurrent users that can result from active warehousing and e-commerce initiatives. Expected service levels vary widely among different groups of users, as do query types. And, of course, the entire workload must scale upward linearly as the demand increases, ideally with a minimum of effort required from users and systems staff. Here's a look at some of the most frequent questions I receive on the subject of mixed workloads and concurrency requirements.


  How do I balance the work coming in across all nodes of my Teradata configuration? You don't. Teradata automatically balances sessions across all nodes to evenly distribute work across the entire parallel configuration. Users connect to the system as a whole rather than a specific node, and the system uses a balancing algorithm to assign their sessions to a node. Balancing requires no effort from users or system administrators.

Does Teradata balance the work queries cause? The even distribution of data is the key to parallelism and scalability in Teradata. Each query request is sent to all units of parallelism, each of which has an even portion of the data to process, resulting in even work distribution across the entire system. For short queries and update flow typical of Web interactions, the optimizer recognizes that only a single unit of parallelism is needed. A query coordinator routes the work to the unit of parallelism needed to process the request. The hashing algorithm does not cluster related data, but spreads it out across the entire system. For example, this month's data and even today's data is evenly distributed across all units of parallelism, which means the work to update or look at that data is evenly distributed.

Will many concurrent requests cause bottlenecks in query coordination? Query coordination is carried out by a fully parallel parsing engine (PE) component. Usually, one or more PEs are present on each node. Each PE handles the requests for a set of sessions, and sessions are spread evenly across all configured PEs. Each PE is multithreaded, so it can handle many requests concurrently. And each PE is independent of the others with no required cross-coordination. The number of users logged on and requests in flight are limited only by the number of PEs in the configuration.

How do you avoid bottlenecks when the query coordinator must retrieve information from the data dictionary? In Teradata, the DBMS itself manages the data dictionary. Each dictionary table is simply a relational table, parallelized across all nodes. The same query engine that manages user workloads also manages the dictionary access, using all nodes for processing dictionary information to spread the load and avoid bottlenecks. The PE even caches recently used dictionary information in memory. Because each PE has its own cache, there is no coordination overhead. The cache for each PE learns the dictionary information most likely to be needed by the sessions assigned to it.

With a large volume of work, how can all requests execute at once? As in any computer system, the total number of items that can execute at the same time is always limited to the number of CPUs available. Teradata uses the scheduling services Unix and NT provide to handle all the threads of execution running concurrently. Some requests might also exist on other queues inside the system, waiting for I/O from the disk or a message from the BYNET, for example. Each work item runs in a thread; each thread gets a turn at the CPU until it needs to wait for some external event or until it completes the current work. Teradata configures several units of parallelism in each SMP node. Each unit of parallelism contains many threads of execution that aren't restricted to a particular CPU; therefore, every thread gets to compete equally for the CPUs in the SMP node. There is a limit, of course, to the number of pieces of work that can actually have a thread allocated in a unit of parallelism. Once that limit is reached, Teradata queues work for the threads. Each thread is context free, which means that it is not assigned to any session, transaction, or request. Therefore, each thread is free to work on whatever is next on the queue. The unit of work on the queue is a processing step for a request. Combining the queuing of steps with context-free threads allows Teradata to share the processing service equally across all the concurrent requests in the system. From the users' point of view, all the requests in the system are running, receiving service, and sharing system resources.

How does Teradata avoid resource contention and the resulting performance and management problems? Teradata algorithms are very resource efficient. Other DBMSs optimize for single-query performance by giving all resources to the single query. But Teradata optimizes for throughput of many concurrent queries by allocating resources sparingly and using them efficiently. This kind of optimization helps avoid wide performance variations that can occur depending on the number of concurrent queries. When faced with a workload that requires more system resources than are available, Teradata tunes itself to that workload. Thrashing, a common performance failure mode in computer systems, occurs when the system has fewer resources than the current workload requires and begins using more processing time to manage resources than to do the work. With most databases, a DBA would tune the system to avoid thrashing. However, Teradata adjusts automatically to workload changes by adjusting the amount of running work and internally pushing back incoming work. Each unit of parallelism manages this flow control mechanism independently.

If all concurrent work shares resources evenly, how are different service levels provided to different users? The Priority Scheduler Facility (PSF) in Teradata manages service levels among different parts of the workload. PSF allows granular control of system resources. The system administrator can define up to five resource partitions; each partition contains four available priorities. Together, they provide 20 allocation groups (AGs) to which portions of the workload are assigned by an attribute of the logon ID for the user or application. The administrator assigns each AG a portion of the total system resources and a scheduling policy. For example, the administrator can assign short queries from the Web site a guaranteed 20 percent of system resources and a high priority. In contrast, the administrator might assign medium priority and 10 percent of system resources to more complex queries with lower response-time requirements. Similarly, the administrator might assign data mining queries a low priority and five percent of the total resources, effectively running them in the background. You can define policies so that the resources adjust to the work in the system. For example, you could allow data mining queries to take up all the resources in the system if nothing else is running. Unlike other scheduling utilities, PSF is fully integrated into the DBMS, not managed at the task or thread level, which makes it easier to use for parallel database workloads. Because PSF is an attribute of the session, it follows the work wherever it goes in the system. Whether that piece of work is executed by a single thread in a single unit of parallelism or in 2,000 threads in 500 units of parallelism, PSF manages it without system administrator involvement. CPU scheduling is a primary component of PSF, using all the normal techniques (such as quantum size, CPU queues by priority, and so on). However, PSF is endemic throughout the Teradata DBMS. There are many queues inside a DBMS handling a large volume mixed workload. All of those queues are prioritized based on the priority of the work. Thus, a high priority query entered after several lower priority requests that are awaiting their turn to run will go to the head of the queue and will be executed first. I/O is managed by priority. Data warehouse workloads are heavy I/O users, so a large query performing a lot of I/O could hold up a short, high-priority request. PSF puts the high-priority request I/Os to the head of the queue, helping to deliver response time goals.


Data warehouse databases often set the system environment to allow for fast scans. Does Teradata performance suffer when the short work is mixed in? Because Teradata was designed to handle a high volume of concurrent queries, it doesn't count on sequential scans to produce high performance for queries. Although other DBMS products see a large fall in request performance when they go from a single large query to multiple queries or when a mixed workload is applied, Teradata sees no such performance change. Teradata never plans on sequential access in the first place. In fact, Teradata doesn't even store the data for sequential accesses. Therefore, random accesses from many concurrent requests are just business as usual. Sync scan algorithms provide additional optimization. When multiple concurrent requests are scanning or joining the same table, their I/O is piggybacked so that only a single I/O is performed to the disk. Multiple concurrent queries can run without increasing the physical I/O load, leaving the I/O bandwidth available for other parts of the workload.

What if work demand exceeds Teradata's capabilities? There are limits to how much work the engine can handle. A successful data warehouse will almost certainly create a demand for service that is greater than the total processing power available on the system. Teradata always puts into execution any work presented to the DBMS. If the total demand is greater than the total resources, then controls must be in place before the work enters the DBMS. When your warehouse reaches this stage, you can use Database Query Manager (DBQM) to manage the flow of user requests into the warehouse. DBQM, inserted between the users' ODBC applications and the DBMS, evaluates each request and then applies a set of rules created by the system administrator. If the request violates any of the rules, DBQM notifies the user that the request is denied or deferred to a later time for execution. Rules can include, for example, system use levels, query cost parameters, time of day, objects accessed, and authorized users. You can read more about DBQM in a recent Teradata Review article ("Field Report: DBQM," Summer 1999, available online at www.teradatareview.com/summer99/truet.html).

How do administrators and DBAs stay on top of complex mixed workloads? The Teradata Manager utility provides a single operational system view for administrators and DBAs. The tool provides real-time performance, logged past performance, users and queries currently executing, management of the schema, and more.

STAYING ACTIVE

The active warehouse is a busy place. It must handle all decision making for the organization, including strategic, long-range data mining queries, tactical decisions for daily operations, and event-based decisions necessary for effective Web sites. Nevertheless, managing this diversity of work does not require a staff of hundreds running a complex architecture with multiple data marts, operational data stores, and a multitude of feeds. It simply requires a database management system that can manage multiple workloads at varying service levels, scale with the business, and provide 24x7 availability year round with a minimum of operational staff.

2. Use COMPRESS on whichever columns possible. This helps in reducing I/O and hence improves performance, especially for columns having lots of NULL values or a small set of known values.
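For example (hypothetical column and value list):

CREATE TABLE customer
( cust_id INTEGER
, city    CHAR(20) COMPRESS ('London', 'Paris', 'New York') )
PRIMARY INDEX (cust_id);
-- NULLs in city are compressed automatically once COMPRESS is specified.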

3. COLLECT STATISTICS on a daily basis (after every load) in order to improve performance.
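For example:

COLLECT STATISTICS ON employee COLUMN (dept);
COLLECT STATISTICS ON employee INDEX (emp);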

4. Drop and recreate secondary indices before and after every load. This helps in improving load performance (if critical).

5. Regularly check for EVEN data distribution across all AMPs, using Teradata Manager or through Queryman.
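One way to check the distribution (a sketch against the DBC.TableSize view; the database and table names are placeholders):

SELECT Vproc         -- AMP number
     , CurrentPerm   -- bytes this AMP holds for the table
FROM DBC.TableSize
WHERE DatabaseName = 'mydb'
  AND TableName    = 'mytable'
ORDER BY Vproc;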

6. Check the combination of CPUs, AMPs, PEs and nodes for performance optimization. Each AMP can handle 80 tasks and each PE can handle 120 sessions.

MLOAD – Customize the number of sessions for each MLOAD job depending on the number of concurrent MLOAD jobs and the number of PEs in the system.

e.g. SCENARIO 1

# of AMPs = 10
# of max load jobs handled by Teradata = 5 (parameter which can be set, values 5 to 15)
# of sessions per load job = 1 (parameter that can be set globally or at each MLOAD script level)
# of PEs = 1

So 10*5*1 = 50, plus 10 (2 per job overhead) = 60 max sessions on the Teradata box. This is LESS than 120, which is the max number of sessions a PE can handle.

SCENARIO 2

# of AMPs = 16
# of max load jobs handled by Teradata = 15
# of sessions per load job = 1
# of PEs = 1

So 16*15*1 = 240, plus 30 (2 per job overhead) = 270 max sessions on the Teradata box. This is MORE than 120, which is the max sessions a PE can handle.

Hence the MLOADs fail, in spite of the usage of the SLEEP and TENACITY features.

Use the SLEEP and TENACITY features of MLOAD for scheduling MLOAD jobs.

Check the TABLEWAIT parameter. If omitted, it can cause an immediate load job failure when you submit two MLOAD jobs that try to update the same table.

JOIN INDEX – Check the limit on the number of fields in a join index (max 16 fields; it may vary by version).

A join index is like building the table physically. Hence it has the advantage of BETTER performance, since the data is physically stored and not calculated ON THE FLY. The cons are loading time (MLOAD needs join indexes to be dropped before loading) and additional space, since it is a physical table.