INTRODUCTION TO DATABASE SYSTEM - SAJID BANDAY


TRANSCRIPT

Page 1: Introduction to database System Sajid Banday

INTRODUCTION TO DATABASE SYSTEM

SAJID BANDAY

Page 2: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
1. Relational database theory
2. Normalization theory
3. Database engine structure and architecture concept
   1. Database Engine Component
   2. Database Engine structure and architecture

Page 3: Introduction to database System Sajid Banday

Database Design Concepts - Part 2
1. Design Data Model
2. Design Entity Relationship Model

Page 4: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
1. Data Definition Language (DDL)
2. Data Manipulation Language (DML)
3. Data Control Language (DCL)
4. Data Query Language (DQL)
5. Group functions
6. SQL joins
7. Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)
   1. Stored procedures
   2. User-defined functions
   3. Triggers
   4. Views/Synonyms

Page 5: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Relational database theory

The relational database model was conceived in 1969 by E. F. Codd, then a researcher at IBM. The model is based on branches of mathematics called set theory and predicate logic. The basic idea behind the relational model is that a database consists of a series of unordered tables (or relations) that can be manipulated using non-procedural operations that return tables. This model stood in stark contrast to the more traditional database theories of the time, which were more complicated, less flexible, and dependent on the physical storage methods of the data.

Relational database theory uses a set of mathematical terms, which are roughly equivalent to SQL database terminology. The table below summarizes some of the most important relational database terms and their SQL database equivalents.

Relational term                    SQL equivalent
relation, base relation variable   table
derived relation variable          view, query result, result set
tuple                              row
attribute                          column

Page 6: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Relational database theory

In a relational database, the schema is the structural description of the relations in the database; an instance is the actual contents (the data) at a given point in time.

A database is a set of named relations, or tables. Each relation has a set of named attributes, or columns; each tuple, or row, has a value for each attribute; and each attribute has a data type, or domain.

Key: The relational model dictates that each row in a table be unique. If you allow duplicate rows in a table, there is no way to uniquely address a given row programmatically, which creates all sorts of ambiguities and problems that are best avoided.

You guarantee uniqueness for a table by designating a primary key: a column (or combination of columns) that contains unique values for the table. Each table can have only one primary key, even though several columns or combinations of columns may contain unique values. All columns (or combinations of columns) in a table with unique values are referred to as candidate keys, from which the primary key must be drawn.

All other candidate key columns are referred to as alternate keys. Keys can be simple or composite: a simple key is made up of one column, whereas a composite key is made up of two or more columns.
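A minimal sketch of simple and composite keys in SQL; the tables, columns, and data types here are illustrative, not taken from the slides:

CREATE TABLE DEPARTMENT_TBL (
    DEPT_ID   INT         PRIMARY KEY,   -- simple (single-column) primary key
    DEPT_NAME VARCHAR(50) UNIQUE         -- another candidate key; here an alternate key
);

CREATE TABLE PROJECT_ASSIGNMENT_TBL (
    EMP_ID  INT,
    PROJ_ID INT,
    HOURS   INT,
    PRIMARY KEY (EMP_ID, PROJ_ID)        -- composite primary key made of two columns
);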

Page 7: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Relational database theory

Relationships: You define foreign keys in a database to model relationships in the real world. Relationships between real-world entities can be quite complex, involving numerous entities each having multiple relationships with each other. For example, a family has multiple relationships between multiple people all at the same time. These tables can be related in one of three different ways: one-to-one, one-to-many or many-to-many.

One-to-One Relationships
Two tables are related in a one-to-one (1:1) relationship if, for every row in the first table, there is at most one row in the second table. True one-to-one relationships seldom occur in the real world. This type of relationship is often created to get around some limitation of the database management software rather than to model a real-world situation.

Page 8: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Relational database theory

Cardinality: One-to-Many Relationships
Two tables are related in a one-to-many (1:M) relationship if, for every row in the first table, there can be zero, one, or many rows in the second table, but for every row in the second table there is exactly one row in the first table.

Page 9: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Relational database theory

Cardinality: Many-to-Many Relationships
Two tables are related in a many-to-many (M:M) relationship when, for every row in the first table, there can be many rows in the second table, and for every row in the second table, there can be many rows in the first table. Many-to-many relationships cannot be directly modeled in a relational database; they must be broken into multiple one-to-many relationships.
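As a hedged illustration (the tables here are hypothetical, not from the slides), a many-to-many relationship between students and courses can be broken into two one-to-many relationships through a junction table:

CREATE TABLE STUDENT_TBL (
    STUDENT_ID   INT PRIMARY KEY,
    STUDENT_NAME VARCHAR(50)
);

CREATE TABLE COURSE_TBL (
    COURSE_ID   INT PRIMARY KEY,
    COURSE_NAME VARCHAR(50)
);

-- Junction table: one student has many enrollments, and one course has many enrollments
CREATE TABLE ENROLLMENT_TBL (
    STUDENT_ID INT REFERENCES STUDENT_TBL (STUDENT_ID),
    COURSE_ID  INT REFERENCES COURSE_TBL (COURSE_ID),
    PRIMARY KEY (STUDENT_ID, COURSE_ID)
);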

Page 10: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Normalization theory

Normalization is the process of reducing redundancy: a set of guidelines used to design a database so that redundant data is minimized. The actual guidelines of normalization are called normal forms.

Normalization is an important process that, if understood, will increase your understanding of SQL. At this point, don't be overly concerned with all the specifics of normalization; it is most important to understand the basic concepts.

Page 11: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Normal Forms

The normal forms are an integral concept in the process of database normalization.

A normal form is a way of measuring the level, or depth, to which a database has been normalized; a database's level of normalization is determined by the normal form it satisfies.

The following are the three most common normal forms in the normalization process:

1. First normal form
2. Second normal form
3. Third normal form

Of the three normal forms, each subsequent normal form depends on normalization steps taken in the previous normal form. For example, to normalize a database using the second normal form, the database must first be in the first normal form.

Page 12: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
First Normal Form

The objective of the first normal form is to divide the base data into logical units called tables. When each table has been designed, a primary key is assigned to most or all tables. The figure below illustrates how the raw database has been redeveloped using the first normal form.

Page 13: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
First Normal Form

You can see that to achieve the first normal form, the data had to be broken into logical units of related information, each having a primary key and containing no repeating groups. Instead of one large table, there are now smaller, more manageable tables: EMPLOYEE_TBL, CUSTOMER_TBL, and PRODUCTS_TBL. The primary keys are normally the first columns listed in a table; in this case they are EMP_ID, CUST_ID, and PROD_ID.

Page 14: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Second Normal Form

The objective of the second normal form is to take data that is only partly dependent on the primary key and enter that data into another table. The figure below illustrates the second normal form.

Page 15: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Second Normal Form

According to the figure, the second normal form is derived from the first normal form by further breaking two tables down into more specific units.

EMPLOYEE_TBL was split into two tables called EMPLOYEE_TBL and EMPLOYEE_PAY_TBL. Personal employee information is dependent on the primary key (EMP_ID), so that information remained in EMPLOYEE_TBL (EMP_ID, LAST_NAME, FIRST_NAME, MIDDLE_NAME, ADDRESS, CITY, STATE, ZIP, PHONE, and PAGER). On the other hand, the information that is only partly dependent on the EMP_ID (each individual employee) was used to populate EMPLOYEE_PAY_TBL (EMP_ID, POSITION, POSITION_DESC, DATE_HIRE, PAY_RATE, and DATE_LAST_RAISE). Notice that both tables contain the column EMP_ID.

This is the primary key of each table and is used to match corresponding data between the two tables. CUSTOMER_TBL was split into two tables called CUSTOMER_TBL and ORDERS_TBL. What took place is similar to what occurred with EMPLOYEE_TBL.

Columns that were only partly dependent on the primary key were moved to another table. The order information for a customer depends on each CUST_ID, but does not directly depend on the general customer information in the original table.
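A hedged sketch of the employee split described above, using the column names from the slides; the data types and the foreign-key clause are assumptions added for illustration:

-- General employee information, fully dependent on EMP_ID
CREATE TABLE EMPLOYEE_TBL (
    EMP_ID      INT PRIMARY KEY,
    LAST_NAME   VARCHAR(50),
    FIRST_NAME  VARCHAR(50),
    MIDDLE_NAME VARCHAR(50),
    ADDRESS     VARCHAR(100),
    CITY        VARCHAR(50),
    STATE       CHAR(2),
    ZIP         VARCHAR(10),
    PHONE       VARCHAR(15),
    PAGER       VARCHAR(15)
);

-- Pay information, kept in its own table and matched back through EMP_ID
CREATE TABLE EMPLOYEE_PAY_TBL (
    EMP_ID          INT PRIMARY KEY REFERENCES EMPLOYEE_TBL (EMP_ID),
    POSITION        VARCHAR(50),
    POSITION_DESC   VARCHAR(100),
    DATE_HIRE       DATE,
    PAY_RATE        DECIMAL(8,2),
    DATE_LAST_RAISE DATE
);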

Page 16: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Third Normal Form

The third normal form's objective is to remove data in a table that is not dependent on the primary key. The figure below illustrates the third normal form.

Page 17: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Third Normal Form

Another table was created to illustrate the use of the third normal form. EMPLOYEE_PAY_TBL is split into two tables: one containing the actual employee pay information and the other containing the position descriptions, which really do not need to reside in EMPLOYEE_PAY_TBL. The POSITION_DESC column is totally independent of the primary key, EMP_ID.

Page 18: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Benefits of Normalization

Normalization provides numerous benefits to a database. Some of the major benefits include the following:
1. Greater overall database organization
2. Reduction of redundant data
3. Data consistency within the database
4. A much more flexible database design
5. A better handle on database security

Organization is brought about by the normalization process, making everyone's job easier, from the user who accesses tables to the database administrator (DBA) who is responsible for the overall management of every object in the database.

Data redundancy is reduced, which simplifies data structures and conserves disk space. Because duplicate data is minimized, the possibility of inconsistent data is greatly reduced. And because the database has been normalized and broken into smaller tables, you are given more flexibility when modifying existing structures.

It is much easier to modify a small table with little data than to modify one big table that holds all the vital data in the database. Data integrity is the assurance of consistent and accurate data within a database.

Page 19: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Drawbacks of Normalization

Although most successful databases are normalized to some degree, there is one substantial drawback of a normalized database: reduced database performance.

The acceptance of reduced performance requires the knowledge that when a query or transaction request is sent to the database, there are factors involved, such as CPU usage, memory usage, and input/output (I/O).

To make a long story short, a normalized database requires much more CPU, memory, and I/O to process transactions and database queries than does a de-normalized database. A normalized database must locate the requested tables and then join the data from the tables to either get the requested information or to process the desired data.

Page 20: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Referential Integrity

Referential integrity simply means that the values of one column in a table depend on the values of a column in another table. For instance, in order for a customer to have a record in the ORDERS_TBL table, there must first be a record for that customer in the CUSTOMER_TBL table. Integrity constraints can also control values by restricting a range of values for a column. The integrity constraint should be created at the table's creation. Referential integrity is typically controlled through the use of primary and foreign keys.

In a table, a foreign key, normally a single field, directly references a primary key in another table to enforce referential integrity. In the preceding paragraph, the CUST_ID in ORDERS_TBL is a foreign key that references CUST_ID in CUSTOMER_TBL.
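A minimal sketch of that constraint in SQL; the data types and the ORD_DATE column are assumed for illustration:

CREATE TABLE ORDERS_TBL (
    ORD_ID   INT PRIMARY KEY,
    CUST_ID  INT NOT NULL,
    ORD_DATE DATE,
    -- every CUST_ID stored here must already exist in CUSTOMER_TBL
    CONSTRAINT FK_ORDERS_CUSTOMER
        FOREIGN KEY (CUST_ID) REFERENCES CUSTOMER_TBL (CUST_ID)
);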

Page 21: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Database engine structure and architecture concept

Database Engine Component

DBMS engine accepts logical requests from various other DBMS subsystems, converts them into physical equivalents, and actually accesses the database and data dictionary as they exist on a storage device.

Data definition subsystem helps the user create and maintain the data dictionary and define the structure of the files in a database.

Data manipulation subsystem helps the user to add, change, and delete information in a database and query it for valuable information. Software tools within the data manipulation subsystem are most often the primary interface between user and the information contained in a database. It allows the user to specify its logical information requirements.

Application generation subsystem contains facilities to help users develop transaction-intensive applications. It usually requires that the user perform a detailed series of tasks to process a transaction. It facilitates easy-to-use data entry screens, programming languages, and interfaces.

Data administration subsystem helps users manage the overall database environment by providing facilities for backup and recovery, security management, query optimization, concurrency control, and change management.

Page 22: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Database engine structure and architecture concept

Database Engine structure and architecture
1. SQL Server architecture

Page 23: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Database engine structure and architecture concept

Database Engine structure and architecture
2. Oracle architecture

Page 24: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Database engine structure and architecture concept

Database Engine structure and architecture
3. MySQL architecture

Page 25: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Database engine structure and architecture concept

MySQL storage engines
MySQL's unique Pluggable Storage Engine Architecture gives users the flexibility to choose from a portfolio of purpose-built storage engines that are optimized for specific application domains: OLTP, read-intensive web scale-out, high-availability clustering, data archiving, data warehousing, and more.

In addition, the Pluggable Storage Engine Architecture provides a standard set of server, driver, tool, management, and support services that are leveraged across all the underlying storage engines.

MySQL Native Storage Engines
MySQL currently offers a number of its own native storage engines, including:
1. InnoDB
2. MyISAM
3. Example
4. Federated
5. Archive
6. Merge
7. Memory
8. CSV
9. Blackhole
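Because the engine is pluggable, it can be selected per table at creation time. A hedged MySQL example (the table and column names are illustrative):

-- Transaction-safe table on the default InnoDB engine
CREATE TABLE ACCOUNTS (
    ACCT_ID INT PRIMARY KEY,
    BALANCE DECIMAL(12,2)
) ENGINE = InnoDB;

-- Archive engine for seldom-read historical rows
CREATE TABLE AUDIT_HISTORY (
    EVENT_TIME DATETIME,
    EVENT_TEXT TEXT
) ENGINE = ARCHIVE;

-- List the storage engines available in this server
SHOW ENGINES;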

Page 26: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Database engine structure and architecture concept

MySQL storage engines: Partner Storage Engines (third party)
Calpont's InfiniDB storage engine is purpose-built for analytical data, data warehouses and marts, and any read-intensive application environment. The column-oriented and modular scale up/out architecture ensures the fastest possible response times no matter the data volume.

Infobright's BrightHouse engine : InfoBright's storage engine delivers high performance on complex queries across terabytes of data for analytic data warehouse solutions. InfoBright's features include a column-oriented architecture, high-speed data loading, advanced optimizer, and 10:1 data compression.

Kickfire analytics appliances bring together hardware and software innovations that deliver high-performance data warehousing capabilities. By combining the ease of use of MySQL with patented hardware and state-of-the-art software technology, Kickfire is able to deliver high-end performance via an appliance-based approach.

Page 27: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Database engine structure and architecture concept

MySQL Native Storage Engines
1. InnoDB : A transaction-safe (ACID compliant) storage engine for MySQL that has commit, rollback, and crash-recovery capabilities to protect user data. InnoDB's row-level locking (without escalation to coarser-granularity locks) and Oracle-style consistent non-locking reads increase multi-user concurrency and performance. InnoDB stores user data in clustered indexes to reduce I/O for common queries based on primary keys. To maintain data integrity, InnoDB also supports FOREIGN KEY referential-integrity constraints. InnoDB is the default storage engine.

2. MyISAM : These tables have a small footprint. Table-level locking limits the performance in read/write workloads, so it is often used in read-only or read-mostly workloads in Web and data warehousing configurations.

3. Federated : Offers the ability to link separate MySQL servers to create one logical database from many physical servers. Very good for distributed or data mart environments.

Page 28: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Database engine structure and architecture concept

MySQL Native Storage Engines
4. Archive : These compact, un-indexed tables are intended for storing and retrieving large amounts of seldom-referenced historical, archived, or security audit information.

5. Example : This engine serves as an example in the MySQL source code that illustrates how to begin writing new storage engines. It is primarily of interest to developers. The storage engine is a "stub" that does nothing: you can create tables with this engine, but no data can be stored in them or retrieved from them.

6. Merge : Enables a MySQL DBA or developer to logically group a series of identical MyISAM tables and reference them as one object. Good for VLDB environments such as data warehousing.

7. Memory : Stores all data in RAM, for fast access in environments that require quick lookups of non-critical data. This engine was formerly known as the HEAP engine. Its use cases are decreasing; InnoDB with its buffer pool memory area provides a general-purpose and durable way to keep most or all data in memory, and NDB-CLUSTER provides fast key-value lookups for huge distributed data sets.

Page 29: Introduction to database System Sajid Banday

Basic Database Concept - Part 1
Database engine structure and architecture concept

MySQL Native Storage Engines
8. CSV : Its tables are really text files with comma-separated values. CSV tables let you import or dump data in CSV format, to exchange data with scripts and applications that read and write that same format. Because CSV tables are not indexed, you typically keep the data in InnoDB tables during normal operation, and only use CSV tables during the import or export stage.

9. Blackhole : The Blackhole storage engine accepts but does not store data, similar to the Unix /dev/null device. Queries always return an empty set. These tables can be used in replication configurations where DML statements are sent to slave servers, but the master server does not keep its own copy of the data.

Page 30: Introduction to database System Sajid Banday

Database Design Concepts - Part 2
Design Data Model

Data modeling is a process used to define and analyze the data requirements needed to support the business processes within the scope of corresponding information systems in organizations. The process of data modeling therefore involves professional data modelers working closely with business stakeholders, as well as potential users of the information system.

Data modeling defines not just data elements, but also their structures and the relationships between them. Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner in order to manage it as a resource. The use of data modeling standards is strongly recommended for all projects requiring a standard means of defining and analyzing data within an organization.

There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system:
1. Conceptual data model
2. Logical data model
3. Physical data model

Page 31: Introduction to database System Sajid Banday

Database Design Concepts - Part 2
Design Data Model

Conceptual data model: The data requirements are initially recorded as a conceptual data model, which is essentially a set of technology-independent specifications about the data and is used to discuss initial requirements with the business stakeholders.

Logical data model: The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models.

Physical data model : The last step in data modeling is transforming the logical data model to a physical data model that organizes the data into tables, and accounts for access, performance and storage details.

Page 32: Introduction to database System Sajid Banday

Database Design Concepts - Part 2
Design Entity Relationship Model

There are several notations for data modeling. The actual model is frequently called an "entity relationship model", because it depicts data in terms of the entities and relationships described in the data.

An entity-relationship model (ERM) is an abstract conceptual representation of structured data. Entity-relationship modeling is a relational schema database modeling method, used in software engineering to produce a type of conceptual data model (or semantic data model) of a system, often a relational database, and its requirements in a top-down fashion.

These models are being used in the first stage of information system design during the requirements analysis to describe information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classifications of used terms and their relationships) for a certain universe of discourse i.e. area of interest.

Page 33: Introduction to database System Sajid Banday

Database Design Concepts - Part 2
Design Entity Relationship Model

Page 34: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Data Definition Language (DDL)

The Data Definition Language (DDL) is used to create and destroy databases and database objects. These commands are primarily used by database administrators during the setup and removal phases of a database project. Let's take a look at the structure and usage of the basic DDL commands:

1. CREATE : Create a database object (table, view, stored procedure, trigger, function, index, constraint, etc.)

2. ALTER : Modify a database object (table, view, stored procedure, trigger, function, index, constraint, etc.)

3. DROP : Delete a database object (table, view, stored procedure, trigger, function, index, constraint, etc.)

4. TRUNCATE : Deletes all data from a table in a very fast way, removing the data inside the table but not the table itself. It usually implies a subsequent COMMIT operation, i.e., it cannot be rolled back (the data is not written to the logs for later rollback, unlike DELETE).
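A short sketch of these commands in action; the object and column names are illustrative:

CREATE TABLE CUSTOMER_TBL (
    CUST_ID   INT PRIMARY KEY,
    CUST_NAME VARCHAR(50)
);

ALTER TABLE CUSTOMER_TBL ADD CUST_PHONE VARCHAR(15);   -- modify an existing object

TRUNCATE TABLE CUSTOMER_TBL;                           -- remove all rows, keep the table

DROP TABLE CUSTOMER_TBL;                               -- remove the table itself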

Page 35: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Data Manipulation Language (DML)

The Data Manipulation Language (DML) is used to retrieve, insert, and modify database information. These commands are used by all database users during the routine operation of the database. Let's take a brief look at the basic DML commands:

1. INSERT - Insert data into database tables
2. UPDATE - Update data in database tables
3. DELETE - Delete data from database tables
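A minimal sketch of the three commands against a hypothetical CUSTOMER_TBL with CUST_ID, CUST_NAME, and CITY columns:

INSERT INTO CUSTOMER_TBL (CUST_ID, CUST_NAME, CITY)
VALUES (101, 'Acme Traders', 'Srinagar');

UPDATE CUSTOMER_TBL
SET    CITY = 'Jammu'
WHERE  CUST_ID = 101;

DELETE FROM CUSTOMER_TBL
WHERE  CUST_ID = 101;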

Page 36: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Data Control Language (DCL)

The Data Control Language (DCL) is the subset of the Structured Query Language (SQL) that allows database administrators to configure security access to relational databases. It complements the Data Definition Language (DDL), which is used to add and delete database objects, and the Data Manipulation Language (DML), which is used to retrieve, insert and modify the contents of a database.

DCL consists of three commands: GRANT, REVOKE, and DENY. Combined, these three commands provide administrators with the flexibility to set and remove database permissions in an extremely granular fashion.

1. GRANT : The GRANT command is used by administrators to add new permissions to a database user.

2. REVOKE : The REVOKE command is used to remove database access from a user previously granted such access.

3. DENY : The DENY command may be used to explicitly prevent a user from receiving a particular permission. This is helpful when a user may be a member of a role or group that is granted a permission and you want to prevent that user from inheriting the permission by creating an exception.
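A brief sketch of the three commands; note that DENY is a Transact-SQL (SQL Server) extension rather than standard SQL, and the user and table names are illustrative:

GRANT SELECT, INSERT ON CUSTOMER_TBL TO report_user;   -- add permissions

REVOKE INSERT ON CUSTOMER_TBL FROM report_user;        -- take a granted permission back

DENY DELETE ON CUSTOMER_TBL TO report_user;            -- explicitly block a permission (T-SQL)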

Page 37: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Data Query Language (DQL)

The most common operation in SQL is the query, which is performed with the declarative SELECT statement. SELECT retrieves data from one or more tables, or expressions. Standard SELECT statements have no persistent effects on the database. Some non-standard implementations of SELECT can have persistent effects, such as the SELECT INTO syntax that exists in some databases.

1. Select : A query includes a list of columns to be included in the final result immediately following the SELECT keyword. An asterisk ("*") can also be used to specify that the query should return all columns of the queried tables. SELECT is the most complex statement in SQL, with optional keywords and clauses that include:

2. From : The FROM clause which indicates the table(s) from which data is to be retrieved. The FROM clause can include optional JOIN sub clauses to specify the rules for joining tables.

3. Where : The WHERE clause includes a comparison predicate, which restricts the rows returned by the query. The WHERE clause eliminates all rows from the result set for which the comparison predicate does not evaluate to True.
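A minimal sketch combining these clauses; the table and column names are illustrative:

SELECT CUST_ID, CUST_NAME          -- column list
FROM   CUSTOMER_TBL                -- source table
WHERE  CITY = 'Srinagar';          -- keep only rows for which the predicate is true

-- Or return every column of the queried table
SELECT * FROM CUSTOMER_TBL;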

Page 38: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Data Query Language (DQL)

4. Group By : The GROUP BY clause is used to project rows having common values into a smaller set of rows. GROUP BY is often used in conjunction with SQL aggregation functions or to eliminate duplicate rows from a result set. The WHERE clause is applied before the GROUP BY clause.

5. Having : The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate.

6. Order By : The ORDER BY clause identifies which columns are used to sort the resulting data, and in which direction they should be sorted (options are ascending or descending). Without an ORDER BY clause, the order of rows returned by an SQL query is undefined.
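A sketch that uses these three clauses together; the ORDERS_TBL columns (ORD_DATE, QTY) are assumed for illustration:

SELECT   CUST_ID, SUM(QTY) AS TOTAL_QTY
FROM     ORDERS_TBL
WHERE    ORD_DATE >= '2016-01-01'     -- applied before grouping
GROUP BY CUST_ID                      -- one result row per customer
HAVING   SUM(QTY) > 10                -- filters the grouped rows
ORDER BY TOTAL_QTY DESC;              -- sorts the final result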

Page 39: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Group functions

Group functions are built-in SQL functions that operate on groups of rows and return one value for the entire group. These functions are:

1. COUNT : This function returns the number of rows in the table that satisfies the condition specified in the WHERE condition. If the WHERE condition is not specified, then the query returns the total number of rows in the table.

2. MAX : This function is used to get the maximum value from a column.

3. MIN : This function is used to get the minimum value from a column.

4. AVG : This function is used to get the average value of a numeric column.

5. SUM : This function is used to get the sum of a numeric column.

6. DISTINCT : This function is used to select the distinct rows.
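A hedged example applying these functions to the EMPLOYEE_PAY_TBL columns mentioned earlier:

SELECT COUNT(*)      AS EMPLOYEE_COUNT,
       MAX(PAY_RATE) AS HIGHEST_RATE,
       MIN(PAY_RATE) AS LOWEST_RATE,
       AVG(PAY_RATE) AS AVERAGE_RATE,
       SUM(PAY_RATE) AS TOTAL_RATE
FROM   EMPLOYEE_PAY_TBL;

-- DISTINCT removes duplicate values before counting
SELECT COUNT(DISTINCT POSITION) AS DISTINCT_POSITIONS
FROM   EMPLOYEE_PAY_TBL;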

Page 40: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
SQL Joins

The SQL JOIN clause is used whenever we have to select data from two or more tables. To use a SQL JOIN clause to extract data from two (or more) tables, we need a relationship between certain columns in these tables. The SQL joins are:

1. INNER JOIN - only rows satisfying the selection criteria from both joined tables are selected.

2. LEFT OUTER JOIN - rows satisfying the selection criteria from both joined tables are selected, and all remaining rows from the left table are kept as well, with NULLs in place of the right table's values.

3. RIGHT OUTER JOIN - rows satisfying the selection criteria from both joined tables are selected, and all remaining rows from the right table are kept as well, with NULLs in place of the left table's values.

4. FULL OUTER JOIN - rows satisfying the selection criteria from both joined tables are selected, and all remaining rows from both the left and the right table are kept as well, with NULLs in place of the values from the other table.

5. SELF JOIN - a table is joined to itself.
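Hedged examples of a few of these joins; the tables follow the naming used earlier, and the MANAGER_ID column in the self-join is hypothetical:

-- INNER JOIN: only customers that have at least one order
SELECT c.CUST_NAME, o.ORD_ID
FROM   CUSTOMER_TBL c
INNER JOIN ORDERS_TBL o ON o.CUST_ID = c.CUST_ID;

-- LEFT OUTER JOIN: all customers; ORD_ID is NULL for those with no orders
SELECT c.CUST_NAME, o.ORD_ID
FROM   CUSTOMER_TBL c
LEFT OUTER JOIN ORDERS_TBL o ON o.CUST_ID = c.CUST_ID;

-- SELF JOIN: a table joined to itself, e.g. employees and their managers
SELECT e.LAST_NAME AS EMPLOYEE, m.LAST_NAME AS MANAGER
FROM   EMPLOYEE_TBL e
JOIN   EMPLOYEE_TBL m ON e.MANAGER_ID = m.EMP_ID;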

Page 41: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)

SQL is designed for a specific purpose: to query data contained in a relational database. SQL is a set-based, declarative query language, not an imperative language such as C or BASIC. However, there are extensions to standard SQL which add procedural programming language functionality, such as control-of-flow constructs. These include:

Source               Common name   Full name
ANSI/ISO Standard    SQL/PSM       SQL/Persistent Stored Modules
Interbase/Firebird   PSQL          Procedural SQL
IBM                  SQL PL        SQL Procedural Language (implements SQL/PSM)
Microsoft/Sybase     T-SQL         Transact-SQL
Mimer SQL            SQL/PSM       SQL/Persistent Stored Modules (implements SQL/PSM)
MySQL                SQL/PSM       SQL/Persistent Stored Module (implements SQL/PSM)
Oracle               PL/SQL        Procedural Language/SQL (based on Ada)
PostgreSQL           PL/pgSQL      Procedural Language/PostgreSQL (based on Oracle PL/SQL)
PostgreSQL           PL/PSM        Procedural Language/Persistent Stored Modules (implements SQL/PSM)

Page 42: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)

In addition to the standard SQL/PSM extensions and proprietary SQL extensions, procedural and object-oriented programmability is available on many SQL platforms via DBMS integration with other languages.

1. SQL Server 2005 uses the SQLCLR (SQL Server Common Language Runtime) to host managed .NET assemblies in the database, while prior versions of SQL Server were restricted to using unmanaged extended stored procedures which were primarily written in C.

2. PostgreSQL allows functions to be written in a wide variety of languages, including Perl, Python, Tcl, and C.

3. In Oracle Database, you can now invoke public static methods of Java classes in the Oracle JVM directly from Java clients without defining PL/SQL call specifications and calling these through JDBC or SQLJ. Instead, you can use the JPublisher utility to generate a client-proxy class with the same signature as the server-side Java class. Once you have instantiated a client-proxy instance with a JDBC connection, you can call the proxy methods directly.

Page 43: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)

Stored procedures
A stored procedure is a subroutine available to applications that access a relational database system. A stored procedure (sometimes called a proc, sproc, StoPro, StoredProc, or SP) is actually stored in the database data dictionary.

Typical uses for stored procedures include data validation (integrated into the database) or access control mechanisms. Furthermore, stored procedures can consolidate and centralize logic that was originally implemented in applications. Extensive or complex processing that requires execution of several SQL statements is moved into stored procedures, and all applications call the procedures. One can use nested stored procedures, by executing one stored procedure from within another.
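The exact syntax differs by platform; a minimal Transact-SQL style sketch, with illustrative procedure, parameter, and table names:

CREATE PROCEDURE usp_GetCustomerOrders
    @CustId INT                        -- input parameter
AS
BEGIN
    SELECT ORD_ID, ORD_DATE
    FROM   ORDERS_TBL
    WHERE  CUST_ID = @CustId;
END;

-- Called from an application or from another procedure:
EXEC usp_GetCustomerOrders @CustId = 101;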

Page 44: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)

User-defined functions
In SQL databases, a user-defined function provides a mechanism for extending the functionality of the database server by adding a function that can be evaluated in SQL statements.

The SQL standard distinguishes between scalar and table functions. A scalar function returns only a single value (or NULL), whereas a table function returns a (relational) table comprising zero or more rows, each row with one or more columns.

Once created, a user-defined function may be used in expressions in SQL statements. For example, it can be invoked where most other intrinsic functions are allowed. This also includes SELECT statements, where the function can be used against data stored in tables in the database. Conceptually, the function is evaluated once per row in such usage.
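A hedged Transact-SQL style sketch of a scalar user-defined function; the names and the 40-hour-week assumption are illustrative:

CREATE FUNCTION dbo.fn_AnnualPay (@PayRate DECIMAL(8,2))
RETURNS DECIMAL(10,2)
AS
BEGIN
    -- scalar function: returns a single value per invocation
    RETURN @PayRate * 40 * 52;         -- assumes a 40-hour week, 52 weeks
END;

-- Used inside a SELECT, evaluated once per row
SELECT EMP_ID, dbo.fn_AnnualPay(PAY_RATE) AS ANNUAL_PAY
FROM   EMPLOYEE_PAY_TBL;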

Page 45: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)

Trigger
A database trigger is procedural code that is automatically executed in response to certain events on a particular table or view in a database. Triggers are mostly used to maintain the integrity of the information in the database. Triggers are commonly used to:

1. Audit changes (e.g. keep a log of the users and roles involved in changes)

2. Enhance changes (e.g. ensure that every change to a record is time-stamped by the server's clock)

3. Enforce business rules (e.g. require that every invoice have at least one line item)

4. Execute business rules (e.g. notify a manager every time an employee's bank account number changes)

5. Replicate data (e.g. store a record of every change, to be shipped to another database later)

6. Enhance performance (e.g. update the account balance after every detail transaction, for faster queries)
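A hedged Transact-SQL style sketch of the first use listed above (auditing changes); PAY_AUDIT_TBL and its columns are hypothetical:

CREATE TRIGGER trg_EmployeePay_Audit
ON EMPLOYEE_PAY_TBL
AFTER UPDATE
AS
BEGIN
    -- record which rows were changed, by whom, and when
    INSERT INTO PAY_AUDIT_TBL (EMP_ID, CHANGED_BY, CHANGED_AT)
    SELECT EMP_ID, SUSER_SNAME(), GETDATE()
    FROM   inserted;                   -- rows as they look after the update
END;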

Page 46: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)

Trigger
The examples above are called Data Manipulation Language (DML) triggers because the triggers are defined as part of the Data Manipulation Language and are executed at the time the data is manipulated.

Some systems also support non-data triggers, which fire in response to Data Definition Language (DDL) events such as creating tables, or to runtime events such as logon, commit, and rollback. Such DDL triggers can be used for auditing purposes. The following are major features of database triggers and their effects:

1. triggers do not accept parameters or arguments (but may store affected-data in temporary tables)

2. triggers cannot perform commit or rollback operations because they are part of the triggering SQL statement (only through autonomous transactions)

3. triggers are normally slow (they slow down the process)

Page 47: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)

Trigger
Triggers in Oracle: The four main types of triggers are:

1. Row Level Trigger: This gets executed before or after any column value of a row changes.
2. Column Level Trigger: This gets executed before or after the specified column changes.
3. For Each Row Type: This trigger gets executed once for each row of the result set caused by an insert/update/delete.
4. For Each Statement Type: This trigger gets executed only once for the entire result set, but fires each time the statement is executed.

Page 48: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)

Trigger
Triggers in MS SQL Server: The three main types of triggers are:

1. After Triggers : An after trigger fires after an insert, update, or delete operation is performed on a table.

2. Instead OF Triggers : Instead of triggers are executed in place of the usual triggering action. Instead of triggers can also be defined on views with one or more base tables, where they can extend the types of updates a view can support.

3. CLR Triggers : A CLR Trigger can be either an AFTER or INSTEAD OF trigger. A CLR trigger can also be a DDL trigger. Instead of executing a Transact-SQL stored procedure, a CLR trigger executes one or more methods written in managed code that are members of an assembly created in the .NET Framework and uploaded in SQL Server.

Page 49: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)

Trigger
Triggers in MySQL: The three main types of triggers are:

1. Insert Trigger
2. Update Trigger
3. Delete Trigger

MySQL allows only one trigger of each type on each table (i.e. one before insert, one after insert, one before update, one after update, one before delete, and one after delete). MySQL also does not fire triggers for changes made outside of SQL statements (e.g. through APIs or foreign key cascades).

Page 50: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)

Views
In database theory, a view consists of a stored query accessible as a virtual table in a relational database. Unlike ordinary tables (base tables) in a relational database, a view does not form part of the physical schema: it is a dynamic, virtual table computed or collated from data in the database. Changing the data in a table alters the data shown in subsequent invocations of the view.

Views can provide advantages over tables:
1. Views can represent a subset of the data contained in a table
2. Views can join and simplify multiple tables into a single virtual table
3. Views can act as aggregated tables, where the database engine aggregates data (sum, average, etc.) and presents the calculated results as part of the data
4. Views can hide the complexity of data; for example, a view could appear as Sales2000 or Sales2001, transparently partitioning the actual underlying table
5. Views take very little space to store; the database contains only the definition of a view, not a copy of all the data it presents
6. Depending on the SQL engine used, views can provide extra security
7. Views can limit the degree of exposure of a table or tables to the outer world
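A short sketch of a view that joins and simplifies two tables; the table and column names are illustrative:

CREATE VIEW CUSTOMER_ORDERS_VW AS
SELECT c.CUST_ID, c.CUST_NAME, o.ORD_ID, o.ORD_DATE
FROM   CUSTOMER_TBL c
JOIN   ORDERS_TBL o ON o.CUST_ID = c.CUST_ID;

-- Queried like an ordinary table; the underlying query runs on each invocation
SELECT * FROM CUSTOMER_ORDERS_VW WHERE CUST_ID = 101;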

Page 51: Introduction to database System Sajid Banday

Querying and Manipulating Data - Part 3
Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM)

Synonyms
A synonym is an alias or alternate name for a table, view, sequence, or other schema object. Synonyms are used mainly to make it easy for users to access database objects owned by other users. They hide the underlying object's identity and make it harder for a malicious program or user to target the underlying object. Because a synonym is just an alternate name for an object, it requires no storage other than its definition.

Synonyms are very powerful from the point of view of allowing users access to objects that do not lie within their schema. All synonyms have to be created explicitly with the CREATE SYNONYM command and the underlying objects can be located in the same database or in other databases that are connected by database links.

There are two major uses of synonyms:

1. Object transparency: Synonyms can be created to keep the original object transparent to the user.
2. Location transparency: Synonyms can be created as aliases for tables and other objects that belong to a database other than the local database.
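A hedged Oracle-style sketch of both uses; the schema name and database link are hypothetical:

-- Object transparency: hide the owning schema from users
CREATE SYNONYM CUSTOMERS FOR sales_owner.CUSTOMER_TBL;

-- Location transparency: alias a table reached over a database link
CREATE SYNONYM REMOTE_ORDERS FOR ORDERS_TBL@reporting_db_link;

SELECT * FROM CUSTOMERS;   -- resolves to sales_owner.CUSTOMER_TBL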

Page 52: Introduction to database System Sajid Banday

Thanks