Pointer Swizzling, ODMG, Acyclic Graph


Upload: samsher-singh

Post on 07-Apr-2018


Page 1: Pointers Wiz Ing,ODMG,acyclic graph

8/6/2019 Pointers Wiz Ing,ODMG,acyclic graph

http://slidepdf.com/reader/full/pointers-wiz-ingodmgacyclic-graph 1/14

Stub, Skeleton and Marshal

RMI is the object equivalent of RPC. RPC calls procedures over a network, whereas RMI invokes object methods over a network. The server defines objects that clients can use remotely, and clients can then invoke methods of a remote object as if it were a local object running in the same virtual machine as the client. RMI has the following characteristics:

• It hides the underlying mechanism of transporting method arguments and return values across the network.

• It uses a network-based registry program to keep track of distributed objects: a simple name repository (registry) that acts as a central management point. The server makes methods available for remote invocation by binding an object to a name in the registry.

The underlying mechanism follows these steps:

• The client invokes a local method called the client stub.

• The client stub parcels up the parameters to the remote procedure, converts them to a standard format, and builds network messages (marshaling). RMI uses object serialization to marshal and de-marshal messages.

• The network messages are sent to the remote server by the client stub through a previously agreed-upon protocol.

• On the remote server, the server skeleton de-marshals the parameters from the network messages and converts them.

• The skeleton invokes a local method to call the actual server function, passing it the parameters it received in the network messages from the client stub.

• When the server procedure completes, it returns to the skeleton with the return values.

• The skeleton converts the return values if necessary and marshals them into network messages to send back to the client.

• The messages are transferred across the network to the client stub.

• The client stub de-marshals the return values and delivers them to the client.
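The stub/skeleton round trip above can be simulated in a few lines. This is only a toy sketch, not the real Java RMI machinery: `HelloService`, `HelloStub` and the `transport` callable are invented names, and `pickle` stands in for Java object serialization, with a direct function call standing in for the network.

```python
import pickle

# A toy "remote object" living on the server side (invented example).
class HelloService:
    def greet(self, name):
        return "Hello, " + name

# Skeleton: de-marshals the request, invokes the actual method, marshals the reply.
def skeleton(service, request_bytes):
    method, args = pickle.loads(request_bytes)        # de-marshal parameters
    result = getattr(service, method)(*args)          # invoke the real server method
    return pickle.dumps(result)                       # marshal the return value

# Stub: marshals the call, "sends" it over the transport, de-marshals the reply.
class HelloStub:
    def __init__(self, transport):
        self.transport = transport                    # stands in for the network
    def greet(self, name):
        request = pickle.dumps(("greet", (name,)))    # marshal the parameters
        reply = self.transport(request)               # send / receive messages
        return pickle.loads(reply)                    # de-marshal the return value

server = HelloService()
stub = HelloStub(lambda req: skeleton(server, req))
print(stub.greet("world"))                            # -> Hello, world
```

The caller only touches the stub; the serialization format and the "transport" are hidden behind it, which is exactly the transparency the characteristics above describe.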

DML vs. DDL

DML changes data in an object. If you insert a row into a table, that is DML: you have manipulated the data. When you create, change or remove a database object, it is referred to as data definition language (DDL). As we will discuss at the end of this chapter, all DDL statements issue an implicit commit, so they are a permanent change. All DML statements change data and must be committed before the change becomes permanent.
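The distinction can be sketched with Python's built-in `sqlite3` module (the text describes Oracle, and SQLite's transaction rules differ, e.g. its DDL does not implicitly commit the way Oracle's does; the `author` table here is an invented example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: defines a database object (here, a table).
conn.execute("CREATE TABLE author (author_id INTEGER PRIMARY KEY, name TEXT)")

# DML: manipulates the data inside that object.
conn.execute("INSERT INTO author (author_id, name) VALUES (1, 'Smith')")
conn.execute("UPDATE author SET name = 'Jones' WHERE author_id = 1")
conn.commit()   # DML must be committed before the change becomes permanent

rows = conn.execute("SELECT name FROM author").fetchall()
print(rows)     # -> [('Jones',)]
```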

Managing Tables

The table is the basic building block of any database system. We discussed tables in Chapter 1 and talked about normalizing data to remove redundancy. In this section, we are going to discuss the different types of tables inside an Oracle database and how they are created and used. We need this information as we progress into manipulating the data in tables with the INSERT, UPDATE and DELETE statements. In computer parlance, updates are DML.

You create a table by defining the column names and their data types. Columns can be any of the data types discussed in Chapter 2, including user-defined data types. When we loaded the PUBS schema, we ran the pubs_db.sql script that contained the commands to create the tables. Let's look at the AUTHOR table.

Pointer swizzling

In computer science, pointer swizzling is the conversion of references based on name or position into direct pointer references. It is typically performed during the deserialization (loading) of a relocatable object from disk, such as an executable file or a pointer-based data structure. The reverse operation, replacing pointers with position-independent symbols or positions, is sometimes referred to as unswizzling, and is performed during serialization (saving).

Methods of unswizzling

There is a potentially unlimited number of forms into which a pointer can be unswizzled, but some of the most popular include:

• The offset of the pointed-to object in the file
• The index of the pointed-to object in some sequence of records
• A unique identifier possessed by the pointed-to object, such as a person's social security number; in databases, all pointers are unswizzled in this manner (see foreign key)
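Unswizzling by index (the second form above) can be sketched as follows; `Node`, its `next` field and `unswizzle` are invented names for this illustration:

```python
# Unswizzling: replace direct object references with position-independent
# indices so the structure can be written to disk.
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None     # a direct pointer reference

def unswizzle(nodes):
    # Map each object to its index in the sequence of records.
    index_of = {id(n): i for i, n in enumerate(nodes)}
    records = []
    for n in nodes:
        nxt = index_of[id(n.next)] if n.next is not None else None
        records.append((n.value, nxt))   # pointer replaced by an index
    return records

a, b = Node("a"), Node("b")
a.next = b
print(unswizzle([a, b]))   # -> [('a', 1), ('b', None)]
```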

Methods of swizzling

Swizzling in the general case can be complicated. The reference graph of pointers might contain an arbitrary number of cycles; this complicates maintaining a mapping from the old unswizzled values to the new addresses. Associative arrays are useful for maintaining the mapping, while algorithms such as breadth-first search help to traverse the graph, although both of these require extra storage. Various serialization libraries provide general swizzling systems. In many cases, however, swizzling can be performed with simplifying assumptions, such as a tree or list structure of references.

The different types of swizzling are:

• Automatic Swizzling

• On-Demand Swizzling
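The role of the associative array mentioned above can be sketched by swizzling index-based records back into pointers, including a cyclic reference graph. This is a minimal illustration with invented names (`Node`, `swizzle`), not a general serialization library:

```python
# Swizzling: rebuild direct pointers from unswizzled (index-based) records.
# A dictionary (associative array) keeps the index -> new-object mapping,
# which is what allows cycles in the reference graph to be handled.
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def swizzle(records):
    # Pass 1: allocate one object per record.
    mapping = {i: Node(value) for i, (value, _) in enumerate(records)}
    # Pass 2: replace stored indices with direct pointers.
    for i, (_, next_index) in enumerate(records):
        if next_index is not None:
            mapping[i].next = mapping[next_index]
    return [mapping[i] for i in range(len(records))]

# Two records forming a cycle: 0 -> 1 -> 0.
nodes = swizzle([("a", 1), ("b", 0)])
print(nodes[0].next.value, nodes[1].next.value)   # -> b a
```

Because every object is allocated before any link is filled in, the cycle is restored correctly: `nodes[0].next.next is nodes[0]`.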

Object Data Management Group

The Object Data Management Group (ODMG) was conceived in the summer of 1991 at a breakfast with object database vendors that was organized by Rick Cattell of Sun Microsystems. In 1998, the ODMG changed its name from the Object Database Management Group to reflect the expansion of its efforts to include specifications for both object database and object-relational mapping products. The primary goal of the ODMG was to put forward a set of specifications that allowed a developer to write portable applications for object database and object-relational mapping products. In order to do that, the data schema, programming language bindings, and data manipulation and query languages needed to be portable. Between 1993 and 2001, the ODMG published five revisions to its specification. The last revision was ODMG version 3.0, after which the group disbanded.

Major components of the ODMG 3.0 specification

• Object Model. This was based on the Object Management Group's Object Model. The OMG core model was designed to be a common denominator for object request brokers, object database systems, object programming languages, etc. The ODMG designed a profile by adding components to the OMG core object model.

• Object Specification Languages. The ODMG Object Definition Language (ODL) was used to define the object types that conform to the ODMG Object Model. The ODMG Object Interchange Format (OIF) was used to dump and load the current state to or from a file or set of files.

• Object Query Language (OQL). The ODMG OQL was a declarative (nonprocedural) language for querying and updating. It used SQL as a basis, where possible, though OQL supports more powerful object-oriented capabilities.


• C++ Language Binding. This defined a C++ binding of the ODMG ODL and a C++

Object Manipulation Language (OML). The C++ ODL was expressed as a library that

 provides classes and functions to implement the concepts defined in the ODMG

Object Model. The C++ OML syntax and semantics are those of standard C++ in the

context of the standard class library. The C++ binding also provided a mechanism to

invoke OQL.

• Smalltalk Language Binding. This defined the mapping between the ODMG ODL and

Smalltalk , which was based on the OMG Smalltalk binding for the OMG Interface

Definition Language (IDL). The Smalltalk binding also provided a mechanism to

invoke OQL.

•  Java Language Binding. This defined the binding between the ODMG ODL and the

Java programming language as defined by the Java 2 Platform. The Java binding also

 provided a mechanism to invoke OQL.

Directed acyclic graph

A simple directed acyclic graph

In computer science and mathematics, a directed acyclic graph, also called a DAG, is a

directed graph with no directed cycles; that is, for any vertex v, there is no nonempty directed

 path that starts and ends on v. DAGs appear in models where it doesn't make sense for a

vertex to have a path to itself; for example, if an edge u→v indicates that v is a part of u, such

a path would indicate that u is a part of itself, which is impossible. Informally speaking, a

DAG "flows" in a single direction.

Each directed acyclic graph gives rise to a partial order  ≤ on its vertices, where u ≤ v exactly

when there exists a directed path from u to v in the DAG. However, many different DAGs

may give rise to this same reachability relation. Among all such DAGs, the one with the

fewest edges is the transitive reduction of each of them and the one with the most is their transitive closure. In particular, the transitive closure is the reachability order ≤.

* A source is a vertex with no incoming edges, while a sink is a vertex with no outgoing edges. A finite DAG must have at least one source and at least one sink.

* The depth of a vertex in a finite DAG is the length of the longest path from a source to that vertex, while its height is the length of the longest path from that vertex to a sink.

* The length of a finite DAG is the length (number of edges) of a longest directed path. It is equal to the maximum height of all sources and equal to the maximum depth of all sinks.
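These definitions can be checked on a small example. The sketch below uses an adjacency-list dictionary (an invented four-vertex DAG) and computes sources, sinks, and the length of the DAG as the longest path from any vertex:

```python
# A small DAG as an adjacency list: vertex -> list of successors.
dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

# Sources have no incoming edges; sinks have no outgoing edges.
sources = [v for v in dag if all(v not in succs for succs in dag.values())]
sinks = [v for v in dag if not dag[v]]

memo = {}
def longest_path(v):
    # Length in edges of the longest directed path starting at v.
    # Safe to memoize because a DAG has no cycles.
    if v not in memo:
        memo[v] = max((1 + longest_path(u) for u in dag[v]), default=0)
    return memo[v]

length = max(longest_path(v) for v in dag)
print(sources, sinks, length)   # -> ['a'] ['d'] 2
```

Here the height of the single source "a" is 2, which equals the depth of the single sink "d", matching the property stated above.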

Properties


Every directed acyclic graph has a topological sort, an ordering of the vertices such that each

vertex comes before all vertices it has edges to. In general, this ordering is not unique. Any

two graphs representing the same partial order have the same set of topological sort orders.
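A topological sort can be computed with Kahn's algorithm, which repeatedly removes a vertex with no remaining incoming edges. This is a standard textbook sketch (the graph is an invented example):

```python
from collections import deque

def topological_sort(graph):
    # Kahn's algorithm: count incoming edges, then repeatedly emit
    # a vertex whose in-degree has dropped to zero.
    indegree = {v: 0 for v in graph}
    for succs in graph.values():
        for u in succs:
            indegree[u] += 1
    queue = deque(v for v, d in indegree.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for u in graph[v]:
            indegree[u] -= 1
            if indegree[u] == 0:
                queue.append(u)
    return order

print(topological_sort({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
# -> ['a', 'b', 'c', 'd']
```

Note the ordering is not unique in general: here "b" and "c" could legally appear in either order.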

DAGs can be considered to be a generalization of  trees in which certain subtrees can be

shared by different parts of the tree. In a tree with many identical subtrees, this can lead to a

drastic decrease in space requirements to store the structure. Conversely, a DAG can be

expanded to a forest of rooted trees using this simple algorithm:

• While there is a vertex v with in-degree n > 1:
  o Make n copies of v, each with the same outgoing edges but no incoming edges.
  o Attach one of the incoming edges of v to each copy.
  o Delete v.

If we explore the graph without modifying it or comparing nodes for equality, this forest will

appear identical to the original DAG.

Some algorithms become simpler when used on DAGs instead of general graphs. For example, search algorithms like depth-first search without iterative deepening normally must mark vertices they have already visited and not visit them again. If they fail to do this, they may never terminate because they follow a cycle of edges forever. Such cycles do not exist in DAGs (marking is still a good idea, as it reduces the worst-case performance from exponential, due to multiple paths, to linear).

Applications

Directed acyclic graphs have many important applications in computer science, including:

• Parse trees constructed by compilers
• Bayesian networks
• Reference graphs that can be garbage collected using simple reference counting
• Feedforward neural networks, and other feed-forward controller or classifier topologies
• Reference graphs of purely functional data structures (although some languages allow purely functional cyclic structures)
• Dependency graphs such as those used in instruction scheduling and makefiles
• Dependency graphs between classes formed by inheritance relationships in object-oriented programming languages
• Serializability theory of transaction processing systems
• Information categorization systems, such as file system directories
• Hierarchical scene graphs to optimise view frustum culling operations
• Forward-chained rules systems (including business rules engines) such as the Rete algorithm, used by the rule engine Drools
• Representing spacetime as a causal set in theoretical physics
• In bioinformatics, finding areas of synteny between two genomes, or representing evolution as phylogenetic networks
• Abstract process descriptions such as workflows and some models of provenance
• A pattern language as a DAG of patterns
• Dynamic programming
• Optical character recognition

Strategies for creating persistent objects

The data manipulated by an object-oriented database can be either transient or persistent. Transient data is only valid inside a program or transaction; it is lost once the program or transaction terminates. Persistent data is stored outside of a transaction context, and so survives transaction updates. Usually the term persistent data is used to indicate databases that are shared, accessed and updated across transactions. The two main strategies used to create and identify persistent objects are:

• Persistence extensions
• Persistence through reachability

Several object-oriented databases incorporate the notion of a class extension to make the instances of a class persistent. With class extensions, the class also serves as a set of objects. Persistence and class extensions are orthogonal in the sense that one can have an object-oriented system that has class extensions but uses a different mechanism for persistence. However, the object-oriented database languages that support class extensions usually make the extensions persistent.

The persistent object space has a root called the database, and every object reachable from this database root is persistent. The authors define the persistent object space as follows:

1. The database is a persistent tuple object: database = [S1: {…}, S2: {…}, …, Sn: {…}]
2. If pT is a persistent tuple object, then pT.a (the attribute a of pT) is a persistent object.
3. If pS is a persistent set object, then every element e of pS is persistent.
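Persistence through reachability can be sketched as a graph traversal from the database root. The sketch below uses plain dicts and lists to stand in for tuple and set objects; `reachable`, `invoice` and the field names are all invented for the illustration:

```python
# Persistence by reachability: every object reachable from the database
# root is persistent; anything else is transient.
def reachable(obj, seen=None):
    if seen is None:
        seen = set()
    if id(obj) in seen:          # already visited (handles shared objects)
        return seen
    seen.add(id(obj))
    if isinstance(obj, dict):            # "tuple object": follow attributes
        for attr in obj.values():
            reachable(attr, seen)
    elif isinstance(obj, (list, set)):   # "set object": follow elements
        for e in obj:
            reachable(e, seen)
    return seen

invoice = {"total": 10}
database = {"S1": [invoice], "S2": []}   # the persistent root
transient = {"scratch": 1}               # not reachable from the root

persistent_ids = reachable(database)
print(id(invoice) in persistent_ids, id(transient) in persistent_ids)  # -> True False
```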

Deep Equal and Shallow Equal

Object identity is a fundamental object-orientation concept. With object identity, objects can contain or refer to other objects. Identity is a property of an object that distinguishes the object from all other objects in the application.

There are many techniques for identifying objects in programming languages, databases and operating systems. According to the authors, the most commonly used technique for identifying objects is user-defined names for objects. There are, of course, practical limitations to the use of variable names without the support of object identity.

Identifying an object with a unique key (also called an identifier key) is a method that is commonly used in database management systems. Using identifier keys for object identity confuses identity and data values. According to the authors, there are three main problems with this approach:

1. Modifying identifier keys. Identifier keys cannot be allowed to change, even though they are user-defined descriptive data.
2. Non-uniformity. The main source of non-uniformity is that identifier keys in different tables have different types or different combinations of attributes. A more serious problem is that the attribute(s) to use for an identifier key may need to change.
3. "Unnatural" joins. The use of identifier keys causes joins to be used in retrievals instead of simpler and more direct object retrievals.

There are many operations associated with identity. Since the state of an object is different from its identity, there are three types of equality predicates for comparing objects. The most obvious equality predicate is the identical predicate, which checks whether two objects are actually one and the same object. The two other equality predicates (shallow equal and deep equal) actually compare the states of objects. Shallow equal goes one level deep in comparing corresponding instance variables. Deep equal ignores identities and compares the values of corresponding base objects. As for copying objects, the counterparts of deep and shallow equality are deep and shallow copy.
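The three predicates can be sketched for a single-field wrapper object. `Ref` and the three functions are invented names; real OODBMS implementations generalize this over all instance variables and arbitrary nesting:

```python
class Ref:
    def __init__(self, target):
        self.target = target

def identical(x, y):
    # Identity: one and the same object?
    return x is y

def shallow_equal(x, y):
    # One level deep: corresponding instance variables must be identical objects.
    return all(getattr(x, a) is getattr(y, a) for a in vars(x))

def deep_equal(x, y):
    # Ignores identity entirely; compares the values of base objects.
    if isinstance(x, Ref) and isinstance(y, Ref):
        return deep_equal(x.target, y.target)
    return x == y

base = [1, 2]
p = Ref(base)
q = Ref(base)          # shares the very same base object as p
r = Ref([1, 2])        # equal value, but a different base object

print(identical(p, q), shallow_equal(p, q), deep_equal(p, r))  # -> False True True
print(shallow_equal(p, r))                                     # -> False
```

So p and q are shallow equal (they point at the same base object) but not identical, while p and r are only deep equal.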

Message dispatch in the object-oriented approach

In computer science, dynamic dispatch is the process of mapping a message to a specific sequence of code (a method) at runtime. This is done to support the cases where the appropriate method cannot be determined at compile time (i.e. statically). Dynamic dispatch is only used for code invocation and not for other binding processes (such as for global variables), and the name is normally only used to describe a language feature where a runtime decision is required to determine which code to invoke.

Dynamic dispatch is needed when multiple classes contain different implementations of the same method foo(). If the class of an object x is not known at compile time, then when x.foo() is called, the program must decide at runtime which implementation of foo() to invoke, based on the runtime type of object x. This case is known as single dispatch because an implementation is chosen based on a single type: that of the this or self object. Single dispatch is supported by many object-oriented languages, including statically typed languages such as C++ and Java, and dynamically typed languages such as Smalltalk and Objective-C. In a small number of languages, such as Common Lisp, methods or functions can also be dynamically dispatched based on the types of the arguments. Expressed in pseudocode, the call manager.handle(y) could invoke different implementations depending on the type of object y. This is known as multiple dispatch.
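Single dispatch on the receiver's runtime type looks like this in Python (the class names are invented; only the method name foo() comes from the discussion above):

```python
class Shape:
    def foo(self):
        return "shape"

class Circle(Shape):
    def foo(self):
        return "circle"

class Square(Shape):
    def foo(self):
        return "square"

# The static type of x is not known here; which foo() runs is decided
# at runtime from the actual class of each object (single dispatch).
results = [x.foo() for x in (Circle(), Square(), Shape())]
print(results)   # -> ['circle', 'square', 'shape']
```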

Dynamic dispatch mechanisms

A language may be implemented with different dynamic dispatch mechanisms. The choice of dynamic dispatch mechanism offered by a language to a large extent alters the programming paradigms that are available, or are most natural to use, within that language. Normally, in a typed language, the dispatch mechanism is performed based on the type of the arguments (most commonly based on the type of the receiver of a message). This might be dubbed 'per-type dynamic dispatch'. Languages with weak or no typing systems often carry a dispatch table as part of the object data for each object. This allows per-instance behaviour, as each instance may map a given message to a separate method.
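A per-object dispatch table can be sketched as a dictionary carried by each instance, so two instances of the same class can map the same message to different behaviour. `Actor`, `send` and the "greet" message are invented names for this illustration:

```python
class Actor:
    def __init__(self):
        self.table = {}                    # per-instance dispatch table

    def send(self, message, *args):
        # Runtime lookup: the message name is resolved against this
        # instance's own table, not against the class.
        return self.table[message](*args)

polite = Actor()
polite.table["greet"] = lambda name: "Good day, " + name

terse = Actor()
terse.table["greet"] = lambda name: "Hi " + name

print(polite.send("greet", "Ada"))   # -> Good day, Ada
print(terse.send("greet", "Ada"))    # -> Hi Ada
```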

OODBMS vs RDBMS

Advantages:

• Enriched Modeling Capabilities
• Extensibility
• Removal of Impedance Mismatch
• More Expressive Query Language
• Support for Schema Evolution
• Support for Long-Duration Transactions
• Applicability to Advanced Database Applications
• Improved Performance

Disadvantages:

• Lack of a Universal Data Model
• Lack of Experience
• Lack of Standards
• Query Optimization compromises Encapsulation
• Object-Level Locking may impact Performance
• Complexity
• Lack of Support for Views
• Lack of Support for Security

Table 1: A Comparison of Database Management Systems

Defining standard
  RDBMS: SQL2
  ODBMS: ODMG-2.0
  ORDBMS: SQL3 (in process)

Support for object-oriented features
  RDBMS: Does not support; it is difficult to map program objects to the database
  ODBMS: Supports extensively
  ORDBMS: Limited support; mostly for new data types

Usage
  RDBMS: Easy to use
  ODBMS: OK for programmers; some SQL access for end users
  ORDBMS: Easy to use except for some extensions

Support for complex relationships
  RDBMS: Does not support abstract data types
  ODBMS: Supports a wide variety of data types and data with complex inter-relationships
  ORDBMS: Supports abstract data types and complex relationships

Performance
  RDBMS: Very good performance
  ODBMS: Relatively lower performance
  ORDBMS: Expected to perform very well

Product maturity
  RDBMS: Relatively old and so very mature
  ODBMS: The concept is a few years old and so relatively mature
  ORDBMS: Still in the development stage, so immature

The use of SQL
  RDBMS: Extensively supports SQL
  ODBMS: OQL is similar to SQL, but with additional features like complex objects and object-oriented features
  ORDBMS: SQL3 is being developed with OO features incorporated in it

Advantages
  RDBMS: Its dependence on SQL and relatively simple query optimization, hence good performance
  ODBMS: It can handle all types of complex applications; reusability of code; less coding
  ORDBMS: Ability to query complex applications and ability to handle large and complex applications

Disadvantages
  RDBMS: Inability to handle complex applications
  ODBMS: Low performance due to complex query optimization; inability to support large-scale systems
  ORDBMS: Low performance in web applications

Support from vendors
  RDBMS: It is considered to be highly successful, so the market size is very large, but many vendors are moving towards ORDBMS
  ODBMS: Presently lacking vendor support due to the vast size of the RDBMS market
  ORDBMS: All major RDBMS vendors are behind this, so it has a very good future

Source: International Data Corporation, 1997

The Object Management Group (OMG)

OMG is an international, open-membership, not-for-profit computer industry consortium. OMG Task Forces develop enterprise integration standards for a wide range of technologies, and an even wider range of industries. OMG's modeling standards enable powerful visual design, execution and maintenance of software and other processes. OMG's middleware standards and profiles are based on the Common Object Request Broker Architecture (CORBA) and support a wide variety of industries. All of its specifications may be downloaded without charge from its website. Any organization may join OMG and participate in its standards-setting process; a one-organization-one-vote policy ensures that every organization, large and small, has an effective voice. OMG's membership includes hundreds of organizations, with half being software end users in over two dozen vertical markets, and the other half representing virtually every large organization in the computer industry and many smaller ones. Most of the organizations that shape enterprise and Internet computing today are represented on the OMG Board of Directors.


What is Object-Oriented Design?

Object-oriented design is the concept that forces programmers to plan out their code in order to have a better-flowing program. The origins of object-oriented design are debated, but the first languages that supported it included Simula and Smalltalk. The term did not become popular until Grady Booch wrote the first paper titled Object-Oriented Design, in 1982.

Object-oriented design relies on five conceptual tools built into the programming language to aid the programmer. Programs written this way are often more readable than non-object-oriented programs, and debugging becomes easier thanks to locality.

Language Concepts

The five basic concepts of object-oriented design are the implementation-level features that are built into the programming language. These features are often referred to by the following common names.

Encapsulation: A tight coupling or association of data structures with the methods or functions that act on the data. This is called a class, or object (an object is often the implementation of a class).

Data Protection: The ability to protect some components of the object from external entities. This is realized by language keywords that enable a variable to be declared as private or protected to the owning class.

Inheritance: The ability of a class to extend or override functionality of another class. The so-called child class has a whole section that is the parent class, and then it has its own set of functions and data.

Interface: A definition of the functions or methods, and their signatures, that are available for use to manipulate a given instance of an object.

Polymorphism: The ability to define different functions or classes as having the same name but taking different data types.
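Several of these concepts can be seen in one short sketch. The class names are invented, and note that Python's data protection is by convention only (a leading underscore), unlike the private/protected keywords of C++ or Java:

```python
class Account:                       # encapsulation: data + the methods that act on it
    def __init__(self, balance):
        self._balance = balance      # data protection: "_" marks it internal (convention only)

    def deposit(self, amount):
        self._balance += amount

    def describe(self):              # part of the class's interface
        return f"balance={self._balance}"

class SavingsAccount(Account):       # inheritance: extends Account
    def describe(self):              # polymorphism: same name, overridden behaviour
        return "savings " + super().describe()

accounts = [Account(10), SavingsAccount(20)]
print([a.describe() for a in accounts])   # -> ['balance=10', 'savings balance=20']
```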

 

The Booch Methodology

The Booch software engineering methodology provides an object-oriented development in the analysis and design phases. The analysis phase is split into steps. The first step is to establish the requirements from the customer's perspective. This analysis step generates a high-level description of the system's function and structure. The second step is a domain analysis. The domain analysis is accomplished by defining object classes: their attributes, inheritance, and methods. State diagrams for the objects are then established. The analysis phase is completed with a validation step. The analysis phase iterates between the customer's requirements step, the domain analysis step, and the validation step until consistency is reached.

Once the analysis phase is completed, the Booch software engineering methodology develops

the architecture in the design phase. The design phase is iterative. A logic design is mapped

to a physical design where details of execution threads, processes, performance, location, data

types, data structures, visibility, and distribution are established. A prototype is created and

tested. The process iterates between the logical design, physical design, prototypes, and

testing.

The Booch software engineering methodology is sequential in the sense that the analysis

 phase is completed and then the design phase is completed. The methodology is cyclical in

the sense that each phase is composed of smaller cyclical steps. There is no explicit priority

setting nor a non-monotonic control mechanism. The Booch methodology concentrates on the

analysis and design phase and does not consider the implementation or the testing phase in

much detail.

Object Request Broker

An Object Request Broker (ORB) is a component in the CORBA programming model that acts as the middleware between clients and servers. In the CORBA model, a client can request a service without knowing anything about what servers are attached to the network. The various ORBs receive the requests, forward them to the appropriate servers, and then hand the results back to the client.

What is message dispatch?

Object-oriented systems must implement message dispatch efficiently in order not to penalize the object-oriented programming style. The performance of most previously published dispatch techniques can be characterized for both statically and dynamically typed languages, with both single and multiple inheritance. Hardware organization (in particular, branch latency and superscalar instruction issue) significantly impacts dispatch performance. For example, inline caching may outperform C++-style "vtables" on deeply pipelined processors even though it executes more instructions per dispatch. Adding support for dynamic typing or multiple inheritance does not significantly impact dispatch speed for most techniques, especially on superscalar machines. Also, instruction space overhead (calling sequences) can exceed the space cost of data structures (dispatch tables), so minimal table size may not imply minimal runtime space usage.

Common Object Request Broker Architecture(CORBA)It is an architecture and specification for creating, distributing, and managing distributed 

 program objects in a network. It allows programs at different locations and developed by

different vendors to communicate in a network through an "interface broker." CORBA was

developed by a consortium of vendors through the Object Management Group (OMG), which

currently includes over 500 member companies. Both International Organization for 

Standardization (ISO) and X/Open have sanctioned CORBA as the standard architecture for 

distributed objects (which are also known as components). CORBA 3 is the latest level.

The essential concept in CORBA is the Object Request Broker (ORB). ORB support in a

network of clients and servers on different computers means that a client program (which

may itself be an object) can request services from a server program or object without having to understand where the server is in a distributed network or what the interface to the server program looks like. To make requests or return replies between the ORBs, programs use the

General Inter-ORB Protocol (GIOP) and, for the Internet, its Internet Inter-ORB Protocol (IIOP). IIOP maps GIOP requests and replies to the Internet's Transmission Control Protocol (TCP) layer in each computer.

Samsher singh, 8/6/2019 -- http://slidepdf.com/reader/full/pointers-wiz-ingodmgacyclic-graph

A notable hold-out from CORBA is Microsoft, which has its own distributed object

architecture, the Distributed Component Object Model (DCOM). However, CORBA and

Microsoft have agreed on a gateway approach so that a client object developed with the

Component Object Model will be able to communicate with a CORBA server (and vice

versa). Distributed Computing Environment (DCE), a distributed programming architecture

that preceded the trend toward object-oriented programming and CORBA, is currently used

 by a number of large companies. DCE will perhaps continue to exist along with CORBA and

there will be "bridges" between the two. More information is available in our definitions of 

Internet Inter-ORB Protocol and Object Request Broker.

What is JDBC?

Java Database Connectivity (JDBC) is the JavaSoft specification of a standard application programming interface (API) that allows Java programs to access database management systems. The JDBC API consists of a set of interfaces and classes written in the Java programming language.

Using these standard interfaces and classes, programmers can write applications that connect to databases, send queries written in structured query language (SQL), and process the results.

The JDBC API is consistent with the style of the core Java interfaces and classes, such as java.lang

and java.awt. The following table describes the interfaces, classes, and exceptions (classes thrown as

exceptions) that make up the JDBC API. In the table, interfaces belonging to the javax.sql package are

extensions to the standard JDBC interfaces and are contained in the Java 2 SDK.
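The connect/query/process cycle outlined above can be sketched as follows. The connection URL, the scott/tiger credentials, and the EMP table are placeholders, and a matching JDBC driver must be on the classpath; this is a pattern sketch, not a runnable deployment:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcSketch {
    // Hypothetical connection details: URL, user, and password are placeholders.
    static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL";

    // Pure formatting helper for one result row.
    static String formatRow(int empno, String ename) {
        return empno + " " + ename;
    }

    public static void main(String[] args) throws SQLException {
        // try-with-resources closes the connection, statement, and result set.
        try (Connection conn = DriverManager.getConnection(URL, "scott", "tiger");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT empno, ename FROM EMP")) {
            while (rs.next()) { // iterate over the query results
                System.out.println(formatRow(rs.getInt("empno"), rs.getString("ename")));
            }
        }
    }
}
```

Note that only the URL string is driver-specific; Connection, Statement, and ResultSet are the standard interfaces the JDBC specification defines.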

Top Ten New Features in Oracle9i

Oracle9i is the latest major version of the Oracle database. This release covers a broad range of functionality, expanding the reach and depth of the features in Oracle8i.

We have updated our Oracle Essentials book to cover the new features in Oracle9i. The following list is, in our judgment, the ten most significant new facets of the Oracle9i database -- plus one more feature as a lagniappe.

1. Real Application Clusters: If you have paid attention to any of the hype surrounding the launch of Oracle9i, you have no doubt heard about Real Application Clusters. In a nutshell, Real Application Clusters are a way to have an Oracle9i database spread across multiple machines in a cluster of servers, with each server extending both the scalability and the availability of the entire cluster. Real Application Clusters use a technology called cache fusion to make the existence of a cluster transparent to an application. You can run any type of application, from an OLTP (online transaction processing) application to a data warehouse, on a Real Application Clusters database, without modifying your code.

2. Dynamic Memory Pools: With Oracle9i, you can adjust the size of the memory pools (buffer pool, shared pool, and large pool) without having to stop and restart the server.

3. Data Guard: This new feature automatically handles standby databases, from creation through maintenance and failover.

4. Automatic Undo Management: Rather than having to define and manage rollback segments, you

can simply define an Undo tablespace and let Oracle9i take care of the rest.

5. Flashback Query: This is a nice little feature that allows you to run a query against the database and

return results as they would have been produced at an earlier time. Flashback Query uses the same

mechanisms as multiversion read consistency, so it's like getting a feature for free.
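The behavior item 5 describes can be mimicked in a few lines: keep each committed version of a value keyed by its commit time, and answer an "as of" query with the most recent version at or before that time. This is a toy model of the idea, not Oracle's multiversion implementation:

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Toy model of Flashback Query semantics: each commit stores a new
// version keyed by its commit time; asOf(t) returns the version that
// was current at time t (the floor entry of the version map).
public class FlashbackDemo {
    private final NavigableMap<Long, String> versions = new TreeMap<>();

    void commit(long time, String value) {
        versions.put(time, value);
    }

    String asOf(long time) {
        Map.Entry<Long, String> e = versions.floorEntry(time);
        return e == null ? null : e.getValue();
    }
}
```

The real feature reuses the undo information already kept for multiversion read consistency, which is why the passage calls it "a feature for free."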

6. XMLType: This is a new datatype that lets you store native XML documents directly in the database.

XMLType eliminates the need to parse the documents coming into and out of the database.

7. List Partitioning: This feature offers an additional way to partition data, based on a list of values. If 

you are using partitions to isolate maintenance operations, List Partitioning could come in handy.
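In Oracle the value lists are declared in DDL (PARTITION BY LIST); the sketch below only illustrates the routing idea behind item 7, with hypothetical partition names and values:

```java
import java.util.List;
import java.util.Map;

// The idea behind list partitioning: each partition owns an explicit
// list of column values, and a row is routed to the partition whose
// list contains the row's key value.
public class ListPartitionDemo {
    static final Map<String, List<String>> PARTITIONS = Map.of(
        "part_east", List.of("NY", "NJ", "CT"),
        "part_west", List.of("CA", "OR", "WA"));

    static String partitionFor(String state) {
        for (Map.Entry<String, List<String>> e : PARTITIONS.entrySet()) {
            if (e.getValue().contains(state)) {
                return e.getKey();
            }
        }
        return "part_default"; // overflow partition for unlisted values
    }
}
```

Because each partition names its values explicitly, maintenance operations (say, dropping all east-coast data) touch only the partition that owns those values.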

8. FastStart Recovery: This new feature allows you to simply specify the amount of time you want to

spend recovering a database. Oracle9i will use the indicated time to automatically issue checkpoints.


9. Two-Pass Recovery: One of the behind-the-scenes improvements in Oracle9i is Two-Pass Recovery,

which allows the database to be reopened for use as soon as roll-forward recovery is completed,

without having to wait for the entire rollback process to finish.

10. Zero Data Loss: With Oracle9i, you can now specify Zero Data Loss, which means that all writes to

the local log file will also be written to the log file of the standby database. This feature means that

the standby database is always in exactly the same state as the active database, so that no data will

be lost in the event of a failure.

Bonus Tip

Label Security: This feature gives you the ability to specify security based on values, which gives you a finer grain of control that is useful if you are creating virtual private databases.

We cover all of these features and more in the recently released Oracle Essentials: Oracle9i, Oracle8i & Oracle8. This new edition has been expanded to include the latest features from Oracle9i.

Database security threats include unruly insiders

Ask just about anyone in IT management what they believe is a greater threat to database

security -- employees and contractors or external hackers -- and you're almost guaranteed to

hear, "The bad guy outsiders, of course!" Sure, a lot of people still quote the "80% of all

attacks come from the inside" statistic, but the majority of people I talk to still fear the

dreaded outsiders.

What's causing this ignorance of the insider threat? Is it the tribal belief that all insiders can

 be trusted? Is it the myrmidon-type mentality that everything we hear in the media is true?

Or, is it just everyday people not knowing how in the heck to secure their internal network? I

think it's all three.

Results from numerous studies cast a spotlight on the fact that the internal threat is a much

greater problem than we believe it is. It's naïve to think that breaking into internal databases and critical systems takes a lot of skill -- that belief is absolutely incorrect. Facts show most people are using very basic methods for obtaining ill-gotten gains. Sure, anyone can download

database reconnaissance tools, password crackers and vulnerability scanners. That's certainly

a risk, but it's a small one.

We're giving malicious insiders too much credit for their efforts. By and large, all they're

doing is using their own account or someone else's account to obtain unauthorized access to

servers and databases in order to do their dirty deeds. And they know they're most likely not

going to get caught. This isn't just theory. I confirm these vulnerabilities and network user beliefs in my security assessments month after month, year after year. Let this be a wake-up call.

Rogue administrators are a reality, too, but there are a lot more "regular users" on the network who have access to sensitive information than there are administrators. Don't fall into the trap

of believing it's the admins that do all the harm. Complicating matters is mobility. When

users are offsite and out of mind, it's human tendency to believe that "a little peek won't hurt"

or "no one can see me doing it" -- especially when malicious thoughts are present.

Combine that with readily available applications that can control all aspects of the network -- including critical database systems -- from the convenience of a Treo or similar handheld device, and the result can't be good for business! This is especially true when little to no internal controls are implemented or proactively managed.

You simply cannot trust that everyone is doing the right things all the time. Employee abuse

of trust is happening. Sure, it's not everyone. It's not even 10% of your users. It's likely just

one, two, maybe three people, but that's all it takes to create a big problem. It's okay to trust, but do it only where it makes sense. Most people don't need their current privileges, much

less the full administrative rights that are so easily handed out.

Protecting sensitive information from forces within the organization needs as much, if not more, attention than what has been dedicated to hardening the network perimeter against outside threats. The solution -- on paper at least -- is simple. Management will have to view things differently. They need to know that existing perimeter controls aren't enough and that moving perimeter security concepts inside the firewall is essential.

Managers must realize that just because insiders have passed background checks and seem to be good people, they absolutely cannot trust them completely. Perhaps more important,

managers have to think for themselves and trust you, the administrator, as well as outside

security experts when they're told threats and vulnerabilities do indeed exist. Then, they need

to follow up with adequate support and resources to actually do something about the problem.

One of the most effective things you can do is to learn how to "sell" information security to

others. In fact, communicating business needs, along with reasons for those needs, is a key

 piece for success in your job and in your career.

Centralized and distributed database systems

In the traditional enterprise computing model, an Information Systems department maintains

control of a centralized corporate database system. Mainframe computers, usually located at

corporate headquarters, provide the required performance levels. Remote sites access the

corporate database through wide-area networks (WANs) using applications provided by the

Information Systems department.

Changes in the corporate environment toward decentralized operations have prompted

organizations to move toward distributed database systems that complement the new

decentralized organization.

Today’s global enterprise may have many local-area networks (LANs) joined with a WAN,

as well as additional data servers and applications on the LANs. Client applications at the

sites need to access data locally through the LAN or remotely through the WAN. For 

example, a client in Tokyo might locally access a table stored on the Tokyo data server or 

remotely access a table stored on the New York data server.

In a distributed database environment, mainframe computers may be needed at corporate or 

regional headquarters to maintain sensitive corporate data, while clients at remote sites use

minicomputers and server-class workstations for local processing.

Both centralized and distributed database systems must deal with the problems associated

with remote access:

•  Network response slows when WAN traffic is heavy. For example, a mission-critical

transaction-processing application may be adversely affected when a decision-support

application requests a large number of rows.

• A centralized data server can become a bottleneck as a large user community

contends for data server access.

• Data is unavailable when a failure occurs on the network.


Parallel Query Processing

Without the parallel query feature, the processing of a SQL statement is always performed by

a single server process. With the parallel query feature, multiple processes can work together 

simultaneously to process a single SQL statement. This capability is called parallel query processing. By dividing the work necessary to process a statement among multiple server 

 processes, the Oracle Server can process the statement more quickly than if only a single

server process processed it.

The parallel query feature can dramatically improve performance for data-intensive

operations associated with decision support applications or very large database environments.

Symmetric multiprocessing (SMP), clustered, or massively parallel systems gain the largest

 performance benefits from the parallel query feature because query processing can be

effectively split up among many CPUs on a single system.

It is important to note that the query is parallelized dynamically at execution time. Thus, if 

the distribution or location of the data changes, Oracle automatically adapts to optimize the

 parallelization for each execution of a SQL statement.

The parallel query feature helps systems scale in performance when adding hardware

resources. If your system's CPUs and disk controllers are already heavily loaded, you need to

alleviate the system's load before using the parallel query feature to improve performance.

Chapter 18, "Parallel Query Tuning" describes how your system can achieve the best

 performance with the parallel query feature.

The Oracle Server can use parallel query processing for any of these statements:

• SELECT statements

• subqueries in UPDATE, INSERT, and DELETE statements

• CREATE TABLE ... AS SELECT statements

• CREATE INDEX

Parallel Query Process Architecture

Without the parallel query feature, a server process performs all necessary processing for the

execution of a SQL statement. For example, to perform a full table scan (for example,

SELECT * FROM EMP), one process performs the entire operation. The following figure

illustrates a server process performing a full table scan:

Figure C-1: Full Table Scan without the Parallel Query feature 

The parallel query feature allows certain operations (for example, full table scans or sorts) to

be performed in parallel by multiple query server processes. One process, known as the query coordinator, dispatches the execution of a statement to several query servers and coordinates the results from all of the servers to send the results back to the user.


The following figure illustrates several query server processes simultaneously performing a

 partial scan of the EMP table. The results are then sent back to the query coordinator, which

assembles the pieces into the desired full table scan.

Figure C-2: Multiple Query Servers Performing a Full Table Scan in 

Parallel 

The query coordinator process is very similar to the server processes in previous releases of 

the Oracle Server. The difference is that the query coordinator can break down execution

functions into parallel pieces and then integrate the partial results produced by the query

servers. Query servers get assigned to each operation in a SQL statement (for example, a table scan or a sort operation), and the number of query servers assigned to a single operation is the degree of parallelism for a query.
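The coordinator/query-server division of labor described above can be sketched with ordinary threads: the "table" is split into ranges, each query server scans its range, and the coordinator merges the partial results. The table contents, the counting predicate, and the use of an in-memory array are all made up for illustration; Oracle's query servers are separate processes, not threads:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelScanDemo {
    // Counts positive values in `table`, splitting the scan across
    // `degree` worker threads (the "query servers") and merging the
    // partial counts in the calling thread (the "query coordinator").
    public static long parallelCount(int[] table, int degree) {
        ExecutorService servers = Executors.newFixedThreadPool(degree);
        try {
            int chunk = (table.length + degree - 1) / degree;
            List<Future<Long>> partials = new ArrayList<>();
            for (int p = 0; p < degree; p++) {
                final int lo = p * chunk;
                final int hi = Math.min(table.length, lo + chunk);
                // Each query server performs a partial scan of its own range.
                partials.add(servers.submit(() -> {
                    long n = 0;
                    for (int i = lo; i < hi; i++) {
                        if (table[i] > 0) n++;
                    }
                    return n;
                }));
            }
            long total = 0; // the coordinator assembles the partial results
            for (Future<Long> f : partials) {
                try {
                    total += f.get();
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException(e);
                }
            }
            return total;
        } finally {
            servers.shutdown();
        }
    }
}
```

Here `degree` plays the role of the degree of parallelism: raising it adds query servers but only helps while spare CPUs and I/O bandwidth exist, matching the scaling caveat given earlier.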

The query coordinator calls upon the query servers during the execution of the SQL statement

(not during the parsing of the statement). Therefore, when using the parallel query feature

with the multi-threaded server, the server processing the EXECUTE call of a user's statement

 becomes the query coordinator for the statement.
