

DOMAIN REMEDY LANGUAGE ORCHESTRATING

Prof. (Dr.) Parveen Kumar [1], Mukesh Kumar [2], Seema [3]

[1] Computer Science Department, Meerut Institute of Engineering and Technology, Meerut (Uttar Pradesh), India
[2] Computer Science & Engineering, NIMS University, Shobha Nagar, Jaipur (Rajasthan), India; [email protected], [email protected]
[3] Lecturer, Department of Computer Science & Engineering, KCGPW Ambala City (Haryana), India; [email protected]

Abstract: In this paper we present a language description formalism called collage. It can be used for engineering Domain Remedy Languages, which are built to solve the problems of specific domains. We concentrate on Domain Remedy Languages that have recursive designs and are intended for corporate environments. The work relates to successive languages, and its technical part contributes to improving the Domain Specific Language design process. It takes the basic reuse patterns of object-oriented programming and applies them to collages. We focus on ease of specification and ease of reuse for programming languages by reusing specifications in an object-oriented style. We also discuss how domain-specific language technology can be applied in corporate sectors.

Keywords: Remedy, collages, Orchestrating

Introduction

Software reusability has long been an elusive goal. Promoting reusability through software assets could greatly benefit companies that build families of similar products. A process known as domain engineering derives the required products from a common implementation. Managing these common products shortens the development cycle for future requirements and helps tame the large variance seen in the development of different products. We postulate that a causal relationship gives a well-grounded basis to industrial-strength engineering, a requirement for successful, repeatable software reuse. We provide an illustration of the impact of advanced object-oriented technology.

Domain Engineering (DE) is the activity of collecting, organizing, and storing past experience in building systems, or parts of systems, in a particular domain in the form of reusable assets, together with an adequate means for reusing these assets when building new systems (i.e., retrieval, qualification, dissemination, adaptation, assembly). This makes explicit the requirement that software be built from reusable elements. Yet most software practitioners still build applications with the standard one-off approach, a practice with many problems and shortcomings. We need to move from engineering single systems to engineering families of systems, in which recurring designs become reusable solutions.

For engineering such domain languages, it is important that existing language designs and descriptions can easily be reused as basic building blocks for new Domain Languages. With the collage tool support Gem-Mex, such new designs can be framed in an integrated linguistic environment. This research relates to successive languages and tries to contribute to improving DL design and to reusing programming-language constructs from well-known GPL designs. The core of this research describes the specification styles that introduce the reuse attributes of object-oriented programming and applies these patterns to collages for further work.


The stress of this paper is on the design and elaboration of a language-engineering framework based on a state-oriented intuition of algorithms and programming. This approach opens the possibility of applying domain-language technology in the corporate sector: the smaller the language, the greater the benefit, since smaller, more focused languages are by default more secure.

Systematic Reuse

"Reuse is considered as a means to support the construction of new programs using, in a systematic way, existing designs, design fragments, program texts, documentation, or other forms of program representation." [1]

"Reusability is the extent to which a software component can be used (with or without adaptation) in multiple problem solutions." [2, 3]

Systematic reuse is the deliberate creation of common parts, or assets, with managed variability, forming the basis for systematically building systems in a domain.

Most asset definitions focus on code, but code alone raises several problems. Code can be saved in a reuse library for later use, but an unmanaged, miscellaneous collection will fail to achieve high-leverage reuse: such code may be hard to understand, hard to modify, and hard to locate, and the design information behind it is unavailable because it was never designed in. During product development, assets act as templates for generating the different work products. To achieve a high rate of success, assets must be well organized around the company's ongoing activities, for example in particular areas such as automotive or web design.

Assets emphasize design-for-commonality. This approach standardizes assets for building products by encapsulating what related products have in common. Design commonality is understood as the shared structure of related products: a particular design is obtained by discovering a common design, coordinating common interests, and assembling the product from the chosen reusable components.

Design-for-variability ("dominance of unevenness") is the basis for giving assets the flexibility to meet the requirements of individual products without compromising commonality. It needs careful design to include appropriate levels of parameterization, specialization, generalization, and extension. Like commonality, variability should be engineered a priori, so that analysis explicitly identifies the variations that anticipated adaptations will need. Design-for-variability results in:

• optional components and features
• specified alternative structures
• context-dependent parameters

The sketch below illustrates these three kinds of variability in a single reusable asset.
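As a concrete illustration of these three variability mechanisms, here is a minimal Python sketch of a single product-line asset. All names (ReportAsset, CsvBackend, JsonBackend) are invented for the example and do not come from the paper.

```python
from dataclasses import dataclass
from typing import Optional, Protocol

class StorageBackend(Protocol):
    """Alternative structures: any backend obeying this interface can be plugged in."""
    def save(self, text: str) -> None: ...

class CsvBackend:
    def save(self, text: str) -> None:
        print(f"csv <- {text}")

class JsonBackend:
    def save(self, text: str) -> None:
        print(f"json <- {text}")

@dataclass
class ReportAsset:
    """A reusable product-line asset with engineered variability."""
    backend: StorageBackend                  # specified alternative structure
    locale: str = "en"                       # context-dependent parameter
    audit_trail: Optional[list[str]] = None  # optional component/feature

    def publish(self, body: str) -> None:
        if self.audit_trail is not None:     # feature present only in some products
            self.audit_trail.append(body)
        self.backend.save(f"[{self.locale}] {body}")

# Two products of the same line, derived from one asset:
basic = ReportAsset(backend=CsvBackend())
audited = ReportAsset(backend=JsonBackend(), locale="de", audit_trail=[])
basic.publish("quarterly numbers")
audited.publish("Quartalszahlen")
```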

We institutionalize reuse when product lines are constructed to support an organization's objectives. Here is a definition of a product line:

A product line is a collection of applications sharing a common, managed set of features that satisfy the specific needs of a selected market to fulfill an organization's mission [6]. Reuse actually occurs both within a product line and across product lines, an observation made earlier and linked with the concepts of horizontal and vertical reuse.

Horizontal reuse refers to the use of an asset across several distinct product lines. Assets reused in this way tend to be general (i.e., application domain-independent), with a very specific set of functionality. Horizontal reuse focuses on specifying components that are as independent as possible of any domain-specific product decisions.

Vertical reuse refers to the use of assets specially designed for a given product line; these assets are therefore more specific to a set of products (i.e., application domain-specific). Vertical reuse focuses on defining product-line-specific components whose specification is constrained by the product-line architectural decisions. Economies of scale are achieved through the recognition of product lines.

METHODOLOGY

Domain Engineering Process: Domain Engineering is the combination of analysis, specification, and implementation of software assets. The process has three phases:

• Analysis
• Design
• Implementation

Analysis: Analysis is the activity that identifies the commonalities and variability within a domain. The information is organized into a set of domain models that can be reused when creating new systems. A domain model carries the domain dictionary, the feature model, and the context model.

Design: This phase takes the domain model as input and applies it as the control model of a partitioning strategy. The strategy determines components such as objects, data types, and subsystems, and how the domain's features are allocated among them.

Implementation: This phase generates the outputs, which are reusable assets, application generators, and domain-based languages. A toy sketch of this analysis-to-implementation pipeline follows.
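As a toy illustration of how analysis feeds implementation, the sketch below encodes a miniature feature model and uses it as a trivial application generator. The feature names and the generated module list are assumptions made up for the example.

```python
# Miniature feature model produced by domain analysis (assumed features).
feature_model = {
    "auth":    {"mandatory": True,  "variants": ["password", "sso"]},
    "reports": {"mandatory": False, "variants": ["pdf", "html"]},
}

def generate_app(selection: dict[str, str]) -> list[str]:
    """A trivial 'application generator': turns a feature selection
    into the list of modules to assemble for one product of the line."""
    modules = []
    for feature, spec in feature_model.items():
        chosen = selection.get(feature)
        if chosen is None:
            if spec["mandatory"]:
                raise ValueError(f"feature '{feature}' is mandatory")
            continue  # optional feature left out of this product
        if chosen not in spec["variants"]:
            raise ValueError(f"unknown variant '{chosen}' for '{feature}'")
        modules.append(f"{feature}-{chosen}")
    return modules

print(generate_app({"auth": "sso"}))                         # ['auth-sso']
print(generate_app({"auth": "password", "reports": "pdf"}))  # ['auth-password', 'reports-pdf']
```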

Methods: Domain-engineering methods aim to analyze and model the reusable similarities in a domain. They vary in their level of formality and in the products or information they deliver, and they have been extended by other methods to cover the whole product-line engineering process. The methods are based on domain analysis and design, which delineate sets of systems or functional areas. Some of these methods are ODM, FODA, FORM, RSEB, and DSSA.

Futuristic Strategy: New languages are normally designed to routinize the work of programmers; this process is called the programming-language development life cycle. During development, designers need to check the decisions already taken, so accurate, well-informed, consistent, and manageable data is required. The most comprehensive description of a programming language is usually its reference manual, which is mainly open to interpretation.

Collage is a new proposal for such formal approaches; it can be seen as a combination of Extended Backus-Naur Form (EBNF), state machines, grammars, and a simple prototyping language. One of the important achievements of collages is a new way to modularize the design of a language. The library of existing language designs contains small specification modules, each of them capturing one language feature, such as scoping, subtyping, or recursive method calls. In its current state, the library contains all the features needed to assemble a modern object-oriented language such as Java. Most interestingly, a high level of decoupling among the modules has been achieved. A language specification relates to its language instances through its syntax-related and semantics-related components, each with a corresponding process on language instances. The sketch below illustrates the feature-module idea.
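The following sketch illustrates the modularization idea only: small, decoupled specification modules, each capturing one language feature, are assembled into one language specification. The module contents and names are invented; real collage modules are far richer.

```python
# Each module contributes one language feature: a grammar fragment plus
# its (here drastically simplified) semantic rules. All names are invented.
SCOPING = {
    "grammar": ["block : '{' stmt* '}'"],
    "semantics": {"block": "push new scope; visit children; pop scope"},
}
SUBTYPING = {
    "grammar": ["type : ident ('extends' ident)?"],
    "semantics": {"type": "record supertype edge in the type lattice"},
}

def assemble_language(*modules: dict) -> dict:
    """Compose decoupled feature modules into one language specification."""
    spec = {"grammar": [], "semantics": {}}
    for m in modules:
        spec["grammar"].extend(m["grammar"])
        spec["semantics"].update(m["semantics"])
    return spec

lang = assemble_language(SCOPING, SUBTYPING)
for production in lang["grammar"]:
    print(production)
```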

Syntax

The syntax of a programming language is specified by means of EBNF productions. The EBNF productions define a context-free grammar and can be used to generate a parser; a minimal hand-written sketch follows.
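A minimal sketch of the EBNF-to-parser step, assuming a toy grammar (expr ::= term { "+" term }; term ::= NUMBER): the recursive-descent parser below is derived by hand from the productions, one function per production, purely for illustration.

```python
import re

# EBNF (toy):  expr ::= term { "+" term } ;   term ::= NUMBER
TOKEN = re.compile(r"\s*(\d+|\+)")

def tokenize(src: str) -> list[str]:
    pos, out = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at {pos}")
        out.append(m.group(1))
        pos = m.end()
    return out

def parse_expr(tokens: list[str]) -> int:
    """One function per EBNF production; semantics here is just evaluation."""
    value = parse_term(tokens)
    while tokens and tokens[0] == "+":
        tokens.pop(0)
        value += parse_term(tokens)
    return value

def parse_term(tokens: list[str]) -> int:
    if not tokens or not tokens[0].isdigit():
        raise SyntaxError("number expected")
    return int(tokens.pop(0))

print(parse_expr(tokenize("1 + 2 + 39")))  # 42
```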

Static Semantics

The static semantics of programming languages is described by means of attribute grammars and predicate logic. All static information, such as static typing, constant propagation, or scope resolution, can be specified with attribution rules.

Dynamic Semantics

Dynamic semantics defines the execution behavior of a program. Collages give dynamic semantics by mapping each program of a described language into a finite state machine whose states are decorated with actions; the actions fire each time a state is visited. In other words, during execution, control flows along transitions whose firing conditions evaluate to true, and at every state visited the corresponding action rule is executed. Instead of giving a transformation from programs into state machines, a novel kind of state machine, called a Tree Finite State Machine (TFSM), is used. A sketch of this state-machine execution model follows.
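Here is a minimal sketch of the state-machine execution model just described: states carry actions, transitions carry firing conditions, and the action of every visited state is executed. The two-state machine is invented for the example and illustrates the general mechanism only, not the TFSM formalism itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class State:
    name: str
    action: Callable[[dict], None]  # fired on every visit to this state
    transitions: list[tuple[Callable[[dict], bool], str]]  # (condition, target)

def run(states: dict[str, State], start: str, env: dict) -> None:
    current = start
    while current is not None:
        state = states[current]
        state.action(env)  # execute the action rule of the visited state
        # Follow the first transition whose firing condition evaluates to true.
        current = next((tgt for cond, tgt in state.transitions if cond(env)), None)

# A two-state machine computing env["n"]! in env["acc"]:
machine = {
    "loop": State("loop",
                  action=lambda e: e.update(acc=e["acc"] * e["n"], n=e["n"] - 1),
                  transitions=[(lambda e: e["n"] > 0, "loop"),
                               (lambda e: e["n"] <= 0, "done")]),
    "done": State("done",
                  action=lambda e: print("result:", e["acc"]),
                  transitions=[]),
}
run(machine, "loop", {"n": 5, "acc": 1})  # result: 120
```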

Conclusion

With the help of collages, some ASM (Abstract State Machine) models of programming languages assume a representation of the program's control and data flow in the form of (static) functions between parts of the program text. Informal pictures can explain the flow graph, and these pictures can be refined and formalized in the collage visual language. Collages use attribute grammars (AGs) for the specification of static as well as dynamic properties. Of the several mechanisms proposed for defining programming languages, AG systems are among the most successful, mainly because they can be written in a declarative style and are highly modular. By themselves, however, they are unsuitable for the specification of dynamic semantics. Collages are a graphical formalism.

REFERENCES

1) Dusink, E.M. and van Katwijk, J. "Reflections on Reusable Software and Software Components." In Ada Components: Libraries and Tools (Proceedings of the Ada-Europe Conference, Stockholm), ed. S. Tafvelin. Cambridge University Press, Cambridge, U.K., 1987, pp. 113-126.
2) Hooper, J.W. and Chester, R.O. Software Reuse: Guidelines and Methods. Plenum Press, New York, 1991.
3) Katz, S., et al. Glossary of Software Reuse Terms. Gaithersburg, MD: National Institute of Standards and Technology, 1994.
4) Weiss, D.M. and Lai, C.T.R. Software Product-Line Engineering.
5) Spinellis, D. and Guruprasad, V. "Lightweight languages as software engineering tools." In USENIX Conference on Domain-Specific Languages.
6) Cohen, S., Friedman, Martin, Solderitsch, and Webster. Product Line Identification for ESC-Hanscom. CMU/SEI-95-SR-024. Pittsburgh: Software Engineering Institute, Carnegie Mellon University, 1995.
7) Díaz-Herrera, J.L. Department of Computer Science, Southern Polytechnic State University, 1100 South Marietta Parkway, Marietta, GA 30060-2896.


INFORMATION TECHNOLOGY AND ITS APPLICATION IN MODERN DISTANCE EDUCATION

Iqrar Ahmad [1], Arif Ahmad Khan [2]
[1] Research Scholar, Singhania University, Jhunjhunu, Rajasthan, India; [email protected]
[2] Software Engineer, Accenture Services Pvt Ltd, Bangalore, India; [email protected]

Abstract

Information technology's huge potential for education makes it crucial in teaching and learning. Educational applications of information technology for e-learning, real-time learning, virtual workshops, resource sharing, and cooperative learning are described. Used effectively, information technology improves interactive learning and makes content easier to understand. Within information technology educational environments, self-centered, autonomous learning is promoted, which contributes to the development of lifelong learning skills. This paper focuses on the application of information technology in modern distance education. The text describes the rapid development of technology and its implications for modern distance education; addresses the integration of technology into modern education, such as computer-assisted instruction, computer multimedia and networking, and online learning using computer-mediated communication; addresses three teaching strategies in online learning; and examines the advantages of technology for modern education.

Keywords: e-learning, information technology, modern education, online learning

1. Introduction

Information technology currently considerably enhances the use of the Internet. Media-on-demand solutions, interactive educational broadcasting, and multicasting-on-demand are emerging technologies with huge potential for education [1]. A growing number of colleges have broadband access to the Internet, enabling them to take advantage of multimedia resources and new interactive learning methods. The use of multimedia resources in classroom teaching will have an impact on the way teachers prepare lessons, which, in turn, will modify the lessons themselves. There is a need to gather more insight into the educational impact of multimedia and broadcasting technology. Media education and the training of teachers in using multimedia resources effectively will be crucial for making successful use of digital media in classrooms.

In recent years, information technology for e-learning has played an important role in modern distance education [1]. Modern distance education, in the most general sense of the term, is instruction delivered over a distance to one or more individuals located in one or more venues [2]. Today's new information technologies, particularly the Internet, present higher education with the largest megaphone in its history: the capacity to disseminate knowledge to an exponentially larger number of people than ever before [3]. To do this, educators use a vehicle now commonly known as modern distance education. It is a subject that has stimulated intense passions, new and aggressive competitors, pressure for new (and often very different) resources, an evolving regulatory environment, and more ambiguities than certainties about appropriate policy and practice, not to mention the most fundamental questions about the future of the academy [4].

Modern distance education is utilized at every level of the educational spectrum; it takes different forms, however, at different levels [5]. Its most extensive use as a substitute for the classroom experience is in higher education [6]. Modern distance education refers to education or training courses delivered to remote (off-campus) locations via audio, video (live or prerecorded), or computer technologies, including both synchronous and asynchronous instruction [7]. The field of college distance education is growing most rapidly. Courses are available in community colleges and universities, at the post-graduate level, for professional development and training, and for continuing education [8, 9]. Community colleges, with their history of serving local and continuing-education communities, have been particularly active, as have many university systems [10, 11]. Courses conducted exclusively on campus, as well as classes conducted exclusively via written correspondence, are not included in this definition of distance education (although some on-campus instruction or testing may be involved, and some instruction may be conducted via written correspondence) [12].


Modern distance education helps students overcome such barriers as full-time work commitments, geographic inaccessibility, the difficulty of obtaining child or elder care, and physical disabilities. It can also provide the advantages of convenience and flexibility. With digital technologies enabling courses to reach and appeal to wider audiences, interest in distance education is growing.

2. Necessity of technology

Technology is coming, and we must deal with it, since it is an unstoppable force. Technology is requisite for the advancement of us all; to some extent, technology is the future, and without it we may not have jobs. Technology is continually changing the way we learn, work, and live. Nowadays, computers and networks are almost synonymous with technology, especially in education and for the ordinary person. Sawchuck claims that, "Strictly speaking, the term technology should include any mediating tool of human activity, ranging from a pencil, to a computer, to a language, to any rational organization of material resources. However, in the industrialized world, ask someone to talk about technology and his or her response largely begins and ends with a discussion of computers." [14] The world increasingly depends on computer technologies in many aspects of daily life, such as entertainment, business, and education. In 1993, President Bill Clinton highlighted the changing role of technology in the global economy as follows: "Most important of all, information has become global and has become king of the global economy. In earlier history, wealth was measured in land, gold, in oil, in machines. Today, the principal measure of our wealth is information: its quality, its quantity, and the speed with which we acquire it and adapt to it" [15]. The world is changing, and as technology becomes more integrated as a necessity for daily life, access to technology will become as important as access to education. Pittman found that learners can benefit from "strategic uses of technology". She states: "Adults in homes, schools, and community centers need to embrace, not fear, technology and believe in its transformative power; they must develop new capacities to embed technology in all of their work. We owe learners many and varied ways to experience technology's value in the learning process and use it to take charge of their own learning" [16]. Technology can eliminate some barriers to participation and meet some of the unique needs of learners. It can deliver learning in places other than classrooms, facilitate the efficient use of precious learning time, sustain the motivation of learners, and reach many different types of learners in the ways they need.

3. Information technology and teaching strategies in modern education

The new technologies offer ways of individualizing instruction to meet the needs of different types of learners.

Educators can use technologies to individualize instruction so that all types of learners get the instruction best suited to them. It is urgent to integrate these technology products and others into modern education.

3.1 Computer-assisted instruction

The ability of the computer to let students control the learning experience is the greatest strength of computer-assisted instruction (CAI). CAI helps students develop skills in logic, solve problems, and improve academic proficiency in areas such as reading and vocabulary, language, writing, and listening. CAI is utilized because of the benefits it offers to learners. These benefits stem from an array of diverse, innovative software programs. Some software programs offer word processing, or text adventures, or branching where students move to different levels without the teacher having to check their work before they continue. Some programs even offer holistic literacy interactions in which students "become engaged with scripts and use language to discuss, plan, and solve problems" [17]. As a result, learning remains challenging yet fun; no doubt most students like this kind of instruction. Out of self-esteem, most learners do not want others to know about their academic deficiencies, and CAI can provide the privacy they need: the computer is nonjudgmental and allows learners to make mistakes without others knowing. The most important advantage of CAI is the individualized nature of its delivery. As a result, some have noted that individualized instruction can be facilitated in modern prisons with the use of CAI [18].

3.2 Computer multimedia and networking

Computer multimedia and networking is an ideal supporting technology to meet the increasing demands of continuing modern education. It can enhance communication, community, and collaboration, making interactive distance education possible. Usually, interactive distance education is provided using compressed video over cable or T1 lines. The Internet can provide information in an interesting way and create a place where older adults from all over the country can form virtual "study circles" and self-help groups. This form of modern distance education fits many needs of students. Web-based interactive learning helps keep the various types of learners together. In this form of distance education the teacher and students can see and hear each other through two-way audio and video communication; as a result, a real-time teaching/learning environment emerges.

3.3 Online learning using computer-mediated communication


Online learning using computer-mediated communication (CMC) has many benefits, and it is an appropriate delivery method for modern education. However, students and teachers need to change the roles each plays in the learning-teaching process when using online learning. Converting regular classroom courses into online delivery is not a simple or linear process: the instructor needs to rethink the learner role, the teacher role, and the design of instruction in this new environment. Teaching online requires a considerable amount of time to design, develop, and deliver a course, and planning for online instruction can be a long process when multiple institutions or programs collaborate. Modes of delivery may require a consistent look and feel for the learner, showing uniformity in the appearance and navigation of the Web environment. The instructor must gain comfort and proficiency in using the Web as the primary instructor-learner connection in order to teach effectively without visual and verbal cues. Online activities call for the active participation of students in discussions. One of the challenges for online instructors is to identify the teaching strategies that best fit the needs of the learners, the content, and the environment; the instructor can use multiple teaching strategies to meet diverse learners' needs. For online courses that focus on content and process, the use of consensus groups is ideal. Learning object repositories can allow various levels of interactivity and are focused on the learner if used correctly. Online discussion boards place less emphasis on the instructor and more on the learner, and have the unique capacity to support higher-order constructivist learning and the development of a learning community.

3.3.1 Mentoring. Learners encounter various potential mentors during their educational life, all with their own personal histories, their own areas of expertise, and their own special gifts that influence the learners. According to English, mentoring can occur "anywhere that a student is in need of being taught, sponsored, guided, counseled, and befriended by someone who is more experienced", "even where teacher-student contact lacks the intensity normally associated with mentorship" [19]. Mentoring recognizes learners' needs both within and beyond the content of the online course. In the online environment, this interaction is complicated by the isolation of both instructor and student in their own space. Mentors serve both career-development and psychosocial functions for the learners.

Instructors have many mentoring responsibilities, such as supporting the novice, supporting the marginalized, incorporating mentoring into instructional practice, keeping an eye on the student, providing windows to the future, modeling revelation and reflection, exposing students to the profession, inviting chaos when appropriate, and inviting others to mentor [20]. There are many benefits of mentoring for the mentors as well as the learners. Learners become more committed to their field and more satisfied with their programs; mentors benefit from the wisdom and life experiences of students. Consequently, paradigms of mentoring have shifted toward a collaborative learning partnership, where the mentor is both benefactor and recipient of learning in the relationship.

3.3.2 Online discussion board. The online discussion board has become a ubiquitous part of today's distance learning landscape. It presents unique opportunities for teaching in new ways and supports higher-order constructivist learning. As McKeachie notes, "Teaching by discussion differs from lecturing because you never know what is going to happen. At times this is anxiety-producing, at times frustrating, but more often exhilarating. It provides constant challenges and opportunities for both you and the students to learn." [21] The online discussion board can stimulate an individualized form of learning at the higher levels of the cognitive domain and provide learners with exceptional opportunities for self-expression and reflection. To make effective use of the online discussion board to facilitate interaction, online educators have much to do. They should take much into consideration, as Levine lists: create an environment conducive to learning, establish rules, provide introductory instruction, guide the threaded discussion, pose meaningful questions and problems, focus on the highest three levels of the cognitive domain, allow individualization without isolation, be sensitive to nonparticipation, stimulate participation, encourage reflection, and summarize key ideas [22].

3.3.3 Online consensus group work. The online environment is ideal for collaborative learning. The ongoing nature of asynchronous group meetings negates the need to coordinate schedules to meet with group members. Online consensus group work (CGW) is a form of collaborative learning. Learners are entrusted with the ability to govern themselves; working in groups, learners foster a community, allowing students to feel connected with each other, the teacher, and the content. Students are considered co-constructors of knowledge rather than just consumers of it. Group work can shift the locus of control in the learning setting from the teacher to student peer groups, increasing learner motivation, persistence, and learning outcomes. Consensus is critical to the CGW process: it is through consensus that group members can listen, hear, understand, and finally accept the viewpoints of group peers.


Theoretically, CGW embodies many of the principles and tenets that have historically characterized the nature of modern learning, including participatory democratic social spaces that empower students, provide a space for reflection, and promote changes in worldview [23].

4. Teaching literature through blogs

Specifically speaking, the appropriate educational applications of information technology should be directed by education principles and the theory of modern education technology. When using information technology in modern distance education, we should put emphasis on integrating information technology (the computer) into an advanced instructional theory system. In the education field there are several principles of teaching and learning; each has its own advantages and limitations, and it is hard to say which one is the most advanced. We can use them according to the specific situation in different cases. As an online resource databank, the main function of a blackboard platform is to provide more learning materials for students; it relatively neglects the interaction between teachers and students and among students. As Cai Jigang points out, the use of multimedia in class lectures does not alter the passive role of students in the traditional classroom [24]; the courseware is sometimes so dominating that it leaves little time for the learner's independent and critical thinking. In view of this, literature teaching can reinforce interaction through other information technologies on the Internet. As a new way of communicating on the Internet after e-mail, BBS, and ICQ, the blog came to China around 2003 and within a few years became a powerful and popular form of media for its simplicity and accessibility. As Ferdig and Trammel suggest [25], for both teachers and students of literature, the use of blogs lies in three aspects: first, it can cultivate creative and critical thinking by expressing one's ideas about studying literature; second, it can promote interest in learning and teaching by stimulating one's initiative in participation; third, it can provide teachers and students with various perspectives by complementing the interaction between different learning and teaching groups.

4.1 Teacher blog

Literature teachers can establish their individual blogs to share teaching resources and exchange experience with other teachers. Meanwhile, teacher blogs can provide learners with proper learning materials, multiplying the opportunities for self-exercise. In literature-teaching blogs, teachers are therefore responsible for organizing blogging and supervising the performance of each learner.

Teacher blogs can be a platform for communication between teachers and students and among teachers, used either to respond to questions posted by students or to exchange novel ideas and notions of teaching with other teachers. Students can give suggestions for the improvement of lectures. The artificial nature of cyberspace encourages open-minded communication between teachers and students, since even harsh comments and bold suggestions are acceptable to each other in this context. A teacher blog therefore provides an ideal space for exchanging ideas about a literature course in both the teaching and learning processes. It can promote teaching literature in a more self-reflective way, keeping teachers more actively involved in teaching. However, some problems might appear in literature blogs; for example, failure to update content or long delays in responding to students' questions might diminish students' learning enthusiasm.

4.2 Student blog

Only when people learn in accordance with their self-concept can the learning be significant; such learning is characterized by self-involvement, self-initiation, individual pervasion, and self-evaluation [26]. Teachers of literature can encourage students to establish their own individual blogs, which can record their learning experience and reinforce their learning strategies more self-consciously. In addition, in light of peer comments or arguments, students can acquire autonomous learning ability by reflecting on their own problems in the learning process. Teachers should pay attention to student blogs regularly so as to be aware of the problems students meet in learning literature. Establishing and maintaining blogs can be set up as a means of evaluating performance. Nevertheless, students should be given the freedom to establish blogs with their own personality. Individual blogging can help students express their distinct individuality in the learning process, stimulating their interest and activating their initiative for the literature course.

5. The applications of information technology for distance education

Over the years, various technologies have been employed to enhance distance education, including radio, television, and the telephone. Today, a variety of telecommunication technologies are available to support modern distance education. Often, a key element of these technologies is their ability to enhance communication between teacher and learners, and among learners who may be at different locations. There are three broad categories of modern distance education technologies: audio-based, video-based, and computer-based [27].

5.1 Audio-based technologies


Audio-based technologies have been used in distance education for many years. Options include audio cassettes and CDs, radio, and audio teleconferencing. Audio-based technologies are familiar and readily available. While not as widely used as computer and video technologies, they offer a cost-effective alternative that can effectively meet many distance education needs. Teachers can invite an expert into the classroom to engage in a dialog with students. The audio teleconference is often seen as a cost-effective way to hold a meeting or teacher training session without the time and money involved in travel. Audio technology is the most easily accessible form of telecommunication because it uses the telephone service; commercial phone companies have made it easy to set up audio teleconferences from any phone. A third advantage of audio technologies is interactivity: all participants get the same message and can talk to the instructor or to the other learners. The limitations of audio technologies are the following. The lack of a visual dimension poses constraints, which can be offset by distributing material to the sites in advance or by using technology resources to transmit the visuals. To obtain acceptable audio quality, each receiving site needs special microphone-amplifier devices. Finally, lack of experience with this type of communication technology may make some learners less willing to participate.

5.2 Video-based technologies

Video-based technologies overcome the lack of visual elements in audio-based distance education. Video may be delivered over distances using a variety of means, including video cassettes, broadcast television, satellite and microwave transmission, closed-circuit and cable systems, and, today, the Internet. A key distinction between the various video distance education options is the degree of interactivity. Two-way video is now becoming more common in pre-college education, and considerable enthusiasm surrounds the potential of this medium. However, it is important to recognize that this technology is still relatively young. Video compression, which is needed to support two-way video transmission over telephone lines or the Internet, requires specialized equipment and can be subject to problems with sound and picture quality. While two-way interactive video equipment can be quite complex and expensive to set up and operate, the development of relatively easy-to-use Internet-based video conferencing equipment promises to make video conferencing more accessible. A room-to-room video conferencing unit includes a camera, a microphone, and a built-in codec (compressor/decompressor) for transmitting video over digital telephone lines or the Internet. Schools interested in exploring this technology should consult experts in the field for advice about equipment and requirements.

5.3 Computer-based technologies

Computers represent one of the newest tools for modern distance education. In addition to the use of packaged software as part of correspondence courses, the computer can be used as a communication tool and as a tool for gathering information and resources. Computer-mediated communication (CMC) is the term given to any use of the computer as a device for mediating communication between teacher and learners, and among learners, often over distances. Common CMC applications include e-mail, computer conferencing, chat, and the Web. The computer is now the most widely used platform for distance education delivery. In the past decade, the emergence of the World Wide Web has led to a proliferation of online learning courses and virtual schools. Institutions specializing in distance education, such as Tsinghua University and Peking University, have grown by leaps and bounds, and many areas now have virtual high schools. It is not hard to understand the attraction. Computer technologies have grown to encompass many of the other technologies that have traditionally been used in modern distance education: computers and the World Wide Web can deliver textual information, graphics, audio, and even video to learners in remote classroom locations or in their own homes. They make education more accessible and more convenient than ever before.


6. Conclusions

Distance education is not new. Both audio and television are resources that have been used for many years in distance teaching settings. Technology is often used in distance education to facilitate communication between teacher and learners. We examined the characteristics of three categories of distance education technologies: audio-based, video-based, and computer-based. As the technologies have advanced, these resources have been incorporated into more learning opportunities for students. Modern education requires a dual-subject teaching mode, one that brings into play both the students' role as cognitive subjects and the teachers' leading role. The use of information technologies for better visualization of learning content will promote interactive learning and make content easier to understand. This, in turn, will promote self-centered, autonomous learning environments, contributing to the development of lifelong learning skills.

7. References

[1] S. Horton, Web Teaching Guide: A Practical Approach to Creating Course Web Sites, Yale University Press, New Haven, 2000.
[2] H. Lee, J. Kim, and J. Kang, "The Assessment of Professional Standard Competence of Teachers of Students with Visual Impairments", International Journal of Special Education, 2008, Vol. 23, No. 2, pp. 33-46.
[3] A. Kezar, "Is there a way out? Examining the commercialization of higher education", Journal of Higher Education, Jul/Aug 2008, Vol. 79, Issue 4, pp. 473-481.
[4] M. Bishop and W. Cates, "Theoretical foundation for sound's use in multimedia instruction to enhance learning", Educational Technology Research and Development, 2001, 49(3), pp. 5-22.
[5] L. Bussell, "Haptic interfaces: Getting in touch with web-based learning", Educational Technology, 2001, 41(3), pp. 27-32.
[6] K. Ayres and J. Langone, "Evaluation of software for functional skills instruction: blending best practice with technology", Technology in Action, 2005, 1(5), pp. 1-8.
[7] M. Martinez, N. Sauleda, and G. Huber, "Metaphors as blueprints of thinking about teaching and learning", Teaching and Teacher Education, 2001, 17(8), pp. 965-977.
[8] J. Guthrie, W. Schafer, and C. Huang, "Benefits of opportunity to read and balanced instruction on the NAEP", The Journal of Educational Research, 2001, 94(3), pp. 145-162.
[9] R. Gersten, L. Fuchs, J. Williams, and S. Baker, "Teaching reading comprehension strategies to students with learning disabilities: A review of research", Review of Educational Research, 2001, 71(2), pp. 279-321.
[10] M. Hoover and E. Fabian, "A successful program for struggling readers", Reading Teacher, 2000, 53(6), pp. 474-476.
[11] T. Cook, B. Means, G. Haertel, and V. Michalchik, "The case for randomized experiments", in G.D. Haertel and B. Means (eds.), Evaluating the Effects of Learning Technologies, New York, NY: Teachers College Press, 2003.
[12] N. Denzin and Y. Lincoln (eds.), The Sage Handbook of Qualitative Research, Thousand Oaks, CA: Sage Publications, 2005.
[13] N. Brouwer, G. Muller, and H. Rietdijk, "Educational Designing with Micro Worlds", Journal of Technology and Teacher Education, 2007, 15(4), pp. 35-47.
[14] Peter H. Sawchuck, in Learning in Doing: Social, Cognitive, and Computational Perspectives Series, Roy Pea, Christian Heath, and Lucy A. Suchman (eds.), Cambridge: Cambridge University Press, 2003, p. 50.
[15] H.I. Schiller, Information Inequality: The Deepening Social Crisis in America, New York: Routledge, 1996, p. 105.
[16] J. Pittman, "Empowering Individuals, Schools, and Communities", in G. Solomon, N.J. Allen, and P. Resta (eds.), Toward Digital Equity: Bridging the Divide in Education, Needham Heights, Mass.: Allyn & Bacon, 2003.
[17] R. Finnegan and R. Sinatra, "Interactive computer-assisted instruction with adults", Journal of Reading, 1991, 35, pp. 108-119.
[18] J.S. Batchelder, "Efficacy of a computer-assisted instruction program in a prison setting: An experimental study", Adult Education Quarterly, Washington, 2000, 50(2), pp. 120-129.
[19] L.M. English, "Spiritual Dimensions of Informal Learning", in L.M. English and M.A. Gillen (eds.), Addressing the Spiritual Dimensions of Adult Learning, New Directions for Adult and Continuing Education, no. 85, San Francisco: Jossey-Bass, 2000.
[20] Kimberly R. Burgess, "Mentoring as Holistic Online Instruction", New Directions for Adult and Continuing Education, no. 113, Spring 2007, pp. 49-56.
[21] W.J. McKeachie, McKeachie's Teaching Tips: Strategies, Research, and Theory for College and University Teachers, 11th ed., Boston: Houghton Mifflin, 2002.
[22] S. Joseph Levine, "The Online Discussion Board", New Directions for Adult and Continuing Education, no. 113, Spring 2007, pp. 67-74.
[23] Regina O. Smith and John M. Dirkx, "Using Consensus Groups in Online Learning", New Directions for Adult and Continuing Education, no. 113, Spring 2007, pp. 25-35.
[24] Cai Jigang, College English Teaching: Review, Reflection and Research, Shanghai: Fudan University Press, 2006.
[25] R. Ferdig and K. Trammel, "Content delivery in the blogosphere", Technological Horizons in Education Journal, Feb. 2004, pp. 12-15.
[26] Jenny Rogers, Adults Learning, Buckingham: Open University Press, 1971.
[27] Li Yu-mei, "The Research on the Applications of Information Technologies for Distance Education", The 6th International Conference on Computer Science & Education (ICCSE 2011), August 3-5, 2011, SuperStar Virgo, Singapore.


A NOVEL METHOD FOR SIR ESTIMATION USING AN AUXILIARY SPREADING SEQUENCE

Mr. K. V. Murali Mohan, Professor, ECE Dept., HITS, Bogaram (V), Keesara (M), Hyderabad; [email protected]
Dr. D. Krishna Reddy, Professor, ECE Dept., CBIT, Gandipet, Hyderabad; [email protected]

Abstract: In this paper, the problem of closed loop power control based on the Signal-to-Interference Ratio (SIR) is investigated for the reverse link of a Direct Sequence Code Division Multiple Access (DS-CDMA) system in slow multipath fading environments. The investigation focuses on SIR estimation methods; a method for SIR estimation using an auxiliary spreading sequence is proposed and simulated in this work. The performance of closed loop power control (CLPC) using the new SIR estimator is evaluated in terms of bit error rate (BER) versus Eb/Io, and the result is compared with the BER performance of another, standard SIR estimator, which is based on estimating the average power and variance of the received signal. Simulation results show that for low SIRs, the mean value of the SIR estimated with the proposed method is better than that obtained with the standard estimator. The SIR estimation using the auxiliary spreading sequence, however, yields a higher variance on the estimated SIR than the standard technique. This is because the new technique performs its averaging at the symbol level, while the standard technique operates on the chip sequence, which gives N times more samples to average, where N is the processing gain. The BER performance of CLPC is better when the power control updating rate is much higher than the fading rate, indicating that higher power control rates can track the fading more accurately. This issue is identified, and a possible solution using a prediction filter algorithm to reduce the variance is suggested for further research.

Keywords: Closed Loop Power Control, DS-CDMA, SIR estimation

I. Introduction

Direct sequence code division multiple access (DS-CDMA) has been more attractive for application in third-generation (3G) systems than frequency division multiple access (FDMA) and time division multiple access (TDMA) schemes [1]. Although in theory the three multiple access schemes should provide the same capacity [2], CDMA permits several capacity-enhancement techniques to be implemented more efficiently than the other two. For example, CDMA can utilize the frequency spectrum more efficiently and is able to deal with the multipath channel more effectively through the use of RAKE technology; hence, it provides more capacity in practice. Consequently, many aspects of CDMA are currently under active study in order to satisfy the requirements of 3G systems and to optimize its performance. However, since all users in CDMA occupy the same frequency band simultaneously, every user interferes with every other user as multiple access interference (MAI), because of the non-zero cross-correlations between users' spreading sequences. Due to MAI, the received power at each base station must be kept equal across users to obtain optimum capacity [3]. Therefore, power control in DS-CDMA plays a significant role in achieving optimum capacity. Although several receiver structures have been proposed to alleviate the need for equal power, they are too complex for practical implementation [4]. Meanwhile, simpler decorrelation detectors that work effectively in second-generation CDMA systems are still attractive for use in third-generation systems.

One important issue in power control is providing a reliable estimate of the SIR. Several methods have been proposed in the literature [6-8]. In [6], the SIR estimate is extracted using a subspace method; however, subspace methods can suffer from instability, especially when interference and noise are dominant. In [7], the SIR estimate is obtained using pilot symbols and hard decisions on the data symbols. Although this method has the advantage of being rather simple and efficient for SIRs greater than 6 dB, it degrades severely as the SIR decreases, because it relies on symbol decisions, which tend to be wrong at low SIR. Another method of interest, shown in [8], estimates the SIR partly by processing chips to estimate the MAI. This last method is selected for comparison purposes, as it seems to provide the best SIR estimates among the existing methods. In this paper, a novel approach for SIR estimation using an auxiliary spreading sequence, presented in [9], is simulated and used for closed loop power control with a fixed step size. A sketch of the moment-based idea behind such estimators is given below.
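As background for the comparison, the sketch below shows a moment-based SIR estimate of the general kind the standard estimator builds on: desired-signal power from the squared sample mean of the despread symbols, and interference-plus-noise power from their sample variance. It assumes BPSK symbols and Gaussian interference-plus-noise and is only an illustration, not the estimator of [8] or [9].

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_sir_db(symbols: np.ndarray) -> float:
    """Moment-based SIR estimate from despread (soft) symbols.

    Signal power ~ (mean |symbol|)^2; interference-plus-noise power ~ the
    variance of the rectified symbols (crude data removal, valid for +/-1 data).
    """
    rectified = np.abs(symbols)
    p_signal = rectified.mean() ** 2
    p_interf = rectified.var()
    return 10 * np.log10(p_signal / p_interf)

# Despread BPSK symbols at a true SIR of 6 dB (unit-power interference+noise):
true_sir_db = 6.0
amp = 10 ** (true_sir_db / 20)
data = rng.choice([-1.0, 1.0], size=2000)
received = amp * data + rng.standard_normal(2000)

print(f"estimated SIR: {estimate_sir_db(received):.2f} dB (true {true_sir_db} dB)")
```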

II. Wireless Communication Channel

Wireless communication channels suffer from severe attenuation and signal fluctuations. Large attenuation is due to the user's mobility through the propagation environment, which often prevents any direct signal from the transmitter from reaching the receiver. Three models are commonly used to characterize a wireless channel:

1. Propagation path loss (near-far attenuation)
2. Shadowing (variation of the average power)
3. Multipath fading (fast signal fluctuation)

Propagation path loss represents the average signal attenuation due to the separation between the transmitter and the receiver. If a mobile station moves along a circular line at a fixed distance from the base station, the average received signal may still vary slowly; this is called shadowing.


K V Murali Mohan, Krishna Reddy, Int. J. EnCoTe, v01012012, 13-17 ISSN : 2277 - 9337

Available [email protected] 13

Page 13: INTERNATIONAL JOURNAL OF ENGINEERING COMPUTER SCIENCE AND TECHNOLOGY

To illustrate these propagation mechanisms, the received signal s [dB] at a mobile user as a function of distance d [km] is shown in Fig. 1.
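Multipath fading of this kind is commonly modelled with a Rayleigh envelope. As a rough, illustrative sketch (not from the paper), the envelope can be produced by summing many equal-power scattered paths with random phases:

```python
import numpy as np

# Minimal illustration: a flat Rayleigh-fading envelope built from many
# scattered paths with random phases (sum-of-scatterers approximation).
# All parameters here are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n_paths = 64          # number of superposed scattered paths
n_samples = 1000      # envelope samples

phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_paths))
# Each sample is the coherent sum of equal-power paths with random phases.
h = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_paths)
envelope_db = 20.0 * np.log10(np.abs(h))

print("median level: %.1f dB, deepest fade: %.1f dB"
      % (np.median(envelope_db), envelope_db.min()))
```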

Fig. 1 Wireless propagation: received signal s [dB] versus distance d [km], showing average pathloss, shadowing and multipath fading
Statistics of long-term signal variation (average pathloss and shadowing) are often required for large-scale system design and planning, such as service area coverage and hand-off design. In this work, the study of the wireless channel is focused on multipath fading in order to evaluate the effect of CLPC. Signal variation due to shadowing is assumed to be perfectly smoothed out by the open loop power control.

III. Closed Loop Power Control
The focus of this study is to investigate the performance of CLPC in a slow fading environment (Doppler frequencies fD = 17, 50, and 100 Hz were chosen for comparison). The study employs an uncoded binary signal in order to show how the performance is improved by CLPC. Power control on the reverse link is crucial because the received signals come from different mobile terminals that fade independently. The scheme of closed loop power control is depicted in Fig. 2 as described in [11].

Fig.2 Mechanism of CLPC

In IS-95 (a second generation CDMA system), power control is performed every 1.25 ms. The third generation systems, however, will operate in the 1.8 GHz frequency band, resulting in higher fading rates compared to the second generation system that employs a 900 MHz carrier frequency. Therefore higher power control rates are required in third generation systems. Once the estimated SIR is available, it is compared with the target SIR to produce the power control command (PCC) bits. The PCC bits instruct the mobile to raise its transmit power if the SIR estimate is less than the target SIR, or to lower it when the SIR estimate is greater than the target SIR. The mobile terminal then changes its transmit power according to the instructions received from the base station. Note that PCC bits are inserted into the user's data using a multiplexer (MUX) at the basestation, and a demultiplexer (DEMUX) is used to remove the PCC bits from the user's data at the mobile terminal. Coding and interleaving are not employed on the PCC bits, to reduce data transmission inefficiency and to avoid the corresponding delay. Therefore, errors may occur on the PCC bits because they are transmitted in an uncoded form. Another problem to be considered is the loop delay introduced by the propagation channel and processing time.

IV. Simulation of SIR Estimation
In practice, it is not easy to obtain fast and accurate SIR measurements, because the desired signal cannot be easily separated from the MAI. As mentioned in the introduction, a SIR estimator has been proposed in [8] using measurements of the average power as well as the total power of the received signals. In that method, however, the total power was estimated at the chip level (before despreading). In [9], a SIR estimator at the symbol/bit level using an auxiliary spreading sequence (after despreading) is proposed. This method is described in Fig. 3 using QPSK modulation. During initial access, the mobile transmit power is determined by the open loop power control. After the connection is established, the closed loop power control begins to operate at faster rates. Corresponding to one time slot of the received symbols (one power control period), the basestation measures and estimates the received SIR every 0.67 ms. In Fig. 3 the received signal after carrier demodulation and filtering is despread by the kth user's spreading sequences $a_n^{(I)}(k)$ and $a_n^{(Q)}(k)$ to obtain the wanted signal. The MAI is then estimated by despreading the received signal with the auxiliary spreading sequences $a_n^{(I)}(a)$ and $a_n^{(Q)}(a)$. The superscripts (I) and (Q) signify respectively the in-phase and quadrature components of the spreading sequences used in QPSK modulation. The auxiliary spreading sequences are reserved for interference estimation and are not assigned to any user in the system. However, all users can use the same auxiliary spreading sequences to estimate the MAI. At the demodulator input, the received signal from all K users can be represented as



$$ s(t) = \sum_{k=1}^{K} \sqrt{E_c(k)} \Big[ \sum_{n=-\infty}^{\infty} x_n(k)\, a_n^{(I)}(k)\, h(t - nT_c) \cos(\omega_c t + \phi_k) + \sum_{n=-\infty}^{\infty} x_n(k)\, a_n^{(Q)}(k)\, h(t - nT_c) \sin(\omega_c t + \phi_k) \Big] \qquad (1) $$

in which $E_c(k)$ is the chip energy of the kth user, $a_n^{(I)}(k)$ and $a_n^{(Q)}(k)$ are the in-phase and quadrature spreading sequences, respectively, $x_n(k)$ is the binary symbol, and $\omega_c$ represents the carrier frequency.

Fig. 3 SIR estimator using the auxiliary spreading sequence ($\Sigma$: chip sum over one symbol; $|.|^2$: square of magnitude; $E[.]$: averaging operator)
When the chip sequence is perfectly synchronized to the received signal of the kth user, the mean value of $y_n(k)$ is

$$ E[y_n(k)] = x_n(k)\sqrt{E_c(k)} \qquad (2) $$

while the mean value of $y_n(a)$ is $E[y_n(a)] = 0$, as a result of despreading by the auxiliary spreading sequence. However, both $y_n(k)$ and $y_n(a)$ have non-zero variance due to MAI. The quantities $Z_k$ and $Z_a$ can then be derived as

$$ Z_k = N^2 S_k^2 + N I^2 + e, \qquad Z_a = N S_k^2 + N I^2 + f \qquad (3) $$

in which N is the processing gain, $S_k$ is the wanted signal of the kth user, and e and f correspond to the estimation error due to the limited number of averaged symbols. I represents the MAI from the other K-1 users plus thermal noise, corresponding to the variance of $y_n(k)$, which can be expressed as

$$ \operatorname{var}[y_n(k)] = \sum_{j \neq k} \frac{E_c(j)}{2T_c} + \frac{N_o}{2} \qquad (4) $$

in which Tc is the chip period and No is the thermal noise power spectral density. From equation (3) one can derive the estimate of SIR as

$$ \widehat{SIR} = \frac{Z_k - Z_a}{Z_a - Z_k/N} \qquad (5) $$

The SIR estimate in (5) may improve on the estimate derived in [9], since it takes into account the self interference produced by the crosscorrelation between the auxiliary spreading sequence and the spreading sequence of the desired user. This can have a significant effect on the SIR estimate when the number of users is small, or when the desired user has high power, which can be the case in fading channels. The concept of SIR measurement and estimation depicted in Fig. 3 consists of squaring and averaging operations on the received binary data after despreading. The more data used in the averaging process, the more accurate the resulting estimate. However, the measurement period cannot be too long, because the results will be used to control the mobile transmit power at rates faster than the fading rates. In the technical specification proposed in [12], the reverse link data channel for 3G systems has the structure described in Fig. 4. The reverse link channel may have one dedicated physical control channel (DPCCH) and 0, 1, or several dedicated physical data channels (DPDCH) for each connection. The channel structure consists of super frames, frames and slots as shown in Fig. 4. The simulation of this method was first conducted in an AWGN channel to see the effect of MAI on the proposed SIR estimator. The number of users K = 10 with processing gain N = 64 using QPSK modulation is considered. SIR estimation was performed every 0.67 ms, corresponding to 2560 chips for one time slot. The simulation used a data rate of 120 kbps, or 60 ksps at the input of the QPSK modulator, so that 40 binary symbols per time slot are available for averaging. In this simulation, the step size to increase or decrease the mobile transmit power is 1 dB. Measurements were conducted 100 times within the SIR dynamic range considered above with the same interval. This SIR estimator was compared with the method proposed in [8]. The performance of both estimators was then validated by comparison with the true SIR, as illustrated in Fig. 5 (a) and (b).
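To make the estimator concrete, the following sketch simulates equation (5) under simplifying assumptions (real binary chips rather than QPSK, random spreading sequences, equal-power users, no thermal noise); the values K = 10 and N = 64 follow the text, everything else is illustrative:

```python
import numpy as np

# Sketch of the auxiliary-sequence SIR estimator of equation (5).
# Assumptions: real binary chips, random per-symbol spreading sequences,
# equal-power users, no thermal noise.
rng = np.random.default_rng(1)
K, N, n_symbols = 10, 64, 4000

amp = np.ones(K)                      # per-user amplitudes S_k (equal power)
data = rng.choice([-1.0, 1.0], size=(K, n_symbols))        # symbols x_n(k)
codes = rng.choice([-1.0, 1.0], size=(K, n_symbols, N))    # sequences a_n(k)
aux = rng.choice([-1.0, 1.0], size=(n_symbols, N))         # auxiliary sequence

# Received chips: superposition of all K users.
r = np.einsum('k,ks,ksn->sn', amp, data, codes)

# Despread with user 0's sequence and with the auxiliary sequence.
y_k = np.einsum('sn,sn->s', r, codes[0])   # wanted branch, mean = N*S_0*x_n
y_a = np.einsum('sn,sn->s', r, aux)        # auxiliary branch, mean ~ 0

Z_k = np.mean(y_k ** 2)                    # ~ N^2*S_0^2 + N*I^2
Z_a = np.mean(y_a ** 2)                    # ~ N*S_0^2  + N*I^2
sir_est = (Z_k - Z_a) / (Z_a - Z_k / N)    # equation (5)

true_sir = N * amp[0] ** 2 / np.sum(amp[1:] ** 2)  # post-despreading SIR
print("estimated SIR: %.2f dB, true SIR: %.2f dB"
      % (10 * np.log10(sir_est), 10 * np.log10(true_sir)))
```

With e = f = 0, the algebra checks out: $Z_k - Z_a = N(N-1)S_k^2$ and $Z_a - Z_k/N = (N-1)I^2$, so the ratio in (5) recovers the post-despreading SIR $N S_k^2 / I^2$.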

Fig. 5 (a) Performance of SIR estimator (proposed method): estimated SIR [dB] versus true SIR [dB]

Fig. 5 (b) Performance of SIR estimator (standard method [8]): estimated SIR [dB] versus true SIR [dB]

The results in Fig. 5 show that for low SIRs the proposed method provides better estimates of the mean value, while the standard method [8] gives higher estimates when compared with the true SIR. The standard method performs well for the mean estimates when SIR > 6 dB. However, it has to be pointed out that the variance of the proposed estimator is higher than the variance of the standard estimator. This problem can be overcome by using a prediction filtering technique that smooths out the SIR variations produced by the estimator. The effect of these estimators on the performance of power control is investigated in Section V.

V. Simulation of Fast Power Control
The SIR estimator outlined in Section IV is used in the closed loop power control described in Section III. As reported in [5], closed loop power control and coding/interleaving can work in a complementary way, in that power control works more effectively in slow fading, while coding/interleaving performs better in fast fading. This is not difficult to understand, because most coding techniques rely on bit errors being random to construct a forward error correction scheme. In slow fading environments, the received signal may suffer burst errors during the slow fades. Interleaving aims at randomising those burst errors in order to improve the performance of the coding scheme. However, delay is inherent in the interleaving process. On the other hand, power control may fail to function in a fast fading environment because it cannot track the fast signal fluctuation. Therefore, power control is designed to work more effectively in slow fading channels, and error correction coding is left to overcome the problem in fast fading channels. From simulation, the effect of power control on mitigating multipath fading is shown in Fig. 6. The simulation was performed at a low mobile speed of 10 km/hr, which corresponds to a Doppler frequency of 17 Hz in the 1.8 GHz frequency band. The power control updating frequency is 1.5 kHz (the transmit power is updated every 0.67 ms), as described in Section III.

(a) Multipath fading without power control

(b) Multipath fading with power control

Fig. 6 Effect of CLPC on fading channel (v = 10 km/h): relative signal level [dB] versus time (x 0.67 ms)
The same simulation parameters as described in Section IV for 10 users are also used in the power control simulation. The step size for increasing or decreasing the transmit power at the mobile station is 1 dB. Fig. 6 (a) shows the simulated received signal level under multipath fading without power control. The SIR without power control follows the received signal fading, which may indicate that the MAI is Gaussian or tends to be Gaussian. With power control, the SIR is nearly constant at the target SIR level, as can be seen in Fig. 6 (b). In this simulation, the SIR target was set at 10 dB. We can see from Fig. 6 (b) that the controlled SIR still varies, especially during the channel fading dips; because of the loop delay, power control cannot track the fading dips perfectly. To evaluate the effect of CLPC on bit error rate performance for various fading rates, simulation was conducted using QPSK modulation on CDMA signals. Doppler frequencies fD were chosen at 17, 50, and 100 Hz, corresponding respectively to mobile speeds of 10, 30, and 60 km/h in the 1.8 GHz frequency band. The power control updating rate was 1.5 kHz, corresponding to the 0.67 ms time slot of the SIR measurement period (Tp). The delay introduced by the propagation channel is assumed to equal one measurement period (Tp). The command bits sent by the basestation were assumed error free and the processing delay of the command bits was negligible.
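For illustration, here is a minimal sketch of the fixed-step control loop described above (1 dB step, 10 dB target, one-slot loop delay); the first-order fading model and the ideal SIR measurement are simplifying assumptions, not the paper's simulator:

```python
import numpy as np

# Sketch of a fixed-step closed loop power control (CLPC) loop.
# Assumptions: first-order autoregressive Rayleigh fading per 0.67 ms slot,
# ideal SIR measurement, error-free PCC bits, one-slot feedback delay.
rng = np.random.default_rng(2)
n_slots, rho = 500, 0.99        # slots; slot-to-slot fading correlation
step_db, target_db = 1.0, 10.0  # fixed step and target SIR

# Correlated complex channel gain (slow Rayleigh fading).
g = np.empty(n_slots, dtype=complex)
g[0] = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
for t in range(1, n_slots):
    w = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    g[t] = rho * g[t - 1] + np.sqrt(1 - rho ** 2) * w
fading_db = 20 * np.log10(np.abs(g))

tx_db, pcc = 10.0, 0.0          # mobile transmit power; delayed command
sir_db = np.empty(n_slots)
for t in range(n_slots):
    tx_db += pcc                            # apply last slot's command
    sir_db[t] = tx_db + fading_db[t]        # received SIR (interference at 0 dB)
    pcc = step_db if sir_db[t] < target_db else -step_db

print("SIR mean %.2f dB, std %.2f dB" % (sir_db.mean(), sir_db.std()))
```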


The BER was counted during the simulation for various target SIRs. The simulation results are plotted in Fig. 7 to show the BER performance as a function of Eb/Io.
Figure 7. Performance of CLPC for different fading rates: BER versus Eb/Io [dB] for the fading channel, CLPC with the SIR estimate in [8], CLPC with our SIR estimate, CLPC with the true SIR, and the AWGN channel
From the BER performance plotted in Fig. 7, it is clear that power control performs better at slow fading rates. For fDTp = 0.01 the result exhibits a significant performance improvement. For fDTp = 0.03, power control produces a less significant improvement, while for fDTp = 0.07, power control seems to be ineffective, in that the performance of the fading channel is almost unchanged by power control. This may be due to the combination of two factors: first, the variance of this estimator may be too high; second, in the case of fast fading channels, the processing delay is so long that the estimated SIR does not correspond to the SIR experienced at the time the power control command takes effect. This problem can be fixed by using a prediction method that allows one to predict the SIR experienced at the time the power control takes effect.

VI. Conclusions
In this work, a novel SIR estimator for SIR-based closed loop power control of the reverse link of CDMA systems under slow Rayleigh fading conditions has been simulated. The SIR estimate is used in closed loop power control to reduce the effect of fading and multiple access interference. Power controlled BER performance has shown promising results using the proposed SIR estimator when compared to previously published methods. It was found that when power control updating rates are much higher than the fading rates (100 times in this simulation), the BER performance of the fading channel improves significantly. The performance of SIR-based CLPC is, however, affected by the processing and propagation loop delays in the case of fast fading channels. It is also clear that the CLPC performance is affected by the variance of the proposed SIR estimator. To overcome these problems, a prediction filter on the SIR estimate needs to be considered, to eliminate the delay and to reduce the variance. The BER performance of CLPC based on the standard SIR estimator is comparable to that of CLPC based on the proposed SIR estimator using an auxiliary spreading sequence. However, the complexity of the SIR estimator with the auxiliary spreading sequence is N times lower than that of the standard method, where N is the processing gain. Therefore the proposed SIR estimator using an auxiliary spreading sequence can be preferred from the implementation perspective.

VII. Further Works
The SIR estimation method with an auxiliary spreading sequence has been shown to have a high variance, because at the symbol/bit level a smaller number of samples (measurements) is available for the averaging operation than when measurements are performed at the chip level. However, the high variance problem can be overcome by using a predictive filtering technique that can predict one sample ahead, eliminating the feedback delay and reducing the variance of the estimated SIR. The topic of prediction filtering techniques is therefore an interesting direction for further research.
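As a pointer to this future direction, the sketch below applies one-step-ahead prediction to a SIR track using a normalized LMS filter; the filter length, step size and test signal are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# One-step-ahead SIR prediction with a normalized LMS filter (a sketch of
# the predictive-filtering idea in Section VII; taps/step are illustrative).
def lms_predict(sir_db, n_taps=4, mu=0.5, eps=1e-6):
    w = np.zeros(n_taps)
    pred = np.zeros_like(sir_db)
    for t in range(n_taps, len(sir_db)):
        x = sir_db[t - n_taps:t][::-1]      # most recent samples first
        pred[t] = w @ x                      # predicted SIR for slot t
        err = sir_db[t] - pred[t]            # prediction error
        w += mu * err * x / (x @ x + eps)    # NLMS weight update
    return pred

# Example: noisy, slowly varying SIR track (illustrative data).
rng = np.random.default_rng(3)
t = np.arange(500)
sir = 10 + 3 * np.sin(2 * np.pi * t / 80) + rng.normal(0, 1, t.size)
pred = lms_predict(sir)
print("raw variance %.2f, prediction-error variance %.2f"
      % (np.var(sir), np.var((sir - pred)[50:])))
```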

References
[1] F. Adachi, M. Sawahashi, and H. Suda, "Promising Technique to Enhance Performance of Wideband Wireless Access Based on DS-CDMA," IEICE Transactions on Fundamentals, vol. E81-A, pp. 2242-2249, November 1998.

[2] W.C.Y. Lee, "Overview of Cellular CDMA," IEEE Transactions on Vehicular Technology, vol. 40, pp. 291-302, May 1991.
[3] K.S. Gilhousen et al., "On the Capacity of a Cellular CDMA System," IEEE Transactions on Vehicular Technology, vol. 40, pp. 303-312, May 1991.
[4] R. Cameron and B.D. Woerner, "An Analysis of CDMA with Imperfect Power Control," in Proceedings of the IEEE 42nd Vehicular Technology Conference, pp. 977-980, May 1992.
[5] F. Simpson and J.M. Holtzman, "Direct Sequence CDMA Power Control, Interleaving, and Coding," IEEE Journal on Selected Areas in Communications, vol. 11, pp. 1085-1095, September 1993.
[6] D. Ramakrishna, N.B. Mandayam, and R.D. Yates, "Subspace Based Estimation of Signal-to-Interference Ratio for CDMA Cellular Systems," in Proceedings of the IEEE 47th Vehicular Technology Conference, pp. 735-739, May 1997.
[7] H. Mizuguchi et al., "Performance Evaluation on Power Control and Diversity of Next Generation CDMA System," IEICE Transactions on Communications, vol. E81-B, pp. 1345-1360, July 1998.
[8] C.C. Lee and R. Steele, "Closed-Loop Power Control in CDMA Systems," IEE Proceedings - Communications, vol. 143, pp. 231-239, August 1996.
[9] A. Kurniawan, "SIR Estimation in CDMA Systems Using Auxiliary Spreading Sequence," MITE-ITB, vol. 5, no. 2, pp. 9-18, August 2000.
[10] A. Kurniawan, "Prediction of Mobile Radio Propagation by Regression Analysis from Signal Measurements," MITE-ITB, vol. 3, no. 1, pp. 11-17, May 1997.
[11] A. Kurniawan, S. Perreau, J. Choi, and K. Lever, "SIR-Based Power Control in Third Generation CDMA Systems," in Proceedings of the 5th CDMA International Conference, Seoul, vol. II, pp. 90-94, November 2000.
[12] 3GPP, "Physical Channels and Mapping of Transport Channels onto Physical Channels (FDD)," Technical Specification TS 25.211, v2.5.0 (1999-10), October 1999.


PERFORMANCE EVALUATION OF COGNITIVE RADIO PHYSICAL LAYER OVER AWGN CHANNEL

Amandeep Kaur Virk, Ajay K Sharma Computer Science and Engineering Department, Dr. B.R Ambedkar National Institute of Technology,

Jalandhar, India

Abstract

This paper analyzes the Bit Error Rate performance of the Cognitive Radio physical layer over an AWGN channel under different channel encoding schemes, digital modulation schemes and channel conditions. The system performs best with Reed-Solomon combined with convolutional encoding under BPSK modulation, as compared to the other digital modulation schemes, and is highly effective in combating the inherent interference of the AWGN channel. The system shows improved BER on using encoding schemes, with the error rate reduced by 17% using Reed-Solomon encoding, 97% using convolutional encoding, and 99% on applying Reed-Solomon with convolutional encoding. The simulation study also indicates that the performance of the communication system degrades as the noise power increases.
Key Words: Cognitive Radio, Bit Error Rate, AWGN, Reed Solomon encoding, Convolution encoding.

I. Introduction

The Federal Communications Commission (FCC) states that temporal and geographical variations in the utilization of the assigned spectrum range from 15% to 85% under current spectrum allocation policies [1]. We therefore need to find ways to allow wireless devices to share the airwaves efficiently. Cognitive Radio (CR) has been proposed as a potential solution to spectrum inefficiency problems [2]. CR promises to increase spectrum usage by allowing unlicensed users to share licensed bands [3]. Licensed users of CR are called primary users and the unlicensed users are called secondary users. CR supports user access to the licensed spectrum as a secondary user when and where channels are detected idle [4]. CR is quite different from traditional wireless radios, so the cognitive radio layers perform additional functionality along with the functions of conventional wireless radios.

This paper presents physical layer analysis of Bit Error Rate (BER) performance over an AWGN channel. Time Division Multiple Access (TDMA) system is used for licensed users along with Orthogonal Frequency Division Multiplexing (OFDM) and non-persistent Carrier Sense Multiple Access (CSMA) system is used for unlicensed users who opportunistically use the idle licensed spectrum. Various digital modulation schemes used are 8PAM, BPSK, QPSK, 8PSK, 16PSK and 16QAM. Reed Solomon (RS) and Convolution encoding schemes are used with varying values of Signal to Noise ratio (SNR) to compute BER.

The paper is organized as follows: Section II gives the channel characteristics, Section III presents the simulation results and finally, the paper is concluded in Section IV.

II. Channel
Additive White Gaussian Noise (AWGN) Channel

In communications, the AWGN channel model is one in which the noise is additive and white, and the noise samples have a Gaussian distribution [6]. Additive noise means the received signal equals the transmitted signal plus some noise, and the noise is statistically independent of the signal. White noise has a flat power spectral density, so the autocorrelation of the noise in the time domain is zero for any non-zero time offset. The model does not account for the phenomena of fading, frequency selectivity, interference, nonlinearity or dispersion. AWGN is commonly used to simulate the background noise of the channel under study, in addition to the multipath, terrain blocking, interference, ground clutter and self interference that modern radio systems encounter in terrestrial operation [7].

AWGN is a channel model in which the only impairment to communication is a linear addition of wideband or white noise with a constant


spectral density and a Gaussian distribution of amplitude. However, it produces simple and tractable mathematical models which are useful for gaining insight into the underlying behavior of a system before other phenomena are considered.
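As a small illustration of the model (not code from the paper), the sketch below passes a unit-power BPSK signal through an AWGN channel at a chosen SNR and counts bit errors:

```python
import numpy as np

# Sketch: pass a unit-power BPSK signal through an AWGN channel at a given
# SNR. The SNR value and signal length are illustrative.
rng = np.random.default_rng(4)
snr_db = 6.0
bits = rng.integers(0, 2, size=100000)
x = 2.0 * bits - 1.0                        # BPSK symbols, power = 1

noise_var = 10.0 ** (-snr_db / 10.0)        # noise power for unit-power signal
y = x + rng.normal(0.0, np.sqrt(noise_var), size=x.size)  # received = signal + noise

ber = np.mean((y > 0).astype(int) != bits)  # hard-decision detection
print("SNR %.1f dB -> simulated BER %.5f" % (snr_db, ber))
```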

Reed Solomon Encoding and Decoding
Reed-Solomon codes are block-based error correcting codes. The Reed-Solomon encoder takes a block of digital data and adds extra redundant bits. The Reed-Solomon decoder processes each block, attempts to correct errors that occurred during transmission, and recovers the original data.

A Reed-Solomon code is specified as RS(n, k) with s-bit symbols. This means that the encoder takes k data symbols of s bits each and adds parity symbols to make an n-symbol codeword. There are n - k parity symbols of s bits each. A Reed-Solomon decoder can correct up to t symbols that contain errors in a codeword, where 2t = n - k. The RS algebraic decoding procedure can correct both errors and erasures. An erasure occurs when the position of an erred symbol is known. A decoder can correct up to t errors or up to 2t erasures [8].
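The (n, k) arithmetic above is easy to make concrete. In the sketch below, RS(255, 223) with 8-bit symbols is an illustrative choice, not a code taken from the paper:

```python
# Sketch: error-correction capability of an RS(n, k) code, where 2t = n - k.
def rs_capability(n: int, k: int):
    parity = n - k          # number of parity symbols
    t = parity // 2         # correctable erroneous symbols
    return t, 2 * t         # (errors, erasures) correctable per codeword

# Illustrative example: the widely used RS(255, 223) with 8-bit symbols.
t, erasures = rs_capability(255, 223)
print("RS(255,223): corrects up to %d symbol errors or %d erasures" % (t, erasures))
```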

Convolution Encoding and Decoding
A convolutional encoder takes k bits at a time and outputs n encoded bits. Convolutional codes are usually described using two parameters: the code rate and the constraint length. The code rate, k/n, is expressed as the ratio of the number of bits into the convolutional encoder (k) to the number of channel symbols output by the convolutional encoder (n) in a given encoder cycle. In this paper, a rate-1/3 convolutional code is used. The Viterbi algorithm is used for decoding [9].
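For illustration, a rate-1/3 convolutional encoder can be sketched as follows; the paper does not state its generator polynomials, so the constraint-length-3 generators (octal 7, 7, 5) are assumed here:

```python
# Sketch: rate-1/3 convolutional encoder, constraint length 3.
# The generators (octal 7, 7, 5) are an illustrative choice; the paper does
# not specify which rate-1/3 code it used.
GENERATORS = [0b111, 0b111, 0b101]

def conv_encode(bits):
    state = 0                                   # shift register contents
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111      # shift in the new bit
        for g in GENERATORS:                    # one output bit per generator
            out.append(bin(state & g).count("1") % 2)
    return out

print(conv_encode([1, 0, 1, 1]))  # 3 output bits per input bit (rate 1/3)
```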

III. Simulation Results
MATLAB 7.10.0 has been used for simulation. Signal to Noise Ratio (SNR) values range from 2 to 16 dB. The modulation schemes used for simulation are 8PAM, BPSK, QPSK, 8PSK, 16PSK and 16QAM. The forward error correction (FEC) encoding schemes used are convolutional encoding and RS encoding. A rate-1/3 convolutional encoding scheme is used and the Viterbi algorithm is used for decoding. The system is observed by separately using convolutional and RS encoding and then by using them in combination with interleavers.

Simulation results are shown for TDMA, CSMA and the overall channel. The overall channel shows the results for the TDMA and CSMA systems combined. TDMA users are the primary users and CSMA users are the secondary users. The number of TDMA users in this scenario is 50 and the number of CSMA users is 50. The total number of TDMA packets to be transmitted is 10000. Individual packets are generated at each user with exponentially distributed inter-arrival times; a Poisson arrival process is used. CSMA users opportunistically transmit packets during idle periods of TDMA transmission. It has been assumed that all the receivers are co-located and all users are at the same distance from the receivers. Both systems use one or more common frequency channels and all packets are of common length. We have considered four cases for CR performance:
Case-I: BER performance over AWGN channel.
Case-II: BER performance over AWGN channel with RS encoding.
Case-III: BER performance over AWGN channel with convolutional encoding.
Case-IV: BER performance over AWGN channel with RS and convolutional encoding.

In all the cases, the system provides degraded performance with 16PSK and satisfactory performance with BPSK and 8PAM modulation. Figures 1 through 4 show the BER performance of data through the CR physical layer under six types of digital modulation schemes on the AWGN channel.

BER performance over AWGN channel
Figure 1 below gives the BER obtained at various SNR values over the AWGN channel for the overall system, TDMA system and CSMA system. It is observed that 8PAM modulation performs best among the modulation schemes. The BER values at SNR 2 dB are 2.8330 x 10-2 (8PAM) for the overall channel, 5.091 x 10-2 (8PAM) for the TDMA system and 2.8809 x 10-2 (8PAM) for the CSMA system.


Figure 1. BER simulations through CR physical layer over AWGN channel (BER versus SNR [dB] for 8PAM, BPSK, QPSK, 8PSK, 16PSK and 16QAM): (a) overall channel, (b) TDMA system, (c) CSMA system

BER performance over AWGN channel with RS Encoding


Figure 2. BER simulations through CR physical layer using RS encoded system for different modulation schemes over AWGN channel: (a) overall channel, (b) TDMA system, (c) CSMA system
Figure 2 above gives the BER obtained at various SNR values over the AWGN channel with RS encoding for the overall system, TDMA system and CSMA system. It is observed that BPSK modulation performs best among the modulation schemes and the BER is reduced by 17%. The BER values for BPSK at SNR 2 dB are 6.4180 x 10-2 for the overall channel, 3.0305 x 10-2 for the TDMA system and 3.4133 x 10-2 for the CSMA system.
BER performance over AWGN channel with Convolutional Encoding
The simulation results for the 1/3 convolutional encoded AWGN channel are shown in Figure 3 below. Figure 3(a) shows the improved results for the overall channel, with an improved BER value at SNR 2 dB for BPSK of 0.832 x 10-3. The improved value for the TDMA system at SNR 2 dB is 0.832 x 10-3 and for the CSMA system it is 0, as shown in Figures 3(b) and (c) respectively. Convolutional encoding removes bit errors and the system shows a 97% reduction in errors.
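For concreteness, the 17% figure quoted for RS encoding is consistent with the Table 1 BPSK values for the overall channel; a quick check under that reading:

```python
# Reproducing the quoted ~17% BER reduction for RS encoding from Table 1
# (BPSK, overall channel, SNR = 2 dB).
ber_uncoded = 7.7080e-2   # AWGN channel, BPSK, overall
ber_rs = 6.4180e-2        # RS encoding, BPSK, overall
reduction = (ber_uncoded - ber_rs) / ber_uncoded
print("BER reduction with RS encoding: %.1f%%" % (100 * reduction))  # ~16.7%
```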

Figure 3. BER simulations through CR physical layer using 1/3 convolutional encoded system for different modulation schemes over AWGN channel: (a) overall channel, (b) TDMA system, (c) CSMA system

BER performance over AWGN channel with RS and Convolutional Encoding
Figure 4 shows the simulation results for the RS and convolutional encoded AWGN channel. The BER value at SNR 2 dB for the BPSK and 8PAM modulation schemes is 0, and it is 0.2188 x 10-2 for the 16QAM modulation scheme. RS encoding eliminates block errors and convolutional encoding removes bit errors, and the system shows a 99% improvement.

Figure 4. BER simulations through CR physical layer using RS and 1/3 convolutional encoded system for different modulation schemes over AWGN channel: (a) overall channel, (b) TDMA system, (c) CSMA system
Table 1 shows the BER values obtained for the various modulation schemes using RS and convolutional encoding. The table highlights the lowest BER values obtained in each case.

IV. Conclusion
In this paper, the BER performance of data through the CR physical layer has been shown using RS and convolutional channel coding and different digital modulation schemes. The range of system performance highlights the impact of the digital modulations under RS and convolutional coding over the AWGN channel. The system shows improved performance on using encoding schemes: the BER improvement is 17% for RS encoding, 97% for convolutional encoding and 99% for RS combined with convolutional encoding. In the context of system performance, it can thus be concluded that the implementation of BPSK modulation together with RS and convolutional channel coding provides the most satisfactory result among the digital modulation schemes at limited SNR.

Table 1. BER values (SNR = 2 dB) for different modulation and encoding schemes over AWGN channel

Encoding / Channel           8PAM            BPSK            QPSK             8PSK              16PSK             16QAM
AWGN channel
  Overall channel       5.7139 x 10-2   7.7080 x 10-2   20.4663 x 10-2   39.3102 x 10-2    51.8367 x 10-2    6.0892 x 10-2
  TDMA                  2.8330 x 10-2   3.7041 x 10-2   10.3589 x 10-2   19.0954 x 10-2    26.5437 x 10-2    3.1270 x 10-2
  CSMA                  2.8809 x 10-2   4.0039 x 10-2   10.1074 x 10-2   20.2148 x 10-2    25.2930 x 10-2    2.9622 x 10-2
RS encoding
  Overall channel       8.3936 x 10-2   6.4180 x 10-2   27.7383 x 10-2   70.2109 x 10-2    90.6992 x 10-2    9.4932 x 10-2
  TDMA                  3.7803 x 10-2   3.0305 x 10-2   14.3594 x 10-2   34.8594 x 10-2    46.3633 x 10-2    5.4893 x 10-2
  CSMA                  4.6875 x 10-2   3.4133 x 10-2   13.3789 x 10-2   35.3516 x 10-2    44.3359 x 10-2    4.0039 x 10-2
1/3 convolution encoding
  Overall channel       4.870 x 10-3    0.832 x 10-3    68.7340 x 10-2   98.6060 x 10-2    101.8882 x 10-2   1.1022 x 10-2
  TDMA                  0.870 x 10-3    0.832 x 10-3    32.7340 x 10-2   49.9060 x 10-2    49.9882 x 10-2    6.022 x 10-3
  CSMA                  4.0 x 10-3      0               36.0 x 10-2      48.70 x 10-2      51.90 x 10-2      5.0 x 10-3
RS with 1/3 convolution encoding
  Overall channel       0               0               63.8760 x 10-2   102.0977 x 10-2   97.6904 x 10-2    0.2188 x 10-2
  TDMA                  0               0               34.0908 x 10-2   50.1445 x 10-2    49.9365 x 10-2    0.2188 x 10-2
  CSMA                  0               0               29.7852 x 10-2   51.9531 x 10-2    47.7539 x 10-2    0


References
[1] Danijela Čabrić and Robert W. Brodersen, "Physical Layer Design Issues Unique to Cognitive Radio Systems", Berkeley Wireless Research Center, University of California at Berkeley, USA.
[2] Wang Weifang, "Denial of Service Attacks in Cognitive Radio Networks", 2nd Conference on Environmental Science and Information Application Technology, IEEE, 2010.
[3] Xueying Zhang and Cheng Li, "The security in cognitive radio networks: A survey", 2009.
[4] Peha J. M., "Approaches to spectrum sharing", IEEE Communications Magazine, 2005, 43, (2), pp. 10-12.
[5] Mitola J. and Maguire G., "Cognitive radio: making software radios more personal", IEEE Personal Communications, 1999, 6, (4), pp. 13-18.
[6] www.wirelesscommunications.nl
[7] Syed Asif, Abdullah-al-muraf, S.M. Anisul Islam, Amitavo Tikader, and Abdul Alim, "Comparison of BER between uncoded signal and coded signal over slow Rayleigh fading channel", Journal of Theoretical and Applied Information Technology, 2005-2009.
[8] www.cs.emu.edu
[9] Chip Fleming, "A tutorial on convolutional encoding with Viterbi decoding".


CELL SIZE DETERMINATION IN HETEROGENEOUS ENVIRONMENT FOR OFDM BASED TECHNOLOGY

Kanchan Chaube, Gopal Chandra Manna, Bhavana Jharia
Government Engineering College, Jabalpur, MP, India; Inspection Circle, BSNL
[email protected], [email protected], [email protected]
_________________________________________________________________________________

Abstract

The next generation of mobile wireless communication is based on OFDM technology. The radio part of this technology has been designed based on the COST 231 model, as modified in the Walfisch-Ikegami (WIM) model. The WIM model takes into consideration the growing sub-urban situation and takes care of LOS and non-LOS propagation based on diffraction and scattering. In the present study, propagation of the OFDM signal has been examined in a heterogeneous environment consisting of plain land, hills, single storied buildings, low height factory shades, multi-storied buildings and large vegetation with trees of various heights and leaf sizes. The obtained results were compared with the Extended Okumura-Hata model. The results provide definite values of the path-loss exponent and fading factor which, along with the Jakes graph, lead to determination of the cell radius with a desired probability in a given environment.

Keywords : OFDM, propagation path loss, COST 231, Walfisch-Ikegami, Jakes graph

___________________________________________________________________________

1. Introduction

Radio waves propagated through space are subject to free space path loss. Near the earth they are further subject to impairments like multi-path reflection, refraction through penetration of walls, scattering at objects like street poles and hoardings, diffraction at building corners, etc. In green field environments, hills and trees contribute to impairments, and moving objects contribute to rapid fading of signals. Several scientists have carried out research on radio wave propagation through different types of environments and have come out with different solutions applicable to different types of clutter and environmental situations. While implementing practical models at different frequencies, radio engineers have taken care of these impairments in propagation environments to the extent possible. OFDM, the latest entrant in mobile radio access technology, has taken care of these factors in its radio model: it incorporates a guard time to avoid delay spread and improves the fading behaviour through spatial multiplexing. It is, therefore, necessary to study the deviation, if any, from the incorporated model at the radio path in its Wide Area Network implementation called WiMAX. This will enable radio engineers to properly plan the implementation of mobile radio networks using OFDM technology in the next generations.

Section 2 deals with the literature survey on the subject. Analytical models incorporating radio environments and the model used in the OFDM implementation are discussed in Section 3. The measurement setup and related parameters are described in Section 4. Section 5 illustrates the measurement environment and the record of data. Section 6 analyses the recorded data, applies the formulae and computes the results. Section 7 concludes the observations.

2. Literature Survey

Several researchers have worked in the area of wireless mobile communications. Wireless communication suffers many hindrances due to channel impairments like multipath effects, shadowing, fading, etc. Early work on radio wave propagation was carried out by several scientists from the 1960s. Yoshihisa Okumura [1] carried out detailed propagation studies in 1968 for the land-mobile radio service at VHF (200 MHz) and UHF bands (453, 922, 1310, 1430 and 1920 MHz) over irregular terrain and clutter. His studies include the variation of signal strength with distance, frequency, antenna height etc. in urban,


sub-urban and open areas. He introduced appropriate formulae for different areas with suitable correction factors for different terrains. Hata [2] consolidated the results of the measurements, with several correction factors as riders for different situations. Taking a step ahead, Peter Bernardin [3] of the University of Texas at Dallas refined the area coverage probability concept into a cell radius coverage probability concept, for better network coverage in both the 800 MHz and 1900 MHz bands, covering both GSM and CDMA in different environments, in 2005. The Electrical and Computer Engineering Department of the University of Central Florida conducted a survey during November 1999 for coverage of Miami city for the transport department, based on Direct Sequence Spread Spectrum technology in the 2400 MHz to 2483.5 MHz band [4]. Similarly, RF propagation studies were thoroughly done by different teams to extend recommendations to the IEEE 802.16 Broadband Wireless Access working groups for finalisation of the link budget and calculation of the maximum allowable path loss. A modified Okumura-Hata model named COST 231 [5] was recommended in 1999. The Stanford University Interim (SUI) channel model [6] recommendation for fixed wireless applications was released in January 2001. A project was initiated by Ofcom U.K. for a comparative study of propagation pathloss models [7] for Cambridge, U.K. in the 3.5 GHz band during September to December 2003. A propagation study of pathloss at a still higher frequency, viz. 5.3 GHz [8], was done during the summer of 1998 at Ottawa, Ontario, and the recommendation was submitted to the IEEE 802.16 wireless access working group during November 2000.

A study was conducted in 2007 to estimate the coverage of WiMAX based on IEEE 802.16d at Kalyani, a suburban area of Kolkata City, for estimation of both line of sight (LOS) and non line of sight (NLOS) performance [9]. A study to measure TCP and UDP based throughput along with RSSI and CNR was conducted in 2006, after release of WiMAX certified equipment, in sub-urban areas of Oslo, Norway for fixed WiMAX ranging up to 15 km on plain and rising slopes up to a height of 251 m above sea level [10]. Measurements for a mobile WiMAX system were made in rural areas of Japan, viz. the Yamaguchi district of Akaiwa city in the Okayama prefecture, where the residential and agricultural field areas are surrounded by mountains; a hybrid approach with 802.11b/g LAN was deployed [11]. RSSI and CINR were thoroughly measured for WiMAX coverage in desert and hilly areas, deploying an outdoor CPE having 13 dB gain at a height of 3 m up to a distance of 9 km [12]. The measured performance indicated the need for improved throughput in most cases.

3. Analytical details

In the case of isotropic antennas, and using λ = c/f where c is the velocity of light and f the frequency of the wave, the free space pathloss expressed in decibels is

$$ L_{fs} = 32.44 + 20 \log d + 20 \log f \qquad (1) $$

where L is the pathloss in decibels from transmitter to receiver, d is in kilometres and f is in megahertz. The general path loss L in dB, following Okumura and Hata, can be written as

$$ L = C_1 + C_2 \log f - 13.82 \log h_t - a(h_m) + [44.9 - 6.55 \log h_t] \log d + C_0 \qquad (2) $$

where L is the median path loss in dB, $h_t$ and $h_m$ are in metres, d is in km and f is in MHz, and:
$a(h_m) = \{1.1 \log f - 0.7\}\, h_m - \{1.56 \log f - 0.8\}$ for urban areas;
$a(h_m) = 3.2 \{\log(11.75\, h_m)\}^2 - 4.97$ for dense urban areas;
$C_1 = 69.55$ and $C_2 = 26.16$ for 150 MHz < f < 1500 MHz [Okumura-Hata];
$C_1 = 46.3$ and $C_2 = 33.9$ for 1500 MHz < f < 2000 MHz [Extended Okumura-Hata];
$C_0 = 3$ dB for dense urban, else 0; $h_m$ = height of mobile in m, $h_t$ = height of tower in m.

In Europe, the Cooperation in Science and Technology (COST) research project 231 recommended adoption of the International Telecommunication Union (ITU) model for radio propagation, including a semi-deterministic model for medium (0.2 km) to large (5 km) cells in the 800 MHz to 2000 MHz band, called the Walfisch-Ikegami (WIM) model. The WIM model also assumes an antenna height above 30 m and a high degree of Fresnel zone clearance. The WIM model is largely used for OFDM radio in the 2 GHz band. For line of sight (LOS) and non-LOS (NLOS) propagation, the WIM model loss is expressed as

$$ L_{los} = L_{fs} + 6 \log(50 d) \qquad (3) $$

and

$$ L_{nlos} = L_{fs} + L_{rts} + L_{msd} \qquad (4) $$

where $L_{rts}$ is the roof-to-street diffraction loss and $L_{msd}$ is the multi-screen diffraction loss, with

$$ L_{rts} = -16.9 - 10 \log w + 10 \log f + 20 \log \Delta h_m + L_{ori} $$

where
$L_{ori} = -10 + 0.354\,\Phi$ for $0 \le \Phi \le 35°$;
$L_{ori} = 2.5 + 0.075\,(\Phi - 35°)$ for $35° \le \Phi \le 55°$;
$L_{ori} = 4.0 - 0.114\,(\Phi - 55°)$ for $55° \le \Phi \le 90°$;
Φ = angle of incidence of the mobile with respect to the street from the building/obstacle; $\Delta h_m$ = height of the mobile antenna below rooftop; w = width of the street (tower to building/obstacle separation).

In the present calculation, the Extended Okumura-Hata model has been used for line of sight in the near region and in the mixed region above 109.68 m. WIM has been used for the NLOS condition, where multi-screen diffraction is taken as nil. No specific model has been recommended for propagation through leafy trees, or similarly for low slope hills.
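Equation (2) is easy to evaluate directly. A minimal sketch using the Extended Okumura-Hata constants quoted above follows; the example frequency, antenna heights and distance are illustrative inputs, not measurement values from the paper:

```python
import math

# Sketch: Extended Okumura-Hata median path loss, equation (2).
# C1 = 46.3, C2 = 33.9 apply for 1500 MHz < f < 2000 MHz; C0 = 3 dB for
# dense urban, else 0. Example inputs below are illustrative only.
def extended_hata_loss(f_mhz, ht_m, hm_m, d_km, dense_urban=False):
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * hm_m
            - (1.56 * math.log10(f_mhz) - 0.8))      # urban a(hm) correction
    c0 = 3.0 if dense_urban else 0.0
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(ht_m)
            - a_hm + (44.9 - 6.55 * math.log10(ht_m)) * math.log10(d_km) + c0)

# Example: 1800 MHz carrier, 35 m tower, 1.5 m mobile, 1 km separation.
print("L = %.1f dB" % extended_hata_loss(1800.0, 35.0, 1.5, 1.0))
```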


An empirical formula to define coverage is

$$ L = L_0 + 10\,\gamma \log(d/d_0) + \sigma $$

where $L_0$ is the fixed loss, $d_0$ a reference distance, and σ the standard deviation from the mean, i.e. the fading factor. The σ/γ value can be plotted on the Jakes graph to determine the area coverage probability based on the cell edge probability.
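To show how this formula feeds cell-size determination, the sketch below inverts it for the cell radius given a maximum allowed path loss and a fade margin; all numbers are hypothetical placeholders, not the paper's measurements:

```python
# Sketch: invert L = L0 + 10*gamma*log10(d/d0) + margin for the cell radius d.
# All numbers below are hypothetical placeholders, not measured values.
def cell_radius_km(mapl_db, l0_db, gamma, fade_margin_db, d0_km=0.1):
    # Solve mapl = l0 + 10*gamma*log10(d/d0) + fade_margin for d.
    exponent = (mapl_db - l0_db - fade_margin_db) / (10.0 * gamma)
    return d0_km * 10.0 ** exponent

# Example: MAPL 112 dB, fixed loss 64 dB at d0 = 100 m, exponent 2.4,
# fade margin 9.5 dB (roughly one sigma).
print("cell radius ~ %.2f km" % cell_radius_km(112.0, 64.0, 2.4, 9.5))
```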

4. Environment and Measurement Setup

A Base Station (BS) was installed at Chakan, a rural area of the Maharashtra province of India. To the east, the Pune-Nasik highway is located at a distance of 1.75 km. Houses and markets are situated around the highway. The area is a developing industrial one, with a few scattered single storied factories at the same ground level near the BS. Small low height barren hills of various sizes and heights are found towards the west and south west. A few multi-storied buildings are under construction towards the south east of the BS. There are scattered trees of usual height with small dense leaves towards the north. Fig 1 shows the environment. The measurement setup for collection of data is described below.

A. Base Station: The WiMAX IEEE 802.16e compliant Base Station was located at Chakan, Maharashtra. It is parented to the Pune ASN through an SDH link. The Radio Unit, housed at 40 m height, was configured with 2Tx x 2Rx MIMO, with a total radiated power of 40 dBm on 2 Tx and an EIRP of 57 dBm. Downlink MAPL values were 130 dB, 124 dB and 112 dB for BPSK, QPSK and 16QAM, with corresponding rural radii of 1.62 km, 1.09 km and 0.49 km at data rates of 1 Mb, 2 Mb and 4 Mb respectively. The 3 sectors have a bandwidth of 5 MHz each, with 512 subcarriers out of which 360 tones were for traffic.

B. Mobile Station: A drive test tool with receiver and laptop was taken in a vehicle with a GPS mounted on the roof. The laptop was loaded with Agilent Software V3.92. Power was fed from the lighter port of the vehicle.

C. Setup: The base station configuration was exploited through the Local Craft Terminal (LCT). Connectivity of the BS ethernet to TDM-SDH was accorded through a converter. It connects to an Add Drop Multiplexer to port the WiMAX traffic to the nearest Edge Router of the IP network, about 30 km away at Pune.

D. Radio Resource Unit (RRU): RRUs are mounted on the BS, i.e. a ground based tower.

The α sector points towards the Pune-Nasik Highway at the east, the β sector 120° clockwise towards the industrial area, and the γ sector 120° anticlockwise, covering mostly open field.

5. Data Collection

The data was collected in anti-clockwise and clockwise rounds starting from the BS, shown as O in Fig 1, and the received signal strength from only one antenna was plotted: above -75 dBm in red, between -95 dBm and -75 dBm in green, and below -95 dBm in blue.

In the anti-clockwise direction, from O to A, the road was straight, with slight up and down variation of the signal due to trees. From A to B there was obstruction due to a low height hill (Fig 4), nearly the height of the antenna, resulting in a signal level below -95 dBm (blue) between B and C, which is unusable for communication. D to E again had a LOS signal above -75 dBm (red), where 16QAM modulation is possible. The part EFG is still in LOS condition except for some low height factory shades near G, which indicated that 1.765 km is the maximum distance up to which -75 dBm is possible. There is no communication in the parts of FG which come under partial shade. Communication is possible with QPSK and BPSK in the GH part because of the reflective component from the hill behind, even though the area is under NLOS condition. The HK part is under NLOS, whereas for section KL there are scattered houses with one storied buildings and small trees, and the BS is slightly visible, showing communicable signal strength. The part LM is under the shade of a multistoried building, the tower is invisible and communication was not possible. In the MH section there was faint communication again, and it could be concluded that communication is not possible beyond 2.5 km in a mixed type of environment where LOS is hardly available.

In the clockwise direction, the vehicle goes towards the north-west showing full strength with a downward slope. After that, the vehicle takes a right turn where the road passes through tropical trees with small, thin but dense leaves. The signal from the BS has to pass through leaves over spans ranging from 3 m to 10 m (~80λ), where the signal strength goes down rapidly and even falls below communication level.

6. Results and Discussion

In the near zone (Fig 2), the measurement passed through a very close range of the tower (~35 m) towards the north-west direction, which has a clear line of sight situation. It is observed that the path-loss steadily increases up to a distance of 50 m, after which the direct signal from the BS and the reflected signal from the road start to combine constructively, resulting in a reduction in pathloss. The combining continued up to a distance of ~110 m, after which fading due to the distance factor starts to dominate, and hence a positive slope for pathloss. The observation continued up to 630 m on the line of sight, after which the vehicle took a turn to encircle the BS.

Fig 1 shows the observation contour along with the straight line distance from the tower mapped onto it. The contour covers in-between hills, diffraction from small buildings and factory sheds, reinforcement from distant hills, multi-screen diffraction from tall buildings etc. up to a distance of 2.5 km. The combined environment path loss variation with distance (in log scale) is shown in Fig 3. The observed data is subjected to a linear trendline and compared with the Extended Hata model data for the same distances. It is observed that the two plots intersect each other at a distance of 309.0295 m, where the loss is 123 dB. In the Extended Hata formula, the fixed loss comprises 46.3 dB plus additional frequency and height dependent losses, whereas the corresponding loss observed from measurement is 54.787 dB plus 9.513 dB as the fading component (Fig 5), i.e. 64.3 dB in total, and hence fairly matching. For the Extended Hata model, the path-loss exponent is 3.478635 at 35 m antenna height, whereas the corresponding value obtained from measurement is 2.402. A better measured performance is expected due to the contribution of the 2x2 MIMO architecture.

Path loss due to a low slope hill with height ~80 m and length ~800 m introduces a loss of 20 dB (Fig 4); more specifically, a 1 in 5 slope introduces 1 dB loss per 4 m of height.

Table 1 compares received signal strength values (RSSI) mapped to the Maximum Allowed Path Loss (MAPL) of the Walfisch-Ikegami Model (WIM), calculates the corresponding maximum distance d in km, and gives the distance at which the same loss was obtained through measurement. In addition, it compares these with typical manufacturer's data for the corresponding distance, data speed and related modulation. Three cases, viz. LOS, NLOS with 10 dB scatter loss and NLOS with 20 dB scatter loss, have been considered for 3 types of modulation each.

Calculations show that a single storied building of average 4 m height at a distance of 1 km from the tower, with an orientation loss of 3.65 dB, has an Lrts value of 0.0133586 dB, and a 32 m building with 4 dB orientation loss has an Lrts value of 20.874884 dB at 2650 MHz. Lmsd was not considered, as only one row of multistoried buildings was present for diffraction. Table 1 shows that under NLOS condition with BPSK modulation and 1 Mbps download throughput as per the manufacturer specification, the measured distance exceeds the WIM model calculation in both the 10 dB and 20 dB scatter loss conditions.

Table 1: Modulation, speed, WIM model distance vs. actual measured distance

7. Conclusion

From all the above considerations in general, and close examination of Figs 3 and 5 in particular, it is observed that the path loss of the OFDM with MIMO based system is lower than the loss predicted by the Extended Hata model and the WIM model, for both LOS and NLOS conditions. Hence, practical determination of cell size based on the above models shall lead to under-utilisation of the deployed infrastructure. The Extended Okumura-Hata model is presently used for cell planning. A lower value of the pathloss exponent implies a higher σ/γ value, which provides a much higher (i.e. more stable) cell edge probability for a particular level of area coverage probability around a base station, as obtained from the Jakes graph. Thus, environment and MIMO configuration have to be taken into consideration during cell planning for practical implementation.

References:

[1] Yoshihisa Okumura, Eiji Ohmori, Tomihiko Kawano and Kaneharu Fukuda, "Field Strength and Its Variability in VHF and UHF Land-Mobile Radio Service", Rev. Elec. Commun. Lab., vol. 16, pp. 825-873, Sept-Oct 1968.
[2] M. Hata, "Empirical formula for propagation loss in land mobile radio service", IEEE Transactions on Vehicular Technology, vol. 29, Aug 1980.
[3] Peter Bernardin, "Predicting and Verifying Cellular Network Coverage", University of Texas at Dallas, Texas Wireless Symposium, 2005.
[4] "Wireless Communication Spectrum Guidelines for ITS", Electrical and Computer Engineering Department, University of Central Florida, submitted to the Department of Transportation, State of Florida, November 5, 1999.
[5] COST Action 231, "Digital Mobile Radio towards Future Generation Systems, final report", Tech. Rep., European Communities, EUR 18957, 1999.
[6] V. Erceg, K.V.S. Hari et al., "Channel models for fixed wireless applications", Tech. Rep., IEEE 802.16 Broadband Wireless Access Working Group, January 2001.
[7] V.S. Abhayawardhana, I.J. Wassell, D. Crosby, M.P. Sellars, M.G. Brown, "Comparison of Empirical Propagation Path Loss Models for Fixed Wireless Access Systems", Project of Ofcom, UK, Ofcom Ref: AY 4463.
[8] John Sydor, "Propagation of 5.3 GHz through urban foliated environment: summary of propagation pathloss and signal de-polarisation data", IEEE 802.16 Broadband Wireless Access Working Group, submitted 02-11-2000.
[9] S.K. Sarkar and G.C. Manna, "Performance Evaluation of IEEE 802.16 based System in Sub-urban Area", Telecommunications, vol. 59, issue 2, March-April 2009.
[10] Ole Grøndalen, Pål Grønsund, Tor Breivik, Paal Engelstad, "Fixed WiMAX Field Trial Measurements and Analyses", IEEE Xplore Digital Library.
[11] Makoto Ono, Masaharu Hata, Shigeru Tomisato, "Experimental Study of BWA System Applied in an Intermediate and Mountainous Area", 2009, IEEE Xplore Digital Library.
[12] G.C. Manna and Bhavana Jharia, "Mobile WiMAX Coverage Evaluation for Rural Areas", International Conference in Advanced Computing Technology, South Korea, Feb 13-15, 2011.


Fig 1: BS, Environment and Drive Test route with signal level plot

Fig 2: Near distance path-loss behaviour


Fig 3: Overall Path loss beyond Near Zone

Fig 4: Path-loss estimation due to low slope hill.


Fig 5: Signal strength distribution and Rayleigh fading factor estimation


ANALYSES OF CRYPTOGRAPHIC TECHNIQUES FOR PRIVACY PRESERVATION

N.Punitha Research Scholar, R.Amsaveni Assistant Professor,

Department of Computer Science, PSGR Krishnammal College for Women Coimbatore, India.

[email protected]
___________________________________________________________________________
Abstract
The immense advance of the Internet has brought exciting promise for businesses, governments and consumers, changing the way we use our computers and how we deal with sensitive information. Protecting private data is a vital concern: analysis of an individual's data is demanded earlier than ever, and mandated privacy is a growing challenge. Cryptography, the study of mathematical techniques related to aspects of information security such as confidentiality, data integrity, entity authentication and data origin authentication, is shaping the way that information is safely and securely transmitted over the Internet. The range of sensitive information is quite large: credit card information, social security numbers, private correspondence, military statements, bank account information. This paper attempts to analyse two cryptographic techniques (elliptic curve and ElGamal) to find solutions for various scenarios. Encrypting files using the two methods has never been in question; here, however, files of different types, such as Adobe PDF, text (txt) and document (doc), are considered with specific attention to file size and the time taken to encrypt and decrypt the files in the two modes. This also allows hypotheses on the content of the files before detailed attention to the encryption techniques.

1. INTRODUCTION

The primary task in data mining is the development of models about aggregated data. Data collection is growing rapidly, along with a flood of analysis tools capable of handling huge volumes of information, and data mining may therefore pose a threat to our privacy and data security. The real privacy concerns arise with unlimited access to individual records. Privacy preserving data mining is a novel research direction in data mining and statistical databases, where data mining algorithms are analyzed for the side-effects they incur on data privacy. The main objective in privacy preserving data mining is to develop algorithms for altering the original data in some way, so that the private data and private knowledge remain private even after the mining process. The problem that arises when confidential information can be derived from released data by unauthorized users is commonly called the "database inference" problem. Security is also an important question with any data collection that is shared and/or is projected to be used for strategic decision making, given the confidential nature of some of this data and the potential for illegal access to it. Moreover, data mining could disclose new implicit knowledge; some important information could be withheld, while other information could be widely distributed and used without control. Multifarious issues, such as those involved in Privacy Preserving Data Mining (PPDM), cannot simply be addressed by restricting data collection or even by restricting the secondary use of information technology.

1.1 Privacy Preserving Data Mining

Privacy preserving data mining analysis is a mixture of the data of heterogeneous users without disclosing private and vulnerable details. The problem demands a valid but prescribed approach to early privacy preserving analysis in the setting of component-based software development, in order to evaluate and compare all the techniques on a universal platform and to devise, build and execute functionalities such as a user-friendly framework, portability, etc. Privacy preserving techniques can be classified into many approaches which have been adopted for privacy preserving data mining. We can classify them based on the following dimensions:
• Data distribution
• Data modification
• Data mining algorithm
• Data or rule hiding
• Privacy preservation

The first dimension refers to the distribution of data. Some of the approaches have been developed for centralized data, while distributed data scenarios can be further classified into horizontal and vertical data distribution. The second dimension refers to data modification. In general, data modification is used to modify the original values of a database that needs to be released to the public, and in this way to ensure high privacy protection. Common modification techniques, sketched in code after this list, are:
• Perturbation, which is accomplished by the alteration of an attribute value by a new value (i.e., changing a 1-value to a 0-value, or adding noise);
• Blocking, which is the replacement of an existing attribute value with a "?";
• Aggregation or merging, which is the combination of several values into a coarser category;
• Swapping, which refers to interchanging the values of individual records;
• Sampling, which refers to releasing data for only a sample of a population.
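A minimal Python sketch of these modification techniques on a toy table (the records and helper names are invented for illustration; no PPDM library is assumed):

import random

records = [
    {"age": 34, "zip": "641004", "diagnosis": 1},
    {"age": 52, "zip": "641012", "diagnosis": 0},
]

def perturb(value, noise=2):
    # Perturbation: replace the value with value + random noise.
    return value + random.randint(-noise, noise)

def block(record, field):
    # Blocking: replace an existing attribute value with "?".
    record[field] = "?"

def swap(recs, field):
    # Swapping: interchange the values of a field between two records.
    recs[0][field], recs[1][field] = recs[1][field], recs[0][field]

for r in records:
    r["age"] = perturb(r["age"])
    block(r, "zip")
swap(records, "diagnosis")
print(records)   # the modified table is released instead of the raw one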

The third dimension refers to the data mining algorithm for which the data modification is taking place. This is actually something that is not known beforehand, but it facilitates the analysis and design of the data hiding algorithm. The fourth dimension refers to whether raw data or aggregated data should be hidden. The complexity of hiding aggregated data in the form of rules is of course higher, and for this reason mostly heuristics have been developed. The last dimension, which is the most important, refers to the privacy preservation technique used for the selective modification of the data. Selective modification is required in order to achieve higher utility for the modified data, given that privacy is not threatened.

The techniques that have been applied for this reason are:

• Heuristic-based techniques, like adaptive modification, which modify only selected values that minimize the utility loss rather than all available values.

• Cryptography-based techniques, like secure multiparty computation, where a computation is secure if at the end of the computation no party knows anything except its own input and the results.

• Reconstruction-based techniques where the original distribution of the data is reconstructed from the randomized data.

1.2 CRYPTOGRAPHY

Cryptography is the science of writing in secret code and is an ancient art; the first documented use of cryptography in writing dates back to circa 1900 B.C., when an Egyptian scribe used non-standard hieroglyphs in an inscription. Some experts argue that cryptography appeared spontaneously sometime after writing was invented, with applications ranging from diplomatic missives to war-time battle plans. It is no surprise, then, that new forms of cryptography came soon after the widespread development of computer communications. In data and telecommunications, cryptography is necessary when communicating over any untrusted medium, which includes just about any network, particularly the Internet.

Within the context of any application-to-application communication, there are some specific security requirements, including:
• Authentication: the process of proving one's identity. (The primary forms of host-to-host authentication on the Internet today are name-based or address-based, both of which are notoriously weak.)
• Privacy/confidentiality: ensuring that no one can read the message except the intended receiver.
• Integrity: assuring the receiver that the received message has not been altered in any way from the original.

• Non-repudiation: a mechanism to prove that the sender really sent this message.

Cryptography not only protects data from theft or alteration, but can also be used for user authentication. There are, in general, three types of cryptographic schemes typically used to accomplish these goals: secret key (or symmetric) cryptography, public-key (or asymmetric) cryptography, and hash functions. In all cases, the initial unencrypted data is referred to as plaintext; it is encrypted into ciphertext, which will in turn (usually) be decrypted into usable plaintext. Two communicating parties will be referred to as Alice and Bob; this is the common convention in the crypto field and literature, used to make it easier to identify the communicating parties. If there is a third or fourth party to the communication, they will be referred to as Carol and Dave; Mallory is a malicious party, Eve is an eavesdropper, and Trent is a trusted third party. Cryptographic schemes are categorized by the number of keys employed for encryption and decryption, and further defined by their application and use. The three types of algorithms are:
• Secret Key Cryptography (SKC): uses a single key for both encryption and decryption.
• Public Key Cryptography (PKC): uses one key for encryption and another for decryption.
• Hash Functions: use a mathematical transformation to irreversibly "encrypt" information.
The need for a secure method to transmit information is very important in today's world. The most secure forms of protection are known as encryption, the process of encoding information in such a way that only the person(s) who know the key, or code, are able to view it. Symmetric key encryption is the less secure of the two key-based methods, because the single key used both to encrypt and to decrypt must be shared; the value is readily usable by anyone who knows where to look for it. This type of information protection is generally acceptable for the majority of users, for email and other simple information transfers. Public key encryption is a more secure method of concealing sensitive information, because only one of the two keys required to compute the problem is available to the public; the other key is private and is not shared with anyone. So even if someone knows one value and one of the keys, it is still very difficult and time-consuming to factor out the other values. However, cryptography is not foolproof; no one can guarantee one hundred percent security, and a good cryptographic system strikes the balance between what is possible and what is acceptable. A major reason for this is that those assigned the task of encryption must try to block every angle of attack upon their code, whereas those trying to break the encryption need only find one mistake, or a backdoor into the encryption. There are two different types of attacks on encryption and protocols: "1. A passive attack is one where the adversary only monitors the communication channels. A passive attacker only threatens the confidentiality of information. 2. An active attack is one where the adversary attempts to delete, add, or in some other way alter the transmission on the channel. An active attacker threatens data integrity and authentication as well as confidentiality." Strong cryptography can withstand targeted attacks only until a certain degree of intensity has been reached.

2. LITERATURE REVIEW

1. The Round Complexity of Secure Protocols (1990)

In a network of n players, each player i having a private input x_i, we show how the players can collaboratively evaluate a function f(x_1, ..., x_n) in a way that does not compromise the privacy of the players' inputs, and yet requires only a constant number of rounds of interaction. The underlying model of computation is a complete network of private channels, with broadcast, and a majority of the players must behave honestly. Our solution assumes the existence of a one-way function. Secure function evaluation: assume we have n parties 1, ..., n, where each party i has a private input x_i known only to him. The parties want to correctly evaluate a given function f on their inputs, that is, to compute y = f(x_1, ..., x_n), while maintaining the privacy of their own inputs.
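As a toy illustration of secure function evaluation for the special case f = sum, the following Python sketch uses additive secret sharing (this is not the constant-round construction of the cited paper; party names and values are invented):

import random

p = 2**31 - 1                             # public modulus
inputs = {"P1": 10, "P2": 20, "P3": 12}   # each party's private input

# Each party splits its input into random additive shares, one per party,
# so that the shares sum to the input mod p.
shares = {name: 0 for name in inputs}
for holder, x in inputs.items():
    parts = [random.randrange(p) for _ in range(len(inputs) - 1)]
    parts.append((x - sum(parts)) % p)
    for name, part in zip(inputs, parts):
        shares[name] = (shares[name] + part) % p

# Publishing only the per-party share totals reveals the sum, not the inputs.
y = sum(shares.values()) % p
assert y == sum(inputs.values()) % p
print("f(x_1, ..., x_n) = sum =", y)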

2. Oblivious Transfers and Intersecting Codes
G. Brassard, C. Crepeau and M. Santha

Assume A owns t secret k-bit strings. She is willing to disclose one of them to B, at his choosing, provided he does not learn anything about the other strings. Conversely, B does not want A to learn which secret he chose to learn. A protocol for the above task is said to implement one-out-of-t string oblivious transfer, denoted (t choose 1)-OT_2^k. This primitive is particularly useful in a variety of cryptographic settings. An apparently simpler task corresponds to the case k = 1 and t = 2 of two 1-bit secrets: this is known as one-out-of-two bit oblivious transfer, denoted (2 choose 1)-OT_2. We address the question of implementing (t choose 1)-OT_2^k assuming the existence of a (2 choose 1)-OT_2. In particular, we prove that an unconditionally secure (2 choose 1)-OT_2^k can be implemented from Θ(k) calls to (2 choose 1)-OT_2. This is optimal up to a small multiplicative constant. Our solution is based on the notion of self-intersecting codes. Of independent interest, we give several efficient new constructions for such codes. Another contribution of this paper is a set of information-theoretic definitions for correctness and privacy of unconditionally secure oblivious transfer.

3. Completeness Theorems for Non-Cryptographic Fault-Tolerant Distributed Computation

The rapid development of distributed systems raised the natural question of what tasks can be performed by them (especially when faults occur). A large body of literature over the past ten years has addressed this question. There are two approaches, depending on whether a limit on the computational power of processors is assumed or not. The cryptographic approach, inaugurated by Diffie and Hellman [DH], assumes the players are computationally bounded, and further assumes the existence of certain (one-way) functions that can be computed but not inverted by the players. This simple assumption was encapsulated in [DH] in order to achieve the basic task of secure message exchange between two of the processors, but turned out to be universal. In subsequent years ingenious protocols based on the same assumption were given for increasingly harder tasks such as contract signing, secret exchange, joint coin flipping, voting and playing poker. These results culminated, through the definition of zero-knowledge proofs [GMR] and their existence for NP-complete problems, in completeness theorems for two-party and multi-party cryptographic distributed computation.

4. A Randomized Protocol for Signing Contracts
S. Even, O. Goldreich and A. Lempel

Randomized protocols for signing contracts, certified mail, and flipping a coin are presented. The protocols use a 1-out-of-2 oblivious transfer subprotocol which is axiomatically defined. The 1-out-of-2 oblivious transfer allows one party to transfer exactly one secret, out of two recognizable secrets, to his counterpart. The first (second) secret is received with probability one half, while the sender is ignorant of which secret has been received. An implementation of the 1-out-of-2 oblivious transfer, using any public key cryptosystem, is presented.

5. The Foundations of Cryptography

Cryptography is concerned with the construction of schemes that should maintain a desired functionality even under malicious attempts aimed at making them deviate from it. The design of cryptographic systems has to be based on firm foundations; ad hoc approaches and heuristics are a very dangerous way to go. This work is aimed at presenting firm foundations for cryptography. The foundations of cryptography are the paradigms, approaches and techniques used to conceptualize, define and provide solutions to natural "security concerns". The emphasis of the work is on the clarification of fundamental concepts and on demonstrating the feasibility of solving several central cryptographic problems. The current book is the second volume of this work, and it focuses on the main applications of cryptography: encryption schemes, signature schemes and secure protocols. The first volume focused on the main tools of modern cryptography: computational difficulty (one-way functions), pseudorandomness and zero-knowledge proofs.

6. Leveraging the "Multi" in Secure Multi-Party Computation (2003)

Jaideep Vaidya and Chris Clifton
Secure multi-party computation enables parties with private data to collaboratively compute a global function of their private data, without revealing that data. The increase in sensitive data on networked computers, along with the improved ability to integrate and utilize that data, makes the time ripe for practical secure multi-party computation. This paper surveys approaches to secure multi-party computation, and gives a method whereby an efficient protocol for two parties using an untrusted third party can be used to

construct an efficient peer-to-peer secure multi-party protocol.

7. Tools for Privacy Preserving Distributed Data Mining
Chris Clifton, Murat Kantarcioglu, Jaideep Vaidya, Xiaodong Lin and Michael Y. Zhu

Privacy preserving mining of distributed data has numerous applications. Each application poses different constraints: What is meant by privacy, what are the desired results, how is the data distributed, what are the constraints on collaboration and cooperative computing, etc. We suggest that the solution to this is a toolkit of components that can be combined for specific privacy-preserving data mining applications. This paper presents some components of such a toolkit, and shows how they can be used to solve several privacy-preserving data mining problems.

8. How to Play ANY Mental Game
O. Goldreich, S. Micali and A. Wigderson

We present a polynomial-time algorithm that, given as input the description of a game with incomplete information and any number of players, produces a protocol for playing the game that leaks no partial information, provided the majority of the players is honest. Our algorithm automatically solves all the multi-party protocol problems addressed in complexity-based cryptography during the last 10 years. It is in fact a completeness theorem for the class of distributed protocols with honest majority. Such a completeness theorem is optimal in the sense that, if the majority of the players is not honest, some protocol problems have no efficient solution.

9. Founding Cryptography on Oblivious Transfer (MIT)

Suppose your net mail is being erratically censored by Captain Yossarian. Whenever you send a message, he censors each bit of the message with probability 1/2, replacing each censored bit by some reserved character. Well versed in such concepts as redundancy, you find this no real problem. The question is, can this channel actually be turned around and used to your advantage? We answer this question strongly in the affirmative. We show that this protocol, more commonly known as oblivious transfer, can be used to simulate a more sophisticated protocol known as oblivious circuit evaluation [Y]. We also show that, with such a communication channel, one can have completely non-interactive zero-knowledge proofs of statements in NP. These results do not use any complexity-theoretic assumptions, and they have applications to a variety of models in which oblivious transfer can be done.

10. Cryptography and Information Security Group Research Project: Pseudo-Randomness in Cryptographic Applications

Randomness is a key ingredient of cryptography. Random bits are necessary not only for generating cryptographic keys, but are also often an integral part of the steps of cryptographic algorithms. In practice, the random bits are generated by a pseudo-random number generation process; when this is done, the security of the scheme depends in a crucial way on the quality of the random bits produced by the generator. Thus, an evaluation of the overall security of a cryptographic algorithm should take into account the choice of the pseudorandom generator. We have started a combined study of pseudo-random number generators and cryptographic applications. The intent is to illustrate the extreme care with which one should choose a pseudo-random number generator to use within a particular cryptographic algorithm. Specifically, Mihir Bellare from UCSD and CIS members Shafi Goldwasser and Daniele Micciancio consider a concrete algorithm, the Digital Signature Standard, and a concrete pseudo-random number generator, the linear congruential generator (or truncated linear congruential pseudo-random generators), and show that if an LCG or truncated LCG is used to produce the pseudo-random choices called for in DSS, then DSS becomes completely breakable.
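As a minimal sketch of why such a generator is cryptographically weak (the constants below are the classic textbook LCG parameters, not those analysed in the project), observing a single output of x_(n+1) = (a*x_n + c) mod m lets anyone regenerate the entire remaining stream:

# Toy linear congruential generator: x_(n+1) = (a*x_n + c) mod m.
# With public parameters, one observed output reveals the whole stream.
a, c, m = 1103515245, 12345, 2**31

def lcg(seed, n):
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

victim_stream = lcg(seed=20120101, n=5)
leaked = victim_stream[0]       # attacker sees one "random" value...
predicted = lcg(leaked, 4)      # ...and regenerates all that follow
assert predicted == victim_stream[1:]
print("next values predicted:", predicted)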

11. Pseudorandomness and Cryptographic Applications

A pseudorandom generator is an easy-to-compute function that stretches a short random string into a much longer string that "looks" just like a random string to any efficient adversary. One immediate application of a pseudorandom generator is the construction of a private key cryptosystem that is secure against chosen-plaintext attack. There do not seem to be natural examples of functions that are pseudorandom generators. On the other hand, there do seem to be a variety of natural examples of another basic primitive: the one-way function. A function is one-way if it is easy to compute but hard for any efficient adversary to invert on average. The first half of the book shows how to construct a pseudorandom generator from any one-way function. Building on this, the second half of the book shows how to construct other useful cryptographic primitives, such as private key cryptosystems, pseudorandom function generators, pseudorandom permutation generators, digital signature schemes, bit commitment protocols, and zero-knowledge interactive proof systems. The book stresses rigorous definitions and proofs.

3. Diffie-Hellman Algorithm

The privacy requirements normally encountered in the traditional paper document world are increasingly expected in Internet transactions today. Whitfield Diffie and Martin Hellman discovered what is now known as the Diffie-Hellman (DH) algorithm in 1976. It is an amazing and ubiquitous algorithm found in many secure connectivity protocols on the Internet.

Overview of the Diffie-Hellman Algorithm

The mathematics behind this algorithm is conceptually simple enough that a high school student should be able to understand it; the fundamental math includes the algebra of exponents and modular arithmetic. The first published public-key crypto algorithm was Diffie-Hellman. The mathematical "trick" of this scheme is that it is relatively easy to compute exponents compared to computing discrete logarithms. For this discussion we will use Alice and Bob, two of the most widely travelled Internet users in cyberspace, to demonstrate the DH key exchange. The goal of this process is for Alice and Bob to be able to agree upon a shared secret that an eavesdropper will not be able to determine. This shared secret is used by Alice and Bob to independently generate keys for the symmetric encryption algorithms that will encrypt the data stream between them. The key aspect is that neither the shared secret nor the encryption key ever travels over the network. Note that some of the numbers involved are very large. Diffie–Hellman key exchange (D–H) is a specific method of exchanging keys, and one of the earliest practical examples of key exchange implemented within the field of cryptography. It allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure communications channel; this key can then be used to encrypt subsequent communications using a symmetric key cipher. The scheme was first published by Whitfield Diffie and Martin Hellman in 1976, although it later emerged that it had been separately invented a few years earlier within GCHQ, the British signals intelligence agency, by Malcolm J. Williamson, but was kept classified. In 2002, Hellman suggested the algorithm be called Diffie–Hellman–Merkle key exchange in recognition of Ralph Merkle's contribution to the invention of public-key cryptography (Hellman, 2002). Although Diffie–Hellman key agreement itself is an anonymous (non-authenticated) key-agreement protocol, it provides the basis for a variety of authenticated protocols, and is used to provide perfect forward secrecy in Transport Layer Security's ephemeral modes (referred to as EDH or DHE depending on the cipher suite).

1. Alice and Bob agree to use the prime number p = 23 and base g = 5.
2. Alice chooses a secret integer a = 6, then sends Bob A = g^a mod p:
   A = 5^6 mod 23 = 15,625 mod 23 = 8
3. Bob chooses a secret integer b = 15, then sends Alice B = g^b mod p:
   B = 5^15 mod 23 = 30,517,578,125 mod 23 = 19
4. Alice computes s = B^a mod p:
   s = 19^6 mod 23 = 47,045,881 mod 23 = 2
5. Bob computes s = A^b mod p:
   s = 8^15 mod 23 = 35,184,372,088,832 mod 23 = 2
6. Alice and Bob now share a secret: s = 2. This works because 6·15 is the same as 15·6, so somebody who knew both private integers could also have calculated s as follows:
   s = 5^(6·15) mod 23 = 5^(15·6) mod 23 = 5^90 mod 23
     = 807,793,566,946,316,088,741,610,050,849,573,099,185,363,389,551,639,556,884,765,625 mod 23 = 2

THE ALGORITHM: Two people, Alice and Bob, wish to exchange a key over an insecure communications channel. They select a large prime p (~200 digits), such that (p-1)/2 is also prime. They also select g, a primitive root mod p; g is primitive if for each n from 1 to p-1 there exists some a where g^a = n mod p. The values of g and p do not need to be secret. Alice then chooses a secret number x_A, and Bob chooses a secret number x_B. Alice and Bob compute y_A and y_B respectively, which are then exchanged:
   y_A = g^(x_A) mod p
   y_B = g^(x_B) mod p
Both Alice and Bob can calculate the key as
   K_AB = g^(x_A·x_B) mod p = y_A^(x_B) mod p (which Bob can compute) = y_B^(x_A) mod p (which Alice can compute).
The key may then be used in a private-key cipher to secure communications between A and B. The scheme can be expanded to many parties, and to other structures such as finite fields, elliptic curves and Galois fields.
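A minimal Python sketch reproducing the toy exchange above (p = 23 and g = 5 as in the worked example; a real deployment would use a prime of hundreds of digits):

# Toy Diffie-Hellman key exchange with the textbook parameters above.
p, g = 23, 5                 # public prime and base

a = 6                        # Alice's secret integer
b = 15                       # Bob's secret integer

A = pow(g, a, p)             # Alice sends A = g^a mod p  -> 8
B = pow(g, b, p)             # Bob sends   B = g^b mod p  -> 19

s_alice = pow(B, a, p)       # Alice computes s = B^a mod p
s_bob = pow(A, b, p)         # Bob computes   s = A^b mod p

assert s_alice == s_bob == 2 # both arrive at the shared secret s = 2
print("shared secret:", s_alice)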

3.1 ELLIPTIC CURVE

Elliptic Curve Cryptography (ECC) is a form of public key cryptography. In public key cryptography each user or device taking part in the communication generally has a pair of keys, a public key and a private key, and a set of operations associated with the keys to perform the cryptographic operations. Only the particular user knows the private key, whereas the public key is distributed to all users taking part in the communication. Some public key algorithms may require a set of predefined constants to be known by all the devices taking part in the communication; the 'domain parameters' in ECC are an example of such constants. Public key cryptography, unlike private key cryptography, does not require any shared secret between the communicating parties, but it is much slower than private key cryptography. The mathematical operations of ECC are defined over the elliptic curve y^2 = x^3 + ax + b, where 4a^3 + 27b^2 ≠ 0. Each value of 'a' and 'b' gives a different elliptic curve. All points (x, y) which satisfy the above equation, plus a point at infinity, lie on the elliptic curve. The public key is a point on the curve and the private key is a random number. The public key is obtained by multiplying the private key with the generator point G on the curve. The generator point G, the curve parameters 'a' and 'b', together with a few more constants, constitute the domain parameters of ECC. One main advantage of ECC is its small key size: a 160-bit key in ECC is considered to be as secure as a 1024-bit key in RSA.


Consider two points J and K on an elliptic curve, as shown in figure (a). If K ≠ -J, then a line drawn through the points J and K will intersect the elliptic curve at exactly one more point, -L. The reflection of the point -L with respect to the x-axis gives the point L, which is the result of the addition of points J and K; thus, on an elliptic curve, L = J + K. If K = -J, the line through these points intersects the curve at a point at infinity O. Hence J + (-J) = O, as shown in figure (b). O is the additive identity of the elliptic curve group. The negative of a point is the reflection of that point with respect to the x-axis.

Analytical explanation

Consider two distinct points J and K such that J = (x_J, y_J) and K = (x_K, y_K), and let L = J + K where L = (x_L, y_L). Then:
   s = (y_J - y_K)/(x_J - x_K), where s is the slope of the line through J and K
   x_L = s^2 - x_J - x_K
   y_L = -y_J + s(x_J - x_L)
If K = -J, i.e. K = (x_J, -y_J), then J + K = O, where O is the point at infinity. If K = J, then J + K = 2J and the point doubling equations are used. Also, J + K = K + J.

Point doubling

Point doubling is the addition of a point J on the elliptic curve to itself to obtain another point L on the same elliptic curve. To double a point J to get L, i.e. to find L = 2J, consider a point J on an elliptic curve as shown in figure (a). If the y coordinate of the point J is not zero, then the tangent line at J will intersect the elliptic curve at exactly one more point, -L. The reflection of the point -L with respect to the x-axis gives the point L, which is the result of doubling the point J; thus L = 2J. If the y coordinate of the point J is zero, then the tangent at this point intersects the curve at a point at infinity O. Hence 2J = O when y_J = 0, as shown in the figure.

Elliptic Curve Digital Signature Algorithm

A signature algorithm is used for authenticating a device or a message sent by the device. For example, consider two devices A and B. To authenticate a message sent by A, device A signs the message using its private key, then sends the message and the signature to device B. This signature can be verified only by using the public key of device A; since device B knows A's public key, it can verify whether the message was indeed sent by A or not. ECDSA is a variant of the Digital Signature Algorithm (DSA) that operates on elliptic curve groups. To send a signed message from A to B, both have to agree upon elliptic curve domain parameters. Sender A has a key pair consisting of a private key d_A (a randomly selected integer less than n, where n is the order of the curve, an elliptic curve domain parameter) and a public key Q_A = d_A * G (where G is the generator point, an elliptic curve domain parameter).
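A minimal Python sketch of the group law above, on a toy curve y^2 = x^3 + 2x + 2 over F_17 (curve, generator and key values are chosen purely for illustration; real ECC uses standardized curves over fields of 160 bits and more). It implements the chord slope s = (y_J - y_K)/(x_J - x_K) from the analytical explanation, uses the standard tangent slope s = (3x_J^2 + a)/(2y_J) for doubling, which the text does not spell out, and then forms a key pair Q_A = d_A * G by double-and-add:

# Toy elliptic curve y^2 = x^3 + 2x + 2 over F_17 (illustration only).
P_FIELD = 17
A_COEF = 2

def inv(x):
    # Modular inverse in F_17 (Python 3.8+).
    return pow(x, -1, P_FIELD)

def point_add(P, Q):
    # Group law; None represents the point at infinity O.
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_FIELD == 0:
        return None                       # J + (-J) = O
    if P == Q:
        # Doubling: tangent slope s = (3x^2 + a) / (2y).
        s = (3 * x1 * x1 + A_COEF) * inv(2 * y1) % P_FIELD
    else:
        # Addition: chord slope s = (y_J - y_K) / (x_J - x_K).
        s = (y1 - y2) * inv(x1 - x2) % P_FIELD
    x3 = (s * s - x1 - x2) % P_FIELD      # x_L = s^2 - x_J - x_K
    y3 = (-y1 + s * (x1 - x3)) % P_FIELD  # y_L = -y_J + s(x_J - x_L)
    return (x3, y3)

def scalar_mult(d, P):
    # Q = d*P by double-and-add; this is how Q_A = d_A * G is formed.
    Q = None
    while d:
        if d & 1:
            Q = point_add(Q, P)
        P = point_add(P, P)
        d >>= 1
    return Q

G = (5, 1)                   # generator point on the toy curve
d_A = 7                      # toy private key
Q_A = scalar_mult(d_A, G)    # public key Q_A = d_A * G
print("Q_A =", Q_A)          # (0, 6) on this curve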

3.2 The ElGamal public key system

The ElGamal cryptographic algorithm is a public key system like the Diffie-Hellman system. It is mainly used to establish common keys rather than to encrypt messages, and it is comparable to the Diffie-Hellman system. Although the inventor, Taher ElGamal, did not apply for a patent on his invention, the owners of the Diffie-Hellman patent (US patent 4,200,770) felt this system was covered by their patent. For no apparent reason, everyone calls this the "ElGamal" system, although Mr. ElGamal's last name does not have a capital letter 'G'. A disadvantage of the ElGamal system is that the encrypted message becomes very big, about twice the size of the original message m; for this reason it is only used for small messages such as secret keys.

Generating the ElGamal public key

As with Diffie-Hellman, Alice and Bob have a (publicly known) prime number p and a generator g. Alice chooses a random number a and computes A = g^a. Bob does the same and computes B = g^b. Alice's public key is A and her private key is a; similarly, Bob's public key is B and his private key is b.

Encrypting and decrypting messages

If Bob now wants to send a message m to Alice, he randomly picks a number k which is smaller than p. He then computes:

   c1 = g^k mod p
   c2 = A^k * m mod p
and sends c1 and c2 to Alice. Alice can use these to reconstruct the message m by computing c1^(-a) * c2 mod p = m, because
   c1^(-a) * c2 = (g^k)^(-a) * A^k * m = g^(-ak) * A^k * m = (g^a)^(-k) * A^k * m = A^(-k) * A^k * m = 1 * m = m.
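A minimal Python sketch of the scheme with toy parameters (p, g, the keys and the message below are illustrative only; a real system uses a large prime p):

import random

# Toy ElGamal encryption over the group of integers mod p.
p, g = 467, 2                # public prime and generator (toy sizes)
a = 153                      # Alice's private key
A = pow(g, a, p)             # Alice's public key A = g^a mod p

m = 331                      # message to encrypt, m < p
k = random.randrange(2, p - 1)
c1 = pow(g, k, p)            # c1 = g^k mod p
c2 = pow(A, k, p) * m % p    # c2 = A^k * m mod p

recovered = pow(c1, -a, p) * c2 % p   # m = c1^(-a) * c2 mod p
assert recovered == m
print("recovered message:", recovered)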

4. DISCUSSION

Encryption has long been a favoured subject in the field of computer science, but the methods of encryption have proved contentious, with each author preferring some choice over another. This study, too, in a way analyzes the cryptographic methods, specifically relating to the ELLIPTICAL and ELGAMAL techniques.

The implementation of encrypting files using the two methods has never been in question; rather, the study takes file types such as Adobe (.pdf), text (.txt) and document (.doc) files and examines, with specific relation to size, the time taken to encrypt and decrypt the files in the two modes. This has brought out some interesting findings concerning the contents of a file and its encryption time. That a larger file takes longer to encrypt may be a known fact, but the same content spread over different formats of compression, such as pdf and doc, yields shorter times for pdf files than for Word documents. This might be due to the compression applied by the pdf format. The small increase in milliseconds is captured, and this is done for both methods of encryption alike; a minimal sketch of this timing procedure follows the list below. This leads to the conclusion that, based on the content, the actual variations in time can be used by encryption users to encrypt their files based on content instead of on availability.
• For DSA and RSA we need a larger key length.
• ECC requires a significantly smaller key size for the same level of security.
• Benefits of smaller key sizes: faster computations and less storage space.
• ECC is therefore ideal for constrained environments: pagers, PDAs, cellular phones and smart cards.
Diffie-Hellman over ECC:
• Alice and Bob choose a finite field F_q and an elliptic curve E.
• The key will be taken from a random point P on the elliptic curve (e.g. the x coordinate).
• Alice and Bob choose a point B that does not need to be secret; B must have a very large order.
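A minimal sketch of the timing methodology described above; the file names are hypothetical, and toy_encrypt is a stand-in for the actual ELLIPTICAL or ELGAMAL file encryption, which the paper does not list:

import os
import time

def toy_encrypt(data: bytes) -> bytes:
    # Stand-in cipher (repeating-key XOR) so the harness runs;
    # substitute a real ElGamal/elliptic-curve encryption here.
    key = b"K3Y"
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encryption_time_ms(path: str) -> float:
    # Wall-clock time taken to encrypt one file, in milliseconds.
    with open(path, "rb") as f:
        data = f.read()
    start = time.perf_counter()
    toy_encrypt(data)
    return (time.perf_counter() - start) * 1000.0

# Hypothetical sample files holding the same content in three formats.
for name in ["report.pdf", "report.doc", "report.txt"]:
    if os.path.exists(name):
        print(name, f"{encryption_time_ms(name):.3f} ms")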


5. CONCLUSION AND FUTURE ENHANCEMENTS

There is a great need for a secure way to transfer data and information safely over the Internet. While cryptography is not perfect, its strengths outweigh its weaknesses. As the amount of information sent over the World Wide Web increases, so will the need for a safe and secure method of encrypting that information. We examine a cryptographically secure privacy preserving data mining solution in different computational settings. Privacy-preserving data mining, in which certain computations are allowed while other information is to remain protected, was first introduced in 2000 by Agrawal and Srikant [AS00] and by Lindell and Pinkas [LP02]. Since then, extensive research has been devoted to privacy-preserving data mining and other privacy-preserving primitives efficient enough to be used on extremely large data sets. In general, this research has been divided into solutions that provide strong cryptographic privacy protection, which require more computational overhead and have so far been limited to extremely simple (but useful) functions, and those that use perturbation, which provide weaker privacy properties but allow much more efficient solutions and the computation of more sophisticated data mining functions. A protocol providing authenticated key establishment can make use of the ideas of Diffie-Hellman key exchange [Diff76]. Taking both encryption techniques into consideration for privacy preservation, this study proposes the elliptical methods for huge data file sizes, while the other method may be chosen for word documents. For text files, however, the options are the same, because it is obvious that they do not contain any pictures or formatting content.

Other methods may similarly be compared for non-conforming content, providing users with more options before encrypting a file. The methods applied and the algorithms may also vary, with one mechanism for encryption and a totally different mechanism for decryption of the file. This paper also allows for speculation on the content of the files before due diligence is applied to the encryption techniques. Much work has been done in recent years on identification and authentication schemes using asymmetric techniques. These problems can be solved and implemented in future work, achieving efficient and good privacy through effective cryptographic techniques.

6. REFERENCES

[1] D. Beaver, S. Micali and P. Rogaway, The Round Complexity of Secure Protocols, Proc. of the 22nd ACM Symposium on Theory of Computing (STOC), pp. 503-513, 1990.

[2] M. Bellare and S. Micali, Non-Interactive Oblivious Transfer and Applications, Advances in Cryptology - CRYPTO ’89. Lecture Notes in Computer Science, Vol. 435, Springer-Verlag, 1997, pp. 547-557.

[3] M. Ben-Or, S. Goldwasser and A. Wigderson, Completeness theorems for non cryptographic fault tolerant distributed computation, Proceedings of the 20th Annual Symposium on the Theory of Computing (STOC), ACM, 1988, pp. 1–9.

[4] D. Chaum, C. Crepeau and I. Damgard, Multiparty unconditionally secure protocols, Proceedings of the 20th Annual Symposium on the Theory of Computing (STOC), ACM, 1988, pp. 11–19.

[5] S. Even, O. Goldreich and A. Lempel, A Randomized Protocol for Signing Contracts, Communications of the ACM, 28(6):637-647, 1985.

[6] O. Goldreich, Foundations of Cryptography: Volume 2 – Basic Applications, Cambridge University Press, 2004.

[7] Jaideep Vaidya and Chris Clifton, "Leveraging the 'Multi' in Secure Multi-party Computation", WPES'03, October 30, 2003, Washington, DC, USA, ACM, 2003, pp. 120-128.

[8] Chris Clifton, Murat Kantarcioglu, Jaideep Vaidya, Xiaodong Lin and Michael Y. Zhu, "Tools for Privacy Preserving Data Mining", International Conference on Knowledge Discovery and Data Mining, Vol. 4, No. 2, 2002, pp. 1-8.

[9] Anand Sharma and Vibha Ojha, "Privacy Preserving Data Mining by Cryptography", in Recent Trends in Network Security and Applications, Springer CCIS Vol. 89, pp. 576-581, 2010.

[10] O. Goldreich, S. Micali and A. Wigderson, How to Play any Mental Game - A Completeness Theorem for Protocols with Honest Majority, Proceedings of the 19th Annual Symposium on the Theory of Computing (STOC), ACM, 1987, pp. 218–229.
