
Cryptology beyond Shannon’s Information Theory:

Preparing for When the ‘Enemy Knows the System’

With Technical Focus on

Number Field Sieve Cryptanalysis Algorithms for Most Efficient Prime Factorization on Composites

Yogesh Malhotra, PhD www.yogeshmalhotra.com

Griffiss Cyberspace, Global Risk Management Network, LLC

www.griffisscyberspace.org, www.finrm.org May 03, 2013

"A cryptosystem should be secure even if the attacker knows all details about the system, with

the exception of including the secret key."

- Yogesh Malhotra’s reformulation of Kerckhoffs's principle, 2013

"The enemy knows the system, but you ‘know’ better."

- Yogesh Malhotra’s reformulation of Shannon’s maxim, 2013


Abstract

"A cryptosystem should be secure even if the attacker knows all details about the system, with

the exception of including the secret key."

- Yogesh Malhotra’s reformulation of Kerckhoffs's principle, 2013

"The enemy knows the system, but you ‘know’ better."

- Yogesh Malhotra’s reformulation of Shannon’s maxim, 2013

The problem is introduced as the creation of encryption algorithms whose cracking is computationally infeasible. A Hegelian dialectic questioning that premise is posed by invoking Poe, who asked whether it is even possible for human ingenuity to 'concoct a cipher' which human ingenuity cannot resolve. Symmetric and public key cryptography as well as various RSA benchmarks are reviewed to develop a sense of the encryption vulnerability trend. The apparent overconfidence of expert scientist Rivest in his RSA encryption is introduced as what he later called 'our infamous "40 quadrillion years"' challenge [1]. Recognizing that the '40 quadrillion years' challenge became unraveled in a minuscule fraction, on the order of ~10^-15, of the estimated time to failure forms the backdrop of a timeline of RSA failures depicting a strong encryption vulnerability trend. The next RSA benchmark on the cusp of being cracked – unless it has already been cracked in private – is the global encryption standard RSA-1024, in worldwide use for the most critical national, economic, and industrial activities. Factoring algorithms, the devil's advocate to cryptologists' claims of the computational infeasibility of breaking encryption, are introduced with a discussion of general and special purpose factoring algorithms. The central technical focus is on the Number Field Sieve (NFS), the most potent of all known factoring algorithms, used for the recent strongest attacks. The 5-phase operation of NFS is discussed with specific focus on: polynomial selection, finding factor bases, sieving for optimal congruent relations, solving linear equations with a matrix, and computing square roots in number fields.

Then my investigation returns to the original question: What if Poe’s foresight about human

cryptologist’s ingenuity not being able to outsmart human cryptanalyst’s ingenuity was not far from

reality? Increasingly devastating real world encryption failures are reviewed lending credence to Poe’s

thesis. Parallels between Poe’s insight and Claude Shannon’s maxim and Kerckhoffs's principle are

clarified. By adapting both to bridge information theoretic information processing and my work on

human sense making (Malhotra 2001) I introduce the sketch of a new cryptology principle: Malhotra’s

principle of no secret keys. The proposed sketch builds upon my 20-year information and communication systems research, which also included a focus on issues such as competitive intelligence, misinformation, and disinformation. It further advances beyond my original 1995 communication with John Holland, the inventor of genetic algorithms. Based on Holland's observation that Shannon's notion of information in information theory missed critical human aspects of meaning, sense making, and knowing, my research developed a widely accepted knowledge management framework applied by global organizations such as NASA, US AFRL, US Army, US Navy, and US Air Force. The proposed principles, based on that framework and on my research on the psychology of information, meaning, sense making, and knowing, aim to complement Shannon's information theory, which was originally designed for controlling machines. What having and knowing mean in the human behavior framework can fundamentally

advance notions such as information theory based two factor authentication by fundamentally rethinking

and reformulating very basic ideas such as: something that you have and something that you know.

[1] Rivest, Ronald L. The Early Days of RSA – History and Lessons, ACM Turing Award Lecture.


“Few persons can be made to believe that it is not quite an easy thing to invent a method of

secret writing which shall baffle investigation. Yet it may be roundly asserted that human

ingenuity cannot concoct a cipher which human ingenuity cannot resolve…”

-- Edgar Allan Poe

Introduction

Nearly two centuries ago, Edgar Allan Poe, a 19th century American author and poet known for his influence on the field of cryptography, made the above comment. Consistent with the observations of some of the later scientists known for their central role in advancing cryptography, he maintained that "no cipher could be invented that could not also be 'unriddled'" (Gardner 1977).

More recent advances in cryptography have led to the development of symmetric shared-secret-key cryptography and, subsequently, public key cryptography, which does away with the need for a shared secret key. Whitfield Diffie and Martin Hellman introduced public key cryptography in 1976, which enabled encryption of messages and their authentication by digital signatures. The next year, in 1977, Ron Rivest, Adi Shamir, and Leonard Adleman introduced the RSA algorithm to enable public key cryptosystems. RSA's cryptographic strength was based upon the presumed computational infeasibility of factoring large composite numbers created from very large 'hard' prime numbers. Even though the private key used for decryption (and for signing) does not need to be shared in public key cryptosystems, it is mathematically related to the public key. The cryptanalytic defense of such a system lies in ensuring the computational infeasibility of deriving the private key even though the public key, including the composite product of the two primes, is known.

An article published in the Scientific American (Gardner 1977) announced the new RSA

algorithm in its Mathematical Games column. The column offered a reward of $100 to

the first person who could decode the cipher text generated by the use of the RSA

encryption algorithm. The column observed that “It is this practical impossibility, in any

foreseeable future, of factoring the product of two large primes that makes the M.I.T.

public key cipher system possible.” It further underscored: “If the best algorithm

known and the fastest of today's computers were used, Rivest [the R in RSA] estimates

that the running time required would be about 40 quadrillion years!" Incidentally, the RSA challenge, estimated to remain undefeated for some 10^5 times the life of the universe (itself estimated at around 10^10 years), was won much earlier.


The Original RSA Challenge in Scientific American column Mathematical Games, 1977

In fact, the RSA challenge was won within 17 years! Moore's law asserts that computing machinery doubles in processing power every 1.5 years or so: that accounts for roughly a dozen doublings, i.e., a few-thousand-fold improvement, over those 17 years. Moore's law doesn't explain the ~10^15-times overestimation of RSA's purported encryption strength in any case! The current cause for concern isn't that the original RSA challenge was unraveled in 17 short years; rather, it is the ~10^15x magnitude by which one of the very best experts in the discipline and practice overestimated the time to capitulation of 129-digit RSA. Furthermore, the crucial concern is also about the trend of capitulations of increasingly stronger RSA standards, with perhaps the most critical of them all being the predominant 309-digit RSA-1024, the current gold standard of encryption.
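As a rough sanity check on these magnitudes, the arithmetic can be reproduced in a few lines of Python (an illustrative sketch only; the '40 quadrillion years' figure and the 17-year interval are taken from the discussion above, and the 1.5-year doubling period is an assumption):

    # Back-of-the-envelope check of the magnitudes discussed above.
    estimated_years = 40e15      # "40 quadrillion years" from the 1977 column
    actual_years = 17            # RSA-129 fell roughly 17 years later
    overestimate = estimated_years / actual_years
    print(f"Overestimation factor: {overestimate:.1e}")          # ~2.4e15, i.e., ~10^15

    doubling_period = 1.5        # assumed Moore's law doubling period, in years
    doublings = actual_years / doubling_period
    speedup = 2 ** doublings
    print(f"Moore's law doublings: {doublings:.1f} (~{speedup:,.0f}x speedup)")
    # Roughly 11 doublings, a few-thousand-fold speedup: nowhere near a 10^15 factor.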

Current Cause of Critical Concern is based upon three issues: the ~10^15x factor of 'overconfidence' in the RSA benchmark, the trend of stronger RSA capitulations since then, and most importantly the current gold standard which is on the cusp of being broken… unless it is already broken but not publicly revealed.

Even though NIST [2] has approved larger key sizes of 2048 and 3072 bits, RSA Laboratories [3] "currently recommends key sizes of 1024 bits for corporate use and 2048 bits for extremely valuable keys like the root key pair used by a certifying authority."

[2] NIST Special Publication 800-57, Recommendation for Key Management, Part 1: General (Revision 3). http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf

[3] RSA Laboratories. How large a key should be used in the RSA cryptosystem? http://www.rsa.com/rsalabs/node.asp?id=2218


Factoring Algorithms

Real defense or sustained viability of any encryption standard or encryption algorithm

depends upon the “practical impossibility, in any foreseeable future” of “factoring the

product of two large primes” as we already know. Hence, the factoring algorithms that

make the process of breaking such encryption benchmarks easier, cheaper, and faster

are of vital importance. For instance, the Number Field Sieve, in its various forms, is currently recognized as the most powerful family of factoring algorithms and was used in the most recent strong factoring attacks. Factoring algorithms come in two broad classifications:

special purpose factoring algorithms and general purpose factoring algorithms.

Of the two types, Special Purpose Factoring Algorithms (SPFA) are suitable for factorizing composites from special classes of numbers such as Mersenne numbers and Fermat numbers. While suitable for factoring smooth composites with small prime factors, SPFA can't factor the hard composites of large prime numbers that are relevant for RSA encryption. Their efficiency depends upon special properties of the unknown factors: they are too slow for most factoring jobs and would effectively run forever, or fail to complete, on RSA composites.

Special Purpose Factoring Algorithms and General Purpose Factoring Algorithms

Examples of SPFA include: Trial division (trial-divide by possible factors, check for zero remainder), Pollard's p - 1 (based on Fermat's Little Theorem), Pollard's ρ (rho) method (a Monte Carlo method), and the Elliptic Curve Method (ECM) (an analogue of p - 1 using points on an elliptic curve).
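For concreteness, the following is a minimal Python sketch of Pollard's rho, the Monte Carlo method listed above (illustrative only; it omits refinements used in practice such as Brent's cycle detection and batched gcd computations):

    from math import gcd

    def pollard_rho(n, c=1):
        """Return a non-trivial factor of n, or n itself on failure (retry with another c)."""
        if n % 2 == 0:
            return 2
        f = lambda v: (v * v + c) % n   # pseudo-random iteration x -> x^2 + c (mod n)
        x = y = 2
        d = 1
        while d == 1:
            x = f(x)                    # tortoise moves one step
            y = f(f(y))                 # hare moves two steps (Floyd cycle detection)
            d = gcd(abs(x - y), n)
        return d

    print(pollard_rho(8051))            # prints 97, since 8051 = 83 * 97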

General Purpose Factoring Algorithms (GPFAs), which are suitable for factoring RSA-type hard composites with no small prime factors, are the second type of factoring algorithms. They can factor any integer of a given size in about the same time as any other integer of that size, and their efficiency depends on the size of the integer to be factored.

General Purpose Factoring Algorithms Are Used for Factoring RSA Type Hard Composites

Three specific GPFAs listed above, Continued Fractions Algorithm (CFRAC), Quadratic

Sieve Algorithm (QS), and Number Field Sieve Algorithm (NFS), all operate on the

basis of the congruence of squares. Congruence of squares in the context of factoring a

hard composite n into its factors p and q can be understood as follows.

If we can find x and y that satisfy n = x² - y², then since n = (x + y)(x - y) we immediately get the factors of n: (x + y) and (x - y). Given that a large set of numbers needs to be searched, with few satisfying the above relation, this approach is slow. Hence, to speed things up and get a faster algorithm, n can be factored by satisfying the following weaker congruence of squares condition instead:

x² ≡ y² (mod n), with x ≢ ±y (mod n)

Here n divides the product (x + y)(x - y), but it does not divide (x + y) or (x - y) alone, so that the above modular condition is satisfied. Thus, (x + y) and (x - y) each share a non-trivial factor with n, namely p and q. Using the Euclidean algorithm, the greatest common divisors gcd(x + y, n) and gcd(x - y, n) will yield these factors.
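A minimal Python sketch of this idea on a toy composite is shown below; the brute-force search for x is purely illustrative, since the whole point of CFRAC, QS, and NFS is to construct such congruences far more cleverly:

    from math import gcd, isqrt

    def factor_by_congruence(n):
        """Toy search for x, y with x^2 = y^2 (mod n) and x not congruent to +/- y (mod n)."""
        for x in range(isqrt(n) + 1, n):
            y2 = (x * x) % n
            y = isqrt(y2)
            if y * y == y2 and x != y and (x + y) % n != 0:
                # gcd(x - y, n) and gcd(x + y, n) then give non-trivial factors of n
                return gcd(x - y, n), gcd(x + y, n)

    print(factor_by_congruence(77))     # prints (7, 11)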

A comparison of representative sieve algorithms, shown below, recognizes the number field sieve (NFS) as the fastest, i.e., most efficient, algorithm for the factorization of the large numbers that make up hard composites for RSA encryption, particularly those of more than 100 digits.


Number Field Sieve is the Best Known Algorithm for Factoring Large Numbers

Number Field Sieve

NFS consists of a sieving phase to search for a fixed set of prime numbers which are

candidates for a particular algebraic relationship, modulo the number to be factored.

The next matrix solving phase creates a large matrix from the candidate values and then

solves it to determine the factors. While the sieving phase can be done in a distributed

fashion on a large number of processors simultaneously, the matrix solving phase is

usually performed on a large supercomputer given its need for massive processing

power. For large hard composite n, NFS asymptotically outperforms other sieve types

such as the Quadratic Sieve (QS) and the Rational Sieve (RS). While RS and QS both look for smooth numbers whose size is exponential in the size of n, NFS looks for smooth numbers whose size is sub-exponential in the size of n. QS operates over the integers only, whereas NFS operates over ℤ x ℤ[α], where α is a root of the polynomial f(x), m is an integer with f(m) ≡ 0 (mod n), the number field (NF) is a finite extension of the field of rationals ℚ, and the number ring (NR) ℤ[α] is a subring of the NF. The end goal of the various phases in the operation of NFS is to find congruent squares mod n and to determine the non-trivial factors from the greatest common divisors gcd(x + y, n) and gcd(x - y, n).

An outline of the key steps in the operation of NFS is presented below followed by

discussion on each of the steps in its operation.


Main Steps in the Operation of the Number Field Sieve Algorithm Are Outlined Below

Since the concepts of Algebraic Number Field and Number Ring are central to the NFS algorithm, they are briefly reviewed below before the overall process of operation is delineated. If r is a root of a nonzero polynomial equation such as

a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 = 0,

where the a_i are integers (or, equivalently, rational numbers), and r satisfies no similar equation of degree < n, then r is said to be an algebraic number of degree n. For example, √2 is an algebraic number of degree 2, since it is a root of x² - 2 = 0 but of no such equation of degree 1 with rational coefficients.

What is an Algebraic Number Field?

A Subring of a ring R is a subset of R that is itself a ring when the binary operations of addition and multiplication on R are restricted to the subset, and which contains the multiplicative identity of R. A Ring is an Abelian Group with a second binary operation

that is associative and is distributive over the Abelian Group operation. Abelian Group,

also called a Commutative Group as it adds Commutativity axiom to Group's four other

axioms, is a group in which the result of applying the group operation to two group

elements does not depend on their order (the axiom of commutativity). Group is a set of

elements together with an operation that combines any two of its elements to form a

third element also in the set while satisfying four conditions called the group axioms,

namely closure, associativity, identity, and invertibility.

Polynomial Selection

The objective of the Polynomial Selection step is to identify a large set of usable polynomials, remove bad polynomials from the set using heuristics, conduct small sieving experiments on the remaining polynomials, and choose the one with the best yield. Yield can be understood in terms of a function that provides a very good estimate for the removal of [algebraically] bad polynomials, so that the remaining polynomials can be put through a small sieving test. The output of the process is the polynomial with the best yield.

The algebraic properties that are relevant in the process of finding the algebraically

optimal polynomials are summarized below.

Polynomial Selection Identifies a Large Set of Usable Polynomials
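A common starting point for this step is base-m expansion: pick a degree d, set m close to n^(1/d), and write n in base m so that the digits become the coefficients of a polynomial f(x) with f(m) = n, hence f(m) ≡ 0 (mod n). The following is a minimal Python sketch of that construction (real implementations generate and rate many candidate polynomials, for example with yield estimates such as Murphy's E; this sketch does not):

    def base_m_polynomial(n, d=3):
        """Return (coeffs, m) with f(m) = n, where coeffs are the base-m digits of n."""
        m = int(n ** (1.0 / d))        # floor of n^(1/d); use exact integer roots for real sizes
        coeffs = []                    # coeffs[i] is the coefficient of x^i
        r = n
        for _ in range(d + 1):
            coeffs.append(r % m)
            r //= m
        return coeffs, m

    def eval_poly(coeffs, x):
        return sum(c * x**i for i, c in enumerate(coeffs))

    coeffs, m = base_m_polynomial(45113, d=3)
    print(coeffs, m, eval_poly(coeffs, m) == 45113)   # [33, 28, 1, 1] 35 True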

Finding Factor Bases

Factor bases specify well-defined domains for optimal functioning of the NFS algorithm. The algebraic domains for selecting optimal smooth values consistent with the goal of achieving a congruence of squares are defined by three distinct factor and character bases. These three are: the rational factor base (RFB) of primes such as 2, 3, 5, ... up to an empirically chosen bound, over which the rational-side values (a + bm) are tested for smoothness; the algebraic factor base (AFB) of prime ideals in a ring of algebraic integers; and the quadratic character base (QCB), composed of a small set of first-degree prime ideals not in the AFB. Relevant algebraic properties are listed below.

The Three Factor Bases: Rational Factor Base, Algebraic Factor Base, Quadratic Character Base
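A minimal Python sketch of the rational side only is given below: it builds an RFB of all primes up to a bound B and tests a value a + b*m for smoothness over it. The bound and the sample values are arbitrary assumptions, and the AFB and QCB are omitted because they require prime-ideal machinery beyond a short sketch:

    def primes_up_to(bound):
        """Sieve of Eratosthenes: the rational factor base is simply the primes <= bound."""
        is_prime = [False, False] + [True] * (bound - 1)
        for p in range(2, int(bound ** 0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, bound + 1, p):
                    is_prime[multiple] = False
        return [p for p in range(2, bound + 1) if is_prime[p]]

    def is_smooth(value, factor_base):
        """True if value factors completely over the factor base."""
        value = abs(value)
        for p in factor_base:
            while value % p == 0:
                value //= p
        return value == 1

    rfb = primes_up_to(30)               # rational factor base with bound B = 30
    a, b, m = 13, 2, 31                  # arbitrary sample values
    print(rfb)
    print(is_smooth(a + b * m, rfb))     # 13 + 2*31 = 75 = 3 * 5^2, so True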


When enough pairs are found that are simultaneously smooth over the rational and algebraic factor bases, then hopefully squares in both ℤ and the number ring ℤ[α], i.e., over ℤ x ℤ[α], will be identified.

Sieving to Find Optimal Sets of Relations (a, b)

Sieving is the most critical part of the Number Field Sieve operation. Two sieves are used: the Rational Sieve and the Algebraic Sieve, corresponding to the RFB and AFB respectively. Two different methods of sieving are available depending upon the size of the primes being sieved and upon available memory resources. Line Sieving needs less memory and is best for small to medium primes: it sieves over all (a, b) pairs, one b-value at a time, and for each prime (p, r) finds all pairs divisible by it. In contrast, Lattice Sieving is best for large primes but needs more memory: it works by fixing a medium-sized prime (q, s) ϵ AFB, sieving over all (a, b) pairs subject to (q, s) dividing ⟨a - bθ⟩, and forming a lattice of vectors for such pairs. The output of the sieving phase is a set of (a, b) pairs that are both RFB- and AFB-smooth.

Sieving is the Most Crucial Part of the NFS Algorithm Operation
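A minimal Python sketch in the spirit of line sieving, restricted to the rational side, is given below. For a fixed b, rather than trial-dividing every a + b*m, each prime p in the RFB 'hits' exactly those a-values with a ≡ -b*m (mod p), and the accumulated logarithms flag likely-smooth candidates. The bounds, factor base, and threshold are arbitrary assumptions; prime powers and the algebraic-side sieve are omitted:

    from math import log, gcd

    def line_sieve_rational(b, m, rfb, a_range):
        """For fixed b, flag a in [-a_range, a_range] whose a + b*m is likely RFB-smooth."""
        a_min, size = -a_range, 2 * a_range + 1
        logs = [0.0] * size
        for p in rfb:
            start = (-b * m - a_min) % p          # first index whose a + b*m is divisible by p
            for i in range(start, size, p):       # every p-th position is 'hit' by p
                logs[i] += log(p)
        candidates = []
        for i, accumulated in enumerate(logs):
            a = a_min + i
            value = a + b * m
            # prime powers are not sieved here, so smooth values with repeated
            # factors may be missed; real sievers tune the threshold for this
            if value != 0 and gcd(a, b) == 1 and accumulated >= log(abs(value)) - 1.0:
                candidates.append(a)              # likely smooth; confirm by trial division
        return candidates

    rfb = [2, 3, 5, 7, 11, 13]
    print(line_sieve_rational(b=1, m=31, rfb=rfb, a_range=20))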

Solving Linear Equations Using a Matrix

The RFB- and AFB-smooth (a, b) pairs filtered above are used in this step to find the subset of pairs which yield squares, i.e., whose combined factorizations contain only prime factors raised to even powers. The sieving results are filtered by removing duplicates and relations containing prime ideals not present in any other relation. The relations are then put into relation-sets to construct a very large sparse matrix over the Galois field GF(2) (that is, GF(p^m) with p = 2 and m = 1). The matrix is then reduced, resulting in some dependencies consisting of elements which lead to a square modulo n.

As this step can result in an extremely large sparse matrix (for RSA-scale composites, the dimensions can run into the millions), it may be hard to represent in memory and thus may require a supercomputer for processing. The matrix, an illustrative example of which is included below, records the factorizations over the RFB and AFB. So that memory doesn't pose a major constraint to processing the matrix, its size is minimized by recording exponents modulo 2, i.e., replacing odd powers with 1s and even powers with 0s. The matrix is then reduced to row echelon form using either the Block Lanczos or the Block Wiedemann technique for optimal run time, as Gaussian elimination, the better-known technique, may not be optimal at this scale.

RFB Powers Are Shown Above and AFB Powers Are Shown Below in the Matrix
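A minimal Python sketch of the underlying linear-algebra idea is given below: exponent vectors are reduced mod 2 and a dependency is found by Gaussian elimination over GF(2). The small factor base and the sample values are arbitrary; production NFS implementations use Block Lanczos or Block Wiedemann on sparse matrices far too large for this dense toy approach:

    def exponent_vector_mod2(value, factor_base):
        """Exponent vector of value over the factor base, reduced mod 2 (None if not smooth)."""
        vec = 0
        for i, p in enumerate(factor_base):
            while value % p == 0:
                value //= p
                vec ^= 1 << i                     # toggle bit i: only the parity matters
        return vec if value == 1 else None

    def find_dependency(vectors):
        """Gaussian elimination over GF(2): return indices of vectors that XOR to zero."""
        basis = {}                                # pivot bit -> (reduced vector, combination mask)
        for idx, vec in enumerate(vectors):
            combo = 1 << idx
            while vec:
                pivot = vec.bit_length() - 1
                if pivot not in basis:
                    basis[pivot] = (vec, combo)
                    break
                basis_vec, basis_combo = basis[pivot]
                vec ^= basis_vec
                combo ^= basis_combo
            else:                                 # vec reduced to zero: dependency found
                return [i for i in range(len(vectors)) if (combo >> i) & 1]
        return None

    fb = [2, 3, 5, 7]
    values = [10, 24, 35, 52, 54, 21]
    smooth = [(v, exponent_vector_mod2(v, fb)) for v in values]
    smooth = [(v, vec) for v, vec in smooth if vec is not None]   # 52 = 4 * 13 is discarded
    dependency = find_dependency([vec for _, vec in smooth])
    print([smooth[i][0] for i in dependency])     # [24, 54]; 24 * 54 = 1296 = 36^2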

Calculating Square Roots in Number Fields

The computation of the matrix yields one or more products which are squares and can yield trivial or non-trivial factors of the hard composite n. The rational square root and the algebraic square root need to be calculated and are of the following form, where S is the dependency (the set of (a, b) pairs) obtained from the matrix step:

Algebraic square root, x ϵ ℤ[α]: x² = ∏_{(a, b) ϵ S} (a - bα)

Rational square root, y ϵ ℤ: y² = ∏_{(a, b) ϵ S} (a - bm)

The element (x², y²) just found yields a solution to the congruence condition noted earlier: x² ≡ y² (mod n).


Thus, the square roots (x, y) of the squares (x², y²) ϵ ℤ x ℤ[α] now need to be computed. For the rational side this is feasible, since square roots can be extracted in ℤ. Computation of the very large number y² can also be avoided, because the prime factorization of each element (a - bm) in the product, and hence of the product itself, is already known: the square root is obtained by halving the exponents. For the algebraic number field, however, even though the prime ideal factorization of x² can be determined, it is not directly useful since the prime ideals may not have generators at all; specialized techniques (such as Montgomery's algorithm for square roots of products of algebraic numbers, cited in the references) are used instead. Once the congruence x² ≡ y² (mod n) is in hand, we get the factors of n from (x + y) and (x - y): as noted earlier, the greatest common divisors gcd(x + y, n) and gcd(x - y, n) yield the two prime factors of n, namely p and q.

Rational Square Roots and Algebraic Square Roots Used to Compute Factors
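A minimal Python sketch of the rational-side idea described above is given below: the square root y is obtained by halving the known prime exponents rather than by computing the huge product y² and then extracting its root, after which the final gcd step recovers the factors. The toy numbers are arbitrary, and the algebraic square root x is simply assumed to be given, since computing it properly requires the number-field techniques cited in the references:

    from math import gcd
    from collections import Counter

    def rational_sqrt_mod_n(elements, factor_base, n):
        """Square root mod n of prod(elements), via the known exponents of each element."""
        exponents = Counter()
        for value in elements:
            for p in factor_base:
                while value % p == 0:
                    value //= p
                    exponents[p] += 1
            assert value == 1, "element not smooth over the factor base"
        y = 1
        for p, e in exponents.items():
            assert e % 2 == 0, "product is not a perfect square"
            y = (y * pow(p, e // 2, n)) % n       # halve each exponent, multiply mod n
        return y

    n = 77
    elements = [2, 8, 9]                          # product 144 = 12^2, smooth over {2, 3}
    y = rational_sqrt_mod_n(elements, [2, 3], n)  # y = 12
    x = 23                                        # assume the algebraic side gave x^2 = y^2 (mod n)
    print(gcd(x - y, n), gcd(x + y, n))           # 11 7, the non-trivial factors of 77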

‘The Enemy Knows the System…’: Beyond RSA-1024

Having completed the technical discussion on NFS and on the growing vulnerability of stronger RSA encryptions, with the future of RSA-1024 and of the world literally up in the air ('unknown'), one would realize that the odds no longer favor the cryptologist encryption creator's overconfidence, once off by a factor of ~10^15. Given the known real world observations at hand about encryption failures of stronger systems and the enormity of the stakes involved (probably in hundreds of billions of dollars, as illustrated further), particularly in the current era of everything-WWW, a much more conservative stance about cryptology's faith in secret keys is not only desirable but crucial. Economies and nations of the world literally can't afford any complacency based upon a similar overestimation of ~10^15, or even less, in terms of overconfidence in encryption systems and algorithms.

What if Poe’s foresight about human cryptologist’s ingenuity not being able to

outsmart human cryptanalyst’s (code breaker’s) ingenuity was not far from reality?

To develop a reality check about the plausibility that 'the enemy knows the system,' i.e., that the gold standard of encryption RSA-1024 may already have been cracked in private, a review of recent public media reports on encryption cracking was done, with illustrative examples presented in the Appendix: 'When The Enemy Knows The System'.

[The author, whose research and practice on the limitations of Shannon's information theory go back 20 years, is an active Certified Information Systems Security Professional (CISSP)

who monitors related reports on a regular basis. What is illustrated herein is a very

small sample from his research archive on this topic.]

A critical analysis of recent RSA and other strong encryption failures, particularly over the span of the last 2-3 years, reveals evidence of some of the most alarming and devastating cyber-intrusion attacks in the history of the worldwide adoption of the WWW. Most critically, recent developments such as the national cyber-directives by the US President [in words similar to what is attributed to the author of this article in global interviews and published research about 'smart' use of technologies by 'smart' people], the US Secretary of Defense, and other US and national executives seem unprecedented in the history of cryptology and encryption schemes.

Furthermore, despite obvious reticence on the part of most hi-tech public companies, recent highly public admissions of encryption failures of their global and national scale systems by the largest hi-tech companies such as Google and Facebook, on both the leading and the bleeding edge of cryptology applications, seem telling in what they don't really tell. Most US defense contractors that service hundreds of billions of dollars in US national security contracts remain reticent about cyber-intrusion attacks, even though Lockheed Martin 'admitted' in May 2011 that it was the target of a "significant and tenacious" cyber-attack. Its 'admission', and the 'non-admission' of most of its competitors, are apparently more telling than what this one company actually said, essentially to ameliorate most concerns, as such 'public relations' (PR) messages are expected to do.


Therefore, it may be concluded beyond reasonable doubt that the ‘system is already

broken’: Poe’s foresight about the state of encryption attacks was not far from reality.

Preparing for When the ‘Enemy Knows the System’

The author of the Scientific American '40 quadrillion years' RSA challenge column asserted that: "Poe was certainly wrong. Ciphers that are unbreakable even in theory have been in use for half a century. They are "one-time pads," ciphers that are used only once, for a single message. Here is a simple example based on a shift cipher, sometimes called a Caesar cipher because Julius Caesar used it." It nonetheless remains a fact that even though one-time pads have their place in a minuscule share of encryption applications, the future of the national and global economies is not as sensitive to them, given the billion-fold or perhaps greater penetration of RSA-type encryption schemes.

Most cryptologists and cryptanalysts would recognize that Poe's insight was not far from reality even in terms of Claude Shannon's maxim or Kerckhoffs's Principle. Kerckhoffs's Principle states that: 'A cryptosystem should be secure even if the attacker knows all details about the system, with the exception of the secret key.' Shannon's maxim was even more explicit in avoiding the apparent hubris of cryptologists' over-confidence in any encryption algorithm, stating: "The enemy knows the system." Hence Poe was only suggesting what Kerckhoffs and Shannon stated decades to a century later: all three emphasized the design of encryption systems based on the premise that human ingenuity cannot concoct a cipher which human ingenuity cannot resolve.

Cryptology beyond Shannon’s Information Theory

What can be done in terms of future cryptology advancements about the discussed

challenge of creating encryption systems that are computationally infeasible to break?

Based upon my own 20-year research focused on information and communication as

well as information and communication systems, I outline a sketch of future extensions

to Shannon’s maxim and Kerckhoff’s Principle. I outline these extensions in the form of

adaptations of both as follows and call them Yogesh Malhotra’s principle of no secret keys.

"A cryptosystem should be secure even if the attacker knows all details about the system, with

the exception of including the secret key."

- Yogesh Malhotra’s reformulation of Kerckhoffs's principle, 2013


"The enemy knows the system, but you ‘know’ better."

- Yogesh Malhotra’s reformulation of Shannon’s maxim, 2013

The proposed extension is based on my 1995 communication with Professor John Holland, the computer scientist and psychologist who invented genetic algorithms (Malhotra 2001). Based on Holland's response to my query, namely that Shannon's notion of information in information theory missed critical human aspects of meaning, sense making, and knowing, my research has tried to fill that gap. The resulting body of work has become widely accepted as a knowledge management framework that is applied by top executives and commanders at global organizations such as NASA, US AFRL, US Army, US Navy, and US Air Force. The above proposed principles of no secret keys, based on that framework, aim to complement Shannon's information theory, originally designed for controlling machines. The extensions are proposed in terms of my peer-reviewed information systems research on the psychology of information, meaning, sense making, and knowing, noted among 'exemplars' of 'considerable impact on actual practice' in the 2008 AACSB International Impact of Research Report. [While my name seemed to be the only name noted as an exemplar of contributions to Information Systems research, several Finance and Economics Nobel laureates were mentioned as exemplars whose research my work on quantitative modeling of risk and uncertainty is attempting to advance.]

I propose the above extensions to bridge the current gap between ‘information theoretic’

information processing and human sense making processes (Malhotra 2001). What having

and knowing mean in this human behavior framework can fundamentally advance

obsolete notions such as information-theory-based two-factor authentication by rethinking and reformulating very basic ideas such as: something that

you have and something that you know. The fundamental proposition is to extend prior

applications based on Shannon’s ‘information theory’ focused on static and

deterministic behavior of computing machines by integrating understanding about

dynamic and non-deterministic behaviors of human users of encryption systems.

The fundamental contrast between the two approaches is in terms of the treatment of 'secret'. Shannon's information theory [or, more specifically, its encryption applications] treats the secret [or, specifically, the related meaning resulting from the interaction of senses and symbols] as extrinsic to senses and embedded in exogenous symbols. In Shannon's frame, specific sense or meaning related to such exogenous symbols is also assumed to

be homogeneous and static across all humans and more or less also across time [as if

they were computing machines processing information rather than humans making

sense].

My information systems research focused on multiple streams of psychology, learning,

and knowing has determined that the secret [or specifically related meaning resulting

from interaction of senses and symbols] is not extrinsic to senses and not embedded in

exogenous symbols. Rather, in my frame of reference based on widely examined

empirically validated streams of information psychology research, the secret is intrinsic

to subjective senses. In my frame, specific sense or meaning related to such exogenous

symbols is also assumed to be non-homogeneous across users as well as non-static

across time.

Despite their sophistication in terms of psychological research, the proposed principles

and extensions are based on naturally intuitive experience of real world locks and keys

that most of us are already familiar with. However, they apply the lessons learned from that physical world in terms of what not to do (certainly not your birth date as a pass phrase) to the virtual world, rather than what to do (your birth date as a pass phrase), which is evident in current naïve systems of pass phrases and two-factor authentication that are predictably prone to failure. Hence, this approach turns the 'security

through obscurity’ cliché on its head by transforming it from a critical liability of

unexamined systems prone to failure to a critical asset where the secret is intrinsic to

user’s mind rather than the external symbol(s): where others see mess, the specific

human creator sees the intuitive structure that others can’t possibly decipher using any

possible algorithm – as meaning interpreted through subjective senses is detached from symbols. Where the machine-focused information theoretic paradigm has failed in trying to control human behavior as if humans were deterministic machines, the proposed paradigm can possibly leverage the non-determinism inherent in human behavior

to create ground-breaking processing and encryption capabilities yet to be discovered.


References

Briggs, Matthew E. An Introduction to the General Number Field Sieve. Master of Science in Mathematics Thesis, Virginia Polytechnic Institute and State University, April 17, 1998.

Crandall, R. E. and Pomerance, C. Prime Numbers: A Computational Perspective (2001). 2nd edition, Springer. ISBN 0-387-25282-7. Section 6.2: Number field sieve, pp. 278-301.

Cowie, J.; Dodson, B.; Elkenbracht-Huizing, R. M.; Lenstra, A. K.; Montgomery, P. L.; Zayer, J. A. "World Wide Number Field Sieve Factoring Record: On to 512 Bits." In Advances in Cryptology, ASIACRYPT '96 (Kyongju) (Ed. K. Kim and T. Matsumoto). New York: Springer-Verlag, pp. 382-394, 1996.

Jensen, Per Leslie. Integer Factorization. Master Thesis, Department of Computer Science, University of Copenhagen, Fall 2005.

Lenstra, A. K. and Lenstra, H. W. Jr. "Algorithms in Number Theory." In Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity (Ed. J. van Leeuwen). New York: Elsevier, pp. 673-715, 1990.

Lenstra, A. K. and Lenstra, H. W. Jr. The Development of the Number Field Sieve. Berlin: Springer-Verlag, 1993.

Lenstra, A. K., Lenstra, H. W. Jr., Manasse, M. S., and Pollard. "The Number Field Sieve." In The Development of the Number Field Sieve. Berlin: Springer-Verlag, 1993, pp. 11-42.

Lenstra, A. K. and Verheul, E. R. "Selecting Cryptographic Key Sizes." In Public Key Cryptography: Third International Workshop on Practice and Theory in Public Key Cryptosystems, PKC 2000 (Ed. H. Imai and Y. Zheng). Berlin: Springer-Verlag, pp. 446-465, 2000.

Malhotra, Y. "Expert Systems for Knowledge Management: Crossing the Chasm Between Information Processing and Sense Making." Journal of Expert Systems with Applications 20 (2001), pp. 7-16.

Montgomery, Peter L. "Square Roots of Products of Algebraic Numbers." In Mathematics of Computation 1943-1993 (W. Gautschi, ed.), Proc. Sympos. Appl. Math., vol. 48, 1994, pp. 567-571.

Peterson, Michael. Summary of the Number Field Sieve. University Honors Thesis in Mathematics and Statistics, Texas Tech University, December 2004.

Pollard, J. M. "Factoring with Cubic Integers." In The Development of the Number Field Sieve. Berlin: Springer-Verlag, 1993, pp. 4-10.

Pomerance, C. "A Tale of Two Sieves." Not. Amer. Math. Soc. 43, pp. 1473-1485, 1996.


APPENDIX: ‘WHEN THE ENEMY KNOWS THE SYSTEM’
