
Page 1: Trusted vs. secure software

Trusted vs. secure software

– So far, we saw:
• Some security features of Operating Systems (authentication, authorization), and
• Secure operations.

– Next: What does “secure” software really mean?

© Most of the material in these slides is taken verbatim from Pfleeger (the textbook)

Page 2:

When can we say that software is “secure”?

• Your boss asks you to buy a “secure” OS (remember, an OS is software too). You have various choices: general purpose (Windows, Mac, Linux, etc.) and special purpose (e.g., SELinux). How will you decide which one to pick?

Page 3:

When can we say that a software is “secure”? (2)

• Various criteria can be used to decide whether software is secure. Here are some:
– First, what security goals does your boss want? Confidentiality, integrity, availability, authentication?
– Does it implement the various secure software design principles? (least privilege, defense in depth, etc.)
– Does it have a historic record of vulnerabilities?
– How secure is the actual implementation (e.g., does it validate input)?

• All these are reasonable criteria – but not good enough! Why not?

Page 4:

When can we say that a software is “secure”? (3)

• Despite applying the criteria that we saw on the previous slide, it is very hard to determine whether a system is “secure”. Why?
– Linux offers complete mediation through system calls. Does that mean the principle of complete mediation is fully implemented?
– Answer: not necessarily. There may be covert channels (i.e., previously unseen channels). E.g., with a race condition attack, someone might gain root privileges, bypassing Linux’s authorization controls.
– Similarly, how can one ensure that software does not have buffer overflows?
– History suggests that new software bugs are unearthed every day…
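The race-condition idea above can be made concrete with the classic time-of-check-to-time-of-use (TOCTOU) pattern. The sketch below is not from the textbook: the file names and the attacker callback are hypothetical, and the “privileged” program is simulated in plain Python.

```python
# TOCTOU sketch: a program checks permissions on a path, then opens it.
# If an attacker swaps the path for a symlink between the two steps,
# the open() acts on the new target with the program's privileges.
import os, tempfile

def vulnerable_write(path, data, attacker_action=None):
    # Time of CHECK: is the path writable?
    if os.access(path, os.W_OK):
        # ...window of vulnerability: attacker may replace the path here...
        if attacker_action:
            attacker_action()
        # Time of USE: opens whatever the path points to NOW.
        with open(path, "w") as f:
            f.write(data)

workdir = tempfile.mkdtemp()
victim = os.path.join(workdir, "report.txt")   # the file being "checked"
secret = os.path.join(workdir, "shadow")       # stand-in for /etc/shadow
open(victim, "w").close()
open(secret, "w").close()

def attacker():
    os.remove(victim)
    os.symlink(secret, victim)                 # swap the file for a symlink

vulnerable_write(victim, "owned", attacker_action=attacker)
print(open(secret).read())                     # the "protected" file was written
```

The fix in real code is to avoid the check-then-use pattern altogether, e.g., open the file once and operate on the returned file descriptor.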

• So what should a security expert do?

Page 5:

When can we say that a software is “secure”? (4)

• What should a security expert do? Which of the following choices will you choose?
– (a) Give up – admit defeat and change careers to underwater basket weaving.
– (b) Remain quiet and recede into the background in a corporate job.
– (c) Take a positive attitude – after all, the more bugs, the better the job security.
– (d) Drop the word “secure”.

• Answer: (d). Don’t use the word “secure”. Replace it with “trust”.

• Example: Tell your boss that she is not looking for a “secure” OS, but a “trusted” OS. Why is “trust” different from “secure”?

Page 6:

“Secure” vs. “Trust”

• The word “secure” reflects a dichotomy:
– Something is either secure or not secure.

• “Trust”, on the other hand, gives allowance for approximations. E.g., trust implies the system meets its current security requirements (it cannot speak to the future). Trust has degrees.

Page 7:

Steps in Building trusted software: From Security Policies to Verification.

• Steps to determine whether software is “trusted”:
– Step 1: Define the degree of trust – we call this the security policy: the statement of the security we expect the system to enforce. E.g., policy: John Doe cannot open the /etc/shadow file for writing.
– Step 2: Define formal models that give us the conditions under which the policy holds. E.g., if John Doe tries to open /etc/shadow for writing, the system call returns -1.
– Step 3: Verify that the implementation actually meets the security policy. This is done using testing and formal verification approaches.
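The three steps above can be sketched on a toy system. This is purely illustrative – `sys_open` and the user names are made up, not a real kernel interface – but it shows a policy (Step 1), a model of the condition that assures it (Step 2), and verification by testing (Step 3).

```python
# Step 1 (policy): John Doe cannot open /etc/shadow for writing.
def policy_allows(user, path, mode):
    return not (user == "john_doe" and path == "/etc/shadow" and "w" in mode)

# Step 2 (model): if the policy denies a request, the open call returns -1.
def sys_open(user, path, mode):
    if not policy_allows(user, path, mode):
        return -1        # denied, as the model requires
    return 3             # pretend file descriptor on success

# Step 3 (verification): test that the implementation meets the policy.
assert sys_open("john_doe", "/etc/shadow", "w") == -1
assert sys_open("john_doe", "/etc/shadow", "r") != -1
assert sys_open("root", "/etc/shadow", "w") != -1
```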

Page 8:

Examples of determining if a software is trustworthy.

• We will see these steps using an operating system as our example software.

• So the question: Is a specific OS trustworthy?

Page 9:

Step 1: Define some security policies that are relevant to your context.

• E.g., if your boss wants a trusted OS, is she looking for an OS that ensures confidentiality of data?
• We have seen some policies already:
– Confidentiality
– Integrity
– Availability
• Let us see a few more examples:
– Military security policy: “need to know”, “top secret”, “secret”.
– Corporate security policies, etc.

Page 10:

Military security policies

• In the military, access to data is on a “need to know” basis. How does the military achieve this?

Page 11:

Military security policies (2)

• In the military, access to data is on a “need to know” basis. How does the military achieve this?
– Data and resources are divided into sensitivities (e.g., top secret down to unclassified).
– Personnel are associated with various ranks (also called clearance levels), e.g., a top secret clearance level.
– Data across the various sensitivities are divided up into compartments.
– Access control is achieved by associating ranks with compartments.

Page 12:

Figure 5-1  Hierarchy of Sensitivities.


Military security policy: Sensitivities

Military resources (e.g., data files on a disk) are divided into various “sensitivities”

Page 13:

Figure 5-2  Compartments and Sensitivity Levels.

Compartments in a Military security policy. Each compartment contains data across multiple sensitivity levels. Example on next slide.

Page 14:

Figure 5-3  Association of Information and Compartments.

A single piece of information may belong to multiple compartments. E.g. publications on cryptography may be part of the CRYPTO compartment: some publications may be top secret, while others are unclassified.

Page 15:

Terms

• Information falls under different degrees of sensitivity:
– From unclassified to top secret.
– Each sensitivity is given a numeric rank, e.g., unclassified = 0.

• Need to know: enforced using compartments.
– E.g., a particular project may need to use information that is both top secret and secret. Solution: create a compartment to cover the information in both.
– A compartment may include information across multiple sensitivity levels.

• Clearance: A person seeking access to sensitive information must be cleared. Clearance is expressed as a pair: <rank, compartments>. E.g., <top_secret, {CRYPTO}> allows all personnel with top_secret clearance to read files in the CRYPTO compartment.

Page 16:

So the military security policy is expressed as a dominance relation.

Dominance relation (a partial order): for a subject s and an object o, s ≤ o if and only if:
– s.rank ≤ o.rank, and
– s.compartments ⊆ o.compartments.

I.e., a subject s can read an object o only if o ≤ s, meaning:
– the clearance level of the subject is at least as high as that of the information, and
– the subject has a need to know about all compartments for which the information is classified.

E.g., the object <secret, {Sweden}> can be read by someone with clearance <top_secret, {Sweden, Snowshoes}> or <secret, {Sweden}> – but not by somebody with <top_secret, {Crypto}>.
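The dominance check can be written down directly. A minimal sketch – the numeric ranks and Python sets are my encoding, not the textbook’s notation:

```python
# Labels are (rank, compartments) pairs; higher rank = more sensitive.
UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET = range(4)

def dominates(subject, obj):
    """True iff obj <= subject, i.e., the subject may READ the object."""
    s_rank, s_comps = subject
    o_rank, o_comps = obj
    # <= on Python sets means "is a subset of"
    return o_rank <= s_rank and o_comps <= s_comps

obj = (SECRET, {"Sweden"})
assert dominates((TOP_SECRET, {"Sweden", "Snowshoes"}), obj)  # may read
assert dominates((SECRET, {"Sweden"}), obj)                   # may read
assert not dominates((TOP_SECRET, {"Crypto"}), obj)           # no need to know
```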

Page 17:

Another example: Commercial Security Policies

What are some of the needs of a commercial policy?
– How does corporate policy differ from military policy?

Answer: it depends on the type of company. One example: conflict of interest. E.g., accounting firms are paid by the same companies whose records they audit. How can they ensure fairness in the audit process? What security policies must be in place?

Page 18:

Example: Chinese Wall Policy

Addresses needs of commercial organizations: legal, medical, investment and accounting firms. Key protection: conflict of interest.

How is the policy expressed? Terms:

– Objects: elementary objects, such as files.
– Company groups: all objects concerning a particular company are grouped together.
– Conflict classes: all groups of objects for competing companies are clustered together.

Page 19:

Example: Consider an advertising firm that works on advertisements for multiple companies that do similar things. E.g.,

Assume that the advertising firm represents three banks: Citicorp, Deutsche bank, Credit Lyonnais.

Now, the advertising firm has a conflict of interest. Employees who are making the ads for Citicorp should not be able to see the ads being made for Deutsche Bank.

Otherwise, they may get an unfair advantage. E.g., Citicorp’s advertising team may know ahead of time that Deutsche Bank is planning to advertise free checking. They may then advertise free checking plus a free toaster when an account is opened!

Page 20:

Example: Consider an advertising firm that works on advertisements for multiple companies that do similar things. E.g.,

To implement the Chinese Wall model:

Step 1: Group all the documents of Citicorp into a single company class. Group all documents of Deutsche Bank into a single company class. Group all documents of Credit Lyonnais into a single company class.

Step 2: Now group all three company classes (from above) into one conflict class.

Page 21:

Figure 5-5  Chinese Wall Security Policy.

Example: Citicorp is a company class that contains all the documents pertaining to Citicorp.

Here are examples of conflict classes.

E.g., Citicorp, Credit Lyonnais and Deutsche Bank are in one conflict class.

Page 22:

Simple policy (Chinese Wall Policy)

A person can access information from a company as long as the person has never accessed information from another company in the same conflict class.

E.g., an advertising exec can access Citicorp’s ad files as long as he/she hasn’t accessed any files of other companies in the same conflict class.
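A minimal sketch of this simple policy: track which companies a person has already accessed, and deny access to competitors in the same conflict class. The second conflict class (oil companies) is a hypothetical addition for illustration.

```python
# Chinese Wall "simple" policy: access to a company is allowed only if the
# person has never accessed another company in the same conflict class.
CONFLICT_CLASSES = [
    {"Citicorp", "Deutsche Bank", "Credit Lyonnais"},  # banks (from the slide)
    {"Shell", "Exxon"},                                # hypothetical oil class
]

accessed = set()   # companies this person has already touched

def may_access(company):
    for cls in CONFLICT_CLASSES:
        if company in cls and accessed & (cls - {company}):
            return False   # already saw a competitor in this conflict class
    return True

def access(company):
    if may_access(company):
        accessed.add(company)
        return True
    return False

assert access("Citicorp")           # first access: allowed
assert access("Citicorp")           # same company again: still allowed
assert not access("Deutsche Bank")  # competitor in same conflict class: denied
assert access("Shell")              # different conflict class: allowed
```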

Page 23:

Figure 5-5  Chinese Wall Security Policy.

Example: Once an executive accesses a document from, say, Credit Lyonnais, he/she is denied access to documents of the other companies in the same conflict class.

Page 24:

Application of the Chinese Wall policy

Primarily in business. E.g., keeping the auditing part of a firm separate from the trading part.

Implementation: in OSs and databases it can be enforced using “multilevel security”. E.g., Oracle supports MLS (http://www.oracle.com/technetwork/database/options/label-security/index.html). The implementation is beyond the scope of this class – it will be covered in ITEC 445.

Page 25:

So far, we have addressed confidentiality (conflict of interest) in corporations, not integrity.

In commercial environments, integrity is important as well. E.g., suppose a university wanted to purchase some equipment. How would it do it?
(i) A purchasing clerk creates an order for supply and sends copies to the supplier and the receiving department.
(ii) The supplier ships the goods to the receiving department. The receiving clerk checks the shipment against the order from (i) and then signs the delivery form, forwarding it to accounting.
(iii) The accounting clerk compares the invoice with the original order (to check the price and other terms)… and only then issues a check.

In the above transaction, the order was important! Why?

Page 26:

Integrity in corporations.

In the transaction on the previous slide, the order of the steps was important.

If, for instance, the clerk issues the check before getting the invoice, then there are issues with the integrity of the order (maybe only part of the order was shipped), etc.

Similarly, when developing software, certain trusted software needs to preserve such integrity of operations.

How? Clark-Wilson security policies.

Page 27:

Clark-Wilson Security Policy

Defines a tuple for every operation (e.g., the purchasing clerk writes a check):

<userID, transformationProcedure, {CDIs…}>

Here,
1. userID: the person who can perform the operation.
2. transformationProcedure: performs only certain operations, depending on the data. E.g., writeACheck, if the data’s integrity is maintained.
3. CDIs: constrained data items – data items with certain attributes. E.g., by the time the receiving clerk sends the delivery form to the accounting clerk, the delivery form has already been “checked” by the receiving clerk. Think of these as “stamps” of approval.
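A sketch of how such tuples could be checked. The specific triples and role names below are invented for illustration; the point is that a user may run a transformation procedure only on CDIs covered by some tuple.

```python
# Clark-Wilson access triples <userID, transformationProcedure, {CDIs}>.
# A request is permitted only if some triple covers it.
TRIPLES = {
    ("accounting_clerk", "writeACheck", frozenset({"invoice", "order"})),
    ("receiving_clerk", "signDelivery", frozenset({"delivery_form"})),
}

def permitted(user, tp, cdis):
    # The requested CDIs must be a subset of the CDIs in a matching triple.
    return any(u == user and t == tp and frozenset(cdis) <= c
               for (u, t, c) in TRIPLES)

assert permitted("accounting_clerk", "writeACheck", {"invoice", "order"})
assert not permitted("purchasing_clerk", "writeACheck", {"invoice", "order"})
assert not permitted("accounting_clerk", "signDelivery", {"delivery_form"})
```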

Page 28:

Clark-Wilson Model

But such tuples, called “well-formed transactions”, are not sufficient: we also need to “separate responsibilities”.

In the Clark-Wilson model, separation of duties is accomplished by means of dual signatures.

Page 29:

Security Models

While policies tell us what we want from software… models tell us formally (mathematically) what conditions the software needs to enforce in order to achieve a policy.

Understanding security models is important: when developing trusted software (or purchasing it), you may need to determine whether it follows a certain model… example on the next slides.

Page 30:

Example models: two famous security models

1. Bell LaPadula model: enforces confidentiality.
• It defines a set of mathematical rules that a software application must enforce to ensure the confidentiality of information in a military-type setting.

2. Biba model: enforces integrity.
• It defines the set of mathematical rules that ensure that software enforces the integrity of information in a military setting.

Next: an example (class exercise), and after that the mathematics behind the Bell LaPadula and Biba models.

Page 31:

Example models: two famous security models

To understand the security models, we study a structure called a lattice.
A lattice is a partial ordering in which every pair of elements has a least upper bound and a greatest lower bound.

E.g., the military security model is a lattice: <secret, {Sweden}> and <secret, {France}> have a least upper bound and a greatest lower bound.
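For labels of the form <rank, compartments>, the least upper bound (join) and greatest lower bound (meet) can be computed directly. A sketch, assuming numeric ranks (my encoding, not the textbook’s):

```python
# join = least upper bound: the smallest label that dominates both inputs.
# meet = greatest lower bound: the largest label dominated by both inputs.
def join(a, b):
    return (max(a[0], b[0]), a[1] | b[1])   # higher rank, union of compartments

def meet(a, b):
    return (min(a[0], b[0]), a[1] & b[1])   # lower rank, intersection

SECRET = 2
a = (SECRET, frozenset({"Sweden"}))
b = (SECRET, frozenset({"France"}))
assert join(a, b) == (SECRET, frozenset({"Sweden", "France"}))
assert meet(a, b) == (SECRET, frozenset())
```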

Page 32:

A lattice is a mathematical structure used to “partially” order data.

E.g., suppose you have the numbers 455, 245, 445 – it is easy to order them in ascending or descending order: 245 ≤ 445 ≤ 455.

However, sometimes such “complete” orders are not possible. E.g., consider the following sets:
1. The set of all books on computer security (call it SecuritySet).
2. The set of all books on databases (call it DBSet).
3. The set of all books on information technology (call it ITSet).

Now, how will you arrange these different sets in an order? The fact is: there may or may not be a relation between SecuritySet and DBSet. There is, however, a clear relation between DBSet and ITSet (DBSet is a subset of ITSet), and similarly SecuritySet is a subset of ITSet. But there is no relation between DBSet and SecuritySet. So, in such cases, we perform a partial ordering. We simply order the three sets as follows: DBSet ⊆ ITSet, and SecuritySet ⊆ ITSet.

Such an ordering is called a “partial ordering”, and we use a lattice to represent such models.

Page 33:

Figure 5-6  Sample Lattice.

A lattice is a mathematical structure used to “partially” order data. Notice: a complete order would be just a straight line.

Page 34:

Bell LaPadula Model for Confidentiality

Tells us what conditions need to be met to satisfy confidentiality and to implement multilevel security policies (e.g., military policies).

Consider a security system with the following properties:
(i) The system contains a set of subjects S.
(ii) A set of objects O.
(iii) Each subject s in S and each object o in O has a fixed security class (C(s), C(o)). In military security, examples of classes: secret, top secret, etc.
(iv) Security classes are ordered by the ≤ symbol.

Page 35:

Bell LaPadula Model for Confidentiality (2)

Properties:

(1) Simple security property: A subject s may have read access to an object o only if C(o) ≤ C(s).

Is this property enough to achieve confidentiality? Why or why not?

Page 36:

Bell LaPadula Model for Confidentiality (3)

Properties:

(2) The *-property (read this as the “star property”):

A subject s that has read access to an object o may have write access to an object p only if C(o) ≤ C(p).

Why was this needed?
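The two Bell LaPadula properties reduce to two comparisons. A sketch with numeric classes (the subject/object labels are illustrative, not from the slides):

```python
# Security classes: higher number = more sensitive. C maps names to classes.
C = {"s": 2, "o_low": 1, "o_mid": 2, "o_high": 3}

def may_read(s, o):
    # Simple security property ("no read up"): C(o) <= C(s)
    return C[o] <= C[s]

def may_write_after_read(o, p):
    # *-property ("no write down"): having read o, may write p only if C(o) <= C(p)
    return C[o] <= C[p]

assert may_read("s", "o_low") and not may_read("s", "o_high")
# Having read o_mid, s may write to o_mid or o_high, but not down to o_low:
assert may_write_after_read("o_mid", "o_high")
assert not may_write_after_read("o_mid", "o_low")
```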

Page 37:

Figure 5-8  Subject, Object, and Rights.

Need for the two policies: the definition of subject, object and access rights. E.g., s can “r” (read) object o.

Page 38:

Figure 5-7  Secure Flow of Information.

Bell LaPadula: read down, write up.

Page 39:

Biba Model for Integrity.

Bell LaPadula is only for confidentiality. How about integrity? Come up with a policy.

Page 40:

Biba Model for Integrity: the opposite of Bell LaPadula – “no write up; no read down”.

Simple integrity property: Subject s can modify (write) object o only if I(o) ≤ I(s). Here I is similar to C, except that I is called the integrity class.

Integrity *-property: If subject s has read access to object o with integrity level I(o), s can have write access to object p only if I(p) ≤ I(o).

Why is the second policy important?
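The two Biba properties mirror the Bell LaPadula sketch, with integrity classes in place of confidentiality classes. The labels below are illustrative:

```python
# Integrity classes: higher number = more trustworthy. I maps names to classes.
I = {"s": 2, "o_low": 1, "o_mid": 2, "o_high": 3}

def may_write(s, o):
    # Simple integrity property ("no write up"): I(o) <= I(s)
    return I[o] <= I[s]

def may_write_after_read(o, p):
    # Integrity *-property: what s has read caps what it may write: I(p) <= I(o)
    return I[p] <= I[o]

assert may_write("s", "o_low") and not may_write("s", "o_high")
# Having read low-integrity o_low, s must not contaminate higher-integrity data:
assert may_write_after_read("o_low", "o_low")
assert not may_write_after_read("o_low", "o_mid")
```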