You are here: Home / Teaching / Summer 2017 / Cryptography / Material / Notes, Chapter 3



Notes, Chapter 3



3 Semantic Security (KL3.1, KL3.2)
----------------------------------

In order to formally study secrecy of schemes that use shorter keys
one must somehow incorporate the computational power of an
adversary. One way of doing this is to use concrete bounds:

A scheme is (t,epsilon)-secure if every adversary running for time t
succeeds in breaking the scheme (in the appropriate sense) with
probability at most epsilon.

For example, suppose we use keys of bit length n and the only possible
attack is trying out all keys (as is the case, e.g., with Vernam's
cipher). Assuming that 10^9 keys can be tried per second (1 GHz),
trying out 2^60 keys takes about 36 years, and thus still several
years even with highly parallel computers. So a 60-bit key takes this
time to be broken with certainty, while a 64-bit key takes this time
to be broken with probability 1/16 (only 2^60 of the 2^64 keys are
covered), etc. Currently, key lengths of 128 bits (128 *bits of
security*) are recommended, both to make the success probability
negligible (e.g. lower than that of a meteor crashing into the device)
and to allow for future technological advances.
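The arithmetic behind the 60-bit estimate can be checked directly; the
trial rate of 10^9 keys per second is the assumption made above:

```python
# Back-of-the-envelope check: exhausting a 60-bit key space
# at 10^9 key trials per second (the assumed rate).
rate = 10**9                        # key trials per second (assumption)
seconds = 2**60 / rate              # time to try all 2^60 keys
years = seconds / (365 * 24 * 3600)
print(round(years, 1))              # about 36.6 years
```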

Now, working with concrete runtimes and probabilities is difficult and
brittle because of the dependence on the concrete hardware
configuration, programming language, implementation details, etc. One
therefore uses an asymptotic approach instead, where runtimes and
probabilities, as well as the sizes of keys and texts, are functions
of a *security parameter* n. One then requires that the success
probability of any adversary whose runtime is polynomial in the
security parameter be a *negligible function* of the security
parameter, i.e. eventually smaller than n^{-k} for every fixed k.
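As a numerical illustration (not from the notes): a success
probability of 2^-n is negligible, since for every fixed k it
eventually drops below n^-k. The helper name below is my own:

```python
# Check that 2^-n < n^-k for a fixed k once n is large enough.
def negligible_beats_poly(k, n):
    return 2**(-n) < n**(-k)

# For k = 10 the crossover has already happened by n = 60:
print(negligible_beats_poly(10, 60))   # True
```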

Definition: A private-key encryption scheme is a tuple of
probabilistic polynomial-time algorithms (Gen,Enc,Dec) such that:

1. Gen computes a key (in {0,1}^*) from the security parameter
(presented in unary, i.e. as 1...1 (n times)). We write this as k <-
Gen(n) or k <- Gen(1^n) to emphasize the unary notation.

2. Enc_k(m) and Dec_k(c) are as before, except that m, c are
bitsequences (in {0,1}^*). In particular, we must have Dec_k(Enc_k(m))=m.

A fixed-length private-key encryption scheme is defined analogously,
but there is a function l(n) fixing the length of plaintexts as a
function of the security parameter. Formally, when k is returned by
Gen(n), then Enc_k(m) is defined only for m of length exactly l(n).

For example, the one-time pad can be cast into this framework: given
the security parameter n, Gen generates a uniformly random bit
sequence of length n. One has l(n)=n and Enc_k(m) = m+k (bitwise
addition modulo 2, i.e. XOR), etc.
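A minimal sketch of the one-time pad in this framework, representing
l(n)=n-bit texts as integers (the function names are mine, not from
the notes):

```python
import secrets

def gen(n):
    return secrets.randbits(n)    # Gen(1^n): a uniform n-bit key

def enc(k, m):
    return k ^ m                  # Enc_k(m) = m + k, bitwise XOR

def dec(k, c):
    return k ^ c                  # Dec_k(c) = c + k, bitwise XOR

n = 16
k = gen(n)
m = 0b1010101010101010
assert dec(k, enc(k, m)) == m     # Dec_k(Enc_k(m)) = m
```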



Indistinguishability in the presence of an eavesdropper
- - - - - - - - - - - - - - - - - - - - - - - - - - - - 

We now want to adapt the notion of perfect secrecy from above to the
asymptotic framework. An adversary thus also gets the security
parameter n and its runtime must be bounded by a polynomial in n. It
is allowed to use randomization (has access to random bits). 

Given a scheme Pi=(Gen,Enc,Dec) and an adversary A the experiment
PrivK^eav_{A,Pi}(n) is now defined as follows:


  1. The adversary produces two messages m0, m1 of equal length
     (of length l(n) in the fixed-length case).
  2. A key k is generated by Gen(n) and a bit b <- {0,1} is
     chosen uniformly at random (50/50). The (or rather a) ciphertext
     c = Enc_k(m_b) as well as the security parameter n are then given to A.
  3. A answers by outputting a bit b'.
  4. The output of the experiment is defined to be 1 if b=b' and 0 otherwise.
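The steps above can be sketched as a function that runs the experiment
once, here instantiated with the one-time pad and a guessing adversary
(all names are mine; the true success probability for the one-time pad
is exactly 1/2, so the empirical rate comes out close to 0.5):

```python
import secrets

def privk_eav(gen, enc, adv_msgs, adv_guess, n):
    m0, m1 = adv_msgs(n)              # step 1: two equal-length messages
    k = gen(n)                        # step 2: fresh key ...
    b = secrets.randbits(1)           # ... and a uniform bit b
    c = enc(k, (m0, m1)[b])           # ciphertext of m_b
    b_guess = adv_guess(n, c)         # step 3: adversary outputs b'
    return int(b_guess == b)          # step 4: 1 iff b' = b

# One-time pad on n-bit integers, adversary guesses at random.
wins = sum(
    privk_eav(lambda n: secrets.randbits(n),   # Gen
              lambda k, m: k ^ m,              # Enc (XOR)
              lambda n: (0, 2**n - 1),         # m0, m1
              lambda n, c: secrets.randbits(1),
              16)
    for _ in range(10000)
)
print(wins / 10000)   # close to 0.5
```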


Definition:  A scheme Pi=(Gen,Enc,Dec) has *indistinguishable
encryptions in the presence of an eavesdropper* if for every
probabilistic polynomial-time adversary A there is a negligible
function negl(n) such that

      Pr[PrivK^eav_{A,Pi}(n) = 1] <= 1/2 + negl(n)

Remark: rather than having the adversary supply m0 and m1, they could
also be universally quantified. Randomizing over m0 and m1 would,
however, lead to a weaker (and unreasonable) definition.


*Semantic security* proper: 

One can show (see 3.2) that this definition is equivalent to other
ones that give even more power to the adversary. The most compelling
one is "semantic security" which assumes an arbitrary probability
distribution on the plaintexts and also partial information on the
plaintexts to be given to the adversary.

We remark that in general proofs of semantic security (in the sense of
the above definition) are constructive in that the negligible function
bounding the probability can be read off as an explicit term. From
this, one can --- possibly after optimizing the constructions used in
the proof --- read off concrete bounds for fixed security parameter n.
