Key Exchange

Spring 2019

The questions below are due on Sunday April 07, 2019; 11:59:00 PM.
 

In Weeks 7 and 8, we've been working on encrypting our data using the Caesar and Vigenere ciphers. These have some obvious flaws in terms of how easy they are to crack, but there's also a frustrating practical problem: you must get together with whomever you are communicating with ahead of time and agree upon a key (whether that's a shift in the case of the Caesar cipher or a keyword in the case of the Vigenere). If the key becomes known to outside parties, you'll need to agree on a new one, and doing that securely requires a secure channel in its own right. This can get annoying.

What if we could avoid this by exchanging our keys in the open, on the fly, without having to get together with the other party? Obviously this seems problematic, because a listener could simply grab the key and use it, but what if we could somehow obscure the key from any outside listeners in such a way that only the sender and receiver can figure it out? You could then secretly determine a key and use it to encrypt your data with something like a Vigenere cipher! Seems too good to be true, but it can be done. How could we publicly send a key and ensure it can never be figured out? Well, the truth is that we can't at a 100% for-sure level, but we can set things up so that the information we make public poses a math problem that is relatively hard to solve, while a piece of information we keep private makes that problem much easier to do. As a result, we can rely on the problem being too hard to solve as a way of preventing an undesired party from gaining access. At the root of this is a math problem that is asymmetrically hard to solve; applied to encryption, this approach is called asymmetric key encryption.

1) The Type of Math Problem

We need a math problem that is hard to solve. We're going to skip a lot of the proof and theory behind this since 6.S08 is just an introduction (but there are some fantastic courses in Course 6 which dig into this). The key to this is modular arithmetic.

1.1) Modular Arithmetic

Modular arithmetic is basically like regular arithmetic, except it also uses the modulo operator (%) that we know and love. If you append a %n, where n is some number, to the end of any arithmetic operation, you limit (or wrap) the set of values that can be generated to the range 0 through n-1. Doing this results in some really interesting and useful properties, such as making some math operations very hard to undo that were pretty easy before.

When doing modular arithmetic, you usually note at the end of your statement that it lives in a modular space. For example:

4 + 10 = 4 \text{ (mod 5)}

is saying " (4+10)%5==4 ",

3\cdot 7=10 \text{ (mod 11)}

is saying "(3*7)%11==10", and

4^{6} = 1 \text{ (mod 9)}

is saying "(4**6)%9==1".

When using it as an operator, we'll write it inline as just \text{ mod p}, which is equivalent to %p in code. For example, the expression \left(5\cdot x\right) \text{ mod 8} would be (5*x)%8.
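
You can sanity-check all of these by typing the same expressions straight into Python; the % operator is doing exactly the work described above:

print((4 + 10) % 5)  # 4
print((3 * 7) % 11)  # 10
print((4 ** 6) % 9)  # 1
print((5 * 3) % 8)   # 7, i.e. (5*x) mod 8 evaluated with x = 3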

Answer the following modular arithmetic question:

\left(6-3\right) \text{ mod 2}

1.2) Emerging Property

You can see an interesting feature come about by bringing in this modular piece at the end. Before, something like:

4+10 = x

is just as easily solved as a variation on it:

4+y = 14

but once we're doing modular arithmetic, things get trickier:

\left(4+10\right) \text{ mod 5} = x

is pretty easy to solve for x, but if we were to ask:

\left(4+y\right) \text{ mod 5} = 4

you might experience a bit more trouble figuring out y. This same sort of discrepancy gets much more pronounced with things like exponentiation and logarithms in modular arithmetic: solving in one direction can be easy, while solving in the other direction can be very hard. There are whole courses that investigate the what and why of this, and the fact that this "hardness" is actually a very definable thing; for now we'll just say that this asymmetry in hardness (much easier to solve in one direction than the other) opens the door to use in cryptography.
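
To make that concrete, here's a small Python sketch. The forward direction is a single expression, while "undoing" things, even at this toy scale, already means searching over candidates (the specific numbers in the exponentiation example are just illustrative):

# Forward direction: one expression.
x = (4 + 10) % 5  # 4

# Reverse direction: even for tiny numbers we end up searching.
# Find a y with (4 + y) % 5 == 4 (note that many values of y would work).
y = next(y for y in range(5) if (4 + y) % 5 == 4)  # 0

# The same idea with exponentiation (a toy "reverse the exponent" search):
# find e such that (3 ** e) % 7 == 5.
e = next(e for e in range(1, 7) if (3 ** e) % 7 == 5)  # 5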

1.3) Modulo Multiplication

Imagine we had the following expression:

\left(x \cdot y\right) \text{ mod p }

Using the assertions that:

x = c_x p + r_x

where c_x is \text{floor}(\frac{x}{p}) and r_x = x \text{ mod p}, and

y = c_y p + r_y

where c_y is \text{floor}(\frac{y}{p}) and r_y = y \text{ mod p}.

We can then say:

\left(x \cdot y\right) \text{ mod p} = \left(\left(c_x p + r_x\right)\left(c_y p + r_y \right)\right)\text{ mod p}

Multiplying those two terms out we get:

\left(c_xc_yp^2 + r_yc_xp + r_xc_yp + r_xr_y\right) \text{ mod p}

Because the first three terms are all multiples of p, they evaluate to zero mod p, which means the expression simplifies to:

\left(r_xr_y\right) \text{ mod p}

And if we remember earlier definitions, we can write:

\left(x \cdot y\right) \text{ mod p} = \left(x \text{ mod p}\right)\left(y \text{ mod p}\right) \text{ mod p}

Let's remember this equality for the next section!!
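
This equality is easy to spot-check numerically with any values you like; a quick Python check (the particular numbers are arbitrary):

x, y, p = 123, 456, 7
assert (x * y) % p == ((x % p) * (y % p)) % p  # holds for any integers x, y and p > 0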

1.4) Modular Exponents

The math above can lead us further into asserting the following, which is critical for what we'll be doing below:

\left(m^a\right) \text{ mod p} = \left(m \text{ mod p}\right)^a \text{ mod p}

The first step in convincing ourselves of this is noting that m^a is just m\cdot m \cdot m ... with a instances of m multiplied together. Therefore:

\left(m^a\right) \text{ mod p} = \left(m \cdot m \cdot ...\right) \text{ mod p}

which then means (applying the multiplication equality from the previous section to each factor):

= \left(\left(m \text{ mod p}\right)\left(m \text{ mod p}\right)\left(m \text{ mod p}\right)...\right)\text{ mod p}

or, more compactly:

\left(m \text{ mod p}\right)^a \text{ mod p}

and if we remember where we started, that means:

\left(m^a\right) \text{ mod p} = \left(m \text{ mod p}\right)^a \text{ mod p}
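
Again, a quick numeric spot-check, reusing the 4^6 mod 9 example from Section 1.1. Note that Python's built-in pow accepts an optional third argument that computes exactly this kind of modular exponentiation efficiently:

m, a, p = 4, 6, 9
assert (m ** a) % p == ((m % p) ** a) % p
assert pow(m, a, p) == (m ** a) % p  # three-argument pow computes (m**a) % p efficiently
print(pow(m, a, p))  # 1, matching the 4**6 mod 9 example above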

1.5) Where are we going with this?

Now imagine you had a positive integer m raised to the ab power where a and b are also positive integers. We could then write:

m^{ab} = m^{ba}

Going further we could then write:

\left(m^{a}\right)^b = \left(m^b\right)^a

The same general equality would hold if we just modded both sides like so:

\left(m^{a}\right)^b \text{ mod p} = \left(m^b\right)^a \text{ mod p}

and using the equality from up above, we can rewrite this as:

\left(m^{a} \text{ mod p} \right)^b \text{ mod p} = \left(m^b \text{ mod p} \right)^a \text{ mod p}

This forms the core of what is known as the Diffie-Hellman key exchange. Diffie-Hellman is one example of a way to exchange information publicly between two parties that leads to a privately held key, which can then be used for symmetric encryption of data. Like many asymmetric protocols, it has modular arithmetic at its core and is based on the math above. Ok, so how do we apply it in real life?

Let's say we have two users: Alice and Bob. They are chatting over an insecure line. Their goal is to decide upon a numerical key which they can then use to secure their communication using a symmetric encryption method such as Vigenere (maybe the number they end up with can be used to generate a keyword or something). Of course, for obvious reasons, neither user wants anyone else to know the key, so they must make sure that only they can figure it out.

They go about it as such:

First, a p and an m are chosen and announced publicly. p should be a prime, and m should be a primitive root modulo p. Then the two parties that want to communicate each choose (and keep to themselves) a private number: Alice is responsible for picking a and Bob is responsible for picking b. Alice and Bob do not need to (and really should not) tell each other their private numbers.

Then Alice generates the following number:

t_1 = m^a \text{ mod p}
and transmits it to Bob.

Bob generates the following number:

t_2 = m^b \text{ mod p}
and transmits it to Alice.

Alice then takes the value transmitted to them and does the following:

k_1 = \left(t_2\right)^a \text{ mod p}

which we know is actually the same as:

k_1 = \left(m^b \text{ mod p} \right)^a \text{ mod p}

which we know from our proof earlier is actually:

k_1 = \left(m^b \text{ mod p} \right)^a \text{ mod p} = m^{ab} \text{ mod p}

Meanwhile, Bob then takes the value transmitted to them and does the following:

k_2 = \left(t_1\right)^b \text{ mod p}

which we know is actually the same as:

k_2 = \left(m^a \text{ mod p} \right)^b \text{ mod p}

which we know from our proof earlier is actually:

k_2 = \left(m^a \text{ mod p} \right)^b \text{ mod p} = m^{ba} \text{ mod p}

Now wait a second... if we look at the expressions we've got for Alice's and Bob's results (k_1 and k_2, respectively), we'll see that they are exactly the same! This is interesting.

k_1 = k_2 = k
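
The whole exchange is only a few lines of Python. Here is a sketch using the public values p = 71 and m = 31 from the example below, with randomly chosen private numbers (the variable names are just for illustration):

import random

# Public values, announced in the open (small, illustrative numbers).
p, m = 71, 31

# Private values: Alice picks a, Bob picks b; neither is ever transmitted.
a = random.randint(2, p - 2)
b = random.randint(2, p - 2)

# Each side transmits only m raised to its private number, mod p.
t1 = pow(m, a, p)  # Alice -> Bob
t2 = pow(m, b, p)  # Bob -> Alice

# Each side raises what it received to its own private number, mod p.
k1 = pow(t2, a, p)  # Alice's result
k2 = pow(t1, b, p)  # Bob's result

assert k1 == k2  # both equal m**(a*b) mod p: the shared key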

Now let's take a step back and think about what an outside user Eve on that public channel would have seen:

  • They would have seen p
  • They would have seen m
  • They would have seen t_1 and could know that that equals m^a \text{ mod p}
  • They would have seen t_2 and could know that that equals m^b \text{ mod p}

But they would not have known a or b since those were kept secret by Alice and Bob. The question is, could Eve have figured out what a and b are through just the four values above? Let's think about it with an example.

  • The two users pick p=71 and m=31 (let's say from a table of primitive roots modulo 71)
  • Alice picks a = 11 and generates t_1 = 31^{11} \text{ mod 71} =52
  • Bob picks b = 17 and generates t_2 = 31^{17} \text{ mod 71} = 67
  • Both users transmit their numbers.
  • Alice takes t_2 and generates k =67^{11} \text{ mod 71} = 21
  • Bob takes t_1 and generates k = 52^{17} \text{ mod 71} = 21

Any person listening in would know the following:

31^a \text{ mod 71} = 52

31^ b \text{ mod 71} = 67

Of course, they wouldn't know a or b. But could they solve for them? Well, yes, they could, but it takes more time than the math Alice and Bob have to do. You might say "so what" to this: it wouldn't be too hard or take too long to figure out a or b given the two equations above. At the very worst you could brute-force it, trial-and-erroring your way up from 0 until you found a working value, and for small values like the ones we're using (71 and 31) you're totally right; assuming good code, you could figure this out in far less than a millisecond. In real life, though, very, very large numbers are used. In RSA, which is related to Diffie-Hellman, the minimum nowadays is 1024 bits (remember, 8 bits gets you up to decimal 255, so 1024 bits is pretty big). This is referred to as the key size, and as the numbers get really big, the time it takes to do the "user" calculation (k = 67^{11} \text{ mod 71}) grows much more slowly than the time it takes to solve the outsider equation (31^b \text{ mod 71} = 67). For very, very large values, the outsider calculation can take so long that it is simply not practical, unless you have really, really powerful computational resources of the sort only massive companies or governments might have access to.
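
For the tiny numbers in our example, Eve's brute-force attack really is trivial. A sketch of what she would do (the variable names are just illustrative):

# Everything Eve saw on the public channel:
p, m, t1, t2 = 71, 31, 52, 67

# Brute-force the exponent: find an a with m**a mod p == t1.
a_guess = next(a for a in range(1, p) if pow(m, a, p) == t1)  # 11

# With a recovered, Eve computes the shared key exactly as Alice would.
k = pow(t2, a_guess, p)
print(a_guess, k)  # 11 21

With keys of realistic size, that loop would have to examine astronomically many candidates, which is exactly the "outsider" cost described above.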

So basically, while the key can theoretically be figured out by Eve, it is practically impossible, so the key is computationally secure.

This is a moving target, however. In the early 2000s a 1024-bit RSA key was considered safe, but nowadays the recommendation is usually 2048 bits or higher... as the cost of computational resources drops, brute-force attacks on these keys get easier, so to keep the computation difficult enough, larger keys are needed.

2) The Code

Ok, we're going to write two pieces of code that could potentially be used to enable asymmetric key exchange between our server-side Python and our microcontroller-side C++. In both cases the function is called dhke (for Diffie-Hellman Key Exchange). The implementations in C++ and Python are slightly different based on the differences in the two languages, but the approach should be the same in general. The functions will take in values that would be known to one side of a Diffie-Hellman key-exchange pair, as well as a message to be either encrypted or decrypted.

Within the dhke function, you will call keyword_generate, a function that we have provided which generates a Vigenere keyword based on the key produced by the Diffie-Hellman math. There are two versions of keyword_generate, one for Python and one for C++.

For Python it is:

keyword_generate(key) #returns the keyword given the key 

That keyword will then be used to either encrypt or decrypt a message using a Vigenere cipher of the type used in Lab09A and Lab09B, and the resulting message will be returned.

Thus, each dhke function takes:

  • values for t, p, and m
  • either a or b
  • an input plaintext or ciphertext message
  • a boolean indicating whether to encrypt or decrypt that incoming message

and returns the encrypted or decrypted message (via return statement in Python, via reference in C++).

You can assume that a functioning Vigenere Cipher function vigenere_cipher for each language is available for you to use (as from exercises in Week 7).

2.1) Python Implementation

def dhke(t, p, m, a, message_in, encrypt):
    pass  # Your code here
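
One possible shape of the Python function, as a rough sketch: it assumes the provided keyword_generate(key) helper and a Week 7-style vigenere_cipher(message, keyword, encrypt) function (that exact vigenere_cipher signature is an assumption; adjust it to match yours).

def dhke(t, p, m, a, message_in, encrypt):
    key = pow(t, a, p)               # shared Diffie-Hellman key: (t**a) mod p
    keyword = keyword_generate(key)  # provided helper: turn the key into a Vigenere keyword
    # Assumed Week 7-style signature; m is public but not needed once t has been received.
    return vigenere_cipher(message_in, keyword, encrypt)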

In the next exercise you'll implement this again in C++.



