Multiparty Computation Ivan Damgård BRICS, Århus University
The MPC problem
n players P1, P2, …, Pn. Player Pi holds input xi.
Goal: for some given function f with n inputs and n outputs, compute f(x1, …, xn) = (y1, …, yn) securely, i.e., we want a protocol such that:
• Pi learns the correct value of yi
• No information on inputs is leaked to Pi, other than what follows from xi and yi.
We want this to hold even when (some of) the players behave adversarially.
Examples: match-making, electronic voting.
Generality
MPC is extremely general: a solution implies in principle a solution to any cryptographic protocol problem.
But note: not all problems can be modelled by computing a single function securely. Example: secure electronic payments – this is more like secure computation of several functions, keeping track of some state information in between.
Not a problem, however: the definition we will see is fully general, and the protocols we describe are actually fully general as well, although they are phrased as solutions for computing a single function, for simplicity.
Modelling Adversarial Behavior
Assume one central adversary Adv. Adv may corrupt some of the players and use this to learn information he should not know, or mess up the results. When Pi is corrupted, Adv learns the complete history of Pi.
An adversary may be
• Passive or Active: just monitor corrupted players, or take full control.
• Static or Adaptive: all corruptions take place before the protocol starts, or happen dynamically during the protocol (but once you're corrupt, you stay bad).
• Unbounded or probabilistic polynomial time.
Goal of MPC, a bit more precisely: we want the protocol to work as if we had a trusted party T, who gets inputs from the players, computes the results and returns them to the players. Hence: Adv may decide the inputs for corrupted players, but honest players get correct results, and the protocol tells Adv only the inputs/outputs of corrupted players.
Bounds on corruption
If Adv can corrupt an arbitrary subset of players, in most cases the problem cannot be solved – for instance, what does security mean if everyone is corrupted? So we need to define some bound on which subsets can be corrupt.
Adversary Structure Γ: a family of subsets of P = {P1, …, Pn}.
Adv is a Γ-adversary: the set of corrupted players is in Γ at all times.
To make sense, Γ must be monotone: B ∈ Γ and A ⊆ B implies A ∈ Γ – if Adv can corrupt set B, he can choose to corrupt any smaller set.
Threshold-t structure: contains all subsets of size at most t.
Γ is Q3: for any A1, A2, A3 ∈ Γ, A1 ∪ A2 ∪ A3 is a proper subset of P.
Γ is Q2: for any A1, A2 ∈ Γ, A1 ∪ A2 is a proper subset of P.
The threshold-t structure is Q3 iff t < n/3, and Q2 iff t < n/2.
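The Q2/Q3 conditions above can be checked mechanically: since Γ is monotone, it suffices to test whether any k of its maximal sets cover the full player set. A minimal Python sketch under that observation (the function names `is_Qk` and `threshold_structure` and the parameters n = 7 are my own illustrative choices):

```python
from itertools import combinations, combinations_with_replacement

def is_Qk(n, maximal_sets, k):
    """Γ (given by its maximal sets) is Qk iff no k sets in Γ together
    cover the full player set {1, ..., n}. Checking maximal sets suffices,
    because Γ is monotone."""
    players = frozenset(range(1, n + 1))
    for combo in combinations_with_replacement(maximal_sets, k):
        if frozenset().union(*combo) == players:
            return False
    return True

def threshold_structure(n, t):
    """Maximal sets of the threshold-t structure: all subsets of size exactly t."""
    return [frozenset(c) for c in combinations(range(1, n + 1), t)]
```

For n = 7 this reproduces the threshold bounds from the slide: t = 2 < 7/3 gives a Q3 structure, t = 3 gives Q2 but not Q3, and t = 4 ≥ 7/2 is not even Q2.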
Why General Access Structures? -And not just a bound on the number of players that can be corrupt? Threshold adversaries (where we just bound the number of corruptions) make sense in a network where all nodes are equally hard to break into. This is often not the case in practice. With general access structures, we can express things such as: the adversary can break into a small number of the more secure nodes and a larger number of less secure ones.
Modelling Communication
Asynchronous network: the adversary sees all messages and can delay them indefinitely. Some forms of the MPC problem are harder or impossible on such a network; in any case there are additional complications. So in these lectures:
Synchronous network: communication proceeds in rounds – in each round each player may send a message to each other player, and all messages are received in the same round.
Two main variants:
• Information Theoretic scenario: Adv does not see communication between honest (uncorrupted) players ⇒ can get security for unbounded Adv.
• Cryptographic scenario: Adv sees all messages sent, but cannot change messages exchanged between honest players ⇒ can only get security for (poly-time) bounded Adv.
Summary
[Figure: players with inputs/outputs (x1, y1), …, (x4, y4); Adv corrupts some of them.]
Corruption can be passive (just observe computation and messages) or active (take full control).
Synchronous communication. Crypto scenario: Adv sees all messages; I.T. scenario: no info on honest-to-honest messages.
Adv can choose which players to corrupt statically or adaptively – but the set of corrupted players must be "not too large", i.e., it must be in the given adversary structure.
Known Results, Information Theoretic Scenario
• Passive, adaptive, unbounded Γ-adversary: any function can be securely computed with perfect security iff Γ is Q2; in the threshold-t case, iff t < n/2.
Meaning of "only if": there exists a function that cannot be computed securely if the condition on Γ (t) is not satisfied.
• Active, adaptive, unbounded Γ-adversary: any function can be securely computed with perfect security iff Γ is Q3; in the threshold-t case, iff t < n/3.
If we assume that a broadcast channel is given for free, and we accept a non-zero error probability, more is possible:
• I.T. scenario with broadcast and active, adaptive, unbounded Γ-adversary: any function can be securely computed with small error probability iff Γ is Q2; in the threshold-t case, iff t < n/2.
Results of [CCD88, BGW88, RB89, HM99, CDDHR00]
Known Results, Cryptographic Scenario
• Passive, adaptive, polynomial time adversary: assuming one-way trapdoor permutations exist, any function can be securely computed with computational security if the number of corrupted players is < n.
• Active, adaptive, polynomial time Γ-adversary: assuming one-way trapdoor permutations exist, any function can be securely computed with computational security iff Γ is Q2; in the threshold-t case, iff t < n/2.
Results of [Y86, GMW87, CFGN]
Defining Security of MPC (or Protocols in General)
Classical approach: write a list of desired properties. Not good – hard to make sure the list is complete. Example: secure voting.
New approach: define an ideal version of the functionality we want, and say that a protocol is good if it is "equivalent" to the ideal thing.
Advantage: from equivalence, we know that the protocol has all the good properties of the ideal version – also those we did not think of yet!
We will give a variant of Canetti's definition of Universally Composable (UC) security. (The variant is due to Jesper Nielsen; the main difference is that it explicitly considers synchronous networks – the original UC definition was for the asynchronous case.)
General Adversarial Activity
Important note before the definition: in general the adversary may be able to do more than corrupt players and control their behavior:
• Most protocols run as part of a bigger system – think of key exchange protocols or electronic payment schemes.
• A real-life adversary will attack the whole system, not just the protocol.
• Hence the adversary may have some influence on the inputs that the honest players contribute; maybe he can even control them.
• Also, the adversary might get some information on the results that the honest players obtain from the protocol – perhaps even full information.
In our definition, the adversary is therefore allowed to choose inputs for honest players and see their results.
General Adversarial Activity, Cont'd
In our definition, the adversary is allowed to choose inputs for honest players and see their results.
Question: this is very strange – didn't we say that the adversary should not get information about honest players' results?
Answer: what we said was that the protocol should not release such information. The protocol cannot help it if the adversary learns something from other sources, such as the surrounding system. So we are going to demand that the protocol works as it should, regardless of what the adversary knows or can control by means that are external to the protocol.
Concepts in the Definition
All entities (players, adversary, trusted parties) are, formally speaking, interactive probabilistic Turing machines. Everyone gets as input a security parameter k; Adv also gets an auxiliary input z.
The Ideal Functionality (or Trusted Party) F:
• Models the functionality we would like the protocol to implement.
• Cannot be corrupted.
• Can communicate with all players and with the adversary.
• Receives inputs from all players, does the computation according to its program and returns results to the players.
• Has memory and can be invoked several times – can therefore be used to model any reasonable primitive we could imagine.
Of course, no such F exists in real life; it is only used as a specification of what we would like to have. The definition basically says that a protocol is good (w.r.t. F) if it creates in real life a situation equivalent to having F available.
Plan for Definition
Define two processes:
• The Real Process
• The Ideal Process
In the Real Process, we have the adversary Adv and the players executing the protocol π (no trusted party). In the Ideal Process, we still have Adv, but π is replaced by the trusted party F (plus a simulator S, to be explained later).
We will say that π securely realizes F if Adv cannot tell whether he is in the real or in the ideal process.
The Real Process
[Figure: Adv attacks the players running protocol π; corrupt players are under his control, and he exchanges inputs/results with honest players.]
If active, Adv chooses the messages of corrupt players; he always sees their internal data and received messages.
When the protocol is over, Adv outputs a bit. Given Adv, π, k and z, this is a random variable called REAL_{Adv,π}(k, z).
The Ideal Process
[Figure: no protocol, but an Ideal Functionality F; players exchange inputs/results with F; Adv corrupts some players.]
Same Adv activities as before. Since honest players only forward information between Adv and F, we may as well abstract them away.
But Adv will expect to see the internal data and protocol messages received by corrupted players. There is no protocol here – so what to do?
The Ideal Process, Cont'd
To make things look the same to Adv in the real and the ideal process, we introduce the Simulator S, who will "magically" produce Adv's view of attacking the protocol, based on the exchange with F.
[Figure: S sits between Adv and F; F exchanges inputs/results for honest players directly, and exchanges inputs/results for corrupt players via S.]
At the end of the protocol, Adv outputs a bit, a random variable called IDEAL_{Adv,F,S}(k, z).
The Definition
We say that π Г-securely realizes F perfectly if there exists a simulator S such that, for all adversaries Adv corrupting only sets in Г,
Prob(IDEAL_{Adv,F,S}(k, z) = 0) = Prob(REAL_{Adv,π}(k, z) = 0)
for all k and z.
Intuition: we think of Adv's output bit as its guess at whether it is in the real or the ideal world. That the two probabilities are equal means: it has no idea.
Note we say "for all Adv", so there is no bound here on Adv's computing power.
We say π realizes F statistically if |Prob(IDEAL_{Adv,F,S}(k, z) = 0) − Prob(REAL_{Adv,π}(k, z) = 0)| is negligible in k (for arbitrary choice of z). Realizing F computationally: the same, but only for all polynomial time bounded Adv.
Note: misprint in the notes – it says "for all Adv, there exists S such that…"; it should be the other way around, as here.
Intuition on Definition
The definition ensures that the real-life process inherits natural security properties from the ideal process. For instance:
The protocol forces Adv to be aware of which inputs corrupt players contribute: S must figure out which input values to send to F in the ideal process, and these inputs must follow from the protocol messages sent by corrupt players (and produced by Adv).
The protocol ensures that honest players compute correct results: this is always true in the ideal process by definition of F, and if the protocol produced inconsistent results, Adv could distinguish easily.
The protocol does not release information it shouldn't: S is able to simulate convincingly Adv's view of attacking the protocol, based only on what F sends to corrupt players.
Secret Sharing
A Dealer holds a secret value s ∈ Zp, where p > n is a prime. The Dealer chooses a random polynomial f() over Zp of degree at most t, such that f(0) = s:
f(x) = s + a1 x + a2 x² + … + at x^t
The Dealer sends si = f(i) privately to Pi.
Properties:
• Any subset of at most t players has no information on s.
• Any subset of at least t+1 players can easily compute s – this can be done by taking a linear combination of the shares they know.
A consequence – the reconstruction vector: there exists a reconstruction vector (r1, …, rn) such that for any polynomial h() of degree less than n:
h(0) = r1 h(1) + … + rn h(n)
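As a concrete illustration, here is a minimal Python sketch of Shamir sharing and Lagrange reconstruction over Zp (the prime p = 2089 and all function names are illustrative choices, not from the notes; the reconstruction vector is just the vector of Lagrange coefficients for the points 1, …, n):

```python
import random

p = 2089  # an illustrative prime > n; any prime larger than the number of players works

def eval_poly(coeffs, x):
    """Evaluate the polynomial with coefficient list [s, a1, ..., at] at x, mod p (Horner)."""
    r = 0
    for c in reversed(coeffs):
        r = (r * x + c) % p
    return r

def share(s, t, n):
    """Deal shares {i: f(i)} of s with a random polynomial f of degree <= t, f(0) = s."""
    f = [s] + [random.randrange(p) for _ in range(t)]
    return {i: eval_poly(f, i) for i in range(1, n + 1)}

def recombination_vector(xs):
    """Lagrange coefficients r_x with sum(r_x * h(x)) = h(0) for any h of degree < len(xs)."""
    vec = {}
    for xi in xs:
        num = den = 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        vec[xi] = num * pow(den, p - 2, p) % p  # modular inverse via Fermat's little theorem
    return vec

def reconstruct(shares):
    """Recover s = f(0) from any >= t+1 shares given as {i: f(i)}."""
    r = recombination_vector(list(shares))
    return sum(r[i] * si for i, si in shares.items()) % p
```

With t = 2 and n = 7, any 3 shares reconstruct the secret, and the vector `recombination_vector(range(1, 8))` is exactly the slide's reconstruction vector (r1, …, rn), valid for every polynomial of degree less than n.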
A Protocol for the Passive Case, I.T. Scenario
- Threshold adversary, may corrupt up to t players, t < n/2.
- We can assume the desired computation is specified as an algebraic circuit over a finite field K = GF(p).
3 phases in the protocol:
1. Share inputs among players.
2. Do the computation, resulting in sharings of the outputs.
3. Open the outputs.
Sharing Phase: each Pi shares each of his input values using a random polynomial of degree at most t, and sends a share of each input to each player.
Notation: a →f() (a1, a2, …, an) means: value a has been shared using polynomial f(), resulting in shares a1, …, an, where player Pi knows ai. f() is a random polynomial of degree at most t; the only constraint is that f(0) = a.
Computation Phase
Addition Gates
Input: a →fa() (a1, …, an) and b →fb() (b1, …, bn)
Desired output: c = a+b →fc() (c1, …, cn)
Each player sets ci := ai + bi. Then we have what we want: a+b →fc() (c1, …, cn), with fc() = fa() + fb(). This works, since adding two random polynomials of degree ≤ t produces a random polynomial of degree ≤ t.
Multiplication Gates
Input: a →fa() (a1, …, an) and b →fb() (b1, …, bn)
Desired output: c = ab →fc() (c1, …, cn)
Each player sets di := ai bi. If we set h() = fa() fb(), then di = fa(i) fb(i) = h(i). Also h(0) = ab = c.
Unfortunately, h() may have degree up to 2t, and is not even a random polynomial of degree at most 2t. What to do?
Multiplication Gates, Cont'd
We have the public reconstruction vector (r1, …, rn) – we know that c = h(0) = r1 h(1) + … + rn h(n) = r1 d1 + … + rn dn, since deg(h) ≤ 2t < n.
Each player Pi creates di →hi() (ci1, ci2, …, cin), i.e., reshares his local product di with a fresh random polynomial hi() of degree ≤ t.
Each Pj now holds the shares c1j, …, cnj (where cij is known by Pj) and computes his share of c as cj := r1 c1j + r2 c2j + … + rn cnj.
c is now shared using the polynomial fc() = Σi ri hi(), which has degree ≤ t: c →fc() (c1, …, cn).
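The degree-reduction step above can be sketched in a few lines of Python (a centralized simulation of what the n players do locally; the parameters p = 2089, n = 7, t = 3 and all names are illustrative):

```python
import random

p, n, t = 2089, 7, 3  # t < n/2, so 2t < n

def ev(f, x):
    """Evaluate a polynomial (coefficient list) at x, mod p."""
    r = 0
    for c in reversed(f):
        r = (r * x + c) % p
    return r

def share(s):
    """Shares f(1), ..., f(n) of s under a random polynomial of degree <= t, f(0) = s."""
    f = [s] + [random.randrange(p) for _ in range(t)]
    return [ev(f, i) for i in range(1, n + 1)]

def recomb(points):
    """Lagrange coefficients recovering h(0) from h at the given points (deg h < len(points))."""
    out = []
    for xi in points:
        num = den = 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        out.append(num * pow(den, p - 2, p) % p)
    return out

def multiply(a_sh, b_sh):
    """One multiplication gate: each Pi reshares di = ai*bi with a fresh degree-<=t
    polynomial, then each Pj combines the received shares with the reconstruction vector."""
    r = recomb(list(range(1, n + 1)))               # valid since deg(h) <= 2t < n
    resharings = [share(ai * bi % p) for ai, bi in zip(a_sh, b_sh)]
    return [sum(r[i] * resharings[i][j] for i in range(n)) % p for j in range(n)]
```

The resulting shares lie on fc() = Σ ri hi(), a polynomial of degree ≤ t with fc(0) = ab, so any t+1 of them reconstruct the product; addition gates are simply `[(x + y) % p for x, y in zip(a_sh, b_sh)]`.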
Output Opening Phase
Having made our way through the circuit, we have for each output value y: y →fy() (y1, …, yn).
If y is to be received by player Pi, each Pj sends yj to Pi, and Pi reconstructs in the normal way.
Security, intuitively:
Outputs are trivially correct, since all players follow the protocol.
For every input from an honest player, intermediate result, and output of honest players, Adv sees at most t shares. These are always t random field elements, so they reveal no information.
Security Proof, More Formally
First define the functionality FMPC: basically, it receives inputs x1, …, xn from all players, computes (y1, …, yn) = f(x1, …, xn), and sends yi to Pi.
We want to prove that the protocol realizes FMPC with perfect security (assuming < n/2 players are corrupted).
By the definition, we must construct an efficient simulator S, interacting on one side with Adv and on the other side with FMPC. Assume first that Adv is static.
Interface Adv vs. S: in each round, S must specify the messages that corrupt players receive from honest players, and Adv specifies the messages that corrupt players send (since Adv is passive, we can assume these messages are computed according to the protocol).
Interface S vs. FMPC: S sends inputs to FMPC on behalf of corrupted players; FMPC sends the results for corrupted players to S.
Sketch of Algorithm for S
Input sharing phase: when Adv generates messages containing shares of corrupt players' inputs, S can reconstruct the input value and send it to FMPC. To simulate honest players sharing their inputs, S chooses a random value for each corrupt player and shows these to Adv.
Computation phase: only multiplication gates generate communication, namely when local products are reshared. S records what the corrupt players send and generates random values to simulate the honest players' sharings. Note: S can keep track of the shares that corrupt players hold of every intermediate result.
Output phase: FMPC sends to S each result y meant for a corrupt player. S knows the (simulated) shares of y held by the corrupt players. S interpolates a polynomial that fits the result and the at most t shares already known by Adv, and uses this to simulate the shares of the honest players.
This simulation is perfect.
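The output-phase trick – extending the at most t shares Adv has already seen to a full sharing that opens to the true result y – is just Lagrange interpolation through the point (0, y) plus the known shares. A sketch under illustrative parameters (p = 2089, t = 2, corrupt players {2, 5}; the function names are my own):

```python
p = 2089  # illustrative prime field, as elsewhere in these sketches

def lagrange_eval(points, x):
    """Evaluate at x the unique polynomial of degree < len(points) through the (xi, yi)."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def simulate_output_shares(y, corrupt_shares, n):
    """Given the true output y (received from F_MPC) and the <= t random shares already
    shown to Adv for the corrupt players, extend them to a full consistent sharing of y."""
    points = [(0, y)] + list(corrupt_shares.items())  # <= t+1 points => unique poly, deg <= t
    return {j: lagrange_eval(points, j) for j in range(1, n + 1)}
```

Because a degree-≤t polynomial is determined by t+1 points, the simulated sharing agrees with whatever Adv has seen and still opens to y, which is why the simulation is perfect.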
Proof for Adaptive Adv
New elements in the interaction of S:
S vs. Adv: Adv can say at any point "corrupt Pi". S must then generate the complete history of Pi participating in the protocol, i.e., internal data, random coins, messages received – consistent with what Adv has seen so far.
S vs. FMPC: S can say "corrupt Pi". Then S gets the inputs and results for Pi (if they have been specified at this point).
We must add to the algorithm of S a "reconstruct history" procedure. It works essentially as follows: the internal data of Pi is essentially a list of polynomials used for secret sharing values. For each value x, some shares are already known by Adv; S chooses a random polynomial consistent with x and the known shares. There are extra technicalities for the output phase – see the notes for details.
Protocol for Active Adversaries
General idea: use the protocol for the passive case, but make players prove that they send correct information. The main tool for this: a commitment scheme.
Intuition: the committer Pi puts a secret value s ∈ K "in a locked box" and puts it on the table. Later, Pi can choose to open the box by releasing the key.
• Hiding – no one else can learn s from the commitment.
• Binding – having given away the box, Pi cannot change what is inside.
We can model a commitment scheme by an ideal functionality Fcom that offers the following commands:
• Commit: player Pi sends a value s to the functionality; Fcom records it in its internal memory.
• Open: if Pi sends this, Fcom recovers s from memory and sends it to all players.
This trivially satisfies hiding and binding, since Fcom cannot be corrupted.
Using Functionalities in Protocols
The plan is to use a commitment scheme, i.e., (an extension of) Fcom, as a "subroutine" to build a realization of FMPC secure against active cheating. So we need:
• A model specifying what it means to use an ideal functionality in a protocol. As a result, we can formally specify what it means that a protocol π "implements FMPC when given access to Fcom".
• A theorem saying that if a protocol ρ realizes, for instance, Fcom securely, then it is OK to replace Fcom by ρ.
• Doing this in π would result in the desired real-life protocol for FMPC.
The G-Hybrid Model: "Realizing F Given G"
[Figure: as in the real process, but the players running protocol π additionally have access to the ideal functionality G.]
When the protocol is over, Adv outputs a bit. Given Adv, π, k and z, this is a random variable called HYBRID_{G,Adv,π}(k, z).
The Ideal Process
[Figure: as before – S sits between Adv and F.]
Same as before, except: Adv now expects to act in a world where G is present, so S must also emulate Adv's and the corrupt players' interaction with G.
At the end of the protocol, Adv outputs a bit, a random variable called IDEAL_{Adv,F,S}(k, z).
Definition
We say that π Г-securely realizes F perfectly in the G-hybrid model if there exists a simulator S such that, for all adversaries Adv corrupting only sets in Г,
Prob(IDEAL_{Adv,F,S}(k, z) = 0) = Prob(HYBRID_{G,Adv,π}(k, z) = 0)
for all k and z.
If π is a protocol in the G-hybrid model, and ρ is a protocol realizing G, we let π^ρ be the protocol obtained by replacing all calls to G by calls to ρ.
Composition Theorem: if π Г-securely realizes F in the G-hybrid model and ρ Г-securely realizes G (in the real model), then π^ρ Г-securely realizes F in the real model.
Intuition: to implement F, it is enough to first show how to do it assuming we are (magically) given access to G, and then show how to implement G; a (real) implementation of F then automatically follows. The result holds also when several instances of the subprotocol are used concurrently.
Definition of the Fcom Functionality
Notation for commitments: [s]i means Pi has successfully committed to s, and Fcom has stored s in its internal memory.
Commit command. Goal: create [s]i
• Executed if all honest players send a "commit i" command. Pi sends a value s. If Pi is corrupt, he may send "refuse" instead.
• If Pi refused, send "fail" to all players; otherwise store s in a new variable and send "success" to everyone.
Open command. Goal: open [s]i
• Executed if all honest players send an "open" command referring to [s]i. If Pi is corrupt, he may send "refuse" instead.
• If Pi refused, send "fail" to all players; otherwise send s to everyone.
• Can also be called as "private open, j", where s is sent only to Pj.
We need Fcom to offer more functionality: it needs to implement homomorphic commitments, i.e., the following two commands.
Commit.Add command. Goal: from [a]i and [b]i, create a new commitment [a]i + [b]i = [a+b]i
• Executed if all honest players send a "Commit.Add" command referring to [a]i and [b]i.
• Fcom will compute a+b and store it in a new variable, as if committed to by Pi (in particular, Pi can open this new commitment as if he had committed to a+b in the normal way).
Constant.Mult command. Goal: from [a]i and public u, create a new commitment u·[a]i = [ua]i
• Executed if all honest players send a "Constant.Mult u" command referring to [a]i.
• Fcom will compute ua and store it in a new variable, as if committed to by Pi.
Advanced Commands
From the basic Commit, Open, Commit.Add and Constant.Mult commands, anything else we need can be built, but for simplicity we define some extra commands.
CTP command (CTP: Commitment Transfer Protocol). Goal: from [s]i, produce [s]j for i ≠ j
• Executed if all honest players send a CTP command referring to [s]i, i and j. If Pi is corrupt, he may send "refuse" instead.
• If Pi refused, send "fail" to all players; otherwise store s in a new variable as if committed by Pj, send "success" to everyone and send s to Pj.
CMP command (CMP: Commitment Multiplication Protocol). Goal: given [a]i, [b]i, [c]i, Pi can convince all players that c = ab (if true)
• Executed if all honest players send a CMP command referring to [a]i, [b]i, [c]i. If Pi is corrupt, he may send "refuse".
• If Pi refused, or if c ≠ ab, send "fail" to all players. Otherwise, send "success" to everyone.
CSP command (CSP: Commitment Sharing Protocol). Goal: given [a]i, create [a1]1, [a2]2, …, [an]n, where aj = f(j) and f is a polynomial of degree at most t with f(0) = a.
• Executed if all honest players send a CSP command referring to [a]i. Pi should send a polynomial f() of degree at most t. If Pi is corrupt, he may send "refuse".
• If Pi refused, send "fail" to all players. Otherwise, for j = 1..n, compute aj = f(j), store it in a variable as if committed by Pj, and send "success" to everyone.
Implementation of CSP from Basic Fcom Commands
Pi chooses a random polynomial fa(x) = a + c1 x + c2 x² + … + ct x^t and makes commitments [c1]i, [c2]i, …, [ct]i. We define aj = fa(j).
By calling the Commit.Add and Constant.Mult commands, we can create the commitments [aj]i = [a]i + j·[c1]i + j²·[c2]i + … + j^t·[ct]i.
Finally, we use CTP to create [aj]j from [aj]i.
During creation and manipulation of the commitments, Pi can refuse if he is corrupt (and he is the only one who can do so). This counts as Pi refusing the entire CSP operation.
We return to the implementation of the other commands later.
Protocol for Active Adversary
- Adv is adaptive, unbounded and corrupts up to t players, t < n/3.
- We assume Fcom is available, with the Commit, Open, Commit.Add, Constant.Mult, CTP, CMP and CSP commands.
- We assume that a broadcast channel is available (not trivial when Adv is active – it can be implemented via a subprotocol if t < n/3). Broadcast is not used directly in the high-level protocol, but is needed for the implementation of Fcom.
Same phases as in the passively secure protocol, but now we want to maintain that all players are committed to their shares of all values. For simplicity, assume first that no one behaves such that Fcom will return "fail".
Input Sharing Phase: Pi commits to his input value a, creating [a]i, and then we call the CSP command. So we have…
Result of Input Sharing Phase
• Each input value a has been shared by some player Pi using a polynomial fa() of degree ≤ t.
• If Pi is honest, fa() is random of degree ≤ t.
• Each player is committed to his share of a.
Notation: a →fa() ([a1]1, [a2]2, …, [an]n)
Computation Phase
Addition Gates
Input: a →fa() ([a1]1, [a2]2, …, [an]n) and b →fb() ([b1]1, [b2]2, …, [bn]n)
Desired output: c = a+b →fc() ([c1]1, [c2]2, …, [cn]n)
Each player Pi sets ci := ai + bi, and all players compute [ci]i = [ai]i + [bi]i. This produces the desired result with fc() = fa() + fb().
Multiplication Gates
Input: a →fa() ([a1]1, [a2]2, …, [an]n) and b →fb() ([b1]1, [b2]2, …, [bn]n)
Desired output: c = ab →fc() ([c1]1, [c2]2, …, [cn]n)
Each player Pi sets di := ai bi, makes the commitment [di]i and uses CMP on the commitments [ai]i, [bi]i, [di]i to show that di is correct.
If we set h() = fa() fb(), then di = fa(i) fb(i) = h(i). Also h(0) = ab = c. So we can use essentially the same method as in the passive case to get to a sharing of c using a random polynomial of degree ≤ t.
Multiplication Gates, Cont'd
The public reconstruction vector is still (r1, …, rn).
Using the same method as in the input sharing phase, each player Pi creates di →hi() ([ci1]1, [ci2]2, …, [cin]n).
Each Pj is now committed to the shares c1j, …, cnj, and all players compute [cj]j := r1·[c1j]j + r2·[c2j]j + … + rn·[cnj]j.
c is now shared using the polynomial fc() = Σi ri hi(): c →fc() ([c1]1, [c2]2, …, [cn]n).
Output Opening Phase
Having made our way through the circuit, we have for each output value y: y →fy() ([y1]1, …, [yn]n).
If y is to be received by player Pi, "private open i" is invoked for each commitment, such that only Pi learns the shares.
Opening may fail for some commitments, but the rest are guaranteed to be correct, so Pi can reconstruct y in the normal way.
Note: this would work assuming only that t < n/2. In fact the entire high-level protocol works for t < n/2; it is only the implementation of Fcom that needs t < n/3. The high-level protocol can be used to get MPC for t < n/2 in the cryptographic model, if we can build a computationally secure implementation of Fcom in that scenario.
How to Handle Failures
If a player Pi sends "refuse" in some command, causing Fcom to return "fail":
• In the input sharing phase: ignore Pi or use a default value for his input.
• In the computation phase: we can always go back to the start, open Pi's inputs and recompute, simulating Pi openly. There is also a more efficient solution: since t < n/3, at least n−t > 2t players do the multiplication step correctly, so we can still do the multiplication step using a reconstruction vector tailored to the set that behaves well.
• In the output opening phase: the receiver of an output just ignores incorrectly opened commitments – there is enough info to reconstruct, since n−t > t.
Proving Security of the High-Level Protocol
Very similar to the proof for the passive case, since the protocol follows the same pattern, except for the commitments. More concretely:
Sketch of Algorithm for S
Input sharing phase: when Adv secret-shares his input, this happens in the protocol by sending the input and a polynomial for sharing it to Fcom. However, S also emulates the interaction between Adv and Fcom, so S receives this and can send the input value to FMPC. If Adv sends incorrect data, S just returns "fail", like the real Fcom would do, and sends a default value to FMPC.
To simulate an honest player sharing his inputs, S chooses a random value for each corrupt player and shows these to Adv; this emulates what corrupt players would receive from Fcom.
The other phases are modified in a similar way from the simulator for the passive case. This leads to a perfect simulation.
Implementing Fcom Commands
• Commit, Open
• Commit.Add, Constant.Mult
• CTP protocol, CMP protocol
Idea for commitments: implement using secret sharing. To commit to s, the dealer D just creates s →f() (s1, …, sn). To open, D broadcasts f(), and each player Pi says whether his share really was f(i). The opening is accepted if at least n−t players agree.
The good news: Commit.Add can be implemented by just locally adding shares, Constant.Mult by multiplying all shares by the constant. Furthermore, if D remains honest, Adv learns no information at commitment time.
The bad news: who says D distributes correctly computed shares? If he does not, s is not uniquely determined, and D may open different values later.
Some Wishful Thinking…
Suppose for a moment we could magically force D to distribute shares consistent with a polynomial f() of degree ≤ t < n/3. Then it works!
- It is easy to see that things are fine if D remains honest.
- If D is corrupt, we want to show that D must open the value s or be rejected. Assume D opens some s', by broadcasting a polynomial f'(). If this is accepted, at least n−t > 2t players agree ⇒ at least t+1 honest players agree ⇒ f'() agrees with f() in t+1 points ⇒ f'() = f() ⇒ s = s'.
Therefore it is sufficient to force D to be consistent.
How to Force Consistency
Main tool: f(X, Y) = Σij cij X^i Y^j – a bivariate polynomial of degree at most t in both variables. We will assume f() is symmetric, i.e., cij = cji.
Define, for 0 < i, j ≤ n:
f0(X) = f(X, 0), and set s = f0(0), si = f0(i)
fi(X) = f(X, i), fi(j) = sij
How to think of this: s is the "real" secret to be committed, using polynomial f0(); hence f0(i) = si will be player Pi's share of s. The rest of the machinery is just for checking.
Observations, by symmetry:
si = f0(i) = f(i, 0) = f(0, i) = fi(0)
sij = fi(j) = f(j, i) = f(i, j) = fj(i) = sji
Commit Protocol
1. Dealer D chooses a random bivariate polynomial f() as above, such that f(0, 0) = s, the value he wants to commit to. He sends fi() privately to player Pi.
2. Pi sends sij = fi(j) to Pj, who compares it to sji = fj(i) – and broadcasts "complaint" if there is a conflict.
3. D must broadcast the correct value of all sij's complained about.
4. If some Pi finds disagreement between the broadcast values and what he received privately from D, he broadcasts "accuse D".
5. In response to an accusation from Pi, D must broadcast what he sent to Pi – fi(). This may cause further players to find disagreement as in Step 4; they then also accuse D.
• If D has been accused by more than t players, the commit protocol fails.
• Otherwise, the commitment is accepted. Accusing players from Step 4 use the broadcast fi() as their polynomial; accusing players from Step 5 use the polynomial sent in Step 1. Each player Pi stores fi(0) as his share of the commitment.
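The symmetry relations that make the pairwise checks of Step 2 possible can be verified concretely. A small Python sketch of the dealer's side (parameters p = 2089, n = 7, t = 2 and the names `make_symmetric_poly`, `f_eval`, `rows` are illustrative; `rows[i][x]` plays the role of fi(x), with row 0 being f0):

```python
import random

p, n, t = 2089, 7, 2

def make_symmetric_poly(s):
    """Random symmetric bivariate f(X, Y) = sum c[i][j] X^i Y^j of degree <= t
    in each variable, with c[i][j] = c[j][i] and f(0, 0) = s."""
    c = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            c[i][j] = c[j][i] = random.randrange(p)
    c[0][0] = s
    return c

def f_eval(c, x, y):
    return sum(c[i][j] * pow(x, i, p) * pow(y, j, p)
               for i in range(t + 1) for j in range(t + 1)) % p

# Dealer commits to s: sends fi(X) = f(X, i) to Pi. Pi's share of s is fi(0) = f0(i).
s = 321
c = make_symmetric_poly(s)
rows = {i: [f_eval(c, x, i) for x in range(n + 1)] for i in range(n + 1)}
```

By symmetry, `rows[i][j] == rows[j][i]` for every pair, which is exactly what Pi and Pj compare in Step 2, and each share fi(0) equals f0(i), so the checking machinery pins down a consistent degree-≤t sharing of s.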
Commitments, More Concretely
In our implementation, a commitment [a]i is a set of shares a1, …, an, where aj is held by Pj, and where Pi knows the polynomial fa() that was used to create the shares – with fa(0) = a.
• Checking using the bivariate polynomial forces Pi to create the shares correctly.
• Opening means Pi broadcasts fa(); each Pj checks whether fa(j) = aj and complains if not; the opening is accepted iff there are at most t complaints.
[a]i + [b]i means: each Pj has aj and bj, and now computes cj := aj + bj. Pi computes fc() = fa() + fb(). We now have a new commitment [a+b]i, defined by the shares c1, …, cn and the polynomial fc().
u·[a]i means: each Pj has aj, and now computes dj := u·aj. Pi computes fd() := u·fa(). We now have a new commitment [ua]i, defined by the shares d1, …, dn and the polynomial fd().
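A centralized sketch of this share-vector view of commitments, with the homomorphic operations and the opening check (p = 2089, n = 7, t = 2 and all names are my own illustrative choices; the bivariate consistency check is assumed to have already forced correct sharing):

```python
import random

p, n, t = 2089, 7, 2

def ev(f, x):
    """Evaluate a polynomial (coefficient list) at x, mod p."""
    r = 0
    for c in reversed(f):
        r = (r * x + c) % p
    return r

def commit(a):
    """[a]_i: the committer keeps f with f(0) = a; players 1..n hold shares f(1..n)."""
    f = [a] + [random.randrange(p) for _ in range(t)]
    return f, [ev(f, j) for j in range(1, n + 1)]

def cadd(com1, com2):
    """[a] + [b]: players add shares locally; the committer adds the polynomials."""
    (f, sh), (g, th) = com1, com2
    return ([(x + y) % p for x, y in zip(f, g)],
            [(x + y) % p for x, y in zip(sh, th)])

def cmul(u, com):
    """u * [a]: players multiply shares by the public constant u; ditto the polynomial."""
    f, sh = com
    return [u * x % p for x in f], [u * x % p for x in sh]

def open_commitment(com):
    """The committer broadcasts f; each Pj compares f(j) to his share.
    Accepted iff at most t complaints; the opened value is then f(0)."""
    f, sh = com
    complaints = sum(1 for j in range(1, n + 1) if ev(f, j) != sh[j - 1])
    return f[0] if complaints <= t else None
```

A committer who tries to open a different polynomial disagrees with all n honest shares, far more than the t tolerated complaints, so the opening is rejected.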
Implementing the CTP (Commitment Transfer) Command
Purpose: from [a]i, produce [a]j.
• Given [a]i, Pi sends privately to Pj the polynomial fa() defining the commitment. If Pj does not get something of the correct form, he broadcasts a complaint and we go to the "complaint step" below.
• Pj creates [a']j, where a' is the value he learned in the first step. Note that, assuming Pj received correct info from Pi, he is now in a state equivalent to having created [a]i himself. So we can use Commit.Add and Constant.Mult to create [a]i + (−1)·[a']j, which we open. The result should be 0. If yes, continue with [a']j, accept and stop.
• Complaint step: if we reach this one, it is clear that at least one of Pi, Pj is corrupt. Hence it is OK to ask Pi to open [a]i. If this succeeds, continue with a default commitment by Pj to a. Else the CTP fails.
Reminder: Given a commitment by Pi to a value a, [a]i, the Commitment Share Protocol (CSP) works as follows: Pi chooses a random polynomial fa(x) = a + c1·x + c2·x^2 + … + ct·x^t and makes commitments [c1]i, [c2]i, …, [ct]i. The j’th share of a is aj = fa(j). The players can now immediately compute commitments to the shares: [aj]i = [a]i + j·[c1]i + j^2·[c2]i + … + j^t·[ct]i. Finally, we use CTP to create [aj]j from [aj]i. This trivially generalizes to polynomials of any degree.
Implementing the CMP (Commitment Multiplication) Command
Given [a]i, [b]i, [c]i, Pi wants to convince us that c = ab. Pi uses the CSP command to create:
from a, using fa(): [a1]1, [a2]2, …, [an]n
from b, using fb(): [b1]1, [b2]2, …, [bn]n
from c, using fc(): [c1]1, [c2]2, …, [cn]n
where fc() = fa()·fb().
Even if Pi is corrupt, this guarantees that all committed shares are consistent with polynomials of degree at most t, t, and 2t, respectively, and that fa(0) = a, fb(0) = b, fc(0) = c. Hence it is sufficient to verify that indeed fc() = fa()·fb(): each Pi checks that ci = ai·bi. If not, he complains and proves his case by opening the commitments. Honest players will do this correctly, so we know that fc() agrees with fa()·fb() in at least n−t > 2t points, hence fc() = fa()·fb().
Proving the Fcom implementation secure
Basic ideas: When a corrupt player commits, the simulator can reconstruct the value committed to from the messages sent, because consistency is enforced. So it knows what to send to Fcom. When an honest player commits to a value s, s is not known to the simulator. So we show the adversary random values in place of the shares of s that the honest player would send. At opening time, the simulator gets s from Fcom, then completes the shares already shown into a complete set of shares of s, and claims this was what the honest players held. This leads to a perfect simulation.
Note on the Fcom implementation: it is based on Shamir’s threshold secret sharing scheme, but it has been designed such that any linear secret sharing scheme can be plugged in instead (more on this later). Using special properties of Shamir’s scheme, some parts can be done more efficiently. For instance, the Commit protocol based on Shamir is already itself a CSP: the fi() polynomials can be used as commitments to the shares of s – details in the notes.
Another improvement of Fcom (works only for the Shamir case)
Alternative Open protocol. Commitment [s]i has been established via s → f() → s1, …, sn.
• Each player Pj sends sj to every other player.
• From the received shares, each player reconstructs s using the algorithm given below.
This does not require use of broadcast, which is often very expensive. It works if we can construct an algorithm with the following property: given a set of values s’1, s’2, …, s’n, where s’i = f(i) for a polynomial f of degree at most t < n/3, except for at most t values, compute f(). We already proved that since t < n/3, only one polynomial can be consistent with enough values, so we can find f() by exhaustive search. Can we do it efficiently?
Algorithm
Construct a bivariate polynomial Q(X,Y) such that for i = 1…n: Q(i, s’i) = 0, where Q(X,Y) = f0(X) – f1(X)·Y with deg(f0) at most 2t and deg(f1) at most t. The conditions on Q() define a linear system of equations with the coefficients of f0, f1 as unknowns, so it is easy to find Q() if it exists. Hence it is enough to show that:
1. a Q() of the correct form always exists;
2. the desired f() is easy to find from f0, f1.
As for 1, let A be the set of positions where the s’i do not agree with f(). If we set k(X) = ∏i∈A (X − i), then Q(X,Y) = k(X)·f(X) – k(X)·Y does the trick.
For 2, define Q’(X) = Q(X, f(X)). It turns out that Q’(i) = 0 for all i not in A – that is at least n−t > 2t points, while deg(Q’) ≤ 2t – so Q’(X) = f0(X) – f1(X)·f(X) = 0, hence f(X) = f0(X)/f1(X).
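A sketch of this decoder (essentially the Berlekamp-Welch algorithm) might look as follows; this is a toy implementation over a small prime field with illustrative names, not an optimized or hardened decoder.

```python
# Sketch of the decoder above over GF(P): solve the homogeneous system
# for Q(X,Y) = f0(X) - f1(X)*Y, then recover f = f0/f1.
P = 97  # small prime field for the example

def nullspace_vector(A):
    """Gaussian elimination mod P; return one nonzero nullspace vector."""
    A = [row[:] for row in A]
    rows, cols = len(A), len(A[0])
    pivots, r = {}, 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if A[i][c] % P), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]
        inv = pow(A[r][c], P-2, P)
        A[r] = [x*inv % P for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c]:
                A[i] = [(x - A[i][c]*y) % P for x, y in zip(A[i], A[r])]
        pivots[c] = r
        r += 1
    free = next(c for c in range(cols) if c not in pivots)
    v = [0]*cols
    v[free] = 1
    for c, pr in pivots.items():
        v[c] = (-A[pr][free]) % P
    return v

def polydiv(f0, f1):
    """Exact division of f0 by f1 over GF(P), coefficients low-first."""
    f0, f1, q = f0[:], f1[:], []
    while f1 and f1[-1] == 0:
        f1.pop()
    inv = pow(f1[-1], P-2, P)
    for k in range(len(f0)-len(f1), -1, -1):
        c = f0[k+len(f1)-1]*inv % P
        q.append(c)
        for j, b in enumerate(f1):
            f0[k+j] = (f0[k+j] - c*b) % P
    return q[::-1]

def decode(points, t):
    """points = [(i, s'_i)], at most t wrong values, deg(f) <= t."""
    A = [[pow(i, k, P) for k in range(2*t+1)] +
         [(-y*pow(i, k, P)) % P for k in range(t+1)] for i, y in points]
    v = nullspace_vector(A)
    f = polydiv(v[:2*t+1], v[2*t+1:])
    while len(f) > 1 and f[-1] == 0:   # trim leading zero coefficients
        f.pop()
    return f

# f(X) = 42 + 5X + 7X^2 (t = 2), n = 7 shares, two of them corrupted
f = lambda x: (42 + 5*x + 7*x*x) % P
pts = [(i, f(i)) for i in range(1, 8)]
pts[0] = (1, (pts[0][1] + 1) % P)   # corrupt share of P1
pts[4] = (5, (pts[4][1] + 3) % P)   # corrupt share of P5
assert decode(pts, 2) == [42, 5, 7]
```

The uniqueness argument from the slide is what makes the division exact: any nonzero solution satisfies f0 − f1·f = 0, since that difference has degree at most 2t but vanishes in at least n−t > 2t points.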
Why are the bounds n > 2t, n > 3t optimal?
In the passive case, it is impossible already for 2 players to compute, for instance, the AND function with unconditional security against both players:
Inputs: from A: bit a; from B: bit b. Results: A and B learn a AND b.
Suppose a = 0; then A is to learn nothing. Nevertheless, using infinite computing power, A can determine whether the conversation she just had with B could have resulted from both a = 0 and a = 1, i.e., ”is my bit uniquely determined from the conversation?”. If not, B also learned nothing, and so must have b = 0; else he has b = 1. Note: the multiparty case reduces to the 2-party case.
In the active case, it is impossible to do broadcast already for 3 players when 1 can be corrupt: assume players A, B, C, where A wants to broadcast a bit b. A may send b to B and C, but e.g. B does not know if C received the same bit as him. The only possibility is to ask C. If there is an inconsistency, it is clear that A or C is corrupt, but there is no way to tell which one!
How to go from threshold to general adversaries
Use the same ideas, but a more general form of secret sharing… Shamir’s scheme can be written as a fixed matrix applied to (secret, randomness):

| 1  1  1^2 … 1^t |   | s  |   | a1 |
| 1  2  2^2 … 2^t |   | r1 |   | a2 |
| …               | · | …  | = | …  |
| 1  n  n^2 … n^t |   | rt |   | an |

Each player ”owns” a row of the matrix and is assigned the share corresponding to his row. This can be generalized to matrices other than the Vandermonde matrix, and to more than one row per player.
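The matrix view can be checked directly on a toy instance (illustrative parameters):

```python
# Sketch: Shamir sharing as a Vandermonde matrix times (secret, randomness).
import random

P = 101
t, n = 2, 5
M = [[pow(i, k, P) for k in range(t+1)] for i in range(1, n+1)]  # Vandermonde

s = 42
v = [s] + [random.randrange(P) for _ in range(t)]   # (s, r_1, ..., r_t)
shares = [sum(M[i][k]*v[k] for k in range(t+1)) % P for i in range(n)]

# Row i belongs to P_{i+1}; his share equals f(i+1) for
# f(x) = s + r_1*x + ... + r_t*x^t
f = lambda x: sum(c*pow(x, k, P) for k, c in enumerate(v)) % P
assert shares == [f(i) for i in range(1, n+1)]
```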
Linear Secret Sharing Schemes (LSSS)
The matrix M consists of blocks of rows, one block ”owned” by each of P1, …, Pn. To share s: choose a vector v = (s, randomness) and compute M·v; the coordinates corresponding to Pi’s rows form the share of Pi.
A subset A can reconstruct s if their rows span (1, 0, 0, …, 0); otherwise they have no information. LSSS is the most powerful general secret-sharing method known and can handle any adversary structure – but it cannot be efficient on every structure (counting argument). Shamir, Benaloh-Leichter, Van Dijk and Brickell are special cases.
Reminder
Adversary structure Γ: a family of subsets of P = {P1, …, Pn} – the list of subsets the adversary can corrupt. Threshold-t structure: contains all subsets of size at most t.
Γ is Q3: for any A1, A2, A3 ∈ Γ, A1 ∪ A2 ∪ A3 is smaller than P.
Γ is Q2: for any A1, A2 ∈ Γ, A1 ∪ A2 is smaller than P.
To make our protocol work for general Q2/Q3 adversaries, we basically plug in an LSSS M for Γ instead of Shamir’s scheme. Does this work? Let va be the vector chosen in order to secret-share a value a. Then the complete set of shares is the vector M·va. We can securely add shared secrets: local addition of shares of a and b means we compute M·va + M·vb = M(va + vb), which produces shares of the sum a+b, since the vector va + vb has a+b in its first coordinate.
Multiplication?
For vectors u = (u1, …, ud), v = (v1, …, vd), let u◊v = (u1v1, …, udvd) and u⊗v = (u1v1, u1v2, …, u1vd, u2v1, …, udvd).
Now, given sharings of a and b, M·va and M·vb, we can compute M·va ◊ M·vb by local multiplication, where each player knows a subset of the entries. We have M·va ◊ M·vb = (M⊗M)(va⊗vb), where M⊗M is the matrix containing as rows all ⊗-products of rows in M with themselves. Note: va⊗vb contains ab in its first coordinate. Thus we have produced a sharing of ab in the LSSS defined by M⊗M.
Definition: M is multiplicative if the set of all players is qualified in the LSSS defined by M⊗M. If M is multiplicative, we can use the same idea as for polynomial secret sharing to convert the sharing using M⊗M to a sharing using M.
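The identity M·va ◊ M·vb = (M⊗M)(va⊗vb) can be checked on a toy instance (Shamir-style matrix, illustrative names):

```python
# Sketch: coordinate-wise products of share vectors form an LSSS sharing
# under the matrix whose rows are tensor products of M's rows.
import random

P = 101
t, n = 1, 3
M = [[pow(i, k, P) for k in range(t+1)] for i in range(1, n+1)]

def tensor(u, v):
    """u (x) v: all pairwise products, u-index varying slowest."""
    return [ui*vj % P for ui in u for vj in v]

dot = lambda u, v: sum(x*y for x, y in zip(u, v)) % P

va = [5] + [random.randrange(P) for _ in range(t)]   # shares a = 5
vb = [8] + [random.randrange(P) for _ in range(t)]   # shares b = 8

lhs = [dot(r, va) * dot(r, vb) % P for r in M]        # local multiplication
rhs = [dot(tensor(r, r), tensor(va, vb)) for r in M]  # (M(x)M)(va(x)vb)
assert lhs == rhs
assert tensor(va, vb)[0] == 40   # ab = 5*8 in the first coordinate
```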
A matrix M defining an LSSS is NOT always multiplicative. However:
Theorem [CDM 00]: from any LSSS M for a Q2 adversary structure, one can always construct a multiplicative M’ of size at most twice that of M.
This implies: from any LSSS M for a Q2 adversary structure Γ, one can build general MPC protocols with perfect security against passive, adaptive Γ-adversaries.
We can also get a protocol for active adversaries and a Q3 adversary structure, by generalizing from the threshold protocol we have seen: we must implement commitments – same idea as before, secret share the committed value using M. Since the adversary structure is Q3, the committed value is still determined if the sharing is consistent. To verify consistency, use the bivariate polynomial technique, generalized to LSSSs; the same commit protocol applies. For details, see [CDM 00], full version on my web page.
MPC from LSSS, cont’d
Everything else in the Fcom implementation is generic and generalizes immediately to any LSSS, except the CMP protocol. Generalizing CMP requires an extra property: the given LSSS must be strongly multiplicative.
Definition: M is strongly multiplicative if the set of honest players is qualified in the LSSS defined by M⊗M.
It is not known whether, from an LSSS M for a Q3 adversary structure, we can build a strongly multiplicative M’ not much larger than M – the major open problem in this area! Fortunately, there is a solution that works for ANY homomorphic commitment scheme, and is only inferior in that it has an exponentially small error probability…
Generic Implementation of the CMP (Commitment Multiplication) Command
Given [a]i, [b]i, [c]i, Pi wants to convince us that c = ab. The following convinces a single player Pj that the statement is true; it can be repeated (in parallel) so every other player gets to play the role of Pj.
1. Pi chooses α at random and makes commitments [α]i, [αb]i.
2. Pj chooses a random challenge r (in the field GF(p)) and sends it to Pi.
3. Pi opens the commitment r[a]i + [α]i to reveal a value r1. He also opens the commitment r1[b]i – [αb]i – r[c]i; the result must be 0.
4. If any of the openings fail, Pj rejects; otherwise he accepts.
• If Pi remains honest, so that ab = c, Pj will always accept. Moreover, all values opened are random or fixed to 0, so Adv gets no extra information. It is easy to construct a simulator, using techniques seen before.
• If Pi is corrupt and, after step 1, can answer convincingly for 2 different values of r, then ab = c – so the error probability is 1/p.
Remark on simulation in the UC model vs. simulation for computational ZK protocols: in the UC model, the simulator is NOT allowed to rewind the adversary (necessary for the concurrent composition theorem).
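The algebra behind steps 1–3 can be checked at the plaintext level; the sketch below abstracts the commitments away (openings are linear, so only the opened values matter) and also illustrates why a cheating prover is caught for all but one challenge. All names are illustrative.

```python
# Plaintext-level check of the CMP algebra (commitments abstracted away).
import random

P = 2**31 - 1                 # a prime; stand-in for GF(p)
a, b = 17, 23
c = a*b % P                   # honest prover: c = ab
alpha = random.randrange(P)   # prover's random alpha from step 1

r = random.randrange(P)       # verifier's challenge
r1 = (r*a + alpha) % P        # value revealed by opening r[a] + [alpha]
# The second opening r1[b] - [alpha*b] - r[c] must be 0 when c = ab:
assert (r1*b - alpha*b - r*c) % P == 0

# A prover with c != ab survives only one challenge value, since the
# opened value equals r*(ab - c), which is nonzero for every r != 0:
c_bad = (c + 1) % P
fails = sum(1 for rr in range(100)
            if ((rr*a + alpha)*b - alpha*b - rr*c_bad) % P != 0)
assert fails == 99            # caught for every rr except rr = 0
```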
Beyond LSSS?
The above construction yields an MPC protocol of complexity polynomial in the size of the given LSSS. Can we build MPC from ANY secret sharing scheme? – probably not!
Theorem [CDD 00]: there exists no efficient black-box reduction building MPC from an arbitrary secret sharing scheme.
Protocols for the Cryptographic Scenario
Can be based on ideas from the information-theoretic scenario; there are several possibilities.
First Idea: One can think of the I.T. scenario as the cryptographic scenario augmented by an ideal functionality Ftransmit that securely transmits messages from player to player. Then, implement Ftransmit in the cryptographic model (using public-key encryption, for instance). Now, general MPC for the cryptographic scenario follows from the composition theorem.
Using standard CCA-secure public-key encryption, this works as long as the adversary is static. For an adaptive adversary, there are technical problems with simulation: when honest players Pi, Pj communicate, S must create a ciphertext c to show Adv without knowing the plaintext m. If Pi is corrupted later, S is given m, but must explain c as an encryption of m. Most likely impossible! This can be solved using a stronger type of encryption known as non-committing encryption: the simulator can create special ”fake” ciphertexts that can later be explained as encryptions of anything.
Implementation
CCA-secure encryption and non-committing encryption exist if one-way trapdoor permutations exist. Hence, the theorems stated for the I.T. scenario essentially imply the ones for the crypto scenario. CCA security can be implemented quite efficiently based on standard techniques; non-committing encryption is much less efficient, even using the best known techniques.
Second Idea
Gain efficiency by implementing not just message transmission, but also Fcom, using cryptographic tools. Quite efficient implementations are known for static adversaries, e.g. based on the discrete log problem:
Let p = 2q + 1 where p, q are primes. Take g, h, y ∈ Zp* of order q. Then, to commit to an element a ∈ Zq, choose r at random; the commitment is (g^r mod p, y^a·h^r mod p), i.e. an El Gamal encryption of a. Clearly homomorphic mod q. Known techniques suffice for implementing CTP, CMP, etc. In particular, we can use ZK protocols for efficient implementations, e.g. the earlier protocol for CMP.
Using the same approach for adaptive adversaries is not so interesting, since we would need non-committing encryption for message transmission, so we would lose efficiency again.
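A toy sketch of this commitment scheme and its homomorphic property (insecure toy parameters, illustrative names):

```python
# Sketch: commit(a; r) = (g^r, y^a * h^r) mod p, additively homomorphic
# in a and r (mod q). Toy parameters only, not secure.
p, q = 23, 11            # p = 2q + 1, both prime
g, h, y = 4, 9, 3        # elements of order q in Z_p^* (squares mod 23)

def commit(a, r):
    return (pow(g, r, p), pow(y, a, p) * pow(h, r, p) % p)

def add(c1, c2):
    """Componentwise product: commits to a1+a2 mod q with r1+r2 mod q."""
    return (c1[0]*c2[0] % p, c1[1]*c2[1] % p)

c1, c2 = commit(3, 5), commit(4, 2)
assert add(c1, c2) == commit((3+4) % q, (5+2) % q)
```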
Protocol for proving knowledge of a discrete logarithm – an example of an efficient ZK proof for use in this context.
Given h = g^w, in a group of prime order q, P claims he knows w.
P sends a = g^r to V.
V sends a random e = 0 or 1.
P responds with z = r + ew mod q.
V checks that g^z = a·h^e.
This also works if e is random mod q – now the error probability is 1/q, for the same price!
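One run of this protocol, with e random mod q, can be sketched as follows (insecure toy parameters, illustrative names):

```python
# Sketch of one run of the proof of knowledge of w with h = g^w.
import random

p, q = 23, 11             # p = 2q + 1; work in the order-q subgroup of Z_p^*
g = 4                     # element of order q
w = 7                     # prover's secret
h = pow(g, w, p)          # public value

r = random.randrange(q)
a = pow(g, r, p)          # prover's first message: a = g^r
e = random.randrange(q)   # verifier's challenge, random mod q
z = (r + e*w) % q         # prover's response
assert pow(g, z, p) == a * pow(h, e, p) % p   # verifier checks g^z = a*h^e
```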
Third Idea
Instead of starting from a protocol for the I.T. scenario, use a different paradigm, tailored for the crypto scenario. Maybe we gain efficiency this way.
Basic primitive needed: homomorphic threshold public-key encryption [CDN 02]. A common public key pk for everyone, with the secret key secret-shared among the players – Adv cannot decrypt, honest players can. Player Pi supplies input xi by just publishing an encryption Epk(xi).
Homomorphic property: the set of plaintexts is assumed to be a ring, and there is a multiplication operation on ciphertexts such that for any plaintexts a, b: Epk(a)·Epk(b) = Epk(a+b). One can also multiply a constant ”into” an encryption.
Example: Paillier encryption – gives the most efficient known protocols. The plaintext space is Zn for an RSA modulus n; ciphertexts are numbers modulo n^2.
MPC from homomorphic encryption
Players publish encryptions of their inputs. We walk through an arithmetic circuit, as before, adding and multiplying values while they are encrypted. This produces encryptions of the outputs, which we can decrypt because we share the private key.
Secure addition is immediate by the homomorphic property: just multiply the two encryptions.
Multiplication: from Epk(a), Epk(b), how to securely produce Epk(ab)?
• Each Pi chooses ri at random and publishes Epk(ri).
• Multiply Epk(a) by all the Epk(ri), decrypt the result. This yields a+R, where R = r1 + … + rn.
• From (a+R) and Epk(b), everyone can produce Epk((a+R)b).
• From ri and Epk(b), each Pi can produce Epk(−ri·b); we multiply all these to get Epk(−Rb).
• Finally, from Epk((a+R)b) and Epk(−Rb), produce Epk(ab).
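The steps above can be sketched with a toy Paillier instance; here a single call to dec stands in for threshold decryption, re-randomization is omitted, and all parameters are insecure toy values.

```python
# Sketch of the multiplication subprotocol with a toy Paillier scheme.
import math, random

p, q = 47, 59
n, n2 = p*q, (p*q)**2
lam = math.lcm(p-1, q-1)

def rand_unit():
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            return r

def enc(m):
    # Paillier: Enc(m; r) = (n+1)^m * r^n mod n^2
    return pow(n+1, m % n, n2) * pow(rand_unit(), n, n2) % n2

def dec(c):
    # stands in for threshold decryption by the players
    L = (pow(c, lam, n2) - 1) // n
    return L * pow(lam, -1, n) % n

def cmul(c, k):
    """Multiply a plaintext constant 'into' a ciphertext: Enc(k*m mod n)."""
    return pow(c, k % n, n2)

a, b = 12, 34
Ea, Eb = enc(a), enc(b)
rs = [random.randrange(n) for _ in range(3)]            # each P_i picks r_i
aR = dec(math.prod([Ea] + [enc(r) for r in rs]) % n2)   # decrypt a + R
EaRb = cmul(Eb, aR)                                     # Enc((a+R)*b)
Eab = math.prod([EaRb] + [cmul(Eb, -r) for r in rs]) % n2  # subtract R*b
assert dec(Eab) == a*b % n
```

The masking is why decrypting a+R is safe in the real protocol: R is a sum of contributions, one from each player, so it is unknown to the adversary as long as one contributor is honest.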
Efficiency
All the information-theoretic protocols we saw are polynomial time in: C, the size of the circuit; the size of the secret sharing scheme (in the threshold case, polynomial in n, the number of players); and k, the size of the field.
The protocol based on homomorphic encryption needs to communicate O(n·C·k) bits. Additions are for free, so we get really practical solutions for electronic voting, for instance. It is also practical in case the circuit is not too large, say a few comparisons of integers (auctions, contract bidding).
Recent work [DN 03]: even adaptive security, with only a constant factor loss of efficiency.
Where to read more
Definitions/model for asynchronous communication: Ran Canetti, paper on the UC model, Eprint archive at www.iacr.org
More on the UC model for synchronous communication, a complete proof of the composition theorem, and details on MPC from homomorphic encryption: Jesper Nielsen’s Ph.D. thesis, final version available soon at www.brics.dk
Theory of linear secret sharing, details on MPC from LSSS: Cramer, Damgård and Maurer: General Multiparty Computation from any Linear Secret Sharing Scheme, full version at www.daimi.au.dk/~ivan
The one result we did not cover here – protocols showing how to do t < n/2 for active Adv in the I.T. model, assuming broadcast, and with small error probability: Rabin and Ben-Or: Verifiable secret sharing and multiparty computation with honest majority, STOC ’89. Also a later, more efficient version by Cramer, Damgård, Dziembowski, Hirt and Rabin: Efficient MPC with dishonest minority.
Impossibility of MPC from any secret sharing scheme: Cramer, Damgård, Dziembowski: On the complexity of verifiable secret sharing and MPC, STOC ’00 and www.daimi.au.dk/~ivan