src/secp256k1/src/modules/multiset/main_impl.h
- This file was added.
/**********************************************************************
 * Copyright (c) 2017 Tomas van der Wansem                            *
 * Distributed under the MIT software license, see the accompanying   *
 * file COPYING or http://www.opensource.org/licenses/mit-license.php.*
 **********************************************************************/

#ifndef _SECP256K1_MODULE_MULTISET_MAIN_
#define _SECP256K1_MODULE_MULTISET_MAIN_

#include "include/secp256k1_multiset.h"
#include "hash.h"
#include "field.h"
#include "group.h"
/* Converts a group element (Jacobian) to a multiset.
 * Requires the field elements to be normalized.
 * Infinity uses the special value z = 0 */
static void multiset_from_gej_var(secp256k1_multiset *target, const secp256k1_gej *input) {
    if (input->infinity) {
deadalnix: I think we should not bother with the infinity here or elsewhere. Just pick a point that represents the empty set. Starting from the hash of a public string is a good idea. If ge_from_data_var uses the prefix technique explained, it should not be possible to generate an element that collides with the empty set short of breaking sha256. This would require removing an instance of the point when combining two sets, so maybe the performance tradeoff is not acceptable?
tomtomtom7 (author): I used empty = H("") earlier. At that point the API was slightly different, using "multiset_from_element", and thus "multiset_add" and "multiset_combine" were the same thing. This doesn't really work because, as you say, you then have to remove H("") on each add. Such an API would also make sense though, and I think it shows that a non-infinite empty can get awkward; IMO, regardless of performance, it's cleaner to stick to the EC group operations instead of inventing our own. Though I can be convinced otherwise.
        memset(&target->d, 0, sizeof(target->d));
    }
    else {
        secp256k1_fe_get_b32(target->d, &input->x);
        secp256k1_fe_get_b32(target->d+32, &input->y);
        secp256k1_fe_get_b32(target->d+64, &input->z);
    }
}
/* Converts a multiset to a group element (Jacobian).
 * Infinity uses the special value z = 0 */
static void gej_from_multiset_var(secp256k1_gej *target, const secp256k1_multiset *input) {
    secp256k1_fe_set_b32(&target->x, input->d);
    secp256k1_fe_set_b32(&target->y, input->d+32);
    secp256k1_fe_set_b32(&target->z, input->d+64);
    target->infinity = secp256k1_fe_is_zero(&target->z) ? 1 : 0;

deadalnix: Remove that guy and add a check that the point is valid.

}
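The two helpers above are inverses of each other: the 96-byte multiset state is simply x || y || z as 32-byte big-endian field elements, with the all-zero buffer (z = 0) standing for the point at infinity. A rough Python model of that encoding (an illustrative sketch with made-up helper names, not the library API):

```python
def multiset_from_point(x, y, z, infinity):
    """Serialize a Jacobian point (x, y, z ints) to the 96-byte state.
    Infinity is encoded as all zeros, mirroring multiset_from_gej_var."""
    if infinity:
        return bytes(96)
    return x.to_bytes(32, "big") + y.to_bytes(32, "big") + z.to_bytes(32, "big")

def point_from_multiset(d):
    """Deserialize; z == 0 signals infinity, mirroring gej_from_multiset_var."""
    x = int.from_bytes(d[0:32], "big")
    y = int.from_bytes(d[32:64], "big")
    z = int.from_bytes(d[64:96], "big")
    return x, y, z, (z == 0)
```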
/* Converts a data element to a group element (affine).
 *
 * We use trial-and-rehash, which is fast but not constant time.
 * Though constant-time algorithms exist, we are not concerned with timing
 * attacks, as we make no attempt to hide the underlying data */
deadalnix: The only info we leak via timing is about the hash of the data, so if we assume that the hash function is secure, we are good. I'm not sure, however, that the rehash method as it stands is appropriate. Usually it is done via a variation of:

    Hbase = H(data);
    for (int i = 0; i < MAX_TRIAL; i++) {
        h = H(i, Hbase);
        if (isGood(h)) {
            dothething(h);
            return true;
        }
    }
    return false;

Could you expand on the possible constant-time options you considered and why you rejected them?
tomtomtom7 (author): > The only info we leak via timing is about the hash of the data

I don't think this is true. If I know that A takes 10 rounds and B takes 1 round, I can differentiate between X,A and X,B even if X is unknown. I don't think this is a problem though.

With regards to the rehash method: I am using this variant to "cheat" boundedness. We can safely assume the current algorithm finishes in finitely many loops for each input. If we add a max_trial as you suggest, the algorithm can fail, and we would have to handle these cases in our caller (coinview). I think this is an unneeded complication.

I believe that the algorithm in the PDF you added is constant time. It doesn't work on our curve though, only on binary curves. This means we would have to use an additional EC lib; we cannot ride on the efficiency of libsecp256k1. Also, it is claimed that binary curves have weaker security properties, though understanding why is admittedly above my crypto-level.
deadalnix: The problem is that you skew the probabilities by chaining that way. A hash that results from the hash of a high value will be disproportionately represented. It is also harder to create collisions via extension attacks with double sha. If you are worried about the bound, make the integer 64 bits.
static void ge_from_data_var(secp256k1_ge *target, const unsigned char *input, size_t inputLen) {
    secp256k1_sha256 hasher;
    unsigned char hash[32];

    /* hash to a first trial */
    secp256k1_sha256_initialize(&hasher);
    secp256k1_sha256_write(&hasher, input, inputLen);
    secp256k1_sha256_finalize(&hasher, hash);

    /* loop through trials, with 50% success per loop */
    for (;;)
    {
        secp256k1_fe x;
        if (secp256k1_fe_set_b32(&x, hash)) {
            if (secp256k1_ge_set_xquad(target, &x)) {
deadalnix: Can you explain why secp256k1_ge_set_xo_var wasn't used? I think we should make sure that the parity of y is well specified and not dependent on the quadratic residue implementation of libsecp256k1. Also, I would avoid nesting and just use continue.
tomtomtom7 (author): Good point. That is much better. I have changed this.
                VERIFY_CHECK(secp256k1_ge_is_valid_var(target));
                VERIFY_CHECK(!secp256k1_ge_is_infinity(target));
                break;
            }
        }

        /* hash to a new trial */
        secp256k1_sha256_initialize(&hasher);
        secp256k1_sha256_write(&hasher, hash, sizeof(hash));
        secp256k1_sha256_finalize(&hasher, hash);
    }
}
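As a sanity check on the trial-and-rehash scheme, here is a rough Python model of ge_from_data_var (an illustrative sketch, not the C API; it glosses over the y-parity choice that secp256k1_ge_set_xquad makes): keep hashing until the digest is a valid field element whose x^3 + 7 is a quadratic residue, then take a square root.

```python
import hashlib

# secp256k1 field prime
P = 2**256 - 2**32 - 977

def ge_from_data(data: bytes):
    """Trial-and-rehash: map arbitrary data to a curve point (model only)."""
    h = hashlib.sha256(data).digest()
    while True:
        x = int.from_bytes(h, "big")
        if x < P:
            y2 = (pow(x, 3, P) + 7) % P
            # Euler's criterion: y2 is a quadratic residue iff y2^((P-1)/2) == 1
            if pow(y2, (P - 1) // 2, P) == 1:
                # sqrt via (P+1)/4 exponent works because P % 4 == 3
                return x, pow(y2, (P + 1) // 4, P)
        # failed trial: rehash and try again (~50% success per trial)
        h = hashlib.sha256(h).digest()

x, y = ge_from_data(b"some data")
assert (y * y - (x**3 + 7)) % P == 0  # the result lies on the curve
```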
/* Adds a data element to the multiset */
int secp256k1_multiset_add(const secp256k1_context* ctx,
                           secp256k1_multiset *multiset,
                           const unsigned char *input, size_t inputLen)
{
    secp256k1_ge newelm;
    secp256k1_gej source, target;

    VERIFY_CHECK(ctx != NULL);
    ARG_CHECK(multiset != NULL);
    ARG_CHECK(input != NULL);

    gej_from_multiset_var(&source, multiset);
    ge_from_data_var(&newelm, input, inputLen);

    secp256k1_gej_add_ge_var(&target, &source, &newelm, NULL);
    secp256k1_fe_normalize(&target.x);
    secp256k1_fe_normalize(&target.y);
    secp256k1_fe_normalize(&target.z);
deadalnix: I don't think normalizing all the time makes a lot of sense.
tomtomtom7 (author): The normalizations here seem to be needed, as it crashes otherwise; note that these are field normalizations, not GE normalizations. The add leaves the fields less normalized.
    multiset_from_gej_var(multiset, &target);
    return 1;
}
/* Removes a data element from the multiset */
int secp256k1_multiset_remove(const secp256k1_context* ctx,
                              secp256k1_multiset *multiset,
                              const unsigned char *input, size_t inputLen)
{
    secp256k1_ge newelm, neg_newelm;
    secp256k1_gej source, target;

    VERIFY_CHECK(ctx != NULL);
    ARG_CHECK(multiset != NULL);
    ARG_CHECK(input != NULL);

    gej_from_multiset_var(&source, multiset);
    ge_from_data_var(&newelm, input, inputLen);

    /* find inverse and add */
    secp256k1_ge_neg(&neg_newelm, &newelm);
deadalnix: Using secp256k1_ge_set_xo_var you could extract the element in its positive or negative form right away, without requiring any form of fixup.
tomtomtom7 (author): I combined the two methods.
    secp256k1_gej_add_ge_var(&target, &source, &neg_newelm, NULL);
    secp256k1_fe_normalize(&target.x);
    secp256k1_fe_normalize(&target.y);
    secp256k1_fe_normalize(&target.z);
deadalnix: Ditto normalization.
    multiset_from_gej_var(multiset, &target);
    return 1;
}
/* Adds input multiset to multiset */
int secp256k1_multiset_combine(const secp256k1_context* ctx, secp256k1_multiset *multiset, const secp256k1_multiset *input)
{
    secp256k1_gej gej_multiset, gej_input, gej_result;

    VERIFY_CHECK(ctx != NULL);
    ARG_CHECK(multiset != NULL);
    ARG_CHECK(input != NULL);

    gej_from_multiset_var(&gej_multiset, multiset);
    gej_from_multiset_var(&gej_input, input);

    secp256k1_gej_add_var(&gej_result, &gej_multiset, &gej_input, NULL);
    secp256k1_fe_normalize(&gej_result.x);
    secp256k1_fe_normalize(&gej_result.y);
    secp256k1_fe_normalize(&gej_result.z);
deadalnix: Ditto.
    multiset_from_gej_var(multiset, &gej_result);
    return 1;
}
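Taken together, add/remove/combine make the multiset an order-independent accumulator: adding elements in any order yields the same point, removing an element undoes its addition, and combining two multisets is just point addition of their states. A self-contained Python model of the scheme (an illustrative sketch with made-up names; it uses affine rather than Jacobian arithmetic and ignores the y-parity subtlety discussed above):

```python
import hashlib

P = 2**256 - 2**32 - 977   # secp256k1 field prime
INF = None                 # point at infinity == empty multiset

def hash_to_point(data):
    """Trial-and-rehash onto the curve, as in ge_from_data_var."""
    h = hashlib.sha256(data).digest()
    while True:
        x = int.from_bytes(h, "big")
        if x < P:
            y2 = (pow(x, 3, P) + 7) % P
            if pow(y2, (P - 1) // 2, P) == 1:         # quadratic residue?
                return (x, pow(y2, (P + 1) // 4, P))  # sqrt, since P % 4 == 3
        h = hashlib.sha256(h).digest()

def ec_add(p, q):
    """Affine secp256k1 group law (the C code above uses Jacobian adds)."""
    if p is INF:
        return q
    if q is INF:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF                                    # p + (-p) = infinity
    if p == q:
        lam = 3 * x1 * x1 * pow(2 * y1, P - 2, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, P - 2, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def multiset_add(m, data):
    return ec_add(m, hash_to_point(data))

def multiset_remove(m, data):
    x, y = hash_to_point(data)
    return ec_add(m, (x, (-y) % P))                   # add the negated point

def multiset_combine(m1, m2):
    return ec_add(m1, m2)

# order independence, inverses, and combining
m_ab = multiset_add(multiset_add(INF, b"a"), b"b")
m_ba = multiset_add(multiset_add(INF, b"b"), b"a")
assert m_ab == m_ba
assert multiset_remove(m_ab, b"b") == multiset_add(INF, b"a")
assert multiset_combine(multiset_add(INF, b"a"), multiset_add(INF, b"b")) == m_ab
```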
/* Hash the multiset into resultHash */
int secp256k1_multiset_finalize(const secp256k1_context* ctx, unsigned char *resultHash, const secp256k1_multiset *multiset)
{
    secp256k1_sha256 hasher;
    unsigned char buffer[64];
    secp256k1_gej gej;
    secp256k1_ge ge;

    VERIFY_CHECK(ctx != NULL);
    ARG_CHECK(resultHash != NULL);
    ARG_CHECK(multiset != NULL);

    gej_from_multiset_var(&gej, multiset);

    if (gej.infinity) {
deadalnix: By defining an init point, you can obviate this. If the perf impact ends up being big, maybe just returning all zeros is the best thing?
tomtomtom7 (author): Changed to 0. Left the infinity as per the discussion above.
        memset(buffer, 0xff, sizeof(buffer));
    } else {
        /* we must normalize to affine first */
        secp256k1_ge_set_gej(&ge, &gej);
        secp256k1_fe_normalize(&ge.x);
        secp256k1_fe_normalize(&ge.y);

        secp256k1_fe_get_b32(buffer, &ge.x);
        secp256k1_fe_get_b32(buffer+32, &ge.y);
    }

    secp256k1_sha256_initialize(&hasher);
    secp256k1_sha256_write(&hasher, buffer, sizeof(buffer));
    secp256k1_sha256_finalize(&hasher, resultHash);

    return 1;
}
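The finalize step above can be modeled compactly in Python (an illustrative sketch, not the C API): points are affine (x, y) tuples or None for infinity, whereas the C code must first convert its Jacobian state to affine; the digest is SHA256 over the 64-byte x || y encoding, with a fixed 0xff-filled buffer for the empty multiset.

```python
import hashlib

def finalize(point):
    """Model of secp256k1_multiset_finalize: SHA256 over affine x || y.
    The empty multiset (infinity) hashes a fixed 0xff-filled buffer."""
    if point is None:  # infinity / empty multiset
        buf = b"\xff" * 64
    else:
        x, y = point
        buf = x.to_bytes(32, "big") + y.to_bytes(32, "big")
    return hashlib.sha256(buf).digest()
```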
/* Initializes the multiset to the constant representing the empty set,
 * i.e. the Jacobian point at infinity */
int secp256k1_multiset_init(const secp256k1_context* ctx, secp256k1_multiset *multiset) {
    const secp256k1_gej inf = SECP256K1_GEJ_CONST_INFINITY;

    VERIFY_CHECK(ctx != NULL);

    multiset_from_gej_var(multiset, &inf);
    return 1;
}

#endif