Show Posts

Messages - Nathaniel B

1
General Discussion / Re: $5000 Bounty - Momentum Proof-of-Work Hacking
« on: November 07, 2013, 10:42:43 pm »
Thanks. I still think you're underestimating the issue. You should change the design so the nonce space is much larger than anyone could store in memory; that way the scaling becomes asymptotically linear in RAM.
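To spell out why that fixes it (a sketch, using the notation from my proof further down this page: B birthdays, M results stored in RAM, N hashes computed per block attempt): the total expected matches with a RAM cap of M are

(M*(M+1)/2 + M*(N - M))/B

When N >> M the M*N/B term dominates, so matches scale linearly in M, and doubling your RAM doubles your match rate. The sublinear behavior I demonstrate below only appears because you currently intend N <= M.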

Could you clarify what exactly you require in order to claim the full bounty?

2
General Discussion / Re: $5000 Bounty - Momentum Proof-of-Work Hacking
« on: November 07, 2013, 09:35:07 pm »
I decided to graph the probability of NOT finding a hash after searching the nonce space using three different amounts of memory... 100%, 50%, and 25%.



All cases search the same number of nonces and thus do the same number of SHA512 computations... but you take a performance hit if you reduce your memory...

Obviously, slight reductions in memory don't hurt too much, but if you attempted to reduce your memory from 768 MB down to 1 MB so that you could run 1000 instances in parallel, then your probability of NOT finding a match would be:
0.368196 with full memory,
0.997402 with 1 MB of memory.

So your probability of finding a match is ~243x smaller (0.631804 vs 0.002598) for the same number of computations.

So I guess this means you can trade the one 768 MB process for 243 parallel processes, each with 1 MB of RAM, and find matches at the SAME rate. But you've only actually decreased your RAM requirement to roughly a third (243 MB total)... while increasing the number of computations required by 243x...
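In case anyone wants to reproduce these curves, here's a rough sketch of the calculation. It is NOT the bounty code: the birthday space B and the 768-entries-to-1 memory conversion are my assumptions for illustration, so the exact digits will shift with the real parameters.

Code:
# Poisson-approximation sketch of P(no match) when scanning a nonce
# space of size N while only the first M results fit in RAM.
# B = 2^51 and the 768:1 memory ratio are assumptions for illustration.
import math

def p_no_match(N, M, B):
    # expected matches = (sum of stored-set sizes as we scan) / B
    M = min(M, N)
    expected = (M * (M + 1) / 2 + M * (N - M)) / B
    return math.exp(-expected)          # P(zero matches) ~= exp(-E)

B = 2 ** 51                             # assumed BirthdayHash space
N = 2 ** 26                             # nonces scanned per attempt
full = p_no_match(N, N, B)              # everything kept in RAM
tiny = p_no_match(N, N // 768, B)       # 768 MB -> 1 MB worth of entries
print(full, tiny)                       # ~0.368, ~0.997
print((1 - full) / (1 - tiny))          # ~243x fewer matches found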

The issue is that mining is NOT about finding 1 match; it's about finding as many matches as possible. Your numbers and graphs ALL describe mining as if the goal is to find just the first match, whereas your algorithm lets any match be tested against the difficulty function to find a block. A miner wants to find many matches, not just the first one.

What sort of implementation proof do you want? If I find the hashtable in the code base, make the change I suggested to artificially limit the number of nonces stored to half, and then show it mining at more than half the speed, would that be sufficient to convince you and win the bounty?

3
General Discussion / Re: $5000 Bounty - Momentum Proof-of-Work Hacking
« on: November 07, 2013, 05:49:47 pm »
To clarify: the 50% chance of at least 1 collision for 23 people in your example comes from doing all ~n^2 pairwise comparisons, but when you're hashing, you check each new hash against the ones you already have saved as you add it to the hash table. You're not redoing all n^2 comparisons each time, since the previous hashes have already been compared to each other; you're just doing n comparisons each time to see whether the new hash matches. There's a disconnect between what you describe in your graphs and the actual algorithm. That 50% chance means that after 23 hashes you'd find AT LEAST 1 collision, but miners want to find MULTIPLE collisions, which changes the whole scaling.
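To make the distinction concrete, here's a toy calculation with the classic 365-day birthday numbers (an illustration only, not the PoW parameters):

Code:
# P(at least one collision) vs the expected NUMBER of collisions
# for n items hashed into B buckets. Toy 365-day birthday numbers.
def p_at_least_one(n, B):
    p_none = 1.0
    for k in range(n):           # k values already stored, none colliding
        p_none *= (B - k) / B    # the new value must miss all k of them
    return 1 - p_none

def expected_collisions(n, B):
    return n * (n - 1) / 2 / B   # each of the C(n,2) pairs hits w.p. 1/B

n, B = 23, 365
print(p_at_least_one(n, B))      # ~0.507: the familiar 50% figure
print(expected_collisions(n, B)) # ~0.693 expected matches

The first number saturates at 1 almost immediately, while the second, which is what a miner actually collects, keeps growing quadratically with the number of hashes stored.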

I like the core idea of using the birthday hash to force high memory usage, and I think your idea of saturating the memory bandwidth has promise. With the right parameters you could design mining where moving to ASICs gives very little improvement over GPUs (perhaps none, without resources on the scale of ATI behind it), because GPUs are already optimized for parallel operations and memory bandwidth. You could give CPUs a fighting chance as well, since they can leverage large amounts of memory to make up for less raw hashing power and memory bandwidth.

Your algorithm as it is though does not scale the way you think it does.

Nathaniel

4
General Discussion / Re: $5000 Bounty - Momentum Proof-of-Work Hacking
« on: November 07, 2013, 07:03:21 am »
All your graphs are about finding 1 birthday collision, but miners want to find as many collisions as possible, which completely changes the problem. Run the empirical experiment I suggested: just add the one check so that after half the nonce space is exhausted you stop adding new BirthdayHashed nonces to the hashmap, to simulate having half the memory. You will see the result I presented.

5
General Discussion / Re: $5000 Bounty - Momentum Proof-of-Work Hacking
« on: November 07, 2013, 04:09:44 am »
Here's a proof that your algorithm scales sublinearly with RAM, i.e. adding more RAM up to TARGET_RAM_USE actually has slightly diminishing returns, and even in the asymptotic case the speedup is at best linear.
Informally, my impression is that you misunderstand what the actual steps in your algorithm look like. Because of the difficulty function, a user wants to find as many BirthdayHash collisions as possible rather than just the first match, and each time a new nonce is BirthdayHashed and put into the hashtable it is compared to the ones already there, rather than all hashes being compared in one n^2 computation at the end.
I'll reference my original post on the bitcoin forum and quote extensively from it.
https://bitcointalk.org/index.php?topic=313479.msg3365654#msg3365654

Let B = the number of ‘birthdays’ (hashspace of BirthdayHash).
Let S = the number of BirthdayHash(nonce+H) stored in RAM.
Let M = maximum number that S can be given a particular system’s RAM.

If we find a new BirthdayHash(A + H), the expected number of matches ( BirthdayHash(A+H) == BirthdayHash(A'+H) for some stored nonce A' ) = S / B

Now we calculate the total number of expected matches during a time frame (the average block creation time, for instance). Take the number of BirthdayHash() results we compute in that time to be N. S will increase from 0 to N unless M < N, meaning we do not have enough RAM to store N results; from your comments I gather that you intend M (TARGET_RAM_USE / (nonce size + hashtable overhead per nonce)) >= N.

N * Mean(S) / B = Total expected matches
If M>=N:
Mean(S) = (1 + 2 + 3 + … + N)/N = (N+1)/2

Now let's see what happens when a user only has M/2 RAM available and M/2 < N <= M. To win your challenge, I must prove that with M/2 RAM a user would still find at least half as many expected matches, i.e. at least 0.5 * N*((N+1)/2)/B.

For the first M/2 hashes both users produce the same results. After that, the user stuck at M/2 RAM only has M/2 nonces to compare each new hash against, so the sum of S over the N hashes is:
(1 + 2 + 3 + … + M/2 + M/2 + … + M/2) = (M/2)*(M/2 + 1)/2 + (M/2)*(N - M/2)
The mean is then ((M/2)*(M/2 + 1)/2 + (M/2)*(N - M/2))/N
Total expected matches for M/2 RAM (the Ns cancel out): ((M/2)*(M/2 + 1)/2 + (M/2)*(N - M/2))/B

(M/2 scenario) ((M/2)*(M/2 + 1)/2 + (M/2)*(N - M/2))/B vs. (N*(N+1)/2)/B (N <= M scenario)
Plug in the numbers: M = 2^26 nonces worth of RAM, N = 2^26 hashes during the time period (it's always most efficient to continue hashing until the nonce space is exhausted, to avoid the extra cost of rebuilding the hashtable of potential matches), and B = 2^50 birthdays.
(M/2 scenario)
((2^26/2)*(2^26/2 + 1)/2 + (2^26/2)*(2^26 - 2^26/2))/2^50 = 1.5000000149
vs,
(M scenario)
(2^26*(2^26+1)/2)/2^50 = 2.0000000298

So in the same time frame, with half the RAM I will find ~1.5 matches vs your ~2 matches. Doubling the RAM gets you less than double the matches, so your scaling is actually worse than linear.
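If you want to check that arithmetic, the two totals re-evaluate exactly like this (just the same formulas from above in exact integer arithmetic):

Code:
# Exact re-evaluation of the two expected-match totals above.
from fractions import Fraction

M = N = 2 ** 26
B = 2 ** 50

half_ram = Fraction((M // 2) * (M // 2 + 1), 2) + (M // 2) * (N - M // 2)
full_ram = Fraction(N * (N + 1), 2)

print(float(half_ram / B))   # 1.5000000149011612
print(float(full_ram / B))   # 2.0000000298023224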

Let me know if you have any questions. I suggest you run the empirical experiment yourself if you're not convinced: just change the code to stop adding new results to the hash table once you're halfway through the nonce space, to simulate halving your RAM.
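Here is a self-contained version of that experiment. It's a scaled-down simulation with random numbers standing in for BirthdayHash, NOT a patch to the actual miner code, but the ~75% ratio from the math above falls out of it:

Code:
# Scaled-down simulation of the half-RAM experiment. Random numbers
# stand in for BirthdayHash(nonce + H); this is not the miner code.
import random

def count_matches(num_nonces, max_stored, birthday_space, rng):
    # Scan the whole nonce space once. Stop ADDING to the table after
    # max_stored entries, but keep CHECKING every new hash against it.
    table = {}
    stored = 0
    matches = 0
    for nonce in range(num_nonces):
        h = rng.randrange(birthday_space)
        matches += len(table.get(h, ()))   # compare vs everything stored
        if stored < max_stored:            # the simulated RAM cap
            table.setdefault(h, []).append(nonce)
            stored += 1
    return matches

rng = random.Random(0)
N, B, trials = 1 << 16, 1 << 28, 100       # tiny parameters, same shape
full = sum(count_matches(N, N, B, rng) for _ in range(trials)) / trials
half = sum(count_matches(N, N // 2, B, rng) for _ in range(trials)) / trials
print(full, half, half / full)             # ~8.0, ~6.0, ~0.75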

Hope this helps your design. You can get asymptotically linear scaling with RAM if you make TARGET_RAM_USE ridiculously huge, like 1 TB+.


Thanks,

Nathaniel
16EeLa9BrZc1A4sasztaRdnfp3aU8DmuVC

6
General Discussion / Re: $5000 Bounty - Momentum Proof-of-Work Hacking
« on: November 06, 2013, 03:33:39 pm »
Would you please clarify exactly what TARGET_RAM_USE is? Is it the total memory required to store the hashtable of all nonce results, or some smaller value?

Should we use your example from http://bitsharestalk.org/index.php?topic=21.msg26#msg26 to confirm empirically? Also, could you release FreeTrade's current implementation, or the standalone subset you mentioned there, for testing?

Thanks,

Nathaniel
