General Discussion / Re: Scamming using bitcointalk nicknames
« on: November 08, 2013, 06:35:20 am »
If every forum supported Keyhotee ID this wouldn't be a problem
AMD FX6300 with 8 GB RAM
"hashespermin" : 13.74452367,
24 hours with nothing
is this normal?
At this rate, yes. These are my machines; they might give you a feel for it:
"hashespermin" : 50.06642312 --> 20 blocks
"hashespermin" : 41.14747421 --> 2 blocks
"hashespermin" : 25.34802261 --> nothing
"hashespermin" : 18.12231320 --> 1 block
"hashespermin" : 15.73872058 --> nothing
"hashespermin" : 11.80773078 --> nothing
Is that a better solution before a non-manual exchange exists?
http://54.238.185.113
Hi, this is the PTS pool. The testnet is broken, so it is just a beta version!
The results of the experiment in Table 4 invite some interesting discussion. If one excludes the preprocessing step, the speed-up is significant.
The preprocessing step, however, is an integral part of the algorithm for porting the Bloom filter to the GPU. Thus we need to find better ways to preprocess a given set of keys.
One more notable observation is that the actual filter construction time and the communication latency between GPU and CPU are independent of the key size.
I thought the reward was supposed to decrease 5% a week. It looks like it's decreasing around 5% a day. Anyone have an explanation for this difference?
I suppose the performance of your algorithm would also suffer if SCRYPT(X) were used instead of SHA512(X), because the cost of performing this step twice would be much higher and less GPU-friendly.
I wonder what would happen if we used NESTED momentum proof of work?
Change the nonce space of the outer proof of work to a pair of 16-bit numbers that produce an X-bit collision?
Now you have a more memory-intensive inner hash that is still quick to validate, but would significantly complicate GPU miners.
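For what it's worth, here is a toy sketch of the outer check being proposed. The details are my own assumptions, since the post doesn't pin them down: std::hash stands in for the memory-hard inner hash, and X = 20 collision bits.

```cpp
// Hypothetical sketch of the proposed "nested" outer proof of work.
// Assumptions (not from the post): std::hash stands in for the
// memory-intensive inner hash, and X = 20 collision bits.
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

// Inner hash stand-in; a real design would use a memory-hard function.
uint64_t inner_hash(uint32_t nonce, const std::string& block_data) {
    return std::hash<std::string>{}(block_data + std::to_string(nonce));
}

// Outer check: a pair of distinct 16-bit nonces (a, b) is valid when
// their inner hashes collide on the low X bits.
bool outer_valid(uint16_t a, uint16_t b, const std::string& block_data, int X) {
    if (a == b) return false;
    uint64_t mask = (1ull << X) - 1;
    return (inner_hash(a, block_data) & mask) ==
           (inner_hash(b, block_data) & mask);
}

// Birthday-style search over the full 16-bit nonce space.
std::pair<uint16_t, uint16_t> find_collision(const std::string& block_data, int X) {
    uint64_t mask = (1ull << X) - 1;
    std::unordered_map<uint64_t, uint16_t> seen;  // masked hash -> first nonce
    for (uint32_t n = 0; n < 65536; ++n) {
        uint64_t h = inner_hash(n, block_data) & mask;
        auto it = seen.find(h);
        if (it != seen.end()) return {it->second, static_cast<uint16_t>(n)};
        seen.emplace(h, static_cast<uint16_t>(n));
    }
    return {0, 0};  // no collision found
}
```

With X = 20, the 2^16 nonces give roughly 2^31 pairs against 2^20 buckets, so a collision exists with overwhelming probability, while validation stays cheap: just two inner hashes.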
gigawatt,
Thank you for providing the first innovative algorithm for reducing the memory requirements. Let me attempt to post mitigating factors to your algorithm.
From a CPU-miner perspective, your reduction in memory comes at the expense of performance, and so does not break the algorithmic complexity of the algorithm.
From a GPU perspective, you have to populate a bloom filter with 2^26 results. Based on my understanding of how bloom filters operate, this would require updating a common data structure from every thread, and the resulting memory race conditions could create false negatives. If you have to do this step sequentially, then you might as well use a CPU with memory.
So do you have any solid algorithms that can populate a bloom filter with 2^26 results in parallel?
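One standard answer to the race-condition concern, at least on a CPU (GPUs offer the same primitive as an atomic OR on words), is to set the filter's bits with atomic fetch-or, so concurrent updates merge rather than clobber each other and no false negatives can arise. A minimal sketch, not gigawatt's code; the sizes and the double-hashing mix here are illustrative assumptions:

```cpp
// Sketch of race-free parallel Bloom-filter population using atomic
// fetch_or: if two threads touch the same word, both bits survive,
// so the filter cannot produce false negatives. Parameters are toys.
#include <atomic>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

struct AtomicBloom {
    std::vector<std::atomic<uint64_t>> bits;
    size_t nbits;
    int k;  // number of probe positions per item

    AtomicBloom(size_t nbits_, int k_)
        : bits((nbits_ + 63) / 64), nbits(nbits_), k(k_) {
        for (auto& w : bits) w.store(0);
    }

    // Derive the i-th probe position via double hashing.
    size_t pos(uint64_t x, int i) const {
        uint64_t h2 = x * 0x9e3779b97f4a7c15ull | 1;
        return (x + static_cast<uint64_t>(i) * h2) % nbits;
    }

    void insert(uint64_t x) {
        for (int i = 0; i < k; ++i) {
            size_t p = pos(x, i);
            // Atomic OR: concurrent bit-sets merge instead of being lost.
            bits[p / 64].fetch_or(1ull << (p % 64), std::memory_order_relaxed);
        }
    }

    bool maybe_contains(uint64_t x) const {
        for (int i = 0; i < k; ++i) {
            size_t p = pos(x, i);
            if (!(bits[p / 64].load(std::memory_order_relaxed) >> (p % 64) & 1))
                return false;
        }
        return true;
    }
};

// Insert n_items hashes from n_threads threads, striped across the range.
void parallel_fill(AtomicBloom& bf, uint64_t n_items, int n_threads) {
    std::vector<std::thread> workers;
    for (int t = 0; t < n_threads; ++t)
        workers.emplace_back([&bf, t, n_items, n_threads] {
            for (uint64_t i = t; i < n_items; i += n_threads)
                bf.insert(std::hash<uint64_t>{}(i));
        });
    for (auto& w : workers) w.join();
}
```

Whether this scales to 2^26 insertions on a GPU without the atomics serializing on hot words is exactly the open question being asked here.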
I've managed to make a huge step forward in showing that Momentum is not nearly as good a proof-of-work as intended.
The magic lies in using a Bloom filter to store the intermediate hashes.
As a result, instead of using 12 bytes per hash/nonce in a Semi-Ordered Map (which results in ~750 MB of memory), the required memory is m = -n * ln(p) / (ln 2)^2 bits, with n = 2^26 and p = 0.01, or about ~76 MB.
This number can be reduced arbitrarily if we're willing to have a false positive rate greater than 1%. For example, if we allowed up to a 50% chance of having a false positive, the memory requirement drops to ~11 MB.
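The figures follow from the standard Bloom-filter sizing formula, m = -n * ln(p) / (ln 2)^2 bits for n items at false-positive rate p. A quick arithmetic check (my own, not from the post):

```cpp
// Standard Bloom-filter sizing: m bits for n items at false-positive
// rate p. Plugging in n = 2^26 reproduces the figures quoted above.
#include <cmath>

double bloom_bits(double n, double p) {
    return -n * std::log(p) / (std::log(2.0) * std::log(2.0));
}

double bloom_megabytes(double n, double p) {
    return bloom_bits(n, p) / 8.0 / (1024.0 * 1024.0);
}
// bloom_megabytes(67108864, 0.01) -> ~76.7 MB
// bloom_megabytes(67108864, 0.50) -> ~11.5 MB
```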
Here's an overview of how the algorithm works:
Make a "main" bloom filter of size 2^26 with a 1% false-positive rate: ~76 MB
Make a tiny "clash" bloom filter of size 2^16 with a false-positive rate of 2^-26: ~0.7 MB
Make a vector of pairs< hash, nonce > to store candidate birthday collisions.
For each nonce in the search space, check if its hash exists in the "main" bloom filter. If it does, add its entry to the "clash" bloom filter.
The "main" bloom is no longer required and can be discarded.
For each nonce in the search space, check if its hash exists in the "clash" bloom filter. If it does, add < hash, nonce > to a candidate list for investigation.
Sort the list of candidates by hash.
For each pair in the candidate list, see if the previous element has the same hash. If it does, add it to the output list. This step removes false positives by comparing the actual hash instead of the bloom filter's idea of a hash.
Return the output list as normal.
For your testing pleasure, I also built a working proof of concept.
(Most of the magic is right here. The bloom filter is a modified version of "bloom.cpp" called "bigbloom.cpp")
Unmodified source: http://i.imgur.com/k2cNrmd.png
Source using bloom filters: http://i.imgur.com/w8Enf9e.png
In exchange for lowering the memory requirement by a factor of ~10, the algorithm runs at about 1/4 speed, mainly due to the doubled number of SHA512 calls and the hash calls within the bloom filters. The overall result is a net efficiency gain of approximately 2.5x.
The reduction in memory requirement means that if we could fit N instances of Momentum in GPU memory, we can instead fit 10*N instances. If we up the false positive rate in exchange for more time spent sorting, we can achieve ratios of up to 70*N.
Given that bloom filters, SHA512, and sorting data are all parallel/GPU friendly, we can conclude that Momentum as a proof-of-work isn't nearly as GPU-hard as initially intended.
Thanks. I still think you're underestimating the issue. I think you should change it to have a nonce space much larger than anyone could store in memory, so that you get asymptotically linear scaling in terms of RAM.
Could you clarify what exactly you require in order to claim the full bounty?