Messages - arhag

241
As for doing this right now... let's see, there might be a way... If you generate a BTC deposit address in the buy MOONFUND/BTC market and then enter that address in the sell BTS/BTC market, you should have a two-leg exchange from BTS into MOONFUND. Selling BTS->BTC->MOONFUND!

Oh, good point. I assume there is a proper log so that, in case something goes wrong and BTC needs to be returned, it is possible to prove that the intermediate BTC address that sent the BTC is really the one generated on behalf of your BTS sell request?

242
General Discussion / Re: Moonstone Fundraiser Help Thread
« on: April 09, 2015, 09:48:58 pm »
Can you spell out more clearly why moontokens are valuable?

Each MOONFUND token will be bought back from the market with BTS in the future via delegate pay. For every donation you will receive 15% more MOONFUND tokens than the amount of BTS you would get on a current exchange. This means that as long as the Moonstone wallet becomes sufficiently popular over the next 5-30 months you can make a return on your donation.

I'm concerned about the numbers. Let's say you put a 30-month cutoff on the delegate pay. This means each 100% delegate can only generate 3,905,560 BTS during this 30-month period. You already have a 1,350,349 MOONFUND liability with only 5% funded. If we assume the price of BTS remains the same over the next 30 days, then raising $130,000 requires generating a total of (1.15 * 130,000 / 0.005399) = 27,690,313 MOONFUND tokens (assuming the crowdfunding is successful). In order to generate enough BTS to buy back the MOONFUND at a 1-to-1 rate, you would need 27,690,313/3,905,560 = 7.09 delegates paid at the 100% rate (so round that up to 8 delegates). The problem will be even worse if the price of BTS continues to drop over the next 30 days.
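A quick back-of-the-envelope check of those figures (the price and per-delegate pay are just the assumptions stated above):

Code: [Select]
#include <cmath>
#include <cstdio>

int main()
{
    const double goal_usd         = 130000.0;   // fundraising goal
    const double bts_price_usd    = 0.005399;   // assumed BTS price (USD/BTS)
    const double bonus            = 1.15;       // 15% extra MOONFUND per BTS donated
    const double pay_per_delegate = 3905560.0;  // BTS one 100% delegate earns in 30 months

    double moonfund_issued  = bonus * goal_usd / bts_price_usd;    // ~27,690,313
    double delegates_needed = moonfund_issued / pay_per_delegate;  // ~7.09

    printf("MOONFUND issued:  %.0f\n", moonfund_issued);
    printf("delegates needed: %.2f (round up to %.0f)\n",
           delegates_needed, std::ceil(delegates_needed));
}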

The problem is you are offering a 15% return on BTS. If the BTS price shoots up in the next year or so, as we all expect it will, you will be overpaying a lot! It was a much more realistic deal when the return was on the dollar amount donated.

Edit: I don't think you should change the deal now that the fundraiser has started. But you should be clear about exactly how many delegates you plan to include in the opt-out selection for the Moonstone wallet and what your maximum cutoff time is before you take the delegates down (or at least stop using their pay to buy back MOONFUND tokens). That way people can properly assess the risk of not getting paid back (or what percentage they will be paid back).

Furthermore, you should not do a continuous buyback, since you cannot know whether you will be able to fulfill the entire buyback at a 1-to-1 rate prior to the cutoff time. In that case, the people who are fastest to sell their MOONFUND tokens would profit at the expense of the slower movers. To be fair, you should instead regularly distribute dividends to MOONFUND holders.

243
How much effort would it be to join some of the X/BTC markets with the MOONFUND/BTC market to get a MOONFUND/X market on metaexchange? It would be nice to at the very least join BTS/BTC with MOONFUND/BTC to get MOONFUND/BTS (even if it means higher spreads) so that someone can donate to the Moonstone fundraiser without needing to deal with a Bitcoin wallet.

244
General Discussion / Re: Shuffle Bounty $200 BitUSD
« on: April 09, 2015, 08:24:11 pm »
My wish would be for DPOS to use threshold signatures to reduce a round of 101 blocks to just a single block. Then a random number is generated every block as part of the threshold signature, and it cannot be corrupted unless a supermajority of the delegates collude. The threshold signature would always be of the block header of the previous block in the blockchain, so that delegates have 8 seconds to generate the threshold signature rather than 2 seconds (assuming the client is designed to stop accepting new transactions into the pending block 2 seconds prior to the block-production deadline).

I would split the 101 delegates into two disjoint sequences that are updated every block. The first sequence might be as small as two delegates, while the second sequence would be the rest (e.g. 99 delegates). Each time a delegate is replaced, the delegate that was voted out of the 101 is removed from either the first or the second sequence, and the delegate that was voted in is appended to the second sequence. For every block (including missed ones), the head of the first sequence (the one who should be producing that block) is removed from the first sequence and appended to the second sequence (unless it was the delegate voted out in that block), and the head of the second sequence is removed and appended to the first sequence (this last step is repeated once more if the voted-out delegate in that block happened to be removed from the first sequence, so the first sequence keeps its size). After that, for each produced block, the random number from the threshold signature of that block is used to shuffle the second sequence.
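A rough sketch of that per-block bookkeeping, assuming a two-deque representation (all names here are mine, and std::mt19937_64 merely stands in for whatever deterministic shuffle the chain would actually derive from the threshold-signature entropy):

Code: [Select]
#include <algorithm>
#include <cstdint>
#include <deque>
#include <iostream>
#include <random>

typedef int64_t delegate_id;

struct delegate_schedule {
    std::deque<delegate_id> first;   // small head sequence (e.g. 2): upcoming producers
    std::deque<delegate_id> second;  // everyone else (e.g. 99)

    // Apply one block's worth of updates. Pass voted_out/voted_in as -1 when no
    // delegate changed in this block; entropy is the random number derived from
    // the block's threshold signature (null for a failover block).
    void advance(delegate_id voted_out, delegate_id voted_in, const uint64_t* entropy)
    {
        bool head_was_voted_out = !first.empty() && first.front() == voted_out;
        bool removed_from_first = false;
        if (voted_out != -1) {
            auto it = std::find(first.begin(), first.end(), voted_out);
            if (it != first.end()) { first.erase(it); removed_from_first = true; }
            else {
                it = std::find(second.begin(), second.end(), voted_out);
                if (it != second.end()) second.erase(it);
            }
        }
        if (voted_in != -1)
            second.push_back(voted_in);

        // The scheduled producer rotates to the back of the second sequence,
        // unless the producer slot belonged to the delegate just voted out.
        if (!head_was_voted_out && !first.empty()) {
            second.push_back(first.front());
            first.pop_front();
        }
        // Refill the first sequence from the head of the second; refill twice
        // if the voted-out delegate left a hole in the first sequence.
        for (int refills = removed_from_first ? 2 : 1;
             refills > 0 && !second.empty(); --refills) {
            first.push_back(second.front());
            second.pop_front();
        }
        // If the block carried a threshold signature, its entropy reshuffles
        // the second sequence; failover blocks (entropy == nullptr) skip this.
        if (entropy) {
            std::mt19937_64 rng(*entropy);
            std::shuffle(second.begin(), second.end(), rng);
        }
    }
};

int main()
{
    delegate_schedule s;
    s.first = { 1, 2 };
    for (delegate_id d = 3; d <= 10; ++d) s.second.push_back(d);
    uint64_t e = 12345; // pretend threshold-signature entropy
    s.advance(/*voted_out=*/7, /*voted_in=*/11, &e);
    std::cout << "next producer: " << s.first.front() << "\n";
}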

There can also be failover blocks which do not have a threshold signature (only the signature of the block producer). In the case of these blocks, the shuffling is not done and the blockchain should behave as if the entropy source has temporarily halted (only random numbers from threshold signatures should be used as the entropy source). Failover blocks are limited in the kind of transactions they can include and the kind of behavior that is allowed to execute as part of the block. Only transactions that update BTS votes are allowed in failover blocks (as a means of changing the set of active delegates, who can then hopefully begin producing blocks with threshold signatures again). A regular block at the same block height as a failover block is always preferred over the failover block during fork resolution.

245
General Discussion / Re: Shuffle Bounty $200 BitUSD
« on: April 08, 2015, 05:03:58 am »
So I had to change it up considerably compared to the previous method to generalize to N delegates.

Instead of trying to use all 256 bits of entropy directly to select the permutation, I am resorting to generating enough pseudorandom data from the random_seed and breaking it up into a sequence of bit fields, each just wide enough to hold the appropriately sized offset for the delegate being added into the new order. I can then use the same process to regenerate more pseudorandom data to replace the original, and keep iterating until an offset within the appropriate limit is selected for each of the delegates, or until I hit the maximum iteration count, in which case I bias the results by selecting an offset of 0 (but that should be an incredibly low-probability event: less than 2^-100 with the current selection of the MAX_ITERATION constant below).

I also now actually update the set of delegates (remove voted-out delegates, add voted-in delegates). To maximize entropy and mixing, the old delegates that have not been voted out are added first, in reverse order of prev_delegate_order (because the last half of prev_delegate_order are the ones that have constraints on where they can go), and then the new delegates are added.

Code: [Select]
#include <algorithm>
#include <vector> 
#include <tuple>
#include <utility>   
#include <fc/crypto/sha256.hpp>
#include <fc/exception/exception.hpp>
#include <cstring>

namespace bts { namespace blockchain {

typedef int64_t delegate_id;

// Calculates minimum number of bits needed to fit n
inline uint8_t
calculate_bit_size(uint32_t n)
{
    uint8_t size = 1;
    for (uint32_t bound = 2; n >= bound; bound = (1 << (++size)));
    return size;
}

// Fills data with pseudorandom bytes using random seed and returns updated seed.
fc::sha256 generate_random_data(std::vector<uint8_t>& data, const fc::sha256& random_seed)
{
    uint64_t num_bytes = data.size();
    const uint8_t hash_size = sizeof(random_seed._hash); // 32 bytes for SHA-256
    fc::sha256 seed = random_seed;
    for (uint64_t i = 0; i < num_bytes; i += hash_size) {
        memcpy(data.data() + i, seed.data(), std::min((uint64_t) hash_size, num_bytes - i));
        seed = fc::sha256::hash(seed);
    }
    return seed;
}

std::vector< std::pair<delegate_id, bool> >
prepend_new_delegates(const std::vector<delegate_id>& prev_delegate_order,
                      std::vector<delegate_id> top_101_delegates)
{
    std::vector< std::tuple<delegate_id, uint32_t, bool> > prev_delegates;
    prev_delegates.reserve(prev_delegate_order.size());
    for (uint32_t i = 0; i < prev_delegate_order.size(); ++i)
        prev_delegates.emplace_back(prev_delegate_order[i], i, false);
    std::sort(prev_delegates.begin(), prev_delegates.end(),
                 [] (const std::tuple<delegate_id, uint32_t, bool>& a,
                     const std::tuple<delegate_id, uint32_t, bool>& b) {
                    return (std::get<0>(a) < std::get<0>(b));
                 });
    std::sort(top_101_delegates.begin(), top_101_delegates.end());

    std::vector< std::pair<delegate_id, bool> > delegate_list;
    delegate_list.reserve(top_101_delegates.size() + prev_delegate_order.size());
   
    auto old_it = prev_delegates.begin();
    auto new_it = top_101_delegates.begin();
    while (old_it != prev_delegates.end() || new_it != top_101_delegates.end()) {
        if ((new_it == top_101_delegates.end()) ||
            (old_it != prev_delegates.end() && std::get<0>(*old_it) < *new_it)) {
            // delegate std::get<0>(*old_it) has been removed
            std::get<2>(*old_it) = true;
            ++old_it;
        } else if ((old_it == prev_delegates.end()) || (std::get<0>(*old_it) > *new_it)) {
            // delegate *new_it has been added
            delegate_list.emplace_back(*new_it, false);
            ++new_it;
        } else {
            ++old_it;
            ++new_it;
        }
    }
    }   
   
    uint32_t new_delegates = delegate_list.size();
    delegate_list.resize(new_delegates + prev_delegates.size());
    for (auto it = prev_delegates.begin(); it != prev_delegates.end(); ++it) {
        uint32_t i = new_delegates + std::get<1>(*it);
        delegate_list[i].first = std::get<0>(*it);
        delegate_list[i].second = std::get<2>(*it);
    }

    return delegate_list;
}

struct ReorderData {
    enum Status : uint8_t { PENDING = 0, COMPLETE };
   
    delegate_id d;
    uint32_t offset;
    uint32_t limit;
    uint64_t mask;
    uint64_t byte_index;
    uint8_t  bit_shift;
    uint8_t  num_bytes;
    Status  status;
};

std::vector<delegate_id>
next_delegate_order(const std::vector<delegate_id>& prev_delegate_order,
                    const std::vector<delegate_id>& top_101_delegates_sorted_by_vote,
                    const fc::sha256& random_seed)
{
    uint32_t num_delegates = prev_delegate_order.size();
    auto delegate_list = prepend_new_delegates(prev_delegate_order, top_101_delegates_sorted_by_vote);

    std::vector<ReorderData> reorder(num_delegates);

    uint64_t byte_index = 0, bit_index = 0;
    uint32_t middle = (num_delegates + 1)/2;
    uint32_t limit = middle;
    uint32_t i = num_delegates;
    uint32_t j = num_delegates;
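    // Walk the combined list from the back: old delegates in reverse previous
    // order first, then the newly voted-in ones. `limit` bounds the offset each
    // delegate may be assigned, which keeps delegates from the last half of the
    // previous round near the tail of the new order (the minimum re-scheduling
    // distance); voted-out entries are skipped and free up a slot (++limit).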
    for (auto it = delegate_list.rbegin(); it != delegate_list.rend(); ++it) {
        if (j < middle) {
            --limit;
        }
        if (j > 0)
            --j;
        if (it->second) { // if missing
            ++limit;
            continue;
        }
        FC_ASSERT( i > 0 );
        --i;
        reorder[i].d = it->first;
        reorder[i].byte_index = byte_index;
        reorder[i].limit = limit;
        reorder[i].num_bytes = 1;
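        // Reserve calculate_bit_size(limit) bits for this delegate's offset,
        // starting at (byte_index, bit_index) and spilling into additional
        // bytes as needed; mask and bit_shift record how to extract it later.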
        uint8_t remaining_bits = calculate_bit_size(limit);
        uint64_t mask = 0;
        while ((remaining_bits + bit_index) >= 8) {
            remaining_bits = remaining_bits + bit_index - 8;
            mask = mask | ( ((1 << (8 - bit_index)) - 1) << remaining_bits );
            bit_index = 0;
            ++byte_index;
            ++(reorder[i].num_bytes);
        }
        reorder[i].bit_shift = 8 - (remaining_bits + bit_index);
        reorder[i].mask = (mask | ((1 << remaining_bits) - 1)) << reorder[i].bit_shift;
        bit_index += remaining_bits;
    }

    const uint32_t MAX_ITERATION = 100; // Probability of reaching max iteration is less than 2^(-MAX_ITERATION)
    fc::sha256 seed = random_seed;
    std::vector<uint8_t> random_data(byte_index + 1);
    for (uint32_t k = 0; k < MAX_ITERATION; ++k) {
        bool repeat = false;
        seed = generate_random_data(random_data, seed);
        for (int64_t i = num_delegates - 1; i >= 0; --i) {
            if (reorder[i].status != ReorderData::Status::PENDING)
                continue;
            uint64_t value = 0;
            uint16_t shift = (reorder[i].num_bytes - 1) * 8;
            for (uint64_t j = reorder[i].byte_index, end = j + reorder[i].num_bytes; j < end; ++j) {
                value = value | (random_data[j] << shift);
                shift -= 8;
            }
            value = (value & reorder[i].mask) >> reorder[i].bit_shift;
            if (value < reorder[i].limit) {
                reorder[i].offset = value;
                reorder[i].status = ReorderData::Status::COMPLETE;
            } else {
                repeat = true;
            }
        }
        if (!repeat)
            break;
    }

    const delegate_id free_slot = (delegate_id) (-1);
    std::vector<delegate_id> new_order(num_delegates, free_slot);
    for (int64_t i = num_delegates - 1; i >= 0; --i) {
        uint32_t offset = (reorder[i].status == ReorderData::Status::PENDING) ?
                           0 : reorder[i].offset; // Biases result in case of max iteration
        uint32_t j = num_delegates - 1;
        for (; offset > 0; --j) {
            if (new_order[j] == free_slot)
                --offset;
        }
        for (; new_order[j] != free_slot; --j);
        new_order[j] = reorder[i].d;
    }

    return new_order;
}

}}
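For what it's worth, a toy driver (assuming the block above is in scope; the seed here is obviously made up, whereas a real client would derive it from the previous round's combined delegate secrets):

Code: [Select]
#include <iostream>

int main()
{
    using namespace bts::blockchain;
    std::vector<delegate_id> prev_order, top_101;
    for (delegate_id d = 1; d <= 101; ++d) {
        prev_order.push_back(d);
        top_101.push_back(d);
    }
    fc::sha256 seed = fc::sha256::hash("example seed", 12);
    auto next = next_delegate_order(prev_order, top_101, seed);
    for (auto d : next) std::cout << d << ' ';
    std::cout << '\n';
}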

246
General Discussion / Re: Shuffle Bounty $200 BitUSD
« on: April 07, 2015, 05:39:38 pm »
Define:
ITERATION_MAX1 = 84
ITERATION_MAX2 = 41
R = random_seed as uint256 integer
H(...) = SHA256 hash function
C1 = 51^44 (which is less than 2^250)
C2 = 51^6 * 51! (which is less than 2^254)

r1 = permutation selector 1
r2 = permutation selector 2

Code: [Select]
uint256 r1 = R;
uint k1 = 0;
while (r1 >= C1 && k1 < ITERATION_MAX1) { // Probability of continuing the loop is 24.9% for each iteration assuming an ideal hash function
    r1 = H(r1) >> 6; // rehash and truncate to 250 bits, since C1 < 2^250
    ++k1;
}
if (r1 >= C1) {
    r1 = C1 - 1; // In the extremely unlikely event (less than 10^-50 with ITERATION_MAX1 == 84) of reaching maximum iteration, bias the results.
}

uint256 r2 = H(R);
uint k2 = 0;
while (r2 >= C2 && k2 < ITERATION_MAX2) { // Probability of continuing the loop is 5.7% for each iteration assuming an ideal hash function
    r2 = H(r2) >> 2; // rehash and truncate to 254 bits, since C2 < 2^254
    ++k2;
}
if (r2 >= C2) {
    r2 = C2 - 1; // In the extremely unlikely event (less than 10^-50 with ITERATION_MAX2 == 41) of reaching maximum iteration, bias the results.
}

Then r1 defines the new locations of the last 44 delegates in the previous round, and r2 defines the new locations of the first 57 delegates in the previous round.

To actually convert these two numbers into a sequence of 101 delegate IDs (vector<delegate_id> new_order), it requires a helper function with the following signature:
Code: [Select]
void add(vector<delegate_id>& new_order, delegate_id d, uint offset);
This function assumes that delegate_id can hold a value that will not be mistaken for a valid delegate ID (for example -1) to represent a free slot. In fact, initially new_order is set to a vector of 101 -1's. The function add mutates new_order by placing the delegate_id d into the free slot reached by counting up to offset free slots starting from the end of the vector (meaning counting in reverse order).
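Since the helper hasn't been written yet, here is one straightforward way it might look, in the same pseudocode style (untested, same caveats as below):

Code: [Select]
void add(vector<delegate_id>& new_order, delegate_id d, uint offset)
{
    // Scan from the end of the vector, counting down `offset` free slots
    // (marked with -1), and place d in the free slot where the count runs out.
    // Assumes new_order still contains at least offset+1 free slots.
    for (uint j = new_order.size() - 1; ; --j) {
        if (new_order[j] == delegate_id(-1)) {
            if (offset == 0) { new_order[j] = d; return; }
            --offset;
        }
    }
}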

Using the above helper function the rest of the code is:
Code: [Select]
uint k = 101 - 1;
uint256 remainder = r1;
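// Mixed-radix decomposition: r1 < C1 = 51^44 is read as 44 base-51 digits
// (most significant first), one free-slot offset for each of the last 44
// delegates. r2 < C2 = 51^6 * 51! then yields 6 more base-51 digits followed
// by factorial-base digits (radix 51, 50, 49, ..., 2, 1) that match the
// shrinking number of free slots as each remaining delegate is placed.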

uint256 divisor1 = C1 / 51;
for (uint quotient; k >= 57; --k) {
    quotient = (uint) (remainder / divisor1);
    remainder = remainder % divisor1;
    divisor1 = divisor1 / 51;
    add(new_order, prev_delegate_order[k], quotient);
}

uint256 divisor2 = C2 / 51;
remainder = r2;
for (uint quotient; k >= 51; --k) {
    quotient = (uint) (remainder / divisor2);
    remainder = remainder % divisor2;
    divisor2 = divisor2 / 51;
    add(new_order, prev_delegate_order[k], quotient);
}

for (uint quotient; k >= 1; --k) {
    quotient = (uint) (remainder / divisor2);
    remainder = remainder % divisor2;
    divisor2 = divisor2 / k;
    add(new_order, prev_delegate_order[k], quotient);
}

add(new_order, prev_delegate_order[0], (uint) remainder);


Warning: I have not bothered to test any of the above code! Take this as pseudocode for the idea. Be aware of likely off-by-one errors. I also have yet to write the add helper function, but that should be straightforward. Also, this is hard-coded for 101 delegates. I would have to tweak it a little to make it work generally for any N delegates.

247
General Discussion / Re: Shuffle Bounty $200 BitUSD
« on: April 07, 2015, 03:54:56 pm »
However, I think BM meant the word "distance" this way: If a delegate X has index i in the previous order and index j in the next order, then | (n - i) + j | >= n/2. Imagine it as 2 sequences one after another and there should be a gap of n/2 other delegates before X can go again.

Ah of course. That makes sense. Thanks. Still would be great to get clarification from bytemaster on this, but I can see why that would be useful.

248
General Discussion / Re: Shuffle Bounty $200 BitUSD
« on: April 07, 2015, 03:35:07 pm »
I need to calculate the "next_delegate_order" such that the minimum distance between the same delegate is N/2, where N is the number of delegates, while maximizing the entropy and mixing.

I'm not sure I understand your requirements. If the index (0 to 100) of delegate A is n in one round and it is shuffled to m in the next round, does that mean you require |n - m| >= N/2 = 101/2 = 50.5? If so, that is impossible. Where does the delegate at index 50 (delegate rank 51) go? The two farthest available indices are 0 and 100, but |50 - 0| = 50 and |50 - 100| = 50, both of which are smaller than 50.5.

Also, even if you relaxed that minimum distance requirement so it was possible, you would be unnecessarily (it seems to me) reducing the set of possible shufflings allowed, which would not allow for maximum entropy and mixing. Why exactly is the minimum distance requirement important, and what is the problem with the current shuffling algorithm? If there is some reason you need that requirement but don't want to say, then at least make the requirement more precise, because as stated it is either inconsistent or so vague that I misinterpreted it as an impossible constraint.

249
4. Moonstone is not decentralized. It accesses a full-client server, which however cannot forge any signatures, as the server holds no private keys.

Do you intend to eventually allow the client to poll multiple independent servers in future versions so that any one server cannot lie to the client without getting caught?

8. If the crowdfunder is not successful, meaning we don't reach our goal of 130,000 USD worth of BTC within the 30 day window, we will proceed to release the frontend under the GPL3 license without releasing the backend. The delegate buyback commences nonetheless and we will buyback the tokens in the same manner. No BTC can be taken back.

That seems like such a shame. What if the amount raised in 30 days was 120,000 USD? I'm sure people donating would rather see a cut in the 15% interest and get the MIT license rather than get nothing at all. If the first few months of delegate pay were sold to cover the deficit with interest, the amount of BTS estimated to be collected over the remaining months in the 30-month window could be used to calculate the new (lower) buyback rate.

One more thing. If we reach our fundraising goal we also pledge that V2 and V3 of the wallet will also be open source MIT licensed, the only potential exception being the ID verification module.

I'm curious regarding how the module system works. I expect to be able to compile (and audit) the client code myself for security reasons. So would these modules be plugins (some of which can be proprietary and pre-compiled) that the open source client can use? If so, I would want to make sure that these plugins run in their own secure process sandbox, because otherwise you would allow unknown proprietary code to have memory access to the client, which it could use to steal private keys.

250
General Discussion / Re: 3 wishes for BitShares
« on: April 07, 2015, 01:03:37 am »
1) EdDSA
2) Changes to consensus algorithm for better lightweight validation and security (see here, here, here, and here)
3) Decentralized agents with corresponding UIA on the blockchain with support for full read access to the blockchain database state, ability to craft and sign any valid transaction on behalf of the agent, inter-agent message passing on the blockchain, auxiliary nested blockchains regularly committed to the parent blockchain, and optional forced impeachment of agent executors through UIA slate vote (aka Automated Transactions, aka Turing complete scripts, aka smart contracts, aka DApps, aka DAC extensions, aka child DACs)

251
General Discussion / Re: Reworking the wallet trading interface
« on: April 06, 2015, 09:34:31 pm »
you are trying to put too much information without the need to scroll!

I agree.

@svk, please don't try to cram everything into a space that can't handle that much information unless the user has a large-resolution screen. Allow for two-dimensional scrolling, but preferably avoid nested scrolling. Since these tables will naturally have a large, variable number of rows but a fixed number of columns, we expect vertical scrolling on the tables. This means vertical scrolling should be avoided on the main page. But horizontal scrolling is still available and should be used if necessary. Don't try to force too much information vertically; spread it out horizontally. Those with large-resolution screens will be able to see everything within a single window (assuming they don't resize the window to be too small). Those with smaller windows (either by choice or because of screen resolution limits) will have to scroll horizontally to get to all of the presented information. Obviously there is a minimum limit to the vertical size you can support while avoiding vertical scrolling, but this should be a pretty small number. People with more vertical space will just get to see more rows of the bottom tables rather than a new table, graph, label, or input widget. Also, if it really makes sense to keep widgets or UI elements as part of the same column, then at least allow accordion-style collapsible panels so that vertical scrolling is not needed even in a very small vertical space. Or the layout should adapt and move some of the panels to a new adjacent column when it detects there is not enough vertical space in the window.

252
We don't see PLAY or MUSIC having their own version of bitUSD.  In fact, you see them using the BitShares version.

I haven't been following Play lately, but who says they, especially Music, aren't/won't be using their own version of BitUSD? The Music blockchain isn't ready yet, so how can you make that claim? Unless there is some huge news I somehow missed, I am sure Music will have its own version of BitUSD.

I see no reason why this would not have been the case with Vote and DNS. In fact, I think it would have left it open for the market to fairly value each separate chain AND the service it planned to provide.  You also would have had people like Toast working on DNS and bringing in a team of people very passionate about one thing: Decentralized DNS.  That DAC would have funded the building of Decentralized DNS DACs that easily plug into future BitShares browsers, wallets, apps...etc.  Each would have drawn in different demographics and then as they all grew, they could start implementing feature sets employed in other DACs.  And each chain would have its own marketing delegates with their own focused message. 

Yeah, that's great, except they all need a stable foundation blockchain to build on, and they all either need or greatly benefit from the BitAsset system as well. Bidding on domain names isn't that great when you have to account for the price volatility of your DNS token over the 30-day bid period, for example. Using BitUSD would be better. Music lovers would rather pay 99 cents for a song than 1500 NOTE today but 1800 NOTE tomorrow, for example. Great, so they can each have their own BitAssets, except now they split liquidity across many chains, making each of the BitAssets worse than they would be if they were all on the same chain.

Okay, so let's have them all use the same BitAsset as the one on the BitShares X chain. Well, this is technically tricky but doable with some trade-offs. I have discussed a way this could be done in the past and have since become a stronger and stronger advocate of this approach. But even then, it takes lots of coding and testing before such functionality would be available for use. So the core devs need to work on that. But that work is reusable on all other blockchains, so it would make the most sense for the BitShares X devs to work on it, and when it is done all the other chains can adopt that technology. And their chains aren't really 1.0-ready until that feature is ready. They can work on the other business logic of their chains in parallel, though, if they can get the funding to do so.

So now you have BitShares X devs working on improving the foundations of the blockchain and the BitAsset system. They are improving the performance, eliminating bugs, improving the market engine, and developing the new features needed to allow other chains to use BTSX's BitAssets. How long would this take and what resources would it require? Well, we at least know how long it would take all of our core devs to do everything but the last item, because that is exactly what they are working on right now, and they are still not done. It all took WAY longer than originally promised. That's what happens in software development. Also, it basically took all of the money I3 had raised (in fact, with all the price drops, how much longer do the core devs have before their year-end bonuses aren't enough to even pay them an unsustainably low wage for their services?) and some small additional money from delegate dilution pay.

Would there be enough money available to allow them to finish that task, much less any extra to devote to OTHER devs working on VOTE and DNS functionality in parallel? Where would that money come from? Crowdfunding similar DPOS blockchain-related projects would likely just take away buy pressure from BTSX, which would reduce the delegate dilution pay for the core devs working on the foundation of the blockchain. We are a small group and not growing fast enough. But to grow fast enough we need a compelling product to sell to others who haven't bought into the vision yet. That means going after the really high-value services and providing those services in a very high-quality way. A decentralized exchange on a very robust platform with a fast lightweight client that looks beautiful and is easy to use is what is required. This is part of the foundation that other DACs depend on. It makes no sense to waste limited resources (money and dev talent) on things that are not going to grow the token's value fast enough. We need the token value to grow because that is the source of revenue to pay for more devs, who will then be able to work on many interesting blockchain services and features in parallel.

So the way I see it, the economics of the situation would have forced DNS and VOTE to languish anyway until enough of the BitShares X foundation was built. What is worse is that they would have avoided paying for the cost of the foundation they use, since the dilution would have been of BTSX only and not of the other DACs' tokens. I think the BitShares ecosystem would not have looked very different in terms of functionality available or user adoption at this point in time, and I would even say we would most likely have been worse off than we currently are.

At this point, I think we just need to finish getting the core blockchain technology and the client software polished to increase user adoption and hopefully have the price of BTS increase. Then we can use those extra resources, paid for through delegate dilution at the higher BTS price, to build the functionality necessary to allow third parties to concurrently build DApps and/or child DACs on the BitShares platform with minimal effort, while using BTS's BitAssets and even leveraging the consensus/networking systems of the BTS blockchain (basically "Turing complete scripts", but done by allowing the validators or "child DAC delegates" to run arbitrary sandboxed executable code that implements the business logic of their DAC/DApp and uses the BTS blockchain, and optionally their own nested blockchains committed to the BTS parent blockchain, as the persistent data store for their DAC/DApp). Once this foundation is set, my hope is that we can have developer resources explode and see many third parties concurrently working on new features like prediction markets, bond markets, voting, DNS, etc.

253
General Discussion / Re: Question
« on: April 06, 2015, 06:52:01 pm »
Demand for BitAssets will BID UP BTS and thus create demand for shorting the BitAsset.

Or it could lead to a higher BitAsset premium rather than bidding up the BTS price, and therefore not encourage BTS holders to short, because they may fear that the BTS price will continue to fall (leading to a further increase in the BitAsset premium). The hope is that the shorts will at some point say enough is enough and short the premium away, thus reducing BTS sell pressure, but in theory the premium could persist indefinitely and even grow, since there is no mechanism to correct it back down to the price feed like there is for BitAsset discounts (the expired-short covering mechanism). Also, even if they take this leap, there is no guarantee that they will be able to buy back the necessary amount of BitAssets to cover at a reasonable price. So there could be a lot of BTS sell pressure added back at the time of covering, negating the BTS sell pressure reduced when the short was matched.

However, if shorts knew that, even if the price of BTS stayed the same over some fixed period of time, they would be able to profit over that period by shorting when the BitAsset was trading at a premium, they would be far more encouraged to short during high BitAsset demand, thus supporting the process through which BitAsset demand translates into bidding up the BTS price, and BitAsset premiums would be short-lived (meaning a better peg). But the cost of this is that one could not hold a BitAsset indefinitely; it would have an expiration time like shorts do (although there is no need for shorts and longs to have the same expiration period). This would make BitAssets less desirable and less fungible. All fungibility would not have to be given up, though. I'm imagining something like seasonal BitAssets where each BitAsset has four variants: Spring, Summer, Fall, Winter. For example, all BitUSD shorted into existence during the Winter season would be BitUSD-Winter and would expire two months into the Spring season. After the end of the Winter season no new BitUSD-Winter could be created until the next year. Even if the expiration period of shorts were raised from its current 1 month to 2 months (which might be a good idea for the sake of encouraging shorting), all BitUSD-Winter shorts would still expire prior to the expiration of the BitUSD-Winter longs. Two months into the Spring season, any existing expired or margin-called BitUSD-Winter cover orders would be matched with any outstanding BitUSD-Winter at the price feed at that time (actually, the outstanding BitUSD-Winter would only be matched exactly at the price feed if there were no margin-called covers; otherwise the match price would be slightly more than the BTS/BitUSD price feed, to the benefit of the longs, to account for the margin-call covers offered at the 10% premium). The blockchain would automatically give BitUSD-Winter holders the appropriate amount of BTS (just like in a black-swan liquidation) and the corresponding covered short owners would get back the remaining BTS collateral. So at any given time there would only be at most two BitAsset variants in circulation: the one for that season and maybe the one for the prior season.

Perhaps for convenience the blockchain could automatically (assuming the appropriate flag was set in the BitUSD-Winter balance record) place the BTS received for the liquidated BitUSD-Winter into a BitUSD-Spring buy order at a price relative to the price feed on behalf of the BitUSD-Winter owner. It could also move this order along to the BitUSD-Summer/BTS market if the order was not fully matched prior to the end of the Spring season, and then to the BitUSD-Fall/BTS market if the order was not fully matched prior to the end of the Summer season, and so on. And it would be really fantastic if the relative offset from the price feed for those expired seasonal BitAsset buy orders started out with some minimum (some negative percent offset) and grew monotonically with time to some maximum (some positive percent offset). For example, two months into the Spring season, any remaining BitUSD-Winter would be liquidated at the price feed and used in a BitUSD-Spring buy order at a price offset by -3% from the BTS/USD price feed, then this offset would grow to 1% from the price feed just prior to the end of the Spring season, then immediately after the end of the Spring season the order would still be at 1% from the price feed but in the BitUSD-Summer/BTS market, then 1 month into the Summer season the offset would have grown to +2%, and finally the offset would grow to its maximum of +3% from the price feed by the end of the Summer season. If all goes well, a BitUSD holder could leave their balance alone and hopefully have close to the same value of BitUSD whenever they check it in the future (and likely more if you account for BitAsset yield).
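To make that example schedule concrete, here is one way the offset ramp could be computed (the piecewise-linear shape and the month breakpoints are simply the example values from the paragraph above):

Code: [Select]
#include <cstdio>

// Piecewise-linear offset (in percent) from the price feed for an expired
// seasonal BitAsset buy order, as a function of months since liquidation.
// Breakpoints from the example above: -3% at liquidation, +1% one month later
// (end of Spring), +2% after two months (1 month into Summer), and +3% after
// four months (end of Summer), capped there.
double feed_offset_percent(double months)
{
    const double t[] = { 0, 1, 2, 4 };
    const double v[] = { -3, 1, 2, 3 };
    if (months <= t[0]) return v[0];
    for (int i = 1; i < 4; ++i)
        if (months <= t[i])
            return v[i-1] + (v[i] - v[i-1]) * (months - t[i-1]) / (t[i] - t[i-1]);
    return v[3]; // maximum offset once the following season has ended
}

int main()
{
    for (double m = 0; m <= 5; m += 0.5)
        printf("%.1f months -> %+.2f%%\n", m, feed_offset_percent(m));
}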

254
Based on the outcome of the poll, we have decided to denominate the buyback in terms of BTS. This means that for every BTC you send in, the amount will be converted into its equivalent in terms of BTS at the time of donation, and you will receive 1.15 Moonstone tokens for every BTS equivalent you donated.

How about giving the user two tokens for every BTS worth of BTC donated rather than just 1? For every 1 BTS worth of BTC donated, the donor would receive 1 MOON.A and 1 MOON.B. You would first use the delegate funds to buy back MOON.A with BTS at a rate of 1 MOON.A = 1.05 BTS. You would not buy back MOON.B while you still have less BTS in reserve than 1.05 times the amount of outstanding MOON.A yet to be bought back and destroyed. Take the average price of BTS (in USD) over the past week at the time enough BTS has been raised from delegates to be larger than 1.05 times the outstanding MOON.A supply, and call that price P USD/BTS. If the total amount of MOON.A originally created as part of the crowdfund (call that S) multiplied by the price P is sufficiently greater than the USD value of the crowdfund (for example 1.05 * S * P > 150,000), then you would not need to buy back MOON.B and could instead simply retire the delegates. This means that if someone simply holds their MOON.A until all of it can be bought back for BTS, the maximum dollar-value percentage gain of their BTS holdings from the time of donation to the time of buyback would be 15.4% (similar to what it was before, when you wanted to do the 1.15 UIA per USD donated scheme).
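(To spell out where that 15.4% presumably comes from: each MOON.A returns 1.05 BTS, and the MOON.B leg only becomes unnecessary once 1.05 * S * P exceeds 150,000 USD, so a donor pool that put in 130,000 USD gets back at most about 150,000 / 130,000 ≈ 1.154 times its dollar value, i.e. a 15.4% USD gain on top of the fixed 5% gain in BTS terms.)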

However, if you buy back all of the MOON.A but the price P is not high enough to satisfy the inequality 1.05 * S * P > 150,000, then you can begin the buyback of MOON.B so that donors can continue to receive some interest on their BTS. Actually, instead of starting the MOON.B buyback only after enough BTS is collected to buy back all remaining MOON.A, you would start a stage 2 in which another reserve is allocated (containing R BTS, where R grows over time as more delegate funds are added to the reserve) for the eventual MOON.B buyback. Stage 2 ends as soon as 3 years have elapsed since the start of the crowdfund or R * (the 1-week moving average price of BTS in USD/BTS) > (150,000 - 1.05*S*P), whichever happens first. After stage 2 ends, the delegates are retired and the buyback of MOON.B begins at a rate of 1 MOON.B = R/S BTS.

The end result of all of this is that if the amount of value raised by the delegates within a 3-year period can reach $150,000, then donors will get back the USD value (via BTS) that they donated with at least 15% interest (and at least a 5% return on the BTS donated). If 3 years pass without generating enough value, then the donors will obviously get back less money. However, there is no defined time limit on at least paying back the BTS donated (with 5% interest), assuming the delegates can stay elected long enough, that is. This means that if BTS value increases a lot, the donors will get paid back sooner (in less than 3 years), and in this case they will always have more BTS than they started with (at least 5% more) and at least 15% more USD value than they started with. If BTS value continues to stagnate or drops further, then it will take much longer to get paid back and the returns will be smaller. If it takes longer than 3 years, the donor will receive less than 15% additional USD value (perhaps they will even lose USD value), although they will most likely still get back more BTS than they started with (5% more, but over a period that is potentially indefinitely long).


255
General Discussion / Re: BitShares.tv - A closer look at Moonstone
« on: March 31, 2015, 05:38:20 am »
Awesome interview .. plenty of information.

I myself explain the blockchain pretty much the same way as Taulant does: Append-only excel sheet :)

Yes, it seems that slowly but surely the industry is converging on a set of obvious interpretations.

Since blockchains are so obviously different from other databases (they never delete or modify anything), I have been thinking for some time that we could design a new kind of key-value-store which uses this "drawback" as an opportunity. Turns out the Ripple guys have been thinking about this for some time and have come up with their own special blockchain db! I would love to hear your opinion on NuDB. I think it might be an important potential change to Bitshares in the near future.

The blockchain data structure (which is just an immutable log of state transitions to apply, if valid, in a particular order) isn't as interesting to me as the state data structure (which does need to be mutable to be practical and efficient). To clarify the difference: the blockchain data structure may include multiple transactions that update the votes of a balance (it includes the full history of transactions and state transitions that give rise, starting from genesis, to any state existing at any point in the blockchain history), but the state data structure may only include a single balance record, which holds the amount of BTS and the delegate slate the stake is voting for at the point in blockchain history that the state data structure represents. If the state data structure is being mutated live, then (ignoring some minor details needed for chain reorganization, or keeping recent state snapshots to optionally provide to old resyncing clients) the client need only keep the present state data structure; it doesn't really need to look at the old blockchain history data structure for the purposes of continuing the sync, updating the state data structure, and keeping up with consensus (maybe the client would still look at the blockchain data structure to rebuild a user's transaction history, if they lost that cache of wallet-specific data, or to present a view of some blockchain history that the user is interested in).

I would like to see more work on such a state data structure. It should be easy to spin off a snapshot of the state at any time (maybe using copy-on-write memory-mapped pages) and commit those to disk as desired, while the main state continues to be mutated live as new blocks come in. It should be possible to efficiently (meaning ideally live, as the state data structure is mutated block by block) commit the entire state to a single root hash, such that it is possible to use the state data structure to generate efficient (as in log(K)-sized, where K is the number of unique keys in the state) proofs of the existence of any key-value pair in the state data structure (assuming the proof verifier trusts the root hash of the state). Using data structures within the main state data structure which are required to maintain a particular order on some attribute (the key) of the value, it is possible to use the previously described proofs to also efficiently prove the non-existence of a particular value within the database (this would be useful, for example, to prove to a light client that a particular name has not yet been claimed). Finally, the state data structure should be portable/cross-platform (or easily converted/serialized into a portable/cross-platform format) so that it can be sent to anyone who wants to bootstrap their state not from genesis but rather from some later point that they trust (via trusted checkpoints). Also, it would be a really nice bonus if it were possible to efficiently generate deltas between two state data structures, so that one client could update the latest state of another client with less network bandwidth (since presumably there would be a lot of overlap between the two states, especially if they are not separated by a large amount of blockchain time, and the overlap would not have to be included in the delta data structure).
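To make the log(K)-proof idea concrete, here is a toy sketch of the commitment side: a plain binary Merkle tree over pre-hashed key-value leaves (a real design would want an authenticated trie or similar so the root can be updated incrementally as the state mutates, rather than rebuilt from scratch):

Code: [Select]
#include <fc/crypto/sha256.hpp>
#include <string>
#include <vector>

// Root of a binary Merkle tree over pre-hashed leaves; an odd node at the end
// of a layer is paired with itself. A membership proof for one leaf is just
// the ~log2(K) sibling hashes along its path to the root.
fc::sha256 merkle_root(std::vector<fc::sha256> layer)
{
    if (layer.empty()) return fc::sha256();
    while (layer.size() > 1) {
        std::vector<fc::sha256> next;
        for (size_t i = 0; i < layer.size(); i += 2) {
            const fc::sha256& l = layer[i];
            const fc::sha256& r = (i + 1 < layer.size()) ? layer[i + 1] : layer[i];
            std::string buf(l.data(), 32);
            buf.append(r.data(), 32);
            next.push_back(fc::sha256::hash(buf.data(), (uint32_t) buf.size()));
        }
        layer.swap(next);
    }
    return layer.front();
}

If the leaves are hashes of key-value pairs kept sorted by key, then a membership proof for the two adjacent leaves that straddle a missing key is exactly the non-existence proof described above (e.g. proving a name is unclaimed).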
