Is this because it would take too much time to compute the factor to multiply the BitUSD value by for each transaction? That is, suppose each block i determines a factor f_i (representing e^y, where y is the per-block variable yield rate) for each BitAsset. Then for each transaction with BitAsset inputs you would need to fetch every f_i from block i = m, where the transaction was created, through block i = n, the current block where the transaction is spent, and multiply the BitAsset value by f_m * f_(m+1) * ... * f_n to get its new value with yield.
Yes. It would greatly increase database size and transaction processing time. I know how to "do it proper" from the original dividends design for BTSX. It requires an accumulation table that must be updated every block, covering a year's worth of blocks.
I'm guessing bytemaster wants to save the product between b[m] and b[n] for all pairs of blocks m, n. I posted how to avoid this earlier in this thread (four-word answer: partial sum skip list), but I'll go into more detail since both of you apparently missed (or misunderstood) that post.
I propose using sums of logarithms instead of products (so we can handle many-block sequences without worrying too much about approximation issues). Then save only partial sums whose length is a power of two and whose starting block is aligned to that length. So you save (in your notation):
run(m, 1) = f_m # for all blocks
run(m, 2) = f_m + f_{m+1} # for every 2nd block
run(m, 4) = f_m + f_{m+1} + f_{m+2} + f_{m+3} # for every 4th block
run(m, 8) = f_m + f_{m+1} + f_{m+2} + f_{m+3} + f_{m+4} + f_{m+5} + f_{m+6} + f_{m+7}
# for every 8th block, etc... for every power of 2
If the total number of blocks is N, this data structure needs at most 2N-1 entries (hint: the total size N + N/2 + N/4 + ... is a geometric series).
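To make the structure concrete, here's a minimal Python sketch (the name build_run_table and the (start, length) keying are my own, not from the post) that builds every aligned partial sum of log-factors bottom-up, each level summing two entries from the level below:

```python
from math import log

def build_run_table(f):
    """Build the aligned power-of-two partial sums of log-factors.

    f: list of per-block factors f_0 .. f_{N-1}.
    Returns a dict mapping (m, length) -> sum of log(f_i) for i in
    [m, m + length), stored only where length is a power of two and
    m is a multiple of length.
    """
    N = len(f)
    # Level 1: one entry per block.
    run = {(m, 1): log(f[m]) for m in range(N)}
    # Each higher level sums two adjacent entries from the level below.
    length = 2
    while length <= N:
        for m in range(0, N - length + 1, length):
            run[(m, length)] = run[(m, length // 2)] \
                             + run[(m + length // 2, length // 2)]
        length *= 2
    return run
```

For N a power of two this stores exactly N + N/2 + ... + 1 = 2N - 1 entries, matching the bound above.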
So if you want to find e.g. the sum of logs from 0x7159 (inclusive) to 0x1258a (exclusive), you can compute that as:
run(0x7159, 1) + run(0x715a, 2) + run(0x715c, 4) + run(0x7160, 0x20) + run(0x7180, 0x80) + run(0x7200, 0x200) + run(0x7400, 0x400) + run(0x7800, 0x800) + run(0x8000, 0x8000) + run(0x10000, 0x2000) + run(0x12000, 0x400) + run(0x12400, 0x100) + run(0x12500, 0x80) + run(0x12580, 8) + run(0x12588, 2)
The proper bit twiddling operations to implement this are left as an exercise to the reader.
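Since the exercise is fun, here's one possible Python sketch of that decomposition (decompose is my name for it): greedily take the largest power-of-two run that is aligned to the current start and still fits in the remaining range, using the lowest-set-bit trick lo & -lo for the alignment.

```python
def decompose(lo, hi):
    """Split [lo, hi) into runs (start, length) where length is a power
    of two and start is a multiple of length -- exactly the entries the
    partial sum skip list stores.
    """
    out = []
    while lo < hi:
        length = lo & -lo              # largest power of two dividing lo (0 if lo == 0)
        span = hi - lo
        if length == 0 or length > span:
            length = 1 << (span.bit_length() - 1)  # largest power of two <= span
        out.append((lo, length))
        lo += length
    return out
```

Running decompose(0x7159, 0x1258a) reproduces exactly the fifteen run(...) terms listed above.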
Appending a block will usually be fast, touching only the last few entries. Block heights divisible by large powers of 2 do occur and reach further back into the blockchain, but only exponentially rarely; in all cases at most log(N) new values need to be saved, each taking at most log(N) time to compute. (I think the actual bound is just log(N) rather than log(N)^2, because each level reuses the tail sum from the level below and only needs to reach back one more entry, but log^2(N) is still fast enough.)
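The per-block update can be sketched like so (again Python, with a hypothetical append_block operating on the same (start, length)-keyed table): each new entry at a doubled length reuses the just-computed half directly below it, so only one addition per level is needed, which is where the log(N)-instead-of-log(N)^2 intuition comes from.

```python
from math import log

def append_block(run, height, f_new):
    """Record block `height` with factor f_new in the run table.

    Sets the length-1 entry, then closes every power-of-two window
    that ends exactly at height + 1, summing two halves from the
    level below. At most log2(height) entries are written.
    """
    run[(height, 1)] = log(f_new)
    length = 2
    while (height + 1) % length == 0:
        m = height + 1 - length
        run[(m, length)] = run[(m, length // 2)] \
                         + run[(m + length // 2, length // 2)]
        length *= 2
```

Feeding blocks in one at a time builds the same table as a batch construction would.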