Technical Support / New API for witness_node
« on: December 26, 2016, 07:50:44 pm »
Continuing the discussion on changing / updating the API of witness_node.
The BitShares blockchain has a more diverse STATE than the Bitcoin blockchain (which is essentially UTXOs / address balances).
We have a lot of entities: accounts, balances, assets, orders, feeds, etc.
We need maximum flexibility when querying the state.
The state doesn't always have the appropriate indexes, or lacks a specific function to query that particular part of the state.
E.g.: we needed to list all the holders of a specific asset.
The index was already there:
...
ordered_unique< tag<by_asset_balance>,
   composite_key<
      account_balance_object,
      member<account_balance_object, asset_id_type, &account_balance_object::asset_type>,
      member<account_balance_object, share_type, &account_balance_object::balance>,
      member<account_balance_object, account_id_type, &account_balance_object::owner>
   >,
...
So we just wrote the function using that index.
Code:
vector<account_asset_balance> database_api_impl::get_asset_holders( asset_id_type asset_id )const
{
   const auto& bal_idx = _db.get_index_type< account_balance_index >().indices().get< by_asset_balance >();
   const auto range = bal_idx.equal_range( boost::make_tuple( asset_id ) );

   vector<account_asset_balance> result;
   for( const account_balance_object& bal : boost::make_iterator_range( range.first, range.second ) )
   {
      assert( bal.asset_type == asset_id );
      auto account = _db.find( bal.owner );

      account_asset_balance aab;
      aab.name       = account->name;
      aab.account_id = account->id;
      aab.amount     = bal.balance.value;
      result.push_back( aab );
   }
   return result;
}
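For reference, a client can call such a function over the witness_node websocket RPC interface, which wraps every request in a "call" method whose params are [api_id, method_name, method_args]. A minimal Python sketch of building that payload (the api_id of 0 and the asset id "1.3.0" are illustrative values, not prescribed by the source):

```python
import json

def make_rpc_request(method, params, request_id=1, api_id=0):
    """Build a JSON-RPC payload as used by the witness_node websocket
    interface: the real method is "call", and its params identify the
    registered API, the function name, and the function arguments."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "call",
        "params": [api_id, method, params],
    })

# Hypothetical call to the get_asset_holders function shown above,
# asking for all holders of asset 1.3.0 (an example object id).
payload = make_rpc_request("get_asset_holders", ["1.3.0"])
print(payload)
```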
So, if we want to query the state in any imaginable way, we have several options, each with its own trade-offs.
1) Add more boost::multi_index_container indexes to the state, with the corresponding query functions, as needed.
This will increase memory usage on all nodes, since indexes occupy memory space.
But they can be enabled at compile time or with a custom plugin.
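To illustrate the trade-off in option 1 with a language-agnostic sketch: a composite-key index like by_asset_balance is essentially an extra sorted structure kept in step with the objects, so every additional index costs memory proportional to the number of entries, but makes its query an equal_range lookup instead of a full scan. A toy Python analogue (names are illustrative; this is not the actual implementation, and it assumes integer asset ids):

```python
import bisect

class AssetBalanceIndex:
    """Toy analogue of the by_asset_balance composite index above:
    entries kept sorted by (asset_id, balance, owner)."""
    def __init__(self):
        self._keys = []  # sorted list of (asset_id, balance, owner) tuples

    def insert(self, asset_id, balance, owner):
        # Keeping the list sorted on every insert is the memory/CPU
        # cost that every extra index imposes on all nodes.
        bisect.insort(self._keys, (asset_id, balance, owner))

    def holders(self, asset_id):
        # equal_range over the first key component, like
        # bal_idx.equal_range( boost::make_tuple( asset_id ) )
        lo = bisect.bisect_left(self._keys, (asset_id,))
        hi = bisect.bisect_left(self._keys, (asset_id + 1,))
        return [(owner, bal) for (_, bal, owner) in self._keys[lo:hi]]

idx = AssetBalanceIndex()
idx.insert(3, 100, "alice")
idx.insert(3, 50, "bob")
idx.insert(4, 10, "carol")
print(idx.holders(3))  # [('bob', 50), ('alice', 100)] -- sorted by balance
```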
2) Replicate the state in a MySQL/neo4j database.
Add a function to dump the FULL STATE of the object db, and then apply state changes (add/edit/delete objects) as they happen.
This process needs to block further block processing until the state change has been applied to the underlying database.
Otherwise, if we miss a delta-diff, our outside state will diverge from the real state.
E.g.: if we just connect to the WS interface, wait for the "object_changed" callback and update the outside state accordingly, then our process goes down while the blockchain keeps updating objects; when we reconnect later... we are in trouble.
We have been experimenting with this approach, developing a plugin that calls a Python function with the changed_objects content.
Code:
void db_update_plugin::plugin_initialize( const boost::program_options::variables_map& options )
{
   auto script_str = options.at( "python-script" ).as<std::string>();
   if( script_str.empty() )
      return;

   Py_Initialize();
   my->_main_module = bp::import( "__main__" );
   my->_global = my->_main_module.attr( "__dict__" );

   auto script_path = bp::str( script_str );
   bp::exec_file( script_path, my->_global, my->_global );
   my->_handler = my->_global[ "Handler" ]();

   database().changed_objects.connect( [&]( const std::vector<graphene::db::object_id_type>& ids ) {
      bp::list list_ids;
      bp::list list_values;
      for( auto id : ids )
      {
         fc::variant vo;
         to_variant( id, vo );
         list_ids.append( vo.get_string() );

         const graphene::db::object* obj = database().find_object( id );
         if( obj != nullptr )
            list_values.append( fc::json::to_string( obj->to_variant() ) );
         else
            list_values.append( "" ); // object no longer exists
      }
      bp::call_method<void>( my->_handler.ptr(), "changed_objects", list_ids, list_values );
   } );
}
(Do we have to take care of possible forks in the chain? Or does the changed_objects callback also tell us, in case of an undo, which objects need to be deleted?)
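On the Python side, the plugin above instantiates a Handler class from the script and calls its changed_objects method with two parallel lists: the object ids and their JSON values, where an empty string means the object no longer exists. A minimal sketch of such a handler, mirroring the state into an in-memory dict (a real deployment would write to MySQL/neo4j instead; the dict is just a stand-in):

```python
import json

class Handler:
    """Receiver for the db_update_plugin callback sketched above.
    ids[i] names an object; values[i] is its JSON, or "" when the
    object is gone (deleted, or possibly removed by an undo)."""
    def __init__(self):
        self.mirror = {}  # object_id -> parsed object; stand-in for a real DB

    def changed_objects(self, ids, values):
        for oid, val in zip(ids, values):
            if val == "":
                # Object no longer exists on-chain: drop it from the
                # outside state to stay consistent with the real state.
                self.mirror.pop(oid, None)
            else:
                self.mirror[oid] = json.loads(val)

h = Handler()
h.changed_objects(["2.5.1"], ['{"id": "2.5.1", "balance": 100}'])
h.changed_objects(["2.5.1"], [""])  # undo/delete notification
print(h.mirror)  # {}
```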
3) Create a pluggable db backend.
Take graphene::db::object_database as the base, define an interface, and implement object_database_mysql, object_database_neo4j, etc.
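A rough sketch of what such a pluggable interface could look like, expressed in Python for brevity (the method names store/remove/find are illustrative, not the actual graphene::db::object_database signatures):

```python
from abc import ABC, abstractmethod

class ObjectDatabaseBackend(ABC):
    """Hypothetical storage interface factored out of the object
    database: each backend persists objects keyed by their id."""
    @abstractmethod
    def store(self, object_id, data): ...
    @abstractmethod
    def remove(self, object_id): ...
    @abstractmethod
    def find(self, object_id): ...

class InMemoryBackend(ObjectDatabaseBackend):
    """Default in-memory implementation; an object_database_mysql or
    object_database_neo4j would implement the same interface."""
    def __init__(self):
        self._objects = {}
    def store(self, object_id, data):
        self._objects[object_id] = data
    def remove(self, object_id):
        self._objects.pop(object_id, None)
    def find(self, object_id):
        return self._objects.get(object_id)

db = InMemoryBackend()
db.store("1.2.0", {"name": "committee-account"})
print(db.find("1.2.0"))  # {'name': 'committee-account'}
```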