Proposals

DragonFly BSD patches for bitcoind

I have a small series of patches that allow bitcoind to build on DragonFly BSD; what would be an acceptable way to submit these patches upstream?

The following content was written by Venkatesh Srinivas on July 12, 2011, 12:44:13 AM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


Hi,

I have a small series of patches that allow bitcoind to build on DragonFly BSD; what would be an acceptable way to submit these patches upstream? It looks like I should make a github fork of the bitcoin/bitcoin repository, push the patches on the master branch of the clone, and submit a pull request.

Some patches are specific to DragonFly BSD; others correct minor problems that are more generic; should I submit them as one patch with one pull request? Multiple patches with one pull request? Multiple patches with multiple pull requests?

Thanks,
— vs

The following content was written by ghotir on July 14, 2011, 09:37:36 AM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


I’d be interested in those patches for my DragonFlyBSD install.

The following content was written by Vladimir on July 14, 2011, 09:42:23 AM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


you could simply make/change freebsd port

The following content was written by ghotir on July 14, 2011, 01:02:37 PM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


you could simply make/change freebsd port
DragonFlyBSD uses the NetBSD pkgsrc tree, so if the OP has already done some ground work, why reinvent the wheel?
On that note- OP should contact the pkgsrc-wip project about getting it submitted: https://lists.sourceforge.net/lists/listinfo/pkgsrc-wip-discuss

The following content was written by marino on July 14, 2011, 02:56:36 PM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


Vladimir,

I think vsrinivas is trying to update the code base so that the change only has to be made once. Your suggestion is to add the patches to the pkgsrc package (rather than the FreeBSD port) and leave the code base alone, risking that the patches eventually stop working.

If it’s not possible to fix the code base, then fine, your suggestion would work as a last resort, but it’s better to attempt a proper fix first.

The following content was written by Venkatesh Srinivas on July 14, 2011, 05:25:33 PM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


Some of the patches are more general than just DragonFly support:

1) The bitcoind make target in makefile.unix defines USE_UPNP to 0; however net.cpp among others uses ‘#ifdef’ as the test for miniupnp, which means the upnp headers are always required.

2) main.cpp has:
char pchMessageStart[4] = { 0xf9, 0xbe, 0xb4, 0xd9 };

g++ 4.4.5 on DragonFly disapproves; the constants are being narrowed from int -> char inside an array initializer. I think the error is correct, though harsh.

3) g++ on DragonFly wants -std=c++0x or -std=gnu++0x; I agree with it — typed enumerations are used in the source. I don’t know why the same g++ version on Linux doesn’t require it, though.

4) in db.cpp, make_tuple is meant to resolve to boost::make_tuple; at least in our configuration, it resolves to std::make_tuple, which is not correct. I don’t know enough to say what correct behavior is when a template is defined in more than one namespace (std and boost) and both are ‘using namespace ‘-ed. But qualifying it helps.
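A minimal illustration of point 4, with no Boost required (the namespace and function names below are stand-ins I made up, not bitcoind code): two namespaces play the roles of std and boost, both define make_tuple, and both are opened with ‘using namespace’, as effectively happens in db.cpp.

```cpp
// Hypothetical stand-ins for std::make_tuple and boost::make_tuple.
namespace lib_a { inline int make_tuple(int x) { return x; } }
namespace lib_b { inline int make_tuple(int x) { return x + 100; } }

using namespace lib_a;
using namespace lib_b;

// An unqualified call make_tuple(1) here fails to compile as ambiguous; with
// the real templates, argument-dependent lookup can instead silently prefer
// one namespace (the thread saw std::make_tuple win because the arguments
// were std types). Qualifying the call, as the patch does with
// boost::make_tuple, removes all doubt about which one is meant.
inline int qualified_call() { return lib_a::make_tuple(1); }
```

The same reasoning is why fully qualifying make_tuple "helps": qualification bypasses both the using-directives and ADL entirely.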

DragonFly specific stuff:

1) Currently in net.cpp, there is an assumption that all BSD systems support SO_NOSIGPIPE; DragonFly doesn’t. The test should probably be for SO_NOSIGPIPE itself rather than for BSD.
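A sketch of that feature test (assuming a POSIX socket API; the helper name is mine, not net.cpp’s): probing for the macro directly works on every platform, whether or not it defines BSD.

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

// Returns true if SO_NOSIGPIPE exists on this platform and was set on fd.
// On systems without it (DragonFly, and Linux) this compiles to a no-op.
inline bool set_nosigpipe(int fd) {
#ifdef SO_NOSIGPIPE
    int set = 1;
    return setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &set, sizeof set) == 0;
#else
    (void)fd;
    return false;
#endif
}
```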

The following content was written by ghotir on July 20, 2011, 11:31:09 AM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


Quote
[Venkatesh Srinivas’s list of general and DragonFly-specific patches, quoted in full above]

I knew about the SO_NOSIGPIPE issue (it’s supported in FreeBSD 5 and later only; DragonFly is based on the 4.x series, so I can understand why it isn’t there. I’ve also read somewhere that OpenBSD and NetBSD don’t have it either).
The db.cpp boost issue has been driving me absolutely nuts (usually I can figure such things out with a little poking around makefiles, etc.).
Anyway, if it’s working for you (compiled and all that), I’d be willing to be a second tester to apply the patches and build on my DragonFly BSD system.

The following content was written by Gavin Andresen on August 07, 2011, 03:08:34 PM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


Two pull requests seem appropriate: one for the generic issues (talk with TheBlueMatt about the upnp #define, I believe it is working as designed), and one for DragonFly-specific stuff.

Frankly, DragonFly BSD-specific stuff is unlikely to get pulled; there just aren’t enough DragonFly BSD systems to justify the work of maintaining support for it (according to bsdstats.org it isn’t a very popular BSD variant).

The following content was written by JoelKatz on August 07, 2011, 03:36:46 PM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


Quote
The bitcoind make target in makefile.unix defines USE_UPNP to 0; however net.cpp among others uses ‘#ifdef’ as the test for miniupnp, which means the upnp headers are always required.
I believe this is intentional. USE_UPNP=0 means that UPnP is compiled in but not used by default, while leaving USE_UPNP undefined omits UPnP entirely. It’s a three-way switch.
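The three-way switch can be sketched as follows (illustrative only; the helper name is mine, though the USE_UPNP define mirrors makefile.unix usage). Undefined means no UPnP code is compiled at all; 0 means compiled in but off by default; any other value means compiled in and on by default.

```cpp
// Distinguish "not compiled in" from "compiled in but off by default"
// from "compiled in and on by default", per the -DUSE_UPNP=N make flag.
inline int upnp_default_state() {
#ifndef USE_UPNP
    return -1;   // UPnP support not compiled in at all
#elif USE_UPNP == 0
    return 0;    // compiled in, disabled unless the user enables it
#else
    return 1;    // compiled in, enabled by default
#endif
}
```

This is also why a plain ‘#ifdef USE_UPNP’ in net.cpp behaves surprisingly: -DUSE_UPNP=0 still defines the macro, so an #ifdef test cannot express the "off by default" case.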

Quote
2) main.cpp has:
char pchMessageStart[4] = { 0xf9, 0xbe, 0xb4, 0xd9 };

g++ 4.4.5 on DragonFly disapproves; the constants are being narrowed from int -> char inside an array initializer. I think the error is correct, though harsh.
Really? That’s kind of a nutty error. Are you sure you aren’t using non-standard warning flags? GCC will issue crazy warnings if you enable all possible warnings; even “unsigned char j[10], k[10]; j[i] ^= k[i];” will trigger an int->char warning.

The following content was written by wumpus on August 07, 2011, 03:47:12 PM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


Quote
char pchMessageStart[4] = { 0xf9, 0xbe, 0xb4, 0xd9 };

g++ 4.4.5 on DragonFly disapproves; the constants are being narrowed from int -> char inside an array initializer.
AFAIK the problem is not the narrowing from int to char (which is perfectly allowed if it fits). The problem is that plain “char” has implementation-defined signedness in C (signed in most compilers these days). So values of 0x80 and above trigger an overflow warning. The correct thing would be to use “unsigned char” or uint8_t.

The following content was written by Venkatesh Srinivas on August 07, 2011, 04:06:25 PM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


>> char pchMessageStart[4] = { 0xf9, 0xbe, 0xb4, 0xd9 };

>> g++ 4.4.5 on DragonFly disapproves; the constants are being narrowed from int -> char inside an array initializer.

>AFAIK the problem is not the narrowing from int to char (which is perfectly allowed if it fits). The problem is that plain “char” has undefined signedness in C (signed in most compilers these days). So values larger than >0x80 throw an overflow warning. The correct thing would be to use “unsigned char” or uint8_t.

You’re right; I misunderstood the error message:

main.cpp:1773: error: narrowing conversion of ‘249’ from ‘int’ to ‘char’ inside { }

unsigned char would be correct.
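The conclusion above can be demonstrated directly: 0xf9 is 249, which fits in unsigned char (0..255) but not in signed char (-128..127), so where plain char is signed the brace initializer performs a narrowing conversion.

```cpp
// Declaring the array unsigned char, as concluded above, compiles cleanly:
unsigned char pchMessageStart[4] = { 0xf9, 0xbe, 0xb4, 0xd9 };

// char bad[4] = { 0xf9, 0xbe, 0xb4, 0xd9 };
//   ^ where char is signed, g++ with -std=c++0x rejects this:
//     "narrowing conversion of '249' from 'int' to 'char' inside { }"
```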

It also turns out that the SO_NOSIGPIPE issue affects NetBSD and OpenBSD, both of which don’t have SO_NOSIGPIPE, but do define BSD. So none of the issues are DragonFly specific.

The following content was written by Venkatesh Srinivas on August 11, 2011, 05:20:04 AM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


Okay, looks like all of the patches have been merged into bitcoind!

Thanks,

The following content was written by Venkatesh Srinivas on August 11, 2011, 04:23:11 PM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


To build bitcoind on DragonFly BSD (or NetBSD), you can now grab a copy of the git tree for bitcoin. From pkgsrc, you’ll need to install Berkeley DB (db4); I used version 4.8, but 4.6 (also in pkgsrc, as db46) should work. You’ll also need boost; I installed: boost-1.46.1, boost-libs-1.46.1, and boost-headers-1.46.

You’ll need to make a few changes to makefile.unix before building:

To LIBS, I added -L/usr/pkg/lib (replace with wherever your pkgsrc libraries are located). In LIBS, I also replaced -ldb_cxx with -ldb4_cxx.
To DEFS, I added -std=gnu++0x, -I/usr/pkg/include, and -I/usr/pkg/include/db4; these changes don’t really belong in DEFS, but it’s an easy place to add them.

After that, use GNU make with makefile.unix; bitcoind works and is able to download the blockchain and make transactions. This has been tested on DragonFly 2.10 / i386; the same steps should apply to NetBSD, but I haven’t had a chance to try it.

Thanks!

The following content was written by phma on August 19, 2011, 12:37:23 AM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


I’m getting an error; there are two files called serialize.h, and apparently it’s getting the wrong one. The error is:
Code:
# gmake -f ./makefile.unix bitcoind
g++ -c -O2 -Wno-invalid-offsetof -Wformat -g -D__WXDEBUG__ -DNOPCH -DFOURWAYSSE2 -DUSE_SSL -std=gnu++0x -I/usr/pkg/include -I/usr/pkg/include/db46 -DUSE_UPNP=0 -o obj/nogui/db.o db.cpp
In file included from headers.h:91,
                 from db.cpp:5:
serialize.h: In function ‘void Serialize(Stream&, const T&, long int, int) [with Stream = CDataStream, T = std::tuple<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, long long unsigned int>]’:
serialize.h:1081:   instantiated from ‘CDataStream& CDataStream::operator<<(const T&) [with T = std::tuple<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, long long unsigned int>]’
db.cpp:643:   instantiated from here
serialize.h:392: error: ‘const class std::tuple<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, long long unsigned int>’ has no member named ‘Serialize’
gmake: *** [obj/nogui/db.o] Error 1
There are /usr/include/sys/serialize.h and bitcoin-0.3.24/src/src/serialize.h. I’m using DragonFly 2.11. I modified makefile.unix as follows:
Code:
CXX=g++

WXINCLUDEPATHS=$(shell wx-config --cxxflags)

WXLIBS=$(shell wx-config --libs)

USE_UPNP:=0

DEFS=-DNOPCH -DFOURWAYSSE2 -DUSE_SSL -std=gnu++0x -I/usr/pkg/include -I/usr/pkg/include/db46

# for boost 1.37, add -mt to the boost libraries
LIBS= \
 -Wl,-Bstatic -L/usr/pkg/lib \
   -l boost_system \
   -l boost_filesystem \
   -l boost_program_options \
   -l boost_thread \
   -l db46_cxx \
   -l ssl \
   -l crypto

ifdef USE_UPNP
LIBS += -l miniupnpc
DEFS += -DUSE_UPNP=$(USE_UPNP)
endif

LIBS+= \
 -Wl,-Bdynamic \
   -l gthread-2.0 \
   -l z \
   -l dl \
   -l pthread
There’s also a db5 package. I tried it and got an error, switched to db46, and still got an error.

The following content was written by Venkatesh Srinivas on August 19, 2011, 06:06:55 AM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


Let me just post my Makefile here:

Code:
# Copyright (c) 2009-2010 Satoshi Nakamoto
# Distributed under the MIT/X11 software license, see the accompanying
# file license.txt or http://www.opensource.org/licenses/mit-license.php.

CXX=g++

WXINCLUDEPATHS=$(shell wx-config --cxxflags)

WXLIBS=$(shell wx-config --libs)

USE_UPNP:=0

DEFS=-std=gnu++0x -DNOPCH -DUSE_SSL

# for boost 1.37, add -mt to the boost libraries
LIBS= \
 -L/usr/pkg/lib \
 -Wl,-Bstatic \
   -l boost_system \
   -l boost_filesystem \
   -l boost_program_options \
   -l boost_thread \
   -l db4_cxx \
   -l ssl \
   -l crypto

DEFS += -UUSE_UPNP -I/usr/pkg/include -I/usr/pkg/include/db4

LIBS+= \
 -Wl,-Bdynamic \
   -l gthread-2.0 \
   -l z \
   -l pthread


DEBUGFLAGS=-g -D__WXDEBUG__
CXXFLAGS=-O2 -Wno-invalid-offsetof -Wformat $(DEBUGFLAGS) $(DEFS)
HEADERS=headers.h strlcpy.h serialize.h uint256.h util.h key.h bignum.h base58.h \
    script.h db.h net.h irc.h keystore.h main.h wallet.h rpc.h uibase.h ui.h noui.h \
    init.h crypter.h

OBJS= \
    obj/util.o \
    obj/script.o \
    obj/db.o \
    obj/net.o \
    obj/irc.o \
    obj/keystore.o \
    obj/main.o \
    obj/wallet.o \
    obj/rpc.o \
    obj/init.o \
    obj/crypter.o \
    cryptopp/obj/sha.o \
    cryptopp/obj/cpu.o


all: bitcoin


obj/%.o: %.cpp $(HEADERS)
   $(CXX) -c $(CXXFLAGS) $(WXINCLUDEPATHS) -DGUI -o $@ $<

cryptopp/obj/%.o: cryptopp/%.cpp
   $(CXX) -c $(CXXFLAGS) -O3 -o $@ $<

bitcoin: $(OBJS) obj/ui.o obj/uibase.o
   $(CXX) $(CXXFLAGS) -o $@ $^ $(WXLIBS) $(LIBS)


obj/nogui/%.o: %.cpp $(HEADERS)
   $(CXX) -c $(CXXFLAGS) -o $@ $<

bitcoind: $(OBJS:obj/%=obj/nogui/%)
   $(CXX) $(CXXFLAGS) -o $@ $^ $(LIBS)

obj/test/%.o: test/%.cpp $(HEADERS)
   $(CXX) -c $(CFLAGS) -o $@ $<

test_bitcoin: obj/test/test_bitcoin.o
   $(CXX) $(CFLAGS) -o $@ $(LIBPATHS) $^ $(LIBS) -lboost_unit_test_framework

clean:
   -rm -f bitcoin bitcoind test_bitcoin
   -rm -f obj/*.o
   -rm -f obj/nogui/*.o
   -rm -f obj/test/*.o
   -rm -f cryptopp/obj/*.o
   -rm -f headers.h.gch

The following content was written by phma on August 26, 2011, 03:58:39 AM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


I had to remove crypter from the Makefile (there’s no such file in the source) and change “db4” to “db46”. I still get an error relating to serialize:
Code:
# gmake -f ./makefile.dfly bitcoind
g++ -c -O2 -Wno-invalid-offsetof -Wformat -g -D__WXDEBUG__ -std=gnu++0x -DNOPCH -DUSE_SSL -UUSE_UPNP -I/usr/pkg/include -I/usr/pkg/include/db46 -o obj/nogui/util.o util.cpp
g++ -c -O2 -Wno-invalid-offsetof -Wformat -g -D__WXDEBUG__ -std=gnu++0x -DNOPCH -DUSE_SSL -UUSE_UPNP -I/usr/pkg/include -I/usr/pkg/include/db46 -o obj/nogui/script.o script.cpp
g++ -c -O2 -Wno-invalid-offsetof -Wformat -g -D__WXDEBUG__ -std=gnu++0x -DNOPCH -DUSE_SSL -UUSE_UPNP -I/usr/pkg/include -I/usr/pkg/include/db46 -o obj/nogui/db.o db.cpp
In file included from headers.h:91,
                 from db.cpp:5:
serialize.h: In function ‘void Serialize(Stream&, const T&, long int, int) [with Stream = CDataStream, T = std::tuple<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, long long unsigned int>]’:
serialize.h:1081:   instantiated from ‘CDataStream& CDataStream::operator<<(const T&) [with T = std::tuple<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, long long unsigned int>]’
db.cpp:643:   instantiated from here
serialize.h:392: error: ‘const class std::tuple<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, long long unsigned int>’ has no member named ‘Serialize’
gmake: *** [obj/nogui/db.o] Error 1

The following content was written by Venkatesh Srinivas on August 26, 2011, 04:55:13 AM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


Which version of bitcoin are you using? crypter is present in git master.

Also, what Berkeley DB package do you have? /usr/pkg/include/db4 was provided by db4-4.8.30 from pkgsrc.

The following content was written by phma on August 26, 2011, 08:16:47 PM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


db46-4.6.21
bitcoin-0.3.24
From the build-unix file:
You need Berkeley DB 4.7.  Don’t use 4.8, the database/log0000* files
are incompatible.

How hard would it be to make the build process use cmake or the like so that this sort of mess wouldn’t happen?

The following content was written by Venkatesh Srinivas on August 26, 2011, 09:05:18 PM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


Incompatible with what? If you download the blockchain from within the client and don’t need to share the db with precompiled clients, 4.8 works fine.

The following content was written by phma on August 29, 2011, 03:23:16 PM in the thread DragonFly BSD patches for bitcoind. All content is owned by the author of the bitcointalk.org post. (original)


I don’t know what it’s incompatible with. I quoted the file.

I installed db 4.8 and changed the “46” in the makefile to “4”. I’m still getting serialize errors. There are two it could be getting:
/usr/include/sys/serialize.h
/usr/local/src/bitcoin-0.3.24/src/src/serialize.h


Proposals

A full shell script implementation of bitcoin?

I’ve actually been thinking about a full shell implementation of bitcoin for some time.


The following content was written by grondilu on December 26, 2010, 12:38:42 PM in the thread A full shell script implementation of bitcoin ?. All content is owned by the author of the bitcointalk.org post. (original)


The thread
http://bitcointalk.org/index.php?topic=2459.0

makes me feel like suggesting a full shell implementation of bitcoin. I’ve actually been thinking about this for some time, but I’d like to see if other people would be interested in such a crazy project.

Here is the kind of architecture I’d like to see:

– nodes communicate transactions via IRC, XMPP, or similar (a shell script communicating with such a protocol should be easy to implement);
– each node publishes its blocks via its own HTTP server (possibly via a Tor hidden service to avoid NAT traversal problems);
– each node also publishes the list of nodes it is currently connected to;
– nodes store their blocks using GnuNet or maybe a NoSQL database such as MongoDB (easily scripted);
– cryptographic functions are performed via the openssl command line;
– blocks and wallets are stored in a human-readable ASCII format.

The following content was written by BioMike on December 27, 2010, 03:10:28 PM in the thread A full shell script implementation of bitcoin ?. All content is owned by the author of the bitcointalk.org post. (original)


That would be one big dependency hell…

The following content was written by MoonShadow on December 27, 2010, 03:16:58 PM in the thread A full shell script implementation of bitcoin ?. All content is owned by the author of the bitcointalk.org post. (original)


At least it would be trivial to have small CL tools to perform bitcoin operations in this system, such as tools to import and export transactions, blocks and keypairs to/from normal files.  *nix CL shells, such as Bash, are centered around the manipulation of data streams as files, and it’s an incredibly powerful model.

The following content was written by grondilu on December 27, 2010, 09:34:55 PM in the thread A full shell script implementation of bitcoin ?. All content is owned by the author of the bitcointalk.org post. (original)


At least it would be trivial to have small CL tools to perform bitcoin operations in this system, such as tools to import and export transactions, blocks and keypairs to/from normal files.  *nix CL shells, such as Bash, are centered around the manipulation of data streams as files, and it’s an incredibly powerful model.

Exactly.

I also wonder whether all communication could be done over HTTP. Blocks would be published via the GET method (keyed by the hash of the preceding block), and transactions could be submitted via the POST method.

One advantage of using HTTP is that it would be very hard for governments to block.

The following content was written by jgarzik on December 27, 2010, 10:35:14 PM in the thread A full shell script implementation of bitcoin ?. All content is owned by the author of the bitcointalk.org post. (original)


Shell script would be fine for a connect-work-disconnect method of working.

But it is ill-suited to maintaining a long-running P2P network node, where long-lived TCP connections are preferred.

Proposals

Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage)

This is a concrete proposal (and reference implementation) for a Merkle-tree of unspent-outputs (UTXOs).


The following content was written by socrates1024 on August 19, 2012, 02:26:27 PM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)


This is a concrete proposal (and reference implementation) for a Merkle-tree of unspent-outputs (UTXOs). Eventually, the idea is to include the root hash of the current UTXO-tree in each block. This will enable full-validation (zero-trust) lite clients that use only a constant O(1) amount of storage – for example, a client that fits on a smart-card – and O(log N) amount of effort per txOut (where N is the total number of UTXOs at any given time).

**EDIT**: I included several changes from a follow-up post that improve transaction-validation complexity.

Overview
Validating a transaction requires checking that every tx-input hasn’t already been spent. One way of doing this is to iterate, for each tx-input, from the head of the chain backwards, looking for a transaction that spends it. This would be inefficient; instead, the standard client (including Pieter Wuille’s UltraPrune variation) builds and maintains a database containing (at least) all the currently available coins. I propose to store the unspent outputs, as well as an additional “query-by-address” index, in a balanced Merkle search tree (i.e., a RedBlack tree augmented with hashes). This will lead to several benefits:
  • Full-validation nodes (especially lite clients) will only need to store (and trust) the O(1) hash of the current head block. Verification data can be queried as necessary from a (completely untrusted) Distributed Hash Table (DHT) (e.g., the gossip-layer network).
  • Lite clients can “bootstrap” themselves immediately, just by selecting the correct head.
  • Clients can securely query the DHT for a complete list of spendable (standard) UTXOs associated with a given address, without having to “rescan” the blockchain.
  • Arbitrary-length forks can be tolerated without any need for a “reorg.”
  • “Proofs” of invalid transactions (e.g. a double spend) can be validated in O(log N) as well.

This proposal calls for a single additional hash to be placed in each block: the root hash of the UTXO Merkle tree. As such, it belongs in the HardFork Wishlist. However, it could also be included in an (optionally merge-mined) alt-chain. This will require additional computation for validating a transaction – the addition will be a constant factor, since each processed txinput/txoutput already requires (even in UltraPrune), an O(log N) update to a BTree table of the UTXOs. For now, the proposal simply describes a stand-alone data structure that can be derived from existing blockchain data.

The normative specification for this protocol consists of A) a serialization (and digest) format for nodes in the balanced tree, where each leaf node contains a single UTXO, and branch nodes contain hashes of their children; and B) insertion/deletion algorithms that preserve the RedBlack invariants (balancing) of the tree, guaranteeing O(log N) worst-case effort per update.

The reference implementation is written in python, and consists of 1) a modular, extensible, and side-effect-free RedBlack tree (including unit tests) that can be augmented with arbitrary ‘digest’ functions, 2) an instance of this structure with the specified digest function, and 3) a script that processes actual blockchain data and applies each update (each txinput and txoutput) to the Merkle tree. This reference implementation is not at all optimized, but instead is intended to be easy to analyze and to provide a bit-correct gold standard for the Merkle tree at each step.

This proposal is also a precursor to the “Eventually Zero-Trust Probabilistic Validation” scheme I described in an earlier thread.

Related Work
ByteCoin first proposed including the hash of the current “balance sheet” in each block (i.e., the set of available unspent tx outputs), making it possible to validate transactions without needing to store the entire blockchain. Greg Maxwell later suggested using a Merkle tree in order to query this set more efficiently. DiThi later made an equivalent proposal. Most recently, etotheipi suggested using a self-balancing Merkle tree so that incremental modification to the structure would be more efficient. I scoured the academic literature, finding a Usenix paper from 1996 by Moni Naor and Kobbi Nissim, suggesting that a RedBlack tree could be augmented with hashes instead of pointers. I wrote a reference implementation of such an “Authenticated Data Structure”. To my knowledge, it is the only available (open-source) implementation of a self-balancing hash tree.

Hash trees (of the non-balancing variety) are widely used, for example in Git and in Tahoe-LAFS.

Normative Protocol Specification
A. Serialization and Digest Format
Each node in the RedBlack binary search tree consists of 1) a key (the largest key in the left subtree), 2) left and right children (possibly null), 3) a single bit (representing Red or Black) used for balancing, and 4) a value (leaf nodes only). The keys must be well-ordered in order to support efficient search queries to this structure. Two kinds of queries are supported (although more could be added): a search by (txid,index), as is needed to validate transactions, and also a search by (address,txid,index), which is useful for clients to query for the balance of an address.

There are two kinds of mappings stored in the tree:
1. UTXO Validation:
       (txid, index) -> (hash of validation data)

    The validation data is a subset of the transaction data corresponding to txid: a single scriptPubKey, as well as some necessary metadata (isCoinbase, and nHeight).

2. Address scanning:
      (address, txid, index) -> ()

    No separate value is used for this type, since all the information is contained in the unique key.

The two key types are indicated by a four-byte prefix, “UTXO” for (txid,index), and “ADDR” for (address,txid,index). The ‘root-hash’ is the digest of the root node of the tree. Each digest is the SHA256 hash of the serialized node. In the serialized format, the left and right subtrees are represented by their digests. The digest of an empty tree is represented by 32 zero bytes (i.e., the genesis hash). Leaf nodes and branch nodes are serialized differently, with a sentinel value (“.” for leaves, “^” for branches) to distinguish between them.

Branch nodes are serialized as follows:
Code:
         (color, dL, ((“UTXO”, txhash, index),()), dR):
            [1 byte][1 byte][4 bytes][32 bytes][32 bytes][4 bytes][32 bytes]
               “^”    color   “UTXO”   H(Left)    txhash    index  H(Right)
            total: 1+1+4+32+32+4+32 = 106 bytes

          (color, dL, ((“ADDR”, address, txhash, index), ()), dR):
            [1 byte][1 byte][4 bytes][32 bytes][20 bytes][32 bytes][4 bytes][32 bytes]
            [  “^” ][ color][ “ADDR”][ H(Left)][    addr][  txhash][  index][H(Right)]
            total: 1+1+4+32+20+32+4+32 = 126 bytes

Leaf nodes are serialized with a different format, since the left/right digests are unnecessary, but the value must be included.
Code:
         (color, (), ((“UTXO”, txhash, index), utxohash), ()):
            [1 byte][1 byte][4 bytes][32 bytes][4 bytes][32 bytes]
               “.”    color   “UTXO”    txhash    index  utxohash
            total: 1+1+4+32+4+32 = 74 bytes

          (color, (), ((“ADDR”, address, txhash, index), ()), ()):
            [1 byte][1 byte][4 bytes][20 bytes][32 bytes][4 bytes]
               “.”    color   “ADDR”      addr    txhash    index
            total: 1+1+4+20+32+4 = 62 bytes
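As a sanity check on the layouts above, the two leaf formats can be built mechanically. The helpers below are hypothetical (they are not part of the reference implementation); hashes are treated as opaque 32-byte strings, and the index is appended as 4 raw bytes (the normative spec fixes the actual byte order). A real implementation would SHA256 the resulting buffer to produce the node digest.

```cpp
#include <cstdint>
#include <string>

// Serialize a "UTXO" leaf: sentinel, color, prefix, txhash, index, utxohash.
inline std::string serialize_utxo_leaf(char color, const std::string& txhash,
                                       uint32_t index, const std::string& utxohash) {
    std::string out;
    out += '.';          // leaf sentinel
    out += color;        // red/black byte
    out += "UTXO";       // 4-byte key-type prefix
    out += txhash;       // 32-byte txid
    out.append(reinterpret_cast<const char*>(&index), 4);  // 4-byte index
    out += utxohash;     // 32-byte hash of the validation data
    return out;          // 1+1+4+32+4+32 = 74 bytes
}

// Serialize an "ADDR" leaf: sentinel, color, prefix, addr, txhash, index.
inline std::string serialize_addr_leaf(char color, const std::string& addr,
                                       const std::string& txhash, uint32_t index) {
    std::string out;
    out += '.';          // leaf sentinel
    out += color;        // red/black byte
    out += "ADDR";       // 4-byte key-type prefix
    out += addr;         // 20-byte address
    out += txhash;       // 32-byte txid
    out.append(reinterpret_cast<const char*>(&index), 4);  // 4-byte index
    return out;          // 1+1+4+20+32+4 = 62 bytes
}
```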


B. Balancing Algorithm
Any binary search tree defines three operations: insert, delete, and search. The RedBlack algorithm balances the tree during every insert and delete. Each operation is O(log N) worst-case. Since blocks will contain commitments to a particular root hash, and therefore a particular tree, it’s necessary to normatively specify the particular balancing rules (there are potentially several valid ways to balance a RedBlack tree).

The particular balancing rules I have chosen were published by (Stefan Kahrs, “Red-black trees with types” Journal of functional programming, 11(04), pp 425-432, July 2001), and are fully specified in this Haskell code. This algorithm is widely used, see for example Kazu Yamamoto’s LLRB trees, this blog post by Matt Might, and this blog post by Mark Chu-Carroll. These rules are based on Chris Okasaki’s famous PhD thesis, Purely Functional Data Structures, which provides four simple case-match rules sufficient for RedBlack tree insertion (see “balance” in the Haskell specification). To implement deletion, four additional cases must be handled (see “balleft” and “balright” in the Haskell specification).

For each transaction, first all the txinputs are deleted (in sequential order) from the tree, and second all the new txoutputs are inserted (also in sequential order). Each transaction is applied in sequential order within each block, and of course each block is applied in order as it appears in the chain.
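The update order above can be sketched as follows. Here std::map stands in for the authenticated RedBlack tree (it gives the same well-ordered (txid,index) keying without the hashing), and the transaction types are minimal hypothetical stand-ins, not bitcoind’s.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct TxIn  { std::string prevTxid; uint32_t prevIndex; };
struct TxOut { std::string dataHash; };  // hash of the validation data
struct Tx    { std::string txid; std::vector<TxIn> vin; std::vector<TxOut> vout; };

typedef std::pair<std::string, uint32_t> UtxoKey;   // (txid, index)
typedef std::map<UtxoKey, std::string> UtxoSet;

// Apply one transaction in the normative order: first delete every txinput
// (in sequential order), then insert every new txoutput (in sequential order).
inline void apply_tx(UtxoSet& utxos, const Tx& tx) {
    for (size_t i = 0; i < tx.vin.size(); ++i)
        utxos.erase(UtxoKey(tx.vin[i].prevTxid, tx.vin[i].prevIndex));
    for (uint32_t i = 0; i < tx.vout.size(); ++i)
        utxos[UtxoKey(tx.txid, i)] = tx.vout[i].dataHash;
}
```

Applying transactions in block order, and blocks in chain order, makes the resulting root hash deterministic, which is what lets every client agree on it.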

Reference Implementation
The reference implementation consists of three python files.
  • redblack.py: an implementation of Stefan Kahrs’ red-black tree, with an extensible “digest” augmentation. Two “modes” are provided for each of the three operations: the Record mode operates on a full tree (represented as nested python tuples), and produces a Verification Object (VO) (i.e., an O(log N) path consisting of just the nodes that were accessed during this operation), while the Replay mode takes in a root-hash and an untrusted VO and verifies the operation by recomputing the hash at each step.
  • utxo_merkle.py: a specialization of redblack.py, defined by the digest/serialization protocol described above
  • merkle_scan.py: a script that iterates through the entire blockchain, from the genesis block to the head, incrementally updating the merkle tree as it goes (requires Gavin Andresen’s python bitcointools)
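To make the Record/Replay distinction above concrete, here is a hypothetical sketch of a Replay-mode search: the verifier holds only the trusted root hash and re-hashes each node of the untrusted VO before descending. Fixed-length byte keys are assumed so that concatenation is unambiguous; none of these names come from the actual reference code.

```python
import hashlib

EMPTY = b'\x00' * 32  # assumed sentinel digest for an empty child

def node_digest(left, key, right):
    # Digest of one node: hash of child digests plus the node's key.
    return hashlib.sha256(left + key + right).digest()

def replay_search(root_hash, key, vo):
    # `vo` is the untrusted path: a list of (left_digest, key,
    # right_digest) tuples from root toward a leaf. Each node must hash
    # to the digest its parent committed to, so tampering is detected.
    expected = root_hash
    for left, k, right in vo:
        if node_digest(left, k, right) != expected:
            raise ValueError('VO inconsistent with committed root hash')
        if key == k:
            return True
        expected = left if key < k else right
    return False
```

The O(1)-storage client only ever holds `root_hash`; everything else arrives in the VO and is checked on the fly.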

Additionally, some unit tests are provided, as well as a graphviz-based visualization tool that produces images like the ones in this post.

The reference implementation is not at all optimized. So far I have only run it on the first 140k blocks before getting impatient. It takes only a couple of minutes to run through the first 200k transactions. I will save discussion about optimized implementations for later posts in this thread (I invite you to help!), although I plan on updating this post at least with performance estimates as I go along.

Teaser Images (two-byte hashes)
Before-and-after inserting the element 10 (full O(N) tree, Record mode)


Before-and-after inserting the element 10 (just the O(1) root hash and O(log N) untrusted VO, Replay mode)


What Now?
This is not a new proposal; variations of this idea have been bouncing around since 2010. Because of its potential to improve the security and scalability of lite clients, it’s important to develop this sooner rather than later. My goal is to push this process along by proposing a specific protocol and providing a reference implementation. This thread should be about the specification and implementation details (before making a “why bother?” post, please read etotheipi’s thread or any of the others I mentioned in Related Work).

This normative specification says very little about how the data should actually be stored or transmitted, so this may vary between implementations. Since the root node will change for every update, but leaf nodes will change infrequently, it would make sense to use a heuristic/probabilistic cache. The normative specification is very simple, so I expect other developers will have little difficulty making compatible implementations in other languages. The hardest part will likely be implementing the RedBlack algorithm itself, which is why I tried to make my implementation simple to use as a guide and gold-standard. If you can think of a supporting feature I can build to help (diagrams, explanations, example code, etc.), please ask for it in this thread and I will eagerly accommodate you.

I plan to gradually optimize the reference implementation, for both the O(1)-storage lite-clients and the full O(N)-storage (UltraPrune) clients, as well as to provide a scalable (untrusted) DHT that can serve tree data for any point in blockchain history. I will need to make at least some progress in this direction before I can offer any performance estimates. Here is a profile/callgraph for the current implementation, suggesting that less than 2% of the time is spent computing hashes (most of the time is spent in my sketchy pattern-matching function, so that’s the first thing to work on).

Both etotheipi and gmaxwell would prefer to use tries rather than balanced binary trees, for reasons I consider uncompelling. Tries (at least ones with Merkle hashes) are likely to require more space and computation than the balanced trees. Etotheipi likes the insertion-order-independence of a trie, although this is irrelevant if each block contains a commitment to a root hash (since you can find the previous root hash in the previous block). I don’t know why gmaxwell prefers tries. Either way, it’s their turn to provide a competing implementation!

The following content was written by etotheipi on August 20, 2012, 08:53:03 PM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)


Temporarily putting aside the debate about insert-order-dependence and tries-vs-BSTs, why not make all nodes “leaf” nodes?  The traditional BST doesn’t actually distinguish — it just has a left and right pointer and if they’re both NULL, it’s a leaf.  What I’m suggesting is that every node contains an element of the set, and while searching for that element you might stop before getting to the bottom of a sub-branch, because the branch node itself is an element.   Its “value” is the hash of some concatenation, such as sha256(elementHash || LeftPtrValue || RightPtrValue).  It seems like it might save some space, though I can’t figure out off the top of my head whether it’s significant.  Or does that break some property of the tree you’ve shown here?
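The suggested node “value” can be written down directly; the NULL-child sentinel below is an assumption, since the post doesn’t specify how a missing pointer is encoded:

```python
import hashlib

NULL = b'\x00' * 32  # assumed digest standing in for a NULL child pointer

def node_value(element_hash, left_value=NULL, right_value=NULL):
    # sha256(elementHash || LeftPtrValue || RightPtrValue); a node with
    # two NULL children is a leaf, but uses the same formula as a branch.
    return hashlib.sha256(element_hash + left_value + right_value).digest()
```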


The following content was written by socrates1024 on August 20, 2012, 08:58:56 PM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)


I want to make an improvement to this proposal.

Originally, my UTXO tree contained only keys, of the form (txid,index). In order to actually validate a transaction, it would be necessary to look up the corresponding transaction data. Even though only a small subset of this transaction data is relevant (just a single txout), an O(1)-storage client would also need to recompute the hash over the entire transaction in order to be secure. This means the validation cost, per txout, goes from O(log N) to O(T log N), where T is the size of a transaction (total number of txins and txouts).

What I want to do instead is to use the (currently-unused) ‘value’ field in each leaf node of the binary tree to store a hash of just the relevant validation data.

The validation data consists of the fields (isCoinbase, nHeight, amount, scriptPubKey). I would serialize them as follows:
Code:
             [    1 byte][4 bytes][8 bytes][     x bytes]
              isCoinbase  nHeight   amount  scriptPubKey
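A sketch of that layout using Python’s struct module; little-endian byte order is my assumption, since the post doesn’t fix endianness:

```python
import struct

def serialize_validation(is_coinbase, n_height, amount, script_pubkey):
    # [1 byte][4 bytes][8 bytes][x bytes]; '<' means little-endian with
    # no padding, so the fixed prefix is exactly 13 bytes.
    return struct.pack('<BIQ', is_coinbase, n_height, amount) + script_pubkey

def deserialize_validation(data):
    is_coinbase, n_height, amount = struct.unpack('<BIQ', data[:13])
    return is_coinbase, n_height, amount, data[13:]
```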

There is now a reason to treat branch nodes differently than leaf nodes. Leaf nodes need to contain an additional hash value, but since they don’t need to contain the left/right hashes, they’re just as small. (I use a sentinel byte to disambiguate leaf vs. branch.)

Code:
          (color, (), ((“UTXO”, txhash, index), utxohash), ()):
            [1 byte][1 byte][4 bytes][32 bytes][4 bytes][32 bytes]
               “.”    color   “UTXO”    txhash    index  utxohash
            total: 1+1+4+32+4+32 = 74 bytes

I can’t think of any reason to include ‘values’ for the ADDR lookups, but they can still benefit from omitting the child hashes.
Code:
          (color, (), ((“ADDR”, address, txhash, index), ()), ()):
            [1 byte][1 byte][4 bytes][20 bytes][32 bytes][4 bytes]
               “.”    color   “ADDR”      addr    txhash    index
            total: 1+1+4+20+32+4 = 62 bytes
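Both leaf layouts above can be sketched with the same assumptions (ASCII tags, a “.” sentinel byte, little-endian index; these encoding choices are mine, not normative):

```python
import struct

def serialize_utxo_leaf(color, txhash, index, utxohash):
    # [1][1][4][32][4][32] = 74 bytes, matching the UTXO layout above.
    assert len(color) == 1 and len(txhash) == 32 and len(utxohash) == 32
    return b'.' + color + b'UTXO' + txhash + struct.pack('<I', index) + utxohash

def serialize_addr_leaf(color, addr, txhash, index):
    # [1][1][4][20][32][4] = 62 bytes; ADDR entries carry no value hash.
    assert len(color) == 1 and len(addr) == 20 and len(txhash) == 32
    return b'.' + color + b'ADDR' + addr + txhash + struct.pack('<I', index)
```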

With this scheme, transaction validation only costs O(log N) per txout, regardless of the total size of each transaction.

The following content was written by socrates1024 on August 20, 2012, 09:26:30 PM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)


why not make all nodes “leaf” nodes?  …  It seems like it might save some space, though I can’t figure out off the top of my head whether it’s significant.  Or does that break some property of the tree you’ve shown here?

The main reason is to reduce validation cost by storing values only at the leaf nodes. (Originally I was using only keys, and no values, so it wouldn’t have mattered).

When I was implementing my redblack tree, I actually switched back and forth on this issue several times, since I wasn’t sure if it was necessary to use keys+values or just keys. I ended up settling with keys+values, and now (see previous post) I have a good reason for it.

The following content was written by socrates1024 on August 21, 2012, 01:01:37 AM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)


I made a post in which I outlined the necessary requirements for the Bitcoin data structures. I focused on two different tasks: fork validation and transaction validation.

My main motivation for this is to address the fears that “order independence” may be important for some reason. I showed that it’s not required for either of the two tasks I defined. Is there a requirement missing for which order-independence is important? Or are the assumptions I made too strong? Otherwise, Merkle trees are at least as good asymptotically, and are likely faster by a constant factor.

On the other hand, if you allow stronger assumptions (such as about the number of txouts per unique txid), or weaker requirements, there are at least some possible scenarios where trie-based solutions are faster, but never by more than a constant factor.

Some related work using Merkle Tries

I’ve found several instances of Merkle Tries. None of them have mentioned any benefit from order-independence. Each one of them suggests that the trie is on average faster than RedBlack Merkle trees by a constant factor, however this relies on the keys consisting only of random hashes. In the UTXO, we at least need to look up validation information by (txid,idx). One suggestion has been to use a trie where the only key is (txid) and all txouts with the same txid are grouped together. In this case, the average-case validation depends on some additional quantity, T, describing the ratio of txouts to unique txids: O(T log M).


The following content was written by etotheipi on August 21, 2012, 04:39:16 AM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)


I made a post in which I outlined the necessary requirements for the Bitcoin data structures. I focused on two different tasks: fork validation and transaction validation.

My main motivation for this is to address the fears that “order independence” may be important for some reason. I showed that it’s not required for either of the two tasks I defined. Is there a requirement missing for which order-independence is important? Or are the assumptions I made too strong? Otherwise, Merkle trees are at least as good asymptotically, and are likely faster by a constant factor.

On the other hand, if you allow stronger assumptions (such as about the number of txouts per unique txid), or weaker requirements, there are at least some possible scenarios where trie-based solutions are faster, but never by more than a constant factor.

Some related work using Merkle Tries

I’ve found several instances of Merkle Tries. None of them have mentioned any benefit from order-independence. Each one of them suggests that the trie is on average faster than RedBlack Merkle trees by a constant factor, however this relies on the keys consisting only of random hashes. In the UTXO, we at least need to look up validation information by (txid,idx). One suggestion has been to use a trie where the only key is (txid) and all txouts with the same txid are grouped together. In this case, the average-case validation depends on some additional quantity, T, describing the ratio of txouts to unique txids: O(T log M).


You’re focusing a lot on access speed.  Getting the access speed question out of the way is an important one, but I think dwelling on it is unnecessary.  We’ve established that even for billions of elements, the trie and BST will have the same, absolute order of magnitude access time.  The trie has better theoretical order of growth with respect to number of elements (O(1)), but its absolute access time will be comparable to the BST’s O(logN) for N < 10^10.  I’m already satisfied with the conclusion:  “both are fast enough.”

So then my concern remains with regards to order-independence.  Two particular concerns pop out that I’m sure you have figured out, but I want to make sure:

(1) How can you guarantee that node-deletion from the BST will result in the same underlying tree structure as you started with?  Any particular node addition could trickle rebalancing/rotation operations from the bottom of the BST to the top.  Is node-deletion guaranteed to undo those rotations?  If not, what do you need after every operation to make sure you know how to remove the node, if it becomes necessary?
(2)  If a lite-node requests all the UTXOs for a given address, how do you (the supplier) transmit that information?  You can’t just give it a sequential list of UTXOs, and you don’t know just from looking at the address tree what order they were inserted into the sub-branch.  How do you communicate this information to the lite-node so that it can verify the UTXO list?

Of course, I have to supplement those questions with the fact that tries don’t even have to think about this.  (1) Delete a node from the trie!  If two tries have the same elements, they have the same structure.  (2) Send the UTXO list in any order:  if two tries (or trie-branches) have the same elements, they have the same structure!
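The structure-equality claim in point (2) can be checked with a toy byte-wise trie: identical element sets yield an identical structure (and thus an identical digest) regardless of insertion order. This is a demonstration sketch only, not proposal code:

```python
# Toy trie demo of order-independence: same elements, any insertion
# order, same structure and same hash.
import hashlib

def trie_insert(node, key):
    # A node is a dict mapping the next key byte to a child node; the
    # sentinel entry None marks that a key terminates at this node.
    if not key:
        node[None] = True
        return
    child = node.setdefault(key[0], {})
    trie_insert(child, key[1:])

def trie_hash(node):
    # Hash children in sorted key order, so the digest depends only on
    # the trie's structure, never on dict insertion order.
    h = hashlib.sha256()
    for k in sorted(node, key=lambda x: (x is None, x)):
        if k is None:
            h.update(b'$')          # terminal marker
        else:
            h.update(bytes([k]))
            h.update(trie_hash(node[k]))
    return h.digest()
```

Two tries built from the same keys in different orders compare equal both structurally and by digest.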

I don’t want to derail this thread too far in the trie-vs-BST direction.  I just want to make sure there are good answers to these questions.  I really do appreciate that someone has put in time to try to make a proof-of-concept.  Even if I don’t agree with the particular optimization, there’s a lot of value/education in actually trying to put the theory into practice, and the lessons you learn will help us out in the future — or you will simply finish it for us and then we will gratefully leverage your accomplishment!

So, thanks for pioneering this effort!

P.S. — You argued in a PM, against the point that a trie can be parallel-updated/maintained.  But I didn’t understand the argument.  I can design a system where the top-level branch node (branching on first byte), split up between X threads/HDDs/servers.  Each server handles 256/X sub-branches.  When a new block comes in, all additions and deletions induced by that block are distributed to the appropriate server.  The main thread waits for all the servers to finish — which happens by reporting back the updated “roots” of their sub-trees — then the main thread combines those into the final, master root.  This seems like a valid (major!) benefit.  




The following content was written by socrates1024 on August 21, 2012, 05:34:24 AM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)


It’s not a derail, it’s the most important thing we need to figure out.

Quote
My main motivation for this is to address the fears that “order independence” may be important for some reason. I showed that it’s not required for any of the two tasks I defined. Is there a requirement missing for which order-independence is important? Or are the assumptions I made too strong?

So then my concern remains with regards to order-independence.  Two particular concerns pop out that I’m sure you have figured out, but I want to make sure:

(1) How can you guarantee that node-deletion from the BST will result in the same underlying tree structure as you started with?  Any particular node addition could trickle rebalancing/rotation operations from the bottom of the BST to the top.  Is node-deletion guaranteed to undo those rotations?  If not, what do you need after every operation to make sure you know how to remove the node, if it becomes necessary?
(2)  If a lite-node requests all the UTXOs for a given address, how do you (the supplier) transmit that information?  You can’t just give it a sequential list of UTXOs, and you don’t know just from looking at the address tree what order they were inserted into the sub-branch.  How do you communicate this information to the lite-node so that it can verify the UTXO list?

1) Node-deletion is not the inverse of node-insertion. This isn’t a requirement. Both operations produce new trees, typically with new root hashes. There are potentially many trees that represent the same set of coins, but only a particular tree is committed in a given block. To answer the “if not, then what” question, I have tried to clearly describe two abstract scenarios:

  • Transaction Validation:  I assume you already know the root hash at time T, and have access to an untrusted copy of the UTXO-set (e.g., stored on a shared cloud host). Now you need to securely compute a new root hash for time T+1 (with one UTXO deleted). This is done deterministically, using the redblack deletion rules. You only need to fetch O(log M) elements from the untrusted storage. This is exactly as difficult as with UltraPrune (BTree), which also fetches at least O(log M) data (but requires trusted storage).

  • Fork Validation: If you are validating a fork, it is because you have received the head of a chain that purports to be larger than yours. Suppose the fork point goes back N blocks ago. Even though you have the full UTXO for your current fork, you need to simulate a node that is bootstrapping itself from just the header from N blocks ago. It takes O(N log M) operations to validate the new fork, but you only needed to download O(M + N) data, M for a snapshot of the UTXO-tree from N blocks ago in your chain, and N for all the transactions in the fork. You validate them in forward order. You’re not vulnerable to DDoS, because you validate the work first (headers).


2) This is an optional scenario, but I want to include it. This is about a client that wants to make a request of the form (roothash, address). The client wants to do a range query, receiving a set of O(m log M) paths, where ‘m’ is the number of spendable coins with that address. The client is assumed to already know the root hash of the most recent valid block.

You (the untrusted server) have a chain N blocks long, and you want to be able to serve requests where ‘roothash’ falls anywhere in your chain. Then you must store O(M + N log M) data in total – M for your current UTXO snapshot, and N log M for all the “deltas” from every transaction in the past. This is a “persistent datastructure” because you can simulate queries from any of the N trees represented in your chain. This is “optional” because it isn’t necessary for Transaction Validation or for Fork Validation. I can’t prove that there are no variations of this problem for which a trie might give better performance. If you have a counter example, then we would need to see if the trie also performs satisfactorily for the two core requirements.

Perhaps you (the server) would want to store all this data (locally) in a trie. It wouldn’t need to be a Merkle trie. The sequence of UTXO-trees would still be updated according to redblack balancing rules, but you could use a trie to store all the node data you might have to serve.


Quote
P.S. — You argued in a PM, against the point that a trie can be parallel-updated/maintained.  But I didn’t understand the argument.  I can design a system where the top-level branch node (branching on first byte), split up between X threads/HDDs/servers.  Each server handles 256/X sub-branches.  When a new block comes in, all additions and deletions induced by that block are distributed to the appropriate server.  The main thread waits for all the servers to finish — which happens by reporting back the updated “roots” of their sub-trees — then the main thread combines those into the final, master root.  This seems like a valid benefit.

Ordinary tries can be updated in parallel, but this isn’t one of the performance characteristics that carries over when you augment them with collision-resistant hashes. The computation must be serial, but the storage can be easily sharded (so it’s concurrent (safe), not parallel (fast)). There are such things as parallel-update authenticated structures, but they require special hashes with homomorphic superpowers.

The following content was written by etotheipi on August 21, 2012, 04:08:05 PM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)


1) Node-deletion is not the inverse of node-insertion. This isn’t a requirement. Both operations produce new trees, typically with new root hashes. There are potentially many trees that represent the same set of coins, but only a particular tree is committed in a given block. To answer the “if not, then what” question, I have tried to clearly describe two abstract scenarios:

  • Transaction Validation:  I assume you already know the root hash at time T, and have access to an untrusted copy of the UTXO-set (e.g., stored on a shared cloud host). Now you need to securely compute a new root hash for time T+1 (with one UTXO deleted). This is done deterministically, using the redblack deletion rules. You only need to fetch O(log M) elements from the untrusted storage. This is exactly as difficult as with UltraPrune (BTree), which also fetches at least O(log M) data (but requires trusted storage).


I need more details to understand how this works.  You just say, “this isn’t a requirement” and “done deterministically using the redblack deletion rules.”  Those rules get you a different tree.  Every level of the redblack tree might’ve unrolled/reorganized, all the way up to the root.  Without saving the entire structure of the tree after every insertion (which would be an ungodly amount of data), I don’t know how you can expect to reverse those operations on a reorg. 

You keep mentioning how much data you need to download, but that’s not what I’m worried about.  I want to know what, and how much, data has to be saved before every insertion and deletion (or block of such operations), to guarantee that your tree at time T can be rolled back to time T-1.


Quote
P.S. — You argued in a PM, against the point that a trie can be parallel-updated/maintained.  But I didn’t understand the argument.  I can design a system where the top-level branch node (branching on first byte), split up between X threads/HDDs/servers.  Each server handles 256/X sub-branches.  When a new block comes in, all additions and deletions induced by that block are distributed to the appropriate server.  The main thread waits for all the servers to finish — which happens by reporting back the updated “roots” of their sub-trees — then the main thread combines those into the final, master root.  This seems like a valid benefit.

Ordinary tries can be updated in parallel, but this isn’t one of the performance characteristics that carries over when you augment them with collision-resistant hashes. The computation must be serial, but the storage can be easily sharded (so it’s concurrent (safe), not parallel (fast)). There are such things as parallel-update authenticated structures, but they require special hashes with homomorphic superpowers.

I’m not sure what you mean?  The hashes have no relevance to the structure of that tree:  only the (addr,txhash,idx) values matter.  You can parallel-insert or delete all the necessary leaves to get the correct structure, then walk up the tree from each leaf, recomputing the hashes for the branch nodes as you go.  Each sub-branch can be done completely independently, with only one process at the end to merge them into a single root hash.  What am I neglecting?

The following content was written by grau on August 22, 2012, 01:35:11 AM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)


Without saving the entire structure of the tree after every insertion (which would be an ungodly amount of data), I don’t know how you can expect to reverse those operations on a reorg. 

The requirement is to ensure that the set of transactions in the block alters the tree to a new state with the hash given in the block header. For this, the change has to be deterministic when inserts/deletes are executed in the order the transactions are listed in the block. When returning from a fork, the client might even see a different tree for the same set of transactions (because the miner captured them in a different order); that is fine, just like a different proof-of-work hash.
