The following content was written by socrates1024 on August 19, 2012, 02:26:27 PM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)
This is a concrete proposal (and reference implementation) for a Merkle-tree of unspent-outputs (UTXOs). Eventually, the idea is to include the root hash of the current UTXO-tree in each block. This will enable full-validation (zero-trust) lite clients that use only a constant O(1) amount of storage – for example, a client that fits on a smart-card – and O(log N) amount of effort per txOut (where N is the total number of UTXOs at any given time).
**EDIT**: I included several changes from a follow-up post that improve transaction-validation complexity.
Overview
Validating a transaction requires checking that every tx-input hasn’t already been spent. One way of doing this is to iterate, for each tx-input, from the head of the chain backwards, looking for a transaction that spends it. This would be inefficient – instead, the standard client (including Pieter Wuille’s UltraPrune variation) builds and maintains a database containing (at least) all the currently available coins. I propose to store the unspent outputs, as well as an additional “query-by-address” index, in a balanced Merkle search tree (i.e., a RedBlack tree augmented with hashes). This will lead to several benefits:
- Full-validation nodes (especially lite clients) will only need to store (and trust) the O(1) hash of the current head block. Verification data can be queried as necessary from a (completely untrusted) Distributed Hash Table (DHT) (e.g., the gossip-layer network).
- Lite clients can “bootstrap” themselves immediately, just by selecting the correct head.
- Clients can securely query the DHT for a complete list of spendable (standard) UTXOs associated with a given address, without having to “rescan” the blockchain.
- Arbitrary-length forks can be tolerated without any need for a “reorg.”
- “Proofs” of invalid transactions (e.g. a double spend) can be validated in O(log N) as well.
This proposal calls for a single additional hash to be placed in each block: the root hash of the UTXO Merkle tree. As such, it belongs in the HardFork Wishlist. However, it could also be included in an (optionally merge-mined) alt-chain. This will require additional computation for validating a transaction – the addition will be a constant factor, since each processed txinput/txoutput already requires (even in UltraPrune) an O(log N) update to a BTree table of the UTXOs. For now, the proposal simply describes a stand-alone data structure that can be derived from existing blockchain data.
The normative specification for this protocol consists of A) a serialization (and digest) format for nodes in the balanced tree, where each leaf node contains a single UTXO, and branch nodes contain hashes of their children; and B) insertion/deletion algorithms that preserve the RedBlack invariants (balancing) of the tree, guaranteeing O(log N) worst-case effort per update.
The reference implementation is written in python, and consists of 1) a modular, extensible, and side-effect-free RedBlack tree (including unit tests) that can be augmented with arbitrary ‘digest’ functions, 2) an instance of this structure with the specified digest function, and 3) a script that processes actual blockchain data and applies each update (each txinput and txoutput) to the Merkle tree. This reference implementation is not at all optimized, but instead is intended to be easy-to-analyze and to provide a bit-correct gold-standard for the Merkle tree at each step.
This proposal is also a precursor to the “Eventually Zero-Trust Probabilistic Validation” scheme I described in an earlier thread.
Related Work
ByteCoin first proposed including the hash of the current “balance sheet” in each block (i.e., the set of available unspent tx outputs), making it possible to validate transactions without needing to store the entire blockchain. Greg Maxwell later suggested using a Merkle tree in order to query this set more efficiently. DiThi later made an equivalent proposal. Most recently, etotheipi suggested using a self-balancing Merkle tree so that incremental modification to the structure would be more efficient. I scoured the academic literature, finding a Usenix paper from 1996 by Moni Naor and Kobbi Nissim, suggesting that a RedBlack tree could be augmented with hashes instead of pointers. I wrote a reference implementation of such an “Authenticated Data Structure”. To my knowledge, it is the only available (open-source) implementation of a self-balancing hash tree.
Hash trees (of the non-balancing variety) are widely used, for example in Git and in Tahoe-LAFS.
Normative Protocol Specification
A. Serialization and Digest Format
Each node in a RedBlack binary search tree consists of 1) a key (the largest key in the left subtree), 2) left and right children (possibly null), 3) a single bit (representing Red or Black) used for balancing, and 4) a value (leaf nodes only). The keys must be well-ordered in order to support efficient search queries to this structure. Two kinds of queries are supported (although more could be added): a search by (txid,index), as is needed to validate transactions, and also a search by (address,txid,index), which is useful for clients to query for the balance of an address.
There are two kinds of mappings stored in the tree:
1. UTXO Validation:
(txid, index) -> (hash of validation data)
The validation data is a subset of the transaction data corresponding to txid: a single scriptPubKey, as well as some necessary metadata (isCoinbase, and nHeight).
2. Address scanning:
(address, txid, index) -> ()
No separate value is used for this type, since all the information is contained in the unique key.
The two key types are indicated by a four-byte prefix, “UTXO” for (txid,index), and “ADDR” for (address,txid,index). The ‘root-hash’ is the digest of the root node of the tree. Each digest is the SHA256 hash of the serialized node. In the serialized format, the left and right subtrees are represented by their digests. The digest of an empty tree is represented by 32 zero bytes (i.e., the genesis hash). Leaf nodes and branch nodes are serialized differently, with a sentinel value (“.” for leaves, “^” for branches) to distinguish between them.
Branch nodes are serialized as follows:
(color, dL, ((“UTXO”, txhash, index), ()), dR):
[1 byte][1 byte][4 bytes][32 bytes][32 bytes][4 bytes][32 bytes]
[ “^” ][ color][ “UTXO”][ H(Left)][ txhash][ index][H(Right)]
total: 1+1+4+32+32+4+32 = 106 bytes
(color, dL, ((“ADDR”, address, txhash, index), ()), dR):
[1 byte][1 byte][4 bytes][32 bytes][20 bytes][32 bytes][4 bytes][32 bytes]
[ “^” ][ color][ “ADDR”][ H(Left)][ addr][ txhash][ index][H(Right)]
total: 1+1+4+32+20+32+4+32 = 126 bytes
Leaf nodes are serialized with a different format, since the left/right digests are unnecessary, but the value must be included.
(color, (), ((“UTXO”, txhash, index), utxohash), ()):
[1 byte][1 byte][4 bytes][32 bytes][4 bytes][32 bytes]
[ “.” ][ color][ “UTXO”][ txhash][ index][utxohash]
total: 1+1+4+32+4+32 = 74 bytes
(color, (), ((“ADDR”, address, txhash, index), ()), ()):
[1 byte][1 byte][4 bytes][20 bytes][32 bytes][4 bytes]
[ “.” ][ color][ “ADDR”][ addr][ txhash][ index]
total: 1+1+4+20+32+4 = 62 bytes
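For concreteness, here is a minimal Python sketch of the four serialization formats and the digest rule above. The field order and widths follow the specification; the color byte (b"R"/b"B") and the big-endian index encoding are assumptions, since the post does not fix them.

```python
import hashlib

EMPTY_DIGEST = b"\x00" * 32          # digest of the empty tree

def digest(serialized_node):
    # Each node digest is the SHA256 hash of its serialization.
    return hashlib.sha256(serialized_node).digest()

# Assumed encodings (not fixed by the spec): color is b"R" or b"B",
# index is 4 bytes big-endian.

def ser_branch_utxo(color, d_left, txhash, index, d_right):
    return (b"^" + color + b"UTXO" + d_left + txhash +
            index.to_bytes(4, "big") + d_right)            # 106 bytes

def ser_branch_addr(color, d_left, addr, txhash, index, d_right):
    return (b"^" + color + b"ADDR" + d_left + addr + txhash +
            index.to_bytes(4, "big") + d_right)            # 126 bytes

def ser_leaf_utxo(color, txhash, index, utxohash):
    return (b"." + color + b"UTXO" + txhash +
            index.to_bytes(4, "big") + utxohash)           # 74 bytes

def ser_leaf_addr(color, addr, txhash, index):
    return (b"." + color + b"ADDR" + addr + txhash +
            index.to_bytes(4, "big"))                      # 62 bytes
```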
B. Balancing Algorithm
Any binary search tree defines three operations: insert, delete, and search. The RedBlack algorithm balances the tree during every insert and delete. Each operation is O(log N) worst-case. Since blocks will contain commitments to a particular root hash, and therefore a particular tree, it’s necessary to normatively specify the particular balancing rules (there are potentially several valid ways to balance a RedBlack tree).
The particular balancing rules I have chosen were published by (Stefan Kahrs, “Red-black trees with types” Journal of functional programming, 11(04), pp 425-432, July 2001), and are fully specified in this Haskell code. This algorithm is widely used, see for example Kazu Yamamoto’s LLRB trees, this blog post by Matt Might, and this blog post by Mark Chu-Carroll. These rules are based on Chris Okasaki’s famous PhD thesis, Purely Functional Data Structures, which provides four simple case-match rules sufficient for RedBlack tree insertion (see “balance” in the Haskell specification). To implement deletion, four additional cases must be handled (see “balleft” and “balright” in the Haskell specification).
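For readers without the Haskell at hand, Okasaki’s four insertion cases translate directly into Python (3.10+ pattern matching). This is a sketch: nodes are simplified to (color, left, key, right) tuples rather than the leaf/branch layout above, and the four deletion cases (balleft/balright) are omitted.

```python
def balance(color, left, key, right):
    # Okasaki's four cases: a black node with two consecutive red
    # nodes somewhere beneath it is rewritten as a red node with two
    # black children.  All four cases produce the same result.
    match (color, left, key, right):
        case ("B", ("R", ("R", a, x, b), y, c), z, d) | \
             ("B", ("R", a, x, ("R", b, y, c)), z, d) | \
             ("B", a, x, ("R", ("R", b, y, c), z, d)) | \
             ("B", a, x, ("R", b, y, ("R", c, z, d))):
            return ("R", ("B", a, x, b), y, ("B", c, z, d))
        case _:
            return (color, left, key, right)
```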
For each transaction, first all the txinputs are deleted (in sequential order) from the tree, and second all the new txoutputs are inserted (also in sequential order). Each transaction is applied in sequential order within each block, and of course each block is applied in order as it appears in the chain.
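Expressed as code, the normative update order might look like the following sketch; insert, delete, and utxohash stand for the redblack operations and the leaf-value hash defined above, and the corresponding ADDR-index updates are elided.

```python
def apply_block(tree, block, insert, delete, utxohash):
    # Within each transaction: delete all txinputs first, then insert
    # all txoutputs, each in sequential order.  Transactions are
    # applied in block order; blocks in chain order.
    for tx in block.transactions:
        for txin in tx.inputs:
            tree = delete(tree, ("UTXO", txin.txid, txin.index))
        for index, txout in enumerate(tx.outputs):
            tree = insert(tree, ("UTXO", tx.txid, index), utxohash(txout))
    return tree
```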
Reference Implementation
The reference implementation consists of three python files.
- redblack.py: an implementation of Stefan Kahrs’ red-black tree, with an extensible “digest” augmentation. Two “modes” are provided for each of the three operations: the Record mode operates on a full tree (represented as nested python tuples), and produces a Verification Object (VO) (i.e., an O(log N) path consisting of just the nodes that were accessed during this operation), while the Replay mode takes in a root-hash and an untrusted VO and verifies the operation by recomputing the hash at each step (a sketch of a Replay-mode search appears after this list)
- utxo_merkle.py: a specialization of RedBlack, defining the digest/serialization protocol described above
- merkle_scan.py: a script that iterates through the entire blockchain, from the genesis block to the head, incrementally updating the merkle tree as it goes (requires Gavin Andresen’s python bitcointools)
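To illustrate the Replay mode mentioned above, here is a sketch of a replay-mode search. The flat node layout (branches as ("^", color, key, dL, dR), leaves as (".", color, key, value)) and the serialize parameter are simplifications standing in for the normative serialization.

```python
import hashlib

def replay_search(root_hash, key, vo, serialize):
    # Replay mode: verify an untrusted O(log N) path (the VO) against
    # a trusted O(1) root hash.
    expected = root_hash
    for node in vo:
        # Each node the prover supplies must hash to a digest we
        # already trust: first the root, then whichever child we
        # descend into.
        if hashlib.sha256(serialize(node)).digest() != expected:
            raise ValueError("VO inconsistent with committed root hash")
        if node[0] == ".":                       # leaf: search ends here
            _, _, leaf_key, value = node
            return value if leaf_key == key else None
        _, _, node_key, d_left, d_right = node   # branch: descend
        # The branch key is the largest key in its left subtree.
        expected = d_left if key <= node_key else d_right
    raise ValueError("VO ended before reaching a leaf")
```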
Additionally, some unit tests are provided, as well as a graphviz-based visualization tool that produces images like the ones in this post.
The reference implementation is not at all optimized. So far I have only run it on the first 140k blocks before getting impatient. It takes only a couple of minutes to run through the first 200k transactions. I will save discussion about optimized implementations for later posts in this thread (I invite you to help!), although I plan on updating this post at least with performance estimates as I go along.
Teaser Images (two-byte hashes)
Before-and-after inserting the element 10 (full O(N) tree, Record mode)
Before-and-after inserting the element 10 (just the O(1) root hash and O(log N) untrusted VO, Replay mode)
What Now?
This is not a new proposal; variations of this idea have been bouncing around since 2010. Because of its potential to improve the security and scalability of lite clients, it’s important to develop this sooner rather than later. My goal is to push this process along by proposing a specific protocol and providing a reference implementation. This thread should be about the specification and implementation details (before making a “why bother?” post, please read etotheipi’s thread or any of the others I mentioned in Related Work).
This normative specification says very little about how the data should actually be stored or transmitted, so this may vary between implementations. Since the root node will change for every update, but leaf nodes will change infrequently, it would make sense to use a heuristic/probabilistic cache. The normative specification is very simple, so I expect other developers will have little difficulty making compatible implementations in other languages. The hardest part will likely be implementing the RedBlack algorithm itself, which is why I tried to make my implementation simple to use as a guide and gold-standard. If you can think of a supporting feature I can build to help, (diagrams, explanations, example code, etc.), please ask for it in this thread and I will eagerly accommodate you.
I plan to gradually optimize the reference implementation, for both the O(1)-storage lite-clients and the full O(N)-storage (UltraPrune) clients, as well as to provide a scalable (untrusted) DHT that can serve tree data for any point in blockchain history. I will need to make at least some progress in this direction before I can offer any performance estimates. Here is a profile/callgraph for the current implementation, suggesting that less than 2% of the time is spent computing hashes (most of the time is spent in my sketchy pattern-matching function, so that’s the first thing to work on).
Both etotheipi and gmaxwell would prefer to use tries rather than balanced binary trees, for reasons I consider uncompelling. Tries (at least ones with Merkle hashes) are likely to require more space and computation than the balanced trees. etotheipi likes the insertion-order-independence of a trie, although this is irrelevant if each block contains a commitment to a root hash (since you can find the previous root hash in the previous block). I don’t know why gmaxwell prefers tries. Either way, it’s their turn to provide a competing implementation!
The following content was written by etotheipi on August 20, 2012, 08:53:03 PM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)
Temporarily putting aside the debate about insert-order-dependence and tries-vs-BSTs, why not make all nodes “leaf” nodes? The traditional BST doesn’t actually distinguish — it just has a left and right pointer and if they’re both NULL, it’s a leaf. What I’m suggesting is that every node contains an element of the set, and while searching for that element you might stop before getting to the bottom of a sub-branch, because the branch node itself is an element. Its “value” is the hash of some concatenation, such as sha256(elementHash || LeftPtrValue || RightPtrValue). It seems like it might save some space, though I can’t figure out off the top of my head whether it’s significant. Or does that break some property of the tree you’ve shown here?
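In code, the suggested rule might look like the following sketch (the all-zeros placeholder for NULL children is an assumption):

```python
import hashlib

EMPTY = b"\x00" * 32   # placeholder for a NULL child (an assumption)

def node_value(element_hash, left_value=None, right_value=None):
    # Every node holds an element of the set; its authentication value
    # commits to its own element plus both child values:
    # sha256(elementHash || LeftPtrValue || RightPtrValue).
    return hashlib.sha256(element_hash +
                          (left_value or EMPTY) +
                          (right_value or EMPTY)).digest()
```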
The following content was written by socrates1024 on August 20, 2012, 08:58:56 PM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)
I want to make an improvement to this proposal.
Originally, my UTXO tree contained only keys, of the form (txid,index). In order to actually validate a transaction, it would be necessary to look up the corresponding transaction data. Even though only a small subset of this transaction data is relevant (just a single txout), an O(1)-storage client would also need to recompute the hash over the entire transaction in order to be secure. This means the validation cost, per txout, goes from O(log N) to O(T log N), where T is the size of a transaction (total number of txins and txouts).
What I want to do instead is to use the (currently-unused) ‘value’ field in each leaf node of the binary tree to store a hash of just the relevant validation data.
The validation data consists of the fields (isCoinbase, nHeight, amount, scriptPubKey). I would serialize it as follows:
[isCoinbase][nHeight][amount][scriptPubKey]
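A sketch of the corresponding leaf-value hash; the field widths (1-byte flag, 4-byte height, 8-byte satoshi amount) are assumptions, since the post only fixes the field order:

```python
import hashlib

def utxohash(is_coinbase, n_height, amount, script_pubkey):
    # Hash of just the data needed to validate spending this txout.
    # Assumed widths: 1-byte flag, 4-byte big-endian height, 8-byte
    # big-endian amount (satoshis), variable-length scriptPubKey.
    data = (bytes([is_coinbase]) +
            n_height.to_bytes(4, "big") +
            amount.to_bytes(8, "big") +
            script_pubkey)
    return hashlib.sha256(data).digest()
```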
There is now a reason to treat branch nodes differently than leaf nodes. Leaf nodes need to contain an additional hash value, but since they don’t need to contain the left/right hashes, they’re just as small. (I use a sentinel byte instead to disambiguate leaf vs branch)
[1 byte][1 byte][4 bytes][32 bytes][4 bytes][32 bytes]
[ “.” ][ color][ “UTXO”][ txhash][ index][utxohash]
total: 1+1+4+32+4+32 = 74 bytes
I can’t think of any reason to include ‘values’ for the ADDR lookups, but they can still benefit from omitting the child hashes.
[1 byte][1 byte][4 bytes][20 bytes][32 bytes][4 bytes]
[ “.” ][ color][ “ADDR”][ addr][ txhash][ index]
total: 1+1+4+20+32+4 = 62 bytes
With this scheme, transaction validation only costs O(log N) per txout, regardless of the total size of each transaction.
The following content was written by socrates1024 on August 20, 2012, 09:26:30 PM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)
The main reason is to reduce validation cost by storing values only at the leaf nodes. (Originally I was using only keys, and no values, so it wouldn’t have mattered).
When I was implementing my redblack tree, I actually switched back and forth on this issue several times, since I wasn’t sure if it was necessary to use keys+values or just keys. I ended up settling with keys+values, and now (see previous post) I have a good reason for it.
The following content was written by socrates1024 on August 21, 2012, 01:01:37 AM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)
I made a post in which I outlined the necessary requirements for the Bitcoin data structures. I focused on two different tasks: fork validation and transaction validation.
My main motivation for this is to address the fears that “order independence” may be important for some reason. I showed that it’s not required for either of the two tasks I defined. Is there a requirement missing for which order-independence is important? Or are the assumptions I made too strong? Otherwise, Merkle trees are at least as good asymptotically, and are likely faster by a constant factor.
On the other hand, if you allow stronger assumptions (such as about the number of txouts per unique txid), or weaker requirements, there are at least some possible scenarios where trie-based solutions are faster, but never by more than a constant factor.
Some related work using Merkle Tries
I’ve found several instances of Merkle Tries. None of them mention any benefit from order-independence. Each of them suggests that the trie is on average faster than RedBlack Merkle trees by a constant factor; however, this relies on the keys consisting only of random hashes. In the UTXO, we at least need to look up validation information by (txid,idx). One suggestion has been to use a trie where the only key is (txid) and all txouts with the same txid are grouped together. In this case, the average-case validation cost depends on some additional quantity, T, describing the ratio of txouts to unique txids: O(T log M).
- Flexible Authentication Of XML documents – static merkle tries that let you do efficient “xpath” style queries over an xml structure (a dag)
- Authentic Time-Stamps for Archival Storage – it reduces the asymptotic total storage cost (amortized over all the merkle-tries) for the case where you must handle arbitrary queries to the tree at any point in history. Unfortunately, the average update cost is larger.
- Fern: An Updatable Authenticated Dictionary Suitable for Distributed Caching – for insertion-only workloads (no deletion), they’re competitive with redblack trees and simpler
- Nye’s Trie and Floret Estimators: Techniques for Detecting and Repairing Divergence in the SCADS Distributed Storage Toolkit – uses “Nye Tries” (named, for some reason, after Bill Nye the Science Guy), which make tries even more efficient, but at the expense of truncating security bits in the hash function.
The following content was written by etotheipi on August 21, 2012, 04:39:16 AM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)
Quote from: socrates1024 on August 21, 2012, 01:01:37 AM
My main motivation for this is to address the fears that “order independence” may be important for some reason. I showed that it’s not required for either of the two tasks I defined. Is there a requirement missing for which order-independence is important? Or are the assumptions I made too strong? Otherwise, Merkle trees are at least as good asymptotically, and are likely faster by a constant factor.
On the other hand, if you allow stronger assumptions (such as about the number of txouts per unique txid), or weaker requirements, there are at least some possible scenarios where trie-based solutions are faster, but never by more than a constant factor.
Some related work using Merkle Tries
I’ve found several instances of Merkle Tries. None of them mention any benefit from order-independence. Each of them suggests that the trie is on average faster than RedBlack Merkle trees by a constant factor; however, this relies on the keys consisting only of random hashes. In the UTXO, we at least need to look up validation information by (txid,idx). One suggestion has been to use a trie where the only key is (txid) and all txouts with the same txid are grouped together. In this case, the average-case validation cost depends on some additional quantity, T, describing the ratio of txouts to unique txids: O(T log M).
You’re focusing a lot on access speed. Getting the access-speed question out of the way is important, but I think dwelling on it is unnecessary. We’ve established that even for billions of elements, the trie and BST will have the same absolute order of magnitude access time. The trie has better theoretical order of growth with respect to the number of elements (O(1)), but its absolute access time will be comparable to the BST’s O(log N) for N < 10^10. I’m already satisfied with the conclusion: “both are fast enough.”
So then my concern remains with regards to order-independence. Two particular concerns pop out that I’m sure you have figured out, but I want to make sure:
(1) How can you guarantee that node-deletion from the BST will result in the same underlying tree structure as you started with? Any particular node addition could trickle rebalancing/rotation operations from the bottom of the BST to the top. Is node-deletion guaranteed to undo those rotations? If not, what do you need after every operation to make sure you know how to remove the node, if it becomes necessary?
(2) If a lite-node requests all the UTXOs for a given address, how do you (the supplier) transmit that information? You can’t just give it a sequential list of UTXOs, and you don’t know just from looking at the address tree what order they were inserted into the sub-branch. How do you communicate this information to the lite-node so that it can verify the UTXO list?
Of course, I have to supplement those questions with the fact that tries don’t even have to think about this. (1) Delete a node from the trie! If two tries have the same elements, they have the same structure. (2) Send the UTXO list in any order: if two tries (or trie-branches) have the same elements, they have the same structure!
I don’t want to derail this thread too far in the trie-vs-BST direction. I just want to make sure there are good answers for these questions. I really do appreciate that someone has put in the time to make a proof-of-concept. Even if I don’t agree with the particular optimization, there’s a lot of value/education in actually trying to put the theory into practice, and the lessons you learn will help us out in the future — or you will simply finish it for us and then we will gratefully leverage your accomplishment!
So, thanks for pioneering this effort!
P.S. — You argued in a PM, against the point that a trie can be parallel-updated/maintained. But I didn’t understand the argument. I can design a system where the top-level branch node (branching on the first byte) is split up between X threads/HDDs/servers. Each server handles 256/X sub-branches. When a new block comes in, all additions and deletions induced by that block are distributed to the appropriate server. The main thread waits for all the servers to finish — which happens by reporting back the updated “roots” of their sub-trees — then the main thread combines those into the final, master root. This seems like a valid (major!) benefit.
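For illustration, the fan-out/merge pattern described here might look like this sketch; all names are hypothetical, and update_shard is stubbed, since the point is the coordination structure rather than the trie logic:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def update_shard(sub_root, ops):
    # Hypothetical stand-in: apply one block's inserts/deletes to the
    # sub-trie under one first-byte prefix; return its new sub-root.
    return hashlib.sha256(sub_root + repr(ops).encode()).digest()

def parallel_block_update(shard_roots, ops_by_prefix, n_workers=4):
    # shard_roots: {prefix byte: 32-byte sub-root}; ops_by_prefix: this
    # block's operations grouped by the first byte of their keys.
    prefixes = sorted(ops_by_prefix)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(
            lambda p: update_shard(shard_roots[p], ops_by_prefix[p]),
            prefixes))
    merged = {**shard_roots, **dict(zip(prefixes, results))}
    # Main thread merges: the master root commits to every sub-root
    # in prefix order.
    return hashlib.sha256(
        b"".join(merged[p] for p in sorted(merged))).digest()
```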
The following content was written by socrates1024 on August 21, 2012, 05:34:24 AM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)
It’s not a derail, it’s the most important thing we need to figure out.
Quote from: etotheipi on August 21, 2012, 04:39:16 AM
(1) How can you guarantee that node-deletion from the BST will result in the same underlying tree structure as you started with? Any particular node addition could trickle rebalancing/rotation operations from the bottom of the BST to the top. Is node-deletion guaranteed to undo those rotations? If not, what do you need after every operation to make sure you know how to remove the node, if it becomes necessary?
(2) If a lite-node requests all the UTXOs for a given address, how do you (the supplier) transmit that information? You can’t just give it a sequential list of UTXOs, and you don’t know just from looking at the address tree what order they were inserted into the sub-branch. How do you communicate this information to the lite-node so that it can verify the UTXO list?
1) Node-deletion is not the inverse of node-insertion. This isn’t a requirement. Both operations produce new trees, typically with new root hashes. There are potentially many trees that represent the same set of coins, but only a particular tree is committed in a given block. To answer the “if not, then what” question, I have tried to clearly describe two abstract scenarios:
- Transaction Validation: I assume you already know the root hash at time T, and have access to an untrusted copy of the UTXO-set (e.g., stored on a shared cloud host). Now you need to securely compute a new root hash for time T+1 (with one UTXO deleted). This is done deterministically, using the redblack deletion rules. You only need to fetch O(log M) elements from the untrusted storage. This is exactly as difficult as with UltraPrune (BTree), which also fetches at least O(log M) data (but requires trusted storage).
- Fork Validation: If you are validating a fork, it is because you have received the head of a chain that purports to be larger than yours. Suppose the fork point is N blocks back. Even though you have the full UTXO for your current fork, you need to simulate a node that is bootstrapping itself from just the header from N blocks ago. It takes O(N log M) operations to validate the new fork, but you only need to download O(M + N) data: M for a snapshot of the UTXO-tree from N blocks ago in your chain, and N for all the transactions in the fork. You validate them in forward order. You’re not vulnerable to DDoS, because you validate the work first (headers). (A sketch of this sequence of checks follows below.)
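A sketch of that sequence of checks; every helper is passed in as a parameter, since this illustrates only the order of operations, not a concrete API:

```python
def validate_fork(trusted_root, fork_headers, fork_blocks,
                  total_work, current_work, fetch_snapshot,
                  root_digest, apply_block):
    # 1) Work first (headers), so an attacker can't force O(N log M)
    #    tree updates without paying for proof-of-work.
    if total_work(fork_headers) <= current_work:
        return False
    # 2) O(M): fetch the UTXO snapshot at the fork point and check it
    #    against the root hash we already trust from our own chain.
    tree = fetch_snapshot()
    if root_digest(tree) != trusted_root:
        return False
    # 3) O(N log M): replay the fork's transactions in forward order,
    #    checking the root committed in each block header as we go.
    for header, block in zip(fork_headers, fork_blocks):
        tree = apply_block(tree, block)
        if root_digest(tree) != header.utxo_root:
            return False
    return True
```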
2) This is an optional scenario, but I want to include it. This is about a client that wants to make a request of the form (roothash, address). The client wants to do a range query, receiving a set of O(m log M) paths, where ‘m’ is the number of spendable coins with that address. The client is assumed to already know the root hash of the most recent valid block.
You (the untrusted server) have a chain N blocks long, and you want to be able to serve requests where ‘roothash’ falls anywhere in your chain. Then you must store O(M + N log M) data in total – M for your current UTXO snapshot, and N log M for all the “deltas” from every transaction in the past. This is a “persistent datastructure” because you can simulate queries from any of the N trees represented in your chain. This is “optional” because it isn’t necessary for Transaction Validation or for Fork Validation. I can’t prove that there are no variations of this problem for which a trie might give better performance. If you have a counter example, then we would need to see if the trie also performs satisfactorily for the two core requirements.
Perhaps you (the server) would want to store all this data (locally) in a trie. It wouldn’t need to be a Merkle trie. The sequence of UTXO-trees would still be updated according to redblack balancing rules, but you could use a trie to store all the node data you might have to serve.
Ordinary tries can be updated in parallel, but this isn’t one of the performance characteristics that carries over when you augment them with collision-resistant hashes. The computation must be serial, but the storage can be easily sharded (so it’s concurrent (safe), not parallel (fast)). There are such things as parallel-update authenticated structures, but they require special hashes with homomorphic superpowers.
The following content was written by etotheipi on August 21, 2012, 04:08:05 PM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)
Quote from: socrates1024 on August 21, 2012, 05:34:24 AM
- Transaction Validation: I assume you already know the root hash at time T, and have access to an untrusted copy of the UTXO-set (e.g., stored on a shared cloud host). Now you need to securely compute a new root hash for time T+1 (with one UTXO deleted). This is done deterministically, using the redblack deletion rules. You only need to fetch O(log M) elements from the untrusted storage. This is exactly as difficult as with UltraPrune (BTree), which also fetches at least O(log M) data (but requires trusted storage).
- …
I need more details to understand how this works. You just say, “this isn’t a requirement” and “done deterministically using the redblack deletion rules.” Those rules get you a different tree. Every level of the redblack tree might’ve unrolled/reorganized, all the way up to the root. Without saving the entire structure of the tree after every insertion (which would be an ungodly amount of data), I don’t know how you can expect to reverse those operations on a reorg.
You keep mentioning how much data you need to download, but that’s not what I’m worried about. I want to know what, and how much, data has to be saved before every insertion and deletion (or block of such operations), to guarantee that your tree at time T can be rolled back to time T-1.
Quote from: socrates1024 on August 21, 2012, 05:34:24 AM
Ordinary tries can be updated in parallel, but this isn’t one of the performance characteristics that carries over when you augment them with collision-resistant hashes. The computation must be serial, but the storage can be easily sharded (so it’s concurrent (safe), not parallel (fast)). There are such things as parallel-update authenticated structures, but they require special hashes with homomorphic superpowers.
I’m not sure what you mean? The hashes have no relevance to the structure of that tree: only the (addr,txhash,idx) values matter. You can parallel-insert or delete all the necessary leaves to get the correct structure, then walk up the tree from each leaf, recomputing the hashes for the branch nodes as you go. Each sub-branch can be done completely independently, with only one process at the end to merge them into a single root hash. What am I neglecting?
The following content was written by grau on August 22, 2012, 01:35:11 AM in the thread Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage). All content is owned by the author of the bitcointalk.org post. (original)
The requirement is to ensure that the set of transactions in the block alters the tree to a new state with the hash given in the block header. For this, the change has to be deterministic if inserts/deletes are executed in the order the transactions are listed in the block. When returning from a fork, the client might even see a different tree for the same set of transactions (because the miner captured them in a different order); that is fine, just as a different proof-of-work hash would be.