When one such snapshot of the state of the database as of block N1 is created, the delegates could coordinate to sign the hash of this state. Suppose the 90th unique active delegate to confirm the hash submits their signature in block N2. At that point, the delegates all begin building up the new hash for the state of the database up to block N2 and repeat the process. Until the new hash is also confirmed, the old hash for the state of the database up to block N1 is referenced in each block (say, by including it in the digest of the block that the delegate must sign).
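To make the threshold concrete, here is a rough Python sketch of the confirmation logic, assuming a simple 90-of-101 unique-signer count (all names here are hypothetical illustrations, not actual client code):

```python
import hashlib

SNAPSHOT_THRESHOLD = 90      # 90 of the 101 active delegates must sign
ACTIVE_DELEGATE_COUNT = 101

def snapshot_hash(state_bytes: bytes) -> str:
    """Hash of the serialized database state as of block N1."""
    return hashlib.sha256(state_bytes).hexdigest()

class SnapshotConfirmation:
    def __init__(self, state_hash: str, active_delegates: set):
        self.state_hash = state_hash
        self.active_delegates = set(active_delegates)
        self.signers = set()   # unique active delegates that have signed so far

    def add_signature(self, delegate_id: str, signed_hash: str) -> bool:
        """Record a delegate's signature; returns True once the snapshot is
        confirmed, i.e. the 90th unique active delegate has signed it."""
        if delegate_id in self.active_delegates and signed_hash == self.state_hash:
            self.signers.add(delegate_id)
        return len(self.signers) >= SNAPSHOT_THRESHOLD
```

Once `add_signature` returns True (in block N2), the delegates would start over on the hash of the state up to block N2.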
The problem with this approach is that the client needs a way to authenticate the delegates who sign that snapshot. But the votes authorizing the delegates are part of the snapshot, i.e., you have a circular validation process without a solid anchor.
The problem you are describing only applies to the lightweight client validation benefit that I was describing. And indeed as I mentioned those clients do need a way to verify the set of active delegates who can sign the snapshot to break that circular loop:
If a user's client was also able to be confident of the current set of active delegates (there are ways this could be done cheaply on a lightweight client with very minimal trust), then the lightweight client could with very minimal trust verify the existence of funds owned by the user as of a time as recent as the latest snapshot timestamp.
I will describe that process in a moment.
But a full client already has the entire up-to-date database with all of the delegate approval vote changes. Full clients know at any block who the 101 active delegates are through the same mechanism they currently use, and so they know to treat a block in which these 101 active delegates sign a valid database snapshot hash as legitimate.
Now consider a full client that is trying to bootstrap to the present state of the database, starting with a copy of the database as of block N1, the portion of the blockchain from block N1 to the present, and a checkpoint in the client of a block M more recent than block N1 (M > N1). The client does not know whether to trust the database copy it receives from the network, but it is able to calculate its hash. It first assumes this is the right database for the sake of evolving the database state into the future, and will later verify whether it was indeed correct.

The client also does not initially know whether blocks N1 to (M-1) are legitimate, but the checkpoint does verify the validity of block M, and because of the hash link, this also necessarily validates blocks N1 to (M-1). The client can then process blocks N1 to M by evolving the state of the database. At some point during this evolution, the client reaches block N2. It is then able to see that the active delegates at that time validated the hash of the database that the client started with. While the active delegates at the point of block N2 depend on the starting state of the database as of block N1, the client knows they are the real active delegates because otherwise the checkpoint would not match: if the starting state of the database were modified to change even one active delegate, their block signatures would be different and therefore the hash of block M would not match the checkpoint.

Because the client knows the active delegates, and because our security model already assumes the delegates will not double sign, the client knows that further evolution from block M to the present can be trusted as usual. So even if N2 > M, the client can still reach block N2 in a trust-free manner and verify that the starting database it received was in fact valid. Further evolution beyond that, as usual, can allow the client to reach the present state.
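The bootstrap process above could be sketched roughly as follows. All helper names (`serialize`, `apply`, `confirms_snapshot`, etc.) are assumed for illustration and do not come from any real client:

```python
import hashlib

def bootstrap(db_state, blocks, checkpoint_hash, M):
    """Bootstrap from a database copy as of block N1, plus the blockchain
    from block N1 onward, anchored by a hardcoded checkpoint at block M."""
    # Hash the untrusted starting database; we verify it later at block N2.
    starting_hash = hashlib.sha256(db_state.serialize()).hexdigest()
    snapshot_verified = False

    for block in blocks:                  # blocks after N1 up to the present
        db_state.apply(block)             # evolve the database state
        if block.height == M:
            # The checkpoint validates block M; the hash links back through
            # blocks N1..M-1 validate everything before it.
            if block.hash() != checkpoint_hash:
                raise ValueError("checkpoint mismatch: untrusted history")
        if block.confirms_snapshot(starting_hash, db_state.active_delegates()):
            # Block N2: the active delegates signed our starting hash.
            snapshot_verified = True

    if not snapshot_verified:
        raise ValueError("starting database never confirmed by delegates")
    return db_state
```

Note that the snapshot confirmation at N2 may come before or after the checkpoint at M; either way, both checks must pass before the starting database is considered valid.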
The above still provides the benefits of pruning legacy code from the client over time and speeding up the process of restoring the database from scratch for full clients. But what about the added benefits I described for lightweight client validation? Lightweight clients cannot be expected to scan through the blockchain, even a recent portion of it. Therefore, without some additional mechanism, they cannot find out how the delegate votes evolve and thus who the active delegates are. It is important to clarify that if the lightweight client doesn't know who the active delegates should be during the several blocks leading up to block N2, then it has no way to verify the proof that the hash of the database as of block N1 is indeed the correct one. Anyone could sign a hash of a fake database with 101 signatures claiming to be the active delegates. If the lightweight client believed that those were the active delegates, the attacker could supply it with a database proof saying anything the attacker wanted (such as the false existence of some balances under the victim's control, in order to pull off a double spend).
To deal with this issue, the lightweight client needs some way of knowing the set of active delegates at any block. We could require that the hash of the ordered set of current active delegates be included as part of the digest that the delegates must sign in every block in order for the block to be valid. A lightweight client could then download the block header of block M, which it knows is valid because of the checkpoint in the client. This block header immediately proves to the client whether a given set of active delegates as of block M is correct. Then, using only the small block headers, it could evolve this set of active delegates into the future. It can know which delegates are supposed to sign which blocks (using the active set and random numbers in block headers), and it can verify that the block headers it receives are properly signed by the intended delegates. In order for an attacker to supply fake block headers beyond block M that change the active set of delegates to a false one, at least 51 delegates in one of the rounds would have to collude to double sign fake blocks (which, by the way, would act as proof to get them fired, assuming they haven't been already). But that already breaks the security assumption we hold in DPOS, so it is reasonable to assume this will not happen as long as the checkpoint block M is not too far in the past (not, for example, older than 6 months). The lightweight client can of course store an up-to-date checkpoint (derived from the evolution of block headers) so that it can resume this process whenever it wants to find the more recent set of active delegates. It would also store the database hash of the most recent verified database snapshot (which it is able to verify is the correct one because it checked the validation signatures of the active delegates at the time).
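A rough sketch of this header-only evolution follows. The header fields and the delegate schedule are assumptions for illustration; real DPOS shuffles the signing schedule each round using block randomness, which is simplified to plain indexing here:

```python
import hashlib

def delegate_set_hash(delegates):
    """Hash of the ordered active-delegate set, as committed in each header."""
    return hashlib.sha256(",".join(delegates).encode()).hexdigest()

def scheduled_delegate(delegates, slot):
    # Simplified: real DPOS shuffles the schedule per round using block
    # randomness; here we just index into the ordered set.
    return delegates[slot % len(delegates)]

def evolve_delegates(headers, trusted_delegates, checkpoint_hash):
    """Walk headers forward from the checkpoint block M, verifying that each
    header is signed by the delegate the current active set schedules, and
    tracking changes to the active set along the way."""
    delegates = list(trusted_delegates)
    prev_hash = checkpoint_hash
    for hdr in headers:
        if hdr.prev_hash != prev_hash:
            raise ValueError("broken hash link")
        # Each header commits to its active set via the signed digest.
        if hdr.delegate_set_hash != delegate_set_hash(hdr.delegates):
            raise ValueError("delegate set does not match commitment")
        signer = scheduled_delegate(delegates, hdr.slot)
        if not hdr.verify_signature(signer):
            raise ValueError("header not signed by scheduled delegate")
        delegates = list(hdr.delegates)   # adopt the (possibly updated) set
        prev_hash = hdr.hash()
    return delegates
```

An attacker who cannot get 51 delegates in a round to double sign cannot produce a header chain that passes these checks while substituting a false active set.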
With this information kept up-to-date, the lightweight client can then easily verify provided proofs of the existence of some (key, value) tuple in a recent database snapshot.
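For example, if the snapshot hash were the root of a Merkle tree over the database's (key, value) tuples (an assumption for illustration; the actual snapshot format may differ), the proof check would look like:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(key: bytes, value: bytes, proof, snapshot_root: bytes) -> bool:
    """proof is a list of (sibling_hash, sibling_is_left) pairs, ordered
    from the leaf up to the root of the snapshot's Merkle tree."""
    node = h(key + b"=" + value)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == snapshot_root
```

The proof is logarithmic in the size of the database, so a server can cheaply convince the lightweight client that, say, a particular balance existed as of the latest verified snapshot, without the client downloading anything but the proof path.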