There is a known but fairly undocumented method of drastically improving SPV filtering that happens to remove this issue entirely, but it comes at the cost of requiring a hard-fork change to Bitcoin.

Traditional BIP37 SPV

Under BIP37, a client sets a bloom filter on each of its peers and sequentially downloads transactions and blocks, relying on the remote peer to sift through all of the block data on its behalf and produce pruned blocks that match the filter.
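
As a point of reference for the discussion below, a bloom filter is just a compact bit array that answers "possibly present" or "definitely absent" for any element. A minimal sketch follows; the real BIP37 filters use MurmurHash3 with a per-filter tweak, and SHA-256 is used here purely to keep the example self-contained:

    import hashlib

    class BloomFilter:
        """Minimal illustrative bloom filter: num_hashes bit positions per
        element over a size_bits-wide bit array."""

        def __init__(self, size_bits, num_hashes):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray((size_bits + 7) // 8)

        def _positions(self, element):
            # Derive num_hashes bit positions from the element bytes.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(bytes([i]) + element).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, element):
            for pos in self._positions(element):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def possibly_contains(self, element):
            # False positives are possible, false negatives are not.
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(element))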

This is undesirable for a number of reasons:

In Bitcoin, BIP37 bloom-filtered SPV clients have absolutely zero privacy, even when using unreasonably high false-positive rates.

Nodes in the network have to process an extremely large amount of data to return the results for just a single peer, and must repeat that work for every connected peer whenever a new block arrives.

BIP37 SPV clients can be lied to by omission.

Clients loading an old wallet, or rescanning one for transactions they may have missed, must download all of the filtered blocks again, at the cost of other network peers.

Bloom Filter Commitments

Before mining, a bloom filter would be deterministically constructed from all of the pubkey-hash (address) and pay-to-script-hash (P2SH) elements in the candidate block.
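
A sketch of what that deterministic construction could look like, reusing the BloomFilter class from the sketch above. The element-extraction rule, the filter parameters, and the choice of a single SHA-256 as the commitment hash are illustrative assumptions, not a specification:

    import hashlib

    def build_committed_filter(block_transactions, size_bits, num_hashes):
        """Deterministically build the filter every validator must agree on."""
        bf = BloomFilter(size_bits, num_hashes)
        for tx in block_transactions:
            for output in tx.outputs:
                # Hypothetical accessor: the 20-byte hash160 the output's
                # scriptPubKey pays to (P2PKH address or P2SH script hash).
                bf.add(output.payment_hash160)
        return bf

    def filter_commitment(bloom_filter):
        # Assumed commitment: a single SHA-256 over the raw filter bits.
        return hashlib.sha256(bytes(bloom_filter.bits)).digest()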

The hash of this bloom filter would be required to be placed near the top of the block template's Merkle tree, with the filter's contents not otherwise written into the block in any form.

When validating the block, this bloom filter would be deterministically reconstructed and verified to match the hash located in the Merkle tree. If the hash of the filter does not match, the block is invalid.
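
In code, the consensus check is then just a rebuild-and-compare, continuing the sketches above (where exactly the commitment sits in the tree is left abstract here):

    def check_filter_commitment(block_transactions, committed_hash,
                                size_bits, num_hashes):
        # Consensus-rule sketch: the block is only valid if the rebuilt
        # filter hashes to exactly the commitment found in the Merkle tree.
        rebuilt = build_committed_filter(block_transactions, size_bits, num_hashes)
        return filter_commitment(rebuilt) == committed_hash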

Fully validating nodes can cache the committed bloom filter to disk, or rebuild it at any time from the block data on disk as required (a time/storage tradeoff).

The new process for SPV clients:

Download block headers, the full committed bloom filter for every block in the chain, and the short Merkle path to the bloom filter commitment hash. Verify the header is valid, verify the Merkle path to the hash, verify the bloom filter matches the hash.

Locally compare the bloom filter against the user's wallet data (addresses, P2SH scripts) for potential matches; this step and the verification above are sketched in code after this list. The client independently decides which blocks are interesting to it and which are not, without having to actually download their contents. As with the normal operation of bloom filters in BIP37, false positives can occur, but never false negatives.

If a potentially interesting block is found, the client can download the entire block and process it to find the transactions it cares about.
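
Putting the client-side steps together, here is a sketch of the verification and matching logic, again building on the BloomFilter sketch above. The Merkle hashing follows Bitcoin's double-SHA256 convention; how the commitment hash appears as a leaf and how the filter bytes are encoded on the wire are assumptions of this sketch:

    import hashlib

    def double_sha256(data):
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def verify_merkle_path(leaf_hash, path, merkle_root):
        # `path` is a list of (sibling_hash, sibling_is_left) pairs, hashed
        # bottom-up with Bitcoin's double-SHA256 convention.
        node = leaf_hash
        for sibling, sibling_is_left in path:
            node = double_sha256(sibling + node if sibling_is_left else node + sibling)
        return node == merkle_root

    def block_is_interesting(filter_bytes, committed_hash, merkle_path,
                             merkle_root, wallet_elements, size_bits, num_hashes):
        # 1. The filter the peer handed us must hash to the committed value.
        if hashlib.sha256(filter_bytes).digest() != committed_hash:
            return False
        # 2. The commitment must actually be part of this block's Merkle tree.
        if not verify_merkle_path(committed_hash, merkle_path, merkle_root):
            return False
        # 3. Match wallet data locally; nothing about the wallet leaves the client.
        bf = BloomFilter(size_bits, num_hashes)
        bf.bits = bytearray(filter_bytes)
        return any(bf.possibly_contains(element) for element in wallet_elements)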

This neatly solves a number of issues with the traditional approach:

Random remote peers no longer get copies of the user's bloom filter. They may be able to infer some of that information from the blocks the client downloads, but the client can disguise this by downloading individual blocks from various sources, generating cover traffic with known-uninteresting blocks, or even fetching blocks from storage outside the Bitcoin network once it knows which blocks it is most likely interested in.

Remote peers can no longer lie by omission to an SPV client. If they alter the bloom filter committed in the block, it will no longer match the hash in the block, and the SPV client will know it has been deceived.

SPV clients can do quick rescans without needing to download any new data. By retaining the bloom filters after verifying them, they can run as many rescans against them as they like without having to communicate with the network (a local-rescan sketch follows this list). For full storage nodes, this means significantly faster rescans without having to load whole blocks.

Nodes in the network no longer bear any per-client load from serving SPV clients, beyond serving additional data when requested. It is their choice whether to cache the filters when verifying blocks (this comes at almost no cost, since they have to construct the filter anyway to validate). This scales significantly better than BIP37, as the data is the same for all peers rather than having to be computed individually for each one.
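
For example, a client that has kept its verified per-block filters can rescan for a newly imported address purely from local storage. A minimal sketch, assuming the filters are simply held in a map from block hash to the BloomFilter object from the earlier sketches:

    def rescan(stored_filters, new_wallet_elements):
        # stored_filters: block hash -> verified BloomFilter, kept locally.
        # Returns the block hashes that might contain relevant transactions;
        # only those blocks ever need to be fetched and scanned in full.
        return [block_hash
                for block_hash, bf in stored_filters.items()
                if any(bf.possibly_contains(e) for e in new_wallet_elements)]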

It's unclear if this would ever gain enough traction to be implemented in a hard fork, or what the optimal false-positive rate for the committed bloom filter would be (a decision that needs to be made carefully, since it can only be made once). There's a nasty usefulness tradeoff between the size of the filter and its false-positive rate: too coarse and peers download far too many false-positive blocks; too fine and the filters become gigantic and impractical for an SPV client to download.
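
For a rough feel of that tradeoff, the standard bloom filter sizing formula m = -n·ln(p)/(ln 2)² gives the number of bits needed for n elements at false-positive rate p. The element count used below (roughly 5,000 address/P2SH elements per block) is purely illustrative:

    import math

    def filter_size_bytes(num_elements, fp_rate):
        # Optimal bloom filter size: m = -n * ln(p) / (ln 2)^2 bits.
        bits = -num_elements * math.log(fp_rate) / (math.log(2) ** 2)
        return math.ceil(bits / 8)

    # Illustrative only: ~5,000 pubkey-hash/P2SH elements in a full block.
    for p in (0.1, 0.01, 0.001, 0.0001):
        print(f"p={p}: {filter_size_bytes(5000, p)} bytes per block")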

It's not obvious where that sweet spot is, or even if it exists.