I. Introduction and Executive Summary

Cryptocurrencies have been around now for just over a decade.1 Users and regulators have come to understand that they are far less anonymous than originally perceived.2 This has been a boon to law enforcement,3 but it has also dramatically curtailed the legitimate privacy interests of law-abiding persons who wish to use cryptocurrencies or related open blockchain technology.4 The present-day lack of cryptocurrency privacy is not, however, likely to last much longer.

Proposals to alter the software libraries powering existing cryptocurrencies5 as well as a range of next-generation cryptocurrencies6 promise to provide users with much greater transactional privacy while still enabling public certainty over the integrity of these systems. In essence, these systems can hide, or not record at all, the salient details of any particular transaction while still assuring users and the public generally that, across all transactions, there is no counterfeiting and transactions can only be authorized by persons who have previously received coins.7 In practice, using these cryptocurrencies is like using cash, i.e. tangible currency. In both cases, two people can pay each other without the need to trust an intermediary, and no information about these two people or the transaction they’ve just made need be released to the public or shared with any third party. These new cryptocurrencies truly offer users electronic cash. For clarity we will refer to these new technologies as electronic cash and to transactions made using them as electronic cash transactions.8

Similarly, regulators have come to expect that any exchange from one cryptocurrency to another will—by necessity—occur through trusted third parties, informally called cryptocurrency exchanges, which hold cryptocurrency for their users and match buyers and sellers of several currency pairs.9 As entities that accept and transmit currency substitutes10 for their users, these exchanges are regulated as “financial institutions” for purposes of the Bank Secrecy Act,11 and regulators have access to customer information from these exchanges.12 While this is unlikely to change with regard to exchanges between sovereign currencies and cryptocurrencies (due to the need for a trusted legal entity to maintain banking relationships in order to deal in sovereign currencies), it will soon no longer be the case for exchanges between cryptocurrencies and any other assets that are similarly blockchain-based.

Blockchain-based assets can be exchanged peer to peer without trusted intermediaries, with little friction, and with minimized counterparty risk thanks to the advent of blockchain-based smart contracts.13 Such smart-contract software can even facilitate the automatic creation of order books, the automatic matching of willing buyers and sellers on those books, and the settlement of trades without a third-party escrow provider.14 This allows for so-called decentralized exchange. During decentralized exchange, users retain custody of their cryptocurrency (rather than keep it with a trusted third party) and use smart contracts to trade them peer to peer. In essence, all the functions of a trusted third-party exchange can now be accomplished directly by the trading partners via software-based smart contracts and public blockchains capable of executing the logic of those smart contracts.15

The cumulative effect of these advances in technology is significantly less visibility into cryptocurrency transactions for the public, regulators, and law enforcement. Thanks to electronic cash transactions, data that would otherwise be available on a public blockchain may now be private to the transacting parties, and, thanks to decentralized exchange, many users seeking to exchange their cryptocurrencies for other cryptocurrencies may do so directly with each other rather than through a regulated third party, which could collect customer information.

Again, neither electronic cash nor decentralized exchange require trusted intermediaries of any kind. At the heart of these innovations lie only two types of parties:

- Users, who employ software tools and public blockchain networks to transact and exchange; and
- Software developers, who research, author, publish, and distribute source code that can be employed by the users to transact and exchange.

Users are, of course, culpable for their own illegal acts. However, aside from self-reporting their tax liabilities,16 they are not regulated and forced to collect and report to law enforcement information about their own lawful behavior or the lawful behavior of their commercial counterparties.17

Software developers are not culpable for unlawful acts committed by others using their research if they are unaware of those acts and lacked any intent to facilitate crimes.18 Indeed, software developers, to the extent they limit their activities to the publication of source code, are engaged in a protected speech act that cannot be regulated unless the government can prove a compelling state interest that could not be achieved through any less restrictive policy.19

Neither users nor developers are “financial institutions” as defined in the Bank Secrecy Act (BSA)—a financial surveillance statute that mandates recordkeeping and reporting in the U.S.20 The Secretary of the Treasury can, through rulemaking, define a new category of financial institution that includes either users or developers.21 However, such a rulemaking would likely be unconstitutional under the Fourth Amendment of the U.S. Constitution.22

The Fourth Amendment prohibits warrantless search and seizure of information over which persons have a reasonable expectation of privacy.23 Existing BSA recordkeeping and reporting requirements are constitutional despite collecting large amounts of information without warrants because bank customers are said to lose their reasonable expectation of privacy when they voluntarily hand this information over to a third party in furtherance of a legitimate business purpose of that third party.24 If users do not voluntarily hand this information to a third party because no third party is necessary to accomplish their transactions or exchanges, then they logically retain a reasonable expectation of privacy over their personal records, and a warrant would be required for law enforcement to obtain those records. Users cannot be forced to record and report their lawful activities without violating the Fourth Amendment’s warrant requirement.25

Similarly, financial institutions can be forced to record and retain customer data because their customers willingly hand that data over to them and because that data are essential to their conduct of legitimate business purposes.26 Developers of electronic cash and decentralized exchange software have no legitimate business purpose for collecting that data and users do not volunteer that information to developers when they use their software tools. Indeed, a software developer will likely be even less aware of who is using their tools than the author of a book would know who has bought a copy and read it. Deputizing software developers to collect this information as a prerequisite to publishing their software tools would be unconstitutional under the Fourth Amendment because it would constitute a warrantless seizure of information over which users have a reasonable expectation of privacy.

Faced with both (a) a decline in readily surveillable data on public blockchains and from BSA-regulated exchanges, and (b) the inability to constitutionally deputize new entities as BSA-obligated surveillance agents, regulators may seek to outlaw the publication of electronic cash or decentralized exchange source code, or condition its publication on the inclusion of backdoors that surreptitiously collect and report information to the government. Source code, the language by which developers communicate scientific and engineering ideas to each other and the world, is speech protected by the First Amendment.27 The government cannot ban the publication of types of speech, nor can it require a person to speak, unless it can prove a compelling state interest that could not be achieved through any less restrictive policy.28 Indeed, laws that require content-based licensing of speech carry a strong presumption of unconstitutionality that must be rebutted by the government when challenged in court.29 Any attempt to ban the publication of electronic cash and decentralized exchange source code, or any attempt to compel developers to rewrite their source code according to government strictures, would thus likely be found unconstitutional under the First Amendment.

In general, the emergence of electronic cash and decentralized exchange software challenges several assumptions of what is and is not regulated under existing law, and what can and cannot be regulated constitutionally even if Congress decided to create new law. This report is not aspirational or hypothetical. It does not advocate for new constitutional jurisprudence (e.g. the weakening of the third-party doctrine, or heightened scrutiny for compelled commercial speech). Rather, this report explains how new technologies fit or do not fit into uncontroversial statutory interpretations and existing, well-settled constitutional jurisprudence. The resulting analysis may be surprising to some who, for policy reasons, wish for greater regulatory authority over activities performed using this software, or others who are concerned about the effect that the emergence of electronic cash and decentralized exchange could have on law enforcement’s ability to find and apprehend criminals. Indeed the results may be especially surprising to those who harbored the incorrect belief that these technologies are no different than previous tools and therefore do not pose novel legal questions.

We will begin with a description of the technology behind electronic cash and decentralized exchange. Later, we will review the relevant constitutional law and analyze the constitutionality of certain hypothetical attempts to impose financial surveillance obligations onto software developers and users.

II. Technology Background

Rather than offer a comprehensive survey of the technology behind electronic cash or decentralized exchange, this section will be limited to a description of the aspects of the technology that are relevant to our discussion of constitutional law. At root, three aspects of these technologies are relevant to that discussion:

1. Unlike early transactions made with cryptocurrencies, electronic cash transactions can be completely private to the transacting parties and may leave no useful public record of the transaction on the blockchain.
2. Unlike a transaction made through a centralized cryptocurrency exchange, a decentralized exchange may be strictly peer-to-peer and may have no legal or business entity that powers the exchange service.
3. Both electronic cash and decentralized exchange originate from published software written in different computer languages. When that software is executed by diverse and unaffiliated persons around the world, it can facilitate an electronic cash transaction or decentralized exchange between participants. However, the development of that software is a separate activity (authorship) from the execution of that software (use), and the parties involved, authors and users, are distinct.

For more comprehensive information on these technologies we have added an Appendix to this report. The Appendix will be useful for readers who do not yet have a base of knowledge in cryptocurrencies and who wish to learn more about electronic cash and decentralized exchange, specifically: what they do, how they function, who builds them, and what that building process entails.

A. Electronic Cash Means Completely Private, Cash-Like Transactions

A typical bitcoin transaction leaves a plaintext30 record on the Bitcoin blockchain that includes:

- The bitcoin address or addresses the sender is using to fund the transaction,
- The recipient’s bitcoin address or addresses,
- The amount sent, and
- A digital signature that proves the sender’s control over the sending addresses.
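
These fields can be sketched, very loosely, as a small data structure. This is an illustration only: the field names below are hypothetical, and real transactions use Bitcoin's binary format of inputs, outputs, and script data.

```python
# Illustrative sketch only: field names are hypothetical; real bitcoin
# transactions are serialized in a binary format of inputs and outputs.
transaction = {
    "funding_addresses": ["1SenderAddressExample"],       # sender's address(es)
    "recipient_addresses": ["1RecipientAddressExample"],  # recipient's address(es)
    "amount_btc": 0.5,                                    # amount sent
    "signature": "3045...",                               # proves control of the funding addresses
}

# Anyone with a copy of the blockchain can read every field in plaintext.
for field, value in transaction.items():
    print(field, "->", value)
```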

Anyone with a computer and an internet connection can freely download a copy of the blockchain and view the entirety of this transactional data for every bitcoin transaction that has ever been made since the network’s inception in 2009.31 Public websites provide free tools for exploring this massive data set,32 and specialty blockchain analysis companies provide even more user-friendly solutions for visualizing this data and linking these addresses and their transactional history with real-world identities and organizations.33 In short, despite several incorrect headlines and reports,34 bitcoin transactions are not at all anonymous; they are, in fact, far less private than transactions made using a bank or credit card. As former DOJ prosecutor and Silk Road investigator Katie Haun has remarked, “If you wanted to cover your tracks and you were a good criminal, Bitcoin or cryptocurrency is one of the last things you should use.”35 It’s also the last thing you should use if you are a law-abiding person who does not want the world at large to see and potentially scrutinize your entire financial history.

As we describe in depth in the Appendix, this level of publicity about transactions exists in part to allow the entire network of cryptocurrency users to independently verify that transactions are valid.36 As Bitcoin was originally designed, verifying the integrity of the blockchain necessitated public visibility into the details of every transaction.37

Since Bitcoin’s inception in 2009, several technical proposals have emerged that would improve privacy for Bitcoin users without sacrificing public verification of the blockchain.38 Some of these proposals involve changes to wallet software that people would use to access the Bitcoin network and store their bitcoins,39 others involve new networking protocols built on top of the Bitcoin network that could shuffle bitcoins amongst several addresses and transactions,40 and some involve fundamental changes to the core Bitcoin protocol software itself.41 Several of these proposals have been implemented and already afford Bitcoin users greater privacy than typical transactions provide. And while the Bitcoin developer community has yet to incorporate into the Bitcoin Core software itself any proposal that would necessitate comprehensive changes, some have been developed and launched as separate, standalone cryptocurrencies and associated networks.42

The details of this technological evolution are described in the Appendix. For our purposes, however, it is sufficient to know that this work is ongoing and that it allows for peer-to-peer cryptocurrency transactions that leave no plaintext record of sender or recipient addresses and no plaintext record of the amount sent on the blockchain. This information, if it is available to anyone at all, is kept private to the transacting parties who, in some of these systems, may also be able to share it with others (effectively decrypting otherwise unreadable information on the blockchain) using so-called view keys.43 This functionality is generally referred to as selective disclosure.44
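
As a rough intuition for view keys, one can picture transaction details recorded on the public chain only in encrypted form, readable solely by whoever holds the matching key. The sketch below is a deliberately insecure toy (a hash-derived XOR keystream), not the cryptography of Zcash, Monero, or any real protocol:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: repeated SHA-256 of the key (NOT a real cipher).
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def encrypt(view_key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(view_key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

# The sender records only ciphertext on the public blockchain...
view_key = b"holder-of-this-key-can-read"
on_chain_blob = encrypt(view_key, b"pay 2.5 coins to address Q")

# ...and can later hand the view key to an auditor or counterparty,
# selectively disclosing this one transaction and nothing else.
assert decrypt(view_key, on_chain_blob) == b"pay 2.5 coins to address Q"
```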

Despite this lack of transaction publicity, mathematical proofs built into these software projects allow the public at large to verify the integrity of the blockchain without learning the details of any specific transactions.45 Trust in the scarcity of the underlying coins and the provenance of transactions is generated by an open set of impartial validators around the world just like Bitcoin’s miners.46 Unlike Bitcoin, however, privacy is guaranteed in these protocols by neglecting to share any information about transactions with these validators or the public at large except for the minimized amount of information necessary to prove scarcity and provenance. Additionally, selective disclosure systems ensure that counterparties and third parties can be given visibility into the details of any particular transaction whenever the initiator (and the initiator alone) wishes to be transparent or is compelled to be transparent by regulation or law.
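
One way to build intuition for how validators can check scarcity without seeing amounts is the toy commitment sketch below. It uses Pedersen-style commitments with illustrative parameters; real systems rely on elliptic curves, range proofs, and far larger numbers, so treat this strictly as a classroom analogy:

```python
import secrets

# Toy Pedersen-style commitments (illustration only).
p = 2**127 - 1   # a Mersenne prime used as a toy modulus
g, h = 3, 5      # toy generators

def commit(amount: int, blinding: int) -> int:
    # The commitment hides `amount` behind a random blinding factor.
    return (pow(g, amount, p) * pow(h, blinding, p)) % p

def product(commitments) -> int:
    result = 1
    for c in commitments:
        result = (result * c) % p
    return result

# The sender hides the amounts (7 in; 4 + 3 out) and balances the blindings.
r2, r3 = secrets.randbelow(2**64), secrets.randbelow(2**64)
r1 = r2 + r3
inputs = [commit(7, r1)]
outputs = [commit(4, r2), commit(3, r3)]

# Commitments are additively homomorphic, so validators can confirm that
# no coins were created without ever learning the hidden amounts:
assert product(inputs) == product(outputs)
```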

There’s no widely accepted term for these software projects or the private transactions that they can enable. For clarity we will refer to this category of software as “electronic cash software” and this category of transactions as “electronic cash transactions.” Like cash, these new tools allow payments to be made directly, person to person, without leaving any authoritative record of the parties involved or how much money changed hands.47

B. Decentralized Exchange Means No Trusted Third Party

One can only make electronic cash transactions if one has obtained the underlying cryptocurrency of that blockchain (bitcoins if using Bitcoin with additional software to augment privacy, or some other cryptocurrency such as Zcash or Monero if using a new, privacy-focused blockchain). There are only two ways to obtain these cryptocurrencies: (1) participate in the blockchain consensus mechanism and receive rewards for your contributions in the form of newly minted cryptocurrency (i.e. mining),48 or (2) receive cryptocurrency from someone who already has it, either as a gift, as payment of wages, or in exchange for other valuables (i.e. exchange).

Historically, mining has not been an activity well-suited to non-technical individuals, and it may even be cost-prohibitive for all but the most expert mining entrepreneurs when the relevant blockchain is secured by a highly competitive proof-of-work consensus mechanism (e.g. Bitcoin).49

Therefore, the vast majority of cryptocurrency users will obtain their coins through an exchange. It is, of course, possible to find and meet individuals—either in person or over the internet—who would willingly sell some of their cryptocurrency holdings in exchange for cash or various other forms of electronic value transfer. In this scenario, the seller would transfer the cryptocurrency directly to the buyer by making a blockchain transaction to an address generated by a software wallet on the buyer’s phone or other device. The buyer would pay the seller by whatever means is convenient. This approach, however, can carry risks. One party could take payment and fail to carry out the exchange, in-person meetings could result in robbery or other injury should one of the parties turn out to be criminal, and—even in the best circumstances—it may be difficult to find a counterparty with the amount and type of cryptocurrency one wishes to purchase.

Frictions associated with such direct exchange have resulted in the emergence of several so-called centralized cryptocurrency exchanges.50 These are, speaking generally, legally incorporated businesses with websites and banking relationships for accepting payments. Through their websites, these businesses allow users to establish accounts, fund those accounts with sovereign currencies through ACH or similar transfers, and then may serve as either a broker for persons wishing to buy cryptocurrencies or a matcher of buyers and sellers on their platform.

These centralized exchanges will also secure cryptocurrencies on behalf of their customers. Such arrangements are often referred to as custodial wallets, as contrasted with user-secured software wallets. In the context of a software wallet, cryptocurrency is received and kept in blockchain addresses that have associated cryptographic keys generated and secured directly on the user’s phone or computer. A custodial wallet will secure cryptocurrency in blockchain addresses whose matching cryptographic keys are safeguarded by the centralized exchange rather than by its customers.

As entities that accept and transmit currency substitutes51 for their users, these centralized cryptocurrency exchanges are regulated as financial institutions under the Bank Secrecy Act in the United States,52 and regulators have access to customer information from these exchanges.53

Decentralized exchange is best understood as a verb rather than a noun. Our earlier description of a direct person-to-person exchange is a decentralized exchange in the sense that two parties somehow find each other and trade their valuables without relying on any trusted third party in between. Advances in cryptocurrency software, however, can streamline this process and mitigate the risks otherwise associated with meeting a stranger and trusting them to honor their side of a bargain. We describe this software briefly below, but first a caveat: these software-powered decentralized exchanges are only possible for cryptocurrency-to-cryptocurrency trades. Trading sovereign currencies will always require either (A) some trusted third party with banking relationships or (B) physical cash, which necessitates in-person dealing.

Decentralized exchange software falls under the general umbrella of so-called smart contracts.54 For our purposes, a smart contract is simply a transaction made using cryptocurrency that has associated rules governing its execution, wherein these rules are enforced by the underlying blockchain itself rather than by some outside arbiter or legal entity. These rules could be as simple as: using the bitcoin at address X, pay one bitcoin to address Y, if and only if the 567,238th block has been added to the Bitcoin blockchain. These rules would be expressed in computer code rather than English and would need to be written in the particular coding language native to the blockchain on which the smart contract is meant to execute.55 Bitcoin blocks arrive every 10 minutes on average and, as of this writing, the blockchain is 565,222 blocks long. Therefore, this transaction message is, in effect, a one-bitcoin check payable to address Y that is post-dated to about two weeks in the future. Unlike a post-dated check, however, where we would rely on a bank to only cash it if the date was current, this transaction does not rely on any third party to execute its rules. If the recipient has the signed transaction message, she can submit it to the Bitcoin network and miners will put it in the blockchain when it is current and only once it is current. Any miner attempting to put it in the blockchain before block 567,238 would have her block automatically rejected by the rest of the network because it would contain an invalid transaction according to the rules of Bitcoin’s computing language. A contract-like conditional payment is made even though no third party is required to judge or enforce the condition; though it is simple, this is the essence of a smart contract.
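
The validation rule in that example can be sketched in ordinary code. This is an illustration of the logic every node applies, not Bitcoin's actual Script language:

```python
# Sketch of the conditional-payment rule: pay one bitcoin to address Y,
# valid only once the 567,238th block exists. (Not real Bitcoin Script.)
UNLOCK_HEIGHT = 567_238

def transaction_valid(current_chain_height: int) -> bool:
    # Every validating node applies the same rule, so a block that includes
    # this transaction too early is rejected by the rest of the network.
    return current_chain_height >= UNLOCK_HEIGHT

assert not transaction_valid(565_222)  # today: the post-dated "check" cannot be cashed yet
assert transaction_valid(567_238)      # roughly two weeks later: miners may include it
```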

Software for facilitating decentralized exchange is not much more advanced than this simple example. The computer code would simply describe a payment that is conditional on proof of some other payment being recorded on the blockchain. Various additional rules and conditions can be written as well, for example:

A rule to cancel the payment of either party (returning the cryptocurrency to the sending address), if and only if their counterparty fails to make their payment within a set time period,

A set of rules that make the contract an open-ended offer from the buyer at a set price. Anyone who finds the buyer’s signed transaction message (perhaps it’s posted on social media) can become the buyer’s seller if and only if they are the first to do so on the blockchain.

Some blockchain computing languages will even allow for rules that reference data on other blockchains such that payments on both chains are mutually codependent. This allows for so-called cross-chain atomic trades wherein a decentralized exchange could take place between users of two different blockchain networks (e.g. an exchange of bitcoin for ether).
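
One common building block for such cross-chain trades is a shared hashed lock. The toy sketch below (not any particular network's contract code) shows the core idea: claiming funds on one chain reveals the secret that unlocks the funds on the other, so the two payments succeed or fail together:

```python
import hashlib
import secrets

# Toy hashed-lock sketch of a cross-chain atomic trade (illustration only;
# real atomic swaps also include refund timeouts on both chains).
secret = secrets.token_bytes(32)
lock = hashlib.sha256(secret).digest()

def claim(candidate: bytes, hashlock: bytes) -> bool:
    # Each chain's contract releases funds only to whoever reveals the
    # preimage of the same hashlock.
    return hashlib.sha256(candidate).digest() == hashlock

# Alice locks bitcoin under `lock`; Bob locks ether under the same `lock`.
# When Alice claims Bob's ether she must reveal `secret` on-chain...
assert claim(secret, lock)
# ...and that on-chain revelation lets Bob claim Alice's bitcoin with the
# very same secret, making the two payments mutually codependent.
assert not claim(b"wrong guess", lock)
```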

Finally, decentralized exchange software can even be written that allows trading parties to store and access buy and sell offer information (i.e. an orderbook) in the blockchain or some other decentralized data store, and to utilize a matching engine whose logic is also executed by the blockchain so that trades happen automatically whenever signed offers to buy and sell overlap.
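
A matching rule of that kind can be sketched in a few lines. Here it runs as ordinary Python; on an actual decentralized exchange, equivalent logic would execute as blockchain-enforced contract code against signed orders:

```python
# Toy orderbook and matching engine (illustration only): a deterministic
# rule pairs resting buy and sell offers whenever their prices cross.
buys = [{"trader": "A", "price": 101}, {"trader": "B", "price": 99}]
sells = [{"trader": "C", "price": 100}, {"trader": "D", "price": 102}]

def match(buys, sells):
    trades = []
    for buy in sorted(buys, key=lambda o: -o["price"]):       # best bid first
        for sell in sorted(sells, key=lambda o: o["price"]):  # best ask first
            already_filled = [t[1] for t in trades]
            if sell not in already_filled and buy["price"] >= sell["price"]:
                trades.append((buy, sell))
                break
    return trades

trades = match(buys, sells)
# Only A (bid 101) and C (ask 100) overlap, so exactly one trade settles.
assert len(trades) == 1
```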

Some decentralized exchange software may rely on certain centralized parties to perform certain functions within the otherwise decentralized exchange. For example, centralized parties could be relied upon to store orderbook data or to actively match buyers and sellers. Then, once matched, the trade itself takes place directly and peer-to-peer using the smart contract. The cryptocurrency community will often call this arrangement a decentralized exchange even though there were certain centralized components, because the cryptocurrency always stayed in the custody of the participants and no third party ever had to be trusted to keep it safe. This quasi-centralization has also led to regulatory consequences for persons playing the centralized role within otherwise decentralized exchanges.56 We do not argue in this paper that there are constitutional barriers to regulating these centralized parties (we also do not intend to suggest there are not). Instead, this paper focuses exclusively on the users of electronic cash and decentralized exchange software and the authors of that software.

C. Electronic Cash and Decentralized Exchange are Powered by Software

At heart, developing electronic cash or decentralized exchange software is an academic engineering challenge like any other. There’s prior work from which to draw inspiration: decades of computer science research,57 cryptographic literature,58 and existing cryptocurrency software, which for all major networks is open-source and available without payment or licensing.59 There’s creative and innovative work to be done: forging new mathematical proofs, translating old ideas into new languages, and combining past work into novel and useful arrangements. As with any scientific inquiry, this process is ongoing and never-ending, and thousands of people around the world are actively contributing to the body of research.60 Periodically there are published results, both academic papers written in prose that describe new software tools as well as the software itself, written in a range of common coding languages.

Those published results, on their own, do not create electronic cash or decentralized exchange. Instead, the published software explains—in exacting detail—how one would make an electronic cash transaction or a decentralized exchange. Software is not self-executing; it’s a set of instructions, like a recipe for a meal or a musical score for a performance. Once published, it’s up to people around the world to follow those instructions.61 Software makes this a bit easier than performing a Beethoven sonata or baking a soufflé, because the instructions are so complete that they require little skill or improvisation and because their users can exploit a machine that can read the instructions, a computer, to do most of the work. But the users are essential nonetheless: they must run the software on their internet-connected computers, and it’s only once those computers start working together as a network62 that some usable functionality, like electronic cash or decentralized exchange, becomes possible.

The primary effect of these advances in technology is cryptocurrency networks that protect the privacy of their users. Developers and advocates genuinely believe that such tools are necessary to protect human dignity and autonomy, and argue that they are of profound political and societal importance in a world where transactions are increasingly surveilled and controlled by a handful of private financial intermediaries and powerful governments.63 A secondary effect of these advances is significantly less visibility into cryptocurrency transactions for regulators and law enforcement. Thanks to electronic cash transactions, data that would otherwise be public on a blockchain may now be private to the transacting parties, and, thanks to decentralized exchange, many users seeking to exchange their cryptocurrencies for other cryptocurrencies may do so directly with each other rather than through a regulated third party, which could collect customer information.

Faced with this reduction in surveillable information, governments may seek to extend Bank Secrecy Act obligations to electronic cash or decentralized exchange software developers or to the users of this software. This would be unconstitutional under the Fourth Amendment. Similarly, governments may seek to ban or condition the distribution of electronic cash software, or compel developers to introduce surveillance-friendly vulnerabilities or backdoors into their software; this would be unconstitutional under the First Amendment. Each of these arguments will be discussed in turn.

III. Electronic Cash, Decentralized Exchange, and the Fourth Amendment

The Fourth Amendment prohibits warrantless search or seizure of a person’s home and private papers.64 However, since 1970, a financial surveillance law, the Bank Secrecy Act, has mandated the bulk collection of customer information by banks and other financial institutions as well as automatic reporting of that data to regulators and law enforcement.65 This sweeping surveillance regime is arguably both a seizure and search of private financial information and it operates without warrants.66 The Supreme Court found this to be constitutional because customers willingly hand their information over to banks and banks have a legitimate business purpose that requires the collection and retention of that information; thus, the banks’ customers lose their reasonable expectation of privacy with respect to that information and no warrant is required for its seizure by government or by private entities deputized by government (e.g. banks).67

As we have just described, electronic cash and decentralized exchange work without the need to trust an intermediary like a bank or other financial institution and may leave little or no information about user transactions public on the blockchain for use by law enforcement.68 If regulators wish to impose Bank Secrecy Act obligations upon entities in the electronic cash or decentralized exchange space, the only possible targets would be the software developers of electronic cash protocols and decentralized exchange smart contracts or the persons running that software on the internet. Would the imposition of such obligations upon these parties be constitutional under current Fourth Amendment jurisprudence?

A. Fourth Amendment Protections Apply to Electronic Messages

The Fourth Amendment reads:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.69

Generally speaking, the Fourth Amendment prohibits warrantless searches and requires that warrants only be issued for searches of particularly described places with probable cause.

Much jurisprudence has been devoted to determining precisely when actions taken in a police or other government investigation constitute a search and therefore require a warrant.70 For many years, this inquiry hinged on an Anglo-Saxon common law interpretation of privacy that focused on physical trespass.71 When novel questions of electronic surveillance, such as wiretapping, emerged in the 1960s, the Supreme Court had to determine whether intrusions upon persons’ otherwise private communications constituted a search even if there was no physical trespass onto the property or person of the searched individual.72 Similarly, the Court had to grapple with whether the bulk collection of data made possible by electronic surveillance violated the Fourth Amendment’s “particularity requirement” clause, which requires that warrants only be granted for searching places that are “particularly described.”73

In the landmark 1967 case on these questions, Katz v. United States, the Court held that “[t]he Fourth Amendment protects people and not simply ‘areas’ against unreasonable searches and seizures, and… [the] Amendment cannot turn upon the presence or absence of a physical intrusion into any given enclosure.”74 The Court concluded that even immaterial intrusions using technology could qualify as a search and created a new test to determine when the Fourth Amendment’s protections should apply: whenever a person has a “reasonable expectation of privacy.”75

Also in 1967, the Court in Berger v. New York held that statutes authorizing sweeping eavesdropping via electronic surveillance may violate the particularity requirement76 of the Fourth Amendment and therefore constitute impermissibly general warrants unless they provide procedural safeguards to prevent overcollection.77 The opinion of the Court “condemns electronic surveillance, for its similarity to the general warrants out of which our Revolution sprang and allows a discreet surveillance only on a showing of ‘probable cause.’ These safeguards are minimal if we are to live under a regime of wiretapping and other electronic surveillance.”78

Further, the Court in Berger suggested that if it is not possible to narrow the scope of electronic data collection to fit the warrant requirement then such evidence will simply be inadmissible. The Court reasoned: “It is said that neither a warrant nor a statute authorizing eavesdropping can be drawn so as to meet the Fourth Amendment’s requirements. If that be true, then the ‘fruits’ of eavesdropping devices are barred under the Amendment.”79

The Court also addressed concerns that law enforcement would lose visibility and fail to prevent crimes, effectively urging investigators to try harder with other, less invasive techniques and suggesting that privacy must sometimes trump security in order to preserve freedom. The moral panic identified by the Court in many ways resembles present-day concerns over encryption, cryptocurrencies, and “going dark.”80

As the Court reasoned,

It is said with fervor that electronic eavesdropping is a most important technique of law enforcement, and that outlawing it will severely cripple crime detection. … In any event, we cannot forgive the requirements of the Fourth Amendment in the name of law enforcement. … [I]t is not asking too much that officers be required to comply with the basic command of the Fourth Amendment before the innermost secrets of one’s home or office are invaded. Few threats to liberty exist which are greater than that posed by the use of eavesdropping devices. Some may claim that, without the use of such devices, crime detection in certain areas may suffer some delays, since eavesdropping is quicker, easier, and more certain. However, techniques and practices may well be developed that will operate just as speedily and certainly and—what is more important—without attending illegality.81

When making an electronic cash or a decentralized exchange transaction, a person’s private ‘papers’ and ‘effects’ may now be data in the form of encoded messages sent over the internet. As with the early examples of electronic communications in Katz and Berger, the mere fact that these messages are electronic and exist outside the home poses no barrier to their continued protection against warrantless search, so long as the person to whom they belong has a reasonable expectation of their privacy.

B. The Third-Party Doctrine

In Katz, the Court held that data knowingly exposed to the public would not be protected, for the subject of the search would have lost her reasonable expectation of privacy. The Court held that “[w]hat a person knowingly exposes to the public, even in his own home or office, is not a subject of Fourth Amendment protection. But what he seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.”82

Thus any information that a cryptocurrency user shares publicly, say by posting transaction data to a blockchain, will, of course, be freely available to regulators and law enforcement to search without any warrant or particularized suspicion. An electronic cash transaction may not, however, result in much publicly available information being recorded on the blockchain. In essence, the blockchain records encrypted data and displays it publicly in an unintelligible form; as in Katz, it is “preserved as private” but is displayed “in an area accessible to the public.” It follows that this private but accessible information will be constitutionally protected.
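The Katz analogy can be made concrete. In a shielded transaction, the chain can publish only a cryptographic commitment: a digest that is fully public yet reveals nothing about the amount or parties without a private "opening" held by the user. The sketch below uses a simple hash commitment to illustrate the idea; it is a toy model, not the construction of any particular electronic cash protocol (deployed systems typically pair commitments with zero-knowledge proofs), and the address string is purely illustrative.

```python
import hashlib
import secrets

def commit(amount: int, recipient: str) -> tuple[str, bytes]:
    """Commit to transaction details under a random blinding factor.

    Returns (public_commitment, private_opening). Only the commitment
    is posted to the chain; the opening stays with the user.
    """
    blinding = secrets.token_bytes(32)
    preimage = blinding + amount.to_bytes(8, "big") + recipient.encode()
    return hashlib.sha256(preimage).hexdigest(), preimage

def verify(commitment: str, opening: bytes) -> bool:
    """Anyone holding the opening can check what was committed."""
    return hashlib.sha256(opening).hexdigest() == commitment

# The chain records only a 32-byte digest: public, but unintelligible
# without the opening. (Hypothetical address string for illustration.)
public_record, private_opening = commit(5, "example-recipient-address")
assert verify(public_record, private_opening)
assert len(bytes.fromhex(public_record)) == 32
```

In this toy model, verification still requires the private opening; in real electronic cash systems, zero-knowledge proofs play the role of `verify` without the opening ever being revealed, which is how the public retains certainty over system integrity while the transaction details remain private.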

In United States v. Miller (a case about bank records that we will return to in greater detail below) and Smith v. Maryland (a case about telephone company records) the Court further fleshed out the reasonable expectation standard, holding that “a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.”83 This has come to be known as the third-party doctrine84 and is currently used to justify warrantless data collection from electronic intermediaries such as Google or Amazon.85

Recently, the third-party doctrine has come under attack from justices and legal scholars who believe it is predicated on an outmoded understanding of the modern information landscape and who fear that it is today used to enable truly massive private data collection with little to no judicial process or accountability.86 As people increasingly hand the entirety of their private correspondence and data over to cloud service providers and other online intermediaries, there grows, effectively, a gaping hole in our once comprehensive Fourth Amendment protections.87 As Justice Sotomayor wrote in a concurrence to the 2012 United States v. Jones case,

More fundamentally, it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties. This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.88

Adjacent to the third-party doctrine is the question of whether the third party in question has a legitimate business purpose to collect information about their customers in the first place, and whether the customer voluntarily provided the information. This question is pertinent because it speaks to the customer’s reasonable expectation of privacy. If I am willing to keep my private files unencrypted with a data storage provider, then I have reason to believe they may no longer be private. If, however, I am surreptitiously recorded by my doctor while being examined, I have no reason to believe that this interaction should not be private. Again, as stated in Katz, “what he seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.”89 Thus the question of whether personal information obtained by a third party is protected by a warrant requirement under the Fourth Amendment is not merely: Is the information still private to the searched party or has it been obtained by a third party? It must also ask: If obtained by a third party, did the third party have a legitimate business purpose to seek and retain that information and did the person voluntarily provide it?

This question was central to the Smith v. Maryland decision, although it was dealt with swiftly in that context.90 The controversy in Smith centered on whether law enforcement can collect records of phone numbers dialed (not recordings of phone conversations had) from telephone companies without a warrant or particularized suspicion of certain subscribers.91 The Court reasoned that whenever a caller dials numbers into her phone, she “voluntarily convey[s]”92 that information to the phone company as a necessary and obvious step in making a call. Moreover, phone companies have “legitimate business purposes”93 for recording that information. The Court therefore found that “although subjective expectations cannot be scientifically gauged, it is too much to believe that telephone subscribers, under these circumstances, harbor any general expectation that the numbers they dial will remain secret.”94 Without that reasonable expectation of privacy, the records of numbers dialed were deemed unprotected by the Fourth Amendment.

In United States v. Miller the Court dealt with the same question in the context of bank records.95 It found that bank customers could “assert neither ownership nor possession”96 of the documents; they were “business records of the banks.”97 The particular nature of the records and the necessity of their revelation in order to conduct business were, again, core to the customers’ privacy expectations. The Court found that the “contents of the original checks and deposit slips” are not private correspondence, but rather they are “negotiable instruments to be used in commercial transactions.”98 As with the phone numbers dialed in Smith, bank customers understand that they must hand this information over to the third party as a means of conducting business; how else would the phone company know whom they wish to call, or the bank whom they wish to pay? As the Court found, “all the documents obtained contain only information voluntarily conveyed to the banks and exposed to their employees in the ordinary course of business.”99

A recent case before the Court brought the question of legitimate business purposes and the third-party doctrine to a head. In Carpenter v. United States the Court refused to extend the reasoning behind the third-party doctrine to cellular phone location data collected by telecommunications providers.100 Instead, the Court found that a warrant was required to search or seize this data from cellular service providers.101 Cell phone users reveal their location to service providers because the radios on their devices regularly connect to multiple cell phone towers simultaneously (even when the user is not making a call). Thus it is a simple matter of triangulating signal strength in order to determine with high accuracy where the customer’s phone is at all times. To find that this third-party location data was protected unlike checks and phone number data in Smith and Miller, the Court had to distinguish why such data was either not voluntarily provided or went beyond a legitimate business purpose.
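The mechanism the Court describes can be sketched in a few lines. A weighted centroid over tower coordinates, in which stronger signals pull the estimate closer to a tower, is a toy stand-in for carriers' actual, more sophisticated triangulation; the tower positions and signal values below are invented for illustration.

```python
def estimate_position(towers):
    """Toy location estimate from cell tower readings.

    `towers` is a list of ((x, y), signal_strength) pairs. A stronger
    signal suggests the handset is nearer that tower, so that tower's
    coordinates receive proportionally more weight in the average.
    """
    total = sum(strength for _, strength in towers)
    x = sum(px * s for (px, _), s in towers) / total
    y = sum(py * s for (_, py), s in towers) / total
    return x, y

# Three towers heard by the handset, even with no call in progress.
readings = [((0.0, 0.0), 1.0), ((10.0, 0.0), 1.0), ((5.0, 10.0), 2.0)]
assert estimate_position(readings) == (5.0, 5.0)  # pulled toward the strong tower
```

The point of the sketch is the one the Court makes: the raw material for this estimate accumulates automatically whenever the phone is on, with no affirmative act by the user.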

On the question of volition, the Court reasoned that the information was never voluntarily “shared” by customers because of the ubiquity of cell phones, their necessity to everyday life, and the fact that they simply cannot be used without revealing that data.102 The Court found that “Apart from disconnecting the phone from the network, there is no way to avoid leaving behind a trail of location data. As a result, in no meaningful sense does the user voluntarily assume the risk of turning over a comprehensive dossier of his physical movements.”103

On the question of legitimate business purposes, the Court noted that in both Miller and Smith the records in question were at the core of the legitimate business purpose of the third party.104 A phone company must know the number that their customer wishes to reach. The bank must know the name of the person the customer wishes to pay. The warrantless data collection in those cases was limited to those key items that customers must understand as essential to their use of the business’ services; items that a reasonable customer would expect the third party to have and retain. With cellular location data, however, the Court found that “there are no comparable limitations on the revealing nature” of the information sought.105 A cell phone company need not know the customer’s location at all times to connect calls, and subscribers would not expect them to have and retain this information as a condition of receiving cell service.

Customers understand that the numbers they ask to be connected with must be shared in order to be connected in a call. They do not contemplate trading the full revelation of their day-to-day movements merely because they wish to check their email. Interestingly, this holding does not argue that there is no legitimate business purpose that could justify the telecommunications providers collecting and retaining that data (surely knowing where your customers are is important to providing them with good mobile phone connectivity).106 Instead, it argues that the data sought by law enforcement was ancillary to the data that a customer would reasonably expect to provide within the context of the business relationship.107 It is data that may be legitimate for the business to obtain, but it is not essential to the provision of the service and is beyond the business purpose as the customer understands it and therefore within her reasonable expectation of privacy.108

The technology behind a digital cash transaction or a decentralized exchange is designed to obviate the need for users to hand any personal data over to any third party. Indeed, these systems are designed such that no trusted third party need even exist for the transaction or exchange to take place. Therefore, it would be impossible to argue that the users of these systems voluntarily hand any personal data over to any third party when they transact. A user will construct her electronic messages to be compatible with the electronic cash protocol or decentralized exchange smart contract that she chooses to use, but this data alone will likely not be useful to regulators or law enforcement109 and it will certainly not include typical financial transaction data like the name or physical address of the user. Regardless of its lack of usefulness to law enforcement, this is the only data that a user of these protocols must provide in order to obtain the desired result and, consequently, it is the only data for which the user would no longer have a reasonable expectation of privacy.

No third party within these systems must know any additional information about the user for the transaction to take place; thus, it would be impossible to argue that such extra data was essential to the conduct of any supposed third party’s business purposes.110 To argue otherwise is equivalent to suggesting that envelope manufacturers have a legitimate business purpose in learning what letters people mail, or that safe manufacturers have a legitimate business purpose in learning what valuables people keep in their safes.

Lacking publicly available information about the user’s transaction and lacking a third party to whom the user has voluntarily revealed information pursuant to a legitimate business purpose, the only constitutional path to a search of information in an electronic cash transaction or decentralized exchange must, by necessity, go through the user herself, and that must require particularized suspicion of the user and a warrant from a judge.

Faced with these limitations, regulators may seek to deputize some other third party to collect additional information about these transactions. Again, because electronic cash and decentralized exchange transactions can be performed by the user(s) alone with nothing more than software and an internet connection, the only possible target for such deputization would be the software developers who invented the tools that the users employ.111 This would be a radical shift from the current administration of financial surveillance statutes. As we shall see in the next subsection, the Bank Secrecy Act has always taken for granted the existence of a third party that would already have a business-customer relationship and would already be in possession of customer transaction data. The question of surveillance now turns on whether regulators can impose similar reporting obligations on parties that would otherwise have no more connection to an illegal transaction than a car manufacturer would have to a bank robbery getaway vehicle.

C. The Bank Secrecy Act

The Bank Secrecy Act (BSA)112 is a federal law that orders financial institutions to collect and retain certain information about their customers and share that information with the Department of the Treasury.113

The BSA applies to “financial institutions,” but the statute only offers loose definitions of various subcategories of financial institution,114 and grants power to the Secretary of the Treasury to craft new or more specific definitions through notice and comment rulemaking, thus expanding the range of businesses subject to the Act.115 The statute also does not spell out what sorts of records or reports must be made, but rather it authorizes the Secretary to prescribe by regulation certain recordkeeping and reporting requirements. The Secretary may mandate that financial institutions “require, retain, or maintain” as well as “report” to Treasury any records determined to have a “high degree of usefulness in criminal, tax, or regulatory investigations or proceedings.”116

The regulations implementing the Bank Secrecy Act117 (henceforward the “implementing regulations”) thereby determine both its breadth (which businesses are financial institutions) and depth (what degree of recordkeeping and reporting are required). These regulations have evolved over the years. With respect to domestic financial transactions made by customers of regulated financial institutions, the original implementing regulations only included insured banks within the ambit of financial institutions and only required recording and maintenance of identity information for their customers and those with signing authority, copies of checks drawn against the bank for over $100, and any extension of credit exceeding $5,000.118 The original implementing regulations also only required financial institutions to make reports to Treasury whenever a customer made a deposit, withdrawal, or other transfer involving “a transaction in currency of more than $10,000.”119 Thus for domestic transactions involving constitutionally protected U.S. persons, only those made with physical cash necessitated reports. These reports are referred to as Currency Transaction Reports or CTRs.

Today, the implementing regulations have significantly expanded. The definition of “financial institution” has grown from banks and a handful of similar businesses120 to include securities broker-dealers, telegraph companies, casinos, dealers in foreign exchange, check cashers, issuers or sellers of traveler’s checks or money orders, providers and sellers of prepaid access, money transmitters, and the U.S. Postal Service.121 The domestic reporting obligations also expanded in 1996 to include “suspicious activity reports” or SARs.122 SARs must be filed for every transaction or series of structured transactions over $5,000 (if the reporting financial institution is a bank) or over $2,000 (otherwise) whenever the financial institution “knows, suspects, or has reason to suspect” that the transaction:

“involves funds derived from illegal activities or is intended or conducted in order to hide or disguise funds or assets derived from illegal activities,” is designed to evade any requirements of regulations promulgated under the Bank Secrecy Act; or “has no business or apparent lawful purpose or is not the sort in which the particular customer would normally be expected to engage…”123
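The trigger for a SAR can be rendered schematically. The sketch below encodes only the two dollar thresholds and the suspicion element quoted above; it deliberately omits the aggregation of structured transactions and the other regulatory nuances, and the function name is ours, not the regulation's.

```python
def sar_required(amount: float, is_bank: bool, suspicious: bool) -> bool:
    """Schematic rendering of the SAR trigger: a report is due when
    the institution knows, suspects, or has reason to suspect illicit
    activity AND the transaction clears the applicable threshold --
    $5,000 for banks, $2,000 for other financial institutions.
    """
    threshold = 5_000 if is_bank else 2_000
    return suspicious and amount > threshold

# A $3,000 transfer triggers a SAR at a money transmitter but not a bank.
assert sar_required(3_000, is_bank=False, suspicious=True)
assert not sar_required(3_000, is_bank=True, suspicious=True)
assert not sar_required(10_000, is_bank=True, suspicious=False)
```

Note that the filing decision turns entirely on the institution's own suspicion, with no judicial actor anywhere in the logic, which is the feature of the regime examined in the constitutional analysis that follows.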

The inclusion of SAR reporting has spurred a massive increase in the amount of data reported under the Bank Secrecy Act to Treasury. SAR reporting has grown from around 60,000 SARs per year in 1996 when the rule was promulgated to 3,000,000 per year in 2017.124

Aside from SARs and CTRs, any additional information sought by Treasury from financial institutions will be released only via “existing legal process.”125 In other words, any examination of other records, whose collection is mandated under the BSA but whose reporting is not, would necessitate either a judge-issued warrant (if the Fourth Amendment applies, which we will discuss next) or a mere subpoena (if the Fourth Amendment does not apply). SARs and CTRs do not require warrants or any other form of judicial process and must be automatically filed by regulated financial institutions with Treasury.

In short, the Bank Secrecy Act mandates the collection of an incredible amount of personal financial data and the reporting of that data to the government for purposes of criminal investigation without any particularized suspicion, finding of probable cause, or warrant. It is a program of warrantless mass surveillance. How is it constitutional?

D. The Constitutionality of the Bank Secrecy Act

It is unknown if the Bank Secrecy Act as currently applied is constitutional. Two cases brought not long after the law’s passage in 1970, California Bankers Association v. Shultz126 and United States v. Miller,127 found that it passed constitutional muster as applied in the implementing regulations of the day. As was just explained, however, the scope of the implementing regulations has expanded tremendously since that time.

In Shultz, the plaintiffs—a trade association of California bankers joined by the ACLU—argued that the BSA’s recordkeeping requirements were unconstitutional because they effectively made financial institutions agents of the government surveillance apparatus and directed them to seize records containing the personal information of their customers. The Court articulated why the third-party doctrine excluded those records from a customer’s reasonable expectation of privacy and therefore obviated any warrant requirement for such a seizure:

Plaintiffs urge that, when the bank makes and keeps records under the compulsion of the Secretary’s regulations, it acts as an agent of the Government, and thereby engages in a ‘seizure’ of the records of its customers. But all of the records which the Secretary requires to be kept pertain to transactions to which the bank was itself a party. …. The fact that a large number of banks voluntarily kept records of this sort before they were required to do so by regulation is an indication that the records were thought useful to the bank in the conduct of its own business, as well as in reflecting transactions of its customers.128

As with telephone numbers, the nature of checks and other negotiable instruments is such that customers must make certain pertinent facts available to their bank in order for any meaningful business to be accomplished. For example, a check must say who is paying whom in order to be cashed, or a series of dial tones must describe the called number in order to be connected. Furthermore, the Court reasoned that because the recorded information (presumably still held privately by the banks) would only be obtained by investigators by way of “existing legal process,” and because no such particular process (e.g. a subpoena for records) was yet being challenged (plaintiffs were challenging the statute and the implementing regulations generally), it could not find any constitutional defect with the recordkeeping scheme as implemented.129 This would not be the only instance in the Shultz opinion in which the Court punted on a critical issue because of standing and ripeness.

Plaintiffs also argued that the reporting requirements violated the Fourth Amendment as a warrantless search, but the Court found that neither plaintiff could bring such a claim. The bankers association could not claim to represent the rights of customers harmed by the reporting requirement,130 and the ACLU, while it did have accounts with BSA-regulated banks, had not engaged in any currency transactions over $10,000, and therefore would never have been the subject of a CTR report.131 No harm no foul. These claims would have to wait for the next case, Miller, to be tested.

However, in separating the analysis between the seizure of records, which was discussed in Shultz, and the search, which would have to wait for Miller, the Court may have prejudged the outcome. As Justice Marshall, in a scathing dissent from the Shultz majority, wrote:

The seizure has already occurred, and all that remains is the transfer of the documents from the agent forced by the Government to accomplish the seizure to the Government itself. Indeed, it is ironic that, although the majority deems the bank customers’ Fourth Amendment claims premature, it also intimates that, once the bank has made copies of a customer’s checks, the customer no longer has standing to invoke his Fourth Amendment rights when a demand is made on the bank by the Government for the records. By accepting the Government’s bifurcated approach to the recordkeeping requirement and the acquisition of the records, the majority engages in a hollow charade whereby Fourth Amendment claims are to be labeled premature until such time as they can be deemed too late.132

Justice Marshall’s concern proved prescient. In Miller, the respondent had been indicted, effectively, for conspiracy to make moonshine, and the evidence at stake in the indictment was a series of transactions he had made through his bank for cargo van rentals, radio equipment, and metal piping.133 The bank had records of these transactions that it retained as per the implementing regulations of the BSA, and, when subpoenaed by the Treasury Department’s Alcohol, Tobacco and Firearms Bureau, the bank turned these records over to investigators.134

Again, the Court held that Miller had no reasonable expectation of privacy over these records because he had knowingly revealed this information to the bank during the usual course of business; the records were as much the bank’s information as Miller’s, and the bank was free to share them with law enforcement through the usual, warrantless legal processes:

The checks are not confidential communications, but negotiable instruments to be used in commercial transactions. All of the documents obtained, including financial statements and deposit slips, contain only information voluntarily conveyed to the banks and exposed to their employees in the ordinary course of business.135

The Court refused to entertain Miller’s arguments that it was the combined compulsion of the bank by the government to collect the information in the first place and the subsequent subpoena of that information once collected that constituted a search and seizure. Instead it merely analyzed, separately, whether Miller had a reasonable privacy expectation over the copies of the checks (no, because they are business records) or the original checks that were copied (no, because they were willingly handed over to a third party).136

Again, Justice Marshall lambasted the bifurcated analysis as a sham:

Today, not surprisingly, the Court finds respondent’s claims to be made too late. Since the Court in [Shultz] held that a bank, in complying with the requirement that it keep copies of the checks written by its customers, “neither searches nor seizes records in which the depositor has a Fourth Amendment right,” [] there is nothing new in today’s holding that respondent has no protected Fourth Amendment interest in such records. A fortiori, he does not have standing to contest the Government’s subpoena to the bank. … I wash my hands of today’s extended redundancy by the Court.137

In a separate dissent, Justice Brennan warned of the danger inherent in permitting such broad and judicially unchecked surveillance. Especially prescient was his concern over the characterization of persons’ provision of information to banks as “voluntary.” He wrote:

For all practical purposes, the disclosure by individuals or business firms of their financial affairs to a bank is not entirely volitional, since it is impossible to participate in the economic life of contemporary society without maintaining a bank account. In the course of such dealings, a depositor reveals many aspects of his personal affairs, opinions, habits and associations. Indeed, the totality of bank records provides a virtual current biography. … Development of photocopying machines, electronic computers and other sophisticated instruments have accelerated the ability of government to intrude into areas which a person normally chooses to exclude from prying eyes and inquisitive minds. Consequently, judicial interpretations of the reach of the constitutional protection of individual privacy must keep pace with the perils created by these new devices.138

This analysis, although it is in a dissent and carries no legal authority, states almost exactly the concern that ultimately swayed the Court in Carpenter some 40 years later:

Cell phone location information is not truly ‘shared’ as one normally understands the term. In the first place, cell phones and the services they provide are “such a pervasive and insistent part of daily life” that carrying one is indispensable to participation in modern society. … [I]n no meaningful sense does the user voluntarily “assume the risk” of turning over a comprehensive dossier of his physical movements.139

Finally, it is important to remember that the constitutionality of the BSA as adjudged in Shultz and Miller was only “as applied” in the implementing regulations of the 1970s.140 As noted above, since the 1970s the BSA’s reach has expanded both in the number of businesses it treats as financial institutions and in the quantity and type of transaction reports those financial institutions are required to file. To our knowledge, for example, the constitutionality of domestic SARs has never been challenged or vindicated. Neither has the application of the BSA to businesses that are not traditionally understood to be financial institutions, such as casinos or retail sellers of prepaid cards.

The tenuous nature of the BSA’s constitutionality is underscored by the vote count in Shultz. The majority opinion of the Court is matched with a concurrence authored by Justice Powell and joined by Justice Blackmun. Had these two justices sided with the dissenters the outcome would have been 5-4 against the BSA’s constitutionality. Powell’s concurrence specifically says that his opinion is predicated on the narrow application of the BSA that existed at the time:

A significant extension of the regulations’ reporting requirements, however, would pose substantial and difficult constitutional questions for me. In their full reach, the reports apparently authorized by the open-ended language of the Act touch upon intimate areas of an individual’s personal affairs. Financial transactions can reveal much about a person’s activities, associations, and beliefs. At some point, governmental intrusion upon these areas would implicate legitimate expectations of privacy. Moreover, the potential for abuse is particularly acute where, as here, the legislative scheme permits access to this information without invocation of the judicial process. In such instances, the important responsibility for balancing societal and individual interests is left to unreviewed executive discretion, rather than the scrutiny of a neutral magistrate.141

Powell subsequently authored the majority opinion in Miller, but made clear that constitutionality was predicated on the narrowness of the investigation into Miller’s moonshine operation and the judicial process that accompanied it:

We are not confronted with a situation in which the Government, through “unreviewed executive discretion,” has made a wide-ranging inquiry that unnecessarily “touch[es] upon intimate areas of an individual’s personal affairs.” California Bankers Assn. v. Shultz, 416 U.S. at 416 U. S. 78-79 (POWELL, J., concurring). Here the Government has exercised its powers through narrowly directed subpoenas duces tecum subject to the legal restraints attendant to such process.142

With the introduction of SARs in the 1990s, the question alluded to above becomes: Is the automatic reporting of over three million transactions and associated personal details a “wide-ranging inquiry that unnecessarily touches upon intimate areas” of Americans’ personal affairs? Is it “unreviewed executive discretion” when this flow of personal data is the direct result of new implementing regulations that do not require investigators to seek a single subpoena or engage in any other judicial process?143

E. Regulating Software Developers Under the BSA Would be Unconstitutional

The surveillance obligations imposed on financial institutions by the BSA have only been found constitutional as they were applied in the 1970s implementing regulations. Since then, we’ve seen a substantial expansion in the number of businesses categorized as financial institutions as well as the depth of the domestic reporting requirements they must undertake.

The constitutionality of that regime as it currently stands is predicated on the third-party doctrine. Justices have already substantially weakened that doctrine with respect to location data and cellular service providers.144

Under the BSA, the Secretary of Treasury could, in theory, classify developers of electronic cash and decentralized exchange software as financial institutions through rulemaking and attempt to mandate their compliance with BSA recordkeeping and reporting obligations. In effect, the regulator would be ordering these developers to alter the protocols and smart contract software they publish such that users must supply identifying information to some third party on the network in order to participate and such that suspicious transactions are reported to the regulator and potentially blocked as per a reasonably calibrated anti-money laundering program.

It is unclear whether this would even be technologically feasible short of merely turning a decentralized cryptocurrency network into, in effect, a centralized payments provider like a custodial money transmitter or a bank. It’s also stunning to imagine that the BSA could be used to force a person to entirely change their line of business from being a developer who authors software tools and releases them to the public to becoming a centralized financial services provider with all of the attendant regulatory burdens. In effect, it’s like asking a novelist to stop merely publishing stories and instead become an improvisational actor willing to participate in every reader’s experience of their books.

It is clear that this would be tantamount to an unconstitutional warrantless seizure and search of information over which users of electronic cash and decentralized exchange have a legitimate privacy expectation—an expectation that has not been abrogated by handing said information over to any third parties. These technologies are explicitly designed to operate without third parties. Developers are not third parties to transactions nor to any other interaction with users. They never have control over customer funds (indeed they may have no customers), nor need they even have any actual interaction with the peer-to-peer networks their software makes possible.

It is true that the BSA placed obligations on banks to collect and retain information that they might not otherwise have collected, and one could argue that an obligation on software developers to collect cryptocurrency-user information would be no different. However, the holdings of Shultz and Miller are very clear. In those cases the mandate was not a seizure of customer records because the mandate only “pertain[ed] to transactions to which the bank was itself a party.”145 It involved only information voluntarily handed over to the bank from its customers, and that information was limited to conducting the legitimate business purpose of operating a bank (e.g. signatures on negotiable instruments, payment instructions, and the like).146

A developer of electronic cash or decentralized exchange software does not have any legitimate business purpose to collect information about the users of their software. Indeed, such collection is anathema to the business purpose in which the developer has presumably engaged: the publication of software with strong privacy and security guarantees (e.g. no back doors or surveillance). Nor would users be voluntarily providing this information to the developer if they were operating under the misapprehension that the electronic cash or decentralized exchange software was delivering upon its stated purpose of enabling private transactions or cryptocurrency exchange without an intermediary. In effect, the users’ information would be surreptitiously captured while they operated under the false belief that the tools they were using honored their expectations of privacy.

If a developer of electronic cash or decentralized exchange software publicly announced that they were voluntarily incorporating BSA-style surveillance into their tools, users who continued to use those tools would likely lose their reasonable expectation of privacy over any information they provided when they used those tools. However, it is hard to imagine that every developer of electronic cash or decentralized exchange software would suddenly choose to voluntarily surveil the users of their software, even under pressure from law enforcement (many are not located in the U.S.). It is even more unbelievable that users would continue to use tools that had known backdoors if previous versions of the software without backdoors continued to exist in online archives or on peer-to-peer file sharing networks, or if other developers continued to offer more private alternatives.

If a developer refused to comply with a regulator’s demand that they add surveillance backdoors into their tools and the regulator either ordered them to cease publishing their software or compelled them to add the backdoor through a legal order then two additional constitutional questions would surface:

Is a licensing requirement or ban on the publication of electronic cash or decentralized exchange source code an unconstitutional prior restraint on protected speech? And is an order to publish electronic cash or decentralized exchange source code only with surveillance backdoors unconstitutionally compelled speech?

To answer these questions and the threshold matter of whether electronic cash or decentralized exchange source code is constitutionally protected speech, we must turn from the Fourth Amendment to the First.

IV. Electronic Cash, Decentralized Exchange, and the First Amendment

The First Amendment prohibits the content-based regulation of expressive speech unless the government can prove a compelling state interest that could not be achieved through any less restrictive policy.147 If electronic cash or decentralized exchange source code is expressive speech, then a publication ban or licensing requirement on developers would be presumed unconstitutional unless the government can prove in court that banning that software or licensing its publication achieves a compelling state interest that could not be achieved through any less restrictive policy. Similarly there would be a presumption of unconstitutionality if a law or regulation attempted to compel developers to rewrite their source code to include backdoors.148

Rarely do courts faced with bans on speech of a certain type or content find that the government’s interest is truly compelling and not achievable through less restrictive policies. Therefore, cases usually hinge on whether the speech is indeed protected and what level of protection it deserves. The remainder of this report argues that electronic cash and decentralized exchange source code is protected speech and that laws banning or requiring licensing for its publication, as well as laws compelling developers to alter their speech, should be presumed unconstitutional and must face strict scrutiny, rather than a lower standard such as intermediate scrutiny, upon judicial review.

A. Computer Code is Protected Speech

The Supreme Court has yet to hold generally that programs written in computer code are protected speech. However, holdings in cases dealing with novels, musical scores, and blueprints strongly suggest that computer code would be protected speech, and two recent cases related to video games and prescription datasets establish broad tests for whether any electronic data (software included) would qualify as protected speech. Lower courts have taken varied approaches, and some have found that computer code is protected speech because it is expressive conduct, like flag burning or nude dancing. As we shall discuss, this conduct-based approach has split the circuits, is misguided, offers lesser protection from regulation, and has no support in Supreme Court precedent.

i. Computer Code Expresses Ideas for Political and Social Change

In Roth v. United States, the Supreme Court found that “the First Amendment was fashioned to assure unfettered interchange of ideas for the bringing about of political and social changes desired by the people.”149 Generally, the particular medium through which ideas are expressed is inconsequential to First Amendment protection. If it is an idea of at least modest “political and social” significance, the Court certainly does not discriminate.150 It protects ideas regardless of the medium in which they are presented, even if it is gibberish or visual chaos. As the Court has found, the category of “unquestionably shielded” speech includes a “painting of Jackson Pollock, music of Arnold Schöenberg, or Jabberwocky verse of Lewis Carroll.”151

As discussed earlier,152 open source computer code shared over the internet is directly intended to convey the scientific and engineering ideas of a given project to other developers, including current collaborators, potential future collaborators, researchers, and the general public who may wish to use these tools and seek assurances of their correct operation, which can only be achieved through publicity and transparency. If digital tools derived from this science and engineering will be employed to, for example, organize social behavior on the internet, then their source code certainly holds at least as much social and political significance in the 21st century as a schematic of a steam engine or a blueprint for an amphitheater would have held in previous ages.

Indeed, the “unfettered interchange of ideas”153 found in computer code is the primary motivation behind open source software development as a practice. Rather than cloister one’s software project within the developer staff of a single corporation by enforcing copyrights, trade secrets, and other restrictions on dissemination through a proprietary software model, open source software development principles eschew copyrights and restrictive licenses, push for better ways to clearly and publicly display source code for review, and seek to solicit the widest possible audience in order to increase the odds that a member of that audience will catch errors that would otherwise go undetected or find opportunities for innovation that would otherwise have been ignored. This ethos is long established and well-captured in developer Eric Raymond’s landmark 1997 essay The Cathedral and the Bazaar.154 All major electronic cash and decentralized exchange software projects rigorously adhere to this open source model of development. Canonical changes to that software are only made after an exhaustive round of public sharing and discussion of the code itself.155

Moreover, computer code underlies systems we rely upon daily to organize our society—from email clients to traffic lights, police surveillance cameras to social networking websites and—more recently—private decentralized money and exchange. Everything we do (and cannot do) on those platforms and with those tools is mediated by software and ideas expressed in code. Anyone can learn to read the languages in which this code is written in order to elevate and formulate their view of debates surrounding these technologies, and anyone who has learned those languages can invent and suggest new and different ideas, including alternatives to the systems of today. Developers may learn these skills because they think they can build better, safer tools for organizing society, enabling individual freedom, or limiting the freedom of those who would do others harm.

Say what one will about the deservedly mocked mantra of Silicon Valley, “make the world a better place,” but software does make the world.156 Source code and the creative and scientific expression it contains now represent a substantial quantity of the world’s “ideas for the bringing about of political and social changes desired by the people.”157 Many remain surprised and even alarmed that new languages are actively being used to fundamentally reshape the landscape of human interaction. But to deny this fact is to deny everything that has changed in our lives since the advent of digital computing. Similarly, to deny statements made in coding languages like C++158 or Rust159 the same protections we would grant statements made in English would make no more sense than to deny novels protection when they are written in French, symphonies protection because they are written in musical notation, or scientific papers protection because they tend to be filled with arcane graphs and formulae.

At least under the broad standard articulated by the Court in Roth, electronic cash and decentralized exchange software should be protected speech. A rigorous analysis, however, is not that simple. As we shall unpack in the next two subsections, some lower courts have muddled what should be a straightforward analysis by treating code as expressive conduct rather than speech, meaning it is subject to weaker First Amendment protections. By contrast, recent Supreme Court cases have eschewed this conduct-based approach and articulated extremely broad tests for what qualifies as strongly protected speech in the digital age. Later we will describe the different levels of protection (i.e. strict vs. intermediate scrutiny) to which various types of expression (i.e. expressive conduct vs. speech) are entitled, and the importance of this seemingly academic debate will be clear: if electronic cash or decentralized exchange software is found to be expressive conduct rather than speech, it is entitled to substantially weaker protections.

ii. Publishing Computer Code is a Speech Act, Not Symbolic Conduct

The Supreme Court has yet to hold generally that programs written in computer code are protected speech. That said, it has also never explicitly found that short stories written in Russian are protected speech or that oboe concerti written in musical notation are protected speech. Some lower courts have begun to analyze this question under the jurisprudence of expressive conduct.160 These cases rely on the Spence161 and O’Brien162 tests for expressive conduct developed in earlier holdings from the Court. As we will argue later at length, these lower-court applications of Spence and O’Brien are misguided approaches to the question of whether computer code is protected speech. Those cases dealt with actions, not mere ideas: hanging a flag upside down in Spence,163 and burning a draft card in O’Brien.164 Actions may be expressive, but they can also have more immediate and dangerous consequences than mere words. Burning a building down may express someone’s feelings about that building, but it also presents obvious risks to life and property. Therefore, even if an expressive action, like burning a flag, is found to be speech, it will often be entitled to less-strict protection from regulation.

Computer code, however, is not an expressive or symbolic action. It is, quite literally, itself a written series of symbols, i.e. letters and numbers or, once compiled, 0s and 1s. It is not like a musical performance, but rather like the printed score for an orchestra’s conductor or the printed roll for a player piano. While it is true that people will use computer source code to perform actions (just as one might use the musical score to perform music), the act of writing and sharing the code is an entirely separate act from the act of executing the code. Each or both may be protected speech, but they must be analyzed separately: analysis of the act of executing the code must use the Spence and O’Brien tests for expressive conduct, and analysis of the act of writing and sharing the code must use the same standards we use for authorship of novels or musical scores as articulated in Roth.165 To conflate the analysis and judge both the authorship and execution of code under Spence and O’Brien is to treat an impromptu performance of the 1812 Overture (cannons and all) the same as the moment Tchaikovsky put pen to paper on his musical score. The potentially disruptive performance should rightly and constitutionally be subject to somewhat prescriptive regulation, while the mere act of writing the music in notes and clefs on paper should not.

As we have discussed, making electronic cash or decentralized exchange transactions involves executing computer code. We do not argue in this report that the act of executing that code and actually transmitting or exchanging cryptocurrency is protected speech. (It may be protected speech in several contexts, but if we were making this argument we would likely need to use the Spence and O’Brien tests to determine whether a symbolic action is protected speech.) This report is concerned only with the developers of computer code and whether they can be banned from publishing code, made to get a license to publish it, or compelled to alter the code they publish such that it has surveillance backdoors. Although it is unlikely, a developer of electronic cash or decentralized exchange software may go her whole life without ever making an electronic cash transaction or performing a decentralized exchange. The question of whether she deserves First Amendment protection hinges not on what actions others may use her software to perform but merely on whether she, simply by publishing, has engaged in protected speech.

iii. Electronic Cash and Decentralized Exchange Software Are Protected Speech

In two cases, Brown v. Entertainment Merchants Association166 and Sorrell v. IMS Health Inc.,167 the Supreme Court has found that some computer programs and some digital data are worthy of protection as speech. It did not use the Spence or O’Brien test in either determination.

In Brown, the Court found that video games were protected speech and that even violent ones could not be banned from sale. Some scholars believe that Brown articulated a new, narrow standard for when novel modes of expression would be entitled to First Amendment protections.168 For example, lawyer Andrew Tutt writes:

Rather than reach beyond video games to software generally, the Court zeroed in on video games and held that they were speech because they communicated ideas through familiar literary devices. The Court reasoned that video games were speech because they expressed ideas in familiar ways: “Like the protected books, plays, and movies that preceded them, video games communicate ideas—and even social messages—through many familiar literary devices (such as characters, dialogue, plot, and music) and through features distinctive to the medium (such as the player’s interaction with the virtual world).”169

Tutt views the Court’s failure to analyze the underlying code itself, and its focus on the analogous content between video games and more traditional entertainments, as indicative of a narrow standard: “Brown’s test is probably best read as defining ‘new speech’ as that which is directly analogous in presentation and mode to ‘old speech.’”170 Tutt, however, makes too much of this holding. The Court does not at any point hold that it is identifying a new standard that conflicts with or narrows previous interpretations, such as those in Roth. Instead, the Court holds that it is sufficient for a finding of protected speech that new modes of expression are analogous to old modes. At no point does the Court suggest that it is necessary for the new mode to bear this resemblance. As the Court held, resemblance “suffices to confer First Amendment protection.”171 Even if resemblance were now necessary rather than sufficient, open source software would easily be analogous to scientific publications shared amongst experts, which are protected as speech.172

In Sorrell, the Court articulated a surprisingly broad standard of what constitutes protected speech. It found that the mere “creation and dissemination of information” constitutes speech within the meaning of the First Amendment.173 Sorrell dealt with a law that “on its face” enacted “content- and speaker-based restrictions on the sale, disclosure, and use of prescriber-identifying information.”174 The Court found that a Vermont law limiting sales of and access to records of which medicines doctors prescribe “disfavors marketing, that is, speech with a particular content” and “disfavors specific speakers, namely pharmaceutical manufacturers.”175 Vermont contended that the sale, transfer, and use of prescription data was conduct and not speech (as we discussed earlier and will return to in the next section), but the Court rejected this argument out of hand, adding that:

Facts, after all, are the beginning point for much of the speech that is most essential to advance human knowledge and to conduct human affairs. There is thus a strong argument that prescriber-identifying information is speech for First Amendment purposes.176

The computer code within electronic cash and decentralized exchange systems is heavily laden with facts that advance human knowledge and allow us to conduct human affairs. If the mathematical facts about discrete logarithms, above all the difficulty of reversing them, were not well understood, to give one example, we would struggle to engage in any secure electronic conversation.177 Bank records, government secrets, and copyrighted content would all be up for grabs if not for pioneering advances in the science of applied cryptography. These are advances that, by and large, have always been best uncovered and expressed in computer code.
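The connection between discrete logarithms and secure communication can be made concrete with a toy Diffie-Hellman key exchange, the classic protocol built on the difficulty of the discrete logarithm problem. This Python sketch is illustrative only: the tiny prime and secrets are chosen for readability, whereas a real deployment would use a modulus thousands of bits long (or an elliptic curve).

```python
# Toy Diffie-Hellman key exchange resting on the discrete logarithm problem.
# Illustrative parameters only -- real deployments use far larger numbers.
p = 23               # public prime modulus
g = 5                # public generator
alice_secret = 6     # known only to Alice
bob_secret = 15      # known only to Bob

# Each party publishes g^secret mod p. Recovering a secret from its public
# value is the discrete logarithm problem: trivial here, infeasible at scale.
alice_public = pow(g, alice_secret, p)   # 5^6 mod 23 = 8
bob_public = pow(g, bob_secret, p)       # 5^15 mod 23 = 19

# Combining the other party's public value with one's own secret yields the
# same shared key on both sides, without that key ever being transmitted.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)
assert alice_key == bob_key              # both equal 2 with these numbers
```

An eavesdropper who sees p, g, and both public values must solve a discrete logarithm to learn either secret; the well-understood hardness of that problem is the factual bedrock on which the security rests.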

Therefore, even though there is no conclusive holding from the Supreme Court on the specific topic of computer code’s classification as protected speech, we can reasonably assume, based on older cases such as Roth178 as well as recent holdings such as Sorrell, that the issue would be non-contentious: it’s protected. Setting aside the issue of expressive conduct vs. speech, every court of appeals to rule on this issue has held that code is protected expression worthy of at least some First Amendment protections.179

However, as we shall see in the next two sections, the finding that code is protected expression does not mean that it cannot be regulated. Much depends on the nature of the speech and the concomitant level of scrutiny that regulations impacting that speech will face.

B. Strict vs. Intermediate Scrutiny for Regulation of Protected Speech

As we have discussed, electronic cash and decentralized exchange software is protected under the First Amendment. However, not all protected expression is protected equally. For our purposes, there are two standards of review that courts may use to judge the constitutionality of laws regulating electronic cash or decentralized exchange software: strict scrutiny and intermediate scrutiny.

Strict scrutiny is formulated such that a law or regulation will be found unconstitutional unless it is “narrowly tailored to serve a compelling state interest.”180

Intermediate scrutiny, on the other hand, is an easier hurdle for laws and regulations to clear. As the Second Circuit found in Universal City Studios, Inc. v. Corley, under intermediate scrutiny:

The regulation must serve a substantial governmental interest, the interest must be unrelated to the suppression of free expression, and the incidental restriction on speech must not burden substantially more speech than is necessary to further that interest.181

While this test may not appear drastically different from the strict scrutiny formulation above, in practice its application is significantly less charitable to speech. As constitutional scholar Ashutosh Bhagwat writes,

[I]n applying intermediate scrutiny to reconcile governmental interests with free speech claims, the appellate courts have tended to systematically favor the government. Although the balance that the courts have drawn in individual cases is often perfectly defensible, and indeed may be an inevitable consequence of the form of analysis mandated by the intermediate scrutiny test, [we] show that the aggregate consequence of this governmental preference is the suppression of substantial amounts of important, socially valuable speech.182

Symbolic conduct, like burning a flag, is only entitled to intermediate scrutiny because of the obvious public safety issues inherent in actions rather than words. When the standard of review is intermediate scrutiny, laws regulating speech tend to be upheld as constitutional and speech can be suppressed.183 Advocates for continued research and development of electronic cash and decentralized exchange software should not, therefore, accept that these tools are protected because they are symbolic conduct. Instead, they must argue that these tools are not conduct, but speech, and that their publication by developers is an entirely separate matter from their use by other persons to perform actions in the world. Aside from being more likely to garner strong constitutional protection, this approach is also correct.

With one exception, lower court judges have found that computer code is a hybrid of speech and conduct because it is “functional.”184 This is a misguided approach that has not been adopted by the Supreme Court185 and that should be avoided by electronic cash and decentralized exchange advocates.

For example, in Junger v. Daley the Sixth Circuit held that “[t]he fact that a medium of expression has a functional capacity should not preclude constitutional protection. Rather, the appropriate consideration of the medium’s functional capacity is in the analysis of permitted government regulation.”186 At root, Junger suggests that if the code is functional then it is both conduct and expression. As expressive conduct, laws regulating its publication and distribution would be subject only to intermediate scrutiny thereby permitting more restrictive government regulation.

Some commentators187 suggest that these lower court judges have misunderstood how software works, failing to distinguish between source code, which is primarily used by developers to express new systems and share their ideas with other developers, and object code, the compiled form of source code that actually triggers a computer to do something functional.188

Even if that were the case, and even if we accept that judges should be better at discriminating between the two types of code, why should object code be expressive conduct rather than speech? After all, object code is merely a unique and often important arrangement of digits or bits.189 Returning to the musical metaphor, source code would be the composer’s score, a piano roll would be the object code, and the player piano would be the computer. Object code can in fact be read by particularly sophisticated developers in order to understand a message.190 Piano rolls too are used by musicians to share music; some may even be more adept at reading this style of musical notation than a traditional score.191 Regardless of whether we’re discussing dots and dashes on a roll of paper or 1s and 0s in a computer file,192 how can the creation and dissemination of these unique arrangements of data be anything but the “creation and dissemination of information,”193 which is the Supreme Court’s standard for speech in Sorrell? The Oxford English Dictionary defines “information” as “what is conveyed or represented by a particular arrangement or sequence of things.”194
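The distinction between source and object code, and the point that object code remains readable information, can be seen in miniature using Python’s built-in compiler and disassembler. (Python compiles to bytecode rather than native machine code, but the analogy holds; the `double` function is a made-up example.)

```python
import dis

# Source code: a human-readable expression of an idea.
source = "def double(x):\n    return x * 2\n"

# Compilation turns that readable text into object code -- here Python
# bytecode rather than machine code, but the same distinction in
# miniature: an opaque-looking arrangement of bytes.
module = compile(source, "<example>", "exec")
namespace = {}
exec(module, namespace)
double = namespace["double"]

# The object code is just a sequence of bytes (exact values vary by
# Python version)...
print(double.__code__.co_code)

# ...which a practiced reader can nonetheless interpret, much as a
# skilled musician can read a piano roll.
dis.dis(double)

# Writing and sharing `source` is one act; executing the code is another.
print(double(21))  # prints 42
```

Nothing in the compilation step adds or removes meaning; the score and the piano roll encode the same composition in different notations.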

Again, counter to the lower court in Junger, the Court in Sorrell felt no need to address Vermont’s argument that prescription data was conduct, and held that “if the acts of ‘disclosing’ and ‘publishing’ information do not constitute speech, it is hard to imagine what does fall within that category, as distinct from the category of expressive conduct.”195

In Corley, at least, the district court judge (who was praised and quoted heavily by the Second Circuit)196 did not appear to misunderstand software but rather felt that the ease with which an otherwise purely expressive piece of source code could be compiled into object code and executed by the user of a computer meant that, for all intents and purposes, the code should be regulated as conduct as well as expression.

As the district judge wrote:

Computer code, … no matter how functional, causes a computer to perform the intended operations only if someone uses the code to do so. Hence, one commentator, in a thoughtful article, has maintained that functionality is really ‘a proxy for effects or harm’ and that its adoption as a determinant of the level of scrutiny slides over questions of causation that intervene between the dissemination of a computer program and any harm caused by its use. The characterization of functionality as a proxy for the consequences of use is accurate. But the assumption that the chain of causation is too attenuated to justify the use of functionality to determine the level of scrutiny, at least in this context, is not. Society increasingly depends upon technological means of controlling access to digital files and systems, whether they are military computers, bank records, academic records, copyrighted works or something else entirely. There are far too many who, given any opportunity, will bypass those security measures, some for the sheer joy of doing it, some for innocuous reasons, and others for more malevolent purposes. Given the virtually instantaneous and worldwide dissemination widely available via the Internet, the only rational assumption is that once a computer program capable of bypassing such an access control system is disseminated, it will be used.197

While that rationale appears sensible, it also means that the perpetrator of the expressive conduct (executing the code) will be treated under the law as equivalent to the person who originally authored speech that was later used in that conduct. This has significantly more complicated consequences than the expressive conduct cases upon which these lower court judges rely where the only “speaker” in question is the person actually performing the conduct.

To illustrate the absurdity of this approach, let’s apply the reasoning of these lower court opinions to the facts in Texas v. Johnson,198 an expressive conduct case that used the Spence and O’Brien analysis to strike down state laws banning flag burning. According to the analysis in Corley, laws affecting Betsy Ross’s freedom to stitch the first American flag would be judged using the same intermediate scrutiny as laws affecting Johnson’s freedom to burn said flag in front of the 1984 Republican National Convention. It may be that we should judge both laws strictly and protect both forms of expression. However, it is absurd to suggest that Ross, in her solitary act of patriotic creativity, carries any responsibility for Johnson’s potentially dangerous street protest. Flags have several uses other than being burned, and Ross surely did not have this future public safety hazard in mind when she was sewing. Diminishing Ross’s First Amendment rights (by qualifying them with intermediate rather than strict scrutiny review) simply because her flag was subsequently used in a burning “slides over questions of causation,”199 to quote the judge in Corley.

This is not a stretched metaphor in the context of electronic cash and decentralized exchange software. Just like flags, that software is capable of at least as many non-subversive and legal uses as it is subversive or illegal uses. Similarly, the author of that software will likely have as little knowledge or awareness of what people are actually doing with her code as a flag designer will know of her flags. It is more logically consistent to say that a software developer produces speech (strongly protected under standards from Roth and Sorrell), and that any person who runs that code is engaged in conduct (expressive or not), which is less protected under standards from O’Brien and Spence.

As some scholars have remarked, the expressive conduct cases may be an attempt “to reconcile the constitutional promise of expressive freedom with the practical need for governmental regulation.”200 Surely this is true, and people who blow up buildings in order to express political views should not enjoy First Amendment protection from prosecution. But is it right to deny protections to researchers whose chemical descriptions of dynamite made it, all other things being equal, much easier for someone those researchers had never met to commit an act of terror? Is it legitimate to police harmful conduct by denying constitutional rights to persons who had no knowledge of the crime or the criminal, nor any intent to facilitate the crime?

Nonetheless, three out of four lower courts looking at the question of whether software is speech have confused the analysis between speech and conduct. This confusion could perhaps be reconciled by suggesting that the Corley line of thinking represents some new form of judge-made contributory liability for software developers; again, the judge in Corley found that “functionality is really ‘a proxy for effects or harm.’”201 If this is true, then it is an unheard-of form of contributory liability that does not require knowledge of, or intent to aid, the illegal act, and can even go so far as to abrogate otherwise protected constitutional rights. After all, if I publish code in a textbook that could potentially be used to violate copyright law (say it decrypts content protected with digital rights management tools) but nobody ever uses it, then there’s no conduct and, presumably, it’s now just speech and should be afforded the strongest First Amendment protection. If, however, one person uses my code to violate someone’s copyright, then I no longer receive my full First Amendment rights (through no fault or action of my own). This would, we believe, be a rather unprecedented constitutional construct, one with no support in Supreme Court jurisprudence that we can find.

Indeed, the judge’s reasoning sounds more like policymaking in response to a changed world than it does constitutional interpretation. Perhaps these policy changes are necessary now that “society increasingly depends upon technological means of controlling access to digital files and systems.”202 But that decision would be up to Congress203 or the States,204 and if it involved abrogating established constitutional rights it would require an amendment to the Constitution.205 That’s a far cry from tweaking the test for what types of expression qualify for protection under intermediate or strict scrutiny review.

This conduct-speech confusion may also be understood if one assumes that these courts have begun their analysis with the wrong case law. Corley, Junger, and Karn all begin with the premise that one must look to the line of cases dealing with expressive conduct in order to determine whether the code in question is protected at all (under either strict or intermediate scrutiny). This prejudices the later question: is the