Sunday, November 26, 2006

Vigilantes & Deputies: Lesson from the Past

It is increasingly clear to those of us who study cybercrime that conventional law enforcement, alone, simply cannot handle the problem, for several reasons.

As I have said elsewhere, cybercrime challenges even the best law enforcement agencies because it demands resources they do not have (and we cannot supply), because it is so easy to be anonymous online, because it is committed on an expansive scale and because it is so often transnational in character.

This means we need to develop a modified approach, one that improves our ability to deter and therefore control cybercrime.


One solution is to involve the private sector in the battle against cybercrime.

The FBI and the U.S. Secret Service are doing this with their InfraGard and Electronic Crimes Task Force programs, both of which bring federal and state law enforcement officers together with individuals and entities from the private sector. The general purpose is to facilitate information-sharing about attacks and threats; a subsidiary function, at least in some instances, is to enhance the resources available to law enforcement.


That’s a good solution, I think. Another ad hoc approach is evolving – online vigilantism, which takes various forms: Artists Against 419 and Perverted Justice are two examples. I’m essentially agnostic about the Artists Against 419; I know that some of the things they do violate the law, but I just can’t manage to be indignant about the harassment of 419 scammers.

I am, though, distinctly not a fan of Perverted Justice, not on its own terms and especially not when they team up with MSNBC to broadcast their depressing stings. I know the people they catch are scum, but I still don’t like how they do it; I particularly don’t like the broadcast stings in which (and I have only seen a little bit of these) the MSNBC guy seems to take delight in demonstrating precisely how stupid these guys are. I also don't like the fact that the Perverted Justice people are sometimes deputized, and that they are paid for their efforts by MSNBC. I heard a police officer refer to this arrangement as "law enforcement for profit."


Enough venting. My goal here is not to go off on a tangent about Perverted Justice. I actually have a larger point I want to make.

I just finished a novel the events of which take place ninety-some years ago, during the Spanish Influenza pandemic. In reading the novel, I learned about an organization I had never heard of: the American Protective League.

The American Protective League was “a voluntary association of patriotic citizens acting through local branches which were established in cities and counties throughout the country”. It was created in March of 1917, two weeks before the U.S. entered World War I, as an auxiliary to the Bureau of Investigation of the Department of Justice (the precursor to the FBI), though military intelligence also seems to have been involved in its creation. Members carried a badge and membership card which showed they “were connected with the Department of Justice.” They were, in essence, federal deputies.

The APL had “twelve hundred units functioning across America, all staffed by business and professional people. It was a genuine secret society . . . . Membership gave every operative the authority to be a national policeman.” APL members investigated (and apparently coerced) “seditious and disloyal” citizens, checked on who had bought Liberty Bonds, rounded up “draft evaders,” broke strikes and generally seem to have harassed anyone they regarded as German sympathizers or “Reds.” They seem to have routinely violated civil rights and, as one source notes, they “burgled, vandalized, and harassed I.W.W. members and their offices.” And as that source notes, the Wilson administration supported the APL even though many of their activities were illegal.

In describing the APL’s activities to Congress, the Attorney General said, “[i]t is safe to say never in history has this country been so thoroughly policed.”

The APL has been a revelation to me. I have done some thinking and writing about how we could bring representatives of the private sector into the battle against cybercrime. I have talked to others about this, and have heard the argument that corporate entities, or at least certain of their employees, should be deputized to give these private citizens more authority to investigate cybercrime and even to pursue cybercriminals. In a short piece I speculated a bit about how this would work in practice, wondering if it would be possible to actually deputize private citizens on a continuing basis and encourage them to “go after” cybercriminals.

Historically, vigilantes – like the Wild West vigilantes we in the U.S. always think of when we hear the term – have emerged when law enforcement was lacking or was perceived as ineffective. Vigilantes have, therefore, been a substitute for law enforcement, instead of an adjunct to law enforcement . . . or so I thought until I heard about the APL.

The APL saga makes me really concerned about the idea of formally bringing private sector personnel into law enforcement, either as adjuncts or as APL-style “deputies.” I know we’re a lot more sensitive to civil rights now than people were ninety-some years ago, and I might agree that the investigation (and even apprehension) of cybercriminals is not likely to create opportunities for the kinds of abuses the APL members inflicted early in the last century.

I think, though, I’m really hesitant about blurring boundaries – about formally bringing laypeople into the battle against cybercrime (or any other kind of crime, for that matter).

I still think we need to figure out an approach that allows us to utilize private-sector resources (personnel, equipment, money) to improve law enforcement’s ability to deal with cybercrime. I just think we need to be very, very careful how we do this.

(Oh, the photo is A. Mitchell Palmer, Attorney General during much of the time the American Protective League was operating.)

Wednesday, November 22, 2006

Civil Suits for Hacking, Malware, etc.


The basic federal cybercrime statute is 18 U.S. Code § 1030.

Section 1030 criminalizes various types of hacking (unauthorized access to computers), denial of service attacks, distributing malware and using computer technology to commit extortion or fraud. When it was originally enacted in 1984, §1030 only addressed conduct that targeted computers used by the federal government and a limited category of private computers, such as those used by financial institutions.

As computers became more common, it became apparent that the statute needed to be expanded in scope to give federal authorities the ability to pursue criminals who attacked purely “civilian” computers. So in 1996 the statute was expanded to criminalize a variety of conduct that is directed at “protected computers.”

Section 1030(e)(2) defines a “protected computer” as a computer that is either
  • (a) used “exclusively” by a financial institution or the federal government “or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States Government and the conduct constituting the offense affects that use by or for the financial institution or the Government;” or
  • (b) “is used in interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States”.

The second definition essentially gives federal authorities jurisdiction over any computer located in the United States (especially if it is linked to the Internet) AND gives them the ability to apply the provisions of §1030 extraterritorially, i.e., to conduct occurring outside the territorial United States.

The statute therefore gives the Department of Justice and federal law enforcement agents wide latitude to pursue those who engage in criminal activity directed at federal or civilian computers. But today, for a change, I don’t want to write about criminal matters. Instead, I want to point out another aspect of §1030.

Section 1030(g) creates a private civil cause of action for anyone who has been injured by a violation of the criminal provisions of the statute. In other words, if a cybercriminal hacks your computers, infects you with a virus or worm, launches a DDoS attack at you, uses a computer to extort money or property from you or uses a computer to defraud you, you can bring a civil suit against that person under §1030(g).

Specifically, §1030(g) says that “[a]ny person who suffers damage or loss by reason of a violation of [§1030] may maintain a civil action against the violator to obtain compensatory damages and injunctive relief or other equitable relief.” The civil action can be brought if the conduct
  • (a) violated one of the criminal provisions of the statute AND
  • (b) caused loss aggregating at least $5,000 in one year OR the modification or impairment, or potential modification or impairment, of the medical examination, diagnosis, treatment, or care of 1 or more individuals OR physical injury to any person OR a threat to public health or safety OR damage affecting a computer system used by or for a government entity in furtherance of the administration of justice, national defense, or national security.
Damages for a violation causing only financial losses aggregating at least $5,000 in a one-year period are limited to economic damages. In Creative Computing v. Getloaded.com LLC, the Ninth Circuit Court of Appeals held that loss of business and loss of business goodwill constitute “economic damages” under the statute.

And the Third Circuit Court of Appeals held, in P.C. Yonkers, Inc. v. Celebrations The Party and Seasonal Superstore, 428 F.3d 504 (2005), that the statute’s limitation to “economic damages” means that “if one who is harmed does seek compensatory damages based on such conduct, . . . then those damages will be so limited. That is, compensatory damages for such conduct will be awarded only for economic harm.” This court found that nothing in the sentence quoted above prevents a court from also providing injunctive relief against someone who has been shown to be in violation of the statute.


Section 1030(e)(11) defines “loss” as “any reasonable cost to any victim, including the cost of responding to an offense, conducting a damage assessment, and restoring the data, program, system, or information to its condition prior to the offense, and any revenue lost, cost incurred, or other consequential damages incurred because of interruption of service”. So all of these can be factored into the calculation of economic damages in a suit under §1030(g).
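
Just to make the arithmetic concrete, here is a minimal, purely illustrative sketch of how the §1030(e)(11) loss components might be aggregated against the $5,000 one-year threshold in §1030(g). The category labels track the statutory definition, but the dollar figures (and the idea of tallying them in a little script) are hypothetical, not drawn from any case.

```python
# Illustrative only: aggregating hypothetical section 1030(e)(11) loss
# components to see whether they meet the $5,000 one-year threshold.
loss_components = {
    "responding to the offense": 1800.00,        # incident response costs
    "damage assessment": 950.00,                 # conducting a damage assessment
    "restoring data and systems": 1400.00,       # restoration to pre-offense condition
    "lost revenue from interruption": 1200.00,   # interruption-of-service losses
}

total_loss = sum(loss_components.values())
print(f"Aggregate loss: ${total_loss:,.2f}")
print("Meets $5,000 threshold" if total_loss >= 5000 else "Below $5,000 threshold")
```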

An injured party must file an action under §1030(g) “within 2 years of the date of the act complained of or the date of the discovery of the damage.” No action can be brought under §1030(g) “for the negligent design or manufacture of computer hardware, computer software, or firmware.”

I did a quick Westlaw search to see how many reported cases deal with suits brought under §1030(g) and found around 50. That seems a good number, given that most of the people who violate the criminal provisions of §1030 tend to be what we in the law call “judgment-proof,” i.e., without assets that could be used to pay off a civil judgment if a plaintiff were fortunate enough to prevail.

The theory behind provisions like §1030(g) is that private citizens act essentially as “adjunct Attorneys General.” That is, private citizens who bring suits under a statute like this are presumed to enhance the effectiveness with which the statute deters criminal violations, since the private suits also act as a sanction against those who violate the statute. I don’t know that anyone has actually conducted empirical research to see how well that works in practice, but it’s a reasonable theory.

Sunday, November 19, 2006

Border Wars

I've done several posts on the legal issues that arise from airport searches and seizures of laptop computers.

A Minnesota federal district court just issued an opinion in an airport laptop search case.

N. Furukawa, who is being prosecuted federally for possessing child pornography, moved to suppress the evidence seized from his laptop and statements he made during the laptop search at the airport.

The district court's opinion describes what happened, in detail, and I thought you might find it interesting. United States v. Furukawa, 2006 WL 3330726 (District of Minnesota, November 16, 2006).
United States Customs and Border Protection Officer Jeffrey R. Schmidt was on duty at Minneapolis/St. Paul International Airport during the afternoon hours on April 20, 2006. . . . Officer Schmidt . . . was conducting baggage searches when he encountered defendant Furukawa at approximately 1:30 p.m. Mr. Furukawa was being processed through United States customs upon arrival on an international Northwest Airlines flight from Tokyo, Japan. He was waiting in line for a routine inspection after being referred from passport screening to "baggage control secondary" based upon a computer screen alert indicating that he may have purchased access to an Internet site that contained child pornography. The referral was made by Customs Officer Bulov.

Officer Schmidt first obtained the defendant's travel documents, including his passport and a customs declaration. Defendant indicated that he was returning to his office in New York following a business trip to the Philippines. After examining the defendant regarding any customs declarations, the officer obtained a binding declaration. Officer Schmidt was not aware of any particular reason for defendant's referral for baggage search until he checked the computer screen after obtaining the binding declaration. The officer then proceeded to examine Mr. Furukawa's checked and carry-on luggage and found that the defendant was carrying a laptop computer and an external hard drive. Officer Schmidt promptly opened the laptop, booted up the computer, and asked the defendant to sign in and enter his password. The officer designated the Windows 2000 operating system and the defendant entered his screen name and password without objection.

After gaining access to the designated program, Schmidt began a search for video and picture files which are construed as merchandise for customs purposes. The officer discovered a file list and thumbnail photos which included materials that were suspected to be pornographic. At that time he took the computer to his supervisor's office so that the screen would not be open to public viewing. Upon further examination of the files Officer Schmidt observed photos which appeared to be pictures of pre-teen girls engaged in acts of a sexual nature. He also found materials that were on the computer in violation of copyright protections and those materials were deleted or destroyed on site. The officer then closed the laptop computer and called his duty supervisor. In addition, agents from Immigration and Customs Enforcement were contacted. The laptop was seized along with 14 other items. Officer Schmidt's search lasted approximately one-and-a-half hours.

ICE Special Agent Paul Nichols arrived at the airport customs area at approximately 3:00 p.m. There he met with Special Agents Lang, Boyle and Yira. . . .The agents found and reviewed images on the laptop computer which were determined to be representations of child pornography. Meanwhile, Special Agent Lang, a computer forensics specialist, examined the external hard drive containing approximately 30,000 files and discovered numerous additional file names that suggested the existence of pornography.

Special Agent Nichols thereafter met with the defendant and identified himself. The agent read aloud the Miranda rights from a written U.S.I.C.E. Statement of Rights form . . . . The defendant stated that he understood each of his rights as they were read to him, and he himself read the rights. Mr. Furukawa printed his name and signed and dated the express waiver of rights contained on the bottom of the Statement of Rights form. The signature was witnessed in writing by Special Agent Nichols and Yira. At that time the defendant had not been placed under arrest and was not in handcuffs, though he was not free to leave. The defendant and the agents were located in a corner in the secondary inspection area which is not open to the public.

Defendant Furukawa stated that he was willing to answer questions and that his occupation was Internet business consulting. He indicated that his occupation involved searching the Internet for pornography and that he sometimes encountered child porn as an incident to the occupation, particularly as part of a mass download of pornographic materials, but that his clients were not producers of child pornography. The defendant acknowledged that he owned the laptop and the external hard drive that were examined by customs agents. He was cooperative and provided appropriate answers to questions posed by agents, but he did not himself ask any questions, and he declined to answer certain questions. The interview lasted approximately one hour and there was no request that questioning cease and no request for the assistance of an attorney.

During the interview the defendant was provided water on request and no threats were made to induce cooperation. He was advised that the reason for his detention was the discovery of child pornography on his computer. . . . In addition to the oral interview questions, the defendant provided written answers to some but not all of the written questions that were presented to him on a typed DHS/ICE Computer Forensics form. . . . The questions and answers on the form related to defendant's computer ownership, operating systems, and user and sign-on names and passwords. Mr. Furukawa described himself as an "expert user" in response to a question on the forensics form, and he provided a list of e-mail addresses.
The district court denied Furukawa's motions to suppress the evidence and to suppress the statements he made to the ICE officers.

Tuesday, November 14, 2006

Who Can Consent to a Search of Your Computer?

A recent federal case from the Eighth Circuit Court of Appeals -- United States v. Hudspeth, 459 F.3d 922 (8th Cir. 2006) – illustrates the issues that can arise when one person consents to law enforcement’s searching a computer owned/used by someone else.

In 2002, “as part of an investigation into the sale . . . of pseudoephedrine-based cold tablets, the Missouri State Highway Patrol . . . executed a search warrant at Handi-Rak Service, Inc.”

As the officers searched the Handi-Rak office computers for evidence within the scope of the warrant, e.g., "papers and/or documents" related to the "inventory of pseudoephedrine based cold tablets”, they ran across a “homemade CD with a handwritten label.” When one officer opened a folder on the CD, he saw child pornography. The Sergeant in charge stopped the officers from searching further and called the U.S. Attorney’s office “for guidance.”

While that was going on, Hudspeth (a) consented, both in writing and orally, to the search of his office computer and (b) refused to consent to a search or seizure of his home computer. The officers then arrested Hudspeth, on the theory that they had probable cause to believe he possessed child pornography.


They also believed they would find child pornography on his home computer, so they went to his home. The officers introduced themselves to Mrs. Hudspeth, told her they had arrested her husband and asked for consent to search the home computer. She asked “what would happen if she did not consent” and the officer in charge told her “he would leave an armed uniformed officer at the home to prevent destruction of the computer and other evidence while he applied for a search warrant.” After trying unsuccessfully to contact her attorney, Mrs. Hudspeth told the officers they could take the computer. They seized it, took it to their offices, obtained a warrant to search it and found more child pornography on the home computer.

Hudspeth moved to suppress the images found on his home computer, arguing that his refusal to consent to the seizure of his home computer trumped his wife’s subsequent consent to the seizure. To understand his argument, we need to examine consent for a moment.

Consent is an exception to the 4th Amendment’s requirement that police have a warrant to search or seize property. Consent is essentially a waiver of one’s 4th Amendment rights. The Supreme Court held, in United States v. Matlock, 415 U.S. 164 (1974), that co-users of property can each consent to the search or seizure of that property. So here, if Mrs. Hudspeth was a co-user of the home computer, she had the authority to consent to the search or seizure of that computer.

The Matlock Court held that the authority to consent derives not only from sharing ownership of property (though that works, too), but also from sharing the use of property. Since it seems likely that Mrs. Hudspeth was both a co-owner and a co-user of the property, she had the authority to consent to the seizure of the home computer, which means her consent to the seizure would have been valid . . . had Mr. Hudspeth not refused to consent to that seizure earlier.


A year or three ago, his refusal might not have been important. It’s very likely that, a year or three ago, the Eighth Circuit would have held that Mrs. Hudspeth was a co-owner/co-user of the home computer and so could consent, in her own right, to its seizure. That’s where the law had been. The understanding was that as long as A co-owner/co-user of property consented to a search/seizure of the property, the consent (and the resulting search/seizure) was valid, even though the other owner/user of the property had refused to consent.

That changed, though, earlier this year when the Supreme Court decided Georgia v. Randolph, 126 S.Ct. 1515 (2006). The Randolph Court held that “a physically present inhabitant's express refusal of consent to a police search is dispositive as to him, regardless of the consent of a fellow occupant.” More precisely, the Supreme Court held that the consent Scott Randolph’s estranged wife, Janet, gave to the search of the home she still shared with Scott was invalid because her consent came after Scott had refused to consent. (Essentially, the officer asked Scott to consent to a search of the home and, when Scott refused, “turned to Janet Randolph for consent to search, which she readily gave.”)

The Supreme Court held that a co-owner’s/co-user’s consent cannot overrule another “physically present” co-owner’s/co-user’s refusal to consent. In other words, the Court held that police cannot play one "physically present" owner/user off against another, obtaining consent from one in the face of another’s denial.


The Hudspeth Court applied the Randolph holding to the facts before it even though Mr. Hudspeth was not “physically present” when his wife was asked to consent to the search he had rejected. The Eighth Circuit found there are reasons to enforce the refusal to consent of an absent co-owner/co-user, as well as of one who is physically present when the issue of consent is raised and resolved. It therefore held that Mrs. Hudspeth’s consent to the seizure of the home computer was invalid under the 4th Amendment, which may result in the suppression of evidence obtained from that computer.

Bottom line:

• If you give others access to your computer (as well as to your other personal possessions or your home or your car), you have assumed the risk they will consent to allow law enforcement officers to search the property in your absence (and, inferentially, when you have not refused to consent to the search). (Since only owners and co-owners can consent to the seizure of property, you assume this risk only if you jointly own the property with someone else, who is present when you are not.)

• If you follow the Eighth Circuit’s rationale, police cannot obtain valid consent from a co-owner/co-user of the property in your absence if you have already refused to provide consent. . . even though you are not "physically present" where the computer is.

• If you read the Randolph Court’s holding strictly – as some will do – then the fact that you have refused to consent may not matter if you refused when you were not physically proximate to the property the police want to search. It MAY be (and I emphasize “may”) that the Randolph Court’s holding only applies when you have two co-owners/co-users of property confronting each other, one consenting and one refusing to do so.

What do you think?

Wednesday, November 08, 2006

Seeking the Return of Seized Computers

In my last post, I talked about the provision in Rule 41 of the Federal Rules of Criminal Procedure which requires that a search warrant be “executed” within 10 days of being issued. Today I want to talk about a related issue: seeking the return of computer equipment that has been seized pursuant to a search warrant.

The usual dynamic under the Fourth Amendment for computer equipment is that law enforcement officers (a) get a warrant to seize and search computer equipment, (b) seize the equipment, analyze it and find evidence that is used to prosecute the owner for various crimes and (c) the owner moves to suppress that evidence on the grounds that the seizure and/or search of the computer somehow violated the Fourth Amendment. This is the dynamic we’re all used to: the operation of the Fourth Amendment’s exclusionary rule.

There is another, less well-known dynamic, one that arises under Rule 41(g) of the Federal Rules of Criminal Procedure. Rule 41(g) says that someone “aggrieved by an unlawful search and seizure of property or by the deprivation of property may move for the property's return.” If the party filing the motion shows good cause for the property’s being returned, the court will enter an order to that effect.

Motions for return of property are filed when the property at issue is, like computer equipment, not itself contraband but has been seized because it contains contraband (child pornography, say) or evidence of a crime (identity theft, extortion, hacking, etc.) The premise behind filing a Rule 41(g) motion in this context is that the computer was seized so the government could search it and find the evidence it contained; it has now been searched, the government has found and acquired the relevant evidence, so the computer should be returned to its owner.

This was the basis of a motion to return filed by a law firm in Massachusetts some years ago. As reported in Commonwealth v. Ellis, 10 Mass. L. Rptr. 429, 1999 WL 815818 (Mass. Super. 1999), law enforcement officers executing a search warrant at the firm’s office seized computers, back-up tapes and a printer to be searched off-site. After some time had passed, the law firm moved for the return of the seized property, arguing that the searches had been completed. The court denied the motion because it found that the government’s retaining the equipment was “reasonable” under the circumstances, the primary circumstance being that it had been (allegedly) used in the commission of crimes.

In People v. Lamonte, 61 Cal. Rptr. 2d 810 (Cal. App. 1997), on the other hand, the appellate court held that the defendant’s motion for the return of his computer should have been granted. This court explained that though the computer “may have” been used in committing a crime, it was not contraband, i.e., it itself was “not illegal to possess.”

These cases illustrate the traditional process of moving for return of seized property – a scenario I will call the “zero-sum seized property scenario.” In this scenario, the government has seized someone’s tangible property and, by retaining it, is completely depriving them of its possession and use. Only the government or the owner can have the computer, not both.

A new scenario – a non-zero sum seized property scenario – has emerged over the last few years. This scenario arises when, as is common, the government makes a copy, a mirror image, of a computer hard drive or other storage media and uses the copy for its analysis. What happens when the owner of the computer hard drive files a motion for the return of the copy of the hard drive?

This happened, for example, in Florida earlier this year. In the Matter of the Application of the United States for a Search Warrant, U.S. District Court – Middle District of Florida (Case No. 05-3113-01). Federal agents executed a search warrant at a business and made mirror images of the data contained in three laptop computers, four CPUs, two servers and three RAID drives. They took the copies away to be analyzed and, after some time had passed, the business moved for the return of all the data on the copies that was not relevant to the criminal investigation.

This is quite common; given the complexity and capacity of computer storage devices, they can contain a great deal of information that is irrelevant to the criminal investigation being conducted. And, as the business pointed out in this case, the irrelevant data is not within the scope of the warrant that justified the making and seizure of the copies; since it is not within the scope of the warrant, it seems its retention by the government would violate the Fourth Amendment.

That is what the business argued in the Florida case. The government’s response was that it should be allowed to retain the mirror images – in their entirety – “indefinitely” so they could be used to “authenticate seized information” and to conduct further searches, if necessary. An expert informed the court that the government should not need to retain the mirror images for authentication purposes, because a hash analysis of the mirror images could be used for that purpose. The government countered that, “for the last several years” it had been the practice among at least some U.S. Attorneys’ offices to retain mirror images of hard drives and other media “throughout the investigation and prosecution of the case.”
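
For readers curious about what a “hash analysis” involves, here is a minimal sketch – my illustration, not the expert’s or the government’s actual procedure – of how a cryptographic hash can authenticate a mirror image: a digest recorded when the image was made is recomputed later, and a match shows the copy still consists of exactly the same bits. The file name and the recorded digest value below are hypothetical.

```python
# Illustrative sketch: authenticating a forensic mirror image by comparing
# a SHA-256 digest recorded at acquisition with one computed later.
import hashlib

def sha256_of_image(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a (potentially large) disk image in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Digest recorded when the mirror image was first made (hypothetical value).
recorded_at_acquisition = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"

# Recomputing the digest later shows whether the image has been altered.
current = sha256_of_image("mirror_image.dd")  # hypothetical file name
print("Image authenticated" if current == recorded_at_acquisition else "Image has changed")
```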

The District Court for the Middle District of Florida disagreed. It held that “the United States cannot, consistent with the Fourth Amendment, retain computer storage devices that contain data outside the scope of a search warrant after a search is completed, unless the computer storage devices have themselves been seized as instrumentalities or evidence of a crime or as contraband. . . . The United States should not, therefore, continue to take the cavalier attitude that it may retain computer storage devices throughout an investigation and prosecution without specific court authorization to do so.”

So this court, anyway, said the government cannot retain copied data that is not within the scope of the warrant used to copy computer storage media unless that data is relevant to an investigation. It also indicated that the owner of the seized computer storage media can seek the return of the data before the investigation has been completed, presumably after the government has been given a “reasonable” amount of time to analyze the seized copies.

I tend to agree with this court, but I suspect other courts may disagree. One of the reasons I find this issue of particular interest is because of a proposal I was asked to review last year. The author of this proposal advanced a system for collecting the data on all storage media copied by the government, pursuant to computer search warrants, and depositing it into a central data base. It would then be used for data mining, i.e., to conduct searches intended to identify criminal activity as to which the government was otherwise quite ignorant.

I argued that this was impermissible, that even though the government lawfully copied the data on the seized computer storage media, it cannot use that data for purposes unrelated to the investigation that justified the issuance of the warrant authorizing the seizure and copying of the media. It was a rather difficult argument to make, since we have not historically had to deal with this non-zero-sum seized property scenario.

The traditional justification for seeking the return of tangible property is that you need it – you need to use the seized computer in your business or the seized car in your personal life. When the government takes a copy, this argument becomes more difficult, because the government can keep the copy without interfering with your ability to use the computer media from which the data was copied.


I still think I’m right, and hope the proposal I note above does not become a reality.

Monday, November 06, 2006

Timely Execution of Search Warrants

A case from New Hampshire – United States v. Syphers, 426 F.3d 461 (1st Cir. 2005) – illustrates the issues that arise from a federal provision which requires the timely execution of search warrants, including computer search warrants. It also illustrates what seems to be a loophole, for lack of a better word, in the federal provision.

In November, 2001, a Concord police officer obtained a warrant to search Syphers’ home; the warrant was based on probable cause to believe Syphers had sexually assaulted two girls, who were 14 and 15 at the time. The officers seized a Gateway computer, among other evidence, and subsequently sought – and obtained – a separate warrant that authorized them to search the Gateway for child pornography. There seems to be no contention that this warrant was not supported by probable cause or otherwise failed to meet the procedural requirements of the Fourth Amendment.

The glitch arises with regard to the time the police were given to execute the warrant, i.e., to actually search the computer. The child pornography warrant issued on November 28, 2001. On the same day it was issued, the prosecutor moved that police be given an additional 12 months to complete the search “due to an ‘overwhelming backlog in similar computer crimes.’” The state court granted the extension.

In January, 2002, Syphers pled guilty to a reduced state charge of simple assault. He then asked for his computer, which seemed reasonable since the plea apparently resolved the investigation. New Hampshire authorities objected to returning the computer to Syphers on the ground that they needed additional time to complete their search of its contents (including, apparently, 64,000 “newly de-encrypted images” on it). They also said they needed additional time to be able to share what they found with the local U.S. Attorney’s Office. The state court denied Syphers’ motion for the return of his computer, the state police completed their analysis of its contents and then shared what they had found with the FBI. Syphers was then indicted on one federal charge of possessing child pornography; the charge was based on what the New Hampshire police found on his computer.

At the federal district court level and then again at the federal court of appeals level, Syphers challenged the state court’s giving New Hampshire police an additional year to conduct the search of the computer. He based his challenge on Federal Rule of Criminal Procedure 41(e)(2)(A), which states that a search warrant must “command” the officer to whom it is issued to “execute the warrant within a specified time no longer than 10 days.” Syphers pointed out, quite correctly, that the government had been given far longer than 10 days to execute the warrant authorizing the search of his computer.

Federal authorities argued that the 10-day period incorporated in Rule 41 did not apply in this case because the search was conducted by state authorities, who are not bound by the rule. Syphers argued that the state authorities should be bound because they were executing the search for the benefit of the federal authorities; Syphers, after all, had already pled guilty in state court.

The First Circuit Court of Appeals rejected Syphers’ challenge. It said “the computer search that yielded evidence later used in a federal prosecution was conducted by state law enforcement pursuant to a state court search warrant. There is no evidence that federal agents participated in the state investigation, procurement of the warrant, or request for extension. Therefore, the investigation was not federal in character, and the ten-day stricture of Rule 41 does not apply.”

I decided to write about this case for two reasons. One, the more obvious reason, is this holding. On its face, it seems to mean that if federal authorities let state authorities handle the analysis of seized computers, they can avoid the requirements of Rule 41 (which I will examine in a minute). That seems fair if the state authorities are searching the computer in order to obtain evidence for use in a state proceeding, but it seems quite problematic if, as was the case here, the state authorities are no longer interested in using the evidence for a state prosecution. In that instance they are, inferentially anyway, analyzing the computer solely to find evidence that can be used by “someone else” – logically, the federal authorities. That seems an end run around the language of Rule 41.

Now, I don’t think it would be an end run around Rule 41 if the state authorities were searching the computer for their own investigation and, in so doing, found evidence that could be used to bring federal charges. I think the scenario would be more problematic if the state authorities and the federal authorities were working jointly on an investigation and the state authorities’ search of the computer found evidence that could be used by the federal authorities.

But the real issue I want to discuss is the 10-day time limit. It has become a bone of contention in the federal system, because agents and prosecutors point out – just as the state prosecutor did in Syphers – that because there is a tremendous backlog of seized computers, analysts simply cannot process a computer within 10 days from the time it is seized. The problem is being exacerbated by the increasing size of hard drives and other storage media.

Some federal agents and prosecutors argue the 10-day rule only applies to the seizure of the computer – that if they seize the computer (or other storage media) within 10 days from the time the warrant issues, they’re fine. The validity of that argument probably depends on why the federal rule (and many state rules, as well) incorporates the 10-day period.

I did some research on that a while back, and traced the 10-day period to a Prohibition-era statute, a statute that was involved in a case that went to the Supreme Court. In that case, the Court held that evidence obtained when a warrant was executed after the 10-day period had elapsed could not be used in court. The Supreme Court explained, as did the Syphers court, that the purpose of the 10-day rule is to ensure that the probable cause supporting the warrant does not become “stale.”

For example, assume federal agents get a warrant to search for and seize drugs that are located in a garage at the edge of town. They have probable cause to believe the drugs are there because an informant has told them the drugs are being stored there until they are shipped out of town. The officers obtain the warrant, but take two weeks (three?) to execute it. By then, the drugs may well have been moved, and the probable cause that supported the warrant has gone stale. The 10-day rule incorporates the common sense principle that just because you have probable cause to believe evidence is in a particular place NOW, you do not have probable cause to believe the evidence will ALWAYS be there. It imports a temporal limitation into the probable cause-search warrant analysis.

The Syphers court also held that Syphers loses on his Rule 41 argument “because there is no showing that the delay caused a lapse in probable cause.” That’s no doubt true, since the computer had been in the hands of law enforcement since it was seized; law enforcement’s possession of the “container” of the evidence at least arguably stabilized the situation and sustained the existence of probable cause.

There’s another issue, though, that comes up with regard to the Rule 41 10-day period, and that is someone’s right to have their property – Syphers’ computer in this instance – returned to them after law enforcement has seized it and has had a “reasonable” opportunity to analyze it. I’ll talk about that in another post.

Wednesday, November 01, 2006

Cyberterrorism: FUD or . . . ?

Last week I was in Europe speaking at a workshop on cyberterrorism. When I started to prepare my presentation, I decided to focus on the whole issue of cyberterrorism – on whether it exists as a valid source of concern or is, as some say, merely FUD.

FUD stands for “fear, uncertainty and doubt” and refers to what some consider hype spread by computer security professionals who use the “myth” of cyberterrorism to generate business. Those who take this view tend to deny that cyberterrorism exists as a distinct threat category.

So I thought about that, about why there might be a divergence of views on this issue and about why some seem to deny the very possibility of cyberterrorism. I could not – can’t – understand the latter position because it seems to me computer technology is a tool, and I can’t understand why any tool can’t be used in some fashion to facilitate an act of terrorism. Cars can be turned into IEDs, and in 1994 Ramzi Yousef used a Casio digital watch as the timer for a bomb he planted, and detonated, on Philippine Airlines Flight 434. If cars, watches and other mundane devices can become tools of terrorism, why can’t computers?

As I thought about it, I decided that the divergence of views may be due to imprecise definitions – to the fact that one person’s conception of cyberterrorism may be very different from another person’s conception of the same phenomenon. It seemed, and seems, to me that maybe we need some definitional clarity here. Maybe we need to reflect on how cyberterrorism should be defined.

It seems to me that the definition of cyberterrorism needs to have two components: (i) semantic; and (ii) operational. The first goes to the legal concept – to the “harm” this hypothesized type of conduct inflicts. The second goes to the processes used to inflict that hypothesized “harm.”

I’m going to try to keep this relatively short (out of self-interest, if nothing else, as I am still jet-lagged), so let me briefly run through both dimensions.

Semantic definition: Cyberterrorism consists of using computer technology to advance terrorists’ goals. We can divide the goals into two arenas: primary goals and secondary goals.

Primary goals are the terrorists’ pursuit of their ideological agenda, because terrorism is, after all, the use of certain methods in an effort to advance an ideological message or strategy. A federal criminal statute defines terrorism as using certain proscribed means (inflicting death, physical injury, or damage to/destruction of property) in an effort to coerce a government or influence a civilian population for ideological purposes. So, regardless of whether the terrorists are white supremacists, jihadists or the labor terrorists that posed a problem in the nineteenth-century U.S., the goal is to use violence and the threat of violence to demoralize governments and populations and thereby advance the terrorists’ agenda. This definition of primary goals holds for all types of terrorism, but I am, of course, focusing only on cyberterrorism.

Secondary goals are the terrorists’ use of certain methods to sustain their pursuit of the primary goals. Secondary goals go to issues such as recruiting and retaining members of the terrorist group, fundraising, propaganda, communication and coordination of activities, etc.

Operational definition: Here, I want to focus only on the operational definition of cyberterrorists’ primary goals. I think we need to divide this definition into three categories – three types of (forgive me) WMD: weapon of mass destruction, weapon of mass distraction, and weapon of mass disruption.
  • Weapon of mass destruction: This, I think, is the primary source of FUD – the notion that a cyberterrorism attack will be a “digital Pearl Harbor” or a “digital 9/11” – that it will be analogous to flying planes into the World Trade Center. I don’t think that is true; I think this notion, to the extent it exists, misunderstands how terrorists can use computer technology. I do not think that cyberterrorism – the use of computer technology to pursue an ideological agenda by those we regard as terrorists – can ever have the kind of visceral, demoralizing effect we experienced on 9/11. Indeed, I suspect that may be one reason why we have so far not, at least to my knowledge, seen any real instances of cyberterrorism.
  • Weapon of mass distraction: Here, computer technology is used to demoralize a civilian population (and undermine faith in government and other essential processes) by inflicting psychological “harm.” A few years ago, a federal official who worked in the area of terrorism/public security told me he got a call from the local authorities, in a very large American city. The local authorities said, “we have to evacuate the city.” The federal fellow asked why, and was told that “there’s a suitcase nuclear device” on a train in the subway system. He asked how they knew this, and the answer was uncertain; they had “heard” it. He asked if any subway train operator had described the rather unique appearance of a subway nuke, and was told none had. He pursued the matter in some more detail, and ultimately convinced the local authorities not to evacuate the city which, as he pointed out, would have done about as much damage – given the panic that would ensue – as a suitcase nuke. Point being: Misinformation, cleverly disseminated, can be used to sow chaos and confusion, which will in turn cause injury and property damage – the net effect being to undermine faith in our systems, our leaders and perhaps even our ideology.
  • Weapon of mass disruption: Here, computer technology is used to achieve a similar effect but the direct target is systems, not psychology. The U.S. Secret Service and Department of Homeland Security ran an exercise earlier this year – CyberStorm – in which a loosely linked set of domestic terrorist groups attacked various systems in the U.S. They interfered with the operation of air traffic control systems (thanks to help from a disgruntled FAA employee), did the same for some commuter trains, attacked at least one news website, altered balances in some accounts, went after power grids, etc. – a kind of smorgasbord of systemic attacks. To the extent that attacks such as these work, they would undermine our faith in our reality – in the stability of the systems we rely on to conduct our lives. That, of course, results in the demoralization of a civilian population which is, as I said before, a primary goal of any terrorist group.
I could write a lot more, but I think (hope?) this is enough to get my point across. The point is, simply, that we must not think of terrorists using computer technology in ways that are directly analogous to their use of IEDs and other traditional instruments of violence. Violence, I submit, is not what cyberterrorism is/would be about. It’s a much more subtle, and therefore perhaps more dangerous, phenomenon, because it works on our minds and on our reality.