Friday, October 31, 2008

Not-Hearsay

As I noted in an earlier post, every state and the federal system have rules of evidence that bar the use of what’s called “hearsay.” Rule 801(c) of the Federal Rules of Evidence defines hearsay as “a statement, other than one made by the declarant while testifying at the trial or hearing, offered in evidence to prove the truth of the matter asserted.”

As I also explained in that earlier post, courts bar the use of hearsay – unless it falls within one of a few exceptions to the rule barring its use – because it’s presumptively unreliable.

As I noted there, allowing hearsay as a general matter would mean John Doe could take the stand and say he’d heard that the defendant – Jane Smith – had committed all kinds of crimes. It then becomes difficult for Jane or her attorney to rebut what John Doe has told the jury; they can’t cross-examine the person who allegedly said these things about Jane. So aside from its inherent unreliability, hearsay can deny the party against whom it is introduced the right to confront the witnesses against them, a right the U.S. Constitution guarantees in criminal prosecutions.


Sometimes, though, a record or other item that seems to be hearsay is, in fact, not. That’s what this post is about. It comes from a decision by the Washington Court of Appeals: State v. Nordquist, 2008 WL 642615 (2008). Here are the facts in the case:
Scott Nordquist possessed a check drawn on Jodi Hamer's checking account from Fibre Federal Credit Union. On July 11, 2006, he walked into the credit union and presented the check for payment, with two pieces of identification, to credit union employee Kendra Thompson. Thompson took the check . . ., entered the check's information into the credit union's computer, and received an electronic bank memo alert on her computer that `this particular series of check numbers may have been stolen and to use caution when verifying the signature.’

Thompson excused herself . . . to compare the signature on the check with Hamer's signatures on past checks and her account card. Unable to match the signature on Nordquist's check with the signatures on Hamer's account, Thompson contacted her supervisor, who called the Longview Police Department. Meanwhile, Nordquist waited for about 15 minutes, until two police officers arrived.

After verifying Nordquist's identity, the officers took him to a room at the credit union, where they conducted an investigation. Nordquist told the officers that `he received the check from a girl named Amy.’ But after Officer Jennifer Jolly continued to question Nordquist about how he had obtained the check, he finally responded, `[W]ell, now that you put it that way, it doesn't make any sense.’ The officers arrested Nordquist for forgery.
State v. Nordquist, supra.

Nordquist was tried for, and convicted of, forgery. He appealed his conviction, arguing in part that the
trial court abused its discretion when, over his objection, it allowed the following testimony from Thompson: `There was a memo stating that this particular series of check numbers may have been stolen and to use caution when verifying the signature.’ Nordquist argues that the memo's statement was inadmissible hearsay evidence under [Washington Rule of Evidence] 801(c).
State v. Nordquist, supra.

The Washington Court of Appeals began its analysis of Nordquist’s argument by noting that hearsay can “`be admitted if offered for purposes other than to prove the truth of the matter asserted.’” The court then found that
Thompson's testimony about the bank's computer alert conveyed her rationale for excusing herself from Nordquist, checking the account holder's signature against the signature on the check that Nordquist had presented, and then calling her manager. Thompson did not testify that the check Nordquist presented and that she examined was stolen. Nor did the State charge Nordquist with possessing stolen checks or stealing the checks. Thus, the bank memo did not serve to prove the truth of a matter asserted in Thompson's testimony.

On the contrary, . . . the trial court allowed Thompson's testimony as an explanation for her actions, not as substantive evidence that some checks from this account had been stolen. Thus, her bank memo testimony was not hearsay under [Washington Rule of Evidence] 801 and, therefore, not excludable. . . . Accordingly, we hold that the trial court did not abuse its discretion in admitting Thompson's testimony about the computer alert.
State v. Nordquist, supra. So, not-hearsay = no problem.

Wednesday, October 29, 2008

Unlawful Use of Encryption

I’ve written a few times about encryption issues; those posts were about legal rules that facilitate or restrict your ability to use encryption to protect your data. This post is about something different: making it a crime to use encryption.

Six states – Arkansas, Illinois, Iowa, Minnesota, Nevada and Virginia – have statutes that make the “unlawful use of encryption” a crime. The statutes are relatively new; a couple of them date from 1999, others were adopted between 2001 and 2005, and Illinois’ statute is brand new. It goes into effect on January 1, 2009.

Illinois’ adding such a statute makes me wonder if we will see more states doing the same.


Perhaps because Illinois’ statute is the most recent, it is the most detailed. Since it is the most detailed, I’m going to use it to illustrate what these statutes do; then I’ll speculate a bit about why they’re being adopted and how effective they are likely to be in doing whatever it is they’re supposed to do. So here’s the Illinois statute (sans the boilerplate definitions in section (a)):
(b) A person shall not knowingly use or attempt to use encryption, directly or indirectly, to:

(1) commit, facilitate, further, or promote any criminal offense;
(2) aid, assist, or encourage another person to commit any criminal offense;
(3) conceal evidence of the commission of any criminal offense; or
(4) conceal or protect the identity of a person who has committed any criminal offense.

(c) Telecommunications carriers and information service providers are not liable under this Section, except for willful and wanton misconduct, for providing encryption services used by others in violation of this Section.

(d) A person who violates this Section is guilty of a Class A misdemeanor, unless the encryption was used or attempted to be used to commit an offense for which a greater penalty is provided by law. If the encryption was used or attempted to be used to commit an offense for which a greater penalty is provided by law, the person shall be punished as prescribed by law for that offense.

(e) A person who violates this Section commits a criminal offense that is separate and distinct from any other criminal offense and may be prosecuted and convicted under this Section whether or not the person or any other person is or has been prosecuted or convicted for any other criminal offense arising out of the same facts as the violation of this Section.
720 Illinois Compiled Statutes § 16D-5.5.

(The Arkansas, Minnesota and Nevada statutes are similar, but shorter. The Iowa and Virginia statutes consist of a single sentence, like this one: “Any person who willfully uses encryption to further any criminal activity shall be guilty of an offense which is separate . . . from the predicate criminal activity and punishable as a Class 1 misdemeanor.” Virginia Code § 18.2-152.15.)

Let’s begin by parsing what I consider to be the essential provisions of the statute: (b) and (e). Note that section (b) not only makes it a crime to use encryption in committing, aiding and abetting or concealing a crime, it makes it a crime to ATTEMPT to do any of these things.

As I’ve noted before, the primary reason we criminalize attempts – crimes that were, by definition, never actually committed -- is to give law enforcement the ability to step in and make an arrest without having to wait until the criminal actually carries out his or her evil plans. How would that work here? I’m having a little difficulty coming up with situations in which law enforcement could step in and arrest you for using encryption in an attempt to commit a crime.
There are two kinds of attempts: In one, police interrupt you before you commit your target crime (murder, theft, etc.); in the other, you do everything you can to commit the crime but fail.

The second category of attempts are known as “impossible” attempts; you fail because something makes it impossible for you to actually inflict the “harm” you tried to inflict. The classic example of that is someone who, say, wants to kill his neighbor (with whom he’s feuding); our perpetrator sneaks over to the neighbor’s house with a rifle, sees the neighbor sitting on the couch and shoots him. The shot would have killed the neighbor had he not died of a heart attack a few hours before; here, the perpetrator did everything he could to commit murder but failed. He can only be charged with an attempt to commit murder.


How would that work with attempts to use encryption to commit a crime? Assume John X works for a government agency that handles classified information; he decides to steal some of the information and sell it to whoever would be willing to buy it (A spy? A terrorist?). He copies what he believes to be classified information onto a thumb drive and encrypts the data to ensure no one can read it when he takes the thumb drive with him on his way home. He puts the thumb drive in his bag as he leaves work; FBI agents arrest him on his way out. What he doesn’t know is that the FBI has been suspicious of him for some time, and the “classified information” on the thumb drive is, in fact, not classified. He therefore can’t be charged with stealing classified information; he is charged with attempting to steal classified information AND with using encryption in his attempt to commit that crime.

Does that make sense? Does anyone have a better example of what the “using encryption in an attempt to commit a crime” offense might encompass? (I am not, by the way, even going to attempt to parse out what “indirectly using encryption in an attempt to commit a crime” might mean. I have neither the space nor the patience to do that here; maybe another time.)

I can see how the attempt option might apply to concealing a crime. Here’s an example: You encrypt your hard drive to keep police from finding the child pornography you then download onto it. Officers show up with warrants, arrest you and seize your computer. They find the encryption key, search the hard drive and find the child pornography. Your goal was to use encryption to conceal the commission of the crime of possessing child pornography; you didn’t succeed, so you could be charged with attempting to use it for that purpose.
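To make the mechanics of that scenario concrete, here is a minimal sketch of symmetric encryption in Python, using the cryptography package’s Fernet interface (the plaintext is, obviously, hypothetical). The point is simply that whoever holds the key can reverse the encryption; that is why officers who find the key can read the drive and prove what it contained:

from cryptography.fernet import Fernet

# Generate a random symmetric key; anyone who holds it can decrypt.
key = Fernet.generate_key()
cipher = Fernet(key)

data = b"contents of some file"  # hypothetical plaintext
token = cipher.encrypt(data)     # ciphertext: unreadable without the key

# An investigator who recovers the key simply reverses the process:
assert Fernet(key).decrypt(token) == data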

I can also see how the “using encryption in an attempt to aid and abet the commission of a crime” option might work. Assume you and I are old friends; you’re broke and I work in a bank. You ask me to help you rob the bank; you want me to get you codes you can use, say, to access the bank vault at a time when it is not normally open. I agree. So over the course of a couple of workdays I locate and copy the codes; I save them in an encrypted file and email the file to you.

Unfortunately for us, I send it to the wrong email address; I send it to your old email address, the one you and your former husband (with whom you are involved in a very contentious divorce) use. He gets the email, figures out what we’re up to, goes to the police and turns us in. I did my best to aid and abet your robbing the bank, but I failed. So I could be charged with using encryption in an attempt to aid and abet bank robbery, as well as with an attempt to aid and abet the robbery.


That brings me to the other notable aspect of the Illinois statute (and the other, similar statutes): section (e). It reiterates what I would argue is already clearly established: The “unlawful use of encryption” crime is a crime separate and distinct from other crimes; I think the purpose of this provision is to make it clear that this crime doesn’t merge into a completed substantive crime.

Some crimes merge, others do not. An attempt to commit a crime (murder, say) merges into the completed crime (murder) because an attempt has fewer elements and inflicts less “harm” than the completed crime the attempt was trying to achieve. So you cannot be charged with both (i) attempt to commit murder and (ii) committing murder if you kill someone. You can only be charged with murder; the attempt merges into the completed crime.

Section (e) of the Illinois statute (and comparable provisions in the other state statutes) is apparently intended to make it very clear that if you use encryption to commit, abet or conceal a crime, that becomes an additional charge that can be brought against you. I assume it is intended to underscore the fact that using encryption ratchets up the liability and penalties you face if you are apprehended and prosecuted.

All this is speculation because I can’t find any cases in which someone was charged with violating one of these statutes. The Illinois statute hasn’t gone into effect yet, so it obviously hasn’t been used but some of the statutes are nearly a decade old. You’d think someone would have been prosecuted under one of them by now. Maybe the lack of prosecutions to date is due to people’s – criminals’ and aspiring criminals’ – not using encryption. I suspect that will change, if it has not already changed.

One more scenario before I quit: I did a post last year about a district court’s holding that a man could take the 5th Amendment and refuse to give up his encryption key. The man’s laptop was seized when he crossed the U.S.-Canadian border. Federal agents suspected there was child pornography on the laptop, but its hard drive was encrypted. So, the man can take the 5th Amendment and refuse to give them the key, which means they can’t access the files to confirm that child pornography is on the laptop. They know he encrypted his hard drive, which MAY contain child pornography. If they had access to a statute like the Illinois statute, could prosecutors charge him with using encryption to conceal his possession of child pornography?

The answer, as a practical matter, is no: If they have probable cause to believe there’s child pornography on the hard drive, prosecutors could charge him; but unless they can get into the hard drive, they would not be able to prove beyond a reasonable doubt that he actually used encryption to conceal his possession of child pornography. There would, therefore, be no point in charging him.

I came up with that scenario when I was trying to figure out if these “unlawful use of encryption” statutes would give prosecutors a way to go after someone who has encrypted evidence or contraband (something it is a crime to possess). By encrypting the evidence or contraband, she has effectively prevented the state from being able to use that data to prosecute her for a substantive crime (child pornography, terrorism, fraud). The prosecution can prove beyond any reasonable doubt that she encrypted the data; the problem, insofar as using the “unlawful use of encryption” laws is concerned, is that the prosecution suspects – but cannot prove – that the encrypted data proves she committed, attempted to commit, abetted, attempted to abet, concealed or attempted to conceal the commission of a crime.

Monday, October 27, 2008

Textual Child Pornography?

About a month ago, I did a post on textual obscenity, which is at least a conceptual possibility under U.S. law.

This post is about something different: textual child pornography . . . which I suspect is not a crime under U.S. law.


The question came up in the course of a conversation I had a couple of days ago with a reporter from Detroit. He said a prosecutor there – state or federal, I’m not sure which – is prosecuting pimps who apparently used Craigslist and other websites to prostitute children. I don’t know anything about the case, if such a case is in progress, but the reporter said something I found interesting.

He mentioned that a possible charge might be the distribution of child pornography. When I asked what such a charge would be based on, he said he thought it would be based on the pimps’ posting nude and/or sexually suggestive photos of children online along with text describing the sexual services they could, and would, provide. I found that interesting, because it raised the issue (in my mind, anyway) as to whether text can constitute child pornography.

I want to analyze that possibility, but I don’t want to use the possible Detroit prosecution as the factual basis for our analysis. Instead, I want to focus on the ultimate issue: whether pure text could constitute child pornography. If it can, then someone who writes stories about children engaged in sexual activity (presumably with adults) could perhaps (we’ll come back to that later) be creating child pornography (a crime under state and federal law); if they posted it online or shared it with others, they could be charged with disseminating child pornography.

The federal statute that defines the terms used in the child pornography and child exploitation statutes is 18 U.S. Code § 2256. It defines “child pornography” as
any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually explicit conduct, where--
(A) the production of such visual depiction involves the use of a minor engaging in sexually explicit conduct;
(B) such visual depiction is a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct; or
(C) such visual depiction has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct.
18 U.S. Code § 2256(8). It defines “visual depiction” as including “undeveloped film and videotape, and data stored on computer disk or by electronic means which is capable of conversion into a visual image and data which is capable of conversion into a visual image that has been transmitted by any means, whether or not stored in a permanent format". 18 U.S. Code § 2256(5).

I can’t find a definition of “visual image” in the U.S. Code or in any of the state criminal statutes. It seems reasonable to me, though, to assume the term means what it clearly denotes, i.e., a picture of some kind, a graphical versus textual depiction of a person or persons. That would make sense given the reasons why we began criminalizing child pornography. As I explained in an earlier post, the U.S. Supreme Court has said there are two reasons why we criminalize child pornography: Its creation involves the victimization of children and it preserves their victimization essentially forever. As I also explained, the Supreme Court said both rationales only justify the criminalization of child pornography the creation of which involves victimizing real children.

The question then becomes, do the rationales also mean that in criminalizing child pornography we only criminalize graphical depictions of the victimization of children . . . or should it also extend to textual depictions of such victimization? That’s a good question, and I’m not sure I can answer it. I’ve done some thinking and some research on the issue, and I’m going to share what I’ve come up with, and found, with you . . . maybe you have some good ideas on all this.

Let’s start with the only reported case I know of in which someone was prosecuted for possessing child pornography based on his possessing textual material. In Regina v. Sharpe, 2001 CarswellBC 82 (Supreme Court of Canada 2001), John Sharpe was charged with possession of child pornography after Canadian Customs officers seized “computer discs containing a text entitled `Sam Paloc's Boyabuse -- Flogging, Fun and Fortitude: A Collection of Kiddiekink Classics’” from his possession. Regina v. Sharpe, supra.

Sharpe moved to dismiss the charge, arguing that it violated the right to freedom of expression guaranteed in § 2(b) of the Canadian Charter of Rights and Freedoms. Regina v. Sharpe, supra. The prosecution – the Crown – conceded that the statute under which he was charged -- § 163.1(4) of the Canadian Criminal Code – infringed that right. The issue then became

whether this limitation of freedom of expression is justifiable under § 1 of the Charter, given the harm possession of child pornography can cause to children. Mr. Sharpe accepts that harm to children justifies criminalizing possession of some forms of child pornography. The. . . question therefore is whether §163.1(4) of the Criminal Code goes too far and criminalizes possession of an unjustifiable range of material.
Regina v. Sharpe, supra. Section 1 of the Canadian Charter of Rights says the Charter “guarantees the rights and freedoms set out in it subject only to such reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society.” So the issue was whether criminalizing Sharpe’s possession of textual child pornography could be upheld under this provision.

The Canadian Supreme Court began its analysis of the issue by noting that the Criminal Code defined child pornography in terms of “visual representations.” Under the Criminal Code, a visual representation can constitute child pornography in three ways: (i) by “showing a person who is, or is depicted as, being under . . . 18 years and is engaged in, or is depicted as engaged in, explicit sexual activity”; (ii) by “having, as its dominant characteristic, the depiction, for a sexual purpose, of a sexual organ or the anal region of a person under the age of 18”; or (iii) by “advocating or counselling sexual activity with a person under the age of 18 years that would be an offence under the Criminal Code”. Regina v. Sharpe, supra. The court noted that “[w]ritten material can constitute child pornography in only the last of these ways”. Regina v. Sharpe, supra.

Its opinion is almost 100 pages long, so I can’t begin to go into the analysis in detail. I’ll just note that its primary concern was the fact that the statute criminalized
self-created works of the imagination . . . intended solely for private use by the creator. The intensely private, expressive nature of these materials deeply implicates § 2(b) freedoms, engaging the values of self-fulfilment and self-actualization and engaging the inherent dignity of the individual. . . . Personal journals and writings. . . may well be of importance to self-fulfilment. . . . The fact that many might not favour such forms of expression does not lessen the need to insist on strict justification for their prohibition.
Regina v. Sharpe, supra. The court therefore read an exception into § 163.1(4); it “protects the possession of expressive material created through the efforts of a single person and held by that person alone, exclusively for his or her own personal use.” Regina v. Sharpe, supra.

I am only aware of one somewhat similar case in the United States. In 2001, 22-year-old Brian Dalton of Columbus, Ohio pled guilty to a pandering obscenity charge that was based on fantasies – stories – he had written. According to news reports, the stories described the sexual molestation and torture of three children (aged 10 and 11) who were kept in a cage in a basement. Dalton pled guilty to one pandering obscenity count to avoid being brought to trial on a second charge; if he had been convicted on both charges, he would have faced 16 years in prison. As it was, he was sentenced to 10 years in prison.

In 2003, an Ohio Court of Appeals held that Dalton should be allowed to withdraw his guilty plea because he received ineffective assistance from his counsel. State v. Dalton, 793 N.E.2d 509 (Ohio App. 2003). The court held that Dalton would have had a good argument as to the unconstitutionality of the charges against him:
Because there is constitutional significance to the distinction between pornographic depictions of real children and similar depictions of fictional children, understanding the factual basis for the charges against appellant was particularly important. It is uncontested that the children depicted in appellant's journal and the repugnant acts described therein were creations of appellant's imagination. Therefore, this case raises a substantial question concerning the constitutionality of a statute prohibiting the creation and private possession of purely fictitious written depictions of fictional children. One court in Ohio has held that [Ohio statutes] cannot constitutionally criminalize the private possession of an obscene but possibly fictitious letter involving children. `Otherwise, the legislature would in effect be punishing an individual for his/her thoughts.’

Because appellant's trial counsel did not understand that both counts were based solely upon the purely fictional personal journal, she could not have adequately advised appellant of the potential constitutional defense.
State v. Dalton, supra. The Court of Appeals relied on the decision I discussed in my earlier post, in which the Supreme Court held that the First Amendment bars the criminalization of child pornography the creation of which does not involve victimizing a real child. Dalton’s attorney apparently thought the stories at least in part depicted the sexual molestation of an actual child. State v. Dalton, supra. The Ohio Supreme Court declined to review the Court of Appeals’ decision, so it’s final. I have no idea what happened to Dalton; I assume the prosecutor did not try to charge him with anything after he was released from prison.

I agree with the decisions of both courts . . . but I wonder what would (will) happen if someone is charged with possession of child pornography based on his or her having textual accounts describing the sexual molestation of real, identifiable children. The accounts themselves would be purely fictitious, i.e., they would not describe the actual molestation of the children; they would, instead, record the writer’s fantasies of engaging in such activity. Would stories like that, I wonder, be treated any differently from the ones in the Dalton and Sharpe cases?

Friday, October 24, 2008

Virtual Divorce = Virtual Murder

You’ve probably seen the news stories about the recent virtual murder in Maple Story, a Second Life-style MMORPG.

According to these stories, a 43-year-old Japanese woman was “so angry” about being divorced by her virtual Maple Story husband she murdered his avatar. The news stories say the virtual murderess – a piano teacher in the real world – was furious because he divorced her “without a word of warning.”

How did she kill him, you ask? In some worlds -- like Second Life -- that could be quite difficult; I don’t know if killing “real” avatars is a standard part of Maple Story or not. I checked out the game’s North American (English) portal, but I couldn’t find an easy answer to that question.

My guess is that you can’t just kill another avatar, a theory I base on the method the piano teacher used to kill her faithless avatar spouse. According to news stories, she got him to tell her his Maple Story username and password while they were happily conjugal; when he divorced her, she used that information to log into his account and kill him off, virtually, of course.

I find the “victim’s” response interesting: He went to the police in Sapporo, where he lives, and complained about his virtual ex-wife’s killing his avatar. I wonder how he phrased his complaint: Did he complain of virtual murder . . . or of a loss of virtual property? (I wonder if he could argue that her killing his avatar constituted a threat . . . on the premise that it implicitly communicated her intent to do something similar to him in the real world? I truly doubt that argument would fly, but it’s a thought.)

The police, naturally, didn’t go with virtual murder (or a loss of virtual property, for that matter). Instead, they arrested her on suspicion of illegally accessing a computer and manipulating electronic data; the news stories say that if she were charged with and convicted of this offense, she could face up to 5 years in prison or a $5,000 fine.

The charge would be brought under Japan’s Unauthorized Computer Access Law (Law No. 28 of 1999). You can find an English version of it here. Article 3(1) of the Act first states that “[n]o person shall conduct an act of unauthorized computer access.” It then defines “unauthorized computer access” as
(1) An act of making available a specific use which is restricted by an access control function by making in operation a specific computer having that access control function through inputting into that specific computer, via telecommunication line, another person’s identification code for that access control function (to exclude such acts conducted by the access administrator . . .);

(2) An act of making available a restricted specific use by making in operation a specific computer having that access control function through inputting into it, via telecommunication line, any information (excluding an identification code) or command that can evade the restrictions placed by that access control function on that specific use (to exclude such acts conducted by the access . . .);

(3) An act of making available a restricted specific use by making in operation a specific computer, whose specific use is restricted by an access control function installed into another specific computer which is connected, via a telecommunication line, to that specific computer, through inputting into it, via a telecommunication line, any information or command that can evade the restrictions concerned.
Unauthorized Computer Access Law, Article 3(2). The Act defines “access control function” as a function that is
added, by the access administrator governing a specific use, to a specific computer or to another specific computer which is connected to that specific computer through a telecommunication line in order to automatically control the specific use concerned of that specific computer, and that removes all or part of restrictions on that specific use after confirming that a code inputted into a specific computer having that function by a person who is going to conduct that specific use is the identification code. . . .
Unauthorized Computer Access Law, Article 2(3). It defines “identification code” as a code that is granted to someone (known as “authorized user”) who has been
authorized by the access administrator governing a specific use of a specific computer to conduct that specific use, or to that access administrator (hereafter . . . authorized user and access administrator being referred to as “authorized user, etc.”) to enable that access administrator to identify that authorized user, etc., distinguishing the latter from another authorized user, etc.; and that falls under any of the following items or that is a combination of a code which falls under any of the following items and any other code:
(1) A code the content of which the access administrator concerned is required not to make known to a third party wantonly;
(2) A code that is compiled in such ways as are defined by the access administrator concerned using an image of the body, in whole or in part, of the authorized user, etc., concerned, or his or her voice;
(3) A code that is compiled in such ways as are defined by the access administrator concerned using the signature of the authorized user, etc., concerned.
Unauthorized Computer Access Law, Article 2(2). Finally, the Act defines “access administrator” as “a person who administers the operations of a computer (hereafter . . . “specific computer”) which is connected to a telecommunication line, with regard to its use (limited to such use . . . hereafter referred to as “specific use”)”. Unauthorized Computer Access Law, Article 2(1).

I must admit, I find the language of the Act a little hard to follow; it’s more technically grounded than the language you see in comparable U.S. statutes. It looks to me, though, like the charge against the piano teacher would properly be brought under Article 3(1) (which outlaws unauthorized access) coupled with Article 3(2)(1) (which defines “unauthorized computer access” as inputting someone else’s identification code in order to make a computer or computer system do what you want it to do).
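To illustrate how I read those provisions, here is a toy sketch in Python (the username and code are invented; this is an illustration, not anything drawn from the Act itself). An “access control function” is just a gate that removes the restrictions on a “specific use” only when the inputted code matches an authorized user’s “identification code”:

# Toy model of an "access control function": it removes restrictions on
# a "specific use" only when the inputted identification code matches.
AUTHORIZED_CODES = {"husband": "maple123"}  # hypothetical credentials

def access_control(user: str, code: str) -> bool:
    """Return True (remove the restriction) only on a matching code."""
    return AUTHORIZED_CODES.get(user) == code

access_control("husband", "maple123")  # True: the authorized user's code
access_control("husband", "wrong")     # False: the restriction stays

On that toy model, inputting the husband’s real identification code without being him (and without his or the access administrator’s permission) is the conduct Article 3(2)(1) seems to describe.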

I don’t see any requirement in the Act that the unauthorized computer access have caused “damage,” which is a requirement under the general federal cybercrime statute, 18 U.S. Code § 1030. Section 1030(a)(5)(B) makes it a federal crime to intentionally access a computer “without authorization, and as a result of such conduct, recklessly” cause “damage.” The statute defines “damage” as “any impairment to the integrity or availability of data, a program, a system, or information”. 18 U.S. Code § 1030(e)(8).

The piano teacher’s conduct would certainly constitute a U.S. federal crime under this statute because she (i) intentionally accessed a computer (the Maple Story computer and, specifically, her virtual husband’s account on that system) and (ii) caused damage (killing his avatar certainly qualifies as impairing the availability of data or a program). As I said, I don’t know if damage is a requirement under the Japanese statute; it is not required under some U.S. state unauthorized access statutes, on the theory that simply getting “into” a computer system without being authorized to do so is a crime, a virtual analogue of trespass. I suspect the damage element may come into play if and when the lady is sentenced.

Last year I did a post on virtual murder in which I speculated on whether CONSENSUAL virtual murder in online worlds might someday be criminalized. As I explained there (and explain in a law review article that should be published soon), I don’t think it should be a crime, just as I don’t think any consensual acts that take place in a purely virtual world should become the focus of real-world criminal law. As long as it’s consensual, it’s part of a game and I don’t see why anyone should care (regardless of how bizarre the conduct becomes).

This case, though, raises a different issue, equally interesting. Should we make it a crime to commit real murder (nonconsensual murder) in virtual worlds? Let’s assume for the sake of analysis that the man whose avatar was killed in Maple Story won’t be able to resuscitate it; he’ll have to start over with a new avatar. Not being a serious gamer, I’m not sure how important that is; I know it can be very important in goal-directed games like World of Warcraft, and it looks like Maple Story might be one of those.

For the purposes of analysis, again, let’s assume he DID lose a great deal when he lost that avatar; he lost, say, skills and property he will have to work very hard in-game to restore. The question, as far as criminal law is concerned, is whether we should treat this simply as a type of unauthorized access with damage (what some U.S. states define as aggravated hacking) or whether we should go further and treat it as analogous to a real-world crime like theft or even murder.

It couldn’t be theft because she didn’t take the property we are assuming his avatar acquired during its brief lifespan; if we’re trying for a property crime analog, it would have to be some kind of property damage offense. As to whether we should create a crime of virtual murder – avatar murder – I really don’t know. I suppose the answer to that question will depend on how much time we come to spend in virtual worlds; if we come to spend a great deal of our time in these virtual environments – so that we invest much of our personal, emotional and professional lives in them – we might decide some virtual analog of murder is essential.


Can you imagine the dialog if and when this woman goes to prison? “What’d you do?” “I killed an avatar.” Chicago comes to Second Life.

Thursday, October 23, 2008

Not Cybercrime But . . .

Someone was kind enough to send me a link to a news story about a recent decision from a federal district court in Connecticut.

It is not a cybercrime case, as such, but it does touch on issues I’ve written about before, so I’d like to review it here.


The case is Spanierman v. Hughes, 2008 WL 4224483 (D. Conn. 2008). Here, according to the court, are the facts that resulted in this litigation:
On January 2, 2003, the State of Connecticut, Department of Education (“DOE”) hired the Plaintiff to be an English teacher at Emmett O'Brien High School. . . . (1) Hughes was . . . the Superintendent of the Connecticut Technical High School system, of which Emmett O'Brien is a part; (2) Druzolowski was . . . the Assistant Superintendent of the Connecticut Technical High School system; and (3) Hylwa was . . . the principal of Emmett O'Brien. . . .

The Plaintiff originally began to use MySpace because students asked him to look at their MySpace pages. [He] subsequently opened his own MySpace account, creating several different profiles. One. . . was called `Mr. Spiderman,’ which he maintained . . . from the summer of 2005 to the fall of 2005. [He] . . . used his MySpace account to communicate with students about homework, to learn more about the students so he could relate to them better, and conduct casual, non-school related discussions.

Elizabeth Michaud was a guidance counselor at Emmett O'Brien. In the fall of 2005, Michaud spoke with. . . a teacher . . . who informed Michaud that the Plaintiff had a profile on MySpace. Michaud alleges that she also received student complaints about the Plaintiff's profile page. After her conversation with Ford, Michaud viewed the . . . `Mr. Spiderman’ profile page. . . . Michaud . . . was disturbed by what she saw. . . . According to Michaud, the Plaintiff's profile page included a picture of the Plaintiff when he was ten years younger, under which were pictures of Emmett O'Brien students. In addition, Michaud stated that, near the pictures of the students were pictures of naked men with what she considered `inappropriate comments’ underneath them. Michaud . . . was disturbed by the conversations the Plaintiff was conducting on his profile page. Michaud stated [his] conversations with . . . students were `very peer-to-peer like,’ with students talking to him about what they did over the weekend at a party, or about their personal problems. Michaud felt that the Plaintiff's profile page would be disruptive to students. . . .

Michaud spoke with the Plaintiff about his email communications with students about things . . . not related to school, and suggested he use the school email system for the purpose of educational topics and homework. Michaud also told the Plaintiff that some of the pictures on his profile page were inappropriate. After Michaud spoke with the Plaintiff, he deactivated the `Mr. Spiderman’ profile page. The Plaintiff then created a new MySpace profile on October 14, 2005 called `Apollo68.’

[A teacher] . . . discovered the Plaintiff's new profile page and informed Michaud of it. The Defendants also allege that . . . students complained . . . about the Apollo68 profile. Michaud . . .separately viewed the . . . profile and came to the conclusion that it was nearly identical to the `Mr. Spiderman’ profile. The Plaintiff admits that the “Mr. Spiderman” profile and the “Apollo68” profile had the same people as friends and included the same types of communications.

Michaud reported . . . the “Apollo68” profile page to her supervisor. . . . [and] was told to report the situation to Hylwa. . . . In November 2005, Hylwa met with the Plaintiff, explained there would be an investigation, and placed the Plaintiff on administrative leave with pay. The Plaintiff deactivated the “Apollo68” profile when he was placed on administrative leave.
Spanierman v. Hughes, supra.

To summarize what followed, the school conducted an investigation and then told Spanierman “he had exercised poor judgment as a teacher” and “the DOE would not renew his contract”. Spanierman v. Hughes, supra. He brought a civil rights suit, claiming the school and its officials had violated his Fourteenth Amendment rights to due process and equal protection of the laws and his First Amendment rights to freedom of speech and association. Spanierman v. Hughes, supra. He lost.

The district court held, essentially, that he (i) had not shown he had an interest protected by the due process clause (his interest in having his contract renewed was not enough, according to the court); (ii) had not shown he was selectively prosecuted for what he did (that is, had not shown he was singled out for conduct others engaged in without having their employment terminated); and (iii) had not shown that what the DOE did violated his rights under the First Amendment. Spanierman v. Hughes, supra. It can be difficult to prevail on these kinds of claims.

I thought this case was interesting, given some of the things I’ve posted about, because here we have postings on MySpace (or Facebook) coming back to haunt a teacher, not a student. I don’t know anything about education law, but sites like MySpace and Facebook obviously open up a whole new dimension in student-teacher interaction, which I’m sure schools will want to control. Seems to me – as a lawyer who knows nothing about the legal issues or practicalities involved here – that it would be a really good idea for schools to adopt policies specifying what are, and are not, appropriate uses of MySpace and Facebook by teachers in their professional capacity.

In law schools we have access to services offered by Westlaw and Lexis, both of which let us create websites and email groups and communicate with our students online and outside of class; I don’t know of any law schools that have adopted policies defining the appropriate uses of these sites, presumably because they don’t offer the opportunities for creative expression one finds on MySpace and Facebook. I don’t know what was going on with Mr. Spanierman, but he could have been a very well-meaning, enthusiastic teacher who was trying to interact with his students in new ways but ran afoul of formal or informal norms governing student-teacher interactions at the high school level.

Wednesday, October 22, 2008

The Rule of Completeness

I got an email from a forensic computer analyst in which he raises some good questions about how a rule of evidence applies to instant messages, comments posted on a blog and comments made during a chat session. I’m going to take a shot at dealing with his questions, but I’d be interested in hearing what others have to say on the issues.

The rule in question is Rule 106 of the Federal Rules of Evidence. States have their own versions of the rule, but since his issues arose in a federal case, we’re going to analyze the federal rule.

Rule 106 provides as follows: “When a writing or recorded statement or part thereof is introduced by a party, an adverse party may require the introduction at that time of any other part or any other writing or recorded statement which ought in fairness to be considered contemporaneously with it.” It codifies what, at common law, was known as the “rule of completeness.”

Rule 106 is meant to prevent two kinds of prejudice to one party in civil or criminal litigation. Since this is a cybercrime blog, I’m going to use criminal examples. The first kind of prejudice the rule is intended to prevent occurs when, say, the prosecutor introduces part of a “writing or recorded statement” and, in so doing, provides the jury with comments that are taken out of context. Rule 106 is meant to let the defendant -- in my example – require the prosecutor to introduce the rest of the writing or statement so the jury can see everything in context.

The other kind of prejudice Rule 106 addresses is the risk that if the prosecution is allowed to introduce statements that were taken out of context, the defense won’t be able to overcome the effect of that evidence by presenting other evidence later. In other words, the concern here is that the original, taken-out-of-context statements will so influence the jury the defendant won’t be able to change their minds -- to convince them that the statements were taken out of context and so don’t mean what they seem to mean. Rule 106 and similar state rules are meant to ensure that the jury gets the entire picture when it comes to writings or recorded statements.

In Henderson v. U.S., 632 A.2d 419 (D.C. 1993), for example, the defendant was convicted of murder and appealed. One of the issues he raised on appeal was that the trial court allowed the prosecution to introduce excerpts of the 80-page transcript of an interview he had with a police detective. The prosecution introduced approximately 21 of the 80 pages; the statements in those pages inferentially supported the defendant’s guilt by showing he lied to the detective and made comments that seemed to tie him to the victim’s murder. At trial, Henderson had asked the court to introduce the rest of his statement “in order to reflect fairly the length and nature of his statement and to show that he had provided an explanation of his conduct.” Henderson v. U.S., supra. In other words, Henderson said the jury should have been able to put his comments into context.

The District of Columbia Court of Appeals said “fairness required” that the “omitted portions be admitted . . . to avoid presenting the jury with a distorted understanding of the tenor of appellant's statement and, thus, of the admitted portions” of his statement. Henderson v. U.S., supra. In so doing, the court explained the importance of the rule:
[T]he rule of completeness is . . . implicated when the prosecution selectively introduces only the inculpatory portions of a statement made by the defendant. Although the decision . . . falls within the sound discretion of the trial judge, . . . to implement the fairness purpose underlying the rule, the trial judge upon request must admit additional portions that `concern the same subject and explain the part already admitted.’ . . . In addition, where the defense demonstrates that the admitted portions are misleading because of a lack of context, . . . the trial judge should permit `such limited portions to be . . . introduced as will remove the distortion.’ . . .

Furthermore, in a criminal case, the usual fairness concerns . . . are amplified by constitutional considerations. The rule . . . must be applied to ensure that a defendant is not forced to choose between allowing his or her statement to stand distorted as a result of selective introduction and abandoning his or her Fifth Amendment right not to testify in order to clarify that statement.
Henderson v. U.S., supra.

Now let’s get back to the email we began with. The forensic analyst’s questions resulted from his testifying in an arson trial. Here’s a slightly edited version of the essence of the email:

The defendant had private messaged his co-defendant (who later made a deal), and other individuals. I found private messages which included their discussions, as well as discussions between the defendant and other parties, where the messages told several stories. The accurate story was that the fire was an incendiary (set with an open flame) fire; in other stories the defendant indicated the fire was an accidental fire.
During my testimony the defense objected to the introduction of the private message conversation where the defendant admitted to starting the fire. The objection was that the other private messages should also be admitted, because they were occurring at the same time, contemporaneously during the other message sessions (Rule 106). Initially the Judge overruled the objection, then after further arguments sustained the objection.

In my view, having an understanding of the different types of messaging is important in determining whether the message documentation is part of a single recording or different recordings, and is therefore complete or lacking completeness.

In a chat room, a chat session can be seen by all the participants of the chat room, and they all can participate in the discussion, so everyone’s responses and comments are needed for completion. On a blog where a discussion is made and individuals respond with comments a similar situation is created.

Typically in a private message and/or instant message session however, only two people are involved, carrying on a conversation, which no other participant has access to. Multiple private message sessions can be carried on simultaneously creating a condition where two distinctly different conversations/recordings can take place at the same time.
Email from Computer Forensic Analyst (October 21, 2008).

Let’s see what we can do with all this. I agree with the gentleman who sent this email as to how Rule 106 should apply (assuming its other requirements are met) to comments made during chat sessions and posted on a blog. In both those instances, I think that if the prosecution introduced only a portion of the chat session/blog posts, the defendant could logically and reasonably argue that other parts of the session/posts (at least) should be introduced to put everything into context. I think there would probably be some line-drawing involved, i.e., I think the court would have to decide how much of a chat session or how many posts on the blog really needed to be introduced to give the jury an adequate context for the portions introduced by the prosecution.

Now let’s get to the more difficult issue – the instant/private messages. Here, we have two distinct issues: The first one is whether the private/instant messages between two people in a session constitute a conversation, so that introducing only portions of the messages they exchanged (the conversation) would require the introduction of the rest of the messages under Rule 106. The other issue arises in either of two situations: if we decide the messages exchanged in an instant/private message session do not constitute a conversation but something else (a series of severable conversations, maybe); and if the participant in one such session wants to introduce messages exchanged in another session, on the premise that this additional conversation between the two is essential to put the first conversation into context.
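To make that structural distinction concrete, here is a toy model sketched in Python (the participants and messages are invented, loosely tracking the arson facts above): a chat room is one shared transcript, while private messages form separate, possibly simultaneous, per-pair transcripts.

# One chat room = a single shared recording all participants can see.
chat_room = [
    ("alice", "did anyone see the fire on 5th Street?"),
    ("bob",   "yeah, it looked deliberate"),
]

# Private messages = distinct recordings; two can run at the same time.
private_sessions = {
    ("defendant", "codefendant"): [
        ("defendant", "I set it with an open flame"),   # one story
    ],
    ("defendant", "friend"): [
        ("defendant", "the wiring must have shorted"),  # another story
    ],
}

On this model, introducing part of one private session arguably triggers Rule 106 as to the rest of that session; whether a concurrent session between different parties also “ought in fairness” to come in is the separate question I take up below.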

I can’t find any reported cases or law review articles that deal with any of these issues, so we’re pretty much on our own here. As to the first issue, it seems to me that, as the forensic analyst points out, we are in effect dealing with a conversation . . . a single conversation between two people. This scenario seems to me to be analogous to a transcript of a telephone call or the transcript of Henderson’s conversation with the detective. If the other requirements of Rule 106 are met -- i.e., if the proponent of introducing the “rest of the conversation” shows that fairness requires that the jury hear that information in order to put the portions already introduced into context -- then I think the rest of the conversation should come in (subject to the line-drawing I noted above).

What about the residual issue . . . the situation in which the proponent of introducing the “rest of the conversation” argues that it in effect extends to other instant/private message sessions between the same two people? I found a reported federal case in which (a) the prosecutor had used transcripts of two calls between the defendant and his girlfriend made on one day and (b) the defendant wanted the court to let the jury have the transcript of a third conversation between the two, a conversation that occurred on another day. The court declined to do so because it didn’t find that the third conversation was needed to put the others into context, but it assumed such a step could be appropriate, if the other requirements of Rule 106 were met. And I found another case in which the court made the same assumption, but told the defendant it would address the propriety of introducing particular additional conversations when the case came to trial (this opinion was ruling on pre-trial issues).

So a defendant (continuing with the example we’ve been using) could argue that instant/private messages exchanged by Persons A and B in session #2 (and maybe sessions #3 and #4) should be introduced to let the jury put comments made in session #1 (which they already have) into context. If the trial court finds that the messages in Session(s) #2/#3/#4
"ought in fairness to be considered contemporaneously with" the messages from Session #1, then it should let them in (again subject to the line-drawing process I noted earlier).

What do you think? Comments, corrections, additions . . . objections?

Monday, October 20, 2008

Aiding & Abetting Unauthorized Access

About a month ago, I did a post on aiding and abetting the crime of exceeding authorized access to a computer.

As I noted there, the exceeding authorized access crime is necessarily committed by an “insider,” someone who has authorization to access part of a computer system but intentionally goes beyond the scope of their legitimate access. This post is about a related issue: aiding and abetting the crime of gaining unauthorized access to a computer.


The case we’re going to use to analyze this crime is U.S. v. Willis, 476 F.3d 1121
(10th Circuit Court of Appeals 2007). Here are the facts in the case:
[Todd] Willis was employed by Credit Collections, Inc., . . . [a] debt collection agency. To obtain information . . . for debt collection, the agency utilized a . . . website called Accurint.com -- owned by LexisNexis. The information . . . on Accurint.com includes the names, addresses, social security numbers, dates of birth, telephone numbers, and other data of many individuals. . . . [T]o access information on Accurint.com, customers must contract with LexisNexis and obtain a username and password. . . . Willis assigned to employees usernames and passwords to access Accurint.com. Employees were not authorized to obtain information from Accurint.com for personal use. Willis deactivated the usernames and passwords of employees who no longer worked for the company.

While investigating . . . Michelle Fischer and Jacob Wilfong for identity theft, police officers found pages . . . from Accurint.com with identifying information for many people. The information . . . was used to make false identity documents, open instant store credit at various retailers, and use the store credit to purchase goods that were sold for cash. A subpoena to Accurint.com revealed that the information had been obtained through the user name `Amanda Diaz,’ which was assigned to Credit Collections, Inc. Secret Service agents twice interviewed Willis about the identity theft. During the first interview, Willis insisted the username and password assigned to Amanda Diaz had been deactivated and there was no way to determine who had accessed the website. During the second interview, . . . Willis admitted he had given a username and password to his drug dealer in exchange for methamphetamine. . . . [H]e met Fischer through his drug dealer and began providing to her individuals' information he obtained through Accurint.com. After Fischer continued to ask . . . information, he gave her the Amanda Diaz username and password so she could access Accurint.com herself. . . . [When she] was having trouble accessing the site, Willis helped her to log on and . . . showed her how to obtain access to individuals' addresses, social security numbers, dates of birth, etc. . . . Fischer said she would `take care of [him] later.’ She later gave him a silver Seiko watch. When Willis learned through a newspaper article that Ms. Fischer had been arrested for identity theft, he deactivated the username and password.
U.S. v. Willis, supra.

Willis was indicted on 1 count of aiding and abetting unauthorized access to a computer in violation of 18 U.S. Code § 1030(a)(2)(C) and convicted. He appealed, arguing that the prosecution had not proved he knowingly, and with the intent to defraud, aided another in obtaining unauthorized access to a computer. U.S. v. Willis, supra.

Willis argued that the person who aids and abets must have the intent to defraud in so doing. U.S. v. Willis, supra. He claimed there was no proof he knew Fischer would use the information she obtained from Accurint.com to commit identity theft; he said the evidence presented at trial only showed that he thought he was helping her obtain information on people who owed her money. U.S. v. Willis, supra.

The Circuit Court of Appeals disagreed. It began by noting that to be convicted of aiding and abetting, a defendant must share the intent to commit the underlying offense. U.S. v. Willis, supra. To be convicted of the underlying offense -- 18 U.S. Code § 1030(a)(2)(C) -- a defendant must “intentionally access[ ] a computer without authorization . . . and thereby obtain . . . information”. U.S. v. Willis, supra. The court held that § 1030(a)(2)(C) does not require proof of intent to defraud; it only requires proof that the defendant intentionally accessed a computer without authorization and obtained information.

Willis based his argument on the premise that “intent to defraud is an element of § 1030(a)(2)(C) because it is . . . an element under § 1030(a)(4).” U.S. v. Willis, supra. Section 1030(a)(4) makes it a federal crime to “knowingly and with intent to defraud” access a computer “and by means of such conduct further the intended fraud and obtain anything of value”. The Court of Appeals began its analysis of the issue by noting that a plain reading of the statute shows that
the requisite intent to prove a violation of § 1030(a)(2)(C) is not an intent to defraud (as it is under (a)(4)), it is the intent to obtain unauthorized access of a . . . computer. . . . [T]o prove a violation of (a)(2)(C), the Government must show that the defendant: (1) intentionally accessed a computer, (2) without authorization (or exceeded authorized access), (3) and thereby obtained information from any protected computer if the conduct involved an interstate or foreign communication. The government need not also prove that the defendant had the intent to defraud in obtaining the information or that the information was used to any particular ends.
U.S. v. Willis, supra.

The court also rejected Willis’ argument that § 1030(a)(2)(C) “is the general provision of the statute” and § 1030(a)(4) “is the specific provision of the statute. That is, he argues, subsection (a)(4) sets out the specific elements required to prove a violation of subsection (a)(2)(C), and his conduct should be judged under subsection (a)(4), requiring an intent to defraud.” U.S. v. Willis, supra. The Court of Appeals didn’t buy this argument, either:
[O]ther courts have explained that each subsection of § 1030 addresses a different type of harm. . . . For example, subsection (a)(2)(C) requires that a person intentionally access a computer without authorization and thereby obtain information, whereas subsection (a)(5)(C) requires that a person intentionally access a computer without authorization and thereby cause damage. . . . Similarly, subsection (a)(4) has different elements than subsection (a)(2)(C). In addition to requiring that a person act with the specific intent to defraud, a violation of (a)(4) also differs from (a)(2)(C) in that a person can violate the former by obtaining `anything of value’ by the unauthorized access, whereas, as noted above, a person violates (a)(2)(C) by obtaining `information.’
Willis does not contest that he provided Fischer unauthorized access to Accurint.com. He merely argues that he had no intent to defraud in so doing. . . . As the foregoing discussion demonstrates, such proof is not required to establish a violation of § 1030(a)(2)(C). Accordingly, his sufficiency of the evidence argument fails.
U.S. v. Willis, supra.
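To pull the threads together: the dispositive difference between the two subsections is which mental state attaches to which element. Purely as an illustration -- the names below are my own shorthand, not statutory text or anything from the opinion -- the contrast the court draws can be sketched in a few lines of Python:

from dataclasses import dataclass

@dataclass
class Conduct:
    intentionally_accessed: bool     # element (1) under (a)(2)(C)
    without_authorization: bool      # element (2): no (or exceeded) authorization
    obtained_information: bool       # element (3) under (a)(2)(C)
    interstate_communication: bool   # element (3), continued
    intent_to_defraud: bool          # required only by (a)(4)
    obtained_thing_of_value: bool    # (a)(4)'s analogue to "information"

def violates_a2c(c):
    # Note what is absent: no intent to defraud is required.
    return (c.intentionally_accessed and c.without_authorization
            and c.obtained_information and c.interstate_communication)

def violates_a4(c):
    # (a)(4) adds the fraud mental state and swaps "information"
    # for "anything of value."
    return (c.intentionally_accessed and c.without_authorization
            and c.intent_to_defraud and c.obtained_thing_of_value)

On Willis’s own version of the facts -- he concedes providing the unauthorized access and helping Fischer obtain information, but denies any fraudulent purpose -- violates_a2c still comes out true with intent_to_defraud set to False, which is exactly why his sufficiency argument failed.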

Friday, October 17, 2008

EnCase, Consent & Kyllo (2)

Someone sent me two really good questions:
What about someone who consents to a search of his computer without a warrant, but the officer uses Encase by surprise without telling the owner of the computer first? Does this constitute an illegal search because he used technology not available to the general public without the owner's consent?
The questions are a follow-up to my post “EnCase, Consent & Kyllo.”

That post was about whether the use of EnCase to read encrypted files on a computer violated the 4th Amendment as interpreted by the U.S. Supreme Court in the Kyllo case. As I explained in that post and in a prior post, in Kyllo the Supreme Court held it is a search under the 4th Amendment for police to use technology that is “not in general public use” essentially to do something they couldn’t do without it. In Kyllo, the Court said it was a search for an officer to stand across the street from a home and use a thermal imager to detect the amount of heat emanating from parts of the home; the officer could not have gotten that information except by going into the house, which is obviously a search.


So, if police were to use EnCase to find and read files they could not read otherwise, this would be a 4th Amendment search if EnCase is a technology that is not in general public use under Kyllo. If such a use of EnCase is a search, then it would be constitutional only if it were authorized either by a search warrant or by an exception to the search warrant requirement . . . an exception like consent.

So let’s go back to the questions above. To analyze how they should be answered, I need to explain a little about the consent exception, what it is and how it works.

As I said, consent is an exception to the 4th Amendment’s requirement that police must get a search warrant to conduct a search of private places, like homes, offices, cars and computers. Some of the exceptions – like the exigent circumstances exception – track the 4th Amendment by requiring that officers have probable cause to believe they will find evidence in a particular place if they search it.

These exceptions, most notably the exigent circumstances exception, simply excuse the officers from getting a warrant on the theory that it’s not practicable to get one when you’re dealing with an exigency. The exigencies the exception encompasses involve things like entering to prevent the destruction of evidence, save a hostage or prevent a suspect from fleeing. The notion is that if officers took time to get a warrant in situations like this, evidence might be lost, a suspect might get away and/or someone might be injured because officers waited too long to enter.


The consent exception is different. It does not require that officers have probable cause to conduct a search because it’s based on the notion of waiver. Each of us has certain constitutional rights – like the 5th Amendment right not to be compelled to incriminate yourself or the 4th Amendment right to be free from unreasonable searches and seizures – that, in effect, “belong” to us. That means they don’t apply if we don’t want them to apply. It’s up to me whether I want to give up my 4th Amendment right and let police search my house or give up my 5th Amendment right and talk to the police or to a grand jury. The consent exception is based on waiver; I waive – give up – my 4th Amendment rights.

For a search based on the consent exception to be valid, the person who gave the consent must have done so voluntarily; police can’t coerce you into giving up your 4th Amendment or other rights. And the consent exception is to some extent analogous to a contract; that is, the exception applies only as long as and to the extent that you gave up your 4th Amendment rights and allowed the police to search for, and seize, evidence.

That second issue goes to what is known as scope. To be constitutional, an officer’s search of a person or a place or a thing must remain within the scope of the consent the person gave. As a federal district court noted, a “search conducted pursuant to consent may not exceed the scope of the consent sought and given.” U.S. v. Benezario, 339 F. Supp.2d 361 (D. Puerto Rico 2004). Here is how one court explained the scope aspect of consent searches:
[T]he scope of the permissible search is limited to the consent given. . . . When the state relies on consent to support a search, it must prove . . . that officials complied with any limitations on the scope of the consent. . . .The scope of a person's consent does not turn on what the person subjectively intended. . . . [I]t turns on what a reasonable person would have intended. . . .The specific request that the officer made, the stated object of the search, and the surrounding circumstances all bear on our determination of the scope of a person's consent.
State v. Fugate, 210 Or.App. 8, 150 P.3d 409 (Or. App. 2006).

So let’s go back to the questions we started with. Let’s assume the officer says to the owner of the computer, “Can I search your computer for evidence of (let’s say) fraud?” The owner of the computer says, “yes, you can.” So the owner consents to a search of his computer, the scope of which is limited to finding evidence of fraud. (That MIGHT somehow limit the files the officer could look at, but probably not; courts have generally found that because files can be re-named, it’s not necessarily a problem if the officer looks at jpg and other files that might not seem to be related to fraud.)
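Incidentally, the reason courts shrug at re-named files is easy to demonstrate. Here is a minimal sketch -- my own illustration, not anything drawn from EnCase or from the case law -- of how an examiner can flag files whose contents don’t match their names by checking a few well-known “magic number” signatures:

import pathlib

# A few common file signatures ("magic numbers"); this table is
# illustrative, not exhaustive.
SIGNATURES = {
    b"\xff\xd8\xff": ".jpg",        # JPEG image
    b"\x89PNG\r\n\x1a\n": ".png",   # PNG image
    b"%PDF": ".pdf",                # PDF document
}

def apparent_type(path):
    """Guess a file's real type from its first bytes, ignoring its name."""
    header = path.read_bytes()[:8]
    for magic, ext in SIGNATURES.items():
        if header.startswith(magic):
            return ext
    return None

def find_misnamed(root):
    """Yield (file, real_extension) for files whose names lie about their contents."""
    for p in pathlib.Path(root).rglob("*"):
        if p.is_file():
            real = apparent_type(p)
            if real is not None and p.suffix.lower() != real:
                yield p, real

An image renamed ledger.txt, or a PDF renamed vacation.jpg, would surface immediately -- which is why limiting a consent search by file name or extension offers so little real protection.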

Now let’s assume the officer uses EnCase (which he just happens to have with him and whips out, somehow) in the search. Let’s further assume that because the officer uses EnCase he is able to find evidence of fraud he would not otherwise have been able to find; as I noted in the earlier post, he’s able to read password-protected files because he’s using EnCase.

The question is whether the incremental use of EnCase to conduct the search exceeds the scope of the consent given. The computer owner could argue that he implicitly consented to a traditional search – a visual and tangible inspection by a human being acting without the assistance of special technology (technology not in general public use under Kyllo). The computer owner would say he implicitly assumed this was the kind of search he was consenting to because he was not aware of EnCase or of the possibility it could be used to let the officer find things he could not have found if he only used his own senses and skills to conduct the search.

The officer could argue, in response, that the computer owner consented to a search of the computer that was designed to locate evidence of fraud, and that the search he conducted – with EnCase – did not exceed the scope of that consent. The officer would point out that this was not a case in which a police officer was given consent to search for X but instead proceeded to search for Y (maybe in addition to X). The officer might also say that the use of EnCase goes not to the SCOPE of the search but to its thoroughness, i.e., it did not let him search for more than he was authorized to search for; it simply let him conduct a better search for the item(s) he was authorized to search for given the owner’s consent.
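To make the thoroughness point concrete: forensic suites can recover material that no amount of clicking through folders would ever reveal. I have no inside knowledge of how EnCase implements its searches, so the following is a generic, hypothetical sketch of one such technique -- signature-based “carving” of a raw disk image for deleted JPEGs:

def carve_jpegs(image_path):
    """Scan a raw disk image for JPEG start/end markers.

    A deleted file no longer appears in any directory listing, but its
    bytes may still sit on the disk -- which is why a tool-assisted
    search is more thorough than a purely visual one.
    """
    SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker
    EOI = b"\xff\xd9"       # JPEG end-of-image marker
    with open(image_path, "rb") as f:
        data = f.read()
    found = []
    pos = 0
    while True:
        start = data.find(SOI, pos)
        if start == -1:
            break
        end = data.find(EOI, start)
        if end == -1:
            break
        found.append((start, end + 2))   # byte offsets of a candidate image
        pos = end + 2
    return found

Whether deploying that extra capability exceeds the consent given is, of course, precisely the question.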

Wednesday, October 15, 2008

"Distribution"

I’ve done at least one post on the music and movie industries’ war against file-sharing.

Personally, as I said earlier, I think the strategy they’re pursuing will ultimately prove futile, but they press on.


Late last month there was an interesting development in a civil suit against an alleged file-sharer. The case is Capitol Records, Inc. v. Thomas, 2008 WL 4405282 (U.S. District Court for the District of Minnesota, 2008). You can read about the facts in the case and how it got to court in this article from last year.

As the article explains, last year Jammie Thomas, a 30-year-old Native American single mother of two, refused to pay an out-of-court settlement to the Recording Industry Association of America (RIAA) when it accused her of illegally using file-sharing software to share music. Instead, Thomas went to trial. On October 4 of last year, a federal jury in Duluth, Minnesota found that Ms. Thomas had engaged in illegal file-sharing and ordered her to pay $9,250 for each of the 24 songs the RIAA said she had distributed illegally via the software. The total award was $222,000. The article cited above, which was written at the time, said the award would “almost certainly go uncollected” and would drive Ms. Thomas into bankruptcy.

Ms. Thomas’ lawyers filed a motion for a new trial with the court, raising several alleged errors in the original proceeding. In ruling on her motion, the court first noted precisely what the RIAA had accused Ms. Thomas of: “On April 19, 2006, Plaintiffs filed a Complaint against Defendant Jammie Thomas alleging that she infringed Plaintiffs' copyrighted sound recordings pursuant to the Copyright Act, 17 U.S.C. §§ 101, 106, 501-505, by illegally downloading and distributing the recordings via the online peer-to-peer file sharing application known as Kazaa.” Capitol Records, Inc. v. Thomas, supra.

The court then considered whether it erred in the instruction it gave the jury on the issue of “distributing” the recordings. “The Copyright Act provides that `the owner of copyright . . . has the exclusive rights to . . . distribute copies . . . of the copyrighted work to the public by sale or other transfer of ownership. . . .’ 17 U.S.C. § 106(3). The Act does not define the term `distribute.’” Capitol Records, Inc. v. Thomas, supra. It noted that other courts disagree as to “whether making copyrighted materials available for distribution constitutes distribution under § 106(3).” Capitol Records, Inc. v. Thomas, supra.

After reviewing a variety of sources – the language of the copyright statute itself, a dictionary, an opinion from the Register of Copyrights and the use of the term “distribute” in other sections of the U.S. Code – the Thomas court decided Ms. Thomas was correct in arguing that “the plain meaning of the term `distribution’ does not include making available and, instead, requires actual dissemination” of the copyrighted material (the songs, in this instance). Capitol Records, Inc. v. Thomas, supra.

The court also rejected the plaintiffs’ argument that in this context “distribution” is synonymous with “publication”: The copyright statutes at one point define publication as “the distribution of copies or phonorecords of a work to the public by sale or other transfer of ownership. . . . The offering to distribute copies or phonorecords . . . for . . . distribution . . . constitutes publication.” 17 U.S. Code § 101.

The court found that “[u]nder this definition, making sound recordings available on Kazaa could be considered distribution.” Capitol Records, Inc. v. Thomas, supra. But it also found that the terms are not, in fact, synonymous:

[S]imply because all distributions within the meaning of § 106(3) are publications does not mean that all publications within the meaning of § 101 are distributions. The statutory definition of publication is broader than the term distribution as used in § 106(3). A publication can occur by means of the `distribution of copies . . . of a work to the public by sale or other transfer of ownership. . . .’ § 101. This portion of the definition. . . defines a distribution as set forth in § 106(3). However, a publication may also occur by `offering to distribute copies . . . to . . . persons for purposes of further distribution. . . .’ § 101. While a publication effected by distributing . . . of the work is a distribution, a publication effected by merely offering to distribute copies . . . to the public is merely an offer of distribution, not an actual distribution.
Capitol Records, Inc. v. Thomas, supra.

The court then found that because it had erroneously instructed the jury that the “`act of making copyrighted sound recordings available for electronic distribution on a peer-to-peer network, without license from the copyright owners, violates the copyright owners' exclusive right of distribution, regardless of whether actual distribution has been shown’”, Ms. Thomas was entitled to a new trial. As the court explained, “[l]iability for violation of the exclusive distribution right found in § 106(3) requires actual dissemination. Jury Instruction No. 15 was erroneous and that error substantially prejudiced Thomas's rights. Based on the Court's error in instructing the jury, it grants Thomas a new trial.” Capitol Records, Inc. v. Thomas, supra.

But the Thomas court didn’t stop there. In an aside – in what lawyers refer to as dicta, i.e., comments that are not essential to deciding the issues in the case – the judge in the Thomas case gave us his opinion of this and similar lawsuits:
The Court would be remiss if it did not take this opportunity to implore Congress to amend the Copyright Act to address liability and damages in peer-to-peer network cases such as the one currently before this Court. The Court begins its analysis by recognizing the unique nature of this case. The defendant is an individual, a consumer. She is not a business. She sought no profit from her acts. The myriad of copyright cases cited by Plaintiffs and the Government, in which courts upheld large statutory damages awards far above the minimum, have limited relevance in this case. All of the cited cases involve corporate or business defendants and seek to deter future illegal commercial conduct. The parties point to no case in which large statutory damages were applied to a party who did not infringe in search of commercial gain.

The statutory damages awarded against Thomas are not a deterrent against those who pirate music in order to profit. Thomas's conduct was motivated by her desire to obtain the copyrighted music for her own use. The Court does not condone Thomas's actions, but it would be a farce to say that a single mother's acts of using Kazaa are the equivalent, for example, to the acts of global financial firms illegally infringing on copyrights in order to profit in the securities market. . . .

While the Court does not discount Plaintiffs' claim that, cumulatively, illegal downloading has far-reaching effects on their businesses, the damages awarded in this case are wholly disproportionate to the damages suffered by Plaintiffs. Thomas allegedly infringed on the copyrights of 24 songs-the equivalent of approximately three CDs, costing less than $54, and yet the total damages awarded is $222,000-more than five hundred times the cost of buying 24 separate CDs and more than four thousand times the cost of three CDs. While the Copyright Act was intended to permit statutory damages that are larger than the simple cost of the infringed works in order to make infringing a far less attractive alternative than legitimately purchasing the songs, surely damages that are more than one hundred times the cost of the works would serve as a sufficient deterrent.

Thomas not only gained no profits from her alleged illegal activities, she sought no profits. Part of the justification for large statutory damages awards in copyright cases is to deter actors by ensuring that the possible penalty for infringing substantially outweighs the potential gain from infringing. In the case of commercial actors, the potential gain in revenues is enormous and enticing to potential infringers. In the case of individuals who infringe by using peer-to-peer networks, the potential gain from infringement is access to free music, not the possibility of hundreds of thousands-or even millions-of dollars in profits. This fact means that statutory damages awards of hundreds of thousands of dollars is certainly far greater than necessary to accomplish Congress's goal of deterrence.

. . . [B]y using Kazaa, Thomas acted like countless other Internet users. Her alleged acts were illegal, but common. Her status as a consumer who was not seeking to harm her competitors or make a profit does not excuse her behavior. But it does make the award of hundreds of thousands of dollars in damages . . . oppressive.
Capitol Records, Inc. v. Thomas, 2008 WL 4405282 (D. Minn. 2008).
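The court’s arithmetic, by the way, checks out. Here is a quick verification using its own figures -- $9,250 per song, plus an assumed retail price of about $18 per CD, which I infer from the court’s “less than $54” for three CDs:

per_song_award = 9250
songs = 24
total = per_song_award * songs     # 222,000 -- the jury's award

cd_price = 18                      # assumption: three CDs cost "less than $54"
print(total)                       # 222000
print(total / (24 * cd_price))     # ~514: "more than five hundred times"
print(total / (3 * cd_price))      # ~4111: "more than four thousand times"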

Monday, October 13, 2008

Business Records Exception

As I explained in an earlier post, U.S. law bars the introduction of hearsay evidence unless it falls into one of the exceptions to this basic rule.

As I explained in that earlier post, hearsay is “a statement, other than one made by the declarant while testifying at the trial or hearing, offered in evidence to prove the truth of the matter asserted.” Federal Rules of Evidence, Rule 801(c). The federal system and every state define hearsay similarly, and they all recognize basically the same set of exceptions to the rule.

As I explained earlier, hearsay isn’t allowed, as a general rule, because it denies the party against whom it is introduced an opportunity to effectively challenge its accuracy and reliability. A rumor would be hearsay; so if I took the stand and said I’d heard a rumor that you’re an axe murderer, you couldn’t do much to attack the basic accuracy of the content of the rumor. You could try to attack my credibility, but since I’m saying I heard this story from John Doe, and I trust John Doe, you’re pretty well stymied in attacking the inherent believability and accuracy of the axe murderer story.

One of the exceptions is the “business records” exception. As Wikipedia explains, the rationale of this exception is the premise that “employees are under a duty to be accurate in observing, reporting, and recording business facts. The . . . belief is that special reliability is provided by the regularity with which the records are made and kept, as well as the incentive of employees to keep accurate records (under threat of termination or other penalty).” The presumptive accuracy with which business records are kept is assumed to overcome the law’s skepticism about admitting regular hearsay. The records are hearsay because the contents – the statements – they contain are being introduced to prove the truth of the matter(s) they attest to.

The use of the business records exception came up recently in a case from Ohio. Here are the facts in the case:

On April 6, 2006, the Cuyahoga County Grand Jury indicted [Denise] Sherrills for . . . unauthorized use of a computer. On June 15, Sherrills pled not . . . and . . . a jury trial commenced on March 2, 2007.

Janice Allen, . . . origination manager in . . . Deep Green Financial, . . . was Sherrills' immediate supervisor. . . . [O]n August 9, 2005, Sherrills called off sick, and it became necessary for Allen to gain access to Sherrills' electronic mail and voice mailbox to follow up on any pending communications with Deep Green's clients. Allen contacted her manager to get approval to access Sherrills' e-mail and voice mailbox accounts.

Allen accessed Sherrills' email and voice mailbox utilizing a password provided by Deep Green's [IT] department. . . . [S]he discovered an e-mail that had been sent . . . to an outside e-mail address . . . truloxs@hotmail.com. . . . [T]he e-mail had an attached Excel spreadsheet containing confidential information on sixty-four of Deep Green's clients. Allen . . . reported it to her manager, Patricia Kelly.

Kelly . . . spoke with Sherrills, who [said] she was unable to report for work, because she was ill. Kelly . . . asked Allen to review Sherrills' workload, and . . . arranged with the IT department to grant Allen access to Sherrills' e-mail and voice . . . accounts. . . .

[Allen] discovered an e-mail containing confidential client information that had been sent from Sherrills' Deep Green e-mail account to an outside e-mail address. . . . [It] had an attachment, which included information on Deep Green's customers. . . . [She] reported the discovery to . . . Deep Green's Human Resources Director.

Kelly telephoned Sherrills and asked her to come into the office to discuss a customer issue. . . Sherrills was very combative and inquired if she was being fired. . . . Sherrills promised to come into the office later that day to discuss the matter, but . . . never did, and never reported to work thereafter.

Randy Zuendel, IT Security Manager, testified that the Human Resources Department asked him to investigate the e-mail . . . from Sherrills' Deep Green e-mail account to truloxs@hotmail.com. . . . [He found] four e-mails sent . . . to outside . . . addresses. . . . [O]ne had an attachment, which contained the names and account numbers of thousands of Deep Green's customers.

Zuendel also testified an e-mail dated July 1 was sent to . . . truloxs@yahoo.com. This e-mail was also sent to truloxs@hotmail.com. Zuendel testified the subject line of the e-mail dated July 1, 2005, that was sent from Sherrills' Deep Green's e-mail account was titled `note to myself.’ Further, this e-mail was written in the first person.

Zuendel testified that the information . . . in the e-mails sent from Sherrills' Deep Green e-mail account to the two outside e-mail addresses were proprietary in nature. . . . [If it] was disclosed to competitors. . . it could significantly harm Deep Green's interests.

Zuendel testified that . . . Deep Green's employees were also prohibited from uploading or downloading files from outside computers. . .
State v. Sherrills, 2008 WL 1822406 (Ohio Court of Appeals, 2008).

The jury convicted Ms. Sherrills of unauthorized use of computer property, and on April 10, 2007, the court sentenced her to probation for a year. She appealed her conviction, arguing that the trial court erred in admitting the four emails at issue into evidence; she argued that they should have been excluded as hearsay. They were, as I noted above, hearsay because they contain statements that were made outside of court and they are being offered to prove the truth of those statements. The prosecution got them admitted under the business records exception.

This is Ohio’s version of the exception:
The following are not excluded by the hearsay rule, even though the declarant is available as a witness:. . . . A memorandum, report, record, or data compilation, in any form, of acts, events, or conditions, made at or near the time by, or from information transmitted by, a person with knowledge, if kept in the course of a regularly conducted business activity, and if it was the regular practice of that business activity to make the memorandum, report, record, or data compilation, all as shown by the testimony of the custodian or other qualified witness . . . unless the source of information or the method or circumstances of preparation indicate lack of trustworthiness. . . .
Ohio Rule of Evidence 803(6). According to the Court of Appeals, it is not necessary that the
witness have first hand knowledge of the transaction giving rise to the record. Rather, it must be demonstrated that the witness is sufficiently familiar with the operation of the business and with the circumstances of the record's preparation, maintenance, and retrieval, that he can reasonably testify on the basis of this knowledge that the record is what it purports to be, and that it was made in the ordinary course of business consistent with the elements of Rule 803(6).
State v. Sherrills, supra. The court found that Zuendel’s testimony met this requirement:
In his capacity as the IT Security Manager, Zuendel testified that all e-mails received or sent, including attached documents, are stored on Deep Green's exchange server in the normal and ordinary course of business. Zuendel testified in detail about the interface of the exchange server and an employee's workstation. Zuendel testified that all e-mails received or sent . . . go through Deep Green's exchange server. The person receiving or sending an e-mail has to connect to the exchange server from their work station through Microsoft Outlook to read or compose an e-mail.

Zuendel testified that Deep Green conducts its business primarily through the internet and corresponds with their clients largely through e-mails. Thus, the record of all e-mail received or sent . . . are kept in the normal course of business. Zuendel explained that once the e-mails were discovered, he was able to copy them from where they were stored on Deep Green's exchange server to a folder on his computer. . . . [O]nce the e-mails and attachments were copied to his computer, he printed the e-mails.

Zuendel's testimony demonstrated that he was familiar with the records of e-mails Deep Green kept in the ordinary course of business and the procedure to retrieve, transmit, and store the e-mails. Zuendel also had personal knowledge as to the retrieval of the e-mails after the discovery. Based on the foundation as established by Zuendel, the e-mails were admissible under the business records exception to the hearsay rule.
State v. Sherrills, supra.
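For readers curious what Zuendel’s retrieval step looks like in practice: Deep Green ran Microsoft Exchange and Outlook, and the opinion gives no details about its administrative tooling, so what follows is a hypothetical sketch using Python’s standard imaplib against a generic IMAP mail store -- purely to illustrate the connect, search, and copy-out workflow he described. The host, credentials, and mailbox name are placeholders of my own invention:

import email
import imaplib

# Hypothetical host and credentials -- placeholders, not from the case.
conn = imaplib.IMAP4_SSL("mail.example.com")
conn.login("it-security-admin", "password-from-vault")
conn.select("INBOX", readonly=True)   # read-only, so nothing is altered;
                                      # the real target would be the user's mail store

# Find messages addressed to the outside account identified in the investigation.
status, data = conn.search(None, '(TO "truloxs@hotmail.com")')
for num in data[0].split():
    status, msg_data = conn.fetch(num, "(RFC822)")
    raw = msg_data[0][1]
    msg = email.message_from_bytes(raw)
    print(num.decode(), msg["Subject"])   # e.g. the "note to myself" subject line
    # Preserve a copy outside the live mailbox, as Zuendel described doing.
    with open("exported_%s.eml" % num.decode(), "wb") as f:
        f.write(raw)
conn.logout()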

Ms. Sherrills lost. Her conviction (and probation) stands, unless the Ohio Supreme Court decides to reverse, which I suspect is unlikely.