Last week, Tara McPherson and I presented a Scalar workshop in Henry Jenkins’ Public Intellectuals class at USC Annenberg, mapping our own traversals of the academic and non-academic realms going back to the launch of Vectors in 2005. We each agreed to show one Vectors project as a precursor to the hands-on demo of Scalar, which Henry asked his students to use to create their own prototype scholar-activist projects. I chose Trevor Paglen’s Unmarked Planes and Hidden Geographies, an easily overlooked project from 2006 designed by Raegan Kelly and programmed by Craig Dietrich. Paglen created the project while he was still a graduate student in Geography at UC Berkeley, where his research was devoted to mapping the contours of the military-industrial complex. An offshoot of the dissertation research that would become the books Blank Spots on the Map: The Dark Geography of the Pentagon’s Secret World and Torture Taxi: On the Trail of the CIA’s Rendition Flights, Paglen’s project used a combination of reverse surveillance tactics, including long-range telephotography and a prescient form of data mining, that allowed him to identify the flight paths and schedules of the planes used for black ops and extraordinary rendition. Even so-called “torture flights” have to be charted by the FAA in order to control air traffic, and Paglen’s project cleverly scraped the publicly available FAA flight data, using the absence of standard plane identifiers (tail numbers) to reveal flights that are supposed to be hidden from public scrutiny. Paglen’s insight in 2006 eerily prefigures current revelations about the utility of metadata as a means of tracking behavior and suggests the need for more instances of reverse surveillance if our democracy is going to survive.
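Paglen’s tactic of treating the *absence* of an identifier as itself a signal can be sketched in a few lines. This is only an illustration of the general filtering idea, not a reconstruction of his actual method or of the FAA’s data format; the records, callsigns, and routes below are entirely hypothetical:

```python
# Illustrative sketch: flag flight records whose callsign does not follow
# the standard US civil registration ("N-number") pattern, on the premise
# that a missing or non-standard identifier in otherwise-public data can
# itself be revealing. All data here is invented for the example.
import re

# Hypothetical sample of public flight-plan records.
flights = [
    {"callsign": "N123AB", "route": "KLAS-KXTA"},
    {"callsign": "JANET12", "route": "KLAS-KTNX"},  # no standard N-number
    {"callsign": "N774US", "route": "KJFK-KLAX"},
    {"callsign": "", "route": "KIAD-LPAZ"},         # identifier absent
]

# US civil registrations: "N" followed by 1-5 digits and up to 2 letters.
N_NUMBER = re.compile(r"^N[0-9]{1,5}[A-Z]{0,2}$")

def flag_anomalous(records):
    """Return records lacking a standard tail-number-style identifier."""
    return [r for r in records if not N_NUMBER.match(r["callsign"])]

suspicious = flag_anomalous(flights)
print([r["route"] for r in suspicious])  # routes flown without standard IDs
```

The point of the sketch is the inversion: instead of searching for what the data contains, it searches for what the data conspicuously omits.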
For me, though, the real moment of revelation came as I was pontificating about my own project’s “innovative” commitment to “researching in public.” Tara commented blithely that I was basically doing a Humanities version of the kind of open data research that is increasingly standard practice in the Sciences. The sharing of data sets, now mandated for many publicly funded research projects in the Sciences, indeed closely mirrors my own hopes for the Technologies of Cinema archive – that the collection of media (currently 400+ media clips) that I have been amassing in Critical Commons will be found and used by others to perform parallel or, indeed, divergent projects using this public “data set.” As with their comparatively welcoming attitude toward collaboration and commitment to accelerated publication timelines, it turns out that we in the Humanities can once again learn from our colleagues in the Sciences.
Tara went on to offer a deliberately provocative metaphor for traditional archival research that does not aspire to openness, describing it as “vampiric.” Part of the goal of the ANVC’s development of archive partnerships around Scalar is to transform the relationship between scholars and archives, enabling a more bidirectional mode of interaction. Instead of scholars who make “their” discovery in some dusty corner of the archive, extracting what they need and hoping that no one else finds the same materials before they publish, a long-term goal of Scalar is to allow for two-way linkages between the contents of electronic archives and their treatment in scholarly publications. Instead of sucking archives dry, such a circuit of knowledge production and (re)distribution stands to benefit both archives and researchers of the future. Another way to think about this is the transformation of scholarly work from being its own “content” to serving as “metadata” attached to original sources in the archive. While the Scalar team continues to develop the technical and human infrastructure needed for this transformation, the conceptual architecture of open data and public intellectualism offers an equally important foundation for its recognition within the academy.
By now, anyone who might have been tempted to add an assertion of copyright ownership requiring “written consent” to reuse the content of their Facebook posts has hopefully been disabused of any faith in its efficacy or legal standing, thanks to multiple disavowals in commercial and social media, as well as innumerable comments within Facebook itself. Bogus memes like this one circulate and are debunked quickly, which means, on one hand, that the internet is doing its job; on the other hand, I’ve been disappointed by the lack of reflection on the cultural circumstances giving rise to this particular meme and the conservative ideology reflected in its denunciations by individuals, social media and commercial outlets alike.
From ABC news to Wired.com, debunkers have authoritatively declared the impossibility of overriding Facebook’s basic user agreement, citing the company’s own refutation, which reasserts that users retain ownership of their intellectual property for content posted to the site. “You’ll be happy to know,” says today’s Time magazine tech site, “the company’s making it clear that you — not Facebook — own your content, period.” Facebook’s official response to the “meme” strikes a similarly reassuring tone:
There is a rumor circulating that Facebook is making a change related to ownership of users’ information or the content they post to the site. This is false. Anyone who uses Facebook owns and controls the content and information they post, as stated in our terms. They control how that content and information is shared. That is our policy, and it always has been.
In this brief (i.e., easily circulated) “fact check,” Facebook does not bother to highlight the fact that rather than asserting “ownership” of content posted to the social network, its stated terms instead grant the company a transferable license to *use* anything posted by its hundreds of millions of users any way it wants, royalty-free, worldwide, until it is deleted by you *and* everyone you shared it with.
So what’s the difference between this type of blanket, essentially irrevocable, licensing agreement and what we still laughably refer to as “ownership”? Reassuring users that Facebook does not assert “ownership” is an absurd prevarication that only has currency on an ideological level. Yes, you still “own” the “copyright” to your “IP,” but if there is money to be made off of something posted to Facebook, you have already given them the right to do so without paying you a cent (remember “royalty-free”?). In the unlikely event that something you posted turns out to have commercial value, Facebook can also sell a third party the right to exploit it in a worldwide market (remember “transferable” and “worldwide” from your user agreement?), again without owing you a cent.
But I’m not actually interested in any of that. What interests me is our willingness to engage in conversations about — and adhere to systems that serve the interests of — a lottery-like system of extreme individual reward at the expense of a shared cultural good. I am talking, of course, about the difference between open and closed models of cultural production, and the fact that the dominant ideology of American culture continues to favor the latter, even when delivered over a technological infrastructure that uniquely facilitates the former.
In other words, the overwhelming discourse surrounding the “Facebook copyright meme” is mendaciously predicated on the possibility that someone is sooner or later going to get fabulously wealthy by exploiting our *individual* images, words, sounds or ideas. Focusing on the terms under which such a billion-to-one occurrence will get monetized deflects awareness away from the obvious fact that the value of Facebook lies in its efficient mobilization of a vast, collective network, not any one contribution to it. More importantly, every single day we consent to the rules and expectations imposed by corporate and legal institutions such as this one. If the dominant ideology of such a system revolves around repeated assertions and refinements of terms of ownership, monetization and control, then users will continue to act accordingly, protecting their “intellectual property,” sharing only when they are assured of not getting ripped off by other community members or their imputed corporate overlords.
So what really happens when a meme like this rises to prominence? A few people get to look smart by making others look dumb for buying into the hoax and misunderstanding both the nature of copyright law and their agreement with Facebook. Facebook gets to look generous and fair by disavowing the rumor that it is evil and trying to take away “ownership” of its users’ “content.” The smart people mimic what they have read about copyright law and we all, once again, function as obedient drones of the copyright industries’ public relations machine.
Does the popularity of this meme suggest that there is a problem with Facebook’s royalty-free licensing agreement with users? Maybe. Google had to nuance some of its royalty-free uses of YouTube videos in response to uploaders expressing their feelings of being exploited. Perception, in these cases, is as important as reality, and clearly there is some sector of Facebook’s users that is ready to believe that people — especially “open capital” corporations — are out to profit from their ideas. Now I get it! We’re talking about piracy — but for once, we get to be the injured innocents instead of the scurvy dogs!
For over a decade, we have all been bombarded by the copyright industries’ discourse of “piracy” (bad) vs. copyright (good) such that this binary has become second nature, a default position that dominates cultural discourse about ideas and ownership so effectively that we rarely even notice it’s happening. We speak this ideology every time we ask a remix artist how they got away with asserting basic fair use rights or stop ourselves from reusing images, sound or video for fear of incurring the wrath of a corporate legal department or its automated takedown trolls.
Facebook, by this logic, is out to “pirate” our content, taking it away from “us” the true owners, and the meme text encourages us to “declare that my copyright is attached to all of my personal details, illustrations, graphics, comics, paintings, photos, and videos, etc.” Copyright law, however imperfectly expressed it might be here, rides in to rescue our ideas! Although it is temporarily reversed, the “piracy” dichotomy is preserved and copyright is elevated as the hero of our personal struggle against profiteering corporations! Freedom, creativity, private ownership, and corporate beneficence all triumph in the name of copyright!
So who loses? Gullible fools (i.e., not me!) and “conspiracy theorists” — those who are either not cool enough to know their memes or not skeptical enough to separate smart conspiracy theories from dumb ones. Regardless, the winner of this meme and its debunkers remains an utterly conventional and individually disempowering model of copyright and intellectual property ownership.
I confess that when I first heard about Facebook users being encouraged to add text to their posts that would override the default copyright agreement, I naively assumed that the goal was to encourage greater sharing and openness, perhaps through the use of Creative Commons (CC) licenses. But after reading repeated denunciations of a user’s ability to override Facebook’s basic Terms of Service with the meme text, it seems clear that the attribution of a CC license would be deemed equally futile. In fact, the “smart people” denouncing the dumb ones in this meme frame their critiques with such condescension as to preclude any subsequent discussion whatsoever.
For example, Wired.com uses an image of a girl dressed in Harry Potter garb casting a spell to underscore its characterization of the copyright notice as “silly” and a “magic spell” (ironically, the image used by Wired was posted on Flickr under a CC license that allows it to be circulated freely online without paying or seeking permission from its originator!). Snopes likewise refers to the text as a “legal talisman.” A meme, once so aggressively debunked, does not get a second chance; to even continue the conversation risks putting you on the wrong side of the smart/dumb divide.
In either case, the end lesson remains the same and is flatly stated by Wired’s Ryan Tate:
A blunter way of summarizing the situation is to explain that if you want to use Facebook, you must play by Facebook’s rules, even when they change. If you don’t want to play by Facebook’s rules anymore, you must quit Facebook. The idea of remaining on Facebook but playing by your own rules via magic spells is a fantasy. Stay on Facebook or leave Facebook. There is no third option.
Time is only slightly more nuanced:
If you have a problem with Facebook’s privacy policies, you can either stick it out and lobby for Facebook to amend its terms, or you can quit Facebook.
So these are the options offered by the smart people: agree to Facebook’s terms, sign a petition that probably won’t do any good, or register an even more meaningless protest by taking your business elsewhere. This last option should not be entirely discounted, of course, but the historical metaphor of refusing to buy from a retailer where one has been mistreated hardly holds in the case of a billion-user system like Facebook. However you slice it, the prospect of a collective response is deeply muted by the ideology of individual action as the only — and ironically fundamentally hopeless! — course available.
The option that goes unmentioned is, of course, to initiate or participate in a collective action against Facebook. Organizing a boycott or an information campaign to prompt smart, public discussion of the real issues underlying copyright, privacy and ownership remains a viable option — particularly by taking advantage of the deliberate spreadability of social media via sites such as this one — and one that should not be reduced to a simple smart/dumb binary.
(originally posted on Facebook 11.27.12)
My talk on fair use for Educause Live! last month was picked up by Rodney Murray for his monthly podcast at Inside Higher Ed, “The Pulse.” Murray nicely excerpted and highlighted parts of the talk pertaining to obstacles and solutions for educators using copyrighted media (even though he left out the more self-serving parts of the presentation that focused on Critical Commons as an alternative to proprietary learning management systems!). So if you only have 20 minutes to spend thinking about fair use instead of 60, you can get the audio from Inside Higher Ed while exploring the contents of Critical Commons.
I just got back from SCMS 2011 in New Orleans, where I presented a talk on reenactments of the JFK assassination drawn from Technologies of History as part of a panel titled Cultural Logics of Replay. Although these slides don’t include the video clips used in the presentation, the JFK montage is included in the media survey I cut together for the book. The panel generated some very engaged responses and was followed by a book signing hosted by Mark Williams and Dartmouth College Press. Thanks to everyone who joined us for the panel and the signing!
The webcast recording of my presentation to Educause Live! on February 25, 2011 just went online. Although the title of the talk, “The Future of Fair Use” may have been a bit oversold, it was an amazing opportunity to speak on behalf of fair use to hundreds of higher ed professionals nationwide. For those who don’t have an hour to spare, the basic message is that non-specialists (educators, librarians, media makers) can and should contribute directly to the shaping of an assertive, ethical future for fair use. Citing the groundbreaking work done by the Center for Social Media’s best practices guides, the presentation also highlights Critical Commons as a case study of a fair use-enabled platform for promoting digital scholarship, teaching and research. The presentation sparked a lively discussion among the Educause community and a huge spike in traffic to Critical Commons. Thanks to Steve Worona of Educause for giving us this opportunity!
Kathleen Fitzpatrick’s thoughtful post How To Index Your Own Book and Why I’ll Never Do It Again on ProfHacker sparked a very interesting debate over the merits of self-indexing vs. professional indexing of academic books. Coincidentally, her post appeared less than 24 hours after I had completed my own index and enthusiastically blogged about the pleasures I found in indexing. I responded to Kathleen’s post with a brief comment and link on ProfHacker, which prompted several responses by professional indexers that ranged from bemused condescension to reassertions of the value of proper indexing.
I realize now that I should have done a bit more to inoculate my characterization of indexing as a creative reinterpretation of a text against being perceived as naive or irresponsible. For me, indexing is clearly an extension of the fundamental information architecture of a book, similar to chapter breaks, sub-heads, tables of contents, image captioning and the ordering of a book’s contents, none of which is routinely turned over to professionals or software programs to be completed objectively. I would make a similar argument for the importance of typography and page design in a printed text, but that’s another discussion, and indeed we do routinely (and with mixed results) turn this part of the publishing process over to professionals. The index, however, is arguably the heart of a book’s information architecture and we know that the categories and presuppositions of knowledge systems are at least coextensive with, if not co-constitutive of, any scholarly endeavor. If an author is inclined to do so, thinking seriously about the index as a creative interface offers an important way of directly addressing the fantasy that readers (particularly in a digital age) follow a linear trajectory through a text from start to finish. I am not against professional indexing, which doubtless results in a more faithful rendering of a work’s contents and is probably appropriate for most books, it’s just that for me, this would constitute a lost opportunity to reinforce certain paths and associations in the text that I hope will be productive for readers.
I should say that much of my work for the past decade has been devoted to thinking about the potentials of scholarly interface, information design and what good can come of encouraging humanities scholars to explore the creative (not just practical) potentials of electronic publication. The Vectors Journal that I co-edit with Tara McPherson has been doing this with some success through collaboration between scholars and designers for the past few years, and we have now moved on to developing a platform called Scalar that encourages an even deeper reconsideration of scholarly publication and electronic argumentation. Both of these projects invite scholars to rethink their work in terms of database structures and the combinatoric possibilities they enable. The relational and/or semantic structures of databases open extremely productive avenues of possibility for some scholars and some works of scholarship, though clearly not all. Given my immersion in database-driven scholarship, interface design and cultures of remix, it was impossible for me to approach indexing as anything other than a welcome bridge between traditional text publication and the electronic publishing platforms that I now largely prefer.
I almost didn’t ask my editor if it would be possible to have Technologies of History published under a Creative Commons license. With many academic presses struggling economically and so much disinformation equating open publishing with communism or piracy (or both), such a request seemed ridiculously unlikely to be granted. To my surprise, the response of my editor at UPNE was curious and welcoming; he had heard of Creative Commons and knew that, although this book is about media and history, I am also deeply invested in issues of copyright and fair use. I wrote up an informal proposal, explaining that Creative Commons licensing did not mean giving the book away for free to everyone with an internet connection and why I believed it would ultimately help us to craft a more effective online marketing strategy. With the help of a former student who now works at Creative Commons, I also compiled a list of other academic presses and publications that have used CC licenses recently. He promised to run it by the suits and the law-talking guys at the press and, to my surprise, with just a few additional clarifications and reassurances (e.g., that the press could still collect royalties for parts of the book that might be republished in course readers, etc.), everyone was on board. Interestingly, initiating this conversation with my editor also sparked a discussion of ways to use CC licensing to reactivate older titles in their list that have stopped selling and, perhaps most importantly, it has made me feel even more invested as a partner in the marketing of the book, since I don’t need to feel like a total sellout for allowing a standard copyright notice to go in the front.
Although the book is not due out on shelves until the spring, I recently received a proof of the cover image for my forthcoming book Technologies of History that is too good not to share. Although academic presses have been known to be less than inventive with the design of book covers, the University Press of New England totally came through on this one. The real thanks go to Peter Brinson and Kurosh ValaNejad, the creators of The Cat and the Coup, from which the cover image is drawn. The striking visual style of the game comes from the Persian miniatures that Kurosh painstakingly created as a backdrop for the game, in which you play as the cat of Mohammad Mossadegh, the first democratically elected Prime Minister of Iran who was overthrown by a CIA sponsored coup. As it happened, I was just completing the final draft of the book as Peter and Kurosh were finishing the game last spring. I was amazed by how perfectly The Cat and the Coup resonated with the book’s focus on eccentric historiography and wound up using it as one of the centerpiece projects in my chapter on digital histories.
I have just completed the page proofs and indexing of my book Technologies of History, the final stages in a long and not entirely unpleasant process. Against the advice of my publisher, I created the index myself rather than hire a professional who is experienced and competent in such matters. Although this decision was initially driven by simple aversion to paying money for the service, I quickly recognized the process of indexing as coextensive with the creative and scholarly work in the digital realm that I have been focused on for most of the past decade. Specifically (obviously, now that I think about it), the process of indexing combines two of my core pleasures: interface design and remix. The index itself is, of course, an alternative interface, offering multiple points of entry and the possibility of non-linear navigation of the book’s contents. At the same time, it is a creative reinterpretation and visualization of the themes, people and works under discussion. Whereas pouring the book’s complete contents into the Wordle visualization engine brings mostly painful revelations (Am I really that obsessed with the JFK assassination? Should I try to find another word for “although”?), the distillation of the text according to concepts, sub-heads and page ranges suggests insights into my own writing patterns: seemingly fewer sustained discussions of complex ideas and primary texts than I would like; less precise parsing of terms such as “memory” than intended, etc.
On the other hand, I feel encouraged that the project has somehow never seemed boring or tedious, even as its arguments have grown overly familiar. Ironically, now that the book has finally been committed to its ultimate, linear form, I want nothing more than to subject it to the kind of dissection and recombination that is only possible via a fully digital, interactive database-driven platform (the kind of transformation to which Vectors has been devoted for the past six years). Is it any accident that the index, which is arguably the most “writerly” and hence most threatening aspect of text-based scholarship, should be politely relegated by academic convention to both the final stages of composition and the most extreme margins of the published book? For my next book, I will generate the index first and compel the written words and everything else to fall in line behind it.
Mark Williams organized this panel titled “Realizing Scalar Capacities To Transform Media Archives” with Erik Loyer, Craig Dietrich and myself, which was to be the first public debut of our work on Scalar, at the Reimagining the Archive conference at UCLA on November 13, 2010. Unfortunately, Mark was unable to attend but was ably replaced by Jackson Stakeman, who stole the show with an improvised VJ set using sampled video sequences from his project about Walter White, incubated during the NEH-funded Broadening the Digital Humanities seminar at USC last summer. You can download my presentation from the conference site as a PDF here; Erik’s slides are here.
Alternative Projections: Experimental Film in Los Angeles (1945–1980) is an extraordinarily ambitious three-day (Fri. Nov. 11-Sun. Nov. 13) symposium that focuses on the community of filmmakers, artists, curators and programmers who contributed to the creation and presentation of experimental cinema in Southern California. Co-organized and curated by Critical Studies professor David James, the event draws inspiration from his book The Most Typical Avant-Garde: History and Geography of Minor Cinemas in Los Angeles, and includes screenings of numerous rarely-seen films, videos, installations and performances right here at USC.
Of particular interest for the IMD community is the Single Wing Turquoise Bird Light Show on Saturday night (8-10PM) in Norris Theater and the Sunday afternoon (3-6PM) panel and screenings by members of the Oasis film collective that includes Morgan Fisher, Roberta Friedman, Amy Halpern, Tom Leeser, Beverly O’Neill, Pat O’Neill, Grahame Weinbren, and David Wilson; as well as the installation in the SCA Gallery of Side Phase Drift, a 1965 abstract three-screen performance projection piece by John Whitney Jr., in which each frame was composed of sets of images that were manipulated in form, color, superimposition and time.
Complete schedule is here.
My five-minute presentation sketching the origins, goals and context of Critical Commons for attendees at the 2009 Open Video conference.