The New Media Soup
Some thoughts on newer technologies and the visual arts

by John Antoine Labadie

For decades now computers and related technologies have, in many ways, made the lives of some of us more comfortable, convenient and complete. Computers have undeniably revolutionized everything from astronomy to the visual arts. The art works included on the Museum of Computer Arts (MOCA) website are more than sufficient evidence of this radical change. But there is far more for us to consider than what is presented on any single website or even on the net as a whole. Overall, it is clear that high technology in the form of digital computing has, in some very important ways, served some of us well since its introduction in the third quarter of the last century. For example, there are many workers in industries or at jobs that are inventions of the data-rich, computer-driven, post-postmodern era. We are saturated in computer-centric "information."

Let us review some of the more salient considerations of our time: Can the proverbial tables be turned in this new century? Do our computers really serve us well, or have "they" hijacked our collective destiny? Are we, as some futurists suggest, only a few years from the end of the human era? Is our highly technological world at the point at which (a la "The Terminator") a set of runaway technologies commandeers the future? Before turning to the digital arts and a discussion of new media, here are a few fascinating examples of how digital technologies have brought us into a present that few could have ever imagined.

Consider the following scenario: in 1997 IBM Corporation's "Deep Blue," an RS/6000-based parallel computer, defeated world chess champion Garry Kasparov 3.5 points to 2.5 points. What was Kasparov up against? Deep Blue chose moves via an algorithm that evaluated the strength of chess positions by material (the number and value of each player's remaining pieces), position, king safety and tempo. The search algorithm was able to choose profitable-looking lines of play and search them "deep," several moves ahead. Deep Blue also contained a preprogrammed database of chess information, including more than 2,000 opening moves.
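Deep Blue's actual evaluation hardware and search code are proprietary, but the idea the paragraph describes, scoring positions with a static evaluation function and searching several moves ahead, can be sketched in a few lines. The following toy Python example is an illustration only; the piece values and the tiny game tree are invented, and a real chess engine would add alpha-beta pruning, quiescence search and far richer evaluation terms.

```python
# Hypothetical piece values for a simple material-counting evaluation term.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material(our_pieces, their_pieces):
    """Material term: sum of our piece values minus the opponent's."""
    return (sum(PIECE_VALUES[p] for p in our_pieces)
            - sum(PIECE_VALUES[p] for p in their_pieces))

def minimax(node, maximizing):
    """Depth-limited minimax over an explicit game tree.

    A leaf is a number (the static evaluation of that position);
    an internal node is a list of child positions."""
    if not isinstance(node, list):   # leaf: return its evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Example: a two-ply tree. The maximizer picks the branch whose
# worst-case reply (chosen by the minimizer) is best:
# branch one guarantees min(3, 5) = 3; branch two only min(2, 9) = 2.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))
```

Deep Blue's advantage was not a cleverer principle than this but raw scale: custom chips applying such an evaluation to as many as 200 million positions per second.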

It has been widely reported that the world chess champion was quite impressed with what he experienced. In fact, some time after he had been defeated by the computer, Mr. Kasparov mused that he had perhaps confronted God. "I met something I couldn't explain ... people turn to religion to explain things like that." In a literal sense (perhaps) this was hyperbole, as the matter in question was not exactly a cosmic mystery. The Russian had played a 1.5-ton configuration of more than 500 processors which considered as many as 200 million moves a second in order to beat him. The very human Kasparov, evaluating at a rate of perhaps two or three moves a second, won one game and tied three in the six-game contest. The final outcome of such competitions does not, over time, seem much in doubt. Numbers matter, and they are the basis of all digital computing ... no matter what the output may be. Consider Moore's Law, stated in 1965 by Gordon Moore (who went on to co-found Intel Corporation), which in its popular form posits that computer performance will double roughly every 18 months. This means that today's notebooks are exponentially more able than one of the granddaddies of all electronic digital processors, the Atanasoff-Berry Computer. Built during World War II by John V. Atanasoff and Clifford Berry, the station wagon-size machine had a storage capacity of less than 400 characters and performed one operation approximately every 15 seconds. Some 50-plus years later, existing experimental machines are capable of "teraflops" performance: one trillion floating-point operations per second. Such processors have outpaced even that projection. Machines 1,000 times faster are on the digital horizon: petaflops are anticipated within a few years.
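The doubling claim is easy to check with back-of-the-envelope arithmetic. The short Python sketch below uses the essay's own illustrative figures (about one operation per 15 seconds for the Atanasoff-Berry Computer, roughly 55 years to the teraflops era), not measured benchmarks, and assumes the popular 18-month doubling period:

```python
def doublings(years, period_months=18):
    """Number of doublings in `years` at one doubling per `period_months`."""
    return years * 12 / period_months

# Starting point: roughly 1 operation per 15 seconds.
abc_ops_per_sec = 1 / 15

# Project 55 years of 18-month doublings forward.
projected = abc_ops_per_sec * 2 ** doublings(55)
print(f"{projected:.2e} ops/sec projected after 55 years")
```

The projection comes out to only a few billion operations per second, short of the trillion per second that real teraflops machines deliver, which illustrates the essay's point that actual high-end hardware has outrun even this aggressive doubling schedule.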

Humans are makers and users of whatever they can invent or locate to help them deal with both natural forces and the forces of the world we keep inventing. Restive technologies have always been a force in human history. From ancient times and the introduction of pyrotechnologies to the ugly realities of nuclear energy, humans have invented things that are difficult to control. In the 21st century, high technology seems about to reorient human culture. For example, in Silicon Valley, where smaller equals faster, nanotechnology (engineering on the molecular level) is pushing things even further down the structural ladder. In "Engines of Creation: The Coming Era of Nanotechnology" (1986), K. E. Drexler explains how we will eventually be able to create almost any arrangement of atoms we desire. In this way, nanotechnology will further reduce the size (and increase the speed) of computers. Drexler predicts nano-supercomputers smaller than grains of sand. Imagine swarms of nano-scale cell-repair cruisers moving through the body, identifying faulty cells and repairing abnormal (or aging?) DNA. And what then? Already, consumer-grade products using digital technologies are getting much smarter: fuzzy logic washing machines can determine how much water to let in based on how dirty your clothes are, and "shape memory" eyeglass frames return to their original form when run under hot water. Even so, in 2002, it still takes human intelligence to conceptualize such clever uses for innovative materials and technologies. Some futurists conjecture that sometime before 2035 a computer somewhere will be nudged into consciousness and suddenly "wake up" to find it is capable of performing the processes now exclusively the domain of the human brain. That computer will have found computing's Holy Grail of awareness: a condition we term "intelligence." After that, many things will quickly get very interesting.
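The fuzzy logic in those washing machines is a concrete, modest technique, not science fiction. The minimal Python sketch below shows the core idea: instead of hard thresholds ("dirty" vs. "clean"), a measured dirtiness level belongs partially to several overlapping fuzzy sets, and the machine blends each rule's output accordingly. The membership functions and water amounts here are invented for illustration, not taken from any real appliance.

```python
def triangular(x, lo, peak, hi):
    """Degree of membership (0..1) of x in a triangular fuzzy set."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def water_liters(dirtiness):
    """Blend the rules' outputs, weighted by how strongly the measured
    dirtiness (0..1) belongs to each rule's fuzzy set."""
    rules = [
        (triangular(dirtiness, -0.5, 0.0, 0.5), 20),  # "lightly soiled"  -> 20 L
        (triangular(dirtiness, 0.0, 0.5, 1.0), 40),   # "moderately soiled" -> 40 L
        (triangular(dirtiness, 0.5, 1.0, 1.5), 60),   # "heavily soiled"  -> 60 L
    ]
    total = sum(weight for weight, _ in rules)
    return sum(weight * liters for weight, liters in rules) / total

# A dirtiness of 0.25 belongs half to "light" and half to "moderate",
# so the blended answer falls smoothly between 20 L and 40 L.
print(water_liters(0.25))
```

The appeal of this scheme for appliances is that behavior changes gradually as the sensor reading changes, with no abrupt jumps at arbitrary cutoffs.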

In this regard, it has also been suggested that such "smart" machines will be reproductive ... creating smarter machines, which will build yet smarter ones, ad infinitum. Technological progress would then explode, swelling superexponentially almost overnight toward what seers call the "Singularity." The term comes from mathematics, where it denotes the point at which a function goes infinite; it was popularized in the science fiction novels of Vernor Vinge. He thinks of it this way: if we make machines as smart as humans, it is not difficult to imagine that we could make, or cause to be made, machines that are smarter. After that we could plunge into an incomprehensible era of "posthumanity." In 1993, in a lecture delivered at a NASA-sponsored symposium, Vinge suggested that "I'll be surprised if this event occurs before 2005 or after 2030." In the same forum he asserted, "I believe that the creation of greater than human intelligence will occur during the next thirty years." If this futurist prognosticating is accurate, we are on the cusp of this era.

On the other hand, many futurists are not worried about the concept of the Singularity because "techno-prophecy" is almost always wrong. Edward Tenner, in "Why Things Bite Back: Technology and the Revenge of Unintended Consequences" (1996), suggests that almost nothing regarding the effects of technology has ever been predicted with any accuracy. Moreover, in many cases, an innovation that solves one problem winds up creating another. For example: the development of the plastic soda bottle, which when discarded lasts practically forever, addressed the hassle of "returns," the re-washing of bottles and breakage, but now litters garbage dumps; and high-tech improvements in football gear designed to prevent injuries instead allowed for more aggressive play, which in turn caused serious injuries to increase. One can only imagine what engineers were (not) thinking while inventing the leaf-blower or the jet-ski.

So what does all this high technology mean and where does it lead us? We simply don't know. We don't know whether technology will eventually convey us to the Singularity or more safely house some of us in the very sanitary suburbs of the future. We don't know whether to regard it as inherently benign, treacherous or transparent. And one might ponder the entire issue itself, considering the fact that perhaps 90 percent of the world's people have no telephone. Which side of the technological divide is the more disadvantaged remains to be seen.

But what of computers and art? A prime question might be: what exactly is "digital art," and by what criteria shall it be judged? Well, a digital work is, by definition, composed on or translated by or through a binary computer. A digital work is, collectively, a carefully defined set of "0"s and "1"s which have been used to encode data into files that can contain, for example, text, audio or visual information. A 35mm slide, once scanned through a "slide scanner," can be "digitized" according to the inclinations of the equipment operator and then immediately printed on a "photo-realistic" inkjet printer at a level of quality rivaling that of most any camera store. But is the product of this process a photograph? Good question.
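The "0s and 1s" claim is literal, and a tiny Python example makes it concrete: the same binary encoding idea underlies a character of text and a single pixel of a scanned slide. (Real file formats such as JPEG or TIFF add headers and compression on top of this raw representation.)

```python
# A character of text: its ASCII/Unicode code point, written as 8 bits.
char = "A"
char_bits = format(ord(char), "08b")   # code 65 -> "01000001"

# One pixel of a scanned image: red, green and blue intensities,
# each stored as 8 bits, for 24 bits per pixel.
pixel = (255, 128, 0)                  # an orange pixel
pixel_bits = "".join(format(channel, "08b") for channel in pixel)

print(char, "->", char_bits)
print(pixel, "->", pixel_bits)
```

A full scanned slide is nothing more than millions of such pixel triplets in sequence, which is why the operator's choices (resolution, bit depth, color handling) so thoroughly shape the resulting "digitized" work.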

With the possibilities offered by computers, peripherals and software, those who become competent with these new and ever-evolving technologies can make or alter images in ways never before available. Many artists and art critics agree that once visual information is converted into binary code, it is possible to produce original images that are as visually and aesthetically stunning as those produced through any other medium. Digital imaging is simply another way to communicate visually and artistically, and perhaps one of the means to carry us into brave new worlds in the arts. The most recently developed of these worlds is represented by "new media" works.

What is "new media" anyway? The following definition was stitched together from various contemporary sources and cross-checked with a number of colleagues involved in these areas and disciplines. Our collective definition is as follows: new media is, for the most part, a generalized, "catch-all" designation meant to encompass the many forms of electronic communication that have appeared (or will appear) since the introduction of online communication, which originally consisted mainly of text-and-static-picture products. By default, a definition of new media most often includes any and all modes of moving digital and electronic information from one source to another: special audiovisual effects of any kind; displays larger than 17 inches; streaming video and streaming audio; 3-D and virtual reality environments and effects; highly interactive user interfaces (whether or not they include mere hypertext); mobile presentation and computing capabilities; any kind of communication requiring high bandwidth; CD and DVD media; telephone and digital data integration; online communities; microdevices with embedded systems programming; live Internet broadcasting (aka streaming); person-to-person visual communication; and one-to-many visual communication, along with applications of any of these technologies in particular fields, such as medicine (telemedicine).

The term "new media" is itself fairly new and bears another attempt at a workable definition. New media can be described as a product or service that combines various elements of computing technology, telecommunications and intellectual content in a way that permits interactive use by a consumer, receiver or user. The sense I want to construct here is that new media is, or can be, both a service and a product that is typically under (some sort of) computer control and often allows the end user selective interaction with that which is being delivered. In much of what I have seen exhibited as "new" media, interactivity has been presented as a critical element of the works. Such definitions often also acknowledge the impact of the Internet and the Web, and stress the role of telecommunications in possible services and products.

In terms of defining interactivity, web sites are sometimes good examples of new media, both because such constructs are accessible only through telecommunications technology and because web design invariably incorporates a variety of media, including text, audio and animation, into a final product. It is clear to even casual users of the web that the power of the medium derives partly from this ability to engage so many senses.

Perhaps even more powerful is the possibility offered by new media products of an experience shaped, or perhaps even formed, by the user. In this way new media, unlike a traditional form such as the book, does not typically offer a single point of entry to, or exit from, the work. Users are often presented with numerous possibilities for entry, and these choices can then influence the options made available when exiting the product. In this way each encounter with the product can be crafted to be substantially different from any earlier session, and a user's interactivity to some extent defines the experience.

Additionally, in my experience users of the term "new media" quite often emphasize the visual design aspects of the newest digital technology experiences. In this way the traditional field of design (both 2D and 3D) has been moved into the twenty-first century and mutated by forces in business, the sciences and the arts. For example, those who design new media or multimedia works conduct their discourse using newly coined operational terms. Here are some I gleaned from course descriptions in online catalogues from institutions in Philadelphia, New York, Chicago, Los Angeles and other locations: sequenced media, time-based media, aural design, information engineering, interface design, interactive design, 3D computer modeling and animation, and motion graphics.

How would one study to do such things? Browsing the net took me to the Otis College of Art & Design in California. Their program in "digital media" describes itself as offering students the opportunity for "...finding, developing and combining individual visions of art with technical skills for visual effects, broadcast design and 3D animation." Here is an outline of the Otis curriculum:

1. 2D - Image creation and manipulation, typography, text as image and layered composition work to create art for use in animation, digital video, feature films, interface design and Internet design.
2. 3D - Once the exclusive preserve of high-end workstations, high-resolution 3D image synthesis can now be done on the desktop. Our program concentrates on character design and animation as it extends into the creation of virtual sets, props and vehicles for movies, television, gaming, the Internet and interactive media.
3. Motion Graphics - The way something moves is a vivid expression of its personality. Motion may be created within the computer or sampled from the real world and reapplied in many contexts.
4. Interactive Design - This aspect of the program trains students to engage an audience through creative and intelligent interactive design. An artist must expand his or her abilities to include storytelling skills.
5. Web - The Internet, in the form of the World Wide Web, is becoming the universal interface. It serves as post office, mall, meeting place, game environment, multimedia distribution medium, and publishing venue. The rapid growth of bandwidth has allowed the integration of sound, image, animations, and video into complex, scripted interactive 'space' on the average Web site.

This is formidable stuff indeed. Now combine this advanced and very contemporary curriculum with a heightened awareness of what might be done with digital technologies and the commitment of the Otis institution to work with business firms and we have the following statement (from the same webpage): "Not since the Renaissance has art played such an important role in commerce. Don't tell your children to grow up to be cowboys, or doctors, or lawyers. Tell them to be digital artists, because the salaries and opportunities are incredible." Yes, it would seem that commercial work, some forms of entertainment, and perhaps even the realm of the sciences will benefit and evolve with new media as a significant change agent.

But back to the arts. What is the difference between the gallery work we all grew up with and the products of digital, and now new media, experimentation?

Many theorists write on the subject of new media. For me, one of the more effective of these is Lev Manovich. Consider the difference between, as Manovich has suggested in "The Language of New Media" (2001), the " ... dichotomy: an art object in a gallery setting versus a software program in a computer. On entering an exhibition of media art we encounter signs that tell us that we are in the realm of Art: the overall exhibition space is dark, each installation is positioned in a separate, carefully lit space, each accompanied by a label with an artist's name. We know well what to do in this situation: we are supposed to perceive, contemplate, and reflect. Yet these initial signs are misleading. An exhibition of media art points us to very different cultural settings such as a computer games hall or an entertainment park (in each of these one often has to wait in line before getting a chance to 'try' a particular exhibit) and also to a different type of cultural object (and, correspondingly, a different set of behaviors) -- a software program in a computer. In approaching a media artwork, we typically discover some elements of standard human-computer interface (a computer monitor, a mouse; arrows, buttons and so on); we have to read instructions which tell us how to do it; we then have to go through the process of learning its own unique navigational metaphors. All in all, the behaviors which are required of us are intellectual problem solving, systematic experimentation and the quick learning of new tasks. Is it possible to combine these with contemplation, perceptual enjoyment and emotional response? In other words, is it possible to experience the work aesthetically while simultaneously learning how to 'use' it?"

What do art critics and professors of new media think of such exhibitions? Of course, no two locations present the same sorts of works or experiences, but let us take a look at what a search for such critiques on the web brought me today (January 14, 2002). From the Wired News site these cogent observations were gleaned about a single exhibition by artist Tom Kemp. Each of the writers dealing with the exhibition was considering a work that focuses on the technology of the handheld Palm computer. Kendra Mayfield, of Wired, accurately suggests that, "Artists have long toyed with the latest technologies to create pioneering works of art. In that tradition, Tom Kemp has created what he calls the first 'serious' contemporary artwork produced entirely on the handheld Palm computer. Putting aside the merits of Kemp's specific work, his claim begs a larger question: Is artwork 'serious' simply because it has been done using a previously unexplored medium?"

When considering this same exhibition, Peter Lunenfeld, who teaches in the graduate program in Communication and New Media Design at the Art Center College of Design in Pasadena, California, shared these thoughts, "While such pioneering work is often interesting, the question is whether novelty alone is a useful criterion for art or merely a great excuse for talking about technology. Not everyone thinks that novelty is enough."

On this same subject Benjamin Weil, curator of media arts at the San Francisco Museum of Modern Art, also shared his views, "What difference does it make if it has been produced with a Palm or not? I think that deeming it the 'first serious work of art' is somewhat preposterous. I am really suspicious of techno-driven and techno-celebrating projects that desperately seek to be called art. Art is about ideas, not about technology. I would therefore suggest we stop being techno-fetishist, and getting all excited at the gizmo-ization of a practice that is obviously more than just gee whiz!"

What does the artist himself have to say? Although it is clear that other artists have previously created palmtop computer artworks, Tom Kemp insists that his piece "Analysis" is different from these other Palm-based works. "There are a lot of paintings [done on the Palm]. Some have been immaculately crafted," Kemp said. "But they are exhibitions of skill, not necessarily exhibitions of art."

In my estimation, the core question is not at all "What do we do about such new media work?" Perhaps it could be better phrased as, "How can such possibilities be incorporated into what we already do?" As printing and publications standards are already in use (and evolving) in the graphic design industry, the question becomes "How does digital work benefit both the producers of the art in question and the consumers of said work?" Make no mistake, digital is here and it IS revolutionary, unprecedented and marvelously powerful. Even so, digital technology, taken as a whole, is nothing more, or less, than the tool we make of it. Certainly all artwork is interpretive, and digital imaging (in its myriad forms) is the first truly new and unprecedented interpretive tool since the introduction of photography in the 1820s. Given this background, in my view artistic efforts in any medium should be unhindered by critical disapproval that derides works accomplished by a means with no historical precedent. As Nicholas Negroponte, co-founder of the MIT Media Lab, has written in his best seller "Being Digital," the most facile future users of digital technologies will "live digitally." My suggestion: thoroughly investigate the possibilities now available to us and enjoy the products of this tasty new digital soup.

John Antoine Labadie
Computer Graphics and Media Integration
Art Department
University of North Carolina at Pembroke