Category: augmented


Visual Language (Encyclopedia Pictura)

October 26th, 2011 — 05:18 pm

In my thesis, I state:

“I think visual language evolution is on a trajectory toward becoming a real-world object. The shape of these letterform objects might correspond to embodied structures: visual analogs of mathematics that arise from the acoustic resonance inside our bodies. It can be argued that much of proportional aesthetics (theories of the golden mean, symmetry, etc.) arises from embodiment, evolutionary activity over millennia etching patterns in physiognomy.

What I am suggesting is that innate shapes (geometry or topology in Thom’s terms) already exist for letterforms. They implicitly underlie our oral audible language, they are subconscious sculptures intuited from the shape of diaphragm, larynx, mouth, lips and tongue. They have been etched there by speaking. Some shapes are personal, some shapes are cross-cultural. Yet it is these shapes and vibrational presences that are being given birth and dimensional form within 3D animation, ads, and digital poetry.”

Computation provides us with unprecedented tools to implement such a vision. Perhaps the most fundamental agreement with my viewpoint comes from an unusual source: Encyclopedia Pictura are a trio of motion-graphic artists who have made extraordinary music videos for artists such as Bjork and clients like Spore.

Near the bottom of their website menu is a discreet link to a page devoted to visual language: a set of drawings and, eventually, doodles that outline their vision for an augmented-reality application utilizing morphological text relationally appropriate to the sound of the speaker’s voice.

In other words, they propose precisely what I have advocated in my thesis and worked toward with works like Human–Mind–Machine. Except they have actually gone further, providing one-to-one relationships between sounds and candidate shapes.


The Future: Augmented Walkabouts

March 9th, 2011 — 09:19 pm

“Playable text had earlier been achieved by interactive video installation – Tom White and David Small’s Stream of Consciousness (1998) and Camille Utterback and Romy Achituv’s Text Rain (1999) – but in the Cave environment, raining, or swarming, text becomes truly volumetric.”
— Rita Raley, “Writing 3D,” special issue of the Iowa Review, Sept. 2006

CAVEs are expensive and it is unlikely they will achieve market penetration. Cellphones, on the other hand, are cheap and rapidly becoming ubiquitous. And if the screen-size trend (identified, as far as I know, by Bill Buxton) toward wall screens (big) and handhelds (small) continues, it is reasonable to assume that some (that is to say: lots of) digital writing will become mobile, geo-locative and ultimately augmented. Narratives will superimpose themselves over normative reality. There are numerous examples of geo-locative narratives done with audio (Janet Cardiff, Murmur, Teri Rueb, etc.), and the artist BLUESCREEN did a piece where fictions could only be read at specific locations. What I want to discuss briefly here is a foreseeable form of mobile literary immersion in which the reader moves freely around, finding phrases that can be both seen (superimposed as if extant) and heard; literature that can be played and that plays out (like Blast Theory, but with augmented reality on a cellphone) as if it were real.

Augmented reality is a subset of what I call the assimilation of text by image. Imagine, for instance, that I place GPS-triggered text over every road sign in my neighbourhood; readers who point their cellphone cameras at these signs will see this new text superimposed as if it were there. There is already an augmented app for mobile devices that background-subtracts, compensates for light, adjusts for viewing angle (emulating perspective), and incorporates the text directly over the actual objects: Word Lens. As of this writing, Word Lens simply translates between Spanish and English; future versions and spin-offs will obviously become writing tools that enable authoring onto the city, writing onto the surface of reality. Imagine (with faster processors and better cameras) Word Lens functionality wed to Layar, an augmented-reality app that allows authors to create GPS-specific overlays of cities accessible through cellphones. It echoes the vision of billboard poet and QR-code visionary Giselle Beiguelman, who in Issue 1 of Emerging Language Practices (April 2010) re-expresses what she has written about before: “Mobile Tagging is a phenomenon directly related to the popularization of mobile telephony and the popularization of QR-Codes. It is a kind of writing practice for the reading to be held in transit, based on a bidimensional bar code – QR-Code (Quick Response Code). In other words, it is nomadic writing for expanded reading.”
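The triggering half of this scenario — deciding which geo-anchored phrases a reader standing at a given spot should see — can be sketched in a few lines. This is a minimal, hypothetical Python sketch, not any actual Word Lens or Layar API; the anchor coordinates, phrases, and the 30-metre trigger radius are all invented for illustration. It uses the standard haversine formula to measure great-circle distance between the reader and each anchored text.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (haversine)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geo-anchored phrases: (latitude, longitude, text).
ANCHORS = [
    (45.5231, -73.5817, "this sign once said something else"),
    (45.5250, -73.5900, "nomadic writing for expanded reading"),
]

def visible_text(reader_lat, reader_lon, radius_m=30.0):
    """Return the phrases whose anchors lie within radius_m of the reader."""
    return [text for lat, lon, text in ANCHORS
            if haversine_m(reader_lat, reader_lon, lat, lon) <= radius_m]
```

A reader standing at the first anchor would receive only its phrase; a reader elsewhere in the world would receive none. A real overlay app would add the camera and perspective-matching layer on top of this trigger logic.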

Not only will this expanded reading alter the accessibility of reading, it will certainly accelerate subtle shifts in perception about text, destabilizing notions of where it is, who wrote it, and how it can be shared. It seems safe to assume that it will become increasingly difficult in upcoming eras to differentiate between inscription traces that originate in matter and those that emerge from remote display processes. Writing will detach from the womb of matter even as it paradoxically becomes more location- and viewer-specific.

Postscript Update: Two contemporary AR practitioners are of note. The poetic short-story writer and future cinema researcher Caitlin Fisher has her AR piece Requiem online — I saw Requiem at DAC in 2009 and was charmed by its warped nostalgia and mutant pop-up-book appeal. The other practitioner of note is poet and MIT post-doc Amaranth Borsuk, whose AR work Between Page and Screen (also online), exhibited at the 2010 ELO conference, was both technically superb and evocative.

