embodied computing
i’ve often talked about my desire to build (or at the very least to see) ways of computing that are more humane. over time, however, the term humane computing has become associated either with ai (a la the now infamous humane ai pin) or with combating the practices of modern social media platforms (a la CHT).
with that in mind, i think it’d be more useful to provide an alternative term to refer to what i see as an imperative for future means, methods, and interactions for computing: creating computation that is embodied.
note that i said embodied rather than immersive. people might immediately think of something like VR as an answer to what i’ve described. but VR is more interested in displacing you into another world entirely, thereby also displacing your relationship to your own body in space.
what about AR? nominally it gets us closer. but AR and VR both operate on an inside-out principle: the center of the computing experience is the individual, and a litany of screens or screen-like technologies suffuses the user’s experience of the world. digital information is overlaid atop the real world. but how strong is the relationship between the two? which one ultimately overrides the other as the technology grows and becomes more sophisticated?
i often think of some of (Paula Scher’s?) comments in an interview with a publication: how her time in front of screens has meant that relaxing involves getting away from them and engaging with physical craft, like painting. and i think about how oddly backwards our relationship with our computers is. in spite of all of our user-centered approaches, we somehow, in the big picture, ended up sitting in front of a screen, tapping away at buttons for hours at a time.
malleable computing is an area of research driven by the idea that computers ought to adapt to our way of thinking, rather than expecting us to conform to the means and methods provided by rigidly structured, siloed applications. i think this imperative is well-founded; it puts into context some of the experiences i have had personally in building this site, and could be seen as a parallel stream in the desire for reshaping our contemporary computing experience.
but why should this stop at the level of the visual interface, and primarily with intellectual work? is it not also that we are physical, mortal beings, with backs that can ache and muscles that can strain? and more than accounting for our physicality at the level of ergonomics, is there some way for us to take advantage of our multiplicitous senses and ways of moving through and interacting with the world?
to get a better look at something, we tilt and crane our necks. we look around it. move back, move forward. when we’re happy, we do a little dance. we high-five. we inspect things with our hands as much as our eyes. tapping on an object close to our ears tells us whether or not something is hollow, or metal. more than anything else, engaging with the physical world around us grounds us in ways digital computing experiences do not.
i do not want computing that is personal. the obsession with self has actually destroyed our sense of wellbeing. i want computing that feels more like an appliance, that could be used by you or by others. i want computing that you can walk away from; i want computing you can forget about.
ultimately, it could be said that i am interested in the type of humane computing that is being built at Dynamicland, or by folk. and that statement would be largely true in spirit. but both projects put a heavy emphasis on making software structure transparent to the user, which, while granting significant advantages in educational or research contexts, ultimately requires specialized knowledge and limits their application to niche uses. something like the projects worked on at BERG is closer: an appliance-like apparatus that allows computing to integrate into ambient contexts.
but even further than that, there is something more foundational to computing that none of these approaches address: at its heart, computing technology is linguistic. it relies on signs and symbols to convey and represent information. this is the founding rationale for why we type on keyboards, why we look at displays, and why technology can displace us from the experience of the physical world. digital computing technologies operate on a principle similar to paper: they provide a surface, a blank page, onto which other linguistic and representational technologies can operate.
the concept of material literacy captures the gap between these symbolic systems and more tacit, intuitive knowledge: you cannot fully understand the properties of a fabric until you hold it in your hands and find its strength and give, the way the weft affects a drape. material science can provide information for engineering purposes, but even the most complete linguistic description remains a poor substitute for actually handling and working with something in real life. knowing everything about a material intellectually does not transfer to handling it intuitively; that can only be gained through direct manipulation and interaction.
is there a way for us to build systems that integrate intuitive sense-making?