Saturday, March 2, 2013

"Good Smart" and "bad Smart": What Smart Technologies Do and Don't

Everyone should read Evgeny Morozov's latest op-ed in the Wall Street Journal (via Alan Jacobs). Morozov takes on the latest smart social technologies and gadgets: technologies that, by virtue of cheap hardware, AI or crowd-sourced pattern recognition, and the possibility of making your activity visible to your friends and acquaintances, serve to change your behavior in some personally or socially optimal way. Examples: going regularly to the gym, eating healthier foods, or even (his chosen example) recycling the waste generated by a household.

Morozov, as one might expect, is not happy with this. He suggests an analytic distinction between "good smart" and "bad smart" technologies, which I think is genuinely useful in thinking about the recent spate of products that use the ubiquitous-computing paradigm for social ends:
How can we avoid completely surrendering to the new technology? The key is learning to differentiate between "good smart" and "bad smart."

Devices that are "good smart" leave us in complete control of the situation and seek to enhance our decision-making by providing more information. For example: An Internet-jacked kettle that alerts us when the national power grid is overloaded (a prototype has been developed by U.K. engineer Chris Adams) doesn't prevent us from boiling yet another cup of tea, but it does add an extra ethical dimension to that choice. Likewise, a grocery cart that can scan the bar codes of products we put into it, informing us of their nutritional benefits and country of origin, enhances—rather than impoverishes—our autonomy (a prototype has been developed by a group of designers at the Open University, also in the U.K.).

Technologies that are "bad smart," by contrast, make certain choices and behaviors impossible. Smart gadgets in the latest generation of cars—breathalyzers that can check if we are sober, steering sensors that verify if we are drowsy, facial recognition technologies that confirm we are who we say we are—seek to limit, not to expand, what we can do. This may be an acceptable price to pay in situations where lives are at stake, such as driving, but we must resist any attempt to universalize this logic. The "smart bench"—an art project by designers JooYoun Paek and David Jimison that aims to illustrate the dangers of living in a city that is too smart—cleverly makes this point. Equipped with a timer and sensors, the bench starts tilting after a set time, creating an incline that eventually dumps its occupant. This might appeal to some American mayors, but it is the kind of smart technology that degrades the culture of urbanism—and our dignity.
Image: the wired BinCam trash bin, with a camera attached to its lid.

What about BinCam, the product he opens his essay with?  BinCam is a trash bin whose lid has a camera attached.  When the lid is shut, the camera takes a picture of the bin's contents and uploads it to Amazon Mechanical Turk, where a Turker determines whether you've been putting recyclables into your trash; the photo, along with the Turker's assessment, is then published to your Facebook or Twitter profile.  The idea is that peer pressure, and perhaps some mild social censure, will make you better behaved -- "better" in the sense of being socially and ecologically optimal.
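(To make that pipeline concrete, here is a minimal sketch of the loop just described.  Every name below is a hypothetical placeholder -- this is not BinCam's actual code, and none of the calls are real Mechanical Turk or Facebook APIs; the stubs only mark the stages: photograph, crowd-judge, publish.)

    # A hypothetical sketch of the BinCam loop described above. These are
    # stubs marking the stages of the pipeline, not BinCam's real code or
    # any real Mechanical Turk / Facebook API.

    def capture_photo(bin_id):
        # In the device itself, the lid-mounted camera fires when the lid shuts.
        return "photo_of_bin_%s.jpg" % bin_id

    def ask_turker(photo):
        # A Mechanical Turk worker inspects the photo and answers one
        # question: did recyclables end up in the trash?
        return {"photo": photo, "recyclables_in_trash": True}

    def post_to_feed(user, verdict):
        # The photo and the Turker's verdict go to the user's Facebook or
        # Twitter profile, where friends can see -- and judge -- it.
        print("Posting to %s's feed: %s" % (user, verdict))

    def on_lid_closed(bin_id, user):
        photo = capture_photo(bin_id)
        verdict = ask_turker(photo)   # crowd-sourced pattern recognition
        post_to_feed(user, verdict)   # peer pressure does the rest

    on_lid_closed(bin_id=42, user="alice")

Note that nothing in the loop forbids anything: the device only observes, classifies, and publicizes, which is exactly why Morozov's verdict on it is interesting.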


You would think BinCam falls into the "good smart" category, but no; Morozov says it falls "somewhere between good smart and bad smart."
The bin doesn't force us to recycle, but by appealing to our base instincts—Must earn gold bars and rewards! Must compete with other households! Must win and impress friends!—it fails to treat us as autonomous human beings, capable of weighing the options by ourselves. It allows the Mechanical Turk or Facebook to do our thinking for us.
I think Morozov's concerns about surveillance are really useful.  But he lost me with this paragraph.  Since when did it become a "base instinct" to win and impress friends?  If someone buys BinCam to help herself adhere to certain recycling conventions, how is that different from someone who uses her friends to police her diet?  I think the key to understanding the paragraph is the reference to Facebook and Mechanical Turk; those are the two technologies that make Morozov uncomfortable. And there is the fact that the behavior in question here is less useful individually than collectively.  Whether I recycle my trash or not has fewer consequences for me than it does for the society I live in (unlike, say, dieting or exercise, although one might argue that even these two activities have a "social" dimension: they help bring down the high cost of health care).  But recycling has another aspect, more so than dieting: it is a behavior whose template is created by experts.  And it is precisely this -- aligning my behavior to a template decreed by experts, and monitored by my friends -- that is, for Morozov, an unacceptable loss of autonomy.

I am not sure I buy this.  And it highlights, I think, one of the interesting points of similarity between critics like Morozov and Nicholas Carr: the normative use of the Cartesian subject.  For Carr, humans have a deep need for solitude; in fact, solitary reflection (exemplified by deep reading) is what makes us most deeply human.  And the Web, by its very constitution, forces us away from this; it forces us into multi-tasking, into skimming, and into a form of constant sociality through Facebook and Twitter.

Morozov's concerns are different, and I think far more politically salient, than Carr's.  But for him too, the most deeply human thing about us is our freedom and our autonomy--not just from state surveillance (a form of "negative liberty"), but also from certain forms of "base" sociality.  And so, while I find the "good smart" and "bad smart" distinction really useful, I suspect the devil is in the details.
