In the philosophy of technology world, there are quite a few theories and descriptions of technology. In an attempt to simplify (and most likely butcher) the dozens of highly nuanced views, I want to use philosopher Andrew Feenberg's two helpful questions that categorize four of the major views and then test them with Twitter. Again, this is a vast over-simplification, but I hope it will shed some light on the important – but often rather obscure – discussions happening among the big thinkers.
1. Do humans control technology (to some degree)?
Most would answer “yes” to this question, but there are those who believe that technology is “autonomous” in that it operates independently of what people want from it. Technological determinists look at history and see technology as the primary force in shaping social institutions like government, church, and the family. Many of the popular articles that talk about “technology making us dumber” inadvertently fall into this category.
2. Does technology have built-in values?
Most people answer, “no,” to this question because they tend to believe that they can use technology however they want with no effects on themselves. In more complex terms, they separate the means (technological tools) from the ends (what they want to do with it). As long as the ends are good, they don’t think the means (which technology we use) has any effect.
In contrast, those who answer “yes” would say that means (tools) and ends (goals) are intimately connected and related. We don’t just use technology for ends, because when we use a particular technology it works like a filter or a lens of values that we apply to the rest of our lives.
Charting out the Answers
According to how you answer these questions, you will fall into one of four philosophical categories.
Can humans control technology (to some degree)?

| Does technology have built-in values? | No: technology is autonomous | Yes: technology can be controlled |
| --- | --- | --- |
| No: technology is "neutral" | Determinism: technology is the driving force of history to make things better | Instrumentalism: technology can be used for anything we want, and it does not affect us in any way |
| Yes: the means (tech) are connected to the ends (values) | Substantivism: technology is the driving force of history, but it doesn't always make things better | Critical Theory: technology is a driving force of history, but humans can control and reshape both technology and history |
Theories of Technology
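For readers who think better in code than in charts, here is a minimal sketch (my own illustration, not Feenberg's) of how the answers to the two questions map onto the four theories:

```python
def categorize(humans_control: bool, has_built_in_values: bool) -> str:
    """Map answers to Feenberg's two questions onto the four theories.

    humans_control: "Can humans control technology (to some degree)?"
    has_built_in_values: "Does technology have built-in values?"
    """
    if not humans_control and not has_built_in_values:
        return "Determinism"       # autonomous and neutral
    if not humans_control and has_built_in_values:
        return "Substantivism"     # autonomous and value-laden
    if humans_control and not has_built_in_values:
        return "Instrumentalism"   # human-controlled and neutral
    return "Critical Theory"       # human-controlled and value-laden
```

Two yes/no questions, four possible combinations, four theories.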
If you’ve made it this far, you’re doing great for a blog post. Hang with me, we’re almost to Twitter.
Determinism (technology is autonomous and neutral) &
Substantivism (technology is autonomous and value-laden)
For simplicity, I’m grouping these two views together because they both argue that technology exerts control over society. The main difference is that Technological Determinism sees technology as neutral and therefore does not make judgments about the effects that technology has. It tends to see technology as purely hopeful in bringing society to a better place. In contrast, Substantivism does judge technology’s influences and concludes that its effects are not always positive.
Instrumentalism (technology is human-controlled and neutral)
This is probably the unstated assumption of the majority of today's technology users. Instrumentalists critique Determinism and believe that technology should only be thought of in terms of how people use it, with no consideration of the technology itself. This leads to statements like, "Guns don't kill people, people kill people." Unfortunately, this view is fairly naive in that it fails to consider why there are differences between societies with guns and without guns, and it cannot explain why kids who watch hours of television from a young age have trouble concentrating.
Critical Theory (technology is human-controlled and value-laden)
Critical theory is somewhere between the extremes of Instrumentalism (technology is purely neutral) and Determinism (technology controls everything) because it sees technology and social practices as developing in tandem and influencing one another. The means (technology) affects the ends (human purposes), but human purposes (ends) also influence what technologies (means) are developed. Technologies are not fully neutral in that they contain the values of their makers and values inherent in their design, but technology is also not in control of society. For example, the bias towards efficiency in today’s technology is a reflection of capitalism which developed in tandem with it over the last several centuries.
Let’s Apply the Theories to … Twitter
What good are all these theories if we can’t use them with our favorite new technology – Twitter? (See similar posts on McLuhan and Crouch)
Twitter was designed to integrate with SMS and alert friends to what other friends were doing. But beyond this original design, Twitter (a means) has spawned cultural trends and has even altered how news outlets report stories and interact with audiences (ends). So does Twitter support Determinism/Substantivism? Well, not quite, because it is the users of the system who collectively shaped it, inventing things like @replies and retweets that were not originally part of the design.
But then is Twitter fully value-neutral? Many people have pointed out that Twitter's main question, "What are you doing?", can influence users to be self-focused. Its design also inherently values short, 140-character thoughts rather than developed arguments. Because of these values, we cannot call it purely neutral and we cannot agree with Instrumentalism. Yet user inventions like @replies have refocused Twitter away from the self, and other tools like URL shorteners point to longer content.
So if Twitter is value-laden, if it shapes its users, and if it is shaped by its users, then it seems to support a Critical Theory of technology.
That Wasn’t So Hard
Feenberg’s Critical Theory is helpful in that it articulates a middle way between deterministic views of technology and purely instrumental views. It sees a connection between the technology we use and how we exist in the world while still arguing that humans have some role in shaping that world.
However, in my reading, Feenberg's view is still tied to portraying human history as moving along an upward trajectory, getting better and better through technological advances. In contrast, a view of technology informed by Christian theology will still have to account for human depravity and replace hope in technology with an eschatological hope of Christ's return.
That said, Feenberg's categories are still useful because they allow us to look at technology both hopefully and skeptically without erring too far in either direction. We can appreciate the creativity displayed in technological advance and praise God for giving humanity such skill. At the same time, we can identify values embedded in technology and technological culture that are not consistent with the Christian faith and work against those aspects, so we can live lives that are fully human and truly God honoring.
Great post, John. A great introduction to Feenberg, who sounds familiar but I’m pretty sure I haven’t read (yet). What are you reading / studying by him?
Michael, so far I've just been working through the things he's posted on his faculty page, some of which are previously published academic articles and some of which seem to be summaries of his book-length works.
Twitter, like you said, is basically a way to broadcast a text message to the world and make it a little more permanent. That, in and of itself, is morally neutral. The empowerment it gives, though, is touched by humans and immediately fallen. Your opinion on that affected state is directly drawn from your opinion on the state of man.
A common theme among free software advocates is that information “wants” to be free and that people (as a large group, rather than as individuals) will subconsciously gravitate towards the means that offer them the greatest freedom to publish their thoughts. Twitter is, in my opinion, a very rudimentary manifestation of that principle. It is nearly impossible to control (just ask the authorities in Iran), and it makes no claims of being authoritative. It just “is,” just like the thoughts in our heads. They may be right or wrong or grossly biased.
The eventual consequence of this kind of freedom is, however, that those in charge of it will become corrupted. I was reading earlier about Google's response to the GoogleBomb phenomenon (http://en.wikipedia.org/wiki/Google_bomb#Google.27s_response), and it is interesting how they went from basically ignoring it and treating it as part of the organic culture, much like a mole on your skin, to trying to "fix" it while maintaining their non-interfering integrity. If you look at the changes that have been made to eBay feedback, Wikipedia entries, really anything that encourages anonymous or semi-anonymous feedback, eventually those in power tighten it down. Wikileaks is next, I am sure.
However, just like water will find a way through rock and Sharpie markers find their way into the hands of children, people's voices find their way through the most carefully crafted controls. What we lose is usually the archives of what went before. Say what you will about Geocities and Angelfire, they gave people tools to post their (albeit vapid) thoughts to the masses in an unprecedented way. The archives of that information are now in the hands of equally corruptible bodies. Most of it is worthless as individual pieces of information, but together it amounts to the best record we have for what the heck we have been doing with our lives. Sanitize that, and you may never get it back.
Random thought as I read back thru the post and comments – in what instances has technology had a negative effect on society/culture/whatnot?
I can think of a few but I’m still running through them in my thoughts. Suggestions?
Bill,
Certainly, the nuclear bomb’s invention and the ensuing Cold War was pretty bad, but many of the other areas are value judgments and in almost all cases there is some “good” and some “bad.”
As for books detailing this sort of thing, I would recommend Neil Postman's Amusing Ourselves to Death, which covers television, and next year's release of The Shallows: What the Internet Is Doing to Our Brains by Nicholas Carr looks to do the same for the Internet: http://www.roughtype.com/archives/2009/10/the_shallows_pu.php
That was my first thought too. We have certainly become very efficient in killing people. Still, I wonder what actions the threat of such destruction (and even from other smaller yet still deadly tools) might have deterred? Obviously, we will never know and mankind still seems pretty willing to fight with each other.
Yeah, I’ve been meaning to get to Postman’s book but haven’t made it there yet. I will also check out the one by Carr.
John,
You know someone is posting great stuff when it’s 11:30pm on a Saturday night and they are commenting…..i.e. ME.
Really like seeing how you used the theories in a tool like Twitter. Really helps focus the practical use of our technological tools and how that might say something about what we believe. That is important because I think most of the time we come to these tools not knowing what we believe, but rather just jumping in cause it’s the coolest new thing. Which is what I did with Twitter. And now that I’m 2 years into it, I’m just now asking myself some tough questions, and I’m finding out that I have less control of something than I thought I had.
Rhett
This is a great article about the usefulness of Twitter in the context of producing real news, written in the context of the Fort Hood shootings.
think you might appreciate it.
http://www.techcrunch.com/2009/11/07/nsfw-after-fort-hood-another-example-of-how-citizen-journalists-cant-handle-the-truth/