Revisiting The Luddite Fallacy.
The Luddite Fallacy is the label commonly attached to the claim that technological unemployment – unemployment caused by technological change – leads to widespread structural unemployment. It is deemed a fallacy because, ever since the rise of printing presses, looms, and other automation or labour-saving technologies, we have not only destroyed old jobs but created new ones for people to work at – new jobs that could not have been foreseen. Through the seeming miracle of market growth and consumption, structural unemployment has been avoided.
However, I’m firmly in the camp that thinks this time is different, and I share the views of commentators like Martin Ford (http://www.thelightsinthetunnel.com/) and others. I do not think the Luddite Fallacy is applicable to the present era of technological change, with advanced automation, robotics, expert AIs, software development, and other solutions driving technological unemployment (http://en.wikipedia.org/wiki/Technological_unemployment).
This is most certainly a minority viewpoint at the present time. The vast majority of economists, especially those advising governments, big business, and think tanks, believe that this time will be the same as all the others and that the Luddite Fallacy is alive and well.
The other day I came across a recent example of this on an economics blog that I read from time to time: http://www.macrobusiness.com.au/2013/08/is-technology-killing-the-middle-class/. The key excerpt is:
Debate over whether technological advancement is killing jobs is as old as the hills. When the automobile was introduced, stagecoach drivers protested at the loss of jobs. Ditto the 19th century mill worker and the 20th century bank teller. In all cases, new jobs were created in areas unthought of at the time. The same will happen again. New jobs will come from somewhere, although for some workers whose skills are made obsolete, they will be forced to take on less financially rewarding work.
One of the reasons I think it is different this time, and why those economists who still invoke the Luddite Fallacy are wrong, is as follows:
In times past, our technological development, our automation, and our labour-saving machines were focused on enhancing, compensating for, or replacing our bodies. We basically built bigger, stronger, and faster arms and legs that were tireless: think of the massive presses, pumps, and rollers of industrial mechanisation; forklifts, trucks, and rapid transit; turbines, drills, and excavators. Or we built louder voices and symbolic projections: think of telecommunications and anything that allows your voice, your image, or your writing to be rapidly reproduced and projected around the globe to instantly reach vast audiences. There are myriad examples.
This time around we are not enhancing our bodies, but our minds. We are guiding our technological evolution towards engineering intelligence – our species’ core competitive advantage, and the thing that enabled us to learn, adapt, get smarter and discover more complex things that needed doing. We will engineer machines that remember better than we do, solve problems better than we do, and make creative associations better than we do. IBM’s Watson program, DARPA’s SyNAPSE program, the various Big Brain programs, and others, are all laying the foundation on which these machines will be built. When a machine is eventually created that has the intelligence and problem solving ability of a human and the means to manipulate its environment, then there would seem to be, by definition, no conceivable job that a human could do that a machine could not do faster, better, and cheaper.
I also don’t believe this to be a bad thing. Quite the contrary. I think we should measure our progress by increases in our quality of life and collective standards of living, and this time around we’ll be capable of delivering simply unimaginable progress. There is no guarantee that we won’t mess this up, but I can’t help but be optimistic that we’ll muddle our way through.
Sure, we can nit-pick over the finer details, argue about the ever-shrinking pool of jobs that machines can’t yet do, or about clever humans exploiting early machine intelligence in niche expert areas to accomplish more than either humans or machines can alone. But in the end near-complete technological unemployment seems to me to be unavoidable, and it is good to see this getting occasional coverage – good or bad – in the mainstream media.
Sure, but how will the people who don’t own productive machines survive?
The ones saying “all the other times it was true ergo it will also be true this time” are the ones falling into a fallacy (since they give no real reasons to support their ideas).
Mark Bruce, on the contrary, puts forth a mechanism that tries to explain his prediction.
Even if he’s proven wrong, his thinking is more sound and logical.
Right now wealth is distributed by ownership. Regardless of whether people need to work (and it can be easily argued that a 10 hour week could be normal right now) how do we distribute the wealth so that everyone has at least their basic needs met?
Randall Lee Reetz You think that about someone who uses the word “fallacy”, but what do you think about those who use “guarantee”?
I guarantee nothing, and form individual conclusions based on the unique nature of what someone says. I find that forming absolutes about tendencies is a poor substitute for a genuine conclusion.
Pointing out something that isn’t a strong argument isn’t an effective counterargument either.
Why don’t you try to infer what is being said as a whole, instead of inferring from a few particular words?
“Speech is a joint game between the talker and the listener against the forces of confusion.”
It is that, but it’s not just that. It means that we have to think seriously about what comes next or life will get real ugly, real fast when it does fall apart.
The two words I see with 100% correlation to arrogance and immaturity are “lol” and “fallacy”. Do a search for yourself.
While I aim for perfect communication, I often fail. It seems to me that while some people may achieve 100% accuracy with the words they use, it is significantly more efficient to assume some degradation in the messages conveyed through fallible human speech; the effort required for me to compensate for such imperfection is less than the effort required for someone else to achieve such perfection. I think that if I listened only to those few who in practice achieve 100% accuracy, I would be listening to some very boring people who could be spending their time better.
So no, I have not noticed, nor do I exclusively value, a 100% correlation, as you do.
I also think that to define something like that as having achieved 100% prevents you from accurately gaining new data that may alter that conclusion.
I also hold the belief that I may well be wrong about some things, so by holding anything as 100% too adamantly, I may be preventing myself from learning. I find it worthwhile to always put some effort into checking redundancies and to assume that some error in information exchange exists (similar to the 2nd law of thermodynamics).
Do a search here on google+ for instances of the word “fallacy”. Read the threads. Then do a search for the other posts that person commented on where they use the word “fallacy”. This word is used frequently by people who want to shame a person, to derail the conversation away from critical and skeptical points being raised that threaten the borg-like geek think that google+ uniquely attracts. As for the label “Luddite”, I think you’d have to search far and wide to find anyone anti-technology on google+. Google+ is a tech freak’s utopia… Luddite is overkill in geek land. When people on google+ question technology, they are questioning the choice of technology, or the motives driving particular technology choices… not the use of technology itself. How threatened does one have to feel to take the low road and label any critique “Luddite” simply because it was written by someone asking for a more intelligent use of or choice of technology?
Randall Lee Reetz Sorry, I do not need to Google search terms in order to determine that it would be ridiculous for me to disagree with an idea merely because the speaker used a certain word. This requires an exact alignment of those who use this word with invalid ideas and conclusions. Your stance is not that such people could be wrong, but that they are wrong, and you assign 100% accuracy.
Why are you even commenting on this thread? You associate people who use a word with “people who want to shame a person, to derail the conversation away from critical and skeptical points”, and look at what you yourself have done on this thread. You have read the first three words (which, by the way, are a clear reference to a fairly common phrase, in use here exclusively as a frame of reference for the subsequent conversation), and have apparently concluded that the ensuing ideas are meaningless. You have seemingly discovered a hitherto undiscovered method of learning that supersedes understanding concepts themselves, and requires only that you determine whether a person uses one of two words, and if they do, they are wrong.
Thank you for this grand wisdom, but I just think you’re a troll.
If I had said the exact opposite and also managed to use either “luddite” or “fallacy”, would you also categorically disagree with my opposite statement?
That’s a rhetorical question though, I just think you’re a troll.
Arrogance. Avoidance. Shame.
So why is this time different? Technology has been augmenting the mind for decades. Are faster decision-making, memory, and organization an augmentation of the body? I certainly don’t think so. Claiming that “this time” is the time that computers will really become intelligent, not just faster and more usable, is kind of silly. There’s no evidence for that. Again, for decades, computer programs have been accepting inputs that human brains would normally accept, and producing the same outputs, only more accurately and faster. This includes decision-making. A human can’t decide which of millions of documents matches a search pattern in a reasonable amount of time. A human can’t dig through many users’ online activity and produce a useful behavior analysis. A human can’t tell you whether two long documents are identical, he can only say whether they are similar. A human can’t keep track of his store inventory and transactions solely in his own brain. Computers do all of those things that human brains used to do. Computers have been replacing mental functions of humans for years. In fact, simple inventions like written language and paper have been replacing the function of memory for all of human history.
So, please, how is this time different? I mean, we’ve been “guiding our technological advances towards engineering intelligence,” forever, and yet we haven’t yet produced anything close to an intelligent automaton.
Because this time, we are trusting the basic process of evolution to computers themselves. That is THE difference.
Randall Lee Reetz, why can’t you be nice to people…
By “nice” I assume you mean “agree with everything said by geeks”? Sorry, no can do. The geek motive is monolithic here on google+. Geek tech zealotry confuses engineering with science in a way that is harmful to the essence of science. The essence of science is the admission that noise is the enemy of observation and that the greatest and most prominent and dangerous source of noise is our own brain, our own desires and fears. What separates using science (engineering) from doing science (being or thinking like a scientist) is that scientists make it their primary desire to understand the universe… aligning their motive with the motive of the universe. Engineers don’t seem to understand this essential difference, letting their desire and fear run amok, confusing want with what is. The problem is that the public doesn’t understand this difference. The public thinks anyone holding a test tube or a computer or a rocket is doing science.
Geeks don’t seem to care that their desire for a personal transhumanist or singularitarian future very much gets in the way of seeing things for what they are. Hubris and zealotry so often overwhelm honesty and empiricism. If you want to run around naked talking about how beautiful your clothes are… I am going to point at you and expose your lack of attire. Someone has to. Might as well be me.
Randall Lee Reetz I don’t think you would know what a scientist is even if he walked up to you naked. Your idealism of science is 100% wrong. We are all scientists here and we can all play nice.
This is your last warning about playing nice.
Theorem: “geek tech zealotry” correlates with “troll” with r=1.
Proof: follows trivially from above posts. QED.
(sorry, I just couldn’t resist).
Scientists don’t have a personal existential goal. That is called zealotry and holding a test tube doesn’t make it any more scientific.
Something I find very relevant to this discussion is the short essay “In Praise of Idleness” by Bertrand Russell.
Did our ability to “mess up” diminish in the last hundred years?
Sadly, it seems not.
Mark Bruce, something I can always agree 100% on is that the world will be a very different place in 20-30 years 😉 However, I don’t think that the technological singularity will be that soon. It seems like we’re always further away than we think we are, sort of like how we’re always 30 years away from achieving fusion.
My opinion here, and I think opinion is all that I can usefully offer without being wildly speculative, is that humans will always have work to do, unless we have no choice in the matter. There are always frontiers, and we would not willingly give up our access to those frontiers for a cozy life.
EDIT: But nothing is impossible, right? I mean, I’m holding my breath for a replicator or holodeck in 30 years. I’ll be really blue by then…
I’m glad you’re trying to measure quality of life, but I nonetheless see some fundamental problems with this argument. First, there is likely an issue with unsustainably increasing complexity and technology, whereby it can’t continue at the current pace and will probably suffer a setback or collapse at the social level. Another is man’s need to “work” for both mental and physical health. Finally, technology isn’t driving the direction of our societies – it’s being driven by a massive consumption engine that primarily profits large corporations. The degree to which technology will be used for the benefit of people is limited.
I think a better discussion would be your referenced concept of “faster, better, and cheaper”. Do we seriously need to go faster? Given the rate of unsustainable consumption and the obvious global impacts, don’t you think we should go slower? What’s the definition of better? More efficient? Efficiency is also an ambiguously used term here – an efficient machine produces more output for less work, but the story is different for a man. A man (person) gets stronger by working. A man gets smarter by thinking (which is itself work). A man finds identity and meaning through work. Work benefits man – this should factor into your discussions. Cheaper? A good counterexample for this is to explore the common habit of picking a restaurant based on the amount of food you get for the least price. But in the model for a good local community, you may be eating at your neighbor’s restaurant, or maybe your sister’s place, and your thinking will be different. You will expect them to serve good food because they care about you, and you will want to pay them a reasonable amount because it is their livelihood.