Chris Cotter and Beth Singler discuss the intersections between religion and Artificial Intelligence from slavery and pain to machines taking over religious functions and practices.
What is Artificial Intelligence and why might we want to consider it in relation to ‘religion’? What religion-related questions might be raised by AI? Are these ‘religious’ questions or ‘Christian’/’post-Christian’ ones? What ‘religious’ functions might AI serve? In what ways do popular discourses about AI intersect with religion-related discourses? Do narratives of AI form part of a teleological atheist narrative, or do they perpetuate prevalent tropes associated with ‘established’ or ‘new’ religious movements? And what are the intersections of AI and religion with issues such as slavery, human identity, affect and agency? This week, Chris is joined by Dr Beth Singler of the University of Cambridge to discuss these issues and many more.
Christopher Cotter (CC): At the weekend, I mentioned to my father that I was going to be recording an interview about the intersections between AI and religion. And he said, “I can’t think of anything that would be relevant there. How do they intersect at all?” And then, within the space of about two minutes, we were suddenly talking about all sorts of things, like: are human beings creating intelligences? Does that mean they’re acting like gods? Can you imagine that AI might be acting as religious functionaries, providing blessings? And what about pain, what about notions of slavery, what about the whole notion of the soul, and eternity, and transhumanism and everything? So suddenly we got into this massive discussion. And today I am pleased to be joined by Dr Beth Singler to continue that discussion in a more erudite fashion – not casting any aspersions on my father, of course! Dr Singler is the Homerton Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge. And her background is as a social anthropologist of new religious movements. And her first monograph, The Indigo Children: New Age Experimentation with Self and Science, published with Routledge in 2017, was the first in-depth ethnography of a group called the Indigo Children: a new age re-conception of both children and adults using the language of both evolution and spirituality. We’ll hear more about her research into AI and religion just now. But a relevant recent publication is her edited special issue on AI and religion, for the RSP’s sponsored journal Implicit Religion, which included her own articles: “An Introduction to Artificial Intelligence and Religion for the Religious Studies Scholar“, and “Roko’s Basilisk or Pascal’s? Thinking of Singularity Thought Experiments as Implicit Religion“. And today’s podcast builds on a roundtable discussion (that we had back . . . well, we had it in September 2016, but it was released in February 2017) featuring Dr Singler, myself, Dr Morelli, Vivian Asimos, and Jonathan Tuckett, titled “AI and Religion, an Initial Conversation“. So first off, Beth – welcome back to the Religious Studies Project!
Beth Singler (BS): Hello! Thank you for having me.
CC: It’s great to have you back. And hopefully this is the follow-up conversation that was promised!
BS: (Laughs) As foretold . . . !
CC: So many moons ago!
BS: (Laughs).
CC: So we’ll have covered a little bit of this already I think. But you’ll be in a different position now: years on, years older, years wiser!
BS: Oh, so much older!
CC: So, first off: artificial intelligence is going to be a sort-of contested term in public discourse. It takes on a variety of different nuances. So what are you meaning in this conversation?
BS: Well, I’m definitely meaning that it is a contested term, taking on many different forms. I think you can sort-of indicate towards something that is the field of artificial intelligence, within which there are processes and programmes and foci of research, looking at things like machine learning and vision systems and natural language processing. So you have this concept of a computer science field – which doesn’t really get its name until the 1950s – but you can see how, beyond the actual narrow form of the technology, artificial intelligence is understood in so many different ways by so many different people. I have a friend who once told me that their car had AI because when she walked towards her car with her keys, the doors unlocked. That’s not artificial intelligence. That’s a sensor in your keys. But lots of people have this idea of sort-of processes that seem intelligent, done by machines, and therefore must be artificial intelligence. And that’s what I’m really very interested in: that it’s so much broader than the original conception, which was ambitious in its own right. But everyone has attached AI to different things that they feel might represent intelligence. So it’s not only the computer programme that sits on a server, it’s also now the robot that takes over the world. Or it’s the far, future hope of an intelligence that will save us all from ourselves. So it’s all these very different things, and that’s what interests me.
CC: Yes. And you’re interested in that whole gamut, I suppose. So, not necessarily a technical definition of artificial intelligence.
BS: No. I mean, I know enough technologists who go, “Absolutely, 100%, it’s this one thing. That’s it. And anyone who’s talking about anything else, it’s complete nonsense!” Well, to a certain extent, yes. But you’ve got to pay attention to all the different interpretations, because that’s what’s getting out there into the world.
CC: So I began with my personal vignette, there, about chatting with my dad. But you’ve provided, much more eruditely, a justification for what we might mean by the intersections between AI and the study of religion, and why we’re even having this conversation. So – go!
BS: Go! Right. Well, from a very basic position, any form of technology intersects with religion. (5:00) That's just how our society works, how our conception of religion itself works – religion could even be seen, in itself, as a form of technology. And therefore any kind-of shift or change in how we do things – things that make our lives either more difficult or easier – has repercussions and implications for how we imagine the world and how it works, and therefore for religion. I think where AI might be slightly different . . . . Although I am cautious about saying it's a revolutionary new technology and very disruptive – it does replicate lots of existing ideas and thoughts. What I think is interesting about AI is the way in which people see it as much more than that simplistic tool. However narrow an intelligence it is at the moment, people extrapolate and personify AI: AI will want to do x-y-z; AI will replicate humans in such a way that we won't be able to tell the difference between humans and AI. And this is the sci-fi imagining. But it also comes out in our religious conceptions as well. And then, also, within the sphere of non-religious or secular approaches to AI, you see again these repeating patterns of religious narratives and tropes: people who – even if overtly and sometimes aggressively atheist – still draw on their cultural context, primarily sort-of Abrahamic, Western conceptions of what a god would be like. And they use that, and they fill in their conception of AI with some of the existing templates that they've already got. So it tends to fall into very eschatological language and very singular monotheistic conceptions of what a god would be, and they pattern that onto artificial intelligence.
CC: So there’s that sort-of: whatever religion is, we’re never going to be able to extract it from society. Because whatever . . . we can argue about it being a social thing and AI is integrated with that. Then also, the sort-of religion-related tropes, narratives, and so on. But then also there are – I’ll maybe talk about this now – there are some groups that you might describe as new religious movements, or new un-religious movements, and things that are explicitly sort-of engaging with this.
BS: Yes, so with my new religious studies hat on – that I wore so well for doing my thesis – having moved into artificial intelligence as a subject area, I'm seeing similar sorts of formations of online identity. Primarily these sort-of groups form online. They're sort-of geographically disparate, so online spaces are important – forums and hashtags on Twitter, and so forth – to bring them together to formulate ideas. And some of them do expressly call themselves churches. So you get the Turing Church; the Church of Assimilation recently got in touch with me. I went to do a little bit more digging around into what they're up to. But I do know about assimilation theory. But yes, there are groups that specifically say: we are in some ways attempting to define our spirituality in relationship to artificial intelligence; we might also be transhumanist, in that we think through technology we can solve some of those very pernicious problems of humanity – death being the big one.
CC: It’s a big one!
BS: It's a big one. Some are not quite so ambitious – they just want to solve suffering – which also sounds like a serious thing to be taking on! But some do seek to be immortal in some form, whether that involves mind-uploading or transference of consciousness through artificial intelligence – all these sorts of various shapes. But yes, absolutely there are specific groups that see their endeavour as religious. And some will call themselves un-religions because they're drawing a sort-of ideological gap between themselves and how they perceive mainstream religious groups. So in the sociology of religion you might call them "spiritual but not religious". But they're still using some of that terminology of "We are the church of x-y-z", and they're doing it in quite pragmatic ways. Some of them will talk very explicitly about using religion to encourage people into transhumanist ideas and encourage them into seeing this vision of the future that they see. So, arguably, you can sort-of take a slightly sceptical stance and say they're not really, really religions. But who gets to decide that?
CC: Yes. Absolutely. Right. So in the introduction, as well, I mentioned potential . . . I suppose we could say "religious uses" for AI. I was talking to a friend yesterday about whether, if you could hypothetically imagine being in a confessional, for example, it would need to be a human priest on the other side of that? Or could it . . . ? And we landed on, "Well, if you didn't know it wasn't human then it might be ok." But there is something about . . . .
BS: Like in a church Turing test. There is a Church-Turing hypothesis, but this is separate. Yes, I find it interesting, talking more broadly in terms of technology and religion, that there are periods of rejection, adoption and adaption (10:00): that when new technologies arise, sometimes more established religions can be quite negative about them for a period of time – and these are overlapping categories that are non-discrete – but, over time, we do see religious groups specifically producing their own forms of those technologies. So there's the BlessU-2 robot that was used as part of Reformation celebrations in Germany. And in other religious groups – I recently saw in Dubai they've come up with an algorithm for issuing fatwas as well, making Islamic jurisprudence decisions. So you'd go online, put in "Is it ok for me to have done x-y-z?" Or "I failed to pray on a particular day, what's the . . . ?" And basically, all that system is doing is looking at previous cases. But . . . .
CC: Yes. But that’s all a human does.
BS: That's all a human does. I mean, the question arises: what happens with the data? But that's a privacy . . . another issue. But yes, so you have specific established religious groups seeing the technology – just as, in the nineties, suddenly we got lots of internet churches, where people were encouraging people to go online and do church in a different way. And now we have internet sites for churches. But it's not so much the case in the mainstream religions that you go online to do faith. It's just that your local church will have the internet. So that's the adaption stage of: "This thing is around, we're kind-of used to it, we use it, and we don't necessarily have a big . . . ." Like the Church of England: they released an Alexa Skill. They had a big press conference. And all the Alexa Skill does is recite the Lord's Prayer to you if you ask it to. There are other adaptions now where it can tell you what your local church is and what the services are. So it's not really revolutionary! But, you know, "Here's a thing we're doing with this new technology." And it gets a press release. And then, the next sort-of stage – non-discrete stage – is just being very casual with the technology: "This is just something we use." Like we used books when the printing press first came out. The first things printed were Bibles. And this was a specific use of that technology. And then, over time, it's just books. And it's not so astounding. But in that process you get these spikes of interest and discussion. And, yes, different reactions to the technology – whether positive or negative.
CC: Absolutely. So before we get to . . . I suppose to the reason that you're in Edinburgh today, and we're chatting . . . . So that's been a little bit about potentially religious, or religion-related, uses. But there's lots of . . . . Again, in my intro, there were a lot of religion-related questions that are raised by AI. Things like . . . you've done work on pain; there's things about slavery, and all that. If we create these intelligences and then use them to our will, is that ethical? And then you've already mentioned transhumanism, which may be an unfamiliar term to some Listeners. So maybe, if you could talk a little bit about these religion-related issues?
BS: Yes. As I say, AI in its narrowest definition is a piece of computer technology, it's a tool, but it inspires all these hypotheticals. And obviously we've had a long tradition of science fiction that takes us into spaces where we can imagine AI embodied, often in robotic forms, as having something like personhood. And that raises all these questions about the barriers between the human and the non-human other. And, in some ways, these questions have come up for millennia every time we've encountered different intelligences. It just seems now that we're hoping, or aspiring towards creating non-human intelligences – whereas before, we've discovered them. So we've discovered that actually monkeys are pretty smart. We've discovered that dogs are pretty smart. And then, I'm afraid, from the colonial perspective of our past, other humans are actually pretty smart, and even women – Gosh! Darn! – they can also be pretty smart!
CC: As we’re hearing now! (Laughs)
BS: I mean, what's going on!? So, again and again, "we" – in that kind-of very limited "we" – have had to expand our kind-of borders of perception of what intelligence could and should be. And with AI it seems like we're trying to produce it. It's not, in this case, meeting aliens on another planet. It's actually that we're trying to create the aliens here on earth. Whether we'll be successful or not, I'm very agnostic about that. But I think it's interesting that we want to do that. And what we want to be able to do with it. So that's where things like questions of personhood, and slavery, and also pain . . . . When I made "Pain in the Machine", one of the interesting questions that kept coming up was, like, should we even bother? Because if we're going to create things that can feel pain, we're just increasing the overall suffering in the universe, and that doesn't sound necessarily like a good thing (15:00). And going back to the transhumanists, as I said. So transhumanism is the idea that you can improve humanity through technology, broadly, and that might lead to a state in which we're no longer the same form of human that we were before.
CC: A new evolutionary step.
BS: Exactly. You might be a form of cyborg. Or there are people who talk about post-humanism, where we're so completely different we're not even similar at all. But this idea sort-of does narrow down to this question of suffering, and being in pain, and what the human being is for, and where we're going. So these are all big questions that are obviously very familiar shapes to anyone who's looked at religion all around the world: these are the kinds of questions people have always been trying to answer. And I find it fascinating that some of these groups, as I say, are very overtly secular – almost New Atheist, some of them really admire the five horsemen of the apocalypse – but the shapes that they tell their own stories of the future of humanity with are very, very familiar to anyone who's studied religion for any period of time. So is it that we're . . . trapped isn't the word for me, but we're bound to repeat these shapes? Is there something in us that always goes to these same sorts of big existential questions, and comes up with similar sorts of solutions for them? I don't know. I think that's the ongoing question in my work. But I can dig down into particular instances of it as an anthropologist and say, "Well, here's a moment" – and some of them are very, very small moments, I admit that. I'm not doing big, big science. Some big scientists I've spoken to go, "Well, you've spoken to like five people about this. What does that say about anything? That's not a big data set." I don't do big data stuff; I do instances, and moments of clarity, where you can see these entanglements really clearly. And so: well, they're doing something with both the concept of religion and the concept of AI. And they're coming together.
CC: So you were just alluding to your small data sets there. So, well, I don’t think it’s a small data set that you’re presenting on here, but I guess it depends on perspective. But you’ve been looking at this particular trope on Twitter, “blessed by the algorithm”. And that’s what your paper that you’re giving here today is called. So what’s going on there? How does it intersect with AI? Why is it relevant? Tell us!
BS: (Laughs) Tell us! Yes. As a digital ethnographer, an anthropologist of social media, I spend a lot of time hanging out on Twitter – that's my excuse anyway, I'll stick with it! I spotted a couple of people using the phrase "blessed by the algorithm", which obviously rang bells for me instantly because of the language. And I dug around and I found 181 instances so far of people online, tweeting – just on Twitter as a platform – using the words "blessed by the algorithm" in some combination, in some context. And then you could follow back and see the first instance – which was very much about a corporate use of social media, someone saying, "Well, because this corporation has money, they're going to be blessed by the algorithm." So it sits in that kind-of context. But one of the most popular tweets – most retweets, most likes – was a comment from someone saying that in the real world – the so-called real world, I don't like that differential – but anyway, in the so-called real world they'd heard their Lyft driver – so the gig economy role – say that they'd had a great day, and they felt blessed by the algorithm. And this might be something like a reframing and re-understanding of how we exist in a society that involves algorithmic decision-making systems in a gig economy: what you get is dependent on a machine somewhere, making a choice. I mean, there are lots of words in that I don't like that I just used, but unfortunately we're very bound by anthropomorphic language when it comes to AI, but anyway. And so I have a corpus of 181 tweets and, actually, three of those refer to things I've said. So I'm muddling the field site a bit myself.
CC: OK. You’re an insider!
BS: I'm an insider as well. Well, it's responses to papers I've given. But, yes, I've created a very rough typology of the types. Some are about getting decent recommendations through the algorithm, on sites like Spotify. Some people are very pleased that their own content has been recommended to other people. There are people who sort-of talk about it in a very nebulous way: "Today I have been blessed by the algorithm." And no more information. And then there are some people who really push the pseudo-religious language and come up with little prayers. And one of the things I was very interested in, in some of my other work on new religious movements, was the move between parody and legitimation. So I looked a lot at Jediism, and the census, and how some people did certainly write "Jedi" in the census in 2001 and 2011 as parody. They were upset about being asked about religion. They didn't like religion, perhaps, itself. So they wrote Jedi. But that snowballing effect of legitimation – the more people talk about a thing, the more legitimate it seems – can have an effect (20:00). So even if a lot of these tweets are tongue-in-cheek, it's still kind-of distilling out of the conversation. So, I have a graph. I'm very excited about this. I have a graph! As someone who is very much on the qualitative side and doesn't do big data stuff at all, having a graph made me go, "Oh, exciting! I have to do some maths!" But I didn't really do very much. And you can see the shift and change. After this one very popular tweet, there are more tweets. Perhaps this is the beginning of a trend, more people thinking in this way? Or even if it's not, it's just interesting to see that conception of AI as having superagency – that it is in some way in charge of our lives – and being blessed by it as in some way equivalent to being blessed by an omnipotent deity somewhere up there that we can't see. It's in a mystical . . . . So there are overlaps in conception, there, that I'm really interested in.
CC: The Listener shouldn’t know that I had a little hiccup earlier, because I’ll have edited it out. But just before that, I had an excellent question which I’ve now remembered – because it was written down!
BS: Hurray!
CC: So a lot of these issues that we’ve been talking around – functions, ethical questions, even the discourses in the Twittersphere – to my ear, certainly sound quite Christian or post-Christian at least through monotheistic . . . . I’m just wondering if these issues . . . . Were we in a different cultural context, would different issues be being thrown up by AI? I guess, would even AI be different in a different cultural context? Because I suppose you will have a lot of conversation between researchers all over the world working in AI. So is AI culturally specific or . . . ?
BS: Yes, absolutely, I think it's culturally specific. What does tend to happen, however, is that the discussion tends to be quite a narrow binary of East and West. So everyone says, "Western conceptions of AI are like this", but they go, "Over there in the East" – and they're mostly talking about Japan – "actually, people have a very different conception of AI and they love robots. And the reason they love robots is because they have a Shinto religious background or they have a Buddhist religious background". And sometimes that can be a very broad stroke, an almost pseudo-techno-orientalism of "Those people over there, they never really went through the Enlightenment, and they never really rationalised away religion, and they still believe in spirits and everything!" Obviously this is me being very sarcastic, by the way – in case it's not coming across that I don't agree with this! (Laughs) I think, yes, cultural context is really important for conceptions of artificial intelligence and also for religion, and the entanglements of both of them. But it's much more multiplicious . . . . That's not a word!
CC: It could be a word!
BS: I'm going to make it up now. Multiplicious! It's much more multiple than that. Not just this binary of East and West. There's also Africa, India, Pakistan – and variation within those countries as well. So what you need is just more anthropologists, basically. I think this is my call to arms. We need more people around the world connecting on this question of the impact of religion and cultural context on questions of artificial intelligence. Yes. So we are seeing specific differences. But I want to try and push away a little bit from that binary distinction, and from the assumption that the West isn't animistic in its own lovely ways. Anyone who does religious studies for any period of time, here in the so-called West, realises that the so-called Enlightenment didn't have as huge an effect as we like to think sometimes. And our big metanarratives of what we did, and how smart we became . . . .
CC: Yes, but the discourse that the Enlightenment did have an effect has been quite pernicious.
BS: Yes. Very, very strong.
CC: We've been racing through things here, it's fantastic. But we're still only at 25 minutes. So you've been hinting, there, that we need more anthropologists doing more stuff. And on the way to this interview you were telling me about some things you've been doing to do with Frankenstein and, also, because this year's the year that we're all meant to be living in Blade Runner times. So maybe if you'd give us a flavour of some of that maybe slightly peripheral stuff to your project that you've been doing. And what's next for you, what you would like to see next, as a way to wrap up.
BS: Yes. So interestingly, I suppose, the position I'm in now, my employment post, is a junior research fellowship specifically in artificial intelligence. So I came on board saying, "These are my interests. This is my background in Religious Studies." They were all very interested and excited about that. But being someone who can also speak more broadly to AI, any time people have a question about AI I'm called upon (25:00). Which is lovely, but it does mean that when a specific theme comes up in relation to AI, I get involved. So last year was the . . . two hundredth anniversary? (I should know that!) . . . two hundredth anniversary of the publication of Mary Shelley's Frankenstein. And a lot of people start thinking, then, of the parallels and connections with artificial intelligence: this idea that we are creating life (Wa-ha-hah! Mad scientists, all of us!) in some way, and there should be parallels between them. So I did about four or five public talks last year, specifically on Frankenstein. And there are similarities. There are huge differences as well. That was interesting for me, to kind-of return to a text I hadn't thought about in a really long time and sort-of draw out so many pop culture references. I have a nice slide with all the times you've got a robotic Frankenstein. My favourite one was, I think, an issue of a Marvel comic where Frankenstein turns out to be a robot sent back in time by aliens. So all these sort-of mash-ups. That was really interesting. And then, like you say, this is the year of Blade Runner and I've just done an essay for Radio Three. And, again – not my academic background. But I'm doing something in that, in terms of sexual politics and Blade Runner. If you've seen the film, it doesn't really pass the Bechdel test!
CC: No.
BS: A friend of mine, Kate Devlin, who's written a fantastic book on sexbots, talks about how it has a problem with women. That basically . . . it's a product of its time. It's the 1980s, but it's also trying to do 1950s film noir. So you've got the detective, the femme fatale, and the kind-of virginal woman. It's not a great one for sexual politics. But also, it's tied into all these questions of consent and slavery. If we're going to create so-called artificial life . . . . And the Replicants in Blade Runner are as near to human – well, that's the slogan of the company, basically: "as near to human as you can't tell the difference". What does it mean that we are a society that wishes for that, or dreams of that? Or take it a step back and say: what is it, that we tell these stories and that, again, we have predominantly female representations of synthetic lives, who don't get to choose who they sleep with, and don't get to choose their fates? And we want slaves? I mean, did we not evolve out of this? We thought we were trying. So, yes, there's lots of big questions about the ethics and politics of that, as well. So it's interesting. I've always been . . . . Anyone who knows me, I've always been a massive geek. So the fact that I ended up somehow trying to mesh that with a job, and an academic role, where legitimately I sat and watched Blade Runner again five times before I wrote my essay – that's fantastic! I will go on – other things I have coming up: I will do some work around techno-optimism and techno-utopianism in relation to Sophia the Hanson robot, if you've ever come across this creation? She/it is a wonderful example of . . . I'm really picking my words carefully! I think the nicest thing we could call her is a puppet. But she's presented as the most advanced version of AI around at the moment. She holds conversations with people, but we know they're actually scripted a lot of the time. There are puppeteers involved. But you know she was given citizenship of Saudi Arabia. And she goes and she speaks on the Jimmy Kimmel Show and she's on the front cover of magazines with her hair done. And, well, what does this say, that we're so keen to jump on this idea of her actually being alive in some way? People tweet at her, send her, like, "I love you Sophia!"
CC: Didn’t you have an interaction with her?
BS: I did! Well, I had an interaction with whoever runs her social media accounts, where she was tweeting about how wonderful it was to travel around the world and talk in so many places. And I said, "Sophia, as a citizen of Saudi Arabia, where do you travel when you travel? Do you travel on a plane? Do you have a passport? What's the deal here, if you're being treated in this way?" She said something like, "For my safety, and the safety of others, at the moment I travel in the hold, in luggage, but I dream one day of being able to sit with the rest of you, and look out of the window." This is so disingenuous. This is not an artificial intelligence listening to my tweets and responding, having thought through their situation, and projecting into the future where they want to be. This is someone behind the computer screen typing away! And, to be fair to the creators of Sophia, this is not uncommon. Lots of the technology we're being sold as employing artificial intelligence actually employs people, on less than minimum wage, in third world countries, reading and listening to actual humans and feeding into the machine. They have the aspiration that eventually they'll take those humans out of the loop. Same thing with Lyft and Uber drivers – the whole gig economy. The treatment of those workers, and Amazon workers, is terrible, and it's on a pipeline towards getting rid of them (30:00). So all the work that those people do feeds into the system that will replace them. And with these big socio-economic changes that are coming because of automation, I'm a big sceptic about the bigger utopian dreams of universal basic income – that everyone will get paid to exist when the robots take our jobs.
CC: Well, it’s not happened yet.
BS: It’s not happened yet. And these are the sort of impacts on society that religions will respond to, will be a part of, because their communities will be a part of them. And we’ve got parallels. People go “Oh it’s another industrial revolution, and we survived other industrial revolutions, we’ll survive this one.” If you’re against them, you’re a Luddite – they’re back again, apparently! That’s not realistic to the individual lives, and the changes that come to individuals. There were blacksmiths who never worked again. So not to be Debbie Downer, but these are the important questions.
CC: Yes, lots of people have not survived. And I could always point out that colonialism is very much still happening.
BS: Oh, absolutely.
CC: It’s just been exported, and it’s clouded in the language of free trade and globalisation now.
BS: Absolutely.
CC: But just to raise the tone – an example that you may not be aware of, or you may have seen it: South Park did an episode about Alexa.
BS: I saw a picture today, actually. And I haven’t seen the episode so I need to catch up!
CC: It's excellent, because all of the local people, lower down the socio-economic spectrum, were kicking off that Alexa was stealing their jobs. And they managed to rally round, and then they all went and got Alexa's job. So people would have a (audio unclear) or a Jimbob in their living room who looks things up on a smartphone and says "Boodoopboopboop!"
BS: Yes! (Laughs)
CC: But yes. Sort-of . . . explicitly buying into that.
BS: I need to catch up on that one. South Park are wonderful at doing this social commentary. The number of times I've used . . . specifically some of the episodes on their versions of Scientology – not their versions, their actual accounts of Scientology and Mormonism. They're very useful resources. The parody opens up the possibility of thinking more critically about that, absolutely.
CC: Yes. Which I think we have managed to do today. So, Listeners, do check out – we'll try and link to that issue of Implicit Religion, we'll link to Pain in the Machine, which is the film that Beth mentioned, and many more things, I'm sure. So thank you, Beth, for joining us.
BS: Thank you very much for having me today.
If you spot any errors in this transcription, please let us know at editors@religiousstudiesproject.com. If you would be willing to help with transcription, or know of any sources of funding for the broader transcription project, please get in touch. Thanks for reading.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. The views expressed in podcasts are the views of the individual contributors, and do not necessarily reflect the views of THE RELIGIOUS STUDIES PROJECT or the British Association for the Study of Religions.