Science & Technology Archives - The Freethinker

Rescuing our future scientists and engineers from quitting before they start (6 September 2024)

In my capacity as a science educator and speaker, I once sat across a table from a group of A-level physics students at a top-50 all-girls grammar school, keen to hear what their future plans were. To my shock, the first girl informed me of her dream of becoming a TikTok influencer. I asked if this was science-related; unfortunately, it was not. This instigated an excited discussion among the others at the table, several of whom had no dreams of STEM, but plenty of dreams of social media fame.

I put the incident behind me, but only four days later at an all-boys grammar school, the same thing happened—except social media stardom was replaced with professional gaming and esports. In the space of five days, I had sat with some of the brightest young minds in STEM from my town, yet their ambitions lay elsewhere.

I shared the stories with my wife, wondering how this could happen. The incidents jarred me tremendously, sapping much of my energy and causing me to consider stepping away from education.

It is a fact that we do not produce enough scientists or engineers in the UK. We also have an NHS shortage, relying on overseas recruitment of doctors and nurses. Compared to China and India, we do not prioritise or glorify STEM as a noble career—or at least not in the same way. If anything, we glorify the opposite, sending a message of ‘be who you are’ rather than ‘imagine what you could be’. We celebrate the individual in such a way that no one can tell anyone that they are wrong, that their dreams are too small, or that it is their responsibility to contribute meaningfully to society.

There is far more to be said on this issue than space affords me. We have serious issues of science scepticism, mistrust of expertise, conspiracy theorising, echo-chamber thinking, ideological capture, and religious fundamentalism. All contribute immensely to an unprecedented level of anti-science bias that I see in students, reflecting society as a whole.

But how can we stop the next generation of engineers and scientists from quitting young? Briefly, let me discuss what can be done in five areas of concern.

We must generate more excitement about research

When addressing grammar school students on behalf of the Mars Society or the Genetics Society, I typically find that most of them are passively moving towards medicine. This is to be applauded to an extent, and we certainly need more doctors. However, these students attend my talks because they are excited about science in general; they have taken a path towards medicine seemingly by default. We must generate passion for research in young minds.

Research needs to be emphasised as an exciting career path. The frontier of discovery is a wonderful place to live, even if it is not glamorous and is sometimes mundane. Collaborating with others around the globe to solve problems, probe the unknown, and push the boundaries of knowledge is a wonderful way to live your life. Problem-solving and critical thinking need to make a strong comeback in schools where students can now ask ChatGPT to do their homework and Google the answer to everything.

Science bodies need to recruit under-18s

I am amazed at how many of the science societies, including those I belong to, act as if life begins at university. Recruitment takes place at university fairs, and for all of them, one can only become a member at 18. We must start recruiting under-18s rather than just university students.

When I ask science bodies why recruitment starts at 18, the response is usually a blank look, as they have never been asked the question before. What if we allowed students to join at 16 instead? It could revolutionise the pursuit of STEM careers if students could join science and engineering societies before they apply to university.

It could also guide young people into careers in niche sciences rather than relying on chance. For example, many biology students will have little exposure to genetics as a career option in high school, so genetics bodies must hope that students accidentally stumble onto their discipline and decide for themselves on a future there. This is not a winning strategy.

Generate more inspiration

Many schools would love to do more to engage and inspire in STEM, but the staff have neither the time nor the budget to do more. Inspirational programmes have the most potential to radically change a child’s future.

The finest example of such programmes is the International Space School Education Trust’s Mission Discovery programme, which sees 13- to 18-year-olds work in teams alongside astronauts, professors, researchers, and NASA personnel for five days to design an experiment that fits strict parameters. The winning team has their experiment flown to space by SpaceX to be performed on the International Space Station by astronauts during their research rotation. Everything about the programme is life-changing for the students.

I run a programme in my town called ‘Frontiers’, which simply brings inspirational researchers to the schools of Aylesbury to speak to students in the hope of exposing them to the possibility of a STEM future full of excitement and discovery.

If we can pump our schools full of STEM inspiration, then we could see a huge uptick in those who dream of shaping the future of science and technology. Ultimately, young people rely heavily on inspiration for their future. They need their imagination to be captured, as most (especially the boys) are not proactive in thinking about their futures unless prompted. I meet very few dreamers (although I meet more and more ideologues), but the ones I do meet I never forget. When it comes to a brighter STEM future, we shall only reap what we sow.

Combat useless, ideological courses

One outstanding student I met two years ago was on track to study medicine at university off the back of incredible STEM A-level results. She came to a Mars Society event and I asked her about her plans. ‘I’m planning on studying medicine, hopefully at Oxford or Cambridge…’ she started, before shyly adding, ‘but I’m thinking about doing gender studies too.’ This caught me off guard, and I asked her to strongly consider picking medicine.

Sitting with a foot in both the sciences and the humanities, I find the differences starkest at academic conferences. I try to be fair and objective, but while sitting through humanities lectures, I often find myself quietly asking, ‘How does this contribute to knowledge?’

It is more than fair to advise young people to avoid a gender studies degree. Two of the top jobs that its graduates go into are schoolteacher and HR professional, both of which are open to graduates of any other degree. The practical applications of STEM degrees stretch far beyond their discipline, and graduates are highly sought after across the vocational landscape.

Furthermore, many humanities courses are now ideologically laden and far from objective. Someone can earn a PhD in STEM or off the back of an ideological reflection on Shakespeare, and both will end up with the title ‘Doctor’ before their names. One researcher commented to me in June that she felt this cheapened her PhD and left her feeling despondent. We must sell truth to our students, and push them to push themselves, for everyone’s sake.

Show young students an exciting future rather than a depressing one

Recruiting STEM students and avoiding the drift away from these essential disciplines will require building excitement for the new world, rather than anxiety about the current one.

Space entrepreneur and activist Rick Tumlinson has condemned the message so often pushed on young people these days that all they have is a mission to save a planet we have screwed up. Instead, they should be inspired to build an exciting, spacefaring future. I agree. We have sold our students short by not providing an adventurous, inspiring vision. Instead, we present a depressing one.

Young people must hear the rallying call to a glorious future of green, renewable energy, space-age technology, and interplanetary travel. They need to hear about the potential of genomics and genetic engineering for maximising nature’s gifts and the possibilities of personalised medicine. There has never been a better chance than right now for humanity to realise a future brighter than any that has come before—and yet so often our young people are distracted by the flash and dazzle of social media and made anxious by dystopian visions.

By presenting young people with a better vision than the one they are currently being sold, we can retain our brightest minds in STEM and inspire a host of students who may never have conceived of themselves having a future in STEM fields. When we do this, perhaps we will also lose fewer minds to fundamentalism, conspiracy theorising, political nonsense, and fruitless ideologies.

Related reading

80 years on from Schrödinger’s ‘What Is Life?’, philosophy of biology needs rescuing from radicals, by Samuel McKee

From stardust to sentience: How scientific literacy can improve your ability to foster gratitude (3 September 2024)

NASA’s image of the cosmic microwave background, the afterglow of the Big Bang.

It’s not uncommon to hear religious people refer to faith as a source of comfort. In fact, numerous studies in cognitive neuroscience and behavioural psychology have shown that individuals can bias their reasoning enough to accept a religious concept if they believe it will adequately alleviate their negative emotional state.

I, on the other hand, cannot see the utility in false consolation and find the notion of embracing a supernatural belief system simply for its well-being or anxiety management benefits to be regressive and infantilising. Comfort is unreliable if it cannot be justified epistemically.

Instead, when you don’t have to allocate any mental storage space to, or worry about, a celestial dictator or imaginary friend in the sky repressing and micromanaging your every move, you free up a lot of time to get to grips with the true nature of your existence through a scientific lens.

The story of human existence is not just a tale of biological evolution but a series of fortuitous events—from the cosmic lottery that determined the parameters of the universe to the dice rolls of DNA that define our unique identities. In simple terms, life has been evolving on Earth for close to four billion years. During the first two billion years, there were single-celled entities called prokaryotes. Thanks to a chance collision of a bacterium and an archaean, the eukaryotic cell was born.

Eukaryotes were the key ingredient in making possible multicellular life forms of all varieties. In fact, every living thing big enough to be visible to the naked eye is a direct descendant of the original eukaryotic cell.

It is truly fascinating how evolution is typically an interwoven fabric of coevolutionary loops and twists: our origin story is essentially processes composed of processes.

What’s more, the odds of you being born were staggeringly low: every single one of your ancestors had to survive countless challenges, reach reproductive age, and find the particular mate who would give rise to the next generation of your lineage, while every tiny detail had to align perfectly out of some 70 trillion possible combinations of genetic variation.

The chances of the exact sperm cell and egg cell meeting to create you with the DNA sequence that encoded you and brought you into existence? Around one in 250 million. Mutations and meiotic crossovers in the DNA of each of your ancestors also had to occur in just the right way, in an unbroken string, for millions of generations of your ancestors, going back to well before they were human beings or even hominids of any type.
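
To make the compounding arithmetic concrete, here is a minimal, purely illustrative Python sketch. Only the one-in-250-million gamete figure comes from the paragraph above; the per-generation survival odds and the generation count are invented assumptions, and the sums are done in log space simply to keep the numbers representable.

```python
import math

# Figure cited in the text: the chance that the one particular sperm (out of
# roughly 250 million) met the one particular egg that produced you.
p_gamete = 1 / 250_000_000

# Hypothetical stand-in for everything else that had to go right in a single
# generation (surviving to reproductive age, pairing with one particular mate).
# This number is an assumption chosen for illustration, not a measured value.
p_survive_and_pair = 1 / 1_000

# Stand-in for "millions of generations"; kept modest for illustration.
generations = 10_000

# Multiplying thousands of tiny probabilities underflows floating point,
# so sum their base-10 logarithms instead.
log10_per_generation = math.log10(p_gamete) + math.log10(p_survive_and_pair)
log10_total = generations * log10_per_generation

print(f"Per generation: about 1 in 10^{-log10_per_generation:.1f}")
print(f"Across {generations:,} generations: about 1 in 10^{-log10_total:.0f}")
```

Even with these deliberately rough assumptions, the compounded probability collapses to something on the order of one in ten raised to the hundred-thousandth power, which is the sense in which the odds stagger the imagination.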

As Dr A.E. Wilder-Smith notes: ‘When one considers that the entire chemical information needed to construct a human can be compressed into two miniscule reproductive cells (sperm and egg nuclei), one can only be astounded.’ That Wilder-Smith was a young earth creationist does not detract from the genuine wonder of our existence that he so concisely captures.

Other unlikely events necessary for our existence: multicellular life forms had to come into being on Earth, the formation of the stars and galaxies in the Milky Way had to create the environment in which Earth formed, Earth needed to form as a habitable planet with the right ingredients for life, the laws of physics needed to be such that they created the serendipitous density conditions to permit life, and the universe itself had to have come to exist 13.8 billion years ago in a hot, dense Big Bang that made all this possible.

How could one not be grateful? How could one not live in an eternal state of astonishment and bewilderment at one’s very own existence and consciousness?

In addition, the Buddhist concept of ‘interbeing’ encourages us to see ourselves not as isolated, static individuals, but as permeable and interwoven selves within larger selves, including the species self (humanity) and the biospheric self (all life).

For instance, you are not one life form. Your mouth alone contains more than seven hundred distinct kinds of bacteria. Your skin and eyelashes are equally laden with microbes, and your gut houses a similar bevy of bacterial sidekicks. All in all, the human body possesses trillions of bacterial cells in addition to trillions of human cells: your body is home to many more life forms than there are people presently living on Earth; more, even, than there are stars in the Milky Way galaxy.

Energised by sunlight, life converts inanimate rock into nutrients, which then pass through plants, herbivores, and carnivores before being decomposed and retired to the inanimate earth, beginning the cycle anew. Our internal metabolisms are intimately interwoven with this earthly metabolism; one result is that many of the atoms in our bodies are replaced several times during our lives.

Owing to all this, each of us is a walking colony of trillions of largely symbiotic life forms—we are akin to a brief, ever-shifting concentration of energy in a vast ancient river that has been flowing for billions of years.

There is truly so much solace to be found in knowing and understanding the evolutionary processes behind our existence, as well as in the idea of interbeing, which reminds us that we are not outside or above nature, but fully enmeshed within it.

I carry these scientific ideas with me through every moment of every day because they foster an overwhelming sense of gratitude within me. The improbability of any one of us being here is so astronomical that it staggers the imagination. Above all else, it invites us to explore the laws of nature and the essence of what it means to be alive. From simple organic molecules to the first replicating cells, the sheer wonder of our existence ought to create a rich appreciation and sense of gratitude for the tapestry of life.

80 years on from Schrödinger’s ‘What Is Life?’, philosophy of biology needs rescuing from radicals (26 August 2024)

What Is Life? by Erwin Schrödinger was a seminal work in philosophy of biology. Published in 1944, this modern classic served as inspiration for Francis Crick and James Watson to pursue the structure of DNA. Numerous important scientists of the molecular biology revolution of the 1950s referenced it as inspirational to them in tackling the biological challenges of the time. The middle of the twentieth century was throbbing with new possibilities, and the big philosophical questions that accompanied them were embraced. What Is Life? was a benchmark for philosophy of science, and it came from one of a generation of great scientific thinkers who essentially founded the discipline, including Einstein, Eddington, Jeans, and Heisenberg.

However, shortly after this blooming, we find Richard Feynman expressing his contempt for philosophy of science as a discipline that had become dominated by non-scientists, a dismissal that rings even truer today.

What Is Life? is a fine example of philosophy meeting biology and producing quality interdisciplinary work. The molecular biology revolution was a practical outworking of the philosophical appetite of the time, well expressed by Schrödinger. Unless one credits Aristotle et al, philosophy of science as an academic discipline is little more than a century old. It aims to critically analyse and reflect on what science is, how it works, and the questions it asks. It puts another eye on the game, distinct from its partners, history of science and sociology of science. (I admit that I care little for sociology of science, which treats science more as a human activity than as a way of discovering actual reality ‘out there’ in nature.)

This brings me to Feynman’s distaste for the discipline. By the time he won his Nobel in 1965, it would be fair to say that philosophy of science was no longer the same field as it had been: now, it was full of musings from those in the humanities. ‘How dare these outsiders tell us what is really going on?’ was the attitude of those in Feynman’s camp. How can those who have not done any science critique what is being done by scientists, analyse what is wrong with it, and discuss its true value? It all added up to a waste of time and money for Feynman.


Fast forward to our modern challenges. Recently, I was shocked to read that the controversial postmodern ideologue Judith Butler is considered a philosopher of biology. Butler’s writings on gender, for those unfamiliar with them, have been fairly criticised for so severely underplaying the biological component of gender that you would be forgiven for thinking that biology plays no part in our development. She was also the winner of the 1998 ‘Bad Writing Contest’, hosted by the journal Philosophy and Literature, for an incomprehensible sentence in one of her articles.1

But the characterisation of Butler as a philosopher of biology is not (entirely) wrong. If an informed layperson were asked to name one modern philosopher of biology, they might well name Butler. But she is as far removed as anyone in academia could be from the sciences. Her education is in literature, feminism, and philosophy, and she owes more to Hegel than to Schrödinger. Someone who believes that sex is assigned at birth rather than observed by a doctor has no business in philosophy of biology. (Other feminists who might be considered philosophers of biology, like the excellent Kathleen Stock, are much more worth listening to.)

There is a straight line to be drawn from Butler’s work to those in the humanities who now claim that biological sex itself does not exist—as anti-scientific a statement as one will ever hear. Eighty years on from What Is Life?, Feynman’s contempt is even more justified. Now, so-called experts from the humanities dominate philosophy of science and are boosting the cause of science denialism more than anyone else even as they claim to be working against it.

It is time to return philosophy of science to those who respect both philosophy of science and science itself. To be clear, there are a great many philosophers of science today who are intimately acquainted with the realities of science and who are doing first-class work in legitimate journals. They treat the discipline with the original, historical respect that it deserves. Their work would not be out of place amongst the reflections written by the giants of the field’s golden era. In other words, there is a universe of difference between Judith Butler and someone like Michael Ruse. As a whole, though, the field is in dire straits.

Let me share a personal anecdote to illustrate what is missing in philosophy of biology. Three years ago, during an interview for a PhD position at a major British university, my surprised potential supervisor halted the interview about thirty seconds in. ‘Hold on, it says here that you have an actual science degree?’ I could not hide my own surprise and affirmed that that was correct. She then put down her papers and remarked, ‘Well, that is a tremendous advantage for you. Do you have ambitions of working in academia?’ I told her that that was very much my dream, but I failed to hide the surprise still written across my face. The interview shifted and I became the one asking questions: Did most professional philosophers of science not have a science background? Apparently not—and it seemed that if I wanted to get ahead, I was in possession of a serious advantage.

One does not have to be a scientist to be a historian or philosopher of science, but today as much as ever, scientists are needed to contribute to philosophy of science. Otherwise, those who have no business speaking in the name of science are gifted large platforms to mislead a general public that might not know any better. This is also why skilled science communicators are so desperately needed.


When one looks at the great names who gave the twentieth-century Gifford Lectures or the Eddington Memorial Lectures, one sees an extraordinary array. There are prize winners and legends from some of science’s golden eras. They each speak philosophically about the questions or meanings of their work or reflect on what the future may hold. This tradition survives today, among the likes of Lord Martin Rees and Sir Roger Penrose, but it is attenuated. Rees and Penrose certainly deserve a greater platform than Butler et al.

Perhaps the problem is one of space as much as communication skills. The platforms for great philosophers of science, or scientist-philosophers, are not proportionate to their contributions. Education is also lacking, as undergraduate philosophy programmes hardly ever feature philosophy of science in their curricula. Unfortunately, I have also sat through many philosophy lectures that are anti-scientific, pseudoscientific, or just plain mistaken. In continental philosophy and postmodern departments, science is very much the enemy: it is treated as an oppressive force invented by white Western men to establish the patriarchy, unable to make any claims to truth or reality. This is a very serious problem at a time when science needs to be understood more than ever.

The solution may well be to give scientists a larger platform in society. Post-pandemic, distrust in science (and expertise in general) has increased, even as science—from developments in artificial intelligence to revolutions in cancer research—has become more important than ever for our present and future. Each Sunday morning as I drive through Beaconsfield’s old town on the outskirts of London, I see protestors clad in white inviting us to toot our car horns if we agree with them on the latest conspiratorial, anti-science propaganda they are spouting. At first it was amusing, but now it simply depresses me.

In the space of eighty years, and to an alarming extent, philosophy of science has degenerated. Once, it inspired revolutionary breakthroughs. Now, it is dominated by thinkers who have not seen a laboratory or an observatory since high school and who use their platforms to disseminate nonsense. They have helped to usher in a conspiratorial atmosphere laced with distrust of and cynicism about the sciences. To combat them, we need to recover our sense about science.


  1. Take a deep breath:

    ‘The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.’

Related reading

On sex, gender and their consequences: interview with Louise Antony, by Emma Park

‘We are at a threshold right now’: Lawrence Krauss on science, atheism, religion, and the crisis of ‘wokeism’ in science, by Daniel James Sharp

Consciousness, free will and meaning in a Darwinian universe: interview with Daniel C. Dennett, by Daniel James Sharp

‘When the chips are down, the philosophers turn out to have been bluffing’, interview with Alex Byrne by Emma Park

‘A godless neo-religion’ – interview with Helen Joyce on the trans debate, by Emma Park

‘An animal is a description of ancient worlds’: interview with Richard Dawkins, by Emma Park

The falsehood at the heart of the trans movement, by Eliza Mondegreen

South Asia’s silenced feminists, by Kunwar Khuldune Shahid

Linnaeus, Buffon, and the battle for biology, by Charles Foster

Is ‘intelligent design’ on the cusp of overthrowing evolutionary science? (20 August 2024)

Image by Dave Souza. CC BY-SA 2.5.

For many of us working in science, philosophy, and education (or a combination thereof), ‘intelligent design’ (ID)—the pseudoscientific theory that purports to be an alternative to evolutionary science and which has been unkindly but not unfairly described as ‘creationism in a cheap tuxedo’—has been out of the picture for the better part of the two decades since the Kitzmiller v. Dover trial in 2005 ruled that it was not science and thus could not be taught in US high school biology classes. In short, ID was of historical interest but not to be taken seriously.

Recently, however, its champions have been making noise and turning heads. Stephen Meyer, one of the original faces of ID, has been featured on The Joe Rogan Experience and Piers Morgan Uncensored and has even dialogued with Michael Shermer. ID advocates have seized on all this new publicity as proof that ID has never been more relevant. And that is the goal of the movement: to fight for relevance and be taken seriously by academics and those in authority.

Everyone who has ever met Dr Meyer says he is a warm, genuine, nice man. He may be mistaken in his views, but he is respectful and courteous to those who disagree with him (something that cannot often be said for the movement as a whole). With Meyer’s newest book bringing the claims of ID more media attention, this May saw the return of the claim that ID is leading a revolution in biology. On an episode of Justin Brierley’s podcast, Meyer repeated the long-since-debunked claim that ID had successfully predicted that junk DNA was not junk at all and argued that this gives further credence to ID’s capabilities as a scientific theory. Listen to the key figures from the Discovery Institute (the ID-promoting think tank which Meyer helped to found) and they will tell you that scientists are leaving evolution behind in droves—and that the honest ones are beginning to catch up with what ID has been saying all along.

If one did not know better—and many podcast listeners and church attendees may not—one would think a scientific revolution was underway à la Thomas Kuhn. But this is far from the case. The halls of biology remain silent on intelligent design. There is nothing to see here at all. The ID revolution is complete fiction.

When I was an undergraduate at Cambridge, ID was mentioned twice. The first time was during an evolutionary biology lecture when the bacterial flagellum came up in discussion, and our lecturer asked if anyone knew why it had gained political fame (I knew the answer—Kitzmiller v. Dover1—but only one other student had even heard of ID). The second time was in a genetics class. We were observing what one might call ‘bad design’ and my professor remarked that if nature had a designer, this was a case where they had done a poor job. That was it—in my postgraduate studies, it never came up at all.

In June, I asked a friend of mine, a world-class virologist, if—whether in her student or professional career—ID had ever been mentioned. Perplexed, she said not once. And this is the case with everyone I have ever asked. ID is not leading any revolution, and no scientist or academic I have ever met can ever recall it coming up. It could not be less relevant right now.


But two years ago, in a debate during an episode of Premier Unbelievable? entitled ‘Is Intelligent Design advancing or in retreat?’, the Discovery Institute’s Casey Luskin claimed that the movement was advancing, gaining new converts, and had never been stronger. When marshalling his evidence, he mentioned a conference on evolutionary genetics in Israel that he had attended. My first thought was ‘Why is a geologist attending an evolutionary genetics conference?’ My second thought was that one of my best friends had been there, so I asked him about it. His response was that ID was not mentioned once during the event.

Something is amiss here. The data does not add up. On the one hand, ID proponents claim that biology is about to overthrow Darwinian evolution and that scientists are turning to ID by the truckload. But in science itself, it is very much business as usual, and not a sound about ID is to be heard.

But it is actually worse than that for ID, because in biology an actual revolution is underway. The biological sciences are arguably seeing the greatest boom period since the molecular biology revolution of the 1950s. Consider that since CRISPR-Cas9 burst onto the scene in 2011, genetic engineering has undergone an extraordinary transformation, becoming cheap, easy, and accurate. Even since the Nobel Prize was awarded to Jennifer A. Doudna and Emmanuelle Charpentier in 2020 for discovering this revolutionary gene-editing tool, base and prime editing have made it even more precise.

Then there is cancer immunotherapy, which since 2018 has transformed the landscape of oncological treatment. It may well be the future of all cancer treatment as soon as costs come down, which is inevitable. Meanwhile, molecular biology has been transformed again by AI, with the CASP competition to build systems that accurately predict protein structure from sequence data alone seeing extraordinary success from AI-based entries. AlphaFold brought the accuracy of predictive results in line with experimental data from x-ray crystallography and cryo-EM, and AlphaFold3, along with its competitors, is set to transform the prospects of personalised medicine.

Gene sequencing itself has become quicker, cheaper, and more accessible every year since the completion of the Human Genome Project in 2003. In fact, in 2016, NASA astronaut Kate Rubins sequenced the genome of microorganisms on the International Space Station using a handheld device, showing just how far the field had come. Genomic analysis can now be done cheaply in the field, with a handheld sequencer, on any organism, and the results can be expected back in mere hours.

And what is the word from ID proponents on these extraordinary developments in the landscape of biology? Nothing. The Discovery Institute promises transformational research (yet has no laboratories), boasts predictive science (yet has published none), and claims that science is now looking to design and purposiveness as explanations for biological phenomena instead of leaning on evolution—but seems not even to be aware of what is actually happening in biology during one of its most transformational periods ever.


This is not tremendously surprising, given that a glance at the Discovery Institute’s list of fellows shows that it is full of philosophers and theologians. Even among its scientists, some have published little or nothing. On the whole, ID is dominated by people who have never done a day’s scientific work in their lives. They speak of the ‘scientific community’ but rarely engage with scientists in world-class research. This is, sadly, only the tip of the iceberg, as ID continues to overstate its credentials and overplay its hand.

Do they have a case when they call evolution a theory in crisis? Not in the least. ID proponents could not be more mistaken in articulating evolutionary biology as a science that is rigid and stale. When I was an undergraduate, modern evolutionary biology was described to me as the ‘fastest growing science’ due to the advances in genomics and molecular biology and their associated data revolutions. It is very much a predictive science today, testable and measurable.

Besides, convergent evolution (something ID never talks about) is very much a grand unifying theory in biology, applicable to everything from genetics to medicine at every level. It is still as Theodosius Dobzhansky said over half a century ago: ‘Nothing in biology makes sense except in the light of evolution.’ Dobzhansky himself was a Greek Orthodox Christian, but because he was an evolutionist, ID advocates pay little attention to him. Dobzhansky is just another uncomfortable obstacle passed over in silence by ID proponents.

It is interesting to track those whom ID proponents acknowledge and those whom they do not. ID arguments have not changed since the 1990s, no matter how many times they are debunked or how little serious attention they get. This is partly due to the distance between ID and real science, and partly due to ignorance. When proponents discuss ID on any platform, the same names are always trumpeted, as if being read from a script: first, the latest scientist with any publications who has joined their ranks (Günter Bechly will be name-dropped a thousand times), then Thomas Nagel, Antony Flew, and anyone else remotely famous who has ever said anything nice about ID in the past 30 years. In doing so, ID advocates reveal that their arguments are empty veneers, utterly lacking in real substance. If the Discovery Institute were actually putting out research, had an active programme, or were doing real science, then these appeals to authority would not be necessary.

As someone with a doctorate in philosophy of science, Meyer is certain to know the demarcation criteria which mark the scientific off from the non-scientific. Thus, great pains are taken by ID advocates to talk about active research and publication in journals (even if it is their own in-house journal BIO-Complexity, which they eagerly pitch to outsiders to publish in) while holding conferences and collaborating wherever possible. Falsifiability remains a problem, however. Whenever the latest example of ‘irreducible complexity’ is knocked down, as their old favourite the bacterial flagellum was, advocates do not seem persuaded of the falsehood of ID. Put simply, ID is not scientific.

There is also the matter of picking their battles. There is a reason why Meyer would talk to Michael Shermer: he is not a biologist, and he has a large platform. When asked why serious scientists don’t engage with their ideas, the responses usually boil down to conspiracy theories. ‘They’, the ‘scientific community’, are locked into ‘Neo-Darwinism’, and the powers that be, with their materialist agenda, are controlling science to their own nefarious ends: those masters at the gates control education and are trying to keep their dogmas alive. In reaching for conspiracy, ID evinces the dogmatism it accuses others of holding.


Recently the Discovery Institute opened a brand-new site outside Cambridge. When I discussed this with a biology professor at the university, he laughed and noted that businesses often try to build their sites in the area to benefit from the association with the university and its heritage. Artificially manufacturing illusory respectability by becoming a ‘Cambridge’ institute is further evidence of what is really going on here. If ID were really leading a revolution in biology, then none of this would be necessary. I have spoken with several religious scientists in Cambridge who are less than thrilled at ID moving into the neighbourhood. For them, it only makes their lives as researchers harder.

The picture is even uglier than this. In Hinxton, near Cambridge, you have the Wellcome Sanger Institute, home of the UK arm of the Human Genome Project. Next door is the European Bioinformatics Institute, host of the AlphaFold protein structure database and the finest institute of its kind on the planet. Over the motorway is the MRC Laboratory of Molecular Biology. All of these house dozens of Nobel laureates and pump out thousands of publications in major journals every month. And this is not to mention every other university-affiliated centre in the city, all home to scientists young and old who are changing the world. The Discovery Institute’s new site, rather than giving off an air of world-class credibility, exposes the contrast between ID and real science. Surrounded by giants, ID is revealed to be a pygmy.

To sum up: intelligent design has nothing to say about modern biology, and modern biology is certainly not talking about intelligent design.


  1. The bacterial flagellum was cited in the trial as something ‘irreducibly complex’ and therefore in need of the ‘intelligent designer’ hypothesis to be explained. For more on ‘irreducible complexity’ and the bacterial flagellum, see Kenneth Miller’s detailed refutation of this old argument, once a favourite of the ID crowd before it was exposed by Miller during Kitzmiller v. Dover.

Related reading

Linnaeus, Buffon, and the battle for biology, by Charles Foster

The Highbrow Caveman: Why ‘high’ culture is atavistic, by Charles Foster

Consciousness, free will and meaning in a Darwinian universe: interview with Daniel C. Dennett, by Daniel James Sharp

‘An animal is a description of ancient worlds’: interview with Richard Dawkins, by Emma Park

What I believe: Interview with Andrew Copson, by Emma Park

Bad Religious Education, by Siniša Prijić

White Christian Nationalism is rising in America. Separation of church and state is the antidote. By Rachel Laser

The Galileo of Pakistan? Interview with Professor Sher Ali (6 August 2024)

Professor Sher Ali. Photo by Ehtesham Hassan.

Introduction

In October 2023, a rather bizarre piece of news from Pakistan made national and international headlines: a professor was forced by clerics to apologise for teaching the theory of evolution and for demanding basic human freedoms for women. Professor Sher Ali lives in Bannu, a Pashtun-majority conservative city in the Khyber Pakhtunkhwa province of Pakistan; many of its nearby villages are under Taliban control. Wanting to know more about this man standing up to the darkness in such a remote corner, I interviewed Sher Ali at the academy where he gives tuition to intermediate-level students. He is a well-read and humble person and provided much insight during our interview, a translated and edited transcript of which is below. I hope that the example of this brave and good man inspires others in Pakistan to embrace enlightenment over dogma.

Interview

Ehtesham Hassan: Please tell us about yourself. Who is Professor Sher Ali?

Sher Ali: I come from a small village in the area of Domel near the mountains. It borders the Waziristan District, not far from the Afghanistan border. My village is a very remote area and lacks basic facilities even today. In my childhood, we travelled for kilometres and used animals to bring clean drinking water to the village.

I started my educational journey in a school in a hut. In those days there was no electricity available so we would use kerosene oil lanterns to study at night. Luckily two of my uncles ran their schools in the village so I studied there. Both of them were very honest and hardworking. My elder brother would give us home tuition. After primary education, we had to go to a nearby village for further schooling. We would walk daily for kilometres to get to the school. We are four brothers and all of us are night-blind so we were not able to see the blackboard in the school. We would only rely on the teacher’s voice to learn our lessons and we had to write every word we heard from the teacher to make sense of the lessons. This helped sharpen our memories.

My grandfather was a religious cleric and he wanted me to be one also and I was admitted to a madrasa for this purpose. Life in the madrasa was really bad. I had to go door to door in the neighbourhood to collect alms for dinner. Another very disturbing issue was sexual abuse. Many of my classmates were victims of sexual abuse by our teacher. This was very traumatic to witness, so I refused to go to the seminary again.

After completing high school, I came to the city of Bannu for my intermediate and bachelor’s degree at Government Degree College Bannu. For my master’s in zoology, I went to Peshawar University and I later did my MPhil in the same subject from Quaid e Azam University, Islamabad. In 2009 I secured a permanent job as a zoology lecturer and was posted in Mir Ali, Waziristan, where I taught for almost 13 years.

Can you please share your journey of enlightenment?

I come from a very religious society and family. I was extremely religious in my childhood. I would recite the Holy Quran for hours without understanding a word of it. I had memorised all the Muslim prayers and was more capable in this than the other kids. This gave me a good social standing among them.

When I started studying at the University of Peshawar, I visited the library regularly and started looking to read new books. I found a book about Abraham Lincoln which was very inspiring. Later, I read books on psychology and philosophy which gave me new perspectives. But even after reading such books, I was extremely religious. One thing I want to mention is that after the September 11 attacks in the US, I was even willing to go to Afghanistan for Jihad against the infidels.

During my studies in Islamabad, I met Dr Akif Khan. He used to discuss various ideas with me and he introduced me to new books and authors. He also added me to many freethinker groups on Facebook. In these groups, I met many Pakistani liberal and progressive thinkers and I regularly read their posts on the situation of our country. This had a substantial impact on my thinking. I started hating religious extremism and I even stopped practicing religion. This change enabled me to see that the Pakistani military establishment and clergy were responsible for the bad situation in my region.

In those days, I also read On the Origin of Species by Charles Darwin, which helped me deeply understand the idea of evolution and natural selection as opposed to creationism. I became tolerant and I started believing in pluralism. I began to realise that tolerance for opposing views is very important for the intellectual nourishment of any society. I changed my views from being based on religion to those based on scientific evidence. Any idea not backed by scientific evidence lost its charm for me.

What were the hurdles and obstacles you faced when you started preaching a rationalist worldview?

In 2014 I started a tuition academy where I was teaching the subject of biology to intermediate-level students. My way of teaching is very simple and interesting. I try to break down complex ideas and try to teach the students in their mother tongue, which is Pashto. Gradually my impact increased as more and more students started enrolling in my class. Students were amazed by the simplicity of scientific knowledge and they started asking questions from their families about human origins and the contradictions between religious views and the facts established by evolutionary science.

This caused an uproar and I started receiving threatening letters from the Taliban. On the fateful day of 19 May 2022, I was travelling back from my college in Mir Ali to my home in Bannu when a bomb that had been fitted under my car went off. It was a terrible incident. I lost my left leg and was in trauma care for months. But finally, after six months, I recovered enough to start teaching again. I wanted to continue my mission because education is the best way to fight the darkness.

Could you tell us about the controversy over your teaching last year?

In September 2023, local mullahs and Taliban in Domel Bazar announced that women would not be allowed to come out in the markets and the public square. This was a shocking development. I was worried about the future of my village and surrounding areas if such things kept happening.

I, along with some like-minded friends and students, decided to conduct a seminar about the importance of women’s empowerment. In that seminar, I made a speech and criticised the decision to ban women from the public square. I also criticised the concept of the burqa and how it hides women’s identity. I talked about the freedom of women in other Islamic countries like Turkey and Egypt. I clearly stated that banning any individual from the right of movement is a violation of fundamental rights enshrined in the constitution of Pakistan.

This speech sent shockwaves through Taliban and mullahs alike. Local mullahs started a hate/smear campaign against me. They started naming me in all their sermons and a coordinated social boycott campaign was launched against me. My father is 90 years old and he was really worried. My elder brother and my family were also being pressured. It was a very tough time for me. I feared for my family’s safety.

Ten days later, the local administration and police contacted me about this issue. They wanted to resolve the issue peacefully, so I cooperated with them and in the presence of a District Police Officer and more than 20 mullahs, I signed a peace agreement saying that I apologised if any of my words had hurt anyone’s sentiments. The mullahs then agreed to stop the hate campaign against me. But later that night, around midnight, I received a call from the Deputy Commissioner telling me that the mullahs had gone back on the agreement and were trying to legally tangle me using Pakistan’s notorious blasphemy laws.

I was advised to leave the city immediately, but I refused to leave my residence. The district administration then provided me with security personnel to guard me. During this period, I met many religious leaders who I thought were moderate and many promised to stand with me. A week later, I received a call from the ISI, the Pakistani intelligence agency, telling me that a major wanted to meet me.

Since I was vocally opposed to the military establishment on social media, I feared that they might abduct me, but I still went to the cantonment to meet the intelligence officials. They talked about the situation and how to resolve it. The ISI asked the mullahs to stop the campaign against me. I had to apologise again in the Deputy Commissioner’s office in the presence of the mullahs to save my and my family’s lives and the photo of the event went viral on the internet.

After that incident, I changed my approach. Now, I don’t want to attract any attention for some time and I am waiting for the dust to settle. Currently, I see many horrible things happening in my city, but I can’t speak a word about them.

Sher Ali being made to apologise in the presence of the mullahs. Photo from Dawn e-paper.

Please share your thoughts about rationalist activism in Pakistan.

A long time ago I made a Facebook post in which I called Pakistani liberal intellectuals ‘touch me not intellectuals’. They block anyone who even slightly disagrees with them. On the other hand, I have added all the religious people from my village on Facebook so that I can present them with an alternative. I sit with the youth of my village. I talk to them. In their language, I give them examples of the problems with religious ideas and military establishments. I support people in different ways. I give free tuition to poor kids and those from religious seminaries. I give small loans to poor people. I let people use my car in emergencies.

In these ways, I am deeply embedded in this society. Many people love me and stand for me and therefore acceptance of my ideas has increased over time. Most young people in my village are now supporters of women’s education and they do not get lured by the bait of Islamic Jihad.

This change, to me, is huge. Don’t alienate and hate people. Own them. Hug them and in simple language, by giving examples from daily life, tell them the truth. People are not stupid. Education and the internet are changing things.

Some people have compared what happened to you with what happened to Galileo. What are your thoughts on that comparison?

There are many similarities. One is the battle between dogma and reason, between religion and scientific evidence. One group believed in the freedom of expression and the other believed in stifling freedom of expression. In both cases, the rationalist had to face a large number of religious people alone. Galileo’s heliocentrism wasn’t a new thing at that time. He developed it by studying previous scientific thinkers. What I teach about evolution isn’t a new thing either. I just studied scientific history and now I am telling it to new generations.

However, there are many differences between the situations. Galileo was a scientist for all practical purposes. He built his own telescopes, too, while I am an ordinary science teacher. Galileo’s case was purely scientific but mine is social and scientific. I spoke about women’s empowerment. The last main difference is that many hundreds of years ago, the Church had little access to the world of knowledge, while today’s mullahs have access to the internet, so ignorance is not an excuse for them.

Related reading

How the persecution of Ahmadis undermines democracy in Pakistan, by Ayaz Brohi

From the streets to social change: examining the evolution of Pakistan’s Aurat March, by Tehreem Azeem

Surviving Ramadan: An ex-Muslim’s journey in Pakistan’s religious landscape, by Azad

Coerced faith: the battle against forced conversions in Pakistan’s Dalit community, by Shaukat Korai

Breaking the silence: Pakistani ex-Muslims find a voice on social media, by Tehreem Azeem

The power of outrage, by Tehreem Azeem

The Enlightenment paradox: review of ‘Dark Brilliance’ by Paul Strathern (30 July 2024)


The seventeenth century did not get off to a great start in Europe. Religious conflict still simmered, and in 1618, the continent became embroiled in the bloodiest and most destructive war it would suffer before the two World Wars. The Netherlands was fighting for its independence. In Britain, the dispute between King and Parliament led to wars costing hundreds of thousands of lives in the 1640s and 1650s. Scientific progress faced massive barriers. Galileo was condemned by the Roman Inquisition in 1633 for arguing that the Earth orbited the Sun and not the reverse, as Aristotle and generations of his followers had maintained. Across the continent, people remained poor, ignorant, oppressed, and victims of seemingly continuous violence.

Yet, by the end of the century, the religious wars were over, Europe had modern astronomy and physics, the Dutch had created the corporation and the stock exchange, England had established parliamentary government, and books calling for freedom of religion were openly being published and distributed. ‘In 1700 the mental outlook of educated men was completely modern; in 1600, except among a very few, it was largely medieval,’ wrote Bertrand Russell in his A History of Western Philosophy.

This shift in mindset, from the medieval to the modern, is the subject of Paul Strathern’s Dark Brilliance: The Age of Reason From Descartes to Peter the Great. Strathern covers the major figures and events of the era, painting a sweeping picture of the century and the monumental changes it brought to intellectual and cultural life in Europe. Dark Brilliance has remarkable breadth, touching on every field of knowledge from calculus to cooking. It includes the microscope and telescope, probability and statistics, gravity and motion, the Golden Age in the Netherlands and the Glorious Revolution in Britain. We meet—as we expect—figures such as Spinoza, Locke, Leibniz, and Newton. But Strathern pays far more attention to culture and the arts than most other writers on the Enlightenment. He also breaks down the contrast between reason and unreason running through the seventeenth century; this is the ‘Dark’ of the book’s title.

The Culture of Enlightenment

As he promises in the subtitle, Strathern begins Dark Brilliance with René Descartes, as he is developing his new philosophy in a bucolic winter scene in a Bavarian village. From Descartes, he makes an unexpected jump to the Italian painter Michelangelo Merisi da Caravaggio (1571-1610). Caravaggio would not normally feature in a book on the Age of Reason. He lived in Italy, which had been the unquestioned centre of Europe during the Renaissance but was falling into the shadows of the Netherlands and France in the seventeenth century. For all their wealth and splendour, Rome and Florence never became centres of the Enlightenment in the way that Paris, Amsterdam, Edinburgh, or London did. Not only that, Caravaggio died before the Age of Reason really began.

Still, Strathern argues that Caravaggio’s painting was a leap forward from the past, just like the works of the Enlightenment thinkers. His painting showed more depth, photorealism, and understanding of scientific topics such as anatomy and optics than the Italian Renaissance masters who preceded him. And they, in turn, painted far more lifelike scenes than medieval European artists. Like the Renaissance artists, Caravaggio drew on classical as well as Biblical inspiration, although he painted with more drama and energy. Strathern highlights, in particular, Caravaggio’s Judith Beheading Holofernes, where he painted a scene from the Bible, a conventional subject, but presented it in a way that was unconventionally violent, visceral, and shocking. Compare the painting with medieval European art, which was often without passion; even people suffering violent deaths can look only bored or vaguely annoyed.

Judith Beheading Holofernes

This focus on culture is an original approach, but one which makes sense. Culture reflects society, and we can see the ideas of the Enlightenment reflected in the art of the Baroque artists. But it has limitations, and centres of culture and art were not always centres of learning, science, philosophy, or law. There was no Florentine Newton or Milanese Spinoza.

The splendour of the court of Louis XIV made France the cultural centre of Europe—even today fields like cooking and fashion are speckled with French words and phrases—but the French Enlightenment only really took off after the Sun King’s death. Strathern could perhaps have explored this further.

Reason and Unreason

The other theme of Dark Brilliance is, as the title itself illustrates, the paradoxes of the Enlightenment. To Strathern, the seventeenth century was the Age of Reason and Unreason. As he points out in the introduction, the achievements of the Enlightenment ‘took place against a background of extreme political turbulence and irrational behaviour on a continental scale,’ from frenzied persecutions of supposed witches to the horrors of the transatlantic slave trade.

The developers of the telescope and the microscope were achieving steadily higher levels of magnification and bringing more and more of the hidden universe into view even as Catholics and Protestants killed each other by the tens of thousands. In the first chapter of Dark Brilliance, René Descartes invents his new philosophy while in the winter quarters of the Bavarian army during the Thirty Years War. The metastatic growth of the slave trade provides another example of how the irrational and inhumane could easily grow alongside the ideals of the Enlightenment. ‘…[I]n the Age of Reason, it was slavery that produced the capital which led to the progress of western European civilization, laying the foundations upon which its empires were built,’ Strathern writes. ‘At the same time, it also prompted a few rare spirits such as Montaigne to recognize the contagious barbarity of all who took part in it—to say nothing of the absurdity of its claims regarding racial superiority’. Man’s expanding knowledge did not seem to lessen his brutality—at least not yet.

The greatest paradox of the Enlightenment was, arguably, the French Revolution itself, which led to mass killing, the establishment of a dictatorship, and a new ‘rational’ religion in the name of Enlightenment values and freeing the people of France from the oppression of monarchy, aristocratic privilege, and a corrupt and reactionary Church. As he finishes his account at the start of the eighteenth century, Strathern doesn’t cover the French Revolution, although the theme of paradox runs through the book.

Conclusions

Why should we care about the Enlightenment? Because we live in a world shaped by it, and while we enjoy its benefits, we should also be aware (and beware) of its lessons. At the start of Dark Brilliance, Strathern asks if human progress will end up destroying the civilisation it helped to create. We face a range of threats, including climate change, enabled by the scientific progress and material wealth which has made our lives so much better. At the end of the book, he has not yet answered his own question, although he concludes that ‘paradoxically, the answer would appear to be progress itself’. Admittedly, it’s hard to see what other conclusion anyone could reach. There are calls today from the far left and far right of the political spectrum to dismantle the modern economy and modern society and revert to some pre-modern ideal. But this ideal is, in all cases, more mythical than real.

Strathern chooses to tell his overall story as a collection of colourful little biographies. This is an accessible approach and makes the book engaging for a general audience. Anyone who reads Dark Brilliance will reach the end with a much better understanding of not just the Enlightenment but life in seventeenth-century Europe in general. As someone who has read and written much about the subject, I still found Strathern’s account of the development of Baroque painting entirely new.

I was left feeling that some of the threads remained loose, particularly on the impact of the Enlightenment and the paradox of reason coexisting with unreason. But as a panorama of seventeenth-century Europe, Dark Brilliance is both an impressive and very readable book.

Related reading

The Enlightenment and the making of modernity, by Piers Benn

Do we need God to defend civilisation? by Adam Wakeling

What has Christianity to do with Western values? by Nick Cohen

How three media revolutions transformed the history of atheism, by Nathan Alexander

The need for a new Enlightenment, by Christopher Hitchens

Linnaeus, Buffon, and the battle for biology (28 June 2024)

Review of Every Living Thing: The Great and Deadly Race to Know All Life* by Jason Roberts, Riverrun, 2024.

‘God Himself guided him’, it was said of the famous Swedish taxonomist, Carl Linnaeus (1707-1778). ‘God has given him the greatest insight into natural history, greater than anyone else has enjoyed. God has been with him, wherever he has gone…and made him a great name, as great as those of the greatest men on earth.’ ‘Nobody has been a greater biologist or zoologist’, gushed a contemporary admirer. And when Linnaeus wrote a medical treatise, a reviewer observed that ‘We may justifiably assert that no one who has studied medicine, pharmacy or surgery can do without it; indeed that it cannot be but of use and pleasure to the most learned medical men.’

Linnaeus himself was the anonymous author of these and many other plaudits. He was, simply, an appalling person. Confident that he would never be contradicted, he embellished his field notes, making his journeying sound far more epic than it was. Scrambling up the greasy pole of academic preferment, he lied about his academic collaborations and surrounded himself with sycophants. He lent apparent scientific credibility to racism, declaring that there were different races of Homo sapiens, with fixed attributes. Homo sapiens europaeus, he announced, was inherently ‘governed by laws’, unlike the African subspecies, Homo sapiens afer, which was ‘governed by whim’ and was ‘sly, slow [and] careless’. He was a chauvinist at home, believing his own daughters unworthy of education, and an unabashed nepotist who arranged for his 22-year-old son, who had no degree and no love of botany, to be appointed adjunct professor of botany (on his father’s death, he became a full professor).

He was also wrong: emphatically, repercussively, corrosively wrong about the natural world.

There were, he thought towards the end of his working life, 40,000 species: 20,000 vegetables, 3,000 worms, 12,000 insects, 200 amphibious animals, 2,600 fish, and 200 quadrupeds. It is now estimated that there might be up to a trillion species, and even lower estimates have the number in the millions or hundreds of millions.

For Linnaeus, species were fixed and had been since the time of the Biblical creation. It was sacrilegious to think otherwise. ‘We reckon the number of species as the number of different forms that were created in the beginning. . . . That new species can come to exist in vegetables is disproved by continued generations, propagation, daily observations, and the cotyledons. . .’

‘Odious, dishonest, bigoted, and mistaken.’ Portrait of Carl Linnaeus by Alexander Roslin, 1775.

Odious, dishonest, bigoted, and mistaken. Yet Linnaeus is the only taxonomist of whom most have heard. His method of denoting species, by reference first to the genus (such as Homo), is ubiquitous. And so is his assumption that nature can and should be corralled into synthetic conceptual structures.

He had an exact contemporary, an almost forgotten Frenchman, Georges-Louis Leclerc, Comte de Buffon (1707-1788), who shared Linnaeus’ project of producing a comprehensive account of life on earth but shared few of Linnaeus’ vanities, moral failings, or biological errors.

Buffon was contemptuous of self-seekers, calling contemporary glory a ‘vain and deceitful phantom’. He campaigned vigorously against dividing humans into races, let alone ascribing pejorative attributes to each race, and his closest friendship was with a woman he regarded as his muse and his intellectual superior. He was suspicious of systems of classification, acknowledging their usefulness, but realising that nature was in a state of constant change and that no artificial system could do justice to the dazzling complexity of the real wild world. He was robustly critical of Linnaeus. While all systematic approaches to nature were flawed, he wrote, ‘Linnaeus’ method is of all the least sensible and the most monstrous.’ ‘We think that we know more because we have increased the number of symbolic expressions and learned phrases. We pay hardly any attention to the fact that these skills are only the scaffolding of science, and not science itself.’

Since nature was mysterious, liquid, and vast (there were, he suspected, many more than 40,000 species) it could only properly be approached, thought Buffon, with humility and uncertainty. Human presumption must be shed at the laboratory door. He was a true Enlightenment sceptic; no questions were out of bounds. The Enlightenment did not stay that way for long, but while it did, it was glorious, and Buffon adorned it.

Jason Roberts’ scintillating account of these two lives is an overdue and important attempt to disinter Buffon from the obscurity in which he has long languished. He is not the first writer to try (the most systematic effort is Jacques Roger’s 1997 Buffon: A Life in Natural History), but he is far and away the most successful. Roberts is a sprightly storyteller who wears his considerable learning lightly. The result is a compellingly readable piece of intellectual history; a salutary account of enmeshed personality and ideas, and so of the way that science itself works.

Lionized Linnaeus is the archetype of many modern biologists. It started for him, as for them, with a childhood love of the natural world which soon curdled into ambition—an ambition not to understand, but to force the facts into a set of self-created and self-satisfied categories. It is a very modern story: rigour becomes fundamentalism; a search for the truth becomes a quest for new ways to affirm old orthodoxies; journeying becomes colonialism. ‘Objects are distinguished and known by classifying them methodically and giving them appropriate names’, wrote Linnaeus. ‘Therefore, classification and name-giving will be the foundation of our science.’ Self-reference and self-affirmation, in other words, are what science is all about. 

And the forgotten Buffon is the antitype of many modern biologists. His boyhood wonder never left him; never became sclerosed into a reverence for categories rather than real plants and animals. ‘The true and only science is the knowledge of facts’, he said. Theory, however elegant and revered, must always give way to reality.

Linnaeus was the fifth generation in a line of preachers and was expected to occupy the hereditary pulpit, but at the age of four, hearing his father declaim the incantatory Latin names of plants, he had an epiphany. It made him a botanical obsessive and set biology’s course for the next two hundred years.

For nine miserable years, he was marinated in Latin and Greek at school. He was known by teachers and pupils as the ‘Little Botanist’ (he was never more than five feet tall). He failed to make the grade for the Christian ministry. His teachers told his outraged father that Linnaeus should become a tailor or a shoemaker instead. It is perhaps unfortunate for science that he did not.

His father sought advice. What could be done with his hopeless son? A friend of the family suggested medicine and offered to coach Linnaeus for entry to medical school. It wasn’t as prestigious as preaching, but it was better than shoemaking. Linnaeus duly went (via a false start in Lund) to medical school in Uppsala.

There, in 1729, in a garden, there was a fateful meeting. Professor Olof Celsius, who was trying to write a book about the plants mentioned in the Bible, saw a tiny, ragged student drawing a flower badly. Celsius asked him what the flower was, and the student, who was Linnaeus, responded using a ludicrously difficult term forged by the French botanist Joseph Pitton de Tournefort. It showed that though Linnaeus could not draw, he had spent many hours learning Tournefort’s 698 categories. It was a formidable achievement. Celsius impulsively took Linnaeus under his wing, fostering Linnaeus’ passion for plants and recruiting him to work on the book about Biblical plants.

Celsius proved a loyal and powerful mentor. It was largely due to him that Linnaeus, in 1730, as a second-year student who had never taken a single class in botany, became de facto professor of botany. This astonishing appointment cemented Linnaeus’ confidence in his rapidly gestating system of plant classification, based on the characteristics of plant reproductive organs. It ignited Linnaeus’ belief in himself as a botanical messiah, and he began to see any challenge to him as blasphemous. When he was deposed from his post by a rival, Nils Rosen, and relegated to student status, the furious Linnaeus vowed to kill Rosen—and even carried a sword with him for the purpose.

Dispossessed, Linnaeus went on a collecting expedition to Lapland, collecting not only plants but self-glorifying yarns. ‘A divine could never describe a place of future punishment more horrible than this country, nor could the Styx of the poets exceed it. I may therefore boast of having visited the Stygian territories’, he wrote. Like his four autobiographies, much about Linnaeus was bogus or misconceived. The famous conical Lapp hat in which he is so often pictured was one worn by women. And he was far from the careful scientist of his self-portrait. He was, observes Roberts, temperamentally unsuited for field research: his methodology ‘swung wildly between minutiae and the cursory’. But his energy, though erratic, was real. He collected manically and worked on a scheme for classifying every species. For Linnaeus, writes Roberts, ‘[t]he Maker had long since put away his tools and closed up His workshop’, and of course there had to have been room in Noah’s Ark for all the species. The only problem in identifying the small and limited number of species was their geographical dispersion. Linnaeus was confident that he was up to the job.

He returned to Uppsala, and though his status there was better than it had been, his account of the expedition and the outline of his system of classification failed to impress the scientific establishment.

He stalked peevishly out of Uppsala and became a travelling biological entertainer, dressed in his Laplander costume, beating a shaman’s drum, telling his tall tales of swashbuckling travel, showing his collections of insects, and holding forth on his fast-gestating system of classification. He was plausible, at least in Germany, where the Hamburgische Berichte trumpeted that ‘All that this skilful man thinks and writes is methodical. . . . His diligence, patience and industriousness are extraordinary’. Linnaeus agreed, for he had written the lines himself.

He was shown the Hydra of Hamburg, one of the world’s most valuable zoological curiosities—a seven-headed, sharp-clawed monster which, eighty-seven years before, had mysteriously appeared on a church altar. Its authenticity was unquestioned until Linnaeus, after inspecting it, started to laugh. ‘O Great God’, he said, ‘who never set more than one clear thought in a body which Thou has shaped.’ It was a conclusion, like all his conclusions, driven by theology, and, like many of his conclusions, wrong. What would he have made of the fact that the cognition of cephalopods is partly outsourced to their semi-autonomous tentacles, or to the notion that all organisms are complex ecosystems—humans, for instance, being vats of bacteria, fungi, and viruses, all of which contribute crucially to the entities we call ourselves?

His wanderings took him to the Netherlands, where he was examined in medicine and finally obtained a medical degree. In 1735 he published his Systema Naturae, which gave to the five-fold hierarchy of Kingdoms, Classes, Orders, Genera, and Species the meanings used today. He hoped it would be tectonic. It was barely noticed.

Linnaeus returned to Stockholm. To earn a living, he stalked the coffee shops, looking for signs of syphilis and gonorrhoea in the customers, and offering to treat them. It made his fortune and established him in medicine, but he continued to work on his plants, duly became professor of botany, produced a second edition of the Systema and the Philosophia Botanica, a codification of his core tenets, and sent apostles across the world to continue, and hopefully to complete, his project of identifying all the species. The recognition he craved came in part with a Swedish knighthood in 1761. It was followed by a long and bitter decline. He suffered from an autoscopy characterised by visual hallucinations and the conviction that he shared his life with a second version of himself. He forgot his own name. By 1776 he was silent. He died in 1778.

Linnaeus’ great competitor, Buffon, had an undistinguished Burgundian childhood. Enriched by a legacy, he became a carouser and dueller at the University of Angers, finally fleeing the city after wounding an Englishman in a duel. This episode changed him. Reflecting in Dijon, he came to see that he did not want the life of the idle, comfortable estate manager the fates seemed to have in store for him. But how could he escape? Help was at hand in the form of the young Duke of Kingston and his travelling companion, Nathan Hickman, a precocious naturalist. Buffon travelled with them in France and Italy for a year and a half. He would never be the same again. He read a treatise on Newton’s calculus, became obsessed with the man, and realised that he, Buffon, had himself worked out one of Newton’s theorems. The discovery transformed him—making him reassess his own ability—and shaped the course of his life.

It was an exciting time to be a thinker. Spinoza, Newton, and Leibniz, who did not slave in tiny impermeable siloes like modern academics, saw the business of science as discovering the truth about the world. The truth was such a majestic and elusive thing that the search had to engage every discipline, invent new disciplines, straddle and confound old categories, and mercilessly discard cherished but superannuated models.

Buffon, infected with this excitement, began to distance himself from his companions and returned to his birthplace, the village of Montbard in Burgundy. There, in the Parc Buffon, he began his life’s work: to understand life. This involved—but unlike Linnaeus’ conception, did not completely consist in—the classification of biological life.

A calling so high, in a temperamental hedonist, demanded a strenuously structured and rather ascetic life. Buffon’s valet woke him at 5 a.m. and was instructed to get him up however reluctant he was, even if, as was once necessary, he had to be doused in ice-cold water. Inward order meant outward order, and so Buffon dressed formally each day: for his work, not his public. After a hairdresser had curled and powdered his hair, Buffon walked to the park and to one of two cells devoted to a type of biological monasticism—each containing only a writing table, a fireplace, and a portrait of his idol, Isaac Newton. He worked from no texts or notes, just his own memory and his immediate thoughts, and took regular walks in the park to clear his head. At nine there was breakfast—a roll and two glasses of wine—and then it was back to work until lunch at two, followed by a nap, a lone walk in the garden, and a return to the cell until exactly seven. He handed his day’s writing to a secretary, who made a fair copy, grafted it into whatever manuscript was on the go, and burned the original pages. Guests typically arrived at seven for wine and conversation, but there was no supper for Buffon, who was in bed promptly at nine.  He kept up this routine for half a century.

1753 portrait of the comte de Buffon by François-Hubert Drouais.

‘It is necessary to look at one’s subject for a long time’, he wrote. ‘Then little by little it just opens out and develops. . .’ And it did. In 1749 the first three volumes of Buffon’s Histoire Naturelle were published, containing a staggering 417,600 words and written in contemporary French with unusual simplicity and clarity. It was a runaway bestseller and sold out in six weeks. Buffon spent the rest of his life enlarging and refining it. At his death, there were thirty-five volumes—three introductory ones on general subjects, twelve on mammals, nine on birds, five on minerals, and six supplemental volumes on miscellaneous subjects. The book was no mere catalogue. It contained not only detailed anatomical descriptions but also accounts of ecological context and behaviour.

Any book sufficiently ambitious to be worth writing or reading will necessarily be a failure, and in many ways this was. Buffon had hoped to deal with ‘the whole extent of Nature, and the entire Kingdom of Creation’, but despite his gargantuan efforts and flagellant self-discipline he discovered, as do all mortals, that nature defeated him: the book did not deal properly with plants, amphibians, fish, molluscs, or insects.

Yet when he bowed out of life in 1788, his life seemed to many to have been a triumphant success. 20,000 mourners lined the Paris streets. The Gentleman’s Magazine in London described him as one of the ‘four bright lamps’ of France, alongside Montesquieu, Rousseau, and Voltaire.

Linnaeus, who had preceded him into the grave ten years earlier, had a quiet funeral. Most of the few who attended were university colleagues.

Linnaeus and Buffon had competed for decades. It looked as if Buffon had decisively won. But history is capricious. Within five years of his death, Buffon was reviled as a reactionary and an enemy of progress. A raucous, torch-bearing crowd tipped his corpse from the coffin and clamoured to install a plaster bust of Linnaeus in the royal garden Buffon had managed.

Linnaeus’ rigid categories are wholly antithetical both to Darwin’s notions of the fluidity of species and to ecological understandings of the nature of nature. Buffon had written that ‘it is possible to descend by almost imperceptible gradations from the most perfect of creatures to the most formless matter.’ It sounded presciently Darwinian. It was. When Darwin discovered Buffon, he wrote to Huxley: ‘I have read Buffon—whole pages are laughably like mine. It is surprising how candid it makes one to see one’s view in another man’s words. . .’ ‘To Linnaeus’ mind’, writes Roberts, ‘nature was a noun. . . . To Buffon, nature was a verb, a swirl of constant change.’ Buffon prefigured Darwin and understood the interconnectedness of things. Linnaeus would have denounced Darwin as a heretic and seen claims of ecological entanglement as an affront to the tidy architecture of the Creator.   

Yet Linnaeus is revered and Buffon forgotten. This is very strange. Why is it so?  

Roberts speculates intelligently and plausibly. As he says, the French Revolution is undoubtedly part of the story. Buffon, confident in Paris salons and the Versailles court, was never going to be a darling of the revolutionaries—though his politics were far more egalitarian than Linnaeus’ and his relative secularism should have been more palatable than Linnaeus’ religiosity. Linnaeus’ rigid scheme of classification played well in Great Britain, devoted to its class hierarchies. Imposing an artificial regime onto the world complemented and complimented colonial notions of conquest and control.

Roberts’ explanations, though elegant and ingenious, are insufficient. An anomaly so striking cries out for a more fundamental justification. This can be found, I suggest, in the work of Iain McGilchrist, who in his two gigantic books The Master and His Emissary: The Divided Brain and the Making of the Western World (2009) and The Matter with Things: Our Brains, our Delusions, and the Unmaking of the World (2021) views the history of the last few thousand years through the lens of the functional asymmetry of the cerebral hemispheres.

To survive and thrive (his thesis goes), we need two wholly different types of attention. One is a narrow, focused attention, contributed to humans by the left hemisphere. The other (which is supposed to be in overall charge) is a wider, more holistic type of attention, based in the right hemisphere. Paying attention properly to the world demands a dialogue between these hemispheres.

The left hemisphere is meant to be the executive, acting on the orders of the right. It is the primary locus of language (which is dangerous, because it can advocate its own views), and in right-handed humans governs the right hand, which seizes and manipulates.

The left hemisphere deals in polarities. It loves black-and-white judgments. It builds and curates pigeon-holes and gets petulant at any suggestion that there is anything inadequate about its filing system. It is highly conservative and hates change.

The right hemisphere knows that nothing can be described except in terms of the nexus of relationships in which it exists, that opposites are often complementary, and that meaning is generally to be read between the lines. It does not confuse the process of understanding with the process of assembling a complete set of data, and it sees that knowledge and wisdom are very different.  

McGilchrist suggests that much of our intellectual, social, and political malaise results from the arrogation by the left hemisphere of the captaincy of the right. The nerdish secretary makes declarations about the web and weave of the cosmos and drafts policy—yet it is dismally unqualified to do so.

This is a perfect explanation for the posthumous fates of Linnaeus and Buffon. Buffon’s work represents a respectful conversation between the hemispheres. He grabbed facts in those long days of intense left hemispherical focus, and the facts were duly passed to the right hemisphere which placed them into a holistic vision of the whole natural world—a world of relationality and flux.

Linnaeus seems never to have moved out of his left hemisphere. He was, and his successors are, pathologically attached to their categories. For him, to categorise was to understand. The names spawned in the left hemisphere were the truth.

The left hemisphere’s conservatism is shown by the desperate and doomed efforts to reconcile Linnean taxonomy with biological realities. Linnaeus’ five taxonomic categories were expanded to twenty-one, and the enlarged scheme is audibly creaking. Viruses, for instance, simply don’t fit. If a model needs to be revised so radically, isn’t it time to trash it and start from the beginning? Darwin showed that the notion of immutable species is nonsense, yet taxonomists still cling to it pathetically.

This 1942 book delineated the modern synthesis of evolution, often referred to as ‘neo-Darwinism’.

There is another important twist in the hemispherical story within modern biology. You couldn’t make it up. Neo-Darwinism itself, plainly at odds with traditional taxonomy, has not dealt a death blow to taxonomy. Why? Surely because left hemispheres stick together in diabolical and incoherent solidarity against the right. Neo-Darwinism has become a new, non-negotiable category. A model that is all about fluidity has become itself a mandate for stasis. All biological observations (unless you’re in a taxonomy department) have to be squeezed into it, however uncomfortably. Neo-Darwinian orthodoxy has become as canonical as the canons of taxonomy. Biological science, far from being (as the Enlightenment anticipated it would be) a workshop in which paradigms are gleefully smashed, has become a temple in which paradigms are uncritically worshipped. 

There’s a battle on for biology, a battle raging in the laboratories and lecture rooms of the world: a battle that is really between the left and right hemispheres of the world. It’s a battle for reality against dogma; for freedom against colonialism; for the untameable, mysterious, tangled wild against human vanity and self-reverential theory. It is a battle exemplified well by the epic contest between Linnaeus and Buffon.


*Note that, when you use this link to purchase the book, we earn from qualifying purchases as an Amazon Associate.

Related reading

The Highbrow Caveman: Why ‘high’ culture is atavistic, by Charles Foster

‘An animal is a description of ancient worlds’: interview with Richard Dawkins, by Emma Park

Consciousness, free will and meaning in a Darwinian universe: interview with Daniel C. Dennett, by Daniel James Sharp

‘We are at a threshold right now’: Lawrence Krauss on science, atheism, religion, and the crisis of ‘wokeism’ in science, by Daniel James Sharp

The end of the world as we know it? Review of Susie Alegre’s ‘Human Rights, Robot Wrongs: Being Human in the Age of AI’ (21 June 2024)

Techno utopia or techno dystopia?

DeepAI’s AI image generator’s response to the prompt ‘AI taking over the world’.

Deep within the series of notebooks that make up his Foundations of a Critique of Political Economy, Karl Marx prophetically argued that the dynamic and intense technological development that occurred in capitalist society would culminate in an:

‘automatic system of machinery…set in motion by an automaton, a moving power that moves itself; this automaton consisting of numerous mechanical and intellectual organs, so that the workers themselves are cast merely as its conscious linkages.’

What Marx is describing is essentially what we now call artificial intelligence (AI)—a subject that long ago ceased to be esoteric, discussed only among a small number of computer scientists. It is now a mainstay of public discussion, and the argument over its implications for society tends to be framed either in terms of utopias or dystopias.  

On the one hand, techno-utopians will lean on the potential of AI to boost innovation and economic growth and aid humanity in solving the complex problems it faces, from disease and poverty to climate change. Catastrophists, on the other hand, imagine us to be on the cusp of a Westworld or Cyberpunk 2077-like world where a sentient and conscious AI rebels against its human creators to oppress and ultimately eliminate them—a motif that has been a staple in science fiction from Fritz Lang’s 1927 silent film Metropolis onwards. 

In her new book Human Rights, Robot Wrongs: Being Human in the Age of AI, leading human rights barrister Susie Alegre, whilst not a total catastrophist, is certainly no AI utopian. She argues that the rapid development of AI technology is a threat to human rights, whether in the form of robo-judges and robo-lawyers undermining the principle of the right to a fair trial, ‘killer robots’ (which she proposes banning) violating the right to life, or sex robots potentially subverting customs around consent and freedom from manipulation. All of these, Alegre posits, encourage the delusion that machines are infallible and objective and thus should be allowed to bypass human accountability, even for decisions which take lives. Her basic thesis is that we are in danger of embracing this technology without sufficient regulation, all for the aggrandisement of the nefarious corporations behind it. 

More crucially, over-embracing AI will undermine key elements of the human condition, even when it is proposed as a solution to a social problem. For instance, the increased use of chatbots by therapists to deal with depression and loneliness would actually compound isolation and alienation. Alegre cites the case of a Belgian man who committed suicide after an intense six-week relationship with an AI chatbot, a relationship that was fundamentally a synthetic replacement for authentic human ones. No matter how sophisticated—even if it can mimic human reason and present a simulacrum of human emotion—a machine cannot actually replace the human elements that we need. Thus, Alegre says, ‘by looking for humanity in the machines, we risk losing sight of our own humanity.’

Alegre adeptly discusses a wide range of topics involving AI and shows that under our current arrangements, AI is not being used to harmonise global production or enhance humanity’s creativity but to discipline workers (and dispense with even more of them), undermine artistic imagination, and increase the power and profit margins of corporations, among other negatives. Still, Human Rights, Robot Wrongs does read as incredibly biased against technological development and at times resorts to hyperbole. The use of ChatGPT to help craft a eulogy at a funeral shows AI being ‘deployed to exploit death’, Alegre writes, while using AI in art and music may mean ‘we lose what it means to be human entirely.’ She neglects to mention that AI can help artists come up with prototypes and proofs of concept which, while lacking the special human touch, can be used to develop new ideas. Though Alegre does concede that AI can be used for good, such as unlocking hidden words in a burnt scroll from ancient Rome or helping to restore missing pieces of a Rembrandt masterpiece, she still views its potential as something that ought to be contained rather than unleashed, lest it colonise the authentic human experience.

This is why, for all of Alegre’s talk of human rights, her book implicitly presents a very diminished notion of human agency. Regulation becomes less a way by which society can collectively shape its relationship with AI and other new technologies and more something that is imposed from on high, from institutions such as the European Court of Human Rights, as a means of protecting helpless humans from the machines. 

The problem with this kind of techno-negativism is its determinism, which downplays the fact that society is really what mediates and determines technological development. From the moment the power of fire was discovered, humans have created new technologies and adapted them to their needs. These technologies have allowed humanity to achieve the previously unthinkable and cultivate new needs—that is, to go beyond what was previously thought possible for human lives. 

In contrast to such apprehensive and even dark views of technology, Vasily Grossman’s magisterial 1959 novel Life and Fate was ahead of its time in offering a different, more positive vision of humanity and technology. Despite setting the novel in (and writing it in the aftermath of) the industrial charnel house that was the Eastern Front during the Second World War—among the most barbaric and apocalyptic episodes in the history of civilisation, and one made possible by the most advanced technology of the time—Grossman retained an adamantine faith in humanity and technological progress.

Grossman already lived in a world in which an ‘electronic machine’ could ‘solve mathematical problems more quickly than man’. And he was able to imagine ‘the machine of future ages and millennia’, seeing that what we call AI is something that could elevate humanity to new summits rather than be antagonistic to it. Indeed, it would be something capable of expressing the whole human condition:

‘Childhood memories … tears of happiness … the bitterness of parting … love of freedom … feelings of pity for a sick puppy … nervousness … a mother’s tenderness … thoughts of death … sadness … friendship … love of the weak … sudden hope … a fortunate guess … melancholy … unreasoning joy … sudden embarrassment …’ 

We are nowhere near producing the kind of AI Grossman describes here. That would require more processing power and energy than we currently produce, and a different and more advanced society capable of producing it. Technology is not the problem; the question is how the society that produces and uses that technology is organised. Right now, AI is an instrument of iniquitous corporations chasing their surplus value and seems antagonistic to everything valuable in the human experience. But under different arrangements, there is no reason why it could not be an instrument of emancipation and human flourishing. 

Alegre is right to say that AI ‘needs to serve rather than subvert our humanity’, but to achieve this will require a transformation of our social organisation. We will have to move away from our current form of social organisation, which valorises big corporations interested in innovation only to the extent that it benefits the power of capital, and towards one based fundamentally on human flourishing. Then, how technology is mediated in and by society will be transformed, and our society will be one where man is truly and self-consciously the master of the machine. But that would require something more radical than Alegre’s neo-Luddism. 

Further reading

‘Nobody really understands what the consequences are’: Susie Alegre on how digital technology undermines free thought, interview by Emma Park

Ethical future? Science fiction and the tech billionaires, by Rahman Toone

Artificial intelligence and algorithmic bias on Islam, by Kunwar Khuldune Shahid

The philosopher’s curse(s) (29 May 2024)

A look at some ‘nefarious basic approaches in philosophy’.

A godly snooze: if the philosopher Berkeley’s God ever decided to catch forty winks, the consequences for existence itself would be dire. Illustration by Nicholas E. Meyer.

From earliest times, philosophers have been found labouring under a misapprehension: that, if it appeared to them to be only logical (or soul-satisfying, or ultimately just aesthetically pleasing) for the world to have this or that property—for things in reality to be some given way, and not another—then that was it. Things were actually so.

However, this is a delusion. The world, and all reality, material and nonmaterial, have no need to conform to what any philosopher, under his or her way of thinking, believes should be the case. And the delusion has been extensive enough throughout philosophy to constitute a curse, for it has kept many otherwise supremely brilliant and ever-so-subtle minds from suspecting that their conclusions might just possibly be resting on unwarranted or indeed solipsistic grounds. But it goes deeper than that: the delusion has often obscured the need, in tandem with the work of cogitation, to try, wherever possible, to actually find things out. This obscuring has even occurred in cases in which lip service was paid to the idea—which stretches back to Parmenides the Greek—of checking with reality.

Cases in point of thinkers being sure that things are the way their intellect—and/or their hunches or their personal or social proclivities—has decided they should be are embarrassingly plentiful. Here are three.

One: the Zen position (a position not exclusive to Zen within Eastern philosophy) that an immediate subjective apprehension of reality is necessarily superior to reasoning or research.

Two: Gilles Deleuze’s argument that the foundation, ‘the absolute ground’, of philosophy equates with the plane of immanence. (By this, he meant a kind of soup—more precisely, a consommé—in which everything, ideas, things, the lot, coexist but without differentiation or delimitation of any kind. There, they are ‘in themselves’, which means immanence, not ‘beyond themselves’, i.e. in transcendence.)

Three: the Rig Veda’s account of the dismemberment of Purusha—primaeval man, mind, or consciousness. From his mouth came the Brahmins; from his arms, the warriors; from his thighs, the common people; from his feet, the menials; from his head, the sky; from his mind, the moon; from his eye, the sun; from his feet once more, the earth. Even if this is taken symbolically, as a poetic expression of myth, it is hard to deny that it expresses its originators’ view that society and the world ought to be organised hierarchically—and therefore, that that is how the world surely is organised.

If only such statements were phrased more tentatively. A philosopher might write, especially in areas of thinking that scarcely lend themselves to experimental probing, ‘This position I am stating is not one that I can prove to be the case—but it provides a solid, workable interpretation or model of the case. I see it as superior to previous models of how things are; so, until, and if, a better one is developed, it should stand.’ Yes, the philosopher might write something along those lines. But the chances are overwhelmingly that he or she won’t.

In some cases, the reason for this may be that the philosopher is afraid of not having the same impact, not gaining the same level of renown, if he or she seems to sound wishy-washy instead of categorical. (To be consistent: in the present essay, categorical statements are to be understood as meaning the best interpretation of the known facts thus far.) In the majority of cases, though, the reason philosophers don’t write that way is that it doesn’t cross their minds that their conclusions could be anything less than definitive. What goes for philosophers goes, equally or even more so, for theologians.

The above title, ‘The philosopher’s curse(s)’, obviously refers to a curse(s) that philosophers have lived under, not a curse(s) issued by them. The suggested plurality of curses is due to the fact that from the above overarching fallacy—‘I think so, therefore it is so’—follow others. They are derived from or comparable to it, yet aren’t identical to it. Then there are also some that are unrelated to it. This article lists a total of six, including the Big One already mentioned.

Here’s the second—one which, although it can be seen as a particular case of the Big One, is distinctive enough to constitute a category unto itself. It is the belief that the material world that we see, hear, and touch is inferior and/or less real than some other, ungraspable one. This conception is widespread in Eastern philosophy, yet it is not restricted to it. Kant was also one of those holding that the material world is less real than the spiritual world (however that often woolly concept is defined). The mental mechanism by which mankind arrived at this idea is transparent: the world was found to be mysterious, dangerous, and incomprehensibly complex, not to mention often unfair. Unsurprisingly this led to a yearning for a superior, even if invisible, world, and from yearning, the next step was utter conviction that such a world indeed exists. 

Notice that when philosophers gave themselves the task of apprehending the nature of the alleged ultimate reality, of finding what lay behind the multiplicity of appearances, their respective speculations—or gut feelings—took them to different conclusions, about which, naturally, each was always convinced. The example that springs most immediately to mind is that of the Presocratics, each of whom identified different elements as the underlying substance/principle, or arche, of reality: water, air, or fire. But examples also range as far and near as the Buddhist thinker Nagarjuna, for whom the root of everything was the Void, or Schopenhauer, for whom behind all reality lay the Will, or Heidegger, for whom nothing other than Being fitted the bill.

It needs to be underlined here that the list of six fallacies refers to nefarious basic approaches in philosophy—not to the simple procedural errors or the writing vices that specific philosophers might fall into, even if the line between the two may not always be hard and fast. To illustrate: the listing doesn’t refer to unwittingly falling into some hooey or inconsistency that the very same philosophers may be arguing against. It doesn’t refer, either, to grating individual idiosyncrasies, like writing in a needlessly obscure way (with never an example to clarify the points being made) just to show off the author’s cleverness.

Nor does it refer to the fallacy of prior assumption, wherein a philosopher fails to notice, much less prove, some assumed point before continuing with his or her argument. The above-mentioned search for the ultimate reality behind the world provides a good example of this fallacy. The prior (unproven) assumption is that there is one such ultimate underlying substance. (Sometimes the philosophers’ brainwork led them to the conclusion that there is not one but two underlying substances which are opposite, complements, and rivals.)

Incidentally but significantly, why one or two ultimate realities? Why not five? Why not one hundred and one? Why not even none at all? They merely thought it obvious that there had to be one (or the two that are forever fighting it out between themselves) since they found the idea of a fundamentally heterogeneous and messy universe offensive. Many people (possibly most) still do, but this is no more than a preference—in this case, of an essentially aesthetic type. Preferences, and philosophies based solely upon them, do not establish fact.

Reality may on occasion agree with someone’s preferences about the way things ought to be (in which case they won’t agree with the preferences of others who have thought differently). But that will have been no more than coincidental—analogous to the case of someone obsessed with Tuesdays who declares, every day, ‘Today is Tuesday!’ and periodically happens to be right.

John Smibert’s c. 1728-30 portrait of Berkeley. Luckily, he appears to be awake.

Here comes the third of the accursed philosophical delusions: the thought, often conscious but sometimes subconscious, that the way things are in the world depends on human understanding of them. George Berkeley, who took this idea furthest, condensed it in Latin: Esse est percipi—to be is to be perceived. For those who share this conclusion, the arguments are apparently so strong that they obscure the fact that if human understanding colours all facts about the world—or indeed precedes them—this only happens for humans. (If the philosophers fail to say so, it’s because they have failed to connect these particular dots, or because they do not attach any importance to the connection.) As for the rest of the world, it would go about its merry way, or grim way, if there were no humans to perceive it, and even if humans had never existed.

It boils down to this: it could be that, yes, human philosophy truly cannot prove there is a world outside of people’s thoughts and/or their perceptions—however, that’s hardly the fault of the world. The shortcoming belongs to human philosophy.

At the heart of any delusion that things are otherwise is human vanity, even if masked by sleight of brain. What is needed, in this as in so much else, is some humility. Not, in this case, personal humility, but a collective humility based on a true assessment of our standing as tiny creatures on the surface of a minute mote in the universe. Imagine that, one day, humans not only destroy the Earth but manage to create a black hole that swallows up the planet itself and also everything else in its vicinity. Even in that extreme case, the idea that the universe as a whole depends on humans or any of their attributes is an exhibition of hubris on a staggering scale. This, by the way, is quite typical of a lot of human thinking. Here’s a case in point: the idea that mighty planets, stars, and constellations make it their business to determine the characters and fates of humans.

The fallacy extends to science—even, or especially, in its most modern areas. The delusion appears whenever science neglects to say—or to see—that if something remains indeterminable, it may only be so to us. Science will never be able to precisely know, at one and the same time, a particle’s position and momentum. But that doesn’t mean that the particle doesn’t have a precise position and a precise momentum at any given time, even as scientists’ measurements are messing with them; it’s just unknowable to us, and therefore meaningless to us as scientists. The particle isn’t responsible for being knowable or meaningful to us.
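
For readers who want the quantitative statement behind this claim, the relation being invoked is Heisenberg’s uncertainty principle (the formula itself is not quoted in the essay): the product of the uncertainties in a particle’s position and momentum can never fall below a fixed minimum,

\[ \sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2} \]

where \(\sigma_x\) and \(\sigma_p\) are the standard deviations of position and momentum and \(\hbar\) is the reduced Planck constant. Whether that bound limits only what observers can know, or describes what the particle itself possesses, is exactly the interpretive question the essay is pressing.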

ChatGPT 4.0 / DALL-E 3 portrayal of Schrödinger’s cat.

We may not know if Schrödinger’s famous cat is alive or dead until the dust has settled. But at any given moment, the cat itself is either alive (even if dying) or dead: the wave function keeps observers in the dark about the cat’s status—but that means little to the cat itself.

Einstein himself, who suggested Schrödinger’s thought experiment in the first place (albeit with a non-feline example), did refer to ‘reality as something independent of what is experimentally established.’ However, this standpoint of his didn’t gain much traction. What is true is this: science genuinely cannot advance except with what is experimentally established (actually, with what is experimentally disprovable). But science, human knowledge of the world, isn’t the same thing as the world—except when human self-importance conflates the two, or faulty thinking fails to distinguish between them.

Some bad philosophical habits that harden into curses aren’t as pervasive as the above ones, although they are still too frequent. (Always read ‘philosophical’ as ‘philosophical/theological’. The medieval Scholastic period was one in which philosophy and theology were particularly hard to tell apart, but there are plenty of other cases in which one has shaded into the other. In some religions the distinction is purposely meaningless.)

One bad habit—the fourth in the list—involves philosophers whose thinking has led them to results that are mutually contradictory or absurd in a way they wouldn’t normally countenance, or who find themselves forced to choose among alternatives when they would prefer to hang on to all options. They could question their original assumptions and start afresh; or they could accept that a few things may just be unsolvable (like finding a complete and consistent foundation for mathematics, which Gödel proved to be impossible). Instead, philosophers with the bad habit in question simply paper over the problem with a layer of mysticism.

Then, after the mystical attitude has shown the way to reconciling the antithetical or closing any annoying inconsistencies, if there are any remaining doubts about details, they can be declared solved through the invocation of a mystery: the obdurate details are not for human beings, or at least uninitiated human beings, to understand.

And if even that fails, mysticism allows direct appeals to supernatural agencies as a way out of philosophical dilemmas. Take the bitter medieval debates over the relationship between God the Father and God the Son, and then, between God the Son’s human and divine aspects: Father and Son could be decided to be mystically at once distinct and similar; the Son’s two aspects could be pronounced to be separate but commingled.

Then there is, for instance, Berkeley’s solution to the dilemma raised by ‘Esse est percipi’ (‘to be is to be perceived’)—namely, that things dematerialise the moment people close their eyes or look away and come back into existence when they are perceived again. He fell back on God (he was, after all, a bishop). God, obviously being always awake and seeing everything, keeps everything in existence. Objection overruled.

A fifth fallacy: extrapolating one’s conviction, not to the nature of the world as in the first item, but to the minds of other people. Philosophers who fall for this are merely following a widespread human practice (although perhaps they, of all people, should know better). The practice is exemplified by those who repeat the dictum that ‘Everybody needs to believe in something’, originated by those who themselves need to believe and extrapolate their need to all others. The dictum can be refuted simply by pointing to people who do not believe in anything, in the specific sense of ‘believe’ that is meant here, and who do not miss it. But that would require going out to find out whether such people exist, and it is much easier to generalise in armchair comfort.

Descartes, too, was extrapolating to everyone else when he decided that perceptions are reliable if they are clear and distinct. He was clearly imagining that if they were clear and distinct to him they would be so to others—never conceiving that the person alongside him might be having a clear and distinct perception quite divergent from his own. Different people find different things to be unarguably evident.

And so to the sixth and final curse: a curse lurking in language. Philosophers may build up claims based on language phenomena that only occur in the tongue they happen to work in. German philosophers must guard against their language’s propensity for compounding words: fusing a concept into a single word tends to give it added substance (particularly since German nouns get Capitals). Thus, ‘being in the world’ is, in English, an idea; the equivalent German, In-der-Welt-sein, constituting just one (albeit hyphenated) word, is much more. In-der-Welt-sein, Heidegger’s concoction, becomes an actual Thing. (The usual English translation is ‘being-in-the-world’, the hyphenation carrying over to give it a similar standing.)

And consider Jacques Derrida’s key concept différance. The fact that in French it is pronounced exactly like the usual spelling, différence, is innocent enough wordplay. But it is not innocent that Derrida makes something out of the coincidence that in French différer can mean both ‘to differ’ and ‘to defer’. A philosopher who thought in English might just as plausibly find deep significance in the fluke that ‘God’ is ‘dog’ written backwards.

Philosophy is a wonderful enterprise. It is just a shame that its practitioners have fallen, again and again, into pitfalls that could have been avoided.

Philosophy-related further reading

‘The Greek mind was something special’: interview with Charles Freeman, by Daniel James Sharp

Consciousness, free will and meaning in a Darwinian universe: interview with Daniel C. Dennett, by Daniel James Sharp

Atheism, secularism, humanism, by Anthony Grayling

A French freethinker: Emile Chartier, known as Alain, by Michel Petheram

‘When the chips are down, the philosophers turn out to have been bluffing’: interview with Alex Byrne, by Emma Park

‘The real beauty comes from contemplating the universe’: interview on humanism with Sarah Bakewell, by Emma Park

On sex, gender and their consequences: interview with Louise Antony, by Emma Park

Image of the week: Anaxagoras, by Emma Park

Image of the week: Portrait bust of Epicurus, an early near-atheist, by Emma Park

Can science threaten religious belief? by Stephen Law

Lifting the veil: Shelley, atheism and the wonders of existence, by Tony Howe

The post The philosopher’s curse(s) appeared first on The Freethinker.

Image of the week: Francisco Goya’s ‘The Sleep of Reason Produces Monsters’ https://freethinker.co.uk/2024/05/image-of-the-week-francisco-goyas-the-sleep-of-reason-produces-monsters/?utm_source=rss&utm_medium=rss&utm_campaign=image-of-the-week-francisco-goyas-the-sleep-of-reason-produces-monsters https://freethinker.co.uk/2024/05/image-of-the-week-francisco-goyas-the-sleep-of-reason-produces-monsters/#respond Tue, 07 May 2024 11:09:30 +0000 https://freethinker.co.uk/?p=13507 Francisco Goya’s c. 1799 etching is usually regarded as an expression of Enlightenment principles. The imagination, untempered by…

Francisco Goya’s c. 1799 etching is usually regarded as an expression of Enlightenment principles. The imagination, untempered by reason, runs amok, and produces monsters. Goya was a favourite artist of the late Christopher Hitchens, and this etching was reproduced in the hardback edition of his 2007 book god Is Not Great: How Religion Poisons Everything. Within the book itself, in a chapter focused on Eastern religions, Hitchens writes:

‘El sueño de la razón produce monstruos. “The sleep of reason,” it has been well said, “brings forth monsters.” The immortal Francisco Goya gave us an etching with this title in his series Los Caprichos, where a man in defenseless slumber is hag-ridden by bats, owls, and other haunters of the darkness. But an extraordinary number of people appear to believe that the mind, and the reasoning faculty—the only thing that divides us from our animal relatives—is something to be distrusted and even, as far as possible, dulled. The search for nirvana, and the dissolution of the intellect, goes on. And whenever it is tried, it produces a Kool-Aid effect in the real world.’

Another chapter of god Is Not Great was recently reproduced in the Freethinker. You can read Hitchens’s call for a new Enlightenment here.

The post Image of the week: Francisco Goya’s ‘The Sleep of Reason Produces Monsters’ appeared first on The Freethinker.
