Dualism redux


My post on the problem of consciousness troubled a few readers because I dared toy with the idea of dualism, something so offensive to scientists that I’m wary to speak its name. But I’m going to continue to argue for dualism because it’s not clear to me that it is wrong, for all the flak it has received. I think a return to this topic is also warranted because of the controversy generated by Thomas Nagel’s latest book, ‘Mind and Cosmos’.

A charge made against my previous post was that dualism is a pernicious idea. Yet nihilism is a negative and, I would argue, damaging philosophy par excellence, but that has no bearing on its truth or falsity. Similarly, one commenter suggested that dualism is argued for because it speaks to a human desire to be something more than physical. But, again, that does not mean it’s wrong. This holds for any idea. We can only take people to task for arguing something if the only reason they do so is that it comports with their views.

(On a side note I reckon the implications of a physicalist universe are far more terrifying than a dualist one. They are best spelled out by the atheist philosopher Alexander Rosenberg in his essay ‘The Disenchanted Naturalist’s Guide to Reality’.)

Previously I used David Chalmers’ zombie argument to question whether the world as we conceive it can account for the presence of consciousness. Chalmers summarises the message of his original zombie argument as “If any account of physical processes would apply equally well to a zombie world, it is hard to see how such an account can explain the existence of consciousness in our world.” Arguments from logical possibility are contentious, but let’s not lose sight of what Chalmers is saying.

The subtitle of Nagel’s book is likely to grab the attention of biologists – ‘Why the Materialist Neo-Darwinian Conception of Nature is Almost Certainly False’. But you should hold your cries of “Creationist!” because they’re misplaced. I really need to stress that this debate should not be framed as being between science and religion, or science and pseudoscience, but rather between physicalism and dualism. Indeed some of the more prominent defenders of the latter position, Nagel and Chalmers included, are card-carrying atheists.

I haven’t read Nagel’s book yet (I hope to do so) but you can get a sense of the themes he develops from the following description: “The modern materialist approach to life has conspicuously failed to explain such central mind-related features of our world as consciousness, intentionality, meaning, and value.” He argues that something more is needed to get us from physical matter to conscious thoughts; not even evolution by natural selection can get us there with a purely material world to manipulate. There is a difference of kind rather than degree here.

Some of the criticisms of his latest work argue that he leaves a lot unsaid, and many of his arguments have been criticised as vague. But Nagel is best known for his famous 1974 essay ‘What Is It Like to Be a Bat?’. He says that although bats are conscious, they experience the world in an entirely different way to us; that is, there is something it is like to be a bat. If we imagine ourselves as bats, it is actually through our human minds, i.e. we’re imagining what it would be like for us to have echolocation, which is entirely different. If instead we record all the sensory data that a bat experiences, we’re still left wondering how it experiences the world from its own point of view. The normal physicalist approach leaves us guessing.

Previously we were asked to have a prior commitment to physicalism but there are a number of properties of consciousness that should cause us to at least re-evaluate our priors. Neurons, physical entities that they are, do not seem to have the tools to do what is being asked of them. When it comes to physicalism, past performance is no guarantee of future success.

Author

Adam Kane: kanead [at] tcd.ie

Photo credit

Worldprints.com

Intelligent Design: Part Three – Dr Alistair Noble’s ‘The Scientific Evidence for Intelligent Design’: the review


I would like to say that the talk presented a range of evidence for intelligent design and carefully countered the usual arguments against it. I would like to say this, but I can’t. The talk, which lasted over an hour, spent much of the time quoting non-scientists and misquoting scientists, painting ID proponents as martyrs to the cause and science as tautologically incapable of addressing questions of design. The religious beliefs of ID proponents were constantly referred to, despite supposedly being completely irrelevant, which was an indication that this was, after all, a religious proposition, not a scientific one.

It would be easy to question the credentials of Dr Alistair Noble (PhD in chemistry) and ask how someone who has been outside of scientific academia longer than I have been alive can claim to have found fundamental flaws that no working biologist has been able to find, but I won’t. Instead, I have tried to focus on the claims of Dr Noble and see if they can be answered (see my last blog post).

There is much more that I could have said. The case for evolution is so strong that I could go on for hours about the evidence from multiple disciplines that support it. It seems that the same cannot be said for intelligent design. Dr Noble spent about 15 minutes of his (more than) one hour talk providing evidence which can be easily refuted by anyone who has even a basic understanding of evolutionary theory. His ‘evidence’ ultimately boiled down to an Argument from Incredulity with a side helping of the Argument from Authority.

I was disappointed by the lack of scientific rigour Dr Noble exhibited. Not one journal article was presented, and not a single claim was made that hasn’t been refuted multiple times before. I had hoped for an intellectually stimulating talk that would force me to question my understanding of evolutionary theory, but instead I was confronted with the same tired claims that ID proponents have been presenting for years now. It is a shame that Dr Noble could not have used his clearly considerable intellect to study the actual science and see that evolutionary theory is not a threat to his faith but an amazingly simple yet profound explanation of how the diversity of life arose.

Author

Sarah Hearne: hearnes[at]tcd.ie

Photo credit

wikimedia commons

Sampling gaps in our understanding of primate parasites


*By parasites here I am referring to all kinds of infectious disease-causing agents, including bacteria, viruses, fungi, protozoa, helminths and arthropods.

Why do we care about primate parasites?

Many of the most devastating infectious diseases in humans have origins in wildlife. For example, the global AIDS pandemic originated through human contact with wild African primates, and influenza viruses circulate among wild bird populations. These are not only historical occurrences. Recently, for example, rodents were identified as the source of a Hantavirus outbreak in Yosemite National Park, USA. As human populations continue to expand into new areas and global changes in temperature and habitat alter the distributions of wild animals, humans around the world will have greater contact with wildlife. Thus, understanding which infectious agents have the potential to spread from animals to humans is crucial for preventing future human disease outbreaks.

Many efforts are being made to collate information on wildlife and human diseases. Much of my research (which I will blog about when I get the chance!) uses an amazing database known as the Global Mammal Parasite Database, or GMPD for short. Every time a paper is published containing details of parasites found in primates, carnivores or ungulates, the information is added to the database. As much data as possible is recorded, including the species infected, the type of parasite, the prevalence of the parasite, and the geographic location of the study. Prof. Charles Nunn and his colleagues have been collecting data for the GMPD since around 2005, and it currently contains around 6000 records for primates alone. This definitely makes it the most comprehensive dataset of primate parasites in existence.

The GMPD sounds amazing…so what’s the problem?

The problem with the GMPD (and this is a feature of virtually all datasets) is that there is sampling bias. Certain primates are sampled for parasites much more frequently than others. Chimpanzees, for example, are sampled for parasites all the time, whereas species such as tarsiers are sampled much less often. This has the effect of making it look like chimpanzees have far more parasites than tarsiers, simply because they have been sampled more often. In analyses using the database we usually deal with this problem by adding sampling effort into our models, so we give less emphasis to high numbers of parasites in primates we have lots of samples for. Unfortunately this problem is also evident when we look at parasites (things like malarial parasites are often sampled because of their importance to human health) and geographic regions (areas with primate research stations are sampled far more regularly than more remote regions). If we hope to use the GMPD data to make reliable predictions about future risks to humans, we need to identify gaps in our knowledge of primate parasites.
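
The effort-correction idea can be sketched in a few lines. The species names and numbers below are entirely made up, and real GMPD analyses use more sophisticated models (e.g. phylogenetic regressions), but the principle is the same: regress log parasite richness on log sampling effort, then treat the residuals as an effort-corrected measure of richness.

```python
import math

# Hypothetical numbers for illustration only -- real analyses use many
# more species and control for shared ancestry among hosts.
species   = ["chimpanzee", "baboon", "tarsier", "aye-aye"]
parasites = [120, 60, 5, 3]    # parasite species recorded per host (made up)
effort    = [400, 150, 8, 4]   # number of published studies (made up)

def ols(xs, ys):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Regress log richness on log effort; the residuals tell us which hosts
# have more (or fewer) parasites than their sampling effort predicts.
log_eff  = [math.log10(e) for e in effort]
log_rich = [math.log10(p) for p in parasites]
a, b = ols(log_eff, log_rich)
residuals = {s: y - (a + b * x)
             for s, x, y in zip(species, log_eff, log_rich)}

for s, r in sorted(residuals.items(), key=lambda kv: -kv[1]):
    print(f"{s:10s} effort-corrected residual: {r:+.2f}")
```

A host with a large positive residual really is parasite-rich; a large raw count from a heavily sampled host may not be.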

So what did you do?

Without going into the technical details, we looked across the primate phylogeny and primate geographic ranges to identify gaps in our knowledge, and used statistical models to investigate what factors led to primates and geographic areas being relatively well- or relatively poorly-sampled. We also used species accumulation curves to extrapolate parasite species richness for primates.
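
To give a flavour of how extrapolation from sampling data works, here is a minimal sketch of the classic Chao1 richness estimator (used here purely as an illustration; the actual analysis may use different estimators and accumulation-curve methods). It uses the number of parasite species recorded exactly once or exactly twice to estimate how many species remain unseen.

```python
def chao1(species_counts):
    """Chao1 estimator of total species richness.

    `species_counts` holds the number of records for each observed
    species; singletons (F1) and doubletons (F2) estimate the number
    of unseen species via S_est = S_obs + F1^2 / (2 * F2).
    """
    s_obs = len(species_counts)
    f1 = sum(1 for c in species_counts if c == 1)  # seen exactly once
    f2 = sum(1 for c in species_counts if c == 2)  # seen exactly twice
    if f2 == 0:
        # Bias-corrected form, avoids dividing by zero.
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

# Six parasite species observed; three singletons suggest more remain.
counts = [5, 3, 1, 1, 2, 1]
print(chao1(counts))  # estimates 10.5 species in total
```

The intuition is simple: lots of species seen only once means the inventory is far from complete, which is exactly the situation for most primate hosts in the GMPD.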

Where are the gaps in our knowledge?

We found that apes (chimpanzees, gorillas and orangutans) were generally better-sampled than other primates, but there was incredible variation in sampling among all other major primate groups. Apart from apes, the primates that researchers appear to sample most are the species they encounter most often, i.e., widespread, terrestrial, diurnal species. However, some primates were sampled more often because they are already intensively studied for other research, because they live in frequently visited field sites, or because of their importance in medical research. Across countries, we found that in general, parasite sampling is highest in countries with more primates to sample. We expected that the GDP of the countries would also affect sampling effort, with wealthier countries having more money for disease monitoring. However, we found no evidence for this in our analyses, probably because most research on primate diseases is not funded by the country in which the research takes place.

Sampling effort for primate parasites across the world. Poorly-sampled countries are in red, and better sampled countries are in yellow.

When we extrapolated parasite species richness values we found that even within our best-sampled primates and countries, we are missing a lot of parasites. On average we predicted that 38-79% more parasite species than currently reported in the GMPD should be found in our best sampled primate species, and 29-40% more parasite species than currently reported in the GMPD should be found in our best sampled countries. This emphasizes exactly how poor our sampling is across all primates and countries. Another concern is that although viruses make up only 12% of the parasites in our dataset, viruses arguably present the greatest zoonotic disease threat to humans because their fast rates of evolution should allow them to easily adapt to new hosts.

What next?

Identifying parasite sampling gaps across primate species and geographic regions is only the first step; we need to find strategies to minimize these sampling gaps if we are to predict which primate diseases may emerge in humans. One solution is to set research priorities based on the sampling gaps, for example, by focusing effort and funding on relatively poorly-sampled primate species, arboreal primates, those with small geographic ranges, or those found in relatively poorly-sampled regions of South East Asia, Central and Western Africa, and South America.

Focusing on relatively poorly-sampled primate species and areas may improve our general understanding of primate parasites, but it is only one factor in predicting risk to humans. For example, hosts are more likely to share parasites with their close relatives than with more distant relatives. Thus, continuing to focus our sampling efforts on parasites of our closest relatives (chimpanzees, gorillas and orangutans) may provide the greatest return in the case of risks to humans. This is particularly important because we found that chimpanzees are expected to have 33-50% more parasites than currently found in the GMPD. In addition, ecological similarities also influence parasite sharing among primates, and humans share more parasites with terrestrial than arboreal primate species. As with sampling effort, this probably reflects higher contact rates among humans and terrestrial primates compared to arboreal primates. As a related issue, a host living at higher density is expected to have higher prevalence of parasites and may have more contact with human populations or our domesticated animals, thus increasing opportunities for host shifts to humans. The large numbers of zoonotic emerging infectious diseases with rodent or domesticated animal sources also highlight the importance of rates of contact and host density for disease emergence in humans.

In conclusion, sampling effort for primate parasites is uneven and low. The sobering message is that we know little about even the best studied primates, and even less about the spatial and temporal distribution of parasitism within species. Much more sampling is needed if we hope to predict or prevent future emerging infectious disease outbreaks.

Author

Natalie Cooper

nhcooper123

ncooper[at]tcd.ie

Photo credit

Natalie Cooper, wikimedia commons

Palaeo-poetry and placental mammals


Recently Science published O’Leary et al.’s paper – a new load of oil to fuel the burning debate on the origins of placental mammals.

Just to be clear: there is an important distinction between mammals in general, a group that includes many fossils from the Jurassic as well as the extant platypus, kangaroo and your grandma; and placental mammals, a group that includes your grandma and the armadillo (but not kangaroos or the platypus) and has no fossils older than 65 Myr. For readers not used to the debate concerning the first placental mammals, here’s the main question: did the first placental mammals diversify before or after 65 Myr (the important KT boundary)?

(1) After Katie

One view suggests that early mammals lived in the shadow of the dinosaurs and that the demise of these mighty creatures allowed our rat-like ancestors to take over the earth, the seas and the skies (“Save yourselves, mammals!”). This idea was proposed by Simpson in the 1950s and is supported by the fossil record; many dinosaurs (both big and small) were present before the 65 Myr KT boundary, then a catastrophic meteorite impact marked the KT limit, and placental mammals radiated after that. This sequence of events seems very straightforward, but reality appears not to be so simple. Increasing numbers of mammal species from the late Cretaceous are being discovered (including rather big ones that fed on dinosaurs), and not many species in general are found in the fossil record before 55 Myr, when all groups of placental mammals seem to suddenly appear (for the full story see Luo’s 2007 Nature review).

(2) Before Katie

In contrast, another group of people, relying mainly on molecular dating methods, argue that the post-KT placental mammalian diversification may just be an artefact of the fossil record (e.g. Meredith et al. 2011, again in Science). Their DNA evidence seems to say that placental mammals evolved before the KT limit and that either palaeontologists have failed to find them or else the fossil record failed to preserve them. One major criticism, raised by more moderate voices, is that there are still problems associated with molecular dating methods. I won’t go into the details (yes, I’m trying hard not to) but molecular dating relies on DNA on the one hand (sampling quality and modelling) and on the fossil record on the other. So if the people using DNA criticise the fossil record and want to improve their dating estimates, they have to rely on the same fossil record they are criticising. The snake bites its own tail.

So what about O’Leary and colleagues’ paper? They basically support the first theory (placental mammals evolved after the KT boundary). Fair enough: it was led by a number of great palaeontologists and based on a massive morphological data set (~4500 characters, introduced as phenomics (from the phenotype) as opposed to genomics (from the genome) data) collected on 40 unambiguous fossils and 46 extant placental mammals. Genomic data based on 26 genes of these extant placental mammals was also included. This paper is the result of an impressive and unique collaborative effort, but – as Ned Stark from Game of Thrones said, “nothing someone says before the word ‘but’ really counts” – this paper is criticisable…

First of all, the data set: although the morphological data is impressive, the taxon sampling seems a bit weak, especially for extant placental mammals. Meredith et al. used the same genomic data (26 genes) but based on ~164 mammals to answer the same question. Why couldn’t O’Leary et al. use all of this already published mammal DNA? For the second criticism, I’m just going to quote Yoder’s review, published in the same issue: “Today, sophisticated theoretical and computational methods are used to estimate and calibrate molecular phylogenetic branch lengths (which represent time). Together with improved methods for integrating fossil and molecular data, dates derived from molecular phylogenies have inched closer to those implied by the fossil record. Is the approach used in the O’Leary et al. study directly comparable to these recent molecular phylogenetic studies? Not really, as it turns out.”

No wonder this paper supports the first theory: it is just a precise and massive analysis of 40 species from the placental mammal fossil record. Personally, I’m really frustrated by how they managed to publish this paper. Since it’s part of my PhD research, I automatically get excited when I see fossils mixed with extant species, so I really hoped this paper would link the two approaches instead of supporting the old-fashioned view of evolution (the dinosaurs dying and the mammals taking over). I’d like to think that the history of life is a bit more complex and exciting…

A last comment, which justifies my title and is my main criticism of this paper: O’Leary et al. tried to recreate the “hypothetical placental mammal ancestor”.


As I said, this paper could be seen as a summary of the placental mammal fossil record. So why did they break the first rule that keeps palaeontology away from palaeo-poetry (i.e. going too far with palaeontological hypotheses)? Here they reconstructed a whole creature from their morphological data. What they made was essentially a mean (average) placental mammal (a primitive rat-like creature) – a throwback to the early stages of palaeontological views of mammalian evolution. What did the ancestor of a duck and a beaver look like? Something in between – a platypus, for example? As Olaf Bininda-Emonds said on Ed Yong’s Nature post, comparing the two estimates is like comparing “apples and oranges”: “they haven’t really done anything to resolve this on-going dispute”.

This paper has also caused controversy on twitter. I’ll just cite two opinions.

Gavin Thomas (@Phalaropus)

“The reconstruction is fun – I’d love to see a picture based on 95% CIs for the ancestral states.”

and Rich Grenyer’s answer (@rich_)

“yes indeed. Something like this” (see our title image).

Many parts of the online science community got excited about this paper; you can see further discussion on Jerry A. Coyne’s blog (here and here), on Ed Yong’s blog (here and here or there), or else on the twitter feed #placental.

Author

Thomas Guillerme: guillert@tcd.ie

Photo credit

http://aerox21.deviantart.com/

Coursing conundrum


At first glance, many scientific ideas can appear counterintuitive. A press release from a leading Irish wildlife charity in support of the proposed coursing ban prompted me to attempt to balance the discussion of coursing impacts on the Irish hare population. The bill to ban coursing is due to come before the Dáil in the coming months. However, the above press release immediately struck me as biased, and so I felt a discussion of coursing impacts was required before the public were asked to sign any petitions in support of this ban.

For those unsure of just what coursing is, it is a popular field sport which consists of a hare being chased by a pair of greyhounds over a short distance. Unlike fox and deer hunting, the aim of coursing is not to kill the hare. It is instead a speed and agility competition between two dogs, where each is awarded points depending on its ability to “turn” the hare from a direct route along the field. Irish hares (Lepus timidus hibernicus Bell 1837) are caught and held in captivity prior to an event during which the hare is coursed within an enclosed park. A running hare is given a 75m head start before the release of two dogs, whose performance is assessed by a judge, and surviving hares escape into an area from which the dogs are excluded. The duration of the pursuit is relatively brief, usually lasting less than a minute, and surviving hares are returned to the wild after the event.

The IWT states something with which a few of us may agree: that Ireland is lagging behind in terms of its attitude to the welfare and conservation of native wildlife. However, the idea that a coursing ban would in some way improve this status is highly questionable. Welfare issues need to be taken into account, but these considerations must be viewed in parallel with the beneficial aspects of coursing, such as habitat conservation and the associated protection of both target and non-target species, before any final judgements regarding the acceptability of coursing can be made. It is perhaps unintuitive, but evidence indicates that coursing has an extremely large positive impact on hare numbers. Mortality of coursed hares has stood at just 4.1% since the introduction of dog muzzling in 1993, and research has found coursing to have negligible impacts on hare populations due to their large intrinsic rates of increase. People who participate in coursing maximise hare populations in coursing preserves through predator control and set-aside to conserve habitat suited to the Irish hare. In fact, it is agricultural intensification (an issue completely ignored in the IWT article) which is more likely to be to blame for population declines. Habitat management to encourage target species for hunting can protect against the detrimental effects of modern agricultural policy on biodiversity. Irish Coursing Club preserves host a hare density three times greater than that supported by the wider countryside. What is more probable is that coursing is actually stemming the tide of anthropogenic destruction of many species of our native wildlife (including corncrakes and many other farmland bird species) through habitat conservation aimed at artificially increasing hare populations for coursing.

If coursing were to be banned in this country, this practice would be completely abandoned due to waning interest in encouraging hare numbers, which could potentially have serious ramifications for other wildlife that benefit from the associated habitat management and predator control. Incentives to promote hare conservation would be required, but it’s questionable whether these would produce the same results as coursing-associated management, due to a lack of personal interest for farmers and other landowners who practise coursing. Hare conservation in the absence of coursing, similar to that of other species benefiting from game management, would be a costly endeavour and would be unlikely to be awarded the necessary funding in the Republic of Ireland in the current economic climate.

We have the opportunity to be forward-thinking, innovative and inclusive in the way in which we achieve sustainable conservation of our native wildlife, something which appears all the more important in light of the EU agricultural policy reforms which were leaked in recent days. We can only hope that a review of the research will stop the Dáil bowing to ill-informed political pressure and perhaps, the future of farmland birds and our only endemic mammal, the Irish hare, will be ensured.

Author

Emma Murphy: butlere1[at]tcd.ie

Photo credit

wikimedia commons

Treasures of Natural History


The Natural History Museum in London is one of my favourite places. The majesty and beauty of the building’s design is a fitting exterior to house the truly stunning collections within.

The new Treasures exhibition displays just 22 of the museum’s most prized possessions. It’s a special opportunity to see valued and varied treasures such as the type specimen of the earliest known bird, Archaeopteryx, Darwin’s pigeons and the Iguanodon teeth which sparked the discovery of the dinosaurs all lined up together. The stories behind the origin and significance of each of the treasures are fascinating.

Although it is not one of the most famous objects, the Emperor Penguin egg was my favourite item. The egg is beautiful in itself but its real value as a treasure lies in the story behind its collection. It is one of just three intact specimens collected by Captain Scott’s ill-fated Antarctic expedition between 1910 and 1913. In this centenary year, Scott’s quest to reach the South Pole remains one of the most inspiring examples of human endeavour. Before the age of GPS, insulated clothing or re-heatable “expedition food”, Scott’s crew ventured into the heart of the frozen continent. As if striving to reach the Pole wasn’t enough of a challenge in itself, the expedition also had the aim of collecting as much scientific data about Antarctica as possible. The chief scientist, Edward Wilson, wanted to examine penguin embryos for evidence of an evolutionary link between birds and reptiles. However, Wilson was part of the team who perished with Scott on the return journey from the Pole. The specialist embryologist who was going to study the eggs died in the First World War, and by the time results from studying the eggs were published in 1934, the evolutionary recapitulation theory on which the egg study was based was outdated.

The story behind this egg certainly puts the trials of modern scientific research into perspective. While the rate of new scientific discoveries shows no sign of slowing, delving into the finer details of the inner workings of the cell or the evolution of drug-resistant bacteria doesn’t hold quite the same level of physical adventure as that which was represented by Scott’s expedition. I’m not for one instant suggesting that I long for a more dangerous yet adventurous age or that modern fieldwork is not without its trials and difficulties. I just don’t know of any current research projects which can match Scott’s story in terms of raw human endeavour into the most unknown, dangerous and inhospitable conditions imaginable. It was a treat to see such a precious and fragile reminder of past scientific endeavour on display.

The treasures exhibition is a stunning collection of prized objects and should be treated as a site of pilgrimage for anyone even remotely interested in evolution or the natural world. Most of the specimens are unique but fortunately an example of one treasure, the Great Auk can be seen in the TCD Zoology museum. So that’s one ticked off the list for Trinity – though I’ve a feeling that it might take just a little while for us to match the rest of London’s collection …

Author

Sive Finlay: sfinlay[at]tcd.ie

Photo credit

wikimedia commons

What is Life?


February 5th marked the 70th anniversary of the first lecture of what was later to become Schrödinger’s highly influential book ‘What is Life?’.

While Schrödinger may be better known to the public for his infamous zombie cat, it was his thinking on how life can be reconciled with the laws of physics that has allowed him to transcend the major divide between the physics and biology communities.

Schrödinger’s genius insight was to see life as a system behaving according to, and constrained by, the second law of thermodynamics, and in particular to describe the hereditary material as an ‘aperiodic crystal’, an idea that would later help lead scientists to the discovery of the structure of DNA and the genetic code.

However, what makes this work even more fascinating, particularly to me as I have been lucky enough to give a (very) small talk in the same lecture theatre where Schrödinger gave his lectures 70 years ago, is the story of how an Austrian physicist ended up in the capital of Ireland writing some of the most important work in biology of the last century.

Schrödinger, an Austrian, fled Germany in 1933 due to his dislike of the Nazis’ anti-Semitism and became a fellow in Oxford, during which time he shared a Nobel Prize with Paul Dirac. However, things turned sour at Oxford due to his less than monogamous approach to the opposite sex, and the lack of acceptance of his living with both his wife and his mistress led him to leave for Princeton. These problems followed him there, and eventually he found himself back in Austria in 1936. The occupation of Austria by the Germans in 1938 led him to flee again, this time to Italy. But in the same year Éamon de Valera, Ireland’s Taoiseach (Prime Minister) at the time, personally invited him to Dublin, where he spent the next 17 years.

It was here in Trinity College Dublin that he delivered the lecture series that was to inspire both Watson and Crick to search for the genetic molecule, and which has recently seen increased popularity thanks to its use by Brian Cox in his latest series, Wonders of Life. However this work came about, I like to think that ‘What is Life?’ found its way to Ireland through Nazis and polygamy – a story surely of Discovery Channel calibre, if only it had some sharks.

Author

Kevin Healy: healyk[at]tcd.ie

Photo credit

wikimedia commons

Intelligent Design: Part Two – Dr Alistair Noble’s ‘The Scientific Evidence for Intelligent Design’: the claims


“A lie can travel halfway round the world before the truth has got its boots on” (Mark Twain, attributed).

In my previous post I gave some background on intelligent design, the theme of a talk by Dr Alistair Noble that I recently attended. This time, I’ll try to address his claims.

It is easy to say something that is not true. It is not always so easy to explain why it is not true. Such is my problem here. I can summarise Dr Noble’s arguments into a few sentences, but it takes paragraphs to explain why they are wrong. Here goes!

His argument centred on DNA. Dr Noble’s background in chemistry, specifically in attempts to synthesise chemicals artificially, showed him how difficult it is to make even simple molecules. He explained his problems with DNA and used two specific examples to illustrate his argument: the bacterial flagellum and cytochrome C. His arguments were essentially:

  1. they look designed
  2. they are too complex to have arisen by chance

 

The design argument can be easily refuted: apparent design does not mean actual design. Humans are extremely good at seeing things where they do not exist, like shapes in clouds and Jesus on burnt toast. This is a well-known psychological phenomenon called pareidolia, and it can lead us to see design where none exists.

The second claim requires a bit more care. DNA, the bacterial flagellum and cytochrome c are all highly complex and could not have arisen by chance alone. In fact, as Dr Noble so carefully illustrated, cytochrome c would have taken longer than the lifetime of the universe to arise by chance. So if they did not arise by chance then they must have arisen by design, surely? Well, no.

This conclusion can only be reached through a deep misunderstanding of evolution. At a very basic level, random mutations occur which may be beneficial, neutral, or detrimental to an individual. Natural selection then ‘selects’ those mutations which are beneficial and ‘rejects’ those that are detrimental. Small changes over long timescales lead to big changes; mutations can build on each other and can be co-opted for other functions. The bacterial flagellum is a perfect example, with studies showing how molecules were co-opted from other functions to form the flagellum. At no point was there a useless proto-flagellum.

ID proponents, including Dr Noble, focus on the random aspect of evolution but completely ignore the selection part, which is arguably the more important aspect. If there were no natural selection then their claims would be valid, but its presence provides a beautifully simple explanation of how complex molecules, complex biological components, and even complex organisms could arise.
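The difference selection makes is easy to demonstrate with a toy simulation in the style of Richard Dawkins’ ‘weasel’ program – a sketch of my own for illustration, not anything presented in the talk. Hitting a 28-character target by pure chance would take around 27^28 (roughly 10^40) attempts, yet once the best variant is retained each round, it emerges in a few dozen generations:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    """Number of positions that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(pop_size=100, mutation_rate=0.05, seed=1):
    """Cumulative selection: copy with random errors, keep the best copy."""
    rng = random.Random(seed)
    # start from a completely random string - pure 'chance'
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        generations += 1
        # mutation: each character has a small chance of being miscopied
        offspring = [
            "".join(rng.choice(ALPHABET) if rng.random() < mutation_rate else c
                    for c in parent)
            for _ in range(pop_size)
        ]
        # selection: the fittest string (parent included) founds the next generation
        parent = max(offspring + [parent], key=score)
    return generations

print(evolve())  # converges in a modest number of generations, not ~1e40 attempts
```

The simulation is, of course, a caricature – real selection has no fixed target – but it captures why “too improbable to arise by chance” tells us nothing about mutation plus selection.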

Next time, my review of the talk.

Author

Sarah Hearne: hearnes[at]tcd.ie

Photo credit

wikimedia commons

The Flora of the Future


It’s the year 2050. Several billion more humans occupy the world, and species translocations are by now the norm to mitigate increased urban sprawl, climatic instability and a sea level a third of a metre higher. In spite of unprecedented demands on the natural environment, governments have slowly developed the capacity to conserve wilderness and semi-natural habitat. Beyond this, with the vast majority of the human race now living in cities and the continued trend of rural land abandonment, restoration ecology has come to the fore at entire landscape and regional scales. The concept of ‘rewilding’ is debated openly amongst politicians and the public – no longer the mere theoretical exercise of academics. The monetary value of ecosystem services is also by now a very real and tangible concept within economic circles, embedded within highly developed metrics such as green GDP. Despite such positive developments, however, problematic legacies of the past remain. Intensification of agriculture has been unrelenting globally, notwithstanding inroads into the adoption of agroecosystem approaches. A transition to truly renewable energy sources is still incomplete and of utmost urgency. And one of the most critical questions of all most likely still looms – have we yet done enough to put a cap on the peak of this, the sixth great mass extinction of life on the planet?

And so, it is within this future and nonetheless challenging world that we find the modern ecologist and biodiversity practitioner at work.

What kind of new and useful technologies may exist to help tackle the problems and challenges of this not-so-distant future? It is interesting to deliberate on one low-tech tool in particular (the so-called bread and butter of biodiversity), which has been with us for centuries already – the humble species checklist. Specifically, we take a look at the Flora – and although coverage here is rather phyto-centric, it should be easy to draw equivalents for all forms of taxa without (too) much stretch of the imagination.

So what is a Flora in the traditional sense, why is this changing, and how will the Flora of the Future look and function? To briefly tackle the first two questions: a Flora is primarily a list of plant biodiversity (with or without diagnostic characters and keys) within a specified geographic range, be it local, national or at larger scales. Beyond this basic function there are the ‘added extras’, which may include notes on distribution, ecology, synonymy, conservation status and even ethnobotanical use. The assembly of national-level Floras has often proven quite a mammoth task: logistically challenging, fraught with funding difficulties, and above all time-consuming – with efforts spanning several decades for particularly biodiverse countries. This is all very well, and such traditional Floras have served, and will continue to serve, as invaluable tools. In this modern age, however, change is called for to tackle some common shortcomings of the Flora. A considerable amount of valuable information collected by taxonomists and other experts in the production process is typically lost, never making its way into the public realm – and when such publications can easily run to over 20 volumes, it is clear to see the major constraints involved. Another key drawback is the sheer speed at which redundancy can occur. Even before the final volume of a Flora is published, taxa (species/genera/families) covered within the first volumes may have long been ripe for new taxonomic treatment.

The revolution in how biological information is collected, stored and disseminated is already greatly influencing the Flora. One of the most recently initiated national-level projects is the Flora of Nepal project, for which advances in biodiversity informatics have permeated the entire process from preparation to publication. Although the Flora of Nepal will still be published in printed format, a (if not the) main focus will be an E-Flora freely accessible online, which will also greatly expand the availability of information assembled by experts. A simple yet very significant feature will be the ease of portability of numerous volumes to the field in digital format. Perhaps most critically, though, the Flora of Nepal will be maintained and updated to reflect new findings – creating for the first time, in essence, an evolving Flora.

Before we really begin to speculate on the form and function of our Flora of the Future, we must first take a look at the current cutting edge of biodiversity informatics. In what must be one of the most significant advances in decades, the cooperative development of the Global Biodiversity Information Facility (GBIF) by many governments and organisations has promoted and facilitated the “mobilization, access, discovery and use of information about the occurrence of organisms”. This centralised repository of earth’s biodiversity is set to reach one billion indexed records within a few years, fed from diverse sources ranging from individuals to national biodiversity data centres. It is difficult to envisage how the Flora of the Future could plausibly side-step such a global network. Whereas Floras have traditionally featured a top-down, expert-driven synthesis, the Flora of the Future will no doubt also integrate the emergent trend of bottom-up assembly of knowledge – a good example of which is currently provided by the Encyclopaedia of Life.

Let’s get back now to our future ecologists and biodiversity practitioners, and look in as they go about conducting their fieldwork. No matter what habitat or location they study worldwide, they will each possess a small handheld device connected to the Flora of the Future. Automation of species identification by means of this device will have removed a large bottleneck in their work – leaving ecologists to focus on actual ecology. No longer will they be bound to a particular geographic territory by limited floristic familiarity – we will witness a complete opening of boundaries, and greater migration of ‘western’ ecologists to the frontline of areas of global biodiversity importance.

But just how exactly could such a device work? A potential basis could feature a combination of machine-learning morphometrics and DNA barcoding – two presently very promising tools. For the former, development of algorithms for auto-identification of plant species is already well underway (see for example the Leafsnap mobile app). These function much like facial recognition technology: from an input digital scan or photo, they can pinpoint the unique morphological characteristics required for successful classification. A key aspect of machine learning is the removal of subjectivity by conversion of shapes into numerical descriptions – no need for argument any longer on just how ‘subglobose’ a feature is; the ball is already in motion towards a predictive and integrative taxonomy. Upon scanning a specimen in the field, an image will be broken down into key morphometric characteristics and referenced against a large central database within the Flora of the Future. The Flora will prioritise this procedure by first referencing against species known to occur within a certain radius of where the user currently stands (a useful feature in itself!). The ecologist, on the spot, may learn that the specimen has a confirmed match, and proceed to download key local statistics of importance. On the other hand, the specimen may in fact represent an extension of the species’ known distributional range. The finding of no known match in the database could spell discovery of a new species, whereas a positive match with notably low morphometric agreement may indicate a new subspecific taxon or other interesting findings (in both cases, DNA barcoding could be employed for further verification).
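The radius-prioritised matching procedure described above can be sketched in a few lines. This is purely illustrative: the species names, coordinates, feature vectors and thresholds are all invented, and a real system (Leafsnap included) uses far richer features and classifiers than a simple nearest-neighbour lookup.

```python
import math

# Hypothetical reference records: (species, (lat, lon), morphometric feature vector).
# All values are invented for illustration.
FLORA_DB = [
    ("Primula vulgaris", (53.34, -6.25), [0.82, 0.10, 0.55]),
    ("Primula veris",    (53.29, -6.20), [0.78, 0.15, 0.60]),
    ("Gentiana verna",   (53.00, -9.00), [0.20, 0.90, 0.30]),
]

def distance_km(a, b):
    """Crude equirectangular approximation - good enough for a radius filter."""
    lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * math.cos(lat)
    dy = math.radians(b[0] - a[0])
    return 6371 * math.hypot(dx, dy)

def identify(features, here, radius_km=50, threshold=0.15):
    # 1. prioritise species known to occur near where the user stands
    local = [r for r in FLORA_DB if distance_km(here, r[1]) <= radius_km]
    candidates = local or FLORA_DB  # fall back to the full database
    # 2. nearest-neighbour match in morphometric feature space
    best = min(candidates, key=lambda r: math.dist(features, r[2]))
    gap = math.dist(features, best[2])
    if gap <= threshold:
        return best[0], "confirmed match"
    return best[0], "low agreement - possible new taxon, verify with barcoding"

# A scan taken in Dublin that closely matches a locally known species:
print(identify([0.80, 0.12, 0.56], here=(53.35, -6.26)))
```

The geographic pre-filter is what makes the lookup cheap and, as noted above, useful in itself; the low-agreement branch is where the flagging of potential new taxa would hook in.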

Regardless of outcome, the above three scenarios allow for a real-time, in situ solution to species identification. The significance of this process lies not only in freeing up both ecologists’ and taxonomists’ resources, but in the real-time flagging of new discoveries. As it stands, discovery of the remaining undescribed plant species is expected to be an incredibly inefficient process, despite the vast number thought still to exist (50% of the world’s plant species have been discovered by only 2% of plant collectors). A recent study examining this inefficiency in the production chain from collection to publication uncovered that “on average, more than two decades pass between the first collection of samples of a new species and the publication of the species’ description in scientific literature”. In other words, a specimen of a new species has physically passed through the hands of many people before the simple ‘discovery’ (perhaps after many, many years in a herbarium) that it is something new to science. In this sense, an important function of the Flora of the Future will be instant recognition (perhaps even while standing in the field!) of a new discovery as just that – drastically reducing this presently overblown timeframe and waste of resources.

Getting back to the future for now, we see our biodiversity practitioners and ecologists as key players in the advancement of ecological as well as taxonomic discovery, with a highly efficient yet passive capacity for discovery embedded within the commonplace tools they use as they go about their work. With an entirely streamlined approach to field research, and identification no longer a daunting prospect in the study and documentation of biodiversity, we may eventually see the peak of mass extinction pass behind us. The challenges of tomorrow are no doubt great, and a renewed vigour for the taxonomic process will be critical for progress on these fronts. The Flora of the Future will for the first time sew a seamless line between ecologists and taxonomy – the essential currency of biodiversity.

Author

Paul Egan: eganp5[at]tcd.ie

Photo Credit

Paul Egan