Biotechnology is one of the hottest growth areas in medicine today. It is central to both the diagnosis and treatment of disease. This chapter provides an overview of the application of biotechnology to medicine from the early twentieth century to the present day. It begins by looking at how DNA first became associated with disease in the mid-twentieth century and then explores a number of the biotechnological tools that have become important in medicine. As this chapter shows, their development has not always been a straightforward process. Indeed, some of the advances, notably genetic engineering, stem cell therapy and gene therapy, have been dogged by controversy. In addition, a number of the new treatments aided by biotechnology pose significant challenges to traditional modes of production. Furthermore, they have greatly increased the cost of healthcare.

Biotechnology is intrinsic to medicine. Everywhere you look today, from medical research conducted in the laboratory through to the diagnosis and clinical treatment of a patient, biotechnology is pivotal to that process. Despite its importance, few non-scientists understand what biotechnology is, where it has come from or the many different functions it serves in everyday healthcare.

The term biotechnology was first coined in 1919 by Károly Ereky, a Hungarian agricultural engineer and economist. At its most basic level, biotechnology refers to the controlled and deliberate manipulation of organisms and living cells to create products for the benefit of humans. In one form or another, humans have deployed biotechnology for thousands of years. Since prehistoric times, for example, they have used yeast to get bread dough to rise and to produce alcoholic drinks. Bacteria have been added to milk for generations to make cheese and yoghurt, and animals and plants have been selectively bred over many centuries to generate stronger and more productive offspring for multiple purposes. In more recent years, increasing knowledge about how to manipulate and control the functions of various cells and organisms, including their genes, has given birth to a burgeoning number of products and technologies for combating human disease.

Biotechnology is currently one of the hottest growth areas in medicine. Between 2001 and 2012 investment in medical biotechnology research rose globally from £6.7bn to £66bn.1  Such work is helping to determine the molecular causes of disease, to generate more accurate and faster diagnostic platforms and to develop drugs that are more precise in their target and personalised for individual patients.

Just how important biotechnology has become can be seen from the fact that by 2013 seven out of ten of the best-selling drugs were biological products. Also known as biologics, such drugs replicate natural substances in our body, including enzymes, antibodies and hormones. They are made from a variety of natural resources—human, animal and microorganism—and are usually manufactured using biotechnology techniques. Biological products also include living entities, such as cells and tissues. Analysis in 2013 predicted that by 2018 biological drugs would account for a quarter of all drug spending worldwide and for more than 50 percent of the top-selling drugs in the world. Such therapeutics include those that are manufactured in either animal cells or bacteria and make use of the body's natural immune system to fight disease.2

Much of the application of biotechnology in medicine is directed towards addressing structural changes at the molecular level that cause disease. This rests on the premise that an illness can be driven by an abnormality or deficiency of a particular molecule. Such thinking can be traced back to the work of Linus Pauling and colleagues at the California Institute of Technology in the late 1940s. Importantly, they demonstrated that sickle-cell anaemia, an inherited blood disorder, was linked to an abnormal haemoglobin, the protein responsible for delivering oxygen to cells in the body. They hypothesised that the mistake in the protein was caused by a defective gene. Pauling went on to win the Nobel Prize in Chemistry in 1954 for his wider research into the nature of the chemical bond.3  A gene is a distinct stretch of DNA (deoxyribonucleic acid) that carries the instructions needed to create proteins, specific molecules that are essential to the functioning of the body. Proteins not only do most of the work in cells; they are also vital to the structure, function and regulation of the body's tissues and organs.

The concept that DNA could play a role in the disease process was highly novel for the time. DNA had been first discovered in the late nineteenth century, but remained little studied for many decades. In part this was due to the belief that DNA was an inert substance incapable of carrying genetic information because of its simple structure. Instead, proteins, which had a more complex structure, were assumed to act as genetic material. Attitudes to DNA began to shift as a result of some experiments by the physician and molecular biologist Oswald Avery and colleagues at the Rockefeller Institute in New York. In 1944 Avery showed that DNA could transform non-infectious bacteria associated with pneumonia into dangerous virulent forms.4  Avery's work ignited a new interest in DNA. It would take time, however, for scientists to agree that it was DNA, not proteins, that carried genetic information. Consensus finally emerged after experiments conducted by the geneticists Alfred Hershey and Martha Chase at Cold Spring Harbor in 1952.5 

By the 1950s a number of researchers had begun to investigate the structure of DNA in the hope that this would reveal how the molecule worked. The structure of DNA was finally cracked in 1953 as the culmination of efforts by the biophysicists Rosalind Franklin, Maurice Wilkins and Ray Gosling, based at King's College London, and Francis Crick and James Watson, based in the Cavendish Laboratory, Cambridge University. Their work showed DNA to be a long molecule made up of two strands coiled around each other in a spiral configuration called a double helix. Each strand was composed of chemical sub-units called nucleotides, of which there are four complementary types: adenine (A), cytosine (C), guanine (G) and thymine (T). The two strands were oriented in opposite directions, with adenines always joined to thymines (A–T) and cytosines linked to guanines (C–G). This structure allowed each strand to act as a template for reconstructing the other, facilitating the passing on of hereditary information.6–8
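The complementarity described above means that the sequence of one strand completely determines the sequence of its partner. As a purely illustrative sketch (not from the chapter, and using an invented example sequence), the pairing rule can be captured in a few lines of code:

```python
# Illustrative sketch of the base-pairing rule: A pairs with T, C with G, and the two
# strands run in opposite directions, so either strand can act as a template for the other.

PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand: str) -> str:
    """Return the partner strand, read in the opposite (antiparallel) direction."""
    return "".join(PAIRS[base] for base in reversed(strand))

if __name__ == "__main__":
    print(complementary_strand("ATGC"))  # -> GCAT
```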

Soon after this breakthrough, in 1955, Fred Sanger, a biochemist at the William Dunn Institute of Biochemistry, Cambridge University, unveiled, for the first time, the full molecular composition of a protein: insulin. This protein, Sanger showed, had a specific sequence of building blocks, known as amino acids.9  Sanger's finding was quickly seized upon by Crick, who by 1958 had developed a theory that the arrangement of nucleotides in DNA determined the sequence of amino acids in a protein and that this in turn regulated how the protein folded into its final shape.10  Crick argued that it was this shape that decided each protein's function. He further proposed that an intermediary molecule helped the DNA to specify the sequence of the amino acids in a protein. The key question was how to prove his hypothesis.11

Crick recognised that one way to find out would be to investigate sickle-cell anaemia. Pauling and his colleagues had hypothesised that the difference between the haemoglobin of sickle-cell patients and that of healthy individuals could be down to a difference in its amino acids. How many amino acids were involved remained unknown. Was it just one amino acid or more? Crick realised this could be resolved with the technique Sanger had developed to work out the composition of amino acids in insulin.

Based on this thinking, Crick launched a collaboration with Sanger and Vernon Ingram, a colleague in the Cavendish Laboratory. By 1957, after many hours of painstaking work, Ingram had determined that the difference between normal and sickle-cell haemoglobin was down to the replacement of ‘only one of nearly 300 amino acids’.12  Ingram's finding was a significant breakthrough. Not only did it challenge the scepticism of many scientists that the alteration of just one amino acid could produce a molecule as lethal as sickle-cell haemoglobin, it also marked a first step in breaking the genetic code, the process by which cells translate information stored in DNA into proteins.11  Ultimately the work on sickle-cell anaemia laid the foundation for a whole new approach in medicine, known as molecular medicine. Critically, it ignited a search for other genes or molecules that contributed to disease and ways to harness them for treatment.13,14

Unravelling the genetic process behind sickle-cell anaemia was just one of many investigations undertaken in the 1950s to understand the relationship between DNA and disease. Elsewhere microbiologists and biologists were examining the role of genetics in drug resistance. They were hunting for the biological mechanisms bacteria use to resist viruses and other pathogens and to thwart natural antimicrobial substances designed to kill them or inhibit their growth. Such work was part of a broader effort to understand the mechanisms underlying the rising resistance of bacteria to antibiotic drugs, widely prescribed for medical treatment from the early 1940s. One of the fruits of such endeavours was the discovery of biological mechanisms for manipulating and copying DNA.

Plasmids were one of the earliest biological tools scientists unearthed. Discovered in the 1940s, plasmids are small, independent, self-replicating strands of DNA that naturally exist in most bacteria and in some fungi, protozoa, plants and animals. They come in a wide variety of lengths and provide the host organism with genes for coping with stressful conditions, such as encountering substances like antibiotics that impede their growth or threaten their survival. Plasmids have several useful characteristics. Firstly, they contain only a small number of genes. Secondly, they snap quickly back into shape when cut open. Because of these features, scientists rapidly explored their use as a vehicle, or vector, for cloning, transferring and manipulating genes within the laboratory.

Soon after finding plasmids, scientists discovered a number of biochemical enzymes capable of cutting and pasting DNA. One of the first was DNA polymerase, discovered in 1957; made by all living organisms, it helps replicate a cell's DNA. Another important group were restriction enzymes, which bacteria use to cleave and destroy the DNA of invading viruses. Restriction enzymes were suggested to exist as early as 1952, but the first one was only isolated and characterised in the late 1960s. Often described as ‘molecular scissors’, restriction enzymes provided the means to cut DNA very precisely for the first time within the laboratory. By 1968 scientists had isolated another type of enzyme, known as ligase, which bacteria use to repair single-strand breaks in DNA. This provided an avenue for joining different DNA fragments together.

The discovery of plasmids and these biochemical enzymes laid the foundation for the development of genetic engineering. This method involves selecting and cutting out a gene at a specific point on a strand of DNA using restriction enzymes, and then inserting it into a plasmid to produce recombinant DNA. The very first piece of recombinant DNA was generated in June 1972 by Janet Mertz, a biochemistry graduate student working at Stanford University with Paul Berg, who would go on to win a Nobel Prize in 1980. She did this as part of a project to understand gene expression in human cells and its misregulation in cancer. Her recombinant DNA contained genes from simian virus 40 (SV40), a virus that lives in some monkey species, and from a bacteriophage, a type of virus that infects bacteria.
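The cut-and-paste logic behind recombinant DNA can be illustrated with a toy sketch. The example below is not from the chapter: it models DNA as simple linear strings, uses the real recognition site of the EcoRI restriction enzyme (GAATTC, cut after the first G), and invents a short 'plasmid' and 'gene' purely for demonstration.

```python
# Toy model of recombinant DNA construction: a restriction enzyme cuts DNA wherever its
# recognition sequence occurs, and ligase joins the resulting fragments together.
# Real plasmids are circular and cutting leaves 'sticky ends'; both are simplified away here.

ECORI_SITE = "GAATTC"
ECORI_CUT_OFFSET = 1  # EcoRI cuts between the G and the first A: G / AATTC

def cut_once(dna: str, site: str = ECORI_SITE, offset: int = ECORI_CUT_OFFSET):
    """Cut a DNA string at the first occurrence of the recognition site."""
    position = dna.find(site)
    if position == -1:
        raise ValueError("no recognition site found")
    cut_point = position + offset
    return dna[:cut_point], dna[cut_point:]

def insert_gene(plasmid: str, gene: str) -> str:
    """'Ligate' a gene fragment into a plasmid opened at its restriction site."""
    left, right = cut_once(plasmid)
    return left + gene + right

if __name__ == "__main__":
    plasmid = "TTCCGAATTCGGAA"   # hypothetical plasmid with one EcoRI site
    gene = "AATTATGAAACCC"       # hypothetical gene fragment to insert
    print(insert_gene(plasmid, gene))
```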

Despite her achievement, Mertz was prevented from cloning the DNA, the next step, which involved inserting the recombinant DNA into bacteria so that it could be replicated by their cell machinery. She was unable to take this step because of a controversy that broke out following her attendance at a workshop run by Robert Pollack at Cold Spring Harbor Laboratory in June 1971. Pollack was alarmed to hear during the workshop that she was proposing to insert genes from SV40 into Escherichia coli (E. coli), bacteria that live in the guts of humans and other animals. While SV40 is not known to cause any disease in humans, it had been shown in the laboratory to be capable of inducing tumours in rodents and in human cells grown in culture. Pollack was particularly worried that bacteria carrying the SV40 genes could escape from the laboratory, infecting people and other mammals and possibly giving them cancer.

Mertz proposed that the risk could be minimised by using an E. coli strain unable to survive outside the laboratory, but Pollack continued to raise concerns. This persuaded Berg to impose a moratorium on any genetic engineering experiments in his laboratory that introduced SV40 genes into E. coli until the potential safety concerns had been addressed. In the end, the first cloning of recombinant DNA was carried out in June 1973 by Stanley Cohen and Herbert Boyer, based respectively at Stanford University and the University of California, San Francisco. They achieved this on the back of the methods originally outlined by Mertz.15–17

News of Boyer and Cohen's experiment immediately ignited a fierce public debate about the safety of genetic engineering. So great was the furore that in 1974 a group of American scientists agreed a voluntary moratorium on experiments involving genetic engineering. This was lifted a year later following the introduction of strict guidelines drawn up at an international conference organised by Berg at Asilomar, California. The guidelines required laboratories to install tight containment facilities for any experiments with recombinant DNA.18  The Asilomar conference is now held up as a model of self-regulation by scientists. Indeed, it was recently used as a framework for debating experiments with CRISPR, a new form of gene editing described below. Significantly, Paul Berg played an important role in debating the guidelines for CRISPR.

Despite the initial controversy, genetic engineering soon grabbed the attention of venture capitalists, who swiftly began partnering with academic scientists to set up biotechnology companies to exploit the new technology. Genentech, founded in San Francisco in 1976, was the first company in the field. The new bioentrepreneurs envisaged inserting human genes into bacteria to produce unlimited quantities of heretofore scarce therapeutic proteins, a vision that spawned an entirely new industry. The first two successful products prepared with genetic engineering were insulin, approved by the US Food and Drug Administration (FDA) in 1982 for the treatment of diabetes, and human growth hormone, approved by the FDA in 1985 to treat children with severely restricted growth. Since then genetic engineering has been used to generate drugs for a range of other conditions, including cancer, immune deficiency, HIV and heart attacks. As Chapter 2 by Alldread and Birch and Chapter 3 by Buckland show, the technique now underpins the production of many different drugs as well as vaccines. Yet, as these two chapters highlight, the adoption of genetic engineering in this sphere was not as easy or straightforward as originally anticipated. Indeed, it involved significant technical and scientific challenges on the manufacturing front.

Within two years of Boyer and Cohen's successful cloning of recombinant DNA, another important tool appeared on the scene—a technique for the laboratory production of monoclonal antibodies (Mabs). These antibodies are derived from the millions of antibodies the immune system makes every day to fend off bacteria, viruses, pollen, fungi, and any other substance that can threaten the body, including toxins and chemicals. The first Mabs were produced in 1975 by Georges Köhler and Cesar Milstein, based at the Laboratory of Molecular Biology, Cambridge, UK. They developed this as part of their search for a research tool to investigate how the immune system produces so many different types of antibodies specifically targeted to the infinite number of foreign substances that invade the body.

While Köhler and Milstein were subsequently awarded the Nobel Prize in 1984, their invention attracted far less public fanfare at the time than the development of recombinant DNA. Over time, however, their innovation was to have an even more far-reaching impact in the medical field. Able to bind to specific markers found on the surface of cells, Mabs provided an important tool for detecting unknown molecules and establishing their function for the first time. As Marks points out in Chapter 4, Mabs opened up new pathways for understanding multiple diseases on an unprecedented scale and greatly enhanced the speed and accuracy of diagnostics.19  In addition they provided new avenues of treatment. Mab drugs are currently used to treat over 50 major diseases. Just how important they have become can be seen from the fact that Mab drugs now comprise six out of ten of the best-selling drugs worldwide and make up a third of all newly introduced medicines. Mabs are also now at the forefront of the development of immunotherapy for cancer, the subject of Chapter 5 by Marks. Hailed as one of the major advances in cancer treatment in recent years, such therapy is designed to induce, enhance or suppress the body's immune system to combat cancer.

Just two years after Köhler and Milstein produced their first Mabs in the Laboratory of Molecular Biology, the same laboratory witnessed the development of another revolutionary technique: DNA sequencing. Devised by Fred Sanger in 1977, this procedure determines the exact order of the four building blocks, or nucleotides, that make up a piece of DNA.9,20  The method has four key steps. The first involves removing DNA from a cell, which can be done either mechanically or chemically. The second involves breaking up the DNA and inserting the fragments into cells that self-replicate so that the fragments can be cloned. Once this is done, the DNA clones are mixed with a dye-labelled primer (a short stretch of DNA that promotes replication) in a thermocycler, a machine that automatically raises and lowers the temperature to drive replication. Finally the DNA segments are placed on a gel and subjected to an electric current, a process known as electrophoresis, which sorts the DNA fragments by size: the smaller fragments move faster than the larger ones. The different nucleotide bases are identified by their dyes, which fluoresce as they pass through a laser beam. All the information is fed into a computer and the DNA sequence is displayed on the screen for analysis (Figure 1.1).
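As a rough illustration of the final read-out step (not from the chapter, and using invented fragment data), the sequence can be recovered simply by ordering the dye-labelled fragments by length and reading off the terminal base of each:

```python
# Simplified sketch of the Sanger-style read-out: each dye-labelled fragment ends at a
# known base, electrophoresis sorts fragments by size (smallest first), and reading the
# terminal dye of each fragment in order reconstructs the sequence.

def read_sequence(fragments):
    """fragments: list of (length, terminal_base) pairs from a sequencing run."""
    ordered = sorted(fragments)                  # electrophoresis: shortest fragments travel furthest
    return "".join(base for _, base in ordered)  # the laser reads the dye on each fragment in turn

if __name__ == "__main__":
    # Hypothetical run whose fragments spell out ATGGC
    run = [(3, "G"), (1, "A"), (5, "C"), (2, "T"), (4, "G")]
    print(read_sequence(run))  # -> ATGGC
```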

Figure 1.1 The basic steps in DNA sequencing. Source: WhatIsBiotechnology.org (Reprinted with permission from Lara Marks).

Today DNA sequencing is one of the most important tools in medicine. What helped propel it forward was the development of another technique known as the polymerase chain reaction (PCR). Sometimes called ‘molecular photocopying’, PCR uses a polymerase enzyme together with two short matching strands of DNA, known as primers, to amplify and copy small segments of DNA. The founding principle of PCR was first set out in 1971 by Kjell Kleppe, a Norwegian researcher working in the laboratory of the Nobel Prize-winning scientist Har Gobind Khorana at the Massachusetts Institute of Technology, but its practical application was only demonstrated in 1985 by the American biochemist Kary Mullis, who was awarded a Nobel Prize in 1993 for this work.21

PCR involves two basic steps. First, a DNA sample is heated to separate its two strands. The separated strands are then used as templates to synthesise two new DNA strands, with the help of an enzyme called Taq polymerase. Once made, the newly synthesised DNA strands are in turn used as templates to generate two more copies of DNA. These steps are repeated multiple times with the help of a thermocycler. PCR can generate one billion exact copies of an original DNA target within a couple of hours (Figure 1.2).
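The billion-fold amplification follows directly from the doubling at each cycle: a back-of-the-envelope sketch (illustrative only) shows that roughly 30 cycles, each taking only a few minutes in a thermocycler, are enough.

```python
# Each PCR cycle roughly doubles the number of DNA copies, so the copy number after
# n cycles is 2**n (assuming ideal, 100% efficient amplification from a single molecule).

def copies_after(cycles: int, starting_copies: int = 1) -> int:
    """Ideal copy number after a given number of PCR cycles."""
    return starting_copies * 2 ** cycles

if __name__ == "__main__":
    for n in (10, 20, 30):
        print(f"{n} cycles -> {copies_after(n):,} copies")
    # 30 cycles -> 1,073,741,824 copies, in line with the 'one billion copies
    # within a couple of hours' figure quoted above.
```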

Figure 1.2 Diagram of the PCR process.

DNA sequencing is now an automated process. This has dramatically reduced both the time and cost of such work and increased the scale on which it can be carried out: it is now possible to sequence thousands of different DNA molecules in parallel. The technology has advanced so much that DNA sequencing has become a routine laboratory process, and DNA can now be analysed from millions of samples of blood, semen and tissue from patients every year. Some idea of how far the technology has travelled can be seen from the fact that the first human genome, sequenced by the Human Genome Project, started in 1990, cost nearly $3 billion and took 13 years to complete. By late 2015 it had become possible to sequence a whole human genome for just below $1500 in as little as 26 hours. Such whole-genome sequencing (WGS) is paving the way to a greater understanding of the relationship between genetic variations and disease. New devices are currently being developed to help in the management of critically ill infants with suspected but undiagnosed genetic disease.22,23

Many of those who campaigned for the setting up of the Human Genome Project did so in the belief that it would pave the way to understanding the genetic code of life and open up a new chapter in medicine. It could take many more years, however, before researchers fully understand how instructions encoded in DNA shape health and disease. Such work will necessitate the investigation of genome samples taken from thousands, probably millions, of people. Nor will it be straightforward. Not only does the genetic profiling of individuals raise major ethical and legal concerns, but many also question the importance of genes in health compared with other factors such as environment and lifestyle. Often the genomic data does not reveal a neat relationship between genes and health.

One area where WGS is already having a major impact is in the fight to control drug-resistant bacteria. These strains are a growing threat to public health globally. Traditionally most bacterial pathogens have been identified by growing a patient's specimen in a culture, testing its susceptibility to antimicrobial drugs and comparing it with other bacterial strains. The whole process is labour-intensive and time-consuming. Identification and susceptibility testing can take days, in the case of rapidly growing bacteria such as E. coli, or months, in the case of slower growing bacteria such as Mycobacterium tuberculosis.22,23 

The exciting potential WGS technology holds can be seen from the case of the Rosie Hospital in Cambridge, UK. In 2011 three babies in the hospital's baby unit tested positive at the same time for methicillin-resistant Staphylococcus aureus (MRSA). On investigating the matter further, the hospital's infection-control team discovered that a number of other babies had been infected sporadically by MRSA—sometimes months apart—over a period of 6 months. The team could not work out whether they were seeing an outbreak of MRSA or merely unrelated infections. Unable to answer the question using conventional methods, the team called in WGS experts from the Wellcome Trust Sanger Institute, Cambridge. The Sanger team quickly established that the samples were all related at the genome level, suggesting an outbreak. By sequencing other bacteria in the hospital's microbiology laboratory database with the same antibiotic susceptibility profile, they confirmed that this was a new strain of MRSA, that it had caused twice as many infections as first thought and that it was also prevalent in the wider community. Two months after the previous infections were discovered, another baby tested positive despite the unit having just been thoroughly cleaned. So the team sequenced bacteria from swabs taken from staff on the unit and discovered that a particular individual was carrying the same strain of MRSA. Once that staff member was treated, the outbreak ended.9

The work at the Rosie Hospital was one of the first occasions on which WGS was used to track down and halt the outbreak of an antimicrobial-resistant infection in a hospital. One of the advantages of the method was that it made it possible to reconstruct the history of the outbreak and identify its transmission route. The sequencing not only showed that the outbreak resulted from a new strain of MRSA, but also linked it to earlier infections on the ward and traced its complex transmission pathways from babies to their mothers, from these mothers to other mothers in the postnatal ward, and to partners of affected mothers. The research also highlighted the pitfall of concentrating merely on hospital-based infection control, pointing to the need for strategies that take account of the important transmission dynamics between the community and healthcare institutions.24

WGS is being used to thwart the spread of antimicrobial-resistant infections not only in hospitals but also in the community. British researchers are now pioneering a low-cost WGS tool to help with tuberculosis, a growing public health threat. Usually it takes up to a month to confirm a diagnosis of tuberculosis, determine treatment choices and detect transmission between cases, which is a major hindrance in terms of preventing the spread of the disease. Used directly on patient samples in the clinic, the new tool is being designed to provide results within the same day. Such a diagnostic would be a major breakthrough because it would make it easier to match patients to the right medication at the start of their treatment, thereby shortening the time they are infectious and thwarting the spread of drug-resistant tuberculosis.25,26

Today biotechnology stands on the cusp of another revolution as a result of the recent development of a method known as ‘clustered regularly interspaced short palindromic repeats’, or CRISPR for short. Like older genetic engineering techniques, CRISPR deploys cellular machinery that bacteria use to recognise and cut the DNA of invading viruses. Where it differs is that it facilitates the introduction or removal of more than one gene at a time, cutting the process down from a number of years to a matter of weeks. This has made it possible to carry out genetic engineering on an unprecedented scale and at very low cost. Not being species-specific, CRISPR can also be applied to organisms previously resistant to genetic engineering.
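The targeting step can be sketched in code. The example below is illustrative only: it assumes the widely used Cas9 enzyme, which is directed by a guide sequence and cuts only where the matching DNA is immediately followed by an 'NGG' motif (the PAM); the genome string and guide sequence are invented, and real guide design involves many further considerations.

```python
# Illustrative sketch of CRISPR-Cas9 target finding: scan a DNA string for sites where
# the guide sequence is immediately followed by an NGG PAM motif.

import re

def find_cut_sites(genome: str, guide: str) -> list[int]:
    """Return positions where the guide sequence is followed by an NGG PAM."""
    pattern = re.escape(guide) + r"(?=[ACGT]GG)"   # lookahead for the PAM
    return [m.start() for m in re.finditer(pattern, genome)]

if __name__ == "__main__":
    genome = "TTACGGATCCTGACGTTAGCAGGCCATTACGGATCCTGACGTTAGCTGGA"  # invented sequence
    guide = "ACGGATCCTGACGTTAGC"                                   # shortened guide for readability
    print(find_cut_sites(genome, guide))  # -> [2, 28]
```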

As was the case with recombinant DNA in the 1970s, the development of CRISPR has fuelled intense controversy. Where CRISPR is prompting most concern is in its use for editing the genomes of human embryos. Some of the first experiments carried out in this area were reported by Chinese scientists in April 2015. They used the technology to modify a gene responsible for the genetically inherited disease beta-thalassaemia, a potentially fatal blood disorder. The editing was done in 86 human embryos deemed unviable for in vitro fertilisation because they were tripronuclear. Such embryos start off dividing like normal embryos but then stop developing because they have an abnormal collection of genes. Disappointingly, only four of the 54 embryos tested had the desired genetic change. More worrying was that many of the embryos displayed potentially harmful off-target mutations. Based on the poor performance of CRISPR, the Chinese researchers cautioned against its use for editing viable human embryos.27

News of the Chinese experiment triggered a major bioethical debate about how far CRISPR should be used.28  The technology faces two major issues. The first is a philosophical dilemma. It centres on the extent to which CRISPR should be used to alter ‘germ-line’ cells—eggs and sperm—which are responsible for passing genes on to the next generation. While it will take many more years before the technology is viable for creating designer babies, a public debate has already begun on this issue. So great was the fear that in December 2015 some scientists, including some who helped pioneer CRISPR, called for a moratorium on its use in germ-line cells at an international summit in Washington DC organised by the US National Academy of Sciences (NAS) and the National Academy of Medicine (NAM) together with the Royal Society of the UK and the Chinese Academy of Sciences. This recommendation was subsequently reversed by a report put out by the NAS and NAM in February 2017 which, instead of calling for a ban on such experiments, argued for proceeding with caution.29

The second issue is one of safety. One of the major problems is that the technology is still in its infancy and knowledge about the genome remains very limited. Many scientists caution that the technology still needs a lot of work to increase its accuracy and to make sure that changes made in one part of the genome do not introduce changes elsewhere that could have unforeseen consequences. This is a particularly important issue when it comes to applications directed towards human health. Another critical issue is that once an organism, such as a plant or insect, is modified, it is difficult to distinguish from the wild type and, once released into the environment, could endanger biodiversity.

Where CRISPR is already proving useful, however, is in the generation of transgenic animals. These are animals that have had their genomes altered by the transfer of a gene or genes from another species or breed. The first such animal, a mouse, was generated in 1974. Transgenic animals serve many different purposes in medicine. For example, they are routinely used as research models to examine the role of genes in human disease susceptibility and progression, as well as to test therapies. Animals are also genetically modified to produce complex human proteins, including Mabs, for the treatment of human disease.

In addition to helping in the production of protein therapeutics, transgenic animals are being explored as a means of growing human organs, which are in short supply for transplants. A key pioneer in this field is George Church, who, with a team at Harvard University, is harnessing CRISPR to develop transgenic pigs for this purpose. One of the key problems in transplanting organs from animals into humans is that they are rejected by the human immune system. In 2015 Church's group reported that they had successfully used CRISPR to modify genes encoding molecules that sit on the surface of pig cells and are known to trigger human immune responses or cause blood clotting. They had also managed to inactivate retroviruses naturally embedded in the pig's genome. This is important because up to now organ transplants from pigs have carried the risk of transferring viruses into humans. The breakthrough by Church's team is highly significant because, while pig kidneys have been shown experimentally to function in baboons for months, safety concerns about viruses have prevented testing them in humans.30

In the past it could take several years to engineer a transgenic animal. The process relied on genetically altering an animal's embryonic stem cells, injecting them into an embryo, and breeding multiple generations of animals. CRISPR has greatly reduced the number of steps in the process. Critically, it enables targeted genetic surgery of a fertilised egg. This has radically shortened the time needed to make a transgenic animal. Today, for example, it is possible to create a transgenic mouse in just 6 months.31  Another advantage of CRISPR is that it makes it possible to alter more than one gene at a time. In October 2015 Church reported that he and his group had managed to modify more than 60 genes in pig embryos. This was ten times more than had been edited in any other animal (Figure 1.3).32 

Figure 1.3 Time to produce transgenic mice with and without CRISPR.

In addition to helping create transgenic animals, CRISPR promises to aid gene therapy, an experimental technique that has been evolving since the 1980s. Such treatment uses a number of strategies, including replacing a mutated gene that causes a disease with a healthy copy of the gene, inactivating or ‘knocking out’ a mutated gene that functions improperly, or introducing a new gene into the body to combat a disease. The progress being made in this field is explored by Addison in Chapter 6. As Addison points out, gene therapy has been dogged by controversy and its advance has been closely tied to debates about how far genetic intervention should go. This has not been helped by the death of a young patient in an early gene therapy trial in 1999, or by the leukaemia that developed in patients in other early trials. Added to this, there have been significant technical challenges in getting the therapy to work in humans.

Despite these set-backs, gene therapy is now gaining renewed enthusiasm, aided in part by the development of tools such as CRISPR. Just how powerful gene therapy could prove is highlighted by the case of a French teenager who in March 2017 was reported to have remained free of the symptoms of sickle-cell anaemia for 15 months following treatment. The procedure involved removing the patient's bone marrow, the part of the body that makes blood, and genetically altering it in the laboratory so as to counteract the DNA defect responsible for sickle-cell anaemia. Once done, the bone marrow was infused back into the patient. The procedure, however, is still in its infancy and very expensive, which limits how far it can be rolled out to other patients.33

Gene therapy is not the only area to have experienced many ups and downs. So too has stem cell therapy, a field examined by Kraft and Barry in Chapter 7. The treatment takes advantage of the fact that stem cells are the body's master cells and have the ability to grow into any one of the body's more than 200 cell types. Kraft and Barry show that the term ‘stem cell’ was first used in the 1860s to describe the starting point in cell formation. However, it took many years to unravel the precise function of stem cells and for them to be deployed for therapy. This emerged on the back of the development of bone marrow transplants, a procedure introduced as a treatment for cancer and genetic blood disorders in the 1960s.

Many different types of stem cells have now been identified. The first, known as embryonic stem cells, are sourced from embryos at the blastocyst phase of embryonic development, four or five days after fertilisation. They are usually taken from human embryos left over from in vitro fertilisation. The second, known as adult stem cells, which include mesenchymal stem cells, are found in different types of tissue, including bone marrow, blood, blood vessels, skeletal muscle, skin and liver. Stem cells can also be sourced from umbilical cord blood.

Like gene therapy, stem cell therapy has been surrounded by controversy. Most of this has centred on the use of human embryonic stem cells, which has prompted fierce ethical debates and strict regulatory oversight. Major advances, however, have been made in stem cell science in recent years. A critical breakthrough occurred in 2006, when a Japanese team led by Kazutoshi Takahashi and Shinya Yamanaka discovered a way of genetically reprogramming somatic cells so that they acquired properties similar to those of embryonic stem cells. The new cells are known as induced pluripotent stem (iPS) cells and are now the subject of intense clinical investigation. Yamanaka was awarded a Nobel Prize in 2012.

Where the new iPS cells appear to be most promising is in the treatment of blinding eye diseases, a topic discussed in detail in Chapter 8 by Awwad, Khaw and Brochini. Their chapter shows that stem cells are just one of a number of biological products now used to treat blinding eye diseases. Indeed, biotechnology is now paving the way to major breakthroughs for ophthalmic disorders. Some of the most notable successes have been with antibody-based drugs. Yet, as Awwad, Khaw and Brochini point out, the development of such treatments has not been straightforward. Their chapter is a salutary reminder of just how complex the body's physiology is when it comes to applying a biological drug at the clinical level: as they show, the physical and biochemical barriers of the eye make it a particularly challenging environment in which to administer treatment.

Both stem cell and gene therapy are part of a wider group of products known as cellular therapies. This is a diverse spectrum of therapies using many different types of cells, which can be divided into two main categories: those based on a patient's own cells (autologous therapies) and those based on donor cells (allogeneic therapies). In many cases the therapy involves the development of bespoke cells for individual patients. The key aim of cellular therapy is either to replace a missing or defective cell type, the approach used for treating the French teenager with sickle-cell anaemia described above, or to provide a factor that boosts the efficacy of a treatment, as in the adoptive cellular therapy described in Chapter 5, which enhances the power of T lymphocytes, a type of white blood cell, to fight cancer.

The field of cellular therapy has grown rapidly in recent years and can be considered the most recent phase of the biotechnology revolution in medicine. Most cellular therapies are still experimental and at a very early stage of clinical testing. The bulk of these have been directed towards rare disorders. While each such disorder affects only a small group of patients, together they represent a large population base. In 2009 it was estimated that the potential market for all cell-based therapies in the USA could exceed 100 million patients.34,35

Cellular therapies could offer significant benefits to patients who currently have limited or no treatment options. Yet rolling them out on a wide scale is significantly more complex than for conventional small molecules or newer biological drugs. Unlike these therapeutic products, which are usually manufactured on a large scale in a centralised industrial setting, cellular therapies are largely bespoke treatments tailored to individual patients. This is a time-consuming process as it often involves removing cells from the patient, re-engineering them in the laboratory and then transfusing them back into the patient. It is also expensive, as the whole process must be conducted in a sterile environment that conforms with good manufacturing practice, which is difficult to maintain and run. On top of this, the cells have short half-lives, meaning they only survive a few hours. Most cellular therapies therefore need to be produced close to the clinical setting of patients, which requires multiple manufacturing sites. Another problem for the commercialisation of cellular therapy is the diversity of cell types used, which makes it difficult to develop a one-size-fits-all manufacturing platform.36  All of these issues have major implications in terms of cost. For example, the price of CAR-T therapy, an immunotherapy for cancer explored in Chapter 5, which an FDA panel recommended for marketing approval in July 2017, is predicted to be between $300 000 and $500 000 per treatment.37

While cellular therapies are likely to be the most expensive products on the market, they are part of a growing trend. Many of the newer biological drugs now available are much more expensive than conventional drugs. This, in part, reflects the fact that they tend to be derived from living material and involve more complex manufacturing processes. Furthermore, there is little competition in the marketplace to drive down the cost.

With many of the new treatments promising to improve disease conditions heretofore difficult to treat, demand for them is rising. This is posing an impossible conundrum for policy-makers wrestling with constrained healthcare resources. Nor is the price of the new drugs likely to be lowered in the future by the development of generic versions. This is because, unlike conventional drugs, biological drugs are difficult to replicate and require extensive clinical testing. Any inadvertent chemical modifications can affect their performance and safety. As a result, regulatory authorities require clinical trials to demonstrate equivalence of efficacy, which is not the case with other generics. To have sufficient statistical power to detect differences from the original drug, the clinical trial population often has to be very large. The biosimilar Mab infliximab, approved in Korea in 2012 and in Europe in 2013, for example, underwent clinical testing in 874 patients across 115 sites in 20 countries. In addition to the costs of clinical testing, producing biosimilar Mabs requires state-of-the-art manufacturing technology, which is both expensive and cumbersome. Overall, analysts expect that the complexities involved in manufacturing and testing biosimilar Mabs will keep costs high and that their prices will only be between 20 and 30% lower than those of their patented counterparts. This is a much smaller discount than for conventional drugs, which typically cost between 80 and 90% less once off patent.38

One route to making more affordable drugs might lie in the emergence of synthetic biology, examined by Race in Chapter 9 and by Fletcher and Rosser in Chapter 10. This is an emerging discipline in biotechnology that seeks to combine the principles of engineering with new knowledge about genetics and cell biology. Race's chapter provides an overview of some of the history behind the technology, showing how it has evolved from 1912, when the term was first coined, to its recent upsurge, aided by the development of recombinant DNA and the sharp fall in the cost of DNA sequencing. What is exciting is that it provides a means to create cells and tissues from scratch. As Race puts it, synthetic biology is conceptually like building a structure with LEGO® bricks.

So far most of the early applications of synthetic biology have been confined to relatively simple modifications of microbial cellular pathways to enhance their productivity in synthesising certain medicines. The best-known example is that of artemisinin, an anti-malarial drug originally extracted from the plant sweet wormwood, which can now be produced using engineered yeast strains. This is particularly important because sweet wormwood harvests are fickle. By comparison, the application of synthetic biology in mammalian cells for the production of protein therapeutics is still in its infancy, as shown by Fletcher and Rosser in their chapter. Nonetheless, as they point out, the technology is already proving helpful in the production of vaccines, as demonstrated by the development of a synthetic flu vaccine in 2013.

As can be seen from this chapter, biotechnology is a fast-growing field in medicine. A book of this kind can therefore only scratch the surface of the many developments now happening. What is striking throughout is the complex and dynamic interplay between different biotechnological tools. Drug discovery and development, for example, makes use of a large range of biotechnology platforms simultaneously, being highly dependent on DNA sequencing, PCR, monoclonal antibodies and genetic engineering. The same is true in the field of diagnostics and other medical applications.

What is often forgotten in this story is the increasingly important role of immunology in the development of biotechnology. Yet, as this chapter shows, many of the biotechnological tools we now have at our disposal were born out of a desire to understand the mechanisms of the immune system. Two of the key breakthroughs in biotechnology, genetic engineering and monoclonal antibodies, on which the biotechnology industry was built, grew out of such research. The tools for recombinant DNA, for example, originated from research into understanding how bacteria resist foreign invaders. Similarly, monoclonal antibodies emerged out of a search for a means to investigate the diversity of the immune system. In addition, many of the advances now being made in the field of cancer rest just as much on harnessing the immune system as on understanding the genetic predisposition of patients to the disease.

As will become apparent from reading the following chapters, the advances now being made in medicine through the use of biotechnology are controversial. Furthermore, they are not inevitable and involve many blind alleys as well as successes. Many of the changes brought about by biotechnology in medicine have been incremental and hidden from view. Concentrating merely on the impact biotechnology has had on the development of drugs, which have often stolen the media limelight, misses the enormous changes it has brought to our understanding of the pathways of disease and the new tools now at our disposal for diagnosis.


Much of this chapter draws on material I originally wrote for the website WhatIsBiotechnology.org. I am grateful to Silvia Camporesi for comments on an earlier draft of this chapter.
