A brief post on vaccines designed to fight the current pandemic…
Vaccines have been used for more than two centuries to protect people from disease. All vaccines are designed to work in more or less the same way: to trigger an immune response in the host that is directed specifically against some microbial invader. This response primes the immune system and allows for a more rapid, sustained, and efficacious reaction. In the past, most vaccines used killed or weakened microbes as the agent to set off immunity. Usually, proteins on the surface of these pathogens are recognized as foreign substances, causing the immune response to kick in. More recently, individual proteins or parts of proteins that make up the invading pathogen have been employed for the same purpose.

All three of the candidate vaccines that have recently shown efficacy and safety in phase III trials aren't protein based. They're made from either RNA (Moderna and Pfizer) or DNA (AstraZeneca), molecules that are not proteins themselves but are intended to direct the synthesis of one of the pathogen's proteins inside the cells of the vaccinated individual. These nucleic acid based vaccines have two great advantages over the more traditional ones. First, they can be easily modified so that they specify whatever protein is desired. And second, they can be synthesized quickly in great amounts.

These advantages are critical, especially in a pandemic. Pandemics arise rapidly. The current one, Covid-19, began in early 2020. In less than a year, it has infected nearly 60,000,000 people globally and killed over 1,400,000. It appears that many more will contract the disease and die. Traditional vaccines take years to develop, and many are difficult to deliver in large quantities. Nucleic acids, however, can be synthesized rapidly at an industrial scale. What's more, once a facility has been set up to produce a nucleic acid based vaccine, the sequence of its DNA or RNA can be quickly changed so that an entirely different protein can be made in the same facility.
Nucleic acid based vaccines represent a triumph of molecular biology. The tools that are available to the current generation of practitioners in the field, as well as the skill with which they use them, boggle the mind. As of this writing, none of the three candidate vaccines has yet been approved for release to the general public, but such approval is expected within weeks. If they are as successful as initial tests indicate, they will save thousands of lives. What's more, they promise to be usable as weapons against other diseases, such as cancer. Their rapid development represents a breakthrough in the prevention of disease.
I'm afraid. Not about the present political situation (although that's pretty scary too). I'm frightened because of a graph that is updated daily on this site from the Covid Tracking Project. It's a plot of the daily deaths from the SARS-CoV-2 virus versus the number of new infections. On the right I show the latest version, which was posted on November 21. The seven day average of daily Covid-19 cases is shown in red. The black line represents the number of deaths, similarly averaged over a seven day period. I've labeled the graph with three numbers representing the three phases of the pandemic.

In phase one, which covers the period from the end of March to the middle of May, the proportion of deaths to cases was high. That was probably due to three causes. First, the testing rate was pretty low, meaning that the number of cases was underreported. Second, because we were ill prepared, the contagion killed a higher percentage of people who were more vulnerable to the disease than in later phases, particularly patients in nursing homes. And third, because of the great number of hospitalizations and the novelty of the disease, the medical community had difficulty coping. That resulted in many more deaths per case than currently.

In phase two, which occurred during the summer, a broader section of the populace was infected, including younger people, who are much less susceptible to dying from the disease. In addition, physicians and nurses were better prepared to deal with infected individuals and saved many lives that they would not have several months previously. Testing was also more widely available. The result of these factors was fewer deaths per detected case. One thing to notice about the two lines in phase two is that the peak in the number of deaths lags that of the number of cases by about two to four weeks. That's because people don't die right away after contracting the disease, and their deaths aren't reported immediately.
That brings us to phase three, which we're in now and in which neither curve has yet reached its maximum. Not yet. When will that be? How many deaths will occur? Of course, nobody knows. But here's an informed guess. The percentage of deaths to cases when both curves were at their maximum in the summer was about 1.6%. That is, on average, about one and a half people per hundred who tested positive for the virus died of it. Let's say the medical community has gotten better at saving lives, bringing that number down to 1.5% or so. That means that if the number of cases reaches 200,000, then we can expect to see 3,000 deaths per day after a lag of a few weeks! What's frightening is that yesterday the number of detected cases was more than 198,000. And the red curve doesn't seem to be flattening. In fact, it seems to be gathering steam. As I said at the outset, I'm afraid. And I'm not the only one. An article that recently appeared in the Atlantic magazine cites epidemiologists who have reached similar conclusions ("How Many Americans Are About to Die?" by A. C. Madrigal and W. Moser, The Atlantic, Nov 19, 2020). And it looks like the newly developed vaccines, even if they're as effective as touted, won't be widely available until the end of the year at best. It appears we're in for a tough time.

It's been two months since my last confession --- posting. Since that time the number of deaths in the United States from the virus has risen from about 18,000 to about 110,000. But the two month period hasn't changed the debate: Should we open up more? Should we stop the measures that we've taken and go back to pre-Covid days? Or should we continue to shut down? For reasons that I don't really comprehend, the disagreement has degenerated (as many these days do) into something political - into a largely left/right issue. One side points to the effects on the economy. The statistics are indeed stark. Enormous numbers of workers have lost their jobs.
The GDP and almost all economic indicators have tanked. The other side cites the tremendous loss of life and the likelihood that there will be a recurrence, a spike in new cases that would endanger countless others. They ask, "What's a life worth?"
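A quick aside: the projection in the phase-three discussion above is simple enough to check in a couple of lines. This Python sketch just applies an assumed case-fatality ratio to a daily case count; the 1.5% figure and the two-to-four-week lag are the rough numbers from that discussion, not precise estimates.

```python
# Back-of-the-envelope projection: deaths follow cases after a lag,
# scaled by an assumed deaths-per-detected-case ratio.
# The 1.5% ratio is the post's own rough figure, not an official estimate.

def projected_daily_deaths(daily_cases: float, deaths_per_case: float = 0.015) -> float:
    """Expected daily deaths once the two-to-four-week lag has elapsed."""
    return daily_cases * deaths_per_case

# ~200,000 detected cases per day, as in the post
print(projected_daily_deaths(200_000))   # 3000.0, appearing a few weeks later
```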
Of course, I don't know which side is correct, which path to take. But I think there are some considerations that should be taken into account no matter what we do. They're not original with me and they shouldn't be the basis for a political argument.
With these and some other measures in place, we should be able to get back to more nearly normal ways: eating out (outdoor venues are best), grocery shopping, going back to school, and shopping in general. Young people in their teens and twenties, who mostly are asymptomatic after infection, can take more chances than others. But they have to take care not to infect their older friends and relations.

It's instructive to see how countries around the globe are handling the current pandemic – and to compare how we're doing in the United States to how other nations are coping with the scourge. Wikipedia (one of the great resources of the digital age) has offered up a slew of comprehensive reports on how the virus has affected many nations and how each has responded. I've taken advantage of them in writing this posting. They can be accessed on Wikipedia by entering "Coronavirus in [name of country]" in your search engine. The disease originated in China, with the first case reported in the middle of December, 2019. It spread from there. From what I've gathered from Wikipedia, most European states (Spain, Italy, United Kingdom, Germany, France, Sweden, Finland) experienced their first infections in late January, 2020, within a week of each other. The disease was first detected in the US about a week or two earlier. Other countries (Iran, Israel, Switzerland, Portugal, New Zealand) encountered the virus sometime later. My understanding is that most countries, like the US, quickly curtailed travel from China and isolated the individuals with the initial infections. In many places, the disease seemed controlled. But soon it sprang up again, apparently due to its introduction by Europeans, particularly Italian visitors or returnees. There's also the unlikely possibility that the virus mutated during its residence in Italy and became more infective and lethal. Or perhaps Italians are more gregarious and encounter more people than the Chinese.
At any rate, sequencing studies suggest that the great number of infections in New York are due to virus that originated in Italy, and it seems to me reasonable (but not proven) that much of the further spread of the disease proceeded from there. It is stunning to realize that in less than 10 weeks, the number of infected individuals outside of China has gone from a few dozen to close to two million as of midday, April 14. The response to the contagion has varied from one nation to another. But it appears that almost all countries have instituted similar quarantines and have already experienced a peak in infections, with the highest numbers of deaths coming soon after. It looks like we have a way to go before the virus hits its heights in the United States. However, what to do in the intermediate term has caused a rift in the United States and other countries. There are those who contend that the shutdown has already been too prolonged, and that the damage to the economy will cause a depression/recession/inflation that will do more harm than the virus ever could. The other side argues that not keeping the isolation in place will risk hundreds of thousands of lives. While I'm neither an economist nor an epidemiologist, the solution probably lies somewhere in between. We'll want to lift the isolation as quickly as possible by identifying those who have already recovered from the disease and presumably can't be reinfected and are free from the virus. This will require testing, both for the virus and for antibodies against it. Testing for the presence of virus (via PCR) will also have to be increased to identify carriers, and more stringent measures will have to be introduced in order to isolate them from others, particularly the elderly and infirm. Testing for viral antigens will be useful in determining the optimum time to open schools and businesses.
Meanwhile, with some time on my hands, I've made a spreadsheet comparing per capita results of infections and deaths in various places as of 11AM on Saturday, April 11. The numbers of infections and deaths come from Johns Hopkins University (https://coronavirus.jhu.edu/map.html), but their site hasn't arranged the data on a per person basis. I know that some countries (China, Russia, Iran) are probably not releasing reliable statistics (although I have no evidence that this is so) and testing is incomplete everywhere, but a comparison of the US versus other countries is still illuminating. For instance, I thought that the number of deaths/case would say something about the medical systems in various countries (or perhaps their demographics). Maybe it does, but the UK stands out as being particularly badly off, with deaths/case only behind Italy. They haven't even reached a peak in the number of cases yet. And look at New Zealand. How do you explain only 4 deaths from over 1,000 infections? Were they better prepared because of previous viral outbreaks? It's probable that the deaths/case numbers reflect an early stage of the pandemic, because death is a lagging indicator, but New Zealand has had a decreasing case load for quite a while.

I've had some emails from friends who blame our current situation on the Chinese government. There seems to be no question that news of the initial infections was suppressed by local officials and that the government didn't broadcast the severity of the illness in a timely manner. But we shouldn't have had to rely on the truthfulness of the Chinese authorities for information about matters of such importance. That's what our intelligence apparatus is for. I'm not sure whether they weren't aware of the coming problem or whether they did their job and their higher-ups didn't listen (probably the latter). I've also had some discussion about whether the virus was inadvertently released from a Chinese virus laboratory.
Again, there seems to be little dispute that one of the Chinese facilities was not as careful in handling deadly viruses as it should have been. Everyone with the exception of some fringe groups, however, seems to agree that the release wasn't deliberate. My feeling is that placing blame isn't particularly helpful in dealing with the current situation around the world.

Coronaviruses

In light of the current outbreak, I thought that I would share some of what I've learned about the molecular biology of the coronaviruses in general and the Covid-19 virus in particular. My goal will be to describe how the virus acts in terms mostly devoid of the technical jargon that makes molecular biology so difficult for most people to understand. First of all, what's a virus? Some people have suggested that viruses aren't alive. Of course, that isn't the case. Viruses don't move about like tigers or grow like poison ivy, but they are organisms that can reproduce, undergo evolutionary changes, and are subject to Darwinian selection. These characteristics, to my mind, make them as alive as any other creature on earth. To be sure, they are obligate parasites, meaning that they can't do much of anything without the help of a host. In the case of the coronaviruses, their hosts are animal cells (there are other viruses, called bacteriophages, that use bacteria as hosts). In order to survive, they must somehow insert their genetic material into living cells and utilize the machinery that they find there for their own reproduction. Coronaviruses are unlike most other organisms (and some other viruses) in that they don't have genes made of DNA. Their genome is made of a single strand of RNA. It's a big piece of RNA, about 30,000 bases long, the largest of any RNA virus. Recall that RNA looks a lot like DNA. It is a polymer of four bases (ribonucleotides), called A, U (instead of T in DNA), G, and C. Like DNA, it can be replicated to form a complementary strand.
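For the programming-minded, base pairing is easy to mimic. Here's a minimal Python sketch of the rule just stated; the sequence is an invented fragment, not real viral RNA. Note that the complementary strand runs in the opposite direction, which is why it is read reversed.

```python
# Templating a complementary RNA strand: A pairs with U, G pairs with C.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement_strand(rna: str) -> str:
    """Return the antiparallel complementary strand, written 5' to 3'."""
    return "".join(PAIR[base] for base in reversed(rna))

fragment = "AUGGCUUAA"                     # invented plus-strand fragment
print(complement_strand(fragment))          # UUAAGCCAU

# Copying the copy regenerates the original sequence, which is why
# replication through a complementary strand preserves information:
assert complement_strand(complement_strand(fragment)) == fragment
```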
The RNA strand that is present inside coronaviruses is arbitrarily called by virologists the "plus" strand. In order to reproduce, the virus must somehow get this RNA into a host's cell, where its ultimate goal is to make new viral offspring. I'll have more to say about this process below. An electron microscope image of some coronaviruses is shown above. Coronaviruses are quite small, with a diameter of somewhere around 100 - 120 nanometers. By contrast, an average human cell is some 50 - 100 micrometers in diameter, about 1,000 times bigger than these viruses. Accordingly, viruses pass through most filters and can only be imaged using an electron microscope.

How does this tiny organism manage to infect animal cells? How does it know when it's encountered its target? Coronaviruses are enveloped viruses, meaning that in their past infections they've stolen some of the cell membrane of their previous host and incorporated it into themselves. They're surrounded by this membrane and have embedded some of their own proteins into it. Most prominent among these embedded viral proteins is the spike (S) protein that forms the crown that gives these viruses their name. In addition there's a so-called membrane (M) protein and an envelope (E) protein. The lipid nature of the viral envelope makes the virus susceptible to soaps and detergents, which burst it, kill it, and release its contents. The spike protein embedded in the membrane is the device that the virus uses to identify and attach to its target. Acting somewhat like the sensors on a naval mine, these proteins detect a complementary protein on their prey, and the virus attaches. In the case of Covid-19, the human protein that the virus attaches to is a membrane bound enzyme called "angiotensin-converting enzyme 2", or ACE2. It is the spike protein's specificity that determines the host that the virus attacks.
Sometime in the past, the spike protein of the virus that causes Covid-19 changed – via a mutation in the viral genome – so that it no longer bound only to a protein on the surface of bat cells but came to recognize the human ACE2 target. Because humans seldom encounter bats, there is speculation that the virus may have undergone an intermediary mutation after infecting another mammal like a civet or a pangolin, but ultimately it must have switched its quarry to humans, all because of a change in sequence in the gene specifying the spike protein. Two other similar viruses, SARS-CoV and MERS-CoV, seem also to have originated in bats, with civets and camels as suspected intermediate hosts. Once the spike protein and its target bind together, the virus envelope fuses with the cell membrane and the virus enters the cell. What does it do in this environment?

The viral RNA bears all the hallmarks of a messenger RNA (for those more intimately acquainted with molecular biology, these hallmarks include a 5' cap and a 3' polyA tail), and it begins to be translated by the cell's ribosomes just as if it were one of the cell's own. But then something a bit different happens. In most organisms, translation begins at a specific three-base sequence (a codon) and continues until another three-base sequence tells it to stop. That's what happens during translation of the first two thirds of the virus RNA too. The result is a protein that is called "polyprotein a". However, in a fraction of the translations, something very peculiar happens. Because of the three dimensional shape of the viral RNA, somewhere before the ribosomes reach the stop sequence, they back up and continue translation of the RNA in a different reading frame. This results in a protein that shares the same beginning as polyprotein a but is longer ("polyprotein ab"). These two proteins are called "polyproteins" for good reason.
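The frameshifting maneuver just described is easier to see with a toy example. The Python sketch below uses an invented sequence and slip point; it shows only the bookkeeping, namely that backing up a single base regroups every downstream codon and takes the original stop signal out of the reading frame.

```python
def codons(rna: str, start: int = 0) -> list:
    """Group an RNA string into successive three-base codons from `start`."""
    return [rna[i:i + 3] for i in range(start, len(rna) - 2, 3)]

rna = "AUGGCAUUUGGCUAA"          # toy sequence ending in the stop codon UAA

print(codons(rna))               # ['AUG', 'GCA', 'UUU', 'GGC', 'UAA']

# Suppose the ribosome backs up one base after the third codon, resuming
# at position 8 instead of 9 (a -1 shift of the reading frame):
print(codons(rna, 8))            # ['UGG', 'CUA'] -- every downstream codon
                                 # regroups, and UAA is no longer in frame
```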
They are enzymatically broken apart into something on the order of 17 smaller so-called nonstructural proteins (nsp's) that make up the replication complex. Some of these nsp's are proteolytic enzymes involved in the cleavage of the polyprotein. But the main nsp is the one responsible for duplication of the virus: an RNA dependent RNA polymerase (it uses RNA as a template to make complementary strands of RNA). It is aided by another protein that unwinds the RNA and still others that promote fidelity of replication and fulfill other replicative functions. As mentioned, the first two thirds of the viral RNA is responsible for the synthesis of this polyprotein. The remainder codes for the aforementioned spike, membrane, and envelope proteins as well as several other minor ones. How are they synthesized?

Following its translation into the replicase polyprotein, the viral RNA - the genome - next acts as a template for the synthesis of a half dozen or so additional RNA's. These begin at different locations along the viral RNA but terminate at the same place, at its end - they're said to be nested. It's important to realize that the copies produced by this process can't act as mRNA's - they're the wrong polarity (the complementary strand of either DNA or RNA runs in the opposite direction of its template). But the virus goes ahead and makes complementary copies of these negative RNA's. In other words, the virus starts with a positive (the viral genome itself); makes nested negative copies of some sequences; and makes complementary copies of the negatives. The results are RNA's that are in the correct orientation and can act as messengers. These nested RNA's (remember that they all started at different points along the viral RNA) code for the aforementioned spike, membrane, and envelope proteins as well as several smaller ones. In addition, they code for the nucleocapsid protein that binds to the viral genome.
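The nesting-and-polarity bookkeeping just described can be sketched in a few lines of Python. The "genome" and the internal start points below are invented; only the logic follows the text: negative-sense nested copies are made first, and complementing those copies yields plus-sense RNAs that can serve as messengers.

```python
# Toy version of the nested-RNA scheme: negative copies of genome
# suffixes, then complements of the negatives, restoring plus sense.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement(rna: str) -> str:
    """Antiparallel complementary strand (note the reversal)."""
    return "".join(PAIR[b] for b in reversed(rna))

genome = "AUGGCAUUCGAUGUUUAG"   # invented plus strand (the "genome")
starts = [3, 6, 9, 12]          # hypothetical internal start points

# Step 1: nested negative-sense copies -- different starts, common end
negatives = [complement(genome[s:]) for s in starts]

# Step 2: complements of the negatives -- back to plus sense
messengers = [complement(n) for n in negatives]

# Each messenger is a nested, plus-sense piece of the genome,
# in the correct orientation to be translated.
assert messengers == [genome[s:] for s in starts]
```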
Once the membrane, envelope, and a series of accessory proteins are manufactured, the virus is assembled within the internal membranes of the cell. The viral M and E proteins are implicated in this process, along with several host factors, only some of which have been characterized. The newly assembled viruses depart when their surrounding membranes fuse with the cell's plasma membrane. Of course, the molecular biology of coronaviruses is very much more complicated than I have laid out. Many more details are described in the review articles that I list below and from which I have appropriated much of the material in this blog. Comments and corrections are welcomed.

"The Molecular Biology of Coronaviruses", Paul S. Masters, Advances in Virus Research 66: 193-291 (2006).

"Human Coronavirus: Host-Pathogen Interaction", To Sing Fung and Ding Xiang Liu, Annual Review of Microbiology 73: 529-557 (2019).

"Coronaviruses: An Overview of Their Replication and Pathogenesis", Anthony R. Fehr and Stanley Perlman, Methods Mol. Biol. 1282: 1-23 (2015).

The subject of my last posting was the epigenetic clock. This one is about another biological mechanism for keeping time. Way back in November 2018, I posted a short note (#43) entitled "Cancer Cell Immortality" in which I discussed the subject of chromosome ends and their relation to cancer and aging. Here's a short summary. Many decades ago the eminent biologist Leonard Hayflick discovered that most cells, when placed into tissue culture, have a limited life span, now known as the Hayflick Limit. It was subsequently found that this limit is due to the inability of the enzymes that synthesize DNA to replicate short sequences at the very ends of chromosomes. At each round of DNA replication, a normal cell loses a piece of the ends of all of its chromosomes. Without their normal ends, chromosomes begin to break and fuse, causing genetic instability and apoptosis. Ultimately, with continued division, there is further loss of DNA.
Essential genes begin to be affected and the cells stop dividing. Stem cells often need to divide past the Hayflick Limit, and Nature has devised a clever way that allows them to do so. Its solution: telomeres, runs of a short (six base pair, 5'-TTAGGG-3') repeated DNA sequence added to chromosome ends. A special enzyme, telomerase, adds more of the repeats whenever they run low, thereby allowing stem cells to replicate indefinitely. Telomerase is a reverse transcriptase. That is, it uses an RNA as a template to synthesize a specific sequence of DNA. In this case, the enzyme (TERT) encloses a specific non-coding RNA (called TERC) in a pocket, makes a DNA copy of it, and appends the copy onto the chromosome ends as required (see illustration). The actual assembly of a functional telomere requires additional steps and enzymes, a discussion of which is beyond the scope of this posting.

Cancer cells, like stem cells and unlike their normal counterparts, can replicate indefinitely, as evidenced by HeLa cells, which have been grown in culture since 1951. While most normal cells have lost the ability to express telomerase, the cancer cells that derive from them have somehow regained the ability to make use of the enzyme. In fact, it is thought that 85-90% of all cancers have somehow managed to synthesize sufficient supplies of the telomerase enzyme so that they can divide indefinitely. Given the near universal presence of telomerase in cancer cells and its almost complete absence in normal, non-stem cells, it would seem that targeting telomerase might be a useful strategy to combat cancer. The idea would be to find an inhibitor of the enzyme, administer it to patients, stop cancer cells from dividing, and voila, a cure for cancer. Several inhibitors have been tried, but most effort has gone into the development of vaccines against the enzyme. What actually happens if you do try to inhibit telomerase? Are there adverse consequences?
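Before turning to that question, the bookkeeping described above (shortening at each division, with telomerase topping the ends back up) can be caricatured in a few lines of Python. All the lengths, loss rates, and thresholds here are invented for illustration; real telomere dynamics are far messier.

```python
# Toy model of telomere bookkeeping. Replication trims the chromosome end;
# telomerase, when present, appends TTAGGG repeats once the end runs low.
REPEAT = "TTAGGG"

def divide(telomere: str, loss_bp: int = 50,
           telomerase: bool = False, low_water_mark: int = 300) -> str:
    """Simulate one division: trim the end, optionally restore with repeats."""
    telomere = telomere[:-loss_bp] if len(telomere) > loss_bp else ""
    while telomerase and len(telomere) < low_water_mark:
        telomere += REPEAT           # telomerase adds repeats when they run low
    return telomere

# Without telomerase the buffer runs out: a toy "Hayflick limit".
t, divisions = REPEAT * 100, 0       # start with 600 bp of repeats
while t:
    t, divisions = divide(t), divisions + 1
print(divisions)                     # 12

# With telomerase, division can continue indefinitely.
t = REPEAT * 100
for _ in range(1000):
    t = divide(t, telomerase=True)
print(len(t) >= 300)                 # True -- the end is maintained
```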
A total of twenty clinical trials, mostly phase I and II, have been conducted over the years, and their results have been reviewed by Mizukoshi and Kaneko (1). They report that a variety of malignancies, including melanomas and prostate, breast, and pancreatic cancers, were targeted with vaccines against telomerase. None showed adverse effects, although it isn't clear whether there might be some if the therapy were continued for a long period. In most cases the investigators found that subjects mounted an immune response against the enzyme and, in some instances, survival rates increased modestly. The only trial that got to phase III, however, did not extend the lifespan of people with pancreatic cancer. In sum, the good news is that therapies directed at telomerase are well tolerated; the bad is that they have had only limited success to date. Several explanations for the lack of progress have been advanced. Firstly, there is an alternative route to telomere lengthening (called, appropriately, ALT, for alternative lengthening of telomeres). It may be that cancers can escape the effects of telomerase inhibitors by turning to this mechanism. Secondly, there is a reasonably long interval between application of an inhibitor and the death of the cancer cell. That's because cells need to divide multiple times in order for their telomeres to shorten enough so that the cells stop dividing. Finally, there's the fact that stem cells require telomerase. And stem cells are needed throughout the body to rejuvenate tissues. If our stem cells are prevented from doing their job, our long term health is likely to suffer. It may well be that we could check cancer by broadly inhibiting telomerase, but only at the expense of a shortened lifespan.

1. "Telomerase-Targeted Cancer Immunotherapy", Eishiro Mizukoshi and Shuichi Kaneko, Int. J. Mol. Sci. 20: 1823 (2019).

Epigenetics and Aging

New acquaintances would often ask what I did for a living before I retired.
I would have to confess that in my former life I was a Professor of Genetics. What would follow would be a prolonged and uncomfortable pause in the conversation. "That's a fascinating field", would come the response after the delay. Surprisingly, a frequent question that might follow would be, "What's all this I hear about epigenetics?" Apparently epigenetics is becoming a subject of public discourse. I'm never sure what to answer. It's not a subject I've studied over the years, but a recent paper caught my attention and inspired me to learn a little more about it.

Epigenetics is the study of heritable changes to cells and organisms that don't involve alterations in DNA sequence. To a geneticist, that doesn't seem like a reasonable definition, because ultimately all heredity can be traced back to the sequence of bases in DNA. But DNA can specify heritable secondary phenomena and agents that can modify how it is expressed. And that is what epigenetics is all about. Let me offer an example. When a cell becomes committed to a specific differentiated state, its progeny usually continue to have the same fate. For instance, when a liver cell first forms, it will subsequently divide, forming more liver cells, not nerve or kidney cells. The inheritance of a specific differentiated state is due to the action of proteins that bind to DNA (transcription factors) and direct transcription of various liver-specific genes. These proteins continue to be synthesized and to do their job over subsequent cell divisions. While the maintenance of the differentiated state fits the definition of epigenetics, most scientists associate epigenetics with a heritable chemical change in either DNA or its accompanying proteins that doesn't change the sequence of DNA but affects its expression.
For example, the most common modification of DNA is the addition of a methyl group (CH3) to the base "C" preceding the base "G", a change that occurs in a few percent of all the CG sequences in the human genome. It is catalyzed by a family of enzymes, the DNA methylases, and is generally inherited from one cell generation to the next. Most often, methylation of DNA is associated with the repression of transcription, but that's not a universal rule. The phenomenon plays an important role in development and in carcinogenesis. But in this post, I'm going to focus on it as a marker for aging.

My mother would often ask guests at parties that she threw to guess her age. She looked young and would glow when the estimates she was given were off by five or ten years. Her guests would take her word for her true age, but could they be sure? Is there some biological clock that can reveal the true chronological age of a person? The answer is yes, and it's a remarkably accurate one when read correctly. The clock makes use of the pattern of DNA methylation; that is, the changes at specific sites in the genome that gain or lose methyl groups with time (see 1 for a recent review). Not all sites demonstrate age-specific changes, but if an appropriate set of positions is monitored, they can be used as an "epigenetic clock" to accurately predict the chronological age of a subject. The first accurate clocks were generated by a group of researchers at UCLA in 2011. They assayed the state of DNA methylation at two locations in the genome and used the results to predict the age of the subjects that they were studying. Samples were obtained from people ranging in age from their late teens to their early 70's. Remarkably, the correlation between the state of methylation and age was very good, with an average difference between actual age and predicted age of 5.2 years. Two later clocks, which examined 71 and 353 sites, displayed accuracies of between 3 and 4 years.
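To make the idea of such a clock concrete, here is a Python sketch of the sort of model involved. The data are simulated (sites whose methylation drifts linearly with age, plus noise), and the fit is plain least squares rather than the penalized regression used for published clocks, so this is only a caricature of the approach, not a reconstruction of any actual clock.

```python
# Caricature of an "epigenetic clock": fit age as a weighted sum of
# methylation measurements at many sites. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_sites = 200, 71

ages = rng.uniform(18, 75, n_subjects)                 # true chronological ages

# Simulate methylation fractions that drift linearly with age, plus noise
# (a crude stand-in for real assay data).
slopes = rng.uniform(-0.005, 0.005, n_sites)
baselines = rng.uniform(0.2, 0.8, n_sites)
methylation = (baselines + np.outer(ages, slopes)
               + rng.normal(0, 0.01, (n_subjects, n_sites)))

# Fit age as a linear function of the site measurements (least squares;
# real clocks use penalized regression, but the idea is the same).
X = np.column_stack([methylation, np.ones(n_subjects)])
coef, *_ = np.linalg.lstsq(X, ages, rcond=None)

predicted = X @ coef
print(np.mean(np.abs(predicted - ages)))   # mean error, in years
```

On these simulated data the fitted "clock" recovers age to within a small fraction of a year; real clocks, fit on independent subjects with far noisier measurements, achieve the few-year accuracies described above.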
No one knows why certain DNA locations change their state of methylation in an orderly manner with time. The most likely possibility is that it represents some byproduct of the aging process. Or, more intriguingly, it may be a process that is actually causal to aging. If so, by changing the methylation of some sites it may be possible to control the aging process. That's exciting, but unlikely. Aside from being a good party trick, what's the practical value of being able to tell the chronological age of someone from a sample taken from their blood or some other tissue? One use could be in forensics, where determining the age of a possible perpetrator of a crime by analyzing tissue left at a crime scene might be a way of identifying a suspect. Another possibility derives from the observation that some people, as judged by their methylation numbers, seem to have aged faster or slower than others. By studying such people, the epigenetic clock might enable physicians to identify the environmental or hereditary factors that change the rate of aging, either slowing or hastening it. There's a nice non-technical article about Steve Horvath, one of the early builders of the epigenetic clock, that appeared in Nature several years ago and discusses some of these matters (2).

1. "Dynamic DNA Methylation During Aging: A 'Prophet' of Age-Related Outcomes", Xiao et al., Front. Genet. 18 (2019).

2. "Biomarkers and ageing: The clock-watcher", Nature 508: 169-170 (2014).

Alzheimer's Disease - A New Hypothesis
Alzheimer's disease is primarily an affliction of the elderly. Its incidence doubles every half decade after age 65. Beginning with mild cognitive impairment, it is a dementia that inexorably progresses until it robs victims of memory, mobility, dignity, and, ultimately, their lives. What's particularly frustrating for scientists who study the disease is that despite decades of intense research, its cause is unknown, exactly how it kills nerve cells is still a mystery, and the treatments devised to date that attempt to stem its march have largely proved ineffective. Alzheimer's isn't the only form of dementia, but it accounts for about 70% of all cases worldwide. Broadly speaking, there are two forms of the disease. One, termed early onset, commonly strikes just after middle age and before age 65. The other, called sporadic or late onset Alzheimer's, occurs later in life. There is reasonable evidence that both forms of the disease have a genetic basis, although, except in some rare cases, not a simple Mendelian one.

At the cellular level, Alzheimer's is characterized by some dramatic pathological structures found in the brains of its sufferers. These include the so-called "plaques" that occur outside of the cells in the brain, and "tangles" that appear intracellularly. The plaques are composed of aggregates of fragments of a protein called APP, or amyloid precursor protein. APP is a transmembrane protein that is critical for normal neuronal function. The fragments responsible for plaques are produced by two membrane bound proteolytic enzymes, the β- and γ-secretases, that act on APP. The results of their proteolysis are protein pieces (peptides) of various lengths called A-β. These bind tightly together to form the plaques characteristic of the disease. The tangles, on the other hand, are constructed from a protein called tau that plays a role in the stabilization of microtubules, the molecular highways of cells.
Tau needs to have phosphate groups added to it in order to function, but above-normal phosphorylation (hyperphosphorylation) seems to cause tau to aggregate, thereby forming the tangles typical of the diseased neurons of Alzheimer’s victims. For nearly twenty years, and until very recently, it was thought that the plaques found in Alzheimer’s sufferers’ brains were not only symptomatic of the disease but also causative – the “amyloid cascade hypothesis”. One quite compelling piece of evidence for it was that people with Down syndrome, who have an extra 21st chromosome, almost invariably show Alzheimer’s symptoms before the age of 40. It turns out that the gene for APP is found on the 21st chromosome, and the extra chromosome results in the production of an excess of APP. The thought was that an abnormal increase in APP made excessive A-𝜷 peptide, and therefore heightened plaque formation, more likely. Additional evidence comes from patients with early onset familial Alzheimer’s disease who have been found to bear mutations in the APP gene or, alternatively, in the genes that encode one of the two secretases. Again, these changes are thought to increase the likelihood of forming plaques and are added evidence for the amyloid cascade theory. If the amyloid cascade hypothesis is correct, then it makes sense to devise agents that target the plaques or the enzymes that break the APP protein into the peptides that aggregate into them. In fact, several such drugs have been developed and tested in clinical trials. To date, none has provided convincing evidence of effectiveness, although some are still in the pipeline. The result of these disappointments has been that the amyloid hypothesis has come into disfavor. There has been no shortage of alternative ideas. A recent paper (1) has provided evidence for a theory that acknowledges the role of amyloid in the etiology of Alzheimer’s but adds an additional twist. First, some background. 
As I’ve noted in previous posts, we are literally not the same person we were minutes, hours, and days ago. In particular, our old proteins are continually being destroyed, their component amino acids recycled, and new proteins synthesized to take their place. The process of protein degradation is complex, but one critical pathway, called autophagy (“self eating”), utilizes small membrane-bound vesicles, lysosomes, and the host of digestive enzymes within them, to rid cells of old and aberrant proteins and other macromolecules. Autophagy is an essential process necessary for maintaining homeostasis, but problems arise when a macromolecule can’t be digested. The result – the lysosome fills with undigested macromolecules. A cascade of secondary effects can subsequently occur, ultimately leading to cellular damage and cell death. Some 70 different so-called lysosomal “storage” diseases are known, all caused by mutations that affect genes responsible for various functions of the organelle. Many of these mutations affect neurons because nerve cells aren’t replaced very often. And most of these diseases are fatal, often killing afflicted individuals early in life. What has this to do with Alzheimer’s, a disease of the elderly? It turns out that amyloid aggregates are targets of lysosomes. But in Alzheimer’s they can’t be readily digested. But why not? What’s preventing them from being broken down? The paper cited above places the blame on the processes of isomerization and epimerization of the amino acid aspartic acid. Here’s a simplified explanation of these two terms that doesn’t require diving deep into the muddy waters of organic chemistry. Aspartic acid is one of two amino acids (the monomers of proteins) that bear two acid groups. When one amino acid binds to another during the process of protein synthesis, an acid group of one joins with the amino group of another, forming a peptide bond. Aspartic acid is no exception. 
One of its two acid groups, a particular one, is involved in forming the peptide bond. But sometimes, especially in older proteins, the second acid group, the wrong one, inserts itself and substitutes for the normal one. This results in a subtle but significant change in the three-dimensional structure of the protein. Epimerization produces a similar result, although it operates via a different process. In both cases the protein becomes increasingly resistant to breakdown in the lysosome. The paper by Lambeth et al. puts together three observations to devise a new theory for the origin of Alzheimer’s disease. First, they point to the substantial evidence that aggregates of the A-𝜷 peptide undergo isomerization and epimerization. Second, they state that there is no doubt that A-𝜷 peptides are found in lysosomes. And finally, they note that the lysosomes in Alzheimer’s disease look like those found in lysosomal storage diseases. The Lambeth paper summarizes: “…Alzheimer’s disease would essentially represent a different type of lysosomal storage disorder… Rather than failure of a [defective] enzyme … to clear waste molecules, failure to digest or transport modified waste molecules would … eventually lead to lysosomal storage [disease].” Buttressing Lambeth et al.’s case, there is increasing evidence (2) that another neurodegenerative disease, Parkinson’s, is also a lysosomal disorder. Like Alzheimer’s disease, Parkinson’s is characterized by the appearance of pathological inclusions (Lewy bodies) in specific areas of the brain. In the case of Parkinson’s, a different protein, α-synuclein, is involved. Both diseases result in the death of particular sets of neurons. If Alzheimer’s and Parkinson’s diseases (and perhaps other neurodegenerative diseases) are truly lysosomal storage disorders, and if the dysfunction of the lysosome precedes the appearance of plaques and Lewy body inclusions, it may change how scientists devise treatments. We’ll see. 
Over the years, there have been many different theories as to what causes Alzheimer’s disease. We still don’t have definitive evidence that any one is correct. If Lambeth et al.’s conjecture is on target, it would be a major advance. 1. “Spontaneous isomerization of long-lived proteins provides a molecular mechanism for the lysosomal failure observed in Alzheimer’s Disease”, T. R. Lambeth et al., ACS Cent. Sci. 5, 8, 1387-1395 (2019). 2. “Is Parkinson’s disease a lysosomal disorder?”, AD Klein and JR Mazzulli, Brain 141, 2255–2262 (2018). Henny My mother – formally named “Henrietta,” but universally called “Henny” – was a beautiful, energetic, and loving woman. Brought up in the Bronx, she and her three sisters suffered the loss of their father while teenagers. Her oldest sister, Belle, managed to earn a college degree, but my grandfather’s death meant that mom had to drop out of high school to help contribute to the family’s finances. Without a high school diploma it was difficult to find a good job, particularly during the Depression, but by assuming Belle’s identity, she was able to secure a well-paying position at the 1939 World’s Fair in Queens, NY. Belle’s boyfriend and later husband was responsible for introducing mom to my father. They married in 1939 when she was only 18. A move to an apartment in my paternal grandfather’s house in Brooklyn followed. And then two kids, me and my sister. When we reached junior high school age, mom got a job as a secretary to a hospital administrator. Somehow she had taught herself shorthand. A few years later she secured a position as secretary to the playwright, screenwriter, and novelist Paddy Chayefsky. Chayefsky, a native of the Bronx like my mother, was near the height of his fame. He was the most prominent of writers during the “golden age of television” in the 1950s, famous for penning the television play “Marty” (which later became a movie starring Ernest Borgnine). 
During the time that mom was in his employ, he had turned to writing screenplays for the movies. One of the highlights of my life was shaking hands with the legendary actress Kim Novak on the set of “The Goddess”, a film she starred in with Frederick March. Among my mother’s talents (she painted, wrote a cookbook, and was famous for her baking) was a gift for penning little poems that she would compose for birthdays, anniversaries, and other similar occasions. They weren’t Shakespearean, but they often hit just the right note when they were offered. Dad was a furrier and, as far as I could tell, a good salesman but a terrible businessman. But the fur business was quite volatile, and it probably wasn’t his fault that he retired at one of its periodic low points. Nevertheless, he and mom had sufficient funds to purchase a modest house in Tamarac, Florida, near Fort Lauderdale, and to travel extensively. While at home, they particularly enjoyed hosting my kids. We would periodically send them down south, and by the time they returned, mom would have fattened them up considerably - and bought them a new wardrobe. My father’s dementia dealt mom a terrible blow. She was devastated when he had to be placed in a nursing home. And when he passed, she was crushed. Happily, another love entered her life. Ruby, a prince and a gentleman of the first order, was attracted to mom and courted her ardently. She resisted for some time, but he wasn’t to be deterred and ultimately prevailed. They became a loving couple, and their romance lasted for several years. But dementia again entered the picture. Mom became forgetful. She and her friend Ethyl would joke that they belonged to “CRAFT” (can’t remember a fuckin’ thing). They insisted that as long as they were aware of their forgetfulness they needn’t worry about Dr. Alzheimer. But that wasn’t the case. Ruby arranged a surprise eightieth birthday party for her. I noticed some ominous signs. 
As a tribute to her, I composed a bit of doggerel in her honor. She was grateful. It’s the only poem that I’ve ever written.

To Mom on Her Eightieth Birthday

It’s my turn to get back at you
It’s a task that I’ve wanted to do
Over multiple days
In numerous ways
It’s a plan that I never outgrew.

Whenever an occasion arose
Some committee specially chose
To sit Henny at home
To write out a poem
Not just some anonymous prose.

But now that we’re celebrating your day
There’s nobody around to convey
The special rhymes
The rhythmical lines
To honor you in your own unique way.

You’ll have to accept my best shot
Even if it’s not all that hot
A little ditty
Not very pretty
That shows that I love you a lot.

Here’s what I’m aching to say
A motto to mark this great day:
“That Henrietta,
There’s nobody better”
Wonderful in every way.

She’s a fabulous mother
A granny like no other
A founder of CRAFT
Her grandkids would laugh
When she mixed one up with his brother.

Ruby was a saint and looked after my mother for several years. Ultimately, it became too much of a burden. Her house was sold, and my sister Elaine flew down to Florida and shepherded mom back to Long Island. Mom had gotten Elaine to promise that she would never be placed in a nursing home. So, with the proceeds from the sale of her house, and with a generous contribution from Elaine’s husband, George, Elaine was able to secure an apartment near their home. She hired two Filipina ladies to care for mom 24 hours a day. Within a year or two, mom no longer recognized me or my sister. After a little while longer, she stopped talking and was confined to a wheelchair. Soon after, she passed away. What a horrible disease, both for its victims and for the people who care for them. And, as a scientist, it’s frustrating to not know its cause, nor how to treat it. In the next post, I’ll discuss some recent developments that hint at a better understanding of the disorder. I hold a grudge against Dr. 
Alzheimer and his eponymous disease. Well, perhaps not against the good doctor, who was the first to characterize what he called “presenile dementia” in a 51-year-old patient at the beginning of the twentieth century. But certainly against the disease that ultimately stole the minds of both of my parents. Their suffering still haunts me. My father first showed evidence of dementia in his late 60s. At a party on the occasion of my oldest son's bar mitzvah, a neighbor, a nurse, approached me. She had been engaged in a long conversation with my dad, who had come up from Tamarac, Florida, where he and mom had moved after his retirement. She seemed concerned. As I recall, she didn’t mention dementia or Alzheimer’s disease (I wouldn’t have known about either condition at the time anyway), but she sadly said that our family should be prepared for many difficulties in the coming years. And so it was. About a year later, my parents came up from Florida to Baltimore, where we were living. My mother took me aside and told me that my father was acting strangely. They had always had a good relationship, and she was puzzled by his recent behavior. She asked me to talk to him. Dad and I took a walk around the neighborhood. I asked what was wrong. He told me that mom was treating him like a child, always telling him what to do. I put it all down to retirement. Dad didn't have any hobbies. He was always at home without much to do, and he and mom were now always together, in close proximity. I told my mother that nothing was wrong. Another year passed. My mother asked me to fly down to Florida. The disease had progressed. Dad had hit a car in a shopping center parking lot and had driven off. Someone had seen the incident and had reported it. A policeman had come to the house. When asked, my father had no recollection of the incident. He began stealing little items at the supermarket, stuffing them into his pockets. 
He was extremely forgetful, not knowing how to drive to formerly familiar locations, forgetting names and faces. Worse, he would wander away from home and not return. Mom had to send out neighbors to find him. On the Fourth of July later that year, my sister and her husband joined the rest of the family at a party. We were seated on our patio, enjoying some barbecue, and talking together. A firecracker went off in an adjacent yard. My father asked what the noise was. We told him what it was. A few minutes later, a similar sound came from the same location. My father asked the same question. We gave the same answer. This dialogue was repeated four or five times, once every few minutes. It was clear that he was suffering from some sort of mental disability. My mother took dad to a series of doctors. The final verdict was Alzheimer’s or a similar dementia. They offered no treatment, no remedy, no cure, no hope. My mother assumed the burden of caring for dad. It was a full-time job. After a while, she hired Ruby, a gentle, tall, former milkman about my father’s age, to give her some respite. But she insisted that she would rather have him at home, even in his demented state, than put him in a nursing home. After another two years, my mother called and told me that she couldn’t cope any longer. She said that we needed to put dad in an Alzheimer's facility. I flew down to Florida and we inspected several. It was a painful experience. Many smelled. Elderly inhabitants sat in wheelchairs, slumped over, spittle flowing from their mouths. They gazed emptily ahead, unseeing, mostly silent but occasionally moaning as if they were newly aware of their circumstance. We picked out what seemed like the least bad nursing home. Upon arrival, dad was strapped into a wheelchair. He didn’t seem alarmed or perturbed. He grabbed the wheels and worked his way around the room. A smile lit his face. This was a new toy and he was enjoying the ride. 
That reaction troubles me to this day. How a once proud, intelligent, capable man could be reduced to behaving like a child saddens me to the point of tears. Within a couple of months, my father passed away. It wasn’t clear why. Except for his mental condition, his health was quite good. Even near the end, the nursing home complained that he was forever pinching the nurses and racing his chair about. My mother thought that the home was negligent, but whatever the reason, his death spared him the terrible consequences that mark the end point of the disease. This next part of the story has a semi-happy ending. Ruby, it turned out, had a crush on my mother. And within a year or so after my father’s passing, my mother returned the favor. Mom always claimed that my father was the “love of her life”, and that she couldn’t imagine finding someone that she cared for as much. But Ruby, a wonderful man, a real mensch, afforded her a second great romance. She was very lucky. Not many get two such chances. They lived together until she too contracted dementia. But that’s a story for the next post. And now for something completely different - the saga of our experience at the Texas Drivers License Center in north Austin. The “our” refers to me and my wife, Gail.
We both got letters in the mail saying that we needed to renew our driver’s licenses. It had been ten years. In addition, the old licenses wouldn’t serve to get past the TSA at an airport or to enter federal facilities. A license now has to have a star in the upper right-hand corner in order to pass muster. We needed an update. We had several months to do the deed, but we decided that we might as well get it over with as soon as possible. The letter said that we had to renew in person but that we could make a reservation online. We tried. No luck. The site informed us that there were no reservations available. In order to rate our venture into the world of Texas bureaucracy, I’m going to keep a score card. The fact that we couldn’t make a reservation gets a minus one. I’ll take off another point for the non-existent explanation of why the reservation system didn’t work. That’s minus two if you’re counting. OK. We decided to make the short trip to the license bureau and take our chances there without the benefit of a reservation. We arrived at eleven in the morning. We encountered an overflowing room with people sitting on the floor. Inadequate space takes off another point. The gentleman serving as a receptionist was friendly and polite despite being subject to a bombardment of questions and complaints. He told us to register. He estimated the wait time as “at least an hour and a half”. Subtract another point for accuracy, but give him a point for civility. The score so far: minus three. After sitting a while, it was now almost noon. We decided to go to Nervous Charlie’s and have lunch. The bagels at NC are flown in frozen from New York. I had hot pastrami on mine. We returned an hour later, no longer hungry, ready to go. Add a point for access to restaurants nearby. The total score: minus two. A word about the system that is used to track the waiting hordes. When you arrive you register electronically and indicate what service you’re seeking. 
The computer assigns you a number, the numbers of the next few people in line are shown on an electronic display, and the very next victim is announced via loudspeaker. In the time we were at lunch, the numbers hadn’t progressed much. We found two seats and waited. The lady across from us struck up a conversation. She had been in the same seat since 9:30 AM. It was now 2:00 PM. Her number looked like it was due to come up shortly. The long wait and the inaccurate estimate of how long it would take for us to get served earn the Center two more negative points. However, we did learn from the receptionist that the reservation system actually works. The reason it didn’t for us was that you had to go online at 7:30 AM to have a chance at getting a place. Since it looked like it would be another two hours or more before they would call our number, we left armed with this new information (another two points subtracted for not letting us know about this beforehand). The score: minus six. The next morning, bright and early, we went online and tried to register. Gail went first. No problem. A 9:30 appointment was assigned and confirmed. I followed a few minutes later. I was given a 9:50 appointment. I pressed the button to confirm the time. No response. I had been given an appointment but no confirmation! Two more points taken away. Score: minus eight. Undaunted, we returned to the Center. Gail had her reservation in hand, and I had a printed page showing that I had an appointment, but with no time indicated and no confirmation. We signed in again. The computer acknowledged that Gail had an appointment at 9:30. It had never heard of me. We decided that the best strategy was not to yell at the receptionist. Rather, we assumed pitiful expressions and explained our situation. “I’m sorry” was the response. “If you’re not confirmed, you have to go to the back of the line. 
Apparently, in the interval between when you were assigned an appointment and the confirmation, somebody else came online and got your time”. Minus five points for poorly written software. Score: minus 13. A supervisor standing nearby overheard our plight. She took Gail’s written confirmation letter and scribbled something on it indicating that it could be used for two people. Score five points for flexibility. Within a few minutes we were seated at adjacent desks, each of us facing a real person capable of renewing our licenses. Score: minus eight. The lady I sat across from never smiled during the ten minutes it took to complete the process. But she had a wicked sense of humor and was pleasant and efficient. We exchanged little verbal jabs while she administered a vision test (I passed), and she gave as good as she got during the repartee that took place during the remainder of the interview. She was patient with me when I stood in the wrong place while my photo was taken. And, best of all, she told me that I looked more or less the same as in my picture from ten years ago. That’s three positive points for her pleasant conversation and five for a gentle lie that made my day. Final score: 0. Not a positive experience, but not too bad either. A few serious words about the role of government. Notwithstanding the comments of nearly everybody I talk with, I found most of the workers at local government agencies pleasant, efficient, and competent. This despite the often burdensome conditions under which they operate and their relatively low pay. It’s not their fault that processes like renewing one’s license aren’t more efficiently handled. Turning the procedure over to the private sector doesn’t seem to be a better solution. What’s needed is a really good industrial designer with the authority to make some major changes. It will probably cost more in the near term. 
But without some drastic improvements we’re doomed to long lines, inadequate software, and people sitting on the floor. Fruit Flies, Flower Power, and Cancer William Proxmire, a Democrat who represented the state of Wisconsin from 1957 to 1988, was a harsh critic of wasteful government spending. He was particularly well known for issuing “Golden Fleece Awards”, satirical prizes that he bestowed on those government agencies that appeared to squander public monies on silly and trivial grants. Between 1975 and 1988 he conferred 168 such prizes, some to scientific entities like the National Science Foundation, which, in Proxmire’s view, appeared to support worthless research projects. He took particular aim at studies with ludicrous titles. Some members of the scientific community were dismayed by these awards, claiming, rightly in my opinion, that a funny title doesn’t necessarily mean that the work isn’t worth supporting. The criticism apparently didn’t hurt him. He was reelected repeatedly by huge margins. Proxmire might have conferred a Golden Fleece on Eduardo Moreno of the Champalimaud Centre for the Unknown in Lisbon, Portugal. Moreno and his collaborators have been working for several years on a gene found throughout the animal kingdom that he has nicknamed “Flower”. You can imagine the headlines that accompany reviews of his work. Proxmire might have picked up on them, but the gene and its effects are no laughing matter. Minutes Professor Moreno’s research focuses on a phenomenon that is not widely appreciated. It’s called “cell competition”, and it was first discovered in fruit flies in conjunction with a peculiar set of mutants called “Minutes” (pronounced “my newts”). The first Minute mutation was discovered exactly 100 years ago by Calvin Bridges in Thomas Hunt Morgan’s laboratory in New York City. Flies with two copies of a defective Minute gene die. 
Flies with one copy have shorter bristles, rough eyes, lower viability, and reduced life span. (By convention, in Drosophila the names of dominant mutants, ones that display a phenotype when present in one copy, are capitalized.) There are many Minute genes, and they’re found all over the Drosophila genome. Molecular analysis later revealed that each of them codes for a different protein component of the ribosome. Since the ribosome is the apparatus that is responsible for synthesizing all the proteins of an organism, defects in one or more of its parts will slow the rate of protein synthesis and thereby result in the Minute phenotype. Here’s where Moreno comes in. Using a technique that I’ll not describe, he managed to produce mosaic flies with a mixture of normal and Minute cells. As expected, the Minute mutant cells grow somewhat slower than their normal counterparts. What was surprising was that when the normal and mutant cells were next to one another, the Minute cells committed suicide, a phenomenon called “apoptosis”. How was that possible? How could the simple juxtaposition of normal and slower growing cells cause the laggards to kill themselves? The Flower Gene The Flower gene appears to be the key to the answer to these questions. The gene encodes a transcript that can be alternatively spliced to yield four different transmembrane proteins that I’ll call F1, F2, F3, and F4. Moreno found that when the Flower gene was completely shut down in a human breast cancer cell line there was no effect on cell growth. Similarly, if all four forms of the protein were simultaneously expressed, it also was without effect. However, when cells expressing F1 or F3 were co-cultured with cells expressing either F2 or F4, the F1 or F3 cells killed themselves. At the same time, the F2 or F4 cells grew faster. Moreno calls the cells expressing F2 or F4 “winners”, and the F1 or F3 expressing cells “losers”. 
Winning and losing requires that the two cell types be in contact with one another. Cancer What has all this to do with cancer? In a recent Nature paper from Moreno’s laboratory (“Flower isoforms promote competitive growth in cancer”, Madan et al., Nature 2019 Jul 24. doi: 10.1038/s41586-019-1429-3. [Epub ahead of print]), he examined slides of cells from breast cancer patients. He found that malignant cells were winners, expressing more F2 or F4 than normal cells. Cells surrounding the tumor were high in F1 and F3, and clearly were losers. Their summary: F1, F2, F3, and F4 are all poorly expressed in healthy tissue; F2 and F4 are overexpressed in cancer tissues; and F1 and F3 are found in unusually high concentrations in tissues adjacent to the tumors. Apparently, cancer cells have subverted the Flower gene to become winners. At the same time, as I understand it, the surrounding tissue reacts to its winning neighbors by becoming losers, expressing the F1 and F3 forms of the gene and committing suicide. These results suggest that the Flower gene might be useful in cancer therapy. Moreno found that interfering with expression of the Flower gene did indeed reduce malignancy. He concludes, “…human [Flower] proteins can have a powerful effect on tumorigenicity.” He and his collaborators propose that “therapies targeting these proteins have the potential to impair cancer growth and metastasis.” My take When I was on the faculty of Johns Hopkins University, one of my more learned colleagues would often shock me by boldly asserting a little-known fact about biology (at least he thought it was a fact) that he believed I wasn’t aware of. One statement that particularly surprised me was that the scientific community doesn’t know how cancer kills. When challenged, he acknowledged that the assertion wasn’t universally true. For example, some tumors in the intestinal tract may physically block digestion. 
And there are other exceptions. But in most instances, he claimed, we don’t know why people die of cancer. Moreno’s findings may offer an explanation. It goes like this. Malignant cancers invade tissues. Among other things, they subvert normal processes and express the winner Flower forms F2 and F4. Surrounding tissues become losers by turning on F1 and F3. The result is that tissues neighboring the tumor commit suicide. Their cell death is the wasting away that is often the basis of a cancer victim’s death. If that analysis is correct, it has profound implications. By targeting the Flower gene of cancers and surrounding tissues, it may be possible to stop the cell death that often accompanies tumorigenesis. That would represent an entirely new way of treating malignancies. It’s amazing. If the therapy proved effective, we would be indebted to a mutant fly with shortened bristles and rough eyes first identified one hundred years ago. m6A An Analogy Conceptually, the secret of life is pretty straightforward. It is best revealed by an analogy. Think of cells as factories. Each carries a manual with instructions for constructing machines, 25,000 or more in the case of humans. Each machine’s assembly is directed by the text of a chapter, one chapter for each machine to be made. To actually synthesize a machine, the instructions in a chapter are copied and sent off to giant fabricating devices that carry out the assembly. That’s all there is to it. One just has to substitute biological entities into the analogy where appropriate: DNA is the instruction manual, genes are chapters in it, and RNA molecules are the copies of individual chapters. Proteins are the machines whose structure is dictated by the RNA instructions, and ribosomes are the giant devices that build them. That’s the big picture. It’s easy to lose sight of, particularly as one immerses oneself in the morass of details, complications, and exceptions that characterize molecular biology. 
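The copy-then-fabricate scheme of the analogy is simple enough to render as a toy program. In the sketch below the codon table is deliberately truncated to the few entries the example uses (the real table has 64), and everything else about real gene expression is, of course, elided:

```python
# A toy rendering of the analogy: a gene (chapter) is copied into RNA,
# and a ribosome-like loop reads it three letters at a time to build a
# protein. Truncated codon table: only the codons this example needs.
CODONS = {"AUG": "M", "AAA": "K", "GAA": "E", "UUU": "F", "UAA": None}

def transcribe(dna):
    """Copy the chapter: DNA coding strand -> RNA (T becomes U)."""
    return dna.replace("T", "U")

def translate(rna):
    """The fabricating device: read codons until a stop signal."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODONS[rna[i:i + 3]]
        if amino_acid is None:  # stop codon: the machine is finished
            break
        protein.append(amino_acid)
    return "".join(protein)

gene = "ATGAAAGAATTTTAA"
print(translate(transcribe(gene)))  # -> MKEF
```

The two-step structure of the code mirrors the two-step structure of the analogy: copying a chapter (transcription) and fabricating the machine (translation).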
A Complication Let me begin with an additional complication: multicellular organisms have over 200 cell types, each with a different population of proteins (machines in the analogy). This observation strongly suggests that there has to be a mechanism for specifying which chapters are copied in the various cell types, thereby determining the protein population of each. There is. There’s a group of machines (proteins) whose job it is to direct the copying apparatus to different chapters. This process is called gene regulation. It should be clear that this extra bit of detail makes the secret of life vastly more complicated. These “regulatory machines”, like all the others, must themselves be made by copying the appropriate chapters from the book. But how does the cell decide which regulatory machines to make? The answer: other regulatory machines. And so on, ad infinitum, or so it seems. That’s hairy. More Complications But that’s not the least of it. It turns out that determining which chapters to read is only a part of the complex operations controlled by the regulatory apparatus. Here are some others.
RNA Modification I came across an additional means of gene regulation, one of which I was unaware, the other day while browsing “Science”, the journal of the American Association for the Advancement of Science. The regulation involves a change to the RNA copy of the gene, a minor alteration of one of RNA’s letters. Modification of the RNA copy of a gene has been recognized for almost fifty years. It’s well known that an RNA copy, a transcript, routinely has its starting end “capped”, its trailing end filled with multiple copies of the letter “A”, and parts of its interior cut out and thrown away (you can find out more about these processes under the heading “RNA processing” using any search engine). What is less well known are the subtle changes to the various “A” bases in RNA copies that I describe below and which I learned about in Science magazine. Take a look at the figure. It shows the alteration in the “A” base that I’ve been alluding to. The change is shown in red for emphasis because it may not be obvious at first glance. It’s a subtle alteration, the substitution of a methyl group for a hydrogen, analogous to changing the font weight (to bold or italic, for example) in a section of text. It does not change the meaning of the words, but can make a difference in emphasis. I’ll refer to the modified base as m6A, its nickname. The methylation of some “A”s in RNA was discovered decades ago. What wasn’t known were the consequences of the alteration or the players involved in adding or removing the methyl group. There’s been a lot of activity in those areas recently. Writers and Erasers The increased activity was initiated by a discovery by a group of scientists from the University of Chicago (Jia et al., Nature Chemical Biology 7:885–887 (2011)) that an enzyme with the bizarre name of “fat mass and obesity-associated protein” (FTO) acted as an “eraser” to remove the methyl group (CH3) from m6A. 
The FTO protein, as its name implies, seems to play an important role in metabolizing fats. It, and another eraser, ALKBH5, seem to be responsible for removal of m6A from mRNA’s. In addition to erasers, m6A “writers” have been discovered that add m6A to RNA. It’s been estimated that these writers are responsible for the 3 to 5 m6A’s found per molecule of mRNA. Subsequent studies showed that m6A was found predominantly in certain parts of messenger RNA, particularly near its termini. It’s also found more often in areas that will be spliced out of pre-mRNA transcripts. Both the localization of the modifications and the fact that there are specific enzymes dedicated to their addition and removal suggested that m6A might play an important role in regulating aspects of mRNA behavior. Readers Today we know that besides erasers and writers, there are “readers” that detect the presence of the modified RNA. They do so in three ways. Some proteins are known that bind preferentially to modified RNA’s, specifically recognizing m6A on the molecules. Some bind only to RNA’s in which the m6A modification has been removed. And some proteins detect m6A modification not by binding to the methylated base, but by virtue of the fact that m6A changes the overall three dimensional structure of the modified RNA’s. The key question: What processes do these readers affect? In other words, what is the effect of this specific methylation of RNA? The answer in a word: it’s complicated. In fact, a recent review article (“It's complicated... m6A-dependent regulation of gene expression in cancer”, Christina M. Fitzsimmons, Pedro J. Batista, BBA - Gene Regulatory Mechanisms 1862: 382–393 (2019)) acknowledges that in its title. The authors emphasize that m6A modification affects many processes, including RNA stability, RNA processing, RNA translation efficiency, RNA storage, and RNA transport from the nucleus to the cytoplasm. 
But the main focus of their paper is the role that m6A plays in cancer. Cancer Patients with acute myeloid leukemia have been shown to have increased levels of FTO. In addition, higher amounts of ALKBH5, the other major demethylase, are correlated with poorer prognosis for patients with glioblastoma, a brain cancer. In breast cancer, the low oxygen environment found in most tumors causes the overexpression of ALKBH5, which, in turn, demethylates, and thereby stabilizes, a specific mRNA that promotes cancer cell growth. Overall, Fitzsimmons and Batista’s paper lists 12 different cancer types that are influenced by mutations in erasers, writers, or readers. It’s been known for some time that m6A is essential for maintenance of the differentiated state, and when it is lacking cancer is more likely to occur. Other Roles One observation that further points to the importance of m6A is that it, and the enzymes that control and recognize it, are found in all eukaryotes assayed to date. Moreover, mutations in m6A writers are lethal in many organisms, indicating that this RNA modification is essential for life. Among the processes affected by reader, writer, and eraser mutations are circadian rhythms, neural development, learning, and progression through the cell cycle. These are early days for the study of m6A. But it’s already clear that this previously neglected area of gene regulation is no longer being neglected. Expect much more attention ahead. Carcinomas, sarcomas, and the other members of the cancer panoply are caused by errors in genes - mutations - mistakes in the sequence of DNA. But what causes these errors? My experience has been that most people, when asked this question, answer that mutations are caused by harmful agents that originate from industrial processes. Air pollution, atomic reactors, artificial chemicals added to our foods, pesticides, and the like would be examples that many would cite. 
After all, they would say, haven’t cancer rates been rising in the modern era? Cancer is indeed more prevalent now than it was in previous centuries, but that’s because we’re living longer and cancer is a disease primarily of the old. The fact is that there is strong evidence that the factors listed above, although certainly not benign, are mostly not the primary drivers of carcinogenesis. My argument in support of this view begins with information taken from chapter 20 of Alberts et al.’s “Molecular Biology of the Cell”. The authors argue that even in the absence of agents that cause mutations (mutagens) an error in DNA sequence occurs on the average once per cellular division. That is, the process of DNA replication is extremely accurate, but not perfect. They calculate that this means that on average any given gene will experience a change in sequence once every million cell generations. Alberts et al. estimate that in a lifetime a normal human will experience 10 quadrillion divisions (that’s a one followed by 16 zeroes!). Dividing the number of divisions by the million cell generations it takes, on average, for a given gene to mutate results in the amazing estimate that every gene in our body will be subject to a change in sequence ten billion times! All this occurs without any external mutagens. If all it took was a single mutation in a growth promoting gene, cancer would be much more common. The reason it isn’t is that most mutations are either benign or so detrimental that the cell dies. Moreover, carcinogenesis requires that more than one such event occur independently in the same cell. That happens infrequently and is one reason that cancer rates increase with age. A similar conclusion was reached by Bert Vogelstein and associates at Johns Hopkins University (C. Tomasetti, L. Li, B. Vogelstein, Science 355: 1330 (2017)). They assert that about two thirds of all cancers are due to “bad luck”, that is, to mutations that occur during the normal process of DNA replication. 
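The arithmetic above is easy to check. Here's a minimal sketch using the round numbers from Alberts et al. (the rates are theirs; the variable names are mine):

```python
# Back-of-the-envelope check of the mutation arithmetic (round numbers
# from chapter 20 of Alberts et al.'s "Molecular Biology of the Cell").
divisions_per_lifetime = 1e16   # ~10 quadrillion cell divisions per human lifetime
divisions_per_mutation = 1e6    # a given gene mutates ~once per million cell generations

# Expected number of times any given gene changes sequence somewhere in the body:
hits_per_gene = divisions_per_lifetime / divisions_per_mutation
print(f"{hits_per_gene:.0e}")   # 1e+10, i.e., ten billion
```

Ten billion sequence changes per gene, per lifetime, with no external mutagens at all.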
Of course even if one accepts Vogelstein’s ideas, that still means that external mutagens are responsible for a third of all cancers. It's clear that the chemicals in cigarette tars and the ultraviolet radiation from the sun vastly increase the rate of mutation and have been shown to be major contributors to lung and skin cancer. Asbestos and X-rays are proven mutagens and carcinogens. Even some common components of our diet carry known mutagens and are suspected carcinogens (see below). The take-home lesson is that the chances of getting cancer can be lessened by appropriate changes in behavior, but total prevention appears unattainable. Mutations/Carcinogens It’s clear from a public health and personal standpoint that it would be advantageous to recognize which substances are carcinogenic so that they can be avoided, even if they only account for a minority of cancers. But most tests for carcinogenicity are laborious, time consuming, and expensive. Typically mice or rats are treated with or fed suspected chemical agents and examined for up to two years for the appearance of cancers. What was needed was a rapid and cheap test. In the mid 1970’s, Bruce Ames, a distinguished geneticist from the University of California in Berkeley, developed one. The Ames test, as it is now called, was originally devised to detect mutagens. It is based on a mutant strain of the bacterium Salmonella Typhimurium. Bacteria of this strain cannot grow on medium lacking the amino acid histidine because they are missing an essential gene required for its synthesis. When millions of such bacteria are spread on a petri dish that does not contain added histidine, they will not form visible colonies. Occasionally, however, the defective gene carried by these bacteria will undergo a mutation (technically called a “reversion” or “back mutation”) that restores its former ability to synthesize histidine. The result: a colony of bacteria will grow and be visible on the petri plate. 
Ames recognized that this behavior could provide a rapid and quantitative test for the presence of mutagens. The idea was that any substance that could increase the reversion frequency would be acting as a mutagen. The test consisted of adding the experimental substance to the bacteria, pouring the bacteria onto a petri plate, waiting overnight for the bacteria to grow, and then counting the number of colonies on the plate compared to a control. An increased number of colonies over the spontaneous reversion rate would indicate the presence of a mutagen. Since there is good evidence that mutagenicity and carcinogenicity are closely correlated, the Ames test would seem to be ideal for detecting carcinogens. One difficulty however was quickly realized: It was well known that when chemicals are introduced into living animals they can be modified – mostly by enzymes in the liver. These modifications may make them more or less carcinogenic. For this reason, Ames refashioned his test. He added the suspected mutagen/carcinogen to a liver homogenate before exposing the bacteria to it, reasoning that whatever changes occurred within an organism would also take place in the liver homogenate (see illustration). This revised Ames test was then used to test a variety of substances. Some 300 compounds, almost all artificial organic chemicals, were evaluated by the mid 1970’s. The result was that the relationship between carcinogenicity and mutagenicity was confirmed and that a number of dangerous substances that were previously not known to be carcinogens were identified as such. Ames and others subsequently went on to test a variety of other substances that are naturally found in common foods. There they met a surprise. In a paper written in 1990, here’s what Ames and colleagues concluded: Plants produce toxins to protect themselves against fungi, insects, and animal predators .... 
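To make the logic of the readout concrete, here's a toy sketch of how a plate count might be scored. The function names, the colony numbers, and the twofold threshold are my own illustrative assumptions, not part of Ames's protocol:

```python
# Toy scoring of an Ames test plate: compare revertant colonies on a treated
# plate to the spontaneous revertants on an untreated control plate.
# The twofold cutoff is a common rule of thumb, used here only for illustration.
def mutagenicity_ratio(treated_colonies, control_colonies):
    """Fold increase in revertant colonies over the spontaneous rate."""
    return treated_colonies / control_colonies

def looks_mutagenic(treated_colonies, control_colonies, threshold=2.0):
    """Flag a substance whose fold increase exceeds the chosen threshold."""
    return mutagenicity_ratio(treated_colonies, control_colonies) >= threshold

print(looks_mutagenic(250, 30))  # True: ~8-fold increase over control
print(looks_mutagenic(35, 30))   # False: within noise of spontaneous reversion
```

The real test adds dose-response curves and multiple tester strains, but the core comparison is just this ratio.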
Tens of thousands of these natural pesticides have been discovered, and every species of plant analyzed contains its own set of perhaps a few dozen toxins. When plants are stressed or damaged, such as during a pest attack, they may greatly increase their natural pesticide levels, occasionally to levels that can be acutely toxic to humans. Many of these “natural pesticides” were found to be mutagens via the Ames test and to be carcinogens when tested in rodents. Ames argued that we are devoting a disproportionate amount of money and attention to artificial chemicals - preservatives and pesticides – which are found in minute concentrations in foods. While some of these may have been identified as potential carcinogens, natural chemicals present in much greater concentrations and with similar mutagenic profiles are equally likely to be dangerous. For example, in a 1990 publication Ames enumerates 49 natural pesticides in cabbage, with names such as erythro-1-cyano-2-hydroxy-3,4-epithiobutane, 4-methylsulfinylbutylglucosinolate, and allylisothiocyanate. The last has been shown to be carcinogenic in rats. The lengthy names of these naturally occurring substances remind us that when the word “chemical” is bandied about, it doesn’t just refer to artificial constructs. Ames was not arguing that these data are necessarily relevant to human cancer. He recognized that diets that are rich in fruits and vegetables are known to be associated with lower cancer risk, perhaps because they carry abundant anticarcinogens and vitamins. But by concentrating our efforts on removing artificial chemicals from foods, we may be wasting resources that might be better employed elsewhere. Another important point that must be added is that many known carcinogens are not mutagens, at least as measured by the Ames test. Some chemicals are toxic to bacteria and therefore can’t be properly evaluated. Some carcinogenic substances can’t enter bacterial cells. 
And many carcinogens promote cell division, which may increase the risk of cancer without directly causing mutations and would not be picked up in a bacterial assay. Finally, with respect to cancer causation, we are left with a conundrum. It is clear that the environment plays a role in cancer causation and prevention, yet we don’t know how. Eating well done red meat increases the risk of prostate cancer. Exercise decreases the incidence of some types of breast cancer. Periodontal disease increases the rate of esophageal carcinoma. Diets rich in some vegetables and fruits decrease prostate cancer rates. It’s not known what environmental components are producing these results. What is clear is that cancer prevention is much more desirable than cancer treatment. Understanding just what role the environment plays must be a priority in the future. In the last few weeks I've gotten a chance to read a half dozen papers that describe novel and ingenious potential cancer immunotherapies. They've all been worthy of a post. However, just the other day, on Wednesday, January 23, I was introduced to a new approach to cancer therapy via a different route – an internet seminar, a "webinar". Hosted by the American Association for the Advancement of Science (AAAS), it was entitled "Cell and Gene Therapies for Cancer: Future Promises and Challenges". It was instructive to hear a presentation in this format because I had never attended a webinar previously. The format didn't allow me to actually see the speakers, but their slides were shown and I could hear their presentations. Overall, it was a splendid experience and I found it exciting to hear research scientists speak directly about their work. There were two presentations, each a half hour long, but I'll only discuss the second one, offered by Dr. Lawrence Cooper, previously a professor and section chief of cell therapy at the MD Anderson Cancer Center in Houston. 
He left his position there to become, in 2015, the CEO of Ziopharm, a biotech company. Cooper is taking an innovative approach to immunotherapy, making use of engineered T cell receptors, rather than CAR's. Here's some of what I learned from his stimulating presentation. Disadvantages of CAR's Chimeric antigen receptor T cells (and natural killer cells) have had remarkable success in the last decade or so in treating some cancers of the blood. But they've not been as successful in combating solid tumors. Cooper cites several reasons, some of which I've discussed in previous posts. One major problem that CAR's face is that they're engineered to target a specific antigen that sits on the surface of a cancer cell. Cooper calls these antigens "germinal" or "shared" or "public", meaning that they're common to both normal and tumor cells. When CAR T's directed at these antigens are deployed, they therefore attack normal as well as cancer cells. For example, when CAR T's were used as immunotherapy for leukemia, both the leukemic cells and normal B cells were killed. It was possible to ameliorate this problem by giving patients external antibodies, but that's not a general solution for other kinds of cancer. One way of getting around this problem is to target CAR T's against cancer specific public antigens (antigens found only on, or almost only on, tumors), but these have been difficult to identify (assuming that they exist at all). One reason? The antigens must be on the surface of cells in order for the CAR to bind to them. And surface proteins only represent a small fraction of the potential antigens that might be useful in distinguishing cancer cells from normal ones. Neoantigens Killer T cells can "see" proteins inside of cells by making use of their receptors and the MHC. That might expose more candidates for therapy, but Cooper makes note of an associated problem of which I wasn't aware. 
It appears that the T cell response to a shared antigen is generally weak, presumably because the mechanisms that prevent autoimmunity lessen their potency. For this reason, Cooper envisions the cancer immunity field turning to neoantigens as targets. Neoantigens arise by mutations in the genetic material that often result in proteins that carry amino acid differences with respect to normal cells. These differences can result in proteins that appear foreign to the immune system and are therefore potentially immunogenic and capable of being detected by T cell receptors. Using such neoantigens should decrease the likelihood that a patient's T cells will react with normal cells, thereby increasing the chances that their response will be more potent. But focusing therapy on neoantigens has many challenges. For one, therapy must be individualized because every tumor will likely harbor a different population of neoantigens. Second, many tumors will be heterogeneous: some sectors will undoubtedly bear neoantigens that are different from others. Such tumor heterogeneity will increase the chances that the tumor will regrow even after an initial positive response. Therefore, therapies may have to deploy multiple T cells bearing a variety of receptors. Third, and perhaps most important, is the problem of how to detect the neoantigens in tumors. It's all very well to say that neoantigens make good objectives for therapy, but how does one find them and then create T cells that seek them out? New DNA sequencing techniques have made it possible to find mutations in tumor cell populations. In particular, so-called whole exome sequencing, where scientists determine the DNA sequence of exons, the expressed portion of the genome that represents less than 2% of the entire DNA, can be done relatively cheaply. By comparing a tumor's DNA sequence to that of normal cells, differences may indicate mutations and potential neoantigens (step 2 in the figure). 
In turn, it is possible to synthesize these protein fragments (step 3), transfer them into dendritic cells (step 4), have the dendritic cells display protein fragments (peptides) on their surfaces, and to select T cells from a tumor that specifically bind to them (steps 1a and 5). The result is a population of T cells, dedicated to seeking out tumor cells bearing specific mutated genes, that can be expanded and transferred back into the patient. This sophisticated, multistep approach seems to have been successful in several cases in the last few years, and has achieved particularly dramatic results recently. A 2018 paper in Nature Medicine describes how a patient with advanced metastatic breast cancer who had not responded to prior therapy was found to be cancer free 42 weeks after this mode of therapy. Her numerous liver and subcutaneous metastases had regressed completely. The results from this study demonstrate that this basic approach works; that it's a proof of principle. That's the good news. The bad news, from a practical point of view, is that the procedure is not scalable. And, because in trials not all patients have responded favorably to the treatment, its potency might also be improved. It's these two aspects of the treatment that Cooper addressed in his talk. An Improved Procedure Cooper notes that a large number of laboratories have developed methods for detecting neoantigens. As described above, exons from tumor fragments are typically sequenced and compared with DNA extracted from non-cancerous tissue in the same patient. The suspected neoantigen peptides are tested against the patient's own T cells. In what Cooper considers a less effective technique, those T cells that recognize these neoantigens are amplified in culture and used for therapy. Cooper favors an improved approach, where the DNA that specifies the T cell receptors from reactive T cells is transferred into naive T cells. 
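The comparison at the heart of the tumor-versus-normal step can be sketched in a few lines. Everything here is made up for illustration, the sequences, the window size, and the function name, but it captures the idea of scanning paired sequences for single-residue differences:

```python
# Illustrative sketch of neoantigen calling: scan a tumor protein sequence
# against its normal counterpart and extract a short peptide centered on
# each amino acid difference. Sequences and window size are invented.
def candidate_neoantigens(normal, tumor, flank=4):
    """Yield (position, peptide) for each single-residue difference."""
    assert len(normal) == len(tumor)
    for i, (a, b) in enumerate(zip(normal, tumor)):
        if a != b:
            start = max(0, i - flank)
            yield i, tumor[start:i + flank + 1]

normal_protein = "MKTAYIAKQRQISFVKSHFSRQ"
tumor_protein  = "MKTAYIAKQRQLSFVKSHFSRQ"  # single I -> L change at position 11
print(list(candidate_neoantigens(normal_protein, tumor_protein)))
# [(11, 'KQRQLSFVK')]
```

In practice the comparison is done on sequencing reads rather than finished protein sequences, and the candidate peptides are then filtered for predicted MHC binding; the real pipelines are far more elaborate.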
That is, a population of T cells from outside the tumor is engineered to produce a T cell receptor that will react with the tumor neoantigen. The idea is that these engineered naive cells will be more potent than the progeny of the T cells that have already infiltrated the tumor. This technique borrows from the CAR world. But instead of equipping a T cell with an engineered antibody-based (B cell) receptor, the approach that Cooper describes puts a modified T cell receptor in place of a natural one. Cooper devotes most of the last third of his seminar to improvements in how this last step, a gene transfer, is achieved. I won't go into this part of his talk, except to say that the techniques he describes are much more efficient than the procedures that are usually employed. They offer the prospect of a far less expensive and perhaps even commercially viable treatment in the future, one that is applicable to many more patients. We'll see. According to Cooper, clinical trials will begin this year.
A Plethora of Promises
There are lots of new papers being published at the junction of the fields of cancer and immunology. To get a numerical grasp of the extent of the goings on, I did a Google Scholar search for articles with the words "cancer" AND "immunity" anywhere in their text, limiting the output to papers published in 2019. I entered the two search terms on Jan 12 at 2PM and turned up 1,750 hits! If you're counting, that's just 12 days, including weekends! The evident breakneck pace at which research is proceeding has implications that are both positive and negative. On the plus side is the obvious supposition that the chances of making progress in understanding both the basic biology of tumorigenesis and the prospects for finding an effective therapy go up the more research is done studying the subject. The negative implications are more subtle. What's an oncologist or cancer researcher to do under the weight of these papers? How can they possibly keep up? Added to their burden is the possibility, even the probability, that many of the conclusions reached in these publications will be incorrect or irrelevant or misleading. Or the therapy that they suggest may be appropriate for one patient and harmful to another. After all, we've seen that cancer is a heterogeneous disease. And then there's the issue of whether the few findings that show promise can be adequately tested in human subjects. Clinical trials are expensive. With so many studies ongoing, it will be a task to figure out which ones have the highest probabilities of panning out. Some may work best in combination with others, thereby multiplying the possibilities even further. IL-2 Notwithstanding these challenges, I thought it would be worthwhile to occasionally alert readers to some recent publications that have aroused my interest and seem to me indications of future directions in which the field of immunotherapy may go. 
Keep in mind that many promising indications have led to cul-de-sacs and even ingenious approaches often don't succeed. You've been warned. The paper that I'm going to discuss is by Daniel-Adriano Silva and 24 other authors and is entitled "De novo design of potent and selective mimics of IL-2 and IL-15" (Nature 565: 186 - 191 (2019)). Silva is from the University of Washington in Seattle, and his fellow collaborators come from universities located all over the world, including Portugal, Spain, the United Kingdom, California, and Maryland. I imagine that coordinating their project must have been a nightmare. Briefly, what they've done is to devise a new method of designing proteins and to use it to build an improved version of the protein interleukin-2 (IL-2) that may be useful for immunotherapy. Before getting into the meat of the paper, it might be useful to review some information about IL-2. It's a cytokine, specifically a member of the interleukin family that consists of some 40-odd signalling proteins. Interleukins are secreted proteins that bind to receptors on white blood cells (leukocytes) thereby changing their behavior. IL-2, in particular, was discovered more than 35 years ago, and was, according to Stephen Rosenberg (J Immunol 192: 5451-5458 (2014)), "the first effective immunotherapy for cancer". In the lead paragraph of his paper, Rosenberg writes: "In November 1984, a 33-year-old woman with metastatic melanoma who had progressed through multiple prior treatments received the aggressive infusion of rIL-2. Within one month after treatment, biopsy of one of her tumors showed extensive necrosis, after two months, all tumor deposits were shrinking, and a few months later, all evidence of cancer was gone. This patient has remained disease-free for the past 29 years. 
She was the first cancer patient to respond to the administration of IL-2 and, thus, the first to demonstrate that a purely immunologic maneuver that stimulated T lymphocytes could mediate complete destruction of large, invasive, vascularized cancers in humans." (The rIL-2 in the first sentence refers to IL-2 produced in bacteria via recombinant DNA technology.) Since that initial treatment, many other patients with metastasized melanomas and kidney tumors, cancers that have not responded to other treatments, have been treated with IL-2. About 5 to 10% of these patients showed complete and lasting remission of their disease. About double that number achieved partial remission. On the basis of these findings, in 1992 the FDA approved IL-2 therapy for the treatment of metastatic kidney cancer. Six years later, IL-2 was approved for treatment of metastatic melanoma. That's the good news. The bad news is that it takes massive, near toxic, doses of IL-2 to achieve these results.
A New IL-2
Silva and his 24 collaborators set out to limit the toxicity of IL-2. They knew that IL-2 binds tightly to a receptor consisting of three protein chains: alpha, beta, and gamma (see the figure at the right and the iCn3D derived molecular model below). It also can bind to just the gamma and beta proteins, albeit less tightly. When bound to only two proteins it exhibits less toxicity, at least in animals. Armed with this information, they took on the challenging task of creating a brand new protein that mimicked the structure and activity of IL-2 but would only bind to the gamma and beta protein components of its receptor. Because native IL-2 is an unstable protein, they also sought to build a version that had a longer life span. Creating a modified IL-2 by changing a few amino acids here and there is not a difficult task for genetic engineers. But designing a totally new protein, one that is more stable and capable of selectively binding to a specific receptor, is a task of herculean proportions. Other researchers have succeeded in similar efforts, but only with small proteins of much less complexity. I'm afraid that the way that they carried out this feat is beyond the scope of this blog (and largely beyond my understanding). In their own words, "We developed a computational protein design method in which the structural elements that interact with the desired receptor subunit(s) are fixed in space..., and an idealized de novo globular protein structure is built to support these elements." The result? A protein of 100 amino acids (called Neo-2/15) whose sequence is quite distinct from the IL-2 of mice or humans. Neo-2/15 is so different that antibodies against it don't cross react with native IL-2. Of course, the key issue is how well it works. Silva et al tested Neo-2/15 in mice with melanoma or colon cancer. Both kinds of tumors grew less well when treated with the Neo-2/15 protein as compared with native IL-2. And what of the future? 
The authors write: "De novo design of protein mimetics has the potential to transform the field of protein-based therapeutics, enabling the development of molecules that improve on biology by enhancing therapeutic properties and reducing side effects, not only for cytokines, but for almost any biologically active molecule..." Six of the scientists involved in this work have founded a company based on the inventions described in their paper. I intended to finish this blog with the last entry. But here I am back again. What happened? I was busily transposing blog entries into a book that I've been writing, and, after a week or two, reached the chapters on the innate immune system. When I began reviewing articles relevant to natural killer (NK) cells, I discovered several new (to me) developments. As with CAR-T's, scientists have begun genetically engineering NK cells, adding CARs into their genomes! It's an exciting happening because CAR-NKs offer a number of potential advantages over CAR-T cells for cancer therapy.
Let me review. Recall that natural killer cells attack tumors and cells infected by viruses. They distinguish their targets from their healthy counterparts by weighing a variety of inhibitory and activating signals that emanate from the target cell's surface. Most healthy cells are decorated with MHC I proteins on their outer membranes that act as inhibitors of NK cell activation and are the primary signals preventing attacks. When a cell gets stressed or becomes cancerous or gets infected by a virus, it may lose these "stop" signals. It then becomes vulnerable to NK assault. While MHC I molecules and other proteins inhibit NK cell activation, there are a great variety of excitatory ligands found on virus infected cells and tumors that interact with receptors on the NK surface. The NK cell integrates these activating and inhibitory stimuli and, depending on their relative weights, either ignores a target or attacks it. In addition, a number of cytokines stimulate NK function, activating them independently of other signals. Interleukin 15 is a prime example. NK cells can also recognize and destroy cells that are coated with antibody molecules. You may recall that when a natural killer cell is activated, it releases a barrage of granules into the immediate environment of its victim, causing apoptosis and cell death. In addition, NK cells let loose a variety of cytokines that have a similar effect on their targets. Moreover, these molecules cause the activation of some of the other cells of the innate immune system, heightening their potency. I've already discussed CAR-T cells as powerful therapeutic agents. They carry T cell receptors in addition to the engineered CARs that are added to their armory, and must be derived from a patient's own blood. If they aren't, if they're taken from another individual, the receptors may react with the patient's own cells with dire consequences. 
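The signal-weighing described above can be caricatured as a simple tally. This is strictly a toy model, with invented signal names and weights, meant only to illustrate how an attack-or-ignore decision could fall out of summed positive and negative inputs:

```python
# Toy model of NK cell decision-making: sum the activating signals, subtract
# the inhibitory ones, and attack only if the balance is positive. The signal
# names and strengths below are invented for illustration, not measured values.
def nk_decision(activating, inhibitory):
    """activating / inhibitory: dicts mapping signal name -> strength."""
    balance = sum(activating.values()) - sum(inhibitory.values())
    return "attack" if balance > 0 else "ignore"

# A healthy cell: abundant MHC I delivers a strong "stop" signal.
healthy = nk_decision({"stress_ligand": 1}, {"MHC_I": 5})

# An infected cell: MHC I downregulated, stress and viral ligands up.
infected = nk_decision({"stress_ligand": 4, "viral_ligand": 3}, {"MHC_I": 1})

print(healthy, infected)  # ignore attack
```

The real integration happens through dozens of receptor families and intracellular signaling cascades, but the net-balance logic is the same.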
But such so called "autologous" treatments are expensive, both in terms of time (several weeks for cell growth, purification, and treatment) and money (upwards of $400,000). Often very ill patients can't delay treatment or afford the expense. Because NK cells don't carry T cell receptors, they offer the possibility of an off the shelf remedy, one that is immediately available, standardized, and obtainable at a more reasonable price compared to their T cell equivalents. In addition, NK cells engineered with CARs possess other advantages. Patients often relapse because tumors change their dress and avoid the CAR that is aimed at them. Because NK cells retain their original receptors, they offer the possibility of toxicity via a pathway other than the one that they were engineered for. But NK cells aren't without deficits when it comes to tumor therapy. First, they have a relatively short lifetime after transfer into patients. Second, they don't seem to be as effective as killer T cells in penetrating solid tumors. Third, tumor cells can evade them by upregulating inhibitory ligands such as MHC I's. Fourth, they seem to be suppressed by inhibitory cytokines and other molecules secreted in the immediate environment of tumor cells. And last, it isn't clear what's the best source for these cells. Scientists have gone to great lengths in order to address these difficulties. NK cells taken directly from the blood of disease free individuals don't grow well in culture and therefore aren't very suitable for use as off the shelf reagents for therapy. Three alternatives are available. There are several NK tumor cell lines that can grow in culture indefinitely. These suffer from the disadvantage that they are derived from cancers and pose a potential risk. In clinical trials, such cells are irradiated to curb their proliferative capacity. Of course, this treatment also limits their persistence and, at least theoretically, makes them less effective cancer fighters. 
Another potential source of NK cells is embryonic stem cells or induced pluripotent stem cells. These cells must be treated with agents that direct them from an undifferentiated state to one that is similar, or identical, to NK cells. They too can be cultured indefinitely. NK cells derived from umbilical cord blood are a third possibility. Frozen cord blood is widely available (it's taken from the placenta and umbilicus after birth and would otherwise be discarded), and newly developed methods to isolate sizable quantities of NK cells from it promise commercial-scale accessibility. A clinical trial designed to test the efficacy of cord blood derived CAR-NK cells is currently underway at the MD Anderson Cancer Center in Houston. Designed to study the efficacy of CAR-NK cells against several varieties of leukemia and lymphoma, the phase I/II trial is currently recruiting patients. As far as I can determine, it's the furthest along of any study involving CAR-NK cells, with final results to be reported in 2022. I discuss it here in some detail as an example of the great promise of the many efforts being made to use NK cells for cancer therapy. The CAR-NK cells being tested in Houston have been ingeniously engineered to address some of the deficiencies noted above (see Liu et al, Leukemia 32: 520-531 (2018) for details). The CAR that has been introduced into these cells carries an antibody domain against an antigen found on the surface of leukemic cells and a typical transmembrane and activation domain. These parts of the CAR are similar to those employed against leukemia in the CAR-T cells described previously, which had so much success in clinical trials and have been approved by the FDA. Liu et al have added two additional genes to the CAR construct – one for interleukin 15 and an inducible suicide gene called iC9.
When these engineered CAR-NK cells encounter their target, the interleukin 15 gene becomes active and promotes their proliferation and survival. The suicide gene is there for safety. If the CAR-NK cells become too active and begin to cause damage, they can be quickly eliminated by activating iC9 with a specific pharmacological agent. CAR-NK cells are only one of many novel immunological therapies being tested in the medical community's efforts to combat the scourge of cancer. Some of the others that I've noted, including checkpoint inhibitors and CAR-T cells, have also shown evidence of efficacy. It is possible that a combination of these therapies may work better than any individual one. And, while it is possible to test many such combinations in mice and on cells in culture, real world testing – in people – is extremely expensive and time consuming. Simply finding enough patients willing to undergo a clinical trial so that a statistically significant result can be achieved is a challenge. As an outsider looking in, I'm impressed by the rapidity of progress in the understanding of both cancer and immunology, as well as the talent, creativity and ingenuity of the practitioners in these fields. But I'm distressed at the enormous difficulties facing the community in trying to bring their findings into the clinic. As described previously, the first generation of CARs constructed some 25 years ago consisted of a receptor composed of three parts from different proteins fused together: a modified antibody, a transmembrane section, and an intracellular domain. These showed promise but weren't particularly effective, and gave rise to succeeding generations that added one or two costimulatory domains to the intracellular portion. It is these latter versions that have proven successful in combating a variety of blood cancers.
However, since non-blood cell tumors haven't responded particularly well to either the first or later generations, numerous laboratories around the world have begun devising new CARs with enhanced abilities aimed at eradicating solid tumors. These new CARs have been designed to address several issues.
A very recent article by Nicholas Tokarew and colleagues ("Teaching an old dog new tricks: next-generation CAR T cells", British Journal of Cancer, Volume 119 - Available behind a paywall) describes how researchers have begun to deal with these points. Trafficking Upon biopsy, many tumors are found to be surrounded by T cells. For patients, that's generally good news. The presence of T cell infiltration is an indicator that the tumor may well respond to checkpoint inhibitors and other immune therapy. In order to improve the recruitment of T cells to the cancer, scientists have made use of chemokines, a class of cytokines that act as chemoattractants. Cancer cells "learn" to use these small proteins to further tumor progression. They secrete them into their immediate environment in an effort to recruit cells that inhibit the immune response. Scientists have tried to turn these molecules against the tumor. They have engineered CAR-T cells to express chemokine receptors so that the cells will migrate to the cancer site. Such fourth generation CARs have been called "TRUCKs", for "T cells Redirected for Universal Cytokine-mediated Killing". It isn't yet clear whether such vehicles are effective in humans. However, because different tumors often express distinct populations of chemokines, the intriguing possibility exists that CARs might be designed to respond to particular combinations and thereby selectively target particular cancers. Survival Ordinary killer T cells will die off if they do not encounter molecules to which their receptors bind. Moreover, they also require the presence of co-stimulants and cytokine signals. Second generation CARs carry both an external binding site that is likely to be engaged and co-stimulatory domains. Newer CARs have been created that also carry a cytokine signalling domain that will hopefully enhance the survival and killing ability of the CAR-T cell. Results are not yet in.
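To keep the successive generations straight, here is a small summary sketch in Python. The domain lists simply restate the descriptions above; they are a paraphrase, not a definitive catalog of any particular construct:

```python
# Domain composition of successive CAR generations, as described in the text.
# Each generation keeps the core antibody/transmembrane/activation skeleton
# and adds to the intracellular machinery.
car_generations = {
    1: ["antibody fragment", "transmembrane segment", "activation domain"],
    2: ["antibody fragment", "transmembrane segment", "activation domain",
        "costimulatory domain"],
    3: ["antibody fragment", "transmembrane segment", "activation domain",
        "costimulatory domain", "second costimulatory domain"],
    # Fourth generation ("TRUCKs"): adds cytokine/chemokine machinery.
    4: ["antibody fragment", "transmembrane segment", "activation domain",
        "costimulatory domain", "cytokine/chemokine module"],
}

for gen, parts in car_generations.items():
    print(gen, "->", len(parts), "fused components")
```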
Recognition As I've already mentioned, determining the appropriate target for a CAR is a challenge. The tumor antigen must be located on the cell surface and it must be specific to the cancer being attacked - not found on normal tissues. In practice, these two objectives are rarely achieved. What's more, even if they can be, tumors mutate. By selecting for the appropriate mutations, they are able to evolve and move antigens internally or eliminate them entirely. One strategy to get around the selectivity problem is to create a T cell that carries more than one CAR (so called "dual CARs"), each with a different antibody recognition site. The idea is that only when a cancer cell is detected that bears both antigens will the full killing response occur. Alternatively, some very clever researchers have developed what are called "split CARs". Again, T cells are made to carry two different CARs: one carries an intracellular activation domain but no costimulatory domain; the other, directed at a different antigen, carries a costimulatory domain but no activation domain. Only when these two CARs recognize a cell carrying the two antigens that they are directed against does the T cell become active (see the figure at the right). Both dual and split CARs are being evaluated in ongoing clinical trials. Counteracting Immunosuppressors Besides being hypoxic (low in oxygen) and lacking a reasonable blood supply, tumors often express inhibitory molecules such as PD-L1 that interact with PD-1 receptors on T cells in order to prevent the immune system from attacking them. A few years ago, with the advent of CRISPR/Cas9 (a potent DNA editing system), it became possible to knock out the gene for the PD-1 receptor on CAR-T cells. This proved effective when tested in animal models. However, in 2017 the process was taken a step further. A group of scientists used CRISPR/Cas9 to eliminate three genes from a CAR-T cell.
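Before going on, note that the dual- and split-CAR designs just described both amount to an AND gate: the T cell fires only when both antigens are present on the same target. A minimal sketch (the antigen flags are hypothetical placeholders):

```python
# Dual/split CAR logic: full activation requires BOTH target antigens,
# which is what improves specificity for cells that co-express them.

def dual_car_activated(has_antigen_a, has_antigen_b):
    return has_antigen_a and has_antigen_b

# A tumor cell bearing both antigens triggers the full killing response...
print(dual_car_activated(True, True))   # True
# ...but a normal cell bearing only one of them does not.
print(dual_car_activated(True, False))  # False
```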
They removed the PD-1 receptor gene and the endogenous T cell receptor gene as well as the MHC genes. This created what is called a "universal CAR". It's universal because such CAR-T cells can be collected from healthy individuals and deployed in anyone, since they lack both a T cell receptor and an MHC protein. If this and other techniques prove successful, having an "off-the-shelf" CAR-T will bring down the time needed to treat a patient as well as the costs, an exciting development. Control Because several patients have lost their lives due to CAR-T therapy, scientists have looked for ways to control CAR-T cells after they have been infused into the bloodstream. These controls have taken several forms. I'll only discuss one - the NOT-gate. A NOT-gate circuit consists of a CAR-T cell that carries two CARs. One is traditional, bearing an antibody against a tumor antigen along with a CD3 activation domain and costimulatory domains. The other carries an antibody directed against a normal cell surface antigen; instead of being linked to an activation domain, when it finds its antigen it activates an inhibitory protein. When this dual CAR-T encounters a tumor, it will be activated and hopefully carry out its assigned task. However, in the presence of normal cells, its activity will be inhibited and it will not function. The idea is that this will increase the specificity of CAR-T therapy. Wrapping up It's clear from this and previous posts that the CAR-T technique has caught the attention of the scientific community. As further evidence, I offer the following statistics, which come from clinicaltrials.gov, the US government website that tracks clinical trials worldwide. When I entered "CAR-T OR chimeric antigen receptor" into its search box a week ago I got 735 hits.
They included trials that were recruiting (367), completed (150), not yet recruiting (69), and active but not recruiting (55); the remaining trials were listed as unknown, terminated, suspended, or withdrawn. The fact that there's so much attention being given to CAR-T therapy offers hope for the future. But the number of clinical trials also demonstrates that we've got a long way to go. While I've described many ingenious new CAR-T variants, I've only told you about the tip of the elephant's tail. And it's highly probable that some combination of techniques may be optimal. Ultimately, new procedures and combinations will have to be tried in humans. So, it's likely that many more clinical trials will be added to the government's database in the near future. Meanwhile, scientists will have to make use of mouse models of cancer to develop promising approaches. One brand new technique that may allow researchers to more quickly manipulate new combinations of CARs in both mice and humans was recently published. Called "SUPRA", for split, universal, and programmable, it divides a CAR into two parts. As shown in the figure, one part carries an external antibody (three different ones are shown); the other, a CAR's intracellular signalling domain (it's carrying a red squiggly thing on its end). The two sections can be tethered together by means of a "zipper", a binding site on the intracellular piece that can match with one on the antibody (the blue and red squigglies bind to one another). Without going into too much detail, this allows researchers to add different antibodies to the CAR much more easily. One can imagine that a SUPRA CAR could be quickly switched from one antibody to another in a patient who hasn't responded to a given therapy. Well, that's all folks. I've decided to put these posts together into an electronic book. That's my next project. Look for it in the near future. Thanks for reading. Side Effects
I've emphasized that the major problem with cancer immunotherapy, in fact with cancer therapy in general, is distinguishing the malignant cells from normals. The CAR-T technique doesn't differ in this regard. As I mentioned in the last post, the two CAR-T constructs that have been approved by the FDA are directed at the CD19 cell surface antigen. The reason? The diseases that they're trying to remedy, leukemias and lymphomas, are cancers of B cells, and B cells rather specifically bear CD19 on their surface. But you should see one problem immediately. The treatment will kill not only the cancer, but also normal B cells - the cells that are responsible for antibody production. Actually, this side effect turns out to be one of the less serious reactions to CAR-T therapy, at least in the short term. Patients who receive CAR-T cells and end up lacking antibody-producing B cells can be given immunoglobulins intravenously. However, because the CAR-T cells often persist, and their persistence is correlated with their efficacy, these treatments have to be continued, perhaps for the life of the patient. A more serious problem, one that has shown up in a large proportion of treated patients, is cytokine release syndrome. Apparently, and especially when the burden of tumor cells is large, CAR-T cells release large quantities of cytokines when they attack and destroy their targets. These cause some serious side effects including fever, nausea, heart problems, kidney difficulties, liver toxicities, and more. It looks like the very success of the CAR-T cells causes these difficulties. Nearly half of the patients in one of the CAR-T trials exhibited the syndrome. Depending on the severity of the cytokine response, there are several remedies available, including the administration of monoclonal antibodies against the most potent cytokines. An unanticipated side effect that shows up after CAR-T treatment is neurotoxicity.
Some patients in trials have exhibited a variety of serious neurological symptoms including delirium, muscle spasms, and seizures. Even worse, in one trial, five patients succumbed to cerebral edema – the particular CAR-T treatment used was terminated. No one knows the reason for these effects, although again, some suspect that over-expression of cytokines is responsible. Limitations One limitation of CAR-T therapy that it shares with all other modalities is that cancer cells can escape death by changing the targeted antigen or by masking it so that it isn't detected by the T cell. Some 7 to 25% of acute lymphocytic leukemia patients have experienced relapse via this route after CAR-T treatment. Since cancer cells are continually evolving, any therapy will select for cells that are resistant. Therefore it's important to rid the body of as many cancer cells as possible - the fewer left behind, the less likely it is that a change in the targeted antigen will occur. Then there's the practical problem of cost. Novartis, the purveyor of one of the two approved CAR-T treatments, has announced that a single infusion of CAR-T cells will set patients (or their insurance companies or the government) back $475,000. That's before pretreatment and hospitalization fees, and doesn't include post-treatment follow-ups. If you add these costs in, patients will be spending upwards of $550,000. There's some justification for these prices. There are the extensive development costs incurred by drug companies. And recall that the procedure is bespoke – an individual's blood must be drawn; white blood cells purified; CARs inserted into the DNA of the patient's T cells; the transformed T cells placed in tissue culture and their number expanded; the cells injected back into the patient; and the patient monitored for side effects.
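The bespoke pipeline just listed can be laid out as a rough timeline. The day counts below are my own illustrative guesses, summing to roughly the several-week turnaround mentioned earlier; they are not figures from any trial:

```python
# Rough sketch of the autologous CAR-T workflow described above.
# Day counts are illustrative assumptions, not published figures.
workflow = [
    ("draw blood and purify white blood cells",  2),
    ("expand the T cells in culture",           10),
    ("insert the CAR gene into the T cells",     2),
    ("expand the engineered cells",              5),
    ("infuse back into the patient",             1),
]

total_days = sum(days for _, days in workflow)
print(total_days)  # 20 -- about three weeks before monitoring even begins
```

Every step is a source of both cost and delay, which is part of why off-the-shelf alternatives are so attractive.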
Nevertheless, given that close to 175,000 people in the US are expected to be diagnosed with leukemia, lymphoma or myeloma in 2018 (according to the Leukemia and Lymphoma Society of America), these costs will prove a burden to our medical system if CAR-T treatment, as it is now employed, is extensively used. However, current treatments for blood cancers are also very expensive. And they're not as effective. Complex cost-benefit calculations will have to come into play as we employ different modalities of treatment in the future. The biggest limitation of CAR-T treatment is that while it has proven effective against blood-borne tumors, these account for only 6% of cancer deaths. Solid tumors have proved refractory to CAR-T treatment. Many factors seem to be responsible. Again, the choice of which antigen to target is the biggest obstacle. Over the years, more than a half dozen different ones have failed to benefit individuals with a variety of cancers. In several cases, severe toxicity was observed. For example, a clinical trial involving patients with renal carcinoma utilized a CAR directed against a kidney enzyme, carbonic anhydrase, a protein that traverses the cell membrane and is often overexpressed in solid kidney tumors. It was chosen as a target for CAR-T therapy because it is found in low amounts in normal kidneys and other tissues. However, CAR-T treatment resulted in liver toxicity, apparently due to off-tumor expression of the enzyme in liver bile ducts. In another case, a patient with colorectal cancer died because of a reaction of the CAR used for his therapy with lung cells. Despite these setbacks, there have been some promising responses in some patients with melanoma and glioblastoma, but in general the results have been disappointing. Why? Actually, it's not surprising that solid tumors don't respond well to CAR-T therapy. Solid tumors are often surrounded by an extracellular matrix, making it hard for T cells to reach malignant cells.
What's more, solid tumors are often low in oxygen, deficient in nutrients and important amino acids, and poorly vascularized, creating an environment that makes it difficult for T cells to thrive. Worse, solid tumors produce a variety of inhibitory cytokines that impede T cell proliferation. Finally, CAR-T therapy requires that the cancer display the target antigen on its cell surface. But most antigens that are tumor specific are expressed within cells. All these difficulties are being addressed by a new generation of CAR-T cells, a topic that I'll take up in the next post. CAR-T cells, T cells that carry a chimeric antigen receptor, were first concocted by Zelig Eshhar, an immunologist at the Weizmann Institute in Israel. His invention, now a quarter of a century old, was a triumph of genetic engineering that utilized knowledge fueled by decades of basic research in immunology. Since its introduction it has produced some dramatic successes in treating a variety of blood cancers. While treatment of solid tumors has lagged somewhat behind, hopes are high that it will prove equally effective as the procedure improves. Because CAR-T developments are progressing so rapidly, even the latest textbooks are somewhat out of date. While Abbas et al. allot a few well-written pages to the technique, I found two review articles published in 2018 that offer more timely information. Both are freely available online ("CAR T cell immunotherapy for human cancer" by Carl June et al, Science 359:1361-1365 (2018) and "Concise Review: Emerging Principles from the Clinical Application of Chimeric Antigen Receptor T Cell Therapies for B Cell Malignancies" by M. D. Jain and M. L. Davila, Stem Cells 36:36-44 (2018)). While neither is directed at beginners, if you skip the hairy parts you can still get a good feel for the recent developments in the field and where it's going in the near future.
CAR Structure To best understand how CAR-T works, I found it useful to review the structure of the T cell receptor (see the figure at the right) and how killer T cells operate. Notice that the antigen (that's the small bright yellow structure) recognized by the T cell is presented to it by the major histocompatibility complex (MHC I) located on an antigen presenting cell (APC), often a dendritic cell. Recognition of the MHC on the APC also requires the T cell's CD8 co-receptor. The T cell receptor undergoes a conformational change upon antigen binding and thereby signals its success to a complex of proteins (CD3). In turn, in conjunction with a few costimulatory proteins, this sets off a chain of reactions that ultimately results in activation of the T cell. By contrast, consider the chimeric receptor devised by Dr. Eshhar. It is completely missing the part of the T cell receptor that binds an antigen. Instead, it substitutes a strange-looking antibody composed of a variable light chain and a variable heavy chain tethered together by a string of amino acids. The idea is that this engineered antibody will act as a receptor and bind to a specific protein on the outside of a targeted cancer cell. Linked to the antibody on the same protein chain is a short span of amino acids that is designed to bridge the cell membrane. And following this transmembrane segment, still on the same chain, are several regions – labelled costimulatory domains in the figure – that are intended to enhance cell division, increase cytokine production, and prevent cell death in the T cells. Finally, the CAR carries a portion of the CD3 complex that is designed to transmit the signal that occurs upon antigen binding to the nucleus to activate the T cell. The result is truly a chimera - a melding of different proteins on one chain.
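The chimeric chain described above is, at bottom, a composition of borrowed parts, which can be summarized in a sketch. Python stands in for the protein architecture here; the example domain choices (an anti-CD19 scFv, a CD28 costimulatory domain) follow commonly published designs but are assumptions in this sketch, not a description of any specific approved product:

```python
from dataclasses import dataclass, field

# One engineered protein chain assembled from parts of different proteins.
@dataclass
class ChimericAntigenReceptor:
    antigen_binder: str             # scFv: variable light + heavy chains, tethered
    transmembrane: str              # short span bridging the cell membrane
    costimulatory: list = field(default_factory=list)  # boost division/survival
    activation: str = "CD3-zeta"    # transmits the "antigen bound" signal inward

# A second-generation design like those discussed in the text
# (one costimulatory domain; the specific names are illustrative).
car = ChimericAntigenReceptor(
    antigen_binder="anti-CD19 scFv",
    transmembrane="hinge/transmembrane segment",
    costimulatory=["CD28"],
)
print(car.activation)  # CD3-zeta
```

The point of the sketch is simply that the "chimera" is one continuous chain whose segments each came from a different protein.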
Note that the CAR bypasses activation via the MHC, an advantage because many cancers lose MHC expression in the course of their development in an effort to evade immunosurveillance. CAR Mechanism Here's how the CAR is supposed to function. Its antibody portion has been engineered to stick out of the cell membrane and bind to a specific antigen on a cancer cell (the CARs that have gotten FDA approval for therapy bind specifically to a protein called CD19, a common antigen on the surface of B cells). This binding causes a change in the conformation of the receptor that is communicated to the portion sticking into the cell, causing activation, cell division, and the unleashing of the killing potential of the T cell. Notice again that unlike in normal T cells, activation requires neither an antigen presenting cell nor an MHC. And here's how it works in practice (this section and the next come from a paper published just a few days before this was written – "Chimeric Antigen Receptor (CAR T) Therapies for the Treatment of Hematologic Malignancies: Clinical Perspective and Significance", Boyiadzis et al, Journal for ImmunoTherapy of Cancer 6:137 (2018)). Blood is taken from a patient with leukemia or some other blood cancer and transferred to a facility where T lymphocytes are purified and placed into tissue culture. The cells are stimulated to divide with a variety of reagents. After they've grown to a reasonable number, some mechanism is used to introduce the chimeric gene into their genomes. The T cells, now bearing a CAR directed against the cancer, are transfused back into the patient where they, hopefully, kill the cancer. Time from initial blood collection to infusion back into the patient: three weeks. Results Two CAR-T constructs have been approved by the FDA. Both target CD19. The first, approved in August 2017, is intended to treat acute lymphocytic leukemia in young patients up to 25 years old who have not responded to treatment or who are in relapse.
Approval was given on the basis of a clinical trial involving 75 patients aged three to 25. The results stunned a scientific community used to incremental progress. Fully 81% of the patients were in remission by the end of the trial! The same CAR-T was subsequently approved (May 2018) for treatment of older individuals with diffuse large B-cell lymphoma, following similarly encouraging results. A second, slightly different CAR-T was approved by the FDA in October 2017. It targeted patients with diffuse large B-cell lymphoma who had relapsed after two prior treatments or had failed to respond to conventional therapy. Again, the phase II trial results on which the approval was based were dramatic, with an 18 month survival rate of over 50%. These two CAR-Ts are just the beginning. A multitude of clinical trials are ongoing. Encouraging results have been forthcoming for treatment of multiple myeloma, acute myeloid leukemia, and Hodgkin lymphoma. These extremely promising results are the good news. They indicate that CAR-T therapy can be effective against leukemia, lymphoma, and other cancers of white blood cells. They validate the general strategy of using modified T cells to attack and rid the body of malignant cells. They're breakthrough treatments and represent the first personalized gene therapy to have gotten FDA approval. In the next post, I'll discuss the bad news: CAR-T cell toxicity, cost, and failure to offer effective treatment of solid tumors. I'll end with a discussion of the prospects for further developments in CAR-T therapy. There's good evidence that killer T cells (aka cytotoxic T cells or CTLs) are the main weapons employed by the immune system to protect us against tumors. Consistent with that, people are often found to carry killer T cells directed against the tumors that they harbor. However, for a long time it wasn't clear how such T cells came to carry out that function.
Recall that the function of killer T cells is to destroy cells carrying a protein fragment on their surface, a fragment nestled in the arms of a class one major histocompatibility complex protein (MHC I). Remember that these pieces of protein got there via the proteasome, the apparatus present in cells whose function is to digest internal proteins. Since most types of cancer cells are not good at antigen presentation, they are ineffective at signalling to killer T cells that they've become malignant. On the other hand, the class two MHCs (MHC II) are specialized to present proteins taken up from outside the cell. As such, they are capable of detecting the proteins on tumors, but their function is to present them to helper T cells, not killers. Cross Presentation The solution to this conundrum is a phenomenon called "cross presentation". Apparently, there is a class of antigen presenting cells - a subset of dendritic cells – that can take up proteins from tumors and load them onto MHC I molecules, thereby presenting them to killer T cells. In turn, these assassins can destroy the cancer. However, as evidenced by the numbers of people who succumb to the disease, this attack doesn't always succeed, in part because tumors may elude it. Cancer cells can evade the immune response in many ways. One common mechanism that they employ exploits two proteins found on T cells – CTLA-4 (cytotoxic T lymphocyte antigen 4) and PD-1 (programmed death-1). Both these molecules work to inhibit T cell activation, a process that requires both T cell receptor binding to an antigen and co-stimulation by a dendritic cell. (The illustration at the right shows CTLA-4 in action. It binds to the B7 receptor that is normally used to activate the T cell. PD-1 performs a similar inhibitory function, but works by binding to a protein found on tumor cells that helps cause T cells to commit suicide.)
Normally, these two proteins serve as checkpoints, preventing autoimmunity and excessive inflammation associated with uncontrolled T cell activation. Tumors are clever. They subvert these functions to limit their susceptibility to immune attack. Naturally, scientists who study these processes thought to inhibit the inhibitors, thereby coming up with a therapy that is called "checkpoint blockade". I was intrigued to discover that the first such therapies were suggested more than 20 years ago. Since then much has been learned, as I discuss below. Checkpoint Blockade In 1996, James Allison's laboratory reported that antibodies directed against CTLA-4 had a dramatic effect on controlling tumors in mice. Their findings spurred a rash of trials and, eventually, marketed pharmaceuticals. And just this year, Allison was awarded a portion of the Nobel Prize in Physiology or Medicine for his discoveries. Today, anti-CTLA-4 monoclonals have been approved for treating advanced melanoma. Monoclonal antibodies directed against PD-1 have proven even more effective and have been approved for treating melanoma, lung, kidney, bladder, and colon cancer. A combination of the two therapies has yielded improved results in some cases. Checkpoint blockades have become one of the major weapons in the battle against cancer. However, the news has not been all rosy. About half of all patients treated with antibodies against CTLA-4 and PD-1 fail to respond. Or, after responding, relapse. In addition, these therapies are not without risk - many patients develop autoimmune reactions, some quite serious, as a result of treatment. Questions A couple of nagging questions came to me after reading about checkpoint blockade and T cell killing of tumors. The first one was rather basic. Why do T cells respond to cancer cells at all? Since tumor cells are simply normal cells that have gone awry, why does the immune system attack them? I offered some answers in a previous post. 
One reason that has been offered is that cancers carry a burden of "neoantigens" - proteins that appear unusual (not self) as a result of the mutations that are essential for the expansion of tumors. These aberrant proteins appear foreign to the immune system and may generate a response. How common are these neoantigens? Intriguingly, it depends on the type of cancer. Melanomas and lung cancers carry, on average, something on the order of 10 mutations per million bases of DNA (the human genome size is 3 billion base pairs), and this mutational load is strongly correlated with the generation of neoantigens. By contrast, other kinds of cancer, including leukemias of various kinds, bear 10 or 20 times fewer mutations. Why this difference? I'm not sure, but it seems likely that the lungs of smokers and the skin of people exposed to the sun are subjected to agents that cause many mutations – not only ones that can cause cancer, but base changes in incidental (bystander) genes that can generate neoantigens. In the posts that follow, I will finally get to my main interest in developing this blog: chimeric antigen receptors, part of what has been termed "adoptive cell therapy". Hang in there. The end is near. From My Lawyers (not really) Allow me to open with a warning. Because I am not a physician, and, in fact, don't have any medical training at all, I'm not to be trusted as a source of medical advice. For readers who are afflicted with cancer, or have relatives or friends with the disease, a good oncologist is the best source of information about therapeutic options. My goal in writing this blog is not to offer therapeutic guidance. It is, first of all, to help me learn this extremely interesting area of molecular biology. And secondly, to help others do the same. The only way that this work might offer some direction to cancer sufferers is by providing a broad overview of the basic science behind many of the therapies that might be offered.
It also might help to point to future developments. But it certainly shouldn't be relied on for concrete medical advice. At all. Tumor Immunity As we've seen, the immune system exists to defend against foreign invaders. As such, it must somehow distinguish the aliens from the natives. If it fails this task, it risks autoimmunity, often with grave consequences. With respect to cancer, there's the rub. Cancer cells are not foreigners. They're normal cells that have acquired mutations and lost their way. As I've found, this problem of telling the good guys from the bad - killing off the enemy without doing collateral damage - is the biggest problem faced by immunotherapy (and other therapies as well). Despite the fact that cancer cells and normals share most biochemical features, there is good evidence that tumors can elicit an immune response. One piece of evidence comes from studies with chemically induced cancers in mice. If such a tumor is totally removed and transplanted into a closely related sibling, the tumor will grow and eventually kill. If, on the other hand, the tumor is transplanted back into the original mouse, it will often fail to survive. Of course, mice and humans differ. However, it has long been known that human colon tumors are often surrounded by both killer and helper T cells, indicating that the tumor has elicited an immune response. The absence of such infiltration is associated with a poor prognosis. Another piece of evidence that the immune system does respond to cancer comes from the observation that immune-compromised individuals are more prone to cancer than people with an intact immune system. Despite these studies and observations, it's obvious that the immune system often fails to prevent cancer. There are several reasons for this inability to fend off the disease. First, cancers have several direct strategies to thwart an immune response, a topic that I'll come to later.
Second, the immune response may simply lack the power to overcome some cancers. And third, because cancer is an evolutionary disease, the cancer may change its dress, hiding those antigens that evoked the immune response in the first place. At least four questions come immediately to mind. What are the antigens that the immune system detects in tumors? Which arm of the immune system is used to attack tumors? What mechanisms do tumors use to mitigate the immune response? What strategies have oncologists developed to strengthen the immune system so that it can better ward off cancer?

Cancer Antigens

Cancer is a disease caused by mutations in the genome. Often these mutations are in the protein coding region of one or more genes, thereby changing the amino acid sequence of one or more proteins. (Not always. Some mutations may occur in the regulatory region of a gene, increasing the concentration of an oncogene product, for example. Alternatively, a mutation may wipe out a gene completely, resulting in the complete absence of a particular protein, such as a tumor suppressor.) In turn, changing the amino acid sequence of a protein will change its three dimensional shape, marking it as foreign and as a potential antigen. The mutation doesn't necessarily have to be in a protein in a pathway that promotes cell proliferation. Frequently, cancers carry mutations in many bystander genes (termed "passengers" by scientists in order to distinguish them from "drivers"). The proteins that result from such derangements in protein structure are called "neoantigens". However, even proteins that are part of the normal cell repertoire have been shown to become antigens. For reasons not clear to me, proteins overexpressed as a result of the amplification of a gene or because of some derangement in gene regulation may become antigenic.
Also, proteins that are expressed in embryos and that somehow show up in cancer cells, a not uncommon occurrence, may be marked as non-self by the immune system. These observations raise the prospect that more refined and improved immunological approaches might be developed to ward off oncogenesis. What are the main components of the immune system that attempt to fend off cancers? And how can these components be manipulated to better serve patients? Those topics are subjects for the next post.

Leonard Hayflick

I was living in Baltimore in the 1970's. Located on the campus of Johns Hopkins University, a few hundred yards from where I was working at the time, was a branch of the Carnegie Institution, a world famous research establishment. After attending one of the frequent seminars presented there, I passed some time discussing science with one of the staff scientists. His laboratory specialized in studying cells in culture, and we were talking about a recent publication of Leonard Hayflick that had been making quite a stir. Hayflick claimed that if human cells are placed into tissue culture they will divide only a fixed number of times - about 60 cell divisions - after which they will remain alive, but will no longer be capable of mitosis (the Hayflick limit). In other words, normal human cells have a limited proliferative capacity. My colleague in Baltimore dismissed this as nonsense. He told me that Hayflick didn't know how to properly treat his cells. If only he had added the appropriate goodies to his culture media, the cells he was studying would have gone on to replicate forever. "Furthermore," he said smugly, "if Hayflick is right, that would mean that something is slowly wearing out, division after division. Can you imagine what that could be?" At the time, no one could. But Hayflick was right. Most normal human cells can only divide a limited number of times. And his observation has profound implications for cancer and aging.
It turns out there is something being lost every time a cell divides in humans and most other higher organisms. It's something that serves as a means of counting because it occurs at every cell division. The counter works because of a limitation of the enzyme that carries out DNA replication – DNA polymerase. For reasons beyond the scope of this post, every time linear DNA copies itself, a small chunk – 50 to 100 bases – is lost at its ends. As you might imagine, that would pose a severe problem, especially for cells that need to divide many times. The shortening of chromosomes appears to be the mechanism that accounts for the Hayflick limit.

Telomeres

Of course most cells need to divide many times during embryogenesis, and stem cells must continuously divide to make up for cell death. How do they get around the limits imposed by the failings of DNA polymerase? The answer, in a word, is telomeres, structures found at the end of the chromosomes of almost all creatures (many bacteria and mitochondria have circular chromosomes without ends and thus avoid the problem). Telomeres were first discovered by one of the giants of biology, Barbara McClintock, a woman whom I was fortunate to meet briefly shortly after she was awarded the Nobel Prize in Physiology or Medicine in 1983. McClintock found that when the chromosomes of corn had somehow lost their normal ends, they fused, end to end, with each other. She reasoned that the termini of unbroken chromosomes must bear structures that prevent this from happening. Many years later, it was found that the tips of chromosomes bear thousands of short repetitive sequences. Depending on the creature, these sequences vary from six to eight bases in length. In humans, the sequence of each telomere repeat is TTAGGG. At its very end, as shown in the figure at the right, the DNA in each telomere loops around and attracts a group of proteins that prevents the ends from fusing with each other.
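The arithmetic behind this counting mechanism is easy to check for yourself. Here's a back-of-the-envelope sketch in Python; the starting telomere length of roughly 10,000 bases is a rough literature figure, the loss per division comes from the 50 to 100 bases mentioned above, and the critical length at which division stops is my own assumption for illustration:

```python
# Toy model of telomere shortening - the counting mechanism behind the
# Hayflick limit. All numbers are rough, illustrative figures.

START_LENGTH = 10_000    # approximate telomeric bases at birth (rough figure)
LOSS_PER_DIVISION = 75   # midpoint of the 50-100 bases lost per division
CRITICAL_LENGTH = 5_000  # assumed length below which division halts

divisions = 0
length = START_LENGTH
while length - LOSS_PER_DIVISION >= CRITICAL_LENGTH:
    length -= LOSS_PER_DIVISION
    divisions += 1

print(divisions)  # lands in the neighborhood of Hayflick's ~60 divisions
```

The point is only that a fixed loss of 50 to 100 bases per division, against a starting budget of a few thousand "spendable" bases, naturally yields a limit of a few dozen divisions.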
Telomerase

Telomeres are placed on chromosome ends by telomerase, an enzyme, a reverse transcriptase, that adds bases to DNA using RNA as a template. Embryonic and stem cells have lots of this enzyme. As they divide, their telomeres are readily replenished. But after the initial stages of embryogenesis, telomerase activity wanes, and therefore DNA is lost bit by bit from the ends of chromosomes at each cell division. What has this to do with cancer? If you've read "The Immortal Life of Henrietta Lacks" by Rebecca Skloot, you'd know that HeLa cells, the cancerous cervical cells that were removed from Ms. Lacks' tumor in Baltimore in 1951 and propagated since then in tissue culture in hundreds of laboratories, grow indefinitely. They are illustrative of the fact that cancer cells are immortal and must somehow avoid the Hayflick limit. One mechanism that allows them to do that is to reactivate the telomerase gene, and something on the order of 90% of all cancers do so. How cancer cells accomplish this reactivation is largely unknown, but the gene offers a tempting target for cancer therapy because of its central role in maintaining telomere length. However, there might be a trade-off. If somehow or other a drug were found that stopped telomere extension, it might also stop the normal replenishment of telomeres in stem cells. That, in turn, might affect aging, since stem cell renewal probably plays an important part in keeping us healthy. Some scientists have taken the opposite tack - they suggest increasing the level of telomerase so as to buttress the ability of stem cells to churn out fresh tissues in old age. In other words, to suppress aging. But of course, this opens up the possibility of encouraging the growth of tumors. Well, I've finished with my overview of cancer biology. Now on to the final topic: the use of immunotherapy for the treatment of the disease.
A comprehensive and current overview of p53 biology can be found in a review article by Edward Kastenhuber and Scott Lowe of the Sloan Kettering Institute in New York (Cell 170:1062-1078 (2017)). Unfortunately, the article is behind a paywall - but you already know my opinion about that. Let me borrow from the article to recapitulate some of the main take-home lessons about p53.
Elephants

I'll end this short post with a few unexpected pieces of information that I came across in preparing this essay. The first has to do with one of my favorite animals: elephants. It turns out that elephants don't often get tumors, despite the enormous number of cells in their bodies and their long lives. That is, from a statistical point of view, given their cell numbers and the rate of mutation, they shouldn't reach old age before becoming riddled with tumors. The unexpectedly low incidence of cancer in elephants (2 to 5 times less susceptible) and other large vertebrates has been given a name – Peto's Paradox. I quote from Wikipedia: "Peto's paradox is the observation, named after Richard Peto, that at the species level, the incidence of cancer does not appear to correlate with the number of cells in an organism. For example, the incidence of cancer in humans is much higher than the incidence of cancer in whales. This is despite the fact that a whale has many more cells than a human. If the probability of carcinogenesis were constant across cells, one would expect whales to have a higher incidence of cancer than humans." How then do whales (and elephants) avoid cancer despite being at greater risk? For whales it isn't known. However, African elephants are remarkable in that they have 20 p53 genes, 19 more than normal (Indian elephants have somewhat fewer). What's more, it appears that elephant cells respond better to DNA damage via apoptosis than cells from humans. And this improved response is not only correlated with, but seems to be dependent on, these extra copies of the p53 gene. The supernumerary p53 genes are peculiar – they appear to be DNA copies of p53 mRNA. Somehow, millions of years ago, p53 mRNA was converted into DNA and integrated into the elephant genome. This phenomenon has been observed previously, although not for p53.
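The statistical argument behind Peto's paradox can be made concrete with a few lines of Python. If each of N cells independently turns malignant with some tiny probability p over a lifetime, the chance of at least one cancer is 1 - (1 - p)^N. The per-cell probability below is entirely made up for illustration; the cell counts are round figures (a human body has on the order of 3 x 10^13 cells, and a large whale perhaps a thousand times more):

```python
import math

def p_cancer(n_cells, p_per_cell):
    """Probability that at least one of n_cells turns malignant,
    assuming each does so independently with probability p_per_cell."""
    # 1 - (1 - p)^n, computed stably for very small p
    return -math.expm1(n_cells * math.log1p(-p_per_cell))

P_CELL = 1e-15  # made-up lifetime transformation risk per cell

print(p_cancer(3e13, P_CELL))  # human-sized body: a few percent
print(p_cancer(3e16, P_CELL))  # whale-sized body: near certainty
```

With the same per-cell risk, the whale-sized body is all but guaranteed to develop cancer while the human-sized one mostly escapes. Since real whales are not riddled with tumors, something must be lowering their per-cell risk – and for elephants, the extra p53 copies described above are one strong candidate.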
Genes formed this way are called "retrogenes" because they've been generated backwards – from RNA to DNA rather than the other way around. Only a few of these extra copies produce a protein, but apparently they are sufficient, at least to some extent, to protect elephants from the scourge of cancer.

Aging

These extra elephantine p53 genes suggest a therapeutic strategy. Why not introduce a more active version of the p53 gene into humans to protect us from cancer? Actually, experiments have been attempted whose results bear on this issue. A variant p53 gene, one that produced the p53 protein uncontrollably, was introduced into the germ-line of mice. Those with one copy of this overactive gene had a greatly reduced chance of contracting cancer. That's the good news. The bad news was that they died sooner, apparently aging faster, than controls. Apparently, cells strike a balance. Too little p53 and the risk of cancer goes up. Too much p53, and cells don't divide sufficiently to keep up with the natural losses that occur with time. In the next post, I'll discuss the aging of cells. That'll be the last post before I finally get to the topic that brought this blog about: cancer immunotherapy.

The presence of p53 inhibits cell proliferation. Because p53 is present in almost undetectable amounts in most normal cells, they can properly respond to growth factors and divide when appropriate. A profusion of physiological signals can rapidly elevate the amount of p53 in response to some perceived problem, thereby limiting unwarranted cell proliferation. The protein responsible for regulating p53 has the peculiar name of "mouse double minutes 2" or Mdm2. (The name requires a brief explanation. Double minutes (not 60 seconds, "my-newts") are chromosome fragments that are often found in tumors. They consist of circular pieces of DNA that are present in multiple copies.
They often carry oncogenes, for, as you may recall, an increase in the number of oncogenes may spur cell proliferation and tumor formation.) Mdm2 is an oncogene that was found in a mouse tumor that carried these double minutes. And the protein that it specifies, also called Mdm2, is the major controlling element of p53. Sometime later, mutant versions were found in several human cancers.

Mdm2

How does Mdm2 control p53? In two ways. First and foremost, it helps degrade p53 by attaching ubiquitin to it, marking it for destruction. And what's ubiquitin? Ubiquitin is a small protein found in nearly every human cell (it is ubiquitous, hence its name) and serves to indicate to the proteasomes (remember them?) in the cell that the targeted protein should be broken down. Second, Mdm2 binds to p53 and prevents it from acting as a transcription factor, thereby interfering with its antiproliferative capacity. Notice that this behavior makes Mdm2 an oncogene, but an oncogene that differs from those that I've written about previously. The others were either out-of-control growth receptors or aberrant enzymes directly in the downstream pathways emanating from these receptors. Mdm2, by contrast, acts by limiting the activity of a tumor suppressor. When it acquires increased activity via a mutation, it prevents p53 from doing its job. If Mdm2 is continually degrading p53, how does p53 ever escape? When an event occurs – see the figure – that signals a problem that p53 can address, several protein kinases may be activated. As you may recall, these enzymes attach phosphate groups to proteins. They do so to p53, inhibiting Mdm2 from marking it for destruction. In addition, these same kinases may add phosphate groups to Mdm2 itself, lessening its ability to interact with p53. The result is an increase in the amount of p53. Complicating matters even further, there is at least one other protein that binds to Mdm2, preventing it from destroying p53.
It too, then, is a tumor suppressor, because when it is absent via mutation, Mdm2 has a freer hand, which in turn prevents p53 from accumulating. At first it was thought that p53 reacted only to DNA damage. For that reason, it was given the title "guardian of the genome". But later research showed, as indicated by the figure, that other cellular stresses may cause p53 concentrations to swell. The list is long, and it grows even longer as more stresses are discovered.

How p53 Guards the Genome

If you've been following along, the next question that might occur to you is: How does p53 work to guard the cell from the problems shown in the figure above? That is, how does p53 limit aberrant cell division? The answer is that p53 is a transcription factor that regulates the activity of a great variety of genes (Weinberg lists 27; a recent review asserts that there are approximately 500!). One surprising target of p53 is the gene for Mdm2. p53 acts to increase the expression of the Mdm2 gene, resulting in an elevation in Mdm2 protein concentrations. Thus, p53 sows the seeds for its own destruction. This negative feedback loop ensures that under ordinary conditions, the levels of p53 are kept in check. Mice that bear two inactivated Mdm2 genes have greatly increased amounts of p53. That really isn't helpful. While they don't get cancer, they die before birth, apparently because their normal cells don't divide as they should. Another target of p53 is a gene that encodes a protein that inhibits several cyclin dependent kinases – proteins that play a role in advancing cells through the G1 phase of the cell cycle. It is in this way that p53 retards cell division when activated by DNA damage, allowing cells time to repair problems if they can. Moreover, there is evidence that p53 activates the transcription of genes involved in DNA repair.
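The p53-Mdm2 negative feedback loop can be caricatured in a few lines of Python. This is a toy discrete-time model with invented rate constants, not a real kinetic model: p53 is produced at a constant rate, induces its own destroyer Mdm2, and is degraded in proportion to how much Mdm2 is around; "stress" (such as DNA damage) is modeled simply as a drop in Mdm2's ability to degrade p53:

```python
def simulate(steps, stress=False):
    """Toy p53-Mdm2 feedback loop; all rate constants are invented."""
    p53, mdm2 = 1.0, 1.0
    k_deg = 0.1 if stress else 1.0  # stress blocks Mdm2-mediated degradation
    for _ in range(steps):
        # constant production, Mdm2-dependent degradation, slow basal decay
        p53 += 1.0 - k_deg * mdm2 * p53 * 0.5 - 0.05 * p53
        # p53 induces transcription of Mdm2, which itself decays
        mdm2 += 0.2 * p53 - 0.2 * mdm2
        p53, mdm2 = max(p53, 0.0), max(mdm2, 0.0)
    return p53

print(simulate(500))               # baseline: the loop holds p53 low
print(simulate(500, stress=True))  # stress: p53 climbs severalfold
```

The point is only qualitative: with the feedback intact, p53 settles at a low steady level even though it is being made continuously; weaken Mdm2-mediated degradation and the very same loop lets p53 accumulate severalfold, mirroring the stress response described above.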
All this means that when p53 is absent, organisms are at the mercy of agents that damage the genome, such as UV irradiation and chemical mutagens. In fact, mice that lack p53 are extremely prone to tumors when exposed to mutagenic agents. And p53-minus cells grown in culture have aberrant chromosome numbers as compared to controls with the normal complement of p53.

Apoptosis

There will be occasions, especially when a cell is teetering toward malignancy, where disturbances to the genome cannot be repaired. At this point, p53 assumes another role – as Dr. Kevorkian, an enabler of suicide. Programmed cell death, apoptosis, is a mechanism built into nearly every cell. It's there for several reasons. During embryogenesis, it helps sculpt organs by removing cells that aren't needed. For example, the digits of human hands and feet are formed by removing cells from between the future fingers and toes (see the figure). The removal of the tadpole's tail during metamorphosis is another particularly dramatic example. But apoptosis is occurring all the time. Several references assert that something on the order of a million cells in the human body die by apoptosis each second. In the present context, p53 uses apoptosis to remove cells that pose a danger to the organism because of some damage that cannot be readily repaired. It does so in its role as a transcription factor, increasing the expression of several genes in the apoptosis pathway and inhibiting others that act to suppress apoptosis. Apoptosis is a phenomenon that has been known to biologists for over one hundred years. However, its mechanism has been uncovered only recently. As you might suspect, it's quite complex, and a lot is still being investigated. Unfortunately, I have only a middling acquaintance with the subject, certainly not enough to write about it in depth.
For those who want to learn more, I've come across a well reviewed book, "Cell Death: Apoptosis and Other Means to an End" by Douglas R. Green, that may be a helpful introduction. For now, I'm going to end my discussion of apoptosis and devote the next post to summarizing the role of p53.