What does the science say about CAPE’s “Fossil Fuel Ads Make Us Sick” campaign?

Regular readers of this blog know of my ongoing disappointment with the MDs at the Canadian Association of Physicians for the Environment (CAPE). No group has so consistently disappointed me with the variance between the reports they are capable of producing and their actual output. As I have detailed in previous blog posts, they have produced bad research on BC LNG, bad epidemiology, bad takes on projects like the Site C Dam and the Trans Mountain Expansion Project, and their biggest ongoing body of bad work on natural gas flaring, fugitive emissions and the climate effects of natural gas.

Naturally this led me to approach their latest campaign, Fossil Fuel Ads Make Us Sick, with a jaundiced eye. What I have determined is that this campaign builds on the misinformation and bad epidemiology described above and then adds new bad angles and newer bad data. The crux of their campaign is that:

air pollution from the burning of fossil fuels is one of the leading causes of premature mortality in Canada

In a recent article in the National Observer, “Doctors know banning fossil fuel ads is a matter of life or death”, they have added another questionable claim:

fossil fuel air pollution is responsible for one in seven premature deaths in Canada

As is normal with the work by CAPE, there is always a sliver of truth in their articles and campaigns, but invariably their campaigns seem to be built on an incomplete reading (or simple misreading) of the underlying research. That being said, now I need to support my claims with actual research. So let’s begin:

Fossil Fuels are a leading cause of premature death

Let’s look at that claim that: “air pollution from the burning of fossil fuels is one of the leading causes of premature mortality in Canada”. Their claim links to a Health Canada report with the title: Health Impacts of Air Pollution in Canada: Estimates of morbidity and premature mortality outcomes – 2021 Report. Upon reading this report one thing becomes abundantly clear: the claim made by CAPE is not supported by (or even made in) the report.

Reading the Health Canada report I was most struck by the absence of any significant statements about fossil fuels. Specifically, the term “fossil fuels” appears only a single time, in a discussion about the formation of nitrogen dioxide compounds. Nowhere does the report indicate that fossil fuels are a leading cause of premature mortality in Canada.

The Health Canada report identifies “air pollution” as a major cause of premature mortality and then goes on to discuss the sources of that air pollution. In doing so it provides the data showing that the burning of fossil fuels represents only a very minor source of that air pollution.

Let’s start with PM2.5, which, according to the Health Canada report, represents the source of approximately two-thirds of the premature deaths. For those unfamiliar with the term, PM2.5 refers to fine particulate matter (dust) with a diameter of 2.5 microns or less. PM2.5 is a particularly troubling component of air pollution because it is believed to have a disproportionate effect on human health and may even be small enough to affect fetuses in utero. Decreasing exposure to PM2.5 is a solid goal that will improve community health. So what does the report say about the source of the PM2.5? Let’s look at their table:

Yes, you read that right: the “Oil and Gas Industry” and “Transportation and Mobile Equipment” contributed 48,000 tonnes of PM2.5 to the national emission total…out of a total of 1,600,000 tonnes! Doing the math, fossil fuels contributed approximately 3% of the total anthropogenic PM2.5. Notice that qualifier: “anthropogenic”. That is an incredibly important proviso because forest fires produce almost the same amount of PM2.5 as human activities but are not included in the accounting in this report (“fires” in the report refers to cooking fires), yet they absolutely affect human health.
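For those who want to check the arithmetic, here is a minimal back-of-envelope sketch using the rounded figures quoted above (treat these as approximations of the inventory values, not exact numbers):

```python
# Quick check of the fossil fuel share of anthropogenic PM2.5 using the
# rounded figures quoted above from the Health Canada inventory.
fossil_fuel_pm25_t = 48_000       # tonnes/yr: "Oil and Gas Industry" + "Transportation and Mobile Equipment"
total_anthro_pm25_t = 1_600_000   # tonnes/yr: national anthropogenic total

share = fossil_fuel_pm25_t / total_anthro_pm25_t
print(f"Fossil fuel share of anthropogenic PM2.5: {share:.0%}")  # ~3%
```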

For those of you who like graphics, see below a figure depicting the various sources of PM2.5. What you will notice if you blow up the figure is that fossil fuels don’t appear. They are combined in the “Other” category because they don’t even warrant their own colour.

With respect to mortality the report indicates:

Chronic exposure to PM2.5 air pollution contributed to 8.0% of all-cause nonaccidental mortality among Canadians over 25 years of age, equivalent to 10,000 deaths per year or 27 deaths per 100,000 population.

As a source of 3% of the PM2.5 that would make fossil fuels responsible for about 300 deaths per year. Certainly a tragedy but not “one of the leading causes of premature mortality” considering that “dust” is responsible for 5,000 deaths a year and CAPE is not leading a campaign against “dust”.

Now admittedly, fossil fuels represent a much higher percentage of NOx emissions (about 75%, see below) but NOx is only responsible for 1,300 deaths a year. Ground-level ozone is reported to be responsible for 2,800 deaths a year and fossil fuels are responsible for up to 52% of those premature deaths, but even when added together they don’t hold a candle to our biggest nemesis: “dust”.
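To make that arithmetic explicit, here is a rough tally multiplying the Health Canada death estimates by the approximate fossil fuel shares discussed above. The shares are my approximations, not numbers the report presents this way:

```python
# Rough attribution tally: Health Canada death estimates multiplied by the
# approximate fossil fuel shares discussed above (my approximations).
pm25 = (10_000, 0.03)    # deaths/yr, fossil fuel share of anthropogenic PM2.5
nox = (1_300, 0.75)      # deaths/yr, fossil fuel share of NOx emissions
ozone = (2_800, 0.52)    # deaths/yr, fossil fuel share of ozone-related deaths

total = sum(deaths * share for deaths, share in (pm25, nox, ozone))
print(f"Implied fossil fuel attributable deaths: ~{total:,.0f}/yr")  # roughly 2,700/yr
```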

Now, I am not trying to belittle those deaths, but the point is that this is the citation provided in support of the CAPE claim that “air pollution from the burning of fossil fuels is one of the leading causes of premature mortality in Canada”, and the citation simply does not support that claim.

Fossil fuel air pollution is responsible for one in seven premature deaths in Canada

The second claim to be examined is the suggestion that fossil fuel pollution is responsible for one in seven premature deaths in Canada. This claim is derived from the paper Global mortality from outdoor fine particle pollution generated by fossil fuel combustion: Results from GEOS-Chem. Now in this case the reference actually makes that claim but, as I will demonstrate, that claim is so obviously wrong as to be incredibly puzzling and clearly represents a failure in the peer review process.

This article identifies the connection between PM2.5 and premature death and using a proprietary model argues that 34,000 deaths a year in Canada can be attributed to PM2.5. This is presented in Table 1 of the article below.

This number (34,000 deaths a year) is so much higher than the Health Canada value (10,000 deaths a year) that it has to raise alarm bells. Are we really to believe that Health Canada, the organization that compiles all this data for Canada, so completely missed out on all these deaths? Now remember Health Canada not only compiles these results but they correlate all the other causes of death and so we are to believe that Health Canada managed to misattribute 24,000 deaths a year?

That being said, even if their numbers were reliable it would still be a mess, because look at Table 1. The study incorrectly asserts that fossil fuels are the source of 85% of PM2.5 emissions in Canada. But wait, didn’t we establish above that fossil fuels represent only 3% of anthropogenic PM2.5 emissions and likely less than 2% of total PM2.5 emissions (when you add forest fires)? That means instead of 1 in 7 premature deaths being caused by “fossil fuels” the number is closer to 1 in 100 premature deaths. But that number doesn’t make nearly as good a headline for a political campaign.
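As a sanity check on that headline number, here is the rescaling the paragraph above implies. I am holding everything else in the paper's calculation fixed and swapping only the attribution share, so treat it as an order-of-magnitude sketch rather than a corrected estimate:

```python
# Order-of-magnitude check: rescale the headline "1 in 7" by the ratio of
# the attribution shares (Table 1's 85% vs the ~3% implied by the
# Health Canada inventory). Everything else is held fixed.
headline = 1 / 7          # paper's claimed fraction of premature deaths
paper_share = 0.85        # fossil fuel share of PM2.5 assumed in Table 1
inventory_share = 0.03    # share implied by the Health Canada emissions inventory

rescaled = headline * inventory_share / paper_share
print(f"Rescaled fraction: {rescaled:.4f} (roughly 1 in {1 / rescaled:.0f})")
# Lands in the neighbourhood of 1 in 200, far closer to 1 in 100 than to 1 in 7.
```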

As for the claim that 1 in 5 deaths worldwide are caused by fossil fuels, that number is similarly ridiculous. In the US the study attributes 81% of PM2.5 deaths to fossil fuels, while in Europe it is 75.7% of PM2.5 deaths. Given that fossil fuels represent about 3% of Canadian anthropogenic PM2.5, do we really imagine that either of those figures is at all credible?

I am quite certain the reply I will get from my critics is that this paper was PEER-REVIEWED [yes, it will be in all caps because that is how they address me on social media]. But as I have pointed out previously, the peer review process is notoriously challenged by multi-disciplinary work. This journal can’t get dozens of reviewers, so they had to concentrate their review on the stuff the journal is about (modelling and epidemiology). It is likely no one stopped to ask whether the inputs for the models were appropriate because none of the reviewers would recognize that the data looked wrong. It takes someone from that field to look at the data and say: “wait, that number is wrong”. In this case anyone familiar with PM2.5 would ask about forest fires (not included in the inventory as they are not “anthropogenic”) or about construction dust or about simple road dust. Sadly, none of the reviewers asked those questions and so we have a paper with a huge attribution problem based on bad inputs.

So once again we have well-intentioned health practitioners demonstrating my father’s adage (he was a physician himself): “never trust an MD on any topic that is not related to medicine”. CAPE’s “Stop Fossil Advertising” campaign, while clearly well-intentioned, is simply not supported by the research it cites. Rather, the campaign appears to be based on a combination of bad science and a bad reading of the science, all mixed together with a lot of good intentions.


Understanding the asbestos risks associated with any search of the Prairie Green Landfill

A lot has been written in the media, and on social media, about the proposal to search the Prairie Green Landfill for the victims of Jeremy Skibicki. It is believed that the bodies of at least two, and possibly more, victims of this horrendous individual may be found in the landfill, and as a consequence a report, “Landfill Search Feasibility Study Committee, Final report of the Technical Subcommittee” (Feasibility Report), was prepared on the feasibility of searching the landfill for the victims. As a parent I can’t imagine the anguish of not knowing whether your child is in that place, but as a risk professional I think it is important that any discussion of the feasibility of such a search be grounded in risk science.

Looking at the list of contributors to the report, what I see missing is anyone with experience handling asbestos-containing materials (ACMs). I note the presence of two professionals from “Waste Connections Canada” but a look at their website highlights that they are specialists in “non-hazardous solid waste collection” and not ACMs.

As someone with professional experience dealing with ACMs, I think it is important to lay out the risks of handling ACMs in a straightforward manner to inform the discussion on the feasibility of any search. The following is a discussion that deals only with the ACM component of the project. There are many more considerations involving reconciliation etc. that I will not cover because they are not in my area of expertise.

Let’s start with the basics of asbestos. Asbestos is a naturally occurring mineral fibre. It occurs in various forms and, due to its unique chemical characteristics, was used in hundreds of building materials, including insulation and fireproofing, until it was discovered to be incredibly harmful and was banned for these uses in the late 1980s.

Asbestos can be found in two major forms: friable and non-friable. Friable asbestos is material containing fibres that can crumble to a powder and be aerosolised. When crushed it forms a dangerous dust that, when inhaled, greatly increases the risks of mesothelioma, lung cancer and pleural thickening. It is so dangerous that occupational health and safety authorities mandate special procedures for handling it, and to this day mesothelioma is the biggest industrial killer in Canada.

Only specially trained individuals are allowed to manage friable asbestos and when they do so it has to be in enclosed spaces where everyone is wearing personal protective equipment (PPE) and with careful and ongoing air treatment and monitoring. During asbestos clean-ups all the resulting waste (and PPE) must be sealed in plastic (double bagged in plastic bags) and then carefully transported to a small number of landfills equipped to handle the material.

ACM shippers can’t just ship their waste anywhere. Asbestos waste is only allowed at selected landfills and shippers have to warn the landfill they are coming so the landfill can prepare a location (a landfill cell) where the bagged material can be deposited and then immediately covered with other material. The cell, once complete, would then be capped with clay or silt to prevent the release of fibres when (not if) the ACM-containing plastic bags burst under compaction. Sealed in a landfill, under a layer of silt or clay, the asbestos is safe unless someone disturbs the cell containing the ACM.

In the case of the Prairie Green Landfill, we are informed that at least 700 tonnes of ACM are in the section of the landfill that needs to be excavated. Therein lies the problem. In order to search the landfill the previously sealed ACM needs to be dug up and when it is dug up the friable asbestos risks being released. This will turn it into a risk to searchers and any community downwind of the search area.

The plan, as described in the Feasibility Report, involves excavating the section of the landfill; putting that material on a conveyor belt that moves it to an area where the soils will be screened; and then searching the screened materials for remains. Every step of this process poses massive risks in a situation where friable ACMs are involved.

As I noted previously, the ACMs were transported to the facility in plastic bags. Those plastic bags would absolutely have burst when the landfill machinery compacted the affected cells. By design they will have burst beneath a cap of silt or clay, which will then become embedded with asbestos fibres and become a risk in its own right.

The Feasibility Report suggests that water can be used to reduce the generation of fibres, but that is a bit of a misstatement. Water reduces the dust risk but doesn’t eliminate it, and the amount of water needed to keep the asbestos fibres down to a relatively safe level will be too much to allow for the operation of any of the equipment needed to move or sieve the material. Essentially, the entire mess would need to be turned into a slurry to keep the dust down, and the conveyor system and sieves don’t work for slurries. Moreover, the material would need to be kept wet forever because the second it dries the friable asbestos can again be released into the air.

For a mental image, imagine that the asbestos was sawdust. When you shovel raw sawdust it puffs out everywhere. You can wet it to reduce the dust risk but when it dries it gets everywhere again. Put sawdust on a conveyor belt and it will generate more dust, run sawdust through a sieve and more dust is given off. Now imagine that every single particle of dust could cause a searcher, or a member of the downwind community, to get mesothelioma and die.

To understand the risks, recognize that one of the ways we first learned how dangerous asbestos was to humans was that families of asbestos workers kept dying of mesothelioma, simply from the dust brought home on workers’ clothes. There is no tolerable dose of asbestos; every inhaled fibre increases the risk of mesothelioma and death.

Moreover, as I noted before, every step of the process increases the volume of contaminated material. Initially only the bagged material was ACM. Then the bagged material, the fill around the burst bags and the sealant on top became ACM through compaction. Finally, the search process would mix the material, contaminating any pile that comes in contact with the earlier material. Arguably, at the end of the process some 60,000 to 70,000 tonnes of material mixed and moved during the search would be contaminated with fine asbestos that, once dry, could become airborne and pose a risk to anyone who inhaled the generated dust. The entire pile would become a massive pile of ACM that needed separate treatment.

Ultimately, none of the plans I have read, to date, come close to addressing the multiple stages of risk posed by the search: from the excavator operator opening up the individual cells; to the transportation to the conveyor belt; to movement along the conveyor belt; to separation through the sieve; and finally to the management of the resulting 70,000 tonnes of potential waste. Every step of this process has inherent risk that is simply not addressed by wearing PPE and keeping the material wet.

Given the information above, I cannot fathom a scenario where a responsible government would authorize a search of the landfill without a better and more comprehensive safety plan in place. While not searching the landfill will undoubtedly cause harm, the plan to search the landfill has too many holes. It ignores too many risks. Any search done wrong runs the risk of exposing searchers and the community to unsafe levels of asbestos and to the potential of painful death. No responsible government could justify placing the searchers, and the community, at such risk even if the cause is both noble and right.


A parent’s thoughts on BC’s new K-12 reporting system

I am the parent of three school-aged children, the husband of a teacher, a long-time DPAC Representative and the former PAC President in a school that piloted the new grading and report card system in the Langley School District. In this blog post I want to share my thoughts on why I feel BC’s new reporting policy for K-12 students is not a good fit for our students.

Let’s start with a simple truth. I don’t want to be out here fighting the new reporting system. As a PAC we really want to be fighting about class size and composition. Our school was built in the 1990s to handle 200 students with a maximum capacity of 250. It is now stuffed with 450. When I first volunteered with the PAC, the location of my daughter’s classroom was a playground and tetherball court. Now it is home to multiple portables. But that battle has already been lost; they won’t be building us any new schools anytime soon. So I am out here fighting a battle that can still be won.

Our school of 450 is packed full, every teacher has a full class, and most are in remedy. Remedy is a term used when a teacher has more students in their class than is allowed under the School Act and their professional contract. We are talking about 30 students in a Grade 5 classroom, and that is not 30 easy-to-teach students; that is 25 typical students and at least 4-5 who have special needs and, by right, deserve significant Special Education Assistant (SEA) support. These are students who are not learning at grade level and have Individual Education Plans (IEPs).

In a traditional classroom there would be 1-2 SEAs supporting the students with IEPs and allowing the classroom teacher the time to give individual help to the average, quiet kids to ensure they didn’t fall behind. But those days are no more and instead we have massively overworked teachers in incredibly busy classrooms without enough SEA support. So who gets the short straw in this scenario? The quiet kids struggling to learn in a crowded system that has little time to give them the individual attention and support they deserve.

What does this have to do with grading? Well, the old grading system gave parents a tool to find out how their children were doing in a clear and semi-objective manner and now that tool is being taken away from us. These new reports provide little to no useful detail about our child’s holistic learning experience while making it harder to know how our children are doing in the larger sense.

Let’s start with a quick explanation of the new system without the bureaucratic gobbledygook. The system has four tiers: Emerging, Developing, Proficient and Extending.

Emerging is the term reserved for students who are below grade level (say with IEPs) or who aren’t in a position for grade-level reporting.

Developing is the term for students who are just learning a topic but haven’t achieved proficiency. They don’t get the material yet. There are lots of reasons to be a Developing student and teachers will concentrate their limited time on their Developing students because they need the most help.

Extending is the term used for the high-flyers. These are the students who don’t just understand the material but know it well enough to help out their peers. Extending students do well wherever they go and aren’t a concern to this discussion.

All the rest of the students are in that mushy middle called Proficient. The Proficient scale has no gradation. Under the old system Proficient would encompass everything from a bare pass (C+) to near mastery (A). If your child is Proficient you have no clue if your child is barely getting by or is at the top of their class.

To clarify something that may not be obvious: even the high-flyers aren’t graded as Extending in the middle of the session. When first introduced to a topic most students will be Developing, with most migrating quickly to basic proficiency and a small minority achieving Extending towards the end of their study. Ultimately most students will be Proficient for most of the year. Your child is barely hanging in there: Proficient. Your child is acing every test: still Proficient.

The historical letter grade system provides a pretty blunt gradation scale, typically A, B, C and F, or four general levels. As kids got older the gradation increased with the addition of +/-, resulting in a scale from A+ to C with eight intervals (A+, A, A-, B+, B, B-, C+, C). That level of gradation allowed teachers to report on subtle changes in performance, where a C+ to a B- was a minor improvement while going from an A to a C+ was a huge drop. This ability to provide detail through gradation is eliminated in the new approach: 80% of the kids are in the same box, with both C+ and A being Proficient.

As an analogy, if this were a fuel gauge in your car the three lines would read Full, Flashing Empty and Gas. Now what family would trust a fuel gauge that only said “gas” before they went over the Coquihalla? Does “gas” mean I have enough fuel for a quick trip or am I going to run out on my way to Grandma’s?

The reality is parents want to know not only whether their kids are Proficient but also in what direction those kids are moving academically, and the new system is not designed to collect or relay that information to them in a consistent manner.

To give an example, imagine you have a child who was a strong reader in Grade 3 but, due to other interests (sports, art, etc.), didn’t concentrate enough on reading in Grades 4-7. A gradual decline in reading scores would be noted in a system with a reasonably granular reporting format but would be missed in the new one.

A slow, gradual drop from A to C+ is all Proficient under the new reporting system. Such a drop can occur over years, so it wouldn’t necessarily be noticed by individual teachers who only see your child for one year. The child would be slowly getting further and further behind without anyone who could make a difference even recognizing it was happening. This is our biggest fear and the one none of the documentation provided by the Ministry addresses.

Additionally we have to look at the policy from the perspective of older students. We need to start with the recognition that the rest of the world still relies on letter grades/percentages in establishing student achievement. Universities and colleges still define entry standards based on grading and as such British Columbian students have to be prepared to operate in a world where they will be graded and their future opportunities will be restricted by the grades they achieve in high school.

Achieving high grades is not something that comes naturally to all who try. It takes learning how to deal with the pressures of trying and failing, learning where you are weak and where you need to work harder and learning how to achieve top grades in testing environments. Students who spend the entirety of their academic career in a world where all you need to be is “Proficient” do not build those skills/capabilities. To then arrive at Grade 10 and be told that you now need all these skills because your Grade 10 and 11 marks count is simply not fair to these students.

As an analogy, we don’t drop young athletes into championship competition without first training them and letting them learn how to compete and win in lower leagues with lower stakes and lower challenges. But our school system is doing just that. From Kindergarten until Grade 9 our students are being told that all that matters is they are Proficient and suddenly in Grade 10 they are being told that excellence matters. But they have no clue where they stand in the larger competition because they have never had the chance to find out.

Students who are “Proficient” at the C+ level literally don’t know that they are at the C+ level. They have always been told they are at the same level as their peers and the reporting supported those false beliefs. Suddenly in Grade 10 they get to find out that is not the case and it is often too late to address the challenge.

One critical skill our schooling system has failed to teach our kids is resilience, and this decision to protect our children from the possibility of failure is not teaching them resilience. Children need the opportunity to try and fail, then try again until they learn to succeed.

To conclude, as a parent I am not wedded to letter grades, and I am not fighting to support anyone’s claim to having an honor student. What I want is a reporting system that supplies sufficiently granular results that we can determine whether our kids are meeting their potential and, when they aren’t, tells us so we can give them more help. Unfortunately, the new approach being used in BC doesn’t do that. Instead it takes 80% of the students and drops them in one big black box called Proficient, which leaves parents with insufficient information as we fight to get our kids the best education possible in a system under intense stress.


Are Gas Stoves Really Responsible for 12.7% of Current Childhood Asthma Cases in the US?

The news has been full recently with stories about the risk of childhood asthma caused by natural gas stoves. As someone who specializes in risk assessment and has experience with indoor air chemistry, this seemed like it was right up my alley. As I went digging through the research, however, I discovered that the research seemed less about providing a good scientific examination of the topic and more about generating a lot of headlines and press discussion of the topic.

The furor is all derived from a recent study published in an open-access journal: Population Attributable Fraction of Gas Stoves and Childhood Asthma in the United States (Gruenwald et al., 2022). The paper itself doesn’t present any new data but rather applies a rather arcane type of mathematical attribution analysis (Population Attributable Fraction or PAF) to the results from a ten-year-old meta-analysis that summarized work from the 1980s and 1990s. Needless to say, the paper absolutely doesn’t advance the science in any useful manner and appears designed to induce political change rather than inform policy.

Two of the authors of the paper are Talor Gruenwald and Brady A. Seals. Many of us are familiar with these names as they both work for the Rocky Mountain Institute. For those not familiar RMI is:

an independent, non-partisan, nonprofit organization of experts across disciplines working to accelerate the clean energy transition and improve lives.

Now I’m not going to slag the RMI as it really does do good work. But it is absolutely fair to note that two authors who work for an organization that is dedicated to transforming the global energy system to secure a clean, prosperous, zero-carbon future for all might not be the totally objective scientists you want doing your research on natural gas stoves.

Before we get too deep into evaluating the data used in the paper, I think it is pretty important we start with a little background on the critical statistical tool used in this paper (PAF). As described in the literature PAF

is an epidemiologic measure widely used to assess the public health impact of exposures in populations. PAF is defined as the fraction of all cases of a particular disease or other adverse condition in a population that is attributable to a specific exposure.

That sounds like a pretty useful measure but there is a hitch. PAF has been around since the 1950s but a Google Scholar search of the term finds fewer than 17,000 hits. From an academic perspective, this tells you a lot about the technique. A statistical tool in epidemiology (a field that publishes thousands of papers a year) that has been around for 70 years and only appears in a few thousand papers must have some issues, and PAF absolutely does. The big complaint is that PAF doesn’t work when there are multiple confounding variables. The challenge for academics unfamiliar with the tool is that PAF is

found in many widely used epidemiology texts, but often with no warning about invalidness when confounding exists.

So let’s consider asthma as a disease. According to the American Lung Association, asthma can be caused by: family history (genetics), allergies, viral respiratory infections in youth, occupational exposures, smoking, air pollution and obesity. Do you know what a statistician would call each of those SEVEN different causes of asthma? Confounding variables! So here we have a statistical analysis that is invalid when used in the presence of confounding variables, applied to a disease that can be caused by half a dozen other factors that are not controlled for in the analysis.
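For readers who want to see how the tool actually works, the classic (Levin) form of the calculation is sketched below. The exposure prevalence and relative risk are purely illustrative numbers I have picked for the example, not the values used by Gruenwald et al.:

```python
# Illustrative PAF calculation using Levin's formula:
#   PAF = p(RR - 1) / (1 + p(RR - 1))
# where p is the prevalence of exposure and RR is the relative risk.
def levin_paf(p_exposed: float, relative_risk: float) -> float:
    excess = p_exposed * (relative_risk - 1)
    return excess / (1 + excess)

# Hypothetical inputs, for illustration only.
print(f"PAF = {levin_paf(0.35, 1.3):.1%}")  # ~9.5%

# The formula treats RR as an unconfounded estimate of the exposure's effect.
# If the RR is inflated by confounders (genetics, smoking, outdoor air
# pollution, etc.), the resulting PAF is inflated right along with it.
```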

Reading the Gruenwald et al paper carefully, one discovers the terms “confounding” and “variable” do not appear. It is thus possible the authors simply did not recognize the issues with using this statistical tool for this type of analysis, an omission that would typically result in a bench rejection at most well-respected journals.

Another challenge with this paper is the data used to derive its conclusions. The research for this paper started with an evaluation of the academic literature, and the authors started where most authors on this topic start: with the 2013 Meta-analysis of the effects of indoor nitrogen dioxide and gas cooking on asthma and wheeze in children by Lin, Brunekreef and Gehring. This is a seminal paper on this topic and I have seen it cited numerous times by those opposed to fossil fuel stoves. The major problem with the paper is that it is old. While it was written in 2013, it relies almost entirely on research articles from the 1980s and 1990s. From the perspective of indoor air assessment that is like the Stone Age. A look at the supplementary material for the work shows that most of the studies included were, by modern standards, very small and had little statistical power.

Given that knowledge, the authors of Gruenwald et al. looked for newer work but unfortunately found no new data. Why? Because

Full manuscripts (n = 27) were independently reviewed…none reported new associations between gas stove use and childhood asthma specifically in North America or Europe.

So there were 27 major studies they could have included in their analysis but the authors deliberately limited their inputs by requiring the work be done entirely in North America and Europe because they were looking for “similarities in housing characteristics and gas-stove usage patterns”.

By making this editorial choice the authors managed to exclude the definitive research on the topic: Cooking fuels and prevalence of asthma: a global analysis of phase three of the International Study of Asthma and Allergies in Childhood (ISAAC). The ISAAC study was

a unique worldwide epidemiological research program established in 1991 to investigate asthma, rhinitis and eczema in children due to considerable concern that these conditions were increasing in western and developing countries. ISAAC became the largest worldwide collaborative research project ever undertaken, involving more than 100 countries and nearly 2 million children and its aim to develop environmental measures and disease monitoring in order to form the basis for future interventions to reduce the burden of allergic and non-allergic diseases, especially in children in developing countries

The ISAAC study collected data from 512,707 students between 1999 and 2004. It has incredible statistical power due to its massive sample size and one of its signature conclusions was:

we detected no evidence of an association between the use of gas as a cooking fuel and either asthma symptoms or asthma diagnosis.

Arguably, in any study to evaluate the “Population Attributable Fraction of Gas Stoves and Childhood Asthma in the United States” a massive, recent, international study that showed that there was no evidence of an association between natural gas as a cooking fuel and asthma might be considered relevant. But no, that landmark study was ignored in this analysis.

Even worse…and I can’t believe I am saying this, even the seminal meta-analysis by Lin, Brunekreef and Gehring barely met their standard. Of the 41 papers evaluated in that meta-analysis the Gruenwald et al authors chose to consider only 10 (those where all subjects were from Europe or the US). The limitation of relying solely on European and US data was nominally due to the “similarities” between housing characteristics in the US and Europe, but it further degraded the statistical power of their analysis.

Now I am not speaking out of school when I point out that houses in the US are really not more comparable to European homes than to homes in Australia or Japan. Anyone who has ever travelled to Europe can attest to how different their housing designs are from US buildings, and frankly American houses are not all that comparable to one another either. I would argue that the differences between houses in Nevada and New Hampshire would greatly exceed the differences between those in Nevada and Australia. Thus, it is fair to ask whether imposing this restriction was really about maintaining internal consistency of the data or whether other factors might have played a role.

To conclude, I can only restate that the Gruenwald et al paper seems to have some clear challenges that would typically preclude it from consideration in a policy-making process.

  • Its underlying data is of low statistical power.
  • Its conclusion is directly contradicted by more recent studies with significantly greater statistical power; and
  • It relies on a statistical tool that is considered invalid in situations with confounding variables, yet it is being used to analyze an association that is absolutely rife with confounding variables.

Put simply, this is not the study I would rely on to make a major policy change that will affect millions of people and cost billions to implement. As to its conclusion: are 12.7% of childhood asthma cases in the US attributable to cooking with natural gas? Based on the points above, almost certainly not.


Understanding Risk Assessment as a form of Sustainable and Green Remediation

One of my New Year’s resolutions is to write more posts that explain, in plain language, how our environmental regime in BC protects the public with respect to contaminated sites and to help clear up common misconceptions about contaminated sites.

My area of professional expertise is the investigation and remediation of former industrial and commercial sites. My specialization is risk assessment, specifically the assessment of petroleum hydrocarbon contamination and its effects on human and ecological health. For those of you not familiar with the terminology, I have included a background section at the bottom that can help you understand the topic of risk assessment in this context as well as links to previous blog posts where I address issues surrounding contaminated sites.

There is a common fallacy in the environmental and regulatory community that risk assessment is a cop-out: a way to avoid doing “real” remediation and thus inherently unsustainable. Nothing could be further from the truth. Often risk assessment is the most green and sustainable choice for remediating contaminated sites in BC.

A typical example of the negative regulatory viewpoint was presented in the BC Ministry of Environment & Climate Change Strategy (BC ENV) discussion paper Making Contaminated Sites Climate Ready put out in the fall of 2022. The document repeatedly suggests that risk-based instruments should be subject to additional scrutiny without acknowledging that risk assessment often represents a preferred green/sustainable form of remediation.

Historically, the standard approach for a “real” and “permanent” remediation at a hydrocarbon-impacted site was the “dig and dump” excavation. In a dig and dump excavation, contaminated soils are dug out of the ground, along with significant volumes of less contaminated, or even uncontaminated soils, using diesel powered excavators which deposit the soil into diesel trucks to be transported to a landfill.

Given the presence of the hydrocarbons in these soils they typically cannot be shipped to just any landfill. Instead, they need to go to specially permitted facilities designed to receive and treat this type of waste soil. Most of these facilities are located in the lower mainland (in Richmond or Abbotsford). If your impacted site is in the interior this might require a 1000+ km round trip for soil disposal.

The trips are carried out by diesel trucks and each trip presents a real risk on the roads. The trucks will travel along community roads to the highway, then often hundreds of kilometres on the highways, before driving through more residential and busy urban communities to reach their goal. Each trip can generate hundreds of kilograms of carbon emissions as well as harmful diesel exhaust, and multiple trips are typically required to achieve numerical closure.
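To put a rough number on that, here is a back-of-envelope estimate; the fuel economy and emission factor are typical assumed values for a loaded highway truck, not figures from the BC ENV document:

```python
# Back-of-envelope CO2 estimate for a single soil-hauling round trip.
# Assumed values (typical for a loaded highway truck, not from BC ENV).
round_trip_km = 1_000        # e.g., an interior site to a Lower Mainland facility and back
diesel_l_per_100km = 35      # assumed fuel consumption, loaded
co2_kg_per_litre = 2.7       # approximate CO2 per litre of diesel burned

co2_kg = round_trip_km / 100 * diesel_l_per_100km * co2_kg_per_litre
print(f"~{co2_kg:,.0f} kg CO2 per round trip")  # ~945 kg, before multiplying by the many trips required
```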

Once the waste soils arrive at a permitted facility they generate dangerous vapours, while more diesel and greenhouse gas emissions are given off during their treatment. Once treated, the soils then get sent to the main landfill, taking up limited landfill space, for final disposal. But remember, you are only halfway done at this point. Having dug out the hole, you still need to fill it in.

To fill in the hole you need to excavate clean fill from somewhere else and transport that clean fill to your site which entails further transportation emissions, transportation risk and ecological consequences because that fill soil has to come from somewhere.

To summarize, a typical remedial excavation generates massive GHG and diesel emissions; poses transportation risks through busy communities; uses up non-renewable landfill space; and requires the excavation and transportation of clean fill, which entails further transportation emissions, transportation risk and ecological consequences. None of this is recognized in the BC ENV document.

So what is the alternative? The Environmental Management Act (EMA) provides the legislative framework for addressing contamination in British Columbia. The Contaminated Sites Regulation  provides the specific regulatory regime for managing contaminated sites under the EMA. Both identify risk assessment as a viable mechanism to remediate a site because it is a safe, environmentally friendly mechanism of addressing contamination. The decision to remediate via risk assessment has been a standard remedial approach in British Columbia for decades and BC ENV has repeatedly supported the use of risk assessment in their protocols and guidance documents. If a risk assessment demonstrates that there are no unacceptable risks to human health and the environment at a site, that site is considered remediated to risk-based standards.

Under risk assessment a qualified professional can develop a risk management plan to ensure that a contaminated site does not pose unacceptable risks to human or ecological health. Sometimes a risk assessment cannot support that conclusion and other remedial options may be necessary, but often a series of relatively simple precautions can be undertaken that eliminate any real risk to the community posed by a contaminated site.

This is often the case in parts of Vancouver where the deep subsurface is dominated by dense glacial tills (sand and gravel that has been compacted by glaciers until it is as hard as concrete). Glacial tills are not only as hard as concrete, they are virtually impenetrable to contamination and contain no extractable groundwater. Contamination confined by a glacial till poses no short- or long-term risk to human or ecological health and will eventually biodegrade (naturally attenuate) until it no longer exists. Building a properly designed parking structure (as part of a high-rise building for example) over top of this type of contamination can ensure the contamination poses zero risk to the community as it attenuates over time.

Ultimately, the choice will often be to either leave contaminated soil where it poses no current or reasonable future human or ecological harm or conduct a remedial excavation which would generate massive greenhouse gas and diesel emissions, create additional traffic on the highways and in busy urban and residential corridors while taking up limited landfill space and requiring the importation of clean fill soil to replace the removed material.

From a sustainable and green remediation perspective the choice could not be any more clear. Risk assessment is often by far the best remedial option both economically and using any sustainability measure anyone can invent. A site remediated by risk assessment typically avoids significant ecological consequences, emissions and human and ecological risks associated with unnecessary dig and dump excavations or gas-fired oxidizers in vapour extraction systems while providing permanent solutions to contamination. This makes risk assessment a legitimate green approach to remediation.

Background

Because I deal with risk all the time in this blog, I have prepared a series of posts to help explain the risk assessment process. The posts start with “Risk Assessment Methodologies Part 1: Understanding de minimis risk” which explains how the science of risk assessment establishes whether a compound is “toxic” and explains the importance of understanding dose/response relationships. It explains the concept of a de minimis risk, that is, a risk that is negligible and too small to be of societal concern (ref). The series continues with “Risk Assessment Methodologies Part 2: Understanding “Acceptable” Risk” which, as the title suggests, explains how to determine whether a risk is “acceptable”. I then go on to cover how a risk assessment is actually carried out in “Risk Assessment Methodologies Part 3: the Risk Assessment Process”. I finish off the series by pointing out the danger of relying on anecdotes in a post titled: Risk Assessment Epilogue: Have a bad case of Anecdotes? Better call an Epidemiologist.

Previous posts on Contaminated Sites topics:

A primer on environmental liability under BC’s Environmental Management Act.

On the Omnibus Changes to the BC Contaminated Sites Regulation


Understanding the role of, and opportunities for, Canadian fossil fuels in our net zero future

In my review of Seth Klein’s A Good War, I took issue with the author’s statement that in order to fight climate change we need to eliminate the fossil fuel industry. I have repeatedly pointed out how ridiculous that claim is and think it is time to put some numbers to my claims about fossil fuels and their continued role in our existence as a civilized society.

Sadly, as a start to any post of this type I have to do my climate acknowledgement:

I believe climate change is real and is one of the pressing concerns of our generation. I have spent years advancing low-carbon and zero carbon options and agree that we need to achieve a net zero economy, ideally well before 2050.

It is sad I have to do a climate acknowledgement but unfortunately, the reality of this topic is there are so many bad faith actors out there who insist that any data-driven discussion on climate change and its mitigation makes me an old school climate denier or part of the “New Climate Denialism”. I am, of course, neither, so I do the acknowledgement as a matter of rote.

My scientific area of interest has been evidence-based environmental decision-making and seeking pragmatic and effective reductions in our greenhouse gas emissions is an expression of that interest. Why is the last part important? Because a lot of the demands from the climate NGOs and activists will not reduce our greenhouse gas emissions. Rather, as I have pointed out, many of these ill-considered demands will increase emissions, decrease air quality, and increase ecological risk.

Going back to the topic of this blog post: in his book the author insists that as part of our fight against climate change we need to eliminate the fossil fuel industry, and I argue that the claim is ridiculous. Who is right?

Absolutely no one can deny that the vast majority of fossil fuel use involves using oil and its refined products as a transportation fuel or for the generation of heat or energy. According to the International Energy Agency (IEA) world oil demand is forecast to reach 101.6 million barrels a day (Mb/d) in 2023. Of that, transportation represents about 60% of total oil demand. But that leaves 40% of oil demand that is not from transportation.

The important thing to understand is that fossil fuels aren’t just a transportation fuel or a heat source. Fossil fuels are also the raw inputs for any number of technologies that are absolutely necessary to maintain our modern society. From pharmaceuticals, to petrochemicals, to fertilizer, to synthetic rubber, to carbon fibers to asphalt, fossil fuels are simply not replaceable given our current technologies and societal and ecological expectations.

Let’s start with the biggest user: pharmaceuticals and petrochemicals. The IEA has produced an incredibly useful document which details our reliance on petrochemicals called The Future of Petrochemicals. In this document the IEA indicates that currently we use the equivalent of 12 Mb/d for petrochemicals and that value is increasing as we look to build lighter vehicles, stronger plastics and more items from carbon fibers. From 2020 to 2040, BP expects plastics to represent 95 percent of the net growth in demand for oil (demand to increase by almost 6 million barrels/day). That is approximately 18 Mb/d of oil demand from petrochemicals and pharmaceuticals.

Recognize that most of this demand cannot be met through other sources. Petroleum hydrocarbons represent a massive natural bounty. They are the results of millions of years of solar energy converted into chemical form by plants and trapped in complex molecules that have been compressed to liquid form by huge geological forces. That process cannot be readily replaced with modern biofuels or other modern sources.

Another huge user of crude oil is asphalt. In 2019, global demand for asphalt was projected to be around 122.5 million metric tons (742.5 million barrels). That is better than 2 Mb/d of crude oil demand just for asphalt. Heavy oil is by far the best source of asphalt.

Another major demand for oil is for synthetic rubber. In 2021 the world used 26.9 million tonnes of rubber of which 53% was synthetic (derived from hydrocarbons). Rubber is another product that can be made via organic sources, but doing so increases risk to ecosystems from deforestation. The better ecological choice is via crude oil.

Adding up the various products, the demand for crude oil for non-energy, non-transportation uses will be around 20 million barrels of oil/day. That is 5 times Canada’s projected maximum production. That demand will continue to exist even once we have eliminated any transportation or energy demand.
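Pulling those rough numbers together (using the figures quoted above and a simple barrels-per-year to barrels-per-day conversion; synthetic rubber and the smaller uses are not broken out):

```python
# Rough tally of the non-energy, non-transportation oil demand quoted above.
petrochem_mbd = 12 + 6                # IEA's current ~12 Mb/d plus BP's ~6 Mb/d projected growth
asphalt_mbd = 742.5e6 / 365 / 1e6     # ~742.5 million barrels/yr -> ~2.0 Mb/d

total_mbd = petrochem_mbd + asphalt_mbd
print(f"Petrochemicals/pharma: ~{petrochem_mbd} Mb/d, asphalt: ~{asphalt_mbd:.1f} Mb/d")
print(f"Subtotal: ~{total_mbd:.0f} Mb/d of non-energy, non-transportation demand")  # ~20 Mb/d
```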

So why is this important? Because we know the fossil fuel industry will be generating emissions to produce those 20 Mb/d and the countries that can produce their oil for the cheapest prices (including carbon taxes) while generating the fewest emissions will have an indefinite and ongoing market all to themselves.

As I have pointed out previously, Canadian oil sands produce very low cost oil, with a high asphalt component, and our existing production has an incredibly low depletion rate. We are ideally situated to be one of the last producers standing if we can produce net zero oil (and gas) to fill the perpetual oil and gas markets.

This brings us to the second half of our data-driven policy discussion. Were we to believe the faulty claims of the anti-oil NGOs then there would be no justification for developing technologies like carbon capture and storage or direct air capture of carbon dioxide. In fact, the activist community regularly argues we shouldn’t invest in these technologies. But as I have demonstrated above, there will be a tremendous ongoing demand for net zero crude oil for the indefinite future.

But the critical consideration is the “net zero” component. We need to invest right now in the technologies to turn our fossil fuel industry to a net zero one by reducing emissions at every possible step and developing tools to sequester or trap carbon to address the emissions we can’t eliminate. At our current price point we have a significant opportunity to permanently grab a slice of that ongoing oil demand, especially the heavy oil component which cannot be supplied by our most likely net zero competitors.

I am often asked, why do I appear to be supporting the fossil fuel industry with posts like this one? The answer is simple. You can’t solve a problem until you identify and diagnose the problem. The activist community has forwarded the idea that in order to effectively fight climate change we need to eliminate the fossil fuel industry. As I have shown above, that demand cannot be met. I am also an ecologist and a pragmatist and recognize that every action has a consequence. I want my kids to grow up in a society that still has healthcare, wildlands and a functioning ecosystem.

The fossil fuel industry is a necessary one and has the potential to provide reliable revenues for generations to come. But that will only happen if we ignore the anti-oil activists and develop the tools to get our oil production to net zero. Alternatively, we can do nothing and watch our industry die in the next 10-20 years and with it all the revenues that we currently use to pay for our social services and to help fight climate change.


Reviewing Seth Klein’s A Good War – An interesting historical treatise that ignores the details of climate science

I finally bit the bullet and read “A Good War” by Seth Klein. The book describes itself as an exploration of:

how we can align our politics and economy with what the science says we must do to address the climate crisis.

But as I will discuss below, in my opinion the book presents some really interesting historical information while ignoring the details, and frankly the science, of what it will take to fight climate change. The book is written in a compelling style and is meticulously footnoted when discussing the political and economic conditions of the war era; but the high quality of his historical research is juxtaposed with the absolute dearth of reliable referencing when it comes to modern day climate science.

Ultimately the book is less about fighting climate change as an energy/GHG emissions issue and more about fighting the idea of climate change, where “climate change” is used as a tool to re-align our political and economic systems to meet the author’s political ideals.

This book started out really badly for me because right from the start it was clear it was not going to rely on any peer-reviewed or reliable science. In his section on the “New Climate Denialism” the author provides the technical basis for his arguments against the Trans Mountain Expansion Project (TMX), the CGL pipeline and the fossil fuel industry in general. This should represent the critical intellectual core of his book and its quality should be consistent with his research into the war years. Instead, his understanding of these projects ends up being based on a handful of Canadian Centre for Policy Alternatives (CCPA) articles and a few Globe and Mail articles; all of which have been repeatedly debunked in the scientific literature. Let’s summarize:

He relies on a Marc Lee Globe and Mail article to claim that BC LNG has “a GHG profile very similar to coal”. This claim is demonstrably false and is contradicted by the peer-reviewed research.

His claim that the Trans Mountain will not generate better returns for oil sold to Asia comes from another of his friends, J David Hughes. That claim is demonstrably untrue, with more here.

His claim that the Trans Mountain will add “13 to 15 million tonnes” of carbon emissions “equivalent to two million cars” isn’t even referenced; rather it is attributed to Katherine Harrison, a “UBC political science professor.” This claim comes from a National Observer article by Dr. Harrison. The problem is that the actual reference from which that range is derived said those values would only be valid for new production.

As I have written numerous times, there is no data to support the argument that the TMX will increase Canadian oil production or our carbon emissions. Rather, the information from the energy regulators is clear that the production that will move down the pipeline is not dependent on the pipeline. The only new production in development in Alberta will be completed at a price point where it is still financially viable whether the pipeline is built or not. There is no production in the development queue that has a price point where it is only viable with the completion of the TMX. As such, this production will be completed in the absence of the pipeline. In reality the pipeline will reduce transportation risk and emissions compared to the existing transportation options for that same production. The pipeline is a win for the fight against climate change.

More problematically, throughout the book the author argues we need to eliminate the fossil fuel industry. This demand is simply counter-factual. Fossil fuels are both an energy source and a source of necessary primary materials that form the basis of our modern world. As the International Energy Agency points out, petrochemical feedstock accounts for 12% of global oil demand, or between 12 and 14 million barrels a day. From pharmaceuticals, to petrochemicals, to fertilizer, to synthetic rubber, to carbon fibers to asphalt, fossil fuels are simply not replaceable given our current technologies and societal and ecological expectations.

That 12-14 million barrels a day is expected to increase, driven by growing demand for plastics, fertilizers and other products. This represents 3-4 times Canada’s total oil production, and for many of these uses heavy oil is the preferred hydrocarbon source; Canadian heavy oil is among the lowest-emission heavy oil on the market. Similarly, his plans for eliminating nitrogen fertilizer would starve out our population. Even in a Net Zero future we will not be eliminating the fossil fuel industry.

As for electricity sources, anyone reading the book would totally forget that nuclear energy exists. A look in the index shows a complete lack of discussion of the topic. Similarly geothermal (which requires fracking by the way) is given short shrift.

Given all the above, I have to laugh at the author’s suggestion that the “CRTC could demand that reporting be scientifically factual” since doing so would cause them to stop his friends from publishing their faulty claims.

Now I am going to do something unexpected. I am going to point out that from a big picture perspective I think the author convinced me that only our government can mobilize the resources needed to achieve the fundamental changes necessary to reach Net Zero. No, we will not be eliminating the fossil fuel industry and yes we will be exporting LNG to Asia because both will help reduce global emissions. But we also need to acknowledge that the private sector alone is not going to achieve our goals. We need a strong government willing to strategically spend a lot of money and write good regulations to get us to Net Zero.

The author’s approach to using the power of government to force the public into converting from fossil fuel-based heating and transportation looks, to me, to be the best way to achieve our Net Zero goals. Similarly, I was convinced that the government leading in renewable and low carbon technologies would be the most efficient and likely most profitable (from a Canadian economy perspective) approach to the problem.

I was confused, however, by how a trained economist like the author could completely omit the economic and political limitations to his plans. Canada is not an island. We live in an inter-connected world of trade agreements and supply chains, and the book is incredibly light on how his approach would fare once our international trading partners (and multi-national corporations) decided to challenge his preferred approach. During WWII Canada had the benefit of allies working towards the same goals, using the same means. The go-it-alone approach of A Good War is the exact opposite of that situation.

Ultimately, the quotation that absolutely typifies this book for me is one he presents from Greta Thunberg.  In the quote Greta says:

Avoiding climate breakdown will require cathedral thinking. We must lay the foundation, while we may not yet know exactly how to build the ceiling. 

Any serious thinker would instantly recognize how completely insane that statement is. A building foundation needs to be designed to handle the expected stresses associated with the building design. If you build a foundation without first designing the building you will either need to build a smaller, less effective design to address the limitations in the foundation; or you will need to massively overbuild the foundation wasting time and resources; or you will need to tear out the foundation once completed and lay a new one that reflects the needs of the final design.

Put another way, before you can come up with a solution to a problem you have to be able to diagnose the problem and to do that you need to understand the problem. Throughout this book the author talks about how to fight a problem he is unable to describe. He uses terms like “follow the science” as an alternative to describing what he actually wants done. His entire thesis misses that the fight against climate change isn’t just about carbon or methane, it is about energy and raw materials as well.

Oddly enough, even as the author mangled the energy and climate science he did a pretty reasonable job of convincing me that part of what he wanted accomplished was both possible and even necessary. I suppose that makes the book a partial success from his perspective.

To summarize, in A Good War the author makes it clear he really doesn’t understand our climate challenge from a technical and scientific perspective. To use a metaphor from the book, the author builds his cathedral using a flawed foundation, resulting in a structure unable to support his basic premise. It is worth the read for the historical perspective it provides, but sadly like many recent tomes on climate change, the book has less to do with fighting climate change and more to do with eliminating/defeating Neoliberalism.


BC’s new School Food Guidelines: an attempt by bureaucrats to squeeze the joy out of our kids’ childhoods while stripping away parental choice

I am the parent of three school-aged kids and the president of our local elementary school Parent Advisory Council (PAC). Last night our PAC looked at BC’s Proposed 2022 BC School Foods Guidelines For Food & Beverages in K-12 Schools and the accompanying Ministry’s rationale for the proposed 2022 Guidelines.

It is the opinion of our PAC that these documents represent massive bureaucratic overreach and read like they were written by bureaucrats instructed to suck the joy out of our kids’ childhoods while simultaneously using their bureaucratic power to eliminate parental choice in how we raise our kids. As a bonus, these Guidelines will kill some of our PAC’s most successful fundraising. I hope that after reading this post you will rush to your computer to fill out their feedback form and tell these bureaucrats to get out of the business of trying to parent our kids and return parental choice to parents, where it belongs.

For those unfamiliar with the 2022 School Food Guidelines, they are nominally intended

to support healthy food environments at school by increasing access to healthy food while limiting access to unhealthy food.

but what they also explicitly admit is that

The Guidelines are for adults making food decisions on behalf of students in a school setting.

They are literally telling us that this is about bureaucrats taking away parental choice over how we feed our kids.

Let’s look at some examples. These guidelines don’t just deal with the food served in cafeterias or food prepared by school staff; they also apply to hot lunch programs and bake sales. Let’s start by considering bake sales. Here is the list of baked goods from the draft Guidelines.

I can just imagine a bake sale under the 2022 Guidelines. No cakes or pies, no cookies or muffins, no home-made treats. Instead we can sell loaves of rye or bulgur bread, or whole wheat muffins made with low-fat milk and no refined sugar, butter or fat.

One of the most successful fundraisers for our PAC is the hot lunch. They happen at most once a month and involve fun, easy-to-prepare foods that the kids will eat: hot dogs, Subway sandwiches, pizza, even Cobb’s bread and Booster Juice. None of these options would be allowed under the 2022 draft Guidelines. Hot dogs are specifically mentioned as unacceptable, pizza has processed cheese and meat, and Subway sandwiches have deli meats and soft, processed cheese.

I have heard a number of people saying that these are only “Guidelines” and are thus not mandatory. That is not true. Once a School District chooses to put these “Guidelines” into its policy documents they become mandatory for the schools in that district. No administrator is going to turn around and tell their District that they have decided to ignore District policies.

Let’s be clear here. I am not saying schools should feed kids donuts and pizza every day, but that is not what we are talking about. The Guidelines lack proportionality and don’t provide exceptions for special events. I can understand a set of Guidelines for general use that acknowledges there will be exceptional cases, but these Guidelines make it absolutely clear they brook no exceptions. Consider the Family Fun Fair.

Before Covid our school held its annual Family Fun Fair, a community event attended by well over half of our school community. It included a concession that sold hot dogs and hamburgers. You could buy an ice cream treat and of course on a hot spring night the kids could get popsicles or Freezies. Besides the concession there were lots of little games where the kids could win a toffee or a sucker. This is not a weekly or monthly event, it happens once a year…and the Guidelines would make it impossible. The Guidelines explicitly identify fun fairs and say no hot dogs, no popsicles and no treats of any kind. Think I am joking? Look below at the list of allowed treats…but we can try to sell cottage cheese and whole milk…that will go over really well on a hot spring evening.

One of the teachers at our school gives out a Hi Chew as a special reward for reading success. Another will give out small packs of gummy bears or a sucker to take home. None of these rewards will be allowed under the new Guidelines. We can all agree that teachers shouldn’t need to bribe kids to get them to read, but eliminating virtually every treat used as a reward takes that a step too far.

How about another example? Each year we have a regional track meet. The event occurs in late spring when it can get pretty darn hot, and the concession sells sports drinks, including drinks designed specifically to replace the electrolytes lost by kids exercising hard in the heat. Yet the draft Guidelines literally identify electrolyte replacement drinks as being on the naughty list. Young athletes working hard in the sun don’t get to replace their electrolytes. Instead they can have water or maybe some plain unsweetened milk, just like Olympic athletes drink at their events.

Ultimately, what these inflexible draft Guidelines completely miss is that all these PAC and school food programs are optional. Parents can opt their kids in or out of the programs. It is about parental choice and how we want to raise our kids. There are plenty of parents who don’t like treats at school, and they have the right to say no to optional school food programs, but under the draft Guidelines parental choice has been utterly removed. The bureaucrats don’t trust us to feed our kids. They want to be the final arbiters of what our kids eat and what they drink.

The thing that angers me the most about these draft Guidelines is that they have been created by unelected bureaucrats who were never given a public mandate to make this significant a change. We recently had a provincial election but these draft Guidelines were kept secret until after the election. I paid attention during the election and the current education minister certainly did not run on a platform of destroying PAC fundraising and making school miserable for kids. Had the current government run on a platform of eliminating parental choice and giving this type of power over our kids to bureaucrats they would never have been elected.

The other point I have mentioned in passing, but which really matters, is that all these changes will essentially eliminate our school PAC’s funding structure. Virtually every major fundraiser will be affected, with most being eliminated. No hot lunches, no Christmas chocolate sales, no bake sales, no fun fairs, no concessions at sporting events.

In BC, PACs play an incredibly important role filling in the gaps left by the chronic underfunding of our education system, and the new draft Guidelines will essentially eliminate my PAC’s ability to raise the money necessary to underwrite field trips, to supply financial support for enrichment supplies and teaching aids, and even to provide more books for our school library. PACs help pay for clubs and events, and all of that depends on funding…and our government is not giving our school that funding.

To summarize, these new draft Guidelines are a power grab by unelected bureaucrats who want to take decision-making about raising our kids away from parents. They will eliminate our PAC’s most effective fundraisers and ultimately won’t make a major difference in student health. I urge my fellow parents to fill out the feedback form provided by the ministry, and remind everyone that you also might want to write or call your local MLA or the Education Minister to let them know how you feel about these draft Guidelines.


Why you needn’t fear the “Dirty Dozen” fruits and vegetables

There are certain things you can count on with the coming of spring. Two of the earliest are the arrival of the first Mexican and Californian strawberries in the produce aisle and the Environmental Working Group’s (EWG) annual “Dirty Dozen” report misrepresenting the risks of eating said strawberries. I have previously written about EWG’s reporting of risk but want to address them again because there is more to say about their approach to science communication.

For those not familiar with EWG, they are an organization partially funded by organic food trade organizations and organic producers. Absolutely coincidentally, each year they produce a list of fruits and vegetables they feel have excessive pesticide residues while simultaneously suggesting that consumers rely instead on more expensive organic alternatives for their fruit and veggie choices.

Sadly for science communication, their annual Dirty Dozen report regularly gets picked up by news outlets desperate to draw readers to their sites. This week I found over a dozen links to this report including ones from the Vancouver Sun, The Province, and The National Post.

In reading the Dirty Dozen report, the first thing to understand is that analytical chemists are extremely good at identifying infinitesimally small concentrations of discrete chemicals in mixtures. As I pointed out in a previous post, analytical chemistry has become so precise that a modern mass spectrometer can distinguish concentrations down to the parts-per-trillion range. One part per trillion is roughly equivalent to 1 second in 30,000 years. When an activist report says they found “detectable” concentrations of a pesticide in a sample you should take that claim with a grain of salt, since that same analysis has the capacity to find a single grain of salt on a 50 m stretch of sandy beach.
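For readers who want to check that analogy, here is a quick back-of-the-envelope calculation; a simple sketch whose only assumption is an average year length of 365.25 days.

```python
# Back-of-the-envelope check of the "1 second in 30,000 years" analogy
# for one part per trillion (ppt).

SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 seconds, average year
years = 30_000
total_seconds = years * SECONDS_PER_YEAR   # ~9.5e11 seconds

print(f"Seconds in {years:,} years:      {total_seconds:.2e}")
print(f"1 second as a fraction of that: {1 / total_seconds:.2e}")
print(f"One part per trillion:          {1e-12:.2e}")
# 1 / 9.5e11 is about 1.1e-12, i.e. roughly one part per trillion
```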

As a specialist in risk assessment, the first thing I look for in a report like the Dirty Dozen is the identified concentrations. They will tell me the true story about whether there are any real risks. The absolute tip-off about the Dirty Dozen report is that it does not present actual concentrations for the pesticides identified in the fruits or vegetables in the report. All they say is that pesticide residues were identified.

There is a simple rule of thumb in risk communication. If a toxicological report doesn’t give you the concentrations of a compound it is because the authors don’t want you to see those concentrations. This is not the sort of thing that happens by accident.

But that is not the only way in which the report keeps its readers in the dark. In toxicology, risk depends on exposure concentrations, and professional toxicological bodies determine acceptable exposure concentrations through detailed, publicly-available, peer-reviewed research. EWG doesn’t even use toxicological terms in its reports, instead referring to its preferred concentrations as “benchmarks” without ever explaining what that term actually means.

Most importantly, they never explain the basis for their benchmarks. They don’t explain how they determine whether a concentration is safe or not safe. Their calculations have not been widely shared but they don’t appear to be based on the peer-reviewed toxicological literature. The best I can tell is that the values are arbitrary. Consider their benchmark for glyphosate. On their page How Does EWG Set a ‘Health Benchmark’ for Glyphosate Exposure? they write:

EWG calculated a health benchmark for the total amount of glyphosate a child might ingest in a day. EWG’s benchmark is 0.01 milligrams per day significantly lower than both the Environmental Protection Agency’s dietary exposure limit and California’s No Significant Risk Level.

There is no rationale provided to justify or support their benchmark.

For the record, the EPA has systematically (and publicly) reviewed the peer-reviewed toxicological research for glyphosate and has identified a safe dietary limit of 70 mg/day. California, which has a standard based on slightly different criteria, says a safe number is 1.1 mg/day. EWG’s undocumented benchmark (the one they use in their reports) is orders of magnitude lower than the levels identified as posing no significant risk based on the peer-reviewed toxicological literature. To my eye, EWG simply chose the lowest detection limit available from their research lab as the basis of their benchmark.
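Using only the numbers quoted in this post, it is easy to see just how far apart these values sit; a quick sketch, with nothing assumed beyond the figures cited above.

```python
# Compare EWG's glyphosate "benchmark" against the regulatory values
# cited above (all figures in mg/day, as quoted in this post).

ewg_benchmark = 0.01   # EWG's self-assigned benchmark
epa_limit     = 70.0   # EPA safe dietary limit cited above
ca_nsrl       = 1.1    # California No Significant Risk Level cited above

print(f"EPA limit is {epa_limit / ewg_benchmark:,.0f} times the EWG benchmark")
print(f"CA NSRL is   {ca_nsrl / ewg_benchmark:,.0f} times the EWG benchmark")
# 7,000x and 110x respectively: roughly two to four orders of magnitude
```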

What the above tells you is that when EWG says something isn’t safe it is not based on the peer-reviewed science. That is not how good science works. In toxicology you don’t just get to declare something is not safe without explaining how you came to that conclusion. Consider a thought experiment:

Imagine that I, a highly credentialed scientist, created my own private “benchmark” for trip hazard risks. Imagine I claimed that individual grains of sand on the sidewalk represented dangerous trip hazards to children. Now, it is generally understood that children don’t trip over individual grains of sand, but the grains are detectable on the sidewalk if you look carefully enough. Imagine I then wrote a report indicating that the presence of grains of sand on the sidewalk posed a real and dangerous tripping hazard to neighborhood children and suggesting that families buy expensive leaf blowers to protect their children from these unsafe conditions. Does anyone imagine I could get dozens of media outlets in Canada to publish a story on my report detailing the risk of individual sand grains and promoting the sale of leaf blowers? Of course not, because unlike with the toxicology, every parent in Canada would recognize that my “benchmark” was invalid.

Now I would love to write a snappy conclusion to this blog post, but happily a peer-reviewed academic journal beat me to the punch. As Winter and Katz wrote in their review of an earlier edition of the Dirty Dozen report (in Dietary Exposure to Pesticide Residues from Commodities Alleged to Contain the Highest Contamination Levels):

In summary, findings conclusively demonstrate that consumer exposures to the ten most frequently detected pesticides on EWG’s “Dirty Dozen” commodity list are at negligible levels and that the EWG methodology is insufficient to allow any meaningful rankings among commodities. … our findings do not indicate that substituting organic forms of the “Dirty Dozen” commodities for conventional forms will lead to any measurable consumer health benefit.

Given the above I only wish Canadian content providers recognized when they were being played and stopped giving EWG so much free earned media coverage every year.



Why an over-budget Trans Mountain Pipeline Expansion Project will still not be a financial loser for the Federal government

Last week new details emerged about ongoing cost increases on the Trans Mountain Pipeline Expansion (TMX) Project. If the news media are to be believed, the price of the pipeline will likely exceed $17 billion, a far cry from the initial $7.4 billion price tag when the federal government bought the project. Opponents of the project will claim that at this price the TMX is a financial loser that should be abandoned. As I will demonstrate in this post, that claim is demonstrably false.

To summarize my argument, the opponents of the project will argue that the pipeline will possibly have a negative net present value (NPV) at its current $17 billion price tag. But as I will show, when it comes to government projects NPV is only part of the picture, and in this case, it is only a tiny piece of the much bigger economic picture. Except in the case of massive losses, the TMX makes absolute financial sense from a government perspective because the government has more than one way to generate revenue from this project.

I went into detail about the Parliamentary Budget Officer’s (PBO’s) report on the valuation of the TMX in a previous post (Understanding what the PBO report says about the Trans Mountain Pipeline Expansion Project). The PBO report presents numerous scenarios and depending on the cost of the project, the financing costs and other factors, the project may or may not have a positive NPV.

What does a negative NPV mean? Well, let’s think about why a company builds a pipeline. When Kinder Morgan proposed the pipeline, it had a simple plan: build a pipeline for $4 – $7 billion and then sell space (tolls) on that pipeline at a price that allowed it to recoup its costs plus generate a profit for its shareholders. The challenge Kinder Morgan faced was that its only source of revenue on the project would be the tolls on the material transported by the pipeline. For Kinder Morgan, the NPV of the pipeline would really matter. If they were unable to recoup the costs of construction over the lifetime of the project, then the project would be a money-loser and a financial drain. Companies don’t last long if they regularly build projects that generate a negative NPV.
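For readers unfamiliar with how NPV is calculated, here is a minimal sketch of the idea. Every number in it (construction cost, annual toll revenue, project lifetime, discount rate) is an illustrative assumption, not a figure from the PBO report.

```python
# Minimal net present value (NPV) sketch for a toll-funded pipeline.
# Every number here is an illustrative assumption, not an actual TMX figure.

def npv(cash_flows, discount_rate):
    """Discount a list of annual cash flows (year 0 first) back to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

construction_cost = -17_000   # $ millions, spent up front (year 0)
annual_toll_income = 1_200    # $ millions/year, hypothetical net toll revenue
lifetime_years = 30
discount_rate = 0.07

cash_flows = [construction_cost] + [annual_toll_income] * lifetime_years
print(f"NPV: ${npv(cash_flows, discount_rate):,.0f} million")
# A negative result means the tolls alone never pay back the build cost in
# present-value terms; a positive result means they do.
```

With these placeholder inputs the result comes out negative, which is the toll-only picture project opponents focus on; the rest of this post explains why tolls are not the federal government’s only return.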

In my earlier post I also went into detail into the concepts of “optionality” and the “WTI-WCS price differential”. To save you time I will copy some text from that post here:

Optionality refers to the availability of more pipeline export capacity to more downstream markets for Western Canadian oil producers. Optionality allows shippers more opportunities to maximize returns and reduce the netback disadvantage, reflected in the price differential between West Texas Intermediate (WTI) and Western Canadian Select (WCS)

The PBO also notes:

That analysis determined that a reduction in the WTI-WCS price differential of US$5 per barrel would, on average, increase nominal GDP by $6.0 billion annually over 2019 to 2023.

When considering optionality and the WTI-WCS price differential we are reminded that the federal and provincial governments are not private corporations with limited sources of income. Governments generate revenues from a variety of direct and indirect sources.

Consider the building of the Trans Mountain. When the government spends $17 billion building a pipeline, it generates tax revenues on that spending, and the money invested has a multiplier effect throughout the community which generates still more revenue. When a crown corporation pays GST, that is direct revenue to the very government funding that crown corporation’s budget. Similarly, when a crown corporation pays staff to build a pipeline, those staff remit income taxes on their earnings. Thus, a $17 billion project doesn’t actually cost the federal government $17 billion, but rather $17 billion minus the taxes the government collects from that construction. Moreover, the construction generates spin-off economic activity on top of that.
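As a rough illustration of that recapture effect, here is a sketch using purely hypothetical rates; none of the percentages below come from the PBO, the CRA, or any official source.

```python
# Rough sketch of how much of a government-funded build flows straight back
# as taxes. All of the rates below are hypothetical placeholders.

project_cost = 17_000      # $ millions, headline construction cost
gst_rate = 0.05            # hypothetical GST captured on taxable spending
income_tax_rate = 0.10     # hypothetical average income tax on wages
labour_share = 0.40        # hypothetical fraction of cost paid as wages

gst_back = project_cost * gst_rate
income_tax_back = project_cost * labour_share * income_tax_rate
net_cost = project_cost - gst_back - income_tax_back

print(f"Taxes recaptured:       ${gst_back + income_tax_back:,.0f} million")
print(f"Net cost to government: ${net_cost:,.0f} million")
# Under these placeholder rates the net outlay is noticeably smaller than
# the headline $17 billion, before counting any spin-off activity.
```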

If taxes and direct economic spinoffs were the only benefits from the project, then even the government could only afford a small loss in NPV over the long term since they can only make up so much value in taxes. But thankfully, those are really only secondary benefits. The primary benefit of the project is in optionality and its larger effect on national GDP.

When TMX is complete, it will increase optionality and will increase the value of the oil moved down the pipeline (as described by the PBO). Line 2 is projected to move 540,000 barrels/day. If optionality increases the value of that oil by a single dollar per barrel, the pipeline would generate $540,000/day of added value to the economy at no additional cost. That works out to about $200 million/year for each dollar of increased value. Remember, this is simply an increase in the value of existing production that would otherwise still be moving by rail to Asia, California or Texas. It is pure cream which requires no further effort once the pipeline is built. If we use the PBO estimate of a $5 increase in value, that comes out to roughly $1 billion a year in added direct value from the TMX. That $1 billion means substantially higher royalties and higher tax revenues. That is more money for the government.
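Those figures are easy to check from the numbers already in this post; the throughput and the per-barrel uplifts come from the discussion above, and nothing else is assumed.

```python
# Annual value uplift from optionality, using the throughput cited above.

barrels_per_day = 540_000        # projected Line 2 throughput
days_per_year = 365

for uplift in (1, 5):            # $ per barrel of narrowed differential
    annual_value = barrels_per_day * days_per_year * uplift
    print(f"${uplift}/bbl uplift -> ${annual_value / 1e6:,.0f} million/year")
# $1/bbl -> ~$197 million/year; $5/bbl -> ~$986 million/year (~$1 billion)
```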

Thus, even if the pipeline ends up with an NPV of minus $1.2 billion, the government, through its other revenue sources, would make up that “loss” in very short order; at roughly $1 billion a year in added value, that shortfall is recovered in little more than a year. Moreover, if increased demand raised the price of additional production (remember Alberta produces about 3 million barrels a day of heavy oil), that increase in value might spread to the remaining oil, resulting in higher revenues off that oil as well. This is how the PBO comes up with its $6 billion/year in added revenue.

See how a negative NPV can still end up with a positive cash flow? Can you imagine any investment where $1.2 billion in one-time costs resulted in $1 billion to $6 billion a year in extra revenue? No business on the planet would say no to that proposition. And remember, this is not due to increased production; it is simply increasing the value generated by producing the same product. It is getting paid more for the same product because you can now get it to a market that values it more.

Ultimately, we know the opponents of the TMX are going to make wild and unsupported claims about the project being a money loser, a financial drain, etc. But the simple truth (as shown above) is that their argument about NPV simply does not hold water. The federal government is not a private corporation with a single revenue stream. The federal government builds all sorts of projects that have negative NPVs because they generate value through other means. From schools, to roads, to ports, to pipelines, these projects can generate either economic or social benefits. In the case of TMX, both the direct and indirect revenue streams will result in the project being a big economic winner for the federal government even if it costs a bit more to build.
