
Mobile phone use and cancer
M Kundi

Correspondence to: Prof. Dr M Kundi, Institute of Environmental Health, Medical Faculty, University of Vienna, Kinderspitalgasse 15, 1095 Vienna, Austria; Michael.Kundi@univie.ac.at


The continued success of mobile telecommunication, with a still growing population of users amounting to hundreds of millions of people worldwide, has raised concerns about possible consequences for public health should mobile phones turn out to be less safe than previously assumed. In fact, never before in history has a device been used that exposes such a great proportion of the population to microwaves in the near field and at comparatively high levels. The advantages of this technology with respect to health protection are undeniable, considering the many lives that have been saved by mobile phone calls in cases of emergency. On the other hand, telephoning while driving has become an important cause of accidents. However, the issue of possible long term effects such as the development of cancer is controversial. While there are still not enough data to present a final risk assessment, there is evidence from epidemiological and experimental studies that long term exposure to emissions from mobile phones may lead to a small to moderate increase in the risk of developing certain types of cancer. If a high proportion of the population is exposed, even a small additional risk could be of great importance to public health protection; hence measures reducing exposure may be indicated. However, because of the undeniable scientific uncertainties, industry lobbying has been successful in warding off overly strict prevention strategies.

EXPOSURE TO MOBILE PHONES

After not more than 20 years of mobile telecommunication, the third generation of mobile phones is already awaiting introduction onto the market (some test areas are now in operation). In Europe mobile telephony started in 1981, and in the USA in 1983. The first generation mobile phones were the so called analogue types. The signal was transmitted by frequency modulation in frequency bands around 450 MHz and 900 MHz. These phones had a peak emission power of about 8–15 W for the 450 MHz system and of about 1 W for 900 MHz telephony. In the early 1990s second generation mobile phone systems were introduced in several European countries. This system was a digital one that used frequency bands around 900 MHz (GSM) and later 1800 MHz (DCS). The technology made compression of speech possible, so that several mobile phones could transmit to and from the base station on the same channel. This type of transmission is called time division multiple access (TDMA): up to eight mobile phones share one channel, each being assigned a time slot of 577 µs. Within its time slot a phone transmits, as a single microwave pulse, the information corresponding to an analogue signal duration of 4.615 ms (one frame); bit coding of the digital information is accomplished by GMSK. Because one burst is emitted per frame, the basic pulse frequency is 1/4.615 ms, or about 217 Hz. Furthermore, the system applies power regulation to adjust output to the quality of the received signal and is capable of discontinuous transmission (DTX mode), thereby reducing the number of pulses emitted from the mobile phone while the user is silent. Peak power output of the mobile phone antenna is 2 W (900 MHz) and 1 W (1800 MHz). Different TDMA systems operate in the USA, Japan, and other countries. Digital cordless phones apply a similar technology and, despite considerably lower power, may result in average exposures comparable to GSM mobile phones.
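
The timing figures above determine the pulsed character of the exposure. A minimal sketch, using only the numbers quoted in this section (577 µs slots, eight slots per frame, 2 W peak output for GSM 900), of how the roughly 217 Hz pulse frequency and an upper bound on the time-averaged output power follow from them:

```python
# Minimal sketch of the GSM (TDMA) timing figures quoted above.
# All values follow directly from the text; nothing here is measured data.

SLOT_US = 577            # duration of one time slot in microseconds
SLOTS_PER_FRAME = 8      # mobiles sharing one channel
PEAK_POWER_W = 2.0       # peak output of a GSM 900 handset (text value)

frame_ms = SLOT_US * SLOTS_PER_FRAME / 1000     # ~4.615 ms per frame
pulse_frequency_hz = 1000 / frame_ms            # one burst per frame -> ~217 Hz
duty_cycle = 1 / SLOTS_PER_FRAME                # handset transmits in 1 of 8 slots
avg_power_w = PEAK_POWER_W * duty_cycle         # upper bound, ignoring power control/DTX

print(f"frame length        : {frame_ms:.3f} ms")
print(f"pulse frequency     : {pulse_frequency_hz:.0f} Hz")
print(f"time-averaged power <= {avg_power_w:.2f} W")
```

Power regulation and the DTX mode will typically reduce the average output well below this simple one-in-eight bound.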

All mobile phone systems use microwaves (electromagnetic fields of frequencies between 300 MHz and 300 GHz) as carrier waves. The wavelengths are about 65 cm (450 MHz), 33 cm (900 MHz), and 17 cm (1800 MHz). The wavelength is important not only for the size of the antenna necessary for transmission but also for the determination of the area of near field exposure. The near field is subdivided into the reactive and the radiating near field. The latter is characterised by a non-unique relation between the electric and magnetic components of the radiation (note that in the far field the electric and magnetic field vectors are orthogonal to each other and to the direction of propagation and are coupled by the plane-wave impedance of about 377 ohm in vacuum or air). Within the reactive near field the emission interacts with an object depending on its dielectric properties, and complicated exposure patterns may occur. In general, exposure conditions in the near field are more complicated, and measurements that do not take account of the exposed object, which itself interacts with the field, will give incorrect results. The border between near and far field is approximated by the Rayleigh distance 2d²/λ + λ/2 (where d is the size of the antenna and λ the wavelength). Assuming λ/4 type antennas, the border lies at approximately two thirds of the wavelength. Hence, while a mobile phone is held to the ear, the head is within the near field of the antenna.
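
As a quick check of the two-thirds rule, a short sketch (assuming ideal λ/4 antennas and free-space wavelengths; real handset antennas differ) that evaluates the boundary for the three carrier frequencies mentioned above:

```python
# Rough near-field/far-field boundary 2*d**2/lam + lam/2 for a lambda/4 antenna,
# as used in the text; a back-of-the-envelope sketch, not a dosimetric calculation.

C = 3.0e8  # speed of light, m/s

for f_mhz in (450, 900, 1800):
    lam = C / (f_mhz * 1e6)          # free-space wavelength in metres
    d = lam / 4                      # assumed lambda/4 antenna length
    boundary = 2 * d**2 / lam + lam / 2
    print(f"{f_mhz} MHz: wavelength {lam*100:.0f} cm, "
          f"near-field boundary ~{boundary*100:.0f} cm "
          f"(~{boundary/lam:.2f} wavelengths)")
```

For 900 MHz this gives roughly 21 cm, about two thirds of the 33 cm wavelength, confirming that the head of a user is well within the near field.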

Abbreviations

CT: computed tomography

GSM: global system for mobile communications

DCS: digital cellular system

DNA: deoxyribonucleic acid

DTX: discontinuous transmission

EMF: electromagnetic field

GMSK: Gaussian minimum shift keying

IARC: International Agency for Research on Cancer

ICNIRP: International Commission on Non-Ionising Radiation Protection

MRI: magnetic resonance imaging

NRPB: National Radiological Protection Board

OR: odds ratio

SAR: specific (energy) absorption rate

SIR: standardised incidence ratio

TDMA: time division multiple access

UV: ultraviolet radiation

WHO: World Health Organisation

Exposure to electromagnetic fields in the frequency range from about 100 kHz to 10 GHz results in absorption of part of the energy carried by the waves within the body. The rate of energy absorption depends on the field strength, the wavelength, the polarisation of the electric and magnetic fields with respect to the orientation of the body, the dielectric properties of the tissues, and other features. Within a given volume of body tissue this rate of energy absorption is proportional to the square of the internal field strength; it is denoted the specific absorption rate (SAR) and expressed in watts per kilogram. It is practically impossible to measure SAR in the exposed organism, hence different procedures are applied to estimate it. One possibility is to use phantoms filled with an electrolyte gel that has dielectric properties similar to those of the tissues at the relevant frequency, and to measure either the electric field strength within the phantom or the temperature increase (because absorbed energy is almost completely converted into heat). Another possibility is to solve the Maxwell equations consecutively for small adjacent volume elements of the body and compute the resulting field strengths. Both methods have their limitations because of the gross differences in tissue structure and properties.
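
For orientation, a minimal sketch of how SAR relates to the internal field strength and to the initial heating rate. The relation SAR = σE²/ρ (with E the RMS internal field strength) is standard dosimetry; the tissue parameters used below (conductivity, density, specific heat) are rough illustrative values and are not taken from this article:

```python
# Illustrative relation between internal field strength and SAR (SAR = sigma * E_rms**2 / rho)
# and the initial heating rate it implies. Tissue parameters are rough textbook-style values.

sigma = 1.0        # tissue conductivity in S/m (assumed, order of magnitude for brain at ~900 MHz)
rho = 1050.0       # tissue density in kg/m^3 (assumed)
c_heat = 3600.0    # specific heat capacity in J/(kg*K) (assumed)

def sar(e_rms_v_per_m: float) -> float:
    """Specific absorption rate in W/kg for a given internal RMS field strength."""
    return sigma * e_rms_v_per_m**2 / rho

for e in (10.0, 30.0, 50.0):
    s = sar(e)
    dT_dt = s / c_heat                      # K/s if all absorbed energy stayed as heat
    print(f"E = {e:4.0f} V/m -> SAR = {s:5.2f} W/kg, "
          f"initial heating ~{dT_dt*60*1000:.1f} mK/min")
```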

The International Commission on Non-Ionising Radiation Protection (ICNIRP) has issued guidelines1 for limiting exposure to electromagnetic fields. At present they recommend a basic restriction for SAR localised in the head of 2 W/kg averaged over any 10 g of tissue. Stricter guidelines apply in the USA, where the limit is 1.6 W/kg averaged over 1 g of tissue. In Great Britain the NRPB has issued guidelines2 that limit localised SAR to 10 W/kg.

SAR from exposure to mobile phones varies considerably between types and brands, not only with respect to maximal SAR but also with respect to exposure pattern. Older analogue phones sometimes exceeded the above mentioned exposure limits, while second generation mobile phones are compliant. However, depending on the type of antenna and other features, SAR varies between brands by up to a factor of about 10.

CARCINOGENIC RISK ASSESSMENT

Evaluation of carcinogenicity in humans relies on three sources of data: epidemiological studies, experimental animal studies, and in vitro genotoxicity data. The various evaluation schemes differ in some aspects, especially concerning the emphasis placed on the results of animal and genotoxicity studies. The first scheme to be widely recognised was that proposed by the International Agency for Research on Cancer (IARC), a WHO organisation. In spite of the differences mentioned, these schemes agree that an agent cannot be classified as carcinogenic to humans unless there is substantial epidemiological evidence supporting this assumption.

By definition, epidemiological evidence can only be provided if the agent under consideration is prevalent in a proportion of the population. Considering the prolonged duration necessary for a clinically relevant disease to develop and be detected, decades of exposure may pass before conclusions can be drawn. Animal and in vitro studies therefore gain importance, not only in supporting epidemiological findings, but also in assessing mechanisms of action and in providing early indications of a possible risk to humans that may suffice to justify risk reducing action. Furthermore, epidemiological studies are often insufficient to assess dose-response relations. In order to characterise the carcinogenic risk quantitatively and to develop guideline levels for limiting exposure (in cases where a ban is not feasible), dose-response data from animal experiments may serve as a surrogate.

Although for most types of cancer an understanding of the details of the underlying processes is still far from being complete, there is sufficient knowledge to crudely differentiate the process into several distinct stages. An important distinction is that between “initiation”, “promotion”, and “progression”; and agents acting at these stages are often termed initiating, promoting, or progressor agents.

As fig 1 indicates, there are several steps at which endogenous and exogenous agents can exert influence on malignant transformation and development. Concerning exposure to microwaves, although almost nothing is known about basic interaction mechanisms, most authors assume that if microwaves affect the process at all, they act as promoters.

Figure 1

Schematic diagram of the process of malignant transformation and development. Processes at which interference with endogenous or exogenous agents may occur are shown in red. The yellow flash sign denotes changes at the DNA level.

ARE EMISSIONS FROM MOBILE PHONES CARCINOGENIC?

It has been argued that, because the energy of the microwaves emitted by mobile phones is far too low to cause ionisation, they cannot be genotoxic. In fact, several reviews concluded that there is at most weak evidence for genotoxicity of radiofrequency fields and microwaves.3 However, all types of exposure have been combined in these analyses. It is as if, for example, all organic compounds were classified at once according to their genotoxic potential. If specific aspects of the incident field such as frequency and modulation are decisive, a general statement about all high frequency fields makes no sense. Furthermore, the argument that microwave energy is too low to damage DNA directly does not preclude an effect at the initiation stage. Many experimental studies have applied procedures that were developed for the assessment of chemical carcinogenicity and are probably useless for investigating electromagnetic fields.

Another and more radical argument is based on equilibrium thermodynamics. It is stated that unless the intensity of the internal electric field exceeds the threshold of thermal noise there cannot be an effect at all. While this argument, often referred to as the kT theory, poses a challenge to future theories of interaction between electromagnetic fields and the organism at the cellular level, it cannot be considered uniformly valid because otherwise we would not be able to hear (signal detection by hair cells of Corti’s organ is close to the quantum limit to measurement and well below thermal noise).
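
To put the energetic arguments of the last two paragraphs into numbers, a short sketch comparing the energy of a single 900 MHz photon with the thermal noise energy kT and with chemical bond energies (standard physical constants; 310 K taken as body temperature and a few eV as a typical covalent bond energy, both round illustrative figures):

```python
# Orders of magnitude behind the two arguments discussed above:
# microwave photon energy vs the thermal noise energy kT and typical bond energies.

H = 6.626e-34          # Planck constant, J*s
K_B = 1.381e-23        # Boltzmann constant, J/K
EV = 1.602e-19         # joules per electron volt

f = 900e6                              # carrier frequency, Hz
photon_j = H * f                       # energy of one microwave photon
kT_j = K_B * 310.0                     # thermal noise energy at ~body temperature
bond_j = 3.0 * EV                      # typical covalent bond energy (rough figure)

print(f"photon energy at 900 MHz : {photon_j:.2e} J ({photon_j/EV*1e6:.1f} micro-eV)")
print(f"kT at 310 K              : {kT_j:.2e} J (photon/kT ~ 1/{kT_j/photon_j:.0f})")
print(f"typical bond energy      : {bond_j:.2e} J (photon/bond ~ 1/{bond_j/photon_j:.0f})")
```

The photon energy lies several orders of magnitude below both kT and typical bond energies; as the text argues, however, this alone does not settle the question of biological effects.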

While the energy of the internal EMF as such might be too low to alter the distribution of velocities of molecules in tissues, precluding an effect that relies solely on energy transfer, the frequency and modulation of the EMF could be of biological significance. Several biological effects have been consistently observed in independent investigations at exposure levels below the threshold implied by the kT argument. Among these effects, despite some negative studies, are activation of heat-shock proteins,4–8 leakage of the blood-brain barrier,9–12 increased frequency of micronuclei,13,14 and alteration of calcium homoeostasis.15–17 Also of importance is the observation that effects of low frequency modulated high frequency fields are similar or even identical to those of low frequency fields, suggesting a demodulating capacity of tissues. Another, possibly fundamental, process that has been shown in vitro as well as in animal experiments is an increase in phosphorylation of enzymes. Such transformation of enzymes into their active state may play a key role in the process that leads to fixation of deviant DNA or to increased proliferation of deviant cell populations. There are, however, many open questions: Which processes occur upstream of the observed activation? Is the effect a consequence of an EMF sensitive DNA element or is it due to an acceleration of enzymatic loops? What is the role of duration of exposure? Which cell types are sensitive?

Carcinogenic risk assessment for electromagnetic fields and particularly for exposure to emissions from mobile phones is complicated for several reasons:

  • Except for tissue heating there is no established interaction mechanism between internal field strength and cellular processes18

  • Hence there is no evidence based starting point for experimental carcinogenicity studies

  • And epidemiological studies cannot apply an exposure metric derived from a sound mechanistic model.

The problem can be illustrated by the following considerations. Assume that the basic interaction mechanism involves resonance phenomena; for example, an effect at this level may occur only if certain conditions relating the mass and charge of ions in the tissue to the low frequency component of the incident EMF are met. It follows that the intensity of the field will be of minor importance, whereas the duration of exposure within the effective resonance window could be decisive. On the other hand, if the periods of relevant interaction are long, regulating cellular responses may be initiated that counteract the adverse reaction. If such conditions apply it will be very difficult to define a meaningful exposure metric. An example from a very different field where similar problems arise (although probably of less complexity) is sleep disturbance by noise. In this case, not the average noise level but the number of noise events exceeding a certain threshold is a first approximation to the overall adversity of the environmental condition with respect to sleep.

As mentioned above, there are two entwined lines of research that should be followed: a hunt for phenomena, following both in vitro and in vivo paradigms, and the development of theories about fundamental interaction mechanisms. There is some reluctance in the broader scientific community to take on this task, because many share the opinion that too much energy has already been wasted in the search for low level effects of EMFs. Especially concerning emissions from mobile phones, it is argued that the evidence rather points against an association with the development of cancer. However, a closer look at the evidence throws some doubt on this opinion.

Aspects to be considered in assessing epidemiological evidence

Some preliminary considerations are necessary to put the evidence into perspective. Most of these aspects have been more or less neglected in previous reviews.

First it has to be stressed that exposure to mobile phones is considered to act as a promoter (if it has an effect at all). If in a cohort the incidence of, say, a certain brain tumour has a specified age distribution, a promoter acting on the individuals in this cohort will only exert an effect in those who already bear a growing population of deviant cells. Its effect amounts to increasing the growth rate. This will result in an earlier clinical manifestation. Furthermore, in some individuals who would otherwise have died of competing causes of death, the brain tumour will be diagnosed during their life span. Hence there is: (1) a shift of the distribution of age at diagnosis to an earlier age; and (2) a (small) increase in incidence due to survival of competing causes of death. What would be the net effect on relative risk? This depends on two factors: the duration of the activity of the promoter, and the magnitude of the promoting effect. In the worst case the shift of age at diagnosis will be almost as great as the duration of exposure (note that it cannot be greater). In most epidemiological studies published so far, duration of exposure to mobile phones was less than 3–4 years in almost all subjects. Considering the age distribution of incidence for brain tumours, which is roughly described by a log linear increase between 20 and 70 years of age, a shift of, say, 3 years will have only a very small effect on relative risk estimates such as odds ratios or incidence ratios. Even neglecting sampling error, the relative risk will not be higher than about 1.15. And this is an overestimate, because for very fast growing tumours like glioblastoma the shift cannot be noticed at all, unless exposure is also associated with progression. This follows from the fact that the upper limit of relative risk is equal to exp(βn), where β is the slope of the age/log incidence relation and n is the shift of age at diagnosis in years. (For β a value of about 0.044 reflects overall brain tumour incidence.19)
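
A one-line check of the bound quoted above, evaluating RRmax = exp(βn) for the stated slope β ≈ 0.044 per year and several plausible shifts of age at diagnosis:

```python
# Upper bound on the relative risk from a pure promoter effect that shifts age at
# diagnosis by n years: RR_max = exp(beta * n), with beta ~0.044 per year as quoted
# in the text for the overall age/log-incidence slope of brain tumours.
import math

beta = 0.044  # slope of log incidence vs age (per year), value from the text

for n_years in (1, 3, 5, 10):
    rr_max = math.exp(beta * n_years)
    print(f"shift of {n_years:2d} years -> maximal relative risk ~{rr_max:.2f}")
```

A 3 year shift gives at most about 1.14, in line with the 1.15 quoted above; only shifts of a decade or more would push the detectable relative risk towards 1.5.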

Another aspect that has been neglected so far relates to the fact that brain tumours, the endpoint most often studied, comprise a heterogeneous class of tumours of benign as well as malignant type, with different cellular origins, clinical features, and prognoses. The WHO stratifies them into four grades according to their malignancy. Glioblastomas belong to the grade of highest malignancy. Up to now no environmental or occupational factor has been consistently associated with glioblastoma. Little is known about the induction period of this tumour; however, the duration between first clinical signs and diagnosis (or death) is weeks to at most months. Hence it is impossible to detect an effect on growth rate in this type of tumour with traditional case-control or cohort approaches. The predominance of glioblastomas in some of the investigations of the association between cellular telephone use and brain tumours therefore has a risk attenuating effect. Furthermore, because an effect can possibly be detected only for lower grade brain tumours in these studies, power is greatly overestimated if all types of tumour are combined. As long as exposure duration is as short as in the majority of case-control studies published so far, the small relative risk of 1.15 that might be present would require a sample size of more than 7000 cases and controls to be detected with 80% power. Restricting inclusion to lower grade tumours not only leaves out cases that cannot be affected by exposure, but also has the advantage that the age/log incidence function is steeper and therefore the potential to detect an effect is greater.
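
The order of magnitude of the 7000 figure can be reproduced with a standard two-proportion sample size approximation. The exposure prevalence among controls assumed below (30%) is an illustrative figure, not one reported in the studies discussed:

```python
# Rough sample-size check for the ~7000 cases and controls quoted above: the number of
# cases (and equally many controls) needed to detect OR = 1.15 with 80% power at
# two-sided alpha = 0.05, using a simple two-proportion approximation.
import math

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.84    # power = 80%

def cases_needed(odds_ratio: float, p_control: float) -> int:
    odds_case = odds_ratio * p_control / (1 - p_control)
    p_case = odds_case / (1 + odds_case)          # exposure prevalence among cases
    variance = p_case * (1 - p_case) + p_control * (1 - p_control)
    n = (Z_ALPHA + Z_BETA) ** 2 * variance / (p_case - p_control) ** 2
    return math.ceil(n)

n = cases_needed(1.15, 0.30)   # 30% exposure prevalence among controls is an assumption
print(f"~{n} cases and {n} controls (total ~{2*n}) for OR 1.15 at 80% power")
```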

Of great importance is the localised nature of exposure when using a mobile telephone. In all investigations that included this aspect, a great proportion of subjects used the phone predominantly on one side of the head only. In this case the contralateral side remains virtually unexposed. Because we are not dealing with initiation of tumours but with effects on tumour growth, only exposure on the same side of the head as the tumour can have an effect. This precondition further reduces the number of evaluable tumours to those that grow in a potentially irradiated area. Because this depends on the type of phone and the position used while telephoning, some misclassification of exposure at the site of interest is unavoidable. Calculation of the internal field strength at the location of development of the tumour would be the method of choice; however, the data collected in studies published so far were insufficient to accomplish this. An alternative would therefore be stratification according to the following categories of increasing probability of exposure of the relevant region of the brain: no mobile phone use, use on the contralateral side, use on both sides, and ipsilateral use. However, in this case another problem has to be considered. During growth of the tumour, symptoms such as tinnitus, hearing problems, and headaches may occur that could lead to switching of the side of the head at which the telephone is held, or to reduced or even discontinued use. Especially for acoustic neurinoma such symptoms are prevalent and are often the reason for seeking medical advice. It is therefore important to investigate the conditions that were present before the onset of these symptoms. Unfortunately none of the studies investigated symptoms that may interfere with using a mobile telephone, and especially with switching the side of the head against which the telephone was held during conversation. A minimal requirement for the analysis of such tumours, however, is to account for reasonable periods of latency. At least the last year before diagnosis should be disregarded in the analysis.

Epidemiological evidence

Overall, nine epidemiological studies of mobile phone use and cancer have been published so far: four in the USA,20–24 two in Sweden,25,26 and one each in Germany,27 Finland,28 and Denmark.29 Most of these studies investigated brain tumours; one studied uveal melanoma, one additionally included salivary gland cancer, and another malignancies of the haematopoietic and lymphatic tissues. In addition, a small case-control study30 of 18 intratemporal facial nerve tumour cases has been published. Seven studies were case-control studies, two were cohort studies. A superficial look at the results of these investigations might suggest that there is evidence for the lack of an association with mobile phone use: four of the nine studies found evidence for an increased risk, while five failed to find such an effect. Does this indicate lack of an association? To evaluate epidemiological findings it is important to base the assessment on criteria related to the study methodology and to the endpoint studied. Furthermore, specific considerations that apply to the respective field of study (see above) have to be included.

Tables 1 and 2 present an overview of the studies published so far, including an evaluation of the most important criteria for studies investigating the risk of mobile phone use. As mentioned above, all studies have some deficiencies. Most important are the lack of a proper exposure assessment and the short duration of mobile phone use in most of the studies. Concerning exposure, all except one of the case-control studies28 obtained information on mobile phone use by interview. However, as mentioned above, there is great variation between different types of phone, and also variability in individual habits of use, that greatly influence exposure. Note that, due to the lack of an accepted and experimentally corroborated theory of action of microwaves, it is impossible to decide whether the number of calls, the duration of calls, or some other surrogate of exposure is the relevant measure (see table 3). Assessment of mobile phone use in the two cohort studies, one from the USA20,21 and one from Denmark,29 was also insufficient, being based solely on subscription records. In the Danish study, duration of use was obtained for only 57% of the cohort and no data on intensity of use were available.

Table 1

Synopsis of epidemiological studies attributed as “negative” with respect to a possible association between mobile phone use and cancer

Table 2

Synopsis of epidemiological studies attributed as “positive” with respect to a possible association between mobile phone use and cancer

Table 3

Weight-of-evidence criteria for mobile phone use and cancer and gaps in knowledge

In all except three studies25–27 duration of mobile phone use was less than 5 years in the vast majority of cases, hence these studies did not even come close to latencies that are meaningful in cancer research even if effects of promoters are considered.

Concerning selection of study participants, the two cohort studies20,21,29 and one case-control study28 neglected corporate users, who are likely to be the heaviest users and those with the longest duration of use. Although it is convenient to use hospital controls in case-control studies, one has to consider carefully whether control patients can be assumed to stem from the same population as the cases. For example, including patients with heart disease, who are advised to carry a mobile phone so that they can place a call in case of emergency, would introduce a bias concerning mobile phone use. It is important to note that all case-control studies that found an effect of mobile phone use were population based, while all such studies that were negative were hospital based. There is no straightforward explanation for this discrepancy, because a number of other differences could have contributed. One aspect, however, could be the higher proportion of high grade tumours in hospital based studies. As mentioned above, inclusion of these tumours attenuates risk estimates, because they cannot be affected by an exposure of such short duration as studied in these investigations.

Concerning power to detect a moderately increased risk, most studies were deficient because they had too low numbers of relevant cases. Firstly, the proportion of cases and controls using a mobile phone was low in most studies; secondly, the number of subjects that accrued a relevant number of years of use was smaller still. The number of subjects available to assess laterality (that is, the most important aspect of analysis that relates side of predominant use of a mobile phone to the location of the tumour) was negligibly small in most studies, especially if latency is considered.

Confounding has been considered in most studies. Concerning brain tumours, the most important confounders are age and gender. Age and gender are indeed potential confounders, as they fulfil the necessary condition for confounding, that is, they are correlated with both the outcome and the exposure. Due to as yet unknown factors, brain tumours have characteristic age distributions and most show pronounced sex differences. Mobile phone use is more prevalent in younger age groups and in males (although, due to the steep increase in mobile phone use, with rates in excess of 75% in many countries, the differences prevailing in the published studies have almost vanished by now). Until now no other factor has been unequivocally related to brain tumours, with the exception of therapeutic x rays for meningioma. Confounding by therapeutic x rays has been considered in one investigation.25 Concerning uveal melanoma, the endpoint studied in the German investigation,27 socioeconomic status and hair and eye colour were analysed in addition to age and gender. There was some criticism that confounding by UV exposure had not been considered. Indeed, UV exposure is potentially related to the risk of intraocular melanoma; however, it is not very plausible that UV exposure is correlated with mobile phone use. Nevertheless, inclusion of UV exposure as a possible confounding variable would have improved the credibility of the results of this study.

Obvious sources of bias have been avoided in most studies; however, in positive as well as negative studies there are some conditions that may have biased results. In one positive28 and two negative studies20,21,29 corporate users of mobile phones were excluded. Observer bias may have been present in several negative studies in which interviews were conducted by personnel not blinded to case status.22–24 Recall bias may have shifted results towards the null hypothesis in studies of brain tumours in which interviews with cases were done shortly after surgery.22,23 Response bias has also been suggested to affect the results of two Swedish case-control studies25,26 that found a significant relation between the side of the head of predominant mobile phone use and the location of the tumour. This points to a dilemma present in many epidemiological investigations: important factors have to be assessed by questioning subjects who may, voluntarily or involuntarily, give incorrect answers. The predominant side of use of a mobile phone can hardly be corroborated independently by other evidence. Unlike intensity of use, which may be checked against provider records, no such confirmation is available for side of use. Proxy interviews tend to aggravate the problem, because even close relatives will have difficulty recalling this habit of use. Although a response bias cannot be completely dismissed, a close inspection of study conduct makes an impact on the results unlikely. Questioning of subjects did not focus on mobile phone use but extended over a number of different topics such as occupational use of chemicals, x ray exposure, and light drinks; furthermore, at the time the study was conducted there was almost no public discussion about a possible link between mobile phone use and cancer, and hence no conceivable tendency of subjects to blame the phone as a cause of their disease. In the second of these studies26 only two of more than 1300 brain tumour patients stated that they had considered mobile phone use as a possible cause of their disease.

Statistical evaluation was done by standard methods in most of the studies. Three US studies22–24 applied methods to analyse laterality which neglected control subjects, thereby losing control over confounders, and were therefore inappropriate.

Summarising the compiled evidence from epidemiological investigations, it can be stated that all studies that approached reasonable latencies25–27 found an increased risk associated with mobile phone use. The most recent Swedish study26 even met the criterion that both an overall increase in risk and a consistent laterality effect should be observed. All positive studies found strong indications of an increase in risk with increasing duration of use and with longer latencies. On the other hand, all negative studies had insufficient latencies and could detect an effect at neither the induction nor the promotion stage with reasonable power; hence they cannot contribute to the assessment of evidence. While this does not prove an association between mobile phone use and cancer, it is sufficient reason to increase the efforts to clarify the issue by a concerted multinational research agenda.

Recommendations for future epidemiological studies

Future epidemiological studies should put emphasis on the history of the disease. These data should be utilised for an internal comparison of estimates of tumour growth rate as well as for censoring of mobile phone use. This would avoid a bias due to interference of the disease with mobile phone use. It should further be noted that, as long as comparably short periods of use prevail, incidence ratios will greatly underestimate a potential promoter effect if high grade brain tumours are included. An explorative analysis of different exposure metrics is mandatory, because there is no reason a priori to prefer one over another. If possible, the SAR at the location of the tumour, given the anthropometric characteristics of the user's head and the data about predominant mobile phone use, should be computed. These data may be weighted by intensity and duration of use. Because, presuming a promotional effect, exposure can have no effect in control subjects without a deviant cell population, whatever the intensity of the internal field, the problem of assigning an appropriate exposure metric to controls points to the inadequacy of the classical case-control approach. The weighted SAR values can, however, be used as a predictor of tumour growth, the essential endpoint for internal comparisons; a sketch of such a weighted metric is given below.
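
A purely hypothetical sketch of such a cumulative, SAR-weighted use metric. The data structure, field names, and numerical values below are invented for illustration; in a real study the local SAR would come from dosimetric modelling of the head and phone rather than the assumed constants used here:

```python
# Hypothetical cumulative dose proxy: sum over reported use periods of
# (estimated SAR at the tumour site) x (hours of use), disregarding a latency window.
from dataclasses import dataclass

@dataclass
class UsePeriod:
    years: float               # duration of this usage pattern
    hours_per_week: float      # self-reported intensity of use
    sar_at_tumour_site: float  # modelled local SAR in W/kg for the phone/side used (assumed)

def cumulative_exposure(periods: list[UsePeriod], latency_years: float = 1.0) -> float:
    """Sum SAR-weighted hours of use, disregarding the most recent `latency_years`."""
    total = 0.0
    remaining_latency = latency_years
    for p in reversed(periods):               # walk backwards from diagnosis
        effective_years = max(p.years - remaining_latency, 0.0)
        remaining_latency = max(remaining_latency - p.years, 0.0)
        total += p.sar_at_tumour_site * p.hours_per_week * 52 * effective_years
    return total  # arbitrary units: (W/kg) x hours

history = [UsePeriod(3.0, 2.0, 0.8), UsePeriod(2.0, 5.0, 0.3)]  # oldest period first
print(f"weighted exposure index: {cumulative_exposure(history):.0f} (W/kg)*h")
```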

Animal and in vitro studies

If epidemiological evidence is equivocal or there are inherent methodological problems, long term animal experiments may provide support for or against the hypothesis of a carcinogenic effect of an agent. Concerning high frequency EMFs, however, standard experimental procedures as routinely applied in chemical carcinogenicity studies are inappropriate. Typically such studies are conducted in small laboratory animals such as mice and rats that receive doses just below levels of acute toxicity, together with (geometrically) declining doses approaching levels that may occur in human exposure. Concerning EMFs, the highest "doses" applicable are at the upper end of the range occurring in humans, that is, at levels that cannot carry a very high risk (otherwise there would be no controversy). Starting at such low levels is necessary to avoid interference from tissue heating (and from sensing of the presence of the field by the animals). If an increase in incidence of at most two- to three-fold is expected, this poses a number of problems for study design. Although spontaneous tumour rates are often surprisingly high in inbred rodent strains (such as Fischer 344 rats), the incidence of specific types of tumour (for example, brain tumours) may be as low as 1% in a typical two year bioassay. Hence, in order to detect an increase to, say, 2% at a significance level of 0.05 with a probability of 90%, about 3000 animals must be tested in each group. This is obviously hardly feasible. Therefore, methods have to be applied that guarantee increased spontaneous rates of the tumour in question.
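
The figure of about 3000 animals per group follows from the usual two-proportion power calculation; a minimal check under the stated assumptions (1% background incidence, 2% in the exposed group, two-sided α = 0.05, 90% power):

```python
# Check on the "about 3000 animals per group" figure, using the same simple
# two-proportion approximation as in the epidemiological example above.
import math

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 1.28    # power = 90%

p0, p1 = 0.01, 0.02   # spontaneous vs expected tumour incidence
n = (Z_ALPHA + Z_BETA) ** 2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0) ** 2
print(f"~{math.ceil(n)} animals per group")
```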

Essentially there are three methods that can be applied: using a strain that has an increased spontaneous cancer rate (for example, because of an oncogenic virus or genetic manipulation), using a known carcinogen to increase the induction of cancer (for example, application of nitrosourea or benzo(a)pyrene), or implantation of tumour cells. The first study31 of low level long term exposure in experimental animals was conducted between 1980 and 1982. Concerning mobile communication frequencies, three long term animal experiments with exposure to GSM-like signals have been conducted so far;32–34 in addition, two short term experiments have been published.35,36 Other types of mobile telecommunication signals applied in the USA and Japan have been studied in four long term37–40 and two medium term animal assays.41,42 The results of these experiments are not easily summarised because of gross differences in methodology and exposure conditions. Exposure schedules ranged from two half-hour periods per day up to 24 hours per day, on 4–7 days a week; some experiments exposed animals in the far field and some in the near field; and the specific absorption rate in exposed animals ranged from about 0.04 to about 4 W/kg. Depending on the type of tumour studied and the model used (chemical induction, cell implantation, spontaneously increased cancer incidence), different aspects of exposure may be crucial for the probability of detecting an effect. Because we do not know at which stage of carcinogenesis, if any, exposure exerts an effect, and due to the lack of a sound theory of interaction at low levels of exposure, the small number of animal carcinogenicity experiments cannot contribute substantially to risk assessment.

If we consider results of in vitro studies, duration of exposure could be important. It has been assumed that microwaves may act as promoters and therefore it has been hypothesised that longer duration of exposure may be more effective. However, it has been shown by in vitro experiments that some effects are transient or are stronger in intermittent exposures, and therefore it seems reasonable to assume that at longer exposure duration defence mechanisms are invoked that counteract the damage or changes induced by the exposure. Indications of such effects are found in several experiments where exposed animals had lower cancer incidences.33,34,38

Another crucial feature seems to be the time course of the disease. Where tumours appeared quickly or survival declined rapidly, no effect of exposure was detected. This points to an indirect mechanism of action, perhaps through an influence of exposure on cell cycle control and apoptosis. The only long term experiment that resulted in a significant increase in cancer incidence was conducted in a strain of transgenic animals with an increased spontaneous lymphoma rate that were exposed for only two half-hour periods per day, seven days a week, for up to 18 months.32 In this animal model lymphomas appear late (at about 10 months of age), and during the following 10 months about 20% of the animals develop lymphoma spontaneously. This rate was about doubled in exposed animals. Another experiment34 in the same strain of mice applied a different experimental strategy: animals were immobilised in tubes during exposure (a procedure that has previously been shown to increase cancer incidence) and exposure was only once a day for one hour, five days a week. In this experiment a reduction in the lymphoma rate was found and the spontaneous rate was greatly enhanced, pointing to the presence of nuisance factors. Hence the controversy about possible long term effects of exposure to mobile phone frequencies cannot be settled by the available evidence from animal experiments.

CONCLUSIONS

Mobile phones operating close to the head of the user result in comparably high levels of exposure to microwaves in the near field. Never before in history have such high exposures occurred on a regular basis in such a great proportion of the population. Consequently there is concern that exposure may lead to long term adverse health effects, and in particular to an increase in cancer incidence. The controversy about long term health effects of exposure to high frequency electromagnetic fields can only be settled by both independently corroborated empirical evidence and a sound mechanistic model of the basic interaction mechanisms at low intensities of exposure. Although there is evidence from independent epidemiological studies pointing to a moderately increased cancer risk for subjects using a mobile phone for several years, the possibility of bias and confounding always remains unless there is supporting evidence from animal and in vitro studies as well as a mechanistic explanation. While it is epistemologically invalid to dismiss findings because of a mere potential for bias and confounding (because there is no finite sequence of tests to disprove this assumption), the evidence is not yet strong enough to convince the greater scientific community and regulating authorities to take immediate action. Rather, a case can be made for increasing the quantity and quality of scientific investigations.

There seems to be a tendency towards large scale studies, experimenta crucis, designed to settle the controversy once and for all. However, the lack of theoretical understanding of the basic interaction mechanisms makes such a strategy hopelessly inadequate. On the other hand, epidemiological studies must be large enough to detect a moderately increased risk. This may imply a multinational approach such as that supervised by the WHO in the Interphone project to study brain tumours. Due to the vast increase in the proportion of the population using a mobile phone it has become difficult, if not impossible, to define an unexposed cohort, and case-control studies would face the same problem, namely that almost all subjects have used a mobile phone. Hence exposure has to be quantified. But how should this be done without knowledge of the mechanism of interaction? While age at first exposure and years of exposure could be chosen beforehand as likely to be important without reference to a mechanistic model, intensity of exposure cannot be so defined. It is equally likely that total hours of use of a mobile phone, total number of calls, duration of exposure above some threshold level, duration of use within a certain window of time, or some other aspect is important. Innovative strategies for defining an exposure metric are warranted; these may even guide theoreticians in narrowing down the interaction mechanisms that could be responsible.

The same difficulties are apparent in long term animal experiments. There are too many parameters of exposure that must be varied, and it is therefore premature to hope for a decision in the near future.
At least the following variables must be considered: duration of exposure, intensity of the field, near field versus far field exposure, continuous versus intermittent exposure, time of day of exposure, carrier frequency of the field (scaled or unscaled; note that the absorption pattern differs considerably between humans and experimental animals, and we do not know whether the frequency of the carrier wave is important at all; perhaps only the pattern of absorption and the related distribution of internal field strength counts), and varying versus constant exposure pattern. If all these variables are considered, the investigation would have several hundred independent experimental conditions. This is clearly not feasible. Therefore, innovations in the design and execution of experiments must be introduced, starting from the scarce data available and proceeding from in vitro and short term studies to long term experiments. Rather than waiting for the decisive experiments, groups of research projects with specific aims should be defined that lend support to each other without requiring simple replication, which is of little merit in science. An example of such a research strategy can be found in the study of the activation of heat-shock proteins by microwaves, which has been shown in in vitro studies as well as in animal experiments in different species. With such a strategy it can be hoped that the controversy about long term health effects of high frequency EMF exposure, which has now lasted for at least 50 years, can be settled without another 50 years to come.

In the meantime some precautionary measures may be appropriate. Because epidemiological studies approaching meaningful latencies found evidence for an increased cancer risk, and because there is corroborating evidence from at least one long term animal experiment and from other experimental in vitro and in vivo studies, the fundamental pessimism of environmental hygiene demands a recommendation of caution and the introduction of protective measures that are easily implemented. Among these are discouraging the use of mobile phones by children and adolescents,43 recommending headsets, advising a reduction in the length and number of calls, and encouraging manufacturers to design and market low emission phones (for example, with antennas transmitting away from the head).

QUESTIONS (SEE ANSWERS ON P 487)

  1. Exposure to high frequency EMFs in the range of 100 kHz to 10 GHz:

    1. Results in absorption of part of the energy of the EMF within the body.

    2. Produces, depending on intensity, an increase of temperature of part of the body or the whole body.

    3. Has sufficient energy to break chemical bonds.

    4. Is more complicated to assess if the source is far away.

    5. Has considerably decreased in the past decades.

  2. Determine whether the following statements are true or false:

    1. Epidemiological studies are essential in determining whether an agent is carcinogenic in humans.

    2. Long term animal experiments would not be used in this assessment, even if epidemiological findings are insufficient.

    3. Experimental studies in animals should only use exposure intensities of EMFs as high as tolerable.

    4. Exposure of isolated cells or tissues can be used to assess basic interaction mechanisms between EMFs and the organism.

    5. Dose-response relations cannot be determined in long term animal experiments.

  3. Assessment of carcinogenic risk of high frequency EMFs and particularly of mobile phones:

    1. Has been thoroughly performed prior to marketing of these products to ensure the public that these applications are safe.

    2. Unequivocally resulted in the overall conclusion that the weight of evidence points against an existing risk.

    3. Is difficult due to the lack of an accepted mechanism of action of these exposures in the low intensity range.

    4. Is superfluous because these exposures are not genotoxic.

    5. Is difficult because of the long latency of cancer.

  4. Epidemiological studies of mobile phone use:

    1. Indicated that there is no increased risk of cancer in users.

    2. Were predominantly about tumours localised in the head.

    3. Applied the most stringent methodologies available.

    4. Indicated an increased risk in long term users.

    5. Were mostly based on accurate calculations of exposure.

  5. If exposure to microwaves emitted by mobile phones is associated with a promoter effect, what will be the effect on relative risk?

    1. It will increase relative risk especially if exposure occurs late in tumour development.

    2. Relative risk will increase if incidence is similar in all relevant age groups.

    3. Relative risk will not or only slightly increase if duration of exposure is short.

    4. Because relative risk is an estimate of the incidence ratio it will increase proportional to the increase in growth rate of the tumour.

    5. Relative risk will show no dependency on intensity of use but only on duration of use.

REFERENCES
