Revue des sciences de l'eau
Journal of Water Science
Volume 12, Number 3, 1999
Contents (9 articles)
-
Stochastic modelling of low flows: a literature review
I. Abi-Zeid et B. Bobée
p. 459–484
RésuméFR :
The continuing growth of the world population and rising standards of living in some parts of the planet place ever greater pressure on the quantitative and qualitative demand for water resources, calling for more adequate management. In order to evaluate the reliability of a water resources system and to determine how it should be operated during a low-flow period, a modelling tool is useful. We present here a synthesis of the modelling work carried out within the stochastic approach. We first clarify the difference between a drought and a low-flow period, two terms often confused in the literature, and then present some indicators of each.
The stochastic approach may be subdivided into two categories: frequency analysis and stochastic processes. Most frequency analysis studies aim to compute critical low-flow discharges xT corresponding to a given return period T, such that P(X<xT)=1/T. The stochastic-process approach consists of modelling deficit events or the variables of interest without directly using flow models.
Flow frequency analysis does not take durations into account and makes overly simplistic stationarity assumptions. Run series analysis yields duration distributions only for very simple flow processes. The advantage of the point-process approach over run series analysis is that it allows the study of complex, dependent and non-stationary processes. Moreover, alternating point processes allow the modelling of durations and the synthetic generation of the occurrence times of surplus and deficit series.
In this article we review low-flow modelling work based on frequency analysis, run theory and point processes. We have not included studies that derive low-flow distributions from physical models, nor regional studies.
EN :
The increasing pressure on water resources requires better management of water deficit situations, be they unusual droughts or yearly recurring low-flows. It is therefore important to model the occurrence of these deficit events in order to quantify the related risks. Many approaches exist for the modeling of low-flow/drought events. We present here a literature review of the stochastic methods. We start by clarifying the difference between low-flows and droughts, two terms which are often used interchangeably. We then present some low-flow and drought indicators.
The stochastic approach may be divided into two categories: Frequency analysis and stochastic processes. Most frequency analysis studies aim to assign to a flow value X a cumulative frequency, either directly using empirical distribution functions, or by fitting a theoretical distribution. This allows the computation of a critical flow xT corresponding to a return period T, such that P(X<xT)=1/T. These studies use mostly the annual minima of daily flows where the hydrological data is assumed independent and identically distributed. It is also common to analyze Qm, the annual minimum of the m-consecutive days average flow, m being generally 7, 10, 30, 60, 90, or 120 days, and to adopt as critical flow the m-day average having a return period of T years. The distributions which are used include the Normal, Weibull, Gumbel, Gamma, Log-Normal (2), Log-Pearson (3), Generalized Extreme Value, Pearson type 3, and Pearson type 5 distributions (GUMBEL, 1954; MATALAS, 1963; BERNIER, 1964; JOSEPH, 1970; CONDIE and NIX, 1975; HOANG, 1978; TASKER, 1987; RAYNAL-VILLASENOR and DOURIET, 1987; NATHAN and MCMAHON, 1990; ADAMCZYK, 1992).
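The frequency-analysis recipe described above can be sketched in a few lines; a minimal illustration (helper names are ours, not from the cited studies) computes the annual minimum of the 7-day average flow and estimates the critical flow xT with P(X < xT) = 1/T from a sample of annual minima, using empirical Weibull plotting positions rather than one of the fitted theoretical distributions listed above:

```python
def q7_annual_minimum(daily_flows, window=7):
    """Annual minimum of the window-day moving-average flow (Q7 for window=7)."""
    means = [sum(daily_flows[i:i + window]) / window
             for i in range(len(daily_flows) - window + 1)]
    return min(means)

def low_flow_quantile(annual_minima, T):
    """Critical flow xT such that P(X < xT) = 1/T (empirical estimate)."""
    xs = sorted(annual_minima)
    n = len(xs)
    p = 1.0 / T
    # Weibull plotting positions: p_i = i / (n + 1), i = 1..n
    pts = [(i / (n + 1), xs[i - 1]) for i in range(1, n + 1)]
    if p <= pts[0][0]:
        return pts[0][1]
    for (p0, x0), (p1, x1) in zip(pts, pts[1:]):
        if p <= p1:
            # linear interpolation between adjacent plotting positions
            return x0 + (x1 - x0) * (p - p0) / (p1 - p0)
    return pts[-1][1]
```

In practice a theoretical distribution (Weibull, Gumbel, etc.) would be fitted to the annual minima; the empirical estimate above merely shows where xT comes from.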
The approach using stochastic processes for low-flows may be direct (analytical) or indirect (experimental) (YEVJEVICH et al., 1983). The indirect approach (not described in this literature review) consists of obtaining flow models, generating synthetic flows and then empirically studying certain drought variables obtained from the synthetic data. The direct approach models deficit events and related variables without explicitly modeling flows. The stochastic processes are of two types and differ in the way that randomness is introduced in the model:
- State modeling: The process may be modeled as a probabilistic transition between various states (Markov processes for example). The states of the process {Xt } are obtained from the hydrological observations {Yt } using thresholds. The number of states of {Xt } is finite and run series analysis may be used to study the properties of the drought parameters; or
- Event modeling: The concept of random occurrence of an event is introduced, where an event is a transition between surplus and deficit and vice-versa. In this approach, stochastic point processes are appropriate. A deficit event is then considered a rare event and is characterized by its occurrence time.
We review the low-flows studies based on frequency analysis, run series analysis and on point processes. However, we do not include the physically-based models nor the regional analysis studies.
Run series analysis is applied to processes derived from flows and thresholds. A two-state process is obtained and Markov processes are often applied. The variables of interest are the duration of a deficit, defined by the run length of series below the threshold (RL), the severity, corresponding to the deficit volume over a negative run of length n (RSn), and the intensity In, defined by the ratio RSn/RL (SALDARRIGA and YEVJEVICH, 1970; SEN, 1977; MILLAN and YEVJEVICH, 1971; MILLAN, 1972; SEN, 1980A; SEN, 1980B; SEN, 1980C; GÜVEN, 1983; MOYÉ et al., 1988; SEN, 1990). It is usually assumed that the flow process is either independent or autoregressive of order 1, and that it is stationary (except in SEN, 1980B).
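The run variables named above can be extracted directly from a flow series and a fixed threshold; a minimal sketch (function names are illustrative) returning, for each deficit run, the run length RL, the deficit volume RS and the intensity I = RS/RL:

```python
def runs_below(flows, threshold):
    """Return a list of (RL, RS, I) tuples, one per run below the threshold."""
    runs = []
    length, deficit = 0, 0.0
    for q in list(flows) + [threshold]:  # sentinel flushes the final run
        if q < threshold:
            length += 1
            deficit += threshold - q
        elif length:
            runs.append((length, deficit, deficit / length))
            length, deficit = 0, 0.0
    return runs
```

Under an independent, stationary flow process the run length below the threshold is geometrically distributed; the cited studies generalize such results to autoregressive and non-stationary flows.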
Point processes are based on the notion of the occurrence of an event. They are defined by the occurrence time tj of an event ej. We present a classification of some of the pertinent processes and their relation to each other. These include the Poisson process, both homogeneous and non-homogeneous, the renewal process, the doubly stochastic process and the self-exciting process. These processes are well suited for obtaining models of deficit durations (NORTH, 1981; LEE et al., 1986; ZELENHASIC and SALVAI, 1987; CHANG, 1989; MADSEN and ROSBJERG, 1995; ABI-ZEID, 1997). The advantage of this approach is its ability to take nonstationarity into account, where alternating surplus-deficit point processes are defined from daily flow data. ABI-ZEID (1997) proposed a physically-based alternating non-homogeneous Poisson process that takes into account precipitation and temperature, and defined low-flow risk indices computed from these models.
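Occurrence times of deficit events under a non-homogeneous Poisson model can be generated synthetically; a minimal sketch of Lewis-Shedler thinning (a standard simulation technique, shown here for illustration and not taken from the cited studies), assuming the intensity function `rate` satisfies rate(t) <= rate_max on [0, horizon]:

```python
import random

def nhpp_times(rate, rate_max, horizon, seed=42):
    """Occurrence times on [0, horizon] of a non-homogeneous Poisson
    process with intensity rate(t), by Lewis-Shedler thinning."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_max)         # candidate from homogeneous process
        if t > horizon:
            return times
        if rng.random() < rate(t) / rate_max:  # accept with prob rate(t)/rate_max
            times.append(t)
```

An alternating surplus-deficit model would pair two such intensity functions, one governing entries into deficit and one governing returns to surplus.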
In conclusion, we note that frequency analysis does not adequately account for durations and relies on simplifying stationarity hypotheses. Run series analysis provides duration distributions only for simple flow processes. The advantage of point processes is that they can model complex, dependent and non-stationary processes. Furthermore, alternating point processes can be used to model deficit durations and to generate synthetic data such as occurrences of deficit and surplus events. We argue that the duration of low-flows is an important issue which has received little attention.
-
Effectiveness of wastewater treatment in the Chaudière River basin (Québec, Canada)
Y. Maranda et J. L. Sasseville
p. 485–507
RésuméFR :
Québec has devoted substantial technical and financial efforts to the treatment of municipal wastewater and to the storage of animal manure in order to meet citizens' demands for the restoration of the uses of its watercourses. Overall, through its technological and administrative choices, water pollution control has generated public investments exceeding $7.2 billion, plus more than $400 million annually in operating costs. Have these choices achieved a level of water quality corresponding to a social optimum?
Using a case study of the Chaudière River watershed (Québec, Canada), this article highlights the factors that have undermined the efficiency of water pollution control policies in Québec. In this watershed, $125 M was spent between 1981 and 1992 on the construction of treatment plants using various types of treatment, $8.6 M was allocated to the construction of manure storage structures, and debt servicing for municipal wastewater treatment would reach nearly $527 M under a 25-year financing assumption. The performance of the treatment plants has significantly reduced inputs to the watercourse, notably of BOD5 and phosphorus. Finally, this performance and the total cost of municipal wastewater treatment in the Chaudière River watershed make it possible to estimate, on a cost-effectiveness basis, that there is an optimal level of water quality that could result from the establishment of municipal wastewater treatment infrastructure. Thus, from the perspective of a collective, watershed-based response to water pollution, it would be appropriate for managers and user-taxpayers of the water resource to take into account not only discharge objectives, but also the costs and performance of all treatment plants in the watershed, so as to remove the maximum pollutant load where the equipment performs best.
EN :
Considerable technical and financial effort has been invested by Québec in the cleaning up of municipal wastewaters and storage of animal manure to meet demands by citizens to restore the province's rivers to their former state. Water pollution control has required technological and management choices that have resulted overall in public investments in excess of $7.2 billion, with over $400 million going to operation costs annually. Have these choices enabled Québec to attain a water quality level consistent with a social optimum?
Based on a case study taken from the Chaudière river watershed, Québec, Canada, this article posits two conditions for achieving a social optimum and underscores the factors that have offset the efficiency of water pollution control policies in Québec. According to the data collected on this watershed, between 1981 and 1992, $125 M was invested in the construction of sewage water treatment plants using various treatment methods, while $8.6 M went towards manure storage facilities. On the whole, $527 M is expected to be spent over 25 years to service the debt for municipal wastewater treatment within the watershed.
While inputs of pollutants, especially BOD5 and phosphorus, have dropped significantly with the construction of the wastewater treatment plants, levels of residual pollution in the watershed remain high. It is suspected that total residual loads of phosphorus from municipal and agricultural sources are still well above the loads eliminated through wastewater treatment. If they are to achieve an efficient watershed-based approach to water management, decision-makers face two conditions: the first addresses intersectoral efficiency in controlling pollution in a watershed and the second involves minimizing intrasectoral costs of pollution control. The "intersectoral efficiency" condition explains the administrative and technical choices made as well as the importance of the political market in allocating resources to water pollution control among the socioeconomic sectors responsible for water quality deterioration. The "minimizing intrasectoral costs" condition explains how to minimize the costs in a specific socioeconomic sector among the available water treatment solutions. Using performance data from wastewater treatment plants and the total cost of wastewater treatment in the Chaudière river watershed, it can be assumed, based on a cost-efficiency ratio, that an optimal level of water quality should occur as a result of the establishment of municipal wastewater treatment infrastructures. However, it would appear from the results obtained that Québec's water treatment program has deviated from a social optimum, i.e., restoration costs have not been shared equitably among users/polluters within the watershed, and measures to ensure maximum removal of pollution at minimum cost have not been secured. The play of political forces is central to the allocation of resources among pollution sources. Without a properly defined "hard core," a water pollution control policy cannot elaborate the best solutions oriented towards attaining a social optimum.
In the context of the high residual pollution loads within the watershed, there remains the issue of what water quality level is desirable at what cost, particularly with respect to the community's contribution to date and the efficiency of the response strategies that have been implemented. Now that wastewater treatment infrastructures have been set up, and a watershed-based approach to water management becomes effective, water resource managers and users/taxpayers should turn their attention away from discharge objectives only to focus also on the costs and performance of the watershed's treatment plants as a whole, so that removal of pollutant loads at high-performance facilities may be maximized.
-
In situ estimation of activated sludge respiration using an oxygen balance
P. Chatellier et J. M. Audic
p. 509–514
RésuméFR :
In activated sludge treatment plants it is essential to maintain a biomass of good quality so that the water is treated properly. One of the parameters characterizing biomass activity is the specific respiration rate (i.e., the amount of oxygen consumed per unit mass of biomass and per unit time). This respiration can be measured with dedicated instruments (respirometers) or deduced from on-line measurements. A data analysis method for deducing the specific respiration rate from simple measurements (flow rates and dissolved oxygen concentration) has been developed and tested. Thanks to the low cost of implementing this technique, it becomes reasonable to use the specific respiration rate as a parameter for the automated operation of treatment plants.
EN :
In activated sludge wastewater treatment plants, the activity of the biomass is essential if effluent quality standards are to be met. One of the parameters that characterizes the activity of activated sludge is the specific oxygen uptake rate (the amount of oxygen consumed per unit mass of biomass and per unit time). This rate may be measured using specialised apparatus (a respirometer) or deduced from on-line measurements. A technique to deduce the specific oxygen uptake rate from simple measurements (flow rate, dissolved oxygen concentrations) has been developed and tested. This technique involves low-cost probes, and the specific oxygen uptake rate estimates may therefore be used for treatment plant automation.
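One common way to deduce the uptake rate from such on-line measurements is a dissolved-oxygen mass balance; the sketch below assumes a completely mixed aeration tank with a known oxygen transfer coefficient kLa and saturation concentration Cs (an assumed textbook model, not necessarily the authors' exact data-analysis method; all names are illustrative):

```python
def oxygen_uptake_rate(q, v, c_in, c, cs, kla, dc_dt=0.0):
    """OUR (mg O2/L/h) from the dissolved-oxygen mass balance
    dC/dt = (Q/V)*(C_in - C) + kLa*(Cs - C) - OUR,
    with Q in m3/h, V in m3, concentrations in mg/L, kLa in 1/h."""
    return (q / v) * (c_in - c) + kla * (cs - c) - dc_dt

def specific_uptake_rate(our, biomass):
    """Specific respiration: oxygen consumed per unit biomass per unit time,
    e.g. mg O2 per g MLVSS per hour when biomass is in g/L."""
    return our / biomass
```

With inlet and tank DO both at 2 mg/L, Cs = 9 mg/L and kLa = 5/h at quasi-steady state, the balance gives OUR = 35 mg O2/L/h; dividing by the biomass concentration yields the specific rate used as the control parameter.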
-
Relationship between temperature and intensity of groundwater circulation in alpine massifs: a tool for predicting water inflows into tunnels
J. C. Marechal
p. 515–528
RésuméFR :
A large number of long, deep tunnels have been, or will in the future be, bored through mountain massifs. Monitoring the temperature of water inflows in five long alpine tunnels (Vereina, Gothard-N2, Mont-Blanc, Simplon and Gothard-AT) shows that water temperature in these works is strongly influenced by the permeability of the massifs and by the groundwater circulation occurring within them. Cold water infiltrating at high altitude has a cooling effect on the massif. The measurement of water temperature during the advance of an underground work therefore constitutes an effective and inexpensive tool for predicting water inflows. These inflows may be very localized, causing a temperature decrease in one zone of the massif, or diffuse, causing an overall decrease in the thermal gradient of the water in the massif. A negative correlation was found between the water thermal gradient in each of the massifs studied and the intensity of the water inflows subsequently observed in the tunnels crossing them.
EN :
Numerous long tunnels have been and will be drilled at great depths in mountainous alpine massifs. Water inflow temperatures in five existing long alpine tunnels (Vereina, Gothard-N2, Mont-Blanc, Simplon and Gothard-AT) have been studied and compared with the volume of water inflows.
The Vereina railway tunnel, drilled in Austroalpine nappes, encountered little water inflow. The linear discharge rates vary between 0.003 and 0.006 L/s/m. Water temperature series have been observed in both the northern and southern parts of the tunnel trace: the northern thermal gradient is equal to 0.018 °C/m, whereas the southern thermal gradient is similar, at 0.016 °C/m. No particular thermal anomaly has been observed at this site.
The Gothard-N2 road tunnel (National route number two) intersects the Aar and Gothard External Crystalline Massifs. A general thermal gradient equal to 0.015 °C/m is observed in the southern part of the tunnel trace in the Monte Prosa massif. Positive thermal anomalies have been measured in both the northern and central parts of the tunnel trace. They are due to topographical effects: in this region, the tunnel is situated beneath the Reuss river valley. Water inflows are weak in this tunnel: about 0.020 L/s/m in the Monte Prosa zone, for example.
The Mont-Blanc road tunnel intersects the Mont-Blanc External Crystalline Massif. A water thermal gradient equal to 0.016 °C/m has been observed on the northern part of the massif, at depths less than 1000 meters. This region corresponds to a low-permeability crystalline schist zone. The linear discharge rate is equal to 0.008 L/s/m in this zone. A large negative thermal anomaly was measured during the drilling of this tunnel. The water temperatures decreased from 32°C to 11.5°C beneath the Pointe Helbronner. This decrease corresponds to large water inflows (about 1000 L/s) in a strongly fractured zone. A second water thermal gradient (very weak: 0.007 °C/m) corresponds to the granitic unit which is globally more permeable than the schist with a linear discharge rate equal to 0.193 L/s/m.
The Simplon railway tunnel, drilled through the Penninic nappes, is also characterized by a negative thermal anomaly situated in the very permeable marbles of the Teggiolo zone. In this tunnel, the water temperatures decrease from 55°C in the Berisal gneissic zone to less than 15°C in the Teggiolo zone. The water thermal gradient in the northern part is high, consistent with the weak water inflows (linear discharge rate less than 0.001 L/s/m). A third zone is observed in the Veglia marbles: it is characterized by a water thermal gradient equal to 0.010 °C/m for a linear discharge rate equal to 0.203 L/s/m.
The Gothard-AT gallery has been drilled in Penninic gneiss. A water thermal gradient equal to 0.013 °C/m has been measured over the first 3000 m. A negative thermal anomaly was encountered at the end of the gallery, due to the presence of very permeable metasedimentary rocks with important water circulation.
These results show that the water temperature in underground works is strongly dependent on the massif permeability and the existence of groundwater flows. Cold waters coming from high infiltration zones have a cooling effect on the massif. Thus, measuring water temperature during drilling constitutes a prediction tool for future water inflows. Two cases are possible: the observation of a local thermal anomaly due to a very localized aquifer zone, or the decrease of the water thermal gradient due to diffuse water inflows in the massif.
Local thermal anomalies, correlated with large water inflows along discrete zones, have been shown in the Simplon, Mont-Blanc and Gothard-AT tunnels. Such thermal anomalies can be measured hundreds of meters before the intersection of the tunnel with the aquifer zone: temperature monitoring thus constitutes a prediction tool for large water inflows localized in a particular aquifer zone. The use of 3D numerical simulations allows one to improve the prediction quantitatively, by taking into account the problem geometry, the heterogeneity and anisotropy of the thermal and hydrogeological properties of rocks, and the boundary conditions.
The comparison of water thermal gradients at the massif scale with linear discharge rates in the tunnels through the massif allows us to determine a mathematical relationship between these characteristics of the massif. This relation permits one to predict the water quantity expected during drilling, knowing the water thermal gradient.
These results show that water temperature measurements during the drilling of an underground work constitute an efficient and inexpensive prediction tool for water inflows. Anomalies due to relief must be taken into account; these can be very significant in such mountainous massifs. A 3D modeling of heat transfer in the massif is, in all cases, necessary to improve the precision of predictions.
-
Fractionation and characterization of leachates from municipal solid waste landfills: usefulness of high-performance size-exclusion liquid chromatography
F. Le Coupannec et J. J. Peron
p. 529–543
RésuméFR :
Ultrafiltration and high-performance size-exclusion chromatography are used for the separation and characterization of the organic compounds present in leachates from municipal solid waste landfills.
Fractionation of the organic matter is obtained on TSK PW columns, eluting with water at pH 4 and with water-methanol. UV-visible and fluorescence spectroscopy, together with an evaporative light scattering detector, are used to characterize the fractions.
This rapid separation method, combined with multidetection, reveals characteristic organic compounds in the fractions obtained by ultrafiltration. In the fraction with molecular weights below 1000 Daltons, three families are detected. Humic substances and proteins are the main groups present in the fraction with molecular weights above 10000 Daltons.
EN :
Landfill leachates represent a source of organic pollution characterized by a high organic load, with high chemical oxygen demand in recent sanitary landfills and some organic compounds refractory to biodegradation. Several researchers have examined the organic matter in these landfill leachates. In addition to measuring parameters such as chemical and biological oxygen demands (COD and BOD) and UV-absorbance, different analytical techniques were applied: gas chromatography with flame ionization or mass detection; high performance liquid chromatography; infrared spectroscopy; nuclear magnetic resonance spectrometry; and elemental analysis. Raw leachates or samples after fractionation on Sephadex gel were characterized by ultrafiltration or adsorption on XAD resins.
Conclusions from these earlier studies were as follows:
- physico-chemical properties of leachates revealed not only a high organic pollution but also diversity and variability according to the age of the sanitary landfill and the climatic conditions;
- gel permeation chromatography and ultrafiltration revealed two main fractions in the leachates: one with molecular weights below 1000 Daltons (Da), and another with molecular weights above 5000-10000 Da;
- infrared spectroscopy and nuclear magnetic resonance spectrometry showed functional groups present in the humic and fulvic acid fractions of natural organic matter;
- a varying number of peaks detected by gas chromatography with flame ionization or mass detection proved the complexity of the matrix. Few compounds were identified and quantified, with the exception of fatty acids. Moreover this technique was only applicable to molecules with low molecular weight.
The purpose of the present work was to develop a new method of fractionation of organic matter in landfill leachates and to study their characterization and biodegradability during treatment. Ultrafiltration, as a prefractionation step, divided the leachate into four fractions according to their molecular weight: above 10000 Da, from 10000 to 3000 Da, from 3000 to 1000 Da, and below 1000 Da. The second fractionation step was carried out using gel permeation chromatography. This technique had been applied by earlier researchers for the characterization of landfill leachates, but at low pressure on Sephadex gels. In our study, we developed a high performance size-exclusion chromatography method using a polymer-based TSK PW column, a hydrophilic cross-linked polyether. Three TSK G3000 PW columns and one G5000 PW column were tested with water at pH 4 with acetic acid and with a water/methanol mixture as mobile phases. This rapid method of separation, with short retention times, was coupled with on-line multidetection: UV-visible (254 nm, aromatic compounds), fluorescence spectroscopy (275/325 nm, protein-type molecules; 320/430 nm, humic-type molecules) and evaporative light scattering detection (ELSD). The ELSD allowed detection of all mineral and organic compounds that did not evaporate at the working temperature (45°C).
The effect of the sodium chloride concentration on retention times was tested with the eluants and columns. Secondary effects, often observed with size-exclusion chromatography, occurred with the gel chosen. The elution of sodium chloride solutions at different concentrations showed that the TSK PW gel bears electronegative charges, and that the density of these charges differs from one column to another. This influence was observed for the leachate: chromatograms obtained on two TSK G3000 columns differed for fractions with molecular weights below 1000 Da.
The comparison of chromatograms obtained with the four detection methods provided information about the identity of the types of compounds present. For fractions with molecular weights below 1000 Da, separation was performed using a TSK G3000 PW column, with an eluant pH of 4 and a water-methanol mixture; three main families were detected. For fractions with molecular weights above 10000 Da, chromatographic separation was improved by elution with water/methanol (70/30) with TSK G5000 and G3000 columns in series; two main groups were identified, humic substances and protein-type compounds. The constituents of the two intermediate fractions with molecular weights between 10000 and 1000 Da were essentially humic substances, identified after separation on a TSK G3000 PW column with water-methanol (70/30) as the eluant.
-
Oxidation of diuron and identification of some reaction by-products
R. M. Ramirez Zamora et R. Seux
p. 545–560
RésuméFR :
This work studies the aqueous-phase degradation of the herbicide diuron (N-(3,4-dichlorophenyl)-N'-(dimethyl)-urea) by radical oxidation. Diuron solutions (0.01 mg L-1 and 5 mg L-1) buffered at pH 7 were treated with ozone and with the hydrogen peroxide-ozone couple (0.35 mole/mole). The applied ozone doses were 2 mg O3 L-1 for the 0.01 mg L-1 diuron solution and 6 mg O3 L-1 for the 5 mg L-1 solution. The reaction times used were 30 min for the high diuron concentration and 8 min for the low one. Some by-products were identified by gas chromatography coupled with mass spectrometry (GC-MS) on extracts of the treated solutions.
The results show that during oxidation the aromatic ring is preserved in three by-products: N-(3,4-dichlorophenyl)-N-(methyl)-urea (DCPMU), N-(3,4-dichlorophenyl)-urea (DCPU) and 3,4-dichloroaniline (DCA). These compounds were identified both in the diuron solutions treated with ozone and in those treated with the ozone-hydrogen peroxide couple. Diuron and the identified by-products were quantified by high-performance liquid chromatography (HPLC). The percentage of diuron oxidized by ozone and by the ozone-hydrogen peroxide couple is high (80% and 90% respectively) under the experimental conditions Co = 5 mg L-1, reaction time = 30 min and applied ozone = 6 mg L-1. Total organic carbon (TOC) measurements revealed partial mineralization of the herbicide (approximately 50%). The reaction mass balance shows that DCPMU is one of the main diuron oxidation by-products (5 to 7% of the initial quantity of diuron).
EN :
During drinking-water production, pesticides can be modified under the action of powerful oxidizing agents such as ozone. However, the mineralization process is rarely complete and it is therefore important to know both the nature and the concentration of intermediate by-products. The main goal of this work was to identify the Diuron oxidation reaction by-products in order to explain the reaction mechanisms and determine the efficiency factors of proposed treatments to destroy these substances in drinking-water.
Trials were carried out in a continuous bubbling-column reactor operating in an up-flow mode. This Pyrex reactor was one meter high with a one liter volume. The Diuron concentration in test solutions was fixed either at 0.01 mg L-1, which is the maximum value found in natural water, or at 5 mg L-1 to facilitate the identification of the reaction by-products.
Test solutions were prepared from a standard solution by dilution into ultrapure water (Total Organic Carbon=TOC < 0.1 mg L-1) buffered with phosphate at pH=7. The following experimental conditions were used for the 0.01 mg L-1 and 5 mg L-1 test solutions, respectively: O3 dose=2 and 6 mg L-1 ; H2O2 /O3 molar ratio=0.35; contact times=8 and 30 min. Hypotheses on the nature of the Diuron oxidation reaction by-products were based on previous experiments carried out on Isoproturon and Metoxuron (Allemane 1994; Mansour 1992).
To identify the reaction products, we performed a liquid phase-solid phase extraction on C18-grafted silica cartridges. Diuron by-products were identified by gas chromatography-mass spectrometry (GC-MS), after a derivatization reaction with butyl iodide. Results show that during the oxidation reaction the aromatic ring is preserved in three by-products: N-(3,4 dichlorophenyl)-N-(methyl)-urea (DCPMU), N-(3,4 dichlorophenyl)-urea (DCPU) and 3,4 dichloroaniline (DCA). These compounds were found in Diuron solutions treated with ozone or with the ozone-hydrogen peroxide couple. They were identified by comparing their chromatograms with those of the pure isolated substances (retention times and mass spectra). Products were quantified by High Performance Liquid Chromatography (HPLC). The Diuron transformation percentages were found to be 80% with ozone and 90% with the O3/H2O2 couple under the most severe experimental conditions, i.e., Co = 5 mg L-1, [O3] = 6 mg L-1 and a 30-minute reaction time. The TOC measurements show that under these conditions the Diuron mineralization process reaches 50%. A mass balance showed that DCPMU was one of the main oxidation reaction residual by-products, with amounts corresponding to 5-7% of the initial quantity of Diuron.
-
Determination of chlorine dioxide, chlorite and chlorate ions in natural waters using indigo carmine: a study of interferences
C. Elleouet, F. Quentel et C. L. Madec
p. 561–575
RésuméFR :
Several methods based on a single reagent, indigo carmine, were implemented to monitor chlorine dioxide and its degradation by-products, the chlorite and chlorate ions.
The study of the stability of indigo carmine showed that chlorine dioxide must be determined within the first hours following the addition of indigo carmine, a slight decrease in absorbance being observed beyond twenty hours. By contrast, the absorbance of indigo carmine in the presence of chlorite and chlorate ions remains stable for several days.
Possible interferences (humic substances, ozone, hypochlorite) were also investigated. Chlorite and chlorate ions react with humic substances in acidic media with much slower kinetics than their reactions with indigo carmine; the resulting errors on the concentrations therefore remain small. Hypochlorite, or more precisely hypochlorous acid, reacts with indigo carmine, which leads to errors in the determination of chlorine dioxide, chlorite and chlorate. In the case of chlorine dioxide, these sources of error can be eliminated by adding ammonia before introducing indigo carmine into the sample.
After being validated in synthetic media, the protocols were applied to a natural medium: the drinking water distributed in the city of Brest. A statistical analysis was carried out to compare the results with those obtained by other methods based on different principles.
EN :
Over the last decade, chlorine dioxide has been increasingly used for disinfecting drinking water in many countries. The presence of a sufficient residual concentration of the bactericidal reagent in drinking water is a guarantee of consumer protection, so it is important to determine the levels of chlorine dioxide at the tap exactly and accurately. During water treatment and subsequent distribution, chlorine dioxide can undergo a variety of reduction and disproportionation reactions producing primarily chloride, but also chlorite and chlorate, which have been shown to cause haemolytic anemia. Reliable analytical methods are needed to identify and determine levels of chlorine dioxide, chlorite and chlorate in drinking water. A procedure based on the use of indigo carmine for the determination of each species in natural waters is suggested in this paper.
In phosphate buffer (pH 6.8), two moles of chlorine dioxide oxidize one mole of indigo carmine. The concentration of the bactericidal reagent can be determined by measuring the difference in absorbance of the dye at 610 nm before and after reaction with chlorine dioxide. This method is selective, as chlorite and chlorate do not react with indigo carmine in phosphate buffer at pH 6.8. Although the spectrophotometric method can be used successfully at chlorine dioxide levels down to 30 µg/l, the determination of lower levels in tap water requires a more sensitive method such as an electrochemical stripping procedure. This analysis is based on the measurement of the decrease in the indigo carmine signal after addition of chlorine dioxide. The detection limit is around 1 µg/l.
At pH=2, one mole of indigo carmine reduces one mole of chlorite. Thus the chlorite concentration can be determined by measuring the indigo carmine absorbance at pH=2. At pH=0, indigo carmine reacts with both chlorite and chlorate. A measurement at pH=0 allows chlorate concentrations to be determined since the decrease in absorbance due to the presence of chlorite can be calculated.
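As an illustration of the spectrophotometric calculation described above, the sketch below converts the drop in indigo carmine absorbance at 610 nm into a chlorine dioxide concentration via the Beer-Lambert law and the 2:1 stoichiometry. The molar absorptivity used is an assumed, purely illustrative value, not one taken from the paper.

```python
# Sketch of the spectrophotometric ClO2 determination (pH 6.8, 610 nm).
# EPSILON is an illustrative placeholder, not a value from the paper.

M_CLO2 = 67.45          # g/mol, molar mass of chlorine dioxide
EPSILON = 20000.0       # L mol^-1 cm^-1 at 610 nm (assumed, illustrative)
PATH_CM = 1.0           # cuvette path length in cm

def clo2_ug_per_l(abs_before: float, abs_after: float,
                  epsilon: float = EPSILON, path_cm: float = PATH_CM) -> float:
    """ClO2 concentration from the drop in indigo carmine absorbance.

    Beer-Lambert: delta_A = epsilon * path * delta_c(dye).
    Stoichiometry at pH 6.8: 2 mol ClO2 oxidize 1 mol indigo carmine.
    """
    delta_a = abs_before - abs_after
    dye_oxidized = delta_a / (epsilon * path_cm)   # mol/L of dye consumed
    clo2_mol = 2.0 * dye_oxidized                  # mol/L of ClO2
    return clo2_mol * M_CLO2 * 1e6                 # convert g/L -> µg/L

# Example: an absorbance drop of 0.015 corresponds to roughly 100 µg/l
conc = clo2_ug_per_l(0.850, 0.835)
```

The same pattern, with a 1:1 stoichiometry and measurements at pH 2 and pH 0, would apply to the chlorite and chlorate determinations.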
The stability of the indigo carmine absorbance has been studied. An indigo carmine solution prepared in phosphate buffer is stable over several days if kept in light-proof bottles. It is not surprising that the presence of chlorite and chlorate does not lead to a change in absorbance, as they do not react with the dye at pH=6.8. A slight decrease in the absorbance of an indigo carmine solution containing chlorine dioxide is observed after about twenty hours. The chlorine dioxide concentration therefore has to be determined within the first hours following the addition of the dye to the sample in order to avoid errors.
Interferences can arise from other residual oxidants that may also be used in water treatment, or from substances present in the sample that may react with indigo carmine, chlorite and chlorate. Accordingly, we have considered the influence of humic substances, ozone and hypochlorite. The absorbance of indigo carmine at pH=2 and at pH=0 does not change in the presence of natural organic matter (1 mg/l). Chlorite and chlorate react with humic substances, but the kinetics are much slower than those of their reactions with indigo carmine. Errors arising from humic substances in chlorite and chlorate measurements are thus very small. Ozone may interfere in the analyses as it reacts with indigo carmine. However, its presence in the distribution network is unlikely, as it also reacts with chlorine dioxide, which is in excess, and with chlorite to give chlorate. Hypochlorite causes errors in chlorine dioxide, chlorite and chlorate determinations as a result of a reaction with indigo carmine. In the case of chlorine dioxide determinations, these errors can be eliminated by adding ammonia to the sample before the indigo carmine.
Once the validity of the procedures had been proven in synthetic media, the methods were applied to a natural water, that of the water distribution network of the city of Brest, France. The results have been compared with those of other analytical techniques.
-
Using a neural network to apply the Muskingum model to sewer networks
J. Vazquez, M. Zug, D. Bellefleur, B. Grandjean et O. Scrivener
p. 577–595
RésuméFR :
The Muskingum model has been widely used and validated for simulating free-surface flow in irrigation channels. By extension, it is also used to simulate flows in sewer networks. However, we have shown errors of up to 80% on the peak flow between the fixed-parameter Muskingum model and the Barré de Saint-Venant reference model. We propose a new parameterization of the Muskingum model for flow in circular sewer mains, covering a wide range of main lengths, slopes and diameters. This new non-linear model was calibrated by minimizing an objective function expressing the closeness of the proposed model to the results of solving the Barré de Saint-Venant equations for rectangular hydrographs. A neural network was used to parameterize the model. This new application of the Muskingum equations yields mean relative errors below 6% on the value and time of the peak flow, for mains up to 6500 m long, slopes between 0.5% and 1%, diameters between 150 and 2500 mm, and hydrographs with peak flows close to the capacity of the main. The model was also validated on a hydrograph of arbitrary shape.
EN :
Many towns and cities frequently suffer failures of their sewer networks, especially in rainy weather. The resulting untimely spills pollute the receiving environment, to the detriment of both the natural environment and the human population. Improving the quality of the natural environment therefore involves an increasingly sophisticated control of the hydraulics and the pollutant load in drainage systems, and especially in sewer networks. Real-time management of sewer networks can provide a solution for the protection of the natural environment. In this approach, control strategies are issued for the sluices and pumps of the sewer network during a rainy event to minimize the urban effluent discharged. This in turn requires a better understanding and modeling of the transport of pollution in the mains.
To that end, not only must the hydraulic behavior of the mains be correctly modeled (shape of the hydrograph, value and temporal position of the peak flow), but the numerical model must also be stable and converge towards the solution irrespective of the initial conditions, and the computation time must be compatible with the requirements of real-time management. The most representative model of one-dimensional flows is that of Barré de Saint-Venant (1871). The non-linearity of these equations, the resulting difficulty of solving them and the computation time required mean that not all the criteria for a real-time application can be met. The conceptual Muskingum model is an alternative.
In the case of round sewer mains a few kilometers long with slopes ranging from practically nil to a few per thousand, the K and α coefficients traditionally used do not yield correct results with respect to the benchmark Barré de Saint-Venant model. To keep the advantages of the simplified Muskingum equations while avoiding the need to solve the Barré de Saint-Venant system, we propose new parameters for the Muskingum equations, obtained by optimization and by correlation techniques based on neural networks.
In modeling the mains of a sewer network, the discretization of their length, within the usual limits [50 m; 1000 m], is chosen empirically. This discretization plays an essential part in the propagation of the wave in a main. To take this effect into account, the round main of length L is discretized into N sections, and K is expressed from the maximum flow velocity Vmax. The model's tuning parameters are now N and α; they are calibrated for a wide range of slopes, lengths and flow rates for round mains with constant roughness.
The calculation procedure is as follows:
- Calibration of the optimal values of N and α, giving results close to those calculated with the Barré de Saint-Venant model;
- Determination of correlations of the parameters N and α with the slope, length and diameter;
- Validation of the Muskingum model against that of Barré de Saint-Venant.
The parameters α and N are calibrated by minimizing an objective function measuring the agreement between the results of the Barré de Saint-Venant hydraulic simulations and those of the proposed model. The objective function is defined as the sum of the relative quadratic deviations of the peak flow values and times. The maximum errors are thus reduced from 90% to 10% on the peak flow value and from 30% to 10% on its time of occurrence. The mean error is reduced forty-fold on the peak flow value and five-fold on its temporal position, with reductions of the same order in the standard deviations.
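For readers unfamiliar with the scheme, a minimal sketch of classical Muskingum routing with the discretization described above may help: the main of length L is split into N reaches, each with storage constant K derived from the reach length and Vmax. The parameter values below are illustrative placeholders, not the calibrated ones obtained by the authors.

```python
# Minimal sketch of cascaded Muskingum routing for a round sewer main.
# Storage relation per reach: S = K * (alpha * I + (1 - alpha) * Q).
# All parameter values are illustrative, not the calibrated ones.

def muskingum_reach(inflow, k, alpha, dt):
    """Route a discrete inflow hydrograph through one Muskingum reach."""
    denom = 2.0 * k * (1.0 - alpha) + dt
    c0 = (dt - 2.0 * k * alpha) / denom
    c1 = (dt + 2.0 * k * alpha) / denom
    c2 = (2.0 * k * (1.0 - alpha) - dt) / denom   # c0 + c1 + c2 == 1
    out = [inflow[0]]                              # assume initial steady state
    for t in range(1, len(inflow)):
        out.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[-1])
    return out

def route_main(inflow, length_m, n_sections, v_max, alpha, dt):
    """Cascade N identical reaches; K is the travel time of one reach."""
    k = (length_m / n_sections) / v_max            # seconds
    q = inflow
    for _ in range(n_sections):
        q = muskingum_reach(q, k, alpha, dt)
    return q

# Rectangular input hydrograph, as used for the calibration runs (m3/s)
inflow = [0.1] * 5 + [1.0] * 30 + [0.1] * 45
outflow = route_main(inflow, length_m=2000, n_sections=4,
                     v_max=1.5, alpha=0.1, dt=120.0)
```

The output hydrograph shows the expected attenuation and delay of the peak relative to the rectangular input.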
Correlations of α and N are sought as functions of the slope, length and diameter of the mains modeled. As linear relations failed to provide satisfactory results, a multi-layer perceptron artificial neural network was used. The model has 3 inputs and 2 outputs.
The first, essential stage consists of finding the optimal number of neurons in the hidden layer. It is worth noting that despite maximum errors of 40% and 20% on the predicted time and value of the peak flow, mean errors of only 3% and 4% are observed. Given this result, 4 neurons were chosen for the hidden layer. The model therefore has 3 inputs, 4 hidden neurons and 2 outputs.
After the learning phase, carried out on the results of the optimization phase, the so-called prediction phase was performed. It consists of using the neural network on data whose values are intermediate with respect to those used in the learning phase; the network is used solely to predict values within the minimum and maximum limits of the learning set. The prediction (or validation) phase showed mean errors of about 2.7% on the peak flow value and 5.5% on its time of occurrence.
The choice of 4 neurons in the hidden layer gives prediction-phase results of the same order of magnitude as in the learning phase, thus validating the structure of the chosen neural network.
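The 3-4-2 network structure retained above (3 inputs: slope, length, diameter; 4 hidden neurons; 2 outputs: N and α) can be sketched as a plain forward pass. The weights below are illustrative placeholders, not the trained values.

```python
import math

# Sketch of the 3-4-2 multi-layer perceptron described above.
# Weights and inputs are illustrative placeholders, not trained values.

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer perceptron with tanh activations."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return [sum(wi * hi for wi, hi in zip(row, hidden)) + b
            for row, b in zip(w_out, b_out)]

# Placeholder weights: 4 hidden neurons x 3 inputs, 2 outputs x 4 hidden
w_hidden = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5],
            [-0.7, 0.4, 0.2], [0.1, -0.6, 0.9]]
b_hidden = [0.0, 0.1, -0.1, 0.2]
w_out = [[0.4, -0.3, 0.6, 0.2], [0.1, 0.5, -0.2, 0.3]]
b_out = [0.0, 0.0]

# Inputs would be normalized slope, length and diameter of the main
n_pred, alpha_pred = mlp_forward([0.2, 0.5, 0.8],
                                 w_hidden, b_hidden, w_out, b_out)
```

In the authors' procedure, such a network is trained on the (slope, length, diameter) → (N, α) pairs produced by the optimization phase, then queried for intermediate mains.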
The value and temporal position of the maximum flow rate were first studied for the propagation of rectangular hydrographs. The performance of the proposed model was then verified on the propagation of a hydrograph of arbitrary shape. The model, validated on an arbitrarily shaped hydrograph presenting several peaks of different intensities, reproduces the output hydrograph satisfactorily and is a distinct improvement on the classic Muskingum model.
-
Strategy for hydrogeological prospecting of the basement of eastern Chad through optimization of the number and depth of reconnaissance boreholes
P. Gombert
p. 597–608
RésuméFR :
Under Chad's low-rainfall climate, the weathered layer is unsaturated and only the fractured basement is water-bearing. The borehole failure rate reaches 60% because the fractures are very discontinuously distributed, as shown by their fractal organization. This leads to the coexistence of productive and barren sectors. At the kilometer scale, the number of reconnaissance boreholes per village can be adjusted according to climatic, topographic and geological characteristics. Zones of high productivity, where the success rate reaches 79%, medium productivity and low productivity are thus defined. Each village can then be assigned an "investigation potential", the product of the number of boreholes and their predicted depth.
At the local scale, a principal components analysis of drilling parameters shows that the presence of groundwater is linked to the characteristics of the fractured, unweathered basement. A discriminant analysis provides a "productivity equation" that predicts 90% of the results during drilling: once the borehole has passed through about ten meters of unweathered basement, it defines a maximum investigation depth depending on the intrinsic characteristics of each site. It is most useful in the least productive zones, where unproductive boreholes are systematically overdrilled to no purpose.
This yields a prospecting strategy combining the number and depth of boreholes. It limits the depth of boreholes sited at unproductive locations and transfers the drilled length thus saved to more promising sites.
EN :
The aim of this study is to define a new strategy for groundwater prospecting in Sahelian basement aquifers. At present, the number and depth of boreholes are fixed a priori in the project document: these parameters are the same for all villages, regardless of their environmental context. In practice, during drilling campaigns, unproductive boreholes are systematically overdrilled to no purpose, which inflates the cumulative drilled length of the project (Table 1). This is particularly important in granitic basement areas under the low-rainfall Sahelian climate: water is difficult to find because of low success rates, and the driller wants to make sure that no groundwater indication lies a few meters below the fateful 60 m depth.
An illustration of this methodology is proposed for the Guéra, Ouaddaï and Biltine provinces of eastern Chad (Figure 1). This 150 000 km2 area is situated on the edge of the Chadian basin between 10° and 15° north latitude, at elevations of 400-700 m, with annual rainfall between 200 and 600 mm. The geology consists of Precambrian granitoids. Tectonic features are well developed, with many fractures, faults and photolineations at scales from metric to multi-kilometric.
In Chad, weak recharge means that the weathered-rock reservoir is unsaturated and the aquifer is the fractured granitic basement. The overall success rate of 500 boreholes is thus only 42%. The unequal distribution of fractures leads to adjacent productive and barren areas with significantly different success rates. A statistical analysis of photolineations shows a fractal distribution with a dimension of about 1.57, close to the dimension of 1.59 obtained in fractal fracture models (Figure 2). Fracturing is a key component of hydrogeological knowledge in basement areas, and its variation between villages can explain their different productivity potentials: we must therefore assign an "investigation potential" depending on the characteristics of each area. The proposed prospecting strategy determines the number of boreholes to drill and their individual depths.
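The fractal dimension quoted above is typically estimated by box counting on the photolineation map. Below is a minimal, self-contained sketch of that procedure, applied to a synthetic point set rather than the Chadian data.

```python
import math

# Box-counting estimate of fractal dimension: count the boxes of side s
# needed to cover the point set, then fit the slope of log N vs log(1/s).
# The point set below is synthetic, not the Chadian photolineation data.

def box_count_dimension(points, box_sizes):
    """Least-squares slope of log N(s) against log(1/s)."""
    logs = []
    for s in box_sizes:
        boxes = {(int(x / s), int(y / s)) for x, y in points}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den

# Sanity check: points along a straight segment should give a dimension near 1;
# a real fracture map would give an intermediate value such as the 1.57 reported.
line = [(i / 1000.0, i / 1000.0) for i in range(1000)]
dim = box_count_dimension(line, [0.5, 0.25, 0.125, 0.0625, 0.03125])
```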
At the kilometer scale, the total number of boreholes can be adjusted according to climatic, topographical and geological characteristics (Table 2). We show that only four parameters explain a range of success rates from 0 to 79% between villages (Table 3): altitude, average rainfall, petrography and fracturing intensity (measured in situ). We can thus define the investigation potential, which is the predicted depth divided by the theoretical success rate of the zone containing the village. It is interesting to note that the success rate of the high-productivity class is similar to the average obtained in rainier basement countries of West Africa: for example, 79% in the south-west of Burkina Faso and 73% in Togo.
At the local scale, a principal components analysis of 12 drilling parameters was performed. It shows that the appearance of groundwater is mainly correlated with parameters describing the unweathered fractured rocks (Figure 3). A discriminant analysis was then performed on four of these parameters: thickness of unweathered basement drilled, depth of the first water arrival, number of water arrivals and hammer velocity in the unweathered basement. This yields a "productivity equation" which predicts 90% of borehole outcomes (Table 4). From this equation, a maximum investigation depth can be defined from the geological characteristics of each borehole site.
The last section presents the complete basement groundwater prospecting strategy and two examples of its application in Chad. For an average aquifer depth of 60 m, the investigation potential of each village depends on its productivity class: it varies theoretically from 10 boreholes (i.e. 600 m) in low-productivity areas to 1.3 boreholes (i.e. 76 m) in high-productivity zones (Table 5). This potential must then be distributed among the different sites according to the results of their productivity equations.
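As a small worked illustration of the arithmetic above, using only the figures quoted in this abstract, the investigation potential of a village follows directly from the average aquifer depth and the success rate of its productivity class:

```python
# Worked sketch of the "investigation potential": expected drilled length
# per village = average aquifer depth / theoretical success rate.
# Figures are those quoted in the abstract (60 m depth, 79% vs 10% success).

def investigation_potential(avg_depth_m: float, success_rate: float):
    """Return (expected number of boreholes, total drilled length in m)."""
    n_boreholes = 1.0 / success_rate
    return n_boreholes, avg_depth_m * n_boreholes

# High-productivity class (79% success) vs a low-productivity class (10%)
n_hi, m_hi = investigation_potential(60.0, 0.79)   # ~1.3 boreholes, ~76 m
n_lo, m_lo = investigation_potential(60.0, 0.10)   # 10 boreholes, 600 m
```

This reproduces the 76 m potential used for the village of Eroua below.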
The village of Getgéré is situated in a particularly unproductive zone (see Table 3), where about ten boreholes are statistically needed to obtain one positive result: its investigation potential is taken to be 300 m. Four negative boreholes were drilled to depths of 62 to 75 m, for a total of 261 m. In fact, the productivity equation showed that all these sites were unproductive by drilled depths of 28 to 40 m (Table 6): the same result could have been obtained with only 130 m drilled, so 131 m were consumed needlessly. With this excess drilled length, eight additional shallower boreholes could have been drilled, increasing the probability of obtaining a productive well.
The village of Eroua is situated in a productive area where the success rate is 79%: its investigation potential is 60 / 0.79=76 m. The first borehole was negative at a depth of 74 m, but the productivity equation already indicated this result after only 38 m of drilling. At the second site, a positive borehole was obtained at a depth of 41 m, where the equation predicted 43 m. In all, the cumulative drilled length was 115 m; the investigation strategy would have allowed 34 m to be transferred to another, more promising site.