Revue des sciences de l'eau
Journal of Water Science
Volume 11, numéro 4, 1998
Sommaire (9 articles)
-
Genèse des débits dans les petits bassins versants ruraux en milieu tempéré: 1 - Processus et facteurs
B. Ambroise
p. 471–496
RésuméFR :
La première des deux parties de cette synthèse bibliographique sur la genèse des débits montre que la complexité et la diversité des organisations et fonctionnements hydrologiques constatées dans les petits bassins versants ruraux peuvent s'analyser et s'interpréter à l'aide de "clés de lecture" simples, issues d'une approche systémique et dynamique, et utiles aussi pour les modéliser (cf. Partie 2). Elle présente les différents processus tant superficiels que souterrains pouvant contribuer à cette genèse, ainsi que les facteurs du milieu qui les contrôlent: forçages atmosphériques aux limites, conditions hydriques et hydrologiques initiales, propriétés hydrodynamiques des milieux et interfaces traversés, topographie et morphométrie en 3-D du bassin. Elle rappelle ou introduit plusieurs concepts utiles pour caractériser dans chaque cas les combinaisons de processus et facteurs en jeu et leurs effets hydrologiques: seuils fonctionnels et grandeurs caractéristiques contrôlant la forme et la non-linéarité de la réponse du bassin, concepts de "zone ou période active variable" pour un processus donné et de "zone ou période contributive variable" pour un flux aux limites donné décrivant son organisation interne. Elle discute les avantages et limites des différentes méthodes (graphiques, isotopiques, géochimiques) de décomposition des hydrogrammes de crue ainsi que leur complémentarité dans l'étude du système bassin versant.
EN :
This 2-part review on streamflow generation presents the state of the art in both field studies and modelling of the hydrologic behavior of rural catchments. It focuses mainly on temperate environments and water flows within small catchments, but many points have a more general significance.
The first part presents the main results of hillslope hydrology since the 1960s, mainly obtained on small research catchments. It appears that floods can be generated by a large range of both surface and subsurface processes, and not only by infiltration-excess surface runoff, as is still assumed by some hydrologists and modellers. In each case, the processes involved and their combinations are very variable in time and space, depending on the variable combinations of several environmental factors: precipitation and energy inputs imposed by atmospheric forcings at the upper boundary, variations in initial hydric (soil) and hydrologic (catchment) conditions which cause nonlinearities in catchment responses, water storage and resistance-to-transfer properties of the various compartments (vegetation, surface, soil, subsoil) and their interfaces, and catchment 3-D topography and morphometry controlling compartment geometry and gravity forces.
The non uniform and non random distributions of these processes and factors determine the catchment functional, spatial and temporal organization: (1) at each point, process activation or deactivation results from a balance between water supply from above and local water storage or transfer capacities depending on functional thresholds related to these water properties; (2) spatio-temporal variations of factors lead to some recurrence of conditions favorable or unfavorable to each process in some areas of variable extent and some periods of variable duration: this leads to the concepts of "variable active area and/or period" (for a given process); (3) these active areas and periods contribute to outfluxes only if they are hydraulically connected to the catchment boundaries: this leads to the complementary concepts of "variable contributing area and/or period" (for a given global outflux). Several hydrograph separation methods are used to estimate various contributions to streamflow which are difficult to measure in situ. They all have severe limitations: graphical methods are rather arbitrary, tracer methods are based on simplifying assumptions (end-member homogeneity, conservative tracer behaviour, ...) that are not very realistic. Moreover, considering the same streamflow from different points of view, they give results that are not comparable but rather complementary: velocity criterion (rapid, delayed, slow flows) for graphical methods, time origin criterion ("pre-event"/"event" water) for water-related isotope tracers, space origin criterion ("source" reservoirs) for other physico-chemical tracers. Lastly, none of them directly identifies the processes involved. Nevertheless, they are very useful in showing that streamflow is a complex mixture of various water types, with high proportions of subsurface and pre-event water in many cases - contrary to classical hydrologic interpretations.
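To make the tracer-based separations mentioned above concrete, the following minimal sketch (not the authors' code; the tracer values are hypothetical) applies the classical two-component mixing equations used to split streamflow into "pre-event" and "event" water from a conservative tracer such as δ18O:

```python
# Two-component hydrograph separation from a conservative tracer (e.g. delta-18O).
# Mass balance: Q_t = Q_p + Q_e and Q_t*C_t = Q_p*C_p + Q_e*C_e,
# hence Q_p/Q_t = (C_t - C_e) / (C_p - C_e). Values below are hypothetical.

def pre_event_fraction(c_stream, c_event, c_pre_event):
    """Fraction of streamflow made of pre-event (subsurface) water."""
    if c_pre_event == c_event:
        raise ValueError("end-members must differ for the separation to be defined")
    return (c_stream - c_event) / (c_pre_event - c_event)

if __name__ == "__main__":
    c_rain, c_groundwater, c_stream = -12.0, -8.0, -9.0   # permil vs V-SMOW (hypothetical)
    q_total = 250.0                                       # l/s at the storm peak (hypothetical)
    f_pre = pre_event_fraction(c_stream, c_rain, c_groundwater)
    print(f"pre-event fraction = {f_pre:.2f}  ->  {f_pre * q_total:.0f} l/s of {q_total:.0f} l/s")
```

With these illustrative numbers, three quarters of the peak flow would be pre-event water, which is the kind of result the review cites against purely Hortonian interpretations.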
Thus, the complexity and diversity of hydrologic patterns and behaviors observed in small rural catchments, and especially the continuum of streamflow generation situations (from pure surface to pure subsurface contributions), can be analysed and characterized using these simple concepts and methods provided by a dynamic systems approach. They are therefore useful for catchment modelling also (see Part 2).
-
Élimination des cations métalliques divalents : complexation par l'alginate de sodium et ultrafiltration
S. Benbrahim, S. Taha, J. Cabon et G. Dorange
p. 497–516
RésuméFR :
Depuis quelques années, la pollution par les métaux lourds est devenue un problème important pour la protection de l'environnement et de nombreuses méthodes ont été développées pour éliminer les métaux toxiques présents dans l'eau.
Parmi les différents procédés utilisés, la complexation-ultrafiltration est bien connue et de nombreuses études sur ce sujet sont décrites dans la littérature. Cependant, le choix de nouveaux macroligands hydrosolubles demeure important pour développer cette technologie.
L'un des objectifs de ce travail était de montrer que dans ce procédé un biopolymère peut remplacer un macroligand de synthèse. Les expériences ont été menées avec de l'alginate de sodium, polysaccharide extrait des algues brunes, et porteur de groupements carboxyliques et hydroxyles capables de complexer les cations.
Notre étude se divise en trois parties. Après avoir décrit, dans la première, le matériau et les méthodes utilisées, nous étudions dans la seconde les conditions de l'ultrafiltration (seuil de coupure, pression appliquée, pH, concentration), avant de discuter dans la troisième les résultats obtenus dans le traitement de solutions contenant Cd2+, Cu2+, Mn2+ et Pb2+.
EN :
In recent years, pollution by heavy metals has become one of the main problems in environmental protection. A number of methods have been developed to remove toxic metals from water. Among the various processes used, complexation-ultrafiltration is well known and numerous studies on this subject are described in the literature. However, the choice of new water-soluble macroligands remains important for developing this technology.
One aim of the present work was to show that biopolymers can replace synthetic macroligands in this process. The experiments were conducted with sodium alginate, a polysaccharide extracted from brown seaweeds and containing carboxylic and hydroxyl groups able to complex heavy cations. Filtration experiments were performed with a frontal system equipped with a polysulfone membrane with a 20,000 Dalton cut-off. The solutions studied were prepared by diluting in demineralized water either sodium alginate or "Titrisol Merck" for cations. Before filtration the two solutions were mixed and stirred for 20 min. The pH of the feed solutions was adjusted with HCl (or HNO3 for Pb) or NaOH and determined accurately using a calibrated probe.
The molecular weight of sodium alginate was determined by liquid chromatography and the viscosity was measured with either a viscosimeter for low values or a capillary method for concentrated solutions. Cation concentrations were measured by atomic absorption spectrophotometry. Both permeate and retentate macroligand concentrations were estimated from measurements of total organic carbon (TOC). Following each experiment, chemical cleaning was performed by filtration of HCl, NaOH and water. This procedure was followed by demineralized water filtration, to ensure that the initial permeability was restored.
In the first part of the work, the ultrafiltration of sodium alginate solutions was studied for different concentrations and various pressures. Experimental results for macroligand retention, deduced from the TOC values, show total rejection. All the permeate flux versus time curves present the same profile, which indicates significant concentration polarization. Based on these results, we chose a ligand concentration of 5×10-2 g L-1 and an applied pressure of one bar.
In the second part of the study, the retention of cations (Cd2+, Cu2+, Mn2+ and Pb2+) was investigated. The observed results show that the removal rates are close to 100%. These values depend both on the total concentration of cation and on the pH value. The retention of cations is shown to depend strongly on pH: a variation of pH between 3 and 5 leads to changes in retention efficiency from 0 to 100%. This can be explained by the dissociation of alginic acid as a function of pH. For lower pH values the macroligand is in a molecular form and the metallic cation remains free; for higher values metal complexation is possible, increasing the rejection. If coordination number, rejection rate and pH are known, the various association constants can be determined using a graphical method. It can be seen from the results that the stability of the complexes formed decreases in the sequence Pb>Cu>Mn>Cd.
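As a simple illustration of how such retention figures are usually derived (a sketch under assumed definitions, not the authors' protocol; the concentrations are invented), the observed rejection of a cation follows directly from the feed and permeate concentrations measured by atomic absorption:

```python
# Observed rejection (retention) of a metal cation in complexation-ultrafiltration.
# Sketch only: the feed/permeate concentration pairs below are hypothetical.

def rejection_percent(c_feed, c_permeate):
    """Observed rejection R = 1 - Cp/Cf, expressed in percent."""
    return 100.0 * (1.0 - c_permeate / c_feed)

if __name__ == "__main__":
    samples = {                       # (feed, permeate) in mg/l, hypothetical
        "Pb2+ at pH 5": (10.0, 0.05),
        "Cd2+ at pH 3": (10.0, 9.60),
    }
    for label, (cf, cp) in samples.items():
        print(f"{label}: R = {rejection_percent(cf, cp):.1f} %")
```

The contrast between the two hypothetical samples mirrors the strong pH dependence of retention described above.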
In order to investigate the retention of these cations in fresh water, the influence of calcium hardness was studied. The results indicate that cation removal decreases when the calcium concentration increases. This observation is an important restriction for fresh water treatment but does not affect the removal of metals from a solution or an industrial waste containing cations.
-
Étude expérimentale et modélisation de la désinfection par le chlore des eaux usées épurées
H. Shayeb, T. Riabit, M. Roustan et A. Hassan
p. 517–536
RésuméFR :
Pour étudier la désinfection d'une eau usée épurée au stade secondaire, traitée à l'hypochlorite de sodium, des essais en réacteur fermé ont été effectués en utilisant des doses variant entre 1 et 10 mg de chlore par litre. Les résultats obtenus montrent que la cinétique de désinfection est loin d'être uniforme. L'utilisation du modèle de Chick et Watson n'est en effet possible que si on l'adapte pour tenir compte de la modification de la vitesse de désinfection au cours du processus. Le modèle de Collins et Selleck permet de rendre compte de façon satisfaisante de l'évolution de la vitesse d'élimination des germes au cours du temps. La faible valeur du paramètre τ trouvée (0.26 min.mg/l pour les coliformes totaux et 0.58 min.mg/l pour les coliformes fécaux) semble cependant démontrer que la période de latence est relativement peu importante, surtout lorsqu'on utilise des doses de chlore élevées. Il s'avère d'autre part que la demande en chlore de ce type d'eau est très importante. La concentration en chlore résiduel dans le réacteur décroît très rapidement pour atteindre environ 10 % de la dose de chlore injectée et cela quelle que soit la dose utilisée (de 1 à 10 mg/l). Le dimensionnement des réacteurs de désinfection fonctionnant en continu nécessite de prendre en compte le comportement hydrodynamique de l'eau dans le réacteur. Sachant qu'un abattement de 3 U-Log est nécessaire, dans le cas de la réutilisation de l'eau pour l'irrigation, un modèle intégrant l'expression de la cinétique de désinfection et l'hydrodynamique du contacteur a été proposé. Les résultats mettent en évidence l'intérêt de concevoir des réacteurs se rapprochant le plus possible de l'écoulement piston.
EN :
Treated secondary wastewater is considered an important additional water resource in a country with a semiarid climate such as Tunisia. From a quality standpoint, the use of water for irrigation is governed by chemical parameters that affect plants, soil conditions and the underlying groundwater. Treated urban wastewater can be used for irrigation without major risks (see table 1). However, the use of such water can present a risk of contamination of edible crops, pasture lands, and feed crops by direct contact with disease agents carried in reclaimed water or in aerosols from spray irrigation. These sanitary risks can be considerably reduced by practising efficient disinfection. Chlorination is one of the simplest and least expensive disinfection processes. The objectives of this work are the study of the disinfection kinetics and of the rate at which the chlorine demand of the water is exerted when sodium hypochlorite is used to treat a secondary wastewater. For this purpose, batch reactor tests were carried out. After applying a chlorine dose (between 1 and 10 mg/l), we determined the evolution over time of the residual chlorine concentration and of the inactivation rate of total and fecal coliforms.
Chlorine demand
To describe chlorine decay in a complex matrix such as wastewater, Haas and Karra (1984) developed the following equation:
C = C0 [X·e^(−k1·t) + (1 − X)·e^(−k2·t)]
where:
C : concentration of residual chlorine, mg/l
C0 : dose of chlorine, mg/l
X : empirical constant
t : contact time, min
k1 and k2 : rate constants, min-1
In the case of wastewater chlorination, the chlorine concentration in the batch reactor decreases rapidly at the beginning of the reaction and only very slowly once the initial chlorine demand has been satisfied. This initial chlorine demand is very large: it represents nearly 90% of the dose injected into the reactor. The chlorine concentration decay can be described by the Haas and Karra equation with X=0.9, k1=3 min-1 and k2=0.001 min-1.
After a few minutes of contact time, we note (see figure 2) that the chlorine concentration decay can be approximated by a first-order equation:
t ≥ 2 min ⇒ C ≅ 0.1·C0·e^(−0.001·t)
The concentration of residual chlorine becomes practically constant in the reactor, once initial chlorine demand has been satisfied, with C ≅ 0.1 C0.
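The fitted decay law can be evaluated directly; the short sketch below (an illustration, not taken from the paper) uses the reported parameters X = 0.9, k1 = 3 min-1 and k2 = 0.001 min-1 and shows the plateau near 10 % of the applied dose:

```python
import math

def residual_chlorine(c0, t_min, x=0.9, k1=3.0, k2=0.001):
    """Haas and Karra (1984) two-exponential decay:
    C = C0*[X*exp(-k1*t) + (1 - X)*exp(-k2*t)], with X the fast-reacting fraction."""
    return c0 * (x * math.exp(-k1 * t_min) + (1.0 - x) * math.exp(-k2 * t_min))

if __name__ == "__main__":
    dose = 5.0  # mg/l, an example within the 1-10 mg/l range studied
    for t in (0, 1, 2, 5, 30, 60):
        print(f"t = {t:2d} min  ->  C = {residual_chlorine(dose, t):.2f} mg/l")
```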
Disinfection kinetics
Using a pseudo first-order Chick and Watson model reveals that the rate of inactivation of coliform bacteria is not uniform. We can employ this model to fit the experimental data only if we subdivide the process into two stages characterised by different kinetics. For example, the logarithm of the fecal coliform survival ratio can be expressed by the relationships:
for C∙t ≤ 2.85 min∙mg/L : Ln(N/N0) = −1.65·C∙t
for C∙t ≥ 2.85 min∙mg/L : Ln(N/N0) = −3.89 − 0.29·C∙t
Collins and Selleck (1972) developed a general kinetic expression for the effect of combined chlorine residual on both total and fecal coliforms. Combining the work of Chick and Gard (Gard, 1957), they developed the following formula describing the survival of these bacteria:
for C∙t < τ : (N/N0) = 1
for C∙t > τ : (N/N0) = (τ/(C∙t))^n
where C : the combined chlorine residual, mg/l
t : contact time, minute
τ : an environmental coefficient or induction time (lag), min∙mg/l
n : constant
In this formula, the concentration of chlorine is assumed to remain constant. We can apply this model if we consider that C is the chlorine concentration after the immediate wastewater demand has been satisfied.
Using Collins and Selleck disinfection model to plot the survival rate of total and fecal coliform in chlorinated secondary effluent, we have obtained the coefficients below:
for total coliforms : n = 1.94, τ = 0.26 min∙mg/l
for fecal coliforms : n = 3.1, τ = 0.58 min∙mg/l
The constant τ represents the time required for the disinfectant to diffuse through the cell wall and begin its disinfecting action. We can see that this initial lag time is very short: the disinfectant rapidly becomes effective at inactivating coliform bacteria.
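Using the fitted coefficients, the batch survival predicted by the Collins and Selleck model, and the C∙t required for a target log-inactivation, can be computed as in the following sketch (illustrative only, with the coefficients reported above):

```python
# Collins and Selleck batch model with the coefficients reported above.

def survival_ratio(c_t, tau, n):
    """N/N0 = 1 for C*t < tau, otherwise (tau/(C*t))**n."""
    return 1.0 if c_t < tau else (tau / c_t) ** n

def ct_for_log_removal(log10_removal, tau, n):
    """C*t (min*mg/l) giving the requested log10 inactivation."""
    return tau * 10.0 ** (log10_removal / n)

if __name__ == "__main__":
    fecal = {"tau": 0.58, "n": 3.1}
    total = {"tau": 0.26, "n": 1.94}
    print("C*t for 3-log fecal coliform inactivation:",
          round(ct_for_log_removal(3.0, **fecal), 1), "min*mg/l")
    print("C*t for 3-log total coliform inactivation:",
          round(ct_for_log_removal(3.0, **total), 1), "min*mg/l")
    print("fecal coliform survival at C*t = 6 min*mg/l:",
          f"{survival_ratio(6.0, **fecal):.1e}")
```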
Design of disinfectant contact facilities
In order to estimate the influence of the hydraulic efficiency of chlorine contact chambers on disinfection performance, we compared the efficiency of two ideal reactors: the completely stirred tank reactor and the plug flow reactor. The kinetic equations presented above were established in a batch reactor. To obtain the mean conversion in the entire contactor, the segregated flow model has to be applied. Using the concept of a segregated flow system, the survival ratio is the sum of the batch responses of all small fluid aggregates, or:
(N/N0) = ∫₀^∞ (N/N0)batch E(θ) dθ
θ : normalized contact time (contact time/hydraulic residence time)
E(θ) : normalized frequency distribution of ages
In a completely mixed regime, incoming fluid particles are instantaneously dispersed throughout the tank, so that the properties of the effluent leaving the unit are identical to those within the unit; residence times then follow an exponential distribution, E(θ) = e^(−θ). Using the Collins and Selleck model, the survival ratio corresponding to a hydraulic residence time Ts and a residual chlorine concentration C is expressed by the equation:
(N/N0) = ∫₀^∞ (N/N0)batch e^(−θ) dθ = ∫₀^(τ/(C∙Ts)) e^(−θ) dθ + ∫_(τ/(C∙Ts))^∞ (τ/(C∙Ts∙θ))^n e^(−θ) dθ
The survival ratios for hydraulic residence times between 5 min and 60 min and residual chlorine concentrations between 0.2 mg/l and 1 mg/l are reported in tables 3 and 4.
In a plug flow regime, all particles entering the basin have equal velocities, travel on parallel flow paths, and remain in the unit for an identical period known as the hydraulic residence time. The performance of such a reactor is identical to that of a batch reactor. In tables 5 and 6 we report the survival ratios given by a plug flow reactor for residence times up to 60 min and residual chlorine concentrations between 0.2 mg/l and 1 mg/l.
Knowing that secondary wastewater reuse requires a 3 U-Log fecal coliform inactivation, the importance of reactor design for its performance is clear. Indeed, to reach a 3 U-Log inactivation, it is sufficient (see table 6) to use a chlorine dose of 2 mg/l and a hydraulic residence time of 30 minutes in a plug flow reactor. The same dose and residence time give only a 1 U-Log inactivation (see table 4) when the reactor is completely mixed.
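The comparison between the two ideal reactors can be reproduced numerically from the segregated-flow integral above; the sketch below (an illustration, not the authors' computation) uses the fecal coliform coefficients with C = 0.2 mg/l and Ts = 30 min:

```python
import math
from scipy.integrate import quad

TAU, N = 0.58, 3.1                     # Collins-Selleck coefficients, fecal coliforms

def batch_survival(c_t):
    return 1.0 if c_t < TAU else (TAU / c_t) ** N

def plug_flow_survival(c_res, ts_min):
    # A plug flow reactor behaves like a batch reactor of the same residence time
    return batch_survival(c_res * ts_min)

def cstr_survival(c_res, ts_min):
    # Segregated flow with exponential residence time distribution E(theta) = exp(-theta)
    integrand = lambda theta: batch_survival(c_res * ts_min * theta) * math.exp(-theta)
    value, _ = quad(integrand, 0.0, 50.0)   # 50 is effectively infinity here
    return value

if __name__ == "__main__":
    c_res, ts = 0.2, 30.0                   # residual chlorine (mg/l), residence time (min)
    for name, s in (("plug flow", plug_flow_survival(c_res, ts)),
                    ("CSTR", cstr_survival(c_res, ts))):
        print(f"{name:9s}: N/N0 = {s:.2e}  ({-math.log10(s):.1f} log removal)")
```

With these numbers the plug flow reactor reaches roughly 3 logs of inactivation while the completely mixed reactor stays near 1 log, which is the contrast drawn from tables 4 and 6.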
-
Analyse du modèle CHIMIOTOX du point de vue de ses implications toxicologiques [Article bilingue]
F. Denizeau et A. C. Ricard
p. 537–554
RésuméFR :
Le modèle CHIMIOTOX a été mis au point comme outil de gestion dans le but de réduire de façon importante la quantité de substances toxiques déversées dans le fleuve Saint-Laurent. Ce modèle effectue un calcul dont le résultat est une valeur numérique qui se veut représentative de la charge toxique présente dans un effluent industriel. Pour ce faire, le modèle attribue à chaque substance toxique une constante de toxicité, le facteur de pondération toxique (Ftox), dont la valeur est déterminée à partir des critères de qualité de l'eau du ministère de l'Environnement du Québec. Le Ftox sert à calculer l'unité CHIMIOTOX (UC) qui est le produit de Ftox par la charge journalière du polluant (kg/jour). La sommation des UC de toutes les substances ciblées donne l'indice CHIMIOTOX (IC) qui doit représenter le potentiel toxique de l'effluent. Dans la présente étude, le modèle CHIMIOTOX a été analysé du point de vue de ses implications au plan toxicologique. Les résultats de cette analyse montrent les faits saillants suivants. En premier lieu, le calcul du potentiel toxique théorique se fait selon l'équation d'une droite de pente Ftox. Ceci implique que le potentiel toxique calculé est directement proportionnel à la quantité de la substance, et cela, quel que soit le niveau supposé d'exposition. Cette démarche n'est pas compatible avec le concept fondamental de la dose-réponse, basé sur l'observation expérimentale. À cette étape du modèle, l'estimation du potentiel toxique théorique risque de s'écarter considérablement de la réalité. En second lieu, l'UC est calculé en utilisant la charge journalière moyenne de l'effluent à partir de mesures effectuées sur trois jours. Le modèle fait abstraction des variations ponctuelles dans le temps, variations qui peuvent influencer de manière significative le profil d'exposition des organismes, et par conséquent, la toxicité. En troisième lieu, l'IC, qui est la sommation des UC, ne tient pas compte des interactions toxiques pouvant survenir dans le cas d'un mélange de substances, ni de la bioaccumulation dans la chaîne trophique. Une comparaison du CHIMIOTOX avec le modèle des TEF (Toxic Equivalency Factor) développé pour les dibenzo-p-dioxines et les dibenzofurannes polychlorés, a été effectuée afin de souligner la difficulté d'obtenir des valeurs théoriques prédictives de la toxicité de mélanges complexes, même lorsque leurs composants possèdent un mécanisme d'action commun, ce qui n'est pas le cas pour la plupart des substances considérées par le CHIMIOTOX. Au total, le modèle CHIMIOTOX génère une incertitude qui s'accroît à chaque étape du calcul. Ceci l'empêche d'avoir une véritable valeur quantitative et limite considérablement son utilité dans l'évaluation du risque environnemental associé aux substances toxiques.
EN :
CHIMIOTOX is a model designed to provide a numerical indicator of toxic discharges for the purpose of comparing and integrating sampling results. CHIMIOTOX was also intended to be used as a tool in managing toxic substances. In this paper, the CHIMIOTOX model has been analysed from the standpoint of its toxicological implications. The analysis shows that the model's numerical indicator does not integrate principles such as the dose-response relationship, the level of exposure of the target organisms in the receiving waters, the transformation of toxic substances in the environment and their bioaccumulation, or the possible interactions between the different components of a complex mixture of toxic substances. The CHIMIOTOX model has been compared to the toxic equivalency factor (TEF) approach developed for polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans to illustrate the difficulties in obtaining reliable predictive values for the toxicity of mixtures even when their components share a similar mechanism of action, which is not the case for most substances subjected to CHIMIOTOX. Because CHIMIOTOX generates a high degree of uncertainty that increases at each step of the calculation, and because this uncertainty is not taken into account, the usefulness of the model from the point of view of ecotoxicological risk assessment and management appears significantly limited.
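For readers unfamiliar with the model, the calculation under discussion is a simple weighted sum; the sketch below (the Ftox values and loads are hypothetical, not an official table) shows how the CHIMIOTOX unit (UC) and index (IC) are obtained:

```python
# CHIMIOTOX calculation as described in the abstract:
#   UC = Ftox * daily load (kg/day); IC = sum of UC over the targeted substances.
# The Ftox factors and loads below are hypothetical, for illustration only.

def chimiotox(substances):
    """substances: dict name -> (Ftox, daily_load_kg_per_day); returns (UC dict, IC)."""
    uc = {name: ftox * load for name, (ftox, load) in substances.items()}
    return uc, sum(uc.values())

if __name__ == "__main__":
    effluent = {
        "cadmium":     (1000.0, 0.02),
        "phenol":      (  10.0, 1.50),
        "naphthalene": ( 100.0, 0.30),
    }
    uc, ic = chimiotox(effluent)
    for name, value in uc.items():
        print(f"UC({name}) = {value:.1f}")
    print(f"CHIMIOTOX index IC = {ic:.1f}")
```

The linearity of UC in the daily load is precisely the point criticized above: the computed "toxic potential" grows proportionally with the quantity, regardless of any dose-response relationship.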
-
Élimination du phénol par deux plantes aquatiques : Juncus fontanesii (Gay) et Lemna minor L.
M. A. Oueslati, M. Haddad et G. Blake
p. 555–568
RésuméFR :
L'élimination du phénol et de ses dérivés, substances organiques toxiques, fait appel à différents processus physico-chimiques ou biologiques. Certaines plantes aquatiques ont la capacité de déplacer des produits chimiques en les métabolisant, en les évaporant ou en les dégradant. Il faut, toutefois, rester à des concentrations inférieures aux seuils de toxicité des espèces employées.
Dans le présent travail, deux plantes aquatiques: le Jonc de Desfontaines (Juncus fontanesii) de la famille des Joncacées et la Lentille d'eau (Lemna minor) de la famille des Lemnacées, ont été testées pour éliminer le phénol. Le travail a été effectué sans addition d'éléments nutritifs ni acclimatation préalable, pour des concentrations variant de 8 à 48 mg/l et pour deux densités surfaciques de la biomasse végétale fraîche : 2,8 et 5,6 kg/m2.
Les deux espèces se sont révélées aptes à éliminer totalement le phénol avec des cinétiques différentes. Un phénomène de relargage, important dans le cas de l'emploi de J. fontanesii, a pu être observé. Une comparaison de ce type d'élimination à celui dû aux micro-organismes nous a permis, par utilisation des boues activées, d'aboutir à l'ordre de performance suivant : J. fontanesii > L. minor (faibles densités) > micro-organismes avec barbottage d'air > micro-organismes sous des conditions atmosphériques > témoins (sans plantes) > L. minor (fortes densités) > micro-organismes sous des conditions anaérobies.
EN :
Phenols are considered toxic organic compounds. They can be treated by different physico-chemical or biological processes. These products can be oxidized by chemicals such as H2O2, TiO2, O3, etc. The performance of the process depends on pH, temperature and the phenol/oxidant ratio. Alternatively, they can be transformed biologically by enzymes, fungi, yeasts or plants. Considerable work has already been done on the uptake of phenol by aquatic plants.
In our study two aquatic plants, Juncus fontanesii, a rooted species from the Juncaceae family, and Lemna minor, a floating species from the Lemnaceae family, were selected to study their ability to remove phenol from static phenolic solutions. The initial concentration of phenol varied from 8 to 48 mg/l. The density of biomass (wet weight) ranged from 2.8 to 5.6 kg/m2. Experiments were carried out without acclimation and without addition of nutritive elements. Controls (without plants) were prepared with the same concentrations. Under these conditions, the results of quantitative analyses show that J. fontanesii is able to remove phenol more rapidly than L. minor and can release a fraction of it to the medium, particularly in the first ten hours of contact.
It has been observed that phenol uptake is sensitive to the density of biomass and to the initial concentration. In order to examine more closely the effect of these variables, we carried out experiments in which the initial concentration was kept constant (8 mg/l) and the biomass density varied. When the density of biomass increases, the kinetics of phenol uptake by J. fontanesii also increase; however, they decrease in the presence of L. minor. In fact, at high densities, L. minor fully covers the surface of the water and causes a screen effect, such that diffusion of atmospheric oxygen into the medium is limited. In addition, L. minor has a short root system, so the amount of oxygen that enters the solution is negligible. Elimination of phenol by L. minor is rapid when the density of biomass ranges from 0.7 to 1.4 kg/m2. For both plants, we noticed the existence of a maximum degradation time and an optimal density beyond which there is no improvement in elimination.
Phenol can also be degraded by micro-organisms. In order to elucidate this pathway, an investigation was undertaken using activated sludge under atmospheric conditions, under anaerobic conditions, and with intermittent air bubbling.
The comparison of obtained results shows that the rate and kinetics of the elimination decrease in the following order: J. fontanesii > L. minor (low densities) > micro-organisms with air bubbling > micro-organisms under atmospheric conditions > controls (without plants) > L. minor (high densities) > micro-organisms under anaerobic conditions.
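To quantify such a ranking, an apparent first-order removal rate constant can be fitted to each concentration-time series; the sketch below uses made-up data (not the authors' measurements) and assumes first-order disappearance only as a convenient approximation:

```python
import math

def first_order_rate(times_h, conc_mg_l):
    """Apparent removal constant k (h^-1) from the least-squares slope of ln(C/C0) vs time."""
    c0 = conc_mg_l[0]
    ys = [math.log(c / c0) for c in conc_mg_l]
    n = len(times_h)
    sx, sy = sum(times_h), sum(ys)
    sxx = sum(t * t for t in times_h)
    sxy = sum(t * y for t, y in zip(times_h, ys))
    return -(n * sxy - sx * sy) / (n * sxx - sx * sx)

if __name__ == "__main__":
    t = [0, 24, 48, 72]                               # hours (hypothetical)
    series = {                                        # phenol, mg/l (hypothetical)
        "J. fontanesii":          [8.0, 2.4, 0.7, 0.2],
        "L. minor (low density)": [8.0, 4.0, 2.0, 1.0],
        "control (no plants)":    [8.0, 7.0, 6.1, 5.3],
    }
    rates = {name: first_order_rate(t, c) for name, c in series.items()}
    for name, k in sorted(rates.items(), key=lambda kv: -kv[1]):
        print(f"{name}: k = {k:.3f} h^-1")
```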
-
Gestion optimale d'un réservoir en avenir déterminé
H. J. Morel-Seytoux
p. 569–598
RésuméFR :
Massé, dans ses deux volumes (1946), discute le problème de la gestion optimale des lâchures dans le cas d'un seul réservoir quand le bénéfice est dérivé de la production d'énergie hydroélectrique. Massé obtint ses résultats à la fois par un raisonnement économique et par une généralisation du Calcul des Variations. Sa méthode lui permit de fournir la preuve rigoureuse de la méthode graphique de Varlet (1923), dite du "fil tendu". Dans cet article on généralise la procédure de Massé au cas où (1) le bénéfice est réalisé bien en aval du point de lâchure, et (2) il y a plusieurs "points-cibles" (points où un certain objectif doit être assuré). Massé avait trouvé que la gestion optimale est celle qui maintient la valeur marginale du bénéfice constante dans le temps, pourvu que la gestion soit en régime libre, c'est-à-dire tant que le réservoir ne fonctionne ni à plein ni à vide. Par contre si le réservoir fonctionne par exemple à plein, Massé montra que la stratégie qui consiste à garder le réservoir plein ne peut être optimale que si la valeur marginale du bénéfice croît constamment avec le temps durant la période où le réservoir reste plein. On montre de manière rigoureuse dans le cas général que pour une gestion optimale ce qui doit rester constant c'est la valeur marginale future du bénéfice. Dans un article ultérieur on fournira la généralisation pour plusieurs réservoirs.
EN :
The problem of reservoir operation is to choose, on a day-to-day basis, the value of the release at the dam location. The choice of the value of that discharge is conditioned by a criterion of satisfaction of one or several objectives. These objectives are defined at one or several points in the system, on the river or rivers downstream from the point or points of release. Typical objectives may be to maximize electric production, or to minimize damage due to flooding downstream from the dams or due to shortages of water in the rivers at diversion points for municipal water supply or other uses, etc.
The approach described in this article is adapted to the concerns of the managers and relatively intuitive; it pursues the reasoning of Massé (1946) but generalizes it and therefore makes it more applicable. At first we look at the case of a single reservoir, located directly on the stream, for the production of electric energy. In this case the target-point (the point where an objective function is to be evaluated) coincides with the point of release. This was precisely the problem studied by Massé (1946) in his classical two volumes on "Reserves and the Regulation of the Future". We pursue his reasoning but use a more appropriate mathematical procedure which allows us to obtain more general results. The same results are derived using two different approaches. The first one is more intuitive and uses the concept of marginal value to secure the necessary condition of optimality to be satisfied by the releases. The second procedure is more mathematical and uses, basically, the method of Calculus of Variations, generalized to the case where there are inequality constraints that must be satisfied. In the case of a single reservoir we show that the optimality condition provides the rigorous proof of the graphical method of Varlet (1923). The results of Massé are generalized to the case where the objective function is evaluated downstream from the point of release and the management strategy must account for the phenomenon of propagation of discharges in the streams. Again in this case the results are obtained in two ways, (1) by economic reasoning on the marginal values and (2) with the Constrained Calculus of Variations. Massé had found that the optimal policy for the releases was the one that maintained the marginal benefit constant in time. That applied to the case of a single reservoir where the target-point coincides with the point of release. If B{x(t),t} is the instantaneous benefit obtained from making the release at the dam at a rate x(t) at time t, then the optimality condition is mathematically:
b{x(t),t}=L=constant with time
where b{x(t),t} is the marginal benefit, i.e. the partial derivative of B{x(t),t} with respect to the argument x. L is a constant, which in the mathematical formulation of the problem is the Lagrange multiplier associated with the mass balance constraint to be satisfied over the selected horizon of operations. In other words the cumulative volume of releases over the time horizon must be equal to the cumulative volume of inflows plus the drop in reservoir storage between the initial and final times. Economically the marginal benefit is the incremental benefit realized by making an extra release of one unit of water, given that the rate of release was x(t). Typically the marginal benefit decreases as the rate of release increases and that is often referred to as the "law of decreasing returns". For the case of electric production the marginal benefit will depend on the amount of releases made through the turbines but also on the season of year or day of week or hour of day. The price of electricity is higher in winter than it is in summer. It is higher during peak hours during the week than it is on weekends, etc. If on the other hand the marginal benefit is only a function of the release, and not a function of time, then the constancy of the marginal benefit with time is equivalent to the constancy of the release with time. Optimality becomes synonymous with regulation, i.e. releasing at a constant rate. It is only under these conditions that the graphical method of Varlet is applicable.
In the graphical domain of cumulative volume of releases versus time, the optimal "trajectory" is a straight line where such a strategy is feasible i.e. does not make the reservoir more than full nor less than empty. When the objective is evaluated at a point downstream from the point of release and the marginal benefit (or cost) has a seasonal character, neither the graphical procedure of Varlet nor the mathematical result of Massé apply. For this more general case the derived optimality condition states that it is no longer an instantaneous marginal benefit that must remain constant in time. What must remain constant in time is a time integrated and weighted value of the marginal benefit (or damage) between the time the release is made and a later time. That later time is the release time plus the memory of the propagation system. The memory time is the time that must lapse before an upstream release is no longer felt at the target point downstream. The longer the distance between the release point and the target-point the longer is the memory of the propagation system. At the downstream point the damage depends on the discharge at that point, which is of course related to the release rate but also to the lateral inflows in between from tributaries and on the amount of attenuation that happens between the point of release and the target-point downstream. The integrand at dummy integration time t' is the marginal damage at that time multiplied by the instantaneous unit hydrograph at that time. Mathematically the integrand is: f{q(t'),t'}*k(t'-t) where f is marginal damage, q(t') is discharge at target point and k(.) is instantaneous unit hydrograph of propagation between release and target points. This integrand is to be integrated between time t of the release and time t + M, where M is the memory of the system. It is that integral that we have called the "Integrated Marginal Future" (or IMF for short) value that must remain constant in time. That optimality condition applies as long as the trajectory remains in the feasible domain bounded by the constraints of the problem, the "interior domain". When on a bound, the IMF value does not remain constant but must vary monotonically in a given direction, i.e. increases or decreases with time, depending on the constraint on which the solution rests.
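A minimal numerical sketch of the "Integrated Marginal Future" (IMF) value (the unit hydrograph and the marginal-damage function below are hypothetical, chosen only to illustrate the definition) shows how the quantity that must stay constant along an optimal interior trajectory is evaluated:

```python
# IMF at release time step i, discretizing the integral of f{q(t')} * k(t'-t)
# over the memory M of the propagation system. All numbers are hypothetical.

def imf(i, q, k, marginal_damage):
    """Discrete Integrated Marginal Future value at time step i."""
    return sum(marginal_damage(q[i + j]) * k[j] for j in range(len(k)))

if __name__ == "__main__":
    k = [0.1, 0.3, 0.4, 0.2]                               # unit hydrograph (sums to 1), memory M = 4 steps
    q = [50, 60, 80, 120, 150, 130, 90, 70, 60, 55]        # target-point discharge, m3/s
    marginal = lambda flow: 2.0 * max(0.0, flow - 100.0)   # marginal flood damage above a 100 m3/s threshold
    for i in range(len(q) - len(k)):
        print(f"step {i}: IMF = {imf(i, q, k, marginal):6.1f}")
```

On an optimal interior trajectory the releases would be adjusted until these IMF values are equal from one time step to the next.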
-
Étude de la production des ions bromate lors de l'ozonation des eaux de la Banlieue de Paris : choix du mode d'ozonation et variation des paramètres physico-chimiques
K. Gelinet, J. P. Croue, C. Galey, A. Laplanche et B. Legube
p. 599–616
RésuméFR :
Cette étude a permis d'évaluer l'importance de la concentration en ions bromure, de la température et de la nature de la Matière Organique Naturelle (MON) sur la production des ions bromate en s'appuyant sur des expériences conduites en laboratoire et sur pilote semi-industriel (Centre d'Essais de Méry-sur-Oise).
Trois campagnes d'ozonation effectuées en parallèle à Méry-sur-Oise et au LCEE (Laboratoire de Chimie de l'Eau et de l'Environnement) sur des eaux filtrées sur sable, ont montré que les expériences conduites en laboratoire et sur pilote semi-industriel mènent à des résultats similaires, soit une relation linéaire [BrO3-]=f (C∙τ) vérifiant une pente identique pour des conditions expérimentales données (teneur en ions bromure, température, origine de l'eau). Ces travaux ont montré de façon nouvelle qu'une faible variation de la concentration en ions bromure (± 15 à 20 µg.L-1) suffisait à modifier significativement la formation des ions bromate. À C∙τ=10 et T=21°C, la production des ions bromate est passée de 16 à 27 µg.L-1 pour une augmentation de la concentration en ions bromure de 80 à 95 µg.L-1.
Les résultats obtenus ont montré de plus que la température est un facteur important puisqu'une différence de 8°C (13 à 21°C) a entraîné, pour la même eau (80 µg.L-1 d'ions bromure, C∙τ=10), une augmentation de la concentration en ions bromate de 10 à 16 µg.L-1. Pour d'autres eaux (Seine, Marne et Oise), trois autres campagnes conduites avec des eaux clarifiées ont été effectuées après ajustement de la teneur en ions bromure et régulation de la température, ces trois eaux présentant par ailleurs des caractéristiques similaires en ce qui concerne le pH et l'alcalinité.
À C∙τ équivalent, la production d'ions bromate s'est avérée significativement plus faible pour l'eau de l'Oise que pour les deux autres eaux. La nature de la MON pourrait donc avoir une influence notable sur la formation des ions bromate.
EN :
The publication of Kurokawa et al. in 1990, confirming the toxicity of bromate in rats and mice, initiated the research effort conducted internationally during the last seven years to better understand the reaction mechanisms of bromate formation during the ozonation of natural waters. Based on the research findings regarding the effect of a number of parameters (bromide, ozone dose, pH, temperature, alkalinity, DOC content, ammonia, ...), predictive models (empirical and reaction kinetic based models), including molecular and/or radical pathways, have been developed with more or less success. Complementary results are still needed to better understand this complex mechanism.
The main objective of our work was to evaluate how the seasonal variation of the physical chemical characteristics of Paris-area source waters (i.e. bromide content, temperature, natural organic matter) can affect the production of bromate during ozonation.
In order to confirm that lab-scale experiments could be used to develop such a research program, parallel tests were first conducted at the bench- and pilot-scale under comparable C∙τ conditions. The lab-scale reactor was a 380 ml glass column (internal diameter: 0.02 m; height: 1.2 m) equipped with a water jacket to allow temperature to be varied and maintained. This reactor was used as a continuous flow reactor with recirculation. The pilot-scale ozonation contactor installed at the Méry sur Oise water treatment plant consisted of four 30-liter columns in series (diameter: 0.1 m; height: 4 m). The first column is used as the application column while the three others are used as residence columns. The results showed that lab-scale ozonation experiments conducted on Méry sur Oise sand-filtered water led to results similar to those of pilot ozonation conducted on the same water at the same temperature (sampled the same day) using the Méry sur Oise pilot-scale reactor. For applied C∙τ ranging from 4 to 20 mg O3/L.min, a similar linear relationship between bromate formation and applied C∙τ was obtained with the two reactors.
A survey conducted on the Oise River showed that the bromide concentration ranged from 40 µg/L (winter period) to 80 µg/L (summer period). While it is already well known that the higher the bromide content, the higher the bromate formation, our work also pointed out that even a small increase of the bromide concentration, from 80 to 95 µg/L (15 µg/L of bromide spiked as KBr), can significantly affect bromate formation under the same experimental conditions: as an example, it increased from 16 to 27 µg/L for a C∙τ of 10 at 21 °C.
The temperature of the Oise River can vary from 5 °C up to 25 °C. Using carefully controlled temperature conditions, one can observe that the slope of the bromate production versus applied C∙τ increased with increasing temperature (same water). For example, the production of bromate during the ozonation (applied C∙τ=10) of the Méry sur Oise sand-filtered water was 7, 10 and 16 µg/L at 5, 13 and 21 °C, respectively. Complementary experiments showed that the impact of a variation of the initial bromide concentration was proportionally more important for low-temperature water (5 to 13 °C) than for moderate-temperature water (20 °C).
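The linear [BrO3-] = f(C∙τ) relationship can be summarized by its slope under each condition; the sketch below fits such slopes to hypothetical data points (not the measured campaigns) for two temperatures, to illustrate how the reported sensitivity is quantified:

```python
# Fit bromate formation (ug/L) as a linear function of applied C*t (mg O3/L.min),
# one slope per experimental condition. The data points below are hypothetical.

def linear_fit(x, y):
    """Ordinary least squares y = a + b*x; returns (intercept a, slope b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

if __name__ == "__main__":
    ct = [4, 8, 12, 16, 20]
    bromate = {
        "13 degC": [4, 8, 12, 17, 21],     # ug/L, illustrative
        "21 degC": [7, 13, 19, 26, 32],    # ug/L, illustrative
    }
    for label, series in bromate.items():
        a, b = linear_fit(ct, series)
        print(f"{label}: [BrO3-] = {a:.1f} + {b:.2f} * (C*t)")
```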
The origin and nature of the water is considered to play a significant role in the formation of bromate during ozonation; however, few studies have evaluated the importance of these parameters using carefully controlled experimental conditions. In order to better define how changes in the quality of the Paris suburbs water sources, especially the organic content (nature and concentration of the NOM), affect bromate production, two sets of experiments were conducted.
In the first part of the work, the Méry sur Oise sand-filtered water was sampled at three different periods of the year 1996 (June, July and December), and the ozonation experiments were conducted at the same temperature (21 °C) after the bromide concentration was adjusted to 80 µg/L. The three water samples had the same pH and did not contain ammonia. Significant differences were observed in the bromate production, showing a larger production with the winter water as compared to the summer water. The fact that the winter water was enriched in DOC (3.7 mg/L of DOC) as compared to the two others (2.6 - 2.7 mg/L of DOC) may explain this difference, since a larger ozone dose was probably necessary (ozone transfer not controlled because of the small size of the lab-scale reactor) to reach the same applied C∙τ, due to a higher ozone consumption by the natural organic matter. The slightly lower alkalinity of the winter sample (200 mg/L as CaCO3, as compared to 250 mg/L as CaCO3 for the summer samples) could have led to a less pronounced scavenger effect, a condition that favors the radical pathway, which is generally predominant. However, it is also known that carbonate species can promote the formation of bromate through the production of carbonate radicals. Comparing the results obtained with the water samples collected during the summer period, more bromate was produced in July than in June. The more hydrophobic (more aromatic) character of the NOM of the water sampled in July (SUVA=2.15) as compared to the June sample (SUVA=1.88), a characteristic that favors ozone consumption and consequently OH radical production, may explain this finding.
In the second part of the work, the bromate formation obtained during the ozonation of the three major water sources of the Paris suburbs (sampled after clarification), the Oise River, the Marne River and the Seine River, was compared (at the same temperature) after the bromide content was adjusted to 80 µg/L. Similar results were obtained with the clarified Marne River and Seine River waters, the two waters showing the same physical chemical characteristics (2.2 and 2.5 mg/L of DOC; pH 7.9 and 7.8; alkalinity: 225 and 210 mg/L as CaCO3). A lower production of bromate as a function of the applied C∙τ was observed with the clarified Oise River water, a result that contradicts our previous hypotheses since this water source showed the highest DOC content, the highest SUVA and the lowest alkalinity among the three waters studied.
More work needs to be done to better understand the impact of the origin and nature of the NOM on the bromate formation mechanisms. As a general conclusion, this work also confirmed that the physical chemical characteristics of the source water (DOC, temperature, alkalinity, bromide content, …) are more important factors than the hydraulic characteristics of the reactor.
-
À propos de la distribution statistique des cumuls pluviométriques annuels. Faut-il en finir avec la normalité?
H. Benjoudi et P. Hubert
p. 617–630
RésuméFR :
Il est communément admis que la distribution statistique des précipitations cumulées annuelles suit une loi de Laplace-Gauss. Les écarts entre cette loi et les distributions empiriques sont cependant un fait d'expérience : au-delà d'une probabilité au non dépassement correspondant à une période de retour d'une vingtaine d'années et pour les valeurs les plus fortes de pluie, l'ajustement n'est plus acceptable. Ce décrochage par rapport à la loi normale est mieux mis en évidence par l'étude des longues séries pluviométriques, plus riches en événements extrêmes. Pour étudier le comportement statistique de ces derniers, il est fait appel à un formalisme multifractal qui permet de mettre en évidence que, contrairement à ce qui est généralement admis, la décroissance de la probabilité au dépassement est de nature hyperbolique plutôt qu'exponentielle. Les probabilités des événements catastrophiques sont donc plus importantes que l'on ne le croyait jusqu'ici, ce qui peut avoir des conséquences particulièrement importantes. Cette approche appliquée à un ensemble de séries pluviométriques de longue durée permet de cerner le paramètre caractérisant la décroissance de la probabilité au dépassement. Les résultats obtenus jusqu'ici laissent à penser que ce paramètre pourrait être universel.
EN :
Up to now, annual rainfall accumulations have generally been modelled according to the Laplace-Gauss probability distribution. After a brief survey of the arguments for using this distribution to describe annual rainfall, which mainly consist in considering the annual rainfall accumulation as the sum of many independent, identically distributed individual rainfall events of similar magnitude, we question its capacity to take into account the various characteristics of the rainfall events, in particular their magnitude, their number and their possible correlation or persistence. We have studied an alternative model based on the multifractal theory, well suited to model phenomena where matter and/or energy concentrate on a more and more sparse domain as the observation scale decreases. Some elements of the multifractal theory are briefly described. The main feature of the probabilistic model based on the assumption of a multifractal behavior is that, for large enough accumulations, the probability distribution tail would have, at any time scale, an algebraic rather than an exponential behavior, as is the case for the Laplace-Gauss distribution. It is worth noting that such an algebraic behavior corresponds to a probability of occurrence decreasing much more slowly than in the case of an exponential one. It is also important to note that, unlike exponential distribution laws, not all statistical moments of algebraic distribution laws are defined, those of order greater than the exponent of the algebraic law diverging. This fact is of major importance in relation to sampling, as we cannot a priori be sure of the convergence of all statistical estimators, and the convergence of these estimators is likely to be much slower than that of exponential law estimators.
The application part of our study concerned 87 annual rainfall series spanning from 44 to 266 years, with a mean of 116 years, gathered within the UNESCO FRIEND-AMHY project (Flow Regimes from International Experimental and Network Data - Alpine and Mediterranean Hydrology). For each series we have drawn, on a log-log diagram, the curve of the empirical probability of exceeding a given rainfall value (derived by the Weibull formula) versus the rainfall value. On such a diagram an algebraic behavior should be represented by a straight line whose slope is the exponent characterizing the algebraic law tail. Qualitatively, the results tend to argue in favor of an algebraic rather than an exponential behavior of the probability distribution of high annual rainfall (empirical return periods of more than 20 years), but the exponent values are quite scattered, ranging from about 2 for the lowest values to more than 10. We then concentrated our study on the 71 stations with series more than 90 years long. A diagram of the exponent value versus the corresponding series length suggests that there may be a slow convergence of exponent values towards a common value close to 3.8 as the length of the series increases (this observation should be related to a possible divergence of high-order moments, which is the cause of poor estimation from small-size samples). This exponent would then be universal, related to the physical processes from which rainfall originates rather than to a geographical location. We recall here that the multifractal framework we used was primarily designed to take into account the scaling symmetries of the Navier-Stokes equations. It is likely that the unknown partial differential equations governing rainfall processes share some properties with those governing atmospheric turbulence, and it would not be too surprising that we can capture in this way some physical feature of the rainfall process.
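The tail analysis described here amounts to a log-log regression on the largest values; the short sketch below applies Weibull plotting positions to a synthetic heavy-tailed series (not the FRIEND-AMHY data) and estimates the algebraic exponent:

```python
import math, random

def tail_exponent(annual_rainfall, tail_fraction=0.2):
    """Estimate alpha in P(X > x) ~ x**(-alpha) from the log-log slope of the
    empirical exceedance probability (Weibull formula) over the upper tail."""
    x = sorted(annual_rainfall)
    n = len(x)
    p_exc = [1.0 - (i + 1) / (n + 1.0) for i in range(n)]   # exceedance probability of x[i]
    k = max(3, int(tail_fraction * n))                      # keep the largest values only
    lx = [math.log(v) for v in x[-k:]]
    lp = [math.log(p) for p in p_exc[-k:]]
    mx, mp = sum(lx) / k, sum(lp) / k
    slope = sum((a - mx) * (b - mp) for a, b in zip(lx, lp)) / \
            sum((a - mx) ** 2 for a in lx)
    return -slope

if __name__ == "__main__":
    random.seed(0)
    # Synthetic 120-year series with a Pareto-like upper tail of exponent 3.8
    series = [600.0 * random.paretovariate(3.8) for _ in range(120)]
    print(f"estimated tail exponent: {tail_exponent(series):.1f}")
```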
These preliminary results, if confirmed by further research, would have considerable theoretical and practical consequences. The algebraic behavior of heavy rainfall distribution, which is supposed, according to the multifractal theory, to arise at all time scales with the same exponent, should lead to rejection of the numerous empirical and ad-hoc distribution functions which are at present used mainly for practical purposes. These probability distribution functions have a poor theoretical and/or physical basis and almost all of them exhibit a more or less exponential behavior. It is likely that a huge amount of effort has been devoted for decades to the sophistication of fitting methods, using low and mean variate realizations, while large and extreme values were often understated and underused, as extreme values are generally said to belong to another population or, equivalently, to be generated by another random process than normal ones (outliers...). The new generation of distribution functions to be designed should be less empirical: it should take explicitly into account time-scale invariance and should be able to reconcile within a unique model all the realizations of the rainfall accumulation variate, whatever their magnitudes.
In order to size up the difference between algebraic and exponential statistical models, one can note, for example, that the return periods of the annual rainfall of four cities (Padova, Marseilles, Rome and Gibraltar), which were estimated at 1,000 years with the fitting of a Gaussian distribution, could be as low as 60 to 100 years with this new algebraic model. The return period is thus divided by a factor of 10! All the same, annual rainfall accumulations are said to have a rather "soft" behavior, the summation of numerous individual events being supposed to smooth their "wilder" behavior. This is probably not true, and one can find in annual and even pluriannual rainfall accumulations the trace of extreme individual events. Whatever the time scale under consideration, it is easy to imagine the consequences of such a dramatic modification of return periods on engineering design. Such a revisiting of estimated return periods should be extended to hydrological events. As an example, the recent flooding of the river Oder in Germany, Poland, and the Czech Republic (July 1997), for which preliminary information suggests a return period greater than 10,000 years (Gazowsky, personal communication), might not be so extreme if our new concepts are shown to be valid.
-
Comparaison de deux méthodes d'estimation du broutage des bactéries par les protozoaires en milieux aquatiques [Courte note]
P. Servais, S. Becquevort et F. Vandevelde
p. 631–639
RésuméFR :
L'objectif du présent travail est de comparer deux méthodes indépendantes permettant d'estimer, dans les milieux aquatiques, le flux de carbone transitant du compartiment bactérien vers les protozoaires. Les deux méthodes utilisées sont, d'une part, celle basée sur le suivi de la décroissance de radioactivité du matériel génétique bactérien après marquage à la thymidine tritiée (SERVAIS et al., 1985) et, d'autre part, celle de mesure du taux d'ingestion de bactéries fluorescentes (FLB) par les protozoaires. Elles ont été appliquées en parallèle sur des échantillons de la rivière Meuse (Belgique). L'emploi de la première méthode a montré des taux de broutage compris entre 0.002 h-1 et 0.016 h-1 qui représentent en moyenne 72 % des taux de mortalité totale. Une excellente corrélation entre les estimations de flux de broutage obtenues par les deux techniques a été trouvée, mais les valeurs estimées à partir de la méthode FLB sont systématiquement inférieures (d'environ 30% en moyenne) à celles obtenues par l'autre méthode. Une part de cette différence peut vraisemblablement s'expliquer par la non prise en compte par la méthode FLB du broutage par des organismes de taille supérieure à 100 µm.
EN :
The goal of the present work was to compare two methods for estimating, in aquatic ecosystems, the carbon flux due to grazing of bacteria by protozoa. The first method follows the decrease of labeling in the DNA of natural assemblages of bacteria previously labeled with tritiated thymidine (SERVAIS et al., 1985); the second procedure is based on the estimation of the bacterial ingestion rate by protozoa using fluorescently labeled bacteria (FLB). Both methods were applied in parallel on river Meuse (Belgium) samples. Using the first method, grazing rates in the range 0.002 h-1 to 0.016 h-1 were observed; they represented on average 72 % of the total bacterial mortality rates. A very good correlation between the two estimates of the grazing fluxes was found, but the values obtained by the FLB method were systematically lower (by around 30% on average) than those estimated with the other method. Part of this difference is probably due to the fact that the FLB method does not take into account grazing by organisms larger than 100 µm.
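As an illustration of the first method (a sketch with invented numbers, not the authors' data), the disappearance rate of label, from which mortality and grazing rates are then derived, can be estimated from the log-linear decrease of DNA radioactivity during the incubation:

```python
import math

def disappearance_rate(times_h, dpm):
    """First-order disappearance rate of label (h^-1) from the least-squares slope
    of ln(radioactivity) versus time."""
    ys = [math.log(v) for v in dpm]
    n = len(times_h)
    sx, sy = sum(times_h), sum(ys)
    sxx = sum(t * t for t in times_h)
    sxy = sum(t * y for t, y in zip(times_h, ys))
    return -(n * sxy - sx * sy) / (n * sxx - sx * sx)

if __name__ == "__main__":
    t = [0, 6, 12, 24, 36]                          # hours (hypothetical)
    radioactivity = [1000, 950, 905, 820, 742]      # dpm in bacterial DNA (hypothetical)
    print(f"disappearance rate of label: {disappearance_rate(t, radioactivity):.4f} h^-1")
```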