Title Method for predicting the effluent ammonia-nitrogen concentration based on a recurrent self-organizing neural network
Abstract An intelligent method is designed for predicting the effluent ammonia-nitrogen concentration in the urban wastewater treatment process (WWTP). The technology of this invention is part of advanced manufacturing technology and belongs to both the field of control engineering and the field of environmental engineering. To improve prediction efficiency, a recurrent self-organizing neural network, which adjusts its structure and parameters concurrently during training, is developed to implement this intelligent method. The method predicts the effluent ammonia-nitrogen concentration with acceptable accuracy and solves the problem that the effluent ammonia-nitrogen concentration is difficult to measure online. Moreover, the online information about the effluent ammonia-nitrogen concentration predicted by this method can raise the level of effluent quality monitoring and strengthen the overall management of the WWTP.
Publication number US9633307(B2) Publication date 2017.04.25
Application number US201514668836 Filing date 2015.03.25
Applicant BEIJING UNIVERSITY OF TECHNOLOGY Inventors Qiao Junfei; Hou Ying; Han Honggui; Li Wenjing
Classification (IPC) G06N3/08; G01N33/18; G06N3/04; C02F3/00; C02F101/16 Primary classification G06N3/08
Agency J.C. Patents Agent J.C. Patents
Principal claim 1. A method for predicting effluent ammonia-nitrogen concentration in wastewater based on a recurrent self-organizing neural network, comprising:

(1) providing training samples, each training sample including input variables as measured parameters of a wastewater and a measured effluent ammonia-nitrogen concentration of the wastewater;

(2) designing a topological structure of a recurrent self-organizing neural network having an input layer, a hidden layer and an output layer, wherein an initial structure of the recurrent self-organizing neural network is M-K-1, having M nodes in the input layer, K nodes in the hidden layer and 1 node in the output layer, where M > 3 is a positive integer and represents the number of the input variables, and K > 2 is a positive integer; wherein the input vector of the recurrent self-organizing neural network at time t is u(t) = [u_1(t), u_2(t), …, u_M(t)], where u_m(t) is the value of the m-th input variable at time t; the output y(t) of the recurrent self-organizing neural network, which is the calculated value of the effluent ammonia-nitrogen concentration at time t, is expressed as:

$$y(t) = \sum_{k=1}^{K} w_k^3(t)\, v_k(t), \qquad (1)$$

where $w_k^3(t)$ is the connecting weight between the k-th node in the hidden layer and the node in the output layer at time t, k = 1, 2, …, K; and $v_k(t)$ is the output of the k-th node in the hidden layer at time t:

$$v_k(t) = f\Big(\sum_{m=1}^{M} w_{mk}^1(t)\, u_m(t) + v_k^1(t)\Big), \qquad (2)$$

where $w_{mk}^1(t)$ is the connecting weight between the m-th node in the input layer and the k-th node in the hidden layer at time t, m = 1, 2, …, M; $v_k^1(t)$ is the feedback value of the k-th node in the hidden layer at time t, expressed as:

$$v_k^1(t) = w_k^2(t)\, v_k(t-1), \qquad (3)$$

where $w_k^2(t)$ is the self-feedback weight of the k-th node in the hidden layer at time t and $v_k(t-1)$ is the output of the k-th node in the hidden layer at time t−1; wherein a mean-squared training error is defined as:

$$E(t) = \frac{1}{2T} \sum_{t=1}^{T} \big(y_d(t) - y(t)\big)^2, \qquad (4)$$

where $y_d(t)$ is the real value of the effluent ammonia-nitrogen concentration at time t and T is the number of training samples;

(3) training the recurrent self-organizing neural network:

① initializing the connecting weights between the hidden layer and the output layer, the self-feedback weights of the hidden-layer nodes, and the connecting weights between the input layer and the hidden layer: $w_k^3(t) \in (0, 1)$, $w_k^2(t) \in (0, 1)$, and $w_{mk}^1(t) \in (0, 1)$, m = 1, 2, …, M, k = 1, 2, …, K; and pre-setting an expected error value $E_d \in (0, 0.01]$;

② calculating the total sensitivity of each node in the hidden layer as:

$$ST_k(t) = \frac{\mathrm{Var}_k\big[E\big(y(t) \mid v_k(t)\big)\big]}{\mathrm{Var}\big[y(t)\big]}, \qquad (5)$$

where

$$\mathrm{Var}_k\big[E\big(y(t) \mid v_k(t)\big)\big] = 2(A_k)^2 + (B_k)^2, \quad \mathrm{Var}\big(y(t)\big) = 2\sum_{k=1}^{K} \big((A_k)^2 + (B_k)^2\big), \qquad (6)$$

k = 1, 2, …, K; $A_k$ and $B_k$ are Fourier coefficients given by:

$$A_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} \cos\big(\omega_k(t)\, s\big)\, ds, \quad B_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} \sin\big(\omega_k(t)\, s\big)\, ds, \qquad (7)$$

where the range of s is [−π, π]; $\omega_k(t)$ is the frequency of the k-th node in the hidden layer, determined by the node's output as:

$$\omega_k(t) = \arcsin\left(\frac{\pi}{b_k(t) - a_k(t)}\Big(v_k(t) - \frac{b_k(t) + a_k(t)}{2}\Big)\right), \qquad (8)$$

where $b_k(t)$ and $a_k(t)$ are, respectively, the maximum and minimum outputs of the k-th node in the hidden layer during the training process;

③ tuning the structure of the recurrent self-organizing neural network;

pruning step: if the total sensitivity $ST_k(t) < \alpha_1$, $\alpha_1 \in (0, 0.01]$, the k-th node in the hidden layer is pruned and the number of hidden-layer nodes is updated to $K_1 = K - 1$; otherwise the k-th node is retained and $K_1 = K$;

growing step: if the error $E(t) > E_d$, a new node is added to the hidden layer, with initial weights given by:

$$w_{new}^1(t) = w_h^1(t) = \big[w_{1h}^1(t), w_{2h}^1(t), \ldots, w_{Mh}^1(t)\big], \quad w_{new}^2(t) = w_h^2(t), \quad w_{new}^3(t) = \frac{y_d(t) - y(t)}{v_{new}(t)}, \qquad (9)$$

where $w_{new}^1(t)$ is the connecting-weight vector between the new hidden node and the input layer, $w_{new}^2(t)$ is the self-feedback weight of the new hidden node, $w_{new}^3(t)$ is the connecting weight between the new hidden node and the output layer, the h-th node is the hidden-layer node with the largest total sensitivity, $w_h^1(t)$ is the connecting-weight vector between the h-th hidden node and the input layer before the new node is added, $w_h^2(t)$ is the self-feedback weight of the h-th hidden node before the new node is added, and the output of the new node is defined as:

$$v_{new}(t) = f\Big(\sum_{m=1}^{M} w_{mh}^1(t)\, u_m(t) + v_{new}^1(t)\Big), \quad v_{new}^1(t) = w_h^2(t)\, v_h(t-1), \qquad (10)$$

and the number of hidden-layer nodes is updated to $K_2 = K_1 + 1$; otherwise the structure of the recurrent self-organizing neural network is not adjusted and $K_2 = K_1$;

④ updating the weights $w_k^1(t)$, $w_k^2(t)$ and $w_k^3(t)$ by gradient descent:

$$w_k^1(t+1) = w_k^1(t) - \eta_1 \frac{\partial E(t)}{\partial w_k^1(t)}, \quad w_k^2(t+1) = w_k^2(t) - \eta_2 \frac{\partial E(t)}{\partial w_k^2(t)}, \quad w_k^3(t+1) = w_k^3(t) - \eta_3 \frac{\partial E(t)}{\partial w_k^3(t)}, \qquad (11)$$

where k = 1, 2, …, K_2; $w_k^1(t) = [w_{1k}^1(t), w_{2k}^1(t), \ldots, w_{Mk}^1(t)]$; and $\eta_1 \in (0, 0.1]$, $\eta_2 \in (0, 0.1]$ and $\eta_3 \in (0, 0.01]$ are, respectively, the learning rate of the connection weights between the input layer and the hidden layer, the learning rate of the self-feedback weights in the hidden layer, and the learning rate of the connection weights between the hidden layer and the output layer;

⑤ importing the next training sample x(t+1) and repeating steps ②–④; the training process stops after all training samples have been imported to the recurrent self-organizing neural network, so as to obtain a trained recurrent self-organizing neural network;

(4) providing, for a wastewater to be monitored, the same input variables as those of the training samples, and inputting these input variables to the trained recurrent self-organizing neural network to carry out the calculation, wherein the output of the trained recurrent self-organizing neural network is the predicted value of the effluent ammonia-nitrogen concentration of the wastewater to be monitored.
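The training procedure of claim 1 can be sketched in NumPy. This is an illustrative reconstruction under stated assumptions, not the patented implementation: the class and method names are invented, the activation f is taken to be the logistic sigmoid (the claim leaves it unspecified), the Fourier integrals of Eq. (7) are evaluated in closed form (the cosine integral reduces to sinc(ω) and the sine integral vanishes by symmetry), and the gradient of Eq. (11) is truncated to one recurrent step.

```python
import numpy as np

def sigmoid(x):
    """Logistic activation; the claim's f is unspecified, so this is an assumption."""
    return 1.0 / (1.0 + np.exp(-x))

class RSONN:
    """Illustrative recurrent self-organizing network, Eqs. (1)-(11)."""

    def __init__(self, M, K, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.uniform(0, 1, (K, M))   # w_mk^1: input -> hidden, init step 1
        self.w2 = rng.uniform(0, 1, K)        # w_k^2 : hidden self-feedback
        self.w3 = rng.uniform(0, 1, K)        # w_k^3 : hidden -> output
        self.v_prev = np.zeros(K)             # v_k(t-1)
        self.v_min = np.full(K, np.inf)       # a_k(t): running minimum of v_k
        self.v_max = np.full(K, -np.inf)      # b_k(t): running maximum of v_k

    def forward(self, u):
        # Eqs. (1)-(3): recurrent hidden layer, linear output node
        v = sigmoid(self.w1 @ u + self.w2 * self.v_prev)
        self.v_min = np.minimum(self.v_min, v)
        self.v_max = np.maximum(self.v_max, v)
        return float(self.w3 @ v), v

    def total_sensitivity(self, v):
        # Eqs. (5)-(8), with Eq. (7) in closed form: (1/2pi) int cos(w s) ds
        # over [-pi, pi] equals sinc(w); the sine integral is 0 by symmetry.
        span = np.maximum(self.v_max - self.v_min, 1e-8)
        omega = np.arcsin(np.clip(np.pi / span * (v - (self.v_max + self.v_min) / 2), -1, 1))
        A, B = np.sinc(omega), np.zeros_like(omega)
        return (2 * A**2 + B**2) / max(2 * np.sum(A**2 + B**2), 1e-12)

    def prune(self, keep):
        # Pruning step: retain only hidden nodes with ST_k >= alpha_1
        self.w1 = self.w1[keep]
        for name in ("w2", "w3", "v_prev", "v_min", "v_max"):
            setattr(self, name, getattr(self, name)[keep])

    def grow(self, u, yd, y):
        # Growing step, Eqs. (9)-(10): clone the most sensitive node h
        h = int(np.argmax(self.total_sensitivity(self.v_prev)))
        v_new = sigmoid(self.w1[h] @ u + self.w2[h] * self.v_prev[h])
        self.w1 = np.vstack([self.w1, self.w1[h]])
        self.w2 = np.append(self.w2, self.w2[h])
        self.w3 = np.append(self.w3, (yd - y) / max(v_new, 1e-8))
        for name in ("v_prev", "v_min", "v_max"):
            setattr(self, name, np.append(getattr(self, name), v_new))

    def update(self, u, yd, eta=(0.05, 0.05, 0.005)):
        # Eq. (11) in descent form, recurrent gradient truncated to one step;
        # returns the instantaneous squared-error term of Eq. (4).
        y, v = self.forward(u)
        e, dv = y - yd, v * (1 - v)
        g3 = e * v
        g2 = e * self.w3 * dv * self.v_prev
        g1 = np.outer(e * self.w3 * dv, u)
        self.w3 -= eta[2] * g3
        self.w2 -= eta[1] * g2
        self.w1 -= eta[0] * g1
        self.v_prev = v
        return 0.5 * e**2
```

A driver would call `forward` per sample, prune nodes whose total sensitivity falls below α₁, grow a node whenever the error exceeds E_d, and then apply `update`, mirroring steps ②–④ of the claim.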
Address Beijing, CN