For a class of exponential-type activation functions broader than the sigmoid, the essential approximation order of three-layer feedforward neural networks is established. In particular, it is proved that for any continuous function $f$ defined on a compact subset of $\mathbb{R}^d$, there exists an approximate exponential-type neural network $R_{n,\sigma}(d)$ with $m(n) = (n+1)^d$ hidden units satisfying $B_d(f,n) < \varepsilon$, where
$$B_d(f,n) = \Bigl(2 + \frac{\pi^2}{2}\Bigr)\sqrt{d}\;\omega_2\Bigl(f, \frac{1}{n+1}\Bigr),$$
$\omega_2(f,\cdot)$ denotes the second-order modulus of continuity of $f$, and $n$ is an arbitrary positive integer no smaller than $1/\varepsilon$. The accuracy and rate with which this network approximates $f$ satisfy
$$d_\infty\bigl(f, R_{n,\sigma}(d)\bigr) \le \Bigl(2 + \frac{\pi^2}{2}\Bigr)\sqrt{d}\;\omega_2\Bigl(f, \frac{1}{n+1}\Bigr).$$
Moreover, when $f$ belongs to the class of $\alpha$-Lipschitz functions, the network attains its essential approximation order $n^{-\alpha}$ ($0 < \alpha \le 2$). These results characterize the approximation behavior of this class of neural networks fairly completely, and reveal the dependence between the approximation behavior of such networks and the network topology.
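The quantity driving the bound above is the second-order modulus of continuity, defined in the standard way as $\omega_2(f,t) = \sup_{0 < h \le t}\,\sup_{x \pm h \in K} |f(x+h) - 2f(x) + f(x-h)|$. The paper's network construction is not reproduced here; the following is only a minimal numerical sketch of this modulus in one dimension (the function name `second_modulus`, the interval, and the sampling resolutions are illustrative assumptions, not from the paper):

```python
import numpy as np

def second_modulus(f, a, b, t, n_h=50, n_x=2000):
    # Illustrative sketch: approximate the second-order modulus of continuity
    #   omega_2(f, t) = sup over 0 < h <= t, x +- h in [a, b]
    #                   of |f(x+h) - 2 f(x) + f(x-h)|
    # by sampling h and x on finite grids (grid sizes are arbitrary choices).
    best = 0.0
    for h in np.linspace(t / n_h, t, n_h):
        x = np.linspace(a + h, b - h, n_x)       # keep x - h and x + h inside [a, b]
        diff = np.abs(f(x + h) - 2.0 * f(x) + f(x - h))
        best = max(best, float(diff.max()))
    return best

# For f(x) = x^2 the second difference is exactly 2 h^2, so omega_2(f, t) = 2 t^2.
val = second_modulus(lambda x: x**2, -1.0, 1.0, 0.1)
print(val)
```

For a smooth function such as $f(x) = x^2$ the sketch recovers $\omega_2(f,t) = 2t^2$ exactly, which matches the theorem's regime $\alpha = 2$: the modulus, and hence the bound, decays like $n^{-2}$ as the number of hidden units grows.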