Complex Neuro-Fuzzy System for Function Approximation

Published 09 October 2013. Abstract. Complex fuzzy sets have been developed recently and extend truth values to the unit circle in the complex plane. Complex fuzzy logic was then developed by employing complex fuzzy sets. In this paper, a novel adaptive complex neuro-fuzzy inference system based on complex fuzzy logic is proposed for function approximation. The underlying procedure of this network and its learning rule are described. Afterwards, the performance of the system is evaluated on two functions, a sine wave and the sinc function.


Introduction
Fuzzy set theory, proposed by Zadeh [1], and consequently fuzzy logic have been widely used for different purposes. Recently, the complex fuzzy set [2] has been proposed as an extension to traditional fuzzy set theory. According to [2], a complex fuzzy set extends truth values to the unit disk of the complex plane, while the possible membership degrees of a traditional fuzzy set are limited to real numbers in the range [0, 1]. Therefore, a complex fuzzy set is defined mathematically as

S = {(x, µ_S(x)) | x ∈ U},   µ_S(x) = r_S(x) · e^{j ω_S(x)}

where S is the complex fuzzy set, U is the universe of discourse and µ_S is the complex membership function that characterizes S. Each complex membership function consists of an amplitude part r_S(x) bounded in [0, 1] and a phase part ω_S(x), which can range over [0, 2π].
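The polar form of a complex membership grade above can be sketched directly; this is a minimal illustration of the definition, not part of the paper's implementation:

```python
import cmath

def complex_membership(r, omega):
    # mu_S(x) = r_S(x) * exp(j * omega_S(x)); r in [0, 1], omega in [0, 2*pi]
    return r * cmath.exp(1j * omega)

mu = complex_membership(0.8, cmath.pi / 4)
# abs(mu) recovers the amplitude part, cmath.phase(mu) the phase part
```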
Complex fuzzy sets provide a novel framework that retains all the advantages of type-1 fuzzy sets in addition to the extra properties and characteristics attributed to their complex-valued nature. This nature allows complex membership functions to exhibit "wave-like" properties, so they can interfere with each other constructively and destructively [3]. Nevertheless, considering all properties and interactions of complex fuzzy sets is very difficult, if not impossible, for humans and even experts. This motivates the use of machine learning architectures to elicit rules and tune complex membership functions.
Chen et al. [4] proposed the adaptive neuro-complex fuzzy inferential system (ANCFIS) as the first realization of complex fuzzy logic in machine learning. ANCFIS employs the signature property [5] to reduce the network size. Moreover, its complex membership functions are defined as sinusoids to mimic the Fourier theorem [3]. Nonetheless, this system is univariate and has only been applied to time-series forecasting.
In this paper, we propose an adaptive complex neuro-fuzzy inferential system (ACNFIS) to deal with function approximation problems. The proposed system is based on the well-known adaptive network fuzzy inference system (ANFIS) [6], with modifications to employ complex fuzzy sets. Moreover, we use the approach suggested in [5] to build complex membership functions from two separate real-valued functions for amplitude and phase.
The rest of this paper is organized as follows: Section 2 gives a brief review of complex fuzzy sets and their operations. Section 3 describes the proposed ACNFIS architecture. Results are discussed in Section 4, followed by the conclusion and future work.

Complex Fuzzy Logic Review
Fuzzy logic suggests an alternative way of modeling uncertainty. It allows human knowledge to be applied to vague and imprecise information linguistically. This gives the opportunity to model systems with linguistic variables, if-then rules and human reasoning, without precise quantitative analysis. Ramot et al. [2] proposed the complex fuzzy set, which extends the possible values of traditional fuzzy sets from real numbers to the complex plane. A complex fuzzy set S is characterized by a complex-valued membership function µ_S(x) whose range is the unit disk in the complex plane. Thus, a complex membership function assigns a complex number to any element x in the universe of discourse. This concept is different from fuzzy complex numbers [7-10]. A fuzzy complex number is a type-1 fuzzy set with complex-valued members; equivalently, it is a real-valued function defined on the set of complex numbers. The notion of the complex fuzzy set, however, employs a complex-valued membership function to map each element into [0, 1] × [0, 2π]. The basic operations introduced by Ramot et al. [5] are as follows:

1. Union: Let µ_A(x) = r_A(x) · e^{j ω_A(x)} and µ_B(x) = r_B(x) · e^{j ω_B(x)} be the complex membership functions of complex fuzzy sets A and B defined on the universe of discourse U. The complex membership function of the union is

µ_{A∪B}(x) = [r_A(x) ⊕ r_B(x)] · e^{j ω_{A∪B}(x)}

where ⊕ represents any s-norm that satisfies the type-1 fuzzy union axioms. For the phase part ω_{A∪B}, the function can be selected from several possibilities; the following are mentioned in [5]: Min, Max, and Winner Take All.

2. Intersection: Assume µ_A and µ_B are the complex membership functions of two complex fuzzy sets A and B.
Similar to the union, the intersection is introduced as

µ_{A∩B}(x) = [r_A(x) ⋆ r_B(x)] · e^{j ω_{A∩B}(x)}

where ⋆ can be any t-norm that satisfies the type-1 fuzzy intersection axioms. The possible functions for the phase part are the same as those introduced for the union. The selection of these operators is entirely application dependent. In the rest of this paper, for intersection we use the algebraic product as the t-norm for the amplitude and summation for the phase part; these selections are made so that the intersection resembles complex-valued multiplication.

3. Complement: Suppose µ_A is the complex membership function of complex fuzzy set A. The complement operator mentioned in [2] is defined as

µ_Ā(x) = [1 − r_A(x)] · e^{j ω_A(x)}

The complement operator treats the amplitude and phase parts of the complex membership function separately.
The traditional complement operator applies to the amplitude part; the treatment of the phase, however, depends on the interpretation of membership phase. According to [2], the membership phase of the complement of S must be the same as that of S in order to satisfy the axioms. Therefore, the complement operator does not modify the membership phase.
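The three operations above can be sketched as follows. This is an illustrative sketch of the definitions in [2, 5], using Max for the phase of the union (one of the listed options) and the paper's product/sum choice for the intersection:

```python
import cmath

TWO_PI = 2 * cmath.pi

def c_union(mu_a, mu_b):
    # amplitude: Max s-norm; phase: Max, one of the options mentioned in [5]
    r = max(abs(mu_a), abs(mu_b))
    w = max(cmath.phase(mu_a) % TWO_PI, cmath.phase(mu_b) % TWO_PI)
    return r * cmath.exp(1j * w)

def c_intersect(mu_a, mu_b):
    # the paper's choice: algebraic product for amplitude, sum for phase,
    # which coincides with ordinary complex multiplication
    return mu_a * mu_b

def c_complement(mu_a):
    # standard complement on the amplitude; the phase is left unchanged per [2]
    r, w = abs(mu_a), cmath.phase(mu_a)
    return (1.0 - r) * cmath.exp(1j * w)
```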
The complex fuzzy set is the backbone of complex fuzzy logic. The latter is a natural extension of fuzzy logic that benefits from the advantages of complex fuzzy sets [5]. As a result, complex fuzzy logic can deal with problems that are difficult or impossible to address with traditional fuzzy logic.

Architecture of ACNFIS
In this section, we introduce ACNFIS and its underlying procedure. ACNFIS is a multilayer feed-forward complex-valued neural network whose structure is given in Fig. 1. The architecture of ACNFIS is based on the well-known ANFIS [6], in which the node functions are modified to employ complex fuzzy logic. Each layer and its node function are as follows.
Layer 0: This layer passes the input vector to the next layer without any modification.
Layer 1: This layer transforms each input value into a complex membership grade; fuzzification of the input data takes place at this stage. The node function of the j-th node can be written in terms of an amplitude part A(x) and a phase part P(x) as

µ_j(x) = A_j(x) · e^{j P_j(x)}

We selected two separate real-valued functions as A(x) and P(x) to represent the amplitude and phase parts. This choice is motivated by Liouville's theorem [11], according to which a fully complex-valued function cannot be both analytic and bounded unless it is constant. However, the amplitude of a complex membership function must be bounded in the unit interval [0, 1]. Selecting two separate real-valued functions makes it possible to build a bounded complex membership function.
Nevertheless, other approaches can bound the complex-valued grade of membership, such as the Elliott function utilized in [4]. For the phase part there is no such restriction; indeed, a function that spreads over the whole range [0, 2π] is more effective. To this end, we selected two Gaussian functions to build the complex-valued membership functions:

A_j(x) = exp(−(x − c_A)² / (2σ_A²)),   P_j(x) = 2π · exp(−(x − c_P)² / (2σ_P²))

where the subscripts A and P indicate amplitude and phase parameters, respectively. Fig. 2 is a visual representation of a sample complex membership function of this family. Each complex membership function has four nonlinear (antecedent) parameters to be identified and fine-tuned. Therefore, the nodes in this layer are adaptive, and the antecedent parameters are updated iteratively based on the mean squared error and the gradient vector.
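A Layer-1 node of this family can be sketched as below. The 2π scaling of the phase Gaussian is an assumption here (the paper only states that the phase should spread over [0, 2π]); the four antecedent parameters are the two centers and two widths:

```python
import math

def gaussian(x, c, sigma):
    # real-valued Gaussian, bounded in (0, 1]
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def complex_mf(x, c_a, s_a, c_p, s_p):
    # amplitude part A(x): Gaussian, bounded in [0, 1]
    # phase part P(x): Gaussian scaled to [0, 2*pi] (assumed scaling)
    r = gaussian(x, c_a, s_a)
    omega = 2.0 * math.pi * gaussian(x, c_p, s_p)
    return complex(r * math.cos(omega), r * math.sin(omega))
```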
Layer 2: The nodes of this layer are static. Each node represents a specific rule, and its output is the firing strength of that rule, so this layer can be interpreted as the rule base of the complex fuzzy logic system. Normally, we consider all possible rules at this stage. Complex-valued multiplication is selected as the node function, since it satisfies both the amplitude and the phase t-norm axioms (see Section 2):

w_j = ∏_{k=1}^{n} µ_{jk}(x_k)

where j is the index of the rule, k is the index of the antecedents and n is the total number of inputs of the j-th rule.
Layer 3: Normalized rule firing strengths are calculated in this layer. The output of each node is evaluated using

w̄_j = w_j / Σ_i |w_i|

We use the sum of the weight amplitudes to normalize the weights. This yields a rotation-invariant operator. In addition, if the sum of the complex weights were used instead, the denominator could be zero, causing a singularity at this stage that would make the system unreliable.
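Layers 2 and 3 can be sketched together; note that the normalizing denominator is the sum of amplitudes, which is strictly positive whenever any rule fires:

```python
import cmath

def firing_strengths(mf_grades_per_rule):
    # Layer 2: complex product of the antecedent grades of each rule
    ws = []
    for grades in mf_grades_per_rule:
        w = 1 + 0j
        for g in grades:
            w *= g
        ws.append(w)
    return ws

def normalize(ws):
    # Layer 3: divide by the sum of AMPLITUDES, not of complex weights,
    # so the operator is rotation invariant and the denominator cannot
    # vanish by phase cancellation
    denom = sum(abs(w) for w in ws)
    return [w / denom for w in ws]
```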
Layer 4: Takagi and Sugeno's inferential system [12] is implemented in this layer. Each node corresponds to the consequent of one rule of the Takagi-Sugeno system and therefore has linear parameters to be found. The node function we used is

O_j = |w̄_j| cos(∠w̄_j) (p_j x_1 + q_j x_2 + r_j) + |w̄_j| sin(∠w̄_j) s_j

where |w̄_j| and ∠w̄_j are the amplitude and phase of the normalized weight and {p_j, q_j, r_j, s_j} is the consequent parameter set.
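Since the displayed node function was lost in extraction, the sketch below is a hedged reconstruction consistent with the coefficient matrix described in the learning section, where |w̄| cos(∠w̄) multiplies the linear Takagi-Sugeno terms and |w̄| sin(∠w̄) multiplies s_j:

```python
import cmath, math

def consequent_output(w_norm, x1, x2, p, q, r, s):
    # alpha = |w|cos(phase), beta = |w|sin(phase) -- the alpha/beta terms of
    # the LSE coefficient matrix (they equal the real and imaginary parts
    # of the normalized weight)
    alpha = abs(w_norm) * math.cos(cmath.phase(w_norm))
    beta = abs(w_norm) * math.sin(cmath.phase(w_norm))
    return alpha * (p * x1 + q * x2 + r) + beta * s
```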
Layer 5: This is the output layer, which calculates the output of the system as the summation of all its input values.
The aforementioned system is an adaptive network that implements complex fuzzy logic. As mentioned, it has two sets of parameters: premise (nonlinear) and consequent (linear) parameters. To find the consequent parameters, we employ the well-known least squares estimation (LSE) [6]. Let the vector X be the set of consequent parameters. Based on the operations of layers four and five, the output can be written as a linear combination of the consequent parameters X. Thus, with 2 inputs and P training data pairs we obtain (17), where A is the known coefficient matrix (18) and B is the desired output vector (19).

A X = B    (17)

where T_p is the target value of the p-th training data pair, and α_p^i and β_p^i denote |w̄_p^i| cos(∠w̄_p^i) and |w̄_p^i| sin(∠w̄_p^i) for the p-th training data pair, respectively. We use the pseudo-inverse least squares estimate to find the best set of consequent parameters:

X = (AᵀA)⁻¹ Aᵀ B

Note that during the forward pass the consequent parameters are identified while the premise parameters are held constant. Once the consequent parameters are identified, we use the Levenberg-Marquardt (LM) algorithm [13] to update the premise parameters. The error vector is defined as

e_i = T_i − O_i

where O_i and T_i are the desired and ACNFIS outputs for the i-th data pair, respectively. Accordingly, the squared error can be written as the dot product of the error vector:

E = ½ eᵀe    (22)

The factor ½ is added in (22) for the sake of simplicity of the derivatives. According to LM back-propagation, the parameters are updated as follows.
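The forward-pass LSE step can be sketched as below, with a small illustrative system (the A and B values are made up for the example):

```python
import numpy as np

def lse_consequents(A, B):
    # forward pass: premise parameters fixed, solve A X = B for the
    # consequent parameters in the least-squares sense via the pseudo-inverse
    return np.linalg.pinv(A) @ B

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
B = np.array([1.0, 2.0, 3.0])
X = lse_consequents(A, B)  # consistent system: exact solution [1, 2]
```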
x ← x + Δx

where x is the antecedent parameter vector and Δx is the update vector, calculated from the Jacobian and approximate Hessian matrices of the error w.r.t. the antecedent parameters:

Δx = −(JᵀJ + µI)⁻¹ Jᵀ e

where J is the Jacobian matrix of the error w.r.t. the antecedent parameters, with entries J_{ik} = ∂e_i/∂x_k, and µ is the LM coefficient. The value of µ changes adaptively based on the difference between two consecutive errors: if the error decreases, µ is divided by a factor β; if the error increases, µ is multiplied by β, where β is a constant greater than one.
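The backward-pass update and the adaptive µ schedule can be sketched as:

```python
import numpy as np

def lm_step(J, e, mu):
    # delta_x = -(J^T J + mu * I)^(-1) J^T e
    n = J.shape[1]
    return -np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)

def update_mu(mu, err_new, err_old, beta=10.0):
    # divide mu by beta when the error decreased, multiply otherwise
    return mu / beta if err_new < err_old else mu * beta
```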
In our design, we chose β = 10. During the training phase, the system applies this hybrid learning algorithm iteratively at each epoch to optimize all antecedent and consequent parameters.

Results and Discussion
We evaluated the performance of the system by approximating the sine and sinc functions. For the sine function we chose the input domain [0, 2π] and extracted 41 uniformly spaced data pairs, consisting of 21 training pairs and 20 test pairs. An ACNFIS with two rules was trained on the training data, and the mean squared error reached 1.4 × 10⁻⁷. Afterwards, we tested the trained system on the test data, which gave a mean squared error of 1.3 × 10⁻⁷. Fig. 3 shows the output and error of ACNFIS on the sine test data; the complex membership function parameters are given in Table 1. The same procedure was followed for the sinc function. We extracted 81 uniformly spaced data pairs from the input domain [−2π, 2π] and used 41 pairs as training data; after 200 epochs the mean squared error reached 2.76 × 10⁻⁴ and the training process stopped. We then tested the system on the remaining 40 pairs, which gave a mean squared error of 2.52 × 10⁻⁴. Fig. 4 depicts the ACNFIS output and its error for the sinc function, and Table 2 shows the values of the complex membership function parameters. The performance of ACNFIS is compared with ANFIS and a feedforward neural network in Table 3. ACNFIS shows better performance than the two other approaches while having fewer nonlinear and linear parameters.
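The sine data set can be generated as below. The paper does not state how the 41 samples were divided, so the alternating train/test split here is only one plausible choice matching the reported counts:

```python
import math

# 41 uniform samples of sin(x) on [0, 2*pi]; alternating points give the
# 21-pair training set and the 20-pair test set (assumed split scheme)
xs = [2.0 * math.pi * i / 40 for i in range(41)]
pairs = [(x, math.sin(x)) for x in xs]
train = pairs[0::2]  # 21 pairs
test = pairs[1::2]   # 20 pairs
```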

Conclusion
An adaptive complex neuro-fuzzy inference system (ACNFIS) has been proposed in this paper. The system is capable of learning various functions from input-output data and approximating them accurately. The underlying procedure of the ACNFIS network shows that it is equivalent to the well-known ANFIS system; however, all node functions are modified to utilize complex fuzzy logic. Moreover, a closed-form learning rule is derived to optimize the parameters of ACNFIS in forward and backward passes. The performance of the system was evaluated on the sine and sinc functions, which demonstrates the capability of the proposed system for approximating nonlinear functions. Future work can concentrate on the interpretation of complex fuzzy sets and complex-valued grades of membership. Furthermore, various structures that have been developed for traditional fuzzy sets can be modified to utilize complex fuzzy logic.

Fig. 3. Approximation of the sine function: (a) test data (solid) versus ACNFIS output (dashed); (b) testing error.

Fig. 4. Approximation of the sinc function: (a) test data (solid) versus ACNFIS output (dashed); (b) testing error.

Table 1. Complex membership function parameters for sine approximation.

Table 3. Performance comparison for sinc function approximation (columns: nonlinear/linear parameters, MSE_trn, MSE_chk).