1 Introduction

While tracing localization problems, the first significant study on wireless positioning and localization can be traced back to the pioneering work of the Applied Physics Laboratory (APL) based on monitoring the radio transmission of Sputnik (the first man-made satellite, launched by the former Soviet Union in 1957) [1]. In the sequel, the satellite was approximately located along its orbit by applying the microwave signals emanating from the satellite and their Doppler shift. Afterward, this study led to the appearance of TRANSIT, the first satellite positioning system, in 1961 [2]. In 1996, the latter became obsolete [2] because of the emergence of the Global Positioning System (GPS), which became the most popular and extensively applied positioning system in the world [3]. Subsequently, owing to the astonishing developments in wireless technologies, several device-enabled positioning systems have been developed.

The development of (wireless) positioning technology has been remarkable after the US Federal Communications Commission (FCC) introduced requirements for safety services, such as E-911 [4], which forced cellular network operators to provide the position of wireless terminals at a predefined accuracy level. This has been a significant driving force of research activities in localization technologies for almost two decades. Moreover, it turned out to be central for other critical activities, e.g., location-sensitive billing, fraud detection, intelligent transportation systems, and enhanced network performance [4,5,6].

The importance of localization in wireless sensor networks (WSNs) arises from several factors, which include the identification and correlation of gathered data, node addressing, query management of nodes localized in a determined region, evaluation of node density and coverage, energy map generation, geographic routing, and object tracking. All of these factors make localization systems a key technology for developing and operating WSNs. In this review, we view the localization problem from the perspective of a WSN, particularly focusing on fuzzy-based reasoning. Even though wireless positioning systems date back to the fifties, as already pointed out, and several GPS- and GSM-like positioning technologies have matured [7], several reasons motivate further developments in the field. First, following the achievements of satellite-based location services in outdoor applications, attention has shifted to the indoor environment, where improvements in indoor positioning have the potential to create unprecedented opportunities for businesses. Second, indoor positioning techniques still encounter several technical issues that restrict their accuracy level. These include multipath due to non-line-of-sight conditions and a higher density of obstacles, which affect signal attenuation. Third, boosted by industrial applications, the demand for millimeter- to nanometer-level positioning has emerged.

Moreover, research in this area is largely driven by manufacturers' research departments and, therefore, contains, to a large extent, unpublished solutions. Fourth, owing to the development of 5G networks, it becomes possible to establish multiple mobile relays. For instance, device-to-device (D2D) communication can reach an unprecedented scale, tapping 30.6 exabytes of monthly traffic by 2020 [8], which promotes the need for new collaborative architectures in positioning schemes. Additionally, owing to the exponential increase in data rates and the diversity of mobile applications, big data analytics are expected to play a vital role in subsequent location-related services. This ultimately opens the door for new positioning algorithms to address challenges that were not encountered in previous wireless systems. For example, massive multiple-input multiple-output technology with data rates on the order of gigabits per second has been developed by vendors such as Samsung and Huawei [9]. Fifth, the development of Internet of Things (IoT) technology enforces the need for new system designs and architectures supporting reliability, mobility, and spectrum management [10].

There are numerous review papers on wireless sensor positioning technologies and techniques. Reference [11] performed an extensive survey of wireless indoor positioning techniques and solutions, covering the state of the art up to 2005 for GPS, RFID, cellular-based, UWB, WLAN, and Bluetooth technologies. The performance parameters of 20 systems and solutions were compared in terms of accuracy, precision, complexity, scalability, and robustness. In [12], a survey of mathematical methods for indoor positioning was conducted, in which four categories were highlighted: geometry-based methods, cost-function minimization, fingerprinting, and Bayesian techniques. In [13], 13 different indoor positioning solutions were considered by focusing on high-precision technologies that operate at the millimeter to centimeter level. The evaluation was performed from the perspective of a geodesist and included the criteria of accuracy, range, signal frequency, principle, market maturity, and acquisition costs.

The textbook [14] provides a comprehensive review of radio navigation techniques, specifying the methods for radio distance estimation. More recent developments in wireless positioning, focusing on algorithms for moving receivers, are provided in [15].

Despite the success of fuzzy logic in various industrial and commercial applications [16], there is no comprehensive review that provides both the academic community and practitioners with a global and detailed view of the implementation of fuzzy-based reasoning in wireless positioning. This review contributes to filling this gap and opens up a new perspective on applications of fuzzy-based reasoning/soft-computing techniques in wireless positioning systems.

The rest of this paper is organized as follows. Section 2 introduces the terminologies of positioning problems and properties of the positioning systems. In Section 3, we review the design approaches and challenges from a modeling design viewpoint and sources of uncertainties. Section 4 describes the classification criteria followed in this study. Section 5 highlights the parametric models and evaluation criteria. Section 6 describes the surveyed fuzzy-based localization systems. Finally, the conclusion is presented in Section 7.

2 Terminology and background of the positioning systems

2.1 Terminology

We refer to an object whose position is unknown as the target object. The position or location of the target is determined with respect to a predefined frame, which can be defined on an absolute scale as a spatial Galilean frame or a relative scale (e.g., with regard to neighboring objects). Moreover, the positioning algorithm is referred to as the set of processes/steps or mathematical model(s) that establishes some spatial relationships between the target and measurements, leading to an exact or approximate estimation of the target location.

2.2 Location-based systems

Owing to the recent development of location-based services (LBS) [2], positioning technology can be applied in various fields, including industrial, medical, safety, and transportation applications, as well as many other commercial fields. Figure 1 gives a graphical overview of LBS. Indeed, the positioning task answers the following question: “Where am I?” Therefore, it contributes to answering the subsequent questions: “What is nearby?” “How to reach that location?” “How to optimize my resources while achieving my task?”

Fig. 1

LBS system architecture

As shown in Fig. 1, the positioning problem is mainly connected to the context, sensory information, and perceived environment, which, in turn, substantially constrain the position estimation process and the achievable accuracy. For instance, some mobile location services only attempt to determine whether some predefined attractions (e.g., a hotel, shop, or fuel station) occur in the vicinity of the user location. These services do not require the exact location of the attraction, since it is enough to indicate its presence or absence within the area. Similarly, in network-based localization, one may need, for example, to identify the node that is responsible for deteriorating the network service or that is subject to an initial attack, which may require a detailed review of the activity history of all candidate nodes. In geo-data positioning, one often requires a latitude–longitude estimate of the target object, which may involve advanced state estimation techniques. Moreover, to achieve complex rendezvous tasks, industrial robotics-like applications regularly necessitate very high precision that can reach the nanometer scale. Central to any positioning technology are the environmental constraints and the quality of the available prior knowledge, which also reflect the level of autonomy of the device(s) to be positioned. From this perspective, one can distinguish between a fully known environment, a partially known environment, and an entirely unknown environment. For example, in WSNs, a device-enabled wireless emitter/receiver continuously senses its surrounding environment and searches for event occurrences, which may include changes in the received signal strength and other sensory information (e.g., temperature, pressure, lighting, and humidity); this mainly requires full knowledge of the nodes at which each piece of information is captured. Moreover, equipped with advanced cameras and a range of other wireless sensory modalities, autonomous systems can properly map a completely unknown environment and can execute complex navigation tasks. In this case, the estimation process includes both target positioning and environment map estimation; techniques of simultaneous localization and mapping (SLAM) fall into this category [17, 18].

2.3 Technological trends in positioning systems

For an outdoor positioning system with meter-level accuracy, GPS is the most common worldwide radio navigation system in the case of good satellite coverage. However, in the presence of obstacles or in indoor environments, electromagnetic waves are attenuated, drastically reducing the accuracy of GPS signals [19]. For instance, global navigation satellite system (GNSS) signals are attenuated indoors by 20–30 dB (a factor of 100–1000) compared to outdoors. Nowadays, infrared radiation (IR) technology is incorporated into most smartphones, PDAs, and TV devices as a wireless positioning technology that utilizes the line-of-sight communication mode between the transmitter and receiver without interference from active light sources [19]. Radio frequency (RF) technology [20] has the advantages of penetrating through obstacles and human bodies, offering broader coverage and (relatively) reduced hardware infrastructure requirements. It also encompasses numerous (sub)technologies, including narrow-band technologies (RFID, Bluetooth, and WLAN (Wi-Fi and FM)) and wide-band technology (UWB), the latter achieving centimeter-level accuracy. Furthermore, ZigBee is an emerging wireless technology standard that provides a solution for short- and medium-range communications of 20–30 m, designed for applications requiring low power consumption and low data throughput, where the distance between two ZigBee nodes is measured using RSSI values.

The ultrasound system is a cheap technology inspired by the echolocation of bats, and it operates in a lower frequency band, where ultrasound signals are applied to estimate the position of the emitting tags from the receivers. These signals provide relatively lower accuracy than many IR technologies, and they suffer from interference from reflecting surfaces, such as walls, metals, or other obstacles [21].

The availability of cheap accelerometers and odometer sensors enabled the development of internal mode positioning technology, wherein the location is determined through the integration over the traveled path from the initial position of the target. Over long distances, accumulation of errors obviously constitutes a severe challenge to such technology. However, the method is promising whenever it is possible to update the target position using external sensors (to reduce the effect of error accumulation) [22].

Moreover, the use of magnetic field maps has emerged as a promising positioning technology with the availability of compass sensors in many mobile handheld devices [23].

Finally, numerous hybrid models that utilize more than one technology also emerged, where various sensor technologies were applied in the same platform, offering the possibility to test many hybrid schemes. For instance, nowadays, numerous smartphones are already embedded with odometer sensors (internal positioning), proximity sensors, Wi-Fi, and Bluetooth sensors. Various available sensors and measurement modalities (e.g., signal strength, angle of arrival (AOA), time of flight (TOF) and its difference, and cell ID) have led to several localization schemes such as triangulation, trilateration, hyperbolic localization, and data matching. Figure 2 (from [24]) depicts some of these technologies regarding the accuracy level.

Fig. 2

The accuracy of various technologies [24]

Moreover, numerous commercial hybrid positioning systems are currently used in various services from Combain Mobile, Navizon, Xtify, PlaceEngine, SkyHook, Devicescape, Google Maps for Mobile, and OpenBmap for applications in smartphones.

An FCC report [25] highlighted three primary location technologies currently in use: cellular sector/base ID, GPS technology, and Wi-Fi. First, in cellular sector-based location, the location of the handset is associated with the coverage area of the serving base station. Thus, the radius covered can vary from several miles to a city block or even an individual business or residence, depending on the cell density and network architecture. Increased resolution can be achieved by triangulating among overlapping cell sectors, and it is often used by service providers to improve the accuracy of emergency response and to monitor coverage. Second, the GPS-based location, provided in the form of a simple coordinate pair (e.g., latitude and longitude), is often transmitted to third parties to obtain maps or other information based on a device’s location. Third, in Wi-Fi-based technology, the handheld device scans its surroundings for public or open networks. Wi-Fi LBS depends on active surveys of an area to record the unique identifier and location of each Wi-Fi base station, including everything from hotspots in coffee shops and hotels to residential and business networks. When a Wi-Fi-enabled device accesses a location service, the browser or application may send to the service the coordinates of the Wi-Fi networks it is currently “seeing,” thereby allowing the current location to be triangulated.

2.4 Constraints on positioning algorithms

As shown in Fig. 3, any positioning algorithm heavily depends on the available resources, time constraints, computational costs, and accuracy and precision requirements, among others. However, there is no clearly favored positioning system or algorithm across the spectrum.

Fig. 3

Positioning system architecture

Moreover, several other aspects contribute to the choice of the appropriate positioning algorithm; these aspects are reported in many generic surveys [10,11,12, 26, 27]. We are mainly interested in showing the classes in which fuzzy theory tools have been employed. For example, through our review, we observed that there were almost no fuzzy-based approaches that explored the type of location information (e.g., physical, symbolic, absolute, or relative), despite the fact that symbolic and relative localization provides a very coarse-grained position [3], which imposes a vague description of the position information. This would essentially provide a rationale for applying fuzzy-reasoning-like analysis, owing to its appealing framework for modeling linguistic descriptions of human knowledge regarding symbolic or coarse-grained position information.

Moreover, according to the fuzzy literature, only a few studies considered localization problems from network-based [11, 23, 28,29,30,31] or handset-based perspectives [32,33,34,35]. Fewer studies considered the type of environment [36,37,38,39]. Regarding the measurement methodologies, numerous studies investigated the received signal strength (RSS) [37, 40,41,42], whereas almost no study investigated other types of measurements such as time-of-arrival (TOA), time-difference-of-arrival (TDOA), AOA, received signal phase-of-arrival (POA), hop count, and RF. Moreover, fuzzy tools can be intuitively handy in modeling radio propagation to address the uncertainties imposed on the radio wave by the available and variable objects in the environment, for example, using specific communication protocols [26, 43,44,45,46,47]. The amount of processing performed by each node has been considered through either centralized or decentralized architectures [40, 48, 49]. Some previously conducted studies [43, 50] investigated the cooperative and non-cooperative aspects of the communication between nodes in the WSN architecture.

Among all of the classes, the position calculation algorithm has attracted the largest number of investigations, including geometric calculations (lateration, multilateration, triangulation, and area calculations), proximity (NN, KNN, and ID-CODE), and scene analysis (fingerprinting). Moreover, each of these algorithms has its pros and cons, thereby motivating hybrid schemes to increase performance [51, 52]. Merging the knowledge from various algorithms also seems to be an attractive field of application for fuzzy-related tools by utilizing their flexibility at both the modeling and aggregation stages. In this class, several subclasses can be distinguished, e.g., range-based versus range-free methods and deterministic versus non-deterministic methods.

To estimate locations using one or more measurement techniques, the range-based scheme needs either node-to-node distances or angles [20, 38, 53, 54]. Moreover, the range-free scheme includes fingerprinting, where an initially constructed map or grid is mapped to the actual measurement set [30, 53] or hop count from each anchor using a dedicated routing protocol [55, 56].

In deterministic methods, the location information is derived as the solution of some analytical or approximate problem through deterministic mappings, without a precise account of uncertainty; this class encompasses, for example, k-means-like matching in fingerprint association or deterministic range-intersection methods [55]. In contrast, non-deterministic methods, such as probabilistic, fuzzy, or statistics-based models, include Bayesian-like reasoning for fingerprint matching, Kalman filtering, belief propagation approaches [57,58,59,60,61,62], joint probability distributions using factorization on a graphical model [36, 63, 64], and various soft-computing-related techniques [65,66,67]. In general, if knowledge regarding the distribution is available, then the probabilistic techniques outperform the deterministic ones and are preferred.

2.5 Challenges of positioning systems

The indoor location market includes indoor positioning-based services (and thus positioning systems) and solutions designed to support use cases around (indoor) location-based analytics (e.g., understanding customer traffic), indoor navigation, and real-time tracking. In addition to mobile technology, these services and solutions can transform the user experience for customers and travelers. Similarly, for enterprises and corporations, leveraging indoor location data can result in improved business insights and new engagement models with customers. These new indoor location-based business opportunities were estimated at approximately $10B by 2020 [66]. In the retail domain, indoor localization use cases can increase customer loyalty and thus sales revenues. It is expected that retail businesses that employ such targeted messaging combined with indoor positioning systems may gain a 5% increase in sales revenues, and customer traffic analysis is expected to optimize the human resources in the enterprise. Overall, although indoor LBS can transform the retail as well as the Travel and Transportation (T&T) industries in a way that would optimize their internal resources and open new market opportunities, this emerging area suffers from shortcomings such as complex maintenance of the corresponding indoor sensing platforms, the lack of data quality assessment tools, and limited accuracy. In addition to the continuing challenges of positioning systems caused by the presence of numerous facets of uncertainty, these shortcomings make the design of a universally accepted solution beyond reach. These challenges include data access restrictions and technological difficulties, especially when handling disparate data sources of distinct reliability, as well as methodological limitations due to the potential sub-optimality and approximation employed as part of the preprocessing/postprocessing of data. This, in turn, motivates the application of new uncertainty theories to wireless positioning systems.

For example, IBM suggested a set of data smoothing algorithms for cleaning noisy indoor data [67]. These algorithms reveal new market opportunities by supporting new indoor use cases, such as detection of common customer paths, targeted/wanderer customers, and queue length. This study is motivated by the FCC report [68], which clearly stated the LBS trends, challenges, and requirements.

3 Motivation grounds for the fuzzy-logic-based reasoning

3.1 The uncertainty pervading conventional wireless positioning systems

Uncertainty is always observed as an inherent operational aspect of any wireless system irrespective of the technology employed. In particular, when the types of uncertainty in sensor networks are identified and quantified, more effective and efficient data management strategies, which simply influence the quality of the positioning systems, can be developed. In this regard, several aspects of the uncertainty can be distinguished [15, 49, 69, 70].

  i) Communication uncertainty, where mobile sensor networks can exhibit intermittent connection patterns. Therefore, quantifying the uncertainty of communication links contributes to better routing decisions.

  ii) Sensing uncertainty, in which sensor range and coverage are dominantly affected by environmental interferences, noise, and other systematic physical limitations of the sensor hardware. The consideration of such information by applying statistical, soft-computing, or other models that can capture sensor behavior would facilitate effective sensor deployment strategies.

  iii) Data uncertainty, caused by the inherent imprecision that affects sensor readings. Therefore, in networked sensor systems, assigning confidence values or distributions to sensor readings can provide better quality results and decision-making.

For example, in outdoor urban environments that employ the cellular network to estimate the exact position of a mobile client, the signal attenuation radio propagation model (RSS) can provide a means of analysis concerning the receiver's (mobile client's) location. Such a model remains vulnerable to NLOS and multipath conditions, which, in turn, negatively affect the accuracy level. Therefore, both sensing uncertainty, which accounts for the signal propagation affected by environmental constraints as well as update rate limitations, correlated errors from the receiver's clock offset lag, and so on, and data uncertainty, due to fluctuations of the sensor readings over time, should be accounted for through appropriate uncertainty modeling. Similarly, the use of odometer-like sensors, such as a wheel encoder that provides incremental position measurements, suffers from an unbounded accumulation of estimation errors over long traveling distances, triggering both non-negligible sensing and data uncertainties.

Therefore, to improve the accuracy of location estimates in network-based systems, clarifying such uncertainty is very important.

3.2 Inherent characteristics of fuzzy systems for handling uncertainty

Fuzzy logic, which was introduced by Zadeh in the 1960s, is a form of multivalued logic that addresses approximate reasoning [69]. The basis of FL is the fuzzy set, which is an extension of the classical set. It aims to model human reasoning, which is approximate by nature instead of precise, and allows inferring a possibly imprecise conclusion from a collection of imprecise premises. For example, using the knowledge “IF Node A is CLOSE to Node B, THEN mobile accuracy is HIGH” and “IF Node A is FAR from Node B, THEN mobile accuracy is MEDIUM,” we may want to infer the state of mobile accuracy if Node A is VERY FAR from Node B. In the sequel, the meaning of an imprecise proposition is represented as an elastic constraint on (linguistic) variables, and the inference is derived using propagation of these elastic constraints, which extends the domain of inference systems of propositional/predicate/multivalued logic. The fact that fuzzy logic provides a systematic frame for tackling fuzzy quantifiers (e.g., very, high, and most) enables the underlying theory to subsume both predicate logic and probability theory, which, in turn, makes it possible to handle various types of uncertainty within a single conceptual framework.

In addition to the aforementioned approximate reasoning view, the concept of linguistic variables, fuzzy quantifiers, fuzzy rules, canonical forms, and connectives plays a key role, and another significant fuzzy logic development arises from mathematically developing the fuzzy set theory [71, 72], which is quite vast. Indeed, fuzzy logic is a branch of the fuzzy set theory, and other branches are fuzzy arithmetic, fuzzy mathematical programming, fuzzy topology, and so on [73, 74]. Therefore, the development of the fuzzy set theory produced fuzzy estimation, fuzzy optimization, fuzzy pattern matching, fuzzy classification, and so on, eventually having robust potential applications in wireless positioning systems.

From a mathematical perspective, a linguistic variable is a variable whose values are words (its values are linguistic instead of numerical) [75,76,77]. For example, “height” is a linguistic variable with values “short,” “tall,” and “very tall.” These values are used as labels of fuzzy sets, each of which is defined by its membership function; e.g., μshort(u) assigns a degree to each numerical value u. The degree of membership lies in the interval [0, 1]. For instance, μshort(u) can be defined as follows:

$$ {\mu}_{\mathrm{short}}(u)=\left\{\begin{array}{ll}1, & \mathrm{if}\ u\le 50\\ {}\frac{1}{u}, & \mathrm{if}\ 50<u\le 100\\ {}0, & \mathrm{if}\ u>100\end{array}\right. $$
(1)

An example of rules where this can be applied is “IF distance is ‘short’ AND elapsed time is ‘high’ THEN weight is ‘high’.”
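As a minimal illustration (a sketch, not part of any cited system), the membership function in Eq. (1) and the rule above can be coded directly. The membership function for the elapsed time and the use of min as the AND connective are illustrative assumptions:

```python
def mu_short(u):
    """Membership of 'short' from Eq. (1): 1 below 50, 1/u between 50 and 100, 0 above."""
    if u <= 50:
        return 1.0
    if u <= 100:
        return 1.0 / u
    return 0.0

def mu_high_time(t):
    """Hypothetical membership for 'elapsed time is high' (assumed ramp, for illustration only)."""
    return max(0.0, min(1.0, (t - 30.0) / 30.0))  # 0 below 30, 1 above 60

# Rule: IF distance is 'short' AND elapsed time is 'high' THEN weight is 'high'.
# The firing strength uses min as the AND (t-norm); product would be an alternative choice.
def rule_strength(distance, elapsed_time):
    return min(mu_short(distance), mu_high_time(elapsed_time))

print(rule_strength(40, 70))   # fully 'short' and fully 'high' -> 1.0
print(rule_strength(80, 45))   # partial memberships -> min(1/80, 0.5) = 0.0125
```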

Since fuzzy logic can systematically handle approximate information, it is well suited for controlling nonlinear systems, modeling complex systems, and drawing inferences from expert-like rules. Developing the components of a fuzzy logic system, which include determining the optimal number of fuzzy rules as well as the parameters of the underlying fuzzy sets and connectives, remains a debated design issue, and several contributions are nowadays available in the field. Examples of available fuzzy software can be found in the following references: [78, 79].

The development of fuzzy logic systems can follow IEEE 1855–2016, a standard for the fuzzy markup language (FML) [80] developed by the IEEE Standards Association. FML enables the modeling of a fuzzy logic system in a human-readable and hardware-independent way.

As a result of their capabilities to solve various problems through the provision of a notational platform for knowledge representation and inductive reasoning based on imprecision and uncertainty, fuzzy systems have become an important area where the fuzzy set theory can be applied. Additionally, they have been extensively and successfully applied in various disciplines and at diverse levels. In particular, fuzzy sets can incorporate human knowledge, granular computing, and deterministic and crisp information to describe complex system behaviors without requiring any precise mathematical models; notably, the positioning problem with the aforementioned imprecise knowledge and the lack of confidence and mathematical models establishes a rich ground for such application.

Another perceptible advantage of fuzzy systems is their ability to work standalone or to be combined, fully or partially, with other systems and techniques. They can augment or hybridize other systems (e.g., neural networks, genetic algorithms, and stochastic and statistical systems), yielding various hybrid modes of estimation theory. Further, the framework is extendable to tackle data representation and manipulation (e.g., the arithmetic of fuzzy numbers and operations), reasoning (fuzzy implications and inferences), statistics, classification, clustering, and estimation (fuzzy Bayesian, fuzzy Kalman, etc.) (for more details, see [76]).

4 An overview of principal fuzzy-based methodologies linked to wireless positioning systems

In this section, we propose a classification criterion for using fuzzy logic in the localization problem, and then we summarize the key fuzzy-related methodologies employed in most of the surveyed wireless fuzzy-based positioning systems.

4.1 Classification of fuzzy systems in localization

The main finding of our survey analysis concerns the level at which the fuzzy-based methodology is applied in the positioning system. On the basis of this perspective, one distinguishes two main streams. First, the fuzzy methodology is a part of the core estimation process of target positioning. Second, the fuzzy methodology plays only a secondary role as an assistant to some overall positioning system in which a non-fuzzy-based algorithm is employed for the estimation process, and fuzzy reasoning is used to provide a kind of support to the decision maker. We shall refer to the first class as incorporated fuzzy positioning (IFP), as depicted in Fig. 4, and the second class as assisted fuzzy positioning (AFP), as depicted in Figs. 5 and 6.

Fig. 4

IFP

Fig. 5

Pre-AFP

Fig. 6

Post-AFP

In the IFP, the fuzzy system is integrated into the positioning algorithm, as demonstrated in Fig. 4. Within this class, various directions could also be identified based on the way and level that the fuzzy tools have been employed.

In the AFP, the fuzzy system assists the positioning algorithm to enhance the result of the position estimation. For example, to detect uncertainty in the readings of sensors/receivers and eliminate noise in the signal, pre-AFP (Fig. 5) is used to fine-tune the measurements taken from the environment. In particular, this was considered when data fusion techniques were included, in which more than one source was applied for the measurement in the system.

Moreover, post-AFP (Fig. 6) is utilized to calculate errors or uncertainties in the location estimation, as well as to provide feedback to the position algorithm or user to carry fine positioning tasks or maintain the positioning consistency, especially when combined with another estimator, such as the Kalman filter.

Alternatively, some other previously conducted studies focused on hybridizing the IFP and AFP to increase the uncertainty handling features of the positioning system, for example, [81,82,83].

Table 1 presents the fuzzy system usage within the localization problem based on the aforementioned classification. Even a cursory reading of the table indicates the dominance of IFP-like usage in the localization systems.

Table 1 A fuzzy system with the localization classification

4.2 The fuzzy inference system

Fuzzy inference appears to be, by far, the most common form of fuzzy-based reasoning incorporated into the examined fuzzy-based positioning systems. As shown in Fig. 7, the conventional fuzzy inference system involves three stages: (i) fuzzification, where the fuzzy sets concerning the linguistic variables are constructed, (ii) fuzzy rule base aggregation, and (iii) defuzzification, which produces a potentially non-fuzzy output to be used in subsequent reasoning. The fuzzy rules capture the general knowledge concerning the problem domain and ultimately link antecedents to consequences, or premises with conclusions.

Fig. 7

An example of the fuzzy inference system

For instance, let X1, X2, …, Xn be the input domain variables and Y be a single output variable. Let \( {A}_i^j,i=1,2,\dots, n \) be the fuzzy input sets over the n input domains and Bj, j = 1, 2, …, m be the output fuzzy sets over the single output domain. Then, a system of m fuzzy if-then rules can be constructed as follows:

$$ {R}_1:\quad \mathrm{if}\ {X}_1\ \mathrm{is}\ {A}_1^1\wedge \dots \wedge {X}_n\ \mathrm{is}\ {A}_n^1,\ \mathrm{then}\ Y\ \mathrm{is}\ {B}_1; $$
$$ {R}_j:\quad \mathrm{if}\ {X}_1\ \mathrm{is}\ {A}_1^j\wedge \dots \wedge {X}_n\ \mathrm{is}\ {A}_n^j,\ \mathrm{then}\ Y\ \mathrm{is}\ {B}_j; $$
$$ {R}_m:\quad \mathrm{if}\ {X}_1\ \mathrm{is}\ {A}_1^m\wedge \dots \wedge {X}_n\ \mathrm{is}\ {A}_n^m,\ \mathrm{then}\ Y\ \mathrm{is}\ {B}_m. $$

To illustrate the functions of the various stages of the fuzzy inference system, first, the given (crisp) input Xi is fuzzified to obtain a fuzzy set \( {\overset{\sim }{X}}_i \) based on the corresponding input space.

Second, the input fuzzy sets \( \left({\overset{\sim }{X}}_1,{\overset{\sim }{X}}_2,\dots, {\overset{\sim }{X}}_n\right) \) are matched against the corresponding if-part sets of their input spaces in each of the rule antecedents in the fuzzy system (fuzzy rules), i.e.,

$$ {a}_i^j=S\left({A}_i^j,\kern0.75em {\overset{\sim }{X}}_i\right). $$
(2)

Typical S operators include max or alternative t-conorm connectives [110].

Third, various matching degrees \( {a}_i^j \) of the n input fuzzy sets to the antecedent of a fuzzy if-then rule are combined to:

$$ {\mu}_j=T\left({a}_1^j,\dots, {a}_n^j\right). $$
(3)

Typical T operators include min, product, or more general t-norm connectives [110].

Fourth, the combined value μj fires the rule consequent, i.e., the output fuzzy set Yj. In numerous fuzzy system models, the fired output is summarized by the centroid of Yj, i.e.,

$$ {f}_j=f\left({\mu}_j,{Y}_j\right). $$
(4)

Fifth, the fired output fuzzy sets (or crisp sets), fj, j = 1, 2, …, m, are then aggregated to obtain the following final output fuzzy set:

$$ y=g\left({f}_1,{f}_2,\dots, {f}_m\right). $$
(5)

The most commonly employed aggregation/defuzzification functions are the center of gravity defuzzification rule in the case of a Mamdani-type fuzzy inference system and the weighted average (based on membership grades) in the case of a Takagi–Sugeno fuzzy inference system [111].
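To make the pipeline in Eqs. (2)–(5) concrete, the following is a minimal sketch of a Mamdani-type inference for singleton (crisp) inputs, so the matching of Eq. (2) reduces to evaluating the antecedent membership functions at the inputs; min is used as the t-norm, max as the aggregation, and the center of gravity for defuzzification. The rule base, the variables (RSS and link quality mapped to a distance estimate), and all membership functions are hypothetical and serve only to illustrate the mechanics:

```python
import numpy as np

def tri(a, b, c):
    """Triangular membership function on a one-dimensional universe (a <= b <= c)."""
    return lambda x: np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                           (c - x) / (c - b + 1e-12)), 0.0)

# Output universe (hypothetical estimated distance in meters), sampled for defuzzification.
y = np.linspace(0.0, 30.0, 301)

# Hypothetical rule base: antecedents over (RSS in dBm, link quality in %), consequent over distance.
rules = [
    (tri(-60, -50, -40), tri(60, 80, 100), tri(0, 5, 10)),    # strong signal, good link -> near
    (tri(-80, -70, -60), tri(30, 50, 70),  tri(5, 15, 25)),   # medium signal            -> medium
    (tri(-100, -90, -80), tri(0, 20, 40),  tri(20, 25, 30)),  # weak signal, poor link   -> far
]

def mamdani(rss, quality):
    aggregated = np.zeros_like(y)
    for mu_a1, mu_a2, mu_b in rules:
        # Eqs. (2)-(3): matching degrees of the antecedents, combined with the min t-norm.
        firing = min(float(mu_a1(rss)), float(mu_a2(quality)))
        # Eq. (4): clip the consequent by the firing strength (min implication).
        fired = np.minimum(firing, mu_b(y))
        # Eq. (5): aggregate the fired consequents with max.
        aggregated = np.maximum(aggregated, fired)
    # Center-of-gravity defuzzification over the sampled output universe.
    return float(np.sum(y * aggregated) / (np.sum(aggregated) + 1e-12))

print(mamdani(-55, 85))  # falls in the "near" region (around 5 m)
print(mamdani(-88, 25))  # falls in the "far" region (around 25 m)
```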

In the Mamdani-type fuzzy inference system, the output B is given by its membership function μb as follows:

$$ {\mu}_b(w)={\bigvee}_{j=1}^m\ {\bigwedge}_{i=1}^n\left({a}_i^j\bigwedge\ {\mu}_{b_j}(w)\right), $$
(6)

where \( {a}_i^j={A}_i^j\wedge \overset{\sim }{X_i} \), \( {\mu}_j={\bigwedge}_{i=1}^n{a}_i^j \), and \( {\mu}_{b_j}(w) \) denotes the fuzzy output set bj of the jth rule.

Various extensions of the above model (6) have also been considered in the literature [112]. Moreover, the number of rules grows with the number of premise-part variables. As the number of rules increases, the task of assembling the rules can become very burdensome, and sometimes it becomes difficult to understand the relationships between the premises and consequences.

The issue of eliciting the fuzzy sets (fuzzification stage), i.e., identifying the membership functions and optimizing the number of fuzzy rules, has attracted significant research attention in the fuzzy community, and several approaches have been investigated. These range from expert-based eliciting, automatic classification, and clustering-based approaches to complex optimization-based approaches involving neural networks, genetic algorithms, and so on. The adaptive neuro-fuzzy inference system (ANFIS), based on a neuro-fuzzy learning mechanism, is probably the most commonly employed tool for generating fuzzy partitions and optimizing the fuzzy rule base [113]. Another related development in this field is the emergence of the type-2 fuzzy logic system.

4.3 Type-2 fuzzy logic system

The concept of the type-2 fuzzy logic system is motivated by the uncertainty pervading the assignment of membership grade values [114]. Accordingly, the membership grade of any element of the universe of discourse is allowed to lie between an upper and a lower bound; that is, it can take any value within that interval. Figure 8 illustrates the type-2 fuzzy inference system. Output processing comprises the type reduction, which generates a type-1 fuzzy set, and the defuzzifier, which converts the generated type-1 fuzzy set into the crisp output.
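As a small, hedged illustration of this idea (not a full type-2 inference engine, and omitting type reduction), an interval type-2 fuzzy set can be sketched as a pair of lower and upper type-1 membership functions bounding a footprint of uncertainty; the shapes and the distance variable are assumed for illustration only:

```python
def tri(a, b, c):
    """Type-1 triangular membership function."""
    return lambda x: max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Interval type-2 fuzzy set "close": the footprint of uncertainty is bounded by
# an upper membership function and a narrower, scaled lower membership function.
upper_mf = tri(0.0, 5.0, 10.0)
_lower_tri = tri(1.0, 5.0, 9.0)
lower_mf = lambda x: 0.8 * _lower_tri(x)   # kept below the upper bound everywhere

d = 3.0   # a hypothetical node-to-node distance in meters
print(f"membership of {d} m in 'close' lies in [{lower_mf(d):.2f}, {upper_mf(d):.2f}]")
# -> membership of 3.0 m in 'close' lies in [0.40, 0.60]
```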

Fig. 8

An example of a type-2 fuzzy inference system

4.4 Fuzzy clustering

Clustering is commonly employed to assign a category to unknown observations. In general, it comprises a broad spectrum of methods that try to subdivide a data set, X, into c subsets (clusters), which are pairwise disjoint, all nonempty, and reproduce X via their union. The clusters are then termed a hard (i.e., non-fuzzy) c-partition of X. Numerous algorithms with their mathematical clustering criteria for identifying “optimal” clusters have been discussed [58, 115]. A significant issue concerning the hard (non-fuzzy) algorithms is a defect in the underlying axiomatic model: each point in X is unequivocally grouped with the other members of “its” cluster, thus bearing no apparent similarity to the other members of X. A soft (fuzzy) notion of clustering was introduced by Zadeh [69] to characterize an individual point's similarity to all of the clusters. By utilizing a function (termed the membership function) whose values (called membership degrees) are between 0 and 1, the main point in fuzzy clustering is to represent the similarity a point shares with each cluster. Each sample can have a membership in every cluster; memberships close to 1 signify a high degree of similarity between the sample and a cluster, whereas those close to 0 signify little similarity. The net effect of such a function employed for clustering is to produce fuzzy c-partitions of a given data set. A fuzzy c-partition of X is one that characterizes the membership of each sample point in all of the clusters by applying a membership function, which ranges over the unit interval [0, 1]. Additionally, the sum of the membership grades for each sample point must be 1.

The fuzzy c-means (FCM) algorithm, proposed by Bezdek [116, 117], is one of the most extensively applied fuzzy clustering algorithms. The algorithm introduces a fuzzification parameter, m, that determines the degree of fuzziness of the clusters, where m can be in the range [1, N], with N being the number of data points in X. When m = 1, the effect is hard clustering, and when m > 1, the degree of fuzziness among the various points in the decision space increases. In every iteration, the objective of FCM is to minimize the objective function F:

$$ F={\sum}_{i=1}^N\ {\sum}_{j=1}^C\ {u}_{ij}^m\ {\left\Vert {x}_i-{c}_j\right\Vert}^2, $$
(7)

where C denotes the number of clusters required, cj denotes the center vector of cluster j, uij denotes the degree of membership of the ith data point xi in cluster j, and m is the fuzziness exponent. The norm ‖xi − cj‖ measures the similarity (or closeness) of the data point xi to the center vector cj of cluster j.

In each iteration, the algorithm maintains a center vector for each cluster, computed as the weighted average of the data points, where the weights are given by the degrees of membership. The membership degrees are updated as follows:

$$ {u}_{ij}=\frac{1}{\sum_{k=1}^C\ {\left(\ \frac{\left\Vert {x}_i-{c}_j\right\Vert }{\left\Vert {x}_i-{c}_k\right\Vert }\ \right)}^{\frac{2}{m-1}}} $$
(8)

Here, m denotes the fuzziness coefficient. FCM imposes a direct constraint on the fuzzy membership function associated with each point: the sum of the membership grades for point xi over the clusters in the decision space X must be 1. Moreover, cj can be calculated as follows:

$$ {c}_j=\frac{\sum_{i=1}^N\ \left({u}_{ij}^m.\kern0.5em {x}_i\right)}{\sum_{i=1}^N\ {u}_{ij}^m} $$
(9)

Although the FCM algorithm is slower than the hard clustering algorithm, it has been shown that the former provides better results in cases where data are incomplete or uncertain [117, 118].
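The FCM iterations of Eqs. (7)–(9) can be sketched in a few lines. The synthetic two-dimensional data (standing in, say, for RSS readings from two access points), the number of clusters, and the fuzziness coefficient m are illustrative assumptions, not values from the surveyed systems:

```python
import numpy as np

def fcm(X, C=2, m=2.0, iters=100, tol=1e-5, seed=0):
    """Fuzzy c-means: returns memberships u of shape (N, C) and centers of shape (C, d)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    u = rng.random((N, C))
    u /= u.sum(axis=1, keepdims=True)                       # memberships of each point sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]      # Eq. (9): weighted averages
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Eq. (8): u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        p = 2.0 / (m - 1.0)
        u_new = 1.0 / (d ** p * (d ** -p).sum(axis=1, keepdims=True))
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return u, centers

# Two noisy blobs standing in for measurements near two reference locations.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([-70.0, -45.0], 2.0, (50, 2)),
               rng.normal([-55.0, -75.0], 2.0, (50, 2))])
u, centers = fcm(X, C=2, m=2.0)
print(centers.round(1))   # approximately the two blob centers
print(u[:3].round(2))     # soft memberships of the first few points
```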

4.5 Fuzzy optimization

From the very early stage of the fuzzy set theory, the application of fuzzy sets to optimization problems was considered. One of the possible applications of fuzzy sets is the idea of “optimization under fuzzy constraints.” In the suggested formulation, the product of the objective function value and satisfaction degree (membership degree) of fuzzy constraints is often maximized. The authors in [75] proposed a maximizing decision based on fuzzy constraints and fuzzy goals. Tanaka et al. [119] applied this idea to a mathematical programming problem, in which they considered the study of [75] using α-level sets and gave an algorithmic solution to the fuzzy mathematical programming problem accordingly.

More specifically, let \( \overset{\sim }{G} \) be a fuzzy set, and let \( \overset{\sim }{C} \) be a fuzzy constraint defined over a set X. A fuzzy goal is a fuzzy set whose membership function \( {\mu}_{\overset{\sim }{G}}:X\to \left[0,1\right] \) shows the degree of its goal achievement. Moreover, a fuzzy constraint \( \overset{\sim }{C} \) is a fuzzy set whose membership function \( {\mu}_{\overset{\sim }{C}}:X\to \left[0,1\right] \) shows the degree of its constraint satisfaction. Thus, the fuzzy decision, \( \overset{\sim }{D} \), can be defined as follows: \( \overset{\sim }{D}=\overset{\sim }{C}\cap \overset{\sim }{G} \) or equivalently \( {\mu}_{\overset{\sim }{D}}(x)=\min \left(\ {\mu}_{\overset{\sim }{G}}(x),{\mu}_{\overset{\sim }{C}}(x)\right),\kern0.5em \forall x\in X \). Then, the decision-making problem is formulated as follows: \( \underset{x\in X}{\operatorname{maximize}}{\mu}_{\overset{\sim }{D}}(x) \).

In [119], the authors showed that

$$ \underset{x\in X}{\sup }\ {\mu}_{\overset{\sim }{D}}(x)=\underset{\alpha \in \left[0,1\right]}{\sup}\min\ \left(\alpha, \kern0.5em \underset{x\in {\left|\overset{\sim }{C}\right|}_{\alpha }}{\max }\ {\mu}_{\overset{\sim }{G}}(x)\right), $$
(10)

where the α-level set \( {\left|\overset{\sim }{C}\right|}_{\alpha } \) of \( \overset{\sim }{C} \) is defined by \( {\left|\overset{\sim }{C}\right|}_{\alpha }=\left\{x\in X\ |\ {\mu}_{\overset{\sim }{C}}(x)\ge \alpha \right\} \). They also demonstrated similar results when the fuzzy constraint is given by multiple fuzzy sets \( {\overset{\sim }{C}}_i,\ i=1,2,\dots, m \), i.e.,

$$ \underset{\alpha_1,{\alpha}_2,\dots, {\alpha}_m}{\sup}\min \left({\alpha}_1,{\alpha}_2,\dots, {\alpha}_m,\kern0.5em \underset{x\in {\left|\overset{\sim }{C_1}\ \right|}_{\alpha_1}\cap {\left|\overset{\sim }{C_2}\ \right|}_{\alpha_2}\cap \dots \cap {\left|\overset{\sim }{C_m}\ \right|}_{\alpha_m}}{\max }{\mu}_{\overset{\sim }{G}}(x)\right) $$
(11)
$$ =\underset{\alpha }{\sup}\min \left(\alpha, \kern0.5em \underset{x\in {\left|\overset{\sim }{C_1}\ \right|}_{\alpha}\cap {\left|\overset{\sim }{C_2}\ \right|}_{\alpha}\cap \dots \cap {\left|\overset{\sim }{C_m}\ \right|}_{\alpha }}{\max }{\mu}_{\overset{\sim }{G}}(x)\right). $$

Therefore, this implies that multiple fuzzy constraints can be aggregated to a single fuzzy constraint. By assuming that \( X={\mathbb{R}}^n \) and that the objective function f is given in its normalized form, so that f(x) takes a value in [0, 1] for any \( x\in cl\left( Supp\left(\overset{\sim }{C}\right)\right) \), they applied their results to a mathematical programming problem with fuzzy constraints, where \( Supp\left(\overset{\sim }{C}\right)=\left\{x\in X\ |\ {\mu}_{\overset{\sim }{C}}(x)>0\right\} \) and cl denotes closure. Moreover, they assumed the continuity of \( {\mu}_{\overset{\sim }{C}}(x) \) and f, the normality of \( \overset{\sim }{C} \), i.e., \( \exists x\in X,\ {\mu}_{\overset{\sim }{C}}(x)=1 \), and the existence of \( x\in X \) such that \( \underset{x\in cl\left( Supp\left(\overset{\sim }{C}\right)\right)}{\max }f(x)=1 \). On the basis of these assumptions, the problem reduces to finding a solution (α∗, x∗) such that

$$ \min \left({\alpha}^{\ast },f\left({x}^{\ast}\right)\right)=\underset{\alpha \in \left[0,1\right]}{\sup}\min \left(\alpha, \underset{x\in {\left|\overset{\sim }{C}\ \right|}_{\alpha }}{\ \max f(x)}\ \right). $$
(12)

In the sequel, this mathematical programming with fuzzy constraints and/or goals is called “flexible programming” [120]. A remaining question is whether the obtained solution is acceptable when α∗ < 0.5. Moreover, the optimization process is performed by updating the fuzzy goals and constraints together until it converges to an appropriate solution.
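A minimal sketch of the max-min (“flexible programming”) formulation above, with the fuzzy decision \( \overset{\sim }{D}=\overset{\sim }{C}\cap \overset{\sim }{G} \) maximized by a simple grid search over a one-dimensional decision variable; both membership functions and the decision variable (a candidate transmit range) are assumed purely for illustration:

```python
import numpy as np

# Hypothetical decision variable: a candidate transmit range in meters.
x = np.linspace(0.0, 100.0, 1001)

# Hypothetical fuzzy goal "range is large": degree of goal achievement mu_G(x).
mu_G = np.clip(x / 80.0, 0.0, 1.0)

# Hypothetical fuzzy constraint "power budget is respected": mu_C(x) degrades beyond 40 m.
mu_C = np.clip((100.0 - x) / 60.0, 0.0, 1.0)

# Fuzzy decision mu_D(x) = min(mu_G(x), mu_C(x)); maximize it over x (grid search).
mu_D = np.minimum(mu_G, mu_C)
best = int(np.argmax(mu_D))
print(f"x* = {x[best]:.1f} m, mu_D(x*) = {mu_D[best]:.3f}")   # about x* = 57.1, mu_D = 0.714
```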

4.6 Fuzzy arithmetic and analytics

In numerous fields of science, such as systems analysis and operations research, a model may have to be constructed using only approximately known data. Fuzzy set theory makes this possible, provided that these fuzzy sets are defined over the universal set ℝ (the real line). Then, under certain conditions (semi-continuity, convexity, and normalization), these fuzzy sets can be considered as fuzzy numbers. This approach is of practical interest only if we can smoothly perform algebraic operations on them.

More formally, consider a fuzzy set \( \overset{\sim }{n} \) (Fig. 9) defined on the real line ℝ by a membership function \( {\mu}_{\overset{\sim }{n}} \). \( \overset{\sim }{n} \) is said to be a fuzzy quantity (fuzzy number) if it satisfies the following conditions:

  i) \( \overset{\sim }{n} \) is normal, i.e., \( hgt\left(\overset{\sim }{n}\right)=1 \).

  ii) \( \overset{\sim }{n} \) is convex.

  iii) There is exactly one \( \overline{x}\in \mathbb{R} \) with \( {\mu}_{\overset{\sim }{n}}\left(\overline{x}\right)=1 \), i.e., \( core\left(\overset{\sim }{n}\right)=\left\{\overline{x}\right\} \).

  iv) The membership function \( {\mu}_{\overset{\sim }{n}}(x),\ x\in \mathbb{R}, \) is at least piecewise continuous.

Fig. 9

Fuzzy number (it can be observed as a fuzzy interval)

The main objective of defining fuzzy quantities is to obtain a proper definition of arithmetic operations as counterparts of the elementary operations, i.e., given fuzzy numbers \( {\overset{\sim }{n}}_1 \), \( {\overset{\sim }{n}}_2 \) with \( {\mu}_{{\overset{\sim }{n}}_1}\left({x}_1\right) \), \( {\mu}_{{\overset{\sim }{n}}_2}\left({x}_2\right) \), where x1, x2 ∈ ℝ, the goal is to determine \( {\mu}_{\overset{\sim }{q}}(z) \), z ∈ ℝ, of the fuzzy number \( \overset{\sim }{q}=E\left({\overset{\sim }{n}}_1,{\overset{\sim }{n}}_2\right) \), where E denotes one of the elementary operations (addition, subtraction, division, and multiplication). Moreover, fuzzy set theory generalizes tolerance analysis: fuzzy arithmetic can be observed as an extension of interval analysis and of the algebra of many-valued quantities [69], via the extension principle as follows:

$$ {\mu}_{\overset{\sim }{q}}(z)=\underset{z=E\left({x}_1,{x}_2\right)}{\sup}\min \left\{\ {\mu}_{{\overset{\sim }{n}}_1}\left({x}_1\right),{\mu}_{{\overset{\sim }{n}}_2}\left({x}_2\right)\ \right\}\kern0.75em \forall {x}_1,{x}_2\in \mathbb{R}. $$
(13)

Subsequently, it was observed that the mathematics of fuzzy quantities can also be considered as an application of possibility theory [121]. An effective definition of the arithmetic operations, however, requires a practical implementation. In general, one can distinguish three main streams for applying the extension principle. The first one is based on the L-R representation of fuzzy numbers proposed in [122]. The second one depends on the discretized fuzzy number proposed in [123]. The third one, based on the decomposition of the fuzzy number into level cuts (α-cut operations) proposed in [71], can be considered as a generalized version of the second.

For simplicity's sake and to stress the sound mathematical grounding of the fuzzy set theory, only the definitions of the first method are mentioned herein. The fundamental idea of the LR fuzzy number representation is to split the membership function \( {\mu}_{{\overset{\sim }{n}}_i}\left({x}_i\right) \) of fuzzy number \( {\overset{\sim }{n}}_i \) into two curves, \( {\mu}_{L_i}\left({x}_i\right) \) and \( {\mu}_{R_i}\left({x}_i\right) \), corresponding to the left and right of the modal value \( {\overline{x}}_i \) (which can be either a single point or an interval), respectively. Then, \( {\mu}_{{\overset{\sim }{n}}_i}\left({x}_i\right) \) can be represented by the parameterized reference (shape) functions L and R in the following form:

$$ {\mu}_{{\overset{\sim }{n}}_i}\left({x}_i\right)=\left\{\begin{array}{ll}{\mu}_{L_i}\left({x}_i\right)=L\left[\frac{{\overline{x}}_i-{x}_i}{\alpha_i}\right], & {x}_i<{\overline{x}}_i\\ {}{\mu}_{R_i}\left({x}_i\right)=R\left[\frac{{x}_i-{\overline{x}}_i}{\beta_i}\right], & {x}_i\ge {\overline{x}}_i\end{array}\right., $$
(14)

where \( {\alpha}_i \) and \( {\beta}_i \) denote the spreads corresponding to the left-hand and right-hand curves of \( {\mu}_{{\overset{\sim }{n}}_i}\left({x}_i\right) \), respectively. Using the abbreviated notation \( {\overset{\sim }{n}}_i={\left\langle {\overline{x}}_i,{\alpha}_i,{\beta}_i\right\rangle}_{L,R} \), where the subscripts L and R specify the type of reference function, the operations on such fuzzy numbers can be represented as follows (a short code sketch illustrating these operations is given after the list):

  • Addition: \( {\left\langle {\overline{x}}_1,{\alpha}_1,{\beta}_1\right\rangle}_{L,R}+{\left\langle {\overline{x}}_2,{\alpha}_2,{\beta}_2\right\rangle}_{L,R}={\left\langle {\overline{x}}_1+{\overline{x}}_2,{\alpha}_1+{\alpha}_2,{\beta}_1+{\beta}_2\right\rangle}_{L,R} \).

  • Subtraction: \( {\left\langle {\overline{x}}_1,{\alpha}_1,{\beta}_1\right\rangle}_{L,R}-{\left\langle {\overline{x}}_2,{\alpha}_2,{\beta}_2\right\rangle}_{L,R}={\left\langle {\overline{x}}_1-{\overline{x}}_2,{\alpha}_1+{\beta}_2,{\beta}_1+{\alpha}_2\right\rangle}_{L,R} \).

  • Multiplication is somewhat more dependent on the approximation technique, and two well-known techniques are utilized [70]:

Tangent approximation:

$$ {\left\langle {\overline{x}}_1,{\alpha}_1,{\beta}_1\right\rangle}_{L,R}\cdotp {\left\langle {\overline{x}}_2,{\alpha}_2,{\beta}_2\right\rangle}_{L,R}\approx {\left\langle {\overline{x}}_1{\overline{x}}_2,\ {\overline{x}}_1{\alpha}_2+{\overline{x}}_2{\alpha}_1,\ {\overline{x}}_1{\beta}_2+{\overline{x}}_2{\beta}_1\right\rangle}_{L,R} $$
(and)

Secant approximation:

$$ {\left\langle {\overline{x}}_1,{\alpha}_1,{\beta}_1\right\rangle}_{L,R}\cdotp {\left\langle {\overline{x}}_2,{\alpha}_2,{\beta}_2\right\rangle}_{L,R}\approx {\left\langle {\overline{x}}_1{\overline{x}}_2,\ {\overline{x}}_1{\alpha}_2+{\overline{x}}_2{\alpha}_1-{\alpha}_1{\alpha}_2,\ {\overline{x}}_1{\beta}_2+{\overline{x}}_2{\beta}_1+{\beta}_1{\beta}_2\right\rangle}_{L,R}. $$
  • The division is performed similarly, except that the multiplication is performed with the inverse of the divisor using again the same two approximation techniques.

    If \( \overset{\sim }{n}={\left\langle \overline{x},\alpha, \beta \right\rangle}_{L,R} \), then the tangent approximation gives \( {\left(\overset{\sim }{n}\right)}^{-1}\approx {\left\langle \frac{1}{\overline{x}},\frac{\beta }{{\overline{x}}^2},\frac{\alpha }{{\overline{x}}^2}\right\rangle}_{R,L} \) and the secant approximation gives \( {\left(\overset{\sim }{n}\right)}^{-1}\approx {\left\langle \frac{1}{\overline{x}},\frac{\beta }{\overline{x}\left(\overline{x}+\beta \right)},\frac{\alpha }{\overline{x}\left(\overline{x}-\alpha \right)}\right\rangle}_{R,L} \).
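Under the assumption of positive modal values, the LR operations above can be sketched as simple triple arithmetic on ⟨x̄, α, β⟩; the shape functions L and R are left implicit, the tangent approximation is used for multiplication, and the numerical example values are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class LRNumber:
    """LR fuzzy number <x_bar, alpha, beta>_{L,R}: modal value with left and right spreads."""
    x: float      # modal value
    alpha: float  # left spread
    beta: float   # right spread

    def __add__(self, other):
        return LRNumber(self.x + other.x, self.alpha + other.alpha, self.beta + other.beta)

    def __sub__(self, other):
        # The left spread combines with the other operand's right spread, and vice versa.
        return LRNumber(self.x - other.x, self.alpha + other.beta, self.beta + other.alpha)

    def mul_tangent(self, other):
        # Tangent approximation of the product (assumes positive modal values).
        return LRNumber(self.x * other.x,
                        self.x * other.alpha + other.x * self.alpha,
                        self.x * other.beta + other.x * self.beta)

# "About 10 m" and "about 3 m" fuzzy distance estimates with asymmetric spreads.
d1 = LRNumber(10.0, 1.0, 2.0)
d2 = LRNumber(3.0, 0.5, 0.5)
print(d1 + d2)             # LRNumber(x=13.0, alpha=1.5, beta=2.5)
print(d1 - d2)             # LRNumber(x=7.0, alpha=1.5, beta=2.5)
print(d1.mul_tangent(d2))  # LRNumber(x=30.0, alpha=8.0, beta=11.0)
```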

4.7 Hybrid-based approach

Hybridization of fuzzy-based reasoning with other approaches (e.g., stochastic, rule-based, neural network, and genetic algorithms) is quite common, and several achievements can be distinguished. We remark that this survey is not aimed at explaining the hybridization techniques of fuzzy systems.

First, the conventional fuzzy controls with Mamdani- or TSK-type inference engines are applied to optimize the weights associated with measurements, for example, the weight estimations for fingerprinting technique [70, 124] or nearest neighbor algorithm [35, 41, 93].

Second, the fuzzy systems can be combined with other estimation and approximation tools, particularly the Kalman filter [83, 106, 125].

Third, on the basis of the power of fuzzy mathematics and probabilistic approaches, fuzzy sets and systems are utilized to build customized estimators [27, 51, 52, 83, 106, 126].

Fourth, fuzzy systems are used along with other soft-computing techniques such as neural networks and genetic algorithms to construct or simplify the rule base or maintain the weight calculation for the network in an adaptive manner [53, 108, 127,128,129].

5 Historical background

Some authors ([130]) claimed to be the first to introduce fuzzy logic into tracking problems. In their studies, the authors applied fuzzy logic to enhance the performance of a classical tracking system. In particular, the model-free function approximation capability of fuzzy logic was used to obtain high-resolution angle estimates from the spatial–spectral density. Moreover, their main focus was to estimate and track the source angular positions from a snapshot data vector. In the proposed system, the following two inputs were designed to obtain the distance between two sources: the maximum spatial power density (periodogram) and the main beam normalized bandwidth. The authors indicated that fewer snapshots are necessary to ensure a successful angle estimation when compared with their previous studies. Even when the angle between the two users is less than the predefined resolution value in the data vector, the proposed system could produce an accurate estimate of the direction of arrival (DOA). Thus, the result was a robust tracking system that presents a low computational burden and attains a resolution comparable to that of singular value decomposition techniques.

At first glance, this study does not seem to be directly related to the positioning problem based on the definition introduced earlier. However, from another viewpoint, it discussed an angle position estimation problem that is mainly linked to positioning.

According to our review, in contrast to the above authors' claims, numerous other earlier studies could also be linked to the use of fuzzy systems in the positioning problem. For example, we could pinpoint the “sketching” technique and experiment conducted in the early eighties [82]. To create a system for deriving symbolic position estimates for objects from a relational scene (environment) description (the “layout problem”), the author utilized a fuzzy relational database and inference system. In the so-called sketching algorithm, the author employed fuzzy logic at two levels. First, a fuzzy inference system was used to build a relational database among various independent objects in the environment, which, in turn, was utilized to construct a coarse resolution sketch that depends on the symbolic spatial descriptives, i.e., left, right, above, below, distance, and bearing. This aimed to produce a two-dimensional position estimate for the object position in the environment. Second, the truth values were applied as a confidence interval associated with every symbolic descriptive rule, which was utilized for error analysis at a later stage. We can report several drawbacks of this technique, which include the use of a single interval fuzzy variable and the assumption that the position of at least one fixed object must be known. In the case of an unknown object position, fixing the position at some bad initial point could definitely lead to poor performance because of the sequential nature of this technique. Despite such limitations, the symbolic power of fuzzy logic enhanced the sketching results and effectively leveraged the tradeoff between spatial relations and coordinate positions. More interestingly, this method performed well without much prior information concerning the environment, provided that a relatively good initial position was fixed.

The use of fuzzy systems in the domain of positioning and localization gained momentum because such systems can be easily designed and utilized. The challenge, therefore, is not in pinpointing the earliest use of fuzzy tools in the positioning problem but in establishing proper classification criteria for these studies, as attempted in Section 4, and evaluation criteria to assess the increasing number of available solutions, which we address in the next section.

6 Parametric measures and evaluations

To develop rigorous foundations, we examine the performance of different positioning systems from various perspectives found in the literature. An intuitive question that was first investigated is whether the classification presented in Section 4 is sufficient. In this regard, we found that it is difficult to cast every piece of work into a single class, because a given proposal often attempts to address several identified deficiencies of classical positioning systems at different levels, thereby overlapping with more than one class. We therefore mention the following. First, the evaluation was investigated from a purely statistical perspective based on the occurrence of the related fuzzy terminology in the title, keywords, or abstract of the papers. This would exclude papers in which fuzzy reasoning is part of the positioning methodology but is mentioned in neither the title, the abstract, nor the associated keyword list. Second, the investigation primarily considered two commonly employed scientific databases, IEEE Xplore and ScienceDirect, given the popularity of positioning technology in these databases as well as the multiplicity of scientific journals in the field that they host. Third, among works using fuzzy logic, fuzzy arithmetic operations, and/or an inference system to derive an estimation solution for the positioning problem, one still distinguishes cases where a fuzzy system was employed only as an aiding tool (closed box) to serve the positioning objectives from situations where fuzzy tools were utilized both to represent knowledge and to manipulate it at the deepest level. Fourth, particular interest was devoted to the fuzzy-based methodology employed in the underlying (fuzzy) positioning system. Fifth, we distinguish among various hybrid schemes where the fuzzy-based approach is employed along with a classical approach or with another soft-computing-based approach. Sixth, the results presented in Fig. 10 and Fig. 11 summarize the proportion of the main fuzzy tools employed by the identified fuzzy-based approaches to tackle the positioning problem, as observed in the ScienceDirect and IEEE Xplore databases, respectively. Surprisingly, neither database returned results on the use of type-2 fuzzy sets in the localization problem (for the specific set of keywords used in both), although some studies in the field have been reported in [131,132,133,134]. The absence of such results can be explained from different viewpoints. First, the histogram representation reports only the dominant methods; those whose share is below 1% were ignored. Second, type-2 fuzzy methods are sometimes subsumed under the clustering-method class, hiding the fine-grained distinction among the various clustering methods employed.

Fig. 10 ScienceDirect fuzzy tool stats

Fig. 11 IEEE Xplore fuzzy tool stats

Interestingly, the results demonstrated similar patterns in both Figs. 10 and 11 in the sense that optimization-based approaches are quite dominant in fuzzy literature related to positioning systems, followed by clustering-based approaches, then the classification and rule-base-like approaches, while fuzzy arithmetic-like tools are less common in both databases.

Another viewpoint was to examine whether fuzzy systems (tools) were used as the sole means of location estimation or were combined with other soft-computing tools (e.g., neural networks) or with classical estimators (e.g., Kalman filters). The results returned from ScienceDirect and IEEE Xplore are shown in Fig. 12 and Fig. 13, respectively.

Fig. 12 ScienceDirect stats

Fig. 13 IEEE Xplore stats

Similar to Figs. 10 and 11, we also observe substantial similarities between the ScienceDirect and IEEE Xplore findings. These include the dominance of fuzzy-alone approaches, followed by hybrid fuzzy logic and neural-network-based approaches (although these were ranked equal in the ScienceDirect database). Next come hybrid schemes of fuzzy tools with swarm optimization, followed by fuzzy tools with the Kalman filter. Finally, a small proportion of the surveyed papers (less than 5%) investigated ANFIS-based systems applied to positioning problems.

Next, we introduce the performance criteria to compare the proposed methodologies.

6.1 System metrics

We divide these performance criteria and/or parametric measures into four major parts: system metrics (Table 2), environment metrics (Table 3), fuzzy metrics (Table 4), and positioning metrics (Table 5).

Table 2 System metrics
Table 3 Environment metrics
Table 4 Fuzzy metrics
Table 5 Positioning metrics

Since our study discriminates between fuzzy tools employed to augment classical positioning approaches and those where a fuzzy-system-like approach lies at the core of the positioning technique, performance criteria and parametric measures are central to our evaluation, whether the fuzzy tool is used to enhance system performance or to overcome deficiencies observed in the position estimation process. Unfortunately, many of these measures were mentioned neither explicitly nor implicitly in a number of the review papers we encountered. These performance metrics can be summarized as follows.

  • Accuracy and precision

Accuracy and precision are two of the most important performance metrics of a positioning system. Position accuracy is defined as the numerical distance (in meters or centimeters) between the actual target position and the estimated one, whereas precision measures the extent to which repeated estimates agree under the same circumstances (a minimal numerical sketch follows this list).

  • Scalability

In general, positioning systems need to be scalable in terms of geographical space and the density of client users or terminal devices. A system is considered scalable if it can be deployed over a larger geographical space and serve a larger population with the same quality of service.

  • Robustness and adaptiveness

Robustness and adaptiveness relate to the ability of a positioning system to handle unforeseen circumstances or accidental changes in the environment. This includes, but is not limited to, malfunctioning sensor nodes or APs, inherent perturbations of the system, and the inclusion or exclusion of obstacles that may increase the noise and uncertainty levels of the testbed.

  • Cost (computation, labor, and implementation)

Naturally, computationally fast algorithms that can serve numerous localization queries within a given time frame are more attractive. The cost criterion often also includes energy and processing-resource efficiency, which are important when the estimation is performed on limited-capability devices. Labor intervention and system interaction required as part of the positioning approach are also considered part of the cost factor.

  • Complexity (the type of measuring devices, mobile devices, and other network components)

Complexity covers the type of measuring instruments and the network infrastructure required to generate measurements or inputs to the positioning system, as well as the complexity associated with the estimation process itself; together, these provide insight into the overall complexity of the underlying positioning system.

  • Latency

Latency quantifies the responsiveness of a system to positioning queries; lower latency is better.
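
As a simple numerical illustration of the accuracy and precision metrics above, the following sketch computes the mean Euclidean error of repeated position estimates (accuracy) and the spread of those errors (precision). The function name, ground-truth position, and sample estimates are purely illustrative assumptions, not data from any surveyed system.

```python
import math
from statistics import mean, pstdev

def position_error(estimated, actual):
    """Euclidean distance (in the same units as the inputs, e.g., meters)
    between an estimated 2-D position and the ground-truth position."""
    return math.dist(estimated, actual)

# Hypothetical repeated estimates of a target located at (3.0, 4.0) m.
truth = (3.0, 4.0)
estimates = [(3.2, 4.1), (2.9, 3.8), (3.1, 4.3), (3.0, 3.9)]

errors = [position_error(e, truth) for e in estimates]

# Accuracy: how close the estimates are to the truth on average.
accuracy = mean(errors)
# Precision: how well repeated estimates agree with each other,
# summarized here by the spread of the errors.
precision = pstdev(errors)

print(f"mean error = {accuracy:.2f} m, spread = {precision:.2f} m")
```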

6.1.1 Discussion

  1. i)

    A careful observation of the surveyed papers in Table 2 indicates that those employing fuzzy localization techniques relate to mobile robotics, manufacturing, cellular systems, indoor positioning using Wi-Fi, Bluetooth, RFID and laser scanning, vision systems, and so on. Irrespective of the positioning methodology applied, the various disciplines, expectations, and technologies involved naturally induce distinct accuracy and performance levels.

  2. ii)

    Compared with papers on positioning systems in general, a quick examination of those identified in this area reveals a rather low citation score. This indicates a lack of involvement of the fuzzy community in shaping the current International Organization for Standardization (ISO) standards and the well-known IEEE research groups on positioning systems. Further studies should therefore be conducted in this field to attain a reference level.

  3. iii)

    At first glance, the accuracy achieved by the studies reviewed in Table 2 is around a centimeter. However, one should also consider the sensory range of the applied sensors. From this perspective, the range of the utilized sensors is limited to roughly a centimeter to a few meters, since ultrasound, Wi-Fi, and laser-scanner-like sensors have an inherently limited range.

  4. iv)

    Concerning complexity, although most of the fuzzy-based positioning papers focused on low-cost sensory architectures, Table 2 shows that they yield reasonably low to medium computational cost; very few studies reported high computational cost. An investigation of the latter showed that they mainly relate to methods where extra network infrastructure is required to trigger the associated measurement technique and to ensure synchronization between the emitter and receiver, e.g., in the case of TDOA.

  5. v)

    Regarding scalability, most of the surveyed studies in Table 2 did not consider this factor, especially when the approach uses only low-cost sensors and does not require any infrastructural change. Conversely, if additional hardware is required to run the positioning system, the scalability of the approach is naturally questionable. Similarly, approaches that assume full or even partial knowledge of the environment have limited scalability as well.

  6. vi)

    We distinguish papers, e.g., [29], that are based only on simulation studies from those based on real-time implementation. Notably, simulation-based analysis does not guarantee that all constraints arising in a real-time implementation are satisfied. Therefore, such outcomes should be considered with caution.

  7. vii)

    Concerning latency, the vast majority of the surveyed papers in Table 2 report low latency values; only three papers reported a latency higher than 10 s. The analysis of those papers revealed that high latency is mainly linked to approaches requiring an additional environment-mapping step. Depending on the complexity of the environment and the frequency of sensor activation, the mapping time can increase substantially, which, in turn, increases the latency of the overall system.

6.2 Environment metrics

The environment metrics are explained in this section, and the corresponding results are shown in Table 3.

  • Map requirements

A typical localization scheme requires prior information regarding the environment, which can be obtained through a site survey. For instance, in fingerprint-based schemes, the collected patterns are manually annotated with their physical or logical fixes before the positioning algorithm is initialized (a minimal fingerprinting sketch follows this list). Other schemes may require a geographical map to obtain absolute or relative position estimates.

  • Acquiring location fix

Some positioning systems require a location fix from user devices, which may be obtained via GPS or other means to offer reasonable accuracy, whereas others do not. Positioning systems that can maintain the same level of accuracy without requiring any location fix are naturally more attractive.

  • Usage of the indoor/outdoor landmarks

An interesting feature of an ideal positioning system is its ability to estimate the target position anywhere without any prior knowledge of the layout of the deployment environment. Numerous positioning systems, for example, fingerprinting-like approaches, require knowledge of the AP locations to approximate a distance to the target object. Similarly, navigation-based approaches require predefined locations to draw the trajectory to the destination. Therefore, from a system-autonomy perspective, positioning systems without landmark requirements are generally preferred.

  • Need for additional sensor (or hardware)

Although numerous sensors are already embedded in current handheld devices, such as smartphones and tablets, some advanced positioning systems, such as those used in robotics and manufacturing applications, require higher bandwidth, throughput, and special sensory capabilities. Therefore, if the target mobile device lacks the required hardware or functionality, the positioning system may not function appropriately or, at least, will not deliver the expected accuracy and precision.

  • Addressing device heterogeneity

Under the same network conditions, the accuracy of some positioning systems has been found to be significantly affected by the type of measurement device, especially systems that depend on RSS or TOA. Consequently, device heterogeneity is considered another metric for evaluating a positioning system.

  • User participation

One of the fundamental ideas behind calibration-free positioning systems is to have users implicitly participate in constructing the training database. For instance, any user with a wireless device may be expected to contribute to the radio-map construction. Such user participation is more attractive than scenarios in which professional deployment personnel explicitly input location fingerprint data as feedback to the system, and it allows building a more comprehensive and dense database as well as more scalable systems.
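
To make the site-survey and fingerprint-matching idea referred to under “Map requirements” concrete, the following sketch builds a toy radio map and returns the surveyed position whose RSS fingerprint best matches an observation. The number of access points, the RSS values, and the positions are illustrative assumptions only, not taken from any surveyed system.

```python
import math

# A toy radio map built from a site survey: each fingerprint is an RSS
# vector (dBm) observed from three access points at a known position.
radio_map = [
    {"pos": (0.0, 0.0), "rss": (-40, -70, -80)},
    {"pos": (5.0, 0.0), "rss": (-65, -45, -75)},
    {"pos": (0.0, 5.0), "rss": (-70, -78, -48)},
    {"pos": (5.0, 5.0), "rss": (-72, -55, -50)},
]

def signal_distance(rss_a, rss_b):
    """Euclidean distance between two RSS vectors in signal space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(rss_a, rss_b)))

def nearest_neighbor_fix(observed_rss):
    """Return the surveyed position whose fingerprint best matches the
    observation (1-NN matching in signal space)."""
    best = min(radio_map, key=lambda fp: signal_distance(fp["rss"], observed_rss))
    return best["pos"]

print(nearest_neighbor_fix((-66, -47, -74)))  # -> (5.0, 0.0)
```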

6.2.1 Discussion (continued)

The results presented in Table 3 reveal the following points.

  1. i)

    Fuzzy-system-based positioning has focused equally on indoor and outdoor positioning.

  2. ii)

    Regarding the requirement of environmental knowledge, it is noteworthy that the application of fuzzy systems roughly follows the development of navigation systems, where a clear difference between fully known, partially known, and fully unknown environments is observed. This shows that the proposed fuzzy-based approaches are mainly tied to the approach employed for mapping and modeling the surrounding/perceived environment. This includes grid-based approaches, polygonal approximations (such as ultrasound-beam or cellular-grid network modeling), integration over a traveled distance as in odometer-like sensing, and straight lines from known beacons, from which a position estimate is then derived.

  3. iii)

    An examination of the papers free of environmental-knowledge constraints showed that most such studies can be grouped into three categories: GPS or differential GPS positioning systems, local sensory strategies based on proprioceptive sensors in mobile robotics, and sensor node positioning in large-scale WSNs.

  4. iv)

    Moreover, the classification in Table 3 involves some subjectivity. For instance, one may expect all fingerprinting-based approaches, e.g., the construction of a radio map using access points and RSS information, to carry a “map requirement.” However, the authors of such papers, e.g., [138], claim that their approach does not require any map-related knowledge. Therefore, the authors’ claims regarding environment-knowledge requirements should be handled cautiously.

  5. v)

    The choices of location fix and user participation are primarily connected to the employed map-building approach. Most map-building approaches typically require some prior knowledge of the environment, the modeling structures (e.g., grid, straight line, polygonal cells, and cubic cells), and the technologies employed. For instance, in the case of a cellular network that utilizes RSS intensity to calculate the mobile position, one requires information about the base station locations, their heights and transmit power, and the type of environment (e.g., rural or urban, building height, and street width) to tune the parameters of the radio propagation models that turn the RSS intensity into a mobile-to-base-station distance (a minimal sketch of such a propagation model follows this list). Similarly, triangulation with Wi-Fi signals in an indoor environment requires at least the AP locations, whether the RSS intensity is turned into a distance or any other estimation-based technique is used. For vision-based techniques, e.g., determining the target position with respect to identified beacons, the beacon-like approach needs knowledge of the beacon locations, types, and shapes. In a WSN array, locating the target node requires knowledge of the reference nodes that may be used to obtain the target’s physical location.

  6. vi)

    We distinguish at least two types of user participation in the surveyed papers. The first follows a crowdsourcing approach, where users report their locations together with observations (images, RSS, etc.), which are then used to build a mapping of the environment. The second is employed as a training phase to generate a model for position estimation; it uses a user interface as part of the estimation process, where the user can intervene to validate or prioritize certain choices.

  7. vii)

    The scalability results show that the vast majority of indoor fuzzy-based positioning systems operate in small- to medium-scale environments, whereas outdoor positioning systems operate in medium- to large-scale environments. Reference [101] is an exception: it addresses an outdoor environment but is considered small-scale because it examined a small-scale WSN array.
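
As a minimal illustration of the propagation-model tuning mentioned in point v), the following sketch inverts the standard log-distance path-loss model to map an RSS reading to a distance estimate. The parameter values (reference power, reference distance, and path-loss exponent) are illustrative defaults and would in practice be tuned to the environment.

```python
def rss_to_distance(rss_dbm, rss_at_d0=-40.0, d0=1.0, path_loss_exp=3.0):
    """Invert the log-distance path-loss model
        RSS(d) = RSS(d0) - 10 * n * log10(d / d0)
    to turn a received signal strength (dBm) into a distance estimate (m).
    The reference power, reference distance d0, and exponent n must be tuned
    to the environment (rural/urban, building heights, street widths, ...)."""
    return d0 * 10.0 ** ((rss_at_d0 - rss_dbm) / (10.0 * path_loss_exp))

print(rss_to_distance(-70.0))  # ~10 m with the illustrative parameters above
```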

6.3 Fuzzy evaluation-based metrics

This section explains the employed fuzzy metrics, and the corresponding results are presented in Table 4.

  • Single versus hybrid scheme

This measure indicates whether the fuzzy-based approach was used alone or with (or assisted by) another approach (e.g., a Kalman filter or another soft-computing technique, namely, neural networks, genetic algorithms, or ANFIS). This can be useful for researchers interested in the relevance of specific hybrid schemes. For example, as far as this review is concerned, no surveyed paper has investigated the use of swarm intelligence or chaos theory in relation to the positioning problem.

  • Level of implementation in the localization process

This criterion examines how the fuzzy tool is actually implemented within the overall localization algorithm. For example, the fuzzy-based approach was used in many cases to assign relative weights to some parameters that were employed in subsequent reasoning. Some proposals explored the universal approximation ability of fuzzy reasoning to tackle system nonlinearities, and some used fuzzy reasoning to enhance user–system interaction.

  • Type of inference

Fuzzy inference is a vital application of fuzzy set theory and fuzzy logic. The literature contains two common types of inference systems: Mamdani and Takagi–Sugeno (TS). The Mamdani inference system has fuzzy output membership functions, whereas the TS inference system produces a crisp output. The former applies a defuzzification technique to the fuzzy output, whereas the latter computes the crisp output as a weighted average. The former is suitable for capturing expert knowledge but incurs a substantial computational burden because of the defuzzification step, while the latter works well with optimization and adaptive techniques that fit dynamic nonlinear systems to data and is computationally more efficient [140] (a minimal sketch contrasting the two styles follows this list).

  • Type of membership functions

Each fuzzy set is characterized by its associated membership function (MF), which describes how each point of the input space is mapped to a degree of membership between 0 and 1. Triangular and trapezoidal MFs have often been employed, while Gaussian or S-like MFs are more attractive when differentiation is involved. In other systems, an optimization process is performed to identify the type and/or parameters of the MFs. Therefore, it is important to know the type of MFs employed as part of the fuzzy-reasoning-based approach.

  • Number of rules, variables, and sets

The number of fuzzy “if–then” rules is directly connected to the number of input variables of the fuzzy inference system or fuzzy controller and the number of fuzzy sets (linguistic terms) employed for each input/output variable. Although a larger number of fuzzy sets is often claimed to capture finer-grained variations of the input/output variables, it can lead to a substantial increase in the number of rules and in the overall complexity of the positioning system; for instance, a complete grid-type rule base over four inputs, each described by five linguistic terms, already contains 5^4 = 625 rules. Therefore, a tradeoff is often considered [141, 142], which motivates research on optimizing the number of fuzzy rules and fuzzy variables to be employed.

  • Type of defuzzification

Defuzzification is a critical block in the implementation of a fuzzy inference engine. Defuzzification methods vary considerably, e.g., in execution time and instruction count, which affects the computational requirements and efficiency of the underlying algorithm. Although standard defuzzification techniques, such as the center of gravity or the modal value, are commonly utilized in fuzzy applications, there is increasing interest in axiomatic and computationally effective defuzzification methods. Comparative analyses of various defuzzification techniques have been reported [142], including the trapezoid median average (TMA), weighted trapezoid median average (WTMA), and trapezoidal weighted trapezoid median average (TWTMA). Other studies focused on context-dependent defuzzification [136].

  • Rule base construction and rule simplification

A rule base automatically generated from data is often not easily interpreted. This is because of increased redundancy in the form of similar fuzzy sets derived from the fuzzy models, resulting in poor transparency of the rule-based model. Additionally, the size of the rule base grows almost exponentially as the number of inputs increases. Several methods have been proposed to improve the interpretability of fuzzy models. Some of these methods focus on the tradeoff between numerical accuracy and linguistic interpretability, whereas others emphasize the tradeoff between model accuracy and simplicity. To eliminate redundant fuzzy sets by merging similar linguistic fuzzy variables into a single linguistic meta-variable, some authors introduced similarity analysis, set-theoretic similarity measures, orthogonal transformation-based methods, and so on [139].
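
To make the contrast between the Mamdani and Takagi–Sugeno inference styles, the role of triangular MFs, and centroid defuzzification more concrete, the following minimal single-input sketch is provided. The input (a normalized “signal quality”), the two rules, the membership-function parameters, and the crisp consequents are purely illustrative assumptions rather than elements of any surveyed system.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Output universe: distance to the target in meters (illustrative).
x_out = np.linspace(0.0, 10.0, 1001)

def mamdani(quality):
    """Mamdani-style inference: rule firing strengths clip fuzzy consequent sets,
    which are aggregated by max and defuzzified by the centroid (center of gravity)."""
    # Rule 1: IF quality is LOW  THEN distance is FAR
    # Rule 2: IF quality is HIGH THEN distance is NEAR
    w_low = tri(quality, -0.5, 0.0, 1.0)    # 'quality is LOW'  (peak at 0)
    w_high = tri(quality, 0.0, 1.0, 1.5)    # 'quality is HIGH' (peak at 1)
    far = np.minimum(w_low, tri(x_out, 5.0, 10.0, 15.0))
    near = np.minimum(w_high, tri(x_out, -5.0, 0.0, 5.0))
    agg = np.maximum(far, near)
    return float(np.sum(x_out * agg) / np.sum(agg))

def sugeno(quality):
    """Takagi–Sugeno-style inference: crisp (zero-order) rule consequents are
    combined by a firing-strength-weighted average; no defuzzification step."""
    w_low = tri(quality, -0.5, 0.0, 1.0)
    w_high = tri(quality, 0.0, 1.0, 1.5)
    z_far, z_near = 9.0, 1.0
    return float((w_low * z_far + w_high * z_near) / (w_low + w_high))

print(mamdani(0.7), sugeno(0.7))
```

Note that the Mamdani branch integrates over the whole output universe to defuzzify, whereas the Sugeno branch reduces to a weighted average of two scalars, which reflects the computational-efficiency remark above.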

To reflect on the results of Table 4, we mention the following points:

  1. i)

    The classification into fuzzy-alone versus hybrid approaches bears some subjectivity. Even though the classification is primarily guided by both the authors’ claims and our scrutiny of the underlying papers, it turns out that numerous fuzzy-alone papers also apply some standard methods of regression analysis, simple statistical means, and/or standard deviations, which, in turn, would cast the fuzzy-alone paper in question as a hybrid approach.

  2. ii)

    The dominant majority of the fuzzy-alone methods unsurprisingly apply fuzzy inference systems as part of their core methodology. However, one can distinguish among various classes of application of fuzzy inference systems within fuzzy-based positioning systems. First, from an input–output perspective, one distinguishes the cases where the fuzzy inference system is applied at the input level to handle the uncertainty pervading the inputs. For instance, the fuzzy inference system refines the distance measurement/estimation so that its output is a refined distance measure, which can then be employed as an input to the core position-estimation algorithm, which may use triangulation, regression, or any other estimation-based strategy (a sketch of this class is given after this list). From this perspective, the contribution of the fuzzy inference system is comparable to a filtering-like role that enhances the quality of the inputs of the positioning algorithm. Another related class is based on the use of a fuzzy inference system to obtain a confidence measure associated with the input parameters, e.g., a confidence interval or reliability (either single-valued or functional). Such a confidence estimate can then be applied as data complementary to the inputs and utilized in the position-estimation algorithm through some weighted regression or probabilistic estimation process. A third class covers the cases where the fuzzy inference system is utilized to estimate an entity that is directly related to the positioning system, e.g., the angular position of the target or the xy position of the target. In this regard, the fuzzy rules are elicited such that the consequent part of the rule contains variables related to the target’s position components. These last two classes seem to be the most dominant trends in the surveyed fuzzy positioning systems. A fourth class involves cases in which the fuzzy inference system or fuzzy entity is jointly employed with another estimator (Kalman filter, neural network, or ANFIS). Regarding the Kalman filter, one distinguishes the cases where the fuzzy inference system is applied to generate (after a defuzzification step) one (crisp) input of a standard Kalman filter. Some proposals in the fuzzy literature are based on what is called the fuzzy Kalman filter, in which a variance estimator under fuzzy constraints was investigated. Hybridization with a neural network or ANFIS is mainly employed to optimize the parameters of the fuzzy inference system (e.g., the number of fuzzy rules, the fuzzy variables, the modal values and spreads of the MFs, and the connectives). The fifth class corresponds to the case where the localization approach involves map building, either concurrently with the estimation process or as a prior step of the localization process. Here, we should also mention the emergence of fuzzy clustering-based approaches employed to identify appropriate landmarks or perform suitable pattern matching. In general, fuzzy similarity measures and case-based reasoning techniques are mainly employed to identify the most plausible patterns and associative hypotheses.

  3. iii)

    Another result shown in Table 4 is that all fuzzy inference systems reviewed in the surveyed papers use reasonably few input variables and rules (fewer than nine variables). This is very common in the fuzzy literature to ensure the interpretability of the results and the computational efficiency of the implemented algorithm. Moreover, the review shows the dominance of trapezoidal- and triangular-like MFs for modeling the fuzzy input variables, consistent with their popularity in Mamdani-like fuzzy inference systems.

  4. iv)

    Surprisingly, no reviewed study discusses the use of fuzzy arithmetic or a fuzzy-number-based approach for the position estimation problem. Although this seems to be an area of interest to be explored in the future, we also note an inherent property of fuzzy arithmetic: the multiplicity of its operations can result in bias or drift that would require automatic correction.
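
As a sketch of the first class described in point ii), where a fuzzy stage supplies refined ranges or per-range confidence values to a conventional estimator, the following example feeds confidence weights into a standard linearized weighted least-squares multilateration. The anchor layout, range values, and weights are illustrative assumptions; in the surveyed systems, the weights would come from a fuzzy inference step rather than being hard-coded.

```python
import numpy as np

def weighted_multilateration(anchors, dists, weights):
    """Linearized weighted least-squares position fix from anchor positions,
    range estimates, and per-range confidence weights (e.g., produced by a
    fuzzy stage rating each raw measurement). The last anchor is used as the
    linearization reference."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))
    (xn, yn), dn = anchors[-1], dists[-1]
    rows, rhs = [], []
    for (xi, yi), di, wi in zip(anchors[:-1], dists[:-1], w[:-1]):
        rows.append([wi * 2.0 * (xn - xi), wi * 2.0 * (yn - yi)])
        rhs.append(wi * (di**2 - dn**2 - xi**2 + xn**2 - yi**2 + yn**2))
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return tuple(sol)

# Target near (2, 3); the second range is grossly overestimated and therefore
# given a low confidence weight, limiting its influence on the fix.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
dists = [3.6, 9.5, 7.3, 10.6]        # true ranges are roughly 3.6, 8.5, 7.3, 10.6
weights = [1.0, 0.1, 1.0, 1.0]
print(weighted_multilateration(anchors, dists, weights))  # lands close to (2, 3)
```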

6.4 Positioning evaluation metrics

The last evaluation set is not a measure per se. As shown in Table 5, it instead enumerates the positioning system properties based on the classification performed earlier: the type of location information required, the nature of the localization system (absolute or relative), the topology, the communication technology/protocol, the employed calculation algorithm, the signal measurement techniques, and the type of environment.

Numerous positioning systems and algorithms have been proposed in the literature. However, owing to the discrepancies in the employed technologies, the environmental constraints, and the robustness of the theoretical frameworks, it remains difficult to compare the performance of these systems and algorithms, as illustrated in Table 5. Thus, we suggest evaluating their performance on a categorical basis, which may provide a basis for future studies or guidelines for further evaluations.

A simple reading of the results shows the following points.

  1. i)

    The fuzzy-based approaches have been applied to various technology platforms, including mobile robotics with dead-reckoning, sonar, infrared, laser, and ultrasound-like sensors; cellular networks using GSM, cell ID, radio, and differential GPS; and indoor environments using Wi-Fi, Bluetooth, and ZigBee communication technologies. Similarly, both timing-based (TOF, TDOA, and TOA) and non-timing-based measurements (AOA and RSS) have been investigated by researchers.

  2. ii)

    The calculation algorithms also range from simple counting and proximity-based calculus to complex hybridization schemes, passing through standard triangulation, multilateration, weighted averaging, and geometric reasoning. Moreover, numerous map-building-related positioning systems employ a fingerprinting-like strategy together with a nearest-neighbor or KNN-like decision rule (a minimal weighted-KNN sketch follows this list).

  3. iii)

    Concerning the location description, it is noteworthy that both symbolic and physical locations have been considered in the literature. Fuzzy reasoning often allows us to infer a symbolic description from a physical one. Moreover, if an exact physical location is not required, a fuzzy positioning system that only infers a symbolic description of the target location can be expected to attain higher accuracy.

Similarly, except when GPS or GSM measurements are involved, it is often sufficient to provide a relative position of the target instead of an absolute one.
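
As a minimal illustration of the weighted-average and KNN-like decision rules mentioned in point ii), the following sketch selects the k fingerprints closest to an observation in signal space and averages their surveyed positions with inverse-distance weights. The fingerprint data are illustrative assumptions only.

```python
import math

# Illustrative fingerprint database: RSS vectors (dBm) with surveyed positions.
fingerprints = [
    ((-42, -68, -80), (0.0, 0.0)),
    ((-63, -44, -76), (4.0, 0.0)),
    ((-71, -77, -46), (0.0, 4.0)),
    ((-69, -58, -52), (4.0, 4.0)),
]

def weighted_knn_fix(observed, k=3, eps=1e-6):
    """KNN-like decision rule in signal space: take the k closest fingerprints
    and return the average of their positions, weighted by inverse signal distance."""
    scored = sorted((math.dist(observed, rss), pos) for rss, pos in fingerprints)[:k]
    weights = [1.0 / (d + eps) for d, _ in scored]
    total = sum(weights)
    x = sum(w * pos[0] for w, (_, pos) in zip(weights, scored)) / total
    y = sum(w * pos[1] for w, (_, pos) in zip(weights, scored)) / total
    return (x, y)

print(weighted_knn_fix((-65, -50, -60)))
```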

7 Conclusion

This study discussed the use of fuzzy logic and fuzzy-set-based reasoning in the problem of mobile or system positioning. Its challenge was to determine classification criteria or common platforms for applying fuzzy sets to the positioning problem. The difficulty mainly arises from the breadth of the problem under investigation, its interleaving with numerous other (sub)problems, e.g., tracking and motion control, and the diversity of the environments in which it has been implemented. The authors proposed two major classes: IFP and AFP.

Moreover, for evaluation purposes, we distinguished four main classes of metrics: system metrics, environment metrics, fuzzy metrics, and positioning metrics. For example, irrespective of the scale of the implementation environment in the system metrics, the accuracy of the proposed systems was enhanced at the cost of increased complexity and computation. By exploiting the reasoning and knowledge-extraction power of fuzzy logic and fuzzy inference, the fuzzy-based solutions were observed to outperform numerous alternatives. In addition, when more variables were incorporated into the fuzzy inference, the precision level substantially increased. Very few studies reported or considered the rule-base simplification problem; in our view, this needs to be thoroughly investigated. In most of the reported positioning systems, the specificity, consistency, redundancy, and completeness of the rule base have not been sufficiently discussed. It is nevertheless important to mention the numerous advantages of fuzzy logic in the context of mobile positioning, including its intuitive conceptual model, flexibility, easy computation, multiple combination modes, accommodation of logic-based reasoning, and hybridization with other (non)conventional techniques or soft-computing tools.

Generally, fuzzy logic is not yet a universally accepted tool among practitioners. One reason is the lack of awareness of its potential benefits among both the researcher and practitioner communities; concerning performance, it requires further testing and evaluation, especially on benchmark data sets, to create such awareness. Another reason is its poor performance in some cases when compared with conventional positioning methods. Moreover, we believe that awareness of the context and metrics underpinning the design and application of a fuzzy-reasoning-based tool would provide useful insights for considering a proposal and seeking further enhancements, especially when the approach requires manual tuning of some critical parameters.

Finally, we highlighted some limitations that will guide future studies in this field and that therefore require further investigation in the fuzzy community. This includes the following points.

  • The use of fuzzy numbers and fuzzy-arithmetic-like approaches for devising fuzzy positioning systems, especially investigating the effect of bias and the propagation of uncertainty, which can grow exponentially in the case of iterative calculus on fuzzy entities.

  • In most of the surveyed papers, the asymptotic analysis of fuzzy positioning systems is yet to be discussed. This seems to be of paramount importance to enhance the theoretical foundations of the suggested techniques.

  • The proposed hybridization schemes often lack solid theoretical foundations as well.

  • Fuzzy-based positioning systems still need to be benchmarked against mainstream communications and wireless communication studies. This seems to be a prerequisite for informing such studies and would eventually yield enhanced hybridization schemes.

  • Manufacturing, virtual reality, and telemedicine, as well as their specialized constraints, have been far less explored in the fuzzy community.

  • Very few studies have focused on the growing area of 5G networks, despite the substantial opportunities it offers for positioning systems.

  • Given the growing area of artificial intelligence explainability, there is a need to concentrate on the interpretability of the results of the underlying fuzzy positioning systems. Therefore, more studies are required in this field.