Abstract

The importance and utilization of spatial information are now widely recognized. Particularly in urban areas, the demand for indoor spatial information is drawing attention, and most applications require high-precision 3D data. However accurate, most construction methodologies pose problems in cost and ease of updating. Images are accessible and useful for expressing indoor space, but pixel data alone cannot provide indoor services. Network-based topological data supplies information about the spatial relationships of the spaces depicted in an image and enables recognition of these spaces and the objects they contain. In this paper, we present a data fusion methodology between image data and network-based topological data that requires no data conversion, reference data, or separate data model. Using the concept of the Spatial Extended Point (SEP), we implement this methodology to establish a correspondence between omnidirectional images and IndoorGML data and thereby provide an indoor spatial service. The proposed algorithm uses a position identified by a user in the image to define a 3D region that determines correspondence with the IndoorGML and indoor POI data. We experiment on a corridor-type indoor space and construct an indoor navigation platform.

1. Introduction

Most services involving spatial data cater to outdoor rather than indoor environments [1], even though indoor services are more crucial in urban areas, where people generally spend more time inside structures [2] and where navigation during evacuation experiences greater delay [3]. As this concern gains attention, various approaches to representation have been attempted across applications, particularly since representation depends on the field and intent of the service [4]. Mobile devices have also become increasingly popular and integrated into daily life [5].

Multiple studies have represented the indoor environment using, for example, Light Detection and Ranging (LiDAR) or Building Information Modeling (BIM) data [6], which require either cumbersome data collection or expensive equipment. The dynamic character of indoor spaces, especially in urban areas, demands constant updating of datasets, and these methodologies pose problems in cost and time. Omnidirectional images, taken with a 360° point of view [7], present an alternative, providing cheaper and faster ways of depicting indoor space. However, because they contain only pixel data, these images are difficult to use for services beyond visualization, such as identifying spaces or objects within indoor space, which are crucial in applications such as navigation and facility management. For example, we cannot tell from the images alone whether a room is directly accessible from the hallway or adjacent to another room. In the same way, we can visually see facilities such as fire extinguishers or CCTV cameras, yet their exact locations remain unidentifiable owing to the lack of geometric information.

Indoor network-based topological data may provide the information lacking in these omnidirectional images, such as connectivity relationships between spaces or containment of objects within them. IndoorGML is the standard established by the Open Geospatial Consortium (OGC) for indoor spatial data, geared primarily toward representing space for navigation applications [8]. Topological data such as IndoorGML enables query-based analysis on omnidirectional images that would otherwise give only visualization. Hence, there is a need to link the omnidirectional images that display the spaces with the indoor topological data that represents the relationships between those spaces and the objects contained within them.

Different types of data represent various aspects of indoor space, and this variety may cause problems in utilization because of compatibility issues [9]. Data fusion combines datasets from various sources or formats to produce output information of consistent quality or to increase understanding of the underlying phenomena that the datasets represent differently. In this study, we propose a methodology for fusing image data and network-based topological data without undertaking data conversion, using a separate data model, or relying on reference data. We demonstrate a procedure that establishes a relationship between omnidirectional images and IndoorGML data for providing indoor space applications.

This paper is structured as follows. The next section discusses studies on indoor space expression and on data fusion methodologies. The third section presents the proposed methods for image and topology data fusion. By examining the relationship between the omnidirectional images and IndoorGML data, we lay out the data requirements that must be satisfied before establishing the fusion. In the following section, we conduct an experimental implementation of the proposed methodology by developing an indoor space service based on navigation, as well as indoor POI display. Finally, the last section concludes with the implications and limitations of this study and directions for future work.

2. Related Works

In this section, we review how topological data and data fusion play a role in indoor space representation. We explore studies on methods of expressing indoor space, as well as data fusion methods for producing indoor spatial data, with the aim of providing various indoor space services.

Typical means of expressing indoor space include two-dimensional (2D) and three-dimensional (3D) data. 2D methods, such as CAD floor plan drawings, carry the least information about the indoor space [10]. When indoor space is represented in 3D, however, it can be analyzed differently than in 2D [11], as certain characteristics become more apparent. Such methods may use existing 3D CAD models or high-precision LiDAR measurements that represent indoor space accurately but entail high cost and large file sizes. They may be practical in military applications, in gaming, or in other cases where a service is feasible only when indoor space is represented very accurately [4]. On the other hand, if the data supports the expression of topological relationships among indoor spaces, it is possible to support functions such as attribute search and viewing [12]. In the context of indoor navigation and LBS, network-based topological models are emphasized as necessary [13], especially for visualizing and analyzing the internal composition of as-built structures [14]. Reference [15] has also demonstrated that such topological data is more efficient for performing spatial queries.

A typical method of expressing topological relationships among spaces is the Combinatorial Data Model (CDM) based on the Node-Relation Structure (NRS) [16]. Based on the Poincaré duality, the NRS transforms a 3D object into a node, while an edge represents the shared boundary between adjacent rooms. Hence, in the topological model, nodes represent the indoor spaces, and edges represent the topological relationships among connected nodes. Based on the CDM, the OGC established IndoorGML as a framework for representing topological relationships of indoor spaces and for data exchange. Here too, indoor spaces are defined as nodes (also called states), while edges (also referred to as transitions) express topological relationships [8].

As IndoorGML is capable of representing indoor spaces, its primary use is for investigating the usage of indoor space, as in indoor LBS or indoor route analysis. Architectural components, fixtures, and objects found within the spaces are beyond the scope of the standard. However, to provide indoor spatial services successfully, the objects contained within these spaces, the targets of indoor navigation, must also be represented. Indoor points of interest (indoor POI), expressed geometrically as points, represent the positions of objects and link to their respective attribute information. Since IndoorGML does not directly specify these objects, Jung and Lee [4] utilized the multilayered space model (MLSM) to simultaneously represent indoor topological information through the IndoorGML NRS and indoor features via indoor POI in an indoor patrol service application. Similarly, Claridades et al. discussed this concept in integrating IndoorGML and indoor POI. In both studies, the layers that make up the MLSM divide the space into nonoverlapping layers, and nodes exist in each layer independently. In turn, interlayer relationships define the relationships among the layers. This definition emphasizes an implementation-oriented expression of topological relationships between a node representing a space (for example, a room) and nodes representing objects within that space, illustrated in Figure 1 [17].

In many cases, multiple datasets may represent the same geographic features in the real world [18]. They may capture different aspects of the feature, or each may be a representation using a different data model suited to a particular application. This ambiguity may pose problems in data compatibility, duplication, and integration. Data fusion is defined as the combination of two or more data sources to provide less expensive, more relevant, or higher-quality information [19]. Performing data fusion thus makes it possible to overcome this predicament by linking data from separate sources, collected through different methods, or observing different standards. This approach may also resolve ambiguities in selecting an appropriate data model for an application, since one model may be better suited than another for implementing a certain task [20].

Data fusion is especially helpful in GIS: even though geographic datasets are readily accessible across applications, fusion assists in combining features, each with its suitable aspects, to empower geospatial analysis [21, 22]. For spatial datasets accessible on the web through the Spatial Data Infrastructures (SDI) of private and government organizations, data fusion is possible through the concept of linked data, which uses unique identifiers and standardized web formats to resolve conflicts in data [23, 24]. This technique is aimed at assisting the generation and updating of spatial data [25], as well as the construction of location-aware systems that record human movement and deliver visual feedback [26].

Stankute and Asche defined their approach by extracting the best-fit geometric data and the most suitable semantic data from datasets through coordinate matching [21], while other approaches used attribute mapping to achieve feature correspondence [27]. For earth observation data, [28] applied data fusion to combine multisource data using linguistic quantifiers for environmental status assessment.

As IndoorGML primarily presents topological information, topology-based data fusion approaches are the most suitable. Such approaches aim primarily at enriching the dataset, since IndoorGML contains only the minimum requirements for basic indoor spatial modeling. For indoor routing, one of its main application aims, IndoorGML's proponents have suggested extensions for common application domains to increase utilization [29]. To overcome both standards' limitations in representing indoor space [30], an Indoor Spatial Data Model (ISDM) referring to CityGML for the feature model and to IndoorGML for the topology model was proposed, defining additional feature classes. The Topological Relation Model (TRM) [20], on the other hand, establishes connections between data by matching geometric data generated from the respective models. Since not all datasets can serve as a source of geometric information, the Topological Relation-based Data Fusion Model approached the problem by generating topological data from surface-, network-, and volume-based data to establish matching [18]. These approaches commonly determine a correspondence between features in the data. However, a match among features is not possible in all datasets, such as in the case of images.

Several methodologies have utilized omnidirectional images to reconstruct a room layout, such as RoomNet [31] and LayoutNet [32], which predict the layout of a single subunit of an interior, so the topological relationships between that subunit and others remain indistinguishable. Jung and Lee [4] examined the use of omnidirectional images and topological information in an indoor patrol application. Online web mapping services such as Google Street View [33] and Kakao Storeview [34] use these images to present a snapshot of the indoor space at the moment of capture. The method of collection along a corridor-type space has also been described by determining shooting points, that is, locations where these images may be collected efficiently [35]. In the indoor patrol application, the IndoorGML CellSpace class has an association relationship with the omnidirectional image, denoting that each image represents a space situated at a shooting point, which in turn is a node in the IndoorGML NRG. The definition of IndoorGML relationships was extended by defining a within relationship for the objects contained in the spaces. Here, however, the connection between the image and topology data was implemented through reference data: the algorithm performs a spatial query on polygons, with a coordinate calculated from a pixel location in the image, to identify the containing space and its attribute information. Moreover, since pixels present objects only visually, not discretely as in vector data, using exact positions to define topological relationships in this manner may be difficult [4]. Semantically separating an image into different objects has been regarded as a chicken-and-egg problem: an object's type and shape are essential to determine whether a pixel belongs to the object, but the object must first be isolated to understand which objects are present in an image [36]. Furthermore, with connectivity relationships, images are loaded discretely when implementing space-to-space navigation, which gives an impression of discontinuity in indoor space, especially in corridors.

The concept of spatially extended topology, based on the 9-intersection model [37], was introduced to define topological properties of moving objects for a concierge service application [38] by defining regions around points that reflect their respective ranges of influence. For an object at a location, a scope of influence of a certain range describes a conceptual area of potential interaction with that object. This region is conceptual rather than physically delineated, analogous to the transmission footprint of an antenna or the broadcast range of a WiFi router. The possibility of defining such a range provides opportunities for various kinds of spatial reasoning about an object.

3. Methodology

In this section, we describe the framework for image and topological data fusion. We then describe the algorithms needed to match locations identified in the image with the nodes that represent spaces in the topological data. These methods underpin the functionalities that demonstrate how linking image data and topological data enables spatial analysis in indoor space.

3.1. Framework for Image and Topology Data Fusion

We apply the concept of data fusion to spatial data representing the same indoor space, to produce more information than the datasets provide separately. By directly combining information from topological and image data, this methodology aims to provide more relevant information in indoor space, which is especially useful for indoor navigation and visualization.

Because of its high visual content, ease of updating, and relatively small file size, image data is a suitable representation of the indoor environment. However, to provide LBS in indoor space, it must be supplemented by network-based topological data, in which spaces and their relationships are represented directly through nodes and edges, respectively. Images provide information only through individual pixels, which do not contain enough spatial information to support an LBS on their own. In this data fusion approach, we aim to establish a relationship between the image pixels and the nodes of the topological dataset. Within the LBS, a user visualizes a space (or object) in the image through its pixels. The position of a selected pixel is the key to querying for the corresponding node in the topological data: this node represents either the space containing the position represented by the pixel or an object contained in that space. We illustrate the framework for this approach in Figure 2.

3.2. Establishing the Relationship between the Image and Topological Data

Given the way images represent indoor spaces and the objects within them, it is difficult to establish a direct match to the topology data. Images contain only pixels and have no geometry, making it difficult to obtain the attributes of features or to identify the features themselves intuitively. With 3D coordinates, it is possible to establish a 1:1 match with the nodes of the topological data; still, a separate method is necessary to recognize the spaces or objects, even though they are displayed visually.

Figure 3 illustrates the general method of establishing a relationship between the data. Since a user can visually see an object, the process of space or object identification begins with the user selecting a position in the image. The user selects only a single pixel, and the position calculated from it defines the relationship with the topological data. We use the image heading (where north equals zero), the image radius, and the horizontal angle of the selected pixel to calculate the x and y coordinates, and the vertical angle together with the image capture height to calculate the z coordinate. Since the coordinates of the user-selected pixel are thus available for matching, the coordinates of the nodes representing indoor spaces or indoor objects are derived similarly. The function of the node, whether an indoor space or an object within a space, is also noted, since this differentiates the topological relationship, say connectivity from space (click) to space (node) versus inclusion from space (click) to POI (node).
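As a minimal sketch of this calculation (assuming, as in our implementation, a horizontal angle measured clockwise from north and a known image radius and capture height; the function name mirrors getCoordinate in Pseudocode 1), the position of a selected pixel may be computed in JavaScript as follows:

function getCoordinate(ath, atv, r, offset, heading) {
  // ath: horizontal angle of the pixel (degrees, clockwise from the heading)
  // atv: vertical angle of the pixel (degrees, up from the horizon)
  // r: image radius; offset: camera height; heading: image heading (0 = north)
  var angleH = (heading + ath) * Math.PI / 180;
  var angleV = atv * Math.PI / 180;
  return {
    x: r * Math.cos(angleH),
    y: r * Math.sin(angleH),
    z: offset + r * Math.sin(angleV) // capture height plus vertical displacement
  };
}

// Example: a pixel 30 degrees clockwise of north, 10 degrees below the horizon,
// with an image radius of 5 m and a 1.5 m capture height (illustrative values).
var p = getCoordinate(30, -10, 5, 1.5, 0); // { x: 4.33, y: 2.5, z: 0.63 }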

In the LBS, the pixel’s position defines the link to the topological data by knowing which nodes are in this position’s vicinity. With this, a 3D space around the nodes’ location can describe vicinities, to investigate which among them contains the pixel. Depending on the type of space or object that the node represents, the size of the region may vary. Table 1 enumerates the differentiation among the objects and space.

The processes described above calculate the position of the user click and find the node closest to that click. Their corresponding relationship is determined using the Spatial Extended Point (SEP) approach on the calculated coordinates, adopted from Lee [38]. In this study, we define the SEP as a region around each node representing the area of potential influence or interaction around the node's point location. Using the SEP, we determine whether the user-selected point lies in the exterior, the interior, or on the boundary of this region, easing the limitation posed by the lack of geometric information in both the image and topology datasets. It follows from this definition that the SEP's interior is the region in which the node's influence is present, the exterior is where there is no influence, and the boundary marks the limit of influence and the start of the noninfluencing area. Figure 4 illustrates this procedure.

The interactions between an area and a point described in Figure 4, extended to a point and a region in 3D, represent the topological relationships of those entities in 3D space quantitatively as a matrix, referred to as the SEP matrix, as shown in Equation (1). Each element takes the value 1 (satisfied) or 0 (not satisfied) depending on the conditions, which simplifies the calculation of the topological relationship between the two entities. A generalized expression is defined for each observation in each projection of the 3D space onto the three Cartesian planes:

$$M_{SEP}(P) = \begin{bmatrix} I_{XY}(P) & B_{XY}(P) & E_{XY}(P) \\ I_{YZ}(P) & B_{YZ}(P) & E_{YZ}(P) \\ I_{XZ}(P) & B_{XZ}(P) & E_{XZ}(P) \end{bmatrix} \quad (1)$$

where $P$ is the selected position in the image; $I_{XY}$, $I_{YZ}$, and $I_{XZ}$ indicate that $P$ is inside the boundary along the XY, YZ, and XZ planes, respectively; $B_{XY}$, $B_{YZ}$, and $B_{XZ}$ indicate that $P$ lies on the boundary along the XY, YZ, and XZ planes, respectively; and $E_{XY}$, $E_{YZ}$, and $E_{XZ}$ indicate that $P$ is outside the boundary along the XY, YZ, and XZ planes, respectively.

The SEP matrix results from calculating the distances between the nodes and the user-selected position in the image. These distances are calculated along the three orthogonal planes of 3D space to establish the values in the matrix given by Equation (1). In this process, the threshold value is smaller if the node represents an indoor POI than if it represents a space. First, the distance from the user-identified position in the image is calculated along the three orthogonal planes to determine whether an IndoorGML node exists within the allowable range of that point and, if so, of what type. If the clicked point is closest to an IndoorGML node representing a space, the SEP matrix is evaluated with the larger threshold value. Depending on the type of space of the identified nearby node, the SEP matrix is populated with the appropriate values to denote the topological relationship between the user-identified point and the corresponding IndoorGML node.

Moreover, the size of the region defining the SEP is a factor to consider when linking images of indoor spaces and objects to IndoorGML data. The definitions of this region in Table 1 set the allowable ranges used when determining topological relationships with the SEP. This permissible value must be adjusted according to what the IndoorGML node represents; for example, nodes that represent indoor spaces must have a broader range than nodes that represent objects such as doors. Pseudocode 1 shows the simplified pseudocode.

calculateSEP (Node_part, Tolerance)
Step 0. Initialize constants
    r ⟵ image radius
    offset ⟵ camera height
    Node_part ⟵ role of Node in scene (type of space)
    Tolerance ⟵ allowable range of the SEP
Step 1. Check click_mouse if FALSE
 Step 1.1 Read topological data and obtain node parameters
    Node_ath, Node_atv ⟵ position of Node in scene
    Node_part ⟵ part of Node in scene
Step 2. Define function to obtain coordinates from user input
   getCoordinate (mouse_ath, mouse_atv, r, offset) {
    If camera direction is North, set Hd ⟵ 0, increasing clockwise
    Angle_H ⟵ (Hd + mouse_ath) × PI/180
    Angle_V ⟵ mouse_atv × PI/180
    x ⟵ r cos (Angle_H)
    y ⟵ r sin (Angle_H)
    z ⟵ offset + r sin (Angle_V) }
Step 3. Set click_mouse as TRUE
 Step 3.1 Obtain mouse_ath, mouse_atv from click
 Step 3.2 Calculate coordinates
  Step 3.2.1 Obtain coordinate of user-identified point
     (x_p, y_p, z_p) ⟵ getCoordinate (mouse_ath, mouse_atv, r, offset)
  Step 3.2.2 If Node_part is Indoor Space
     (x_n, y_n, z_n) ⟵ getCoordinate (Node_ath, Node_atv, r, offset)
  Step 3.2.3 If Node_part is Indoor Object
     (x_n, y_n, z_n) ⟵ getCoordinate (Node_ath, Node_atv, r, offset)
Step 4. Calculate SEP matrix
 Step 4.1 Initialize: blank 3 x 3 SEP_Matrix ⟵ 0
 Step 4.2 Set Tolerance depending on type of space or type of object
  Step 4.2.1 Calculate SEP_Matrix values along XY plane
    IF XY distance is less than Tolerance, SEP_Matrix [0][0] ⟵ 1
    IF XY distance is same as Tolerance, SEP_Matrix [0][1] ⟵ 1
    IF XY distance is greater than Tolerance, SEP_Matrix [0][2] ⟵ 1
  Step 4.2.2 Calculate SEP_Matrix values along YZ plane
    IF YZ distance is less than Tolerance, SEP_Matrix [1][0] ⟵ 1
    IF YZ distance is same as Tolerance, SEP_Matrix [1][1] ⟵ 1
    IF YZ distance is greater than Tolerance, SEP_Matrix [1][2] ⟵ 1
  Step 4.2.3 Calculate SEP_Matrix values along XZ plane
    IF XZ distance is less than Tolerance, SEP_Matrix [2][0] ⟵ 1
    IF XZ distance is same as Tolerance, SEP_Matrix [2][1] ⟵ 1
    IF XZ distance is greater than Tolerance, SEP_Matrix [2][2] ⟵ 1

Pseudocode 1: Identifying objects or spaces in images using the SEP matrix.
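For reference, a compact JavaScript rendering of Steps 2 and 4 might look as follows. This is a sketch under the same assumptions as Pseudocode 1; here we assume the point counts as inside the SEP only when all three projections fall in the interior column, and exact equality to the tolerance marks the boundary case.

// Build the 3 x 3 SEP matrix for a clicked point p and a node n, using the
// tolerance chosen for the node's type. Rows correspond to the XY, YZ, and XZ
// planes; columns to interior, boundary, and exterior of the SEP.
function computeSepMatrix(p, n, tolerance) {
  var m = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
  var planes = [
    Math.hypot(p.x - n.x, p.y - n.y), // distance in the XY plane
    Math.hypot(p.y - n.y, p.z - n.z), // distance in the YZ plane
    Math.hypot(p.x - n.x, p.z - n.z)  // distance in the XZ plane
  ];
  for (var i = 0; i < 3; i++) {
    if (planes[i] < tolerance) m[i][0] = 1;        // interior
    else if (planes[i] === tolerance) m[i][1] = 1; // boundary (exact match)
    else m[i][2] = 1;                              // exterior
  }
  return m;
}

// The clicked point lies inside a node's SEP when all three projections fall
// in the interior column.
function isInsideSep(m) {
  return m[0][0] === 1 && m[1][0] === 1 && m[2][0] === 1;
}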

4. Experimental Implementation

In this section, we demonstrate the proposed methodology for image and topological data fusion using omnidirectional images and IndoorGML, respectively, by building the visualization platform and implementing the algorithms and processes described earlier.

4.1. Datasets for Implementing Image and Topology Data Fusion

In this implementation, omnidirectional images represent and visualize the indoor space and the objects contained in it, while IndoorGML describes the topological relationships between spaces. Figure 5 shows the schematic diagram for generating these datasets before data fusion. First, the omnidirectional images represent the indoor space, or more exactly a 360° view of the place at each shooting point. This manner of shooting subdivides one continuous space into subspaces [35]. The image headings (the direction to which each image's zero angle refers) vary from image to image because of inconsistencies in capture. This irregularity may cause errors when calculating the positions of spaces in the images, so it must be corrected.
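Correcting the headings amounts to shifting each image's angles by that image's measured offset so that all scenes share the same zero direction. A minimal sketch, assuming a per-scene heading offset has been measured:

// Normalize an angle measured in a scene to a common north-referenced angle,
// given that scene's measured heading offset (a hypothetical per-scene value).
function normalizeHeading(angleInScene, headingOffset) {
  var a = (angleInScene + headingOffset) % 360;
  return a < 0 ? a + 360 : a; // keep the result in [0, 360)
}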

In this study, we express the spatial relationships through IndoorGML, as in Jung and Lee [4]. The NRS is the basis for expressing the topological relationships of adjacency and connectivity among indoor spaces, and the relationships of objects represented as POI are expressed similarly through a within relationship. Since the connectivity relationships of spaces allow the use of an image-based topology authoring tool that produces an XML file structure, each image can be indexed as a scene [4, 35]. We construct the IndoorGML data as an XML database integrated into the image XML file, so that for each scene representing an image, the IndoorGML topological relationships, such as connected and adjacent spaces, and even the POI contained in that particular space, are available.
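To illustrate, a scene entry in the integrated XML might carry its IndoorGML node identifier together with its connected scenes and contained POI, and can be read with a browser's DOMParser. The element and attribute names below are hypothetical, since the actual schema follows the authoring tool's output.

// Hypothetical scene entry in the integrated XML; element and attribute names
// are illustrative, not the authoring tool's actual schema.
var sceneXml =
  '<scene name="scene_605" node="R605">' +
  '<connected scene="scene_corridor" node="C6"/>' +
  '<poi name="FireExtinguisher01" ath="42.0" atv="-12.5"/>' +
  '</scene>';

// Read the connected scenes and contained POI of the scene (browser DOMParser).
var doc = new DOMParser().parseFromString(sceneXml, "text/xml");
var connectedScenes = doc.querySelectorAll("connected");
var containedPois = doc.querySelectorAll("poi");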

4.2. Study Area and Experimental Environment

The target area for this study is the 6th floor of the 21st Century Building, University of Seoul. Image capture was carried out at this location using a DSLR camera and a Ricoh Theta S mounted on a rotator. We stitched the captured images using PTGui 10.0.12 to generate the omnidirectional images. Table 2 summarizes these tools.

Based on the IndoorGML data, these images were connected using PanoTour Pro 2.5, an image-based topology authoring tool, to establish the connectivity relationships of each image and to build HTML and XML files used in the service. As PanoTour indexes the image data in XML format and links each to a scene, Krpano scene call scripts display the images. Accordingly, the topology data based on IndoorGML was constructed as an XML database and integrated into the image XML files.
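For instance, displaying the scene linked to an identified node can reduce to a single Krpano action call from JavaScript. This sketch assumes the default embedding id and uses Krpano's loadscene action; the scene name is illustrative:

// Display the omnidirectional image linked to an identified node (sketch).
// Assumes the default embedding id; Krpano's JavaScript interface exposes call().
var krpano = document.getElementById("krpanoSWFObject");
krpano.call("loadscene(scene_corridor, null, MERGE, BLEND(1));");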

Bitnami was used to build the server handling the image and network-based topology data. We used web-based languages, including HTML, XML, and JavaScript, to implement the algorithms discussed in the previous section. Functions in Pseudocode 1 were assisted by Krpano JavaScript functions, particularly getCoordinate for obtaining coordinates from the images. The output is a web browser-operated platform where the user can pan, zoom, and scroll around a single omnidirectional image, captured at a location corresponding to an IndoorGML node. The following section discusses the implementation of the algorithms within this platform to demonstrate the data fusion between omnidirectional images and IndoorGML.

4.3. Applying Data Fusion to Omnidirectional Images and IndoorGML Data

Integrating the image and topological data using the algorithms described in the previous section demonstrates the relationship between the nodes of the IndoorGML data and the omnidirectional images. In this section, we build an interface that invokes these algorithms through user-initiated actions, such as double-clicking and long-pressing.

First, when a user double-clicks on an omnidirectional image, the SEP matrices calculated for the location of the selected pixel are used to determine whether an image or an object is present. In either case, the double-click links the image's or object's information to the current image and, if it is an image, eventually loads it into the display. Figure 6 illustrates this process.

To further demonstrate the relationship between the IndoorGML data and the omnidirectional image, a long click on a pixel displays the attributes of objects. In a process similar to identifying objects in the image, the algorithm checks whether the SEP contains doors or objects, and attribute data is displayed only if the object is present in, or on the boundary of, this SEP. This algorithm provides a method to display information about rooms and facilities using the properties of the IndoorGML nodes. In other words, through a long click, the user can see the attributes of items that are visible as pixels but not as discrete objects in the images; the click triggers the calculation of the SEP matrix to indicate the topological relationship between the identified position and the position of each node in the interior space. Figure 7 illustrates this process.
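Both interactions reduce to the same lookup: find the node whose SEP contains the clicked position, then branch on the node's type. The following sketch uses the helpers outlined earlier; findNodeAt, loadLinkedImage, and showAttributes are hypothetical names for illustration.

// Find the first node whose SEP contains point p (hypothetical helper over
// the current scene's node list).
function findNodeAt(p, nodes) {
  for (var i = 0; i < nodes.length; i++) {
    var m = computeSepMatrix(p, nodes[i], toleranceFor(nodes[i].type));
    if (isInsideSep(m)) return nodes[i];
  }
  return null;
}

// Double-click: move to the linked image if the node represents a space.
function onDoubleClick(p, nodes) {
  var node = findNodeAt(p, nodes);
  if (node && node.type === "indoorSpace") loadLinkedImage(node.linkedScene);
}

// Long press: show attributes only for indoor objects or facilities.
function onLongPress(p, nodes) {
  var node = findNodeAt(p, nodes);
  if (node && node.type !== "indoorSpace") showAttributes(node);
}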

The above operations derive information about features and spaces using the attributes present in the IndoorGML data, linked to the omnidirectional images that provide the visual interface to the user. To further illustrate the ability to portray spatial information through omnidirectional images, we implement image-based indoor navigation. The user inputs the names of the origin and destination locations, which are matched against the names of all omnidirectional images. The algorithm identifies the path from the start location to the entered destination using the IndoorGML data. For a particular image, it follows the path by repeatedly identifying the linked image attribute, which points to the next image along the path, until the destination is reached. The sequence of images is loaded with appropriate delays and transitions to produce a smooth and realistic visualization of navigating through the spaces represented by the images. Figure 8 illustrates this procedure.
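The playback over a computed path can then be a simple timed loop over the scene sequence. A sketch, where path is the scene sequence from the route search and the 3-second delay is illustrative:

// Visualize navigation by loading each scene on the path in order, with a
// delay between transitions. loadScene is the scene-loading callback.
function playRoute(path, loadScene) {
  path.forEach(function (sceneName, i) {
    setTimeout(function () { loadScene(sceneName); }, i * 3000);
  });
}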

4.4. Resulting Platform for Omnidirectional Images and IndoorGML Data Fusion

We implemented the process of identifying objects and spaces in the omnidirectional images using the SEP matrix, which defines the relationships between the objects in the indoor space and the nodes of the indoor topological data, as shown in Figure 9. When a user double-clicks on the image, the algorithm identifies the node whose SEP contains the click position, and the image linked to that node is displayed; this linked image was captured at the location represented by that node. The double-click action thus implements moving from one position to another, visualized by successively displaying omnidirectional images, and demonstrates that topological information, a connectivity relationship for instance, can be obtained directly from the images.

Similarly, using the established relationship between the omnidirectional images and the IndoorGML data, users can display the attributes of objects with a long click. As shown in Figure 10, the long click identifies the node whose SEP contains the click position, and information about the node is displayed only if the node represents an object or facility located indoors rather than an indoor space. Like the previously demonstrated function, this shows that topological information, in this case the attributes of the IndoorGML nodes, can be obtained directly from the images.

In this application, the link between the image and topology data is also used to visualize navigation from a starting location to a destination, even if the starting location is not the currently loaded scene. The user enters the names of the desired start and end points, which are searched in the image data attributes, after which the path between these points is established using the topology data. Each omnidirectional image along the established path is arranged from start to end, and each is loaded with the appropriate turn directions, transitions, and delays to achieve a smooth visualization of navigation. The result of this process is illustrated in Figure 11.

Figure 11 illustrates a sample visualization of navigation from one room to another, including the spaces along the path of movement. The user is prompted to input the names of the rooms, in this case from the starting point, Room 605, to the destination, Room 607. The visualization commences by loading the image displaying Room 605, rotating toward the direction of the door closest to the destination, loading the next image displaying the corridor, transitioning to the image near the door of Room 607, and finally transitioning to the image displaying the interior of the destination. The continuous visualization of the path from one location to another demonstrates the continuity of indoor space, despite its being represented discretely, as subspace nodes in the topological data and as separate omnidirectional images at each capture location.

5. Conclusions and Future Studies

Indoor space has been expressed in various ways in previous studies, with each method differing in means of collection and generation, emphasized aspect of space, and applications. Geometric datasets such as LiDAR provide realistic and accurate visualizations, but omnidirectional images provide comparable results while being easier and much cheaper to collect, process, and update. In addition, while it is important to visualize indoor space in three dimensions, studies show that the capability for spatial analysis provided by topological datasets is necessary for indoor spatial services such as LBS, an aspect that omnidirectional images lack. Accordingly, this study proposes a data fusion method between image data and topological data, implemented with omnidirectional images and IndoorGML.

Indoor spaces are expressed using topological data given by IndoorGML, an international standard established by the OGC, in which spaces are abstracted as zero-dimensional nodes and their spatial relationships are expressed as one-dimensional edges. Since IndoorGML does not explicitly support the representation of objects or facilities in these spaces, the concept of the point of interest (POI) was implemented to expand IndoorGML's definitions of topological relationships from those between spaces to those involving the objects contained in these spaces as well.

In this study, the image and topological data are used together to recognize objects and spaces in the images through the concept of the SEP, whereby user-identified pixels are related to the nodes of the IndoorGML data. The SEP signifies a region of influence for each node and enables a simplified representation of topological relationships between positions in the image and the nodes that represent spaces. In our experimental implementation, we collected omnidirectional images in the interior of a building and implemented various functions based on this established relationship, from image-to-image movement visualization to the visualization of continuous indoor navigation. Also, using the expanded conceptualization of topological relationships in IndoorGML, objects within the indoor space are portrayed as indoor POI, likewise represented as nodes. These indoor POI are not just visualized through the images; the data fusion method through the SEP has also enabled spatial analysis such as displaying the attributes of facilities.

In this paper, the indoor topological data and the images represent a corridor-type indoor space. Because of this, the manner of generating IndoorGML data for other indoor environments may differ, as may the acceptable ranges applied when using the SEP. Considering these factors, however, it can still be expected that indoor topological data can be linked to omnidirectional images to achieve similar results. Also, there is no officially established data model for representing indoor POI yet; if one is created, it may help formalize how objects and facilities are represented along with the spaces that contain them.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

This paper is based on the first author’s master’s thesis.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by a grant (20NSIP-B135746-04) from the National Spatial Information Research Program (NSIP) funded by the Ministry of Land, Infrastructure and Transport of the Korean government.