Intuition for Indoor Positioning Frameworks
I have had the pleasure of working on research that aims to answer a question: how can we achieve indoor positioning with accuracy comparable to modern GPS, an exceptionally accurate tool?
The academic literature suggests we are in a good place to make that a reality, but is there room for noticeable improvement? That question is the primary focus of this post. I might skip around some concepts and leave some gaps, but I will give it my best effort to deliver a meaningful overview of what indoor positioning entails.
Naturally, what I feel inclined to discuss deeply is this: in any engineering application, what turns research into the amazing practicalities we strive for? Exposure to a relevant but divergent (to the topic at hand) discipline, deep learning, allowed me to learn and build intuition. In particular, I absorbed the recurring stance that more data brings greater outcomes. I think we can find this perspective reasonable and effective, because research often ends up limited and inapplicable due to data constraints: either the data is not available, or more samples need to be collected. Either way, the conclusion is that data is important, especially when we have a lot of it.
However, data alone will not get you far enough (depending on the goal you set). Accomplishing exceptional goals calls for exceptional data: quality, quantity, generalizability, and novelty. Let me explain. For the data you intend to use in your research to be "valid", its quality must meet certain standards. Many things contribute to quality data, some of which are: being representative of real-world information, providing sound documentation for interpreting the data, and preserving the purest form of the data once extraneous factors are taken into account. Next, since I have already mentioned the relevance of quantity in building a respectable dataset, let's cover the importance of generalizability.
A good question to think about when I mention generalizability is: which aspects of the data collection process need to answer to it? The first is generalizability in the raw information itself. Do most of the samples adhere to a consistent distribution when mapped across your total sample set? Far more importantly, is your data usable for academic research? Can other researchers incorporate your dataset into their experiments without needing to doubt its validity? The answer depends on what the data looks like. Would you want a dataset comprised of outliers? Of course not. So every step in the research process needs to be executed with the utmost care (or at least with that standard to strive toward).
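To make that screening step concrete, here is a minimal sketch (my own illustration, not taken from any particular paper) of filtering implausible RSSI samples with a simple z-score test before they enter a dataset. The 2-standard-deviation threshold is an arbitrary assumption; a real pipeline would tune it against the data.

```python
from statistics import mean, stdev

def screen_outliers(samples, z_threshold=2.0):
    """Keep only samples within z_threshold standard deviations of the mean."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return list(samples)
    return [s for s in samples if abs(s - mu) / sigma <= z_threshold]

# RSSI readings in dBm; -10 dBm is an implausible spike for an indoor AP
rssi = [-62, -60, -61, -63, -59, -10, -62]
clean = screen_outliers(rssi)
```

Note that a single extreme reading inflates the standard deviation itself, which is why a loose threshold can let outliers slip through; robust statistics (median, MAD) are a common alternative.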
The last component segues perfectly from the previous one: novelty is required in your data collection process to achieve exceptional outcomes (as mentioned a bit earlier). What is meant by this? The best alternative phrase I can offer is feature incorporation. What other forms of data can we find, collect, or create that improve experimental results, particularly in indoor localization? The current, widely adopted feature in academic work on this topic has been limited to Wi-Fi signals, measured as RSSI, pinged and retrieved from access points. It is quite understandable why: Wi-Fi is the single most available source of data you can find anywhere you go, which is what makes the concept of indoor positioning possible in the first place. For that reason, the Wi-Fi feature is superior to others (unless a comparably scalable radio communication sensor can achieve the same feat). Our research, among others, found that RSSI data alone yields distance errors between 3 and 7 meters. So, recalling the question from earlier: is there room for noticeable improvement? Yes, I firmly believe there is. Incorporating new sensor data, such as barometric pressure, accelerometer, and gyroscope readings, can extend indoor positioning to more use cases. At one point in time, estimations were restricted to single-floor architectures. The new sensor data changes that narrative and extends the application of indoor localization: we can now use the orientation of the data-recording device (an Android mobile phone in our case), its rate of change of velocity, and pressure measurements to determine altitude.
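To make the pressure-to-altitude idea concrete, here is a sketch using the standard barometric formula. The reference pressure of 101325 Pa is the standard sea-level atmosphere; in practice you would calibrate the reference against a reading taken on a known floor rather than assume it.

```python
def pressure_to_altitude(pressure_pa, reference_pa=101325.0):
    """Convert barometric pressure (Pa) to altitude (m) above the
    reference pressure level, via the standard barometric formula."""
    return 44330.0 * (1.0 - (pressure_pa / reference_pa) ** (1.0 / 5.255))

# Near sea level, pressure drops by roughly 12 Pa per meter of height,
# so two readings one floor (~3 m) apart differ by roughly 36 Pa.
ground = pressure_to_altitude(101325.0)    # at the reference pressure
upstairs = pressure_to_altitude(101289.0)  # ~36 Pa lower, ~3 m higher
```

Differencing two nearly simultaneous readings like this cancels slow weather drift, which is what makes floor detection feasible even though absolute barometric altitude wanders over hours.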
I think you can start to see the picture here: LIVE tracking takes on a new meaning (it lives up to its name) with the capability to analyze and provide greater estimations by considering the pace at which the user is moving, the angular velocity of the device (a roughly constant velocity is preferred for optimal data), and which floor(s) you are located on. Is it clear now why novelty should be sought during the data collection process?
After data collection, what is left to do? To keep this post short and unique, I will refrain from going over the "iterative" process we typically see in machine learning pipelines. I will, however, cover one aspect of it: the exploration to find an architecture that can uncover the hidden "complexities" within the data. Based on the literature review I have conducted, indoor positioning research is gunning toward neural network architectures to unveil said complexities. Neural nets metaphorically develop a "memory" (made of weights, biases, backpropagation, and the essential feed-forward layers) that maps test samples to distinct "identities", ultimately outputting an estimation representative of that notion. Going further, can a specific architecture, such as an LSTM, ResNet, or PINN, make those identities more apparent? LSTMs and ResNets in particular were designed to mitigate the vanishing gradient problem that plagues plain RNNs and very deep networks; in turn, that gives a more sophisticated and complex memory. Can the net still see the "connection" after so many feed-forward layers and rounds of backpropagation? The more unsevered connections, the better suited the outcome becomes.
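To ground the "weights, biases, and feed-forward layers" metaphor, here is a minimal forward pass in plain Python that maps a Wi-Fi fingerprint (RSSI from three access points) to an (x, y) position estimate. The parameter values below are illustrative placeholders, not trained weights; in real work they would be learned via backpropagation, and the network would be far larger.

```python
def relu(v):
    """Elementwise rectified linear activation."""
    return [max(0.0, x) for x in v]

def dense(x, weights, biases):
    """One fully connected layer: y[j] = sum_i x[i] * weights[j][i] + biases[j]."""
    return [sum(xi * wi for xi, wi in zip(x, w_row)) + b
            for w_row, b in zip(weights, biases)]

# Illustrative (untrained) parameters: 3 RSSI inputs -> 4 hidden units -> (x, y)
W1 = [[0.02, -0.01, 0.03],
      [-0.04, 0.02, 0.01],
      [0.01, 0.05, -0.02],
      [0.03, 0.01, 0.02]]
b1 = [0.1, 0.0, -0.1, 0.2]
W2 = [[0.5, -0.3, 0.2, 0.1],
      [-0.2, 0.4, 0.3, -0.1]]
b2 = [1.0, 2.0]

def predict(rssi):
    hidden = relu(dense(rssi, W1, b1))   # feed-forward through the hidden layer
    return dense(hidden, W2, b2)         # estimated (x, y) position in meters

estimate = predict([-60.0, -72.0, -55.0])
```

The "memory" of the trained network lives entirely in W1, b1, W2, and b2; backpropagation only adjusts those numbers, while the forward pass above is all that runs at estimation time.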
I have exhausted my thoughts on this topic while attempting to make it a novel reading experience. The analysis and reflection provided here will eventually be incorporated into and implemented within the research I am currently working on (thank you to my academic advisors for the opportunity). To end, a professor I highly respect once made a remark that I always seem to come back to. This is not verbatim, but it goes along the lines of: "What can I interpret from an encoded/compressed form of data? Given the opportunity to see the raw data itself, I might just get a better idea of what is going on." The relation of this statement to the notion of unsevered connections hopefully underscores its importance.
Please feel free to contact me for any conceptual errors that I might have overlooked. Thank you for your time!