Prof. Rais received his MS and PhD degrees in Computer Engineering, with specialization in Networks and Distributed Systems, from the University of Nice, Sophia Antipolis, France, in 2007 and 2011, respectively. Before that, he received his BE in Computer Systems from the National University of Science and Technology, Pakistan, in 2002. He has more than 20 years of experience in teaching, research, and industrial R&D. Prof. Rais is the author of several publications in internationally recognized peer-reviewed journals and conferences. His research interests include Network Protocols and Architectures, Information-Centric and Software-Defined Networks, Machine Learning Algorithms, Cloud Computing, Network Virtualization, and Internet Naming & Addressing issues.
The research profiles of Prof. Rais can also be accessed at the following links:
Rao Naveed Bin Rais on Google Scholar
Rao Naveed Bin Rais on ResearchGate
Rao Naveed Bin Rais on Scopus
Rao Naveed Bin Rais on ORCID
[NOTE]: The following labels are used in the publication list below:
[ J*] ==> Journal Paper
[C*] ==> Conference Paper
[B*] ==> Book
[BC*] ==> Book Chapter
[PP*] ==> Preprint
[R*] ==> Research Report
Abstract—5G spectral efficiency requirements foresee network densification as a potential solution for improving capacity and throughput in next-generation wireless networks (NGWNs). This is achieved by shrinking the footprint of base stations (BSs), effective frequency reuse, and dynamic sharing of resources between users. However, such deployments result in unnecessary handovers (HOs) due to smaller cell sizes and the limited sojourn time under high train mobility. In particular, when a train passes at high speed through the radio coverage footprints of BSs, the frequent HO rate may cause serious communication interruptions, degrading quality of service (QoS). This paper proposes a novel context-aware HO skipping scheme that relies on passenger mobility, train trajectory, travelling time and frequency, network load, and signal-to-interference-and-noise ratio (SINR) data. We model passenger traffic flows in the cardinal directions, i.e., north, east, west, and south (NEWS), in a novel framework that employs a realistic Poisson point process (PPP) for real-time mobility patterns to support mobile networks. Spatio-temporal simulations combine the NEWS mobility prediction model with machine learning (ML), where a support vector machine (SVM) achieves an accuracy of 94.51%. The ML-driven mobility prediction results are integrated into our proposed scheme, which shows coverage probability and average throughput comparable to the no-skipping case while significantly reducing HO costs.
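As an illustration of the modelling described in this abstract, the sketch below draws base station locations from a homogeneous Poisson point process and counts the handovers a straight train trajectory incurs; all intensities, dimensions, and the alternate-skipping rule are hypothetical choices for illustration, not the paper's actual parameters.

```python
import math
import random

def ppp_base_stations(intensity, width, height, rng):
    """Sample BS locations from a homogeneous Poisson point process:
    the point count is Poisson(intensity * area), positions uniform."""
    mean = intensity * width * height
    # Knuth's algorithm for a Poisson draw (adequate for modest means)
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(k - 1)]

def serving_bs(point, bss):
    """Nearest-BS association (index of the closest base station)."""
    return min(range(len(bss)),
               key=lambda i: (bss[i][0] - point[0]) ** 2 + (bss[i][1] - point[1]) ** 2)

def handovers_along_track(bss, y, width, step=0.1):
    """Count serving-cell changes as the train moves west to east at height y."""
    hos, current, x = 0, None, 0.0
    while x <= width:
        s = serving_bs((x, y), bss)
        if current is not None and s != current:
            hos += 1
        current = s
        x += step
    return hos

rng = random.Random(1)
bss = ppp_base_stations(intensity=0.5, width=20, height=10, rng=rng)
total = handovers_along_track(bss, y=5.0, width=20)
print("BSs:", len(bss), "handovers without skipping:", total)
print("handovers when every other HO is skipped:", total // 2)
```

Skipping alternate HOs halves the HO cost in this toy setting; the actual scheme conditions each skip on context (load, SINR, trajectory) rather than blindly alternating.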
Due to smaller cell sizes and the limited sojourn time under high-speed train mobility, unnecessary handovers (HOs) occur, which can lead to higher network communication costs and affect passengers' quality of service (QoS). This paper proposes a novel blockchain-enabled, privacy-preserving HO skipping framework using a train mobility dataset from the city of London. Using the dataset's parameters, passenger traffic flows are modelled by averaging footfall numbers across various train lines and stations, utilising blockchain to maintain privacy. The framework stores pseudonym addresses in order to track the paths of users. The proposed framework achieves a favourable trade-off: an approximately 2% gain in average throughput, over 100% gain in last-hop signal quality, and a 50% reduction in HO costs, while also accounting for the resources needed to operate the blockchain.
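A minimal sketch of the pseudonym idea described here, assuming (hypothetically) per-epoch pseudonyms derived from a user-held secret and a hash-chained append-only ledger standing in for the blockchain; all names and secrets are invented:

```python
import hashlib

def pseudonym(user_secret, epoch):
    """Per-epoch pseudonym address: H(secret || epoch). Records from
    different epochs are unlinkable without the user-held secret."""
    return hashlib.sha256(f"{user_secret}:{epoch}".encode()).hexdigest()[:16]

class PathLedger:
    """Append-only, hash-chained records standing in for the blockchain:
    each block stores (pseudonym, station, hash of the previous block)."""
    def __init__(self):
        self.chain = [("genesis", "", "0" * 16)]

    def add(self, pseud, station):
        prev_hash = self.chain[-1][2]
        block_hash = hashlib.sha256(
            f"{pseud}:{station}:{prev_hash}".encode()).hexdigest()[:16]
        self.chain.append((pseud, station, block_hash))

    def path_of(self, pseud):
        """Reconstruct a traveller's path from their pseudonym."""
        return [station for p, station, _ in self.chain if p == pseud]

ledger = PathLedger()
alias = pseudonym("user-42-secret", epoch=7)   # hypothetical user secret
for station in ["Euston", "Kings Cross", "Stratford"]:
    ledger.add(alias, station)
print(ledger.path_of(alias))   # the path is recoverable via the pseudonym
```

The framework can thus track paths for HO-skipping decisions while the real identity behind each pseudonym stays with the user.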
This paper introduces a handover (HO) skipping topology analysis that adjusts HO skipping for 5G-and-beyond applications to improve overall network performance and diminish negative effects. We propose a novel Poisson point process (PPP) based context-aware HO skipping approach that focuses on the impact of HO metrics such as the passenger's trajectory, different velocities, and the mean time a passenger spends within a BS, in order to maintain a good quality of service (QoS). Our proposed context-aware HO skipping scheme enables dynamic HO skipping, where the skipping decision is taken based on the load of the BSs along the passenger's trajectory. The parameters have been analysed, implemented in a dynamic simulator, and investigated for different parameter sets in a high-speed railway simulation scenario. Our simulation results demonstrate the robustness of the framework, showing comparable coverage probability across various high train velocities and mean times.
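The load- and sojourn-time-driven skipping decision can be sketched as follows; the thresholds, cell diameters, and loads below are hypothetical values for illustration only:

```python
def should_skip_handover(bs_load, sojourn_s, load_threshold=0.8, min_sojourn_s=2.0):
    """Skip the HO to the next BS if it is heavily loaded, or if the
    expected sojourn time in its footprint is too short to justify the
    HO signalling (a fast train crossing a small cell)."""
    return bs_load > load_threshold or sojourn_s < min_sojourn_s

def plan_skips(trajectory, velocity_kmh):
    """Decide skip/serve for each BS along the passenger's trajectory.
    trajectory: list of (bs_id, load, cell_diameter_km)."""
    decisions = []
    for bs_id, load, diameter_km in trajectory:
        sojourn_s = diameter_km / velocity_kmh * 3600.0  # mean time in the cell
        skip = should_skip_handover(load, sojourn_s)
        decisions.append((bs_id, "skip" if skip else "serve"))
    return decisions

# A train at 300 km/h crossing three cells (loads and sizes are invented):
track = [("BS1", 0.3, 0.5), ("BS2", 0.9, 0.5), ("BS3", 0.4, 0.1)]
print(plan_skips(track, velocity_kmh=300))
# → BS1 is served; BS2 is skipped (overloaded); BS3 is skipped (1.2 s sojourn)
```

Higher velocities shorten the sojourn time, so more cells fall below the minimum and are skipped, which matches the abstract's emphasis on velocity and mean time as skipping metrics.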
With the advent of Coronavirus Disease 2019 (COVID-19), the world encountered an unprecedented health crisis due to the severe acute respiratory syndrome (SARS) pathogen. This impacted all sectors, but most critically the transportation sector, which required a strategy informed by mobility trends across transportation modes and regions. We analyse a mobility prediction model for smart transportation by considering key indicators including data selection, processing, integration of transportation modes, and data point normalisation in regional mobility. A machine learning (ML) driven classification is performed to predict the efficiency of and variation in transportation modes, namely driving, walking, and transit. Additionally, regional mobility across Asia, Europe, Africa, Australasia, the Middle East, and America is also analysed. In this regard, six ML algorithms are applied for the precise assessment of transportation modes and regions. The initial experimental results demonstrate that the majority of the world's travelling dynamics have been reshaped, with accuracies of 91.21% and 84.5% achieved using Support Vector Machine (SVM) and Random Forest (RF) for the different transportation modes and regions, respectively. This study paves a new direction for assessing the transportation modes affected by the pandemic in order to optimize economic benefits for smart transportation.
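As a stand-in for the SVM used in the study (whose features and data are not reproduced here), the sketch below trains a linear max-margin classifier via Pegasos-style sub-gradient descent on the hinge loss, over toy driving-vs-walking features; all numbers are invented for illustration:

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic sub-gradient descent on the hinge loss.
    data: feature vectors; labels: +1 / -1."""
    rng = random.Random(seed)
    w, b, t = [0.0] * len(data[0]), 0.0, 0
    for _ in range(epochs):
        order = list(range(len(data)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)                     # decaying step size
            score = sum(wj * xj for wj, xj in zip(w, data[i])) + b
            if labels[i] * score < 1:                 # margin violated
                w = [(1 - eta * lam) * wj + eta * labels[i] * xj
                     for wj, xj in zip(w, data[i])]
                b += eta * labels[i]
            else:                                     # only shrink (regularize)
                w = [(1 - eta * lam) * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Invented features: (mean speed km/h, trip length km); +1 driving, -1 walking
X = [(60, 12), (45, 8), (70, 20), (55, 15), (4, 1), (5, 2), (3, 1.5), (6, 3)]
y = [1, 1, 1, 1, -1, -1, -1, -1]
w, b = train_linear_svm(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
print("training accuracy:", acc)
```

The study's reported accuracies come from standard SVM/RF implementations on real mobility data; this sketch only shows the shape of the classification step.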
Wireless Mesh Networks (WMNs) are considered self-organizing, self-healing, and self-configuring networks. Despite these exciting features, WMNs face several routing challenges, including scalability, reliability and link failures, mobility, flexibility, and other network management issues. To address these challenges, WMNs need to be made programmable so that standard techniques can be modified, configured, and implemented through software programs; this can be achieved by integrating the Software-Defined Networking (SDN) architecture. SDN, a cutting-edge technology, promises to facilitate network management as well as address the routing issues of wireless mesh networks. However, replacing the legacy IP-based network model in its entirety leads to technical, operational, and economic problems, which can be mitigated by full interoperability between SDN and existing IP devices. This study introduces a Robust Routing Architecture for Hybrid Software-Defined and Wireless Mesh Networks (Soft-Mesh), enabling a systematic, gradual, and efficient transition of WMNs to SDNs. The main objective of this paper is to suggest improvements to the SDN node architecture that allow the implementation of various network functions, such as routing, load balancing, network control, and traffic engineering, for hybrid SDN and IP networks. The Mininet-WiFi simulator is used to evaluate the performance of the proposed architecture on a hybrid network topology with a varying number of nodes (50, 100, 150, 200, and 250) and varying proportions of SDN hybrid and legacy nodes.
Results are reported for average UDP throughput, end-to-end delay, packet drop ratio, and routing overhead, in comparison with traditional routing protocols, including Optimized Link State Routing (OLSR) and Better Approach to Mobile Adhoc Networking (BATMAN), and with existing hybrid SDN/IP routing architectures, including Hakiri and wmSDN. The analysis of the simulation results shows that the proposed Soft-Mesh architecture outperforms the traditional and existing hybrid routing protocols on the aforementioned performance metrics, yielding 50% to 70% improvement as the proportion of SDN hybrid nodes increases.
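The hybrid-node idea of this abstract (controller-installed flow rules with a fallback to a legacy mesh routing table) can be sketched as below; the class, tables, and method names are hypothetical, not Soft-Mesh's actual interfaces:

```python
class HybridNode:
    """Hybrid SDN/IP node: controller-installed flow rules take priority,
    with a fallback to the table learned by a legacy protocol (e.g. OLSR)."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}     # dst -> next hop, pushed by the SDN controller
        self.legacy_table = {}   # dst -> next hop, learned by OLSR/BATMAN

    def install_flow(self, dst, next_hop):
        """Called by the controller to steer traffic (e.g. for load balancing)."""
        self.flow_table[dst] = next_hop

    def next_hop(self, dst):
        """Flow rules first, legacy routes second, drop otherwise."""
        if dst in self.flow_table:
            return self.flow_table[dst], "sdn"
        if dst in self.legacy_table:
            return self.legacy_table[dst], "legacy"
        return None, "drop"

node = HybridNode("n1")
node.legacy_table["10.0.0.7"] = "n3"      # learned by the mesh protocol
print(node.next_hop("10.0.0.7"))          # → ('n3', 'legacy')
node.install_flow("10.0.0.7", "n5")       # controller re-routes the flow
print(node.next_hop("10.0.0.7"))          # → ('n5', 'sdn')
```

This fallback ordering is what lets SDN hybrid and legacy nodes coexist in one topology during a gradual transition.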
We propose a reinforcement learning-based cell switching algorithm to minimize energy consumption in ultra-dense deployments without compromising the quality of service (QoS) experienced by the users. In this regard, the proposed method can intelligently learn which small cells (SCs) to turn off at any given time based on the traffic load of the SCs and the macro cell. To validate the idea, we use the open call detail record (CDR) data set from the city of Milan, Italy, and test our algorithm against typical operational benchmark solutions. With the obtained results, we demonstrate exactly when and how the proposed method provides energy savings, and moreover how this happens without reducing the QoS of users. Most importantly, we show that our solution performs very similarly to exhaustive search, with the advantage of being scalable and less complex.
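A toy version of the idea, assuming a one-step (contextual-bandit) simplification of Q-learning with invented hourly loads for two SCs and a macro cell, not the Milan CDR data:

```python
import random

# Invented hourly loads (fraction of capacity) for two small cells, and the
# macro cell's spare capacity in the same hours: night, day, peak.
SC_LOAD = [[0.1, 0.2], [0.6, 0.3], [0.9, 0.8]]
MACRO_SPARE = [0.8, 0.5, 0.1]
ACTIONS = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def reward(hour, off):
    """One unit of energy saved per sleeping SC, with a heavy penalty if
    the macro cannot absorb the offloaded traffic (QoS violation)."""
    offloaded = sum(SC_LOAD[hour][i] for i in off)
    return -10.0 if offloaded > MACRO_SPARE[hour] else float(len(off))

def learn_policy(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """One-step Q-learning: state = hour, action = set of SCs to switch off."""
    rng = random.Random(seed)
    q = {(h, a): 0.0 for h in range(len(SC_LOAD)) for a in range(len(ACTIONS))}
    for _ in range(episodes):
        for h in range(len(SC_LOAD)):
            if rng.random() < eps:                 # explore
                a = rng.randrange(len(ACTIONS))
            else:                                  # exploit
                a = max(range(len(ACTIONS)), key=lambda x: q[(h, x)])
            q[(h, a)] += alpha * (reward(h, ACTIONS[a]) - q[(h, a)])
    return {h: max(range(len(ACTIONS)), key=lambda x: q[(h, x)])
            for h in range(len(SC_LOAD))}

policy = learn_policy()
for h in range(len(SC_LOAD)):
    print("hour", h, "-> switch off SCs", sorted(ACTIONS[policy[h]]))
```

The learned policy sleeps both SCs at night, only the lightly loaded SC during the day, and none at peak hour, mirroring the trade-off between energy savings and QoS described above.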
As a promising next-generation network architecture, named data networking (NDN) supports name-based routing and in-network caching to retrieve content in an efficient, fast, and reliable manner. Most studies on NDN have proposed innovative and efficient caching mechanisms and content retrieval via efficient routing. However, very few studies have targeted the vulnerabilities in the NDN architecture that a malicious node can exploit to perform a content poisoning attack (CPA), which can pollute in-network caches and content routing and consequently isolate legitimate content in the network. In the past, several efforts have been made to propose mitigation strategies for the content poisoning attack, but to the best of our knowledge, no specific work has addressed an emerging attack surface in NDN, which we call an interest flooding attack. Handling this attack surface can make content poisoning attack mitigation schemes more effective, secure, and robust. Hence, in this article, we propose the addition of a security mechanism to the CPA mitigation scheme known as Name-Key Based Forwarding and Multipath Forwarding Based Inband Probe, in which we block the malicious face of compromised consumers by monitoring the cache-miss ratio and the queue capacity at the edge routers. The malicious face is blocked when the cache-miss ratio hits a threshold value, which is adjusted dynamically by monitoring the cache-miss ratio and queue capacity values. The experimental results show that we successfully mitigate the vulnerability of the CPA mitigation scheme by detecting and blocking the flooding interface, at the cost of very little verification overhead at the NDN routers.
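The dynamic cache-miss-ratio blocking logic can be sketched as follows; the window size, base threshold, and the exact way queue fill tightens the threshold are hypothetical simplifications of the scheme described above:

```python
from collections import deque

class EdgeRouterGuard:
    """Per-face monitor: block a consumer face when its cache-miss ratio over
    a sliding window crosses a threshold that tightens as the queue fills."""
    def __init__(self, window=100, base_threshold=0.8, queue_capacity=50):
        self.misses = deque(maxlen=window)   # True = cache miss
        self.base_threshold = base_threshold
        self.queue_capacity = queue_capacity
        self.queue_len = 0
        self.blocked = False

    def threshold(self):
        # Dynamic adjustment: the fuller the router queue, the lower the bar.
        fill = self.queue_len / self.queue_capacity
        return self.base_threshold * (1.0 - 0.5 * fill)

    def on_interest(self, cache_hit, queue_len):
        """Returns True if the interest is forwarded, False if dropped."""
        if self.blocked:
            return False
        self.queue_len = queue_len
        self.misses.append(not cache_hit)
        ratio = sum(self.misses) / len(self.misses)
        if len(self.misses) == self.misses.maxlen and ratio >= self.threshold():
            self.blocked = True               # flooding face detected
            return False
        return True

guard = EdgeRouterGuard(window=10, base_threshold=0.8, queue_capacity=50)
for _ in range(10):                           # a flood of never-cached interests
    guard.on_interest(cache_hit=False, queue_len=25)
print("face blocked:", guard.blocked)
```

A compromised consumer flooding uncacheable interests drives the miss ratio to 1.0 and is cut off, while a benign consumer whose interests mostly hit the cache stays well under the threshold.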
Due to the rapid growth in the usage of online services, reported incidents of ransomware proliferation are on the rise. Ransomware is a more hazardous threat than other malware, as the victim of ransomware cannot regain access to the hijacked device until some form of compensation is paid. In the literature, several dynamic analysis techniques have been employed for the detection of malware, including ransomware; however, to the best of our knowledge, the hardware execution profile has not yet been investigated for ransomware analysis. In this study, we show that the true execution picture obtained via a hardware execution profile is beneficial for identifying obfuscated ransomware as well. We evaluate the features obtained from hardware performance counters to classify malicious applications into ransomware and non-ransomware categories using several machine learning algorithms, such as Random Forest, Decision Tree, Gradient Boosting, and Extreme Gradient Boosting. The employed data set comprises 80 ransomware and 80 non-ransomware applications, collected via the VirusShare platform. The results reveal that the extracted hardware features play a substantial part in the identification and detection of ransomware, with an F-measure of 0.97 achieved by Random Forest and Extreme Gradient Boosting.
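As an illustration (the study uses off-the-shelf classifiers such as Random Forest on real counter data), here is a hand-rolled miniature random forest of decision stumps over invented hardware-counter features; all feature names and values are hypothetical:

```python
import random

# Invented per-application hardware-counter features:
# (LLC miss rate, branch miss rate, share of crypto-heavy instructions)
RANSOMWARE = [(0.30, 0.08, 0.40), (0.35, 0.09, 0.45),
              (0.28, 0.07, 0.38), (0.33, 0.10, 0.50)]
BENIGN = [(0.05, 0.02, 0.01), (0.08, 0.03, 0.02),
          (0.06, 0.02, 0.05), (0.04, 0.01, 0.03)]

def train_stump(data, labels, feat):
    """Best threshold/polarity on one feature by training accuracy."""
    vals = sorted(set(x[feat] for x in data))
    candidates = [vals[0] - 1.0] + [(a + b) / 2 for a, b in zip(vals, vals[1:])]
    best, best_acc = (candidates[0], 1), -1.0
    for thr in candidates:
        for pol in (1, -1):
            preds = [pol if x[feat] > thr else -pol for x in data]
            acc = sum(p == yi for p, yi in zip(preds, labels)) / len(labels)
            if acc > best_acc:
                best_acc, best = acc, (thr, pol)
    return best

def train_forest(data, labels, n_trees=15, seed=0):
    """Bootstrap a sample and pick a random feature for each stump."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(data)) for _ in data]
        feat = rng.randrange(len(data[0]))
        thr, pol = train_stump([data[i] for i in idx],
                               [labels[i] for i in idx], feat)
        forest.append((feat, thr, pol))
    return forest

def predict(forest, x):
    """Majority vote over the stumps: +1 ransomware, -1 benign."""
    vote = sum(pol if x[feat] > thr else -pol for feat, thr, pol in forest)
    return 1 if vote >= 0 else -1

X = RANSOMWARE + BENIGN
y = [1] * len(RANSOMWARE) + [-1] * len(BENIGN)
forest = train_forest(X, y)
print("training accuracy:",
      sum(predict(forest, xi) == yi for xi, yi in zip(X, y)) / len(y))
```

The point of the sketch is the pipeline shape, counter features in, ensemble vote out; real evaluation, as in the study, requires held-out data and metrics such as the F-measure.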