About
CTO | Experienced Program Manager | Advisor | Enterprise Architect who has demonstrated…
Articles by Dr. Fahim K
Activity
-
Today's Most Significant, Neutral and Unbiased News was selected by the "Autonomous AI" of our newly launched platform…
Shared by Dr. Fahim K Sufi
-
Dear all, our research platform Acumen Haven (https://acumenhaven.com/ ) is organizing a webinar on January 21, 2025. The topic is sustainable…
Liked by Dr. Fahim K Sufi
Experience
Education
Licenses & Certifications
Volunteer Experience
-
Consultant
RMIT University
- 5 years 11 months
Science and Technology
I supervise several PhD students at the School of CS & IT, RMIT University.
As a consultant in the areas of IT Data Migration, Service Delivery, System Architecture, IT System Design, Data Warehousing, Business Intelligence, and Solution Architecture, I have authored more than 42 highly cited international publications (e.g. http://link.springer.com/chapter/10.1007/978-3-642-04117-4_17, www.crcpress.com/product/isbn/9781439800829, http://dx.doi.org/10.1016/j.eswa.2010.08.149, http://ieeexplore.ieee.org/xpls/articleDetails.jsp?arnumber=5643150, http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=4909289, http://dx.doi.org/10.1016/j.jnca.2010.07.004, http://dx.doi.org/10.1007/s10916-008-9172-6, http://dx.doi.org/10.1007/s10916-009-9412-4, http://dx.doi.org/10.1007/s10916-008-9208-y, http://dx.doi.org/10.1002/sec.76, http://dx.doi.org/10.1002/sec.226, http://dx.doi.org/10.1002/sec.44... details at http://www.fahimsufi.com/paper.html). As a result, I serve as a specialist reviewer and editor for several international journals and technical forums on Business Intelligence, Data Modelling and Data Warehouse Development.
-
Lead App Developer
Natinokobread
- Present 10 years 3 months
Science and Technology
Apart from being an expert in Enterprise, Desktop and Mainframe systems, I also design, develop and maintain numerous apps for the iPad, iPhone, iPod, Android and Windows Mobile platforms with real-time cloud data integration (https://itunes.apple.com/us/app/dr.-brain-teaser/id976991031?mt=8, https://play.google.com/store/apps/developer?id=Natinoko+Bread, http://www.windowsphone.com/en-au/store/app/anxiety-monitor/536131b7-3dd7-409d-accf-6e06a22a8993).
Publications
-
Addressing Data Scarcity in the Medical Domain: A GPT-Based Approach for Synthetic Data Generation and Feature Extraction
Information, Vol. 15, No. 5, p. 264
This research confronts the persistent challenge of data scarcity in medical machine learning by introducing a pioneering methodology that harnesses the capabilities of Generative Pre-trained Transformers (GPT). In response to the limitations posed by a dearth of labeled medical data, our approach involves the synthetic generation of comprehensive patient discharge messages, setting a new standard in the field with GPT autonomously generating 20 fields. Through a meticulous review of the existing literature, we systematically explore GPT’s aptitude for synthetic data generation and feature extraction, providing a robust foundation for subsequent phases of the research. The empirical demonstration showcases the transformative potential of our proposed solution, presenting over 70 patient discharge messages with synthetically generated fields, including severity and chances of hospital re-admission with justification. Moreover, the data had been deployed in a mobile solution where regression algorithms autonomously identified the correlated factors for ascertaining the severity of patients’ conditions. This study not only establishes a novel and comprehensive methodology but also contributes significantly to medical machine learning, presenting the most extensive patient discharge summaries reported in the literature. The results underscore the efficacy of GPT in overcoming data scarcity challenges and pave the way for future research to refine and expand the application of GPT in diverse medical contexts.
-
A Systematic Review of Using Deep Learning in Aphasia: Challenges and Future Directions
Computers, Vol. 13, No. 5, p. 117
In this systematic literature review, the intersection of deep learning applications within the aphasia domain is meticulously explored, acknowledging the condition’s complex nature and the nuanced challenges it presents for language comprehension and expression. By harnessing data from primary databases and employing advanced query methodologies, this study synthesizes findings from 28 relevant documents, unveiling a landscape marked by significant advancements and persistent challenges. Through a methodological lens grounded in the PRISMA framework (Version 2020) and Machine Learning-driven tools like VosViewer (Version 1.6.20) and Litmaps (Free Version), the research delineates the high variability in speech patterns, the intricacies of speech recognition, and the hurdles posed by limited and diverse datasets as core obstacles. Innovative solutions such as specialized deep learning models, data augmentation strategies, and the pivotal role of interdisciplinary collaboration in dataset annotation emerge as vital contributions to this field. The analysis culminates in identifying theoretical and practical pathways for surmounting these barriers, highlighting the potential of deep learning technologies to revolutionize aphasia assessment and treatment. This review not only consolidates current knowledge but also charts a course for future research, emphasizing the need for comprehensive datasets, model optimization, and integration into clinical workflows to enhance patient care. Ultimately, this work underscores the transformative power of deep learning in advancing aphasia diagnosis, treatment, and support, heralding a new era of innovation and interdisciplinary collaboration in addressing this challenging disorder.
-
An innovative GPT-based open-source intelligence using historical cyber incident reports
Natural Language Processing Journal (Elsevier), No. 100074
In contemporary discourse, the pervasive influences of Generative Pre-Trained (GPT) and Large Language Models (LLM) are evident, showcasing diverse applications. GPT-based technologies, transcending mere summarization, exhibit adeptness in discerning critical information from extensive textual corpuses. Through prudent extraction of semantically meaningful content from textual representations, GPT technologies engender automated feature extraction, a departure from the fallible manual extraction methodologies. This study posits an innovative paradigm for extracting multidimensional cyber threat-related features from textual depictions of cyber events, leveraging the prowess of GPT. These extracted features serve as inputs for artificial intelligence (AI) and deep learning algorithms, including Convolutional Neural Network (CNN), Decomposition analysis, and Natural Language Processing (NLP)-based modalities tailored for non-technical cyber strategists. The proposed framework empowers cyber strategists or analysts to articulate inquiries regarding historical cyber incidents in plain English, with the NLP-based interaction facet of the system proffering cogent AI-driven insights in natural language. Furthermore, salient insights, often elusive in dynamic visualizations, are succinctly presented in plain language. Empirical validation of the entire system ensued through autonomous acquisition of semantically enriched contextual information concerning 214 major cyber incidents spanning from 2016 to 2023. GPT-based responses on Actor Type, Target, Attack Source (i.e., Country Originating Attack), Attack Destination (i.e., Targeted Country), Attack Level, Attack Type, and Attack Timeline, underwent critical AI-driven analysis. This comprehensive 7-dimensional information gleaned from the corpus of 214 incidents yielded a corpus of 1498 informative outputs, attaining a commendable precision of 96%, a recall rate of 98%, and an F1-Score of 97%.
-
A New Time Series Dataset for Cyber-Threat Correlation, Regression and Neural-Network-Based Forecasting
Information, Vol. 15, No. 2
In the face of escalating cyber threats that have contributed significantly to global economic losses, this study presents a comprehensive dataset capturing the multifaceted nature of cyber-attacks across 225 countries over a 14-month period from October 2022 to December 2023. The dataset, comprising 77,623 rows and 18 fields, provides a detailed chronology of cyber-attacks, categorized into eight critical dimensions: spam, ransomware, local infection, exploit, malicious mail, network attack, on-demand scan, and web threat. The dataset also includes ranking data, offering a comparative view of countries’ susceptibility to different cyber threats. The results reveal significant variations in the frequency and intensity of cyber-attacks across different countries and attack types. The data were meticulously compiled using modern AI-based data acquisition techniques, ensuring a high degree of accuracy and comprehensiveness. Correlation tests against the eight types of cyber-attacks resulted in the determination that on-demand scan and local infection are highly correlated, with a correlation coefficient of 0.93. Lastly, neural-network-based forecasting of these highly correlated factors (i.e., on-demand scan and local infection) reveals a similar pattern of prediction, with an MSE and an MAPE of 1.616 and 80.13, respectively. The study’s conclusions provide critical insights into the global landscape of cyber threats, highlighting the urgent need for robust cybersecurity measures.
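The correlation and error metrics reported above (a 0.93 correlation coefficient between on-demand scan and local infection, and an MSE and MAPE of 1.616 and 80.13 for the forecasts) can be computed for any pair of attack-count series with a few lines of Python. The sketch below uses invented daily counts, not the actual dataset, purely to illustrate how Pearson correlation, MSE and MAPE are calculated.

```python
import math

# Hypothetical daily counts for two attack types (illustrative values
# only, not drawn from the dataset described in the paper).
on_demand_scan = [120.0, 135.0, 150.0, 160.0, 155.0, 170.0, 180.0]
local_infection = [80.0, 90.0, 101.0, 108.0, 104.0, 115.0, 122.0]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mse(actual, predicted):
    """Mean squared error between observed and forecast values."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

r = pearson(on_demand_scan, local_infection)
print(f"correlation: {r:.2f}")
```

On strongly co-moving series like these toy values, the coefficient lands close to 1; the paper's reported 0.93 reflects the same computation on the real 14-month data.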
-
A Systematic Review on the Dimensions of Open-Source Disaster Intelligence using GPT
Journal of Economy and Technology
Natural and manmade disasters such as landslides, floods, earthquakes, cyclones, shootings, and riots have detrimental effects on human life, infrastructure, and the economy. This study addresses the need for a comprehensive analysis of Generative Pre-Trained Transformers (GPT) in the context of open-source disaster intelligence, a topic where existing literature remains fragmented. Employing a systematic approach, a query scheme incorporating 11 atomic keywords was devised, resulting in the acquisition of 53 relevant studies. These studies were meticulously reviewed and synthesized to propose six dimensions of GPT-based open-source disaster intelligence, yielding critical insights into disaster management strategies. Within these 6 dimensions, 24 studies were categorized under the “Social Media Analytics for Disaster Response” dimension, 7 on “Disaster Prediction,” 11 on “Disaster Management,” 5 on “Disaster Support Via Technology,” 3 on “Climate Change and Disaster Communication,” and 5 studies were classified under the “General Disaster Analysis” dimension. Leveraging advanced methodologies and machine learning driven tools such as PRISMA, Litmaps, and VOSviewer, this research not only identifies key trends and collaborative efforts but also provides valuable bibliographical insights for researchers and practitioners in the field. For example, the co-citation analysis demonstrated a total of 3703 authors, among whom 51 authors garnered a minimum of 10 citations, leading to the identification of 3 distinct clusters. By addressing a critical research gap and offering a methodologically robust examination, this study contributes significantly to the advancement of knowledge in GPT-based open-source disaster intelligence, facilitating informed decision-making and enhancing disaster response strategies worldwide.
-
A Sustainable Way Forward: Systematic Review of Transformer Technology in Social-Media-Based Disaster Analytics
Sustainability, Vol. 16, No. 7
Transformer technologies, like generative pre-trained transformers (GPTs) and bidirectional encoder representations from transformers (BERT) are increasingly utilized for understanding diverse social media content. Despite their popularity, there is a notable absence of a systematic literature review on their application in disaster analytics. This study investigates the utilization of transformer-based technology in analyzing social media data for disaster and emergency crisis events. Leveraging a systematic review methodology, 114 related works were collated from popular databases like Web of Science and Scopus. After deduplication and following the exclusion criteria, 53 scholarly articles were analyzed, revealing insights into the geographical distribution of research efforts, trends in publication output over time, publication venues, primary research domains, and prevalently used technology. The results show a significant increase in publications since 2020, with a predominant focus on computer science, followed by engineering and decision sciences. The results emphasize that within the realm of social-media-based disaster analytics, BERT was utilized in 29 papers, BERT-based methods were employed in 28 papers, and GPT-based approaches were featured in 4 papers, indicating their predominant usage in the field. Additionally, this study presents a novel classification scheme consisting of 10 distinct categories that thoroughly categorize all existing scholarly works on disaster monitoring. However, the study acknowledges limitations related to sycophantic behavior and hallucinations in GPT-based systems and raises ethical considerations and privacy concerns associated with the use of social media data. To address these issues, it proposes strategies for enhancing model robustness, refining data validation techniques, and integrating human oversight mechanisms.
-
Open-Source Cyber Intelligence Research Through PESTEL Framework: Present and Future Impact
Societal Impacts, Volume 3, June 2024, 100047
Recent scholarly endeavors in the domain of Cyber Intelligence have unveiled its multifaceted implications, intricately interwoven with various Sustainable Development Goals (SDGs), notably encompassing Goal 9 (Industry, Innovation, and Infrastructure), Goal 11 (Sustainable Cities and Communities), Goal 16 (Peace, Justice and Strong Institutions), among others. This study intricately dissects the symbiotic nexus between Cyber Intelligence research and these SDGs, whilst simultaneously unraveling its profound reverberations across the diverse dimensions of the PESTEL (Political, Economic, Social, Technological, Environmental, and Legal) framework. Ten critical impacts inherent in current research works on cyber intelligence were identified, subsequently juxtaposing these impacts within the PESTEL dimensions. This analytical process further unraveled an additional eleven critical impacts yet to be addressed by current research works on cyber intelligence. Addressing these additional 11 impacts in forthcoming research endeavors is posited as a catalyst for optimizing societal benefits across the diverse spectra of PESTEL dimensions. Moving on from categorizing and classifying societal impacts of cyber research within the PESTEL framework, this study finally establishes a strategic roadmap of 11 future research directions on cyber intelligence, such as sustainable cyber security practices, mental health aspects of cyber victimhood, and ethical AI in cybersecurity, among others. Fostering a cross-disciplinary dialog, this work contributes to the broader discourse on harnessing cyber intelligence for societal betterment, mitigating the potential detrimental effects of cyber threats.
-
Generative Pre-Trained Transformer (GPT) in Research: A Systematic Review on Data Augmentation
Information, 15(2), 99
GPT (Generative Pre-trained Transformer) represents advanced language models that have significantly reshaped the academic writing landscape. These sophisticated language models offer invaluable support throughout all phases of research work, facilitating idea generation, enhancing drafting processes, and overcoming challenges like writer’s block. Their capabilities extend beyond conventional applications, contributing to critical analysis, data augmentation, and research design, thereby elevating the efficiency and quality of scholarly endeavors. Strategically narrowing its focus, this review explores alternative dimensions of GPT and LLM applications, specifically data augmentation and the generation of synthetic data for research. Employing a meticulous examination of 412 scholarly works, it distills a selection of 77 contributions addressing three critical research questions: (1) GPT on Generating Research data, (2) GPT on Data Analysis, and (3) GPT on Research Design. The systematic literature review adeptly highlights the central focus on data augmentation, encapsulating 48 pertinent scholarly contributions, and extends to the proactive role of GPT in critical analysis of research data and shaping research design. Pioneering a comprehensive classification framework for “GPT’s use on Research Data”, the study classifies existing literature into six categories and 14 sub-categories, providing profound insights into the multifaceted applications of GPT in research data. This study meticulously compares 54 pieces of literature, evaluating research domains, methodologies, and advantages and disadvantages, providing scholars with profound insights crucial for the seamless integration of GPT across diverse phases of their scholarly pursuits.
-
A global cyber-threat intelligence system with artificial intelligence and convolutional neural network
Decision Analytics Journal (Elsevier), Vol. 9, Page 100364
Global cyber-attacks significantly impact the economy, society, organizations, and individuals. Existing research on cyber-attacks lacks in demonstrating Artificial Intelligence (AI) based analytical solutions for providing country-wide cyber threat intelligence. Cyber strategists at a national level require AI-based decision support systems for deciding a country’s cyber posture or preparedness. This paper proposes an AI-based solution that autonomously collects multidimensional cyber-attack data on social media posts on cyber-related outcry. The proposed system provides critical analytical capability in the cyber-threat spectrum and uses sophisticated AI-based algorithms for anomaly detection, prediction, sentiment analysis, location detection, translation, etc. The proposed system was deployed from 11 October 2022 to 31 October 2022. During these 21 days, the system autonomously collected 30,203 records on cyber threats covering multiple dimensions of cyber-threat. These dimensions included country-wide daily cyber-attack records by ransomware, exploits, web threats, spam, malicious mail, network attacks, local infections, and on-demand scan. Moreover, the system performed AI-based acquisition and analysis of 3789 cyber-related tweets from 3402 tweet users in 37 different languages. The system also autonomously translated 893 non-English tweets. The proposed system is the first solution that uses Convolutional Neural Network (CNN) based anomaly detection to detect abnormalities in cyber-threat spectrum worldwide and predict cyber-attacks automatically. The proposed system was demonstrated to provide evidence-based decisions on global cyber threats in multiple platforms, including iOS, Android, and Windows.
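The paper's CNN-based anomaly detector is too heavy to reproduce in a short sketch, but the underlying idea of flagging days whose attack counts deviate sharply from the norm can be shown with a simple z-score rule. The counts below are hypothetical, invented only to illustrate the flagging logic; a z-score threshold is a deliberately simpler stand-in for the CNN described in the paper.

```python
import statistics

# Hypothetical daily network-attack counts with one obvious spike
# (illustrative values, not from the described system).
daily_attacks = [210, 198, 205, 220, 201, 950, 215, 207]

def zscore_anomalies(series, threshold=2.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mean = statistics.fmean(series)
    std = statistics.pstdev(series)
    return [i for i, v in enumerate(series) if abs(v - mean) / std > threshold]

print(zscore_anomalies(daily_attacks))  # flags the spike at index 5
```

A trained CNN replaces the fixed threshold with learned temporal features, which is what lets the deployed system generalize across countries and attack dimensions.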
-
A New Social Media Analytics Method for Identifying Factors Contributing to COVID-19 Discussion Topics
Information, Vol. 14, No. 10
Since the onset of the COVID-19 crisis, scholarly investigations and policy formulation have harnessed the potent capabilities of artificial intelligence (AI)-driven social media analytics. Evidence-driven policymaking has been facilitated through the proficient application of AI and natural language processing (NLP) methodologies to analyse the vast landscape of social media discussions. However, recent research works have failed to demonstrate a methodology to discern the underlying factors influencing COVID-19-related discussion topics. In this scholarly endeavour, an innovative AI- and NLP-based framework is deployed, incorporating translation, sentiment analysis, topic analysis, logistic regression, and clustering techniques to meticulously identify and elucidate the factors that are relevant to any discussion topics within the social media corpus. This pioneering methodology is rigorously tested and evaluated using a dataset comprising 152,070 COVID-19-related tweets, collected between 15th July 2021 and 20th April 2023, encompassing discourse in 58 distinct languages. The AI-driven regression analysis revealed 37 distinct observations, with 20 of them demonstrating a higher level of significance. In parallel, clustering analysis identified 15 observations, including nine of substantial relevance. These 52 AI-facilitated observations collectively unveil and delineate the factors that are intricately linked to five core discussion topics that are prevalent in the realm of COVID-19 discourse on Twitter. To the best of our knowledge, this research constitutes the inaugural effort in autonomously identifying factors associated with COVID-19 discussion topics, marking a pioneering application of AI algorithms in this domain. The implementation of this method holds the potential to significantly enhance the practice of evidence-based policymaking pertaining to matters concerning COVID-19.
-
Identifying drivers of COVID-19 vaccine sentiments for effective vaccination policy
Heliyon (Elsevier / Cell Press), Vol. 9, No. 9
The COVID-19 pandemic has had far-reaching consequences globally, including a significant loss of lives, escalating unemployment rates, economic instability, deteriorating mental well-being, social conflicts, and even political discord. Vaccination, recognized as a pivotal measure in mitigating the adverse effects of COVID-19, has evoked a diverse range of sentiments worldwide. In particular, numerous users on social media platforms have expressed concerns regarding vaccine availability and potential side effects. Therefore, it is imperative for governmental authorities and senior health policy strategists to gain insights into the public's perspectives on vaccine mandates in order to effectively implement their vaccination initiatives. Despite the critical importance of comprehending the underlying factors influencing COVID-19 vaccine sentiment, the existing literature offers limited research studies on this subject matter. This paper presents an innovative methodology that harnesses Twitter data to extract sentiment pertaining to COVID-19 vaccination through the utilization of Artificial Intelligence techniques such as sentiment analysis, entity detection, linear regression, and logistic regression. The proposed methodology was applied and tested on live Twitter feeds containing COVID-19 vaccine-related tweets, spanning from February 14, 2021, to April 2, 2023. Notably, this approach successfully processed tweets in 45 languages originating from over 100 countries, enabling users to select from an extensive scenario space of approximately 3.55 × 10^249 possible scenarios. By selecting specific scenarios, the proposed methodology effectively identified numerous determinants contributing to vaccine sentiment across iOS, Android, and Windows platforms. In comparison to previous studies documented in the existing literature, the presented solution emerges as the most robust in detecting the fundamental drivers of vaccine sentiment.
-
Social Media Analytics on Russia–Ukraine Cyber War with Natural Language Processing: Perspectives and Challenges
Information 14(9)
Utilizing social media data is imperative in comprehending critical insights on the Russia–Ukraine cyber conflict due to their unparalleled capacity to provide real-time information dissemination, thereby enabling the timely tracking and analysis of cyber incidents. The vast array of user-generated content on these platforms, ranging from eyewitness accounts to multimedia evidence, serves as invaluable resources for corroborating and contextualizing cyber attacks, facilitating the attribution of malicious actors. Furthermore, social media data afford unique access to public sentiment, the propagation of propaganda, and emerging narratives, offering profound insights into the effectiveness of information operations and shaping counter-messaging strategies. However, there have been hardly any studies reported on the Russia–Ukraine cyber war harnessing social media analytics. This paper presents a comprehensive analysis of the crucial role of social-media-based cyber intelligence in understanding Russia’s cyber threats during the ongoing Russo–Ukrainian conflict. This paper introduces an innovative multidimensional cyber intelligence framework and utilizes Twitter data to generate cyber intelligence reports. By leveraging advanced monitoring tools and NLP algorithms, like language detection, translation, sentiment analysis, term frequency–inverse document frequency (TF-IDF), latent Dirichlet allocation (LDA), Porter stemming, n-grams, and others, this study automatically generated cyber intelligence for Russia and Ukraine. Using 37,386 tweets originating from 30,706 users in 54 languages from 13 October 2022 to 6 April 2023, this paper reported the first detailed multilingual analysis on the Russia–Ukraine cyber crisis in four cyber dimensions (geopolitical and socioeconomic; targeted victim; psychological and societal; and national priority and concerns). It also highlights challenges faced in harnessing reliable social-media-based cyber intelligence.
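Among the NLP steps listed in this abstract (TF-IDF, LDA, Porter stemming, n-grams), TF-IDF is compact enough to sketch in pure Python. The toy tweets below are invented, not drawn from the study's corpus; the function weights each term by its in-document frequency scaled by log(N/df), so terms concentrated in few documents score highest.

```python
import math
from collections import Counter

# Toy tweet corpus (invented examples, not real data from the study).
docs = [
    "cyber attack on power grid reported",
    "new ransomware attack hits hospital network",
    "grid operators restore power after outage",
]

def tf_idf(corpus):
    """Return one {term: tf * log(N / df)} dict per document."""
    tokenized = [doc.split() for doc in corpus]
    n = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for tokens in tokenized for term in set(tokens))
    weights = []
    for tokens in tokenized:
        tf = Counter(tokens)
        total = len(tokens)
        weights.append(
            {t: (c / total) * math.log(n / df[t]) for t, c in tf.items()}
        )
    return weights

w = tf_idf(docs)
```

In the second toy tweet, "ransomware" (unique to that document) outweighs "attack" (shared with another document), which is exactly the discriminative behavior that makes TF-IDF useful before LDA topic modeling.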
-
Novel Application of Open-Source Cyber Intelligence
Electronics 12(17)
The prevalence of cybercrime has emerged as a critical issue in contemporary society because of its far-reaching financial, social, and psychological implications. The negative effects of cyber-attacks extend beyond financial losses and disrupt people’s lives on social and psychological levels. Conventional practice involves cyber experts sourcing data from various outlets and applying personal discernment and rational inference to manually formulate cyber intelligence specific to a country. This traditional approach introduces personal bias into country-level cyber reports. In contrast, this paper reports a novel approach where country-level cyber intelligence is automatically generated with artificial intelligence (AI), employing cyber-related social media posts and open-source cyber-attack statistics. Our innovative cyber threat intelligence solution examined 37,386 tweets from 30,706 users in 54 languages using sentiment analysis, translation, term frequency–inverse document frequency (TF-IDF), latent Dirichlet allocation (LDA), n-grams, and Porter stemming. Moreover, the presented study utilized 238,220 open-intelligence cyber-attack statistics from eight different web links to create a historical cyber-attack dataset. Subsequently, AI-based algorithms, like convolutional neural networks (CNNs) and exponential smoothing, were used for AI-driven insights. With the confluence of the voluminous Twitter-derived data and the array of open-intelligence cyber-attack statistics, orchestrated by the AI-driven algorithms, the presented approach generated seven-dimensional cyber intelligence for Australia and China in complete automation. Finally, the topic analysis of the cyber-related social media messages revealed seven main themes for both Australia and China. This methodology possesses the inherent capability to effortlessly engender cyber intelligence for any country, employing an autonomous modality within the realm of pervasive computational platforms.
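The TF-IDF weighting used above follows directly from its definition, tf × log(N/df). A minimal from-scratch sketch, assuming a hypothetical three-document corpus (this is not the paper's implementation, which operates on tens of thousands of tweets):

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights: tf(term) * log(N / df(term))."""
    n_docs = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency per term
    for tokens in tokenized:
        df.update(set(tokens))
    weights = []
    for tokens in tokenized:
        tf = Counter(tokens)
        weights.append({t: tf[t] * math.log(n_docs / df[t]) for t in tf})
    return weights

# Hypothetical corpus; a term occurring in every document scores 0.
docs = ["ransomware hits hospital", "ransomware hits bank", "flood hits city"]
weights = tfidf(docs)
print(weights[0]["hits"])   # 0.0 ('hits' occurs in all three docs)
```

Terms unique to one document (like "hospital") receive the highest weight there, which is exactly why TF-IDF surfaces distinctive vocabulary for topic analysis.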
-
A New AI-Based Semantic Cyber Intelligence Agent
Future Internet
The surge in cybercrime has emerged as a pressing concern in contemporary society due to its far-reaching financial, social, and psychological repercussions on individuals. Beyond inflicting monetary losses, cyber-attacks exert adverse effects on the social fabric and psychological well-being of the affected individuals. In order to mitigate the deleterious consequences of cyber threats, adoption of an intelligent agent-based solution to enhance the speed and comprehensiveness of cyber intelligence is advocated. In this paper, a novel cyber intelligence solution is proposed, employing four semantic agents that interact autonomously to acquire crucial cyber intelligence pertaining to any given country. The solution leverages a combination of techniques, including a convolutional neural network (CNN), sentiment analysis, exponential smoothing, latent Dirichlet allocation (LDA), term frequency-inverse document frequency (TF-IDF), Porter stemming, and others, to analyse data from both social media and web sources. The proposed method underwent evaluation from 13 October 2022 to 6 April 2023, utilizing a dataset comprising 37,386 tweets generated by 30,706 users across 54 languages. To address non-English content, a total of 8199 HTTP requests were made to facilitate translation. Additionally, the system processed 238,220 cyber threat data from the web. Within a remarkably brief duration of 6 s, the system autonomously generated a comprehensive cyber intelligence report encompassing 7 critical dimensions of cyber intelligence for countries such as Russia, Ukraine, China, Iran, India, and Australia.
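Of the techniques listed above, exponential smoothing is the simplest to show concretely: s_t = α·x_t + (1−α)·s_{t−1}. A minimal sketch with hypothetical daily attack counts (the study's data and smoothing parameters are not public, so the numbers here are invented):

```python
def exp_smooth(series, alpha=0.5):
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    smoothed = [series[0]]              # seed with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Hypothetical daily cyber-attack counts; the day-3 spike is damped.
daily_counts = [10, 12, 30, 11, 10]
print(exp_smooth(daily_counts))  # [10, 11.0, 20.5, 15.75, 12.875]
```

The smoothed series trails the raw counts, which makes it useful as a short-horizon trend estimate when generating intelligence reports from noisy daily statistics.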
-
A New Social Media-Driven Cyber Threat Intelligence
Electronics
Cyber threats are projected to cause USD 10.5 trillion in damage to the global economy in 2025. Comprehending the level of threat is core to adjusting cyber posture at the personal, organizational, and national levels. However, representing the threat level with a single score is a daunting task if the scores are generated from big and complex data sources such as social media. This paper harnesses the modern technological advancements in artificial intelligence (AI) and natural language processing (NLP) to comprehend the contextual information of social media posts related to cyber-attacks and electronic warfare. Then, using keyword-based index generation techniques, a single index is generated at the country level. Utilizing a convolutional neural network (CNN), the innovative process automatically detects any anomalies within the countrywide threat index and explains the root causes. The entire process was validated with live Twitter feeds from 14 October 2022 to 27 December 2022. During these 75 days, AI-based language detection, translation, and sentiment analysis comprehended 15,983 tweets in 47 different languages (while most of the existing works only work in one language). Finally, 75 daily cyber threat indexes with anomalies were generated for China, Australia, Russia, Ukraine, Iran, and India. Using this intelligence, strategic decision makers can adjust their cyber preparedness for mitigating the detrimental damages afflicted by cyber criminals.
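The keyword-based index generation described above might look like the following sketch. The lexicon, weights, and tweet strings are invented for illustration; the paper does not publish its actual keyword list or scoring scheme:

```python
# Hypothetical keyword weights; the study's actual lexicon is not public.
THREAT_KEYWORDS = {"ransomware": 3, "breach": 2, "phishing": 1}

def threat_index(tweets):
    """Average weighted keyword hits per tweet for one country's feed."""
    total = sum(weight
                for tweet in tweets
                for kw, weight in THREAT_KEYWORDS.items()
                if kw in tweet.lower())
    return total / len(tweets)

# Two hypothetical tweets: (3 + 2) + 1 = 6 weighted hits over 2 tweets.
print(threat_index(["Ransomware breach reported", "phishing wave"]))  # 3.0
```

Normalizing by tweet volume is one plausible way to make daily indexes comparable across countries with different posting rates; the paper's exact normalization may differ.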
-
Algorithms in Low-Code-No-Code for Research Applications: A Practical Review
Algorithms
Algorithms have evolved from machine code to low-code-no-code (LCNC) in the past 20 years. Observing the growth of LCNC-based algorithm development, the CEO of GitHub mentioned that the future of coding is no coding at all. This paper systematically reviewed several of the recent studies using mainstream LCNC platforms to understand the area of research, the LCNC platforms used within these studies, and the features of LCNC used for solving individual research questions. We identified 23 research works using LCNC platforms, such as SetXRM, the vf-OS platform, Aure-BPM, CRISP-DM, and Microsoft Power Platform (MPP). About 61% of these existing studies resorted to MPP as their primary choice. The critical research problems solved by these research works were within the area of global news analysis, social media analysis, landslides, tornadoes, COVID-19, digitization of process, manufacturing, logistics, and software/app development. The main reasons identified for solving research problems with LCNC algorithms were as follows: (1) obtaining research data from multiple sources in complete automation; (2) generating artificial intelligence-driven insights without having to manually code them. In the course of describing this review, this paper also demonstrates a practical approach to implement a cyber-attack monitoring algorithm with the most popular LCNC platform.
-
A decision support system for extracting artificial intelligence-driven insights from live twitter feeds on natural disasters
Decision Analytics Journal, Elsevier
Existing studies on Twitter-based natural disaster analysis suffer from shortcomings like limitations on supported languages, lack of sentiment analysis, regional restrictions, lack of end-to-end automation, and lack of mobile app support. In this study, we design and develop a fully automated artificial intelligence (AI)-based Decision Support System (DSS) available through multiple platforms like iOS, Android, and Windows. The proposed DSS uses a live Twitter feed to obtain natural disaster-related Tweets in 110 supported languages. The system automatically executes AI-based translation, sentiment analysis, and an automated K-Means algorithm to generate AI-driven insights for disaster strategists. The proposed DSS was tested with 67,528 real-time Tweets captured between 28 September 2021 and 6 October 2021 in 39 different languages under two different scenarios. The system revealed critical information for disaster planners or strategists, like which clusters of natural disasters were associated with the most negative sentiments. We evaluated the proposed system’s accuracy and collected user-experience feedback from 12 different disaster strategists; 83.33% of users found the proposed solution easy to use, effective, and self-explanatory. With 97% and 99.7% accuracy in Twitter keyword extraction and entity classification, this DSS is the most accurate disaster intelligence system reported on a mobile platform.
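The K-Means step above can be illustrated with a naive one-dimensional implementation. The input values are hypothetical stand-ins for per-region tweet counts, and the crude deterministic initialization is chosen only to keep the sketch reproducible (real K-Means uses randomized or k-means++ seeding):

```python
def kmeans_1d(values, k=2, iters=10):
    """Naive 1-D K-Means: assign each value to its nearest centroid,
    then recompute each centroid as its cluster mean."""
    centroids = sorted(values)[:k]      # crude deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Hypothetical disaster-tweet counts per region: two clear groups.
print(kmeans_1d([1, 2, 10, 11]))  # [1.5, 10.5]
```

In the DSS setting, each cluster would then be summarized (e.g., by its average sentiment) to highlight which disaster groupings carry the most negativity.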
-
Automated Analysis of Australian Tropical Cyclones with Regression, Clustering and Convolutional Neural Network
Sustainability 2022, 14(16), 9830; https://doi.org/10.3390/su14169830
Tropical cyclones take precious lives, damage critical infrastructure, and cause economic losses worth billions of dollars in Australia. To reduce the detrimental effect of cyclones, a comprehensive understanding of cyclones using artificial intelligence (AI) is crucial. Although event records on Australian tropical cyclones have been documented over the last 4 decades, deep learning studies on these events have not been reported. This paper presents automated AI-based regression, anomaly detection, and clustering techniques on the largest available cyclone repository covering 28,713 records with almost 80 cyclone-related parameters from 17 January 1907 to 11 May 2022. Experimentation with both linear and logistic regression on this dataset resulted in 33 critical insights on factors influencing the central pressure of cyclones. Moreover, automated clustering determined four different clusters highlighting the conditions for low central pressure. Anomaly detection at 70% sensitivity identified 12 anomalies and explained the root causes of these anomalies. This study also projected parameterization and fine-tuning of AI algorithms at different sensitivity levels. Most importantly, we mathematically evaluated robustness by supporting an enormous scenario space of 4.737 × 10^8234. A disaster strategist or researcher can use the deployed system on iOS, Android, or Windows platforms to make evidence-based policy decisions on Australian tropical cyclones.
-
Tracking Anti-Vax Social Movement Using AI based Social Media Monitoring
IEEE Transactions on Technology and Society
The Anti-Vax social movement poses a dire threat to governments’ aim of mass vaccination. Moreover, by propagating serious misconceptions and misinformation about COVID-19, Anti-Vaxxers directly and indirectly harm the overall health and wellbeing of society. In addition, the ongoing clashes between Anti-Vaxxers and Pro-Vaxxers have created great social conflict in recent times. This paper proposes an Artificial Intelligence (AI)-based solution to identify and monitor social groups like Anti-Vax and Pro-Vax in an ethical manner. The proposed solution uses AI-based Sentiment Analysis and Named Entity Recognition (NER) to help political scientists, social scientists, and policy makers assess the influence and impact of social groups. The proposed solution was deployed via iOS, Android, and Windows apps on a range of platforms, integrating 40,857 publicly available Twitter posts related to COVID-19 in 55 different languages from 15 June 2021 to 31 December 2021. Our system demonstrated that the Anti-Vax and Pro-Vax social movements posted 72% and 65% more negative content, respectively, than the average negative sentiment of global COVID-19-related posts during the monitored period. Moreover, Anti-Vax-related posts with the “Hoax” keyword were found to have the highest level of social impact, with 38,849 retweets, and the highest level of negativity (i.e., a sentiment score of 0.87). We found that the Pro-Vax community engages in social conflict with Anti-Vaxxers, referring to them as ignorant (average sentiment score 0.91), stupid (average sentiment score 0.89), and confused (average sentiment score 0.88). Most importantly, the social conflict detection system reveals possible locations of conflict.
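As a much simpler stand-in for the AI-based sentiment analysis the paper uses, a lexicon-based negativity measure can be sketched as follows. The lexicon and posts are hypothetical; real systems use trained multilingual models rather than keyword lists:

```python
# Tiny hypothetical negative-term lexicon for illustration only.
NEGATIVE_TERMS = {"hoax", "stupid", "ignorant", "confused"}

def negativity(posts):
    """Fraction of posts containing at least one negative lexicon term."""
    return sum(any(term in post.lower().split() for term in NEGATIVE_TERMS)
               for post in posts) / len(posts)

# One of two hypothetical posts contains a negative term.
print(negativity(["vaccines are a hoax", "great progress today"]))  # 0.5
```

Comparing this fraction across groups (e.g., Anti-Vax vs. Pro-Vax post streams) is the basic idea behind the group-level negativity comparison reported above, though the paper's sentiment scores come from a model, not a lexicon.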
-
AI-Tornado: An AI-based Software for analyzing Tornadoes from disaster event dataset
Software Impacts, Elsevier, Vol. 14, No. 100357
AI-Tornado is a decision support system for analyzing Tornadoes from any Tornado-event dataset using artificial intelligence (AI)-based algorithms. With automated linear and logistic regression, it identifies which tornado features (e.g., duration, speed, area) matter the most in Tornado-related casualties, with detailed AI-based explanations. Since all these explanations are automatically generated using Natural Language Processing (NLP), AI-Tornado can easily be used by a disaster strategist who has very limited knowledge of AI algorithms. This software can be accessed through Windows, iOS, and Android apps from a wide range of devices including mobiles, tablets, and desktops. AI-Tornado is available at https://github.com/DrSufi/BD_Tornedoes.
-
AI-SocialDisaster: An AI-based software for identifying and analyzing natural disasters from social media
Software Impacts, Elsevier, Volume 13, August 2022, 100319
AI-SocialDisaster is a decision support system for identifying and analyzing natural disasters like earthquakes, floods, and bushfires using social media feeds. It captures real-time social media messages and then uses Natural Language Processing (NLP)-based algorithms like entity detection, category classification, and sentiment analysis to identify and locate various natural disasters. Moreover, using Artificial Intelligence (AI)-based algorithms like anomaly detection, regression, and clustering, AI-SocialDisaster generates AI-based insights for disaster planners and strategists. The software can be accessed through Windows, iOS, and Android apps from a wide range of devices including mobiles, tablets, and desktops. AI-SocialDisaster is available at https://github.com/DrSufi/DisasterAI.
-
A New Decision Support System for Analyzing Factors of Tornado Related Deaths in Bangladesh
Sustainability 2022, 14(10), 6303; https://doi.org/10.3390/su14106303
Tropical cyclones devastate large areas, take numerous lives, and damage extensive property in Bangladesh. Research on landfalling tropical cyclones affecting Bangladesh has primarily focused on events occurring since AD 1960, with limited work examining earlier historical records. We rectify this gap by developing a new Tornado catalogue that includes present and past records of Tornadoes across Bangladesh, maximizing the use of available sources. Within this new Tornado database, 119 records were captured from 1838 to 2020, covering events that caused 8735 deaths and 97,868 injuries and left more than 102,776 people affected in total. Moreover, using these new Tornado data, we developed an end-to-end system that allows a user to explore and analyze the full range of Tornado data under multiple scenarios. The user of this new system can select a date range or search a particular location, and all the Tornado information, along with Artificial Intelligence (AI)-based insights within that selected scope, is then dynamically presented on a range of devices including iOS, Android, and Windows. Using a set of interactive maps, charts, graphs, and visualizations, the user gains a comprehensive understanding of the historical records of Tornadoes, Cyclones, and associated landfalls with detailed data distributions and statistics.
-
Identifying the drivers of negative news with sentiment, entity and regression analysis
International Journal of Information Management Data Insights, Elsevier
Modern-day news agencies cater to a wide range of negative news, since multiple studies show that people in general are more attracted to negative news. Once a highly negative incident is reported by a local news agency, it is often propagated by many other foreign news agencies on a global scale, characterizing the news as breaking news. This propagation of negative news generates significant impacts on groups (who conducted the event), locations (where the event was conducted), and societies (that were impacted by the news), along with other factors. This research critically analyzes the impacts of negative or breaking news with the help of Artificial Intelligence (AI)-based techniques like sentiment analysis, entity detection, and automated regression analysis. The methodology described within this paper was implemented with a unique algorithm that allowed identification of all related factors or topics that drive negative perceptions towards global news. The solution was hosted in a cloud environment from 2nd June 2021 till 1st September 2021. It automatically captured and analyzed 22,425 global news items from 2397 different news sources across 192 countries. During this time, 34,975 entities were automatically categorized into 13 different entity groups. The classification accuracy of the entity detection was found to be 0.992, 0.995, and 0.994 in terms of precision, recall, and F1-score. Moreover, the accuracies of logistic regression and linear regression were found to be 0.895 in AUC and 0.255 in MAPE on average. Finally, the presented solution was successfully deployed in a wide range of environments including smartphones, tablets, and desktops.
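The automated linear regression component reduces, in its simplest form, to closed-form ordinary least squares. A minimal sketch with invented data (the actual study fits many factors automatically; this shows only the single-predictor case):

```python
def linreg(xs, ys):
    """Ordinary least squares fit of y = a + b*x (closed-form solution)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical data: negativity score vs. number of foreign re-reports.
a, b = linreg([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 1.0 2.0
```

The fitted slope quantifies how strongly one factor drives the outcome, which is the core of identifying "drivers" of negative news; the paper automates this over many candidate factors.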
-
Automated Disaster Monitoring From Social Media Posts Using AI-Based Location Intelligence and Sentiment Analysis
IEEE Transactions on Computational Social Systems
Worldwide disasters like bushfires, earthquakes, floods, cyclones, and heatwaves have affected the lives of social media users in an unprecedented manner. They are constantly posting their level of negativity over the disaster situations at their location of interest. Understanding location-oriented sentiments about disaster situations is of prime importance for political leaders and strategic decision-makers. To this end, we present a new fully automated algorithm based on artificial intelligence (AI) and natural language processing (NLP) for extraction of location-oriented public sentiments on global disaster situations. We designed the proposed system to obtain exhaustive knowledge and insights on social media feeds related to disasters in 110 languages through AI- and NLP-based sentiment analysis, named entity recognition (NER), anomaly detection, regression, and Getis Ord Gi* algorithms. We deployed and tested this algorithm on live Twitter feeds from 28 September to 6 October 2021. Tweets with 67,515 entities in 39 different languages were processed during this period. Our novel algorithm extracted 9727 location entities with greater than 70% confidence from the live Twitter feed and displayed the locations of possible disasters with disaster intelligence. The average precision, recall, and F1-score were measured to be 0.93, 0.88, and 0.90, respectively. Overall, the fully automated disaster monitoring solution demonstrated 97% accuracy. To the best of our knowledge, this study is the first to report location intelligence with NER, sentiment analysis, regression, and anomaly detection on social media messages related to disasters and has covered the largest set of languages.
-
AI-based Automated Extraction of Location-Oriented COVID-19 Sentiments
Computers, Materials, Continua (CMC)
The coronavirus disease (COVID-19) pandemic has affected the lives of social media users in an unprecedented manner. They are constantly posting their satisfaction or dissatisfaction over the COVID-19 situation at their location of interest. Therefore, understanding location-oriented sentiments about this situation is of prime importance for political leaders and strategic decision-makers. To this end, we present a new fully automated algorithm based on artificial intelligence (AI) for extraction of location-oriented public sentiments on the COVID-19 situation. We designed the proposed system to obtain exhaustive knowledge and insights on social media feeds related to COVID-19 in 110 languages through AI-based translation, sentiment analysis, location entity detection, and decomposition tree analysis. We deployed the fully automated algorithm on a live Twitter feed from July 15, 2021, and it was still running as of January 12, 2022. The system was evaluated on a limited dataset between July 15, 2021 and August 10, 2021. During this evaluation timeframe, 150,000 tweets were analyzed, and our algorithm found that 9,900 tweets contained one or more location entities. In total, 13,220 location entities were detected during the evaluation period, and the average precision and recall rates were 0.901 and 0.967, respectively. As of January 12, 2022, the proposed solution had detected 43,169 locations using entity recognition. To the best of our knowledge, this study is the first to report location intelligence with entity detection, sentiment analysis, and decomposition tree analysis on social media messages related to COVID-19 and has covered the largest set of languages.
-
A Novel Method of Generating Geospatial Intelligence from Social Media Posts of Political Leaders
Information, 13(3), 120
This paper proposes a novel approach to automatically processing real-time social media messages of political leaders with artificial intelligence (AI)-based language detection, translation, sentiment analysis, and named entity recognition (NER). This method automatically generates geospatial and location intelligence on both ESRI ArcGIS Maps and Microsoft Bing Maps. The proposed system was deployed from 1 January 2020 to 6 February 2022 to analyze 1.5 million tweets. During this 25-month period, 95K locations were successfully identified and mapped using data from 271,885 Twitter handles. With an overall 90% precision, recall, and F1-score, along with 97% accuracy, the proposed system is the most accurate reported system for producing geospatial intelligence directly from live Twitter feeds of political leaders with AI.
-
AI-GlobalEvents: A Software for analyzing, identifying and explaining global events with Artificial Intelligence
Software Impacts, Elsevier, VOLUME 11, 100218
AI-GlobalEvents is a decision support dashboard for policy planners to conduct strategic threat assessments based on global news and events. It uses AI services and algorithms to automatically aggregate global news from online news portals, websites and social media for analyzing the events and identifying critical events requiring imminent attention. The software uses AI algorithms like Entity Detection, Sentiment Analysis, Anomaly detection and Regression to produce explanations in natural language. It is capable of presenting the result in a wide range of platforms including mobile phones (Android or iOS), tablets or desktop environments. AI-GlobalEvents is available at https://github.com/DrSufi/GlobalEvent.
-
AI-Landslide: Software for acquiring hidden insights from global landslide data using Artificial Intelligence
Software Impacts, Elsevier, VOLUME 10, 100177
AI-Landslide is decision support software for city planners, disaster recovery strategists, and landslide researchers. The software can be deployed on a number of platforms, including mobile phones (Android or iOS), tablets, and desktop environments. AI-Landslide uses AI-based algorithms like automated regression analysis, decomposition analysis, and anomaly detection to uncover hidden insights into landslide features. The user of AI-Landslide does not need prior knowledge of AI algorithms, since the right algorithms are executed in the background and the hidden insights into the landslide data are presented in natural language. AI-Landslide is available at https://github.com/DrSufi/GlobalLandslide.
-
Automated Multidimensional Analysis of Global Events With Entity Detection, Sentiment Analysis and Anomaly Detection
IEEE Access, Vol. 9, pp. 152449 - 152460
This paper presents an automated media monitoring system that can analyze unstructured global events reported in online news, government websites, and major social media to produce significant insights with explainable AI. Using this innovative system, a decision maker can focus on global events requiring urgent attention, since it filters out millions of unnecessary data points using the presented methodology involving entity detection, sentiment analysis, and anomaly detection. The system was designed, deployed, and tested from June 2, 2021 to September 1, 2021. During this 92-day period, the system connected to 2,397 distinct types of news sources and automatically fetched 22,425 major event descriptions from 192 countries. Then, to assign meaning and context to the unstructured event descriptions, the system performed AI-based entity detection and sentiment analysis on these global events. The proposed system is sufficiently robust to detect anomalies instantly from 2.76×10^8404 possible scenarios and provides detailed explanations using natural language descriptions along with dynamic line charts and bar charts to portray detailed reasoning. The entity detection algorithm had an F1-score of 0.994 and the anomaly detection algorithm had an area-under-curve score of 0.941, establishing the proposed system with explainable AI as the most accurate and robust media monitoring system in the literature.
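The anomaly detection step can be illustrated with a minimal sketch: flag any day whose event count deviates sharply from its recent history. This rolling z-score rule is an illustrative stand-in, not the paper's actual algorithm; the function name, window and threshold are assumptions.

```python
import statistics

def detect_anomalies(counts, window=7, threshold=3.0):
    """Flag indices whose event count deviates strongly from the
    trailing window's mean (simple z-score rule)."""
    anomalies = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history)
        if stdev == 0:
            continue  # flat history: no meaningful deviation score
        z = (counts[i] - mean) / stdev
        if abs(z) >= threshold:
            anomalies.append(i)
    return anomalies

# 30 quiet days of event counts, then a spike on day 30
daily_counts = [20, 22, 19, 21, 20, 23, 21] * 4 + [20, 21] + [90]
print(detect_anomalies(daily_counts))  # → [30]
```

A real deployment would score anomalies per entity and per sentiment stream rather than on a single global count, but the detection principle is the same.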
-
Knowledge Discovery of Global Landslides Using Automated Machine Learning Algorithms
IEEE Access, Vol. 9, pp. 131400 - 131419
We developed a new machine learning (ML)–based system for town planners, disaster recovery strategists, and landslide researchers. Our system revealed hidden knowledge about a range of complex scenarios created from five landslide feature attributes. Users of our system can select from a list of 1.295×10^64 possible global landslide scenarios to discover valuable knowledge and predictions about the selected scenario in an interactive manner. Three ML algorithms—anomaly detection, decomposition analysis, and automated regression analysis—are used to elicit detailed knowledge about 25 scenarios selected from 14,532 global landslide records covering 12,220 injuries and 63,573 fatalities across 157 countries. Anomaly detection, logistic regression, and decomposition analysis performed well for all scenarios under study, with the area under the curve averaging 0.951, 0.911, and 0.896, respectively. Moreover, the prediction accuracy of linear regression had a mean absolute percentage error of 0.255. To the best of our knowledge, our scenario-based ML knowledge discovery system is the first of its kind to provide a comprehensive understanding of global landslide data.
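Of the three ML algorithms, decomposition analysis is the easiest to sketch: split a series into a trend component and a residual. The centred moving average below is a minimal stand-in for the paper's decomposition model, and the yearly counts are hypothetical.

```python
import statistics

def decompose(series, window=5):
    """Split a series into a centred moving-average trend and a
    residual, so that trend[i] + residual[i] == series[i]."""
    half = window // 2
    trend, residual = [], []
    for i, value in enumerate(series):
        # Window is truncated at the edges of the series
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        t = statistics.fmean(series[lo:hi])
        trend.append(t)
        residual.append(value - t)
    return trend, residual

# Hypothetical yearly landslide counts: rising trend plus noise
counts = [10, 12, 11, 14, 13, 16, 15, 18]
trend, residual = decompose(counts)
```

Large residuals would then feed the anomaly detection stage, while the trend supports scenario-level predictions.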
-
Cardioids-based faster authentication and diagnosis of remote cardiovascular patients
Security and Communication Networks Volume 4, Issue 11, pages 1351–1368, November 2011
In recent times, dealing with deaths associated with cardiovascular diseases (CVD) has been one of the most challenging issues. The usage of mobile phones and portable electrocardiogram (ECG) acquisition devices can mitigate the risks associated with CVD by providing faster patient diagnosis and patient care. Existing technologies entail delay in patient authentication and diagnosis. However, for cardiologists, minimizing the delay between a possible CVD symptom and patient care is crucial, as this has a proven impact on the longevity of the patient. Therefore, every second counts in patient authentication and diagnosis. In this paper, we introduce the concept of Cardioid-based patient authentication and diagnosis. According to our experiments, the authentication time can be reduced from 30.64 s (manual authentication by a novice mobile user) to 0.4398 s (automated authentication). Our ECG-based patient authentication mechanism is up to 4878 times faster than conventional biometrics such as face recognition. The diagnosis time could be improved from several minutes to less than 0.5 s (cardioid display on a single screen). Therefore, with our presented mission-critical alerting mechanism on wireless devices, minutes' worth of tasks can be reduced to seconds, without compromising the accuracy of authentication or the quality of diagnosis.
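A closed-loop, single-screen view of a beat can be sketched with a time-delay phase-space mapping. This is only an illustrative stand-in: the paper's actual cardioid construction is not reproduced here, and the sample values below are made up.

```python
def phase_space(samples, delay=1):
    """Pair each ECG sample with a delayed copy of itself, producing
    a 2-D closed loop that can be inspected at a glance on one
    screen instead of scrolling through a long time series."""
    return [(samples[i], samples[i + delay])
            for i in range(len(samples) - delay)]

beat = [0.0, 0.1, 0.9, -0.3, 0.05, 0.0]   # toy ECG samples
loop = phase_space(beat)
print(loop[:2])  # → [(0.0, 0.1), (0.1, 0.9)]
```

The shape of such a loop is characteristic of the heart that produced it, which is what makes a one-screen display usable for both diagnosis and identification.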
-
Polynomial distance measurement for ECG based biometric authentication
Security and Communication Networks Volume 3, Issue 4, pages 303–319, July/August 2010
Existing electrocardiography (ECG) based biometric systems are constantly challenged by higher misclassification error, longer acquisition time, larger template size, slower processing time and the presence of abnormal beats within the biometric template. These challenges are the prime hindrance to ECG-based biometrics being commercialized as a pervasive authentication mechanism. At the least, ECG-based biometrics can provide a secured mechanism for cardiac patients being monitored over a telephony network. In this paper, we present a polynomial distance measurement (PDM) method for ECG-based biometric authentication for the very first time, to the best of our knowledge. The proposed PDM method is up to 12 times faster than existing algorithms, requires up to 6.5 times less template storage, and needs an average acquisition time of only 2.49 s with the highest accuracy rate (up to 100 per cent) when tested on a population of 15. Moreover, the proposed ECG-based biometric system was deployed in a mobile phone based telemonitoring scenario with a multilayer authentication mechanism, upholding its applicability.
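The core idea of PDM can be sketched as follows: fit a low-degree polynomial to each beat, keep only the coefficients as the (very small) template, and compare beats by the distance between coefficient vectors. The fitting routine, degree and Euclidean metric here are assumptions for illustration, not the paper's exact procedure.

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via normal equations, solved by
    Gaussian elimination (fine for the tiny degrees used here)."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # forward elimination
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        s = sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (b[r] - s) / A[r][r]
    return coeffs

def poly_distance(beat_a, beat_b, degree=2):
    """Distance between two beats = Euclidean distance between their
    fitted coefficient vectors; the coefficients ARE the template."""
    ca = polyfit(list(range(len(beat_a))), beat_a, degree)
    cb = polyfit(list(range(len(beat_b))), beat_b, degree)
    return sum((a - b) ** 2 for a, b in zip(ca, cb)) ** 0.5
```

Because the template is just `degree + 1` numbers rather than the raw waveform, both storage and one-to-many matching become cheap, which is the property the abstract emphasizes.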
-
A chaos-based encryption technique to protect ECG packets for time critical telecardiology applications
Security and Communication Networks Volume 4, Issue 5, pages 515–524, May 2011
The electrocardiography (ECG) signal is popularly used for diagnosing cardiovascular diseases (CVDs). However, in recent times ECG is also being used for identifying persons. As ECG signals contain sensitive private health information along with details for person identification, they need to be encrypted before transmission through public media. Moreover, this encryption must be applied with minimal delay for authenticating CVD patients, as time is critical for saving a CVD-affected patient's life. In this paper, we propose the usage of multi-scroll chaos to encrypt ECG packets. ECG packets are encrypted with the chaos key by the mobile phones of patients subscribed to tele-cardiology applications. On the other hand, doctors and hospital attendants receive the encrypted ECG packets, which can be decrypted using the same chaos key. Using the techniques described in this paper, end-to-end security can be applied to wireless tele-cardiology applications with minimal processing. Our experimentation with 12 ECG segments shows that with the multi-scroll chaos implementation, CVD patients remain completely unidentified, upholding patients' privacy and preventing spoof attacks. Most importantly, the proposed method is 18 times faster than permutation-based ECG encoding, 25 times faster than wavelet-based ECG anonymization techniques and 31 times faster than noise-based ECG obfuscation techniques, establishing the proposed technique as the fastest ECG encryption system in the literature.
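The scheme's speed comes from using a chaotic system as a keystream generator and XOR-ing it with the packet. The paper uses multi-scroll chaos; the logistic map below is a much simpler chaotic system, used here purely as an illustrative stand-in, and the seed and packet contents are made up.

```python
def logistic_keystream(seed, length, r=3.99):
    """Derive a byte keystream from the logistic map x -> r*x*(1-x),
    a simple chaotic system standing in for multi-scroll chaos."""
    x, out = seed, []
    for _ in range(length):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def chaos_xor(packet, seed):
    """XOR an ECG packet with the chaotic keystream; applying the
    same seed (the shared chaos key) a second time decrypts."""
    keystream = logistic_keystream(seed, len(packet))
    return bytes(p ^ k for p, k in zip(packet, keystream))

packet = b"ECG:512,514,509,530"             # toy ECG packet
cipher = chaos_xor(packet, seed=0.613)
restored = chaos_xor(cipher, seed=0.613)    # same key decrypts
```

Both ends only need to share the tiny seed, and encryption costs one XOR per byte, which is why the approach suits time-critical, low-power telecardiology.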
-
A clustering based system for instant detection of cardiac abnormalities from compressed ECG
Expert Systems with Applications (Elsevier), Volume 38, Issue 5, pp 4705-4713, 2011
Compressed electrocardiography (ECG) is used in modern telecardiology applications for faster and more efficient transmission. However, existing ECG diagnosis algorithms require the compressed ECG packets to be decompressed before diagnosis can be applied. This additional decompression step for every ECG packet introduces undesirable delays, which can have a severe impact on the longevity of the patient. In this paper, we first used an attribute selection method that selects only a few features from the compressed ECG. Then we used the Expectation Maximization (EM) clustering technique to create normal and abnormal ECG clusters. Twenty different segments (13 normal and 7 abnormal) of compressed ECG from a MIT-BIH subject were tested with 100% success using our model. Apart from automatic clustering of normal and abnormal compressed ECG segments, this paper presents an algorithm to identify the initiation of abnormality, so that emergency personnel can be contacted for rescue within the earliest possible time. This innovative technique, based on data mining of compressed ECG attributes, enables faster identification of cardiac abnormalities, resulting in an efficient telecardiology diagnosis system.
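A bare-bones EM fit over a single extracted feature illustrates how two clusters (normal vs. abnormal) emerge without labels. The feature values are hypothetical, and the paper's actual attribute set and mixture model are not reproduced here.

```python
import math

def em_two_gaussians(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM; one
    component ends up modelling 'normal' and the other 'abnormal'."""
    mu = [min(data), max(data)]          # crude initialisation
    sigma = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-((x - mu[k]) ** 2) / (2 * sigma[k] ** 2))
                 for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means and spreads
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2
                      for r, x in zip(resp, data)) / nk
            sigma[k] = max(math.sqrt(var), 1e-6)  # avoid collapse
    return mu, sigma, resp

# Hypothetical per-segment feature: normal near 1.0, abnormal near 3.0
feats = [1.0, 1.1, 0.9, 1.05, 0.95, 3.0, 3.1, 2.9]
mu, sigma, resp = em_two_gaussians(feats)
```

After fitting, each new compressed-ECG segment is assigned to whichever cluster takes responsibility for it, with no decompression required.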
-
Diagnosis of cardiovascular abnormalities from Compressed ECG: A Data Mining based Approach
IEEE Transaction in Information Technology in Biomedicine, Volume 15, Issue 1, pp. 33-39 (2011)
Usage of compressed ECG for fast and efficient telecardiology applications is crucial, as ECG signals are enormously large in size. However, conventional ECG diagnosis algorithms require the compressed ECG packets to be decompressed before diagnosis can be performed. This added decompression step for every ECG packet introduces unnecessary delay, which is undesirable for cardiovascular disease (CVD) patients. In this paper, we demonstrate an innovative technique that performs real-time classification of CVD. With the help of this real-time classification, the emergency personnel or the hospital can automatically be notified via SMS/MMS/e-mail when a life-threatening cardiac abnormality of the CVD-affected patient is detected. Our proposed system initially uses data mining techniques, such as attribute selection (i.e., selecting only a few features from the compressed ECG) and expectation maximization (EM)-based clustering. These data mining techniques, running on a hospital server, generate a set of constraints representing each of the abnormalities. The patient's mobile phone then receives this set of constraints and employs a rule-based system that can identify each abnormal beat in real time. Our experimental results on 50 MIT-BIH ECG entries reveal that the proposed approach can successfully detect cardiac abnormalities (e.g., ventricular flutter/fibrillation, premature ventricular contraction, atrial fibrillation, etc.) with 97% accuracy on average. This innovative data mining technique on compressed ECG packets enables faster identification of cardiac abnormality directly from the compressed ECG, helping to build an efficient telecardiology diagnosis system.
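The phone-side rule application can be sketched as a table of mined range constraints checked per beat. Every rule name, feature name and numeric range below is hypothetical, invented for illustration rather than taken from the paper's mined constraint set.

```python
# Hypothetical constraint table mined on the hospital server: each
# abnormality is a set of allowed ranges over compressed-ECG features.
RULES = {
    "ventricular_flutter": {"rate": (240, 400), "qrs_width": (0.14, 0.30)},
    "atrial_fibrillation": {"rate": (100, 175), "rr_variability": (0.15, 1.0)},
}

def classify(features):
    """Rule-based step run on the phone: report every abnormality
    whose constraints all hold for this beat, else 'normal'.
    A missing feature fails its range check (NaN compares False)."""
    hits = [name for name, ranges in RULES.items()
            if all(lo <= features.get(f, float("nan")) <= hi
                   for f, (lo, hi) in ranges.items())]
    return hits or ["normal"]

print(classify({"rate": 300, "qrs_width": 0.2}))   # → ['ventricular_flutter']
print(classify({"rate": 72, "qrs_width": 0.08}))   # → ['normal']
```

Because the expensive mining happens once on the server, the phone only evaluates cheap range checks per beat, which is what makes real-time SMS/MMS/e-mail alerting feasible.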
-
Faster person identification using compressed ECG in time critical wireless telecardiology applications
Journal of Network and Computer Applications (Elsevier), Volume 34, Issue 1, pp 282-293, 2011
Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of the electrocardiogram (ECG) signal and the limited bandwidth of the Internet. However, with present ECG-based biometric techniques, compressed ECG must be decompressed before human identification can be performed. This additional decompression step creates a significant processing delay for the identification task, which becomes an obvious burden on a system when it must be done for millions of compressed ECG segments by the hospital. This paper proposes a novel method of ECG biometrics directly from compressed ECG, harnessing data mining (DM) techniques like attribute selection and clustering. The biometric template created by this new technique is smaller than existing ECG-based biometric templates as well as other forms of biometrics like face, finger, retina, etc. The template size (and also the matching time) is up to 8533 times lower than a face template, 61 times lower than the existing percentage root-mean-square difference (PRD) ECG-based biometric template and 9 times smaller than the polynomial distance measurement (PDM) based ECG biometric. A smaller template size substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism and ECG stream verification directly from compressed ECG.
-
Secured Transmission and Authentication; Efficient Transmission in Telecardiology; Efficient Cardiovascular Diagnosis
Mobile Web 2.0: Developing and Delivering Services to Mobile Devices, Book Published by CRC Press
From basic concepts to research grade material, Mobile Web 2.0: Developing and Delivering Services to Mobile Devices provides complete and up-to-date coverage of the range of technical topics related to Mobile Web 2.0. It brings together the work of 51 pioneering experts from around the world who identify the major challenges in Mobile Web 2.0 applications and provide authoritative insight into many of their own innovations and advances in the field.
-
ECG based biometric: the next generation in Human Identification
Handbook on Information & Communication Security, Book Edited by: Peter Stavroulakis, Springer, DOI 10.1007/978-3-642-04117-4, pp. 309-331
A biometric system performs template matching of acquired biometric data against template biometric data [17.1]. These biometric data can be acquired from several sources such as deoxyribonucleic acid (DNA), ear, face, facial thermogram, fingerprints, gait, hand geometry, hand veins, iris, keystroke, odor, palm print, retina, signature, voice, etc. According to previous research, DNA, iris and odor score highly on biometric identifier properties, including universality, distinctiveness and performance [17.1]. DNA provides a one-dimensional, ultimately unique code for accurate identification of a person, except in the case of identical twins. In biological terms, the "Central Dogma" refers to the basic concept that, in nature, genetic information generally flows from DNA to RNA (ribonucleic acid) to protein. Protein is ultimately responsible for the uniqueness provided by other biometric data (fingerprint, iris, face, retina, etc.). Therefore, it can be inferred that the uniqueness provided by existing biometric entities is inherited from the uniqueness of DNA. It is worth noting that the shape of the hand, palm print, face, or even a particular organ like the heart has distinctive features that can be useful for successful identification. The composition, mechanism and electrical activity of the human heart inherit uniqueness from the individuality of DNA.
-
Novel Methods of Faster Cardiovascular Diagnosis in Wireless Telecardiology
IEEE Journal on Selected Areas in Communications, VOL. 27, NO. 4, MAY 2009
With the rapid development of wireless technologies, mobile phones are gaining acceptance as an effective tool for cardiovascular monitoring. However, existing technologies have limitations in terms of efficient transmission of compressed ECG over text messaging services like SMS and MMS. In this paper, we first propose an ECG compression algorithm that allows lossless transmission of compressed ECG over a bandwidth-constrained wireless link. Then, we propose several algorithms for cardiovascular abnormality detection directly from the compressed ECG, maintaining end-to-end security and patient privacy while offering the benefits of faster diagnosis. Next, we show that our mobile phone based cardiovascular monitoring solution is capable of up to 6.72 times faster diagnosis compared to existing technologies. As the decompression time on a doctor's mobile phone can be significant, our method is highly advantageous in a patient wellness monitoring system where a doctor has to read and diagnose compressed ECGs of several patients assigned to him. Finally, we successfully implemented the prototype system by establishing mobile phone based cardiovascular patient monitoring.
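The lossless requirement can be illustrated with a toy delta-plus-run-length scheme. The paper's actual compression algorithm is different (and far stronger); this sketch only demonstrates the round-trip property that lossless transmission depends on, and the sample values are made up.

```python
def compress(samples):
    """Delta-encode then run-length-encode integer ECG samples.
    Lossless: decompress() reconstructs the input exactly."""
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    out, i = [], 0
    while i < len(deltas):
        j = i
        while j < len(deltas) and deltas[j] == deltas[i]:
            j += 1
        out.append((deltas[i], j - i))   # (delta value, run length)
        i = j
    return out

def decompress(pairs):
    """Expand the runs, then undo the delta encoding."""
    deltas = [v for v, n in pairs for _ in range(n)]
    samples, acc = [], 0
    for d in deltas:
        acc += d
        samples.append(acc)
    return samples

ecg = [512, 512, 512, 515, 518, 521, 521, 521]   # toy samples
packed = compress(ecg)          # [(512, 1), (0, 2), (3, 3), (0, 2)]
restored = decompress(packed)   # identical to ecg
```

Delta encoding works well on ECG because consecutive samples change slowly, so the deltas are small and repetitive; any scheme used over SMS/MMS must, like this one, invert exactly.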
-
Enforcing Secured ECG Transmission for realtime Telemonitoring: A Joint Encoding, Compression, Encryption Mechanism
Security and Communication Networks, Wiley InterScience Vol. 1, No. 5, 2008, pp. 389-405
Realtime telemonitoring of critical, acute and chronic patients has become increasingly popular with the emergence of portable acquisition devices and IP-enabled mobile phones. During telemonitoring, enormous physiological signals are transmitted through the public communication network in realtime. However, these physiological signals can be intercepted with minimal effort, since existing telemonitoring practice ignores privacy and security requirements. In this paper, to achieve end-to-end security, we first propose an encoding method capable of securing electrocardiogram (ECG) data transmission from an acquisition device to a mobile phone, and then from a mobile phone to a centralised medical server, by concealing cardiovascular details as well as the features in ECG data required to identify an individual. The encoding method not only conceals the cardiovascular condition but also reduces the enormous file size of the ECG, with a compression ratio of up to 3.84, making it suitable for energy-constrained small acquisition devices. As ECG data transfer faces even greater security vulnerabilities while traversing the public Internet, we further designed and implemented a three-phase encoding-compression-encryption mechanism on mobile phones using the proposed encoding method and existing compression and encryption tools. This new mechanism elevates the security strength of the system even further. Apart from higher security, we also achieved a higher compression ratio of up to 20.06, which enables faster transmission and makes the system suitable for realtime telemonitoring.
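The three-phase chain can be sketched end to end. The byte-substitution "encoding" and repeating-key XOR "encryption" below are simplified stand-ins for the paper's encoding method and the real compression/encryption tools, and all keys and sample bytes are made up.

```python
import random
import zlib

def substitute(data, key, inverse=False):
    """Phase 1 (encoding): key-derived byte substitution that
    conceals the waveform; invertible, so the server can undo it."""
    table = list(range(256))
    random.Random(key).shuffle(table)     # deterministic for a key
    if inverse:
        inv = [0] * 256
        for i, v in enumerate(table):
            inv[v] = i
        table = inv
    return bytes(table[b] for b in data)

def xor_encrypt(data, key):
    """Phase 3 (encryption): repeating-key XOR standing in for the
    real encryption tool; XOR with the same key also decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def send(ecg, perm_key=42, enc_key=b"k3y"):
    """Encode, then compress (phase 2, zlib here), then encrypt."""
    return xor_encrypt(zlib.compress(substitute(ecg, perm_key)), enc_key)

def receive(packet, perm_key=42, enc_key=b"k3y"):
    """Invert the three phases in reverse order."""
    return substitute(zlib.decompress(xor_encrypt(packet, enc_key)),
                      perm_key, inverse=True)

ecg = bytes([120, 121, 119, 200, 90, 121, 120] * 40)  # toy ECG bytes
packet = send(ecg)   # what actually crosses the public network
```

Layering the phases means an eavesdropper must defeat both the encryption and the encoding to recover anything identifying, while the compression in the middle keeps the packet small.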
Courses
-
10886A: Developing Microsoft SQL Server 2012 Databases
SolidQ, Melbourne 2012
-
AIM Management Skill Set
AIM Victoria 2011
-
Business Skills Program
Young Achievement Aus
-
Critical and Creative Thinking
RMIT University
-
Entrepreneurship
eGrad School
-
Implement Change
AIM Victoria 2012
-
Leadership and Communication
Curtin University
-
Manage Project
AIM Victoria 2012
-
Microsoft SQL Server Business Intelligence Bootcamp
SolidQ, Melbourne 2011
-
Project Management
UTS, Sydney
-
Public Policy
RMIT University
-
Research Commercialisation
QUT, Queensland
-
Risk Management
AIM Victoria 2012
Honors & Awards
-
Commercialization Training Scheme Scholarship
Commonwealth Government
Awarded $7000 AUD by Commercialization Training Scheme
-
Certificate of Appreciation for upholding DHS Value of 'Quality'
Department of Health
http://www.fahimsufi.com/Quality.pdf
-
Best Paper Runner Up
IEEE Victorian Chapter
http://www.fahimsufi.com/IEEEBP.pdf
-
Australian Post Graduate Award
Commonwealth Government, Australia
-
Best Literature Review Award
RMIT University
http://www.fahimsufi.com/BLR.pdf
-
Victorian Government ICT Post Graduate Award
Victorian Government ICT Ministry
-
Best PhD Thesis Award
School of CS & IT, RMIT University
http://www.fahimsufi.com/img013.jpg
Languages
-
Bengali
-
Hindi
Recommendations received
4 people have recommended Dr. Fahim K