Publications

Journals

Epileptic seizure detection using CHB-MIT dataset: The overlooked perspectives


Type: Journal
Sub-Type: Journal Article
Title: Epileptic seizure detection using CHB-MIT dataset: The overlooked perspectives
Abstract: Epilepsy is a life-threatening neurological condition. Manual detection of epileptic seizures (ES) is laborious and burdensome. Machine learning techniques applied to electroencephalography (EEG) signals are widely used for automatic seizure detection. Some key factors are worth considering for the real-world applicability of such systems: (i) continuous EEG data typically has a higher class imbalance; (ii) higher variability across subjects is present in physiological signals such as EEG; and (iii) seizure event detection is more practical than random segment detection. Most prior studies failed to address these crucial factors altogether for seizure detection. In this study, we intend to investigate a generalized cross-subject seizure event detection system using the continuous EEG signals from the CHB-MIT dataset that considers all these overlooked aspects. A 5-second non-overlapping window is used to extract 92 features from 22 EEG channels; however, the most significant 32 features from each channel are used in experimentation. Seizure classification is done using a Random Forest (RF) classifier for segment detection, followed by a post-processing method used for event detection. Adopting all the above-mentioned essential aspects, the proposed event detection system achieved 72.63% and 75.34% sensitivity for subject-wise 5-fold and leave-one-out analyses, respectively. This study presents the real-world scenario for ES event detectors and furthers the understanding of such detection systems.
Citation:

Ali E, Angelova M, Karmakar C. Epileptic seizure detection using CHB-MIT dataset: The overlooked perspectives. Royal Society Open Science (RSOS). 2024 May;11(6):230601.

DOI/Link: 10.1098/rsos.230601
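
Illustrative sketch (Python; not the authors' exact pipeline): a Random Forest classifies 5-second windows, and a simple post-processing pass merges consecutive positive windows into candidate seizure events. The merging rule (a run of at least min_consecutive positive windows) and the balanced class weighting are assumptions standing in for the paper's unspecified post-processing details; window labels are assumed to be 0/1.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def predict_events(X_train, y_train, X_test, min_consecutive=3, win_sec=5):
    """Train an RF on segment features; return (start_s, end_s) event spans."""
    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                 random_state=0)
    clf.fit(X_train, y_train)
    seg_pred = clf.predict(X_test)                   # one label per 5-second window

    events, run_start = [], None
    for i, p in enumerate(np.append(seg_pred, 0)):   # trailing 0 closes the last run
        if p == 1 and run_start is None:
            run_start = i
        elif p != 1 and run_start is not None:
            if i - run_start >= min_consecutive:     # keep sufficiently long runs
                events.append((run_start * win_sec, i * win_sec))
            run_start = None
    return events

The predicted spans can then be compared against annotated seizure onsets and offsets to compute event-level sensitivity, which is the kind of figure reported above.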

Sensor-based indoor air temperature prediction using deep ensemble machine learning: An Australian urban environment case study


Type: Journal
Sub-Type: Journal Article
Title: Sensor-based indoor air temperature prediction using deep ensemble machine learning: An Australian urban environment case study
Abstract: Accurate prediction of indoor temperature is critical for climate change adaptation and occupant health. The aim of this study is to investigate an improved deep ensemble machine learning framework (DEML), by adjusting the model architecture with several machine learning (ML) and deep learning (DL) approaches to forecast sensor-based indoor temperature in the Australian urban environment. We collected ambient station-based temperatures, satellite-based outdoor climate characteristics, and low-cost sensor-based indoor environmental metrics from 96 devices from Aug 2019 to Nov 2022, and established DEML with a rolling-window approach to assess prediction stability over time. The DEML model was compared with several benchmark models, including Random Forest (RF), Support Vector Machine (SVM), eXtreme Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM), and the Super Learner model (SL). A total of 13,715 days [median: 341 days; IQR (interquartile range): 221–977 days] of low-cost sensor-based indoor temperature data were included from 25 commercial and residential buildings across eight cities. The prediction performance of DEML was superior to the other five benchmark models for most sensors [coefficient of determination (R2) of 0.861–0.990 and root mean square error (RMSE) of 0.125–0.886 °C], followed by the RF and SL algorithms. DEML consistently achieved high accuracy across different climate zones, seasons, and building types, and could be used as a crucial tool for optimizing energy use, maintaining occupant comfort and health, and adapting to the impacts of climate change.
Citation:

Yu W, Nakisa B, Ali E, Loke SW, Stevanovic S, Guo Y. Sensor-based indoor air temperature prediction using deep ensemble machine learning: An Australian urban environment case study. Urban Climate. 2023 Sep 1;51:101599.

DOI/Link: 10.1016/j.uclim.2023.101599
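
Illustrative sketch (Python; a simplified stand-in, not the published DEML architecture): a stacked ensemble of two base regressors with a linear meta-learner, evaluated over successive time-ordered splits. The base learners, meta-learner, and the use of scikit-learn's TimeSeriesSplit (expanding windows) in place of the paper's rolling-window scheme are all assumptions, and the deep-learning components (e.g., LSTM) are omitted.

import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import TimeSeriesSplit
from sklearn.svm import SVR

def rolling_window_eval(X, y, n_splits=5):
    """Fit a stacked ensemble on each time-ordered split and score the next block."""
    model = StackingRegressor(
        estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("svr", SVR(C=10.0))],
        final_estimator=Ridge(alpha=1.0),
    )
    scores = []
    for train_idx, test_idx in TimeSeriesSplit(n_splits=n_splits).split(X):
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        rmse = float(np.sqrt(mean_squared_error(y[test_idx], pred)))
        scores.append((r2_score(y[test_idx], pred), rmse))
    return scores   # one (R2, RMSE in degrees C) pair per evaluation window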

A LSB Based Image Steganography Using Random Pixel and Bit Selection for High Payload


Type: Journal
Sub-Type: Journal Article
Title: A LSB based image steganography using random pixel and bit selection for high payload
Abstract: Security in digital communication is becoming more important as more systems are connected to the internet day by day. It is necessary to protect secret messages during transmission over insecure channels of the internet; thus, data security has become an important research issue. Steganography is a technique that embeds secret information into a carrier such as images, audio files, text files, and video files so that it cannot be observed. In this paper, a new spatial-domain image steganography method is proposed to ensure the privacy of digital data during transmission over the internet. In this method, least significant bit (LSB) substitution is applied, where the information is embedded at random bit positions of random pixel locations of the cover image using a Pseudo Random Number Generator (PRNG). The proposed method uses a 3-3-2 approach to hide one byte in a pixel of a 24-bit color image. PRNGs are used in two different stages of the embedding process: the first selects random pixels, and the second selects random bit positions within the R, G, and B values of a pixel to embed one byte of information. Due to this randomization, the security of the system is expected to increase, and the method achieves a very high maximum hiding capacity, which signifies the importance of the proposed method.
Citation:

Ehsan Ali UA, Ali E, Sohrawordi M, Sultan MN. A LSB based image steganography using random pixel and bit selection for high payload. International Journal of Mathematical Sciences and Computing (IJMSC). 2021 Aug 8;7(3):24-31.

DOI/Link: 10.5815/ijmsc.2021.03.03
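
Illustrative sketch (Python) of the 3-3-2 LSB embedding with PRNG-driven pixel selection described in the abstract. This simplified version embeds 3 bits in R, 3 in G, and 2 in B of each randomly chosen pixel; the second-stage PRNG that also randomizes the bit positions within each channel, and the exact stego-key handling, are left out as unspecified details.

import numpy as np

def embed_332(cover: np.ndarray, payload: bytes, key: int) -> np.ndarray:
    """Hide `payload` in an HxWx3 uint8 cover image; `key` seeds the PRNG."""
    stego = cover.copy()
    h, w, _ = stego.shape
    rng = np.random.default_rng(key)                 # stage-1 PRNG: pixel order
    order = rng.permutation(h * w)[:len(payload)]    # one pixel per payload byte
    for pix, byte in zip(order, payload):
        r, c = divmod(int(pix), w)
        hi3, mid3, lo2 = byte >> 5, (byte >> 2) & 0b111, byte & 0b11
        stego[r, c, 0] = (stego[r, c, 0] & 0xF8) | hi3   # 3 bits into R LSBs
        stego[r, c, 1] = (stego[r, c, 1] & 0xF8) | mid3  # 3 bits into G LSBs
        stego[r, c, 2] = (stego[r, c, 2] & 0xFC) | lo2   # 2 bits into B LSBs
    return stego

Extraction reverses the process: the receiver regenerates the same pixel order from the shared key and reads the 3-3-2 bits back out of each selected pixel.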

Enhancement of single-handed Bengali sign language recognition based on HOG features


Type: Journal
Sub-Type: Journal Article
Title: Enhancement of single-handed Bengali sign language recognition based on HOG features
Abstract: Deaf and dumb people usually use sign language as a means of communication. This language is made up of manual and non-manual physical expressions that help these people communicate among themselves and with others. Sign language recognition deals with recognizing these numerous expressions. In this paper, a model has been proposed that recognizes different characters of Bengali sign language. Since a dataset for this work is not readily available, we have taken the initiative to build the dataset for this purpose. On the dataset, some pre-processing techniques such as Histogram Equalization and Lightness Smoothing have been performed to enhance the sign images. Then, the skin portion of the image is segmented using the YCbCr color space, from which the desired hand portion is cut out. After converting the image to grayscale, the proposed model computes Histogram of Oriented Gradients (HOG) features for the different signs. The extracted features of the signs are used to train a K-Nearest Neighbors (KNN) classifier, which is used to classify the various signs. The experimental results show that the proposed model produces 91.1% accuracy, which is quite satisfactory for a real-life setup, in comparison to other investigated approaches.
Citation:

Tabassum T, Mahmud I, Uddin MD, Ali E, Afjal MI, Nitu AM. Enhancement of single-handed Bengali sign language recognition based on HOG features. Journal of Theoretical and Applied Information Technology (JATIT). 2020 Mar;98(5):743-756.

DOI/Link: http://www.jatit.org/
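
Illustrative sketch (Python, using OpenCV and scikit-image) of the pipeline described in the abstract: YCbCr-based skin segmentation, grayscale conversion, HOG feature extraction, and a KNN classifier. The Cr/Cb skin thresholds, image size, and HOG parameters are common defaults chosen for illustration, not the authors' exact settings.

import cv2
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def sign_features(bgr_image: np.ndarray) -> np.ndarray:
    """Segment the skin region in YCbCr space and return HOG features."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # assumed Cr/Cb skin range
    hand = cv2.bitwise_and(bgr_image, bgr_image, mask=mask)
    gray = cv2.resize(cv2.cvtColor(hand, cv2.COLOR_BGR2GRAY), (128, 128))
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_knn(images, labels, k=3):
    """Fit a KNN classifier on HOG features of the training sign images."""
    X = np.stack([sign_features(img) for img in images])
    return KNeighborsClassifier(n_neighbors=k).fit(X, labels)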

Mapping Character Position Based Cryptographic Algorithm with Numerical Conversions


Type: Journal
Sub-Type: Journal Article
Title: Mapping Character Position Based Cryptographic Algorithm with Numerical Conversions
Abstract: Security of data is a challenging aspect of modern information technology. An improved cryptographic algorithm is introduced in this paper to offer comparatively higher security. We divide the message into several blocks of 8 bits each, then convert each character into its corresponding positional number, where uppercase letters, lowercase letters, digits, and special characters are mapped into particular ranges of numbers. Each decimal number is then replaced by its 7-bit binary equivalent, and the 8 blocks of binary numbers are combined into a single string. After performing some further operations on the data, we obtain the final encrypted message. For decryption, the same method is applied in reverse: starting from the encrypted message, we perform basic operations such as replacing the binary values with their equivalent decimal positions to recover the original message. Although the encrypted message is longer than the original message, the proposed algorithm offers higher security for real-time communications.
Citation:

Moon M, Tanim AT, Shoykot MZ, Sultan MN, Ali UM, Ali E. Mapping character position based cryptographic algorithm with numerical conversions. International Journal of Computer Science and Software Engineering (IJCSSE). 2020 Mar 1;9(3):56-9.

DOI/Link: https://ijcsse.org/
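
Illustrative sketch (Python) of the character-to-position mapping and 7-bit conversion described in the abstract. The exact position ranges for each character class and the additional operations applied to the combined bit string are not specified, so the mapping table below is an assumed example rather than the published scheme.

ALPHABET = (
    [chr(c) for c in range(ord("A"), ord("Z") + 1)]    # uppercase: positions 0-25
    + [chr(c) for c in range(ord("a"), ord("z") + 1)]  # lowercase: positions 26-51
    + [chr(c) for c in range(ord("0"), ord("9") + 1)]  # digits: positions 52-61
    + list(" .,;:!?-()")                               # special characters
)
POS = {ch: i for i, ch in enumerate(ALPHABET)}

def encode(message: str) -> str:
    """Map each character to its position and emit a 7-bit binary string."""
    return "".join(format(POS[ch], "07b") for ch in message)

def decode(bits: str) -> str:
    """Split the bit string into 7-bit chunks and map each position back."""
    return "".join(ALPHABET[int(bits[i:i + 7], 2)] for i in range(0, len(bits), 7))

For example, encode("Hi!") produces a 21-bit string that decode() maps back to "Hi!"; the published algorithm then applies further block-level operations to this string before transmission.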

Pseudo random ternary sequence and its autocorrelation property over finite field


Type: Journal
Sub-Type: Journal Article
Title: Pseudo random ternary sequence and its autocorrelation property over finite field
Abstract: In this paper, the authors have proposed an innovative approach for generating a pseudo-random ternary sequence by using a primitive polynomial, trace function, and Legendre symbol over an odd characteristic field. Let p be an odd prime number, F_p be an odd characteristic prime field, and m be the degree of the primitive polynomial f(x). Let w be its zero and a primitive element in F_p^m. In the beginning, a primitive polynomial f(x) generates a maximum-length vector sequence, then the trace function Tr(·) is used to map an element of the extension field F_p^m to an element of the prime field F_p, then a non-zero scalar A ∈ F_p is added to the trace value, and finally the Legendre symbol (a/p) is utilized to map the scalars into a ternary sequence having the values {−1, 0, 1}. By applying the new parameter A, the period of the sequence is extended to its maximum value, which is p^m − 1. Hence, our proposed sequence has some parameters such as m, p, and A. This paper mathematically explains the properties of the proposed ternary sequence such as period and autocorrelation. Additionally, these properties are also justified based on some experimental results.
Citation:

Ali MA, Ali E, Habib MA, Nadim M, Kusaka T, Nogami Y. Pseudo random ternary sequence and its autocorrelation property over finite field. International Journal of Computer Network and Information Security (IJCNIS). 2017 Sep 1;9(9):54.

DOI/Link: 10.5815/ijcnis.2017.09.07
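
Illustrative sketch (Python) of the proposed construction for the special case m = 1, where the extension field F_p^m is F_p itself and the trace map is the identity. For m > 1, finite-field extension arithmetic and the trace function Tr: F_p^m -> F_p would replace the plain modular exponentiation below; this simplification is an assumption made to keep the example self-contained.

def legendre(a: int, p: int) -> int:
    """Legendre symbol (a/p) in {-1, 0, 1}, computed via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def ternary_sequence(p: int, w: int, A: int, length: int):
    """s_i = Legendre(Tr(w^i) + A); for m = 1 the trace is the identity map."""
    return [legendre(pow(w, i, p) + A, p) for i in range(length)]

# Example: p = 7, primitive root w = 3, non-zero offset A = 1; period p - 1 = 6.
print(ternary_sequence(7, 3, 1, 12))   # [1, 1, -1, 0, -1, -1, 1, 1, -1, 0, -1, -1]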

Smart Campus Using IoT with Bangladesh Perspective: A Possibility and Limitation


Type: Journal
Sub-Type: Journal Article
Title: Smart Campus Using IoT with Bangladesh Perspective: A Possibility and Limitation
Abstract: The concept of the smart classroom has been around for quite a long time, and a lot of work is still in progress to make the teaching-learning environment more productive and intuitive. This smartness is not limited to a single classroom; rather, it extends to making the whole institute campus smart by automating facilities and access for individual entities. This move gained much pace with the introduction of the concept of the Internet of Things (IoT). IoT is not a new concept, but it formalizes a process in which an object has the ability to sense its environment, act (optionally) according to the sensed data, and, finally and more importantly, communicate this data to a remote entity over a network. In this way, an object becomes a smart entity that can be applied to virtually any field or context, limited only by the imagination. Smart campuses, smart cities, smart classrooms, and many more Machine-to-Machine (M2M) applications are examples of IoT. In this paper, the IoT-enabled smart campus environment is explored based on the existing literature and different applications. Then the current state of a university campus in Bangladesh, Hajee Mohammad Danesh Science and Technology University (HSTU), is examined; finally, the possibility and opportunity of applying IoT-enabled smart classrooms, laboratories, libraries, and buildings in the context of HSTU are investigated, and necessary recommendations are suggested for achieving smartness on the HSTU campus.
Citation:

Sultan MN, Ali E, Ali MA, Nadim M, Habib MA. Smart campus using IoT with Bangladesh perspective: A possibility and limitation. International Journal for Research in Applied Science and Engineering Technology (IJRASET). 2017 Aug;8:1681-90.

DOI/Link: 10.22214/IJRASET.2017.8239

Study of Abstractive Text Summarization Techniques


Type: Journal
Sub-Type: Journal Article
Title: Study of Abstractive Text Summarization Techniques
Abstract: Nowadays, people use the internet to find information through information retrieval tools such as Google, Yahoo, Bing, and so on. Because of the increasing rate of data growth, it is not possible for users to read each document in order to find the useful one; people need tools that extract meaningful information. Among all the modern technologies, text summarization has become an important and timely tool for users to quickly understand a large volume of information. An automatic text summarization system is one of the special data mining applications that supports this task by providing a quick summary of the information contained in the documents. Text summarization approaches are broadly classified into two categories: extractive and abstractive. Many abstractive text summarization techniques have been developed for languages such as English, Arabic, and Hindi. However, there is no remarkable abstractive method for Bengali text, because each word of every sentence must access a domain ontology and WordNet, and this requires complete knowledge of each Bengali word, which makes summarization a lengthy process. This has motivated the authors to observe, analyze, and compare the existing techniques so that an abstractive summarization technique for Bengali texts can be proposed. To do so, the authors have conducted a survey of abstractive text summarization techniques for various languages in this paper. Finally, a comparative scenario of the discussed single- and multi-document summarization techniques has been presented.
Citation:

Yeasmin S, Tumpa PB, Nitu AM, Uddin MP, Ali E, Afjal MI. Study of abstractive text summarization techniques. American Journal of Engineering Research (AJER). 2017;6(8):253-60.

DOI/Link: https://www.ajer.org/

Conferences

An Efficient Feature Optimization Approach with Machine Learning for Detection of Major Depressive Disorder Using EEG Signal


Type: Conference
Sub-Type: Conference Article
Title: An Efficient Feature Optimization Approach with Machine Learning for Detection of Major Depressive Disorder Using EEG Signal
Abstract: Major Depressive Disorder (MDD) is a mental health condition marked by persistent feelings of sadness, diminished interest in once-enjoyable activities, and an array of physical and emotional symptoms that profoundly disrupt an individual's daily life and functioning. Under these circumstances, detecting MDD at an early stage is essential. Identifying MDD involves the extraction of multiple features from unprocessed EEG signals. This study proposes a machine learning approach that identifies a minimal set of 14 significant features by removing highly correlated features and applying a genetic algorithm with several settings for the maximum number of features. The proposed method obtained accuracies of 84.1% with AdaBoost, 80.1% with Decision Tree, 85.5% with Random Forest, 84.9% with Gradient Boosting, and 84.9% with XGBoost. To assess performance, subject-wise five-fold cross-validation is used. This performance is comparable to that obtained using all features (roughly 1300); as a result, utilizing fewer features maintains nearly the same performance while reducing the time necessary for MDD detection.
Citation:

Bhuyain AR, Ferdouse J, Babar MU, Sohrawordi M, Islam MR, Ali E. An Efficient Feature Optimization Approach with Machine Learning for Detection of Major Depressive Disorder Using EEG Signal. In 2023 26th International Conference on Computer and Information Technology (ICCIT) 2023 Dec 13 (pp. 1-6). IEEE.

DOI/Link: https://doi.org/10.1109/ICCIT60459.2023.10441051
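
Illustrative sketch (Python; not the authors' implementation) of genetic-algorithm feature selection in the spirit of the approach above: binary masks over the feature set are evolved with cross-validated Random Forest accuracy as the fitness, and masks larger than the feature budget are penalized. Population size, mutation rate, and the classifier are illustrative assumptions; removal of highly correlated features beforehand is assumed to have been done already (a correlation-pruning sketch appears under the next entry).

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def ga_select(X, y, max_features=14, pop_size=30, generations=20, seed=0):
    """Evolve binary feature masks; fitness = 5-fold CV accuracy within a budget."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < (max_features / n)      # sparse initial masks

    def fitness(mask):
        if not mask.any() or mask.sum() > max_features:       # enforce feature budget
            return 0.0
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        return cross_val_score(clf, X[:, mask], y, cv=5).mean()

    for _ in range(generations):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]   # keep the best half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])         # one-point crossover
            child ^= rng.random(n) < 0.01                      # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    return np.flatnonzero(max(pop, key=fitness))               # indices of chosen features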

Optimal Feature Identification and Major Depressive Disorder Prediction through Correlation-Based Machine Learning Approach


Type: Conference
Sub-Type: Conference Article
Title: Optimal Feature Identification and Major Depressive Disorder Prediction through Correlation-Based Machine Learning Approach
Abstract: Major Depressive Disorder (MDD) is a mental health disorder characterized by continuous feelings of sadness, diminished interest in activities, and a range of physical and emotional symptoms that profoundly affect a person's daily life and functioning. Early detection of MDD is crucial for improved treatment outcomes. The detection of MDD relies on extracting several features from raw EEG signals. This study presents a machine learning-based approach to identify MDD using minimal EEG signal features, employing correlation analysis with various correlation coefficient values. The proposed method achieved an impressive accuracy of 84% with SVM, showing 91.5% sensitivity, 75.5% specificity, and an 85.7% F1-score with the best correlation coefficient value. To evaluate performance, 5-fold cross-validation accounting for inter-subject variability is utilized. The study extracts several feature sets using different coefficient values and assesses their performance, aiming to identify the most effective features for early diagnosis of MDD.
Citation:

Babar MU, Bhuyain AR, Ferdouse J, Sohrawordi M, Ali E. Optimal Feature Identification and Major Depressive Disorder Prediction through Correlation-Based Machine Learning Approach. In 2023 6th International Conference on Electrical Information and Communication Technology (EICT) 2023 Dec 7 (pp. 1-5). IEEE.

DOI/Link: https://doi.org/10.1109/EICT61409.2023.10427585
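
Illustrative sketch (Python): features whose pairwise absolute correlation exceeds a chosen coefficient value are pruned, and an SVM is then evaluated with subject-wise 5-fold cross-validation (GroupKFold keeps every segment of a subject in a single fold). The threshold sweep, scaling, and SVM settings are illustrative assumptions, not the published configuration.

import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def prune_by_correlation(X: np.ndarray, threshold: float) -> np.ndarray:
    """Return indices of features kept after dropping highly correlated ones."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return np.array(keep)

def evaluate_thresholds(X, y, subjects, thresholds=(0.7, 0.8, 0.9)):
    """Subject-wise 5-fold SVM accuracy for each correlation threshold."""
    results = {}
    for t in thresholds:
        idx = prune_by_correlation(X, t)
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(model, X[:, idx], y,
                                 cv=GroupKFold(n_splits=5), groups=subjects)
        results[t] = (len(idx), scores.mean())
    return results   # threshold -> (number of features kept, mean accuracy)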

Performance Analysis of Entropy Methods in Detecting Epileptic Seizure from Surface Electroencephalograms


Type: Conference
Sub-Type: Conference Article
Title: Performance Analysis of Entropy Methods in Detecting Epileptic Seizure from Surface Electroencephalograms
Abstract: Physiological signals like electrocardiography (ECG) and electroencephalography (EEG) are complex and nonlinear in nature. To retrieve diagnostic information from these, we need the help of nonlinear methods of analysis. Entropy estimation is a very popular approach in the nonlinear category, where entropy estimates are used as features for signal classification and analysis. In this study, we analyze and compare the performances of four entropy methods, namely Distribution entropy (DistEn), Shannon entropy (ShanEn), Renyi entropy (RenEn), and Lempel-Ziv complexity (LempelZiv), as classification features to detect epileptic seizure (ES) from surface electroencephalography (sEEG) signals. Experiments were conducted on sEEG data from 23 subjects, obtained from the CHB-MIT database of PhysioNet. ShanEn, RenEn, and LempelZiv are found to be potential features for accurate and consistent detection of ES from sEEG across multiple channels and subjects.
Citation:

Ali E, Udhayakumar RK, Angelova M, Karmakar C. Performance analysis of entropy methods in detecting epileptic seizure from surface electroencephalograms. In 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) 2021 Nov 1 (pp. 1082-1085). IEEE.

DOI/Link: 10.1109/EMBC46164.2021.9629538
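
Illustrative sketch (Python) of two of the compared measures, histogram-based Shannon and Renyi entropy, computed per EEG window as candidate classification features. The bin count, Renyi order, and window length are illustrative choices; DistEn and Lempel-Ziv complexity need their own estimators and are not shown.

import numpy as np

def shannon_entropy(x: np.ndarray, bins: int = 32) -> float:
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return float(-(p * np.log2(p)).sum())

def renyi_entropy(x: np.ndarray, alpha: float = 2.0, bins: int = 32) -> float:
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return float(np.log2((p ** alpha).sum()) / (1.0 - alpha))

def window_features(signal: np.ndarray, fs: int, win_sec: int = 5) -> np.ndarray:
    """Return one (ShanEn, RenEn) pair per non-overlapping window of the signal."""
    step = fs * win_sec
    windows = [signal[i:i + step] for i in range(0, len(signal) - step + 1, step)]
    return np.array([(shannon_entropy(w), renyi_entropy(w)) for w in windows])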

Efficient Noise Reduction and HOG Feature Extraction for Sign Language Recognition


Type: Conference
Sub-Type: Conference Article
Title: Efficient Noise Reduction and HOG Feature Extraction for Sign Language Recognition
Abstract: Sign language is the communication standard for people who have hearing and speaking deficiencies, usually called deaf and dumb; it is the only way for such people to communicate. This paper proposes a model that helps in recognizing the different signs of American Sign Language. As this is still an emerging field of research, the datasets available on this topic are very noisy. In this paper, we propose some image-processing operations, such as logarithmic transformation and histogram equalization, to reduce the noise in the images of the dataset. Then, Canny edges are detected from the segmented sign images. After that, the proposed method identifies the signs based on features extracted using the Histogram of Oriented Gradients (HOG) feature extraction strategy. The extracted features of the signs are classified using a KNN classifier. The experimental results show that the proposed method offers better classification accuracy (94.23%) in comparison to a method based on bag-of-features and SVM (86%).
Citation:

Mahmud I, Tabassum T, Uddin MP, Ali E, Nitu AM, Afjal MI. Efficient noise reduction and HOG feature extraction for sign language recognition. In 2018 International Conference on Advancement in Electrical and Electronic Engineering (ICAEEE) 2018 Nov 22 (pp. 1-4). IEEE.

DOI/Link: 10.1109/ICAEEE.2018.8642983
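
Illustrative sketch (Python, using OpenCV and scikit-image) of the preprocessing chain outlined above: logarithmic transformation, histogram equalization, Canny edge detection, and HOG feature extraction. The log scaling, Canny thresholds, image size, and HOG parameters are assumed values for illustration; the resulting feature vectors would feed a KNN classifier as in the paper.

import cv2
import numpy as np
from skimage.feature import hog

def preprocess_and_extract(bgr_image: np.ndarray) -> np.ndarray:
    """Denoise/enhance a sign image and return its HOG feature vector."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    log_img = np.uint8(255 * np.log1p(gray) / np.log1p(255.0))  # log transformation
    equalized = cv2.equalizeHist(log_img)                       # histogram equalization
    edges = cv2.Canny(equalized, 50, 150)                       # Canny edge map
    edges = cv2.resize(edges, (128, 128))
    return hog(edges, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")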

Indexed Binary Search based efficient search generator for J2ME English to English dictionary


Type: Conference
Sub-Type: Conference Article
Title: Indexed Binary Search based efficient search generator for J2ME English to English dictionary
Abstract: In the present era of modern technology, everyone wants to get more powerful and efficient services from a tiny device: the cell phone or mobile phone. Cell phones are now used not only for voice or text communication but also for multimedia, web access, entertainment, education, and many other purposes through mobile apps. English is the de facto international language for communication, and an English-to-English dictionary helps one learn English in an easy way. In this paper, a J2ME English-to-English dictionary application has been developed for Java-supported cell phones. To accelerate searching in the dictionary, we have developed a new search methodology called Indexed Binary Search, based on conventional binary search. The developed methodology first reduces the search domain for the word to be searched and then performs a conventional binary search for the word within that domain. In the developed dictionary, with 17,700 words stored, the proposed Indexed Binary Search performs a conventional binary search over 681 words on average to find a word, whereas conventional binary search uses all 17,700 words. Thus, in this case, Indexed Binary Search is approximately two times faster than conventional binary search.
Citation:

Uddin MP, Ali E, Marjan MA, Al Mamun MA. Indexed Binary Search based efficient search generator for J2ME English to English dictionary. In 2014 International Conference on Informatics, Electronics & Vision (ICIEV) 2014 May 23 (pp. 1-6). IEEE.

DOI/Link: 10.1109/ICIEV.2014.6850692
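
Illustrative sketch (Python rather than J2ME, for brevity) of the Indexed Binary Search idea: a small index maps a word's leading letter to the sub-range of the sorted word list sharing that prefix, and a conventional binary search then runs only inside that sub-range. Using just the first letter as the index key is an assumption; the paper's index granularity may differ.

from bisect import bisect_left

def build_index(sorted_words):
    """Map first letter -> (start, end) slice of the sorted word list."""
    index, start = {}, 0
    for i in range(1, len(sorted_words) + 1):
        if i == len(sorted_words) or sorted_words[i][0] != sorted_words[start][0]:
            index[sorted_words[start][0]] = (start, i)
            start = i
    return index

def indexed_binary_search(sorted_words, index, word):
    """Return the word's position or -1, searching only the indexed sub-range."""
    lo, hi = index.get(word[0], (0, 0))
    pos = bisect_left(sorted_words, word, lo, hi)
    return pos if pos < hi and sorted_words[pos] == word else -1

With 17,700 words split across 26 first-letter buckets, each binary search runs over roughly 681 words on average, matching the reduced search domain described in the abstract.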

Posters

Deep Learning Model for Detection of Electrographic Seizures from continuous EEG in ICU patients


Type: Poster
Sub-Type: Poster Abstract
Title: Deep Learning Model for Detection of Electrographic Seizures from continuous EEG in ICU patients
Abstract:

Rationale: Continuous electroencephalography (cEEG) is essential for accurate diagnosis of seizures or status epilepticus in critically ill patients in the intensive care unit (ICU). However, manual interpretation of the extensive data by experienced EEG readers is labor-intensive and time-consuming. In this study, we assessed the potential of deep learning models for automated seizure detection from cEEG recordings, thereby expediting cEEG interpretation and clinical management.

Methods: Five records of 21-channel cEEG samples recorded from five different ICU patients on Profusion EEG Software V6 were analyzed. In each record, two trained epileptologists and an EEG technician reviewed and labelled 30-minute samples of non-seizure and seizure activities. The records contained 16 electrographic seizures in total. The experimental protocol of data splitting, model training and testing is shown in Fig. 1. We used a convolutional neural network (CNN) architecture to train deep-learning models for each cEEG channel from raw signal. In addition, a data augmentation technique was used to boost the minority class (seizure event). For reporting outcomes, we used a ‘leave-one-record-out’ testing approach, where four out of five records were used for training the model and the remaining record was used for testing. An event was classified as correct if >= 75% of consecutive segments from an event were classified correctly by the model. We measured accuracy (Acc), sensitivity (Sen) and false positive rate (FPR) of the model after each iteration and reported the average performance after five iterations per channel (one iteration for each test record).

Results: The average performance of individual cEEG channels is shown in Fig. 2. Of the 21 channels used, 19 channels gave zero or insignificant FPR, a highly desired outcome to reduce false alarms in the ICU. The range of Acc of the model across these 19 channels was 90.77% - 96.90% and that of Sen was 87.50% - 93.75%. Even considering all 21 channels, including the ones that have a significant FPR, the minimum Acc and Sen were 93.85% and 87.5%, respectively, and the maximum FPR was 30.56%.

Conclusions: In this study, we demonstrate a proof of concept of a deep learning model to detect electrographic seizures in ICU patients. The preliminary results obtained (high Acc, high Sen and low FPR) are promising: (1) to automate the detection of electrographic seizures in the ICU, and (2) for rapid interpretation of cEEG recordings (avoiding the need for laborious manual interpretation), leading to more timely clinical management. Further evaluation of the model on a larger dataset is needed.


Citation:

Habib A, Pham C, Udhayakumar R, Ali E, Thom D, Laing J, Karmakar C, Kwan P, O'Brien T. Deep Learning Model for Detection of Electrographic Seizures from continuous EEG in ICU patients. American Epilepsy Society (AES) Annual Conference 2021, Categories: Neurophysiology, Submission Category: 3. Neurophysiology / 3G. Computational Analysis & Modeling of EEG, Submission ID: 1886502, 22 Nov 2021.

DOI/Link: https://www.aesnet.org/ (https://cms.aesnet.org/)
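
Illustrative sketch (Python) of the event-level scoring rule stated in the Methods: an annotated seizure event counts as detected only if at least 75% of its consecutive segments are classified correctly by the model. The per-segment label encoding (NumPy array, 1 = seizure) and index-based event spans are assumptions for illustration.

import numpy as np

def event_sensitivity(event_spans, pred_segments, min_fraction=0.75):
    """event_spans: list of (start, end) segment indices of annotated seizures.
    pred_segments: per-segment model labels (1 = seizure). An event is detected
    when >= min_fraction of its segments are predicted as seizure."""
    detected = sum(
        (pred_segments[start:end] == 1).mean() >= min_fraction
        for start, end in event_spans
    )
    return detected / len(event_spans) if event_spans else float("nan")

# Example with 60 dummy segments and two annotated events:
preds = np.zeros(60, dtype=int)
preds[12:20] = 1          # 8 of 10 segments of event 1 predicted as seizure
preds[44:50] = 1          # only 6 of 10 segments of event 2 predicted
print(event_sensitivity([(10, 20), (40, 50)], preds))   # 0.5: one of two events detected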