http://47.88.85.238/index.php/soic/issue/feedStatistics, Optimization & Information Computing2025-12-23T12:44:49+08:00David G. Yudavid.iapress@gmail.comOpen Journal Systems<p><em><strong>Statistics, Optimization and Information Computing</strong></em> (SOIC) is an international refereed journal dedicated to the latest advancements in statistics, optimization and their applications in information sciences. Topics of interest include (but are not limited to): </p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap, Resampling methods, Spatial Statistics, Survival Analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in life sciences including biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision Theory, Time series analysis, High-dimensional multivariate integrals, statistical analysis in marketing, business, finance, insurance, economics and social science, etc.</li> </ul> <p> Optimization methods and applications</p> <ul> <li class="show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization, Markov chains, etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming</li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show"><span lang="EN-US">Optimal control: theory and applications</span></li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and machine intelligence</p> <ul> <li class="show">Machine learning, Statistical learning, Deep learning</li> <li class="show">Artificial intelligence, Intelligent computation, Intelligent control and optimization</li> <li class="show">Data mining, Data analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problems and imaging sciences</li> <li class="show">Genetic algorithms, Natural language processing, Expert systems, Robotics, Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul>http://47.88.85.238/index.php/soic/article/view/2699Median Based Unit Weibull Distribution (MBUW): Do the Higher Order Probability Weighted Moments (PWM) Add More Information over the Lower Order PWM in Parameter Estimation? 2025-12-21T12:02:44+08:00Iman Attiaimanattiathesis1972@gmail.com<p>This paper offers an in-depth investigation into the Probability Weighted Moments (PWMs) methodology for estimating parameters of the Median Based Unit Weibull (MBUW) distribution. The author presents a thorough comparison of the commonly employed first-order PWMs against more advanced higher-order PWMs. The analysis highlights the significant benefits associated with adopting these more sophisticated techniques, particularly in terms of accuracy and reliability in parameter estimation.
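To make the quantity under comparison concrete: in the usual convention, the PWMs at issue are moments of the form β_r = E[X F(X)^r]. A minimal sketch of their standard unbiased sample estimator follows; it is a generic illustration, not the MBUW-specific estimating equations derived in the paper.

```python
import numpy as np

def sample_pwm(x, r):
    """Unbiased sample estimate of beta_r = E[X * F(X)^r],
    built from the ordered sample (Landwehr-type estimator)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if r == 0:
        return x.mean()
    i = np.arange(1, n + 1)
    # weight of the i-th order statistic: (i-1)(i-2)...(i-r) / ((n-1)(n-2)...(n-r))
    w = np.ones(n)
    for j in range(r):
        w *= (i - 1 - j) / (n - 1 - j)
    return (w * x).mean()

x = np.random.default_rng(1).uniform(size=500)   # any (0,1)-valued sample
print([sample_pwm(x, r) for r in range(4)])      # b_0..b_3; for U(0,1), beta_r = 1/(r+2)
```

Higher r places more weight on the upper order statistics, which is why higher-order PWMs can carry extra tail information.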
In addition to this comparative analysis, the author derives the asymptotic distribution of the PWM estimator, which provides a theoretical foundation for the results and enhances the robustness of the conclusions. To further illustrate the practical implications of the findings, the author includes a detailed real data analysis that exemplifies the effectiveness of the proposed methodology. Through these examples, the author underscores the relevance of PWMs in real-world applications, demonstrating how this approach can lead to improved parameter estimates when working with the MBUW distribution.</p>2025-12-11T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2802An Innovated G Family: Properties, Characterizations and Risk Analysis under Different Estimation Methods2025-12-21T12:02:45+08:00Mujtaba Hashimmsaeed@kfu.ddu.saG. G. Hamedanigholamhoss.hamedani@marquette.eduMohamed Ibrahimmohamed_ibrahim@du.edu.egAhmad M. AboAlkhairaaboalkhair@kfu.edu.saHaitham M. Yousofhaitham.yousof@fcom.bu.edu.eg<p><span class="fontstyle0">This paper introduces a novel class of continuous probability distributions called the Log-Adjusted Polynomial (LAP) G family, with a focus on the LAP Weibull distribution as a key special case. The proposed family is designed to enhance the flexibility of classical distributions by incorporating additional parameters that control shape, skewness, and tail behavior. The LAP Weibull model is particularly useful for modeling lifetime data, extreme events, and insurance claims characterized by heavy tails and asymmetry. The paper presents the mathematical formulation of the new family, including its cumulative distribution function, probability density function, and hazard rate function. It also explores structural properties such as series expansions and tail behavior. Risk analysis is conducted using advanced risk measures, including Value-at-Risk (VaR), Tail VaR (TVaR), and tail mean-variance (TMVq), under various estimation techniques. Estimation methods considered include maximum likelihood (MLE), Cramér–von Mises (CVM), Anderson–Darling (ADE), and their right-tail and left-tail variants. These methods are compared using both simulated and real-world insurance data to assess their sensitivity to tail events. The performance of each estimator is evaluated in terms of bias, accuracy, and robustness in capturing extreme risks. The LAP Weibull model demonstrates superior performance in fitting heavy-tailed data compared to traditional models. AD2LE emerges as the most risk-sensitive estimator, producing the highest values for all key risk indicators. ADE also performs well, offering a balance between sensitivity and stability. MLE and CVM tend to underestimate tail risks, which could lead to insufficient capital reserves in insurance applications. The study highlights the importance of selecting appropriate estimation techniques based on the specific goals of the risk analysis. With its enhanced flexibility and performance in modeling extreme risks, the LAP Weibull model offers a robust framework for modern risk assessment. The findings support the use of AD2LE or ADE in high-stakes risk management scenarios, especially when dealing with heavy-tailed insurance data. This work contributes to the growing literature on advanced statistical models for actuarial and financial risk analysis.
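For readers unfamiliar with the risk measures named above, the sketch below computes their empirical versions on a loss sample. The sample, confidence level, and heavy-tailed generator are illustrative choices, not values from the paper.

```python
import numpy as np

def var_tvar(losses, q=0.95):
    """Empirical Value-at-Risk and Tail VaR at level q."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, q)          # VaR_q: the q-quantile of the loss distribution
    tvar = losses[losses >= var].mean()   # TVaR_q: mean loss beyond VaR_q
    return var, tvar

claims = np.random.default_rng(7).weibull(0.7, size=10_000) * 1_000  # heavy-ish tail
print(var_tvar(claims, 0.95))
```

Because TVaR averages over the tail beyond VaR, estimators that underfit the tail (as the abstract reports for MLE and CVM) depress both quantities and, in an insurance setting, the implied capital reserve.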
The LAP Weibull model proves particularly useful in capturing the tail behavior of claim distributions, improving the accuracy of risk predictions. The paper provides a solid foundation for future applications of the LAP family in modeling complex real-world phenomena under uncertainty.</span></p>2025-08-29T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2817Hidden Markov Models and Boosting for Robust Time Series Prediction2025-12-21T12:02:46+08:00Md. Shahidul Islamsshahid01921@gmail.comSanjida Kaniz Minhaemdadulh20@gmail.comJohn Lewis Smithbryan.smith.ack@gmail.com<p>This study explores the use of Hidden Markov Models (HMM) combined with a hybrid ensemble method using boosting techniques for the classification of different types of electrocardiogram (ECG) signals, including Normal, Ventricular Tachycardia (VTach), Ventricular Fibrillation (VFib), and Bradycardia. The analysis focuses on leveraging Autocorrelation (AC) and Partial Autocorrelation (PACF) for feature extraction and enhancing the classification performance through a hybrid approach integrating HMMs and boosting. First, wavelet-based filtering was applied to remove noise from the ECG signals, providing cleaner data for subsequent feature extraction. Both Autocorrelation (AC) and Partial Autocorrelation (PACF) were computed for the filtered signals. While AC provided general periodicity information, PACF offered a more precise analysis by isolating direct correlations at each lag, which was especially useful for differentiating irregular rhythms like VTach and VFib. We then implemented a hybrid ensemble method combining Hidden Markov Models (HMMs) with boosting techniques, such as AdaBoost and Gradient Boosting, to improve classification accuracy. The HMMs were used to model the sequential dependencies in the ECG signals, while the boosting algorithms were applied to optimize the performance of the ensemble by weighting and improving weaker classifiers iteratively. The study demonstrates that the combination of HMMs, boosting, and PACF provides a powerful and efficient method for automatic arrhythmia detection, achieving high precision and robustness across different ECG signal types.</p>2025-11-19T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2852Factors Associated with Financial Inclusion in Indonesia Before and During COVID-19: Evidence from Global Findex Data2025-12-21T12:02:47+08:00Puguh Prasetyoputrapprasetyoputra@gmail.comYovita Isnasariyovita.isnasari@gmail.comAri Purwanto Sarwo Prasojoari.prasojo18@gmail.comIwan Hermawannawih09@gmail.com<p>This study examines the factors associated with financial inclusion and the use of financial technology (FinTech) in Indonesia, both before and during the COVID-19 pandemic, using the Global Findex data from 2017 and 2021. Multivariable logistic regression models were fitted to analyze the factors associated with formal account ownership, savings, borrowing, mobile/Internet payments, and mobile money services usage. The results suggest that formal account ownership remained stable, whereas savings and borrowing declined during the pandemic. Education was observed as a variable with a significant correlation with financial inclusion and the use of FinTech. Higher income and mobile phone ownership significantly increased the likelihood of inclusion for all the indicators.
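As an illustration of the multivariable logistic regression setup just described, here is a minimal sketch using the statsmodels formula API. The file name and variable names (account, female, education, income_quintile, mobile_phone) are hypothetical stand-ins for the Findex indicators, not the authors' actual coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("findex_2021.csv")  # hypothetical extract of Global Findex microdata

# Outcome: formal account ownership (0/1); regressors mirror the abstract's factors
model = smf.logit(
    "account ~ female + age + C(education) + C(income_quintile) + mobile_phone",
    data=df,
).fit()

print(model.summary())
print(np.exp(model.params))  # exponentiated coefficients read as odds ratios
```

Fitting the same specification separately to the 2017 and 2021 waves is one simple way to contrast pre-pandemic and pandemic associations.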
Female individuals are more likely than males to own a formal account and to save in one. Moreover, the pandemic accelerated the adoption of digital financial services. Policy recommendations include: 1) strengthening financial and digital literacy programs, especially for underserved groups; 2) expanding affordable digital infrastructure; 3) developing gender-responsive financial products; 4) balancing FinTech innovation with consumer protection; and 5) leveraging public-private partnerships to scale digital payment ecosystems. Future research should examine the long-term impacts on household resilience and explore the behavioral factors influencing inclusion beyond socioeconomic variables.</p>2025-11-17T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2858Assessing the relationship between global health spending and carbon emissions using a gradient-boosting algorithm2025-12-21T12:02:48+08:00Abdelsamiea Abdelsamieataboelenien@imamu.edu.saElsayed Elashkareelashkar@ksu.edu.saMohammad A. Zayedmaazayed@imamu.edu.saMohamed F. Abd El-AalMohammed.fawzy@comm.aru.edu.eg<p>This paper examines the relationship between global health spending, CO2 emissions, the population aged 65 and above, GDP per capita growth, and GDP growth, utilizing a gradient-boosting algorithm. The paper confirms that the population aged 65 and older significantly influences current health expenditures (at 93%), GDP per capita growth (at 3%), and GDP growth and CO2 emissions (at 2% each). Additionally, the paper demonstrates a strong positive relationship between current health expenditures, the population aged 65 and older, and CO2 emissions. Conversely, there is a negative relationship between current health expenditures and both GDP growth and GDP per capita growth. This suggests that the world is optimistic about the transition to clean energy, which could lead to a decline in diseases and potentially reallocate part of health spending to other economic areas.</p>2025-11-06T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2746Record-Based Reliability Analysis of Weibull Models with Textile Applications2025-12-21T12:02:49+08:00Amal S. Hassanbotrosmary338@gmail.comHeba F. Nagybotrosmary338@gmail.comMary B. Abdel-Masehbotrosmary338@gmail.com<p>Severe operational conditions frequently lead to system failure: systems can quickly become unstable and stop functioning as intended when operating at extremely high or low stress levels. This article addresses reliability estimation (R = P(Y < X < Z)), emphasizing the constraint that the strength (X) must exceed the lower stress (Y) while remaining below the upper stress (Z). Assuming independent Weibull distributions for strength and stresses, reliability estimation of R = P(Y < X < Z) from frequentist and Bayesian perspectives utilizing upper record values is investigated. This study develops Bayesian estimators for the reliability (R) using three different loss functions: quadratic, linear exponential, and minimum expected. Independent informative (gamma) and non-informative (uniform) priors are assumed, and the corresponding loss functions are incorporated to derive posterior estimates. Bayesian inference for the reliability parameter (R) is performed using Metropolis–Hastings within a Markov Chain Monte Carlo (MCMC) framework.
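Before the simulation study described next, it may help to see what R = P(Y < X < Z) means computationally. A crude Monte Carlo check under independent Weibull strength and stress variables is sketched below; the shape and scale values are arbitrary illustrations, not the paper's settings, and the paper works with record values rather than complete samples.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# X ~ Weibull(shape, scale): numpy's weibull() draws the unit-scale form
X = 2.0 * rng.weibull(2.5, n)   # strength
Y = 1.0 * rng.weibull(2.0, n)   # lower stress
Z = 3.5 * rng.weibull(3.0, n)   # upper stress

R_hat = np.mean((Y < X) & (X < Z))   # fraction of draws with Y < X < Z
print(R_hat)
```

Estimates of this kind provide a ground truth against which frequentist and Bayesian estimators built from record data can be benchmarked in a simulation study.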
Further, a detailed simulation study is conducted to evaluate the performance of the proposed estimators, with MCMC techniques facilitating the computation of the posterior estimates. Lastly, to validate the proposed methodologies, the reliability estimates are applied to three real jute fiber datasets. Jute fiber, a biodegradable and cost-effective natural material, is examined for its potential in textile applications. The results highlight its favorable mechanical and thermal properties, indicating that jute fiber is a sustainable and efficient alternative for eco-friendly textile materials.</p>2025-12-06T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2885Algorithms for Improving the Designs of Building Universal Kriging Models2025-12-21T12:02:50+08:00Shaymaa M. Younusyounus.altaweel@uomosul.edu.iqYounus Al-Taweelyounus.altaweel@uomosul.edu.iqZainab Abdulateef Rasheedyounus.altaweel@uomosul.edu.iq<p>Kriging models (KMs) have become popular in the analysis of computer experiments (CEs). Kriging models provide fast-running alternative models for computationally demanding computer codes (CCs). To predict the CC response at untested observations using the values at observed points, the design points should be carefully selected. The Latin hypercube design (LHD) is a stratified design that has become popular for building KMs. However, in some cases, additional points may need to be added to the LHD to improve its performance. Several algorithms have been proposed for this purpose. This work aims to present and evaluate the performance of several algorithms for improving LHD performance. Therefore, several KMs are constructed based on various algorithms for generating the design points, and their performance is compared. We propose some measures for comparing the performance of KMs applied to real CCs.</p>2025-10-23T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2908Using Wavelet Estimation of the Weighted Exponential Regression Model2025-12-21T12:02:51+08:00Aseel Mahmood Shakiraseel.shakir@aliraqia.edu.iqHala Kadhum Obeadhala.obead@aliraqia.edu.iqSaifaldin Hashim Kamardr.saifhkamar@gmail.com<p>This paper aims to estimate the parameters of the weighted exponential regression model using the Maximum Likelihood, Jackknife, Wavelet, and Haar-matrix Modified Jackknife methods to predict monthly mortality rates for leukemia patients in Iraq. We compared these methods on data collected from July 1, 2020, to February 18, 2023. Simulation was also used to generate three random samples with sizes of 8, 16, and 32 to represent the collected data, using the MATLAB program. The results indicated the following: the Haar-matrix Modified Jackknife method showed the smallest Mean Absolute Percentage Error compared to the Maximum Likelihood, Jackknife, and Wavelet methods for the weighted exponential regression model. The results also showed the stability of the Haar-matrix Modified Jackknife method and the Wavelet method under both increasing and decreasing sample sizes; that is, the performance of both methods is not affected by the sample size.
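The Haar matrix underlying the modified Jackknife variant above can be built recursively. The sketch below shows the standard orthonormal Haar construction and a simple threshold-denoising pass; it is illustrative only and is not the authors' full estimation pipeline, and the data vector is made up.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix for n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                   # averaging (approximation) rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])  # differencing (detail) rows
    return np.vstack([top, bottom]) / np.sqrt(2.0)

H = haar_matrix(8)
y = np.array([2.1, 2.0, 2.3, 5.9, 6.1, 6.0, 2.2, 2.1])
coeffs = H @ y                        # move to the wavelet domain
coeffs[np.abs(coeffs) < 0.2] = 0.0    # zero small detail coefficients (denoising)
print(H.T @ coeffs)                   # reconstruct; H is orthogonal, so H^{-1} = H^T
```

The sample sizes 8, 16, and 32 used in the paper's simulations are exactly the powers of two this construction requires.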
Regarding the real-data results, leukemia mortality rates were forecast for eight months ahead, and the forecasts showed very low mortality rates, close to zero.</p>2025-11-12T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2978The M/M/1/K_b Interdependent Queueing Model Includes Reverse Balking with Feedback and Controllable Arrival Rates2025-12-21T12:11:52+08:00Bebittovimalan Abebittovimalan@gmail.comThiagarajan Mthiagarajan_ma1@mail.sjctni.edu<p>Objective: Some research in the queueing theory literature describes batch-service systems rather than purely individual, one-on-one service. This study introduces controllable arrival rates and interdependence between the arrival and service processes in such a system, derives the steady-state probabilities and characteristics of the queueing system, and validates the results obtained.<br>Methods: Under the standard Poisson assumption (one arrival per Poisson event), the input is regulated by switching between faster and slower arrival rates. Service is reliable and is provided by a single server throughout. Feedback means that a customer may re-enter the queue for further service. Batch service begins once the number of customers reaches or exceeds the reserved capacity. All steady-state probabilities are found by a recursive technique.<br>Findings: For our M/M/1/K_b model with feedback, the steady-state properties and solutions are determined and examined. Reneging and feedback customers occur with probabilities p_1 and p_2, respectively. The expected number of customers and the expected waiting time depend on the interdependence parameter, the service rate, and the faster and slower arrival rates. All outcomes have been verified against each of these criteria.<br>Novelty: Some works exist on feedback queueing systems, but this study offers a new approach to finding the best results for the required model with controllable arrival rates together with mutual dependence between the arrival and service processes.</p>2025-10-13T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2985Statistical and ANN-Based Modeling for Desertification Risk Prediction in Semi-Arid Regions: A Case Study of Nineveh Governorate2025-12-21T12:02:53+08:00Ziadoon Mohand Khaleelziadoon.khaleel@uomosul.edu.iqSafa Jawad Abedzekh97@gmail.comJalal Abdulkareem Sultanjalalstat2011@gmail.comNoor Marwan Ahmeednoor.marwan@uomosul.edu.iq<p>Desertification and land degradation threaten food, water, and livelihood security across Iraq’s semi-arid north. We develop a station-based, hybrid framework for operational desertification-risk assessment in Nineveh Governorate using multi-decadal observations from Mosul, Tal Afar, and Rabiya. The approach couples (i) distribution-free trend diagnostics (Mann–Kendall with Sen’s slope and persistence control via trend-free pre-whitening and effective-sample-size corrections) with (ii) an interpretable composite Desertification Risk Index (DRI) built from directionally normalized indicators (temperature, humidity, wind, sunshine, rainfall) and objective CRITIC weights.
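The CRITIC weighting scheme named above is compact enough to show in full: each indicator's weight combines its standard deviation (contrast intensity) with its conflict with the other indicators. A minimal sketch, assuming the indicator matrix is already normalized to a common direction as the abstract describes:

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weights.
    X: (n_observations, n_indicators), normalized to [0, 1], same direction."""
    sigma = X.std(axis=0, ddof=1)              # contrast intensity of each indicator
    R = np.corrcoef(X, rowvar=False)           # pairwise indicator correlations
    conflict = (1.0 - R).sum(axis=0)           # disagreement with the other indicators
    C = sigma * conflict                       # information carried by each indicator
    return C / C.sum()                         # normalize to weights summing to 1

X = np.random.default_rng(0).uniform(size=(40, 5))  # e.g. 40 years x 5 climate indicators
print(critic_weights(X).round(3))
```

Indicators that vary strongly and are weakly correlated with the rest receive the largest weights, which is what makes the resulting DRI "objective" in the CRITIC sense.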
For prediction, we evaluate horizon-explicit forecasts of DRI at 1, 3, 5, and 10 years using leakage-free rolling-origin splits, training-only transformations, and chronological refitting. Baselines (Persistence, Climatology) are compared with ARIMA, artificial neural networks (ANN), Random Forest (RF), and XGBoost using RMSE/MAE/R² and skill vs Persistence; pairwise differences are tested with Diebold–Mariano (squared-error, Newey–West). Across stations and horizons, most settings exhibit positive skill relative to Persistence. RF dominates at medium–long horizons—especially in Tal Afar and Rabiya (skill ≈0.70–0.82 at h=10)—while ANN is competitive at short leads (e.g., Tal Afar h=1 ≈0.36; Mosul h=3 ≈0.35). ARIMA is only competitive at Mosul (h=1). However, no comparisons are statistically significant at p<0.05, reflecting small effective samples and high inter-annual variability. The framework provides reproducible diagnostics and horizon-aware outlooks that can augment persistence/climatology in early-warning workflows. Future extensions should incorporate remote-sensing predictors, homogenization, and probabilistic targets to strengthen both predictive utility and statistical confidence.</p>2025-10-31T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2990Parameter Estimation of a Non-Homogeneous Inverse Rayleigh Process Using Classical, Metaheuristic, and Deep Learning Approaches 2025-12-21T12:02:54+08:00Noora T. Abduirazzaqmhmdmath@hotmail.comMaryam Qanbara33393765@gmail.comRasha Al-Molamhmdmath80@gmail.comMohammad Tashtoushtashtoushzz@su.edu.om<p>A central problem in reliability engineering, survival analysis, and applied statistics is the accurate estimation of lifetime distributions. The inverse Rayleigh distribution is especially relevant where non-monotonic failure rates are considered, and classical estimation tools are frequently unable to give good estimates in nonlinear, complicated situations. This paper presents comparative research on four estimation methods, namely Ordinary Least Squares (OLS), Maximum Likelihood Estimation (MLE), Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) networks, for estimating the parameters of the inverse Rayleigh process. The performance of these methods was compared in terms of the RMSE and AIC criteria using both simulated data obtained through designed experiments and real-world data. The findings show the superiority of intelligent computational methods over traditional methods, with SVM the most accurate and the most robust under all conditions. Estimation performance was also significantly higher for LSTM than for OLS and MLE, though slightly lower than for SVM. These results indicate the applied benefits of machine-learning-based methods in dealing with the nonlinear and complex forms of reliability data. The present study offers a systematic comparative analysis of traditional and intelligent estimation approaches and demonstrates that applying machine learning to statistical modelling can improve parameter estimation to a significant degree. This article contributes to the research by showing that SVM performs better in estimating the inverse Rayleigh parameters, and by highlighting the significance of hybrid statistical-computational methods for the reliability of real-world applications.
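For context on the classical baseline: under the common parametrization F(x; θ) = exp(−θ/x²), the inverse Rayleigh MLE has a closed form, θ̂ = n / Σ xᵢ⁻². The sketch below verifies this textbook estimator by simulation; it does not reproduce the paper's non-homogeneous process or its SVM/LSTM pipelines.

```python
import numpy as np

def inv_rayleigh_mle(x):
    """MLE of theta for F(x) = exp(-theta / x^2), x > 0.
    Setting d/dtheta [n*log(theta) - theta * sum(1/x^2)] = 0 gives the closed form."""
    x = np.asarray(x, dtype=float)
    return len(x) / np.sum(1.0 / x**2)

# sanity check via inverse-transform sampling: X = sqrt(-theta / log(U))
rng = np.random.default_rng(0)
theta = 2.5
x = np.sqrt(-theta / np.log(rng.uniform(size=20_000)))
print(inv_rayleigh_mle(x))   # should be close to 2.5
```

It is precisely when such closed forms break down (nonlinearity, process inhomogeneity) that the learned estimators compared in the paper become attractive.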
Future directions include experimenting with additional advanced machine learning architectures, hybrid estimation systems, and Bayesian approaches in order to achieve further improvements in accuracy and understanding.</p>2025-11-04T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3011Boosting Mixed-Effects Models with SMOTE: Insights from Java’s Human Development Index2025-12-21T12:02:55+08:00Dimas Anggaraanggaradimas@apps.ipb.ac.idAnang Kurniaanangk@apps.ipb.ac.idKhairil Anwar Notodiputrokhairil@apps.ipb.ac.idIndahwati Indahwatiindahwati@apps.ipb.ac.id<p>This study aims to evaluate the performance of various regression models on unbalanced and clustered data, using the 2018 Human Development Index (HDI) data of regencies in Java Island, Indonesia, as a case study. The models assessed include Linear Mixed Models (LMM), Generalized Estimating Equations (GEE), Mixed Effects Regression Trees (MERT), and Gaussian Copula Marginal Regression (GCMR). These models share a common foundation in incorporating random effects, allowing for a fair and systematic comparison. Model performance was evaluated using two key metrics: Median Absolute Error (MedAE) and Root Mean Square Error (RMSE), applied to both the original dataset and an oversampled version generated using the Synthetic Minority Oversampling Technique (SMOTE). The results indicate that applying SMOTE consistently improves model accuracy. MERT achieved the lowest MedAE across both datasets, demonstrating superior capability in minimizing median prediction errors. Meanwhile, GCMR yielded the best RMSE on the original data, highlighting its robustness in handling complex data structures without requiring oversampling. Residual analysis using boxplots further supports these findings, showing that SMOTE effectively reduces residual variability and enhances model stability. Among the evaluated models, MERT exhibited the most consistent performance overall. These findings underscore the utility of oversampling techniques such as SMOTE in improving regression model performance on unbalanced and hierarchically structured data. Furthermore, both MERT and GCMR are identified as strong candidates for such analytical scenarios, contributing valuable insights toward developing more robust and accurate predictive models in data science and applied statistics.</p>2025-10-30T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3016Weighted Least Support Vector Machine for Survival Analysis2025-12-23T01:01:54+08:00Rahmawatirahmawati.mahuseng@gmail.comSri Astuti Thamrintuti@unhas.ac.idArmin Lawiarmin@unhas.ac.idJeffry Kusumajeffry.kusuma@unhas.ac.id<p><strong>Background</strong>: The increasing complexity and volume of data across various disciplines have encouraged the use of machine learning methods, including in survival analysis. Given the large percentage of censored data in survival datasets, a methodological technique that can generate more precise survival probability forecasts is required. This study aims to advance survival analysis by applying the Weighted Least Squares Support Vector Machine, using a weighting approach to manage the information imbalance between censored observations and event occurrences.
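One concrete way to realize a Kaplan–Meier-based weighting of this kind is sketched below. The exact weight formulas belong to the paper's Methods section, so the event and censoring weights here (1 − S(t) for events, a constant 0.5 for censored cases) are illustrative placeholders only.

```python
import numpy as np

def km_survival(times, events):
    """Product-limit estimate S(t_i) at each subject's own time (ties ignored)."""
    order = np.argsort(times)
    d = np.asarray(events)[order]
    n = len(times)
    at_risk = n - np.arange(n)                       # subjects still at risk
    S_sorted = np.cumprod(np.where(d == 1, 1.0 - 1.0 / at_risk, 1.0))
    S = np.empty(n)
    S[order] = S_sorted
    return S

times = np.array([5.0, 8.0, 12.0, 3.0, 9.0])
events = np.array([1, 0, 1, 1, 0])                   # 1 = event, 0 = censored
S = km_survival(times, events)
w = np.where(events == 1, 1.0 - S, 0.5)              # assumed weighting scheme
print(w)
```

These weights then multiply the squared residuals in the least-squares SVM objective, so observed events and censored records contribute unequally to the fit.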
This strategy can yield a prognostic index that is easily categorized into low-risk and high-risk groups.<br><strong>Methods</strong>: This study proposes the Survival Weighted Least Squares Support Vector Machine (Surv-WLSSVM) model through the integration of a weighting strategy based on the Kaplan–Meier estimator. Data with events are assigned weights that consider the value of the survival function, while censored data are given constant weights. Surv-WLSSVM was applied to both simulated and real datasets, and the results were compared with the unweighted method, namely the Survival Least Squares Support Vector Machine (Surv-LSSVM). The simulation scenarios included the complexity of variable numbers, data distribution, sample size, and censoring percentage. The real datasets used in this study consist of Breastfeeding, PBC, and Bone-Marrow data. Parameter tuning using Particle Swarm Optimization (PSO) was performed to enhance the performance of both the Surv-LSSVM and Surv-WLSSVM models. Model performance was evaluated using the concordance index (c-index), where a higher c-index indicates a better model.<br><strong>Results</strong>: In every simulated data setting, the Surv-WLSSVM model consistently showed better performance. Similarly, on real datasets, this model outperformed the alternative and produced more diverse prognostic indices, facilitating the categorization of individuals into low-risk and high-risk groups.<br><strong>Conclusion</strong>: The Surv-WLSSVM represents a significant advancement in SVM-based survival modelling. This approach demonstrates greater reliability and adaptability in handling the complexity of modern survival data.</p>2025-12-06T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3024Bayesian-Optimized CLAHE for Enhanced Drowsiness Detection in Low-Light Conditions Using Time-Distributed MobileNetV2-GRU Architecture2025-12-21T12:02:58+08:00Farrikh Alzamialzami@dsn.dinus.ac.idMuhammad Naufalm.naufal@dsn.dinus.ac.idRuri Suko Basukiruri.basuki@dsn.dinus.ac.idSri Winarnosri.winarno@dsn.dinus.ac.idHarun Al Aziesharun.alazies@dsn.dinus.ac.idSyaheerah Lebai Lutfis.lutfi@squ.edu.omRivaldo Mersis Briliantorivaldomersisb@pusan.ac.kr<p>Driver drowsiness remains a critical factor in road traffic accidents, particularly under low-light conditions where conventional computer vision approaches struggle with poor image quality. This study presents a novel approach combining Bayesian-optimized Contrast Limited Adaptive Histogram Equalization (CLAHE) with a Time-Distributed MobileNetV2-GRU architecture for robust drowsiness detection in challenging lighting conditions. Using the NITYMED dataset containing 128 video sequences, this paper systematically compares three preprocessing strategies: original frames, fixed-parameter CLAHE (clip limit=2.0), and Bayesian-optimized CLAHE. The methodology employs Bayesian Optimization to adaptively determine optimal CLAHE parameters based on Perceptual Image Quality Evaluator (PIQE) scores, transforming preprocessing into a task-aware component. Statistical analysis using the Wilcoxon Signed-Rank Test demonstrates that the Bayesian-optimized approach significantly outperforms baseline methods, achieving a mean accuracy of 93.77% ± 5.21%, F1-score of 93.77% ± 5.22%, and AUC of 97.85% ± 1.45% across 10-fold cross-validation, with peak performance reaching 98.11% accuracy under the optimal configuration (p-values < 0.05 for accuracy and F1-score comparisons).
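The optimization loop just described can be sketched as follows, with two stated assumptions: OpenCV ships no PIQE implementation, so a crude local-contrast proxy stands in for the paper's PIQE score, and scikit-optimize's `gp_minimize` plays the role of the Bayesian optimizer. The file name and search ranges are illustrative.

```python
import cv2
from skopt import gp_minimize
from skopt.space import Real, Integer

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical video frame

def objective(params):
    clip, tiles = params
    clahe = cv2.createCLAHE(clipLimit=float(clip),
                            tileGridSize=(int(tiles), int(tiles)))
    enhanced = clahe.apply(gray)
    # stand-in for PIQE (lower = better): reward higher local contrast
    return -cv2.Laplacian(enhanced, cv2.CV_64F).var()

result = gp_minimize(objective,
                     [Real(1.0, 8.0, name="clip_limit"),
                      Integer(4, 16, name="tile_grid")],
                     n_calls=25, random_state=0)
print(result.x)   # per-frame (clip limit, tile grid) chosen by the surrogate model
```

This is what "task-aware preprocessing" means operationally: the CLAHE parameters become decision variables scored by an image-quality objective rather than fixed constants.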
The integration of lightweight MobileNetV2 with GRU enables efficient temporal modeling while maintaining computational efficiency with only 62,449 trainable parameters. Results indicate that adaptive preprocessing significantly improves feature visibility and model convergence, demonstrating practical viability for deployment in Advanced Driver Assistance Systems (ADAS) when implemented with periodic optimization strategies.</p>2025-10-22T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3032From Intelligence to Trust: Evaluating AI-Powered Service Quality for User Satisfaction and Continuance in mHealth2025-12-21T12:03:00+08:00Majdi Abdellatiefm.mohammed@arabou.edu.saRaed AloatibiAlhafi@su.edu.sa<p>Mobile health (mHealth) applications are increasingly integrating artificial intelligence (AI), transforming digital health technologies by making them more convenient, accessible, and personalized. This research addresses the gap in understanding how AI functionalities influence user behavior, guiding the design of effective mHealth solutions. This study examines the correlation between AI-powered service quality, user satisfaction, and continuous usage, using the Sehhaty app in Saudi Arabia as a case study. We collected data via an online survey and analyzed it using Partial Least Squares Structural Equation Modeling (PLS-SEM) to test seven hypotheses. Results revealed that system quality significantly enhances both user satisfaction (β=0.462, p<0.05) and continuous usage (β=0.344, p<0.05). Interaction quality strongly influences user satisfaction (β=0.753, p<0.05) but not continued usage (β=0.165, p>0.05), while information quality negatively affects satisfaction (β=-0.324, p<0.05) and does not directly impact continued usage (β=-0.216, p>0.05). User satisfaction emerged as a crucial predictor of continued usage (β=0.587, p<0.05). These findings emphasize the need for user-centric design in mHealth apps to enhance satisfaction and sustain long-term usage. For developers, healthcare organizations, and policymakers, this research underscores the importance of balancing system efficiency, interaction quality, and information relevance to maximize the potential of AI-powered mHealth solutions. Further research is needed to explore how these dimensions collectively shape long-term usage.</p>2025-11-12T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3066Understanding the Spatial Distribution of Stunting in East Java, Indonesia: A Comparison of GWR and MS-GWR Models2025-12-21T12:03:00+08:00Dwi Rantinidwi.rantini@ftmm.unair.ac.idShofia Ishma Najiyyashofia.ishma.najiyya-2021@ftmm.unair.ac.idMohammad Ghanimohammad.ghani@ftmm.unair.ac.idSeptia Devi Prihastuti Yasmirullahseptia.devi@ftmm.unair.ac.idIndah Fahmiyahindah.fahmiyah@ftmm.unair.ac.idArip Ramadanaripramadan@telkomuniversity.ac.idFazidah Othman fazidah@um.edu.myNajma Attaqiya Alyanajma.attaqiya.alya-2020@ftmm.unair.ac.id<p>Stunting is a growth impairment condition in children under five years old, resulting from chronic malnutrition and repeated infections, which causes them to be shorter than expected for their age. East Java is one of twelve priority provinces, with a stunting prevalence of 17.7% in 2023. Accurate identification of the factors influencing stunting is essential to support effective and targeted interventions.
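The distinction driving the GWR versus MS-GWR comparison that follows is how observations are weighted around each location. A minimal Gaussian-kernel weight function is sketched below; plain GWR applies one bandwidth to all covariates, while MS-GWR would call such a function with a separate, covariate-specific bandwidth for each regressor. Coordinates and the bandwidth value are illustrative.

```python
import numpy as np

def gwr_weights(coords, i, bandwidth):
    """Gaussian kernel weights of all locations relative to location i."""
    d = np.linalg.norm(coords - coords[i], axis=1)   # distances to location i
    return np.exp(-0.5 * (d / bandwidth) ** 2)       # nearby locations weigh more

coords = np.random.default_rng(3).uniform(0, 100, size=(38, 2))  # e.g. 38 districts
print(gwr_weights(coords, 0, bandwidth=25.0).round(3))
```

An adaptive variant, as used in the paper's best model, fixes the number of effective neighbors rather than the distance, letting the kernel widen in sparse regions.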
Given the spatial variability in these factors, conventional regression models such as Ordinary Least Squares (OLS) are inadequate. Geographically Weighted Regression (GWR) addresses this by allowing local variation, yet it assumes a uniform spatial scale across variables. This study employs the Multiscale Geographically Weighted Regression (MS-GWR) model, which enables each explanatory variable to operate at its own optimal spatial scale. The results show that MS-GWR with an adaptive Gaussian weighting function provides the best fit, with an AICc of 67.7426 and an R² of 0.79. Seven variable groups significantly influence stunting, including exclusive breastfeeding, early initiation of breastfeeding (EIB), and upper respiratory tract infections (URTIs), as well as combinations of these factors. These findings highlight the importance of formulating location-specific and context-sensitive policies that reflect the dominant characteristics of each region to effectively and sustainably accelerate stunting reduction.</p>2025-11-10T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3144Comparative Evaluation of Classical Robust and Wavelet Enhanced Beta Regression Models for Proportional Data2025-12-21T12:03:02+08:00Mahmood M TaherMahmood81_tahr@uomosul.edu.iqTalal Abd Al-Razzaq Saead Al-Hassomahmood81_tahr@uomosul.edu.iqTaha Hussein Alitaha.ali@su.edu.krd<p>This paper presents a comprehensive comparative evaluation of conventional, robust, and wavelet-augmented beta regression models for modeling continuous proportions. To preprocess the response variable, four discrete wavelet transforms, Daubechies 4, Coiflet 4, Symlet 4, and discrete Meyer (Dmey), were applied to remove noise and enhance model robustness. Performance was evaluated by conducting extensive Monte Carlo simulations using different sample sizes and numbers of predictors, where outliers were artificially incorporated to mimic the contamination found in real data. The root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R^2) were used as measures of model performance. Results demonstrated that wavelet-enhanced models surpassed conventional and robust beta regression throughout, with the Coiflet 4 and Symlet 4 filters producing the highest predictive precision and resilience. The wavelet preprocessing effectively eliminated noise and outlier influences, producing more precise and smoother forecasts. The developed models were also applied to a real body composition data set and were shown to replicate the simulation results as well as demonstrate real-world utility. This combined strategy underscores the necessity of combining wavelet signal processing with robust regression procedures for better analysis of bounded continuous data with outliers.</p>2025-12-16T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3230Early Detection of Insurance Fraud: Integrating Temporal Patterns into Risk Stratification Models2025-12-21T12:03:04+08:00Eslam Abdelhakim Seyamislam2006siam@yahoo.comGaber Sallam Salem Abdallajssabdullah@imamu.edu.sa<p>Insurance fraud carries hefty economic burdens worldwide, prompting insurers to create more advanced detection capabilities that can transcend the weaknesses of static, traditional red-flag systems.
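The temporal features at the heart of the study that continues below reduce to simple date arithmetic. A sketch with hypothetical column names (the dataset's actual schema is not given in the abstract):

```python
import pandas as pd

claims = pd.read_csv("claims.csv",
                     parse_dates=["policy_start", "accident_date", "claim_date"])

# Binary early-timing indicators: events within 15 days of policy onset
claims["early_accident"] = (claims["accident_date"]
                            - claims["policy_start"]).dt.days <= 15
claims["early_claim"] = (claims["claim_date"]
                         - claims["policy_start"]).dt.days <= 15
```

Flags like these can then enter the final stage of a hierarchical logistic model, after demographic, policy, and accident blocks, exactly as the abstract describes.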
Though many risk variables have been studied with machine learning, the temporal aspect of claims—the timing of accidents and claim submissions with respect to policy onset—remains largely unexamined yet potentially offers high predictive leverage. In this research, we examine the potential of temporal patterns as leading indicators in the early identification of automobile insurance fraud. Our main goal is to create and verify a powerful statistical model that systematically isolates and quantifies the predictive strength of early-reporting behavior while also controlling for a large suite of well-known risk factors. A hierarchical logistic modeling structure was used with a large sample of 15,420 auto claims, including 923 confirmed instances of fraud. Demographic, policy, and accident variables were added gradually in successive models prior to the final inclusion of binary early-timing indicators (accidents and claims reported in the first 15 days after policy onset). The resulting final model showed excellent discrimination and achieved an Area Under the Receiver Operating Characteristic Curve (AUC) value of 0.800. We found that while policy characteristics (e.g., all-perils coverage) and accident conditions (policyholder at fault, OR=14.2) were the most salient predictors, early-reporting temporal patterns were also directionally significant determinants of increased fraud risk. From an operational view, the model shows substantial efficiency benefits, identifying 85.8% of all fraudulent instances within the top 40% of claims ranked by the resulting risk score. These findings point to the benefit of incorporating temporal analytics into fraud detection programs, enabling a move from reactive investigation processes toward proactive, data-driven risk stratification.</p>2025-12-12T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2569Kernel ridge regression improving based on golden eagle optimization algorithm for multi-class classification2025-12-21T12:03:06+08:00Shaimaa Mahmoodshaimaa.waleed@uomosul.edu.iqZakariya Algamalzakariya.algamal@uomosul.edu.iq<p>Kernel Ridge Regression combines the principles of supervised machine learning and ridge regression by employing the kernel trick. This approach is particularly effective for regression problems with non-linear relationships between inputs and outputs. The kernel trick allows Kernel Ridge Regression (KRR) to learn non-linear functions in a high-dimensional space while retaining ridge regression's regularization techniques. The success of KRR depends on the hyper-parameter settings, which determine the type of kernel used. Current methods for determining hyper-parameter values encounter three primary challenges: high computational costs, large memory demands, and low accuracy. This research introduces a significant improvement to the golden eagle optimization framework by incorporating elite opposition-based learning (EOBL) to enhance population diversity in the search space. We apply this strategy to efficiently select optimal hyper-parameters. Combining EOBL with KRR can lead to improved predictive accuracy.
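For reference, once its hyper-parameters are fixed, KRR has a closed-form fit, which is exactly why the hyper-parameter search dominates the cost. A minimal RBF-kernel sketch follows; gamma and lam are the kind of values a metaheuristic such as the one proposed here would search over, and the toy data are made up.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam, gamma):
    """Solve (K + lam*I) alpha = y, the KRR normal equations."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
alpha = krr_fit(X, y, lam=1e-2, gamma=0.5)
print(krr_predict(X, alpha, X[:5], gamma=0.5))
```

The n × n solve makes each hyper-parameter evaluation expensive, which motivates sample-efficient search strategies over (lam, gamma) and the kernel choice.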
By selecting elite solutions and incorporating opposition-based methodologies, the model can circumvent local optima and broaden the range of potential solutions, leading to improved results, especially in complex datasets. The proposed enhancement to Kernel Ridge Regression was evaluated on ten publicly available multi-class datasets to demonstrate its effectiveness. The results from various evaluation criteria showed that the proposed enhancement achieved superior classification performance compared to all baseline techniques.</p>2025-07-25T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2759$b$-Local Irregular Chromatic Number of Graphs2025-12-21T12:03:08+08:00Arika Indah Kristianaarika.fkip@unej.ac.idSharifah Kartini Said Husainkartini@upm.edu.myN Mohanapriyan.mohanamaths@gmail.comRidho Alfarisialfarisi.fkip@unej.ac.idDafikd.dafik@unej.ac.idWitriany BasriWitriany@upm.edu.myV. Ponsathyaponsathya@gmail.com<p>In this paper, we study a new notion of graph coloring, namely $b$-local irregularity coloring. Suppose $l : V(G) \rightarrow \{1,2, \ldots ,k\}$ is a vertex irregular $k$-labeling and $w : V(G) \rightarrow N$ is the induced weight function, where $w(u) = \sum_{v \in N(u)} l(v)$. If every color class has a representative adjacent to at least one vertex in each of the other color classes, then $l$ is a $b$-local irregularity coloring. The $b$-local irregular chromatic number, denoted by $\chi_{b-lis}(G)$, is the largest $k$ such that $G$ admits a $b$-local irregularity coloring. In this paper, we study the $b$-local irregular chromatic number of several families of graphs, namely path, cycle, star, friendship, complete, complete bipartite, and wheel graphs.</p>2025-10-26T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2807A Novel Hybrid Conjugate Gradient Algorithm for Solving Unconstrained Optimization Problems2025-12-21T12:03:09+08:00Romaissa Mellalromaissa.mellal@yahoo.frNabil Sellamisellaminab@yahoo.fr<div>We introduce a novel hybrid conjugate gradient method for unconstrained optimization, combining the AlBayati-AlAssady and Wei-Yao-Liu approaches, where the convex parameter is determined using the conjugacy condition.</div> <div>Through rigorous theoretical analysis, we establish that the proposed method guarantees sufficient descent properties and achieves global convergence under the strong Wolfe conditions. Using the performance profile of Dolan and Moré, we confirm that our method, denoted RN, consistently outperforms both classical (HS, FR, PRP, and DY) and hybrid (BAFR and BADY) CG methods, particularly for large-scale problems. </div>2025-10-15T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2819On the Wagstaff prime numbers in k-Fibonacci sequences2025-12-21T12:03:10+08:00Lokmane Rezaiguialokman.rez123@gmail.comHaitham Qawaqnehh.alqawaqneh@zuj.edu.joMohammed Salah Abdelouahabm.abdelouahab@centre-univ-mila.dz<p>A Wagstaff prime is a prime number that can be written in a special exponential form involving powers of two. For any integer $k$ greater than or equal to two, the so-called $k$-generalized Fibonacci sequence is a linear recurrence sequence in which each term is obtained by adding together the preceding $k$ terms, beginning with a fixed set of initial values. In this paper, we prove that the number three is the only Wagstaff prime that appears in any of these generalized Fibonacci sequences.
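The statement is easy to probe computationally for small cases. The sketch below cross-checks small Wagstaff primes, numbers of the form (2^p + 1)/3 that are prime, against initial segments of the k-generalized Fibonacci sequences, using the usual 0, ..., 0, 1 initial values. This is only a finite sanity check, not a substitute for the proof.

```python
from sympy import isprime

def k_fibonacci(k, count):
    """k-generalized Fibonacci numbers: k-1 zeros, then 1, then sums of the last k."""
    seq = [0] * (k - 1) + [1]
    while len(seq) < count:
        seq.append(sum(seq[-k:]))
    return seq

wagstaff = {(2**p + 1) // 3 for p in range(3, 130)
            if isprime(p) and isprime((2**p + 1) // 3)}

for k in range(2, 10):
    hits = sorted(wagstaff & set(k_fibonacci(k, 400)))
    print(k, hits)   # only k = 2 yields a hit: 3 = (2^3 + 1)/3, matching the theorem
```

The paper's actual argument, as the next sentence explains, replaces this finite search with effective bounds that rule out all remaining cases.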
Our proof makes use of lower bounds for linear forms in logarithms of algebraic numbers and a refined version of the Baker–Davenport reduction method, originally developed by Dujella and Pethő.</p>2025-10-13T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2838An Energy Valley Optimizer Approach for Solving the Modified Quadratic Bounded Knapsack Problem with Multiple Constraints2025-12-21T12:03:10+08:00Azka Hurin ‘Iinazkahurin18@gmail.comAgustina Pradjaningsihagustina.fmipa@unej.ac.idM. Ziaul Arifziaul.fmipa@unej.ac.idApriani Soepardiapriani.soepardi@upnyk.ac.idDiva Rafifah Mutiara Muhammaddivamutiara@gmail.com<p>The Modified Quadratic Bounded Knapsack Problem with Multiple Constraints (MQBKMC) represents a challenging combinatorial optimization problem with considerable importance in practical domains such as inventory management and logistics. This study investigates the performance and parameter sensitivity of the Energy Valley Optimizer (EVO) algorithm in solving MQBKMC. Specifically, we examine the effects of varying critical algorithm parameters, including the maximum number of function evaluations (<em>MaxFes</em>) and the number of particles (<em>nParticle</em>), on the quality of obtained solutions. Experimental results reveal that increasing MaxFes consistently leads to improved solution quality, underscoring the significance of extended exploration in facilitating algorithm convergence. In contrast, increasing the number of particles does not necessarily yield performance gains and instead significantly elevates computational demands. These findings provide practical insights into the optimal parameterization of EVO, particularly beneficial for applications that require efficient handling of high-dimensional, multi-constrained optimization problems. Overall, the EVO algorithm demonstrates promising efficacy and robustness for effectively addressing MQBKMC.</p>2025-12-12T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2855Intelligent Energy Scheduling for distributed Duhok Polytechnic University: A Chimpanzee Optimization Approach for Load Reduction2025-12-21T12:03:12+08:00Hayveen Saleem Sadiqhayveen.sadiq@gmail.comMohammed A.M. Sadeeqhayveen.sadiq@dpu.edu.krd<p>Universities operate some of the most energy-intensive real estate on any public grid, yet their predictable timetables and centralized governance make them ideal testbeds for advanced demand-side control. This paper presents an Intelligent Energy Scheduling (IES) platform deployed across two geographically separated laboratories of Duhok Polytechnic University (DPU-ALDOSKI). Thirty-second electrical measurements are streamed to a nightly pipeline that (i) cleans and aggregates the data into hourly features, (ii) issues 24-step load forecasts with a seasonal ARIMA model, and (iii) converts academic timetables, public holidays and device-specific weekends into a binary permission mask. The resulting day-ahead scheduling task is solved by a binary Chimpanzee Optimization Algorithm (ChOA) that manipulates a 48-gene on/off chromosome while penalizing unmet demand and excessive switching. Over an eleven-month evaluation horizon (13 Feb 2022 – 19 Jan 2023), the scheduler reduced Total Consumed Power by 24%, avoiding 80 MWh of electricity and 46 t CO₂-e without any hardware retrofits.
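To make the optimization target concrete: each candidate schedule is a 48-gene half-hourly on/off vector scored against forecast demand and the permission mask. The penalty weights and the exact form of the cost below are assumptions for illustration; the paper's actual objective lives in its accompanying repository.

```python
import numpy as np

def fitness(genes, demand, permission, w_unmet=5.0, w_switch=0.1):
    """Cost of one day-ahead schedule (lower is better).
    genes: 48 on/off decisions; demand: forecast kWh per half-hour;
    permission: binary mask from timetables/holidays (1 = may run)."""
    sched = genes * permission                              # forbidden slots forced off
    energy = (sched * demand).sum()                         # energy actually consumed
    unmet = (demand * (1 - sched))[permission == 1].sum()   # demand denied in allowed slots
    switching = np.abs(np.diff(sched)).sum()                # penalize relay wear
    return energy + w_unmet * unmet + w_switch * switching

rng = np.random.default_rng(0)
genes = rng.integers(0, 2, 48)
demand = rng.uniform(0.2, 2.0, 48)
permission = np.r_[np.zeros(14), np.ones(22), np.zeros(12)].astype(int)  # e.g. 07:00-18:00
print(fitness(genes, demand, permission))
```

A binary ChOA then evolves a population of such 48-gene chromosomes, trading energy savings against unmet demand and switching churn.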
Paired t-tests and Wilcoxon–Mann–Whitney tests returned p < 10⁻⁴ for every monitored variable, confirming statistical significance, while the optimizer’s runtime averaged ≈ 180 ms per device per day, well within the one-minute nightly budget. An open-source repository containing anonymized data, configuration files and turnkey code accompanies the paper, providing a reproducible benchmark for future campus-scale demand-response research and demonstrating the practical viability of ChOA-driven scheduling in a live institutional setting.</p>2025-10-29T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2862A high-dimensional Feature Selection Based on Modified Energy Valley Optimizer and ReliefF algorithm2025-12-21T12:03:13+08:00Islam S. Fathii.mohamed@anu.edu.joBajeszeyadaljunaeidiai.mohamed@anu.edu.joMohammed TawfikM.Tawfik@anu.edu.jo<p>High-dimensional feature selection (FS) represents a crucial preprocessing phase in data mining and machine learning applications, exerting substantial influence on the effectiveness of machine learning algorithms. The primary goal of FS involves removing irrelevant attributes, minimizing computational time and memory demands, and improving the overall efficacy of the associated learning algorithm. The Energy Valley Optimizer (EVO) constitutes an innovative metaheuristic approach grounded in sophisticated physics concepts, specifically those connected to particle equilibrium and decomposition patterns. This research introduces an improved binary variant of the Energy Valley Optimizer (IBEVO) designed to tackle high-dimensional feature selection challenges. The base EVO algorithm has been augmented with significant enhancements to boost its overall effectiveness. The ReliefF algorithm is incorporated into EVO's initialization phase to strengthen the algorithm's capacity to exploit promising regions and, integrated into the base EVO, to accelerate convergence. The findings from ten gene expression datasets characterized by high dimensionality and limited sample sizes show that the newly developed method enhances predictive performance while simultaneously decreasing feature count, achieving highly competitive outcomes when compared to other state-of-the-art feature selection approaches.</p>2025-11-04T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2877Zero-Sum Reinsurance and Investment Differential Game under a Geometric Mean Reversion Model2025-12-21T12:03:14+08:00Winfrida Mwigilwawinfridamwigilwa184@gmail.comFarai Julius Mhlangafarai.mhlanga@ul.ac.za<p>This paper investigates a zero-sum stochastic differential game involving a large insurance company and a small insurance company. The large insurance company has sufficient assets to invest in both a risk-free asset and a risky asset. The price process of the risky asset follows the Geometric Mean Reversion (GMR) model and takes into account dividend payments and federal income tax. The small insurance company invests only in the risk-free asset and is subject to federal income tax on the interest earned. The large insurance company seeks to maximize the expected exponential utility of the difference between its surplus and that of the small insurance company to maintain its surplus advantages, while the small insurance company aims to minimize the same quantity to reduce its disadvantages.
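The GMR price dynamics referenced above can be simulated directly. Below is a minimal Euler–Maruyama sketch of one common GMR specification, dS = κ(θ − ln S)S dt + σS dW; the dividend and tax adjustments the paper includes are omitted, and all parameter values are illustrative.

```python
import numpy as np

def simulate_gmr(s0, kappa, theta, sigma, T=1.0, n=252, seed=0):
    """Euler-Maruyama path of dS = kappa*(theta - ln S)*S dt + sigma*S dW."""
    rng = np.random.default_rng(seed)
    dt = T / n
    s = np.empty(n + 1)
    s[0] = s0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))           # Brownian increment
        drift = kappa * (theta - np.log(s[i])) * s[i] * dt
        s[i + 1] = s[i] + drift + sigma * s[i] * dw
    return s

path = simulate_gmr(s0=1.0, kappa=2.0, theta=0.5, sigma=0.3)
print(path[-5:])
```

Unlike geometric Brownian motion, the log-price here is pulled back toward θ, which changes the optimal exposure to the risky asset and hence the structure of the game's strategies.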
We establish the corresponding Hamilton-Jacobi-Bellman equations and derive the optimal reinsurance-investment and investment-only strategies. Finally, numerical simulations are performed to illustrate our findings.</p>2025-10-29T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2979Hybrid Emperor Penguin and Gravitational Search Optimization for Efficient Task Co-Offloading in Fog Computing Environments2025-12-21T12:03:15+08:00A. S. B. Sadkhandr00abbas@gmail.comMohsen Nickray m.nickray@qom.ac.ir<blockquote> <p>The industrial fog computing setting is highly sensitive to challenges in efficiently scheduling tasks and offloading computing processes because of the heterogeneity of devices, their mobility, and fluctuations in resource availability. This work is motivated by the need to design a unified optimization mechanism for dynamic fog environments where both device and network heterogeneity severely affect task allocation. The proposed study aims to address the limitations of existing single-strategy offloading methods by developing a hybrid optimization framework that balances computational load, minimizes latency, and reduces energy consumption. The main contribution of this paper lies in introducing a hybrid GSA–EPO co-offloading model that achieves an adaptive trade-off between energy efficiency and service latency while maintaining scalability in large-scale fog computing systems. The hybridized framework integrates the Gravitational Search Algorithm (GSA) and Emperor Penguin Optimization (EPO) to reduce energy consumption, service latency, and penalties related to deadline violations. The model uses battery-aware willingness, memory constraints, and mobility-sensitive communication parameters to inform local execution, device-to-device offloading, and edge server offloading. The hybrid GSA–EPO algorithm combines the exploration capability of GSA with the convergence efficiency of EPO within a migration-based knowledge-exchange structure, thereby enhancing both convergence and diversity of solutions. Simulation results demonstrate that the proposed approach achieves up to 63% reduction in energy consumption and 75% reduction in delay compared to baseline methods, confirming the feasibility of hybrid metaheuristic strategies for reliable and efficient task co-offloading in dynamic fog computing networks.</p> </blockquote>2025-11-15T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2991A Minimizing Sequence Proof of the Banach Fixed Point Theorem2025-12-21T12:03:16+08:00Anwar Bataihaha.bataihah@jadara.edu.jo<p>We present a novel proof of the Banach contraction mapping theorem based on minimizing sequences. By analyzing the set $\mathcal{A} = \{d(x, T(x)) : x \in X\}$ of point-to-image distances, we construct a sequence that converges to the unique fixed point. A key technical contribution is Lemma 1, which establishes an optimal inequality between contraction coefficients and the b-metric constant. We demonstrate applications to b-metric spaces and discuss extensions to incomplete metric spaces.
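The engine of a minimizing-sequence argument of this kind is an inequality bounding the distance between two points by their displacements under $T$. In an ordinary metric space with contraction constant $q \in [0,1)$, one standard estimate (a special case of the kind of inequality the paper's Lemma 1 refines for b-metric constants) reads:

```latex
d(x,y) \le d(x,Tx) + d(Tx,Ty) + d(Ty,y)
       \le d(x,Tx) + q\,d(x,y) + d(Ty,y),
\qquad \text{hence} \qquad
d(x,y) \le \frac{d(x,Tx) + d(y,Ty)}{1-q}.
```

Since $d(Tx, T^2x) \le q\, d(x, Tx)$ forces $\inf \mathcal{A} = 0$, any minimizing sequence $(x_n)$ with $d(x_n, Tx_n) \to 0$ is Cauchy by the displayed bound, and completeness then delivers the fixed point.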
Examples show the effectiveness of this approach, which provides a geometric alternative to classical methods.</p>2025-11-02T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3013Fast Method for the Mobile Robot Path Planning Problem: The DM-SPP method2025-12-21T12:19:58+08:00Souhail Dhouibsouh.dhou@gmail.com<p>The main objective of the Mobile Robot Path Planning Problem is to find optimal waypoints for a mobile robot while avoiding collisions with obstacles. This is a complicated and essential task in robotics. Basically, rapidly planning the optimal path increases the performance of the robot by shortening the time to reach the target position and reducing energy consumption. In this research work, the innovative Dhouib-Matrix-SPP (DM-SPP) technique is studied with both four and eight movement directions. DM-SPP is a very rapid method built on contingency matrix navigation, needing only <em>n</em> iterations to create the optimal path (where <em>n</em> is the number of nodes). The simulation results on several complicated case studies (varying from a (20 x 20) grid map to an (80 x 80) grid map) show that DM-SPP can rapidly create an accurate, collision-free trajectory. Moreover, the proposed technique is compared with recently designed artificial intelligence approaches. This comparison shows that the novel DM-SPP is the fastest approach: for example, it is 289.325 times faster than the A* algorithm, 156.769 times faster than the Improved A* method, 127.901 times faster than the Bidirectional A* technique, 69.586 times faster than the Improved Bidirectional A* algorithm and 45.671 times faster than the Variable Neighborhood Search BA* metaheuristic. These findings underline the speed of the proposed DM-SPP optimization technique and demonstrate the applicability of DM-SPP as a reliable option for trajectory optimization.</p>2025-12-07T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3106Community Clustering based on Grey Wolf Optimization2025-12-21T12:03:18+08:00Lyes BADISl.badis@univ-bouira.dzAbuzer Hussein IbrahimAbouzer.hussein@gmail.comSally D.Abualgasimsally.dallah1@gmail.comMohamed Ahmed BOUDREFm.boudref@univ-bouira.dzMohamed Babiker A.Mohamedakoody@albutana.edu.sdAbdalrahiem Abdalla Salih Abakarabdu4856@gmail.com<p>Community detection remains a fundamental challenge in network analysis, with critical applications across diverse domains, including social networks and biological systems. This paper introduces Grey Wolf Optimization Clustering (GWOC), a novel approach that enhances the standard Grey Wolf Optimizer (GWO) by integrating hierarchical clustering mechanisms into its optimization process. Specifically, GWOC refines the encircling and attacking phases of the grey wolves’ hunting strategy by embedding a clustering-based local search that improves convergence precision and the delineation of community boundaries. Unlike traditional methods, GWOC employs an innovative three-phase strategy inspired by grey wolves’ hunting behavior: tracking, encircling, and attacking, achieving remarkable improvements in both detection accuracy and computational efficiency.
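For orientation, the standard GWO update that GWOC builds on moves each wolf toward an average of three points guided by the α, β, and δ leaders. The sketch below shows that generic step only; GWOC's contribution (the clustering-based local search embedded in the encircling and attacking phases) is layered on top and is not reproduced here.

```python
import numpy as np

def gwo_step(wolves, alpha, beta, delta, a, rng):
    """One position update of the standard Grey Wolf Optimizer.
    wolves: (n, dim) positions; alpha/beta/delta: the three best positions;
    a: scalar decreasing from 2 to 0 over the run (exploration -> exploitation)."""
    new = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        guided = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(x.shape) - a   # in [-a, a]; |A| > 1 favors exploring
            C = 2 * rng.random(x.shape)
            D = np.abs(C * leader - x)            # "encircling" distance to the leader
            guided.append(leader - A * D)         # "attacking" move toward the leader
        new[i] = np.mean(guided, axis=0)
    return new
```

In a discrete community-detection setting, positions like these are typically mapped to cluster assignments before a modularity-style fitness is evaluated.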
Through extensive experimentation on real-world networks as well as synthetic Lancichinetti-Fortunato-Radicchi (LFR) networks, GWOC exhibits robustness under varying inter-community mixing levels, maintaining Normalized Mutual Information (NMI) scores above 0.9, ResMI scores exceeding 88, and Adjusted Rand Index (ARI) scores exceeding 0.92 in high-noise environments. The algorithm’s effectiveness is particularly notable in handling large-scale networks and maintaining high detection accuracy even with increasing inter-community mixing levels. </p>2025-11-18T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2198Effect of Preprocessing on Modelling Soil Images Captured Using Smartphone2025-12-23T01:15:45+08:00Yudi Agustayudi@stikom-bali.ac.idNi Komang Sri Julyantariyudi@stikom-bali.ac.id<p>Knowing soil characteristics is an important step in the agricultural process. Soil characteristics such as NPK and pH values can affect the production quantity and quality of a farm. To determine soil characteristics, various methods can be employed, including tools such as the Soil Test Kit (STK) and Rapid Soil Testing (RST), among others. In extreme cases, laboratory soil analysis is conducted; however, such a process is time-consuming and expensive. Nowadays, the use of smartphones is common, and smartphones can capture images, in this case soil images, in no time. However, recognizing soil characteristics from images requires further processing. Various artificial intelligence (AI) methods could be used for this purpose, including artificial neural networks, convolutional neural networks, random forest, and gradient boosting, among others. This paper investigates how soil images captured using a smartphone can be used to predict soil characteristics. Various image preprocessing methods are chosen to produce images that can be modelled using various AI methods. The results show that random forest performed best among the compared methods, with the lowest overall mean squared error. Predicting pH values from soil images produced better accuracies than predicting NPK values. Image preprocessing does not strongly influence prediction accuracy; in some cases, modelling images without preprocessing even resulted in better accuracies.</p>2025-10-30T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2492A Secure Predictive Framework for Preventing Health Care Data2025-12-21T12:03:21+08:00Bhargavi Kondabhargavikonda1@outlook.comAkhila Reddy Yadullaakhilareddyyadulla3@outlook.comVinay Kumar Kasulavinaykumarkasula3@gmail.comMounica YenugulamounicaYenugula1@outlook.comSupraja Ayyamgarisuprajaayyamgari@yahoo.com<p>The incorporation of Artificial Intelligence (AI) with online resources in the healthcare sector has significantly enhanced medical services. By facilitating accurate diagnoses, tailored treatment strategies, and ongoing patient surveillance, Internet of Things (IoT) devices promote efficient communication, while AI analyzes detailed healthcare data to improve decision-making and reduce costs. Nevertheless, securing data storage and transmission poses a significant challenge, especially as the threat of data breaches and cyber-attacks increases. Protecting patient privacy and securing health records is essential to maintaining public health. 
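<p>For the soil-image study above, the modelling step can be sketched with a random forest regressor predicting pH from flattened image features. The synthetic arrays below merely stand in for the paper's preprocessed smartphone images; all shapes and the toy pH target are assumptions.</p>
<pre><code>
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in for 200 preprocessed 16x16 RGB soil images, flattened to vectors.
X = rng.random((200, 16 * 16 * 3))
# Toy pH target loosely tied to the first few features (assumption).
y = 4.5 + 3.0 * X[:, :10].mean(axis=1) + 0.1 * rng.standard_normal(200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("pH MSE:", mean_squared_error(y_te, model.predict(X_te)))
</code></pre>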
To address these challenges, a new approach called the Butterfly Optimization Based Modular Neural Network (BObMNN) has been developed, focusing on data security and predictive performance. The healthcare database was first assembled and imported into a Python environment for further analysis. Following this, data security protocols were established using encryption and decryption methods. The encrypted data was then subjected to preprocessing, specifically feature selection, where the Butterfly Optimization Algorithm (BOA) was utilized to determine the most important attributes for predictive analysis. The constructed model was evaluated using a variety of measures, including Area under the Curve (AUC), accuracy, recall, F-score, and precision.</p>2025-09-06T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2596Mathematical Modeling of Stenosis in the Carotid Bifurcation as a Cause of Ischemic Stroke2025-12-21T12:03:22+08:00Arif Fatahillahfatahillah767@gmail.com Dzawawi Dimas Adanidzawawidimas@gmail.comRobiatul Adawiyahrobiatul@unej.ac.id Susi Setiawani arif.fkip@unej.ac.idRafiantika Megahnia Prihandiniarif.fkip@unej.ac.idHaci Mehmet Baskonushmbaskonus@gmail.com<p>Ischemic stroke is the most common type of stroke, caused by disrupted blood flow to the brain. This study aims to analyze blood flow stenosis in the carotid bifurcation, a major cause of ischemic stroke. A mathematical model is developed using the finite volume method, considering variations in shape, thickness, and stenosis location. The stenosis shapes analyzed include bell-shaped, cosine-shaped, and elliptical, with narrowing levels of 60%, 70%, 80%, and 90%. The mathematical model is solved using the SIMPLE algorithm and simulated in Python to assess the impact of stenosis on ischemic stroke risk. The simulation provides velocity and pressure data across different stenosis shapes and thicknesses, enabling a comprehensive risk analysis. Computational Fluid Dynamics (CFD) is employed to simulate turbulent flow in the carotid bifurcation using ANSYS FLUENT. The findings indicate that 90% stenosis in the carotid bifurcation poses a significant risk, as the resulting flow velocity and pressure exceed normal thresholds. This study provides valuable insights into the effects of carotid bifurcation stenosis on ischemic stroke occurrence and its implications.</p>2025-11-09T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2673Recurrence Relation for The Moments of Order Statistics from The Beta Exponential-Geometric Distribution2025-12-21T12:03:23+08:00Ali Jaleel Najmalialmula86@gmail.comHossein Jabbari Khamneih_jabbari@tabrizu.ac.irSomayeh MakoueiMakouei@tabrizu.ac.ir<p>In this paper, a novel cumulative distribution function (cdf) for the beta exponential-geometric (BEG) distribution is developed through two distinct practical frames. The presented models are clearly more pragmatic than those demonstrated in previous works when it comes to extending further relations. 
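<p>The BObMNN pipeline above (encrypt, then select features, then predict) can be outlined in a few lines. Fernet symmetric encryption is one concrete stand-in for the paper's unspecified encryption method, and plain random search over feature masks stands in for the Butterfly Optimization Algorithm; the data are synthetic.</p>
<pre><code>
import numpy as np
from cryptography.fernet import Fernet
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# 1) Security layer: encrypt a serialized record, decrypt it before analysis.
cipher = Fernet(Fernet.generate_key())
token = cipher.encrypt(b"patient_id=17;hr=82;bp=120/80")
record = cipher.decrypt(token)

# 2) Feature selection: random search over binary masks as a simple stand-in
#    for the Butterfly Optimization Algorithm (BOA) used in the paper.
X, y = rng.random((300, 20)), rng.integers(0, 2, 300)
best_mask, best_score = None, -np.inf
for _ in range(50):
    mask = rng.random(20) < 0.5
    if not mask.any():
        continue
    clf = LogisticRegression(max_iter=500)
    score = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    if score > best_score:
        best_mask, best_score = mask, score

print("selected features:", np.flatnonzero(best_mask))
print("cv accuracy:", round(best_score, 3))
</code></pre>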
Then, using the derived cdfs, certain recurrence relations for the single and product moments of the order statistics of a random sample of size n arising from the BEG distribution are derived.</p>2025-10-13T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2690Predicting the QoE in Adaptive Video Streaming Using Weighted Averaging Federated Learning2025-12-21T12:03:24+08:00Jaafar Rashidjaedukar@gmail.comAbolfazl Diyanatadiyanat@iust.ac.ir<p>Recently, assessing the Quality of Experience (QoE) in adaptive video streaming systems has become an active area of research, since it directly gauges customer satisfaction. QoE models often suffer from limited data availability and from failing to preserve clients' sensitive data. The ITU-T standards provide a guideline for low-cost, practical, objective QoE assessment to obtain the streaming and non-streaming parameters and their corresponding accurate MOS scores. The existing QoE prediction literature either implements the models in isolation or shares the QoE data between distinct devices. This work simulates user interaction with online video distributors and obtains labelled QoE data using the ITU-T P.1203 standard. In addition, it proposes Weighted Averaging Federated Learning (WAFL), an enhanced federated learning implementation that uses feed-forward neural networks to predict the QoE. WAFL meets current privacy requirements by avoiding sharing the raw data among the distributed models. Training is implemented sequentially amongst the collaborating nodes, and the global model is enhanced by aggregating the shared learned weights over the distributed learning rounds. The achieved QoE prediction accuracy is compared to isolated learning and traditional federated learning. The proposed QoE predictor achieved an accuracy of 86.68% in estimating the QoE using a small number of streaming and non-streaming QoE parameters.</p>2025-11-04T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2703Third Harmonic Injection Reference with Edge Shifted Carriers to Three Phase Inverter Fed Induction Motor2025-12-21T12:03:25+08:00Bhavana Kadiyalabhavanakadiyala82@gmail.comR. Bensrajbensraj854@outlook.comP. Muthukumarmuthukumar954@outlook.com<p>A three-phase induction motor (IM) fed by an inverter is widely used in industrial and commercial drives because the inverter can effectively convert Direct Current (DC) power into balanced three-phase Alternating Current (AC) power suitable for driving IMs. Pulse Width Modulation (PWM) switching introduces harmonics into the output voltage, which can cause additional heating of the motor, torque ripple, and electromagnetic interference (EMI). This research investigates the design and performance improvement of a three-phase Voltage Source Inverter (VSI) system for effective IM operation, combined with a new Kookaburra-based Modular Neural Control Framework (KbMNCF). The first step is designing a conventional VSI topology using a DC voltage source, power electronic switches, and PWM to convert the DC input into a balanced three-phase AC output for industrial motor drives. To maximize inverter performance while adapting the control, the introduced KbMNCF is used to tune the PWM parameters. 
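<p>The aggregation step of WAFL described above (enhancing a global model by aggregating the learned weights shared over the learning rounds) reduces, in its simplest reading, to a weighted mean of per-node parameters. A minimal sketch in which weighting by each node's sample count is an assumption, not necessarily the paper's scheme:</p>
<pre><code>
import numpy as np

def weighted_average(node_models, node_samples):
    """Aggregate per-node parameter lists into a global model via a
    sample-count-weighted mean (one plausible reading of WAFL)."""
    w = np.asarray(node_samples, dtype=float)
    w /= w.sum()
    return [sum(wi * layer for wi, layer in zip(w, layers))
            for layers in zip(*node_models)]

# Three collaborating nodes, each holding two parameter arrays (toy layers).
rng = np.random.default_rng(7)
nodes = [[rng.random((4, 3)), rng.random(3)] for _ in range(3)]
global_model = weighted_average(nodes, node_samples=[120, 80, 200])
print([layer.shape for layer in global_model])
</code></pre>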
The PWM techniques adopted include Third Harmonic Injection (THI) for maximizing output voltage and an Edge-Shifted Carrier (ESC) method to allocate switching actions, thereby minimizing switching losses and thermal stress. The system's overall robustness is confirmed by assessing key performance metrics, including Total Harmonic Distortion (THD), output voltage, and system efficiency. Simulation and analysis have demonstrated significant improvements in waveform quality, torque smoothness, and energy efficiency, thereby verifying the validity of the proposed control framework for high-performance inverter-fed motor drives.</p>2025-10-22T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2720Memory Effects in Eco-Epidemiology: A Dynamical Systems Approach to Fear-Disease-Harvesting Interactions2025-12-21T12:03:27+08:00Siti Nurul Afiyahnoeroel@asia.ac.idFatmawatifatmawati@fst.unair.ac.idWindartowindarto@fst.unair.ac.idJohn O. Akannijide28@gmail.com<p>This research proposes a novel fractional-order eco-epidemiological model to investigate predator-prey interactions under the combined effects of fear-induced behavioral changes, disease transmission in predators, and controlled harvesting. Unlike classical integer-order models, our approach employs Caputo fractional derivatives to incorporate memory effects and hereditary traits in ecological processes, offering a more realistic representation of long-term dynamics. We establish the existence, uniqueness, and boundedness of solutions, and analyze the stability of all equilibrium points, including the coexistence state where prey, susceptible predators, and infected predators persist. Numerical simulations demonstrate that: (1) fear effects significantly reduce prey extinction risk by dampening predation rates, (2) harvesting intensity critically influences system stability, with excessive harvesting driving predator extinction, and (3) fractional-order dynamics reveal memory-dependent transitions not observable in traditional models. These findings provide actionable insights for ecosystem management, particularly in designing harvesting policies that balance biodiversity conservation and disease control. The model’s framework is adaptable to empirical data, bridging theoretical ecology and practical conservation strategies.</p>2025-10-13T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2795Design of a Two-Stage Controller for Balancing and Stabilizing the Furuta Pendulum2025-12-21T12:21:11+08:00Sebastián David Pintor Ahumadasdpintora@udistrital.edu.coCamilo Esteban Pérez-Espíndolacepereze@udistrital.edu.coOscar Danilo Montoya Giraldoodmontoyag@udistrital.edu.co<p>Control systems play a fundamental role in regulating variables within dynamic systems, enabling stability according to specific requirements such as response time, reference tracking, and disturbance rejection. This research aims to design a two-stage controller capable of stabilizing the Furuta Pendulum in its upright position. To achieve this, a model predictive control (MPC) strategy is implemented, combined with a swing-up and energy-based lifting technique, and applied to the physical QUANSER inverted pendulum system. The article presents the mathematical modeling process and its integration into the predictive control design. 
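<p>The Third Harmonic Injection reference named in the inverter paper above has a well-known form: adding one sixth of a third-harmonic component flattens the peak of the sinusoidal reference to about 0.866, so the fundamental can be raised by roughly 15.5% before overmodulation. A minimal sketch of the reference waveform (the 1/6 ratio is the classical choice; the paper's exact tuning via KbMNCF may differ):</p>
<pre><code>
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 2000)
sine_ref = np.sin(theta)                           # plain sinusoidal reference
thi_ref = np.sin(theta) + np.sin(3.0 * theta) / 6  # THI reference

print("peak of sine reference:", round(sine_ref.max(), 4))       # ~1.0
print("peak of THI reference :", round(thi_ref.max(), 4))        # ~0.866 = sqrt(3)/2
print("fundamental headroom  :", round(1.0 / thi_ref.max(), 4))  # ~1.155
</code></pre>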
To evaluate performance, a comparison is made with a classic state feedback controller, also implemented on the same physical system. Both controllers are subjected to standardized reference and disturbance scenarios, allowing for a comprehensive performance comparison through graphical analysis and the ITAE performance index. The results highlight the superior performance of the model predictive controller in terms of speed, tracking accuracy, and steady-state error, particularly in scenarios that require controller switching, an advantage not provided by the state feedback approach.</p>2025-10-07T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2789Facial Expression Recognition: A Survey of Techniques, Datasets, and Real-World Challenges2025-12-21T12:03:29+08:00Mohamed Abdeldayemmohamed-abdeldaym@eru.edu.egHesham F. A. Hamedmohamed-abdeldaym@eru.edu.egAmr M. Nagymohamed-abdeldaym@eru.edu.eg<p>Facial expressions are a powerful nonverbal communication tool that can convey emotions, thoughts, and intentions, enhancing the richness and effectiveness of human interaction. Facial Expression Recognition (FER) has gained increasing attention due to its applications in education, healthcare, marketing, and security. In this survey, we examine the key techniques and approaches employed in FER, focusing on three main categories: traditional machine learning, deep learning, and hybrid methods. We review traditional pipelines involving image preprocessing, feature extraction, and classification, along with deep learning methods such as convolutional neural networks (CNNs), transfer learning, attention mechanisms, and optimized loss functions. Furthermore, the study provides a comprehensive examination of existing research and available datasets related to emotion recognition. We also summarize the best-performing methods used with the most common datasets. In addition, the survey addresses the technical challenges of emotion recognition in real-world scenarios, such as variations in illumination, occlusion, and population diversity. The survey highlights state-of-the-art FER models, comparing their accuracy, efficiency, and limitations. Ultimately, this work serves as a comprehensive starting point for researchers, offering insights into current FER trends and guiding the development of more robust and accurate recognition systems.</p>2025-10-23T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2790Micro Generalized Star Semi Closed sets in Micro Topological Spaces2025-12-21T12:03:31+08:00P. Sathishmohansathishmohan@kongunaducollege.ac.inS. Mythilimythiliskumar2002@gmail.comK. Rajalakshmirajalakshmikandhasamy@gmail.comS. 
Stanley Roshanstanleyroshan20@gmail.com<p>The main objective of this paper is to introduce a new class of sets, namely the micro generalized star semi closed set (briefly Mic-g<sup>∗</sup>s closed set) and the micro generalized star semi open set (briefly Mic-g<sup>∗</sup>s open set), in micro topological spaces.</p>2025-11-02T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2801Detection and Recognition for Iraqi Modern License Plate Using Deep Learning Approach 2025-12-23T12:44:49+08:00Mayaf Zakawa Hasanmayaf.zakawa@dpu.edu.krd Salim Ganim Saeed Al-Alimayaf.zakawa@dpu.edu.krd<p>In recent years, as the accuracy of deep learning techniques has become spectacular, object identification and recognition have become increasingly popular targets of computer vision applications. Among them, automatic number plate recognition (ANPR) has attracted much attention and is already a popular subject of research, but it remains difficult in cases with few public datasets and differing plate formats. In this paper, a deep learning-based pipeline combining YOLOv11n for detection and LPRNet for optical character recognition is proposed. The experiments are performed on a dataset collected for the identification and detection of new Iraqi license plates. Our method achieved good detection performance on this dataset despite applying detection weights pre-trained on the CCPD dataset, showing good cross-domain generalization. In the recognition stage, an LPRNet model was trained and evaluated on our collected data only. The accuracies of the detector and the recognizer of this system were 98.4% and 99.6%, respectively. The findings emphasize the power of deep learning models in cross-domain ALPR tasks and indicate that future expansion of the dataset will increase robustness across a variety of real-world circumstances.</p>2025-10-21T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2815Enhancement of Crop Yield Prediction using an Optimized Deep Network2025-12-21T12:03:33+08:00Sirivella Yagnasreesirivellayagnasree87@gmail.comAnuj Jainanujjain4anuj404@gmail.com<p>The importance of predicting crop yields lies in ensuring food security and optimizing agricultural practices. Precise crop yield forecasts empower farmers and policymakers to make well-informed decisions regarding harvesting, planting, and resource allocation, ultimately affecting the availability and affordability of food. While various methods for predicting crop yields exist, they often fall short in accuracy and efficiency. This research introduces the Honey Badger-based Deep Neural Predictive Framework (HBbDNPF). The model combines Honey Badger optimization with a deep neural network to effectively predict different crop yields. The method includes modules such as preprocessing, feature extraction, and prediction, which reduce the complexity and enhance the accuracy of the crop yield classification. The method is tested with an Unmanned Aerial Vehicle (UAV) spectral image dataset. The model significantly improved the accuracy of the prediction and consumed less time owing to the selected features, validating an accuracy of 99.9% with 99.7% precision and a 99.5% recall rate. 
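<p>The two-stage ANPR pipeline above (YOLOv11n detection followed by LPRNet recognition) can be sketched schematically. This assumes the ultralytics and OpenCV packages are installed; the weights file plate_detector.pt, the input image car.jpg, and the recognize_plate stub are placeholders, since the paper's trained models are not public.</p>
<pre><code>
import cv2
from ultralytics import YOLO  # the YOLOv11 family is served by the ultralytics package

def recognize_plate(plate_img):
    """Placeholder for the paper's LPRNet recognizer; a real system would run a
    trained character-recognition network here."""
    return "<plate-text>"

detector = YOLO("plate_detector.pt")  # assumed fine-tuned detection weights
image = cv2.imread("car.jpg")         # assumed input frame

# Stage 1: detect plate boxes; Stage 2: crop each box and read its characters.
for box in detector(image)[0].boxes.xyxy.cpu().numpy():
    x1, y1, x2, y2 = box.astype(int)
    print("plate:", recognize_plate(image[y1:y2, x1:x2]))
</code></pre>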
By harnessing the synergy of optimization and deep learning, HBbDNPF empowers informed agricultural decision-making, resource allocation, and food production efficiency, contributing to global food security.</p>2025-11-02T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2848Enhancing Hotel Rating Predictions Through Machine Learning: Data Analytics Applications in Indian Hospitality and Digital Marketing2025-12-21T12:03:34+08:00Sapna Kumariysapna183@gmail.comMirza Tanweer Ahmad Beigmirzatanweer@gmail.comMohammad Anasanas_fabs@sgtuniversity.orgHaresh Kumar Sharma hareshshrm@gmail.com<p>With more consumers relying on online reviews, predicting hotel ratings accurately has become very important. This study investigates the use of machine learning models to predict overall hotel ratings based on key service-related features, including location, hospitality, cleanliness, facilities, food, value for money, and price. Using a real-world dataset of Indian hotels, we evaluate and compare the performance of six supervised learning models: Linear Regression, Random Forest, Gradient Boosting, Support Vector Regression, K-Nearest Neighbours, and PCA-based Linear Regression. The models were evaluated using Mean Squared Error (MSE) and R-squared (R²) as performance metrics. Gradient Boosting demonstrated the highest predictive accuracy, closely followed by Random Forest. Feature importance analysis identified hospitality, cleanliness, and location as the most significant predictors of customer satisfaction. Principal Component Analysis (PCA) further reduced dimensionality while retaining over 90% of the dataset's variance within the first four components. These findings demonstrate the effectiveness of ensemble learning methods for hotel rating prediction and offer actionable insights for service improvement in the Indian hospitality sector. Furthermore, the results underscore the role of data-driven analytics in shaping effective digital marketing and promotional strategies tailored to diverse customer preferences.</p>2025-10-29T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2886Phonological Awareness and Early Arabic Literacy: Predictive Insights from a Moroccan Case Study2025-12-21T12:03:36+08:00ZOUHAIR OUAZENEzouhair.ouazene@usmba.ac.maAmina KARROUMamina.karroum@usmba.ac.maJamal AMRAOUIjamal.amraoui@usmba.ac.maRachida GOUGILrachida.gougil@usmba.ac.ma<p>Phonological awareness is among the key variables in early reading and writing. The better the understanding of spoken language, the easier it will be to acquire written language. Moreover, phonological awareness predicts most cases of reading disabilities, such as dyslexia. This study investigates the effect of phonological awareness on the literacy skills of young learners by applying machine learning models. The research analyzed data from 606 preschool children aged 61 to 73 months, divided into control and experimental groups across ten educational institutions. Results reveal significant differences in phonological awareness scores favoring the experimental group, with no notable gender differences. Furthermore, the study found a strong positive correlation between phonological awareness and literacy skills, supported by machine learning metrics such as R² values (0.785 for reading and 0.658 for writing). 
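<p>The model comparison in the hotel-rating study above follows a standard scikit-learn pattern: fit several regressors on the service sub-scores and compare MSE and R². A minimal sketch with synthetic data standing in for the Indian hotel dataset (the feature names come from the abstract; the target weights are assumptions):</p>
<pre><code>
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
features = ["location", "hospitality", "cleanliness", "facilities",
            "food", "value_for_money", "price"]
X = rng.random((500, len(features))) * 5          # synthetic 0-5 sub-scores
y = (0.35 * X[:, 1] + 0.30 * X[:, 2] + 0.20 * X[:, 0]
     + 0.15 * X[:, 3:].mean(axis=1))              # toy overall rating

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (GradientBoostingRegressor(random_state=0),
              RandomForestRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          "MSE:", round(mean_squared_error(y_te, pred), 4),
          "R2:", round(r2_score(y_te, pred), 4))
</code></pre>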
The current research has underlined the efficacy of both phonological awareness interventions and machine learning for amplifying educational insight. This study underscores the critical role that phonological awareness plays in the development of literacy and provides actionable insight into ways to improve early Arabic literacy education.</p>2025-11-09T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2890From Clicks to Conversions: Leveraging Apriori and Behavioural Segmentation in E-Commerce2025-12-21T12:03:37+08:00Raouya El Youbiraouya.elyoubi@usmba.ac.maFayçal Messaoudifaycal.messaoudi@usmba.ac.maManal Loukilimanal.loukili@usmba.ac.maRiad Loukilimanal.loukili@usmba.ac.maOmar El Aalouchemanal.loukili@usmba.ac.ma<p>In this study, a data mining and machine learning approach is presented to analyse visitor behaviour and preferences on an e-commerce platform. The Apriori algorithm is employed for association rule mining to uncover patterns between item views, cart additions, and purchases. Visitor segmentation is performed based on browsing activity, and a logistic regression model is developed to predict purchase behaviour. It is observed that visitors who view specific items are more likely to add them to their cart or proceed to purchase, and that cart additions significantly increase the likelihood of purchase. Four distinct visitor segments are identified through clustering, reflecting varying levels of engagement. Among the features analysed, the number of items viewed and the total view count are found to be the most influential predictors of purchasing intent. Using these two features, the logistic regression model achieves an accuracy of 0.89, demonstrating the effectiveness of a simple, interpretable approach for behaviour-based personalization in e-commerce contexts.</p>2025-10-14T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2961Fractional Medium Domination Number of Graphs2025-12-21T12:03:38+08:00G. Umaphd20162810@alagappauniversity.ac.inS. Amuthaamuthas@alagappauniversity.ac.inN. Anbazhagananbazhagann@alagappauniversity.ac.in<p>This work develops the framework of the fractional medium domination number \( MD_f(G) \), focusing on connected, undirected graphs without loops. The \( MD_f(G) \) is defined as the ratio of the fractional total domination value \( T_{DV_f}(G) \) to the total number of unordered pairs of distinct vertices in a graph. This new parameter expands on traditional domination concepts by incorporating fractional values, providing a more refined measure of domination in graphs. The fractional domination value between vertices is computed as the sum of fractional contributions from their common neighbors, where each contribution is inversely proportional to the degree of the respective vertex. The paper explores bounds for the fractional medium domination number across various graph families and presents computational methods for determining \( MD_f(G) \) using Python programming. Practical applications, such as network optimization and disaster relief, are also discussed to illustrate the significance of this parameter.</p>2025-09-26T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2975Two-Echelon Inventory Control Systems in Supply Chain with Fuzzy Number for Quadratic Demand2025-12-21T12:27:10+08:00G. 
Muneeswari munees408@gmail.comR. Bakthavachalambakthaa@yahoo.com<p>In this paper, a two-echelon inventory control system in a supply chain is discussed using fuzzy numbers. The system consists of a manufacturer and a single retailer. In our proposed model, the manufacturer is the upper node and the retailer is the lower node in the echelon system, and a non-coordinated supply chain model is investigated. Different types of demand models are given; in particular, the quadratic model is discussed. Triangular fuzzy numbers are used for fuzzification, and the Graded Mean Integration and signed distance methods are used for defuzzification. Some necessary theorems are proved.</p>2025-11-02T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3033Upside-Down Logic in Plithogenic Fuzzy Control System: Contradiction-Aware Rule Inversion and De-Plithogenication2025-12-21T12:03:40+08:00Takaaki FujitaTakaaki.fujita060@gmail.comAhmed Heilatahmed_heilat@yahoo.comRaed Hatamlehraed@jadara.edu.jo<p>In the real world, many reversal phenomena occur—for instance, situations in which a statement once regarded as false is later recognized as true. Upside-Down Logic provides a framework that formalizes such reversals as a logical system: it flips the truth and falsity of propositions via contextual transformations, thereby capturing ambiguity and reversal within reasoning processes. A Plithogenic Set represents elements using attribute-based membership and contradiction functions, extending the traditional frameworks of fuzzy, intuitionistic, and neutrosophic sets. In this paper, we investigate Plithogenic Fuzzy Control, which both generalizes classical fuzzy control and furnishes a formal mechanism for analyzing and implementing inversion (upside-down) operations.</p>2025-10-29T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3041Optimized Data Offloading in IoT-Based Wireless Sensor Networks2025-12-21T12:03:41+08:00Aya Ouaaroussaya.ouaarouss@etu.uae.ac.maHajar Chabbouthajar.chabbout@etu.uae.ac.maAbderrahim Zannoua.zannou@uae.ac.maJalal Isaadj.isaad@uae.ac.ma<p>The rapid growth of Internet of Things (IoT) data is driven by the massive increase in IoT devices, leading to an explosion in data generation. However, these devices often face challenges such as limited storage capacity and low energy reserves, making local data storage and processing costly, particularly in Wireless Sensor Networks (WSNs) where additional computational resources are scarce. To address these issues, offloading data to specialized edge devices is a crucial solution. However, determining whether IoT devices should store data locally or offload it to edge devices remains a significant challenge. In this paper, we propose an offloading mechanism that allows resource-constrained IoT devices to transfer their data to edge devices for storage or processing. First, we define the resource constraints of IoT devices that guide the design of our proposed system. Using these constraints, we determine each device's memory fill time and battery life. Next, we apply k-means clustering to group the IoT devices into constrained and unconstrained groups based on memory fill time and battery life. In the second step, the constrained devices offload their data to edge devices. 
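<p>The two defuzzification methods named in the two-echelon inventory abstract above have standard closed forms for a triangular fuzzy number (a, b, c): the Graded Mean Integration representation (a + 4b + c)/6 and the signed distance (a + 2b + c)/4. A minimal sketch, where the fuzzy demand triple is an illustrative value, not taken from the paper:</p>
<pre><code>
def graded_mean_integration(a, b, c):
    # Graded Mean Integration representation of the triangular fuzzy number (a, b, c).
    return (a + 4 * b + c) / 6

def signed_distance(a, b, c):
    # Signed distance of the triangular fuzzy number (a, b, c) from crisp zero.
    return (a + 2 * b + c) / 4

demand = (80, 100, 130)  # illustrative triangular fuzzy demand coefficient
print("GMI value            :", round(graded_mean_integration(*demand), 2))  # 101.67
print("signed-distance value:", round(signed_distance(*demand), 2))          # 102.5
</code></pre>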
To enhance the control of the WSN, the edge devices forward the collected data to the base station, where Long Short-Term Memory (LSTM) networks are employed to predict the resource constraints, and the K-Nearest Neighbors (KNN) algorithm is used to classify the devices into constrained and unconstrained groups. Simulation results show that our approach achieves higher energy efficiency, with the slowest decrease in residual energy and the fewest sensor failures compared to the two reference methods from the literature. It also maintains greater available memory space, declining only from about 90% to 60% while the other methods drop far lower. Data offloading to the edge devices remains stable between 1000 KB and 1300 KB, demonstrating consistent resource utilization.</p>2025-11-10T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3148Numerical solutions to nonlinear fractional differential equations: New results and comparative study in biological and engineering sciences2025-12-21T12:03:43+08:00Eman A. A. Ziadaeng_emanziada@yahoo.comNoura Roushdynmrushde@imamu.edu.saMohamed F. Aboueleneinmabouelenein@imamu.edu.saMonica Botrosmonica.botros@deltauniv.edu.eg<p>Mathematical models that extend beyond classical differential equations are necessary for understanding complicated dynamical systems, such as circadian rhythm-regulated brain activity or intricate feedback loops in control engineering. Nonlinear fractional differential equations (NFDEs), especially those constructed using the Caputo derivative (CD), have become more common in recent years because of their ability to represent memory-dependent phenomena in a variety of biological and engineering fields. Motivated by practical applications such as the circadian variation of brain metabolites and the behavior of relaxation-oscillation systems, the present work performs a thorough comparative examination of these advanced models. Three different approaches to solving the NFDEs are used: the Picard Method (PM), the Adomian Decomposition Method (ADM), and the Proposed Numerical Method (PNM). Each approach is rigorously analysed in terms of convergence, accompanied by detailed error estimates that confirm the existence and uniqueness of the solutions.</p>2025-12-10T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computing
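<p>The grouping stages of the offloading paper above (k-means to split devices into constrained and unconstrained groups, then KNN at the base station to classify devices) map directly onto scikit-learn. A minimal sketch in which the synthetic memory-fill-time and battery-life values are assumptions, and the paper's LSTM constraint predictor is omitted:</p>
<pre><code>
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(11)

# Synthetic devices: columns = (memory fill time in hours, battery life in hours).
devices = np.vstack([rng.normal([5, 10], 1.5, (40, 2)),     # constrained-looking
                     rng.normal([40, 80], 8.0, (40, 2))])   # unconstrained-looking

# Stage 1: k-means splits the devices into two groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(devices)

# Stage 2: a KNN classifier reproduces the grouping for newly reporting devices.
knn = KNeighborsClassifier(n_neighbors=5).fit(devices, labels)
print("new device group:", knn.predict([[6.0, 12.0]])[0])
</code></pre>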