Statistics, Optimization & Information Computing
http://47.88.85.238/index.php/soic
<p><em><strong>Statistics, Optimization and Information Computing</strong></em> (SOIC) is an international refereed journal dedicated to the latest advancement of statistics, optimization and applications in information sciences. Topics of interest are (but not limited to): </p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap, Resampling methods, Spatial Statistics, Survival Analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in life sciences including biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision Theory, Time series analysis, High-dimensional multivariate integrals, statistical analysis in market, business, finance, insurance, economic and social science, etc</li> </ul> <p> Optimization methods and applications</p> <ul> <li class="show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization and Markov-chain etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming </li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show"><span lang="EN-US">Optimal control: theory and applications</span></li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and machine intelligence</p> <ul> <li class="show">Machine learning, Statistical learning, Deep learning</li> <li class="show">Artificial intelligence, Intelligence computation, Intelligent control and optimization</li> <li class="show">Data mining, Data analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problem and imaging sciences</li> <li class="show">Genetic algorithm, Natural language processing, Expert systems, Robotics, Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul>International Academic Pressen-USStatistics, Optimization & Information Computing2311-004X<span>Authors who publish with this journal agree to the following terms:</span><br /><br /><ol type="a"><ol type="a"><li>Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="http://creativecommons.org/licenses/by/3.0/" target="_new">Creative Commons Attribution License</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</li><li>Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</li><li>Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on 
their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See <a href="http://opcit.eprints.org/oacitation-biblio.html" target="_new">The Effect of Open Access</a>).</li></ol></ol>Prediction Methods for Future Record Values from Two-Parameter Kies Distribution
http://47.88.85.238/index.php/soic/article/view/2123
<p>In this paper, we consider the prediction problem of the future records based on observed data from two-parameter, shape and scale parameter, Kies distribution. Different point predictors including maximum likelihood, conditional median, best unbiased and Bayesian predictors of the future records are obtained. The corresponding prediction intervals using pivotal quantity, Highest Conditional Density (HCD), Shortest Length and Bayesian prediction intervals are also developed. The Monte Carlo algorithm is used to compute simulation consistent Bayesian prediction intervals for future unobserved records. The performance of the so obtained point predictors and prediction intervals are compared via experimental numerical simulation. The criteria that were considered for comparison purposes are mean square prediction error (MSPE) and prediction bias for point predictors and coverage probability (CP) and the average length (AL) for prediction intervals. A real and simulated data sets are performed for illustrative purposes.</p>Nesreen Al-OlaimatHatim Solayman MigdadiHusam A. BayoudMohammad Z. Raqab
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-122026-02-121553381340010.19139/soic-2310-5070-2123New regression model for academic achievement and new classification method for school dropout based on Artificial Bee Colony Algorithm
http://47.88.85.238/index.php/soic/article/view/2420
<p>This article focuses on the analysis of two major phenomena in the field of education: academic achievement and student dropout. Academic achievement corresponds to the academic performance of students, generally assessed through their grades, averages or the achievement of educational objectives. It is influenced by various factors such as personal abilities, motivation, family support and the quality of education. On the other hand, school dropout refers to the premature abandonment of studies, often caused by academic, social or economic difficulties. These two phenomena are among the greatest challenges facing educational institutions in most countries of the world, especially in developing countries. They have serious social and economic consequences for individuals and societies. To analyze the risks resulting from these two phenomena, it is necessary to use advanced forecasting techniques and methods, including statistical methods and artificial intelligence algorithms using available data. These methods allow us to understand the factors of each phenomenon individually and predict its negative risks. In order to improve the quality of predictions, we propose in this article a new regression model based on the multiple exponential regression model and the polynomial regression model. In order to identify the impact of social, economic and personal factors of the student and his environment on school dropout, we present an innovative classification method based on a generalization of the logistic regression model, replacing the linear term with a multiple polynomial term. To estimate the coefficients of the two proposed models, we use the ABC (Artificial Bee Colony) optimization algorithm. The two proposed approaches were applied to two different databases: the regression model was used to predict academic achievement and the classification method was used to predict the risks of school dropout. We carry out comparative studies with recent methods. The results obtained showed the reliability and superiority of the proposed approaches in terms of prediction and accuracy.</p>Hicham EL Yousfi AlaouiZiad BousrarafAmal Hjouji Omar EL OgriJaouad EL-Mekkaoui
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-122026-02-121553401341510.19139/soic-2310-5070-2420Repair alert model for one component systems with discrete lifetimes belonging to the power series family
http://47.88.85.238/index.php/soic/article/view/2605
<p>Repair alert models are essential tools for optimizing preventive maintenance in engineering systems. However, the development of these models for systems with discrete lifetime measurements—such as operational cycles, weekly failure reports, or counts of pages printed—has not been systematically addressed under a general class of discrete lifetime distributions. This paper specifically addresses this research gap. We propose a comprehensive framework for repair alert modeling by assuming that the discrete lifetimes of devices belong to the “power series family” of distributions. This approach encompasses a wide class of practically relevant discrete distributions. As a key component of this framework, we address the parameter estimation challenges for three significant and well-known distributions within this family. The Akaike Information Criterion is employed for optimal model selection, and approximate confidence intervals for the parameters of the chosen distribution are derived. The validity and practical utility of the proposed model are demonstrated through an insightful analysis of a real dataset.</p>Mohammad AtlehkhaniMahdi Doostparast
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-242026-02-241553416342910.19139/soic-2310-5070-2605Stochastic Modeling of a Vaccine-Structured Epidemic Model Using Data from South Africa
http://47.88.85.238/index.php/soic/article/view/2707
<p><strong>Introduction:</strong> Understanding the initial dynamics of an epidemic, especially whether it will establish itself or die out, is critical for public health policy. Deterministic models provide insight into average population behavior but cannot capture the random chance, or demographic stochasticity, that governs the fate of an outbreak when infectious case numbers are low. This is particularly relevant for COVID-19, where population heterogeneity due to vaccination significantly influences transmission. In this paper, we develop and analyze a vaccine-structured epidemic model to quantify the probability of disease extinction and understand how vaccination status impacts these early, uncertain dynamics.<br><strong>Materials and Methods:</strong> We formulated a deterministic model using a system of eight ordinary differential equations (ODEs) to represent non-vaccinated and vaccinated populations, incorporating waning immunity. A corresponding Continuous-Time Markov Chain (CTMC) model was developed to capture stochastic effects. The basic reproduction number, R_0, was derived using the next-generation matrix method. We applied multitype branching process theory to analytically calculate the probability of disease extinction (P_0) and used Gillespie's Stochastic Simulation Algorithm to run 10,000 CTMC sample paths to numerically approximate this probability (P_A) and the finite time to extinction. The model was grounded using parameters fitted to COVID-19 data from South Africa during the period from March 5, 2020 to March 21, 2022.<br><strong>Results:</strong> The basic reproduction number was calculated as R_0 is approximately equals to 1.41, indicating the potential for sustained transmission. The extinction probability derived from the branching process (P_0) showed excellent agreement with the simulated approximation (P_A). A key finding is that an infection introduced by a vaccinated individual has a significantly higher chance of extinction (P_A approximately equals 0.90-0.93) compared to one from a non-vaccinated individual (P_A approximately equals 0.75-0.78). Furthermore, outbreaks initiated by an infectious vaccinated person that do go extinct resolve the fastest (T approximately equals 35 days), while those from an infectious non-vaccinated person persist the longest (T approximately equals 62 days).<br><strong>Conclusion:</strong> This study demonstrates that vaccination provides a dual benefit in containing new disease introductions: it substantially increases the probability of stochastic fade-out and shortens the duration of abortive outbreaks. These findings highlight the limitations of relying solely on deterministic thresholds like R}_0 and underscore the importance of stochastic models in providing a more nuanced risk assessment for public health planning, emphasizing that vaccination is a powerful tool for preventing new sparks from becoming major epidemics.</p>Aubrey NdovieClaris ShokoOlusegun S. EwemoojeSivasamy Ramasamy
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-222026-02-221553430346710.19139/soic-2310-5070-2707The Prediction by using Nonlinear Autoregressive Model
http://47.88.85.238/index.php/soic/article/view/2770
<p>The study introduced a non-linear autoregressive model with stability conditions of it. This model was employed to Prediction the daily count of new Covid-19 infections in the Kingdom of Saudi Arabia for the year 2022 which the primary aim of this paper. A suggested autoregressive model was introduced, and the stability criteria for this model were identified. The model outlined in the research comprises two components: a linear component and a non-linear component, the latter incorporating a decreasing function. This feature enabled us to get a numerical example that satisfy the theoretically stability criteria that were found, as demonstrated in Example 1 of the study. The suggested model was employed on real data the time series of daily new Covid-19 infections in the Kingdom of Saudi Arabia throughout an uninterrupted three-month period in 2022. Utilizing a Python program, we calculated the values of the model's parameter constants to satisfy the stability conditions. We verified the residuals and utilized the model to predict the anticipated number of new Covid-19 infections for the next month in 2022.</p>Salim M. Ahmad Anas YounsAmmar Saad Abduljabbar
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-01-152026-01-151553468348410.19139/soic-2310-5070-2770Bayesian Estimation and Model Assessment of the Exponentiated Rayleigh Survival Model Using Laplace and MCMC Techniques: Applications with Right-Censored Medical Data
http://47.88.85.238/index.php/soic/article/view/2986
<p>This paper presents a comprehensive comparative study of Bayesian and classical estimation techniques for the Exponentiated Rayleigh (expRay) survival model, particularly in the presence of right-censored medical data. Using both analytic and simulation-based Bayesian methods—including Laplace approximation, Independent Metropolis (IM), and Gibbs sampling via JAGS—the model's flexibility and robustness are evaluated. Two real-world datasets, Intrauterine Device (IUD) discontinuation times and bladder cancer remission durations, are analyzed to illustrate the model's practical applications. Results from Maximum Likelihood Estimation (MLE) are benchmarked against Bayesian estimates, representing that the IM algorithm offers the best balance between computational efficiency and statistical accuracy. Model diagnostic including the posterior predictive checks (ppc) confirms the model's adequacy. The study highlights the suitability of the expRay model for modeling varying hazard rates in clinical survival data and establishes the IM-based Bayesian framework as an effective tool for medical survival analysis.</p>Md Tanwir AkhtarNajrullah KhanAthar Ali Khan
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-182026-02-181553485351110.19139/soic-2310-5070-2986Improved Risk Modeling for Concurrent Diabetes and Hypertension: A Biresponse Nonparametric Logistic Regression Approach
http://47.88.85.238/index.php/soic/article/view/3118
<p>One of the key strategic priorities in Indonesia’s health development, as outlined in the Sustainable Development Goals (SDGs) agenda, is to reduce premature mortality from Non-Communicable Diseases (NCDs) by one-third. Diabetes and hypertension are two closely related NCDs that often coexist. This study aims to develop a risk model for the simultaneous incidence of diabetes and hypertension using a biresponse approach. Data were collected from 211 patients at the Internal Medicine Polyclinic of Airlangga University Hospital Surabaya. A Chi-square dependency test revealed a significant association between the incidence of diabetes and hypertension. Additionally, the relationship between each predictor variable and the observed logit of diabetes and hypertension demonstrated a non-linear pattern, suggesting that the impact of predictor variables on the risk of both diseases is not linear. A comparison of the biresponse logistic regression model with both parametric and nonparametric approaches indicated that the Biresponse Nonparametric Logistic Regression model outperformed the parametric approach in terms of performance and stability. The model’s accuracy improved significantly from 0.436 to 0.626, and the Area Under the Curve (AUC) increased from 0.62 to 0.83.</p>Marisa RifadaNur ChamidahElly AnaBudi LestariDursun AydinNaufal Ramadhan Al Akhwal SiregarMuhammad Fikry Al Farizi
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-192026-02-191553512352510.19139/soic-2310-5070-3118A Parametric Exponential Entropy Measure for Neutrosophic Sets and It’s Application in Decision Making
http://47.88.85.238/index.php/soic/article/view/3129
<p>In this paper, we propose a novel exponential entropy measure for Single-Valued Neutrosophic Sets. Neutrosophic sets as an extension of fuzzy and intuitionistic fuzzy sets, designed to handle uncertain, indeterminate, and inconsistent information in a more refined manner. The proposed entropy measure captures the degree of uncertainty inherent in single valued neutrosophic sets by simultaneously considering the truth-membership, indeterminacy-membership, and falsity-membership degrees. We establish the essential mathematical properties of the proposed entropy measure, including validation. Furthermore, illustrative examples and potential applications in decision-making is presented to validate the practical utility of the proposed measure.</p>Vaishali Manish JoshiJavid Gani DarSara Mohamed Ahmed Alsheikh
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-242026-02-241553526353810.19139/soic-2310-5070-3129Comparative Study of Conventional and Spatial Panel Models in Analyzing the Information and Communication Technology Development Index
http://47.88.85.238/index.php/soic/article/view/3159
<p>Indonesia continues to face digital inequality, both in terms of inter-provincial disparities and in comparison, to<br>other countries in Southeast Asia, where it lags behind Vietnam and is far below Singapore. These disparities highlight spatial<br>heterogeneity, where regional characteristics influence by Information and Communication Technology (ICT) capacity<br>differently, and spatial dependence, where development in one province can spill over to others. Given the strategic<br>importance of ICT in driving sustainable growth, overcoming such inequality is crucial. This challenge is also strongly<br>linked to SDG 9 and SDG 10, which emphasize inclusive digital development as a pathway to reducing gaps across regions.<br>This research seeks to examine the determinants affecting Information and Communication Technology Development Index<br>(ICT-DI) across Indonesian province by contrasting traditional panel data models with spatial panel modeling techniques.<br>Secondary data from Central Bureau of Statistics (Indonesia) for 2020-2023 were used. Descriptive analysis and thematic<br>mapping were conducted, followed by estimation using the panel model, as well as spatial panel models including Spatial<br>Autoregressive Fixed Effect (SAR-FE) and Spatial Error Model Fixed Effect (SEM-FE). The results indicate significant<br>spatial dependence across provinces, confirming the relevance of spatial analysis. The SAR-FE model was identified as<br>the best model, explaining 98.47% of the variation in ICT-DI with the lowest MAPE value (1.1023). Population density<br>was identified as the only significant positive factor, indicating that more densely populated regions tend to have better<br>ICT infrastructure and capacity. The findings emphasize the novelty of applying spatial panel models to ICT analysis in<br>Indonesia and underline their policy relevance. Considering dependence and heterogeneity enables policymakers to design<br>more inclusive and sustainable strategies, tailored to the unique priorities of each province, to reduce digital inequality.</p>Toha SaifudinCitrawani MarthabaktiEzha Easyfa Wieldyanisa
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-01-272026-01-271553539355410.19139/soic-2310-5070-3159Different methods to estimate reliability for Exponentiated Mukherjee--Islam distribution under type II censoring
http://47.88.85.238/index.php/soic/article/view/3183
<p>In the field of reliability and survival times, we note that many data are naturally limited above. The Exponentiated Mukherjee--Islam distribution (EMID) is considered one of the most important finite-range survival time distributions. It has two shape parameters and a scale parameter. Among its properties is that it can track the skewness and behavior of the hazard function while maintaining support at $(0, \theta)$. In this paper, more than one method was used to estimate both parameters and reliability of the EMID under type~II censoring. The model estimators were derived using maximum likelihood (ML) and maximum product spacing (MPS) methods. To demonstrate the efficiency of the estimators obtained in this paper, we presented an extended Monte Carlo simulation study in which each estimator was compared based on the value of the root mean square error (RMSE) using \texttt{R~Studio}. The simulation study used eight groups of parameters, sample sizes $n = 30, 50, 100, 300$, and censoring ratios $C = 0.0, 0.1, 0.2, 0.3$. The results show a decrease in RMSE with larger~$n$. High censoring ratios lead to an amplified RMSE, especially for the scaling parameter~$\theta$. Maximum likelihood estimates (MLE) are better in small samples with low censoring ratios, whereas MPS estimators tend to provide more robust and efficient results.</p>Khalida Ahmed MohammedBan Ghanim Al-AniZaid Al-Khaledi
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-202026-02-201553555357110.19139/soic-2310-5070-3183A Decade of Progress in Drowsiness Detection Using Facial Features
http://47.88.85.238/index.php/soic/article/view/3192
<p>This study presents a comprehensive bibliometric analysis of research trends in drowsiness detection using facial features, examining publications from 2014 to 2024. Employing a PRISMA-guided methodology, we extracted and analyzed 347 documents from the Scopus database, revealing significant patterns in research productivity, citation impact, geographical distribution, institutional contributions, and thematic evolution. Our analysis identifies an annual growth rate of 18.59% in publication volume, with a notable surge after 2019, coinciding with the widespread adoption of deep learning approaches. Geographically, Asian countries dominate research output, with India (182 publications) and China (100 publications) leading contributions, while China garnered the highest citation impact (1370 citations). Through sophisticated co-occurrence network analysis, we identified four distinct research clusters: (1) Physiological Insights via Neural Networks, (2) Computer Vision-Based Drowsiness Detection, (3) Multi-Modal Fatigue Detection for Accident Prevention, and (4) Deep Learning for Biomedical Signal Analysis. Temporal analysis of keyword evolution reveals a shift from traditional machine learning approaches toward deep neural networks, Internet of Things integration, and real-time monitoring systems. Our thematic mapping further categorizes research into basic themes (CNNs, eye aspect ratio), motor themes (driver fatigue detection, cloud computing), niche themes (3D head pose estimation, behavioral measurement), and emerging/declining themes (pupil detection, blink detection systems). Systematic analysis of deployment metrics reveals critical gaps: only 3.3% of top-cited papers report frames per second (FPS), and none report latency or time-to-alarm (TTA), despite frequent claims of real-time capability. Analysis of multimodal approaches shows 74% of studies focus exclusively on facial features, while 26% incorporate physiological or vehicular signals. This comprehensive bibliometric landscape illuminates the field's evolution, identifies research gaps particularly regarding deployment readiness, ethnicity-specific considerations and low-light environments, and provides a strategic roadmap for future research directions in drowsiness detection systems.</p>Moh Hadi SubowoPulung Nurtantio AndonoGuruh Fajar ShidikHeru Agus Santoso
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-052026-02-051553572359510.19139/soic-2310-5070-3192On Truncated XLindley Distribution: Statistical Properties, Estimation Methods, and Application in Sciences
http://47.88.85.238/index.php/soic/article/view/3256
<p>This study investigates the statistical properties and practical utility of the truncated variants of the X-<br>Lindley distribution. Key distributional characteristics such as the probability density function and hazard<br>rate function are examined, highlighting their consistent and flexible behavior. Several statistical measures<br>are derived, including moments, quantile functions, and associated metrics, offering a comprehensive un-<br>derstanding of the model. Estimation of parameters is conducted using the maximum likelihood estimation<br>(MLE) method, specifically tailored for the truncated form. To validate the proposed models, a real dataset<br>concerning the strength of aircraft window glass is analyzed, demonstrating the improved fit and applicability<br>of the truncated X-Lindley distribution in reliability and survival data contexts.<br>The results underscore the distribution’s potential for modeling truncated lifetime data and its relevance<br>in engineering and insurance applications. Additionally, this contribution explores the statistical character-<br>istics and real-world applications of the truncated X-Lindley distribution in the context of actuarial science.<br>The practical utility of this distribution is demonstrated through its application to truncated age-at-death<br>data relevant to life insurance pricing. Comparative analysis with traditional and modern models such as the<br>Exponential, Weibull, Truncated Weibull, Truncated Lindley, Truncated Gamma, Truncated Exponential,<br>and Truncated Log-Normal distributions reveals that the truncated X-Lindley model provides a superior fit<br>and more accurate estimation of the Net Single Premium (NSP). These results highlight the effectiveness<br>of the proposed model in life insurance valuation and its potential for broader use in actuarial and survival<br>data analysis.<br>Furthermore, the study introduces a reliability analysis using the truncated X-Lindley distribution for<br>machine failure time data. This analysis demonstrates the model’s suitability for predicting failure times in<br>engineering applications, showcasing its versatility across different sectors.</p>Ahlem DjebarZeghdoudi Halim
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-01-182026-01-181553596361210.19139/soic-2310-5070-3256 Hybrid Modeling of Currency Circulation Volatility: Evidence from the Central Bank of Iraq
http://47.88.85.238/index.php/soic/article/view/3279
<p>The currency in circulation is a key element of the monetary supply system of the Iraqi economy because it<br>reflects the level of economic activity and the liquidity level in the market. It can be expressed as an important tool when formulating monetary policy. This research aims to analyze and forecast the behavior of the currency in circulation in Iraq using the ARMA-GARCH model for monthly data from 2004 to 2025 to understand the dynamics of monetary liquidity, The sample was divided into two parts: approximately 80% for the training set (2004-2021), and approximately 20% for the testing set (2022-2025). Data were analyzed in Python using many packages. The results showed that the time series was initially non-stationary but became stationary after the first difference. The presence of the ARCH effect, i.e., the unstable variance, justifies the use of the GARCH model to analyze volatility. The ARMA-GARCH model was found to be the most appropriate model for representing the data, achieving the lowest values for the used criteria: RMSE, MAE, BIC, AIC. Furthermore, this study provides a free, open API that allows researchers from around the world to upload their own datasets and obtain instant comprehensive statistical analyses (including tables, diagnostics, and downloadable plots) based on an identical ARMA-GARCH framework. This paper recommends combining the GARCH model with machine learning techniques or Bayesian models, which will increase the accuracy of predictions and the effectiveness of future monetarypolicy choices</p>Abdulrazzaq Tallal AkramOmar Ali
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-182026-02-181553613362910.19139/soic-2310-5070-3279Distributional Analysis and Risk Assessment of U.K. Motor Non-Comprehensive Claims Using the Log-Exponential Family with Properties and Characterizations
http://47.88.85.238/index.php/soic/article/view/3451
<p><span class="fontstyle0">This paper studies the log-exponential-exponential (LEE) distribution which is a novel special case of the logexponential G (LE) family, tailored for flexible modeling of insurance claim sizes. The LEE distribution demonstrates exceptional versatility in capturing diverse density shapes including light-tailed with different forms, whose sign determines<br>the direction of skewness. We derive explicit expressions for its probability density function and establish rigorous<br>characterizations using truncated moments and reverse-hazard rate identities. A comprehensive simulation study is conducted<br>to assess the performance of six estimation techniques: maximum likelihood estimation (MLE), ordinary least squares (OLS),<br>Cramer–von Mises estimation (CVME), Anderson–Darling estimation (ADE), right-tail Anderson–Darling estimation ´<br>(RTADE), and left-tail Anderson–Darling estimation (LTADE), across various parameter configurations and sample sizes.<br>Finally, we compute key risk indicators (KRIs) including Value-at-Risk (VaR), Tail Value-at-Risk (TVaR), Tail Variance<br>(TV), Tail Mean–Variance (TMV), and Expected Loss (EL) using all six estimation methods, applied to real U.K. motor<br>non-comprehensive claims triangle data</span></p>Mohamed IbrahimG. G. HamedaniAbdullah H. Al-NefaieHaitham M. Yousof
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-222026-02-221553630365410.19139/soic-2310-5070-3451Environmental data modeling of different locations in Iraq based on the Ramos-Louzada distribution and its extensions
http://47.88.85.238/index.php/soic/article/view/3580
<p>Environmental data modeling, particularly wind speed variations across Iraq's diverse regions, supports renewable energy assessment and climate risk management amid fossil fuel dependency challenges. This study applies the Ramos-Louzada (RL) distribution and its extensions Generalized RL (GRL), Inverse Power RL (IPRL), Exponentiated GRL (EGRL), Inverse RL (IRL), and Ramos-Louzada Exponential (RLE) distribution to hourly 2023 wind speed data from four Iraqi cities: Basrah, Al-Sulaymaniyah, Tikrit, and Al-Kut. Parameters are estimated via maximum likelihood, with model performance evaluated using several criteria. Results reveal location-specific fits, with average wind speeds ranging from 2.75 m/s (Al-Sulaymaniyah) to 4.76 m/s (Basrah), all positively skewed and moderately kurtotic. The IRL distribution outperforms others across all sites, achieving highest coefficient of determination (R<sup>2</sup>) (0.9758–0.9877) and lowest root mean square error (0.0332–0.0432), Akaike information criterion, Bayesian information criterion, and the Kolmogorov–Smirnov statistic (KS), surpassing IPRL (second-best) and RL baselines. While, RLE distribution consistently ranks lowest. Further, IRL distribution also exceeds Weibull distribution benchmarks, with superior R<sup>2</sup> and reduced KS by up to 52%. These findings highlight RL extensions' flexibility for heavy-tailed, skewed wind regimes, informing wind energy potential, site-specific turbine design, and environmental forecasting in Iraq.</p>Rikan AhmedZakariya AlgamalZakariya Shehab
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-03-302026-03-301553655366710.19139/soic-2310-5070-3580Some Properties of Central Local Metric Dimension on Corona Product Graph
http://47.88.85.238/index.php/soic/article/view/2639
<p>In this paper, we present an exploration of the central local metric dimension in corona product graphs. The<br>central local metric dimension is a new development concept of a local metric set containing all central vertices. In real life, there are many applications of local metric dimension and central vertices. If a vital object is represented as a vertex in a graph, then its placement can use the concept of a central vertex so that people can easily reach it. Suppose the vital objects are health services, education, and water stations. The government can use the concept of local metric dimension to optimize transportation infrastructure management and create good transportation governance for these vital objects. Suppose G is a connected graph with vertex set V (G) and order n. A central vertex in G is a vertex with the shortest distance to any other vertex in G. A central set, S(G), is a set whose elements are all the central vertices in G. Suppose W is a local metric set of G, W is a central local metric set of G if S(G) ⊆ W. If W is a local metric set with minimal cardinality, then |W| is the central local metric dimension of G. This paper presents some properties of the central local metric dimension of G ⊙ H. The results show that the elements of the central set of G ⊙ H are vertices in V (G ⊙ H) that coming from the central set of<br>G. Since in G ⊙ H, the i-th vertex of G adjacent to all vertices of i-th copies of H, then there is no intersection between the central set of G ⊙ H and the local metric set of G ⊙ H.</p>Yuni ListianaLiliek SusilowatiSlamin SlaminKamal Dliou
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-202025-11-201553668367710.19139/soic-2310-5070-2639Parenting Fitness in Genetic Algorithms: Empirical Advantages Over NSGA-II in Logistics Optimization
http://47.88.85.238/index.php/soic/article/view/2669
<p>Genetic algorithms (GAs) are population-based metaheuristics widely employed to solve complex real-world problems such as networking and resource allocation. These algorithms evolve a population of candidate solutions through iterative processes of selection, crossover, and mutation. The composition of the next generation is determined by either a general approach, where only offspring are retained, or an elitist approach, which selects fit solutions from the current generation. While elitism enhances solution quality, it is susceptible to premature convergence.</p> <p>This article presents a comparative study between the parenting fitness mechanism and the crowding distance approach used in the Non-dominated Sorting Genetic Algorithm II (NSGA-II) for solving the Vehicle Routing Problem with Drones (VRPD). Experimental evaluations demonstrate that the proposed parenting fitness method yields consistent improvements in solution quality. Relative improvements range from 27.30% to 43.83% for small problem instances, 6.36% to 11.41% for medium instances, and 0.54% to 1.90% for large instances, with performance variations influenced by the population size. These results validate the effectiveness of parenting fitness as a diversity-preservation strategy in mono-objective optimization.</p>OUISS MustaphaETTAOUFIK AbdelazizMARZAK Abdelaziz
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-01-272026-01-271553678369610.19139/soic-2310-5070-2669On the r-Hued Edge Chromatic Number of Corona Products of Ladder, Cycle, and Wheel Graphs
http://47.88.85.238/index.php/soic/article/view/2910
<p>This paper explores the concept of r-hued edge coloring in simple graphs, wherein each edge must be adjacent to at least minrdeg(e) edges of distinct colors, where deg(e) denotes the number of edges adjacent to a given edge e. The minimum number of colors required to achieve such a coloring in a graph G is known as the r-hued edge chromatic number, denoted by r(G). We compute r(G) for various graph constructions involving corona products,<br>speci cally focusing on combinations of ladder graphs, cycle graphs, and wheel graphs.</p>S. PalaniammalV.C. THILAK RAJKUMAR
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-252026-02-251553697370510.19139/soic-2310-5070-2910Q-Learning-Assisted Simulated Annealing for Traveling Salesman Problem Optimization
http://47.88.85.238/index.php/soic/article/view/3028
<div>Simulated Annealing (SA) is a well-established metaheuristic for tackling combinatorial optimization problems. It draws inspiration from the physical process of annealing in metallurgy. In the optimization context, SA iteratively explores the solution space by accepting not only improving solutions but also, with a temperature-dependent probability, non-improving ones. This mechanism enables the algorithm to escape local optima, thereby enhancing its ability to approach the global minimum of an objective function. Nevertheless, its overall performance is susceptible to the choice of the cooling schedule and the use of fixed neighborhood structures. In this work, we include Q-learning into the SA framework to improve its flexibility. Q-learning is a model-free, value-based method that enables an agent to learn optimal action-selection policies by iteratively updating Q-values using rewards obtained through exploration of the environment. </div> <div>The suggested approach directs the search toward more promising areas by dynamically choosing a leader solution from a predefined set of potential solutions that are updated during iterations, using a learned Q-policy. The Q-values are updated according to the relative improvement each leader provides over time, allowing adaptive exploitation of successful guides. Experimental results on popular benchmark instances of the Travelling Salesman Problem (TSP) from TSPLIB95 demonstrate that the Q-learning-guided SA achieves better solution quality compared to classical SA in most of the tested instances. These results demonstrate how experience-driven decision-making in reinforcement learning can enhance metaheuristic performance.</div>NOUHAILA ADILFAKHITA EDDAOUDIHALIMA LAKHBABMOHAMED NAIMI
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-182026-02-181553706373010.19139/soic-2310-5070-3028New search direction based on a class of parametric kernel functions with a Full Newton step Infeasible O(nL) Interior point Methods for Linear Optimization
http://47.88.85.238/index.php/soic/article/view/3076
<div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p><span style="font-size: 9.000000pt; font-family: 'NimbusRomNo9L';">In this paper which is inspired by the work of Roos [</span><span style="font-size: 9.000000pt; font-family: 'NimbusRomNo9L'; color: rgb(100.000000%, 0.000000%, 0.000000%);">7</span><span style="font-size: 9.000000pt; font-family: 'NimbusRomNo9L';">] (SIAM J. Optim. 16(4):1110-1136, 2006), we analysed a new search direction based on a class of parametric kernel functions for IIPMs algorithms. The main iteration of the algorithm consists of a feasibility step and several centrality steps. The neighborhood of Newton process is more wider using a sharper quadratic convergence results. The complexity is polynomial and coincides with the currently best known iteration bound based on centrality steps. </span></p> </div> </div> </div>Samir BOUALI
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-042026-02-041553731375110.19139/soic-2310-5070-3076Image Recovery from Presence of Salt-and-Pepper Noise using Robust Conjugate Gradient Method
http://47.88.85.238/index.php/soic/article/view/3213
<p>Conjugate gradient techniques concentrate on the conjugate of the coefficient. In this research, we provide a novel coefficient conjugate gradient method to remove impulsive noise from images using the Taylor series. We have presented an intriguing conjugate gradient approach and a search strategy based on the new coefficient conjugate. Under certain conditions, we show that the proposed method converges globally. Our numerical findings show that this method works well for picture restoration.</p>Ayad Abdulaziz MahmoodSaif A. HusseinBasim Abas Hassan
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-01-132026-01-131553752376010.19139/soic-2310-5070-3213Optimal Control Strategy for Fractional Model of Monkeypox Transmission Under Real Data
http://47.88.85.238/index.php/soic/article/view/3283
<p>Monkeypox (mpox) is a zoonotic infectious disease that has re-emerged as a global public health concern due to its increasing transmission in various regions. In this study, we propose a fractional-order epidemiological model to investigate the transmission dynamics of mpox involving human and rodent populations. The use of fractional-order derivatives allows the model to incorporate memory effects, which are relevant for capturing the long-term influence of past infections, immune responses, and exposure history. To evaluate effective intervention measures, an optimal control framework is developed by combining two time-dependent control strategies: human vaccination and rodent eradication. The optimal control problem is solved using Pontryagin's Principle of Minimum in conjunction with a forward-backward iterative algorithm, while the fractional-order system is numerically approximated using an Eulerian scheme. Model parameters are estimated using real mpox case data, and the performance of the fractional-order model is compared across different fractional-order values. Numerical simulations show that the combined control strategy significantly reduces the infected population and overall implementation costs compared to a single control intervention. Furthermore, the results show that higher fractional orders, approaching the integer order case, result in improved system performance and earlier separation between control strategies. These findings highlight the importance of memory effects in mpox transmission dynamics and provide insights for designing efficient and cost-effective intervention policies.</p>Muhammad Akbar HidayatFatmawati FatmawatiCicik AlfiniyahEbenezer Bonyah
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-01-062026-01-061553761377910.19139/soic-2310-5070-3283DPACO: Dynamic Tuning of ACO with Adaptive Strategy for QoS-Aware Web Service Composition
http://47.88.85.238/index.php/soic/article/view/3355
<p>With the proliferation of services available on the Web, modern systems increasingly require mechanisms able<br>to orchestrate multiple services in order to meet complex business requirements. Web service composition is an efficient<br>approach to transform these basic services into coherent composite solutions, while ensuring the scalability and flexibility<br>essential in distributed environments. However, Web service composition (QoS-aware WSC) based on Quality of Service<br>presents a major problem. For each task in a workflow, several candidate services are available, characterized by significant<br>heterogeneity and variability. In order to select the optimal combination of services able to satisfy QoS constraints transforms<br>the problem into a complex multi-objective optimization challenge that is difficult to solve in polynomial time, especially in<br>large-scale and dynamic environment. In this paper, we proposed a new approach called Dynamic Parameter Ant Colony<br>Optimization (DPACO), an enhanced variant of the ACO algorithm for QoS-aware Web service composition. DPACO<br>introduces adaptive parameter control and a dynamic pheromone injection strategy, which improve the balance between<br>exploration and exploitation. This adaptivity helps to avoid stagnation and local optimal, while reducing search time and<br>ensuring robustness in dynamic and heterogeneous environments. Experimental evaluations conducted on multiple datasets<br>demonstrate the superiority of the proposed approach compared to classical ACO and its variants (SACO, MOACO, FACO,<br>and EFACO). Specifically, it achieves improvements of up to +15.2% over ACO, +12.6% over SACO, +9.8% over MOACO,<br>+7.3% over FACO, and +11.6% over EFACO in terms of solution quality. Moreover, the adaptive injection and parameter<br>adjustment mechanisms significantly reduce the computational time required to reach optimal composite services.</p>Naoufal EL ALLALIabderrahim zannouOmayma MahmoudiHakima Asaidi Mohamed Bellouki
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-202026-02-201553780381110.19139/soic-2310-5070-3355A Statistical and Optimization-Based Framework for Evaluating Digital Learning Platforms Using Student Survey Data
http://47.88.85.238/index.php/soic/article/view/3396
<p>The increasing adoption of digital learning platforms in higher education has led to the widespread use of student survey data to assess learning experiences and outcomes. While such data provide valuable insights into learners’ perceptions, their analytical potential is often underexploited due to the predominance of descriptive approaches. This paper proposes a statistical and optimization-based framework for evaluating digital learning platforms using student survey data. Perceived learning improvement is modeled as an ordinal response variable influenced by multiple explanatory factors, including accessibility, motivation, content adequacy, technical constraints, time availability, and external practice. The empirical analysis relies on survey data collected from 300 undergraduate students and combines multivariate regression modeling with exploratory factor analysis to identify key determinants and latent dimensions underlying student perceptions. The statistical results are subsequently embedded within an optimization framework that formalizes learning effectiveness as an objective function under practical resource constraints. The findings reveal that motivation and content adequacy are the most influential factors in explaining perceived learning improvement, while technical constraints exert a negative but secondary effect. By integrating statistical inference with optimization-oriented reasoning, the proposed framework provides a structured and decision-relevant approach for the quantitative evaluation of digital learning platforms in higher education.</p>Zouhair OUAZENEAmina KARROUMJamal AMRAOUIRachida GOUGIL
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-182026-02-181553812382410.19139/soic-2310-5070-3396New Family of Dai–Liao Conjugate Gradient Directions for Large-Scale Unconstrained Optimization with Applications to Image Restoration
http://47.88.85.238/index.php/soic/article/view/3584
<p style="font-weight: 400;">This paper addresses the critical challenge of designing efficient and robust conjugate gradient (CG) methods for large-scale unconstrained optimization, where classical CG variants often suffer from insufficient descent properties and convergence failures without restrictive line searches. We introduce two novel Dai-Liao-type CG variants, BH and BI, derived via functional approximations that incorporate objective reduction and curvature information within the Dai-Liao conjugacy framework. Important features of the proposed methods include the inherently satisfying of sufficient descent condition independent of the line search and preserving conjugacy while enhancing adaptability through problem-dependent scaling parameters. The global convergence is established under standard assumptions (Lipschitz gradients, convex level sets). Extensive numerical experiments on set of large-scale test problems demonstrate that the proposed algorithms significantly outperform some classical CG methods in iterations, function evaluations and CPU time. Performance profiles confirm their superior efficiency and robustness.</p>Basim A. HassanTalal M. AlharbiSulaiman M. IbrahimHashibah HamidSalah Mahmoud Boulaaras
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-222026-02-221553825384010.19139/soic-2310-5070-3584Control Design on a Non-minimum Phase Nonlinear System by Output Redefinition and Particle Swarm Optimization Method
http://47.88.85.238/index.php/soic/article/view/2436
<p>In this paper, we study the control design of a non-minimum phase nonlinear system. Here we investigate a nonlinear system in a particular class and use coordinate transformation to determine the normal form of the system. We present some theorems that state a non-minimum phase nonlinear system becomes the minimum phase with a new output. We further use the new output to determine the control variable, and we use particle swarm optimization to determine the desired output of the output that has been selected. From this design, the output of the system can track the desired output of the original system.</p>Ahmadin AhmadinFatmawati FatmawatiEdi WinarkoWindarto Windarto
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-12-012025-12-011553841385510.19139/soic-2310-5070-2436Enhancing RSA image encryption performance with multi-threaded parallel processing
http://47.88.85.238/index.php/soic/article/view/2592
<p>With the growing need for secure image transmission online, efficient encryption methods are essential. Existing RSA-based image encryption techniques often suffer from high computational delays due to sequential pixel processing. To address this, we propose a parallelized RSA encryption/decryption program that utilizes multi-threading for concurrent execution. By dividing the workload across threads and supporting multi-processor environments. Our solution achieves significantly faster processing times compared to conventional approaches.</p>Sameh AhmedNabil Amein Ali
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-242026-02-241553856387310.19139/soic-2310-5070-2592Enhancing Two-Wheeler Rider Safety with Helmet-Based Head Movement Monitoring and IoT Integration Using Safe Angle Formula
http://47.88.85.238/index.php/soic/article/view/2668
<p>Two-wheeler safety is crucial due to riders’ vulnerability, yet existing mechanisms often overlook unintentional head movements that compromise control and awareness. Current Internet of Things-based safety systems focus on collision detection, braking assistance, and rider posture monitoring, but often neglect the risks posed by unintended head movements, which can lead to loss of control and harmful accidents. To address this gap, we proposed a helmet-mounted system that calculates a safe head movement angle using a specialized safe angle formula based on motorcycle speed. Our system employs integrated gyroscopic sensors and Global Positioning System data to continuously monitor head orientation and speed, providing real-time alerts. The bike sensor records speed and Z-axis angles to determine bike tilt, while the helmet sensor captures the riders’ head angle. This data is transmitted to a microcontroller in the bike unit, which calculates the angle difference and sets a speed-based safe threshold. If head movement surpasses this calculated threshold, the system triggers audible and visual alarms. By ensuring that riders maintain a safe head position while riding, this system minimizes distractions that could lead to collisions, particularly at high speeds. The proposed solution enhances two-wheeler safety by preventing accidents caused by sudden or excessive head movements, integrating seamlessly with existing motorcycle safety mechanisms. Real-world testing and validation of our system demonstrate its effectiveness in reducing the likelihood of accidents caused by unintentional head movements. This innovation highlights the importance of maintaining safe head orientation and suggests integration into broader safety strategies, advanced rider training, and protective measures. The introduction of the safe angle formula positions this system as a potential benchmark for future motorcycle safety technologies.</p>Rahat TamzidEhfaz Faisal MaheeMd. Anwar Hussen WadudAnichur RahmanMiraz AhmedMd Ashikur RahmanAbu Saleh Musa MiahFahim Al FaridSarina Mansor
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-232026-02-231553874389610.19139/soic-2310-5070-2668Optimal Investment Decision Making Under Two Factor Uncertainty Using L´evy Processes
http://47.88.85.238/index.php/soic/article/view/2727
<p>This research presents and examines a problem in which a production company makes investment decisions using the real options approach. The assumption is that investment decisions are based on the dynamics of revenue streams from two different products. The processes are driven by geometric Brownian motion and compensated Poisson random jumps. In this situation, the classical net present value approach has some glaring shortcomings in modelling uncertainty associated with investment decisions, especially in environments characterised by sudden changes in production streams. To address these limitations, this research applies stochastic optimal stopping theory for L\'evy processes to investigate the problem. The main result is a theorem presented as variational inequalities for the optimal stopping problem. Partial integro-differential equations are derived from the valuation problem. An efficient, stable and convergent numerical scheme is deployed to solve the partial integro-differential equation. The results of the research show that infinite jump activities affect investment thresholds. The work also demonstrates the impact of L\'evy markets on the decision process of production firms.</p>Phyonah Oratile MokokaEriyoti ChikodzaRonald Tshelametse
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-01-292026-01-291553897392210.19139/soic-2310-5070-2727Unified Fixed Point Theory in Generalized Metric Structures with Applications to Nonlinear Economic Systems
http://47.88.85.238/index.php/soic/article/view/2818
<p>This paper introduces a comprehensive framework unifying recent advancements in fixed point theory through the novel concept of \emph{twisted weighted $\Theta$-$b$-metric spaces}. We establish a framework of fixed point theorems for multi-valued mappings satisfying generalized rational type contractions that incorporate control functions, weight functions, and twisted admissibility conditions. By synthesizing concepts from \v{C}iri\'{c}-type contractions, Berinde's almost contractions, Jleli's $\Theta$-contractions, and weighted $b$-metric spaces, we create a powerful analytical tool with unprecedented theoretical depth. The work provides rigorous proofs, extensive numerical validation, and demonstrates significant applications to economic systems, including production-consumption equilibrium models and fractional economic growth equations. Our results substantially generalize numerous classical theorems while opening new avenues for research in nonlinear analysis and mathematical economics.</p>Haitham Qawaqneh
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-01-122026-01-121553923394410.19139/soic-2310-5070-2818Enhancing Echocardiographic Segmentation through KL-Divergence-Based Intensity Distribution Constraints in U-Net Models
http://47.88.85.238/index.php/soic/article/view/2869
<p>Incorporating prior knowledge such as constraints can be crucial to improving the performance of image analysis methods, particularly when dealing with corrupted images, poor quality images, low contrast, and a lack of training data. To our knowledge, there has been nothing in the literature using the KL-divergence between the density distributions of the segmented objects as a constraint. Instead, the KL-divergence is used as a loss function between the predicted image and its label. In the present paper, we try to demonstrate the efficiency of applying an intensity distribution of the region of interest as a constraint with the KL-divergence function in 2D echocardiographic imaging segmentation (left ventricle segmentation). For this, we use the U-net neural network to which we added a second pseudo output which serves as a constraint, trained on echocardiographic medical images of good, medium and poor quality. A two-point improvement was achieved by applying our constraint, when compared with the use of the cross-entropy alone. For images where the cross-entropy performs better than our method, imposing a constraint provides a smoother and more logical segmentation, with a shape that more closely resembles the label than that obtained by only using the cross-entropy. In the segmentation of medical ultrasound images, the use of an intensity distribution as a constraint can be highly beneficial, especially when the target region has an intensity distribution different from that of the background distribution.</p>Zahir AITMATENSoraya ALOUIAhror BELAID
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-22 | Vol. 15 No. 5, pp. 3945–3963 | DOI: 10.19139/soic-2310-5070-2869
Cross-Attention Transformer Networks with Optimized Feature Selection for Explainable Respiratory Disease Classification
http://47.88.85.238/index.php/soic/article/view/2912
<p>Respiratory diseases require accurate diagnosis for effective treatment, yet traditional methods rely on subjective assessments and expensive procedures. This paper presents a transformer-based cross-attention framework for acoustic respiratory disease classification with explainable AI integration. The CrossAttentionAcousticNetwork combines CNN spectral feature extraction with transformer temporal modeling, enhanced by cross-attention mechanisms for multi-modal feature fusion. Namib Beetle Optimization selects discriminative features from 100-dimensional handcrafted and deep spectral representations, while LIME provides clinical interpretability. Evaluation on ICBHI 2017 and KAU datasets achieves 99.0% and 95.0% accuracy respectively, representing 21.39% improvement over existing methods. The framework demonstrates superior performance across asthma, COPD, pneumonia, and heart failure classifications while maintaining computational efficiency for real-time deployment. Integrated explainable AI reveals clinically relevant acoustic patterns, with data augmentation improving minority class recognition by 19%. This approach bridges the gap between high-performance deep learning and clinical transparency requirements for automated respiratory disease screening.</p>Bajes Zeyad Aljunaeidi, Mohammed Tawfik, Issa M. Alsmadi, Yasser Mohammad Al-Sharo
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-12-01 | Vol. 15 No. 5, pp. 3964–3985 | DOI: 10.19139/soic-2310-5070-2912
Fixed Point Results in Neutrosophic n-Controlled Bipolar Metric Spaces
http://47.88.85.238/index.php/soic/article/view/2945
<p>We present the idea of Neutrosophic n-Controlled Bipolar Metric space in this study. We use n-non-comparable functions to establish several fixed point results. Furthermore, we employ a version of the Banach contraction principle to extrapolate the conclusions and provide a few non-trivial examples. Subsequently, we apply the main findings to solve financial modeling problems in fractional differential equations.</p>M. Rathivel, C. Inbam, M. Jeyaraman
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-21 | Vol. 15 No. 5, pp. 3986–3997 | DOI: 10.19139/soic-2310-5070-2945
A Hybrid Chaos-Based Approach for Securing Color Images Using Sine and Logistic Maps
http://47.88.85.238/index.php/soic/article/view/3046
<p>This paper proposes an innovative method to enhance the security of color images by integrating hybrid cryptographic techniques, specifically using Sine maps and a double application of the Logistic map. The proposed method encrypts a color image in several distinct phases. During the first phase, the image is converted into an M×N matrix. Before applying the Sine map, a bit permutation of the image is performed to increase confusion and effectively prepare the data for the next step. In addition, the Sine maps generate sequences for every color channel: red, green, and blue. We apply a bit-by-bit XOR operation between the matrix and the sequences, combining the results to obtain an encrypted image. In the second phase, the Logistic map is applied to the image obtained from the first phase. This step follows the same procedure as the sequence generation with the Logistic maps until the encrypted image is obtained. The final phase of the proposed method involves reapplying the Logistic maps to the image from phase 2, with modified Logistic map parameters. A comparison with established techniques demonstrates how effective the recommended strategy is in terms of both security and computational efficiency. Experimental results show resilience against various attacks, providing insights for practical integration into image security frameworks.</p>Fatima Koulouh, Safae AMINE, Mohammed ES-SABRY, Nabil EL AKKAD
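The sketch below illustrates just one phase of the pipeline described above: generating a chaotic keystream from the Logistic map and XOR-ing it with a flattened colour channel. The control parameter, initial condition, and byte quantization are illustrative assumptions, and the bit-permutation and Sine-map phases of the proposed scheme are omitted.

```python
import numpy as np

def logistic_keystream(length, x0=0.7, r=3.99):
    """Byte keystream from the Logistic map x <- r * x * (1 - x)."""
    x = x0
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256          # quantize the chaotic state to a byte
    return out

def xor_encrypt_channel(channel, x0=0.7, r=3.99):
    """XOR one colour channel (uint8 array) with the chaotic keystream."""
    flat = channel.reshape(-1)
    keystream = logistic_keystream(flat.size, x0, r)
    return np.bitwise_xor(flat, keystream).reshape(channel.shape)

# Decryption with the same key parameters is the identical XOR operation.
red = np.arange(16, dtype=np.uint8).reshape(4, 4)
cipher = xor_encrypt_channel(red)
assert np.array_equal(xor_encrypt_channel(cipher), red)
```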
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-04 | Vol. 15 No. 5, pp. 3998–4020 | DOI: 10.19139/soic-2310-5070-3046
Hyperfuzzy and SuperHyperfuzzy Weighted Averages
http://47.88.85.238/index.php/soic/article/view/3092
<p>Uncertainty modeling is fundamental to decision‑making across diverse domains, and numerous frameworks, such as Fuzzy Sets, Rough Sets, Hesitant Fuzzy Sets, Neutrosophic Sets, and Plithogenic Sets, have been developed to capture different facets of imprecision. Among these, Hyperfuzzy Sets and their recursive generalization, SuperHyperfuzzy Sets, assign set‑valued membership degrees at multiple hierarchical levels to represent uncertainty more richly. A Fuzzy Weighted Average computes a weighted mean of fuzzy numbers by applying the extension principle to their membership functions. In this paper, we extend this concept by defining the Hyperfuzzy Weighted Average and the SuperHyperfuzzy Weighted Average based on Hyperfuzzy and SuperHyperfuzzy Sets. We present formal definitions, prove key properties, such as well‑definedness, reduction to classical cases, and idempotency, and illustrate their application through examples, demonstrating enhanced aggregation of multi‑level uncertainty.</p>Takaaki Fujita, Iqbal M. Batiha, Nidal Anakira, Mohammad S. Hijazi, Areen Al-Khateeb, Tala Sasa
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-25 | Vol. 15 No. 5, pp. 4021–4033 | DOI: 10.19139/soic-2310-5070-3092
Quantifying Crypto Portfolio Risk: A Simulation-Based Framework Integrating Volatility, Hedging, Contagion, and Monte Carlo Modeling
http://47.88.85.238/index.php/soic/article/view/3107
<p>Four key modules—volatility stress testing, stablecoin hedging, contagion modeling, and Monte Carlo simulation—are integrated into our modular, simulation-based framework for cryptocurrency portfolio risk. Stress design through volatility and correlation shocks, portfolio construction under static weights for controlled comparison, and multivariate price dynamics under tractable assumptions (log-normal baseline with correlation coupling) are all formalized by the mathematical architecture. Value-at-Risk (VaR) and Expected Shortfall (ES) backtesting, sensitivity analysis (shock magnitudes and rolling windows), and calibration diagnostics are all part of the empirical validation using daily BTC, ETH, and USDT data (2020–2024). A roadmap for future extensions, such as GARCH-type volatility models, jump-diffusion processes, copula-based contagion, network adjacency based on on-chain data, and EVT-based tail validation, is provided. We also acknowledge the limitations of distributional assumptions and linear dependence. The result is a reproducible, crypto-native risk framework with clear pathways for enhanced realism and broader asset coverage.</p>Kiarash Firouzi
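A compressed view of the Monte Carlo module is sketched below: correlated daily log-returns for a three-asset portfolio are simulated under a log-normal baseline, and Value-at-Risk and Expected Shortfall are read off the simulated loss distribution. The weights, drift, and covariance values are placeholders rather than the calibrated BTC/ETH/USDT inputs of the paper, and the stress, hedging, and contagion modules are not shown.

```python
import numpy as np

def var_es(weights, mu, cov, horizon_days=10, n_sims=100_000, alpha=0.99, seed=1):
    """Monte Carlo VaR and Expected Shortfall for a static-weight portfolio
    under correlated lognormal daily returns (baseline assumptions only)."""
    rng = np.random.default_rng(seed)
    daily = rng.multivariate_normal(mu, cov, size=(n_sims, horizon_days))
    horizon_return = np.exp(daily.sum(axis=1)) @ weights - 1.0
    losses = -horizon_return
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es

# Illustrative static weights and moments (hypothetical BTC, ETH, USDT ordering).
w = np.array([0.5, 0.4, 0.1])
mu = np.array([0.001, 0.0008, 0.0])
cov = np.array([[4e-4, 3e-4, 0.0],
                [3e-4, 5e-4, 0.0],
                [0.0,  0.0,  1e-8]])
print(var_es(w, mu, cov))
```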
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-04 | Vol. 15 No. 5, pp. 4034–4057 | DOI: 10.19139/soic-2310-5070-3107
STEM-Based Analysis of Avocados (Persea americana Mill.) Leaf Classification using Deep Convolutional Neural Networks
http://47.88.85.238/index.php/soic/article/view/3124
<p>Avocado (Persea americana Mill.) is a high-value horticultural crop with great potential to support sustainable food security. However, its productivity is often limited by variety variability and the lack of efficient seed selection methods. This study proposes a machine learning-based framework for avocado variety classification through leaf morphology, including shape, size, texture, and colour. We integrated and evaluated Deep Convolutional Neural Network (DCNN) technology to identify avocado types from leaf images, comparing a deep learning architecture with known approaches: EfficientNetV2, ConvNeXt, and Vision Transformer (ViT)-Hybrid CNN. A total of 1,400 images were divided into training and testing data, containing 980 and 420 images, respectively. Hyperparameters were tuned, with 100 epochs and a learning rate of 0.0001 providing the highest accuracy. The results show that the developed convolutional neural network (CNN) achieved the highest accuracy of 97.83% on the EfficientNetV2 architecture, while ConvNeXt achieved 96.28% and Vision Transformer (ViT)-Hybrid CNN achieved 95.14%. The DCNN algorithm therefore performed best on the EfficientNetV2 architecture, with an accuracy of 97.83% and a computational cost of 1.34 GFLOPs.</p>Rina Sugiarti Dwi Gita, Zainur Rasyid Ridlo, Rifki Ilham Baihaki, Firma Nur Muttakin, Dafik, Arika Indah Kristiana
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-27 | Vol. 15 No. 5, pp. 4058–4082 | DOI: 10.19139/soic-2310-5070-3124
Explainable Deep Neural Network for a Reliable Intrusion Detection System with Shapley Additive Explanation Method
http://47.88.85.238/index.php/soic/article/view/3141
<p>The escalating complexity of cyber-attacks and the severe consequences of data breaches necessitate a shift toward more advanced, yet accountable, network security infrastructures. While deep learning models offer superior performance in identifying intrusions, their inherent "black-box" nature hinders practical adoption in critical environments where security analysts must verify alerts, justify defensive actions, and conduct forensic audits. To address this lack of transparency, this study proposes an explainable Deep Neural Network (DNN) framework for a reliable Intrusion Detection System (IDS) using the CIC-IDS2017 dataset. By integrating the Shapley Additive Explanations (SHAP) method, we bridge the gap between high-performance detection and interpretability. Our experimental results demonstrate that our minimalist DNN architecture achieves an outstanding accuracy of 99.57% and an AUC-ROC of 0.9997, maintaining high detection rates across various attack types with significantly lower computational overhead compared to complex hybrid models. Furthermore, the SHAP analysis identifies Flow IAT Std and Packet Length Variance as the most influential features, offering granular insights into the model's reasoning. This research demonstrates that high-performance deep learning can be paired with rigorous interpretability, providing a robust and transparent solution for real-time network security monitoring.</p>Fikri Mulyana Setiawan, Dodi Devianto, Mawanda Almuhayar
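The pairing of a classifier with SHAP can be reproduced in miniature as below: a small stand-in model is trained on synthetic flow features and a model-agnostic KernelExplainer ranks them by mean absolute SHAP value. The model, the synthetic data, and most feature names are assumptions for illustration (only Flow IAT Std and Packet Length Variance come from the abstract); the study's actual DNN and the CIC-IDS2017 feature set are not reproduced.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for network-flow features and binary (benign/attack) labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)
feature_names = ["Flow IAT Std", "Packet Length Variance", "f3", "f4", "f5", "f6"]

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)

# Explain the attack-class probability with a model-agnostic KernelExplainer
# (a gradient/deep explainer would be the faster choice for a real DNN).
predict_attack = lambda data: model.predict_proba(data)[:, 1]
explainer = shap.KernelExplainer(predict_attack, X[:50])
shap_values = explainer.shap_values(X[:5], nsamples=100)
ranking = dict(zip(feature_names, np.abs(shap_values).mean(axis=0).round(4)))
print(ranking)   # mean |SHAP| per feature for the explained samples
```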
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-28 | Vol. 15 No. 5, pp. 4083–4099 | DOI: 10.19139/soic-2310-5070-3141
S-index computation for various Corona product graphs
http://47.88.85.238/index.php/soic/article/view/3142
<p>Degree-based topological indices are excellent tools for distinguishing structural isomers. The S-index, which is calculated from the fifth power of the degree of each vertex, performs well in molecules where branching strongly affects activity. It is particularly powerful when applied to complex molecular graphs, such as nanotubes, polymer structures, and different corona product graph constructions. In this paper, we analyze the S-index in various corona product graphs.</p>S. SANTHIYA, O. V. Shanmugasundaram
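Taking the abstract's description literally (the S-index as the sum of fifth powers of vertex degrees), the sketch below computes it for a small corona product built with networkx. The example graphs are arbitrary, and the fifth-power definition is used only as the working assumption stated in the abstract.

```python
import networkx as nx

def s_index(G, power=5):
    """S-index: sum of deg(v)**5 over all vertices (per the abstract's description)."""
    return sum(d ** power for _, d in G.degree())

def corona_product(G, H):
    """Corona product G o H: attach one copy of H to every vertex of G and join
    that vertex to all vertices of its copy."""
    R = nx.Graph()
    R.add_nodes_from(G.nodes())
    R.add_edges_from(G.edges())
    for v in G.nodes():
        copy = nx.relabel_nodes(H, {u: (v, u) for u in H.nodes()})
        R.add_nodes_from(copy.nodes())
        R.add_edges_from(copy.edges())
        R.add_edges_from((v, (v, u)) for u in H.nodes())
    return R

G, H = nx.cycle_graph(4), nx.path_graph(2)     # arbitrary small example
print(s_index(corona_product(G, H)))
```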
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-02 | Vol. 15 No. 5, pp. 4100–4113 | DOI: 10.19139/soic-2310-5070-3142
Hybrid Deep Learning Flood Forecasting Framework Optimized with the Snake Algorithm
http://47.88.85.238/index.php/soic/article/view/3150
<p>Accurate flood forecasting remains a major challenge due to the nonlinear dynamics of hydrological processes and the difficulty of optimizing deep learning models. This study proposes a hybrid deep learning framework integrating Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), and Convolutional Neural Networks (CNN) with the Snake Optimization Algorithm (SOA) for hyperparameter tuning. The method includes feature normalization, training–testing partitioning, and multi-metric evaluation using MSE, RMSE, MAE, and R². The results reveal that the hybrid LSTM-SOA model achieved the best performance with R² = 0.8514, MSE = 0.000386, RMSE = 0.019653, and MAE = 0.015849, outperforming standalone models. These results demonstrate the potential of hybrid optimization-based deep learning as a trustworthy tool to support decisions in flood forecasting, early warning, and disaster preparedness.</p>Karin Younes Yaaqob, Shereen Saleem Sadiq
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-02 | Vol. 15 No. 5, pp. 4114–4137 | DOI: 10.19139/soic-2310-5070-3150
A Wavelet-Based Architecture for Efficient ECG Signal Denoising
http://47.88.85.238/index.php/soic/article/view/3155
<p><strong>Purpose:</strong> An electrocardiogram (ECG) is one of the most important biomedical signals in the detection and diagnosis of heart arrhythmias. As an interpretable biomedical signal, the ECG is subject to various interferences and noises, such as baseline drift, power-line noise, and white Gaussian noise, all of which may obscure vital information and diagnostic features. This study aims to devise an efficient and simple method for noise reduction in ECG signals, to enhance the clarity of the signal while retaining the morphologies of the waveforms.</p> <p><strong>Approach:</strong> This study adopts a straightforward architecture based on a single Discrete Wavelet Transform (DWT) for signal decomposition and threshold adjustment. The technique was tested on both synthetic ECG signals and ECG recordings from the MIT-BIH database. Performance was assessed using the correlation coefficient (CC) and signal-to-noise ratio (SNR) and compared with existing denoising techniques.</p> <p><strong>Results:</strong> The implemented system attained excellent performance, as indicated by CC values of 0.9934, 0.9832 and 0.9524 for power-line noise, baseline drift, and white Gaussian noise, respectively. The SNR improved by 17.97 dB, 15.45 dB, and 10.06 dB, surpassing previous methods at 16.58 dB, 14.82 dB, and 7.80 dB.</p> <p><strong>Conclusion:</strong> The outcomes demonstrate that the DWT technique is efficient in minimizing multiple noise types in a single operation, which ultimately enhances the quality of the ECG signal. This improvement in SNR and waveform correlation supports its use in accurate clinical diagnosis and real-time biomedical applications.</p>Shelan Kamal, Serwan Mohammed, Ahmed Khorsheed
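The decomposition-threshold-reconstruction loop described in the Approach can be sketched with PyWavelets as follows; the wavelet family, decomposition level, and the universal soft threshold are common defaults chosen for illustration, not necessarily the settings used in the paper.

```python
import numpy as np
import pywt

def dwt_denoise(signal, wavelet="db6", level=4):
    """Single-DWT denoising: decompose, soft-threshold the detail coefficients
    with a universal threshold, reconstruct (illustrative settings only)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate (MAD)
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))       # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Toy signal standing in for an ECG trace corrupted by white Gaussian noise.
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.2 * np.random.default_rng(0).standard_normal(t.size)
print(np.corrcoef(clean, dwt_denoise(noisy))[0, 1])   # correlation coefficient (CC)
```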
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-16 | Vol. 15 No. 5, pp. 4138–4163 | DOI: 10.19139/soic-2310-5070-3155
Finding Category Value Using Mean Shift Clustering to Optimize Naïve Bayes Classification
http://47.88.85.238/index.php/soic/article/view/3161
<p>The Naïve Bayes classifier is a simple classification method that can make predictions quickly and accurately by considering the independent variables separately from the class. However, in the Naïve Bayes classifier, each independent variable must be divided into several categories, while some of the data remain continuous and uncategorized. Therefore, this study proposes a measurable and precise model to categorize these independent variables effectively. The main objective is to develop a categorization model for independent variables using the Mean Shift clustering algorithm to optimize the performance of the Naïve Bayes classifier. To implement the proposed model, experiments were conducted on two types of datasets. The first dataset contains 191 records with 4 attributes and 6 classes, while the second dataset consists of 2,000 records with 7 attributes and 2 classes. In both datasets, several attributes were initially uncategorized and were categorized using the Mean Shift clustering method. The Mean Shift approach successfully grouped the uncategorized attributes into meaningful categories. In the first dataset, the accuracy of the proposed categorical Naïve Bayes classifier reached 80.1%, representing an improvement of 5.74%. Furthermore, in the second dataset, the accuracy increased to 84.25%, marking a 3% enhancement. The results of this research are expected to contribute to the field of education, especially in the subfield of machine learning.</p>Berlian Rahmy Lidiawaty, Arip Ramadan, Tita Ayu Rospricilia, Najma Attaqiya Alya, Dwi Rantini, Alhassan Sesay
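A minimal version of the proposed pipeline, categorising each continuous attribute with Mean Shift and then feeding the resulting category labels to a categorical Naïve Bayes classifier, is sketched below with scikit-learn on synthetic data. The datasets, attribute counts, and bandwidth choices of the study are not reproduced here.

```python
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.naive_bayes import CategoricalNB

# Synthetic stand-in: two continuous attributes and a binary class label.
rng = np.random.default_rng(0)
X_cont = rng.normal(loc=[0.0, 5.0], scale=1.0, size=(200, 2))
y = (X_cont[:, 0] + X_cont[:, 1] > 5.0).astype(int)

def categorize(column):
    """Turn one continuous attribute into category labels via Mean Shift."""
    return MeanShift().fit_predict(column.reshape(-1, 1))

X_cat = np.column_stack([categorize(X_cont[:, j]) for j in range(X_cont.shape[1])])
model = CategoricalNB().fit(X_cat, y)
print(model.score(X_cat, y))   # training accuracy of the categorical Naive Bayes
```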
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-17 | Vol. 15 No. 5, pp. 4164–4171 | DOI: 10.19139/soic-2310-5070-3161
A Hybrid Approach for Solving Fractional Caputo Partial Differential Equations with Convergence Analysis
http://47.88.85.238/index.php/soic/article/view/3235
<p>In this research, fractional Caputo partial differential equations are addressed using the q-Homotopy analysis method merged with the Sawi transform method through the construction of a novel algorithm. This approach combines the Sawi transform with the q-Homotopy analysis method to demonstrate how complex fractional differential equations can be solved analytically in a straightforward manner. The proposed algorithm illustrates the effectiveness of applying the Sawi transform in conjunction with the q-Homotopy method to overcome the challenges associated with handling nonlinear terms numerically. Several examples are provided to verify the accuracy and efficiency of the proposed approach. The results indicate that the method converges to the exact solutions when suitable parameters are chosen. Therefore, the proposed method proves to be a robust and flexible algorithm for solving nonlinear fractional partial differential equations.</p>Rania Saadeh, Abdelilah Kamal H. Sedeeg, Ghassan Abufoudeh, Ahmad Qazza, Mohamed Hafez
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-21 | Vol. 15 No. 5, pp. 4172–4186 | DOI: 10.19139/soic-2310-5070-3235
Convergence Analysis and Numerical Approximation of the Fractional Fornberg–Whitham Equation via the Yasser–Jassim Transform
http://47.88.85.238/index.php/soic/article/view/3238
<p>This study introduces an innovative framework for addressing the fractional Fornberg–Whitham equation by melding the Yasser–Jassim integral transform with the Variational Iteration Method, all formulated under the Atangana–Baleanu fractional derivative in the Caputo interpretation. We first derive an explicit series representation of the solution and then rigorously prove that the iterative procedure converges, identifying conditions that guarantee both existence and uniqueness. In addition, we derive a bound on the truncation error to quantify the approximation’s accuracy. To validate the theoretical developments, a detailed computational example is provided, demonstrating rapid convergence and close agreement with the exact solution. The findings highlight the method’s robustness and suggest its broad applicability as an analytical tool for a wide range of nonlinear fractional partial differential equations.</p>Mohammed Yasser, Nasser Sween, Athraa Dasher, Layla Zarzour, Hassan Jassim
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-18 | Vol. 15 No. 5, pp. 4187–4203 | DOI: 10.19139/soic-2310-5070-3238
A Hybrid NLP–LLM Framework for Intelligent Classification of Individual and Collaborative Tasks in Learning Environments
http://47.88.85.238/index.php/soic/article/view/3268
<p>Natural Language Processing (NLP) plays a crucial role in automating text classification tasks, particularly in the context of education and scientific experimentation. However, the classification of practical tasks, especially distinguishing between individual and collaborative worksheets in laboratory sessions, remains an open challenge. This work extends our previous research on collaborative virtual laboratories by introducing an intelligent classification model that automatically determines task modality from worksheet specifications, enabling adaptive pedagogical orchestration. This article investigates the performance of Recurrent Neural Networks (RNNs) and their variants, including LSTM, GRU, and bidirectional models, in addressing this issue. For contextual benchmarking, selected transformer-based models were also evaluated to compare performance and computational trade-offs. We aim to determine which architecture best balances classification accuracy and computational efficiency. RNN-based models were selected due to their efficiency in sequential text modeling and their suitability for real-time deployment in educational platforms. To enhance data diversity and improve model generalization, a data augmentation step leveraging a Large Language Model (LLM) was employed to synthetically enrich the training corpus while preserving semantic consistency. Multiple RNN architectures were trained and evaluated on a domain-specific dataset of chemistry-related worksheets, using both original and LLM-augmented data. Performance was assessed using accuracy, precision, recall, and F1-score metrics. Among the models, LSTM achieved the highest accuracy (95.02%), demonstrating superior classification capabilities. GRU models offered competitive performance with lower computational costs, while bidirectional architectures improved contextual retention but exhibited variable performance depending on the dataset features. Although LLM-based data augmentation marginally enhanced model efficiency, the dataset's inherent simplicity ensured strong baseline performance across all models. Importantly, classification errors do not compromise learning outcomes but only affect execution modality, making the approach robust for real educational deployment. Overall, the findings highlight the efficiency of deep learning models in classifying practical educational tasks and underscore the potential of LLM-assisted augmentation to enhance adaptive learning environments.</p>Amel Douar, Yacine SLIMANI, Fairouz Hadi, Adel Alti, Heythem Azzouz, Abdallah Marouki
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-03-05 | Vol. 15 No. 5, pp. 4204–4225 | DOI: 10.19139/soic-2310-5070-3268
Modeling and Dynamical Analysis of Corruption with the Influence of Anti-Corruption Education
http://47.88.85.238/index.php/soic/article/view/3328
<p class="p1">This study presents a dynamical model describing the spread of corruption by incorporating the influence of anti-corruption education. The proposed model consists of six compartments, namely the susceptible, exposed, corrupt, imprisoned, reformed, and honest groups. The model assumes that corruption propagates analogously to an infectious disease. We analyze the local stability of the corruption-free and corruption-endemic fixed points and show that their stability depend on the basic reproduction number. To examine the effect of anti-corruption education on the reduction of corrupt individuals, numerical simulations are performed for both fixed states. The results demonstrate that continuous anti-corruption education can effectively reduce the number of corrupt individuals in the population.</p>MuhafzanNarwenAhmad Iqbal BaqiZulakmal
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-24 | Vol. 15 No. 5, pp. 4226–4236 | DOI: 10.19139/soic-2310-5070-3328
Decision-Level Fusion for Facial and Speech Emotion Recognition: A CNN-Based Web Application
http://47.88.85.238/index.php/soic/article/view/3341
<p>This paper presents a real-time web-based emotion recognition system based on unimodal deep learning models for facial and speech analysis, combined through decision-level score aggregation. Facial emotion recognition is performed using convolutional neural networks (CNNs), while speech emotion recognition relies on a CNN–BiLSTM architecture to capture both spatial and temporal speech patterns. These models are chosen for their effectiveness and low computational cost, making them suitable for web-based deployment. The facial model is trained on the FER2013 dataset, and the speech model is trained on the RAVDESS corpus using MFCC-based audio features. Rather than performing multimodal representation learning, this work demonstrates decision-level fusion by aggregating unimodal prediction scores to improve robustness when combining facial and speech information. Experimental results show competitive recognition performance and support the applicability of the proposed system for human-computer interaction in real-time and web-based affective applications.</p>Hind Mestouri, Abdelilah Jraifi, Kamal Baraka
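Decision-level fusion of the two unimodal models amounts to combining their class-probability vectors; the sketch below uses a weighted average with renormalization. The emotion subset and the 0.6/0.4 weights are illustrative assumptions, not the values used by the deployed system.

```python
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad"]   # illustrative subset of classes

def fuse_scores(face_probs, speech_probs, w_face=0.6, w_speech=0.4):
    """Decision-level fusion: weighted average of the two unimodal
    probability vectors, renormalized (weights are illustrative)."""
    fused = w_face * np.asarray(face_probs) + w_speech * np.asarray(speech_probs)
    fused /= fused.sum()
    return EMOTIONS[int(np.argmax(fused))], fused

label, probs = fuse_scores([0.1, 0.7, 0.15, 0.05], [0.2, 0.4, 0.3, 0.1])
print(label, probs.round(3))
```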
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-18 | Vol. 15 No. 5, pp. 4237–4254 | DOI: 10.19139/soic-2310-5070-3341
Artificial Intelligence and Ensemble Learning for Coronary Artery Disease Prediction
http://47.88.85.238/index.php/soic/article/view/3354
<p class="abstract" style="text-indent: 0in;">coronary artery disease (CAD) continues to be a major cause of death linked to cardiovascular issues, and thus early diagnosis is crucial to enhance patient outcome and prevent unnecessary medical interventions. Machine learning (ML) and data mining are increasingly being recognized as robust predictive methods for CAD, with opportunities for early detection and preventive medicine. This article discusses the role of various ML algorithms to predict CAD and enhance diagnostic performance, emphasizing the importance of such methodologies in medicine. The methodology includes a rigorous study of ML techniques such as neural networks, decision trees, support vector machines, and ensemble techniques like Random Forest and XGBoost. The paper explains the advantages and disadvantages of these techniques based on their applications with publicly available medical datasets to predict CAD. Data balancing algorithms such as SMOTE and ADASYN are also incorporated for improving model performance. The findings reveal that ensemble techniques, particularly XGBoost, register the highest accuracy (94.7%), closely trailed by Random Forest (92.04%). Additionally, data balancing techniques also enhance model recall and specificity to make predictions even more accurate. The findings point towards the power of sophisticated machine learning algorithms for CAD detection as well as the need for preprocessing data to reach maximum model efficiency. This research demonstrates the broader significance of machine learning for transforming CAD prediction, and the potential to improve patient care, reduce healthcare costs, and facilitate a shift toward preventative therapy in cardiovascular disease management also we will discuss artificial intelligence (AI) applications and Recent advances in AI with CAD.</p>Hiba Dhia Jaafar Teba Jabbar Hassanmarwa Hussien Mohamed Baesher Abdullateff Abad Sara Salman Qasim
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-28 | Vol. 15 No. 5, pp. 4255–4276 | DOI: 10.19139/soic-2310-5070-3354
Using Conformable Fractional Laplace Transform to Solve Fractional System
http://47.88.85.238/index.php/soic/article/view/3610
<p>In this study, we introduce the conformable fractional derivative, one of the most recent concepts in fractional calculus. We then employ the conformable fractional Laplace transform (CFLT) to solve a nonhomogeneous conformable fractional differential equation with variable coefficients, as well as a system of fractional differential equations, as an application.</p>Tamara Salameh, Gharib M. Gharib, Maha Alsaoudi, Mohamed A. Labeeb
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-03-24 | Vol. 15 No. 5, pp. 4277–4285 | DOI: 10.19139/soic-2310-5070-3610
MORSO for Multi-Objective Fire Station Location on Urban Road Networks: The Mosul Case
http://47.88.85.238/index.php/soic/article/view/3297
<p>The effectiveness of urban fire response is heavily reliant on the fire station location strategy on a realistic road network. This paper presents a two-stage GIS framework for the additive fire station location problem on an urban road graph. Stage 1 relies on an economic location criterion to identify the number of new stations (N*) needed. Stage 2 employs a reinforced multi-objective particle swarm optimizer (MORSO) to explore potential locations and simultaneously optimize (i) the maximum and (ii) the average nearest-station network distance, computed using multi-source Dijkstra algorithms with careful treatment of disconnected components. Feasibility constraints (such as minimum distance and equity/coverage criteria) are used to guarantee the existence of implementable solutions. A test example on Mosul, Iraq, illustrates that adding three stations to the existing nine (total 12) leads to a significant improvement in accessibility: the best Pareto solution decreases the average distance from 3,474.67 m to 2,823.09 m (-18.8%) and the maximum distance from 12,595.07 m to 8,752.15 m (-30.5%), with further tail optimization (P90). Distances are also reported as travel-time proxy bands using an average speed of 35 km/h (≈3.0/6.0/7.5 km for 4/8/10 minutes): coverage increases from 52.60/89.02/93.64% to 64.16/93.06/97.11% for the ≤4/≤8/≤10-minute bands, respectively. Multi-run benchmarking and sensitivity analysis further support the robustness and practical usability of the proposed planning workflow. The resulting Pareto front and mapped layouts enable transparent efficiency-risk trade-offs for deployment.</p>Ziadoon Mohand Khaleel, Safa Jawad Abed, Jalal Abdulkareem Sultan, Noor Marwan Ahmeed
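The two accessibility objectives can be computed exactly as described, with a single multi-source Dijkstra pass from all station nodes; the sketch below does this on a toy weighted road graph with networkx. The graph, edge lengths, and station set are placeholders, and the MORSO search, feasibility constraints, and the Mosul network are not included.

```python
import networkx as nx

def station_accessibility(G, stations, weight="length"):
    """Distance from every node to its nearest station via multi-source Dijkstra,
    then the two objectives: maximum and average nearest-station distance."""
    dist = nx.multi_source_dijkstra_path_length(G, stations, weight=weight)
    reached = [d for node, d in dist.items() if node not in stations]
    return max(reached), sum(reached) / len(reached)

# Toy road graph: a weighted path, stations at nodes 0 and 6 (illustrative only).
G = nx.path_graph(8)
nx.set_edge_attributes(G, 500, "length")          # 500 m per road segment
print(station_accessibility(G, {0, 6}))
```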
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-03-15 | Vol. 15 No. 5, pp. 4286–4304 | DOI: 10.19139/soic-2310-5070-3297
Bayesian Optimization for Enhanced Prediction of Earthquakes with Machine Learning Techniques
http://47.88.85.238/index.php/soic/article/view/3563
<p>Earthquakes are destructive natural hazards that strike suddenly and unexpectedly, caused by elastic strain energy released along faults, mainly resulting from the gradual accumulation and persistent movement of tectonic plates. Several new approaches have been suggested to predict earthquakes. Machine learning (ML) methodologies have recently emerged as a robust mechanism for dealing with huge, complex, nonlinear data while being less dependent on stringent assumptions. However, most studies rely on narrow model comparisons, lack systematic hyperparameter tuning, and remain geographically constrained. Comprehensive benchmarking of regression-based machine learning with Bayesian tuning remains scarce, especially for the high-seismicity regions of northern and eastern Iraq. Multiple machine learning methods were employed to model earthquake magnitude in Iraq for the period 2004-2024. Six Bayesian optimization methods were implemented across the ML methods to optimize the model hyperparameters. Eight mathematical indicators of seismicity parameters are used as input features, with the observed earthquake magnitude as the prediction target. The Extra Trees Regressor demonstrated dominant predictive capability among the evaluated metrics, after parameter optimization using Bayesian Optimization Hyperband. Magnitude deficit, or the difference between the largest observed magnitude and the largest expected magnitude based on the Gutenberg-Richter relationship, had the key influence on earthquake magnitude, as shown by the analysis of variable importance for the eight seismic parameters. These findings provide clear evidence of the effectiveness of the Extra Trees Regressor in elucidating the complex relationships underlying earthquake magnitude and provide significant insights to guide the development of data-driven strategies aimed at improving earthquake response in Iraq.</p>Muzahem Al-Hashimi, Heyam Hayawi, Mohammed Qasim Yahya Alawjar
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-03-15 | Vol. 15 No. 5, pp. 4305–4322 | DOI: 10.19139/soic-2310-5070-3563
Enhancing fuzzy transform using PCHIP interpolation: A novel approach to function approximation and solving differential equations
http://47.88.85.238/index.php/soic/article/view/2624
<p>In this study, a hybrid numerical method is presented, combining the Fuzzy Transform (FT) and PCHIP (Piecewise Cubic Hermite Interpolating Polynomial) interpolation techniques to improve the accuracy and flexibility of function approximation and of solutions to differential equations. The method operates in two stages: first, a low-dimensional fuzzy approximation is constructed using basis functions on a coarse grid, capturing global trends efficiently. Second, residuals between the fuzzy approximation and the true solution (or observed data) are interpolated using PCHIP, which preserves monotonicity and local shape characteristics while avoiding spurious oscillations.</p> <p>Numerical validation demonstrates a reduction of over 98% in mean square error compared to the standalone fuzzy transform, confirming the enhanced accuracy of the improved method across the tested cases. Theoretical error bounds are derived via the superposition principle, demonstrating that the total error is governed by the sum of the FT approximation and PCHIP interpolation errors.</p> <p>Using this method, discrete measurements or sample observations can be modeled mathematically: PCHIP creates a continuous interpolant for any empirical data, so that real-world data sets can be treated analytically (e.g., differentiated or integrated) over the observed values. The method thus bridges the gap between discrete measurements and continuous representations in modeling. This is highly useful for experimental science and engineering applications, such as sensor data retrieval or irregularly sampled measurements that require intensive numerical treatment.</p>Ashwaq Abdul Qadir Khidr, Edrees M. Nori Mahmood
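The two-stage idea, a coarse fuzzy-transform-style approximation followed by PCHIP interpolation of the residuals, can be prototyped in a few lines with SciPy. The triangular basis functions, node count, and the sparser residual grid below are illustrative stand-ins for the paper's construction, not its actual formulation.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def coarse_fuzzy_approx(x, y, n_nodes=6):
    """Stage 1 stand-in: F-transform-style components over triangular basis
    functions on a coarse grid, followed by the inverse transform."""
    nodes = np.linspace(x.min(), x.max(), n_nodes)
    h = nodes[1] - nodes[0]
    A = np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)  # basis functions
    components = (A @ y) / A.sum(axis=1)                                # direct transform
    return (A.T @ components) / A.sum(axis=0)                           # inverse transform

x = np.linspace(0.0, 2.0 * np.pi, 60)
y = np.sin(x) + 0.3 * np.sin(3.0 * x)
approx = coarse_fuzzy_approx(x, y)

# Stage 2: PCHIP interpolation of the residuals observed on a sparser grid,
# then add the correction back to the coarse approximation.
idx = np.arange(0, x.size, 4)
correction = PchipInterpolator(x[idx], (y - approx)[idx])
corrected = approx + correction(x)
print(np.mean((y - approx) ** 2), np.mean((y - corrected) ** 2))
```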
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-07-19 | Vol. 15 No. 5, pp. 4323–4336 | DOI: 10.19139/soic-2310-5070-2624
A Comparative Study of Multi-Class Classification based on Imbalanced Data
http://47.88.85.238/index.php/soic/article/view/2731
<p>Class imbalance presents a significant challenge in creating reliable and precise medical diagnostic models, especially in multi-class classification contexts where rare yet clinically important cases are insufficiently represented. This work addresses the imbalance problem across three different medical datasets: HAM10000, Skin Cancer ISIC, and Non-Alcoholic Fatty Liver Disease (NAFLD) by presenting an extensive deep learning framework using Cycle-Consistent GANs (CycleGAN) for data balancing, integrating advanced data augmentation methods, and applying Focal Loss to enhance training. The suggested architecture utilizes EfficientNet-B3 for image classification and a custom-built Multi-Layer Perceptron (MLP) for evaluating tabular clinical data. The CycleGAN model is employed to create realistic images of minority classes and to replicate oversampling in tabular domains, thus generating balanced and semantically varied datasets. To enhance generalization, we implement real-time augmentation techniques, which encompass image data augmentation via flipping, rotation, and color jittering, alongside normalization strategies for tabular features. This study presents a unified deep learning pipeline that implements real CycleGAN-based oversampling for both image and tabular medical datasets, distinguishing it from previous research. The amalgamation of CycleGAN with Focal Loss and EfficientNetB3 yields improved efficacy in minority-class detection, setting a novel benchmark for imbalanced multi-class medical classification. The performance evaluation was performed using stratified 5-fold cross-validation, employing measures like macro F1-score, balanced accuracy, and ROC-AUC. The proposed method attained superior results across all datasets, with a peak accuracy of 99.33%, a macro F1-score of 96.85%, and a ROC-AUC of 0.9852 on the HAM10000 dataset. The comparative analysis with previous studies illustrates the superiority of our pipeline in overall accuracy and minority-class sensitivity.</p>Rojan-zaki Abdulkareem, Adnan Mohsin Abdulazeez
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-17 | Vol. 15 No. 5, pp. 4337–4357 | DOI: 10.19139/soic-2310-5070-2731
A Dynamic Signal Timing Control Algorithm for Urban Traffic Evaluation at Complex Signalized Intersections
http://47.88.85.238/index.php/soic/article/view/2974
<p>This paper addresses the problem of unpredictable signal timing in conventional traffic signal systems by modeling the optimal traffic flow time. To determine the optimal signal durations, this research proposes a dynamic signal distribution approach. The primary objective of this study is to optimize traffic flow rates, reduce vehicle waiting times, and transform the conventional signal system into a dynamic model integrated with a multi-agent system. In this work, intersections are modeled as autonomous agents within a multi-agent framework using a comprehensive modeling technique. The proposed algorithms optimize phase sequences and signal timings to ensure traffic efficiency. Simulations were conducted with up to 300 iterations, and the results were compared with those obtained from the traditional model. The average queue length and vehicle waiting time were then measured. The proposed method was applied to a simulated traffic scenario, and the results showed values of 89, 114, and 83 seconds, respectively.</p>Ramesh Kumar Bharathi, S. Solaiappan
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-07 | Vol. 15 No. 5, pp. 4358–4370 | DOI: 10.19139/soic-2310-5070-2974
Short-term forecasting of hierarchical time series in electricity consumption: An application using South African data
http://47.88.85.238/index.php/soic/article/view/3194
<p>This article presents a comprehensive framework for short-term forecasting of hierarchical electricity consumption using South African data. The study improves the precision and validity of the predictions for various energy sources by applying Stochastic Gradient Boosting (SGB) and XGBoost with reconciliation techniques. The results demonstrate that the XGBoost model effectively predicts the electricity consumption met by electricity generated from solar (PV and CSP), coal, and diesel energy sources. Conversely, the SGB approach performs more efficiently in forecasting electricity consumption met by electricity generated from nuclear and wind energy sources, emphasising the use of model-specific approaches for different energy sources. This research supports applying multiple forecasting methods to improve the overall accuracy of forecasting electricity consumption met by non-renewable energy sources, while hybrid models were particularly helpful for forecasting consumption met by complex energy sources like wind and nuclear. Further, the research examines the estimation of prediction intervals under linear regression and linear quantile regression and observes that the latter is superior, with narrower interval widths, reduced interval scores and similar coverage probabilities. The findings address notable gaps in the literature and generate real-world observations for energy policy-makers through the implication of hybrid methodologies for enhancing the quality of electricity consumption forecasts.</p>Mantombi Bashe, Claris Shoko, Thakhani Ravele, Caston Sigauke
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-02-11 | Vol. 15 No. 5, pp. 4371–4398 | DOI: 10.19139/soic-2310-5070-3194
The Impact of Bancassurance Regulatory Reform on Performance and Efficiency: A Case Study of Misr Insurance Holding Company (MIHC) in Egypt
http://47.88.85.238/index.php/soic/article/view/3632
<p>This study investigates the evolution of bancassurance in Egypt, specifically examining the impact of regulatory frameworks on the performance and technical efficiency of the Misr Insurance Holding Company (MIHC). The research analyzes two pivotal eras: the "experimental" phase (2004-2013), marked by regulatory instability, and the "reactivation" phase (2014-2020), following the strategic 2013 decree by the Central Bank of Egypt (CBE) and the Financial Regulatory Authority (FRA). The methodology employs a three-tiered quantitative approach. First, descriptive statistics and Independent Samples T-tests reveal a dramatic surge in performance during the reactivation phase: average Return on Investment (ROI) rose from 3.5% to 19.04%, and Distributable Profit to Equity increased from 3.1% to 29.43%. Second, an Interrupted Time Series Analysis (ITSA) was utilized to assess the causal impact of the 2013 intervention, identifying a positive immediate level shift in performance indicators (β<sub>2</sub> = 0.1744 for ROI). Third, a non-parametric Data Envelopment Analysis (DEA) under Variable Returns to Scale (VRS) was applied to evaluate technical efficiency. Empirical results from the DEA model indicate a profound structural transformation; while the experimental phase exhibited high volatility and technical slack, with efficiency scores as low as 0.080, the post-2013 era achieved a stabilized and superior efficiency profile, reaching the "efficiency frontier" (score of 1.000) in the majority of the observed years. Furthermore, administrative expenses were successfully optimized, dropping from a peak of 326% of premiums in 2004 to a stable average of 14.13% after the reactivation. The study concludes that regulatory stability is the primary driver of operational maturity and resource optimization in the Egyptian life insurance sector, providing a roadmap for future bank-insurance integration.</p>Mohamed F. Aboueleinein, Gaber Sallam Salem Abdalla
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-04-12 | Vol. 15 No. 5, pp. 4399–4416 | DOI: 10.19139/soic-2310-5070-3632
An adversarial framework with dual genetic optimization for similarity-aware matrix factorization in recommendation systems
http://47.88.85.238/index.php/soic/article/view/3413
<p>Modern recommendation systems (RS) continue to face critical challenges, including data sparsity and the difficulty of modeling complex user–item interactions, which hinder their ability to deliver accurate and personalized recommendations. To address these limitations, we propose GaSimGAN, a novel collaborative filtering framework that integrates Generative Adversarial Networks (GANs), similarity-aware modeling, and a dual genetic optimization strategy. The proposed framework leverages personalized similarity matrices to focus on the most relevant users and items, thereby refining the input space of the generative process and improving recommendation accuracy. GaSimGAN adopts a matrix factorization-based generator coupled with an autoencoder-based discriminator, enriched with Pearson similarity information to better capture relational patterns in user–item interactions. Genetic Algorithms are employed in a dual role: first, during preprocessing to optimize similarity-based neighbor selection, and second, as an offline one-shot hyperparameter optimization step conducted prior to and independently of the adversarial training pipeline. Extensive experiments conducted on benchmark datasets, including MovieLens 1M, HetRec2011, and LastFM, demonstrate that GaSimGAN consistently outperforms state-of-the-art GAN-based recommendation systems in terms of Precision, Mean Average Precision (MAP), and Normalized Discounted Cumulative Gain (NDCG), confirming both its effectiveness and scalability.</p>Sanae FILALI ZEGZOUTI, Oumayma BANOUAR, Mohamed BENSLIMANE
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-04-14 | Vol. 15 No. 5, pp. 4417–4436 | DOI: 10.19139/soic-2310-5070-3413
MSDA-GDS: A Dual-Branch Hybrid Federated Explainable Deep Learning Framework for CAN Bus Intrusion Detection in Internet of Vehicles.
http://47.88.85.238/index.php/soic/article/view/3599
<p>The Controller Area Network (CAN) bus remains critically vulnerable to cyberattacks due to its lack of authentication and encryption. Existing intrusion detection systems (IDS) for Internet of Vehicles (IoV) suffer from single-branch architectures that fail to capture multi-scale CAN byte dependencies, centralized training paradigms that compromise vehicular data privacy, and insufficient model interpretability. This paper proposes MSDA-GDS, a dual-branch hybrid federated explainable framework comprising a Multi-Scale Dilated Attention (MSDA) branch with parallel dilated convolutions and channel-spatial attention, and a Gated Depthwise Separable (GDS) branch with learnable gating mechanisms and residual connections, fused via learned attention weighting. The framework integrates Apache Spark-accelerated preprocessing, FedProx federated learning with differential privacy, and multi-method explainability (SHAP, LIME, gradient saliency). Evaluation on CICIoV2024 (1,408,219 CAN frames) and CIC-IDS-2017 (2.83M flows) demonstrates 99.99% and 99.40% accuracy respectively, with the federated variant achieving 99.97% under full privacy protection. Ablation analysis confirms the gating mechanism (∆F1 = −0.21) and engineered features (∆F1 = −0.27) as the most impactful components, while XAI analysis identifies DATA 2, DATA 1, and DATA 3 as the most discriminative byte positions with high cross-method consistency (ρ = 0.978).</p>Moh’D Suliman Shakkah, Belal Al-sellami, Abdulnaser A Hagar, Mohammed Tawfik
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-04-14 | Vol. 15 No. 5, pp. 4437–4463 | DOI: 10.19139/soic-2310-5070-3599
A New Flexible Transmuted Distribution: Theory and Application
http://47.88.85.238/index.php/soic/article/view/3429
<p>In this paper, we introduce a new three-parameter lifetime distribution, termed the Transmuted Ibrahim (TI) distribution, which is constructed by applying the quadratic rank transmutation map to the Ibrahim distribution. In contrast to many existing transmuted families that emphasize purely formal generalization, the TI distribution is specifically designed to overcome concrete shortcomings of the original Ibrahim model, particularly its poor performance in representing high kurtosis and pronounced right-tail behavior, while maintaining analytical and computational feasibility. The proposed framework thus addresses key weaknesses of the baseline Ibrahim distribution, most notably its inability to adequately model the heavy tails and pronounced asymmetry that often appear in reliability and survival data.<br>We derive a broad set of statistical properties, including closed-form expressions for ordinary and incomplete moments, quantile functions, generating functions, and Rényi entropy. Conditions for unimodality are established, and the corresponding modal behavior is analyzed. Parameter inference is carried out using maximum likelihood estimation and the method of moments. A detailed Monte Carlo simulation study, implemented with careful attention to numerical robustness and convergence behavior, shows that the maximum likelihood estimators are consistent over a range of sample sizes.<br>The practical relevance of the TI model is demonstrated by fitting it to two real datasets: guinea pig and rat survival times. Its performance is benchmarked against the Weibull, Gamma, and Generalized Exponential distributions. For the guinea pig data, the TI distribution yields the best fit, as indicated by the smallest AIC value (787.424) and the largest Kolmogorov–Smirnov (KS) p-value (0.5316). For the rat survival data, the TI model attains the lowest KS statistic (0.160) and the highest p-value (0.1363) among all competing models. We also provide a critical assessment of the TI distribution's limitations, including potential identifiability challenges near boundary parameter values, and we propose several promising directions for subsequent research.</p>H. M. Hamouda Khater A. E. Gad
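For reference, the quadratic rank transmutation map that underlies the TI construction takes a baseline CDF F to G(x) = (1 + λ)F(x) − λF(x)², with density g(x) = f(x)[1 + λ − 2λF(x)] for |λ| ≤ 1. The sketch below applies it to an exponential baseline purely for illustration, since the Ibrahim baseline itself is not reproduced here.

```python
import numpy as np

def transmuted_cdf(F, lam):
    """Quadratic rank transmutation map applied to a baseline CDF F:
    G(x) = (1 + lam) * F(x) - lam * F(x)**2, with |lam| <= 1."""
    def G(x):
        Fx = F(x)
        return (1.0 + lam) * Fx - lam * Fx ** 2
    return G

def transmuted_pdf(f, F, lam):
    """Corresponding density: g(x) = f(x) * (1 + lam - 2 * lam * F(x))."""
    return lambda x: f(x) * (1.0 + lam - 2.0 * lam * F(x))

# Illustration with an exponential baseline (rate and lambda are arbitrary).
rate = 1.5
F = lambda x: 1.0 - np.exp(-rate * np.asarray(x))
f = lambda x: rate * np.exp(-rate * np.asarray(x))
G = transmuted_cdf(F, lam=0.4)
print(G(np.array([0.5, 1.0, 2.0])).round(4))
```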
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-04-14 | Vol. 15 No. 5, pp. 4464–4485 | DOI: 10.19139/soic-2310-5070-3429
Infeasible primal-dual interior point methods based on the kernel function for convex QCQPs
http://47.88.85.238/index.php/soic/article/view/3206
<p>In this paper, we study convex quadratically constrained quadratic programming (QCQP) problems through primal-dual interior-point methods based on kernel functions. In contrast to standard feasible interior-point approaches, we develop an infeasible method whose iterates do not necessarily satisfy the primal or dual constraints throughout the iterations. When the problem admits a feasible solution, primal feasibility and optimality are achieved simultaneously at convergence. When the feasible set is empty, infeasibility is automatically detected and an approximate solution is obtained via penalized relaxation minimizing the constraint violation, weighted by the chosen kernel function. Under standard convexity assumptions and the existence of optimal solutions, the resulting convex QCQP enjoys strong duality, and its optimal solutions are fully characterized by the Karush-Kuhn-Tucker (KKT) conditions. We introduce a kernel-function-based barrier framework that replaces the classical logarithmic barrier, leading to a parametrized perturbed KKT system with explicit primal and dual residuals. This system defines an infeasible central path, whose neighborhood is followed using exact Newton directions derived from the chosen kernel function. We demonstrate that this approach provides a flexible and unified framework for designing and analyzing efficient Newton-based algorithms for QCQP, with potential extensions to broader classes of conic optimization problems.</p>Mohamed Selamat, Mounia Laouar, Mahmoud Brahimi
Copyright (c) 2026 Statistics, Optimization & Information Computing
2026-04-14 | Vol. 15 No. 5, pp. 4486–4522 | DOI: 10.19139/soic-2310-5070-3206